Go: From Code to Deployment, Microservices in Practice (Part 1)


Microservices have become the mainstream architecture for server-side development, and Go, with its gentle learning curve, built-in concurrency, fast compilation, and small memory footprint, is increasingly popular with developers. This hands-on series explores microservices from a practical angle: starting from a simple "blog system", we will build up a complete microservice system step by step.

This is the first article in the series. We will build a continuous-integration and automated build-and-release pipeline for our microservices on top of go-zero + gitlab + jenkins + k8s. First, a quick introduction to each component:

  • go-zero is a web and rpc framework that bundles many engineering best practices. Its resilient design keeps high-concurrency services stable, and it has been thoroughly battle-tested in production
  • gitlab is a fully integrated software development platform built on Git; it also offers wikis, online editing, issue tracking, CI/CD, and more
  • jenkins is a Java-based continuous-integration tool for monitoring recurring jobs; it aims to provide an open, easy-to-use platform that makes continuous integration of software possible
  • kubernetes, commonly abbreviated K8s, is an open-source system for automating the deployment, scaling, and management of containerized applications. It was designed by Google and donated to the Cloud Native Computing Foundation (now part of the Linux Foundation), and provides a platform for automatically deploying, scaling, and running application containers across clusters of hosts

The hands-on work consists of five steps, each covered in detail below:

  1. Set up the environment. I use two ubuntu16.04 servers, one running gitlab and one running jenkins, plus an elastic k8s cluster from a cloud provider
  2. Generate the project. I use the goctl tool provided by go-zero to generate the project quickly, then make small changes for testing
  3. Generate the Dockerfile and k8s deployment files. k8s manifests are tedious and error-prone to write by hand; goctl can generate both, which is very convenient
  4. Build the Jenkins Pipeline with declarative syntax, creating a Jenkinsfile that is version-controlled in gitlab
  5. Finally, test the project to verify that the service works

<img src="https://gitee.com/kevwan/static/raw/master/doc/images/experiment_step.png" alt="experiment_step" style="zoom: 50%;" />

Setting up the environment

First we set up the experimental environment: two ubuntu16.04 servers, one running gitlab and one running jenkins. GitLab was installed directly with apt-get; after installation, start the service and check its status. When every component shows run, the service is up. In this setup it listens on port 9090, which you can open directly in a browser.

gitlab-ctl start  # start the service

gitlab-ctl status # check service status

run: alertmanager: (pid 1591) 15442s; run: log: (pid 2087) 439266s
run: gitaly: (pid 1615) 15442s; run: log: (pid 2076) 439266s
run: gitlab-exporter: (pid 1645) 15442s; run: log: (pid 2084) 439266s
run: gitlab-workhorse: (pid 1657) 15441s; run: log: (pid 2083) 439266s
run: grafana: (pid 1670) 15441s; run: log: (pid 2082) 439266s
run: logrotate: (pid 5873) 1040s; run: log: (pid 2081) 439266s
run: nginx: (pid 1694) 15440s; run: log: (pid 2080) 439266s
run: node-exporter: (pid 1701) 15439s; run: log: (pid 2088) 439266s
run: postgres-exporter: (pid 1708) 15439s; run: log: (pid 2079) 439266s
run: postgresql: (pid 1791) 15439s; run: log: (pid 2075) 439266s
run: prometheus: (pid 10763) 12s; run: log: (pid 2077) 439266s
run: puma: (pid 1816) 15438s; run: log: (pid 2078) 439266s
run: redis: (pid 1821) 15437s; run: log: (pid 2086) 439266s
run: redis-exporter: (pid 1826) 15437s; run: log: (pid 2089) 439266s
run: sidekiq: (pid 1835) 15436s; run: log: (pid 2104) 439266s

Jenkins is likewise installed with apt-get; note that Java must be installed first. The process is straightforward, so it is not shown here. Jenkins listens on port 8080 by default, the default account is admin, and the initial password is stored at /var/lib/jenkins/secrets/initialAdminPassword. Installing the recommended plugins during initialization is fine; you can add other plugins later as needed.

Building a k8s cluster is fairly involved. Tools like kubeadm make it quick to stand one up, but the result is still some distance from a true production-grade cluster, and our service is ultimately headed for production. So here I chose a cloud provider's elastic k8s cluster, version 1.16.9. The advantage of an elastic cluster is pay-as-you-go pricing with no extra fees: when the experiment is done, kubectl delete releases the resources immediately and very little cost is incurred. The provider's k8s offering also comes with friendly monitoring pages where various statistics can be viewed. Once the cluster is created, access credentials must be created before you can talk to it:

  • If the client has no cluster credentials configured yet, i.e. ~/.kube/config is empty, simply copy the credential contents and paste them into ~/.kube/config
  • If the client already has credentials for other clusters, merge them with the following commands

    KUBECONFIG=~/.kube/config:~/Downloads/k8s-cluster-config kubectl config view --merge --flatten > ~/.kube/config
    export KUBECONFIG=~/.kube/config

With access configured, check the current cluster with:

kubectl config current-context

Check the cluster version; the output looks like this:

kubectl version

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.9", GitCommit:"a17149e1a189050796ced469dbd78d380f2ed5ef", GitTreeState:"clean", BuildDate:"2020-04-16T11:44:51Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.9-eks.2", GitCommit:"f999b99a13f40233fc5f875f0607448a759fc613", GitTreeState:"clean", BuildDate:"2020-10-09T12:54:13Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

At this point the experimental environment is ready. GitHub can also be used for version control instead of gitlab.

Generating the project

The whole project lives in a single monorepo with the layout below. The top-level project is named blog; under the app directory sit the microservices, split by business domain. The user service, for example, is divided into an api service and an rpc service: the api service is the aggregation gateway exposing restful interfaces to the outside, while the rpc service handles internal communication and provides high-performance operations such as data caching.

   ├── blog
   │   ├── app
   │   │   ├── user
   │   │   │   ├── api
   │   │   │   └── rpc
   │   │   ├── article
   │   │   │   ├── api
   │   │   │   └── rpc

With the project directories in place, enter the api directory and create a user.api file with the contents below. It sets the service port to 2233 and defines a /user/info endpoint.

type UserInfoRequest struct {
    Uid int64 `form:"uid"`
}

type UserInfoResponse struct {
    Uid   int64  `json:"uid"`
    Name  string `json:"name"`
    Level int    `json:"level"`
}

@server(port: 2233)
service user-api {
    @doc(summary: get user info)
    @server(handler: UserInfo)
    get /user/info(UserInfoRequest) returns(UserInfoResponse)
}

With the api file defined, run the following command to generate the api service code. One-command generation is a real boost to productivity:

goctl api go -api user.api -dir .

After generation we tweak the code slightly so that the service is easy to test once deployed: the modified handler returns the local IP address of the machine it runs on.

func (ul *UserInfoLogic) UserInfo(req types.UserInfoRequest) (*types.UserInfoResponse, error) {
    addrs, err := net.InterfaceAddrs()
    if err != nil {
        return nil, err
    }
    var name string
    for _, addr := range addrs {
        if ipnet, ok := addr.(*net.IPNet); ok && !ipnet.IP.IsLoopback() && ipnet.IP.To4() != nil {
            name = ipnet.IP.String()
        }
    }

    return &types.UserInfoResponse{
        Uid:   req.Uid,
        Name:  name,
        Level: 666,
    }, nil
}

That completes project generation. Since this installment is about standing up the base framework, we only add a bit of test code; the project will be fleshed out in later articles.

Generating the image and deployment files

Common images such as mysql or memcache can be pulled straight from an image registry, but our service image has to be custom-built. There are several ways to build a custom image, and a Dockerfile is by far the most widely used. Writing a Dockerfile is not hard, but it is easy to get wrong, so here too we lean on tooling: goctl (which deserves another round of praise) can generate a Dockerfile with one command. In the api directory, run:

goctl docker -go user.go

We adjust the generated file slightly to match our directory layout; the contents are below. It uses a two-stage build: the first stage produces the executable so the build is independent of the host machine, and the second stage copies in the first stage's output, yielding a minimal final image.

FROM golang:alpine AS builder

LABEL stage=gobuilder

ENV CGO_ENABLED 0
ENV GOOS linux
ENV GOPROXY https://goproxy.cn,direct

WORKDIR /build/zero

RUN go mod init blog/app/user/api
RUN go mod download
COPY . .
COPY /etc /app/etc
RUN go build -ldflags="-s -w" -o /app/user user.go


FROM alpine

RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
ENV TZ Asia/Shanghai

WORKDIR /app
COPY --from=builder /app/user /app/user
COPY --from=builder /app/etc /app/etc

CMD ["./user", "-f", "etc/user-api.yaml"]

Then build the image with:

docker build -t user:v1 app/user/api/

Running docker images now shows that the user image has been created with tag v1:

REPOSITORY                            TAG                 IMAGE ID            CREATED             SIZE
user                                  v1                  1c1f64579b40        4 days ago          17.2MB

Likewise, k8s deployment manifests are complex and error-prone to write by hand, so we have goctl generate them as well. In the api directory, run:

goctl kube deploy -name user-api -namespace blog -image user:v1 -o user.yaml -port 2233

The generated yaml file is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
  namespace: blog
  labels:
    app: user-api
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      containers:
      - name: user-api
        image: user:v1
        lifecycle:
          preStop:
            exec:
              command: ["sh","-c","sleep 5"]
        ports:
        - containerPort: 2233
        readinessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 15
          periodSeconds: 10
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1000m
            memory: 1024Mi

That completes the image and k8s deployment-file steps. Note that for the pipeline below, the image field in user.yaml needs to hold the placeholder user:<COMMIT_ID_TAG>, which the pipeline substitutes with the actual commit id. The above is mainly for demonstration; in a real production environment, images are built automatically by the continuous-integration tooling.

Jenkins Pipeline

Jenkins is a widely used continuous-integration tool that offers several build methods, of which pipeline is one of the most common. Pipelines come in two flavors, declarative and scripted. Scripted syntax is flexible and extensible, but that also means more complexity, plus you have to learn Groovy, which raises the learning cost; declarative syntax exists precisely to offer something simpler and more structured. We will use declarative syntax from here on.

A word on the Jenkinsfile: it is just a plain-text file, the concrete form the deployment-pipeline concept takes in Jenkins, much as a Dockerfile is to Docker. All of the pipeline logic can be defined in this one file. Note that Jenkins does not support Jenkinsfiles out of the box; the Pipeline plugin is required. To install it, go to Manage Jenkins -> Manage Plugins, search for it, and install. After that you can build pipelines.
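For orientation, every declarative pipeline follows the same outer shape. A bare-bones skeleton (not the file we actually use for this project) looks like this:

```groovy
pipeline {                   // declarative pipelines always start with the pipeline block
    agent any                // run on any available executor
    stages {
        stage('Build') {     // one stage per logical step
            steps {
                echo 'building...'
            }
        }
    }
}
```

Stages run in order, and each stage's steps are reported separately in the build log.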

<img src="https://gitee.com/kevwan/static/raw/master/doc/images/pipeline_build.png" alt="pipeline_build" style="zoom: 33%;" />

We could type the build script straight into the pipeline UI, but then it would not be version-controlled, so that approach is only recommended for quick experiments. The more common approach is to have jenkins pull the Jenkinsfile from a git repository and execute it.

First install the Git plugin. We will clone over ssh, so the git private key must be added to jenkins to give it permission to pull code from the git repository.

To add the git private key to jenkins: Manage Jenkins -> Manage credentials -> Add credentials, choose the type SSH Username with private key, then follow the prompts, as shown below.

<img src="https://gitee.com/kevwan/static/raw/master/doc/images/gitlab_tokens.png" alt="gitlab_tokens" style="zoom:50%;" />

Then create a new project in our gitlab containing just a Jenkinsfile.

In the user-api project, set the pipeline definition to Pipeline script from SCM and fill in the gitlab ssh address and the corresponding credential, as shown below.

<img src="https://gitee.com/kevwan/static/raw/master/doc/images/jenkinsfile_responsitory.png" alt="jenkinsfile_responsitory" style="zoom:40%;" />

Now we can write the Jenkinsfile following the steps laid out above.

  • Pull the service code from our gitlab repository, recording the short commit_id to distinguish between versions

    stage('从 gitlab 拉取服务代码') {
        steps {
            echo '从 gitlab 拉取服务代码'
            git credentialsId: 'xxxxxxxx', url: 'http://xxx.xxx.xxx.xxx:xxx/blog/blog.git'
            script {
                commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
            }
        }
    }
  • Build the docker image using the Dockerfile generated by goctl

    stage('构建镜像') {
        steps {
            echo '构建镜像'
            sh "docker build -t user:${commit_id} app/user/api/"
        }
    }
  • Push the freshly built image to the image registry

    stage('上传镜像到镜像仓库') {
        steps {
            echo "上传镜像到镜像仓库"
            sh "docker login -u xxx -p xxxxxxx"
            sh "docker tag user:${commit_id} xxx/user:${commit_id}"
            sh "docker push xxx/user:${commit_id}"
        }
    }
  • Deploy to k8s: substitute the version tag in the deployment file so the image is pulled from the remote registry, then deploy with kubectl apply

    stage('部署到 k8s') {
        steps {
            echo "部署到 k8s"
            sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
            sh "cp app/user/api/user.yaml ."
            sh "kubectl apply -f user.yaml"
        }
    }

    The complete Jenkinsfile is as follows:

    pipeline {
        agent any
    
        stages {
            stage('从 gitlab 拉取服务代码') {
                steps {
                    echo '从 gitlab 拉取服务代码'
                    git credentialsId: 'xxxxxx', url: 'http://xxx.xxx.xxx.xxx:9090/blog/blog.git'
                    script {
                        commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                    }
                }
            }
            stage('构建镜像') {
                steps {
                    echo '构建镜像'
                    sh "docker build -t user:${commit_id} app/user/api/"
                }
            }
            stage('上传镜像到镜像仓库') {
                steps {
                    echo "上传镜像到镜像仓库"
                    sh "docker login -u xxx -p xxxxxxxx"
                    sh "docker tag user:${commit_id} xxx/user:${commit_id}"
                    sh "docker push xxx/user:${commit_id}"
                }
            }
            stage('部署到 k8s') {
                steps {
                    echo "部署到 k8s"
                    sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
                    sh "kubectl apply -f app/user/api/user.yaml"
                }
            }
        }
    }

    With that, configuration is essentially complete and our base framework is in place. Now run the pipeline: click Build Now on the left, and a build number appears under Build History. Click it, then click Console Output on the left to see the detailed build output; any errors during the build show up there as well.

The detailed build output is below; every stage of the pipeline logs its own output.

Started by user admin
Obtained Jenkinsfile from git git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/user-api
[Pipeline] {[Pipeline] stage
[Pipeline] {(Declarative: Checkout SCM)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
The recommended git tool is: NONE
using credential gitlab_token
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git # timeout=10
Fetching upstream changes from git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
using GIT_SSH to set credentials 
 > git fetch --tags --progress git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 77eac3a4ca1a5b6aea705159ce26523ddd179bdf (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
Commit message: "add"
 > git rev-list --no-walk 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {[Pipeline] stage
[Pipeline] {(从 gitlab 拉取服务代码)
[Pipeline] echo
从 gitlab 拉取服务代码
[Pipeline] git
The recommended git tool is: NONE
using credential gitlab_user_pwd
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://xxx.xxx.xxx.xxx:9090/blog/blog.git # timeout=10
Fetching upstream changes from http://xxx.xxx.xxx.xxx:9090/blog/blog.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
using GIT_ASKPASS to set credentials 
 > git fetch --tags --progress http://xxx.xxx.xxx.xxx:9090/blog/blog.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision b757e9eef0f34206414bdaa4debdefec5974c3f5 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
 > git branch -a -v --no-abbrev # timeout=10
 > git branch -D master # timeout=10
 > git checkout -b master b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
Commit message: "Merge branch 'blog/dev' into 'master'"
 > git rev-list --no-walk b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
[Pipeline] script
[Pipeline] {[Pipeline] sh
+ git rev-parse --short HEAD
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] {(构建镜像)
[Pipeline] echo
构建镜像
[Pipeline] sh
+ docker build -t user:b757e9e app/user/api/
Sending build context to Docker daemon  28.16kB

Step 1/18 : FROM golang:alpine AS builder
alpine: Pulling from library/golang
801bfaa63ef2: Pulling fs layer
ee0a1ba97153: Pulling fs layer
1db7f31c0ee6: Pulling fs layer
ecebeec079cf: Pulling fs layer
63b48972323a: Pulling fs layer
ecebeec079cf: Waiting
63b48972323a: Waiting
1db7f31c0ee6: Verifying Checksum
1db7f31c0ee6: Download complete
ee0a1ba97153: Verifying Checksum
ee0a1ba97153: Download complete
63b48972323a: Verifying Checksum
63b48972323a: Download complete
801bfaa63ef2: Verifying Checksum
801bfaa63ef2: Download complete
801bfaa63ef2: Pull complete
ee0a1ba97153: Pull complete
1db7f31c0ee6: Pull complete
ecebeec079cf: Verifying Checksum
ecebeec079cf: Download complete
ecebeec079cf: Pull complete
63b48972323a: Pull complete
Digest: sha256:49b4eac11640066bc72c74b70202478b7d431c7d8918e0973d6e4aeb8b3129d2
Status: Downloaded newer image for golang:alpine
 ---> 1463476d8605
Step 2/18 : LABEL stage=gobuilder
 ---> Running in c4f4dea39a32
Removing intermediate container c4f4dea39a32
 ---> c04bee317ea1
Step 3/18 : ENV CGO_ENABLED 0
 ---> Running in e8e848d64f71
Removing intermediate container e8e848d64f71
 ---> ff82ee26966d
Step 4/18 : ENV GOOS linux
 ---> Running in 58eb095128ac
Removing intermediate container 58eb095128ac
 ---> 825ab47146f5
Step 5/18 : ENV GOPROXY https://goproxy.cn,direct
 ---> Running in df2add4e39d5
Removing intermediate container df2add4e39d5
 ---> c31c1aebe5fa
Step 6/18 : WORKDIR /build/zero
 ---> Running in f2a1da3ca048
Removing intermediate container f2a1da3ca048
 ---> 5363d05f25f0
Step 7/18 : RUN go mod init blog/app/user/api
 ---> Running in 11d0adfa9d53
[91mgo: creating new go.mod: module blog/app/user/api
[0mRemoving intermediate container 11d0adfa9d53
 ---> 3314852f00fe
Step 8/18 : RUN go mod download
 ---> Running in aa9e9d9eb850
Removing intermediate container aa9e9d9eb850
 ---> a0f2a7ffe392
Step 9/18 : COPY . .
 ---> a807f60ed250
Step 10/18 : COPY /etc /app/etc
 ---> c4c5d9f15dc0
Step 11/18 : RUN go build -ldflags="-s -w" -o /app/user user.go
 ---> Running in a4321c3aa6e2
[91mgo: finding module for package github.com/tal-tech/go-zero/core/conf
[0m[91mgo: finding module for package github.com/tal-tech/go-zero/rest/httpx
[0m[91mgo: finding module for package github.com/tal-tech/go-zero/rest
[0m[91mgo: finding module for package github.com/tal-tech/go-zero/core/logx
[0m[91mgo: downloading github.com/tal-tech/go-zero v1.1.1
[0m[91mgo: found github.com/tal-tech/go-zero/core/conf in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest/httpx in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/core/logx in github.com/tal-tech/go-zero v1.1.1
[0m[91mgo: downloading gopkg.in/yaml.v2 v2.4.0
[0m[91mgo: downloading github.com/justinas/alice v1.2.0
[0m[91mgo: downloading github.com/dgrijalva/jwt-go v3.2.0+incompatible
[0m[91mgo: downloading go.uber.org/automaxprocs v1.3.0
[0m[91mgo: downloading github.com/spaolacci/murmur3 v1.1.0
[0m[91mgo: downloading github.com/google/uuid v1.1.1
[0m[91mgo: downloading google.golang.org/grpc v1.29.1
[0m[91mgo: downloading github.com/prometheus/client_golang v1.5.1
[0m[91mgo: downloading github.com/beorn7/perks v1.0.1
[0m[91mgo: downloading github.com/golang/protobuf v1.4.2
[0m[91mgo: downloading github.com/prometheus/common v0.9.1
[0m[91mgo: downloading github.com/cespare/xxhash/v2 v2.1.1
[0m[91mgo: downloading github.com/prometheus/client_model v0.2.0
[0m[91mgo: downloading github.com/prometheus/procfs v0.0.8
[0m[91mgo: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
[0m[91mgo: downloading google.golang.org/protobuf v1.25.0
[0mRemoving intermediate container a4321c3aa6e2
 ---> 99ac2cd5fa39
Step 12/18 : FROM alpine
latest: Pulling from library/alpine
801bfaa63ef2: Already exists
Digest: sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436
Status: Downloaded newer image for alpine:latest
 ---> 389fef711851
Step 13/18 : RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
 ---> Running in 51694dcb96b6
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
v3.12.3-38-g9ff116e4f0 [http://dl-cdn.alpinelinux.org/alpine/v3.12/main]
v3.12.3-39-ge9195171b7 [http://dl-cdn.alpinelinux.org/alpine/v3.12/community]
OK: 12746 distinct packages available
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/2) Installing ca-certificates (20191127-r4)
(2/2) Installing tzdata (2020f-r0)
Executing busybox-1.31.1-r19.trigger
Executing ca-certificates-20191127-r4.trigger
OK: 10 MiB in 16 packages
Removing intermediate container 51694dcb96b6
 ---> e5fb2e4d5eea
Step 14/18 : ENV TZ Asia/Shanghai
 ---> Running in 332fd0df28b5
Removing intermediate container 332fd0df28b5
 ---> 11c0e2e49e46
Step 15/18 : WORKDIR /app
 ---> Running in 26e22103c8b7
Removing intermediate container 26e22103c8b7
 ---> 11d11c5ea040
Step 16/18 : COPY --from=builder /app/user /app/user
 ---> f69f19ffc225
Step 17/18 : COPY --from=builder /app/etc /app/etc
 ---> b8e69b663683
Step 18/18 : CMD ["./user", "-f", "etc/user-api.yaml"]
 ---> Running in 9062b0ed752f
Removing intermediate container 9062b0ed752f
 ---> 4867b4994e43
Successfully built 4867b4994e43
Successfully tagged user:b757e9e
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] {(上传镜像到镜像仓库)
[Pipeline] echo
上传镜像到镜像仓库
[Pipeline] sh
+ docker login -u xxx -p xxxxxxxx
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[Pipeline] sh
+ docker tag user:b757e9e xxx/user:b757e9e
[Pipeline] sh
+ docker push xxx/user:b757e9e
The push refers to repository [docker.io/xxx/user]
b19a970f64b9: Preparing
f695b957e209: Preparing
ee27c5ca36b5: Preparing
7da914ecb8b0: Preparing
777b2c648970: Preparing
777b2c648970: Layer already exists
ee27c5ca36b5: Pushed
b19a970f64b9: Pushed
7da914ecb8b0: Pushed
f695b957e209: Pushed
b757e9e: digest: sha256:6ce02f8a56fb19030bb7a1a6a78c1a7c68ad43929ffa2d4accef9c7437ebc197 size: 1362
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] {(部署到 k8s)
[Pipeline] echo
部署到 k8s
[Pipeline] sh
+ sed -i s/<COMMIT_ID_TAG>/b757e9e/ app/user/api/user.yaml
[Pipeline] sh
+ kubectl apply -f app/user/api/user.yaml
deployment.apps/user-api created
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

The final SUCCESS line means the pipeline succeeded. We can now check with the kubectl tool; the -n flag specifies the namespace:

kubectl get pods -n blog

NAME                       READY   STATUS    RESTARTS   AGE
user-api-84ffd5b7b-c8c5w   1/1     Running   0          10m
user-api-84ffd5b7b-pmh92   1/1     Running   0          10m

The k8s deployment file specifies the namespace blog, so that namespace has to be created before the pipeline runs:

kubectl create namespace blog

The service is deployed; how do we reach it from outside? Here we use a LoadBalancer Service, defined below: port 80 is mapped to the container's port 2233, and the selector matches the label defined in the Deployment.

apiVersion: v1
kind: Service
metadata:
  name: user-api-service
  namespace: blog
spec:
  selector:
    app: user-api
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 2233

Create the service, then inspect it; the output is below. Note that the -n namespace flag is required:

kubectl apply -f user-service.yaml
kubectl get services -n blog

NAME               TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
user-api-service   LoadBalancer   <none>       xxx.xxx.xxx.xx   80:32470/TCP   79m

The EXTERNAL-IP column is the address exposed to the public internet, on port 80.

That completes all of the deployment work; I encourage you to work through it yourself.

Testing

Finally, let's verify that the deployed service works, accessing it via the EXTERNAL-IP:

curl "http://xxx.xxx.xxx.xxx:80/user/info?uid=1"

{"uid":1,"name":"172.17.0.5","level":666}

curl http://xxx.xxx.xxx.xxx:80/user/info\?uid\=1

{"uid":1,"name":"172.17.0.8","level":666}

We called the /user/info endpoint twice with curl and both calls returned normally, so the service is healthy. The name field shows two different IPs, which tells us the LoadBalancer defaults to a Round Robin load-balancing policy.

Summary

We have now walked the full DevOps flow from writing code through version control to build and deployment, and the basic architecture is in place, although it is still quite bare. In later installments of this series we will gradually round out the architecture on top of this blog system: improving the CI and CD flow, adding monitoring, growing the blog's features, and covering high-availability best practices and the principles behind them.

Sharp tools make good work: the right tooling greatly improves productivity and reduces the chance of error, and as you saw we leaned heavily on the [goctl]() tool, which is honestly hard not to love. See you next time!

My abilities are limited and some things are bound to be phrased imperfectly; criticism and corrections are very welcome!

Project repository

https://github.com/tal-tech/go-zero

Feel free to use go-zero and star it to show your support!

More articles in the go-zero series are published on the WeChat official account 『微服务实践』.
