Author: Lao Z, cloud-native enthusiast, currently focused on cloud-native operations, KubeSphere Ambassador.

RocketMQ, part of the Spring Cloud Alibaba suite, is a message middleware product typical of distributed architectures; it uses asynchronous communication and the publish-subscribe message transport model.

Many projects built on Spring Cloud favor RocketMQ as their message middleware.

The common RocketMQ deployment modes are:

  • Single Master mode
  • Multi Master mode without Slaves
  • Multi Master multi Slave mode - asynchronous replication
  • Multi Master multi Slave mode - synchronous dual write

For details on more deployment options, see the official documentation.

This article focuses on deploying the single Master mode and the multi Master multi Slave (asynchronous replication) mode on a K8s cluster.

Single Master Mode

This deployment carries considerable risk: only one NameServer and one Broker are deployed, and once the Broker restarts or goes down the whole service becomes unavailable. It is not recommended for production; use it only for development and testing.

The deployment follows the images, startup method, and customized configuration involved in the containerized deployment used by the official rocketmq-docker project.

Multi Master Multi Slave - Asynchronous Replication Mode

Each Master is paired with one Slave, and there are multiple Master-Slave pairs. HA uses asynchronous replication, so there is a brief message lag (milliseconds) between master and standby. The pros and cons of this mode:

  • Pros: even if a disk is damaged, very few messages are lost and message real-time delivery is unaffected. When a Master goes down, consumers can still consume from the Slave; the process is transparent to the application and needs no manual intervention, and performance is almost the same as the multi Master mode;
  • Cons: if a Master goes down with disk damage, a small number of messages may be lost.

The multi Master multi Slave (asynchronous replication) mode is suitable for production; the deployment uses the official RocketMQ Operator.

Building Offline Images

This step is optional and intended for offline intranet environments. If you do not set up intranet images, leave the container image parameters in the later resource manifests at their default values.

This article describes how to build the offline images used to deploy RocketMQ in single Master mode and in multi Master multi Slave (asynchronous replication) mode.

  • Single Master mode directly uses the images from the containerized deployment described in the official RocketMQ documentation.
  • Multi Master multi Slave (asynchronous replication) mode builds its offline images with the image-building tools that ship with the RocketMQ Operator. During the build, many packages have to be downloaded from servers outside China; restricted access makes the default success rate low, so multiple attempts or special workarounds may be needed (you know what I mean).

    You can also pull the existing images from Docker Hub manually in the traditional way and then push them to a private registry.

Perform the following operations on a server that can reach both the Internet and the intranet Harbor registry.

Create Projects in Harbor

I like to keep the namespaces of intranet offline images identical to the default namespaces of the upstream images, so create two projects in Harbor: apache and apacherocketmq. You can create them manually in the Harbor management UI, or automate it from the command line as shown below.

curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" https://registry.zdevops.com.cn/api/v2.0/projects -d '{ "project_name": "apache", "public": true}'curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" https://registry.zdevops.com.cn/api/v2.0/projects -d '{ "project_name": "apacherocketmq", "public": true}'

Install Go 1.16

Building custom RocketMQ Operator images requires a Go environment, which must be installed and configured first.

Download the latest release in the Go 1.16 series:

cd /opt/
wget https://golang.google.cn/dl/go1.16.15.linux-amd64.tar.gz

Extract the archive into the target directory:

tar zxvf go1.16.15.linux-amd64.tar.gz -C /usr/local/

Configure the environment variables:

cat >> /etc/profile.d/go.sh << 'EOF'
# go environment
export GOROOT=/usr/local/go
export GOPATH=/srv/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
EOF
GOPATH is the working directory where code is stored; configure it to suit your own habits.

Configure Go:

go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct

Verify:

source /etc/profile.d/go.sh
go version
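
If the environment is set up correctly, the output should look something like:

go version go1.16.15 linux/amd64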

Get the RocketMQ Operator

Fetch the rocketmq-operator code from the official Apache GitHub repository.

cd /srv
git clone -b 0.3.0 https://github.com/apache/rocketmq-operator.git

Build the RocketMQ Operator Image

Modify the Dockerfile:

cd /srv/rocketmq-operator
vi Dockerfile

Notice: the image build needs to reach software sources and registries hosted abroad; access from inside China can be restricted, so you may switch to domestic sources and registries ahead of time.

This step is optional; skip it if your network access is unrestricted.

Required changes

# Line 10 (change the proxy to a domestic address to speed up access)
# Before
RUN go mod download
# After
RUN go env -w GOPROXY=https://goproxy.cn,direct && go mod download

# Line 25 (switch to a domestic package source)
# Before
RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras
# After
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories && \
    apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

Optional changes

# The ROCKETMQ version installed by default is 4.9.4; change it to the version you need
# Line 28, modify 4.9.4
ENV ROCKETMQ_VERSION 4.9.4

Build the image:

yum install gcc
cd /srv/rocketmq-operator
go mod tidy
IMAGE_URL=registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
make docker-build IMG=${IMAGE_URL}

Verify that the image was built successfully:

docker images | grep rocketmq-operator

Push the image:

make docker-push IMG=${IMAGE_URL}

Clean up the temporary image:

docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0

Build the RocketMQ Broker Image

Modify the Dockerfile (optional):

cd /srv/rocketmq-operator/images/broker/alpine
vi Dockerfile

This step is optional; it only speeds up package installation. Skip it if your network access is unrestricted.

# Line 20 (switch to a domestic package source)
# Before
RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras
# After
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories && \
    apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

Modify the image build script:

# Change the registry address to the intranet address
sed -i 's#apacherocketmq#registry.zdevops.com.cn/apacherocketmq#g' build-broker-image.sh

Build and push the image:

./build-broker-image.sh 4.9.4

Verify that the image was built successfully:

docker images | grep rocketmq-broker

Clean up the temporary image:

docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0

Build the RocketMQ Name Server Image

Modify the Dockerfile (optional):

cd /srv/rocketmq-operator/images/namesrv/alpine
vi Dockerfile

This step is optional; it only speeds up package installation. Skip it if your network access is unrestricted.

# Line 20 (switch to a domestic package source)
# Before
RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras
# After
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apk/repositories && \
    apk add --no-cache bash gettext nmap-ncat openssl busybox-extras

Modify the image build script:

# Change the registry address to the intranet address
sed -i 's#apacherocketmq#registry.zdevops.com.cn/apacherocketmq#g' build-namesrv-image.sh

Build and push the image:

./build-namesrv-image.sh 4.9.4

Verify that the image was built successfully:

docker images | grep rocketmq-nameserver

Clean up the temporary image:

docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Building Offline Images from the Official Prebuilt Images

The offline image build process above, used by the multi Master multi Slave (asynchronous replication) deployment, is better suited to local customization. If you simply want to download the official prebuilt images unmodified and push them to a local registry, follow the steps below.

Download the images:

docker pull apache/rocketmq-operator:0.3.0
docker pull apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker pull apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Notice: the latest images in the official repository are the 4.5.0 ones, built two years ago.

Retag the images:

docker tag apache/rocketmq-operator:0.3.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
docker tag apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker tag apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Push to the private registry:

docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Clean up the temporary images:

docker rmi apache/rocketmq-operator:0.3.0
docker rmi apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker rmi apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0

Build the RocketMQ Console Image

This article pulls the official image directly as the local offline image; if you need to modify and rebuild it, refer to the official Dockerfile used by RocketMQ Console and build it yourself.

Download the image:

docker pull apacherocketmq/rocketmq-console:2.0.0

Retag the image:

docker tag apacherocketmq/rocketmq-console:2.0.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0

Push to the private registry:

docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0

Clean up the temporary images:

docker rmi apacherocketmq/rocketmq-console:2.0.0
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0

Prepare Offline Images for the Single Master RocketMQ Deployment

The images used by the single Master RocketMQ deployment differ from those used by the RocketMQ Operator in the cluster deployment. To build the offline images, pull them straight from the official registry, retag them, and push them to the local registry.

The differences in detail:

  • The single Master plan uses images from the apache namespace on Docker Hub, and the image name does not distinguish nameserver from broker; the RocketMQ Operator uses images from the apacherocketmq namespace, with separate nameserver and broker image names.
  • The management tools also differ: the single Master plan uses the rocketmq-dashboard image from the apacherocketmq namespace, while the RocketMQ Operator uses the rocketmq-console image from the apacherocketmq namespace.

The detailed offline image build procedure is as follows:

Download the images

docker pull apache/rocketmq:4.9.4
docker pull apacherocketmq/rocketmq-dashboard:1.0.0

Retag the images

docker tag apache/rocketmq:4.9.4 registry.zdevops.com.cn/apache/rocketmq:4.9.4
docker tag apacherocketmq/rocketmq-dashboard:1.0.0 registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0

Push to the private registry

docker push registry.zdevops.com.cn/apache/rocketmq:4.9.4
docker push registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0

Clean up the temporary images

docker rmi apache/rocketmq:4.9.4
docker rmi apacherocketmq/rocketmq-dashboard:1.0.0
docker rmi registry.zdevops.com.cn/apache/rocketmq:4.9.4
docker rmi registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0

Single Master Mode Deployment

Planning the Approach

Based on the components the RocketMQ service uses, the following resources need to be deployed:

  • Broker StatefulSet
  • NameServer StatefulSet
  • NameServer Cluster Service: internal service
  • Dashboard Deployment
  • Dashboard External Service: for external Dashboard administration
  • ConfigMap: custom Broker configuration file

Resource Manifests

Referring to the containerized startup example configuration in the Apache rocketmq-docker project on GitHub, write resource manifests suitable for K8s.

Notice: technical skills, habits, and environments all differ; this is just one simple approach I use, not necessarily the optimal one. Write a configuration that fits your actual situation.

rocketmq-cm.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: rocketmq-broker-config
  namespace: zdevops
data:
  BROKER_MEM: ' -Xms2g -Xmx2g -Xmn1g '
  broker-common.conf: |-
    brokerClusterName = DefaultCluster
    brokerName = broker-0
    brokerId = 0
    deleteWhen = 04
    fileReservedTime = 48
    brokerRole = ASYNC_MASTER
    flushDiskType = ASYNC_FLUSH

rocketmq-name-service-sts.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: rocketmq-name-service
  namespace: zdevops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-name-service
      name_service_cr: rocketmq-name-service
  template:
    metadata:
      labels:
        app: rocketmq-name-service
        name_service_cr: rocketmq-name-service
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - name: rocketmq-name-service
          image: 'registry.zdevops.com.cn/apache/rocketmq:4.9.4'
          command:
            - /bin/sh
          args:
            - mqnamesrv
          ports:
            - name: tcp-9876
              containerPort: 9876
              protocol: TCP
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 250m
              memory: 512Mi
          volumeMounts:
            - name: rocketmq-namesrv-storage
              mountPath: /home/rocketmq/logs
              subPath: logs
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime
          imagePullPolicy: Always
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: rocketmq-namesrv-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: glusterfs
        volumeMode: Filesystem
  serviceName: ''
---
kind: Service
apiVersion: v1
metadata:
  name: rocketmq-name-server-service
  namespace: zdevops
spec:
  ports:
    - name: tcp-9876
      protocol: TCP
      port: 9876
      targetPort: 9876
  selector:
    name_service_cr: rocketmq-name-service
  type: ClusterIP

rocketmq-broker-sts.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: rocketmq-broker-0-master
  namespace: zdevops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-broker
      broker_cr: rocketmq-broker
  template:
    metadata:
      labels:
        app: rocketmq-broker
        broker_cr: rocketmq-broker
    spec:
      volumes:
        - name: rocketmq-broker-config
          configMap:
            name: rocketmq-broker-config
            items:
              - key: broker-common.conf
                path: broker-common.conf
            defaultMode: 420
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
      containers:
        - name: rocketmq-broker
          image: 'registry.zdevops.com.cn/apache/rocketmq:4.9.4'
          command:
            - /bin/sh
          args:
            - mqbroker
            - "-c"
            - /home/rocketmq/conf/broker-common.conf
          ports:
            - name: tcp-vip-10909
              containerPort: 10909
              protocol: TCP
            - name: tcp-main-10911
              containerPort: 10911
              protocol: TCP
            - name: tcp-ha-10912
              containerPort: 10912
              protocol: TCP
          env:
            - name: NAMESRV_ADDR
              value: 'rocketmq-name-server-service.zdevops:9876'
            - name: BROKER_MEM
              valueFrom:
                configMapKeyRef:
                  name: rocketmq-broker-config
                  key: BROKER_MEM
          resources:
            limits:
              cpu: 500m
              memory: 12Gi
            requests:
              cpu: 250m
              memory: 2Gi
          volumeMounts:
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime
            - name: rocketmq-broker-storage
              mountPath: /home/rocketmq/logs
              subPath: logs/broker-0-master
            - name: rocketmq-broker-storage
              mountPath: /home/rocketmq/store
              subPath: store/broker-0-master
            - name: rocketmq-broker-config
              mountPath: /home/rocketmq/conf/broker-common.conf
              subPath: broker-common.conf
          imagePullPolicy: Always
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: rocketmq-broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi
        storageClassName: glusterfs
        volumeMode: Filesystem
  serviceName: ''

rocketmq-dashboard.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: rocketmq-dashboard
  namespace: zdevops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-dashboard
  template:
    metadata:
      labels:
        app: rocketmq-dashboard
    spec:
      containers:
        - name: rocketmq-dashboard
          image: 'registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0'
          ports:
            - name: http-8080
              containerPort: 8080
              protocol: TCP
          env:
            - name: JAVA_OPTS
              value: >-
                -Drocketmq.namesrv.addr=rocketmq-name-server-service.zdevops:9876
                -Dcom.rocketmq.sendMessageWithVIPChannel=false
          resources:
            limits:
              cpu: 500m
              memory: 2Gi
            requests:
              cpu: 50m
              memory: 512Mi
          imagePullPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
  name: rocketmq-dashboard-service
  namespace: zdevops
spec:
  ports:
    - name: http-8080
      protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 31080
  selector:
    app: rocketmq-dashboard
  type: NodePort
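
As a side note, if NodePort 31080 is not reachable from your workstation, kubectl port-forward offers a quick local alternative for reaching the dashboard:

# Forward local port 8080 to the dashboard service; then open http://localhost:8080
kubectl -n zdevops port-forward svc/rocketmq-dashboard-service 8080:8080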

GitOps

This step is optional. My habit is to edit resource manifests on a personal dev server, commit them to a Git server (GitLab, Gitee, GitHub, etc.), and then pull and apply them on a k8s node. This versions the manifests and implements a simple form of operations GitOps.

For convenience of demonstration, all k8s resource manifests in this series live in a single k8s-yaml repository; in real work each application usually gets its own Git repository, which makes version control of application configs easier.

In practice you can skip this step and write and apply the manifests directly on a k8s node, or follow my approach for simple GitOps.

Operate on your personal ops/dev server

# Create the rocketmq/single directory in the existing code repository
mkdir -p rocketmq/single

# Edit the resource manifests
vi rocketmq/single/rocketmq-cm.yaml
vi rocketmq/single/rocketmq-name-service-sts.yaml
vi rocketmq/single/rocketmq-broker-sts.yaml
vi rocketmq/single/rocketmq-dashboard.yaml

# Commit to Git
git add rocketmq
git commit -am 'Add RocketMQ single-node resource manifests'
git push

Deploy the Resources

Operate on a K8s cluster master node or on a standalone ops management server.

Update the manifest repository code

cd /srv/k8s-yaml
git pull

Deploy the resources (step by step; choose one of the two)

In the test environment, deploy each manifest separately so you can verify its correctness.

cd /srv/k8s-yaml
kubectl apply -f rocketmq/single/rocketmq-cm.yaml
kubectl apply -f rocketmq/single/rocketmq-name-service-sts.yaml
kubectl apply -f rocketmq/single/rocketmq-broker-sts.yaml
kubectl apply -f rocketmq/single/rocketmq-dashboard.yaml

Deploy the resources (one-shot; choose one of the two)

In practice you can apply the whole directory at once for one-shot automated deployment; use the directory form in staging and production environments for quick deployment.

kubectl apply -f rocketmq/single/
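
As an optional safety step before applying, kubectl diff shows what would change against the live cluster (it exits non-zero when differences exist):

# Preview the changes the manifests would make without applying them
kubectl diff -f rocketmq/single/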

Verify

ConfigMap:

$ kubectl get cm -n zdevops
NAME                     DATA   AGE
kube-root-ca.crt         1      17d
rocketmq-broker-config   2      22s

StatefulSet:

$ kubectl get sts -o wide -n zdevops
NAME                       READY   AGE   CONTAINERS              IMAGES
rocketmq-broker-0-master   1/1     11s   rocketmq-broker         registry.zdevops.com.cn/apache/rocketmq:4.9.4
rocketmq-name-service      1/1     12s   rocketmq-name-service   registry.zdevops.com.cn/apache/rocketmq:4.9.4

Deployment:

$ kubectl get deploy -o wide -n zdevops
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS           IMAGES                                                             SELECTOR
rocketmq-dashboard   1/1     1            1           31s   rocketmq-dashboard   registry.zdevops.com.cn/apacherocketmq/rocketmq-dashboard:1.0.0   app=rocketmq-dashboard

Pods:

$ kubectl get pods -o wide -n zdevops
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
rocketmq-broker-0-master-0           1/1     Running   0          77s   10.233.116.103   ks-k8s-master-2   <none>           <none>
rocketmq-dashboard-b5dbb9d88-cwhqc   1/1     Running   0          3s    10.233.87.115    ks-k8s-master-1   <none>           <none>
rocketmq-name-service-0              1/1     Running   0          78s   10.233.116.102   ks-k8s-master-2   <none>           <none>

Service:

$ kubectl get svc -o wide -n zdevops
NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE     SELECTOR
rocketmq-dashboard-service     NodePort    10.233.5.237   <none>        8080:31080/TCP   74s     app=rocketmq-dashboard
rocketmq-name-server-service   ClusterIP   10.233.3.61    <none>        9876/TCP         2m29s   name_service_cr=rocketmq-name-service

Open IP:31080 of any node in the K8s cluster in a browser to see the RocketMQ console management UI.
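
Beyond the console UI, you can run a quick send test from inside the Broker Pod. This is a minimal sketch, assuming the official apache/rocketmq image sets ROCKETMQ_HOME and ships bin/tools.sh with the quickstart examples; adjust the paths to your image if they differ.

# Send test messages with the bundled quickstart producer
kubectl -n zdevops exec -it rocketmq-broker-0-master-0 -- sh -c \
  'export NAMESRV_ADDR=rocketmq-name-server-service.zdevops:9876; \
   cd "$ROCKETMQ_HOME" && sh bin/tools.sh org.apache.rocketmq.example.quickstart.Producer'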

Clean Up Resources

To uninstall RocketMQ, or to clean up and reinstall after a failed installation, use the following procedure to clean up the resources on the K8s cluster.

Clean up the StatefulSets:

kubectl delete sts rocketmq-broker-0-master -n zdevops
kubectl delete sts rocketmq-name-service -n zdevops

Clean up the Deployment:

kubectl delete deployments rocketmq-dashboard -n zdevops

Clean up the ConfigMap:

kubectl delete cm rocketmq-broker-config -n zdevops

Clean up the Services:

kubectl delete svc rocketmq-name-server-service -n zdevops
kubectl delete svc rocketmq-dashboard-service -n zdevops

Clean up the persistent volumes:

kubectl delete pvc rocketmq-namesrv-storage-rocketmq-name-service-0 -n zdevops
kubectl delete pvc rocketmq-broker-storage-rocketmq-broker-0-master-0 -n zdevops

Of course, you can also clean up with the resource manifests, which is simpler and faster (persistent volumes cannot be cleaned up automatically and must be removed manually).

$ kubectl delete -f rocketmq/single/
statefulset.apps "rocketmq-broker-0-master" deleted
configmap "rocketmq-broker-config" deleted
deployment.apps "rocketmq-dashboard" deleted
service "rocketmq-dashboard-service" deleted
statefulset.apps "rocketmq-name-service" deleted
service "rocketmq-name-server-service" deleted

Multi Master Multi Slave - Asynchronous Replication Mode Deployment

Planning the Approach

Deploying RocketMQ in multi Master multi Slave (asynchronous replication) mode uses the official RocketMQ Operator; deployment is quick and convenient, and scaling out is also easy.

The default configuration deploys 1 Master and 1 corresponding Slave; after deployment you can scale out Masters and Slaves as needed.

Get the RocketMQ Operator

# Specify the release tag when cloning
cd /srv
git clone -b 0.3.0 https://github.com/apache/rocketmq-operator.git

Prepare the Resource Manifests

The manifests demonstrated here directly modify the rocketmq-operator defaults. For production, derive a standard set of configuration files suited to your environment from the defaults and keep them in a Git repository.

Add or change the namespace in the deploy resource manifest files:

cd /srv/rocketmq-operator
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_brokers.yaml
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_consoles.yaml
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_nameservices.yaml
sed -i 'N;8 a \  namespace: zdevops' deploy/crds/rocketmq.apache.org_topictransfers.yaml
sed -i 'N;18 a \  namespace: zdevops' deploy/operator.yaml
sed -i 'N;18 a \  namespace: zdevops' deploy/role_binding.yaml
sed -i 's/namespace: default/namespace: zdevops/g' deploy/role_binding.yaml
sed -i 'N;18 a \  namespace: zdevops' deploy/service_account.yaml
sed -i 'N;20 a \  namespace: zdevops' deploy/role.yaml

Keep in mind that this step can be executed only once; if it fails, delete the changes and start over from a fresh copy.

After it finishes, be sure to check that the result matches expectations: grep -r zdevops deploy/*

Change the namespace in the example resource manifest files:

sed -i 's/namespace: default/namespace: zdevops/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml
sed -i 'N;18 a \  namespace: zdevops' example/rocketmq_v1alpha1_cluster_service.yaml

Change the image addresses to intranet addresses:

sed -i 's#apache/rocketmq-operator:0.3.0#registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0#g' deploy/operator.yaml
sed -i 's#apacherocketmq#registry.zdevops.com.cn/apacherocketmq#g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Change the RocketMQ version (optional):

sed -i 's/4.5.0/4.9.4/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml 
Notice: the default example manifest deploys RocketMQ cluster version 4.5.0; adjust it to your needs.

Change the NameService network mode (optional):

sed -i 's/hostNetwork: true/hostNetwork: false/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml
sed -i 's/dnsPolicy: ClusterFirstWithHostNet/dnsPolicy: ClusterFirst/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: the official example defaults to hostNetwork mode, which suits serving applications both inside and outside the K8s cluster; adjust it to your needs.

Personally I prefer to disable hostNetwork mode and not mix it with external applications. If mixed use is required, I would rather deploy RocketMQ independently outside the cluster.

Change storageClassName to glusterfs:

sed -i 's/storageClassName: rocketmq-storage/storageClassName: glusterfs/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml
sed -i 's/storageMode: EmptyDir/storageMode: StorageClass/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: in the demo environment the storageClassName for GlusterFS storage is glusterfs; change it to match your actual environment.

Change nameServers to the domain-name form:

sed -i 's/nameServers: ""/nameServers: "name-server-service.zdevops:9876"/g' example/rocketmq_v1alpha1_rocketmq_cluster.yaml

Notice: name-server-service.zdevops is the NameServer service name combined with the project (namespace) name.

The default configuration uses the Pod [ip:port] form; once a Pod IP changes, the Console can no longer manage the cluster, and the Console does not update its configuration automatically. If the value is left empty, arbitrary configuration may even show up, so be sure to change it in advance. A quick in-cluster check that the service name resolves is shown below.
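
To confirm the domain form resolves before applying it, you can run a throwaway DNS lookup inside the cluster (busybox:1.28 is used here only because its nslookup behaves predictably):

# One-off pod that resolves the NameServer service name, then removes itself
kubectl -n zdevops run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup name-server-service.zdevops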

Change the NodePort for external access to the RocketMQ Console:

sed -i 's/nodePort: 30000/nodePort: 31080/g' example/rocketmq_v1alpha1_cluster_service.yaml
Notice: the official example defaults to port 30000; adjust it to your needs.

Change the service configuration of the RocketMQ NameServer and Console:

sed -i '32,46s/^#//g' example/rocketmq_v1alpha1_cluster_service.yaml
sed -i 's/nodePort: 30001/nodePort: 31081/g' example/rocketmq_v1alpha1_cluster_service.yaml
sed -i 's/namespace: default/namespace: zdevops/g' example/rocketmq_v1alpha1_cluster_service.yaml
The NameServer defaults to NodePort mode; if it is only consumed inside the K8s cluster, it can be switched to ClusterIP mode, as in the sketch below.
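
A minimal sketch of the ClusterIP variant, assuming the name_service_cr=name-service selector that appears in the verification output later in this article:

kind: Service
apiVersion: v1
metadata:
  name: name-server-service
  namespace: zdevops
spec:
  type: ClusterIP
  selector:
    name_service_cr: name-service
  ports:
    - name: tcp-9876
      protocol: TCP
      port: 9876
      targetPort: 9876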

GitOps

For production use, it is recommended to take the manifests edited above, organize them separately, delete the superfluous files from the rocketmq-operator project, form a set of resource manifests tailored to your business needs, and put them under Git version control.

The workflow was described in detail in the single Master deployment, so it is not repeated here.

Deploy the RocketMQ Operator (Automatic)

The official automatic deployment method works in Internet-connected environments; during deployment it downloads the controller-gen and kustomize binaries along with a pile of Go dependencies.

It is not suitable for offline intranet environments, so it is only briefly introduced here; this article focuses on the manual deployment that follows.

Deploy the RocketMQ Operator:

make deploy

Deploy the RocketMQ Operator (Manual)

Deploy the RocketMQ Operator:

kubectl create -f deploy/crds/rocketmq.apache.org_brokers.yaml
kubectl create -f deploy/crds/rocketmq.apache.org_nameservices.yaml
kubectl create -f deploy/crds/rocketmq.apache.org_consoles.yaml
kubectl create -f deploy/crds/rocketmq.apache.org_topictransfers.yaml
kubectl create -f deploy/service_account.yaml
kubectl create -f deploy/role.yaml
kubectl create -f deploy/role_binding.yaml
kubectl create -f deploy/operator.yaml

Verify the CRDs:

$ kubectl get crd | grep rocketmq.apache.org
brokers.rocketmq.apache.org                           2022-11-09T02:54:52Z
consoles.rocketmq.apache.org                          2022-11-09T02:54:54Z
nameservices.rocketmq.apache.org                      2022-11-09T02:54:53Z
topictransfers.rocketmq.apache.org                    2022-11-09T02:54:54Z

Verify the RocketMQ Operator:

$ kubectl get deploy -n zdevops -o wide
NAME                READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                                            SELECTOR
rocketmq-operator   1/1     1            1           6m46s   manager      registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0   name=rocketmq-operator

$ kubectl get pods -n zdevops -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE              NOMINATED NODE   READINESS GATES
rocketmq-operator-7cc6b48796-htpk8   1/1     Running   0          2m28s   10.233.116.70   ks-k8s-master-2   <none>           <none>

Deploy the RocketMQ Cluster

Create the services:

$ kubectl apply -f example/rocketmq_v1alpha1_cluster_service.yaml
service/console-service created
service/name-server-service created

Create the cluster:

$ kubectl apply -f example/rocketmq_v1alpha1_rocketmq_cluster.yaml
configmap/broker-config created
broker.rocketmq.apache.org/broker created
nameservice.rocketmq.apache.org/name-service created
console.rocketmq.apache.org/console created

Verify

StatefulSet:

$ kubectl get sts -o wide -n zdevops
NAME                 READY   AGE   CONTAINERS     IMAGES
broker-0-master      1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-0-replica-1   1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
name-service         1/1     27s   name-service   registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Deployment:

$ kubectl get deploy -o wide -n zdevops
NAME                READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                                            SELECTOR
console             1/1     1            1           52s     console      registry.zdevops.com.cn/apacherocketmq/rocketmq-console:2.0.0    app=rocketmq-console
rocketmq-operator   1/1     1            1           4h43m   manager      registry.zdevops.com.cn/apacherocketmq/rocketmq-operator:0.3.0   name=rocketmq-operator

Pod:

$ kubectl get pods -o wide -n zdevops
NAME                                 READY   STATUS    RESTARTS      AGE     IP              NODE              NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0             47s     10.233.87.24    ks-k8s-master-1   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0             17s     10.233.117.28   ks-k8s-master-0   <none>           <none>
console-8d685798f-5pwct              1/1     Running   0             116s    10.233.116.84   ks-k8s-master-2   <none>           <none>
name-service-0                       1/1     Running   0             96s     10.233.116.85   ks-k8s-master-2   <none>           <none>
rocketmq-operator-7cc6b48796-htpk8   1/1     Running   2 (98s ago)   4h39m   10.233.116.70   ks-k8s-master-2   <none>           <none>

Services:

$ kubectl get svc -o wide -n zdevops
NAME                  TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE   SELECTOR
console-service       NodePort   10.233.38.15    <none>        8080:31080/TCP   21m   app=rocketmq-console
name-server-service   NodePort   10.233.56.238   <none>        9876:31081/TCP   21m   name_service_cr=name-service

Open IP:31080 of any node in the K8s cluster in a browser to see the RocketMQ console management UI.
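
You can also check cluster health from the command line. A sketch, assuming the operator-built broker image keeps the standard distribution layout under ROCKETMQ_HOME and includes the mqadmin tool:

# List the brokers registered with the NameServer
kubectl -n zdevops exec -it broker-0-master-0 -- sh -c \
  'cd "$ROCKETMQ_HOME" && sh bin/mqadmin clusterList -n name-server-service.zdevops:9876'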

Clean Up Resources

Clean Up the RocketMQ Cluster

If cluster deployment fails or you need to redeploy, clean up and delete in the following order.

kubectl delete -f example/rocketmq_v1alpha1_rocketmq_cluster.yaml
kubectl delete -f example/rocketmq_v1alpha1_cluster_service.yaml

Clean Up the RocketMQ Operator

kubectl delete -f deploy/crds/rocketmq.apache.org_brokers.yaml
kubectl delete -f deploy/crds/rocketmq.apache.org_nameservices.yaml
kubectl delete -f deploy/crds/rocketmq.apache.org_consoles.yaml
kubectl delete -f deploy/crds/rocketmq.apache.org_topictransfers.yaml
kubectl delete -f deploy/service_account.yaml
kubectl delete -f deploy/role.yaml
kubectl delete -f deploy/role_binding.yaml
kubectl delete -f deploy/operator.yaml

Clean Up the Persistent Volumes

The persistent volume claims for the Broker and NameServer must be found and deleted manually.

# Find the persistent volume claims
$ kubectl get pvc -n zdevops
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
broker-storage-broker-0-master-0      Bound    pvc-6a78b573-d72a-47ca-9012-5bc888dfcb0f   8Gi        RWO            glusterfs      3m54s
broker-storage-broker-0-replica-1-0   Bound    pvc-4f096942-505d-4e34-ac7f-b871b9f33df3   8Gi        RWO            glusterfs      3m54s
namesrv-storage-name-service-0        Bound    pvc-2c45a77e-3ca1-4eab-bb57-8374aa9068d3   1Gi        RWO            glusterfs      3m54s

# Delete the persistent volume claims
kubectl delete pvc namesrv-storage-name-service-0 -n zdevops
kubectl delete pvc broker-storage-broker-0-master-0 -n zdevops
kubectl delete pvc broker-storage-broker-0-replica-1-0 -n zdevops
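
If there are many claims, a name-pattern match saves typing. A small sketch (GNU xargs assumed); double-check the list the grep produces before piping it into delete:

# Delete all broker-storage and namesrv-storage claims in one go
kubectl -n zdevops get pvc -o name | grep -E 'broker-storage|namesrv-storage' \
  | xargs -r kubectl -n zdevops delete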

Scaling Out the NameServer

If the current name service cluster scale does not meet your needs, you can simply use RocketMQ-Operator to scale the name service cluster up or down.

Scaling the name service requires writing and applying a separate resource manifest; refer to the official Name Server Cluster Scale example and adapt it to the rocketmq-operator configuration of your own environment.

Notice: do not change the replica count directly on the deployed resources; direct changes do not take effect and will be reverted by the Operator.

Edit the NameServer scaling manifest, rocketmq_v1alpha1_nameservice_cr.yaml:

apiVersion: rocketmq.apache.org/v1alpha1
kind: NameService
metadata:
  name: name-service
  namespace: zdevops
spec:
  size: 2
  nameServiceImage: registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0
  imagePullPolicy: Always
  hostNetwork: false
  dnsPolicy: ClusterFirst
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1024Mi"
      cpu: "500m"
  storageMode: StorageClass
  hostPath: /data/rocketmq/nameserver
  volumeClaimTemplates:
    - metadata:
        name: namesrv-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: glusterfs
        resources:
          requests:
            storage: 1Gi

Apply the scaling operation:

kubectl apply -f rocketmq/cluster/rocketmq_v1alpha1_nameservice_cr.yaml

Verify the StatefulSet:

$ kubectl get sts name-service -o wide -n zdevops
NAME           READY   AGE   CONTAINERS     IMAGES
name-service   2/2     16m   name-service   registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Verify the Pods:

$ kubectl get pods -o wide -n zdevops
NAME                                 READY   STATUS    RESTARTS   AGE    IP               NODE              NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0          18m    10.233.87.117    ks-k8s-master-1   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0          43s    10.233.117.99    ks-k8s-master-0   <none>           <none>
console-8d685798f-hnmvg              1/1     Running   0          18m    10.233.116.113   ks-k8s-master-2   <none>           <none>
name-service-0                       1/1     Running   0          18m    10.233.116.114   ks-k8s-master-2   <none>           <none>
name-service-1                       1/1     Running   0          110s   10.233.87.120    ks-k8s-master-1   <none>           <none>
rocketmq-operator-6db8ccc685-5hkk8   1/1     Running   0          18m    10.233.116.112   ks-k8s-master-2   <none>           <none>

Special Notes

Be very cautious when scaling out the NameServer. In actual testing, scaling the NameServer caused every Broker Master other than Broker-0's Master, along with all Slaves, to be rebuilt. According to the official documentation, the Operator notifies all Brokers to update their name service list parameters so that they can register with the new NameServer Service.

Meanwhile, under the allowRestart: true policy, Brokers are updated gradually, so the update is not perceived by producer and consumer clients; in theory the business is unaffected (not actually tested).

However, after all Broker Masters and Slaves were rebuilt, the cluster node information was unstable when checking cluster status: sometimes 3 nodes were visible, sometimes 4.

Therefore, in production it is best to set the NameServer replica count to 2 or 3 at initial deployment and avoid scaling it later, unless you can handle all the consequences scaling causes.

Scaling Out the Broker

Typically, as the business grows, the existing Broker cluster scale may no longer meet your needs. You can simply use RocketMQ-Operator to upgrade and scale out the Broker cluster.

Scaling the Broker requires writing and applying a separate resource manifest; refer to the official Broker Cluster Scale example and adapt it to the rocketmq-operator configuration of your own environment.

Edit the Broker scaling manifest, rocketmq_v1alpha1_broker_cr.yaml:

apiVersion: rocketmq.apache.org/v1alpha1
kind: Broker
metadata:
  name: broker
  namespace: zdevops
spec:
  size: 2
  nameServers: "name-server-service.zdevops:9876"
  replicaPerGroup: 1
  brokerImage: registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
  imagePullPolicy: Always
  resources:
    requests:
      memory: "2048Mi"
      cpu: "250m"
    limits:
      memory: "12288Mi"
      cpu: "500m"
  allowRestart: true
  storageMode: StorageClass
  hostPath: /data/rocketmq/broker
  # scalePodName is [Broker name]-[broker group number]-master-0
  scalePodName: broker-0-master-0
  env:
    - name: BROKER_MEM
      valueFrom:
        configMapKeyRef:
          name: broker-config
          key: BROKER_MEM
  volumes:
    - name: broker-config
      configMap:
        name: broker-config
        items:
          - key: broker-common.conf
            path: broker-common.conf
  volumeClaimTemplates:
    - metadata:
        name: broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: glusterfs
        resources:
          requests:
            storage: 8Gi

Notice: pay attention to the key field scalePodName: broker-0-master-0.

It selects the source Broker pod, from which old metadata such as topics and subscription information will be transferred to the newly created Brokers. A quick way to list candidate pods is shown below.
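
To pick a valid source pod, list the running Broker pods first; this sketch assumes the Operator labels Broker pods with broker_cr set to the Broker CR name:

# List Broker pods managed by the "broker" CR
kubectl -n zdevops get pods -l broker_cr=broker -o wide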

Apply the Broker scaling:

kubectl apply -f rocketmq/cluster/rocketmq_v1alpha1_broker_cr.yaml
Notice: on success a new Broker Pod group is deployed, and the Operator copies the metadata from the source Broker Pod into the newly created Broker Pod before starting the new Broker, so the new Broker reloads the existing topics and subscription information.

Verify the StatefulSet:

$ kubectl get sts -o wide -n zdevops
NAME                 READY   AGE   CONTAINERS     IMAGES
broker-0-master      1/1     43m   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-0-replica-1   1/1     43m   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-1-master      1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
broker-1-replica-1   1/1     27s   broker         registry.zdevops.com.cn/apacherocketmq/rocketmq-broker:4.9.4-alpine-operator-0.3.0
name-service         2/2     43m   name-service   registry.zdevops.com.cn/apacherocketmq/rocketmq-nameserver:4.9.4-alpine-operator-0.3.0

Verify the Pods:

$ kubectl get pods -o wide -n zdevops
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0          44m   10.233.87.117    ks-k8s-master-1   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0          26m   10.233.117.99    ks-k8s-master-0   <none>           <none>
broker-1-master-0                    1/1     Running   0          72s   10.233.116.117   ks-k8s-master-2   <none>           <none>
broker-1-replica-1-0                 1/1     Running   0          72s   10.233.117.100   ks-k8s-master-0   <none>           <none>
console-8d685798f-hnmvg              1/1     Running   0          44m   10.233.116.113   ks-k8s-master-2   <none>           <none>
name-service-0                       1/1     Running   0          44m   10.233.116.114   ks-k8s-master-2   <none>           <none>
name-service-1                       1/1     Running   0          27m   10.233.87.120    ks-k8s-master-1   <none>           <none>
rocketmq-operator-6db8ccc685-5hkk8   1/1     Running   0          44m   10.233.116.112   ks-k8s-master-2   <none>           <none>

Verify in the KubeSphere console:

Verify in the RocketMQ console:

Common Problems

The gcc compiler toolchain is not installed

Error message:

[root@zdevops-master rocketmq-operator]# make docker-build IMG=${IMAGE_URL}
/data/k8s-yaml/rocketmq-operator/bin/controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths="./..." output:dir=deploy output:crd:artifacts:config=deploy/crds
head -n 14 deploy/role_binding.yaml > deploy/role.yaml.bak
cat deploy/role.yaml >> deploy/role.yaml.bak
rm deploy/role.yaml && mv deploy/role.yaml.bak deploy/role.yaml
/data/k8s-yaml/rocketmq-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
/usr/local/go/src/net/cgo_linux.go:12:8: no such package located
Error: not all generators ran successfully
run `controller-gen object:headerFile=hack/boilerplate.go.txt paths=./... -w` to see all available markers, or `controller-gen object:headerFile=hack/boilerplate.go.txt paths=./... -h` for usage
make: *** [generate] Error 1

Solution:

$ yum install gcc

go mod errors

Error message:

# Running make docker-build IMG=${IMAGE_URL} fails
go: creating new go.mod: module tmp
Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0
go get: added sigs.k8s.io/controller-tools v0.7.0
/data/build/rocketmq-operator/bin/controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths="./..." output:dir=deploy output:crd:artifacts:config=deploy/crds
Error: err: exit status 1: stderr: go: github.com/google/uuid@v1.1.2: missing go.sum entry; to add it:
        go mod download github.com/google/uuid
Usage:
  controller-gen [flags]

  ......

output rules (optionally as output:<generator>:...)

+output:artifacts[:code=<string>],config=<string>  package  outputs artifacts to different locations, depending on whether they're package-associated or not.
+output:dir=<string>                               package  outputs each artifact to the given directory, regardless of if it's package-associated or not.
+output:none                                       package  skips outputting anything.
+output:stdout                                     package  outputs everything to standard-out, with no separation.

run `controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths=./... output:dir=deploy output:crd:artifacts:config=deploy/crds -w` to see all available markers, or `controller-gen rbac:roleName=rocketmq-operator crd:generateEmbeddedObjectMeta=true webhook paths=./... output:dir=deploy output:crd:artifacts:config=deploy/crds -h` for usage
make: *** [manifests] Error 1

Solution:

go mod tidy

Conclusion

This article is only an introductory look at deploying RocketMQ on the K8s platform in single Master mode and in multi Master multi Slave (asynchronous replication) mode.

In production you still need to optimize for your actual environment, for example adjusting the number of Brokers in the cluster, the allocation of Masters and Slaves, performance tuning, and configuration optimization.