On k8s: Installing a RabbitMQ Cluster with Helm


Preface

Recently, while building microservices on k8s, I found myself manually editing the pod name, pod image, svc name, ingress tls, and so on in the yaml files, which is very tedious. With helm, things are different: helm is the package manager for k8s, much like apt-get on ubuntu or yum on centos, and installing packages with it is very convenient. Below I walk through installing rabbitmq with helm.

Prerequisites

  • Install k8s
    I am using Alibaba Cloud's managed Kubernetes service (ACK).
  • Install the k8s client: kubectl
    kubectl installation guide
  • Install the helm client
    Helm installation guide
  • Configure helm repo sources
    Below are the three sources I added: stable, bitnami, and ali (see the note after this list about refreshing the index)

    helm repo add stable https://charts.helm.sh/stable
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo add ali https://apphub.aliyuncs.com/stable/

    Check the repos that have already been added:

    $ helm repo list                                                    
    NAME       URL
    stable     https://charts.helm.sh/stable
    bitnami    https://charts.bitnami.com/bitnami
    ali        https://apphub.aliyuncs.com/stable/
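
After adding the repos, it is worth refreshing the local chart index so that later helm search results reflect the latest chart versions:

helm repo update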

Ways to install RabbitMQ

There are many ways to install rabbitmq; here are a few common approaches:

  • Install rabbitmq on CentOS 7/8 (for example on an Alibaba Cloud ECS instance)
  • Install rabbitmq on k8s following the official documentation
  • Install rabbitmq with helm, using charts maintained by the community

Installing RabbitMQ with Helm

Here we use helm to install rabbitmq. First, let's check whether a rabbitmq chart is available in our repos:

$ helm search repo rabbitmq
NAME                                   CHART VERSION    APP VERSION    DESCRIPTION
ali/prometheus-rabbitmq-exporter       0.5.5            v0.29.0        Rabbitmq metrics exporter for prometheus
ali/rabbitmq                           6.18.2           3.8.2          DEPRECATED Open source message broker software ...
ali/rabbitmq-ha                        1.47.0           3.8.7          Highly available RabbitMQ cluster, the open sou...
bitnami/rabbitmq                       8.16.1           3.8.18         Open source message broker software that implem...
stable/prometheus-rabbitmq-exporter    0.5.6            v0.29.0        DEPRECATED Rabbitmq metrics exporter for promet...
stable/rabbitmq                        6.18.2           3.8.2          DEPRECATED Open source message broker software ...
stable/rabbitmq-ha                     1.47.1           3.8.7          DEPRECATED - Highly available RabbitMQ cluster,...

You can see that different repos provide different versions of the rabbitmq chart. We will use stable/rabbitmq, chart version 6.18.2, app version 3.8.2.

Next, pull the stable/rabbitmq chart. The download is a .tgz archive, so extract it:

$ helm pull stable/rabbitmq
$ tar zxf rabbitmq-6.18.2.tgz
$ cd rabbitmq && ls -ls
total 168
 8 -rwxr-xr-x   1 zhangwei  staff    435  1  1  1970 Chart.yaml
72 -rwxr-xr-x   1 zhangwei  staff  34706  1  1  1970 README.md
 0 drwxr-xr-x   5 zhangwei  staff    160  6 30 14:43 ci
 0 drwxr-xr-x  19 zhangwei  staff    608  6 30 14:43 templates
40 -rwxr-xr-x   1 zhangwei  staff  19401  1  1  1970 values-production.yaml
 8 -rwxr-xr-x   1 zhangwei  staff   2854  1  1  1970 values.schema.json
40 -rwxr-xr-x   1 zhangwei  staff  18986  1  1  1970 values.yaml
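
As an aside, helm can fetch and unpack the chart in one step; an equivalent one-liner (assuming Helm 3) is:

helm pull stable/rabbitmq --version 6.18.2 --untar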

Then let's look at the parameters that values.yaml exposes; they can also be viewed with this command:

helm show values stable/rabbitmq
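
If you would rather not edit the file inside the unpacked chart, you can also redirect that output into a separate overrides file and pass it to helm install later; the file name here is just a placeholder:

helm show values stable/rabbitmq > my-values.yaml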

values.yaml

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass

## Bitnami RabbitMQ image version
## ref: https://hub.docker.com/r/bitnami/rabbitmq/tags/
##
image:
  registry: docker.io
  repository: bitnami/rabbitmq
  tag: 3.8.2-debian-10-r30

  ## set to true if you would like to see extra information on logs
  ## it turns BASH and NAMI debugging in minideb
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  debug: false

  ## Specify a imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

## String to partially override rabbitmq.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override rabbitmq.fullname template
##
# fullnameOverride:

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## does your cluster have rbac enabled? assume yes by default
rbacEnabled: true

## RabbitMQ should be initialized one by one when building cluster for the first time.
## Therefore, the default value of podManagementPolicy is 'OrderedReady'
## Once the RabbitMQ participates in the cluster, it waits for a response from another
## RabbitMQ in the same cluster at reboot, except the last RabbitMQ of the same cluster.
## If the cluster exits gracefully, you do not need to change the podManagementPolicy
## because the first RabbitMQ of the statefulset always will be last of the cluster.
## However if the last RabbitMQ of the cluster is not the first RabbitMQ due to a failure,
## you must change podManagementPolicy to 'Parallel'.
## ref : https://www.rabbitmq.com/clustering.html#restarting
##
podManagementPolicy: OrderedReady

## section of specific values for rabbitmq
rabbitmq:
  ## RabbitMQ application username
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  username: user

  ## RabbitMQ application password
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # password:
  # existingPasswordSecret: name-of-existing-secret

  ## Erlang cookie to determine whether different nodes are allowed to communicate with each other
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # erlangCookie:
  # existingErlangSecret: name-of-existing-secret

  ## Node name to cluster with. e.g.: `clusternode@hostname`
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # rabbitmqClusterNodeName:

  ## Value for the RABBITMQ_LOGS environment variable
  ## ref: https://www.rabbitmq.com/logging.html#log-file-location
  ##
  logs: '-'

  ## RabbitMQ Max File Descriptors
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ## ref: https://www.rabbitmq.com/install-debian.html#kernel-resource-limits
  ##
  setUlimitNofiles: true
  ulimitNofiles: '65536'

  ## RabbitMQ maximum available scheduler threads and online scheduler threads
  ## ref: https://hamidreza-s.github.io/erlang/scheduling/real-time/preemptive/migration/2016/02/09/erlang-scheduler-details.html#scheduler-threads
  ##
  maxAvailableSchedulers: 2
  onlineSchedulers: 1

  ## Plugins to enable
  plugins: "rabbitmq_management rabbitmq_peer_discovery_k8s"

  ## Extra plugins to enable
  ## Use this instead of `plugins` to add new plugins
  extraPlugins: "rabbitmq_auth_backend_ldap"

  ## Clustering settings
  clustering:
    address_type: hostname
    k8s_domain: cluster.local
    ## Rebalance master for queues in cluster when new replica is created
    ## ref: https://www.rabbitmq.com/rabbitmq-queues.8.html#rebalance
    rebalance: false

  loadDefinition:
    enabled: false
    secretName: load-definition

  ## environment variables to configure rabbitmq
  ## ref: https://www.rabbitmq.com/configure.html#customise-environment
  env: {}

  ## Configuration file content: required cluster configuration
  ## Do not override unless you know what you are doing. To add more configuration, use `extraConfiguration` of `advancedConfiguration` instead
  configuration: |-
    ## Clustering
    cluster_formation.peer_discovery_backend  = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.node_cleanup.interval = 10
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    # queue master locator
    queue_master_locator=min-masters
    # enable guest user
    loopback_users.guest = false

  ## Configuration file content: extra configuration
  ## Use this instead of `configuration` to add more configuration
  extraConfiguration: |-
    #disk_free_limit.absolute = 50MB
    #management.load_definitions = /app/load_definition.json

  ## Configuration file content: advanced configuration
  ## Use this as additional configuration in classic config format (Erlang term configuration format)
  ##
  ## If you set LDAP with TLS/SSL enabled and you are using self-signed certificates, uncomment these lines.
  ## advancedConfiguration: |-
  ##   [{
  ##     rabbitmq_auth_backend_ldap,
  ##     [{
  ##         ssl_options,
  ##         [{
  ##             verify, verify_none
  ##         }, {
  ##             fail_if_no_peer_cert,
  ##             false
  ##         }]
  ##     ]}
  ##   }].
  ##
  advancedConfiguration: |-

  ## Enable encryption to rabbitmq
  ## ref: https://www.rabbitmq.com/ssl.html
  ##
  tls:
    enabled: false
    failIfNoPeerCert: true
    sslOptionsVerify: verify_peer
    caCertificate: |-
    serverCertificate: |-
    serverKey: |-
    # existingSecret: name-of-existing-secret-to-rabbitmq

## LDAP configuration
##
ldap:
  enabled: false
  server: ""
  port: "389"
  user_dn_pattern: cn=${username},dc=example,dc=org
  tls:
    # If you enabled TLS/SSL you can set advanced options using the advancedConfiguration parameter.
    enabled: false

## Kubernetes service type
service:
  type: ClusterIP
  ## Node port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # nodePort: 30672

  ## Set the LoadBalancerIP
  ##
  # loadBalancerIP:

  ## Node port Tls
  ##
  # nodeTlsPort: 30671

  ## Amqp port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  port: 5672

  ## Amqp Tls port
  ##
  tlsPort: 5671

  ## Dist port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  distPort: 25672

  ## RabbitMQ Manager port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  managerPort: 15672

  ## Service annotations
  annotations: {}
    # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0

  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24

  ## Extra ports to expose
  # extraPorts:

  ## Extra ports to be included in container spec, primarily informational
  # extraContainerPorts:

# Additional pod labels to apply
podLabels: {}

## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001
  extra: {}

persistence:
  ## this enables PVC templates that will create one per pod
  enabled: true

  ## rabbitmq data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce

  ## Existing PersistentVolumeClaims
  ## The value is evaluated as a template
  ## So, for example, the name can depend on .Release or .Chart
  # existingClaim: ""

  # If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well.
  size: 8Gi

  # persistence directory, maps to the rabbitmq data directory
  path: /opt/bitnami/rabbitmq/var/lib/rabbitmq

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}

networkPolicy:
  ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
  ##
  enabled: false

  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the ports RabbitMQ is listening
  ## on. When true, RabbitMQ will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true

  ## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
  ##
  # additionalRules:
  #  - matchLabels:
  #    - role: frontend
  #  - matchExpressions:
  #    - key: role
  #      operator: In
  #      values:
  #        - frontend

## Replica count, set to 1 to provide a default available cluster
replicas: 1

## Pod priority
## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClassName: ""

## updateStrategy for RabbitMQ statefulset
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
  type: RollingUpdate

## Node labels and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
tolerations: []
affinity: {}
podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1
## annotations for rabbitmq pods
podAnnotations: {}

## Configure the ingress resource that allows you to access the
## WordPress installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## Set to true to enable ingress record generation
  enabled: false

  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  ## hostName: foo.bar.com
  path: /

  ## Set this to true in order to enable TLS on the ingress record
  ## A side effect of this will be that the backend wordpress service will be connected at port 443
  tls: false

  ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
  tlsSecret: myTlsSecret

  ## Ingress annotations done as key:value pairs
  ## If you're using kube-lego, you will want to add:
  ## kubernetes.io/tls-acme: true
  ##
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
  annotations: {}
  #  kubernetes.io/ingress.class: nginx
  #  kubernetes.io/tls-acme: true

## The following settings are to configure the frequency of the lifeness and readiness probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 120
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 6
  successThreshold: 1

readinessProbe:
  enabled: true
  initialDelaySeconds: 10
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 3
  successThreshold: 1

metrics:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/rabbitmq-exporter
    tag: 0.29.0-debian-10-r28
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  ## environment variables to configure rabbitmq_exporter
  ## ref: https://github.com/kbudde/rabbitmq_exporter#configuration
  env: {}
  ## Metrics exporter port
  port: 9419
  ## RabbitMQ address to connect to (from the same Pod, usually the local loopback address).
  ## If your Kubernetes cluster does not support IPv6, you can change to `127.0.0.1` in order to force IPv4.
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#networking
  rabbitmqAddress: localhost
  ## Comma-separated list of extended scraping capabilities supported by the target RabbitMQ server
  ## ref: https://github.com/kbudde/rabbitmq_exporter#extended-rabbitmq-capabilities
  capabilities: "bert,no_sort"
  resources: {}
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9419"

  livenessProbe:
    enabled: true
    initialDelaySeconds: 15
    timeoutSeconds: 5
    periodSeconds: 30
    failureThreshold: 6
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    timeoutSeconds: 5
    periodSeconds: 30
    failureThreshold: 3
    successThreshold: 1

  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##      https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
  serviceMonitor:
    ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
    enabled: false
    ## Specify the namespace in which the serviceMonitor resource will be created
    # namespace: ""
    ## Specify the interval at which metrics should be scraped
    interval: 30s
    ## Specify the timeout after which the scrape is ended
    # scrapeTimeout: 30s
    ## Specify Metric Relabellings to add to the scrape endpoint
    # relabellings:
    ## Specify honorLabels parameter to add the scrape endpoint
    honorLabels: false
    ## Specify the release for ServiceMonitor. Sometimes it should be custom for prometheus operator to work
    # release: ""
    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    additionalLabels: {}

  ## Custom PrometheusRule to be defined
  ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
  ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
  prometheusRule:
    enabled: false
    additionalLabels: {}
    namespace: ""
    rules: []
      ## List of rules, used as template by Helm.
      ## These are just examples rules inspired from https://awesome-prometheus-alerts.grep.to/rules.html
      ## Please adapt them to your needs.
      ## Make sure to constraint the rules to the current rabbitmq service.
      ## Also make sure to escape what looks like helm template.
      # - alert: RabbitmqDown
      #   expr: rabbitmq_up{service="{{ template"rabbitmq.fullname".}}"} == 0
      #   for: 5m
      #   labels:
      #     severity: error
      #   annotations:
      #     summary: Rabbitmq down (instance {{ "{{ $labels.instance}}" }})
      #     description: RabbitMQ node down

      # - alert: ClusterDown
      #   expr: |
      #     sum(rabbitmq_running{service="{{ template"rabbitmq.fullname".}}"})
      #     < {{.Values.replicas}}
      #   for: 5m
      #   labels:
      #     severity: error
      #   annotations:
      #     summary: Cluster down (instance {{ "{{ $labels.instance}}" }})
      #     description: |
      #         Less than {{.Values.replicas}} nodes running in RabbitMQ cluster
      #         VALUE = {{"{{ $value}}" }}

      # - alert: ClusterPartition
      #   expr: rabbitmq_partitions{service="{{ template"rabbitmq.fullname".}}"} > 0
      #   for: 5m
      #   labels:
      #     severity: error
      #   annotations:
      #     summary: Cluster partition (instance {{ "{{ $labels.instance}}" }})
      #     description: |
      #         Cluster partition
      #         VALUE = {{"{{ $value}}" }}

      # - alert: OutOfMemory
      #   expr: |
      #     rabbitmq_node_mem_used{service="{{ template"rabbitmq.fullname".}}"}
      #     / rabbitmq_node_mem_limit{service="{{ template"rabbitmq.fullname".}}"}
      #     * 100 > 90
      #   for: 5m
      #   labels:
      #     severity: warning
      #   annotations:
      #     summary: Out of memory (instance {{ "{{ $labels.instance}}" }})
      #     description: |
      #         Memory available for RabbmitMQ is low (< 10%)\n  VALUE = {{"{{ $value}}" }}
      #         LABELS: {{"{{ $labels}}" }}

      # - alert: TooManyConnections
      #   expr: rabbitmq_connectionsTotal{service="{{ template"rabbitmq.fullname".}}"} > 1000
      #   for: 5m
      #   labels:
      #     severity: warning
      #   annotations:
      #     summary: Too many connections (instance {{ "{{ $labels.instance}}" }})
      #     description: |
      #         RabbitMQ instance has too many connections (> 1000)
      #         VALUE = {{"{{ $value}}" }}\n  LABELS: {{"{{ $labels}}" }}

##
## Init containers parameters:
## volumePermissions: Change the owner of the persist volume mountpoint to RunAsUser:fsGroup
##
volumePermissions:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/minideb
    tag: buster
    pullPolicy: Always
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  resources: {}

## forceBoot: executes 'rabbitmqctl force_boot' to force boot cluster shut down unexpectedly in an
## unknown order.
## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot
##
forceBoot:
  enabled: false

## Optionally specify extra secrets to be created by the chart.
## This can be useful when combined with load_definitions to automatically create the secret containing the definitions to be loaded.
##
extraSecrets: {}
  # load-definition:
  #   load_definition.json: |
  #     {
  #       ...
  #     }

The values.yaml file is the configuration that the helm chart exposes; adjust it as needed. Since we are running on Alibaba Cloud, storage uses alicloud-disk-ssd; note that alicloud-disk-ssd requires a minimum disk size of 20Gi:

persistence:
  storageClass: "alicloud-disk-ssd"
  size: 20Gi
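
You can check which StorageClasses your cluster actually offers before picking one (the names vary by provider; alicloud-disk-ssd is specific to ACK):

kubectl get storageclass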

Since we want to reach it through a domain name, enable ingress and set the hostname:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hostName: rabbitmq.baidu.com

Finally, to access it over https, enable tls:

tls: true
tlsSecret: tls-secret-name

Note that because I use the cert-manager.io/cluster-issuer annotation (cert-manager is already set up in my k8s cluster), the TLS certificate is issued automatically, which is very convenient. If you are interested, see "Using cert-manager to request free HTTPS certificates".

annotations:
    cert-manager.io/cluster-issuer: your-cert-manager-name
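
If you are not running cert-manager, you can achieve the same thing by creating the TLS secret manually from an existing certificate and key (the file names below are placeholders):

kubectl create secret tls tls-secret-name --cert=tls.crt --key=tls.key --namespace rabbit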

The final, complete values.yaml looks like this:

values.yaml

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: your-cert-manager-name
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
  hostName: rabbitmq.baidu.com
  tls: true
  tlsSecret: tls-secret-name
persistence:
  storageClass: "alicloud-disk-ssd"
  size: 20Gi
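
Note that the chart's replicas parameter defaults to 1, so these overrides give you a single-node deployment; for an actual multi-node cluster you would also add something like replicas: 3 to this file. Either way, it is worth rendering the templates with your overrides first to confirm they are picked up; helm template does this locally without touching the cluster:

helm template test-rabbitmq stable/rabbitmq -f values.yaml --namespace rabbit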

Now let's install rabbitmq by running the following commands:

# Create the rabbit namespace
kubectl create namespace rabbit
# Install the rabbitmq cluster
helm install -f values.yaml test-rabbitmq stable/rabbitmq --namespace rabbit

Below is the output from the installation:

WARNING: This chart is deprecated
NAME: test-rabbitmq
LAST DEPLOYED: Fri Jul  2 09:56:10 2021
NAMESPACE: rabbit
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
This Helm chart is deprecated

Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained RabbitMQ Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/).

The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keeping here these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/<chart>` instead of `stable/<chart>`)

$ helm repo add bitnami https://charts.bitnami.com/bi…
$ helm install my-release bitnami/<chart> # Helm 3
$ helm install –name my-release bitnami/<chart> # Helm 2


To update an exisiting _stable_ deployment with a chart hosted in the bitnami repository you can execute

$ helm upgrade my-release bitnami/<chart>


Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion.

** Please be patient while the chart is being deployed **

Credentials:

    Username      : user
    echo "Password      : $(kubectl get secret --namespace rabbit test-rabbitmq -o jsonpath="{.data.rabbitmq-password}"| base64 --decode)"
    echo "ErLang Cookie : $(kubectl get secret --namespace rabbit test-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}"| base64 --decode)"

RabbitMQ can be accessed within the cluster on port  at test-rabbitmq.rabbit.svc.cluster.local

To access for outside the cluster, perform the following steps:

To Access the RabbitMQ AMQP port:

    kubectl port-forward --namespace rabbit svc/test-rabbitmq 5672:5672
    echo "URL : amqp://127.0.0.1:5672/"

To Access the RabbitMQ Management interface:

    kubectl port-forward --namespace rabbit svc/test-rabbitmq 15672:15672
    echo "URL : http://127.0.0.1:15672/"
$ kubectl get all -n rabbit
NAME                  READY   STATUS    RESTARTS   AGE
pod/test-rabbitmq-0   1/1     Running   0          3h56m

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                 AGE
service/test-rabbitmq            ClusterIP   172.21.15.81   <none>        4369/TCP,5672/TCP,25672/TCP,15672/TCP   3h56m
service/test-rabbitmq-headless   ClusterIP   None           <none>        4369/TCP,5672/TCP,25672/TCP,15672/TCP   3h56m

NAME                             READY   AGE
statefulset.apps/test-rabbitmq   1/1     3h56m

After a short wait you will see that the pod, svc, ingress, pvc, pv, and statefulset have all been created, and rabbitmq can be accessed!
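
To check the ingress and storage objects specifically, and the release status as helm sees it:

kubectl get ingress,pvc -n rabbit
helm status test-rabbitmq -n rabbit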

Troubleshooting

  1. Accessing rabbitmq returns a 503
     If the URL you configured includes a path, e.g. baidu.com/rabbitmq, you must configure it as below for access to work; see the referenced article:

     rabbitmq:
       extraConfiguration: |-
         management.path_prefix = /rabbitmq/
     ingress:
     ...
       hostName: baidu.com
       path: /rabbitmq/
     ...
  2. kubectl describe svc your-service-name -n rabbit shows an empty service endpoint
  • Cause: the old pvc was not deleted; see the referenced article for details
  • Fix:

    kubectl get pvc
    kubectl delete pvc <name>
  3. running "VolumeBinding" filter plugin for pod "test-rabbitmq-0": pod has unbound immediate PersistentVolumeClaims
  • Cause: storageClass must be set, and Alibaba Cloud disks require at least 20Gi
  • Fix:

    persistence:
      storageClass: "alicloud-disk-ssd"
      size: 20Gi
  4. PersistentVolumeClaim "data-test-rabbitmq-0" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
  • Cause: the capacity of a bound pvc cannot be changed in place
  • Fix: delete the pvc and recreate it (see the sketch after this list)
  5. Warning ProvisioningFailed 6s (x4 over 14m) diskplugin.csi.alibabacloud.com_iZbp1d2cbgi4jt9oty4m9iZ_3408e051-98e8-4295-8a21-7f1af0807958 (combined from similar events): failed to provision volume with StorageClass "alicloud-disk-ssd": rpc error: code = Internal desc = SDK.ServerError
     ErrorCode: InvalidAccountStatus.NotEnoughBalance
  • Cause: an Alibaba Cloud account needs a balance of at least 100 CNY before an SSD cloud disk can be created
  • Fix: top up your Alibaba Cloud account
  6. Logging in to rabbitmq returns a 401
  • Cause: wrong password
  • Fix: run the following command to retrieve the password

    echo "Password      : $(kubectl get secret --namespace rabbit test-rabbitmq -o jsonpath="{.data.rabbitmq-password}"| base64 --decode)"

Summary

1. kubectl needs the k8s connection configuration before it can be used, while helm picks up that connection info automatically; by default it lives at ~/.kube/config.
2. helm 3 has removed tiller and the helm init command, so there is no point discussing helm 2 anymore.
3. If something goes wrong during or after installation, you can delete the release from the Alibaba Cloud ACK console under Applications -> Helm releases (or from the CLI; see the sketch after this list).
4. There is no need to run rabbitmq-plugins enable rabbitmq_management, because the management plugin is enabled by default when rabbitmq is installed via helm.
5. If Alibaba Cloud's managed rabbitmq weren't so expensive, I wouldn't be willing to run it myself. Alas, needs must.
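
For point 3, the CLI equivalent of removing the release from the console is:

helm uninstall test-rabbitmq -n rabbit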

References

Deploying a RabbitMQ cluster with Helm
Helm documentation
How does kubectl exec work?
Helm tutorial
