Introduction

In most cases, a single-master cluster is sufficient for development and test environments. For production, however, you need to consider high availability. If key components such as kube-apiserver, kube-scheduler, and kube-controller-manager all run on the same master node, Kubernetes and KubeSphere become unavailable as soon as that node goes down. You therefore need to put a load balancer in front of multiple master nodes to build a highly available cluster. You can use any cloud load balancer or any hardware load balancer (for example F5), or build one yourself with Keepalived plus HAProxy, or with Nginx.
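If you go the Keepalived + HAProxy route, the sketch below shows a minimal Keepalived setup that keeps a virtual IP floating between two dedicated load-balancer machines; 192.168.1.20 is the address used later as the control-plane endpoint in this article, while the hostname lb1, the interface name eth0, the virtual_router_id, and the password are placeholders you would adapt to your environment.

# Minimal sketch, assuming keepalived is installed on both load-balancer machines.
root@lb1:~# sudo tee /etc/keepalived/keepalived.conf <<-'EOF'
vrrp_instance VI_1 {
    state MASTER              # use BACKUP on the second load-balancer machine
    interface eth0
    virtual_router_id 51
    priority 100              # use a lower priority (e.g. 90) on the BACKUP machine
    authentication {
        auth_type PASS
        auth_pass K8SHA_PASS
    }
    virtual_ipaddress {
        192.168.1.20          # the VIP used as the control-plane endpoint below
    }
}
EOF
root@lb1:~# sudo systemctl restart keepalived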

Architecture

Before you start, make sure you have prepared the required Linux machines: at least three of them act as master nodes and the rest as worker nodes. The setup in this article uses 14 machines in total, 3 masters and 11 workers; their private IP addresses and roles are listed in the config-sample.yaml file shown below.

Configure the Load Balancer

You must create a load balancer in your environment to listen on the key ports (on some cloud platforms these are called listeners). It is recommended to listen on the ports in the table below.

Service      Protocol   Port
apiserver    TCP        6443
ks-console   TCP        30880
http         TCP        80
https        TCP        443
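The HAProxy example later in this article only proxies the apiserver port 6443. If you also want the load balancer to front the KubeSphere console, a snippet along the following lines could be appended to the same HAProxy configuration. This is only a sketch: 30880 is a NodePort, so the backend may point at any cluster nodes; the master IPs are used here merely as an example, and similar blocks can be added for 80/443 if you expose ingress through the load balancer.

root@lb1:~# sudo tee -a /etc/haproxy/haproxy.cfg <<-'EOF'
frontend ks-console
    bind *:30880
    mode tcp
    default_backend ks-console
backend ks-console
    mode tcp
    balance roundrobin
    server master1 192.168.1.10:30880 check
    server master2 192.168.1.11:30880 check
    server master3 192.168.1.12:30880 check
EOF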

Configure Passwordless SSH

root@hello:~# ssh-keygen
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.10
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.11
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.12
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.13
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.14
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.15
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.16
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.51
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.52
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.53
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.54
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.55
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.56
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.57
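If you prefer not to repeat the command fourteen times, an equivalent loop over the same IP ranges works as well; each node still prompts for its root password once.

root@hello:~# for ip in 192.168.1.{10..16} 192.168.1.{51..57}; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip; done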

Download KubeKey

KubeKey is a next-generation installer that makes it simple, fast, and flexible to install Kubernetes and KubeSphere.

root@cby:~# export KKZONE=cn
root@cby:~# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -

Downloading kubekey v1.2.0 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v1.2.0/kubekey-v1.2.0-linux-amd64.tar.gz ...

Kubekey v1.2.0 Download Complete!
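Before going further, you can sanity-check that the kk binary was extracted into the current directory; kk also ships a version subcommand if you want to confirm you got v1.2.0 (the exact output format may differ between releases).

root@cby:~# ls -lh kk
root@cby:~# ./kk version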

Add Execute Permission to kk and Generate a Configuration File

root@cby:~# chmod +x kk
root@cby:~# ./kk create config --with-kubesphere v3.2.0 --with-kubernetes v1.22.1

Deploy KubeSphere and Kubernetes

root@cby:~# vim config-sample.yaml
root@cby:~# cat config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: 192.168.1.10, internalAddress: 192.168.1.10, user: root, password: Cby123..}
  - {name: master2, address: 192.168.1.11, internalAddress: 192.168.1.11, user: root, password: Cby123..}
  - {name: master3, address: 192.168.1.12, internalAddress: 192.168.1.12, user: root, password: Cby123..}
  - {name: node1, address: 192.168.1.13, internalAddress: 192.168.1.13, user: root, password: Cby123..}
  - {name: node2, address: 192.168.1.14, internalAddress: 192.168.1.14, user: root, password: Cby123..}
  - {name: node3, address: 192.168.1.15, internalAddress: 192.168.1.15, user: root, password: Cby123..}
  - {name: node4, address: 192.168.1.16, internalAddress: 192.168.1.16, user: root, password: Cby123..}
  - {name: node5, address: 192.168.1.51, internalAddress: 192.168.1.51, user: root, password: Cby123..}
  - {name: node6, address: 192.168.1.52, internalAddress: 192.168.1.52, user: root, password: Cby123..}
  - {name: node7, address: 192.168.1.53, internalAddress: 192.168.1.53, user: root, password: Cby123..}
  - {name: node8, address: 192.168.1.54, internalAddress: 192.168.1.54, user: root, password: Cby123..}
  - {name: node9, address: 192.168.1.55, internalAddress: 192.168.1.55, user: root, password: Cby123..}
  - {name: node10, address: 192.168.1.56, internalAddress: 192.168.1.56, user: root, password: Cby123..}
  - {name: node11, address: 192.168.1.57, internalAddress: 192.168.1.57, user: root, password: Cby123..}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    master:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
    - node4
    - node5
    - node6
    - node7
    - node8
    - node9
    - node10
    - node11
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: "192.168.1.20"
    port: 6443
  kubernetes:
    version: v1.22.1
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: false
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    containerruntime: docker
    logsidecar:
      enabled: false
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    #   adapter:
    #     resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
root@cby:~#

If HAProxy is used as the load balancer, its configuration looks like this:

frontend kube-apiserver
  bind *:6443
  mode tcp
  option tcplog
  default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.1.10:6443 check
    server kube-apiserver-2 192.168.1.11:6443 check
    server kube-apiserver-3 192.168.1.12:6443 check
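After writing this configuration on the load-balancer machine, it is worth validating the syntax and reloading the service before pointing the installer at it. This assumes a systemd-managed HAProxy package with the default configuration path; adapt as needed for your distribution.

root@lb1:~# haproxy -c -f /etc/haproxy/haproxy.cfg
root@lb1:~# systemctl restart haproxy
root@lb1:~# systemctl status haproxy --no-pager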

Install the Required Dependencies

root@hello:~# bash -x 1.sh
root@hello:~# cat 1.sh
ssh root@192.168.1.10 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.11 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.12 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.13 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.14 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.15 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.16 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.51 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.52 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.53 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.54 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.55 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.56 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.57 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
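As with the SSH keys earlier, the per-host lines can be collapsed into a loop so that the node list lives in one place; this is functionally the same as 1.sh above.

root@hello:~# for ip in 192.168.1.{10..16} 192.168.1.{51..57}; do ssh root@$ip "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"; done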

Start the Installation

After the configuration is complete, run the following command to start the installation:

root@hello:~# ./kk create cluster -f config-sample.yaml
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node1   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node4   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node8   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node11  | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| master1 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| node5   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| master2 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| node2   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node7   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:00 |
| master3 | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node6   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node9   | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
| node10  | y    | y    | y       | y        | y     | y     | y         |        | y          |             |                  | UTC 13:26:01 |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[13:26:06 UTC] Downloading Installation Files
INFO[13:26:06 UTC] Downloading kubeadm ...
--- output omitted ---

Verify the Installation

Run the following command to check the installation logs.

root@cby:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.1.10:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-11-10 10:24:00
#####################################################

root@hello:~# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   30m   v1.22.1
master2   Ready    control-plane,master   29m   v1.22.1
master3   Ready    control-plane,master   29m   v1.22.1
node1     Ready    worker                 29m   v1.22.1
node10    Ready    worker                 29m   v1.22.1
node11    Ready    worker                 29m   v1.22.1
node2     Ready    worker                 29m   v1.22.1
node3     Ready    worker                 29m   v1.22.1
node4     Ready    worker                 29m   v1.22.1
node5     Ready    worker                 29m   v1.22.1
node6     Ready    worker                 29m   v1.22.1
node7     Ready    worker                 30m   v1.22.1
node8     Ready    worker                 30m   v1.22.1
node9     Ready    worker                 29m   v1.22.1
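Because the point of this setup is high availability, it is also worth confirming that kubectl talks to the load-balanced endpoint defined in config-sample.yaml rather than to a single master.

root@hello:~# kubectl cluster-info
# The control-plane URL should point at the load-balanced endpoint
# (lb.kubesphere.local:6443, i.e. 192.168.1.20 as configured above),
# not at an individual master IP.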

Enable Pluggable Components After Installation

Log in to the console as admin. Click Platform in the upper-left corner and select Cluster Management.

Click CRDs and enter clusterconfiguration in the search bar. Click the search result to open its detail page.

Under Custom Resources, click the three-dot menu on the right of ks-installer and select Edit YAML.
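If you prefer the command line over the console, the same ClusterConfiguration resource can be edited directly with kubectl; saving the change has the same effect as Edit YAML in the UI. Either way, the edited file ends up looking like the YAML below, with the components you want switched to enabled: true.

root@cby:~# kubectl -n kubesphere-system edit clusterconfiguration ks-installer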

apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"installer.kubesphere.io/v1alpha1","kind":"ClusterConfiguration","metadata":{"annotations":{},"labels":{"version":"v3.2.0"},"name":"ks-installer","namespace":"kubesphere-system"},"spec":{"alerting":{"enabled":false},"auditing":{"enabled":false},"authentication":{"jwtSecret":""},"common":{"core":{"console":{"enableMultiLogin":true,"port":30880,"type":"NodePort"}},"es":{"basicAuth":{"enabled":false,"password":"","username":""},"elkPrefix":"logstash","externalElasticsearchPort":"","externalElasticsearchUrl":"","logMaxAge":7},"gpu":{"kinds":[{"default":true,"resourceName":"nvidia.com/gpu","resourceType":"GPU"}]},"minio":{"volumeSize":"20Gi"},"monitoring":{"GPUMonitoring":{"enabled":false},"endpoint":"http://prometheus-operated.kubesphere-monitoring-system.svc:9090"},"openldap":{"enabled":false,"volumeSize":"2Gi"},"redis":{"enabled":false,"volumeSize":"2Gi"}},"devops":{"enabled":false,"jenkinsJavaOpts_MaxRAM":"2g","jenkinsJavaOpts_Xms":"512m","jenkinsJavaOpts_Xmx":"512m","jenkinsMemoryLim":"2Gi","jenkinsMemoryReq":"1500Mi","jenkinsVolumeSize":"8Gi"},"etcd":{"endpointIps":"192.168.1.10,192.168.1.11,192.168.1.12","monitoring":false,"port":2379,"tlsEnable":true},"events":{"enabled":false},"kubeedge":{"cloudCore":{"cloudHub":{"advertiseAddress":[""],"nodeLimit":"100"},"cloudhubHttpsPort":"10002","cloudhubPort":"10000","cloudhubQuicPort":"10001","cloudstreamPort":"10003","nodeSelector":{"node-role.kubernetes.io/worker":""},"service":{"cloudhubHttpsNodePort":"30002","cloudhubNodePort":"30000","cloudhubQuicNodePort":"30001","cloudstreamNodePort":"30003","tunnelNodePort":"30004"},"tolerations":[],"tunnelPort":"10004"},"edgeWatcher":{"edgeWatcherAgent":{"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"enabled":false},"logging":{"containerruntime":"docker","enabled":false,"logsidecar":{"enabled":false,"replicas":2}},"metrics_server":{"enabled":false},"monitoring":{"gpu":{"nvidia_dcgm_exporter":{"enabled":false}},"storageClass":""},"multicluster":{"clusterRole":"none"},"network":{"ippool":{"type":"none"},"networkpolicy":{"enabled":false},"topology":{"type":"none"}},"openpitrix":{"store":{"enabled":false}},"persistence":{"storageClass":""},"servicemesh":{"enabled":false}}}
  labels:
    version: v3.2.0
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  authentication:
    jwtSecret: ''
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    es:
      basicAuth:
        enabled: true
        password: ''
        username: ''
      elkPrefix: logstash
      externalElasticsearchPort: ''
      externalElasticsearchUrl: ''
      logMaxAge: 7
    gpu:
      kinds:
        - default: true
          resourceName: nvidia.com/gpu
          resourceType: GPU
    minio:
      volumeSize: 20Gi
    monitoring:
      GPUMonitoring:
        enabled: true
      endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090'
    openldap:
      enabled: true
      volumeSize: 2Gi
    redis:
      enabled: true
      volumeSize: 2Gi
  devops:
    enabled: true
    jenkinsJavaOpts_MaxRAM: 2g
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
  etcd:
    endpointIps: '192.168.1.10,192.168.1.11,192.168.1.12'
    monitoring: false
    port: 2379
    tlsEnable: true
  events:
    enabled: true
  kubeedge:
    cloudCore:
      cloudHub:
        advertiseAddress:
          - ''
        nodeLimit: '100'
      cloudhubHttpsPort: '10002'
      cloudhubPort: '10000'
      cloudhubQuicPort: '10001'
      cloudstreamPort: '10003'
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      service:
        cloudhubHttpsNodePort: '30002'
        cloudhubNodePort: '30000'
        cloudhubQuicNodePort: '30001'
        cloudstreamNodePort: '30003'
        tunnelNodePort: '30004'
      tolerations: []
      tunnelPort: '10004'
    edgeWatcher:
      edgeWatcherAgent:
        nodeSelector:
          node-role.kubernetes.io/worker: ''
        tolerations: []
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      tolerations: []
    enabled: true
  logging:
    containerruntime: docker
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: true
  monitoring:
    gpu:
      nvidia_dcgm_exporter:
        enabled: true
    storageClass: ''
  multicluster:
    clusterRole: none
  network:
    ippool:
      type: weave-scope
    networkpolicy:
      enabled: true
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  persistence:
    storageClass: ''
  servicemesh:
    enabled: true
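After the YAML is saved, ks-installer reconciles the cluster and rolls out the components you switched on. You can follow its progress with the same log command used during installation, and then check that all pods come up.

root@cby:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
root@cby:~# kubectl get pods -A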

Configure the Alibaba Cloud Registry Mirror on All Servers in Batch

root@hello:~# vim 8
root@hello:~# cat 8
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ted9wxpi.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

root@hello:~# vim 7
root@hello:~# cat 7
scp 8 root@192.168.1.11:
scp 8 root@192.168.1.12:
scp 8 root@192.168.1.13:
scp 8 root@192.168.1.14:
scp 8 root@192.168.1.15:
scp 8 root@192.168.1.16:
scp 8 root@192.168.1.51:
scp 8 root@192.168.1.52:
scp 8 root@192.168.1.53:
scp 8 root@192.168.1.54:
scp 8 root@192.168.1.55:
scp 8 root@192.168.1.56:
scp 8 root@192.168.1.57:
root@hello:~# bash -x 7

root@hello:~# vim 6
root@hello:~# cat 6
ssh root@192.168.1.10 "bash -x 8"
ssh root@192.168.1.11 "bash -x 8"
ssh root@192.168.1.12 "bash -x 8"
ssh root@192.168.1.13 "bash -x 8"
ssh root@192.168.1.14 "bash -x 8"
ssh root@192.168.1.15 "bash -x 8"
ssh root@192.168.1.16 "bash -x 8"
ssh root@192.168.1.51 "bash -x 8"
ssh root@192.168.1.52 "bash -x 8"
ssh root@192.168.1.53 "bash -x 8"
ssh root@192.168.1.54 "bash -x 8"
ssh root@192.168.1.55 "bash -x 8"
ssh root@192.168.1.56 "bash -x 8"
ssh root@192.168.1.57 "bash -x 8"
root@hello:~# bash -x 6
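To confirm that the mirror actually took effect on a node, docker info lists the configured registry mirrors; spot-checking one node over SSH is enough.

root@hello:~# ssh root@192.168.1.10 "docker info | grep -A 1 'Registry Mirrors'"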

Check the Nodes

root@hello:~# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   11h   v1.22.1
master2   Ready    control-plane,master   11h   v1.22.1
master3   Ready    control-plane,master   11h   v1.22.1
node1     Ready    worker                 11h   v1.22.1
node10    Ready    worker                 11h   v1.22.1
node11    Ready    worker                 11h   v1.22.1
node2     Ready    worker                 11h   v1.22.1
node3     Ready    worker                 11h   v1.22.1
node4     Ready    worker                 11h   v1.22.1
node5     Ready    worker                 11h   v1.22.1
node6     Ready    worker                 11h   v1.22.1
node7     Ready    worker                 11h   v1.22.1
node8     Ready    worker                 11h   v1.22.1
node9     Ready    worker                 11h   v1.22.1
