
On cloud computing: installing highly available KubeSphere on Huawei Cloud

As multi-cloud and multi-cluster scenarios become more and more common, the demand for deploying KubeSphere on the various cloud providers grows with them. Because each provider has its own resource usage rules and console navigation, users can spend a lot of time troubleshooting. To reduce the error rate during deployment, this tutorial walks through deploying a highly available KubeSphere cluster on Huawei Cloud, in the hope of improving everyone's deployment experience.

I. Pre-deployment preparation

The environment only needs to satisfy the following resource requirements:

Huawei Cloud product    Quantity    Purpose                   Remarks
ECS (cloud server)      6           master / node machines
VPC                     1           Availability zone network
ELB                     2           Load balancing
Security group          1           Ingress/egress control
Public IP (EIP)         7           External access
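
Once the ECS instances from this plan exist (section II below), it can be worth confirming that each one actually matches the sizing you intended for a control-plane or worker role. The checks below use only standard Linux tools and make no assumptions about Huawei Cloud itself:

# Run on each ECS instance to record its available resources
nproc                 # number of CPU cores
free -h               # total and available memory
df -h /               # free space on the root volume
cat /etc/os-release   # confirm a distribution supported by KubeKey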

II. Cloud platform resource initialization

1. Create a VPC

Go to the Huawei Cloud console, choose "Virtual Private Cloud" in the left-hand menu, then click "Create VPC" and configure it as shown in the figure below:
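
After the VPC and subnet exist and instances have been attached to them (step 3 below), a quick way to verify the private network is to check the address assigned from the subnet and ping a neighbouring node. The interface name eth0 and the 192.168.1.x addresses are assumptions taken from the host list used later in master-HA.yaml:

# On master1, confirm the private address assigned from the VPC subnet (eth0 is an assumption)
ip -4 addr show eth0

# Check connectivity to another node in the same subnet (master2 in the later host list)
ping -c 3 192.168.1.11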

2. Create a security group

Create a security group, set its inbound rules, and associate the instances as shown below:
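
The inbound rules should at least cover the ports this tutorial relies on: SSH for KubeKey, the Kubernetes API server on 6443, etcd on 2379, the KubeSphere console NodePort 30880, and the ELB listener on 80. A rough spot-check with nc (assuming it is installed) looks like the sketch below; before the cluster is installed, "connection refused" still means the security group let the packet through, while a timeout suggests a missing rule:

# From any node in the VPC, probe the ports used later in this tutorial
for port in 22 80 2379 6443 30880; do
  nc -zv -w 3 192.168.1.10 "$port"   # 192.168.1.10 is master1 in the later host list
done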

3. Create the ECS instances

In the network settings, choose the VPC and subnet created in step 1. For the security group, choose the one created in the previous step.
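
KubeKey drives the whole installation over SSH from the machine where ./kk runs, so it is worth confirming that every instance accepts SSH before continuing. A minimal sketch, assuming root password login as configured in the master-HA.yaml host list later on:

# Run from the machine that will execute ./kk; each node should print its hostname
for ip in 192.168.1.10 192.168.1.11 192.168.1.12 192.168.1.13 192.168.1.14 192.168.1.15; do
  ssh -o ConnectTimeout=5 root@"$ip" hostname
done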

4. Create the load balancers (one internal, one external)

Internal ELB:

External ELB:

If the cluster needs to be reachable from the public Internet, bind a public IP to the external load balancer and add a listener on port 80 with all nodes as backend servers (port 30880 is used for testing; port 80 also needs to be opened in the security group).
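
Once KubeSphere is installed (section III), the external ELB can be checked end to end with curl. EXTERNAL_ELB_IP below is a placeholder for the public IP bound to the external load balancer, and the test assumes the port 80 listener forwards to the console NodePort 30880 on the backend nodes:

# Both requests should reach the KubeSphere console once the installation has finished
curl -I http://EXTERNAL_ELB_IP/           # through the port 80 listener
curl -I http://EXTERNAL_ELB_IP:30880/     # directly against the NodePort, if exposed for testing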

The config.yaml file used later needs to point at the address (VIP) allocated to the internal ELB created above:

  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.0.205"
    port: "6443"
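
After at least one control-plane node is up (section III), you can confirm that this VIP really fronts the API server; even a 401/403 response proves the internal ELB is forwarding to port 6443. A minimal check, run from a node inside the VPC:

# The API server's /version endpoint is normally readable without authentication
curl -k https://192.168.0.205:6443/version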

III. Deploying the KubeSphere platform

1. Download KubeKey (kk)

$ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
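
If the download is slow or fails from inside mainland China, KubeKey supports the KKZONE=cn environment variable to pull from a domestic mirror. A hedged variant of the same step:

$ export KKZONE=cn                                  # optional: use the CN download zone
$ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
# After making it executable (step 2), ./kk version prints the KubeKey version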

2. Make kk executable

$ chmod +x kk

3. Generate the cluster configuration with KubeKey

$ ./kk create config --with-kubesphere v3.1.1 --with-kubernetes v1.17.9 -f master-HA.yaml

4. Adjust the cluster configuration

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: master-HA
spec:
  hosts:
  - {name: master1, address: 192.168.1.10, internalAddress: 192.168.1.10, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
  - {name: master2, address: 192.168.1.11, internalAddress: 192.168.1.11, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
  - {name: master3, address: 192.168.1.12, internalAddress: 192.168.1.12, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
  - {name: node1, address:  192.168.1.13, internalAddress: 192.168.1.13, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
  - {name: node2, address: 192.168.1.14, internalAddress: 192.168.1.14, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
  - {name: node3, address: 192.168.1.15, internalAddress: 192.168.1.15, password: yourpassword} # Assume that the default port for SSH is 22, otherwise add the port number after the IP address as above
  roleGroups:
    etcd:
     - master[1:3]
    master:
     - master[1:3]
    worker:
     - node[1:3]
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.1.8"
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
    masqueradeAll: false  # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
    maxPods: 110  # maxPods is the number of pods that can run on this Kubelet. [Default: 110]
    nodeCidrMaskSize: 24  # internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
    proxyMode: ipvs  # mode specifies which proxy mode to use. [Default: ipvs]
  network:
    plugin: calico
    calico:
      ipipMode: Always  # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
      vxlanMode: Never  # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
      vethMTU: 1440  # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: ["https://*.mirror.aliyuncs.com"] # input your registryMirrors
    insecureRegistries: []
    privateRegistry: ""
  storage:
    defaultStorageClass: localVolume
    localVolume:
      storageClassName: local

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  local_registry: ""
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: true        # Whether to install etcd monitoring dashboard
    endpointIps: 192.168.1.10,192.168.1.11,192.168.1.12  # etcd cluster endpointIps
    port: 2379              # etcd port
    tlsEnable: true
  common:
    mysqlVolumeSize: 20Gi # MySQL PVC size
    minioVolumeSize: 20Gi # Minio PVC size
    etcdVolumeSize: 20Gi  # etcd PVC size
    openldapVolumeSize: 2Gi   # openldap PVC size
    redisVolumSize: 2Gi # Redis PVC size
    es:  # Storage backend for logging, tracing, events and auditing.
      elasticsearchMasterReplicas: 1   # total number of master nodes, it's not allowed to use even number
      elasticsearchDataReplicas: 1     # total number of data nodes
      elasticsearchMasterVolumeSize: 4Gi   # Volume size of Elasticsearch master nodes
      elasticsearchDataVolumeSize: 20Gi    # Volume size of Elasticsearch data nodes
      logMaxAge: 7                     # Log retention time in built-in Elasticsearch, it is 7 days by default.
      elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log
      # externalElasticsearchUrl:
      # externalElasticsearchPort:
  console:
    enableMultiLogin: false  # enable/disable multiple sign-on; it allows an account to be used by different users at the same time.
    port: 30880
  alerting:                # Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true
  auditing:                # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records, recording the sequence of activities that happened on the platform, initiated by different tenants.
    enabled: true
  devops:                  # Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image
    enabled: true
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request
    jenkinsVolumeSize: 8Gi     # Jenkins volume size
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true
  logging:                 # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true
    logsidecarReplicas: 2
  metrics_server:                    # Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
    enabled: true
  monitoring:                        #
    prometheusReplicas: 1            # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory
    prometheusVolumeSize: 20Gi       # Prometheus PVC size
    alertmanagerReplicas: 1          # AlertManager Replicas
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the role of host or member cluster
  networkpolicy:       # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
    enabled: true
  notification:        # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, Wechat Work, and Slack.
    enabled: true
  openpitrix:          # Whether to install KubeSphere App Store. It provides an application store for Helm-based applications, and offer application lifecycle management
    enabled: true
  servicemesh:         # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology
    enabled: true
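
Before moving on, two sanity checks on master-HA.yaml can save a failed run: controlPlaneEndpoint.address must be the VIP that your internal ELB actually allocated, and the file must still be valid multi-document YAML after editing. The python3/PyYAML check below is an optional assumption, not something KubeKey requires:

# The endpoint address must match the internal ELB VIP in your environment
$ grep -A3 controlPlaneEndpoint master-HA.yaml

# Optional: confirm the edited file still parses as YAML (requires python3 with PyYAML)
$ python3 -c "import yaml; list(yaml.safe_load_all(open('master-HA.yaml'))); print('YAML OK')"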

5. Run the command to create the cluster

# Create the cluster with the specified configuration file
$ ./kk create cluster --with-kubesphere v3.1.1 -f master-HA.yaml

# Follow the KubeSphere installation logs until the console address and login account appear
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.1.10:30880
Account: admin
Password: P@88w0rd

NOTES:1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2020-08-28 01:25:54
#####################################################
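
Once the installer reports success, a few standard kubectl commands confirm that all six nodes joined through the HA endpoint and that the KubeSphere workloads are healthy:

# All six nodes should be Ready, with the three masters in the master role
$ kubectl get nodes -o wide

# Core KubeSphere components live in kubesphere-system; wait until everything is Running
$ kubectl get pods -n kubesphere-system

# A cluster-wide view of anything that is not yet Running
$ kubectl get pods -A | grep -v Running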

Visiting the public IP plus port shows the deployed platform. Log in with the default account and password (admin/P@88w0rd). This article enables the full set of components; after logging in, go to Platform Management > Cluster Management to see the list of installed components and the machine status shown in the figure below. The cluster Overview page shows a dashboard like the one below.

