Building a highly available Kubernetes 1.18.2 cluster with kubeadm
If you want to become more familiar with the individual Kubernetes components, I still recommend building a cluster from the binaries as a learning exercise.
1 Node plan
172.16.197.49 master49
172.16.197.50 master50
172.16.197.56 master56
172.16.197.52 node52
172.16.197.53 node53
172.16.197.200 k8s-lb
2 Basic environment preparation
- Environment information

| Software   | Version |
| ---------- | ------- |
| kubernetes | 1.18.2  |
| docker     | 19.03.8 |
2.1 Environment initialization (run on every machine)
- 1) Configure the hostname, using master49 as an example (change the hostname on each node according to its role in the node plan)
k8s-lb does not need to be set as a hostname anywhere; it only needs a hosts entry on every machine in the cluster (think of it as the hostname/domain name for the keepalived VIP; kubeadm creates the cluster through k8s-lb:16443, so the hosts entry is required).
[root@localhost ~]# hostnamectl set-hostname master49
- 2) Configure the hosts mappings (run on every machine)
[root@localhost ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.197.49 master49
172.16.197.50 master50
172.16.197.56 master56
172.16.197.52 node52
172.16.197.53 node53
172.16.197.200 k8s-lb
- 3) Disable the firewall
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
- 4) Disable SELinux
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
- 5) Disable the swap partition
[root@localhost ~]# swapoff -a   # temporary
[root@localhost ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab   # permanent
- 6) Time synchronization
[root@localhost ~]# yum install chrony -y
[root@localhost ~]# systemctl enable chronyd
[root@localhost ~]# systemctl start chronyd
[root@localhost ~]# chronyc sources
- 7) Configure ulimit
[root@localhost ~]# ulimit -SHn 65535
- 8) Configure kernel parameters
[root@localhost ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
[root@localhost ~]# sysctl --system   # sysctl -p only reads /etc/sysctl.conf; --system also loads /etc/sysctl.d/*.conf
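The two bridge-nf-call sysctls above only take effect when the br_netfilter kernel module is loaded. A minimal sketch to load it now and on every boot (the modules-load.d file name is my own choice, not from the original steps):

[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
[root@localhost ~]# sysctl --system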
2.2 Kernel upgrade (run on every machine)
The default kernel on CentOS 7.6 is 3.10, which has many bugs; the most common one is the cgroup memory leak (run this on every host).
- 1) Download the required kernel version. I install it via rpm here, so I download the rpm package directly.
[root@localhost ~]# wget https://cbs.centos.org/kojifiles/packages/kernel/4.9.220/37.el7/x86_64/kernel-4.9.220-37.el7.x86_64.rpm
- 2) Upgrade with rpm
[root@localhost ~]# rpm -ivh kernel-4.9.220-37.el7.x86_64.rpm
- 3) Reboot after the upgrade, then check that the kernel was upgraded successfully ################ the reboot is mandatory
[root@localhost ~]# reboot
[root@master49 ~]# uname -r
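If a machine still boots into the old 3.10 kernel after the reboot, GRUB may need to be pointed at the newly installed kernel first. A hedged sketch (the menu entry index 0 is an assumption; confirm it with the awk line before setting the default):

[root@localhost ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
[root@localhost ~]# grub2-set-default 0
[root@localhost ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@localhost ~]# reboot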
3 Component installation
3.1 Install IPVS (run on every machine)
- 1) Install the packages required by IPVS
Since I plan to use IPVS as the kube-proxy proxy mode, the corresponding packages need to be installed.
[root@master49 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
- 2) Load the kernel modules
[root@master49 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- xt_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
EOF
Note: since kernel 4.19, nf_conntrack_ipv4 has been renamed to nf_conntrack.
- 3) Load the modules automatically on reboot
[root@master49 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
3.2 Install docker-ce (run on every machine)
Docker must be installed on all hosts.
[root@master49 ~]# # Install prerequisite packages
[root@master49 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master49 ~]# # Add the yum repo
[root@master49 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- Install docker-ce
[root@master49 ~]# yum install docker-ce-19.03.8-3.el7 -y
[root@master49 ~]# systemctl start docker
[root@master49 ~]# systemctl enable docker
- Configure a registry mirror
[root@master49 ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
[root@master49 ~]# systemctl restart docker
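kubeadm will later warn that Docker uses the cgroupfs cgroup driver while systemd is recommended (the warning appears in the kubeadm init output below). A hedged sketch that switches the driver before kubeadm init, assuming /etc/docker/daemon.json contains nothing else you need to keep (merge by hand otherwise); kubeadm 1.18 detects Docker's cgroup driver automatically, so no extra kubelet change should be needed:

[root@master49 ~]# cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF
[root@master49 ~]# systemctl restart docker
[root@master49 ~]# docker info | grep -i cgroup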
3.3 Install the Kubernetes components (run on every machine)
These steps are executed on all nodes.
- Add the yum repo
[root@master49 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Install the packages
[root@master49 ~]# yum install -y kubelet-1.18.2-0 kubeadm-1.18.2-0 kubectl-1.18.2-0 --disableexcludes=kubernetes
- Enable kubelet to start at boot
[root@master49 ~]# systemctl enable kubelet.service
4 Cluster initialization (run on all master nodes)
4.1 Configure cluster high availability
High availability uses HAProxy + Keepalived for failover and for load balancing traffic across the master nodes; HAProxy and Keepalived run as daemons on all master nodes.
- Install the packages
[root@master49 ~]# yum install keepalived haproxy -y
- Configure haproxy
The configuration is identical on all master nodes:
Note: change the apiserver backend addresses to the master addresses from your own node plan.
[root@master49 ~]# vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master49 172.16.197.49:6443 check
    server  master50 172.16.197.50:6443 check
    server  master56 172.16.197.56:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:9999
    stats auth           admin:P@ssW0rd
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
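Before starting the service, the file can be validated with haproxy's built-in config check; this is just a sanity step, not part of the original procedure:

[root@master49 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg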
- Configure keepalived
master49
[root@master49 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# Define the health-check script
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 2
  weight -5
  fall 3
  rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160          # use the actual NIC name of the host (ens160 on these machines)
    virtual_router_id 51
    priority 100              # the primary master must have the highest priority in the cluster
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.197.200
    }
    # Call the check script
    #track_script {
    #    check_apiserver
    #}
}
master50 configuration
[root@master50 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# Define the health-check script
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 2
  weight -5
  fall 3
  rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160          # use the actual NIC name of the host (ens160 on these machines)
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.197.200
    }
    # Call the check script
    #track_script {
    #    check_apiserver
    #}
}
master56 configuration
[root@master56 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# Define the health-check script
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 2
  weight -5
  fall 3
  rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160          # use the actual NIC name of the host (ens160 on these machines)
    virtual_router_id 51
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.197.200
    }
    # Call the check script
    #track_script {
    #    check_apiserver
    #}
}
The higher the priority value, the higher the node's weight; when keepalived stops on the current primary master, the VIP fails over to the machine with the highest remaining priority.
- 172.16.197.200 is the keepalived VIP
Write the health-check script (the same script on every machine is fine):
[root@master49 ~]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

function check_apiserver() {
  for ((i=0; i<5; i++))
  do
    apiserver_job_id=$(pgrep kube-apiserver)
    if [[ ! -z ${apiserver_job_id} ]]; then
      return
    else
      sleep 2
    fi
  done
  apiserver_job_id=0
}

# non-zero (PID) -> running, 0 -> stopped
check_apiserver
if [[ $apiserver_job_id -eq 0 ]]; then
  /usr/bin/systemctl stop keepalived
  exit 1
else
  exit 0
fi
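keepalived's vrrp_script only runs the script if it is executable, and the commented-out track_script block above must be uncommented for the check to actually influence failover; a small sketch:

[root@master49 ~]# chmod +x /etc/keepalived/check_apiserver.sh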
Start haproxy and keepalived
[root@master49 ~]# systemctl enable --now keepalived
[root@master49 ~]# systemctl enable --now haproxy
[root@master49 ~]# systemctl status haproxy
[root@master49 ~]# systemctl status keepalived
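At this point the VIP should sit on whichever master currently holds the highest priority (master49 with priority 100 in this plan), and haproxy should be listening on the frontend port; a quick verification sketch:

[root@master49 ~]# ip addr show ens160 | grep 172.16.197.200
[root@master49 ~]# ss -lntp | grep 16443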
4.2 Deploy the masters
1) On master49, write the kubeadm.yaml configuration file as follows:
- master49
[root@master49 ~]# cat >> kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "k8s-lb:16443"
networking:
  dnsDomain: cluster.local
  podSubnet: 10.17.0.0/16
  serviceSubnet: 10.217.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF
2) Pull the images
- master49
[root@master49 ~]# kubeadm config images pull --config kubeadm.yaml
The images come from the Aliyun registry mirror, so pulling should in theory be fast; alternatively you can download the images provided at the beginning of the article and load them onto the nodes:
docker load -i 1-18-kube-apiserver.tar.gz
docker load -i 1-18-kube-scheduler.tar.gz
docker load -i 1-18-kube-controller-manager.tar.gz
docker load -i 1-18-pause.tar.gz
docker load -i 1-18-cordns.tar.gz
docker load -i 1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz

Notes:
- pause is version 3.2; the image is k8s.gcr.io/pause:3.2
- etcd is version 3.4.3; the image is k8s.gcr.io/etcd:3.4.3-0
- coredns is version 1.6.7; the image is k8s.gcr.io/coredns:1.6.7
- apiserver, scheduler, controller-manager and kube-proxy are version 1.18.2; the images are:
  k8s.gcr.io/kube-apiserver:v1.18.2
  k8s.gcr.io/kube-controller-manager:v1.18.2
  k8s.gcr.io/kube-scheduler:v1.18.2
  k8s.gcr.io/kube-proxy:v1.18.2
3) Run the initialization
- master49
[root@master49 ~]# kubeadm init --config kubeadm.yaml --upload-certs
W1218 17:44:40.664521   27338 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master49 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-lb] and IPs [10.217.0.1 172.16.197.49]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master49 localhost] and IPs [172.16.197.49 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master49 localhost] and IPs [172.16.197.49 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1218 17:44:46.319361   27338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1218 17:44:46.320750   27338 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.522265 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
df34e5886551aaf2c4afea226b7b6709d197c43334798e1be79386203893cd3b
[mark-control-plane] Marking the node master49 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master49 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: n838nb.1ienzev40tbbhbrp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-lb:16443 --token n838nb.1ienzev40tbbhbrp \
    --discovery-token-ca-cert-hash sha256:5d2ac53eb502d852215fc856d23184c05196faba4e210b9257d68ef7cb9fd5c1 \
    --control-plane --certificate-key df34e5886551aaf2c4afea226b7b6709d197c43334798e1be79386203893cd3b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-lb:16443 --token n838nb.1ienzev40tbbhbrp \
    --discovery-token-ca-cert-hash sha256:5d2ac53eb502d852215fc856d23184c05196faba4e210b9257d68ef7cb9fd5c1
[root@master49 ~]# echo $?
0
Record the kubeadm join commands from the final output; they are needed later when the other master nodes and the worker nodes join.
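If the join command or the certificate key is lost (the uploaded certs expire after two hours, as the output notes), they can be regenerated on master49; a hedged sketch using standard kubeadm subcommands:

[root@master49 ~]# # print a fresh worker join command (creates a new token)
[root@master49 ~]# kubeadm token create --print-join-command
[root@master49 ~]# # re-upload the control-plane certificates and print a new certificate key,
[root@master49 ~]# # to be appended to the join command as: --control-plane --certificate-key <key>
[root@master49 ~]# kubeadm init phase upload-certs --upload-certs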
4) Configure environment variables
- master49
[root@master49 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master49 ~]# source /root/.bashrc
[root@master49 ~]# mkdir -p $HOME/.kube
[root@master49 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master49 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
- master50/56: run the same commands after they join the cluster (the join output prints the same hint)
[root@master50 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master50 ~]# source /root/.bashrc
[root@master50 ~]# mkdir -p $HOME/.kube
[root@master50 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master50 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
5) Check the node status
- master49
[root@master49 ~]# kubectl get node
NAME       STATUS     ROLES    AGE     VERSION
master49   NotReady   master   3m47s   v1.18.2
6) Install the network plugin
- master49
If a node has multiple NICs, the internal NIC must be specified in the manifest (with a single NIC no change is needed):
[root@master49 ~]# wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
[root@master49 ~]# vi calico.yaml
......
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.8.8-1
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: IP_AUTODETECTION_METHOD   # add this environment variable to the DaemonSet
              value: interface=ens160         # specify the internal NIC (ens160 on these hosts)
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
......
[root@master49 ~]# grep -C5 10.17.0.0 calico.yaml
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.17.0.0/16"           # use the pod subnet (podSubnet from kubeadm.yaml)
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
##################
# Install the calico network plugin
[root@master49 ~]# kubectl apply -f calico.yaml
Once the network plugin is installed, the node information looks like this:
[root@master49 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master49   Ready    master   10m   v1.18.2
You can see the status has changed from NotReady to Ready.
7) Join master50 to the cluster
- Pull the images (copy kubeadm.yaml from master49 to master50 first)
[root@master50 ~]# kubeadm config images pull --config kubeadm.yaml
- Join the cluster
[root@master50 ~]# kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1 --control-plane
- The output is as follows:
...
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
...
- Configure environment variables
[root@master50 ~]# cat >> /root/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master50 ~]# source /root/.bashrc
- Do the same on the remaining master to join master56 to the cluster.
- Check the cluster status
[root@master49 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master49   Ready    master   3h38m   v1.18.2
master50   Ready    master   3h36m   v1.18.2
master56   Ready    master   3h36m   v1.18.2
node52     Ready    <none>   3h34m   v1.18.2
node53     Ready    <none>   3h35m   v1.18.2
- Check the cluster component status
If everything is Running, all components are healthy; if something is not, inspect that pod's logs to troubleshoot.
- Check the pods
[root@master49 ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-75d555c48-9z5ql   1/1     Running   1          3h36m
calico-node-2xt99                         1/1     Running   0          3m57s
calico-node-8hd6s                         1/1     Running   0          3m45s
calico-node-bgg29                         1/1     Running   0          3m26s
calico-node-pq2pc                         1/1     Running   0          4m7s
calico-node-rs77f                         1/1     Running   0          3m37s
coredns-7ff77c879f-ltmr7                  1/1     Running   1          3h37m
coredns-7ff77c879f-rzbsj                  1/1     Running   1          3h37m
etcd-master49                             1/1     Running   1          3h37m
etcd-master50                             1/1     Running   1          3h36m
etcd-master56                             1/1     Running   1          3h36m
kube-apiserver-master49                   1/1     Running   1          3h37m
kube-apiserver-master50                   1/1     Running   1          3h36m
kube-apiserver-master56                   1/1     Running   1          3h36m
kube-controller-manager-master49          1/1     Running   4          3h37m
kube-controller-manager-master50          1/1     Running   2          3h36m
kube-controller-manager-master56          1/1     Running   2          3h36m
kube-proxy-4csh5                          1/1     Running   1          3h37m
kube-proxy-54pqr                          1/1     Running   0          3h34m
kube-proxy-h2ttm                          1/1     Running   1          3h36m
kube-proxy-nr7z4                          1/1     Running   0          3h34m
kube-proxy-xtrqz                          1/1     Running   1          3h35m
kube-scheduler-master49                   1/1     Running   4          3h37m
kube-scheduler-master50                   1/1     Running   2          3h36m
kube-scheduler-master56                   1/1     Running   3          3h36m
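Since kube-proxy was configured for IPVS mode, it is worth confirming that IPVS is really in use; a hedged sketch (the pod name is just an example, pick any kube-proxy pod from the list above):

[root@master49 ~]# # the kube-proxy log should mention an ipvs proxier rather than iptables
[root@master49 ~]# kubectl -n kube-system logs kube-proxy-4csh5 | grep -i ipvs
[root@master49 ~]# # the IPVS virtual server table should list the service addresses
[root@master49 ~]# ipvsadm -Ln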
4.3 Deploy the worker nodes
- The worker nodes only need to join the cluster
[root@node52 ~]# kubeadm join k8s-lb:16443 --token q4ui64.gp5g5rezyusy9xw9 --discovery-token-ca-cert-hash sha256:1b7cd42c825288a53df23dcd818aa03253b0c7e7e9317fa92bde2fb853d899d1
- The output log is as follows:
W0509 23:24:12.159733   10635 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
- Finally, check the cluster node information
[root@master49 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master49   Ready    master   3h38m   v1.18.2
master50   Ready    master   3h36m   v1.18.2
master56   Ready    master   3h36m   v1.18.2
node52     Ready    <none>   3h34m   v1.18.2
node53     Ready    <none>   3h35m   v1.18.2
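The worker nodes show ROLES as <none> by default; if you prefer them labeled as workers, an optional hedged step (the label is purely cosmetic):

[root@master49 ~]# kubectl label node node52 node-role.kubernetes.io/worker=
[root@master49 ~]# kubectl label node node53 node-role.kubernetes.io/worker=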
5 Test cluster high availability
Simulate a failure of the primary master (master49) by stopping keepalived on it, then check the whole cluster.
# Simulate a failure by stopping keepalived
systemctl stop keepalived
# Then check whether the cluster is still available
## master49
[root@master49 ~]# systemctl stop keepalived
[root@master49 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:90:60:e2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.197.49/24 brd 172.16.197.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.17.58.0/32 brd 10.17.58.0 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:c2:74:27:17 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:ea:ea:96:4d:71 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 0e:1b:31:48:12:1c brd ff:ff:ff:ff:ff:ff
    inet 10.217.0.10/32 brd 10.217.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.217.0.1/32 brd 10.217.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
19: calie4cbb6e0f40@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
20: cali0d463746e41@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
23: cali1153f4082bb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
[root@master49 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master49   Ready    master   3h28m   v1.18.2
master50   Ready    master   3h27m   v1.18.2
master56   Ready    master   3h27m   v1.18.2
node52     Ready    <none>   3h25m   v1.18.2
node53     Ready    <none>   3h25m   v1.18.2
## master50 (the VIP has moved here)
[root@master50 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:90:5d:c3 brd ff:ff:ff:ff:ff:ff
    inet 172.16.197.50/24 brd 172.16.197.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet 172.16.197.200/32 scope global ens160
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.17.45.128/32 brd 10.17.45.128 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:56:05:6c:76 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:92:f7:64:4c:21 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether ae:4f:c7:5d:14:5b brd ff:ff:ff:ff:ff:ff
    inet 10.217.0.10/32 brd 10.217.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.217.0.1/32 brd 10.217.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
## master49 (restart keepalived; the higher priority takes the VIP back)
[root@master49 ~]# systemctl restart keepalived
[root@master49 ~]#
[root@master49 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:90:60:e2 brd ff:ff:ff:ff:ff:ff
    inet 172.16.197.49/24 brd 172.16.197.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet 172.16.197.200/32 scope global ens160
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.17.58.0/32 brd 10.17.58.0 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:c2:74:27:17 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:ea:ea:96:4d:71 brd ff:ff:ff:ff:ff:ff
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 0e:1b:31:48:12:1c brd ff:ff:ff:ff:ff:ff
    inet 10.217.0.10/32 brd 10.217.0.10 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.217.0.1/32 brd 10.217.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
19: calie4cbb6e0f40@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
20: cali0d463746e41@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
23: cali1153f4082bb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
[root@master49 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master49   Ready    master   3h29m   v1.18.2
master50   Ready    master   3h27m   v1.18.2
master56   Ready    master   3h27m   v1.18.2
node52     Ready    <none>   3h26m   v1.18.2
node53     Ready    <none>   3h26m   v1.18.2
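While the VIP is failed over, it can also be checked that the apiserver is still reachable through the load-balanced address; a hedged sketch (/healthz is normally served to unauthenticated clients on a default kubeadm cluster):

[root@master49 ~]# # "ok" means the VIP -> haproxy -> kube-apiserver path still works
[root@master49 ~]# curl -k https://k8s-lb:16443/healthz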
6 Install command auto-completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
7 Reinstalling (resetting nodes)
[root@master49 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1218 17:43:18.148470   27043 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: configmaps "kubeadm-config" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1218 17:43:21.762933   27043 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@master49 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@master49 ~]# ls
anaconda-ks.cfg  kernel-4.9.220-37.el7.x86_64.rpm  kubeadm.yaml
[root@master49 ~]# rm -rf /root/.kube/
If the master and worker nodes have already joined the Kubernetes cluster, the steps above need to be run on every node.
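As the reset output itself points out, kubeadm reset does not clean up the CNI configuration, iptables rules, or IPVS tables; a hedged cleanup sketch following those hints:

[root@master49 ~]# rm -rf /etc/cni/net.d
[root@master49 ~]# ipvsadm --clear
[root@master49 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X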