Background:

Upgrading is an ongoing affair: Kubernetes 1.16.15 was upgraded to 1.17.17, then 1.17.17 to 1.18.20, and this post covers 1.18.20 to 1.19.12.

Cluster configuration:

Hostname        OS        IP
k8s-vip         slb       10.0.0.37
k8s-master-01   centos7   10.0.0.41
k8s-master-02   centos7   10.0.0.34
k8s-master-03   centos7   10.0.0.26
k8s-node-01     centos7   10.0.0.36
k8s-node-02     centos7   10.0.0.83
k8s-node-03     centos7   10.0.0.40
k8s-node-04     centos7   10.0.0.49
k8s-node-05     centos7   10.0.0.45
k8s-node-06     centos7   10.0.0.18
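Before touching anything, it doesn't hurt to confirm the starting state of every node. A minimal pre-check, assuming kubectl is already configured on one of the masters:

kubectl get nodes -o wide
# every node should report VERSION v1.18.20 and STATUS Ready before the upgrade starts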

1. Refer to the official documentation

Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

2. Confirm the available versions and the upgrade plan

yum list --showduplicates kubeadm --disableexcludes=kubernetes

The command above shows that the latest 1.19 release is currently 1.19.12-0. There are three master nodes; following my usual habit, I'll upgrade the k8s-master-03 node first.
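If the version list is long, the output can be narrowed with grep; a small sketch, assuming the repo is named kubernetes as in the command above:

yum list --showduplicates kubeadm --disableexcludes=kubernetes | grep 1.19
# pick the highest 1.19 patch release, which at the time of writing is 1.19.12-0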

3. Upgrade the control plane on the k8s-master-03 node

As before, run on k8s-master-03:

1. Upgrade the Kubernetes packages via yum

yum install kubeadm-1.19.12-0 kubelet-1.19.12-0 kubectl-1.19.12-0 --disableexcludes=kubernetes

2. Drain the node and check whether the cluster can be upgraded

Once again, this doubles as a review of the drain command:

kubectl drain k8s-master-03 --ignore-daemonsets
sudo kubeadm upgrade plan
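Since kubeadm upgrade plan runs with whatever kubeadm binary is installed, it's worth confirming the binary really was upgraded first; a quick check:

kubeadm version -o short
# should print v1.19.12 after the yum install above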

3. Upgrade to 1.19.12

kubeadm upgrade apply 1.19.12

Note: to emphasize, the worker nodes are all already on 1.18.20, so this time we are not skipping across multiple minor versions.
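Once apply completes, a quick look at the control-plane static pods on this node is a cheap sanity check; a sketch (static pod names are suffixed with the node name):

kubectl get pods -n kube-system -o wide | grep k8s-master-03
# kube-apiserver, kube-controller-manager, kube-scheduler and etcd should all be Running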

[root@k8s-master-03 ~]# sudo systemctl daemon-reload
[root@k8s-master-03 ~]# sudo systemctl restart kubelet
[root@k8s-master-03 ~]# kubectl uncordon k8s-master-03
node/k8s-master-03 uncordoned
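To double-check that the kubelet on this node actually came back up on the new version, one option is to read it straight from the node object:

kubectl get node k8s-master-03 -o jsonpath='{.status.nodeInfo.kubeletVersion}'
# expected output: v1.19.12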

4. Upgrade the remaining control-plane nodes (k8s-master-01 and k8s-master-02)

sudo yum install kubeadm-1.19.12-0 kubelet-1.19.12-0 kubectl-1.19.12-0 --disableexcludes=kubernetes
sudo kubeadm upgrade node
sudo systemctl daemon-reload
sudo systemctl restart kubelet
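The official document also drains each of these nodes before the upgrade and uncordons it afterwards; a sketch for k8s-master-01 (repeat for k8s-master-02):

kubectl drain k8s-master-01 --ignore-daemonsets
# ...run the yum install / kubeadm upgrade node / kubelet restart sequence above...
kubectl uncordon k8s-master-01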


5. Upgrade the worker nodes

sudo yum install kubeadm-1.19.12-0 kubelet-1.19.12-0 kubectl-1.19.12-0 --disableexcludes=kubernetes
sudo kubeadm upgrade node
sudo systemctl daemon-reload
sudo systemctl restart kubelet
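Workers should likewise be drained first and uncordoned afterwards; a sketch for k8s-node-01, assuming the 1.19-era flag --delete-local-data is needed because some pods use emptyDir volumes:

kubectl drain k8s-node-01 --ignore-daemonsets --delete-local-data
# ...run the upgrade sequence above on k8s-node-01...
kubectl uncordon k8s-node-01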

6. Verify the upgrade

 kubectl get nodes
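The VERSION column should read v1.19.12 for all ten nodes. For a slightly more thorough check:

kubectl get nodes -o wide
kubectl version --short
# client and server should both report v1.19.12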

7. Miscellaneous

Take a quick look at the logs of the add-ons under kube-system to confirm they are working normally:

kubectl logs -f kube-controller-manager-k8s-master-01 -n kube-system
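CoreDNS is another add-on worth a glance; assuming the default kubeadm label k8s-app=kube-dns is still in place:

kubectl logs -n kube-system -l k8s-app=kube-dns --tail=20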


At a glance everything looks fine, so I'll leave it at that... The Prometheus issue is still parked; I was planning to install the mainline version anyway, and the old deployment is slated for removal. If a ClusterRole problem shows up, refer to: Kubernetes 1.16.15 upgrade to 1.17.17