Upgrade Notes
For a Kubernetes patch-level upgrade, it is basically enough to replace the binaries. For an upgrade across minor versions (here 1.20 → 1.21), pay attention to kubelet flag changes and to changes in the other components after the upgrade. Because Kubernetes releases move quickly and many dependencies lag behind, running the newest version in production is not recommended.
For a production upgrade, test and verify the procedure repeatedly in a test environment first.
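Before starting, it can help to record what is currently running so the versions in the table below can be compared afterwards; a minimal sketch (the Calico and CoreDNS resource names are assumptions based on this cluster's manifests):
kubectl get nodes -o wide        # kubelet version and runtime per node
kubectl version --short          # kubectl client and kube-apiserver server versions
kubectl -n kube-system get ds calico-node -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n kube-system get deploy coredns -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'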
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#downloads-for-v1217
Component | Before upgrade | After upgrade |
---|---|---|
Etcd | 3.4.13 | No upgrade needed |
kube-apiserver | 1.20.13 | 1.21.7 |
kube-scheduler | 1.20.13 | 1.21.7 |
kube-controller-manager | 1.20.13 | 1.21.7 |
kubectl | 1.20.13 | 1.21.7 |
kube-proxy | 1.20.13 | 1.21.7 |
kubelet | 1.20.13 | 1.21.7 |
calico | 3.15.3 | 3.21.1 |
coredns | 1.7.0 | 1.8.6 |
Upgrading the Masters
Master01
# Back up the current binaries
[root@k8s-master01 ~]# cd /usr/local/bin/
[root@k8s-master01 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /tmp
# Download the release package
[root@k8s-master01 ~]# wget https://storage.googleapis.com/kubernetes-release/release/v1.21.7/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 ~]# tar zxf kubernetes-server-linux-amd64.tar.gz
# Stop the services
[root@k8s-master01 ~]# systemctl stop kube-apiserver
[root@k8s-master01 ~]# systemctl stop kube-controller-manager
[root@k8s-master01 ~]# systemctl stop kube-scheduler
# Replace the binaries
[root@k8s-master01 bin]# cd /root/kubernetes/server/bin/
[root@k8s-master01 bin]# ll
total 1075472
-rwxr-xr-x 1 root root 50810880 Nov 17 22:52 apiextensions-apiserver
-rwxr-xr-x 1 root root 44867584 Nov 17 22:52 kubeadm
-rwxr-xr-x 1 root root 48586752 Nov 17 22:52 kube-aggregator
-rwxr-xr-x 1 root root 122204160 Nov 17 22:52 kube-apiserver
-rw-r--r-- 1 root root 8 Nov 17 22:50 kube-apiserver.docker_tag
-rw------- 1 root root 126985216 Nov 17 22:50 kube-apiserver.tar
-rwxr-xr-x 1 root root 116404224 Nov 17 22:52 kube-controller-manager
-rw-r--r-- 1 root root 8 Nov 17 22:50 kube-controller-manager.docker_tag
-rw------- 1 root root 121185280 Nov 17 22:50 kube-controller-manager.tar
-rwxr-xr-x 1 root root 46669824 Nov 17 22:52 kubectl
-rwxr-xr-x 1 root root 55317704 Nov 17 22:52 kubectl-convert
-rwxr-xr-x 1 root root 118390192 Nov 17 22:52 kubelet
-rwxr-xr-x 1 root root 43376640 Nov 17 22:52 kube-proxy
-rw-r--r-- 1 root root 8 Nov 17 22:50 kube-proxy.docker_tag
-rw------- 1 root root 105378304 Nov 17 22:50 kube-proxy.tar
-rwxr-xr-x 1 root root 47349760 Nov 17 22:52 kube-scheduler
-rw-r--r-- 1 root root 8 Nov 17 22:50 kube-scheduler.docker_tag
-rw------- 1 root root 52130816 Nov 17 22:50 kube-scheduler.tar
-rwxr-xr-x 1 root root 1593344 Nov 17 22:52 mounter
[root@k8s-master01 bin]# cp kube-apiserver kube-scheduler kubectl kube-controller-manager /usr/local/bin/
cp: overwrite '/usr/local/bin/kube-apiserver'? y
cp: overwrite '/usr/local/bin/kube-scheduler'? y
cp: overwrite '/usr/local/bin/kubectl'? y
cp: overwrite '/usr/local/bin/kube-controller-manager'? y
# Start the services
[root@k8s-master01 bin]# systemctl start kube-apiserver
[root@k8s-master01 bin]# systemctl start kube-controller-manager
[root@k8s-master01 bin]# systemctl start kube-scheduler
# Check the services for errors
[root@k8s-master01 bin]# systemctl status kube-apiserver
[root@k8s-master01 bin]# systemctl status kube-controller-manager
[root@k8s-master01 bin]# systemctl status kube-scheduler
# Check the versions
[root@k8s-master01 bin]# kube-apiserver --version
Kubernetes v1.21.7
[root@k8s-master01 bin]# kube-scheduler --version
Kubernetes v1.21.7
[root@k8s-master01 bin]# kube-controller-manager --version
Kubernetes v1.21.7
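Before repeating the same steps on master02, it may be worth confirming that the upgraded control-plane components on master01 came back healthy; a small sketch:
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
kubectl get --raw='/healthz'     # should return ok
kubectl get nodes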
Master02
# Stop the services
[root@k8s-master02 ~]# systemctl stop kube-apiserver
[root@k8s-master02 ~]# systemctl stop kube-controller-manager
[root@k8s-master02 ~]# systemctl stop kube-scheduler
# Copy the new binaries from master01
[root@k8s-master01 bin]# scp kube-apiserver kube-controller-manager kube-scheduler root@k8s-master02:/usr/local/bin
kube-apiserver 100% 117MB 72.2MB/s 00:01
kube-controller-manager 100% 111MB 74.3MB/s 00:01
kube-scheduler 100% 45MB 71.1MB/s 00:00
# Start the services
[root@k8s-master02 ~]# systemctl start kube-apiserver
[root@k8s-master02 ~]# systemctl start kube-controller-manager
[root@k8s-master02 ~]# systemctl start kube-scheduler
# Check the services for errors
[root@k8s-master02 ~]# systemctl status kube-apiserver
[root@k8s-master02 ~]# systemctl status kube-controller-manager
[root@k8s-master02 ~]# systemctl status kube-scheduler
# Check the versions
[root@k8s-master02 ~]# kube-apiserver --version
Kubernetes v1.21.7
[root@k8s-master02 ~]# kube-scheduler --version
Kubernetes v1.21.7
[root@k8s-master02 ~]# kube-controller-manager --version
Kubernetes v1.21.7
Master03
# Stop the services
[root@k8s-master03 ~]# systemctl stop kube-apiserver
[root@k8s-master03 ~]# systemctl stop kube-controller-manager
[root@k8s-master03 ~]# systemctl stop kube-scheduler
# Copy the new binaries from master01
[root@k8s-master01 bin]# scp kube-apiserver kube-controller-manager kube-scheduler root@k8s-master03:/usr/local/bin
kube-apiserver 100% 117MB 72.2MB/s 00:01
kube-controller-manager 100% 111MB 74.3MB/s 00:01
kube-scheduler 100% 45MB 71.1MB/s 00:00
# Start the services
[root@k8s-master03 ~]# systemctl start kube-apiserver
[root@k8s-master03 ~]# systemctl start kube-controller-manager
[root@k8s-master03 ~]# systemctl start kube-scheduler
# Check the services for errors
[root@k8s-master03 ~]# systemctl status kube-apiserver
[root@k8s-master03 ~]# systemctl status kube-controller-manager
[root@k8s-master03 ~]# systemctl status kube-scheduler
# Check the versions
[root@k8s-master03 ~]# kube-apiserver --version
Kubernetes v1.21.7
[root@k8s-master03 ~]# kube-scheduler --version
Kubernetes v1.21.7
[root@k8s-master03 ~]# kube-controller-manager --version
Kubernetes v1.21.7
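With all three masters running the new binaries, the server version reported through the API endpoint should now be v1.21.7; a quick check (sketch):
kubectl version --short
# Client Version: v1.21.7
# Server Version: v1.21.7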
Upgrading the Nodes
Upgrade during off-peak hours, one node at a time, for a smooth rolling upgrade.
Mark the node as unschedulable
[root@k8s-master01 ~]# kubectl cordon k8s-node01
node/k8s-node01 cordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 2d21h v1.20.13
k8s-master02 Ready <none> 2d21h v1.20.13
k8s-master03 Ready <none> 2d21h v1.20.13
k8s-node01 Ready,SchedulingDisabled <none> 2d18h v1.20.13
k8s-node02 Ready <none> 2d18h v1.20.13
Evict the Pods
[root@k8s-master01 ~]# kubectl drain k8s-node01 --delete-local-data --ignore-daemonsets --force
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/k8s-node01 already cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/busybox; ignoring DaemonSet-managed Pods: kube-system/calico-node-jrc6b
evicting pod default/busybox
pod/busybox evicted
- --delete-local-data: delete the pod even if it uses emptyDir volumes (deprecated; the non-deprecated equivalent is sketched after this list)
- --ignore-daemonsets: ignore DaemonSet-managed pods; without this flag, pods controlled by a DaemonSet would be recreated on the node right after deletion, resulting in an endless loop
- --force: without this flag, only pods created by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job are deleted; with it, "bare" pods (not bound to any controller) are deleted as well
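As the warning above shows, --delete-local-data is deprecated in 1.21; the same drain written with the replacement flag would be, as a sketch:
kubectl drain k8s-node01 --delete-emptydir-data --ignore-daemonsets --force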
Stop the services
[root@k8s-node01 ~]# systemctl stop kube-proxy
[root@k8s-node01 ~]# systemctl stop kubelet
Back up the old binaries
[root@k8s-node01 ~]# mv /usr/local/bin/kubelet /tmp
[root@k8s-node01 ~]# mv /usr/local/bin/kube-proxy /tmp
Copy the new binaries from master01
[root@k8s-master01 bin]# scp kubelet kube-proxy root@k8s-node01:/usr/local/bin
Start the services
[root@k8s-node01 ~]# systemctl start kubelet
[root@k8s-node01 ~]# systemctl start kube-proxy
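Before making the node schedulable again, it can help to confirm the new kubelet registered cleanly; a small sketch:
kubelet --version                # run on k8s-node01, should report v1.21.7
kubectl get node k8s-node01      # run from a master: STATUS Ready, VERSION v1.21.7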
Mark the node as schedulable again
[root@k8s-master01 bin]# kubectl uncordon k8s-node01
node/k8s-node01 uncordoned
Verify the upgrade
# As shown below, k8s-node01 has been upgraded to v1.21.7
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 3d v1.20.13
k8s-master02 Ready <none> 3d v1.20.13
k8s-master03 Ready <none> 3d v1.20.13
k8s-node01 Ready <none> 2d20h v1.21.7
k8s-node02 Ready <none> 2d20h v1.20.13
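The same cordon / drain / replace / uncordon cycle still has to be run for k8s-node02 (and any other workers). A rough loop, assuming passwordless SSH from master01 and the new binaries still under /root/kubernetes/server/bin (hostnames and paths are assumptions to adapt):
for node in k8s-node02; do
  kubectl drain ${node} --delete-emptydir-data --ignore-daemonsets --force
  ssh root@${node} "systemctl stop kubelet kube-proxy"
  scp /root/kubernetes/server/bin/{kubelet,kube-proxy} root@${node}:/usr/local/bin/
  ssh root@${node} "systemctl start kubelet kube-proxy"
  kubectl uncordon ${node}
done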
Upgrading Calico
Unless there is a specific need, upgrading Calico is generally not recommended.
Choose an installation method below based on your datastore and node count:
- Install Calico with the Kubernetes API datastore, 50 nodes or fewer
- Install Calico with the Kubernetes API datastore, more than 50 nodes
- Install Calico with the etcd datastore
Backup
# Back up the Secrets
[root@k8s-master01 bak]# kubectl get secret -n kube-system calico-node-token-d6ck2 -o yaml > calico-node-token-d6ck2.yml
[root@k8s-master01 bak]# kubectl get secret -n kube-system calico-kube-controllers-token-r4v8n -o yaml > calico-kube-controllers-token.yml
[root@k8s-master01 bak]# kubectl get secret -n kube-system calico-etcd-secrets -o yaml > calico-etcd-secrets.yml
# Back up the ConfigMap
[root@k8s-master01 bak]# kubectl get cm -n kube-system calico-config -o yaml > calico-config.yaml
# Back up the ClusterRoles
[root@k8s-master01 bak]# kubectl get clusterrole calico-kube-controllers -o yaml > calico-kube-controllers-cr.yml
[root@k8s-master01 bak]# kubectl get clusterrole calico-node -o yaml > calico-node-cr.yml
# Back up the ClusterRoleBindings
[root@k8s-master01 bak]# kubectl get ClusterRoleBinding calico-kube-controllers -o yaml > calico-kube-controllers-crb.yml
[root@k8s-master01 bak]# kubectl get ClusterRoleBinding calico-node -o yaml > calico-node-crb.yml
# Back up the DaemonSet
[root@k8s-master01 bak]# kubectl get DaemonSet -n kube-system calico-node -o yaml > calico-node-daemonset.yml
# Back up the ServiceAccounts
[root@k8s-master01 bak]# kubectl get sa -n kube-system calico-kube-controllers -o yaml > calico-kube-controllers-sa.yml
[root@k8s-master01 bak]# kubectl get sa -n kube-system calico-node -o yaml > calico-node-sa.yml
# Back up the Deployment
[root@k8s-master01 bak]# kubectl get deployment -n kube-system calico-kube-controllers -o yaml > calico-kube-controllers-deployment.yml
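Most of the objects above can also be dumped in one pass; a small sketch (the auto-generated token Secrets have random suffixes, so they are left to the per-object commands above):
for obj in secret/calico-etcd-secrets cm/calico-config clusterrole/calico-kube-controllers clusterrole/calico-node clusterrolebinding/calico-kube-controllers clusterrolebinding/calico-node ds/calico-node sa/calico-kube-controllers sa/calico-node deployment/calico-kube-controllers; do
  kubectl get -n kube-system ${obj} -o yaml > $(echo ${obj} | tr '/' '-').yml
done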
Update
[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o calico-etcd.yaml
# Modify the configuration
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379"#g' calico-etcd.yaml
ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: ""#etcd_key: "/calico-secrets/etcd-key"#g' calico-etcd.yaml
# Change this to your own pod CIDR
POD_SUBNET="172.16.0.0/12"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
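Before applying, it is worth checking that the sed edits actually landed in the manifest; a quick sketch:
grep -E 'etcd_endpoints|etcd_ca|etcd_cert|etcd_key' calico-etcd.yaml
grep -A1 'CALICO_IPV4POOL_CIDR' calico-etcd.yaml    # the value line should show 172.16.0.0/12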
[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets unchanged
configmap/calico-config configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
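After the apply, the calico-node DaemonSet performs a rolling update; a sketch for watching it complete and confirming the new image tag (labels and names follow the upstream manifest and may differ):
kubectl -n kube-system rollout status ds/calico-node
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl -n kube-system get ds calico-node -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'   # expect the v3.21.1 tag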
Upgrading CoreDNS
[root@k8s-master01 ~]# git clone https://github.com/coredns/deployment.git
Cloning into 'deployment'...
[root@k8s-master01 ~]# cd deployment/kubernetes/
[root@k8s-master01 kubernetes]# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -
# Check the image version
[root@k8s-master01 ~]# kubectl get pod -n kube-system coredns-86f4cdc7bc-ccgr5 -o yaml | grep image
image: coredns/coredns:1.8.6
imagePullPolicy: IfNotPresent
image: coredns/coredns:1.8.6
imageID: docker-pullable://coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
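Finally, a quick functional check that cluster DNS still resolves after the CoreDNS upgrade; a sketch using a throwaway busybox pod (the image tag is an assumption):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
# should resolve via the cluster DNS at 10.96.0.10 (the -i value passed to deploy.sh above)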