Background:

Continuing the upgrade series: Kubernetes 1.16.15 was upgraded to 1.17.17, 1.17.17 to 1.18.20, and 1.18.20 to 1.19.12. This time the cluster goes from 1.19.12 to 1.20.9.

Cluster configuration:

Hostname         OS        IP
k8s-vip          slb       10.0.0.37
k8s-master-01    centos7   10.0.0.41
k8s-master-02    centos7   10.0.0.34
k8s-master-03    centos7   10.0.0.26
k8s-node-01      centos7   10.0.0.36
k8s-node-02      centos7   10.0.0.83
k8s-node-03      centos7   10.0.0.40
k8s-node-04      centos7   10.0.0.49
k8s-node-05      centos7   10.0.0.45
k8s-node-06      centos7   10.0.0.18

1. Consult the official documentation

Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

2. Confirm the available versions and the upgrade plan

yum list --showduplicates kubeadm --disableexcludes=kubernetes

The command above shows that the latest release currently available on the 1.20 branch is 1.20.9-0. There are three master nodes; as usual, I start by upgrading the k8s-master-03 node.
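As an aside, the --disableexcludes=kubernetes flag is only needed because the Kubernetes yum repo is normally configured with an exclude line, so that a routine yum update does not bump kubelet/kubeadm/kubectl by accident. A minimal sketch of such a repo file (the mirror URL is just an example, not necessarily the one this cluster uses):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
# keep yum update from touching these packages; upgrades must be explicit
exclude=kubelet kubeadm kubectl
EOF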

3. Upgrade the control plane on k8s-master-03

Run the following on k8s-master-03:

1. Upgrade the Kubernetes packages with yum

yum install kubeadm-1.20.9-0 kubelet-1.20.9-0 kubectl-1.20.9-0 --disableexcludes=kubernetes

2. Drain the node and check whether the cluster can be upgraded

This also doubles as a review of the drain command:

kubectl drain k8s-master-03 --ignore-daemonsets
sudo kubeadm upgrade plan

3. Upgrade to 1.20.9

kubeadm upgrade apply 1.20.9

[root@k8s-master-03 ~]# sudo systemctl daemon-reload
[root@k8s-master-03 ~]# sudo systemctl restart kubelet
[root@k8s-master-03 ~]# kubectl uncordon k8s-master-03
node/k8s-master-03 uncordoned
[root@k8s-master-03 ~]# kubectl get nodes
[root@k8s-master-03 ~]# kubectl get pods -n kube-system


4. Upgrade the other control plane nodes (k8s-master-01 and k8s-master-02)

yum install kubeadm-1.20.9-0 kubelet-1.20.9-0 kubectl-1.20.9-0 --disableexcludes=kubernetes
sudo kubeadm upgrade node
sudo systemctl daemon-reload
sudo systemctl restart kubelet


5. Upgrade the worker nodes

yum install kubeadm-1.20.9-0 kubelet-1.20.9-0 kubectl-1.20.9-0 --disableexcludes=kubernetes
sudo kubeadm upgrade node
sudo systemctl daemon-reload
sudo systemctl restart kubelet

Note: I did not drain any of these nodes; whether to do so depends on your own requirements (a sketch of the optional drain steps follows below).
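If you do want to drain a worker before upgrading it, a minimal sketch, using k8s-node-01 purely as an example node:

# evict regular pods from the node; DaemonSet pods are left in place
kubectl drain k8s-node-01 --ignore-daemonsets
# ...run the yum install / kubeadm upgrade node / kubelet restart steps from step 5 on that node...
# allow the node to receive pods again
kubectl uncordon k8s-node-01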

6. Verify the upgrade

 kubectl get nodes
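A couple of extra quick checks that can be useful here (the exact pod list will of course differ per cluster):

kubectl get nodes -o wide         # every node should now report v1.20.9
kubectl version --short          # client and server versions
kubectl get pods -n kube-system  # control plane and CNI pods should all be Running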

7. One more thing: selfLink is disabled in v1.20.0

My Prometheus Operator was on the 0.4 branch, so I decided to uninstall it and reinstall; the version gap was simply too large. This time I skipped the branches and went straight to the mainline version.
The basic procedure follows my earlier post, Kubernetes 1.20.5 install Prometheus Operator. That part went without problems, so here is the one place that did cause trouble:
The storageclass in this cluster, which has been upgraded all the way up from Kubernetes 1.16, uses NFS:

kubectl get sc
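For reference, an nfs-client StorageClass of this kind usually looks roughly like the excerpt below. The provisioner string is an assumption for illustration only; it must match whatever PROVISIONER_NAME the nfs-client-provisioner deployment was started with and is not copied from my cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs        # example only; must match the nfs-client-provisioner's PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"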


Finally, check the pods in the monitoring namespace and the operator logs:

kubectl get pods -n monitoring
kubectl logs -f prometheus-operator-84dc795dc8-lkl5r -n monitoring



Look at the keywords:
"additional" appears to be the auto-discovery (additional scrape) configuration?
First, comment out the auto-discovery configuration in the prometheus-prometheus.yaml file. Hmm, the service still did not come up, so check the logs again:

kubectl logs -f prometheus-operator-84dc795dc8-lkl5r -n monitoring

No new output there, but a look at the PVs shows the PVC was never created. Go check the logs of the NFS pod:

kubectl get pods -n nfs
kubectl logs -f nfs-client-provisioner-6cb4f54cbc-wqqw9 -n nfs


class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
A Baidu search for selfLink turned up this reference: https://www.orchome.com/10024

Modify kube-apiserver.yaml on all three master nodes (the exact flag to add is shown at the end of this post).
After that the PV and PVC were created successfully and the Prometheus service started. Then, back to my additional auto-discovery configuration:
I had written that configuration up in Kubernetes 1.20.5 install Prometheus Operator.

Why not try the file from my old version?:

# quote the heredoc delimiter so $1:$2 is written literally instead of being expanded by the shell
cat <<'EOF' > prometheus-additional.yaml
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name
EOF
kubectl delete secret additional-configs -n monitoring
kubectl create secret generic additional-configs --from-file=prometheus-additional.yaml -n monitoring
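For context, this secret is what the Prometheus custom resource in prometheus-prometheus.yaml points at through its additionalScrapeConfigs field; roughly this excerpt (field names are from the prometheus-operator API):

spec:
  additionalScrapeConfigs:
    name: additional-configs
    key: prometheus-additional.yaml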

Checking the logs again, Prometheus is now up. My initial suspicion is that the prometheus-additional.yaml in my earlier configuration had a problem; that is my own issue, though. The main point to stress here is the change that has to be added to the kube-apiserver.yaml file on the master nodes:

- --feature-gates=RemoveSelfLink=false
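Concretely, the flag goes into the static pod manifest /etc/kubernetes/manifests/kube-apiserver.yaml on each master, inside the kube-apiserver command list; roughly this excerpt (the other flags stay whatever they already are in your manifest):

spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false   # re-enables selfLink so the old nfs provisioner can work again
    # ...the rest of the existing flags are left untouched...

Because kubelet watches the manifests directory, saving the file restarts the kube-apiserver static pod automatically; edit the three masters one at a time. Note that this feature gate was removed entirely in later Kubernetes releases, so it is only a stopgap for old provisioners.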