Kubernetes: Deleting and Re-adding Nodes in a k8s Cluster


    After deleting a node from an existing k8s cluster, you can add it back with the steps below. The same procedure also works for adding a brand-new node, except that a new node must first have Docker and the base Kubernetes components installed.

     For building the cluster itself, see the earlier article: CentOS8 搭建 Kubernetes (Building Kubernetes on CentOS 8).


    1. On the master, list the current nodes and identify the ones to delete. Because the cluster IP was changed, the worker nodes are showing NotReady.

        [root@k8s-master ~]# kubectl  get nodes

        NAME         STATUS     ROLES    AGE   VERSION

        k8s-master   Ready      master   13d   v1.19.3

        k8s-node1    NotReady   <none>   13d   v1.19.3

        k8s-node2    NotReady   <none>   13d   v1.19.3

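Before deleting a node that is still running workloads, it is common practice to cordon and drain it first so pods are evicted gracefully. (In this case the nodes are already NotReady, so eviction may hang and `--force` may be needed; `--delete-local-data` is the flag name in v1.19.) A sketch:

```shell
# Optional: evict workloads before deleting the node.
# --ignore-daemonsets is required because DaemonSet pods (flannel, kube-proxy)
# cannot be evicted; --force covers pods not managed by a controller.
kubectl cordon k8s-node1
kubectl drain k8s-node1 --ignore-daemonsets --force --delete-local-data
```

These commands require a live cluster and are shown here only as a sketch of the usual pre-deletion step.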
    2. Delete the abnormal nodes.

        [root@k8s-master ~]# kubectl  delete nodes k8s-node1

        node "k8s-node1" deleted

        [root@k8s-master ~]# kubectl  delete nodes k8s-node2

        node "k8s-node2" deleted

    3. On each deleted node, wipe the local cluster state with kubeadm reset.

[root@k8s-node1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1121 05:40:44.876393    9649 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

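As the reset output itself notes, `kubeadm reset` does not touch the CNI configuration, iptables/IPVS rules, or kubeconfig files. A sketch of the manual cleanup it asks for (run as root on the node; destructive, so adjust to your setup):

```shell
# Remove leftover CNI configuration (flannel recreates it on re-join)
rm -rf /etc/cni/net.d
# Flush iptables rules left behind by kube-proxy
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# If the cluster runs kube-proxy in IPVS mode, clear the IPVS tables too
ipvsadm --clear
# Remove the stale kubeconfig (on worker nodes this file may not exist)
rm -f $HOME/.kube/config
```

This is only the cleanup the reset output recommends; skip the IPVS line if your cluster uses the default iptables mode.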
    4. On the master, generate a new token and print the join command.

[root@k8s-master ~]# kubeadm token create --print-join-command
W1121 05:38:27.405833   12512 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.0.1.48:6443 --token 8xwcaq.qxekio9xd02ed936     --discovery-token-ca-cert-hash sha256:d988ba566675095ae25255d63b21cc4d5a9a69bee9905dc638f58b217c651c14

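If several nodes need to be re-joined, it can be handy to capture the printed join command on the master (for example with `JOIN_CMD=$(kubeadm token create --print-join-command)`) and pull out its parts in a script. A small sketch, using the command string from the output above:

```shell
# Hypothetical helper: extract the token and CA cert hash from a saved
# "kubeadm join" command line.
JOIN_CMD='kubeadm join 10.0.1.48:6443 --token 8xwcaq.qxekio9xd02ed936 --discovery-token-ca-cert-hash sha256:d988ba566675095ae25255d63b21cc4d5a9a69bee9905dc638f58b217c651c14'
TOKEN=$(echo "$JOIN_CMD" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
HASH=$(echo "$JOIN_CMD" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$TOKEN"
echo "hash=$HASH"
```

The extracted values could then be substituted into `kubeadm join` invocations run over ssh on each node.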
    5. Re-join the node to the k8s cluster.

[root@k8s-node1 ~]# kubeadm join 10.0.1.48:6443 --token 8xwcaq.qxekio9xd02ed936     --discovery-token-ca-cert-hash sha256:d988ba566675095ae25255d63b21cc4d5a9a69bee9905dc638f58b217c651c14
[preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

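The join output warns that Docker is using the cgroupfs cgroup driver while systemd is recommended. One common way to switch it, assuming Docker reads `/etc/docker/daemon.json` (run as root on the node, then restart the services):

```shell
# Sketch: set Docker's cgroup driver to systemd to silence the
# IsDockerSystemdCheck warning on future joins
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker kubelet
```

If `daemon.json` already exists with other settings, merge the `exec-opts` key into it instead of overwriting the file.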
    6. Check pod status. Pods on the freshly re-joined node may briefly show ContainerCreating or Init states while they start up.

[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME                                 READY   STATUS              RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
coredns-f9fd979d6-c6qrl              0/1     ContainerCreating   1          13d   <none>       k8s-node1    <none>           <none>
coredns-f9fd979d6-hmpbj              1/1     Running             0          13d   10.244.2.2   k8s-node2    <none>           <none>
etcd-k8s-master                      1/1     Running             5          13d   10.0.1.48    k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running             6          13d   10.0.1.48    k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running             5          13d   10.0.1.48    k8s-master   <none>           <none>
kube-flannel-ds-5ftj9                1/1     Running             4          13d   10.0.1.48    k8s-master   <none>           <none>
kube-flannel-ds-bwh28                1/1     Running             0          23m   10.0.1.50    k8s-node2    <none>           <none>
kube-flannel-ds-ttx7c                0/1     Init:0/1            0          23m   10.0.1.49    k8s-node1    <none>           <none>
kube-proxy-4xxxh                     0/1     ContainerCreating   2          13d   10.0.1.49    k8s-node1    <none>           <none>
kube-proxy-7rs4w                     1/1     Running             0          13d   10.0.1.50    k8s-node2    <none>           <none>
kube-proxy-d5hrv                     1/1     Running             4          13d   10.0.1.48    k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running             5          13d   10.0.1.48    k8s-master   <none>           <none>

    7. Check node status.

[root@k8s-master ~]# kubectl  get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   13d   v1.19.3
k8s-node1    Ready    <none>   24m   v1.19.3
k8s-node2    Ready    <none>   24m   v1.19.3

