Containerd as the Runtime
Install docker-ce 20.10 on all nodes (containerd.io is pulled in as a dependency):
yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
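Installing docker-ce brings in the containerd.io package, which provides the containerd binary used below. An optional sanity check:
rpm -q containerd.io
containerd --version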
Configure the kernel modules required by Containerd (all nodes):
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Load the modules on all nodes:
modprobe -- overlay
modprobe -- br_netfilter
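Optionally, verify that both modules are loaded:
lsmod | grep -E 'overlay|br_netfilter'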
Configure the kernel parameters required by Containerd on all nodes:
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Apply the kernel parameters on all nodes:
sysctl --system
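Optionally, confirm the three parameters took effect (each should report 1):
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables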
Generate Containerd's default configuration file on all nodes:
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
Change Containerd's Cgroup driver to Systemd on all nodes:
vim /etc/containerd/config.toml
Find containerd.runtimes.runc.options and add SystemdCgroup = true:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
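If you prefer not to edit the file by hand, a sed one-liner makes the same change (this assumes the default config.toml generated above, where the value starts out as false):
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml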
On all nodes, change sandbox_image to a Pause image address that matches your own version:
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
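This step can also be scripted; the pattern below replaces whatever default sandbox_image the generated config.toml contains, so double-check the result afterwards:
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml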
Start Containerd on all nodes and enable it at boot:
systemctl daemon-reload
systemctl enable --now containerd
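Optionally, verify that Containerd is running and responding:
systemctl status containerd
ctr version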
Configure the runtime endpoint that the crictl client connects to on all nodes:
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
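crictl itself ships with the cri-tools package, which the kubeadm packages installed below pull in as a dependency; once it is present, you can verify the endpoint with:
crictl info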
Docker as the Runtime (Kubernetes versions below 1.24)
Install docker-ce 20.10 on all nodes:
yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
Change the CgroupDriver to systemd:
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable Docker at boot on all nodes:
systemctl daemon-reload && systemctl enable --now docker
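Optionally, confirm that Docker is now using the systemd cgroup driver:
docker info | grep -i 'cgroup driver'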
Install the Kubernetes Components
Install the latest 1.23 versions of kubeadm, kubelet, and kubectl on all nodes:
yum install kubeadm-1.23* kubelet-1.23* kubectl-1.23* -y
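Optionally, confirm the installed versions:
kubeadm version
kubelet --version
kubectl version --client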
Change the kubelet configuration to use Containerd as the runtime (skip this if you chose Docker as the runtime):
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
Enable kubelet at boot on all nodes:
systemctl daemon-reload
systemctl enable --now kubelet
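kubelet will restart in a loop at this point because the cluster has not been initialized yet; that is expected and resolves itself after kubeadm init. To inspect it anyway:
systemctl status kubelet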
High-Availability Components
Install HAProxy and KeepAlived via yum on all Master nodes:
yum install keepalived haproxy -y
Configure HAProxy on all Master nodes (the HAProxy configuration is identical on every Master node):
mkdir /etc/haproxy
vim /etc/haproxy/haproxy.cfg
global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s

defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

frontend k8s-master
    bind 0.0.0.0:16443
    bind 127.0.0.1:16443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master01 192.168.2.201:6443 check
    server k8s-master02 192.168.2.202:6443 check
    server k8s-master03 192.168.2.203:6443 check
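Optionally, ask haproxy to validate the configuration syntax before starting it:
haproxy -c -f /etc/haproxy/haproxy.cfg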
Check whether the ports are being listened on:
netstat -lntp
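haproxy has not been started yet at this point, so this check mainly confirms that nothing else already occupies ports 16443 and 33305; to filter just those two:
netstat -lntp | grep -E '16443|33305'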
Configure KeepAlived on all Master nodes (the configuration differs per node; note the differences):
mkdir /etc/keepalived
vim /etc/keepalived/keepalived.conf
Check the NIC information (replace the interface value in keepalived.conf accordingly):
ip a
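If you are unsure which interface carries the node's default route, this prints it (assuming a single default route):
ip route | awk '/^default/ {print $5}'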
Configuration for the Master01 node:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.2.201
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
        chk_apiserver
    }
}
Configuration for the Master02 node:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.2.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
        chk_apiserver
    }
}
Configuration for the Master03 node:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.2.203
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
        chk_apiserver
    }
}
Configure the KeepAlived health-check script on all Master nodes:
vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
# Stop keepalived (releasing the VIP) if haproxy is not running on this node.

err=0
for k in $(seq 1 3)
do
    # pgrep prints haproxy PIDs; empty output means haproxy is down
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
chmod +x /etc/keepalived/check_apiserver.sh
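You can dry-run the script to confirm it behaves as expected; it exits 0 while haproxy is running (be aware that it stops keepalived if haproxy is down):
bash /etc/keepalived/check_apiserver.sh
echo $?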
Start haproxy and keepalived:
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
Test whether keepalived is working properly:
ping 192.168.2.236 -c 4
telnet 192.168.2.236 16443
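To see which Master node currently holds the VIP, check the addresses on each node; the VIP appears on exactly one of them:
ip a | grep 192.168.2.236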
If the ping fails and telnet does not print a ] (the escape-character prompt), the VIP is considered broken and you must not continue; troubleshoot keepalived first, for example the firewall and SELinux, the status of haproxy and keepalived, the listening ports, and so on:
- Check the firewall status on all nodes; it must be disabled and inactive: systemctl status firewalld
- Check the SELinux status on all nodes; it must be disabled: getenforce
- Check the haproxy and keepalived status on the Master nodes: systemctl status keepalived haproxy
- Check the listening ports on the Master nodes: netstat -lntp
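If the firewall or SELinux turns out to be the culprit, the usual remediation on CentOS-family systems is (adjust to your environment):
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config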