
Kubeadm HA Cluster Setup, Part 02: Installing the Components

Containerd as the Runtime

Install docker-ce 20.10 on all nodes (the docker-ce packages pull in containerd.io as a dependency, which provides the containerd runtime):

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Configure the kernel modules required by Containerd (all nodes):

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Load the modules on all nodes:

modprobe -- overlay
modprobe -- br_netfilter

On all nodes, configure the kernel parameters required by Containerd:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the kernel parameters on all nodes:

sysctl --system
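As an optional sanity check, confirm that the modules are loaded and that forwarding is on (on a prepared node the sysctl should print 1):

```shell
# Show the modules if loaded; print a hint otherwise.
lsmod | grep -E 'overlay|br_netfilter' || echo "modules not loaded yet"
# Should print 1 after `sysctl --system` has applied the file above.
sysctl -n net.ipv4.ip_forward
```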

Generate Containerd's default configuration file on all nodes:

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

On all nodes, switch Containerd's cgroup driver to systemd:

vim /etc/containerd/config.toml

Find the containerd.runtimes.runc.options section and set SystemdCgroup = true:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

On all nodes, change sandbox_image to a pause image address that matches your Kubernetes version:

sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
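The two edits above can also be applied non-interactively with sed instead of vim. This is a sketch: the patterns assume the stock `containerd config default` output, which contains `SystemdCgroup = false` and a `sandbox_image = "..."` line.

```shell
# Flip the runc cgroup driver to systemd.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Point sandbox_image at the Aliyun pause image.
sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"|' /etc/containerd/config.toml
# Review the result.
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
```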

Start Containerd on all nodes and enable it at boot:

systemctl daemon-reload
systemctl enable --now containerd

On all nodes, point the crictl client at Containerd's runtime socket:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Docker as the Runtime (for Kubernetes versions below 1.24)

Install docker-ce 20.10 on all nodes:

yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

Change the CgroupDriver to systemd:

mkdir /etc/docker

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
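Before restarting Docker it is worth validating the JSON, since a stray comma or quote in daemon.json will stop the daemon from starting. A quick check, assuming python3 is available on the node:

```shell
# Prints the parsed JSON on success, a parse error otherwise.
python3 -m json.tool /etc/docker/daemon.json
```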

Enable Docker at boot on all nodes:

systemctl daemon-reload && systemctl enable --now docker

Installing the Kubernetes Components

Install the latest 1.23.x releases of kubeadm, kubelet, and kubectl on all nodes:

yum install kubeadm-1.23* kubelet-1.23* kubectl-1.23* -y

Change the kubelet configuration to use Containerd as its runtime:

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

Enable kubelet at boot on all nodes (kubelet will restart in a loop until kubeadm init or join has run; that is expected at this stage):

systemctl daemon-reload
systemctl enable --now kubelet

High-Availability Components

Install HAProxy and KeepAlived via yum on all master nodes:

yum install keepalived haproxy -y

Configure HAProxy on all master nodes (the HAProxy configuration is identical on every master):

mkdir /etc/haproxy
vim /etc/haproxy/haproxy.cfg 
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01    192.168.2.201:6443  check
  server k8s-master02    192.168.2.202:6443  check
  server k8s-master03    192.168.2.203:6443  check

Check that the port is listening:

netstat -lntp

Configure KeepAlived on all master nodes (the configuration differs between nodes; note the differences):

mkdir /etc/keepalived
vim /etc/keepalived/keepalived.conf

Check the NIC information (replace the interface value in keepalived.conf with yours):

ip a
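If you are unsure which name to use, the interface carrying the default route (ens33 in this walkthrough) can be extracted in one line. A sketch, assuming a single IPv4 default route:

```shell
# Prints the device of the IPv4 default route, e.g. "ens33".
ip -4 route show default | awk '{print $5; exit}'
```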

Configuration for the Master01 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.2.201
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
        chk_apiserver
    }
}

Configuration for the Master02 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.2.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
        chk_apiserver
    }
}

Configuration for the Master03 node:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.2.203
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.236
    }
    track_script {
        chk_apiserver
    }
}

Configure the KeepAlived health-check script on all master nodes:

vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Make the script executable:

chmod +x /etc/keepalived/check_apiserver.sh

Start haproxy and keepalived:

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

Test whether keepalived is working properly:

ping 192.168.2.236 -c 4
telnet 192.168.2.236 16443

If the VIP cannot be pinged and telnet does not show `]` (the `Escape character is '^]'` prompt of a successful connection), the VIP is not working and you must not continue. First troubleshoot keepalived, e.g. the firewall and SELinux, the status of haproxy and keepalived, and the listening ports:

  • On all nodes the firewall must be disabled and inactive: systemctl status firewalld
  • On all nodes SELinux must be disabled: getenforce
  • On the master nodes check the haproxy and keepalived status: systemctl status keepalived haproxy
  • On the master nodes check the listening ports: netstat -lntp
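The telnet check can also be scripted, which is handy on minimal hosts where telnet is not installed. This sketch uses bash's built-in /dev/tcp with the VIP and port from the HAProxy configuration above:

```shell
# Exit 0 and print "reachable" if something is listening on VIP:16443.
timeout 3 bash -c '</dev/tcp/192.168.2.236/16443' 2>/dev/null \
  && echo "VIP 16443 reachable" || echo "VIP 16443 unreachable"
```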