Binary installation of Kubernetes (k8s) v1.27.3, IPv4/IPv6 dual stack, usable without Internet access

https://github.com/cby-chen/Kubernetes — open source takes effort; please give it a star, thanks.

Introduction

Highly available binary deployment of Kubernetes (k8s), with IPv4+IPv6 dual-stack support.

I use IPv6 so the cluster can be reached over the public Internet, which is why I configured static IPv6 addresses.

If you have no IPv6 environment, or do not want to use IPv6, simply do not configure IPv6 addresses on the hosts.

Skipping the IPv6 addresses does not affect the rest of the procedure; the cluster itself still supports IPv6, which leaves room for later expansion.

If you do not want IPv6, just leave the NICs without IPv6 addresses; do not delete or otherwise touch the IPv6-related configuration, or things will break.
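
If you did configure IPv6, a quick sanity check of IPv6 connectivity from any host can look like this (2400:3200::1 is the AliDNS address used as the IPv6 DNS server later in this guide; any reachable IPv6 address works):

ping -6 -c 4 2400:3200::1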

It is strongly recommended to read the documentation on GitHub !!!

If problems are found, the GitHub documentation will be updated, and new-version documents will be published there as soon as possible !!!

Manual-installation project address: https://github.com/cby-chen/Kubernetes

1. Environment

Host name    IP address      Description               Software
-            192.168.1.60    Internet-connected node   downloads all required packages
Master01     192.168.0.31    master node               kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx
Master02     192.168.0.32    master node               kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx
Master03     192.168.0.33    master node               kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived, nginx
Node01       192.168.0.34    node                      kubelet, kube-proxy, nfs-client, nginx
Node02       192.168.0.35    node                      kubelet, kube-proxy, nfs-client, nginx
-            192.168.0.36    VIP                       -

Network segments

Physical hosts: 192.168.0.0/24

Service: 10.96.0.0/12

Pod: 172.16.0.0/12
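
For reference, these ranges reappear verbatim, together with their IPv6 counterparts fd00::/108 (services) and fc00::/48 (pods), in the component flags configured in section 6:

--service-cluster-ip-range=10.96.0.0/12,fd00::/108
--cluster-cidr=172.16.0.0/12,fc00::/48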

The installation packages have already been bundled: https://github.com/cby-chen/Kubernetes/releases/download/v1.27.3/kubernetes-v1.27.3.tar
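
For an offline installation you can fetch and unpack this bundle on the Internet-connected node and then copy it to the cluster hosts; a minimal sketch (the exact directory layout inside the archive may differ):

wget https://github.com/cby-chen/Kubernetes/releases/download/v1.27.3/kubernetes-v1.27.3.tar
tar xf kubernetes-v1.27.3.tar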

1.1. Basic OS environment configuration for k8s

1.2. Configure IP addresses

# Note!
# If the VMs were cloned, the NIC UUIDs will be duplicated.
# Duplicated UUIDs must be regenerated; with a duplicated UUID the host cannot obtain an IPv6 address.
#
# Show the current connections and UUIDs:
# nmcli con show
# Delete the connection whose UUID needs to change:
# nmcli con delete uuid <old UUID>
# Re-create it (a new UUID is generated):
# nmcli con add type ethernet ifname <interface> con-name <new name>
# Bring the connection up again:
# nmcli con up <new name>

# Change the NIC UUIDs
ssh root@192.168.0.31 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"
ssh root@192.168.0.32 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"
ssh root@192.168.0.33 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"
ssh root@192.168.0.34 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"
ssh root@192.168.0.35 "nmcli con delete uuid 708a1497-2192-43a5-9f03-2ab936fb3c44;nmcli con add type ethernet ifname eth0 con-name eth0;nmcli con up eth0"

# Set static IPv4 addresses
ssh root@192.168.0.154 "nmcli con mod eth0 ipv4.addresses 192.168.0.31/24; nmcli con mod eth0 ipv4.gateway  192.168.0.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
ssh root@192.168.0.156 "nmcli con mod eth0 ipv4.addresses 192.168.0.32/24; nmcli con mod eth0 ipv4.gateway  192.168.0.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
ssh root@192.168.0.164 "nmcli con mod eth0 ipv4.addresses 192.168.0.33/24; nmcli con mod eth0 ipv4.gateway  192.168.0.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
ssh root@192.168.0.166 "nmcli con mod eth0 ipv4.addresses 192.168.0.34/24; nmcli con mod eth0 ipv4.gateway  192.168.0.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"
ssh root@192.168.0.167 "nmcli con mod eth0 ipv4.addresses 192.168.0.35/24; nmcli con mod eth0 ipv4.gateway  192.168.0.1; nmcli con mod eth0 ipv4.method manual; nmcli con mod eth0 ipv4.dns "8.8.8.8"; nmcli con up eth0"

# If you have no IPv6, simply skip this step
ssh root@192.168.0.31 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::10; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
ssh root@192.168.0.32 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::20; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
ssh root@192.168.0.33 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::30; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
ssh root@192.168.0.34 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::40; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"
ssh root@192.168.0.35 "nmcli con mod eth0 ipv6.addresses fc00:43f4:1eea:1::50; nmcli con mod eth0 ipv6.gateway fc00:43f4:1eea:1::1; nmcli con mod eth0 ipv6.method manual; nmcli con mod eth0 ipv6.dns "2400:3200::1"; nmcli con up eth0"

# Inspect the NIC configuration
# nmcli device show eth0
# nmcli con show eth0
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=424fd260-c480-4899-97e6-6fc9722031e8
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.0.31
PREFIX=24
GATEWAY=192.168.8.1
DNS1=8.8.8.8
IPV6ADDR=fc00:43f4:1eea:1::10/128
IPV6_DEFAULTGW=fc00:43f4:1eea:1::1
DNS2=2400:3200::1
[root@localhost ~]# 

1.3. Set the hostnames

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

1.4. Configure the yum repositories

# Mirror instructions for other distributions:
# https://mirrors.tuna.tsinghua.edu.cn/help/

# For Ubuntu
sed -i 's/cn.archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list

# For CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-*.repo

# For a private (internal) repository
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak  /etc/yum.repos.d/CentOS-*.repo

1.5. Install some essential tools

# For Ubuntu
apt update && apt upgrade -y && apt install -y wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl
# For CentOS 7
yum update -y && yum -y install  wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl
# For CentOS 8
yum update -y && yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl
1.5.1 Download the files needed for an offline installation (optional)

On an Internet-connected server, install an identical system and use it to download the required packages.

CentOS 7
# Install the necessary tools
yum -y install createrepo yum-utils wget epel*
# Download the full set of dependencies
repotrack createrepo wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl gcc keepalived haproxy bash-completion chrony sshpass ipvsadm ipset sysstat conntrack libseccomp
# Remove the old libseccomp
rm -rf libseccomp-*.rpm
# Download libseccomp
wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
# Create the yum repo metadata
createrepo -u -d /data/centos7/
# Copy the packages to the internal (offline) machines
scp -r /data/centos7/ root@192.168.0.31:
scp -r /data/centos7/ root@192.168.0.32:
scp -r /data/centos7/ root@192.168.0.33:
scp -r /data/centos7/ root@192.168.0.34:
scp -r /data/centos7/ root@192.168.0.35:
# Create the repo config file on the internal machines
rm -rf /etc/yum.repos.d/*
cat > /etc/yum.repos.d/123.repo  << EOF 
[cby]
name=CentOS-$releasever - Media
baseurl=file:///root/centos7/
gpgcheck=0
enabled=1
EOF
# Install the downloaded packages
yum clean all
yum makecache
yum install /root/centos7/* --skip-broken -y

#### Note ####
# After installation yum may still be unusable; if so, run this again
rm -rf /etc/yum.repos.d/*
cat > /etc/yum.repos.d/123.repo  << EOF 
[cby]
name=CentOS-$releasever - Media
baseurl=file:///root/centos7/
gpgcheck=0
enabled=1
EOF
yum clean all
yum makecache
yum install /root/centos7/* --skip-broken -y

#### Note ####
# Install chrony and libseccomp
# yum install /root/centos7/libseccomp-2.5.1*.rpm -y
# yum install /root/centos7/chrony-*.rpm -y
CentOS 8
# Install the necessary tools
yum -y install createrepo yum-utils wget epel*
# Download the full set of dependencies
repotrack wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl gcc keepalived haproxy bash-completion chrony sshpass ipvsadm ipset sysstat conntrack libseccomp
# Create the yum repo metadata
createrepo -u -d /data/centos8/
# Copy the packages to the internal (offline) machines
scp -r centos8/ root@192.168.0.31:
scp -r centos8/ root@192.168.0.32:
scp -r centos8/ root@192.168.0.33:
scp -r centos8/ root@192.168.0.34:
scp -r centos8/ root@192.168.0.35:
# Create the repo config file on the internal machines
rm -rf /etc/yum.repos.d/*
cat > /etc/yum.repos.d/123.repo  << EOF 
[cby]
name=CentOS-$releasever - Media
baseurl=file:///root/centos8/
gpgcheck=0
enabled=1
EOF
# Install the downloaded packages
yum clean all
yum makecache
yum install /root/centos8/* --skip-broken -y

#### Note ####
# After installation yum may still be unusable; if so, run this again
rm -rf /etc/yum.repos.d/*
cat > /etc/yum.repos.d/123.repo  << EOF 
[cby]
name=CentOS-$releasever - Media
baseurl=file:///root/centos8/
gpgcheck=0
enabled=1
EOF
yum clean all
yum makecache
yum install /root/centos8/* --skip-broken -y
Ubuntu: download the packages and their dependencies
#!/bin/bash

logfile=123.log
ret=""
function getDepends()
{
   echo "fileName is" $1>>$logfile
   # use tr to del < >
   ret=`apt-cache depends $1|grep Depends |cut -d: -f2 |tr -d "<>"`
   echo $ret|tee  -a $logfile
}
# Packages whose dependencies need to be fetched
libs="wget psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl gcc keepalived haproxy bash-completion chrony sshpass ipvsadm ipset sysstat conntrack libseccomp"

# download libs dependen. deep in 3
i=0
while [ $i -lt 3 ] ; do
    let i++
    echo $i
    # download libs
    newlist=" "
    for j in $libs
    do
        added="$(getDepends $j)"
        newlist="$newlist $added"
        apt install $added --reinstall -d -y
    done
    libs=$newlist
done

# Create the repo metadata
apt install dpkg-dev
sudo cp /var/cache/apt/archives/*.deb /data/ubuntu/ -r
dpkg-scanpackages . /dev/null |gzip > /data/ubuntu/Packages.gz -r

# Copy the packages to the internal machines
scp -r ubuntu/ root@192.168.0.31:
scp -r ubuntu/ root@192.168.0.32:
scp -r ubuntu/ root@192.168.0.33:
scp -r ubuntu/ root@192.168.0.34:
scp -r ubuntu/ root@192.168.0.35:

# Configure the apt source on the internal machines
vim /etc/apt/sources.list
cat /etc/apt/sources.list
deb file:////root/ ubuntu/

# Install the deb packages
apt install ./*.deb

1.6. Selectively download the required tools

#!/bin/bash

# Release pages for checking versions:
#
# https://github.com/containernetworking/plugins/releases/
# https://github.com/containerd/containerd/releases/
# https://github.com/kubernetes-sigs/cri-tools/releases/
# https://github.com/Mirantis/cri-dockerd/releases/
# https://github.com/etcd-io/etcd/releases/
# https://github.com/cloudflare/cfssl/releases/
# https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
# https://download.docker.com/linux/static/stable/x86_64/
# https://github.com/opencontainers/runc/releases/
# https://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el7/x86_64/RPMS/
# https://github.com/helm/helm/tags
# http://nginx.org/download/

# Version numbers
cni_plugins_version='v1.3.0'
cri_containerd_cni_version='1.7.2'
crictl_version='v1.27.0'
cri_dockerd_version='0.3.3'
etcd_version='v3.5.9'
cfssl_version='1.6.4'
kubernetes_server_version='1.27.3'
docker_version='24.0.2'
runc_version='1.1.7'
kernel_version='5.4.248'
helm_version='3.12.1'
nginx_version='1.25.1'

# URLs 
base_url='https://ghproxy.com/https://github.com'
kernel_url="http://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el7/x86_64/RPMS/kernel-lt-${kernel_version}-1.el7.elrepo.x86_64.rpm"
runc_url="${base_url}/opencontainers/runc/releases/download/v${runc_version}/runc.amd64"
docker_url="https://download.docker.com/linux/static/stable/x86_64/docker-${docker_version}.tgz"
cni_plugins_url="${base_url}/containernetworking/plugins/releases/download/${cni_plugins_version}/cni-plugins-linux-amd64-${cni_plugins_version}.tgz"
cri_containerd_cni_url="${base_url}/containerd/containerd/releases/download/v${cri_containerd_cni_version}/cri-containerd-cni-${cri_containerd_cni_version}-linux-amd64.tar.gz"
crictl_url="${base_url}/kubernetes-sigs/cri-tools/releases/download/${crictl_version}/crictl-${crictl_version}-linux-amd64.tar.gz"
cri_dockerd_url="${base_url}/Mirantis/cri-dockerd/releases/download/v${cri_dockerd_version}/cri-dockerd-${cri_dockerd_version}.amd64.tgz"
etcd_url="${base_url}/etcd-io/etcd/releases/download/${etcd_version}/etcd-${etcd_version}-linux-amd64.tar.gz"
cfssl_url="${base_url}/cloudflare/cfssl/releases/download/v${cfssl_version}/cfssl_${cfssl_version}_linux_amd64"
cfssljson_url="${base_url}/cloudflare/cfssl/releases/download/v${cfssl_version}/cfssljson_${cfssl_version}_linux_amd64"
helm_url="https://files.m.daocloud.io/get.helm.sh/helm-v${helm_version}-linux-amd64.tar.gz"
kubernetes_server_url="https://dl.k8s.io/v${kubernetes_server_version}/kubernetes-server-linux-amd64.tar.gz"
nginx_url="http://nginx.org/download/nginx-${nginx_version}.tar.gz"

# Download packages
packages=(
  $kernel_url
  $runc_url
  $docker_url
  $cni_plugins_url
  $cri_containerd_cni_url
  $crictl_url
  $cri_dockerd_url
  $etcd_url
  $cfssl_url
  $cfssljson_url
  $helm_url
  $kubernetes_server_url
  $nginx_url
)

for package_url in "${packages[@]}"; do
  filename=$(basename "$package_url")
  if wget -cq --progress=bar:force:noscroll -nc "$package_url"; then
    echo "Downloaded $filename"
  else
    echo "Failed to download $filename"
    exit 1
  fi
done

1.7. Disable the firewall

# Skip on Ubuntu; run on CentOS
systemctl disable --now firewalld

1.8. Disable SELinux

# Skip on Ubuntu; run on CentOS
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.9. Disable the swap partition

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

1.10. Network configuration (choose one of the two methods)

# Skip on Ubuntu; run on CentOS
# Method 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# Method 2
cat > /etc/NetworkManager/conf.d/calico.conf << EOF 
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

1.11. Set up time synchronization

# Server side
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.0.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Client side
# apt install chrony -y
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool 192.168.0.31 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v

1.12. Configure ulimit

ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

1.13. Configure passwordless SSH login

# apt install -y sshpass
yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.0.31 192.168.0.32 192.168.0.33 192.168.0.34 192.168.0.35"
export SSHPASS=123123
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done

1.14. Add the ELRepo repository

# Skip on Ubuntu; run on CentOS
# Configure the repository for RHEL-8 or CentOS-8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y 
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo 
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo 

# Install ELRepo for RHEL-7, SL-7 or CentOS-7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y 
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo 
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo 

# List the available packages
yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available

1.15. Upgrade the kernel to 4.18 or newer

# Skip on Ubuntu; run on CentOS
# Install the latest kernel
# Here I choose kernel-ml; use kernel-lt if you want the long-term maintenance branch
yum -y --enablerepo=elrepo-kernel  install  kernel-ml

# List the installed kernels
rpm -qa | grep kernel
# Show the default kernel
grubby --default-kernel
# If it is not the new one, set it explicitly
grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo)
# Reboot to take effect
reboot

# Combined one-liner for CentOS 8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install kernel-lt -y ; grubby --default-kernel ; reboot 

# Combined one-liner for CentOS 7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum  --disablerepo="*"  --enablerepo="elrepo-kernel"  list  available -y ; yum  --enablerepo=elrepo-kernel  install  kernel-lt -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot 

# Offline version
yum install -y /root/cby/kernel-lt-*-1.el7.elrepo.x86_64.rpm ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot 
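
After the reboot, you can confirm that the new kernel is actually running (the exact version string depends on which kernel-lt/kernel-ml build was installed):

uname -r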

1.16. Install ipvsadm

# Offline installation for CentOS 7
# yum install /root/centos7/ipset-*.el7.x86_64.rpm /root/centos7/lm_sensors-libs-*.el7.x86_64.rpm  /root/centos7/ipset-libs-*.el7.x86_64.rpm /root/centos7/sysstat-*.el7_9.x86_64.rpm  /root/centos7/ipvsadm-*.el7.x86_64.rpm  -y

# For Ubuntu
# apt install ipvsadm ipset sysstat conntrack -y
# For CentOS
yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.17. Tune the kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF

sysctl --system

1.18. Configure local /etc/hosts resolution on all nodes

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.31 k8s-master01
192.168.0.32 k8s-master02
192.168.0.33 k8s-master03
192.168.0.34 k8s-node01
192.168.0.35 k8s-node02
192.168.0.36 lb-vip
EOF

2. Install the basic k8s components

Note: choose either 2.1 or 2.2, not both.

2.1. Install Containerd as the runtime (recommended)

# https://github.com/containernetworking/plugins/releases/
# wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
cd cby/

# Create the directories required by the CNI plugins
mkdir -p /etc/cni/net.d /opt/cni/bin 
# Unpack the CNI binaries
tar xf cni-plugins-linux-amd64-v*.tgz -C /opt/cni/bin/

# https://github.com/containerd/containerd/releases/
# wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.7.2/cri-containerd-cni-1.7.2-linux-amd64.tar.gz
# Unpack
tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

# Create the systemd service file
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

2.1.1 Configure the kernel modules required by Containerd

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

2.1.2 Load the modules

systemctl restart systemd-modules-load.service

2.1.3 Configure the kernel parameters required by Containerd

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the settings
sysctl --system

2.1.4 Create the Containerd configuration file

# Generate the default configuration file
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Modify the Containerd configuration
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
sed -i "s#registry.k8s.io#m.daocloud.io/registry.k8s.io#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image
sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep certs.d
mkdir /etc/containerd/certs.d/docker.io -pv

# Configure a registry mirror
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]
EOF

2.1.5 Start it and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd

2.1.6 Point the crictl client at the runtime socket

# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
# Unpack
tar xf crictl-v*-linux-amd64.tar.gz -C /usr/bin/

# Generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Test
systemctl restart  containerd
crictl info

2.2 Install Docker as the runtime

2.2.1 Install Docker

# Binary download: https://download.docker.com/linux/static/stable/x86_64/
# wget https://download.docker.com/linux/static/stable/x86_64/docker-24.0.2.tgz

# Unpack
tar xf docker-*.tgz 
# Copy the binaries
cp docker/* /usr/bin/

# Create the containerd service file and start it
cat >/etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

# Enable at boot
systemctl enable --now containerd.service

# Prepare the docker service file
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
EOF

# Prepare the docker socket file
cat > /etc/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

# Create the docker group
groupadd docker

# Start docker
systemctl enable --now docker.socket  && systemctl enable --now docker.service

# Verify
docker info

# Configure registry mirrors
mkdir /etc/docker/ -pv
cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
    },
  "data-root": "/var/lib/docker"
}
EOF
systemctl daemon-reload 
systemctl stop docker
systemctl restart docker

2.2.2 Install cri-dockerd

# Kubernetes 1.24 and later no longer support the dockershim, so install cri-dockerd
# Download cri-dockerd
# wget  https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.3/cri-dockerd-0.3.3.amd64.tgz

# Unpack cri-dockerd
tar xvf cri-dockerd-*.amd64.tgz 
cp -r cri-dockerd/  /usr/bin/
chmod +x /usr/bin/cri-dockerd/cri-dockerd

# Write the service file
cat >  /usr/lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

# Write the socket file
cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

# Start cri-docker
systemctl daemon-reload 
systemctl enable cri-docker --now
systemctl restart cri-docker
systemctl status cri-docker

2.3. Download and install k8s and etcd (on master01 only)

2.3.1 Unpack the k8s installation package

# Download the packages
# wget https://dl.k8s.io/v1.27.3/kubernetes-server-linux-amd64.tar.gz
# wget https://github.com/etcd-io/etcd/releases/download/v3.5.9/etcd-v3.5.9-linux-amd64.tar.gz

# Unpack the k8s binaries
cd cby
tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Unpack the etcd binaries
tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/

# Check the contents of /usr/local/bin
ls /usr/local/bin/
containerd               crictl       etcdctl                  kube-proxy
containerd-shim          critest      kube-apiserver           kube-scheduler
containerd-shim-runc-v1  ctd-decoder  kube-controller-manager
containerd-shim-runc-v2  ctr          kubectl
containerd-stress        etcd         kubelet

2.3.2 Check the versions

[root@k8s-master01 ~]#  kubelet --version
Kubernetes v1.27.3
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.9
API version: 3.5
[root@k8s-master01 ~]# 

2.3.3 Copy the components to the other k8s nodes

Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02'

# Copy the master components
for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

# Copy the worker components
for NODE in $Work; do     scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

# Run on all nodes
mkdir -p /opt/cni/bin

2.3 Create the certificate-related files

# See the GitHub repository, or grab the pre-built bundle
https://github.com/cby-chen/Kubernetes/
https://github.com/cby-chen/Kubernetes/tags
https://github.com/cby-chen/Kubernetes/releases/download/v1.27.3/kubernetes-v1.27.3.tar

3. Generate the certificates

# Download the certificate tools on the master01 node
# wget "https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64" -O /usr/local/bin/cfssl
# wget "https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64" -O /usr/local/bin/cfssljson

# They are also included in the software bundle
cp cfssl_*_linux_amd64 /usr/local/bin/cfssl
cp cfssljson_*_linux_amd64 /usr/local/bin/cfssljson

# Make them executable
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
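
A quick sanity check that the tool is on the PATH and executable:

cfssl version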

3.1. Generate the etcd certificates

Unless otherwise noted, the following operations are performed on all master nodes.

3.1.1 Create the certificate directory on all master nodes

mkdir /etc/etcd/ssl -p

3.1.2 Generate the etcd certificates on master01

cd pki
# Generate the etcd certificate and key (if you think you may scale out later, you can list a few extra reserved IPs here)
# If you have no IPv6 you may either remove or keep the IPv6 addresses
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.0.31,192.168.0.32,192.168.0.33,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30,::1 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

3.1.3 Copy the certificates to the other nodes

Master='k8s-master02 k8s-master03'
for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.2. Generate the k8s certificates

Unless otherwise noted, the following operations are performed on all master nodes.

3.2.1 Create the certificate directory on all k8s nodes

mkdir -p /etc/kubernetes/pki

3.2.2 Generate the k8s certificates on master01

cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# Generate a root certificate; some extra IPs are listed as reserved addresses for nodes added later
# 10.96.0.1 is the first address of the service network segment (it has to be calculated); 192.168.0.36 is the high-availability VIP
# If you have no IPv6 you may either remove or keep the IPv6 addresses
cfssl gencert   \
-ca=/etc/kubernetes/pki/ca.pem   \
-ca-key=/etc/kubernetes/pki/ca-key.pem   \
-config=ca-config.json   \
-hostname=10.96.0.1,192.168.0.36,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.0.31,192.168.0.32,192.168.0.33,192.168.0.34,192.168.0.35,192.168.0.36,192.168.0.37,192.168.0.38,192.168.0.39,192.168.1.70,fc00:43f4:1eea:1::10,fc00:43f4:1eea:1::20,fc00:43f4:1eea:1::30,fc00:43f4:1eea:1::40,fc00:43f4:1eea:1::50,fc00:43f4:1eea:1::60,fc00:43f4:1eea:1::70,fc00:43f4:1eea:1::80,fc00:43f4:1eea:1::90,fc00:43f4:1eea:1::100,::1   \
-profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.2.3 Generate the apiserver aggregation certificate

cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 

# A warning is printed here; it can be ignored
cfssl gencert  \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem   \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   \
-config=ca-config.json   \
-profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.2.4 Generate the controller-manager certificate

In section 5 (High availability configuration) you choose which HA scheme to use:
If you use haproxy + keepalived, use --server=https://192.168.0.36:9443
If you use the nginx scheme, use --server=https://127.0.0.1:8443

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry
# In section 5 (High availability configuration) choose the HA scheme you use
# If you use haproxy + keepalived, use `--server=https://192.168.0.36:9443`
# If you use the nginx scheme, use `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a user (credentials) entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

# In section 5 (High availability configuration) choose the HA scheme you use
# If you use haproxy + keepalived, use `--server=https://192.168.0.36:9443`
# If you use the nginx scheme, use `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# In section 5 (High availability configuration) choose the HA scheme you use
# If you use haproxy + keepalived, use `--server=https://192.168.0.36:9443`
# If you use the nginx scheme, use `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://127.0.0.1:8443     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin  \
  --client-certificate=/etc/kubernetes/pki/admin.pem     \
  --client-key=/etc/kubernetes/pki/admin-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes    \
  --cluster=kubernetes     \
  --user=kubernetes-admin     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.2.5 Create the kube-proxy certificate

In section 5 (High availability configuration) you choose which HA scheme to use:
If you use haproxy + keepalived, use --server=https://192.168.0.36:9443
If you use the nginx scheme, use --server=https://127.0.0.1:8443

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
   
# In section 5 (High availability configuration) choose the HA scheme you use
# If you use haproxy + keepalived, use `--server=https://192.168.0.36:9443`
# If you use the nginx scheme, use `--server=https://127.0.0.1:8443`
kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://127.0.0.1:8443     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy  \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem     \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kube-proxy@kubernetes    \
  --cluster=kubernetes     \
  --user=kube-proxy     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kube-proxy@kubernetes  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

3.2.5 Create the ServiceAccount key (secret)

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.2.6 Copy the certificates to the other master nodes

# Create the directory on the other nodes
# mkdir  /etc/kubernetes/pki/ -p

for NODE in k8s-master02 k8s-master03; do  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do  scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done;  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do  scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.2.7 Check the certificates

ls /etc/kubernetes/pki/
admin.csr          controller-manager.csr      kube-proxy.csr
admin-key.pem      controller-manager-key.pem  kube-proxy-key.pem
admin.pem          controller-manager.pem      kube-proxy.pem
apiserver.csr      front-proxy-ca.csr          sa.key
apiserver-key.pem  front-proxy-ca-key.pem      sa.pub
apiserver.pem      front-proxy-ca.pem          scheduler.csr
ca.csr             front-proxy-client.csr      scheduler-key.pem
ca-key.pem         front-proxy-client-key.pem  scheduler.pem
ca.pem             front-proxy-client.pem

# 26 files in total is correct
ls /etc/kubernetes/pki/ |wc -l
26
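
Optionally, you can inspect which IPs and hostnames the apiserver certificate really covers (requires openssl, which ships with the base system):

openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"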

4. Configure the k8s system components

4.1. etcd configuration

4.1.1 master01 configuration

# If you want to use IPv6, simply replace the IPv4 addresses with IPv6 addresses
cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.31:2380'
listen-client-urls: 'https://192.168.0.31:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.31:2380'
advertise-client-urls: 'https://192.168.0.31:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.31:2380,k8s-master02=https://192.168.0.32:2380,k8s-master03=https://192.168.0.33:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2 master02 configuration

# If you want to use IPv6, simply replace the IPv4 addresses with IPv6 addresses
cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.32:2380'
listen-client-urls: 'https://192.168.0.32:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.32:2380'
advertise-client-urls: 'https://192.168.0.32:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.31:2380,k8s-master02=https://192.168.0.32:2380,k8s-master03=https://192.168.0.33:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3 master03 configuration

# If you want to use IPv6, simply replace the IPv4 addresses with IPv6 addresses
cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.33:2380'
listen-client-urls: 'https://192.168.0.33:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.33:2380'
advertise-client-urls: 'https://192.168.0.33:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.31:2380,k8s-master02=https://192.168.0.32:2380,k8s-master03=https://192.168.0.33:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.2. Create the service units (on all master nodes)

4.2.1 Create etcd.service and start it

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

4.2.2 Create the etcd certificate directory

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

4.2.3 Check the etcd status

# If you want to use IPv6, simply replace the IPv4 addresses with IPv6 addresses
export ETCDCTL_API=3
etcdctl --endpoints="192.168.0.33:2379,192.168.0.32:2379,192.168.0.31:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.0.33:2379 | 6ae2196f75cd6d95 |   3.5.9 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.0.32:2379 | 46cbf93f7713a252 |   3.5.9 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.0.31:2379 | ec6051ffc7487dd7 |   3.5.9 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
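
In addition to `endpoint status`, a per-member health probe against the same endpoints can be run with the standard `endpoint health` subcommand (same certificates as above):

etcdctl --endpoints="192.168.0.33:2379,192.168.0.32:2379,192.168.0.31:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  endpoint health --write-out=table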

5. High availability configuration (on the master servers)

Note: choose one of 5.1 and 5.2 (or combine them, see below).

Choose which HA scheme to use; you can also deploy both at the same time to cover internal and external access, for example:
5.1 The NGINX scheme provides high availability inside the cluster
5.2 The haproxy + keepalived scheme provides access from outside the cluster

In section 3.2 (Generate the k8s certificates):

If you use the nginx scheme, use --server=https://127.0.0.1:8443
If you use haproxy + keepalived, use --server=https://192.168.0.36:9443

5.1 NGINX high availability scheme

5.1.1 Compile nginx

# Install the build environment
yum install gcc -y

# Download and unpack the nginx source
# wget http://nginx.org/download/nginx-1.25.1.tar.gz
tar xvf nginx-*.tar.gz
cd nginx-*

# Compile
./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install 

# Copy the compiled nginx to the other nodes
node='k8s-master02 k8s-master03 k8s-node01 k8s-node02'
for NODE in $node; do scp -r /usr/local/nginx/ $NODE:/usr/local/nginx/; done

5.1.2 Write the startup configuration

Run this on all hosts.

# Write the nginx configuration file
cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
worker_processes 1;
events {
    worker_connections  1024;
}
stream {
    upstream backend {
        least_conn;
        hash \$remote_addr consistent;
        server 192.168.0.31:6443        max_fails=3 fail_timeout=30s;
        server 192.168.0.32:6443        max_fails=3 fail_timeout=30s;
        server 192.168.0.33:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF

# Write the systemd service file
cat > /etc/systemd/system/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx
ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536 

[Install]
WantedBy=multi-user.target
EOF

# Enable at boot
systemctl enable --now  kube-nginx 
systemctl restart kube-nginx
systemctl status kube-nginx

5.2 keepalived + haproxy high availability scheme

5.2.1 Install the keepalived and haproxy services

systemctl disable --now firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
yum -y install keepalived haproxy

5.2.2 Modify the haproxy configuration (identical on every node)

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:9443
 bind 127.0.0.1:9443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server  k8s-master01  192.168.0.31:6443 check
 server  k8s-master02  192.168.0.32:6443 check
 server  k8s-master03  192.168.0.33:6443 check
EOF

5.2.3 Configure keepalived on Master01 (MASTER node)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    # Mind the NIC name
    interface eth0 
    mcast_src_ip 192.168.0.31
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.36
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.2.4 Configure keepalived on Master02 (BACKUP node)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    # Mind the NIC name
    interface eth0
    mcast_src_ip 192.168.0.32
    virtual_router_id 51
    priority 80
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.36
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.2.5 Configure keepalived on Master03 (BACKUP node)

# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    # Mind the NIC name
    interface eth0
    mcast_src_ip 192.168.0.33
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.36
    }
    track_script {
      chk_apiserver
    }
}
EOF

5.2.6 Health-check script (on the LB hosts)

cat >  /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash

err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

5.2.7 Start the services

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

5.2.8 Test the high availability setup

# The VIP should be pingable
[root@k8s-node02 ~]# ping 192.168.0.36
# And reachable via telnet
[root@k8s-node02 ~]# telnet 192.168.0.36 9443
# Shut down the MASTER node and check whether the VIP fails over to a BACKUP node
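
Once the kube-apiserver units from section 6 are running, you can also verify the full path through the load balancer; by default the apiserver's /healthz endpoint is readable without authentication, so an "ok" response means the VIP is really forwarding to a healthy apiserver (for the nginx scheme, probe https://127.0.0.1:8443/healthz instead):

curl -k https://192.168.0.36:9443/healthz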

6. k8s component configuration

Create the following directories on all k8s nodes.

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1. Create the apiserver service (all master nodes)

6.1.1 master01 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.0.31 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.0.31:2379,https://192.168.0.32:2379,https://192.168.0.33:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.2 master02 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.0.32 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.0.31:2379,https://192.168.0.32:2379,https://192.168.0.33:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.3 master03 configuration

cat > /usr/lib/systemd/system/kube-apiserver.service  << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.0.33 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.0.31:2379,https://192.168.0.32:2379,https://192.168.0.33:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

6.1.4 Start the apiserver (all master nodes)

systemctl daemon-reload
systemctl enable --now kube-apiserver
systemctl restart kube-apiserver
systemctl status kube-apiserver

6.2. Configure the kube-controller-manager service

# Configure on all master nodes; the configuration is identical
# 172.16.0.0/12 is the pod network segment; change it to your own as needed
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
      --v=2 \\
      --bind-address=0.0.0.0 \\
      --root-ca-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
      --leader-elect=true \\
      --use-service-account-credentials=true \\
      --node-monitor-grace-period=40s \\
      --node-monitor-period=5s \\
      --controllers=*,bootstrapsigner,tokencleaner \\
      --allocate-node-cidrs=true \\
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\
      --cluster-cidr=172.16.0.0/12,fc00::/48 \\
      --node-cidr-mask-size-ipv4=24 \\
      --node-cidr-mask-size-ipv6=120 \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.2.1 Start kube-controller-manager and check its status

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager

6.3. Configure the kube-scheduler service

6.3.1 Configure on all master nodes; the configuration is identical

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
      --v=2 \\
      --bind-address=0.0.0.0 \\
      --leader-elect=true \\
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.3.2 Start it and check the service status

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler

7. TLS Bootstrapping configuration

7.1 Configure on master01

# Match this to the high-availability scheme chosen in "5. High-availability configuration":
# - with haproxy + keepalived, use `--server=https://192.168.0.36:8443`
# - with the nginx scheme, use `--server=https://127.0.0.1:8443`
cd bootstrap

kubectl config set-cluster kubernetes     \
--certificate-authority=/etc/kubernetes/pki/ca.pem     \
--embed-certs=true     --server=https://127.0.0.1:8443     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user     \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token is defined in bootstrap.secret.yaml; if you change it, update it in that file as well
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
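An optional sanity check, assuming bootstrap.secret.yaml stores the token-id and token-secret as plain stringData fields: the two values should together match the token written into the bootstrap kubeconfig above.

# Hypothetical check: the two halves should form c8ad9c.2e4d610cf3e7426e
grep -E "token-id|token-secret" bootstrap.secret.yaml
grep "token:" /etc/kubernetes/bootstrap-kubelet.kubeconfig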

7.2 Check the cluster status; if everything looks fine, continue with the following steps

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}

# Remember to run this step, do not forget!!!
kubectl create -f bootstrap.secret.yaml

8. Node configuration

8.1 On master01, copy the certificates to the other nodes

cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2 kubelet configuration

Note: 8.2.1 and 8.2.2 must match the container runtime you installed in sections 2.1 and 2.2 above; a quick way to confirm which runtime socket is present on a node is shown below.
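This optional check simply looks for the socket paths used by the two kubelet unit files that follow; exactly one of them should exist on each node:

# One of these should exist, matching the kubelet unit you are about to install
ls -l /run/cri-dockerd.sock 2>/dev/null           # Docker via cri-dockerd (8.2.1)
ls -l /run/containerd/containerd.sock 2>/dev/null # containerd (8.2.2)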

8.2.1 When using Docker as the runtime

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kubelet \\
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \\
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    --config=/etc/kubernetes/kubelet-conf.yml \\
    --container-runtime-endpoint=unix:///run/cri-dockerd.sock  \\
    --node-labels=node.kubernetes.io/node=

[Install]
WantedBy=multi-user.target
EOF

8.2.2 When using containerd as the runtime (recommended)

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

# Configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \\
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    --config=/etc/kubernetes/kubelet-conf.yml \\
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \\
    --node-labels=node.kubernetes.io/node=

[Install]
WantedBy=multi-user.target
EOF

8.2.3 Create the kubelet configuration file on all k8s nodes

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

8.2.4 Start kubelet

systemctl daemon-reload
systemctl enable --now kubelet
systemctl restart kubelet
systemctl status kubelet

8.2.5 Check the cluster

[root@k8s-master01 ~]# kubectl  get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   18s   v1.27.3
k8s-master02   Ready    <none>   16s   v1.27.3
k8s-master03   Ready    <none>   16s   v1.27.3
k8s-node01     Ready    <none>   14s   v1.27.3
k8s-node02     Ready    <none>   14s   v1.27.3
[root@k8s-master01 ~]#

8.2.6 Check the container runtime

# When containerd is the runtime:
[root@k8s-master01 ~]# kubectl describe node | grep Runtime
  Container Runtime Version:  containerd://1.7.2
  Container Runtime Version:  containerd://1.7.2
  Container Runtime Version:  containerd://1.7.2
  Container Runtime Version:  containerd://1.7.2
  Container Runtime Version:  containerd://1.7.2

# When Docker is the runtime:
[root@k8s-master01 ~]# kubectl describe node | grep Runtime
  Container Runtime Version:  docker://24.0.2
  Container Runtime Version:  docker://24.0.2
  Container Runtime Version:  docker://24.0.2
  Container Runtime Version:  docker://24.0.2
  Container Runtime Version:  docker://24.0.2

8.3 kube-proxy configuration

8.3.1 Send the kubeconfig to the other nodes

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

8.3.2 Add the kube-proxy service file on all k8s nodes

cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

8.3.3 Add the kube-proxy configuration on all k8s nodes

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00::/48
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

8.3.4 Start kube-proxy

systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable --now kube-proxy
systemctl status kube-proxy
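Optionally (not part of the original steps), you can confirm that kube-proxy really came up in ipvs mode by querying the metrics port that the configuration above binds to 127.0.0.1:10249; listing the IPVS rules with ipvsadm assumes that tool is installed:

# Should print "ipvs" on each node
curl -s 127.0.0.1:10249/proxyMode
# Optional: show the generated IPVS virtual servers
ipvsadm -Ln | head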

9. Install a network plugin

Note: choose either 9.1 or 9.2, not both. It is recommended to take a snapshot at this point before proceeding, so that you can roll back if anything goes wrong later.

On CentOS 7, libseccomp must be upgraded first, otherwise the network plugin cannot be installed.

# https://github.com/opencontainers/runc/releases
# Upgrade runc
# wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64

install -m 755 runc.amd64 /usr/local/sbin/runc
cp -p /usr/local/sbin/runc  /usr/local/bin/runc
cp -p /usr/local/sbin/runc  /usr/bin/runc

# Download a libseccomp package newer than 2.4
yum -y install http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm

# Tsinghua mirror
yum -y install https://mirrors.tuna.tsinghua.edu.cn/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm

# Check the installed version
[root@k8s-master-1 ~]# rpm -qa | grep libseccomp
libseccomp-2.5.1-1.el8.x86_64

9.1 Install Calico

9.1.1 Change the Calico network CIDRs

wget https://mirrors.chenby.cn/https://github.com/projectcalico/calico/blob/master/manifests/calico-typha.yaml

cp calico-typha.yaml calico.yaml
cp calico-typha.yaml calico-ipv6.yaml

vim calico.yaml
# In the calico-config ConfigMap section:
    "ipam": {
        "type": "calico-ipam",
    },

    - name: IP
      value: "autodetect"

    - name: CALICO_IPV4POOL_CIDR
      value: "172.16.0.0/12"

# vim calico-ipv6.yaml
# In the calico-config ConfigMap section:
    "ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "true"
    },

    - name: IP
      value: "autodetect"

    - name: IP6
      value: "autodetect"

    - name: CALICO_IPV4POOL_CIDR
      value: "172.16.0.0/12"

    - name: CALICO_IPV6POOL_CIDR
      value: "fc00::/48"

    - name: FELIX_IPV6SUPPORT
      value: "true"

# If the Docker images cannot be pulled, a domestic mirror registry can be used
sed -i "s#docker.io/calico/#m.daocloud.io/docker.io/calico/#g" calico.yaml
sed -i "s#docker.io/calico/#m.daocloud.io/docker.io/calico/#g" calico-ipv6.yaml

# If you do not have public IPv6, use calico.yaml
kubectl apply -f calico.yaml

# If you do have public IPv6, use calico-ipv6.yaml
# kubectl apply -f calico-ipv6.yaml

9.1.2 Check the container status

# Calico initialization is slow; be patient, it can take around ten minutes
[root@k8s-master01 ~]# kubectl  get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6747f75cdc-fbvvc   1/1     Running   0          61s
kube-system   calico-node-fs7hl                          1/1     Running   0          61s
kube-system   calico-node-jqz58                          1/1     Running   0          61s
kube-system   calico-node-khjlg                          1/1     Running   0          61s
kube-system   calico-node-wmf8q                          1/1     Running   0          61s
kube-system   calico-node-xc6gn                          1/1     Running   0          61s
kube-system   calico-typha-6cdc4b4fbc-57snb              1/1     Running   0          61s

9.2 Install Cilium

9.2.1 Install Helm

# [root@k8s-master01 ~]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
# [root@k8s-master01 ~]# chmod 700 get_helm.sh
# [root@k8s-master01 ~]# ./get_helm.sh

wget https://files.m.daocloud.io/get.helm.sh/helm-v3.12.1-linux-amd64.tar.gz
tar xvf helm-*-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/

9.2.2 Install Cilium

# Add the repo
helm repo add cilium https://helm.cilium.io

# Switch to a domestic mirror
helm pull cilium/cilium
tar xvf cilium-*.tgz
cd cilium/
sed -i "s#quay.io/#m.daocloud.io/quay.io/#g" values.yaml

# Install with default parameters (run from the directory that contains the unpacked cilium/ chart)
helm install  cilium ./cilium/ -n kube-system

# Enable IPv6
# helm install cilium cilium/cilium --namespace kube-system --set ipv6.enabled=true

# Enable routing information and monitoring plugins
# helm install cilium cilium/cilium --namespace kube-system --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"

9.2.3 Check

[root@k8s-master01 ~]# kubectl  get pod -A | grep cil
kube-system   cilium-gmr6c                       1/1     Running       0             5m3s
kube-system   cilium-kzgdj                       1/1     Running       0             5m3s
kube-system   cilium-operator-69b677f97c-6pw4k   1/1     Running       0             5m3s
kube-system   cilium-operator-69b677f97c-xzzdk   1/1     Running       0             5m3s
kube-system   cilium-q2rnr                       1/1     Running       0             5m3s
kube-system   cilium-smx5v                       1/1     Running       0             5m3s
kube-system   cilium-tdjq4                       1/1     Running       0             5m3s
[root@k8s-master01 ~]#

9.2.4 Download the dedicated monitoring dashboards

If monitoring was not enabled at install time, this step can be skipped.

[root@k8s-master01 yaml]# wget https://mirrors.chenby.cn/https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml
[root@k8s-master01 yaml]# sed -i "s#docker.io/#m.daocloud.io/docker.io/#g" monitoring-example.yaml
[root@k8s-master01 yaml]# kubectl  apply -f monitoring-example.yaml
namespace/cilium-monitoring created
serviceaccount/prometheus-k8s created
configmap/grafana-config created
configmap/grafana-cilium-dashboard created
configmap/grafana-cilium-operator-dashboard created
configmap/grafana-hubble-dashboard created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/grafana created
service/prometheus created
deployment.apps/grafana created
deployment.apps/prometheus created
[root@k8s-master01 yaml]#

9.2.5 Download and deploy the connectivity test cases

Note: the test cases can only complete after CoreDNS has been installed.

[root@k8s-master01 yaml]# wget https://mirrors.chenby.cn/https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml
[root@k8s-master01 yaml]# sed -i "s#google.com#baidu.cn#g" connectivity-check.yaml
sed -i "s#quay.io/#m.daocloud.io/quay.io/#g" connectivity-check.yaml
[root@k8s-master01 yaml]# kubectl  apply -f connectivity-check.yaml
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
deployment.apps/pod-to-a created
deployment.apps/pod-to-external-1111 created
deployment.apps/pod-to-a-denied-cnp created
deployment.apps/pod-to-a-allowed-cnp created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-nodeport created
deployment.apps/pod-to-b-intra-node-nodeport created
service/echo-a created
service/echo-b created
service/echo-b-headless created
service/echo-b-host-headless created
ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created
[root@k8s-master01 yaml]#

9.2.6 Check the pods

[root@k8s-master01 yaml]# kubectl  get pod -A
NAMESPACE           NAME                                                     READY   STATUS    RESTARTS      AGE
cilium-monitoring   grafana-59957b9549-6zzqh                                 1/1     Running   0             10m
cilium-monitoring   prometheus-7c8c9684bb-4v9cl                              1/1     Running   0             10m
default             chenby-75b5d7fbfb-7zjsr                                  1/1     Running   0             27h
default             chenby-75b5d7fbfb-hbvr8                                  1/1     Running   0             27h
default             chenby-75b5d7fbfb-ppbzg                                  1/1     Running   0             27h
default             echo-a-6799dff547-pnx6w                                  1/1     Running   0             10m
default             echo-b-fc47b659c-4bdg9                                   1/1     Running   0             10m
default             echo-b-host-67fcfd59b7-28r9s                             1/1     Running   0             10m
default             host-to-b-multi-node-clusterip-69c57975d6-z4j2z          1/1     Running   0             10m
default             host-to-b-multi-node-headless-865899f7bb-frrmc           1/1     Running   0             10m
default             pod-to-a-allowed-cnp-5f9d7d4b9d-hcd8x                    1/1     Running   0             10m
default             pod-to-a-denied-cnp-65cc5ff97b-2rzb8                     1/1     Running   0             10m
default             pod-to-a-dfc64f564-p7xcn                                 1/1     Running   0             10m
default             pod-to-b-intra-node-nodeport-677868746b-trk2l            1/1     Running   0             10m
default             pod-to-b-multi-node-clusterip-76bbbc677b-knfq2           1/1     Running   0             10m
default             pod-to-b-multi-node-headless-698c6579fd-mmvd7            1/1     Running   0             10m
default             pod-to-b-multi-node-nodeport-5dc4b8cfd6-8dxmz            1/1     Running   0             10m
default             pod-to-external-1111-8459965778-pjt9b                    1/1     Running   0             10m
default             pod-to-external-fqdn-allow-google-cnp-64df9fb89b-l9l4q   1/1     Running   0             10m
kube-system         cilium-7rfj6                                             1/1     Running   0             56s
kube-system         cilium-d4cch                                             1/1     Running   0             56s
kube-system         cilium-h5x8r                                             1/1     Running   0             56s
kube-system         cilium-operator-5dbddb6dbf-flpl5                         1/1     Running   0             56s
kube-system         cilium-operator-5dbddb6dbf-gcznc                         1/1     Running   0             56s
kube-system         cilium-t2xlz                                             1/1     Running   0             56s
kube-system         cilium-z65z7                                             1/1     Running   0             56s
kube-system         coredns-665475b9f8-jkqn8                                 1/1     Running   1 (36h ago)   36h
kube-system         hubble-relay-59d8575-9pl9z                               1/1     Running   0             56s
kube-system         hubble-ui-64d4995d57-nsv9j                               2/2     Running   0             56s
kube-system         metrics-server-776f58c94b-c6zgs                          1/1     Running   1 (36h ago)   37h
[root@k8s-master01 yaml]#

9.2.7 Change the services to NodePort

If monitoring was not enabled at install time, this step can be skipped.

[root@k8s-master01 yaml]# kubectl  edit svc  -n kube-system hubble-ui
service/hubble-ui edited
[root@k8s-master01 yaml]#
[root@k8s-master01 yaml]# kubectl  edit svc  -n cilium-monitoring grafana
service/grafana edited
[root@k8s-master01 yaml]#
[root@k8s-master01 yaml]# kubectl  edit svc  -n cilium-monitoring prometheus
service/prometheus edited
[root@k8s-master01 yaml]#

# In each of the three services, change the service type to:
type: NodePort
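If you prefer a non-interactive change instead of kubectl edit, the same result can be achieved with kubectl patch; this is an optional alternative, not part of the original procedure:

# Switch the three services to NodePort without opening an editor
kubectl -n kube-system patch svc hubble-ui -p '{"spec":{"type":"NodePort"}}'
kubectl -n cilium-monitoring patch svc grafana -p '{"spec":{"type":"NodePort"}}'
kubectl -n cilium-monitoring patch svc prometheus -p '{"spec":{"type":"NodePort"}}'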

9.2.8 Check the ports

If monitoring was not enabled at install time, this step can be skipped.

[root@k8s-master01 yaml]# kubectl get svc -A | grep monit
cilium-monitoring   grafana                NodePort    10.100.250.17    <none>        3000:30707/TCP           15m
cilium-monitoring   prometheus             NodePort    10.100.131.243   <none>        9090:31155/TCP           15m
[root@k8s-master01 yaml]#

[root@k8s-master01 yaml]# kubectl get svc -A | grep hubble
kube-system         hubble-metrics         ClusterIP   None             <none>        9965/TCP                 5m12s
kube-system         hubble-peer            ClusterIP   10.100.150.29    <none>        443/TCP                  5m12s
kube-system         hubble-relay           ClusterIP   10.109.251.34    <none>        80/TCP                   5m12s
kube-system         hubble-ui              NodePort    10.102.253.59    <none>        80:31219/TCP             5m12s
[root@k8s-master01 yaml]#

9.2.9 Access

If monitoring was not enabled at install time, this step can be skipped.

http://192.168.0.31:30707
http://192.168.0.31:31155
http://192.168.0.31:31219

10. Install CoreDNS

10.1 The following steps are performed only on master01

10.1.1 Modify the files

# Download the tgz package
helm repo add coredns https://coredns.github.io/helm
helm pull coredns/coredns
tar xvf coredns-*.tgz
cd coredns/

# Modify the IP address
vim values.yaml
cat values.yaml | grep clusterIP:
clusterIP: "10.96.0.10"

# Example
---
service:
# clusterIP: ""
# clusterIPs: []
# loadBalancerIP: ""
# externalIPs: []
# externalTrafficPolicy: ""
# ipFamilyPolicy: ""
  # The name of the Service
  # If not set, a name is generated using the fullname template
  clusterIP: "10.96.0.10"
  name: ""
  annotations: {}
---

# Switch to a domestic mirror (the docker source change is optional)
sed -i "s#coredns/#m.daocloud.io/docker.io/coredns/#g" values.yaml
sed -i "s#registry.k8s.io/#m.daocloud.io/registry.k8s.io/#g" values.yaml

# Install with default parameters
helm install  coredns ./coredns/ -n kube-system
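Once the CoreDNS pod is Running, an optional check (not part of the original steps) is to query the service IP configured above directly from a node; this assumes dig from bind-utils is available on the host:

# Should return 10.96.0.1, the cluster IP of the kubernetes service
dig @10.96.0.10 kubernetes.default.svc.cluster.local +short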

11. Install Metrics Server

11.1 The following steps are performed only on master01

11.1.1 Install metrics-server

In recent versions of Kubernetes, system resource metrics are collected by metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.

# Single-instance version
wget https://mirrors.chenby.cn/https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# High-availability version
wget https://mirrors.chenby.cn/https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml

# Modify the configuration
vim components.yaml
vim high-availability.yaml

---
# 1
defaultArgs:
  - --cert-dir=/tmp
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  - --kubelet-insecure-tls
  - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
  - --requestheader-username-headers=X-Remote-User
  - --requestheader-group-headers=X-Remote-Group
  - --requestheader-extra-headers-prefix=X-Remote-Extra-

# 2
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki

# 3
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---

# Switch to a domestic mirror (the docker source change is optional)
sed -i "s#registry.k8s.io/#m.daocloud.io/registry.k8s.io/#g" *.yaml

# Choose one of the two
kubectl apply -f components.yaml
# kubectl apply -f high-availability.yaml

11.1.2 Wait a moment, then check the status

kubectl  top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   197m         4%     1497Mi          39%
k8s-master02   152m         3%     1315Mi          34%
k8s-master03   112m         2%     1274Mi          33%
k8s-node01     142m         3%     777Mi           20%
k8s-node02     71m          1%     682Mi           17%
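The same metrics pipeline also serves per-Pod figures; an optional follow-up check (the output will of course differ in your cluster):

# List resource usage for all Pods in all namespaces
kubectl top pod -A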

12. Cluster verification

12.1 Deploy a test Pod

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Check
kubectl  get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

12.2 Use the Pod to resolve the kubernetes service in the default namespace

# Look up the service name
kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

# Resolve it from the Pod
kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

12.3 Test whether names can be resolved across namespaces

# Check which services exist
kubectl  get svc -A
NAMESPACE     NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP         76m
kube-system   calico-typha      ClusterIP   10.105.100.82   <none>        5473/TCP        35m
kube-system   coredns-coredns   ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   8m14s
kube-system   metrics-server    ClusterIP   10.105.60.31    <none>        443/TCP         109s

# Resolve across namespaces
kubectl exec  busybox -n default -- nslookup coredns-coredns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 coredns-coredns.kube-system.svc.cluster.local

Name:      coredns-coredns.kube-system
Address 1: 10.96.0.10 coredns-coredns.kube-system.svc.cluster.local
[root@k8s-master01 metrics-server]#

12.4 Every node must be able to reach the kubernetes service on port 443 and the kube-dns service on port 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

12.5 Pods must be able to reach each other

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-76754ff848-pw4xg   1/1     Running   0          38m     172.25.244.193   k8s-master01   <none>           <none>
calico-node-97m55                          1/1     Running   0          38m     192.168.0.34     k8s-node01     <none>           <none>
calico-node-hlz7j                          1/1     Running   0          38m     192.168.0.32     k8s-master02   <none>           <none>
calico-node-jtlck                          1/1     Running   0          38m     192.168.0.33     k8s-master03   <none>           <none>
calico-node-lxfkf                          1/1     Running   0          38m     192.168.0.35     k8s-node02     <none>           <none>
calico-node-t667x                          1/1     Running   0          38m     192.168.0.31     k8s-master01   <none>           <none>
calico-typha-59d75c5dd4-gbhfp              1/1     Running   0          38m     192.168.0.35     k8s-node02     <none>           <none>
coredns-coredns-c5c6d4d9b-bd829            1/1     Running   0          10m     172.25.92.65     k8s-master02   <none>           <none>
metrics-server-7c8b55c754-w7q8v            1/1     Running   0          3m56s   172.17.125.3     k8s-node01     <none>           <none>

# Enter busybox and ping a pod on another node
kubectl exec -ti busybox -- sh
/ # ping 192.168.0.34
PING 192.168.0.34 (192.168.0.34): 56 data bytes
64 bytes from 192.168.0.34: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.0.34: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.0.34: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.0.34: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.0.34: seq=4 ttl=63 time=0.907 ms

# Successful replies show that this Pod can communicate across namespaces and across hosts

12.6 Create three replicas and confirm they are spread across different nodes (they can be deleted afterwards)

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

kubectl  apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl  get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete the nginx deployment
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml

13. Install the dashboard

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --namespace kube-system

13.1 Change the dashboard service to NodePort; skip this if it is already NodePort

kubectl edit svc kubernetes-dashboard -n kube-system

  type: NodePort

13.2 Check the port number

kubectl get svc kubernetes-dashboard -n kube-system
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.120.110   <none>        443:30034/TCP   34s

13.3 Create a token

cat > dashboard-user.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

kubectl  apply -f dashboard-user.yaml

# Create a token
kubectl -n kube-system create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6ImtHTXRwbS1IR3NabHR5WDhYTUhUX1Rnekt4M1pzNFNNM3NwLXdkSlh3T2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjg3ODc1MjIyLCJpYXQiOjE2ODc4NzE2MjIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZjZiMzYzYzEtZjE1Ni00YTBhLTk5MzUtYmZmN2YzZWJlNTU2In19LCJuYmYiOjE2ODc4NzE2MjIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.uNIwe8tzA7IjdBWiCroZxT7OGw9IiCdPT0R1E1G5k965tVH9spVxz6PFvWLwNl6QnjhvseDUAbz0yBIJ3v42nsp1EYZeKXMYxfPGqgZ_7EQ4xYh-zEEoHLtdVVo20beCVtzTzEV_0doUehV_GLDt1es794OI7s4SlxYOtc1MMg50VUr4jkUvfuDPqHSMh2cirnTJXL9TX_3K-30W4c_fN2TCxWoWpwa4G-5oCORx8j9FLejTldHDFB_Z4TNhirNQLpi05C6OT43HiVxrsD6fgvPUQatUznCedb48RWTjCk8nY0CTsZ3VR6Vby4MOrlHf57asMFfe6lSTIcDSj0lV1g
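Tokens created this way expire after a short default lifetime; if you need one that stays valid longer for testing, kubectl can mint one with an explicit duration (an optional extra, not part of the original steps):

# Request a token valid for 24 hours
kubectl -n kube-system create token admin-user --duration=24h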

13.4 Log in to the dashboard

https://192.168.0.31:30034/

14. Install ingress

14.1 Deploy

wget https://mirrors.chenby.cn/https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

# Switch to a domestic mirror (the docker source change is optional)
sed -i "s#registry.k8s.io/#m.daocloud.io/registry.k8s.io/#g" *.yaml

cat > backend.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
EOF

kubectl  apply -f deploy.yaml
kubectl  apply -f backend.yaml

cat > ingress-demo-app.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
EOF

# Run this after the ingress controller has finished creating:
kubectl  apply -f ingress-demo-app.yaml

kubectl  get ingress
NAME               CLASS   HOSTS                            ADDRESS        PORTS   AGE
ingress-host-bar   nginx   hello.chenby.cn,demo.chenby.cn   192.168.0.32   80      7s

14.2 Filter and check the ingress ports

# Change to NodePort
kubectl edit svc -n ingress-nginx   ingress-nginx-controller
type: NodePort

[root@hello ~/yaml]# kubectl  get svc -A | grep ingress
ingress-nginx          ingress-nginx-controller             NodePort    10.104.231.36    <none>        80:32636/TCP,443:30579/TCP   104s
ingress-nginx          ingress-nginx-controller-admission   ClusterIP   10.101.85.88     <none>        443/TCP                      105s
[root@hello ~/yaml]#
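An optional way to exercise the demo ingress rules without touching DNS is to send the expected Host header straight at the controller's HTTP NodePort (32636 in the output above; substitute the port your cluster assigned):

# Should reach hello-server via the rule for hello.chenby.cn
curl -H "Host: hello.chenby.cn" http://192.168.0.31:32636/
# Should reach nginx-demo via the /nginx prefix rule for demo.chenby.cn
curl -H "Host: demo.chenby.cn" http://192.168.0.31:32636/nginx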

15. IPv6 test

# Deploy the test application
cat<<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chenby
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chenby
  template:
    metadata:
      labels:
        app: chenby
    spec:
      containers:
      - name: chenby
        image: docker.io/library/nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: chenby
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: NodePort
  selector:
    app: chenby
  ports:
  - port: 80
    targetPort: 80
EOF

# Check the port
[root@k8s-master01 ~]# kubectl  get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
chenby         NodePort    fd00::a29c       <none>        80:30779/TCP   5s
[root@k8s-master01 ~]#

# Access over the internal network
[root@localhost yaml]# curl -I http://[fd00::a29c]
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:35 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

[root@localhost yaml]# curl -I http://192.168.0.31:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:59 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

# Access over the public network
[root@localhost yaml]# curl -I http://[2409:8a10:9e18:9020::10]:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:54 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

16. Install command-line auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
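If you also use a short alias for kubectl, completion can be wired to the alias as well; an optional addition on top of the steps above:

# Make completion work for the alias "k" too
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc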

Appendix

# The DaoCloud registry can be used as an image accelerator; the replacement rules are:
cr.l5d.io/  ===> m.daocloud.io/cr.l5d.io/
docker.elastic.co/  ===> m.daocloud.io/docker.elastic.co/
docker.io/  ===> m.daocloud.io/docker.io/
gcr.io/  ===> m.daocloud.io/gcr.io/
ghcr.io/  ===> m.daocloud.io/ghcr.io/
k8s.gcr.io/  ===> m.daocloud.io/k8s.gcr.io/
mcr.microsoft.com/  ===> m.daocloud.io/mcr.microsoft.com/
nvcr.io/  ===> m.daocloud.io/nvcr.io/
quay.io/  ===> m.daocloud.io/quay.io/
registry.jujucharms.com/  ===> m.daocloud.io/registry.jujucharms.com/
registry.k8s.io/  ===> m.daocloud.io/registry.k8s.io/
registry.opensource.zalan.do/  ===> m.daocloud.io/registry.opensource.zalan.do/
rocks.canonical.com/  ===> m.daocloud.io/rocks.canonical.com/

# Check the image versions yourself; images are updated constantly and this document cannot track them in real time

# docker pull the images
docker pull registry.cn-hangzhou.aliyuncs.com/chenby/cni:master
docker pull registry.cn-hangzhou.aliyuncs.com/chenby/node:master
docker pull registry.cn-hangzhou.aliyuncs.com/chenby/kube-controllers:master
docker pull registry.cn-hangzhou.aliyuncs.com/chenby/typha:master
docker pull registry.cn-hangzhou.aliyuncs.com/chenby/coredns:v1.10.0
docker pull registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6
docker pull registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.5.2
docker pull kubernetesui/dashboard:v2.7.0
docker pull kubernetesui/metrics-scraper:v1.0.8
docker pull quay.io/cilium/cilium:v1.12.6
docker pull quay.io/cilium/certgen:v0.1.8
docker pull quay.io/cilium/hubble-relay:v1.12.6
docker pull quay.io/cilium/hubble-ui-backend:v0.9.2
docker pull quay.io/cilium/hubble-ui:v0.9.2
docker pull quay.io/cilium/cilium-etcd-operator:v2.0.7
docker pull quay.io/cilium/operator:v1.12.6
docker pull quay.io/cilium/clustermesh-apiserver:v1.12.6
docker pull quay.io/coreos/etcd:v3.5.4
docker pull quay.io/cilium/startup-script:d69851597ea019af980891a4628fb36b7880ec26

# docker save the images
docker save registry.cn-hangzhou.aliyuncs.com/chenby/cni:master -o cni.tar
docker save registry.cn-hangzhou.aliyuncs.com/chenby/node:master -o node.tar
docker save registry.cn-hangzhou.aliyuncs.com/chenby/typha:master -o typha.tar
docker save registry.cn-hangzhou.aliyuncs.com/chenby/kube-controllers:master -o kube-controllers.tar
docker save registry.cn-hangzhou.aliyuncs.com/chenby/coredns:v1.10.0 -o coredns.tar
docker save registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6 -o pause.tar
docker save registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.5.2 -o metrics-server.tar
docker save kubernetesui/dashboard:v2.7.0 -o dashboard.tar
docker save kubernetesui/metrics-scraper:v1.0.8 -o metrics-scraper.tar
docker save quay.io/cilium/cilium:v1.12.6 -o cilium.tar
docker save quay.io/cilium/certgen:v0.1.8 -o certgen.tar
docker save quay.io/cilium/hubble-relay:v1.12.6 -o hubble-relay.tar
docker save quay.io/cilium/hubble-ui-backend:v0.9.2 -o hubble-ui-backend.tar
docker save quay.io/cilium/hubble-ui:v0.9.2 -o hubble-ui.tar
docker save quay.io/cilium/cilium-etcd-operator:v2.0.7 -o cilium-etcd-operator.tar
docker save quay.io/cilium/operator:v1.12.6 -o operator.tar
docker save quay.io/cilium/clustermesh-apiserver:v1.12.6 -o clustermesh-apiserver.tar
docker save quay.io/coreos/etcd:v3.5.4 -o etcd.tar
docker save quay.io/cilium/startup-script:d69851597ea019af980891a4628fb36b7880ec26 -o startup-script.tar

# Transfer them to every node
for NODE in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp -r images/  $NODE:/root/ ; done

# Create the namespace
ctr ns create k8s.io

# Import the images
ctr --namespace k8s.io image import images/cni.tar
ctr --namespace k8s.io image import images/node.tar
ctr --namespace k8s.io image import images/typha.tar
ctr --namespace k8s.io image import images/kube-controllers.tar
ctr --namespace k8s.io image import images/coredns.tar
ctr --namespace k8s.io image import images/pause.tar
ctr --namespace k8s.io image import images/metrics-server.tar
ctr --namespace k8s.io image import images/dashboard.tar
ctr --namespace k8s.io image import images/metrics-scraper.tar
ctr --namespace k8s.io image import images/cilium.tar
ctr --namespace k8s.io image import images/certgen.tar
ctr --namespace k8s.io image import images/hubble-relay.tar
ctr --namespace k8s.io image import images/hubble-ui-backend.tar
ctr --namespace k8s.io image import images/hubble-ui.tar
ctr --namespace k8s.io image import images/cilium-etcd-operator.tar
ctr --namespace k8s.io image import images/operator.tar
ctr --namespace k8s.io image import images/clustermesh-apiserver.tar
ctr --namespace k8s.io image import images/etcd.tar
ctr --namespace k8s.io image import images/startup-script.tar

# Pull the chart tarball and unpack it
helm pull cilium/cilium

# Check the image versions
root@hello:~/cilium# cat values.yaml| grep tag: -C1
  repository: "quay.io/cilium/cilium"
  tag: "v1.12.6"
  pullPolicy: "IfNotPresent"
--
    repository: "quay.io/cilium/certgen"
    tag: "v0.1.8@sha256:4a456552a5f192992a6edcec2febb1c54870d665173a33dc7d876129b199ddbd"
    pullPolicy: "IfNotPresent"
--
      repository: "quay.io/cilium/hubble-relay"
      tag: "v1.12.6"
       # hubble-relay-digest
--
        repository: "quay.io/cilium/hubble-ui-backend"
        tag: "v0.9.2@sha256:a3ac4d5b87889c9f7cc6323e86d3126b0d382933bd64f44382a92778b0cde5d7"
        pullPolicy: "IfNotPresent"
--
        repository: "quay.io/cilium/hubble-ui"
        tag: "v0.9.2@sha256:d3596efc94a41c6b772b9afe6fe47c17417658956e04c3e2a28d293f2670663e"
        pullPolicy: "IfNotPresent"
--
    repository: "quay.io/cilium/cilium-etcd-operator"
    tag: "v2.0.7@sha256:04b8327f7f992693c2cb483b999041ed8f92efc8e14f2a5f3ab95574a65ea2dc"
    pullPolicy: "IfNotPresent"
--
    repository: "quay.io/cilium/operator"
    tag: "v1.12.6"
    # operator-generic-digest
--
    repository: "quay.io/cilium/startup-script"
    tag: "d69851597ea019af980891a4628fb36b7880ec26"
    pullPolicy: "IfNotPresent"
--
    repository: "quay.io/cilium/cilium"
    tag: "v1.12.6"
    # cilium-digest
--
      repository: "quay.io/cilium/clustermesh-apiserver"
      tag: "v1.12.6"
      # clustermesh-apiserver-digest
--
        repository: "quay.io/coreos/etcd"
        tag: "v3.5.4@sha256:795d8660c48c439a7c3764c2330ed9222ab5db5bb524d8d0607cac76f7ba82a3"
        pullPolicy: "IfNotPresent"

About

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN, GitHub, Zhihu, OSCHINA, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, and my personal blog

Search for 《小陈运维》 on any of these platforms.

Articles are mainly published on the WeChat official account: 《Linux运维交流社区》.