Binary installation of Kubernetes (k8s) v1.25.0 with IPv4/IPv6 dual stack

Open source is hard work — if this helps you, please give the project a star, thank you!

This document describes a highly available binary deployment of Kubernetes (k8s) with IPv4+IPv6 dual-stack support.

My goal in using IPv6 is to reach the cluster over the public internet, which is why static IPv6 addresses are configured below.

If you have no IPv6 environment, or do not want to use IPv6, simply do not assign IPv6 addresses to the hosts. Leaving IPv6 unconfigured does not affect the remaining steps, and the cluster will still support IPv6, which keeps the option open for later expansion.

If you do not want IPv6, just leave the NICs without IPv6 addresses. Do NOT delete or modify the IPv6-related configuration elsewhere in this guide, or things will break.
https://github.com/cby-chen/K...

Manual project address: https://github.com/cby-chen/K...
Script project address: https://github.com/cby-chen/B...

It is strongly recommended to read this document on GitHub. The GitHub copy is updated whenever problems are found, and documents for new versions are published there as promptly as possible.

Documentation and install packages have been generated for 1.21.13, 1.22.10, 1.23.3, 1.23.4, 1.23.5, 1.23.6, 1.23.7, 1.24.0, 1.24.1, 1.24.2, 1.24.3, and 1.25.0.
1. Environment

| Hostname | IP address | Role | Software |
|---|---|---|---|
| Master01 | 192.168.1.61 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client |
| Master02 | 192.168.1.62 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client |
| Master03 | 192.168.1.63 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client |
| Node01 | 192.168.1.64 | node | kubelet, kube-proxy, nfs-client |
| Node02 | 192.168.1.65 | node | kubelet, kube-proxy, nfs-client |
| Node03 | 192.168.1.66 | node | kubelet, kube-proxy, nfs-client |
| Node04 | 192.168.1.67 | node | kubelet, kube-proxy, nfs-client |
| Node05 | 192.168.1.68 | node | kubelet, kube-proxy, nfs-client |
| Lb01 | 192.168.1.70 | Lb01 node | haproxy, keepalived |
| Lb02 | 192.168.1.75 | Lb02 node | haproxy, keepalived |
| — | 192.168.1.69 | VIP | — |

Software versions:

| Software | Version |
|---|---|
| kernel | 5.18.0-1.el8 |
| CentOS 8 | v8 (or v7) |
| kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | v1.25.0 |
| etcd | v3.5.4 |
| containerd | v1.6.8 |
| cfssl | v1.6.1 |
| cni | v1.1.1 |
| crictl | v1.24.2 |
| haproxy | v1.8.27 |
| keepalived | v2.1.5 |

Network segments:
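The host inventory above is reused all through this guide (hosts file, ssh loops, certificates). A small sketch — the `NAMES`/`IPS` arrays simply mirror the table and are my own convention, not part of the original tooling — shows how the matching `/etc/hosts` lines can be generated instead of typed by hand:

```shell
#!/bin/bash
# Hypothetical helper: generate /etc/hosts lines from the inventory table above.
# NAMES and IPS are parallel arrays; adjust them if your addressing differs.
NAMES=(k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05 lb01 lb02 lb-vip)
IPS=(192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64 192.168.1.65 192.168.1.66 192.168.1.67 192.168.1.68 192.168.1.70 192.168.1.75 192.168.1.69)

hosts_lines=""
for i in "${!NAMES[@]}"; do
  hosts_lines+="${IPS[$i]} ${NAMES[$i]}"$'\n'
done
printf '%s' "$hosts_lines"
```

The generated lines match the `/etc/hosts` block used in section 1.18.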
- Physical hosts: 192.168.1.0/24
- Service network: 10.96.0.0/12
- Pod network: 172.16.0.0/12

Pre-built install packages are available at: https://github.com/cby-chen/K...
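The per-component download URLs used later in this guide all follow each project's release-asset naming convention, so they can be derived from the version table above. A sketch (the variable names are my own; the versions are the ones this guide installs):

```shell
#!/bin/bash
# Build the download URLs from the versions in the table above, so bumping
# one variable updates every URL consistently.
KUBE_VERSION=v1.25.0
ETCD_VERSION=v3.5.4
CONTAINERD_VERSION=1.6.8
CNI_VERSION=v1.1.1
CRICTL_VERSION=v1.24.2

URLS=(
  "https://dl.k8s.io/${KUBE_VERSION}/kubernetes-server-linux-amd64.tar.gz"
  "https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz"
  "https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/cri-containerd-cni-${CONTAINERD_VERSION}-linux-amd64.tar.gz"
  "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz"
  "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz"
)
printf '%s\n' "${URLS[@]}"
```

To fetch everything in one go: `for u in "${URLS[@]}"; do wget "$u"; done`.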
1.1. Basic system environment configuration for k8s

1.2. Configure IP addresses

The address after `root@` is each host's current address; the command assigns its new static address. Note that the DNS value is single-quoted so it nests correctly inside the double-quoted remote command (the original used nested double quotes, which break the shell quoting).

```shell
ssh root@192.168.1.100 "nmcli con mod ens18 ipv4.addresses 192.168.1.61/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.106 "nmcli con mod ens18 ipv4.addresses 192.168.1.62/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.110 "nmcli con mod ens18 ipv4.addresses 192.168.1.63/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.114 "nmcli con mod ens18 ipv4.addresses 192.168.1.64/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.115 "nmcli con mod ens18 ipv4.addresses 192.168.1.65/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.116 "nmcli con mod ens18 ipv4.addresses 192.168.1.66/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.117 "nmcli con mod ens18 ipv4.addresses 192.168.1.67/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.118 "nmcli con mod ens18 ipv4.addresses 192.168.1.68/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.156 "nmcli con mod ens18 ipv4.addresses 192.168.1.70/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.160 "nmcli con mod ens18 ipv4.addresses 192.168.1.75/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
```

```shell
# If you have no IPv6, simply skip this block.
ssh root@192.168.1.61 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::10; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
ssh root@192.168.1.62 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::20; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
ssh root@192.168.1.63 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::30; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
ssh root@192.168.1.64 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::40; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
ssh root@192.168.1.65 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::50; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
ssh root@192.168.1.66 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::60; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
ssh root@192.168.1.67 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::70; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
ssh root@192.168.1.68 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::80; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
ssh root@192.168.1.70 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::90; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
ssh root@192.168.1.75 "nmcli con mod ens18 ipv6.addresses 2408:8207:78cc:5cc1:181c::100; nmcli con mod ens18 ipv6.gateway fe80::2e2:69ff:fe3f:b198; nmcli con mod ens18 ipv6.method manual; nmcli con mod ens18 ipv6.dns '2001:4860:4860::8888'; nmcli con up ens18"
```

1.3. Set hostnames

Run the matching command on each host:

```shell
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03
hostnamectl set-hostname k8s-node04
hostnamectl set-hostname k8s-node05
hostnamectl set-hostname lb01
hostnamectl set-hostname lb02
```

1.4. Configure yum repositories

```shell
# For CentOS 7
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
    -i.bak \
    /etc/yum.repos.d/CentOS-*.repo

# For CentOS 8
sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://mirror.centos.org/$contentdir|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
    -i.bak \
    /etc/yum.repos.d/CentOS-*.repo

# For a private mirror
sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org/\$contentdir|baseurl=http://192.168.1.123/centos|g' -i.bak /etc/yum.repos.d/CentOS-*.repo
```

1.5. Install some essential tools

```shell
yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl
```
1.6. Optional: download the required tools

1. Download the Kubernetes 1.25.+ binary package

GitHub release page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md

```shell
wget https://dl.k8s.io/v1.25.0/kubernetes-server-linux-amd64.tar.gz
```

2. Download the etcdctl binary package

GitHub release page: https://github.com/etcd-io/etcd/releases

```shell
wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz
```

3. Download the containerd binary package

GitHub release page: https://github.com/containerd/containerd/releases

4. When downloading containerd, pick the package that bundles the CNI plugins:

```shell
wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-cni-1.6.8-linux-amd64.tar.gz
```

5. Download the cfssl binaries

GitHub release page: https://github.com/cloudflare/cfssl/releases

```shell
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
```

6. Download the CNI plugins

GitHub release page: https://github.com/containernetworking/plugins/releases

```shell
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
```

7. Download the crictl client binary

GitHub release page: https://github.com/kubernetes-sigs/cri-tools/releases

```shell
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
```

1.7. Disable the firewall

```shell
systemctl disable --now firewalld
```

1.8. Disable SELinux

```shell
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
```

1.9. Disable the swap partition

```shell
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0
```

1.10. Network configuration (choose ONE of the two methods)

```shell
# Method 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# Method 2
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager
```

1.11. Time synchronization (except on the lb nodes)

```shell
# Server
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Clients
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.1.61 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v
```

Note: the clients live in 192.168.1.0/24, so the server's `allow` rule must cover that subnet (the original listed 10.0.0.0/24, which would reject them).

1.12. Configure ulimit

```shell
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
```

1.13. Configure passwordless SSH login

```shell
yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64 192.168.1.65 192.168.1.66 192.168.1.67 192.168.1.68 192.168.1.70 192.168.1.75"
export SSHPASS=123123
for HOST in $IP;do
    sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
```

1.14. Add the ELRepo source (except on the lb nodes)

```shell
# For RHEL-8 or CentOS-8
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# For RHEL-7, SL-7 or CentOS-7
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo

# List available packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
```

1.15. Upgrade the kernel to 4.18 or later (except on the lb nodes)

```shell
# Install the latest kernel
# I chose the stable kernel-ml build; use kernel-lt if you prefer the long-term maintenance build
yum --enablerepo=elrepo-kernel install kernel-ml

# Check which kernels are installed
rpm -qa | grep kernel
```
```shell
kernel-core-4.18.0-358.el8.x86_64
kernel-tools-4.18.0-358.el8.x86_64
kernel-ml-core-5.16.7-1.el8.elrepo.x86_64
kernel-ml-5.16.7-1.el8.elrepo.x86_64
kernel-modules-4.18.0-358.el8.x86_64
kernel-4.18.0-358.el8.x86_64
kernel-tools-libs-4.18.0-358.el8.x86_64
kernel-ml-modules-5.16.7-1.el8.elrepo.x86_64

# Check the default kernel
grubby --default-kernel
/boot/vmlinuz-5.16.7-1.el8.elrepo.x86_64

# If it is not the newest one, set it explicitly
grubby --set-default /boot/vmlinuz-<your kernel version>.x86_64

# Reboot for the change to take effect
reboot

# Combined one-liner for v8:
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --default-kernel ; reboot

# Combined one-liner for v7:
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y ; sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo ; sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo ; yum --disablerepo="*" --enablerepo="elrepo-kernel" list available -y ; yum --enablerepo=elrepo-kernel install kernel-ml -y ; grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) ; grubby --default-kernel ; reboot
```

1.16. Install ipvsadm (except on the lb nodes)

```shell
yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
```

1.17. Tune kernel parameters (except on the lb nodes)

```shell
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system
```

1.18. Configure /etc/hosts on all nodes

```shell
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# Omit the IPv6 entries if you have no IPv6
2408:8207:78cc:5cc1:181c::10 k8s-master01
2408:8207:78cc:5cc1:181c::20 k8s-master02
2408:8207:78cc:5cc1:181c::30 k8s-master03
2408:8207:78cc:5cc1:181c::40 k8s-node01
2408:8207:78cc:5cc1:181c::50 k8s-node02
2408:8207:78cc:5cc1:181c::60 k8s-node03
2408:8207:78cc:5cc1:181c::70 k8s-node04
2408:8207:78cc:5cc1:181c::80 k8s-node05
2408:8207:78cc:5cc1:181c::90 lb01
2408:8207:78cc:5cc1:181c::100 lb02

192.168.1.61 k8s-master01
192.168.1.62 k8s-master02
192.168.1.63 k8s-master03
192.168.1.64 k8s-node01
192.168.1.65 k8s-node02
192.168.1.66 k8s-node03
192.168.1.67 k8s-node04
192.168.1.68 k8s-node05
192.168.1.70 lb01
192.168.1.75 lb02
192.168.1.69 lb-vip
EOF
```

2. Installing the basic k8s components

2.1. Install Containerd as the runtime on all k8s nodes

```shell
# wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

# Create the directories needed by the CNI plugins
mkdir -p /etc/cni/net.d /opt/cni/bin

# Unpack the CNI binaries
tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/

# wget https://github.com/containerd/containerd/releases/download/v1.6.8/cri-containerd-cni-1.6.8-linux-amd64.tar.gz

# Unpack
tar -xzf cri-containerd-cni-1.6.8-linux-amd64.tar.gz -C /
```
Create the systemd service file:

```shell
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
```

2.1.1. Configure the kernel modules required by Containerd

```shell
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
```

2.1.2. Load the modules

```shell
systemctl restart systemd-modules-load.service
```

2.1.3. Configure the kernel parameters required by Containerd

```shell
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load the parameters
sysctl --system
```

2.1.4. Create the Containerd configuration file

```shell
# Generate the default configuration file
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Modify the Containerd configuration file
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image
```

2.1.5. Start Containerd and enable it on boot

```shell
systemctl daemon-reload
systemctl enable --now containerd
```

2.1.6. Point the crictl client at the runtime

```shell
# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz

# Unpack
tar xf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/bin/

# Generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Test
systemctl restart containerd
crictl info
```

2.2. Download and install k8s and etcd (on master01 only)

2.2.1. Unpack the k8s package

```shell
# Download the packages
# wget https://dl.k8s.io/v1.25.0/kubernetes-server-linux-amd64.tar.gz
# wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz

# Unpack the k8s install files
cd cby
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Unpack the etcd install files
tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/

# Check the contents of /usr/local/bin
ls /usr/local/bin/
containerd               containerd-shim-runc-v1  containerd-stress  critest      ctr   etcdctl         kube-controller-manager  kubelet     kube-scheduler
containerd-shim          containerd-shim-runc-v2  crictl             ctd-decoder  etcd  kube-apiserver  kubectl                  kube-proxy
```

2.2.2. Check the versions

```shell
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.25.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.4
API version: 3.5
[root@k8s-master01 ~]#
```

2.2.3. Distribute the components to the other k8s nodes

```shell
Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02 k8s-node03 k8s-node04 k8s-node05'

for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

mkdir -p /opt/cni/bin
```

2.3. Create the certificate-related files

```shell
mkdir pki
cd pki
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" }
  ]
}
EOF

cat > ca-config.json << EOF
{
  "signing": {
    "default": { "expiry": "876000h" },
    "profiles": {
      "kubernetes": {
        "usages": [ "signing", "key encipherment", "server auth", "client auth" ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

cat > etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" }
  ],
  "ca": { "expiry": "876000h" }
}
EOF

cat > front-proxy-ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "ca": { "expiry": "876000h" }
}
EOF

cat > kubelet-csr.json << EOF
{
  "CN": "system:node:\$NODE",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" }
  ]
}
EOF

cat > manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" }
  ]
}
EOF

cat > apiserver-csr.json << EOF
{
  "CN": "kube-apiserver",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" }
  ]
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" }
  ],
  "ca": { "expiry": "876000h" }
}
EOF

cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" }
  ]
}
EOF

cat > front-proxy-client-csr.json << EOF
{
  "CN": "front-proxy-client",
  "key": { "algo": "rsa", "size": 2048 }
}
EOF

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" }
  ]
}
EOF

cat > scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" }
  ]
}
EOF

cd ..
mkdir bootstrap
cd bootstrap
cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF

cd ..
mkdir coredns
cd coredns
cat > coredns.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

cd ..
mkdir metrics-server
cd metrics-server
cat > metrics-server.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF
```

3. Generating the certificates

```shell
# Download the certificate tools on the master01 node
# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson

# They are also included in the install package
cp cfssl_1.6.1_linux_amd64 /usr/local/bin/cfssl
cp cfssljson_1.6.1_linux_amd64 /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
```

3.1. Generate the etcd certificates

Unless otherwise specified, run the following on all master nodes.
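Before feeding the CSR files from section 2.3 to cfssl, it is worth verifying they are valid JSON, since a stray comma makes cfssl fail with an unhelpful error. A self-contained sanity-check sketch (it recreates `etcd-ca-csr.json` in a temp dir for illustration; against the real files you would loop over `pki/*.json` instead):

```shell
#!/bin/bash
# Recreate one CSR file in a temp dir and check that it parses as JSON.
tmp=$(mktemp -d)
cat > "$tmp/etcd-ca-csr.json" << 'EOF'
{
  "CN": "etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" }
  ],
  "ca": { "expiry": "876000h" }
}
EOF

ok=1
for f in "$tmp"/*.json; do
  # python3 -m json.tool exits non-zero on invalid JSON
  python3 -m json.tool "$f" > /dev/null || ok=0
done
echo "json_ok=$ok"
```

Run the same loop over your actual `pki/` directory before invoking `cfssl gencert`.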
...