Reference articles

Primary references:
《Centos7.6部署k8s v1.16.4高可用集群(主备模式)》 (CentOS 7.6: deploying a k8s v1.16.4 HA cluster, active/standby mode)
《使用kubeadm在CentOS 8上部署kubernetes 1.18》 (Deploying Kubernetes 1.18 on CentOS 8 with kubeadm)
《使用kubeadm部署k8s集群[v1.18.0]》 (Deploying a k8s cluster [v1.18.0] with kubeadm)

1. Environment planning

a. Host plan

#   Hostname     CentOS version   IP               Docker version   Flannel version   Keepalived version   Spec    Notes
#   master01     7-8.2003         192.168.1.121    19.03.9          v0.11.0           v1.3.5               2C2G    control plane
#   master02     7-8.2003         192.168.1.122    19.03.9          v0.11.0           v1.3.5               2C2G    control plane
#   master03     7-8.2003         192.168.1.123    19.03.9          v0.11.0           v1.3.5               2C2G    control plane
#   work01       7-8.2003         192.168.1.131    19.03.9          /                 /                    2C2G    worker node
#   work02       7-8.2003         192.168.1.132    19.03.9          /                 /                    2C2G    worker node
#   work03       7-8.2003         192.168.1.133    19.03.9          /                 /                    2C2G    worker node
#   VIP          7-8.2003         192.168.1.200    19.03.9          v0.11.0           v1.3.5               2C2G    floats across the control-plane nodes
#   client       7-8.2003         192.168.1.201    /                /                 /                    2C2G    client
#   7 servers in total: 3 control-plane nodes (one of them a VirtualPC), 3 workers, and 1 client that is left untouched.
#   Test host with 16 GB RAM: after all nodes had joined master1, memory usage sat around 99% and `kubectl get nodes` queries were frequently refused, so each VM's memory was later lowered to 1200 MB.

b. Vagrant preparation

Preparation of the CentOS 7 base box is omitted here.
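The Vagrantfile below assumes a box already registered under the name centos7. A minimal sketch of registering a locally downloaded box (the file name is a placeholder, not from the original notes):

vagrant box add --name centos7 ./CentOS-7-x86_64-Vagrant.box   # placeholder box file
vagrant box list                                               # should now list "centos7"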

Vagrantfile.default to build the virtual machines
Vagrant.configure("2") do |config|
    config.vm.provision "shell", inline: <<-SHELL
        echo "Machines all up"
    SHELL

    # SSH password: enabled on demand at reboot via `vagrant reload --provision`
    config.ssh.username="vagrant"
    config.ssh.password="vagrant"
    config.vm.provision "shell", inline: <<-SHELL
        sudo su && sed -i "s/PasswordAuthentication no/# PasswordAuthentication no/" "/etc/ssh/sshd_config" && systemctl restart sshd
    SHELL

    config.vm.define "k8s-master1" do |node|
        node.vm.hostname="master1"
        node.vm.box="centos7"
        node.vm.network "public_network", ip: "192.168.1.121"
        node.vm.provider "virtualbox" do |vb|
            vb.memory=1200
            vb.cpus=2
        end
    end

    config.vm.define "k8s-master2" do |node|
        node.vm.hostname="master2"
        node.vm.box="centos7"
        node.vm.network "public_network", ip: "192.168.1.122"
        node.vm.provider "virtualbox" do |vb|
            vb.memory=1200
            vb.cpus=2
        end
    end

    config.vm.define "k8s-master3" do |node|
        node.vm.hostname="master3"
        node.vm.box="centos7"
        node.vm.network "public_network", ip: "192.168.1.123"
        node.vm.provider "virtualbox" do |vb|
            vb.memory=1200
            vb.cpus=2
        end
    end

    config.vm.define "k8s-worker1" do |node|
        node.vm.hostname="worker1"
        node.vm.box="centos7"
        node.vm.network "public_network", ip: "192.168.1.131"
        node.vm.provider "virtualbox" do |vb|
            vb.memory=1200
            vb.cpus=2
        end
    end

    config.vm.define "k8s-worker2" do |node|
        node.vm.hostname="worker2"
        node.vm.box="centos7"
        node.vm.network "public_network", ip: "192.168.1.132"
        node.vm.provider "virtualbox" do |vb|
            vb.memory=1200
            vb.cpus=2
        end
    end

    config.vm.define "k8s-worker3" do |node|
        node.vm.hostname="worker3"
        node.vm.box="centos7"
        node.vm.network "public_network", ip: "192.168.1.133"
        node.vm.provider "virtualbox" do |vb|
            vb.memory=1200
            vb.cpus=2
        end
    end
end

Password login is not enabled on the first boot; a second pass with vagrant reload --provision (reboot and re-run provisioning) applies the initialization to all VMs in one go, after which password-based remote SSH login works.
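A sketch of the resulting two-step workflow (the exact commands are assumed here, not spelled out verbatim in the notes):

vagrant up                      # first boot; provisioners run once
vagrant reload --provision      # reboot and re-run provisioning, enabling sshd password login
ssh vagrant@192.168.1.121       # afterwards, password "vagrant" works from Xshell or any ssh client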

2. Notes walkthrough

Apart from the client, which stays empty, the other 6 machines are split into: 121 (master + VIP .200), the other masters (122, 123), and the workers (131, 132, 133). Run the following installation steps on them:

a. Environment initialization

Initialization work: system parameters, firewall, swap, hosts, and SSH (so that Xshell can connect later); configure the mirror repositories for docker-ce, kubelet, kubeadm and kubectl, and install the packages.

System initialization script
[192.168.1.121]# vim core.sh
#!/bin/bash

### 0 Make sure we are root (password login is allowed)
if [[ $(id | grep root) == "" ]]; then
  sudo su
  echo "go to root"
fi
id

### 1. Environment initialization
# 1.1 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# 1.2 Disable selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
setenforce 0

# 1.3 Disable swap, including its fstab entry
swapoff -a
sed -i.bak '/swap/s/^/#/' /etc/fstab

# 1.4 Host plan
result=$(cat /etc/hosts | grep "节点主机")
if [[ "$result" != "" ]]; then
    echo
else
    cat <<EOF >>  /etc/hosts
#节点主机 (cluster nodes)
192.168.1.121 master1
192.168.1.122 master2
192.168.1.123 master3
192.168.1.131 worker1
192.168.1.132 worker2
192.168.1.133 worker3
# Fallback for GitHub raw.githubusercontent.com timeouts
199.232.68.133 raw.githubusercontent.com
EOF
fi

# 1.5 Temporary hostname setup is handled by Vagrant and skipped here: hostnamectl set-hostname master1

# 1.6 Time sync: ntp / chrony
timedatectl set-timezone Asia/Shanghai
yum install chrony -y
cat <<EOF >  /etc/chrony.conf
server ntp1.aliyun.com iburst minpoll 4 maxpoll 10
server ntp2.aliyun.com iburst minpoll 4 maxpoll 10
server ntp3.aliyun.com iburst minpoll 4 maxpoll 10
EOF
systemctl start chronyd.service
systemctl enable chronyd.service

# 1.7 Enable forwarding, i.e. let bridged traffic pass through iptables
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

### 2. Docker installation
# 2.1 Update the repo sources
yum install -y yum-utils device-mapper-persistent-data lvm2 wget bash-completion.noarch
# docker-ce.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo "https://download.docker.com/linux/centos/docker-ce.repo"
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo

# 2.2 Install docker, kubelet, kubeadm, kubectl
#     (the kubernetes yum repo and the kubelet/kubeadm/kubectl install are assumed to happen here; that part is not shown in these notes)
#yum remove docker docker-common docker-selinux docker-engine -y
yum clean all && yum makecache fast
yum install -y docker-ce
systemctl start kubelet
systemctl enable kubelet

# 2.3 Configure the docker cgroup driver
IPADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-eth1|grep IPADDR)
ip=${IPADDR:7}
point=${ip:10} # depends on the actual address
cat <<EOF > /etc/docker/daemon.json
{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "registry-mirrors": ["https://kuogup1r.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "bip": "172.7.${point}.1/24",
    "live-restore": true
}
EOF
#systemctl daemon-reload
systemctl start docker
systemctl enable docker

echo ">> Done!"

The script can be run repeatedly (it is idempotent).
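A sketch of pushing core.sh to every node and running it from one workstation, assuming password SSH (enabled by the Vagrantfile provisioner) and sshpass on the workstation; the host list follows the plan above:

#!/bin/bash
# Hypothetical helper: run core.sh on all masters and workers over password SSH.
NODES="192.168.1.121 192.168.1.122 192.168.1.123 192.168.1.131 192.168.1.132 192.168.1.133"
for ip in $NODES; do
    sshpass -p 'vagrant' scp -o StrictHostKeyChecking=no core.sh vagrant@$ip:/tmp/core.sh
    sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@$ip 'sudo bash /tmp/core.sh'
done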

Image download script
# 2.4 Pull the k8s images via the gcrxio mirror registry and re-tag them
for img in `kubeadm config images list`; do
    docker pull "gcrxio/$(echo $img | tr '/' '_')" && docker tag "gcrxio/$(echo $img | tr '/' '_')" $img
    docker rmi -f "gcrxio/$(echo $img | tr '/' '_')"   # necessary: drop the mirror tag
done
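Once the loop finishes, a quick check (a sketch, not from the original notes) that every image kubeadm expects is available locally:

for img in $(kubeadm config images list); do
    docker image inspect "$img" > /dev/null 2>&1 && echo "OK       $img" || echo "MISSING  $img"
done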

At this point each node has docker-ce, kubelet, kubeadm and kubectl installed, and the Docker images needed to set up k8s have been pulled; node preparation is complete.

b. Cluster initialization

Main tasks: initialize the VIP (first control-plane) node, distribute the certificates and the VIP kubeconfig, and deploy the flannel network.

Distribute certificates
[192.168.1.121]#
for host in '121' '122' '123'; do
    ssh root@192.168.1.$host 'ln -s /opt/etcd-v3.4.9-linux-amd64/ /opt/etcd'
    ssh root@192.168.1.$host 'mkdir -p /data/etcd/etcd-server /data/logs/etcd-server /opt/certs'
    scp /opt/certs/etcd-peer* root@192.168.1.$host:/opt/certs/
    scp /opt/certs/ca.pem root@192.168.1.$host:/opt/certs/
done
master1 node
#!/bin/bash
cat <<EOF >  kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
apiServer:
  certSANs:    # hostnames, IPs and the VIP of every kube-apiserver node
  - master1
  - master2
  - master3
  - worker1
  - worker2
  - worker3
  - 192.168.1.121
  - 192.168.1.122
  - 192.168.1.123
  - 192.168.1.131
  - 192.168.1.132
  - 192.168.1.133
  - 192.168.1.200
controlPlaneEndpoint: "192.168.1.121:6443"   # a DNS name also works; using the VIP .200 timed out here
networking:
  podSubnet: "10.244.0.0/16"
EOF
# https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
#kubeadm init --apiserver-advertise-address 192.168.1.121 --kubernetes-version 1.18.3 --pod-network-cidr 10.244.0.0/16
kubeadm init --config=kubeadm-config.yaml

# Passwordless SSH from master1 (for testing only)
ssh-keygen -f /root/.ssh/id_rsa -N ""
CONTROL_PLANE_IPS="192.168.1.122 192.168.1.123 192.168.1.131 192.168.1.132 192.168.1.133"
yum install -y sshpass
for host in ${CONTROL_PLANE_IPS}; do
    sshpass -p 'vagrant' ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
done

# Distribute the keys and config files
# master2/master3 must first run: mkdir -p /etc/kubernetes/pki/etcd/ ; do not copy more than the files below
for host in 22 23; do
    scp /etc/kubernetes/admin.conf root@192.168.1.1$host:/etc/kubernetes/
    scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.pub,sa.key,front-proxy-ca.crt,front-proxy-ca.key} root@192.168.1.1$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@192.168.1.1$host:/etc/kubernetes/pki/etcd/
done

# Make the admin (VIP) kubeconfig the global KUBECONFIG
export KUBECONFIG=/etc/kubernetes/admin.conf
result=$(cat /etc/rc.d/rc.local | grep "export KUBECONFIG")
if [[ "$result" != "" ]]
then
    echo
else
    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/rc.d/rc.local
fi

echo ">> Done!"
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 192.168.1.121:6443 --token 3l0971.r1y2kl78b2q5zowe \
--discovery-token-ca-cert-hash sha256:7d4d9dab1a73082a7a4b4e2df98c042028cdba08400a91b925669448eb95d6ee \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.121:6443 --token 3l0971.r1y2kl78b2q5zowe \
--discovery-token-ca-cert-hash sha256:7d4d9dab1a73082a7a4b4e2df98c042028cdba08400a91b925669448eb95d6ee
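The bootstrap token printed above expires after 24 hours by default. If the join commands are run later, a fresh token and the matching worker join command can be generated on master1 (standard kubeadm behaviour, not specific to these notes):

kubeadm token create --print-join-command
# For an extra control-plane node, append --control-plane to the printed command;
# the CA certificates and keys were already copied to master2/master3 by the scp loop above.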

c. non-vip.sh: joining the cluster (2 masters + 3 workers)

Check the cluster state on the VIP host master1.

kubectl get pod --all-namespaces
kubectl get node
### If these report errors, export the kubeconfig or re-apply flannel
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl replace --force -f kube-flannel.yml
Masters join the cluster
# Global variable
export KUBECONFIG=/etc/kubernetes/admin.conf
result=$(cat /etc/rc.d/rc.local | grep "export KUBECONFIG")
if [[ "$result" != "" ]]
then
    echo
else
    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/rc.d/rc.local
fi

kubeadm join 192.168.1.121:6443 --token ovplco.edwbwsrtff7egbio \
    --discovery-token-ca-cert-hash sha256:a566c43f8cfa958d17b07193f9bc4e3a0f0303b44108a8ed153d553647d9d566 \
    --control-plane
Workers join the cluster
kubeadm join 192.168.1.121:6443 --token ovplco.edwbwsrtff7egbio \
    --discovery-token-ca-cert-hash sha256:a566c43f8cfa958d17b07193f9bc4e3a0f0303b44108a8ed153d553647d9d566

Check the cluster state on master1 again to confirm.
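A sketch of the confirmation commands on master1 (node names follow the host plan); nodes will stay NotReady until the flannel network in the next step is deployed:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide

# Optional: label the workers so `kubectl get nodes` shows a worker ROLE
for n in worker1 worker2 worker3; do
    kubectl label node "$n" node-role.kubernetes.io/worker= --overwrite
done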

d. Add-on installation

Deploy the flannel network
wget -O kube-flannel.yml "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/' kube-flannel.yml
kubectl apply -f kube-flannel.yml
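To confirm the overlay network comes up, a hedged check (the DaemonSet name and the kube-system namespace match the coreos kube-flannel.yml of this era; newer manifests use a dedicated kube-flannel namespace):

kubectl get daemonset -n kube-system | grep flannel      # one kube-flannel-ds pod per node
kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get nodes                                        # nodes should turn Ready once flannel is running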
Install kubernetes-dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
# modify recommended.yaml as needed
kubectl create -f recommended.yaml
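The "modify recommended.yaml as needed" note is kept as-is; a common tweak (an assumption here, not stated in the original) is switching the kubernetes-dashboard Service to NodePort so the UI is reachable from outside the cluster. After applying the manifest, a login token can be created with a hypothetical admin-user ServiceAccount, for test use only:

# Hypothetical admin-user ServiceAccount bound to cluster-admin (test environments only)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Print the bearer token used to sign in to the dashboard (on k8s v1.18 the token lives in a Secret)
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')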