References
Primary references:
《Centos7.6 部署 k8s v1.16.4 高可用集群 (主备模式)》
《应用 kubeadm 在 Centos8 上部署 kubernetes1.18》
《应用 kubeadm 部署 k8s 集群 [v1.18.0]》
1. Environment planning
a. Host layout
# hostname  CentOS version  IP  docker version  flannel version  Keepalived version  specs  role
# master01 7-8.2003 192.168.1.121 19.03.9 v0.11.0 v1.3.5 2C2G control plane
# master02 7-8.2003 192.168.1.122 19.03.9 v0.11.0 v1.3.5 2C2G control plane
# master03 7-8.2003 192.168.1.123 19.03.9 v0.11.0 v1.3.5 2C2G control plane
# work01 7-8.2003 192.168.1.131 19.03.9 / / 2C2G worker nodes
# work02 7-8.2003 192.168.1.132 19.03.9 / / 2C2G worker nodes
# work03 7-8.2003 192.168.1.133 19.03.9 / / 2C2G worker nodes
# VIP 7-8.2003 192.168.1.200 19.03.9 v0.11.0 v1.3.5 2C2G floats across the control plane nodes
# client 7-8.2003 192.168.1.201 / / / 2C2G client
# 7 servers in total: 3 control plane [1 VirtualPC], 3 workers, and 1 client that is left untouched.
# Test host (16G RAM): after all cluster nodes joined master1, host memory usage sat around 99% and kubectl get nodes was frequently refused, so each VM's memory was later lowered to 1200 MB.
b. Vagrant preparation
Preparation of the centos7 box is omitted.
Vagrantfile.default to build the virtual machines:
Vagrant.configure("2") do |config|
config.vm.provision "shell", inline: <<-SHELL
echo "Mechines All up"
SHELL
# ssh password: enable as needed by re-running the provisioners on reboot (vagrant reload --provision)
config.ssh.username="vagrant"
config.ssh.password="vagrant"
config.vm.provision "shell", inline: <<-SHELL
# the shell provisioner already runs as root, so sudo su is not needed here
sed -i "s/PasswordAuthentication no/# PasswordAuthentication no/" "/etc/ssh/sshd_config" && systemctl restart sshd
SHELL
config.vm.define "k8s-master1" do |node|
node.vm.hostname="master1"
node.vm.box="centos7"
node.vm.network "public_network",ip: "192.168.1.121"
node.vm.provider "virtualbox" do | vb |
vb.memory=1200
vb.cpus=2
end
end
config.vm.define "k8s-master2" do |node|
node.vm.hostname="master2"
node.vm.box="centos7"
node.vm.network "public_network", ip: "192.168.1.122"
node.vm.provider "virtualbox" do | vb |
vb.memory=1200
vb.cpus=2
end
end
config.vm.define "k8s-master3" do |node|
node.vm.hostname="master3"
node.vm.box="centos7"
node.vm.network "public_network", ip: "192.168.1.123"
node.vm.provider "virtualbox" do | vb |
vb.memory=1200
vb.cpus=2
end
end
config.vm.define "k8s-worker1" do |node|
node.vm.hostname="worker1"
node.vm.box="centos7"
node.vm.network "public_network", ip: "192.168.1.131"
node.vm.provider "virtualbox" do | vb |
vb.memory=1200
vb.cpus=2
end
end
config.vm.define "k8s-worker2" do |node|
node.vm.hostname="worker2"
node.vm.box="centos7"
node.vm.network "public_network", ip: "192.168.1.132"
node.vm.provider "virtualbox" do | vb |
vb.memory=1200
vb.cpus=2
end
end
config.vm.define "k8s-worker3" do |node|
node.vm.hostname="worker3"
node.vm.box="centos7"
node.vm.network "public_network", ip: "192.168.1.133"
node.vm.provider "virtualbox" do | vb |
vb.memory=1200
vb.cpus=2
end
end
end
Password login is not enabled on the first bring-up; afterwards run vagrant reload --provision (reboot and re-run the provisioners) to apply the initialization to all machines in one pass, so you can then log in remotely with a password.
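For example, the typical workflow with stock Vagrant commands is:
vagrant up                   # first boot: boxes come up without password SSH
vagrant reload --provision   # reboot all VMs and re-run the provisioners, enabling password SSH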
2. Deployment notes
Apart from the client, which stays empty, the other 6 machines are grouped as 121 (master + VIP .200), masters (122, 123) and workers (131, 132, 133). Run the installation steps below on them.
a. Environment initialization
Initialization covers: system parameters, firewall, swap, hosts entries, SSH (so Xshell can connect later), plus repository setup and installation of docker-ce, kubelet, kubeadm and kubectl.
System initialization script
[192.168.1.121]# vim core.sh
#!/bin/bash
### 0. Re-exec the script via sudo if it is not already running as root
if [[ $(id -u) -ne 0 ]]; then
exec sudo bash "$0" "$@"
fi
id
### 1、环境初始化
# 1.1 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# 1.2 Disable SELinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
setenforce 0
# 1.3 Disable swap (also in /etc/fstab)
swapoff -a
sed -i.bak '/swap/s/^/#/' /etc/fstab
# 1.4 Hosts entries for all nodes
result=$(cat /etc/hosts | grep "cluster nodes")
if [[ "$result" != "" ]]; then
echo
else
cat <<EOF >> /etc/hosts
# cluster nodes
192.168.1.121 master1
192.168.1.122 master2
192.168.1.123 master3
192.168.1.131 worker1
192.168.1.132 worker2
192.168.1.133 worker3
# fallback entry for raw.githubusercontent.com timeouts (GitHub)
199.232.68.133 raw.githubusercontent.com
EOF
fi
# 1.5 Hostname is already set by Vagrant; to set it manually: hostnamectl set-hostname master1
# 1.6 Time sync via chrony
timedatectl set-timezone Asia/Shanghai
yum install chrony -y
cat <<EOF > /etc/chrony.conf
server ntp1.aliyun.com iburst minpoll 4 maxpoll 10
server ntp2.aliyun.com iburst minpoll 4 maxpoll 10
server ntp3.aliyun.com iburst minpoll 4 maxpoll 10
EOF
systemctl start chronyd.service
systemctl enable chronyd.service
# 1.7 Make iptables see bridged traffic (bridge-nf-call), required for Kubernetes networking
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
### 2. Install Docker
# 2.1 Configure the yum repositories
yum install -y yum-utils device-mapper-persistent-data lvm2 wget bash-completion.noarch
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo "https://download.docker.com/linux/centos/docker-ce.repo"
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# 2.2 Install docker-ce, kubelet, kubeadm, kubectl
#yum remove docker docker-common docker-selinux docker-engine -y
yum clean all && yum makecache fast
yum install -y docker-ce
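# Kubernetes repo and kubelet/kubeadm/kubectl packages (assumed Aliyun mirror,
# versions pinned to the v1.18.3 used in kubeadm-config.yaml below)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y kubelet-1.18.3 kubeadm-1.18.3 kubectl-1.18.3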
systemctl start kubelet
systemctl enable kubelet
# 2.3 Configure the Docker cgroup driver (and a per-node bip)
IPADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-eth1|grep IPADDR)
ip=${IPADDR:7}
point=${ip:10} # last octet of this node's IP; adjust the offsets to your address format
cat <<EOF > /etc/docker/daemon.json
{
"graph": "/data/docker",
"storage-driver": "overlay2",
"registry-mirrors": ["https://kuogup1r.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"bip": "172.7.${point}.1/24",
"live-restore": true
}
EOF
#systemctl daemon-reload
systemctl start docker
systemctl enable docker
> 实现">
echo ">> Done!"
The script is safe to run repeatedly.
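A hypothetical way to push core.sh to every node and run it, assuming root SSH with the default vagrant password (the same assumption the sshpass loops further below make):
yum install -y sshpass
for ip in 121 122 123 131 132 133; do
sshpass -p 'vagrant' scp -o StrictHostKeyChecking=no core.sh root@192.168.1.$ip:/root/
sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no root@192.168.1.$ip 'bash /root/core.sh'
done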
Image download script
# 2.4 Pull the Kubernetes images from a mirror registry and retag them to the names kubeadm expects
for img in $(kubeadm config images list); do
docker pull "gcrxio/$(echo $img | tr '/' '_')" && docker tag "gcrxio/$(echo $img | tr '/' '_')" $img
docker rmi -f "gcrxio/$(echo $img | tr '/' '_')" ## drop the temporary mirror tag
done
At this point docker-ce, kubelet, kubeadm, kubectl and the Docker images needed for the Kubernetes install are all in place; node preparation is complete.
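A quick sanity check that every image kubeadm expects is now present locally:
for img in $(kubeadm config images list); do
docker image inspect "$img" >/dev/null 2>&1 && echo "OK $img" || echo "MISSING $img"
done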
b. Cluster initialization
Main tasks: initialize the VIP node, distribute the certificates and the VIP kubeconfig to the other masters, and deploy the flannel network.
Distribute certificates
[192.168.1.121]#
for host in '121' '122' '123'; do
ssh root@192.168.1.$host 'ln -s /opt/etcd-v3.4.9-linux-amd64/ /opt/etcd'
ssh root@192.168.1.$host 'mkdir -p /data/etcd/etcd-server /data/logs/etcd-server /opt/certs'
scp /opt/certs/etcd-peer* root@192.168.1.$host:/opt/certs/
scp /opt/certs/ca.pem root@192.168.1.$host:/opt/certs/
done
master1 node
#!/bin/bash
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
apiServer:
  certSANs:   # list every kube-apiserver hostname, IP and the VIP
  - master1
  - master2
  - master3
  - worker1
  - worker2
  - worker3
  - 192.168.1.121
  - 192.168.1.122
  - 192.168.1.123
  - 192.168.1.131
  - 192.168.1.132
  - 192.168.1.133
  - 192.168.1.200
controlPlaneEndpoint: "192.168.1.121:6443"   # DNS name or VIP; 192.168.1.200 timed out here, so master1's IP is used
networking:
  podSubnet: "10.244.0.0/16"
EOF
# https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
#kubeadm init --apiserver-advertise-address 192.168.1.121 --kubernetes-version 1.18.3 --pod-network-cidr 10.244.0.0/16
kubeadm init --config=kubeadm-config.yaml
# Set up passwordless SSH from master1 to the other nodes (for testing only)
ssh-keygen -f /root/.ssh/id_rsa -N ""
CONTROL_PLANE_IPS="192.168.1.122 192.168.1.123 192.168.1.131 192.168.1.132 192.168.1.133"
yum install -y sshpass
for host in ${CONTROL_PLANE_IPS}; do
sshpass -p 'vagrant' ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
done
# Distribute keys and config files to the other masters
# On master2/master3 first run: mkdir -p /etc/kubernetes/pki/etcd/ ; do not copy more than the files below
for host in 22 23; do
scp /etc/kubernetes/admin.conf root@192.168.1.1$host:/etc/kubernetes/
scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.pub,sa.key,front-proxy-ca.crt,front-proxy-ca.key} root@192.168.1.1$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} root@192.168.1.1$host:/etc/kubernetes/pki/etcd/
done
# Point KUBECONFIG at the admin config globally
export KUBECONFIG=/etc/kubernetes/admin.conf
result=$(cat /etc/rc.d/rc.local | grep "export KUBECONFIG")
if [[ "$result" != "" ]]
then
echo
else
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/rc.d/rc.local
fi
> 实现">
echo ">> Done!"
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 192.168.1.121:6443 --token 3l0971.r1y2kl78b2q5zowe \
    --discovery-token-ca-cert-hash sha256:7d4d9dab1a73082a7a4b4e2df98c042028cdba08400a91b925669448eb95d6ee \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.121:6443 --token 3l0971.r1y2kl78b2q5zowe \
    --discovery-token-ca-cert-hash sha256:7d4d9dab1a73082a7a4b4e2df98c042028cdba08400a91b925669448eb95d6ee
c. non-vip.sh: joining the cluster (2 masters + 3 workers)
First check the cluster state on the vip master1 host:
kubectl get pod --all-namespaces
kubectl get node
### If these commands error, export the kubeconfig or force-replace flannel:
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl replace --force -f kube-flannel.yml
Join the remaining masters to the cluster
# Global KUBECONFIG
export KUBECONFIG=/etc/kubernetes/admin.conf
result=$(cat /etc/rc.d/rc.local | grep "export KUBECONFIG")
if [[ "$result" != "" ]]
then
echo
else
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/rc.d/rc.local
fi
kubeadm join 192.168.1.121:6443 --token ovplco.edwbwsrtff7egbio \
--discovery-token-ca-cert-hash sha256:a566c43f8cfa958d17b07193f9bc4e3a0f0303b44108a8ed153d553647d9d566 \
--control-plane
Join the workers to the cluster
kubeadm join 192.168.1.121:6443 --token ovplco.edwbwsrtff7egbio \
--discovery-token-ca-cert-hash sha256:a566c43f8cfa958d17b07193f9bc4e3a0f0303b44108a8ed153d553647d9d566
Check the cluster state on master1 again to confirm, for example:
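All six nodes should now be listed; they report Ready once flannel (section d below) is running:
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide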
d. Add-on installation
Deploy the flannel network
wget -O kube-flannel.yml "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/' kube-flannel.yml
kubectl apply -f kube-flannel.yml
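To check the rollout (the flannel pods from this manifest run in kube-system; nodes move from NotReady to Ready once the CNI is up):
kubectl get pods -n kube-system -o wide | grep flannel
kubectl get nodes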
Install kubernetes-dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
# edit recommended.yaml as needed before creating it
kubectl create -f recommended.yaml
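recommended.yaml only exposes the dashboard on a ClusterIP Service; one assumed way to log in (the ServiceAccount name below is illustrative) is to create an admin account and use its bearer token:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# print the token to paste into the dashboard login screen
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')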