Kubernetes highly available cluster deployment (3 masters + 3 nodes)
Etcd is deployed co-located (stacked) with the master nodes
Keepalived provides high availability for the VIP
HAProxy: runs as a systemd service and reverse-proxies to port 6443 on the three masters;
Other main components to be deployed include:
Metrics: resource metrics
Dashboard: the Kubernetes web UI
Helm: the Kubernetes package manager
Ingress: Kubernetes service exposure
Longhorn: Kubernetes dynamic storage component
Deployment planning
Node planning
Hostname | IP | Role | Services
master01 | 192.168.58.21 | k8s master 01 | Docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, haproxy
master02 | 192.168.58.22 | k8s master 02 | Docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, haproxy
master03 | 192.168.58.23 | k8s master 03 | Docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, metrics, calico, haproxy
node01 | 192.168.58.24 | k8s node 01 | Docker, kubelet, kube-proxy, calico
node02 | 192.168.58.25 | k8s node 02 | Docker, kubelet, kube-proxy, calico
node03 | 192.168.58.26 | k8s node 03 | Docker, kubelet, kube-proxy, calico
VIP | 192.168.58.200 | |
High availability in Kubernetes mainly refers to high availability of the control plane, i.e. multiple sets of master nodes and etcd members, with the worker nodes reaching the masters through a load balancer (a quick verification sketch follows the list below).
Characteristics of the stacked topology, in which etcd is co-located with the master components:
- Requires fewer machines
- Simple to deploy and easy to manage
- Easy to scale out horizontally
- Higher risk: if one host fails, the cluster loses both a master and an etcd member at once, which noticeably reduces the overall redundancy
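Once the cluster described below is up, a quick way to confirm from a worker that traffic really goes through the VIP (a sketch; it assumes the HAProxy frontend listens on port 16443 of the VIP, matching the controlPlaneEndpoint used later):
grep server: /etc/kubernetes/kubelet.conf    # on any worker: expect https://192.168.58.200:16443, not an individual master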
Manually add name resolution
[root@localhost ~]# vim /etc/hosts
192.168.58.21 master01
192.168.58.22 master02
192.168.58.23 master03
192.168.58.24 node01
192.168.58.25 node02
192.168.58.26 node03
[root@master01 ~]# vim k8sinit.sh
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv6.conf.all.disable_ipv6 = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf >&/dev/null
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
modprobe br_netfilter
# Add ipvs modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
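# Optional check (not part of the original script): confirm the IPVS modules are loaded
lsmod | grep -E 'ip_vs|nf_conntrack'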
# Install rpm
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel
# Update kernel
rpm --import http://down.linuxsb.com/RPM-G…
rpm -Uvh http://down.linuxsb.com/elrep…
yum --disablerepo="*" --enablerepo="elrepo-kernel" install -y kernel-ml
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
yum update -y
# ADD k8s bin to PATH
echo 'export PATH=/opt/k8s/bin:$PATH' >> /root/.bashrc
# Reboot the machine.
reboot
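After the machines come back up, it is worth confirming that the new ELRepo kernel is active (a quick check, not part of the original script):
[root@master01 ~]# uname -r    # should show the kernel-ml version installed from elrepo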
Set the hostname
for host in $(cat /etc/hosts | grep "$(ip a | grep global | awk '{print $2}' | cut -d / -f 1)" | awk '{print $2}'); do hostnamectl set-hostname $host; done
Passwordless SSH login
for host in master0{1..3} node0{1..3}; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}; done
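ssh-copy-id assumes an RSA key pair already exists on master01; if it does not, a sketch along these lines generates one first:
[root@master01 ~]# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa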
Other environment preparation
[root@master01 ~]# vim environment.sh
# IP array of the cluster MASTER machines
export MASTER_IPS=(192.168.58.21 192.168.58.22 192.168.58.23)
# Hostnames corresponding to the MASTER IPs
export MASTER_NAMES=(master01 master02 master03)
# IP array of the cluster NODE machines
export NODE_IPS=(192.168.58.24 192.168.58.25 192.168.58.26)
# Hostnames corresponding to the NODE IPs
export NODE_NAMES=(node01 node02 node03)
# IP array of all machines in the cluster
export ALL_IPS=(192.168.58.21 192.168.58.22 192.168.58.23 192.168.58.24 192.168.58.25 192.168.58.26)
# Hostnames corresponding to all cluster IPs
export ALL_NAMES=(master01 master02 master03 node01 node02 node03)
[root@master01 ~]# source environment.sh
[root@master01 ~]# chmod +x *.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp -rp /etc/hosts root@${all_ip}:/etc/hosts
scp -rp k8sinit.sh root@${all_ip}:/root/
ssh root@${all_ip} "bash /root/k8sinit.sh"
done
Cluster deployment - required packages
The following packages need to be installed on every machine:
kubeadm: the command used to bootstrap the cluster;
kubelet: runs on every node in the cluster and is responsible for starting pods and containers;
kubectl: the command-line tool used to talk to the cluster;
kubeadm does not install or manage kubelet or kubectl, so you must make sure their versions meet the requirements of the Kubernetes control plane that kubeadm will install; otherwise unexpected errors or problems may occur. For details on installing these components, see 《附件 001.kubectl 介绍及应用》.
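The yum install below also assumes a Kubernetes package repository is already configured on every node; a minimal sketch (the Aliyun mirror is used here only as an example, substitute whichever mirror you prefer):
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF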
Installation
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "yum list | grep kubelet"
done
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
ssh root@${all_ip} "yum install -y kubeadm-1.18.3-0.x86_64 kubelet-1.18.3-0.x86_64 kubectl-1.18.3-0.x86_64 --disableexcludes=kubernetes"
ssh root@${all_ip} "systemctl enable kubelet"
done
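Optionally confirm that every node received the expected 1.18.3 packages (a quick check):
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
  ssh root@${all_ip} "kubeadm version -o short && kubelet --version"
done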
Installing the high-availability components
HAProxy installation
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel wget openssh-clients systemd-devel zlib-devel pcre-devel libnl3-devel"
ssh root@${master_ip} "wget http://down.linuxsb.com/softw…z"
ssh root@${master_ip} "tar -zxvf haproxy-2.1.6.tar.gz"
ssh root@${master_ip} "cd haproxy-2.1.6/ && make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy"
ssh root@${master_ip} "cp /usr/local/haproxy/sbin/haproxy /usr/sbin/"
ssh root@${master_ip} "useradd -r haproxy && usermod -G haproxy haproxy"
ssh root@${master_ip} "mkdir -p /etc/haproxy && cp -r /root/haproxy-2.1.6/examples/errorfiles/ /usr/local/haproxy/"
done
Keepalived installation
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel"
ssh root@${master_ip} "wget http://down.linuxsb.com/softw…z"
ssh root@${master_ip} "tar -zxvf keepalived-2.0.20.tar.gz"
ssh root@${master_ip} "cd keepalived-2.0.20/ && ./configure --sysconf=/etc --prefix=/usr/local/keepalived && make && make install"
done
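Optionally verify that both binaries were built and installed on every master (a sketch):
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
  ssh root@${master_ip} "haproxy -v | head -1 && keepalived --version 2>&1 | head -1"
done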
Create the configuration files
[root@master01 ~]# wget http://down.linuxsb.com/hakek…
[root@master01 ~]# chmod u+x hakek8s.sh
[root@master01 ~]# vim hakek8s.sh
#!/bin/sh
# ScriptName: hakek8s.sh
# Author: xhy
# Create Date: 2020-06-08 20:00
# Modify Author: xhy
# Modify Date: 2020-06-10 18:57
# Version: v2
#*
# set variables below to create the config files, all files will create at ./config directory
# master keepalived virtual ip address
export K8SHA_VIP=192.168.58.200
# master01 ip address
export K8SHA_IP1=192.168.58.21
# master02 ip address
export K8SHA_IP2=192.168.58.22
# master03 ip address
export K8SHA_IP3=192.168.58.23
# master01 hostname
export K8SHA_HOST1=master01
# master02 hostname
export K8SHA_HOST2=master02
# master03 hostname
export K8SHA_HOST3=master03
# master01 network interface name
export K8SHA_NETINF1=ens33
# master02 network interface name
export K8SHA_NETINF2=ens33
# master03 network interface name
export K8SHA_NETINF3=ens33
# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d
# kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0
# kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0
# please do not modify anything below
mkdir -p config/$K8SHA_HOST1/{keepalived,haproxy}
mkdir -p config/$K8SHA_HOST2/{keepalived,haproxy}
mkdir -p config/$K8SHA_HOST3/{keepalived,haproxy}
mkdir -p config/keepalived
mkdir -p config/haproxy
mkdir -p config/calico
# wget all files
wget -c -P config/keepalived/ http://down.linuxsb.com/keepa…
wget -c -P config/keepalived/ http://down.linuxsb.com/keepa…
wget -c -P config/haproxy/ http://down.linuxsb.com/hapro…
wget -c -P config/haproxy/ http://down.linuxsb.com/hapro…
wget -c -P config/calico/ http://down.linuxsb.com/calic…
wget -c -P config/ http://down.linuxsb.com/kubea…
wget -c -P config/ http://down.linuxsb.com/kubea…
wget -c -P config/ http://down.linuxsb.com/kubea…
# create all kubeadm-config.yaml files
sed \
-e "s/K8SHA_HOST1/${K8SHA_HOST1}/g" \
-e "s/K8SHA_HOST2/${K8SHA_HOST2}/g" \
-e "s/K8SHA_HOST3/${K8SHA_HOST3}/g" \
-e "s/K8SHA_IP1/${K8SHA_IP1}/g" \
-e "s/K8SHA_IP2/${K8SHA_IP2}/g" \
-e "s/K8SHA_IP3/${K8SHA_IP3}/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_PODCIDR/${K8SHA_PODCIDR}/g" \
-e "s/K8SHA_SVCCIDR/${K8SHA_SVCCIDR}/g" \
config/kubeadm-config.yaml.tpl > kubeadm-config.yaml
echo "create kubeadm-config.yaml files success. kubeadm-config.yaml"
# create all keepalived files
chmod u+x config/keepalived/check_apiserver.sh
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST1/keepalived
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST2/keepalived
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST3/keepalived
sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF1}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP1}/g" \
-e "s/K8SHA_KA_PRIO/102/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST1/keepalived/keepalived.conf
sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF2}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP2}/g" \
-e "s/K8SHA_KA_PRIO/101/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST2/keepalived/keepalived.conf
sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF3}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP3}/g" \
-e "s/K8SHA_KA_PRIO/100/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST3/keepalived/keepalived.conf
echo "create keepalived files success. config/$K8SHA_HOST1/keepalived/"
echo "create keepalived files success. config/$K8SHA_HOST2/keepalived/"
echo "create keepalived files success. config/$K8SHA_HOST3/keepalived/"
# create all haproxy files
sed \
-e "s/K8SHA_IP1/$K8SHA_IP1/g" \
-e "s/K8SHA_IP2/$K8SHA_IP2/g" \
-e "s/K8SHA_IP3/$K8SHA_IP3/g" \
-e "s/K8SHA_HOST1/$K8SHA_HOST1/g" \
-e "s/K8SHA_HOST2/$K8SHA_HOST2/g" \
-e "s/K8SHA_HOST3/$K8SHA_HOST3/g" \
config/haproxy/k8s-haproxy.cfg.tpl > config/haproxy/haproxy.conf
# create calico yaml file
sed \
-e "s/K8SHA_PODCIDR/${K8SHA_PODCIDR}/g" \
config/calico/calico.yaml.tpl > config/calico/calico.yaml
echo "create haproxy files success. config/$K8SHA_HOST1/haproxy/"
echo "create haproxy files success. config/$K8SHA_HOST2/haproxy/"
echo "create haproxy files success. config/$K8SHA_HOST3/haproxy/"
# scp all file
scp -rp config/haproxy/haproxy.conf root@$K8SHA_HOST1:/etc/haproxy/haproxy.cfg
scp -rp config/haproxy/haproxy.conf root@$K8SHA_HOST2:/etc/haproxy/haproxy.cfg
scp -rp config/haproxy/haproxy.conf root@$K8SHA_HOST3:/etc/haproxy/haproxy.cfg
scp -rp config/haproxy/k8s-haproxy.service root@$K8SHA_HOST1:/usr/lib/systemd/system/haproxy.service
scp -rp config/haproxy/k8s-haproxy.service root@$K8SHA_HOST2:/usr/lib/systemd/system/haproxy.service
scp -rp config/haproxy/k8s-haproxy.service root@$K8SHA_HOST3:/usr/lib/systemd/system/haproxy.service
scp -rp config/$K8SHA_HOST1/keepalived/* root@$K8SHA_HOST1:/etc/keepalived/
scp -rp config/$K8SHA_HOST2/keepalived/* root@$K8SHA_HOST2:/etc/keepalived/
scp -rp config/$K8SHA_HOST3/keepalived/* root@$K8SHA_HOST3:/etc/keepalived/
# chmod *.sh
chmod u+x config/*.sh
[root@master01 ~]# ./hakek8s.sh
Note: the above only needs to be executed on master01. Running the hakek8s.sh script produces the following configuration files, which can be listed as shown below:
kubeadm-config.yaml: the kubeadm initialization configuration, in the current directory
keepalived: the keepalived configuration, in /etc/keepalived on each master node
haproxy: the haproxy configuration, in /etc/haproxy/ on each master node
calico.yaml: the calico network component deployment file, in the config/calico directory
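A simple listing confirms what the script produced before anything is distributed (optional):
[root@master01 ~]# find config/ -type f | sort
[root@master01 ~]# ls kubeadm-config.yaml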
[root@master01 ~]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.20.0.0/16"
  podSubnet: "10.10.0.0/16"
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.18.3"
controlPlaneEndpoint: "192.168.58.200:16443"
apiServer:
  certSANs:
  - master01
  - master02
  - master03
  - 127.0.0.1
  - 192.168.58.21
  - 192.168.58.22
  - 192.168.58.23
  - 192.168.58.200
  timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "k8s.gcr.io"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
Note: the above only needs to be done on the master01 node. For more on the config file, see
https://godoc.org/k8s.io/kube…
More references for the kubeadm initialization configuration:
https://pkg.go.dev/k8s.io/kub…
Start the services
[root@master01 ~]# cat /etc/keepalived/keepalived.conf
[root@master01 ~]# cat /etc/keepalived/check_apiserver.sh    # confirm the keepalived configuration
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "systemctl start haproxy.service && systemctl enable haproxy.service"
ssh root@${master_ip} "systemctl start keepalived.service && systemctl enable keepalived.service"
ssh root@${master_ip} "systemctl status keepalived.service | grep Active"
ssh root@${master_ip} "systemctl status haproxy.service | grep Active"
done
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "ping -c1 192.168.58.200"
done
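Pinging only proves the VIP is reachable; to check that HAProxy is actually listening on the load-balanced API port (16443 is assumed here, matching the controlPlaneEndpoint), something like this helps:
[root@master01 ~]# ss -tnlp | grep 16443    # haproxy should be listening on the VIP port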
Note: the above operations only need to be executed on master01; they start the services on all nodes.
Initialize the cluster
Pull the images
kubeadm config images list --kubernetes-version=v1.18.3    # list the required images
[root@master01 ~]# cat config/downimage.sh    # confirm the version information
#!/bin/sh
# ScriptName: downimage.sh
# Author: xhy
# Create Date: 2020-05-29 19:55
# Modify Author: xhy
# Modify Date: 2020-06-10 19:15
# Version: v2
#*
KUBE_VERSION=v1.18.3
CALICO_VERSION=v3.14.1
CALICO_URL=calico
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.3-0
CORE_DNS_VERSION=1.6.7
GCR_URL=k8s.gcr.io
METRICS_SERVER_VERSION=v0.3.6
INGRESS_VERSION=0.32.0
CSI_PROVISIONER_VERSION=v1.4.0
CSI_NODE_DRIVER_VERSION=v1.2.0
CSI_ATTACHER_VERSION=v2.0.0
CSI_RESIZER_VERSION=v0.3.0
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
UCLOUD_URL=uhub.service.ucloud.cn/uxhy
QUAY_URL=quay.io
kubeimages=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION}
metrics-server-amd64:${METRICS_SERVER_VERSION}
)
for kubeimageName in ${kubeimages[@]} ; do
docker pull $UCLOUD_URL/$kubeimageName
docker tag $UCLOUD_URL/$kubeimageName $GCR_URL/$kubeimageName
docker rmi $UCLOUD_URL/$kubeimageName
done
calimages=(cni:${CALICO_VERSION}
pod2daemon-flexvol:${CALICO_VERSION}
node:${CALICO_VERSION}
kube-controllers:${CALICO_VERSION})
for calimageName in ${calimages[@]} ; do
docker pull $UCLOUD_URL/$calimageName
docker tag $UCLOUD_URL/$calimageName $CALICO_URL/$calimageName
docker rmi $UCLOUD_URL/$calimageName
done
ingressimages=(nginx-ingress-controller:${INGRESS_VERSION})
for ingressimageName in ${ingressimages[@]} ; do
docker pull $UCLOUD_URL/$ingressimageName
docker tag $UCLOUD_URL/$ingressimageName $QUAY_URL/kubernetes-ingress-controller/$ingressimageName
docker rmi $UCLOUD_URL/$ingressimageName
done
csiimages=(csi-provisioner:${CSI_PROVISIONER_VERSION}
csi-node-driver-registrar:${CSI_NODE_DRIVER_VERSION}
csi-attacher:${CSI_ATTACHER_VERSION}
csi-resizer:${CSI_RESIZER_VERSION}
)
for csiimageName in ${csiimages[@]} ; do
docker pull $UCLOUD_URL/$csiimageName
docker tag $UCLOUD_URL/$csiimageName $QUAY_URL/k8scsi/$csiimageName
docker rmi $UCLOUD_URL/$csiimageName
done
Note: the following only needs to be run on master01; it makes all nodes pull the images automatically.
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp -p config/downimage.sh root@${all_ip}:/root/
ssh root@${all_ip} "bash downimage.sh &"
done
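Once the background pulls finish, spot-check that the retagged images are present (a sketch; the image names follow the variables in downimage.sh above):
[root@master01 ~]# docker images | grep -E 'k8s.gcr.io|calico' | head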
Initialize on the master
kubeadm init --config=kubeadm-config.yaml --upload-certs
Join the remaining nodes in turn
Run the join commands printed by kubeadm init (one for the additional masters and one for the worker nodes); their general shape is sketched below.
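Placeholders only, copy the real token, hash and certificate key from your own kubeadm init output:
# additional control-plane (master) nodes:
kubeadm join 192.168.58.200:16443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <key>
# worker nodes:
kubeadm join 192.168.58.200:16443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>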
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Add the environment variable
[root@master01 ~]# cat << EOF >> ~/.bashrc
export KUBECONFIG=$HOME/.kube/config
EOF
[root@localhost ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@localhost ~]# source ~/.bashrc
Install the CNI network plugin
Calico is a secure L3 network and network policy provider.
Canal combines Flannel and Calico, providing networking and network policy.
Cilium is an L3 network and network policy plugin that can transparently enforce HTTP/API/L7 policies. It supports both routing and overlay/encapsulation modes.
Contiv provides configurable networking for a variety of use cases (native L3 with BGP, overlay with VXLAN, classic L2, and Cisco-SDN/ACI) together with a rich policy framework. The Contiv project is fully open source, and its installer offers both kubeadm and non-kubeadm options.
Flannel is a layer-3 solution for pod networking and supports the NetworkPolicy API; kubeadm add-on installation details can be found in its documentation.
Weave Net provides networking and network policy, keeps working on both sides of a network partition, and does not require an external database.
CNI-Genie lets Kubernetes seamlessly connect to one of several CNI plugins, such as Flannel, Calico, Canal,
Romana or Weave. This deployment uses the Calico plugin.
Allow scheduling on the masters
[root@localhost ~]# kubectl taint nodes --all node-role.kubernetes.io/master-    # allow workloads to be scheduled on the masters
Deploy Calico
[root@master01 ~]# vim config/calico/calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.10.0.0/16"            # check that this matches the pod CIDR
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"         # check the interface used between nodes
[root@master01 ~]# kubectl apply -f config/calico/calico.yaml
[root@master01 ~]# kubectl get pods --all-namespaces -o wide    # check the deployment
[root@master01 ~]# kubectl get nodes
Modify the NodePort range
[root@master01 ~]# vi /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-node-port-range=1-65535
All of the above is executed on the master nodes.
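kube-apiserver is a static pod, so the kubelet restarts it automatically after the manifest is edited; a quick way to confirm the new flag took effect (optional):
[root@master01 ~]# kubectl -n kube-system get pod -l component=kube-apiserver
[root@master01 ~]# ps -ef | grep [s]ervice-node-port-range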
Cluster cleanup commands (partial)
[root@node01 ~]# kubeadm reset
[root@node01 ~]# ifconfig cni0 down
[root@node01 ~]# ip link delete cni0
[root@node01 ~]# ifconfig flannel.1 down
[root@node01 ~]# ip link delete flannel.1
[root@node01 ~]# rm -rf /var/lib/cni/
Verification
[root@master01 ~]# kubectl get nodes    # node status
[root@master01 ~]# kubectl get cs    # component status
[root@master01 ~]# kubectl get serviceaccount    # service accounts
[root@master01 ~]# kubectl cluster-info    # cluster information
[root@master01 ~]# kubectl get pods -n kube-system -o wide    # status of all system pods
Metrics deployment
About Metrics
Early versions of Kubernetes relied on Heapster for complete performance data collection and monitoring. Starting with version 1.8, performance data is exposed through the standardized Metrics API, and since version 1.10 Heapster has been replaced by Metrics Server. In the new Kubernetes monitoring architecture, Metrics Server provides the core metrics, i.e. CPU and memory usage of nodes and pods, while monitoring of other custom metrics is handled by components such as Prometheus.
Enable the aggregation layer
For background on the aggregation layer, see: https://blog.csdn.net/liukuan… It is already enabled by default in kubeadm deployments.
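To double-check that the aggregation layer really is enabled on this cluster (optional), look for the aggregator flags in the apiserver manifest:
[root@master01 ~]# grep -E 'requestheader|proxy-client' /etc/kubernetes/manifests/kube-apiserver.yaml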
Get the deployment file
[root@master01 ~]# mkdir metrics
[root@master01 ~]# cd metrics/
[root@master01 metrics]# wget https://github.com/kubernetes…
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  replicas: 3                      # scale according to the cluster size
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true            # add hostNetwork: true under the template spec
      containers:
      - name: metrics-server
        imagePullPolicy: IfNotPresent
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP    # add this arg
Deploy
[root@master01 metrics]# kubectl apply -f components.yaml
[root@master01 metrics]# kubectl -n kube-system get pods -l k8s-app=metrics-server
Check resource monitoring
[root@master01 ~]# kubectl top nodes
[root@master01 ~]# kubectl top pods --all-namespaces
Dashboard deployment
[root@master01 ~]# wget https://raw.githubusercontent…
Modify recommended.yaml
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort            # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30003       # add this line
---
Because many browsers cannot use the automatically generated certificate, we create our own certificate and comment out the kubernetes-dashboard-certs object declaration:
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
Create the certificate
[root@master01 ~]# mkdir dashboard-certs
[root@master01 ~]# cd dashboard-certs/
Create the namespace
[root@master01 dashboard-certs]# kubectl create namespace kubernetes-dashboard
Create the key file
[root@master01 dashboard-certs]# openssl genrsa -out dashboard.key 2048
Create the certificate request
[root@master01 dashboard-certs]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
Self-sign the certificate
[root@master01 dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Create the kubernetes-dashboard-certs object
[root@master01 dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
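Optionally verify that the secret was created with both keys before installing the dashboard:
[root@master01 dashboard-certs]# kubectl -n kubernetes-dashboard describe secret kubernetes-dashboard-certs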
Install the dashboard
Install
[root@master01 ~]# kubectl create -f recommended.yaml
Check the result
[root@master01 ~]# kubectl get service -n kubernetes-dashboard -o wide
Create a dashboard administrator
Create a new yaml file
[root@master01 ~]# vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
Grant the user permissions:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
After saving, run
[root@master01 ~]# kubectl apply -f dashboard-admin.yaml
View and copy the user token
[root@master01 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Log in to the Dashboard
https://192.168.58.200:30003/  Choose token login and paste the token generated above. Note that the IP can be the externally reachable IP of any node.
Install Kuboard
https://kuboard.cn/install/in…
[root@work04 ~]# kubectl apply -f https://kuboard.cn/install-sc…
[root@work04 ~]# kubectl apply -f https://addons.kuboard.cn/met…
Check the running status
kubectl get pods -l k8s.kuboard.cn/name=kuboard -n kube-system
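The service and its NodePort can be checked the same way (a sketch; per the Kuboard install script the service lives in kube-system):
kubectl get svc -n kube-system | grep kuboard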
Get the token (and its permissions). This token has cluster-admin privileges and can perform any operation.
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)
Access Kuboard
You can access Kuboard through the NodePort.
The Kuboard Service is exposed via NodePort 32567, so Kuboard can be reached as follows:
http://<IP of any worker node>:32567/