10. High Availability Architecture (Scaling Out to Multi-Master)

As a container cluster system, Kubernetes provides self-healing for failed Pods through health checks plus restart policies, deploys Pods across nodes via its scheduler while maintaining the desired replica count, and automatically starts replacement Pods on other Nodes when a Node fails, achieving high availability at the application layer.

For the Kubernetes cluster itself, high availability should also cover two further aspects: high availability of the Etcd database and high availability of the Kubernetes Master components. We have already made Etcd highly available with a 3-node cluster, so this section explains and implements high availability for the Master nodes.

The Master node acts as the control center of the cluster, maintaining the healthy working state of the whole cluster by continuously communicating with the kubelet and kube-proxy on the worker nodes. If the Master node fails, you can no longer perform any cluster management with kubectl or the API.

The Master node runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. Of these, kube-controller-manager and kube-scheduler already achieve high availability on their own through a leader-election mechanism, so Master high availability mainly concerns kube-apiserver. Since that component serves an HTTP API, making it highly available is similar to a web server: put a load balancer in front of it, and it can also be scaled out horizontally.
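
A quick way to see the leader election at work (a sketch assuming v1.18's default leader-election lock, which records the current leader in an annotation on an Endpoints object in kube-system):

# Prints the holderIdentity of the current leader for each component
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity

If you stop the component on the leading Master, the holderIdentity should move to the other Master once the lease expires.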

Multi-Master architecture diagram:

10.1. Install Docker

10.2. Configure the Host Environment

10.3. Deploy the Master2 Node (192.168.219.164)

Master2's setup is identical to the already-deployed Master1's, so we only need to copy all the K8s files from Master1 over, then change the server IP and hostname and start the services.

10.3.1. Create the etcd Certificate Directory

Create the etcd certificate directory on Master2:
mkdir -p /opt/etcd/ssl

10.3.2. Copy Files (run on Master1)

Copy all the K8s files and etcd certificates from Master1 to Master2:

scp -r /opt/kubernetes root@192.168.219.164:/opt
scp -r /opt/cni/ root@192.168.219.164:/opt
scp -r /opt/etcd/ssl root@192.168.219.164:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.219.164:/usr/lib/systemd/system
scp /usr/bin/kubectl root@192.168.219.164:/usr/bin

10.3.3. Update the IP and Hostname in the Config Files

Change the apiserver, kubelet, and kube-proxy config files to use the local IP and hostname:

vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.219.164 \
--advertise-address=192.168.219.164 \
...

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2
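
To eyeball the edits before starting anything (a minimal sketch assuming this tutorial's file layout), print just the lines you changed:

grep -n 'bind-address\|advertise-address' /opt/kubernetes/cfg/kube-apiserver.conf
grep -n 'hostname-override' /opt/kubernetes/cfg/kubelet.conf
grep -n 'hostnameOverride' /opt/kubernetes/cfg/kube-proxy-config.yml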

10.3.4. Start the Services and Enable Them at Boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy
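
To confirm everything came up, a small loop over the units (a sketch; adjust the list if your service names differ) prints each one's state:

for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    echo -n "$svc: "
    systemctl is-active "$svc"   # expect "active" for every unit
done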

10.3.5. Check Cluster Status

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

10.3.6. Approve the kubelet Certificate Request

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU   85m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU

kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   34h   v1.18.18
k8s-master2   Ready    <none>   83m   v1.18.18
k8s-node1     Ready    <none>   33h   v1.18.18
k8s-node2     Ready    <none>   33h   v1.18.18
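
If several nodes are joining at once, you can approve every Pending request in one go (use with care; this sketch assumes all pending CSRs really come from your own kubelets):

kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve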

11. Deploy the Nginx Load Balancer

Nginx is a mainstream web server and reverse proxy; here we use its Layer 4 (TCP) proxying to load-balance the apiservers.

Keepalived is a mainstream high-availability tool that implements active/standby failover between two servers by binding a VIP. In the topology above, Keepalived decides whether to fail over (move the VIP) based on Nginx's running state: for example, when the Nginx master node dies, the VIP is automatically bound on the Nginx backup node, keeping the VIP reachable at all times and making Nginx itself highly available.
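
Note that the Layer 4 proxying below relies on Nginx's stream module. The EPEL package installed in the next step normally ships it as a dynamic module (loaded via /usr/share/nginx/modules/*.conf); you can confirm your build has it:

nginx -V 2>&1 | grep -o 'with-stream[^ ]*'   # should print e.g. with-stream=dynamic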

kube-apiserver high-availability architecture diagram:

11.1. Install Packages (master/backup)

yum install epel-release -y
yum install nginx keepalived -y

11.2. Nginx Config File (identical on master and backup)

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer 4 load balancing for the two Master apiserver components
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.219.161:6443;   # Master1 APISERVER IP:PORT
       server 192.168.219.164:6443;   # Master2 APISERVER IP:PORT
    }

    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
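
Before starting Nginx it's worth validating the file; nginx -t checks the syntax and will also complain if the stream module isn't loaded:

nginx -t   # expect "syntax is ok" and "test is successful"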

11.3. Keepalived Config File (Nginx Master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    notification_email {
      acassen@firewall.loc
      failover@firewall.loc
      sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33      # change to your actual NIC name
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100         # priority; set to 90 on the backup server
    advert_int 1         # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.219.188/24
    }
    track_script {
        check_nginx
    }
}
EOF
  • vrrp_script: specifies the script that checks Nginx's working state (used to decide whether to fail over based on Nginx's status)
  • virtual_ipaddress: the virtual IP (VIP)

The Nginx status-check script:

cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

11.4. Keepalived Config File (Nginx Backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    notification_email {
      acassen@firewall.loc
      failover@firewall.loc
      sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.219.188/24
    }
    track_script {
        check_nginx
    }
}
EOF

The Nginx status-check script referenced in the above config (same as on the master):

cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: Keepalived decides whether to fail over based on the script's exit code (0 = working normally, non-zero = abnormal).
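
You can exercise the script by hand before trusting it with failover (a quick sketch; run on the LB node, stopping and restarting Nginx around the two checks):

bash /etc/keepalived/check_nginx.sh; echo "exit code: $?"   # expect 0 while Nginx runs
systemctl stop nginx
bash /etc/keepalived/check_nginx.sh; echo "exit code: $?"   # expect 1 with Nginx stopped
systemctl start nginx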

11.5. Start and Enable at Boot

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived

11.6. Check Keepalived's Working State

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.80/24 brd 192.168.31.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.31.88/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe04:f72c/64 scope link
       valid_lft forever preferred_lft forever

As you can see, the virtual IP is bound to the ens33 interface as a secondary address (192.168.31.88 in this sample output; with the configuration above it will be 192.168.219.188), which means Keepalived is working correctly.
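
The VRRP state transitions also show up in the Keepalived logs (a sketch assuming systemd's journal; the exact message wording varies by version):

journalctl -u keepalived | grep -i 'MASTER\|BACKUP'
# expect lines like: VRRP_Instance(VI_1) Entering MASTER STATE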

11.7. Nginx + Keepalived High Availability Test

Shut down Nginx on the master node and test whether the VIP drifts to the backup server.
On the Nginx master, run pkill nginx.
On the Nginx backup, check with the ip addr command that the VIP has been successfully bound.
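
To watch the failover from a client's point of view, you can poll the apiserver through the VIP in a loop while killing Nginx (a sketch; -k skips TLS verification exactly as in the test below):

while true; do
    curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.219.188:6443/version
    sleep 1
done

At most a second or two of failed requests should appear while the VIP moves.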

11.8. Test Access Through the Load Balancer

11.8.1. From any node in the K8s cluster, use curl to query the K8s version through the VIP:

curl -k https://192.168.219.188:6443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.3",
  "gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
  "gitTreeState": "clean",
  "buildDate": "2020-05-20T12:43:34Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Since the K8s version information is retrieved correctly, the load balancer is working. The request flow for this call is: curl -> VIP (Nginx) -> apiserver.

11.8.2. The Nginx log also shows which apiserver IP each request was forwarded to:

tail /var/log/nginx/k8s-access.log -f
192.168.219.181 192.168.219.161:6443 - [30/May/2020:11:15:10 +0800] 200 422
192.168.219.181 192.168.219.164:6443 - [30/May/2020:11:15:26 +0800] 200 422

We are not done yet; the most critical step is still ahead.

12. Point All Worker Nodes at the LB VIP

Think about it: although we added Master2 and a load balancer, we scaled out from a single-Master architecture, which means all Node components currently still connect to Master1. If we don't switch them over to the VIP behind the load balancer, the Master remains a single point of failure.

12.1. Therefore, the next step is to change every Node component's config files, replacing the original 192.168.219.161 with 192.168.219.188 (the VIP).

Hostname        IP
k8s-master1     192.168.219.161
k8s-node1       192.168.219.162
k8s-node2       192.168.219.163
k8s-master2     192.168.219.164

In other words, the nodes listed by the kubectl get node command.

12.2. Run the Following on All of the Above Worker Nodes

sed -i 's#192.168.219.161:6443#192.168.219.188:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
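
To double-check the replacement on each node (a minimal sketch assuming this tutorial's config layout):

grep -r '192.168.219.161:6443' /opt/kubernetes/cfg/ || echo "OK: no direct Master1 references left"
grep -rl '192.168.219.188:6443' /opt/kubernetes/cfg/   # lists the files now pointing at the VIP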

12.3. Check Node Status

kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    <none>   34h    v1.18.18
k8s-master2   Ready    <none>   101m   v1.18.18
k8s-node1     Ready    <none>   33h    v1.18.18
k8s-node2     Ready    <none>   33h    v1.18.18

At this point, a complete highly available Kubernetes cluster has been deployed!

PS: On a public cloud, keepalived is generally not supported; in that case you can simply use the provider's load balancer product instead (an internal one is fine, and usually free). The architecture is the same as above: just load-balance across the multiple Master kube-apiservers.