Preface

It is May, spring is in full bloom, and the epidemic is largely behind us, which is worth celebrating. On such a fine day, let's get hands-on and build a highly available, load-balanced K8S cluster.

Update History

  • May 7, 2020 - First draft - 左程立
  • Original article - https://blog.zuolinux.com/2020/05/07/k8s-cluster-on-centos7.html

Environment

Software
  • CentOS Linux release 7.7.1908 (Kernel 3.10.0-1062.18.1.el7.x86_64)
  • Docker CE 18.09.9
  • Kubernetes v1.18.2
  • Calico v3.8
  • Keepalived v1.3.5
  • HAproxy v1.5.18
Hardware
Hostname    IP
master01    192.168.10.12
master02    192.168.10.13
master03    192.168.10.14
work01      192.168.10.15
work02      192.168.10.16
work03      192.168.10.17
VIP         192.168.10.19

Cluster Setup

Initialization

Run on all master/worker nodes
cat >> /etc/hosts << EOF
192.168.10.12    master01
192.168.10.13    master02
192.168.10.14    master03
192.168.10.15    work01
192.168.10.16    work02
192.168.10.17    work03
EOF
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab
# Install wget
yum install wget -y
# Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Aliyun yum repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Aliyun EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Clean and rebuild the yum cache
yum clean all && yum makecache
# Update packages
yum update -y
# Sync time
timedatectl
timedatectl set-ntp true
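A quick sanity check that these changes took effect (verification only, not part of the original steps):

getenforce                 # expect Permissive now, Disabled after a reboot
free -m | grep -i swap     # the Swap line should be all zeros
timedatectl | grep -i ntp  # NTP should be enabled and synchronizing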

Install Docker

Install on all master/worker nodes
# Install Docker CE
# Set up the repository
# Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker repo
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker CE
yum install -y containerd.io \
    docker-ce-18.09.9 \
    docker-ce-cli-18.09.9
# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
# Change Docker's cgroup driver to systemd
# and switch to domestic registry mirrors
cat > /etc/docker/daemon.json <<EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
    "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "registry-mirrors":[
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://registry.docker-cn.com"
    ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart docker
systemctl daemon-reload
systemctl restart docker
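To verify Docker came back up with the systemd cgroup driver (a quick check, not part of the original steps):

docker info | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd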

Install kubeadm, kubelet, and kubectl

Run on all master/worker nodes
# Configure the K8S yum repo; the official repo is preferred if reachable
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Add kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Load them
sysctl --system
# Install the latest stable versions as of this writing (v1.18.2) of kubelet, kubeadm, kubectl
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2 --disableexcludes=kubernetes
# Start kubelet and enable it at boot
systemctl start kubelet
systemctl enable kubelet
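If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is likely not loaded yet; a small fix (not part of the original steps):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # persist across reboots
sysctl --system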

Load Balancing the apiserver with HAProxy

Run on all master nodes
yum install haproxy-1.5.18 -y
cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend  k8s-api
    mode tcp
    option tcplog
    bind *:16443
    default_backend      k8s-api

backend k8s-api
    mode        tcp
    balance     roundrobin
    server master01 192.168.10.12:6443 check
    server master02 192.168.10.13:6443 check
    server master03 192.168.10.14:6443 check
EOF
Start HAProxy on all master nodes
systemctl start haproxy
systemctl enable haproxy
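A quick check that HAProxy is actually listening on the frontend port (verification only, not in the original steps):

ss -lntp | grep 16443    # expect a haproxy process bound to *:16443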

High Availability for the apiserver with Keepalived

Run on all master nodes
yum -y install keepalived psmisc
Keepalived configuration on master01:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id master01
   script_user root
   enable_script_security
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 10
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}
EOF
Keepalived configuration on master02:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id master02
   script_user root
   enable_script_security
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 10
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 50
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}
EOF
Keepalived configuration on master03:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id master03
   script_user root
   enable_script_security
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 10
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 50
    priority 96
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}
EOF
Run on all master nodes
service keepalived start
systemctl enable keepalived
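To confirm the VIP landed on master01, a quick check (assuming the ens192 interface used in the configs above):

ip addr show ens192 | grep 192.168.10.19    # should match on master01 only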
Note
  • The weight parameter in vrrp_script must be greater than the priority gap between the MASTER and BACKUP instances; otherwise a failed haproxy check cannot tip the VRRP election toward a healthy node
  • The nopreempt parameter in vrrp_instance prevents an automatic failback once the original master recovers (a sketch follows below)
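As a sketch of that second note: keepalived only honors nopreempt when the instance's initial state is BACKUP, so on master01 the change would look roughly like this (remaining settings unchanged from the config above):

vrrp_instance VI_1 {
    state BACKUP      # nopreempt requires the initial state to be BACKUP
    nopreempt         # keep the VIP where it is when this node comes back
    interface ens192
    virtual_router_id 50
    priority 100
    ...
}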

Create the K8S Cluster

Before initializing, set up hosts resolution:
MASTER_IP is the VIP address
APISERVER_NAME is the domain name for the VIP
export MASTER_IP=192.168.10.19
export APISERVER_NAME=k8s.api
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
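A quick sanity check that the name resolves as intended (not part of the original steps):

getent hosts k8s.api    # expect: 192.168.10.19  k8s.api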
Run kubeadm init on master01 to initialize the cluster
kubeadm init \
 --apiserver-advertise-address 0.0.0.0 \
 --apiserver-bind-port 6443 \
 --cert-dir /etc/kubernetes/pki \
 --control-plane-endpoint k8s.api \
 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
 --kubernetes-version 1.18.2 \
 --pod-network-cidr 192.10.0.0/16 \
 --service-cidr 192.20.0.0/16 \
 --service-dns-domain cluster.local \
 --upload-certs

Be sure to save the two kubeadm join commands in the output; they are needed to add the remaining master and worker nodes to the cluster.

Load Environment Variables

Run on master01; this is what lets you manage the cluster.

If running as root:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profilesource .bash_profile

If running as a non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
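Either way, a quick check that kubectl can now reach the apiserver (verification only, not in the original steps):

kubectl cluster-info
kubectl get nodes    # master01 shows NotReady until the Pod network is installed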

Install the Pod Network Add-on

Run on master01
# Fetch the manifest
mkdir calico && cd calico
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# Edit the manifest
vi calico.yaml
# Find 192.168.0.0/16 and change it to 192.10.0.0/16
# Deploy the Pod network add-on
kubectl apply -f calico.yaml
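If you prefer a non-interactive edit, this one-liner should be equivalent (assuming the CIDR string appears only where the pool is defined):

sed -i 's#192.168.0.0/16#192.10.0.0/16#g' calico.yaml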
Watch the pod status in real time:
watch kubectl get pods --all-namespaces -o wide

Add the Other Master Nodes to the K8S Cluster

Run on the other master nodes

Use the join command contained in the kubeadm init output on master01.

Change the port from 6443 to 16443 (the HAProxy frontend port).

export MASTER_IP=192.168.10.19
export APISERVER_NAME=k8s.api
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
kubeadm join k8s.api:16443 --token ztjux9.2tau56zck212j9ra \
    --discovery-token-ca-cert-hash sha256:a2b552266902fb5f6620330fc1a6638a9cdd6fec3408edba1082e6c8389ac517 \
    --control-plane --certificate-key 961494e7d0a9de0219e2b0dc8bdaa9ca334ecf093a6c5f648aa34040ad39b61a
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profilesource .bash_profile

Add All Worker Nodes to the K8S Cluster

Run on the worker nodes

Use the join command contained in the kubeadm init output on master01.

Change the port from 6443 to 16443 (the HAProxy frontend port).

export MASTER_IP=192.168.10.19
export APISERVER_NAME=k8s.api
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
kubeadm join k8s.api:16443 --token ztjux9.2tau56zck212j9ra \
    --discovery-token-ca-cert-hash sha256:a2b552266902fb5f6620330fc1a6638a9cdd6fec3408edba1082e6c8389ac517

Check the cluster on master01

watch kubectl get nodes -o wide

If all nodes show Ready, the cluster was installed successfully.

Destructive Testing

  • Stop haproxy on master01
  • Shut down the master01 machine

You should see the VIP fail over to master02.

Then repeat the same steps on master02 and watch whether the VIP floats to master03; a sketch of the commands follows below.
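A minimal sketch of the test, assuming the VIP and ens192 interface from the configs above:

# on master01: stop the health-checked service (or shut the node down)
systemctl stop haproxy
# on master02: watch the VIP arrive
ip addr show ens192 | grep 192.168.10.19
# the cluster should still answer through the VIP
kubectl get nodes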

Closing Words

Today we built a highly available, load-balanced K8S cluster hands-on; this is a record of my actual steps.

Did you notice, though, that there is one spot that could be optimized further?

A Bowl of Chicken Soup for You

  • The earlier you run into problems, the better; if you haven't run into any problems yet, your problem is bigger.


Contact Me

WeChat official account: zuolinux_com