Deploying a k8s Test Environment with kubeadm on CentOS 7


Part 0: Basic information for the 3 nodes

| IP | Hostname | Role |
| --- | --- | --- |
| 172.16.180.251 | k8s-master | master node |
| 172.16.180.252 | k8s-node1 | worker node 1 |
| 172.16.180.253 | k8s-node2 | worker node 2 |
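
The rest of the guide refers to the machines by these hostnames. If your machines are not named yet, a minimal sketch for setting the hostnames and /etc/hosts entries (using the IPs and names from the table above; adjust for your environment) could look like this:

# On each machine, set its own hostname, e.g. on 172.16.180.251:
hostnamectl set-hostname k8s-master
# Then make all three names resolvable on every node
cat >> /etc/hosts <<EOF
172.16.180.251 k8s-master
172.16.180.252 k8s-node1
172.16.180.253 k8s-node2
EOF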

Part 1: System setup and resource preparation

All three nodes need this configuration and preparation. Strictly speaking, the worker nodes do not need every resource downloaded here, so you could skip the parts they don't need; for simplicity, we will just treat all three the same.

0. Configure SELinux and firewalld

# Set SELinux in permissive mode
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Stop and disable firewalld
systemctl disable firewalld --now

1. Kernel parameters and kernel modules

# Load the br_netfilter kernel module first, so the bridge sysctls below exist
modprobe br_netfilter
lsmod | grep br_netfilter

# Adjust kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
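
Note that modprobe only loads the module for the current boot. If you want br_netfilter to persist across reboots, one option (not part of the original steps, just a common convention) is a modules-load.d entry:

# Have systemd load br_netfilter automatically on boot
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf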

2. Configure the yum repositories

# base repo
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bak
curl -o CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/CentOS-Base.repo

# docker repo
curl -o docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# k8s repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# update cache
yum clean all  
yum makecache  
yum repolist

3. Disable swap

swapoff -a
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p

If swap gets mounted again after a reboot, you also need to comment out the swap line in /etc/fstab:
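
Something like the following sed one-liner can do that; it simply prefixes any fstab line containing a swap entry with #, and the exact pattern is an assumption about a typical CentOS 7 fstab:

# Comment out the swap entry so it is not mounted on the next boot
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab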

4. Install Docker

Install the latest version: yum install docker-ce

Enable and start Docker: systemctl enable docker --now

Check the service status: systemctl status docker
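
If you would rather pin Docker to a release that Kubernetes 1.14 was validated against instead of taking the latest, a sketch (the 18.09.7 version string is only an example; check what your repo actually offers):

# See which versions the docker-ce repo provides
yum list docker-ce --showduplicates | sort -r
# Install a specific version, e.g. 18.09.x
yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io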

5. Install kubeadm, kubelet, and kubectl

Install the latest version: yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Enable kubelet on boot: systemctl enable --now kubelet
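
Since the images pulled below are all v1.14.3, you may prefer to pin the packages to the same release rather than installing the latest; a sketch, assuming 1.14.3 is available in the Aliyun mirror:

yum install -y kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3 --disableexcludes=kubernetes
systemctl enable --now kubelet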

6. Configure the Aliyun registry mirror

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ih25wpox.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
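
To confirm the mirror configuration took effect, docker info should list it under Registry Mirrors:

docker info | grep -A1 "Registry Mirrors"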

7. Download the images

Now we can pull the images. After each pull, remember to re-tag the image with the name kubeadm expects:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.10
docker tag mirrorgooglecontainers/etcd-amd64:3.3.10 k8s.gcr.io/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
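
The repetitive pull-and-tag steps above can also be wrapped in a small loop. This is just a convenience sketch performing the same pulls and re-tags as the commands listed above (the dashboard image is left out):

# Pull from the mirror repository and re-tag to the k8s.gcr.io names kubeadm expects
for img in kube-apiserver-amd64:v1.14.3 kube-controller-manager-amd64:v1.14.3 \
           kube-scheduler-amd64:v1.14.3 kube-proxy-amd64:v1.14.3 \
           pause-amd64:3.1 etcd-amd64:3.3.10; do
  docker pull mirrorgooglecontainers/${img}
  docker tag mirrorgooglecontainers/${img} k8s.gcr.io/${img/-amd64/}
done
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1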

Part 2: Install the k8s master

Tip: the IP address below (172.16.180.251) needs to be replaced with your own machine's IP.

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.14.3 --apiserver-advertise-address 172.16.180.251 --service-cidr=10.225.0.0/16

--kubernetes-version: specifies the k8s version;
--apiserver-advertise-address: specifies the IP address the kube-apiserver advertises and listens on;
--pod-network-cidr: specifies the Pod network range (this must match the network flannel is configured with);
--service-cidr: specifies the Service network range.
Run the kubeadm init command as above and wait a few minutes.

If you hit an error, fix it according to the message. Not disabling swap, having too few CPUs, network connectivity problems, and so on will all cause errors; reading the error output carefully usually makes the fix straightforward.
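
If kubeadm init fails partway and you want to retry after fixing the cause, you can wipe the partial setup first; a rough sketch (the iptables cleanup is optional):

# Undo whatever kubeadm init managed to set up, then re-run kubeadm init
kubeadm reset -f
# Optionally clear leftover iptables rules before retrying
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X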

Once the command finishes, you will see output similar to the following:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.16.180.251:6443 --token apykm6.1rfitzg6x4bmd0ll \
    --discovery-token-ca-cert-hash sha256:114e913f3d254475c33325e7f18b9f49f3ce97244b782a9c871c70f6a0d7c750

The output above tells us there is still a bit of work to do:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

After a short wait, the node status should change to Ready:

# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   23m   v1.14.3

If your node stays NotReady for a long time, run kubectl get pod -n kube-system to check the pod status; this usually reveals the problem, for example the flannel image failing to download.
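
A few generic commands for digging further (the pod name is a placeholder to fill in from the get pod output):

# Find pods that are not Running, then inspect them
kubectl get pod -n kube-system -o wide
kubectl describe pod <pod-name> -n kube-system
kubectl logs <pod-name> -n kube-system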

Part 3: Add the worker nodes

Run the same kubeadm join command on both node machines (the exact command is in the output printed when the master finished installing):

kubeadm join 172.16.180.251:6443 --token apykm6.1rfitzg6x4bmd0ll \
    --discovery-token-ca-cert-hash sha256:114e913f3d254475c33325e7f18b9f49f3ce97244b782a9c871c70f6a0d7c750
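
If the token from the original kubeadm init output has expired (tokens are only valid for 24 hours by default), you can generate a fresh join command on the master:

# Run on the master; prints a complete kubeadm join command with a new token
kubeadm token create --print-join-command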

Copy /etc/kubernetes/admin.conf from the master node to the same path on each node, then run on the node:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

A short while after this completes, you can check the status of all nodes from the master:

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   16h   v1.14.3
k8s-node1    Ready    <none>   16h   v1.14.3
k8s-node2    Ready    <none>   16h   v1.14.3
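
As an optional smoke test (not part of the original steps), you can run a throwaway nginx deployment and check that it gets scheduled onto one of the worker nodes:

# Create a test deployment and see which node the pod lands on
kubectl create deployment nginx --image=nginx
kubectl get pod -o wide
# Clean up afterwards
kubectl delete deployment nginx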
