About Kubernetes: setting up a k8s environment with kubeadm


1. Environment

Three CentOS 7.9 VMs:

192.168.56.102 node1 (2 CPUs)
192.168.56.103 node2
192.168.56.104 node3

Set up passwordless SSH between the three machines

ssh-keygen -t rsa

~/.ssh/   // the key pair is generated in this directory

cat  ~/.ssh/id_rsa.pub  >  ~/.ssh/authorized_keys

chmod 700  ~/.ssh

chmod 600  ~/.ssh/authorized_keys

ssh-copy-id -i  ~/.ssh/id_rsa.pub  -p  22  root@node1
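The per-node steps above have to be repeated for every host. A minimal sketch that prints the `ssh-copy-id` command for each node (hostnames from this article) so you can review them before running:

```shell
#!/bin/sh
# Generate the key-distribution commands for every node in the cluster.
# This is a dry run: it only prints the commands for review; pipe the
# output to `sh` (or paste the lines) once they look right.
nodes="node1 node2 node3"

gen_copy_cmds() {
    for n in $1; do
        echo "ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@$n"
    done
}

gen_copy_cmds "$nodes"
```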

Set the hostname

hostname node1                   // temporary, lost on reboot
hostnamectl set-hostname node1   // permanent

Disable the firewall

systemctl stop firewalld

systemctl disable firewalld

Disable SELinux

setenforce 0   // temporary

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config   // permanent
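Before running the sed against the real /etc/selinux/config, it can be rehearsed on a scratch copy; a small sketch (GNU sed, as on CentOS):

```shell
#!/bin/sh
# Rehearse the permanent SELinux change on a scratch file, then confirm
# the resulting line before applying the same sed to /etc/selinux/config.
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$tmp"

# Only the SELINUX= line should have changed; SELINUXTYPE= stays intact.
grep '^SELINUX=' "$tmp"
rm -f "$tmp"
```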

Disable swap

swapoff -a

/etc/fstab   // comment out the swap line so swap stays off after reboot
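The "comment out the swap line" step can be scripted with sed. A sketch on a scratch copy of fstab (point it at /etc/fstab on the real nodes; the device names here are made-up examples):

```shell
#!/bin/sh
# Comment out any uncommented fstab entry whose type field is "swap",
# so `swapoff -a` survives a reboot. Shown on a scratch file.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Skip lines already starting with '#'; prefix matching swap lines with '#'.
sed -i '/^[^#].*\sswap\s/s/^/#/' "$fstab"

cat "$fstab"
rm -f "$fstab"
```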

Install Docker

Install the tooling and storage-driver dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Enable the repo
yum-config-manager --enable docker-ce-nightly

Remove old versions
yum remove docker docker-common docker-selinux docker-engine docker-io

Install Docker
yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 containerd.io -y   // pin Docker to 19.03 (containerd.io uses its own version scheme, so it is not pinned to 19.03.15)

Start docker
systemctl start docker
systemctl enable docker

Edit /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}

scp daemon.json root@node2:/etc/docker/
scp daemon.json root@node3:/etc/docker/

After copying, restart docker on each node (systemctl restart docker) so the change takes effect.

Add the Kubernetes yum repo
/etc/yum.repos.d/k8s.repo

[kubernetes]
name=kubernetes
enabled=1
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

yum clean all
yum makecache

2. Install k8s

yum search kubeadm --showduplicates

Run on all three machines:

yum install kubeadm-1.20.9-0 kubectl-1.20.9-0 kubelet-1.20.9-0 -y

systemctl start kubelet
systemctl enable kubelet

Initialize on the master node

kubeadm init \
--apiserver-advertise-address 192.168.56.102 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.9 \
--service-cidr 10.1.0.0/16 \
--pod-network-cidr 10.244.0.0/16

---------------------------------------------------------------------
--apiserver-advertise-address 192.168.56.102   // the master host's IP
--image-repository registry.aliyuncs.com/google_containers   // mirror for the component images
--kubernetes-version v1.20.9   // k8s version
--service-cidr 10.1.0.0/16   // service network range
--pod-network-cidr 10.244.0.0/16   // pod network range

After kubeadm init completes:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 192.168.56.102:6443 --token 9ei6kq.imr068aa8mvphhvt \
    --discovery-token-ca-cert-hash sha256:07093c3ae6cddb40a32b9648522248d4b02bd27a0e4338f525031663c4fd2aa2

kubectl get nodes

[root@localhost kubernetes]# kubectl get nodes
NAME                    STATUS     ROLES                  AGE   VERSION
localhost.localdomain   NotReady   control-plane,master   17m   v1.20.9

kubectl get pod -A

NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f89b7bc75-76xh8                        0/1     Pending   0          19m
kube-system   coredns-7f89b7bc75-pkzzm                        0/1     Pending   0          19m
kube-system   etcd-localhost.localdomain                      1/1     Running   0          19m
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          19m
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          19m
kube-system   kube-proxy-7k48f                                1/1     Running   0          19m
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          19m

Install the flannel network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

3. All done

[root@localhost kubelet]# kubectl get nodes
NAME                    STATUS     ROLES                  AGE     VERSION
localhost.localdomain   Ready      control-plane,master   31m     v1.20.9
node2                   NotReady   <none>                 3m39s   v1.20.9
node3                   Ready      <none>                 3m19s   v1.20.9

4. Errors encountered

A. kubelet would not start

The cause was the wrong cgroup driver

Add to docker's daemon.json:

"exec-opts": ["native.cgroupdriver=systemd"],

After that it still would not start

journalctl -xeu kubelet | less

Error log:
localhost.localdomain kubelet[32128]: F0729 00:04:15.112170   32128 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory

/var/lib/kubelet/config.yaml is generated by kubeadm, so kubelet can only come up after kubeadm init (or kubeadm join) has run.
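A quick pre-check avoids restarting kubelet in a loop before the file exists; a minimal sketch:

```shell
#!/bin/sh
# kubelet's config file only appears after `kubeadm init` (master) or
# `kubeadm join` (worker), so test for it before restarting the service.
check_kubelet_cfg() {
    if [ -f "$1" ]; then
        echo "config present, safe to restart kubelet"
    else
        echo "missing $1 - run kubeadm init/join first"
    fi
}

check_kubelet_cfg /var/lib/kubelet/config.yaml
```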

B. Worker nodes have no admin.conf

mkdir -p $HOME/.kube
[root@localhost ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
[root@localhost ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
chown: cannot access '/root/.kube/config': No such file or directory

scp a copy over from the master node

C. If kube-flannel.yml cannot be fetched, download it manually and apply the local file

kubectl apply reports:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile   // then retry

D. docker would not start

The mistake:

"exec-opts": ["native.cgroupdriver=systemd"],  had been typed as  "exe-opts": ["native.cgroupdriver=systemd"],
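A misspelled key like "exe-opts" is still syntactically valid JSON, so a syntax check alone will not catch it; checking for the exact key name will. A sketch of both checks, assuming python3 is available (shown on a scratch file containing the typo; point the checks at /etc/docker/daemon.json on the real node):

```shell
#!/bin/sh
# Check 1: does the file parse as JSON at all?
# Check 2: is the cgroup-driver key spelled exactly "exec-opts"?
f=$(mktemp)
cat > "$f" <<'EOF'
{"exe-opts": ["native.cgroupdriver=systemd"]}
EOF

python3 -m json.tool "$f" > /dev/null && echo "syntax OK"
grep -q '"exec-opts"' "$f" || echo "exec-opts key missing or misspelled"
rm -f "$f"
```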

E. The kubeadm join command from the master was lost

Regenerate it in one step:
-----
kubeadm token create --print-join-command
-----

Or rebuild the token and CA cert hash by hand:
-----
kubeadm token list
kubeadm token create --ttl 0
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
-----
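The final sed in that openssl pipeline just strips the `SHA256(stdin)= ` prefix that `openssl dgst -sha256 -hex` prints, leaving the bare hash for --discovery-token-ca-cert-hash. Demonstrated on a canned digest line (the hex value is the example hash from the join command earlier in this article, not one to reuse):

```shell
#!/bin/sh
# `openssl dgst -sha256 -hex` reading stdin prints "SHA256(stdin)= <hex>";
# `sed 's/^.* //'` deletes everything up to the last space, keeping <hex>.
digest='SHA256(stdin)= 07093c3ae6cddb40a32b9648522248d4b02bd27a0e4338f525031663c4fd2aa2'
hash=$(echo "$digest" | sed 's/^.* //')
echo "sha256:$hash"
```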

F. kubeadm join failed

kubeadm join 192.168.56.105:6443 --token kby4kb.8pk6i96jyuyqy9q1     --discovery-token-ca-cert-hash sha256:55a6876fb90e37679db74823ab8d469dabbe59ffa5ec927f1e6dc2ea16483732
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase kubelet-start: a Node with name "localhost.localdomain" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher

My VMs were cloned from the same image, so they all had the same hostname; each node's hostname has to be changed before joining.
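Fixing the cloned hostnames can be scripted across all nodes; a sketch that prints the per-node commands (IPs and names from this article) for review before running them over ssh:

```shell
#!/bin/sh
# Give each cloned node a unique hostname before `kubeadm join`.
# Dry run: prints one `ssh ... hostnamectl` command per node.
set -- "192.168.56.102 node1" "192.168.56.103 node2" "192.168.56.104 node3"
for pair in "$@"; do
    ip=${pair%% *}     # everything before the first space
    name=${pair##* }   # everything after the last space
    echo "ssh root@$ip hostnamectl set-hostname $name"
done
```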
