K8s Notes: Installing Kubernetes with kubeadm


Preface:

This article is my complete set of notes from recently setting up a k8s test environment; follow it step by step and you should end up with a working k8s cluster.

For more about k8s, see the official Kubernetes site:
https://kubernetes.io/

For more about kubeadm, see:
https://kubernetes.io/docs/re…

1. Preparation before deploying k8s

1.1 Environment requirements

Virtual machines: 3 in total, 1 for the k8s-master node and 2 for the k8s-slave nodes
Operating system: CentOS 7
Hardware: at least 2 CPU cores and 2 GB of RAM per node

1.2 Node roles

IP              Role        Installed components
192.168.100.20  k8s-master  kube-apiserver, kube-scheduler, kube-controller-manager, docker, flannel, kubelet
192.168.100.21  k8s-slave1  kubelet, kube-proxy, docker, flannel
192.168.100.22  k8s-slave2  kubelet, kube-proxy, docker, flannel

2. Set the hostname (run on master and all slaves)

Following the role assignment in the table above, run the corresponding command on each of the three hosts:

hostnamectl set-hostname k8s-master

hostnamectl set-hostname k8s-slave1

hostnamectl set-hostname k8s-slave2

3. Add cluster entries to /etc/hosts (run on master and all slaves)

cat <<EOF >> /etc/hosts
192.168.100.20 k8s-master
192.168.100.21 k8s-slave1
192.168.100.22 k8s-slave2
EOF
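
Optionally, verify that the three hostnames now resolve on every host:

getent hosts k8s-master k8s-slave1 k8s-slave2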

4. Configure iptables (run on master and all slaves)

iptables -P FORWARD ACCEPT

5. Disable swap (run on master and all slaves)

swapoff -a

Prevent the swap partition from being auto-mounted at boot (comment out the swap line in /etc/fstab):

sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
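
Optionally, confirm that swap is off and the fstab entry has been commented out:

free -m | grep -i swap
grep swap /etc/fstab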

6. Disable SELinux and firewalld (run on master and all slaves)

Change SELinux from enforcing to disabled. Note that in the command below, the \1 before "disabled" is the digit 1 (a backreference), not the lowercase letter l:

sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config

Temporarily turn off enforcement for the current session:

setenforce 0 

Verify that enforcement is off; if so, the command returns Permissive:

getenforce

Stop and disable the firewall:

systemctl stop firewalld && systemctl disable firewalld

Check that the firewall has been stopped:

systemctl status firewalld

7. Adjust kernel parameters (run on master and all slaves)

Method 1 (recommended: write the settings to a dedicated file under /etc/sysctl.d):

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
vm.swappiness=0
EOF

Method 2 (append the settings to the default file /etc/sysctl.conf):

cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
vm.swappiness=0
EOF

Method 3 (set the values at runtime with sysctl -w; unlike the first two methods, these settings do not persist across reboots):

sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=1
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w vm.max_map_count=262144
sudo sysctl -w vm.swappiness=0

Parameter notes:

  • net.bridge.bridge-nf-call-ip6tables and net.bridge.bridge-nf-call-iptables: netfilter can filter both at L2 and at L3. With a value of 0, iptables does not process bridged traffic; with a value of 1, packets forwarded by a Layer-2 bridge are also run through the iptables FORWARD rules, i.e. L3 iptables rules end up filtering L2 frames. Kubernetes expects both values to be 1 so that bridged pod traffic is visible to iptables.
  • vm.max_map_count: limits the number of VMAs (virtual memory areas) a process may own. If it is too low you may hit errors such as "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]".
  • net.ipv4.ip_forward: for security reasons, Linux disables packet forwarding by default. Forwarding means that when a host has more than one NIC and one of them receives a packet destined for another address, the kernel sends it out through another NIC according to the routing table, which is normally a router's job. To give Linux this routing capability, set net.ipv4.ip_forward = 1.
  • vm.swappiness: 0 means use physical memory as much as possible before touching swap; 100 means swap aggressively, moving data from memory into swap promptly.

Then run the following on all nodes to load the br_netfilter module and apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
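
Optionally, confirm that the module is loaded and the new values are in effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness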

8. Configure package repositories (run on master and all slaves)

8.1 Back up the existing yum repo configs

mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/bak/

8.2 Fetch the Aliyun repo configs

wget -O /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
or
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
or
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

8.3 Configure the Kubernetes repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

8.4 Rebuild the yum cache

yum clean all   # clear all existing yum caches
yum makecache   # rebuild the yum cache
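
Optionally, confirm that the Aliyun, docker-ce and kubernetes repos are now active:

yum repolist enabled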

9. Configure a Docker registry mirror (run on master and all slaves)

mkdir -p /etc/docker/
cat <<EOF > /etc/docker/daemon.json
{"registry-mirrors": ["https://q3rmdln3.mirror.aliyuncs.com"],
  "insecure-registries":["http://192.168.100.20:5000"]
}
EOF

10. Install Docker (run on master and all slaves)

List the available versions:

yum list docker-ce --showduplicates | sort -r

Install the latest version from the repo:

yum install docker-ce -y

Start Docker and enable it at boot:

systemctl enable docker && systemctl start docker

Check the version:

docker --version
or
docker -v
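
Optionally, confirm that the registry mirror from section 9 was picked up, and check which cgroup driver Docker is using (this becomes relevant for the kubeadm init issue described in section 12):

docker info | grep -A 1 'Registry Mirrors'
docker info | grep -i 'cgroup driver'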

11. Install the Kubernetes components (run on master and all slaves)

Install kubeadm, kubelet and kubectl.
Run this on ALL nodes!
List the available versions:

yum list kubeadm kubelet kubectl --showduplicates

Because releases come out frequently, you can pin a specific version (the example below uses 1.16.2; the cluster in the rest of this guide runs v1.23.4):

yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes

Or install the latest version without pinning:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Parameter notes:

  • --disableexcludes=kubernetes: ignore any exclude= list configured for the kubernetes repo so that kubelet, kubeadm and kubectl can be installed from it
    Other notes:
  • kubelet depends on conntrack and socat.

Check the installed versions:

kubelet --version
kubeadm version
kubectl version

Enable kubelet to start at boot; otherwise pods will not come back automatically after the host reboots:

systemctl enable kubelet

12. Initialize the cluster (master only)

Run this on the master only!

mkdir -p /data/k8s-install
cd /data/k8s-install

Create kubeadm.yaml with vim and configure it as follows (the fields marked with # comments are the ones you need to adjust):

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.20 # apiserver address; since this is a single-master setup, use the master's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # switch to a domestic mirror such as Aliyun; registry.cn-hangzhou.aliyuncs.com/google_containers also works
kind: ClusterConfiguration
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16" # Pod CIDR; the flannel plugin uses this subnet
  serviceSubnet: 10.96.0.0/12
scheduler: {}
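
Alternatively, instead of writing the file from scratch, you can have kubeadm generate a default configuration and then edit it to match the values above (the exact defaults vary with the kubeadm version):

kubeadm config print init-defaults > kubeadm.yaml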

List the required images (every component runs as a container).
There should be 7: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd, coredns.
If the images cannot be found, try changing the imageRepository configured in kubeadm.yaml back to the Google registry: k8s.gcr.io

[root@k8s-master k8s-install]# kubeadm config images list --config kubeadm.yaml 
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.4
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.4
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.4
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6

Pull the images locally:

[root@k8s-master k8s-install]# kubeadm config images pull --config kubeadm.yaml 
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

Initialize the master node:

[root@k8s-master k8s-install]# kubeadm init --config kubeadm.yaml

Note!
The following error may occur:

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Fix (recommended: apply this configuration on ALL nodes first, then run the initialization):
Add the line "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json, restart the docker service, and clear the kubeadm state before re-initializing. The underlying cause is a cgroup driver mismatch: recent kubelet versions default to the systemd cgroup driver, while docker defaults to cgroupfs.

[root@k8s-master k8s-install]# cat /etc/docker/daemon.json
{"exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://q3rmdln3.mirror.aliyuncs.com"],
  "insecure-registries":["http://192.168.100.20:5000"]
}

Restart the docker service:

[root@k8s-master k8s-install]# systemctl restart docker

Reset the kubeadm state:

[root@k8s-master k8s-install]# kubeadm reset -f

Re-run the initialization:

[root@k8s-master k8s-install]# kubeadm init --config kubeadm.yaml

On success, the output ends like this:

……

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.20:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cebc76018abc850a5b948d34e35fe34b021e92ecc65299……

As the output instructs, run the following on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
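
A quick way to confirm that kubectl can reach the new control plane:

kubectl cluster-info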

Check the nodes. At this point the status should be NotReady, because no network plugin has been configured yet:

[root@k8s-master k8s-install]# kubectl get node
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   16m   v1.23.4

The kubectl describe node k8s-master command also shows that the network is not ready:

[root@k8s-master k8s-install]# kubectl describe node k8s-master | grep network
  Ready            False   Fri, 18 Feb 2022 16:06:40 +0800   Fri, 18 Feb 2022 15:35:44 +0800   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

A side note:
When troubleshooting, the most important technique is kubectl describe, which shows an object's (here, a Node's) details, conditions and events.
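
For example, to review a node's conditions and the most recent cluster events (both standard kubectl commands):

kubectl describe node k8s-master
kubectl get events -A --sort-by='.lastTimestamp'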

13. Join the slaves to the cluster (run on every slave)

kubeadm join 192.168.100.20:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cebc76018abc850a5b948d34e35fe34b021e92ecc65299……
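
If the token has expired (the ttl in kubeadm.yaml above is 24h) or the join command has been lost, generate a fresh one on the master:

kubeadm token create --print-join-command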

If the join succeeds, the slave prints:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the nodes again:

[root@k8s-master k8s-install]# kubectl get node
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   24m    v1.23.4
k8s-slave1   NotReady   <none>                 97s    v1.23.4
k8s-slave2   NotReady   <none>                 102s   v1.23.4

Check the pods. The output shows the coredns pods stuck in Pending, i.e. they fail to be scheduled; this is expected, because the cluster network is not ready yet:

[root@k8s-master k8s-install]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-jrbck              0/1     Pending   0          33m
coredns-6d8c4cb4d-wx5j6              0/1     Pending   0          33m
etcd-k8s-master                      1/1     Running   0          34m
kube-apiserver-k8s-master            1/1     Running   0          34m
kube-controller-manager-k8s-master   1/1     Running   0          34m
kube-proxy-57h47                     1/1     Running   0          33m
kube-proxy-q55pv                     1/1     Running   0          11m
kube-proxy-qs57s                     1/1     Running   0          11m
kube-scheduler-k8s-master            1/1     Running   0          34m

14. Install flannel (master only)

Because a few configuration changes are still needed, it is not recommended to apply it directly with:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Instead, download the yml file first, modify it, and then apply it:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Edit kube-flannel.yml and, at around line 200 of the file (in the kube-flannel container's args), add an option that pins the network interface: - --iface=XXX

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {"Type": "vxlan"}
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.16.3 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.16.3 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33 # If the host has multiple NICs, specify the internal one (usually eth0 or ens33 on CentOS); if omitted, flannel defaults to the first NIC.
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
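
To decide which interface name to put in --iface, check which NIC carries the node's internal IP (assuming the 192.168.100.0/24 addressing used in this guide):

ip -o -4 addr show | grep 192.168.100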

Run the command to install flannel.
Since this is a fresh install, either kubectl create or kubectl apply works (note the difference between the two: create fails if the objects already exist, while apply also updates existing objects).

[root@k8s-master k8s-install]# kubectl create -f kube-flannel.yml 
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the pods again; the coredns pods are now Running:

[root@k8s-master k8s-install]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-jrbck              1/1     Running   0          83m
coredns-6d8c4cb4d-wx5j6              1/1     Running   0          83m
etcd-k8s-master                      1/1     Running   0          83m
kube-apiserver-k8s-master            1/1     Running   0          83m
kube-controller-manager-k8s-master   1/1     Running   0          83m
kube-flannel-ds-6zvv2                1/1     Running   0          4m53s
kube-flannel-ds-lmthg                1/1     Running   0          4m53s
kube-flannel-ds-p96n9                1/1     Running   0          4m53s
kube-proxy-57h47                     1/1     Running   0          83m
kube-proxy-q55pv                     1/1     Running   0          60m
kube-proxy-qs57s                     1/1     Running   0          60m
kube-scheduler-k8s-master            1/1     Running   0          83m

Check the nodes again; their status has changed to Ready:

[root@k8s-master k8s-install]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   84m   v1.23.4
k8s-slave1   Ready    <none>                 61m   v1.23.4
k8s-slave2   Ready    <none>                 61m   v1.23.4

The k8s cluster deployment is complete!
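
As an optional smoke test (assuming the nodes can pull the nginx image from a registry), deploy a throwaway nginx, check that it is scheduled onto a slave and reaches Running, then clean it up:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
kubectl delete deployment nginx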

15. Setting node ROLES and LABELS

Show the roles:

[root@k8s-master k8s-install]# kubectl get node -o wide
NAME         STATUS   ROLES                  AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   119m   v1.23.4   192.168.100.20   <none>        CentOS Linux 7 (Core)   3.10.0-1160.53.1.el7.x86_64   docker://20.10.12
k8s-slave1   Ready    <none>                 96m    v1.23.4   192.168.100.21   <none>        CentOS Linux 7 (Core)   3.10.0-1160.53.1.el7.x86_64   docker://20.10.12
k8s-slave2   Ready    <none>                 97m    v1.23.4   192.168.100.22   <none>        CentOS Linux 7 (Core)   3.10.0-1160.53.1.el7.x86_64   docker://20.10.12

Show the labels:

[root@k8s-master k8s-install]# kubectl get node --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8s-master   Ready    control-plane,master   120m   v1.23.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-slave1   Ready    <none>                 97m    v1.23.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-slave1,kubernetes.io/os=linux
k8s-slave2   Ready    <none>                 97m    v1.23.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-slave2,kubernetes.io/os=linux

A slave node's ROLES column is <none> by default. To set a role:

kubectl label no k8s-slave1 kubernetes.io/role=test-node

Parameter notes:

  • the value after kubernetes.io/role= is what appears in the node's ROLES column

To set an ordinary label:

kubectl label no k8s-slave1 roles=dev-pc
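
To undo either of the above, remove the label by appending a dash to its key:

kubectl label no k8s-slave1 kubernetes.io/role-
kubectl label no k8s-slave1 roles-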

Notes:

  • After setting a role this way, the value shows up in both the ROLES and LABELS columns, e.g. test-node in the example above
  • An ordinary label only shows up in the LABELS column, e.g. dev-pc in the example above
  • Reference: https://blog.csdn.net/Lingoes…

16. Why the master node does not schedule workload pods by default

Please note! After deployment, the master node cannot schedule workload pods by default, i.e. application pods will not run on the master!

Details:

  • In a cluster deployed with kubeadm, the master node refuses to schedule pods onto itself by default. In official terms, the master carries one or more "taints", and a taint makes the node reject pods that do not tolerate it. In some situations, for example a test environment short on resources, you may want the master to double as a worker and run pods. There are two ways: (1) remove the taint (not recommended in production); (2) give the pod a toleration for the taint on that node, as in the sketch below.
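
A minimal sketch of option (2): a toleration added to a pod spec so the scheduler may place it on the tainted master (the pod name and image are placeholders for illustration only):

apiVersion: v1
kind: Pod
metadata:
  name: demo-toleration   # placeholder name
spec:
  containers:
  - name: demo
    image: nginx
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"       # with Exists, the toleration matches the taint regardless of its value
    effect: "NoSchedule"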

The kubectl describe command confirms that workload pods cannot run on the master by default:

[root@k8s-master k8s-install]# kubectl describe nodes k8s-master | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule

If you want workload pods to be able to run on the master, you can remove the taint with the following command (not recommended!):

kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-

If you run the command above, kubectl describe should now show:

[root@k8s-master k8s-install]# kubectl describe nodes k8s-master | grep Taints
Taints:             <none>
