Before We Begin
❝
This post covers Kubernetes taints and tolerations; thanks to the friends who keep following me. If you plan to take a Kubernetes certification exam, feel free to get in touch; to learn about the certification course, see: 《K8s CKA+CKS 认证实战班》2023 版
❞
《K8s CKA+CKS 认证实战班》2023 版: https://mp.weixin.qq.com/s/h1bjcIwy2enVD203o-ntlA
Taints and Tolerations
What is a taint?
❝
Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or as a hard requirement). Taints are the opposite: they allow a node to repel a set of Pods. In other words, a taint keeps Pods off a particular node; the node tells an ordinary (non-tolerating) Pod, "I refuse to accept you." It is rejection initiated by the node.
❞
What is a toleration?
❝
Tolerations are applied to Pods. A toleration allows the scheduler to schedule the Pod onto a node carrying a matching taint; it permits scheduling there but does not guarantee it. In other words, a Pod with a toleration is allowed onto a node that holds a taint: if you want a Pod to be able to land on a tainted node, add a toleration so the Pod accepts that taint, and it "may" then be assigned to that node (add the tolerations field to the Pod's configuration).
❞
Taints and tolerations work together to keep Pods away from unsuitable nodes. One or more taints can be applied to a node, and the node will not accept any Pod that does not tolerate its taints.
To put it plainly: label-based placement (nodeSelector) takes the Pod's point of view, adding attributes to the Pod to decide which node it should be scheduled to. You can also take the node's point of view: adding taints to a node keeps unsuitable Pods from being assigned to it. A compact sketch of the two sides follows.
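A minimal sketch of how the two sides pair up (the node name and the key/value pair here are placeholders, not from the test cluster):
# Node side: repel Pods that do not tolerate this taint
kubectl taint node <node-name> demo-key=demo-value:NoSchedule
# Pod side: the matching toleration, as a fragment of a Pod spec
tolerations:
- key: "demo-key"
  operator: "Equal"
  value: "demo-value"
  effect: "NoSchedule"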
Syntax
Syntax for adding a taint to a node
kubectl taint node xxxx key=value:[effect]
effect:
- NoSchedule: Pods that do not tolerate the taint cannot be scheduled onto the node
- PreferNoSchedule: the scheduler tries to avoid the node, but may still use it
- NoExecute: not only are new Pods kept off the node, existing Pods on the node are evicted
Syntax for removing a taint
kubectl taint node xxxx key=value:[effect]-
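For instance, the taint added in the hands-on section below could be removed again like this (same key and effect, with a trailing "-"):
kubectl taint node test-b-k8s-node02 disktype=sas:NoSchedule-
# Confirm it is gone
kubectl describe node test-b-k8s-node02 | grep Taint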
Hands-On
- Taint the node test-b-k8s-node02, don't steer the scheduling in any way, and let k8s place the Pods with its own algorithm; then see whether any of these 10 Pods land on the tainted node
# Add the taint
kubectl taint node test-b-k8s-node02 disktype=sas:NoSchedule
# Check the Taints field in the node details
tantianran@test-b-k8s-master:~$ kubectl describe node test-b-k8s-node02 | grep Taint
Taints: disktype=sas:NoSchedule
goweb-demo.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: goweb-demo
namespace: test-a
spec:
replicas: 10
selector:
matchLabels:
app: goweb-demo
template:
metadata:
labels:
app: goweb-demo
spec:
containers:
- name: goweb-demo
image: 192.168.11.247/web-demo/goweb-demo:20221229v3
---
apiVersion: v1
kind: Service
metadata:
name: goweb-demo
namespace: test-a
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8090
selector:
app: goweb-demo
type: NodePort
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
goweb-demo-b98869456-84p4b 1/1 Running 0 18s 10.244.240.50 test-b-k8s-node01 <none> <none>
goweb-demo-b98869456-cjjj8 1/1 Running 0 18s 10.244.240.13 test-b-k8s-node01 <none> <none>
goweb-demo-b98869456-fxgjf 1/1 Running 0 18s 10.244.240.12 test-b-k8s-node01 <none> <none>
goweb-demo-b98869456-jfdvl 1/1 Running 0 18s 10.244.240.43 test-b-k8s-node01 <none> <none>
goweb-demo-b98869456-k6krp 1/1 Running 0 18s 10.244.240.41 test-b-k8s-node01 <none> <none>
goweb-demo-b98869456-kcpsz 1/1 Running 0 18s 10.244.240.6 test-b-k8s-node01 <none> <none>
goweb-demo-b98869456-lrkzc 1/1 Running 0 18s 10.244.240.49 test-b-k8s-node01 <none> <none>
goweb-demo-b98869456-nqr2j 1/1 Running 0 18s 10.244.240.33 test-b-k8s-node01 <none> <none>
goweb-demo-b98869456-pt5zk 1/1 Running 0 18s 10.244.240.28 test-b-k8s-node01 <none> <none>
goweb-demo-b98869456-s9rt5 1/1 Running 0 18s 10.244.240.42 test-b-k8s-node01 <none> <none>
tantianran@test-b-k8s-master:~/goweb-demo$
❝
All of the Pods ended up on test-b-k8s-node01. test-b-k8s-node02 carries the taint, so it refused to host any of them.
❞
- test-b-k8s-node02 already carries the taint; now try to force Pods onto it with a nodeSelector and see whether they end up on the tainted node
goweb-demo.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: goweb-demo
namespace: test-a
spec:
replicas: 10
selector:
matchLabels:
app: goweb-demo
template:
metadata:
labels:
app: goweb-demo
spec:
nodeSelector:
disktype: sas
containers:
- name: goweb-demo
image: 192.168.11.247/web-demo/goweb-demo:20221229v3
---
apiVersion: v1
kind: Service
metadata:
name: goweb-demo
namespace: test-a
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8090
selector:
app: goweb-demo
type: NodePort
Check the status of the Pods
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a
NAME READY STATUS RESTARTS AGE
goweb-demo-54bc765fff-2gb98 0/1 Pending 0 20s
goweb-demo-54bc765fff-67c56 0/1 Pending 0 20s
goweb-demo-54bc765fff-6fdvx 0/1 Pending 0 20s
goweb-demo-54bc765fff-c2bgd 0/1 Pending 0 20s
goweb-demo-54bc765fff-d55mw 0/1 Pending 0 20s
goweb-demo-54bc765fff-dl4x4 0/1 Pending 0 20s
goweb-demo-54bc765fff-g4vb2 0/1 Pending 0 20s
goweb-demo-54bc765fff-htjkp 0/1 Pending 0 20s
goweb-demo-54bc765fff-s76rh 0/1 Pending 0 20s
goweb-demo-54bc765fff-vg6dn 0/1 Pending 0 20s
tantianran@test-b-k8s-master:~/goweb-demo$
❝
The awkward outcome is plain to see: the node clearly carries a taint, yet we insisted on assigning Pods to it, so every Pod is stuck in Pending, i.e. waiting to be placed. So what if you really must put Pods onto a tainted Node? See the example below.
❞
- If you must put Pods onto the tainted Node: keep the nodeSelector and simply add a toleration. Are the Pods now guaranteed to be scheduled onto the tainted node? A test will tell. goweb-demo.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: goweb-demo
namespace: test-a
spec:
replicas: 10
selector:
matchLabels:
app: goweb-demo
template:
metadata:
labels:
app: goweb-demo
spec:
nodeSelector:
disktype: sas
tolerations:
- key: "disktype"
operator: "Equal"
value: "sas"
effect: "NoSchedule"
containers:
- name: goweb-demo
image: 192.168.11.247/web-demo/goweb-demo:20221229v3
---
apiVersion: v1
kind: Service
metadata:
name: goweb-demo
namespace: test-a
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8090
selector:
app: goweb-demo
type: NodePort
Check the status of the Pods
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a
NAME READY STATUS RESTARTS AGE
goweb-demo-68cf558b74-6qddp 0/1 Pending 0 109s
goweb-demo-68cf558b74-7g6cm 0/1 Pending 0 109s
goweb-demo-68cf558b74-f7g6t 0/1 Pending 0 109s
goweb-demo-68cf558b74-kcs9j 0/1 Pending 0 109s
goweb-demo-68cf558b74-kxssv 0/1 Pending 0 109s
goweb-demo-68cf558b74-pgrvb 0/1 Pending 0 109s
goweb-demo-68cf558b74-ps5dn 0/1 Pending 0 109s
goweb-demo-68cf558b74-rb2w5 0/1 Pending 0 109s
goweb-demo-68cf558b74-tcnj4 0/1 Pending 0 109s
goweb-demo-68cf558b74-txqfs 0/1 Pending 0 109s
❝
In the YAML above, the tolerations field declares the toleration. The test answers the earlier question: if you keep the nodeSelector and simply add a toleration, are the Pods guaranteed to be scheduled onto the tainted node? After testing, the answer is: no. The reason: a taint is not a label. Tainting a node with disktype=sas:NoSchedule does not give it a disktype=sas label, so a nodeSelector asking for disktype: sas matches no node at all, and the Pods stay Pending.
❞
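A quick way to confirm the cause (a sketch; these checks were not part of the original run):
# List nodes that actually carry the label the nodeSelector asks for
kubectl get nodes -l disktype=sas
# The taint lives in a separate field from the labels
kubectl describe node test-b-k8s-node02 | grep Taint
If the first command returns no nodes, labeling the node (kubectl label node test-b-k8s-node02 disktype=sas) would let this nodeSelector-plus-toleration combination schedule normally through the scheduler.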
So what now? Keep testing: this time drop the nodeSelector and keep only the toleration, and see whether the Pods may now be placed on the tainted node.
goweb-demo.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: goweb-demo
namespace: test-a
spec:
replicas: 10
selector:
matchLabels:
app: goweb-demo
template:
metadata:
labels:
app: goweb-demo
spec:
tolerations:
- key: "disktype"
operator: "Equal"
value: "sas"
effect: "NoSchedule"
containers:
- name: goweb-demo
image: 192.168.11.247/web-demo/goweb-demo:20221229v3
---
apiVersion: v1
kind: Service
metadata:
name: goweb-demo
namespace: test-a
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8090
selector:
app: goweb-demo
type: NodePort
Check the status of the Pods
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
goweb-demo-55ff5cd68c-287vw 1/1 Running 0 110s 10.244.222.57 test-b-k8s-node02 <none> <none>
goweb-demo-55ff5cd68c-7s7zb 1/1 Running 0 110s 10.244.222.24 test-b-k8s-node02 <none> <none>
goweb-demo-55ff5cd68c-84jww 1/1 Running 0 110s 10.244.240.24 test-b-k8s-node01 <none> <none>
goweb-demo-55ff5cd68c-b5l9m 1/1 Running 0 110s 10.244.240.15 test-b-k8s-node01 <none> <none>
goweb-demo-55ff5cd68c-c2gfp 1/1 Running 0 110s 10.244.222.3 test-b-k8s-node02 <none> <none>
goweb-demo-55ff5cd68c-hpjn4 1/1 Running 0 110s 10.244.240.62 test-b-k8s-node01 <none> <none>
goweb-demo-55ff5cd68c-j5bvc 1/1 Running 0 110s 10.244.222.43 test-b-k8s-node02 <none> <none>
goweb-demo-55ff5cd68c-r95f6 1/1 Running 0 110s 10.244.240.16 test-b-k8s-node01 <none> <none>
goweb-demo-55ff5cd68c-rhvmw 1/1 Running 0 110s 10.244.240.60 test-b-k8s-node01 <none> <none>
goweb-demo-55ff5cd68c-rl8nh 1/1 Running 0 110s 10.244.222.8 test-b-k8s-node02 <none> <none>
❝
The placement above shows that some of the Pods, now carrying the toleration, were scheduled onto the tainted node test-b-k8s-node02.
❞
- One more small experiment: let the Pods tolerate any taint. The master carries a taint by default too (clusters built from binaries aside), so could a Pod end up on the master? A test will tell
First, check the master's taints
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl describe node test-b-k8s-master | grep Taint
Taints: node-role.kubernetes.io/control-plane:NoSchedule
goweb-demo.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: goweb-demo
namespace: test-a
spec:
replicas: 10
selector:
matchLabels:
app: goweb-demo
template:
metadata:
labels:
app: goweb-demo
spec:
tolerations:
- effect: "NoSchedule"
operator: "Exists"
containers:
- name: goweb-demo
image: 192.168.11.247/web-demo/goweb-demo:20221229v3
---
apiVersion: v1
kind: Service
metadata:
name: goweb-demo
namespace: test-a
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8090
selector:
app: goweb-demo
type: NodePort
Check the status of the Pods
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
goweb-demo-65bbd7b49c-5qb5m 0/1 ImagePullBackOff 0 20s 10.244.82.55 test-b-k8s-master <none> <none>
goweb-demo-65bbd7b49c-7qqw8 1/1 Running 0 20s 10.244.222.13 test-b-k8s-node02 <none> <none>
goweb-demo-65bbd7b49c-9tflk 1/1 Running 0 20s 10.244.240.27 test-b-k8s-node01 <none> <none>
goweb-demo-65bbd7b49c-dgxhx 1/1 Running 0 20s 10.244.222.44 test-b-k8s-node02 <none> <none>
goweb-demo-65bbd7b49c-fbmn5 1/1 Running 0 20s 10.244.240.1 test-b-k8s-node01 <none> <none>
goweb-demo-65bbd7b49c-h2nnz 1/1 Running 0 20s 10.244.240.39 test-b-k8s-node01 <none> <none>
goweb-demo-65bbd7b49c-kczsp 1/1 Running 0 20s 10.244.240.40 test-b-k8s-node01 <none> <none>
goweb-demo-65bbd7b49c-ms768 1/1 Running 0 20s 10.244.222.45 test-b-k8s-node02 <none> <none>
goweb-demo-65bbd7b49c-pbwht 0/1 ErrImagePull 0 20s 10.244.82.56 test-b-k8s-master <none> <none>
goweb-demo-65bbd7b49c-zqxlt 1/1 Running 0 20s 10.244.222.18 test-b-k8s-node02 <none> <none>
Sure enough, 2 of the Pods actually went to the master node (test-b-k8s-master). The master has no local copy of the image, and the harbor server was not powered on, so the image pulls failed; that is not the concern here. The concern is that Pods landed on the master at all, which comes down to this Kubernetes rule: a toleration whose key is empty and whose operator is Exists matches any key, value, and effect, i.e. it tolerates every taint. Once you understand the Exists mechanism, it is clear why Pods ended up on the master.
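For reference, the fully wildcard form omits the effect as well; a toleration like this minimal sketch matches every taint, whatever its key, value, or effect:
tolerations:
- operator: "Exists"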
❝
Warning: keep in mind that the master is tainted by default because it is the management node; for security reasons, running regular Pods (that is, business-workload Pods) on the master is not recommended.
❞
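That said, if you genuinely need regular Pods on the control plane (say, on a single-node test cluster), the default taint can be removed with the deletion syntax shown earlier; a sketch, not a recommendation for production:
kubectl taint node test-b-k8s-master node-role.kubernetes.io/control-plane:NoSchedule-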
- Is there any way at all to force Pods onto a tainted node? Let's try
Node test-b-k8s-node02 is the one carrying a taint
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl describe node test-b-k8s-node02 | grep Taint
Taints: disktype=sas:NoSchedule
goweb-demo.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: goweb-demo
namespace: test-a
spec:
replicas: 10
selector:
matchLabels:
app: goweb-demo
template:
metadata:
labels:
app: goweb-demo
spec:
nodeName: test-b-k8s-node02
containers:
- name: goweb-demo
image: 192.168.11.247/web-demo/goweb-demo:20221229v3
---
apiVersion: v1
kind: Service
metadata:
name: goweb-demo
namespace: test-a
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8090
selector:
app: goweb-demo
type: NodePort
❝
In the configuration above, note the nodeName field. nodeName specifies a node by name and places the Pod onto that node directly; its mechanism "bypasses the scheduler", which is also why the NoSchedule taint was never evaluated.
❞
Check the status of the Pods
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
goweb-demo-dd446d4b9-2zdnx 1/1 Running 0 13s 10.244.222.39 test-b-k8s-node02 <none> <none>
goweb-demo-dd446d4b9-4qbg9 1/1 Running 0 13s 10.244.222.6 test-b-k8s-node02 <none> <none>
goweb-demo-dd446d4b9-67cpl 1/1 Running 0 13s 10.244.222.63 test-b-k8s-node02 <none> <none>
goweb-demo-dd446d4b9-fhsgf 1/1 Running 0 13s 10.244.222.53 test-b-k8s-node02 <none> <none>
goweb-demo-dd446d4b9-gp9gj 1/1 Running 0 13s 10.244.222.49 test-b-k8s-node02 <none> <none>
goweb-demo-dd446d4b9-hzvs2 1/1 Running 0 13s 10.244.222.9 test-b-k8s-node02 <none> <none>
goweb-demo-dd446d4b9-px598 1/1 Running 0 13s 10.244.222.22 test-b-k8s-node02 <none> <none>
goweb-demo-dd446d4b9-rkbm4 1/1 Running 0 13s 10.244.222.40 test-b-k8s-node02 <none> <none>
goweb-demo-dd446d4b9-vr9mq 1/1 Running 0 13s 10.244.222.17 test-b-k8s-node02 <none> <none>
goweb-demo-dd446d4b9-wnfqc 1/1 Running 0 13s 10.244.222.16 test-b-k8s-node02 <none> <none>
❝
Notice that every Pod landed on test-b-k8s-node02. Why did none go to test-b-k8s-node01? Precisely because the mechanism bypasses the scheduler. Use the nodeName field sparingly in production: with every Pod on a single node, it is a single point of failure. In a test environment, though, it is perfectly fine! For scheduler-mediated pinning instead, see the sketch below.
❞
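To pin Pods to the tainted node while still going through the scheduler, the earlier pieces can be combined; a sketch, assuming you first give the node the label (this label command was not run in the tests above):
# Give the node the label that the nodeSelector will ask for
kubectl label node test-b-k8s-node02 disktype=sas
Then keep both the nodeSelector and the toleration in the Pod template:
nodeSelector:
  disktype: sas
tolerations:
- key: "disktype"
  operator: "Equal"
  value: "sas"
  effect: "NoSchedule"
With the label in place, the scheduler itself assigns the Pods to test-b-k8s-node02, and taints are respected rather than silently bypassed.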
Finally
❝
That's all for this post; more small case studies are on the way, so stay tuned. Friends who want to take the K8S certification, click to learn more: https://mp.weixin.qq.com/s/h1bjcIwy2enVD203o-ntlA
This article is reproduced from (follow us if you like it): https://mp.weixin.qq.com/s/qJ8gr4xyuTjXkA6p9Yrp7g
❞