Introduction
The Scheduler is the Kubernetes scheduler; its main job is to assign the Pods you define to nodes in the cluster. This sounds simple enough, but there are many issues to consider:
- Fairness: how to guarantee that every node can be allocated resources
- Efficient resource utilization: all cluster resources should be used to the fullest
- Efficiency: scheduling performance must be good, so that large batches of Pods can be scheduled as quickly as possible
- Flexibility: users should be able to control the scheduling logic according to their own needs
The Scheduler runs as a separate program. Once started, it continuously watches the API Server for Pods whose PodSpec.NodeName is empty and creates a binding for each of them, indicating which node the Pod should be placed on.
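To illustrate, that binding is an ordinary core/v1 Binding object posted to the Pod's binding subresource; a minimal sketch (the pod name mypod and node name k8s-node01 are placeholders) looks roughly like this:
```yaml
apiVersion: v1
kind: Binding
metadata:
  name: mypod          # the Pod being bound
target:
  apiVersion: v1
  kind: Node
  name: k8s-node01     # the node the scheduler selected
```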
Scheduling Process
Scheduling is divided into several parts: first, nodes that do not satisfy the requirements are filtered out, a step called predicate; then the nodes that passed are ranked by priority, which is the priority step; finally, the node with the highest priority is selected. If any step along the way returns an error, the error is returned immediately.
A series of predicate algorithms are available:
- PodFitsResources: whether the resources remaining on the node are greater than those requested by the Pod
- PodFitsHost: if the Pod specifies a NodeName, check whether the node's name matches it
- PodFitsHostPorts: whether the ports already in use on the node conflict with the ports requested by the Pod
- PodSelectorMatches: filter out nodes whose labels do not match those specified by the Pod
- NoDiskConflict: the volumes already mounted must not conflict with the volumes requested by the Pod, unless both are read-only
If no suitable node is found during the predicate step, the Pod stays in the Pending state and scheduling keeps being retried until some node satisfies the requirements. After this step, if more than one node satisfies the requirements, the priorities step follows: the nodes are ranked by priority.
A priority consists of a series of key-value pairs, where the key is the name of the priority item and the value is its weight (the item's importance). These priority options include:
- LeastRequestedPriority: the weight is derived from the CPU and memory utilization; the lower the utilization, the higher the weight. In other words, this priority favors nodes with a lower proportion of resources in use
- BalancedResourceAllocation: the closer the CPU and memory utilization on a node are to each other, the higher the weight. This should be used together with the priority above, not on its own
- ImageLocalityPriority: favors nodes that already have the images the Pod will use; the larger the total size of those images, the higher the weight
All the priority items and their weights are fed into the algorithm to produce the final result.
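As a point of reference, older kube-scheduler releases accepted a policy file (passed with --policy-config-file) that listed which predicates and priorities to run and the weight of each priority. A minimal sketch in that JSON format, using an illustrative subset of the names above, could look like:
```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsHostPorts"},
    {"name": "PodFitsResources"},
    {"name": "NoDiskConflict"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1}
  ]
}
```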
Custom Scheduler
Besides the scheduler that ships with Kubernetes, you can also write your own. By specifying the scheduler's name in the spec.schedulerName field, you can pick a particular scheduler for a Pod. For example, the Pod below is scheduled by my-scheduler rather than the default default-scheduler:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotation-second-scheduler
  labels:
    name: multischeduler-example
spec:
  schedulerName: my-scheduler
  containers:
  - name: pod-with-second-annotation-container
    image: gcr.io/google_containers/pause:2.0
```
Node Affinity
pod.spec.affinity.nodeAffinity
- preferredDuringSchedulingIgnoredDuringExecution: soft policy
- requiredDuringSchedulingIgnoredDuringExecution: hard policy
requiredDuringSchedulingIgnoredDuringExecution
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
  - name: with-node-affinity
    image: myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - k8s-node02
```
preferredDuringSchedulingIgnoredDuringExecution
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
  - name: with-node-affinity
    image: myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: source
            operator: In
            values:
            - qikqiak
```
Using both together
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
  - name: with-node-affinity
    image: myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - k8s-node02
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: source
            operator: In
            values:
            - qikqiak
```
Key-Value Operators
- In: the label's value is in the given list
- NotIn: the label's value is not in the given list
- Gt: the label's value is greater than the given value
- Lt: the label's value is less than the given value
- Exists: the label exists
- DoesNotExist: the label does not exist
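For example, a node-affinity term using Gt (the node label gpu-count and its threshold are hypothetical; Gt and Lt are only valid for node affinity and compare the label value as an integer) might be sketched as:
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: gpu-count     # hypothetical node label
          operator: Gt
          values:
          - "3"              # values are strings but are compared as integers
```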
Pod Affinity
pod.spec.affinity.podAffinity / podAntiAffinity
- preferredDuringSchedulingIgnoredDuringExecution: soft policy
- requiredDuringSchedulingIgnoredDuringExecution: hard policy
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-3
  labels:
    app: pod-3
spec:
  containers:
  - name: pod-3
    image: myapp:v1
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - pod-1
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - pod-2
          topologyKey: kubernetes.io/hostname
```
The affinity/anti-affinity scheduling policies compare as follows:
Scheduling policy | Matches labels of | Operators | Topology domain support | Scheduling goal |
---|---|---|---|---|
nodeAffinity | node | In, NotIn, Exists, DoesNotExist, Gt, Lt | No | schedule onto the specified node |
podAffinity | Pod | In, NotIn, Exists, DoesNotExist | Yes | place the Pod in the same topology domain as the specified Pod |
podAntiAffinity | Pod | In, NotIn, Exists, DoesNotExist | Yes | place the Pod in a different topology domain from the specified Pod |
Taints and Tolerations
Node affinity is a property of Pods (a preference or a hard requirement) that attracts Pods to a particular class of nodes. Taints are the opposite: they allow a node to repel a particular class of Pods.
Taints and tolerations work together to keep Pods off unsuitable nodes. One or more taints can be applied to each node, which means that Pods unable to tolerate those taints will not be accepted by that node. Applying a toleration to a Pod means the Pod may (but is not required to) be scheduled onto nodes with matching taints.
Taints
Ⅰ. Composition of a taint
The kubectl taint command sets a taint on a node. Once a node is tainted, a mutually exclusive relationship exists between it and Pods: the node can refuse to schedule Pods, and can even evict Pods that are already running on it.
Each taint is composed as follows:
key=value:effect
Each taint has a key and a value that serve as the taint's label (the value may be empty), plus an effect that describes what the taint does. Currently the taint effect supports the following three options:
- NoSchedule: Kubernetes will not schedule Pods onto a node carrying this taint
- PreferNoSchedule: Kubernetes will try to avoid scheduling Pods onto a node carrying this taint
- NoExecute: Kubernetes will not schedule Pods onto a node carrying this taint, and will also evict Pods already running on the node
Ⅱ. Setting, viewing, and removing taints
```bash
# Set a taint
kubectl taint nodes node1 key1=value1:NoSchedule
# View taints: look for the Taints field in the node description
kubectl describe node node1
# Remove a taint
kubectl taint nodes node1 key1:NoSchedule-
```
Tolerations
A tainted node establishes, according to the taint's effect (NoSchedule, PreferNoSchedule, NoExecute), a mutually exclusive relationship with Pods, so to some degree Pods will not be scheduled onto that node. We can, however, set a toleration on a Pod, meaning the Pod can tolerate the taint and can be scheduled onto nodes that carry it.
pod.spec.tolerations
```yaml
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
  tolerationSeconds: 3600
- key: "key2"
  operator: "Exists"
  effect: "NoSchedule"
```
- The key, value, and effect must match the taint set on the node
- When operator is Exists, the value is ignored
- tolerationSeconds describes how long the Pod may keep running on the node after it is marked for eviction
Ⅰ. When no key is specified, all taint keys are tolerated:
```yaml
tolerations:
- operator: "Exists"
```
Ⅱ. When no effect is specified, all taint effects are tolerated:
```yaml
tolerations:
- key: "key"
  operator: "Exists"
```
Ⅲ. When there are multiple master nodes, you can set the following to avoid wasting their resources:
```bash
kubectl taint nodes Node-Name node-role.kubernetes.io/master=:PreferNoSchedule
```
Specifying the Scheduling Node
Ⅰ. Pod.spec.nodeName schedules the Pod directly onto the specified node, skipping the Scheduler's scheduling policy; this matching rule is mandatory.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 7
  template:
    metadata:
      labels:
        app: myweb
    spec:
      nodeName: k8s-node01
      containers:
      - name: myweb
        image: myapp:v1
        ports:
        - containerPort: 80
```
Ⅱ. Pod.spec.nodeSelector: selects nodes through the Kubernetes label-selector mechanism; the scheduler's policy matches the labels and then schedules the Pod onto the target node; this matching rule is a hard constraint.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myweb
    spec:
      nodeSelector:
        type: backEndNode1
      containers:
      - name: myweb
        image: harbor/tomcat:8.5-jre8
        ports:
        - containerPort: 80
```
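Note that the nodeSelector above only matches nodes that actually carry the corresponding label; it could be added beforehand with something like the following (k8s-node01 is a placeholder node name):
```bash
kubectl label nodes k8s-node01 type=backEndNode1
```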