Author: Xu Xianzhang, JD Technology
1 What Is Overprovisioning
Overprovisioning means scheduling a certain amount of worker-node capacity ahead of time, so that during business peaks, or when overall cluster load is high, applications can scale out horizontally right away instead of waiting for cluster worker nodes to be added. HPA, ClusterAutoscaler, and overprovisioning are typically used together to serve workloads that are highly sensitive to load.
Overprovisioning is implemented by combining Kubernetes pod priority with the ClusterAutoscaler. By adjusting the number of low-priority, empty-load placeholder pods, the cluster's scheduled resources are kept at a high level. When a high-priority application grows its replica count, via HPA or a manual change, scheduling resources are freed up by evicting the placeholder pods, guaranteeing that the high-priority pods can be scheduled and created immediately. Once the placeholder pods go from evicted to pending, the ClusterAutoscaler scales the cluster out, ensuring there are enough placeholder pods available to evict the next time a high-priority application needs to be scheduled.
The core of the feature consists of the OverprovisionAutoscaler (overprovisioning) and the ClusterAutoscaler (cluster auto scaling); both require continuous parameter tuning to fit a variety of business needs.
Overprovisioning lowers resource saturation to some degree, trading extra cost for better cluster and application stability. In real business scenarios you need to weigh this trade-off and configure it accordingly.
2 When Should You Use Overprovisioning
With only HPA and the Autoscaler enabled, whenever node scale-out is triggered, scheduling an application typically takes 4-12 minutes, most of which is spent creating the worker-node resources and waiting for the node to go from joining the cluster to Ready. The best- and worst-case breakdowns are as follows.
Best case - 4 minutes
• 30 s - target metric values updated (takes 30-60 s)
• 30 s - HPA checks the metric values
• <2 s - pods enter pending state after creation
• <2 s - CA notices the pending pods and calls the cloud provider to create a node
• 3 min - the cloud provider creates the worker node, which then joins k8s and becomes Ready
Worst case - 12 minutes
• 60 s - target metric values updated
• 30 s - HPA checks the metric values
• <2 s - pods enter pending state after creation
• <2 s - CA notices the pending pods and calls the cloud provider to create a node
• 10 min - the cloud provider creates the worker node, which then joins k8s and becomes Ready
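The step durations above can be sanity-checked with a quick calculation, using the quoted upper bound for each step:

```python
# Rough sanity check of the scale-up timelines above (all values in seconds).
best = {"metric update": 30, "HPA check": 30, "pod pending": 2,
        "CA reacts": 2, "node provision + Ready": 180}
worst = {"metric update": 60, "HPA check": 30, "pod pending": 2,
         "CA reacts": 2, "node provision + Ready": 600}

for name, steps in [("best", best), ("worst", worst)]:
    total = sum(steps.values())
    share = steps["node provision + Ready"] / total
    print(f"{name}: total {total}s (~{total / 60:.1f} min), "
          f"node creation is {share:.0%} of the wait")
```

The totals come out to roughly 4 and 12 minutes respectively, with node creation dominating both.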
In both scenarios, worker-node creation accounts for roughly three quarters or more of the total time. If that time can be reduced, or taken out of the picture entirely, application scale-out becomes dramatically faster, and combined with overprovisioning it greatly strengthens cluster and business stability. Overprovisioning is mainly intended for business scenarios that are highly sensitive to application load:
- Readiness for big sales promotions
- Stream / real-time computing
- DevOps systems
- Other scenarios with frequent scheduling
3 How to Enable Overprovisioning
Overprovisioning is built on the ClusterAutoscaler working together with the OverprovisionAutoscaler. The steps below use JD Cloud's Kubernetes container service as an example.
3.1 Enable ClusterAutoscaler
https://cns-console.jdcloud.com/host/nodeGroups/list
• Go to "Kubernetes Container Service" -> "Worker Node Groups"
• Select the node group and click to enable auto scaling
• Set the node count range and click OK
3.2 Deploy the OverprovisionAutoscaler
1 Deploy the controller and its configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning-autoscaler
  namespace: default
  labels:
    app: overprovisioning-autoscaler
    owner: cluster-autoscaler-overprovisioning
spec:
  selector:
    matchLabels:
      app: overprovisioning-autoscaler
      owner: cluster-autoscaler-overprovisioning
  replicas: 1
  template:
    metadata:
      labels:
        app: overprovisioning-autoscaler
        owner: cluster-autoscaler-overprovisioning
    spec:
      serviceAccountName: cluster-proportional-autoscaler
      containers:
        - image: jdcloud-cn-north-1.jcr.service.jdcloud.com/k8s/cluster-proportional-autoscaler:v1.16.3
          name: proportional-autoscaler
          command:
            - /autoscaler
            - --namespace=default
            ## Note: set this to the name of the ConfigMap you created, i.e.
            ## overprovisioning-autoscaler-ladder / overprovisioning-autoscaler-linear
            - --configmap=overprovisioning-autoscaler-{provision-mode}
            ## Kind/name of the placeholder (warm-up) application; the target
            ## application and the placeholder application must be in the same namespace
            - --target=deployment/overprovisioning
            - --logtostderr=true
            - --v=2
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: host-time
              mountPath: /etc/localtime
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: cluster-proportional-autoscaler
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-proportional-autoscaler
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["replicationcontrollers/scale"]
    verbs: ["get", "update"]
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments/scale", "replicasets/scale", "deployments", "replicasets"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-proportional-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-proportional-autoscaler
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-proportional-autoscaler
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1
globalDefault: false
description: "Priority class used by overprovisioning."
2 Deploy the empty-load placeholder application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
  namespace: default
  labels:
    app: overprovisioning
    owner: cluster-autoscaler-overprovisioning
spec:
  replicas: 1
  selector:
    matchLabels:
      app: overprovisioning
      owner: cluster-autoscaler-overprovisioning
  template:
    metadata:
      annotations:
        autoscaler.jke.jdcloud.com/overprovisioning: "reserve-pod"
      labels:
        app: overprovisioning
        owner: cluster-autoscaler-overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: reserve-resources
          image: jdcloud-cn-east-2.jcr.service.jdcloud.com/k8s/pause-amd64:3.1
          resources:
            requests:
              ## Set the replica count and per-replica resources according to
              ## how much capacity you want to keep warm
              cpu: 7
          imagePullPolicy: IfNotPresent
3.3 Verify That Overprovisioning Works
1 Verify the Autoscaler
• Check that the autoscaler controller is Running
• Keep creating test applications whose resource requests are slightly smaller than the schedulable capacity of a single node in the node group
• Watch the cluster nodes: when pods go pending due to insufficient resources, check whether the autoscaler scales out according to its presets (scale-up delay, cooldown, maximum node count, etc.)
• Enable cluster scale-down, delete the test applications, and check whether scale-down happens once the nodes' resource requests cross the threshold.
2 Verify the OverprovisionAutoscaler
• Check that the OverprovisionAutoscaler controller is Running
• Keep creating test applications; when scale-out happens, check whether the number of placeholder replicas changes according to the configuration
• When a business application goes pending, check whether placeholder pods are evicted and the business application gets scheduled
4 Tuning OverprovisionAutoscaler and ClusterAutoscaler Parameters
4.1 Configure ClusterAutoscaler
1 CA parameter reference
Parameter | Default | Description
---|---|---
scan_interval | 20s | How often cluster is reevaluated for scale up or down
max_nodes_total | 0 | Maximum number of nodes in all node groups
estimator | binpacking | Type of resource estimator to be used in scale up
expander | least-waste | Type of node group expander to be used in scale up
max_empty_bulk_delete | 15 | Maximum number of empty nodes that can be deleted at the same time
max_graceful_termination_sec | 600 | Maximum number of seconds CA waits for pod termination when trying to scale down a node
max_total_unready_percentage | 45 | Maximum percentage of unready nodes in the cluster. After this is exceeded, CA halts operations
ok_total_unready_count | 100 | Number of allowed unready nodes, irrespective of max-total-unready-percentage
max_node_provision_time | 900s | Maximum time CA waits for node to be provisioned
scale_down_enabled | true | Should CA scale down the cluster
scale_down_delay_after_add | 600s | How long after scale up that scale down evaluation resumes
scale_down_delay_after_delete | 10s | How long after node deletion that scale down evaluation resumes, defaults to scanInterval
scale_down_delay_after_failure | 180s | How long after scale down failure that scale down evaluation resumes
scale_down_unneeded_time | 600s | How long a node should be unneeded before it is eligible for scale down
scale_down_unready_time | 1200s | How long an unready node should be unneeded before it is eligible for scale down
scale_down_utilization_threshold | 0.5 | Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down
balance_similar_node_groups | false | Detect similar node groups and balance the number of nodes between them
node_autoprovisioning_enabled | false | Should CA autoprovision node groups when needed
max_autoprovisioned_node_group_count | 15 | The maximum number of autoprovisioned groups in the cluster
skip_nodes_with_system_pods | true | If true cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods)
skip_nodes_with_local_storage | true | If true cluster autoscaler will never delete nodes with pods with local storage, e.g. EmptyDir or HostPath
2 Recommended settings
# leave everything else at its default
scan_interval=10s
max_node_provision_time=180s
scale_down_delay_after_add=180s
scale_down_delay_after_delete=180s
scale_down_unneeded_time=300s
scale_down_utilization_threshold=0.4
4.2 Configure the OverprovisionAutoscaler
The OverprovisionAutoscaler supports two configuration modes, linear and ladder; only one of the two can be used at a time.
1 Linear mode (linear)
Linear mode reserves resources linearly: you configure a cores-per-replica ratio and a nodes-per-replica ratio, and the number of placeholder replicas always stays proportional to the total CPU and node count. Precision follows the placeholder's CPU request: the smaller the request, the finer the granularity. When the two ratios disagree, the larger replica count that satisfies the linear relationship wins.
The number of placeholder replicas is kept within the [min, max] range from the configuration.
preventSinglePointFailure: when true, the linear relationship is satisfied by placeholder replicas in Running state; when false, replicas in Failed/Running state count as well.
includeUnschedulableNodes: whether unschedulable nodes are taken into account.
kind: ConfigMap
apiVersion: v1
metadata:
  name: overprovisioning-autoscaler-linear
  namespace: default
data:
  linear: |-
    {
      "coresPerReplica": 2,
      "nodesPerReplica": 1,
      "min": 1,
      "max": 100,
      "includeUnschedulableNodes": false,
      "preventSinglePointFailure": true
    }
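The replica math implied by this config can be sketched in a few lines. This follows the linear formula documented for the upstream cluster-proportional-autoscaler, replicas = max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)) clamped to [min, max]; the "at least 2 replicas when there is more than one node" behaviour of preventSinglePointFailure is my reading of the upstream project, not something stated in this article:

```python
import math

def linear_replicas(cores, nodes, cfg):
    """Sketch of the linear formula: the larger of the two ratios wins,
    clamped to the configured [min, max] range."""
    replicas = max(math.ceil(cores / cfg["coresPerReplica"]),
                   math.ceil(nodes / cfg["nodesPerReplica"]))
    if cfg.get("preventSinglePointFailure") and nodes > 1:
        replicas = max(replicas, 2)  # keep at least 2 replicas on multi-node clusters
    return min(max(replicas, cfg["min"]), cfg["max"])

cfg = {"coresPerReplica": 2, "nodesPerReplica": 1,
       "min": 1, "max": 100, "preventSinglePointFailure": True}
print(linear_replicas(cores=64, nodes=8, cfg=cfg))  # cores dominate: 32 replicas
```

So a 64-core, 8-node cluster would keep 32 placeholder replicas warm under this config; shrink the placeholder's CPU request if you need finer-grained reservation steps.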
2 Ladder mode (ladder)
Ladder mode reserves resources in steps: you configure matrices mapping total CPU cores and node count to placeholder replica counts, and the replica count follows whichever bracket the total CPU and node count fall into. When the two lookups disagree, the larger replica count wins.
kind: ConfigMap
apiVersion: v1
metadata:
  name: overprovisioning-autoscaler-ladder
  namespace: default
data:
  ladder: |-
    {
      "coresToReplicas":
      [
        [ 1, 1 ],
        [ 50, 3 ],
        [ 200, 5 ],
        [ 500, 7 ]
      ],
      "nodesToReplicas":
      [
        [ 1, 1 ],
        [ 3, 4 ],
        [ 10, 5 ],
        [ 50, 20 ],
        [ 100, 120 ],
        [ 150, 120 ]
      ]
    }
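The ladder lookup can be sketched the same way. This is a hypothetical re-implementation for illustration, assuming each table is sorted ascending, the highest threshold not exceeding the current value selects the step, and the larger of the two lookups wins (the conflict rule described above):

```python
def ladder_replicas(cores, nodes, cores_to_replicas, nodes_to_replicas):
    """Sketch of the ladder lookup: each [threshold, count] table is scanned
    in ascending order, the last step whose threshold <= the current value
    applies, and the larger of the two lookups wins."""
    def lookup(value, table):
        replicas = 0
        for threshold, count in table:
            if value >= threshold:
                replicas = count
        return replicas
    return max(lookup(cores, cores_to_replicas), lookup(nodes, nodes_to_replicas))

cores_to_replicas = [[1, 1], [50, 3], [200, 5], [500, 7]]
nodes_to_replicas = [[1, 1], [3, 4], [10, 5], [50, 20], [100, 120], [150, 120]]
# 80 cores falls in the [50, 200) bracket -> 3; 5 nodes in [3, 10) -> 4; max is 4
print(ladder_replicas(80, 5, cores_to_replicas, nodes_to_replicas))
```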