Kubernetes v1.18 added the behavior field to HPA v2beta2, which enables finer-grained control over scaling behavior:
- if the behavior field is not specified, scaling follows the default behavior;
- if the behavior field is specified, scaling follows the custom behavior;
I. Demo
If the default behavior (stabilization window + scaling policies) does not meet your needs, you can fine-tune the scaling policy with a custom behavior.
For example, the following behavior:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app
spec:
  ...
  behavior:
    scaleUp:
      policies:
      - type: Percent
        value: 900
        periodSeconds: 300
    scaleDown:
      stabilizationWindowSeconds: 60
      policies:
      - type: Pods
        value: 1
        periodSeconds: 10
When scaling up:
- scale up immediately (the scale-up stabilization window is 0);
- each scale-up can reach at most (1+9)*currentReplicas, i.e. 10x the current replicas (see the sketch below);
- after one scale-up, further scale-up waits out the 300s period;

When scaling down:
- scale down only after a 60s stabilization window, 1 replica at a time;
- after one scale-down, the next replica is removed only after another 10s;
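Where the 10x figure comes from: a minimal arithmetic sketch (standalone Go, not the controller's code) of the Percent policy applied to the demo values above:

package main

import (
    "fmt"
    "math"
)

func main() {
    // values taken from the demo HPA above
    currentReplicas := 1.0
    percentValue := 900.0 // scaleUp policy: type=Percent, value=900
    // a Percent policy caps growth at value% per periodSeconds,
    // i.e. at most currentReplicas * (1 + value/100) replicas, rounded up
    limit := math.Ceil(currentReplicas * (1 + percentValue/100))
    fmt.Println(limit) // 10: from 1 replica, at most 10 replicas per 300s period
}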
Taking the HPA v2beta2 definition above as an example, watch how it scales.

Scale up: spike the metric (1 -> 13):
- first, by the metric calculation alone, the replicas should go from 1 to 13; but limited by scaleUp.policies, one period allows at most a 10x increase, i.e. 1 -> 10 replicas;
- then, after the 300s period has passed, the metric calculation finally brings the replicas to 13;
# kubectl describe hpa
Name: sample-app
Namespace: default
Labels: <none>
Annotations: <none>
Reference: Deployment/sample-app
Metrics: (current / target)
"metric_hpa" on pods: 1 / 1
Min replicas: 1
Max replicas: 15
Behavior:
Scale Up:
Stabilization Window: 0 seconds
Select Policy: Max
Policies:
- Type: Percent Value: 900 Period: 300 seconds
Scale Down:
Stabilization Window: 60 seconds
Select Policy: Max
Policies:
- Type: Pods Value: 1 Period: 10 seconds
Deployment pods: 13 current / 13 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric metric_hpa
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 6m29s horizontal-pod-autoscaler New size: 10; reason: pods metric metric_hpa above target
Normal SuccessfulRescale 79s horizontal-pod-autoscaler New size: 13; reason: pods metric metric_hpa above target
Scale down: drop the metric (13 -> 1):
- first, scale down only after the 60s stabilization window, removing 1 replica;
- then, 10s after the previous scale-down, remove 1 more replica;
- finally, settle at 1 replica;
# kubectl describe hpa
Name: sample-app
Namespace: default
Labels: <none>
Annotations: <none>
Reference: Deployment/sample-app
Metrics: (current / target)
"metric_hpa" on pods: 1 / 1
Min replicas: 1
Max replicas: 15
Behavior:
Scale Up:
Stabilization Window: 0 seconds
Select Policy: Max
Policies:
- Type: Percent Value: 900 Period: 300 seconds
Scale Down:
Stabilization Window: 60 seconds
Select Policy: Max
Policies:
- Type: Pods Value: 1 Period: 10 seconds
Deployment pods: 1 current / 1 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric metric_hpa
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 12m horizontal-pod-autoscaler New size: 10; reason: pods metric metric_hpa above target
Normal SuccessfulRescale 6m59s horizontal-pod-autoscaler New size: 13; reason: pods metric metric_hpa above target
Normal SuccessfulRescale 3m25s horizontal-pod-autoscaler New size: 12; reason: All metrics below target
Normal SuccessfulRescale 3m9s horizontal-pod-autoscaler New size: 11; reason: All metrics below target
Normal SuccessfulRescale 2m54s horizontal-pod-autoscaler New size: 10; reason: All metrics below target
Normal SuccessfulRescale 2m38s horizontal-pod-autoscaler New size: 9; reason: All metrics below target
Normal SuccessfulRescale 2m23s horizontal-pod-autoscaler New size: 8; reason: All metrics below target
Normal SuccessfulRescale 2m7s horizontal-pod-autoscaler New size: 7; reason: All metrics below target
Normal SuccessfulRescale 112s horizontal-pod-autoscaler New size: 6; reason: All metrics below target
Normal SuccessfulRescale 34s (x5 over 96s) horizontal-pod-autoscaler (combined from similar events): New size: 1; reason: All metrics below target
II. Source code analysis
The whole process takes the following steps:
- first, if no scaleDown stabilization window is set, apply the default window of 300s;
- then, compute the target replica count from the scaleUp/scaleDown stabilizationWindow;
- finally, compute the target replica count from the scaleUp/scaleDown Policies;
// pkg/controller/podautoscaler/horizontal.go
func (a *HorizontalController) normalizeDesiredReplicasWithBehaviors(hpa *autoscalingv2.HorizontalPodAutoscaler, key string, currentReplicas, prenormalizedDesiredReplicas, minReplicas int32) int32 {
    // 1. set the default stabilization window for scaleDown
    a.maybeInitScaleDownStabilizationWindow(hpa)
    normalizationArg := NormalizationArg{
        Key:               key,
        ScaleUpBehavior:   hpa.Spec.Behavior.ScaleUp,
        ScaleDownBehavior: hpa.Spec.Behavior.ScaleDown,
        MinReplicas:       minReplicas,
        MaxReplicas:       hpa.Spec.MaxReplicas,
        CurrentReplicas:   currentReplicas,
        DesiredReplicas:   prenormalizedDesiredReplicas}
    // 2. compute the replica count from the stabilization window
    stabilizedRecommendation, reason, message := a.stabilizeRecommendationWithBehaviors(normalizationArg)
    normalizationArg.DesiredReplicas = stabilizedRecommendation
    ...
    // 3. compute the replica count from the policies
    desiredReplicas, reason, message := a.convertDesiredReplicasWithBehaviorRate(normalizationArg)
    ...
    return desiredReplicas
}
1. Setting the default scaleDown stabilization window

If scaleDown.StabilizationWindowSeconds is not set, it defaults to 300s:
// pkg/controller/podautoscaler/horizontal.go
func (a *HorizontalController) maybeInitScaleDownStabilizationWindow(hpa *autoscalingv2.HorizontalPodAutoscaler) {
    behavior := hpa.Spec.Behavior
    if behavior != nil && behavior.ScaleDown != nil && behavior.ScaleDown.StabilizationWindowSeconds == nil {
        stabilizationWindowSeconds := (int32)(a.downscaleStabilisationWindow.Seconds()) // defaults to 300s
        hpa.Spec.Behavior.ScaleDown.StabilizationWindowSeconds = &stabilizationWindowSeconds
    }
}
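Note: the 300s default is not hard-coded in the behavior API; it comes from kube-controller-manager's --horizontal-pod-autoscaler-downscale-stabilization flag (default 5m0s), which populates a.downscaleStabilisationWindow. A behavior that leaves scaleDown.stabilizationWindowSeconds unset simply inherits that controller-wide value.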
2. Computing the replica count from the stabilization window

Compute the target replica count from the stabilizationWindow and return a recommendation:
- first, the initial value is the replica count from the previous step (i.e. computed from the metrics);
- then:
  - when scaling up, recommendation = min(recommendations within the last stabilizationWindowSeconds), which effectively enforces a stabilizationWindowSeconds cooldown;
  - when scaling down, recommendation = max(recommendations within the last stabilizationWindowSeconds), which likewise enforces a stabilizationWindowSeconds cooldown;
// pkg/controller/podautoscaler/horizontal.go
func (a *HorizontalController) stabilizeRecommendationWithBehaviors(args NormalizationArg) (int32, string, string) {
    recommendation := args.DesiredReplicas
    ...
    var betterRecommendation func(int32, int32) int32
    // scaling up
    if args.DesiredReplicas >= args.CurrentReplicas {
        scaleDelaySeconds = *args.ScaleUpBehavior.StabilizationWindowSeconds
        betterRecommendation = min // the min function
        reason = "ScaleUpStabilized"
        message = "recent recommendations were lower than current one, applying the lowest recent recommendation"
    } else { // scaling down
        scaleDelaySeconds = *args.ScaleDownBehavior.StabilizationWindowSeconds
        betterRecommendation = max // the max function
        reason = "ScaleDownStabilized"
        message = "recent recommendations were higher than current one, applying the highest recent recommendation"
    }
    ...
    cutoff := time.Now().Add(-time.Second * time.Duration(scaleDelaySeconds))
    for i, rec := range a.recommendations[args.Key] {
        if rec.timestamp.After(cutoff) {
            recommendation = betterRecommendation(rec.recommendation, recommendation)
        }
        ...
    }
    ...
    return recommendation, reason, message
}
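To make the stabilization pass concrete, here is a self-contained sketch (hypothetical rec type and data, not the controller's internals) of picking min over the window when scaling up and max when scaling down:

package main

import (
    "fmt"
    "time"
)

// a timestamped recommendation, loosely mirroring the controller's records
type rec struct {
    when     time.Time
    replicas int32
}

// stabilize keeps the most conservative recommendation seen within window:
// the lowest when scaling up, the highest when scaling down
func stabilize(history []rec, desired, current int32, window time.Duration) int32 {
    better := func(a, b int32) int32 { // min, for scaling up
        if a < b {
            return a
        }
        return b
    }
    if desired < current { // max, for scaling down
        better = func(a, b int32) int32 {
            if a > b {
                return a
            }
            return b
        }
    }
    out := desired
    cutoff := time.Now().Add(-window)
    for _, r := range history {
        if r.when.After(cutoff) {
            out = better(r.replicas, out)
        }
    }
    return out
}

func main() {
    now := time.Now()
    history := []rec{{now.Add(-30 * time.Second), 13}, {now.Add(-10 * time.Second), 12}}
    // a drop to 1 desired replica with a 60s window: the recent high
    // recommendation (13) wins, so the HPA holds until the window drains
    fmt.Println(stabilize(history, 1, 13, 60*time.Second)) // 13
}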
3. Computing the replica count from the policies

Compute the target replica count from the policies.

When scaling up:
- compute the scale-up limit from scaleUp.Policies (percent/pods/period);
- the scale-up limit is further capped at hpa MaxReplicas;
- the final replica count must be <= the desiredReplicas computed in the previous step;

When scaling down:
- compute the scale-down limit from scaleDown.Policies (percent/pods/period);
- the scale-down limit is floored at hpa MinReplicas;
- the final replica count must be >= the desiredReplicas computed in the previous step;
// pkg/controller/podautoscaler/horizontal.go
func (a *HorizontalController) convertDesiredReplicasWithBehaviorRate(args NormalizationArg) (int32, string, string) {
    var possibleLimitingReason, possibleLimitingMessage string
    // scaling up
    if args.DesiredReplicas > args.CurrentReplicas {
        // compute the scale-up limit from scaleUp.Policies
        scaleUpLimit := calculateScaleUpLimitWithScalingRules(args.CurrentReplicas, a.scaleUpEvents[args.Key], args.ScaleUpBehavior)
        ...
        // the scale-up limit must be <= hpa MaxReplicas
        maximumAllowedReplicas := args.MaxReplicas
        if maximumAllowedReplicas > scaleUpLimit {
            maximumAllowedReplicas = scaleUpLimit
            possibleLimitingReason = "ScaleUpLimit"
            possibleLimitingMessage = "the desired replica count is increasing faster than the maximum scale rate"
        } else {
            possibleLimitingReason = "TooManyReplicas"
            possibleLimitingMessage = "the desired replica count is more than the maximum replica count"
        }
        // the scaled-up replica count must be <= the desiredReplicas from the previous step
        if args.DesiredReplicas > maximumAllowedReplicas {
            return maximumAllowedReplicas, possibleLimitingReason, possibleLimitingMessage
        }
    } else if args.DesiredReplicas < args.CurrentReplicas { // scaling down
        // compute the scale-down limit from scaleDown.Policies
        scaleDownLimit := calculateScaleDownLimitWithBehaviors(args.CurrentReplicas, a.scaleDownEvents[args.Key], args.ScaleDownBehavior)
        ...
        // the scale-down limit must be >= hpa MinReplicas
        minimumAllowedReplicas := args.MinReplicas
        if minimumAllowedReplicas < scaleDownLimit {
            minimumAllowedReplicas = scaleDownLimit
            possibleLimitingReason = "ScaleDownLimit"
            possibleLimitingMessage = "the desired replica count is decreasing faster than the maximum scale rate"
        } else {
            possibleLimitingMessage = "the desired replica count is less than the minimum replica count"
            possibleLimitingReason = "TooFewReplicas"
        }
        // the scaled-down replica count must be >= the desiredReplicas from the previous step
        if args.DesiredReplicas < minimumAllowedReplicas {
            return minimumAllowedReplicas, possibleLimitingReason, possibleLimitingMessage
        }
    }
    return args.DesiredReplicas, "DesiredWithinRange", "the desired count is within the acceptable range"
}
The scale-up limit is computed from the policy's percent/pods/period:
- multiple policies may exist: scaleUp.SelectPolicy decides whether to take the Max or Min across them, or to Disable scaling entirely;
// pkg/controller/podautoscaler/horizontal.go
func calculateScaleUpLimitWithScalingRules(currentReplicas int32, scaleEvents []timestampedScaleEvent, scalingRules *autoscalingv2.HPAScalingRules) int32 {
    var result int32
    var proposed int32
    var selectPolicyFn func(int32, int32) int32
    if *scalingRules.SelectPolicy == autoscalingv2.DisabledPolicySelect {
        return currentReplicas // Scaling is disabled
    } else if *scalingRules.SelectPolicy == autoscalingv2.MinPolicySelect {
        result = math.MaxInt32
        selectPolicyFn = min // For scaling up, the lowest change ('min' policy) produces a minimum value
    } else {
        result = math.MinInt32
        selectPolicyFn = max // Use the default policy otherwise to produce a highest possible change
    }
    for _, policy := range scalingRules.Policies {
        replicasAddedInCurrentPeriod := getReplicasChangePerPeriod(policy.PeriodSeconds, scaleEvents)
        periodStartReplicas := currentReplicas - replicasAddedInCurrentPeriod
        if policy.Type == autoscalingv2.PodsScalingPolicy { // Pods
            proposed = periodStartReplicas + policy.Value
        } else if policy.Type == autoscalingv2.PercentScalingPolicy { // Percent
            // the proposal has to be rounded up because the proposed change might not increase the replica count causing the target to never scale up
            proposed = int32(math.Ceil(float64(periodStartReplicas) * (1 + float64(policy.Value)/100)))
        }
        result = selectPolicyFn(result, proposed)
    }
    return result
}
For each scaleUp policy:
- first, compute the total replica change over the past policy.periodSeconds;
- then, compute periodStartReplicas = current replicas - the replica change within the past policy.periodSeconds;
- the purpose of these two steps is to discount the replicas already added within policy.periodSeconds, i.e. to enforce the cooldown between one scale operation and the next (see the worked example below);
- finally:
  - if policy.Type=Pods, the limit = periodStartReplicas + policy.Value;
  - if policy.Type=Percent, the limit = periodStartReplicas * (1 + policy.Value/100);
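A worked example of those steps under assumed state (values chosen to match the demo, right after its 1 -> 10 scale-up):

package main

import (
    "fmt"
    "math"
)

func main() {
    currentReplicas := int32(10)
    replicasAddedInCurrentPeriod := int32(9) // the +9 event is still inside the 300s period
    // steps 1+2: discount what was already added in this period
    periodStartReplicas := currentReplicas - replicasAddedInCurrentPeriod // = 1
    // step 3 (Percent, value=900): limit = ceil(periodStartReplicas * (1 + 900/100))
    limit := int32(math.Ceil(float64(periodStartReplicas) * (1 + 900.0/100)))
    fmt.Println(limit) // 10: the +9 already spent the whole 300s budget,
    // so no further scale-up is allowed until that event ages out of the window
}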
The total replica change within policy.periodSeconds is computed as follows:
// pkg/controller/podautoscaler/horizontal.go
func getReplicasChangePerPeriod(periodSeconds int32, scaleEvents []timestampedScaleEvent) int32 {
    period := time.Second * time.Duration(periodSeconds)
    cutoff := time.Now().Add(-period)
    var replicas int32
    for _, rec := range scaleEvents {
        if rec.timestamp.After(cutoff) {
            replicas += rec.replicaChange // sum the replica changes within periodSeconds
        }
    }
    return replicas
}
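The same summation rendered as a self-contained sketch with hypothetical event data (note that the controller keeps scale-up and scale-down events in separate lists):

package main

import (
    "fmt"
    "time"
)

type scaleEvent struct {
    when   time.Time
    change int32
}

func changePerPeriod(periodSeconds int32, events []scaleEvent) int32 {
    cutoff := time.Now().Add(-time.Duration(periodSeconds) * time.Second)
    var sum int32
    for _, e := range events {
        if e.when.After(cutoff) { // only events inside the period count
            sum += e.change
        }
    }
    return sum
}

func main() {
    now := time.Now()
    events := []scaleEvent{
        {now.Add(-80 * time.Second), 9},  // inside a 300s window
        {now.Add(-400 * time.Second), 3}, // already aged out
    }
    fmt.Println(changePerPeriod(300, events)) // 9
}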
The scale-down limit is likewise computed from the policy's percent/pods/period:
- the computation mirrors the scale-up limit;
- the differences, because this is a scale-down (see the worked example after the code below):
  - periodStartReplicas = current replicas + the replica change within the past periodSeconds;
  - if policy.Type=Pods, the limit = periodStartReplicas - policy.Value;
  - if policy.Type=Percent, the limit = periodStartReplicas * (1 - policy.Value/100);
// pkg/controller/podautoscaler/horizontal.go
func calculateScaleDownLimitWithBehaviors(currentReplicas int32, scaleEvents []timestampedScaleEvent, scalingRules *autoscalingv2.HPAScalingRules) int32 {
    var result int32
    var proposed int32
    var selectPolicyFn func(int32, int32) int32
    if *scalingRules.SelectPolicy == autoscalingv2.DisabledPolicySelect {
        return currentReplicas // Scaling is disabled
    } else if *scalingRules.SelectPolicy == autoscalingv2.MinPolicySelect {
        result = math.MinInt32
        selectPolicyFn = max // For scaling down, the lowest change ('min' policy) produces a maximum value
    } else {
        result = math.MaxInt32
        selectPolicyFn = min // Use the default policy otherwise to produce a highest possible change
    }
    for _, policy := range scalingRules.Policies {
        replicasDeletedInCurrentPeriod := getReplicasChangePerPeriod(policy.PeriodSeconds, scaleEvents)
        periodStartReplicas := currentReplicas + replicasDeletedInCurrentPeriod
        if policy.Type == autoscalingv2.PodsScalingPolicy { // Pods
            proposed = periodStartReplicas - policy.Value
        } else if policy.Type == autoscalingv2.PercentScalingPolicy { // Percent
            proposed = int32(float64(periodStartReplicas) * (1 - float64(policy.Value)/100))
        }
        result = selectPolicyFn(result, proposed)
    }
    return result
}
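And the scale-down counterpart as a worked example (assumed state matching the demo, right after one of its 1-replica scale-downs):

package main

import "fmt"

func main() {
    currentReplicas := int32(12)               // after the 13 -> 12 step
    replicasDeletedInCurrentPeriod := int32(1) // that event is still inside the 10s period
    // because this is a scale-down, add back what was removed in this period
    periodStartReplicas := currentReplicas + replicasDeletedInCurrentPeriod // = 13
    // Pods policy, value=1: limit = periodStartReplicas - policy.Value
    limit := periodStartReplicas - 1
    fmt.Println(limit) // 12: equal to currentReplicas, so no further scale-down
    // until that event ages out of the 10s window, hence the demo's steady
    // one-replica-at-a-time cadence
}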