
Kubernetes: Elastic Scaling of Containerized Applications on Hybrid Cloud K8s in Practice

Summary: a hands-on walkthrough of elastic scaling for containerized applications on hybrid-cloud K8s.

1. Prerequisites

The software environment required for this best practice is as follows:
Application environment:
① Container Service ACK, based on Apsara Stack (private cloud) V3.10.0.
② Cloud Enterprise Network (CEN) on the public cloud.
③ Elastic Scaling Service (ESS) scaling groups on the public cloud.
Configuration requirements:
1) Use the Container Service on the private cloud, or manually deploy Agile PaaS on ECS.
2) Set up a leased cloud line to connect the VPC where the Container Service resides with a VPC on the public cloud.
3) Activate the Elastic Scaling Service (ESS) on the public cloud.

2. Background

In this walkthrough, a K8s-based business cluster runs on the private cloud and a stress test is run against a test service. The setup relies mainly on three products and capabilities:
① Alibaba Cloud's Cloud Enterprise Network (CEN) leased line connects the private cloud with the public cloud, giving the VPCs on the two clouds network connectivity.
② The K8s (Kubernetes) HPA capability provides horizontal scaling of pods.
③ The K8s Cluster Autoscaler, together with Alibaba Cloud's ESS scaling groups, provides automatic scaling of nodes.

HPA (Horizontal Pod Autoscaler) is a K8s resource object. Based on metrics such as CPU and memory utilization, it dynamically scales the number of pods in objects such as StatefulSets and Deployments, giving the services running on them a degree of self-adaptation to changes in load.
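As a quick illustration, an HPA can also be created imperatively with kubectl. This is a minimal sketch: it assumes a Deployment named hpa-test already exists, uses the 20% CPU target from this article, and the 1-10 replica range is an arbitrary example:

# Create an HPA that keeps average CPU utilization near 20%,
# scaling the hpa-test Deployment between 1 and 10 replicas.
kubectl autoscale deployment hpa-test --cpu-percent=20 --min=1 --max=10

# Inspect the resulting HorizontalPodAutoscaler object.
kubectl get hpa hpa-test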

When the metrics of the service under test reach the upper limit, HPA is triggered to scale out the service's pods. When the business cluster can no longer host more pods, the public cloud ESS service is triggered: ECS instances are scaled out on the public cloud and automatically added to the private cloud K8s cluster.


Figure 1: Architecture diagram

3. Configure HPA

This example creates an nginx application with HPA enabled. Once it is created, the application scales out horizontally when pod utilization exceeds the 20% threshold set in this example, and scales back in when utilization drops below 20%.

1. For a self-built K8s cluster, configure HPA through YAML files

1) Create an nginx application. A request value must be set for the application, otherwise HPA does not take effect.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: hpa-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hpa-test
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        app: hpa-test
    spec:
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 30
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      containers:
        - name: hpa-test
          image: '192.168.**.***:5000/admin/hpa-example:v1'
          imagePullPolicy: IfNotPresent
          terminationMessagePolicy: File
          terminationMessagePath: /dev/termination-log
          resources:
            requests:
              cpu: # the request value must be set, otherwise HPA does not take effect
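Since HPA silently fails without a request, it is worth confirming the request actually landed on the pod template after applying the manifest. A quick check (the file name here is arbitrary):

# Apply the Deployment and verify the CPU request is present.
kubectl apply -f hpa-test-deployment.yaml
kubectl describe deployment hpa-test | grep -A 2 Requests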

2) Create the HPA.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-test
  namespace: default
spec:
  maxReplicas: # set the maximum number of pods
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1beta2
    kind: Deployment
    name: hpa-test
  targetCPUUtilizationPercentage: 20 # set the CPU threshold; this example uses 20%
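Once the HPA object exists, its view of current versus target utilization can be followed live with standard kubectl (nothing assumed beyond the hpa-test name):

# Watch current/target CPU utilization and the replica count change.
kubectl get hpa hpa-test --watch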
2. If you use Alibaba Cloud Container Service, select the HPA configuration when deploying the application


Figure 2: Access settings

4. Configure Cluster Autoscaler

Correct and sensible resource requests (Request) are the precondition for elastic scaling. The node autoscaling component bases its scaling decisions on how K8s resource scheduling allocates the cluster, and a node's allocation is computed from the resource requests (Request) of its pods.

When a pod's resource request (Request) cannot be satisfied and the pod enters the Pending state, the node autoscaling component calculates the number of nodes needed from the instance specifications and constraints configured for the scaling group.

If the scale-out conditions are met, nodes are added to the scaling group. Conversely, when a node is in the scaling group and the resource requests of the pods on it fall below the threshold, the node autoscaling component scales the node in.
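The signal the autoscaler reacts to is simply unschedulable pods, so it is easy to inspect by hand; a sketch using a standard field selector:

# List pods stuck in Pending; these are what drive a scale-up decision.
kubectl get pods --all-namespaces --field-selector=status.phase=Pending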

1. Configure the ESS scaling group

1) Create an ESS scaling group, and record the minimum and maximum numbers of instances.


Figure 3: Modify the scaling group

2) Create a scaling configuration, and record its id.


Figure 4: Scaling configuration

#!/bin/sh
yum install -y ntpdate && ntpdate -u ntp1.aliyun.com && \
  curl http://example.com/public/hybrid/attach_local_node_aliyun.sh | bash -s -- \
    --docker-version 17.06.2-ce-3 --token 9s92co.y2gkocbumal4fz1z \
    --endpoint 192.168.**.***:6443 --cluster-dns 10.254.**.** --region cn-huhehaote
echo "{" > /etc/docker/daemon.json
echo "  \"registry-mirrors\": [" >> /etc/docker/daemon.json
echo "    \"https://registry-vpc.cn-huhehaote.aliyuncs.com\"" >> /etc/docker/daemon.json
echo "  ]," >> /etc/docker/daemon.json
echo "  \"insecure-registries\": [\"https://192.168.**.***:5000\"]" >> /etc/docker/daemon.json
echo "}" >> /etc/docker/daemon.json
systemctl restart docker
2. Deploy the autoscaler in the K8s cluster

kubectl apply -f ca.yml

Create the autoscaler with reference to ca.yml below, and be sure to change the following settings to match your actual environment.

access-key-id: "TFRBSWlCSFJyeHd2QXZ6****"
access-key-secret: "bGIyQ3NuejFQOWM0WjFUNjR4WTVQZzVPRXND****"
region-id: "Y24taHVoZWhh****"
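These three values are base64-encoded, as required by the data field of a Kubernetes Secret. They can be produced as follows (the plain-text AccessKey values are placeholders; -n avoids encoding a trailing newline):

# Encode the AccessKey pair and region id for the Secret in ca.yml.
echo -n 'your-access-key-id' | base64
echo -n 'your-access-key-secret' | base64
echo -n 'cn-huhehaote' | base64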

The code of ca.yml is as follows:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
  resources: ["events","endpoints"]
  verbs: ["create", "patch"]
- apiGroups: [""]
  resources: ["pods/eviction"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["update"]
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["cluster-autoscaler"]
  verbs: ["get","update"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["watch","list","get","update"]
- apiGroups: [""]
  resources: ["pods","services","replicationcontrollers","persistentvolumeclaims","persistentvolumes"]
  verbs: ["watch","list","get"]
- apiGroups: ["extensions"]
  resources: ["replicasets","daemonsets"]
  verbs: ["watch","list","get"]
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets"]
  verbs: ["watch","list"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["watch","list","get"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["watch","list","get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create","list","watch"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
  verbs: ["delete","get","update","watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: cloud-config
  namespace: kube-system
type: Opaque
data:
  access-key-id: "TFRBSWlCSFJyeHd2********"
  access-key-secret: "bGIyQ3NuejFQOWM0WjFUNjR4WTVQZzVP*********"
  region-id: "Y24taHVoZW********"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      dnsConfig:
        nameservers:
          - 100.XXX.XXX.XXX
          - 100.XXX.XXX.XXX
      nodeSelector:
        ca-key: ca-value   # schedule the autoscaler onto nodes carrying this label
      priorityClassName: system-cluster-critical
      serviceAccountName: cluster-autoscaler   # the ServiceAccount bound by the RBAC rules above
      containers:
        - image: 192.XXX.XXX.XXX:XX/admin/autoscaler:v1.3.1-7369cf1
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - '--v=5'
            - '--stderrthreshold=info'
            - '--cloud-provider=alicloud'
            - '--scan-interval=30s'
            - '--scale-down-delay-after-add=8m'
            - '--scale-down-delay-after-failure=1m'
            - '--scale-down-unready-time=1m'
            - '--ok-total-unready-count=1000'
            - '--max-empty-bulk-delete=50'
            - '--expander=least-waste'
            - '--leader-elect=false'
            - '--scale-down-unneeded-time=8m'
            # scale a node in when its requested utilization stays below 20%
            - '--scale-down-utilization-threshold=0.2'
            - '--scale-down-gpu-utilization-threshold=0.3'
            - '--skip-nodes-with-local-storage=false'
            # min:max node count and the id of the ESS scaling group to manage
            - '--nodes=0:5:asg-hp3fbu2zeu9bg3clraqj'
          imagePullPolicy: "Always"
          env:
            - name: ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: cloud-config
                  key: access-key-id
            - name: ACCESS_KEY_SECRET
              valueFrom:
                secretKeyRef:
                  name: cloud-config
                  key: access-key-secret
            - name: REGION_ID
              valueFrom:
                secretKeyRef:
                  name: cloud-config
                  key: region-id
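After running kubectl apply -f ca.yml, a quick sanity check that the component started (the app label matches the Deployment above):

# Confirm the cluster-autoscaler pod is Running in kube-system.
kubectl -n kube-system get pods -l app=cluster-autoscaler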

5. Results

Simulate business traffic:

Start a busybox image and, inside the pod, run the following command to access the Service of the application above; several such pods can be started at the same time to increase the load:

while true; do wget -q -O- http://hpa-test/index.html; done
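One way to launch such a load generator is a disposable pod; a minimal sketch, where the pod name load-gen is arbitrary:

# Run busybox as a one-off pod that continuously hits the Service.
kubectl run load-gen --image=busybox --restart=Never -- \
  /bin/sh -c 'while true; do wget -q -O- http://hpa-test/index.html; done'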

Observe the HPA:

Before applying load


Figure 5: Before load

After applying load
When the CPU value reaches the threshold, horizontal pod scale-out is triggered.


Figure 6: After load (1)

Figure 7: After load (2)

Observe the pods:

When cluster resources are insufficient, the newly scaled-out pods stay in the Pending state. This triggers the cluster autoscaler, which automatically scales out nodes.
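Both halves of the scale-out can be watched from the command line; these are standard kubectl commands against the objects created earlier:

# Follow the autoscaler's reasoning as it reacts to Pending pods.
kubectl -n kube-system logs -f deployment/cluster-autoscaler

# Watch the new ECS instance register as a cluster node.
kubectl get nodes --watch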


Figure 8: Scaling activity

We are the Alibaba Cloud Intelligence Global Technical Services SRE team. We strive to be a technology-based, service-oriented engineering team that keeps business systems highly available. We provide professional, systematic SRE services to help customers make better use of the cloud and build more stable and reliable business systems on it. We hope to share more techniques that help enterprise customers move to the cloud, use it well, and run their cloud business more stably and reliably. You can scan the QR code below with DingTalk to join the Alibaba Cloud SRE Technical Academy DingTalk circle and talk about all things cloud platform with more cloud practitioners.

