Detailed Tutorial: High-Availability Kubernetes Monitoring with Prometheus and Thanos


This article is reproduced from Rancher Labs.

Introduction

Why high availability matters for Prometheus

Kubernetes adoption has grown many times over in the past few years, and it is clearly the go-to choice for container orchestration. At the same time, Prometheus is widely considered an excellent choice for monitoring both containerized and non-containerized workloads. Monitoring is an essential concern for any infrastructure, and we should make sure our monitoring setup is highly available and highly scalable so that it can grow with the needs of an ever-expanding infrastructure, especially when Kubernetes is adopted.

Therefore, today we will deploy a clustered Prometheus setup that is not only resilient to node failures, but also ensures data is archived properly for future reference. Our setup is also very scalable, to the point that we can span multiple Kubernetes clusters under the same monitoring umbrella.

The current approach

The majority of Prometheus deployments use a pod with a persistent volume, while Prometheus is scaled out using federation. However, not all data can be aggregated using federation, and you often need another mechanism to manage the Prometheus configuration when you add additional servers.

The solution

Thanos aims to solve the problems above. With the help of Thanos, we can not only run multiple replicas of Prometheus and deduplicate data across them, but also archive the data in long-term storage such as GCS or S3.

Implementation

Thanos architecture

Image source: https://thanos.io/quick-tutor…

Thanos consists of the following components:

  • Thanos Sidecar: the main component that runs alongside Prometheus. It reads and archives data to object storage, and it also manages Prometheus's configuration and lifecycle. To distinguish each Prometheus instance, the sidecar injects external labels into the Prometheus configuration. It can run queries against the Prometheus server's PromQL interface, and it also listens for the Thanos gRPC protocol, translating queries between gRPC and REST.
  • Thanos Store: this component implements the Store API on top of historical data in an object storage bucket. It acts mainly as an API gateway, so it does not need large amounts of local disk space. It joins a Thanos cluster on startup and advertises the data it can access. It keeps a small amount of information about all remote blocks on local disk and keeps it in sync with the bucket. This data is generally safe to delete across restarts, at the cost of increased startup time.
  • Thanos Query: the query component listens on HTTP and translates queries into the Thanos gRPC format. It aggregates query results from different sources and can read data from Sidecars and Stores. In an HA setup, it even deduplicates query results.

Run-time deduplication of data from HA groups

Prometheus is stateful and does not allow its database to be replicated. This means that simply running multiple Prometheus replicas is not an easy way to improve availability. Plain load balancing does not work: after a crash, for example, a replica may come back up, but querying it will show a small gap for the period it was down. A second replica might be up during that window, but it could be down at another moment (for instance during a rolling restart), so load balancing across these replicas will not work properly.

  • The Thanos Querier instead pulls the data from both replicas and deduplicates those signals, filling the gap, if any, for the Querier's consumers.
  • Thanos Compact applies the compaction procedure of the Prometheus 2.0 storage engine to block data in object storage. It is generally not semantically concurrency-safe and must be deployed as a singleton against a bucket. It is also responsible for downsampling the data: 5m downsampling after 40 hours and 1h downsampling after 10 days (a minimal sketch of running it is shown after this list).
  • Thanos Ruler basically does the same thing as Prometheus rules; the only difference is that it can communicate with Thanos components.
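
This tutorial does not deploy the Compactor, but as a reference, a minimal sketch of running one against the same long-term bucket could look like the following. The flags and bucket name mirror the manifests below and are assumptions, not a tested deployment; remember to run exactly one instance per bucket.

# Hypothetical single Compactor instance for the long-term bucket.
# --wait keeps the process running, compacting and downsampling new blocks.
thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config="{type: GCS, config: {bucket: prometheus-long-term}}" \
  --wait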

Configuration

Prerequisites

To fully understand this tutorial, you will need the following:

  1. Working knowledge of Kubernetes and kubectl.
  2. A running Kubernetes cluster with at least 3 nodes (a GKE cluster is used in this demo).
  3. A working Ingress Controller and Ingress objects (the Nginx Ingress Controller is used in this demo). This is not mandatory, but it is strongly recommended in order to reduce the number of external endpoints created.
  4. Credentials for the Thanos components to access object storage (a GCS bucket in this case).
  5. Create 2 GCS buckets and name them prometheus-long-term and thanos-ruler.
  6. Create a service account with the role Storage Object Admin.
  7. Download the key file as JSON credentials and name it thanos-gcs-credentials.json.
  8. Create a Kubernetes secret from the credentials:

kubectl create secret generic thanos-gcs-credentials --from-file=thanos-gcs-credentials.json
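
Note that the workloads below all live in the monitoring namespace, so the secret must be created in that namespace too, otherwise the secret volume mounts further down will fail. Assuming you apply the manifests with kubectl apply (which tolerates the namespace already existing), one way to do this is:

kubectl create namespace monitoring
kubectl -n monitoring create secret generic thanos-gcs-credentials \
  --from-file=thanos-gcs-credentials.json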

Deploying the components

Deploying the Prometheus ServiceAccount, ClusterRole, and ClusterRoleBinding

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: monitoring
  namespace: monitoring
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: monitoring
subjects:
  - kind: ServiceAccount
    name: monitoring
    namespace: monitoring
roleRef:
  kind: ClusterRole
  name: monitoring
  apiGroup: rbac.authorization.k8s.io
---

The manifest above creates the monitoring namespace required by Prometheus, along with the ServiceAccount, ClusterRole, and ClusterRoleBinding.
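
To apply it and sanity-check that the ServiceAccount actually received the permissions granted above, something like the following can be used; the file name prometheus-rbac.yaml is simply an assumed name for wherever you saved the manifest:

kubectl apply -f prometheus-rbac.yaml
# Should print "yes" once the ClusterRoleBinding is in place.
kubectl auth can-i list nodes \
  --as=system:serviceaccount:monitoring:monitoring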

Deploying the Prometheus configuration ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.yaml.tmpl: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
      external_labels:
        cluster: prometheus-ha
        # Each Prometheus has to have unique labels.
        replica: $(POD_NAME)

    rule_files:
      - /etc/prometheus/rules/*rules.yaml

    alerting:

      # We want our alerts to be deduplicated
      # from different replicas.
      alert_relabel_configs:
      - regex: replica
        action: labeldrop

      alertmanagers:
        - scheme: http
          path_prefix: /
          static_configs:
            - targets: ['alertmanager:9093']

    scrape_configs:
    - job_name: kubernetes-nodes-cadvisor
      scrape_interval: 10s
      scrape_timeout: 10s
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
        - role: node
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        # Only for Kubernetes ^1.7.3.
        # See: https://github.com/prometheus/prometheus/issues/2916
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
      metric_relabel_configs:
        - action: replace
          source_labels: [id]
          regex: '^/machine\.slice/machine-rkt\\x2d([^\\]+)\\.+/([^/]+)\.service$'
          target_label: rkt_container_name
          replacement: '${2}-${1}'
        - action: replace
          source_labels: [id]
          regex: '^/system\.slice/(.+)\.service$'
          target_label: systemd_service_name
          replacement: '${1}'

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
        - role: pod
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2


    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
        - role: endpoints
      scheme: https 
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
        - role: endpoints
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_name
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2

The ConfigMap above creates the Prometheus configuration file template. This template will be read by the Thanos sidecar component, which will generate the actual configuration file, which in turn will be consumed by the Prometheus container running in the same pod. It is extremely important to add the external_labels section to the configuration file so that the Querier can deduplicate data based on it.

Deploying the Prometheus rules ConfigMap

This creates our alerting rules, which will be relayed to Alertmanager for delivery (a quick way to validate the rules offline is sketched after the manifest).

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
  labels:
    name: prometheus-rules
  namespace: monitoring
data:
  alert-rules.yaml: |-
    groups:
      - name: Deployment
        rules:
        - alert: Deployment at 0 Replicas
          annotations:
            summary: Deployment {{$labels.deployment}} in {{$labels.namespace}} is currently having no pods running
          expr: |
            sum(kube_deployment_status_replicas{pod_template_hash=""}) by (deployment,namespace)  < 1
          for: 1m
          labels:
            team: devops

        - alert: HPA Scaling Limited  
          annotations: 
            summary: HPA named {{$labels.hpa}} in {{$labels.namespace}} namespace has reached scaling limited state
          expr: | 
            (sum(kube_hpa_status_condition{condition="ScalingLimited",status="true"}) by (hpa,namespace)) == 1
          for: 1m
          labels: 
            team: devops

        - alert: HPA at MaxCapacity 
          annotations: 
            summary: HPA named {{$labels.hpa}} in {{$labels.namespace}} namespace is running at Max Capacity
          expr: | 
            ((sum(kube_hpa_spec_max_replicas) by (hpa,namespace)) - (sum(kube_hpa_status_current_replicas) by (hpa,namespace))) == 0
          for: 1m
          labels: 
            team: devops

      - name: Pods
        rules:
        - alert: Container restarted
          annotations:
            summary: Container named {{$labels.container}} in {{$labels.pod}} in {{$labels.namespace}} was restarted
          expr: |
            sum(increase(kube_pod_container_status_restarts_total{namespace!="kube-system",pod_template_hash=""}[1m])) by (pod,namespace,container) > 0
          for: 0m
          labels:
            team: dev

        - alert: High Memory Usage of Container 
          annotations: 
            summary: Container named {{$labels.container}} in {{$labels.pod}} in {{$labels.namespace}} is using more than 75% of Memory Limit
          expr: | 
            (((sum(container_memory_usage_bytes{image!="",container_name!="POD", namespace!="kube-system"}) by (namespace,container_name,pod_name)  / sum(container_spec_memory_limit_bytes{image!="",container_name!="POD",namespace!="kube-system"}) by (namespace,container_name,pod_name) ) * 100 ) < +Inf ) > 75
          for: 5m
          labels: 
            team: dev

        - alert: High CPU Usage of Container 
          annotations: 
            summary: Container named {{$labels.container}} in {{$labels.pod}} in {{$labels.namespace}} is using more than 75% of CPU Limit
          expr: | 
            ((sum(irate(container_cpu_usage_seconds_total{image!="",container_name!="POD", namespace!="kube-system"}[30s])) by (namespace,container_name,pod_name) / sum(container_spec_cpu_quota{image!="",container_name!="POD", namespace!="kube-system"} / container_spec_cpu_period{image!="",container_name!="POD", namespace!="kube-system"}) by (namespace,container_name,pod_name) ) * 100)  > 75
          for: 5m
          labels: 
            team: dev

      - name: Nodes
        rules:
        - alert: High Node Memory Usage
          annotations:
            summary: Node {{$labels.kubernetes_io_hostname}} has more than 80% memory used. Plan Capcity
          expr: |
            (sum (container_memory_working_set_bytes{id="/",container_name!="POD"}) by (kubernetes_io_hostname) / sum (machine_memory_bytes{}) by (kubernetes_io_hostname) * 100) > 80
          for: 5m
          labels:
            team: devops

        - alert: High Node CPU Usage
          annotations:
            summary: Node {{$labels.kubernetes_io_hostname}} has more than 80% allocatable cpu used. Plan Capacity.
          expr: |
            (sum(rate(container_cpu_usage_seconds_total{id="/", container_name!="POD"}[1m])) by (kubernetes_io_hostname) / sum(machine_cpu_cores) by (kubernetes_io_hostname)  * 100) > 80
          for: 5m
          labels:
            team: devops

        - alert: High Node Disk Usage
          annotations:
            summary: Node {{$labels.kubernetes_io_hostname}} has more than 85% disk used. Plan Capacity.
          expr: |
            (sum(container_fs_usage_bytes{device=~"^/dev/[sv]d[a-z][1-9]$",id="/",container_name!="POD"}) by (kubernetes_io_hostname) / sum(container_fs_limit_bytes{container_name!="POD",device=~"^/dev/[sv]d[a-z][1-9]$",id="/"}) by (kubernetes_io_hostname)) * 100 > 85
          for: 5m
          labels:
            team: devops
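
Assuming promtool (which ships with Prometheus) is available on your workstation, one convenient way to validate these rules before the pods pick them up is to pull them back out of the ConfigMap and check them; this is only a sketch:

kubectl -n monitoring get configmap prometheus-rules \
  -o jsonpath='{.data.alert-rules\.yaml}' > /tmp/alert-rules.yaml
promtool check rules /tmp/alert-rules.yaml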

Deploying the Prometheus StatefulSet

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
  namespace: monitoring
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 3
  serviceName: prometheus-service
  template:
    metadata:
      labels:
        app: prometheus
        thanos-store-api: "true"
    spec:
      serviceAccountName: monitoring
      containers:
        - name: prometheus
          image: prom/prometheus:v2.4.3
          args:
            - "--config.file=/etc/prometheus-shared/prometheus.yaml"
            - "--storage.tsdb.path=/prometheus/"
            - "--web.enable-lifecycle"
            - "--storage.tsdb.no-lockfile"
            - "--storage.tsdb.min-block-duration=2h"
            - "--storage.tsdb.max-block-duration=2h"
          ports:
            - name: prometheus
              containerPort: 9090
          volumeMounts:
            - name: prometheus-storage
              mountPath: /prometheus/
            - name: prometheus-config-shared
              mountPath: /etc/prometheus-shared/
            - name: prometheus-rules
              mountPath: /etc/prometheus/rules
        - name: thanos
          image: quay.io/thanos/thanos:v0.8.0
          args:
            - "sidecar"
            - "--log.level=debug"
            - "--tsdb.path=/prometheus"
            - "--prometheus.url=http://127.0.0.1:9090"
            - "--objstore.config={type: GCS, config: {bucket: prometheus-long-term}}"
            - "--reloader.config-file=/etc/prometheus/prometheus.yaml.tmpl"
            - "--reloader.config-envsubst-file=/etc/prometheus-shared/prometheus.yaml"
            - "--reloader.rule-dir=/etc/prometheus/rules/"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /etc/secret/thanos-gcs-credentials.json
          ports:
            - name: http-sidecar
              containerPort: 10902
            - name: grpc
              containerPort: 10901
          livenessProbe:
              httpGet:
                port: 10902
                path: /-/healthy
          readinessProbe:
            httpGet:
              port: 10902
              path: /-/ready
          volumeMounts:
            - name: prometheus-storage
              mountPath: /prometheus
            - name: prometheus-config-shared
              mountPath: /etc/prometheus-shared/
            - name: prometheus-config
              mountPath: /etc/prometheus
            - name: prometheus-rules
              mountPath: /etc/prometheus/rules
            - name: thanos-gcs-credentials
              mountPath: /etc/secret
              readOnly: false
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 1000
      volumes:
        - name: prometheus-config
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-config-shared
          emptyDir: {}
        - name: prometheus-rules
          configMap:
            name: prometheus-rules
        - name: thanos-gcs-credentials
          secret:
            secretName: thanos-gcs-credentials
  volumeClaimTemplates:
  - metadata:
      name: prometheus-storage
      namespace: monitoring
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast
      resources:
        requests:
          storage: 20Gi

It is important to understand the following about the manifest provided above (a small verification sketch follows the list):

  1. Prometheus is deployed as a StatefulSet with 3 replicas, and each replica dynamically provisions its own persistent volume.
  2. The Prometheus configuration is generated by the Thanos sidecar container, using the template file we created above.
  3. Thanos handles data compaction, so we need to set --storage.tsdb.min-block-duration=2h and --storage.tsdb.max-block-duration=2h.
  4. The Prometheus StatefulSet is labeled with thanos-store-api: true, so that each pod is discovered by the headless service we will create next. It is this headless service that the Thanos Querier will use to query data across all the Prometheus instances. We also apply the same label to the Thanos Store and Thanos Ruler components, so that they too are discovered by the Querier and can be used for querying metrics.
  5. The GCS bucket credentials path is provided via the GOOGLE_APPLICATION_CREDENTIALS environment variable, and the credentials file is mounted into it from the secret we created as part of the prerequisites.
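
Once the pods are Running, you can confirm that the sidecar rendered the shared configuration and substituted a unique replica label for each pod. The filter below runs locally on your machine, so it does not depend on any tools inside the container:

kubectl -n monitoring get pods -l app=prometheus
# Expect "replica: prometheus-0" in the rendered external_labels of the first replica.
kubectl -n monitoring exec prometheus-0 -c prometheus -- \
  cat /etc/prometheus-shared/prometheus.yaml | grep -A 3 external_labels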

Deploying the Prometheus services

apiVersion: v1
kind: Service
metadata: 
  name: prometheus-0-service
  annotations: 
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
  namespace: monitoring
  labels:
    name: prometheus
spec:
  selector: 
    statefulset.kubernetes.io/pod-name: prometheus-0
  ports: 
    - name: prometheus 
      port: 8080
      targetPort: prometheus
---
apiVersion: v1
kind: Service
metadata: 
  name: prometheus-1-service
  annotations: 
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
  namespace: monitoring
  labels:
    name: prometheus
spec:
  selector: 
    statefulset.kubernetes.io/pod-name: prometheus-1
  ports: 
    - name: prometheus 
      port: 8080
      targetPort: prometheus
---
apiVersion: v1
kind: Service
metadata: 
  name: prometheus-2-service
  annotations: 
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
  namespace: monitoring
  labels:
    name: prometheus
spec:
  selector: 
    statefulset.kubernetes.io/pod-name: prometheus-2
  ports: 
    - name: prometheus 
      port: 8080
      targetPort: prometheus
---
# This service creates an SRV record for the Querier to discover all store APIs
apiVersion: v1
kind: Service
metadata:
  name: thanos-store-gateway
  namespace: monitoring
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: grpc
      port: 10901
      targetPort: grpc
  selector:
    thanos-store-api: "true"

In addition to the approach above, you can also check out this article to learn how to quickly deploy and configure the Prometheus service on Rancher.

We create a separate service for each Prometheus pod in the StatefulSet, even though this is not strictly necessary; these services exist purely for debugging. The purpose of the thanos-store-gateway headless service has been explained above. We will later expose the Prometheus services using an Ingress object.
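
For example, before the Ingress is in place, you can reach an individual replica for debugging through its per-pod service (the service listens on port 8080 and forwards to Prometheus on 9090):

kubectl -n monitoring port-forward svc/prometheus-0-service 9090:8080
# In another terminal, or in a browser at http://localhost:9090
curl -s http://localhost:9090/-/healthy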

Deploying the Thanos Querier

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: thanos-querier
  namespace: monitoring
  labels:
    app: thanos-querier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: thanos-querier
  template:
    metadata:
      labels:
        app: thanos-querier
    spec:
      containers:
      - name: thanos
        image: quay.io/thanos/thanos:v0.8.0
        args:
        - query
        - --log.level=debug
        - --query.replica-label=replica
        - --store=dnssrv+thanos-store-gateway:10901
        ports:
        - name: http
          containerPort: 10902
        - name: grpc
          containerPort: 10901
        livenessProbe:
          httpGet:
            port: http
            path: /-/healthy
        readinessProbe:
          httpGet:
            port: http
            path: /-/ready
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: thanos-querier
  name: thanos-querier
  namespace: monitoring
spec:
  ports:
  - port: 9090
    protocol: TCP
    targetPort: http
    name: http
  selector:
    app: thanos-querier

This is one of the main pieces of the Thanos deployment. Note the following (a quick query sketch follows the list):

  1. The container argument --store=dnssrv+thanos-store-gateway:10901 helps discover all the components that metric data should be queried from.
  2. The thanos-querier service provides a web interface for running PromQL queries. It also has the option to deduplicate data across different Prometheus clusters.
  3. This is the endpoint where we point Grafana as the data source for all dashboards.
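
To try it out before wiring up Ingress and Grafana, you can port-forward the service and hit the Prometheus-compatible HTTP API it exposes. The dedup parameter shown here is an assumption about the Thanos query API and can simply be dropped if it is not recognized:

kubectl -n monitoring port-forward svc/thanos-querier 9090:9090
# In another terminal: run a query deduplicated across the Prometheus replicas.
curl -s 'http://localhost:9090/api/v1/query?query=up&dedup=true' | head -c 400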

Deploying the Thanos Store Gateway

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: thanos-store-gateway
  namespace: monitoring
  labels:
    app: thanos-store-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: thanos-store-gateway
  serviceName: thanos-store-gateway
  template:
    metadata:
      labels:
        app: thanos-store-gateway
        thanos-store-api: "true"
    spec:
      containers:
        - name: thanos
          image: quay.io/thanos/thanos:v0.8.0
          args:
          - "store"
          - "--log.level=debug"
          - "--data-dir=/data"
          - "--objstore.config={type: GCS, config: {bucket: prometheus-long-term}}"
          - "--index-cache-size=500MB"
          - "--chunk-pool-size=500MB"
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /etc/secret/thanos-gcs-credentials.json
          ports:
          - name: http
            containerPort: 10902
          - name: grpc
            containerPort: 10901
          livenessProbe:
            httpGet:
              port: 10902
              path: /-/healthy
          readinessProbe:
            httpGet:
              port: 10902
              path: /-/ready
          volumeMounts:
            - name: thanos-gcs-credentials
              mountPath: /etc/secret
              readOnly: false
      volumes:
        - name: thanos-gcs-credentials
          secret:
            secretName: thanos-gcs-credentials
---

This creates the store component, which serves metrics from our object storage to the Querier.
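
A simple way to check that the store gateway came up and can talk to the bucket is to look at its pod status and recent logs; no particular log lines are assumed here, only that startup or credential errors would show up in them:

kubectl -n monitoring get pods -l app=thanos-store-gateway
kubectl -n monitoring logs thanos-store-gateway-0 --tail=20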

Deploying the Thanos Ruler

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: thanos-ruler-rules
  namespace: monitoring
data:
  alert_down_services.rules.yaml: |
    groups:
    - name: metamonitoring
      rules:
      - alert: PrometheusReplicaDown
        annotations:
          message: Prometheus replica in cluster {{$labels.cluster}} has disappeared from Prometheus target discovery.
        expr: |
          sum(up{cluster="prometheus-ha", instance=~".*:9090", job="kubernetes-service-endpoints"}) by (job,cluster) < 3
        for: 15s
        labels:
          severity: critical
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    app: thanos-ruler
  name: thanos-ruler
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: thanos-ruler
  serviceName: thanos-ruler
  template:
    metadata:
      labels:
        app: thanos-ruler
        thanos-store-api: "true"
    spec:
      containers:
        - name: thanos
          image: quay.io/thanos/thanos:v0.8.0
          args:
            - rule
            - --log.level=debug
            - --data-dir=/data
            - --eval-interval=15s
            - --rule-file=/etc/thanos-ruler/*.rules.yaml
            - --alertmanagers.url=http://alertmanager:9093
            - --query=thanos-querier:9090
            - "--objstore.config={type: GCS, config: {bucket: thanos-ruler}}"
            - --label=ruler_cluster="prometheus-ha"
            - --label=replica="$(POD_NAME)"
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /etc/secret/thanos-gcs-credentials.json
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - name: http
              containerPort: 10902
            - name: grpc
              containerPort: 10901
          livenessProbe:
            httpGet:
              port: http
              path: /-/healthy
          readinessProbe:
            httpGet:
              port: http
              path: /-/ready
          volumeMounts:
            - mountPath: /etc/thanos-ruler
              name: config
            - name: thanos-gcs-credentials
              mountPath: /etc/secret
              readOnly: false
      volumes:
        - configMap:
            name: thanos-ruler-rules
          name: config
        - name: thanos-gcs-credentials
          secret:
            secretName: thanos-gcs-credentials
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: thanos-ruler
  name: thanos-ruler
  namespace: monitoring
spec:
  ports:
    - port: 9090
      protocol: TCP
      targetPort: http
      name: http
  selector:
    app: thanos-ruler

Now, if you start an interactive shell in the same namespace as our workloads and try to see which pods our thanos-store-gateway resolves to, you will see something like this:

root@my-shell-95cb5df57-4q6w8:/# nslookup thanos-store-gateway
Server:    10.63.240.10
Address:  10.63.240.10#53

Name:  thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.25.2
Name:  thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.25.4
Name:  thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.30.2
Name:  thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.30.8
Name:  thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.31.2

root@my-shell-95cb5df57-4q6w8:/# exit

The IPs returned above correspond to our Prometheus pods, thanos-store, and thanos-ruler. This can be verified with:

$ kubectl get pods -o wide -l thanos-store-api="true"
NAME                     READY   STATUS    RESTARTS   AGE    IP           NODE                              NOMINATED NODE   READINESS GATES
prometheus-0             2/2     Running   0          100m   10.60.31.2   gke-demo-1-pool-1-649cbe02-jdnv   <none>           <none>
prometheus-1             2/2     Running   0          14h    10.60.30.2   gke-demo-1-pool-1-7533d618-kxkd   <none>           <none>
prometheus-2             2/2     Running   0          31h    10.60.25.2   gke-demo-1-pool-1-4e9889dd-27gc   <none>           <none>
thanos-ruler-0           1/1     Running   0          100m   10.60.30.8   gke-demo-1-pool-1-7533d618-kxkd   <none>           <none>
thanos-store-gateway-0   1/1     Running   0          14h    10.60.25.4   gke-demo-1-pool-1-4e9889dd-27gc   <none>           <none>

Deploying Alertmanager

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitoring
data:
  config.yml: |-
    global:
      resolve_timeout: 5m
      slack_api_url: "<your_slack_hook>"
      victorops_api_url: "<your_victorops_hook>"

    templates:
    - '/etc/alertmanager-templates/*.tmpl'
    route:
      group_by: ['alertname', 'cluster', 'service']
      group_wait: 10s
      group_interval: 1m
      repeat_interval: 5m  
      receiver: default 
      routes:
      - match:
          team: devops
        receiver: devops
        continue: true 
      - match: 
          team: dev
        receiver: dev
        continue: true

    receivers:
    - name: 'default'

    - name: 'devops'
      victorops_configs:
      - api_key: '<YOUR_API_KEY>'
        routing_key: 'devops'
        message_type: 'CRITICAL'
        entity_display_name: '{{.CommonLabels.alertname}}'
        state_message: 'Alert: {{.CommonLabels.alertname}}. Summary:{{.CommonAnnotations.summary}}. RawData: {{.CommonLabels}}'
      slack_configs:
      - channel: '#k8-alerts'
        send_resolved: true


    - name: 'dev'
      victorops_configs:
      - api_key: '<YOUR_API_KEY>'
        routing_key: 'dev'
        message_type: 'CRITICAL'
        entity_display_name: '{{.CommonLabels.alertname}}'
        state_message: 'Alert: {{.CommonLabels.alertname}}. Summary:{{.CommonAnnotations.summary}}. RawData: {{.CommonLabels}}'
      slack_configs:
      - channel: '#k8-alerts'
        send_resolved: true

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      name: alertmanager
      labels:
        app: alertmanager
    spec:
      containers:
      - name: alertmanager
        image: prom/alertmanager:v0.15.3
        args:
          - '--config.file=/etc/alertmanager/config.yml'
          - '--storage.path=/alertmanager'
        ports:
        - name: alertmanager
          containerPort: 9093
        volumeMounts:
        - name: config-volume
          mountPath: /etc/alertmanager
        - name: alertmanager
          mountPath: /alertmanager
      volumes:
      - name: config-volume
        configMap:
          name: alertmanager
      - name: alertmanager
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
  labels:
    name: alertmanager
  name: alertmanager
  namespace: monitoring
spec:
  selector:
    app: alertmanager
  ports:
  - name: alertmanager
    protocol: TCP
    port: 9093
    targetPort: 9093

This creates our Alertmanager deployment, which will deliver all alerts generated according to the Prometheus rules.
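
To verify that routing works end to end without waiting for a real alert to fire, you can port-forward Alertmanager and post a hand-crafted test alert to its v1 API; the label and annotation values below are made up purely for illustration:

kubectl -n monitoring port-forward svc/alertmanager 9093:9093
# In another terminal: fire a fake alert that matches the "devops" route.
curl -s -XPOST http://localhost:9093/api/v1/alerts \
  -H 'Content-Type: application/json' \
  -d '[{"labels": {"alertname": "TestAlert", "team": "devops"},
        "annotations": {"summary": "Manual test alert"}}]'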

Deploying Kube State Metrics

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1 
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources:
  - cronjobs
  - jobs
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kube-state-metrics
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-state-metrics-resizer
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: monitoring
  name: kube-state-metrics-resizer
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs: ["get"]
- apiGroups: ["extensions"]
  resources:
  - deployments
  resourceNames: ["kube-state-metrics"]
  verbs: ["get", "update"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      k8s-app: kube-state-metrics
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: quay.io/mxinden/kube-state-metrics:v1.4.0-gzip.3
        ports:
        - name: http-metrics
          containerPort: 8080
        - name: telemetry
          containerPort: 8081
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
      - name: addon-resizer
        image: k8s.gcr.io/addon-resizer:1.8.3
        resources:
          limits:
            cpu: 150m
            memory: 50Mi
          requests:
            cpu: 150m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        command:
          - /pod_nanny
          - --container=kube-state-metrics
          - --cpu=100m
          - --extra-cpu=1m
          - --memory=100Mi
          - --extra-memory=2Mi
          - --threshold=5
          - --deployment=kube-state-metrics
---
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: monitoring
  labels:
    k8s-app: kube-state-metrics
  annotations:
    prometheus.io/scrape: 'true'
spec:
  ports:
  - name: http-metrics
    port: 8080
    targetPort: http-metrics
    protocol: TCP
  - name: telemetry
    port: 8081
    targetPort: telemetry
    protocol: TCP
  selector:
    k8s-app: kube-state-metrics

The kube-state-metrics deployment is needed to relay some important metrics that are not exposed natively by the kubelet and are therefore not directly available to Prometheus.
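
You can confirm the exporter is serving the metrics that the alert rules above rely on (for example kube_deployment_status_replicas) with a quick port-forward:

kubectl -n monitoring port-forward svc/kube-state-metrics 8080:8080
# In another terminal:
curl -s http://localhost:8080/metrics | grep kube_deployment_status_replicas | head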

Deploying the Node-Exporter DaemonSet

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    name: node-exporter
spec:
  template:
    metadata:
      labels:
        name: node-exporter
      annotations:
         prometheus.io/scrape: "true"
         prometheus.io/port: "9100"
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
        - name: node-exporter
          image: prom/node-exporter:v0.16.0
          securityContext:
            privileged: true
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
          ports:
            - containerPort: 9100
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 100Mi
          volumeMounts:
            - name: dev
              mountPath: /host/dev
            - name: proc
              mountPath: /host/proc
            - name: sys
              mountPath: /host/sys
            - name: rootfs
              mountPath: /rootfs
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: dev
          hostPath:
            path: /dev
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /

The node-exporter DaemonSet runs a node-exporter pod on every node and exposes very important node-related metrics that can be pulled by the Prometheus instances.
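
Because the pods use hostNetwork, each exporter listens on its node's IP at port 9100, and the pod annotations above let Prometheus scrape them automatically. For a quick spot check against a single pod (substitute one of the pod names printed by the first command):

kubectl -n monitoring get pods -l name=node-exporter -o wide
kubectl -n monitoring port-forward <node-exporter-pod-name> 9100:9100
# In another terminal:
curl -s http://localhost:9100/metrics | grep '^node_' | head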

Deploying Grafana

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
  namespace: monitoring
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: grafana
  namespace: monitoring
spec:
  replicas: 1
  serviceName: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
  volumeClaimTemplates:
  - metadata:
      name: grafana-storage
      namespace: monitoring
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast
      resources:
        requests:
          storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: grafana
  name: grafana
  namespace: monitoring
spec:
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    k8s-app: grafana

This creates our Grafana deployment and service, which will be exposed using our Ingress object. To make use of it, we should add the Thanos Querier as the data source of our Grafana deployment (the manual UI steps are listed below; a scripted alternative follows the list):

  1. Click on Add DataSource
  2. Set Name: DS_PROMETHEUS
  3. Set Type: Prometheus
  4. Set URL: http://thanos-querier:9090
  5. Save and test. You can now build your custom dashboards or simply import dashboards from grafana.net. Dashboards #315 and #1471 are very good places to start.
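
Because this demo Grafana allows anonymous access with the Admin role, the same data source can also be created through Grafana's HTTP API instead of the UI. This is only a sketch, assuming the anonymous Admin role is sufficient for API calls in your setup:

kubectl -n monitoring port-forward svc/grafana 3000:3000
# In another terminal:
curl -s -XPOST http://localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"DS_PROMETHEUS","type":"prometheus","url":"http://thanos-querier:9090","access":"proxy","isDefault":true}'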

Deploying the Ingress object

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: grafana.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
  - host: prometheus-0.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-0-service
          servicePort: 8080
  - host: prometheus-1.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-1-service
          servicePort: 8080
  - host: prometheus-2.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-2-service
          servicePort: 8080
  - host: alertmanager.<yourdomain>.com
    http: 
      paths:
      - path: /
        backend:
          serviceName: alertmanager
          servicePort: 9093
  - host: thanos-querier.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: thanos-querier
          servicePort: 9090
  - host: thanos-ruler.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: thanos-ruler
          servicePort: 9090

This is the last piece of the puzzle. It exposes all of our services outside the Kubernetes cluster and helps us access them. Make sure you replace <yourdomain> with a domain name that is accessible to you, and that you point the Ingress Controller's service at this domain.
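
Until DNS is set up, you can still exercise the Ingress rules by sending the Host header yourself; the thanos-querier health endpoint already used by its probes is a convenient target. Replace <ingress-ip> with the external IP of your Ingress Controller service:

kubectl -n monitoring get ingress monitoring-ingress
curl -s -H 'Host: thanos-querier.<yourdomain>.com' http://<ingress-ip>/-/healthy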

You should now be able to access the Thanos Querier at http://thanos-querier.<yourdomain>.com. It will look something like this:

Make sure deduplication is selected.

If you click on Stores, you can see all the active endpoints discovered by the thanos-store-gateway service.

Now you can add the Thanos Querier as the data source in Grafana and start creating dashboards.

Kubernetes cluster monitoring dashboard

Kubernetes node monitoring dashboard

Conclusion

Integrating Thanos with Prometheus undoubtedly gives us the ability to scale Prometheus horizontally, and since the Thanos Querier can pull metrics from other Querier instances, you can effectively pull metrics across clusters and visualize them in a single dashboard.

We are also able to archive metric data in object storage, which gives our monitoring system virtually unlimited storage while serving metrics from the object storage itself. A major part of the cost of this setup comes down to the object storage (S3 or GCS), and it can be reduced further by applying appropriate retention policies.

However, achieving all of this requires quite a bit of configuration. The manifests provided above have been tested in production, so feel free to give them a try.
