
Prometheus: Using Prometheus Operator (2) - Monitoring kube-proxy with a ServiceMonitor

The previous article introduced the ServiceMonitor as the abstraction for monitoring a Service. This article takes kube-proxy as an example and walks through how to use a ServiceMonitor object to monitor it.

1. How kube-proxy is deployed

# kubectl get all -A|grep proxy
kube-system     pod/kube-proxy-bn64j                           1/1     Running   0          30m
kube-system     pod/kube-proxy-jcl54                           1/1     Running   0          30m
kube-system     pod/kube-proxy-n44bh                           1/1     Running   0          30m
kube-system     daemonset.apps/kube-proxy                 3         3         3       3            3           kubernetes.io/os=linux   217d

As shown, kube-proxy is deployed as a DaemonSet with 3 Pods, and has no Service.

2. Exposing kube-proxy's /metrics port

The kube-proxy Pod runs a single container, and in its configuration file the metrics endpoint binds to 127.0.0.1:

# kubectl edit ds kube-proxy -n kube-system
......
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: 178.104.162.39:443/dev/kubernetes/amd64/kube-proxy:v1.18.0
        imagePullPolicy: IfNotPresent
        name: kube-proxy

Inspect its configuration file /var/lib/kube-proxy/config.conf:

# kubectl exec -it kube-proxy-4vxsf /bin/sh -n kube-system

# cat /var/lib/kube-proxy/config.conf

bindAddress: 0.0.0.0
......
healthzBindAddress: 0.0.0.0:10256
......
metricsBindAddress: 127.0.0.1:10249

As shown, the metrics endpoint is bound to 127.0.0.1:10249.
To make /metrics reachable from outside the Pod, that port has to be exposed. Here a sidecar is used: a kube-rbac-proxy container is added to forward the kube-proxy container's metrics port:

# kubectl edit ds kube-proxy -n kube-system

# Add to the containers list:
- args:
  - --logtostderr
  - --secure-listen-address=[$(IP)]:10249
  - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
  - --upstream=http://127.0.0.1:10249/
  env:
  - name: IP
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.podIP
  image: 178.104.162.39:443/dev/kubernetes/amd64/kube-rbac-proxy:v0.4.1
  imagePullPolicy: IfNotPresent
  name: kube-rbac-proxy
  ports:
  - containerPort: 10249
    hostPort: 10249
    name: https
    protocol: TCP
  resources:
    limits:
      cpu: 20m
      memory: 40Mi
    requests:
      cpu: 10m
      memory: 20Mi
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File

At the end of the daemonset spec, the serviceAccount is also specified:

serviceAccount: kube-proxy
serviceAccountName: kube-proxy

After the daemonset change, verify that port 10249 is listening:

# netstat -nalp|grep 10249|grep LISTEN
tcp        0      0 178.104.163.38:10249    0.0.0.0:*               LISTEN      16930/./kube-rbac-p
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      16735/kube-proxy
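The effect of the sidecar can be illustrated in miniature with Python's standard library: an authenticating proxy listens on one port and forwards authorized requests to a loopback-only upstream. Everything here (the ports and the static token check) is hypothetical for illustration; the real kube-rbac-proxy serves TLS and validates bearer tokens against the Kubernetes TokenReview API:

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

VALID_TOKEN = "demo-token"   # hypothetical; real tokens come from service accounts
UPSTREAM_PORT = 18249        # stands in for kube-proxy's 127.0.0.1:10249
PROXY_PORT = 18250           # stands in for kube-rbac-proxy's externally reachable port

class Upstream(BaseHTTPRequestHandler):
    """Plays the role of kube-proxy's loopback-only metrics endpoint."""
    def do_GET(self):
        body = b"up 1\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence request logging
        pass

class AuthProxy(BaseHTTPRequestHandler):
    """Rejects requests without a valid bearer token, else forwards upstream."""
    def do_GET(self):
        if self.headers.get("Authorization", "") != f"Bearer {VALID_TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        with urllib.request.urlopen(
                f"http://127.0.0.1:{UPSTREAM_PORT}{self.path}") as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

def fetch(port, path, token=None):
    """Return (status, body) for a GET, with an optional bearer token."""
    req = urllib.request.Request(f"http://127.0.0.1:{port}{path}")
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.read()
    except urllib.error.HTTPError as e:
        return e.code, b""

upstream = HTTPServer(("127.0.0.1", UPSTREAM_PORT), Upstream)
proxy = HTTPServer(("127.0.0.1", PROXY_PORT), AuthProxy)
threading.Thread(target=upstream.serve_forever, daemon=True).start()
threading.Thread(target=proxy.serve_forever, daemon=True).start()

print(fetch(PROXY_PORT, "/metrics"))                     # (401, b'')
print(fetch(PROXY_PORT, "/metrics", token=VALID_TOKEN))  # (200, b'up 1\n')
```

This mirrors why scraping without the right token returns 401, which is also why the RBAC objects in section 4 are needed.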

3. Creating the kube-proxy Service and ServiceMonitor

kube-proxy has no Service, so one must be created first; the ServiceMonitor is then built on top of that Service.

kube-proxy-service.yaml defines a Service named kube-proxy:

  • It selects Pods carrying the label k8s-app=kube-proxy;
  • It labels itself k8s-app=kube-proxy (the ServiceMonitor relies on this).
# cat kube-proxy-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: kube-proxy
    app.kubernetes.io/version: v0.18.1
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - name: https
    port: 10249
    targetPort: https
  selector:
    k8s-app: kube-proxy

kube-proxy-serviceMonitor.yaml defines a ServiceMonitor named kube-proxy:

  • It selects Services carrying the label k8s-app=kube-proxy.
# cat kube-proxy-serviceMonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-proxy
  namespace: monitoring
  labels:
    k8s-app: kube-proxy
spec:
  jobLabel: kube-proxy
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 15s
    port: https
    relabelings:
    - action: replace
      regex: (.*)
      replacement: $1
      sourceLabels:
      - __meta_kubernetes_pod_node_name
      targetLabel: instance
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  selector:
    matchLabels:
      k8s-app: kube-proxy
  namespaceSelector:
    matchNames:
    - kube-system
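The link between the two objects is ordinary Kubernetes label matching; a sketch of the matchLabels semantics, using the labels from the two manifests above:

```python
def selector_matches(match_labels, labels):
    """A matchLabels selector matches when every key/value pair is present."""
    return all(labels.get(k) == v for k, v in match_labels.items())

# Labels from kube-proxy-service.yaml
service_labels = {
    "app.kubernetes.io/name": "kube-proxy",
    "app.kubernetes.io/version": "v0.18.1",
    "k8s-app": "kube-proxy",
}

# Selector from kube-proxy-serviceMonitor.yaml
monitor_selector = {"k8s-app": "kube-proxy"}

print(selector_matches(monitor_selector, service_labels))               # True
print(selector_matches({"k8s-app": "node-exporter"}, service_labels))   # False
```

Extra labels on the Service do not matter; only the keys named in matchLabels must agree.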

Once the ServiceMonitor is in place, the kube-proxy target appears on the Prometheus targets dashboard.

At the same time, a matching service-discovery job for kube-proxy is added to the prometheus-server configuration:

- job_name: monitoring/kube-proxy/0
  honor_labels: false
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - kube-system
  scrape_interval: 15s
  scheme: https
  tls_config:
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - action: keep        ## keep services with the label k8s_app=kube-proxy
    source_labels:
    - __meta_kubernetes_service_label_k8s_app
    regex: kube-proxy
  - action: keep        ## keep endpoints whose port name is https
    source_labels:
    - __meta_kubernetes_endpoint_port_name
    regex: https
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Node;(.*)
    replacement: ${1}
    target_label: node
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Pod;(.*)
    replacement: ${1}
    target_label: pod
 .....

The configuration above applies two filters:

  • keep Services with the label k8s-app=kube-proxy;
  • keep endpoints whose port name is https.

This is consistent with the kube-proxy Service definition.
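The behavior of these relabel rules can be traced with a tiny interpreter covering just the keep and replace actions used here (a sketch; real Prometheus relabeling supports more actions and general ${N} group references):

```python
import re

def apply_relabel(labels, rule):
    """Interpret the 'keep' and 'replace' relabel actions used above."""
    sep = rule.get("separator", ";")
    value = sep.join(labels.get(name, "") for name in rule["source_labels"])
    match = re.fullmatch(rule["regex"], value)
    if rule["action"] == "keep":
        return labels if match else None          # None: target is dropped
    if rule["action"] == "replace" and match:
        new = dict(labels)
        # Only $1/${1} is handled here, which covers the rules above.
        replacement = rule.get("replacement", "$1")
        new[rule["target_label"]] = (replacement
                                     .replace("${1}", match.group(1))
                                     .replace("$1", match.group(1)))
        return new
    return labels

# Discovery labels a kube-proxy endpoint might carry (values are illustrative)
discovered = {
    "__meta_kubernetes_service_label_k8s_app": "kube-proxy",
    "__meta_kubernetes_endpoint_port_name": "https",
    "__meta_kubernetes_pod_node_name": "node-1",
}

rules = [
    {"action": "keep", "source_labels": ["__meta_kubernetes_service_label_k8s_app"],
     "regex": "kube-proxy"},
    {"action": "keep", "source_labels": ["__meta_kubernetes_endpoint_port_name"],
     "regex": "https"},
    {"action": "replace", "source_labels": ["__meta_kubernetes_pod_node_name"],
     "regex": "(.*)", "replacement": "$1", "target_label": "instance"},
]

target = discovered
for rule in rules:
    target = apply_relabel(target, rule)
    if target is None:
        break

print(target["instance"])  # node-1

# An endpoint whose port is not named https is dropped by the second rule:
print(apply_relabel({"__meta_kubernetes_endpoint_port_name": "http"}, rules[1]))  # None
```

This is why the instance label on the kube-proxy target shows the node name rather than an IP:port pair.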

4. Adding RBAC for kube-proxy

The daemonset uses serviceAccount kube-proxy; this service account needs a clusterRole and clusterRoleBinding, otherwise scraping /metrics fails with 401 Unauthorized.

# cat kube-proxy-clusterRole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-proxy
rules:
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
- apiGroups:
  - authorization.k8s.io
  resources:
  - subjectaccessreviews
  verbs:
  - create

# kubectl apply -f kube-proxy-clusterRole.yaml
# cat kube-proxy-clusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-proxy
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system

# kubectl apply -f kube-proxy-clusterRoleBinding.yaml

5. Inspecting the metrics with curl from inside the cluster

# curl --header "Authorization: Bearer $TOKEN" --insecure https://178.104.163.38:10249/metrics
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
......
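A scrape like the one above returns the Prometheus text exposition format; a consumer can pull the sample lines apart with a few lines of Python (a simplified sketch that skips # HELP/# TYPE comments and does not handle metric labels, timestamps, or escaping):

```python
def parse_metrics(text):
    """Parse Prometheus text exposition output into {metric_name: value}."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and HELP/TYPE comments
            continue
        name, value = line.rsplit(None, 1)     # metric name, then the sample value
        samples[name] = float(value)
    return samples

# A fragment of the scrape output shown above
scrape = """\
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
"""

print(parse_metrics(scrape))
# {'apiserver_audit_event_total': 0.0, 'apiserver_audit_requests_rejected_total': 0.0}
```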