1. Environment Overview

Software versions:
Kubernetes: v1.22.2
kube-prometheus: release-0.9

2. Deployment

Before deploying, check the compatibility between kube-prometheus and your Kubernetes cluster version in the official kube-prometheus repository: https://github.com/prometheus-operator/kube-prometheus

Download the project on the master node.

root@master01:~/monitor# git clone https://github.com/prometheus-operator/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 17447, done.
remote: Counting objects: 100% (353/353), done.
remote: Compressing objects: 100% (129/129), done.
remote: Total 17447 (delta 262), reused 279 (delta 211), pack-reused 17094
Receiving objects: 100% (17447/17447), 9.12 MiB | 5.59 MiB/s, done.
Resolving deltas: 100% (11455/11455), done.
root@master01:~/monitor# ls
kube-prometheus
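The clone above tracks the repository's default branch. If you want the manifests that match a specific release, such as the release-0.9 line listed in the environment above, one option (a sketch, assuming the repository's usual branch naming) is to check out that branch before deploying:

cd kube-prometheus
git checkout release-0.9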

Deploying kube-prometheus: note that the resources under manifests/setup/ must be created with kubectl create rather than kubectl apply; otherwise you will hit the error "The CustomResourceDefinition "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes". The contents of the manifests directory can be created with kubectl apply.
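As a side note (not part of the original walkthrough), the annotation-size error can also be avoided with server-side apply, which does not store the large last-applied-configuration annotation; a minimal sketch:

kubectl apply --server-side -f manifests/setup/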

root@master01:~/monitor# ls
kube-prometheus
root@master01:~/monitor/kube-prometheus# kubectl create -f manifests/setup/
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
namespace/monitoring created
root@master01:~/monitor/kube-prometheus# kubectl apply -f manifests/
alertmanager.monitoring.coreos.com/main created
networkpolicy.networking.k8s.io/alertmanager-main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
networkpolicy.networking.k8s.io/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-grafana-overview created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes-darwin created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
networkpolicy.networking.k8s.io/grafana created
prometheusrule.monitoring.coreos.com/grafana-rules created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
networkpolicy.networking.k8s.io/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
networkpolicy.networking.k8s.io/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
networkpolicy.networking.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
networkpolicy.networking.k8s.io/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
networkpolicy.networking.k8s.io/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created

Verify the deployment: all Pods in the newly created monitoring namespace start up normally, and the related resources have been created.

root@master01:~/monitor/kube-prometheus# kubectl get po -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          2m18s
alertmanager-main-1                    2/2     Running   0          2m18s
alertmanager-main-2                    2/2     Running   0          2m18s
blackbox-exporter-58d99cfb6d-5f44f     3/3     Running   0          2m30s
grafana-5cfbb9b4c5-7qw6c               1/1     Running   0          2m29s
kube-state-metrics-c9f8b947b-xrhbd     3/3     Running   0          2m29s
node-exporter-gzspv                    2/2     Running   0          2m28s
node-exporter-jz588                    2/2     Running   0          2m28s
node-exporter-z8vps                    2/2     Running   0          2m28s
prometheus-adapter-5bf8d6f7c6-5jf6n    1/1     Running   0          2m26s
prometheus-adapter-5bf8d6f7c6-sz4ns    1/1     Running   0          2m27s
prometheus-k8s-0                       2/2     Running   0          2m16s
prometheus-k8s-1                       2/2     Running   0          2m16s
prometheus-operator-6cbd5c84fb-8xslw   2/2     Running   0          2m26s
root@master01:~# kubectl get crd -n monitoring
NAME                                        CREATED AT
alertmanagerconfigs.monitoring.coreos.com   2023-01-04T14:54:38Z
alertmanagers.monitoring.coreos.com         2023-01-04T14:54:38Z
podmonitors.monitoring.coreos.com           2023-01-04T14:54:38Z
probes.monitoring.coreos.com                2023-01-04T14:54:38Z
prometheuses.monitoring.coreos.com          2023-01-04T14:54:38Z
prometheusrules.monitoring.coreos.com       2023-01-04T14:54:38Z
servicemonitors.monitoring.coreos.com       2023-01-04T14:54:38Z
thanosrulers.monitoring.coreos.com          2023-01-04T14:54:38Z
root@master01:~# kubectl get secret -n monitoring
NAME                              TYPE                                  DATA   AGE
alertmanager-main                 Opaque                                1      3d21h
alertmanager-main-generated       Opaque                                1      3d21h
alertmanager-main-tls-assets-0    Opaque                                0      3d21h
alertmanager-main-token-6c5pj     kubernetes.io/service-account-token   3      3d21h
alertmanager-main-web-config      Opaque                                1      3d21h
blackbox-exporter-token-22c6s     kubernetes.io/service-account-token   3      3d21h
default-token-kff56               kubernetes.io/service-account-token   3      3d22h
grafana-config                    Opaque                                1      3d21h
grafana-datasources               Opaque                                1      3d21h
grafana-token-7dvc9               kubernetes.io/service-account-token   3      3d21h
kube-state-metrics-token-nzhtn    kubernetes.io/service-account-token   3      3d21h
node-exporter-token-8zrpf         kubernetes.io/service-account-token   3      3d21h
prometheus-adapter-token-jmdch    kubernetes.io/service-account-token   3      3d21h
prometheus-k8s                    Opaque                                1      3d21h
prometheus-k8s-tls-assets-0       Opaque                                0      3d21h
prometheus-k8s-token-jfwmh        kubernetes.io/service-account-token   3      3d21h
prometheus-k8s-web-config         Opaque                                1      3d21h
prometheus-operator-token-w4xcf   kubernetes.io/service-account-token   3      3d21h
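If you script this check instead of eyeballing the Pod list, a hedged alternative is to let kubectl block until everything in the namespace is ready (the timeout value below is arbitrary):

kubectl wait --for=condition=Ready pods --all -n monitoring --timeout=300s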

3. Exposing the Services

Services can be exposed in two ways: either Ingress or NodePort can be used to make them reachable from outside the cluster. Note that kube-prometheus creates a number of NetworkPolicy rules by default; on a cluster whose network plugin enforces them, such as Calico, the services remain unreachable from clients even after being exposed. You can either modify these policies or delete them outright (the policy deleted in the original screenshot, which therefore no longer appears in the listing below, must be removed as well).

root@master01:~# kubectl get networkpolicies.networking.k8s.io -n monitoring
NAME                  POD-SELECTOR                                                                                                                                             AGE
alertmanager-main     app.kubernetes.io/component=alert-router,app.kubernetes.io/instance=main,app.kubernetes.io/name=alertmanager,app.kubernetes.io/part-of=kube-prometheus   14h
blackbox-exporter     app.kubernetes.io/component=exporter,app.kubernetes.io/name=blackbox-exporter,app.kubernetes.io/part-of=kube-prometheus                                  14h
grafana               app.kubernetes.io/component=grafana,app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=kube-prometheus                                             14h
kube-state-metrics    app.kubernetes.io/component=exporter,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=kube-prometheus                                 14h
node-exporter         app.kubernetes.io/component=exporter,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=kube-prometheus                                      14h
prometheus-adapter    app.kubernetes.io/component=metrics-adapter,app.kubernetes.io/name=prometheus-adapter,app.kubernetes.io/part-of=kube-prometheus                          14h
prometheus-operator   app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=kube-prometheus                              14h
root@master01:~# kubectl delete networkpolicies.networking.k8s.io grafana -n monitoring
networkpolicy.networking.k8s.io "grafana" deleted
root@master01:~# kubectl delete networkpolicies.networking.k8s.io alertmanager-main -n monitoring
networkpolicy.networking.k8s.io "alertmanager-main" deleted
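Deleting the policies is the quickest route. If you prefer to modify them instead, a minimal sketch (an assumption for illustration, not taken from the upstream manifests) is to relax a policy so it accepts ingress traffic from any source, for example for Grafana:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: grafana
  namespace: monitoring
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: grafana
      app.kubernetes.io/name: grafana
      app.kubernetes.io/part-of: kube-prometheus
  policyTypes:
  - Ingress
  ingress:
  - {}   # an empty rule allows traffic from all sources on all ports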

3.1 Exposing services with NodePort (optional)

Modify the Prometheus Service manifest:

root@master01:~/monitor/kube-prometheus# cat manifests/prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.41.0
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30090
  - name: reloader-web
    port: 8080
    targetPort: reloader-web
  selector:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP
root@master01:~/monitor/kube-prometheus# kubectl apply -f manifests/prometheus-service.yaml
service/prometheus-k8s configured
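Editing the manifest and re-applying it, as above, keeps the change reproducible. For a quick throwaway test you could instead patch the live Service directly (a sketch; with this form the nodePort is auto-assigned rather than pinned to 30090):

kubectl -n monitoring patch svc prometheus-k8s -p '{"spec": {"type": "NodePort"}}'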

Modify the Grafana and Alertmanager Service manifests.

root@master01:~/monitor/kube-prometheus/manifests# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 9.3.2
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 30030
  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
root@master01:~/monitor/kube-prometheus/manifests# cat alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.25.0
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 30093
  - name: reloader-web
    port: 8080
    targetPort: reloader-web
  selector:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP

After editing, apply both files:

root@master01:~/monitor/kube-prometheus/manifests# kubectl apply -f grafana-service.yaml
service/grafana configured
root@master01:~/monitor/kube-prometheus/manifests# kubectl apply -f alertmanager-service.yaml
service/alertmanager-main configured

After modifying the Service manifests, verify the result:

root@master01:~# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
alertmanager-main       NodePort    10.100.23.255    <none>        9093:30093/TCP,8080:32647/TCP   14h
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP      14h
blackbox-exporter       ClusterIP   10.100.48.67     <none>        9115/TCP,19115/TCP              14h
grafana                 NodePort    10.100.153.195   <none>        3000:30030/TCP                  14h
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP               14h
node-exporter           ClusterIP   None             <none>        9100/TCP                        14h
prometheus-adapter      ClusterIP   10.100.226.131   <none>        443/TCP                         14h
prometheus-k8s          NodePort    10.100.206.216   <none>        9090:30090/TCP,8080:30019/TCP   14h
prometheus-operated     ClusterIP   None             <none>        9090/TCP                        14h
prometheus-operator     ClusterIP   None             <none>        8443/TCP                        14h

Verify access from a client browser.
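With the nodePort values configured above, the UIs should be reachable at addresses like the following (the node IP is a placeholder; any cluster node's address works):

http://<node-ip>:30090    # Prometheus
http://<node-ip>:30030    # Grafana
http://<node-ip>:30093    # Alertmanager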



As seen above, Prometheus runs with two replicas, and we access it through a Service. Normally the Service would round-robin requests across the two Prometheus instances, but here every request from a given client is routed to the same backend instance, because the Service is created with sessionAffinity: ClientIP, which pins sessions by client IP. So there is no need to worry about consecutive requests landing on different replicas. Running multiple replicas is the usual high-availability approach, and in theory the replicas hold the same data. In practice, however, Prometheus actively pulls its metrics, the replicas cannot scrape at exactly the same moment, and even if they did, network issues could still cause differences, so the data stored in the two replicas will very likely diverge. That is why the upstream manifests configure session affinity: it ensures the data we see stays consistent across requests.
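If you want to confirm the setting on the live object, a quick check (a sketch) is:

kubectl -n monitoring get svc prometheus-k8s -o jsonpath='{.spec.sessionAffinity}'

which should print ClientIP.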

You can now log in to the Grafana UI with admin:admin.
Pick one of the built-in dashboards to test; monitoring data is already visible.

3.2 Exposing services with Ingress (optional)

A quick note on how Ingress works: an ingress controller is created first, comparable to an nginx master process. It listens on a port (a host port is mapped to the ingress controller), so every request reaches the controller first; all of the configuration lives on the controller, and requests are then handed off to the individual ingress workers, which route them on to the backend services.

This walkthrough uses ingress-nginx for testing.

root@master01:~# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.100.184.74    <none>        80:30080/TCP,443:30787/TCP   3d6h
ingress-nginx-controller-admission   ClusterIP   10.100.216.215   <none>        443/TCP                      3d6h
root@master01:~# kubectl get po -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS     AGE
ingress-nginx-controller-69fbfb4bfd-zjs2w   1/1     Running   1 (9h ago)   9h

Write the YAML files that expose the monitoring stack through Ingress. When writing Ingress manifests, pay attention to the cluster version and the corresponding API changes.
The Ingress API has gone through several revisions, and the configuration differs between versions (see the comparison sketch after this list):
● extensions/v1beta1: used before Kubernetes 1.16
● networking.k8s.io/v1beta1: used before 1.19
● networking.k8s.io/v1: used from 1.19 onward
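For illustration (a sketch, not taken from the original manifests), the most visible difference is how the backend is written:

# networking.k8s.io/v1beta1 used a flat backend:
#   backend:
#     serviceName: grafana
#     servicePort: 3000
# networking.k8s.io/v1 nests the Service reference:
#   backend:
#     service:
#       name: grafana
#       port:
#         number: 3000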

Write the YAML files for the services that need to be exposed and create the resources.

root@master01:~/ingress/monitor-ingress# cat promethues-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
spec:
  ingressClassName: nginx # in this API version the ingress controller is selected here, not via an annotation
  rules:
  - host: prometheus.snow.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: prometheus-k8s
              port:
                number: 9090
root@master01:~/ingress/monitor-ingress# cat grafana-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.snow.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
root@master01:~/ingress/monitor-ingress# cat alert-manager-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alert-ingress
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
  - host: alert.snow.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-main
            port:
              number: 9093
root@master01:~/ingress/monitor-ingress# kubectl apply -f promethues-ingress.yaml
ingress.networking.k8s.io/prometheus-ingress created
root@master01:~/ingress/monitor-ingress# kubectl apply -f grafana-ingress.yaml
ingress.networking.k8s.io/grafana-ingress created
root@master01:~/ingress/monitor-ingress# kubectl apply -f alert-manager-ingress.yaml
ingress.networking.k8s.io/alter-ingress created

Verify after the Ingress resources have been created:

root@master01:~/ingress/monitor-ingress# kubectl get ingress -n monitoring
NAME                 CLASS   HOSTS                 ADDRESS         PORTS   AGE
alter-ingress        nginx   alert.snow.com        10.100.184.74   80      3m42s
grafana-ingress      nginx   grafana.snow.com      10.100.184.74   80      14h
prometheus-ingress   nginx   prometheus.snow.com   10.100.184.74   80      12h

Modify the hosts file on the client machine. Note that proxy tools should be turned off while testing; Clash, for example, bypasses local hosts resolution.
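A minimal example of the hosts entry (the node IP is a placeholder; point the names at a node running the ingress-nginx controller):

192.168.1.100  prometheus.snow.com grafana.snow.com alert.snow.com

Since in this setup the controller itself is exposed as a NodePort Service (80:30080, see above), the URLs carry that port, e.g. http://grafana.snow.com:30080/.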

Test access in a browser using the domain names.

PS: Everything above has been tested successfully in my own environment. If you find mistakes or unclear wording, corrections are welcome.