Installing Prometheus + Grafana on Kubernetes (k8s)
Component overview
MetricServer: the aggregator of cluster resource usage in Kubernetes; it collects data for consumers inside the cluster such as kubectl, the HPA, and the scheduler.
PrometheusOperator: a toolkit for system monitoring and alerting; it deploys and manages the Prometheus instances that store the monitoring data.
NodeExporter: exposes key metrics and status data for each node.
KubeStateMetrics: collects state data about the resource objects in the Kubernetes cluster, on which alerting rules are built.
Prometheus: collects data from the apiserver, scheduler, controller-manager, kubelet, and other components using a pull model over HTTP (a quick sketch of this pull model follows the list).
Grafana: a platform for visualizing statistics and monitoring data.
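As a quick illustration of the pull model mentioned above: everything Prometheus scrapes is a plain-text HTTP /metrics endpoint. Once the stack below is deployed you can fetch the same payload by hand; the sketch assumes node-exporter's default port 9100 and uses 192.168.1.81, the example node IP that appears later in this post.

# Fetch the first few metrics exactly as Prometheus would pull them
curl -s http://192.168.1.81:9100/metrics | head -n 20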
Clone the code
root@hello:~# git clone -b release-0.10 https://github.com/prometheus-operator/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 16026, done.
remote: Counting objects: 100% (2639/2639), done.
remote: Compressing objects: 100% (165/165), done.
remote: Total 16026 (delta 2524), reused 2485 (delta 2470), pack-reused 13387
Receiving objects: 100% (16026/16026), 7.81 MiB | 2.67 MiB/s, done.
Resolving deltas: 100% (10333/10333), done.
root@hello:~#
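A hedged aside: each release-* branch of kube-prometheus targets specific Kubernetes versions, so confirm release-0.10 against the compatibility matrix in the project README. The available branches can be listed without cloning:

# List the upstream release branches so you can pick one matching your cluster version
git ls-remote --heads https://github.com/prometheus-operator/kube-prometheus.git 'release-*'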
Enter the directory and change the image registry addresses
If you can reach Google's registries without restriction, no changes are needed and you can skip this step.
root@hello:~# cd kube-prometheus/manifests
root@hello:~/kube-prometheus/manifests# sed -i "s#quay.io/prometheus/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests# sed -i "s#quay.io/brancz/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests# sed -i "s#k8s.gcr.io/prometheus-adapter/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests# sed -i "s#quay.io/prometheus-operator/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests# sed -i "s#k8s.gcr.io/kube-state-metrics/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests#
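An optional sanity check, not part of the original steps: confirm that no manifest still references quay.io or k8s.gcr.io after the rewrites.

# Show the distinct image references the manifests now use
grep -h "image:" *.yaml | sort -u
# Print any lines still pointing at the original registries, or a confirmation if none remain
grep -n "quay.io\|k8s.gcr.io" *.yaml || echo "no unreplaced registries left"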
Change the Services to type NodePort
root@hello:~/kube-prometheus/manifests# sed -i "/ports:/i\  type: NodePort" grafana-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: http/i\    nodePort: 31100" grafana-service.yaml
root@hello:~/kube-prometheus/manifests# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.3.3
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    nodePort: 31100
    targetPort: http
  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
root@hello:~/kube-prometheus/manifests# sed -i "/ports:/i\  type: NodePort" prometheus-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: web/i\    nodePort: 31200" prometheus-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: reloader-web/i\    nodePort: 31300" prometheus-service.yaml
root@hello:~/kube-prometheus/manifests# cat prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.32.1
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    nodePort: 31200
    targetPort: web
  - name: reloader-web
    port: 8080
    nodePort: 31300
    targetPort: reloader-web
  selector:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP
root@hello:~/kube-prometheus/manifests# sed -i "/ports:/i\  type: NodePort" alertmanager-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: web/i\    nodePort: 31400" alertmanager-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: reloader-web/i\    nodePort: 31500" alertmanager-service.yaml
root@hello:~/kube-prometheus/manifests# cat alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.23.0
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    nodePort: 31400
    targetPort: web
  - name: reloader-web
    port: 8080
    nodePort: 31500
    targetPort: reloader-web
  selector:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP
root@hello:~/kube-prometheus/manifests#
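Optionally (my addition, not in the original walkthrough), a client-side dry run will catch indentation mistakes introduced by the sed inserts before anything reaches the cluster; the objects are only parsed and printed, not created.

# Validate the edited Service manifests locally
kubectl create --dry-run=client -f grafana-service.yaml -f prometheus-service.yaml -f alertmanager-service.yaml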
Run the deployment
root@hello:~# kubectl create -f /root/kube-prometheus/manifests/setup
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
namespace/monitoring created
root@hello:~#
root@hello:~# kubectl create -f /root/kube-prometheus/manifests/
alertmanager.monitoring.coreos.com/main created
networkpolicy.networking.k8s.io/alertmanager-main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
networkpolicy.networking.k8s.io/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-grafana-overview created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
networkpolicy.networking.k8s.io/grafana created
prometheusrule.monitoring.coreos.com/grafana-rules created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
networkpolicy.networking.k8s.io/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
networkpolicy.networking.k8s.io/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
networkpolicy.networking.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
networkpolicy.networking.k8s.io/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
networkpolicy.networking.k8s.io/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created
root@hello:~#
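If the second kubectl create is ever run before the CRDs from manifests/setup have finished registering, it can fail with "no matches for kind" errors. A hedged variant that waits for the CRDs to be Established in between looks like this:

kubectl create -f /root/kube-prometheus/manifests/setup
# Block until every CustomResourceDefinition reports the Established condition
kubectl wait --for=condition=Established --timeout=120s --all CustomResourceDefinition
kubectl create -f /root/kube-prometheus/manifests/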
Check and verify
root@hello:~# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          69s
alertmanager-main-1                    2/2     Running   0          69s
alertmanager-main-2                    2/2     Running   0          69s
blackbox-exporter-6c559c5c66-kw6vd     3/3     Running   0          83s
grafana-7fd69887fb-jmpmp               1/1     Running   0          81s
kube-state-metrics-867b64476b-h84g4    3/3     Running   0          81s
node-exporter-576bm                    2/2     Running   0          80s
node-exporter-94gn9                    2/2     Running   0          80s
node-exporter-cbjqk                    2/2     Running   0          80s
node-exporter-mhlh7                    2/2     Running   0          80s
node-exporter-pdc6k                    2/2     Running   0          80s
node-exporter-pqqds                    2/2     Running   0          80s
node-exporter-s9cz4                    2/2     Running   0          80s
node-exporter-tdlnt                    2/2     Running   0          81s
prometheus-adapter-8f88b5b45-rrsh4     1/1     Running   0          78s
prometheus-adapter-8f88b5b45-wh6pf     1/1     Running   0          78s
prometheus-k8s-0                       2/2     Running   0          68s
prometheus-k8s-1                       2/2     Running   0          68s
prometheus-operator-7f9d9c77f8-h5gkt   2/2     Running   0          78s
root@hello:~#
root@hello:~# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
alertmanager-main       NodePort    10.103.47.160    <none>        9093:31400/TCP,8080:31500/TCP   92s
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP      78s
blackbox-exporter       ClusterIP   10.102.108.160   <none>        9115/TCP,19115/TCP              92s
grafana                 NodePort    10.106.2.21      <none>        3000:31100/TCP                  90s
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP               90s
node-exporter           ClusterIP   None             <none>        9100/TCP                        90s
prometheus-adapter      ClusterIP   10.108.65.108    <none>        443/TCP                         87s
prometheus-k8s          NodePort    10.100.227.174   <none>        9090:31200/TCP,8080:31300/TCP   88s
prometheus-operated     ClusterIP   None             <none>        9090/TCP                        77s
prometheus-operator     ClusterIP   None             <none>        8443/TCP                        87s
root@hello:~#

http://192.168.1.81:31400/
http://192.168.1.81:31200/
http://192.168.1.81:31100/
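The three URLs above are Alertmanager (31400), Prometheus (31200), and Grafana (31100) exposed on the node 192.168.1.81. A quick scripted check of the same endpoints is sketched below; the health paths are the components' standard ones, and you should substitute one of your own node IPs.

# Each component exposes a simple health endpoint behind its NodePort
curl -s http://192.168.1.81:31100/api/health     # Grafana
curl -s http://192.168.1.81:31200/-/healthy      # Prometheus
curl -s http://192.168.1.81:31400/-/healthy      # Alertmanager

Grafana uses the stack's default login (admin/admin) unless you have overridden it.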
Run it all as one command
cd /root ;
git clone -b release-0.10 https://github.com/prometheus-operator/kube-prometheus.git ;
cd kube-prometheus/manifests ;
sed -i "s#quay.io/prometheus/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;
sed -i "s#quay.io/brancz/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;
sed -i "s#k8s.gcr.io/prometheus-adapter/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;
sed -i "s#quay.io/prometheus-operator/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;
sed -i "s#k8s.gcr.io/kube-state-metrics/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ;
sed -i "/ports:/i\  type: NodePort" grafana-service.yaml ;
sed -i "/targetPort: http/i\    nodePort: 31100" grafana-service.yaml ;
sed -i "/ports:/i\  type: NodePort" prometheus-service.yaml ;
sed -i "/targetPort: web/i\    nodePort: 31200" prometheus-service.yaml ;
sed -i "/targetPort: reloader-web/i\    nodePort: 31300" prometheus-service.yaml ;
sed -i "/ports:/i\  type: NodePort" alertmanager-service.yaml ;
sed -i "/targetPort: web/i\    nodePort: 31400" alertmanager-service.yaml ;
sed -i "/targetPort: reloader-web/i\    nodePort: 31500" alertmanager-service.yaml ;
kubectl create -f /root/kube-prometheus/manifests/setup ;
kubectl create -f /root/kube-prometheus/manifests/ ;
sleep 30 ;
kubectl get pod -n monitoring ;
kubectl get svc -n monitoring ;
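To undo everything the one-liner created, the teardown documented in the kube-prometheus repository removes the manifests and then the setup resources:

kubectl delete --ignore-not-found=true -f /root/kube-prometheus/manifests/ -f /root/kube-prometheus/manifests/setup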
https://www.oiox.cn/
https://www.chenby.cn/
https://cby-chen.github.io/
https://blog.csdn.net/qq_33921750
https://my.oschina.net/u/3981543
https://www.zhihu.com/people/...
https://segmentfault.com/u/hp...
https://juejin.cn/user/331578...
https://cloud.tencent.com/dev...
https://www.jianshu.com/u/0f8...
https://www.toutiao.com/c/use...
CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Tencent Cloud, Toutiao, personal blog: search "小陈运维" anywhere on the web.
Articles are published mainly on the WeChat official account: 《Linux运维交流社区》.