The main purpose of this article is to show how to configure Prometheus so that it scrapes metrics from a specified Web API endpoint. The running example is an NGINX scrape configuration: collecting data from an NGINX metrics page protected by a username and password. A fitting subtitle would be "Prometheus scrape configuration for NGINX" or "Prometheus scraping NGINX behind basic auth".
The screenshot above shows the result in Grafana, with a dashboard template applied, after the configuration is complete.
Anyone who has used Prometheus will know how to configure an address:port style target. For example, to collect metrics from a Redis instance, the configuration can be written like this:
- job_name: 'redis'
  static_configs:
    - targets: ['11.22.33.58:6087']
Note: the example above assumes the Redis Exporter is listening at 11.22.33.58:6087.
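For context, job entries like the one above sit under the scrape_configs section of prometheus.yml. Below is a minimal sketch of a complete file under that assumption; the global intervals are illustrative values, not ones taken from this article.

global:
  scrape_interval: 15s   # how often to scrape targets by default (illustrative)
  scrape_timeout: 10s    # per-scrape timeout (illustrative)

scrape_configs:
  # every "- job_name" snippet in this article is one item in this list
  - job_name: 'redis'
    static_configs:
      - targets: ['11.22.33.58:6087']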
This is the simplest and best-known way to do it. But if you want to monitor a specific Web API, the configuration cannot be written like that. If you had not come across this article, you would probably end up typing things like the following into a search engine:
- Prometheus monitoring Web
- Prometheus scrape Web
- Prometheus monitoring endpoint
- Prometheus monitoring a specific API
- Prometheus API configuration
- Prometheus domain name configuration
- Prometheus basic auth
- Prometheus endpoint username password
Unfortunately (as of March 2021), those searches turn up very little useful information; most of what you do find just leads you astray.
Assumptions
Suppose we need to collect Prometheus metrics from the endpoint at https://www.weishidong.com/status/format/prometheus, and that the endpoint is protected by basic auth (username weishidong, password 0099887kk).
Configuration in practice
Following the Prometheus configurations we have seen so far, it is tempting to write the configuration like this:
- job_name: 'web'
  static_configs:
    - targets: ['http://www.weishidong.com/status/format/prometheus']
  basic_auth:
    username: weishidong
    password: 0099887kk
Save the configuration file and restart the service, and you will find that no data is collected at all. Frustrating.
The official configuration guide
That attempt went badly. When we hit a problem we do not understand, the natural move is to consult the official documentation -> Prometheus Configuration. Reading it from top to bottom is recommended, but if you are in a hurry you can jump straight to the scrape_config section. The official example is shown below (it is long, so only the parts relevant to this article are kept; do go read the original):
# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# honor_labels controls how Prometheus handles conflicts between labels that are
# already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target
# labels, and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping label
# values from the scraped data and ignoring the conflicting server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels.
#
# Setting honor_labels to "true" is useful for use cases such as federation and
# scraping the Pushgateway, where all labels specified in the target should be
# preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied only
# when a time series does not have a given label yet and are ignored otherwise.
[ honor_labels: <boolean> | default = false ]

# honor_timestamps controls whether Prometheus respects the timestamps present
# in scraped data.
#
# If honor_timestamps is set to "true", the timestamps of the metrics exposed
# by the target will be used.
#
# If honor_timestamps is set to "false", the timestamps of the metrics exposed
# by the target will be ignored.
[ honor_timestamps: <boolean> | default = true ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]

# Sets the `Authorization` header on every scrape request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every scrape request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <secret> ]

# Sets the `Authorization` header on every scrape request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: <filename> ]
If you read it carefully, a few key pieces of information stand out: metrics_path and basic_auth. metrics_path specifies the HTTP path from which metrics are scraped, and defaults to /metrics. basic_auth carries the credentials for authentication, and the password can be supplied via a password file rather than written inline in plain text (generally, a password file is somewhat safer than a plaintext password).
A working configuration
With the official documentation as a guide, we can quickly derive the correct way to write the configuration:
- job_name: 'web'
  metrics_path: /status/format/prometheus
  static_configs:
    - targets: ['www.weishidong.com']
  basic_auth:
    username: weishidong
    password: 0099887kk
Note that there is no need to include the http:// prefix here, because Prometheus's default scheme is http. If the address uses https, then, following the documentation, we add the scheme field, and the configuration becomes:
- job_name: 'web'
  metrics_path: /status/format/prometheus
  static_configs:
    - targets: ['www.weishidong.com']
  scheme: https
  basic_auth:
    username: weishidong
    password: 0099887kk
With this in place, Prometheus should scrape the data without trouble, and with Grafana on top you get the dashboard shown at the beginning of this article.
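If you would rather not keep the password in prometheus.yml as plain text, the password_file field listed in the official snippet above can be used instead. Here is a minimal sketch; the path /etc/prometheus/secrets/web_password is an illustrative placeholder, not something from this article.

- job_name: 'web'
  metrics_path: /status/format/prometheus
  static_configs:
    - targets: ['www.weishidong.com']
  scheme: https
  basic_auth:
    username: weishidong
    # the file should contain only the password and be readable by the Prometheus process
    password_file: /etc/prometheus/secrets/web_password

Remember that password and password_file are mutually exclusive, so use only one of them in a given job.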