About filebeat: An introduction to filebeat

This post mainly covers installing, deploying, and using filebeat.

Concepts: If you have used logstash, filebeat will feel familiar very quickly. It really does just one thing: read event sources from an input, parse and process them, and ship them through an output to a target store.

Download: First, download the installation package. This is straightforward, so I won't go into detail; I chose the rpm package.

Installation: I am using Linux. Put the downloaded package into a directory, cd into it, and run `rpm -ivh filebeat-6.6.0-x86_64.rpm`. Once the command finishes, the installation is complete and you can see the relevant installation paths.

Configuration: After an rpm install, the default configuration directory is /etc/filebeat. filebeat.yml is a sample file that you can edit to set the options you want; filebeat.reference.yml contains every available option, and you can copy what you need from it. A few options we use often:

- filebeat.inputs configures the sources to collect from. The form we use most for collecting logs is a log input, as in the full example at the end of this post: type is the collection type, enabled toggles the input on or off, and paths lists the matching log files. This node supports multiple input configurations at once.
- filebeat.modules is probably less familiar to most readers. Think of it as log-parsing modules for middleware: many middleware products emit logs in a fixed format, and enabling the corresponding module parses them for you automatically. Run `./filebeat modules list` to see all supported modules (a short sketch follows at the end of this post).
- output: "output" here is not itself a configuration node. filebeat supports several outputs, and what you actually configure is a concrete one, such as output.elasticsearch or output.file.

More options are covered in the official configuration reference.

Startup: Run `systemctl start filebeat.service` to start it, then `ps -ef | grep filebeat` to check that it started successfully. If it did not, cd to /usr/bin and run `./filebeat -c /etc/filebeat/filebeat.yml -e`, which prints the concrete error message. For example, I was running filebeat 8.1.2 against ES 7.9, and starting this way reported that the ES version was too old and that output.elasticsearch.allow_older_versions had to be set to true. Starting via `systemctl start filebeat.service` gave no hint at all, and I could not find the error in /var/log/filebeat/ or /var/lib/filebeat/registry/filebeat/ either, which is a real pitfall. Restart with `systemctl restart filebeat.service`.

Finally, check ES at http://ip:9200/_cat/indices?v: the index is there. Of course, we can customize the index name.

Integrating with Elasticsearch: How do we use filebeat to sync log file data into ES? Let's work through an example. I have two local log files, /usr/local/log/ucs.20220412.0.log and /usr/local/log/cps-provider.20220412.0.log, and filebeat.yml is configured as follows:

```yaml
# inputs
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/log/ucs.20220412.0.log
  tags: ["ucs"]
- type: log
  enabled: true
  paths:
    - /usr/local/log/cps-provider.20220412.0.log
  tags: ["cps"]

setup.template.name: "my-log"
setup.template.pattern: "my-log-*"

# output
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.10.118:9200"]
  indices:
    - index: "ucs-%{+yyyy.MM.dd}"
      when.contains:
        tags: "ucs"
    - index: "cps-%{+yyyy.MM.dd}"
      when.contains:
        tags: "cps"
  # If the ES version is older, enable the option below
  #allow_older_versions: true
```

With the configuration in place, start filebeat and visit http://ip:port/_cat/indices?v again ...
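As a sketch of the modules feature mentioned above (the module name, nginx, and the log paths here are assumptions for illustration; the available modules depend on your filebeat version):

```sh
# List the modules this filebeat build supports, then enable one
./filebeat modules list
./filebeat modules enable nginx
```

Enabling a module activates its config file under modules.d/, which can then be adjusted, roughly:

```yaml
# modules.d/nginx.yml -- a minimal sketch; var.paths is only needed when
# the logs are not in the distribution's default location
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]
```

After that, filebeat parses these logs into structured fields on its own, which is exactly the "fixed log format" case the modules mechanism targets.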

April 19, 2022 · 1 min · jiezi

About filebeat: A log collection example

Log collection workflow. Note: after the ES cluster restarts, remember to run the following in Kibana:

```
PUT /_cluster/settings
{
  "transient": {
    "cluster": {
      "max_shards_per_node": 10000
    }
  }
}
```

Tomcat log collection: filebeat conf

```
[root@tomcat-prod_20 ~]# cd /data/work/filebeat-5.5.2/
[root@tomcat-prod_20 filebeat-5.5.2]# cat filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/WEBLOG/prod-ecommerce-app/catalina.out
  document_type: tykh_insurance_ecommerce-app_78pro
  multiline:
    pattern: '^\d{4}(\-|\/|\.)\d{1,2}(\-|\/|\.)\d{1,2}'
    negate: true
    match: after
    max_lines: 100
    timeout: 3s
  fields:
    logtype: tykh_insurance_ecommerce-app_78pro
  tail_files: false

output.kafka:
  enabled: true
  hosts: ["10.100.20.1xx:9092","10.100.20.1x1:9092","10.100.20.1x2:9092"]
  topic: tykh-140
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
```

logstash:

```
[root@localhost conf.d]# cat insurace-140.conf
input {
  kafka {
    bootstrap_servers => ["10.100.20.1xx:9092,10.100.20.1x1:9092,10.100.20.1x2:9092"]
    topics => ["tykh-140"]
    codec => "json"
    consumer_threads => 1
    #auto_offset_reset => "earliest"
    auto_offset_reset => "latest"
    group_id => "tykh-140"
    decorate_events => true
    max_partition_fetch_bytes => "52428700"
    max_poll_records => "200"
    session_timeout_ms => "50000"
    request_timeout_ms => "510000"
    heartbeat_interval_ms => "1000"
  }
}
filter {
  grok {
    patterns_dir => [ "/etc/logstash/patterns.d" ]
    match => [
      "message", "%{TIMESTAMP_ISO8601:log_time}\s+\[%{THREADID:threadId}\]\s+\[%{THREADNAME:traceid}\]\s+%{LOGLEVEL:level}\s+%{JAVACLASS:javaclass}\s+\-\s+%{JAVAMESSAGE:javameassage}",
      "message", "%{TIMESTAMP_ISO8601:log_time}\s+\[%{THREADID_1:threadId}\]\s+%{LOGLEVEL:level}\s+%{JAVACLASS:javaclass}\s+\-\s+%{JAVAMESSAGE:javameassage}",
      "message", "%{TIMESTAMP_ISO8601:log_time}\s+%{TID:TID}\s+\[%{THREADID_1:threadId}\]\s+%{LOGLEVEL:level}\s+%{JAVACLASS:javaclass}\s+\-\s+%{JAVAMESSAGE:javameassage}"
    ]
    remove_field => [ "message","beat","timestamp","topic","hostname","name","index","host","tags" ]
  }
  ruby {
    code => "event.timestamp.time.localtime"
  }
  date {
    match => [ "log_time", "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}
output {
  if [fields][logtype] == "tykh_insurance_ecommerce-app_78pro" {
    elasticsearch {
      hosts => ["10.100.20.1xx:9200","10.100.20.1xx:9200","10.100.20.1x8:9200"]
      index => "tykh_insurance_ecommerce-app_78pro%{+YYYY-MM-dd}"
      user => elasxxx
      password => "elasticsearcxxx"
    }
    stdout { codec => rubydebug }
  }
}
```

k8s logs (on the Jenkins host):

```
[root@insurace-24 ~]# cat /root/docker/scripts/install_logstash.sh
#!/bin/bash
confpath=~/docker/scripts/conf
repo=harborxx.reg/pre_jinfu
app=$1
topics_pattern=$2
profile=$3
project=$4
master_host=10.100.24.xx
yaml_host=http://10.100.24.1x2:8889

cd $confpath
mkdir -p $app/$profile

echo "---logstash-configmap.yaml---"
cat logstash-configmap-template.yaml | sed "s|#topics_pattern#|$topics_pattern|g" | sed "s|#project#|$project|g" | sed "s|#profile#|$profile|g"
cat logstash-configmap-template.yaml | sed "s|#topics_pattern#|$topics_pattern|g" | sed "s|#project#|$project|g" | sed "s|#profile#|$profile|g" > $app/$profile/logstash-configmap.yaml

echo "---logstash.yaml---"
cat logstash-template.yaml | sed "s|#topics_pattern#|$topics_pattern|g" | sed "s|#project#|$project|g" | sed "s|#profile#|$profile|g"
cat logstash-template.yaml | sed "s|#topics_pattern#|$topics_pattern|g" | sed "s|#project#|$project|g" | sed "s|#profile#|$profile|g" > $app/$profile/logstash.yaml

ssh $master_host "kubectl apply -f $yaml_host/$app/$profile/logstash-configmap.yaml && kubectl apply -f $yaml_host/$app/$profile/logstash.yaml"
```

logstash-template.yaml:

```
[root@insurace-24 conf]# cat logstash-template.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-#topics_pattern#-#profile#
  namespace: default
spec:
  selector:
    matchLabels:
      app: logstash-#topics_pattern#-#profile#
  template:
    metadata:
      labels:
        app: logstash-#topics_pattern#-#profile#
    spec:
      containers:
      - name: logstash-#topics_pattern#-#profile#
        image: harborxx.reg/library/logstash:7.6.2.1
        imagePullPolicy: IfNotPresent
        command:
        - logstash
        - '-f'
        - '/etc/logstash_c/logstash-#project#-#topics_pattern#-#profile#.conf'
        volumeMounts:
        - name: config-volume
          mountPath: /etc/logstash_c/
        resources:
          limits:
            cpu: 1000m
            memory: 1348Mi
      volumes:
      - name: config-volume
        configMap:
          name: logstash-#project#-#topics_pattern#-#profile#
          items:
          - key: logstash-#project#-#topics_pattern#-#profile#.conf
            path: logstash-#project#-#topics_pattern#-#profile#.conf
```

Running the script prints the rendered manifests and applies them:

```
/root/docker/scripts/install_logstash.sh prodpipeline-assessment-back e-assessment-back profile-a insurance
---logstash-configmap.yaml---
kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-insurance-e-assessment-back-profile-a
  namespace: default
data:
  logstash-insurance-e-assessment-back-profile-a.conf: |
    input {
      kafka {
        bootstrap_servers => ["10.100.24.xx:9092"]
        topics_pattern => "e-assessment-back.*"
        codec => "json"
        consumer_threads => 5
        auto_offset_reset => "latest"
        group_id => "e-assessment-back"
        client_id => "e-assessment-back"
        decorate_events => true
        #auto_commit_interval_ms => 5000
      }
    }
    filter {
      json { source => "message" }
      date { match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ] }
      mutate { remove_field => "timestamp" }
      if "_geoip_lookup_failure" in [tags] { drop { } }
    }
    output {
      elasticsearch {
        hosts => ["10.100.24.xx:9200"]
        index => "logstash-insurance-e-assessment-back-%{+YYYY-MM-dd}"
        user => elastic
        password => "Elasticsearch_Insuance24*#"
      }
      stdout { codec => rubydebug }
    }
---logstash.yaml---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-e-assessment-back-profile-a
  namespace: default
spec:
  selector:
    matchLabels:
      app: logstash-e-assessment-back-profile-a
  template:
    metadata:
      labels:
        app: logstash-e-assessment-back-profile-a
    spec:
      containers:
      - name: logstash-e-assessment-back-profile-a
        image: harborxx.reg/library/logstash:7.6.2.1
        imagePullPolicy: IfNotPresent
        command:
        - logstash
        - '-f'
        - '/etc/logstash_c/logstash-insurance-e-assessment-back-profile-a.conf'
        volumeMounts:
        - name: config-volume
          mountPath: /etc/logstash_c/
        resources:
          limits:
            cpu: 1000m
            memory: 1348Mi
      volumes:
      - name: config-volume
        configMap:
          name: logstash-insurance-e-assessment-back-profile-a
          items:
          - key: logstash-insurance-e-assessment-back-profile-a.conf
            path: logstash-insurance-e-assessment-back-profile-a.conf
configmap/logstash-insurance-e-assessment-back-profile-a created
deployment.apps/logstash-e-assessment-back-profile-a created
```
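The grok filter above relies on custom patterns (THREADID, THREADID_1, THREADNAME, TID, JAVAMESSAGE) loaded from /etc/logstash/patterns.d, but the pattern file itself is not shown in the post. As a hypothetical sketch of what it could contain (grok pattern files map an UPPERCASE name to a regular expression, one per line; the real definitions in this setup may differ):

```
# /etc/logstash/patterns.d/java -- hypothetical definitions for illustration
THREADID    [A-Za-z0-9._-]+
THREADID_1  [A-Za-z0-9._-]+
THREADNAME  [A-Za-z0-9,._-]+
TID         TID:[A-Za-z0-9._-]*
JAVAMESSAGE (.|\r|\n)*
```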

July 12, 2021 · 3 min · jiezi

About filebeat: A brief introduction to FileBeat, collecting local log data into Logstash, then into ES, and viewing the logs with Kibana

I. The FileBeat plugin

https://www.cnblogs.com/zsql/...

First, filebeat is a member of the Beats family. Filebeat is a lightweight shipper for forwarding and centralizing log data. It monitors the log files or locations you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.

The relationship between logstash and filebeat: because logstash runs on the JVM and is resource-hungry, the author later wrote logstash-forwarder in Go, a lightweight shipper with fewer features but a much smaller footprint. That project eventually became filebeat.

II. How filebeat works

1. Composition: filebeat consists of two components, inputs and harvesters. A harvester reads the content of a single file, line by line, and sends it to the output. An input manages the harvesters and finds all the sources to read. If the input type is log, the input looks for every file on the drive that matches the defined paths and starts a harvester for each one. Each input runs in its own Go routine; filebeat currently supports multiple input types, and each type can be defined multiple times. The log input checks each file to decide whether a harvester needs to be started, is already running, or whether the file can be ignored.

2. How filebeat persists file state: filebeat keeps the state of each file and frequently flushes it to the registry file on disk. The state records the last offset a harvester read and ensures that all log lines are sent.

3. How filebeat guarantees at-least-once delivery: filebeat guarantees that events are delivered to the configured output at least once, without data loss, because it stores the delivery state of each event in the registry file. When the defined output is blocked and has not acknowledged all events, filebeat keeps trying to send them until the output confirms receipt. If filebeat shuts down while events are in flight, it does not wait for the output to acknowledge them all before exiting; when it restarts, it resends every event that was unacknowledged at shutdown. This ensures each event is sent at least once, although duplicates may end up at the output. The shutdown_timeout option configures filebeat to wait a specific amount of time before shutting down.

III. Installation

https://www.elastic.co/cn/downloads/past-releases/filebeat-7-6-2

Install the Windows version.

IV. Step-by-step procedure

1. Start logstash. Create stdin.conf in the logstash directory:

```
input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => ["<es-ip>:9200"]
    index => "es_index20210311"
  }
  stdout { codec => json_lines }
}
```

Start logstash first: `./logstash -f stdin.conf --config.reload.automatic`

2. Run filebeat. Edit filebeat.yml:

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - c:\Users\34683\AppData\Local\JetBrains\IntelliJIdea2020.3\log\idea.log

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
```

Then start filebeat: `./filebeat -e -c filebeat.yml -d "publish"`

References:
https://www.cnblogs.com/peter...
https://www.cnblogs.com/peter...

Note: do not be alarmed if the following message appears:

```
Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":359},"total":{"ticks":...
```

It only means the input file has not changed; it is an informational message. Your earlier run already processed the data and recorded its state in the data directory; you can delete the data directory and then the ES index to start over. Concretely, it means the data you wanted has already been imported into ES, and this line keeps printing as long as the log is not updated. (Heh, that is just my own understanding from getting it running; I have not dug into it deeply.) ...
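To confirm the pipeline works end to end, one option is to query ES directly. A minimal sketch, assuming ES is reachable on localhost:9200 and using the index name from stdin.conf above:

```sh
# List indices and confirm es_index20210311 exists with documents
curl 'http://localhost:9200/_cat/indices?v'
# Fetch one document to see a shipped log line
curl 'http://localhost:9200/es_index20210311/_search?size=1&pretty'
```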

March 30, 2021 · 6 min · jiezi

About filebeat: Setting up the filebeat environment on Windows for ES

I. Download filebeat

Download address: the official filebeat download page (WINDOWS 64-BIT, plus its sha checksum).

II. Install filebeat

1. Put the archive under the C: drive and extract it.

2. Edit the configuration file filebeat.yml:

```yaml
- type: log
  enabled: true
  encoding: utf-8
  paths:
    - c:\programdata\elasticsearch\logs\*
  fields:
    logtype: test
    group: test
    server: test10
  fields_under_root: true
  exclude_lines: ['poll','running','Content-Length']
```

For the details of the configuration file, see the official filebeat configuration documentation.

3. Start the filebeat service in the background via PowerShell:
- Start PowerShell as administrator and switch to the filebeat directory.
- Run install-service-filebeat.ps1 to install filebeat as a service.
- Start it with: `Start-Service filebeat`

4. Common errors. If you run into an error at this point (a script execution policy problem), fix it as follows:
- Run PowerShell as administrator.
- Run `Get-ExecutionPolicy` to check the current PowerShell script execution policy.
- Run `Set-ExecutionPolicy RemoteSigned`, then install.
- After the installation you can change it back with `Set-ExecutionPolicy Restricted`.

5. You can also test and run filebeat with the following commands (see the consolidated sketch at the end of this post):
- Test the config: `.\filebeat.exe -c .\filebeat.yml --configtest`
- Run: `.\filebeat.exe -e -c .\filebeat.yml`

Reference appendix:
- Adding filebeat on Windows to collect logs
- PowerShell: "cannot load the file because running scripts is disabled on this system"
- Using filebeat modules to collect logs on Windows
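A consolidated sketch of steps 3 to 5 above, assuming filebeat was extracted to C:\filebeat and the commands are run from an elevated PowerShell:

```powershell
# Allow the bundled install script to run (execution policy)
Set-ExecutionPolicy RemoteSigned

# From the filebeat folder, register filebeat as a Windows service
cd C:\filebeat
.\install-service-filebeat.ps1

# Validate the config, then start the service and check its status
.\filebeat.exe -c .\filebeat.yml --configtest
Start-Service filebeat
Get-Service filebeat

# Optionally restore the stricter policy afterwards
Set-ExecutionPolicy Restricted
```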

March 16, 2021 · 1 min · jiezi