I. Kafka 2.2.2

1. server.properties configuration file

broker.id=128
listeners=PLAINTEXT://cwbg001:9092
num.network.threads=3
num.io.threads=4
socket.send.buffer.bytes=1024000
socket.receive.buffer.bytes=1024000
socket.request.max.bytes=104857600
log.dirs=/home/kafka/kafka_2.12-2.2.2/data
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.32.128:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=3000
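With this server.properties in place the broker can be started and given a quick smoke test. The commands below are only a minimal sketch: they assume a single-node install under /home/kafka/kafka_2.12-2.2.2 (the directory log.dirs points into) and a ZooKeeper instance already listening on 192.168.32.128:2181; adjust the paths to your own layout.

# Start the broker in the background with the configuration above
cd /home/kafka/kafka_2.12-2.2.2
bin/kafka-server-start.sh -daemon config/server.properties

# Confirm the broker answers on its listener by listing topics
bin/kafka-topics.sh --list --bootstrap-server 192.168.32.128:9092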

II. Filebeat 7.9.3

1. filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/elastic/elasticsearch-7.9.3/logs/elasticsearch_server.json
  fields:
    log_topic: 'prod-app-service-name-app-prod'
  exclude_files: [".tar$",".tgz$",".gz$",".bz2$",".zip$"]

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# output kafka
output.kafka:
  hosts: ["192.168.32.128:9092"]
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: true
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
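Before wiring Filebeat into a service it is worth checking that this file parses and that the Kafka output is reachable. The following is a minimal sketch, run from the directory where Filebeat 7.9.3 was unpacked (path assumed); the last command is run from the Kafka installation directory to confirm that events actually land on the topic.

# Check that filebeat.yml is syntactically valid
./filebeat test config -c filebeat.yml

# Check that the broker configured under output.kafka is reachable
./filebeat test output -c filebeat.yml

# First run in the foreground with logs on stderr; switch to a service once it works
./filebeat -e -c filebeat.yml

# From the Kafka directory, verify that events arrive on the topic
bin/kafka-console-consumer.sh --bootstrap-server 192.168.32.128:9092 \
  --topic prod-app-service-name-app-prod --from-beginning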

III. Logstash 7.9.1

1. logstash.conf