It has been a while since my last post, so here is a quick hands-on article sharing a few lessons learned; more in-depth articles will follow.

Spring Boot + Logstash + Elasticsearch + Kibana

Versions

  • Elasticsearch 7.4.2
  • Logstash 7.4.2
  • Spring Boot 2.1.10

Download

Pick the product and version you need on the past-releases page and download it:

https://www.elastic.co/cn/downloads/past-releases

Deployment

Start Elasticsearch

  • Edit the configuration file config/elasticsearch.yml

    cluster.name: my-application
    node.name: node-1
    path.data: /cxt/software/maces/7.4.2/elasticsearch-7.4.2/data
    path.logs: /cxt/software/maces/7.4.2/elasticsearch-7.4.2/logs
    network.host: 0.0.0.0
    http.port: 9200
    discovery.seed_hosts: ["127.0.0.1"]
    cluster.initial_master_nodes: ["node-1"]
  • Start (a quick health-check sketch follows below)

    bin/elasticsearch
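  • Once it is up, you can confirm the node responds on port 9200. Below is a minimal Java sketch of such a check; the class name EsHealthCheck is just for illustration, and a plain browser request to http://127.0.0.1:9200 shows the same JSON banner.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Prints the JSON banner (cluster name, version, ...) returned by the root endpoint
    public class EsHealthCheck {
        public static void main(String[] args) throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://127.0.0.1:9200/").openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }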

Start Kibana

  • Just start it from the bin directory; since everything runs on the local machine, no configuration changes are needed

    bin/kibana

Start Logstash

  • Create springboot-log.conf under the config folder. This pipeline listens on port 9600 on this machine, so the Spring Boot application can send logs straight to port 9600. input defines the log input; output ships the logs to Elasticsearch. (A quick connectivity check is sketched at the end of this step.)

    input {
        # listen on port 9600 (the stdout section below, commented out, would also echo events to the console)
        tcp {
            mode => "server"
            host => "0.0.0.0"
            port => 9600
            codec => json_lines
        }
    }
    output {
        elasticsearch {
            hosts => ["192.168.123.166:9200"]
            index => "springboot-logstash-%{+YYYY.MM.dd}"
        }
    #    stdout {
    #        codec => rubydebug
    #    }
    }
  • Start

    bin/logstash -f config/springboot-log.conf
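  • Before wiring up Spring Boot, you can smoke-test the tcp input by hand. A minimal sketch, assuming the Logstash host above; the class name and JSON field values are just for illustration:

    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Sends a single newline-terminated JSON document to the Logstash tcp input;
    // the json_lines codec expects exactly one JSON object per line.
    public class LogstashSmokeTest {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("192.168.123.166", 9600);
                 Writer out = new OutputStreamWriter(
                         socket.getOutputStream(), StandardCharsets.UTF_8)) {
                out.write("{\"logLevel\":\"INFO\",\"detail\":\"tcp smoke test\"}\n");
                out.flush();
            }
        }
    }

    If everything is wired up, the event shows up in the springboot-logstash-* index a moment later.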

Start the Spring Boot application

  • Add the logstash-logback-encoder dependency to pom.xml

    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>7.0</version>
    </dependency>
  • A TestController with a test method

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class TestController {

        public static final Logger log = LoggerFactory.getLogger(TestController.class);

        @RequestMapping("/test")
        public String test() {
            log.info("this is a log from springboot");
            log.trace("this is a trace log");
            return "success";
        }
    }
  • In the startup class's main method, add code that automatically generates log messages

    import java.util.Random;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class ElkApplication {

        public static final Logger log = LoggerFactory.getLogger(ElkApplication.class);

        Random random = new Random(10000);

        public static void main(String[] args) {
            SpringApplication.run(ElkApplication.class, args);
            new ElkApplication().initTask();
        }

        // Emit a random INFO message every 100 ms so there is a steady stream of log events
        private void initTask() {
            Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    log.info("seed info msg :" + random.nextInt(999999));
                }
            }, 100, 100, TimeUnit.MILLISECONDS);
        }
    }
  • Create logback-spring.xml under resources (a sample of the resulting JSON event follows the config)

    <?xml version="1.0" encoding="UTF-8"?><configuration>    <include resource="org/springframework/boot/logging/logback/base.xml" />    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">        <!--配置logStash 服务地址-->        <destination>192.168.123.166:9600</destination>        <!-- 日志输入编码 -->        <encoder charset="UTF-8"                 class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">            <providers>                <timestamp>                    <timeZone>UTC</timeZone>                </timestamp>                <pattern>                    <pattern>                        {                        "logLevel": "%level",                        "serviceName": "${springAppName:-}",                        "pid": "${PID:-}",                        "thread": "%thread",                        "class": "%logger{40}",                        "detail": "%message"                        }                    </pattern>                </pattern>            </providers>        </encoder>    </appender>    <root level="INFO">        <appender-ref ref="LOGSTASH" />        <appender-ref ref="CONSOLE" />    </root></configuration>

Verification

  • After starting everything in order, open the es-head plugin to view the index information; you should see the record from calling the test endpoint as well as the application startup messages

  • Displaying the data in Kibana

    • Create an index pattern

      After entering the pattern, set the timestamp field to match against

    • To browse the data, choose Discover; an example query follows
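    • Once the index pattern matches, Discover can be filtered on the fields the logback encoder defined. A KQL query along these lines (illustrative) narrows the view to the scheduled-task messages:

      logLevel : "INFO" and detail : seed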

Adding Filebeat

  • Create filebeat-logstash-log.conf for Logstash. Note that it reuses port 9600 for the beats input, so stop the earlier springboot-log.conf pipeline before starting this one (or pick another port such as 5044, the conventional Beats default, if you want to run both)

    input {
        beats {
            host => "192.168.123.166"
            port => 9600
        }
    }
    output {
        elasticsearch {
            hosts => ["192.168.123.166:9200"]
            index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        }
    }
  • Start

    bin/logstash -f filebeat-logstash-log.conf
  • Edit the Filebeat configuration file (filebeat.yml); only the sections shown below change: the log file paths to watch and the address of the Logstash server to ship to. Note that the path below names a single day's rolled file; a glob such as info.*.log would keep working after rollover

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /cxt/codework/java/springboot-demo/logs/springboot-elk/2022-06-04/info.2022-06-04.0.log

    setup.kibana:
      host: "192.168.123.166:5601"

    #----------------------------- Logstash output --------------------------------
    output.logstash:
      # The Logstash hosts
      hosts: ["192.168.123.166:9600"]
  • Configure where the Spring Boot application writes its log files: create logback-spring-file.xml under resources

    <?xml version="1.0" encoding="UTF-8"?><configuration debug="false" scan="false">    <!-- Log file path -->    <property name="log.path" value="logs/springboot-elk"/>    <!-- Console log output -->    <appender name="console"              class="ch.qos.logback.core.ConsoleAppender">        <encoder>            <pattern>%d{MM-dd HH:mm:ss.SSS} %-5level [%logger{50}] - %msg%n            </pattern>        </encoder>    </appender>    <!-- Log file debug output -->    <appender name="fileRolling_info" class="ch.qos.logback.core.rolling.RollingFileAppender">        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">            <fileNamePattern>${log.path}/%d{yyyy-MM-dd}/info.%d{yyyy-MM-dd}.%i.log</fileNamePattern>            <TimeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">                <maxFileSize>50MB</maxFileSize>            </TimeBasedFileNamingAndTriggeringPolicy>        </rollingPolicy>        <encoder>            <pattern>%date [%thread] %-5level [%logger{50}] %file:%line - %msg%n            </pattern>        </encoder>        <!--<filter class="ch.qos.logback.classic.filter.LevelFilter"> <level>ERROR</level>            <onMatch>DENY</onMatch> <onMismatch>NEUTRAL</onMismatch> </filter> -->    </appender>    <!-- Log file error output -->    <appender name="fileRolling_error" class="ch.qos.logback.core.rolling.RollingFileAppender">        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">            <fileNamePattern>${log.path}/%d{yyyy-MM-dd}/error.%d{yyyy-MM-dd}.%i.log</fileNamePattern>            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">                <maxFileSize>50MB</maxFileSize>            </timeBasedFileNamingAndTriggeringPolicy>        </rollingPolicy>        <encoder>            <pattern>%date [%thread] %-5level [%logger{50}] %file:%line - %msg%n            </pattern>        </encoder>        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">            <level>ERROR</level>        </filter>    </appender>    <!-- Level: FATAL 0 ERROR 3 WARN 4 INFO 6 DEBUG 7 -->    <root level="info">        <!--{dev.start}-->        <appender-ref ref="console"/>        <!--{dev.end}-->        <!--{alpha.start}        <appender-ref ref="fileRolling_info" />        {alpha.end}-->        <!--        {release.start}-->        <appender-ref ref="fileRolling_info"/>        <!--        {release.end}-->        <appender-ref ref="fileRolling_error"/>    </root>    <!-- Framework level setting -->    <!--    <include resource="config/logger-core.xml" />-->    <!-- Project level setting -->    <!-- <logger name="your.package" level="DEBUG" /> -->    <logger name="org.springframework" level="INFO"></logger>    <logger name="org.mybatis" level="INFO"></logger></configuration>
  • Point logging at logback-spring-file.xml in application.yml

    logging:
      # The default logback-spring.xml ships logs to ES via Logstash;
      # switch to logback-spring-file.xml to write logs to rolled files
      # and let Filebeat watch those files instead.
      config: classpath:logback-spring-file.xml
  • That completes the setup. The flow is now: the Spring Boot application writes rolled log files, Filebeat watches those files and ships them to Logstash, Logstash forwards them to Elasticsearch, and Kibana visualizes the result

And that wraps up the ELK setup walkthrough, with Filebeat log-file shipping thrown in along the way. That's about the gist of it. It has been a while since I put real effort into an article; more in-depth, practice-oriented posts will follow. You're welcome to follow my WeChat official account 《醉鱼JAVA》 so we can learn and improve together.

github