Docker Relearning Notes


Docker

Origins

  • Docker is an open-source container project implemented in Go and open-sourced by dotCloud. dotCloud, founded in 2010, mainly served developers through a PaaS (Platform as a Service) platform. On a PaaS platform the whole service environment is pre-configured: developers only choose a service type and upload their code to go live, with no need to spend large amounts of time building servers and configuring environments. dotCloud's PaaS was already quite good: it supported almost every mainstream web language and database, let developers freely pick the language, database, and framework they needed, and was very simple to operate — after each round of coding, a single command deployed the whole site. Thanks to its multi-layer platform design, its applications could in principle run on any kind of cloud service. Yet after two or three years, although dotCloud had earned a decent reputation, the PaaS market as a whole was still immature, and the company stayed lukewarm with no explosive growth.
  • Docker initially ran mainly on Ubuntu and later added support for RHEL/CentOS. All the major cloud providers, such as Azure, Google, and Amazon, support Docker, which has effectively made Docker an important building block of cloud computing.
  • Docker blurs the boundary between IaaS and PaaS and opens up endless possibilities for cloud service models. With its container philosophy Docker tore things down and rebuilt them — a remarkable feat in the cloud computing movement.

    https://www.ruanyifeng.com/bl…

Concepts and Advantages

  • Docker is currently defined as an open-source container engine that makes containers easy to manage. By packaging images and introducing the Docker Registry for unified image management, it builds a fast, convenient "Build, Ship and Run" workflow that unifies the environments and processes of development, testing, and deployment, greatly reducing operations cost.
  • Docker containers are fast: they start and stop in seconds, far quicker than traditional virtual machines. The core problem Docker solves is using containers to provide VM-like functionality, so the same hardware can serve more computing resources to users. Beyond the application running inside it, a container consumes almost no extra system resources, preserving application performance while keeping system overhead small — which makes running thousands of Docker containers on one host feasible.
  • Consistent runtime environments
  • Resources, networking, libraries, and so on are isolated, so dependency conflicts do not occur
  • All kinds of standardized operations, well suited to automation
  • Lightweight, with fast startup and easy migration

Installation

  • Install Docker on CentOS
[root@localhost ~]# curl -fsSL https://get.docker.com | bash -s docker --mirror aliyun
  • The docker service does not start automatically after installation; start it manually
[root@localhost ~]# systemctl start docker
  • Enable the docker service at boot
[root@localhost ~]# systemctl enable docker
  • Check the version
[root@localhost ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun  2 11:58:10 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:56:35 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Basic Components

  • Docker client: the most common Docker client is the docker command. With docker we can conveniently build and run containers on the host.
  • Docker server: the Docker daemon runs on the Docker host and is responsible for creating, running, and monitoring containers, and for building and storing images. By default the Docker daemon only answers client requests from the local host; to allow remote client requests, TCP listening must be enabled in the configuration.

    • Edit the configuration file /etc/systemd/system/multi-user.target.wants/docker.service and append -H tcp://0.0.0.0 to the ExecStart line to allow clients to connect from any IP (see the sketch after the restart commands below)
    • Restart the Docker daemon
    systemctl daemon-reload
    systemctl restart docker.service
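As a sketch, the modified ExecStart line could look like the following; the explicit 2375 port is an assumption (it is the conventional unencrypted Docker port), and exposing it without TLS should only be done on a trusted network:

    # hypothetical example; keep whatever other options your unit file already has
    ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375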
  • With the Docker server at IP 192.168.9.140, a client on another machine can talk to the remote server by adding the -H option on the command line
docker -H 192.168.9.140 info
[root@xdja ~]# docker -H 192.168.9.140 info
Containers: 3
 Running: 3
 Paused: 0
 Stopped: 0
Images: 3
Server Version: 18.09.7
Storage Driver: devicemapper
 Pool Name: docker-8:3-67364689-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
  • Image: a Docker image is the basis for creating containers, similar to a virtual machine snapshot; it can be understood as a read-only template for the Docker container engine. For example, an image can be a complete CentOS operating system environment, called a CentOS image, or an application with MySQL installed, called a MySQL image, and so on.
  • Container: a Docker container is a running instance created from an image; it can be started, stopped, and deleted. Containers are isolated from and invisible to one another, which keeps the platform secure. A container can be viewed as a lightweight Linux environment; Docker uses containers to run and isolate applications.
  • Repository: a Docker repository is a place where images are stored centrally. After developers create their own images, they can push them to a public or private repository; the next time an image is needed on another machine, it can simply be pulled from the repository.

    The official Docker repository is at https://hub.docker.com

  • Docker host: a physical or virtual machine that runs the Docker daemon and containers.

Image build: creating an image that contains the runtime environment, program code, and everything else the application needs; this creation process is done with a Dockerfile.

Container start: a container ultimately runs by pulling a built image and starting the service with a series of runtime options (port mappings, external data mounts, environment variables, and so on). For a single container this is done with docker run.

When multiple containers are involved (service orchestration, for example), docker-compose can be used instead. It can easily run several containers as services (or just one of them) and also provides scale (service scaling) functionality, as sketched below.
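As a minimal sketch of such orchestration (the service name web is illustrative, reusing the wch/tomcat image built later in these notes):

# docker-compose.yml
version: "3"
services:
  web:
    image: wch/tomcat    # any web-serving image works here
    ports:
      - "8080"           # publish container port 8080 on a dynamic host port so scaled replicas do not collide

With this file, docker-compose up -d starts the service and docker-compose up -d --scale web=3 runs three replicas.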

Building an Image with a Dockerfile

  • Pull the centos image
[root@localhost docker]# docker pull centos
  • Upload the JDK and Tomcat packages
[root@localhost docker]# ll
total 151856
-rw-rw-rw- 1 root root  10559131 Jun 21 17:45 apache-tomcat-8.5.68.tar.gz
-rw-r--r-- 1 root root       696 Jun 22 09:32 Dockerfile
-rw-rw-rw- 1 root root 144935989 Jun 22 09:15 jdk-8u291-linux-x64.tar.gz
  • Create the Dockerfile
[root@localhost wch]# pwd
/home/wch
[root@localhost wch]# mkdir docker
[root@localhost docker]# touch Dockerfile
  • Enter the following content
# Base image
FROM centos:latest
# Author information
MAINTAINER wch
# Add tomcat and the jdk to the image
# The jdk and tomcat tarballs are in the current directory; ADD unpacks them automatically
ADD jdk-8u291-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-8.5.68.tar.gz /usr/local/
# Set environment variables
ENV JAVA_HOME /usr/local/jdk1.8.0_291/
ENV PATH $JAVA_HOME/bin:$PATH
ENV CLASSPATH .:$JAVA_HOME/lib
# Make the startup scripts executable
RUN chmod +x /usr/local/apache-tomcat-8.5.68/bin/*.sh
# Declare the port exposed to the outside world
EXPOSE 8080
# Define what runs when the container starts
ENTRYPOINT /usr/local/apache-tomcat-8.5.68/bin/startup.sh && /bin/bash && tail -f /usr/local/apache-tomcat-8.5.68/logs/catalina.out
  • Build the image; on success the image ID is returned
[root@localhost docker]# docker build -f /home/wch/docker/Dockerfile -t wch/tomcat .
Sending build context to Docker daemon  155.5MB
Step 1/10 : FROM centos:latest
 ---> 300e315adb2f
Step 2/10 : MAINTAINER wch
 ---> Running in c9ff9c1277b4
Removing intermediate container c9ff9c1277b4
 ---> 3b8b3ffc8af3
Step 3/10 : ADD jdk-8u291-linux-x64.tar.gz /usr/local/
 ---> 988571412bac
Step 4/10 : ADD apache-tomcat-8.5.68.tar.gz /usr/local/
 ---> f160e9207148
Step 5/10 : ENV JAVA_HOME /usr/local/jdk1.8.0_291/
 ---> Running in 4574503f1307
Removing intermediate container 4574503f1307
 ---> af37b9368f59
Step 6/10 : ENV PATH $JAVA_HOME/bin:$PATH
 ---> Running in 30521e475681
Removing intermediate container 30521e475681
 ---> 98760e798091
Step 7/10 : ENV CLASSPATH .:$JAVA_HOME/lib
 ---> Running in 6efa1040eb62
Removing intermediate container 6efa1040eb62
 ---> e50226013e04
Step 8/10 : RUN chmod +x /usr/local/apache-tomcat-8.5.68/bin/*.sh
 ---> Running in 733a8f068adc
Removing intermediate container 733a8f068adc
 ---> 60ffde451605
Step 9/10 : EXPOSE 8080
 ---> Running in 024e2e19af04
Removing intermediate container 024e2e19af04
 ---> 52afaea4fc62
Step 10/10 : ENTRYPOINT /usr/local/apache-tomcat-8.5.68/bin/startup.sh && /bin/bash && tail -f /usr/local/apache-tomcat-8.5.68/logs/catalina.out
 ---> Running in 69e6fea9f1b7
Removing intermediate container 69e6fea9f1b7
 ---> 9b8179770e78
Successfully built 9b8179770e78
Successfully tagged wch/tomcat:latest

The trailing . specifies the current directory as the build context. By default Docker looks for the Dockerfile inside the build context; the -f option can also point at a Dockerfile elsewhere.

docker build -f /home/wch/docker/Dockerfile -t wch/tomcat .

Or, equivalently:

cd /home/wch/docker

docker build -t wch/tomcat .

  • View the newly built image
[root@localhost docker]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        25 minutes ago      584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
  • Start a container from the built image
docker run -d -p 8010:8080 wch/tomcat
b43861a53e3206650d57107c869f538cc3384630957fcb8bff1cc40bb92610e0
  • Access it in a browser

  • Look inside the container
[root@localhost ~]# docker exec -it b43861a53e32 /bin/bash
[root@b43861a53e32 /]# cd /usr/local/
[root@b43861a53e32 local]# ls
apache-tomcat-8.5.68  bin  etc  games  include  jdk1.8.0_291  lib  lib64  libexec  sbin  share  src

RUN vs CMD vs ENTRYPOINT

  • RUN: executes a command and creates a new image layer; RUN is most often used to install packages.
  • CMD: sets the default command (and its arguments) to run when the container starts, but CMD can be replaced by arguments given after docker run.

    • If docker run specifies another command, the default command set by CMD is ignored
    • If a Dockerfile contains multiple CMD instructions, only the last CMD takes effect
  • ENTRYPOINT: configures the command that runs when the container starts.

    • ENTRYPOINT is not ignored; it always runs, even when docker run specifies another command. CMD can provide extra default arguments for ENTRYPOINT, and those defaults can be replaced from the docker run command line.
    • The shell form of ENTRYPOINT ignores any arguments supplied by CMD or docker run
  • Shell form: when the instruction executes, it is run through /bin/sh -c [command]

    • RUN apt-get install python3
    • CMD echo "hello world"
    • ENTRYPOINT echo "hello world"
  • Exec form: when the instruction executes, [command] is invoked directly, without shell parsing.

    • RUN ["apt-get", "install", "python3"]
    • CMD ["/bin/echo", "hello world"]
    • ENTRYPOINT ["/bin/echo", "hello world"]
    • ENTRYPOINT ["/bin/echo", "hello"] CMD ["world"]
    • ENV name Cloud Man ENTRYPOINT ["/bin/sh", "-c", "echo hello, $name"]

The exec form is recommended for CMD and ENTRYPOINT because the instructions are more readable and easier to understand; RUN works in either form. The sketch below makes the override rules concrete.
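A minimal sketch, assuming a throwaway image tag cmdtest (hypothetical name), showing how CMD supplies replaceable defaults to ENTRYPOINT:

# Dockerfile
FROM busybox
ENTRYPOINT ["/bin/echo", "hello"]
CMD ["world"]

docker build -t cmdtest .
docker run cmdtest            # prints "hello world": CMD supplies the default argument
docker run cmdtest docker     # prints "hello docker": the command-line argument replaces CMD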

Distributing Images

Using a Public Registry

  • Docker Hub: first register an account through the web page
[root@localhost ~]# docker login -u wholegale39
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
  • View local images
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        5 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
  • Re-tag the image
[root@localhost ~]# docker tag wch/tomcat wholegale39/tomcat
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        5 hours ago         584MB
wholegale39/tomcat                 latest              9b8179770e78        5 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
  • Push the image
[root@localhost ~]# docker push wholegale39/tomcat:latest
The push refers to repository [docker.io/wholegale39/tomcat]
711749be7df9: Pushed 
579be2cb5f3b: Pushed 
015815b60df5: Pushed 
2653d992f4ef: Mounted from library/centos 
latest: digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3 size: 1163
  • After a successful push, view the image on Docker Hub

  • Pull and use the image; any user can download it
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        6 hours ago         584MB
wholegale39/tomcat                 latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
[root@localhost ~]# docker rmi wholegale39/tomcat
Untagged: wholegale39/tomcat:latest
Untagged: wholegale39/tomcat@sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
[root@localhost ~]# docker pull wholegale39/tomcat
Using default tag: latest
latest: Pulling from wholegale39/tomcat
Digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
Status: Downloaded newer image for wholegale39/tomcat:latest
[root@localhost ~]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        6 hours ago         584MB
wholegale39/tomcat                 latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB

Setting Up a Local Registry

  • Start a local registry service
docker run -d -p 5000:5000 -v /home/wch/localRegistry:/var/lib/registry registry
Unable to find image 'registry:latest' locally
latest: Pulling from library/registry
ddad3d7c1e96: Pull complete 
6eda6749503f: Pull complete 
363ab70c2143: Pull complete 
5b94580856e6: Pull complete 
12008541203a: Pull complete 
Digest: sha256:aba2bfe9f0cff1ac0618ec4a54bfefb2e685bbac67c8ebaf3b6405929b3e616f
Status: Downloaded newer image for registry:latest
b7d56c751422ec434dd5217db4afac626fcf452b2d86554ea08126d8ee226cfb
[root@localhost wch]# docker ps
CONTAINER ID        IMAGE                                     COMMAND                  CREATED             STATUS              PORTS                                        NAMES
b7d56c751422        registry                                  "/entrypoint.sh /etc…"   8 seconds ago       Up 4 seconds        0.0.0.0:5000->5000/tcp                       happy_mclean
b43861a53e32        wch/tomcat                                "/bin/sh -c '/usr/lo…"   6 hours ago         Up 6 hours          0.0.0.0:8010->8080/tcp                       inspiring_rubin
2649b0f316c3        quay.io/prometheus/node-exporter:latest   "/bin/node_exporter …"   5 days ago          Up 24 hours                                                      node_exporter
314026ddbcc3        grafana/grafana:latest                    "/run.sh"                5 days ago          Up 24 hours         0.0.0.0:26->26/tcp, 0.0.0.0:3000->3000/tcp   grafana
407fd7fc14a6        prom/prometheus:latest                    "/bin/prometheus --c…"   5 days ago          Up 24 hours         8086/tcp, 0.0.0.0:9090->9090/tcp             prometheus
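Because this registry speaks plain HTTP, a Docker daemon will normally refuse to push to it until it is listed as trusted. A minimal sketch of /etc/docker/daemon.json on each client (restart docker afterwards); the address matches the registry host used below:

{
  "insecure-registries": ["192.168.9.140:5000"]
}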
  • Re-tag the image
[root@localhost docker]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
wch/tomcat                         latest              9b8179770e78        6 hours ago         584MB
wholegale39/tomcat                 latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                    latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                    latest              86ea6f86fc57        4 weeks ago         185MB
registry                           latest              1fd8e1b0bb7e        2 months ago        26.2MB
quay.io/prometheus/node-exporter   latest              c19ae228f069        3 months ago        26MB
centos                             latest              300e315adb2f        6 months ago        209MB
docker tag wholegale39/tomcat 192.168.9.140:5000/wholegale39/tomcat
[root@localhost docker]# docker images
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
192.168.9.140:5000/wholegale39/tomcat   latest              9b8179770e78        6 hours ago         584MB
wch/tomcat                              latest              9b8179770e78        6 hours ago         584MB
wholegale39/tomcat                      latest              9b8179770e78        6 hours ago         584MB
grafana/grafana                         latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                         latest              86ea6f86fc57        4 weeks ago         185MB
registry                                latest              1fd8e1b0bb7e        2 months ago        26.2MB
quay.io/prometheus/node-exporter        latest              c19ae228f069        3 months ago        26MB
centos                                  latest              300e315adb2f        6 months ago        209MB
  • Push the image
[root@localhost docker]# docker push 192.168.9.140:5000/wholegale39/tomcat:latest
The push refers to repository [192.168.9.140:5000/wholegale39/tomcat]
711749be7df9: Pushed 
579be2cb5f3b: Pushed 
015815b60df5: Pushed 
2653d992f4ef: Pushed 
latest: digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3 size: 1163
  • Pull and use the image; any user on the internal network can download it
[root@localhost docker]# docker images
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
192.168.9.140:5000/wholegale39/tomcat   latest              9b8179770e78        7 hours ago         584MB
wch/tomcat                              latest              9b8179770e78        7 hours ago         584MB
wholegale39/tomcat                      latest              9b8179770e78        7 hours ago         584MB
grafana/grafana                         latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                         latest              86ea6f86fc57        4 weeks ago         185MB
registry                                latest              1fd8e1b0bb7e        2 months ago        26.2MB
quay.io/prometheus/node-exporter        latest              c19ae228f069        3 months ago        26MB
centos                                  latest              300e315adb2f        6 months ago        209MB
[root@localhost docker]# docker rmi 192.168.9.140:5000/wholegale39/tomcat
Untagged: 192.168.9.140:5000/wholegale39/tomcat:latest
Untagged: 192.168.9.140:5000/wholegale39/tomcat@sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
[root@localhost docker]# docker pull 192.168.9.140:5000/wholegale39/tomcat
Using default tag: latest
latest: Pulling from wholegale39/tomcat
Digest: sha256:8ce292efe201dcefcd76fb1e3d42d5bc65a5469f46b470a738ed1027fcaeebd3
Status: Downloaded newer image for 192.168.9.140:5000/wholegale39/tomcat:latest
[root@localhost docker]# docker images
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
192.168.9.140:5000/wholegale39/tomcat   latest              9b8179770e78        7 hours ago         584MB
wch/tomcat                              latest              9b8179770e78        7 hours ago         584MB
wholegale39/tomcat                      latest              9b8179770e78        7 hours ago         584MB
grafana/grafana                         latest              b53df981d3aa        7 days ago          206MB
prom/prometheus                         latest              86ea6f86fc57        4 weeks ago         185MB
registry                                latest              1fd8e1b0bb7e        2 months ago        26.2MB
quay.io/prometheus/node-exporter        latest              c19ae228f069        3 months ago        26MB
centos                                  latest              300e315adb2f        6 months ago        209MB
  • Inspect image information in the registry
[root@localhost docker]# curl http://192.168.9.140:5000/v2/_catalog
{"repositories":["wholegale39/tomcat"]}
[root@localhost docker]# curl http://192.168.9.140:5000/v2/wholegale39/tomcat/tags/list
{"name":"wholegale39/tomcat","tags":["latest"]}

Common Commands

  • List running containers
[root@localhost ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
b7d56c751422        registry            "/entrypoint.sh /etc…"   25 hours ago        Up 24 hours         0.0.0.0:5000->5000/tcp   happy_mclean
  • List containers in all states
[root@localhost ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS                    NAMES
b7d56c751422        registry            "/entrypoint.sh /etc…"   25 hours ago        Up 24 hours                 0.0.0.0:5000->5000/tcp   happy_mclean
b43861a53e32        wch/tomcat          "/bin/sh -c '/usr/lo…"   31 hours ago        Exited (137) 24 hours ago                            inspiring_rubin
  • Enter a container
[root@localhost ~]# docker exec -it CONTAINERID /bin/bash
  • Start a container
[root@localhost ~]# docker start CONTAINERID
  • Stop a container
[root@localhost ~]# docker stop CONTAINERID
  • Restart a container
[root@localhost ~]# docker restart CONTAINERID
  • View logs
[root@localhost ~]# docker logs -f CONTAINERID
  • Pause a container
[root@localhost ~]# docker pause CONTAINERID
  • Unpause a container
[root@localhost ~]# docker unpause CONTAINERID
  • Remove a container that is not running
[root@localhost ~]# docker rm CONTAINERID
  • Remove a specific unused image
[root@localhost ~]# docker rmi IMAGEID
  • Remove all unused images
[root@xdja wch]# docker image prune -a
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
  • Save an image to a file
[root@localhost ~]# docker save -o tomcat wholegale39/tomcat
  • Load the image on another machine
[root@xdja wch]# docker load -i tomcat
2653d992f4ef: Loading layer [==================================================>]  216.5MB/216.5MB
015815b60df5: Loading layer [==================================================>]  360.4MB/360.4MB
579be2cb5f3b: Loading layer [==================================================>]  15.27MB/15.27MB
711749be7df9: Loading layer [==================================================>]  65.02kB/65.02kB
Loaded image: wholegale39/tomcat:latest
  • Remove orphaned volumes in bulk
[root@localhost ~]# docker volume rm $(docker volume ls -q)
  • Copy files between host and container
[root@localhost ~]# docker cp /home/wch containerID:/home/
[root@localhost ~]# docker cp containerID:/home/ /home/wch

Docker Networking

  • List networks
[root@localhost docker]# docker network ls
NETWORK ID          NAME                         DRIVER              SCOPE
0a6e7337301f        bridge                       bridge              local
e558d63e1ee8        host                         host                local
c7da7be15130        none                         null                local
4965012c623e        prometheus_grafana_monitor   bridge              local

The none network has only the lo interface; applications with strict security requirements can use it.

The host network: the container shares the Docker host's network stack, so its network configuration is identical to the host's. The big advantage is better performance, but port conflicts must be considered.

The bridge network: the Docker daemon creates a virtual Ethernet bridge, docker0, and packets are forwarded automatically between any interfaces attached to it. By default the daemon creates a pair of peer interfaces, making one of them the container's eth0 and placing the other in the host's namespace, thereby connecting every container on the host to this internal network. The daemon also allocates the container an IP address and subnet from the bridge's private address space. Bridge mode is Docker's default, and each network can be selected explicitly as sketched below.
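As a quick sketch, the network is selected with the --network option of docker run (busybox is just a convenient test image here):

docker run -it --network none busybox ip addr   # only lo is present
docker run -it --network host busybox ip addr   # sees the host's own interfaces
docker run -it busybox ip addr                  # default bridge: eth0 attached to docker0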

  • Dynamic port mapping: map port 80 to a dynamic host port
[root@localhost docker]# docker run -p 80 httpd
  • Fixed port mapping: map port 80 to host port 8080
[root@localhost docker]# docker run -p 8080:80 httpd

For every mapped port, the host starts a docker-proxy process to handle the traffic destined for the container

[root@localhost docker]# ps -ef|grep docker-proxy
root       910 16786  0 Jun23 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 5000 -container-ip 172.17.0.1 -container-port 5000
root     17024 16786  0 Jun22 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3000 -container-ip 172.26.0.2 -container-port 3000
root     17038 16786  0 Jun22 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 26 -container-ip 172.26.0.2 -container-port 26
root     17068 16786  0 Jun22 ?        00:01:57 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9090 -container-ip 172.26.0.3 -container-port 9090
root     27721 17810  0 09:59 pts/0    00:00:00 grep --color=auto docker-proxy

Cross-host networking options include:

1. Docker-native overlay and macvlan;

2. Third-party solutions, commonly flannel, weave, and calico;

Overlay networks rely on tunneling: packets are encapsulated in UDP for transport. Because packets must be encapsulated and decapsulated, there is extra CPU and network overhead. Although almost all overlay solutions use the Linux kernel's vxlan module underneath to keep that overhead as small as possible, it still exists compared with an underlay network. So Macvlan, Flannel host-gw, and Calico outperform Docker overlay, Flannel vxlan, and Weave.

Compared with underlay, overlay supports more layer-2 segments, makes better use of existing networks, and avoids exhausting physical switch MAC tables, so solution selection requires weighing all of these factors. A macvlan sketch follows.
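For reference, a minimal macvlan sketch; the subnet, gateway, and parent interface eth0 are assumptions and must match the actual LAN:

docker network create -d macvlan \
  --subnet=192.168.9.0/24 --gateway=192.168.9.1 \
  -o parent=eth0 mac_net
docker run -it --network mac_net --ip 192.168.9.200 busybox sh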

Docker Storage

Docker provides containers with two kinds of resources for storing data:

1. The image layers and container layer managed by the storage driver.

2. Data Volumes.

storage driver

A container consists of a writable container layer on top and a number of read-only image layers below it; the container's data lives in these layers. The defining property of this layered structure is copy-on-write:

1. New data is written directly to the topmost container layer.

2. Modifying existing data first copies it from the image layer up to the container layer; the modified data is then kept in the container layer, and the image layer stays unchanged.

3. If several layers contain a file with the same name, the user only sees the file in the topmost layer.

This layered structure makes creating, sharing, and distributing images and containers extremely efficient, and all of it is the work of the Docker storage driver: the storage driver stacks the layers and presents the user with a single, merged, unified view. Copy-on-write can be observed directly, as in the sketch below.
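A small sketch to observe copy-on-write in action (the container name cowtest is illustrative):

docker run -d --name cowtest centos sleep 3600
docker exec cowtest sh -c 'echo hi > /etc/motd'   # copy-up: /etc/motd is copied into the container layer and modified there
docker diff cowtest                               # lists changed paths, e.g. "C /etc" and "C /etc/motd"

The image's own /etc/motd is untouched, so a second container started from the same image still sees the original file.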

Docker supports multiple storage drivers: AUFS, Device Mapper, Btrfs, OverlayFS, VFS, and ZFS. All of them implement the layered architecture while each has its own characteristics. Choosing a specific storage driver is a hard problem for Docker users because:

1. No single driver fits every scenario.

2. The drivers themselves are evolving and iterating rapidly.

Docker's official answer, however, is simple: prefer the Linux distribution's default storage driver.

  • CentOS 7.4
[root@localhost docker]# docker info
Containers: 5
 Running: 3
 Paused: 1
 Stopped: 1
Images: 14
Server Version: 18.09.7
Storage Driver: devicemapper
  • Ubuntu 18
wch@ubuntu:~$ sudo docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.13
 Storage Driver: overlay2

For some containers, putting data directly in the layers maintained by the storage driver is a good choice — stateless applications, for example. Stateless means the container has no data that must be persisted and can be recreated from the image at any time.

Take busybox: it is a toolbox started to run commands such as wget and ping. It does not need to keep data for later use and exits when done; when the container is deleted, the working data stored in its container layer is deleted with it. That is fine — just start a new container next time.

For another class of applications this approach does not fit: they need persistent data. The container must load existing data at startup and keep newly produced data when it is destroyed; in other words, these containers are stateful.

That is what Docker's other storage mechanism is for: the Data Volume.

Data Volume

A Data Volume is essentially a directory or file in the Docker host's filesystem that can be mounted directly into a container's filesystem.

Data Volumes have the following properties:

1. A Data Volume is a directory or file, not an unformatted disk (block device).

2. Containers can read and write the data in a volume.

3. Volume data is preserved permanently, even after the container using it has been destroyed.

Docker provides two types of volume: bind mount and docker managed volume

  • bind mount

    • A bind mount mounts a directory or file that already exists on the host into the container.
    • The -v format is <host path>:<container path>. /usr/local/apache2/htdocs is where Apache Server keeps its static files. Because /usr/local/apache2/htdocs already exists in the image, its original contents are hidden and replaced by the contents of the host's /home/wch/docker/httpd/
[root@localhost httpd]# pwd
/home/wch/docker/httpd
[root@localhost httpd]# ll
total 4
-rw-r--r-- 1 root root 72 Jun 24 15:17 index.html
[root@localhost httpd]# cat index.html 
<html><body><h1>This is a file in host file system !</h1></body></html>
[root@localhost httpd]# docker run -d -p 80:80 -v /home/wch/docker/httpd:/usr/local/apache2/htdocs httpd
275953f4f8bcc276dc83c63147a5d05582c4b216eb80855d12a1eb3d7da5baae
[root@localhost httpd]# curl 127.0.0.1:80
<html><body><h1>This is a file in host file system !</h1></body></html>
[root@localhost httpd]# echo "update index page" > index.html
[root@localhost httpd]# cat index.html 
update index page
[root@localhost httpd]# curl 127.0.0.1:80
update index page
# The default is read-write
[root@localhost httpd]# docker run -d -p 80:80 -v /home/wch/docker/httpd:/usr/local/apache2/htdocs httpd
# It can be made read-only: the bind mount data cannot be modified from inside the container; only the host may modify it
[root@localhost httpd]# docker run -d -p 80:80 -v /home/wch/docker/httpd:/usr/local/apache2/htdocs:ro httpd
  • docker managed volume

In use, the biggest difference between a docker managed volume and a bind mount is that no mount source needs to be specified; naming the mount point is enough

If the mount point refers to a directory that already exists in the image, its original data is copied into the host-side volume

[root@localhost httpd]# docker run -d -p 80:80 -v /usr/local/apache2/htdocs httpd
6c0c6c8e15ebc5e99ff53d60a9e59994dc79909b80f1020f15271e9012958c64
[root@localhost httpd]# docker inspect 6c0c6c8e15eb
"Mounts": [
    {
        "Type": "volume",
        "Name": "02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154",
        "Source": "/var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data",
        "Destination": "/usr/local/apache2/htdocs",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]
[root@localhost httpd]# docker volume ls
DRIVER              VOLUME NAME
local               02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154
local               0449d527e57c9b7b48789449fb02ae9c598db4d982a6c9af4f56cddea57a1b49
[root@localhost httpd]# docker inspect 02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154
[
    {
        "CreatedAt": "2021-06-24T15:35:00+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data",
        "Name": "02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154",
        "Options": null,
        "Scope": "local"
    }
]
[root@localhost httpd]# ls -l /var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data
total 4
-rw-r--r-- 1 mysql mysql 45 Jun 12  2007 index.html
[root@localhost httpd]# cat /var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data/index.html 
<html><body><h1>It works!</h1></body></html>
[root@localhost httpd]# curl 127.0.0.1:80
<html><body><h1>It works!</h1></body></html>
# For a docker managed volume, docker rm can take the -v option to delete the container's volumes as well,
# provided no other container mounts those volumes
[root@localhost httpd]# docker rm -v 6c0c6c8e15eb

Data Sharing

  • Sharing data between container and host

    • bind mount: mount the directory to be shared directly into the container
    • docker managed volume
    [root@localhost httpd]# curl 127.0.0.1:80
    <html><body><h1>It works!</h1></body></html>
    [root@localhost httpd]# docker cp /home/wch/docker/httpd/index.html 6c0c6c8e15eb:/usr/local/apache2/htdocs
    [root@localhost httpd]# curl 127.0.0.1:80
    This is a new index page for web cluster
    [root@localhost httpd]# cat /var/lib/docker/volumes/02a78718f039a58ba22b56a96c1b0379da45f37408b96c8792b33a781ac04154/_data/index.html 
    This is a new index page for web cluster
  • Sharing data between containers
[root@localhost httpd]# docker run --name web1 -d -p 80 -v /home/wch/docker/httpd/:/usr/local/apache2/htdocs httpd
2126366ffe2cb5aca7b97012b41779b7963ca41c4afd797a992d8a3c2e471ab4
[root@localhost httpd]# docker run --name web2 -d -p 80 -v /home/wch/docker/httpd/:/usr/local/apache2/htdocs httpd
03a859cfda48a472ff28c313638c6054633e30e7ed77d17d0919a6e95ecd164f
[root@localhost httpd]# docker run --name web3 -d -p 80 -v /home/wch/docker/httpd/:/usr/local/apache2/htdocs httpd
27483f6f7ccccce086594501d21e0b9eef1fdcc9f3145dd1a36e0c9c7910322a
[root@localhost httpd]# docker ps
CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS                  NAMES
27483f6f7ccc        httpd               "httpd-foreground"   8 seconds ago       Up 5 seconds        0.0.0.0:1026->80/tcp   web3
03a859cfda48        httpd               "httpd-foreground"   17 seconds ago      Up 14 seconds       0.0.0.0:1025->80/tcp   web2
2126366ffe2c        httpd               "httpd-foreground"   29 seconds ago      Up 26 seconds       0.0.0.0:1024->80/tcp   web1
[root@localhost httpd]# curl 127.0.0.1:1024
update index page
[root@localhost httpd]# curl 127.0.0.1:1025
update index page
[root@localhost httpd]# curl 127.0.0.1:1026
update index page
[root@localhost httpd]# echo "This is a new index page for web cluster" > index.html [root@localhost httpd]# curl 127.0.0.1:1024This is a new index page for web cluster[root@localhost httpd]# curl 127.0.0.1:1025This is a new index page for web cluster[root@localhost httpd]# curl 127.0.0.1:1026This is a new index page for web cluster
  • volume container

    • a bind mount holding the web server's static files
    • a docker managed volume holding some utility tools (empty for now; this is only an example)
# docker create is used because a volume container only provides data; it does not need to be running
[root@localhost httpd]# docker create --name vc_data -v /home/wch/docker/httpd:/usr/local/apache2/htdocs -v /other/useful/tools busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
b71f96345d44: Pull complete 
Digest: sha256:930490f97e5b921535c153e0e7110d251134cc4b72bbb8133c6a5065cc68580d
Status: Downloaded newer image for busybox:latest
948a7dd94baf96c7b6291d4830df7d314a65680c687bad52ece2432e1190ee55
[root@localhost httpd]# docker inspect vc_data
"Mounts": [
    {
        "Type": "bind",
        "Source": "/home/wch/docker/httpd",
        "Destination": "/usr/local/apache2/htdocs",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Type": "volume",
        "Name": "9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f",
        "Source": "/var/lib/docker/volumes/9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f/_data",
        "Destination": "/other/useful/tools",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]
# Other containers can use the vc_data volume container via --volumes-from
[root@localhost httpd]# docker run --name web4 -d -p 80 --volumes-from vc_data httpd
c9e05ea4c552687c79f00698ae56f1ab2c4654192105db309d09dd41eb3fcbee
[root@localhost httpd]# docker inspect web4
"Mounts": [
    {
        "Type": "bind",
        "Source": "/home/wch/docker/httpd",
        "Destination": "/usr/local/apache2/htdocs",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Type": "volume",
        "Name": "9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f",
        "Source": "/var/lib/docker/volumes/9ea52d28e5824755983b45ebd1a28ea220eecadd2e653e3537143191dd97578f/_data",
        "Destination": "/other/useful/tools",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
  • data-packed volume container

The idea is to pack the data into the image itself and then share it through a docker managed volume

Containers can read the volume's data correctly. A data-packed volume container is self-contained and does not depend on the host for data, so it is highly portable. It suits scenarios that use only static data, such as application configuration or a web server's static files.
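Judging from the build output below, the Dockerfile used here is essentially:

FROM busybox:latest
# Bake the static files into the image
ADD htdocs /usr/local/apache2/htdocs
# Expose them as a docker managed volume
VOLUME /usr/local/apache2/htdocs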

[root@localhost httpd]# pwd
/home/wch/httpd
[root@localhost httpd]# ll
total 4
-rw-r--r-- 1 root root 91 Jun 24 17:00 Dockerfile
drwxr-xr-x 2 root root 23 Jun 24 16:57 htdocs
[root@localhost httpd]# docker build -t datapacked .
Sending build context to Docker daemon  3.584kB
Step 1/3 : FROM busybox:latest
 ---> 69593048aa3a
Step 2/3 : ADD htdocs /usr/local/apache2/htdocs
 ---> aa1f4298814e
Step 3/3 : VOLUME /usr/local/apache2/htdocs
 ---> Running in 71362c795108
Removing intermediate container 71362c795108
 ---> cb8ced11e74c
Successfully built cb8ced11e74c
Successfully tagged datapacked:latest
[root@localhost httpd]# docker run -d -p 80 --volumes-from vc_data2 httpd
b9da47ebcf64477c77fed8bb85613765485624b20161daf1508b56e326880447
[root@localhost httpd]# curl 127.0.0.1:1028
This is a new index page for web cluster

Multi-Host Management

Docker Machine is a tool that lets you install Docker on virtual or remote hosts and manage those hosts with the docker-machine command.

Docker Machine can also manage all Docker hosts centrally — for example, quickly installing Docker on 100 servers.

Installation

[root@localhost httpd]# curl -L https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-`uname -s`-`uname -m` > /tmp/docker-machine && chmod +x /tmp/docker-machine && sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
[root@localhost httpd]# docker-machine -v
docker-machine version 0.16.2, build bd45ab13
# Install shell auto-completion
[root@localhost httpd]# yum -y install bash-completion

Configuration and Management

  • Check the current machines
[root@localhost httpd]# docker-machine ls
NAME   ACTIVE   DRIVER   STATE   URL   SWARM   DOCKER   ERRORS
  • Set up passwordless login
# Press Enter through the prompts to generate the keys
[root@localhost httpd]# ssh-keygen
# Copy the keys over to client1
[root@localhost httpd]# ssh-copy-id 192.168.9.31
# Verify that passwordless login works
[root@localhost httpd]# ssh root@192.168.9.31
  • Create a machine
[root@localhost httpd]# docker-machine create --driver generic --generic-ip-address=192.168.9.31 client1
Running pre-create checks...
Creating machine...
(client1) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with centos...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env client1
  • Check the machines again
[root@localhost httpd]# docker-machine ls
NAME      ACTIVE   DRIVER    STATE     URL                       SWARM   DOCKER        ERRORS
client1   -        generic   Running   tcp://192.168.9.31:2376           v18.06.3-ce   
  • Show all environment variables for accessing client1
[root@localhost docker]# docker-machine env client1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.9.31:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/client1"
export DOCKER_MACHINE_NAME="client1"
# Run this command to configure your shell: 
# eval $(docker-machine env client1)
  • Switch to operating on client1
[root@localhost docker]# eval $(docker-machine env client1)
[root@localhost docker]# docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
wholegale39/tomcat               latest              9b8179770e78        2 days ago          584MB
  • Other commands
[root@localhost docker]# docker-machine version client1
18.06.3-ce
[root@localhost docker]# docker-machine status client1
Running

Container Monitoring

Built-in Command-Line Tools

[root@client1 docker]# docker ps
[root@client1 docker]# docker container ls
[root@localhost ~]# docker container ls -a
[root@localhost ~]# docker container top containerID
[root@localhost ~]# docker stats

sysdig

Sysdig is an open-source system analysis tool developed by Sysdig Cloud, largely based on Lua. Sysdig captures system state and activity from a running system and supports filtering and analysis, outperforming comparable open-source tools. Sysdig can be seen as strace + tcpdump + lsof + htop + iftop and other system analysis tools combined.

  • Installation
[root@localhost ~]# docker run -i -t --name sysdig --privileged -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro sysdig/sysdig
root@6d57b899e866:/# csysdig

Weave Scope

Weave Scope monitors, visualizes, and manages Docker and Kubernetes.

The project automatically generates a map of the relationships between containers, which makes those relationships easy to understand and makes containerized, microservice-style applications convenient to monitor.

  • Installation
# Download the latest release
[root@localhost ~]# sudo curl -L https://github.com/weaveworks/scope/releases/download/latest_release/scope -o /usr/local/bin/scope
# Make it executable
[root@localhost ~]# sudo chmod a+x /usr/local/bin/scope
# scope launch starts Weave Scope as a container; adding a username and password improves security
[root@localhost ~]# scope launch -app.basicAuth -app.basicAuth.password 123456 -app.basicAuth.username user -probe.basicAuth -probe.basicAuth.password 123456 -probe.basicAuth.username user
  • Open http://[Host IP]:4040/ in a browser; you can perform arbitrary operations on containers there, and the functionality is very powerful

  • Multi-host monitoring: install Weave Scope on each machine with the commands above
# First stop the running weave scope container on each machine
[root@client1 ~]# docker stop 1215c4a1d22e
# Then run the following on each machine, listing all peers
[root@localhost ~]# scope launch 192.168.9.31 192.168.9.140
5023feeda6c0e299c6c56cf7f1e1a4be1c9b8532a591f1aa326fbf8c75c4d561
Scope probe started
Weave Scope is listening at the following URL(s):

cAdvisor

  • For details, see the "Prometheus & Grafana Performance Monitoring" article

Prometheus

  • For details, see the "Prometheus & Grafana Performance Monitoring" article

Monitoring Tool Comparison

Aspect / Tool              docker ps/top/stats   sysdig   Weave Scope   cAdvisor   Prometheus
Ease of deployment         sssss                 sssss    ssss          sssss      sss
Detail of data             sss                   sssss    sssss         sss        sssss
Multi-host monitoring      none                  none     sssss         none       sssss
Alerting                   none                  none     none          none       ssss
Non-container resources    none                  sss      sss           ss         sssss

Here s is short for strong: more s's means the tool is stronger on that aspect

Container Log Management

Docker logs

  • attach: cannot show earlier logs, only subsequent output, and detaching is awkward
[root@localhost ~]# docker attach containerID
  • logs: shows the full log and can follow new output
[root@localhost ~]# docker logs -f containerID

Docker logging driver

Sending container logs to STDOUT and STDERR is Docker's default logging behavior. In fact, Docker provides several mechanisms, called logging drivers, that help users extract log information from running containers.

Docker's default logging driver is json-file.

[root@localhost ~]# cat /var/lib/docker/containers/03a859cfda48a472ff28c313638c6054633e30e7ed77d17d0919a6e95ecd164f/03a859cfda48a472ff28c313638c6054633e30e7ed77d17d0919a6e95ecd164f-json.log
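The driver and its options can also be chosen per container. A small sketch using the json-file driver's rotation options (the size and file-count values are arbitrary):

docker run -d --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 httpd

Other built-in logging drivers include syslog, journald, fluentd, and gelf; the last one is the format consumed by Graylog, described below.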

ELK

Filebeat is a lightweight shipper for forwarding and centralizing log data. Filebeat watches the log files or locations you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing. There are also beats for collecting network traffic data, system/process/filesystem-level CPU and memory usage, Windows event logs, audit logs, runtime data, and so on.

Logstash reads raw logs, parses and filters them, and forwards them to other components (such as Elasticsearch) for indexing or storage. Logstash supports a rich set of input and output types and can handle logs from all kinds of applications. It runs on the JVM and is relatively resource-hungry.

Elasticsearch is a near-real-time full-text search engine, designed to handle and search huge volumes of log data.

Kibana is a JavaScript-based web UI dedicated to visualizing Elasticsearch data. Kibana can query Elasticsearch and present the results in rich charts, and users can build dashboards to monitor system logs.

Filebeat > Kafka cluster > Logstash cluster > Elasticsearch cluster > Kibana
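As a sketch of the first hop in that pipeline, a minimal filebeat.yml that tails Docker's json-file logs and ships them to Logstash (the path is Docker's default; the Logstash host and port are assumptions):

filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log
output.logstash:
  hosts: ["192.168.9.140:5044"]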

  • Download the project with git clone
[root@localhost docker-elk]# git clone https://github.com/deviantony/docker-elk.git
  • Install
[root@localhost docker-elk]# docker-compose up
Building elasticsearch
Sending build context to Docker daemon  3.584kB
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
7.13.2: Pulling from elasticsearch/elasticsearch
ddf49b9115d7: Already exists 
815a15889ec1: Pull complete 
ba5d33fc5cc5: Pull complete 
976d4f887b1a: Extracting [==============>]  104.7MB/354.9MB
9b5ee4563932: Download complete 
ef11e8f17d0c: Download complete 
3c5ad4db1e24: Download complete 
  • Reset the passwords
[root@localhost docker-elk]# docker-compose exec -T elasticsearch bin/elasticsearch-setup-passwords auto --batch
Changed password for user apm_system
PASSWORD apm_system = 4OHYCFm7yZhsVG5tQDfl
Changed password for user kibana_system
PASSWORD kibana_system = oksG2cfrYEFDFqzPLpu3
Changed password for user kibana
PASSWORD kibana = oksG2cfrYEFDFqzPLpu3
Changed password for user logstash_system
PASSWORD logstash_system = nHU6m8iuBoGKpHI4Yt1p
Changed password for user beats_system
PASSWORD beats_system = YTjhnmgKxLlTVOY8V9PJ
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = eihRRu2eDt05zY7AbqYu
Changed password for user elastic
PASSWORD elastic = fpgKWAI6tkQKkS8c8zzD
  • Update the elastic user's password in the following configuration files
kibana/config/kibana.yml
logstash/config/logstash.yml
logstash/pipeline/logstash.conf
  • Restart the services
[root@localhost docker-elk]# docker-compose restart
Restarting docker-elk_logstash_1      ... done
Restarting docker-elk_kibana_1        ... done
Restarting docker-elk_elasticsearch_1 ... done

https://blog.csdn.net/soultea…

Graylog

Graylog is an open-source tool for log aggregation, analysis, auditing, display, and alerting. Functionally it resembles ELK but is simpler; its conciseness, efficiency, and easy deployment have quickly won it many fans.

  • Create the configuration files (see the links below and the sketch that follows them)

https://raw.githubusercontent…

https://raw.githubusercontent…
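The truncated links above most likely point at the project's example docker-compose.yml and graylog.conf. For orientation only, a minimal single-node sketch (image tags are illustrative; Graylog needs MongoDB and Elasticsearch alongside it):

# docker-compose.yml (sketch)
version: "3"
services:
  mongo:
    image: mongo:4.2
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    environment:
      - discovery.type=single-node
  graylog:
    image: graylog/graylog:4.1
    volumes:
      - ./graylog.conf:/usr/share/graylog/data/config/graylog.conf
    depends_on:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"          # web UI / REST API
      - "12201:12201/udp"    # GELF UDP input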

  • Create the graylog.conf file
############################
# GRAYLOG CONFIGURATION FILE
############################
# The upstream file is a fully commented reference; the settings enabled for
# this deployment are listed below, the rest keep their documented defaults.
is_master = true
node_id_file = /usr/share/graylog/data/config/node-id
# Use at least 64 characters, e.g. generated with: pwgen -N 1 -s 96
password_secret = replacethiswithyourownsecret!
# SHA-256 hash of the root password, e.g.: echo -n yourpassword | shasum -a 256
root_password_sha2 = 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
bin_dir = /usr/share/graylog/bin
data_dir = /usr/share/graylog/data
plugin_dir = /usr/share/graylog/plugin
http_bind_address = 0.0.0.0:9000
elasticsearch_hosts = http://elasticsearch:9200
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 5
retention_strategy = delete
elasticsearch_shards = 1
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
Latency spikes can occur after quiet periods.#  - blocking#     High throughput, low latency, higher CPU usage.#  - busy_spinning#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.processor_wait_strategy = blocking# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.# Must be a power of 2. (512, 1024, 2048, ...)ring_size = 65536inputbuffer_ring_size = 65536inputbuffer_processors = 2inputbuffer_wait_strategy = blocking# Enable the disk based message journal.message_journal_enabled = true# The directory which will be used to store the message journal. The directory must be exclusively used by Graylog and# must not contain any other files than the ones created by Graylog itself.## ATTENTION:#   If you create a seperate partition for the journal files and use a file system creating directories like'lost+found'#   in the root directory, you need to create a sub directory for your journal.#   Otherwise Graylog will log an error message that the journal is corrupt and Graylog will not start.message_journal_dir = data/journal# Journal hold messages before they could be written to Elasticsearch.# For a maximum of 12 hours or 5 GB whichever happens first.# During normal operation the journal will be smaller.#message_journal_max_age = 12h#message_journal_max_size = 5gb#message_journal_flush_age = 1m#message_journal_flush_interval = 1000000#message_journal_segment_age = 1h#message_journal_segment_size = 100mb# Number of threads used exclusively for dispatching internal events. Default is 2.#async_eventbus_processors = 2# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual# shutdown process. Set to 0 if you have no status checking load balancers in front.lb_recognition_period_seconds = 3# Journal usage percentage that triggers requesting throttling for this server node from load balancers. The feature is# disabled if not set.#lb_throttle_threshold_percentage = 95# Every message is matched against the configured streams and it can happen that a stream contains rules which# take an unusual amount of time to run, for example if its using regular expressions that perform excessive backtracking.# This will impact the processing of the entire server. To keep such misbehaving stream rules from impacting other# streams, Graylog limits the execution time for each stream.# The default values are noted below, the timeout is in milliseconds.# If the stream matching for one stream took longer than the timeout value, and this happened more than"max_faults"times# that stream is disabled and a notification is shown in the web interface.#stream_processing_timeout = 2000#stream_processing_max_faults = 3# Since 0.21 the Graylog server supports pluggable output modules. This means a single message can be written to multiple# outputs. 
The next setting defines the timeout for a single output module, including the default output module where all# messages end up.## Time in milliseconds to wait for all message outputs to finish writing a single message.#output_module_timeout = 10000# Time in milliseconds after which a detected stale master node is being rechecked on startup.#stale_master_timeout = 2000# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.#shutdown_timeout = 30000# MongoDB connection string# See https://docs.mongodb.com/manual/reference/connection-string/ for details#mongodb_uri = mongodb://localhost/graylogmongodb_uri = mongodb://mongo/graylog# Authenticate against the MongoDB server#'+'-signs in the username or password need to be replaced by'%2B'#mongodb_uri = mongodb://grayloguser:secret@localhost:27017/graylog# Use a replica set instead of a single host#mongodb_uri = mongodb://grayloguser:secret@localhost:27017,localhost:27018,localhost:27019/graylog?replicaSet=rs01# DNS Seedlist https://docs.mongodb.com/manual/reference/connection-string/#dns-seedlist-connection-format#mongodb_uri = mongodb+srv://server.example.org/graylog# Increase this value according to the maximum connections your MongoDB server can handle from a single client# if you encounter MongoDB connection problems.mongodb_max_connections = 1000# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5,# then 500 threads can block. More than that and an exception will be thrown.# http://api.mongodb.com/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultipliermongodb_threads_allowed_to_block_multiplier = 5# Email transport#transport_email_enabled = false#transport_email_hostname = mail.example.com#transport_email_port = 587#transport_email_use_auth = true#transport_email_auth_username = you@example.com#transport_email_auth_password = secret#transport_email_subject_prefix = [graylog]#transport_email_from_email = graylog@example.com# Encryption settings## ATTENTION:#    Using SMTP with STARTTLS *and* SMTPS at the same time is *not* possible.# Use SMTP with STARTTLS, see https://en.wikipedia.org/wiki/Opportunistic_TLS#transport_email_use_tls = true# Use SMTP over SSL (SMTPS), see https://en.wikipedia.org/wiki/SMTPS# This is deprecated on most SMTP services!#transport_email_use_ssl = false# Specify and uncomment this if you want to include links to the stream in your stream alert mails.# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.#transport_email_web_interface_url = https://graylog.example.com# The default connect timeout for outgoing HTTP connections.# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).# Default: 5s#http_connect_timeout = 5s# The default read timeout for outgoing HTTP connections.# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).# Default: 10s#http_read_timeout = 10s# The default write timeout for outgoing HTTP connections.# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).# Default: 10s#http_write_timeout = 10s# HTTP proxy for outgoing HTTP connections# ATTENTION: If you configure a proxy, make sure to also configure the"http_non_proxy_hosts"option so internal#            HTTP connections with other nodes does not go through the 
proxy.# Examples:#   - http://proxy.example.com:8123#   - http://username:password@proxy.example.com:8123#http_proxy_uri =# A list of hosts that should be reached directly, bypassing the configured proxy server.# This is a list of patterns separated by",". The patterns may start or end with a"*"for wildcards.# Any host matching one of these patterns will be reached through a direct connection instead of through a proxy.# Examples:#   - localhost,127.0.0.1#   - 10.0.*,*.example.com#http_non_proxy_hosts =# Disable the optimization of Elasticsearch indices after index cycling. This may take some load from Elasticsearch# on heavily used systems with large indices, but it will decrease search performance. The default is to optimize# cycled indices.## ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the'System / Indices'page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.#disable_index_optimization = true# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch# on heavily used systems with large indices, but it will decrease search performance. The default is 1.## ATTENTION: These settings have been moved to the database in Graylog 2.2.0. When you upgrade, make sure to set these#            to your previous settings so they will be migrated to the database!#            This configuration setting is only used on the first start of Graylog. After that,#            index related settings can be changed in the Graylog web interface on the'System / Indices'page.#            Also see http://docs.graylog.org/en/2.3/pages/configuration/index_model.html#index-set-configuration.#index_optimization_max_num_segments = 1# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a system notification# will be generated to warn the administrator about possible problems with the system. Default is 1 second.#gc_warning_threshold = 1s# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.#ldap_connection_timeout = 2000# Disable the use of SIGAR for collecting system stats#disable_sigar = false# The default cache time for dashboard widgets. (Default: 10 seconds, minimum: 1 second)#dashboard_widget_default_cache_time = 10s# For some cluster-related REST requests, the node must query all other nodes in the cluster. This is the maximum number# of threads available for this. Increase it, if'/cluster/*'requests take long to complete.# Should be http_thread_pool_size * average_cluster_size if you have a high number of concurrent users.proxied_requests_thread_pool_size = 32# The server is writing processing status information to the database on a regular basis. This setting controls how# often the data is written to the database.# Default: 1s (cannot be less than 1s)#processing_status_persist_interval = 1s# Configures the threshold for detecting outdated processing status records. 
Any records that haven't been updated# in the configured threshold will be ignored.# Default: 1m (one minute)#processing_status_update_threshold = 1m# Configures the journal write rate threshold for selecting processing status records. Any records that have a lower# one minute rate than the configured value might be ignored. (dependent on number of messages in the journal)# Default: 1#processing_status_journal_write_rate_threshold = 1# Configures the prefix used for graylog event indices# Default: gl-events#default_events_index_prefix = gl-events# Configures the prefix used for graylog system event indices# Default: gl-system-events#default_system_events_index_prefix = gl-system-events# Automatically load content packs in "content_packs_dir" on the first start of Graylog.#content_packs_loader_enabled = false# The directory which contains content packs which should be loaded on the first start of Graylog.#content_packs_dir = /usr/share/graylog/data/contentpacks# A comma-separated list of content packs (files in "content_packs_dir") which should be applied on# the first start of Graylog.# Default: empty#content_packs_auto_install = grok-patterns.json# The allowed TLS protocols for system wide TLS enabled servers. (e.g. message inputs, http interface)# Setting this to an empty value, leaves it up to system libraries and the used JDK to chose a default.# Default: TLSv1.2,TLSv1.3  (might be automatically adjusted to protocols supported by the JDK)#enabled_tls_protocols= TLSv1.2,TLSv1.3
  • Create the log4j2.xml file
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="org.graylog2.log4j" shutdownHook="disable">
    <Appenders>
        <Console name="STDOUT" target="SYSTEM_OUT">
            <PatternLayout pattern="%d %-5p: %c - %m%n"/>
        </Console>
        <!-- Internal Graylog log appender. Please do not disable. This makes internal log messages available via REST calls. -->
        <Memory name="graylog-internal-logs" bufferSize="500"/>
    </Appenders>
    <Loggers>
        <!-- Application Loggers -->
        <Logger name="org.graylog2" level="info"/>
        <Logger name="com.github.joschi.jadconfig" level="warn"/>
        <!-- Prevent DEBUG message about Lucene Expressions not found. -->
        <Logger name="org.elasticsearch.script" level="warn"/>
        <!-- Disable messages from the version check -->
        <Logger name="org.graylog2.periodical.VersionCheckThread" level="off"/>
        <!-- Silence chatty natty -->
        <Logger name="com.joestelmach.natty.Parser" level="warn"/>
        <!-- Silence Kafka log chatter -->
        <Logger name="org.graylog.shaded.kafka09.log.Log" level="warn"/>
        <Logger name="org.graylog.shaded.kafka09.log.OffsetIndex" level="warn"/>
        <Logger name="org.apache.kafka.clients.consumer.ConsumerConfig" level="warn"/>
        <!-- Silence useless session validation messages -->
        <Logger name="org.apache.shiro.session.mgt.AbstractValidatingSessionManager" level="warn"/>
        <Root level="warn">
            <AppenderRef ref="STDOUT"/>
            <AppenderRef ref="graylog-internal-logs"/>
        </Root>
    </Loggers>
</Configuration>
  • Create the docker-compose_graylog.yml file; note that it mounts ./graylog/config into the container, so the graylog.conf and log4j2.xml files created above must be placed in that directory
version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    container_name: mongo
    image: mongo:3
    volumes:
      - mongo_data:/data/db
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/docker.html
  elasticsearch:
    container_name: es
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    volumes:
      - es_data:/usr/share/elasticsearch/data
    environment:
      - TZ=Asia/Shanghai
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 4g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    container_name: graylog
    image: graylog/graylog:4.1
    volumes:
      - graylog_journal:/usr/share/graylog/data/journal
      - ./graylog/config:/usr/share/graylog/data/config
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      #- GRAYLOG_HTTP_EXTERNAL_URI=http://1.1.1.1:9000/ # public URL for external access; safe to leave commented out
      - TZ=Asia/Shanghai
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201-12205:12201-12205/udp

# Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
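The GRAYLOG_ROOT_PASSWORD_SHA2 value above is simply the SHA-256 hash of the web login password (here: admin). To use your own password, generate a new hash and substitute it ('MyNewPassword' is a placeholder):

echo -n 'MyNewPassword' | sha256sum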
  • Install and start the stack
[root@localhost graylog]# docker-compose -f docker-compose_graylog.yml up -d
Creating network "graylog_default" with the default driver
Creating volume "graylog_mongo_data" with local driver
Creating volume "graylog_es_data" with local driver
Creating volume "graylog_graylog_journal" with local driver
Pulling mongodb (mongo:3)...
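Before opening the web interface it is worth checking that all three containers came up cleanly; a quick sanity check against the compose file above:

[root@localhost graylog]# docker-compose -f docker-compose_graylog.yml ps
[root@localhost graylog]# docker logs -f graylog   # watch for the line announcing the server is up and running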
  • After a successful start, open http://192.168.9.140:9000/system/inputs in a browser and create an input (see the API sketch below for a non-interactive alternative)
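An input can also be created without the browser through Graylog's REST API. A minimal sketch for a global GELF HTTP input, assuming the admin/admin credentials from the compose file above; the exact type strings accepted by your version can be listed via GET /api/system/inputs/types:

curl -u admin:admin -H 'Content-Type: application/json' -H 'X-Requested-By: cli' -X POST http://192.168.9.140:9000/api/system/inputs -d '{"title":"gelf-http","global":true,"type":"org.graylog2.inputs.gelf.http.GELFHttpInput","configuration":{"bind_address":"0.0.0.0","port":12201}}'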

  • Send test data
[root@localhost ~]# curl -XPOST http://127.0.0.1:12201/gelf -p0 -d '{"message":"hello Tinywan222","host":"127.0.0.1","facility":"test","topic":"meme"}'
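If a GELF TCP input is listening on the same port (the compose file maps 12201/tcp as well), a message can also be sent with netcat; GELF over TCP expects each JSON document to be terminated with a null byte:

echo -n -e '{"version":"1.1","host":"127.0.0.1","short_message":"hello via tcp"}\0' | nc -w 1 127.0.0.1 12201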

https://www.cnblogs.com/tinyw…

https://www.cnblogs.com/jonny…

Container Platform Technologies

  • Orchestration typically covers container management, scheduling, cluster definition, and service discovery. Through an orchestration engine, containers are assembled into microservice applications that fulfill business requirements.
  • A container management platform is a more general platform built on top of orchestration engines. It usually supports multiple engines, abstracts away their implementation details, and offers higher-level conveniences such as an application catalog and one-click application deployment.
  • Container-based PaaS gives microservice developers and companies a platform for developing, deploying, and managing applications, letting users focus on application development rather than the underlying infrastructure.

Container Support Technologies

  • Containers make network topology far more dynamic and complex. Dedicated solutions are needed to manage connectivity and isolation between containers, and between containers and other entities.
  • Dynamic change is a defining trait of microservice applications: when load increases, the cluster automatically creates new containers; when load decreases, surplus containers are destroyed. Containers also migrate between hosts according to resource usage, and their IPs and ports change accordingly.
  • Monitoring is critical for any infrastructure, and the dynamic nature of containers makes it even more challenging.
  • Containers frequently migrate between hosts; keeping persistent data able to migrate with them is the capability that data-management tools such as Rex-Ray provide.
  • Logs provide essential evidence for troubleshooting and incident management.
  • For a technology as young as containers, security remains a hotly debated topic; OpenSCAP is one container security tool.

Docker Accelerators (Registry Mirrors)

  • daocloud.io
  • aliyun
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-aliyun-mirror-id>.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
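To confirm the mirror is active after the restart, check the daemon info; the configured URL should appear under Registry Mirrors:

docker info | grep -A 1 'Registry Mirrors'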

Common Problems

WARNING: Found orphan containers

  • Problem: docker-compose prints the following warning when starting containers

    • WARNING: Found orphan containers (prometheus, grafana) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
  • Cause: docker-compose uses the current directory name as the default project prefix for the containers it creates, so every compose file run from the same directory is treated as one project. If other compose files in that directory have previously started containers (here: prometheus and grafana), Compose flags them as orphans and prints the warning above.


  • Solutions

    • 1. Give the project an explicit name when starting it:
    docker-compose -p node_exporter -f docker-compose_node-exporter.yml up -d
    • 2. Or run each compose file from its own directory.
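Or, as the warning itself suggests, let Compose clean up the leftover containers in place:

docker-compose -f docker-compose_node-exporter.yml up -d --remove-orphans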

Pushing Images to a Private Registry

  • Symptom: pushing an image to (or pulling from) the private registry fails with
[root@localhost docker]# docker pull 192.168.9.140:5000/wholegale39/tomcat:latest
Error response from daemon: Get https://192.168.9.140:5000/v2/: http: server gave HTTP response to HTTPS client
  • Cause: by default, Docker refuses to push or pull over plain HTTP; it expects registries to speak HTTPS
  • Solution: add an insecure-registries entry to daemon.json, restart the docker service, then push again
vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://dnw6qtuv.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.9.140:5000"]
}
[root@localhost docker]# systemctl restart docker
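After the restart, tagging and pushing to the insecure registry should succeed (a sketch assuming a local tomcat:latest image, matching the repository path in the error above):

[root@localhost docker]# docker tag tomcat:latest 192.168.9.140:5000/wholegale39/tomcat:latest
[root@localhost docker]# docker push 192.168.9.140:5000/wholegale39/tomcat:latest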

Reference Books

Docker 技术入门与实战(第 3 版)

Docker 容器技术与高可用实战

Docker:容器与容器云(第 2 版)

Docker 进阶与实战

循序渐进学 Docker

深入浅出 Docker

每天 5 分钟玩转 Docker 容器技术
