Kubernetes Notes 06: Pod Probes and Multi-Container Pods


Pod Lifecycle

  1. Initialize the container environment
  2. Run the pause infrastructure container
  3. Init containers (there can be more than one; they run serially)
  4. Start the main container (a command or script may be executed right after startup, and a command may be executed at termination)
  5. Readiness probe
  6. Liveness probe
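
The stages above map directly onto Pod spec fields. Below is a minimal sketch of where each stage is declared (the name and commands are placeholders, not a working workload; each piece is explained in the sections that follow):

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-overview             # hypothetical name, for illustration only
spec:
  initContainers:                      # init containers, run serially before the main container
  - name: init
    image: busybox
    command: ['sh','-c','echo init done']
  containers:
  - name: main                         # main container
    image: ikubernetes/demoapp:v1.0
    lifecycle:
      postStart:                       # command allowed right after startup
        exec: {command: ['sh','-c','echo started']}
      preStop:                         # command allowed right before termination
        exec: {command: ['sh','-c','echo stopping']}
    readinessProbe:                    # readiness probe
      httpGet: {path: /readyz, port: 80}
    livenessProbe:                     # liveness probe
      httpGet: {path: /livez, port: 80}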

Init Containers
Init containers are containers that run inside a Pod before its main containers start; they mainly perform preparatory work. Init containers have the following characteristics:
Init containers always run first. If an init container fails, the cluster keeps restarting it until it succeeds; note that if the Pod's restart policy is Never, a failed init container will not be restarted.
Init containers run in the order they are defined, and they are declared under the Pod's spec.initContainers (see the overview sketch above and Example 1 in the multi-container section below).

lifecycle: Lifecycle Hooks
When creating a resource object, lifecycle can be used to manage actions performed right after the container starts and right before it shuts down.

lifecycle provides two callbacks:
  • PostStart: work performed right after the container is created, before it serves traffic; used for resource deployment, environment preparation, etc.
  • PreStop: work performed before the container is terminated; used for gracefully shutting down the application, notifying other systems, and so on.
    Note: a hook handler can be executed in one of two ways, "exec" or "HTTP" (a minimal sketch follows).
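
A minimal sketch showing both hook styles (the /hooks/started endpoint is hypothetical; a complete exec-based example appears in the postStart/preStop section later in this article):

    lifecycle:
      postStart:
        httpGet:                # HTTP-style hook: an HTTP GET against the container itself
          path: /hooks/started  # hypothetical endpoint, for illustration only
          port: 80
      preStop:
        exec:                   # exec-style hook: run a command inside the container
          command: ['/bin/sh','-c','sleep 5']   # placeholder for a real graceful-shutdown command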

Pod Status
A Pod's status is defined in the PodStatus object, which contains a phase field. It is a simple description of where the Pod is in its lifecycle. Being familiar with the various Pod phases is necessary for understanding how to set a Pod's scheduling policy and restart policy. The possible values of phase are:

Phase         Description
Pending       The Pod has been accepted by the Kubernetes system, but one or more container images have not yet been created. The waiting time includes the time to schedule the Pod and the time to download images over the network, which may take a while.
Running       The Pod has been bound to a node and all of its containers have been created. At least one container is running, or is in the process of starting or restarting.
Succeeded     All containers in the Pod have terminated successfully and will not be restarted.
Failed        All containers in the Pod have terminated, and at least one container terminated in failure; that is, it exited with a non-zero status or was terminated by the system.
Unknown       The state of the Pod could not be obtained for some reason, usually due to a communication failure with the host the Pod runs on.
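
To check which phase a Pod is currently in, the phase field can be queried directly (the pod name here is just one of the examples used later in this article):

$ kubectl get pod liveness-exec-demo -o jsonpath='{.status.phase}'
Running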

Pod Termination Process
Termination mainly proceeds through the following steps (a sketch of the related spec fields follows this list):

  1. The user issues a command to delete the pod
  2. The Pod object in the API server is updated with a grace period (30 seconds by default), beyond which the pod is considered "dead"
  3. The pod is marked as "Terminating"
  4. Running concurrently with step 3, the kubelet notices that the pod has been marked "Terminating" and starts the pod shutdown process
  5. Also concurrently with step 3, the endpoints controller notices that the pod is shutting down and removes it from the endpoints lists of the Services it matches
  6. If a preStop hook is defined in the pod, it is executed synchronously when the pod is marked "Terminating"; if the grace period expires and the preStop hook has not finished, step 2 is re-executed with an extra small grace period of 2 seconds
  7. The containers in the Pod receive the TERM signal
  8. Once the grace period ends, any process still running in the pod receives the SIGKILL signal
  9. The kubelet asks the API Server to set this Pod resource's grace period to 0 (immediate deletion), completing the delete operation
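
A minimal sketch of the spec fields involved in graceful termination (the pod name is hypothetical and the preStop command is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo                  # hypothetical name, for illustration only
spec:
  terminationGracePeriodSeconds: 30    # the grace period used in step 2 (30s is the default)
  containers:
  - name: demo
    image: ikubernetes/demoapp:v1.0
    lifecycle:
      preStop:                         # runs synchronously once the pod is marked "Terminating" (step 6)
        exec:
          command: ['/bin/sh','-c','sleep 5']   # placeholder for a real graceful-shutdown command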

Probes

Container Probes

  • StartupProbe: startup check; it lets you use parameters or thresholds different from the livenessProbe, and only after it succeeds do the LivenessProbe and ReadinessProbe configurations take effect (a hedged startupProbe sketch follows this list)
  • LivenessProbe: periodic check; when it fails, the kubelet decides whether to restart the container according to restartPolicy. If not defined, the kubelet considers the container healthy as long as it has not terminated.
  • ReadinessProbe: periodic check; when it fails, the Pod is removed from the list of available backend endpoints of the Services associated with it, and it is added back once it becomes ready again. If not defined, the container is considered ready as long as it has not terminated.
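
None of the examples below use a startupProbe, so here is a minimal sketch under stated assumptions (the /livez endpoint comes from the demoapp image used throughout this article; the thresholds are illustrative):

    startupProbe:
      httpGet:
        path: /livez
        port: 80
      failureThreshold: 30   # allow up to 30 * 10s = 300s for the application to start
      periodSeconds: 10      # liveness/readiness probes only begin once this probe succeeds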

Probe Methods
Container probing is how the kubelet diagnoses a container's health, and it covers both liveness and readiness checks. There are three main probe methods:
ExecAction: run a command inside the container and judge health by its exit status; 0 means success, anything else is failure.
TCPSocketAction: try to establish a TCP connection to a given port of the container; if the port can be opened the check succeeds, otherwise it fails.
HTTPGetAction: send an HTTP GET request to a specified URL of the container; a 2xx or 3xx response code means success, otherwise failure.

Probe Spec Reference

spec:
  containers:
  - name: ...
    image: ...
    livenessProbe:
      exec <Object>       # exec-style probe
      httpGet <Object>    # HTTP GET probe
      tcpSocket <Object>  # TCP socket probe
      initialDelaySeconds <integer> # delay before the first probe
      periodSeconds <integer>  # probe interval
      timeoutSeconds <integer> # probe timeout
      successThreshold <integer>  # success threshold
      failureThreshold <integer>  # failure threshold
  • View the detailed field documentation:
$ kubectl explain pods.spec.containers.livenessProbe.tcpSocket
KIND:     Pod
VERSION:  v1

RESOURCE: tcpSocket <Object>

DESCRIPTION:
     TCPSocket specifies an action involving a TCP port. TCP hooks not yet
     supported

     TCPSocketAction describes an action based on opening a socket

FIELDS:
   host    <string>
     Optional: Host name to connect to, defaults to the pod IP.

   port    <string> -required-
     Number or name of the port to access on the container. Number must be in
     the range 1 to 65535. Name must be an IANA_SVC_NAME.
livenessProbe: Liveness Probe Examples

Example 1: liveness probe via an exec command

[root@k8s-master Probe]# cat liveness-exec-damo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
  namespace: default
spec:
  containers:
  - name: demo
    image: ikubernetes/demoapp:v1.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command: ['/bin/sh','-c','[ "$(curl -s 127.0.0.1/livez)" == "OK" ]']
      initialDelaySeconds: 5  # wait 5 seconds before the first probe
      timeoutSeconds: 1   # probe timeout
      periodSeconds: 5   # probe every 5 seconds

$ kubectl apply -f liveness-exec-damo.yaml
$ kubectl get pod  -o wide
NAME                                READY   STATUS            RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
...
liveness-exec-demo                  1/1     Running           0          76s     10.244.2.84     k8s-node2   <none>           <none>
...

$ curl 10.244.2.84:/livez
OK

$ curl -X POST -d 'livez=FAIL' 10.244.2.84:/livez      # change the livez value via POST
$ curl 10.244.2.84:/livez
FAIL
$ kubectl describe pod liveness-exec-demo
...
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  7m38s                default-scheduler  Successfully assigned default/liveness-exec-demo to k8s-node2
  Warning  Unhealthy  89s (x3 over 99s)    kubelet            Liveness probe failed:    # liveness probe failed; the container will be restarted
  Normal   Killing    89s                  kubelet            Container demo failed liveness probe, will be restarted
  Normal   Pulled     59s (x2 over 7m33s)  kubelet            Container image "ikubernetes/demoapp:v1.0" already present on machine
  Normal   Created    59s (x2 over 7m33s)  kubelet            Created container demo
  Normal   Started    59s (x2 over 7m33s)  kubelet            Started container demo


$ kubectl get pod  -o wide
NAME                                READY   STATUS            RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
...
liveness-exec-demo                  1/1     Running           1          9m15s   10.244.2.84     k8s-node2   <none>           <none>    # RESTARTS shows the container was restarted once
...

Example 2: TCP port probe

$ cat liveness-tcpsocket-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcpsocket-demo
  namespace: default
spec:
  containers:
  - name: demo
    image: ikubernetes/demoapp:v1.0
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    securityContext:   # added so the test has permission to modify iptables rules
      capabilities:
        add:
        - NET_ADMIN
    livenessProbe:
      tcpSocket:
        port: http   # references the container port named "http" above, like a variable; if the port number changes later, this line stays the same
      periodSeconds: 5
      initialDelaySeconds: 5

$ kubectl apply -f liveness-tcpsocket-demo.yaml

$ kubectl get pod  -o wide
NAME                                READY   STATUS            RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
liveness-exec-demo                  1/1     Running           1          2d20h   10.244.2.84     k8s-node2   <none>           <none>
liveness-tcpsocket-demo             1/1     Running           0          2d19h   10.244.1.88     k8s-node1   <none>           <none>
...

$ kubectl exec liveness-tcpsocket-demo -- iptables -A INPUT -p tcp --dport 80 -j REJECT

$ kubectl describe pod  liveness-tcpsocket-demo
Name:         liveness-tcpsocket-demo
...
Events:
  Type     Reason     Age               From     Message
  ----     ------     ----              ----     -------
  Warning  Unhealthy  1s (x3 over 11s)  kubelet  Liveness probe failed: dial tcp 10.244.1.88:80: i/o timeout   # probe failed; the container will be restarted
  Normal   Killing    1s                kubelet  Container demo failed liveness probe, will be restarted

Example 3: HTTP liveness probe

The HTTP liveness check differs from Example 1 in that the probe judges health by the response status code (2xx and 3xx count as healthy), not by the response body.

$ cat liveness-httpget-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-demo
  namespace: default
spec:
  containers:
  - name: demo
    image: ikubernetes/demoapp:v1.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      httpGet:
         path: '/livez'
         port: 80
         scheme: HTTP  # connection scheme is HTTP
      initialDelaySeconds: 5  # delay the first probe by 5 seconds

$ kubectl apply -f liveness-httpget-demo.yaml

$ kubectl describe pod liveness-httpget-demo
...
    Liveness:       http-get http://:80/livez delay=5s timeout=1s period=10s #success=1 #failure=3
...
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  5m44s             default-scheduler  Successfully assigned default/liveness-httpget-demo to k8s-node1
  Normal   Pulled     5m39s             kubelet            Container image "ikubernetes/demoapp:v1.0" already present on machine
  Normal   Created    5m39s             kubelet            Created container demo
  Normal   Started    5m39s             kubelet            Started container demo
  Warning  Unhealthy  5m33s             kubelet            Liveness probe failed: Get "http://10.244.1.89:80/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers)  # the first probe failed, but the container was not restarted; harmless

$ curl http://10.244.1.89:80/livez
OK
$ curl -I http://10.244.1.89:80/livez     # check the status code: 200
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 2
Server: Werkzeug/1.0.0 Python/3.8.2
Date: Mon, 26 Jul 2021 14:31:17 GMT

$ curl -XPOST -d 'livez=FAIL' 10.244.1.89:80/livez  # set livez to FAIL
$ curl -I http://10.244.1.89:80/livez               # status code is now 506, so the liveness probe fails
HTTP/1.0 506 VARIANT ALSO NEGOTIATES
Content-Type: text/html; charset=utf-8
Content-Length: 4
Server: Werkzeug/1.0.0 Python/3.8.2
Date: Mon, 26 Jul 2021 14:35:39 GMT

$ kubectl describe pod liveness-httpget-demo
Name:         liveness-httpget-demo
...
    Liveness:       http-get http://:80/livez delay=5s timeout=1s period=10s #success=1 #failure=3
...
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  5m44s             default-scheduler  Successfully assigned default/liveness-httpget-demo to k8s-node1
  Normal   Pulled     5m39s             kubelet            Container image "ikubernetes/demoapp:v1.0" already present on machine
  Normal   Created    5m39s             kubelet            Created container demo
  Normal   Started    5m39s             kubelet            Started container demo
  Warning  Unhealthy  5m33s             kubelet            Liveness probe failed: Get "http://10.244.1.89:80/livez": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  4s (x3 over 24s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 506    # status code 506, three consecutive failures; the container is restarted
  Normal   Killing    4s                kubelet            Container demo failed liveness probe, will be restarted

Example 4: readiness probe (failure does not restart the Pod)

$ cat readiness-httpget-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: rediness-httpget-demo
  namespace: default
spec:
  containers:
  - name: demo
    image: ikubernetes/demoapp:v1.0
    imagePullPolicy: IfNotPresent
    readinessProbe:
      httpGet:
         path: '/readyz'
         port: 80
         scheme: HTTP  # connection scheme is HTTP
      initialDelaySeconds: 15  # delay the first probe by 15 seconds
      timeoutSeconds: 2
      periodSeconds: 5
      failureThreshold: 3  # 3 consecutive failures mark the probe as failed
  restartPolicy: Always  # restart policy applied when a container exits (this is the default)

$ kubectl apply -f readiness-httpget-demo.yaml 

$ kubectl get pod -o wide
NAME                                READY   STATUS             RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
centos-deployment-nq5b2             1/1     Running            1          7d16h   10.244.2.80     k8s-node2   <none>           <none>
centos-deployment-r4vsm             1/1     Running            1          7d16h   10.244.1.82     k8s-node1   <none>           <none>
liveness-exec-demo                  1/1     Running            1          2d22h   10.244.2.84     k8s-node2   <none>           <none>
liveness-httpget-demo               1/1     Running            1          57m     10.244.1.89     k8s-node1   <none>           <none>
rediness-httpget-demo               1/1     Running            0          53s     10.244.2.87     k8s-node2   <none>           <none>

$ kubectl describe pod rediness-httpget-demo
...
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  2m59s                  default-scheduler  Successfully assigned default/rediness-httpget-demo to k8s-node2
  Normal   Pulled     2m57s                  kubelet            Container image "ikubernetes/demoapp:v1.0" already present on machine
  Normal   Created    2m57s                  kubelet            Created container demo
  Normal   Started    2m57s                  kubelet            Started container demo
  Warning  Unhealthy  2m26s (x3 over 2m36s)  kubelet            Readiness probe failed: Get "http://10.244.2.87:80/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)  # first probe failed; can be ignored
  
$ curl http://10.244.2.87:80/readyz
OK
$ curl -I http://10.244.2.87:80/readyz
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 2
Server: Werkzeug/1.0.0 Python/3.8.2
Date: Mon, 26 Jul 2021 15:31:33 GMT

$ curl -XPOST -d 'readyz=FAIL' http://10.244.2.87:80/readyz  # overwrite readyz
$ curl -I http://10.244.2.87:80/readyz   # readyz now returns status code 507
HTTP/1.0 507 INSUFFICIENT STORAGE
Content-Type: text/html; charset=utf-8
Content-Length: 4
Server: Werkzeug/1.0.0 Python/3.8.2
Date: Mon, 26 Jul 2021 15:32:11 GMT

$ kubectl get pod   
NAME                                READY   STATUS             RESTARTS   AGE
liveness-exec-demo                  1/1     Running            1          2d22h
liveness-httpget-demo               1/1     Running            1          62m
mypod-port-network                  1/1     Running            1          7d16h
rediness-httpget-demo               0/1     Running            0          5m39s   # the service is not ready, but the Pod is not restarted

$ kubectl describe pod rediness-httpget-demo
Name:         rediness-httpget-demo
...
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  9m26s                 default-scheduler  Successfully assigned default/rediness-httpget-demo to k8s-node2
  Normal   Pulled     9m23s                 kubelet            Container image "ikubernetes/demoapp:v1.0" already present on machine
  Normal   Created    9m23s                 kubelet            Created container demo
  Normal   Started    9m23s                 kubelet            Started container demo
  Warning  Unhealthy  8m52s (x3 over 9m2s)  kubelet            Readiness probe failed: Get "http://10.244.2.87:80/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  4s (x13 over 4m4s)    kubelet            Readiness probe failed: HTTP probe failed with statuscode: 507    # readiness probe keeps failing


$ curl -XPOST -d 'readyz=OK' http://10.244.2.87:80/readyz   # restore the original value
$ curl -I http://10.244.2.87:80/readyz
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 2
Server: Werkzeug/1.0.0 Python/3.8.2
Date: Mon, 26 Jul 2021 15:32:51 GMT

$ kubectl get pod
NAME                                READY   STATUS            RESTARTS   AGE
...
rediness-httpget-demo               1/1     Running           0          13m

postStart and preStop Hooks

postStart hook  # the container only enters the Running state after the postStart hook finishes; similar to a command run when a docker container starts
preStop hook    # the container only transitions to the terminated state after the preStop hook completes

postStart / preStop field reference
$ kubectl explain pods.spec.containers.lifecycle.postStart   
KIND:     Pod
VERSION:  v1
...
FIELDS:          # three handler types are supported
   exec    <Object>
     One and only one of the following should be specified. Exec specifies the
     action to take.

   httpGet    <Object>
     HTTPGet specifies the http request to perform.

   tcpSocket    <Object>
     TCPSocket specifies an action involving a TCP port. TCP hooks not yet
     supported

Example 1: postStart and preStop hook demo

$ cat lifecycle-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
  namespace: default
spec:
  containers:
  - name: demo
    image: ikubernetes/demoapp:v1.0
    imagePullPolicy: IfNotPresent
    securityContext:  # add NET_ADMIN so the test can manage network rules
      capabilities:
        add:
        - NET_ADMIN
    livenessProbe:
      httpGet:
         path: '/livez'
         port: 80
         scheme: HTTP  # connection scheme is HTTP
      initialDelaySeconds: 5  # delay the first probe by 5 seconds
    lifecycle:
      postStart:   # executed at container start, similar to a docker run startup command
        exec:
          command: ['/bin/sh','-c','iptables  -t nat -A PREROUTING -p tcp  --dport 8080 -j REDIRECT  --to-ports  80']
      preStop:     # executed at container termination; harder to observe directly
        exec:
          command: ['/bin/sh','-c','while killall python3; do sleep 1; done']

$ kubectl apply -f lifecycle-demo.yaml

# kubectl exec lifecycle-demo -- iptables -t nat -vnL   # verify the iptables rule added by the postStart hook
Chain PREROUTING (policy ACCEPT 3 packets, 180 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 redir ports 80


$ kubectl get pod -o wide
NAME                                READY   STATUS             RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
lifecycle-demo                      1/1     Running            0          58s     10.244.1.90     k8s-node1   <none>           <none>
...

$ curl 10.244.1.90:8080
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: lifecycle-demo, ServerIP: 10.244.1.90!

Multi-Container Pods

A Pod can contain multiple containers. Common companion-container patterns (a hedged adapter-style sketch follows this list; Example 2 below shows a full sidecar):
Sidecar     # provides auxiliary functionality for the main container (outside -> in), connecting it to its environment, e.g. log-collection containers or proxy services
Adapter     # adapts the main container's output (requests, data) for the outside world, e.g. exporters that convert metrics into a format Prometheus can scrape
Ambassador  # the ambassador pattern: helps containers in the Pod reach external systems (inside -> out), e.g. a container that accesses external data or a database
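
Example 2 below demonstrates the sidecar pattern with envoy. For contrast, here is a minimal adapter-style sketch (the exporter image name and port are hypothetical, purely for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: adapter-demo                   # hypothetical name
spec:
  containers:
  - name: app                          # main container exposing application-specific metrics
    image: ikubernetes/demoapp:v1.0
  - name: metrics-adapter              # adapter container converting metrics into a Prometheus-readable format
    image: example/prometheus-exporter:latest   # hypothetical image
    ports:
    - containerPort: 9100              # Prometheus would scrape the adapter instead of the app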

Example 1: different privileges for different containers

1. Grant the init container the NET_ADMIN capability so it can set up port redirection
2. The main container starts normally, without NET_ADMIN

# cat init-container-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: init-container-demo
  namespace: default
spec:
  initContainers:
  - name: iptables-init
    image: ikubernetes/admin-box:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c"]   # like docker's ENTRYPOINT (args below is like CMD)
    args: ["/sbin/iptables -t nat -A PREROUTING -p tcp  --dport 8080 -j REDIRECT  --to-port 80"]
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
  containers:
  - name: demo
    image: ikubernetes/demoapp:v1.0
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80

$ kubectl apply -f init-container-demo.yaml

$ kubectl get pod -o wide
NAME                                READY   STATUS            RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
...
init-container-demo                 1/1     Running           0          22s     10.244.2.91     k8s-node2   <none>           <none>


# kubectl describe pod init-container-demo
Name:         init-container-demo
...
IPs:
  IP:  10.244.2.91
Init Containers:     # the init container section
  iptables-init:    
    Container ID:  docker://9ea2b437b9ab68feeb68 9fd017f65d7b083159269871e8fc7f3a8604f8926af9
    Image:         ikubernetes/admin-box:latest
    Image ID:      docker-pullable://ikubernetes/admin-box@sha256:56e2413d2bcc279c6667d36fa7dfc7202062a933d0f69cd8a1769b68e2155bbf
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
    Args:
      /sbin/iptables -t nat -A PREROUTING -p tcp  --dport 8080 -j REDIRECT  --to-port 80   # the command that was executed
    State:          Terminated
      Reason:       Completed
      Exit Code:    0                             # exited normally
      Started:      Tue, 27 Jul 2021 02:08:14 +0800
      Finished:     Tue, 27 Jul 2021 02:08:14 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fsshk (ro)
...

$ curl 10.244.2.91:8080  # access port 8080
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: init-container-demo, ServerIP: 10.244.2.91!

$ kubectl exec init-container-demo -- iptables -t nat -vnL  # the main container tries to view iptables rules but lacks the permission
iptables v1.8.3 (legacy): can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
command terminated with exit code 3

Example 2: running two containers in one Pod

1. Run two containers in a single Pod
2. Proxy the main container's HTTP service through envoy, whose envoy.yaml configuration is fetched from an httpd service on the host

$ cat envoy.yaml 
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: {address: 127.0.0.1, port_value: 9999}
# define static resources
static_resources:
  listeners:
  # configure a listener that receives incoming client requests
  - name: listener_0
    address:
      socket_address: {address: 0.0.0.0, port_value: 10000}
    filter_chains:
    - filters:
      # define a filter that forwards traffic
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              # match all domains (no domain binding)
              domains: ["*"]
              routes:
              - match: {prefix: "/"}
                route: {cluster: some_service}
          http_filters:
          - name: envoy.router
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    # backend address the proxy forwards to
    hosts: [{socket_address: { address: 127.0.0.1, port_value: 8080}}]


$ cat sidecar-container-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-container-demo
  namespace: default
spec:
  containers:
  - name: proxy-container
    image: envoyproxy/envoy-alpine:v1.14.1
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c"]   # like docker's ENTRYPOINT (args below is like CMD)
    args: ["sleep 5 && envoy -c /etc/envoy/envoy.yaml"]  # because the container command and the postStart hook run concurrently, wait 5 seconds so postStart can finish downloading the config file
    lifecycle:
      postStart:
        exec:
          command: ['/bin/sh','-c','wget -O /etc/envoy/envoy.yaml  http://192.168.4.254/envoy.yaml'] # download the config file to /etc/envoy/envoy.yaml inside the container
  - name: demo
    image: ikubernetes/demoapp:v1.0
    imagePullPolicy: IfNotPresent
    env:
    - name: HOST
      value: "127.0.0.1"
    - name: PORT
      value: "8080"   # the demo app listens on 8080, the backend address configured in envoy.yaml

$ kubectl apply -f sidecar-container-demo.yaml
[root@k8s-master Probe]# kubectl get pod -o wide
NAME                                READY   STATUS             RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
...
sidecar-container-demo              2/2     Running            0          39s     10.244.1.91     k8s-node1   <none>           <none>   # both containers are ready

$ curl 10.244.1.91:10000  # access the HTTP service through port 10000, proxied by envoy
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: sidecar-container-demo, ServerIP: 10.244.1.91!