
Kubernetes (k8s) Notes 27: Ingress (Part 2) - Envoy

Preface:

As the front door of Internet-facing systems, the ingress traffic proxy has many candidate implementations: from the veteran proxies HAProxy and Nginx, to microservice API gateways such as Kong and Zuul, to the containerized Ingress specification and its implementations. These options differ widely in features, performance, extensibility, and applicable scenarios. With the arrival of the cloud-native wave, Envoy, a CNCF-graduated data-plane component, has become known to a much wider audience. Can this outstanding "graduate" become the standard ingress component of the cloud-native era?

Background: The Many Options and Scenarios for Traffic Ingress

In the Internet world, almost every system that needs to be exposed externally requires a network proxy: HAProxy and Nginx, which appeared early on, are still popular today; in the microservices era, API gateways with richer features and stronger control capabilities became an essential ingress component; and in the container era, Kubernetes Ingress is the entry point of the container cluster and the de facto standard for ingress traffic proxying of containerized microservices. The core capabilities of these three typical classes of layer-7 proxies compare as follows:

  • Looking at this comparison of core capabilities:
  • HAProxy and Nginx provide the basic routing features a proxy needs, with performance and stability proven over many years. Nginx's downstream community OpenResty offers mature Lua extensibility, which lets Nginx be applied and extended much more broadly; the API gateway Kong, for example, is built on Nginx plus OpenResty.
  • API gateways, as the basic component for exposing microservice APIs externally, provide fairly rich features and dynamic control capabilities.
  • Ingress is the Kubernetes standard for ingress traffic, and its concrete capabilities depend on the implementation: an Nginx-based Ingress behaves much like Nginx, while the Istio Ingress Gateway is built on Envoy plus the Istio control plane and is functionally richer (in essence the Istio Ingress Gateway is more capable than a typical Ingress implementation, but it does not follow the Ingress specification).
  • Which raises the question: for the same ingress role, can we find a technically comprehensive solution that standardizes the traffic entry point under the cloud-native trend?

Overview of Envoy's Core Capabilities

Envoy is an open-source edge and service proxy designed for cloud-native applications ("Envoy is an open source edge and service proxy, designed for cloud-native applications", envoyproxy.io). It is the third project to graduate from the Cloud Native Computing Foundation (CNCF) and currently has 13k+ stars on GitHub.

  • Envoy's main features:
  • An L4/L7 high-performance proxy built on modern C++.
  • Transparent proxying.
  • Traffic management: routing, traffic mirroring, traffic splitting, and more.
  • Resilience and governance: health checking, circuit breaking, rate limiting, timeouts, retries, and fault injection.
  • Multi-protocol support: proxying and management of HTTP/1.1, HTTP/2, gRPC, WebSocket, and other protocols.
  • Load balancing: weighted round robin, weighted least request, ring hash, Maglev, random, and other algorithms, plus zone-aware routing and failover.
  • Dynamic configuration APIs: robust interfaces for controlling proxy behavior, allowing Envoy's configuration to be hot-updated.
  • Designed for observability: rich observability of layer-7 traffic, with native support for distributed tracing.
  • Hot restart support, enabling seamless Envoy upgrades.
  • Custom extensibility: Lua plugins and the multi-language WebAssembly extension sandbox.
  • Overall, Envoy is a "straight-A student" that excels in both features and performance. For real-world ingress traffic proxying it has inherent advantages and can serve as the standard ingress solution under the cloud-native trend:
  1. Richer features than HAProxy and Nginx
    Whereas HAProxy and Nginx provide the basic features a traffic proxy needs (more advanced features usually have to be added through extension plugins), Envoy already implements a large set of advanced proxy features natively in C++, such as advanced load balancing, circuit breaking, rate limiting, fault injection, traffic mirroring, and observability. The richer feature set makes Envoy usable in many scenarios out of the box, and the native C++ implementations have a clearer performance advantage than plugin-based extensions.
  2. Performance on par with Nginx, far above traditional API gateways
    In terms of performance, Envoy is roughly on par with Nginx when proxying common protocols such as HTTP, and it clearly outperforms traditional API gateways.

Service Mesh has now entered its second generation, represented by Istio, which consists of a data plane (the proxy) and a control plane. Istio is a productized realization of Service Mesh and helps decouple the layers of a microservice architecture.

HTTPProxy Resource Specification

Contour is an Ingress controller that uses Envoy as its data plane; besides the standard Ingress resource, it defines the HTTPProxy CRD (API group projectcontour.io) to expose Envoy's richer routing capabilities. Its fields are outlined below.

apiVersion: projectcontour.io/v1  # API group and version
kind: HTTPProxy  # kind of the CRD resource
metadata:
  name <string>
  namespace <string>  # HTTPProxy is a namespace-scoped resource
spec:
  virtualhost <VirtualHost>  # defines an FQDN-format virtual host, similar to host in Ingress
    fqdn <string>  # FQDN-format name of the virtual host
    tls <TLS>  # enables HTTPS; by default HTTP requests are redirected to HTTPS with a 301
      secretName <string>  # name of the Secret storing the certificate and private key
      minimumProtocolVersion <string>  # minimum supported SSL/TLS protocol version
      passthrough <boolean>  # whether to enable passthrough mode; when enabled the controller does not terminate the TLS session
      clientValidation <DownstreamValidation>  # validation of client certificates, optional
        caSecret <string>  # Secret holding the CA certificate used to validate client certificates
  routes <[]Route>  # routing rules
    conditions <[]Condition>  # traffic-matching conditions; path-prefix and header matching are supported
      prefix <string>  # path-prefix match, similar to the path field in Ingress
    permitInsecure <boolean>  # whether to disable the default redirect of HTTP to HTTPS
    services <[]Service>  # backend services, translated into Envoy Cluster definitions
      name <string>  # service name
      port <integer>  # service port
      protocol <string>  # protocol used to reach the backend service; valid values are tls, h2 and h2c
      validation <UpstreamValidation>  # whether to validate the backend server certificate
        caSecret <string>
        subjectName <string>  # Subject value required in the certificate
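
The protocol and validation fields above are not exercised by the later examples, so here is a minimal sketch of a route that re-encrypts traffic to its backend over TLS. The resource name, the host, the backend Service webapp, and the Secrets example-tls and webapp-ca are illustrative assumptions, not objects defined elsewhere in this post:

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-upstream-tls    # hypothetical example name
  namespace: default
spec:
  virtualhost:
    fqdn: secure.ik8s.io          # assumed virtual host
    tls:
      secretName: example-tls     # assumed Secret with the serving certificate for the virtual host
  routes:
  - conditions:
    - prefix: /
    services:
    - name: webapp                # assumed backend Service
      port: 443
      protocol: tls               # re-encrypt the connection to the backend
      validation:
        caSecret: webapp-ca       # assumed Secret holding the CA used to verify the backend certificate
        subjectName: webapp.default.svc   # Subject expected in the backend certificate
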
  • HTTPProxy advanced routing resource specification
spec:
  routes <[]Route>  # routing rules
    conditions <[]Condition>
      prefix <string>
      header <HeaderCondition>  # request header match
        name <string>  # header name
        present <boolean>  # true means the condition is met when the header exists; a value of false is meaningless
        contains <string>  # substring the header value must contain
        notcontains <string>  # substring the header value must not contain
        exact <string>  # exact match of the header value
        notexact <string>  # exact negative match, i.e. the value must not equal the given string
    services <[]Service>  # backend services, translated into Envoy Clusters
      name <string>
      port <integer>
      protocol <string>
      weight <int64>  # service weight, used for traffic splitting
      mirror <boolean>  # mirror traffic to this service
      requestHeadersPolicy <HeadersPolicy>  # header policy for requests sent to the upstream server
        set <[]HeaderValue>  # add headers or set the value of the given headers
          name <string>
          value <string>
        remove <[]string>  # remove the given headers
      responseHeadersPolicy <HeadersPolicy>  # header policy for responses returned to the downstream client
    loadBalancerPolicy <LoadBalancerPolicy>  # load-balancing policy to apply
      strategy <string>  # strategy to use; Random, RoundRobin, Cookie
                         # and WeightedLeastRequest are supported; default is RoundRobin
    requestHeadersPolicy <HeadersPolicy>  # route-level request header policy
    responseHeadersPolicy <HeadersPolicy>  # route-level response header policy
    pathRewritePolicy <PathRewritePolicy>  # URL rewriting
      replacePrefix <[]ReplacePrefix>
        prefix <string>  # path prefix to match
        replacement <string>  # target path to substitute
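
Path rewriting and the header policies are likewise not demonstrated by the later examples, so the sketch below shows how they attach to a route. The resource name, the /legacy prefix, the header names, and the backend Service webapp are illustrative assumptions:

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-rewrite         # hypothetical example name
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io
  routes:
  - conditions:
    - prefix: /legacy             # requests arriving under /legacy ...
    pathRewritePolicy:
      replacePrefix:
      - prefix: /legacy
        replacement: /            # ... are forwarded upstream with the prefix rewritten to /
    requestHeadersPolicy:
      set:
      - name: X-Forwarded-Proto   # header set on the request before it reaches the backend
        value: https
      remove:
      - X-Internal-Debug          # hypothetical header stripped from the request
    services:
    - name: webapp                # assumed backend Service
      port: 80
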
  • HTTPProxy service resilience and health-check resource specification
spec:
  routes <[]Route>
    timeoutPolicy <TimeoutPolicy>  # timeout policy
      response <string>  # how long to wait for the server's response
      idle <string>  # how long Envoy keeps the idle connection to the client after a timeout
    retryPolicy <RetryPolicy>  # retry policy
      count <int64>  # number of retries, default 1
      perTryTimeout <string>  # timeout for each retry attempt
    healthCheckPolicy <HTTPHealthCheckPolicy>  # active health checking
      path <string>  # path (HTTP endpoint) to probe
      host <string>  # virtual host to request during the probe
      intervalSeconds <int64>  # probe interval, i.e. check frequency; default 5 seconds
      timeoutSeconds <int64>  # probe timeout, default 2 seconds
      unhealthyThresholdCount <int64>  # threshold for marking an endpoint unhealthy, i.e. number of consecutive failures
      healthyThresholdCount <int64>  # threshold for marking an endpoint healthy again

Deploying Envoy (via Contour)

$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

[root@k8s-master Ingress]# kubectl get ns
NAME                   STATUS   AGE
default                Active   14d
dev                    Active   13d
ingress-nginx          Active   29h
kube-node-lease        Active   14d
kube-public            Active   14d
kube-system            Active   14d
kubernetes-dashboard   Active   21h
longhorn-system        Active   21h
projectcontour         Active   39m   # newly created namespace
test                   Active   12d

[root@k8s-master Ingress]# kubectl get pod -n projectcontour
NAME                            READY   STATUS      RESTARTS   AGE
contour-5449c4c94d-mqp9b        1/1     Running     3          37m
contour-5449c4c94d-xgvqm        1/1     Running     5          37m
contour-certgen-v1.18.1-82k8k   0/1     Completed   0          39m
envoy-n2bs9                     2/2     Running     0          37m
envoy-q777l                     2/2     Running     0          37m
envoy-slt49                     1/2     Running     2          37m

[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
contour   ClusterIP      10.100.120.94   <none>        8001/TCP                     39m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   39m  # EXTERNAL-IP stays <pending> because the cluster does not run on an IaaS platform that can provision a load balancer; the address request keeps pending, but this does not affect access via the NodePort

[root@k8s-master Ingress]# kubectl api-resources
NAME                              SHORTNAMES                           APIGROUP                       NAMESPACED   KIND
...
extensionservices                 extensionservice,extensionservices   projectcontour.io              true         ExtensionService
httpproxies                       proxy,proxies                        projectcontour.io              true         HTTPProxy
tlscertificatedelegations         tlscerts                             projectcontour.io              true         TLSCertificateDelegation
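
The HTTPProxy created below references a TLS Secret named ik8s-tls, whose creation is not shown in this post. It would typically be generated with kubectl create secret tls ik8s-tls --cert=<cert-file> --key=<key-file>, or declared with a manifest along these lines (the certificate and key data are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: ik8s-tls
  namespace: default                       # same namespace as the HTTPProxy that references it
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>    # placeholder
  tls.key: <base64-encoded private key>    # placeholder
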
  • Create the virtual host www.ik8s.io
[root@k8s-master Ingress]# cat httpproxy-demo.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-demo
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io  # virtual host
    tls:
      secretName: ik8s-tls
      minimumProtocolVersion: "tlsv1.1"  # lowest accepted TLS protocol version
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoapp-deploy   # backend Service
      port: 80
    permitInsecure: true  # permit plain-HTTP access instead of redirecting it to HTTPS

[root@k8s-master Ingress]# kubectl apply -f httpproxy-demo.yaml 
httpproxy.projectcontour.io/httpproxy-demo configured
  • View the proxy via the httpproxy (or httpproxies) resource name
[root@k8s-master Ingress]# kubectl get httpproxy 
NAME             FQDN          TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-demo   www.ik8s.io   ik8s-tls     valid    Valid HTTPProxy

[root@k8s-master Ingress]# kubectl get httpproxies
NAME             FQDN          TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-demo   www.ik8s.io   ik8s-tls     valid    Valid HTTPProxy

[root@k8s-master Ingress]# kubectl describe httpproxy  httpproxy-demo 
...
Spec:
  Routes:
    Conditions:
      Prefix:         /
    Permit Insecure:  true
    Services:
      Name:  demoapp-deploy
      Port:  80
  Virtualhost:
    Fqdn:  www.ik8s.io
    Tls:
      Minimum Protocol Version:  tlsv1.1
      Secret Name:               ik8s-tls
Status:
  Conditions:
    Last Transition Time:  2021-09-13T08:44:00Z
    Message:               Valid HTTPProxy
    Observed Generation:   2
    Reason:                Valid
    Status:                True
    Type:                  Valid
  Current Status:          valid
  Description:             Valid HTTPProxy
  Load Balancer:
Events:  <none>


[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
contour   ClusterIP      10.100.120.94   <none>        8001/TCP                     39m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   39m
  • Add a hosts entry and test access
[root@bigyong ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
...
192.168.54.171   www.ik8s.io

[root@bigyong ~]# curl  www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, ServerIP: 192.168.12.39!

[root@bigyong ~]# curl  www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-gw6qp, ServerIP: 192.168.113.39!
  • HTTPS access
[root@bigyong ~]# curl https://www.ik8s.io:32278
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

[root@bigyong ~]# curl -k https://www.ik8s.io:32278   # ignore the untrusted-certificate error; the request succeeds
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, Se

Example 1: Access Control

  • Create two Pods, each running a different version
[root@k8s-master Ingress]# kubectl create deployment demoappv11 --image='ikubernetes/demoapp:v1.1' -n dev
deployment.apps/demoappv11 created

[root@k8s-master Ingress]# kubectl create deployment demoappv12 --image='ikubernetes/demoapp:v1.2' -n dev
deployment.apps/demoappv12 created
  • Create the corresponding Services
[root@k8s-master Ingress]# kubectl create service clusterip demoappv11 --tcp=80 -n dev
service/demoappv11 created
[root@k8s-master Ingress]# kubectl create service clusterip demoappv12 --tcp=80 -n dev
service/demoappv12 created
[root@k8s-master Ingress]# kubectl get svc -n dev
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
demoappv11   ClusterIP   10.99.204.65   <none>        80/TCP    19s
demoappv12   ClusterIP   10.97.211.38   <none>        80/TCP    17s

[root@k8s-master Ingress]# kubectl describe svc demoappv11 -n dev
Name:              demoappv11
Namespace:         dev
Labels:            app=demoappv11
Annotations:       <none>
Selector:          app=demoappv11
Type:              ClusterIP
IP:                10.99.204.65
Port:              80  80/TCP
TargetPort:        80/TCP
Endpoints:         192.168.12.53:80
Session Affinity:  None
Events:            <none>

[root@k8s-master Ingress]# kubectl describe svc demoappv12 -n dev
Name:              demoappv12
Namespace:         dev
Labels:            app=demoappv12
Annotations:       <none>
Selector:          app=demoappv12
Type:              ClusterIP
IP:                10.97.211.38
Port:              80  80/TCP
TargetPort:        80/TCP
Endpoints:         192.168.51.79:80
Session Affinity:  None
Events:            <none>
  • Test access
[root@k8s-master Ingress]# curl 10.99.204.65
iKubernetes demoapp v1.1 !! ClientIP: 192.168.4.170, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@k8s-master Ingress]# curl 10.97.211.38
iKubernetes demoapp v1.2 !! ClientIP: 192.168.4.170, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
  • Deploy the Envoy HTTPProxy
[root@k8s-master Ingress]# cat httpproxy-headers-routing.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-headers-routing
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:  # routing rules
  - conditions:
    - header:
        name: X-Canary  # the X-Canary header must be present
        present: true
    - header:
        name: User-Agent  # the User-Agent header must contain "curl"
        contains: curl
    services:  # requests matching both conditions above are routed to demoappv11
    - name: demoappv11
      port: 80
  - services:  # all other requests are routed to demoappv12
    - name: demoappv12
      port: 80

[root@k8s-master Ingress]# kubectl apply -f httpproxy-headers-routing.yaml 
httpproxy.projectcontour.io/httpproxy-headers-routing unchanged
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME                        FQDN            TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-headers-routing   www.ilinux.io                valid    Valid HTTPProxy

[root@k8s-master Ingress]# kubectl  get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
contour   ClusterIP      10.100.120.94   <none>        8001/TCP                     114m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   114m
  • Test access
[root@bigyong ~]# cat /etc/hosts   # add hosts entries
...
192.168.54.171   www.ik8s.io  www.ilinux.io


[root@bigyong ~]# curl http://www.ilinux.io  # goes to v1.2 by default
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
[root@bigyong ~]# curl http://www.ilinux.io
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
  • Because the requests are sent with curl, adding an X-Canary: true header satisfies both conditions and the requests are routed to v1.1
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io 
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!


[root@k8s-master Ingress]# kubectl delete -f httpproxy-headers-routing.yaml 
httpproxy.projectcontour.io "httpproxy-headers-routing" deleted

Example 2: Traffic Splitting (Canary Release)

  • Release to a small share of the traffic first, then roll out to all traffic once no problems are found
  • Deploy an Envoy HTTPProxy splitting the traffic 90% / 10% between the two versions
[root@k8s-master Ingress]# cat httpproxy-traffic-splitting.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-splitting
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11 
      port: 80
      weight: 90   # 90% of the traffic goes to v1.1
    - name: demoappv12
      port: 80
      weight: 10  # 10% of the traffic goes to v1.2

[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME                          FQDN            TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-traffic-splitting   www.ilinux.io                valid    Valid HTTPProxy
  • Test access
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done   # the v1.1 : v1.2 ratio is roughly 9:1
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!

Example 3: Traffic Mirroring

[root@k8s-master Ingress]# cat httpproxy-traffic-mirror.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-mirror
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
    - name: demoappv12
      port: 80
      mirror: true  # mirror traffic to this service

[root@k8s-master Ingress]# kubectl apply -f httpproxy-traffic-mirror.yaml

[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME                       FQDN            TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-traffic-mirror   www.ilinux.io                valid    Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME                          READY   STATUS    RESTARTS   AGE
demoappv11-59544d568d-5gg72   1/1     Running   0          74m
demoappv12-64c664955b-lkchk   1/1     Running   0          74m
  • Test access
# all responses come from v1.1
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
  • Check the logs of the v1.2 Pod: the mirrored copies of the same requests arrive there and are answered normally
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME                          READY   STATUS    RESTARTS   AGE
demoappv11-59544d568d-5gg72   1/1     Running   0          74m
demoappv12-64c664955b-lkchk   1/1     Running   0          74m
[root@k8s-master Ingress]# kubectl logs demoappv12-64c664955b-lkchk -n dev
 * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
192.168.4.170 - - [13/Sep/2021 09:35:01] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:24] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:29] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:12] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:25] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:07] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:16] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -

[root@k8s-master Ingress]# kubectl delete  -f httpproxy-traffic-mirror.yaml 
httpproxy.projectcontour.io "httpproxy-traffic-mirror" deleted

Example 4: Custom Load-Balancing Strategy

[root@k8s-master Ingress]# cat httpproxy-lb-strategy.yaml 
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-lb-strategy
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
    - conditions:
      - prefix: /
      services:
        - name: demoappv11
          port: 80
        - name: demoappv12
          port: 80
      loadBalancerPolicy:
        strategy: Random  # random load-balancing strategy
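
The Random strategy above can be replaced by any of the strategies listed in the spec earlier. For instance, here is a sketch using the Cookie strategy, which pins a client to one backend via a session cookie managed by Envoy (the resource name is hypothetical; the rest mirrors the manifest above):

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-lb-cookie      # hypothetical example name
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
    - conditions:
      - prefix: /
      services:
        - name: demoappv11
          port: 80
        - name: demoappv12
          port: 80
      loadBalancerPolicy:
        strategy: Cookie  # session affinity: repeated requests from the same client stick to one backend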

Example 5: HTTPProxy Service Resilience and Health Checks

[root@k8s-master Ingress]# cat  httpproxy-retry-timeout.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-retry-timeout
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - timeoutPolicy:
      response: 2s  # wait at most 2s for a response; no response within 2s counts as a timeout
      idle: 5s  # idle timeout of 5s
    retryPolicy:
      count: 3  # retry up to 3 times
      perTryTimeout: 500ms  # timeout for each retry attempt
    services:
    - name: demoappv12
      port: 80
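
The manifest above covers timeouts and retries only; an active health check can be attached to the same kind of route with healthCheckPolicy, following the spec shown earlier. A minimal sketch (the probe path / and the threshold values are illustrative choices, not taken from the original post):

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-health-check    # hypothetical example name
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - healthCheckPolicy:
      path: /                       # HTTP endpoint probed on each backend endpoint
      intervalSeconds: 5            # probe every 5 seconds
      timeoutSeconds: 2             # each probe times out after 2 seconds
      unhealthyThresholdCount: 3    # 3 consecutive failures mark an endpoint unhealthy
      healthyThresholdCount: 5      # 5 consecutive successes mark it healthy again
    services:
    - name: demoappv12
      port: 80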

Reference:

https://baijiahao.baidu.com/s…
