Preface:

As the gateway component of Internet-facing systems, the traffic ingress proxy comes in many flavors: from the veteran proxies HAProxy and Nginx, to microservice API gateways such as Kong and Zuul, to the containerized Ingress specification and its implementations. These options vary widely in performance, features, extensibility, and applicable scenarios. As the cloud-native wave arrived, Envoy, a CNCF-graduated data-plane component, became known to a much wider audience. So, can this outstanding "graduate" become the standard traffic ingress component of the cloud-native era?

Background: The Many Options and Scenarios for Traffic Ingress

In the Internet world, almost every system exposed to the outside needs a network proxy. HAProxy and Nginx, which appeared early on, remain popular today; in the microservices era, API gateways with richer features and stronger management capabilities became an essential ingress component; in the container era, Kubernetes Ingress, as the entry point of a container cluster, is the standard traffic ingress proxy for containerized microservices. The core capabilities of these three typical Layer 7 proxies compare as follows:

From the comparison of core capabilities above:
  • HAProxy and Nginx, on top of basic routing, have had their performance and stability proven over many years. Nginx's downstream community OpenResty provides a mature Lua extension mechanism, which lets Nginx be applied and extended far more broadly; the API gateway Kong, for instance, is built on Nginx + OpenResty.
  • API gateways, as the foundational component for exposing microservice APIs to the outside, offer comparatively rich features and dynamic management capabilities.
  • Ingress, the standard for Kubernetes ingress traffic, varies in capability with the implementation. An Nginx-based Ingress implementation stays close to Nginx's capabilities, while Istio Ingress Gateway, built on Envoy plus the Istio control plane, is richer in features (strictly speaking, Istio Ingress Gateway is more capable than a typical Ingress implementation, but it is not implemented against the Ingress specification).
  • So the question arises: for the same traffic-ingress role, can we, under the cloud-native trend, find a comprehensive technical solution that standardizes the traffic entry point?

Envoy Core Capabilities

Envoy is "an open source edge and service proxy, designed for cloud-native applications" (envoyproxy.io). It is the third project to graduate from the Cloud Native Computing Foundation (CNCF) and currently has 13k+ stars on GitHub.

Envoy's main features:
  • A high-performance L4/L7 proxy written in modern C++.
  • Transparent proxying.
  • Traffic management: routing, traffic mirroring, traffic splitting, and more.
  • Resilience features: health checks, circuit breaking, rate limiting, timeouts, retries, and fault injection.
  • Multi-protocol support: proxying and management of HTTP/1.1, HTTP/2, gRPC, WebSocket, and more.
  • Load balancing: weighted round robin, weighted least request, ring hash, Maglev, random, and other algorithms, plus zone-aware routing and failover.
  • Dynamic configuration APIs: robust interfaces for controlling proxy behavior, enabling hot updates of Envoy's configuration (see the bootstrap sketch after this list).
  • Designed for observability: rich observability of L7 traffic, with native support for distributed tracing.
  • Hot restart, enabling seamless Envoy upgrades.
  • Custom extensions: Lua, plus the multi-language WebAssembly sandbox.
Overall, Envoy excels in both features and performance. For real-world ingress proxy scenarios, Envoy has inherent advantages and can serve as the standard technical solution for traffic ingress under the cloud-native trend:
  1. Richer features than HAProxy and Nginx
    Whereas HAProxy and Nginx provide the basic features a traffic proxy needs (more advanced features usually come through extension plugins), Envoy itself already implements, in C++, a large share of the advanced features a proxy requires, such as advanced load balancing, circuit breaking, rate limiting, fault injection, traffic mirroring, and observability. This richer feature set not only makes Envoy usable in many scenarios out of the box; its native C++ implementation also holds a clearer performance edge over plugin-based extensions.
  2. Performance on par with Nginx, far above traditional API gateways
    In terms of raw performance, Envoy is roughly on par with Nginx when proxying common protocols such as HTTP, and holds a clear advantage over traditional API gateways.
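To make the dynamic-configuration point above concrete, here is a minimal sketch of an Envoy v3 bootstrap that fetches listeners and clusters over gRPC xDS. The node identity and the management-server address (xds-server.example.local:18000) are assumptions for illustration, not from the source:

node:
  id: ingress-1                      # hypothetical node identity
  cluster: ingress
dynamic_resources:
  lds_config:                        # listeners come from the xDS server
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        - envoy_grpc: { cluster_name: xds_cluster }
  cds_config:                        # clusters come from the xDS server
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        - envoy_grpc: { cluster_name: xds_cluster }
static_resources:
  clusters:
    - name: xds_cluster              # static bootstrap cluster pointing at the management server
      type: STRICT_DNS
      connect_timeout: 1s
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}   # xDS is served over HTTP/2
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: xds-server.example.local   # hypothetical management server
                      port_value: 18000

With a bootstrap like this, the control plane can push new routes, clusters, and endpoints to Envoy at runtime without restarts, which is exactly what Contour (below) and Istio do.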

Service Mesh has now entered its second generation, represented by Istio, which consists of a Data Plane (the proxy) and a Control Plane. Istio is a productization of the Service Mesh idea and helps microservices achieve layered decoupling. Its architecture is shown below:

HTTPProxy Resource Specification

apiVersion: projectcontour.io/v1   # API group and version
kind: HTTPProxy                    # name of the CRD resource
metadata:
  name <string>
  namespace <string>               # HTTPProxy is a namespaced resource
spec:
  virtualhost <VirtualHost>        # FQDN-style virtual host, similar to host in an Ingress
    fqdn <string>                  # FQDN of the virtual host
    tls <TLS>                      # enable HTTPS; by default HTTP requests are redirected to HTTPS with a 301
      secretName <string>          # Secret resource storing the certificate and private key
      minimumProtocolVersion <string>  # minimum supported SSL/TLS protocol version
      passthrough <boolean>        # whether to enable passthrough; when enabled, the controller does not terminate the HTTPS session
      clientValidation <DownstreamValidation>  # validate the client certificate, optional
        caSecret <string>          # CA certificate used to validate the client certificate
  routes <[]Route>                 # routing rules
    conditions <[]Condition>       # traffic-matching conditions; PATH-prefix and header matching are supported
      prefix <string>              # PATH prefix match, similar to the path field in an Ingress
    permitInsecure <boolean>       # whether to disable the default HTTP-to-HTTPS redirect
    services <[]Service>           # backend services, translated into Envoy Cluster definitions
      name <string>                # service name
      port <integer>               # service port
      protocol <string>            # protocol toward the backend service; one of tls, h2, or h2c
      validation <UpstreamValidation>  # whether to validate the server certificate
        caSecret <string>
        subjectName <string>       # Subject value the certificate is required to carry
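None of the examples later in this article exercise the passthrough mode listed above. As a minimal sketch (the name, FQDN, and backend are hypothetical, not from the source): with passthrough enabled, Contour routes the raw TLS stream through a tcpproxy block and the backend terminates TLS itself.

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-passthrough-demo   # hypothetical name
  namespace: default
spec:
  virtualhost:
    fqdn: secure.ik8s.io             # hypothetical virtual host
    tls:
      passthrough: true              # Envoy does not terminate the TLS session
  tcpproxy:                          # passthrough routing uses tcpproxy rather than routes
    services:
    - name: demoapp-secure           # hypothetical backend that serves TLS on its own
      port: 443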
  • HTTPProxy advanced routing resource specification
spec:
  routes <[]Route>                 # routing rules
    conditions <[]Condition>
      prefix <string>
      header <HeaderCondition>     # request-header matching
        name <string>              # header name
        present <boolean>          # true means the mere presence of the header satisfies the condition; false is meaningless
        contains <string>          # substring the header value must contain
        notcontains <string>       # substring the header value must not contain
        exact <string>             # exact match of the header value
        notexact <string>          # exact negative match: the value must differ from the one given
    services <[]Service>           # backend services, translated into Envoy Clusters
      name <string>
      port <integer>
      protocol <string>
      weight <int64>               # service weight, used for traffic splitting
      mirror <boolean>             # traffic mirroring
      requestHeadersPolicy <HeadersPolicy>   # header policy for requests sent to the upstream server
        set <[]HeaderValue>        # add headers or set the values of existing ones
          name <string>
          value <string>
        remove <[]string>          # remove the named headers
      responseHeadersPolicy <HeadersPolicy>  # header policy for responses returned to the downstream client
    loadBalancerPolicy <LoadBalancerPolicy>  # load-balancing policy to apply
      strategy <string>            # one of Random, RoundRobin, Cookie,
                                   # and WeightedLeastRequest; default is RoundRobin
    requestHeadersPolicy <HeadersPolicy>     # route-level request-header policy
    responseHeadersPolicy <HeadersPolicy>    # route-level response-header policy
    pathRewritePolicy <PathRewritePolicy>    # URL rewriting
      replacePrefix <[]ReplacePrefix>
        prefix <string>            # PATH routing prefix
        replacement <string>       # target path to substitute for it
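To make these fields concrete, here is a hedged sketch combining a header condition with pathRewritePolicy. The /legacy prefix, the HTTPProxy name, and the reuse of the demoappv11 Service from the examples below are illustrative assumptions:

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-rewrite-demo     # hypothetical name
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /legacy              # hypothetical path prefix
    - header:
        name: User-Agent
        contains: curl             # match only curl clients
    pathRewritePolicy:
      replacePrefix:
      - prefix: /legacy
        replacement: /             # rewrite /legacy/... to /... before proxying upstream
    services:
    - name: demoappv11             # Service defined in the examples below
      port: 80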
  • HTTPProxy service resilience: health-check resource specification
spec:
  routes <[]Route>
    timeoutPolicy <TimeoutPolicy>  # timeout policy
      response <string>            # how long to wait for the server's response before timing out
      idle <string>                # after a timeout, how long Envoy keeps the connection to the client idle
    retryPolicy <RetryPolicy>      # retry policy
      count <int64>                # number of retries, default 1
      perTryTimeout <string>       # timeout for each individual retry
    healthCheckPolicy <HTTPHealthCheckPolicy>  # active health checking
      path <string>                # path (HTTP endpoint) to probe
      host <string>                # virtual host to request during the probe
      intervalSeconds <int64>      # probe interval (frequency), default 5 seconds
      timeoutSeconds <int64>       # probe timeout, default 2 seconds
      unhealthyThresholdCount <int64>  # consecutive failures before an endpoint is deemed unhealthy
      healthyThresholdCount <int64>    # consecutive successes before an endpoint is deemed healthy

Deploying Envoy (Contour)

$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
[root@k8s-master Ingress]# kubectl get ns
NAME                   STATUS   AGE
default                Active   14d
dev                    Active   13d
ingress-nginx          Active   29h
kube-node-lease        Active   14d
kube-public            Active   14d
kube-system            Active   14d
kubernetes-dashboard   Active   21h
longhorn-system        Active   21h
projectcontour         Active   39m   # newly created namespace
test                   Active   12d
[root@k8s-master Ingress]# kubectl get pod -n projectcontour
NAME                            READY   STATUS      RESTARTS   AGE
contour-5449c4c94d-mqp9b        1/1     Running     3          37m
contour-5449c4c94d-xgvqm        1/1     Running     5          37m
contour-certgen-v1.18.1-82k8k   0/1     Completed   0          39m
envoy-n2bs9                     2/2     Running     0          37m
envoy-q777l                     2/2     Running     0          37m
envoy-slt49                     1/2     Running     2          37m
[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
contour   ClusterIP      10.100.120.94   <none>        8001/TCP                     39m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   39m
# This cluster is not on an IaaS platform, so the LoadBalancer stays <pending>
# (the resource request hangs); access through the NodePort still works.
[root@k8s-master Ingress]# kubectl api-resources
NAME                        SHORTNAMES                           APIGROUP            NAMESPACED   KIND
...
extensionservices           extensionservice,extensionservices   projectcontour.io   true         ExtensionService
httpproxies                 proxy,proxies                        projectcontour.io   true         HTTPProxy
tlscertificatedelegations   tlscerts                             projectcontour.io   true         TLSCertificateDelegation
  • Create the virtual host www.ik8s.io
[root@k8s-master Ingress]# cat httpproxy-demo.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-demo
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io          # virtual host
    tls:
      secretName: ik8s-tls
      minimumProtocolVersion: "tlsv1.1"   # lowest protocol version to accept
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoapp-deploy     # backend Service
      port: 80
    permitInsecure: true       # true: serve plain-text HTTP instead of redirecting it to HTTPS
[root@k8s-master Ingress]# kubectl apply -f httpproxy-demo.yaml
httpproxy.projectcontour.io/httpproxy-demo configured
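The HTTPProxy above references a TLS Secret named ik8s-tls, whose creation is not shown in the source. A sketch of what such a Secret looks like (the base64 values are placeholders; a self-signed certificate pair works for testing):

apiVersion: v1
kind: Secret
metadata:
  name: ik8s-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder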
  • Inspect the proxy with httpproxy (or httpproxies)
[root@k8s-master Ingress]# kubectl get httpproxy
NAME             FQDN          TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-demo   www.ik8s.io   ik8s-tls     valid    Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get httpproxies
NAME             FQDN          TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-demo   www.ik8s.io   ik8s-tls     valid    Valid HTTPProxy
[root@k8s-master Ingress]# kubectl describe httpproxy httpproxy-demo
...
Spec:
  Routes:
    Conditions:
      Prefix:         /
    Permit Insecure:  true
    Services:
      Name:  demoapp-deploy
      Port:  80
  Virtualhost:
    Fqdn:  www.ik8s.io
    Tls:
      Minimum Protocol Version:  tlsv1.1
      Secret Name:               ik8s-tls
Status:
  Conditions:
    Last Transition Time:  2021-09-13T08:44:00Z
    Message:               Valid HTTPProxy
    Observed Generation:   2
    Reason:                Valid
    Status:                True
    Type:                  Valid
  Current Status:          valid
  Description:             Valid HTTPProxy
  Load Balancer:
Events:  <none>
[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
contour   ClusterIP      10.100.120.94   <none>        8001/TCP                     39m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   39m
  • Add hosts entries and test access
[root@bigyong ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
...
192.168.54.171   www.ik8s.io
[root@bigyong ~]# curl www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, ServerIP: 192.168.12.39!
[root@bigyong ~]# curl www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-gw6qp, ServerIP: 192.168.113.39!
  • HTTPS access
[root@bigyong ~]# curl https://www.ik8s.io:32278
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
[root@bigyong ~]# curl -k https://www.ik8s.io:32278   # ignore the untrusted certificate; access succeeds
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, Se

Example 1: Access Control (Header-Based Routing)

  • Create two Pods running different versions
[root@k8s-master Ingress]# kubectl create deployment demoappv11 --image='ikubernetes/demoapp:v1.1' -n dev
deployment.apps/demoappv11 created
[root@k8s-master Ingress]# kubectl create deployment demoappv12 --image='ikubernetes/demoapp:v1.2' -n dev
deployment.apps/demoappv12 created
  • Create the corresponding Services
[root@k8s-master Ingress]# kubectl create service clusterip demoappv11 --tcp=80 -n dev
service/demoappv11 created
[root@k8s-master Ingress]# kubectl create service clusterip demoappv12 --tcp=80 -n dev
service/demoappv12 created
[root@k8s-master Ingress]# kubectl get svc -n dev
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
demoappv11   ClusterIP   10.99.204.65   <none>        80/TCP    19s
demoappv12   ClusterIP   10.97.211.38   <none>        80/TCP    17s
[root@k8s-master Ingress]# kubectl describe svc demoappv11 -n dev
Name:              demoappv11
Namespace:         dev
Labels:            app=demoappv11
Annotations:       <none>
Selector:          app=demoappv11
Type:              ClusterIP
IP:                10.99.204.65
Port:              80  80/TCP
TargetPort:        80/TCP
Endpoints:         192.168.12.53:80
Session Affinity:  None
Events:            <none>
[root@k8s-master Ingress]# kubectl describe svc demoappv12 -n dev
Name:              demoappv12
Namespace:         dev
Labels:            app=demoappv12
Annotations:       <none>
Selector:          app=demoappv12
Type:              ClusterIP
IP:                10.97.211.38
Port:              80  80/TCP
TargetPort:        80/TCP
Endpoints:         192.168.51.79:80
Session Affinity:  None
Events:            <none>
  • Test access
[root@k8s-master Ingress]# curl 10.99.204.65
iKubernetes demoapp v1.1 !! ClientIP: 192.168.4.170, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@k8s-master Ingress]# curl 10.97.211.38
iKubernetes demoapp v1.2 !! ClientIP: 192.168.4.170, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
  • Deploy the HTTPProxy
[root@k8s-master Ingress]# cat httpproxy-headers-routing.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-headers-routing
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:                  # routing rules
  - conditions:
    - header:
        name: X-Canary     # header contains X-Canary: true
        present: true
    - header:
        name: User-Agent   # header contains curl
        contains: curl
    services:              # requests matching both conditions go to demoappv11
    - name: demoappv11
      port: 80
  - services:              # everything else goes to demoappv12
    - name: demoappv12
      port: 80
[root@k8s-master Ingress]# kubectl apply -f httpproxy-headers-routing.yaml
httpproxy.projectcontour.io/httpproxy-headers-routing unchanged
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME                        FQDN            TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-headers-routing   www.ilinux.io                valid    Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
contour   ClusterIP      10.100.120.94   <none>        8001/TCP                     114m
envoy     LoadBalancer   10.97.48.41     <pending>     80:32668/TCP,443:32278/TCP   114m
  • Test access
[root@bigyong ~]# cat /etc/hosts   # add the hosts entry
...
192.168.54.171   www.ik8s.io  www.ilinux.io
[root@bigyong ~]# curl http://www.ilinux.io   # v1.2 by default
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
[root@bigyong ~]# curl http://www.ilinux.io
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
  • Since the requests come from curl (matching the User-Agent condition), adding the header X-Canary: true satisfies both conditions, so responses come from v1.1
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@k8s-master Ingress]# kubectl delete -f httpproxy-headers-routing.yaml
httpproxy.projectcontour.io "httpproxy-headers-routing" deleted

Example 2: Traffic Splitting (Canary Release)

  • Release to a small share of the traffic first, then roll out fully once it proves stable
  • Deploy an HTTPProxy that splits traffic 90% / 10% between the two versions
[root@k8s-master Ingress]# cat httpproxy-traffic-splitting.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-splitting
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
      weight: 90   # 90% of the traffic goes to v1.1
    - name: demoappv12
      port: 80
      weight: 10   # 10% of the traffic goes to v1.2
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME                          FQDN            TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-traffic-splitting   www.ilinux.io                valid    Valid HTTPProxy
  • Test access
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done   # v1.1 : v1.2 is roughly 9:1
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!

Example 3: Traffic Mirroring

[root@k8s-master Ingress]# cat httpproxy-traffic-mirror.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-mirror
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
    - name: demoappv12
      port: 80
      mirror: true   # mirror the traffic to this service
[root@k8s-master Ingress]# kubectl apply -f httpproxy-traffic-mirror.yaml
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME                       FQDN            TLS SECRET   STATUS   STATUS DESCRIPTION
httpproxy-traffic-mirror   www.ilinux.io                valid    Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME                          READY   STATUS    RESTARTS   AGE
demoappv11-59544d568d-5gg72   1/1     Running   0          74m
demoappv12-64c664955b-lkchk   1/1     Running   0          74m
  • Test access
# every response comes from v1.1
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
  • Check the v1.2 Pod's logs: it received the same (mirrored) traffic and served it normally
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME                          READY   STATUS    RESTARTS   AGE
demoappv11-59544d568d-5gg72   1/1     Running   0          74m
demoappv12-64c664955b-lkchk   1/1     Running   0          74m
[root@k8s-master Ingress]# kubectl logs demoappv12-64c664955b-lkchk -n dev
 * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
192.168.4.170 - - [13/Sep/2021 09:35:01] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:24] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:29] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:12] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:25] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:07] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:16] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
[root@k8s-master Ingress]# kubectl delete -f httpproxy-traffic-mirror.yaml
httpproxy.projectcontour.io "httpproxy-traffic-mirror" deleted

Example 4: Custom Load-Balancing Strategy

[root@k8s-master Ingress]# cat httpproxy-lb-strategy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-lb-strategy
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
    - conditions:
      - prefix: /
      services:
        - name: demoappv11
          port: 80
        - name: demoappv12
          port: 80
      loadBalancerPolicy:
        strategy: Random   # pick a backend at random
Example 5: HTTPProxy Service Resilience and Health Checks
[root@k8s-master Ingress]# cat httpproxy-retry-timeout.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-retry-timeout
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - timeoutPolicy:
      response: 2s           # a response not received within 2s counts as a timeout
      idle: 5s               # idle time Envoy keeps the client connection open afterwards
    retryPolicy:
      count: 3               # retry up to 3 times
      perTryTimeout: 500ms   # timeout for each retry
    services:
    - name: demoappv12
      port: 80
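The manifest above covers timeoutPolicy and retryPolicy; the healthCheckPolicy from the resource specification earlier has no example in the source. A hedged sketch of adding active health checking to the same route (the /livez probe path, the HTTPProxy name, and the threshold values are assumptions):

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-health-check   # hypothetical name
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    healthCheckPolicy:
      path: /livez               # assumed HTTP endpoint that returns 200 when healthy
      intervalSeconds: 5         # probe every 5 seconds (the default)
      timeoutSeconds: 2          # per-probe timeout (the default)
      unhealthyThresholdCount: 3 # consecutive failures before marking an endpoint unhealthy
      healthyThresholdCount: 5   # consecutive successes before marking it healthy again
    services:
    - name: demoappv12
      port: 80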
