Introduction to the Endpoints Controller
- As mentioned earlier, the Endpoints controller manages the binding between a Service and its backend endpoints: it filters matching Pods by the Service's label selector, watches for Pods that become ready, and binds those Pods to the Service (a sketch of this selector-driven flow follows this list).
- In practice you can also create an Endpoints object manually to manage nodes outside the cluster. This effectively imports an external service into the cluster: bind the Endpoints object to a Service, and the external service can then be reached from inside the cluster exactly as if it were an internal Service.
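For contrast with the manual case, here is a minimal sketch of the normal, selector-driven flow that the Endpoints controller handles by itself. The Service name demoapp-auto-svc and the label app=demoapp are hypothetical and only illustrate the mechanism; it assumes a Deployment whose Pods carry that label.

apiVersion: v1
kind: Service
metadata:
  name: demoapp-auto-svc        # hypothetical name
  namespace: default
spec:
  selector:
    app: demoapp                # Pods carrying this label become the backend endpoints
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80

After applying this, kubectl get endpoints demoapp-auto-svc lists the IPs of the ready Pods, and Pods that fail their readiness probe are moved to the not-ready address list and stop receiving traffic.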
Resource specification
apiVersion: v1
kind: Endpoints
metadata:                  # object metadata
  name:
  namespace:
subsets:                   # list of endpoint subsets
- addresses:               # list of endpoint addresses in the "ready" state
  - hostname <string>      # endpoint hostname
    ip <string>            # endpoint IP address; required field (giving either hostname or ip is enough)
    nodeName <string>      # hostname of the node
    targetRef:             # reference to the object that provides this endpoint
      apiVersion <string>  # API group and version of the referenced object
      kind <string>        # resource type of the referenced object, usually Pod
      name <string>        # object name
      namespace <string>   # namespace the object belongs to
      fieldPath <string>   # field of the referenced object, used when not referencing the whole object;
                           # commonly used to reference a single container in a Pod, e.g. spec.containers[1]
      uid <string>         # object identifier
  notReadyAddresses:       # list of endpoint addresses in the "not ready" state, same format as addresses
  ports:                   # list of port objects
  - name <string>          # port name
    port <integer>         # port number, required field
    protocol <string>      # protocol; only UDP, TCP and SCTP are supported, defaults to TCP
    appProtocol <string>   # application-layer protocol
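A minimal filled-in sketch of the specification above; every name, IP and the referenced Pod below are hypothetical and only show where each field sits.

apiVersion: v1
kind: Endpoints
metadata:
  name: example-ep              # hypothetical; pairs with a Service of the same name
  namespace: default
subsets:
- addresses:                    # "ready" endpoints receive traffic
  - ip: 10.244.1.10
    nodeName: k8s-node01
    targetRef:                  # points back at the Pod providing this endpoint
      kind: Pod
      name: demoapp-abc123      # hypothetical Pod name
      namespace: default
  notReadyAddresses:            # "not ready" endpoints are kept but receive no traffic
  - ip: 10.244.2.20
  ports:
  - name: http
    port: 80
    protocol: TCP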
Endpoints details
[root@k8s-master svc]# kubectl get endpoints
NAME                       ENDPOINTS                                                     AGE
demoapp-externalip-svc     10.244.1.102:80,10.244.1.103:80,10.244.2.97:80 + 1 more...    9m36s
demoapp-loadbalancer-svc   10.244.1.102:80,10.244.1.103:80,10.244.2.97:80 + 1 more...    3h15m
demoapp-nodeport-svc       10.244.1.102:80,10.244.1.103:80,10.244.2.97:80 + 1 more...    3h45m
demoapp-svc                10.244.1.102:80,10.244.1.103:80,10.244.2.97:80 + 1 more...    4h57m
[root@k8s-master svc]# kubectl describe ep demoapp-svc
Name:         demoapp-svc
Namespace:    default
Labels:       <none>
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2021-07-28T19:22:06Z
Subsets:
  Addresses:          10.244.1.102,10.244.1.103,10.244.2.97,10.244.2.99   # backend Pod addresses bound to the Service
  NotReadyAddresses:  <none>                                              # endpoints classified as not ready receive no traffic
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  80    TCP
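To compare a live object against the specification above, the same Endpoints can also be dumped as YAML; the output below is a sketch trimmed to the fields already shown by describe (targetRef and metadata details omitted).

[root@k8s-master svc]# kubectl get endpoints demoapp-svc -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: demoapp-svc
  namespace: default
subsets:
- addresses:
  - ip: 10.244.1.102
  - ip: 10.244.1.103
  - ip: 10.244.2.97
  - ip: 10.244.2.99
  ports:
  - name: http
    port: 80
    protocol: TCP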
Example 1: Importing an external service with Endpoints
1. Use an Endpoints object to bring the HTTP services running on 192.168.4.100 and 192.168.4.254 into the Kubernetes cluster and bind them to a Service.
2. The httpd services here are external to the cluster, so their readiness cannot be checked through the API server; the endpoint addresses have to be configured by hand.
[root@k8s-master svc]# cat http-endpoints-demo.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: http-external
  namespace: default
subsets:
- addresses:            # external service addresses
  - ip: 192.168.4.100
  - ip: 192.168.4.254
  ports:
  - name: http
    port: 80
    protocol: TCP
  notReadyAddresses:
---
apiVersion: v1
kind: Service
metadata:
  name: http-external   # matched by name, no label selector needed: a Service and an Endpoints
                        # object with the same name in the same namespace are associated with each other
  namespace: default
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
[root@k8s-master svc]# kubectl apply -f http-endpoints-demo.yaml
endpoints/http-external created
service/http-external created
[root@k8s-master svc]# kubectl describe svc http-external
Name:              http-external
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.103.125.128     # Service IP
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         192.168.4.100:80,192.168.4.254:80
Session Affinity:  None
Events:            <none>
# access test
[root@k8s-master svc]# while true;do curl 10.103.125.128;sleep 1;done
192.168.4.254
192.168.4.100
192.168.4.100
192.168.4.254
192.168.4.100
192.168.4.254
192.168.4.100
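Inside the cluster the same external endpoints are also reachable through the Service's DNS name rather than the ClusterIP. A quick sketch, assuming the default cluster.local domain and a throwaway client Pod; the Pod name and the curl image are arbitrary choices, not part of the example above.

# run a one-off client Pod and curl the Service by its DNS name
[root@k8s-master svc]# kubectl run client --image=curlimages/curl --rm -it --restart=Never \
    --command -- curl -s http://http-external.default.svc.cluster.local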
iptables and IPVS proxy modes
iptables proxy mode:
- In iptables proxy mode, kube-proxy generates the corresponding iptables rules for each Service's ClusterIP on every node.
- Traffic path: user space --> iptables (in the kernel, which performs the actual load balancing) --> dispatched back to user space, which is efficient. In this model kube-proxy no longer forwards traffic itself; it watches every Service definition on the API server and translates them into local iptables rules (the sketch below shows the shape of those rules). Drawback: in iptables mode a single Service produces a large number of rules; if one Service already needs some 50 rules, then with ten thousand containers kernel performance starts to suffer.
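To see what those rules look like, the nat table can be inspected on any node. The sketch below only illustrates the chain layout kube-proxy uses (KUBE-SERVICES -> per-Service KUBE-SVC-* -> per-endpoint KUBE-SEP-*); the chain hash suffixes and the probability value are placeholders, not output captured from this cluster.

# each ClusterIP:port is matched in KUBE-SERVICES and jumps to its per-Service chain
[root@k8s-master ~]# iptables -t nat -S KUBE-SERVICES | grep demoapp-svc
-A KUBE-SERVICES -d 10.97.72.1/32 -p tcp -m comment --comment "default/demoapp-svc:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-XXXXXXXXXXXXXXXX
# the per-Service chain spreads traffic across per-endpoint chains with random probabilities
[root@k8s-master ~]# iptables -t nat -S KUBE-SVC-XXXXXXXXXXXXXXXX
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.25 -j KUBE-SEP-AAAAAAAAAAAAAAAA
...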
IPVS proxy mode:
- kube-proxy creates a kube-ipvs0 interface and binds all ClusterIPs to it; each Service is then defined as an IPVS virtual server that forwards via NAT. Only a very small number of iptables rules are still needed, for functions such as source address translation.
- IPVS mode keeps the strengths of iptables mode while fixing its weakness of generating huge numbers of rules; the advantage becomes more pronounced in large clusters with many Services (the prerequisites are sketched below).
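Before switching it is worth making sure the IPVS kernel modules and the ipvsadm tool are present on every node, since kube-proxy falls back to iptables mode when they are missing. A minimal sketch, assuming a CentOS-style node like the one used in this lab:

# load the IPVS modules (ip_vs_rr backs the default round-robin scheduler)
[root@k8s-master ~]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done
[root@k8s-master ~]# lsmod | grep ip_vs          # confirm the modules are loaded
[root@k8s-master ~]# yum install -y ipvsadm      # tool used below to inspect the IPVS rules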
Example 2: Changing the proxy mode from iptables to IPVS
[root@k8s-master ~]# kubectl get configmap -n kube-system
NAME                                 DATA   AGE
coredns                              1      31d
extension-apiserver-authentication   6      31d
kube-flannel-cfg                     2      31d
kube-proxy                           2      31d
kubeadm-config                       2      31d
kubelet-config-1.19                  1      31d
[root@k8s-master ~]# kubectl edit cm kube-proxy -n kube-system
...
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""              # scheduling algorithm, defaults to round robin
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"                 # empty by default; changed to ipvs here
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
[root@k8s-master ~]# kubectl get pod -n kube-system -l k8s-app=kube-proxy
NAME               READY   STATUS    RESTARTS   AGE
kube-proxy-4shl5   1/1     Running   6          31d
kube-proxy-dw4tc   1/1     Running   7          31d
kube-proxy-xg2vf   1/1     Running   6          31d
[root@k8s-master ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy   # restart the Pods by hand; in production it is better to set the mode up front
pod "kube-proxy-4shl5" deleted
pod "kube-proxy-dw4tc" deleted
pod "kube-proxy-xg2vf" deleted
[root@k8s-master ~]# ifconfig kube-ipvs0   # once the change takes effect there is a kube-ipvs0 virtual interface
kube-ipvs0: flags=130<BROADCAST,NOARP>  mtu 1500
        inet 10.97.56.1  netmask 255.255.255.255  broadcast 0.0.0.0
        ether b2:09:48:a5:8c:0a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@k8s-master ~]# kubectl get svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
demoapp-externalip-svc     ClusterIP      10.110.30.133    192.168.100.100   80/TCP         42h
demoapp-loadbalancer-svc   LoadBalancer   10.110.155.70    <pending>         80:31619/TCP   45h
demoapp-nodeport-svc       NodePort       10.97.56.1       <none>            80:31399/TCP   45h
demoapp-svc                ClusterIP      10.97.72.1       <none>            80/TCP         47h
http-external              ClusterIP      10.103.125.128   <none>            80/TCP         29h
kubernetes                 ClusterIP      10.96.0.1        <none>            443/TCP        31d
my-grafana                 NodePort       10.96.4.185      <none>            80:30379/TCP   29d
myapp                      NodePort       10.106.116.205   <none>            80:31532/TCP   31d
[root@k8s-master ~]# ip addr show kube-ipvs0   # every Service IP shows up on kube-ipvs0, i.e. all Service addresses are configured on this interface
14: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether b2:09:48:a5:8c:0a brd ff:ff:ff:ff:ff:ff
    inet 10.97.56.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.110.30.133/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.100.100/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.97.72.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.103.125.128/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.4.185/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.110.155.70/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.106.116.205/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.108.171.56/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.106.239.211/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.103.145.83/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
[root@k8s-master ~]# ipvsadm -Ln            # list the IPVS rules
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  127.0.0.1:31619 rr
  -> 10.244.1.102:80              Masq    1      0          0
  -> 10.244.1.103:80              Masq    1      0          0
  -> 10.244.2.97:80               Masq    1      0          0
  -> 10.244.2.99:80               Masq    1      0          0
TCP  127.0.0.1:31994 rr
  -> 192.168.4.170:9100           Masq    1      0          0
  -> 192.168.4.171:9100           Masq    1      0          0
  -> 192.168.4.172:9100           Masq    1      0          0
TCP  172.17.0.1:30169 rr
  -> 10.244.2.82:4443             Masq    1      0          0
TCP  172.17.0.1:30379 rr
  -> 10.244.1.84:3000             Masq    1      0          0
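The same change can be scripted instead of going through kubectl edit; a minimal sketch that rewrites the mode field in the kube-proxy ConfigMap and then bounces the kube-proxy Pods, assuming mode is still at its empty default:

# patch the ConfigMap in place, then delete the kube-proxy Pods so the DaemonSet recreates them with the new mode
[root@k8s-master ~]# kubectl -n kube-system get cm kube-proxy -o yaml | \
    sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -
[root@k8s-master ~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy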