Endpoint Controller overview
- As mentioned earlier, it manages the binding between backend endpoints and a svc: it picks matching pods according to the label selector, watches which of them become ready, and binds the svc to those pods
- In practice you can also create an Endpoints object by hand to manage external nodes. This effectively imports an external service into the cluster: once the Endpoints object is bound to a svc, the external service can be reached from inside the cluster just like any internal svc (see the selector-driven sketch below and Example 1)
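A minimal sketch of the normal, selector-driven case (the name web-svc and the label app=web are hypothetical, not from the examples below): the endpoint controller watches the pods matched by the Service's selector and maintains a same-named Endpoints object holding the addresses of the ready pods.
apiVersion: v1
kind: Service
metadata:
  name: web-svc                # illustrative name; an Endpoints object "web-svc" is maintained automatically
spec:
  selector:
    app: web                   # ready pods carrying this label become the endpoints
  ports:
  - name: http
    port: 80
    targetPort: 80
# kubectl get endpoints web-svc   -> should list the ready pod IP:port pairs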
Resource specification
apiVersion: v1
kind: Endpoints
metadata:                         # object metadata
  name:
  namespace:
subsets:                          # list of endpoint subsets
- addresses:                      # endpoint addresses in the "ready" state
  - hostname <string>             # endpoint hostname (optional)
    ip <string>                   # endpoint IP address, required field
    nodeName <string>             # name of the node hosting this endpoint
    targetRef:                    # reference to the object that provides this endpoint
      apiVersion <string>         # API group/version of the referenced object
      kind <string>               # resource kind of the referenced object, usually Pod
      name <string>               # object name
      namespace <string>          # namespace the object belongs to
      fieldPath <string>          # field of the referenced object, used when not referencing the
                                  # whole object; commonly points at a single container inside a
                                  # Pod, e.g. spec.containers[1]
      uid <string>                # identifier (UID) of the object
  notReadyAddresses:              # endpoint addresses in the "not ready" state, same format as addresses
  ports:                          # list of port objects
  - name <string>                 # port name
    port <integer>                # port number, required field
    protocol <string>             # protocol; only UDP, TCP and SCTP are supported, default TCP
    appProtocol <string>          # application-layer protocol
Endpoints details
[root@k8s-master svc]# kubectl get endpoints
NAME ENDPOINTS AGE
demoapp-externalip-svc 10.244.1.102:80,10.244.1.103:80,10.244.2.97:80 + 1 more... 9m36s
demoapp-loadbalancer-svc 10.244.1.102:80,10.244.1.103:80,10.244.2.97:80 + 1 more... 3h15m
demoapp-nodeport-svc 10.244.1.102:80,10.244.1.103:80,10.244.2.97:80 + 1 more... 3h45m
demoapp-svc 10.244.1.102:80,10.244.1.103:80,10.244.2.97:80 + 1 more... 4h57m
[root@k8s-master svc]# kubectl describe ep demoapp-svc
Name: demoapp-svc
Namespace: default
Labels: <none>
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2021-07-28T19:22:06Z
Subsets:
Addresses:          10.244.1.102,10.244.1.103,10.244.2.97,10.244.2.99   # backend Pod addresses bound to this svc
NotReadyAddresses:  <none>                                              # endpoints classified as not ready receive no traffic
Ports:
Name Port Protocol
---- ---- --------
http 80 TCP
Example 1: importing an external service with Endpoints
1. Use an Endpoints object to bring the http servers at 192.168.4.100 and 192.168.4.254 into the k8s cluster and bind them to a svc
2. The httpd servers here are external services, so their readiness cannot be checked through the API server; the endpoints have to be configured by hand
[root@k8s-master svc]# cat http-endpoints-demo.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: http-external
  namespace: default
subsets:
- addresses:                      # external service addresses
  - ip: 192.168.4.100
  - ip: 192.168.4.254
  ports:
  - name: http
    port: 80
    protocol: TCP
  notReadyAddresses:              # empty for now; external backends get no automatic readiness checks
---
apiVersion: v1
kind: Service
metadata:
  name: http-external             # matched by name: no label selector needed; a Service and an Endpoints object with the same name in the same namespace are bound together
  namespace: default
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
[root@k8s-master svc]# kubectl apply -f http-endpoints-demo.yaml
endpoints/http-external created
service/http-external created
[root@k8s-master svc]# kubectl describe svc http-external
Name: http-external
Namespace: default
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.103.125.128 #svc IP
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.4.100:80,192.168.4.254:80
Session Affinity: None
Events: <none>
# access test
[root@k8s-master svc]# while true;do curl 10.103.125.128;sleep 1;done
192.168.4.254
192.168.4.100
192.168.4.100
192.168.4.254
192.168.4.100
192.168.4.254
192.168.4.100
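Because the cluster performs no readiness checks against these external backends, taking one out of rotation is also a manual step: move its address from addresses to notReadyAddresses in the manifest above and re-apply. This is only a sketch, with 192.168.4.254 picked for illustration; a not-ready address stays visible in the Endpoints object but receives no traffic from the svc.
subsets:
- addresses:
  - ip: 192.168.4.100
  notReadyAddresses:              # listed but excluded from load balancing
  - ip: 192.168.4.254
  ports:
  - name: http
    port: 80
    protocol: TCP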
iptables and ipvs proxy modes
iptables proxy mode:
- With ClusterIP in iptables proxy mode, every Service gets a corresponding set of iptables rules generated on every node (by kube-proxy)
- Traffic enters iptables and the kernel does the data scheduling, which is efficient. In this model kube-proxy no longer schedules or forwards traffic itself; it watches all Service definitions on the API server and converts them into local iptables rules. Drawback: each Service produces a large number of rules; if one Service needs ~50 rules and there are ten thousand containers, kernel performance suffers (the rule count can be inspected as shown below)
ipvs proxy mode:
- All ClusterIPs are bound to the kube-ipvs0 interface, and each Service is then defined as an IPVS virtual server using NAT forwarding; only a handful of iptables rules are still needed for things like source address translation
- ipvs mode keeps the strengths of iptables while fixing its drawback of generating huge rule sets, so its advantage grows in large clusters with many Services
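To see the rule growth for yourself while still in iptables mode, you can dump the NAT rules kube-proxy generates. This is a sketch: KUBE-SERVICES and the KUBE-SVC-*/KUBE-SEP-* chains are kube-proxy's own, but the exact output depends on the kube-proxy version.
[root@k8s-master ~]# iptables -t nat -S KUBE-SERVICES | grep demoapp-svc    # entry rule(s) for one Service
[root@k8s-master ~]# iptables -t nat -S | grep -c '^-A KUBE-'               # rough total of kube-proxy generated rules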
Example 2: changing the proxy mode from iptables to ipvs
[root@k8s-master ~]# kubectl get configmap -nkube-system
NAME DATA AGE
coredns 1 31d
extension-apiserver-authentication 6 31d
kube-flannel-cfg 2 31d
kube-proxy 2 31d
kubeadm-config 2 31d
kubelet-config-1.19 1 31d
[root@k8s-master ~]# kubectl edit cm kube-proxy -n kube-system
...
qps: 0
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""                # scheduling algorithm, defaults to round robin (rr)
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: "ipvs"                   # empty ("") by default; change it to ipvs here
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
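If you would rather not edit the ConfigMap interactively, the same change can be scripted. This is a sketch that assumes mode is still the empty string in the stored config:
[root@k8s-master ~]# kubectl -n kube-system get cm kube-proxy -o yaml \
      | sed 's/mode: ""/mode: "ipvs"/' \
      | kubectl apply -f -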
[root@k8s-master ~]# kubectl get pod -n kube-system -l k8s-app=kube-proxy
NAME READY STATUS RESTARTS AGE
kube-proxy-4shl5 1/1 Running 6 31d
kube-proxy-dw4tc 1/1 Running 7 31d
kube-proxy-xg2vf 1/1 Running 6 31d
[root@k8s-master ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy   # restart the pods by hand; in production it is best to set the mode up front
pod "kube-proxy-4shl5" deleted
pod "kube-proxy-dw4tc" deleted
pod "kube-proxy-xg2vf" deleted
[root@k8s-master ~]# ifconfig kube-ipvs0   # once the change takes effect, a virtual interface named kube-ipvs0 appears
kube-ipvs0: flags=130<BROADCAST,NOARP> mtu 1500
inet 10.97.56.1 netmask 255.255.255.255 broadcast 0.0.0.0
ether b2:09:48:a5:8c:0a txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoapp-externalip-svc ClusterIP 10.110.30.133 192.168.100.100 80/TCP 42h
demoapp-loadbalancer-svc LoadBalancer 10.110.155.70 <pending> 80:31619/TCP 45h
demoapp-nodeport-svc NodePort 10.97.56.1 <none> 80:31399/TCP 45h
demoapp-svc ClusterIP 10.97.72.1 <none> 80/TCP 47h
http-external ClusterIP 10.103.125.128 <none> 80/TCP 29h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 31d
my-grafana NodePort 10.96.4.185 <none> 80:30379/TCP 29d
myapp NodePort 10.106.116.205 <none> 80:31532/TCP 31d
[root@k8s-master ~]# ip addr show kube-ipvs0   # every svc IP can be found on kube-ipvs0, which shows all svc addresses are configured on this interface
14: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether b2:09:48:a5:8c:0a brd ff:ff:ff:ff:ff:ff
inet 10.97.56.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.110.30.133/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 192.168.100.100/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.97.72.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.103.125.128/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.4.185/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.110.155.70/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.106.116.205/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.108.171.56/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.106.239.211/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.103.145.83/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
[root@k8s-master ~]# ipvsadm -Ln   # list the IPVS rules
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 127.0.0.1:31619 rr
-> 10.244.1.102:80 Masq 1 0 0
-> 10.244.1.103:80 Masq 1 0 0
-> 10.244.2.97:80 Masq 1 0 0
-> 10.244.2.99:80 Masq 1 0 0
TCP 127.0.0.1:31994 rr
-> 192.168.4.170:9100 Masq 1 0 0
-> 192.168.4.171:9100 Masq 1 0 0
-> 192.168.4.172:9100 Masq 1 0 0
TCP 172.17.0.1:30169 rr
-> 10.244.2.82:4443 Masq 1 0 0
TCP 172.17.0.1:30379 rr
-> 10.244.1.84:3000 Masq 1 0 0
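A single virtual server can also be queried directly, for example the ClusterIP of demoapp-svc (10.97.72.1 in the kubectl get svc output above); it should list the same four pod backends under the rr scheduler:
[root@k8s-master ~]# ipvsadm -Ln -t 10.97.72.1:80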