Introduction to Service
- Service: can be thought of as a load balancer for Pods. It is a standard resource type that gives a dynamic set of Pods a fixed access entry point. The component that implements Service functionality in Kubernetes is kube-proxy.
- Endpoint Controller: manages the binding between backend endpoints and the Service. Based on the label selector, it filters out the matching Pods, watches for Pods that become ready, and completes the binding between the Service and the Pods.
- Workflow: Service Controller ----> creates an Endpoints object with the same label selector; the Endpoint Controller then uses that selector to manage and watch the state of the backend Pods, completing the Service-to-Pod binding.
A Service provides load-balancing capability, but with the following limitation:
- It only offers layer-4 load balancing, with no layer-7 features. Sometimes we need richer matching rules to route requests, and layer-4 load balancing cannot support that.
The three traffic scheduling modes of kube-proxy
- 1. Userspace
  - Pod --> Service. iptables intercepts the traffic but does not do the scheduling itself. Flow: user space --> iptables (kernel) --> kube-proxy (user space) --> iptables (kernel) --> back to user space. The repeated crossings between kernel and user space make this mode inefficient.
- 2. iptables
  - Flow: user space --> iptables (kernel, which performs the scheduling directly) --> destination. Far more efficient. In this mode kube-proxy no longer schedules or forwards traffic itself; it watches all Service definitions on the API server and renders them into local iptables rules.
  - Drawback: each Service generates a large number of rules. If a single Service produces 50 rules, then with ten thousand containers the kernel's performance suffers.
- 3. ipvs
  - Keeps the advantages of the iptables mode while fixing its rule explosion; in large clusters with many Services the advantage is even more pronounced. A sketch for checking and switching the mode follows.
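On a kubeadm-deployed cluster (an assumption here; other installers configure this differently), the mode can be inspected and switched roughly as follows:

```bash
# Show the mode kube-proxy is actually running with
# (kube-proxy serves /proxyMode on its metrics port, 10249 by default)
curl -s localhost:10249/proxyMode

# kubeadm stores the setting in the kube-proxy ConfigMap;
# an empty mode string means the iptables default
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'

# To switch, set mode: "ipvs" in the ConfigMap (the ip_vs kernel modules
# must be loaded), then recreate the kube-proxy pods so they pick it up
kubectl -n kube-system edit configmap kube-proxy
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```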
Types of Service
- ClusterIP: exposes the Service on a cluster-internal IP address; the address is visible and reachable only inside the cluster and cannot be accessed by external clients. This is the default type. Letting Kubernetes assign the address dynamically is recommended, although an explicitly specified address is also supported.
- NodePort: an enhancement of ClusterIP. On top of the ClusterIP behavior, it opens the same port on every node to bring external traffic into the Service.
- LoadBalancer: an enhancement of NodePort that puts an external load balancer in front of the NodePorts on all nodes; requires public-cloud support.
- ExternalName: brings an external service into the cluster. It relies on the cluster's KubeDNS: the Service name resolves to a CNAME record, and DNS resolves that CNAME to the IP address of the external service, letting in-cluster workloads exchange data with it (a minimal sketch follows this list). Separately, externalIPs can be combined with ClusterIP and NodePort, using one of those IPs as the traffic entry point (see Example 4 below).
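Since there is no ExternalName demo among the examples below, here is a minimal sketch; the name external-db and the domain db.example.com are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db              # hypothetical name
  namespace: default
spec:
  type: ExternalName
  externalName: db.example.com   # hypothetical external domain
```

Pods can then reach the external service at external-db.default.svc.cluster.local; the cluster DNS answers with the CNAME, and no ClusterIP or Endpoints object is created.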
ServicePort
The Service port is mapped to the port that the application inside the Pod listens on. If a backend Pod has multiple ports and each of them should be exposed through the Service, every port must be defined separately. The request is ultimately received by the PodIP and ContainerPort.
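For illustration, a sketch of a Service exposing two ports of the same Pods; the Service name, the second port (9090), and its name are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-multiport-svc   # hypothetical name
  namespace: default
spec:
  selector:
    app: demoapp
  ports:                        # one entry per exposed port
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: metrics               # hypothetical second container port
    protocol: TCP
    port: 9090
    targetPort: 9090
```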
Service resource specification
Service is a namespace-scoped resource; it cannot span namespaces.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ...
  namespace: ...
  labels:
    key1: value1
    key2: value2
spec:
  type <string>                   # Service type; defaults to ClusterIP
  selector <map[string]string>    # equality-based label selector; multiple entries are ANDed
  ports:                          # list of Service port objects
  - name <string>                 # port name
    protocol <string>             # protocol; currently only TCP, UDP and SCTP; defaults to TCP
    port <integer>                # Service port number
    targetPort <string>           # port number or name of the backend target; a name must be defined in the Pod spec
    nodePort <integer>            # node port number; only applies to NodePort and LoadBalancer types
  clusterIP <string>              # cluster IP of the Service; letting the system assign it is recommended
  externalTrafficPolicy <string>  # external traffic policy: Local = handled on the receiving node,
                                  # Cluster = scheduled cluster-wide
  loadBalancerIP <string>         # IP address used by the external load balancer; only for LoadBalancer
  externalName <string>           # external service name, used as the Service's DNS CNAME value
```
Example 1: ClusterIP demo
```
[root@k8s-master svc]# cat services-clusterip-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-svc
  namespace: default
spec:
  clusterIP: 10.97.72.1   # no need to specify in production; it is auto-assigned, and a manual value may conflict
  selector:               # filter condition
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80        # backend Pod port
[root@k8s-master svc]# kubectl apply -f services-clusterip-demo.yaml
service/demoapp-svc created
[root@k8s-master svc]# kubectl get svc -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
demoapp-svc   ClusterIP   10.97.72.1       <none>        80/TCP         11s   app=demoapp
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP        30d   <none>
my-grafana    NodePort    10.96.4.185      <none>        80:30379/TCP   27d   app.kubernetes.io/instance=my-grafana,app.kubernetes.io/name=grafana
myapp         NodePort    10.106.116.205   <none>        80:31532/TCP   30d   app=myapp,release=stabel
[root@k8s-master svc]# curl 10.97.72.1    # reach the backend Pods through the Service IP
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
[root@k8s-master svc]# curl 10.97.72.1
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-vzb4f, ServerIP: 10.244.1.98!
[root@k8s-master svc]# kubectl describe svc demoapp-svc
Name:              demoapp-svc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=demoapp
Type:              ClusterIP
IP:                10.97.72.1
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.98:80,10.244.2.97:80   # backend endpoints
Session Affinity:  None
Events:            <none>
[root@k8s-master svc]# kubectl get pod -o wide --show-labels    # the first two Pods match the selector
NAME                          READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES   LABELS
demoapp-66db74fcfc-9wkgj      1/1     Running   0          39m   10.244.2.97    k8s-node2   <none>           <none>            app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-vzb4f      1/1     Running   0          39m   10.244.1.98    k8s-node1   <none>           <none>            app=demoapp,pod-template-hash=66db74fcfc,release=stable,track=daily
liveness-httpget-demo         1/1     Running   3          29m   10.244.1.99    k8s-node1   <none>           <none>            app=liveness
liveness-tcpsocket-demo       1/1     Running   3          29m   10.244.1.100   k8s-node1   <none>           <none>            <none>
my-grafana-7d788c5479-kpq9q   1/1     Running   4          27d   10.244.1.84    k8s-node1   <none>           <none>            app.kubernetes.io/instance=my-grafana,app.kubernetes.io/name=grafana,pod-template-hash=7d788c5479
[root@k8s-master svc]# kubectl get ep    # it is the Endpoints object that actually binds backend endpoints to the Service
NAME          ENDPOINTS                       AGE
demoapp-svc   10.244.1.98:80,10.244.2.97:80   2m33s
kubernetes    192.168.4.170:6443              30d
my-grafana    10.244.1.84:3000                27d
myapp         <none>                          30d
[root@k8s-master svc]# kubectl scale deployment demoapp --replicas=4    # scale the Deployment to 4 replicas
deployment.apps/demoapp scaled
[root@k8s-master svc]# kubectl get pod --show-labels
NAME                          READY   STATUS    RESTARTS   AGE    LABELS
demoapp-66db74fcfc-9jzs5      1/1     Running   0          18s    app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-9wkgj      1/1     Running   0          100m   app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-dw9w2      1/1     Running   0          18s    app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-vzb4f      1/1     Running   0          100m   app=demoapp,pod-template-hash=66db74fcfc,release=stable,track=daily
liveness-httpget-demo         1/1     Running   3          90m    app=liveness
liveness-tcpsocket-demo       1/1     Running   3          90m    <none>
my-grafana-7d788c5479-kpq9q   1/1     Running   4          27d    app.kubernetes.io/instance=my-grafana,app.kubernetes.io/name=grafana,pod-template-hash=7d788c5479
[root@k8s-master svc]# kubectl get ep    # the new endpoints were bound to the Service in real time
NAME          ENDPOINTS                                                   AGE
demoapp-svc   10.244.1.101:80,10.244.1.98:80,10.244.2.97:80 + 1 more...   63m
kubernetes    192.168.4.170:6443                                          30d
my-grafana    10.244.1.84:3000                                            27d
myapp         <none>                                                      30d
```
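The describe output above shows Session Affinity: None, which is why consecutive curl calls land on different Pods. If requests from the same client should stick to one Pod, a Service supports client-IP affinity; a minimal sketch of the relevant spec fields:

```yaml
spec:
  sessionAffinity: ClientIP    # default is None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800    # stickiness window; 10800s (3h) is the API default
```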
Example 2: NodePort demo
```
[root@k8s-master svc]# cat services-nodeport-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-nodeport-svc
  namespace: default
spec:
  type: NodePort
  clusterIP: 10.97.56.1   # no need to specify in production; auto-assigned, a manual value may conflict
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80        # backend Pod port
    nodePort: 31399       # no need to specify in production; auto-assigned, by default from the 30000-32767 range
[root@k8s-master svc]# kubectl apply -f services-nodeport-demo.yaml
service/demoapp-nodeport-svc created
[root@k8s-master svc]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
demoapp-66db74fcfc-9jzs5      1/1     Running   0          8m47s
demoapp-66db74fcfc-9wkgj      1/1     Running   0          109m
demoapp-66db74fcfc-dw9w2      1/1     Running   0          8m47s
demoapp-66db74fcfc-vzb4f      1/1     Running   0          109m
liveness-httpget-demo         1/1     Running   3          98m
liveness-tcpsocket-demo       1/1     Running   3          98m
my-grafana-7d788c5479-kpq9q   1/1     Running   4          27d
[root@k8s-master svc]# kubectl get svc
NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
demoapp-nodeport-svc   NodePort    10.97.56.1   <none>        80:31399/TCP   11s   # two ports are listed; 31399 is the nodePort
demoapp-svc            ClusterIP   10.97.72.1   <none>        80/TCP         72m
[root@k8s-master svc]# while true;do curl 192.168.4.171:31399;sleep 1;done    # access via node IP:port
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.1, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.0, ServerName: demoapp-66db74fcfc-dw9w2, ServerIP: 10.244.1.101!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.0, ServerName: demoapp-66db74fcfc-vzb4f, ServerIP: 10.244.1.98!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.1, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
```
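As an aside, the 30000-32767 default comes from the API server flag --service-node-port-range; on a kubeadm cluster (assuming the default static-pod manifest path) it can be checked with:

```bash
# Assumption: kubeadm's default manifest location; no output means the default range applies
grep service-node-port-range /etc/kubernetes/manifests/kube-apiserver.yaml
```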
- Although the requests above all entered through a single node, the ServerIP values show that they are still round-robined to Pods on the other node.
This is where 'externalTrafficPolicy <string>' comes in, the policy for handling external traffic:
Local: traffic is handled only by the node that receives it
Cluster: traffic may be scheduled across the whole cluster (the default)
```
[root@k8s-master ~]# kubectl edit svc demoapp-nodeport-svc
...
spec:
  clusterIP: 10.97.56.1
  externalTrafficPolicy: Local   # change the default Cluster to Local
...
[root@k8s-master svc]# kubectl scale deployment demoapp --replicas=1    # scale the Deployment down to 1 replica
deployment.apps/demoapp scaled
[root@k8s-master ~]# kubectl get pod -o wide    # the only remaining Pod runs on node2
NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
demoapp-66db74fcfc-9wkgj   1/1     Running   0          123m   10.244.2.97   k8s-node2   <none>           <none>
liveness-httpget-demo      1/1     Running   3          112m   10.244.1.99   k8s-node1   <none>           <none>
[root@k8s-master ~]# curl 192.168.4.171:31399    # via node1: fails
^C
[root@k8s-master ~]# curl 192.168.4.172:31399    # via node2: succeeds
iKubernetes demoapp v1.0 !! ClientIP: 192.168.4.170, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
```
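A non-interactive alternative to kubectl edit for the same change is kubectl patch:

```bash
kubectl patch svc demoapp-nodeport-svc -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

Note also that in the successful node2 response the ClientIP is the real source address 192.168.4.170, while the earlier Cluster-mode responses showed SNAT'ed 10.244.x.x addresses: with Local, the receiving node does not masquerade the traffic, so the client source IP is preserved.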
Example 3: LoadBalancer demo
```
[root@k8s-master svc]# cat services-loadbalancer-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-loadbalancer-svc
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80          # backend Pod port
# loadBalancerIP: 1.2.3.4   # we are not on an IaaS platform here, so no ELB can be provisioned
[root@k8s-master svc]# kubectl get svc
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demoapp-loadbalancer-svc   LoadBalancer   10.110.155.70   <pending>     80:31619/TCP   31s   # not on an IaaS platform, so EXTERNAL-IP stays <pending>, waiting for a load balancer; the Service is still reachable via NodePort
demoapp-nodeport-svc       NodePort       10.97.56.1      <none>        80:31399/TCP   30m
demoapp-svc                ClusterIP      10.97.72.1      <none>        80/TCP         102m
[root@k8s-master svc]# while true;do curl 192.168.4.171:31399;sleep 1;done    # access via the NodePort
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.1, ServerName: demoapp-66db74fcfc-2jf49, ServerIP: 10.244.1.103!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.1, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
```
Example 4: externalIPs demo
In practice NodePort still needs a load-balancing layer in front of it to provide a single entry point and high availability; moreover, newly added backend nodes are not automatically registered with that load balancer.
externalIPs: when only one or a few nodes expose the IP, a virtual IP can be used to achieve high availability.
```
[root@k8s-master ~]# ip addr add 192.168.100.100/16 dev eth0
[root@k8s-master ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:44:16:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.170/24 brd 192.168.4.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.100.100/16 scope global eth0
       valid_lft forever preferred_lft forever
[root@k8s-master svc]# ls
services-clusterip-demo.yaml  services-externalip-demo.yaml  services-loadbalancer-demo.yaml  services-nodeport-demo.yaml
[root@k8s-master svc]# cat services-externalip-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-externalip-svc
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80      # backend Pod port
  externalIPs:
  - 192.168.100.100     # in practice the virtual IP can be provided via haproxy etc. for high availability
[root@k8s-master svc]# kubectl apply -f services-externalip-demo.yaml
service/demoapp-externalip-svc created
[root@k8s-master svc]# kubectl get svc
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
demoapp-externalip-svc     ClusterIP      10.110.30.133   192.168.100.100   80/TCP         16s
demoapp-loadbalancer-svc   LoadBalancer   10.110.155.70   <pending>         80:31619/TCP   3h6m
demoapp-nodeport-svc       NodePort       10.97.56.1      <none>            80:31399/TCP   3h36m
demoapp-svc                ClusterIP      10.97.72.1      <none>            80/TCP         4h47m
# access test
[root@k8s-master svc]# curl 192.168.100.100
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
[root@k8s-master svc]# while true;do curl 192.168.100.100;sleep 1;done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-z682r, ServerIP: 10.244.2.99!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
```
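Adding the VIP by hand with ip addr, as above, does not fail over by itself. One common way to float it between nodes, filling in the "haproxy etc." the comment alludes to with keepalived, is a VRRP instance; a minimal sketch, with the interface name, router ID and priority as assumptions:

```
# /etc/keepalived/keepalived.conf -- minimal sketch
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the standby node
    interface eth0            # NIC that should carry the VIP
    virtual_router_id 51      # must match on all nodes in this VRRP group
    priority 100              # use a lower value on the standby node
    advert_int 1
    virtual_ipaddress {
        192.168.100.100/16    # the Service's externalIP
    }
}
```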