Kubernetes Notes 07 — Service (Part 1): ClusterIP, NodePort, LoadBalancer


Service Overview

  • Service: can be thought of as a load balancer for Pods; it is a standard resource type. The Service controller provides a fixed access entry point for a dynamic set of Pods; the component that actually implements Service behavior in Kubernetes is kube-proxy.
  • Endpoint Controller: manages the binding of backend endpoints to the Service; it filters matching Pods by the label selector, watches Pod readiness, and binds ready Pods to the Service.
  • Workflow: Service Controller ---> creates an Endpoints object with the same label selector ---> the Endpoint Controller manages and watches backend Pod state by that label selector ---> binds the Service to the Pods.
Service provides load-balancing capability, but with the following limitation:
  • It offers only layer-4 load balancing, with no layer-7 features. Sometimes we need richer matching rules to forward requests, and layer-4 load balancing cannot support that.

Three kube-proxy data-scheduling (proxy) modes

  1. Userspace
    Userspace model: Pod --> Service; iptables intercepts the traffic but does not schedule it itself. Workflow: user space --> iptables (kernel) --> kube-proxy --> iptables (kernel) --> back to user space. Inefficient.
  2. iptables: user space --> iptables (kernel, which also does the scheduling) --> user space. More efficient.
    In the iptables model, kube-proxy no longer forwards traffic itself; instead it watches all Service definitions on the API server and translates them into local iptables rules.
    Drawback: in iptables mode a single Service generates a large number of rules. If each Service produces around 50 rules and there are ten thousand containers, kernel performance suffers.
  3. IPVS proxy mode: keeps the advantages of iptables while fixing its rule-explosion drawback; the advantage is more pronounced in large clusters with many Services.
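For reference, on clusters set up with kubeadm the proxy mode is usually selected through the kube-proxy ConfigMap in the kube-system namespace. The fragment below is an illustrative sketch of the relevant part of that configuration, not taken from this cluster:

```yaml
# Excerpt of a KubeProxyConfiguration as stored in the kube-proxy
# ConfigMap (kube-system namespace); shown for illustration only.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"    # "" or "iptables" selects iptables mode; "ipvs" requires the ip_vs kernel modules
```

After editing the ConfigMap, the kube-proxy Pods generally need to be recreated for the new mode to take effect.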

Service Types

  1. ClusterIP: exposes the Service on a cluster-internal IP address that is visible and reachable only inside the cluster; clients outside the cluster cannot access it. This is the default type. Letting K8s assign the IP dynamically is recommended, though specifying it manually is also supported.
  2. NodePort: an enhanced form of ClusterIP. On top of the ClusterIP behavior, it opens the same port on every node to bring external traffic into the Service.
  3. LoadBalancer: an enhanced form of NodePort that provides an external load balancer in front of the NodePorts on each node; requires public-cloud support.
  4. ExternalName: brings an external service into K8s by way of the cluster's KubeDNS. The Service name resolves to a CNAME record, and that CNAME is resolved by DNS to the IP address of a service outside the cluster, letting in-cluster workloads exchange data with external services. Separately, externalIPs can be used together with ClusterIP or NodePort, using one of those IPs as the entry IP.
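The ExternalName type is not demonstrated later in this article; a minimal sketch, assuming a hypothetical external host www.example.com, would look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-demo-svc        # hypothetical name
  namespace: default
spec:
  type: ExternalName
  externalName: www.example.com  # cluster DNS answers queries for this Service with a CNAME to this host
```

No selector is defined and no cluster IP is allocated; the Service exists purely as a DNS alias inside the cluster.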
ServicePort

Service port: mapped to the port the application inside the Pod listens on. If the backend Pods expose multiple ports and each of them should be exposed through the Service, each port must be defined separately. The request is ultimately received at the Pod IP and containerPort.
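For instance, if a backend Pod listened on both an HTTP and an HTTPS port, the ports list would carry one entry per port. A sketch (the https name and the 8443 target port below are hypothetical):

```yaml
spec:
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80      # container port for HTTP
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443    # hypothetical container port for HTTPS
```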

Service Resource Manifest

Service is a namespace-scoped resource; it cannot span namespaces.

apiVersion: v1
kind: Service
metadata:
  name: ..
  namespace: ...
  labels:
    key1: value1
    key2: value2
spec:
  type <string>  # Service type, defaults to ClusterIP
  selector <map[string]string> # equality-based label selector; multiple terms are ANDed
  ports: # list of Service port objects
  - name <string> # port name
    protocol <string> # protocol; currently only TCP, UDP, and SCTP are supported, defaults to TCP
    port <integer> # Service port number
    targetPort <string> # port number or name of the backend target process; a name must be defined in the Pod spec
    nodePort <integer> # node port number; applies only to the NodePort and LoadBalancer types
  clusterIP <string> # cluster IP of the Service; letting the system assign it is recommended
  externalTrafficPolicy <string> # external traffic policy: Local means handled by the current node, Cluster means scheduled across the cluster
  loadBalancerIP <string> # IP address used by the external load balancer; applies only to LoadBalancer
  externalName <string>  # external service name, used as the Service's DNS CNAME value

Example 1: ClusterIP demo

[root@k8s-master svc]# cat services-clusterip-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: demoapp-svc
  namespace: default
spec: 
  clusterIP: 10.97.72.1   # not needed in a real deployment; one is generated automatically, and specifying it manually may cause conflicts
  selector:               # filter condition (label selector)
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80    # backend pod port

[root@k8s-master svc]# kubectl apply -f services-clusterip-demo.yaml 
service/demoapp-svc created


[root@k8s-master svc]# kubectl get svc -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
demoapp-svc   ClusterIP   10.97.72.1       <none>        80/TCP         11s   app=demoapp
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP        30d   <none>
my-grafana    NodePort    10.96.4.185      <none>        80:30379/TCP   27d   app.kubernetes.io/instance=my-grafana,app.kubernetes.io/name=grafana
myapp         NodePort    10.106.116.205   <none>        80:31532/TCP   30d   app=myapp,release=stabel

[root@k8s-master svc]# curl 10.97.72.1  # reach the backend pods via the Service IP
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
[root@k8s-master svc]# curl 10.97.72.1
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-vzb4f, ServerIP: 10.244.1.98!

[root@k8s-master svc]# kubectl describe svc demoapp-svc
Name:              demoapp-svc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=demoapp
Type:              ClusterIP
IP:                10.97.72.1
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.98:80,10.244.2.97:80   # backend endpoints
Session Affinity:  None
Events:            <none>
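The Session Affinity: None field above means requests are distributed without stickiness. Should per-client stickiness be wanted, it could be enabled with a fragment like the following sketch (10800 seconds is the Kubernetes default timeout):

```yaml
spec:
  sessionAffinity: ClientIP        # default is None, as shown in the describe output
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # stickiness window; 10800s (3h) is the default
```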

[root@k8s-master svc]# kubectl get pod -o wide --show-labels   # the first two pods match the selector
NAME                          READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES   LABELS
demoapp-66db74fcfc-9wkgj      1/1     Running   0          39m   10.244.2.97    k8s-node2   <none>           <none>            app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-vzb4f      1/1     Running   0          39m   10.244.1.98    k8s-node1   <none>           <none>            app=demoapp,pod-template-hash=66db74fcfc,release=stable,track=daily
liveness-httpget-demo         1/1     Running   3          29m   10.244.1.99    k8s-node1   <none>           <none>            app=liveness
liveness-tcpsocket-demo       1/1     Running   3          29m   10.244.1.100   k8s-node1   <none>           <none>            <none>
my-grafana-7d788c5479-kpq9q   1/1     Running   4          27d   10.244.1.84    k8s-node1   <none>           <none>            app.kubernetes.io/instance=my-grafana,app.kubernetes.io/name=grafana,pod-template-hash=7d788c5479

[root@k8s-master svc]# kubectl get ep  # the Endpoints object is what actually manages the binding of backend endpoints to the svc
NAME          ENDPOINTS                       AGE
demoapp-svc   10.244.1.98:80,10.244.2.97:80   2m33s
kubernetes    192.168.4.170:6443              30d
my-grafana    10.244.1.84:3000                27d
myapp         <none>                          30d

[root@k8s-master svc]#  kubectl scale deployment demoapp  --replicas=4  # scale the deployment to 4 replicas
deployment.apps/demoapp scaled

[root@k8s-master svc]# kubectl get pod --show-labels
NAME                          READY   STATUS    RESTARTS   AGE    LABELS
demoapp-66db74fcfc-9jzs5      1/1     Running   0          18s    app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-9wkgj      1/1     Running   0          100m   app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-dw9w2      1/1     Running   0          18s    app=demoapp,pod-template-hash=66db74fcfc,release=stable
demoapp-66db74fcfc-vzb4f      1/1     Running   0          100m   app=demoapp,pod-template-hash=66db74fcfc,release=stable,track=daily
liveness-httpget-demo         1/1     Running   3          90m    app=liveness
liveness-tcpsocket-demo       1/1     Running   3          90m    <none>
my-grafana-7d788c5479-kpq9q   1/1     Running   4          27d    app.kubernetes.io/instance=my-grafana,app.kubernetes.io/name=grafana,pod-template-hash=7d788c5479

[root@k8s-master svc]# kubectl get ep  # the new endpoints have been added to the ep and bound to the svc in real time
NAME          ENDPOINTS                                                   AGE
demoapp-svc   10.244.1.101:80,10.244.1.98:80,10.244.2.97:80 + 1 more...   63m
kubernetes    192.168.4.170:6443                                          30d
my-grafana    10.244.1.84:3000                                            27d
myapp         <none>                                                      30d

Example 2: NodePort demo

[root@k8s-master svc]# cat services-nodeport-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: demoapp-nodeport-svc
  namespace: default
spec: 
  type: NodePort
  clusterIP: 10.97.56.1   # not needed in a real deployment; one is generated automatically, and specifying it manually may cause conflicts
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80    # backend pod port
    nodePort: 31399   # not needed in a real deployment; one is generated automatically, by default in the 30000-32767 range

[root@k8s-master svc]# kubectl apply -f services-nodeport-demo.yaml 
service/demoapp-nodeport-svc created
[root@k8s-master svc]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
demoapp-66db74fcfc-9jzs5      1/1     Running   0          8m47s
demoapp-66db74fcfc-9wkgj      1/1     Running   0          109m
demoapp-66db74fcfc-dw9w2      1/1     Running   0          8m47s
demoapp-66db74fcfc-vzb4f      1/1     Running   0          109m
liveness-httpget-demo         1/1     Running   3          98m
liveness-tcpsocket-demo       1/1     Running   3          98m
my-grafana-7d788c5479-kpq9q   1/1     Running   4          27d
[root@k8s-master svc]# kubectl get svc
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
demoapp-nodeport-svc   NodePort    10.97.56.1       <none>        80:31399/TCP   11s    # two ports are listed; 31399 is the nodePort
demoapp-svc            ClusterIP   10.97.72.1       <none>        80/TCP         72m

[root@k8s-master svc]# while true;do curl 192.168.4.171:31399;sleep 1;done  # access via nodeIP:port
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.1, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.0, ServerName: demoapp-66db74fcfc-dw9w2, ServerIP: 10.244.1.101!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.0, ServerName: demoapp-66db74fcfc-vzb4f, ServerIP: 10.244.1.98!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.1, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
  • Although the requests above all entered through a single node, the responses show they are still round-robined to pods on the other node as well.
    This is where 'externalTrafficPolicy <string>' comes in - the external traffic policy:
    Local means traffic is handled by the current node only
    Cluster means traffic is scheduled across the whole cluster
[root@k8s-master ~]# kubectl edit svc demoapp-nodeport-svc
...
spec:
  clusterIP: 10.97.56.1
  externalTrafficPolicy: Local  # change the default Cluster to Local
...
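Instead of kubectl edit, the same policy can be set declaratively in the Service manifest; a sketch based on the NodePort Service above:

```yaml
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only pods on the receiving node serve the traffic
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31399
```

A side effect of Local worth noting: because traffic is not forwarded to other nodes, the client source IP is preserved as seen by the backend Pod.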

[root@k8s-master svc]# kubectl scale deployment demoapp  --replicas=1  # scale the deployment down to 1 replica
deployment.apps/demoapp scaled

[root@k8s-master ~]# kubectl get pod -o wide  # the only remaining pod runs on node k8s-node2
NAME                          READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
demoapp-66db74fcfc-9wkgj      1/1     Running   0          123m   10.244.2.97    k8s-node2   <none>           <none>
liveness-httpget-demo         1/1     Running   3          112m   10.244.1.99    k8s-node1   <none>           <none>

[root@k8s-master ~]# curl 192.168.4.171:31399  # via node 1 - fails

^C
[root@k8s-master ~]# curl 192.168.4.172:31399  # via node 2 - succeeds
iKubernetes demoapp v1.0 !! ClientIP: 192.168.4.170, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!

Example 3: LoadBalancer demo

[root@k8s-master svc]# cat services-loadbalancer-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: demoapp-loadbalancer-svc
  namespace: default
spec: 
  type: LoadBalancer
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80    # backend pod port
#  loadBalancerIP: 1.2.3.4    # not on an IaaS platform here, so no ELB can be provisioned and this would not take effect

[root@k8s-master svc]# kubectl get svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
demoapp-loadbalancer-svc   LoadBalancer   10.110.155.70    <pending>     80:31619/TCP   31s    # not on an IaaS platform, so EXTERNAL-IP stays <pending>, waiting for a resource that is never granted; the Service is still reachable via its NodePort
demoapp-nodeport-svc       NodePort       10.97.56.1       <none>        80:31399/TCP   30m
demoapp-svc                ClusterIP      10.97.72.1       <none>        80/TCP         102m


[root@k8s-master svc]# while true;do curl 192.168.4.171:31399;sleep 1;done  # access via the NodePort
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.1, ServerName: demoapp-66db74fcfc-2jf49, ServerIP: 10.244.1.103!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.1.1, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!

Example 4: externalIPs demo

In practice, NodePort still needs a load-balancing layer in front of it to provide a unified entry point and high availability, and newly added nodes are not automatically added to that load balancer.
externalIPs exposes the Service on one or more node-visible IPs; combined with a virtual IP, this can be made highly available.

[root@k8s-master ~]# ip addr add 192.168.100.100/16 dev eth0
[root@k8s-master ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:44:16:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.170/24 brd 192.168.4.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.100.100/16 scope global eth0
       valid_lft forever preferred_lft forever
       
[root@k8s-master svc]# cat services-externalip-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: demoapp-externalip-svc
  namespace: default
spec: 
  type: ClusterIP
  selector:
    app: demoapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80    # backend pod port
  externalIPs:
  - 192.168.100.100  # in practice, a virtual IP managed with haproxy or similar tools can provide high availability
  
[root@k8s-master svc]# kubectl apply -f services-externalip-demo.yaml 
service/demoapp-externalip-svc created
[root@k8s-master svc]# kubectl get svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
demoapp-externalip-svc     ClusterIP      10.110.30.133    192.168.100.100   80/TCP         16s
demoapp-loadbalancer-svc   LoadBalancer   10.110.155.70    <pending>         80:31619/TCP   3h6m
demoapp-nodeport-svc       NodePort       10.97.56.1       <none>            80:31399/TCP   3h36m
demoapp-svc                ClusterIP      10.97.72.1       <none>            80/TCP         4h47m

# access test
[root@k8s-master svc]# curl 192.168.100.100
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
[root@k8s-master svc]# while true;do curl 192.168.100.100;sleep 1;done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-z682r, ServerIP: 10.244.2.99!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-9wkgj, ServerIP: 10.244.2.97!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-66db74fcfc-5dp5n, ServerIP: 10.244.1.102!
