Kubernetes Notes (12): CNI Network Plugins, Part 1: Flannel


Preface

  • CNI (Container Network Interface) is a standard, generic interface specification. At the time there were several container platforms (Docker, Kubernetes, Mesos) and several container network solutions (Flannel, Calico, Weave). With a single standard interface, any network solution that implements it can provide networking to every container platform that speaks the same protocol, and CNI is exactly that standard interface.
  • CNI connects the container management system to network plugins. Given the network namespace a container lives in, the plugin inserts a network interface into that namespace (for example, one end of a veth pair), performs any necessary configuration on the host (for example, attaching the other end of the veth pair to a bridge), and finally configures the IP address and routes for the interface inside the namespace.

Kubernetes involves four main types of communication:

  1. container-to-container: happens inside a Pod, over the Pod's loopback (lo) interface;
  2. Pod-to-Pod: Kubernetes itself does not implement this; it delegates to a third-party solution through the CNI interface (the interface that preceded CNI was called kubenet);
  3. Service-to-Pod: implemented by the iptables or ipvs rules generated by kube-proxy;
  4. ExternalClients-to-Service: bringing external traffic into the cluster via hostPort, hostNetwork, NodePort Services, LoadBalancer Services, externalIP Services, or Ingress;

Flannel Overview

  • Flannel is a network fabric for Kubernetes designed by the CoreOS team. In short, it gives the Docker containers created on different nodes in a cluster virtual IP addresses that are unique across the whole cluster.
  • In the default Docker configuration, the Docker daemon on each node assigns IP addresses to that node's containers independently. The resulting problem is that containers on different nodes may end up with the same IP address, which makes it impossible for those containers to find each other by IP, i.e. to ping each other.
  • Flannel's design goal is to re-plan IP address allocation across all nodes in the cluster, so that containers on different nodes get non-overlapping addresses that "belong to the same internal network", and so that containers on different nodes can communicate directly over those internal IPs.
  • Flannel is essentially an overlay network: it wraps the original TCP packet inside another network packet for routing and forwarding. It supports several forwarding backends, including udp, vxlan, host-gw, aws-vpc, gce and others; the historical default for inter-node traffic was UDP forwarding.

A quick summary of Flannel's features

  1. Docker containers created on different nodes in the cluster get virtual IP addresses that are unique across the whole cluster.
  2. It builds an overlay network through which packets are delivered unmodified to the target container. An overlay network is a virtual network built on top of, and supported by the infrastructure of, another network; it decouples the network service from the underlying infrastructure by encapsulating one packet inside another. After the encapsulated packet is forwarded to the endpoint, it is decapsulated.
  3. It creates a new virtual NIC (flannel0 in udp mode, flannel.1 in vxlan mode) that receives traffic from the docker bridge and, using a routing table it maintains, encapsulates and forwards that traffic (vxlan).
  4. etcd guarantees that the configuration seen by flanneld on every node is consistent. flanneld on each node also watches etcd for changes, so it learns about changes to the cluster's nodes in real time.
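As a concrete artifact of the points above, flanneld records the subnet it leased for the local node in `/run/flannel/subnet.env`, which the CNI plugin then consumes. A minimal sketch of reading such a file; the contents below are sample values modeled on the 10.244.0.0/16 network used later in this document, not captured output:

```shell
# Sample of the KEY=VALUE file flanneld writes (values made up for illustration):
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
# The file is plain shell syntax, so it can be sourced directly:
. /tmp/subnet.env
echo "node subnet: $FLANNEL_SUBNET  mtu: $FLANNEL_MTU"
```
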

Flannel supports three Pod network models; each model is called a "backend" in flannel:

  • vxlan: Pod-to-Pod traffic is tunnel-encapsulated; the nodes only need to be able to reach each other. Drawback: the extra layer of encapsulation lowers throughput. Advantage: the nodes do not need to be on the same L2 network.
  • vxlan with DirectRouting: Pods on different nodes that share an L2 network communicate without tunnel encapsulation, while Pods on nodes that are not on the same L2 network still use the tunnel. This combination is the optimal choice.
  • host-gw: Pod-to-Pod traffic is routed directly with no tunnel encapsulation, which requires all nodes to be on the same L2 network. Highest throughput, but limited to a single L2 network.
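The three backends differ only in the `Backend` object of flannel's net-conf.json (shown in full in Example 1 below). A quick sketch that generates the three variants and checks that each is valid JSON, using python3 purely as a convenient validator:

```shell
for backend in \
    '{"Type": "vxlan"}' \
    '{"Type": "vxlan", "DirectRouting": true}' \
    '{"Type": "host-gw"}'
do
  # Build the net-conf.json fragment and fail loudly if it is not valid JSON:
  printf '{"Network": "10.244.0.0/16", "Backend": %s}' "$backend" \
    | python3 -m json.tool > /dev/null && echo "ok: $backend"
done
```
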

Flannel download and installation address

https://github.com/flannel-io…

Example 1: deploying flannel with the vxlan backend

# The flannel deployment manifest describes the network backend in its ConfigMap
[root@k8s-master plugin]# cat kube-flannel.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",  #实现虚构网络
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",  #端口映射 如:NodePort
          "capabilities": {"portMappings": true}
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {"Type": "vxlan"}  # vxlan is the default backend
    }

[root@k8s-master plugin]# kubectl apply -f  kube-flannel.yml
  • In vxlan mode, the routes for remote Pod subnets point at flannel.1
[root@k8s-master plugin]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.2    0.0.0.0         UG    101    0        0 eth4
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0  # local virtual bridge interface
10.244.1.0      10.244.1.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.3.0      10.244.3.0      255.255.255.0   UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.4.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
192.168.54.0    0.0.0.0         255.255.255.0   U     101    0        0 eth4


[root@k8s-node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.2    0.0.0.0         UG    101    0        0 eth4
10.244.0.0      10.244.0.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0  # local virtual bridge interface
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.3.0      10.244.3.0      255.255.255.0   UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.4.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
192.168.54.0    0.0.0.0         255.255.255.0   U     101    0        0 eth4
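The routing tables above boil down to a simple per-destination decision on each node. A toy sketch of that decision for k8s-master, with the subnets hard-coded from its table above; this only illustrates the lookup logic, not how the kernel actually implements it:

```shell
# Longest-prefix decision, hard-coded for k8s-master's routing table:
route_for() {
  case "$1" in
    10.244.0.*)   echo "dev cni0 (local bridge, same-node Pod)" ;;
    10.244.*)     echo "via flannel.1 (vxlan tunnel to another node)" ;;
    192.168.54.*) echo "dev eth4 (node network)" ;;
    *)            echo "via 192.168.54.2 (default gateway)" ;;
  esac
}
route_for 10.244.0.42   # Pod on this node
route_for 10.244.2.9    # Pod on k8s-node2
route_for 8.8.8.8       # off-cluster destination
```
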

[root@k8s-master plugin]# ip neighbour|grep flannel.1  # permanent neighbour entries, generated to improve routing efficiency
10.244.1.0 dev flannel.1 lladdr ba:98:1c:fa:3a:51 PERMANENT
10.244.3.0 dev flannel.1 lladdr da:29:42:38:29:55 PERMANENT
10.244.2.0 dev flannel.1 lladdr fa:48:c1:29:0b:dd PERMANENT

[root@k8s-master plugin]# bridge fdb show flannel.1|grep flannel.1
ba:98:1c:fa:3a:51 dev flannel.1 dst 192.168.54.171 self permanent
22:85:29:77:e1:00 dev flannel.1 dst 192.168.54.173 self permanent
fa:48:c1:29:0b:dd dev flannel.1 dst 192.168.54.172 self permanent
da:29:42:38:29:55 dev flannel.1 dst 192.168.54.173 self permanent

# Capture flannel traffic; UDP 8472 is flannel's default vxlan port

[root@k8s-node3 ~]# tcpdump -i eth4 -nn udp port 8472
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth4, link-type EN10MB (Ethernet), capture size 262144 bytes
17:08:15.113389 IP 192.168.54.172.46879 > 192.168.54.173.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.3.92: ICMP echo request, id 2816, seq 61, length 64
17:08:15.113498 IP 192.168.54.173.55553 > 192.168.54.172.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.3.92 > 10.244.2.9: ICMP echo reply, id 2816, seq 61, length 64
17:08:16.114359 IP 192.168.54.172.46879 > 192.168.54.173.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.3.92: ICMP echo request, id 2816, seq 62, length 64
17:08:16.114447 IP 192.168.54.173.55553 > 192.168.54.172.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.3.92 > 10.244.2.9: ICMP echo reply, id 2816, seq 62, length 64
17:08:17.115558 IP 192.168.54.172.46879 > 192.168.54.173.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.3.92: ICMP echo request, id 2816, seq 63, length 64
17:08:17.115717 IP 192.168.54.173.55553 > 192.168.54.172.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.3.92 > 10.244.2.9: ICMP echo reply, id 2816, seq 63, length 64

17:08:18.117498 IP 192.168.54.172.46879 > 192.168.54.173.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.3.92: ICMP echo request, id 2816, seq 64, length 64
  • You can see the Pod-to-Pod traffic 10.244.2.9 > 10.244.3.92 being carried, with one layer of encapsulation, from node 192.168.54.172 to node 192.168.54.173 port 8472.
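The encapsulation seen in the capture is also why the vxlan backend loses some throughput: every inner frame carries roughly 50 extra bytes on the wire, which is why flannel.1 is typically created with an MTU of 1450 on a standard 1500-byte Ethernet link. The arithmetic:

```shell
OUTER_IP=20     # outer IPv4 header
OUTER_UDP=8     # outer UDP header (dst port 8472, as in the capture above)
VXLAN_HDR=8     # VXLAN header carrying the VNI
INNER_ETH=14    # header of the encapsulated inner Ethernet frame
OVERHEAD=$((OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH))
echo "vxlan overhead: ${OVERHEAD} bytes"
echo "flannel.1 MTU on a 1500-byte link: $((1500 - OVERHEAD))"
```
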

Example 2: enabling the DirectRouting option for the flannel network

  • With DirectRouting enabled, nodes on the same L2 network communicate directly over the host network interface, while nodes separated at L3 fall back to vxlan tunnel encapsulation. This combination is flannel's ideal network type.
  • Because all nodes in this test environment are on the same L2 network, the routing table will not show flannel.1 routes coexisting with the direct host routes.
[root@k8s-master ~]# kubectl get cm -n kube-system
NAME                                 DATA   AGE
coredns                              1      57d
extension-apiserver-authentication   6      57d
kube-flannel-cfg                     2      57d
kube-proxy                           2      57d
kubeadm-config                       2      57d
kubelet-config-1.19                  1      57d
[root@k8s-master ~]# kubectl edit cm kube-flannel-cfg  -n kube-system

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true   #增加
      }
    }
  • Restart the flannel Pods (in a production environment, use a blue-green update instead)
[root@k8s-master ~]# kubectl get pod -n kube-system  --show-labels
NAME                                 READY   STATUS    RESTARTS   AGE     LABELS
coredns-f9fd979d6-l9zck              1/1     Running   16         57d     k8s-app=kube-dns,pod-template-hash=f9fd979d6
coredns-f9fd979d6-s8fp5              1/1     Running   15         57d     k8s-app=kube-dns,pod-template-hash=f9fd979d6
etcd-k8s-master                      1/1     Running   12         57d     component=etcd,tier=control-plane
kube-apiserver-k8s-master            1/1     Running   16         57d     component=kube-apiserver,tier=control-plane
kube-controller-manager-k8s-master   1/1     Running   40         57d     component=kube-controller-manager,tier=control-plane
kube-flannel-ds-6sppx                1/1     Running   1          7d23h   app=flannel,controller-revision-hash=585c88d56b,pod-template-generation=2,tier=node
kube-flannel-ds-j5g9s                1/1     Running   3          7d23h   app=flannel,controller-revision-hash=585c88d56b,pod-template-generation=2,tier=node
kube-flannel-ds-nfz77                1/1     Running   1          7d23h   app=flannel,controller-revision-hash=585c88d56b,pod-template-generation=2,tier=node
kube-flannel-ds-sqhq2                1/1     Running   1          7d23h   app=flannel,controller-revision-hash=585c88d56b,pod-template-generation=2,tier=node
kube-proxy-42vln                     1/1     Running   4          25d     controller-revision-hash=565786c69c,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-98gfb                     1/1     Running   3          21d     controller-revision-hash=565786c69c,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-nlnnw                     1/1     Running   4          17d     controller-revision-hash=565786c69c,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-qbsw2                     1/1     Running   4          25d     controller-revision-hash=565786c69c,k8s-app=kube-proxy,pod-template-generation=1
kube-scheduler-k8s-master            1/1     Running   38         57d     component=kube-scheduler,tier=control-plane
metrics-server-6849f98b-fsvf8        1/1     Running   15         8d      k8s-app=metrics-server,pod-template-hash=6849f98b
[root@k8s-master ~]# kubectl delete pod -n kube-system -l app=flannel
pod "kube-flannel-ds-6sppx" deleted
pod "kube-flannel-ds-j5g9s" deleted
pod "kube-flannel-ds-nfz77" deleted
pod "kube-flannel-ds-sqhq2" deleted
[root@k8s-master ~]# 
  • Check the master and node routing tables again
[root@k8s-master ~]# route -n  
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.2    0.0.0.0         UG    101    0        0 eth4
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.244.1.0      255.255.255.0   UG    0      0        0 eth4
10.244.2.0      192.168.54.172  255.255.255.0   UG    0      0        0 eth4
10.244.3.0      192.168.54.173  255.255.255.0   UG    0      0        0 eth4
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.4.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
192.168.54.0    0.0.0.0         255.255.255.0   U     101    0        0 eth4
[root@k8s-node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.2    0.0.0.0         UG    101    0        0 eth4
10.244.0.0      192.168.54.170  255.255.255.0   UG    0      0        0 eth4
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.2.0      192.168.54.172  255.255.255.0   UG    0      0        0 eth4
10.244.3.0      192.168.54.173  255.255.255.0   UG    0      0        0 eth4
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.4.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
192.168.54.0    0.0.0.0         255.255.255.0   U     101    0        0 eth4

# The network-related Pods (flannel, kube-proxy) use the host's network interface address directly

[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
coredns-f9fd979d6-l9zck              1/1     Running   16         57d     10.244.0.42     k8s-master   <none>           <none>
coredns-f9fd979d6-s8fp5              1/1     Running   15         57d     10.244.0.41     k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   12         57d     192.168.4.170   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   16         57d     192.168.4.170   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   40         57d     192.168.4.170   k8s-master   <none>           <none>
kube-flannel-ds-d79nx                1/1     Running   0          2m12s   192.168.4.170   k8s-master   <none>           <none>
kube-flannel-ds-m48m7                1/1     Running   0          2m14s   192.168.4.172   k8s-node2    <none>           <none>
kube-flannel-ds-pxmnf                1/1     Running   0          2m14s   192.168.4.171   k8s-node1    <none>           <none>
kube-flannel-ds-vm9kt                1/1     Running   0          2m19s   192.168.4.173   k8s-node3    <none>           <none>
kube-proxy-42vln                     1/1     Running   4          25d     192.168.4.172   k8s-node2    <none>           <none>  # uses the host network interface
kube-proxy-98gfb                     1/1     Running   3          21d     192.168.4.173   k8s-node3    <none>           <none>
kube-proxy-nlnnw                     1/1     Running   4          17d     192.168.4.171   k8s-node1    <none>           <none>
kube-proxy-qbsw2                     1/1     Running   4          25d     192.168.4.170   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   38         57d     192.168.4.170   k8s-master   <none>           <none>
metrics-server-6849f98b-fsvf8        1/1     Running   15         8d      10.244.2.250    k8s-node2    <none>           <none>
  • Capture packets to check for encapsulation

[root@k8s-master plugin]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
client-1639                  1/1     Running   0          52s   10.244.1.222   k8s-node1   <none>           <none>
replicaset-demo-v1.1-lgf6b   1/1     Running   0          59m   10.244.1.221   k8s-node1   <none>           <none>
replicaset-demo-v1.1-mvvfq   1/1     Running   0          59m   10.244.3.169   k8s-node3   <none>           <none>
replicaset-demo-v1.1-tn49t   1/1     Running   0          59m   10.244.2.136   k8s-node2   <none>           <none>

[root@k8s-master plugin]# kubectl exec replicaset-demo-v1.1-tn49t -it -- /bin/sh  # access the Pod on node3
[root@replicaset-demo-v1 /]# curl 10.244.3.169
iKubernetes demoapp v1.1 !! ClientIP: 10.244.2.136, ServerName: replicaset-demo-v1.1-mvvfq, ServerIP: 10.244.3.169!
[root@replicaset-demo-v1 /]# curl 10.244.3.169

# Capture on node3
[root@k8s-node3 ~]# tcpdump -i eth4 -nn tcp port 80


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth4, link-type EN10MB (Ethernet), capture size 262144 bytes
11:03:57.508877 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [S], seq 1760692242, win 64860, options [mss 1410,sackOK,TS val 4266124446 ecr 0,nop,wscale 7], length 0
11:03:57.509245 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [S.], seq 3150629627, ack 1760692243, win 64308, options [mss 1410,sackOK,TS val 1453973317 ecr 4266124446,nop,wscale 7], length 0
11:03:57.510198 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 4266124447 ecr 1453973317], length 0
11:03:57.510373 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [P.], seq 1:77, ack 1, win 507, options [nop,nop,TS val 4266124447 ecr 1453973317], length 76: HTTP: GET / HTTP/1.1
11:03:57.510427 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [.], ack 77, win 502, options [nop,nop,TS val 1453973318 ecr 4266124447], length 0
11:03:57.713241 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [P.], seq 1:18, ack 77, win 502, options [nop,nop,TS val 1453973521 ecr 4266124447], length 17: HTTP: HTTP/1.0 200 OK
11:03:57.713821 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [.], ack 18, win 507, options [nop,nop,TS val 4266124651 ecr 1453973521], length 0
11:03:57.733459 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [P.], seq 18:155, ack 77, win 502, options [nop,nop,TS val 1453973541 ecr 4266124651], length 137: HTTP
11:03:57.733720 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [FP.], seq 155:271, ack 77, win 502, options [nop,nop,TS val 1453973541 ecr 4266124651], length 116: HTTP
11:03:57.735862 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [.], ack 155, win 506, options [nop,nop,TS val 4266124671 ecr 1453973541], length 0
11:03:57.735883 IP 10.244.2.136.49656 > 10.244.3.169.80: Flags [F.], seq 77, ack 272, win 506, options [nop,nop,TS val 4266124672 ecr 1453973541], length 0
11:03:57.736063 IP 10.244.3.169.80 > 10.244.2.136.49656: Flags [.], ack 78, win 502, options [nop,nop,TS val 1453973543 ecr 4266124672], length 0
11:03:58.650891 IP 10.244.2.136.49662 > 10.244.3.169.80: Flags [S], seq 3494935965, win 64860, options [mss 1410,sackOK,TS val 4266125588 ecr 0,nop,wscale 7], length 0
  • You can see that the traffic is no longer encapsulated; it travels directly between the Pod IPs over the flannel network.

Example 3: changing the flannel backend to host-gw (note that host-gw only supports a single L2 network)

  • Since all nodes are on the same L2 network, the effect is in theory identical to enabling DirectRouting above, so it is not repeated here.
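A quick way to convince yourself that host-gw is applicable is to check that every node IP maps to the same network, so each node can use the others as next-hop gateways. The node IPs below are the ones appearing in this document, and the /24 mask matches the 192.168.54.0/24 route shown in the tables:

```shell
NODES="192.168.54.170 192.168.54.171 192.168.54.172 192.168.54.173"
# With a /24 mask the network is just the first three octets; if every node
# collapses to the same network line, they share an L2 segment:
for ip in $NODES; do
  echo "${ip%.*}.0/24"
done | sort -u
```
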

    [root@k8s-master plugin]# vim kube-flannel.yml
    ...
    net-conf.json: |
      {
        "Network": "10.244.0.0/16",
        "Backend": {"Type": "host-gw"}  # change the backend type to host-gw
      }
    ...
    
    # Check the routing table after applying
    [root@k8s-master plugin]# kubectl apply -f kube-flannel.yml
    [root@k8s-master plugin]# route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.54.2    0.0.0.0         UG    101    0        0 eth4
    10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
    10.244.1.0      192.168.54.171  255.255.255.0   UG    0      0        0 eth4
    10.244.2.0      192.168.54.172  255.255.255.0   UG    0      0        0 eth4
    10.244.3.0      192.168.54.173  255.255.255.0   UG    0      0        0 eth4
    172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
    192.168.4.0     0.0.0.0         255.255.255.0   U     102    0        0 eth0
    192.168.54.0    0.0.0.0         255.255.255.0   U     101    0        0 eth4
