Background:
Recently I wanted to try out various kinds of multi-cluster interconnection (based on WireGuard), and that made me feel just how important network planning is; my network design up to then had been rather haphazard. As soon as you want to interconnect clusters, all sorts of problems appear: what do you do when the networks overlap? What do you do when the cluster scales out and there aren't enough IPs? And why is every node's subnet a /24 by default? A single one of my machines can't run that many Pods anyway...
Right, the default per-node SUBNET is /24. As an example:
The networking section of my kubernetes cluster initialization config file looked roughly like this:
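(A sketch under assumptions: the same serviceSubnet/podSubnet as the final configuration later in this post, with node-cidr-mask-size left unset so the default /24 per node applies.)

networking:
  dnsDomain: cluster.local
  serviceSubnet: 172.21.12.0/22
  podSubnet: 172.21.0.0/20
# no controllerManager.extraArgs.node-cidr-mask-size set, so kube-controller-manager
# carves the /20 into /24 per-node subnets: 2^(24-20) = 16 of them, and the 4 /24s
# covered by the overlapping /22 service range are skipped, leaving 12 nodes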
What a waste of IP resources: one of my servers can't run 200-plus pods anyway... and working it out, once the service addresses are excluded the cluster can only hold 12 nodes (master included).
Planning node pod IPs so the cluster can hold more nodes
The Tencent Cloud TKE example
It just so happens that when creating a cluster on Tencent Cloud TKE, you can see that it lets you cap the number of pods per node and the number of services:
How do they do it? See the article "k8s-flannel网络Node上限突破255" (pushing flannel past the 255-node limit).
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  local:
    dataDir: "/var/lib/etcd"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "10.244.0.0/16"
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.18.0"
controlPlaneEndpoint: "11.167.124.4:6443"
controllerManager:
  extraArgs:
    allocate-node-cidrs: 'true'
    node-cidr-mask-size: '28'
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"
  certSANs:
  - "11.167.124.4"
  timeoutForControlPlane: 4m0s
imageRepository: "registry.aliyuncs.com/google_containers"
Regarding the controllerManager extraArgs configuration:
allocate-node-cidrs: 'true'
node-cidr-mask-size: '28'
Reference: https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/#kubeadm-k8s-io-v1beta3-Networking
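To see what per-node mask the controller manager actually hands out, a quick check like the following works (just a convenience kubectl query, not something from the referenced article):

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
# with the TKE-style config above each node should show a /28; with my config below, a /26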
My kubernetes initialization config file looks like this:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.2.28
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: sh-master-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    allocate-node-cidrs: 'true'
    node-cidr-mask-size: '26'
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 172.21.12.0/22
  podSubnet: 172.21.0.0/20
scheduler: {}
Note: the environment is built with kubeadm!
How many addresses can node-cidr-mask-size: '26' carry? 2^(32-26) = 2^6 = 64 addresses per node, which is more than enough (strictly, you also lose the broadcast address plus the addresses taken by the flannel.1 interface and the cni0 bridge, so roughly 61 are left for pods).
Going a step further: how many nodes can my cluster have?
First: serviceSubnet: 172.21.12.0/22, which means the cluster gets 2^(32-22) = 2^10 = 1024 service addresses (slightly fewer are actually assignable as ClusterIPs).
The 172.21.0.0/20 pod CIDR splits into 2^(26-20) = 64 /26 node subnets; since the /22 service range sits inside that /20, the 2^(26-22) = 16 subnets it covers have to be subtracted, which leaves roughly 48 nodes for the cluster (master nodes included, of course).
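Putting the arithmetic in one place (plain shell arithmetic, for illustration only; the overlap of the /22 service range with the /20 pod range is specific to this particular layout):

# addresses inside one /26 node subnet
echo $(( 1 << (32 - 26) ))   # 64, roughly 61 of them usable for pods
# /26 node subnets inside the /20 pod CIDR
echo $(( 1 << (26 - 20) ))   # 64
# /26 subnets shadowed by the /22 service range inside the /20
echo $(( 1 << (26 - 22) ))   # 16  ->  about 64 - 16 = 48 nodes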
Again taking flannel as the example:
kube-flannel.yml likewise needs its net-conf.json section modified to match:
net-conf.json: |
  {
    "Network": "172.21.0.0/20",
    "SubnetLen": 26,
    "Backend": {
      "Type": "vxlan"
    }
  }
Initialize the cluster and verify the network configuration
kubeadm init --config=config.yaml
kubectl apply -f kube-flannel.yml
Joining the worker nodes to the cluster is skipped here. Checking /run/flannel/subnet.env, the FLANNEL_SUBNET mask has indeed become 26.
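On a node the lease file looks roughly like this (values illustrative; the subnet each node receives will differ):

cat /run/flannel/subnet.env
# FLANNEL_NETWORK=172.21.0.0/20
# FLANNEL_SUBNET=172.21.0.1/26
# FLANNEL_MTU=1450
# FLANNEL_IPMASQ=true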
The IP addresses held by cni0 and flannel.1 (ifconfig):
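The same information can be read directly on a node (the exact addresses depend on which subnet that node was given):

ip -4 addr show cni0       # the bridge, typically the .1 gateway of this node's /26
ip -4 addr show flannel.1  # the vxlan VTEP, typically a /32 taken from the same node subnet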
Other things I ran into:
When I initialized the cluster I ended up with the command below... yes, the pod network and the service network were written the wrong way round...
kubeadm init --kubernetes-version=1.25.0 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --service-cidr=171.21.0.0/20 \
  --pod-network-cidr=172.21.12.0/22 \
  --apiserver-advertise-address=10.0.2.28
The result was that four nodes were fine but the fifth one broke: with a /22 pod CIDR and the default /24 node mask there are only 2^(24-22) = 4 per-node subnets to hand out, so the fifth node never got a podCIDR. I then cheated a bit and patched a podCIDR onto that last node...
kubectl patch node sh-work-05 -p '{"spec":{"podCIDR":"172.21.7.0/24"}}'
But then the control-plane components started acting up! I only mention this to point out that the patch approach exists; I hope nobody else writes the configuration backwards like I did! Since it was a brand-new cluster, I simply reset it and re-initialized.
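For completeness, the re-initialization amounted to a kubeadm reset plus cleaning up the leftover CNI state before running kubeadm init again with the corrected config (a rough sketch, not a verbatim record of the commands I ran):

kubeadm reset -f
rm -rf /etc/cni/net.d /var/lib/cni
ip link delete cni0 2>/dev/null || true
ip link delete flannel.1 2>/dev/null || true
kubeadm init --config=config.yaml
kubectl apply -f kube-flannel.yml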