Binary installation of Kubernetes (k8s) v1.24.1 with IPv4/IPv6 dual stack --- Ubuntu edition
Maintaining this open-source Kubernetes project is not easy; please give it a star on GitHub, thank you.
Introduction
Binary installation of Kubernetes.
Documentation for new releases will be updated as soon as possible after they ship; the updated content lives on GitHub.
This document uses Ubuntu as the base system; the other variants can be found on GitHub.
Documents and installation packages have been generated for 1.23.3, 1.23.4, 1.23.5, 1.23.6, 1.24.0, and 1.24.1.
I use IPv6 so the cluster can be reached over the public Internet, so I configured static IPv6 addresses.
If you have no IPv6 environment, or do not want to use IPv6, simply do not configure IPv6 addresses on the hosts.
Skipping IPv6 does not affect the later steps, and the cluster will still support IPv6, which leaves room for later expansion; a dual-stack host sketch is shown below.
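For reference, a minimal netplan sketch of a dual-stack host. The ens18 interface name matches section 1.2, and the 2408:8207:78ca:9fa1::/64 prefix is the one that appears later in the certificate SANs, but the IPv6 prefix, gateway, and DNS server here are only examples and must be replaced with your own values:

```
# /etc/netplan/00-installer-config.yaml (dual-stack sketch; IPv6 values are examples)
network:
  version: 2
  ethernets:
    ens18:
      addresses:
        - 192.168.1.11/24
        - 2408:8207:78ca:9fa1::10/64   # static IPv6 address (example prefix)
      gateway4: 192.168.1.1
      gateway6: 2408:8207:78ca:9fa1::1 # example IPv6 gateway
      nameservers:
        addresses: [8.8.8.8, 2400:3200::1]
```

Apply it with `netplan apply`, the same as in section 1.2.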
https://github.com/cby-chen/K...
Manual project address: https://github.com/cby-chen/K...
Script project address: https://github.com/cby-chen/B...
Kubernetes 1.24 introduces significant changes; for details see: https://kubernetes.io/zh/blog...
1. Environment
Hostname | IP address | Role | Software |
---|---|---|---|
Master01 | 192.168.1.11 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived |
Master02 | 192.168.1.12 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived |
Master03 | 192.168.1.13 | master node | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, haproxy, keepalived |
Node01 | 192.168.1.14 | worker node | kubelet, kube-proxy, nfs-client |
Node02 | 192.168.1.15 | worker node | kubelet, kube-proxy, nfs-client |
 | 192.168.1.19 | VIP | |
Software | Version |
---|---|
kernel | 5.4.0-86 |
Ubuntu | 20.04 and above |
kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxy | v1.24.1 |
etcd | v3.5.4 |
containerd | v1.5.11 |
cfssl | v1.6.1 |
cni | v1.1.1 |
crictl | v1.24.2 |
haproxy | v1.8.27 |
keepalived | v2.1.5 |
Network CIDRs
Physical hosts: 192.168.1.0/24
Service: 10.96.0.0/12
Pod: 172.16.0.0/12
It is recommended to deploy the k8s cluster and the etcd cluster on separate machines.
The installation packages have already been prepared: https://github.com/cby-chen/K...
1.1. Basic k8s system environment configuration
1.2. Configure IP addresses
```
root@hello:~# vim /etc/netplan/00-installer-config.yaml
root@hello:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      addresses:
      - 192.168.1.11/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8]
  version: 2
root@hello:~# netplan apply
root@hello:~#
```
1.3. Set the hostnames
```
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
```
1.4. Configure the apt sources
sudo sed -i 's/archive.ubuntu.com/mirrors.ustc.edu.cn/g' /etc/apt/sources.list
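After switching the mirror, refresh the package index so that the installations below use it:

```
sudo apt update
```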
1.5. Install some required tools
apt install wget jq psmisc vim net-tools nfs-kernel-server telnet lvm2 git tar curl -y
1.6. (Optional) Download the required tools
```
1. Download the Kubernetes 1.24.x binary package
GitHub binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
wget https://dl.k8s.io/v1.24.1/kubernetes-server-linux-amd64.tar.gz

2. Download the etcdctl binary package
GitHub binary download page: https://github.com/etcd-io/etcd/releases
wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz

3. docker-ce binary package
Download page: https://download.docker.com/linux/static/stable/x86_64/
Download a 20.10.x version here
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz

4. containerd binary package
GitHub download page: https://github.com/containerd/containerd/releases
When downloading containerd, pick the binary package that bundles the CNI plugins.
wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

5. Download the cfssl binaries
GitHub binary download page: https://github.com/cloudflare/cfssl/releases
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64

6. Download the CNI plugins
GitHub download page: https://github.com/containernetworking/plugins/releases
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

7. Download the crictl client binary
GitHub download page: https://github.com/kubernetes-sigs/cri-tools/releases
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
```
1.7. Disable the firewall
systemctl disable --now ufw
1.8. Disable swap
```
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0
```
1.9. Configure time synchronization (except lb nodes)
```
# Server side
apt install chrony -y
cat > /etc/chrony/chrony.conf << EOF 
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd
systemctl enable chronyd

# Client side
apt install chrony -y
vim /etc/chrony/chrony.conf
cat /etc/chrony/chrony.conf | grep -v "^#" | grep -v "^$"
pool 192.168.1.11 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

systemctl restart chronyd ; systemctl enable chronyd

# Client one-liner (adjust the pool name in the sed to whatever your default chrony.conf uses)
apt install chrony -y ; sed -i "s#2.centos.pool.ntp.org#192.168.1.11#g" /etc/chrony/chrony.conf ; systemctl restart chronyd ; systemctl enable chronyd

# Verify from a client
chronyc sources -v
```
1.10. Configure ulimit
```
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
```
1.11. Configure passwordless SSH login
```
apt install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.1.11 192.168.1.12 192.168.1.13 192.168.1.14 192.168.1.15"
export SSHPASS=123123
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
```
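Optionally, verify that key-based login now works on every host (this sketch assumes the IP variable exported above is still set in the current shell):

```
for HOST in $IP; do
  # BatchMode makes ssh fail instead of prompting if the key was not installed
  ssh -o BatchMode=yes $HOST "hostname"
done
```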
1.12. Install ipvsadm (except lb nodes)
```
apt install ipvsadm ipset sysstat conntrack -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 155648  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,btrfs,raid456,ip_vs
```
1.13. Tune kernel parameters (except lb nodes)
```
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 0
EOF
sysctl --system
```
1.14. Configure local /etc/hosts resolution on all nodes
```
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.11 k8s-master01
192.168.1.12 k8s-master02
192.168.1.13 k8s-master03
192.168.1.14 k8s-node01
192.168.1.15 k8s-node02
192.168.1.19 lb-vip
EOF
```
2. Install the basic k8s components
2.1. Install containerd as the runtime on all k8s nodes
```
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz

# Create the directories required by the CNI plugins
mkdir -p /etc/cni/net.d /opt/cni/bin 

# Extract the CNI binary package
tar xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/

wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

# Extract
tar -C / -xzf cri-containerd-cni-1.6.4-linux-amd64.tar.gz

# Create the systemd service file
cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
```
2.1.1 Configure the kernel modules required by containerd
```
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
```
2.1.2 Load the modules
systemctl restart systemd-modules-load.service
2.1.3 Configure the kernel parameters required by containerd
```
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the kernel parameters
sysctl --system
```
2.1.4 Create the containerd configuration file
```
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Modify the containerd configuration file
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image

# Find containerd.runtimes.runc.options and add SystemdCgroup = true under it
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".cni]

# Change the default sandbox_image to an address that matches this version
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/chenby/pause:3.6"
```
2.1.5 Start containerd and enable it at boot
```
systemctl daemon-reload
systemctl enable --now containerd
```
2.1.6 Configure the runtime endpoint for the crictl client
```
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz

# Extract
tar xf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/bin/

# Generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Test
systemctl restart containerd
crictl info
```
2.2. Download and install k8s and etcd (on master01 only)
2.2.1 Extract the k8s installation package
```
# Download the installation packages
wget https://dl.k8s.io/v1.24.1/kubernetes-server-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz

# Extract the k8s binaries
cd cby
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Extract the etcd binaries
tar -xf etcd-v3.5.4-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.4-linux-amd64/etcd{,ctl}

# Check the contents of /usr/local/bin
ls /usr/local/bin/
etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler
```
2.2.2 Check the versions
```
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.24.1
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.4
API version: 3.5
[root@k8s-master01 ~]#
```
2.2.3 Send the components to the other k8s nodes
```
Master='k8s-master02 k8s-master03'
Work='k8s-node01 k8s-node02'

for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

for NODE in $Work; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

mkdir -p /opt/cni/bin
```
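As a quick sanity check that the binaries arrived and are executable on every node (this sketch assumes the Master and Work variables defined above are still set in the current shell):

```
for NODE in $Master $Work; do
  echo "=== $NODE ==="
  ssh $NODE "kubelet --version && kube-proxy --version"
done
```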
2.3 Create the certificate-related files
mkdir pkicd pkicat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ]}EOFcat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } }}EOFcat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" }}EOFcat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" }}EOFcat > kubelet-csr.json << EOF { "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ]}EOFcat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ]}EOFcat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ]}EOFcat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" }}EOFcat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ]}EOFcat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 }}EOFcat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ]}EOFcat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ]}EOFcd ..mkdir bootstrapcd bootstrapcat > bootstrap.secret.yaml << EOF apiVersion: v1kind: Secretmetadata: name: bootstrap-token-c8ad9c namespace: kube-systemtype: bootstrap.kubernetes.io/tokenstringData: description: "The default bootstrap token generated by 'kubelet '." 
token-id: c8ad9c token-secret: 2e4d610cf3e7426e usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress ---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubelet-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrappersubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-bootstraproleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: node-autoapprove-certificate-rotationroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclientsubjects:- apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubeletrules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*"---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: system:kube-apiserver namespace: ""roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubeletsubjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserverEOFcd ..mkdir corednscd corednscat > coredns.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsrules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-system---apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance }---apiVersion: apps/v1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS"spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile---apiVersion: v1kind: Servicemetadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: selector: k8s-app: kube-dns clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCPEOFcd ..mkdir metrics-servercd metrics-servercat > metrics-server.yaml << EOF apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-readerrules:- apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: k8s-app: metrics-server name: system:metrics-serverrules:- apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: RoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-systemroleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-readersubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegatorroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegatorsubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: labels: k8s-app: metrics-server name: system:metrics-serverroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
system:metrics-serversubjects:- kind: ServiceAccount name: metrics-server namespace: kube-system---apiVersion: v1kind: Servicemetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server---apiVersion: apps/v1kind: Deploymentmetadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-systemspec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS initialDelaySeconds: 20 periodSeconds: 10 resources: requests: cpu: 100m memory: 200Mi securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki---apiVersion: apiregistration.k8s.io/v1kind: APIServicemetadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.iospec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100EOF
3. Generate the certificates
Download the certificate generation tools on master01:
```
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64" -O /usr/local/bin/cfssl
wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
```
3.1. Generate the etcd certificates
Unless otherwise noted, the following operations are performed on all master nodes.
3.1.1 Create the certificate directory on all master nodes
mkdir /etc/etcd/ssl -p
3.1.2 Generate the etcd certificates on master01
```
cd pki

# Generate the etcd certificate and key (if you think you may scale out later, add a few extra reserved IPs here)
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.1.11,192.168.1.12,192.168.1.13,2408:8207:78ca:9fa1::10,2408:8207:78ca:9fa1::20,2408:8207:78ca:9fa1::30 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
```
3.1.3 Copy the certificates to the other nodes
```
Master='k8s-master02 k8s-master03'

for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done
```
3.2. Generate the k8s certificates
Unless otherwise noted, the following operations are performed on all master nodes.
3.2.1 Create the certificate directory on all k8s nodes
mkdir -p /etc/kubernetes/pki
3.2.2 Generate the k8s certificates on master01
```
# Generate the root CA; extra IPs are included as reserved addresses for adding nodes later
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# 10.96.0.1 is the first address of the service CIDR and must be calculated accordingly; 192.168.1.19 is the high-availability VIP
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,192.168.1.19,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.11,192.168.1.12,192.168.1.13,192.168.1.14,192.168.1.15,192.168.1.16,192.168.1.17,192.168.1.18,2408:8207:78ca:9fa1::10,2408:8207:78ca:9fa1::20,2408:8207:78ca:9fa1::30,2408:8207:78ca:9fa1::40,2408:8207:78ca:9fa1::50,2408:8207:78ca:9fa1::60,2408:8207:78ca:9fa1::70,2408:8207:78ca:9fa1::80,2408:8207:78ca:9fa1::90,2408:8207:78ca:9fa1::100 \
-profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
```
3.2.3 Generate the apiserver aggregation (front-proxy) certificates
```
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 

# This prints a warning that can be ignored
cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
```
3.2.4 Generate the controller-manager, scheduler, and admin certificates and kubeconfigs
cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager# 设置一个集群项kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://192.168.1.19:8443 \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig# 设置一个环境项,一个上下文kubectl config set-context system:kube-controller-manager@kubernetes \ --cluster=kubernetes \ --user=system:kube-controller-manager \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig# 设置一个用户项kubectl config set-credentials system:kube-controller-manager \ --client-certificate=/etc/kubernetes/pki/controller-manager.pem \ --client-key=/etc/kubernetes/pki/controller-manager-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig# 设置默认环境kubectl config use-context system:kube-controller-manager@kubernetes \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfigcfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/schedulerkubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://192.168.1.19:8443 \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfigkubectl config set-credentials system:kube-scheduler \ --client-certificate=/etc/kubernetes/pki/scheduler.pem \ --client-key=/etc/kubernetes/pki/scheduler-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfigkubectl config set-context system:kube-scheduler@kubernetes \ --cluster=kubernetes \ --user=system:kube-scheduler \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfigkubectl config use-context system:kube-scheduler@kubernetes \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfigcfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ admin-csr.json | cfssljson -bare /etc/kubernetes/pki/adminkubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://192.168.1.19:8443 \ --kubeconfig=/etc/kubernetes/admin.kubeconfigkubectl config set-credentials kubernetes-admin \ --client-certificate=/etc/kubernetes/pki/admin.pem \ --client-key=/etc/kubernetes/pki/admin-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/admin.kubeconfigkubectl config set-context kubernetes-admin@kubernetes \ --cluster=kubernetes \ --user=kubernetes-admin \ --kubeconfig=/etc/kubernetes/admin.kubeconfigkubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
3.2.5 Create the kube-proxy certificate
cfssl gencert \ -ca=/etc/kubernetes/pki/ca.pem \ -ca-key=/etc/kubernetes/pki/ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://192.168.1.19:8443 \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfigkubectl config set-credentials kube-proxy \ --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \ --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfigkubectl config set-context kube-proxy@kubernetes \ --cluster=kubernetes \ --user=kube-proxy \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfigkubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
3.2.6 Create the ServiceAccount key pair
```
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
```
3.2.7 Send the certificates to the other master nodes
```
# Create the directory on the other nodes first
# mkdir /etc/kubernetes/pki/ -p

for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done
```
3.2.8 Check the certificates
```
ls /etc/kubernetes/pki/
admin.csr          ca.csr                      front-proxy-ca.csr          kube-proxy.csr      scheduler-key.pem
admin-key.pem      ca-key.pem                  front-proxy-ca-key.pem      kube-proxy-key.pem  scheduler.pem
admin.pem          ca.pem                      front-proxy-ca.pem          kube-proxy.pem
apiserver.csr      controller-manager.csr      front-proxy-client.csr      sa.key
apiserver-key.pem  controller-manager-key.pem  front-proxy-client-key.pem  sa.pub
apiserver.pem      controller-manager.pem      front-proxy-client.pem      scheduler.csr

# There should be 26 files in total
ls /etc/kubernetes/pki/ |wc -l
26
```
4. k8s system component configuration
4.1. etcd configuration
4.1.1 master01 configuration
# 如果要用IPv6那么把IPv4地址批改为IPv6即可cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master01'data-dir: /var/lib/etcdwal-dir: /var/lib/etcd/walsnapshot-count: 5000heartbeat-interval: 100election-timeout: 1000quota-backend-bytes: 0listen-peer-urls: 'https://192.168.1.11:2380'listen-client-urls: 'https://192.168.1.11:2379,http://127.0.0.1:2379'max-snapshots: 3max-wals: 5cors:initial-advertise-peer-urls: 'https://192.168.1.11:2380'advertise-client-urls: 'https://192.168.1.11:2379'discovery:discovery-fallback: 'proxy'discovery-proxy:discovery-srv:initial-cluster: 'k8s-master01=https://192.168.1.11:2380,k8s-master02=https://192.168.1.12:2380,k8s-master03=https://192.168.1.13:2380'initial-cluster-token: 'etcd-k8s-cluster'initial-cluster-state: 'new'strict-reconfig-check: falseenable-v2: trueenable-pprof: trueproxy: 'off'proxy-failure-wait: 5000proxy-refresh-interval: 30000proxy-dial-timeout: 1000proxy-write-timeout: 5000proxy-read-timeout: 0client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: truepeer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: truedebug: falselog-package-levels:log-outputs: [default]force-new-cluster: falseEOF
4.1.2 master02 configuration
# 如果要用IPv6那么把IPv4地址批改为IPv6即可cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master02'data-dir: /var/lib/etcdwal-dir: /var/lib/etcd/walsnapshot-count: 5000heartbeat-interval: 100election-timeout: 1000quota-backend-bytes: 0listen-peer-urls: 'https://192.168.1.12:2380'listen-client-urls: 'https://192.168.1.12:2379,http://127.0.0.1:2379'max-snapshots: 3max-wals: 5cors:initial-advertise-peer-urls: 'https://192.168.1.12:2380'advertise-client-urls: 'https://192.168.1.12:2379'discovery:discovery-fallback: 'proxy'discovery-proxy:discovery-srv:initial-cluster: 'k8s-master01=https://192.168.1.11:2380,k8s-master02=https://192.168.1.12:2380,k8s-master03=https://192.168.1.13:2380'initial-cluster-token: 'etcd-k8s-cluster'initial-cluster-state: 'new'strict-reconfig-check: falseenable-v2: trueenable-pprof: trueproxy: 'off'proxy-failure-wait: 5000proxy-refresh-interval: 30000proxy-dial-timeout: 1000proxy-write-timeout: 5000proxy-read-timeout: 0client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: truepeer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: truedebug: falselog-package-levels:log-outputs: [default]force-new-cluster: falseEOF
4.1.3 master03 configuration
# 如果要用IPv6那么把IPv4地址批改为IPv6即可cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master03'data-dir: /var/lib/etcdwal-dir: /var/lib/etcd/walsnapshot-count: 5000heartbeat-interval: 100election-timeout: 1000quota-backend-bytes: 0listen-peer-urls: 'https://192.168.1.13:2380'listen-client-urls: 'https://192.168.1.13:2379,http://127.0.0.1:2379'max-snapshots: 3max-wals: 5cors:initial-advertise-peer-urls: 'https://192.168.1.13:2380'advertise-client-urls: 'https://192.168.1.13:2379'discovery:discovery-fallback: 'proxy'discovery-proxy:discovery-srv:initial-cluster: 'k8s-master01=https://192.168.1.11:2380,k8s-master02=https://192.168.1.12:2380,k8s-master03=https://192.168.1.13:2380'initial-cluster-token: 'etcd-k8s-cluster'initial-cluster-state: 'new'strict-reconfig-check: falseenable-v2: trueenable-pprof: trueproxy: 'off'proxy-failure-wait: 5000proxy-refresh-interval: 30000proxy-dial-timeout: 1000proxy-write-timeout: 5000proxy-read-timeout: 0client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: truepeer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: truedebug: falselog-package-levels:log-outputs: [default]force-new-cluster: falseEOF
4.2. Create the services (on all master nodes)
4.2.1 Create etcd.service and start it
```
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF
```
4.2.2 Create the etcd certificate directory
```
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
```
4.2.3 Check the etcd status
```
# To use IPv6, simply change the IPv4 addresses to IPv6 addresses
export ETCDCTL_API=3
etcdctl --endpoints="192.168.1.13:2379,192.168.1.12:2379,192.168.1.11:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.1.13:2379 | c0c8142615b9523f |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.12:2379 | de8396604d2c160d |   3.5.4 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.1.11:2379 | 33c9d6df0037ab97 |   3.5.4 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 pki]#
```
5. High-availability configuration
5.1 Perform these steps on the three master servers
5.1.1 Install the keepalived and haproxy services
apt -y install keepalived haproxy
5.1.2 Modify the haproxy configuration file (identical on all nodes)
```
# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s

frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:8443
 bind 127.0.0.1:8443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master

backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server  k8s-master01  192.168.1.11:6443 check
 server  k8s-master02  192.168.1.12:6443 check
 server  k8s-master03  192.168.1.13:6443 check
EOF
```
5.1.3 M1: configure the keepalived MASTER node
#cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bakcat > /etc/keepalived/keepalived.conf << EOF! Configuration File for keepalivedglobal_defs { router_id LVS_DEVEL}vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1}vrrp_instance VI_1 { state MASTER interface ens18 mcast_src_ip 192.168.1.11 virtual_router_id 51 priority 100 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.1.19 } track_script { chk_apiserver } }EOF
5.1.4 M2: configure a keepalived BACKUP node
# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bakcat > /etc/keepalived/keepalived.conf << EOF! Configuration File for keepalivedglobal_defs { router_id LVS_DEVEL}vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1}vrrp_instance VI_1 { state BACKUP interface ens18 mcast_src_ip 192.168.1.12 virtual_router_id 51 priority 50 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.1.19 } track_script { chk_apiserver } }EOF
5.1.5 M3: configure a keepalived BACKUP node
# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bakcat > /etc/keepalived/keepalived.conf << EOF! Configuration File for keepalivedglobal_defs { router_id LVS_DEVEL}vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" interval 5 weight -5 fall 2 rise 1}vrrp_instance VI_1 { state BACKUP interface ens18 mcast_src_ip 192.168.1.13 virtual_router_id 51 priority 50 nopreempt advert_int 2 authentication { auth_type PASS auth_pass K8SHA_KA_AUTH } virtual_ipaddress { 192.168.1.19 } track_script { chk_apiserver } }EOF
5.1.6 Health-check script configuration (all master hosts)
```
cat > /etc/keepalived/check_apiserver.sh << EOF
#!/bin/bash

err=0
for k in \$(seq 1 3)
do
    check_code=\$(pgrep haproxy)
    if [[ \$check_code == "" ]]; then
        err=\$(expr \$err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ \$err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh
```
5.1.7 Start the services
```
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
```
5.1.8 Test the high availability
```
# The VIP should answer pings
[root@k8s-node02 ~]# ping 192.168.1.19

# And telnet to port 8443 should connect
[root@k8s-node02 ~]# telnet 192.168.1.19 8443

# Shut down the active node and check whether the VIP fails over to a backup node
```
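One way to watch the failover in more detail, a minimal sketch assuming ens18 is the interface carrying the VIP as configured above:

```
# On each master: does this host currently hold the VIP?
ip addr show ens18 | grep 192.168.1.19

# On the master that holds it, trigger a failover
systemctl stop keepalived

# On the backup nodes, the VIP should appear within a few seconds
ip addr show ens18 | grep 192.168.1.19

# Restore the stopped instance afterwards
systemctl start keepalived
```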
6. k8s component configuration (distinct from section 4)
Create the following directories on all k8s nodes:
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
6.1. Create the kube-apiserver service (all master nodes)
6.1.1 master01 configuration
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF[Unit]Description=Kubernetes API ServerDocumentation=https://github.com/kubernetes/kubernetesAfter=network.target[Service]ExecStart=/usr/local/bin/kube-apiserver \ --v=2 \ --logtostderr=true \ --allow-privileged=true \ --bind-address=0.0.0.0 \ --secure-port=6443 \ --advertise-address=192.168.1.11 \ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \ --feature-gates=IPv6DualStack=true \ --service-node-port-range=30000-32767 \ --etcd-servers=https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379 \ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \ --etcd-certfile=/etc/etcd/ssl/etcd.pem \ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \ --client-ca-file=/etc/kubernetes/pki/ca.pem \ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \ --service-account-key-file=/etc/kubernetes/pki/sa.pub \ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \ --service-account-issuer=https://kubernetes.default.svc.cluster.local \ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \ --authorization-mode=Node,RBAC \ --enable-bootstrap-token-auth=true \ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \ --requestheader-allowed-names=aggregator \ --requestheader-group-headers=X-Remote-Group \ --requestheader-extra-headers-prefix=X-Remote-Extra- \ --requestheader-username-headers=X-Remote-User \ --enable-aggregator-routing=true # --token-auth-file=/etc/kubernetes/token.csvRestart=on-failureRestartSec=10sLimitNOFILE=65535[Install]WantedBy=multi-user.targetEOF
6.1.2 master02 configuration
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF[Unit]Description=Kubernetes API ServerDocumentation=https://github.com/kubernetes/kubernetesAfter=network.target[Service]ExecStart=/usr/local/bin/kube-apiserver \ --v=2 \ --logtostderr=true \ --allow-privileged=true \ --bind-address=0.0.0.0 \ --secure-port=6443 \ --advertise-address=192.168.1.12 \ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \ --feature-gates=IPv6DualStack=true \ --service-node-port-range=30000-32767 \ --etcd-servers=https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379 \ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \ --etcd-certfile=/etc/etcd/ssl/etcd.pem \ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \ --client-ca-file=/etc/kubernetes/pki/ca.pem \ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \ --service-account-key-file=/etc/kubernetes/pki/sa.pub \ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \ --service-account-issuer=https://kubernetes.default.svc.cluster.local \ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \ --authorization-mode=Node,RBAC \ --enable-bootstrap-token-auth=true \ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \ --requestheader-allowed-names=aggregator \ --requestheader-group-headers=X-Remote-Group \ --requestheader-extra-headers-prefix=X-Remote-Extra- \ --requestheader-username-headers=X-Remote-User \ --enable-aggregator-routing=true # --token-auth-file=/etc/kubernetes/token.csvRestart=on-failureRestartSec=10sLimitNOFILE=65535[Install]WantedBy=multi-user.targetEOF
6.1.3 master03 configuration
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF[Unit]Description=Kubernetes API ServerDocumentation=https://github.com/kubernetes/kubernetesAfter=network.target[Service]ExecStart=/usr/local/bin/kube-apiserver \ --v=2 \ --logtostderr=true \ --allow-privileged=true \ --bind-address=0.0.0.0 \ --secure-port=6443 \ --advertise-address=192.168.1.13 \ --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \ --feature-gates=IPv6DualStack=true \ --service-node-port-range=30000-32767 \ --etcd-servers=https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379 \ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \ --etcd-certfile=/etc/etcd/ssl/etcd.pem \ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \ --client-ca-file=/etc/kubernetes/pki/ca.pem \ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \ --service-account-key-file=/etc/kubernetes/pki/sa.pub \ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \ --service-account-issuer=https://kubernetes.default.svc.cluster.local \ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \ --authorization-mode=Node,RBAC \ --enable-bootstrap-token-auth=true \ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \ --requestheader-allowed-names=aggregator \ --requestheader-group-headers=X-Remote-Group \ --requestheader-extra-headers-prefix=X-Remote-Extra- \ --requestheader-username-headers=X-Remote-User \ --enable-aggregator-routing=trueRestart=on-failureRestartSec=10sLimitNOFILE=65535[Install]WantedBy=multi-user.targetEOF
6.1.4 Start kube-apiserver (all master nodes)
```
systemctl daemon-reload && systemctl enable --now kube-apiserver

# Check that the service started normally
systemctl status kube-apiserver
```
6.2. Configure the kube-controller-manager service
```
# Configure on all master nodes; the configuration is identical
# 172.16.0.0/12 is the pod CIDR; change it to your own CIDR as needed
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --bind-address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --feature-gates=IPv6DualStack=true \
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \
      --cluster-cidr=172.16.0.0/12,fc00::/48 \
      --node-cidr-mask-size-ipv4=24 \
      --node-cidr-mask-size-ipv6=64 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem 

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
```
6.2.1 Start kube-controller-manager and check its status
```
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
```
6.3. Configure the kube-scheduler service
6.3.1 Configure on all master nodes; the configuration is identical
```
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --bind-address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
```
6.3.2 Start the service and check its status
```
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
```
7. TLS Bootstrapping configuration
7.1 Configure on master01
```
cd bootstrap

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true --server=https://192.168.1.19:8443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token is defined in bootstrap.secret.yaml; if you change it, change it in that file
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
```
7.2 Check the cluster status; if everything looks fine, continue with the following steps
```
# Restart haproxy on the three HA nodes
systemctl stop haproxy
systemctl start haproxy

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}

# Make sure to run this, do not forget!!!
kubectl create -f bootstrap.secret.yaml
```
8. Worker node configuration
8.1. Copy the certificates from master01 to the other nodes
```
cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done
```
8.2. kubelet configuration
8.2.1 Create the required directories on all k8s nodes
```
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

# Configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --config=/etc/kubernetes/kubelet-conf.yml \
    --container-runtime=remote \
    --runtime-request-timeout=15m \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
    --cgroup-driver=systemd \
    --node-labels=node.kubernetes.io/node='' \
    --feature-gates=IPv6DualStack=true

[Install]
WantedBy=multi-user.target
EOF
```
8.2.2 Create the kubelet configuration file on all k8s nodes
cat > /etc/kubernetes/kubelet-conf.yml <<EOFapiVersion: kubelet.config.k8s.io/v1beta1kind: KubeletConfigurationaddress: 0.0.0.0port: 10250readOnlyPort: 10255authentication: anonymous: enabled: false webhook: cacheTTL: 2m0s enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.pemauthorization: mode: Webhook webhook: cacheAuthorizedTTL: 5m0s cacheUnauthorizedTTL: 30scgroupDriver: systemdcgroupsPerQOS: trueclusterDNS:- 10.96.0.10clusterDomain: cluster.localcontainerLogMaxFiles: 5containerLogMaxSize: 10MicontentType: application/vnd.kubernetes.protobufcpuCFSQuota: truecpuManagerPolicy: nonecpuManagerReconcilePeriod: 10senableControllerAttachDetach: trueenableDebuggingHandlers: trueenforceNodeAllocatable:- podseventBurst: 10eventRecordQPS: 5evictionHard: imagefs.available: 15% memory.available: 100Mi nodefs.available: 10% nodefs.inodesFree: 5%evictionPressureTransitionPeriod: 5m0sfailSwapOn: truefileCheckFrequency: 20shairpinMode: promiscuous-bridgehealthzBindAddress: 127.0.0.1healthzPort: 10248httpCheckFrequency: 20simageGCHighThresholdPercent: 85imageGCLowThresholdPercent: 80imageMinimumGCAge: 2m0siptablesDropBit: 15iptablesMasqueradeBit: 14kubeAPIBurst: 10kubeAPIQPS: 5makeIPTablesUtilChains: truemaxOpenFiles: 1000000maxPods: 110nodeStatusUpdateFrequency: 10soomScoreAdj: -999podPidsLimit: -1registryBurst: 10registryPullQPS: 5resolvConf: /etc/resolv.confrotateCertificates: trueruntimeRequestTimeout: 2m0sserializeImagePulls: truestaticPodPath: /etc/kubernetes/manifestsstreamingConnectionIdleTimeout: 4h0m0ssyncFrequency: 1m0svolumeStatsAggPeriod: 1m0sEOF
8.2.3 Start kubelet
```
systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet
```
8.2.4 Check the cluster
```
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   12s   v1.24.1
k8s-master02   Ready    <none>   12s   v1.24.1
k8s-master03   Ready    <none>   12s   v1.24.1
k8s-node01     Ready    <none>   12s   v1.24.1
k8s-node02     Ready    <none>   12s   v1.24.1
[root@k8s-master01 ~]#
```
8.3. kube-proxy configuration
8.3.1 Send the kubeconfig to the other nodes
```
for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
```
8.3.2 Add the kube-proxy service file on all k8s nodes
```
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
```
8.3.3 Add the kube-proxy configuration on all k8s nodes
cat > /etc/kubernetes/kube-proxy.yaml << EOFapiVersion: kubeproxy.config.k8s.io/v1alpha1bindAddress: 0.0.0.0clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig qps: 5clusterCIDR: 172.16.0.0/12,fc00::/48 configSyncPeriod: 15m0sconntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0senableProfiling: falsehealthzBindAddress: 0.0.0.0:10256hostnameOverride: ""iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30sipvs: masqueradeAll: true minSyncPeriod: 5s scheduler: "rr" syncPeriod: 30skind: KubeProxyConfigurationmetricsBindAddress: 127.0.0.1:10249mode: "ipvs"nodePortAddresses: nulloomScoreAdj: -999portRange: ""udpIdleTimeout: 250msEOF
8.3.4 Start kube-proxy
```
systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable --now kube-proxy
```
9. Install Calico
9.1 The following steps are performed on master01 only
9.1.1 Change the Calico CIDRs
```
# vim calico.yaml
vim calico-ipv6.yaml

# In the calico-config ConfigMap:
    "ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "true"
    },
    - name: IP
      value: "autodetect"
    - name: IP6
      value: "autodetect"
    - name: CALICO_IPV4POOL_CIDR
      value: "172.16.0.0/16"
    - name: CALICO_IPV6POOL_CIDR
      value: "fc00::/48"
    - name: FELIX_IPV6SUPPORT
      value: "true"

# kubectl apply -f calico.yaml
kubectl apply -f calico-ipv6.yaml
```
9.1.2 Check the container status
[root@k8s-master01 ~]# kubectl get pod -ANAMESPACE NAME READY STATUS RESTARTS AGEkube-system calico-kube-controllers-7fb57bc4b5-dwwg8 1/1 Running 0 23skube-system calico-node-b8p4z 1/1 Running 0 23skube-system calico-node-c4lzj 1/1 Running 0 23skube-system calico-node-dfh2m 1/1 Running 0 23skube-system calico-node-gbhgn 1/1 Running 0 23skube-system calico-node-ht6nl 1/1 Running 0 23skube-system calico-typha-dd885f47-jvgsj 1/1 Running 0 23s[root@k8s-master01 ~]#
10. Install CoreDNS
10.1 The following steps are performed on master01 only
10.1.1 Modify the file
```
cd coredns/

# Set clusterIP to the DNS address of your service CIDR (10.96.0.10 here, so the sed below is effectively a no-op)
sed -i "s#10.96.0.10#10.96.0.10#g" coredns.yaml
cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10
```
10.1.2 Install
```
kubectl create -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
```
11. Install Metrics Server
11.1 The following steps are performed on master01 only
11.1.1 Install metrics-server
In recent Kubernetes versions, system resource metrics are collected through metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.
```
# Install metrics server
cd metrics-server/
kubectl apply -f metrics-server.yaml
```
11.1.2 Wait a moment, then check the status
```
kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   154m         1%     1715Mi          21%
k8s-master02   151m         1%     1274Mi          16%
k8s-master03   523m         6%     1345Mi          17%
k8s-node01     84m          1%     671Mi           8%
k8s-node02     73m          0%     727Mi           9%
k8s-node03     96m          1%     769Mi           9%
k8s-node04     68m          0%     673Mi           8%
k8s-node05     82m          1%     679Mi           8%
```
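Pod-level metrics should also be available once the metrics APIService reports Available; for example:

```
kubectl top pod -A
kubectl get apiservice v1beta1.metrics.k8s.io
```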
12. Cluster verification
12.1 Deploy a test pod
```
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Check
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s
```
12.2 Resolve the kubernetes service in the default namespace from the pod
```
kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
```
12.3 Test cross-namespace resolution
```
kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
```
12.4 Every node must be able to reach the kubernetes service on port 443 and the kube-dns service on port 53
```
telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server
```
12.5 Pods must be able to reach each other
kubectl get po -owideNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESbusybox 1/1 Running 0 17m 172.27.14.193 k8s-node02 <none> <none> kubectl get po -n kube-system -owideNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATEScalico-kube-controllers-5dffd5886b-4blh6 1/1 Running 0 77m 172.25.244.193 k8s-master01 <none> <none>calico-node-fvbdq 1/1 Running 1 (75m ago) 77m 192.168.1.11 k8s-master01 <none> <none>calico-node-g8nqd 1/1 Running 0 77m 192.168.1.14 k8s-node01 <none> <none>calico-node-mdps8 1/1 Running 0 77m 192.168.1.15 k8s-node02 <none> <none>calico-node-nf4nt 1/1 Running 0 77m 192.168.1.13 k8s-master03 <none> <none>calico-node-sq2ml 1/1 Running 0 77m 192.168.1.12 k8s-master02 <none> <none>calico-typha-8445487f56-mg6p8 1/1 Running 0 77m 192.168.1.15 k8s-node02 <none> <none>calico-typha-8445487f56-pxbpj 1/1 Running 0 77m 192.168.1.11 k8s-master01 <none> <none>calico-typha-8445487f56-tnssl 1/1 Running 0 77m 192.168.1.14 k8s-node01 <none> <none>coredns-5db5696c7-67h79 1/1 Running 0 63m 172.25.92.65 k8s-master02 <none> <none>metrics-server-6bf7dcd649-5fhrw 1/1 Running 0 61m 172.18.195.1 k8s-master03 <none> <none># 进入busybox ping其余节点上的podkubectl exec -ti busybox -- sh/ # ping 192.168.1.14PING 192.168.1.14 (192.168.1.14): 56 data bytes64 bytes from 192.168.1.14: seq=0 ttl=63 time=0.358 ms64 bytes from 192.168.1.14: seq=1 ttl=63 time=0.668 ms64 bytes from 192.168.1.14: seq=2 ttl=63 time=0.637 ms64 bytes from 192.168.1.14: seq=3 ttl=63 time=0.624 ms64 bytes from 192.168.1.14: seq=4 ttl=63 time=0.907 ms# 能够连通证实这个pod是能够跨命名空间和跨主机通信的
12.6 Create three replicas and confirm they are scheduled onto different nodes (delete them when finished)
```shell
cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete nginx
[root@k8s-master01 ~]# kubectl delete -f deployments.yaml
```
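To actually see the node spread that 12.6 describes, `-o wide` adds a NODE column. A small sketch to run before the delete above (the node names in the comment are illustrative, not captured output):

```shell
# Show which node each replica landed on
kubectl get pod -l app=nginx -o wide
# The NODE column should show three different nodes, e.g. k8s-node01, k8s-node02, k8s-master03
```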
13. Install dashboard
```shell
wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard.yaml
wget https://raw.githubusercontent.com/cby-chen/Kubernetes/main/yaml/dashboard-user.yaml

kubectl apply -f dashboard.yaml
kubectl apply -f dashboard-user.yaml
```
13.1 Change the dashboard Service to NodePort (skip this if it already is)
```shell
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

  type: NodePort
```
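If you prefer not to open an editor, the same change can be made non-interactively. A minimal sketch that does the same thing as the edit above:

```shell
# Patch the Service type to NodePort in place
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'
```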
13.2 Check the port number
```shell
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.120.110   <none>        443:30034/TCP   34s
```
13.3 Create a token
```shell
kubectl -n kubernetes-dashboard create token admin-user

eyJhbGciOiJSUzI1NiIsImtpZCI6ImxkV1hHaHViN2d3STVLTkxtbFkyaUZPdnhWa0s2NjUzRGVrNmJhMjVpRmsifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjUzODMwMTUwLCJpYXQiOjE2NTM4MjY1NTAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZDZlOTI2YWUtNDExYS00YTU3LTk3NWUtOWI4ZTEyMzYyZjg1In19LCJuYmYiOjE2NTM4MjY1NTAsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.ZSGJmGQc0F1jeJp8SwgZQ0a9ynTYi-y1JNUBJBhjRVStS9KphVK5MLpRxV4KqzhzGt8pR20nNZGop3na6EgIXVJ8XNrlQQO8kZV_I11ylw_mqL7sjCK_UsxJODOOvoRzOJMN3Qd9ONLB3cPjge9zIGeRvaEwpQulOWALScyQvO__1LkSjqz2DPQM7aDh0Gt6VZ2-JoVgTlEBy--nF-Okb0qyHMI8KEcqv7BnI1rJw5rETL7JrYBM3YIWY8_Ft71w6dKn7UhEbB9tPVMi0ymGTpUVja2M2ypsDymrMlcd4doRUn98F_i0iGW4ZN3CweRDFnkwwIUODjTn1fdp1uPXnQ
```
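Tokens issued by `kubectl create token` in v1.24 are short-lived. If you need a token that does not expire with the request, one option is a ServiceAccount token Secret, which the control plane populates automatically. A sketch only: it assumes the `admin-user` ServiceAccount created by dashboard-user.yaml, and the Secret name `admin-user-token` is a hypothetical choice:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token            # hypothetical name, pick your own
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF

# Read the generated long-lived token back out
kubectl -n kubernetes-dashboard get secret admin-user-token \
  -o jsonpath='{.data.token}' | base64 -d
```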
13.4 Log in to the dashboard
https://192.168.1.11:30034/
14. Install ingress
14.1 Write the configuration file and apply it
```shell
[root@hello ~/yaml]# vim deploy.yaml
[root@hello ~/yaml]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.2.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
[root@hello ~/yaml]#
```
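Once this manifest is applied in 14.4, a quick readiness check saves confusion before creating Ingress objects, because the admission webhook rejects them until the controller is up. A minimal sketch:

```shell
# List everything in the ingress-nginx namespace, then wait for the controller to become Ready
kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx wait --for=condition=Ready pod \
  -l app.kubernetes.io/component=controller --timeout=300s
```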
14.2 Enable the default backend: write the configuration file and apply it
```shell
[root@hello ~/yaml]# vim backend.yaml
[root@hello ~/yaml]# cat backend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
[root@hello ~/yaml]#
```
14.3 Install a test application
```shell
[root@hello ~/yaml]# vim ingress-demo-app.yaml
[root@hello ~/yaml]#
[root@hello ~/yaml]# cat ingress-demo-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
```
14.4 Run the deployment
```shell
kubectl apply -f deploy.yaml
kubectl apply -f backend.yaml

# Wait for the resources above to finish creating, then run:
kubectl apply -f ingress-demo-app.yaml

kubectl get ingress
NAME               CLASS   HOSTS                            ADDRESS        PORTS   AGE
ingress-host-bar   nginx   hello.chenby.cn,demo.chenby.cn   192.168.1.12   80      7s
```
14.5 Filter for the ingress ports
```shell
[root@hello ~/yaml]# kubectl get svc -A | grep ingress
ingress-nginx   ingress-nginx-controller             NodePort    10.104.231.36   <none>   80:32636/TCP,443:30579/TCP   104s
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.101.85.88    <none>   443/TCP                      105s
[root@hello ~/yaml]#
```
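A minimal end-to-end check of the host-based routing, without touching DNS, is to send the Host header explicitly. A sketch only: the NodePort 32636 and node IP come from the example output above and will differ in your cluster.

```shell
# Route to hello-server via the "hello.chenby.cn" rule
curl -H "Host: hello.chenby.cn" http://192.168.1.12:32636/

# Route to nginx-demo via the "demo.chenby.cn" rule (prefix /nginx)
curl -H "Host: demo.chenby.cn" http://192.168.1.12:32636/nginx
```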
15. IPv6 test
```shell
# Deploy the application
[root@k8s-master01 ~]# vim cby.yaml
[root@k8s-master01 ~]# cat cby.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chenby
spec:
  replicas: 3
  selector:
    matchLabels:
      app: chenby
  template:
    metadata:
      labels:
        app: chenby
    spec:
      containers:
      - name: chenby
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: chenby
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: NodePort
  selector:
    app: chenby
  ports:
  - port: 80
    targetPort: 80

[root@k8s-master01 ~]# kubectl apply -f cby.yaml

# Check the port
[root@k8s-master01 ~]# kubectl get svc
NAME     TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
chenby   NodePort   fd00::a29c   <none>        80:30779/TCP   5s
[root@k8s-master01 ~]#

# Access over the internal network
[root@localhost yaml]# curl -I http://[fd00::a29c]
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:35 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

[root@localhost yaml]# curl -I http://192.168.1.11:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:59 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes
[root@localhost yaml]#

# Access over the public network
[root@localhost yaml]# curl -I http://[2408:8207:78ca:9fa1::10]:30779
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 05 May 2022 10:20:54 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes
```
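To confirm dual-stack at the Pod and Service level rather than only via curl, the `podIPs` and `clusterIPs` fields show both address families. A small sketch using the chenby resources created above:

```shell
# Each pod should list an IPv4 and an IPv6 address under status.podIPs
kubectl get pod -l app=chenby \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIPs}{"\n"}{end}'

# The Service created with PreferDualStack should carry ClusterIPs of both families
kubectl get svc chenby -o jsonpath='{.spec.clusterIPs}{"\n"}'
```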
16. Install command-line auto-completion
```shell
# This guide targets Ubuntu, so use apt rather than yum
apt install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
```
About
https://www.oiox.cn/
https://www.oiox.cn/index.php...
CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, and my personal blog
Search for 《小陈运维》 on any of these platforms.
Articles are published primarily on the WeChat official account 《Linux运维交换社区》.