I. Ubuntu Environment
1. Test Environment Machine Layout
Role | Hostname | IP Address |
---|---|---|
master | test-b-k8s-master | 192.168.11.13 |
node | test-b-k8s-node01 | 192.168.11.14 |
node | test-b-k8s-node02 | 192.168.11.15 |
2. Software Versions
Software | Version |
---|---|
OS | Ubuntu 20.04.5 (focal) |
Docker | Docker version 20.10.21 |
Kubernetes | v1.25.4 |
3. Initial Configuration (All Nodes)
Switch to the Aliyun apt mirror:
```shell
sudo vi /etc/apt/sources.list
```
```
deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
# deb https://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
# deb-src https://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
```
```shell
sudo apt-get update -y
```
Disable the firewall:
```shell
sudo ufw disable
```
Disable SELinux:
Skipped — the Ubuntu 20 installation used here does not ship with SELinux, so there is nothing to disable.
Disable swap:
```shell
sudo swapoff -a                            # temporary
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
```
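The `sed` one-liner comments out every fstab line that mentions swap. A quick way to see exactly what it does is to run it against a throwaway copy (the sample fstab content below is made up for illustration):

```shell
# Create a disposable sample fstab (contents are hypothetical)
cat > /tmp/fstab.demo << 'EOF'
UUID=0a1b2c3d / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Same edit the article applies to /etc/fstab: prefix swap lines with '#'
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo

cat /tmp/fstab.demo
# The swap line is now commented out; the root mount line is untouched
```

After a reboot, `swapon --show` printing nothing confirms swap stayed off.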
Add hosts entries:
```shell
sudo vi /etc/hosts
```
```
192.168.11.13 test-b-k8s-master
192.168.11.14 test-b-k8s-node01
192.168.11.15 test-b-k8s-node02
```
Pass bridged IPv4 traffic to the iptables chains:
```shell
sudo vi /etc/sysctl.d/k8s.conf
```
```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```
```shell
sudo sysctl --system
```
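Note that the `net.bridge.bridge-nf-call-*` keys only exist while the `br_netfilter` kernel module is loaded; the steps above assume it already is. A common companion step (an assumption on my part, not from the original article) is to load the module now and persist it across reboots:

```shell
# Load the bridge netfilter module now...
sudo modprobe br_netfilter
# ...and have systemd load it on every boot
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
# Re-apply the sysctl settings now that the keys exist
sudo sysctl --system
```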
Time synchronization:
```shell
sudo apt install -y ntpdate
sudo ntpdate time.windows.com
```
4. Install Docker/cri-dockerd/kubeadm/kubelet/kubectl (All Nodes)
Install Docker CE:
```shell
sudo apt-get -y install apt-transport-https ca-certificates software-properties-common
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get -y update && sudo apt-get -y install docker-ce
# Configure a registry mirror (download accelerator)
sudo vi /etc/docker/daemon.json
```
```json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```
```shell
sudo systemctl daemon-reload && sudo systemctl restart docker && sudo docker info
```
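A malformed `daemon.json` will prevent the Docker daemon from starting at all, so it is worth validating the JSON before the restart. A small sketch, written against a demo path here (on a real node the target would be `/etc/docker/daemon.json`):

```shell
# Write the same configuration the article uses to a demo path
cat > /tmp/daemon.json.demo << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Fail loudly if the file is not valid JSON
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "daemon.json OK"
```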
Install cri-dockerd:
```shell
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb
sudo dpkg -i cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb
# Point the pause (infra container) image at a reachable registry
sudo vi /usr/lib/systemd/system/cri-docker.service
# Edit the ExecStart line to read:
# ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:latest
sudo systemctl daemon-reload && sudo systemctl enable cri-docker && sudo systemctl restart cri-docker
```
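Editing `/usr/lib/systemd/system/cri-docker.service` directly works, but a package upgrade can overwrite the file. The usual systemd alternative (a sketch under the same paths as above, not from the original article) is a drop-in override; note the empty `ExecStart=` line, which clears the packaged value before setting the new one:

```shell
sudo mkdir -p /etc/systemd/system/cri-docker.service.d
sudo tee /etc/systemd/system/cri-docker.service.d/10-pause-image.conf << 'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:latest
EOF
sudo systemctl daemon-reload && sudo systemctl restart cri-docker
```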
Install kubeadm, kubelet, and kubectl:
```shell
# Add the Aliyun apt repository for Kubernetes
sudo apt-get install -y apt-transport-https ca-certificates
sudo vi /etc/apt/sources.list.d/kubernetes.list
# File contents:
# deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Install
sudo apt-get update -y && sudo apt-get install -y kubelet kubeadm kubectl && sudo systemctl enable kubelet.service
```
5. Deploy the Kubernetes Master
Run on 192.168.11.13 (the master):
```shell
sudo kubeadm init \
  --apiserver-advertise-address=192.168.11.13 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.25.4 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --ignore-preflight-errors=all
```
Configure the kubeconfig that kubectl uses to authenticate to the cluster:
```shell
# Regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Root (can be added to the environment)
export KUBECONFIG=/etc/kubernetes/admin.conf
```
6. Join the Nodes
Run on both node machines:
```shell
sudo kubeadm join 192.168.11.13:6443 --token o37091.z858bts6jmth9irz \
  --discovery-token-ca-cert-hash sha256:628a5b50227c93e465adc1ca380cf335e8f639c15c8a92892f9d22b71ac6c2ac \
  --cri-socket=unix:///var/run/cri-dockerd.sock
```
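The `--discovery-token-ca-cert-hash` value is the SHA-256 of the cluster CA's public key. If you ever need to recompute it, the standard openssl pipeline below does the job; here it runs against a throwaway self-signed certificate for illustration, while on a real master the input would be `/etc/kubernetes/pki/ca.crt`:

```shell
# Generate a disposable CA certificate (a stand-in for /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Hash the DER-encoded public key, which is what kubeadm compares against
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | sed 's/^.* //'
```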
The token used above is valid for 24 hours by default; once it expires it can no longer be used and a new one has to be created. Generate a fresh join command directly with:
```shell
sudo kubeadm token create --print-join-command
```
After joining, check the worker nodes on the master:
```
tantianran@test-b-k8s-master:~$ kubectl get nodes
NAME                STATUS     ROLES           AGE     VERSION
test-b-k8s-master   NotReady   control-plane   7m57s   v1.25.4
test-b-k8s-node01   NotReady   <none>          14s     v1.25.4
test-b-k8s-node02   NotReady   <none>          8s      v1.25.4
```
The nodes are NotReady because no network plugin has been deployed yet; they will turn Ready once one is in place.
7. Deploy the Container Network Interface (CNI): Calico
Calico is a pure layer-3 data center networking solution and currently the mainstream network choice for Kubernetes. Deploy it from the master.
Download the Calico YAML:
```shell
wget https://docs.projectcalico.org/manifests/calico.yaml
```
After downloading, the Pod network definition (CALICO_IPV4POOL_CIDR) inside the file must be changed to match the --pod-network-cidr passed to kubeadm init earlier.
The default CALICO_IPV4POOL_CIDR entry in calico.yaml looks like this:
```yaml
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
```
Uncomment it and change the value to match kubeadm init's --pod-network-cidr (10.244.0.0/16):
```yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
```
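The uncomment-and-edit step can also be scripted. The `sed` below is a sketch, demonstrated here on a small excerpt with calico.yaml's typical indentation; verify the result with `grep` before applying it to the full file:

```shell
# Excerpt with the default commented-out block (indentation as in calico.yaml)
cat > /tmp/calico-excerpt.yaml << 'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# Uncomment the pair and switch the CIDR to match --pod-network-cidr
sed -i \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' \
  /tmp/calico-excerpt.yaml

cat /tmp/calico-excerpt.yaml
```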
With the file edited, deploy:
```shell
# Deploy calico
kubectl apply -f calico.yaml
```
Now comes a fairly long wait; how long depends on your network speed, because the relevant images have to be pulled, and on all 3 nodes. Watch the pods in the kube-system namespace on the master:
```
tantianran@test-b-k8s-master:~$ kubectl get pods -n kube-system
NAME                                        READY   STATUS     RESTARTS   AGE
calico-kube-controllers-798cc86c47-sg65n    0/1     Pending    0          111s
calico-node-9d2gz                           0/1     Init:0/3   0          111s
calico-node-csnwt                           0/1     Init:0/3   0          111s
calico-node-g7rk2                           0/1     Init:0/3   0          111s
coredns-c676cc86f-p2sgs                     0/1     Pending    0          19m
coredns-c676cc86f-pn5zk                     0/1     Pending    0          19m
etcd-test-b-k8s-master                      1/1     Running    0          19m
kube-apiserver-test-b-k8s-master            1/1     Running    0          19m
kube-controller-manager-test-b-k8s-master   1/1     Running    0          19m
kube-proxy-6bdwl                            1/1     Running    0          12m
kube-proxy-d8xgk                            1/1     Running    0          19m
kube-proxy-lcw2n                            1/1     Running    0          11m
kube-scheduler-test-b-k8s-master            1/1     Running    0          19m
```
Once all the Calico pods are Running, the rollout has succeeded; check the nodes again and they should now be Ready.
As mentioned, all 3 nodes pull images during the Calico rollout; log in to one of the nodes to confirm the images have arrived:
```
tantianran@test-b-k8s-node02:~$ sudo docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.25.4   2c2bc1864279   2 weeks ago   61.7MB
calico/cni                                           v3.24.5   628dd7088041   2 weeks ago   198MB
calico/node                                          v3.24.5   54637cb36d4a   2 weeks ago   226MB
registry.aliyuncs.com/google_containers/pause        latest    350b164e7ae1   8 years ago   240kB
```
After a short wait, the Calico pods are all Running:
```
tantianran@test-b-k8s-master:~$ kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
calico-kube-controllers-798cc86c47-sg65n    1/1     Running   0          7m45s
calico-node-9d2gz                           1/1     Running   0          7m45s
calico-node-csnwt                           1/1     Running   0          7m45s
calico-node-g7rk2                           1/1     Running   0          7m45s
coredns-c676cc86f-p2sgs                     1/1     Running   0          25m
coredns-c676cc86f-pn5zk                     1/1     Running   0          25m
etcd-test-b-k8s-master                      1/1     Running   0          25m
kube-apiserver-test-b-k8s-master            1/1     Running   0          25m
kube-controller-manager-test-b-k8s-master   1/1     Running   0          25m
kube-proxy-6bdwl                            1/1     Running   0          17m
kube-proxy-d8xgk                            1/1     Running   0          25m
kube-proxy-lcw2n                            1/1     Running   0          17m
kube-scheduler-test-b-k8s-master            1/1     Running   0          25m
```
Check the worker nodes again; they are now in the Ready state:
```
tantianran@test-b-k8s-master:~$ kubectl get nodes
NAME                STATUS   ROLES           AGE   VERSION
test-b-k8s-master   Ready    control-plane   26m   v1.25.4
test-b-k8s-node01   Ready    <none>          19m   v1.25.4
test-b-k8s-node02   Ready    <none>          19m   v1.25.4
```
That completes the deployment of the Calico CNI plugin!
8. Deploy the Dashboard
Dashboard is the official UI for basic management of K8s resources; it is also deployed from the master. GitHub project: https://github.com/kubernetes…
YAML download:
```shell
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```
By default the Dashboard is only reachable from inside the cluster. After downloading the yaml file, change the Service to the NodePort type to expose it externally:
```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # add this (the port we will use to reach the UI later)
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort        # add this (makes it reachable on every node)
```
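If you would rather not hand-edit recommended.yaml, the same change can be made after an unmodified `kubectl apply` with a `patch` (a sketch, not from the original article; it assumes the Service name and namespace shown above):

```shell
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  --type merge \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'
```

Note that a merge patch replaces the whole `ports` list, which is fine here because this Service only has one port.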
Deploy:
```shell
kubectl apply -f recommended.yaml
```
Again, wait for the dashboard pods to reach Running (images have to be pulled here too). Watch the pods in the kubernetes-dashboard namespace:
```
tantianran@test-b-k8s-master:~$ kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS              RESTARTS   AGE
dashboard-metrics-scraper-64bcc67c9c-nnzqv   0/1     ContainerCreating   0          17s
kubernetes-dashboard-5c8bd6b59-blpgm         0/1     ContainerCreating   0          17s
```
Checking again after the wait, the dashboard pods are Running, which means the deployment is done:
```
tantianran@test-b-k8s-master:~$ kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-64bcc67c9c-nnzqv   1/1     Running   0          2m
kubernetes-dashboard-5c8bd6b59-blpgm         1/1     Running   0          2m
```
The UI can now be reached on port 30001 of any node; remember to use https:
- https://192.168.11.13:30001/#…
- https://192.168.11.14:30001/#…
- https://192.168.11.15:30001/#…
Create a token for logging in to the UI:
```shell
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl create token dashboard-admin -n kubernetes-dashboard
```
Copy the generated token into the login form to sign in.
One final note: from here on, every yaml file is applied only on the master node (all deployment operations happen there). Keep that in mind!
II. CentOS 7 Environment
1. Test Environment Machine Layout
Role | Hostname | IP Address |
---|---|---|
master | test-a-k8s-master | 192.168.11.10 |
node | test-a-k8s-node01 | 192.168.11.11 |
node | test-a-k8s-node02 | 192.168.11.12 |
2. Software Versions
Software | Version |
---|---|
OS | CentOS Linux release 7.9.2009 |
Docker | Docker version 20.10.21 |
Kubernetes | v1.25.4 |
3. OS Initial Configuration (All Nodes)
3.1 Disable the firewall
```shell
systemctl stop firewalld
systemctl disable firewalld
```
3.2 Disable SELinux
```shell
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary
```
3.3 Disable swap
```shell
swapoff -a                            # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
```
3.4 Set hostnames according to the layout
```shell
hostnamectl set-hostname <hostname>
```
3.5 Add hosts entries on the master
```shell
cat >> /etc/hosts << EOF
192.168.11.10 test-a-k8s-master
192.168.11.11 test-a-k8s-node01
192.168.11.12 test-a-k8s-node02
EOF
```
3.6 Pass bridged IPv4 traffic to the iptables chains
```shell
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # apply
```
3.7 Time synchronization
```shell
yum install ntpdate -y
ntpdate time.windows.com
ntpdate ntp1.aliyun.com   # Aliyun time service
```
4. Install Docker/kubeadm/kubelet/cri-dockerd (All Nodes)
4.1 Install Docker CE
```shell
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker && systemctl start docker
```
4.2 Configure a registry mirror (download accelerator)
```shell
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info
```
4.3 Install cri-dockerd
```shell
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6-3.el7.x86_64.rpm
rpm -ivh cri-dockerd-0.2.6-3.el7.x86_64.rpm
```
4.4 Point the pause (infra container) image at a reachable registry
In the cri-docker.service configuration, append --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7 after fd://. Note: pause:3.7 can also be replaced with the latest version, pause:latest.
```shell
vi /usr/lib/systemd/system/cri-docker.service
# The line should read:
# ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
systemctl daemon-reload
systemctl enable cri-docker && systemctl start cri-docker
```
4.5 Add the Aliyun YUM repository
```shell
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
4.6 Install kubeadm, kubelet, and kubectl
```shell
yum install -y kubelet-1.25.4 kubeadm-1.25.4 kubectl-1.25.4
systemctl enable kubelet.service   # only enable here; kubeadm will bring kubelet up during deployment
```
5. Deploy the Kubernetes Master
Run on 192.168.11.10 (the master):
```shell
kubeadm init \
  --apiserver-advertise-address=192.168.11.10 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.25.4 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --ignore-preflight-errors=all
```
Before making any changes, kubeadm runs a series of preflight checks to validate the system state. Some checks only raise warnings, while others are treated as errors that abort kubeadm, which is why --ignore-preflight-errors=all is used to ignore errors found by the checks.
When initialization completes, a message like the following appears at the end:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# Remember this join command; it will be run on the nodes later to join the cluster
kubeadm join 192.168.11.10:6443 --token pk474p.uvc65opv1zs625lq \
        --discovery-token-ca-cert-hash sha256:c14439c309345c6a02340bb7df74108c4cde6e0f8393f45a123f347f11b98b57
```
6. Configure the kubeconfig that kubectl uses to authenticate to the cluster
```shell
# Regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Or, as root (can be added to the environment)
export KUBECONFIG=/etc/kubernetes/admin.conf
```
Check the nodes:
```
[root@test-a-k8s-master ~]# kubectl get nodes
NAME                STATUS     ROLES           AGE   VERSION
test-a-k8s-master   NotReady   control-plane   12m   v1.25.4
```
The master reports NotReady because the network plugin has not been deployed yet; continue on.
7. Join the Kubernetes Nodes
To add new nodes to the cluster, run the kubeadm join command printed by kubeadm init, manually appending --cri-socket=unix:///var/run/cri-dockerd.sock:
```shell
# Run on both nodes (192.168.11.11 and 192.168.11.12)
kubeadm join 192.168.11.10:6443 --token pk474p.uvc65opv1zs625lq \
  --discovery-token-ca-cert-hash sha256:c14439c309345c6a02340bb7df74108c4cde6e0f8393f45a123f347f11b98b57 \
  --cri-socket=unix:///var/run/cri-dockerd.sock
```
The token used above is valid for 24 hours by default; once it expires it can no longer be used and a new one has to be created. Generate a fresh join command directly with:
```shell
kubeadm token create --print-join-command
```
8. Deploy the Container Network Interface (CNI): Calico
Calico is a pure layer-3 data center networking solution and currently the mainstream network choice for Kubernetes. Deploy it from the master.
Download the Calico YAML:
```shell
wget https://docs.projectcalico.org/manifests/calico.yaml
```
After downloading, the Pod network definition (CALICO_IPV4POOL_CIDR) inside the file must be changed to match the --pod-network-cidr passed to kubeadm init earlier.
The default CALICO_IPV4POOL_CIDR entry in calico.yaml looks like this:
```yaml
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
```
Uncomment it and change the value to match kubeadm init's --pod-network-cidr:
```yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
```
With the file edited, deploy:
```shell
# Deploy calico
kubectl apply -f calico.yaml
```
Check the pods in the kube-system namespace:
```
[root@test-a-k8s-master ~]# kubectl get pods -n kube-system
NAME                                        READY   STATUS              RESTARTS   AGE
calico-kube-controllers-798cc86c47-pd8zc    0/1     ContainerCreating   0          82s
calico-node-d54c9                           0/1     Init:0/3            0          82s
calico-node-prpdl                           0/1     Init:0/3            0          82s
calico-node-rkwzv                           1/1     Running             0          82s
coredns-c676cc86f-kw9v8                     1/1     Running             0          26m
coredns-c676cc86f-vfwz7                     1/1     Running             0          26m
etcd-test-a-k8s-master                      1/1     Running             0          26m
kube-apiserver-test-a-k8s-master            1/1     Running             0          26m
kube-controller-manager-test-a-k8s-master   1/1     Running             0          26m
kube-proxy-kv9rx                            1/1     Running             0          26m
kube-proxy-qphqt                            0/1     ContainerCreating   0          11m
kube-proxy-z5fjm                            0/1     ContainerCreating   0          11m
kube-scheduler-test-a-k8s-master            1/1     Running             0          26m
```
Note: from here on, every yaml file is applied only on the master node.
Once the Calico pods are all Running, the nodes will become Ready as well.
Check the worker nodes again; they are now in the Ready state:
```
[root@test-a-k8s-master ~]# kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
test-a-k8s-master   Ready    control-plane   13m     v1.25.4
test-a-k8s-node01   Ready    <none>          8m43s   v1.25.4
test-a-k8s-node02   Ready    <none>          7m57s   v1.25.4
```
9. Deploy the Dashboard
Dashboard is the official UI for basic management of K8s resources. GitHub project: https://github.com/kubernetes…
YAML download:
```shell
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```
By default the Dashboard is only reachable from inside the cluster. Change the Service to the NodePort type to expose it externally:
```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # add this (the port we will use to reach the UI later)
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort        # add this (makes it reachable on every node)
```
Deploy:
```shell
kubectl apply -f recommended.yaml
```
Check the pods in the kubernetes-dashboard namespace:
```
[root@test-a-k8s-master ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS              RESTARTS   AGE
dashboard-metrics-scraper-64bcc67c9c-4jvvw   0/1     ContainerCreating   0          40s
kubernetes-dashboard-5c8bd6b59-v7nvm         1/1     Running             0          40s
```
Once the dashboard pods are all Running, the deployment is complete.
Create a token for logging in to the UI:
```shell
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl create token dashboard-admin -n kubernetes-dashboard
```
Port 30001 on any of the nodes can be used to log in to the UI:
- https://192.168.11.10:30001/#…
- https://192.168.11.11:30001/#…
- https://192.168.11.12:30001/#…
Copy the generated token into the login form to sign in.
This article is reproduced from (follow us if you like it): https://mp.weixin.qq.com/s/F5…