About Kubernetes: Three-Node Kubernetes Installation


1. Installation Requirements

Before you begin, the machines used to deploy the Kubernetes cluster must meet the following requirements:

  • Three virtual machines; operating system: Linux kernel 3.10.0-1127.el7.x86_64 (CentOS 7)
  • Hardware: 2 CPUs; 3 GB or more of RAM; about 20 GB of disk
  • The 3 machines form one cluster and have full network connectivity to each other
  • The cluster can reach the Internet, so that images can be pulled
  • Swap is disabled (a quick verification sketch follows this list)
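A minimal way to confirm these prerequisites on each VM (just a check sketch; the values only need to meet the requirements above):

$ nproc          # CPU count
$ free -h        # total memory
$ df -h /        # root filesystem size
$ swapon -s      # should print nothing once swap is disabled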

2. Installation Goals

  1. Install Docker and kubeadm on every machine in the cluster
  2. Deploy the Kubernetes master node
  3. Deploy a container network plugin
  4. Deploy the Kubernetes worker nodes and join them to the Kubernetes cluster
  5. Deploy the Dashboard web UI for visually inspecting Kubernetes resources

3. Environment Preparation

  • master node: 192.168.11.99
  • node1 node: 192.168.11.100
  • node2 node: 192.168.11.101
##  1. Disable the firewall
$ systemctl stop firewalld
### Keep the firewall from starting again at boot
$ systemctl disable firewalld

## 2. Check the SELinux status
$ sestatus
SELinux status:                 disabled 

### 2.1 Permanently disable SELinux (takes effect after a reboot); $ vim /etc/sysconfig/selinux
Change: SELINUX=disabled

### 2.2 Temporarily disable SELinux
$ setenforce 0

## 3. Disable swap on each node
$  free -g                                                                            
              total        used        free      shared  buff/cache   available                              
Mem:              2           0           2           0           0           2                              
Swap:             1           0           1          


### 3.1 Temporarily disable
$ swapoff -a 

### 3.2 Permanently disable (takes effect after a reboot); $ vi /etc/fstab
Comment out the swap line

## 4. Set the hostname on each node: $ hostnamectl set-hostname <hostname>
# master node:
hostnamectl set-hostname k8s-master
# node1 node:
hostnamectl set-hostname k8s-node1
# node2 node:
hostnamectl set-hostname k8s-node2

### 4.1 Check the hostname
[root@localhost vagrant]# hostname                                                                           
k8s-master  

## 5. Add hosts entries on the master (use the IPs and names of your own VMs)
$ cat >> /etc/hosts << EOF
192.168.11.99 k8s-master
192.168.11.100 k8s-node1
192.168.11.101 k8s-node2
EOF
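A quick hedged check that the entries resolve and the nodes are reachable (hostnames as configured above):

$ ping -c 1 k8s-node1
$ ping -c 1 k8s-node2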

## 6. Configure time synchronization on the master
### 6.1 Do an initial one-off sync with ntpdate (the next steps use chrony; if it is not already present, install it with yum install -y chrony as in step 7.1): $ yum install ntpdate -y
$ ntpdate time.windows.com    

### 6.2 Comment out the default NTP servers
sed -i 's/^server/#&/' /etc/chrony.conf

### 6.3 Point to upstream public NTP servers and allow the other nodes to sync time from this one
cat >> /etc/chrony.conf << EOF
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
allow all
EOF

### 6.4 Restart the chronyd service and enable it at boot: systemctl enable chronyd && systemctl restart chronyd

### 6.5 Enable network time synchronization
timedatectl set-ntp true
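Before pointing the worker nodes at the master, it is worth verifying that the master itself is syncing from the public pool; a hedged check:

$ chronyc sources -v     # upstream servers and their reachability
$ chronyc tracking       # current offset and stratum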

## 7. Configure time synchronization on the worker nodes
### 7.1 Install chrony: yum install -y chrony

### 7.2 Comment out the default servers
sed -i 's/^server/#&/' /etc/chrony.conf

### 7.3 Use the internal master node as the upstream NTP server
echo server 192.168.11.99 iburst >> /etc/chrony.conf

### 7.4 Restart the service and enable it at boot: systemctl enable chronyd && systemctl restart chronyd

### 7.5 Check the configuration -- time is successfully synced from the master node
[root@k8s-node2 vagrant]# chronyc sources                                                                    
210 Number of sources = 1                                                                                    
MS Name/IP address         Stratum Poll Reach LastRx Last sample                                             
===============================================================================                              
^* k8s-master                    3   6    36   170    +15us[+85us] +/- 4405us 

## 8. Pass bridged IPv4 traffic to iptables (all nodes)
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system  # apply the settings
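If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter module is probably not loaded yet; a hedged fix that also survives reboots:

$ modprobe br_netfilter
$ echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
$ sysctl --system
$ sysctl net.bridge.bridge-nf-call-iptables   # should print 1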

## 9. Load the IPVS kernel modules
Note: IPVS is the virtual-IP module (ClusterIP: a VIP, i.e. a virtual IP that provides the cluster IP). Since IPVS has already been merged into the mainline kernel, the only prerequisite for enabling IPVS in kube-proxy is to load the kernel modules below. Run the following script on all Kubernetes nodes:

### 9.1 Configuration
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

### 9.2 Run the script
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates the file /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules have been loaded correctly. Next, make sure the ipset package is installed on every node. To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm.

### 9.3 Install the management tools
$ yum install ipset ipvsadm -y
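Loading the modules only prepares the nodes; kube-proxy still runs in iptables mode by default. A hedged sketch of switching it to IPVS later, once the cluster is up (kube-proxy reads its mode from the kube-proxy ConfigMap):

$ kubectl edit configmap kube-proxy -n kube-system         # set mode: "ipvs"
$ kubectl delete pod -n kube-system -l k8s-app=kube-proxy  # recreate the kube-proxy pods with the new mode
$ ipvsadm -Ln                                              # list the IPVS virtual servers to confirm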

4. Install Docker / kubeadm (cluster bootstrapping tool) / kubelet (container management) on all nodes

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

4.1 Install Docker

Kubernetes' default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet. Note that Kubernetes 1.13 supports Docker versions from 1.11.1 up to 18.06, while the latest Docker release is already 18.09, so we pin the version to 18.06.1-ce when installing.

$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version                                                                  
Docker version 18.06.1-ce, build e68fc7a

Configure a registry mirror:

$ cat > /etc/docker/daemon.json << EOF
{"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
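kubeadm later warns that Docker is using the cgroupfs cgroup driver while systemd is recommended (see the preflight warning in the init output further below). A hedged variant of daemon.json that keeps the mirror and switches the driver; do this before kubeadm init and restart Docker so the kubelet detects the matching driver:

$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ systemctl daemon-reload && systemctl restart docker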

4.2 Add the Alibaba Cloud YUM repository for Kubernetes

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.3 Install kubeadm, kubelet, and kubectl on all nodes

Because new versions are released frequently, pin the version numbers here:

$ yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
$ systemctl enable kubelet && systemctl start kubelet

5. Deploy the Kubernetes Master (run only on the master)

5.1 Run on the master node.

Because the default image registry k8s.gcr.io is not reachable from mainland China, specify the Alibaba Cloud mirror registry instead.

$ kubeadm init \
  --apiserver-advertise-address=192.168.11.99 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

kubeadm init parameters

      --apiserver-advertise-address string   IP address the API server advertises (binds to).
      --apiserver-bind-port int32            Port the API server listens on (default 6443).
      --apiserver-cert-extra-sans strings    Extra Subject Alternative Names (SANs) for the API server certificate; can be IPs or DNS names. The certificate is bound to its SANs.
      --cert-dir string                      Directory where certificates are stored (default "/etc/kubernetes/pki").
      --certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
      --config string                        Path to a kubeadm configuration file.
      --cri-socket string                    Path to the CRI socket; if empty, kubeadm auto-detects it. Only set this when the machine has multiple CRI sockets or a non-standard one.
      --dry-run                              Do not apply any changes; just print what would be done.
      --feature-gates string                 Extra feature gates to enable, as key=value pairs.
      --help, -h                             Show help.
      --ignore-preflight-errors strings      Preflight errors to ignore; ignored errors are shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --image-repository string              Registry to pull the control-plane images from (default "k8s.gcr.io").
      --kubernetes-version string            Kubernetes version to install (default "stable-1").
      --node-name string                     Name for this node; defaults to the node's hostname.
      --pod-network-cidr string              CIDR for the Pod network; the control plane propagates it to the other nodes so containers started there use this network.
      --service-cidr string                  IP range for Services (default "10.96.0.0/12").
      --service-dns-domain string            DNS domain suffix for Services, e.g. "myorg.internal" (default "cluster.local").
      --skip-certificate-key-print           Do not print the key used to encrypt the control-plane certificates.
      --skip-phases strings                  Phases to skip.
      --skip-token-print                     Do not print the default bootstrap token generated by kubeadm init.
      --token string                         Token used to establish mutual trust between nodes and the control plane; format [a-z0-9]{6}\.[a-z0-9]{16}, e.g. abcdef.0123456789abcdef.
      --token-ttl duration                   Time after which the token is automatically deleted (e.g. 1s, 2m, 3h). '0' means the token never expires (default 24h0m0s).
      --upload-certs                         Upload the control-plane certificates to the kubeadm-certs Secret.

Output

[root@localhost sysctl.d]# kubeadm init --apiserver-advertise-address=192.168.11.99 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
W0923 10:03:00.077870    3217 validation.go:28] Cannot validate kube-proxy config - no validator is available                                                                                              
W0923 10:03:00.077922    3217 validation.go:28] Cannot validate kubelet config - no validator is available                                                                                                 
[init] Using Kubernetes version: v1.17.0                                                                                                                                                                   
[preflight] Running pre-flight checks                                                                                                                                                                      
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/             
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'                                                                                           
[preflight] Pulling images required for setting up a Kubernetes cluster                                                                                                                                    
[preflight] This might take a minute or two, depending on the speed of your internet connection                                                                                                            
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'                                                                                                              
                                                                                                                                                                                                           
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"                                                                                                   
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"                                                                                                                       
[kubelet-start] Starting the kubelet                                                                                                                                                                       
[certs] Using certificateDir folder "/etc/kubernetes/pki"                                                                                                                                                  
[certs] Generating "ca" certificate and key                                                                                                                                                                
[certs] Generating "apiserver" certificate and key                                                                                                                                                         
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.99]            
[certs] Generating "apiserver-kubelet-client" certificate and key                                                                                                                                          
[certs] Generating "front-proxy-ca" certificate and key                                                                                                                                                    
[certs] Generating "front-proxy-client" certificate and key                                                                                                                                                
[certs] Generating "etcd/ca" certificate and key                                                                                                                                                           
[certs] Generating "etcd/server" certificate and key                                                                                                                                                       
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.99 127.0.0.1 ::1]                                                                                      
[certs] Generating "etcd/peer" certificate and key                                                                                                                                                         
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.99 127.0.0.1 ::1]                                                                                        
[certs] Generating "etcd/healthcheck-client" certificate and key                                                                                                                                           
[certs] Generating "apiserver-etcd-client" certificate and key                                                                                                                                             
[certs] Generating "sa" key and public key                                                                                                                                                                 
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"                                                                                                                                                     
[kubeconfig] Writing "admin.conf" kubeconfig file                                                                                                                                                          
[kubeconfig] Writing "kubelet.conf" kubeconfig file                                                                                                                                                        
[kubeconfig] Writing "controller-manager.conf" kubeconfig file                                                                                                                                             
[kubeconfig] Writing "scheduler.conf" kubeconfig file                                                                                                                                                      
[control-plane] Using manifest folder "/etc/kubernetes/manifests"                                                                                                                                          
[control-plane] Creating static Pod manifest for "kube-apiserver"                                                                                                                                          
[control-plane] Creating static Pod manifest for "kube-controller-manager"                                                                                                                                 
W0923 10:04:06.247879    3217 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"                                                                            
[control-plane] Creating static Pod manifest for "kube-scheduler"                                                                                                                                          
W0923 10:04:06.249731    3217 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"                                                                            
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"                                                                                                                          
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s                                              
[apiclient] All control plane components are healthy after 26.503229 seconds                                                                                                                               
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace                                                                                                
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster                                                                       
[upload-certs] Skipping phase. Please see --upload-certs                                                                                                                                                   
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"                                                                                  
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]                                                                         
[bootstrap-token] Using token: tccfjm.u0k13eb29g9qaxzc                                                                                                                                                     
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles                                                                                                                         
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials                                                            
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token                                                                         
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster                                                                                      
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace                                                                                                                     
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key                                                                                      
[addons] Applied essential addon: CoreDNS                                                                                                                                                                  
[addons] Applied essential addon: kube-proxy                                                                                                                                                               
                                                                                                                                                                                                           
Your Kubernetes control-plane has initialized successfully!                                                                                                                                                
                                                                                                                                                                                                           
To start using your cluster, you need to run the following as a regular user:                                                                                                                              
                                                                                                                                                                                                           
  mkdir -p $HOME/.kube                                                                                                                                                                                     
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config                                                                                                                                                 
  sudo chown $(id -u):$(id -g) $HOME/.kube/config                                                                                                                                                          
                                                                                                                                                                                                           
You should now deploy a pod network to the cluster.                                                                                                                                                        
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:                                                                                                                                
  https://kubernetes.io/docs/concepts/cluster-administration/addons/                                                                                                                                       
                                                                                                                                                                                                           
Then you can join any number of worker nodes by running the following on each as root:                                                                                                                     
                                                                                                                                                                                                          
kubeadm join 192.168.11.99:6443 --token sc92uy.m2utl1i03ejf8kxb \                                                                                                                                          
    --discovery-token-ca-cert-hash sha256:c2e814618022854ddc3cb060689c520397a6faa0e2e132be2c11c2b1900d1789

(Note down the kubeadm join command from the initialization output; it will be needed when deploying the worker nodes.)

Explanation of the initialization steps

  • [preflight] kubeadm runs pre-initialization checks.
  • [kubelet-start] Generates the kubelet configuration file "/var/lib/kubelet/config.yaml".
  • [certificates] Generates the various tokens and certificates.
  • [kubeconfig] Generates the kubeconfig files that the kubelet needs to communicate with the master.
  • [control-plane] Installs the master components, pulling their Docker images from the specified registry.
  • [bootstraptoken] Generates the bootstrap token; record it, since kubeadm join uses it later when adding nodes to the cluster.
  • [addons] Installs the add-ons kube-proxy and kube-dns. Once the master has initialized successfully, the output shows how to configure kubectl access for a regular user, how to install a Pod network, and how to register other nodes into the cluster.

5.2 Configure kubectl

kubectl is the command-line tool for managing the Kubernetes cluster; we already installed it on all nodes. After the master has been initialized, a little configuration is needed before kubectl can be used. Following the final hint in the kubeadm init output, it is recommended to run kubectl as a regular Linux user.

Create a regular user, centos (optional; you can also use kubectl directly as root)

# Create the user and set its password to 123456
useradd centos && echo "centos:123456" | chpasswd

# Grant sudo privileges and configure passwordless sudo
sed -i '/^root/a\centos  ALL=(ALL)       NOPASSWD:ALL' /etc/sudoers

# Copy the cluster admin kubeconfig into the current user's .kube directory
su - centos
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Enable kubectl shell completion (takes effect after logging out and back in)
echo "source <(kubectl completion bash)" >> ~/.bashrc

5.3 Use the kubectl tool:

# Get node information (only the master node for now)
[centos@k8s-master ~]$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION                                                               
k8s-master   NotReady   master   95m   v1.17.0   

# Get Pod information (no Pods deployed yet)
[centos@k8s-master ~]$ kubectl get po
No resources found in default namespace.

# Get Pods across all namespaces
[centos@k8s-master ~]$ kubectl get po --all-namespaces                                                       
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE                          
kube-system   coredns-9d85f5447-k8q66              0/1     Pending   0          99m                          
kube-system   coredns-9d85f5447-prwrh              0/1     Pending   0          99m                          
kube-system   etcd-k8s-master                      1/1     Running   0          99m                          
kube-system   kube-apiserver-k8s-master            1/1     Running   0          99m                          
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          99m                          
kube-system   kube-proxy-k59lt                     1/1     Running   0          99m                          
kube-system   kube-scheduler-k8s-master            1/1     Running   0          99m

# Check the cluster status: confirm all components are Healthy
[centos@k8s-master ~]$ kubectl get  cs                                                                       
NAME                 STATUS    MESSAGE             ERROR                                                     
controller-manager   Healthy   ok                                                                            
scheduler            Healthy   ok                                                                            
etcd-0               Healthy   {"health":"true"} 

# Check the status of the system Pods on this node
[centos@k8s-master ~]$  kubectl get pod -n kube-system -o wide                                                                                                                                             
NAME                                 READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES                                                                     
coredns-9d85f5447-k8q66              0/1     Pending   0          106m   <none>          <none>       <none>           <none>                                                                              
coredns-9d85f5447-prwrh              0/1     Pending   0          106m   <none>          <none>       <none>           <none>                                                                              
etcd-k8s-master                      1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>                                                                              
kube-apiserver-k8s-master            1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>                                                                              
kube-controller-manager-k8s-master   1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>                                                                              
kube-proxy-k59lt                     1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>                                                                              
kube-scheduler-k8s-master            1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none> 

As you can see, the CoreDNS Pods, which depend on the network, are stuck in the Pending state, i.e. scheduling failed. This is expected: the master node's network is not ready yet. If cluster initialization runs into problems, you can use the kubeadm reset command to clean up and then run the initialization again (a cleanup sketch follows).
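A hedged cleanup sketch before re-running kubeadm init; exactly which leftovers exist depends on how far the failed init got:

$ kubeadm reset -f                      # tear down the partially initialized control plane
$ rm -rf $HOME/.kube /etc/cni/net.d     # remove stale kubeconfig and CNI configuration
$ iptables -F && iptables -t nat -F     # flush rules left behind by kube-proxy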

6. Deploy the network plugin

For the Kubernetes cluster to work, a Pod network must be installed, otherwise Pods cannot communicate with each other. Kubernetes supports several networking solutions; here we use flannel. Run the following command to deploy it:

6.1 Install the Pod network plugin (CNI) on the master node (worker nodes pull it automatically after joining)

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the download fails with the error: The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
open the link in a browser on the host machine, download the file, and copy it into the VM.

Then apply the file from the current directory:

[centos@k8s-master ~]$ ll                                                                                    
total 8                                                                                                      
-rw-rw-r-- 1 centos centos 4819 Sep 24 06:45 kube-flannel.yaml 

$ kubectl apply -f kube-flannel.yaml

kube-flannel.yaml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {"Type": "vxlan"}
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

If the Pod image fails to download, you can switch to this image instead: lizhenliang/flannel:v0.11.0-amd64 (see the sketch below).
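If you downloaded the upstream manifest instead of using the one reproduced above, a hedged one-liner for swapping the image, assuming the downloaded file references quay.io/coreos/flannel:v0.11.0-amd64 (check the actual image lines first and adjust the tag if needed):

$ sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yaml
$ kubectl apply -f kube-flannel.yaml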

# Check the Pod information
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide                                                                                                                                              
NAME                                 READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES                                                                      
coredns-9d85f5447-k8q66              1/1     Running   0          21h   10.244.0.2      k8s-master   <none>           <none>                                                                               
coredns-9d85f5447-prwrh              1/1     Running   0          21h   10.244.0.3      k8s-master   <none>           <none>                                                                               
etcd-k8s-master                      1/1     Running   2          21h   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-apiserver-k8s-master            1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-controller-manager-k8s-master   1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-flannel-ds-rzbkv                1/1     Running   0          14m   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-proxy-k59lt                     1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>                                                                               
kube-scheduler-k8s-master            1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>  

From the Pod list above we can see a new Pod, kube-flannel-ds-rzbkv, which provides the Pod network, and the CoreDNS Pods are now Running as well.

6.2 Check the node status

[centos@k8s-master ~]$ kubectl get nodes                                                                     
NAME         STATUS   ROLES    AGE   VERSION                                                                 
k8s-master   Ready    master   21h   v1.17.0 

We can see that the k8s-master node's status is now Ready.

At this point, the Kubernetes master node has been deployed. If all you need is a single-node Kubernetes, you can use it now. Note, however, that by default the master node does not run user Pods.

7 Deploy the worker nodes

A Kubernetes worker node is almost identical to the master node: both run a kubelet. The only difference is that during kubeadm init, after the kubelet starts, the master node also automatically runs the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods. Run the following commands on k8s-node1 and k8s-node2 to register them with the cluster:

7.1 Join the Kubernetes worker nodes to the cluster so they are managed by the master

Run this on the k8s-node machines to attach them to the master for management.

Join the cluster

# Run the kubeadm join command (with the token generated by kubeadm init) on the worker nodes so they join the cluster
$ kubeadm join 192.168.11.99:6443 --token tccfjm.u0k13eb29g9qaxzc --discovery-token-ca-cert-hash sha256:2db1d97e65a4c4d20118e53292876906388c95f08f40b9ddb6c485a155ca7007

# Join succeeded
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


# If you forgot to record it, regenerate the join command with:
$ kubeadm token create --print-join-command

7.2 Check the cluster status from the Kubernetes master node

[centos@k8s-master ~]$ kubectl get nodes                                                                     
NAME         STATUS   ROLES    AGE    VERSION                                                                
k8s-master   Ready    master   21h    v1.17.0                                                                
k8s-node1    Ready    <none>   2m8s   v1.17.0                                                                
k8s-node2    Ready    <none>   114s   v1.17.0 

# Check the Pod status
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide                                                                                                                                              
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES                                                                     
coredns-9d85f5447-k8q66              1/1     Running   0          22h   10.244.0.2       k8s-master   <none>           <none>                                                                              
coredns-9d85f5447-prwrh              1/1     Running   0          22h   10.244.0.3       k8s-master   <none>           <none>                                                                              
etcd-k8s-master                      1/1     Running   2          22h   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-apiserver-k8s-master            1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-controller-manager-k8s-master   1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-flannel-ds-crxlk                1/1     Running   0          45m   192.168.11.130   k8s-node1    <none>           <none>                                                                              
kube-flannel-ds-njjdv                1/1     Running   0          45m   192.168.11.131   k8s-node2    <none>           <none>                                                                              
kube-flannel-ds-rzbkv                1/1     Running   0          86m   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-proxy-66bw5                     1/1     Running   0          45m   192.168.11.130   k8s-node1    <none>           <none>                                                                              
kube-proxy-k59lt                     1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>                                                                              
kube-proxy-p7xch                     1/1     Running   0          45m   192.168.11.131   k8s-node2    <none>           <none>                                                                              
kube-scheduler-k8s-master            1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none> 

Once all the nodes are Ready, the Kubernetes cluster has been created successfully and everything is in place. If a Pod's status is Pending, ContainerCreating, or ImagePullBackOff, that Pod is not ready; only Running means it is ready. If a Pod shows Init:ImagePullBackOff, its image failed to pull on the corresponding node; we can use kubectl describe pod to inspect the Pod and confirm which image failed to pull, for example as sketched below.
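A hedged example of that inspection; <pod-name> is a placeholder taken from kubectl get pod:

# The Events section at the bottom names the failing image and the node it failed on
$ kubectl describe pod <pod-name> -n kube-system
# Optionally pre-pull the image manually on the affected node
$ docker pull lizhenliang/flannel:v0.11.0-amd64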

7.3 Images on the Kubernetes master node and the Kubernetes worker nodes

# Images on the Kubernetes master node
[centos@k8s-master ~]$ sudo docker images                                                                                                                                                                  
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE                                                                         
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.0             7d54289267dc        21 months ago       116MB                                                                        
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.0             78c190f736b1        21 months ago       94.4MB                                                                       
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.0             0cae8d5cc64c        21 months ago       171MB                                                                        
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.0             5eb3b7486872        21 months ago       161MB                                                                        
registry.aliyuncs.com/google_containers/coredns                   1.6.5               70f311871ae1        22 months ago       41.6MB                                                                       
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        23 months ago       288MB                                                                        
lizhenliang/flannel                                               v0.11.0-amd64       ff281650a721        2 years ago         52.6MB                                                                       
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        3 years ago         742kB

# Images on a Kubernetes worker node
[root@k8s-node1 vagrant]# docker images                                                                                                                                                                    
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE                                                                                      
registry.aliyuncs.com/google_containers/kube-proxy   v1.17.0             7d54289267dc        21 months ago       116MB                                                                                     
lizhenliang/flannel                                  v0.11.0-amd64       ff281650a721        2 years ago         52.6MB                                                                                    
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        3 years ago         742kB

From the images on the worker node we can see that the flannel image is present, which confirms what we said earlier: when a node joins the master, it loads the network component needed to set up Pod networking.

8 Test the Kubernetes cluster

Create a Pod in the Kubernetes cluster to verify that it runs correctly:

[centos@k8s-master ~]$ kubectl create deployment nginx --image=nginx:alpine                                   
deployment.apps/nginx created

# scale grows or shrinks the number of replicas as the load increases or decreases
[centos@k8s-master ~]$ kubectl scale deployment nginx --replicas=2                                            
deployment.apps/nginx scaled 

# Check the Pods' running state
[centos@k8s-master ~]$ kubectl get pods -l app=nginx -o wide                                                                                                                                               
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES                                                                                    
nginx-5b6fb6dd96-4t4dt   1/1     Running   0          10m     10.244.2.2   k8s-node2   <none>           <none>                                                                                             
nginx-5b6fb6dd96-594pq   1/1     Running   0          9m21s   10.244.1.2   k8s-node1   <none>           <none>

# Expose the nginx deployment as a service, opening port 80 as a **NodePort**
[centos@k8s-master ~]$ kubectl expose deployment nginx --port=80 --type=NodePort                             
service/nginx exposed

# Check the service status (node port 30919 maps to container port 80)
[centos@k8s-master ~]$ kubectl get services nginx
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE                                             
nginx   NodePort   10.96.72.70   <none>        80:30919/TCP   8m39s 

# Access the nginx service with curl
[root@k8s-node1 vagrant]# curl 192.168.11.130:30919                                                          
<!DOCTYPE html>                                                                                              
<html>                                                                                                       
<head>                                                                                                       
<title>Welcome to nginx!</title>                                                                             
<style>                                                                                                      
html {color-scheme: light dark;}                                                                           
body { width: 35em; margin: 0 auto;
.......

Finally, verify that DNS and the Pod network work correctly: run Busybox and enter interactive mode.

# busybox is a utility that bundles more than a hundred of the most commonly used Linux commands and tools
[centos@k8s-master ~]$ kubectl run -it curl --image=radial/busyboxplus:curl                                   
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl
 run --generator=run-pod/v1 or kubectl create instead.                                                       
If you don't see a command prompt, try pressing enter.                                                       

# Run nslookup nginx to check whether the service resolves to its in-cluster IP, verifying that DNS works
[root@curl-69c656fd45-t8jth:/]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.96.112.44 nginx.default.svc.cluster.local

# Access the service by name to verify that kube-proxy works
[root@curl-69c656fd45-t8jth:/]$ curl http://nginx/

9 Deploy the Dashboard

9.1 Prepare the YAML file for installing the Kubernetes Dashboard

# Download the configuration file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

# Rename it
mv recommended.yaml kubernetes-dashboard.yaml

9.2 Edit the YAML configuration file

By default the Dashboard is only reachable from inside the cluster; change the Service to type NodePort to expose a port externally:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

9.3 Create a service account and bind it to the default cluster-admin cluster role:

[centos@k8s-master ~]$ ll
total 16
-rw-rw-r-- 1 centos centos 4815 Sep 26 08:17 kube-flannel.yaml
-rw-rw-r-- 1 centos centos 7607 Sep 26 09:34 kubernetes-dashboard.yaml

[centos@k8s-master ~]$ kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
serviceaccount/dashboard-admin created

[centos@k8s-master ~]$ kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin                                                          
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created


[centos@k8s-master ~]$ kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-j6zhm        kubernetes.io/service-account-token   3      9m36s                         
[centos@k8s-master ~]$ kubectl describe secrets dashboard-admin-token-j6zhm -n kubernetes-dashboard
Name:         dashboard-admin-token-j6zhm
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 353ad721-17ff-48e2-a6b6-0654798de60b

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IndrOGZVSGFaWGVQVHZLTUdMOTdkd3NhaGI2UGhHbDdZSFVkVFk3ejYyNncifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tajZ6aG0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzUzYWQ3MjEtMTdmZi00OGUyLWE2YjYtMDY1NDc5OGRlNjBiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.XxrEE6AbDQEH-bu1SYd7FkRuOo98nJUy05t_fCkpa-xyx-VgIWfOsH9VVhOlINtzYOBbpAFnNb52w943t7XmeK9P0Jiwd0Z2YxbmhXbZLtIAFO9gWeTWVUxe0yIzlneDkQui_CLgFTzSHZZYed19La2ejghXeYXcWXPpMKBI3dohc2TTUj0cEfMmMeWPJef_6fj5Qyx7ujokwsw5mqO9givaI9zdBkiV3ky35SjBerYOlvbQ8TjzHygF9vhVZTVh38_Tuff8SomMDUv5xpZ2DQXXkTmXbGao7SFivX4U56tOxrGGktio7A0xVVqnlJN3Br5M-YUrVNJrm73MegKVWg

This token is needed later to log in to the Dashboard, so make sure to record it somewhere; if you do forget it, it can be printed again, for example as sketched below.
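A hedged way to print the token again (the secret name suffix is generated, hence the grep/awk lookup):

$ kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}')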

9.4 Apply the configuration file to start the service

[centos@k8s-master ~]$ kubectl apply -f kubernetes-dashboard.yaml                                                                                                                                          
namespace/kubernetes-dashboard created                                                                                                                                                                     
serviceaccount/kubernetes-dashboard created                                                                                                                                                                
service/kubernetes-dashboard created                                                                                                                                                                       
secret/kubernetes-dashboard-certs created                                                                                                                                                                  
secret/kubernetes-dashboard-csrf created                                                                                                                                                                   
secret/kubernetes-dashboard-key-holder created                                                                                                                                                             
configmap/kubernetes-dashboard-settings created                                                                                                                                                            
role.rbac.authorization.k8s.io/kubernetes-dashboard created                                                                                                                                                
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created                                                                                                                                         
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created                                                                                                                                         
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created                                                                                                                                  
deployment.apps/kubernetes-dashboard created                                                                                                                                                               
service/dashboard-metrics-scraper created                                                                                                                                                                  
deployment.apps/dashboard-metrics-scraper created 

9.5 Check the status

kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard

10 Dashboard cannot be opened in Chrome: fixing the problem

10.1 Problem description

After the K8s Dashboard has been installed, it can be opened in Firefox, but the page cannot be loaded in Google Chrome.

[centos@k8s-master ~]$ curl 192.168.11.130:30001
Client sent an HTTP request to an HTTPS server.

As we can see, the Dashboard must be accessed over https.

10.2 Fixing the problem

Many browsers do not accept the certificate that kubeadm generates automatically, so we need to create our own certificate.

# Create a directory to hold the key, certificate, and related files
[centos@k8s-master ~]$ mkdir kubernetes-key
[centos@k8s-master ~]$ cd kubernetes-key/
[centos@k8s-master kubernetes-key]$

# Generate the certificate
# 1) Generate the private key for the certificate request
[centos@k8s-master kubernetes-key]$ openssl genrsa -out dashboard.key 2048                          
Generating RSA private key, 2048 bit long modulus
..........+++
...............................................+++
e is 65537 (0x10001)

# 2) Generate the certificate signing request; 192.168.11.99 below is the master node's IP address
[centos@k8s-master kubernetes-key]$ openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.11.99'

# 3) Generate the self-signed certificate
[centos@k8s-master kubernetes-key]$ openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt                                                                                          
Signature ok
subject=/CN=192.168.11.99
Getting Private key 

# Delete the original certificate secret
[centos@k8s-master kubernetes-key]$ kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
secret "kubernetes-dashboard-certs" deleted

# Create a secret from the new certificate
[centos@k8s-master kubernetes-key]$ ll
total 12
-rw-rw-r-- 1 centos centos  989 Sep 26 11:15 dashboard.crt
-rw-rw-r-- 1 centos centos  895 Sep 26 11:14 dashboard.csr
-rw-rw-r-- 1 centos centos 1679 Sep 26 11:12 dashboard.key

[centos@k8s-master kubernetes-key]$ kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created

# Find the running Pods
[centos@k8s-master kubernetes-key]$ kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-pcpk8   1/1     Running   0          99m
kubernetes-dashboard-5996555fd8-5sh8w        1/1     Running   0          99m


# Delete the existing Pods (the Deployment recreates them with the new certificate)

[centos@k8s-master kubernetes-key]$ kubectl delete po dashboard-metrics-scraper-76585494d8-pcpk8 -n kubernetes-dashboard                                                                                   
pod "dashboard-metrics-scraper-76585494d8-pcpk8" deleted

[centos@k8s-master kubernetes-key]$ kubectl delete po kubernetes-dashboard-5996555fd8-5sh8w -n kubernetes-dashboard                                                                                        
pod "kubernetes-dashboard-5996555fd8-5sh8w" deleted

10.3 Verify access with Chrome

Wait for the new Pods to become healthy, then access the Dashboard from a browser.

[centos@k8s-master kubernetes-key]$ kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-wqfbp   1/1     Running   0          2m18s
kubernetes-dashboard-5996555fd8-c96js        1/1     Running   0          97s

Access URL: https://NodeIP:30001. Choose token login and paste the token recorded earlier to sign in.

Log in with the token, and Kubernetes is ready to use.
