1. Installation Requirements

Before you begin, the machines used for the Kubernetes cluster must meet the following requirements:

  • Three virtual machines, OS kernel: Linux version 3.10.0-1127.el7.x86_64 (CentOS 7)
  • Hardware: 2 CPUs; 3 GB RAM or more; roughly 20 GB of disk
  • The three machines form one cluster, with full network connectivity between them
  • The cluster can reach the Internet in order to pull images
  • Swap is disabled (a quick verification sketch follows the list)
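A quick way to verify these requirements on each machine (an optional check sketch; the commands assume a standard CentOS 7 install):

# Check CPU count, memory, root disk size, and whether swap is on
$ nproc
$ free -h
$ df -h /
$ swapon -s    # no output means swap is already off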

2. Installation Goals

  1. Install Docker and kubeadm on every machine in the cluster
  2. Deploy the Kubernetes Master node
  3. Deploy a container network plugin
  4. Deploy the Kubernetes Node (worker) nodes and join them to the cluster
  5. Deploy the Dashboard web UI to view Kubernetes resources visually

3. Environment Preparation

  • master node: 192.168.11.99
  • node1 node: 192.168.11.100
  • node2 node: 192.168.11.101
##  1. Disable the firewall
$ systemctl stop firewalld
### Keep the firewall disabled across reboots
$ systemctl disable firewalld

## 2. Check the SELinux status
$ sestatus
SELinux status:                 disabled
### 2.1 Disable SELinux permanently (takes effect after a reboot)
$ vim /etc/sysconfig/selinux
Change: SELINUX=disabled
### 2.2 Disable SELinux temporarily
$ setenforce 0

## 3. Disable swap (on every node)
$ free -g
              total        used        free      shared  buff/cache   available
Mem:              2           0           2           0           0           2
Swap:             1           0           1
### 3.1 Disable temporarily
$ swapoff -a
### 3.2 Disable permanently (takes effect after a reboot)
$ vi /etc/fstab    # comment out the swap line

## 4. Set the hostname on each node:
$ hostnamectl set-hostname <hostname>
# master node:
hostnamectl set-hostname k8s-master
# node1 node:
hostnamectl set-hostname k8s-node1
# node2 node:
hostnamectl set-hostname k8s-node2
### 4.1 Check the hostname
[root@localhost vagrant]# hostname
k8s-master

## 5. Add hosts entries on the master (adjust the IPs and names to your own VMs):
$ cat >> /etc/hosts << EOF
192.168.11.99 k8s-master
192.168.11.100 k8s-node1
192.168.11.101 k8s-node2
EOF

## 6. Configure time synchronization on the master:
### 6.1 Install chrony and do a one-shot sync with ntpdate:
$ yum install -y chrony ntpdate
$ ntpdate time.windows.com
### 6.2 Comment out the default ntp servers
sed -i 's/^server/#&/' /etc/chrony.conf
### 6.3 Point chrony at public upstream NTP servers and allow the other nodes to sync from this one
cat >> /etc/chrony.conf << EOF
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
allow all
EOF
### 6.4 Restart chronyd and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd
### 6.5 Turn on network time synchronization
timedatectl set-ntp true

## 7. Configure time synchronization on the worker nodes
### 7.1 Install chrony:
yum install -y chrony
### 7.2 Comment out the default servers
sed -i 's/^server/#&/' /etc/chrony.conf
### 7.3 Use the internal master node as the upstream NTP server
echo server 192.168.11.99 iburst >> /etc/chrony.conf
### 7.4 Restart the service and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd
### 7.5 Verify the configuration: time is now synced from the master node
[root@k8s-node2 vagrant]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* k8s-master                    3   6    36   170    +15us[  +85us] +/- 4405us

## 8. Pass bridged IPv4 traffic to iptables (all nodes)
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system    # apply the settings

## 9. Load the ipvs kernel modules
Note: ipvs is the virtual-IP module (ClusterIP: the VIP that provides the cluster IP). Because ipvs has been merged into the mainline kernel, the following kernel modules must be loaded before kube-proxy can run in ipvs mode. Run this on every Kubernetes node:
### 9.1 Configure
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
### 9.2 Run the script
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to confirm the modules are loaded. You also need the ipset package installed on every node, and the ipvsadm management tool is useful for inspecting the ipvs proxy rules.
### 9.3 Install the management tools
$ yum install ipset ipvsadm -y

4. Install Docker, kubeadm (the cluster bootstrap tool), and kubelet (container management) on All Nodes

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

4.1 Install Docker

Kubernetes still defaults to Docker as the container runtime, using the dockershim CRI implementation built into the kubelet. Note that Kubernetes 1.13 supports Docker 1.11.1 at the minimum and 18.06 at the maximum, while the latest Docker release is already 18.09, so we pin the version to 18.06.1-ce at install time.
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
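If you want to confirm which Docker versions the Aliyun repo actually provides before pinning one (an optional check, not part of the original steps):

# List the docker-ce versions available in the configured repos, newest first
$ yum list docker-ce --showduplicates | sort -r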

Configure the registry mirror:

$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
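The mirror only takes effect after Docker reloads its configuration; a small verification sketch (the "Registry Mirrors" field appears in the docker info output):

# Restart Docker and confirm the mirror is active
$ systemctl daemon-reload && systemctl restart docker
$ docker info | grep -A1 "Registry Mirrors"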

4.2 Add the Aliyun Kubernetes YUM Repository

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
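An optional check that yum now sees the new repository:

$ yum repolist | grep -i kubernetes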

4.3 Install kubeadm, kubelet, and kubectl on All Nodes

Because new releases come out frequently, pin the version number here:

$ yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
$ systemctl enable kubelet && systemctl start kubelet
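A quick check that the pinned versions were installed (optional):

$ kubeadm version -o short
$ kubelet --version
$ kubectl version --client --short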

5. Deploy the Kubernetes Master (run on the master only)

5.1 Run on the master node.

Because the default image registry k8s.gcr.io is unreachable from mainland China, specify the Aliyun mirror repository instead.
$ kubeadm init \
  --apiserver-advertise-address=192.168.11.99 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
kubeadm init parameters
--apiserver-advertise-address string   IP address the API server binds to / advertises.
--apiserver-bind-port int32            Port the API server listens on (default 6443).
--apiserver-cert-extra-sans strings    Extra Subject Alternative Names (SANs) for the API server certificate; can be IPs or DNS names (the certificate is bound to its SANs).
--cert-dir string                      Directory to store certificates in (default "/etc/kubernetes/pki").
--certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
--config string                        Path to a kubeadm configuration file.
--cri-socket string                    Path to the CRI socket; if empty, kubeadm auto-detects it. Only set this when the machine has multiple CRI sockets or a non-standard one.
--dry-run                              Do not apply any changes; just print what would be done.
--feature-gates string                 Enable extra feature gates, given as key=value pairs.
--help, -h                             Show help.
--ignore-preflight-errors strings      Pre-flight errors to show as warnings instead, e.g. 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--image-repository string              Registry to pull the control-plane images from (default "k8s.gcr.io").
--kubernetes-version string            Kubernetes version to install (default "stable-1").
--node-name string                     Name for this node; defaults to the node's hostname.
--pod-network-cidr string              CIDR for the Pod network; the control plane assigns per-node subnets from it for the containers started on each node.
--service-cidr string                  IP range for Services (default "10.96.0.0/12").
--service-dns-domain string            DNS suffix for Services, e.g. "myorg.internal" (default "cluster.local").
--skip-certificate-key-print           Do not print the key used to encrypt the control-plane certificates.
--skip-phases strings                  Phases to skip.
--skip-token-print                     Do not print the default bootstrap token generated by kubeadm init.
--token string                         Token used to establish mutual trust between the nodes and the control plane; format [a-z0-9]{6}\.[a-z0-9]{16}, e.g. abcdef.0123456789abcdef.
--token-ttl duration                   How long before the token is automatically deleted (e.g. 1s, 2m, 3h). '0' means the token never expires (default 24h0m0s).
--upload-certs                         Upload the control-plane certificates to the kubeadm-certs Secret.
Output:
[root@localhost sysctl.d]# kubeadm init --apiserver-advertise-address=192.168.11.99 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
W0923 10:03:00.077870    3217 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0923 10:03:00.077922    3217 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.99]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.99 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.99 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0923 10:04:06.247879    3217 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0923 10:04:06.249731    3217 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.503229 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: tccfjm.u0k13eb29g9qaxzc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.99:6443 --token sc92uy.m2utl1i03ejf8kxb \
    --discovery-token-ca-cert-hash sha256:c2e814618022854ddc3cb060689c520397a6faa0e2e132be2c11c2b1900d1789

(Be sure to record the kubeadm join command from the init output; it is needed later when joining the worker nodes.)

Explanation of the initialization steps

  • [preflight] kubeadm runs checks before initializing.
  • [kubelet-start] Generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
  • [certificates] Generates the various tokens and certificates
  • [kubeconfig] Generates the kubeconfig files the kubelet needs to communicate with the Master
  • [control-plane] Installs the Master components, downloading their Docker images from the specified registry.
  • [bootstraptoken] Generates the bootstrap token; record it, because kubeadm join uses it later to add nodes to the cluster
  • [addons] Installs the kube-proxy and kube-dns (CoreDNS) add-ons. The Kubernetes Master has now initialized successfully; the output explains how to let a regular user access the cluster with kubectl, how to install the Pod network, and how to register additional nodes with the cluster.

5.2 Configure kubectl

kubectl is the command-line tool for managing a Kubernetes cluster, and we already installed it on all nodes. After the Master finishes initializing, a little configuration is needed before kubectl can be used. Following the final hint in the kubeadm init output, it is recommended to run kubectl as a regular Linux user.

Create a regular user named centos (optional; you can also skip this and use kubectl directly as root)
# Create a regular user with the password 123456
useradd centos && echo "centos:123456" | chpasswd
# Grant sudo privileges with passwordless sudo
sed -i '/^root/a\centos  ALL=(ALL)       NOPASSWD:ALL' /etc/sudoers
# Copy the cluster admin kubeconfig into the user's .kube directory
su - centos
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Enable kubectl bash completion (takes effect after logging out and back in)
echo "source <(kubectl completion bash)" >> ~/.bashrc

5.3 Use the kubectl tool:

# Get node info (only the master node for now)
[centos@k8s-master ~]$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   95m   v1.17.0

# Get pod info (no pods deployed yet)
[centos@k8s-master ~]$ kubectl get po
No resources found in default namespace.

# Get pods in all namespaces
[centos@k8s-master ~]$ kubectl get po --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-k8q66              0/1     Pending   0          99m
kube-system   coredns-9d85f5447-prwrh              0/1     Pending   0          99m
kube-system   etcd-k8s-master                      1/1     Running   0          99m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          99m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          99m
kube-system   kube-proxy-k59lt                     1/1     Running   0          99m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          99m

# Check the cluster component status: confirm each component is healthy
[centos@k8s-master ~]$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

# Check the status of the system Pods on this node
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES
coredns-9d85f5447-k8q66              0/1     Pending   0          106m   <none>          <none>       <none>           <none>
coredns-9d85f5447-prwrh              0/1     Pending   0          106m   <none>          <none>       <none>           <none>
etcd-k8s-master                      1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>
kube-proxy-k59lt                     1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          106m   192.168.11.99   k8s-master   <none>           <none>
Notice that the CoreDNS Pods, which depend on the network, are stuck in Pending, i.e. scheduling failed. This is expected: the Master node's network is not ready yet. If the cluster initialization runs into problems, you can clean up with kubeadm reset and then run the initialization again (a minimal cleanup sketch follows).
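A minimal cleanup sketch before re-running kubeadm init (the directories to remove are the ones the kubeadm reset output itself points at; adjust to your environment):

# Undo everything kubeadm set up on this node
$ sudo kubeadm reset -f
# Remove the leftover kubeconfig and CNI configuration
$ sudo rm -rf $HOME/.kube /etc/cni/net.d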

6. Deploy the Network Plugin

For the Kubernetes cluster to work, a Pod network must be installed; otherwise Pods cannot communicate with each other. Kubernetes supports several network solutions; here we use flannel. Run the following command to deploy it:

6.1 Install the Pod network plugin (CNI) on the master node; worker nodes download it automatically after joining

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the download fails with:
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
open the link in a browser on the host machine, download the file, copy it into the VM, and apply it from the current directory instead:

[centos@k8s-master ~]$ ll
total 8
-rw-rw-r-- 1 centos centos 4819 Sep 24 06:45 kube-flannel.yaml

$ kubectl apply -f kube-flannel.yaml
kube-flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
If the flannel Pod image fails to download, you can switch the image to lizhenliang/flannel:v0.11.0-amd64 (see the sketch below).
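A hedged one-liner for swapping the image reference in the downloaded manifest (this assumes the upstream manifest references quay.io/coreos/flannel:v0.11.0-amd64; check the image: lines in your copy first):

# Point both the initContainer and the main container at the mirror image, then re-apply
$ sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yaml
$ kubectl apply -f kube-flannel.yaml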
# Check the Pods
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
coredns-9d85f5447-k8q66              1/1     Running   0          21h   10.244.0.2      k8s-master   <none>           <none>
coredns-9d85f5447-prwrh              1/1     Running   0          21h   10.244.0.3      k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   2          21h   192.168.11.99   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>
kube-flannel-ds-rzbkv                1/1     Running   0          14m   192.168.11.99   k8s-master   <none>           <none>
kube-proxy-k59lt                     1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   1          21h   192.168.11.99   k8s-master   <none>           <none>
In the Pod list above there is now an extra Pod, kube-flannel-ds-rzbkv, which provides the Pod network, and CoreDNS is now running successfully.

6.2 Check node status

[centos@k8s-master ~]$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   21h   v1.17.0

We can see that the Status of the k8s-master node is now Ready.

At this point the Kubernetes Master node is fully deployed. If you only need a single-node Kubernetes, you can start using it now. Note, however, that by default the Master node does not run user Pods (see the optional taint-removal sketch below).
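If you do want to run user Pods on the master in a single-node setup, you can remove the NoSchedule taint that kubeadm added (optional; skip this if you will join worker nodes as described below):

# Allow regular workloads to be scheduled on the master
$ kubectl taint nodes --all node-role.kubernetes.io/master-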

7 Deploy the Worker Nodes

A Kubernetes worker node is almost identical to the Master node: both run a kubelet. The only difference is that during kubeadm init, after the kubelet starts, the Master node also runs the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods. Run the following command on k8s-node1 and k8s-node2 to register them with the cluster:

7.1 Join the Kubernetes worker nodes to the cluster so the master can manage them

Run this on the k8s-node machines to attach them to the master for management.

Join the cluster

# Run the join command (with the token generated during kubeadm init) on each worker node to join it to the cluster
$ kubeadm join 192.168.11.99:6443 --token tccfjm.u0k13eb29g9qaxzc --discovery-token-ca-cert-hash sha256:2db1d97e65a4c4d20118e53292876906388c95f08f40b9ddb6c485a155ca7007

# Successful join
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# If you lost the join command, you can regenerate it with:
$ kubeadm token create --print-join-command

7.2 Check the cluster status from the Kubernetes Master node

[centos@k8s-master ~]$ kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   21h    v1.17.0
k8s-node1    Ready    <none>   2m8s   v1.17.0
k8s-node2    Ready    <none>   114s   v1.17.0

# Check pod status
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
coredns-9d85f5447-k8q66              1/1     Running   0          22h   10.244.0.2       k8s-master   <none>           <none>
coredns-9d85f5447-prwrh              1/1     Running   0          22h   10.244.0.3       k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   2          22h   192.168.11.99    k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>
kube-flannel-ds-crxlk                1/1     Running   0          45m   192.168.11.130   k8s-node1    <none>           <none>
kube-flannel-ds-njjdv                1/1     Running   0          45m   192.168.11.131   k8s-node2    <none>           <none>
kube-flannel-ds-rzbkv                1/1     Running   0          86m   192.168.11.99    k8s-master   <none>           <none>
kube-proxy-66bw5                     1/1     Running   0          45m   192.168.11.130   k8s-node1    <none>           <none>
kube-proxy-k59lt                     1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>
kube-proxy-p7xch                     1/1     Running   0          45m   192.168.11.131   k8s-node2    <none>           <none>
kube-scheduler-k8s-master            1/1     Running   1          22h   192.168.11.99    k8s-master   <none>           <none>
Once all nodes are Ready, the Kubernetes cluster has been created successfully and everything is in place. A Pod status of Pending, ContainerCreating, or ImagePullBackOff means the Pod is not ready; only Running means it is. If a Pod shows Init:ImagePullBackOff, its image failed to pull on the corresponding node; use kubectl describe pod to inspect the Pod and confirm which image failed to pull (a hedged example follows):
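A hedged example of that inspection (the pod name is just an illustration; use the name reported by kubectl get pod):

# The Events section at the end of the output shows which image pull failed and why
$ kubectl describe pod kube-flannel-ds-crxlk -n kube-system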

7.3 Images on the Kubernetes Master node and the Kubernetes Node nodes

# Images on the Kubernetes Master node
[centos@k8s-master ~]$ sudo docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.0             7d54289267dc        21 months ago       116MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.0             78c190f736b1        21 months ago       94.4MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.0             0cae8d5cc64c        21 months ago       171MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.0             5eb3b7486872        21 months ago       161MB
registry.aliyuncs.com/google_containers/coredns                   1.6.5               70f311871ae1        22 months ago       41.6MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        23 months ago       288MB
lizhenliang/flannel                                               v0.11.0-amd64       ff281650a721        2 years ago         52.6MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        3 years ago         742kB

# Images on a Kubernetes Node (worker) node
[root@k8s-node1 vagrant]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.17.0             7d54289267dc        21 months ago       116MB
lizhenliang/flannel                                  v0.11.0-amd64       ff281650a721        2 years ago         52.6MB
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        3 years ago         742kB
The image list on the worker node shows that the flannel image is present, which confirms what we said earlier: when a node joins the master, it pulls the network components needed to connect the Pod network.

8 Test the Kubernetes Cluster

Create a pod in the Kubernetes cluster and verify that it runs correctly:

[centos@k8s-master ~]$ kubectl create deployment nginx --image=nginx:alpine
deployment.apps/nginx created

# scale grows or shrinks the number of replicas as load changes
[centos@k8s-master ~]$ kubectl scale deployment nginx --replicas=2
deployment.apps/nginx scaled

# Check the pods' status
[centos@k8s-master ~]$ kubectl get pods -l app=nginx -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
nginx-5b6fb6dd96-4t4dt   1/1     Running   0          10m     10.244.2.2   k8s-node2   <none>           <none>
nginx-5b6fb6dd96-594pq   1/1     Running   0          9m21s   10.244.1.2   k8s-node1   <none>           <none>

# Expose the nginx deployment as a service, publishing port 80 as a NodePort
[centos@k8s-master ~]$ kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

# Check the service (node port 30919 maps to container port 80)
[centos@k8s-master ~]$ kubectl get services nginx
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.96.72.70   <none>        80:30919/TCP   8m39s

# curl the nginx service
[root@k8s-node1 vagrant]# curl 192.168.11.130:30919
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
.......
Finally, verify DNS and the pod network: run Busybox and enter interactive mode.
# busybox is a single binary that bundles over a hundred common Linux commands and tools
[centos@k8s-master ~]$ kubectl run -it curl --image=radial/busyboxplus:curl
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.

# Run nslookup nginx to check that the in-cluster IP resolves correctly, verifying that DNS works
[ root@curl-69c656fd45-t8jth:/ ]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      nginx
Address 1: 10.96.112.44 nginx.default.svc.cluster.local

# Access the service by name to verify that kube-proxy works
[ root@curl-69c656fd45-t8jth:/ ]$ curl http://nginx/
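Optional cleanup once the checks pass (the curl deployment is the one created by the kubectl run command above):

# Remove the test workloads and the exposed service
$ kubectl delete deployment nginx curl
$ kubectl delete service nginx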

9 Deploy the Dashboard

9.1 Prepare the Kubernetes Dashboard YAML file

# Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
# Rename it
mv recommended.yaml kubernetes-dashboard.yaml

9.2 Edit the YAML file

By default the Dashboard is only reachable from inside the cluster; change the Service to type NodePort and expose a port externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

9.3 Create a service account and bind it to the default cluster-admin cluster role:

[centos@k8s-master ~]$ ll
total 16
-rw-rw-r-- 1 centos centos 4815 Sep 26 08:17 kube-flannel.yaml
-rw-rw-r-- 1 centos centos 7607 Sep 26 09:34 kubernetes-dashboard.yaml

[centos@k8s-master ~]$ kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
serviceaccount/dashboard-admin created

[centos@k8s-master ~]$ kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

[centos@k8s-master ~]$ kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-j6zhm        kubernetes.io/service-account-token   3      9m36s

[centos@k8s-master ~]$ kubectl describe secrets dashboard-admin-token-j6zhm -n kubernetes-dashboard
Name:         dashboard-admin-token-j6zhm
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 353ad721-17ff-48e2-a6b6-0654798de60b

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IndrOGZVSGFaWGVQVHZLTUdMOTdkd3NhaGI2UGhHbDdZSFVkVFk3ejYyNncifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tajZ6aG0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzUzYWQ3MjEtMTdmZi00OGUyLWE2YjYtMDY1NDc5OGRlNjBiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.XxrEE6AbDQEH-bu1SYd7FkRuOo98nJUy05t_fCkpa-xyx-VgIWfOsH9VVhOlINtzYOBbpAFnNb52w943t7XmeK9P0Jiwd0Z2YxbmhXbZLtIAFO9gWeTWVUxe0yIzlneDkQui_CLgFTzSHZZYed19La2ejghXeYXcWXPpMKBI3dohc2TTUj0cEfMmMeWPJef_6fj5Qyx7ujokwsw5mqO9givaI9zdBkiV3ky35SjBerYOlvbQ8TjzHygF9vhVZTVh38_Tuff8SomMDUv5xpZ2DQXXkTmXbGao7SFivX4U56tOxrGGktio7A0xVVqnlJN3Br5M-YUrVNJrm73MegKVWg
This token is needed later to log in to the Dashboard, so record it somewhere. If you do forget to save it, it can be printed again (see the sketch below).
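A one-liner sketch for printing the dashboard-admin token again (the secret name suffix is generated, so it is looked up dynamically here):

# Re-print the dashboard-admin login token
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}')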

9.4 Apply the manifest to start the service

[centos@k8s-master ~]$ kubectl apply -f kubernetes-dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

9.5 Check the status

kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard

10 Fixing the Dashboard Not Loading in Chrome

10.1 Problem description

After the K8S Dashboard is installed, it opens fine in Firefox, but the page fails to load in Google Chrome.
[centos@k8s-master ~]$ curl 192.168.11.130:30001
Client sent an HTTP request to an HTTPS server.

As the message shows, the request must use https instead of http.

10.2 Fixing it

The certificate that kubeadm generates automatically is rejected by many browsers, so we create our own certificate.
# Create a directory to hold the certificate files
[centos@k8s-master ~]$ mkdir kubernetes-key
[centos@k8s-master ~]$ cd kubernetes-key/
[centos@k8s-master kubernetes-key]$

# Generate the certificate
# 1) Generate the private key for the certificate request
[centos@k8s-master kubernetes-key]$ openssl genrsa -out dashboard.key 2048
Generating RSA private key, 2048 bit long modulus
..........+++
...............................................+++
e is 65537 (0x10001)

# 2) Generate the certificate request; 192.168.11.99 below is the master node's IP address
[centos@k8s-master kubernetes-key]$ openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.11.99'

# 3) Generate the self-signed certificate
[centos@k8s-master kubernetes-key]$ openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/CN=192.168.11.99
Getting Private key

# Delete the existing certificate secret
[centos@k8s-master kubernetes-key]$ kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
secret "kubernetes-dashboard-certs" deleted

# Create a secret from the new certificate
[centos@k8s-master kubernetes-key]$ ll
total 12
-rw-rw-r-- 1 centos centos  989 Sep 26 11:15 dashboard.crt
-rw-rw-r-- 1 centos centos  895 Sep 26 11:14 dashboard.csr
-rw-rw-r-- 1 centos centos 1679 Sep 26 11:12 dashboard.key
[centos@k8s-master kubernetes-key]$ kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created

# Find the running pods
[centos@k8s-master kubernetes-key]$ kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-pcpk8   1/1     Running   0          99m
kubernetes-dashboard-5996555fd8-5sh8w        1/1     Running   0          99m

# Delete the existing pods so they are recreated with the new certificate
[centos@k8s-master kubernetes-key]$ kubectl delete po dashboard-metrics-scraper-76585494d8-pcpk8 -n kubernetes-dashboard
pod "dashboard-metrics-scraper-76585494d8-pcpk8" deleted
[centos@k8s-master kubernetes-key]$ kubectl delete po kubernetes-dashboard-5996555fd8-5sh8w -n kubernetes-dashboard
pod "kubernetes-dashboard-5996555fd8-5sh8w" deleted

10.3 Verify access from Chrome

Wait for the new pods to become healthy, then access the Dashboard from the browser.
[centos@k8s-master kubernetes-key]$ kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-wqfbp   1/1     Running   0          2m18s
kubernetes-dashboard-5996555fd8-c96js        1/1     Running   0          97s
Open https://NodeIP:30001 (the Service is HTTPS), choose token login, and paste the token recorded earlier to log in.

Once you log in with the token, Kubernetes is fully up and ready to use.