About Kubernetes: Building a Kubernetes Cluster

1. Preface

To make it easier to learn and practice container orchestration, this article records how to build a quasi-production-grade Kubernetes cluster on local virtual machines. On top of this k8s cluster, we can then try containerizing the various middleware clusters and microservice applications found in work or learning scenarios.

2. Resources for the Cluster Setup

For learning purposes, we will build a cluster with one control plane node and two worker nodes. To save resources, Pods must also be schedulable onto the control plane node.

2.1 Machine Preparation

VM | Node role | System spec | Installed components
k8s-master01 | master node | CentOS 7, 2 CPU / 6 GB RAM | kube-apiserver, kube-scheduler, kube-controller-manager, etcd, kube-proxy, kubeadm, kubelet, kubectl, docker
k8s-worker01 | worker node | CentOS 7, 2 CPU / 6 GB RAM | kubeadm, kubelet, kubectl, kube-proxy, docker
k8s-worker02 | worker node | CentOS 7, 2 CPU / 6 GB RAM | kubeadm, kubelet, kubectl, kube-proxy, docker

2.2 Component Overview

Component | Type | Deployment method | Description
kube-apiserver | control plane component | containerized via kubeadm | The API server exposes the Kubernetes API; it is the front end of the Kubernetes control plane
kube-scheduler | control plane component | containerized via kubeadm | Watches for newly created Pods with no assigned node and selects a node for them to run on
kube-controller-manager | control plane component | containerized via kubeadm | Logically each controller is a separate process, but to reduce complexity they are compiled into a single binary and run in a single process
etcd | control plane component | containerized via kubeadm | A consistent and highly available key-value store, used as the backing store for all cluster data (resource objects, etc.)
kubelet | node component | installed via yum, cannot run as a container | Takes a set of PodSpecs provided through various mechanisms and ensures the containers described in them are running and healthy; it does not manage containers that were not created by Kubernetes
kube-proxy | node component | containerized via kubeadm | A network proxy that runs on every node and implements part of the Kubernetes Service concept; it maintains the network rules that allow traffic from inside or outside the cluster to reach Pods
docker | node component | installed via yum | The Docker engine, one of the container runtimes supported by k8s
kubeadm | cluster bootstrap tool | installed via yum | The official k8s cluster deployment tool
kubectl | client tool | installed via yum | The command-line tool for talking to the cluster

2.3 Prerequisite Checks

Whether you use local physical machines or virtual machines in the cloud, the following requirements should be met:

  • Meet the requirements for installing Docker, e.g. a 64-bit Linux OS with kernel 3.10 or later
  • Either x86 or ARM architecture
  • Network connectivity between all machines, as a prerequisite for the container network
  • Outbound internet access, since images need to be pulled (images can also be downloaded manually in advance)
  • No duplicate hostnames, MAC addresses, or product_uuid values among the nodes
  • Required ports opened on each machine; see the official k8s documentation on component ports and protocols
  • Swap disabled, so that the kubelet works properly

2.4 Deployment Goals

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes Master
  3. Deploy a container network plugin
  4. Deploy the Kubernetes Workers
  5. Deploy the Dashboard visualization plugin
  6. Deploy a container storage plugin

3. Cluster Setup

3.1 Install the Docker Engine

Kubernetes talks to container runtimes through the CRI interface; here we use the Docker engine as the container runtime. Since the cluster nodes run CentOS 7, we install the Docker engine from a yum repository.

  • Install the Docker engine on CentOS 7 from a yum repository
  • See the official Docker installation guide

Note: configure the Docker daemon to use systemd to manage the containers' cgroups, so that Docker and the kubelet use the same cgroup driver.

[root@vm-k8s-master ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://hub-mirror.c.163.com","https://reg-mirror.qiniu.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}

  • registry-mirrors: registry mirrors that speed up image pulls
  • exec-opts: sets the cgroup driver to systemd
  • storage-driver: for systems running Linux kernel 4.0 or later, or RHEL/CentOS with kernel 3.10.0-51 and later, overlay2 is the preferred storage driver
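
After changing daemon.json, Docker has to be restarted for the settings to take effect. A quick way to confirm the cgroup driver, assuming Docker is already installed and managed by systemd:

sudo systemctl daemon-reload
sudo systemctl restart docker
# should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"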

3.2 Install kubeadm

3.2.1 Configure hostnames

Cluster nodes must not have duplicate hostnames, MAC addresses, or product_uuid values.

# Set the hostname
hostnamectl set-hostname master

# Check the hostname; make sure no two cluster nodes share the same hostname
cat /etc/hostname    

# Check the product_uuid
sudo cat /sys/class/dmi/id/product_uuid

# Use `ip link` or `ifconfig -a` to get the MAC addresses of the network interfaces
ifconfig -a

3.2.2 Update /etc/hosts on each node

cat >> /etc/hosts << EOF
192.168.31.254  k8s.master.com
192.168.31.254  k8s.cluster-endpoint
192.168.31.58   k8s.worker01.com
192.168.31.20   k8s.worker02.com
EOF

Note: these are the IP-to-hostname mappings for the three k8s nodes; 192.168.31.254 is mapped both to k8s.master.com and to the control plane endpoint k8s.cluster-endpoint.
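
To confirm the entries resolve as expected on each node (getent consults /etc/hosts as well as DNS), a quick check:

getent hosts k8s.cluster-endpoint k8s.worker01.com k8s.worker02.com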

3.2.3 Let iptables see bridged traffic

  • Make sure the br_netfilter module is loaded
# Check whether the br_netfilter module is loaded
lsmod | grep br_netfilter

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# If br_netfilter is not loaded, load it explicitly
sudo modprobe br_netfilter
  • Make sure iptables can correctly see bridged traffic

For iptables on your Linux nodes to correctly see bridged traffic, make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
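
To confirm the kernel parameters are active after reloading, they can simply be read back; both values should print as 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables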

3.2.4 Disable the swap partition

# Turn off all swap devices
swapoff -a

# Comment out the swap line in /etc/fstab
vim /etc/fstab

# Reboot the machine
reboot
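
If you prefer not to edit fstab by hand, the swap entry can be commented out with sed and the result verified; a minimal sketch (back up /etc/fstab first, and note that an already-commented swap line would gain an extra "#"):

# comment out every fstab line that mounts a swap device
sed -i '/ swap / s/^/#/' /etc/fstab
# verify that no swap is active (the output should be empty)
swapon --show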

3.2.5 Configure the Kubernetes yum repository

  • Configure the official Kubernetes yum repository (if your network allows it)

    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude=kubelet kubeadm kubectl
    EOF
  • Configure a Kubernetes yum repository that uses the Aliyun mirror

    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-\$basearch
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    exclude=kubelet kubeadm kubectl
    EOF

3.2.6 Install kubeadm, kubelet, and kubectl from yum

# Set SELinux to permissive mode (effectively disabling it).
# Running `setenforce 0` and the sed command below puts SELinux into permissive mode; this is required so that
# containers can access the host filesystem, which is needed, for example, for Pod networking to work properly.
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Stop the firewall. This is not strictly required, but otherwise the ports used by the k8s components must be
# maintained explicitly; for a test environment we simply turn it off.
systemctl stop firewalld
systemctl disable firewalld

# Install kubeadm, kubelet, and kubectl from the yum repository
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# List the available kubelet versions, to pick one for a version-pinned install in the next step
yum list kubelet --showduplicates | sort -r 

# Install a specific version
sudo yum install -y kubelet-<version> kubeadm-<version> kubectl-<version> --disableexcludes=kubernetes

# Enable kubelet on boot (and start it now)
sudo systemctl enable --now kubelet

# Set up kubectl command completion on login
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile
  • During the installation above, the kubeadm, kubelet, kubectl, and kubernetes-cni binaries are all installed automatically
==================================================================================================================================================================================================================
 Package                                                     架构                                        版本                                               源                                               大小
==================================================================================================================================================================================================================
正在装置:
 kubeadm                                                     x86_64                                      1.23.5-0                                           kubernetes                                      9.0 M
 kubectl                                                     x86_64                                      1.23.5-0                                           kubernetes                                      9.5 M
 kubelet                                                     x86_64                                      1.23.5-0                                           kubernetes                                       21 M
为依赖而装置:
 conntrack-tools                                             x86_64                                      1.4.4-7.el7                                        base                                            187 k
 cri-tools                                                   x86_64                                      1.23.0-0                                           kubernetes                                      7.1 M
 kubernetes-cni                                              x86_64                                      0.8.7-0                                            kubernetes                                       19 M
 libnetfilter_cthelper                                       x86_64                                      1.0.0-11.el7                                       base                                             18 k
 libnetfilter_cttimeout                                      x86_64                                      1.0.0-7.el7                                        base                                             18 k
 libnetfilter_queue                                          x86_64                                      1.0.2-2.el7_2                                      base                                             23 k
 socat                                                       x86_64                                      1.7.3.2-2.el7                                      base                                            290 k

事务概要
==================================================================================================================================================================================================================
装置  3 软件包 (+7 依赖软件包)

总下载量:65 M
装置大小:297 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm                                                                                                                                             | 187 kB  00:00:00     
(2/10): 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm                                                                                     | 7.1 MB  00:00:00     
(3/10): ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm                                                                                       | 9.0 MB  00:00:00     
(4/10): 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm                                                                                       | 9.5 MB  00:00:00     
(5/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm                                                                                                                                      |  18 kB  00:00:00     
(6/10): socat-1.7.3.2-2.el7.x86_64.rpm                                                                                                                                                     | 290 kB  00:00:00     
(7/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm                                                                                                                                      |  18 kB  00:00:00     
(8/10): d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm                                                                                       |  21 MB  00:00:01     
(9/10): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm                                                                                 |  19 MB  00:00:01     
(10/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm                                                                                                                                       |  23 kB  00:00:01     
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
总计                                                                                                                                                                               26 MB/s |  65 MB  00:00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在装置    : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                                                                                    1/10 
  正在装置    : socat-1.7.3.2-2.el7.x86_64                                                                                                                                                                   2/10 
  正在装置    : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                                                                                    3/10 
  正在装置    : cri-tools-1.23.0-0.x86_64                                                                                                                                                                    4/10 
  正在装置    : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                                                                                      5/10 
  正在装置    : conntrack-tools-1.4.4-7.el7.x86_64                                                                                                                                                           6/10 
  正在装置    : kubernetes-cni-0.8.7-0.x86_64                                                                                                                                                                7/10 
  正在装置    : kubelet-1.23.5-0.x86_64                                                                                                                                                                      8/10 
  正在装置    : kubectl-1.23.5-0.x86_64                                                                                                                                                                      9/10 
  正在装置    : kubeadm-1.23.5-0.x86_64                                                                                                                                                                     10/10 
  验证中      : conntrack-tools-1.4.4-7.el7.x86_64                                                                                                                                                           1/10 
  验证中      : kubernetes-cni-0.8.7-0.x86_64                                                                                                                                                                2/10 
  验证中      : kubectl-1.23.5-0.x86_64                                                                                                                                                                      3/10 
  验证中      : kubeadm-1.23.5-0.x86_64                                                                                                                                                                      4/10 
  验证中      : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                                                                                      5/10 
  验证中      : cri-tools-1.23.0-0.x86_64                                                                                                                                                                    6/10 
  验证中      : kubelet-1.23.5-0.x86_64                                                                                                                                                                      7/10 
  验证中      : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                                                                                    8/10 
  验证中      : socat-1.7.3.2-2.el7.x86_64                                                                                                                                                                   9/10 
  验证中      : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                                                                                   10/10 

已装置:
  kubeadm.x86_64 0:1.23.5-0                                            kubectl.x86_64 0:1.23.5-0                                            kubelet.x86_64 0:1.23.5-0                                           

作为依赖被装置:
  conntrack-tools.x86_64 0:1.4.4-7.el7         cri-tools.x86_64 0:1.23.0-0     kubernetes-cni.x86_64 0:0.8.7-0    libnetfilter_cthelper.x86_64 0:1.0.0-11.el7    libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7   
  libnetfilter_queue.x86_64 0:1.0.2-2.el7_2    socat.x86_64 0:1.7.3.2-2.el7   

结束!

Note: kubelet now restarts every few seconds, because it is stuck in a loop waiting for instructions from kubeadm.
Note: if GPG signature verification fails, disable it by setting gpgcheck=0 and repo_gpgcheck=0.

https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for kubernetes

3.3 Deploy the Kubernetes Master Node

3.3.1 List the images kubeadm needs to pull

[root@vm-k8s-master ~]# kubeadm config images list --kubernetes-version=1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

Note: this reflects the "everything is a container" philosophy of Kubernetes; the core control plane components are all deployed as Pods. By default, kubeadm pulls images from the k8s.gcr.io registry (if the requested Kubernetes version is a CI label such as ci/latest, gcr.io/k8s-staging-ci-images is used instead). If your network cannot reach it, you can either download all of the container images manually or pull them from a custom image repository.
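
For example, the images can be pre-pulled from a mirror before running kubeadm init; a minimal sketch using the Aliyun mirror that is also used in the next step:

kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.5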

3.3.2 Initialize the control plane (master node) with a specified image repository

kubeadm init \
--apiserver-advertise-address=192.168.31.254 \
--control-plane-endpoint=k8s.cluster-endpoint \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.5 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
  • apiserver-advertise-address: the IP address the API server advertises that it is listening on; if unset, the default network interface is used
  • control-plane-endpoint: a stable IP address or DNS name for the control plane
  • image-repository: the container registry to pull control plane images from; default "k8s.gcr.io"
  • kubernetes-version: the specific Kubernetes version for the control plane; default "stable-1"
  • service-cidr: an alternative IP address range for Service virtual IPs; default "10.96.0.0/12"
  • pod-network-cidr: the IP address range for the Pod network; if set, the control plane automatically allocates CIDRs to each node

3.3.2.1 Control plane initialization log

[root@vm-k8s-master k8s-init]# kubeadm init \
> --apiserver-advertise-address=192.168.31.254 \
> --control-plane-endpoint=k8s.cluster-endpoint \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.23.5 \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.cluster-endpoint kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vm-k8s-master] and IPs [10.1.0.1 192.168.31.254]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vm-k8s-master] and IPs [192.168.31.254 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vm-k8s-master] and IPs [192.168.31.254 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.002185 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vm-k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vm-k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: vz7g8m.fhgc8sby6tm6mi25
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
    --discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
    --discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d 

Note: the log above contains two important pieces of information, as follows:

  • The kubeadm join command for adding nodes to the cluster (with --control-plane it joins an additional control plane node; without it, a worker node)
  kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
    --discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d \
    --control-plane 
  • The cluster configuration the kubectl client depends on
# A regular user should copy the cluster config into their home directory, where kubectl reads it by default
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# For the root user it is simpler to just set an environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

Note: access to a Kubernetes cluster is encrypted and authenticated by default. These commands copy the security configuration generated during deployment into the current user's .kube directory; kubectl uses the credentials in that directory to access the cluster.

3.3.3 Install a network plugin on the control plane

After initializing the control plane, check the node status first; you will find the Master node is in the NotReady state.

[root@vm-k8s-master k8s-init]# kubectl get nodes
NAME            STATUS     ROLES                  AGE   VERSION
vm-k8s-master   NotReady   control-plane,master   24m   v1.23.5

Looking further at the output of kubectl describe, we can see that the node is NotReady because no network plugin has been deployed yet.

[root@vm-k8s-master k8s-init]# kubectl describe node vm-k8s-master
Name:               vm-k8s-master
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=vm-k8s-master
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 26 Mar 2022 16:48:43 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  vm-k8s-master
  AcquireTime:     <unset>
  RenewTime:       Sat, 26 Mar 2022 17:18:00 +0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 26 Mar 2022 17:14:24 +0800   Sat, 26 Mar 2022 16:48:40 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 26 Mar 2022 17:14:24 +0800   Sat, 26 Mar 2022 16:48:40 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 26 Mar 2022 17:14:24 +0800   Sat, 26 Mar 2022 16:48:40 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Sat, 26 Mar 2022 17:14:24 +0800   Sat, 26 Mar 2022 16:48:40 +0800   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  192.168.31.254
  Hostname:    vm-k8s-master
Capacity:
  cpu:                2
  ephemeral-storage:  39617640Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             5925672Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  36511616964
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             5823272Ki
  pods:               110
System Info:
  Machine ID:                 5bf19e1a8ca94a5c987c497fd9d169f9
  System UUID:                4A364D56-F429-F892-581C-C7E2B013277E
  Boot ID:                    8a40b366-2645-4ee1-bf40-90e60b23a2df
  Kernel Version:             3.10.0-1160.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.13
  Kubelet Version:            v1.23.5
  Kube-Proxy Version:         v1.23.5
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-vm-k8s-master                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         29m
  kube-system                 kube-apiserver-vm-k8s-master             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
  kube-system                 kube-controller-manager-vm-k8s-master    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
  kube-system                 kube-proxy-dvrkb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
  kube-system                 kube-scheduler-vm-k8s-master             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             100Mi (1%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age   From        Message
  ----    ------                   ----  ----        -------
  Normal  Starting                 28m   kube-proxy  
  Normal  Starting                 29m   kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  29m   kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  29m   kubelet     Node vm-k8s-master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    29m   kubelet     Node vm-k8s-master status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     29m   kubelet     Node vm-k8s-master status is now: NodeHasSufficientPID

We can also see that the CoreDNS Pods, which depend on the network, are stuck in Pending, i.e. scheduling fails. This is expected, because the Master node's network is not ready yet.

[root@vm-k8s-master k8s-init]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-6ldld                 0/1     Pending   0          31m
coredns-6d8c4cb4d-8z59l                 0/1     Pending   0          31m
etcd-vm-k8s-master                      1/1     Running   0          32m
kube-apiserver-vm-k8s-master            1/1     Running   0          32m
kube-controller-manager-vm-k8s-master   1/1     Running   0          32m
kube-proxy-dvrkb                        1/1     Running   0          31m
kube-scheduler-vm-k8s-master            1/1     Running   0          32m

3.3.3.1 Install the flannel network plugin

Following the "everything is a container" design of Kubernetes, network plugins are also deployed as Pods.

# If the network allows it, just run kubectl apply against the URL directly
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# If the URL cannot be reached, copy the kube-flannel.yml content below into a local file and apply it from there
kubectl apply -f ./kube-flannel.yml
  • Contents of kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {"Type": "vxlan"}
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
  • flannel installation log
[root@vm-k8s-master k8s-init]# kubectl apply -f ./kube-flannel.yml 
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds created
  • After the network plugin is deployed, check the Pod status again with kubectl get
[root@vm-k8s-master k8s-init]# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-6ldld                 1/1     Running   0          95m
coredns-6d8c4cb4d-8z59l                 1/1     Running   0          95m
etcd-vm-k8s-master                      1/1     Running   0          95m
kube-apiserver-vm-k8s-master            1/1     Running   0          95m
kube-controller-manager-vm-k8s-master   1/1     Running   0          95m
kube-flannel-ds-gwccf                   1/1     Running   0          38s
kube-proxy-dvrkb                        1/1     Running   0          95m
kube-scheduler-vm-k8s-master            1/1     Running   0          95m

As you can see, all the system Pods have started successfully, and the newly deployed flannel plugin has created a Pod named kube-flannel-ds-gwccf in kube-system; in general, such Pods are the per-node control component of the container network plugin.
At this point the Kubernetes Master node is fully deployed. If all you need is a single-node Kubernetes cluster, it is already usable. By default, however, the Master node cannot run user Pods, so one more small step is required.
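
Once worker nodes join (section 3.4), the flannel DaemonSet should run one Pod per node; a quick way to check node readiness and the per-node flannel Pods:

kubectl get nodes
kubectl get pods -n kube-system -o wide | grep flannel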

3.3.4 Remove the control plane node taint

By default the Master node does not run user Pods; Kubernetes achieves this through its Taint/Toleration mechanism.
The principle is simple: once a node carries a Taint, i.e. it has been "tainted", Pods will not run on it, because Kubernetes Pods are "picky". Only a Pod that explicitly declares it can tolerate that taint, i.e. declares a Toleration, may run on the node.

  • The command to add a Taint to a node is
# This adds a key/value taint foo=bar:NoSchedule to node1. The NoSchedule effect means the taint only affects the
# scheduling of new Pods; Pods already running on node1 are unaffected, even if they have no matching Toleration
kubectl taint nodes node1 foo=bar:NoSchedule
  • A Pod declares a Toleration for the taint

apiVersion: v1
kind: Pod
...
spec:
  tolerations:
  - key: "foo"
    operator: "Equal"
    value: "bar"
    effect: "NoSchedule"

This Toleration means the Pod can "tolerate" any taint whose key/value pair is foo=bar (the "Equal" operator).

  • Use kubectl describe to inspect the Taints field of the Master node
[root@vm-k8s-master k8s-init]# kubectl describe node vm-k8s-master
Name:               vm-k8s-master
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=vm-k8s-master
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"d6:07:13:de:95:c6"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.31.254
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 26 Mar 2022 16:48:43 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule

As you can see, the Master node carries the taint node-role.kubernetes.io/master:NoSchedule by default; its key is node-role.kubernetes.io/master and no value is set.

  • Remove the default taint from the master node
    For convenience, in this test and learning environment the master node should also be able to run Pods, so we simply remove its default taint.
# Appending a hyphen "-" to the key "node-role.kubernetes.io/master" removes every taint whose key is "node-role.kubernetes.io/master"
kubectl taint nodes --all node-role.kubernetes.io/master-
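
After removing the taint, the Taints field of the master node should report none; a quick check:

kubectl describe node vm-k8s-master | grep Taints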

3.4 Add Worker Nodes to the Kubernetes Cluster

A Kubernetes Worker node is almost identical to the Master node: both run a kubelet. The only difference is that during kubeadm init, after the kubelet starts, the Master node additionally runs the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods.
At this point the worker nodes already have kubeadm, kubectl, kubelet, and Docker installed, so all that is left is to run the kubeadm join command generated during kubeadm init.

  • Run the kubeadm join command generated when the Master node was deployed
kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
    --discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d
  • Log output of the worker node joining the cluster
[root@vm-k8s-worker01 manifests]# kubeadm join k8s.cluster-endpoint:6443 --token vz7g8m.fhgc8sby6tm6mi25 \
>     --discovery-token-ca-cert-hash sha256:e368eb6bdd202ff442cef277b97f12fe374d63071ff9dc277add24301609204d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
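
The bootstrap token embedded in the join command expires after 24 hours by default; if it has expired, a fresh join command can be generated on the control plane node:

kubeadm token create --print-join-command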

3.5 Deploy and Access the Kubernetes Dashboard

Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, trigger a rolling update, restart Pods, or deploy a new application with a wizard.
Dashboard also shows the state of the resources in the Kubernetes cluster and any errors.

  • See the official deployment documentation

3.5.1 Deploy the Dashboard visualization plugin

# Submit the Dashboard resource manifests to the cluster; if the URL is unreachable, copy the file content provided below and apply it locally
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
  • Contents of kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
  • kubernetes-dashboard installation log
[root@vm-k8s-master k8s-init]# kubectl apply -f ./kubernetes-dashboard.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

3.5.2 Create an admin account for logging in to the Dashboard

Dashboard is a web server, and people often expose its port on their cloud machines by accident, creating a security risk. Since version 1.7, a freshly deployed Dashboard can therefore only be accessed locally through a proxy by default.

  • Create a user and bind it to the cluster-admin role (the manifest below is applied as shown right after it)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
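
This manifest has to be applied before a token can be issued for the account; assuming it is saved locally as dashboard-admin-user.yaml (the file name here is just an example):

kubectl apply -f ./dashboard-admin-user.yaml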
  • Generate a Bearer Token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

# The command prints the token:
eyJhbGciOiJSUzI1NiIsImtpZCI6Il9uU25iZHpROE5WZkZOcDFFeGhGT0JRQjZOal93Zk1qOHNVVlRsQVU2QmMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWRzbDR2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0MzJkNTA5Yy0zNTQ4LTRlYTktOTNmZi03NTM2ZDkxY2YwMTIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.RLl_jz41Snyc4qEHFk9-o_JYKfv22Lv2JquTgXX92_9k9VQOswd7RhF1OsWywVfd5TKq9_tagMDfVRqHo6fVe-jB4blUL6j6iTxtkFno7P9snqboFsEoegSFCM-S9OaF1C5Dk_rcIp2yJzbALkmTC6VjdWxPt3TvjEclXRQl0cITlE1NFmJKpJqbNlG65l4SjnRO5KLinrgYfpcLxlroqlKd4ZFWenKrz16VOicCUHuP_mWlq4yxp_WRrfpCglX92u-sDY3ouJW5cL0eqk-w5XKhREFvm-C2Opoba-mDZVcbaQxEIw_VkeudLliSUs5_v9CKQGKPYK6r6rdESYi6sw

3.5.3 Modify the kubernetes-dashboard Service

To make the Dashboard easier to reach, modify the Service section of kubernetes-dashboard.yaml and change the Service type from ClusterIP to NodePort.

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
  • After updating the file, run kubectl apply again to update the resource objects
kubectl apply -f ./kubernetes-dashboard.yaml
  • Check the updated Service object
[root@vm-k8s-master k8s-init]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.1.245.206   <none>        8000/TCP        62m
kubernetes-dashboard        NodePort    10.1.50.101    <none>        443:30001/TCP   62m

3.5.4 Access kubernetes-dashboard

# Note the https scheme; choose token authentication and paste the token generated earlier
https://k8s.master.com:30001/

3.6 Deploy the Rook Storage Plugin

Rook is a Ceph-based Kubernetes storage plugin (support for more storage backends is being added over time). Rather than being a thin wrapper around Ceph, Rook adds a large set of enterprise features such as horizontal scaling, migration, disaster recovery, and monitoring, which makes it a complete, production-grade container storage plugin.
It can manage file, block, and object storage in production. Rook is hosted by the Cloud Native Computing Foundation (CNCF) as a graduated project. Rook is implemented in Go; Ceph is implemented in C++.

  • See the Rook website for more information
  • See the Rook GitHub repository

3.6.1 Prerequisites for the Rook storage plugin

Taking the current latest Rook release, v1.8.7, as an example, the installation steps below apply. To configure a Ceph storage cluster, at least one of the following local storage options is required:

  • Raw devices (no partitions or formatted filesystem)
  • Raw partitions (no formatted filesystem)
  • PVs available in block mode from a storage class
  • The cluster needs at least three nodes, each with at least one OSD (object storage device), so that the storage class can be created
  • Kubernetes v1.16 or later is required

3.6.2 Preparing storage devices for Rook

Since the machines in this environment were created with VMware, it is easy to add a brand-new disk to each virtual machine, which will later serve as an OSD (object storage device) for the Rook cluster.

  • Add a virtual disk in VMware; see the VMware documentation
1. Select the virtual machine and choose VM > Settings.
2. On the Hardware tab, click Add.
3. In the Add Hardware wizard, select Hard Disk.
4. Select Create a new virtual disk.
5. Select the disk type.

  • Inspect the block devices
[root@vm-k8s-worker01 ~]# lsblk -f
NAME                            FSTYPE      LABEL           UUID                                   MOUNTPOINT
sda                                                                                                
├─sda1                          xfs                         8fd14e7c-6b98-4845-bd19-2f1eb6b175b0   /boot
└─sda2                          LVM2_member                 ZT7jhW-IRXs-0KBo-FGSk-C85B-hnOA-upyrdP 
  ├─centos_vm--k8s--jayway-root xfs                         477d2f79-9e28-49a0-8f5f-01c69014014b   /
  └─centos_vm--k8s--jayway-swap swap                        949d1c1f-abe7-4fd2-9232-40526e1b8637   
sdb                                                                                                
sr0                             iso9660     CentOS 7 x86_64 2020-11-04-11-36-43-00   

Note the FSTYPE column: if it is empty, no filesystem has been written to the device yet; for example, sdb above is the newly added disk.
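
Ceph OSDs need the device to be completely raw. If a disk still carries a partition table or filesystem signatures from previous use, it can be wiped first; a destructive sketch, assuming /dev/sdb is the disk reserved for Rook (double-check the device name before running this; sgdisk comes from the gdisk package):

# remove any partition table and filesystem signatures from the disk
sgdisk --zap-all /dev/sdb
wipefs -a /dev/sdb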

3.6.3 Fetching the Rook deployment manifests

  • Clone the Rook git repository
# The latest release at the time of writing is v1.8.7
git clone --single-branch --branch v1.8.7 https://github.com/rook/rook.git
  • Review the Kubernetes resource manifests for deploying a Rook cluster
    The rook/deploy/examples directory contains a large number of Kubernetes manifests; for now we only need to care about a few of them.
Resource file | Description
crds.yaml | CRDs that must be created before the Rook cluster itself, so that k8s recognizes Rook's custom resources
common.yaml | Common resources required by the operator and the Ceph cluster; must be created before operator.yaml and cluster.yaml
operator.yaml | The Rook operator controller
cluster.yaml | The rook-ceph cluster configuration
storageclass.yaml | Creates a storage class, giving the cluster the ability to dynamically provision PVs
mysql.yaml | A MySQL test service provided by Rook, persisted on a PV created through the storage class
wordpress.yaml | A WordPress test service provided by Rook, persisted on a PV created through the storage class
-rw-r--r--. 1 root root    696 3 月  27 10:27 bucket-notification-endpoint.yaml
-rw-r--r--. 1 root root   1258 3 月  27 10:27 bucket-notification.yaml
-rw-r--r--. 1 root root    842 3 月  27 10:27 bucket-topic.yaml
-rw-r--r--. 1 root root    456 3 月  27 10:27 ceph-client.yaml
-rw-r--r--. 1 root root   1088 3 月  27 10:27 cluster-external-management.yaml
-rw-r--r--. 1 root root   1275 3 月  27 10:27 cluster-external.yaml
-rw-r--r--. 1 root root   7350 3 月  27 10:27 cluster-on-local-pvc.yaml
-rw-r--r--. 1 root root   8539 3 月  27 10:27 cluster-on-pvc.yaml
-rw-r--r--. 1 root root   5376 3 月  27 10:27 cluster-stretched-aws.yaml
-rw-r--r--. 1 root root   3312 3 月  27 10:27 cluster-stretched.yaml
-rw-r--r--. 1 root root   1798 3 月  27 10:27 cluster-test.yaml
-rw-r--r--. 1 root root  15343 3 月  27 10:27 cluster.yaml
-rw-r--r--. 1 root root   2418 3 月  27 10:27 common-external.yaml
-rw-r--r--. 1 root root   3908 3 月  27 10:27 common-second-cluster.yaml
-rw-r--r--. 1 root root  38180 3 月  27 10:27 common.yaml
-rw-r--r--. 1 root root 739053 3 月  27 10:27 crds.yaml
-rw-r--r--. 1 root root  75912 3 月  27 10:27 create-external-cluster-resources.py
-rw-r--r--. 1 root root   3696 3 月  27 10:27 create-external-cluster-resources.sh
drwxr-xr-x. 4 root root     31 3 月  27 10:27 csi
-rw-r--r--. 1 root root    430 3 月  27 10:27 dashboard-external-https.yaml
-rw-r--r--. 1 root root    429 3 月  27 10:27 dashboard-external-http.yaml
-rw-r--r--. 1 root root    988 3 月  27 10:27 dashboard-ingress-https.yaml
-rw-r--r--. 1 root root    432 3 月  27 10:27 dashboard-loadbalancer.yaml
-rw-r--r--. 1 root root   1904 3 月  27 10:27 direct-mount.yaml
-rw-r--r--. 1 root root   3729 3 月  27 10:27 filesystem-ec.yaml
-rw-r--r--. 1 root root   1253 3 月  27 10:27 filesystem-mirror.yaml
-rw-r--r--. 1 root root    825 3 月  27 10:27 filesystem-test.yaml
-rw-r--r--. 1 root root   5378 3 月  27 10:27 filesystem.yaml
-rw-r--r--. 1 root root    416 3 月  27 10:27 images.txt
-rw-r--r--. 1 root root   5142 3 月  27 10:27 import-external-cluster.sh
drwxr-xr-x. 2 root root   4096 3 月  27 10:27 monitoring
-rw-r--r--. 1 root root   1269 3 月  27 10:27 mysql.yaml
-rw-r--r--. 1 root root    747 3 月  27 10:27 nfs-test.yaml
-rw-r--r--. 1 root root   2635 3 月  27 10:27 nfs.yaml
-rw-r--r--. 1 root root    604 3 月  27 10:27 object-bucket-claim-delete.yaml
-rw-r--r--. 1 root root    502 3 月  27 10:27 object-bucket-claim-notification.yaml
-rw-r--r--. 1 root root    604 3 月  27 10:27 object-bucket-claim-retain.yaml
-rw-r--r--. 1 root root   3793 3 月  27 10:27 object-ec.yaml
-rw-r--r--. 1 root root    836 3 月  27 10:27 object-external.yaml
-rw-r--r--. 1 root root   1991 3 月  27 10:27 object-multisite-pull-realm-test.yaml
-rw-r--r--. 1 root root   2064 3 月  27 10:27 object-multisite-pull-realm.yaml
-rw-r--r--. 1 root root   1339 3 月  27 10:27 object-multisite-test.yaml
-rw-r--r--. 1 root root   1365 3 月  27 10:27 object-multisite.yaml
-rw-r--r--. 1 root root   6090 3 月  27 10:27 object-openshift.yaml
-rw-r--r--. 1 root root    707 3 月  27 10:27 object-test.yaml
-rw-r--r--. 1 root root    795 3 月  27 10:27 object-user.yaml
-rw-r--r--. 1 root root   5835 3 月  27 10:27 object.yaml
-rw-r--r--. 1 root root  23608 3 月  27 10:27 operator-openshift.yaml
-rw-r--r--. 1 root root  21444 3 月  27 10:27 operator.yaml
-rw-r--r--. 1 root root    944 3 月  27 10:27 osd-env-override.yaml
-rw-r--r--. 1 root root   3150 3 月  27 10:27 osd-purge.yaml
-rw-r--r--. 1 root root    797 3 月  27 10:27 pool-device-health-metrics.yaml
-rw-r--r--. 1 root root   1127 3 月  27 10:27 pool-ec.yaml
-rw-r--r--. 1 root root    507 3 月  27 10:27 pool-mirrored.yaml
-rw-r--r--. 1 root root    539 3 月  27 10:27 pool-test.yaml
-rw-r--r--. 1 root root   3461 3 月  27 10:27 pool.yaml
-rw-r--r--. 1 root root   1515 3 月  27 10:27 rbdmirror.yaml
-rw-r--r--. 1 root root    130 3 月  27 10:27 README.md
-rw-r--r--. 1 root root    546 3 月  27 10:27 rgw-external.yaml
-rw-r--r--. 1 root root    754 3 月  27 10:27 storageclass-bucket-delete.yaml
-rw-r--r--. 1 root root    752 3 月  27 10:27 storageclass-bucket-retain.yaml
-rw-r--r--. 1 root root    281 3 月  27 10:27 subvolumegroup.yaml
-rw-r--r--. 1 root root   1796 3 月  27 10:27 toolbox-job.yaml
-rw-r--r--. 1 root root   1672 3 月  27 10:27 toolbox.yaml
-rw-r--r--. 1 root root    460 3 月  27 10:27 volume-replication-class.yaml
-rw-r--r--. 1 root root    352 3 月  27 10:27 volume-replication.yaml
-rw-r--r--. 1 root root   1369 3 月  27 10:27 wordpress.yaml

3.6.4、Installing the Rook storage plugin

The images that the various Rook resources depend on are listed below; the CSI images among them are hosted on Google's k8s.gcr.io registry.

[root@vm-k8s-master examples]# pwd
/root/k8s-init/rook-git/deploy/examples
[root@vm-k8s-master examples]# cat images.txt 
 k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0
 k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
 k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
 k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
 quay.io/ceph/ceph:v16.2.7
 quay.io/cephcsi/cephcsi:v3.5.1
 quay.io/csiaddons/k8s-sidecar:v0.2.1
 quay.io/csiaddons/volumereplication-operator:v0.3.0
 rook/ceph:v1.8.7
3.6.4.1、Creating the CRD resources

Before the Rook cluster is created, the CRD objects must be created first so that Kubernetes recognizes Rook's custom resources; this manifest does not depend on any image.

# Current directory: rook/deploy/examples
kubectl apply -f crds.yaml
  • Execution log
[root@vm-k8s-master examples]# kubectl apply -f crds.yaml
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephbucketnotifications.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephbuckettopics.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystemmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystemsubvolumegroups.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectrealms.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzonegroups.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzones.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephrbdmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created
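As an optional sanity check (not part of the official steps), the newly registered CRDs can be listed before moving on:

# All of the Rook CRD groups should show up here
kubectl get crd | grep -E 'ceph.rook.io|objectbucket.io'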
3.6.4.2、Creating the common resources

These are the common resources required to start the operator and the Ceph cluster; they must be created before operator.yaml and cluster.yaml. This manifest does not depend on any image.

# Current directory: rook/deploy/examples
kubectl apply -f common.yaml
  • Execution log
[root@vm-k8s-master examples]# kubectl apply -f common.yaml
namespace/rook-ceph created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrole.rbac.authorization.k8s.io/psp:rook created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created
clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/00-rook-privileged created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
role.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-osd created
role.rbac.authorization.k8s.io/rook-ceph-purge-osd created
role.rbac.authorization.k8s.io/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin-role-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-purge-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
serviceaccount/rook-ceph-cmd-reporter created
serviceaccount/rook-ceph-mgr created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-purge-osd created
serviceaccount/rook-ceph-system created
serviceaccount/rook-csi-cephfs-plugin-sa created
serviceaccount/rook-csi-cephfs-provisioner-sa created
serviceaccount/rook-csi-rbd-plugin-sa created
serviceaccount/rook-csi-rbd-provisioner-sa created
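Optionally, confirm that the rook-ceph namespace and its service accounts exist before applying the operator:

kubectl get ns rook-ceph
kubectl -n rook-ceph get serviceaccounts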
3.6.4.3、Creating the Rook operator

The operator depends on the rook/ceph:v1.8.7 image, which can be pulled normally from networks inside China. However, it also depends on the CSI images by default, and those require access to Google's registry.

# Current directory: rook/deploy/examples
kubectl apply -f operator.yaml
  • If the Google-hosted images cannot be pulled, change the repository addresses of the dependent images
ROOK_CSI_ATTACHER_IMAGE: "willdockerhub/csi-attacher:v3.4.0"
ROOK_CSI_REGISTRAR_IMAGE: "willdockerhub/csi-node-driver-registrar:v2.5.0"
ROOK_CSI_PROVISIONER_IMAGE: "willdockerhub/csi-provisioner:v3.1.0"
ROOK_CSI_RESIZER_IMAGE: "willdockerhub/csi-resizer:v1.4.0"
ROOK_CSI_SNAPSHOTTER_IMAGE: "willdockerhub/csi-snapshotter:v5.0.1"

ROOK_CSI_CEPH_IMAGE: "willdockerhub/cephcsi:v3.5.1"
ROOK_CSIADDONS_IMAGE: "willdockerhub/k8s-sidecar:v0.2.1"
CSI_VOLUME_REPLICATION_IMAGE: "willdockerhub/volumereplication-operator:v0.3.0"
  • Contents of operator.yaml after the image changes
#################################################################################################################
# The deployment for the rook operator
# Contains the common settings for most Kubernetes deployments.
# For example, to create the rook-ceph cluster:
#   kubectl create -f crds.yaml -f common.yaml -f operator.yaml
#   kubectl create -f cluster.yaml
#
# Also see other operator sample files for variations of operator.yaml:
# - operator-openshift.yaml: Common settings for running in OpenShift
###############################################################################################################

# Rook Ceph Operator Config ConfigMap
# Use this ConfigMap to override Rook-Ceph Operator configurations.
# NOTE! Precedence will be given to this config if the same Env Var config also exists in the
#       Operator Deployment.
# To move a configuration(s) from the Operator Deployment to this ConfigMap, add the config
# here. It is recommended to then remove it from the Deployment to eliminate any future confusion.
kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-ceph-operator-config
  # should be in the namespace of the operator
  namespace: rook-ceph # namespace:operator
data:
  # The logging level for the operator: ERROR | WARNING | INFO | DEBUG
  ROOK_LOG_LEVEL: "INFO"

  # Enable the CSI driver.
  # To run the non-default version of the CSI driver, see the override-able image properties in operator.yaml
  ROOK_CSI_ENABLE_CEPHFS: "true"
  # Enable the default version of the CSI RBD driver. To start another version of the CSI driver, see image properties below.
  ROOK_CSI_ENABLE_RBD: "true"
  ROOK_CSI_ENABLE_GRPC_METRICS: "false"

  # Set to true to enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
  # in some network configurations where the SDN does not provide access to an external cluster or
  # there is significant drop in read/write performance.
  # CSI_ENABLE_HOST_NETWORK: "true"

  # Set logging level for csi containers.
  # Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
  # CSI_LOG_LEVEL: "0"

  # Set replicas for csi provisioner deployment.
  CSI_PROVISIONER_REPLICAS: "2"

  # OMAP generator will generate the omap mapping between the PV name and the RBD image.
  # CSI_ENABLE_OMAP_GENERATOR need to be enabled when we are using rbd mirroring feature.
  # By default OMAP generator sidecar is deployed with CSI provisioner pod, to disable
  # it set it to false.
  # CSI_ENABLE_OMAP_GENERATOR: "false"

  # set to false to disable deployment of snapshotter container in CephFS provisioner pod.
  CSI_ENABLE_CEPHFS_SNAPSHOTTER: "true"

  # set to false to disable deployment of snapshotter container in RBD provisioner pod.
  CSI_ENABLE_RBD_SNAPSHOTTER: "true"

  # Enable cephfs kernel driver instead of ceph-fuse.
  # If you disable the kernel client, your application may be disrupted during upgrade.
  # See the upgrade guide: https://rook.io/docs/rook/latest/ceph-upgrade.html
  # NOTE! cephfs quota is not supported in kernel version < 4.17
  CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"

  # (Optional) policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
  # supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
  CSI_RBD_FSGROUPPOLICY: "ReadWriteOnceWithFSType"

  # (Optional) policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted.
  # supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
  CSI_CEPHFS_FSGROUPPOLICY: "ReadWriteOnceWithFSType"

  # (Optional) Allow starting unsupported ceph-csi image
  ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"

  # (Optional) control the host mount of /etc/selinux for csi plugin pods.
  CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT: "false"

  # The default version of CSI supported by Rook will be started. To change the version
  # of the CSI driver to something other than what is officially supported, change
  # these images to the desired release of the CSI driver.
  # ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.5.1"
  # ROOK_CSI_REGISTRAR_IMAGE: "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0"
  # ROOK_CSI_RESIZER_IMAGE: "k8s.gcr.io/sig-storage/csi-resizer:v1.4.0"
  # ROOK_CSI_PROVISIONER_IMAGE: "k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0"
  # ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1"
  # ROOK_CSI_ATTACHER_IMAGE: "k8s.gcr.io/sig-storage/csi-attacher:v3.4.0"
  
  
  ROOK_CSI_ATTACHER_IMAGE: "willdockerhub/csi-attacher:v3.4.0"
  ROOK_CSI_REGISTRAR_IMAGE: "willdockerhub/csi-node-driver-registrar:v2.5.0"
  ROOK_CSI_PROVISIONER_IMAGE: "willdockerhub/csi-provisioner:v3.1.0"
  ROOK_CSI_RESIZER_IMAGE: "willdockerhub/csi-resizer:v1.4.0"
  ROOK_CSI_SNAPSHOTTER_IMAGE: "willdockerhub/csi-snapshotter:v5.0.1"
  
  ROOK_CSI_CEPH_IMAGE: "willdockerhub/cephcsi:v3.5.1"
  ROOK_CSIADDONS_IMAGE: "willdockerhub/k8s-sidecar:v0.2.1"
  CSI_VOLUME_REPLICATION_IMAGE: "willdockerhub/volumereplication-operator:v0.3.0"

  # (Optional) set user created priorityclassName for csi plugin pods.
  # CSI_PLUGIN_PRIORITY_CLASSNAME: "system-node-critical"

  # (Optional) set user created priorityclassName for csi provisioner pods.
  # CSI_PROVISIONER_PRIORITY_CLASSNAME: "system-cluster-critical"

  # CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
  # Default value is RollingUpdate.
  # CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY: "OnDelete"
  # CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
  # Default value is RollingUpdate.
  # CSI_RBD_PLUGIN_UPDATE_STRATEGY: "OnDelete"

  # kubelet directory path, if kubelet configured to use other than /var/lib/kubelet path.
  # ROOK_CSI_KUBELET_DIR_PATH: "/var/lib/kubelet"

  # Labels to add to the CSI CephFS Deployments and DaemonSets Pods.
  # ROOK_CSI_CEPHFS_POD_LABELS: "key1=value1,key2=value2"
  # Labels to add to the CSI RBD Deployments and DaemonSets Pods.
  # ROOK_CSI_RBD_POD_LABELS: "key1=value1,key2=value2"

  # (Optional) CephCSI provisioner NodeAffinity(applied to both CephFS and RBD provisioner).
  # CSI_PROVISIONER_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
  # (Optional) CephCSI provisioner tolerations list(applied to both CephFS and RBD provisioner).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI provisioner would be best to start on the same nodes as other ceph daemons.
  # CSI_PROVISIONER_TOLERATIONS: |
  #   - effect: NoSchedule
  #     key: node-role.kubernetes.io/controlplane
  #     operator: Exists
  #   - effect: NoExecute
  #     key: node-role.kubernetes.io/etcd
  #     operator: Exists
  # (Optional) CephCSI plugin NodeAffinity(applied to both CephFS and RBD plugin).
  # CSI_PLUGIN_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
  # (Optional) CephCSI plugin tolerations list(applied to both CephFS and RBD plugin).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI plugins need to be started on all the nodes where the clients need to mount the storage.
  # CSI_PLUGIN_TOLERATIONS: |
  #   - effect: NoSchedule
  #     key: node-role.kubernetes.io/controlplane
  #     operator: Exists
  #   - effect: NoExecute
  #     key: node-role.kubernetes.io/etcd
  #     operator: Exists

  # (Optional) CephCSI RBD provisioner NodeAffinity(if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
  # CSI_RBD_PROVISIONER_NODE_AFFINITY: "role=rbd-node"
  # (Optional) CephCSI RBD provisioner tolerations list(if specified, overrides CSI_PROVISIONER_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI provisioner would be best to start on the same nodes as other ceph daemons.
  # CSI_RBD_PROVISIONER_TOLERATIONS: |
  #   - key: node.rook.io/rbd
  #     operator: Exists
  # (Optional) CephCSI RBD plugin NodeAffinity(if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
  # CSI_RBD_PLUGIN_NODE_AFFINITY: "role=rbd-node"
  # (Optional) CephCSI RBD plugin tolerations list(if specified, overrides CSI_PLUGIN_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI plugins need to be started on all the nodes where the clients need to mount the storage.
  # CSI_RBD_PLUGIN_TOLERATIONS: |
  #   - key: node.rook.io/rbd
  #     operator: Exists

  # (Optional) CephCSI CephFS provisioner NodeAffinity(if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
  # CSI_CEPHFS_PROVISIONER_NODE_AFFINITY: "role=cephfs-node"
  # (Optional) CephCSI CephFS provisioner tolerations list(if specified, overrides CSI_PROVISIONER_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI provisioner would be best to start on the same nodes as other ceph daemons.
  # CSI_CEPHFS_PROVISIONER_TOLERATIONS: |
  #   - key: node.rook.io/cephfs
  #     operator: Exists
  # (Optional) CephCSI CephFS plugin NodeAffinity(if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
  # CSI_CEPHFS_PLUGIN_NODE_AFFINITY: "role=cephfs-node"
  # (Optional) CephCSI CephFS plugin tolerations list(if specified, overrides CSI_PLUGIN_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI plugins need to be started on all the nodes where the clients need to mount the storage.
  # CSI_CEPHFS_PLUGIN_TOLERATIONS: |
  #   - key: node.rook.io/cephfs
  #     operator: Exists

  # (Optional) CEPH CSI RBD provisioner resource requirement list, Put here list of resource
  # requests and limits you want to apply for provisioner pod
  # CSI_RBD_PROVISIONER_RESOURCE: |
  #  - name : csi-provisioner
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #        cpu: 200m
  #  - name : csi-resizer
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #        cpu: 200m
  #  - name : csi-attacher
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #        cpu: 200m
  #  - name : csi-snapshotter
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #        cpu: 200m
  #  - name : csi-rbdplugin
  #    resource:
  #      requests:
  #        memory: 512Mi
  #        cpu: 250m
  #      limits:
  #        memory: 1Gi
  #        cpu: 500m
  #  - name : liveness-prometheus
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  #        cpu: 100m
  # (Optional) CEPH CSI RBD plugin resource requirement list, Put here list of resource
  # requests and limits you want to apply for plugin pod
  # CSI_RBD_PLUGIN_RESOURCE: |
  #  - name : driver-registrar
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  #        cpu: 100m
  #  - name : csi-rbdplugin
  #    resource:
  #      requests:
  #        memory: 512Mi
  #        cpu: 250m
  #      limits:
  #        memory: 1Gi
  #        cpu: 500m
  #  - name : liveness-prometheus
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  #        cpu: 100m
  # (Optional) CEPH CSI CephFS provisioner resource requirement list, Put here list of resource
  # requests and limits you want to apply for provisioner pod
  # CSI_CEPHFS_PROVISIONER_RESOURCE: |
  #  - name : csi-provisioner
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #        cpu: 200m
  #  - name : csi-resizer
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #        cpu: 200m
  #  - name : csi-attacher
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #        cpu: 200m
  #  - name : csi-cephfsplugin
  #    resource:
  #      requests:
  #        memory: 512Mi
  #        cpu: 250m
  #      limits:
  #        memory: 1Gi
  #        cpu: 500m
  #  - name : liveness-prometheus
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  #        cpu: 100m
  # (Optional) CEPH CSI CephFS plugin resource requirement list, Put here list of resource
  # requests and limits you want to apply for plugin pod
  # CSI_CEPHFS_PLUGIN_RESOURCE: |
  #  - name : driver-registrar
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  #        cpu: 100m
  #  - name : csi-cephfsplugin
  #    resource:
  #      requests:
  #        memory: 512Mi
  #        cpu: 250m
  #      limits:
  #        memory: 1Gi
  #        cpu: 500m
  #  - name : liveness-prometheus
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  #        cpu: 100m

  # Configure CSI CSI Ceph FS grpc and liveness metrics port
  # CSI_CEPHFS_GRPC_METRICS_PORT: "9091"
  # CSI_CEPHFS_LIVENESS_METRICS_PORT: "9081"
  # Configure CSI RBD grpc and liveness metrics port
  # CSI_RBD_GRPC_METRICS_PORT: "9090"
  # CSI_RBD_LIVENESS_METRICS_PORT: "9080"
  # CSIADDONS_PORT: "9070"

  # Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used
  ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true"

  # Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
  # This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs.
  ROOK_ENABLE_DISCOVERY_DAEMON: "false"
  # The timeout value (in seconds) of Ceph commands. It should be >= 1. If this variable is not set or is an invalid value, it's default to 15.
  ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS: "15"
  # Enable the volume replication controller.
  # Before enabling, ensure the Volume Replication CRDs are created.
  # See https://rook.io/docs/rook/latest/ceph-csi-drivers.html#rbd-mirroring
  CSI_ENABLE_VOLUME_REPLICATION: "false"
  # CSI_VOLUME_REPLICATION_IMAGE: "quay.io/csiaddons/volumereplication-operator:v0.3.0"
  # Enable the csi addons sidecar.
  CSI_ENABLE_CSIADDONS: "false"
  # ROOK_CSIADDONS_IMAGE: "quay.io/csiaddons/k8s-sidecar:v0.2.1"
---
# OLM: BEGIN OPERATOR DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph # namespace:operator
  labels:
    operator: rook
    storage-backend: ceph
    app.kubernetes.io/name: rook-ceph
    app.kubernetes.io/instance: rook-ceph
    app.kubernetes.io/component: rook-ceph-operator
    app.kubernetes.io/part-of: rook-ceph-operator
spec:
  selector:
    matchLabels:
      app: rook-ceph-operator
  replicas: 1
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      serviceAccountName: rook-ceph-system
      containers:
        - name: rook-ceph-operator
          image: rook/ceph:v1.8.7
          args: ["ceph", "operator"]
          securityContext:
            runAsNonRoot: true
            runAsUser: 2016
            runAsGroup: 2016
          volumeMounts:
            - mountPath: /var/lib/rook
              name: rook-config
            - mountPath: /etc/ceph
              name: default-config-dir
            - mountPath: /etc/webhook
              name: webhook-cert
          ports:
            - containerPort: 9443
              name: https-webhook
              protocol: TCP
          env:
            # If the operator should only watch for cluster CRDs in the same namespace, set this to "true".
            # If this is not set to true, the operator will watch for cluster CRDs in all namespaces.
            - name: ROOK_CURRENT_NAMESPACE_ONLY
              value: "false"
            # Rook Discover toleration. Will tolerate all taints with all keys.
            # Choose between NoSchedule, PreferNoSchedule and NoExecute:
            # - name: DISCOVER_TOLERATION
            #   value: "NoSchedule"
            # (Optional) Rook Discover toleration key. Set this to the key of the taint you want to tolerate
            # - name: DISCOVER_TOLERATION_KEY
            #   value: "<KeyOfTheTaintToTolerate>"
            # (Optional) Rook Discover tolerations list. Put here list of taints you want to tolerate in YAML format.
            # - name: DISCOVER_TOLERATIONS
            #   value: |
            #     - effect: NoSchedule
            #       key: node-role.kubernetes.io/controlplane
            #       operator: Exists
            #     - effect: NoExecute
            #       key: node-role.kubernetes.io/etcd
            #       operator: Exists
            # (Optional) Rook Discover priority class name to set on the pod(s)
            # - name: DISCOVER_PRIORITY_CLASS_NAME
            #   value: "<PriorityClassName>"
            # (Optional) Discover Agent NodeAffinity.
            # - name: DISCOVER_AGENT_NODE_AFFINITY
            #   value: "role=storage-node; storage=rook, ceph"
            # (Optional) Discover Agent Pod Labels.
            # - name: DISCOVER_AGENT_POD_LABELS
            #   value: "key1=value1,key2=value2"

            # The duration between discovering devices in the rook-discover daemonset.
            - name: ROOK_DISCOVER_DEVICES_INTERVAL
              value: "60m"

            # Whether to start pods as privileged that mount a host path, which includes the Ceph mon and osd pods.
            # Set this to true if SELinux is enabled (e.g. OpenShift) to workaround the anyuid issues.
            # For more details see https://github.com/rook/rook/issues/1314#issuecomment-355799641
            - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
              value: "false"

            # In some situations SELinux relabelling breaks (times out) on large filesystems, and doesn't work with cephfs ReadWriteMany volumes (last relabel wins).
            # Disable it here if you have similar issues.
            # For more details see https://github.com/rook/rook/issues/2417
            - name: ROOK_ENABLE_SELINUX_RELABELING
              value: "true"

            # In large volumes it will take some time to chown all the files. Disable it here if you have performance issues.
            # For more details see https://github.com/rook/rook/issues/2254
            - name: ROOK_ENABLE_FSGROUP
              value: "true"

            # Disable automatic orchestration when new devices are discovered
            - name: ROOK_DISABLE_DEVICE_HOTPLUG
              value: "false"

            # Provide customised regex as the values using comma. For eg. regex for rbd based volume, value will be like "(?i)rbd[0-9]+".
            # In case of more than one regex, use comma to separate between them.
            # Default regex will be "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
            # Add regex expression after putting a comma to blacklist a disk
            # If value is empty, the default regex will be used.
            - name: DISCOVER_DAEMON_UDEV_BLACKLIST
              value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"

            # Time to wait until the node controller will move Rook pods to other
            # nodes after detecting an unreachable node.
            # Pods affected by this setting are:
            # mgr, rbd, mds, rgw, nfs, PVC based mons and osds, and ceph toolbox
            # The value used in this variable replaces the default value of 300 secs
            # added automatically by k8s as Toleration for
            # <node.kubernetes.io/unreachable>
            # The total amount of time to reschedule Rook pods in healthy nodes
            # before detecting a <not ready node> condition will be the sum of:
            #  --> node-monitor-grace-period: 40 seconds (k8s kube-controller-manager flag)
            #  --> ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS: 5 seconds
            - name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS
              value: "5"

            # The name of the node to pass with the downward API
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # The pod name to pass with the downward API
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # The pod namespace to pass with the downward API
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # Recommended resource requests and limits, if desired
          #resources:
          #  limits:
          #    cpu: 500m
          #    memory: 256Mi
          #  requests:
          #    cpu: 100m
          #    memory: 128Mi

          #  Uncomment it to run lib bucket provisioner in multithreaded mode
          #- name: LIB_BUCKET_PROVISIONER_THREADS
          #  value: "5"

      # Uncomment it to run rook operator on the host network
      #hostNetwork: true
      volumes:
        - name: rook-config
          emptyDir: {}
        - name: default-config-dir
          emptyDir: {}
        - name: webhook-cert
          emptyDir: {}
# OLM: END OPERATOR DEPLOYMENT
  • Execution log
[root@vm-k8s-master examples]# kubectl apply -f operator.yaml
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created
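Before applying cluster.yaml, it is worth waiting until the operator pod is Ready; one way to do this, using the app=rook-ceph-operator label from the Deployment above, is:

# Block until the operator pod reports Ready (up to 5 minutes)
kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s
kubectl -n rook-ceph get pods -l app=rook-ceph-operator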
3.6.4.4、Creating the rook-ceph cluster
# Current directory: rook/deploy/examples
kubectl apply -f cluster.yaml
  • Check the pod status
# Check the pod status
[root@vm-k8s-master examples]# kubectl get pods -n rook-ceph
NAME                                            READY   STATUS              RESTARTS   AGE
csi-cephfsplugin-2qr5s                          2/3     ImagePullBackOff    0          5m6s
csi-cephfsplugin-mwzc6                          2/3     ErrImagePull        0          5m6s
csi-cephfsplugin-provisioner-5dc9cbcc87-gpbbg   2/6     ErrImagePull        0          5m6s
csi-cephfsplugin-provisioner-5dc9cbcc87-zjflm   0/6     ContainerCreating   0          5m6s
csi-cephfsplugin-qlqb5                          0/3     ErrImagePull        0          5m6s
csi-rbdplugin-4hwfg                             2/3     ErrImagePull        0          5m6s
csi-rbdplugin-8hzz4                             2/3     ImagePullBackOff    0          5m6s
csi-rbdplugin-f6d6z                             2/3     ImagePullBackOff    0          5m6s
csi-rbdplugin-provisioner-58f584754c-b57xb      2/6     ErrImagePull        0          5m6s
csi-rbdplugin-provisioner-58f584754c-t87ts      0/6     ContainerCreating   0          5m6s
rook-ceph-mon-a-7dd7f95f87-mzrbt                1/1     Running             0          5m52s
rook-ceph-mon-b-b7645dd5b-ls7nm                 1/1     Running             0          4m56s
rook-ceph-mon-c-5dc96ff9c-zgq6x                 0/1     Init:0/2            0          2m35s
rook-ceph-operator-846695c777-jzg4z             1/1     Running             0          15m

# Describe one of the failing pods to see why the image pull fails
[root@vm-k8s-master examples]# kubectl describe pod -n rook-ceph csi-cephfsplugin-2qr5s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m45s                  default-scheduler  Successfully assigned rook-ceph/csi-cephfsplugin-2qr5s to vm-k8s-worker02
  Normal   Pulled     9m14s                  kubelet            Container image "quay.io/cephcsi/cephcsi:v3.5.1" already present on machine
  Normal   Created    9m14s                  kubelet            Created container liveness-prometheus
  Normal   Started    9m14s                  kubelet            Started container liveness-prometheus
  Normal   Pulled     9m14s                  kubelet            Container image "quay.io/cephcsi/cephcsi:v3.5.1" already present on machine
  Normal   Created    9m14s                  kubelet            Created container csi-cephfsplugin
  Normal   Started    9m14s                  kubelet            Started container csi-cephfsplugin
  Normal   Pulling    7m33s (x3 over 9m44s)  kubelet            Pulling image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0"
  Warning  Failed     6m44s (x3 over 9m14s)  kubelet            Failed to pull image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     6m44s (x3 over 9m14s)  kubelet            Error: ErrImagePull
  Warning  Failed     6m17s (x5 over 9m14s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m33s (x8 over 9m14s)  kubelet            Back-off pulling image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0"
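The CSI plugin pods above are still trying to pull from k8s.gcr.io, which times out. One workaround, assuming the willdockerhub mirrors listed earlier carry the same tags, is to pull the missing image from the mirror on each affected node and retag it with the original name; the failing pod can also be deleted so it is recreated from the (now overridden) DaemonSet spec.

# Run on every node that reports ErrImagePull / ImagePullBackOff
docker pull willdockerhub/csi-node-driver-registrar:v2.5.0
docker tag willdockerhub/csi-node-driver-registrar:v2.5.0 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0

# Recreate the failing pod (its DaemonSet controller brings it back)
kubectl -n rook-ceph delete pod csi-cephfsplugin-2qr5s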
3.6.4.5、Creating the storage class

Create the storage class that gives the Kubernetes cluster the ability to dynamically provision PVs.

# Current directory: rook/deploy/examples/csi/rbd
kubectl apply -f storageclass.yaml
[root@vm-k8s-master rbd]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   65s
  • Contents of storageclass.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: host
  replicated:
    size: 3
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: true
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #targetSizeRatio: .5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph # namespace:cluster

  # If you want to use erasure coded pool with RBD, you need to create
  # two pools. one erasure coded and one replicated.
  # You need to specify the replicated pool here in the `pool` parameter, it is
  # used for the metadata of the images.
  # The erasure coded pool must be set as the `dataPool` parameter below.
  #dataPool: ec-data-pool
  pool: replicapool

  # (optional) mapOptions is a comma-separated list of map options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # mapOptions: lock_on_read,queue_depth=1024

  # (optional) unmapOptions is a comma-separated list of unmap options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # unmapOptions: force

  # RBD image format. Defaults to "2".
  imageFormat: "2"

  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
  # in hyperconverged settings where the volume is mounted on the same node as the osds.
  csi.storage.k8s.io/fstype: ext4
# uncomment the following to use rbd-nbd as mounter on supported nodes
# **IMPORTANT**: CephCSI v3.4.0 onwards a volume healer functionality is added to reattach
# the PVC to application pod if nodeplugin pod restart.
# Its still in Alpha support. Therefore, this option is not recommended for production use.
#mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete
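Note that this single manifest creates two objects: a CephBlockPool named replicapool and a StorageClass named rook-ceph-block that provisions RBD images from that pool. Both can be checked once the Ceph cluster is healthy:

kubectl -n rook-ceph get cephblockpool
kubectl get storageclass rook-ceph-block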
3.6.4.6、Deploying the MySQL test service provided by Rook

The MySQL test service provided by Rook persists its data to a PV created through the storage class.

# Current directory: rook/deploy/examples
kubectl apply -f mysql.yaml
[root@vm-k8s-master examples]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-effe80d6-a1ae-42a7-b112-d11e50c06fb8   20Gi       RWO            rook-ceph-block   4m27s
  • Contents of mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
    tier: mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
3.6.4.7、Deploying the toolbox

The Ceph cluster started with the default settings has Ceph authentication enabled, so logging in to a Pod that runs a Ceph component does not let you query the cluster status or run CLI commands; for that, the Ceph toolbox needs to be deployed.

# Current directory: rook/deploy/examples
kubectl apply -f toolbox.yaml
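Once the toolbox pod is running, Ceph CLI commands can be executed from inside it. A sketch is shown below; in the stock toolbox.yaml the Deployment is typically named rook-ceph-tools, adjust the name if you changed the manifest.

# Wait for the toolbox Deployment, then open a shell inside it
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

# Inside the toolbox: check overall cluster health, OSDs and capacity
ceph status
ceph osd status
ceph df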
3.6.4.8、Deploying the WordPress test service provided by Rook

The WordPress test service provided by Rook persists its data to a PV created through the storage class.

# Current directory: rook/deploy/examples
kubectl apply -f wordpress.yaml
[root@vm-k8s-master examples]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-effe80d6-a1ae-42a7-b112-d11e50c06fb8   20Gi       RWO            rook-ceph-block   5m49s
wp-pv-claim      Bound    pvc-240ea773-ace2-4a9a-99ff-89f52bbd8084   20Gi       RWO            rook-ceph-block   7s

Note: the PVs here are created automatically. Once a PVC that carries a StorageClass field is submitted, Kubernetes creates the corresponding PV according to that StorageClass; this is the Dynamic Provisioning mechanism for creating PVs on demand.
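To inspect the volumes that Dynamic Provisioning created behind these claims, list the PVs; their Delete reclaim policy comes from the storage class defined earlier.

kubectl get pv
kubectl describe pv pvc-240ea773-ace2-4a9a-99ff-89f52bbd8084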

  • Accessing WordPress
[root@vm-k8s-master examples]# kubectl get svc
NAME              TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes        ClusterIP      10.1.0.1      <none>        443/TCP        26h
wordpress         LoadBalancer   10.1.43.219   <pending>     80:31225/TCP   14m
wordpress-mysql   ClusterIP      None          <none>        3306/TCP       20m
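The EXTERNAL-IP stays <pending> because this bare-metal cluster has no LoadBalancer implementation, so the service can be reached through the NodePort shown above (31225) or via a port-forward. Both approaches are sketched below; <node-ip> stands for the IP of any cluster node.

# Via the NodePort on any cluster node
curl -I http://<node-ip>:31225

# Or forward the service to the local machine and browse http://localhost:8080
kubectl port-forward svc/wordpress 8080:80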
