Preface

Last time I published 《实践篇:让咱们一起鲁克鲁克——rook(开源存储编排)》, an introduction to Rook, the open-source storage orchestrator. This time it is hands-on: using Rook to bring up a Ceph cluster on Kubernetes. In follow-up posts I will cover wiring it up to k8s as the cluster's backend storage, plus some lessons from running it in production.

Environment planning

Hostname        IP              Role                   Data disk
k8s-a-master    192.168.11.10   k8s master             -
k8s-a-node01    192.168.11.11   k8s worker, ceph osd   1 x 1TB disk
k8s-a-node02    192.168.11.12   k8s worker, ceph osd   1 x 1TB disk
k8s-a-node03    192.168.11.13   k8s worker, ceph osd   1 x 1TB disk
k8s-a-node04    192.168.11.14   k8s worker, ceph osd   1 x 1TB disk
k8s-a-node05    192.168.11.15   k8s worker, ceph osd   1 x 1TB disk
k8s-a-node06    192.168.11.16   k8s worker, ceph osd   1 x 1TB disk
k8s-a-node07    192.168.11.17   k8s worker, ceph osd   1 x 1TB disk
k8s-a-node08    192.168.11.18   k8s worker, ceph osd   1 x 1TB disk
k8s-a-node09    192.168.11.19   k8s worker, ceph osd   1 x 1TB disk
k8s-a-node10    192.168.11.20   k8s worker, ceph osd   1 x 1TB disk

Preparation

All preparation steps are performed on every worker node.

  1. Prepare the data disks. Apart from sda, which holds the operating system, sdb is reserved for the OSD and must not be formatted with a filesystem:
[root@k8s-a-node01 ~]# lsblk
sda               8:0    0  500G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0  498G  0 part
  ├─centos-root 253:0    0  122G  0 lvm  /
...
sdb               8:16   0 1000G  0 disk   # this is the one
  2. Install lvm2. Ceph OSDs depend on LVM (Logical Volume Manager) when they are created on raw devices or unused partitions:
yum install -y lvm2
  3. Load the rbd module. Ceph requires a Linux kernel with the RBD module. Many distributions already ship it, but not all, so check (an optional way to make the module persistent is sketched after the commands):
modprobe rbd
lsmod | grep rbd
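
Optionally, to keep the module loaded across reboots, you can have systemd load it at boot. A minimal sketch (the file name is my own choice; systemd-modules-load reads any *.conf under /etc/modules-load.d/):

# Load rbd on every boot via systemd-modules-load (standard on CentOS 7)
echo rbd > /etc/modules-load.d/rbd.conf
# Verify without rebooting
modprobe rbd && lsmod | grep rbd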

Rook's default RBD configuration enables only the layering feature, to stay compatible with older kernels. If your Kubernetes nodes run kernel 5.4 or newer, you can enable additional feature flags; fast-diff and object-map are particularly useful.
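
For reference, a sketch of where that choice ends up: in Rook's example RBD StorageClass (deploy/examples/csi/rbd/storageclass.yaml) the image features are set through the imageFeatures parameter. The extended list below is my own assumption for a 5.4+ kernel; check the Rook documentation before enabling it:

# Excerpt only -- the full StorageClass is covered in the follow-up post on block storage
cat <<'EOF' > rbd-imagefeatures-snippet.yaml
parameters:
  imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock
EOF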

  4. Upgrade the kernel (optional). If you will use the Ceph shared filesystem (CephFS) to create volumes, the Rook docs recommend a Linux kernel of at least 4.17. On kernels older than 4.17, the requested Persistent Volume Claim (PVC) size is not enforced; storage quotas are only enforced on newer kernels.

In Kubernetes, a PVC requests a specific amount of storage from the storage system. If the requested PVC size cannot be enforced, there is no guarantee about the amount of storage actually granted. Because quotas are not enforced on older kernels, CephFS volumes created on an old kernel may not be sized and managed correctly, hence the Rook recommendation of kernel 4.17 or newer.
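
Purely for illustration, this is the kind of request the quota discussion is about. The StorageClass name rook-cephfs is hypothetical here; it only gets created in the follow-up post on shared filesystems:

# Illustrative PVC: on kernels >= 4.17 the 10Gi request below is actually enforced for CephFS
cat <<'EOF' > cephfs-pvc-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-cephfs   # hypothetical StorageClass name
EOF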

I upgraded the kernel on all k8s nodes. I had not planned for this beforehand; next time the kernel should be upgraded before the k8s cluster is built, because upgrading it under a running cluster makes it hard to guarantee there is no direct impact on the cluster. Be careful: do not do it this way in production; plan and prepare in advance.

My current CentOS 7 kernel version:

[root@k8s-a-node01 ~]# uname -r
3.10.0-1160.el7.x86_64

Start the upgrade:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml-headers kernel-ml -y
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
uname -sr
# After the upgrade, check the kernel again -- it now meets the requirement
[root@k8s-a-node10 ~]# uname -sr
Linux 6.2.9-1.el7.elrepo.x86_64
# Check that k8s is still healthy -- mine was fine, everything went smoothly
kubectl get nodes
kubectl get pod -n kube-system

Deploying the Ceph cluster

  1. Deploy the Rook Operator
git clone --single-branch --branch v1.11.2 https://github.com/rook/rook.git
cd rook/deploy/examples/
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# Before continuing, verify that rook-ceph-operator is running
kubectl get deployment -n rook-ceph
kubectl get pod -n rook-ceph
  2. Create the Ceph cluster

Now that the Rook Operator is running, we can create the Ceph cluster.

Rook lets you create and customize a storage cluster through a custom resource definition (the CephCluster CRD). There are four main modes for creating a cluster; for the modes and for all available fields in cluster.yaml and what they mean, read the reference carefully: https://rook.io/docs/rook/v1.11/CRDs/Cluster/ceph-cluster-crd/

This cluster is created from the official example cluster.yaml as-is, with nothing changed; it will create one OSD on each node (the fields that drive this behaviour are excerpted below).
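
For orientation, the parts of the stock cluster.yaml that produce this layout look roughly like the excerpt below. The field names come from the CephCluster CRD; verify the values against the copy you cloned:

# Excerpt only, not a complete manifest
cat <<'EOF' > cluster-excerpt.yaml
spec:
  mon:
    count: 3              # three monitors, as seen in the pod listing later
  dashboard:
    enabled: true         # the dashboard exposed further down
  storage:
    useAllNodes: true     # place OSDs on every schedulable node
    useAllDevices: true   # consume every empty, unformatted device (sdb in this setup)
EOF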

kubectl create -f cluster.yaml

After creating it, check the pods: make sure all pods are Running and the rook-ceph-osd-prepare pods are Completed:

[root@k8s-a-master ~]# kubectl get pod -n rook-ceph
NAME                                                     READY   STATUS      RESTARTS       AGE
csi-cephfsplugin-4cx9v                                   2/2     Running     0              45m
csi-cephfsplugin-84fd4                                   2/2     Running     0              45m
csi-cephfsplugin-dnx6l                                   2/2     Running     0              45m
csi-cephfsplugin-gb4f7                                   2/2     Running     0              45m
csi-cephfsplugin-mtd67                                   2/2     Running     0              45m
csi-cephfsplugin-nrnck                                   2/2     Running     0              45m
csi-cephfsplugin-p5kmv                                   2/2     Running     0              45m
csi-cephfsplugin-provisioner-9f7449fc9-9qf67             5/5     Running     0              45m
csi-cephfsplugin-provisioner-9f7449fc9-k2cxc             5/5     Running     0              45m
csi-cephfsplugin-vhfr7                                   2/2     Running     0              45m
csi-cephfsplugin-wrgrb                                   2/2     Running     0              45m
csi-cephfsplugin-xt6f6                                   2/2     Running     0              45m
csi-rbdplugin-5mnb8                                      2/2     Running     0              45m
csi-rbdplugin-9j69s                                      2/2     Running     0              45m
csi-rbdplugin-ffg9b                                      2/2     Running     0              45m
csi-rbdplugin-ggfxk                                      2/2     Running     0              45m
csi-rbdplugin-h5gj7                                      2/2     Running     0              45m
csi-rbdplugin-ks7db                                      2/2     Running     0              45m
csi-rbdplugin-l2kmv                                      2/2     Running     0              45m
csi-rbdplugin-l5jm2                                      2/2     Running     0              45m
csi-rbdplugin-m7nc4                                      2/2     Running     0              45m
csi-rbdplugin-provisioner-649c57b978-m5pdk               5/5     Running     0              45m
csi-rbdplugin-provisioner-649c57b978-trp8j               5/5     Running     0              45m
csi-rbdplugin-szchk                                      2/2     Running     0              45m
rook-ceph-crashcollector-k8s-a-node01-6db7865746-clr4h   1/1     Running     0              50m
rook-ceph-crashcollector-k8s-a-node02-97bb5864c-wqbx4    1/1     Running     0              37m
rook-ceph-crashcollector-k8s-a-node03-59777d587b-gkr24   1/1     Running     0              41m
rook-ceph-crashcollector-k8s-a-node04-7f77bbdcfc-qgh7x   1/1     Running     0              45m
rook-ceph-crashcollector-k8s-a-node05-7cd59b4bc8-nkfgt   1/1     Running     0              45m
rook-ceph-crashcollector-k8s-a-node06-74dc6dc879-n9z5z   1/1     Running     0              45m
rook-ceph-crashcollector-k8s-a-node07-5f4c5c6c56-27zs9   1/1     Running     0              50m
rook-ceph-crashcollector-k8s-a-node08-7fd7ff557-5qthx    1/1     Running     0              39m
rook-ceph-crashcollector-k8s-a-node09-6b7d5645f4-fdpwp   1/1     Running     0              50m
rook-ceph-crashcollector-k8s-a-node10-69f8f5cbfb-6mhtw   1/1     Running     0              33m
rook-ceph-mgr-a-86f445fdcc-vdg4b                         3/3     Running     0              50m
rook-ceph-mgr-b-7b7967fd54-wjhxw                         3/3     Running     0              50m
rook-ceph-mon-a-6dd564d54d-cq4rb                         2/2     Running     0              52m
rook-ceph-mon-b-5c4875d7b9-xbd7l                         2/2     Running     0              51m
rook-ceph-mon-c-7fb466569d-qwb6v                         2/2     Running     0              51m
rook-ceph-operator-c489cccb5-8dtnz                       1/1     Running     1 (116m ago)   14h
rook-ceph-osd-0-f7cc57ddf-vc9tg                          2/2     Running     0              50m
rook-ceph-osd-1-7d79b4d4c-rbrkn                          2/2     Running     0              50m
rook-ceph-osd-2-564d78d445-jpjqk                         2/2     Running     0              50m
rook-ceph-osd-3-7ddf87bfd6-dlvmf                         2/2     Running     0              45m
rook-ceph-osd-4-849ddc4b68-5xg8h                         2/2     Running     0              45m
rook-ceph-osd-5-5ff57d4d55-skwv6                         2/2     Running     0              45m
rook-ceph-osd-6-668fcb969c-cwb46                         2/2     Running     0              41m
rook-ceph-osd-7-7876687b4d-r4b2g                         2/2     Running     0              39m
rook-ceph-osd-8-6c7784c4c-6xtv6                          2/2     Running     0              37m
rook-ceph-osd-9-654ffc5b66-qqhnr                         2/2     Running     0              33m
rook-ceph-osd-prepare-k8s-a-node01-6lf6r                 0/1     Completed   0              33m
rook-ceph-osd-prepare-k8s-a-node02-8rcpw                 0/1     Completed   0              33m
rook-ceph-osd-prepare-k8s-a-node03-dqcl4                 0/1     Completed   0              33m
rook-ceph-osd-prepare-k8s-a-node04-8qtmq                 0/1     Completed   0              33m
rook-ceph-osd-prepare-k8s-a-node05-qm78x                 0/1     Completed   0              33m
rook-ceph-osd-prepare-k8s-a-node06-qq58n                 0/1     Completed   0              33m
rook-ceph-osd-prepare-k8s-a-node07-lp9j8                 0/1     Completed   0              33m
rook-ceph-osd-prepare-k8s-a-node08-vh4sn                 0/1     Completed   0              33m
rook-ceph-osd-prepare-k8s-a-node09-smvp8                 0/1     Completed   0              32m
rook-ceph-osd-prepare-k8s-a-node10-k6ggk                 0/1     Completed   0              32m
rook-ceph-tools-54bdbfc7b7-hqdw4                         1/1     Running     0              24m
[root@k8s-a-master ~]#
  3. Verify the Ceph cluster

To verify that the cluster is in a healthy state, connect to the toolbox and run commands:

kubectl create -f toolbox.yaml
kubectl exec -it rook-ceph-tools-54bdbfc7b7-hqdw4 -n rook-ceph -- bash
# Once inside you can run the usual commands:
ceph status
ceph osd status
ceph df
rados df

# Here is mine:
[root@k8s-a-master examples]# kubectl exec -it rook-ceph-tools-54bdbfc7b7-pzkd4 -n rook-ceph -- bash
bash-4.4$ ceph -s
  cluster:
    id:     0b64957b-9aaa-43ef-9946-211ca911dc92
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 3m)
    mgr: a(active, since 64s), standbys: b
    osd: 10 osds: 10 up (since 116s), 10 in (since 2m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   234 MiB used, 9.8 TiB / 9.8 TiB avail
    pgs:     1 active+clean

bash-4.4$ ceph osd status
ID  HOST           USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  k8s-a-node02  23.9M   999G      0        0       0        0   exists,up
 1  k8s-a-node03  20.9M   999G      0        0       0        0   exists,up
 2  k8s-a-node04  24.9M   999G      0        0       0        0   exists,up
 3  k8s-a-node06  24.9M   999G      0        0       0        0   exists,up
 4  k8s-a-node05  23.8M   999G      0        0       0        0   exists,up
 5  k8s-a-node07  23.8M   999G      0        0       0        0   exists,up
 6  k8s-a-node08  19.8M   999G      0        0       0        0   exists,up
 7  k8s-a-node10  23.8M   999G      0        0       0        0   exists,up
 8  k8s-a-node01  23.8M   999G      0        0       0        0   exists,up
 9  k8s-a-node09  23.7M   999G      0        0       0        0   exists,up

Each OSD has a state, which can be one of the following:

  • up: the OSD is running normally and can serve data requests.
  • down: the OSD is currently not running or is unreachable, possibly because of a hardware or software failure.
  • out: the OSD is not taking part in data placement or recovery, usually because an administrator has marked it out.
  • exists: the OSD's configuration exists, but it has not yet started or joined the cluster.

In my output, exists,up means the OSD's configuration exists and the OSD has started successfully and is taking part in data placement and recovery. exists on its own means the configuration is there but the OSD is not running yet; the state only becomes up once the OSD is running. So exists,up means the OSD has finished starting and is operating normally.
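
If an OSD ever sits in one of the other states, a few read-only commands inside the toolbox help map it back to a host and device (standard Ceph CLI; osd.0 below is only an example id):

ceph osd tree                 # CRUSH tree: which host each OSD sits on, plus up/down and weight
ceph osd df                   # per-OSD capacity usage and PG counts
ceph osd dump | grep osd.0    # full state flags of a single OSD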

Exposing the dashboard

The dashboard lets us see the state of the Ceph cluster: overall health, mon quorum status, the status of mgr, osd and the other Ceph daemons, pool and PG status, daemon logs and more. For more dashboard configuration options, see: https://rook.io/docs/rook/v1.11/Storage-Configuration/Monitor...

  1. Check the dashboard Service. Rook enables port 8443 for HTTPS access:
[root@k8s-a-master ~]# kubectl get svc -n rook-ceph
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
rook-ceph-mgr             ClusterIP   10.103.90.1      <none>        9283/TCP            52m
rook-ceph-mgr-dashboard   ClusterIP   10.108.66.238    <none>        8443/TCP            52m
rook-ceph-mon-a           ClusterIP   10.104.191.13    <none>        6789/TCP,3300/TCP   54m
rook-ceph-mon-b           ClusterIP   10.108.194.172   <none>        6789/TCP,3300/TCP   53m
rook-ceph-mon-c           ClusterIP   10.98.32.80      <none>        6789/TCP,3300/TCP   53m
[root@k8s-a-master ~]#
  • rook-ceph-mgr: the Ceph Manager daemon, responsible for cluster monitoring, status reporting, analytics, scheduling and similar functions. It listens on port 9283 by default and exposes Prometheus-format metrics, which Prometheus can scrape for cluster monitoring (a quick check is sketched after this list).
  • rook-ceph-mgr-dashboard: the web UI provided through Rook for conveniently viewing the Ceph cluster's monitoring data, status and performance metrics.
  • rook-ceph-mon: the Kubernetes Service for the Ceph Monitor daemons. The monitors are a core component of a Ceph cluster; they maintain the cluster's state, topology and data distribution and act as its management nodes.
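
As a quick sanity check of the metrics endpoint mentioned above, you can port-forward the mgr Service and curl it from the master node. A minimal sketch, assuming the mgr's Prometheus module is enabled (the Service exposing 9283 suggests it is):

kubectl -n rook-ceph port-forward svc/rook-ceph-mgr 9283:9283 &
curl -s http://127.0.0.1:9283/metrics | head
kill %1   # stop the port-forward when done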
  2. Get the password of the default admin account
[root@k8s-a-master examples]# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
$}<EVv58G-*C3@/?J1@t

This is the password you will use to log in shortly.

  3. Expose the dashboard with a NodePort Service, dashboard-external-https.yaml (created with the command shown after the manifest):
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
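
Save the manifest above as dashboard-external-https.yaml (the same file also ships in rook/deploy/examples/) and create it:

kubectl create -f dashboard-external-https.yaml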
  4. After creating the NodePort Service, check the Services again:
[root@k8s-a-master examples]# kubectl get svc -n rook-ceph
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
rook-ceph-mgr                             ClusterIP   10.96.58.93      <none>        9283/TCP            28m
rook-ceph-mgr-dashboard                   ClusterIP   10.103.131.77    <none>        8443/TCP            28m    # remember to delete this one
rook-ceph-mgr-dashboard-external-https    NodePort    10.107.247.53    <none>        8443:32564/TCP      9m38s
rook-ceph-mon-a                           ClusterIP   10.98.168.35     <none>        6789/TCP,3300/TCP   29m
rook-ceph-mon-b                           ClusterIP   10.103.127.134   <none>        6789/TCP,3300/TCP   29m
rook-ceph-mon-c                           ClusterIP   10.111.46.187    <none>        6789/TCP,3300/TCP   29m

The ClusterIP Service rook-ceph-mgr-dashboard is created by default when the cluster is deployed; delete it, otherwise it gets in the way of access through the NodePort Service.

  5. Access the dashboard from any node (see the example below).
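
For example, with the NodePort 32564 assigned above, the dashboard is reachable through any node IP; the certificate is self-signed, so the browser will warn and curl needs -k. A quick reachability check (node01's IP is used here just as an example):

curl -k -I https://192.168.11.11:32564
# then open the same URL in a browser and log in as admin with the password read from the secret earlier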

Cleaning up the cluster

If you no longer want the cluster, tearing it down is simple. I am short on time so I will not walk through it here; see the official docs: https://rook.io/docs/rook/v1.11/Getting-Started/ceph-teardown... I will also dedicate a separate post to it later.

Closing thoughts

Hands-on posts for the three storage types will follow next:

  • Block: create block storage to be consumed by a Pod (RWO)
  • Shared filesystem: create a filesystem to be shared across multiple Pods (RWX)
  • Object: create an object store that can be accessed from inside or outside the Kubernetes cluster

PS: I will keep sharing content on k8s, Rook, Ceph and more. Thanks for following along; if you reproduce these steps and hit a problem you cannot solve, feel free to message me and I will do my best to help.

Reposted from the WeChat public account 不背锅运维 (follow us if you like this kind of content): https://mp.weixin.qq.com/s/gBErT26UikPBdsot9_Tjqg