
IOMesh Installation

Official documentation:

Introduction · Documentation

Also published on my personal website: www.etaon.top

Lab topology:

A single network port can be used for testing, but the official documentation recommends giving IOMesh a separate storage network, the 10.234.1.0/24 subnet in the diagram below:

The lab uses bare-metal servers with the following configuration:

Component     Model / Spec                               Qty   Notes
CPU           Intel(R) Xeon(R) Silver 4214R @ 2.40GHz    2     Nodes 1, 3
              Intel(R) Xeon(R) Gold 6226R @ 2.90GHz      1     Node 2
Memory        256 GB                                     3     Nodes 1 / 2 / 3
SSD           1.6 TB NVMe SSD                            2*3   Nodes 1 / 2 / 3
HDD           2.4 TB SAS                                 4*3   Nodes 1 / 2 / 3
DOM disk      M.2 240 GB                                 2     Nodes 1 / 2 / 3
GbE NIC       I350                                       2     Nodes 1 / 2 / 3
Storage NIC   10/25G                                     2     Nodes 1 / 2 / 3
  • Deploying IOMesh requires at least one cache disk and one partition disk.

Installation steps:

Create a Kubernetes Cluster

Kubernetes 1.24 is used, with containerd as the runtime.
For installation steps, see Install Kubernetes 1.24 – 路无止境! (etaon.top)

Pre-installation Preparation

Perform the following steps on all worker nodes.

  1. Install open-iscsi (Ubuntu ships with it preinstalled):
    apt install open-iscsi -y

  2. Edit the iscsi configuration file:
    sudo sed -i 's/^node.startup = automatic$/node.startup = manual/' /etc/iscsi/iscsid.conf

  3. Make sure the iscsi_tcp module is loaded:
    sudo modprobe iscsi_tcp
    sudo bash -c 'echo iscsi_tcp > /etc/modprobe.d/iscsi-tcp.conf'

  4. Enable and start the iscsid service:
    systemctl enable --now iscsid
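
A quick sanity check on each worker node confirms the prerequisites took effect (optional; expected outputs noted inline):

systemctl is-active iscsid                   # expect: active
lsmod | grep iscsi_tcp                       # the module should be listed
grep '^node.startup' /etc/iscsi/iscsid.conf  # expect: node.startup = manual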

Install IOMesh Offline

Installation documentation:

Install IOMesh · Documentation

  1. Download the offline installation package and upload it to all nodes where IOMesh will be deployed, including the master node.
# These nodes usually mean the worker nodes, and you need at least 3 of them.
# If there are not enough workers and you want the master nodes to join, run on the master:
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
  2. Extract the uploaded offline package:
    tar -xvf iomesh-offline-v0.11.1.tgz && cd iomesh-offline

  3. Load the image files. With containerd as the runtime, run:
    ctr --namespace k8s.io image import ./images/iomesh-offline-images.tar
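
Optionally confirm the images landed in containerd's k8s.io namespace:

ctr --namespace k8s.io images ls | grep iomesh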

  4. On the master node (or the API server access point), generate the IOMesh values file:
    ./helm show values charts/iomesh > iomesh.yaml

  5. Edit the iomesh.yaml file, making sure to change dataCIDR to the IOMesh storage network CIDR:
...
iomesh:
  chunk:
    dataCIDR: "10.234.1.0/24" # change to your own data network CIDR

  6. Install the IOMesh cluster from the master node:
./helm install iomesh ./charts/iomesh \
    --create-namespace \
    --namespace iomesh-system \
    --values iomesh.yaml \
    --wait
# Expected output
NAME: iomesh
LAST DEPLOYED: Tue Dec 27 09:38:56 2022
NAMESPACE: iomesh-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

  7. Pod status after a successful installation.
    Watching the pod creation process:
root@cp01:~/iomesh-offline# kubectl get po -n iomesh-system -w
NAME                                                   READY   STATUS    RESTARTS   AGE
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   6/6     Running   0          24s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   6/6     Running   0          24s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   6/6     Running   0          24s
iomesh-csi-driver-node-plugin-gr8bm                    3/3     Running   0          24s
iomesh-csi-driver-node-plugin-kshdt                    3/3     Running   0          24s
iomesh-csi-driver-node-plugin-xxhhx                    3/3     Running   0          24s
iomesh-hostpath-provisioner-59v28                      1/1     Running   0          24s
iomesh-hostpath-provisioner-79dgh                      1/1     Running   0          24s
iomesh-hostpath-provisioner-cknk8                      1/1     Running   0          24s
iomesh-openebs-ndm-7vdkm                               1/1     Running   0          24s
iomesh-openebs-ndm-cluster-exporter-75f568df84-dvz4g   1/1     Running   0          24s
iomesh-openebs-ndm-hctjc                               1/1     Running   0          24s
iomesh-openebs-ndm-node-exporter-f59t5                 1/1     Running   0          24s
iomesh-openebs-ndm-node-exporter-l48dj                 1/1     Running   0          24s
iomesh-openebs-ndm-node-exporter-sxgjn                 1/1     Running   0          24s
iomesh-openebs-ndm-operator-7d58d8fbc8-xxwdt           1/1     Running   0          24s
iomesh-openebs-ndm-x64pj                               1/1     Running   0          24s
iomesh-zookeeper-0                                     0/1     Running   0          18s
iomesh-zookeeper-operator-f5588b6d7-lrj6n              1/1     Running   0          24s
operator-765dd9678f-95rcw                              1/1     Running   0          24s
operator-765dd9678f-ns2gs                              1/1     Running   0          24s
operator-765dd9678f-q8pwn                              1/1     Running   0          24s
iomesh-zookeeper-0                                     1/1     Running   0          23s
iomesh-zookeeper-1                                     0/1     Pending   0          0s
iomesh-zookeeper-1                                     0/1     Pending   0          1s
iomesh-zookeeper-1                                     0/1     ContainerCreating   0          1s
iomesh-zookeeper-1                                     0/1     ContainerCreating   0          1s
iomesh-zookeeper-1                                     0/1     Running             0          2s
iomesh-csi-driver-node-plugin-gr8bm                    3/3     Running             1 (0s ago)   50s
iomesh-csi-driver-node-plugin-kshdt                    3/3     Running             1 (0s ago)   50s
iomesh-csi-driver-node-plugin-xxhhx                    3/3     Running             1 (0s ago)   50s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   2/6     Error               1 (1s ago)   53s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   2/6     Error               1 (1s ago)   53s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   2/6     Error               1 (1s ago)   54s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   6/6     Running             5 (1s ago)   54s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   6/6     Running             5 (2s ago)   54s
iomesh-zookeeper-1                                     1/1     Running             0            26s
iomesh-zookeeper-2                                     0/1     Pending             0            0s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   6/6     Running             5 (2s ago)   55s
iomesh-zookeeper-2                                     0/1     Pending             0            1s
iomesh-zookeeper-2                                     0/1     ContainerCreating   0            1s
iomesh-zookeeper-2                                     0/1     ContainerCreating   0            1s
iomesh-zookeeper-2                                     0/1     Running             0            2s
iomesh-zookeeper-2                                     1/1     Running             0            26s
iomesh-meta-0                                          0/2     Pending             0            0s
iomesh-meta-1                                          0/2     Pending             0            0s
iomesh-meta-2                                          0/2     Pending             0            0s
iomesh-meta-0                                          0/2     Pending             0            1s
iomesh-meta-1                                          0/2     Pending             0            1s
iomesh-meta-1                                          0/2     Init:0/1            0            1s
iomesh-meta-0                                          0/2     Init:0/1            0            1s
iomesh-meta-2                                          0/2     Pending             0            1s
iomesh-meta-2                                          0/2     Init:0/1            0            1s
iomesh-iscsi-redirector-jj2qm                          0/2     Pending             0            0s
iomesh-iscsi-redirector-jj2qm                          0/2     Pending             0            0s
iomesh-iscsi-redirector-9crx8                          0/2     Pending             0            0s
iomesh-iscsi-redirector-zlhtk                          0/2     Pending             0            0s
iomesh-iscsi-redirector-9crx8                          0/2     Pending             0            0s
iomesh-iscsi-redirector-zlhtk                          0/2     Pending             0            0s
iomesh-iscsi-redirector-zlhtk                          0/2     Init:0/1            0            0s
iomesh-iscsi-redirector-9crx8                          0/2     Init:0/1            0            0s
iomesh-chunk-0                                         0/3     Pending             0            0s
iomesh-meta-0                                          0/2     Init:0/1            0            2s
iomesh-meta-1                                          0/2     Init:0/1            0            2s
iomesh-meta-2                                          0/2     Init:0/1            0            2s
iomesh-iscsi-redirector-zlhtk                          0/2     Init:0/1            0            0s
iomesh-iscsi-redirector-jj2qm                          0/2     Init:0/1            0            0s
iomesh-chunk-0                                         0/3     Pending             0            1s
iomesh-meta-1                                          0/2     Init:0/1            0            3s
iomesh-meta-2                                          0/2     PodInitializing     0            3s
iomesh-iscsi-redirector-9crx8                          0/2     PodInitializing     0            1s
iomesh-chunk-0                                         0/3     Init:0/1            0            1s
iomesh-iscsi-redirector-zlhtk                          0/2     PodInitializing     0            2s
iomesh-iscsi-redirector-9crx8                          1/2     Running             0            2s
iomesh-meta-0                                          0/2     PodInitializing     0            4s
iomesh-meta-1                                          0/2     PodInitializing     0            4s
iomesh-meta-2                                          1/2     Running             0            4s
iomesh-meta-1                                          1/2     Running             0            5s
iomesh-iscsi-redirector-jj2qm                          0/2     PodInitializing     0            3s
iomesh-iscsi-redirector-9crx8                          1/2     Error               0            3s
iomesh-iscsi-redirector-zlhtk                          1/2     Running             0            4s
iomesh-meta-0                                          1/2     Running             0            6s
iomesh-iscsi-redirector-zlhtk                          1/2     Error               0            4s
iomesh-iscsi-redirector-zlhtk                          1/2     Running             1 (3s ago)   5s
iomesh-meta-0                                          1/2     Running             0            7s
iomesh-iscsi-redirector-9crx8                          1/2     Running             1 (2s ago)   5s
iomesh-iscsi-redirector-jj2qm                          1/2     Running             0            5s
iomesh-iscsi-redirector-9crx8                          1/2     Error               1 (2s ago)   5s
iomesh-iscsi-redirector-zlhtk                          1/2     Error               1 (3s ago)   5s
iomesh-iscsi-redirector-jj2qm                          1/2     Error               0            6s
iomesh-iscsi-redirector-9crx8                          1/2     CrashLoopBackOff    1 (1s ago)   6s
iomesh-iscsi-redirector-zlhtk                          1/2     CrashLoopBackOff    1 (2s ago)   6s
iomesh-chunk-0                                         0/3     PodInitializing     0            7s
iomesh-chunk-0                                         3/3     Running             0            7s
iomesh-chunk-1                                         0/3     Pending             0            0s
iomesh-iscsi-redirector-jj2qm                          1/2     Running             1 (5s ago)   8s
iomesh-chunk-1                                         0/3     Pending             0            1s
iomesh-chunk-1                                         0/3     Init:0/1            0            1s
iomesh-chunk-1                                         0/3     Init:0/1            0            2s
iomesh-meta-0                                          2/2     Running             0            12s
iomesh-meta-1                                          2/2     Running             0            12s
iomesh-meta-2                                          2/2     Running             0            12s
iomesh-chunk-0                                         2/3     Error               0            11s
iomesh-chunk-1                                         0/3     PodInitializing     0            4s
iomesh-chunk-0                                         3/3     Running             1 (2s ago)   12s
iomesh-chunk-1                                         3/3     Running             0            5s
iomesh-chunk-2                                         0/3     Pending             0            0s
iomesh-chunk-2                                         0/3     Pending             0            1s
iomesh-chunk-2                                         0/3     Init:0/1            0            1s
iomesh-chunk-2                                         0/3     Init:0/1            0            2s
iomesh-chunk-2                                         0/3     PodInitializing     0            3s
iomesh-iscsi-redirector-jj2qm                          2/2     Running             1 (13s ago)   16s
iomesh-csi-driver-node-plugin-gr8bm                    3/3     Running             2 (0s ago)    100s
iomesh-chunk-2                                         3/3     Running             0             4s
iomesh-csi-driver-node-plugin-xxhhx                    3/3     Running             2 (0s ago)    100s
iomesh-csi-driver-node-plugin-kshdt                    3/3     Running             2 (1s ago)    101s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   2/6     Error               6 (0s ago)    103s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   2/6     Error               6 (1s ago)    103s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   2/6     Error               6 (1s ago)    103s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   2/6     CrashLoopBackOff    6 (1s ago)    104s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   2/6     CrashLoopBackOff    6 (2s ago)    104s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   2/6     CrashLoopBackOff    6 (2s ago)    104s
iomesh-iscsi-redirector-9crx8                          1/2     Running             2 (18s ago)   23s
iomesh-iscsi-redirector-zlhtk                          1/2     Running             2 (19s ago)   23s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   6/6     Running             10 (14s ago)   116s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   6/6     Running             10 (14s ago)   117s
iomesh-iscsi-redirector-zlhtk                          2/2     Running             2 (31s ago)    35s
iomesh-iscsi-redirector-9crx8                          2/2     Running             2 (30s ago)    35s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   6/6     Running             10 (18s ago)   2m
root@cp01:~/iomesh-offline# kubectl get po -n iomesh-system -owide
NAME                                                   READY   STATUS    RESTARTS         AGE     IP               NODE       NOMINATED NODE   READINESS GATES
iomesh-chunk-0                                         3/3     Running   0                7m27s   192.168.80.23    worker02   <none>           <none>
iomesh-chunk-1                                         3/3     Running   0                7m20s   192.168.80.22    worker01   <none>           <none>
iomesh-chunk-2                                         3/3     Running   0                7m16s   192.168.80.21    cp01       <none>           <none>
iomesh-csi-driver-controller-plugin-6887b8d974-4cjlq   6/6     Running   10 (7m12s ago)   9m10s   192.168.80.22    worker01   <none>           <none>
iomesh-csi-driver-controller-plugin-6887b8d974-d2d4r   6/6     Running   44 (11m ago)     35m     192.168.80.23    worker02   <none>           <none>
iomesh-csi-driver-controller-plugin-6887b8d974-j2rks   6/6     Running   44 (11m ago)     35m     192.168.80.21    cp01       <none>           <none>
iomesh-csi-driver-node-plugin-95f5z                    3/3     Running   14 (7m15s ago)   35m     192.168.80.22    worker01   <none>           <none>
iomesh-csi-driver-node-plugin-dftlv                    3/3     Running   12 (11m ago)     35m     192.168.80.21    cp01       <none>           <none>
iomesh-csi-driver-node-plugin-s549x                    3/3     Running   12 (11m ago)     35m     192.168.80.23    worker02   <none>           <none>
iomesh-hostpath-provisioner-8jvs8                      1/1     Running   1 (8m59s ago)    35m     10.211.5.10      worker01   <none>           <none>
iomesh-hostpath-provisioner-rkkrs                      1/1     Running   0                35m     10.211.30.71     worker02   <none>           <none>
iomesh-hostpath-provisioner-rmk4z                      1/1     Running   0                35m     10.211.214.131   cp01       <none>           <none>
iomesh-iscsi-redirector-6g2fk                          2/2     Running   2 (7m25s ago)    7m30s   192.168.80.22    worker01   <none>           <none>
iomesh-iscsi-redirector-cxnbq                          2/2     Running   2 (7m25s ago)    7m30s   192.168.80.23    worker02   <none>           <none>
iomesh-iscsi-redirector-wnglv                          2/2     Running   2 (7m24s ago)    7m30s   192.168.80.21    cp01       <none>           <none>
iomesh-meta-0                                          2/2     Running   0                7m29s   10.211.30.76     worker02   <none>           <none>
iomesh-meta-1                                          2/2     Running   0                7m29s   10.211.5.11      worker01   <none>           <none>
iomesh-meta-2                                          2/2     Running   0                7m29s   10.211.214.135   cp01       <none>           <none>
iomesh-openebs-ndm-55lgk                               1/1     Running   1 (8m59s ago)    35m     192.168.80.22    worker01   <none>           <none>
iomesh-openebs-ndm-cluster-exporter-75f568df84-mrd6b   1/1     Running   0                35m     10.211.30.70     worker02   <none>           <none>
iomesh-openebs-ndm-k7dxn                               1/1     Running   0                35m     192.168.80.23    worker02   <none>           <none>
iomesh-openebs-ndm-m5p2k                               1/1     Running   0                35m     192.168.80.21    cp01       <none>           <none>
iomesh-openebs-ndm-node-exporter-2stn9                 1/1     Running   0                35m     10.211.30.72     worker02   <none>           <none>
iomesh-openebs-ndm-node-exporter-ccfr4                 1/1     Running   0                35m     10.211.214.129   cp01       <none>           <none>
iomesh-openebs-ndm-node-exporter-jdzdv                 1/1     Running   1 (8m59s ago)    35m     10.211.5.8       worker01   <none>           <none>
iomesh-openebs-ndm-operator-7d58d8fbc8-plkzz           1/1     Running   0                9m10s   10.211.30.75     worker02   <none>           <none>
iomesh-zookeeper-0                                     1/1     Running   0                35m     10.211.30.73     worker02   <none>           <none>
iomesh-zookeeper-1                                     1/1     Running   0                8m14s   10.211.5.9       worker01   <none>           <none>
iomesh-zookeeper-2                                     1/1     Running   0                7m59s   10.211.214.134   cp01       <none>           <none>
iomesh-zookeeper-operator-f5588b6d7-wknqk              1/1     Running   0                9m10s   10.211.30.74     worker02   <none>           <none>
operator-765dd9678f-6x5jf                              1/1     Running   0                35m     10.211.30.68     worker02   <none>           <none>
operator-765dd9678f-h5mzh                              1/1     Running   0                35m     10.211.214.130   cp01       <none>           <none>
operator-765dd9678f-svff4                              1/1     Running   0                9m10s   10.211.214.133   cp01       <none>           <none>

Deploy IOMesh

Check Blockdevice Status

After the IOMesh installation completes, all blockdevices are in the Unclaimed state:

root@cp01:~# kubectl -n iomesh-system get blockdevice
NAME                                           NODENAME   SIZE            CLAIMSTATE   STATUS   AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Unclaimed    Active   15m
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Unclaimed    Active   15m
blockdevice-4f6835975745542e4a36b0548a573ef3   cp01       2400476553216   Unclaimed    Active   15m
blockdevice-5254ac6aa3528ba39dd3cb18663a1211   cp01       1600321314816   Unclaimed    Active   15m
blockdevice-5f40eb004d563e77febbde69d930880c   worker01   2400476553216   Unclaimed    Active   15m
blockdevice-6e9e9eaafee63535906f3f3b9ab35687   worker01   2400476553216   Unclaimed    Active   15m
blockdevice-73f728aca1ab2f74a833303fffc59301   worker01   2400476553216   Unclaimed    Active   15m
blockdevice-7401844e99d0b75f2543768a0464e16c   cp01       2400476553216   Unclaimed    Active   15m
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb   worker02   1600321314816   Unclaimed    Active   15m
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110   worker02   2400476553216   Unclaimed    Active   15m
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0   worker01   1600321314816   Unclaimed    Active   15m
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf   worker02   2400476553216   Unclaimed    Active   15m
blockdevice-d116cd6ee3b11542beee3737d03de26a   worker02   1600321314816   Unclaimed    Active   15m
blockdevice-d81eadce662fe037fa84d1ffa4e73a37   worker01   1600321314816   Unclaimed    Active   15m
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6   worker01   2400476553216   Unclaimed    Active   15m
blockdevice-de6e4ce89275cd799fdf88246c445399   cp01       2400476553216   Unclaimed    Active   15m
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1   worker02   2400476553216   Unclaimed    Active   15m
blockdevice-f505ed7231329ab53ebe26e8a00d4483   cp01       2400476553216   Unclaimed    Active   15m

Label the Disks (Optional)

Label all cache disks and data disks with the following commands so that they can later be claimed according to their role:

# Cache disks:
kubectl label blockdevice blockdevice-20cce4274e106429525f4a0ca7d192c2 -n iomesh-system iomesh-system/disk=SSD
# Data disks:
kubectl label blockdevice blockdevice-31a2642808ca091a75331b6c7a1f9f68 -n iomesh-system iomesh-system/disk=HDD
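
With 18 blockdevices, labeling them one at a time is tedious. Since NDM already attaches an iomesh.com/bd-driverType label to each device (visible in the blockdevice YAML below), one way to batch the job is a loop per driver type (a sketch; verify the detected types match your hardware first):

for t in SSD HDD; do
  for dev in $(kubectl -n iomesh-system get blockdevice \
      -l iomesh.com/bd-driverType=$t -o jsonpath='{.items[*].metadata.name}'); do
    kubectl -n iomesh-system label blockdevice "$dev" iomesh-system/disk=$t
  done
done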

Device Map (HybridFlash)

The chunk/deviceMap section of the iomesh.yaml file declares which disks IOMesh uses and as which disk type each one is mounted. In this document the label selector distinguishes disks by iomesh-system/disk=SSD or HDD.

You can also inspect a disk's existing labels and select one of the defaults:

root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system blockdevice-02e15a7c78768f42a1e552a3726cff89 -oyaml
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
  annotations:
    internal.openebs.io/uuid-scheme: gpt
  creationTimestamp: "2022-12-27T09:39:07Z"
  generation: 1
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    iomesh.com/bd-devicePath: dev.sdd
    iomesh.com/bd-deviceType: disk
    iomesh.com/bd-driverType: HDD
    iomesh.com/bd-model: DL2400MM0159
    iomesh.com/bd-serial: 5000c500e2476207
    iomesh.com/bd-vendor: SEAGATE
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: worker02
    kubernetes.io/os: linux
    ndm.io/blockdevice-type: blockdevice
    ndm.io/managed: "true"
    nodename: worker02
  name: blockdevice-02e15a7c78768f42a1e552a3726cff89
  namespace: iomesh-system
  resourceVersion: "3333"
  uid: 489546d4-ef39-4980-9fcd-b699a21940ec
......

If the environment contains disks that should not be used by IOMesh, exclude them with exclude.

Example, using the labels we set above:

...
    deviceMap:
      # cacheWithJournal:
      #   selector:
      #     matchLabels:
      #       iomesh.com/bd-deviceType: disk
      cacheWithJournal:
        selector:
          matchExpressions:
          - key: iomesh-system/disk
            operator: In
            values:
            - SSD
      dataStore:
        selector:
          matchExpressions:
          - key: iomesh-system/disk
            operator: In
            values:
            - HDD
      # exclude: blockdev-xxxx   ### blockdevices to exclude from IOMesh

Claim the Disks

After iomesh.yaml has been modified, claim the disks with:

./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
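
Claiming takes a moment; optionally watch the CLAIMSTATE column flip over:

kubectl -n iomesh-system get blockdevice -w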

Afterwards, verify that all disks IOMesh needs are in the Claimed state:

root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system
NAME                                           NODENAME   SIZE            CLAIMSTATE   STATUS   AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Claimed      Active   5h19m
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Claimed      Active   5h19m
blockdevice-4f6835975745542e4a36b0548a573ef3   cp01       2400476553216   Claimed      Active   5h19m
blockdevice-5254ac6aa3528ba39dd3cb18663a1211   cp01       1600321314816   Claimed      Active   5h19m
blockdevice-5f40eb004d563e77febbde69d930880c   worker01   2400476553216   Claimed      Active   5h19m
blockdevice-6e9e9eaafee63535906f3f3b9ab35687   worker01   2400476553216   Claimed      Active   5h19m
blockdevice-73f728aca1ab2f74a833303fffc59301   worker01   2400476553216   Claimed      Active   5h19m
blockdevice-7401844e99d0b75f2543768a0464e16c   cp01       2400476553216   Claimed      Active   5h19m
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb   worker02   1600321314816   Claimed      Active   5h19m
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110   worker02   2400476553216   Claimed      Active   5h19m
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0   worker01   1600321314816   Claimed      Active   5h19m
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf   worker02   2400476553216   Claimed      Active   5h19m
blockdevice-d116cd6ee3b11542beee3737d03de26a   worker02   1600321314816   Claimed      Active   5h19m
blockdevice-d81eadce662fe037fa84d1ffa4e73a37   worker01   1600321314816   Claimed      Active   5h19m
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6   worker01   2400476553216   Claimed      Active   5h19m
blockdevice-de6e4ce89275cd799fdf88246c445399   cp01       2400476553216   Claimed      Active   5h19m
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1   worker02   2400476553216   Claimed      Active   5h19m
blockdevice-f505ed7231329ab53ebe26e8a00d4483   cp01       2400476553216   Claimed      Active   5h19m

Create a StorageClass

Next comes creating a StorageClass.

Providing persistent storage to containers involves the concepts of PV, PVC, and StorageClass (SC). For a full description see the CNCF documentation; a plain-language summary:

  • A PV corresponds to a Volume in the storage cluster.
  • A PVC declares what kind of storage is needed. When a container needs persistent storage, the PVC serves as the interface between the container and a PV.
  • Defining a PV requires filling in many fields, which becomes tedious when a large deployment needs many PVs. Dynamic Provisioning simplifies this by creating PVs automatically: an administrator defines a StorageClass and deploys a PV provisioner; developers request storage through a PVC that names the desired StorageClass; the PVC passes the StorageClass to the provisioner, which creates the PV automatically.

After the IOMesh CSI driver is installed, a default StorageClass named iomesh-csi-driver is created. You can also create custom StorageClasses as needed:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iomesh-sc
provisioner: com.iomesh.csi-driver # <-- driver.name in iomesh-values.yaml
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  # "ext4" / "ext3" / "ext2" / "xfs"
  csi.storage.k8s.io/fstype: "ext4"
  # "2" / "3"  
    replicaFactor: "2"
  # "true" / "false"
  thinProvision: "true"
volumeBindingMode: Immediate

Example:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-default
provisioner: com.iomesh.csi-driver
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: "ext4"
  replicaFactor: "2"
  thinProvision: "true"
volumeBindingMode: Immediate
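
Save the manifest and apply it (the file name sc-default.yaml is arbitrary):

kubectl apply -f sc-default.yaml
kubectl get sc sc-default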

Note: reclaimPolicy can be set to Retain or Delete; the default is Delete. When a PV is created, its persistentVolumeReclaimPolicy inherits the StorageClass's reclaimPolicy.

The difference between the two:

Delete: when the PVC is deleted, the PV created through the PVC/StorageClass is deleted along with it.

Retain: when the PVC is deleted, the PV is kept and moves to the Released state; its data is preserved, and the volume must be manually reclaimed before it can be bound by another claim.
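
An existing PV's policy can also be flipped after the fact with a standard Kubernetes patch (not IOMesh-specific; <pv-name> is a placeholder):

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'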

root@cp01:~/iomesh-offline# kubectl get sc
NAME                PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
hostpath            kubevirt.io/hostpath-provisioner   Delete          WaitForFirstConsumer   false                  5h23m
iomesh-csi-driver   com.iomesh.csi-driver              Retain          Immediate              true                   5h23m
sc-default          com.iomesh.csi-driver              Delete          Immediate              true                   47m

Create a SnapshotClass

The Kubernetes VolumeSnapshotClass object is analogous to a StorageClass. Define it as follows:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: iomesh-csi-driver-default
driver: com.iomesh.csi-driver
deletionPolicy: Delete
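
With the class defined, a snapshot of a PVC is requested by creating a VolumeSnapshot that references both. A minimal sketch, assuming a PVC named pvc-test exists (one is created in the next section) and the snapshot CRDs/controller shipped with the IOMesh CSI deployment are present:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc-test-snapshot
spec:
  volumeSnapshotClassName: iomesh-csi-driver-default
  source:
    persistentVolumeClaimName: pvc-test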

Create a Volume

Before creating a PVC, make sure the StorageClass already exists.

Example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  storageClassName: sc-default    ## name of the StorageClass created earlier
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

The PVC above triggers creation of the corresponding PV. Once it is created, query the PVC:

root@cp01:~/iomesh-offline# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
iomesh-pvc-10g   Bound    pvc-ac1e1184-a9ce-417e-a7eb-a799cf199be8   10Gi       RWO            sc-default     44m

Query the PV created through the PVC:

root@cp01:~/iomesh-offline# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS   REASON   AGE
pvc-5900b5e5-00d4-42ad-821e-6d1199a7e3e3   217Gi      RWO            Delete           Bound    iomesh-system/coredump-iomesh-chunk-0   hostpath                4h55m
pvc-ac1e1184-a9ce-417e-a7eb-a799cf199be8   10Gi       RWO            Delete           Bound    default/iomesh-pvc-10g                  sc-default              44m
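
To exercise the volume, mount the PVC into a pod. A throwaway example (the pod name is arbitrary; the claim name matches the PVC shown above):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: iomesh-pvc-10g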

Check the Cluster's Storage Status

root@cp01:~/iomesh-offline# kubectl get iomeshclusters.iomesh.com -n iomesh-system -oyaml
apiVersion: v1
items:
- apiVersion: iomesh.com/v1alpha1
  kind: IOMeshCluster
  metadata:
    annotations:
      meta.helm.sh/release-name: iomesh
      meta.helm.sh/release-namespace: iomesh-system
    creationTimestamp: "2022-12-27T09:38:58Z"
    finalizers:
    - iomesh.com/iomesh-cluster-protection
    - iomesh.com/access-protection
    generation: 4
    labels:
      app.kubernetes.io/instance: iomesh
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: iomesh
      app.kubernetes.io/part-of: iomesh
      app.kubernetes.io/version: v5.1.2-rc14
      helm.sh/chart: iomesh-v0.11.1
    name: iomesh
    namespace: iomesh-system
    resourceVersion: "653267"
    uid: cd33bce1-9400-4914-aa32-75a13f07d13e
  spec:
    chunk:
      dataCIDR: 10.234.1.0/24
      deviceMap:
        cacheWithJournal:
          selector:
            matchExpressions:
            - key: iomesh-system/disk
              operator: In
              values:
              - SSD
        dataStore:
          selector:
            matchExpressions:
            - key: iomesh-system/disk
              operator: In
              values:
              - HDD
      devicemanager:
        image:
          pullPolicy: IfNotPresent
          repository: iomesh/operator-devicemanager
          tag: v0.11.1
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-chunkd
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    diskDeploymentMode: hybridFlash
    meta:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-metad
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    portOffset: -1
    probe:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-probe
        tag: v0.11.1
    reclaimPolicy:
      blockdevice: Delete
      volume: Delete
    redirector:
      dataCIDR: 10.234.1.0/24
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-iscsi-redirectord
        tag: v5.1.2-rc14
      resources: {}
    storageClass: hostpath
    toolbox:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-toolbox
        tag: v0.11.1
  status:
    attachedDevices:
    - device: blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0
      mountType: cacheWithJournal
      nodeName: worker01
    - device: blockdevice-02e15a7c78768f42a1e552a3726cff89
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-7401844e99d0b75f2543768a0464e16c
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-6e9e9eaafee63535906f3f3b9ab35687
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-5f40eb004d563e77febbde69d930880c
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-de6e4ce89275cd799fdf88246c445399
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-d81eadce662fe037fa84d1ffa4e73a37
      mountType: cacheWithJournal
      nodeName: worker01
    - device: blockdevice-d116cd6ee3b11542beee3737d03de26a
      mountType: cacheWithJournal
      nodeName: worker02
    - device: blockdevice-f505ed7231329ab53ebe26e8a00d4483
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-4f6835975745542e4a36b0548a573ef3
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-73f728aca1ab2f74a833303fffc59301
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-86ba281ed13da04c83b0d5efba7cbeeb
      mountType: cacheWithJournal
      nodeName: worker02
    - device: blockdevice-0c6a53344f61d5f4789c393acf459c83
      mountType: cacheWithJournal
      nodeName: cp01
    - device: blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-d017164bd3f69f337e1838cc3b3c4aaf
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-5254ac6aa3528ba39dd3cb18663a1211
      mountType: cacheWithJournal
      nodeName: cp01
    license:
      expirationDate: Thu, 26 Jan 2023 10:07:30 UTC
      maxChunkNum: 3
      maxPhysicalDataCapacity: 0 TB
      maxPhysicalDataCapacityPerNode: 128 TB
      serial: 44831211-c377-486e-acc5-7b4ea354091e
      signDate: Tue, 27 Dec 2022 10:07:30 UTC
      softwareEdition: COMMUNITY
      subscriptionExpirationDate: "0"
      subscriptionStartDate: "0"
    readyReplicas:
      iomesh-chunk: 3
      iomesh-meta: 3
    runningImages:
      chunk:
        iomesh-chunk-0: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-1: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-2: iomesh/zbs-chunkd:v5.1.2-rc14
      meta:
        iomesh-meta-0: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-1: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-2: iomesh/zbs-metad:v5.1.2-rc14
    summary:
      chunkSummary:
        chunks:
        - id: 1
          ip: 10.234.1.23
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 2
          ip: 10.234.1.22
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 3
          ip: 10.234.1.21
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
      clusterSummary:
        spaceInfo:
          dirtyCacheSpace: 387.50Mi
          failureCacheSpace: 0B
          failureDataSpace: 0B
          totalCacheCapacity: 8.62Ti
          totalDataCapacity: 24.66Ti
          usedCacheSpace: 387.50Mi
          usedDataSpace: 11.50Gi
      metaSummary:
        aliveHost:
        - 10.211.5.11
        - 10.211.30.76
        - 10.211.214.135
        leader: 10.211.5.11:10100
        status: META_RUNNING
kind: List
metadata:
  resourceVersion: ""

The part to focus on is the summary section:

summary:
      chunkSummary:
        chunks:
        - id: 1
          ip: 10.234.1.23
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 2
          ip: 10.234.1.22
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 3
          ip: 10.234.1.21
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
      clusterSummary:
        spaceInfo:
          dirtyCacheSpace: 387.50Mi
          failureCacheSpace: 0B
          failureDataSpace: 0B
          totalCacheCapacity: 8.62Ti
          totalDataCapacity: 24.66Ti
          usedCacheSpace: 387.50Mi
          usedDataSpace: 11.50Gi
      metaSummary:
        aliveHost:
        - 10.211.5.11
        - 10.211.30.76
        - 10.211.214.135
        leader: 10.211.5.11:10100
        status: META_RUNNING

All-Flash Mode

Modify the iomesh.yaml file:

iomesh:
  # Whether to create IOMeshCluster object
  create: true

  # Use All-Flash Mode or Hybrid-Flash Mode, in All-Flash mode will reject mounting `cacheWithJournal` and `rawCache` type.
  # And enable mount `dataStoreWithJournal`.
  diskDeploymentMode: "allFlash"
# Note: diskDeploymentMode is case-sensitive and must be one of the keywords below; "ALL-Flash" is rejected:
# ./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
# Error: UPGRADE FAILED: cannot patch "iomesh" with kind IOMeshCluster: IOMeshCluster.iomesh.com "iomesh" is invalid: spec.diskDeploymentMode: Unsupported value: "ALL-Flash": supported values: "hybridFlash", "allFlash"

    deviceMap:
      # cacheWithJournal:
      #   selector:
      #     matchExpressions:
      #     - key: iomesh-system/disk
      #       operator: In
      #       values:
      #       - SSD
      # dataStore:
      #   selector:
      #     matchExpressions:
      #     - key: iomesh-system/disk
      #       operator: In
      #       values:
      #       - HDD
      dataStoreWithJournal:
        selector:
          matchLabels:
            iomesh.com/bd-deviceType: disk
          matchExpressions:
          - key: iomesh.com/bd-driverType
            operator: In
            values:
            - SSD
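
Before applying, you can preview which blockdevices the new selector will match, using the same labels NDM attaches (shown in the blockdevice YAML earlier):

kubectl -n iomesh-system get blockdevice \
  -l iomesh.com/bd-deviceType=disk,iomesh.com/bd-driverType=SSD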

Looking at the CRD's YAML shows where this is defined:

diskDeploymentMode:
                default: hybridFlash
                description: DiskDeploymentMode set this IOMesh cluster start with
                  all-flash mode or hybrid-flash mode. In all-flash mode, the DeviceManager
                  will reject mount any `Cache` type partition. In hybrid-flash mode,
                  the DeviceManager will reject mount `dataStoreWithJournal` type
                  partition.
                enum:
                - hybridFlash
                - allFlash
                type: string

After applying:

root@cp01:~/iomesh-offline# ./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
Release "iomesh" has been upgraded. Happy Helming!
NAME: iomesh
LAST DEPLOYED: Wed Dec 28 11:30:02 2022
NAMESPACE: iomesh-system
STATUS: deployed
REVISION: 8
TEST SUITE: None

The disks are re-claimed:

root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system
NAME                                           NODENAME   SIZE            CLAIMSTATE   STATUS   AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Unclaimed    Active   25h
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Claimed      Active   25h
blockdevice-4f6835975745542e4a36b0548a573ef3   cp01       2400476553216   Unclaimed    Active   25h
blockdevice-5254ac6aa3528ba39dd3cb18663a1211   cp01       1600321314816   Claimed      Active   25h
blockdevice-5f40eb004d563e77febbde69d930880c   worker01   2400476553216   Unclaimed    Active   25h
blockdevice-6e9e9eaafee63535906f3f3b9ab35687   worker01   2400476553216   Unclaimed    Active   25h
blockdevice-73f728aca1ab2f74a833303fffc59301   worker01   2400476553216   Unclaimed    Active   25h
blockdevice-7401844e99d0b75f2543768a0464e16c   cp01       2400476553216   Unclaimed    Active   25h
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb   worker02   1600321314816   Claimed      Active   25h
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110   worker02   2400476553216   Unclaimed    Active   25h
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0   worker01   1600321314816   Claimed      Active   25h
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf   worker02   2400476553216   Unclaimed    Active   25h
blockdevice-d116cd6ee3b11542beee3737d03de26a   worker02   1600321314816   Claimed      Active   25h
blockdevice-d81eadce662fe037fa84d1ffa4e73a37   worker01   1600321314816   Claimed      Active   25h
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6   worker01   2400476553216   Unclaimed    Active   25h
blockdevice-de6e4ce89275cd799fdf88246c445399   cp01       2400476553216   Unclaimed    Active   25h
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1   worker02   2400476553216   Unclaimed    Active   25h
blockdevice-f505ed7231329ab53ebe26e8a00d4483   cp01       2400476553216   Unclaimed    Active   25h

Check the cluster storage again:

root@cp01:~/iomesh-offline# kubectl get iomeshclusters.iomesh.com -n iomesh-system -o yaml
apiVersion: v1
items:
- apiVersion: iomesh.com/v1alpha1
  kind: IOMeshCluster
  metadata:
    annotations:
      meta.helm.sh/release-name: iomesh
      meta.helm.sh/release-namespace: iomesh-system
    creationTimestamp: "2022-12-27T09:38:58Z"
    finalizers:
    - iomesh.com/iomesh-cluster-protection
    - iomesh.com/access-protection
    generation: 5
    labels:
      app.kubernetes.io/instance: iomesh
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: iomesh
      app.kubernetes.io/part-of: iomesh
      app.kubernetes.io/version: v5.1.2-rc14
      helm.sh/chart: iomesh-v0.11.1
    name: iomesh
    namespace: iomesh-system
    resourceVersion: "692613"
    uid: cd33bce1-9400-4914-aa32-75a13f07d13e
  spec:
    chunk:
      dataCIDR: 10.234.1.0/24
      deviceMap:
        dataStoreWithJournal:
          selector:
            matchExpressions:
            - key: iomesh.com/bd-driverType
              operator: In
              values:
              - SSD
            matchLabels:
              iomesh.com/bd-deviceType: disk
      devicemanager:
        image:
          pullPolicy: IfNotPresent
          repository: iomesh/operator-devicemanager
          tag: v0.11.1
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-chunkd
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    diskDeploymentMode: allFlash
    meta:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-metad
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    portOffset: -1
    probe:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-probe
        tag: v0.11.1
    reclaimPolicy:
      blockdevice: Delete
      volume: Delete
    redirector:
      dataCIDR: 10.234.1.0/24
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-iscsi-redirectord
        tag: v5.1.2-rc14
      resources: {}
    storageClass: hostpath
    toolbox:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-toolbox
        tag: v0.11.1
  status:
    attachedDevices:
    - device: blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0
      mountType: dataStoreWithJournal
      nodeName: worker01
    - device: blockdevice-d81eadce662fe037fa84d1ffa4e73a37
      mountType: dataStoreWithJournal
      nodeName: worker01
    - device: blockdevice-d116cd6ee3b11542beee3737d03de26a
      mountType: dataStoreWithJournal
      nodeName: worker02
    - device: blockdevice-86ba281ed13da04c83b0d5efba7cbeeb
      mountType: dataStoreWithJournal
      nodeName: worker02
    - device: blockdevice-0c6a53344f61d5f4789c393acf459c83
      mountType: dataStoreWithJournal
      nodeName: cp01
    - device: blockdevice-5254ac6aa3528ba39dd3cb18663a1211
      mountType: dataStoreWithJournal
      nodeName: cp01
    license:
      expirationDate: Thu, 26 Jan 2023 10:07:30 UTC
      maxChunkNum: 3
      maxPhysicalDataCapacity: 0 TB
      maxPhysicalDataCapacityPerNode: 128 TB
      serial: 44831211-c377-486e-acc5-7b4ea354091e
      signDate: Tue, 27 Dec 2022 10:07:30 UTC
      softwareEdition: COMMUNITY
      subscriptionExpirationDate: "0"
      subscriptionStartDate: "0"
    readyReplicas:
      iomesh-chunk: 3
      iomesh-meta: 3
    runningImages:
      chunk:
        iomesh-chunk-0: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-1: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-2: iomesh/zbs-chunkd:v5.1.2-rc14
      meta:
        iomesh-meta-0: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-1: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-2: iomesh/zbs-metad:v5.1.2-rc14
    summary:
      chunkSummary:
        chunks:
        - id: 1
          ip: 10.234.1.23
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 0B
            totalDataCapacity: 2.85Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 2
          ip: 10.234.1.22
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 0B
            totalDataCapacity: 2.85Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 3
          ip: 10.234.1.21
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 0B
            totalDataCapacity: 2.85Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
      clusterSummary:
        spaceInfo:
          dirtyCacheSpace: 0B
          failureCacheSpace: 0B
          failureDataSpace: 0B
          totalCacheCapacity: 0B
          totalDataCapacity: 8.56Ti
          usedCacheSpace: 0B
          usedDataSpace: 0B
      metaSummary:
        aliveHost:
        - 10.211.5.11
        - 10.211.30.76
        - 10.211.214.135
        leader: 10.211.5.11:10100
        status: META_RUNNING
kind: List
metadata:
  resourceVersion: ""
  • Monitoring pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pyzbs
spec:
  selector:
    matchLabels:
      app: pyzbs
  replicas: 1
  template:
    metadata:
      labels:
        app: pyzbs # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: pyzbs
        image: iomesh/zbs-client-py-builder:latest
        command: ["/bin/bash", "-c", "--"]
        args: ["while true; do sleep 30; done;"]

Uninstall

root@cp01:~/iomesh-offline# ./helm uninstall -n iomesh-system iomesh
These resources were kept due to the resource policy:
[Deployment] iomesh-openebs-ndm-operator
[Deployment] iomesh-zookeeper-operator
[Deployment] operator
[DaemonSet] iomesh-hostpath-provisioner
[DaemonSet] iomesh-openebs-ndm
[RoleBinding] iomesh-zookeeper-operator
[RoleBinding] iomesh:leader-election
[Role] iomesh-zookeeper-operator
[Role] iomesh:leader-election
[ClusterRoleBinding] iomesh-hostpath-provisioner
[ClusterRoleBinding] iomesh-openebs-ndm
[ClusterRoleBinding] iomesh-zookeeper-operator
[ClusterRoleBinding] iomesh:manager
[ClusterRole] iomesh-hostpath-provisioner
[ClusterRole] iomesh-openebs-ndm
[ClusterRole] iomesh-zookeeper-operator
[ClusterRole] iomesh:manager
[ConfigMap] iomesh-openebs-ndm-config
[ServiceAccount] openebs-ndm
[ServiceAccount] zookeeper-operator
[ServiceAccount] iomesh-operator

release "iomesh" uninstalled

Q1 - A pod on one node cannot mount its PVC

Warning  FailedMount             5m25s                 kubelet                  Unable to attach or mount volumes: unmounted volumes=[fio-pvc-worker01], unattached volumes=[fio-pvc-worker01 kube-api-access-thpm9]: timed out waiting for the condition
  Warning  FailedMapVolume         66s (x11 over 7m20s)  kubelet                  MapVolume.SetUpDevice failed for volume "pvc-bbe904d5-0362-481b-8ec4-e059c0ac2d52" : rpc error: code = Internal desc = failed to attach &{LunId:8 Initiator:iqn.2020-05.com.iomesh:817dd926-3e0f-4c82-9c02-5e30b8e045c5-59068042-6fff-4700-b0b4-537376943d5f IFace:59068042-6fff-4700-b0b4-537376943d5f Portal:127.0.0.1:3260 TargetIqn:iqn.2016-02.com.smartx:system:ebc8b5ba-7a07-4651-9986-f7ea4fe0121e ChapInfo:<nil>}, failed to iscsiadm  discovery target, command: [iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3260 -I 59068042-6fff-4700-b0b4-537376943d5f -o update], output: iscsiadm: Failed to load module tcp: No such file or directory
iscsiadm: Could not load transport tcp.Dropping interface 59068042-6fff-4700-b0b4-537376943d5f.
, error: exit status 21
  Warning  FailedMount  54s (x2 over 3m9s)  kubelet  Unable to attach or mount volumes: unmounted volumes=[fio-pvc-worker01], unattached volumes=[kube-api-access-thpm9 fio-pvc-worker01]: timed out waiting for the condition

Re-check the prerequisites:

1. Edit the iscsi configuration file
sudo sed -i 's/^node.startup = automatic$/node.startup = manual/' /etc/iscsi/iscsid.conf

---

2. Make sure the iscsi_tcp module is loaded
sudo modprobe iscsi_tcp
sudo bash -c 'echo iscsi_tcp > /etc/modprobe.d/iscsi-tcp.conf'

---

3. Enable and start the iscsid service
systemctl enable --now iscsid
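
After re-running these steps, confirm the module is actually loaded and delete the stuck pod so its controller recreates it and retries the mount (the pod name is a placeholder):

lsmod | grep iscsi_tcp                 # the module must now be listed
kubectl delete pod <stuck-pod-name>    # the owning controller recreates the pod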