IOMesh Installation
Official documentation:
Introduction · Documentation
Also published on my personal site: www.etaon.top
Lab topology:
A single network port can be used for testing, but the official documentation recommends putting IOMesh storage traffic on its own network, i.e. the 10.234.1.0/24 subnet shown in the diagram below:
The lab uses bare-metal servers with the following configuration:
Component | Model / Specification | Quantity | Notes |
---|---|---|---|
CPU | Intel(R) Xeon(R) Silver 4214R @ 2.40GHz | 2 | Nodes 1, 3 |
CPU | Intel(R) Xeon(R) Gold 6226R @ 2.90GHz | 1 | Node 2 |
Memory | 256 GB | 3 | Nodes 1, 2, 3 |
SSD | 1.6 TB NVMe SSD | 2 × 3 | Node 1 / Node 2 / Node 3 |
HDD | 2.4 TB SAS | 4 × 3 | Node 1 / Node 2 / Node 3 |
DOM (boot) disk | M.2 240 GB | 2 | Node 1 / Node 2 / Node 3 |
GbE NIC | I350 | 2 | Node 1 / Node 2 / Node 3 |
Storage NIC | 10/25 GbE | 2 | Node 1 / Node 2 / Node 3 |
- Deploying IOMesh requires at least one cache disk and one partition disk per node.
Installation steps:
Create the Kubernetes cluster
Kubernetes 1.24 is used, with containerd as the runtime.
For the cluster setup, see Install Kubernetes 1.24 - 路无止境! (etaon.top)
Pre-installation preparation
Perform the following steps on all worker nodes.
- Install open-iscsi (already installed on Ubuntu).
apt install open-iscsi -y
- Edit the iSCSI configuration file
sudo sed -i 's/^node.startup = automatic$/node.startup = manual/' /etc/iscsi/iscsid.conf
- Make sure the iscsi_tcp module is loaded
```bash
sudo modprobe iscsi_tcp
sudo bash -c 'echo iscsi_tcp > /etc/modprobe.d/iscsi-tcp.conf'
```
- Start the iscsid service
systemctl enable --now iscsid
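Before moving on, it is worth a quick sanity check that the module and the service are actually in place. These are generic Linux commands, not part of the official IOMesh steps:

```bash
lsmod | grep iscsi_tcp        # the module should be listed
systemctl is-enabled iscsid   # expect "enabled"
systemctl is-active iscsid    # expect "active"
```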
Install IOMesh offline
Installation documentation:
Install IOMesh · Documentation
- Download the offline installation package and upload it to all nodes where IOMesh will be deployed, including the master node.
Note: the nodes here are generally worker nodes, and at least 3 are required. If there are not enough worker nodes and you want the master node to join as well, remove its taints by running the following on the master node:
```bash
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
```
- Extract the uploaded offline package:
tar -xvf iomesh-offline-v0.11.1.tgz && cd iomesh-offline
- Load the image files. With containerd as the runtime, run the following command:
ctr --namespace k8s.io image import ./images/iomesh-offline-images.tar
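To confirm the images actually landed in containerd's k8s.io namespace, a quick optional check (not part of the official steps):

```bash
ctr --namespace k8s.io images ls | grep iomesh
```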
- On the master node (or wherever you access the API server), generate the IOMesh configuration file:
./helm show values charts/iomesh > iomesh.yaml
- Edit the iomesh.yaml configuration file; make sure to change dataCIDR to the IOMesh storage network CIDR:
```yaml
...
iomesh:
  chunk:
    dataCIDR: "10.234.1.0/24" # change to your own data network CIDR
```
- On the master node, install the IOMesh cluster:
```
./helm install iomesh ./charts/iomesh \
  --create-namespace \
  --namespace iomesh-system \
  --values iomesh.yaml \
  --wait

# Result:
NAME: iomesh
LAST DEPLOYED: Tue Dec 27 09:38:56 2022
NAMESPACE: iomesh-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
- Pod status after successful creation
Pod creation process:
```
root@cp01:~/iomesh-offline# kubectl get po -n iomesh-system -w
NAME   READY   STATUS   RESTARTS   AGE
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   6/6   Running   0   24s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   6/6   Running   0   24s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   6/6   Running   0   24s
iomesh-csi-driver-node-plugin-gr8bm   3/3   Running   0   24s
iomesh-csi-driver-node-plugin-kshdt   3/3   Running   0   24s
iomesh-csi-driver-node-plugin-xxhhx   3/3   Running   0   24s
iomesh-hostpath-provisioner-59v28   1/1   Running   0   24s
iomesh-hostpath-provisioner-79dgh   1/1   Running   0   24s
iomesh-hostpath-provisioner-cknk8   1/1   Running   0   24s
iomesh-openebs-ndm-7vdkm   1/1   Running   0   24s
iomesh-openebs-ndm-cluster-exporter-75f568df84-dvz4g   1/1   Running   0   24s
iomesh-openebs-ndm-hctjc   1/1   Running   0   24s
iomesh-openebs-ndm-node-exporter-f59t5   1/1   Running   0   24s
iomesh-openebs-ndm-node-exporter-l48dj   1/1   Running   0   24s
iomesh-openebs-ndm-node-exporter-sxgjn   1/1   Running   0   24s
iomesh-openebs-ndm-operator-7d58d8fbc8-xxwdt   1/1   Running   0   24s
iomesh-openebs-ndm-x64pj   1/1   Running   0   24s
iomesh-zookeeper-0   0/1   Running   0   18s
iomesh-zookeeper-operator-f5588b6d7-lrj6n   1/1   Running   0   24s
operator-765dd9678f-95rcw   1/1   Running   0   24s
operator-765dd9678f-ns2gs   1/1   Running   0   24s
operator-765dd9678f-q8pwn   1/1   Running   0   24s
iomesh-zookeeper-0   1/1   Running   0   23s
iomesh-zookeeper-1   0/1   Pending   0   0s
iomesh-zookeeper-1   0/1   Pending   0   1s
iomesh-zookeeper-1   0/1   ContainerCreating   0   1s
iomesh-zookeeper-1   0/1   ContainerCreating   0   1s
iomesh-zookeeper-1   0/1   Running   0   2s
iomesh-csi-driver-node-plugin-gr8bm   3/3   Running   1 (0s ago)   50s
iomesh-csi-driver-node-plugin-kshdt   3/3   Running   1 (0s ago)   50s
iomesh-csi-driver-node-plugin-xxhhx   3/3   Running   1 (0s ago)   50s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   2/6   Error   1 (1s ago)   53s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   2/6   Error   1 (1s ago)   53s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   2/6   Error   1 (1s ago)   54s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   6/6   Running   5 (1s ago)   54s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   6/6   Running   5 (2s ago)   54s
iomesh-zookeeper-1   1/1   Running   0   26s
iomesh-zookeeper-2   0/1   Pending   0   0s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   6/6   Running   5 (2s ago)   55s
iomesh-zookeeper-2   0/1   Pending   0   1s
iomesh-zookeeper-2   0/1   ContainerCreating   0   1s
iomesh-zookeeper-2   0/1   ContainerCreating   0   1s
iomesh-zookeeper-2   0/1   Running   0   2s
iomesh-zookeeper-2   1/1   Running   0   26s
iomesh-meta-0   0/2   Pending   0   0s
iomesh-meta-1   0/2   Pending   0   0s
iomesh-meta-2   0/2   Pending   0   0s
iomesh-meta-0   0/2   Pending   0   1s
iomesh-meta-1   0/2   Pending   0   1s
iomesh-meta-1   0/2   Init:0/1   0   1s
iomesh-meta-0   0/2   Init:0/1   0   1s
iomesh-meta-2   0/2   Pending   0   1s
iomesh-meta-2   0/2   Init:0/1   0   1s
iomesh-iscsi-redirector-jj2qm   0/2   Pending   0   0s
iomesh-iscsi-redirector-jj2qm   0/2   Pending   0   0s
iomesh-iscsi-redirector-9crx8   0/2   Pending   0   0s
iomesh-iscsi-redirector-zlhtk   0/2   Pending   0   0s
iomesh-iscsi-redirector-9crx8   0/2   Pending   0   0s
iomesh-iscsi-redirector-zlhtk   0/2   Pending   0   0s
iomesh-iscsi-redirector-zlhtk   0/2   Init:0/1   0   0s
iomesh-iscsi-redirector-9crx8   0/2   Init:0/1   0   0s
iomesh-chunk-0   0/3   Pending   0   0s
iomesh-meta-0   0/2   Init:0/1   0   2s
iomesh-meta-1   0/2   Init:0/1   0   2s
iomesh-meta-2   0/2   Init:0/1   0   2s
iomesh-iscsi-redirector-zlhtk   0/2   Init:0/1   0   0s
iomesh-iscsi-redirector-jj2qm   0/2   Init:0/1   0   0s
iomesh-chunk-0   0/3   Pending   0   1s
iomesh-meta-1   0/2   Init:0/1   0   3s
iomesh-meta-2   0/2   PodInitializing   0   3s
iomesh-iscsi-redirector-9crx8   0/2   PodInitializing   0   1s
iomesh-chunk-0   0/3   Init:0/1   0   1s
iomesh-iscsi-redirector-zlhtk   0/2   PodInitializing   0   2s
iomesh-iscsi-redirector-9crx8   1/2   Running   0   2s
iomesh-meta-0   0/2   PodInitializing   0   4s
iomesh-meta-1   0/2   PodInitializing   0   4s
iomesh-meta-2   1/2   Running   0   4s
iomesh-meta-1   1/2   Running   0   5s
iomesh-iscsi-redirector-jj2qm   0/2   PodInitializing   0   3s
iomesh-iscsi-redirector-9crx8   1/2   Error   0   3s
iomesh-iscsi-redirector-zlhtk   1/2   Running   0   4s
iomesh-meta-0   1/2   Running   0   6s
iomesh-iscsi-redirector-zlhtk   1/2   Error   0   4s
iomesh-iscsi-redirector-zlhtk   1/2   Running   1 (3s ago)   5s
iomesh-meta-0   1/2   Running   0   7s
iomesh-iscsi-redirector-9crx8   1/2   Running   1 (2s ago)   5s
iomesh-iscsi-redirector-jj2qm   1/2   Running   0   5s
iomesh-iscsi-redirector-9crx8   1/2   Error   1 (2s ago)   5s
iomesh-iscsi-redirector-zlhtk   1/2   Error   1 (3s ago)   5s
iomesh-iscsi-redirector-jj2qm   1/2   Error   0   6s
iomesh-iscsi-redirector-9crx8   1/2   CrashLoopBackOff   1 (1s ago)   6s
iomesh-iscsi-redirector-zlhtk   1/2   CrashLoopBackOff   1 (2s ago)   6s
iomesh-chunk-0   0/3   PodInitializing   0   7s
iomesh-chunk-0   3/3   Running   0   7s
iomesh-chunk-1   0/3   Pending   0   0s
iomesh-iscsi-redirector-jj2qm   1/2   Running   1 (5s ago)   8s
iomesh-chunk-1   0/3   Pending   0   1s
iomesh-chunk-1   0/3   Init:0/1   0   1s
iomesh-chunk-1   0/3   Init:0/1   0   2s
iomesh-meta-0   2/2   Running   0   12s
iomesh-meta-1   2/2   Running   0   12s
iomesh-meta-2   2/2   Running   0   12s
iomesh-chunk-0   2/3   Error   0   11s
iomesh-chunk-1   0/3   PodInitializing   0   4s
iomesh-chunk-0   3/3   Running   1 (2s ago)   12s
iomesh-chunk-1   3/3   Running   0   5s
iomesh-chunk-2   0/3   Pending   0   0s
iomesh-chunk-2   0/3   Pending   0   1s
iomesh-chunk-2   0/3   Init:0/1   0   1s
iomesh-chunk-2   0/3   Init:0/1   0   2s
iomesh-chunk-2   0/3   PodInitializing   0   3s
iomesh-iscsi-redirector-jj2qm   2/2   Running   1 (13s ago)   16s
iomesh-csi-driver-node-plugin-gr8bm   3/3   Running   2 (0s ago)   100s
iomesh-chunk-2   3/3   Running   0   4s
iomesh-csi-driver-node-plugin-xxhhx   3/3   Running   2 (0s ago)   100s
iomesh-csi-driver-node-plugin-kshdt   3/3   Running   2 (1s ago)   101s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   2/6   Error   6 (0s ago)   103s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   2/6   Error   6 (1s ago)   103s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   2/6   Error   6 (1s ago)   103s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   2/6   CrashLoopBackOff   6 (1s ago)   104s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   2/6   CrashLoopBackOff   6 (2s ago)   104s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   2/6   CrashLoopBackOff   6 (2s ago)   104s
iomesh-iscsi-redirector-9crx8   1/2   Running   2 (18s ago)   23s
iomesh-iscsi-redirector-zlhtk   1/2   Running   2 (19s ago)   23s
iomesh-csi-driver-controller-plugin-6887b8d974-nxt62   6/6   Running   10 (14s ago)   116s
iomesh-csi-driver-controller-plugin-6887b8d974-sv2xq   6/6   Running   10 (14s ago)   117s
iomesh-iscsi-redirector-zlhtk   2/2   Running   2 (31s ago)   35s
iomesh-iscsi-redirector-9crx8   2/2   Running   2 (30s ago)   35s
iomesh-csi-driver-controller-plugin-6887b8d974-ffm5z   6/6   Running   10 (18s ago)   2m
```
```
root@cp01:~/iomesh-offline# kubectl get po -n iomesh-system -owide
NAME   READY   STATUS   RESTARTS   AGE   IP   NODE   NOMINATED NODE   READINESS GATES
iomesh-chunk-0   3/3   Running   0   7m27s   192.168.80.23   worker02   <none>   <none>
iomesh-chunk-1   3/3   Running   0   7m20s   192.168.80.22   worker01   <none>   <none>
iomesh-chunk-2   3/3   Running   0   7m16s   192.168.80.21   cp01   <none>   <none>
iomesh-csi-driver-controller-plugin-6887b8d974-4cjlq   6/6   Running   10 (7m12s ago)   9m10s   192.168.80.22   worker01   <none>   <none>
iomesh-csi-driver-controller-plugin-6887b8d974-d2d4r   6/6   Running   44 (11m ago)   35m   192.168.80.23   worker02   <none>   <none>
iomesh-csi-driver-controller-plugin-6887b8d974-j2rks   6/6   Running   44 (11m ago)   35m   192.168.80.21   cp01   <none>   <none>
iomesh-csi-driver-node-plugin-95f5z   3/3   Running   14 (7m15s ago)   35m   192.168.80.22   worker01   <none>   <none>
iomesh-csi-driver-node-plugin-dftlv   3/3   Running   12 (11m ago)   35m   192.168.80.21   cp01   <none>   <none>
iomesh-csi-driver-node-plugin-s549x   3/3   Running   12 (11m ago)   35m   192.168.80.23   worker02   <none>   <none>
iomesh-hostpath-provisioner-8jvs8   1/1   Running   1 (8m59s ago)   35m   10.211.5.10   worker01   <none>   <none>
iomesh-hostpath-provisioner-rkkrs   1/1   Running   0   35m   10.211.30.71   worker02   <none>   <none>
iomesh-hostpath-provisioner-rmk4z   1/1   Running   0   35m   10.211.214.131   cp01   <none>   <none>
iomesh-iscsi-redirector-6g2fk   2/2   Running   2 (7m25s ago)   7m30s   192.168.80.22   worker01   <none>   <none>
iomesh-iscsi-redirector-cxnbq   2/2   Running   2 (7m25s ago)   7m30s   192.168.80.23   worker02   <none>   <none>
iomesh-iscsi-redirector-wnglv   2/2   Running   2 (7m24s ago)   7m30s   192.168.80.21   cp01   <none>   <none>
iomesh-meta-0   2/2   Running   0   7m29s   10.211.30.76   worker02   <none>   <none>
iomesh-meta-1   2/2   Running   0   7m29s   10.211.5.11   worker01   <none>   <none>
iomesh-meta-2   2/2   Running   0   7m29s   10.211.214.135   cp01   <none>   <none>
iomesh-openebs-ndm-55lgk   1/1   Running   1 (8m59s ago)   35m   192.168.80.22   worker01   <none>   <none>
iomesh-openebs-ndm-cluster-exporter-75f568df84-mrd6b   1/1   Running   0   35m   10.211.30.70   worker02   <none>   <none>
iomesh-openebs-ndm-k7dxn   1/1   Running   0   35m   192.168.80.23   worker02   <none>   <none>
iomesh-openebs-ndm-m5p2k   1/1   Running   0   35m   192.168.80.21   cp01   <none>   <none>
iomesh-openebs-ndm-node-exporter-2stn9   1/1   Running   0   35m   10.211.30.72   worker02   <none>   <none>
iomesh-openebs-ndm-node-exporter-ccfr4   1/1   Running   0   35m   10.211.214.129   cp01   <none>   <none>
iomesh-openebs-ndm-node-exporter-jdzdv   1/1   Running   1 (8m59s ago)   35m   10.211.5.8   worker01   <none>   <none>
iomesh-openebs-ndm-operator-7d58d8fbc8-plkzz   1/1   Running   0   9m10s   10.211.30.75   worker02   <none>   <none>
iomesh-zookeeper-0   1/1   Running   0   35m   10.211.30.73   worker02   <none>   <none>
iomesh-zookeeper-1   1/1   Running   0   8m14s   10.211.5.9   worker01   <none>   <none>
iomesh-zookeeper-2   1/1   Running   0   7m59s   10.211.214.134   cp01   <none>   <none>
iomesh-zookeeper-operator-f5588b6d7-wknqk   1/1   Running   0   9m10s   10.211.30.74   worker02   <none>   <none>
operator-765dd9678f-6x5jf   1/1   Running   0   35m   10.211.30.68   worker02   <none>   <none>
operator-765dd9678f-h5mzh   1/1   Running   0   35m   10.211.214.130   cp01   <none>   <none>
operator-765dd9678f-svff4   1/1   Running   0   9m10s   10.211.214.133   cp01   <none>   <none>
```
Set up IOMesh
Check block device status
After IOMesh is installed, all block devices are in the Unclaimed state:
```
root@cp01:~# kubectl -n iomesh-system get blockdevice
NAME   NODENAME   SIZE   CLAIMSTATE   STATUS   AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Unclaimed   Active   15m
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Unclaimed   Active   15m
blockdevice-4f6835975745542e4a36b0548a573ef3   cp01       2400476553216   Unclaimed   Active   15m
blockdevice-5254ac6aa3528ba39dd3cb18663a1211   cp01       1600321314816   Unclaimed   Active   15m
blockdevice-5f40eb004d563e77febbde69d930880c   worker01   2400476553216   Unclaimed   Active   15m
blockdevice-6e9e9eaafee63535906f3f3b9ab35687   worker01   2400476553216   Unclaimed   Active   15m
blockdevice-73f728aca1ab2f74a833303fffc59301   worker01   2400476553216   Unclaimed   Active   15m
blockdevice-7401844e99d0b75f2543768a0464e16c   cp01       2400476553216   Unclaimed   Active   15m
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb   worker02   1600321314816   Unclaimed   Active   15m
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110   worker02   2400476553216   Unclaimed   Active   15m
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0   worker01   1600321314816   Unclaimed   Active   15m
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf   worker02   2400476553216   Unclaimed   Active   15m
blockdevice-d116cd6ee3b11542beee3737d03de26a   worker02   1600321314816   Unclaimed   Active   15m
blockdevice-d81eadce662fe037fa84d1ffa4e73a37   worker01   1600321314816   Unclaimed   Active   15m
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6   worker01   2400476553216   Unclaimed   Active   15m
blockdevice-de6e4ce89275cd799fdf88246c445399   cp01       2400476553216   Unclaimed   Active   15m
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1   worker02   2400476553216   Unclaimed   Active   15m
blockdevice-f505ed7231329ab53ebe26e8a00d4483   cp01       2400476553216   Unclaimed   Active   15m
```
Label the disks (optional)
Use the following commands to label the cache disks and data disks respectively, so that they can later be claimed according to their roles.
```bash
# Cache disk:
kubectl label blockdevice blockdevice-20cce4274e106429525f4a0ca7d192c2 -n iomesh-system iomesh-system/disk=SSD
# Data disk:
kubectl label blockdevice blockdevice-31a2642808ca091a75331b6c7a1f9f68 -n iomesh-system iomesh-system/disk=HDD
```
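Labeling each device by hand gets tedious with many disks. Since openebs-ndm already labels every BlockDevice with iomesh.com/bd-driverType (visible in the object dump in the next subsection), a small loop can copy that value into the custom iomesh-system/disk key. A rough sketch under that assumption; review which devices it selects before running it:

```bash
# Copy the driver type detected by openebs-ndm into the custom iomesh-system/disk label:
# SSDs become cache disks, HDDs become data disks.
for type in SSD HDD; do
  kubectl get blockdevice -n iomesh-system \
    -l iomesh.com/bd-driverType=${type} -o name |
  xargs -r -I{} kubectl label -n iomesh-system {} iomesh-system/disk=${type} --overwrite
done
```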
Device mapping - HybridFlash
The chunk/deviceMap section of iomesh.yaml declares which disks IOMesh uses and how each type of disk is mounted. In this document, the label selectors distinguish disks by iomesh-system/disk=SSD or HDD.
Alternatively, inspect the labels on each block device and select by the default labels instead:
```
root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system blockdevice-02e15a7c78768f42a1e552a3726cff89 -oyaml
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
  annotations:
    internal.openebs.io/uuid-scheme: gpt
  creationTimestamp: "2022-12-27T09:39:07Z"
  generation: 1
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    iomesh.com/bd-devicePath: dev.sdd
    iomesh.com/bd-deviceType: disk
    iomesh.com/bd-driverType: HDD
    iomesh.com/bd-model: DL2400MM0159
    iomesh.com/bd-serial: 5000c500e2476207
    iomesh.com/bd-vendor: SEAGATE
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: worker02
    kubernetes.io/os: linux
    ndm.io/blockdevice-type: blockdevice
    ndm.io/managed: "true"
    nodename: worker02
  name: blockdevice-02e15a7c78768f42a1e552a3726cff89
  namespace: iomesh-system
  resourceVersion: "3333"
  uid: 489546d4-ef39-4980-9fcd-b699a21940ec
......
```
If the environment has multiple disks but some of them should not be used by IOMesh, they can be excluded with exclude.
Example (this uses the labels we set above):
```yaml
...
  deviceMap:
    # cacheWithJournal:
    #   selector:
    #     matchLabels:
    #       iomesh.com/bd-deviceType: disk
    cacheWithJournal:
      selector:
        matchExpressions:
        - key: iomesh-system/disk
          operator: In
          values:
          - SSD
    dataStore:
      selector:
        matchExpressions:
        - key: iomesh-system/disk
          operator: In
          values:
          - HDD
    # exclude: blockdev-xxxx   ### blockdevice(s) to exclude
```
Claim the disks
After iomesh.yaml has been modified, claim the disks with the following command:
./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
After it runs, query the block devices again; all of the disks to be used are now in the Claimed state:
```
root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system
NAME   NODENAME   SIZE   CLAIMSTATE   STATUS   AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Claimed   Active   5h19m
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Claimed   Active   5h19m
blockdevice-4f6835975745542e4a36b0548a573ef3   cp01       2400476553216   Claimed   Active   5h19m
blockdevice-5254ac6aa3528ba39dd3cb18663a1211   cp01       1600321314816   Claimed   Active   5h19m
blockdevice-5f40eb004d563e77febbde69d930880c   worker01   2400476553216   Claimed   Active   5h19m
blockdevice-6e9e9eaafee63535906f3f3b9ab35687   worker01   2400476553216   Claimed   Active   5h19m
blockdevice-73f728aca1ab2f74a833303fffc59301   worker01   2400476553216   Claimed   Active   5h19m
blockdevice-7401844e99d0b75f2543768a0464e16c   cp01       2400476553216   Claimed   Active   5h19m
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb   worker02   1600321314816   Claimed   Active   5h19m
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110   worker02   2400476553216   Claimed   Active   5h19m
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0   worker01   1600321314816   Claimed   Active   5h19m
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf   worker02   2400476553216   Claimed   Active   5h19m
blockdevice-d116cd6ee3b11542beee3737d03de26a   worker02   1600321314816   Claimed   Active   5h19m
blockdevice-d81eadce662fe037fa84d1ffa4e73a37   worker01   1600321314816   Claimed   Active   5h19m
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6   worker01   2400476553216   Claimed   Active   5h19m
blockdevice-de6e4ce89275cd799fdf88246c445399   cp01       2400476553216   Claimed   Active   5h19m
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1   worker02   2400476553216   Claimed   Active   5h19m
blockdevice-f505ed7231329ab53ebe26e8a00d4483   cp01       2400476553216   Claimed   Active   5h19m
```
Create a StorageClass
The next step is to create a StorageClass.
This involves the PV - PVC - StorageClass (SC) concepts used to provide persistent storage to containers. See the CNCF documentation for the full description. A plain-language summary:
- A PV corresponds to a Volume in the storage cluster.
- A PVC declares what kind of storage is needed. When a container needs persistent storage, the PVC acts as the interface between the container and the PV.
- Defining a PV requires many fields, and creating many PVs by hand in a large deployment is tedious. Dynamic provisioning was introduced to simplify this by creating PVs automatically: an administrator defines a StorageClass and deploys a PV provisioner; when developers request storage, the PVC specifies the desired StorageClass, the StorageClass is passed to the provisioner, and the provisioner creates the PV automatically.
After the IOMesh CSI driver is installed, a default StorageClass named iomesh-csi-driver is created. Custom StorageClasses can also be created as needed:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iomesh-sc
provisioner: com.iomesh.csi-driver # <-- driver.name in iomesh-values.yaml
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  # "ext4" / "ext3" / "ext2" / "xfs"
  csi.storage.k8s.io/fstype: "ext4"
  # "2" / "3"
  replicaFactor: "2"
  # "true" / "false"
  thinProvision: "true"
volumeBindingMode: Immediate
```
Example:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-default
provisioner: com.iomesh.csi-driver
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: "ext4"
  replicaFactor: "2"
  thinProvision: "true"
volumeBindingMode: Immediate
```
Note: reclaimPolicy can be set to Retain or Delete (Delete by default). When a PV is created, its persistentVolumeReclaimPolicy inherits the StorageClass's reclaimPolicy.
The difference between the two:
Delete: for a PV created through a PVC and StorageClass, the PV is deleted along with the PVC when the PVC is deleted.
Retain: for a PV created through a PVC and StorageClass, the PV is not deleted when the PVC is deleted; it moves to the Released state and its data is preserved, so it can still be reused after being reclaimed.
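If a PV has already been provisioned with the wrong policy, it can be switched in place with a standard kubectl patch; a quick sketch, with the PV name as a placeholder:

```bash
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```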
```
root@cp01:~/iomesh-offline# kubectl get sc
NAME                PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
hostpath            kubevirt.io/hostpath-provisioner   Delete          WaitForFirstConsumer   false                  5h23m
iomesh-csi-driver   com.iomesh.csi-driver              Retain          Immediate              true                   5h23m
sc-default          com.iomesh.csi-driver              Delete          Immediate              true                   47m
```
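If you want PVCs that omit storageClassName to land on IOMesh automatically, one of these classes can be marked as the cluster default. A hedged example using the sc-default class created above, via the standard default-class annotation:

```bash
kubectl patch storageclass sc-default \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```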
Create a SnapshotClass
A Kubernetes VolumeSnapshotClass object is analogous to a StorageClass. It is defined as follows:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: iomesh-csi-driver-default
driver: com.iomesh.csi-driver
deletionPolicy: Delete
```
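With the class in place, a snapshot of a volume is taken by creating a VolumeSnapshot object. A minimal sketch, assuming a PVC already exists (for example the pvc-test claim defined in the next section); the snapshot name is arbitrary:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc-test-snapshot                # arbitrary snapshot name
spec:
  volumeSnapshotClassName: iomesh-csi-driver-default
  source:
    persistentVolumeClaimName: pvc-test  # PVC to snapshot (assumed to exist)
```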
Create a Volume
Before creating a PVC, make sure the StorageClass has already been created.
Example:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  storageClassName: sc-default  # name of the StorageClass created earlier
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
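To actually consume the volume, the PVC can be mounted into a Pod. A minimal sketch referencing the pvc-test claim above; the Pod name, image, and mount path are arbitrary choices for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test-pod               # arbitrary Pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data             # arbitrary mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-test          # the PVC defined above
```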
Creating a PVC like the one above provisions the corresponding PV. After creation, query the PVC:
```
root@cp01:~/iomesh-offline# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
iomesh-pvc-10g   Bound    pvc-ac1e1184-a9ce-417e-a7eb-a799cf199be8   10Gi       RWO            sc-default     44m
```
Query the PV created through the PVC:
```
root@cp01:~/iomesh-offline# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS   REASON   AGE
pvc-5900b5e5-00d4-42ad-821e-6d1199a7e3e3   217Gi      RWO            Delete           Bound    iomesh-system/coredump-iomesh-chunk-0   hostpath                4h55m
pvc-ac1e1184-a9ce-417e-a7eb-a799cf199be8   10Gi       RWO            Delete           Bound    default/iomesh-pvc-10g                  sc-default              44m
```
Check the cluster's storage status
```
root@cp01:~/iomesh-offline# kubectl get iomeshclusters.iomesh.com -n iomesh-system -oyaml
apiVersion: v1
items:
- apiVersion: iomesh.com/v1alpha1
  kind: IOMeshCluster
  metadata:
    annotations:
      meta.helm.sh/release-name: iomesh
      meta.helm.sh/release-namespace: iomesh-system
    creationTimestamp: "2022-12-27T09:38:58Z"
    finalizers:
    - iomesh.com/iomesh-cluster-protection
    - iomesh.com/access-protection
    generation: 4
    labels:
      app.kubernetes.io/instance: iomesh
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: iomesh
      app.kubernetes.io/part-of: iomesh
      app.kubernetes.io/version: v5.1.2-rc14
      helm.sh/chart: iomesh-v0.11.1
    name: iomesh
    namespace: iomesh-system
    resourceVersion: "653267"
    uid: cd33bce1-9400-4914-aa32-75a13f07d13e
  spec:
    chunk:
      dataCIDR: 10.234.1.0/24
      deviceMap:
        cacheWithJournal:
          selector:
            matchExpressions:
            - key: iomesh-system/disk
              operator: In
              values:
              - SSD
        dataStore:
          selector:
            matchExpressions:
            - key: iomesh-system/disk
              operator: In
              values:
              - HDD
      devicemanager:
        image:
          pullPolicy: IfNotPresent
          repository: iomesh/operator-devicemanager
          tag: v0.11.1
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-chunkd
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    diskDeploymentMode: hybridFlash
    meta:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-metad
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    portOffset: -1
    probe:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-probe
        tag: v0.11.1
    reclaimPolicy:
      blockdevice: Delete
      volume: Delete
    redirector:
      dataCIDR: 10.234.1.0/24
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-iscsi-redirectord
        tag: v5.1.2-rc14
      resources: {}
    storageClass: hostpath
    toolbox:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-toolbox
        tag: v0.11.1
  status:
    attachedDevices:
    - device: blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0
      mountType: cacheWithJournal
      nodeName: worker01
    - device: blockdevice-02e15a7c78768f42a1e552a3726cff89
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-7401844e99d0b75f2543768a0464e16c
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-6e9e9eaafee63535906f3f3b9ab35687
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-5f40eb004d563e77febbde69d930880c
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-de6e4ce89275cd799fdf88246c445399
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-d81eadce662fe037fa84d1ffa4e73a37
      mountType: cacheWithJournal
      nodeName: worker01
    - device: blockdevice-d116cd6ee3b11542beee3737d03de26a
      mountType: cacheWithJournal
      nodeName: worker02
    - device: blockdevice-f505ed7231329ab53ebe26e8a00d4483
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-4f6835975745542e4a36b0548a573ef3
      mountType: dataStore
      nodeName: cp01
    - device: blockdevice-73f728aca1ab2f74a833303fffc59301
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-86ba281ed13da04c83b0d5efba7cbeeb
      mountType: cacheWithJournal
      nodeName: worker02
    - device: blockdevice-0c6a53344f61d5f4789c393acf459c83
      mountType: cacheWithJournal
      nodeName: cp01
    - device: blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-d017164bd3f69f337e1838cc3b3c4aaf
      mountType: dataStore
      nodeName: worker02
    - device: blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6
      mountType: dataStore
      nodeName: worker01
    - device: blockdevice-5254ac6aa3528ba39dd3cb18663a1211
      mountType: cacheWithJournal
      nodeName: cp01
    license:
      expirationDate: Thu, 26 Jan 2023 10:07:30 UTC
      maxChunkNum: 3
      maxPhysicalDataCapacity: 0 TB
      maxPhysicalDataCapacityPerNode: 128 TB
      serial: 44831211-c377-486e-acc5-7b4ea354091e
      signDate: Tue, 27 Dec 2022 10:07:30 UTC
      softwareEdition: COMMUNITY
      subscriptionExpirationDate: "0"
      subscriptionStartDate: "0"
    readyReplicas:
      iomesh-chunk: 3
      iomesh-meta: 3
    runningImages:
      chunk:
        iomesh-chunk-0: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-1: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-2: iomesh/zbs-chunkd:v5.1.2-rc14
      meta:
        iomesh-meta-0: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-1: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-2: iomesh/zbs-metad:v5.1.2-rc14
    summary:
      chunkSummary:
        chunks:
        - id: 1
          ip: 10.234.1.23
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 2
          ip: 10.234.1.22
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 3
          ip: 10.234.1.21
          spaceInfo:
            dirtyCacheSpace: 193.75Mi
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 2.87Ti
            totalDataCapacity: 8.22Ti
            usedCacheSpace: 193.75Mi
            usedDataSpace: 5.75Gi
          status: CHUNK_STATUS_CONNECTED_HEALTHY
      clusterSummary:
        spaceInfo:
          dirtyCacheSpace: 387.50Mi
          failureCacheSpace: 0B
          failureDataSpace: 0B
          totalCacheCapacity: 8.62Ti
          totalDataCapacity: 24.66Ti
          usedCacheSpace: 387.50Mi
          usedDataSpace: 11.50Gi
      metaSummary:
        aliveHost:
        - 10.211.5.11
        - 10.211.30.76
        - 10.211.214.135
        leader: 10.211.5.11:10100
        status: META_RUNNING
kind: List
metadata:
  resourceVersion: ""
```
The summary section is the main thing to look at:
```yaml
summary:
  chunkSummary:
    chunks:
    - id: 1
      ip: 10.234.1.23
      spaceInfo:
        dirtyCacheSpace: 193.75Mi
        failureCacheSpace: 0B
        failureDataSpace: 0B
        totalCacheCapacity: 2.87Ti
        totalDataCapacity: 8.22Ti
        usedCacheSpace: 193.75Mi
        usedDataSpace: 5.75Gi
      status: CHUNK_STATUS_CONNECTED_HEALTHY
    - id: 2
      ip: 10.234.1.22
      spaceInfo:
        dirtyCacheSpace: 0B
        failureCacheSpace: 0B
        failureDataSpace: 0B
        totalCacheCapacity: 2.87Ti
        totalDataCapacity: 8.22Ti
        usedCacheSpace: 0B
        usedDataSpace: 0B
      status: CHUNK_STATUS_CONNECTED_HEALTHY
    - id: 3
      ip: 10.234.1.21
      spaceInfo:
        dirtyCacheSpace: 193.75Mi
        failureCacheSpace: 0B
        failureDataSpace: 0B
        totalCacheCapacity: 2.87Ti
        totalDataCapacity: 8.22Ti
        usedCacheSpace: 193.75Mi
        usedDataSpace: 5.75Gi
      status: CHUNK_STATUS_CONNECTED_HEALTHY
  clusterSummary:
    spaceInfo:
      dirtyCacheSpace: 387.50Mi
      failureCacheSpace: 0B
      failureDataSpace: 0B
      totalCacheCapacity: 8.62Ti
      totalDataCapacity: 24.66Ti
      usedCacheSpace: 387.50Mi
      usedDataSpace: 11.50Gi
  metaSummary:
    aliveHost:
    - 10.211.5.11
    - 10.211.30.76
    - 10.211.214.135
    leader: 10.211.5.11:10100
    status: META_RUNNING
```
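Rather than paging through the full object each time, the summary alone can be pulled out directly; a small convenience sketch, assuming jq is installed on the host:

```bash
kubectl get iomeshclusters.iomesh.com -n iomesh-system -o json | jq '.items[].status.summary'
```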
All-Flash mode
Modify the iomesh.yaml file:
```yaml
iomesh:
  # Whether to create IOMeshCluster object
  create: true
  # Use All-Flash Mode or Hybrid-Flash Mode, in All-Flash mode will reject mounting `cacheWithJournal` and `rawCache` type.
  # And enable mount `dataStoreWithJournal`.
  diskDeploymentMode: "allFlash"
  # Note: the comment above does not spell out the exact keyword; the value is NOT "ALL-Flash".
  # Using that spelling makes the upgrade fail:
  #   ./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
  #   Error: UPGRADE FAILED: cannot patch "iomesh" with kind IOMeshCluster: IOMeshCluster.iomesh.com "iomesh" is invalid: spec.diskDeploymentMode: Unsupported value: "ALL-Flash": supported values: "hybridFlash", "allFlash"
  deviceMap:
    # cacheWithJournal:
    #   selector:
    #     matchExpressions:
    #     - key: iomesh-system/disk
    #       operator: In
    #       values:
    #       - SSD
    # dataStore:
    #   selector:
    #     matchExpressions:
    #     - key: iomesh-system/disk
    #       operator: In
    #       values:
    #       - HDD
    dataStoreWithJournal:
      selector:
        matchLabels:
          iomesh.com/bd-deviceType: disk
        matchExpressions:
        - key: iomesh.com/bd-driverType
          operator: In
          values:
          - SSD
```
Looking at the CRD's YAML shows this part of the schema:
```yaml
diskDeploymentMode:
  default: hybridFlash
  description: DiskDeploymentMode set this IOMesh cluster start with all-flash mode
    or hybrid-flash mode. In all-flash mode, the DeviceManager will reject mount
    any `Cache` type partition. In hybrid-flash mode, the DeviceManager will reject
    mount `dataStoreWithJournal` type partition.
  enum:
  - hybridFlash
  - allFlash
  type: string
```
After running the upgrade:
```
root@cp01:~/iomesh-offline# ./helm -n iomesh-system upgrade iomesh charts/iomesh -f iomesh.yaml
Release "iomesh" has been upgraded. Happy Helming!
NAME: iomesh
LAST DEPLOYED: Wed Dec 28 11:30:02 2022
NAMESPACE: iomesh-system
STATUS: deployed
REVISION: 8
TEST SUITE: None
```
The disks are re-claimed:
```
root@cp01:~/iomesh-offline# kubectl get blockdevice -n iomesh-system
NAME   NODENAME   SIZE   CLAIMSTATE   STATUS   AGE
blockdevice-02e15a7c78768f42a1e552a3726cff89   worker02   2400476553216   Unclaimed   Active   25h
blockdevice-0c6a53344f61d5f4789c393acf459c83   cp01       1600321314816   Claimed     Active   25h
blockdevice-4f6835975745542e4a36b0548a573ef3   cp01       2400476553216   Unclaimed   Active   25h
blockdevice-5254ac6aa3528ba39dd3cb18663a1211   cp01       1600321314816   Claimed     Active   25h
blockdevice-5f40eb004d563e77febbde69d930880c   worker01   2400476553216   Unclaimed   Active   25h
blockdevice-6e9e9eaafee63535906f3f3b9ab35687   worker01   2400476553216   Unclaimed   Active   25h
blockdevice-73f728aca1ab2f74a833303fffc59301   worker01   2400476553216   Unclaimed   Active   25h
blockdevice-7401844e99d0b75f2543768a0464e16c   cp01       2400476553216   Unclaimed   Active   25h
blockdevice-86ba281ed13da04c83b0d5efba7cbeeb   worker02   1600321314816   Claimed     Active   25h
blockdevice-ad8fe313ea9a4cec2c9ecad4e001e110   worker02   2400476553216   Unclaimed   Active   25h
blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0   worker01   1600321314816   Claimed     Active   25h
blockdevice-d017164bd3f69f337e1838cc3b3c4aaf   worker02   2400476553216   Unclaimed   Active   25h
blockdevice-d116cd6ee3b11542beee3737d03de26a   worker02   1600321314816   Claimed     Active   25h
blockdevice-d81eadce662fe037fa84d1ffa4e73a37   worker01   1600321314816   Claimed     Active   25h
blockdevice-de24330f6ca7152c7d9cec70bfa6b3b6   worker01   2400476553216   Unclaimed   Active   25h
blockdevice-de6e4ce89275cd799fdf88246c445399   cp01       2400476553216   Unclaimed   Active   25h
blockdevice-f19c4d080abe3dbf0b7c7c5600df32a1   worker02   2400476553216   Unclaimed   Active   25h
blockdevice-f505ed7231329ab53ebe26e8a00d4483   cp01       2400476553216   Unclaimed   Active   25h
```
Check the cluster's storage again:
```
root@cp01:~/iomesh-offline# kubectl get iomeshclusters.iomesh.com -n iomesh-system -o yaml
apiVersion: v1
items:
- apiVersion: iomesh.com/v1alpha1
  kind: IOMeshCluster
  metadata:
    annotations:
      meta.helm.sh/release-name: iomesh
      meta.helm.sh/release-namespace: iomesh-system
    creationTimestamp: "2022-12-27T09:38:58Z"
    finalizers:
    - iomesh.com/iomesh-cluster-protection
    - iomesh.com/access-protection
    generation: 5
    labels:
      app.kubernetes.io/instance: iomesh
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: iomesh
      app.kubernetes.io/part-of: iomesh
      app.kubernetes.io/version: v5.1.2-rc14
      helm.sh/chart: iomesh-v0.11.1
    name: iomesh
    namespace: iomesh-system
    resourceVersion: "692613"
    uid: cd33bce1-9400-4914-aa32-75a13f07d13e
  spec:
    chunk:
      dataCIDR: 10.234.1.0/24
      deviceMap:
        dataStoreWithJournal:
          selector:
            matchExpressions:
            - key: iomesh.com/bd-driverType
              operator: In
              values:
              - SSD
            matchLabels:
              iomesh.com/bd-deviceType: disk
      devicemanager:
        image:
          pullPolicy: IfNotPresent
          repository: iomesh/operator-devicemanager
          tag: v0.11.1
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-chunkd
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    diskDeploymentMode: allFlash
    meta:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-metad
        tag: v5.1.2-rc14
      replicas: 3
      resources: {}
    portOffset: -1
    probe:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-probe
        tag: v0.11.1
    reclaimPolicy:
      blockdevice: Delete
      volume: Delete
    redirector:
      dataCIDR: 10.234.1.0/24
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/zbs-iscsi-redirectord
        tag: v5.1.2-rc14
      resources: {}
    storageClass: hostpath
    toolbox:
      image:
        pullPolicy: IfNotPresent
        repository: iomesh/operator-toolbox
        tag: v0.11.1
  status:
    attachedDevices:
    - device: blockdevice-bbe1c0ddda880529f4b19e3f27d2c8b0
      mountType: dataStoreWithJournal
      nodeName: worker01
    - device: blockdevice-d81eadce662fe037fa84d1ffa4e73a37
      mountType: dataStoreWithJournal
      nodeName: worker01
    - device: blockdevice-d116cd6ee3b11542beee3737d03de26a
      mountType: dataStoreWithJournal
      nodeName: worker02
    - device: blockdevice-86ba281ed13da04c83b0d5efba7cbeeb
      mountType: dataStoreWithJournal
      nodeName: worker02
    - device: blockdevice-0c6a53344f61d5f4789c393acf459c83
      mountType: dataStoreWithJournal
      nodeName: cp01
    - device: blockdevice-5254ac6aa3528ba39dd3cb18663a1211
      mountType: dataStoreWithJournal
      nodeName: cp01
    license:
      expirationDate: Thu, 26 Jan 2023 10:07:30 UTC
      maxChunkNum: 3
      maxPhysicalDataCapacity: 0 TB
      maxPhysicalDataCapacityPerNode: 128 TB
      serial: 44831211-c377-486e-acc5-7b4ea354091e
      signDate: Tue, 27 Dec 2022 10:07:30 UTC
      softwareEdition: COMMUNITY
      subscriptionExpirationDate: "0"
      subscriptionStartDate: "0"
    readyReplicas:
      iomesh-chunk: 3
      iomesh-meta: 3
    runningImages:
      chunk:
        iomesh-chunk-0: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-1: iomesh/zbs-chunkd:v5.1.2-rc14
        iomesh-chunk-2: iomesh/zbs-chunkd:v5.1.2-rc14
      meta:
        iomesh-meta-0: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-1: iomesh/zbs-metad:v5.1.2-rc14
        iomesh-meta-2: iomesh/zbs-metad:v5.1.2-rc14
    summary:
      chunkSummary:
        chunks:
        - id: 1
          ip: 10.234.1.23
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 0B
            totalDataCapacity: 2.85Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 2
          ip: 10.234.1.22
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 0B
            totalDataCapacity: 2.85Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
        - id: 3
          ip: 10.234.1.21
          spaceInfo:
            dirtyCacheSpace: 0B
            failureCacheSpace: 0B
            failureDataSpace: 0B
            totalCacheCapacity: 0B
            totalDataCapacity: 2.85Ti
            usedCacheSpace: 0B
            usedDataSpace: 0B
          status: CHUNK_STATUS_CONNECTED_HEALTHY
      clusterSummary:
        spaceInfo:
          dirtyCacheSpace: 0B
          failureCacheSpace: 0B
          failureDataSpace: 0B
          totalCacheCapacity: 0B
          totalDataCapacity: 8.56Ti
          usedCacheSpace: 0B
          usedDataSpace: 0B
      metaSummary:
        aliveHost:
        - 10.211.5.11
        - 10.211.30.76
        - 10.211.214.135
        leader: 10.211.5.11:10100
        status: META_RUNNING
kind: List
metadata:
  resourceVersion: ""
```
- Pod for monitoring and inspection
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pyzbs
spec:
  selector:
    matchLabels:
      app: pyzbs
  replicas: 1
  template:
    metadata:
      labels:
        app: pyzbs # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: pyzbs
        image: iomesh/zbs-client-py-builder:latest
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
```
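A brief usage sketch once the Deployment is applied. The manifest filename is an arbitrary choice, and no ZBS client commands are shown since those depend on the tooling inside the image:

```bash
kubectl apply -f pyzbs.yaml                   # save the Deployment above as pyzbs.yaml (arbitrary name)
kubectl exec -it deploy/pyzbs -- /bin/bash    # open a shell inside the monitoring pod
```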
Uninstall
```
root@cp01:~/iomesh-offline# ./helm uninstall -n iomesh-system iomesh
These resources were kept due to the resource policy:
[Deployment] iomesh-openebs-ndm-operator
[Deployment] iomesh-zookeeper-operator
[Deployment] operator
[DaemonSet] iomesh-hostpath-provisioner
[DaemonSet] iomesh-openebs-ndm
[RoleBinding] iomesh-zookeeper-operator
[RoleBinding] iomesh:leader-election
[Role] iomesh-zookeeper-operator
[Role] iomesh:leader-election
[ClusterRoleBinding] iomesh-hostpath-provisioner
[ClusterRoleBinding] iomesh-openebs-ndm
[ClusterRoleBinding] iomesh-zookeeper-operator
[ClusterRoleBinding] iomesh:manager
[ClusterRole] iomesh-hostpath-provisioner
[ClusterRole] iomesh-openebs-ndm
[ClusterRole] iomesh-zookeeper-operator
[ClusterRole] iomesh:manager
[ConfigMap] iomesh-openebs-ndm-config
[ServiceAccount] openebs-ndm
[ServiceAccount] zookeeper-operator
[ServiceAccount] iomesh-operator
release "iomesh" uninstalled
```
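If a completely clean removal is wanted, the resources kept by the resource policy can be deleted by hand. A hedged sketch based only on the names in the output above; this wipes all remaining IOMesh state, so make sure nothing else depends on it:

```bash
# Namespaced leftovers (Deployments, DaemonSets, Roles, ConfigMap, ServiceAccounts) go away with the namespace
kubectl delete namespace iomesh-system
# Cluster-scoped leftovers listed in the uninstall output
kubectl delete clusterrolebinding iomesh-hostpath-provisioner iomesh-openebs-ndm iomesh-zookeeper-operator iomesh:manager
kubectl delete clusterrole iomesh-hostpath-provisioner iomesh-openebs-ndm iomesh-zookeeper-operator iomesh:manager
```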
Q1 - A pod on one node cannot mount its PVC
```
Warning  FailedMount      5m25s                 kubelet  Unable to attach or mount volumes: unmounted volumes=[fio-pvc-worker01], unattached volumes=[fio-pvc-worker01 kube-api-access-thpm9]: timed out waiting for the condition
Warning  FailedMapVolume  66s (x11 over 7m20s)  kubelet  MapVolume.SetUpDevice failed for volume "pvc-bbe904d5-0362-481b-8ec4-e059c0ac2d52" : rpc error: code = Internal desc = failed to attach &{LunId:8 Initiator:iqn.2020-05.com.iomesh:817dd926-3e0f-4c82-9c02-5e30b8e045c5-59068042-6fff-4700-b0b4-537376943d5f IFace:59068042-6fff-4700-b0b4-537376943d5f Portal:127.0.0.1:3260 TargetIqn:iqn.2016-02.com.smartx:system:ebc8b5ba-7a07-4651-9986-f7ea4fe0121e ChapInfo:<nil>}, failed to iscsiadm discovery target, command: [iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3260 -I 59068042-6fff-4700-b0b4-537376943d5f -o update], output: iscsiadm: Failed to load module tcp: No such file or directory
                                                         iscsiadm: Could not load transport tcp.Dropping interface 59068042-6fff-4700-b0b4-537376943d5f., error: exit status 21
Warning  FailedMount      54s (x2 over 3m9s)    kubelet  Unable to attach or mount volumes: unmounted volumes=[fio-pvc-worker01], unattached volumes=[kube-api-access-thpm9 fio-pvc-worker01]: timed out waiting for the condition
```
Re-check the following on the affected node:
```bash
# 1. Edit the iSCSI configuration file
sudo sed -i 's/^node.startup = automatic$/node.startup = manual/' /etc/iscsi/iscsid.conf

# 2. Make sure the iscsi_tcp module is loaded
sudo modprobe iscsi_tcp
sudo bash -c 'echo iscsi_tcp > /etc/modprobe.d/iscsi-tcp.conf'

# 3. Start the iscsid service
systemctl enable --now iscsid
```
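The "Failed to load module tcp" message in the events points to the iscsi_tcp module missing on that node. After re-running the steps above there, a quick check confirms the fix (generic commands; the pod name is a placeholder):

```bash
lsmod | grep iscsi_tcp                          # the module should now be listed
kubectl describe pod <pod-name> | tail -n 20    # the FailedMapVolume events should stop recurring
```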