
Cloud Computing: Kubernetes + GlusterFS Disk Expansion in Practice

Preface

Key Points

  • Level: beginner
  • Expand disks with a Heketi topology file
  • Expand disks with the Heketi CLI

Lab Server Configuration (a 1:1 architectural replica of a small production environment, with slightly different specs; CPU in cores, memory and disk sizes in GB)

Hostname     IP            CPU  Memory  System disk  Data disk   Purpose
ks-master-0  192.168.9.91  2    4       50           100         KubeSphere/k8s-master
ks-master-1  192.168.9.92  2    4       50           100         KubeSphere/k8s-master
ks-master-2  192.168.9.93  2    4       50           100         KubeSphere/k8s-master
ks-worker-0  192.168.9.95  2    4       50           100         k8s-worker/CI
ks-worker-1  192.168.9.96  2    4       50           100         k8s-worker
ks-worker-2  192.168.9.97  2    4       50           100         k8s-worker
storage-0    192.168.9.81  2    4       50           100+50+50   ElasticSearch/GlusterFS/Ceph/Longhorn/NFS
storage-1    192.168.9.82  2    4       50           100+50+50   ElasticSearch/GlusterFS/Ceph/Longhorn
storage-2    192.168.9.83  2    4       50           100+50+50   ElasticSearch/GlusterFS/Ceph/Longhorn
registry     192.168.9.80  2    4       50           200         Sonatype Nexus 3
Total        10 hosts      20   40      500          1100+

Software Versions Used in This Lab

  • OS: openEuler 22.03 LTS SP2 x86_64
  • KubeSphere: 3.3.2
  • Kubernetes: v1.24.12
  • Containerd: 1.6.4
  • KubeKey: v3.0.8
  • GlusterFS: 10.0-8
  • Heketi: v10.4.0

Introduction

In the previous hands-on article, we learned how to install and deploy GlusterFS and Heketi on openEuler 22.03 LTS SP2, and how to connect Kubernetes to GlusterFS as the cluster's backend storage via the in-tree storage driver.

Today we simulate a scenario that every production environment will eventually hit: the service has been live for a while, the GlusterFS data disks are full, and the storage needs to be expanded. What do we do?

With a Heketi-managed GlusterFS cluster, there are two ways to expand the data volumes:

  • Adjust the existing topology configuration file and reload it
  • Expand directly with the Heketi CLI (simpler; recommended)

Prerequisites for the simulation:

  • On top of the existing 100 GB GlusterFS data disk, two extra 50 GB disks were attached, one for each of the two expansion methods.
  • To make the simulation realistic, 95 GB of the existing 100 GB was consumed in advance.

The process in this article is independent of the operating system; all operations apply equally to Heketi + GlusterFS storage clusters deployed on other systems.

Simulating an Out-of-Space Failure

Create a New PVC

  • Edit the PVC resource file: vi pvc-test-95g.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-data-95g
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs
  resources:
    requests:
      storage: 95Gi
  • Run the create command
kubectl apply -f pvc-test-95g.yaml

# The command itself reports no error, but the PVC will stay in the Pending state
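
To confirm the symptom, inspect the PVC from a master node; a quick check, assuming the PVC was created in the default namespace:

# The STATUS column should show Pending
kubectl get pvc test-data-95g

# The Events section at the bottom of the output points at the provisioner error
kubectl describe pvc test-data-95g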

Check the Error Messages

  • Check the Heketi service log for errors
# Run the command (there is no dedicated log file; Heketi logs straight to /var/log/messages)
tail -f /var/log/messages

# Output (only one complete segment is shown; the same errors keep repeating in a loop afterwards)
[root@ks-storage-0 heketi]# tail -f /var/log/messages

Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #1
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #0
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #1
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #2
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] INFO 2023/08/16 15:29:32 Allocating brick set #3
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] ERROR 2023/08/16 15:29:32 heketi/apps/glusterfs/volume_entry_allocate.go:37:glusterfs.(*VolumeEntry).allocBricksInCluster: Minimum brick size limit reached.  Out of space.
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [heketi] ERROR 2023/08/16 15:29:32 heketi/apps/glusterfs/operations_manage.go:220:glusterfs.AsyncHttpOperation: Create Volume Build Failed: No space
Aug 16 15:29:32 ks-storage-0 heketi[34102]: [negroni] 2023-08-16T15:29:32+08:00 | 500 | #011 4.508081ms | 192.168.9.81:18080 | POST /volumes

The simulation above shows how to recognize, in a K8s cluster that uses GlusterFS as backend storage, that the data volumes are out of space:

  • The newly created PVC stays in the Pending state
  • The Heketi error log contains the keyword Create Volume Build Failed: No space (a quick filter is sketched below)
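
On the storage node, a simple filter such as this sketch pulls the relevant log lines without tailing the whole file:

# Show the last few occurrences of the failure keyword
grep 'Create Volume Build Failed' /var/log/messages | tail -n 3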

Once the GlusterFS storage cluster has allocated all of its disk space and can no longer create new volumes, it is up to us as operators to add new disks and expand the storage cluster.

Expanding GlusterFS Data Volumes with Heketi

Note: to show the expansion process end to end, this article records the complete output of the commands involved. That makes it somewhat long, so feel free to read selectively.

Check the Current Topology

# Run the command
heketi-cli topology info

# Normal output looks like this
[root@ks-storage-0 heketi]# heketi-cli topology info

Cluster Id: 9ad37206ce6575b5133179ba7c6e0935

    File:  true
    Block: true

    Volumes:

        Name: vol_75c90b8463d73a7fd9187a8ca22ff91f
        Size: 95
        Id: 75c90b8463d73a7fd9187a8ca22ff91f
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Mount: 192.168.9.81:vol_75c90b8463d73a7fd9187a8ca22ff91f
        Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

                Bricks:
                        Id: 37006636e1fe713a395755e8d34f6f20
                        Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
                        Size (GiB): 95
                        Node: 5e99fe0cd727b8066f200bad5524c544
                        Device: 8fd529a668d5c19dfc37450b755230cd

                        Id: 3dca27f98e1c20aa092c159226ddbe4d
                        Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
                        Size (GiB): 95
                        Node: 7bb26eb30c1c61456b5ae8d805c01cf1
                        Device: 51ad0981f8fed73002f5a7f2dd0d65c5

                        Id: 7ac64e137d803cccd4b9fcaaed4be8ad
                        Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
                        Size (GiB): 95
                        Node: 0108350a9d13578febbfd0502f8077ff
                        Device: 9af38756fe916fced666fcd3de786c19



    Nodes:

        Node Id: 0108350a9d13578febbfd0502f8077ff
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.81
        Storage Hostnames: 192.168.9.81
        Devices:
                Id:9af38756fe916fced666fcd3de786c19   State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4
                        Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb

                        Bricks:
                                Id:7ac64e137d803cccd4b9fcaaed4be8ad   Size (GiB):95      Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick

        Node Id: 5e99fe0cd727b8066f200bad5524c544
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.82
        Storage Hostnames: 192.168.9.82
        Devices:
                Id:8fd529a668d5c19dfc37450b755230cd   State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4
                        Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/sdb

                        Bricks:
                                Id:37006636e1fe713a395755e8d34f6f20   Size (GiB):95      Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick

        Node Id: 7bb26eb30c1c61456b5ae8d805c01cf1
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.83
        Storage Hostnames: 192.168.9.83
        Devices:
                Id:51ad0981f8fed73002f5a7f2dd0d65c5   State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4
                        Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb

                        Bricks:
                                Id:3dca27f98e1c20aa092c159226ddbe4d   Size (GiB):95      Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
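
Tip: when you need to process the topology from a script rather than read the report, heketi-cli can emit machine-readable output; a sketch, assuming your heketi-cli build supports the --json flag:

# Dump the same topology as JSON
heketi-cli topology info --json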

Check the Current Node Information

  • List the nodes
# Run the command
heketi-cli node list

# Normal output looks like this
[root@ks-storage-0 heketi]# heketi-cli node list
Id:0108350a9d13578febbfd0502f8077ff     Cluster:9ad37206ce6575b5133179ba7c6e0935
Id:5e99fe0cd727b8066f200bad5524c544     Cluster:9ad37206ce6575b5133179ba7c6e0935
Id:7bb26eb30c1c61456b5ae8d805c01cf1     Cluster:9ad37206ce6575b5133179ba7c6e0935
  • View node details

Using the storage-0 node as an example, view the node's details.

# Run the command
heketi-cli node info xxxxxx

# Normal output looks like this
[root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
Node Id: 0108350a9d13578febbfd0502f8077ff
State: online
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Zone: 1
Management Hostname: 192.168.9.81
Storage Hostname: 192.168.9.81
Devices:
Id:9af38756fe916fced666fcd3de786c19   Name:/dev/sdb            State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4       Bricks:1

Check the Current VG Information

Using the storage-0 node as an example, view the VGs that have been allocated (system VG entries are removed from the output).

# Quick view
[root@ks-storage-0 heketi]# vgs
  VG                                  #PV #LV #SN Attr   VSize   VFree
  vg_9af38756fe916fced666fcd3de786c19   1   2   0 wz--n-  99.87g <3.92g

# Detailed view
[root@ks-storage-0 heketi]# vgdisplay vg_9af38756fe916fced666fcd3de786c19
  --- Volume group ---
  VG Name               vg_9af38756fe916fced666fcd3de786c19
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  187
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               99.87 GiB
  PE Size               4.00 MiB
  Total PE              25567
  Alloc PE / Size       24564 / 95.95 GiB
  Free  PE / Size       1003 / <3.92 GiB
  VG UUID               jrxfIv-Fnjq-IYF8-aubc-t2y0-zwUp-YxjkDC

Check the Current LV Information

Using the storage-0 node as an example, view the LVs that have been allocated (system LV entries are removed from the output).

# Quick view
[root@ks-storage-0 heketi]# lvs
  LV                                     VG                                  Attr       LSize   Pool                                Origin Data%  Meta%  Move Log Cpy%Sync Convert
  brick_7ac64e137d803cccd4b9fcaaed4be8ad vg_9af38756fe916fced666fcd3de786c19 Vwi-aotz--  95.00g tp_3c68ad0d0752d41ede13afdc3db9637b        0.05
  tp_3c68ad0d0752d41ede13afdc3db9637b    vg_9af38756fe916fced666fcd3de786c19 twi-aotz--  95.00g                                            0.05   3.31

# Detailed view
[root@ks-storage-0 heketi]# lvdisplay
  --- Logical volume ---
  LV Name                tp_3c68ad0d0752d41ede13afdc3db9637b
  VG Name                vg_9af38756fe916fced666fcd3de786c19
  LV UUID                Aho32F-tBTa-VTTp-VfwY-qRbm-WUxu-puj4kv
  LV Write Access        read/write (activated read only)
  LV Creation host, time ks-storage-0, 2023-08-16 15:21:06 +0800
  LV Pool metadata       tp_3c68ad0d0752d41ede13afdc3db9637b_tmeta
  LV Pool data           tp_3c68ad0d0752d41ede13afdc3db9637b_tdata
  LV Status              available
  # open                 0
  LV Size                95.00 GiB
  Allocated pool data    0.05%
  Allocated metadata     3.31%
  Current LE             24320
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5

  --- Logical volume ---
  LV Path                /dev/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad
  LV Name                brick_7ac64e137d803cccd4b9fcaaed4be8ad
  VG Name                vg_9af38756fe916fced666fcd3de786c19
  LV UUID                VGTOMk-d07E-XWhw-Omzz-Pc1t-WwEH-Wh0EuY
  LV Write Access        read/write
  LV Creation host, time ks-storage-0, 2023-08-16 15:21:10 +0800
  LV Pool name           tp_3c68ad0d0752d41ede13afdc3db9637b
  LV Status              available
  # open                 1
  LV Size                95.00 GiB
  Mapped size            0.05%
  Current LE             24320
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192

Note: Heketi creates LVs on top of an LVM thin pool, which is why the output shows two LVs. The one whose name starts with brick_ is the actual usable LV.
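
The pool relationship can be made explicit with a field selection; a small sketch using the VG from the output above:

# Print each LV together with the thin pool it belongs to
lvs -o lv_name,pool_lv,lv_size vg_9af38756fe916fced666fcd3de786c19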

Expansion Option 1: Adjust the Topology File

Prerequisites

  • New device: /dev/sdc
  • Added capacity: 50 GB
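
Before touching the topology file, it is worth confirming on every storage node that the new disk is visible and completely empty, since Heketi expects a raw, unformatted block device; a minimal sketch:

# /dev/sdc should show up as a bare 50G disk with no partitions
lsblk /dev/sdc

# wipefs without options only lists signatures; empty output means the disk is clean
wipefs /dev/sdc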

Check the Current Topology File

  • cat /etc/heketi/topology.json
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": ["192.168.9.81"],
                            "storage": ["192.168.9.81"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["192.168.9.82"],
                            "storage": ["192.168.9.82"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["192.168.9.83"],
                            "storage": ["192.168.9.83"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb"]
                }
            ]
        }
    ]
}

Modify the Topology File

Edit the existing topology.json: vi /etc/heketi/topology.json

Add /dev/sdc under the devices entry of every node, and note the comma that must now follow "/dev/sdb".

The modified topology.json looks like this:

{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": ["192.168.9.81"],
                            "storage": ["192.168.9.81"]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb",
                        "/dev/sdc"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["192.168.9.82"],
                            "storage": ["192.168.9.82"]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb",
                        "/dev/sdc"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["192.168.9.83"],
                            "storage": ["192.168.9.83"]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb",
                        "/dev/sdc"
                    ]
                }
            ]
        }
    ]
}
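
Before reloading, a quick syntax check of the edited file can save a failed round-trip, since a stray comma is the most common editing mistake here; a sketch, assuming jq is installed:

# Fail fast on malformed JSON
jq . /etc/heketi/topology.json > /dev/null && echo 'topology.json: valid JSON'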

Reload the Topology

# Run the command
heketi-cli topology load --json=/etc/heketi/topology.json

# Normal output looks like this
[root@ks-storage-0 heketi]# heketi-cli topology load --json=/etc/heketi/topology.json
        Found node 192.168.9.81 on cluster 9ad37206ce6575b5133179ba7c6e0935
                Found device /dev/sdb
                Adding device /dev/sdc ... OK
        Found node 192.168.9.82 on cluster 9ad37206ce6575b5133179ba7c6e0935
                Found device /dev/sdb
                Adding device /dev/sdc ... OK
        Found node 192.168.9.83 on cluster 9ad37206ce6575b5133179ba7c6e0935
                Found device /dev/sdb
                Adding device /dev/sdc ... OK

Check the Updated Topology

# Run the command
heketi-cli topology info

# Normal output looks like this
[root@ks-storage-0 heketi]# heketi-cli topology info

Cluster Id: 9ad37206ce6575b5133179ba7c6e0935

    File:  true
    Block: true

    Volumes:

        Name: vol_75c90b8463d73a7fd9187a8ca22ff91f
        Size: 95
        Id: 75c90b8463d73a7fd9187a8ca22ff91f
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Mount: 192.168.9.81:vol_75c90b8463d73a7fd9187a8ca22ff91f
        Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

                Bricks:
                        Id: 37006636e1fe713a395755e8d34f6f20
                        Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
                        Size (GiB): 95
                        Node: 5e99fe0cd727b8066f200bad5524c544
                        Device: 8fd529a668d5c19dfc37450b755230cd

                        Id: 3dca27f98e1c20aa092c159226ddbe4d
                        Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
                        Size (GiB): 95
                        Node: 7bb26eb30c1c61456b5ae8d805c01cf1
                        Device: 51ad0981f8fed73002f5a7f2dd0d65c5

                        Id: 7ac64e137d803cccd4b9fcaaed4be8ad
                        Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
                        Size (GiB): 95
                        Node: 0108350a9d13578febbfd0502f8077ff
                        Device: 9af38756fe916fced666fcd3de786c19



    Nodes:

        Node Id: 0108350a9d13578febbfd0502f8077ff
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.81
        Storage Hostnames: 192.168.9.81
        Devices:
                Id:9af38756fe916fced666fcd3de786c19   State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4
                        Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb

                        Bricks:
                                Id:7ac64e137d803cccd4b9fcaaed4be8ad   Size (GiB):95      Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
                Id:ab5f766ddc779449db2bf45bb165fbff   State:online    Size (GiB):49      Used (GiB):0       Free (GiB):49
                        Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc

                        Bricks:

        Node Id: 5e99fe0cd727b8066f200bad5524c544
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.82
        Storage Hostnames: 192.168.9.82
        Devices:
                Id:8fd529a668d5c19dfc37450b755230cd   State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4
                        Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/sdb

                        Bricks:
                                Id:37006636e1fe713a395755e8d34f6f20   Size (GiB):95      Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
                Id:b648c995486b0e785f78a8b674d8b590   State:online    Size (GiB):49      Used (GiB):0       Free (GiB):49
                        Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/sdc

                        Bricks:

        Node Id: 7bb26eb30c1c61456b5ae8d805c01cf1
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.83
        Storage Hostnames: 192.168.9.83
        Devices:
                Id:51ad0981f8fed73002f5a7f2dd0d65c5   State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4
                        Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb

                        Bricks:
                                Id:3dca27f98e1c20aa092c159226ddbe4d   Size (GiB):95      Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
                Id:9b39c4e288d4a1783d204d2033444c00   State:online    Size (GiB):49      Used (GiB):0       Free (GiB):49
                        Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc

                        Bricks:

Check the Updated Node Information

Using the storage-0 node as an example, view the updated node details (pay attention to the Devices section).

# Run the command
heketi-cli node info xxxxxx

# Normal output looks like this

[root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
Node Id: 0108350a9d13578febbfd0502f8077ff
State: online
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Zone: 1
Management Hostname: 192.168.9.81
Storage Hostname: 192.168.9.81
Devices:
Id:9af38756fe916fced666fcd3de786c19   Name:/dev/sdb            State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4       Bricks:1
Id:ab5f766ddc779449db2bf45bb165fbff   Name:/dev/sdc            State:online    Size (GiB):49      Used (GiB):0       Free (GiB):49      Bricks:0

Check the Updated VG Information

Using the storage-0 node as an example, view the updated VG information (system VG entries are removed from the output).

[root@ks-storage-0 heketi]# vgs
  VG                                  #PV #LV #SN Attr   VSize   VFree
  vg_9af38756fe916fced666fcd3de786c19   1   2   0 wz--n-  99.87g <3.92g
  vg_ab5f766ddc779449db2bf45bb165fbff   1   0   0 wz--n-  49.87g 49.87g
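
The same change is visible one layer down at the physical-volume level; a quick check on storage-0:

# The new device should appear as a PV backing its own Heketi-managed VG
pvs /dev/sdc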

Create a Test PVC

Run the following commands on the ks-master-0 node.

  • Edit the PVC resource file: vi pvc-test-45g.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-data-45g
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs
  resources:
    requests:
      storage: 45Gi
  • Run the create command
 kubectl apply -f pvc-test-45g.yaml
  • Check the result with kubectl get pvc -o wide
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
test-data-45g   Bound    pvc-19343e73-6b14-40ca-b65b-356d38d16bb0   45Gi       RWO            glusterfs      17s   Filesystem
test-data-95g   Bound    pvc-2461f639-1634-4085-af2f-b526a3800217   95Gi       RWO            glusterfs      42h   Filesystem

Check the Newly Created Volume

  • List the volumes
[root@ks-storage-0 heketi]# heketi-cli volume list
Id:75c90b8463d73a7fd9187a8ca22ff91f    Cluster:9ad37206ce6575b5133179ba7c6e0935    Name:vol_75c90b8463d73a7fd9187a8ca22ff91f
Id:ebd76f343b04f89ed4166c8f1ece0361    Cluster:9ad37206ce6575b5133179ba7c6e0935    Name:vol_ebd76f343b04f89ed4166c8f1ece0361
  • View the details of the newly created volume
[root@ks-storage-0 heketi]# heketi-cli volume info ebd76f343b04f89ed4166c8f1ece0361
Name: vol_ebd76f343b04f89ed4166c8f1ece0361
Size: 45
Volume Id: ebd76f343b04f89ed4166c8f1ece0361
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Mount: 192.168.9.81:vol_ebd76f343b04f89ed4166c8f1ece0361
Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
Snapshot Factor: 1.00
  • View the newly created LV information

Using the storage-0 node as an example, view the newly allocated LVs (system LV entries are removed from the output).

[root@ks-storage-0 heketi]# lvs
  LV                                     VG                                  Attr       LSize   Pool                                Origin Data%  Meta%  Move Log Cpy%Sync Convert
  brick_7ac64e137d803cccd4b9fcaaed4be8ad vg_9af38756fe916fced666fcd3de786c19 Vwi-aotz--  95.00g tp_3c68ad0d0752d41ede13afdc3db9637b        0.05
  tp_3c68ad0d0752d41ede13afdc3db9637b    vg_9af38756fe916fced666fcd3de786c19 twi-aotz--  95.00g                                            0.05   3.31
  brick_27e193590ccdb5fba287fb66d5473074 vg_ab5f766ddc779449db2bf45bb165fbff Vwi-aotz--  45.00g tp_7bdcf1e2c3aab06cb25906f017ae1b08        0.06
  tp_7bdcf1e2c3aab06cb25906f017ae1b08    vg_ab5f766ddc779449db2bf45bb165fbff twi-aotz--  45.00g                                            0.06   6.94
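
As a cross-check, the same volume can be inspected with the native GlusterFS CLI on any storage node; the brick paths and replica count should match the Heketi output above:

# Query GlusterFS directly, bypassing Heketi
gluster volume info vol_ebd76f343b04f89ed4166c8f1ece0361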

With that, we have demonstrated the full process of expanding disks through the Heketi topology file and verifying the result.

Expansion Option 2: Direct Expansion with the Heketi CLI

Prerequisites

  • New device: /dev/sdd
  • Added capacity: 50 GB

Check Node Information

  • List the nodes to obtain the node IDs
[root@ks-storage-0 heketi]# heketi-cli node list
Id:0108350a9d13578febbfd0502f8077ff     Cluster:9ad37206ce6575b5133179ba7c6e0935
Id:5e99fe0cd727b8066f200bad5524c544     Cluster:9ad37206ce6575b5133179ba7c6e0935
Id:7bb26eb30c1c61456b5ae8d805c01cf1     Cluster:9ad37206ce6575b5133179ba7c6e0935
  • View a node's details and its existing devices (storage-0 as the example).
[root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
Node Id: 0108350a9d13578febbfd0502f8077ff
State: online
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Zone: 1
Management Hostname: 192.168.9.81
Storage Hostname: 192.168.9.81
Devices:
Id:9af38756fe916fced666fcd3de786c19   Name:/dev/sdb            State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4       Bricks:1
Id:ab5f766ddc779449db2bf45bb165fbff   Name:/dev/sdc            State:online    Size (GiB):49      Used (GiB):45      Free (GiB):4       Bricks:1

Add the New Device

The newly added disk appears in the system as /dev/sdd. The device-add command must be run once for every node.

# The command to run
heketi-cli device add --name /dev/sdd --node xxxxxx

# Actual output looks like this
[root@ks-storage-0 heketi]# heketi-cli device add --name /dev/sdd --node 0108350a9d13578febbfd0502f8077ff
Device added successfully

[root@ks-storage-0 heketi]# heketi-cli device add --name /dev/sdd --node 5e99fe0cd727b8066f200bad5524c544
Device added successfully

[root@ks-storage-0 heketi]# heketi-cli device add --name /dev/sdd --node 7bb26eb30c1c61456b5ae8d805c01cf1
Device added successfully
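
Since only the node ID changes between the three commands, the same operation can also be driven from the node list; a minimal sketch, assuming the ID is the first colon-separated field of heketi-cli node list:

# Add /dev/sdd on every node in one go
for id in $(heketi-cli node list | awk '{print $1}' | cut -d: -f2); do
  heketi-cli device add --name /dev/sdd --node "$id"
done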

Check the Updated Node Information

Using the storage-0 node as an example, view the updated node information (pay attention to the Devices section).

[root@ks-storage-0 heketi]# heketi-cli node info 0108350a9d13578febbfd0502f8077ff
Node Id: 0108350a9d13578febbfd0502f8077ff
State: online
Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
Zone: 1
Management Hostname: 192.168.9.81
Storage Hostname: 192.168.9.81
Devices:
Id:9af38756fe916fced666fcd3de786c19   Name:/dev/sdb            State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4       Bricks:1
Id:ab5f766ddc779449db2bf45bb165fbff   Name:/dev/sdc            State:online    Size (GiB):49      Used (GiB):45      Free (GiB):4       Bricks:1
Id:c189451c573814e05ebd83d46ab9a0af   Name:/dev/sdd            State:online    Size (GiB):49      Used (GiB):0       Free (GiB):49      Bricks:0

Check the Updated Topology

[root@ks-storage-0 heketi]# heketi-cli topology info

Cluster Id: 9ad37206ce6575b5133179ba7c6e0935

    File:  true
    Block: true

    Volumes:

        Name: vol_75c90b8463d73a7fd9187a8ca22ff91f
        Size: 95
        Id: 75c90b8463d73a7fd9187a8ca22ff91f
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Mount: 192.168.9.81:vol_75c90b8463d73a7fd9187a8ca22ff91f
        Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

                Bricks:
                        Id: 37006636e1fe713a395755e8d34f6f20
                        Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
                        Size (GiB): 95
                        Node: 5e99fe0cd727b8066f200bad5524c544
                        Device: 8fd529a668d5c19dfc37450b755230cd

                        Id: 3dca27f98e1c20aa092c159226ddbe4d
                        Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
                        Size (GiB): 95
                        Node: 7bb26eb30c1c61456b5ae8d805c01cf1
                        Device: 51ad0981f8fed73002f5a7f2dd0d65c5

                        Id: 7ac64e137d803cccd4b9fcaaed4be8ad
                        Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
                        Size (GiB): 95
                        Node: 0108350a9d13578febbfd0502f8077ff
                        Device: 9af38756fe916fced666fcd3de786c19


        Name: vol_ebd76f343b04f89ed4166c8f1ece0361
        Size: 45
        Id: ebd76f343b04f89ed4166c8f1ece0361
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Mount: 192.168.9.81:vol_ebd76f343b04f89ed4166c8f1ece0361
        Mount Options: backup-volfile-servers=192.168.9.82,192.168.9.83
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

                Bricks:
                        Id: 27e193590ccdb5fba287fb66d5473074
                        Path: /var/lib/heketi/mounts/vg_ab5f766ddc779449db2bf45bb165fbff/brick_27e193590ccdb5fba287fb66d5473074/brick
                        Size (GiB): 45
                        Node: 0108350a9d13578febbfd0502f8077ff
                        Device: ab5f766ddc779449db2bf45bb165fbff

                        Id: 4fab639b551e573c61141508d75bf605
                        Path: /var/lib/heketi/mounts/vg_9b39c4e288d4a1783d204d2033444c00/brick_4fab639b551e573c61141508d75bf605/brick
                        Size (GiB): 45
                        Node: 7bb26eb30c1c61456b5ae8d805c01cf1
                        Device: 9b39c4e288d4a1783d204d2033444c00

                        Id: 8eba3fb2253452999a1ec60f647dcf03
                        Path: /var/lib/heketi/mounts/vg_b648c995486b0e785f78a8b674d8b590/brick_8eba3fb2253452999a1ec60f647dcf03/brick
                        Size (GiB): 45
                        Node: 5e99fe0cd727b8066f200bad5524c544
                        Device: b648c995486b0e785f78a8b674d8b590



    Nodes:

        Node Id: 0108350a9d13578febbfd0502f8077ff
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.81
        Storage Hostnames: 192.168.9.81
        Devices:
                Id:9af38756fe916fced666fcd3de786c19   State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4
                        Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb

                        Bricks:
                                Id:7ac64e137d803cccd4b9fcaaed4be8ad   Size (GiB):95      Path: /var/lib/heketi/mounts/vg_9af38756fe916fced666fcd3de786c19/brick_7ac64e137d803cccd4b9fcaaed4be8ad/brick
                Id:ab5f766ddc779449db2bf45bb165fbff   State:online    Size (GiB):49      Used (GiB):45      Free (GiB):4
                        Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc

                        Bricks:
                                Id:27e193590ccdb5fba287fb66d5473074   Size (GiB):45      Path: /var/lib/heketi/mounts/vg_ab5f766ddc779449db2bf45bb165fbff/brick_27e193590ccdb5fba287fb66d5473074/brick
                Id:c189451c573814e05ebd83d46ab9a0af   State:online    Size (GiB):49      Used (GiB):0       Free (GiB):49
                        Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/sdd

                        Bricks:

        Node Id: 5e99fe0cd727b8066f200bad5524c544
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.82
        Storage Hostnames: 192.168.9.82
        Devices:
                Id:5cd245e9826c0bfa46bef0c0d41ed0ed   State:online    Size (GiB):49      Used (GiB):0       Free (GiB):49
                        Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/sdd

                        Bricks:
                Id:8fd529a668d5c19dfc37450b755230cd   State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4
                        Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/sdb

                        Bricks:
                                Id:37006636e1fe713a395755e8d34f6f20   Size (GiB):95      Path: /var/lib/heketi/mounts/vg_8fd529a668d5c19dfc37450b755230cd/brick_37006636e1fe713a395755e8d34f6f20/brick
                Id:b648c995486b0e785f78a8b674d8b590   State:online    Size (GiB):49      Used (GiB):45      Free (GiB):4
                        Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/sdc

                        Bricks:
                                Id:8eba3fb2253452999a1ec60f647dcf03   Size (GiB):45      Path: /var/lib/heketi/mounts/vg_b648c995486b0e785f78a8b674d8b590/brick_8eba3fb2253452999a1ec60f647dcf03/brick

        Node Id: 7bb26eb30c1c61456b5ae8d805c01cf1
        State: online
        Cluster Id: 9ad37206ce6575b5133179ba7c6e0935
        Zone: 1
        Management Hostnames: 192.168.9.83
        Storage Hostnames: 192.168.9.83
        Devices:
                Id:51ad0981f8fed73002f5a7f2dd0d65c5   State:online    Size (GiB):99      Used (GiB):95      Free (GiB):4
                        Known Paths: /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 /dev/sdb

                        Bricks:
                                Id:3dca27f98e1c20aa092c159226ddbe4d   Size (GiB):95      Path: /var/lib/heketi/mounts/vg_51ad0981f8fed73002f5a7f2dd0d65c5/brick_3dca27f98e1c20aa092c159226ddbe4d/brick
                Id:6656246eafefffaea49399444989eab1   State:online    Size (GiB):49      Used (GiB):0       Free (GiB):49
                        Known Paths: /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3 /dev/disk/by-path/pci-0000:01:04.0-scsi-0:0:0:3 /dev/sdd

                        Bricks:
                Id:9b39c4e288d4a1783d204d2033444c00   State:online    Size (GiB):49      Used (GiB):45      Free (GiB):4
                        Known Paths: /dev/disk/by-path/pci-0000:01:03.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 /dev/sdc

                        Bricks:
                                Id:4fab639b551e573c61141508d75bf605   Size (GiB):45      Path: /var/lib/heketi/mounts/vg_9b39c4e288d4a1783d204d2033444c00/brick_4fab639b551e573c61141508d75bf605/brick

Note: pay particular attention to the Devices entries in the output above.

Check the Updated VG Information

Using the storage-0 node as an example, view the updated VG information.

[root@ks-storage-0 heketi]# vgs
  VG                                  #PV #LV #SN Attr   VSize   VFree
  openeuler                             1   2   0 wz--n- <19.00g     0
  vg_9af38756fe916fced666fcd3de786c19   1   2   0 wz--n-  99.87g <3.92g
  vg_ab5f766ddc779449db2bf45bb165fbff   1   2   0 wz--n-  49.87g <4.42g
  vg_c189451c573814e05ebd83d46ab9a0af   1   0   0 wz--n-  49.87g 49.87g

To save space, the PVC creation and verification steps are omitted here; readers can verify on their own by following the earlier steps.

With that, we have demonstrated the full process of expanding disks directly with the Heketi CLI and verifying the result.

Common Problems

Problem 1

  • Error message
[root@ks-master-0 k8s-yaml]# kubectl apply -f pvc-test-10g.yaml
The PersistentVolumeClaim "test-data-10G" is invalid: metadata.name: Invalid value: "test-data-10G": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
  • Solution

When creating the PVC, the metadata.name defined in the YAML file used an uppercase letter (test-data-10G); changing it to lowercase (test-data-10g) fixes the error.

Problem 2

  • Error message
The PersistentVolumeClaim "test-data-10g" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value
  • Solution

This was my own mistake: a PVC named test-data-10g had been created earlier, and I then lowered the storage value in the same file and re-applied it, which triggered the error above.
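
For reference, Kubernetes never lets a PVC's storage request shrink; it may only grow, and even then only when the StorageClass permits expansion. A quick way to check the glusterfs class used here:

# Prints "true" when PVCs of this class may be expanded; shrinking is always rejected
kubectl get sc glusterfs -o jsonpath='{.allowVolumeExpansion}'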

Problem 3

  • Error message
[root@ks-storage-0 heketi]# heketi-cli topology load --json=/etc/heketi/topology.json
        Found node 192.168.9.81 on cluster 9ad37206ce6575b5133179ba7c6e0935
                Found device /dev/sdb
                Adding device /dev/sdc ... Unable to add device: Initializing device /dev/sdc failed (already initialized or contains data?):   No device found for /dev/sdc.
        Found node 192.168.9.82 on cluster 9ad37206ce6575b5133179ba7c6e0935
                Found device /dev/sdb
                Adding device /dev/sdc ... Unable to add device: Initializing device /dev/sdc failed (already initialized or contains data?):   No device found for /dev/sdc.
        Found node 192.168.9.83 on cluster 9ad37206ce6575b5133179ba7c6e0935
                Found device /dev/sdb
                Adding device /dev/sdc ... Unable to add device: Initializing device /dev/sdc failed (already initialized or contains data?):   No device found for /dev/sdc.
  • Solution

Also my own mistake: I ran the reload command before the /dev/sdc disk had actually been attached to the machines.

Summary

This article described in detail, for a GlusterFS storage cluster managed by Heketi, the two solutions an operator can use to add new physical disks to the existing cluster once the data disks are fully allocated and no new data volumes can be created:

  • Option 1: adjust the topology file
  • Option 2: expand directly with the Heketi CLI

This article is based on a real production case, and every operation has been verified in practice. Even so: data is priceless, expansion carries risk, and operations call for caution.

