Deploying a Ceph Cluster with cephadm

Introduction

Reference manuals:

https://access.redhat.com/doc...

http://docs.ceph.org.cn/

Ceph provides three forms of storage:

Block storage: behaves like an ordinary hard disk, presenting users with a "disk".

File system storage: an NFS-like sharing model that gives users shared folders.

Object storage: similar to a cloud drive such as Baidu Netdisk; it requires a dedicated client.

Ceph is also a distributed storage system and is very flexible: to expand capacity, you simply add servers to the cluster. Ceph stores data as multiple replicas; in production a file should be kept in at least 3 copies, and Ceph's default is 3-replica storage.
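
As a hedged illustration of the replica behaviour described above (runnable only once the cluster and pools from the later sections exist; cephfs_data is the data pool created below):

# Default replica count applied to newly created pools
ceph config get mon osd_pool_default_size
# Replica count of an existing pool
ceph osd pool get cephfs_data size
# Adjust it per pool if needed; 3 is the usual production value
ceph osd pool set cephfs_data size 3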

Ceph components

Ceph OSD daemon: a Ceph OSD stores data. In addition, OSDs use the CPU, memory, and network of their node to perform data replication, erasure coding, rebalancing, recovery, monitoring, and reporting. A storage node runs one OSD daemon per data disk it contributes.

Ceph Mon (monitor): a Ceph Mon maintains the master copy of the Ceph storage cluster maps and the current state of the cluster. Monitors require strong consistency and must reach agreement on the cluster state. They maintain the various maps that describe that state, including the monitor map, OSD map, placement group (PG) map, and CRUSH map.

MDS: the Ceph Metadata Server (MDS) stores metadata for the Ceph file system (CephFS).

RGW: the object storage gateway; it mainly provides API interfaces (S3/Swift compatible) for software that accesses Ceph. The status commands after this list show how to query each of these daemon types.
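
Once the cluster is running, each daemon type can be checked with a one-line status command; a minimal sketch:

# Monitor quorum
ceph mon stat
# OSD count and up/in state
ceph osd stat
# MDS state for CephFS
ceph mds stat
# Overall health, including active RGW daemons
ceph -s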

Installation

Configure IP addresses

# Configure IP addresses
ssh root@192.168.1.154 "nmcli con mod ens18 ipv4.addresses 192.168.1.25/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.179 "nmcli con mod ens18 ipv4.addresses 192.168.1.26/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
ssh root@192.168.1.181 "nmcli con mod ens18 ipv4.addresses 192.168.1.27/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18"
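
As a quick check that the addressing took effect (a hedged sketch, assuming the same interface name ens18 used above):

# The interface should now carry the static address
ip addr show ens18
# The gateway should be reachable
ping -c 3 192.168.1.1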

Configure the base environment

# Set the hostname (run the matching command on each node)
hostnamectl set-hostname ceph-1
hostnamectl set-hostname ceph-2
hostnamectl set-hostname ceph-3
# Update to the latest packages
yum update -y
# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
# Disable the firewall
systemctl disable --now firewalld
# Set up passwordless SSH
ssh-keygen -f /root/.ssh/id_rsa -P ''
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.25
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.26
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.27
# Check the disks
[root@ceph-1 ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  100G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   99G  0 part
  ├─cs-root 253:0    0 61.2G  0 lvm  /
  ├─cs-swap 253:1    0  7.9G  0 lvm  [SWAP]
  └─cs-home 253:2    0 29.9G  0 lvm  /home
sdb           8:16   0  100G  0 disk
[root@ceph-1 ~]#
# Configure /etc/hosts
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.25 ceph-1
192.168.1.26 ceph-2
192.168.1.27 ceph-3
EOF
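
A quick sanity check of the base environment before continuing can save debugging later; a minimal sketch, assuming the three hostnames from /etc/hosts above:

# SELinux should report Permissive now (Disabled after a reboot)
getenforce
# firewalld should report inactive
systemctl is-active firewalld
# Passwordless SSH plus name resolution should work for every node
for h in ceph-1 ceph-2 ceph-3; do ssh -o BatchMode=yes $h hostname; done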

Install time synchronization and Docker

# Install the required packages
yum install epel* -y
yum install -y ceph-mon ceph-osd ceph-mds ceph-radosgw
# Server side (ceph-1)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd
# Client side (ceph-2, ceph-3)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.1.25 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd
# Verify from a client
chronyc sources -v
# Install Docker
curl -sSL https://get.daocloud.io/docker | sh
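
Before bootstrapping, it is worth confirming that time synchronization and Docker are actually working, since cephadm's pre-flight checks depend on both. A minimal sketch (the enable --now line is only needed if the install script did not already start the Docker service):

# Show the current NTP sync status and offset
chronyc tracking
# Make sure Docker is enabled and running, then check the version
systemctl enable --now docker
docker --version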

Install the cluster

# Install the cluster
yum install -y python3
# Install the cephadm tool
curl --silent --remote-name --location https://mirrors.chenby.cn/https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm    # make the downloaded script executable
# Set up the repository
./cephadm add-repo --release 17.2.5
sed -i 's#download.ceph.com#mirrors.ustc.edu.cn/ceph#' /etc/yum.repos.d/ceph.repo
./cephadm install
# Bootstrap a new cluster
[root@ceph-1 ~]# cephadm bootstrap --mon-ip 192.168.1.25
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 976e04fe-9315-11ed-a275-e29e49e9189c
Verifying IP 192.168.1.25 port 3300 ...
Verifying IP 192.168.1.25 port 6789 ...
Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24`
Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 4...
mgr epoch 4 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 8...
mgr epoch 8 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
             URL: https://ceph-1:8443/
            User: admin
        Password: dsvi6yiat7
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
        sudo /usr/sbin/cephadm shell --fsid 976e04fe-9315-11ed-a275-e29e49e9189c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
        sudo /usr/sbin/cephadm shell
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.
[root@ceph-1 ~]#

Check the containers

[root@ceph-1 ~]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED         SIZE
quay.io/ceph/ceph                  v17       cc65afd6173a   2 months ago    1.36GB
quay.io/ceph/ceph-grafana          8.3.5     dad864ee21e9   9 months ago    558MB
quay.io/prometheus/prometheus      v2.33.4   514e6a882f6e   10 months ago   204MB
quay.io/prometheus/node-exporter   v1.3.1    1dbe0e931976   13 months ago   20.9MB
quay.io/prometheus/alertmanager    v0.23.0   ba2b418f427c   16 months ago   57.5MB
[root@ceph-1 ~]#
[root@ceph-1 ~]# docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED              STATUS              PORTS     NAMES
41a980ad57b6   quay.io/ceph/ceph-grafana:8.3.5           "/bin/sh -c 'grafana…"   32 seconds ago       Up 31 seconds                 ceph-976e04fe-9315-11ed-a275-e29e49e9189c-grafana-ceph-1
c1d92377e2f2   quay.io/prometheus/alertmanager:v0.23.0   "/bin/alertmanager -…"   33 seconds ago       Up 32 seconds                 ceph-976e04fe-9315-11ed-a275-e29e49e9189c-alertmanager-ceph-1
9262faff37be   quay.io/prometheus/prometheus:v2.33.4     "/bin/prometheus --c…"   42 seconds ago       Up 41 seconds                 ceph-976e04fe-9315-11ed-a275-e29e49e9189c-prometheus-ceph-1
2601411f95a6   quay.io/prometheus/node-exporter:v1.3.1   "/bin/node_exporter …"   About a minute ago   Up About a minute             ceph-976e04fe-9315-11ed-a275-e29e49e9189c-node-exporter-ceph-1
a6ca018a7620   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   2 minutes ago        Up 2 minutes                  ceph-976e04fe-9315-11ed-a275-e29e49e9189c-crash-ceph-1
f9e9de110612   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mgr -…"   3 minutes ago        Up 3 minutes                  ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mgr-ceph-1-svfnsm
cac707c88b83   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   3 minutes ago        Up 3 minutes                  ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mon-ceph-1
[root@ceph-1 ~]#

Use the cephadm shell

[root@ceph-1 ~]# cephadm shell   # enter the containerized command mode
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
[ceph: root@ceph-1 /]# ceph -s
  cluster:
    id:     976e04fe-9315-11ed-a275-e29e49e9189c
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-1 (age 4m)
    mgr: ceph-1.svfnsm(active, since 2m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[ceph: root@ceph-1 /]# ceph orch ps   # list the components currently running in the cluster (including other nodes)
NAME                  HOST    PORTS        STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
alertmanager.ceph-1   ceph-1  *:9093,9094  running (2m)     2m ago   4m    15.1M        -           ba2b418f427c  c1d92377e2f2
crash.ceph-1          ceph-1               running (4m)     2m ago   4m    6676k        -  17.2.5   cc65afd6173a  a6ca018a7620
grafana.ceph-1        ceph-1  *:3000       running (2m)     2m ago   3m    39.1M        -  8.3.5    dad864ee21e9  41a980ad57b6
mgr.ceph-1.svfnsm     ceph-1  *:9283       running (5m)     2m ago   5m     426M        -  17.2.5   cc65afd6173a  f9e9de110612
mon.ceph-1            ceph-1               running (5m)     2m ago   5m    29.0M    2048M  17.2.5   cc65afd6173a  cac707c88b83
node-exporter.ceph-1  ceph-1  *:9100       running (3m)     2m ago   3m    13.2M        -           1dbe0e931976  2601411f95a6
prometheus.ceph-1     ceph-1  *:9095       running (3m)     2m ago   3m    34.4M        -           514e6a882f6e  9262faff37be
[ceph: root@ceph-1 /]# ceph orch ps --daemon-type mon   # check the status of a single component
NAME        HOST    PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
mon.ceph-1  ceph-1         running (5m)     2m ago   5m    29.0M    2048M  17.2.5   cc65afd6173a  cac707c88b83
[ceph: root@ceph-1 /]# exit   # leave the shell
exit
[root@ceph-1 ~]#
# A second way to run ceph commands
[root@ceph-1 ~]# cephadm shell -- ceph -s
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
  cluster:
    id:     976e04fe-9315-11ed-a275-e29e49e9189c
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-1 (age 6m)
    mgr: ceph-1.svfnsm(active, since 4m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@ceph-1 ~]#
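
Until ceph-common is installed in the next step, a shell alias can make the second form less verbose; this is a hedged convenience, not part of the original walkthrough:

# Run ceph commands from the host without entering the containerized shell
alias ceph='cephadm shell -- ceph'
ceph -s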

Install the ceph-common package

# Install the ceph-common package
[root@ceph-1 ~]# cephadm install ceph-common
Installing packages ['ceph-common']...
[root@ceph-1 ~]# ceph -v
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
[root@ceph-1 ~]#
# Distribute the cluster's SSH public key to the other nodes so cephadm can manage them
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-3

Create mon and mgr daemons

# Create mon and mgr daemons
ceph orch host add ceph-2
ceph orch host add ceph-3
# List the hosts currently managed by the cluster
[root@ceph-1 ~]# ceph orch host ls
HOST    ADDR          LABELS  STATUS
ceph-1  192.168.1.25  _admin
ceph-2  192.168.1.26
ceph-3  192.168.1.27
3 hosts in cluster
[root@ceph-1 ~]#
# By default a Ceph cluster allows up to 5 mons and 2 mgrs; the placement can be changed manually with
# ceph orch apply mon --placement="3 node1 node2 node3"
[root@ceph-1 ~]# ceph orch apply mon --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mon update...
[root@ceph-1 ~]# ceph orch apply mgr --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mgr update...
[root@ceph-1 ~]# ceph orch ls
NAME           PORTS        RUNNING  REFRESHED  AGE   PLACEMENT
alertmanager   ?:9093,9094      1/1  30s ago    17m   count:1
crash                           3/3  4m ago     17m   *
grafana        ?:3000           1/1  30s ago    17m   count:1
mgr                             3/3  4m ago     46s   ceph-1;ceph-2;ceph-3;count:3
mon                             3/3  4m ago     118s  ceph-1;ceph-2;ceph-3;count:3
node-exporter  ?:9100           3/3  4m ago     17m   *
prometheus     ?:9095           1/1  30s ago    17m   count:1
[root@ceph-1 ~]#
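
After the placement change settles, all three mons should be in quorum and one mgr should be active with two standbys; a quick check:

ceph mon stat
ceph mgr stat
ceph orch ps --daemon-type mgr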

Create OSDs

# Create OSDs
[root@ceph-1 ~]# ceph orch daemon add osd ceph-1:/dev/sdb
Created osd(s) 0 on host 'ceph-1'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-2:/dev/sdb
Created osd(s) 1 on host 'ceph-2'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-3:/dev/sdb
Created osd(s) 2 on host 'ceph-3'
[root@ceph-1 ~]#
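
Instead of adding each disk by hand, cephadm can also be told to consume every eligible unused device it discovers; a hedged alternative to the per-disk commands above:

# Let the orchestrator create OSDs on all available, unused disks
ceph orch apply osd --all-available-devices
# Verify the devices and the resulting OSD tree
ceph orch device ls
ceph osd tree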

Create the MDS

# Create the MDS
# First create the CephFS pools; if the PG count is not specified it is tuned automatically
[root@ceph-1 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[root@ceph-1 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
[root@ceph-1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
[root@ceph-1 ~]#
# Start the mds service: cephfs is the file system name; --placement sets how many mds daemons are needed, followed by the host names
[root@ceph-1 ~]# ceph orch apply mds cephfs --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mds.cephfs update...
[root@ceph-1 ~]#
# Check that each node has started an mds container; ceph orch ps can also show the containers running on a given node
[root@ceph-1 ~]# ceph orch ps --daemon-type mds
NAME                      HOST    PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
mds.cephfs.ceph-1.zgcgrw  ceph-1         running (52s)    44s ago  52s    17.0M        -  17.2.5   cc65afd6173a  aba28ef97b9a
mds.cephfs.ceph-2.vvpuyk  ceph-2         running (51s)    45s ago  51s    14.1M        -  17.2.5   cc65afd6173a  940a019d4c75
mds.cephfs.ceph-3.afnozf  ceph-3         running (54s)    45s ago  54s    14.2M        -  17.2.5   cc65afd6173a  bd17d6414aa9
[root@ceph-1 ~]#
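
To actually use the new file system, it can be mounted with the kernel CephFS client; a minimal sketch not covered by the original steps, assuming ceph-common and the admin key are available on the mounting host:

# Extract the admin key into a secret file
ceph auth get-key client.admin > /etc/ceph/admin.secret
mkdir -p /mnt/cephfs
# Mount via one of the monitors
mount -t ceph 192.168.1.25:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
df -h /mnt/cephfs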

Create the RGW

# Create the RGW
# First create a realm
[root@ceph-1 ~]# radosgw-admin realm create --rgw-realm=myorg --default
{
    "id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
    "name": "myorg",
    "current_period": "16769237-0ed5-4fad-8822-abc444292d0b",
    "epoch": 1
}
[root@ceph-1 ~]#
# Create a zonegroup
[root@ceph-1 ~]# radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
{
    "id": "4d978fe1-b158-4b3a-93f7-87fbb31f6e7a",
    "name": "default",
    "api_name": "default",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [],
    "placement_targets": [],
    "default_placement": "",
    "realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
    "sync_policy": {
        "groups": []
    }
}
[root@ceph-1 ~]#
# Create a zone
[root@ceph-1 ~]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
{
    "id": "5ac7f118-a69c-4dec-b174-f8432e7115b7",
    "name": "cn-east-1",
    "domain_root": "cn-east-1.rgw.meta:root",
    "control_pool": "cn-east-1.rgw.control",
    "gc_pool": "cn-east-1.rgw.log:gc",
    "lc_pool": "cn-east-1.rgw.log:lc",
    "log_pool": "cn-east-1.rgw.log",
    "intent_log_pool": "cn-east-1.rgw.log:intent",
    "usage_log_pool": "cn-east-1.rgw.log:usage",
    "roles_pool": "cn-east-1.rgw.meta:roles",
    "reshard_pool": "cn-east-1.rgw.log:reshard",
    "user_keys_pool": "cn-east-1.rgw.meta:users.keys",
    "user_email_pool": "cn-east-1.rgw.meta:users.email",
    "user_swift_pool": "cn-east-1.rgw.meta:users.swift",
    "user_uid_pool": "cn-east-1.rgw.meta:users.uid",
    "otp_pool": "cn-east-1.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "cn-east-1.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "cn-east-1.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "cn-east-1.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
    "notif_pool": "cn-east-1.rgw.log:notif"
}
[root@ceph-1 ~]#
# Deploy radosgw daemons for the given realm and zone
[root@ceph-1 ~]# ceph orch apply rgw myorg cn-east-1 --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled rgw.myorg update...
[root@ceph-1 ~]#
# Verify that each node has started an rgw container
[root@ceph-1 ~]# ceph orch ps --daemon-type rgw
NAME                     HOST    PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
rgw.myorg.ceph-1.tzzauo  ceph-1  *:80   running (60s)    50s ago  60s    18.6M        -  17.2.5   cc65afd6173a  2ce31e5c9d35
rgw.myorg.ceph-2.zxwpfj  ceph-2  *:80   running (61s)    51s ago  61s    20.0M        -  17.2.5   cc65afd6173a  a334e346ae5c
rgw.myorg.ceph-3.bvsydw  ceph-3  *:80   running (58s)    51s ago  58s    18.6M        -  17.2.5   cc65afd6173a  97b09ba01821
[root@ceph-1 ~]#
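
A quick way to exercise the gateway is to hit the S3 endpoint and create a test user; a hedged sketch (testuser is an illustrative name, not part of the original deployment):

# Each rgw daemon listens on port 80 (see the PORTS column above); an anonymous request should return an S3-style XML response
curl http://ceph-1:80
# Create an S3 user; the access and secret keys appear in the JSON output
radosgw-admin user create --uid=testuser --display-name="Test User"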

Install ceph-common on all nodes

# Install the ceph-common package on all nodes
scp /etc/yum.repos.d/ceph.repo ceph-2:/etc/yum.repos.d/    # sync the ceph repo from the admin node to the other nodes
scp /etc/yum.repos.d/ceph.repo ceph-3:/etc/yum.repos.d/
yum -y install ceph-common    # install ceph-common on each node; it provides the ceph command and creates the /etc/ceph directory
scp /etc/ceph/ceph.conf ceph-2:/etc/ceph/    # copy ceph.conf to the corresponding nodes
scp /etc/ceph/ceph.conf ceph-3:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring ceph-2:/etc/ceph/    # copy the admin keyring to the corresponding nodes
scp /etc/ceph/ceph.client.admin.keyring ceph-3:/etc/ceph/

Test

# Test from another node
[root@ceph-3 ~]# ceph -s
  cluster:
    id:     976e04fe-9315-11ed-a275-e29e49e9189c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 17m)
    mgr: ceph-1.svfnsm(active, since 27m), standbys: ceph-2.zuetkd, ceph-3.vntnlf
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 8m), 3 in (since 8m)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   7 pools, 177 pgs
    objects: 226 objects, 585 KiB
    usage:   108 MiB used, 300 GiB / 300 GiB avail
    pgs:     177 active+clean

[root@ceph-3 ~]#

Access the web interfaces

# Web access
https://192.168.1.25:8443     # Ceph Dashboard
http://192.168.1.25:9095/     # Prometheus
https://192.168.1.25:3000/    # Grafana
User: admin
Password: dsvi6yiat7

Common commands

ceph orch ls    # list the services running in the cluster
ceph orch host ls    # list the hosts in the cluster
ceph orch ps    # list detailed information about the containers in the cluster
ceph orch apply mon --placement="3 node1 node2 node3"    # adjust the number of daemons for a component
ceph orch ps --daemon-type rgw    # --daemon-type: restrict the output to one component
ceph orch host label add node1 mon    # add a label to a host
ceph orch apply mon label:mon    # tell cephadm to place mons by label; afterwards only hosts carrying the mon label become mons, although mons that are already running are not stopped immediately
ceph orch device ls    # list the storage devices in the cluster
# For example, to deploy a second monitor on newhost1 at IP 10.1.2.123 and a third monitor on newhost2 in the 10.1.2.0/24 network:
ceph orch apply mon --unmanaged    # disable automatic mon deployment
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24
