Deploying a Ceph Cluster with cephadm
Introduction
Manuals:
https://access.redhat.com/doc…
http://docs.ceph.org.cn/
Ceph provides the following forms of storage:
Block storage: presented like an ordinary hard disk, giving the consumer a "disk".
File system storage: a shared form similar to NFS, giving the consumer a shared directory.
Object storage: similar to a cloud drive such as Baidu Netdisk; it requires a dedicated client.
Ceph is also a distributed storage system and is very flexible. To scale out, you simply add servers to the Ceph cluster. Ceph stores data as multiple replicas; in production a file should be kept in at least 3 copies, and Ceph's default is also 3 replicas.
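As a quick illustration of where that replica count lives (to be run after the cluster below is deployed; "mypool" is a hypothetical pool name):
# Inspect and adjust the replica count ("mypool" is a hypothetical pool)
ceph config get mon osd_pool_default_size   # default replica count for new pools
ceph osd pool get mypool size               # replica count of an existing pool
ceph osd pool set mypool size 3             # set a pool to 3 replicas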
Ceph components
Ceph OSD daemon: Ceph OSDs store the data. In addition, a Ceph OSD uses the CPU, memory and network of its node to perform data replication, erasure coding, rebalancing, recovery, monitoring and reporting. A storage node runs one OSD daemon for each disk it contributes to the cluster.
Ceph Mon (monitor): the Ceph Mon maintains the master copy of the Ceph storage cluster maps and the current state of the cluster. Monitors require strong consistency so that they agree on the cluster state. They maintain the maps that describe the cluster, including the monitor map, the OSD map, the placement group (PG) map and the CRUSH map.
MDS: the Ceph Metadata Server (MDS) stores metadata for the Ceph file system (CephFS).
RGW: the object storage gateway. It mainly provides an API for applications that access Ceph object storage.
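Once the cluster built below is running, each of these daemon types can be listed with the orchestrator; the same ceph orch ps command is used later in this article:
# List the daemons of each type (run on a node with the admin keyring)
ceph orch ps --daemon-type osd   # OSD daemons
ceph orch ps --daemon-type mon   # monitors
ceph orch ps --daemon-type mds   # metadata servers
ceph orch ps --daemon-type rgw   # object gateways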
Installation
Configure IP addresses
# Configure IP addresses
ssh root@192.168.1.154 "nmcli con mod ens18 ipv4.addresses 192.168.1.25/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.179 "nmcli con mod ens18 ipv4.addresses 192.168.1.26/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
ssh root@192.168.1.181 "nmcli con mod ens18 ipv4.addresses 192.168.1.27/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns 8.8.8.8; nmcli con up ens18"
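Optionally verify that the new addresses took effect (a simple check, assuming the interface is still named ens18):
# Optional: verify the address configuration on each node
for ip in 192.168.1.25 192.168.1.26 192.168.1.27; do
  ssh root@$ip "ip -4 addr show ens18 | grep inet"
done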
Configure the base environment
# Set hostnames
hostnamectl set-hostname ceph-1
hostnamectl set-hostname ceph-2
hostnamectl set-hostname ceph-3
# Update to the latest packages
yum update -y
# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
# Disable the firewall
systemctl disable --now firewalld
# Configure passwordless SSH
ssh-keygen -f /root/.ssh/id_rsa -P ''
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.25
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.26
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.27
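A quick optional check that key-based login now works to every node:
# Optional: verify passwordless SSH to all three nodes
for ip in 192.168.1.25 192.168.1.26 192.168.1.27; do
  ssh -o BatchMode=yes root@$ip hostname
done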
# Check the disks
[root@ceph-1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
├─cs-root 253:0 0 61.2G 0 lvm /
├─cs-swap 253:1 0 7.9G 0 lvm [SWAP]
└─cs-home 253:2 0 29.9G 0 lvm /home
sdb 8:16 0 100G 0 disk
[root@ceph-1 ~]#
# Configure /etc/hosts
cat > /etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.25 ceph-1
192.168.1.26 ceph-2
192.168.1.27 ceph-3
EOF
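Every node needs the same /etc/hosts content; one way to push it to the other nodes (by IP, since the names may not resolve there yet):
# Copy /etc/hosts to the other nodes
scp /etc/hosts root@192.168.1.26:/etc/hosts
scp /etc/hosts root@192.168.1.27:/etc/hosts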
Install time synchronization and Docker
# Install the required packages
yum install epel* -y
yum install -y ceph-mon ceph-osd ceph-mds ceph-radosgw
# Server side (ceph-1 acts as the time source)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd
# Client side (ceph-2 and ceph-3)
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.1.25 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd
# Verify on a client
chronyc sources -v
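chronyc tracking (and timedatectl) give a quick confirmation that a client has actually synchronized to ceph-1; this check is optional:
# Optional: confirm synchronization status on a client
chronyc tracking
timedatectl | grep -i synchronized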
# Install Docker
curl -sSL https://get.daocloud.io/docker | sh
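The convenience script does not necessarily enable the service, so as an extra (optional) step make sure Docker is running before bootstrapping:
# Make sure Docker is enabled and running (cephadm needs a working container engine)
systemctl enable --now docker
docker --version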
Install the cluster
# Install the cluster
yum install -y python3
# Install the cephadm tool
curl --silent --remote-name --location https://mirrors.chenby.cn/https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm
# Create the repository configuration
./cephadm add-repo --release 17.2.5
sed -i 's#download.ceph.com#mirrors.ustc.edu.cn/ceph#' /etc/yum.repos.d/ceph.repo
./cephadm install
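Optionally confirm that cephadm was installed as a system package (a small sanity check):
# Sanity check: the cephadm package should now be installed system-wide
which cephadm
rpm -q cephadm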
# Bootstrap a new cluster
[root@ceph-1 ~]# cephadm bootstrap --mon-ip 192.168.1.25
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 976e04fe-9315-11ed-a275-e29e49e9189c
Verifying IP 192.168.1.25 port 3300 ...
Verifying IP 192.168.1.25 port 6789 ...
Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24`
Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 4...
mgr epoch 4 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 8...
mgr epoch 8 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
URL: https://ceph-1:8443/
User: admin
Password: dsvi6yiat7
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo /usr/sbin/cephadm shell --fsid 976e04fe-9315-11ed-a275-e29e49e9189c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo /usr/sbin/cephadm shell
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.
[root@ceph-1 ~]#
View the containers
[root@ceph-1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/ceph/ceph v17 cc65afd6173a 2 months ago 1.36GB
quay.io/ceph/ceph-grafana 8.3.5 dad864ee21e9 9 months ago 558MB
quay.io/prometheus/prometheus v2.33.4 514e6a882f6e 10 months ago 204MB
quay.io/prometheus/node-exporter v1.3.1 1dbe0e931976 13 months ago 20.9MB
quay.io/prometheus/alertmanager v0.23.0 ba2b418f427c 16 months ago 57.5MB
[root@ceph-1 ~]#
[root@ceph-1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
41a980ad57b6 quay.io/ceph/ceph-grafana:8.3.5 "/bin/sh -c'grafana…" 32 seconds ago Up 31 seconds ceph-976e04fe-9315-11ed-a275-e29e49e9189c-grafana-ceph-1
c1d92377e2f2 quay.io/prometheus/alertmanager:v0.23.0 "/bin/alertmanager -…" 33 seconds ago Up 32 seconds ceph-976e04fe-9315-11ed-a275-e29e49e9189c-alertmanager-ceph-1
9262faff37be quay.io/prometheus/prometheus:v2.33.4 "/bin/prometheus --c…" 42 seconds ago Up 41 seconds ceph-976e04fe-9315-11ed-a275-e29e49e9189c-prometheus-ceph-1
2601411f95a6 quay.io/prometheus/node-exporter:v1.3.1 "/bin/node_exporter …" About a minute ago Up About a minute ceph-976e04fe-9315-11ed-a275-e29e49e9189c-node-exporter-ceph-1
a6ca018a7620 quay.io/ceph/ceph "/usr/bin/ceph-crash…" 2 minutes ago Up 2 minutes ceph-976e04fe-9315-11ed-a275-e29e49e9189c-crash-ceph-1
f9e9de110612 quay.io/ceph/ceph:v17 "/usr/bin/ceph-mgr -…" 3 minutes ago Up 3 minutes ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mgr-ceph-1-svfnsm
cac707c88b83 quay.io/ceph/ceph:v17 "/usr/bin/ceph-mon -…" 3 minutes ago Up 3 minutes ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mon-ceph-1
[root@ceph-1 ~]#
Using the cephadm shell command
[root@ceph-1 ~]# cephadm shell   # enter the containerized ceph shell
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]# ceph -s
cluster:
id: 976e04fe-9315-11ed-a275-e29e49e9189c
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum ceph-1 (age 4m)
mgr: ceph-1.svfnsm(active, since 2m)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]# ceph orch ps   # list the components running in the cluster (including other nodes)
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
alertmanager.ceph-1 ceph-1 *:9093,9094 running (2m) 2m ago 4m 15.1M - ba2b418f427c c1d92377e2f2
crash.ceph-1 ceph-1 running (4m) 2m ago 4m 6676k - 17.2.5 cc65afd6173a a6ca018a7620
grafana.ceph-1 ceph-1 *:3000 running (2m) 2m ago 3m 39.1M - 8.3.5 dad864ee21e9 41a980ad57b6
mgr.ceph-1.svfnsm ceph-1 *:9283 running (5m) 2m ago 5m 426M - 17.2.5 cc65afd6173a f9e9de110612
mon.ceph-1 ceph-1 running (5m) 2m ago 5m 29.0M 2048M 17.2.5 cc65afd6173a cac707c88b83
node-exporter.ceph-1 ceph-1 *:9100 running (3m) 2m ago 3m 13.2M - 1dbe0e931976 2601411f95a6
prometheus.ceph-1 ceph-1 *:9095 running (3m) 2m ago 3m 34.4M - 514e6a882f6e 9262faff37be
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]# ceph orch ps --daemon-type mon   # check the status of a single component type
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
mon.ceph-1 ceph-1 running (5m) 2m ago 5m 29.0M 2048M 17.2.5 cc65afd6173a cac707c88b83
[ceph: root@ceph-1 /]#
[ceph: root@ceph-1 /]# exit   # leave the cephadm shell
exit
[root@ceph-1 ~]#
# A second way to run ceph commands
[root@ceph-1 ~]# cephadm shell -- ceph -s
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
cluster:
id: 976e04fe-9315-11ed-a275-e29e49e9189c
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum ceph-1 (age 6m)
mgr: ceph-1.svfnsm(active, since 4m)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
[root@ceph-1 ~]#
Install the ceph-common package
# Install the ceph-common package
[root@ceph-1 ~]# cephadm install ceph-common
Installing packages ['ceph-common']...
[root@ceph-1 ~]#
[root@ceph-1 ~]# ceph -v
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
[root@ceph-1 ~]#
# Distribute the cluster's SSH public key so the other nodes can be managed
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-3
Create mon and mgr
# Create mon and mgr
ceph orch host add ceph-2
ceph orch host add ceph-3
# List the hosts currently managed by the cluster
[root@ceph-1 ~]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph-1 192.168.1.25 _admin
ceph-2 192.168.1.26
ceph-3 192.168.1.27
3 hosts in cluster
[root@ceph-1 ~]#
# By default a Ceph cluster deploys up to 5 mons and 2 mgrs; this can be adjusted manually with ceph orch apply mon --placement="3 node1 node2 node3"
[root@ceph-1 ~]# ceph orch apply mon --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mon update...
[root@ceph-1 ~]#
[root@ceph-1 ~]# ceph orch apply mgr --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mgr update...
[root@ceph-1 ~]#
[root@ceph-1 ~]# ceph orch ls
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
alertmanager ?:9093,9094 1/1 30s ago 17m count:1
crash 3/3 4m ago 17m *
grafana ?:3000 1/1 30s ago 17m count:1
mgr 3/3 4m ago 46s ceph-1;ceph-2;ceph-3;count:3
mon 3/3 4m ago 118s ceph-1;ceph-2;ceph-3;count:3
node-exporter ?:9100 3/3 4m ago 17m *
prometheus ?:9095 1/1 30s ago 17m count:1
[root@ceph-1 ~]#
Create OSDs
# Create OSDs
[root@ceph-1 ~]# ceph orch daemon add osd ceph-1:/dev/sdb
Created osd(s) 0 on host 'ceph-1'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-2:/dev/sdb
Created osd(s) 1 on host 'ceph-2'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-3:/dev/sdb
Created osd(s) 2 on host 'ceph-3'
[root@ceph-1 ~]#
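Instead of adding each disk by hand, cephadm can also create OSDs on every eligible unused device; this is an alternative to the per-disk commands above, not an additional step:
# Alternative: consume all eligible unused devices automatically
ceph orch device ls                           # list candidate devices first
ceph orch apply osd --all-available-devices   # create OSDs on every available device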
Create MDS
# Create MDS
# First create the CephFS pools; if no PG count is specified, it is auto-tuned by default
[root@ceph-1 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[root@ceph-1 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
[root@ceph-1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
[root@ceph-1 ~]#
# Enable the MDS component. cephfs: the file system name; --placement: how many MDS daemons the cluster needs, followed by the host names
[root@ceph-1 ~]# ceph orch apply mds cephfs --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mds.cephfs update...
[root@ceph-1 ~]#
# Check whether the MDS container has started on each node; ceph orch ps can also be used to see the containers running on a specific node
[root@ceph-1 ~]# ceph orch ps --daemon-type mds
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
mds.cephfs.ceph-1.zgcgrw ceph-1 running (52s) 44s ago 52s 17.0M - 17.2.5 cc65afd6173a aba28ef97b9a
mds.cephfs.ceph-2.vvpuyk ceph-2 running (51s) 45s ago 51s 14.1M - 17.2.5 cc65afd6173a 940a019d4c75
mds.cephfs.ceph-3.afnozf ceph-3 running (54s) 45s ago 54s 14.2M - 17.2.5 cc65afd6173a bd17d6414aa9
[root@ceph-1 ~]#
[root@ceph-1 ~]#
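With the MDS daemons running, the new file system can be mounted from a host that has the admin keyring; a minimal sketch using the kernel client (/mnt/cephfs is an arbitrary mount point):
# Minimal mount test with the kernel client (assumes the admin keyring is on this host)
mkdir -p /mnt/cephfs
mount -t ceph 192.168.1.25:6789:/ /mnt/cephfs -o name=admin,secret=$(ceph auth get-key client.admin)
df -h /mnt/cephfs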
Create RGW
# Create RGW
# First create a realm
[root@ceph-1 ~]# radosgw-admin realm create --rgw-realm=myorg --default
{
"id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
"name": "myorg",
"current_period": "16769237-0ed5-4fad-8822-abc444292d0b",
"epoch": 1
}
[root@ceph-1 ~]#
# Create a zonegroup
[root@ceph-1 ~]# radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
{
"id": "4d978fe1-b158-4b3a-93f7-87fbb31f6e7a",
"name": "default",
"api_name": "default",
"is_master": "true",
"endpoints": [],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "","zones": [],"placement_targets": [],"default_placement":"",
"realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
"sync_policy": {"groups": []
}
}
[root@ceph-1 ~]#
# Create a zone
[root@ceph-1 ~]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
{
"id": "5ac7f118-a69c-4dec-b174-f8432e7115b7",
"name": "cn-east-1",
"domain_root": "cn-east-1.rgw.meta:root",
"control_pool": "cn-east-1.rgw.control",
"gc_pool": "cn-east-1.rgw.log:gc",
"lc_pool": "cn-east-1.rgw.log:lc",
"log_pool": "cn-east-1.rgw.log",
"intent_log_pool": "cn-east-1.rgw.log:intent",
"usage_log_pool": "cn-east-1.rgw.log:usage",
"roles_pool": "cn-east-1.rgw.meta:roles",
"reshard_pool": "cn-east-1.rgw.log:reshard",
"user_keys_pool": "cn-east-1.rgw.meta:users.keys",
"user_email_pool": "cn-east-1.rgw.meta:users.email",
"user_swift_pool": "cn-east-1.rgw.meta:users.swift",
"user_uid_pool": "cn-east-1.rgw.meta:users.uid",
"otp_pool": "cn-east-1.rgw.otp",
"system_key": {"access_key": "","secret_key":""},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "cn-east-1.rgw.buckets.index",
"storage_classes": {
"STANDARD": {"data_pool": "cn-east-1.rgw.buckets.data"}
},
"data_extra_pool": "cn-east-1.rgw.buckets.non-ec",
"index_type": 0
}
}
],
"realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
"notif_pool": "cn-east-1.rgw.log:notif"
}
[root@ceph-1 ~]#
# Deploy radosgw daemons for the specified realm and zone
[root@ceph-1 ~]# ceph orch apply rgw myorg cn-east-1 --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled rgw.myorg update...
[root@ceph-1 ~]#
# Verify that an RGW container has started on each node
[root@ceph-1 ~]# ceph orch ps --daemon-type rgw
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
rgw.myorg.ceph-1.tzzauo ceph-1 *:80 running (60s) 50s ago 60s 18.6M - 17.2.5 cc65afd6173a 2ce31e5c9d35
rgw.myorg.ceph-2.zxwpfj ceph-2 *:80 running (61s) 51s ago 61s 20.0M - 17.2.5 cc65afd6173a a334e346ae5c
rgw.myorg.ceph-3.bvsydw ceph-3 *:80 running (58s) 51s ago 58s 18.6M - 17.2.5 cc65afd6173a 97b09ba01821
[root@ceph-1 ~]#
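To actually use the gateway over S3 a user is needed; radosgw-admin user create prints the generated access and secret keys (the uid and display name below are example values):
# Create an S3 user for testing; note the access_key/secret_key in the output
radosgw-admin user create --uid=testuser --display-name="Test User"
# The gateways listen on port 80 on each node; quick reachability check
curl http://ceph-1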
Install the ceph-common package on all nodes
# Install the ceph-common package on all nodes
scp /etc/yum.repos.d/ceph.repo ceph-2:/etc/yum.repos.d/ # sync the ceph repo from the primary node to the other nodes
scp /etc/yum.repos.d/ceph.repo ceph-3:/etc/yum.repos.d/ # sync the ceph repo from the primary node to the other nodes
yum -y install ceph-common # install ceph-common on each node; the package provides the ceph command and creates the ceph directory under /etc
scp /etc/ceph/ceph.conf ceph-2:/etc/ceph/ # copy ceph.conf to the corresponding node
scp /etc/ceph/ceph.conf ceph-3:/etc/ceph/ # copy ceph.conf to the corresponding node
scp /etc/ceph/ceph.client.admin.keyring ceph-2:/etc/ceph/ # copy the keyring file to the corresponding node
scp /etc/ceph/ceph.client.admin.keyring ceph-3:/etc/ceph/ # copy the keyring file to the corresponding node
Test
# Test
[root@ceph-3 ~]# ceph -s
cluster:
id: 976e04fe-9315-11ed-a275-e29e49e9189c
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 17m)
mgr: ceph-1.svfnsm(active, since 27m), standbys: ceph-2.zuetkd, ceph-3.vntnlf
mds: 1/1 daemons up, 2 standby
osd: 3 osds: 3 up (since 8m), 3 in (since 8m)
rgw: 3 daemons active (3 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 7 pools, 177 pgs
objects: 226 objects, 585 KiB
usage: 108 MiB used, 300 GiB / 300 GiB avail
pgs: 177 active+clean
[root@ceph-3 ~]#
Access the web interfaces
# Web access
https://192.168.1.25:8443 # Ceph Dashboard
http://192.168.1.25:9095/ # Prometheus
https://192.168.1.25:3000/ # Grafana
User: admin
Password: dsvi6yiat7
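The generated dashboard password can be changed from the CLI afterwards (a small sketch; the password value below is only a placeholder, and the new password is read from a file):
# Change the dashboard admin password (read from a file)
echo 'MyNewPassw0rd!' > /tmp/dashboard_pass
ceph dashboard ac-user-set-password admin -i /tmp/dashboard_pass
rm -f /tmp/dashboard_pass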
Common commands
ceph orch ls # list the services running in the cluster
ceph orch host ls # list the hosts in the cluster
ceph orch ps # list detailed information about the containers in the cluster
ceph orch apply mon --placement="3 node1 node2 node3" # adjust the number of daemons for a service
ceph orch ps --daemon-type rgw # --daemon-type: specify which component to view
ceph orch host label add node1 mon # add a label to a host
ceph orch apply mon label:mon # tell cephadm to deploy mons based on the label; afterwards only hosts with the mon label will run a mon, though mons that are already running are not shut down immediately
ceph orch device ls # list the storage devices in the cluster
For example, to deploy a second monitor on newhost1 with IP address 10.1.2.123, and a third monitor on newhost2 in the 10.1.2.0/24 network:
ceph orch apply mon --unmanaged # disable automated mon deployment
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24
About
https://www.oiox.cn/
https://www.oiox.cn/index.php…
CSDN, GitHub, 51CTO, Zhihu, OSCHINA, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, and my personal blog
Search for "小陈运维" anywhere on the web
Articles are mainly published on the WeChat official account