Author: Yang Wen (杨文)
A DBA responsible for the delivery and maintenance of customer projects. Knows his way around databases, including but not limited to MySQL, Redis, Cassandra, GreenPlum, ClickHouse, Elastic, TDSQL, and more.
Source of this article: an original contribution.
* Produced by the ActionSky Open Source Community. Original content may not be used without authorization; to republish, please contact the editors and credit the source.
1. Environment:
Cluster scale-out falls into two cases: adding replicas and adding resources.
Original deployment topology: 1-1-1.
The two scale-out paths covered below are:
- Scaling out replicas: topology after scaling: 1-1-1-1-1;
- Scaling out resources: topology after scaling: 2-2-2.
Note: either path can be performed in two ways: through the GUI (the OCP console) or on the command line.
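Before starting either path, you can confirm the current 1-1-1 layout with the same system view that the checks later in this article rely on:
select zone, svr_ip, svr_port, status from __all_server order by zone;
Each zone should show exactly one active observer.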
2. Scaling Out Through the GUI:
Scaling out replicas: go to OCP -> locate the target cluster -> Overview -> Add Zone;
Scaling out resources: go to OCP -> locate the target cluster -> Overview -> Add OBServer.
(OCP screenshots omitted.)
3. Scaling Out on the Command Line:
Note: to avoid repetition, replica scale-out is demonstrated below with the automated approach and resource scale-out with the manual approach.
3.1 Scaling Out Replicas (Zone):
Automated scale-out via OBD:
1) Prepare the configuration file:
vi /data/5zones.yaml
# Only need to configure when remote login is required
user:
  username: admin
  # password: admin
  key_file: /home/admin/.ssh/id_rsa.pub
  # port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    # Please don't use a hostname; only IPs are supported
    - name: observer4
      ip: 10.186.60.175
    - name: observer5
      ip: 10.186.60.176
  global:
    ......
    appname: ywob2
    ......
  observer4:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo
    zone: zone4
  observer5:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo
    zone: zone5
2) Deploy the cluster:
obd cluster deploy ywob2 -c 5zones.yaml
3) Merge the configurations: copy the new configuration file's content into the original cluster's configuration file (omitted here, since most of it is repetition).
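If you prefer not to edit the file by hand, OBD can open a deployed cluster's configuration for editing; a minimal sketch, assuming a reasonably recent OBD release:
obd cluster edit-config ywob
OBD validates the edited configuration on save and prints a hint about any reload or restart it requires.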
4) Start the cluster under its original name:
obd cluster start ywob
+-------------------------------------------------+
|                    observer                     |
+---------------+---------+------+-------+--------+
| ip            | version | port | zone  | status |
+---------------+---------+------+-------+--------+
| 10.186.60.85  | 3.2.2   | 2881 | zone1 | active |
| 10.186.60.173 | 3.2.2   | 2881 | zone2 | active |
| 10.186.60.174 | 3.2.2   | 2881 | zone3 | active |
+---------------+---------+------+-------+--------+
Note: the observer list here still shows only the original three IPs.
5) Check the processes: although the new observers do not appear in the list, they have in fact already been started.
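You can confirm this on the new hosts with the same checks used again in section 3.2:
ps -ef | grep observer
netstat -ntlp
Both 10.186.60.175 and 10.186.60.176 should show a running observer process listening on ports 2881 and 2882.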
6) Add the new nodes to the cluster:
mysql> alter system add zone 'zone4' region 'sys_region';
mysql> alter system add zone 'zone5' region 'sys_region';
mysql> alter system start zone 'zone4';
mysql> alter system start zone 'zone5';
mysql> alter system add server '10.186.60.175:2882' zone 'zone4';
mysql> alter system add server '10.186.60.176:2882' zone 'zone5';
mysql> alter system start server '10.186.60.175:2882' zone 'zone4';
mysql> alter system start server '10.186.60.176:2882' zone 'zone5';
7) Check again:
obd cluster display ywob
+-------------------------------------------------+
|                    observer                     |
+---------------+---------+------+-------+--------+
| ip            | version | port | zone  | status |
+---------------+---------+------+-------+--------+
| 10.186.60.85  | 3.2.2   | 2881 | zone1 | active |
| 10.186.60.173 | 3.2.2   | 2881 | zone2 | active |
| 10.186.60.174 | 3.2.2   | 2881 | zone3 | active |
| 10.186.60.175 | 3.2.2   | 2881 | zone4 | active |
| 10.186.60.176 | 3.2.2   | 2881 | zone5 | active |
+---------------+---------+------+-------+--------+
8) Scale out the data replicas for the business tenants; a sketch follows.
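A minimal sketch, assuming a business tenant named yw served by resource pool pool_yw1 (the pool name that appears later in this article; the tenant name is hypothetical, and locality syntax can vary slightly across 3.x releases). Extend the pool's zone list first, then change the locality one replica at a time:
mysql> alter resource pool pool_yw1 zone_list=('zone1','zone2','zone3','zone4','zone5');
mysql> alter tenant yw locality='F@zone1,F@zone2,F@zone3,F@zone4';
mysql> alter tenant yw locality='F@zone1,F@zone2,F@zone3,F@zone4,F@zone5';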
Note: to scale the replicas back in, the steps are: shrink the nodes -> trigger a major compaction -> adjust the locality -> shrink the resource pool -> take the zones offline. A rough sketch of the reverse path follows.
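Using the same hypothetical names as above, and again removing one replica at a time:
mysql> alter system major freeze;
mysql> alter tenant yw locality='F@zone1,F@zone2,F@zone3,F@zone4';
mysql> alter tenant yw locality='F@zone1,F@zone2,F@zone3';
mysql> alter resource pool pool_yw1 zone_list=('zone1','zone2','zone3');
mysql> alter system stop zone 'zone4';
mysql> alter system delete zone 'zone4';
mysql> alter system stop zone 'zone5';
mysql> alter system delete zone 'zone5';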
3.2 Scaling Out Resources (OBServer):
1) Before scaling out, it is advisable to run resource checks first (omitted here).
2) Create the directories the same way as on the existing nodes, then install and start the observer processes:
On 10.186.60.175:
su - admin
sudo rpm -ivh /tmp/oceanbase-ce-*.rpm
cd ~/oceanbase && bin/observer -i eth0 -p 2881 -P 2882 -z zone1 \
-d ~/oceanbase/store/ywob \
-r '10.186.60.85:2882:2881;10.186.60.173:2882:2881;10.186.60.174:2882:2881;10.186.60.175:2882:2881;10.186.60.176:2882:2881;10.186.60.177:2882:2881' \
-c 20230131 -n ywob \
-o "memory_limit=8G,cache_wash_threshold=1G,__min_full_resource_pool_memory=268435456,\
system_memory=3G,memory_chunk_cache_size=128M,cpu_count=12,net_thread_count=4,\
datafile_size=50G,stack_size=1536K,\
config_additional_dir=/data/ywob/etc3;/redo/ywob/etc2"
On 10.186.60.176:
su - admin
sudo rpm -ivh /tmp/oceanbase-ce-*.rpm
cd ~/oceanbase && bin/observer -i eth0 -p 2881 -P 2882 -z zone2 \
-d ~/oceanbase/store/ywob \
-r '10.186.60.85:2882:2881;10.186.60.173:2882:2881;10.186.60.174:2882:2881;10.186.60.175:2882:2881;10.186.60.176:2882:2881;10.186.60.177:2882:2881' \
-c 20230131 -n ywob \
-o "memory_limit=8G,cache_wash_threshold=1G,__min_full_resource_pool_memory=268435456,\
system_memory=3G,memory_chunk_cache_size=128M,cpu_count=12,net_thread_count=4,\
datafile_size=50G,stack_size=1536K,\
config_additional_dir=/data/ywob/etc3;/redo/ywob/etc2"
On 10.186.60.177:
su - admin
sudo rpm -ivh /tmp/oceanbase-ce-*.rpm
cd ~/oceanbase && bin/observer -i eth0 -p 2881 -P 2882 -z zone3 \
-d ~/oceanbase/store/ywob \
-r '10.186.60.85:2882:2881;10.186.60.173:2882:2881;10.186.60.174:2882:2881;10.186.60.175:2882:2881;10.186.60.176:2882:2881;10.186.60.177:2882:2881' \
-c 20230131 -n ywob \
-o "memory_limit=8G,cache_wash_threshold=1G,__min_full_resource_pool_memory=268435456,\
system_memory=3G,memory_chunk_cache_size=128M,cpu_count=12,net_thread_count=4,\
datafile_size=50G,stack_size=1536K,\
config_additional_dir=/data/ywob/etc3;/redo/ywob/etc2"
Check the processes and ports:
ps -ef | grep observer
netstat -ntlp
At this point the observer process should be visible and listening on ports 2881 and 2882.
3) Add the new nodes to their zones:
mysql> alter system add server '10.186.60.175:2882' zone 'zone1';
mysql> alter system add server '10.186.60.176:2882' zone 'zone2';
mysql> alter system add server '10.186.60.177:2882' zone 'zone3';
4) After the nodes have been added, check the results.
Check all nodes in the cluster:
select zone,svr_ip,svr_port,inner_port,with_rootserver,status from __all_server order by zone, svr_ip;
You can see that the zones have changed: each zone now contains two servers.
The replica migration and leader re-election that follow can be tracked through the cluster events in the __all_rootservice_event_history view.
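For example (the same query is used again in section 3.3, newest events first):
SELECT * FROM __all_rootservice_event_history ORDER BY gmt_create DESC LIMIT 50;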
After the load rebalancing and replica migration have finished, check the resource usage:
select a.zone,concat(a.svr_ip,':',a.svr_port) observer, cpu_total, (cpu_total-cpu_assigned) cpu_free,
round(mem_total/1024/1024/1024) mem_total_gb,
round((mem_total-mem_assigned)/1024/1024/1024) mem_free_gb,
round(disk_total/1024/1024/1024) disk_total_gb,unit_num,
substr(a.build_version,1,6) version,usec_to_time(b.start_service_time) start_service_time,b.status
from __all_virtual_server_stat a join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port)
order by a.zone, a.svr_ip;
At this point the new resources are not yet in use.
5) Tenant scale-out:
Increase the unit count from 1 to 2:
alter resource pool pool_yw1 unit_num = 2;
unit_num = 2 means each zone holds two resource units for the tenant.
Since the cluster is now a 2-2-2 architecture with two observers per zone, each zone will carry two of the tenant's units, and the cluster's load-balancing policy will trigger replica migration and leader re-election.
This process can be followed through the __all_rootservice_event_history view, which records the load-balancing tasks, and the __all_virtual_partition_migration_status view, which shows migrations in flight; see the sketch below.
In the end you can see that the data replicas have changed and the data has been rebalanced.
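The in-flight migrations can be listed with a plain SELECT on the view named above (no filter is applied here, since its columns vary across versions):
SELECT * FROM __all_virtual_partition_migration_status;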
3.3 Scaling In Resources (OBServer):
Cluster topology: reduced from the 2-2-2 layout back to 1-1-1.
1) Check the current state: resource checks and unit checks (omitted here).
2) Adjust resources:
Since the cluster kept three replicas throughout and we only scaled out resources and unit counts, the replicas themselves need no adjustment.
Reduce the unit count:
alter resource pool pool_yw1 unit_num = 1;
3) Check: as before, the replica migration and leader re-election can be tracked through the cluster events in the __all_rootservice_event_history view:
SELECT * FROM __all_rootservice_event_history WHERE 1 = 1 ORDER BY gmt_create DESC LIMIT 50;
There you can watch the replica migration and re-election complete.
4)删除 observer:
alter system delete server '10.186.60.175:2882' zone zone3;
alter system delete server '10.186.60.176:2882' zone zone2;
alter system delete server '10.186.60.177:2882' zone zone1;
5) Check:
Check all nodes in the OB cluster:
select zone,svr_ip,svr_port,inner_port,with_rootserver,status from __all_server order by zone;
Check how the business tenant's partition replicas are now distributed across nodes:
SELECT t1.tenant_id, t1.tenant_name, t2.database_name,
t3.table_id, t3.table_Name, t3.tablegroup_id, t3.part_num,
t4.partition_Id, t4.zone, t4.svr_ip, t4.role,
round(t4.data_size / 1024 / 1024) data_size_mb
from gv$tenant t1
join gv$database t2 on (t1.tenant_id = t2.tenant_id)
join gv$table t3 on (t2.tenant_id = t3.tenant_id and t2.database_id = t3.database_id and t3.index_type = 0)
left join gv$partition t4 on (t2.tenant_id = t4.tenant_id and (t3.table_id = t4.table_id or t3.tablegroup_id = t4.table_id) and t4.role in (1, 2))
where t1.tenant_id > 1000
order by t1.tenant_id, t3.tablegroup_id, t3.table_name, t4.partition_Id, t4.role;
You can see that rebalancing has finished and all data now lives only on hosts 10.186.60.85/173/174.
Finally, verify the data by connecting to the database both directly and through OBProxy, as sketched below.
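A minimal sketch of both connection paths (the user, tenant, and OBProxy address are hypothetical; OBProxy conventionally listens on port 2883 and takes users in the user@tenant#cluster form):
# direct connection to one observer:
mysql -h 10.186.60.85 -P 2881 -u root@yw -p
# connection through OBProxy:
mysql -h <obproxy_host> -P 2883 -u 'root@yw#ywob' -p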