
Deep Dive into MGR (4): Installing and Deploying an MGR Cluster with MySQL Shell

  • Original content from the GreatSQL community may not be used without authorization. To reprint it, please contact the editors and credit the source.

[toc]

This article shows how to build a three-node MGR cluster with MySQL Shell and GreatSQL 8.0.25.

MySQL Shell is a client tool that makes it easy to manage and operate MySQL. It supports SQL, JavaScript and Python, and ships with a complete set of APIs. MySQL Shell works with both the document store and the relational model: the X DevAPI manages document data, while the AdminAPI manages InnoDB Cluster, InnoDB ClusterSet and InnoDB ReplicaSet.
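As a quick illustration of the language modes (the connection URI below is only a placeholder), the \sql, \js and \py commands switch modes within a single session:

$ mysqlsh root@127.0.0.1:3306
 MySQL  127.0.0.1:3306  JS > \sql      <-- switch to SQL mode
 MySQL  127.0.0.1:3306  SQL > SELECT VERSION();
 MySQL  127.0.0.1:3306  SQL > \js      <-- back to JavaScript mode
 MySQL  127.0.0.1:3306  JS > \py       <-- or Python mode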

1. Installation preparation

Prepare the following three servers:

IP            Port  Role
172.16.16.10  3306  mgr1
172.16.16.11  3306  mgr2
172.16.16.12  3306  mgr3

Make sure the three nodes can reach one another over the network, and that no firewall rules block ports 3306 and 33061.
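If firewalld happens to be in use (an assumption; adapt the commands to whatever firewall your environment actually runs), opening the two ports might look like this sketch:

# open the MySQL service port and the MGR group communication port (assuming firewalld)
$ firewall-cmd --permanent --add-port=3306/tcp
$ firewall-cmd --permanent --add-port=33061/tcp
$ firewall-cmd --reload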

Install MySQL Shell with yum, choosing version 8.0.25 to match GreatSQL:

$ yum install mysql-shell-8.0.25

This assumes MySQL Server has already been initialized and the three instances started, as described in the previous article, "(3) Installing and Deploying an MGR Cluster".

Next, deploy MGR directly with MySQL Shell.

2. Building an MGR cluster with MySQL Shell

Building an MGR cluster with MySQL Shell is fairly straightforward and boils down to a few main steps (summarized in the sketch after this list):

  1. Check whether each instance meets the requirements.
  2. Create and initialize a cluster.
  3. Add the instances one by one.
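In AdminAPI terms the whole flow amounts to roughly the calls below. This is only a condensed sketch using the account name and addresses from this article; every step is walked through in full in the rest of this section.

// on each node: check the instance and, if needed, create the cluster admin account
dba.configureInstance()

// on the first node: create the cluster
var c = dba.createCluster('MGR1')

// still on the first node: add the remaining instances
c.addInstance('GreatSQL@172.16.16.11:3306')
c.addInstance('GreatSQL@172.16.16.12:3306')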

First, connect to the first node with the administrator account root:

# log in locally through the socket
$ mysqlsh -Spath/mysql.sock root@localhost
Please provide the password for 'root@.%2Fmysql.sock': ********
Save password for 'root@.%2Fmysql.sock'? [Y]es/[N]o/Ne[v]er (default No): yes
MySQL Shell 8.0.25
...

Run the \status command to check the current node's status and confirm the connection is working.

Run dba.configureInstance() to check whether the current instance meets the requirements for an MGR cluster; if it does not, it can be configured right away to become an MGR cluster node:

 MySQL  localhost  JS > dba.configureInstance()
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as 172.16.16.10:3306

# the current user is an administrator and cannot be used directly for the MGR cluster; a new account must be created
ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2 <-- here we choose option 2, i.e. create an account with the minimal required privileges
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: GreatSQL
Password for new account: ********
Confirm password: ********

applierWorkerThreads will be set to the default value of 4.

The instance '172.16.16.10:3306' is valid to be used in an InnoDB cluster.

Cluster admin user 'GreatSQL'@'%' created.
The instance '172.16.16.10:3306' is already ready to be used in an InnoDB cluster.

Successfully enabled parallel appliers.

Once the checks are done and the new user has been created, log out of the administrator account and log back in with the newly created MGR-dedicated account, ready to initialize a new cluster:

$ mysqlsh --uri GreatSQL@172.16.16.10:3306
Please provide the password for 'GreatSQL@172.16.16.10:3306': ********
Save password for 'GreatSQL@172.16.16.10:3306'? [Y]es/[N]o/Ne[v]er (default No): yes
MySQL Shell 8.0.25

...
# assign the result to a variable named c for easy reference below
 MySQL  172.16.16.10:3306 ssl  JS > var c = dba.createCluster('MGR1');
A new InnoDB cluster will be created on instance '172.16.16.10:3306'.

Validating instance configuration at 172.16.16.10:3306...

This instance reports its own address as 172.16.16.10:3306

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '172.16.16.10:33061'. Use the localAddress option to override.

Creating InnoDB cluster 'MGR1' on '172.16.16.10:3306'...

Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

 MySQL  172.16.16.10:3306 ssl  JS >

This completes the initialization of the MGR cluster and adds its first node (the seed node).

Next, log in to the other two nodes with the root account in the same way, run the checks and create the minimal-privilege user on each of them (that process is omitted here... note that the username and password must be identical on every node). Then go back to the first node and run addInstance() to add the other two nodes.
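(As a side note, the per-node check can also be scripted instead of answered interactively. A minimal sketch, run while connected locally as root on mgr2 and then on mgr3; clusterAdmin and clusterAdminPassword are standard dba.configureInstance() options, and the password below is a placeholder that must match the one chosen on mgr1.)

// on mgr2 and mgr3, connected locally as root
dba.configureInstance('root@localhost:3306',
    {clusterAdmin: "'GreatSQL'@'%'", clusterAdminPassword: "********"});

Back on the first node, add the second instance: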

 MySQL  172.16.16.10:3306 ssl  JS > c.addInstance('GreatSQL@172.16.16.11:3306');  <-- the MGR-dedicated account must be specified here

WARNING: A GTID set check of the MySQL instance at '172.16.16.11:3306' determined that it contains transactions that do not originate from the cluster, which must be discarded before it can join the cluster.

172.16.16.11:3306 has the following errant GTIDs that do not exist in the cluster:
b05c0838-6850-11ec-a06b-00155d064000:1

WARNING: Discarding these extra GTID events can either be done manually or by completely overwriting the state of 172.16.16.11:3306 with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

Having extra GTID events is not expected, and it is recommended to investigate this further and ensure that the data can be removed prior to choosing the clone recovery method.

Please select a recovery method [C]lone/[A]bort (default Abort): Clone  <-- choose Clone to copy the full data set from the first node
Validating instance configuration at 172.16.16.11:3306...

This instance reports its own address as 172.16.16.11:3306

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '172.16.16.11:33061'. Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 172.16.16.11:3306 is being cloned from 172.16.16.10:3306
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 172.16.16.11:3306 is shutting down...  <-- the data clone is done and the instance is about to restart; if it cannot restart automatically, start it manually

* Waiting for server restart... ready
* 172.16.16.11:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.43 MB transferred in about 1 second (~72.43 MB/s)

State recovery already finished for '172.16.16.11:3306'

The instance '172.16.16.11:3306' was successfully added to the cluster.  <-- the new instance joined successfully

 MySQL  172.16.16.10:3306 ssl  JS >

Add the 172.16.16.12:3306 instance to the cluster in the same way.
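To skip the interactive recovery prompt, the recoveryMethod option can be passed directly to addInstance() (a documented option; shown here only as a sketch):

 MySQL  172.16.16.10:3306 ssl  JS > c.addInstance('GreatSQL@172.16.16.12:3306', {recoveryMethod: 'clone'});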

A three-node MGR cluster is now fully deployed; let's confirm it:

 MySQL  172.16.16.10:3306 ssl  JS > c.describe()
{
    "clusterName": "MGR1",
    "defaultReplicaSet": {
        "name": "default",
        "topology": [
            {
                "address": "172.16.16.10:3306",
                "label": "172.16.16.10:3306",
                "role": "HA"
            },
            {
                "address": "172.16.16.11:3306",
                "label": "172.16.16.11:3306",
                "role": "HA"
            },
            {
                "address": "172.16.16.12:3306",
                "label": "172.16.16.12:3306",
                "role": "HA"
            }
        ],
        "topologyMode": "Single-Primary"
    }
} 

Alternatively, run c.status() to print more detailed information about the cluster.
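status() also accepts an extended option (documented for Cluster.status() in MySQL Shell 8.0) that raises the verbosity, for example:

 MySQL  172.16.16.10:3306 ssl  JS > c.status({extended: 1});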

At this point a three-node MGR cluster has been built with MySQL Shell; you can try writing data to the Primary node to observe and test it.
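For example, a trivial smoke test from the Primary (the database and table names are arbitrary); after the INSERT, run the same SELECT on a Secondary to confirm the row replicated:

 MySQL  172.16.16.10:3306 ssl  JS > \sql
 MySQL  172.16.16.10:3306 ssl  SQL > CREATE DATABASE IF NOT EXISTS test;
 MySQL  172.16.16.10:3306 ssl  SQL > CREATE TABLE test.t1 (id INT PRIMARY KEY, c1 VARCHAR(32));
 MySQL  172.16.16.10:3306 ssl  SQL > INSERT INTO test.t1 VALUES (1, 'hello mgr');
 MySQL  172.16.16.10:3306 ssl  SQL > SELECT * FROM test.t1;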

3. Taking over an existing MGR cluster with MySQL Shell

An MGR cluster that is already up and running can also be taken over by MySQL Shell: just add the adoptFromGR:true option when calling createCluster(). In fact, even without this option, MySQL Shell detects that the MGR group already exists and asks whether you want to adopt it.

Here is a quick demonstration:

# without the adoptFromGr:true option
 MySQL  172.16.16.10:3306 ssl  JS > var c=dba.createCluster('MGR1');
A new InnoDB cluster will be created on instance '172.16.16.10:3306'.

You are connected to an instance that belongs to an unmanaged replication group.
Do you want to setup an InnoDB cluster based on this replication group? [Y/n]:

As you can see, a prompt asks whether to adopt the existing replication group.

If the adoptFromGr:true option is supplied, the cluster is created directly without asking:

var c=dba.createCluster('MGR1', {adoptFromGr:true});
A new InnoDB cluster will be created based on the existing replication group on instance '172.16.16.10:3306'.

Creating InnoDB cluster 'MGR1' on '172.16.16.10:3306'...

Adding Seed Instance...
Adding Instance '172.16.16.10:3306'...
Adding Instance '172.16.16.11:3306'...
Adding Instance '172.16.16.12:3306'...
...

If the MGR cluster's metadata has changed, both dba.getCluster() and dba.createCluster() may report an error similar to the following:

Dba.getCluster: Unable to get an InnoDB cluster handle. The instance '192.168.6.27:3306' may belong to a different cluster from the one registered in the Metadata since the value of 'group_replication_group_name' does not match the one registered in the Metadata: possible split-brain scenario. Please retry while connected to another member of the cluster. (RuntimeError)

In that case, call dba.dropMetadataSchema() to remove the metadata, then call dba.createCluster() again to adopt the cluster:

# after confirming it will not affect normal business traffic, drop the stale MGR metadata
 MySQL  172.16.16.10:3306 ssl  JS > dba.dropMetadataSchema()
Are you sure you want to remove the Metadata? [y/N]: y
Metadata Schema successfully removed.

# adopt the existing cluster
 MySQL  172.16.16.10:3306 ssl  JS > var c=dba.createCluster('MGR1', {adoptFromGr:true})
...

That is all it takes to adopt the cluster.
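Afterwards you can get a handle to the adopted cluster and check its state again, for example:

 MySQL  172.16.16.10:3306 ssl  JS > var c = dba.getCluster('MGR1');
 MySQL  172.16.16.10:3306 ssl  JS > c.status();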

4. Summary

This article covered how to build a three-node MGR cluster with MySQL Shell, how to take over an existing cluster with MySQL Shell, and how to resolve metadata conflicts. Compared with setting up an MGR cluster by hand, MySQL Shell makes the process much more convenient and is recommended.


Disclaimer

Due to the author's limited knowledge, errors and omissions are inevitable in this series. Please do not copy the commands and methods in this document straight into a production environment. Readers must fully understand them and verify them in a test environment before applying them, to avoid damage or loss to production systems.

Enjoy GreatSQL :)


About GreatSQL

GreatSQL is a MySQL branch maintained by Wanli Database (万里数据库). It focuses on improving MGR reliability and performance, supports the InnoDB parallel query feature, and is a MySQL branch suitable for financial-grade applications.

