## Background

In a GreatSQL primary-replica replication environment, a misoperation can sometimes write data that should have gone to the primary into the replica instead, leaving the two servers inconsistent and disrupting synchronization. Can the data written to the replica be brought back to the primary?
## Test environment

| Role | IP address | Database port | Version |
|---|---|---|---|
| Primary | 192.168.137.179 | 3308 | GreatSQL 8.0.32 |
| Replica | 192.168.137.180 | 3308 | GreatSQL 8.0.32 |
Replication link status:
```sql
greatsql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.137.179
                  Master_User: root
                  Master_Port: 3308
                Connect_Retry: 60
              Master_Log_File: binlog.000001
          Read_Master_Log_Pos: 157
               Relay_Log_File: oracle_dts-relay-bin.000002
                Relay_Log_Pos: 367
        Relay_Master_Log_File: binlog.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
```
## Table data

On the primary:
```sql
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
+--------+------------+----------+
5 rows in set (0.00 sec)

greatsql> insert into dept select 70,'IT','CTU';
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0

greatsql> commit;
Query OK, 0 rows affected (0.00 sec)
```
On the replica:
```sql
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
+--------+------------+----------+
6 rows in set (0.00 sec)
```
Data written on the primary replicates to the replica as expected.

## Writing data on the replica
```sql
greatsql> insert into dept select 80,'IT','SZ';
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0

greatsql> insert into dept select 90,'SALES','SZ';
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0
```
Data on the replica:
```sql
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
|     80 | IT         | SZ       |
|     90 | SALES      | SZ       |
+--------+------------+----------+
8 rows in set (0.00 sec)
```
Data on the primary:
```sql
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
+--------+------------+----------+
6 rows in set (0.01 sec)
```
The rows written on the replica do not appear on the primary.

## Parsing the replica's binary log
```shell
$ mysqlbinlog -vv --base64-output=decode-rows binlog.000002 > b002.sql

BEGIN
/*!*/;
# at 354
#240221 16:10:25 server id 18001  end_log_pos 416 CRC32 0xcc81584b  Table_map: `scott`.`dept` mapped to number 101
# has_generated_invisible_primary_key=0
# at 416
#240221 16:10:25 server id 18001  end_log_pos 462 CRC32 0x5149e38a  Write_rows: table id 101 flags: STMT_END_F
### INSERT INTO `scott`.`dept`
### SET
###   @1=80 /* INT meta=0 nullable=0 is_null=0 */
###   @2='IT' /* VARSTRING(56) meta=56 nullable=1 is_null=0 */
###   @3='SZ' /* VARSTRING(52) meta=52 nullable=1 is_null=0 */
# at 462
#240221 16:10:25 server id 18001  end_log_pos 493 CRC32 0xab795e4a  Xid = 34
```
We can see that the data written on the replica is in binlog.000002. By grepping the decoded dump for the replica's server id, we can confirm whether a log file contains events that were written directly on the replica.
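The check above can be sketched as follows. This is illustrative only: the here-document fakes a few lines of a decoded dump like the one produced above (the `server id 17001` line is an invented primary-originated event added for contrast), then greps for the replica's server id, 18001.

```shell
# Simulate a fragment of decoded binlog output (mysqlbinlog -vv
# --base64-output=decode-rows ...); the 17001 line is hypothetical,
# standing in for an event that originated on the primary.
cat > /tmp/b002_sample.sql <<'EOF'
#240221 16:10:25 server id 18001 end_log_pos 416 Table_map: `scott`.`dept` mapped to number 101
#240221 16:10:25 server id 18001 end_log_pos 462 Write_rows: table id 101 flags: STMT_END_F
#240221 16:09:01 server id 17001 end_log_pos 300 Query
EOF

# A non-zero count means the file contains events stamped with the
# replica's own server id, i.e. transactions written directly on it.
grep -c 'server id 18001' /tmp/b002_sample.sql
```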
## Copying the replica's log to the primary

```shell
$ scp binlog.000002 192.168.137.179:/tmp/
Warning: Permanently added '192.168.137.179' (ECDSA) to the list of known hosts.
root@192.168.137.179's password:
binlog.000002                       100%  836     1.1MB/s   00:00
```
## Applying the replica's binary log

Apply the replica's log on the primary:
```shell
$ mysqlbinlog binlog.000002 | mysql -uroot -p -h127.1 -P3308
```
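Replaying the whole file is safe here because binlog.000002 contains only replica-originated events. If the file also held events relayed from the primary (for example, with `log_slave_updates` enabled), `mysqlbinlog`'s `--server-id` option could restrict the dump to events created by a given server. A sketch under that assumption; the command is only assembled and printed here, to be run manually against a live server:

```shell
# Hypothetical path; adjust to your environment.
BINLOG=/tmp/binlog.000002

# --server-id=18001 keeps only events originated on the replica, so events
# the replica merely relayed from the primary are not applied twice.
CMD="mysqlbinlog --server-id=18001 $BINLOG | mysql -uroot -p -h127.1 -P3308"

# Print the command instead of executing it (no live server in this sketch).
echo "$CMD"
```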
While the primary applies the replica's binary log, the replica's own binary log information does not change:
```sql
greatsql> show binary logs;
+---------------+-----------+-----------+
| Log_name      | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000001 |       498 | No        |
| binlog.000002 |       836 | No        |
| binlog.000003 |       237 | No        |
+---------------+-----------+-----------+
3 rows in set (0.00 sec)
```
The replication link remains healthy:
```sql
greatsql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.137.179
                  Master_User: root
                  Master_Port: 3308
                Connect_Retry: 60
              Master_Log_File: binlog.000001
          Read_Master_Log_Pos: 1059
               Relay_Log_File: oracle_dts-relay-bin.000002
                Relay_Log_Pos: 1269
        Relay_Master_Log_File: binlog.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
```
We can see that when the primary applies binary logs generated on the replica, the replica does not apply those events a second time ("By default, the replication I/O (receiver) thread does not write binary log events to the relay log if they have the replica's server ID (this optimization helps save disk usage)."). Otherwise, the replayed rows would loop back to the replica, trigger primary key conflicts, and break replication.
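This skip behavior corresponds to the `replicate-same-server-id` replica option, which defaults to OFF. A minimal my.cnf sketch for illustration only (the server id matches this article's replica); leaving the option at its default is exactly what makes this replay technique safe:

```ini
[mysqld]
server-id = 18001
# Default: 0 (OFF). The receiver thread discards events whose server id
# equals this replica's own, so transactions replayed on the primary do
# not loop back here and cause duplicate-key errors. Do not set this to 1
# in a circular-looking topology like this one.
replicate-same-server-id = 0
```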
## Checking the primary's data
```sql
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
|     80 | IT         | SZ       |
|     90 | SALES      | SZ       |
+--------+------------+----------+
8 rows in set (0.00 sec)
```
In subsequent tests, data written on the primary continued to replicate to the replica normally.
Enjoy GreatSQL :)
## About GreatSQL

GreatSQL is a domestically developed, independent open source database suited to financial-grade applications, with high performance, high reliability, high usability, and high security as its core characteristics. It can serve as an optional replacement for MySQL or Percona Server in online production environments, is completely free, and is compatible with MySQL and Percona Server.
Related links: GreatSQL Community, Gitee, GitHub, Bilibili

GreatSQL Community:

Community blog call-for-posts (with prizes) details: https://greatsql.cn/thread-100-1-1.html

Technical discussion group (WeChat): scan the QR code to add the GreatSQL Community assistant as a friend, then send a verification message to join the group.