Data Migration: Using TiDB DM to Migrate Data from MySQL to DBLE


Author: Shen Guangyu

A member of the ActionSky South China DBA team, mainly responsible for MySQL troubleshooting and performance optimization. Dedicated to technology and committed to customers.

Source: original contribution

* Produced by the ActionSky open source community. Original content may not be used without authorization; to reprint, please contact the editor and credit the source.


1. Background

A customer's MySQL instance held a large amount of data (over 1 TB), which needed to be migrated to DBLE quickly, both in full and incrementally. TiDB DM supports multi-threaded data export/import as well as incremental replication, so it was chosen as the migration tool. Using this case as background, this article describes how to migrate data from MySQL to DBLE with TiDB DM, along with some issues encountered along the way.

2. Data Migration Diagram and Server Environment

(1) The data migration topology (diagram omitted): source MySQL → TiDB DM worker (dump / load / binlog sync) → DBLE → backend MySQL sharding nodes.

(2) Server environment:

Server IP      Role           Port       Version         Notes
10.186.65.83   dm master      8261/8291  V2.0.7          tiup install node
10.186.65.118  dm worker      8262       V2.0.7
10.186.65.14   source MySQL   4490       MySQL 8.0.18
10.186.65.4    DBLE           8066       DBLE 3.21.10.6
10.186.65.61   DBLE datanode  4408       MySQL 8.0.25
10.186.65.65   DBLE datanode  4408       MySQL 8.0.25

3. Installing TiDB DM

(1) Prepare the environment and install tiup

# Create the tidb user on both the dm master and dm worker hosts, set its password to dmadmin, and grant passwordless sudo
shell> useradd tidb
shell> echo "dmadmin" | passwd tidb --stdin
shell> echo "tidb ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/tidb

# Switch to the tidb user, generate an SSH key, and set up SSH trust; every host in the DM cluster needs SSH trust
shell> su - tidb
shell> ssh-keygen -t rsa
shell> ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@10.186.65.118
shell> ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@10.186.65.83
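
# Before deploying, it is worth verifying that passwordless SSH and sudo really work from the tiup host; a quick sanity check:
# run as the tidb user on 10.186.65.83; each command should print OK without any password prompt
shell> ssh tidb@10.186.65.118 'sudo -n true && echo OK'
shell> ssh tidb@10.186.65.83 'sudo -n true && echo OK'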

# Install tiup (as the tidb user)
shell> curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
shell> source .bash_profile
shell> which tiup
~/.tiup/bin/tiup

# Install dmctl
shell> tiup install dm dmctl:v2.0.7
component dm version v1.10.2 is already installed
download https://tiup-mirrors.pingcap.com/dmctl-v2.0.7-linux-amd64.tar.gz 26.92 MiB / 26.92 MiB 100.00% 10.01 MiB/s

(2) Install the DM cluster

# Generate a topology configuration template
shell> tiup dm template > dm_topology.yaml
# The edited configuration file looks like this:
shell> cat dm_topology.yaml | grep -v ^[#-] | grep -v ^$
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/home/tidb/dm/deploy"
  data_dir: "/home/tidb/dm/data"
master_servers:
  - host: 10.186.65.83
worker_servers:
  - host: 10.186.65.118
monitoring_servers:
  - host: 10.186.65.83
grafana_servers:
  - host: 10.186.65.83
alertmanager_servers:
  - host: 10.186.65.83
  
# Deploy and start the DM cluster
shell> tiup dm deploy dm-test v2.0.7 ./dm_topology.yaml --user tidb -i /home/tidb/.ssh/id_rsa
shell> tiup dm start dm-test
# Check the DM cluster status
[tidb@10_186_65_83 ~]$ tiup dm display dm-test
tiup is checking updates for component dm ...
Starting component `dm`: /home/tidb/.tiup/components/dm/v1.10.2/tiup-dm display dm-test
Cluster type:       dm
Cluster name:       dm-test
Cluster version:    v2.0.7
Deploy user:        tidb
SSH type:           builtin
Grafana URL:        http://10.186.65.83:3000
ID                  Role          Host           Ports      OS/Arch       Status     Data Dir                              Deploy Dir
--                  ----          ----           -----      -------       ------     --------                              ----------
10.186.65.83:9093   alertmanager  10.186.65.83   9093/9094  linux/x86_64  Up         /home/tidb/dm/data/alertmanager-9093  /home/tidb/dm/deploy/alertmanager-9093
10.186.65.83:8261   dm-master     10.186.65.83   8261/8291  linux/x86_64  Healthy|L  /home/tidb/dm/data/dm-master-8261     /home/tidb/dm/deploy/dm-master-8261
10.186.65.118:8262  dm-worker     10.186.65.118  8262       linux/x86_64  Free       /home/tidb/dm/data/dm-worker-8262     /home/tidb/dm/deploy/dm-worker-8262
10.186.65.83:3000   grafana       10.186.65.83   3000       linux/x86_64  Up         -                                     /home/tidb/dm/deploy/grafana-3000
10.186.65.83:9090   prometheus    10.186.65.83   9090       linux/x86_64  Up         /home/tidb/dm/data/prometheus-9090    /home/tidb/dm/deploy/prometheus-9090
Total nodes: 5
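
# For later reference, the same tiup dm interface also covers routine cluster operations:
shell> tiup dm restart dm-test        # restart all components
shell> tiup dm stop dm-test           # stop the whole cluster
shell> tiup dm destroy dm-test        # remove the cluster (destructive)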

4. DBLE Configuration

(1)db.xml

<?xml version="1.0"?>
<dble:db xmlns:dble="http://dble.cloud/">
    <dbGroup name="dbGroup1" rwSplitMode="0" delayThreshold="-1">
        <heartbeat>select 1</heartbeat>
        <dbInstance name="host_1" url="10.186.65.61:4408" user="sgy" password="QfnHoIeIYL7ZT+EdJMYNuiLw6glbx2hEyrqxK+uxFPU8vja8vZHCLEKvKBOt1vOJdXCCgW7wNidJaTYWORDaUg==" maxCon="2000" minCon="50" primary="true"  usingDecrypt="true"></dbInstance>
    </dbGroup>
    <dbGroup name="dbGroup2" rwSplitMode="0" delayThreshold="-1">
        <heartbeat>select 1</heartbeat>
        <dbInstance name="host_2" url="10.186.65.65:4408" user="sgy" password="d+rG/82+4h21ARS8D6Gu5MIFQ2UBC0h+ZS0EI7bvWAtTJUBsKuFY8+AVZtW1pZk+qtISoI2WpVt29Z9eqMRP5A==" maxCon="2000" minCon="50" primary="true"  usingDecrypt="true"></dbInstance>
    </dbGroup>
</dble:db>
# Note: the user passwords above are encrypted strings
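
# The ciphertexts come from DBLE's bundled DecryptUtil tool; a sketch of the usual invocation
# (the jar path and argument layout may differ between DBLE releases, so treat this as illustrative):
# format for db.xml dbInstance passwords: 1:<dbInstance name>:<user>:<plaintext password>
shell> java -cp /opt/dble/lib/dble-*.jar com.actiontech.dble.util.DecryptUtil 1:host_1:sgy:password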

(2)sharding.xml

<?xml version="1.0"?>
<dble:sharding xmlns:dble="http://dble.cloud/" >
    <schema name="dm_meta" sqlMaxLimit="-1">
        <singleTable name="mysql_sync_to_dble_loader_checkpoint" shardingNode="dm_meta" />
        <singleTable name="mysql_sync_to_dble_syncer_checkpoint" shardingNode="dm_meta" />
    </schema>
    <schema name="sbtest" sqlMaxLimit="-1">
        <shardingTable name="sbtest1" shardingNode="dn1,dn2,dn3,dn4" sqlMaxLimit="-1" shardingColumn="id" function="pro_func_jumpHash"></shardingTable>
        <shardingTable name="sbtest2" shardingNode="dn1,dn2,dn3,dn4" sqlMaxLimit="-1" shardingColumn="id" function="pro_func_jumpHash"></shardingTable>
        <shardingTable name="t1" shardingNode="dn1,dn2,dn3,dn4" sqlMaxLimit="-1" shardingColumn="c1" function="pro_func_jumpHash"></shardingTable>
        <shardingTable name="t2" shardingNode="dn1,dn2,dn3,dn4" sqlMaxLimit="-1" shardingColumn="c1" function="pro_func_jumpHash"></shardingTable>
    </schema>

    <shardingNode name="dn1" dbGroup="dbGroup1" database="dh_dn1"></shardingNode>
    <shardingNode name="dn2" dbGroup="dbGroup1" database="dh_dn2"></shardingNode>
     <shardingNode name="dn3" dbGroup="dbGroup2" database="dh_dn3"></shardingNode>
    <shardingNode name="dn4" dbGroup="dbGroup2" database="dh_dn4"></shardingNode>
    <shardingNode name="dm_meta" dbGroup="dbGroup1" database="dm_meta"/>
    
    <function name="pro_func_jumpHash" class="jumpStringHash">
        <property name="partitionCount">4</property>
        <property name="hashSlice">0:0</property>
    </function>

</dble:sharding>
Note: the schema that DM uses to store sync metadata, plus two checkpoint tables, must be configured in advance. The table names are:
sync task name + _loader_checkpoint, i.e. the table mysql_sync_to_dble_loader_checkpoint
sync task name + _syncer_checkpoint, i.e. the table mysql_sync_to_dble_syncer_checkpoint
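
While the task is running, these tables can be queried through DBLE to watch migration progress; an illustrative query (the exact column set depends on the DM version):
dble> select * from dm_meta.mysql_sync_to_dble_syncer_checkpoint\G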

(3)user.xml

<?xml version="1.0"?>
<dble:user xmlns:dble="http://dble.cloud/">
    <managerUser name="root" password="RYQdYYnzbcZlDuhV4LhJxFTM8kbU3d0y183LU+FgRWzNscvvvFPcdmeEKMdswMyzIaA+kObcozliHEYAlT0AjA==" usingDecrypt="true"></managerUser>
    <shardingUser name="sz" schemas="sbtest,dm_meta" password="jyaXmGFU+mdTGbUZIVbdEcVwItLPI+Yjxdq4wkOzhAYRB29WGZJd0/PkTJh3ky/v4E2yYoqgUzJXPzPRPiQF0Q==" usingDecrypt="true" readOnly="false" maxCon="1000"></shardingUser>
</dble:user>
# Note: the user passwords above are encrypted strings

(4) Log in to the DBLE management port and create the physical databases

shell> /data/mysql/base/5.7.36/bin/mysql -uroot -padmin -h 10.186.65.4 -P 9066
dble> create database @@shardingNode = 'dn$1-4';
Query OK, 1 row affected (0.06 sec)

dble> create database @@shardingNode = 'dm_meta';
Query OK, 1 row affected (0.01 sec)
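
# To confirm the physical schemas really exist on the backends, you can check each datanode directly
# (credentials as configured in db.xml):
shell> mysql -usgy -p -h 10.186.65.61 -P 4408 -e "show databases like 'dh_dn%';"
shell> mysql -usgy -p -h 10.186.65.65 -P 4408 -e "show databases like 'dh_dn%';"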

5. Creating Test Data on the Source MySQL with sysbench

# Run prepare to initialize 1,000,000 rows in each table
shell> /usr/share/sysbench/oltp_insert.lua --mysql-db=sbtest --mysql-host=10.186.65.14 --mysql-port=4490  --mysql-user=sgy --mysql-password=admin  --tables=2 --table-size=1000000 --threads=16 --time=60 --report-interval=1 --max-requests=0 --percentile=95 --mysql-ignore-errors=1062   prepare
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Initializing worker threads...

Creating table 'sbtest1'...
Creating table 'sbtest2'...
Inserting 1000000 records into 'sbtest2'
Inserting 1000000 records into 'sbtest1'
Creating a secondary index on 'sbtest2'...
Creating a secondary index on 'sbtest1'...

# Keep writing data continuously with sysbench
shell> /usr/share/sysbench/oltp_read_write.lua --mysql-db=sbtest --mysql-host=10.186.65.14 --mysql-port=4490  --mysql-user=sgy --mysql-password=admin  --tables=2 --table-size=2000000 --threads=16 --time=6000 --report-interval=1 --max-requests=0 --percentile=95 --mysql-ignore-errors=1062   run
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 16
Report intermediate results every 1 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[1s] thds: 16 tps: 585.44 qps: 11916.49 (r/w/o: 8360.33/1282.63/2273.53) lat (ms,95%): 55.82 err/s: 0.00 reconn/s: 0.00
[2s] thds: 16 tps: 744.18 qps: 14819.02 (r/w/o: 10379.16/1609.36/2830.50) lat (ms,95%): 41.85 err/s: 0.00 reconn/s: 0.00
[3s] thds: 16 tps: 605.11 qps: 12144.20 (r/w/o: 8500.54/1321.24/2322.42) lat (ms,95%): 47.47 err/s: 0.00 reconn/s: 0.00
[4s] thds: 16 tps: 773.91 qps: 15558.16 (r/w/o: 10881.71/1737.79/2938.65) lat (ms,95%): 41.10 err/s: 0.00 reconn/s: 0.00

6. Starting the Data Sync Task

(1) Configure the data source

# Use dmctl to generate the encrypted password string; each run produces a different ciphertext
shell> tiup dmctl encrypt 'admin'
dmXgktpuF18RP1mDN/B2UkA6pDN9PQ==

# The MySQL source configuration file is as follows
shell> cat mysql_source_14_4490.yaml
source-id: "mysql_source_14_4490"
from:
  host: "10.186.65.14"
  port: 4490
  user: "sgy"
  password: "dmXgktpuF18RP1mDN/B2UkA6pDN9PQ=="
  
# Create the data source
shell> tiup dmctl --master-addr 10.186.65.83:8261  operate-source create ./mysql_source_14_4490.yaml
Starting component `dmctl`: /home/tidb/.tiup/components/dmctl/v2.0.7/dmctl/dmctl --master-addr 10.186.65.83:8261 operate-source create ./mysql_source_14_4490.yaml
{
    "result": true,
    "msg": "",
    "sources": [
        {
            "result": true,
            "msg": "",
            "source": "mysql_source_14_4490",
            "worker": "dm-10.186.65.118-8262"
        }
    ]
}

# Check the data source
shell> tiup dmctl --master-addr 10.186.65.83:8261  operate-source show
{
    "result": true,
    "msg": "",
    "sources": [
        {
            "result": true,
            "msg": "",
            "source": "mysql_source_14_4490",
            "worker": "dm-10.186.65.118-8262"
        }
    ]
}

(2) Configure the sync task

# The sync task configuration file is as follows
shell> cat mysql_sync_to_dble.yaml
name: "mysql_sync_to_dble"      #同步工作名
task-mode: "all"                #全量 + Binlog 实时同步
clean-dump-file: true
timezone: "Asia/Shanghai"
ignore-checking-items: ["auto_increment_ID"]
target-database:
  host: "10.186.65.4"           #DBLE 主机 IP 地址
  port: 8066                    #DBLE 流量端口
  user: "sz"                    #连贯 DBLE 的用户
  password: "KRfSNtdxe63tehpnCYoCz0ABdUGivg=="    #连贯 DBLE 用户明码,用 tiup dmctl encrypt 'password' 生成

mysql-instances:
-
  source-id: "mysql_source_14_4490"
  block-allow-list: "global"
  mydumper-config-name: "global"
  loader-config-name:   "global"
  syncer-config-name: "global"

block-allow-list:
  global:
    do-dbs: ["sbtest"]            #只迁徙 sbtest 库

mydumpers:
  global:
    extra-args: "-B sbtest"      #只 dump sbtest 库

loaders:
  global:
    pool-size: 8

syncers:
  global:
    worker-count: 8

# For a more detailed description of the configuration, see the official documentation: https://docs.pingcap.com/zh/tidb-data-migration/v2.0/task-configuration-file-full#%E5%AE%8C%E6%95%B4%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E7%A4%BA%E4%BE%8B
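
# block-allow-list also supports table-level rules with wildcards; a sketch of what finer-grained
# filtering could look like (not used in this task):
block-allow-list:
  global:
    do-tables:
    - db-name: "sbtest"
      tbl-name: "sbtest*"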

# Validate the configuration file
shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261  check-task ./mysql_sync_to_dble.yaml
Starting component `dmctl`: /home/tidb/.tiup/components/dmctl/v2.0.7/dmctl/dmctl --master-addr=10.186.65.83:8261 check-task ./mysql_sync_to_dble.yaml
{
    "result": true,
    "msg": "check pass!!!"
}

# Start the sync task
shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261  start-task ./mysql_sync_to_dble.yaml
Starting component `dmctl`: /home/tidb/.tiup/components/dmctl/v2.0.7/dmctl/dmctl --master-addr=10.186.65.83:8261 start-task ./mysql_sync_to_dble.yaml
{
    "result": true,
    "msg": "",
    "sources": [
        {
            "result": true,
            "msg": "",
            "source": "mysql_source_14_4490",
            "worker": "dm-10.186.65.118-8262"
        }
    ]
}
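
# Besides start-task, dmctl offers matching task lifecycle commands that take the task name:
shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261 pause-task mysql_sync_to_dble
shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261 resume-task mysql_sync_to_dble
shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261 stop-task mysql_sync_to_dble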

# On the dm-worker node, inspect the exported SQL files
shell> ls -lh /home/tidb/dm/deploy/dm-worker-8262/dumped_data.mysql_sync_to_dble
total 384M
-rw-r--r-- 1 tidb tidb 187 Aug  5 14:04 metadata
-rw-r--r-- 1 tidb tidb 48M Aug  5 14:04 sbtest.sbtest1.0000000000000.sql
-rw-r--r-- 1 tidb tidb 48M Aug  5 14:04 sbtest.sbtest1.0000000010000.sql
-rw-r--r-- 1 tidb tidb 48M Aug  5 14:04 sbtest.sbtest1.0000000020000.sql
-rw-r--r-- 1 tidb tidb 48M Aug  5 14:04 sbtest.sbtest1.0000000030000.sql
-rw-r--r-- 1 tidb tidb 369 Aug  5 14:04 sbtest.sbtest1-schema.sql
-rw-r--r-- 1 tidb tidb 48M Aug  5 14:04 sbtest.sbtest2.0000000000000.sql
-rw-r--r-- 1 tidb tidb 48M Aug  5 14:04 sbtest.sbtest2.0000000010000.sql
-rw-r--r-- 1 tidb tidb 48M Aug  5 14:04 sbtest.sbtest2.0000000020000.sql
-rw-r--r-- 1 tidb tidb 48M Aug  5 14:04 sbtest.sbtest2.0000000030000.sql
-rw-r--r-- 1 tidb tidb 369 Aug  5 14:04 sbtest.sbtest2-schema.sql
-rw-r--r-- 1 tidb tidb 152 Aug  5 14:04 sbtest-schema-create.sql

# The CREATE DATABASE SQL file
shell> head /home/tidb/dm/deploy/dm-worker-8262/dumped_data.mysql_sync_to_dble/sbtest-schema-create.sql
/*!40101 SET NAMES binary*/;
CREATE DATABASE `sbtest` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_bin */ /*!80016 DEFAULT ENCRYPTION='N' */;

# The CREATE TABLE SQL file
shell> head /home/tidb/dm/deploy/dm-worker-8262/dumped_data.mysql_sync_to_dble/sbtest.sbtest1-schema.sql
/*!40101 SET NAMES binary*/;
CREATE TABLE `sbtest1` (`id` int(11) NOT NULL AUTO_INCREMENT,
  `k` int(11) NOT NULL DEFAULT '0',
  `c` char(120) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
  `pad` char(60) COLLATE utf8mb4_bin NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=1000001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;

# Batch-INSERT SQL files produced by the dump
shell> head /home/tidb/dm/deploy/dm-worker-8262/dumped_data.mysql_sync_to_dble/sbtest.sbtest1.0000000000000.sql
/*!40101 SET NAMES binary*/;
INSERT INTO `sbtest1` (`id`,`k`,`c`,`pad`) VALUES
(1,498670,'31451373586-15688153734-79729593694-96509299839-83724898275-86711833539-78981337422-35049690573-51724173961-87474696253','98996621624-36689827414-04092488557-09587706818-65008859162'),
(2,497778,'21472970079-70972780322-70018558993-71769650003-09270326047-32417012031-10768856803-14235120402-93989080412-18690312264','04776826683-45880822084-77922711547-29057964468-76514263618'),
(3,498956,'49376827441-24903985029-56844662308-79012577859-40518387141-60588419212-24399130405-42612257832-29494881732-71506024440','26843035807-96849339132-53943793991-69741192222-48634174017'),
(4,518727,'85762858421-36258200885-10758669419-44272723583-12529521893-95630803635-53907705724-07005352902-43001596772-53048338959','37979424284-37912826784-31868864947-42903702727-96097885121'),
(5,502480,'24805466175-85245528617-94635882649-46305216925-28637832581-03224489581-68883711727-95491561683-91969681472-12022277774','19288959552-55556468076-14192290426-55457672510-18043372364'),
(6,500774,'52892836230-54177743992-01821871718-48412537487-30066596248-87215430797-00375777469-64498831720-58542556455-90784765418','59487960480-08453890592-99628797439-16757639138-29377916560'),
(7,501466,'85820931248-14475640036-11980694501-86588543167-31029306229-09626867980-90685354565-02350460358-25863585366-53793794448','26081374730-86321700986-51212137094-30635959762-03880194434'),
(8,497838,'81578049255-33453976301-67096870761-27658738403-30546242249-53677469854-26594573136-34292002037-52736825353-99165193170','64289062455-51067794311-09919261228-11533354367-07401173317'),

(3) Check the sync task status

shell> tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261  query-status ./mysql_sync_to_dble.yaml
Starting component `dmctl`: /home/tidb/.tiup/components/dmctl/v2.0.7/dmctl/dmctl --master-addr=10.186.65.83:8261 query-status ./mysql_sync_to_dble.yaml
{
    "result": true,
    "msg": "",
    "sources": [
        {
            "result": true,
            "msg": "",
            "sourceStatus": {
                "source": "mysql_source_14_4490",
                "worker": "dm-10.186.65.118-8262",
                "result": null,
                "relayStatus": null
            },
            "subTaskStatus": [
                {
                    "name": "mysql_sync_to_dble",
                    "stage": "Running",
                    "unit": "Sync",
                    "result": null,
                    "unresolvedDDLLockID": "",
                    "sync": {
                        "totalEvents": "425355",
                        "totalTps": "1203",
                        "recentTps": "2386",
                        "masterBinlog": "(mysql-bin.000027, 76114275)",
                        "masterBinlogGtid": "c7827165-bf89-11ec-92e6-02000aba410e:1-5091258",
                        "syncerBinlog": "(mysql-bin.000026, 229462770)",
                        "syncerBinlogGtid": "",
                        "blockingDDLs": [ ],
                        "unresolvedGroups": [ ],
                        "synced": false,
                        "binlogType": "remote",
                        "secondsBehindMaster": "38"
                    }
                }
            ]
        }
    ]
}
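
# secondsBehindMaster reports the sync lag in seconds, and synced flips to true once the syncer
# has caught up; a simple way to watch the task until it catches up (a sketch):
shell> watch -n 5 "tiup dmctl:v2.0.7 --master-addr=10.186.65.83:8261 query-status mysql_sync_to_dble | grep -E 'synced|secondsBehindMaster'"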

7. Verifying the Data

# Pause the sysbench workload started earlier; once DM has no lag against the source, run count(*) on the tables
# On the source MySQL, run count(*)
mysql> select count(*) from sbtest.sbtest1;
+----------+
| count(*) |
+----------+
|  1116471 |
+----------+
1 row in set (0.68 sec)

mysql> select count(*) from sbtest.sbtest2;
+----------+
| count(*) |
+----------+
|  1117020 |
+----------+
1 row in set (0.54 sec)

# Run count(*) on the tables through DBLE
dble> select count(*) from sbtest.sbtest1;
+----------+
| count(*) |
+----------+
|  1116471 |
+----------+
1 row in set (1.04 sec)

dble> select count(*) from sbtest.sbtest2;
+----------+
| count(*) |
+----------+
|  1117020 |
+----------+
1 row in set (1.58 sec)
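
# Matching row counts alone do not prove the contents match; a cheap extra check is to run the
# same aggregate query on both sides and compare (a rough sketch, not a full checksum):
mysql> select count(*), min(id), max(id), sum(k) from sbtest.sbtest1;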

# Update a row on the source
mysql> update sbtest.sbtest1 set c=uuid() where id=20;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select * from sbtest.sbtest1 where id=20;
+----+--------+--------------------------------------+-------------------------------------------------------------+
| id | k      | c                                    | pad                                                         |
+----+--------+--------------------------------------+-------------------------------------------------------------+
| 20 | 501448 | 24649f0f-14d1-11ed-b4f2-02000aba410e | 91052688950-96415657187-00012408429-12357288330-41295735957 |
+----+--------+--------------------------------------+-------------------------------------------------------------+
1 row in set (0.00 sec)

# Query the row through DBLE
dble> select * from sbtest.sbtest1 where id=20;
+----+--------+--------------------------------------+-------------------------------------------------------------+
| id | k      | c                                    | pad                                                         |
+----+--------+--------------------------------------+-------------------------------------------------------------+
| 20 | 501448 | 24649f0f-14d1-11ed-b4f2-02000aba410e | 91052688950-96415657187-00012408429-12357288330-41295735957 |
+----+--------+--------------------------------------+-------------------------------------------------------------+
1 row in set (0.01 sec)

8. Notes and Caveats

(1) TiDB DM version requirement: TiDB DM v2.0.7

(2) Disable the DBLE slow query log during the full data import

# The full import phase of the sync generates a large number of slow queries, so disable DBLE's slow query log before starting the sync task
shell> mysql --prompt='dble>' -uroot -p -h 10.186.65.4 -P 9066
# Check the slow query log status: 1 means enabled, 0 means disabled
dble> show @@slow_query_log;   
+------------------+
| @@slow_query_log |
+------------------+
| 1                |
+------------------+
1 row in set (0.00 sec)
 
# Before the full import, disable the DBLE slow query log
dble> disable @@slow_query_log;
Query OK, 1 row affected (0.01 sec)
disable slow_query_log success
 
# After the full import completes, re-enable the DBLE slow query log
dble> enable @@slow_query_log;
Query OK, 1 row affected (0.01 sec)
enable slow_query_log success

(3) Check whether a pt-kill-like tool is running on the source MySQL instance

# If pt-kill or a similar tool is running on the source MySQL instance, the dump threads may be killed during the full export phase, causing the full backup to fail. The dm-worker.log looks like this:
[ERROR] [subtask.go:311] ["unit process error"] [subtask=oms-sync-receiver] [unit=Dump] ["error information"="{\"ErrCode\":32001,\"ErrClass\":\"dump-unit\",\"ErrScope\":\"internal\",\"ErrLevel\":\"high\",\"Message\":\"mydumper/dumpling runs with error, with output (may empty): \",\"RawCause\":\"invalid connection\"}"]

Workaround: stop the pt-kill tool first, then rerun the TiDB DM sync task, and re-enable pt-kill only after the full dump has completed.
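
# A quick way to check for such tools on the source host before starting the dump:
shell> ps -ef | grep -E 'pt-kill' | grep -v grep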

(4) If the target table structure needs to be modified during migration, for example the source table is partitioned but the sharded target no longer needs partitions, or the table character set should be changed (utf8mb4 is recommended), you can export the table structure from the source, modify it, and import it into DBLE (or import first and alter afterwards), and only then start the DM sync task; a sketch of this workflow follows.
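
# A sketch of that schema-first workflow (paths, credentials, and edits illustrative):
# export schema only from the source, edit it, then load it through DBLE before starting the task
shell> mysqldump -usgy -p -h 10.186.65.14 -P 4490 --no-data sbtest > sbtest_schema.sql
# edit sbtest_schema.sql: drop PARTITION BY clauses, switch the charset to utf8mb4, etc.
shell> mysql -usz -p -h 10.186.65.4 -P 8066 -Dsbtest < sbtest_schema.sql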

(5) The column order of the source and target tables must be identical, otherwise the data may end up inconsistent. When corresponding columns are both varchar but of different lengths, the sync task does not report an error, yet values written to the target may be silently truncated (screenshot omitted).

After a DM task starts and connects to the target, it sets the session-level sql_mode to the following value:
sql_mode='IGNORE_SPACE,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,ALLOW_INVALID_DATES,ONLY_FULL_GROUP_BY';
