Quick Start
Deployment and Startup
1. Run the following commands to build the sharding-scaling binary distribution:
git clone https://github.com/apache/shardingsphere.git;
cd shardingsphere;
mvn clean install -Prelease;
The release package is generated under: /sharding-distribution/sharding-scaling-distribution/target/apache-shardingsphere-${latest.release.version}-sharding-scaling-bin.tar.gz
2. Extract the release package and edit the configuration file conf/server.yaml. The main setting to change is the listening port; make sure it does not conflict with other ports already in use on the machine, and leave the remaining values at their defaults:
port: 8888
blockQueueSize: 10000
pushTimeout: 1000
workerThread: 30
3. Start sharding-scaling:
sh bin/start.sh
Note: if the backend databases are MySQL, you need to download MySQL Connector/J, extract it, and copy mysql-connector-java-5.1.47.jar into the ${sharding-scaling}/lib directory (a shell sketch of this step follows).
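A minimal sketch of that step, assuming the driver is fetched from the MySQL download site (the exact URL and archive layout are assumptions; SCALING_HOME is a hypothetical placeholder for the extracted sharding-scaling directory):

# SCALING_HOME stands for the directory where the sharding-scaling package was extracted.
SCALING_HOME=/path/to/sharding-scaling
# Assumed download URL; adjust it to the Connector/J version and mirror you actually use.
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.47.tar.gz
tar -xzf mysql-connector-java-5.1.47.tar.gz
# Put the driver jar on sharding-scaling's classpath.
cp mysql-connector-java-5.1.47/mysql-connector-java-5.1.47.jar "$SCALING_HOME/lib/"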
4. Check the log file logs/stdout.log to make sure the startup succeeded.
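As an optional extra check (a sketch, not part of the original steps), you can confirm the service responds on the configured port by calling the job list endpoint described below:

# Expected to return a JSON response; on a fresh instance the job list should be empty.
curl http://localhost:8888/shardingscaling/job/list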
Create a Migration Job
Sharding-Scaling provides HTTP APIs for managing migration jobs. Once it is deployed and running, we can call the corresponding API to start a migration job.
Create a migration job:
curl -X POST \
  http://localhost:8888/shardingscaling/job/start \
  -H 'content-type: application/json' \
  -d '{
    "ruleConfiguration": {
        "sourceDatasource": "ds_0: !!org.apache.shardingsphere.orchestration.core.configuration.YamlDataSourceConfiguration\n  dataSourceClassName: com.zaxxer.hikari.HikariDataSource\n  properties:\n    jdbcUrl: jdbc:mysql://127.0.0.1:3306/test?serverTimezone=UTC&useSSL=false\n    username: root\n    password: '\''123456'\''\n    connectionTimeout: 30000\n    idleTimeout: 60000\n    maxLifetime: 1800000\n    maxPoolSize: 50\n    minPoolSize: 1\n    maintenanceIntervalMilliseconds: 30000\n    readOnly: false\n",
        "sourceRule": "defaultDatabaseStrategy:\n  inline:\n    algorithmExpression: ds_${user_id % 2}\n    shardingColumn: user_id\ntables:\n  t1:\n    actualDataNodes: ds_0.t1\n    keyGenerator:\n      column: order_id\n      type: SNOWFLAKE\n    logicTable: t1\n    tableStrategy:\n      inline:\n        algorithmExpression: t1\n        shardingColumn: order_id\n  t2:\n    actualDataNodes: ds_0.t2\n    keyGenerator:\n      column: order_item_id\n      type: SNOWFLAKE\n    logicTable: t2\n    tableStrategy:\n      inline:\n        algorithmExpression: t2\n        shardingColumn: order_id\n",
        "destinationDataSources": {
            "name": "dt_0",
            "password": "123456",
            "url": "jdbc:mysql://127.0.0.1:3306/test2?serverTimezone=UTC&useSSL=false",
            "username": "root"
        }
    },
    "jobConfiguration": {
        "concurrency": 3
    }
}'
Note: in the request above you need to adjust ruleConfiguration.sourceDatasource and ruleConfiguration.sourceRule, which hold the data source and table rule configuration of the source-side ShardingSphere, as well as ruleConfiguration.destinationDataSources, which holds the connection information of the target sharding-proxy.
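For readability, this is the YAML that the escaped ruleConfiguration.sourceDatasource string above encodes once its \n sequences are expanded (the nesting shown here is an assumption based on standard YAML indentation); ruleConfiguration.sourceRule is unescaped the same way:

# Source-side data source, keyed by its name (ds_0).
ds_0: !!org.apache.shardingsphere.orchestration.core.configuration.YamlDataSourceConfiguration
  dataSourceClassName: com.zaxxer.hikari.HikariDataSource
  properties:
    jdbcUrl: jdbc:mysql://127.0.0.1:3306/test?serverTimezone=UTC&useSSL=false
    username: root
    password: '123456'
    connectionTimeout: 30000
    idleTimeout: 60000
    maxLifetime: 1800000
    maxPoolSize: 50
    minPoolSize: 1
    maintenanceIntervalMilliseconds: 30000
    readOnly: false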
The following response indicates that the job was created successfully:
{ "success": true, "errorCode": 0, "errorMsg": null, "model": null}
Note that, at present, a Sharding-Scaling job starts running and migrating data automatically as soon as it has been created successfully.
Query Job Progress
Run the following command to list all current migration jobs:
curl -X GET \
  http://localhost:8888/shardingscaling/job/list
An example response looks like this:
{ "success": true, "errorCode": 0, "model": [ { "jobId": 1, "jobName": "Local Sharding Scaling Job", "status": "RUNNING" } ]}
To query the detailed migration status of a specific job:
curl -X GET \
  http://localhost:8888/shardingscaling/job/progress/1
The detailed job information is returned as follows:
{ "success": true, "errorCode": 0, "errorMsg": null, "model": { "id": 1, "jobName": "Local Sharding Scaling Job", "status": "RUNNING" "syncTaskProgress": [{ "id": "127.0.0.1-3306-test", "status": "SYNCHRONIZE_REALTIME_DATA", "historySyncTaskProgress": [{ "id": "history-test-t1#0", "estimatedRows": 41147, "syncedRows": 41147 }, { "id": "history-test-t1#1", "estimatedRows": 42917, "syncedRows": 42917 }, { "id": "history-test-t1#2", "estimatedRows": 43543, "syncedRows": 43543 }, { "id": "history-test-t2#0", "estimatedRows": 39679, "syncedRows": 39679 }, { "id": "history-test-t2#1", "estimatedRows": 41483, "syncedRows": 41483 }, { "id": "history-test-t2#2", "estimatedRows": 42107, "syncedRows": 42107 }], "realTimeSyncTaskProgress": { "id": "realtime-test", "delayMillisecond": 1576563771372, "logPosition": { "filename": "ON.000007", "position": 177532875, "serverId": 0 } } }] }}
Stop the Migration Job
Once the data migration is finished, we can call the API to stop the job:
curl -X POST \
  http://localhost:8888/shardingscaling/job/stop \
  -H 'content-type: application/json' \
  -d '{
    "jobId": 1
}'
The following response indicates that the job was stopped successfully:
{ "success": true, "errorCode": 0, "errorMsg": null, "model": null}
Shut Down Sharding-Scaling
sh bin/stop.sh