Initialize the Environment

Do a minimal installation of the operating system. The official documentation recommends CentOS 7.3 or later; the version used here is:

[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
[root@localhost ~]# uname -r
3.10.0-1127.el7.x86_64

The machine needs at least 4 GB of memory; otherwise the cluster may fail to start.
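A quick way to confirm the host meets this requirement before continuing (an optional check added here, not part of the original session):

# Show total memory; the value should be at least 4 GB
grep MemTotal /proc/meminfo
free -g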

Initialize the environment with the following script:

[root@localhost ~]# vi tidb-init.sh
#!/bin/bash
# Stop and disable the firewall
echo "=========stop firewalld============="
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
# Stop and disable NetworkManager
echo "=========stop NetworkManager ============="
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl status NetworkManager
# Disable SELinux
echo "=========disable selinux============="
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
getenforce
# Turn off swap
echo "=========close swap============="
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
free -m
# Synchronize time
echo "=========sync time============="
yum install chrony -y
cat >> /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
EOF
systemctl start chronyd
systemctl enable chronyd
chronyc sources
# Configure yum repositories
echo "=========config yum============="
yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix -y
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
[root@localhost ~]# sh tidb-init.sh

Configure the network:

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="96350fad-1ffc-4410-a068-5d13244affb7"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.44.134
NETMASK=255.255.255.0
GATEWAY=192.168.44.2
DNS1=192.168.44.2
[root@localhost ~]# /etc/init.d/network restart
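After restarting the network service, it is worth confirming that the static address is active and that the host can reach the Aliyun mirrors used later by yum (a verification step added here for convenience, not part of the original session):

# Show the configured IPv4 address on ens33
ip addr show ens33 | grep "inet "
# Check outbound connectivity to the yum mirror
ping -c 3 mirrors.aliyun.com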

Change the hostname:

[root@localhost ~]# hostnamectl set-hostname tidbtest01
[root@tidbtest01 ~]# echo "192.168.44.134   tidbtest01" >> /etc/hosts
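To confirm the new hostname and the /etc/hosts entry took effect (optional check, not in the original steps):

# The static hostname should now be tidbtest01
hostnamectl status
# The name should resolve to 192.168.44.134
getent hosts tidbtest01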

Cluster Topology

Topology of the minimal TiDB cluster:

Instance  Count  IP                                              Configuration
--------  -----  ----------------------------------------------  ---------------------------------------------
TiKV      3      192.168.44.134, 192.168.44.134, 192.168.44.134  Avoid port and directory conflicts
TiDB      1      192.168.44.134                                  Default ports, global directory configuration
PD        1      192.168.44.134                                  Default ports, global directory configuration
TiFlash   1      192.168.44.134                                  Default ports, global directory configuration
Monitor   1      192.168.44.134                                  Default ports, global directory configuration

Perform the Deployment

  1. Download and install TiUP
[root@tidbtest01 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8697k  100 8697k    0     0  4637k      0  0:00:01  0:00:01 --:--:-- 4636k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
[root@tidbtest01 ~]# source /root/.bash_profile
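Before installing the cluster component, you can verify that tiup is now on the PATH (an optional check added here):

# Expected location: /root/.tiup/bin/tiup
which tiup
# Print the installed TiUP version
tiup --version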
  1. Install the TiUP cluster component
[root@tidbtest01 ~]# tiup cluster
The component `cluster` is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.3.2-linux-amd64.tar.gz 10.05 MiB / 10.05 MiB 100.00% 9.91 MiB p/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  exec        Run shell command on host in the tidb cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config. Will use editor from environment variable `EDITOR`, default use vi
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable starting a TiDB cluster automatically at boot
  help        Help about any command

Flags:
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.
  1. Increase the connection limit of the sshd service
[root@tidbtest01 ~]# vi /etc/ssh/sshd_config
MaxSessions 20
[root@tidbtest01 ~]# service sshd restart
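To confirm the new limit is in effect after restarting sshd (an optional check added here):

# sshd -T prints the effective server configuration
sshd -T | grep -i maxsessions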
  1. Create the topology configuration file
[root@tidbtest01 ~]# vi topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 192.168.44.134

tidb_servers:
 - host: 192.168.44.134

tikv_servers:
 - host: 192.168.44.134
   port: 20160
   status_port: 20180
   config:
     server.labels: { host: "logic-host-1" }
 - host: 192.168.44.134
   port: 20161
   status_port: 20181
   config:
     server.labels: { host: "logic-host-2" }
 - host: 192.168.44.134
   port: 20162
   status_port: 20182
   config:
     server.labels: { host: "logic-host-3" }

tiflash_servers:
 - host: 192.168.44.134

monitoring_servers:
 - host: 192.168.44.134

grafana_servers:
 - host: 192.168.44.134
  1. Check the available versions

    Use the tiup list tidb command to see which TiDB versions are currently available for deployment.

[root@tidbtest01 ~]# tiup list tidb
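The command prints a long list of versions. If you only care about one release line, a simple shell filter helps; the pattern below is illustrative only:

# Show only 4.0.x releases from the version list
tiup list tidb | grep "v4\.0\."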
  1. Deploy the cluster

The command is:

tiup cluster deploy <cluster-name> <tidb-version> ./topo.yaml --user root -p
  • The cluster-name parameter sets the name of the cluster
  • The tidb-version parameter sets the version of the cluster
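Before running the actual deployment, TiUP also ships a preflight check command (listed in the help output above). Running it against the same topology file can catch environment problems early. The flags below mirror those used for deploy; consult tiup cluster check --help on your TiUP version to confirm them:

# Optional preflight check against the topology file (flags assumed to match deploy)
tiup cluster check ./topo.yaml --user root -p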
[root@tidbtest01 ~]# tiup cluster deploy tidb-cluster v4.0.10 ./topo.yaml --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster deploy tidb-cluster v4.0.10 ./topo.yaml --user root -p
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-cluster
Cluster version: v4.0.10
Type        Host            Ports                            OS/Arch       Directories
----        ----            -----                            -------       -----------
pd          192.168.44.134  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.44.134  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv        192.168.44.134  20161/20181                      linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv        192.168.44.134  20162/20182                      linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb        192.168.44.134  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash     192.168.44.134  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  192.168.44.134  9090                             linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.44.134  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password:    (enter the root password here)
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.10 (linux/amd64) ... Done
  - Download tikv:v4.0.10 (linux/amd64) ... Done
  - Download tidb:v4.0.10 (linux/amd64) ... Done
  - Download tiflash:v4.0.10 (linux/amd64) ... Done
  - Download prometheus:v4.0.10 (linux/amd64) ... Done
  - Download grafana:v4.0.10 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.44.134:22 ... Done
+ Copy files
  - Copy pd -> 192.168.44.134 ... Done
  - Copy tikv -> 192.168.44.134 ... Done
  - Copy tikv -> 192.168.44.134 ... Done
  - Copy tikv -> 192.168.44.134 ... Done
  - Copy tidb -> 192.168.44.134 ... Done
  - Copy tiflash -> 192.168.44.134 ... Done
  - Copy prometheus -> 192.168.44.134 ... Done
  - Copy grafana -> 192.168.44.134 ... Done
  - Copy node_exporter -> 192.168.44.134 ... Done
  - Copy blackbox_exporter -> 192.168.44.134 ... Done
+ Check status
Enabling component pd
        Enabling instance pd 192.168.44.134:2379
        Enable pd 192.168.44.134:2379 success
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component tikv
        Enabling instance tikv 192.168.44.134:20162
        Enabling instance tikv 192.168.44.134:20160
        Enabling instance tikv 192.168.44.134:20161
        Enable tikv 192.168.44.134:20162 success
        Enable tikv 192.168.44.134:20161 success
        Enable tikv 192.168.44.134:20160 success
Enabling component tidb
        Enabling instance tidb 192.168.44.134:4000
        Enable tidb 192.168.44.134:4000 success
Enabling component tiflash
        Enabling instance tiflash 192.168.44.134:9000
        Enable tiflash 192.168.44.134:9000 success
Enabling component prometheus
        Enabling instance prometheus 192.168.44.134:9090
        Enable prometheus 192.168.44.134:9090 success
Enabling component grafana
        Enabling instance grafana 192.168.44.134:3000
        Enable grafana 192.168.44.134:3000 success
Cluster `tidb-cluster` deployed successfully, you can start it with command: `tiup cluster start tidb-cluster`
  1. Start the cluster
[root@tidbtest01 ~]# tiup cluster start tidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster start tidb-cluster
Starting cluster tidb-cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance pd 192.168.44.134:2379
        Start pd 192.168.44.134:2379 success
Starting component node_exporter
        Starting instance 192.168.44.134
        Start 192.168.44.134 success
Starting component blackbox_exporter
        Starting instance 192.168.44.134
        Start 192.168.44.134 success
Starting component tikv
        Starting instance tikv 192.168.44.134:20162
        Starting instance tikv 192.168.44.134:20160
        Starting instance tikv 192.168.44.134:20161
        Start tikv 192.168.44.134:20162 success
        Start tikv 192.168.44.134:20161 success
        Start tikv 192.168.44.134:20160 success
Starting component tidb
        Starting instance tidb 192.168.44.134:4000
        Start tidb 192.168.44.134:4000 success
Starting component tiflash
        Starting instance tiflash 192.168.44.134:9000
        Start tiflash 192.168.44.134:9000 success
Starting component prometheus
        Starting instance prometheus 192.168.44.134:9090
        Start prometheus 192.168.44.134:9090 success
Starting component grafana
        Starting instance grafana 192.168.44.134:3000
        Start grafana 192.168.44.134:3000 success
+ [ Serial ] - UpdateTopology: cluster=tidb-cluster
Started cluster `tidb-cluster` successfully

Access the Cluster

  1. Install the MySQL client
[root@tidbtest01 ~]# yum -y install mysql
  1. Connect to TiDB (the password is empty)
[root@tidbtest01 ~]# mysql -h 192.168.44.134 -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.25-TiDB-v4.0.10 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.00 sec)

MySQL [(none)]> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v4.0.10
Edition: Community
Git Commit Hash: dbade8cda4c5a329037746e171449e0a1dfdb8b3
Git Branch: heads/refs/tags/v4.0.10
UTC Build Time: 2021-01-15 02:59:27
GoVersion: go1.13
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false
1 row in set (0.00 sec)
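As a simple smoke test that reads and writes actually go through the new cluster, you can also run a few statements non-interactively; the demo database and table names below are made up for illustration:

# Hypothetical smoke test: create a table, insert a row, and read it back
mysql -h 192.168.44.134 -P 4000 -u root -e "
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE IF NOT EXISTS demo.t1 (id INT PRIMARY KEY, name VARCHAR(20));
INSERT INTO demo.t1 VALUES (1, 'tidb');
SELECT * FROM demo.t1;"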
  1. List the deployed clusters
[root@tidbtest01 ~]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster list
Name          User  Version  Path                                               PrivateKey
----          ----  -------  ----                                               ----------
tidb-cluster  tidb  v4.0.10  /root/.tiup/storage/cluster/clusters/tidb-cluster  /root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa
  1. Check the cluster topology and status
[root@tidbtest01 ~]# tiup cluster display tidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v4.0.10
SSH type:           builtin
Dashboard URL:      http://192.168.44.134:2379/dashboard
ID                    Role        Host            Ports                            OS/Arch       Status   Data Dir                    Deploy Dir
--                    ----        ----            -----                            -------       ------   --------                    ----------
192.168.44.134:3000   grafana     192.168.44.134  3000                             linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
192.168.44.134:2379   pd          192.168.44.134  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.44.134:9090   prometheus  192.168.44.134  9090                             linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.44.134:4000   tidb        192.168.44.134  4000/10080                       linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
192.168.44.134:9000   tiflash     192.168.44.134  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.44.134:20160  tikv        192.168.44.134  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.44.134:20161  tikv        192.168.44.134  20161/20181                      linux/x86_64  Up       /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.44.134:20162  tikv        192.168.44.134  20162/20182                      linux/x86_64  Up       /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
Total nodes: 8
  1. Access the Dashboard

Open the Dashboard URL printed above, http://192.168.44.134:2379/dashboard, to reach the cluster console. The default user name is root and the password is empty.

  1. Access the TiDB Grafana monitoring

Open the Grafana host and port shown in the output above (192.168.44.134:3000) to reach the cluster's Grafana monitoring page. The default user name and password are both admin.
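If the page does not load, a quick way to confirm that Grafana is listening on its default port 3000 (the port shown in the display output above) is the following; the exact response code may vary:

# Should return an HTTP response header if Grafana is up
curl -sI http://192.168.44.134:3000 | head -n 1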

Feel free to follow my WeChat official account so we can learn together.