Databases: Deploying a TiDB Test Cluster on a Single Machine


Initialize the Environment

Do a minimal installation of the operating system. The official documentation recommends CentOS 7.3 or later; the version used here is:

[root@localhost ~]# cat /etc/redhat-release 
CentOS Linux release 7.8.2003 (Core)
[root@localhost ~]# uname -r
3.10.0-1127.el7.x86_64

The machine needs at least 4 GB of memory; otherwise the cluster may fail to start.
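
Before going further, you can quickly confirm the available memory and CPU count (this check is my own addition, not part of the original steps):

[root@localhost ~]# free -m | awk '/^Mem:/ {print "Total memory (MB):", $2}'
[root@localhost ~]# nproc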

Use the following script to initialize the environment:

[root@localhost ~]# vi tidb-init.sh
#!/bin/bash

# Disable the firewall
echo "=========stop firewalld============="
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld

# Disable NetworkManager
echo "=========stop NetworkManager ============="
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl status NetworkManager

# Disable SELinux
echo "=========disable selinux============="
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
getenforce

# Disable swap
echo "=========close swap============="
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
free -m

# Synchronize time
echo "=========sync time============="
yum install chrony -y
cat >> /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
EOF
systemctl start chronyd
systemctl enable chronyd
chronyc sources

# Configure yum
echo "=========config yum============="
yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix -y
wget -O /etc/yum.repos.d/CentOS-Base.repo  http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo  http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

[root@localhost ~]# sh tidb-init.sh

Configure the network:

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="96350fad-1ffc-4410-a068-5d13244affb7"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.44.134
NETMASK=255.255.255.0
GATEWAY=192.168.44.2
DNS1=192.168.44.2

[root@localhost ~]# /etc/init.d/network restart
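
You can verify that the static address and gateway took effect, for example:

[root@localhost ~]# ip addr show ens33 | grep 'inet '
[root@localhost ~]# ping -c 3 192.168.44.2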

Set the hostname:

[root@localhost ~]# hostnamectl set-hostname tidbtest01
[root@tidbtest01 ~]# echo "192.168.44.134   tidbtest01" >> /etc/hosts
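
A quick check that the hostname and the hosts entry are in place, for example:

[root@tidbtest01 ~]# hostnamectl status | grep 'Static hostname'
[root@tidbtest01 ~]# ping -c 1 tidbtest01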

Cluster Topology

The minimal TiDB cluster topology:

Instance   Count   IP                    Configuration
TiKV       3       192.168.44.134 (×3)   Avoid port and directory conflicts
TiDB       1       192.168.44.134        Default ports, global directory configuration
PD         1       192.168.44.134        Default ports, global directory configuration
TiFlash    1       192.168.44.134        Default ports, global directory configuration
Monitor    1       192.168.44.134        Default ports, global directory configuration
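
Because every component runs on the same host, the three TiKV instances get distinct ports, and nothing else may already occupy the ports this topology uses. Here is a minimal sketch to confirm they are free before deploying (the port list is taken from the deploy output later in this post):

#!/bin/bash
# Report any port from this topology that is already in LISTEN state.
for port in 2379 2380 4000 10080 3000 9090 9000 \
            20160 20161 20162 20180 20181 20182; do
    if ss -lnt "( sport = :$port )" | grep -q LISTEN; then
        echo "port $port is already in use"
    fi
done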

Perform the Deployment

  1. Download and install TiUP
[root@tidbtest01 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8697k  100 8697k    0     0  4637k      0  0:00:01  0:00:01 --:--:-- 4636k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================

[root@tidbtest01 ~]# source /root/.bash_profile
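
You can confirm that TiUP is now on the PATH (the installed path matches the output above):

[root@tidbtest01 ~]# which tiup
[root@tidbtest01 ~]# tiup --version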
  2. Install the TiUP cluster component
[root@tidbtest01 ~]# tiup cluster
The component `cluster` is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.3.2-linux-amd64.tar.gz 10.05 MiB / 10.05 MiB 100.00% 9.91 MiB p/s            
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  exec        Run shell command on host in the tidb cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config.
Will use editor from environment variable `EDITOR`, default use vi
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable starting a TiDB cluster automatically at boot
  help        Help about any command

Flags:
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.
  3. Increase the connection limit of the sshd service
[root@tidbtest01 ~]# vi /etc/ssh/sshd_config
MaxSessions 20
[root@tidbtest01 ~]# service sshd restart
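
To verify the new session limit is effective, sshd's test mode can print the running configuration, for example:

[root@tidbtest01 ~]# sshd -T | grep -i maxsessions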
  4. Create the topology configuration file
[root@tidbtest01 ~]# vi topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 192.168.44.134

tidb_servers:
 - host: 192.168.44.134

tikv_servers:
 - host: 192.168.44.134
   port: 20160
   status_port: 20180
   config:
     server.labels: {host: "logic-host-1"}

 - host: 192.168.44.134
   port: 20161
   status_port: 20181
   config:
     server.labels: {host: "logic-host-2"}

 - host: 192.168.44.134
   port: 20162
   status_port: 20182
   config:
     server.labels: {host: "logic-host-3"}

tiflash_servers:
 - host: 192.168.44.134

monitoring_servers:
 - host: 192.168.44.134

grafana_servers:
 - host: 192.168.44.134
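
Optionally, before deploying, you can run the preflight checks the cluster component provides (the check subcommand listed in the help output above) against this topology. On a small test VM, some checks, such as CPU or disk requirements, may fail and can be tolerated:

[root@tidbtest01 ~]# tiup cluster check ./topo.yaml --user root -p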
  5. Check the available versions

    Run the tiup list tidb command to see which TiDB versions are currently available for deployment.

[root@tidbtest01 ~]# tiup list tidb
  6. Deploy the cluster

The command is:

tiup cluster deploy <cluster-name> <tidb-version> ./topo.yaml --user root -p
  • The cluster-name parameter sets the name of the cluster
  • The tidb-version parameter sets the version of the cluster
[root@tidbtest01 ~]# tiup cluster deploy tidb-cluster v4.0.10 ./topo.yaml --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster deploy tidb-cluster v4.0.10 ./topo.yaml --user root -p
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-cluster
Cluster version: v4.0.10
Type        Host            Ports                            OS/Arch       Directories
----        ----            -----                            -------       -----------
pd          192.168.44.134  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.44.134  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv        192.168.44.134  20161/20181                      linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv        192.168.44.134  20162/20182                      linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb        192.168.44.134  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash     192.168.44.134  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  192.168.44.134  9090                             linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.44.134  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password: (enter the root password here)
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.10 (linux/amd64) ... Done
  - Download tikv:v4.0.10 (linux/amd64) ... Done
  - Download tidb:v4.0.10 (linux/amd64) ... Done
  - Download tiflash:v4.0.10 (linux/amd64) ... Done
  - Download prometheus:v4.0.10 (linux/amd64) ... Done
  - Download grafana:v4.0.10 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.44.134:22 ... Done
+ Copy files
  - Copy pd -> 192.168.44.134 ... Done
  - Copy tikv -> 192.168.44.134 ... Done
  - Copy tikv -> 192.168.44.134 ... Done
  - Copy tikv -> 192.168.44.134 ... Done
  - Copy tidb -> 192.168.44.134 ... Done
  - Copy tiflash -> 192.168.44.134 ... Done
  - Copy prometheus -> 192.168.44.134 ... Done
  - Copy grafana -> 192.168.44.134 ... Done
  - Copy node_exporter -> 192.168.44.134 ... Done
  - Copy blackbox_exporter -> 192.168.44.134 ... Done
+ Check status
Enabling component pd
        Enabling instance pd 192.168.44.134:2379
        Enable pd 192.168.44.134:2379 success
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component tikv
        Enabling instance tikv 192.168.44.134:20162
        Enabling instance tikv 192.168.44.134:20160
        Enabling instance tikv 192.168.44.134:20161
        Enable tikv 192.168.44.134:20162 success
        Enable tikv 192.168.44.134:20161 success
        Enable tikv 192.168.44.134:20160 success
Enabling component tidb
        Enabling instance tidb 192.168.44.134:4000
        Enable tidb 192.168.44.134:4000 success
Enabling component tiflash
        Enabling instance tiflash 192.168.44.134:9000
        Enable tiflash 192.168.44.134:9000 success
Enabling component prometheus
        Enabling instance prometheus 192.168.44.134:9090
        Enable prometheus 192.168.44.134:9090 success
Enabling component grafana
        Enabling instance grafana 192.168.44.134:3000
        Enable grafana 192.168.44.134:3000 success
Cluster `tidb-cluster` deployed successfully, you can start it with command: `tiup cluster start tidb-cluster`
  7. Start the cluster
[root@tidbtest01 ~]# tiup cluster start tidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster start tidb-cluster
Starting cluster tidb-cluster...
+ [Serial] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Serial] - StartCluster
Starting component pd
        Starting instance pd 192.168.44.134:2379
        Start pd 192.168.44.134:2379 success
Starting component node_exporter
        Starting instance 192.168.44.134
        Start 192.168.44.134 success
Starting component blackbox_exporter
        Starting instance 192.168.44.134
        Start 192.168.44.134 success
Starting component tikv
        Starting instance tikv 192.168.44.134:20162
        Starting instance tikv 192.168.44.134:20160
        Starting instance tikv 192.168.44.134:20161
        Start tikv 192.168.44.134:20162 success
        Start tikv 192.168.44.134:20161 success
        Start tikv 192.168.44.134:20160 success
Starting component tidb
        Starting instance tidb 192.168.44.134:4000
        Start tidb 192.168.44.134:4000 success
Starting component tiflash
        Starting instance tiflash 192.168.44.134:9000
        Start tiflash 192.168.44.134:9000 success
Starting component prometheus
        Starting instance prometheus 192.168.44.134:9090
        Start prometheus 192.168.44.134:9090 success
Starting component grafana
        Starting instance grafana 192.168.44.134:3000
        Start grafana 192.168.44.134:3000 success
+ [Serial] - UpdateTopology: cluster=tidb-cluster
Started cluster `tidb-cluster` successfully

Access the Cluster

  1. Install the MySQL client
[root@tidbtest01 ~]# yum -y install mysql
  2. Connect to TiDB (the password is empty)
[root@tidbtest01 ~]# mysql -h 192.168.44.134 -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.25-TiDB-v4.0.10 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.00 sec)

MySQL [(none)]> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v4.0.10
Edition: Community
Git Commit Hash: dbade8cda4c5a329037746e171449e0a1dfdb8b3
Git Branch: heads/refs/tags/v4.0.10
UTC Build Time: 2021-01-15 02:59:27
GoVersion: go1.13
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false
1 row in set (0.00 sec)
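
As a quick smoke test (my own addition, using the test database that ships with TiDB), you can create and query a table through the same client:

[root@tidbtest01 ~]# mysql -h 192.168.44.134 -P 4000 -u root -e "
create table test.t1 (id int primary key, name varchar(20));
insert into test.t1 values (1, 'tidb');
select * from test.t1;"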
  3. List the deployed clusters
[root@tidbtest01 ~]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster list
Name          User  Version  Path                                               PrivateKey
----          ----  -------  ----                                               ----------
tidb-cluster  tidb  v4.0.10  /root/.tiup/storage/cluster/clusters/tidb-cluster  /root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa
  4. View the cluster topology and status
[root@tidbtest01 ~]# tiup cluster display tidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster display tidb-cluster
Cluster type:       tidb
Cluster name:       tidb-cluster
Cluster version:    v4.0.10
SSH type:           builtin
Dashboard URL:      http://192.168.44.134:2379/dashboard
ID                    Role        Host            Ports                            OS/Arch       Status   Data Dir                    Deploy Dir
--                    ----        ----            -----                            -------       ------   --------                    ----------
192.168.44.134:3000   grafana     192.168.44.134  3000                             linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
192.168.44.134:2379   pd          192.168.44.134  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.44.134:9090   prometheus  192.168.44.134  9090                             linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.44.134:4000   tidb        192.168.44.134  4000/10080                       linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
192.168.44.134:9000   tiflash     192.168.44.134  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.44.134:20160  tikv        192.168.44.134  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.44.134:20161  tikv        192.168.44.134  20161/20181                      linux/x86_64  Up       /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.44.134:20162  tikv        192.168.44.134  20162/20182                      linux/x86_64  Up       /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
Total nodes: 8
  5. Access the Dashboard

Open the Dashboard URL printed above, http://192.168.44.134:2379/dashboard, to access the cluster's console page. The default username is root, and the password is empty.
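
The Dashboard is served by PD on the same port; if you want to check PD from the command line as well, you can query its HTTP API (assuming PD's standard v1 members endpoint):

[root@tidbtest01 ~]# curl http://192.168.44.134:2379/pd/api/v1/members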

  6. Access TiDB's Grafana monitoring

Use the Grafana host and port shown in the output above (192.168.44.134:3000) to access the cluster's Grafana monitoring page. The default username and password are both admin.
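
Before opening a browser, you can confirm Grafana is serving, assuming its standard /api/health endpoint:

[root@tidbtest01 ~]# curl -s http://192.168.44.134:3000/api/health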

You're welcome to follow my WeChat official account so we can learn together.
