Study Notes
1. Installation Requirements
Before starting, the machines for the Docker Swarm cluster deployment need to meet the following conditions:
- 3 virtual machines, OS: Linux version 3.10.0-1127.el7.x86_64 (a CentOS 7 kernel)
- Hardware: 2 CPUs; 3 GB or more of RAM; about 20 GB of disk
- The 3 machines form one cluster, with network connectivity between them
- The cluster can reach the public internet, for pulling images
- Swap partitions are disabled (a sketch follows this list)
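A minimal sketch for disabling swap on CentOS 7 (the sed edit assumes a standard /etc/fstab layout; check the file before running):
# turn swap off for the running system
$ swapoff -a
# comment out swap entries so the change survives a reboot
$ sed -i '/ swap / s/^/#/' /etc/fstab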
2. How Swarm Works
2.1 Cluster scheduling diagram
A swarm cluster consists of manager nodes and worker nodes.
2.2 Swarm cluster invocation architecture diagram
Docker nodes in the cluster communicate with each other via the Remote API.
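As a quick illustration (a sketch, assuming curl 7.40+ and the default Unix socket), the engine's Remote API can be queried directly; once the swarm is initialized in section 4, the /nodes endpoint on a manager lists the cluster's nodes:
$ curl --unix-socket /var/run/docker.sock http://localhost/nodes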
3. Environment Preparation
- master node: 192.168.11.99
- node1: 192.168.11.131
- node2: 192.168.11.130
## 1. Stop the firewall
$ systemctl stop firewalld
### Disable the firewall from starting at boot
$ systemctl disable firewalld
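Instead of stopping firewalld entirely, an alternative sketch is to open only the ports Swarm needs (2377/tcp cluster management, 7946/tcp+udp node communication, 4789/udp overlay traffic):
$ firewall-cmd --permanent --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
$ firewall-cmd --reload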
## 2. Disable SELinux
### 2.1 Permanently disable SELinux by editing the config file (takes effect after a reboot)
$ vi /etc/sysconfig/selinux
SELINUX=disabled
### 2.2 Temporarily disable SELinux (until reboot)
$ setenforce 0
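To confirm the current mode (it reports Permissive after the setenforce 0 above, or Disabled after a reboot with the config change):
$ getenforce
Permissive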
## 3. Set the hostnames (run each command on its corresponding machine)
$ hostnamectl set-hostname swarm-master
$ hostnamectl set-hostname swarm-node1
$ hostnamectl set-hostname swarm-node2
### 3.1 Check the hostname
$ hostname
## 4. Configure the hosts file
$ cat >> /etc/hosts << EOF
192.168.11.99 swarm-master
192.168.11.131 swarm-node1
192.168.11.130 swarm-node2
EOF
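A quick connectivity check from each machine (assuming ICMP is not blocked):
$ ping -c 3 swarm-node1
$ ping -c 3 swarm-node2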
## 5. Install Docker
$ yum -y install docker
### 5.1 Enable and start the Docker service
$ systemctl enable docker && systemctl start docker
4. Create the Swarm and Add Nodes
# On the swarm-master VM
## 1. Create the Swarm cluster (record the token)
[root@localhost vagrant]# docker swarm init --advertise-addr 192.168.11.99
Swarm initialized: current node (6jir7f507ur05lrhtc2xq8plt) is now a manager.
To add a worker to this swarm, run the following command:
# This is how nodes are added (save the token from the init output; a node presents it as the shared secret when joining)
docker swarm join --token SWMTKN-1-30sw7z3qrh3gs0pi5fmllypw0c5dxrnpzf2uwd5nldcifzzroh-0l4pkovtgy708qs5pn5v8rzsk 192.168.11.99:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
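If the token was not saved, it can be printed again at any time from a manager; -q prints just the token string:
$ docker swarm join-token worker
$ docker swarm join-token -q worker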
## 2. View cluster information
[root@localhost vagrant]# docker info | grep swarm
WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-ip6tables is disabled
Name: swarm-master
### 2.1 View node information
[root@localhost vagrant]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
6jir7f507ur05lrhtc2xq8plt * swarm-master Ready Active Leader
# On the swarm-node VMs
## 3. Join nodes to the cluster
### 3.1 Join swarm-node1 to the cluster
[root@localhost vagrant]# docker swarm join --token SWMTKN-1-30sw7z3qrh3gs0pi5fmllypw0c5dxrnpzf2uwd5nldcifzzroh-0l4pkovtgy708qs5pn5v8rzsk 192.168.11.99:2377
This node joined a swarm as a worker.
### 3.2 Join swarm-node2 to the cluster
[root@localhost vagrant]# docker swarm join --token SWMTKN-1-30sw7z3qrh3gs0pi5fmllypw0c5dxrnpzf2uwd5nldcifzzroh-0l4pkovtgy708qs5pn5v8rzsk 192.168.11.99:2377
This node joined a swarm as a worker.
## 4. Check cluster node status
[root@localhost vagrant]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
6jir7f507ur05lrhtc2xq8plt * swarm-master Ready Active Leader
qt9ja2qbmq4n040y60g6xbdlz swarm-node2 Ready Active
scnyaecep2d1eiccvubusykww swarm-node1 Ready Active
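For manager high availability, a worker can later be promoted to a manager, and demoted back:
$ docker node promote swarm-node1
$ docker node demote swarm-node1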
5. Common Swarm Operations
Changing a node's availability
In a swarm cluster, a node's availability can be active or drain (a third value, pause, also exists):
- active: the node can accept task assignments from manager nodes;
- drain: the node's tasks are shut down and rescheduled elsewhere, and it accepts no new task assignments from managers (i.e. the node is taken out of service). A sketch for reading the current value follows this list.
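A sketch for reading a single node's availability with a Go template:
$ docker node inspect --format '{{ .Spec.Availability }}' swarm-node1
active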
## 1. Remove a node
[root@localhost vagrant]# docker node rm --force swarm-node1
swarm-node1
[root@localhost vagrant]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
6jir7f507ur05lrhtc2xq8plt * swarm-master Ready Active Leader
qt9ja2qbmq4n040y60g6xbdlz swarm-node2 Ready Active
## 2. Update node availability
(Note: swarm-node1 was re-joined to the cluster after the removal above, which is why it appears below with a new node ID.)
[root@localhost vagrant]# docker node update --availability drain swarm-node1
swarm-node1
[root@localhost vagrant]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
6jir7f507ur05lrhtc2xq8plt * swarm-master Ready Active Leader
8nk8i2ekx8kykyzmaab475tnw swarm-node1 Ready Drain
qt9ja2qbmq4n040y60g6xbdlz swarm-node2 Ready Active
[root@localhost vagrant]# docker node update --availability active swarm-node1
swarm-node1
[root@localhost vagrant]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
6jir7f507ur05lrhtc2xq8plt * swarm-master Ready Active Leader
8nk8i2ekx8kykyzmaab475tnw swarm-node1 Ready Active
qt9ja2qbmq4n040y60g6xbdlz swarm-node2 Ready Active
6. Deploying a Service in Swarm (nginx)
Since version 1.12, Docker supports service scaling, health checks, and rolling upgrades, and provides built-in DNS and VIP mechanisms for service discovery and load balancing.
## 1. Deploy the service
### 1.1 Create an overlay network
[root@localhost vagrant]# docker network create -d overlay nginx_net
lj3drxnqik170ta6fr7f4rq5i
[root@localhost vagrant]# docker network ls | grep "nginx_net"
lj3drxnqik17 nginx_net overlay swarm
### 1.2 Deploy the service
# This creates an nginx service with one replica (--replicas 1), using the nginx image
[root@localhost vagrant]# docker service create --replicas 1 --network nginx_net --name my_nginx -p 80:80 nginx
ufxp6pe02z3xggd73ni2jcxay
### 1.3 List the running services
[root@localhost vagrant]# docker service ls
ID NAME MODE REPLICAS IMAGE
ufxp6pe02z3x my_nginx replicated 1/1 nginx:latest
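To see the built-in DNS and VIP mechanism from the section intro in action, you can resolve the service name from a container on the same overlay network. Note that standalone containers can only join an overlay network created with --attachable (nginx_net above was not), so this is a sketch under that assumption:
# requires: docker network create -d overlay --attachable nginx_net
$ docker run --rm --network nginx_net busybox nslookup my_nginx
# my_nginx resolves to the service's virtual IP (VIP), not an individual container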
## 2. Inspect a service in the Swarm
[root@localhost vagrant]# docker service inspect --pretty my_nginx
ID:             ufxp6pe02z3xggd73ni2jcxay
Name:           my_nginx
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Max failure ratio: 0
ContainerSpec:
 Image:         nginx:latest@sha256:06e4235e95299b1d6d595c5ef4c41a9b12641f6683136c18394b858967cd1506
Resources:
Networks: nginx_net
Endpoint Mode:  vip
Ports:
 PublishedPort 80
  Protocol = tcp
  TargetPort = 80
## 3. Check which node the service is running on (on the swarm-master node)
[root@localhost vagrant]# docker service ps my_nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
t7o6f7hrjd9v my_nginx.1 nginx:latest swarm-master Running Running 6 minutes ago
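Because of the swarm routing mesh, the published port answers on every node in the cluster, not only the node running the task; a quick check (assuming curl is installed):
$ curl -I http://192.168.11.99
$ curl -I http://192.168.11.131   # a node without a my_nginx task also responds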
## 4. Dynamically scale the service in the Swarm (scale)
# Scale the service to 4 containers
[root@localhost vagrant]# docker service scale my_nginx=4
my_nginx scaled to 4
[root@localhost vagrant]# docker service ps my_nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
t7o6f7hrjd9v my_nginx.1 nginx:latest swarm-master Running Running 58 minutes ago
ek7ckkc1g1b4 my_nginx.2 nginx:latest swarm-node2 Running Preparing 5 seconds ago
jdl14u7a8or5 my_nginx.3 nginx:latest swarm-node1 Running Preparing 5 seconds ago
2vflrnohh6h8 my_nginx.4 nginx:latest swarm-node1 Running Preparing 5 seconds ago
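docker service scale is shorthand for setting the replica count with docker service update; this is equivalent:
$ docker service update --replicas 4 my_nginx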
7. Swarm Service (nginx): Failover
If a node goes down (it is dropped from the swarm cluster), Docker reschedules the containers that were running on it onto other nodes, so that the specified number of replicas stays running.
# Check which nodes the service tasks are on
[root@localhost vagrant]# docker service ps my_nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
t7o6f7hrjd9v my_nginx.1 nginx:latest swarm-master Running Running 58 minutes ago
ek7ckkc1g1b4 my_nginx.2 nginx:latest swarm-node2 Running Preparing 5 seconds ago
jdl14u7a8or5 my_nginx.3 nginx:latest swarm-node1 Running Preparing 5 seconds ago
2vflrnohh6h8 my_nginx.4 nginx:latest swarm-node1 Running Preparing 5 seconds ago
Simulate one of the node machines shutting down:
# On swarm-node1, stop the Docker service
[root@localhost vagrant]# systemctl stop docker
Check from the master node:
[root@localhost vagrant]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
6jir7f507ur05lrhtc2xq8plt * swarm-master Ready Active Leader
8nk8i2ekx8kykyzmaab475tnw swarm-node1 Down Active
qt9ja2qbmq4n040y60g6xbdlz swarm-node2 Ready Active
You can see that swarm-node1 is now Down.
[root@localhost vagrant]# docker service ps my_nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
t7o6f7hrjd9v my_nginx.1 nginx:latest swarm-master Running Running about an hour ago
ek7ckkc1g1b4 my_nginx.2 nginx:latest swarm-node2 Running Running 19 minutes ago
buaz7bbd2tjx my_nginx.3 nginx:latest swarm-node2 Running Running 2 minutes ago
jdl14u7a8or5 \_ my_nginx.3 nginx:latest swarm-node1 Shutdown Running 2 minutes ago
5puac9f6ya0j my_nginx.4 nginx:latest swarm-node2 Running Running 2 minutes ago
2vflrnohh6h8 \_ my_nginx.4 nginx:latest swarm-node1 Shutdown Running 2 minutes ago
The nginx tasks that were on swarm-node1 have been shut down, and the same number of nginx tasks were recreated on the other nodes.
Now restore the Docker service on swarm-node1 (simulating the machine recovering) and check again:
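On swarm-node1 (the reverse of the stop above):
[root@localhost vagrant]# systemctl start docker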
[root@localhost vagrant]# docker service ps my_nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
t7o6f7hrjd9v my_nginx.1 nginx:latest swarm-master Running Running about an hour ago
ek7ckkc1g1b4 my_nginx.2 nginx:latest swarm-node2 Running Running 24 minutes ago
buaz7bbd2tjx my_nginx.3 nginx:latest swarm-node2 Running Running 7 minutes ago
jdl14u7a8or5 \_ my_nginx.3 nginx:latest swarm-node1 Shutdown Failed 2 minutes ago "No such container: my_nginx.3…"
5puac9f6ya0j my_nginx.4 nginx:latest swarm-node2 Running Running 7 minutes ago
2vflrnohh6h8 \_ my_nginx.4 nginx:latest swarm-node1 Shutdown Failed 2 minutes ago "No such container: my_nginx.4…"
The tasks that had been assigned to swarm-node1 are not moved back: Swarm does not rebalance running tasks when a node recovers.
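If redistribution back onto the recovered node is wanted, it can be forced; note that this restarts the service's tasks:
$ docker service update --force my_nginx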
8. Deploying Multiple Services to the Swarm
Combining docker-compose files with Swarm lets us orchestrate multiple services:
- nginx service
- visualizer service
- portainer service
docker service deploys a single service; for multi-service orchestration we can use docker stack, as shown below.
Create docker-compose.yml:
version: "3"
services:
nginx:
image: nginx
ports:
- 8888:80
deploy:
mode: replicated
replicas: 3
visualizer:
image: dockersamples/visualizer
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
replicas: 1
placement:
constraints: [node.role == manager]
portainer:
image: portainer/portainer
ports:
- "9000:9000"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
replicas: 1
placement:
constraints: [node.role == manager]
Deploy the services from docker-compose.yml:
[root@localhost testswarm]# docker stack deploy -c docker-compose.yml deploy_deamon
Creating network deploy_deamon_default
Creating service deploy_deamon_visualizer
Creating service deploy_deamon_portainer
Creating service deploy_deamon_nginx
# 查看状态
[root@localhost testswarm]# docker service ls
ID NAME MODE REPLICAS IMAGE
il2vuomkg3by deploy_deamon_portainer replicated 1/1 portainer/portainer:latest
sudtu6byntye deploy_deamon_visualizer replicated 1/1 dockersamples/visualizer:latest
ywtzr8ssersf deploy_deamon_nginx replicated 3/3 nginx:latest
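Some useful follow-up commands for the stack (using the deploy_deamon name from above):
# list the tasks in the stack
$ docker stack ps deploy_deamon
# list the stack's services
$ docker stack services deploy_deamon
# remove the whole stack
$ docker stack rm deploy_deamon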