
Docker (49): Multi-Host Container Communication with Docker Overlay Networks and etcd


To enable communication between containers running on different hosts, the first requirement is that container IPs be unique across all of the virtual machines. Taking the two hosts in the diagram above (192.168.205.10 and 192.168.205.11) as an example: a container newly created on 192.168.205.10 must not be assigned 172.17.0.3 if that IP is already in use by a container on the other host.

Installing etcd

To keep container IPs unique across multiple hosts, we can use etcd. etcd is an open-source, distributed key-value data store.

https://coreos.com/etcd/
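
As a quick illustration of the key-value model, once the cluster below is up you can write and read a key with etcdctl (the key /foo is just an example, not used elsewhere):

./etcdctl set /foo bar    # write key /foo with value "bar"
./etcdctl get /foo        # read it back -> bar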

Install etcd on docker-node1:

wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
tar zxvf etcd-v3.0.12-linux-amd64.tar.gz
cd etcd-v3.0.12-linux-amd64
nohup ./etcd --name docker-node1 --initial-advertise-peer-urls http://192.168.205.10:2380 \
--listen-peer-urls http://192.168.205.10:2380 \
--listen-client-urls http://192.168.205.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.205.10:2379 \
--initial-cluster-token etcd-cluster \
--initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 \
--initial-cluster-state new &
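
Because etcd is started in the background with nohup, its startup log is appended to nohup.out in the current directory. Note that this first member will keep retrying until its peer on docker-node2 joins; checking the log is optional:

tail -f nohup.out    # watch etcd's startup output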

Install etcd on docker-node2:

wget https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz
tar zxvf etcd-v3.0.12-linux-amd64.tar.gz
cd etcd-v3.0.12-linux-amd64/
nohup ./etcd --name docker-node2 --initial-advertise-peer-urls http://192.168.205.11:2380 \
--listen-peer-urls http://192.168.205.11:2380 \
--listen-client-urls http://192.168.205.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.205.11:2379 \
--initial-cluster-token etcd-cluster \
--initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 \
--initial-cluster-state new &

Check the cluster's health from either docker-node1 or docker-node2:

./etcdctl cluster-health
member 21eca106efe4caee is healthy: got healthy result from http://192.168.205.10:2379
member 8614974c83d1cc6d is healthy: got healthy result from http://192.168.205.11:2379
cluster is healthy
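
etcdctl can also list the cluster members directly (the member IDs will differ on your machines):

./etcdctl member list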

Restarting Docker

Stop the Docker service with systemctl stop docker, then restart the daemon manually as shown below.

On docker-node1:

sudo /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.205.10:2379 --cluster-advertise=192.168.205.10:2375 &

On docker-node2:

sudo /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.205.11:2379 --cluster-advertise=192.168.205.11:2375 &
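
To confirm the daemon picked up the cluster-store settings, docker info should list the Cluster Store and Cluster Advertise values on each node (exact field names may vary slightly by Docker version):

docker info | grep -i cluster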

Creating an overlay network

Create an overlay network named demo on docker-node1:

docker network create -d overlay demo
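
By default Docker's IPAM picks the subnet (10.0.0.0/24 in the inspect output below). If you need a specific address range, --subnet and --gateway can be supplied explicitly; this is only an illustration with a hypothetical network name demo2 and is not used in the rest of this walkthrough:

docker network create -d overlay --subnet 10.0.1.0/24 --gateway 10.0.1.1 demo2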

On docker-node1, docker network ls shows that demo's DRIVER is overlay and its SCOPE is global:

[vagrant@docker-node1 etcd-v3.0.12-linux-amd64]$ docker network ls
NETWORK ID NAME DRIVER SCOPE
9bacf5670fa1 bridge bridge local
5a8cd36174a1 demo overlay global
efb6975c8935 host host local
4213425c5293 my-bridge bridge local
3fa0f2e1a00b none null local

On docker-node2, you'll find that the same network has been created automatically in sync:

[vagrant@docker-node2 etcd-v3.0.12-linux-amd64]$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1c79331058ff bridge bridge local
5a8cd36174a1 demo overlay global
148fe990ddc5 host host local
452bd665cf13 none null local

Looking at the key-value data in etcd, you'll find it matches the output of docker network inspect demo; this is what is meant by a distributed network:

[
    {
        "Name": "demo",
        "Id": "5a8cd36174a1b37f9146aaa99de33ff20bfa8df472e07412e36048a39fe9a0e9",
        "Created": "2018-07-03T09:21:08.95776633Z",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {"Network": ""},"ConfigOnly": false,"Containers": {},"Options": {},"Labels": {}}
]
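
To look at the raw keys Docker writes, list them recursively with etcdctl; /docker is the key prefix libnetwork uses by default with an external store (adjust the path if your setup differs):

./etcdctl ls --recursive /docker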

Testing multi-host communication

On docker-node1:

docker run -d --name test1 --network demo busybox /bin/sh -c "while true;do sleep 3600;done"

If you repeat the same command on docker-node2, it fails with the error below, which is another sign that etcd is doing its job:

docker: Error response from daemon: Conflict. The container name "/test1" is already in use by container "14d690ada81a4510eb5dd24286d061064e8131f549dcb5320f5b5b441cc49c40". You have to remove (or rename) that container to be able to reuse that name.

Run it with a different name instead:

docker run -d --name test2 --network demo busybox /bin/sh -c "while true;do sleep 3600;done"

Now inspect the demo network to see the test1 and test2 containers that live on docker-node1 and docker-node2 respectively:

docker network inspect demo
{
    "Containers": {
        "a9fa503852b26267d13b67067ba6c119eebcf0677f8365483b3b798dcab616a5": {
            "Name": "test1",
            "EndpointID": "3869243fcfb6a975fadf64dc31533161d2922137b18af0c3ee78c02f82b84def",
            "MacAddress": "",
            "IPv4Address": "10.0.0.2/24",
            "IPv6Address": ""
        },
        "ep-8f8e30e36867c5b906bc84438049b2911660057fe031536a97efcf01d7d98699": {
            "Name": "test2",
            "EndpointID": "8f8e30e36867c5b906bc84438049b2911660057fe031536a97efcf01d7d98699",
            "MacAddress": "",
            "IPv4Address": "10.0.0.3/24",
            "IPv6Address": ""
        }
    }
}

From test1, pinging the test2 container that lives on docker-node2 works:

docker exec test1 ping 10.0.0.3

From test2, pinging the test1 container that lives on docker-node1 also works:

docker exec test2 ping 10.0.0.2
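
Because user-defined networks (including overlay) come with Docker's embedded DNS, the two containers should also be able to reach each other by name rather than by IP; a quick check, assuming the default name-resolution behaviour:

docker exec test1 ping -c 3 test2
docker exec test2 ping -c 3 test1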

Multi-host container communication has been achieved.
