Containers: deploying a highly available Kubernetes cluster with kubeasz

1. Preparation

1.1 Environment overview

This walkthrough uses kubeasz as the Kubernetes deployment tool: it installs components from binaries and drives the whole setup with ansible-playbook, so a highly available Kubernetes cluster can be brought up quickly; see the kubeasz project documentation for details. All virtual machines in this walkthrough have their package sources switched to the Aliyun mirrors, run a minimal OS installation with common tools (vim, net-tools, ssh) pre-installed, have their clocks synchronized against Aliyun NTP, and have the distribution firewall disabled.
Software versions used in this walkthrough:
OS: Ubuntu Server 20.04 LTS 64-bit
Kubernetes: v1.26
Runtime: containerd v1.6.8
Network: calico

1.2 Address plan

Role             IP address        Hostname   VIP
ETCD             192.168.10.101    etcd01
ETCD             192.168.10.102    etcd02
ETCD             192.168.10.103    etcd03
MASTER/ANSIBLE   192.168.10.104    master01
MASTER           192.168.10.105    master02
MASTER           192.168.10.106    master03
NODE             192.168.10.107    node01
NODE             192.168.10.108    node02
NODE             192.168.10.109    node03
HA               192.168.10.110    ha01       192.168.10.115
HA               192.168.10.111    ha02       192.168.10.115

2. Environment deployment

2.1 Base environment setup

First, fix Ubuntu's default behavior of pointing DNS at the local stub resolver 127.0.0.53. This must be done on every node.
Reference: https://blog.csdn.net/qifei71…
The fix is as follows:

Edit the /etc/systemd/resolved.conf file:
[Resolve]
DNS=8.8.8.8
Then, as root, execute:
sudo systemctl restart systemd-resolved
sudo systemctl enable systemd-resolved
sudo mv /etc/resolv.conf  /etc/resolv.conf.bak
sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
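
To confirm the change took effect, a quick sanity check (illustrative; exact output varies by host):
# /etc/resolv.conf should now list the real upstream instead of 127.0.0.53
grep nameserver /etc/resolv.conf
# resolvectl shows which servers systemd-resolved is actually using
resolvectl status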

Change the timezone on all nodes to Asia/Shanghai:

root@master01:~# timedatectl set-timezone Asia/Shanghai
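
Applying this host by host gets tedious; a sketch that loops over every node in the address plan (it assumes the passwordless SSH configured in the next step is already in place):
for ip in 192.168.10.{101..111}; do
  # set the timezone remotely on each node
  ssh root@${ip} "timedatectl set-timezone Asia/Shanghai"
done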

The following is done on the master01 node.

Install ansible:
root@k8s-master01:~# apt install ansible
Configure passwordless login: master01 must be able to SSH into every node without a password. Only master02 is shown here; the other nodes are handled the same way. Generate a key pair:
root@master01:~# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:2kfD/vlpbkKZtG90oujjd90CLZZQHb4buOFSHx4p+so root@k8s-master01
The key's randomart image is:
+---[RSA 3072]----+
|             ... |
|            ...  |
|           .  .  |
|         .. .. o |
|        S +o=**  |
|       o o =X*+=.|
|      . . =+o*++.|
|         oo++.B.o|
|         oE++O+. |
+----[SHA256]-----+

Distribute the public key to each node:
root@master01:~# ssh-copy-id 192.168.10.102
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.10.102 (192.168.10.102)' can't be established.
ECDSA key fingerprint is SHA256:LHdJ1aX0Rx+tQlCcGKwIk7aJsFjsUm4/Ze7vwhMqsS8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.10.102's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.10.102'"
and check to make sure that only the key(s) you wanted were added.
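
Repeating ssh-copy-id for all eleven nodes can be scripted; a sketch (sshpass is an assumption and is not installed by default -- apt install sshpass -- and YOUR_PASSWORD is a placeholder):
for ip in 192.168.10.{101..111}; do
  # push the public key non-interactively to every node in the plan
  sshpass -p 'YOUR_PASSWORD' ssh-copy-id -o StrictHostKeyChecking=no root@${ip}
done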

Configure name resolution and sync the hosts file to all nodes. Only master02 is shown here; the other nodes are handled the same way.

root@master01:~# cat /etc/hosts
127.0.0.1 localhost
192.168.10.101 etcd01
192.168.10.102 etcd02
192.168.10.103 etcd03
192.168.10.104 master01
192.168.10.105 master02
192.168.10.106 master03
192.168.10.107 node01
192.168.10.108 node02
192.168.10.109 node03
192.168.10.110 ha01
192.168.10.111 ha02

Sync the hosts file to all nodes (only a couple of nodes shown):
root@master01:~# scp /etc/hosts 192.168.10.101:/etc/hosts
hosts                                                                       100%  512   349.0KB/s   00:00    
root@master01:~# scp /etc/hosts 192.168.10.102:/etc/hosts
hosts                                                                       100%  512   346.3KB/s   00:00    
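
Since ansible is already installed on master01, the same sync can also be done with one ad-hoc command; a sketch (the throwaway inventory file is an assumption -- the kubeasz-generated inventory does not exist yet at this point):
# write a temporary inventory with one IP per line
printf '%s\n' 192.168.10.{101..111} > /tmp/all-nodes
# copy /etc/hosts to every node listed in it
ansible all -i /tmp/all-nodes -m copy -a 'src=/etc/hosts dest=/etc/hosts'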

2.2 Deploying the cluster

On the master01 node, download the project source, binaries, and offline images. First fetch the helper script ezdown; kubeasz 3.5.0 is used as the example version:

root@master01:~# export release=3.5.0
root@master01:~# wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
root@master01:~# chmod +x ezdown

Download the kubeasz code, binaries, and default container images:
root@master01:~# ./ezdown -D
When the download completes, it prints:
INFO Action successed: download_all
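
ezdown installs the kubeasz tree under /etc/kubeasz, which is why the remaining commands run from that directory; a quick look (illustrative):
# the tree ezdown lays down: the ezctl tool plus bin/ (k8s binaries) and down/ (offline images)
ls /etc/kubeasz
cd /etc/kubeasz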

Generate the ansible hosts file and the other cluster configuration files:

root@master01:/etc/kubeasz# ./ezctl new k8s01
2022-12-31 22:46:36 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s01
2022-12-31 22:46:36 DEBUG set versions
2022-12-31 22:46:36 DEBUG cluster k8s01: files successfully created.
2022-12-31 22:46:36 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s01/hosts'
2022-12-31 22:46:36 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s01/config.yml'

Edit the hosts file:

root@k8s-master01:/etc/kubeasz/clusters/k8s01# cat hosts 
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
192.168.10.101
192.168.10.102
192.168.10.103

# master node(s)
[kube_master]
192.168.10.104
192.168.10.105
192.168.10.106

# work node(s)
[kube_node]
192.168.10.107
192.168.10.108
192.168.10.109

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#192.168.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
192.168.10.110 LB_ROLE=master EX_APISERVER_VIP=192.168.10.115 EX_APISERVER_PORT=6443
192.168.10.111 LB_ROLE=backup EX_APISERVER_VIP=192.168.10.115 EX_APISERVER_PORT=6443

# [optional] ntp server for the cluster
[chrony]
#192.168.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
# if k8s version >= 1.24, docker is not supported
CONTAINER_RUNTIME="containerd"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"

# NodePort Range
NODE_PORT_RANGE="30000-32767"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{base_dir}}/clusters/k8s01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

Edit the config.yml file; the main change is enabling automatic installation of coredns and metrics-server:

root@master01:/etc/kubeasz/clusters/k8s01# cat config.yml 
############################
# prepare
############################
# optional offline installation of system packages (offline|online)
INSTALL_SOURCE: "online"

# optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# force to recreate CA and other certs, not suggested to set 'true'
CHANGE_CA: false

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{CLUSTER_NAME}}"

# k8s version
K8S_VER: "1.26.0"

############################
# role:etcd
############################
# a separate wal directory avoids disk io contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd] sandbox (pause) image
SANDBOX_IMAGE: "easzlab.io.local:5000/easzlab/pause:3.9"

# [containerd] persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the RESTful API
ENABLE_REMOTE_API: false

# [docker] trusted insecure HTTP registries
INSECURE_REG: '["http://easzlab.io.local:5000"]'


############################
# role:kube-master
############################
# master node certificate configuration; extra ips and domains can be added (e.g. a public ip and domain)
MASTER_CERT_HOSTS:
  - "10.1.1.1"
  - "k8s.easzlab.io"
  - "www.snow.com"

# mask length of the per-node pod subnet (determines how many pod ips each node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this value to assign each node its pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum number of pods per node
MAX_PODS: 110

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# upstream k8s does not recommend enabling system-reserved lightly, unless long-term monitoring
# tells you the system's actual usage; the reservation may need to grow as uptime increases,
# see templates/kubelet-config.yaml.j2 for the values
# the system reservation assumes a 4c/8g vm with minimal system services; raise it on powerful physical machines
# also, apiserver and friends spike briefly during cluster installation -- reserve at least 1g of memory
SYS_RESERVED_ENABLED: "no"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend, e.g. "host-gw", "vxlan"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] 
flannel_ver: "v0.19.2"

# ------------------------------------------- calico
# [calico] IPIP tunnel mode, one of [Always, CrossSubnet, Never]; across subnets use Always or CrossSubnet
# (on public clouds Always is the least hassle; other modes need per-cloud network changes, see each cloud's docs)
# CrossSubnet is a tunnel+BGP-routing hybrid that can improve performance; within one subnet Never is fine.
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; bgp peerings are established over this address; set manually or auto-detect
IP_AUTODETECTION_METHOD: "can-reach={{groups['kube_master'][0] }}"

# [calico] networking backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico] whether to use route reflectors
# recommended once the cluster grows beyond 50 nodes
CALICO_RR_ENABLED: false

# CALICO_RR_NODES sets the route reflector nodes; defaults to the cluster master nodes if unset
# CALICO_RR_NODES: ["192.168.1.1", "192.168.1.2"]
CALICO_RR_NODES: []

# [calico] supported calico versions: ["3.19", "3.23"]
calico_ver: "v3.23.5"

# [calico] calico major.minor version
calico_ver_main: "{{calico_ver.split('.')[0] }}.{{calico_ver.split('.')[1] }}"

# ------------------------------------------- cilium
# [cilium] image version
cilium_ver: "1.12.4"
cilium_connectivity_check: true
cilium_hubble_enabled: false
cilium_hubble_ui_enabled: false

# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane, defaults to the first master node
OVN_DB_NODE: "{{groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"

# ------------------------------------------- kube-router
# [kube-router] public clouds have restrictions and generally need ipinip always on; self-hosted environments can use "subnet"
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support toggle
FIREWALL_ENABLE: true

# [kube-router] kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"


############################
# role:cluster-addon
############################
# install coredns automatically
dns_install: "yes"
corednsVer: "1.9.3"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.22.13"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# install metrics-server automatically
metricsserver_install: "yes"
metricsVer: "v0.5.2"

# install dashboard automatically
dashboard_install: "no"
dashboardVer: "v2.7.0"
dashboardMetricsScraperVer: "v1.0.8"

# install prometheus automatically
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "39.11.0"

# install nfs-provisioner automatically
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

# install network-check automatically
network_check_enabled: false 
network_check_schedule: "*/5 * * * *"

############################
# role:harbor
############################
# harbor version (full version string)
HARBOR_VER: "v2.1.5"
HARBOR_DOMAIN: "harbor.easzlab.io.local"
HARBOR_PATH: /var/data
HARBOR_TLS_PORT: 8443
HARBOR_REGISTRY: "{{HARBOR_DOMAIN}}:{{HARBOR_TLS_PORT}}"

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true
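
Before running the playbooks, a quick way to double-check the two toggles this walkthrough cares about (illustrative):
# both should print "yes" given the edits above
grep -E '^(dns_install|metricsserver_install)' /etc/kubeasz/clusters/k8s01/config.yml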

Deploy the k8s cluster.
Step 01 initializes the base environment: it prepares the CA and applies basic system settings.

root@k8s-master01:/etc/kubeasz# ./ezctl setup k8s01 01
A recap like the following at the end of the run means the step succeeded:

PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.10.101             : ok=23   changed=6    unreachable=0    failed=0    skipped=97   rescued=0    ignored=0   
192.168.10.102             : ok=23   changed=6    unreachable=0    failed=0    skipped=97   rescued=0    ignored=0   
192.168.10.103             : ok=23   changed=6    unreachable=0    failed=0    skipped=97   rescued=0    ignored=0   
192.168.10.104             : ok=24   changed=6    unreachable=0    failed=0    skipped=96   rescued=0    ignored=0   
192.168.10.105             : ok=23   changed=6    unreachable=0    failed=0    skipped=97   rescued=0    ignored=0   
192.168.10.106             : ok=23   changed=6    unreachable=0    failed=0    skipped=97   rescued=0    ignored=0   
192.168.10.107             : ok=23   changed=6    unreachable=0    failed=0    skipped=97   rescued=0    ignored=0   
192.168.10.108             : ok=23   changed=6    unreachable=0    failed=0    skipped=97   rescued=0    ignored=0   
192.168.10.109             : ok=23   changed=6    unreachable=0    failed=0    skipped=97   rescued=0    ignored=0   
192.168.10.110             : ok=1    changed=0    unreachable=0    failed=0    skipped=78   rescued=0    ignored=0   
192.168.10.111             : ok=1    changed=0    unreachable=0    failed=0    skipped=78   rescued=0    ignored=0   
localhost                  : ok=31   changed=21   unreachable=0    failed=0    skipped=13   rescued=0    ignored=0

Deploy the etcd cluster:

root@k8s-master01:/etc/kubeasz# ./ezctl setup k8s01 02

A recap like the following at the end of the run means the step succeeded:
PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.10.101             : ok=10   changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.10.102             : ok=8    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.10.103             : ok=8    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Verify the etcd cluster.
Run the following on the etcd01 node; output like this shows the etcd cluster is healthy:

root@etcd01:~# export ETCD_IPS="192.168.10.101 192.168.10.102 192.168.10.103"
root@etcd01:~# cp -a /opt/kube/bin/etcdctl /usr/local/bin/
root@etcd01:~# for ip in ${ETCD_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
https://192.168.10.101:2379 is healthy: successfully committed proposal: took = 35.684337ms
https://192.168.10.102:2379 is healthy: successfully committed proposal: took = 21.125465ms
https://192.168.10.103:2379 is healthy: successfully committed proposal: took = 28.284479ms
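
Beyond per-endpoint health checks, listing the members confirms that all three nodes joined a single cluster; a sketch reusing the same certificates:
ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --endpoints=https://192.168.10.101:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  member list -w table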

Install the container runtime:

root@k8s-master01:/etc/kubeasz# ./ezctl setup k8s01 03

A recap like the following at the end of the run means the step succeeded:
PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.10.104             : ok=2    changed=1    unreachable=0    failed=0    skipped=28   rescued=0    ignored=0   
192.168.10.105             : ok=2    changed=1    unreachable=0    failed=0    skipped=25   rescued=0    ignored=0   
192.168.10.106             : ok=2    changed=1    unreachable=0    failed=0    skipped=25   rescued=0    ignored=0   
192.168.10.107             : ok=2    changed=1    unreachable=0    failed=0    skipped=25   rescued=0    ignored=0   
192.168.10.108             : ok=2    changed=1    unreachable=0    failed=0    skipped=25   rescued=0    ignored=0   
192.168.10.109             : ok=2    changed=1    unreachable=0    failed=0    skipped=25   rescued=0    ignored=0
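
A spot check on one worker confirms the runtime came up; a sketch (the crictl path assumes kubeasz deployed it with the other binaries under bin_dir=/opt/kube/bin, and the socket path is containerd's default):
ssh 192.168.10.107 'systemctl is-active containerd'
ssh 192.168.10.107 '/opt/kube/bin/crictl --runtime-endpoint unix:///run/containerd/containerd.sock version'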

Install the master nodes:

root@k8s-master01:/etc/kubeasz# ./ezctl setup k8s01 04
A recap like the following at the end of the run means the step succeeded:

PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.10.104             : ok=55   changed=36   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
192.168.10.105             : ok=54   changed=36   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.10.106             : ok=54   changed=36   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
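
Each apiserver should now answer on its secure port; a sketch probing /healthz on every master (unauthenticated access to /healthz is permitted by default Kubernetes RBAC, so each probe should print "ok"):
for ip in 192.168.10.104 192.168.10.105 192.168.10.106; do
  # -k because the cluster CA is not in the system trust store
  curl -sk "https://${ip}:6443/healthz" && echo " <- ${ip}"
done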

Deploy the worker nodes:

root@master01:/etc/kubeasz# ./ezctl setup k8s01 05
A recap like the following at the end of the run means the step succeeded:

PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.10.107             : ok=35   changed=21   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.10.108             : ok=35   changed=21   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.10.109             : ok=35   changed=21   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Deploy the network service:

root@master01:/etc/kubeasz# ./ezctl setup k8s01 06
A recap like the following at the end of the run means the step succeeded:
PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.10.104             : ok=13   changed=7    unreachable=0    failed=0    skipped=39   rescued=0    ignored=0   
192.168.10.105             : ok=7    changed=3    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
192.168.10.106             : ok=7    changed=3    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
192.168.10.107             : ok=7    changed=3    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
192.168.10.108             : ok=7    changed=3    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
192.168.10.109             : ok=7    changed=3    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0

Verify the network service; output like the following means it is working:

root@master01:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.10.105 | node-to-node mesh | up    | 15:45:31 | Established |
| 192.168.10.106 | node-to-node mesh | up    | 15:45:30 | Established |
| 192.168.10.107 | node-to-node mesh | up    | 15:45:31 | Established |
| 192.168.10.108 | node-to-node mesh | up    | 15:45:31 | Established |
| 192.168.10.109 | node-to-node mesh | up    | 15:45:30 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
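
A pod-to-pod connectivity test adds confidence beyond BGP session status; a sketch (busybox:1.28 and the pod names are illustrative, and the ping target must be replaced with the pod IP that kubectl prints):
kubectl run net-test1 --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl run net-test2 --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl get pod -o wide
# substitute the IP of net-test2 as reported above
kubectl exec net-test1 -- ping -c 3 <POD_IP_OF_NET_TEST2>
# clean up
kubectl delete pod net-test1 net-test2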

Deploy the load balancing service:

root@master01:/etc/kubeasz# ./ezctl setup k8s01 10

A recap like the following at the end of the run means the step succeeded:
PLAY RECAP ***************************************************************************************************
192.168.10.110             : ok=17   changed=14   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
192.168.10.111             : ok=16   changed=14   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
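
With the external load balancer in place, the apiserver should also answer on the VIP; a quick probe (illustrative; /version is readable without authentication on a default install):
# expect a JSON document whose gitVersion matches the deployed release
curl -sk https://192.168.10.115:6443/version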

Deploy coredns and metrics-server:


root@master01:/etc/kubeasz# ./ezctl setup k8s01 07
A recap like the following at the end of the run means the step succeeded:
localhost                  : ok=8    changed=7    unreachable=0    failed=0    skipped=34   rescued=0    ignored=0
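
An in-cluster lookup verifies coredns (and the node-local cache at 169.254.20.10 configured above); a sketch, with busybox:1.28 as an assumed test image:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local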

3. Cluster verification

3.1 Verifying the cluster deployment

root@k8s-master01:~# kubectl  get no
NAME             STATUS                     ROLES    AGE   VERSION
192.168.10.104   Ready,SchedulingDisabled   master   51m   v1.25.4
192.168.10.105   Ready,SchedulingDisabled   master   51m   v1.25.4
192.168.10.106   Ready,SchedulingDisabled   master   51m   v1.25.4
192.168.10.107   Ready                      node     38m   v1.25.4
192.168.10.108   Ready                      node     38m   v1.25.4
192.168.10.109   Ready                      node     38m   v1.25.4
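
Node status aside, all system pods (calico, coredns, metrics-server) should be Running; illustrative:
kubectl get pod -n kube-system -o wide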

3.2 Verifying metrics-server

root@master01:/etc/kubeasz# kubectl top node 
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
192.168.10.104   111m         11%    1776Mi          70%       
192.168.10.105   114m         11%    669Mi           98%       
192.168.10.106   130m         13%    657Mi           97%       
192.168.10.107   61m          6%     482Mi           71%       
192.168.10.108   82m          8%     530Mi           78%       
192.168.10.109   59m          5%     513Mi           75%       
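
As a final end-to-end check, a throwaway workload exposed via NodePort (a sketch; nginx:alpine and the names are illustrative, and the actual NodePort is assigned from the 30000-32767 range configured earlier):
kubectl create deployment web-test --image=nginx:alpine --replicas=2
kubectl expose deployment web-test --port=80 --type=NodePort
kubectl get svc web-test
# curl any node IP on the NodePort the service reports, e.g.:
# curl http://192.168.10.107:<NODEPORT>
# clean up
kubectl delete svc,deployment web-test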