Introduction:
A volume in Kubernetes has a well-defined lifetime: the same as the Pod that encloses it. A volume therefore outlives every container in the Pod, and its data survives container restarts. Naturally, when the Pod ceases to exist, the volume ceases to exist as well.
To make data truly persistent, it must be kept in external storage. In Docker, persistence is achieved by mapping a host path into the container, so the data is preserved even after the container's lifecycle ends. In Kubernetes, however, Pods are scheduled across different nodes, so a plain host mapping cannot share persistent data between nodes, and a node failure can mean permanent data loss. This is why Kubernetes introduces external storage volumes.
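For comparison, the Docker-level mechanism mentioned above looks like this; a minimal sketch, where the host path and image are illustrative:

```sh
# Bind-mount the (illustrative) host directory /opt/web-data into the container.
# Files written under /usr/share/nginx/html survive container removal,
# but they stay on this one host.
docker run -d --name web \
  -v /opt/web-data:/usr/share/nginx/html \
  nginx:alpine
```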
Kubernetes supports the following types of volumes:
- awsElasticBlockStore, azureDisk, azureFile, cephfs, csi, downwardAPI, emptyDir
- fc, flocker, gcePersistentDisk, gitRepo, glusterfs, hostPath, iscsi, local, nfs
- persistentVolumeClaim, projected, portworxVolume, quobyte, rbd, scaleIO, secret
- storageos, vsphereVolume
Classified by storage type:
- Ephemeral storage: emptyDir. When the Pod is deleted, the data is wiped with it; this kind of storage is called emptyDir and is meant for temporary data.
- Host level: hostPath, local. The data cannot follow a Pod across nodes.
- Network level: NFS, GlusterFS, rbd (block device), CephFS (file system). Storage with real persistence.
- CSI: Container Storage Interface, e.g. Longhorn
- Cloud storage (EBS, Azure Disk)
Shared storage can be further classified by its access properties (a sketch of how these surface in Kubernetes follows the list):
- Multi-node parallel read/write (ReadWriteMany)
- Multi-node read-only (ReadOnlyMany)
- Single-node read/write (ReadWriteOnce)
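In Kubernetes these properties appear as the access modes of PersistentVolumes and PersistentVolumeClaims. A minimal sketch, where the claim name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim          # illustrative name
spec:
  accessModes:
  - ReadWriteMany           # multi-node read/write; network storage such as NFS supports this
  resources:
    requests:
      storage: 1Gi          # illustrative size
```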
A brief introduction to several commonly used volume types: emptyDir, hostPath, and NFS
emptyDir
When a Pod is assigned to a node, the emptyDir volume is created first, and it exists as long as the Pod runs on that node. As the name suggests, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, even though the volume may be mounted at the same or different paths in each container. When the Pod is removed from the node for any reason, the data in emptyDir is deleted permanently.
- Typical uses of emptyDir:
- Scratch space, e.g. for a disk-based merge sort
- A checkpoint for recovering a long computation from a crash
- Holding files that a content-manager container fetches while a web-server container serves them (see the sketch below)
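A minimal sketch of the last pattern, with illustrative names and images: two containers in one Pod share the same emptyDir, one writing, one serving:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-scratch-demo        # illustrative name
spec:
  containers:
  - name: content-manager
    image: busybox                 # illustrative; writes into the shared volume
    command: ["/bin/sh","-c","echo hello > /cache/index.html; sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  - name: web-server
    image: nginx:alpine            # serves the same files, read-only
    volumeMounts:
    - name: cache
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: cache
    emptyDir: {}
```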
Check the volume storage types supported by default:
```
[root@k8s-master ~]# kubectl explain pods.spec.volumes    # check the volume types supported by default
KIND:     Pod
VERSION:  v1
RESOURCE: volumes <[]Object>
DESCRIPTION:
     List of volumes that can be mounted by containers belonging to the pod.
     More info: https://kubernetes.io/docs/concepts/storage/volumes
     Volume represents a named volume in a pod that may be accessed by any
     container in the pod.
FIELDS:
   awsElasticBlockStore <Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to
     a kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
   azureDisk <Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount
     to the pod.
   azureFile <Object>
     AzureFile represents an Azure File Service mount on the host and bind
     mount to the pod.
   cephfs <Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's
     lifetime
...
```

Example 1: mounting an ephemeral emptyDir volume directly on a Pod
```
[root@k8s-master storage]# cat volumes-emptydir-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-emptydir-demo
  namespace: default
spec:
  initContainers:
  - name: config-file-downloader
    image: ikubernetes/admin-box
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","wget -O /data/envoy.yaml http://192.168.4.254/envoy.yaml"]   # init container downloads the config file
    volumeMounts:
    - name: config-file-store
      mountPath: /data              # mount point
  containers:
  - name: envoy
    image: envoyproxy/envoy-alpine:v1.13.1
    command: ['/bin/sh','-c']
    args: ['envoy -c /etc/envoy/envoy.yaml']
    volumeMounts:
    - name: config-file-store
      mountPath: /etc/envoy         # mount point
      readOnly: true
  volumes:
  - name: config-file-store         # the volume name referenced above
    emptyDir:                       # volume plugin type: ephemeral
      medium: Memory                # back the volume with memory (tmpfs)
      sizeLimit: 16Mi
[root@k8s-master storage]# kubectl apply -f volumes-emptydir-demo.yaml
pod/volumes-emptydir-demo created
[root@k8s-master storage]# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
centos-deployment-66d8cd5f8b-fkhft   1/1     Running   0          2m
volumes-emptydir-demo                1/1     Running   0          4s
[root@k8s-master storage]# kubectl exec volumes-emptydir-demo -it -- /bin/sh
/ # cat /etc/envoy/envoy.yaml       # inspect the mounted file
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 127.0.0.1, port_value: 9999 }
# static resources
static_resources:
......
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    # addresses of the mounted backends
    hosts: [{ socket_address: { address: 127.0.0.1, port_value: 8080 }}]
/ # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:9999          0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN
```

hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into the Pod.
Typical uses of hostPath:
- Running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
- Running cAdvisor in a container; use a hostPath of /dev/cgroups
- Letting a Pod specify whether a given hostPath should exist before the Pod runs, whether it should be created, and what it should exist as
hostPath type options (see the sketch after this list):
- DirectoryOrCreate: if nothing exists at the given path, an empty directory is created there with 0755 permissions, owned by kubelet;
- Directory: a directory must already exist at the given path;
- FileOrCreate: if nothing exists at the given path, an empty file is created there with 0644 permissions, owned by kubelet;
- File: a file must already exist at the given path;
- Socket: a UNIX socket must already exist at the given path;
- CharDevice: a character device must already exist at the given path;
- BlockDevice: a block device must already exist at the given path;
- "": the empty string (the default); no check is performed before the hostPath volume is mounted
Take care when using this type of volume, because:
- Because the files on each node are different, Pods with identical configuration (for instance, created from a podTemplate) may behave differently on different nodes
- When Kubernetes adds resource-aware scheduling as planned, it will not be able to account for the resources a hostPath uses
- Files or directories created on the underlying host are writable only by root. You need to run your process as root in a privileged container, or change the file permissions on the host, in order to write to the hostPath volume (a minimal sketch follows)
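One way to satisfy that last caveat, sketched with an illustrative image and path: run the container's process as root (or mark the container privileged) so writes to the hostPath succeed:

```yaml
containers:
- name: writer
  image: busybox               # illustrative
  command: ["/bin/sh","-c","date >> /host-data/heartbeat; sleep 3600"]
  securityContext:
    runAsUser: 0               # run as root so the hostPath is writable
    # privileged: true         # alternative: a privileged container
  volumeMounts:
  - name: host-data
    mountPath: /host-data
```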
Example 2: persistent node-local storage with hostPath (cannot cross nodes)
```
[root@k8s-master storage]# cat volumes-hostpath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-hostpath-demo
spec:
  containers:
  - name: filebeat
    image: ikubernetes/filebeat:5.6.7-alpine
    env:
    - name: REDIS_HOST
      value: redis.ilinux.io:6379
    - name: LOG_LEVEL
      value: info
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: socket
      mountPath: /var/run/docker.sock
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
      readOnly: true
  volumes:
  - name: varlog
    hostPath:
      path: /var/log                      # path on the host
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers    # path on the host
      type: Directory                     # the directory must already exist
  - name: socket
    hostPath:
      path: /var/run/docker.sock
[root@k8s-master storage]# kubectl describe pod volumes-hostpath-demo
...
    Mounts:                               # mount points
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
      /var/run/docker.sock from socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fsshk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  varlog:
    Type:          HostPath (bare host directory volume)   # "bare host directory volume" means no check was performed
    Path:          /var/log
    HostPathType:
  varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:
  socket:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:
  default-token-fsshk:
    Type:          Secret (a volume populated by a Secret)
[root@k8s-master storage]# kubectl exec volumes-hostpath-demo -it -- /bin/sh
~ # ls /var/log/                          # the host's logs
anaconda          chrony              journal           messages-20210801  spooler
boot.log          containers          lastlog           ntpstats           spooler-20210711
boot.log-20210616 cron                maillog           pods               spooler-20210718
boot.log-20210629 cron-20210711       maillog-20210711  qemu-ga            spooler-20210725
boot.log-20210701 cron-20210718       maillog-20210718  rhsm               spooler-20210801
boot.log-20210704 cron-20210725       maillog-20210725  sa                 tallylog
boot.log-20210706 cron-20210801       maillog-20210801  secure             wtmp
boot.log-20210717 dmesg               messages          secure-20210711    yum.log
boot.log-20210723 dmesg.old           messages-20210711 secure-20210718    yum.log-20210402
btmp              grubby              messages-20210718 secure-20210725
btmp-20210801     grubby_prune_debug  messages-20210725 secure-20210801
~ # ls /var/lib/docker/containers/        # the host's /var/lib/docker/containers
11520a271e7ab8e3e3e4a06738429087a489491f4c4fa3ccb9e3eae49dbcc805
1ad9339e7b61e0b6538470dd40fd89cc409d7594403f7a908aae9ff14f9e197f
28694971823e1c126ea0c3616f29a3ee29c5cbb709fddf3a1989c277190915da
32ac0bd0d3f7dd28a1b8b2a53b0690e2e28d59b4952a60b9603e786fa0cf7e2f
336f21fc7ec8d980c232ad5a96f5df80fd9afd820579a7e0569f465910ec5ddb
37ab98ed1001ab94053d24ba2c87ae8863f2731760a5602ea85b04fb3e2a5d40
3a3a845172627fcd8bd5c887c1917d6e2f5d80e1447696a3018e6d1e9faa44db
...
```

Network persistence: NFS volumes
Example 3: network persistence, an NFS volume mounted across nodes
1. Prepare an NFS server, and test mounting and permissions on the nodes
2. Mount the NFS volume in a container, and verify data persistence across nodes
Check the NFS resource spec:
```
[root@k8s-master ~]# kubectl explain pods.spec.volumes.nfs
KIND:     Pod
VERSION:  v1
RESOURCE: nfs <Object>
DESCRIPTION:
     NFS represents an NFS mount on the host that shares a pod's lifetime.
     More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
     Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do
     not support ownership management or SELinux relabeling.
FIELDS:
   path <string> -required-
     Path that is exported by the NFS server. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#nfs
   readOnly <boolean>
     ReadOnly here will force the NFS export to be mounted with read-only
     permissions. Defaults to false. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#nfs
   server <string> -required-
     Server is the hostname or IP address of the NFS server. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#nfs
```
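These three fields map directly onto a volumes entry; a minimal sketch (the volume name mirrors the example further below):

```yaml
volumes:
- name: redisdata
  nfs:
    server: 192.168.4.100    # required: NFS server hostname or IP
    path: /data/redis        # required: path exported by the server
    readOnly: false          # optional, defaults to false
```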
- Deploy the NFS service to provide the volume

```
[root@nfs ~]# mkdir -pv /data/redis
[root@nfs redis]# cat /etc/exports
/data/redis 192.168.4.0/24(rw)
[root@nfs ~]# useradd -u 1010 redis
[root@nfs ~]# chown 1010 /data/redis
[root@nfs redis]# systemctl restart nfs    # if the service is missing, install nfs-utils first
[root@nfs redis]# ss -anput | grep 2049
udp    UNCONN     0      0      *:2049                 *:*
udp    UNCONN     0      0      :::2049                :::*
tcp    LISTEN     0      64     *:2049                 *:*
tcp    TIME-WAIT  0      0      192.168.4.100:2049     192.168.4.171:772
tcp    ESTAB      0      0      192.168.4.100:2049     192.168.4.172:997
```
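Note that the export is writable and the directory owner's UID (1010) matches the UID the Pod will run as. If the UIDs cannot be aligned, a common alternative (an assumption, not what this walkthrough uses) is to relax root squashing in the export:

```
# /etc/exports -- illustrative variant, not the one used above
/data/redis 192.168.4.0/24(rw,sync,no_root_squash)
```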
- Create the redis user on master, node1 and node2, install nfs-utils, and test mounting and permissions

```
[root@k8s-node1 ~]# yum -y install nfs-utils
[root@k8s-node1 ~]# useradd -u 1010 redis
[root@k8s-node1 ~]# showmount -e 192.168.4.100
Export list for 192.168.4.100:
[root@k8s-node1 ~]# mount -t nfs 192.168.4.100:/data/redis /mnt    # mount
[root@k8s-node1 ~]# df -h | grep redis
192.168.4.100:/data/redis   20G   13G  7.2G  65% /mnt
[root@k8s-node1 ~]# su - redis             # test write permission as the redis user
[redis@k8s-node1 ~]$ cd /mnt/
[redis@k8s-node1 mnt]$ mkdir test
[redis@k8s-node1 mnt]$ ls
test
```

- Deploy the Redis service
```
[root@k8s-master storage]# cat volumes-nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-nfs-demo
  labels:
    app: redis
spec:
  containers:
  - name: redis
    image: redis:alpine
    ports:
    - containerPort: 6379
      name: redisport
    securityContext:
      runAsUser: 1010          # run as UID 1010, the redis user
    volumeMounts:
    - mountPath: /data
      name: redisdata
  volumes:
  - name: redisdata
    nfs:
      server: 192.168.4.100    # remote NFS server address
      path: /data/redis
      readOnly: false
[root@k8s-master storage]# kubectl apply -f volumes-nfs-demo.yaml
[root@k8s-master storage]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
centos-deployment-66d8cd5f8b-fkhft   1/1     Running   0          22h   10.244.2.101   k8s-node2   <none>           <none>
my-grafana-7d788c5479-zwlhm          1/1     Running   0          22h   10.244.1.109   k8s-node1   <none>           <none>
volumes-emptydir-demo                1/1     Running   0          22h   10.244.1.110   k8s-node1   <none>           <none>
volumes-hostpath-demo                1/1     Running   0          21h   10.244.1.114   k8s-node1   <none>           <none>
volumes-nfs-demo                     1/1     Running   0          4s    10.244.1.117   k8s-node1   <none>           <none>   # on node1
[root@k8s-node1 ~]# df -h | grep redis     # on node1 the NFS export is now mounted
192.168.4.100:/data/redis   20G   13G  7.2G  65% /var/lib/kubelet/pods/5b6cb5b5-0350-4be5-b129-985911b5a2f7/volumes/kubernetes.io~nfs/redisdata
[root@k8s-master storage]# kubectl exec volumes-nfs-demo -it -- /bin/sh
/data $ netstat -tnl                       # check the redis service
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN
tcp        0      0 :::6379                 :::*                    LISTEN
/data $ redis-cli -h 127.0.0.1
127.0.0.1:6379> set mykey www.google.com   # write data
OK
127.0.0.1:6379> get mykey
"www.google.com"
127.0.0.1:6379> BGSAVE                     # persist to disk
Background saving started
127.0.0.1:6379> exit
/data $ ls
dump.rdb
/data $ exit
[root@nfs redis]# ls                       # the data is now visible on the NFS server
dump.rdb
[root@k8s-master storage]# kubectl delete pod volumes-nfs-demo
pod "volumes-nfs-demo" deleted
[root@k8s-node1 ~]# systemctl stop kubelet    # stop kubelet to simulate a node1 outage, so the pod lands on node2
[root@k8s-master storage]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   34d   v1.19.9
k8s-node1    NotReady   <none>   34d   v1.19.9    # node1 NotReady
k8s-node2    Ready      <none>   34d   v1.19.9
[root@k8s-master storage]# kubectl apply -f volumes-nfs-demo.yaml
pod/volumes-nfs-demo created
[root@k8s-master storage]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
volumes-hostpath-demo   1/1     Running   1          22h   10.244.1.120   k8s-node1   <none>           <none>
volumes-nfs-demo        1/1     Running   0          6s    10.244.2.104   k8s-node2   <none>           <none>   # now running on node2
[root@k8s-master storage]# kubectl exec volumes-nfs-demo -it -- /bin/sh   # verify cross-node persistence
/data $ ls
dump.rdb
/data $ redis-cli -h 127.0.0.1
127.0.0.1:6379> get mykey
"www.google.com"
```