Preface
This post uses NFS remote storage with Kubernetes (K8S) to provide dynamically provisioned storage for hosted pods. Pod creators do not need to care about how or where the data is stored; they only need to state how much space they require.
The overall workflow is:
- Create an NFS server.
- Create a Service Account to control the permissions the NFS provisioner runs with in the k8s cluster.
- Create a StorageClass. PVCs that reference it trigger the NFS provisioner to do the provisioning work, and the resulting PV is bound to the PVC.
- Create the NFS provisioner. It does two things: it creates a mount point (volume) under the NFS shared directory, and it creates a PV and associates that PV with the NFS mount point.
Update history
- 20200610 – Initial draft – 左程立
- Original post – https://blog.zuolinux.com/2020/06/10/nfs-client-provisioner.html
Configure the NFS server
server ip: 192.168.10.17
[root@work03 ~]# yum install nfs-utils rpcbind -y
[root@work03 ~]# systemctl start nfs
[root@work03 ~]# systemctl start rpcbind
[root@work03 ~]# systemctl enable nfs
[root@work03 ~]# systemctl enable rpcbind
[root@work03 ~]# mkdir -p /data/nfs/
[root@work03 ~]# chmod 777 /data/nfs/
[root@work03 ~]# cat /etc/exports
/data/nfs/ 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
[root@work03 ~]# exportfs -arv
exporting 192.168.10.0/24:/data/nfs
[root@work03 ~]# showmount -e localhost
Export list for localhost:
/data/nfs 192.168.10.0/24
Options:
sync: write data synchronously to both the memory buffer and the disk; slower, but guarantees data consistency
async: keep data in the memory buffer first and flush it to disk only when necessary; faster, at the risk of losing buffered data on a crash
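For comparison, an async export of the same directory would look like the following (illustrative only; this walkthrough keeps the safer sync option shown above):
/data/nfs/ 192.168.10.0/24(rw,async,no_root_squash,no_all_squash)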
Install nfs-utils and rpcbind on all worker nodes
yum install nfs-utils rpcbind -y
systemctl start nfs
systemctl start rpcbind
systemctl enable nfs
systemctl enable rpcbind
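Before wiring NFS into Kubernetes, it is worth confirming from a worker node that the export is visible (work01 is an assumed hostname; the export list should mirror the one shown earlier):
[root@work01 ~]# showmount -e 192.168.10.17
Export list for 192.168.10.17:
/data/nfs 192.168.10.0/24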
Create the dynamic volume provisioner
Create the RBAC authorization
# wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
# kubectl apply -f rbac.yaml
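For reference, the core of that rbac.yaml is sketched below (abbreviated from the upstream file, which additionally defines a Role/RoleBinding for leader-election endpoints). It creates the nfs-client-provisioner Service Account referenced later by the deployment and grants it the PV/PVC/StorageClass permissions the provisioner needs:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io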
Create the StorageClass
# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
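Apply the class and confirm it is registered:
# kubectl apply -f class.yaml
# kubectl get storageclass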
Create the nfs-client-provisioner auto-provisioner so that persistent volumes (PVs) are created automatically:
- Automatically created PVs are backed by directories on the NFS share named ${namespace}-${pvcName}-${pvName}
- When such a PV is reclaimed, the directory is renamed to archived-${namespace}-${pvcName}-${pvName} and kept on the NFS server, provided the StorageClass sets archiveOnDelete: "true"; with "false", as in the class above, the directory is deleted instead
# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.10.17
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.10.17
            path: /data/nfs
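Apply the deployment, and optionally verify the whole chain with a throwaway claim before moving on (test-claim is a hypothetical name, modeled on the test-claim.yaml that ships next to the upstream deploy files):
# kubectl apply -f deployment.yaml
# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: managed-nfs-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
# kubectl apply -f test-claim.yaml
# kubectl get pvc test-claim
# kubectl delete -f test-claim.yaml
A Bound status on test-claim, plus a matching default-test-claim-pvc-* directory appearing on the NFS server, confirms the provisioner is working.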
Create a stateful application
# cat statefulset-nfs.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
        annotations:
          volume.beta.kubernetes.io/storage-class: managed-nfs-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
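Note that volume.beta.kubernetes.io/storage-class is the legacy annotation form; on current clusters the spec.storageClassName field does the same job, so the claim template could equivalently be written as (a minimal sketch):
volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 1Gi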
[root@master01 ~]# kubectl apply -f statefulset-nfs.yaml
Check the Pod/PV/PVC
[root@master01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-5f5fff65ff-2pmxh 1/1 Running 0 26m
nfs-web-0 1/1 Running 0 2m33s
nfs-web-1 1/1 Running 0 2m27s
nfs-web-2 1/1 Running 0 2m21s
[root@master01 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-nfs-web-0 Bound pvc-62f4868f-c6f7-459e-a280-26010c3a5849 1Gi RWO managed-nfs-storage 2m35s
www-nfs-web-1 Bound pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9 1Gi RWO managed-nfs-storage 2m29s
www-nfs-web-2 Bound pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0 1Gi RWO managed-nfs-storage 2m23s
[root@master01 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0 1Gi RWO Delete Bound default/www-nfs-web-2 managed-nfs-storage 2m25s
pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9 1Gi RWO Delete Bound default/www-nfs-web-1 managed-nfs-storage 2m31s
pvc-62f4868f-c6f7-459e-a280-26010c3a5849 1Gi RWO Delete Bound default/www-nfs-web-0 managed-nfs-storage 2m36s
Check the directories on the NFS server; the per-PVC subdirectories exist but are still empty:
[root@work03 ~]# ls -l /data/nfs/
total 12
default-www-nfs-web-0-pvc-62f4868f-c6f7-459e-a280-26010c3a5849
default-www-nfs-web-1-pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9
default-www-nfs-web-2-pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0
Destructive testing
Write each pod's own hostname into its index.html:
[root@master01 ~]# for i in 0 1 2; do kubectl exec nfs-web-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done
The subdirectories on the remote NFS server are no longer empty; content has appeared:
[root@work03 ~]# ls /data/nfs/default-www-nfs-web-0-pvc-62f4868f-c6f7-459e-a280-26010c3a5849/
index.html
[root@work03 ~]#
Check the content inside each container; each one serves its own hostname:
[root@master01 ~]# for i in 0 1 2; do kubectl exec -it nfs-web-$i -- cat /usr/share/nginx/html/index.html; done
nfs-web-0
nfs-web-1
nfs-web-2
Delete the corresponding pods:
[root@master01 ~]# kubectl get pod -l app=nfs-web
NAME READY STATUS RESTARTS AGE
nfs-web-0 1/1 Running 0 7m7s
nfs-web-1 1/1 Running 0 7m3s
nfs-web-2 1/1 Running 0 7m
[root@master01 ~]# kubectl delete pod -l app=nfs-web
pod "nfs-web-0" deleted
pod "nfs-web-1" deleted
pod "nfs-web-2" deleted
You can see they are automatically recreated:
[root@master01 ~]# kubectl get pod -l app=nfs-web
NAME READY STATUS RESTARTS AGE
nfs-web-0 1/1 Running 0 15s
nfs-web-1 1/1 Running 0 11s
nfs-web-2 1/1 Running 0 8s
Check the content in each pod again; the file contents are unchanged:
[root@master01 ~]# for i in 0 1 2; do kubectl exec -it nfs-web-$i -- cat /usr/share/nginx/html/index.html; done
nfs-web-0
nfs-web-1
nfs-web-2
Closing remarks
As you can see, the StatefulSet controller keeps the topology among pods stable by creating them in a fixed order, while nfs-client-provisioner automatically provisions a remote storage volume with a fixed mapping to each pod, ensuring that data is not lost when a pod is rebuilt.
Contact me
WeChat official account: zuolinux_com