Preface

In this section, Kubernetes uses remote NFS storage to provide dynamic provisioning for the pods it hosts. A pod creator does not need to care where or in what form the data is stored; they only need to request the amount of space they need (see the PVC sketch after the list below).

The overall workflow:

  1. Create the NFS server.
  2. Create a Service Account, which controls the permissions the NFS provisioner has inside the Kubernetes cluster.
  3. Create a StorageClass. When a PVC references it, the StorageClass invokes the NFS provisioner to do the provisioning work and binds the resulting PV to the PVC.
  4. Create the NFS provisioner. It does two things: it creates mount points (volumes) under the NFS shared directory, and it creates PVs and associates each PV with its NFS mount point.
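
For reference, the "request" a pod creator submits is nothing more than a PVC that names the StorageClass and a size. A minimal sketch, assuming the managed-nfs-storage class built later in this article; the claim name test-claim is illustrative, not from the original:

# cat > test-claim.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim    # hypothetical name, for illustration only
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# kubectl apply -f test-claim.yaml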

Update history

  • 20200610 - First draft - 左程立
  • Original post - https://blog.zuolinux.com/2020/06/10/nfs-client-provisioner.html

Configure the NFS server

Server IP: 192.168.10.17

[root@work03 ~]# yum install nfs-utils rpcbind -y
[root@work03 ~]# systemctl start nfs
[root@work03 ~]# systemctl start rpcbind
[root@work03 ~]# systemctl enable nfs
[root@work03 ~]# systemctl enable rpcbind
[root@work03 ~]# mkdir -p /data/nfs/
[root@work03 ~]# chmod 777 /data/nfs/
[root@work03 ~]# cat /etc/exports
/data/nfs/    192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
[root@work03 ~]# exportfs -arv
exporting 192.168.10.0/24:/data/nfs
[root@work03 ~]# showmount -e localhost
Export list for localhost:
/data/nfs 192.168.10.0/24

Export options:

sync: writes go to both the memory buffer and the disk; slower, but guarantees data consistency.
async: writes are held in the memory buffer and flushed to disk only when necessary.

Install nfs-utils and rpcbind on all worker nodes

yum install nfs-utils rpcbind -y
systemctl start nfs
systemctl start rpcbind
systemctl enable nfs
systemctl enable rpcbind
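
Before moving on, it can be worth confirming that a worker node can actually mount the export. A quick test mount; the mount point /mnt/nfs-test is just an example:

mkdir -p /mnt/nfs-test
mount -t nfs 192.168.10.17:/data/nfs /mnt/nfs-test
df -h /mnt/nfs-test    # the export should appear with its available space
umount /mnt/nfs-test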

Create the dynamic volume provisioner

Create the RBAC authorization

# wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
# kubectl apply -f rbac.yaml
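
To confirm what the manifest created, you can list the objects; the names below assume the defaults shipped in the upstream rbac.yaml:

# kubectl get serviceaccount nfs-client-provisioner
# kubectl get clusterrole nfs-client-provisioner-runner
# kubectl get clusterrolebinding run-nfs-client-provisioner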

Create the StorageClass

# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
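
The article does not show the apply step for this file; presumably it is the usual one, after which the new class should be listed:

# kubectl apply -f class.yaml
# kubectl get storageclass managed-nfs-storage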

Create the nfs-client-provisioner auto-provisioner, so that persistent volumes (PVs) are created automatically

  • Automatically created PVs are stored on the NFS server under the naming format ${namespace}-${pvcName}-${pvName}
  • When such a PV is reclaimed, it is kept on the NFS server under the naming format archived-${namespace}-${pvcName}-${pvName} (this applies when the StorageClass parameter archiveOnDelete is "true"; with "false", as in the class.yaml above, the directory is removed instead)
# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.10.17
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.10.17
            path: /data/nfs
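
After applying the deployment, the provisioner pod should come up in the default namespace. Assuming the manifest above is saved as deployment.yaml:

# kubectl apply -f deployment.yaml
# kubectl get pods -l app=nfs-client-provisioner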

Create a stateful application

# cat statefulset-nfs.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nfs-web # must match the StatefulSet pod labels so the headless Service can select the pods
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: managed-nfs-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
[root@master01 ~]# kubectl apply -f statefulset-nfs.yaml
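
The ordered, one-at-a-time startup of the replicas can be watched while they come up; an optional check, not from the original article:

[root@master01 ~]# kubectl rollout status statefulset/nfs-web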

Check the Pods/PVs/PVCs

[root@master01 ~]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5f5fff65ff-2pmxh   1/1     Running   0          26m
nfs-web-0                                 1/1     Running   0          2m33s
nfs-web-1                                 1/1     Running   0          2m27s
nfs-web-2                                 1/1     Running   0          2m21s
[root@master01 ~]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-nfs-web-0   Bound    pvc-62f4868f-c6f7-459e-a280-26010c3a5849   1Gi        RWO            managed-nfs-storage   2m35s
www-nfs-web-1   Bound    pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9   1Gi        RWO            managed-nfs-storage   2m29s
www-nfs-web-2   Bound    pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0   1Gi        RWO            managed-nfs-storage   2m23s
[root@master01 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS          REASON   AGE
pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0   1Gi        RWO            Delete           Bound    default/www-nfs-web-2   managed-nfs-storage            2m25s
pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9   1Gi        RWO            Delete           Bound    default/www-nfs-web-1   managed-nfs-storage            2m31s
pvc-62f4868f-c6f7-459e-a280-26010c3a5849   1Gi        RWO            Delete           Bound    default/www-nfs-web-0   managed-nfs-storage            2m36s

Check the directory listing on the NFS server; at this point the subdirectories themselves are still empty

[root@work03 ~]# ls -l /data/nfs/
total 12
default-www-nfs-web-0-pvc-62f4868f-c6f7-459e-a280-26010c3a5849
default-www-nfs-web-1-pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9
default-www-nfs-web-2-pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0

Destructive testing

Write content into each pod

[root@master01 ~]# for i in 0 1 2; do kubectl exec nfs-web-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done

The subdirectories on the remote NFS server are no longer empty; content has appeared

[root@work03 ~]# ls /data/nfs/default-www-nfs-web-0-pvc-62f4868f-c6f7-459e-a280-26010c3a5849/
index.html
[root@work03 ~]#

Check the content in each container; each file contains its own hostname

[root@master01 ~]# for i in 0 1 2; do kubectl exec -it nfs-web-$i -- cat /usr/share/nginx/html/index.html; done
nfs-web-0
nfs-web-1
nfs-web-2

Delete the pods

[root@master01 ~]# kubectl get pod -l app=nfs-web
NAME        READY   STATUS    RESTARTS   AGE
nfs-web-0   1/1     Running   0          7m7s
nfs-web-1   1/1     Running   0          7m3s
nfs-web-2   1/1     Running   0          7m
[root@master01 ~]# kubectl delete pod -l app=nfs-web
pod "nfs-web-0" deleted
pod "nfs-web-1" deleted
pod "nfs-web-2" deleted

You can see that they are automatically recreated

[root@master01 ~]# kubectl get pod -l app=nfs-web
NAME        READY   STATUS    RESTARTS   AGE
nfs-web-0   1/1     Running   0          15s
nfs-web-1   1/1     Running   0          11s
nfs-web-2   1/1     Running   0          8s

Check the content in each pod again; the file contents have not changed

[root@master01 ~]# for i in 0 1 2; do kubectl exec -it nfs-web-$i -- cat /usr/share/nginx/html/index.html; done
nfs-web-0
nfs-web-1
nfs-web-2

Conclusion

As you can see, the StatefulSet controller's fixed pod creation order keeps the topology among the pods stable, while nfs-client-provisioner automatically creates a remote storage volume with a fixed mapping to each pod, ensuring that data is not lost when a pod is rebuilt.

Contact me

WeChat official account: zuolinux_com