In the PV/PVC model, you first create a PV and then define a PVC that binds to it one-to-one. In a large cluster, creating them one by one like this is expensive to maintain and labor-intensive. For this, Kubernetes provides a mechanism that creates PVs automatically, called StorageClass, which acts as a template for creating PVs.

A StorageClass defines two things:

  • The attributes of the PV:
    for example, storage size and type
  • The storage plugin the PV needs:
    for example, Ceph

With these two pieces of information, Kubernetes can match a user-submitted PVC to the corresponding StorageClass, then call the storage plugin declared by that StorageClass to create the PV automatically.

To use NFS, however, we need the nfs-client provisioner plugin. This plugin makes the NFS service create PVs for us automatically.

An automatically created PV is stored under a directory named ${namespace}-${pvcName}-${pvName}.
If the PV is reclaimed, the directory is renamed to archived-${namespace}-${pvcName}-${pvName}.
See GitHub for details.
PV, PVC, and NFS themselves are not covered again here; if you have not set them up yet, see the earlier post Managing data storage with PV and PVC in Kubernetes.
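These naming rules can be sketched in shell, using the concrete namespace, PVC, and PV names that appear later in this walkthrough:

```shell
# Sketch of how the provisioner names the backing directory on the NFS share.
# The concrete names below are the ones from this walkthrough.
namespace="default"
pvcName="tomcat"
pvName="pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec"

# Directory while the PV is in use:
dir="${namespace}-${pvcName}-${pvName}"
echo "$dir"

# Directory after the PV is reclaimed (when archiving is enabled):
echo "archived-${dir}"
```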

Create a ServiceAccount

The purpose of creating a ServiceAccount is to grant RBAC permissions to nfs-client.

```shell
# Download rbac.yaml (use the raw URL; the blob URL returns an HTML page)
wget https://raw.githubusercontent.com/kubernetes-retired/external-storage/201f40d78a9d3fd57d8a441cfc326988d88f35ec/nfs-client/deploy/rbac.yaml
```

Deploy rbac.yaml:

```shell
kubectl apply -f rbac.yaml
# Output:
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
```

Create nfs-client

Use a Deployment to create nfs-client.

```shell
# Download deployment.yaml (use the raw URL; the blob URL returns an HTML page)
wget https://raw.githubusercontent.com/kubernetes-retired/external-storage/201f40d78a9d3fd57d8a441cfc326988d88f35ec/nfs-client/deploy/deployment.yaml
```

Edit the YAML as follows:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs      # must match the provisioner name in class.yaml, otherwise provisioning fails
            - name: NFS_SERVER
              value: 10.0.10.51          # IP address (or resolvable hostname) of the NFS server
            - name: NFS_PATH
              value: /home/bsh/nfs       # shared directory exported by the NFS server (note: this must be the innermost directory of the share, otherwise the deployed app has no permission to create directories and stays Pending)
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.10.51          # IP address (or resolvable hostname) of the NFS server
            path: /home/bsh/nfs         # shared directory exported by the NFS server (same caveat as NFS_PATH above)
```

⚠️ Note
value: fuseim.pri/ifs # the provisioner name here must match the provisioner field in class.yaml, otherwise provisioning will fail

Deploy and verify

```shell
# Deploy nfs-client
kubectl apply -f deployment.yaml
# Output:
deployment.apps/nfs-client-provisioner created
```

Check the pod:

```shell
kubectl get pod
# Output:
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-fd74f99b4-wr58j   1/1     Running   1          30s
```

Create the StorageClass

class.yaml is short, so you can skip the download and write it out by hand; the full content is below.
class.yaml download link

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
```
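With archiveOnDelete set to "false", the backing directory on the NFS share is removed together with the PVC. If you would rather keep the data, a variant of the same class flips the parameter (a sketch; the class name managed-nfs-storage-archive is my own, hypothetical choice):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-archive   # hypothetical name for the archiving variant
provisioner: fuseim.pri/ifs           # must still match PROVISIONER_NAME in the Deployment
parameters:
  archiveOnDelete: "true"             # on PVC deletion, rename the directory with an archived- prefix instead of deleting it
```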

⚠️ Note
provisioner must match the value of PROVISIONER_NAME in the Deployment YAML above.

Create the storageclass:

```shell
# Create it
kubectl apply -f class.yaml
# Output:
storageclass.storage.k8s.io/managed-nfs-storage created
```

Check its status:

```shell
kubectl get storageclass
# Output:
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  53s
```

Create a PVC

Create tomcat-storageclass-pvc.yaml:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tomcat
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
```
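The beta annotation above still works, but the same claim can also be written with the spec.storageClassName field, which replaced that annotation; a sketch:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tomcat
spec:
  storageClassName: managed-nfs-storage   # replaces the volume.beta.kubernetes.io/storage-class annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
```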

Apply the YAML:

```shell
kubectl apply -f tomcat-storageclass-pvc.yaml
# Output:
persistentvolumeclaim/tomcat created
```

Check the status:

```shell
kubectl get pvc
# Output:
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
tomcat       Bound    pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec   500Mi      RWX            managed-nfs-storage   48s
```

Add the PVC to a pod

We'll keep using the earlier tomcat example, and move the logs directory under tomcat onto the local NFS share.

⚠️ Note
If a pod using the PVC fails to be created with an error like persistentvolume-controller waiting for a volume to be created, see the earlier troubleshooting post on that error.

The full YAML is as follows:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
  minReadySeconds: 1
  progressDeadlineSeconds: 60
  revisionHistoryLimit: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: wenlongxue/tomcat:tomcat-demo-62-123xw2
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "2Gi"
            cpu: "80m"
          limits:
            memory: "2Gi"
            cpu: "80m"
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 180
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 1
          failureThreshold: 30
        volumeMounts:
        - mountPath: "/usr/local/tomcat/logs"
          name: tomcat
      # PVC section
      volumes:
      - name: tomcat
        persistentVolumeClaim:
          claimName: tomcat
---
# Service section
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat
spec:
  selector:
    app: tomcat
  ports:
  - name: tomcat-port
    protocol: TCP
    port: 8080
    targetPort: 8080
  type: ClusterIP
---
# Ingress section
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - tomcat.cnsre.cn
    secretName: tls-secret
  rules:
  - host: tomcat.cnsre.cn
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080
```

Deploy the pod:

```shell
kubectl apply -f tomcatc.yaml
# Output:
deployment.apps/tomcat-deployment created
```

Check the status:

```shell
kubectl get pod
# Output:
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-fd74f99b4-wr58j   1/1     Running   0          76m
tomcat-deployment-7588b5c8fd-cnwvt       1/1     Running   0          59m
tomcat-deployment-7588b5c8fd-kl8fj       1/1     Running   0          59m
tomcat-deployment-7588b5c8fd-ksbg9       1/1     Running   0          59m
```

Check the PV and PVC:

```shell
[root@master tomccat]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS          REASON   AGE
persistentvolume/pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec   500Mi      RWX            Delete           Bound    default/tomcat   managed-nfs-storage            65m

NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/tomcat   Bound    pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec   500Mi      RWX            managed-nfs-storage   65m
```

Check the directory contents on the NFS server:

```shell
[root@node1 ~]# ll /home/bsh/nfs/default-tomcat-pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec/
total 220
-rw-r-----. 1 root root  22217 Sep  3 14:49 catalina.2021-09-03.log
-rw-r-----. 1 root root      0 Sep  3 14:41 host-manager.2021-09-03.log
-rw-r-----. 1 root root   2791 Sep  3 14:49 localhost.2021-09-03.log
-rw-r-----. 1 root root 118428 Sep  3 15:31 localhost_access_log.2021-09-03.txt
-rw-r-----. 1 root root      0 Sep  3 14:41 manager.2021-09-03.log
```
