Preface:

The PV and PVC mechanisms introduced earlier decouple storage types from Pod mounting: developers consume storage indirectly through PVCs, while the PVs backed by the volume storage systems are maintained by administrators. This brings a problem, though: before a developer can use a PVC, an administrator must manually create a matching PV in advance. As Pods keep multiplying, the administrator's work becomes repetitive and tedious, and this static provisioning model does not fit the trend toward automated operations. StorageClass arose in response, provisioning storage dynamically: it acts as a template for creating PVs, solving automatic PV creation and enabling extended features such as backup.

StorageClass overview

Abbreviated SC; both PVs and PVCs can belong to a particular SC.
A template for creating PVs: a storage service can be associated with an SC, and that service's management interface handed to the SC, so the SC can CRUD (Create, Read, Update, Delete) storage units on the storage service. Thus, when a PVC is declared against an SC and no existing PV matches it, the SC can call the management interface and directly create a PV that satisfies the PVC's request. This mechanism of providing PVs is called Dynamic Provisioning.

Concretely, a StorageClass defines the following two parts:

  1. The attributes of the PV, such as storage size and type;
  2. The storage plugin needed to create such a PV, for example Ceph.

With these two pieces of information, Kubernetes can find the StorageClass matching a user-submitted PVC, then invoke the storage plugin declared by that StorageClass to create the required PV.
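As a sketch of that link (the claim and class names here are hypothetical, chosen only for illustration), the only thing tying a PVC to a StorageClass is the storageClassName field:

```yaml
# Hypothetical PVC: storageClassName selects which StorageClass
# (and therefore which provisioner) creates the backing PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim            # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard  # hypothetical class, assumed to exist in the cluster
```

If storageClassName is omitted, the cluster's default StorageClass (if one has been marked as default) is used instead.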

Note, however, that not every storage system supports StorageClass: the storage system must expose an interface that the controller can call, passing in the PVC's parameters, in order to create the PV.

Common fields:

The desired state of a StorageClass resource is defined at the same level as apiVersion, kind and metadata, rather than nested inside a spec field. The supported fields include the following:

  • allowVolumeExpansion <boolean>: whether volume expansion is supported;
  • allowedTopologies <[]Object>: defines the node topologies in which volumes can be dynamically provisioned; only used when the volume scheduling feature is enabled. Each volume plugin has its own supported topology specification, and an empty topology selector means no topology restriction;
  • provisioner <string>: required; identifies the storage provisioner. The StorageClass relies on this value to determine which storage plugin to use to adapt to the target storage system. Kubernetes has many built-in provisioners, all named with the kubernetes.io/ prefix, for example kubernetes.io/glusterfs;
  • parameters <map[string]string>: the parameters needed to connect to a particular storage backend under the given provisioner; the available parameters differ from provisioner to provisioner;
  • reclaimPolicy <string>: the default reclaim policy of PVs dynamically created by this StorageClass; valid values are Delete (the default) and Retain. The reclaim policy of statically created PVs is determined by their own definitions;
  • volumeBindingMode <string>: defines how provisioning and binding are performed for a PVC; defaults to VolumeBindingImmediate. This field only takes effect when the volume scheduling feature is enabled;
  • mountOptions <[]string>: the default mount options for PVs dynamically created by this class.
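To tie the fields together, here is a minimal sketch of a StorageClass exercising them (the class name is hypothetical and the parameters are left empty purely for illustration; real values depend on the provisioner):

```yaml
# Illustrative StorageClass; note the fields sit at the top level,
# alongside apiVersion/kind/metadata, not under a spec field.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                     # hypothetical name
provisioner: kubernetes.io/glusterfs   # a built-in provisioner named above
parameters: {}                         # provisioner-specific; empty for illustration
reclaimPolicy: Retain                  # Delete is the default
allowVolumeExpansion: true
volumeBindingMode: Immediate           # the default
mountOptions: ["noatime"]              # default mount options for dynamically created PVs
```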

Template example:

[root@k8s-master storage]# cat Storageclass-rdb-demo.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: kubernetes.io/rbd  # the parameters template differs per storage backend; consult the official docs. This one is for Ceph. Not all storage types support StorageClass.
parameters:
  monitors: ceph01.ilinux.io:6789,ceph02.ilinux.io:6789,ceph03.ilinux.io:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-kube-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
reclaimPolicy: Retain  # reclaim policy
  • This is not demonstrated here, because the test environment has no Ceph cluster.
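Even without a cluster to demonstrate on, a PVC consuming the fast-rbd class above only needs to reference it by name; a sketch (the claim name and size are illustrative):

```yaml
# Hypothetical PVC against the fast-rbd StorageClass defined above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rbd-demo          # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi            # illustrative size
  storageClassName: fast-rbd  # the class from Storageclass-rdb-demo.yaml
```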

The Longhorn StorageClass plugin

Official documentation: https://longhorn.io

Longhorn greatly improves the efficiency of developers and ITOps: persistent storage can be set up with a single mouse click, without paying dearly for proprietary solutions. Beyond that, Longhorn reduces the resources needed to manage data and operating environments, helping enterprises focus on delivering code and applications faster.

Longhorn follows Rancher's 100% open-source product philosophy; it is a distributed block storage project built with microservices. Longhorn released its Beta in 2019 and was donated to the CNCF as a Sandbox project in October of the same year. It has drawn wide attention from developers, with thousands of users stress-testing it and providing invaluable feedback.

Longhorn's GA release provides a rich set of enterprise-grade storage features, including:

  • Automatic provisioning, snapshots, backup and restore
  • Zero-downtime volume expansion
  • Cross-cluster disaster-recovery volumes with defined RTO and RPO
  • Live upgrades without affecting volumes
  • Full-featured Kubernetes CLI integration and a standalone UI

Installing Longhorn

Preparation: at least 3 nodes are required, since a leader must be elected.

[root@k8s-master storage]# yum -y install iscsi-initiator-utils  # the iSCSI initiator must be installed first
[root@k8s-master storage]# kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.2/deploy/longhorn.yaml  # install; pulling the images takes a while
[root@k8s-master storage]# kubectl get pods -n longhorn-system --watch  # wait until all pods are ready
[root@k8s-master storage]# kubectl get pods --namespace longhorn-system -o wide  # all pods ready
NAME                                        READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
csi-attacher-54c7586574-fvv4p               1/1     Running   0          37m     10.244.3.16    k8s-node3   <none>           <none>
csi-attacher-54c7586574-swdsr               1/1     Running   0          43m     10.244.2.111   k8s-node2   <none>           <none>
csi-attacher-54c7586574-zkzrg               1/1     Running   0          43m     10.244.3.10    k8s-node3   <none>           <none>
csi-provisioner-5ff5bd6b88-bs687            1/1     Running   0          37m     10.244.3.17    k8s-node3   <none>           <none>
csi-provisioner-5ff5bd6b88-gl4xn            1/1     Running   0          43m     10.244.2.112   k8s-node2   <none>           <none>
csi-provisioner-5ff5bd6b88-qkzt4            1/1     Running   0          43m     10.244.3.11    k8s-node3   <none>           <none>
csi-resizer-7699cdfc4-4w49w                 1/1     Running   0          37m     10.244.3.15    k8s-node3   <none>           <none>
csi-resizer-7699cdfc4-l2j49                 1/1     Running   0          43m     10.244.3.12    k8s-node3   <none>           <none>
csi-resizer-7699cdfc4-sndlm                 1/1     Running   0          43m     10.244.2.113   k8s-node2   <none>           <none>
csi-snapshotter-8f58f46b4-6s89m             1/1     Running   0          37m     10.244.2.119   k8s-node2   <none>           <none>
csi-snapshotter-8f58f46b4-qgv5r             1/1     Running   0          43m     10.244.3.13    k8s-node3   <none>           <none>
csi-snapshotter-8f58f46b4-tf5ls             1/1     Running   0          43m     10.244.2.115   k8s-node2   <none>           <none>
engine-image-ei-a5a44787-5ntlm              1/1     Running   0          44m     10.244.1.146   k8s-node1   <none>           <none>
engine-image-ei-a5a44787-h45hr              1/1     Running   0          44m     10.244.3.6     k8s-node3   <none>           <none>
engine-image-ei-a5a44787-phnjf              1/1     Running   0          44m     10.244.2.108   k8s-node2   <none>           <none>
instance-manager-e-4384d6f1                 1/1     Running   0          44m     10.244.2.110   k8s-node2   <none>           <none>
instance-manager-e-54f46256                 1/1     Running   0          34m     10.244.1.148   k8s-node1   <none>           <none>
instance-manager-e-e008dd8a                 1/1     Running   0          44m     10.244.3.7     k8s-node3   <none>           <none>
instance-manager-r-0ad3175d                 1/1     Running   0          44m     10.244.3.8     k8s-node3   <none>           <none>
instance-manager-r-61277092                 1/1     Running   0          44m     10.244.2.109   k8s-node2   <none>           <none>
instance-manager-r-d8a9eb0e                 1/1     Running   0          34m     10.244.1.149   k8s-node1   <none>           <none>
longhorn-csi-plugin-5htsd                   2/2     Running   0          7m41s   10.244.2.123   k8s-node2   <none>           <none>
longhorn-csi-plugin-hpjgl                   2/2     Running   0          16s     10.244.1.151   k8s-node1   <none>           <none>
longhorn-csi-plugin-wtkcj                   2/2     Running   0          43m     10.244.3.14    k8s-node3   <none>           <none>
longhorn-driver-deployer-5479f45d86-l4fpq   1/1     Running   0          57m     10.244.3.4     k8s-node3   <none>           <none>
longhorn-manager-dgk4d                      1/1     Running   1          57m     10.244.1.145   k8s-node1   <none>           <none>
longhorn-manager-hb7cl                      1/1     Running   0          57m     10.244.2.107   k8s-node2   <none>           <none>
longhorn-manager-xrxll                      1/1     Running   0          57m     10.244.3.3     k8s-node3   <none>           <none>
longhorn-ui-79f8976fbf-sb79r                1/1     Running   0          57m     10.244.3.5     k8s-node3   <none>           <none>

Example 1: create a PVC and have the PV created automatically

[root@k8s-master storage]# cat pvc-dyn-longhorn-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim   # resource type
metadata:
  name: pvc-dyn-longhorn-demo
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi   # the new PV is created with at least this capacity
    limits:
      storage: 10Gi
  storageClassName: longhorn  # have the longhorn class create the PV
[root@k8s-master storage]# kubectl apply -f longhorn.yaml
  • Modify the reclaim policy and replica count (optional; adjust to your actual needs)

    [root@k8s-master storage]# kubectl get sc
    NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    longhorn   driver.longhorn.io   Delete          Immediate           true                   48s
    # note the reclaim policy: Delete means the data is removed with the claim, which is risky
    [root@k8s-master storage]# vim longhorn.yaml  # change the reclaim policy by editing the manifest
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: longhorn-storageclass
      namespace: longhorn-system
    data:
      storageclass.yaml: |
        kind: StorageClass
        apiVersion: storage.k8s.io/v1
        metadata:
          name: longhorn
        provisioner: driver.longhorn.io   # Longhorn's own provisioner; data is kept on local disks. Adjust to your needs if you use networked storage.
        allowVolumeExpansion: true
        reclaimPolicy: Retain   # changed field: reclaim policy, was Delete
        volumeBindingMode: Immediate
        parameters:
          numberOfReplicas: "3"   # replica count; more replicas mean safer data, but also higher demands on disk capacity and performance
          staleReplicaTimeout: "2880"
          fromBackup: ""
    ---
    [root@k8s-master storage]# kubectl apply -f longhorn.yaml  # re-apply the configuration
    [root@k8s-master storage]# kubectl delete -f pvc-dyn-longhorn-demo.yaml
    [root@k8s-master storage]# kubectl apply -f pvc-dyn-longhorn-demo.yaml  # recreate the PVC
    [root@k8s-master storage]# kubectl get sc  # change applied
    NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    longhorn   driver.longhorn.io   Retain          Immediate           true                   10m
    [root@k8s-master storage]# kubectl get pv  # the PV was created automatically
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                           STORAGECLASS   REASON   AGE
    pv-nfs-demo002                             10Gi       RWX            Retain           Available                                                           2d5h
    pv-nfs-demo003                             1Gi        RWO            Retain           Available                                                           2d5h
    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58   2Gi        RWO            Retain           Bound       default/pvc-dyn-longhorn-demo   longhorn                118s
    [root@k8s-master storage]# kubectl get pvc  # the PVC is bound to the auto-created PV
    NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc-dyn-longhorn-demo   Bound     pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58   2Gi        RWO            longhorn       2m19s
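With Immediate binding, as above, the PV is created as soon as the PVC appears. For topology-sensitive storage, provisioning can instead be delayed until a Pod using the claim is scheduled; a sketch of such a class (the class name is hypothetical, and this assumes your provisioner supports delayed binding):

```yaml
# Sketch: delay PV creation until a consuming Pod is scheduled, so the
# volume is provisioned on or near the node the scheduler picks.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-wffc                      # hypothetical class name
provisioner: driver.longhorn.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer    # instead of Immediate
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
```

A PVC against this class stays Pending until a Pod references it, which is why the field only matters when volume scheduling is in play.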
    Modify the svc to open the Longhorn UI
    [root@k8s-master ~]# kubectl get svc --namespace longhorn-system -o wide
    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE   SELECTOR
    csi-attacher        ClusterIP   10.102.194.8     <none>        12345/TCP   17h   app=csi-attacher
    csi-provisioner     ClusterIP   10.99.37.10      <none>        12345/TCP   17h   app=csi-provisioner
    csi-resizer         ClusterIP   10.111.56.226    <none>        12345/TCP   17h   app=csi-resizer
    csi-snapshotter     ClusterIP   10.110.198.133   <none>        12345/TCP   17h   app=csi-snapshotter
    longhorn-backend    ClusterIP   10.106.163.23    <none>        9500/TCP    17h   app=longhorn-manager
    longhorn-frontend   ClusterIP   10.111.219.113   <none>        80/TCP      17h   app=longhorn-ui
    # change the Service type to NodePort
    [root@k8s-master ~]# kubectl edit svc longhorn-frontend --namespace longhorn-system
    service/longhorn-frontend edited
    [root@k8s-master ~]# kubectl get svc --namespace longhorn-system -o wide
    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
    csi-attacher        ClusterIP   10.102.194.8     <none>        12345/TCP      17h   app=csi-attacher
    csi-provisioner     ClusterIP   10.99.37.10      <none>        12345/TCP      17h   app=csi-provisioner
    csi-resizer         ClusterIP   10.111.56.226    <none>        12345/TCP      17h   app=csi-resizer
    csi-snapshotter     ClusterIP   10.110.198.133   <none>        12345/TCP      17h   app=csi-snapshotter
    longhorn-backend    ClusterIP   10.106.163.23    <none>        9500/TCP       17h   app=longhorn-manager
    longhorn-frontend   NodePort    10.111.219.113   <none>        80:30745/TCP   17h   app=longhorn-ui
    # open the UI with any node IP: <node-ip>:30745
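Instead of interactively editing the Service, the same change can be written declaratively; a sketch of the relevant part of the Service spec (the nodePort shown is the randomly assigned one from the session above, and the targetPort is an assumption about the longhorn-ui container port — verify it against your deployed manifest):

```yaml
# Sketch: longhorn-frontend Service switched from ClusterIP to NodePort.
apiVersion: v1
kind: Service
metadata:
  name: longhorn-frontend
  namespace: longhorn-system
spec:
  type: NodePort        # was ClusterIP
  selector:
    app: longhorn-ui
  ports:
  - port: 80
    targetPort: 8000    # assumption: the longhorn-ui container port
    nodePort: 30745     # pin the port seen above; omit to let Kubernetes choose one
```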

    In the UI you can view the existing volumes, create new volumes, inspect node information, take backups, and so on.

  • Create a redis Pod bound to the PVC
[root@k8s-master storage]# cat volumes-pvc-longhorn-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-pvc-longhorn-demo
  namespace: default
spec:
  containers:
  - name: redis
    image: redis:alpine
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 6379
      name: redisport
    volumeMounts:
    - mountPath: /data
      name: redis-data-vol
  volumes:
    - name: redis-data-vol
      persistentVolumeClaim:
        claimName: pvc-dyn-longhorn-demo  # use the PVC created through the StorageClass
[root@k8s-master storage]# kubectl apply -f volumes-pvc-longhorn-demo.yaml
[root@k8s-master storage]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
centos-deployment-66d8cd5f8b-95brg   1/1     Running   0          18h     10.244.2.117   k8s-node2   <none>           <none>
my-grafana-7d788c5479-bpztz          1/1     Running   0          18h     10.244.2.120   k8s-node2   <none>           <none>
volumes-pvc-longhorn-demo            1/1     Running   0          2m30s   10.244.1.172   k8s-node1   <none>           <none>
[root@k8s-master storage]# kubectl exec volumes-pvc-longhorn-demo -it -- /bin/sh
/data # redis-cli
127.0.0.1:6379> set mykey www.qq.com
OK
127.0.0.1:6379> bgsabe
(error) ERR unknown command `bgsabe`, with args beginning with: 
127.0.0.1:6379> bgsave
Background saving started
127.0.0.1:6379> exit
/data # ls
dump.rdb    lost+found
/data # exit
[root@k8s-master storage]# kubectl delete -f volumes-pvc-longhorn-demo.yaml
pod "volumes-pvc-longhorn-demo" deleted
[root@k8s-master storage]# cat volumes-pvc-longhorn-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-pvc-longhorn-demo
  namespace: default
spec:
  nodeName: k8s-node2  # pin the Pod to a specific node
  containers:
  - name: redis
    image: redis:alpine
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 6379
      name: redisport
    volumeMounts:
    - mountPath: /data
      name: redis-data-vol
  volumes:
    - name: redis-data-vol
      persistentVolumeClaim:
        claimName: pvc-dyn-longhorn-demo
[root@k8s-master storage]# kubectl get pod -o wide -w
NAME                                 READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
centos-deployment-66d8cd5f8b-95brg   1/1     Running   0          18h   10.244.2.117   k8s-node2   <none>           <none>
my-grafana-7d788c5479-bpztz          1/1     Running   0          18h   10.244.2.120   k8s-node2   <none>           <none>
volumes-pvc-longhorn-demo            1/1     Running   0          68s   10.244.2.124   k8s-node2   <none>           <none>
[root@k8s-master storage]# kubectl exec volumes-pvc-longhorn-demo -it -- /bin/sh  # verify the data on the PVC survived
/data # redis-cli
127.0.0.1:6379> get mykey
"www.qq.com"
127.0.0.1:6379> exit
/data # exit
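Because the longhorn class was created with allowVolumeExpansion: true, the claim can later be grown by raising its request and re-applying the manifest; a sketch (growing the 2Gi demo claim to an illustrative 5Gi):

```yaml
# Sketch: expand the existing claim by raising the storage request.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dyn-longhorn-demo
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi   # raised from 2Gi; illustrative size
  storageClassName: longhorn
```

Note that only increases are allowed; a PVC can never be shrunk this way.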

Longhorn data storage and custom resources

  • Because the replica count was previously left at the default of 3, each of the 3 nodes keeps one copy of the data
[root@k8s-node1 ~]# ls /var/lib/longhorn/replicas/  # the default data path
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-da2330a2
[root@k8s-node1 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-da2330a2/
revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta
[root@k8s-node2 ~]# ls /var/lib/longhorn/replicas/
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-83f7f58c
[root@k8s-node2 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-83f7f58c/
revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta
[root@k8s-node3 ~]# ls /var/lib/longhorn/replicas/
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-c90965c0
[root@k8s-node3 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-c90965c0/
revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta
[root@k8s-master storage]# kubectl api-resources --api-group=longhorn.io  # Longhorn's custom resource types
NAME                   SHORTNAMES   APIGROUP      NAMESPACED   KIND
backingimagemanagers   lhbim        longhorn.io   true         BackingImageManager
backingimages          lhbi         longhorn.io   true         BackingImage
engineimages           lhei         longhorn.io   true         EngineImage
engines                lhe          longhorn.io   true         Engine
instancemanagers       lhim         longhorn.io   true         InstanceManager
nodes                  lhn          longhorn.io   true         Node
replicas               lhr          longhorn.io   true         Replica
settings               lhs          longhorn.io   true         Setting
sharemanagers          lhsm         longhorn.io   true         ShareManager
volumes                lhv          longhorn.io   true         Volume
[root@k8s-master storage]# kubectl get replicas -n longhorn-system
NAME                                                  STATE     NODE        DISK                                   INSTANCEMANAGER               IMAGE                               AGE
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-08f28c21   running   k8s-node3   49846d34-b5a8-4a86-96f4-f0d7ca191f2a   instance-manager-r-0ad3175d   longhornio/longhorn-engine:v1.1.2   3h54m
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-6694fc48   running   k8s-node2   ce1ed80b-43c9-4fc9-8266-cedb736bacaa   instance-manager-r-61277092   longhornio/longhorn-engine:v1.1.2   3h54m
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-86a35cd3   running   k8s-node1   3d40b18a-c0e9-459c-b37e-d878152d1261   instance-manager-r-d8a9eb0e   longhornio/longhorn-engine:v1.1.2   3h54m