Kubernetes Notes (12): Volumes, Part 3: Longhorn, StorageClass, and Dynamic Provisioning

Preface:

The PVs and PVCs introduced in the earlier notes decouple the storage backend from Pod mounts: developers work with PVCs directly, while the PVs backed by the Volume storage system are maintained by administrators. The drawback is that before developers can use a PVC, an administrator has to create a matching PV by hand, and as Pods keep multiplying this work becomes repetitive and tedious. Static provisioning does not fit the trend toward automated operations, so StorageClass dynamic provisioning emerged. It acts as a template for creating PVs and takes care of automatic PV creation, backup, and other extended functionality.

StorageClass overview

Abbreviated SC; both PVs and PVCs can belong to a particular SC.
A template for creating PVs: a storage service can be associated with an SC, with the storage service's management interface made available to the SC, so that the SC can CRUD (Create, Read, Update, Delete) storage units on that service. As a result, when a PVC is declared against an SC and no existing PV matches it, the SC can call that management interface to create, on the fly, a PV that satisfies the PVC's claim. This mechanism of providing PVs is called Dynamic Provisioning.

Specifically, a StorageClass defines the following two parts:

  1. The attributes of the PV, such as storage size and type;
  2. The storage plugin needed to create this kind of PV, such as Ceph.

With these two pieces of information, Kubernetes can match a user-submitted PVC to the corresponding StorageClass, and then call the storage plugin declared by that StorageClass to create the required PV.

Note that not every storage system supports StorageClass: the storage system must expose an interface that the controller can call, passing in the PVC's parameters, to create the PV.
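
To check which classes a cluster already offers and which provisioner backs each one, and optionally to mark one class as the default used by PVCs that omit storageClassName, the usual commands look like this (<class-name> is a placeholder):

kubectl get storageclass
kubectl describe storageclass <class-name>
# mark a class as the cluster-wide default (standard annotation key)
kubectl annotate storageclass <class-name> storageclass.kubernetes.io/is-default-class="true"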

Commonly used fields:

The desired state of a StorageClass resource is defined at the same level as apiVersion, kind, and metadata rather than nested under a spec field. The supported fields include the following:

  • allowVolumeExpansion <boolean>: whether volume expansion is supported;
  • allowedTopologies <[]Object>: defines the node topologies from which volumes may be dynamically provisioned; only used on clusters with the volume scheduling feature enabled; each volume plugin has its own supported topology specification, and an empty topology selector means no topology restriction;
  • provisioner <string>: required; identifies the storage provisioner; the StorageClass relies on this value to decide which storage plugin to use to adapt to the target storage system; Kubernetes ships many built-in provisioners, all named with the kubernetes.io/ prefix, e.g. kubernetes.io/glusterfs;
  • parameters <map[string]string>: the parameters needed to connect to a particular storage service under the given provisioner; the available parameters differ between provisioners;
  • reclaimPolicy <string>: the default reclaim policy for PVs dynamically created by this StorageClass; valid values are Delete (the default) and Retain; statically created PVs keep the reclaim policy from their own definitions;
  • volumeBindingMode <string>: defines how PVCs are provisioned and bound; defaults to VolumeBindingImmediate; this field only takes effect when the volume scheduling feature is enabled;
  • mountOptions <[]string>: the default mount options for PVs dynamically created by this class.
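
Putting those fields together, a minimal annotated skeleton might look like the following (the provisioner and the parameter shown are placeholders, not a real backend):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-sc                    # placeholder name
provisioner: example.io/provisioner   # placeholder; must name a real storage plugin
parameters:                           # backend-specific key/value pairs
  exampleKey: "exampleValue"          # placeholder parameter
reclaimPolicy: Retain                 # Delete (default) or Retain
allowVolumeExpansion: true            # allow resizing PVCs if the plugin supports it
volumeBindingMode: Immediate          # or WaitForFirstConsumer when volume scheduling is enabled
mountOptions:                         # default mount options for dynamically created PVs
  - noatime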

Template example:

[root@k8s-master storage]# cat Storageclass-rdb-demo.yaml 
apiVersion: storage.k8s.io/v1 
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: kubernetes.io/rbd  #the parameters template differs per storage backend, so check the official docs; this one is for Ceph RBD, and not every storage type supports StorageClass
parameters:
  monitors: ceph01.ilinux.io:6789,ceph02.ilinux.io:6789,ceph03.ilinux.io:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-kube-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
reclaimPolicy: Retain  #reclaim policy
  • Since the test environment has no Ceph cluster, this class is not demonstrated here
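
For reference only, if a Ceph cluster were available, a PVC would trigger dynamic provisioning from this class simply by naming it (a sketch, not run here; the claim name is made up):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dyn-rbd-demo          # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
  storageClassName: fast-rbd      # the StorageClass defined above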

The Longhorn StorageClass plugin

Official documentation: https://longhorn.io

Longhorn greatly improves the efficiency of developers and IT Ops: persistent storage can be set up with a single click, without paying for expensive proprietary solutions. Beyond that, Longhorn reduces the resources needed to manage data and the operating environment, helping teams stay focused and deliver code and applications faster.

Longhorn remains true to Rancher's 100% open-source philosophy; it is a distributed block storage project built from microservices. Longhorn released its Beta in 2019 and was donated to the CNCF as a Sandbox project in October of that year. It has drawn wide attention from developers, and thousands of users have stress-tested it and provided extremely valuable feedback.

Longhorn's GA release offers a rich set of enterprise-grade storage features, including:

  • Automatic provisioning, snapshots, backup, and restore
  • Zero-downtime volume expansion
  • Cross-cluster disaster recovery volumes with defined RTO and RPO
  • Live upgrades without affecting volumes
  • Full-featured Kubernetes CLI integration and a standalone user interface

Installing Longhorn

Installation prerequisites: at least three nodes are required, so that a leader can be elected.
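
Every node, not just the master, needs the iSCSI initiator before the install. Longhorn also ships an environment check script; the sketch below assumes it is available at the v1.1.2 tag under the usual path:

# run on every node (CentOS/RHEL shown; Debian/Ubuntu would install open-iscsi instead)
yum -y install iscsi-initiator-utils
systemctl enable --now iscsid
# optional: run Longhorn's environment check from any machine with kubectl access
curl -sSfL https://raw.githubusercontent.com/longhorn/longhorn/v1.1.2/scripts/environment_check.sh | bash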

[root@k8s-master storage]#  yum -y install iscsi-initiator-utils  #the iSCSI initiator must be installed before installing Longhorn
[root@k8s-master storage]# kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.2/deploy/longhorn.yaml  #install; pulling the images takes a while

[root@k8s-master storage]# kubectl get pods -n longhorn-system --watch  #wait until all pods are ready

[root@k8s-master storage]# kubectl get pods --namespace longhorn-system -o wide   #all pods are ready
NAME                                        READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
csi-attacher-54c7586574-fvv4p               1/1     Running   0          37m     10.244.3.16    k8s-node3   <none>           <none>
csi-attacher-54c7586574-swdsr               1/1     Running   0          43m     10.244.2.111   k8s-node2   <none>           <none>
csi-attacher-54c7586574-zkzrg               1/1     Running   0          43m     10.244.3.10    k8s-node3   <none>           <none>
csi-provisioner-5ff5bd6b88-bs687            1/1     Running   0          37m     10.244.3.17    k8s-node3   <none>           <none>
csi-provisioner-5ff5bd6b88-gl4xn            1/1     Running   0          43m     10.244.2.112   k8s-node2   <none>           <none>
csi-provisioner-5ff5bd6b88-qkzt4            1/1     Running   0          43m     10.244.3.11    k8s-node3   <none>           <none>
csi-resizer-7699cdfc4-4w49w                 1/1     Running   0          37m     10.244.3.15    k8s-node3   <none>           <none>
csi-resizer-7699cdfc4-l2j49                 1/1     Running   0          43m     10.244.3.12    k8s-node3   <none>           <none>
csi-resizer-7699cdfc4-sndlm                 1/1     Running   0          43m     10.244.2.113   k8s-node2   <none>           <none>
csi-snapshotter-8f58f46b4-6s89m             1/1     Running   0          37m     10.244.2.119   k8s-node2   <none>           <none>
csi-snapshotter-8f58f46b4-qgv5r             1/1     Running   0          43m     10.244.3.13    k8s-node3   <none>           <none>
csi-snapshotter-8f58f46b4-tf5ls             1/1     Running   0          43m     10.244.2.115   k8s-node2   <none>           <none>
engine-image-ei-a5a44787-5ntlm              1/1     Running   0          44m     10.244.1.146   k8s-node1   <none>           <none>
engine-image-ei-a5a44787-h45hr              1/1     Running   0          44m     10.244.3.6     k8s-node3   <none>           <none>
engine-image-ei-a5a44787-phnjf              1/1     Running   0          44m     10.244.2.108   k8s-node2   <none>           <none>
instance-manager-e-4384d6f1                 1/1     Running   0          44m     10.244.2.110   k8s-node2   <none>           <none>
instance-manager-e-54f46256                 1/1     Running   0          34m     10.244.1.148   k8s-node1   <none>           <none>
instance-manager-e-e008dd8a                 1/1     Running   0          44m     10.244.3.7     k8s-node3   <none>           <none>
instance-manager-r-0ad3175d                 1/1     Running   0          44m     10.244.3.8     k8s-node3   <none>           <none>
instance-manager-r-61277092                 1/1     Running   0          44m     10.244.2.109   k8s-node2   <none>           <none>
instance-manager-r-d8a9eb0e                 1/1     Running   0          34m     10.244.1.149   k8s-node1   <none>           <none>
longhorn-csi-plugin-5htsd                   2/2     Running   0          7m41s   10.244.2.123   k8s-node2   <none>           <none>
longhorn-csi-plugin-hpjgl                   2/2     Running   0          16s     10.244.1.151   k8s-node1   <none>           <none>
longhorn-csi-plugin-wtkcj                   2/2     Running   0          43m     10.244.3.14    k8s-node3   <none>           <none>
longhorn-driver-deployer-5479f45d86-l4fpq   1/1     Running   0          57m     10.244.3.4     k8s-node3   <none>           <none>
longhorn-manager-dgk4d                      1/1     Running   1          57m     10.244.1.145   k8s-node1   <none>           <none>
longhorn-manager-hb7cl                      1/1     Running   0          57m     10.244.2.107   k8s-node2   <none>           <none>
longhorn-manager-xrxll                      1/1     Running   0          57m     10.244.3.3     k8s-node3   <none>           <none>
longhorn-ui-79f8976fbf-sb79r                1/1     Running   0          57m     10.244.3.5     k8s-node3   <none>           <none>

Example 1: create a PVC and verify that a PV is created automatically

[root@k8s-master storage]# cat  pvc-dyn-longhorn-demo.yaml   
apiVersion: v1
kind: PersistentVolumeClaim   #resource type
metadata:
  name: pvc-dyn-longhorn-demo
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi   #the new PV is created with at least this capacity
    limits:
      storage: 10Gi
  storageClassName: longhorn  #provision through the longhorn StorageClass

[root@k8s-master storage]# kubectl apply -f longhorn.yaml 
  • Modify the reclaim policy and replica count (optional; adjust to actual needs)

    [root@k8s-master storage]# kubectl get sc
    NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    longhorn   driver.longhorn.io   Delete          Immediate           true                   48s   # note: the default Delete reclaim policy is risky (data is removed along with the PVC)
    
    [root@k8s-master storage]# vim longhorn.yaml  #edit the yaml file to change the reclaim policy
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: longhorn-storageclass
      namespace: longhorn-system
    data:
      storageclass.yaml: |
        kind: StorageClass
        apiVersion: storage.k8s.io/v1
        metadata:
          name: longhorn
        provisioner: driver.longhorn.io   #Longhorn's own provisioner, backed by local disks; adjust if you use networked storage instead
        allowVolumeExpansion: true
        reclaimPolicy: Retain   #changed from the default Delete
        volumeBindingMode: Immediate
        parameters:
          numberOfReplicas: "3"   #number of data replicas; more replicas means safer data, but also higher demands on disk capacity and performance
          staleReplicaTimeout: "2880"
          fromBackup: ""
    ...
    ---
    [root@k8s-master storage]# kubectl apply -f longhorn.yaml  #re-apply the configuration
    [root@k8s-master storage]# kubectl delete  -f  pvc-dyn-longhorn-demo.yaml
    [root@k8s-master storage]# kubectl apply   -f  pvc-dyn-longhorn-demo.yaml  #recreate the PVC
    
    [root@k8s-master storage]# kubectl get sc  #the change took effect
    NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    longhorn   driver.longhorn.io   Retain          Immediate           true                   10m
    
    [root@k8s-master storage]# kubectl get pv #the PV was created automatically
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                           STORAGECLASS   REASON   AGE
    pv-nfs-demo002                             10Gi       RWX            Retain           Available                                                           2d5h
    pv-nfs-demo003                             1Gi        RWO            Retain           Available                                                           2d5h
    pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58   2Gi        RWO            Retain           Bound       default/pvc-dyn-longhorn-demo   longhorn                118s
    
    [root@k8s-master storage]# kubectl get pvc #the PVC is bound to the dynamically created PV
    NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pvc-dyn-longhorn-demo   Bound     pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58   2Gi        RWO            longhorn       2m19s
    Modify the Service to expose the Longhorn UI:
    [root@k8s-master ~]# kubectl get svc --namespace longhorn-system -o wide
    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE   SELECTOR
    csi-attacher        ClusterIP   10.102.194.8     <none>        12345/TCP   17h   app=csi-attacher
    csi-provisioner     ClusterIP   10.99.37.10      <none>        12345/TCP   17h   app=csi-provisioner
    csi-resizer         ClusterIP   10.111.56.226    <none>        12345/TCP   17h   app=csi-resizer
    csi-snapshotter     ClusterIP   10.110.198.133   <none>        12345/TCP   17h   app=csi-snapshotter
    longhorn-backend    ClusterIP   10.106.163.23    <none>        9500/TCP    17h   app=longhorn-manager
    longhorn-frontend   ClusterIP   10.111.219.113   <none>        80/TCP      17h   app=longhorn-ui  #change this Service to NodePort
    
    [root@k8s-master ~]# kubectl edit  svc longhorn-frontend  --namespace longhorn-system 
    service/longhorn-frontend edited
    
    [root@k8s-master ~]# kubectl get svc --namespace longhorn-system -o wide
    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
    csi-attacher        ClusterIP   10.102.194.8     <none>        12345/TCP      17h   app=csi-attacher
    csi-provisioner     ClusterIP   10.99.37.10      <none>        12345/TCP      17h   app=csi-provisioner
    csi-resizer         ClusterIP   10.111.56.226    <none>        12345/TCP      17h   app=csi-resizer
    csi-snapshotter     ClusterIP   10.110.198.133   <none>        12345/TCP      17h   app=csi-snapshotter
    longhorn-backend    ClusterIP   10.106.163.23    <none>        9500/TCP       17h   app=longhorn-manager
    longhorn-frontend   NodePort    10.111.219.113   <none>        80:30745/TCP   17h   app=longhorn-ui   #open <node-IP>:30745 in a browser
    

    In the UI you can view existing volumes, create new volumes, inspect node information, run backups, and more.
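
    Instead of editing the Service interactively, the same change can be scripted with a patch (a sketch that only switches the Service type):

    kubectl -n longhorn-system patch svc longhorn-frontend -p '{"spec":{"type":"NodePort"}}'
    kubectl -n longhorn-system get svc longhorn-frontend   # note the allocated NodePort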

  • Create a redis Pod bound to the PVC
[root@k8s-master storage]# cat volumes-pvc-longhorn-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: volumes-pvc-longhorn-demo
  namespace: default
spec:
  containers:
  - name: redis
    image: redis:alpine
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 6379
      name: redisport
    volumeMounts:
    - mountPath: /data
      name: redis-data-vol
  volumes:
    - name: redis-data-vol
      persistentVolumeClaim:
        claimName: pvc-dyn-longhorn-demo  #the PVC provisioned through the StorageClass

[root@k8s-master storage]# kubectl apply -f volumes-pvc-longhorn-demo.yaml

[root@k8s-master storage]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
centos-deployment-66d8cd5f8b-95brg   1/1     Running   0          18h     10.244.2.117   k8s-node2   <none>           <none>
my-grafana-7d788c5479-bpztz          1/1     Running   0          18h     10.244.2.120   k8s-node2   <none>           <none>
volumes-pvc-longhorn-demo            1/1     Running   0          2m30s   10.244.1.172   k8s-node1   <none>           <none>

[root@k8s-master storage]# kubectl exec volumes-pvc-longhorn-demo -it -- /bin/sh
/data # redis-cli
127.0.0.1:6379> set mykey www.qq.com
OK
127.0.0.1:6379> bgsabe
(error) ERR unknown command `bgsabe`, with args beginning with: 
127.0.0.1:6379> bgsave
Background saving started
127.0.0.1:6379> exit
/data # ls
dump.rdb    lost+found
/data # exit

[root@k8s-master storage]# kubectl delete -f  volumes-pvc-longhorn-demo.yaml 
pod "volumes-pvc-longhorn-demo" deleted

[root@k8s-master storage]# cat volumes-pvc-longhorn-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-pvc-longhorn-demo
  namespace: default
spec:
  nodeName: k8s-node2  #pin the Pod to a specific node
  containers:
  - name: redis
    image: redis:alpine
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 6379
      name: redisport
    volumeMounts:
    - mountPath: /data
      name: redis-data-vol
  volumes:
    - name: redis-data-vol
      persistentVolumeClaim:
        claimName: pvc-dyn-longhorn-demo
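
Re-apply the manifest so the Pod is recreated, this time pinned to k8s-node2:

kubectl apply -f volumes-pvc-longhorn-demo.yaml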

[root@k8s-master storage]# kubectl get pod -o wide -w
NAME                                 READY   STATUS              RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
centos-deployment-66d8cd5f8b-95brg   1/1     Running             0          18h   10.244.2.117   k8s-node2   <none>           <none>
my-grafana-7d788c5479-bpztz          1/1     Running             0          18h   10.244.2.120   k8s-node2   <none>           <none>
volumes-pvc-longhorn-demo            1/1     Running             0          68s   10.244.2.124   k8s-node2   <none>           <none>

[root@k8s-master storage]# kubectl exec volumes-pvc-longhorn-demo -it -- /bin/sh #check that the data on the PVC persisted
/data # redis-cli
127.0.0.1:6379> get mykey
"www.qq.com"
127.0.0.1:6379> exit
/data # exit

Longhorn data storage and custom resources

  • Because the replica count was left at the default of 3, each of the three nodes keeps a copy of the data
[root@k8s-node1 ~]# ls /var/lib/longhorn/replicas/ #default data path on each node
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-da2330a2
[root@k8s-node1 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-da2330a2/
revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta

[root@k8s-node2 ~]# ls /var/lib/longhorn/replicas/ 
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-83f7f58c
[root@k8s-node2 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-83f7f58c/
revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta

[root@k8s-node3 ~]# ls /var/lib/longhorn/replicas/
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-c90965c0
[root@k8s-node3 ~]# ls /var/lib/longhorn/replicas/pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-c90965c0/
revision.counter  volume-head-000.img  volume-head-000.img.meta  volume.meta

[root@k8s-master storage]# kubectl api-resources --api-group=longhorn.io   #custom resource types defined by Longhorn
NAME                   SHORTNAMES   APIGROUP      NAMESPACED   KIND
backingimagemanagers   lhbim        longhorn.io   true         BackingImageManager
backingimages          lhbi         longhorn.io   true         BackingImage
engineimages           lhei         longhorn.io   true         EngineImage
engines                lhe          longhorn.io   true         Engine
instancemanagers       lhim         longhorn.io   true         InstanceManager
nodes                  lhn          longhorn.io   true         Node
replicas               lhr          longhorn.io   true         Replica
settings               lhs          longhorn.io   true         Setting
sharemanagers          lhsm         longhorn.io   true         ShareManager
volumes                lhv          longhorn.io   true         Volume

[root@k8s-master storage]# kubectl get replicas -n longhorn-system
NAME                                                  STATE     NODE        DISK                                   INSTANCEMANAGER               IMAGE                               AGE
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-08f28c21   running   k8s-node3   49846d34-b5a8-4a86-96f4-f0d7ca191f2a   instance-manager-r-0ad3175d   longhornio/longhorn-engine:v1.1.2   3h54m
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-6694fc48   running   k8s-node2   ce1ed80b-43c9-4fc9-8266-cedb736bacaa   instance-manager-r-61277092   longhornio/longhorn-engine:v1.1.2   3h54m
pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58-r-86a35cd3   running   k8s-node1   3d40b18a-c0e9-459c-b37e-d878152d1261   instance-manager-r-d8a9eb0e   longhornio/longhorn-engine:v1.1.2   3h54m
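
The Volume object behind the PVC can be inspected through the same API group (a quick check; output omitted here):

kubectl -n longhorn-system get volumes.longhorn.io
kubectl -n longhorn-system describe volumes.longhorn.io pvc-dbcbe588-a088-45b4-9972-74b0e6ca0b58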