About k8s: Using Ceph RBD as Kubernetes Backend Storage


The previous two posts covered building a Ceph cluster and using RBD, including some performance aspects. Ceph is, after all, a storage system and has to be integrated with the applications that consume it; RBD can back a range of platforms such as libvirt, OpenStack, CloudStack and Kubernetes. Here we use Ceph as storage for Kubernetes, i.e. combine Ceph RBD with a StorageClass to provide storage for pods. If you do not have a Kubernetes cluster yet, you can build one with kubeadm.

Create the resources and configuration on the Ceph cluster

创立存储池并设定 pgpg_num

ceph osd pool create kubernetes 32 32
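To confirm the pool was created with the expected placement-group counts, you can list the pool details:

# Verify pg_num / pgp_num of the new pool
ceph osd pool ls detail | grep kubernetes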

Initialize the pool

rbd pool init kubernetes

创立 client 用户拜访过程池

ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
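This prints a keyring containing the generated key, which is used as userKey further down. If you need to read the key back later, ceph auth get-key returns only the key itself:

# Print just the secret key of client.kubernetes
ceph auth get-key client.kubernetes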

Install and configure the ceph-rbd (ceph-csi) driver in Kubernetes

Install the Ceph client packages on every Kubernetes node

yum install ceph-common
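A quick sanity check that the client tools are now available on the node (exact versions depend on your repositories):

ceph --version
rbd --version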

Create the ceph-csi ConfigMap

  • clusterID is the ID (fsid) of your Ceph cluster
  • monitors are the addresses of your Ceph mon services
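If you are not sure of these values, they can be read directly from the Ceph cluster, for example:

# Cluster ID (fsid)
ceph fsid
# Monitor addresses
ceph mon dump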
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "9bb7af93-99f3-4254-b0b3-864f5d1413e4",
        "monitors": [
          "192.168.21.100:6789",
          "192.168.21.101:6789",
          "192.168.21.102:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
  namespace: ceph-st
kubectl apply -f csi-config-map.yaml

创立拜访 ceph 集群的证书,userID为下面创立的 nameuserKey 为生成的 key,遗记应用ceph auth list 查看

cat <<EOF > csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-st  # must match the secret namespaces referenced by the StorageClass below
stringData:
  userID: kubernetes
  userKey: AQCb7mpfCr4oGhAAalv/q8WSnUE/vyu59Ge3Hg==
EOF
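Apply the Secret:

kubectl apply -f csi-rbd-secret.yaml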

Install the ceph-csi RBD driver components

wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml

The downloaded plugin manifests also reference a ConfigMap named ceph-csi-encryption-kms-config (only used when encrypting volumes through a KMS such as Vault), so it has to exist even if you do not use encryption:

[root@k8s-master rbd]# cat ceph-csi-encryption-kms-config.yaml
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {
      "vault-test": {
        "encryptionKMSType": "vault",
        "vaultAddress": "http://vault.default.svc.cluster.local:8200",
        "vaultAuthPath": "/v1/auth/kubernetes/login",
        "vaultRole": "csi-kubernetes",
        "vaultPassphraseRoot": "/v1/secret",
        "vaultPassphrasePath": "ceph-csi/",
        "vaultCAVerify": "false"
      }
    }
metadata:
  name: ceph-csi-encryption-kms-config
  namespace: ceph-st
kubectl apply -f ./

创立StorageClass

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd-sc  # StorageClass is cluster-scoped, no namespace needed
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: 9bb7af93-99f3-4254-b0b3-864f5d1413e4
   pool: kubernetes
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: ceph-st
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: ceph-st
   imageFeatures: "layering"
reclaimPolicy: Delete
mountOptions:
   - discard
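Apply the StorageClass and confirm it is registered (the file name storageclass.yaml here is just an example):

kubectl apply -f storageclass.yaml
kubectl get storageclass csi-rbd-sc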

创立 PVC 测试性能

Check the status of the driver services
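For example, the provisioner and node-plugin pods created by the downloaded manifests should all be Running:

# csi-rbdplugin-provisioner (Deployment) and csi-rbdplugin (DaemonSet) pods
kubectl get pods -o wide | grep csi-rbdplugin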

创立文件设施

[root@k8s-master rbd]# cat pvc.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem # present the volume as a filesystem (the default)
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
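Apply the manifest and, once the pod is running, check that the RBD-backed volume is mounted at the path from the pod spec:

kubectl apply -f pvc.yaml
# The ~1Gi RBD-backed filesystem should show up here
kubectl exec csi-rbd-demo-pod -- df -h /var/lib/www/html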

Check that each component is working as expected

  • Kubernetes side
[root@k8s-master rbd]# kubectl get pvc | grep rbd-pvc
rbd-pvc                                       Bound    pvc-6c63b1c3-0b81-4a51-8c1b-e3d0acae1b6c   1Gi        RWO            csi-rbd-sc           22d
[root@k8s-master rbd]# kubectl get pv | grep rbd-pvc
pvc-6c63b1c3-0b81-4a51-8c1b-e3d0acae1b6c   1Gi        RWO            Delete           Bound      default/rbd-pvc
  • Ceph side
[root@node-1 ~]# rbd -p kubernetes ls
csi-vol-4e6663e7-fd80-11ea-9f64-c25f47044f33
csi-vol-a2195114-fd7e-11ea-9f64-c25f47044f33

创立块设施 pvc 这样挂载的是一个块文件

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
  
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
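Inside this pod the volume shows up as the raw device /dev/xvda (the devicePath from the spec). You can verify it and, if the image provides mkfs, put a filesystem on it manually, for example:

# The raw RBD volume is exposed as a block device
kubectl exec pod-with-raw-block-volume -- ls -l /dev/xvda
# Optionally format it (destroys any data on the volume; requires mkfs.ext4 in the image)
kubectl exec pod-with-raw-block-volume -- mkfs.ext4 /dev/xvda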