The previous two posts covered building a Ceph cluster and using RBD, along with some of its features. Ceph is, after all, a storage system, so it needs to be integrated into applications. RBD can back many platforms: libvirt, OpenStack, CloudStack, Kubernetes. This post uses Ceph as the storage for Kubernetes, that is, ceph-rbd combined with a StorageClass to provide storage for pods. If you don't have a k8s cluster yet, you can build one with kubeadm.

Creating the resources and configuration on the Ceph cluster

Create the storage pool and set pg_num and pgp_num
ceph osd pool create kubernetes 32 32
Initialize the pool for RBD
rbd pool init kubernetes
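At this point a quick sanity check doesn't hurt (assuming the ceph CLI is available on a mon node); the pool should show up with pg_num 32:

ceph osd pool ls detail | grep kubernetes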
Create a client user for accessing the pool
ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
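The command above prints the generated keyring. If you need just the key again later (it goes into the Secret below), it can be retrieved with:

ceph auth get-key client.kubernetes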

Installing and configuring the ceph-rbd driver in Kubernetes

Install the Ceph client package (which provides the rbd command) on every k8s node
yum install ceph-common
Set up the ceph-csi ConfigMap; both values can be looked up on the cluster as shown below:
  • clusterID is your Ceph cluster ID
  • monitors are the addresses of your Ceph mon services
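If you don't have these values at hand, they can be read off any mon node (assuming a working ceph CLI):

ceph fsid       # prints the clusterID
ceph mon dump   # lists the monitor addresses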
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "9bb7af93-99f3-4254-b0b3-864f5d1413e4",
        "monitors": [
          "192.168.21.100:6789",
          "192.168.21.101:6789",
          "192.168.21.102:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
  namespace: ceph-st

kubectl apply -f csi-config-map.yaml
Create the credentials for accessing the Ceph cluster: userID is the client name created above, and userKey is the key that was generated; if you've lost it, check with ceph auth list
cat <<EOF > csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-st
stringData:
  userID: kubernetes
  userKey: AQCb7mpfCr4oGhAAalv/q8WSnUE/vyu59Ge3Hg==
EOF
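The heredoc only writes the file; it still has to be applied, and the namespace must match the secret namespaces referenced by the StorageClass below:

kubectl apply -f csi-rbd-secret.yaml
kubectl -n ceph-st get secret csi-rbd-secret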
Install the ceph-rbd (ceph-csi) driver components
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml

[root@k8s-master rbd]# cat ceph-csi-encryption-kms-config.yaml
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {
      "vault-test": {
        "encryptionKMSType": "vault",
        "vaultAddress": "http://vault.default.svc.cluster.local:8200",
        "vaultAuthPath": "/v1/auth/kubernetes/login",
        "vaultRole": "csi-kubernetes",
        "vaultPassphraseRoot": "/v1/secret",
        "vaultPassphrasePath": "ceph-csi/",
        "vaultCAVerify": "false"
      }
    }
metadata:
  name: ceph-csi-encryption-kms-config
  namespace: ceph-st

kubectl apply -f ./
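The ConfigMaps above carry namespace: ceph-st, so the namespace must exist before the apply; depending on the ceph-csi version you may also need to adjust the namespace fields inside the downloaded manifests so everything lands in one place:

kubectl create namespace ceph-st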
Create the StorageClass
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
  namespace: ceph-st
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 9bb7af93-99f3-4254-b0b3-864f5d1413e4
  pool: kubernetes
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-st
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-st
  imageFeatures: "layering"
reclaimPolicy: Delete
mountOptions:
  - discard
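imageFeatures is pinned to "layering" because the kernel RBD client generally supports only that feature set. A quick check that the StorageClass registered:

kubectl get sc csi-rbd-sc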

Creating a PVC and testing it

Check the service status
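Before creating PVCs, confirm the csi pods came up (pod names and counts vary by ceph-csi version); the csi-rbdplugin-provisioner deployment and the csi-rbdplugin DaemonSet pods should all be Running:

kubectl -n ceph-st get pods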

Create a filesystem-mode device
[root@k8s-master rbd]# cat pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem  # mount the volume as a filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
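A minimal smoke test, assuming the manifest above was saved as pvc.yaml: apply it, then check that the RBD-backed volume is mounted inside the pod:

kubectl apply -f pvc.yaml
kubectl exec -it csi-rbd-demo-pod -- df -h /var/lib/www/html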
Check that each component is healthy
  • On the k8s side
[root@k8s-master rbd]# kubectl get pvc | grep rbd-pvc
rbd-pvc   Bound   pvc-6c63b1c3-0b81-4a51-8c1b-e3d0acae1b6c   1Gi   RWO   csi-rbd-sc   22d
[root@k8s-master rbd]# kubectl get pv | grep rbd-pvc
pvc-6c63b1c3-0b81-4a51-8c1b-e3d0acae1b6c   1Gi   RWO   Delete   Bound   default/rbd-pvc
  • On the Ceph side
[root@node-1 ~]# rbd -p kubernetes ls
csi-vol-4e6663e7-fd80-11ea-9f64-c25f47044f33
csi-vol-a2195114-fd7e-11ea-9f64-c25f47044f33
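Each PV corresponds to one of these csi-vol images. To inspect one of the images listed above (size, features, and so on):

rbd info kubernetes/csi-vol-a2195114-fd7e-11ea-9f64-c25f47044f33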
Create a block-mode PVC; what gets attached to the pod is a raw block device rather than a mounted filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
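To verify (raw-block-pvc.yaml is just an assumed filename for the manifest above): after applying, the pod should see a block special file at /dev/xvda instead of a mounted filesystem:

kubectl apply -f raw-block-pvc.yaml
kubectl exec pod-with-raw-block-volume -- ls -l /dev/xvda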