
Kubernetes Local PV: Basic Usage and Principles

Why Local PV Came About

1. With a hostPath volume, you still have to pick the node and pin the Pod to it yourself.
2. The directory has to be created in advance, and permissions need attention; for example, a directory created by root cannot be used by a non-root user.
3. You cannot specify a size, so the disk may be written full at any time, and there is no I/O isolation mechanism.
4. StatefulSets cannot use hostPath volumes, and existing Helm charts are not compatible with hostPath volumes.

Local PV Use Cases

Local PV suits high-priority systems that need to store data on several different nodes and have demanding I/O requirements.

Local PV vs. Regular PV

For a regular PV, Kubernetes first schedules the Pod onto a node and only then persists the volume directory on that machine. With Local PV, an operator must prepare the nodes' disks in advance, and Pod scheduling has to take the distribution of these Local PVs into account.

Creating a Local PV
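
A minimal manifest consistent with the description below; the PV name, capacity, and storageClassName are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node01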


The local field defined above marks this as a Local Persistent Volume, and the path field gives the path of the disk backing this PV. That disk lives on the k8s-node01 node, which means any Pod that uses this PV must run on that node.

Creating a PVC
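
A minimal claim to match; the name and size are illustrative, and storageClassName assumes the delayed-binding StorageClass shown later:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage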

Creating a Pod
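
A sketch of a Pod that consumes the claim; the image and names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: local-vol
      mountPath: /usr/share/nginx/html
  volumes:
  - name: local-vol
    persistentVolumeClaim:
      claimName: example-local-claim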


Once the Pod is created, you can see that it is scheduled onto k8s-node01, and by then the PVC and PV are bound.

Deleting a Local PV

Because we created the PV by hand, deleting a Local PV must follow this sequence (commands shown after the list):
1. Delete the Pod that uses the PV
2. Delete the PVC
3. Delete the PV
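
With the illustrative object names from the sketches above, that is:

kubectl delete pod example-pod
kubectl delete pvc example-local-claim
kubectl delete pv example-local-pv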

hostPath vs. Local PV
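
In short, combining the points above: a hostPath volume is declared inline in the Pod spec, forces you to pin the Pod to a node yourself, and offers no capacity management or PV/PVC lifecycle; a Local PV is a first-class PV object with node affinity, goes through the normal PVC binding flow, and lets the scheduler place the Pod on the node that actually holds the volume.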

The StorageClass Delayed-Binding Mechanism
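
A minimal StorageClass consistent with the description below; the name is illustrative, and the full provisioner value is kubernetes.io/no-provisioner:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer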

The provisioner field is set to no-provisioner because Local Persistent Volumes do not yet support Dynamic Provisioning, so we have to create the PVs manually in advance.

The volumeBindingMode field is set to WaitForFirstConsumer, which is a crucial feature of Local Persistent Volumes: delayed binding. Delayed binding means that when we submit the PVC file, the StorageClass postpones binding this PVC to a PV.

The reason is this: suppose the cluster has two PVs with identical properties on different nodes, Node1 and Node2, and our Pod must run on Node1, but the StorageClass has already bound the Pod's PVC to the PV on Node2. The Pod would then fail to schedule, so we have to delay the StorageClass's binding operation.

That is, the binding is deferred until the first Pod that declares the PVC reaches the scheduler. Only then does the scheduler weigh all the scheduling rules, including where each PV lives, and decide once and for all which PV this Pod's PVC should be bound to.

Data Safety Risks

Local volumes are still constrained by the availability of their node, so they are not suitable for every application. If the node becomes unhealthy, the local volume becomes inaccessible, and any Pod using it cannot run. Applications that use local volumes must be able to tolerate this reduced availability as well as potential data loss; whether loss actually happens depends on how the node's underlying disk storage and data protection are implemented.

Local PV Best Practices

<1> For better I/O isolation, use a whole disk as a single volume;
<2> For storage-capacity isolation, use a dedicated disk partition per volume;
<3> While old PVs with node affinity to a given node name still exist, avoid recreating a node with that same name; otherwise the system may believe the new node contains the old PVs.
<4> For volumes with a filesystem, use the volume's UUID (e.g., from the output of ls -l /dev/disk/by-uuid) both in the fstab entry and in the directory name of the volume's mount point (see the example after this list). This guarantees the wrong local volume is never mounted even if its device path changes (for example, /dev/sda1 becoming /dev/sdb1 after a new disk is added), and that if another node with the same name is created, volumes on it remain unique and cannot be mistaken for volumes on the other node.
<5> For raw block volumes without a filesystem, use the volume's unique ID as the name of its symlink. Depending on your environment, the volume IDs under /dev/disk/by-id/ may contain a unique hardware serial number; otherwise, generate a unique ID yourself. The uniqueness of the symlink name likewise ensures that if another node with the same name is created, volumes on it remain unique and are not mistaken for volumes on the other node.
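
As an example of <4>, a sketch with a made-up UUID and mount path:

# Look up the filesystem UUID of the partition
ls -l /dev/disk/by-uuid

# fstab entry keyed by UUID, with a mount directory named after it
UUID=3e6be9de-8139-4cf7-9106-a0b3cc000000  /mnt/disks/3e6be9de-8139-4cf7-9106-a0b3cc000000  ext4  defaults  0  2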

Local PV Limitations

When testing with Local PV, there is no way to cap how much of the Local PV's capacity a Pod consumes: the Pod can always use the full capacity of the mounted Local PV. Local PV therefore does not support dynamic management of PV space requests. In other words, you must plan Local PV capacity by hand: budget the local resources that may be used globally, carve them into volumes of various sizes, and mount those under the auto-discovery directory.
What if the storage space allocated to a container turns out to be too small?
The suggestion is to manage each node's local disk space with LVM (logical volume management), roughly as follows (see the sketch after this list):
<1> Create one large VG holding all the storage space the node can offer;
<2> Based on the container storage usage expected over the coming period, pre-create a batch of logical volumes (LVs) and mount them all under the auto-discovery directory;
<3> Do not use up all the storage in the VG; keep a small reserve for growing individual containers' volumes later;
<4> Use lvextend to grow the volume a particular container uses.
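
A minimal sketch of this workflow, assuming a spare disk /dev/sdb and /mnt/disks as the provisioner's auto-discovery directory; all names are illustrative:

# One big VG from the node's spare disk
pvcreate /dev/sdb
vgcreate vg-localpv /dev/sdb

# Pre-create an LV, format it, and mount it under the discovery directory
lvcreate -L 50G -n lv-pv1 vg-localpv
mkfs.ext4 /dev/vg-localpv/lv-pv1
mkdir -p /mnt/disks/lv-pv1
mount /dev/vg-localpv/lv-pv1 /mnt/disks/lv-pv1

# Later, grow a volume that ran out of space (ext4 resizes online)
lvextend -L +10G /dev/vg-localpv/lv-pv1
resize2fs /dev/vg-localpv/lv-pv1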

Implementation Basics

// Run starts all of this controller's control loops
func (ctrl *PersistentVolumeController) Run(stopCh <-chan struct{}) {
    
    ......
    go wait.Until(ctrl.resync, ctrl.resyncPeriod, stopCh)
    go wait.Until(ctrl.volumeWorker, time.Second, stopCh)
    go wait.Until(ctrl.claimWorker, time.Second, stopCh)

    metrics.Register(ctrl.volumes.store, ctrl.claims, &ctrl.volumePluginMgr)

    <-stopCh
}

The Run function above is the entry point. It mainly starts three goroutines around three important methods: resync, volumeWorker, and claimWorker. resync's main job is to put the synced PVs and PVCs onto volumeQueue and claimQueue, for volumeWorker and claimWorker to consume.

volumeWorker

volumeWorker loops forever consuming items from volumeQueue; its core is the updateVolume function, shown below:

// updateVolume runs in worker thread and handles "volume added",
// "volume updated" and "periodic sync" events.
func (ctrl *PersistentVolumeController) updateVolume(volume *v1.PersistentVolume) {
    // Store the new volume version in the cache and do not process it if this
    // is an old version.
    // Update the cached volume.
    new, err := ctrl.storeVolumeUpdate(volume)
    if err != nil {
        klog.Errorf("%v", err)
    }
    if !new {
        return
    }
    // Bind the PVC and PV according to the current PV object's spec.
    err = ctrl.syncVolume(volume)
    if err != nil {
        if errors.IsConflict(err) {
            // Version conflict error happens quite often and the controller
            // recovers from it easily.
            klog.V(3).Infof("could not sync volume %q: %+v", volume.Name, err)
        } else {
            klog.Errorf("could not sync volume %q: %+v", volume.Name, err)
        }
    }
}

updateVolume mainly calls the syncVolume function, which looks like this:

// syncVolume is the main controller method to decide what to do with a volume.
// It's invoked by appropriate cache.Controller callbacks when a volume is
// created, updated or periodically synced. We do not differentiate between
// these events.
func (ctrl *PersistentVolumeController) syncVolume(volume *v1.PersistentVolume) error {
    klog.V(4).Infof("synchronizing PersistentVolume[%s]: %s", volume.Name, getVolumeStatusForLogging(volume))

    // Set correct "migrated-to" annotations on PV and update in API server if
    // necessary
    newVolume, err := ctrl.updateVolumeMigrationAnnotations(volume)
    if err != nil {
        // Nothing was saved; we will fall back into the same
        // condition in the next call to this method
        return err
    }
    volume = newVolume

    // [Unit test set 4]
    // If claimRef is nil, the PV has never been used; call updateVolumePhase to mark the volume Available.
    if volume.Spec.ClaimRef == nil {
        // Volume is unused
        klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is unused", volume.Name)
        if _, err := ctrl.updateVolumePhase(volume, v1.VolumeAvailable, ""); err != nil {
            // Nothing was saved; we will fall back into the same
            // condition in the next call to this method
            return err
        }
        return nil
    } else /* pv.Spec.ClaimRef != nil */ {
        // Volume is bound to a claim.
        // Binding is still in progress; update the volume status to Available.
        if volume.Spec.ClaimRef.UID == "" {
            // The PV is reserved for a PVC; that PVC has not yet been
            // bound to this PV; the PVC sync will handle it.
            klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is pre-bound to claim %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
            if _, err := ctrl.updateVolumePhase(volume, v1.VolumeAvailable, ""); err != nil {
                // Nothing was saved; we will fall back into the same
                // condition in the next call to this method
                return err
            }
            return nil
        }
        klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound to claim %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
        // Get the PVC by _name_
        var claim *v1.PersistentVolumeClaim
        claimName := claimrefToClaimKey(volume.Spec.ClaimRef)
        // Fetch the PVC.
        obj, found, err := ctrl.claims.GetByKey(claimName)
        if err != nil {
            return err
        }
        // If it is not found, re-check the PVC before acting on the volume.
        if !found {
            // If the PV was created by an external PV provisioner or
            // bound by external PV binder (e.g. kube-scheduler), it's
            // possible under heavy load that the corresponding PVC is not synced to
            // controller local cache yet. So we need to double-check PVC in
            //   1) informer cache
            //   2) apiserver if not found in informer cache
            // to make sure we will not reclaim a PV wrongly.
            // Note that only non-released and non-failed volumes will be
            // updated to Released state when PVC does not exist.
            if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
                obj, err = ctrl.claimLister.PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(volume.Spec.ClaimRef.Name)
                if err != nil && !apierrors.IsNotFound(err) {
                    return err
                }
                found = !apierrors.IsNotFound(err)
                if !found {
                    // Fetch the PVC again, this time from the API server.
                    obj, err = ctrl.kubeClient.CoreV1().PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(context.TODO(), volume.Spec.ClaimRef.Name, metav1.GetOptions{})
                    if err != nil && !apierrors.IsNotFound(err) {
                        return err
                    }
                    found = !apierrors.IsNotFound(err)
                }
            }
            }
        }
        // The PVC still was not found.
        if !found {
            klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s not found", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
            // Fall through with claim = nil
        } else {
            var ok bool
            claim, ok = obj.(*v1.PersistentVolumeClaim)
            if !ok {
                return fmt.Errorf("cannot convert object from volume cache to volume %q!?: %#v", claim.Spec.VolumeName, obj)
            }
            klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s found: %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef), getClaimStatusForLogging(claim))
        }
        // If this condition holds, the PVC this volume pointed to was deleted and
        // another PVC with the same name was created; fetch the latest PVC and compare them.
        if claim != nil && claim.UID != volume.Spec.ClaimRef.UID {
            // The claim that the PV was pointing to was deleted, and another
            // with the same name created.
            // in some cases, the cached claim is not the newest, and the volume.Spec.ClaimRef.UID is newer than cached.
            // so we should double check by calling apiserver and get the newest claim, then compare them.
            klog.V(4).Infof("Maybe cached claim: %s is not the newest one, we should fetch it from apiserver", claimrefToClaimKey(volume.Spec.ClaimRef))

            claim, err = ctrl.kubeClient.CoreV1().PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(context.TODO(), volume.Spec.ClaimRef.Name, metav1.GetOptions{})
            if err != nil && !apierrors.IsNotFound(err) {
                return err
            } else if claim != nil {
                // Re-compare the fetched PVC with the volume's ClaimRef.
                // Treat the volume as bound to a missing claim.
                if claim.UID != volume.Spec.ClaimRef.UID {
                    klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s has a newer UID than pv.ClaimRef, the old one must have been deleted", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
                    claim = nil
                } else {
                    klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s has a same UID with pv.ClaimRef", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
                }
            }
        }
        // The PVC may have been deleted.
        if claim == nil {
            // If we get into this block, the claim must have been deleted;
            // NOTE: reclaimVolume may either release the PV back into the pool or
            // recycle it or do nothing (retain)

            // Do not overwrite previous Failed state - let the user see that
            // something went wrong, while we still re-try to reclaim the
            // volume.
            if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
                // Also, log this only once:
                klog.V(2).Infof("volume %q is released and reclaim policy %q will be executed", volume.Name, volume.Spec.PersistentVolumeReclaimPolicy)
                if volume, err = ctrl.updateVolumePhase(volume, v1.VolumeReleased, ""); err != nil {
                    // Nothing was saved; we will fall back into the same condition
                    // in the next call to this method
                    return err
                }
            }
            if err = ctrl.reclaimVolume(volume); err != nil {
                // Release failed, we will fall back into the same condition
                // in the next call to this method
                return err
            }
            if volume.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimRetain {
                // volume is being retained, it references a claim that does not exist now.
                klog.V(4).Infof("PersistentVolume[%s] references a claim %q (%s) that is not found", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef), volume.Spec.ClaimRef.UID)
            }
            return nil
        } else if claim.Spec.VolumeName == "" {
            if pvutil.CheckVolumeModeMismatches(&claim.Spec, &volume.Spec) {
                // Binding for the volume won't be called in syncUnboundClaim,
                // because findBestMatchForClaim won't return the volume due to volumeMode mismatch.
                volumeMsg := fmt.Sprintf("Cannot bind PersistentVolume to requested PersistentVolumeClaim %q due to incompatible volumeMode.", claim.Name)
                ctrl.eventRecorder.Event(volume, v1.EventTypeWarning, events.VolumeMismatch, volumeMsg)
                claimMsg := fmt.Sprintf("Cannot bind PersistentVolume %q to requested PersistentVolumeClaim due to incompatible volumeMode.", volume.Name)
                ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.VolumeMismatch, claimMsg)
                // Skipping syncClaim
                return nil
            }

            if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnBoundByController) {
                // The binding is not completed; let PVC sync handle it
                klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume not bound yet, waiting for syncClaim to fix it", volume.Name)
            } else {
                // Dangling PV; try to re-establish the link in the PVC sync
                klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it", volume.Name)
            }
            // In both cases, the volume is Bound and the claim is Pending.
            // Next syncClaim will fix it. To speed it up, we enqueue the claim
            // into the controller, which results in syncClaim to be called
            // shortly (and in the right worker goroutine).
            // This speeds up binding of provisioned volumes - provisioner saves
            // only the new PV and it expects that next syncClaim will bind the
            // claim to it.
            ctrl.claimQueue.Add(claimToClaimKey(claim))
            return nil
        } else if claim.Spec.VolumeName == volume.Name { // The volume and PVC are bound to each other; update the volume status.
            // Volume is bound to a claim properly, update status if necessary
            klog.V(4).Infof("synchronizing PersistentVolume[%s]: all is bound", volume.Name)
            if _, err = ctrl.updateVolumePhase(volume, v1.VolumeBound, ""); err != nil {
                // Nothing was saved; we will fall back into the same
                // condition in the next call to this method
                return err
            }
            return nil
        } else { // The PV is bound to this PVC, but the PVC is bound to another PV; reset it.
            // Volume is bound to a claim, but the claim is bound elsewhere
            if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnDynamicallyProvisioned) && volume.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimDelete {
                // This volume was dynamically provisioned for this claim. The
                // claim got bound elsewhere, and thus this volume is not
                // needed. Delete it.
                // Mark the volume as Released for external deleters and to let
                // the user know. Don't overwrite existing Failed status!
                if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
                    // Also, log this only once:
                    klog.V(2).Infof("dynamically volume %q is released and it will be deleted", volume.Name)
                    if volume, err = ctrl.updateVolumePhase(volume, v1.VolumeReleased, ""); err != nil {
                        // Nothing was saved; we will fall back into the same condition
                        // in the next call to this method
                        return err
                    }
                }
                if err = ctrl.reclaimVolume(volume); err != nil {
                    // Deletion failed, we will fall back into the same condition
                    // in the next call to this method
                    return err
                }
                return nil
            } else {
                // Volume is bound to a claim, but the claim is bound elsewhere
                // and it's not dynamically provisioned.
                if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnBoundByController) {
                    // This is part of the normal operation of the controller; the
                    // controller tried to use this volume for a claim but the claim
                    // was fulfilled by another volume. We did this; fix it.
                    klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound by controller to a claim that is bound to another volume, unbinding", volume.Name)
                    if err = ctrl.unbindVolume(volume); err != nil {
                        return err
                    }
                    return nil
                } else {
                    // The PV must have been created with this ptr; leave it alone.
                    klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound by user to a claim that is bound to another volume, waiting for the claim to get unbound", volume.Name)
                    // This just updates the volume phase and clears
                    // volume.Spec.ClaimRef.UID. It leaves the volume pre-bound
                    // to the claim.
                    if err = ctrl.unbindVolume(volume); err != nil {
                        return err
                    }
                    return nil
                }
            }
        }
    }
}

The code above is long; its main logic is as follows:
First it checks whether the PV's claimRef is empty; if so, it updates the PV to the Available state. If claimRef is non-empty but its UID is empty, the PV is reserved for a PVC that has not yet been bound to it, so the PV is likewise set to Available. It then fetches the PVC the PV refers to; to guard against the local cache not having seen the PVC yet, it re-fetches it through the API server.

If the matching PVC is found, the UIDs are compared. If they differ, this is not the PVC the PV was bound to; the original PVC was probably deleted, so the PV is updated to Released. At that point the reclaimVolume method is called to act according to the persistentVolumeReclaimPolicy.

After validating the claim, it goes on to check whether claim.Spec.VolumeName is empty; if it is, binding is still in progress.

If claim.Spec.VolumeName == volume.Name, the volume and the PVC are bound; the PV status is updated to Bound.

The remaining branch covers a PV that points at a PVC which is itself bound to some other PV. The code checks whether the PV was dynamically provisioned; if so, it releases the PV. If the PV was created by hand, unbindVolume is called to unbind it.

That is the main working logic of volumeWorker.

Now for claimWorker's logic:
claimWorker likewise keeps syncing PVCs, then calls the syncClaim method via updateClaim.

// syncClaim is the main controller method to decide what to do with a claim.
// It's invoked by appropriate cache.Controller callbacks when a claim is
// created, updated or periodically synced. We do not differentiate between
// these events.
// For easier readability, it was split into syncUnboundClaim and syncBoundClaim
// methods.
func (ctrl *PersistentVolumeController) syncClaim(claim *v1.PersistentVolumeClaim) error {
    klog.V(4).Infof("synchronizing PersistentVolumeClaim[%s]: %s", claimToClaimKey(claim), getClaimStatusForLogging(claim))

    // Set correct "migrated-to" annotations on PVC and update in API server if
    // necessary
    newClaim, err := ctrl.updateClaimMigrationAnnotations(claim)
    if err != nil {
        // Nothing was saved; we will fall back into the same
        // condition in the next call to this method
        return err
    }
    claim = newClaim

    if !metav1.HasAnnotation(claim.ObjectMeta, pvutil.AnnBindCompleted) {
        return ctrl.syncUnboundClaim(claim)
    } else {
        return ctrl.syncBoundClaim(claim)
    }
}

The gist of syncClaim is to perform binding and unbinding through the two methods syncUnboundClaim and syncBoundClaim.
syncUnboundClaim splits into two halves; the first handles claim.Spec.VolumeName == "", as follows:

// syncUnboundClaim is the main controller method to decide what to do with an
// unbound claim.
func (ctrl *PersistentVolumeController) syncUnboundClaim(claim *v1.PersistentVolumeClaim) error {
    // This is a new PVC that has not completed binding
    // OBSERVATION: pvc is "Pending"
    // The PVC is in Pending state; binding has not completed.
    if claim.Spec.VolumeName == "" {
        // User did not care which PV they get.
        // Check whether this claim uses delayed binding; this is where the
        // Local PV delayed-binding logic comes into play.
        delayBinding, err := pvutil.IsDelayBindingMode(claim, ctrl.classLister)
        if err != nil {
            return err
        }

        // [Unit test set 1]
        // Find a suitable PV for this claim. Delayed binding is involved here:
        // following this method down, candidate PVs are first filtered by
        // accessMode and then matched via pvutil.FindMatchingVolume, which is
        // used by both the PV controller and the scheduler. The scheduler uses
        // it precisely because of Local PV delayed binding: at scheduling time
        // it weighs all the factors and picks the most suitable node to run the pod.
        volume, err := ctrl.volumes.findBestMatchForClaim(claim, delayBinding)
        if err != nil {
            klog.V(2).Infof("synchronizing unbound PersistentVolumeClaim[%s]: Error finding PV for claim: %v", claimToClaimKey(claim), err)
            return fmt.Errorf("error finding PV for claim %q: %w", claimToClaimKey(claim), err)
        }
        // No volume is available.
        if volume == nil {
            klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: no volume found", claimToClaimKey(claim))
            // No PV could be found
            // OBSERVATION: pvc is "Pending", will retry
            switch {
            case delayBinding && !pvutil.IsDelayBindingProvisioning(claim):
                if err = ctrl.emitEventForUnboundDelayBindingClaim(claim); err != nil {
                    return err
                }
            // Create a PV through the corresponding plugin.
            case storagehelpers.GetPersistentVolumeClaimClass(claim) != "":
                if err = ctrl.provisionClaim(claim); err != nil {
                    return err
                }
                return nil
            default:
                ctrl.eventRecorder.Event(claim, v1.EventTypeNormal, events.FailedBinding, "no persistent volumes available for this claim and no storage class is set")
            }

            // Mark the claim as Pending and try to find a match in the next
            // periodic syncClaim
            // A matching PV will be looked for again on the next sync loop.
            if _, err = ctrl.updateClaimStatus(claim, v1.ClaimPending, nil); err != nil {
                return err
            }
            return nil
        } else /* pv != nil */ {
            // Found a PV for this claim
            // OBSERVATION: pvc is "Pending", pv is "Available"
            claimKey := claimToClaimKey(claim)
            klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume %q found: %s", claimKey, volume.Name, getVolumeStatusForLogging(volume))
            if err = ctrl.bind(volume, claim); err != nil {
                // On any error saving the volume or the claim, subsequent
                // syncClaim will finish the binding.
                // record count error for provision if exists
                // timestamp entry will remain in cache until a success binding has happened
                metrics.RecordMetric(claimKey, &ctrl.operationTimestamps, err)
                return err
            }
            // OBSERVATION: claim is "Bound", pv is "Bound"
            // if exists a timestamp entry in cache, record end to end provision latency and clean up cache
            // End of the provision + binding operation lifecycle, cache will be cleaned by "RecordMetric"
            // [Unit test 12-1, 12-2, 12-4]
            metrics.RecordMetric(claimKey, &ctrl.operationTimestamps, nil)
            return nil
        }
    }

The main logic here: see whether a suitable PV can be found and bind it if so. If not, check whether the PV can be provisioned dynamically; if it can, create it, then set the PVC to Pending, and the binding is checked again on the next sync loop.
The second half of syncUnboundClaim:

// syncUnboundClaim is the main controller method to decide what to do with an
// unbound claim.
func (ctrl *PersistentVolumeController) syncUnboundClaim(claim *v1.PersistentVolumeClaim) error {
    // ... (the claim.Spec.VolumeName == "" half is shown above) ...
    else /* pvc.Spec.VolumeName != nil */ {
        // [Unit test set 2]
        // User asked for a specific PV.
        klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume %q requested", claimToClaimKey(claim), claim.Spec.VolumeName)
        // VolumeName is set; look up the corresponding PV.
        obj, found, err := ctrl.volumes.store.GetByKey(claim.Spec.VolumeName)
        if err != nil {
            return err
        }
        // The requested PV no longer exists; set the claim back to Pending.
        if !found {
            // User asked for a PV that does not exist.
            // OBSERVATION: pvc is "Pending"
            // Retry later.
            klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume %q requested and not found, will try again next time", claimToClaimKey(claim), claim.Spec.VolumeName)
            if _, err = ctrl.updateClaimStatus(claim, v1.ClaimPending, nil); err != nil {
                return err
            }
            return nil
        } else {
            volume, ok := obj.(*v1.PersistentVolume)
            if !ok {
                return fmt.Errorf("cannot convert object from volume cache to volume %q!?: %+v", claim.Spec.VolumeName, obj)
            }
            klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume %q requested and found: %s", claimToClaimKey(claim), claim.Spec.VolumeName, getVolumeStatusForLogging(volume))
            if volume.Spec.ClaimRef == nil {
                // The PV's ClaimRef is empty; call the bind method to bind the two.
                // User asked for a PV that is not claimed
                // OBSERVATION: pvc is "Pending", pv is "Available"
                klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume is unbound, binding", claimToClaimKey(claim))
                if err = checkVolumeSatisfyClaim(volume, claim); err != nil {
                    klog.V(4).Infof("Can't bind the claim to volume %q: %v", volume.Name, err)
                    // send an event
                    msg := fmt.Sprintf("Cannot bind to requested volume %q: %s", volume.Name, err)
                    ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.VolumeMismatch, msg)
                    // volume does not satisfy the requirements of the claim
                    if _, err = ctrl.updateClaimStatus(claim, v1.ClaimPending, nil); err != nil {
                        return err
                    }
                } else if err = ctrl.bind(volume, claim); err != nil {
                    // On any error saving the volume or the claim, subsequent
                    // syncClaim will finish the binding.
                    return err
                }
                // OBSERVATION: pvc is "Bound", pv is "Bound"
                return nil
            } else if pvutil.IsVolumeBoundToClaim(volume, claim) {
                // The volume's ClaimRef already points at this claim; finish the binding.
                // User asked for a PV that is claimed by this PVC
                // OBSERVATION: pvc is "Pending", pv is "Bound"
                klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume already bound, finishing the binding", claimToClaimKey(claim))

                // Finish the volume binding by adding claim UID.
                if err = ctrl.bind(volume, claim); err != nil {
                    return err
                }
                // OBSERVATION: pvc is "Bound", pv is "Bound"
                return nil
            } else {
                // The PV requested by this PVC is bound to some other PVC; wait for the next sync loop.
                // User asked for a PV that is claimed by someone else
                // OBSERVATION: pvc is "Pending", pv is "Bound"
                if !metav1.HasAnnotation(claim.ObjectMeta, pvutil.AnnBoundByController) {
                    klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume already bound to different claim by user, will retry later", claimToClaimKey(claim))
                    claimMsg := fmt.Sprintf("volume %q already bound to a different claim.", volume.Name)
                    ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.FailedBinding, claimMsg)
                    // User asked for a specific PV, retry later
                    if _, err = ctrl.updateClaimStatus(claim, v1.ClaimPending, nil); err != nil {
                        return err
                    }
                    return nil
                } else {
                    // This should never happen because someone had to remove
                    // AnnBindCompleted annotation on the claim.
                    klog.V(4).Infof("synchronizing unbound PersistentVolumeClaim[%s]: volume already bound to different claim %q by controller, THIS SHOULD NEVER HAPPEN", claimToClaimKey(claim), claimrefToClaimKey(volume.Spec.ClaimRef))
                    claimMsg := fmt.Sprintf("volume %q already bound to a different claim.", volume.Name)
                    ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.FailedBinding, claimMsg)

                    return fmt.Errorf("invalid binding of claim %q to volume %q: volume already claimed by %q", claimToClaimKey(claim), claim.Spec.VolumeName, claimrefToClaimKey(volume.Spec.ClaimRef))
                }
            }
        }
    }
}

The second half of syncUnboundClaim handles the case where VolumeName is non-empty: it fetches the corresponding PV and binds it. If the PV's claimRef is non-empty, it first checks whether the PV is already bound to a different PVC; if not, it performs the binding.

Besides syncUnboundClaim, syncClaim also has a syncBoundClaim method.
syncBoundClaim mainly deals with the various abnormal situations once a PVC and a PV are already bound; its code is omitted here.
