
On AI: MindSpore error "RuntimeError: Unable to fetch data from GeneratorDataset"

1 Error Description

1.1 System Environment

Hardware Environment (Ascend/GPU/CPU): CPU
Software Environment:
- MindSpore version (source or binary): 1.6.0
- Python version (e.g., Python 3.7.5): 3.7.6
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 4.15.0-74-generic
- GCC/Compiler version (if compiled from source):

1.2 Basic Information

1.2.1 Script

This case trains with a custom iterable dataset. During training, the first epoch iterates over the data normally, but the second epoch raises an error. The custom dataset code is as follows:

import numpy as np
import mindspore.dataset as ds
from tqdm import tqdm

class IterDatasetGenerator:
    def __init__(self, datax, datay, classes_per_it, num_samples, iterations):
        self.__iterations = iterations
        self.__data = datax
        self.__labels = datay
        self.__iter = 0
        self.classes_per_it = classes_per_it
        self.sample_per_class = num_samples
        self.classes, self.counts = np.unique(self.__labels, return_counts=True)
        self.idxs = range(len(self.__labels))
        # Per-class index table, padded with NaN where a class has fewer samples.
        self.indexes = np.empty((len(self.classes), max(self.counts)), dtype=int) * np.nan
        self.numel_per_class = np.zeros_like(self.classes)
        for idx, label in tqdm(enumerate(self.__labels)):
            label_idx = np.argwhere(self.classes == label).item()
            self.indexes[label_idx, np.where(np.isnan(self.indexes[label_idx]))[0][0]] = idx
            self.numel_per_class[label_idx] = int(self.numel_per_class[label_idx]) + 1

    def __next__(self):
        spc = self.sample_per_class
        cpi = self.classes_per_it

        if self.__iter >= self.__iterations:
            raise StopIteration
        else:
            batch_size = spc * cpi
            batch = np.random.randint(low=batch_size, high=10 * batch_size, size=(batch_size), dtype=np.int64)
            c_idxs = np.random.permutation(len(self.classes))[:cpi]
            for i, c in enumerate(self.classes[c_idxs]):
                index = i * spc
                ci = [c_i for c_i in range(len(self.classes)) if self.classes[c_i] == c][0]
                label_idx = list(range(len(self.classes)))[ci]
                sample_idxs = np.random.permutation(int(self.numel_per_class[label_idx]))[:spc]
                ind = 0
                for i in sample_idxs:
                    batch[index + ind] = self.indexes[label_idx][i]
                    ind = ind + 1
            batch = batch[np.random.permutation(len(batch))]
            data_x = []
            data_y = []
            for b in batch:
                data_x.append(self.__data[b])
                data_y.append(self.__labels[b])
            self.__iter += 1
            item = (data_x, data_y)
            return item

    def __iter__(self):
        return self

    def __len__(self):
        return self.__iterations

np.random.seed(58)
data1 = np.random.sample((500, 2))
data2 = np.random.sample((500, 1))
dataset_generator = IterDatasetGenerator(data1, data2, 5, 10, 10)
dataset = ds.GeneratorDataset(dataset_generator, ["data", "label"], shuffle=False)
epochs = 3
for epoch in range(epochs):
    for data in dataset.create_dict_iterator():
        print("success")

1.2.2 Error Message

The error reported is:

RuntimeError: Exception thrown from PyFunc. Unable to fetch data from GeneratorDataset, try iterate the source function of GeneratorDataset or check value of num_epochs when create iterator.

2 Cause Analysis

self.__iter is incremented on every data iteration. By the time the second epoch starts prefetching, self.__iter has already accumulated to the configured iterations value, so self.__iter >= self.__iterations holds and iteration stops.

3 Solution

Add a reset inside def __iter__(self): by setting self.__iter = 0.
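Applying that one-line fix, the __iter__ method of the generator above becomes the following (a minimal sketch; everything else in the class stays unchanged):

def __iter__(self):
    # Reset the per-epoch counter so every new epoch can fetch data again.
    self.__iter = 0
    return self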

With this change the script runs successfully and prints "success" on every iteration of all three epochs.

4 Similar Issue

In MindSpore 1.3.0, a user-defined training loop that iterated data with a GeneratorDataset hit a similar fetch error.

In that case, the dataset's __len__ returned 36, but __next__ actually produced only 35 items, which caused the error. For quick verification, the return value of __len__ can be changed to a number smaller than 35.

5 Summary

5.1 Steps to Locate the Error

1. Find the user code line that triggers the error: for data in dataset.create_dict_iterator():.
2. The error message says data cannot be fetched from the GeneratorDataset, so check whether the problem already appears in the custom dataset itself. Printing intermediate values during the run shows that after the first epoch the number of items actually read equals __len__, so the first epoch is fine. However, because the counter is never reset, self.__iter >= self.__iterations when the second epoch prefetches, iteration stops, and the second epoch fails to fetch any data.

5.2 Analysis of This Class of Problems

The root cause of this class of problems is that the data indexes to be fetched do not match the amount of data actually available. When building an iterable dataset class, remember to reset its state after each run; when doing quick verification, also make sure every index stays smaller than the total amount of data. A minimal sketch illustrating both points is given after the references below.

6 Reference Documents

MindSpore documentation -> Data Pipeline -> Data Loading -> Loading a Custom Dataset -> Constructing an Iterable Dataset Class
https://www.mindspore.cn/doc/…
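To make the points in sections 4 and 5.2 concrete, here is a minimal, hypothetical iterable generator (the name and data are illustrative, not taken from the original reports) that keeps __len__ consistent with what __next__ can actually yield and resets its counter at the start of every epoch:

class ConsistentGenerator:
    def __init__(self, data):
        self.data = data
        self.index = 0

    def __iter__(self):
        # Reset at the start of every epoch so later epochs can still fetch data.
        self.index = 0
        return self

    def __next__(self):
        if self.index >= len(self.data):
            raise StopIteration
        item = (self.data[self.index],)
        self.index += 1
        return item

    def __len__(self):
        # Must not exceed the number of items __next__ can yield
        # (e.g. returning 36 when only 35 items exist triggers the fetch error).
        return len(self.data)

A generator structured this way can be passed to ds.GeneratorDataset(gen, ["data"]) and iterated for multiple epochs without the fetch error.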
