MindSpore 易点通: Model Testing and Validation


1 Model Testing

After training, the model needs to be evaluated on the test set. Depending on how the evaluation metric is obtained, there are two cases.

1. The evaluation metric is already implemented in MindSpore

MindSpore provides a number of built-in Metrics: Accuracy, Precision, Recall, F1, TopKCategoricalAccuracy, Top1CategoricalAccuracy, Top5CategoricalAccuracy, MSE, MAE, and Loss. To use them during testing, define a dict containing the metrics you want, pass it in when constructing the Model, and a later call to model.eval() returns a dict whose keys and values are the metric names and their results.

...
def test_net(network, model, test_data_path, test_batch):
    """define the evaluation method"""
    print("============== Start Testing ==============")
    # Load the saved model for evaluation
    param_dict = load_checkpoint("./train_resnet_cifar10-1_390.ckpt")
    # Load the parameters into the network
    load_param_into_net(network, param_dict)
    # Load the testing dataset
    ds_test = create_dataset(test_data_path, do_train=False,
                             batch_size=test_batch)
    acc = model.eval(ds_test, dataset_sink_mode=False)
    print("============== test result:{} ==============".format(acc))

if __name__ == "__main__":
    ...
    net = resnet()
    loss = nn.loss.SoftmaxCrossEntropyWithLogits(sparse=True,
                                                 reduction='mean')
    opt = nn.SGD(net.trainable_params(), LR_ORI, MOMENTUM_ORI, WEIGHT_DECAY)
    metrics = {'accuracy': nn.Accuracy(),
               'loss': nn.Loss()}
    model = Model(net, loss, opt, metrics=metrics)
    test_net(net, model, TEST_PATH, TEST_BATCH_SIZE)
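
For reference, the built-in metric objects listed above can also be driven by hand outside of a Model. The minimal sketch below is not part of the original sample code; it simply shows that nn.Accuracy follows the same clear/update/eval cycle that the custom metric in the next case implements.

import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

metric = nn.Accuracy('classification')
metric.clear()
# y_pred holds per-class scores for 3 samples; y holds the true class indices
y_pred = Tensor(np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]], dtype=np.float32))
y = Tensor(np.array([1, 0, 0], dtype=np.int32))
metric.update(y_pred, y)
print(metric.eval())  # 2 of the 3 predictions are correct -> about 0.667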

2. The evaluation metric is not implemented in MindSpore

If MindSpore's built-in evaluation functions do not meet your requirements, you can refer to accuracy.py and implement a custom Metric by inheriting the Metric base class, then overriding its clear, update, and eval methods. Call the model.predict() interface to obtain the network output, then compute the result according to your own evaluation criterion. The following example implements a custom Metric that computes accuracy on the test set (it inherits EvaluationBase, which itself extends Metric):

class AccuracyV2(EvaluationBase):

    def __init__(self, eval_type='classification'):
        super(AccuracyV2, self).__init__(eval_type)
        self.clear()

    def clear(self):
        """Clears the internal evaluation result."""
        self._correct_num = 0
        self._total_num = 0

    def update(self, output_y, label_input):
        y_pred = self._convert_data(output_y)
        y = self._convert_data(label_input)
        indices = y_pred.argmax(axis=1)
        results = (np.equal(indices, y) * 1).reshape(-1)
        self._correct_num += results.sum()
        self._total_num += label_input.shape[0]

    def eval(self):
        if self._total_num == 0:
            raise RuntimeError('Accuracy can not be calculated')
        return self._correct_num / self._total_num
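
Before wiring the new metric into a test function, it can be sanity-checked against the built-in nn.Accuracy on a small synthetic batch. This check is an added illustration, not part of the original sample code, and assumes the AccuracyV2 class above is in scope:

import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

y_pred = Tensor(np.array([[0.2, 0.8], [0.9, 0.1], [0.4, 0.6], [0.7, 0.3]],
                         dtype=np.float32))
y = Tensor(np.array([1, 0, 0, 0], dtype=np.int32))

custom = AccuracyV2()
custom.update(y_pred, y)
builtin = nn.Accuracy('classification')
builtin.update(y_pred, y)
# Both should report 3 correct predictions out of 4, i.e. 0.75
assert abs(custom.eval() - builtin.eval()) < 1e-6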

def test_net(network, model, test_data_path, test_batch):
    """define the evaluation method"""
    print("============== Start Testing ==============")

    # Load the saved model for evaluation
    param_dict = load_checkpoint("./train_resnet_cifar10-1_390.ckpt")

    # Load the parameters into the network
    load_param_into_net(network, param_dict)

    # Load the testing dataset
    ds_test = create_dataset(test_data_path, do_train=False,
                             batch_size=test_batch)

    metric = AccuracyV2()
    metric.clear()
    for data, label in ds_test.create_tuple_iterator():
        output = model.predict(data)
        metric.update(output, label)
    results = metric.eval()
    print("============== New Metric:{} ==============".format(results))

if __name__ == "__main__":

    ...
    net = resnet()
    loss = nn.loss.SoftmaxCrossEntropyWithLogits(sparse=True,
                                                 reduction='mean')
    opt = nn.SGD(net.trainable_params(), LR_ORI, MOMENTUM_ORI, WEIGHT_DECAY)
    model_constructed = Model(net, loss, opt)
    test_net(net, model_constructed, TEST_PATH, TEST_BATCH_SIZE)

2 Validation During Training

While training is in progress, the model can be evaluated on a validation set. MindSpore currently supports two approaches.

1. Alternately call model.train() and model.eval() to interleave training and validation.

...
def train_and_val(model, dataset_train, dataset_val, steps_per_train,
                  epoch_max, evaluation_interval):
    config_ck = CheckpointConfig(save_checkpoint_steps=steps_per_train,
                                 keep_checkpoint_max=epoch_max)
    ckpoint_cb = ModelCheckpoint(prefix="train_resnet_cifar10",
                                 directory="./", config=config_ck)
    model.train(evaluation_interval, dataset_train,
                callbacks=[ckpoint_cb, LossMonitor()], dataset_sink_mode=True)
    acc = model.eval(dataset_val, dataset_sink_mode=False)
    print("============== Evaluation:{} ==============".format(acc))

if __name__ == "__main__":

    ...
    ds_train, steps_per_epoch_train = create_dataset(TRAIN_PATH,
        do_train=True, batch_size=TRAIN_BATCH_SIZE, repeat_num=1)
    ds_val, steps_per_epoch_val = create_dataset(VAL_PATH, do_train=False,
        batch_size=VAL_BATCH_SIZE, repeat_num=1)
    net = resnet()
    loss = nn.loss.SoftmaxCrossEntropyWithLogits(sparse=True,
                                                 reduction='mean')
    opt = nn.SGD(net.trainable_params(), LR_ORI, MOMENTUM_ORI, WEIGHT_DECAY)
    metrics = {'accuracy': nn.Accuracy(),
               'loss': nn.Loss()}
    model = Model(net, loss, opt, metrics=metrics)

    for i in range(int(EPOCH_MAX / EVAL_INTERVAL)):
        train_and_val(model, ds_train, ds_val, steps_per_epoch_train,
                      EPOCH_MAX, EVAL_INTERVAL)
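
The loop above only prints each evaluation result. If you also want to know which round produced the best model, the same alternating pattern can track validation accuracy as it goes. The following variant is an added sketch rather than part of the original sample code; it assumes model, ds_train, ds_val, EPOCH_MAX, and EVAL_INTERVAL are defined as above, and that 'accuracy' is among the configured metrics.

best_acc, best_round = 0.0, 0
for i in range(int(EPOCH_MAX / EVAL_INTERVAL)):
    # Train for EVAL_INTERVAL epochs, then evaluate once
    model.train(EVAL_INTERVAL, ds_train,
                callbacks=[LossMonitor()], dataset_sink_mode=True)
    acc = model.eval(ds_val, dataset_sink_mode=False)
    print("round {} evaluation: {}".format(i + 1, acc))
    if acc['accuracy'] > best_acc:
        best_acc, best_round = acc['accuracy'], i + 1
print("best accuracy {:.4f} in round {}".format(best_acc, best_round))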

2. Call MindSpore's model.train interface and pass a custom EvalCallBack instance through callbacks, so that training and validation happen in a single call.

class EvalCallBack(Callback):

    def __init__(self, model, eval_dataset, eval_epoch, result_evaluation):
        self.model = model
        self.eval_dataset = eval_dataset
        self.eval_epoch = eval_epoch
        self.result_evaluation = result_evaluation

    def epoch_end(self, run_context):
        cb_param = run_context.original_args()
        cur_epoch = cb_param.cur_epoch_num
        if cur_epoch % self.eval_epoch == 0:
            acc = self.model.eval(self.eval_dataset, dataset_sink_mode=False)
            self.result_evaluation["epoch"].append(cur_epoch)
            self.result_evaluation["acc"].append(acc["accuracy"])
            self.result_evaluation["loss"].append(acc["loss"])
            print(acc)

if __name__ == "__main__":

    ...
    ds_train, steps_per_epoch_train = create_dataset(TRAIN_PATH,
        do_train=True, batch_size=TRAIN_BATCH_SIZE, repeat_num=REPEAT_SIZE)
    ds_val, steps_per_epoch_val = create_dataset(VAL_PATH, do_train=False,
        batch_size=VAL_BATCH_SIZE, repeat_num=REPEAT_SIZE)
    net = resnet()
    loss = nn.loss.SoftmaxCrossEntropyWithLogits(sparse=True,
                                                 reduction='mean')
    opt = nn.SGD(net.trainable_params(), LR_ORI, MOMENTUM_ORI, WEIGHT_DECAY)
    metrics = {'accuracy': nn.Accuracy(),
               'loss': nn.Loss()}
    model = Model(net, loss, opt, metrics=metrics)

    # Checkpoint callback configured as in the previous example
    config_ck = CheckpointConfig(save_checkpoint_steps=steps_per_epoch_train,
                                 keep_checkpoint_max=EPOCH_MAX)
    ckpoint_cb = ModelCheckpoint(prefix="train_resnet_cifar10",
                                 directory="./", config=config_ck)

    result_eval = {"epoch": [], "acc": [], "loss": []}
    eval_cb = EvalCallBack(model, ds_val, EVAL_PER_EPOCH, result_eval)
    model.train(EPOCH_MAX, ds_train,
                callbacks=[ckpoint_cb, LossMonitor(), eval_cb],
                dataset_sink_mode=True, sink_size=steps_per_epoch_train)
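
Because EvalCallBack appends one record per evaluated epoch, result_eval can be inspected after training, for example to find the epoch whose checkpoint is worth keeping. This lookup is an added illustration, not part of the original sample code:

# Find the epoch with the highest validation accuracy recorded by EvalCallBack
best_idx = result_eval["acc"].index(max(result_eval["acc"]))
print("best epoch: {}, accuracy: {:.4f}, loss: {:.4f}".format(
    result_eval["epoch"][best_idx],
    result_eval["acc"][best_idx],
    result_eval["loss"][best_idx]))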

3 Sample Code Usage Notes

The sample code in this article trains a ResNet50 classification network on CIFAR-10. It reads the binary version of the CIFAR-10 dataset through the datasets.Cifar10Dataset interface, so download "CIFAR-10 binary version (suitable for C programs)" and set the data path in the code accordingly. Launch command: python xxx.py --data_path=xxx --epoch_num=xxx. Running the script prints the network's output.
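
For completeness, the create_dataset helper used throughout the snippets above is not shown in this article. The sketch below is a hedged reconstruction of what it might look like for CIFAR-10, assuming MindSpore 1.x dataset APIs; the transform list and normalization constants are illustrative rather than the article's exact implementation.

import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C
import mindspore.dataset.vision.c_transforms as CV
from mindspore import dtype as mstype


def create_dataset(data_path, do_train, batch_size, repeat_num=1):
    """Build a CIFAR-10 dataset and return it with its steps per epoch."""
    usage = "train" if do_train else "test"
    dataset = ds.Cifar10Dataset(data_path, usage=usage, shuffle=do_train)

    trans = []
    if do_train:
        # Light augmentation for the training split only
        trans += [CV.RandomCrop((32, 32), (4, 4, 4, 4)),
                  CV.RandomHorizontalFlip()]
    trans += [CV.Resize((224, 224)),   # ResNet50 input resolution
              CV.Rescale(1.0 / 255.0, 0.0),
              CV.Normalize((0.4914, 0.4822, 0.4465),
                           (0.2023, 0.1994, 0.2010)),
              CV.HWC2CHW()]

    dataset = dataset.map(operations=trans, input_columns="image")
    dataset = dataset.map(operations=C.TypeCast(mstype.int32),
                          input_columns="label")
    dataset = dataset.batch(batch_size, drop_remainder=True)
    steps_per_epoch = dataset.get_dataset_size()
    dataset = dataset.repeat(repeat_num)
    return dataset, steps_per_epoch

Note that the test-only snippet in case 1 of section 1 calls create_dataset as if it returned only the dataset; that script's version of the helper presumably drops the steps-per-epoch value.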

The complete code can be downloaded from the MindSpore board of the Huawei Cloud forum: 华为云论坛_云计算论坛_开发者论坛_技术论坛 - 华为云
