Abstract: This article shares the practical application of THOR. Part of the THOR algorithm has already been open-sourced in MindSpore.

This article is shared from the Huawei Cloud community post "Source Code Analysis and Practical Application of MindSpore's Self-Developed Higher-Order Optimizer", original author: HWCloudAI.

This article shares the practical application of THOR. Part of the THOR algorithm has already been open-sourced in MindSpore; the source code is located at:

https://gitee.com/mindspore/mindspore/blob/master/mindspore/nn/optim/thor.py

Training a network with THOR in MindSpore is very simple. Let's first look at how to use it with the four lines of code below.

from mindspore.nn.optim import THOR  # import the second-order optimizer

# create the network
net = Net()

# create the optimizer
opt = THOR(net, lr, Tensor(damping), config.momentum, config.weight_decay, config.loss_scale,
           config.batch_size, split_indices=split_indices)

# add a computation graph to improve performance
model = ConvertModelUtils().convert_to_thor_model(model=model, network=net, loss_fn=loss, optimizer=opt,
                                                  loss_scale_manager=loss_scale, metrics={'acc'},
                                                  amp_level="O2", keep_batchnorm_fp32=False,
                                                  frequency=config.frequency)

# train the network
model.train(config.epoch_size, dataset, callbacks=cb, sink_size=dataset.get_dataset_size(), dataset_sink_mode=True)
  • Import the package needed by the second-order optimizer THOR
  • The first line of code creates the network as usual
  • The second line defines the THOR optimizer we are using
  • The third line adds a computation graph so that THOR reaches better performance
  • The fourth line trains the network

Let's go through this in a bit more detail. First, import the package for MindSpore's second-order optimizer, located in mindspore.nn.optim.

Then create the network you need; next, define the THOR optimizer, passing in the network and the hyperparameters THOR requires (such as the learning rate and the regularization coefficient);

Then call the convert_to_thor_model function, which lets THOR reach better performance by adding an extra computation graph. What does that mean? The network itself runs as one computation graph, and THOR uses stale second-order information; by adding one more computation graph, the two graphs respectively execute the operations that do and do not update the second-order matrices, which yields better performance. (PS: MindSpore supports both dynamic and static graphs; static graph mode is used here for better performance. If you are interested in this topic, you can follow this link: https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/white_paper/MindSpore_white_paper.pdf);
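Conceptually, the frequency argument passed to convert_to_thor_model corresponds to how often the second-order information is refreshed (my reading of the two-graph description above). The sketch below is plain Python with hypothetical helper names, not MindSpore code; it only illustrates the schedule the two graphs implement.

# A tiny schedule sketch of the two-graph idea: the second-order matrices are refreshed
# every `frequency` steps, and the stored (possibly stale) inverses are reused in between.
def refresh_second_order_factors():
    print("recompute A/G covariance factors and their inverses")

def apply_second_order_update():
    print("rescale the gradient with the stored inverses and update the weights")

def train(num_steps, frequency):
    for step in range(num_steps):
        if step % frequency == 0:
            refresh_second_order_factors()   # what the extra "update second-order matrices" graph does
        apply_second_order_update()          # both graphs end by applying the update

train(num_steps=8, frequency=4)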

Finally, call model.train and training can begin. That was a brief introduction to how THOR is used; next let's look at its source code.

Source Code Analysis

The __init__ function initializes THOR and takes the hyperparameters and the network structure as inputs. THOR supports both GPU and Ascend, as class THOR_GPU(Optimizer) and class THOR_Ascend(Optimizer) respectively; the main difference between the two classes is the operators they use. Below we take class THOR_Ascend(Optimizer) as an example for analysis.

class THOR_Ascend(Optimizer):
    def __init__(self, net, learning_rate, damping, momentum, weight_decay=0.0, loss_scale=1.0, batch_size=32,
                 decay_filter=lambda x: x.name not in [], split_indices=None):
        params = filter(lambda x: x.requires_grad, net.get_parameters())
        super(THOR_Ascend, self).__init__(learning_rate, params, weight_decay, loss_scale)
        if isinstance(momentum, float) and momentum < 0.0:
            raise ValueError("momentum should be at least 0.0, but got momentum {}".format(momentum))
        self.momentum = Parameter(Tensor(momentum, mstype.float32), name="momentum")
        self.params = self.parameters
        self.moments = self.params.clone(prefix="moments", init='zeros')
        self.hyper_map = C.HyperMap()
        self.opt = P.ApplyMomentum()
        self.net = net
        self.matrix_A_cov = ParameterTuple(filter(lambda x: 'matrix_A' in x.name, net.get_parameters()))
        self.matrix_G_cov = ParameterTuple(filter(lambda x: 'matrix_G' in x.name, net.get_parameters()))
        ...

All optimizers in MindSpore inherit from class Optimizer, a base class that defines some basic functions (such as retrieving the learning rate and scaling gradients). When THOR is initialized, the hyperparameters passed in are stored as class attributes for convenient access, and the operators used in later computation are defined.

In other words, the job of the initialization function is to define the operators and variables (Parameter, Tensor, etc.) that THOR's computation requires.
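For reference, here is a minimal sketch of what a subclass of that base class looks like, assuming the base-class helpers get_lr and scale_grad behave as in the MindSpore version this article targets; THOR's __init__ and construct follow the same skeleton, just with many more operators and states.

import mindspore.nn as nn
import mindspore.ops as ops

class PlainSGD(nn.Optimizer):
    """Minimal sketch of an optimizer built on the same base class THOR inherits from."""
    def __init__(self, params, learning_rate=0.01):
        super(PlainSGD, self).__init__(learning_rate, params)
        self.assign_sub = ops.AssignSub()

    def construct(self, gradients):
        lr = self.get_lr()                      # base-class helper: current learning rate
        gradients = self.scale_grad(gradients)  # base-class helper: undo loss scaling
        for param, grad in zip(self.parameters, gradients):
            self.assign_sub(param, lr * grad)   # w <- w - lr * g
        return True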

Two members deserve special attention: self.matrix_A_cov and self.matrix_G_cov. They hold the information needed to compute the second-order gradient: respectively, the covariance matrix of each layer's input and the covariance matrix of the first-order derivative of each layer's output. Both are already saved during the forward and backward passes at runtime.
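Written out in the K-FAC-style notation that THOR builds on (the layer subscripts here are my own notational convention), these two factors correspond, up to the damping term, to

$$
A_{l-1} = \mathbb{E}\!\left[a_{l-1}\, a_{l-1}^{\top}\right], \qquad
G_{l} = \mathbb{E}\!\left[g_{l}\, g_{l}^{\top}\right]
$$

where $a_{l-1}$ is the input of layer $l$ and $g_{l}$ is the derivative of the loss with respect to the layer's output; the layer's block of the Fisher information matrix is then approximated as $\hat{F}_{l} \approx A_{l-1} \otimes G_{l}$.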

Let's also look at the arguments used when creating THOR:

  • net: the model built for this training run;
  • learning_rate: the learning-rate hyperparameter;
  • damping: the hyperparameter for the regularization term added to the second-order matrices;
  • momentum: the momentum hyperparameter;
  • weight_decay: weight decay, used to prevent overfitting; the default is 0.0, i.e. no weight decay;
  • loss_scale: used to scale the loss during training to keep gradients from overflowing; the default is 1.0, i.e. no scaling;
  • batch_size: the amount of data used for one training step; the default is 32;
  • decay_filter: selects which layers weight decay is applied to; it takes effect when weight_decay > 0;
  • split_indices: used to accelerate the allreduce process (see the sketch after this list).
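To make split_indices concrete, here is a small sketch of the grouping logic only, not the actual MindSpore communication code (whether the boundaries are inclusive is my assumption): the list gives index boundaries that partition the per-layer matrices into allreduce fusion groups, so each group's communication can be launched and overlapped separately instead of waiting for one large fused allreduce.

def split_by_indices(items, split_indices):
    """Partition a flat list of per-layer tensors into allreduce fusion groups.
    Boundaries are treated as inclusive, mirroring e.g. split_indices=[26, 53] used later for ResNet50."""
    groups, start = [], 0
    for end in split_indices:
        groups.append(items[start:end + 1])
        start = end + 1
    if start < len(items):
        groups.append(items[start:])   # the remaining tail forms the last group
    return groups

# Example: 60 per-layer matrices split into 3 communication groups.
print([len(g) for g in split_by_indices(list(range(60)), [26, 53])])  # [27, 27, 6]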
  • The _get_Ainv_Ginv_Amax_Gmax_list function computes the inverses of the covariance matrices A/G and returns the inverted matrices. Concretely, it iterates over all layers of the model and processes them layer by layer: it adds the regularization term to each layer's covariance matrix and then inverts the matrix via Cholesky decomposition (a NumPy sketch of this step follows the code below). The currently open-sourced THOR code supports fully connected layers and convolution layers.
def _get_Ainv_Ginv_Amax_Gmax_list(self, gradients, damping_step, matrix_a_allreduce, matrix_g_allreduce,
                                  matrix_a_max_allreduce, matrix_g_max_allreduce):
    """get matrixA inverse list, matrixG inverse list, matrixA_max list, matrixG_max list"""
    for i in range(len(self.params)):
        thor_layer_count = self.weight_fim_idx_map[i]
        conv_layer_count = self.weight_conv_idx_map[i]
        layer_type = self.weight_layerType_idx_map[i]
        if layer_type in [Conv, FC, Embedding]:
            g = gradients[i]
            matrix_A = self.matrix_A_cov[thor_layer_count]
            matrix_G = self.matrix_G_cov[thor_layer_count]
            matrix_A = F.depend(matrix_A, g)
            matrix_G = F.depend(matrix_G, g)
            A_shape = self.shape(matrix_A)
            A_eye = self.eye(A_shape[0], A_shape[0], mstype.float32)
            G_shape = self.shape(matrix_G)
            G_eye = self.eye(G_shape[0], G_shape[0], mstype.float32)
            if layer_type == Conv:
                ...
            elif layer_type == FC:
                matrix_A = matrix_A + damping * A_eye
                matrix_A_inv = self.cholesky(matrix_A)
                matrix_A_inv = self.vector_matmul(matrix_A_inv, matrix_A_inv)
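For intuition, the damping-plus-Cholesky step for a fully connected layer amounts to the following NumPy computation. This is a standalone sketch; self.cholesky and self.vector_matmul above are custom Ascend operators whose exact semantics may differ in detail.

import numpy as np

def damped_inverse_via_cholesky(cov, damping):
    """Invert (cov + damping * I) for a symmetric positive semi-definite covariance matrix."""
    n = cov.shape[0]
    damped = cov + damping * np.eye(n, dtype=cov.dtype)
    L = np.linalg.cholesky(damped)          # damped = L @ L.T
    L_inv = np.linalg.inv(L)
    return L_inv.T @ L_inv                  # (L L^T)^{-1} = L^{-T} L^{-1}

# Example: a small covariance matrix built from random activations.
a = np.random.randn(128, 16)
A = a.T @ a / a.shape[0]
A_inv = damped_inverse_via_cholesky(A, damping=0.03)
print(np.allclose(A_inv @ (A + 0.03 * np.eye(16)), np.eye(16), atol=1e-6))  # True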
  • The _get_second_gradients function computes the final parameter update direction. In the paper the update direction for a layer is essentially

    $\mathrm{vec}(\delta W) = \left(A \otimes G\right)^{-1} \mathrm{vec}\!\left(\nabla_W \mathcal{L}\right)$

    (with the damping already folded into $A$ and $G$), so by the Kronecker-product identity the code actually implements it as

    $\delta W = G^{-1}\,\left(\nabla_W \mathcal{L}\right)\,A^{-1}$

    that is, two matrix multiplications per layer. The code is as follows:

def _get_second_gradients(self, new_grads, damping_step, gradients):
    """get second gradients for thor"""
    params_len = len(self.params)
    for i in range(params_len):
        ...
        else:
            ...
            elif layer_type == FC:
                temp_a = self.matrix_A_cov[thor_layer_count]
                temp_g = self.matrix_G_cov[thor_layer_count]
                temp_a = self.cast(temp_a, mstype.float16)
                temp_g = self.cast(temp_g, mstype.float16)
                g = self.cast(g, mstype.float16)
                g = self.matmul(temp_g, g)
                g = self.matmul(g, temp_a)
                g = self.cast(g, mstype.float32)
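The reason two small matrix multiplications suffice is the standard Kronecker-product identity. The quick NumPy check below (for intuition only) verifies that multiplying the gradient by the inverted factors on both sides equals solving against the full Kronecker-factored matrix:

import numpy as np

np.random.seed(0)
n_in, n_out = 4, 3
A = np.random.randn(n_in, n_in);   A = A @ A.T + np.eye(n_in)    # damped input-covariance factor
G = np.random.randn(n_out, n_out); G = G @ G.T + np.eye(n_out)   # damped output-gradient factor
grad = np.random.randn(n_out, n_in)                              # first-order gradient of a FC weight

# Direct form: invert the full (n_in*n_out) x (n_in*n_out) Kronecker product.
full = np.linalg.solve(np.kron(A, G), grad.reshape(-1, order='F'))
# Factored form used in the code: two small matmuls.
factored = np.linalg.inv(G) @ grad @ np.linalg.inv(A)

print(np.allclose(full, factored.reshape(-1, order='F')))  # True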

The construct function is what actually executes during network training. It contains the calls to the two functions described above, _get_Ainv_Ginv_Amax_Gmax_list and _get_second_gradients, and it performs the computation of the second-order matrices and the adjustment of the gradient update direction.

def construct(self, gradients):
    params = self.params
    moments = self.moments
    damping_step = self.gather(self.damping, self.cov_step, self.axis)
    damping_step = self.cast(damping_step, mstype.float32)
    if self.thor:
        matrix_A_allreduce = ()
        matrix_G_allreduce = ()
        matrix_A_max_allreduce = ()
        matrix_G_max_allreduce = ()
        matrix_A_allreduce, matrix_G_allreduce, matrix_A_max_allreduce, matrix_G_max_allreduce = \
            self._get_Ainv_Ginv_Amax_Gmax_list(gradients, damping_step, matrix_A_allreduce, matrix_G_allreduce,
                                               matrix_A_max_allreduce, matrix_G_max_allreduce)  # compute the inverses of A/G
        ...
        new_grads = ()
        for i in range(len(self.params)):
            ...
            if self.conv_layer_count > 0:  # handling when there are convolution layers
                ...
            else:  # handling when all layers are fully connected
                if layer_type == Embedding:
                    ...
                elif layer_type == FC:
                    temp_a = matrix_A_allreduce[thor_layer_count]
                    temp_g = matrix_G_allreduce[thor_layer_count]
                    fake_A = self.assign(self.matrix_A_cov[thor_layer_count], temp_a)
                    fake_G = self.assign(self.matrix_G_cov[thor_layer_count], temp_g)
                    g = F.depend(g, fake_A)  # ensure execution order
                    g = F.depend(g, fake_G)
                    temp_a = self.cast(temp_a, mstype.float16)
                    temp_g = self.cast(temp_g, mstype.float16)
                    g = self.cast(g, mstype.float16)
                    g = self.matmul(temp_g, g)
                    g = self.matmul(g, temp_a)  # turn the first-order direction into a second-order direction
                    g = self.cast(g, mstype.float32)
                elif layer_type == LayerNorm:
                    g = self._process_layernorm(damping_step, g)
            new_grads = new_grads + (g,)
        gradients = new_grads  # the update direction obtained from the computation
    else:  # this branch uses the stale second-order information to update the parameters
        new_grads = ()
        gradients = self._get_second_gradients(new_grads, damping_step, gradients)  # call _get_second_gradients to compute the direction
    ...

Practical Application of THOR

In this section we share the practical application of THOR through two examples, ResNet50 and BERT. The code for both examples is already open source; links:
ResNet50: https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/train.py
BERT: https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/run_pretrain.py

ResNet50[1]

The optimizer is invoked in the same way as shown at the beginning of this article; in this example the concrete training procedure is spelled out.

First, the training dataset needed for training is created and the network is defined as ResNet50; then the hyperparameter schedules THOR needs are set (other hyperparameter values can be changed in src/config.py under that directory); next, the THOR optimizer is created and the configured hyperparameter values are passed in; then the model is converted so that the information the second-order optimizer needs can be saved; finally, the network can be trained.

from mindspore.nn.optim import Momentum, THOR  # import the second-order optimizer
from src.resnet import resnet50 as resnet
from mindspore.train.model import Model
...
if __name__ == '__main__':
    ...
    # create the training dataset used during network training
    dataset = create_dataset(dataset_path=args_opt.dataset_path, do_train=True, repeat_num=1,
                             batch_size=config.batch_size, target=target, distribute=args_opt.run_distribute)
    step_size = dataset.get_dataset_size()

    # create the ResNet50 model
    net = resnet(class_num=config.class_num)
    ...
    # init lr
    if cfg.optimizer == "Thor":
        # set the hyperparameter values
        from src.lr_generator import get_thor_lr
        lr = get_thor_lr(0, config.lr_init, config.lr_decay, config.lr_end_epoch, step_size, decay_epochs=39)
    # define loss, model
    if target == "Ascend":
        if args_opt.dataset == "imagenet2012":
            if not config.use_label_smooth:
                config.label_smooth_factor = 0.0
            loss = CrossEntropySmooth(sparse=True, reduction="mean",
                                      smooth_factor=config.label_smooth_factor, num_classes=config.class_num)
        else:
            loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
        loss_scale = FixedLossScaleManager(config.loss_scale, drop_overflow_update=False)
        # high-level abstraction that integrates training and evaluation of the network model
        model = Model(net, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale, metrics={'acc'},
                      amp_level="O2", keep_batchnorm_fp32=False)
    if cfg.optimizer == "Thor" and args_opt.dataset == "imagenet2012":
        from src.lr_generator import get_thor_damping
        # set the damping hyperparameter
        damping = get_thor_damping(0, config.damping_init, config.damping_decay, 70, step_size)
        # used for parallel acceleration of communication
        split_indices = [26, 53]
        # create the THOR optimizer
        opt = THOR(net, lr, Tensor(damping), config.momentum, config.weight_decay, config.loss_scale,
                   config.batch_size, split_indices=split_indices)
        # add a computation graph to improve performance
        model = ConvertModelUtils().convert_to_thor_model(model=model, network=net, loss_fn=loss, optimizer=opt,
                                                          loss_scale_manager=loss_scale, metrics={'acc'},
                                                          amp_level="O2", keep_batchnorm_fp32=False,
                                                          frequency=config.frequency)
    ...
    # train the network
    model.train(config.epoch_size - config.pretrain_epoch_size, dataset, callbacks=cb,
                sink_size=dataset.get_dataset_size(), dataset_sink_mode=dataset_sink_mode)

Finally, enter the corresponding launch command and the script will run.

BERT[2]

The steps for BERT are much the same as for ResNet50. First, the training dataset needed for training is created and the network is defined as BERT; then the hyperparameter schedules THOR needs are set (other hyperparameter values can be changed in src/config.py under that directory); when the optimizer is created, the hyperparameter values configured for BERT are passed in, and in this example it is also created with

decay_filter=lambda x: 'layernorm' not in x.name.lower() and 'bias' not in x.name.lower()

which means LayerNorm layers and the bias parameters of fully connected layers are excluded from weight decay; then the model is converted so that the information the second-order optimizer needs can be saved; finally, the network can be trained.

from mindspore.nn.optim import Lamb, Momentum, AdamWeightDecay, THOR  # import the second-order optimizer
from src import BertNetworkWithLoss
...
def _get_optimizer(args_opt, network):
    """get bert optimizer, support Lamb, Momentum, AdamWeightDecay."""
    if cfg.optimizer == 'Lamb':
        ...
    elif cfg.optimizer == "Thor":
        from src.utils import get_bert_thor_lr, get_bert_thor_damping
        # set the lr and damping hyperparameter values
        lr = get_bert_thor_lr(cfg.Thor.lr_max, cfg.Thor.lr_min, cfg.Thor.lr_power, cfg.Thor.lr_total_steps)
        damping = get_bert_thor_damping(cfg.Thor.damping_max, cfg.Thor.damping_min, cfg.Thor.damping_power,
                                        cfg.Thor.damping_total_steps)
        split_indices = None
        # set the parallel acceleration scheme
        if bert_net_cfg.num_hidden_layers == 12:
            if bert_net_cfg.use_relative_positions:
                split_indices = [29, 58, 87, 116, 145, 174, 203, 217]
            else:
                split_indices = [28, 55, 82, 109, 136, 163, 190, 205]
        elif bert_net_cfg.num_hidden_layers == 24:
            if bert_net_cfg.use_relative_positions:
                split_indices = [30, 90, 150, 210, 270, 330, 390, 421]
            else:
                split_indices = [38, 93, 148, 203, 258, 313, 368, 397]
        # create the optimizer
        optimizer = THOR(network, lr, damping, cfg.Thor.momentum,
                         cfg.Thor.weight_decay, cfg.Thor.loss_scale, cfg.batch_size,
                         decay_filter=lambda x: 'layernorm' not in x.name.lower() and 'bias' not in x.name.lower(),
                         split_indices=split_indices)
    ...
    return optimizer

def run_pretrain():
    ...
    # create the dataset
    ds = create_bert_dataset(device_num, rank, args_opt.do_shuffle, args_opt.data_dir, args_opt.schema_dir)
    # create the network and the loss function
    net_with_loss = BertNetworkWithLoss(bert_net_cfg, True)
    ...
    # load the initial checkpoint
    if args_opt.load_checkpoint_path:
        param_dict = load_checkpoint(args_opt.load_checkpoint_path)
        load_param_into_net(net_with_loss, param_dict)
    # dynamic loss scaling
    if args_opt.enable_lossscale == "true":
        ...
    # fixed loss scale value
    else:
        # create the backward-pass gradient computation
        net_with_grads = BertTrainOneStepCell(net_with_loss, optimizer=optimizer)
    # create the model
    model = Model(net_with_grads)
    # add a computation graph to improve performance
    model = ConvertModelUtils().convert_to_thor_model(model, network=net_with_grads, optimizer=optimizer,
                                                      frequency=cfg.Thor.frequency)
    # train the network
    model.train(new_repeat_count, ds, callbacks=callback,
                dataset_sink_mode=(args_opt.enable_data_sink == "true"), sink_size=args_opt.data_sink_steps)

if __name__ == '__main__':
    set_seed(0)

Finally, enter the corresponding launch command and the script will run. This brings the higher-order optimizer series to a close. The series consists of three articles, covering the background of optimizers, an introduction to MindSpore's self-developed optimizers, and the source code analysis and practical application of MindSpore's higher-order optimizer THOR. If anything is lacking, your comments and corrections are welcome, and you are also welcome to join us in the MindSpore open-source community.

References:

[1]He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.

[2]Devlin J, Chang M W, Lee K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
