Summary: This article gives a technical overview of CKBERT and shows how to use CKBERT models in the EasyNLP framework, on HuggingFace Models, and on the Alibaba Cloud machine learning platform PAI.

Overview

Pre-trained language models are used extremely widely across NLP applications. However, classic pre-trained language models such as BERT lack an understanding of knowledge, for example the relation triples in knowledge graphs. Knowledge-enhanced pre-trained models enhance a model with external knowledge (knowledge graphs, dictionaries, text, and so on) or with the linguistic knowledge inside the sentence itself. We found that the knowledge injection process is always accompanied by a large number of knowledge parameters, and that downstream fine-tuning still needs the support of external data to reach good results, which makes such models hard to offer to users in a cloud environment. CKBERT (Chinese Knowledge-enhanced BERT) is a Chinese pre-trained model developed in-house by the EasyNLP team. It injects two types of knowledge into the model (external knowledge graphs and internal linguistic knowledge) while keeping the injection mechanism easy to extend. Our experiments also show that CKBERT's accuracy exceeds that of a range of classic Chinese models. In this framework upgrade, we contribute CKBERT models at several scales to the open-source community; these models are fully compatible with HuggingFace Models. In addition, users can conveniently use CKBERT with cloud resources on the Alibaba Cloud machine learning platform PAI.

EasyNLP (https://github.com/alibaba/Ea...) is an easy-to-use and feature-rich Chinese NLP algorithm framework developed by the Alibaba Cloud machine learning PAI team on top of PyTorch. It supports common Chinese pre-trained models and techniques for landing large models, and provides a one-stop NLP development experience from training to deployment. EasyNLP provides concise interfaces for developing NLP models, including the application zoo AppZoo and the pre-trained ModelZoo, along with technical support to help users efficiently bring very large pre-trained models into production. As the demand for cross-modal understanding keeps growing, EasyNLP also supports various cross-modal models, especially Chinese-domain ones, and contributes them to the open-source community. We hope to serve more NLP and multi-modal algorithm developers and researchers, and to advance NLP and multi-modal technology and model deployment together with the community.

Overview of Chinese pre-trained language models

In this section, we first briefly review classic Chinese pre-trained language models. They currently fall into two categories: general-domain pre-trained models, mainly BERT, MacBERT, and PERT; and knowledge-enhanced Chinese pre-trained models, mainly ERNIE-Baidu, Lattice-BERT, K-BERT, and ERNIE-THU.

General-domain pre-trained language models

For BERT we directly use the model released by Google, trained on Chinese Wikipedia text. MacBERT is an improved version of BERT that introduces the MLM-as-correction (Mac) pre-training task to alleviate the inconsistency between pre-training and downstream tasks: the masked language model (MLM) masks with the special [MASK] token, but [MASK] never appears in downstream tasks. MacBERT therefore replaces [MASK] with a similar word, obtained through the Synonyms toolkit, whose similarity computation is based on word2vec. MacBERT also adopts Whole Word Masking and N-gram Masking: when an N-gram is to be masked, a similar word is looked up for each word in the N-gram, and a random word is used as the replacement when no similar word is available. Because moderately shuffled text does not prevent semantic understanding, PERT learns semantic knowledge from shuffled text: it swaps the order of some words in the original input to form shuffled text (so no extra [MASK] tokens are introduced), and its training objective is to predict the original position of each token.

Knowledge-enhanced Chinese pre-trained models

BERT's pre-training data only masks single characters. For example, when training BERT, the character "尔" can be predicted from the local co-occurrence of "哈" and "滨", but the model does not actually learn any knowledge related to "哈尔滨" (Harbin): it learns the word "哈尔滨" without knowing what it stands for. ERNIE-Baidu instead masks whole words during pre-training to learn representations of words and entities. By masking words such as "哈尔滨" and "冰雪" (ice and snow), the model can capture the relation between "哈尔滨" and "黑龙江" (Heilongjiang), learning that Harbin is the capital of Heilongjiang and that Harbin is a city of ice and snow.
Similar to ERNIE-Baidu, Lattice-BERT integrates word-level information through a word-lattice structure. Specifically, Lattice-BERT designs a lattice position attention mechanism to express word-level information, and proposes a Masked Segment Prediction task to push the model to learn from the rich but redundant intrinsic information of the lattice.
Beyond linguistic knowledge, more work uses factual knowledge from knowledge graphs to enrich the representations of Chinese pre-trained models. Among them, K-BERT proposes a knowledge-graph-oriented, knowledge-enhanced language model that injects triples into sentences as domain knowledge. However, injecting too much knowledge leads to knowledge noise and makes a sentence deviate from its correct meaning. To overcome knowledge noise, K-BERT introduces soft positions and a visible matrix to limit the influence of the injected knowledge. Since K-BERT can load model parameters from a pre-trained BERT, domain knowledge can easily be injected into the model by equipping it with a KG, without pre-training the model from scratch. The EasyNLP framework also integrates the K-BERT model and its functionality (see here).
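The visible-matrix idea can be illustrated with a toy sketch (this simplifies away soft positions and subword granularity and is not K-BERT's actual implementation; the token lists are invented for the example):

```python
def build_visible_matrix(sentence_tokens, injections):
    """Append knowledge tokens after the sentence and build an NxN 0/1
    visible matrix: knowledge tokens are visible only within their own
    branch and to the entity (anchor) they were injected for."""
    tokens = list(sentence_tokens)
    owners = [None] * len(tokens)          # None marks an ordinary sentence token
    for anchor, k_tokens in injections.items():
        for t in k_tokens:
            tokens.append(t)
            owners.append(anchor)          # remember which entity injected it
    n = len(tokens)
    visible = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            same_branch = owners[i] == owners[j]   # sentence<->sentence, or same injection
            anchor_link = (owners[i] is None and owners[j] == i) or \
                          (owners[j] is None and owners[i] == j)
            if same_branch or anchor_link:
                visible[i][j] = 1
    return tokens, visible

# "Beijing" (index 2) gets the injected knowledge tokens "capital", "China"
tokens, vis = build_visible_matrix(["Tim", "visits", "Beijing"],
                                   {2: ["capital", "China"]})
```

Here the anchor "Beijing" can attend to its injected knowledge tokens while "Tim" cannot, which is how K-BERT keeps injected triples from disturbing the rest of the sentence.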
ERNIE-THU is a pre-trained model that incorporates knowledge embeddings. It first uses TAGME to extract entities from text and links them to the corresponding entity objects in a KG, then obtains the embeddings of these entities, which are trained with a knowledge representation method such as TransE. In addition, ERNIE-THU extends BERT: besides the MLM and NSP tasks, it adds a new KG-related pre-training objective that masks the alignment between tokens and entities and asks the model to pick the appropriate entity from the graph to restore the alignment.
Technical details of our CKBERT model

Most current knowledge-enhanced pre-trained models use external knowledge (knowledge graphs, dictionaries, text, and so on) or sentence-internal linguistic knowledge for enhancement; the injection process brings a large number of knowledge parameters, and downstream fine-tuning still needs external data to reach good results, which makes these models hard to offer to users in a cloud environment. CKBERT (Chinese Knowledge-enhanced BERT) is a Chinese pre-trained model developed in-house by the EasyNLP team. It injects two types of knowledge into the model (external knowledge graphs and internal linguistic knowledge) while keeping the injection mechanism easy to extend. For practical business needs, we provide models at three parameter scales, configured as follows:

| Configuration | alibaba-pai/pai-ckbert-base-zh | alibaba-pai/pai-ckbert-large-zh | alibaba-pai/pai-ckbert-huge-zh |
|---|---|---|---|
| Parameters | 151M | 428M | 1.3B |
| Number of Layers | 12 | 24 | 24 |
| Attention Heads | 12 | 16 | 8 |
| Hidden Size | 768 | 1024 | 2048 |
| Text Length | 128 | 128 | 128 |
| FFN Size | 3072 | 4096 | 8192 |

The CKBERT model architecture is shown in the figure below:
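CKBERT's pre-training (described next) draws positive triples from the 1-hop neighbors of an entity mentioned in the sentence, and negatives from a bounded multi-hop range. A toy sketch of that sampling scheme, assuming a knowledge graph stored as a plain adjacency dict (the entity names and the `max_hop` value are invented for illustration; this is not CKBERT's actual sampling code):

```python
from collections import deque

def sample_triples(kg, entity, max_hop=3):
    """BFS from `entity`: hop-1 neighbors become positives,
    entities 2..max_hop hops away become negatives (never farther)."""
    hops = {entity: 0}
    queue = deque([entity])
    while queue:
        node = queue.popleft()
        if hops[node] >= max_hop:
            continue                      # stay within the prescribed hop range
        for neighbor in kg.get(node, []):
            if neighbor not in hops:
                hops[neighbor] = hops[node] + 1
                queue.append(neighbor)
    positives = [e for e, h in hops.items() if h == 1]
    negatives = [e for e, h in hops.items() if 2 <= h <= max_hop]
    return positives, negatives

kg = {
    "哈尔滨": ["黑龙江", "冰雪大世界"],
    "黑龙江": ["中国"],
    "中国": ["亚洲"],
    "亚洲": ["地球"],
}
pos, neg = sample_triples(kg, "哈尔滨", max_hop=2)
```

With `max_hop=2`, entities exactly two hops away become negatives, while anything farther in the graph is never sampled.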
To make the model's parameters easy to scale, CKBERT only modifies the data input and the pre-training tasks; the model architecture itself is untouched, so CKBERT's structure stays aligned with the community BERT model. At the data input layer, two kinds of knowledge are handled: external KG triples and sentence-internal linguistic knowledge. For the linguistic knowledge, we process the sentences with the HIT LTP platform (semantic role labeling, dependency parsing, and so on) and then mark the important components of the analysis results according to rules. For the external triple knowledge, positive and negative triple samples are constructed from the entities appearing in the sentence: positive samples are drawn from the 1-hop entities in the graph, and negative samples from multi-hop entities, where negative sampling must stay within a prescribed number of hops and cannot wander too far from the entity in the graph.

CKBERT is pre-trained with two tasks: a linguistic-aware masked language model and multi-hop knowledge contrastive learning.

Linguistic-aware MLM: the subject-like roles in the semantic dependency graph (agent AGT and experiencer EXP) are masked with [MASK], and [sdp] markers are added before and after the word to attach word-boundary information. In the syntactic dependency tree, relations such as subject-verb, attribute-head, and coordination are processed by the same masking mechanism as the DEP category. In total, 15% of the tokens of a sentence take part in pre-training: 40% of them are masked at random, and 30% each are assigned to words in semantic dependency relations and syntactic dependency relations. The training loss is the standard masked-language-model cross-entropy over these positions.
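The 15% budget and its 40%/30%/30% split can be sketched as follows (a simplified illustration: in CKBERT the SDP/DEP portions cover whole relation spans chosen from the parses, while here they are plain positions; `allocate_mask_budget` and its seed are invented for the example):

```python
import random

def allocate_mask_budget(num_tokens, seed=0):
    """Select 15% of the positions, then split them 40% random,
    30% SDP-related, and the remainder DEP-related."""
    rng = random.Random(seed)
    budget = max(1, int(num_tokens * 0.15))
    n_random = round(budget * 0.4)
    n_sdp = round(budget * 0.3)
    positions = rng.sample(range(num_tokens), budget)
    return {
        "random": positions[:n_random],
        "sdp": positions[n_random:n_random + n_sdp],
        "dep": positions[n_random + n_sdp:],   # remainder goes to DEP spans
    }

plan = allocate_mask_budget(128)   # 19 positions: 8 random, 6 SDP, 5 DEP
```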
Multi-hop knowledge contrastive learning: the positive and negative samples constructed above are processed for each entity to be injected. For every entity in a sentence, 1 positive sample and 4 negative samples are constructed, and external knowledge is learned through the standard InfoNCE contrastive objective.
In this loss, the anchor is the contextual entity representation produced by the pre-trained model, which is contrasted against the representation of the positive triple and the representations of the negative triples.

Implementation of the CKBERT model

In the EasyNLP framework, our implementation consists of three parts: data preprocessing, model structure adjustment, and loss function design. The data preprocessing stage consists of two steps: 1. extraction of NER entities and semantic relations; 2. injection of knowledge graph information. For the extraction of NER entities and semantic information, we mainly use LTP (Language Technology Platform) to segment and parse the original sentences. The core code of this part is as follows:

```python
from typing import Dict, List, Union

import numpy as np
from ltp import LTP


def ltp_process(ltp: LTP,
                data: List[Dict[str, Union[str, List[Union[int, str]]]]]):
    """use ltp to process the data

    Args:
        data: data example: {'text': ['我叫汤姆去拿伞。'], ...}
    """
    new_data = list(map(lambda x: x['text'][0].replace(" ", ""), data))
    seg, hiddens = ltp.seg(new_data)
    result = {}
    result['seg'] = seg
    result['ner'] = ltp.ner(hiddens)
    result['dep'] = ltp.dep(hiddens)
    result['sdp'] = ltp.sdp(hiddens)
    for index in range(len(data)):
        data[index]['text'][0] = data[index]['text'][0].replace(" ", "")
        data[index]['seg'] = result['seg'][index]
        data[index]['ner'] = result['ner'][index]
        data[index]['dep'] = result['dep'][index]
        data[index]['sdp'] = result['sdp'][index]
```

After this step, whole words need to be masked based on the semantic dependency relations in the original sentence. The mask strategy follows the design of BERT's masking: each relation type is assigned a specific probability, and relations are masked according to that probability. The core code is as follows (`mask_labels`, `detail_info`, `seg_data`, `seg2id`, and `check_ids` are helpers defined elsewhere in the module):

```python
def dep_sdp_mask(left_numbers: List[int],
                 data_new: List[List[Union[int, str]]],
                 markers_: List[List[int]],
                 selected_numbers_: set,
                 number_: int,
                 marker_attrs: Dict[str, List[int]]) -> int:
    """mask the `mask_labels` for sdp and dep and record the markers for each mask item

    Args:
        left_numbers (List[int]): the options that have not been used
        data_new (List[List[Union[int, str]]]): preprocessed data for original dep and sdp
        markers_ (List[List[int]]): a list that is used to save the markers for each mask item
        selected_numbers_ (set): a set that is used to save the selected options
        number_ (int): the number of mask labels
        marker_attrs (Dict[str, List[int]]): marker attributes

    Returns:
        int: 0 means no mask, other values mean the number of masked ids
    """
    np.random.shuffle(left_numbers)
    for item_ in left_numbers:
        target_item = data_new[item_]
        seg_ids = np.array(target_item[:2]) - 1
        delete_ids = np.where(seg_ids < 1)[0]
        seg_ids = np.delete(seg_ids, delete_ids)
        temp_ids = seg2id(seg_ids)
        ids = []
        for item in temp_ids:
            ids += item.copy()
        if check_ids(ids):
            length_ = len(ids)
            if number_ > length_:
                for id_ in ids:
                    mask_labels[id_] = 1
                if target_item[2] in marker_attrs:
                    detail_info.append([
                        target_item,
                        [seg_data[target_item[0] - 1], seg_data[target_item[1] - 1]],
                    ])
                    if len(temp_ids) == 1:
                        markers_.append([temp_ids[0][0], temp_ids[0][-1]])
                    elif len(temp_ids) == 2:
                        for i in marker_attrs[target_item[2]]:
                            markers_.append([temp_ids[i][0], temp_ids[i][-1]])
                selected_numbers_.add(item_)
                return length_
            else:
                return 0
    return 0
```

After preprocessing the original sentences, knowledge is injected in the model's dataloader. Since the model uses contrastive learning, this step must generate both positive and negative samples during data conversion. The core code is:

```python
def get_positive_and_negative_examples(self,
                                       ner_data: str,
                                       negative_level: int = 3
                                       ) -> Union[bool, Dict[str, List[str]]]:
    """get the positive examples and negative examples for the ner data

    Args:
        ner_data (str): the ner entity
        negative_level (int, optional): the depth of the relationship. Defaults to 3.

    Returns:
        Union[bool, Dict[str, List[str]]]: if `ner_data` is not in `knowledge`,
        return False; otherwise, return the positive and negative examples
    """
    knowledge: Dict[str, Dict[str, str]] = self.Knowledge_G
    common_used = set()

    def get_data(key: str, data: Dict[str, str], results: List[str],
                 deep: int, insert_flag: bool = False):
        """get the negative examples recursively

        Args:
            key (str): the ner
            data (Dict[str, str]): the related data about `key`
            results (List[str]): a list used to save the negative examples
            deep (int): the recursion depth
            insert_flag (bool, optional): whether to insert data into `results`.
                Defaults to False.
        """
        nonlocal knowledge
        common_used.add(key)
        if deep == 0:
            return
        for key_item in data:
            if data[key_item] not in common_used and insert_flag:
                results.append(data[key_item])
            if data[key_item] in knowledge and data[key_item] not in common_used:
                get_data(data[key_item], knowledge[data[key_item]],
                         results, deep - 1, True)

    all_examples = {
        'ner': ner_data,
        'positive_examples': [],
        'negative_examples': []
    }
    if ner_data in knowledge:
        tp_data = knowledge[ner_data]
        negative_examples = []
        if '形容' in tp_data:
            positive_example = tp_data['形容']
        else:
            keys = list(tp_data.keys())
            choice = np.random.choice([_ for _ in range(len(keys))], 1)[0]
            positive_example = tp_data[keys[choice]]
        # the description usually contains the ner entity; if not,
        # concatenate `ner_data` and the positive example
        if ner_data in positive_example:
            all_examples['positive_examples'].append(positive_example)
        else:
            all_examples['positive_examples'].append(ner_data + positive_example)
        get_data(ner_data, tp_data, negative_examples, negative_level)
        # concatenate the ner entity and each negative example
        negative_examples = list(
            map(lambda x: ner_data + x if ner_data not in x else x,
                negative_examples))
        all_examples['negative_examples'] = negative_examples
        return all_examples
    return False
```

With knowledge injection done, the data preprocessing stage is complete. Next, because knowledge injection adds extra special tokens, the model's embedding layer must be resized:

```python
model.backbone.resize_token_embeddings(len(train_dataset.tokenizer))
model.config.vocab_size = len(train_dataset.tokenizer)
```

After adjusting the model structure, the last step is to modify the original loss function. Because contrastive learning is introduced, a contrastive loss is added on top of the original loss (CKBERT adopts a SimCSE-style contrastive loss). The core code is:

```python
import torch
from torch import nn


def compute_simcse(self, original_outputs: torch.Tensor,
                   forward_outputs: torch.Tensor) -> float:
    original_hidden_states = original_outputs['hidden_states'].unsqueeze(-2)
    loss = nn.CrossEntropyLoss()
    forward_outputs = torch.mean(forward_outputs, dim=-2)
    cos_result = self.CosSim(original_hidden_states, forward_outputs)
    cos_result_size = cos_result.size()
    cos_result = cos_result.view(-1, cos_result_size[-1])
    labels = torch.zeros(cos_result.size(0),
                         device=original_outputs['hidden_states'].device).long()
    loss_ = loss(cos_result, labels)
    return loss_
```

Accelerating CKBERT pre-training

Pre-training CKBERT consumes a great deal of time and compute, so we need to accelerate it. CKBERT is implemented in PyTorch; compared with TensorFlow 1.x graph execution, PyTorch's eager execution is easy to use, develop, and debug, but it lacks a graph IR (Intermediate Representation) of the model, so deeper optimizations are not possible. Inspired by LazyTensor and PyTorch/XLA (https://github.com/pytorch/xla), the PAI team developed TorchAccelerator for PyTorch, aiming to speed up training while preserving usability and debuggability.

LazyTensor still has many shortcomings in the conversion from eager execution to graph execution. By wrapping custom operations into XLA CustomCall, parsing Python code into an AST, and other techniques, TorchAccelerator improves the completeness and performance of this conversion, and further improves compilation results through multi-stream optimization, asynchronous tensor transfer, and related means.
Experiments show that combining TorchAccelerator with AMP (Automatic Mixed Precision training) yields a training speedup of more than 40%, indicating that AMP and TorchAccelerator interact to produce a good acceleration effect.

Evaluation of CKBERT

To verify CKBERT's accuracy on various tasks, we evaluated sentence classification and NER on several public datasets, with the following results.

Results on the CLUE datasets (AFQMC, TNEWS, IFLYTEK, OCNLI, WSC, and CSL are text classification tasks; CMRC, CHID, and C3 are question answering tasks):

| Model | AFQMC | TNEWS | IFLYTEK | OCNLI | WSC | CSL | CMRC | CHID | C3 | Total Score |
|---|---|---|---|---|---|---|---|---|---|---|
| BERT | 72.73 | 55.22 | 59.54 | 66.53 | 72.49 | 81.77 | 73.40 | 79.19 | 57.91 | 69.72 |
| MacBERT | 69.90 | 57.93 | 60.35 | 67.43 | 74.71 | 82.13 | 73.55 | 79.51 | 58.89 | 70.28 |
| PERT | 73.61 | 54.50 | 57.42 | 66.70 | 76.07 | 82.77 | 73.80 | 80.19 | 58.03 | 70.18 |
| ERNIE-Baidu | 73.08 | 56.22 | 60.11 | 67.48 | 75.79 | 82.14 | 72.86 | 80.03 | 57.63 | 69.83 |
| Lattice-BERT | 72.96 | 56.14 | 58.97 | 67.54 | 76.10 | 81.99 | 73.47 | 80.24 | 57.80 | 70.29 |
| K-BERT | 73.15 | 55.91 | 60.19 | 67.83 | 76.21 | 82.24 | 72.74 | 80.29 | 57.48 | 70.35 |
| ERNIE-THU | 72.88 | 56.59 | 59.33 | 67.95 | 75.82 | 82.35 | 72.96 | 80.22 | 56.30 | 69.98 |
| CKBERT-base | 73.17 | 56.44 | 60.65 | 68.53 | 76.38 | 82.63 | 73.55 | 81.69 | 57.91 | 71.36 |
| CKBERT-large | 74.75 | 55.86 | 60.62 | 70.57 | 78.89 | 82.30 | 73.45 | 82.34 | 58.12 | 72.23 |
| CKBERT-huge | 75.03 | 59.72 | 60.96 | 78.26 | 85.16 | 89.47 | 77.25 | 97.73 | 86.59 | 78.91 |
| CKBERT-huge (ensemble) | 77.05 | 61.16 | 61.19 | 82.80 | 87.14 | 94.23 | 80.40 | 97.91 | 87.26 | 81.02 |

Results on the NER datasets:

| Model | MSRA | Weibo | Onto. | Resu. |
|---|---|---|---|---|
| BERT | 95.20 | 54.65 | 81.61 | 94.86 |
| MacBERT | 95.07 | 54.93 | 81.96 | 95.22 |
| PERT | 94.99 | 53.74 | 81.44 | 95.10 |
| ERNIE-Baidu | 95.39 | 55.14 | 81.17 | 95.13 |
| Lattice-BERT | 95.28 | 54.99 | 82.01 | 95.31 |
| K-BERT | 94.97 | 55.21 | 81.98 | 94.92 |
| ERNIE-THU | 95.25 | 53.85 | 82.03 | 94.89 |
| CKBERT-base | 95.35 | 55.97 | 82.19 | 95.68 |
| CKBERT-large | 95.58 | 57.09 | 82.43 | 96.08 |
| CKBERT-huge | 96.79 | 58.66 | 83.87 | 97.19 |

These results show, first, on CLUE: (1) knowledge-enhanced pre-trained models all improve considerably over BERT, which to some extent shows that knowledge injection helps the model reason semantically; (2) compared with the previous strong baseline models, CKBERT improves performance further, which also shows that injecting heterogeneous knowledge benefits model performance; (3) the larger the model, the more pronounced the gain brought by heterogeneous knowledge injection, as the comparison between our huge and base models shows. Second, on the NER datasets: (1) knowledge-enhanced pre-trained models also improve over BERT to some extent; (2) CKBERT improves over the other baselines by a larger margin, which further demonstrates that injecting heterogeneous knowledge helps model performance.

Tutorial: using the CKBERT model

Below we briefly describe how to use CKBERT in the EasyNLP framework.

Installing EasyNLP

Users can install the EasyNLP algorithm framework directly following the instructions on GitHub (https://github.com/alibaba/Ea...).

Model pre-training

This section describes how to invoke CKBERT pre-training; users who do not need to pre-train the model themselves can skip it.

Data preparation

CKBERT is a knowledge-injected pre-trained model, so users need to prepare their own raw training data (xxx.json) and knowledge graph (xxx.spo); fields in the data are separated with the \t separator. The training data format is {'text': ['xxx'], 'title': 'xxx'}, for example:

{'text': ['我想,如果我没有去做大学生村官,恐怕我这个在昆明长大的孩子永远都不能切身感受到云南这次60年一遇的特大旱情的严重性,恐怕我只是每天看着新闻上那些缺水的镜头,嘴上说要节水,但事实口头放弃不了三天。 
我任职的中央在昆明市禄劝县的一个村委会,说实话这里间隔禄劝县城不远,自然环境不算很差。目前,只有一个自然村保障不了饮用水。一个自然村根本能保障有饮用水到5月。这里所说的饮用水,是指从山肚子里进去的水,积在小水坝或是水塘里又通过管道输送到村子里的水,和咱们城市里真正意义上消过毒的、能平安饮用的饮用水不同。在整个输送的过程中,可能曾经产生了有害物质。我感觉是。 没有饮用水的那个自然村叫大海子村,50户,近200多人。地处山头,交通很不便当,走路大略要1个半小时到两个小时,而且坡度比拟大,是一个苗族村寨。天文条件限度,根本没有什么经济作物,算是靠天吃饭的那种。往年遇到60年一遇的干旱,村里的两个水窖都根本干了,之前几天,他们村长来反映,几个老人曾经是抬个小板凳坐到窖底用碗舀水了。 面对这么严厉的旱情,村委会的领导和各小组长都在想方法。然而上山的路路面状况差,大车重车上不去;周边水源地少。最可行的方法就是从武定那边绕路上去。但每天运水上去也不是方法,久远来看还是要建筑一个小水坝。村委会的领导被动捐款,村民也自行筹资,开始自救。 最近每个周末都回家,添置防晒品,因为根本每天都上山下村去理解状况,必须把握辖区内13个村小组水资源的状况。我每次回家见到敌人们,第一句就是,要节约用水啊 敌人们,你们当初看到的只是简略了解的"缺水"。你们所不晓得的是,没水小春作物面临绝收、4月份插秧没有水泡秧田、5月份种烤烟也没有水。。。那么对农民就意味着往年一年就没有了支出。咱们当初能致力做好的,只是保障人的饮用水。 上周就在想能不能通过什么渠道帮村民们做点事。叔叔叫我弄个抗旱的基金,他动员四周的敌人来捐献,心愿能口口相传带动多一点敌人。我正在筹备阶段,看怎样才能做到最大的公开通明,让捐献的人齐全释怀。 周一接到一个敌人的电话,说他们公司想为旱灾献点爱心,想买点水送去咱们那儿。昨天见了负责人,很谦和的一个姐姐,她说大家都想做点什么,感觉捐钱没什么意义,想亲自送水到最须要的中央。 其实人家只是家私营的小公司,但我真的很感激他们。姐姐还特地交代我,咱们只须要找拖拉机下来帮忙把水运上山去,其余的什么都不必管,他们会安顿好的。这个周末,将有 400件矿泉水会送到村民家里。我想,应该能够临时缓解旱情。再次代村民感激他们! 下半年,旱情给农民的生产、生存带来的问题还很多。然而我集体的力量很无限,心愿可能看到这些帖子的敌人们,如果有能力,有这份情意的话,请给予旱灾地区的农民更多的帮忙。 我想大家都晓得,昆明80%以上的用水都来自禄劝的云龙水库,云龙的共事们也在"抗旱",他们的工作工作是要保障严格的节约用水,要寻求其余水源用水,从而保障昆明的用水。所以,请每一个昆明人都节水吧,禄劝的很多中央都在缺水,咱们那里不算重大的,请珍惜你们当初在用的每一滴水~ 兴许,要经验过这样一次惊心动魄的大旱才真正晓得水的宝贵。心愿咱们都口头起来,不要再让这样的旱灾侵袭咱们的他乡。'], 'title': '旱情记要-----昆明人,请珍惜你们当初在用的每一滴水~'}

The knowledge graph file has three columns per line, which from left to right describe an entity relation, for example:

红色食品 标签 生存

Data preprocessing

The provided preprocessing script (preprocess/run_local_preprocess.sh) can process the raw data in one pass. After LTP processing, a sample looks as follows:

{"text": 
["我想,如果我没有去做大学生村官,恐怕我这个在昆明长大的孩子永远都不能切身感受到云南这次60年一遇的特大旱情的严重性,恐怕我只是每天看着新闻上那些缺水的镜头,嘴上说要节水,但事实口头放弃不了三天。我任职的中央在昆明市禄劝县的一个村委会,说实话这里间隔禄劝县城不远,自然环境不算很差。目前,只有一个自然村保障不了饮用水。一个自然村根本能保障有饮用水到5月。这里所说的饮用水,是指从山肚子里进去的水,积在小水坝或是水塘里又通过管道输送到村子里的水,和咱们城市里真正意义上消过毒的、能平安饮用的饮用水不同。在整个输送的过程中,可能曾经产生了有害物质。我感觉是。没有饮用水的那个自然村叫大海子村,50户,近200多人。地处山头,交通很不便当,走路大略要1个半小时到两个小时,而且坡度比拟大,是一个苗族村寨。天文条件限度,根本没有什么经济作物,算是靠天吃饭的那种。往年遇到60年一遇的干旱,村里的两个水窖都根本干了,之前几天,他们村长来反映,几个老人曾经是抬个小板凳坐到窖底用碗舀水了。面对这么严厉的旱情,村委会的领导和各小组长都在想方法。然而上山的路路面状况差,大车重车上不去;周边水源地少。最可行的方法就是从武定那边绕路上去。但每天运水上去也不是方法,久远来看还是要建筑一个小水坝。村委会的领导被动捐款,村民也自行筹资,开始自救。最近每个周末都回家,添置防晒品,因为根本每天都上山下村去理解状况,必须把握辖区内13个村小组水资源的状况。我每次回家见到敌人们,第一句就是,要节约用水啊敌人们,你们当初看到的只是简略了解的\"缺水\"。你们所不晓得的是,没水小春作物面临绝收、4月份插秧没有水泡秧田、5月份种烤烟也没有水。。。那么对农民就意味着往年一年就没有了支出。咱们当初能致力做好的,只是保障人的饮用水。上周就在想能不能通过什么渠道帮村民们做点事。叔叔叫我弄个抗旱的基金,他动员四周的敌人来捐献,心愿能口口相传带动多一点敌人。我正在筹备阶段,看怎样才能做到最大的公开通明,让捐献的人齐全释怀。周一接到一个敌人的电话,说他们公司想为旱灾献点爱心,想买点水送去咱们那儿。昨天见了负责人,很谦和的一个姐姐,她说大家都想做点什么,感觉捐钱没什么意义,想亲自送水到最须要的中央。其实人家只是家私营的小公司,但我真的很感激他们。姐姐还特地交代我,咱们只须要找拖拉机下来帮忙把水运上山去,其余的什么都不必管,他们会安顿好的。这个周末,将有400件矿泉水会送到村民家里。我想,应该能够临时缓解旱情。再次代村民感激他们!下半年,旱情给农民的生产、生存带来的问题还很多。然而我集体的力量很无限,心愿可能看到这些帖子的敌人们,如果有能力,有这份情意的话,请给予旱灾地区的农民更多的帮忙。我想大家都晓得,昆明80%以上的用水都来自禄劝的云龙水库,云龙的共事们也在\"抗旱\",他们的工作工作是要保障严格的节约用水,要寻求其余水源用水,从而保障昆明的用水。所以,请每一个昆明人都节水吧,禄劝的很多中央都在缺水,咱们那里不算重大的,请珍惜你们当初在用的每一滴水~兴许,要经验过这样一次惊心动魄的大旱才真正晓得水的宝贵。心愿咱们都口头起来,不要再让这样的旱灾侵袭咱们的他乡。"], "title": "旱情记要-----昆明人,请珍惜你们当初在用的每一滴水~", "seg": ["我", "想", ",", "如果", "我", "没有", "去", "做", "大学生村官", ",", "恐怕", "我", "这个", "在", "昆明长大", "的", "孩子", "永远", "都", "不能切身感受到", "云南", "这次", "60年", "一遇的", "特大旱情", "的", "严重性", ",", "恐怕", "我", "只是", "每天", "看", "着", "新闻上", "那些", "缺水", "的", "镜头", ",", "嘴上说要节水", ",", "但事实口头", "放弃不了", "三天", "。", "我", "任职", "的", "中央", "在", "昆明市禄劝县", "的", "一个", "村委会", ",", "说实话", "这里", "间隔禄", "劝县城不远", ",", "自然环境", "不算", "很差", "。", "目前", ",", "只有", "一个", "自然村", "保障不了", "饮用水", "。", "一个", "自然村", "根本", "能", "保障", "有", "饮用水", "到", "5月", "。", "这里所说", "的", "饮用水", ",", "是指", "从", "山肚子里", "进去", "的", "水", ",积在", "小水坝", "或是", "水塘里", "又", "通过", "管道", "输送到", "村子里", 
"的水", ",", "和", "咱们", "城市里", "真正意义上", "消过毒的", "、能平安", "饮用", "的", "饮用水不同", "。", "在", "整个", "输送", "的", "过程", "中", ",", "可能", "曾经", "产生", "了", "有害物质", "。", "我", "感觉", "是", "。", "没有", "饮用水", "的", "那", "个", "自然村叫大海子村", ",", "50户", ",近", "200多人", "。地处山头", ",", "交通", "很不便当", ",", "走路", "大略要", "1个半小时到", "两个", "小时", ",", "而且坡度", "比拟大", ",", "是", "一个", "苗族村寨", "。天文条件", "限度", ",", "根本", "没有什么经济作物", ",", "算是", "靠天", "吃饭", "的", "那种", "。", "往年", "遇到", "60年一遇", "的", "干旱", ",村里", "的", "两个水窖", "都", "根本", "干", "了", ",之前几天", ",", "他们", "村长来反映", ",", "几个老人", "曾经", "是", "抬个小板凳坐到窖底用碗舀水", "了", "。面对", "这么", "严厉的", "旱情", ",", "村委会", "的领导", "和", "各小组长", "都", "在", "想", "方法", "。", "然而上山的", "路路面状况", "差", ",", "大车重车上不去", ";", "周边水源地少", "。", "最", "可行", "的", "方法", "就是", "从武定", "那边", "绕路上去", "。", "但", "每天运水上"], "ner": [["Ns", 14, 14], ["Ns", 20, 20], ["Ns", 51, 51]], "dep": [[1, 2, "SBV"], [2, 0, "HED"], [3, 2, "WP"], [4, 6, "ADV"], [5, 6, "SBV"], [6, 2, "VOB"], [7, 6, "COO"], [8, 7, "COO"], [9, 8, "VOB"], [10, 8, "WP"], [11, 7, "COO"], [12, 172, "SBV"], [13, 172, "ADV"], [14, 172, "ADV"], [15, 14, "POB"], [16, 172, "RAD"], [17, 172, "SBV"], [18, 172, "ADV"], [19, 172, "ADV"], [20, 172, "ADV"], [21, 172, "SBV"], [22, 172, "ADV"], [23, 172, "ADV"], [24, 172, "ADV"], [25, 172, "ADV"], [26, 172, "RAD"], [27, 172, "VOB"], [28, 156, "WP"], [29, 33, "ADV"], [30, 33, "SBV"], [31, 33, "ADV"], [32, 33, "ADV"], [33, 156, "COO"], [34, 33, "RAD"], [35, 39, "ATT"], [36, 39, "ATT"], [37, 39, "ATT"], [38, 37, "RAD"], [39, 33, "VOB"], [40, 33, "WP"], [41, 44, "ADV"], [42, 44, "WP"], [43, 44, "ADV"], [44, 33, "COO"], [45, 44, "CMP"], [46, 44, "WP"], [47, 48, "SBV"], [48, 55, "ATT"], [49, 48, "RAD"], [50, 51, "POB"], [51, 48, "ADV"], [52, 51, "POB"], [53, 48, "RAD"], [54, 55, "ATT"], [55, 44, "SBV"], [56, 44, "WP"], [57, 44, "ADV"], [58, 59, "SBV"], [59, 44, "ADV"], [60, 44, "COO"], [61, 44, "WP"], [62, 63, "SBV"], [63, 71, "CMP"], [64, 71, "CMP"], [65, 71, "WP"], [66, 71, "ADV"], [67, 71, "WP"], [68, 71, "ADV"], 
[69, 70, "ATT"], [70, 71, "SBV"], [71, 44, "COO"], [72, 71, "COO"], [73, 71, "WP"], [74, 75, "ATT"], [75, 71, "SBV"], [76, 78, "ADV"], [77, 78, "ADV"], [78, 44, "COO"], [79, 78, "VOB"], [80, 78, "VOB"], [81, 80, "CMP"], [82, 81, "POB"], [83, 78, "WP"], [84, 86, "ATT"], [85, 84, "RAD"], [86, 78, "COO"], [87, 78, "WP"], [88, 44, "COO"], [89, 91, "ADV"], [90, 89, "POB"], [91, 93, "ATT"], [92, 91, "RAD"], [93, 101, "VOB"], [94, 101, "WP"], [95, 101, "CMP"], [96, 97, "LAD"], [97, 101, "VOB"], [98, 101, "ADV"], [99, 101, "ADV"], [100, 99, "POB"], [101, 156, "COO"], [102, 101, "SBV"], [103, 101, "RAD"], [104, 101, "WP"], [105, 109, "ADV"], [106, 107, "ATT"], [107, 105, "POB"], [108, 109, "ADV"], [109, 156, "COO"], [110, 111, "WP"], [111, 109, "COO"], [112, 109, "RAD"], [113, 109, "COO"], [114, 129, "WP"], [115, 129, "ADV"], [116, 119, "ATT"], [117, 119, "ATT"], [118, 117, "RAD"], [119, 120, "ATT"], [120, 115, "POB"], [121, 129, "WP"], [122, 124, "ADV"], [123, 124, "ADV"], [124, 129, "COO"], [125, 124, "RAD"], [126, 129, "COO"], [127, 129, "WP"], [128, 129, "SBV"], [129, 109, "COO"], [130, 129, "VOB"], [131, 129, "WP"], [132, 133, "COO"], [133, 109, "COO"], [134, 109, "RAD"], [135, 109, "ADV"], [136, 137, "ATT"], [137, 109, "SBV"], [138, 109, "WP"], [139, 109, "ADV"], [140, 109, "WP"], [141, 109, "ADV"], [142, 109, "WP"], [143, 109, "WP"], [144, 109, "COO"], [145, 109, "ADV"], [146, 156, "WP"], [147, 156, "SBV"], [148, 156, "ADV"], [149, 151, "ATT"], [150, 151, "ATT"], [151, 156, "VOB"], [152, 156, "WP"], [153, 156, "ADV"], [154, 156, "ADV"], [155, 156, "WP"], [156, 167, "COO"], [157, 158, "ATT"], [158, 167, "VOB"], [159, 160, "WP"], [160, 167, "COO"], [161, 160, "WP"], [162, 163, "ADV"], [163, 167, "COO"], [164, 165, "WP"], [165, 163, "COO"], [166, 163, "ADV"], [167, 27, "ATT"], [168, 167, "RAD"], [169, 172, "ADV"], [170, 172, "WP"], [171, 172, "ADV"], [172, 7, "COO"], [173, 175, "ATT"], [174, 175, "RAD"], [175, 172, "VOB"], [176, 175, "WP"], [177, 175, "RAD"], [178, 175, 
"ATT"], [179, 181, "ADV"], [180, 181, "ADV"], [181, 6, "COO"], [182, 181, "RAD"], [183, 181, "WP"], [184, 181, "WP"], [185, 190, "SBV"], [186, 190, "SBV"], [187, 190, "WP"], [188, 190, "SBV"], [189, 190, "ADV"], [190, 181, "COO"], [191, 190, "VOB"], [192, 191, "RAD"], [193, 191, "WP"], [194, 195, "ADV"], [195, 204, "CMP"], [196, 204, "VOB"], [197, 204, "WP"], [198, 204, "SBV"], [199, 198, "RAD"], [200, 204, "LAD"], [201, 204, "SBV"], [202, 204, "ADV"], [203, 204, "ADV"], [204, 191, "COO"], [205, 204, "VOB"], [206, 204, "WP"], [207, 204, "ADV"], [208, 209, "SBV"], [209, 191, "ADV"], [210, 209, "WP"], [211, 209, "SBV"], [212, 209, "WP"], [213, 209, "SBV"], [214, 209, "WP"], [215, 216, "ADV"], [216, 218, "ATT"], [217, 216, "RAD"], [218, 209, "SBV"], [219, 209, "ADV"], [220, 191, "ADV"], [221, 191, "ADV"], [222, 181, "COO"], [223, 181, "WP"], [224, 181, "ADV"], [225, 181, "ADV"]], "sdp": [[1, 2, "AGT"], [1, 129, "AGT"], [2, 0, "Root"], [3, 2, "mPUNC"], [4, 7, "mRELA"], [5, 7, "AGT"], [5, 8, "AGT"], [6, 7, "mNEG"], [6, 8, "mNEG"], [7, 2, "dCONT"], [8, 7, "eSUCC"], [9, 8, "LINK"], [10, 8, "mPUNC"], [11, 8, "eSUCC"], [12, 172, "EXP"], [13, 172, "SCO"], [14, 15, "mRELA"], [15, 167, "LOC"], [15, 172, "LOC"], [16, 172, "mDEPD"], [17, 172, "EXP"], [18, 172, "mDEPD"], [19, 172, "mDEPD"], [20, 172, "mNEG"], [21, 172, "AGT"], [22, 172, "SCO"], [23, 172, "EXP"], [24, 172, "EXP"], [25, 172, "MANN"], [26, 172, "mDEPD"], [27, 172, "CONT"], [28, 172, "mPUNC"], [29, 33, "mDEPD"], [30, 33, "AGT"], [31, 33, "mDEPD"], [32, 33, "mDEPD"], [33, 172, "eSUCC"], [34, 33, "mDEPD"], [35, 39, "FEAT"], [36, 39, "SCO"], [37, 39, "rEXP"], [38, 37, "mDEPD"], [39, 33, "CONT"], [40, 33, "mPUNC"], [41, 44, "LOC"], [42, 44, "mPUNC"], [43, 44, "mRELA"], [44, 33, "eSUCC"], [45, 44, "TIME"], [46, 44, "mPUNC"], [47, 48, "AGT"], [47, 60, "PAT"], [47, 204, "AGT"], [48, 55, "rDATV"], [49, 48, "mDEPD"], [50, 48, "LOC"], [51, 50, "mRELA"], [52, 50, "FEAT"], [53, 48, "mDEPD"], [54, 55, "MEAS"], [55, 44, "EXP"], 
[56, 44, "mPUNC"], [57, 44, "eCOO"], [58, 59, "EXP"], [59, 44, "eSUCC"], [60, 44, "eSUCC"], [61, 60, "mPUNC"], [62, 60, "PAT"], [63, 60, "eSUCC"], [64, 63, "mDEPD"], [65, 71, "mPUNC"], [66, 71, "TIME"], [67, 66, "mPUNC"], [68, 71, "mDEPD"], [69, 70, "MEAS"], [70, 71, "AGT"], [71, 60, "eCOO"], [72, 71, "dCONT"], [73, 72, "mPUNC"], [74, 75, "MEAS"], [75, 72, "AGT"], [76, 78, "mDEPD"], [77, 78, "mDEPD"], [78, 44, "eSUCC"], [79, 78, "dCONT"], [80, 79, "LINK"], [81, 79, "eCOO"], [82, 81, "TIME"], [83, 91, "mPUNC"], [84, 91, "LOC"], [85, 91, "mDEPD"], [86, 88, "EXP"], [87, 88, "mPUNC"], [88, 91, "mDEPD"], [89, 90, "mRELA"], [90, 91, "LOC"], [91, 79, "eSUCC"], [92, 91, "mDEPD"], [93, 79, "EXP"], [94, 93, "mPUNC"], [95, 93, "FEAT"], [96, 97, "mRELA"], [97, 93, "eCOO"], [98, 78, "mDEPD"], [99, 100, "mRELA"], [100, 101, "MANN"], [101, 78, "dCONT"], [102, 107, "FEAT"], [102, 109, "SCO"], [103, 109, "mDEPD"], [104, 109, "mPUNC"], [105, 107, "mRELA"], [106, 107, "FEAT"], [107, 109, "SCO"], [108, 109, "mDEPD"], [109, 101, "ePREC"], [110, 109, "mPUNC"], [111, 109, "eSUCC"], [112, 109, "mDEPD"], [113, 109, "eSUCC"], [114, 129, "mPUNC"], [115, 119, "mRELA"], [116, 119, "SCO"], [117, 119, "FEAT"], [118, 117, "mDEPD"], [119, 124, "STAT"], [120, 119, "mDEPD"], [121, 119, "mPUNC"], [122, 124, "mDEPD"], [123, 124, "mDEPD"], [124, 129, "dCONT"], [125, 124, "mDEPD"], [126, 129, "dCONT"], [127, 129, "mPUNC"], [128, 129, "AGT"], [129, 109, "eSUCC"], [130, 129, "dCONT"], [131, 129, "mPUNC"], [132, 109, "ePREC"], [133, 109, "eSUCC"], [134, 109, "mDEPD"], [135, 109, "SCO"], [136, 137, "MEAS"], [137, 109, "FEAT"], [138, 109, "mPUNC"], [139, 109, "FEAT"], [140, 109, "mPUNC"], [141, 109, "FEAT"], [142, 109, "mPUNC"], [143, 8, "mPUNC"], [143, 109, "mPUNC"], [144, 109, "eSUCC"], [145, 160, "mDEPD"], [146, 160, "mPUNC"], [147, 160, "eSUCC"], [148, 147, "mDEPD"], [149, 147, "MEAS"], [150, 151, "MEAS"], [151, 147, "TIME"], [152, 147, "mPUNC"], [153, 156, "mRELA"], [154, 156, "mRELA"], [155, 156, 
"mPUNC"], [156, 160, "mDEPD"], [157, 158, "MEAS"], [158, 156, "LINK"], [159, 156, "mPUNC"], [160, 109, "eSUCC"], [161, 160, "mPUNC"], [162, 163, "mDEPD"], [163, 160, "dEXP"], [164, 165, "mPUNC"], [165, 160, "eCOO"], [166, 167, "mRELA"], [167, 165, "dEXP"], [168, 167, "mDEPD"], [169, 167, "SCO"], [170, 160, "mPUNC"], [171, 172, "TIME"], [172, 11, "dCONT"], [173, 8, "MEAS"], [174, 175, "mDEPD"], [175, 8, "eSUCC"], [176, 175, "mPUNC"], [177, 181, "mDEPD"], [178, 181, "EXP"], [179, 181, "mDEPD"], [180, 181, "mDEPD"], [181, 2, "dCONT"], [182, 44, "mDEPD"], [182, 209, "mDEPD"], [183, 209, "mPUNC"], [184, 209, "mPUNC"], [185, 44, "AGT"], [185, 209, "EXP"], [186, 185, "eCOO"], [187, 209, "mPUNC"], [188, 191, "MEAS"], [189, 191, "mDEPD"], [190, 191, "eSUCC"], [191, 209, "dEXP"], [192, 204, "mDEPD"], [193, 204, "mPUNC"], [194, 195, "SCO"], [195, 204, "FEAT"], [196, 204, "CONT"], [197, 204, "mPUNC"], [198, 204, "AGT"], [199, 48, "mDEPD"], [199, 198, "mDEPD"], [200, 204, "mRELA"], [201, 204, "AGT"], [202, 204, "mDEPD"], [203, 204, "eCOO"], [204, 209, "ePREC"], [205, 204, "CONT"], [206, 204, "mPUNC"], [207, 204, "mDEPD"], [208, 204, "LOC"], [209, 181, "eSUCC"], [210, 209, "mPUNC"], [211, 209, "EXP"], [212, 209, "mPUNC"], [213, 209, "EXP"], [214, 209, "mPUNC"], [215, 216, "mDEPD"], [216, 218, "FEAT"], [217, 37, "mDEPD"], [217, 216, "mDEPD"], [218, 209, "EXP"], [219, 209, "mDEPD"], [220, 221, "mRELA"], [221, 209, "LOC"], [222, 181, "eSUCC"], [223, 181, "mPUNC"], [224, 181, "mRELA"], [225, 181, "SCO"]]}紧接着,调用相应的mask策略对数据进行解决,解决后的数据样例如下:[['[CLS]', '我', '想', ',', '如', '果', '我', '没', '有', '去', '做', '大', '学', '生', '村', '官', ',', '恐', '怕', '我', '这', '个', '在', '昆', '明', '长', '大', '的', '孩', '子', '永', '远', '都', '不', '能', '切', '身', '感', '受', '到', '云', '南', '这', '次', '6', '0', '年', '一', '遇', '的', '特', '大', '旱', '情', '的', '严', '重', '性', ',', '恐', '怕', '[sdp]', '我', '[sdp]', '只', '是', '每', '天', '[sdp]', '看', '[sdp]', '着', '新', '闻', '上', '那', '些', '缺', '水', '的', '镜', '头', ',', '嘴', '上', '说', 
'要', '节', '水', ',', '但', '事', '实', '行', '动', '保', '持', '不', '了', '三', '天', '。', '我', '任', '职', '的', '地', '方', '在', '昆', '明', '市', '禄', '劝', '县', '的', '一', '个', '村', '委', '会', ',', '说', '实', '话', '这', '里', '[SEP]'], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ['昆明长大', '云南', '昆明市禄劝县']]

Pre-training script

Once data processing is complete, the pre-training script can be invoked to pre-train the model:

gpu_number=1
negative_e_number=4
negative_e_length=16
base_dir=$PWD
checkpoint_dir=$base_dir/checkpoints
resources=$base_dir/resources
local_kg=$resources/ownthink_triples_small.txt
local_train_file=$resources/train_small.txt
remote_kg=https://atp-modelzoo-sh.oss-c...
remote_train_file=https://atp-modelzoo-sh.oss-c...
if [ ! -d $checkpoint_dir ];then
mkdir $checkpoint_dir
fi
if [ ! -d $resources ];then
mkdir $resources
fi
if [ ! -f $local_kg ];then
wget -P $resources $remote_kg
fi
if [ ! -f $local_train_file ];then
wget -P $resources $remote_train_file
fi
python -m torch.distributed.launch --nproc_per_node=$gpu_number \
--master_port=52349 \
$base_dir/main.py \
--mode=train \
--worker_gpu=$gpu_number \
--tables=$local_train_file, \
--learning_rate=5e-5 \
--epoch_num=5 \
--logging_steps=10 \
--save_checkpoint_steps=2150 \
--sequence_length=256 \
--train_batch_size=20 \
--checkpoint_dir=$checkpoint_dir \
--app_name=language_modeling \
--use_amp \
--save_all_checkpoints \
--user_defined_parameters="pretrain_model_name_or_path=hfl/macbert-base-zh external_mask_flag=True contrast_learning_flag=True negative_e_number=${negative_e_number} negative_e_length=${negative_e_length} kg_path=${local_kg}"模型FinetuneCKBERT模型与BERT是同样的架构,只须要应用通用的EasyNLP框架命令就能够进行调用。以下命令别离为Train和Predict状态的例子,应用的模型为ckbert-base。以后在EasyNLP框架中也能够调用large和huge模型进行测试,只须要替换命令中的参数即可pretrain_model_name_or_path=alibaba-pai/pai-ckbert-large-zhpretrain_model_name_or_path=alibaba-pai/pai-ckbert-huge-zh$ easynlp \
--mode=train \
--worker_gpu=1 \
--tables=train.tsv,dev.tsv \
--input_schema=label:str:1,sid1:str:1,sid2:str:1,sent1:str:1,sent2:str:1 \
--first_sequence=sent1 \
--label_name=label \
--label_enumerate_values=0,1 \
--checkpoint_dir=./classification_model \
--epoch_num=1 \
--sequence_length=128 \
--app_name=text_classify \
--user_defined_parameters='pretrain_model_name_or_path=alibaba-pai/pai-ckbert-base-zh'

$ easynlp \
--mode=predict \
--tables=dev.tsv \
--outputs=dev.pred.tsv \
--input_schema=label:str:1,sid1:str:1,sid2:str:1,sent1:str:1,sent2:str:1 \
--output_schema=predictions,probabilities,logits,output \
--append_cols=label \
--first_sequence=sent1 \
--checkpoint_path=./classification_model \
--app_name=text_classify

Using CKBERT on HuggingFace

To make CKBERT easy to use for the open-source community, we have also published the three CKBERT models on HuggingFace Models; their model card is shown below:
用户也能够间接应用HuggingFace提供的pipeline进行模型推理,样例如下:from transformers import AutoTokenizer, AutoModelForMaskedLM, FillMaskPipeline
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/pai-ckbert-base-zh", use_auth_token=True)
model = AutoModelForMaskedLM.from_pretrained("alibaba-pai/pai-ckbert-base-zh", use_auth_token=True)
unmasker = FillMaskPipeline(model, tokenizer)
unmasker("巴黎是[MASK]国的首都。",top_k=5)
[
    {'score': 0.8580496311187744, 'token': 3791, 'token_str': '法', 'sequence': '巴 黎 是 法 国 的 首 都 。'},
    {'score': 0.08550138026475906, 'token': 2548, 'token_str': '德', 'sequence': '巴 黎 是 德 国 的 首 都 。'},
    {'score': 0.023137662559747696, 'token': 5401, 'token_str': '美', 'sequence': '巴 黎 是 美 国 的 首 都 。'},
    {'score': 0.012281022034585476, 'token': 5739, 'token_str': '英', 'sequence': '巴 黎 是 英 国 的 首 都 。'},
    {'score': 0.005729076452553272, 'token': 704, 'token_str': '中', 'sequence': '巴 黎 是 中 国 的 首 都 。'}
]

Alternatively, the model can be loaded with PyTorch, for example:

from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/pai-ckbert-base-zh", use_auth_token=True)
model = AutoModelForMaskedLM.from_pretrained("alibaba-pai/pai-ckbert-base-zh", use_auth_token=True)
text = "巴黎是[MASK]国的首都。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)

Using CKBERT on the Alibaba Cloud machine learning platform PAI

PAI-DSW (Data Science Workshop) is a cloud IDE developed for the Alibaba Cloud machine learning platform PAI. It provides an interactive programming environment for developers of all levels (documentation). The DSW Gallery offers a variety of example notebooks that help users get started with DSW and build various machine learning applications. We have also published a sample notebook in the DSW Gallery that uses CKBERT for Chinese named entity recognition (see the figure below); you are welcome to try it out!
Future outlook

Going forward, we plan to integrate more Chinese knowledge models into the EasyNLP framework, covering common Chinese domains; stay tuned. We will also integrate more SOTA models into EasyNLP (especially Chinese models) to support various NLP and multi-modal tasks. In addition, the Alibaba Cloud machine learning PAI team keeps pushing forward its in-house work on Chinese multi-modal models. We welcome you to keep following us, and to join our open-source community to build Chinese NLP and multi-modal algorithm libraries together!

Github: https://github.com/alibaba/Ea...

References

- Wang, Minghui Qiu, Taolin Zhang, Tingting Liu, Lei Li, Jianing Wang, Ming Wang, Jun Huang, Wei Lin. EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing. EMNLP 2022.
- Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He. Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training. EMNLP 2022.
- Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu. Revisiting Pre-Trained Models for Chinese Natural Language Processing. EMNLP (Findings) 2020.
- Yiming Cui, Ziqing Yang, Ting Liu. PERT: Pre-training BERT with Permuted Language Model. arXiv.
- Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. ERNIE: Enhanced Representation through Knowledge Integration. arXiv.
- Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, Dongyan Zhao. Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models. NAACL 2021.
- Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, Ping Wang. K-BERT: Enabling Language Representation with Knowledge Graph. AAAI 2020.
- Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu. ERNIE: Enhanced Language Representation with Informative Entities. ACL 2019.

A look back at Alibaba Lingjie

- Alibaba Lingjie: Alibaba Cloud machine learning PAI open-sources the Chinese NLP algorithm framework EasyNLP to help land large NLP models
- Alibaba Lingjie: First place in a knowledge pre-training competition; Alibaba Cloud PAI releases a knowledge pre-training tool
- Alibaba Lingjie: CLIP image-text retrieval with EasyNLP
- Alibaba Lingjie: EasyNLP's Chinese text-to-image model turns you into an artist in seconds
- Alibaba Lingjie: EasyNLP integrates the K-BERT algorithm for better fine-tuning with knowledge graphs
- Alibaba Lingjie: Landing a Chinese sparse GPT model, a key milestone toward low-cost, high-performance multi-task general natural language understanding
- Alibaba Lingjie: Text summarization (news headline) generation with EasyNLP
- Alibaba Lingjie: Cross-modal learning upgraded again; EasyNLP sets a new SOTA for e-commerce image-text retrieval
- Alibaba Lingjie: Chinese and English machine reading comprehension with EasyNLP
Original link: https://click.aliyun.com/m/10...
This article is original content from Alibaba Cloud and may not be reproduced without permission.