[Abstract] This hands-on exercise combines a modern artificial-intelligence (deep learning) algorithm with Tsinghua University's open-source speech dataset THCHS30 to walk through speech recognition in practice, so that readers understand both the basic principles of speech recognition and the related AI concepts and applications. Through this exercise you will learn how to build a DFCNN speech-recognition network with Keras and TensorFlow and become familiar with the whole workflow: data preprocessing, model training, model saving, and model prediction.

Workflow: basic environment setup -> OBS preparation -> using ModelArts -> speech-recognition steps -> language-model steps.

1. Basic environment setup

Before developing AI with ModelArts, complete the following basic steps first (skip any you have already done). There are four: register -> real-name verification -> service authorization -> claim vouchers. Register a Huawei Cloud account with your phone number: click Register. Then complete real-name verification; choose "Individual" as the account type, and "scan-code verification" is recommended for individual verification.
Open the ModelArts console Data Management page; a prompt for access authorization appears at the top. Click the [Service Authorization] button and follow the steps in the figure below:
Go to the ModelArts console home page, click the "Easter egg" on the page (see figure), and claim the newcomer vouchers. Later steps may incur resource charges, so be sure to claim them.
A detailed video tutorial of the steps above is also available, see: ModelArts environment setup.
Detailed steps for deep-learning-based speech recognition

What is OBS? Object Storage Service (OBS) is an object-based mass storage service that gives customers massive, secure, highly reliable, low-cost data storage, including creating, modifying, and deleting buckets, and uploading, downloading, and deleting objects.

2. OBS preparation

1) Upload the locally prepared data.zip and the speech data package data_thchs30.tar to OBS for the later steps. Create an OBS bucket: move the cursor to the left sidebar and in the pop-up menu choose "Service List" -> "Storage" -> "Object Storage Service OBS", as shown below:
On the OBS page, click Create Bucket and configure the parameters as follows. Region: CN North-Beijing4; Data redundancy policy: Multi-AZ storage; Bucket name: custom (note it down, it is used later); Storage class: Standard; Bucket policy: Private; Default encryption: Disabled; Direct reading of archived data: Disabled. Click "Create Now"; after creation you are redirected to the bucket list, as shown below:
2) Create an AK/SK. Log in to Huawei Cloud and click "Console" at the top right of the page to enter the Huawei Cloud management console. Figure 1: Console entry
Under the account name at the top right of the console, click "My Credentials" to open the "My Credentials" page. Figure 2: My Credentials
On the "My Credentials" page, choose "Access Keys > Create Access Key", as shown in Figure 3. Figure 3: Create Access Key
Enter a description for the key and click "OK". Follow the prompt and click "Download Now" to download the key. Figure 4: New access key
The key file is saved directly to the browser's default download folder. Open the file named "credentials.csv" to view the access key (Access Key Id and Secret Access Key).

3) Install the OBS client. First download the OBS tool to the cloud server: open a command-line window on your machine and run:

mkdir /home/user/Desktop/data; cd /home/user/Desktop/data; wget https://obs-community.obs.cn-…

Then extract the archive and list the current directory:

tar -zxf obsutil_linux_amd64.tar.gz; ls -l
Run the following command to configure the OBS tool. Note: fill in the AK/SK you obtained after the -i and -k parameters.
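For reference, the configuration command takes roughly this shape (a sketch, not verbatim from the article; the AK/SK placeholders must be replaced with the values from credentials.csv, and the endpoint shown assumes the CN North-Beijing4 region chosen above):

```shell
# Placeholders: substitute the Access Key Id / Secret Access Key downloaded
# in credentials.csv; the endpoint must match the bucket's region.
./obsutil config -i=YOUR_ACCESS_KEY_ID -k=YOUR_SECRET_ACCESS_KEY -e=obs.cn-north-4.myhuaweicloud.com
```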
Run the following command to list objects and verify the configuration: ./obsutil ls. A successful result looks like this:
4) Upload the speech data. Run the following commands to download the exercise materials. Note: the speech data is large (over 7 GB), so please be patient.

cd ../; wget https://sandbox-experiment-re…; wget https://sandbox-experiment-re…
After the download finishes, list the files with: ll
Enter the following commands to upload the downloaded speech files to the OBS bucket nlpdemo created earlier:

./obsutil_linux_amd64_5./obsutil cp ./data.zip obs://nlpdemo; ./obsutil_linux_amd64_5./obsutil cp ./data_thchs30.tar obs://nlpdemo

The upload takes about 60 minutes; the rate is slow (it would be nice if Huawei Cloud optimized OBS upload speed). After it completes, in the Huawei Cloud console choose "Console" -> "Service List" -> "Storage" -> "Object Storage Service OBS", click the bucket name nlpdemo to open its details page, choose "Objects" on the left, and the uploaded files appear on the right.
3. Using ModelArts

What is ModelArts? ModelArts is a one-stop development platform for AI developers. It provides mass data preprocessing and semi-automated labeling, large-scale distributed training, automated model generation, and on-demand model deployment across device, edge, and cloud, helping users create and deploy models quickly and manage the full AI workflow lifecycle.

1) Create a notebook. Find and open the ModelArts service in the service list, then click [DevEnviron] -> [Notebook] on the left side of the ModelArts page to open the notebook page. Click [Create] and configure as follows: Name: custom; Auto stop: 12 hours (shown after choosing a non-free flavor); Image: on the second page, choose the public image tensorflow1.13-cuda10.0-cudnn7-ubuntu18.0.4, a base image for GPU algorithm development and training with the TensorFlow 1.13.1 engine preinstalled; Resource pool: public; Type: CPU; Flavor: 8 vCPUs, 64 GiB; Storage: EVS disk (30 GB). Click "Next" -> "Submit" -> "Back to Notebook List". The notebook list is shown below. Note: after about 3 minutes the status changes from "Starting" to "Running"; click "Refresh" at the top right of the list to see the latest status.
4. Speech recognition

We use CNN+CTC for speech recognition.

1) Import packages. After creation, return to the notebook list and wait for the status to become "Running" (about 3 minutes), then click "Open" to enter the notebook details page. On the page choose "TensorFlow-1.13.1", as shown below:
In the input cell of the new Python environment, enter the following code:

import moxing as mox
import numpy as np
import scipy.io.wavfile as wav
from scipy.fftpack import fft
import matplotlib.pyplot as plt
%matplotlib inline
import keras
from keras.layers import Input, Conv2D, BatchNormalization, MaxPooling2D
from keras.layers import Reshape, Dense, Lambda
from keras.optimizers import Adam
from keras import backend as K
from keras.models import Model
from keras.utils import multi_gpu_model
import os
import pickle

Click the small triangle next to the cell ("run") and check the result, as shown below:
2) Data preparation. In the next empty cell, enter the following code to copy the data uploaded to OBS into the current directory. Note: the highlighted part below is the name of the OBS bucket created earlier. Click "run" and check the result, as shown below:

current_path = os.getcwd()
mox.file.copy('s3://nlpdemo/data.zip', current_path + '/data.zip')
mox.file.copy('s3://nlpdemo/data_thchs30.tar', current_path + '/data_thchs30.tar')
In the next empty cell, enter the following code to extract the data:

!unzip data.zip
!tar -xvf data_thchs30.tar
Click "run" and check the result, as shown below:
3) Data processing. In the next empty cell, enter the following code to build the lists of audio files and label files. Note: consider the inputs and outputs the network receives during training. The samples within a batch must share one shape, [batch_size, time_step, feature_dim], but each sample read from disk has a different length along the time axis, so the time axis must be padded to the longest sample in the batch. All samples in a batch then have the same shape and can be trained in parallel.

source_file = 'data/thchs_train.txt'
def source_get(source_file):
    train_file = source_file
    label_data = []
    wav_lst = []
    with open(train_file, "r", encoding="utf-8") as f:
        lines = f.readlines()
    for line in lines:
        datas = line.split("\t")
        wav_lst.append(datas[0])
        label_data.append(datas[1])
    return label_data, wav_lst
label_data, wav_lst = source_get(source_file)
print(label_data[:10])
print(wav_lst[:10])
Click "run" and check the result, as shown below:
In the next empty cell, enter the following code to process the labels (build a pinyin-to-id mapping for the labels, i.e. a vocabulary):

def mk_vocab(label_data):
    vocab = []
    for line in label_data:
        line = line.split(' ')
        for pny in line:
            if pny not in vocab:
                vocab.append(pny)
    vocab.append('_')
    return vocab
vocab = mk_vocab(label_data)
def word2id(line, vocab):
    return [vocab.index(pny) for pny in line.split(' ')]
label_id = word2id(label_data[0], vocab)
print(label_data[0])
print(label_id)
Click "run" and check the result, as shown below:
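To make the mapping concrete, here is a toy round-trip (a standalone restatement of `mk_vocab`/`word2id` above on made-up pinyin labels, not the THCHS30 data):

```python
def mk_vocab(label_data):
    # collect every pinyin token once, in order of first appearance
    vocab = []
    for line in label_data:
        for pny in line.split(' '):
            if pny not in vocab:
                vocab.append(pny)
    vocab.append('_')  # blank symbol used by CTC
    return vocab

def word2id(line, vocab):
    return [vocab.index(pny) for pny in line.split(' ')]

toy_labels = ['ni3 hao3', 'hao3 de5']
vocab = mk_vocab(toy_labels)
print(vocab)                          # ['ni3', 'hao3', 'de5', '_']
print(word2id(toy_labels[1], vocab))  # [1, 2]
```

The id of a pinyin token is simply its position in the vocabulary, so the mapping can be inverted later during decoding by indexing back into the list.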
In the next empty cell, enter the following code to process the audio data:

def compute_fbank(file):
    x = np.linspace(0, 400 - 1, 400, dtype=np.int64)
    w = 0.54 - 0.46 * np.cos(2 * np.pi * (x) / (400 - 1))
    fs, wavsignal = wav.read(file)
    time_window = 25
    window_length = fs / 1000 * time_window
    wav_arr = np.array(wavsignal)
    wav_length = len(wavsignal)
    range0_end = int(len(wavsignal) / fs * 1000 - time_window) // 10
    data_input = np.zeros((range0_end, 200), dtype=np.float)
    data_line = np.zeros((1, 400), dtype=np.float)
    for i in range(0, range0_end):
        p_start = i * 160
        p_end = p_start + 400
        data_line = wav_arr[p_start:p_end]
        data_line = data_line * w
        data_line = np.abs(fft(data_line))
        data_input[i] = data_line[0:200]
    data_input = np.log(data_input + 1)
    #data_input = data_input[::]
    return data_input
fbank = compute_fbank(wav_lst[0])
print(fbank.shape)

Click "run" and check the result, as shown below:
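The framing arithmetic in `compute_fbank` can be sanity-checked in isolation: at 16 kHz, a 25 ms window is 400 samples and a 10 ms hop is 160 samples, so the number of frames is `(duration_ms - 25) // 10` (a standalone check, not part of the original notebook):

```python
def frame_count(n_samples, fs=16000, win_ms=25, hop_ms=10):
    # same formula as range0_end in compute_fbank above
    return int(n_samples / fs * 1000 - win_ms) // hop_ms

# one second of 16 kHz audio -> (1000 - 25) // 10 = 97 frames,
# each frame covering samples [i*160, i*160 + 400)
print(frame_count(16000))  # 97
```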
In the next empty cell, enter the following code to create the data generator:

total_nums = 10000
batch_size = 4
batch_num = total_nums // batch_size
from random import shuffle
shuffle_list = list(range(10000))
shuffle(shuffle_list)
def get_batch(batch_size, shuffle_list, wav_lst, label_data, vocab):
    for i in range(10000 // batch_size):
        wav_data_lst = []
        label_data_lst = []
        begin = i * batch_size
        end = begin + batch_size
        sub_list = shuffle_list[begin:end]
        for index in sub_list:
            fbank = compute_fbank(wav_lst[index])
            fbank = fbank[:fbank.shape[0] // 8 * 8, :]
            label = word2id(label_data[index], vocab)
            wav_data_lst.append(fbank)
            label_data_lst.append(label)
        yield wav_data_lst, label_data_lst
batch = get_batch(4, shuffle_list, wav_lst, label_data, vocab)
wav_data_lst, label_data_lst = next(batch)
for wav_data in wav_data_lst:
    print(wav_data.shape)
for label_data in label_data_lst:
    print(label_data)
lens = [len(wav) for wav in wav_data_lst]
print(max(lens))
print(lens)
def wav_padding(wav_data_lst):
    wav_lens = [len(data) for data in wav_data_lst]
    wav_max_len = max(wav_lens)
    wav_lens = np.array([leng // 8 for leng in wav_lens])
    new_wav_data_lst = np.zeros((len(wav_data_lst), wav_max_len, 200, 1))
    for i in range(len(wav_data_lst)):
        new_wav_data_lst[i, :wav_data_lst[i].shape[0], :, 0] = wav_data_lst[i]
    return new_wav_data_lst, wav_lens
pad_wav_data_lst, wav_lens = wav_padding(wav_data_lst)
print(pad_wav_data_lst.shape)
print(wav_lens)
def label_padding(label_data_lst):
    label_lens = np.array([len(label) for label in label_data_lst])
    max_label_len = max(label_lens)
    new_label_data_lst = np.zeros((len(label_data_lst), max_label_len))
    for i in range(len(label_data_lst)):
        new_label_data_lst[i][:len(label_data_lst[i])] = label_data_lst[i]
    return new_label_data_lst, label_lens
pad_label_data_lst, label_lens = label_padding(label_data_lst)
print(pad_label_data_lst.shape)
print(label_lens)

The code executes successfully, as shown below:
The output is shown below:
In the next empty cell, enter the following code and click "run" to create the generator that yields data in the training format (this cell produces no output):
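The cell for this step does not survive in the article as captured, so here is a minimal sketch of what such a `data_generator` has to produce: batches shaped for the CTC model built in the next step, with inputs named `the_inputs`, `the_labels`, `input_length`, `label_length` and a dummy `ctc` output. The helpers are restated (using the standard-library `wave` module and `numpy.fft` in place of `scipy`) so the snippet runs standalone; treat it as an illustration consistent with the code above, not the author's original cell.

```python
import wave
import numpy as np

def read_wav(path):
    # minimal mono 16-bit WAV reader (stand-in for scipy.io.wavfile.read)
    with wave.open(path, 'rb') as f:
        fs = f.getframerate()
        sig = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16).astype(float)
    return fs, sig

def compute_fbank_sig(fs, wavsignal):
    # 200-dim log spectrogram: 25 ms Hamming window, 10 ms hop (as in compute_fbank above)
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(400) / 399)
    range0_end = int(len(wavsignal) / fs * 1000 - 25) // 10
    data_input = np.zeros((range0_end, 200))
    for i in range(range0_end):
        frame = wavsignal[i * 160:i * 160 + 400] * w
        data_input[i] = np.abs(np.fft.fft(frame))[:200]
    return np.log(data_input + 1)

def wav_padding(wav_data_lst):
    wav_lens = np.array([len(d) // 8 for d in wav_data_lst])  # length after 3 poolings
    wav_max_len = max(len(d) for d in wav_data_lst)
    new_wav = np.zeros((len(wav_data_lst), wav_max_len, 200, 1))
    for i, d in enumerate(wav_data_lst):
        new_wav[i, :d.shape[0], :, 0] = d
    return new_wav, wav_lens

def label_padding(label_data_lst):
    label_lens = np.array([len(l) for l in label_data_lst])
    new_lab = np.zeros((len(label_data_lst), max(label_lens)))
    for i, l in enumerate(label_data_lst):
        new_lab[i, :len(l)] = l
    return new_lab, label_lens

def data_generator(batch_size, shuffle_list, wav_lst, label_data, vocab):
    # yields (inputs, outputs) in the format fit_generator expects
    # for the CTC model built in the next step
    for i in range(len(shuffle_list) // batch_size):
        sub_list = shuffle_list[i * batch_size:(i + 1) * batch_size]
        wav_data_lst, label_data_lst = [], []
        for index in sub_list:
            fs, sig = read_wav(wav_lst[index])
            fbank = compute_fbank_sig(fs, sig)
            wav_data_lst.append(fbank[:fbank.shape[0] // 8 * 8, :])  # trim to a multiple of 8 frames
            label_data_lst.append([vocab.index(p) for p in label_data[index].split(' ')])
        pad_wav_data, input_length = wav_padding(wav_data_lst)
        pad_label_data, label_length = label_padding(label_data_lst)
        inputs = {'the_inputs': pad_wav_data,
                  'the_labels': pad_label_data,
                  'input_length': input_length,
                  'label_length': label_length}
        outputs = {'ctc': np.zeros(pad_wav_data.shape[0])}  # dummy target; the loss is computed in-graph
        yield inputs, outputs
```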
4) Model building. Enter the following code. Note: the training input is the spectrogram, the labels are the corresponding pinyin sequences, and the model uses a CNN+CTC structure.

def conv2d(size):
    return Conv2D(size, (3, 3), use_bias=True, activation='relu',
                  padding='same', kernel_initializer='he_normal')
def norm(x):
    return BatchNormalization(axis=-1)(x)
def maxpool(x):
    return MaxPooling2D(pool_size=(2, 2), strides=None, padding="valid")(x)
def dense(units, activation="relu"):
    return Dense(units, activation=activation, use_bias=True, kernel_initializer='he_normal')
def cnn_cell(size, x, pool=True):
    x = norm(conv2d(size)(x))
    x = norm(conv2d(size)(x))
    if pool:
        x = maxpool(x)
    return x
def ctc_lambda(args):
    labels, y_pred, input_length, label_length = args
    y_pred = y_pred[:, :, :]
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
class Amodel():
    """docstring for Amodel."""
    def __init__(self, vocab_size):
        super(Amodel, self).__init__()
        self.vocab_size = vocab_size
        self._model_init()
        self._ctc_init()
        self.opt_init()
    def _model_init(self):
        self.inputs = Input(name='the_inputs', shape=(None, 200, 1))
        self.h1 = cnn_cell(32, self.inputs)
        self.h2 = cnn_cell(64, self.h1)
        self.h3 = cnn_cell(128, self.h2)
        self.h4 = cnn_cell(128, self.h3, pool=False)
        # 200 / 8 * 128 = 3200
        self.h6 = Reshape((-1, 3200))(self.h4)
        self.h7 = dense(256)(self.h6)
        self.outputs = dense(self.vocab_size, activation='softmax')(self.h7)
        self.model = Model(inputs=self.inputs, outputs=self.outputs)
    def _ctc_init(self):
        self.labels = Input(name='the_labels', shape=[None], dtype='float32')
        self.input_length = Input(name='input_length', shape=[1], dtype='int64')
        self.label_length = Input(name='label_length', shape=[1], dtype='int64')
        self.loss_out = Lambda(ctc_lambda, output_shape=(1,), name='ctc')\
            ([self.labels, self.outputs, self.input_length, self.label_length])
        self.ctc_model = Model(inputs=[self.labels, self.inputs,
                                       self.input_length, self.label_length], outputs=self.loss_out)
    def opt_init(self):
        opt = Adam(lr=0.0008, beta_1=0.9, beta_2=0.999, decay=0.01, epsilon=10e-8)
        #self.ctc_model = multi_gpu_model(self.ctc_model, gpus=2)
        self.ctc_model.compile(loss={'ctc': lambda y_true, output: output}, optimizer=opt)
am = Amodel(len(vocab))
am.ctc_model.summary()
Click "run"; the result is shown below:
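The `Reshape((-1, 3200))` in `_model_init` follows from shape arithmetic: the three pooled `cnn_cell`s halve the 200-bin frequency axis three times, and the last cell has 128 channels, so each time step carries 25 x 128 = 3200 features. A standalone check of the `# 200 / 8 * 128 = 3200` comment:

```python
freq_dim, channels, n_pools = 200, 128, 3
for _ in range(n_pools):
    freq_dim //= 2  # each MaxPooling2D(pool_size=(2, 2)) halves the frequency axis
print(freq_dim * channels)  # 3200
```

The time axis is halved by the same three poolings, which is why `wav_padding` divides the frame counts by 8 to produce `input_length` for the CTC loss.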
5) Train the model. Enter the following code to create and train the speech-recognition model:

total_nums = 100
batch_size = 20
batch_num = total_nums // batch_size
epochs = 8
source_file = 'data/thchs_train.txt'
label_data,wav_lst = source_get(source_file)
vocab = mk_vocab(label_data)
vocab_size = len(vocab)
print(vocab_size)
shuffle_list = list(range(100))
am = Amodel(vocab_size)
for k in range(epochs):
    print('this is the', k+1, 'th epochs training !!!')
    #shuffle(shuffle_list)
    batch = data_generator(batch_size, shuffle_list, wav_lst, label_data, vocab)
    am.ctc_model.fit_generator(batch, steps_per_epoch=batch_num, epochs=1)
The result is shown below (this takes about 11 minutes; for better results, increase the epochs parameter to 50):
6) Save the model. Save the trained model to OBS. Enter the following code (this cell produces no output). Note: use the name of the OBS bucket you created.

am.model.save("asr-model.h5")
with open("vocab", "wb") as fw:
    pickle.dump(vocab, fw)
mox.file.copy("asr-model.h5", 's3://nlpdemo/asr-model.h5')
mox.file.copy("vocab", 's3://nlpdemo/vocab')
7) Test the model. Enter the following code and click "run" to import packages and load the model and data (no output):

# import packages
import pickle
from keras.models import load_model
import os
import tensorflow as tf
from keras import backend as K
import numpy as np
import scipy.io.wavfile as wav
from scipy.fftpack import fft
# load the model and data
bm = load_model("asr-model.h5")
with open("vocab", "rb") as fr:
    vocab_for_test = pickle.load(fr)
Enter the following code and click "run" to prepare the test data (no output):

def wav_padding(wav_data_lst):
    wav_lens = [len(data) for data in wav_data_lst]
    wav_max_len = max(wav_lens)
    wav_lens = np.array([leng // 8 for leng in wav_lens])
    new_wav_data_lst = np.zeros((len(wav_data_lst), wav_max_len, 200, 1))
    for i in range(len(wav_data_lst)):
        new_wav_data_lst[i, :wav_data_lst[i].shape[0], :, 0] = wav_data_lst[i]
    return new_wav_data_lst, wav_lens
# compute the spectrogram of the signal
def compute_fbank(file):
    x = np.linspace(0, 400 - 1, 400, dtype=np.int64)
    w = 0.54 - 0.46 * np.cos(2 * np.pi * (x) / (400 - 1))  # Hamming window
    fs, wavsignal = wav.read(file)
    # apply a window to the waveform, with a 10 ms time shift
    time_window = 25  # in ms
    window_length = fs / 1000 * time_window  # window length formula; fixed at 400 here
    wav_arr = np.array(wavsignal)
    wav_length = len(wavsignal)
    range0_end = int(len(wavsignal) / fs * 1000 - time_window) // 10  # loop end, i.e. the number of windows produced
    data_input = np.zeros((range0_end, 200), dtype=np.float)  # holds the final frequency features
    data_line = np.zeros((1, 400), dtype=np.float)
    for i in range(0, range0_end):
        p_start = i * 160
        p_end = p_start + 400
        data_line = wav_arr[p_start:p_end]
        data_line = data_line * w  # windowing
        data_line = np.abs(fft(data_line))
        data_input[i] = data_line[0:200]  # keep 400 / 2 = 200 points; the spectrum is symmetric
    data_input = np.log(data_input + 1)
    #data_input = data_input[::]
    return data_input
def test_data_generator(test_path):
    test_file_list = []
    for root, dirs, files in os.walk(test_path):
        for file in files:
            if file.endswith(".wav"):
                test_file = os.sep.join([root, file])
                test_file_list.append(test_file)
    print(len(test_file_list))
    for file in test_file_list:
        fbank = compute_fbank(file)
        pad_fbank = np.zeros((fbank.shape[0] // 8 * 8 + 8, fbank.shape[1]))
        pad_fbank[:fbank.shape[0], :] = fbank
        test_data_list = []
        test_data_list.append(pad_fbank)
        pad_wav_data, input_length = wav_padding(test_data_list)
        yield pad_wav_data
test_path = "data_thchs30/test"
test_data = test_data_generator(test_path)
Enter the following code to run the test:

def decode_ctc(num_result, num2word):
    result = num_result[:, :, :]
    in_len = np.zeros((1), dtype=np.int32)
    in_len[0] = result.shape[1]
    r = K.ctc_decode(result, in_len, greedy=True, beam_width=10, top_paths=1)
    r1 = K.get_value(r[0][0])
    r1 = r1[0]
    text = []
    for i in r1:
        text.append(num2word[i])
    return r1, text
for i in range(10):
    # get test data
    x = next(test_data)
    # run the trained model to recognize the speech
    result = bm.predict(x, steps=1)
    # convert the numeric result to pinyin
    _, text = decode_ctc(result, vocab_for_test)
    print('Text result:', text)
Click "run"; successful execution looks like this:
5. Language model

For the language model we use a Transformer, which has stronger feature-capturing ability, and build a language model based on self-attention; the concrete implementation steps are introduced as we go.

1) Import packages. Enter the following code and click "run" to import the required packages (no output):

from tqdm import tqdm
import tensorflow as tf
import moxing as mox
import numpy as np
2) Data processing. Enter the following code and click "run":

with open("data/zh.tsv", 'r', encoding='utf-8') as fout:
    data = fout.readlines()[:10000]
inputs = []
labels = []
for i in tqdm(range(len(data))):
    key, pny, hanzi = data[i].split('\t')
    inputs.append(pny.split(' '))
    labels.append(hanzi.strip('\n').split(' '))
print(inputs[:5])
print()
print(labels[:5])
def get_vocab(data):
    vocab = ['<PAD>']
    for line in tqdm(data):
        for char in line:
            if char not in vocab:
                vocab.append(char)
    return vocab
pny2id = get_vocab(inputs)
han2id = get_vocab(labels)
print(pny2id[:10])
print(han2id[:10])
input_num = [[pny2id.index(pny) for pny in line] for line in tqdm(inputs)]
label_num = [[han2id.index(han) for han in line] for line in tqdm(labels)]
# get batch data
def get_batch(input_data, label_data, batch_size):
    batch_num = len(input_data) // batch_size
    for k in range(batch_num):
        begin = k * batch_size
        end = begin + batch_size
        input_batch = input_data[begin:end]
        label_batch = label_data[begin:end]
        max_len = max([len(line) for line in input_batch])
        input_batch = np.array([line + [0] * (max_len - len(line)) for line in input_batch])
        label_batch = np.array([line + [0] * (max_len - len(line)) for line in label_batch])
        yield input_batch, label_batch
batch = get_batch(input_num, label_num, 4)
input_batch, label_batch = next(batch)
print(input_batch)
print(label_batch)
The successful result is shown below:
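A toy run of the pipeline above shows the zero-padding to the longest sequence in each batch (a standalone restatement of `get_vocab` and `get_batch`, without `tqdm`, on made-up pinyin/hanzi pairs rather than the zh.tsv data):

```python
import numpy as np

def get_vocab(data):
    vocab = ['<PAD>']  # id 0 is reserved for padding
    for line in data:
        for char in line:
            if char not in vocab:
                vocab.append(char)
    return vocab

def get_batch(input_data, label_data, batch_size):
    batch_num = len(input_data) // batch_size
    for k in range(batch_num):
        input_batch = input_data[k * batch_size:(k + 1) * batch_size]
        label_batch = label_data[k * batch_size:(k + 1) * batch_size]
        max_len = max(len(line) for line in input_batch)
        # pad every sequence with 0 (<PAD>) up to the longest one in the batch
        input_batch = np.array([line + [0] * (max_len - len(line)) for line in input_batch])
        label_batch = np.array([line + [0] * (max_len - len(line)) for line in label_batch])
        yield input_batch, label_batch

inputs = [['ni3', 'hao3'], ['zai4', 'jian4', 'a5']]
labels = [['你', '好'], ['再', '见', '啊']]
pny2id, han2id = get_vocab(inputs), get_vocab(labels)
input_num = [[pny2id.index(p) for p in line] for line in inputs]
label_num = [[han2id.index(h) for h in line] for line in labels]
input_batch, label_batch = next(get_batch(input_num, label_num, 2))
print(input_batch)  # second position of row one padded with 0
```

Because id 0 is `<PAD>`, the model can later mask padded positions out of the loss and accuracy with `tf.not_equal(self.y, 0)`, as the `Graph` class below does.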
3) Model building. The model uses the left-hand (encoder) side of the self-attention architecture, as shown below:
Enter the following code and click "run" to implement the layer-norm layer in the structure above (no output):

# layer norm layer
def normalize(inputs,
              epsilon=1e-8,
              scope="ln",
              reuse=None):
    with tf.variable_scope(scope, reuse=reuse):
        inputs_shape = inputs.get_shape()
        params_shape = inputs_shape[-1:]
        mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
        beta = tf.Variable(tf.zeros(params_shape))
        gamma = tf.Variable(tf.ones(params_shape))
        normalized = (inputs - mean) / ((variance + epsilon) ** (.5))
        outputs = gamma * normalized + beta
    return outputs
Enter the following code and click "run" to implement the embedding layer in the structure above (no output):

def embedding(inputs,
              vocab_size,
              num_units,
              zero_pad=True,
              scale=True,
              scope="embedding",
              reuse=None):
    with tf.variable_scope(scope, reuse=reuse):
        lookup_table = tf.get_variable('lookup_table',
                                       dtype=tf.float32,
                                       shape=[vocab_size, num_units],
                                       initializer=tf.contrib.layers.xavier_initializer())
        if zero_pad:
            lookup_table = tf.concat((tf.zeros(shape=[1, num_units]),
                                      lookup_table[1:, :]), 0)
        outputs = tf.nn.embedding_lookup(lookup_table, inputs)
        if scale:
            outputs = outputs * (num_units ** 0.5)
    return outputs
Enter the following code and click "run" to implement the multi-head attention layer (no output):

def multihead_attention(emb,
                        queries,
                        keys,
                        num_units=None,
                        num_heads=8,
                        dropout_rate=0,
                        is_training=True,
                        causality=False,
                        scope="multihead_attention",
                        reuse=None):
    with tf.variable_scope(scope, reuse=reuse):
        # Set the fall back option for num_units
        if num_units is None:
            num_units = queries.get_shape().as_list()[-1]
        # Linear projections
        Q = tf.layers.dense(queries, num_units, activation=tf.nn.relu)  # (N, T_q, C)
        K = tf.layers.dense(keys, num_units, activation=tf.nn.relu)  # (N, T_k, C)
        V = tf.layers.dense(keys, num_units, activation=tf.nn.relu)  # (N, T_k, C)
        # Split and concat
        Q_ = tf.concat(tf.split(Q, num_heads, axis=2), axis=0)  # (h*N, T_q, C/h)
        K_ = tf.concat(tf.split(K, num_heads, axis=2), axis=0)  # (h*N, T_k, C/h)
        V_ = tf.concat(tf.split(V, num_heads, axis=2), axis=0)  # (h*N, T_k, C/h)
        # Multiplication
        outputs = tf.matmul(Q_, tf.transpose(K_, [0, 2, 1]))  # (h*N, T_q, T_k)
        # Scale
        outputs = outputs / (K_.get_shape().as_list()[-1] ** 0.5)
        # Key Masking
        key_masks = tf.sign(tf.abs(tf.reduce_sum(emb, axis=-1)))  # (N, T_k)
        key_masks = tf.tile(key_masks, [num_heads, 1])  # (h*N, T_k)
        key_masks = tf.tile(tf.expand_dims(key_masks, 1), [1, tf.shape(queries)[1], 1])  # (h*N, T_q, T_k)
        paddings = tf.ones_like(outputs) * (-2 ** 32 + 1)
        outputs = tf.where(tf.equal(key_masks, 0), paddings, outputs)  # (h*N, T_q, T_k)
        # Causality = Future blinding
        if causality:
            diag_vals = tf.ones_like(outputs[0, :, :])  # (T_q, T_k)
            tril = tf.contrib.linalg.LinearOperatorTriL(diag_vals).to_dense()  # (T_q, T_k)
            masks = tf.tile(tf.expand_dims(tril, 0), [tf.shape(outputs)[0], 1, 1])  # (h*N, T_q, T_k)
            paddings = tf.ones_like(masks) * (-2 ** 32 + 1)
            outputs = tf.where(tf.equal(masks, 0), paddings, outputs)  # (h*N, T_q, T_k)
        # Activation
        outputs = tf.nn.softmax(outputs)  # (h*N, T_q, T_k)
        # Query Masking
        query_masks = tf.sign(tf.abs(tf.reduce_sum(emb, axis=-1)))  # (N, T_q)
        query_masks = tf.tile(query_masks, [num_heads, 1])  # (h*N, T_q)
        query_masks = tf.tile(tf.expand_dims(query_masks, -1), [1, 1, tf.shape(keys)[1]])  # (h*N, T_q, T_k)
        outputs *= query_masks  # broadcasting. (N, T_q, C)
        # Dropouts
        outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=tf.convert_to_tensor(is_training))
        # Weighted sum
        outputs = tf.matmul(outputs, V_)  # (h*N, T_q, C/h)
        # Restore shape
        outputs = tf.concat(tf.split(outputs, num_heads, axis=0), axis=2)  # (N, T_q, C)
        # Residual connection
        outputs += queries
        # Normalize
        outputs = normalize(outputs)  # (N, T_q, C)
    return outputs
Enter the following code and click "run" to implement the feed-forward layer (no output):

def feedforward(inputs,
                num_units=[2048, 512],
                scope="multihead_attention",
                reuse=None):
    with tf.variable_scope(scope, reuse=reuse):
        # Inner layer
        params = {"inputs": inputs, "filters": num_units[0], "kernel_size": 1,
                  "activation": tf.nn.relu, "use_bias": True}
        outputs = tf.layers.conv1d(**params)
        # Readout layer
        params = {"inputs": outputs, "filters": num_units[1], "kernel_size": 1,
                  "activation": None, "use_bias": True}
        outputs = tf.layers.conv1d(**params)
        # Residual connection
        outputs += inputs
        # Normalize
        outputs = normalize(outputs)
    return outputs
def label_smoothing(inputs, epsilon=0.1):
    '''Applies label smoothing. See https://arxiv.org/abs/1512.00567.
    Args:
      inputs: A 3d tensor with shape of [N, T, V], where V is the number of vocabulary.
      epsilon: Smoothing rate.
    '''
    K = inputs.get_shape().as_list()[-1]  # number of channels
    return ((1 - epsilon) * inputs) + (epsilon / K)
Enter the following code and click "run" to build the model (no output):

class Graph():
    def __init__(self, is_training=True):
        tf.reset_default_graph()
        self.is_training = arg.is_training
        self.hidden_units = arg.hidden_units
        self.input_vocab_size = arg.input_vocab_size
        self.label_vocab_size = arg.label_vocab_size
        self.num_heads = arg.num_heads
        self.num_blocks = arg.num_blocks
        self.max_length = arg.max_length
        self.lr = arg.lr
        self.dropout_rate = arg.dropout_rate
        # input
        self.x = tf.placeholder(tf.int32, shape=(None, None))
        self.y = tf.placeholder(tf.int32, shape=(None, None))
        # embedding
        self.emb = embedding(self.x, vocab_size=self.input_vocab_size, num_units=self.hidden_units, scale=True, scope="enc_embed")
        self.enc = self.emb + embedding(tf.tile(tf.expand_dims(tf.range(tf.shape(self.x)[1]), 0), [tf.shape(self.x)[0], 1]),
                                        vocab_size=self.max_length, num_units=self.hidden_units, zero_pad=False, scale=False, scope="enc_pe")
        ## Dropout
        self.enc = tf.layers.dropout(self.enc,
                                     rate=self.dropout_rate,
                                     training=tf.convert_to_tensor(self.is_training))
        ## Blocks
        for i in range(self.num_blocks):
            with tf.variable_scope("num_blocks_{}".format(i)):
                ### Multihead Attention
                self.enc = multihead_attention(emb=self.emb,
                                               queries=self.enc,
                                               keys=self.enc,
                                               num_units=self.hidden_units,
                                               num_heads=self.num_heads,
                                               dropout_rate=self.dropout_rate,
                                               is_training=self.is_training,
                                               causality=False)
                ### Feed Forward
                self.outputs = feedforward(self.enc, num_units=[4 * self.hidden_units, self.hidden_units])
        # Final linear projection
        self.logits = tf.layers.dense(self.outputs, self.label_vocab_size)
        self.preds = tf.to_int32(tf.argmax(self.logits, axis=-1))
        self.istarget = tf.to_float(tf.not_equal(self.y, 0))
        self.acc = tf.reduce_sum(tf.to_float(tf.equal(self.preds, self.y)) * self.istarget) / (tf.reduce_sum(self.istarget))
        tf.summary.scalar('acc', self.acc)
        if is_training:
            # Loss
            self.y_smoothed = label_smoothing(tf.one_hot(self.y, depth=self.label_vocab_size))
            self.loss = tf.nn.softmax_cross_entropy_with_logits(logits=self.logits, labels=self.y_smoothed)
            self.mean_loss = tf.reduce_sum(self.loss * self.istarget) / (tf.reduce_sum(self.istarget))
            # Training Scheme
            self.global_step = tf.Variable(0, name='global_step', trainable=False)
            self.optimizer = tf.train.AdamOptimizer(learning_rate=self.lr, beta1=0.9, beta2=0.98, epsilon=1e-8)
            self.train_op = self.optimizer.minimize(self.mean_loss, global_step=self.global_step)
            # Summary
            tf.summary.scalar('mean_loss', self.mean_loss)
            self.merged = tf.summary.merge_all()
4) Train the model. Enter the following code and click "run" to set the hyperparameters (no output):

def create_hparams():
    params = tf.contrib.training.HParams(
        num_heads=8,
        num_blocks=6,
        # vocab
        input_vocab_size=50,
        label_vocab_size=50,
        # embedding size
        max_length=100,
        hidden_units=512,
        dropout_rate=0.2,
        lr=0.0003,
        is_training=True)
    return params
arg = create_hparams()
arg.input_vocab_size = len(pny2id)
arg.label_vocab_size = len(han2id)
Enter the following code and click "run" to train the model:

import os
epochs = 3
batch_size = 4
g = Graph(arg)
saver = tf.train.Saver()
with tf.Session() as sess:
    merged = tf.summary.merge_all()
    sess.run(tf.global_variables_initializer())
    if os.path.exists('logs/model.meta'):
        saver.restore(sess, 'logs/model')
    writer = tf.summary.FileWriter('tensorboard/lm', tf.get_default_graph())
    for k in range(epochs):
        total_loss = 0
        batch_num = len(input_num) // batch_size
        batch = get_batch(input_num, label_num, batch_size)
        for i in range(batch_num):
            input_batch, label_batch = next(batch)
            feed = {g.x: input_batch, g.y: label_batch}
            cost, _ = sess.run([g.mean_loss, g.train_op], feed_dict=feed)
            total_loss += cost
            if (k * batch_num + i) % 10 == 0:
                rs = sess.run(merged, feed_dict=feed)
                writer.add_summary(rs, k * batch_num + i)
        print('epochs', k+1, ': average loss =', total_loss/batch_num)
    saver.save(sess, 'logs/model')
    writer.close()
A successful run prints output like the following (it takes about 10 minutes; for better results, increase the epochs parameter to 15):
5) Test the model. Enter the following code to test with pinyin input:

arg.is_training = False
g = Graph(arg)
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'logs/model')
    while True:
        line = input('Input Test Content:')
        if line == 'exit': break
        line = line.strip('\n').split(' ')
        x = np.array([pny2id.index(pny) for pny in line])
        x = x.reshape(1, -1)
        preds = sess.run(g.preds, {g.x: x})
        got = ''.join(han2id[idx] for idx in preds[0])
        print(got)
After clicking "run", an input prompt appears, as shown below:
I entered the following pinyin: nian2 de bu4 bu4 ge4 de shang4 shi2 qu1 pei4 wai4 gu4 de nian2 ming2 de zi4 ren2 na4 ren2 bu4 zuo4 de jia1 zhong4 shi2 wei4 yu4 you3 ta1 yang2 mu4 yu4 ci3. After pressing Enter, the model returned the Chinese text, as shown below:
Enter "exit" in the input box; after pressing Enter, a prompt shows that execution has stopped, as shown below:
6) Save the model. Enter the following code and click "run" to save the model to the OBS bucket, so that other notebooks can use it. Note: replace the bucket name with the one you created (mine is nlpdemo).

!zip -r languageModel.zip logs  # recursively zip the directory holding the model
mox.file.copy("languageModel.zip", 's3://nlpdemo/languageModel.zip')
This completes the deep-learning speech-recognition exercise; the whole experience was very pleasant!

Summary: the workflow used many Huawei Cloud services, such as OBS and ModelArts notebooks, which are powerful and pleasant to use. I now have a basic understanding of deep-learning-based speech recognition and of the whole hands-on process. Everyone is welcome to discuss and learn together in the Huawei Cloud community. Corrections are welcome! I am grateful to meet you all on Huawei Cloud and hope we can grow together in the community.

The [Huawei Cloud] prize essay contest is in full swing: https://bbs.huaweicloud.com/b…

[Copyright notice] This article is original content by a Huawei Cloud community user. When reprinting, you must credit the source (Huawei Cloud community) and include the article link, the author, and other basic information; otherwise the author and the community reserve the right to pursue liability. If you find suspected plagiarism in this community, please report it by email to cloudbbs@huaweicloud.com with supporting evidence; once verified, the community will immediately remove the infringing content.