In the nucleus sample of the Mask_RCNN project, the step-by-step (stepbystep) walkthrough needs to extract and visualize intermediate tensors of the network. There are two common ways to do this:
Via the get_layer method:
outputs = [
    ("rpn_class", model.keras_model.get_layer("rpn_class").output),
    ("proposals", model.keras_model.get_layer("ROI").output)
]
This approach reads a layer's output directly. For a layer that produces more than one tensor, you can index into the output, e.g. get_layer("rpn_class").output[0:2]. However, it cannot reach intermediate variables created inside a custom layer, which is why the second method is needed.
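As an aside, if you are unsure which names get_layer accepts, a quick way to list the candidates is to iterate over the model's layers. This is a minimal sketch, assuming model is the wrapped MaskRCNN object used above:

# Print every layer name available to get_layer
for layer in model.keras_model.layers:
    print(layer.name)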
Via tensor.op.inputs, searching upward through the graph
Define a recursive function that keeps walking up the graph until it finds the target tensor:
import re

def find_in_tensor(tensor, name, index=0):
    # Walk up the graph through tensor.op.inputs, looking for a tensor
    # whose full name matches `name` (treated as a regular expression).
    index += 1
    if index > 20:              # depth limit to stop runaway recursion
        return None
    tensor_parent = tensor.op.inputs
    for each_ptensor in tensor_parent:
        # print(each_ptensor.name)
        if bool(re.fullmatch(name, each_ptensor.name)):
            print('find it!')
            return each_ptensor
        result = find_in_tensor(each_ptensor, name, index)
        if result is not None:
            return result
Next, take a layer's output as the starting point and call the recursive function to locate the target tensor:
pillar = model.keras_model.get_layer("ROI").output
nms_rois = find_in_tensor(pillar, 'ROI_3/rpn_non_max_suppression/NonMaxSuppressionV2:0')
outputs.append(('NonMaxSuppression', nms_rois))
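To see what the traversal relies on: in a TF1-style graph every tensor is produced by an op, and op.inputs lists the tensors feeding that op, so repeatedly following tensor.op.inputs climbs from a layer output back toward the model inputs. A minimal sketch that prints only the immediate parents of the starting tensor (the names and shapes will differ per model):

pillar = model.keras_model.get_layer("ROI").output
print(pillar.op.name)               # the op that produced the ROI output
for parent in pillar.op.inputs:     # tensors feeding that op
    print("  parent:", parent.name, parent.shape)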
Finally, call K.function to build a partial graph over just these tensors and run it:
from collections import OrderedDict
import keras.backend as K

submodel = model.keras_model
outputs = OrderedDict(outputs)
inputs = list(submodel.inputs)
if submodel.uses_learning_phase and not isinstance(K.learning_phase(), int):
    inputs += [K.learning_phase()]      # expose the learning phase as an extra input
kf = K.function(inputs, list(outputs.values()))
in_p, ou_p = next(train_generator)      # one batch from the data generator
if submodel.uses_learning_phase and not isinstance(K.learning_phase(), int):
    in_p = list(in_p) + [0.]            # 0. = inference (test) phase
output_all = kf(in_p)
Printing outputs at this point shows something like the following:
OrderedDict([('rpn_class', <tf.Tensor 'rpn_class_3/concat:0' shape=(?, ?, 2) dtype=float32>),
             ('proposals', <tf.Tensor 'ROI_3/packed_2:0' shape=(1, ?, ?) dtype=float32>),
             ('fpn_p2', <tf.Tensor 'fpn_p2_3/BiasAdd:0' shape=(?, 192, 192, 256) dtype=float32>),
             ('fpn_p3', <tf.Tensor 'fpn_p3_3/BiasAdd:0' shape=(?, 96, 96, 256) dtype=float32>),
             ('fpn_p4', <tf.Tensor 'fpn_p4_3/BiasAdd:0' shape=(?, 48, 48, 256) dtype=float32>),
             ('fpn_p6', <tf.Tensor 'fpn_p6_3/MaxPool:0' shape=(?, 12, 12, 256) dtype=float32>),
             ('NonMaxSuppression', <tf.Tensor 'ROI_3/rpn_non_max_suppression/NonMaxSuppressionV2:0' shape=(?,) dtype=int32>)])
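Since kf returns plain NumPy arrays in the same order as outputs, they can be matched back to their names and inspected or plotted. A minimal sketch, assuming matplotlib is available and that fpn_p2 was added to outputs as in the printout above (channel 0 is picked arbitrarily):

import matplotlib.pyplot as plt

# Pair each name with the corresponding NumPy array returned by kf
output_np = dict(zip(outputs.keys(), output_all))
for name, arr in output_np.items():
    print(name, arr.shape)

# Visualize one channel of an FPN feature map
fpn_p2 = output_np["fpn_p2"]            # shape (batch, 192, 192, 256)
plt.imshow(fpn_p2[0, :, :, 0], cmap="viridis")
plt.title("fpn_p2, channel 0")
plt.show()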
And we're done!