This project implements object detection and counting based on the MindSpore framework, YOLOv3-Darknet53, and the VisDrone dataset.

1. Project repository
https://github.com/whitewings…

2. Environment setup
The MindSpore version used is 1.5.

3. Dataset preparation
Download the VisDrone dataset from http://aiskyeye.com/download/… The raw VisDrone annotations must be converted to COCO format and stored in a local directory. Use the conversion script from https://github.com/whitewings… — simply run python VisDrone2coco.py.

4. Data augmentation with albumentations
https://github.com/whitewings… The pipeline uses three transforms from the albumentations library: RandomBrightnessContrast (randomly adjusts brightness and contrast), HueSaturationValue (randomly shifts the hue and saturation of the input image), and Cutout (masks out square regions of the image). The three transforms are chained with albumentations' Compose method so that they run in order whenever an image is read for augmentation:

transform = A.Compose([
    A.RandomBrightnessContrast(p=0.5),
    A.HueSaturationValue(),
    A.Cutout(num_holes=10, max_h_size=20, max_w_size=20, fill_value=0, p=0.5)
])

5. DIoU-NMS
The standard NMS algorithm is replaced with DIoU-NMS. As in traditional NMS, the candidate boxes are first sorted by confidence score from high to low and the highest-scoring box is selected. The only extra quantities needed are the squared diagonal length of the smallest box enclosing the two boxes and the squared distance between their center points; DIoU is then computed from these according to its formula, DIoU = IoU - d^2 / c^2, where d is the center distance and c is the enclosing-box diagonal. Boxes whose DIoU with the selected box exceeds the threshold are filtered out, the highest-scoring box is retained, and the procedure repeats recursively over the remaining boxes.

def _diou_nms(self, dets, thresh=0.6):
    # Boxes are stored as [x, y, w, h, score]; convert to corner coordinates.
    x1 = dets[:, 0]
    y1 = dets[:, 1]
    x2 = x1 + dets[:, 2]
    y2 = y1 + dets[:, 3]
    scores = dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    # Process candidates in descending order of confidence.
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Plain IoU between the current box and all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        # Squared distance between box centers.
        center_x1 = (x1[i] + x2[i]) / 2
        center_x2 = (x1[order[1:]] + x2[order[1:]]) / 2
        center_y1 = (y1[i] + y2[i]) / 2
        center_y2 = (y1[order[1:]] + y2[order[1:]]) / 2
        inter_diag = (center_x2 - center_x1) ** 2 + (center_y2 - center_y1) ** 2
        # Squared diagonal of the smallest enclosing box.
        out_max_x = np.maximum(x2[i], x2[order[1:]])
        out_max_y = np.maximum(y2[i], y2[order[1:]])
        out_min_x = np.minimum(x1[i], x1[order[1:]])
        out_min_y = np.minimum(y1[i], y1[order[1:]])
        outer_diag = (out_max_x - out_min_x) ** 2 + (out_max_y - out_min_y) ** 2
        # DIoU = IoU - d^2 / c^2, clipped to [-1, 1].
        diou = ovr - inter_diag / outer_diag
        diou = np.clip(diou, -1, 1)
        # Keep only boxes whose DIoU with the current box is below the threshold.
        inds = np.where(diou <= thresh)[0]
        order = order[inds + 1]
    return keep

6. Model training, evaluation, and inference
Training, evaluation, and inference correspond to train.py, eval.py, and predict.py respectively; for details see https://github.com/whitewings-hub/mindspore-yolov3-vehicle_counting/blob/main/code_2/README.md

7. Vehicle detection and counting
The counting feature builds on the earlier step that draws each detected object's box on the image together with its class label. A list is created with one entry per class, initialized to 0; whenever a box is drawn for an object, the entry for its class is incremented. After all boxes have been drawn, the list holds the count for every class; a string describing each detected class and its count is then assembled and rendered onto the image with cv2's putText method.

def draw_boxes_in_image(self, img_path):
    # One counter per class (12 VisDrone categories), initialized to 0.
    num_record = [0 for i in range(12)]
    img = cv2.imread(img_path, 1)
    for i in range(len(self.det_boxes)):
        x = int(self.det_boxes[i]['bbox'][0])
        y = int(self.det_boxes[i]['bbox'][1])
        w = int(self.det_boxes[i]['bbox'][2])
        h = int(self.det_boxes[i]['bbox'][3])
        # Draw the detection box and its "class,score" label.
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 225, 0), 1)
        score = round(self.det_boxes[i]['score'], 3)
        classname = self.det_boxes[i]['category_id']
        text = self.det_boxes[i]['category_id'] + ',' + str(score)
        cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 225), 2)
        # Increment the counter of this box's class.
        num_record[label_list.index(classname)] = num_record[label_list.index(classname)] + 1
    # Assemble the "class:count" summary for every class that was seen.
    result_str = ""
    for ii in range(12):
        current_name = label_list[ii]
        current_num = num_record[ii]
        if current_num != 0:
            result_str = result_str + "{}:{}".format(current_name, current_num)
    font = cv2.FONT_HERSHEY_SIMPLEX
    img = cv2.putText(img, result_str, (20, 20), font, 0.5, (255, 0, 0), 2)
    return img

For how to run the code, see https://github.com/whitewings-hub/mindspore-yolov3-vehicle_counting/blob/main/code_2/README.md The results look like this:
[Image: before detection]
[Image: after detection]
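The per-class counting logic described in section 7 can be separated from the drawing code and tested on its own. The sketch below is a minimal stand-in: label_list and the detection dicts are hypothetical substitutes for the project's 12-category VisDrone label list and self.det_boxes, and unlike the original it inserts a space between the "class:count" pairs for readability.

```python
# Hypothetical class list standing in for the project's label_list.
label_list = ["car", "truck", "bus", "van"]

def count_detections(det_boxes):
    # One counter slot per class, initialized to 0 (mirrors num_record).
    num_record = [0] * len(label_list)
    for det in det_boxes:
        # Increment the counter of this detection's class.
        num_record[label_list.index(det["category_id"])] += 1
    # Build the "class:count" summary string, skipping absent classes.
    result_str = ""
    for name, num in zip(label_list, num_record):
        if num != 0:
            result_str += "{}:{} ".format(name, num)
    return result_str.strip()

dets = [{"category_id": "car"}, {"category_id": "car"}, {"category_id": "bus"}]
print(count_detections(dets))  # -> car:2 bus:1
```

In the real draw_boxes_in_image, this string is what gets rendered onto the image with cv2.putText.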
8. Project reference
https://gitee.com/mindspore/m…
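Finally, the DIoU-NMS routine from section 5 can be sanity-checked in isolation. Below is a module-level sketch of the same logic (no self), run on synthetic detections in [x, y, w, h, score] format; the box values and the default threshold of 0.6 are illustrative, not taken from the project.

```python
import numpy as np

def diou_nms(dets, thresh=0.6):
    # dets: (N, 5) array of [x, y, w, h, score].
    x1 = dets[:, 0]
    y1 = dets[:, 1]
    x2 = x1 + dets[:, 2]
    y2 = y1 + dets[:, 3]
    scores = dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU between the current box and all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1 + 1) * np.maximum(0.0, yy2 - yy1 + 1)
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        # Squared center distance.
        cx, cy = (x1[i] + x2[i]) / 2, (y1[i] + y2[i]) / 2
        cxs = (x1[order[1:]] + x2[order[1:]]) / 2
        cys = (y1[order[1:]] + y2[order[1:]]) / 2
        inter_diag = (cxs - cx) ** 2 + (cys - cy) ** 2
        # Squared diagonal of the smallest enclosing box.
        outer_diag = ((np.maximum(x2[i], x2[order[1:]]) - np.minimum(x1[i], x1[order[1:]])) ** 2
                      + (np.maximum(y2[i], y2[order[1:]]) - np.minimum(y1[i], y1[order[1:]])) ** 2)
        diou = np.clip(ovr - inter_diag / outer_diag, -1, 1)
        # Suppress boxes whose DIoU with the kept box exceeds the threshold.
        inds = np.where(diou <= thresh)[0]
        order = order[inds + 1]
    return keep

dets = np.array([
    [10, 10, 50, 50, 0.9],
    [12, 12, 50, 50, 0.8],   # near-duplicate of the first box, suppressed
    [200, 200, 40, 40, 0.7], # far away, kept
])
print(diou_nms(dets))  # -> [0, 2]
```

The near-duplicate box has IoU ≈ 0.86 with the top-scoring box and almost no center offset, so its DIoU stays well above 0.6 and it is removed; the distant box has IoU 0 and a large negative center-distance term, so it survives.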