Front-end: Using a Web Worker to optimize the camera-video background blur implemented with tensorflow.js bodyPix


Background

I added background blur to a WebRTC project of mine, implemented with tensorflow.js and Google's off-the-shelf bodyPix model. In practice there were two problems: first, the frame rate is low (still unresolved); second, after switching to another tab the blur becomes extremely laggy, almost frozen. After some digging I learned that Chrome throttles hidden tabs, and the fix is to use a Web Worker.
It took a week of stepping into one pitfall after another, but I finally got the optimization working inside a Web Worker.

Pitfalls

1. At first I blamed the lag on requestAnimationFrame, but after switching to setTimeout or setInterval, or even blurring in a tight loop, it was still laggy. The real culprit turned out to be the segmentPerson() call: after switching tabs, each call took several seconds.

2. Since the lag comes from segmentPerson(), that call had to move into a worker. But segmentPerson needs the original video or canvas, and the DOM is not available inside a worker. Reading further through the source I found that bodyPix also accepts OffscreenCanvas and ImageData. OffscreenCanvas is a canvas designed to be used inside a Web Worker, hence the name "offscreen canvas"; ImageData is the interface describing the underlying pixel data of a region of a <canvas> element, and can be obtained directly via canvas.getContext('2d').getImageData(). In the end I implemented both approaches and settled on ImageData.
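For context, an ImageData object is nothing more than a width, a height, and a flat RGBA buffer, which is why it survives the structured clone into a worker. A minimal sketch (both helpers are hypothetical, not part of bodyPix):

```typescript
// ImageData stores raw RGBA pixels: 4 bytes per pixel (R, G, B, A).
function imageDataByteLength(width: number, height: number): number {
  return width * height * 4;
}

// A structural stand-in with the same shape as a DOM ImageData object,
// useful for reasoning about what actually crosses the worker boundary.
function makeFrame(width: number, height: number) {
  return {
    width,
    height,
    data: new Uint8ClampedArray(imageDataByteLength(width, height)),
  };
}
```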

Here is segmentPerson's declaration from the bodyPix source:

segmentPerson(input: BodyPixInput, config?: PersonInferenceConfig): Promise<SemanticPersonSegmentation>;
export declare type ImageType = HTMLImageElement | HTMLCanvasElement | HTMLVideoElement | OffscreenCanvas;
export declare type BodyPixInput = ImageData | ImageType | tf.Tensor3D;

3. Whether you use OffscreenCanvas or ImageData, you need to create a new canvas and draw the video frames onto it in real time, and that canvas's width and height must match the video's, otherwise the segmentation you get back is wrong. Before I set them, everything in the frame was blurred, including me, and it took a long time to trace the problem back to the width and height.
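The rule from this pitfall can be captured in a small helper: copy the video's intrinsic videoWidth/videoHeight onto every canvas involved. (syncCanvasSizes is a hypothetical name; it only needs width/height properties, so the sketch uses plain objects.)

```typescript
// Hypothetical helper enforcing pitfall 3: every canvas that bodyPix reads
// from, or that renders its output, must match the video's intrinsic size.
interface Sized { width: number; height: number; }

function syncCanvasSizes(
  video: { videoWidth: number; videoHeight: number },
  canvases: Sized[],
): void {
  for (const c of canvases) {
    c.width = video.videoWidth;   // intrinsic frame width, not the CSS width
    c.height = video.videoHeight; // intrinsic frame height
  }
}
```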

WebWorker

1. Create my.worker.ts
2. Move bodyPix.load() from the main code into the worker; the main code only receives the resulting segmentation
3. In the worker, listen for the ImageData posted by the main code, call net.segmentPerson(), and post the result back

import * as tfjs from '@tensorflow/tfjs';
import * as bodyPix from '@tensorflow-models/body-pix';
import BodyPix from './service/BodyPix';

const webWorker: Worker = self as any;
let body = null;

webWorker.addEventListener('message', async (event) => {
  const { action, data } = event.data;
  switch (action) {
    case 'init':
      // Load the model inside the worker, where it is not throttled
      // along with the hidden tab
      body = new BodyPix();
      await body.loadAndPredict();
      webWorker.postMessage({ inited: true });
      break;
    case 'imageData':
      // Segment the frame the main thread sent over, then post the result back
      body.net.segmentPerson(data.imageData, BodyPix.option.config).then((segmentation) => {
        requestAnimationFrame(() => {
          webWorker.postMessage({ segmentation });
        });
      });
      break;
  }
});

export default null as any;
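The message protocol above is stringly-typed; a hypothetical discriminated union makes the two directions explicit (RGBAFrame stands in for the DOM ImageData type so the sketch stays self-contained):

```typescript
// Hypothetical typing for the main-thread <-> worker protocol shown above.
type RGBAFrame = { width: number; height: number; data: Uint8ClampedArray };

type ToWorker =
  | { action: 'init'; data: null }
  | { action: 'imageData'; data: { imageData: RGBAFrame } };

type FromWorker =
  | { inited: true }
  | { segmentation: unknown }; // SemanticPersonSegmentation in body-pix

// Builder functions keep the postMessage call sites honest:
const initMessage = (): ToWorker => ({ action: 'init', data: null });
const frameMessage = (imageData: RGBAFrame): ToWorker =>
  ({ action: 'imageData', data: { imageData } });
```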

Main-thread code

A partial excerpt:

async blurBackground (canvas: HTMLCanvasElement, video: HTMLVideoElement) {
  // The rendering canvas and the canvas used to grab frames for segmentation
  // must match the video's width/height, otherwise the segmentation bodyPix
  // returns is inaccurate
  const [width, height] = [video.videoWidth, video.videoHeight];
  video.width = width;
  video.height = height;
  canvas.width = width;
  canvas.height = height;
  this.workerCanvas = document.createElement('canvas');
  this.workerCanvas.width = video.width;
  this.workerCanvas.height = video.height;
  this.bluring = true;
  this.blurInWorker(video, canvas);
}

async drawImageData (newCanvas: HTMLCanvasElement, video: HTMLVideoElement) {
  // Draw the current video frame, read back its pixels, and send them to the worker
  const ctx = newCanvas.getContext('2d');
  ctx.drawImage(video, 0, 0, newCanvas.width, newCanvas.height);
  const imageData = ctx.getImageData(0, 0, newCanvas.width, newCanvas.height);
  this.worker.postMessage({ action: 'imageData', data: { imageData } });
}

async blurInWorker (video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  this.worker = new myWorker('');
  this.worker.addEventListener('message', (event) => {
    if (event.data.inited) {
      // The model is loaded: kick off the draw → segment → render loop
      this.drawImageData(this.workerCanvas, video);
    } else if (event.data.segmentation) {
      bodyPix.drawBokehEffect(
        canvas, video, event.data.segmentation, BodyPix.option.backgroundBlurAmount,
        BodyPix.option.edgeBlurAmount, BodyPix.option.flipHorizontal);
      // Request the next frame only while blurring is still enabled
      this.bluring && this.drawImageData(this.workerCanvas, video);
    }
  });
  this.worker.postMessage({ action: 'init', data: null });
}

async unBlurBackground (canvas: HTMLCanvasElement, video: HTMLVideoElement) {
  this.bluring = false;
  this.worker.terminate();
  this.worker = null;
  canvas?.getContext('2d')?.clearRect(0, 0, canvas.width, canvas.height);
  this.workerCanvas?.getContext('2d')?.clearRect(0, 0, this.workerCanvas.width, this.workerCanvas.height);
  this.workerCanvas = null;
}
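Since the remaining open problem is frame rate, it helps to measure it. A hypothetical rolling counter, ticked once per segmentation received from the worker, reports how many frames arrived within the last window:

```typescript
// Hypothetical rolling FPS meter: call tick(performance.now()) on each
// received segmentation; the return value is the frame count in the window.
class FpsMeter {
  private timestamps: number[] = [];

  constructor(private windowMs = 1000) {}

  tick(now: number): number {
    this.timestamps.push(now);
    // Evict timestamps that fell out of the measurement window
    while (this.timestamps.length && now - this.timestamps[0] > this.windowMs) {
      this.timestamps.shift();
    }
    return this.timestamps.length;
  }
}
```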

OffscreenCanvas implementation

// In the worker
let offscreen = null;
let context = null;
// ... added to the same switch (action) as before:
case 'offscreen':
  // The main thread posts { action: 'offscreen', data: { width, height } }
  // once, so the worker can create a matching offscreen canvas
  offscreen = new OffscreenCanvas(data.width, data.height);
  context = offscreen.getContext('2d');
  break;
case 'imageBitmap':
  // Draw the received frame onto the offscreen canvas, then segment it
  context.drawImage(data.imageBitmap, 0, 0);
  body.net.segmentPerson(offscreen, BodyPix.option.config).then((segmentation) => {
    requestAnimationFrame(() => {
      webWorker.postMessage({ segmentation });
    });
  });
  break;

// In the main thread
const [track] = video.srcObject.getVideoTracks();
const imageCapture = new ImageCapture(track);
imageCapture.grabFrame().then(imageBitmap => {
  this.worker.postMessage({ action: 'imageBitmap', data: { imageBitmap } });
});
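One refinement worth noting: an ImageBitmap is a Transferable, so passing it in postMessage's transfer list moves the frame to the worker instead of copying it. A hypothetical helper that collects the transferable parts of a message (the actual DOM call is shown in the comment):

```typescript
// Hypothetical helper: gather the transferable parts of a worker message.
// Main-thread usage would be roughly:
//   this.worker.postMessage(msg, transferList(msg) as Transferable[]);
function transferList(msg: { data?: { imageBitmap?: object } }): object[] {
  return msg.data?.imageBitmap ? [msg.data.imageBitmap] : [];
}
```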

The bodyPix frame-rate problem is still under investigation …
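One avenue to explore is bodyPix's own speed/accuracy knobs. The parameter names below come from the body-pix documentation, but the specific values are illustrative, not measured:

```typescript
// Speed-oriented settings for bodyPix.load() and segmentPerson();
// values are a starting point for experimentation, not a benchmark result.
const loadConfig = {
  architecture: 'MobileNetV1' as const, // much lighter than 'ResNet50'
  outputStride: 16 as const,            // larger stride → faster, coarser mask
  multiplier: 0.5 as const,             // fewer weights per layer
  quantBytes: 2 as const,               // quantized weights download and run faster
};
const segmentConfig = {
  internalResolution: 'low' as const,   // downscale input before inference
  segmentationThreshold: 0.7,
};
// Usage: const net = await bodyPix.load(loadConfig);
//        const seg = await net.segmentPerson(input, segmentConfig);
```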

