Front end: using a Web Worker to optimize a camera video background-blur feature built with TensorFlow.js BodyPix

Background

I added a background-blur feature to my WebRTC project, implemented with tensorflow.js and Google's ready-made BodyPix model. In actual use there are two problems: first, the frame rate is low (unresolved for now); second, after switching to another browser tab the blur becomes very laggy, almost frozen. After reading up on it, I found that Chrome throttles the performance of hidden tabs, and the solution is to move the work into a Web Worker.
It took about a week of stepping on pitfalls before the optimization finally worked inside a Web Worker.

Pitfalls

1. At first I assumed the lag came from requestAnimationFrame, but switching to setTimeout / setInterval, or simply running the blur in a plain loop, was just as laggy. The real bottleneck turned out to be the segmentPerson() call: after switching tabs, a single call can take several seconds.
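To confirm where the time goes, wrapping the call with a timer is enough. A minimal sketch, assuming net is the loaded BodyPix model and video is the camera <video> element (both names are mine, not from the project code):

const t0 = performance.now();
const segmentation = await net.segmentPerson(video);
console.log(`segmentPerson took ${(performance.now() - t0).toFixed(1)} ms`);
// In a visible tab this is typically tens of milliseconds per frame;
// once the tab is hidden it can climb to several seconds.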

2. Since segmentPerson() is what causes the lag, it has to move into a worker. The problem is that segmentPerson normally takes the original video or canvas element, and DOM elements cannot be used inside a worker. Digging into the source, I found that BodyPix also accepts an OffscreenCanvas or an ImageData. OffscreenCanvas is a canvas designed to work in a Web Worker (an "off-screen" canvas); ImageData is the interface describing the underlying pixel data of a region of a <canvas> element, and it can be obtained directly from canvas.getContext('2d').getImageData(). I ended up implementing both variants and settled on the ImageData approach (a short usage sketch follows the declaration below).

Here is the relevant declaration from the BodyPix typings:

segmentPerson(input: BodyPixInput, config?: PersonInferenceConfig): Promise<SemanticPersonSegmentation>;
export declare type ImageType = HTMLImageElement | HTMLCanvasElement | HTMLVideoElement | OffscreenCanvas;
export declare type BodyPixInput = ImageData | ImageType | tf.Tensor3D;
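Either input type is used the same way from the caller's side. A minimal sketch, assuming net is the loaded model and captureCanvas is a regular 2D canvas onto which the current video frame has already been drawn (both names are mine):

// Option A: pass an ImageData grabbed from a normal canvas; an ImageData can also be
// posted to a worker with postMessage, which is what the final implementation does.
const ctx = captureCanvas.getContext('2d')!;
const imageData = ctx.getImageData(0, 0, captureCanvas.width, captureCanvas.height);
const segFromImageData = await net.segmentPerson(imageData);

// Option B: draw the frame onto an OffscreenCanvas (available on the main thread and
// in workers) and pass the OffscreenCanvas itself.
const offscreen = new OffscreenCanvas(captureCanvas.width, captureCanvas.height);
offscreen.getContext('2d')!.drawImage(captureCanvas, 0, 0);
const segFromOffscreen = await net.segmentPerson(offscreen);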

3. Whether you use OffscreenCanvas or ImageData, you need a new canvas onto which the video frames are drawn in real time, and that canvas's width and height must match the video's, otherwise the returned segmentation is inaccurate. Before I set them, the whole frame, myself included, came out blurred, and it took me a long time to trace the problem back to the width/height.
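The rule is simply to copy the video's intrinsic dimensions onto every canvas involved before grabbing pixels. A tiny helper along these lines (my own wording, not from the project):

function sizeToVideo(canvas: HTMLCanvasElement, video: HTMLVideoElement): void {
    // videoWidth / videoHeight are the intrinsic dimensions of the stream; a canvas
    // left at its default 300x150 produces a mis-scaled segmentation mask.
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
}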

WebWorker

1. Create my.worker.ts.
2. Move bodyPix.load() from the main-thread code into the worker; the main thread only receives the resulting segmentation.
3. Listen for the ImageData posted by the main thread, call net.segmentPerson() on it, and post the result back.

import * as tfjs from '@tensorflow/tfjs';
import * as bodyPix from '@tensorflow-models/body-pix';
import BodyPix from './service/BodyPix';

const webWorker: Worker = self as any;
let body = null;       // wrapper around the loaded BodyPix model
let offscreen = null;  // used only by the OffscreenCanvas variant shown further down
let context = null;

webWorker.addEventListener('message', async (event) => {

    const { action, data } = event.data;
    switch(action) {
        case 'init':
            // Load the model inside the worker so the main thread never blocks on it
            body = new BodyPix();
            await body.loadAndPredict();
            webWorker.postMessage({inited: true});
            break;
        case 'imageData':
            // Segment the frame sent by the main thread, then post the result back
            body.net.segmentPerson(data.imageData, BodyPix.option.config).then((segmentation) => {
                requestAnimationFrame(() => {
                    webWorker.postMessage({segmentation});
                })
            })
            break;
    }
});
export default null as any; // makes the file a module for TypeScript
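In the main-thread code below the worker is constructed with new myWorker(''). The original excerpt does not show where myWorker comes from; with a webpack + TypeScript setup, one common (assumed) way to get that constructor is worker-loader:

// Assumed build setup, not shown in the original post: worker-loader turns
// my.worker.ts into a class that can be instantiated like any constructor.
import myWorker from 'worker-loader!./my.worker';

const worker = new myWorker('');
worker.postMessage({ action: 'init', data: null });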

Main-thread code

An excerpt of the relevant parts:

async blurBackground (canvas: HTMLCanvasElement, video: HTMLVideoElement) {
    // Both the rendering canvas and the canvas used to obtain the segmentation must have
    // the same width/height as the video, otherwise bodyPix returns an inaccurate segmentation
    const [ width, height ] = [ video.videoWidth, video.videoHeight ];
    video.width = width;
    canvas.width = width;
    video.height = height;
    canvas.height = height;
    // Hidden canvas used only to capture ImageData frames for the worker
    this.workerCanvas = document.createElement('canvas');
    this.workerCanvas.width = video.width;
    this.workerCanvas.height = video.height;
    this.bluring = true;
    this.blurInWorker(video, canvas);
}

async drawImageData (newCanvas: HTMLCanvasElement, video: HTMLVideoElement) {
    // Draw the current video frame into the hidden canvas and post its pixels to the worker
    const ctx = newCanvas.getContext('2d');
    ctx.drawImage(video, 0, 0, newCanvas.width, newCanvas.height);
    const imageData = ctx.getImageData(0, 0, newCanvas.width, newCanvas.height);
    this.worker.postMessage({ action: 'imageData', data: {imageData} });
}

async blurInWorker (video: HTMLVideoElement, canvas: HTMLCanvasElement) {
    this.worker = new myWorker('');
    this.worker.addEventListener('message', (event) => {
        if(event.data.inited) {
            // Model is loaded: send the first frame
            this.drawImageData(this.workerCanvas, video);
        } else if(event.data.segmentation) {
            // Draw the blurred frame, then keep the loop going by sending the next frame
            bodyPix.drawBokehEffect(
                canvas, video, event.data.segmentation, BodyPix.option.backgroundBlurAmount,
                BodyPix.option.edgeBlurAmount, BodyPix.option.flipHorizontal);
            this.bluring && this.drawImageData(this.workerCanvas, video);
        }
    })
    this.worker.postMessage({action: 'init', data: null});
}

async unBlurBackground (canvas: HTMLCanvasElement, video: HTMLVideoElement) {
    this.bluring = false;
    this.worker.terminate();
    this.worker = null;
    canvas?.getContext('2d')?.clearRect(0, 0, canvas.width, canvas.height);
    this.workerCanvas?.getContext('2d')?.clearRect(0, 0, this.workerCanvas.width, this.workerCanvas.height);
    this.workerCanvas = null;
}
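For context, a hypothetical call site (not part of the original excerpt), assuming the methods above live on a class I will call BackgroundBlurService and that the page has a <video> fed by getUserMedia plus a visible <canvas> for the blurred output:

async function start() {
    const video = document.querySelector('video')!;
    const canvas = document.querySelector('canvas')!;
    const blur = new BackgroundBlurService(); // hypothetical name for the owning class

    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    video.srcObject = stream;
    await video.play();

    // Start blurring: segmentation runs in the worker, drawing happens on this canvas
    await blur.blurBackground(canvas, video);

    // Later, to stop and release the worker:
    // await blur.unBlurBackground(canvas, video);
}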

The OffscreenCanvas implementation

// In the worker
let offscreen = null;
let context = null;

        case 'offscreen':
            // Create the off-screen canvas once, sized to match the video
            offscreen = new OffscreenCanvas(data.width, data.height);
            context = offscreen.getContext('2d');
            break;
        case 'imageBitmap':
            // Draw the transferred frame onto the off-screen canvas, then segment it
            context.drawImage(data.imageBitmap, 0, 0);
            body.net.segmentPerson(offscreen, BodyPix.option.config).then((segmentation) => {
                requestAnimationFrame(() => {
                    webWorker.postMessage({segmentation});
                })
            });
            break;

// In the main thread: tell the worker to create the OffscreenCanvas first,
// then capture frames as ImageBitmaps and post them over
this.worker.postMessage({ action: 'offscreen', data: { width: video.videoWidth, height: video.videoHeight } });

const [track] = (video.srcObject as MediaStream).getVideoTracks();
const imageCapture = new ImageCapture(track);
imageCapture.grabFrame().then(imageBitmap => {
    this.worker.postMessage({ action: 'imageBitmap', data: { imageBitmap } });
});
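The excerpt only grabs a single frame. To keep the blur running, the grab-and-post step has to repeat, mirroring how the ImageData path sends the next frame each time a segmentation arrives. A rough sketch of one way to drive it (assumed, not from the original code):

const sendNextFrame = () => {
    if (!this.bluring) { return; }
    imageCapture.grabFrame().then((imageBitmap) => {
        // ImageBitmap is transferable, so pass it in the transfer list to avoid a copy
        this.worker.postMessage({ action: 'imageBitmap', data: { imageBitmap } }, [imageBitmap]);
    });
};
// Call once to start, and call again whenever a segmentation message comes back
sendNextFrame();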

The BodyPix frame-rate problem is still under investigation…

References

BodyPix on GitHub: https://github.com/tensorflow…
Background blur demo: https://segmentfault.com/a/11…
Other BodyPix uses: https://segmentfault.com/a/11…
Optimizing BodyPix with a Web Worker: https://segmentfault.com/a/11…
Using Web Workers: https://www.ruanyifeng.com/bl…
