When you take what should be a great selfie, you may find that the face in the photo isn't slim enough, the eyes aren't big enough, and the expression isn't lively or cute enough... Wouldn't it be great if you could beautify and slim the face with one tap and add cute stickers?

Or when a child at home watches the iPad screen for too long, or sits with their eyes too close to it, and the parents cannot keep watch the whole time, wouldn't an app capable of parental control be handy? The face detection capability of Huawei ML Kit handles both of these scenarios with ease!

The face detection capability of Huawei ML Kit can detect up to 855 keypoints on a face and return the coordinates of the face contour, eyebrows, eyes, nose, mouth, and ears, as well as information such as the face's rotation angles. After integrating the face detection service, developers can use this information to quickly build face beautification apps, or add fun and cute sticker elements to the face to make pictures more entertaining. Beyond that, the face detection service can also recognize facial attributes such as whether the eyes are open, whether the person is wearing glasses or a hat, gender, age, and whether the person has a beard. In addition, it can recognize up to seven facial expressions: smiling, neutral, angry, disgusted, frightened, sad, and surprised.


Development Practice: "Slim Face, Big Eyes"

1. Development Preparation

For the detailed preparation steps, refer to the documentation on the HUAWEI Developer Alliance site.

The key development steps are listed here.

1.1 Configure the Maven repository address in the project-level gradle file

buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}

allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

1.2 Add the configuration to the file header

After integrating the SDK, add the following configuration to the file header:

apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'

1.3 Configure the SDK dependencies in the app-level gradle file

dependencies {
    // Import the base SDK
    implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
    // Import the face contour + keypoint detection model package
    implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
    // Import the facial expression detection model package
    implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
    // Import the facial feature detection model package
    implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}

1.4 Add the following statement to the AndroidManifest.xml file to enable automatic update of the machine learning model

<manifest
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="face" />
    ...
</manifest>

1.5 Apply for the camera permission

<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
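Declaring the permission in the manifest is only half the story: on Android 6.0 (API level 23) and later, CAMERA is a dangerous permission and must also be requested at runtime before the camera is opened. Below is a minimal sketch, assuming the AndroidX core library (ContextCompat/ActivityCompat) is available; the request code constant is an arbitrary value chosen for illustration.

private static final int CAMERA_PERMISSION_REQUEST = 1; // arbitrary request code

private void checkCameraPermission() {
    // Ask the user for the camera permission if it has not been granted yet;
    // the result is delivered to onRequestPermissionsResult().
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA},
                CAMERA_PERMISSION_REQUEST);
    }
}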

2. Code Development

2.1 Create a face analyzer using the default parameter configuration

analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();

2.2 Create an MLFrame object from android.graphics.Bitmap for the analyzer to detect the image

MLFrame frame = MLFrame.fromBitmap(bitmap);

2.3 Call the asyncAnalyseFrame method to perform face detection

Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
    @Override
    public void onSuccess(List<MLFace> faces) {
        // Detection succeeded; obtain the face keypoint information here.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Detection failed.
    }
});
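Inside onSuccess you can already walk the returned keypoints. The sketch below only uses calls that also appear in the camera-preview code later in this article (getFaceShapeList(), getPoints(), getX(), getY()); the log tag is just for illustration.

for (MLFace face : faces) {
    // Each MLFaceShape holds the points of one facial part (contour, eyebrows, eyes, nose, mouth, ...)
    for (MLFaceShape shape : face.getFaceShapeList()) {
        if (shape == null || shape.getPoints().isEmpty()) {
            continue;
        }
        List<MLPosition> points = shape.getPoints();
        MLPosition first = points.get(0);
        Log.d("FaceDemo", "shape with " + points.size()
                + " points, first point: " + first.getX() + ", " + first.getY());
    }
}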

2.4 Use the progress bars to apply different levels of the eye enlargement and face slimming effects, calling the magnifyEye method and the smallFaceMesh method to implement the big-eye and face-slimming algorithms respectively

private SeekBar.OnSeekBarChangeListener onSeekBarChangeListener = new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        switch (seekBar.getId()) {
            case R.id.seekbareye: // When the eye enlargement progress bar changes, ... (see the sketch after step 2.5)
            case R.id.seekbarface: // When the face slimming progress bar changes, ... (see the sketch after step 2.5)
        }
    }
};

2.5 After detection is complete, release the analyzer

try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    Log.e(TAG, "e=" + e.getMessage());
}
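The article leaves the two case bodies in step 2.4 empty. One possible way to wire them up is sketched below; only the method names magnifyEye and smallFaceMesh come from the text above, while their signatures, the placeholder variables srcBitmap and face, and the mapping from SeekBar progress to a 0-1 strength are assumptions made for illustration.

case R.id.seekbareye: {
    // Assumed mapping: SeekBar progress (0-100) -> eye enlargement strength (0.0-1.0)
    float eyeStrength = progress / 100.0f;
    // Hypothetical signature: apply the big-eye warp around the detected eye keypoints
    processedBitmap = magnifyEye(srcBitmap, face, eyeStrength);
    break;
}
case R.id.seekbarface: {
    // Assumed mapping: SeekBar progress (0-100) -> face slimming strength (0.0-1.0)
    float faceStrength = progress / 100.0f;
    // Hypothetical signature: apply the face-slimming mesh deformation along the jaw contour
    processedBitmap = smallFaceMesh(srcBitmap, face, faceStrength);
    break;
}

Here srcBitmap stands for the image being edited and face for the MLFace returned in step 2.3; both names are placeholders.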

Demo Result

Development Practice: "Fun and Cute Stickers"

Preparations Before Development

Add the Huawei Maven repository to the project-level gradle file

Open the project-level build.gradle file in Android Studio.

Add the following Maven repository address:

buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

Add the SDK dependencies to the app-level build.gradle file

// Face detection SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Face detection model.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'

Apply for the camera, network access, and storage permissions in the AndroidManifest.xml file

<!-- Camera permission -->
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
<!-- Write permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Read permission -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

Key Code Development Steps

Set up the face detector

MLFaceAnalyzerSetting detectorOptions;
detectorOptions = new MLFaceAnalyzerSetting.Factory()
        .setFeatureType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_FEATURES)
        .setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
        .allowTracing(MLFaceAnalyzerSetting.MODE_TRACING_FAST)
        .create();
detector = MLAnalyzerFactory.getInstance().getFaceAnalyzer(detectorOptions);

Here we obtain the camera frame data through the camera callback, call the face detector to get the face contour points, and then write them into FacePointEngine for the sticker filter to use.

@Override
public void onPreviewFrame(final byte[] imgData, final Camera camera) {
    int width = mPreviewWidth;
    int height = mPreviewHeight;
    long startTime = System.currentTimeMillis();
    // Keep the front and rear camera orientations consistent
    if (isFrontCamera()) {
        mOrientation = 0;
    } else {
        mOrientation = 2;
    }
    MLFrame.Property property =
            new MLFrame.Property.Creator()
                    .setFormatType(ImageFormat.NV21)
                    .setWidth(width)
                    .setHeight(height)
                    .setQuadrant(mOrientation)
                    .create();
    ByteBuffer data = ByteBuffer.wrap(imgData);
    // Call the face detection API
    SparseArray<MLFace> faces = detector.analyseFrame(MLFrame.fromByteBuffer(data, property));
    // Check whether face information has been obtained
    if (faces.size() > 0) {
        MLFace mLFace = faces.get(0);
        EGLFace EGLFace = FacePointEngine.getInstance().getOneFace(0);
        EGLFace.pitch = mLFace.getRotationAngleX();
        EGLFace.yaw = mLFace.getRotationAngleY();
        EGLFace.roll = mLFace.getRotationAngleZ() - 90;
        if (isFrontCamera())
            EGLFace.roll = -EGLFace.roll;
        if (EGLFace.vertexPoints == null) {
            EGLFace.vertexPoints = new PointF[131];
        }
        int index = 0;
        // Obtain the contour point coordinates of one face and convert them to normalized OpenGL coordinates
        for (MLFaceShape contour : mLFace.getFaceShapeList()) {
            if (contour == null) {
                continue;
            }
            List<MLPosition> points = contour.getPoints();
            for (int i = 0; i < points.size(); i++) {
                MLPosition point = points.get(i);
                float x = (point.getY() / height) * 2 - 1;
                float y = (point.getX() / width) * 2 - 1;
                if (isFrontCamera())
                    x = -x;
                PointF Point = new PointF(x, y);
                EGLFace.vertexPoints[index] = Point;
                index++;
            }
        }
        // Insert the face object
        FacePointEngine.getInstance().putOneFace(0, EGLFace);
        // Set the number of faces
        FacePointEngine.getInstance().setFaceSize(faces != null ? faces.size() : 0);
    } else {
        FacePointEngine.getInstance().clearAll();
    }
    long endTime = System.currentTimeMillis();
    Log.d("TAG", "Face detect time: " + String.valueOf(endTime - startTime));
}

The face contour points returned by the ML Kit API are shown in the figure below:

Now let's look at how the stickers are designed. First, take a look at the sticker JSON data definition:

public class FaceStickerJson {
    public int[] centerIndexList;   // List of center coordinate indexes; the center point may be computed from several keypoints
    public float offsetX;           // X-axis offset, in pixels, relative to the sticker center coordinate
    public float offsetY;           // Y-axis offset, in pixels, relative to the sticker center coordinate
    public float baseScale;         // Base scaling factor of the sticker
    public int startIndex;          // Start index of the face, used to calculate the face width
    public int endIndex;            // End index of the face, used to calculate the face width
    public int width;               // Sticker width
    public int height;              // Sticker height
    public int frames;              // Number of sticker frames
    public int action;              // Action; 0 indicates default display, used here to handle sticker actions
    public String stickerName;      // Sticker name, used to mark the sticker folder and its PNG files
    public int duration;            // Display interval between sticker frames
    public boolean stickerLooping;  // Whether the sticker is rendered in a loop
    public int maxCount;            // Maximum number of times the sticker is rendered
    ...
}

We create the cat-ear sticker JSON file, use the face indexes to attach the ears and the nose at point 84 (between the eyebrows) and point 85 (the nose tip) respectively, and then put the JSON file and the images into the assets directory.

{    "stickerList": [{        "type": "sticker",        "centerIndexList": [84],        "offsetX": 0.0,        "offsetY": 0.0,        "baseScale": 1.3024,        "startIndex": 11,        "endIndex": 28,        "width": 495,        "height": 120,        "frames": 2,        "action": 0,        "stickerName": "nose",        "duration": 100,        "stickerLooping": 1,        "maxcount": 5    }, {    "type": "sticker",        "centerIndexList": [83],        "offsetX": 0.0,        "offsetY": -1.1834,        "baseScale": 1.3453,        "startIndex": 11,        "endIndex": 28,        "width": 454,        "height": 150,        "frames": 2,        "action": 0,        "stickerName": "ear",        "duration": 100,        "stickerLooping": 1,        "maxcount": 5    }] }

To render the sticker texture we use GLSurfaceView, which is simpler to work with than TextureView. First, instantiate the sticker filter in onSurfaceCreated, pass in the sticker path, and start the camera.

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    GLES30.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    mTextures = new int[1];
    mTextures[0] = OpenGLUtils.createOESTexture();
    mSurfaceTexture = new SurfaceTexture(mTextures[0]);
    mSurfaceTexture.setOnFrameAvailableListener(this);
    // Render the samplerExternalOES camera input into the texture
    cameraFilter = new CameraFilter(this.context);
    // Set the face sticker path under the assets directory
    String folderPath = "cat";
    stickerFilter = new FaceStickerFilter(this.context, folderPath);
    // Create the screen filter object
    screenFilter = new BaseFilter(this.context);
    facePointsFilter = new FacePointsFilter(this.context);
    mEGLCamera.openCamera();
}

Then initialize the sticker filter in onSurfaceChanged.

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    Log.d(TAG, "onSurfaceChanged. width: " + width + ", height: " + height);
    int previewWidth = mEGLCamera.getPreviewWidth();
    int previewHeight = mEGLCamera.getPreviewHeight();
    if (width > height) {
        setAspectRatio(previewWidth, previewHeight);
    } else {
        setAspectRatio(previewHeight, previewWidth);
    }
    // Set the image size, create the FrameBuffer, and set the display size
    cameraFilter.onInputSizeChanged(previewWidth, previewHeight);
    cameraFilter.initFrameBuffer(previewWidth, previewHeight);
    cameraFilter.onDisplaySizeChanged(width, height);

    stickerFilter.onInputSizeChanged(previewHeight, previewWidth);
    stickerFilter.initFrameBuffer(previewHeight, previewWidth);
    stickerFilter.onDisplaySizeChanged(width, height);

    screenFilter.onInputSizeChanged(previewWidth, previewHeight);
    screenFilter.initFrameBuffer(previewWidth, previewHeight);
    screenFilter.onDisplaySizeChanged(width, height);

    facePointsFilter.onInputSizeChanged(previewHeight, previewWidth);
    facePointsFilter.onDisplaySizeChanged(width, height);
    mEGLCamera.startPreview(mSurfaceTexture);
}

Finally, draw the sticker to the screen in onDrawFrame.

@Override
public void onDrawFrame(GL10 gl) {
    int textureId;
    // Clear the screen and the depth buffer
    GLES30.glClear(GLES30.GL_COLOR_BUFFER_BIT | GLES30.GL_DEPTH_BUFFER_BIT);
    // Update to fetch the latest camera image
    mSurfaceTexture.updateTexImage();
    // Get the SurfaceTexture transform matrix
    mSurfaceTexture.getTransformMatrix(mMatrix);
    // Set the camera display transform matrix
    cameraFilter.setTextureTransformMatrix(mMatrix);

    // Draw the camera texture
    textureId = cameraFilter.drawFrameBuffer(mTextures[0], mVertexBuffer, mTextureBuffer);
    // Draw the sticker texture
    textureId = stickerFilter.drawFrameBuffer(textureId, mVertexBuffer, mTextureBuffer);
    // Draw to the screen
    screenFilter.drawFrame(textureId, mDisplayVertexBuffer, mDisplayTextureBuffer);
    if (drawFacePoints) {
        facePointsFilter.drawFrame(textureId, mDisplayVertexBuffer, mDisplayTextureBuffer);
    }
}

And with that, our sticker is drawn onto the face.
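For completeness, this is roughly how such a renderer is hooked up to a GLSurfaceView in the hosting Activity. The GLSurfaceView calls are standard Android API; the renderer class name GLRenderer and the layout id are assumptions for illustration.

// In the Activity's onCreate(), after setContentView()
GLSurfaceView glSurfaceView = findViewById(R.id.gl_surface_view); // assumed view id
glSurfaceView.setEGLContextClientVersion(3);        // the filters above use GLES30
GLRenderer renderer = new GLRenderer(this);         // hypothetical class implementing GLSurfaceView.Renderer
glSurfaceView.setRenderer(renderer);
// Render only when a new camera frame arrives, i.e. when the renderer's
// onFrameAvailable() callback calls glSurfaceView.requestRender()
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);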

Demo Result

For more details, see the HUAWEI Developer Alliance official website and the development guide documentation.

To join developer discussions, go to Reddit: https://www.reddit.com/r/HuaweiDevelopers/

To download the demo and sample code, go to GitHub: https://github.com/HMS-Core

To resolve integration issues, go to Stack Overflow: https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Newest