Introduction
Most of you have probably already learned about or worked with OpenAtom OpenHarmony (hereinafter "OpenHarmony"), but chances are you have never implemented face recognition on it. This article walks you through quickly implementing face recognition on an OpenHarmony standard-system device based on SeetaFace2 and OpenCV.
Project Results
This project implements three main features: importing face models, face framing, and face recognition. The workflow is as follows:
- On the enrollment page, tap the button in the lower-right corner to go to the capture page and take a photo;
- Select one or more face images as the training model and set the corresponding name;
- Select a face image that has not been enrolled and tap the framing button to draw boxes around the faces in it;
- Finally, tap Recognize; the app matches the current image and displays the recognition result on screen.
Quick Start
Device-Side Development
The device side processes images with OpenCV, recognizes faces in the image data with SeetaFace2, and finally exposes the corresponding NAPI interfaces for the application side to call. Device-side development therefore mainly involves porting OpenCV and SeetaFace2 and developing the NAPI interfaces.
Porting the OpenCV Library
OpenCV is a very powerful open-source computer vision library. It has already been ported to OpenHarmony by the Knowledge System Working Group and will later be merged into the main repository. Until it lands upstream, the following steps are all that is needed to port and use OpenCV.
- Download the already ported OpenCV with the following command
git clone git@gitee.com:zhong-luping/ohos_opencv.git
- Copy OpenCV into third_party under the OpenHarmony directory
cp -raf opencv ~/openharmony/third_party/
- Trim the build options as needed
Open BUILD.gn in the OpenCV directory, as shown below. The video and flann features are not needed, so simply comment out the corresponding modules:
import("//build/ohos.gni")
group("opencv") {
deps = [
"//third_party/opencv/modules/core:opencv_core",
// "//third_party/opencv/modules/flann:opencv_flann",
"//third_party/opencv/modules/imgproc:opencv_imgproc",
"//third_party/opencv/modules/ml:opencv_ml",
"//third_party/opencv/modules/photo:opencv_photo",
"//third_party/opencv/modules/dnn:opencv_dnn",
"//third_party/opencv/modules/features2d:opencv_features2d",
"//third_party/opencv/modules/imgcodecs:opencv_imgcodecs",
"//third_party/opencv/modules/videoio:opencv_videoio",
"//third_party/opencv/modules/calib3d:opencv_calib3d",
"//third_party/opencv/modules/highgui:opencv_highgui",
"//third_party/opencv/modules/objdetect:opencv_objdetect",
"//third_party/opencv/modules/stitching:opencv_stitching",
"//third_party/opencv/modules/ts:opencv_ts",
// "//third_party/opencv/modules/video:opencv_video",
"//third_party/opencv/modules/gapi:opencv_gapi",
]
}
- Add the part_name of the dependent subsystem so that the build framework copies the generated libraries into the system image.
In this project we created a SeetaFaceApp subsystem whose part_name is SeetafaceApi, so part_name = "SeetafaceApi" has to be added to the BUILD.gn of each module.
Taking modules/core as an example:
ohos_shared_library("opencv_core"){sources = [ ...]
configs = [...]
deps = [...]
part_name = "SeetafaceApi"
}
- Add the OpenCV dependency to the build.
Add the following dependency to the BUILD.gn that produces the NAPI library:
deps += ["//third_party/opencv:opencv"]
At this point, porting and using OpenCV for face recognition is complete.
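To quickly confirm that the ported library builds and links correctly, a minimal smoke test can be compiled against //third_party/opencv:opencv. This snippet is only an illustration and not part of the original project; the image path is a placeholder:
// Minimal OpenCV smoke test: read an image and print its size.
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("/data/test.jpg"); // placeholder path, adjust as needed
    if (img.empty()) {
        std::cout << "imread failed" << std::endl;
        return -1;
    }
    std::cout << "image size: " << img.cols << "x" << img.rows << std::endl;
    return 0;
}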
Porting the SeetaFace2 Library
SeetaFace2 is the second-generation face recognition library open-sourced by Seetatech (中科视拓). It contains the three core modules needed to build a fully automatic face recognition system: the face detection module FaceDetector, the facial landmark detection module FaceLandmarker, and the face feature extraction and comparison module FaceRecognizer.
For porting SeetaFace2, see the document: SeetaFace2 porting and development guide.
NAPI Interface Development
For NAPI development on OpenHarmony, see the NAPI development video tutorial for OpenHarmony. This article focuses on how the NAPI interfaces call into OpenCV and SeetaFace.
- Implementing the NAPI interface that obtains the face bounding boxes.
int GetRecognizePoints(const char *image_path);
This interface receives an image path from the application layer, reads the image data with OpenCV's imread interface, obtains all face rectangles in the image (each expressed as x, y, w, h) through the face detection module FaceDetector, and returns the rectangles to the application layer as an array.
The main code for obtaining the face rectangles is as follows:
static int RecognizePoint(string image_path, FaceRect *rect, int num)
{
    if (rect == nullptr) {
        cerr << "NULL POINT!" << endl;
        LOGE("NULL POINT! \n");
        return -1;
    }
    seeta::ModelSetting::Device device = seeta::ModelSetting::CPU;
    int id = 0;
    /* Set up the face recognition models. */
    seeta::ModelSetting FD_model("/system/usr/model/fd_2_00.dat", device, id);
    seeta::ModelSetting FL_model("/system/usr/model/pd_2_00_pts81.dat", device, id);
    seeta::FaceDetector FD(FD_model);
    seeta::FaceLandmarker FL(FL_model);
    FD.set(seeta::FaceDetector::PROPERTY_VIDEO_STABLE, 1);
    /* Read the image data */
    auto frame = imread(image_path);
    seeta::cv::ImageData simage = frame;
    if (simage.empty()) {
        cerr << "Can not open image:" << image_path << endl;
        LOGE("Can not open image: %{public}s", image_path.c_str());
        return -1;
    }
    /* Run face detection on the image data and get all face rectangle objects */
    auto faces = FD.detect(simage);
    if (faces.size <= 0) {
        cerr << "detect" << image_path << "failed!" << endl;
        LOGE("detect image: %s failed!", image_path.c_str());
        return -1;
    }
    for (int i = 0; (i < faces.size && i < num); i++) {
        /* Output every face rectangle object as coordinates */
        auto &face = faces.data[i];
        memcpy(&rect[i], &(face.pos), sizeof(FaceRect));
    }
    return faces.size;
}
Here FD_model is the face detection model and FL_model is the facial landmark model (available in 5-point and 81-point variants; this project uses the 81-point model). These models can be obtained for free from the open-source project.
After the face rectangles are obtained as above, they are returned to the application side as an array:
string image = path;
p = (FaceRect *)malloc(sizeof(FaceRect) * MAX_FACE_RECT);
/* Run face detection on the image and get the face rectangle coordinates */
int retval = RecognizePoint(image, p, MAX_FACE_RECT);
if (retval <= napi_ok) {
    LOGE("GetNapiValueString failed!");
    free(p);
    return result;
}
/* Return all coordinates to the application side as an array */
for (int i = 0; i < retval; i++) {
    int arry_int[4] = {p[i].x, p[i].y, p[i].w, p[i].h};
    int arraySize = (sizeof(arry_int) / sizeof(arry_int[0]));
    for (int j = 0; j < arraySize; j++) {
        napi_value num_val;
        if (napi_create_int32(env, arry_int[j], &num_val) != napi_ok) {
            LOGE("napi_create_int32 failed!");
            return result;
        }
        napi_set_element(env, array, i * arraySize + j, num_val);
    }
}
if (napi_create_object(env, &result) != napi_ok) {
    LOGE("napi_create_object failed!");
    free(p);
    return result;
}
if (napi_set_named_property(env, result, "recognizeFrame", array) != napi_ok) {
    LOGE("napi_set_named_property failed!");
    free(p);
    return result;
}
LOGI("");
free(p);
return result;
Here array is a NAPI array object created with napi_create_array; napi_set_element stores all the rectangle data into array, and napi_set_named_property finally attaches array to result, an object type the application side can recognize, which is then returned.
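The creation of array itself is not shown in the snippet above; a minimal sketch of that step (reusing the env and result variables from the snippet) could be:
/* Sketch: create the NAPI array that the loop above fills with napi_set_element. */
napi_value array = nullptr;
if (napi_create_array(env, &array) != napi_ok) {
    LOGE("napi_create_array failed!");
    return result;
}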
- Face search and recognition initialization and de-initialization.
- int FaceSearchInit();
- int FaceSearchDeinit();
These two interfaces are provided for the face search and recognition flow. Initialization mainly covers registering the models and initializing the recognition module:
static int FaceSearchInit(FaceSearchInfo *info)
{
    if (info == NULL) {
        info = (FaceSearchInfo *)malloc(sizeof(FaceSearchInfo));
        if (info == nullptr) {
            cerr << "NULL POINT!" << endl;
            return -1;
        }
    }
    seeta::ModelSetting::Device device = seeta::ModelSetting::CPU;
    int id = 0;
    seeta::ModelSetting FD_model("/system/usr/model/fd_2_00.dat", device, id);
    seeta::ModelSetting PD_model("/system/usr/model/pd_2_00_pts5.dat", device, id);
    seeta::ModelSetting FR_model("/system/usr/model/fr_2_10.dat", device, id);
    info->engine = make_shared<seeta::FaceEngine>(FD_model, PD_model, FR_model, 2, 16);
    info->engine->FD.set(seeta::FaceDetector::PROPERTY_MIN_FACE_SIZE, 80);
    info->GalleryIndexMap.clear();
    return 0;
}
De-initialization simply releases the allocated memory:
static void FaceSearchDeinit(FaceSearchInfo *info, int need_delete)
{
    if (info != nullptr) {
        if (info->engine != nullptr) {
        }
        info->GalleryIndexMap.clear();
        if (need_delete) {
            free(info);
            info = nullptr;
        }
    }
}
- Implementing the registration interface for face search and recognition.
int FaceSearchRegister(const char *value);
Note that this interface requires the application side to pass in a JSON parameter that contains the name of the face being registered, the images, and the number of images, for example {"name":"刘德华", "sum":2, "image":["11.jpg","12.jpg"]}. When parsing the parameter, napi_get_named_property is called to extract each field of the JSON data. The code is as follows:
napi_get_cb_info(env, info, &argc, &argv, &thisVar, &data);
napi_value object = argv;
napi_value value = nullptr;
if (napi_get_named_property(env, object, (const char *)"name", &value) == napi_ok) {
    char name[64] = {0};
    if (GetNapiValueString(env, value, (char *)name, sizeof(name)) < 0) {
        LOGE("GetNapiValueString failed!");
        return result;
    }
    reg_info.name = name;
}
LOGI("name = %{public}s", reg_info.name.c_str());
if (napi_get_named_property(env, object, (const char *)"sum", &value) == napi_ok) {
    if (napi_get_value_uint32(env, value, &sum) != napi_ok) {
        LOGE("napi_get_value_uint32 failed!");
        return result;
    }
}
LOGI("sum = %{public}d", sum);
if (napi_get_named_property(env, object, (const char *)"image", &value) == napi_ok) {
    bool res = false;
    if (napi_is_array(env, value, &res) != napi_ok || res == false) {
        LOGE("napi_is_array failed!");
        return result;
    }
    for (int i = 0; i < sum; i++) {
        char image[256] = {0};
        napi_value imgPath = nullptr;
        if (napi_get_element(env, value, i, &imgPath) != napi_ok) {
            LOGE("napi_get_element failed!");
            return result;
        }
        if (GetNapiValueString(env, imgPath, (char *)image, sizeof(image)) < 0) {
            LOGE("GetNapiValueString failed!");
            return result;
        }
        reg_info.path = image;
        if (FaceSearchRegister(g_FaceSearch, reg_info) != napi_ok) {
            retval = -1;
            break;
        }
    }
}
napi_get_cb_info obtains the parameter passed from the application side, napi_get_named_property extracts the name and the number of images, and napi_get_element retrieves each image from the image array. The name and images are then registered into the SeetaFace2 recognition engine through the FaceSearchRegister interface, implemented as follows:
static int FaceSearchRegister(FaceSearchInfo &info, RegisterInfo &gegister)
{
    if (info.engine == nullptr) {
        cerr << "NULL POINT!" << endl;
        return -1;
    }
    seeta::cv::ImageData image = cv::imread(gegister.path);
    auto id = info.engine->Register(image);
    if (id >= 0) {
        info.GalleryIndexMap.insert(make_pair(id, gegister.name));
    }
    return 0;
}
Once the data has been registered, subsequent images can be recognized with this engine.
- Implementing the interface that obtains the face search and recognition result.
char *FaceSearchGetRecognize(const char *image_path);
This interface takes an image path and searches for the face in the recognition engine. If a similar face has been registered, the name given at registration time is returned; otherwise the string "ignored" (not recognized) is returned. The method is implemented with an asynchronous callback:
// Create the async work; on success, the handle of the async work is returned
// through the last parameter (commandStrData->asyncWork)
napi_value resourceName = nullptr;
napi_create_string_utf8(env, "FaceSearchGetPersonRecognizeMethod", NAPI_AUTO_LENGTH, &resourceName);
napi_create_async_work(env, nullptr, resourceName, FaceSearchRecognizeExecuteCB, FaceSearchRecognizeCompleteCB,
    (void *)commandStrData, &commandStrData->asyncWork);
// Queue the newly created async work; the underlying scheduler will execute it
napi_queue_async_work(env, commandStrData->asyncWork);
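The deferred that FaceSearchRecognizeCompleteCB later resolves is assumed to have been created beforehand with napi_create_promise; a minimal sketch of that step (not shown in the original code, variable names follow the snippet above) looks like this:
// Sketch: create the promise associated with commandStrData->deferred.
// "promise" is the napi_value that the NAPI method returns to the eTS caller.
napi_value promise = nullptr;
if (napi_create_promise(env, &commandStrData->deferred, &promise) != napi_ok) {
    LOGE("napi_create_promise failed!");
    return nullptr;
}
// ... then create and queue the async work as shown above, and return promise.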
FaceSearchRecognizeExecuteCB performs the face recognition:
static void FaceSearchRecognizeExecuteCB(napi_env env, void *data)
{
    CommandStrData *commandStrData = dynamic_cast<CommandStrData*>((CommandStrData *)data);
    if (commandStrData == nullptr) {
        HILOG_ERROR("nullptr point!", __FUNCTION__, __LINE__);
        return;
    }
    FaceSearchInfo faceSearch = *(commandStrData->mFaceSearch);
    commandStrData->result = FaceSearchSearchRecognizer(faceSearch, commandStrData->filename);
    LOGI("Recognize result : %s !", __FUNCTION__, __LINE__, commandStrData->result.c_str());
}
FaceSearchRecognizeCompleteCB returns the recognition result to the application side through the napi_resolve_deferred interface:
static void FaceSearchRecognizeCompleteCB(napi_env env, napi_status status, void *data)
{
    CommandStrData *commandStrData = dynamic_cast<CommandStrData*>((CommandStrData *)data);
    napi_value result;
    if (commandStrData == nullptr || commandStrData->deferred == nullptr) {
        LOGE("nullptr", __FUNCTION__, __LINE__);
        if (commandStrData != nullptr) {
            napi_delete_async_work(env, commandStrData->asyncWork);
            delete commandStrData;
        }
        return;
    }
    const char *result_str = (const char *)commandStrData->result.c_str();
    if (napi_create_string_utf8(env, result_str, strlen(result_str), &result) != napi_ok) {
        LOGE("napi_create_string_utf8 failed!", __FUNCTION__, __LINE__);
        napi_delete_async_work(env, commandStrData->asyncWork);
        delete commandStrData;
        return;
    }
    napi_resolve_deferred(env, commandStrData->deferred, result);
    napi_delete_async_work(env, commandStrData->asyncWork);
    delete commandStrData;
}
The face feature extraction and comparison module compares the incoming image against the registered data and uses the returned similarity to decide whether the current face can be recognized, then returns the recognition result. The implementation is as follows:
static string FaceSearchSearchRecognizer(FaceSearchInfo &info, string filename)
{
    if (info.engine == nullptr) {
        cerr << "NULL POINT!" << endl;
        return "recognize error 0";
    }
    string name;
    float threshold = 0.7f;
    seeta::QualityAssessor QA;
    auto frame = cv::imread(filename);
    if (frame.empty()) {
        LOGE("read image %{public}s failed!", filename.c_str());
        return "recognize error 1!";
    }
    seeta::cv::ImageData image = frame;
    std::vector<SeetaFaceInfo> faces = info.engine->DetectFaces(image);
    for (SeetaFaceInfo &face : faces) {
        int64_t index = 0;
        float similarity = 0;
        auto points = info.engine->DetectPoints(image, face);
        auto score = QA.evaluate(image, face.pos, points.data());
        if (score == 0) {
            name = "ignored";
        } else {
            auto queried = info.engine->QueryTop(image, points.data(), 1, &index, &similarity);
            // no face queried from database
            if (queried < 1) continue;
            // similarity greater than threshold means the face is recognized
            if (similarity > threshold) {
                name = info.GalleryIndexMap[index];
            }
        }
    }
    LOGI("name : %{public}s \n", name.length() > 0 ? name.c_str() : "null");
    return name.length() > 0 ? name : "recognize failed";
}
At this point, all of the NAPI interfaces have been implemented.
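Before moving on to the build, the interfaces also need to be exported to the application side and the library registered as a NAPI module. The following is only a sketch of what that registration typically looks like; the wrapper callback names and the module name are assumptions rather than the project's actual code, and the exported property names are the ones the eTS side calls (e.g. SeetafaceApp.FaceSearchInit()):
// Headers provided by //foundation/ace/napi:ace_napi
#include "napi/native_api.h"
#include "napi/native_node_api.h"

// Hypothetical NAPI wrapper callbacks (assumed names) that wrap the C++
// functions described above and convert arguments to/from napi_value.
static napi_value GetRecognizePointsNapi(napi_env env, napi_callback_info info);
static napi_value FaceSearchInitNapi(napi_env env, napi_callback_info info);
static napi_value FaceSearchDeinitNapi(napi_env env, napi_callback_info info);
static napi_value FaceSearchRegisterNapi(napi_env env, napi_callback_info info);
static napi_value FaceSearchGetRecognizeNapi(napi_env env, napi_callback_info info);

static napi_value Init(napi_env env, napi_value exports)
{
    // Property names must match what the application side calls.
    napi_property_descriptor desc[] = {
        { "GetRecognizePoints", nullptr, GetRecognizePointsNapi, nullptr, nullptr, nullptr, napi_default, nullptr },
        { "FaceSearchInit", nullptr, FaceSearchInitNapi, nullptr, nullptr, nullptr, napi_default, nullptr },
        { "FaceSearchDeinit", nullptr, FaceSearchDeinitNapi, nullptr, nullptr, nullptr, napi_default, nullptr },
        { "FaceSearchRegister", nullptr, FaceSearchRegisterNapi, nullptr, nullptr, nullptr, napi_default, nullptr },
        { "FaceSearchGetRecognize", nullptr, FaceSearchGetRecognizeNapi, nullptr, nullptr, nullptr, napi_default, nullptr },
    };
    napi_define_properties(env, exports, sizeof(desc) / sizeof(desc[0]), desc);
    return exports;
}

static napi_module g_seetafaceModule = {
    1,              // nm_version
    0,              // nm_flags
    nullptr,        // nm_filename
    Init,           // nm_register_func
    "seetafaceapp", // nm_modname (assumed; must match the app-side import name)
    nullptr,        // nm_priv
    { 0 },          // reserved
};

extern "C" __attribute__((constructor)) void RegisterSeetafaceModule(void)
{
    napi_module_register(&g_seetafaceModule);
}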
- Building the NAPI library. After the NAPI interfaces are developed, the library has to be added to the system build, which requires adding a subsystem of our own.
First, create an ohos.build file in the library directory:
{
  "subsystem": "SeetafaceApp",
  "parts": {
    "SeetafaceApi": {
      "module_list": [ "//seetaface:seetafaceapp_napi" ],
      "test_list": []
    }
  }
}
Next, create a BUILD.gn in the same directory and add the library sources and the corresponding dependencies, as follows:
import("//build/ohos.gni")
config("lib_config") {
cflags_cc = [
"-frtti",
"-fexceptions",
"-DCVAPI_EXPORTS",
"-DOPENCV_ALLOCATOR_STATS_COUNTER_TYPE=int",
"-D_USE_MATH_DEFINES",
"-D__OPENCV_BUILD=1",
"-D__STDC_CONSTANT_MACROS",
"-D__STDC_FORMAT_MACROS",
"-D__STDC_LIMIT_MACROS",
"-O2",
"-Wno-error=header-hygiene",
]
}
ohos_shared_library("seetafaceapp_napi") {
sources = ["app.cpp",]
include_dirs = [
"./",
"//third_party/opencv/include",
"//third_party/opencv/common",
"//third_party/opencv/modules/core/include",
"//third_party/opencv/modules/highgui/include",
"//third_party/opencv/modules/imgcodecs/include",
"//third_party/opencv/modules/imgproc/include",
"//third_party/opencv/modules/calib3d/include",
"//third_party/opencv/modules/dnn/include",
"//third_party/opencv/modules/features2d/include",
"//third_party/opencv/modules/flann/include",
"//third_party/opencv/modules/ts/include",
"//third_party/opencv/modules/video/include",
"//third_party/opencv/modules/videoio/include",
"//third_party/opencv/modules/ml/include",
"//third_party/opencv/modules/objdetect/include",
"//third_party/opencv/modules/photo/include",
"//third_party/opencv/modules/stitching/include",
"//third_party/SeetaFace2/FaceDetector/include",
"//third_party/SeetaFace2/FaceLandmarker/include",
"//third_party/SeetaFace2/FaceRecognizer/include",
"//third_party/SeetaFace2/QualityAssessor/include",
"//base/accessibility/common/log/include",
"//base/hiviewdfx/hilog_lite/interfaces/native/innerkits"
]
deps = ["//foundation/ace/napi:ace_napi"]
deps += ["//third_party/opencv:opencv"]
deps += ["//third_party/SeetaFace2:SeetaFace2"]
external_deps = ["hiviewdfx_hilog_native:libhilog",]
configs = [":lib_config"]
# Specify the install path of the generated library
relative_install_dir = "module"
# Subsystem and component names, referenced later
subsystem_name = "SeetafaceApp"
part_name = "SeetafaceApi"
}
After adding these files, the subsystem needs to be added to the system build. Open build/subsystem_config.json and append the following:
"SeetafaceApp": {
"path": "seetaface",
"name": "SeetafaceApp"
}
After adding the subsystem, update the corresponding product configuration.
Open productdefine/common/products/rk3568.json and append the following:
"SeetafaceApp:SeetafaceApi":{}
With the above changes, the NAPI library can be built directly with the following command:
./build.sh --product-name rk3568 --ccache
Flash the image to the board by following RK3568 Quick Start - Image Flashing.
Application-Side Development
With the device-side NAPI functionality complete, the application implements the corresponding features by calling the face recognition interfaces that the NAPI component exposes. The following sections walk through using NAPI to implement face recognition in the application.
Development Preparation
- Download DevEco Studio 3.0 Beta4;
- Set up the development environment; see the application development preparation guide;
- Get familiar with eTS development; see the eTS language quick start;
SeetaFace2 Initialization
- First, place the SeetaFace2 NAPI interface declaration file under the /api directory of the SDK;
- Then import the SeetaFace2 NAPI module;
- Call the initialization interface;
// After the home page instance is created
async aboutToAppear() {
  await StorageUtils.clearModel();
  CommonLog.info(TAG, 'aboutToAppear')
  // Initialize face recognition
  let res = SeetafaceApp.FaceSearchInit()
  CommonLog.info(TAG, `FaceSearchInit res=${res}`)
  this.requestPermissions()
}
// Request permissions
requestPermissions() {
  CommonLog.info(TAG, 'requestPermissions')
  let context = featureAbility.getContext()
  context.requestPermissionsFromUser(PERMISSIONS, 666, (res) => {
    this.getMediaImage()
  })
}
Getting All Face Images
Use the file management module fileio and the media library module mediaLibrary to obtain the information of all images under the specified application data directory and assign the paths to faceList; faceList supplies the urls that the Image component uses to load the pictures.
// Get all images
async getMediaImage() {
  let context = featureAbility.getContext();
  // Get the local application sandbox path
  let localPath = await context.getOrCreateLocalDir()
  CommonLog.info(TAG, `localPath:${localPath}`)
  let facePath = localPath + "/files"
  // Get all photos
  this.faceList = await FileUtil.getImagePath(facePath)
}
Setting the Face Model
Get the addresses of the selected face images and the entered name, then call SeetafaceApp.FaceSearchRegister(params) to set up the face model. The parameter params consists of name (the person's name), image (the list of image paths), and sum (the number of images).
async submit(name) {
  if (!name || name.length == 0) {
    CommonLog.info(TAG, 'name is empty')
    return
  }
  let selectArr = this.faceList.filter(item => item.isSelect)
  if (selectArr.length == 0) {
    CommonLog.info(TAG, 'faceList is empty')
    return
  }
  // Close the dialog
  this.dialogController.close()
  try {
    let urls = []
    let files = []
    selectArr.forEach(item => {
      let source = item.url.replace('file://', '')
      CommonLog.info(TAG, `source:${source}`)
      urls.push(item.url)
      files.push(source)
    })
    // Set the face recognition model parameters
    let params = {
      name: name,
      image: files,
      sum: files.length
    }
    CommonLog.info(TAG, 'FaceSearchRegister' + JSON.stringify(params))
    let res = SeetafaceApp.FaceSearchRegister(params)
    CommonLog.info(TAG, 'FaceSearchRegister res' + res)
    // Persist the configured face model to lightweight storage
    let data = {
      name: name,
      urls: urls
    }
    let modelStr = await StorageUtils.getModel()
    let modelList = JSON.parse(modelStr)
    modelList.push(data)
    StorageUtils.setModel(modelList)
    router.back()
  } catch (err) {
    CommonLog.error(TAG, 'submit fail' + err)
  }
}
Implementing Face Framing
Call SeetafaceApp.GetRecognizePoints with the current image path to get the top-left and bottom-right coordinates of each face, then draw the face boxes with a CanvasRenderingContext2D object.
Implementing Face Recognition
Call SeetafaceApp.FaceSearchGetRecognize(url) with the image path to recognize the face and return the corresponding name.
// Face recognition
recognize() {
  SeetafaceApp.FaceSearchGetRecognize(this.url).then(res => {
    CommonLog.info(TAG, 'recognize success' + JSON.stringify(res))
    if (res && res != 'ignored' && res != "recognize failed" && res != 'recognize error 1!') {
      // Assign the recognized person's model name
      this.name = res
    } else {
      this.name = '未辨认到该模型'
    }
  }).catch(err => {
    CommonLog.error(TAG, 'recognize' + err)
    this.name = '未辨认到该模型'
  })
}
References
SeetaFace2 porting and development guide:
https://gitee.com/openharmony…
NAPI development video tutorial for OpenHarmony:
https://www.bilibili.com/vide…
RK3568 quick start:
https://growing.openharmony.c…
Face recognition application:
https://gitee.com/openharmony…
Application development preparation:
https://docs.openharmony.cn/p…
eTS language quick start:
https://docs.openharmony.cn/p…
Knowledge System Working Group:
https://gitee.com/openharmony…