1. Introduction
The media subsystem offers developers a wide range of media-related capabilities. This article takes a detailed look at one of them: video recording. Starting from the video recording test code shipped with the media subsystem, I will walk through the entire recording flow.
2. Directory Structure
foundation/multimedia/camera_framework
├── frameworks
│   ├── js
│   │   └── camera_napi              # N-API implementation
│   │       └── src
│   │           ├── input            # camera input
│   │           ├── output           # camera output
│   │           └── session          # session management
│   └── native                       # native implementation
│       └── camera
│           ├── BUILD.gn
│           ├── src
│           │   ├── input            # camera input
│           │   ├── output           # camera output
│           │   └── session          # session management
├── interfaces                       # interface definitions
│   ├── inner_api                    # internal native interfaces
│   │   └── native
│   │       ├── camera
│   │       │   └── include
│   │       │       ├── input
│   │       │       ├── output
│   │       │       └── session
│   └── kits                         # N-API interfaces
│       └── js
│           └── camera_napi
│               ├── BUILD.gn
│               ├── include
│               │   ├── input
│               │   ├── output
│               │   └── session
│               └── @ohos.multimedia.camera.d.ts
└── services                         # service side
    └── camera_service
        ├── binder
        │   ├── base
        │   ├── client               # IPC client
        │   │   └── src
        │   └── server               # IPC server
        │       └── src
        └── src
3. Overall Recording Flow
The overall flow runs from obtaining the CameraManager, through configuring a CaptureSession with a CameraInput and a VideoOutput, to starting, stopping, and saving the recording; the following sections walk through each of these steps in order.
4. Using the Native Interfaces
In the OpenAtom OpenHarmony (hereinafter "OpenHarmony") system, the multimedia subsystem is exposed to upper-layer JS through N-API interfaces; N-API acts as the bridge between JS and native code. The OpenHarmony source also ships examples that call the video recording capability directly from C++, located in the foundation/multimedia/camera_framework/interfaces/inner_api/native/test directory. This article mainly follows the video recording flow in the camera_video.cpp file.
First, let's look at the main() function of camera_video.cpp to get an overview of the recording flow.
int main(int argc, char **argv)
{
    ......
    // Create the CameraManager instance
    sptr<CameraManager> camManagerObj = CameraManager::GetInstance();
    // Set the callback
    camManagerObj->SetCallback(std::make_shared<TestCameraMngerCallback>(testName));
    // Get the list of supported camera devices
    std::vector<sptr<CameraDevice>> cameraObjList = camManagerObj->GetSupportedCameras();
    // Create a capture session
    sptr<CaptureSession> captureSession = camManagerObj->CreateCaptureSession();
    // Begin configuring the capture session
    captureSession->BeginConfig();
    // Create the CameraInput
    sptr<CaptureInput> captureInput = camManagerObj->CreateCameraInput(cameraObjList[0]);
    sptr<CameraInput> cameraInput = (sptr<CameraInput> &)captureInput;
    // Open the CameraInput
    cameraInput->Open();
    // Set the error callback on the CameraInput
    cameraInput->SetErrorCallback(std::make_shared<TestDeviceCallback>(testName));
    // Add the CameraInput instance to the capture session
    ret = captureSession->AddInput(cameraInput);
    sptr<Surface> videoSurface = nullptr;
    std::shared_ptr<Recorder> recorder = nullptr;
    // Create the Surface for video
    videoSurface = Surface::CreateSurfaceAsConsumer();
    sptr<SurfaceListener> videoListener = new SurfaceListener("Video", SurfaceType::VIDEO, g_videoFd, videoSurface);
    // Register the consumer listener on the Surface
    videoSurface->RegisterConsumerListener((sptr<IBufferConsumerListener> &)videoListener);
    // Video profile configuration
    VideoProfile videoprofile = VideoProfile(static_cast<CameraFormat>(videoFormat), videosize, videoframerates);
    // Create the VideoOutput instance
    sptr<CaptureOutput> videoOutput = camManagerObj->CreateVideoOutput(videoprofile, videoSurface);
    // Set the callback on the VideoOutput
    ((sptr<VideoOutput> &)videoOutput)->SetCallback(std::make_shared<TestVideoOutputCallback>(testName));
    // Add the videoOutput to the capture session
    ret = captureSession->AddOutput(videoOutput);
    // Commit the session configuration
    ret = captureSession->CommitConfig();
    // Start recording
    ret = ((sptr<VideoOutput> &)videoOutput)->Start();
    sleep(videoPauseDuration);
    MEDIA_DEBUG_LOG("Resume video recording");
    // Resume recording
    ret = ((sptr<VideoOutput> &)videoOutput)->Resume();
    MEDIA_DEBUG_LOG("Wait for 5 seconds before stop");
    sleep(videoCaptureDuration);
    MEDIA_DEBUG_LOG("Stop video recording");
    // Stop recording
    ret = ((sptr<VideoOutput> &)videoOutput)->Stop();
    MEDIA_DEBUG_LOG("Closing the session");
    // Stop the capture session
    ret = captureSession->Stop();
    MEDIA_DEBUG_LOG("Releasing the session");
    // Release the capture session
    captureSession->Release();
    // Close the video file
    TestUtils::SaveVideoFile(nullptr, 0, VideoSaveMode::CLOSE, g_videoFd);
    cameraInput->Release();
    camManagerObj->SetCallback(nullptr);
    return 0;
}
That is the overall video recording flow. It is built on the capabilities of the Camera module and involves several important classes: CaptureSession, CameraInput, and VideoOutput. CaptureSession controls the whole process, while CameraInput and VideoOutput act as the device's input and output ends respectively.
5. Call Flow
The rest of this section traces the calls made above in more detail, so that we can gain a deeper understanding of the overall architecture of video recording.
- Create the CameraManager instance
The CameraManager instance is obtained via CameraManager::GetInstance(); the subsequent interfaces are all called on this instance. GetInstance() uses the singleton pattern, which is very common in OpenHarmony code.
sptr<CameraManager> &CameraManager::GetInstance()
{
    if (CameraManager::cameraManager_ == nullptr) {
        MEDIA_INFO_LOG("Initializing camera manager for first time!");
        CameraManager::cameraManager_ = new(std::nothrow) CameraManager();
        if (CameraManager::cameraManager_ == nullptr) {
            MEDIA_ERR_LOG("CameraManager::GetInstance failed to new CameraManager");
        }
    }
    return CameraManager::cameraManager_;
}
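As an aside, the same lazy-initialization idiom can be written in plain standard C++ with a function-local static. The sketch below is only a generic illustration of the pattern, not OpenHarmony code:
// Generic singleton sketch, for illustration only (not the real CameraManager).
class Manager {
public:
    static Manager &GetInstance()
    {
        static Manager instance;   // created once on first use; initialization is thread-safe since C++11
        return instance;
    }
    Manager(const Manager &) = delete;
    Manager &operator=(const Manager &) = delete;
private:
    Manager() = default;
};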
- Get the list of supported camera devices
CameraManager's GetSupportedCameras() interface returns the list of CameraDevice objects supported by the device. Tracing the code shows that serviceProxy_->GetCameras eventually reaches the corresponding interface on the camera service side.
std::vector<sptr<CameraDevice>> CameraManager::GetSupportedCameras()
{
    CAMERA_SYNC_TRACE;
    std::lock_guard<std::mutex> lock(mutex_);
    std::vector<std::string> cameraIds;
    std::vector<std::shared_ptr<Camera::CameraMetadata>> cameraAbilityList;
    int32_t retCode = -1;
    sptr<CameraDevice> cameraObj = nullptr;
    int32_t index = 0;
    if (cameraObjList.size() > 0) {
        cameraObjList.clear();
    }
    if (serviceProxy_ == nullptr) {
        MEDIA_ERR_LOG("CameraManager::GetCameras serviceProxy_ is null, returning empty list!");
        return cameraObjList;
    }
    std::vector<sptr<CameraDevice>> supportedCameras;
    retCode = serviceProxy_->GetCameras(cameraIds, cameraAbilityList);
    if (retCode == CAMERA_OK) {
        for (auto& it : cameraIds) {
            cameraObj = new(std::nothrow) CameraDevice(it, cameraAbilityList[index++]);
            if (cameraObj == nullptr) {
                MEDIA_ERR_LOG("CameraManager::GetCameras new CameraDevice failed for id={public}%s", it.c_str());
                continue;
            }
            supportedCameras.emplace_back(cameraObj);
        }
    } else {
        MEDIA_ERR_LOG("CameraManager::GetCameras failed!, retCode: %{public}d", retCode);
    }
    ChooseDeFaultCameras(supportedCameras);
    return cameraObjList;
}
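A short usage sketch, mirroring what the test's main() does with the returned list: bail out if it is empty and record with the first device. GetID() returning a string is inferred from the CreateCameraInput and GetCameras code quoted in this article; treat the snippet as illustrative.
// Illustrative only; error handling trimmed.
sptr<CameraManager> camManagerObj = CameraManager::GetInstance();
std::vector<sptr<CameraDevice>> cameraObjList = camManagerObj->GetSupportedCameras();
if (cameraObjList.empty()) {
    MEDIA_ERR_LOG("No supported camera device found");
    return -1;
}
sptr<CameraDevice> camera = cameraObjList[0];   // camera_video.cpp records with the first device
MEDIA_INFO_LOG("Using camera id: %{public}s", camera->GetID().c_str());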
- Create the capture session
The next step is an important one. The capture session is created by calling CameraManager's CreateCaptureSession interface. CameraManager does this through serviceProxy_->CreateCaptureSession, which brings in OpenHarmony IPC: serviceProxy_ is the local proxy of the remote service, and calls through it reach the concrete service, in this case HCameraService.
sptr<CaptureSession> CameraManager::CreateCaptureSession()
{
    CAMERA_SYNC_TRACE;
    sptr<ICaptureSession> captureSession = nullptr;
    sptr<CaptureSession> result = nullptr;
    int32_t retCode = CAMERA_OK;
    if (serviceProxy_ == nullptr) {
        MEDIA_ERR_LOG("CameraManager::CreateCaptureSession serviceProxy_ is null");
        return nullptr;
    }
    retCode = serviceProxy_->CreateCaptureSession(captureSession);
    if (retCode == CAMERA_OK && captureSession != nullptr) {
        result = new(std::nothrow) CaptureSession(captureSession);
        if (result == nullptr) {
            MEDIA_ERR_LOG("Failed to new CaptureSession");
        }
    } else {
        MEDIA_ERR_LOG("Failed to get capture session object from hcamera service!, %{public}d", retCode);
    }
    return result;
}
The code eventually lands in HCameraService::CreateCaptureSession. This method news an HCaptureSession object and assigns it to the output parameter session, so the captureSession obtained earlier is exactly the HCaptureSession created here; CameraManager::CreateCaptureSession() then wraps it into a CaptureSession object and returns it to the application layer.
int32_t HCameraService::CreateCaptureSession(sptr<ICaptureSession> &session)
{
    CAMERA_SYNC_TRACE;
    sptr<HCaptureSession> captureSession;
    if (streamOperatorCallback_ == nullptr) {
        streamOperatorCallback_ = new(std::nothrow) StreamOperatorCallback();
        if (streamOperatorCallback_ == nullptr) {
            MEDIA_ERR_LOG("HCameraService::CreateCaptureSession streamOperatorCallback_ allocation failed");
            return CAMERA_ALLOC_ERROR;
        }
    }
    std::lock_guard<std::mutex> lock(mutex_);
    OHOS::Security::AccessToken::AccessTokenID callerToken = IPCSkeleton::GetCallingTokenID();
    captureSession = new(std::nothrow) HCaptureSession(cameraHostManager_, streamOperatorCallback_, callerToken);
    if (captureSession == nullptr) {
        MEDIA_ERR_LOG("HCameraService::CreateCaptureSession HCaptureSession allocation failed");
        return CAMERA_ALLOC_ERROR;
    }
    session = captureSession;
    return CAMERA_OK;
}
- Begin configuring the capture session
CaptureSession's BeginConfig starts the configuration of the capture session. The call ends up in the wrapped HCaptureSession (a short ordering sketch follows the code).
int32_t HCaptureSession::BeginConfig()
{
    CAMERA_SYNC_TRACE;
    if (curState_ == CaptureSessionState::SESSION_CONFIG_INPROGRESS) {
        MEDIA_ERR_LOG("HCaptureSession::BeginConfig Already in config inprogress state!");
        return CAMERA_INVALID_STATE;
    }
    std::lock_guard<std::mutex> lock(sessionLock_);
    prevState_ = curState_;
    curState_ = CaptureSessionState::SESSION_CONFIG_INPROGRESS;
    tempCameraDevices_.clear();
    tempStreams_.clear();
    deletedStreamIds_.clear();
    return CAMERA_OK;
}
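Judging from the state check above and the matching check in HCaptureSession::AddInput quoted later, the session behaves like a small state machine: AddInput/AddOutput are only accepted between BeginConfig() and CommitConfig(). A minimal ordering sketch, reusing only the calls already shown in main():
// Illustrative ordering only; error handling trimmed.
sptr<CaptureSession> captureSession = camManagerObj->CreateCaptureSession();
// An AddInput() before BeginConfig() is rejected by HCaptureSession with CAMERA_INVALID_STATE.
int32_t ret = captureSession->BeginConfig();   // enter SESSION_CONFIG_INPROGRESS
ret = captureSession->AddInput(cameraInput);   // queue the input device
ret = captureSession->AddOutput(videoOutput);  // queue the video stream
ret = captureSession->CommitConfig();          // leave the config state; the session becomes usable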
- Create the CameraInput
The application creates the CameraInput with camManagerObj->CreateCameraInput(cameraObjList[0]); cameraObjList[0] is the first of the supported devices obtained earlier. A CameraInput object is created for the given CameraDevice.
sptr<CameraInput> CameraManager::CreateCameraInput(sptr<CameraDevice> &camera)
{
    CAMERA_SYNC_TRACE;
    sptr<CameraInput> cameraInput = nullptr;
    sptr<ICameraDeviceService> deviceObj = nullptr;
    if (camera != nullptr) {
        deviceObj = CreateCameraDevice(camera->GetID());
        if (deviceObj != nullptr) {
            cameraInput = new(std::nothrow) CameraInput(deviceObj, camera);
            if (cameraInput == nullptr) {
                MEDIA_ERR_LOG("failed to new CameraInput Returning null in CreateCameraInput");
                return cameraInput;
            }
        } else {
            MEDIA_ERR_LOG("Returning null in CreateCameraInput");
        }
    } else {
        MEDIA_ERR_LOG("CameraManager::CreateCameraInput: Camera object is null");
    }
    return cameraInput;
}
- Open the CameraInput
CameraInput's Open method is called to open and start the input device.
void CameraInput::Open()
{
    int32_t retCode = deviceObj_->Open();
    if (retCode != CAMERA_OK) {
        MEDIA_ERR_LOG("Failed to open Camera Input, retCode: %{public}d", retCode);
    }
}
- Add the CameraInput instance to the capture session
Calling captureSession's AddInput method adds the CameraInput object created above as the session's input, so the session knows which device the capture comes from.
int32_t CaptureSession::AddInput(sptr<CaptureInput> &input)
{
    CAMERA_SYNC_TRACE;
    if (input == nullptr) {
        MEDIA_ERR_LOG("CaptureSession::AddInput input is null");
        return CAMERA_INVALID_ARG;
    }
    input->SetSession(this);
    inputDevice_ = input;
    return captureSession_->AddInput(((sptr<CameraInput> &)input)->GetCameraDevice());
}
The call finally reaches HCaptureSession::AddInput. Its core line is tempCameraDevices_.emplace_back(localCameraDevice), which inserts the CameraDevice to be added into the tempCameraDevices_ container; a short note on its checks follows the code.
int32_t HCaptureSession::AddInput(sptr<ICameraDeviceService> cameraDevice)
{
    CAMERA_SYNC_TRACE;
    sptr<HCameraDevice> localCameraDevice = nullptr;
    if (cameraDevice == nullptr) {
        MEDIA_ERR_LOG("HCaptureSession::AddInput cameraDevice is null");
        return CAMERA_INVALID_ARG;
    }
    if (curState_ != CaptureSessionState::SESSION_CONFIG_INPROGRESS) {
        MEDIA_ERR_LOG("HCaptureSession::AddInput Need to call BeginConfig before adding input");
        return CAMERA_INVALID_STATE;
    }
    if (!tempCameraDevices_.empty() || (cameraDevice_ != nullptr && !cameraDevice_->IsReleaseCameraDevice())) {
        MEDIA_ERR_LOG("HCaptureSession::AddInput Only one input is supported");
        return CAMERA_INVALID_SESSION_CFG;
    }
    localCameraDevice = static_cast<HCameraDevice*>(cameraDevice.GetRefPtr());
    if (cameraDevice_ == localCameraDevice) {
        cameraDevice_->SetReleaseCameraDevice(false);
    } else {
        tempCameraDevices_.emplace_back(localCameraDevice);
        CAMERA_SYSEVENT_STATISTIC(CreateMsg("CaptureSession::AddInput"));
    }
    sptr<IStreamOperator> streamOperator;
    int32_t rc = localCameraDevice->GetStreamOperator(streamOperatorCallback_, streamOperator);
    if (rc != CAMERA_OK) {
        MEDIA_ERR_LOG("HCaptureSession::GetCameraDevice GetStreamOperator returned %{public}d", rc);
        localCameraDevice->Close();
        return rc;
    }
    return CAMERA_OK;
}
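Two behaviors follow from the checks above: adding an input outside the configuration state fails with CAMERA_INVALID_STATE, and each session supports exactly one input. A tiny sketch of the second point:
// Grounded in the tempCameraDevices_ check in HCaptureSession::AddInput above.
ret = captureSession->AddInput(cameraInput);   // first input: queued into tempCameraDevices_
ret = captureSession->AddInput(cameraInput);   // second AddInput in the same configuration pass:
                                               // rejected with CAMERA_INVALID_SESSION_CFG ("Only one input is supported")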
- Create the Video Surface
The Surface is created through Surface::CreateSurfaceAsConsumer. In main() above, a SurfaceListener is then registered on it via RegisterConsumerListener so that produced frames can be consumed; a hedged sketch of such a listener follows the code.
sptr<Surface> Surface::CreateSurfaceAsConsumer(std::string name, bool isShared)
{
    sptr<ConsumerSurface> surf = new ConsumerSurface(name, isShared);
    GSError ret = surf->Init();
    if (ret != GSERROR_OK) {
        BLOGE("Failure, Reason: consumer surf init failed");
        return nullptr;
    }
    return surf;
}
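Below is a minimal consumer-listener sketch modeled on the SurfaceListener used in camera_video.cpp. The exact IBufferConsumerListener, AcquireBuffer, and ReleaseBuffer shapes are assumptions based on the OpenHarmony graphics Surface interfaces, so treat it as illustrative rather than authoritative:
// Hedged sketch of a video surface consumer; signatures are assumed, not verified.
class VideoSurfaceListener : public IBufferConsumerListener {
public:
    explicit VideoSurfaceListener(sptr<Surface> surface) : surface_(surface) {}

    void OnBufferAvailable() override
    {
        int32_t flushFence = 0;
        int64_t timestamp = 0;
        OHOS::Rect damage;
        sptr<SurfaceBuffer> buffer = nullptr;
        surface_->AcquireBuffer(buffer, flushFence, timestamp, damage);   // take one frame from the queue
        if (buffer != nullptr) {
            // buffer->GetVirAddr() / buffer->GetSize() expose the frame data; the test code
            // appends it to a file with TestUtils::SaveVideoFile(...) before releasing the buffer.
            surface_->ReleaseBuffer(buffer, -1);                          // hand the buffer back to the queue
        }
    }

private:
    sptr<Surface> surface_;
};
// Registration, as done in main():
// videoSurface->RegisterConsumerListener((sptr<IBufferConsumerListener> &)videoListener);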
- Create the VideoOutput instance
The VideoOutput instance is created by calling CameraManager's CreateVideoOutput.
sptr<VideoOutput> CameraManager::CreateVideoOutput(VideoProfile &profile, sptr<Surface> &surface)
{
    CAMERA_SYNC_TRACE;
    sptr<IStreamRepeat> streamRepeat = nullptr;
    sptr<VideoOutput> result = nullptr;
    int32_t retCode = CAMERA_OK;
    camera_format_t metaFormat;
    metaFormat = GetCameraMetadataFormat(profile.GetCameraFormat());
    retCode = serviceProxy_->CreateVideoOutput(surface->GetProducer(), metaFormat,
        profile.GetSize().width, profile.GetSize().height, streamRepeat);
    if (retCode == CAMERA_OK) {
        result = new(std::nothrow) VideoOutput(streamRepeat);
        if (result == nullptr) {
            MEDIA_ERR_LOG("Failed to new VideoOutput");
        } else {
            std::vector<int32_t> videoFrameRates = profile.GetFrameRates();
            if (videoFrameRates.size() >= 2) { // vaild frame rate range length is 2
                result->SetFrameRateRange(videoFrameRates[0], videoFrameRates[1]);
            }
            POWERMGR_SYSEVENT_CAMERA_CONFIG(VIDEO, profile.GetSize().width, profile.GetSize().height);
        }
    } else {
        MEDIA_ERR_LOG("VideoOutpout: Failed to get stream repeat object from hcamera service! %{public}d", retCode);
    }
    return result;
}
Through an IPC call, this method eventually reaches HCameraService's CreateVideoOutput(surface->GetProducer(), format, streamRepeat).
HCameraService's CreateVideoOutput method mainly creates an HStreamRepeat and hands it back through the output parameter; CameraManager then wraps the returned HStreamRepeat object to build the VideoOutput object.
- Add the videoOutput to the capture session and commit the configuration
This step is similar to adding the CameraInput to the capture session; refer to the flow described above.
- Start recording
Recording is started by calling VideoOutput's Start.
int32_t VideoOutput::Start()
{
    return static_cast<IStreamRepeat *>(GetStream().GetRefPtr())->Start();
}
This method in turn calls HStreamRepeat's Start method.
int32_t HStreamRepeat::Start()
{
    CAMERA_SYNC_TRACE;
    if (streamOperator_ == nullptr) {
        return CAMERA_INVALID_STATE;
    }
    if (curCaptureID_ != 0) {
        MEDIA_ERR_LOG("HStreamRepeat::Start, Already started with captureID: %{public}d", curCaptureID_);
        return CAMERA_INVALID_STATE;
    }
    int32_t ret = AllocateCaptureId(curCaptureID_);
    if (ret != CAMERA_OK) {
        MEDIA_ERR_LOG("HStreamRepeat::Start Failed to allocate a captureId");
        return ret;
    }
    std::vector<uint8_t> ability;
    OHOS::Camera::MetadataUtils::ConvertMetadataToVec(cameraAbility_, ability);
    CaptureInfo captureInfo;
    captureInfo.streamIds_ = {streamId_};
    captureInfo.captureSetting_ = ability;
    captureInfo.enableShutterCallback_ = false;
    MEDIA_INFO_LOG("HStreamRepeat::Start Starting with capture ID: %{public}d", curCaptureID_);
    CamRetCode rc = (CamRetCode)(streamOperator_->Capture(curCaptureID_, captureInfo, true));
    if (rc != HDI::Camera::V1_0::NO_ERROR) {
        ReleaseCaptureId(curCaptureID_);
        curCaptureID_ = 0;
        MEDIA_ERR_LOG("HStreamRepeat::Start Failed with error Code:%{public}d", rc);
        ret = HdiToServiceError(rc);
    }
    return ret;
}
The core call is streamOperator_->Capture; its last parameter, true, indicates that data is captured continuously.
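Two further details are visible in HStreamRepeat::Start above: each repeat stream keeps track of its current capture ID, and a second Start() while a capture is active is rejected. A small sketch using only the VideoOutput calls from main():
// Grounded in the curCaptureID_ check in HStreamRepeat::Start above.
ret = ((sptr<VideoOutput> &)videoOutput)->Start();   // allocates a captureId and issues Capture(..., true)
ret = ((sptr<VideoOutput> &)videoOutput)->Start();   // while running, HStreamRepeat rejects this with CAMERA_INVALID_STATE
ret = ((sptr<VideoOutput> &)videoOutput)->Stop();    // stops the repeating capture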
- Recording ends and the recorded file is saved
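This last step has no dedicated class of its own; it is simply the teardown sequence already visible at the end of main(), shown here in condensed form:
// Condensed from the end of main() in camera_video.cpp.
ret = ((sptr<VideoOutput> &)videoOutput)->Stop();                        // stop the video stream
ret = captureSession->Stop();                                            // stop the capture session
captureSession->Release();                                               // release the session
TestUtils::SaveVideoFile(nullptr, 0, VideoSaveMode::CLOSE, g_videoFd);   // close the recorded file
cameraInput->Release();                                                  // release the input device
camManagerObj->SetCallback(nullptr);                                     // clear the manager callback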
6. Summary
This article introduced video recording in the OpenHarmony 3.2 Beta multimedia subsystem: it first laid out the overall recording flow, then analyzed the main steps of the recording process in detail. Video recording breaks down into the following steps:
(1) Get the CameraManager instance.
(2) Create the capture session (CaptureSession).
(3) Create a CameraInput instance and add the input device to the CaptureSession.
(4) Create the Surface needed for video recording.
(5) Create a VideoOutput instance and add the output to the CaptureSession.
(6) Commit the capture session configuration.
(7) Call VideoOutput's Start method to record video.
(8) Finish recording and save the recorded file.
On OpenHarmony 3.2 Beta multimedia development, I have previously shared:
"OpenHarmony 3.2 Beta Source Code Analysis: MediaLibrary"
"OpenHarmony 3.2 Beta Multimedia Series: Audio and Video Playback Framework"
"OpenHarmony 3.2 Beta Multimedia Series: Audio and Video Playback with GStreamer"
Interested developers are welcome to read them.