Sharing audio input on Android
The official Android documentation describes this feature.
Sharing audio input:
https://developer.android.goo...
In other words, sharing audio input has been supported since Android 10: multiple apps can capture from the microphone at the same time.
The page is a wall of introductory text that can easily leave you dizzy, but it does say:
"In most cases, if a new app acquires the audio input, the previously capturing app continues to run, but receives silence. In some cases the system can continue to deliver audio to both apps."
Reading this, the silencing is mostly a privacy consideration; multiple apps can indeed receive normal audio at the same time.
We know that different recording apps may use different sample rates (48K, 16K, ...), formats (16-bit, 24-bit, 32-bit, ...), channel counts (1ch, 2ch, ...) and so on. To share the input, do all apps have to keep their parameters identical?
Or can they differ, with AudioFlinger resampling per app just like it does for playback? With that question in mind, let's dig into how this is implemented (in theory AF must be doing the resampling; forcing every app to use identical parameters would be far too weak and wouldn't show off what Android can do).
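To make the question concrete, here is a minimal, hypothetical native capture client using the NDK AAudio API (it assumes the app already holds the RECORD_AUDIO permission; the parameter values are arbitrary examples). Each client simply asks for its own sample rate, channel count and format when opening its input stream, with no knowledge of what any other recording app requested:

```cpp
#include <aaudio/AAudio.h>
#include <vector>

// Hypothetical sketch: open a capture stream with this app's preferred parameters.
// Another app could simultaneously open its own stream at 48 kHz / stereo / float.
int recordOneSecond() {
    AAudioStreamBuilder *builder = nullptr;
    if (AAudio_createStreamBuilder(&builder) != AAUDIO_OK) return -1;

    AAudioStreamBuilder_setDirection(builder, AAUDIO_DIRECTION_INPUT);
    AAudioStreamBuilder_setSampleRate(builder, 16000);            // this app wants 16 kHz
    AAudioStreamBuilder_setChannelCount(builder, 1);              // mono
    AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_I16);

    AAudioStream *stream = nullptr;
    aaudio_result_t result = AAudioStreamBuilder_openStream(builder, &stream);
    AAudioStreamBuilder_delete(builder);
    if (result != AAUDIO_OK) return -1;

    AAudioStream_requestStart(stream);

    std::vector<int16_t> buf(16000);                              // one second of mono 16-bit PCM
    // Blocking read; the timeout is in nanoseconds.
    aaudio_result_t frames = AAudioStream_read(stream, buf.data(), 16000, 1000000000L);

    AAudioStream_requestStop(stream);
    AAudioStream_close(stream);
    return frames < 0 ? -1 : 0;
}
```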
You can start with this article:
Android Q sharing audio input:
https://blog.csdn.net/u013490...
In short:
- Multiple apps recording at the same time no longer fails; recording keeps going, but because of the privacy policy some apps receive silenced data. You can customize the policy in AudioPolicyService::updateUidStates_l().
- Apps can use the AudioManager.AudioRecordingCallback() callback to learn about configuration changes: whether the capture is silenced, the device, the source, and so on.
Next let's look at how AF's RecordThread::threadLoop() handles the data conversion.
Note: recording normally uses a RecordThread; when the AUDIO_INPUT_FLAG_MMAP_NOIRQ flag is set, a MmapCaptureThread is used instead. We'll take RecordThread as the example.
The recording flow inside AF
The recording threadLoop(), like the playback one, is a very long function with roughly the same overall structure, mainly:
- process config events
- handle tracks that are no longer active
- effect chain processing
- read data from the HAL
- data conversion
```
// frameworks/av/services/audioflinger/Threads.cpp
AudioFlinger::RecordThread::threadLoop()
|   // process config events
+   processConfigEvents_l();
|   // remove tracks that are no longer active (pause, etc.)
+   mActiveTracks.remove(activeTrack);
|   // effect chain processing -- huh, why isn't this done after reading the data?
+   effectChains[i]->process_l();
|   // read data from the HAL into mRsmpInBuffer
+   mSource->read(
|       (uint8_t*)mRsmpInBuffer + rear * mFrameSize, mBufferSize, &bytesRead);
|
+   for (size_t i = 0; i < size; i++) {
+       activeTrack = activeTracks[i];
+       activeTrack->getNextBuffer(&activeTrack->mSink);
+       activeTrack->mResamplerBufferProvider->sync(&framesIn, &hasOverrun);
|       // with the AUDIO_INPUT_FLAG_DIRECT flag, copy the data straight to mSink.raw
+       if (activeTrack->isDirect()) {
|           activeTrack->mResamplerBufferProvider->getNextBuffer(&buffer);
|           memcpy(activeTrack->mSink.raw, buffer.raw, buffer.frameCount * mFrameSize);
|           activeTrack->mResamplerBufferProvider->releaseBuffer(&buffer);
+       } else {
|           // otherwise convert the provider's data and hand it to mSink.raw
+           activeTrack->mRecordBufferConverter->convert(
|               activeTrack->mSink.raw,
|               activeTrack->mResamplerBufferProvider,
|               framesOut);
+       }
```
For the data conversion: in the direct case the data goes straight into activeTrack->mSink.raw; otherwise it has to be converted first.
Between activeTrack's mResamplerBufferProvider and mRecordBufferConverter it's easy to get confused, but put simply the names say it all: the Provider supplies the data, and the Converter converts the provider's data into whatever the track asked for.
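To keep the roles straight, here is a stripped-down sketch of the idea, not the actual AOSP classes (SimpleProvider, SimpleConverter and fanOut are invented names): one buffer of stereo float frames read from the HAL is handed to every active track, and each track's converter reshapes it into that track's own channel count and format.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical, simplified stand-ins for ResamplerBufferProvider / RecordBufferConverter.
struct SimpleProvider {                 // hands out the shared HAL buffer
    const float *halBuffer;             // what mSource->read() filled in (stereo float)
    size_t frames;
    const float *getNextBuffer() const { return halBuffer; }
};

struct SimpleConverter {                // per-track conversion parameters
    int dstChannels;                    // e.g. 1 or 2
    // Convert the provider's stereo float data into this track's 16-bit sink buffer.
    void convert(int16_t *dst, const SimpleProvider &p) const {
        const float *src = p.getNextBuffer();
        for (size_t i = 0; i < p.frames; ++i) {
            if (dstChannels == 1) {     // downmix: average L and R
                float mono = 0.5f * (src[2 * i] + src[2 * i + 1]);
                dst[i] = static_cast<int16_t>(mono * 32767.f);
            } else {                    // keep stereo, just change the sample format
                dst[2 * i]     = static_cast<int16_t>(src[2 * i] * 32767.f);
                dst[2 * i + 1] = static_cast<int16_t>(src[2 * i + 1] * 32767.f);
            }
        }
    }
};

// One HAL read, fanned out to every active track with its own converter.
// Each sinks[t] must already be sized for frames * tracks[t].dstChannels samples.
void fanOut(const float *halBuffer, size_t frames,
            const std::vector<SimpleConverter> &tracks,
            std::vector<std::vector<int16_t>> &sinks) {
    SimpleProvider provider{halBuffer, frames};
    for (size_t t = 0; t < tracks.size(); ++t) {
        tracks[t].convert(sinks[t].data(), provider);   // every track gets its own copy
    }
}
```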
Before looking at the convert() function, let's first see how the Converter is created.
Creating the Converter
```
// frameworks/av/services/audioflinger/Tracks.cpp
AudioFlinger::RecordThread::RecordTrack::RecordTrack()
+   if (!isDirect())
|       // create the Converter
+       mRecordBufferConverter = new RecordBufferConverter(
|           thread->mChannelMask, thread->mFormat, thread->mSampleRate,
|           channelMask, format, sampleRate);
|
+   mServerProxy = new AudioRecordServerProxy(mCblk, mBuffer, frameCount,
|       mFrameSize, !isExternalTrack());
|
|   // Buffer Provider
+   mResamplerBufferProvider = new ResamplerBufferProvider(this);
```
The most important thing the Converter does is create the Resampler. Depending on the quality it creates a different Resampler, e.g. linear, cubic, sinc or dynamic.
By default a dynamic medium-quality Resampler is created; vendors can of course plug their own implementation in here.
```
// frameworks/av/media/libaudioprocessing/RecordBufferConverter.cpp
RecordBufferConverter::RecordBufferConverter()
+   updateParameters()
|       // create a Resampler if the sample rates differ
+       if (mSrcSampleRate != mDstSampleRate) {
+           mResampler = AudioResampler::create(AUDIO_FORMAT_PCM_FLOAT,
|               mSrcChannelCount, mDstSampleRate);

// create an AudioResampler according to the requested quality
// frameworks/av/media/libaudioprocessing/AudioResampler.cpp
AudioResampler* AudioResampler::create(...) {
    ...
    // by default create the dynamic medium-quality one
    if (quality == DEFAULT_QUALITY) {
        quality = DYN_MED_QUALITY;
    }
    ...
    switch (quality) {
    default:
    case LOW_QUALITY:
        ALOGV("Create linear Resampler");
        ...
        resampler = new AudioResamplerOrder1(inChannelCount, sampleRate);
        break;
    case MED_QUALITY:
        ALOGV("Create cubic Resampler");
        ...
        resampler = new AudioResamplerCubic(inChannelCount, sampleRate);
        break;
    case HIGH_QUALITY:
        ALOGV("Create HIGH_QUALITY sinc Resampler");
        ...
        resampler = new AudioResamplerSinc(inChannelCount, sampleRate);
        break;
    case VERY_HIGH_QUALITY:
        ALOGV("Create VERY_HIGH_QUALITY sinc Resampler = %d", quality);
        ...
        resampler = new AudioResamplerSinc(inChannelCount, sampleRate, quality);
        break;
    case DYN_LOW_QUALITY:
    case DYN_MED_QUALITY:
    case DYN_HIGH_QUALITY:
        ALOGV("Create dynamic Resampler = %d", quality);
        if (format == AUDIO_FORMAT_PCM_FLOAT) {
            resampler = new AudioResamplerDyn<float, float, float>(inChannelCount,
                    sampleRate, quality);
        } else {
            LOG_ALWAYS_FATAL_IF(format != AUDIO_FORMAT_PCM_16_BIT);
            if (quality == DYN_HIGH_QUALITY) {
                resampler = new AudioResamplerDyn<int32_t, int16_t, int32_t>(inChannelCount,
                        sampleRate, quality);
            } else {
                resampler = new AudioResamplerDyn<int16_t, int16_t, int32_t>(inChannelCount,
                        sampleRate, quality);
            }
        }
```
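As a rough mental model for the simplest case above, the LOW_QUALITY linear resampler, here is a self-contained sketch of linear-interpolation resampling. resampleLinear is an invented helper handling mono float only; the real AudioResampler classes track the phase in fixed point, handle multi-channel input, and the sinc/dynamic variants do proper filtering.

```cpp
#include <cstddef>
#include <vector>

// Minimal mono linear-interpolation resampler sketch (not the AOSP implementation).
// Produces output at dstRate from input sampled at srcRate.
std::vector<float> resampleLinear(const std::vector<float> &in,
                                  int srcRate, int dstRate) {
    if (in.empty() || srcRate <= 0 || dstRate <= 0) return {};
    const double step = static_cast<double>(srcRate) / dstRate;   // input frames per output frame
    const size_t outFrames = in.size() * dstRate / srcRate;
    std::vector<float> out(outFrames);
    double pos = 0.0;                                              // fractional read position
    for (size_t i = 0; i < outFrames; ++i, pos += step) {
        size_t idx = static_cast<size_t>(pos);
        double frac = pos - idx;
        float a = in[idx];
        float b = (idx + 1 < in.size()) ? in[idx + 1] : a;         // clamp at the end
        out[i] = static_cast<float>(a + (b - a) * frac);           // linear interpolation
    }
    return out;
}

// e.g. resampleLinear(buf48k, 48000, 16000) yields roughly one third as many frames.
```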
Data conversion
Back to the convert() function, i.e. RecordBufferConverter::convert().
It splits into a path that needs no resampling and a path that does.
```
AudioFlinger::RecordThread::threadLoop()
+   activeTrack->mRecordBufferConverter->convert()
|       // frameworks/av/media/libaudioprocessing/RecordBufferConverter.cpp
|       // RecordBufferConverter::convert()
|       // no resampling needed
+       if (mResampler == NULL) {
+           provider->getNextBuffer(&buffer);
|           // format convert to destination buffer
+           convertNoResampler(dst, buffer.raw, buffer.frameCount);  // --> see below
|
+       } else {
+           frames = mResampler->resample((int32_t*)mBuf, frames, provider);  // resample
|           // format convert to destination buffer
+           convertResampler(dst, mBuf, frames);  // --> see below
+       }
```
No resampling needed
When no resampling is needed, i.e. the sample rates are the same, only the channel count and the format (8-bit, 16-bit, ...) have to be converted.
The channel conversion happens first:
for 2ch -> 1ch the two channels are summed and multiplied by 0.5, i.e. the new mono channel is the average of the two;
for 1ch -> 2ch the mono value is simply assigned to both left and right, so the two channels carry identical values.
```
RecordBufferConverter::convertNoResampler()
|   // do we need to do legacy upmix and downmix?
+   if (mIsLegacyUpmix || mIsLegacyDownmix) {  // the legacy path is still what's used today
|       if (mIsLegacyUpmix) {
|           // upmix, 1ch -> 2ch: just duplicate the sample
+           upmix_to_stereo_float_from_mono_float()
|               // primitives.c
|               + dst[0] = temp;
|               + dst[1] = temp;
|       } else /* mIsLegacyDownmix */ {
|           // downmix, 2ch -> 1ch: average the two channels
+           downmix_to_mono_float_from_stereo_float()
|               + *dst++ = (src[0] + src[1]) * 0.5
|       }
|
+       memcpy_by_audio_format()  // format conversion
|       return;
+   }
|   // the newer path converts channels by index
+   if (mSrcChannelMask != mDstChannelMask) {
        ...
|       memcpy_by_index_array(dstBuf, mDstChannelCount,
|           src, mSrcChannelCount, mIdxAry, audio_bytes_per_sample(mSrcFormat), frames);
|
|   // format conversion
+   memcpy_by_audio_format()
```
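As a quick worked example of the averaging above: a stereo frame of (0.2, 0.6) downmixes to 0.4, and a mono sample of 0.5 upmixes to (0.5, 0.5). A minimal stand-alone version of the two legacy helpers might look like this (downmixStereoToMono and upmixMonoToStereo are invented names; the real helpers live in system/media/audio_utils/primitives.c):

```cpp
#include <cstddef>

// Simplified equivalents of the legacy up/downmix helpers (float samples in [-1, 1]).
// dst and src must not overlap in these sketches.

// 2ch -> 1ch: average left and right into one mono sample per frame.
void downmixStereoToMono(float *dst, const float *src, size_t frames) {
    for (size_t i = 0; i < frames; ++i) {
        dst[i] = (src[2 * i] + src[2 * i + 1]) * 0.5f;
    }
}

// 1ch -> 2ch: duplicate the mono sample into both channels.
void upmixMonoToStereo(float *dst, const float *src, size_t frames) {
    for (size_t i = 0; i < frames; ++i) {
        dst[2 * i]     = src[i];
        dst[2 * i + 1] = src[i];
    }
}
```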
After that comes the format conversion, which essentially juggles uint8_t, int16_t, int32_t and the like. If you're interested it's worth studying carefully, especially the conversions to and from 24-bit.
Only the code for a 16-bit destination format is shown here.
```
// system/media/audio_utils/format.c
void memcpy_by_audio_format(void *dst, audio_format_t dst_format,
        const void *src, audio_format_t src_format, size_t count)
{
    ...
    switch (dst_format) {
    case AUDIO_FORMAT_PCM_16_BIT:
        switch (src_format) {
        case AUDIO_FORMAT_PCM_FLOAT:
            memcpy_to_i16_from_float((int16_t*)dst, (float*)src, count);
            return;
        case AUDIO_FORMAT_PCM_8_BIT:
            memcpy_to_i16_from_u8((int16_t*)dst, (uint8_t*)src, count);
            return;
        case AUDIO_FORMAT_PCM_24_BIT_PACKED:
            memcpy_to_i16_from_p24((int16_t*)dst, (uint8_t*)src, count);
            return;
        case AUDIO_FORMAT_PCM_32_BIT:
            memcpy_to_i16_from_i32((int16_t*)dst, (int32_t*)src, count);
            return;
        case AUDIO_FORMAT_PCM_8_24_BIT:
            memcpy_to_i16_from_q8_23((int16_t*)dst, (int32_t*)src, count);
            return;
        default:
            break;
        }
```
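To get a feel for what these helpers do, here is a simplified, hedged stand-in for the float -> 16-bit case (clamp16FromFloat and i16FromFloat are invented names): scale the float sample from the nominal [-1.0, 1.0) range into the 16-bit range and clamp. The real memcpy_to_i16_from_float in primitives.c uses optimized clamping, and the 24-bit variants additionally repack bytes.

```cpp
#include <cstdint>
#include <cstddef>

// Simplified sketch of float -> 16-bit PCM conversion (not the AOSP primitives.c code).
static inline int16_t clamp16FromFloat(float f) {
    float scaled = f * 32768.0f;             // map [-1.0, 1.0) to [-32768, 32768)
    if (scaled >= 32767.0f)  return 32767;   // clamp positive overflow
    if (scaled <= -32768.0f) return -32768;  // clamp negative overflow
    return static_cast<int16_t>(scaled);
}

void i16FromFloat(int16_t *dst, const float *src, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        dst[i] = clamp16FromFloat(src[i]);
    }
}
```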
Resampling needed
When resampling is needed, mResampler->resample() runs first. That's the quality-dependent Resampler created earlier; it looks fairly involved and isn't something to digest in one sitting, so we'll come back to it some other time.
Then convertResampler() does the conversion. Note that the resampler already takes care of the upmix case, so this function doesn't handle it again. The helpers it uses are much the same as in the no-resampling path, so we won't go through them again in detail.
```
RecordBufferConverter::convertResampler()
+   if (mIsLegacyUpmix) {
+       ;  // mono to stereo already handled by resampler
|   } else if (mIsLegacyDownmix
|           || (mSrcChannelMask == mDstChannelMask && mSrcChannelCount == 1)) {
+       downmix_to_mono_float_from_stereo_float(...);
|   } else if (mSrcChannelMask != mDstChannelMask) {  // channel masks differ
+       if (mSrcChannelCount == 1)
+           downmix_to_mono_float_from_stereo_float(...);
|
|       // small difference from the no-resampling path: format conversion comes first
|       // convert to destination format (in place, OK as float is larger than other types)
|       if (mDstFormat != AUDIO_FORMAT_PCM_FLOAT)
+           memcpy_by_audio_format()  // format conversion
|       // channel convert and save to dst
|       memcpy_by_index_array()  // copy by channel index
|       return;
|   }
|   // same channels: only a format conversion is needed
+   memcpy_by_audio_format()
```