Introduction to AVAssetReader
AVAssetReader provides access to the media samples inside a video file: it can read raw, undecoded media samples directly from storage, or return samples decoded into a renderable form.
The documentation notes that the AVAssetReader pipeline is multithreaded internally. After initialization, the reader loads and processes a reasonable amount of sample data ahead of use, so retrieval operations such as copyNextSampleBuffer (on AVAssetReaderOutput) have very low latency. AVAssetReader is still not intended for real-time sources, however, and its performance is not guaranteed for real-time operation.
Because sample data is loaded and processed ahead of use, memory consumption can be significant. Avoid keeping too many readers alive at the same time; the higher the video resolution, the more memory each reader uses.
Initializing AVAssetReader
An AVAssetReader is initialized with an AVAsset. As mentioned above, sample data starts loading right after initialization, so this step already has an impact on memory; if memory is tight, do not create the reader ahead of time.
NSError *createReaderError;
_reader = [[AVAssetReader alloc] initWithAsset:_asset error:&createReaderError];
Configuring AVAssetReader Outputs
Before reading starts, outputs must be added to control which tracks of the asset used at initialization are read, and to configure how they are read.
AVAssetReaderOutput also has several other subclasses, such as AVAssetReaderVideoCompositionOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderSampleReferenceOutput.
AVAssetReaderTrackOutput is used for the demonstration here. It is initialized with a track, which is obtained from the asset.
NSArray *tracks = [_asset tracksWithMediaType:AVMediaTypeAudio];
if (tracks.count > 0) {
    AVAssetTrack *audioTrack = [tracks objectAtIndex:0];
}
or
NSArray *tracks = [_asset tracksWithMediaType:AVMediaTypeVideo];
if (tracks.count > 0) {
    AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
}
The output format can also be configured; see the documentation for the full set of options.
NSDictionary * const VideoAssetTrackReaderOutputOptions = @{
    (id)kCVPixelBufferOpenGLESCompatibilityKey : @(YES),
    (id)kCVPixelBufferIOSurfacePropertiesKey   : [NSDictionary dictionary],
    (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
_readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:_track
                                                           outputSettings:VideoAssetTrackReaderOutputOptions];
if ([_reader canAddOutput:_readerOutput]) {
    [_reader addOutput:_readerOutput];
}
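For an audio track output, the settings dictionary would instead request linear PCM. A minimal sketch, reusing the audioTrack obtained earlier; the dictionary and variable names as well as the concrete values are only illustrative:

NSDictionary *AudioAssetTrackReaderOutputOptions = @{
    AVFormatIDKey               : @(kAudioFormatLinearPCM),  // decode to uncompressed PCM
    AVLinearPCMBitDepthKey      : @(16),
    AVLinearPCMIsFloatKey       : @(NO),
    AVLinearPCMIsBigEndianKey   : @(NO),
    AVLinearPCMIsNonInterleaved : @(NO)
};
AVAssetReaderTrackOutput *audioReaderOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                               outputSettings:AudioAssetTrackReaderOutputOptions];
if ([_reader canAddOutput:audioReaderOutput]) {
    [_reader addOutput:audioReaderOutput];
}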
Seeking
AVAssetReader is not well suited to frequent random-access reads; if frequent seeking is required, another approach may be needed.
Before reading starts, the time range to read can be set; once reading has started, it cannot be changed, and samples can only be read sequentially forward.
There are two ways to adjust the read range:
- Set supportsRandomAccess on the output. When it is true, the read range can be reset, but the caller must first keep calling copyNextSampleBuffer until the method returns NULL.
- Alternatively, create a new AVAssetReader and set the desired time range on it.
If you take the first approach to implement seeking, consider setting a fairly short range each time, so that reading the whole range never takes too long, and preferably align the ranges with keyframes; a sketch follows below.
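A minimal sketch of the first approach, assuming _reader and _readerOutput have been set up as in the snippets above (supportsRandomAccess must be set to YES before startReading); the time range values are only an example:

_readerOutput.supportsRandomAccess = YES;
[_reader startReading];

// Drain the current range: copyNextSampleBuffer must return NULL before resetting.
CMSampleBufferRef buffer = NULL;
while ((buffer = [_readerOutput copyNextSampleBuffer])) {
    CFRelease(buffer);
}

// "Seek" by resetting to a new range, e.g. the one second starting at 2s.
CMTimeRange newRange = CMTimeRangeMake(CMTimeMake(2, 1), CMTimeMake(1, 1));
[_readerOutput resetForReadingTimeRanges:@[[NSValue valueWithCMTimeRange:newRange]]];

// When no further resets are needed, -markConfigurationAsFinal lets the reader finish.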
Reading Data
_reader.timeRange = range;
[_reader startReading];
_sampleBuffer = [_readerOutput copyNextSampleBuffer];
CMSampleBuffer provides functions for accessing the decoded data; for example, the image data can be obtained with
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(_sampleBuffer);
Note that once you are done with a CMSampleBuffer, it must be released:
CFRelease(_sampleBuffer);
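Putting these calls together, reading is normally done in a loop until copyNextSampleBuffer returns NULL, releasing each buffer and then checking the reader's status. A minimal sketch, reusing _reader, _readerOutput, and range from above:

_reader.timeRange = range;
[_reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [_readerOutput copyNextSampleBuffer])) {
    // ...use the decoded sample, e.g. CMSampleBufferGetImageBuffer(sampleBuffer)...
    CFRelease(sampleBuffer);
}

if (_reader.status == AVAssetReaderStatusFailed) {
    NSLog(@"reading failed: %@", _reader.error);
} else if (_reader.status == AVAssetReaderStatusCompleted) {
    // Reached the end of the configured time range.
}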
Sample Code
NSDictionary * const AssetOptions = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
NSDictionary * const VideoAssetTrackReaderOutputOptions = @{
    (id)kCVPixelBufferOpenGLESCompatibilityKey : @(YES),
    (id)kCVPixelBufferIOSurfacePropertiesKey   : [NSDictionary dictionary],
    (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};

_videoAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:filePath] options:AssetOptions];
_videoTrack = [[_videoAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];
if (_videoTrack) {
    NSError *createReaderError;
    _reader = [[AVAssetReader alloc] initWithAsset:_videoAsset error:&createReaderError];
    if (!createReaderError) {
        _readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:_videoTrack
                                                                   outputSettings:VideoAssetTrackReaderOutputOptions];
        // Must be set before startReading if the read range will be reset later.
        _readerOutput.supportsRandomAccess = YES;
        if ([_reader canAddOutput:_readerOutput]) {
            [_reader addOutput:_readerOutput];
        }
        [_reader startReading];
        if (_reader.status == AVAssetReaderStatusReading || _reader.status == AVAssetReaderStatusCompleted) {
            CMSampleBufferRef samplebuffer = [_readerOutput copyNextSampleBuffer];
            if (samplebuffer) {
                // Render the frame carried by the sample buffer.
                CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(samplebuffer);
                CVPixelBufferLockBaseAddress(imageBuffer, 0);
                uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
                size_t width = CVPixelBufferGetWidth(imageBuffer);
                size_t height = CVPixelBufferGetHeight(imageBuffer);
                size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
                size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
                CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
                CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
                // 32BGRA pixel data corresponds to little-endian, alpha-skip-first for CGImage.
                CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow,
                                                   rgbColorSpace,
                                                   kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                                   provider, NULL, true, kCGRenderingIntentDefault);
                // ...draw or display cgImage here...
                CGImageRelease(cgImage);
                CGDataProviderRelease(provider);
                CGColorSpaceRelease(rgbColorSpace);
                CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
                CFRelease(samplebuffer);
            }
        }
    }
}