- The problem
The boss tried the camera feature we had just built and asked, "Where is that sound coming from?"
"It's the system shutter sound, built in. The stock camera makes the same sound when it takes a photo."
"Is there any way to get rid of it? It's pretty annoying."
"I'll give it a try."
- The approach
"Long was the road and far the journey"; I searched high and low through Baidu and the SDK.
The call: AVCaptureVideoDataOutput, converting the CMSampleBufferRef to a UIImage in its delegate method. Because this grabs a video frame instead of triggering a system photo capture, no shutter sound plays.
- The code
- Session setup is not covered here; a hedged sketch follows the snippet below
- For the preview layer setup, see the previous post [iOS 拍照定制之 AVCapturePhotoOutput] and the one before it [iOS 写在定制相机之前]
- Get the camera device, add its input to the session, then initialize the videoOutput and add it to the session as well
```objc
AVCaptureDevice *device = [self cameraDevice];
if (!device) {
    NSLog(@"Failed to get the back camera");
    return;
}
NSError *error = nil;
self.videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:device error:&error];
// Add the device input to the session
if ([self.captureSession canAddInput:self.videoInput]) {
    [self.captureSession addInput:self.videoInput];
}
[self.videoOutput setSampleBufferDelegate:self queue:self.videoQueue];
if ([self.captureSession canAddOutput:self.videoOutput]) {
    [self.captureSession addOutput:self.videoOutput];
}

// Lazily created output and queue
- (AVCaptureVideoDataOutput *)videoOutput {
    if (!_videoOutput) {
        _videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        _videoOutput.alwaysDiscardsLateVideoFrames = YES;
        _videoOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
    }
    return _videoOutput;
}

- (dispatch_queue_t)videoQueue {
    if (!_videoQueue) {
        _videoQueue = dispatch_queue_create("queue", DISPATCH_QUEUE_SERIAL);
    }
    return _videoQueue;
}
```
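The snippet leans on two things the post doesn't show: the session setup it skips and the `[self cameraDevice]` helper. A minimal sketch of both, assuming iOS 10+; everything except `captureSession`, `videoInput`, and `videoOutput` is an assumption:

```objc
// Sketch only: the session setup the post skips. setupSession is a
// hypothetical name; call it before the input/output wiring above.
- (void)setupSession {
    self.captureSession = [[AVCaptureSession alloc] init];
    if ([self.captureSession canSetSessionPreset:AVCaptureSessionPresetHigh]) {
        self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
    }
    // ... add the input/output as shown above, attach the preview layer ...
    [self.captureSession startRunning];
}

// Assumed implementation of the cameraDevice helper referenced above:
// the built-in back wide-angle camera (nil if unavailable).
- (AVCaptureDevice *)cameraDevice {
    return [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera
                                              mediaType:AVMediaTypeVideo
                                               position:AVCaptureDevicePositionBack];
}
```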
- The delegate: AVCaptureVideoDataOutputSampleBufferDelegate
```objc
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    @autoreleasepool {
        if (connection == [self.videoOutput connectionWithMediaType:AVMediaTypeVideo]) {
            // Video frame: crop it to the scan rect and keep the result
            @synchronized (self) {
                UIImage *image = [self bufferToImage:sampleBuffer rect:self.scanView.scanRect];
                self.uploadImg = image;
            }
        }
    }
}
```
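Note that this delegate fires for every frame, so the code above converts and overwrites `uploadImg` dozens of times per second. A common variation (an assumption, not the post's code) converts only when the user taps the shutter, guarded by a flag:

```objc
// Sketch: convert a frame only when the user has requested a photo.
// wantsPhoto is a hypothetical atomic BOOL set by the shutter button.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (!self.wantsPhoto) { return; }
    self.wantsPhoto = NO;
    UIImage *image = [self bufferToImage:sampleBuffer rect:self.scanView.scanRect];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.uploadImg = image; // hand the frame to the UI / business layer
    });
}
```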
- Converting the CMSampleBufferRef to a UIImage. The method below is adjusted to crop a region out of the full frame; tune the region mapping to your own layout. Note that the bitmap-context flags match the kCVPixelFormatType_32BGRA format configured on the videoOutput above.
```objc
- (UIImage *)bufferToImage:(CMSampleBufferRef)sampleBuffer rect:(CGRect)rect {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Crop the requested region out of the full frame: map the on-screen
    // rect into pixel-buffer coordinates
    CGRect dRect;
    CGSize msize = UIScreen.mainScreen.bounds.size;
    // App-specific offset: this preview leaves 150 pt of the screen
    // uncovered; adjust to your own layout
    msize.height = msize.height - 150;
    CGFloat x = width * rect.origin.x / msize.width;
    CGFloat y = height * rect.origin.y / msize.height;
    CGFloat w = width * rect.size.width / msize.width;
    CGFloat h = height * rect.size.height / msize.height;
    dRect = CGRectMake(x, y, w, h);
    CGImageRef partRef = CGImageCreateWithImageInRect(quartzImage, dRect);
    // Create an image object from the cropped Quartz image
    UIImage *image = [UIImage imageWithCGImage:partRef];
    // Release the Quartz images
    CGImageRelease(partRef);
    CGImageRelease(quartzImage);
    return image;
}
```
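One gotcha the post doesn't mention: video frames are delivered in the sensor's landscape orientation, so a portrait UI typically wants the connection rotated before frames arrive. A hedged one-time setup, using only stock AVCaptureConnection API:

```objc
// Optional: have frames delivered already rotated for a portrait UI.
AVCaptureConnection *connection = [self.videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (connection.isVideoOrientationSupported) {
    connection.videoOrientation = AVCaptureVideoOrientationPortrait;
}
```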
- The image is in hand; my job here is done. What to do with it is up to the business logic.
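The property name `uploadImg` suggests the frame gets uploaded; as one illustration (an assumption, since the post leaves this to the business layer), JPEG-encoding it for a network request is a one-liner:

```objc
// Illustrative only: encode the captured frame for upload.
NSData *jpegData = UIImageJPEGRepresentation(self.uploadImg, 0.8);
// ... hand jpegData to the networking layer ...
```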