About iOS: How to Create a Live Photo and Save It to the Photo Library

This article mainly explains how to merge a photo and a video into a Live Photo, and then save it to the photo library.

The Secret of Live Photos

On the surface, a Live Photo is just a picture plus a short video. On the phone, the two files share the same name and differ only in extension: one is JPG and the other is MOV, both uppercase.

Note: the iOS file system is case-sensitive, while macOS is case-insensitive but preserves case when displaying names.

I once tried taking a short video from my computer plus a screenshot, renaming them identically, copying them onto my phone (/User/Media/DCIM/100APPLE/), then deleting the photo library database (/User/Media/PhotoData/Photos.sqlite) and rebooting. When I opened the photo library again, the system only recognized the screenshot; it was not recognized as a Live Photo.

A search on StackOverflow revealed that both the image and the video carry extra metadata that identifies them as a Live Photo.
1. Image (JPEG)

  • Metadata
{"{MakerApple}" : {"17" : "<Identifier>"}}

2. Video (MOV)

  • H.264 encoding
  • YUV420P color format
  • Top-level metadata
{"com.apple.quicktime.content.identifier" : "<Identifier>"}
  • Metadata track
{
  "MetadataIdentifier" : "mdta/com.apple.quicktime.still-image-time",
  "MetadataDataType" : "com.apple.metadata.datatype.int8"
}
  • Metadata inside the metadata track
{"com.apple.quicktime.still-image-time" : 0}

The <Identifier> in the image and the one in the video must be identical.
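To see these fields for yourself, a small sketch like the following can read the Apple maker note back out of a JPEG with ImageIO (photoURL is assumed to point at the image file):

#import <ImageIO/ImageIO.h>

// Read the {MakerApple} dictionary of the JPEG and print key "17",
// which should match the MOV's com.apple.quicktime.content.identifier.
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)photoURL, NULL);
NSDictionary *props = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
NSDictionary *makerApple = props[(NSString *)kCGImagePropertyMakerAppleDictionary];
NSLog(@"Live Photo identifier: %@", makerApple[@"17"]);
CFRelease(source);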

Adding the Metadata in Code

Now that we know the secret behind Live Photos, the job is straightforward: we just need to add the corresponding metadata to the image and the video.
Although the video does not have to be re-encoded, there is still no one- or two-line way to insert the metadata. We have to use AVAssetReader and AVAssetWriter, writing while we read.

Prerequisites

1. iOS 9.1 or later is required; earlier versions do not support saving Live Photos to the photo library.
2. Import the following headers.

#import <Photos/Photos.h>
#import <CoreMedia/CMMetadata.h>
#import <MobileCoreServices/MobileCoreServices.h>

Adding Metadata to the Image

Adding metadata to the image is very simple. The code is as follows.

- (void)addMetadataToPhoto:(NSURL *)photoURL outputPhotoFile:(NSString *)outputFile identifier:(NSString *)identifier {
    NSMutableData *data = [NSData dataWithContentsOfURL:photoURL].mutableCopy;
    UIImage *image = [UIImage imageWithData:data];
    CGImageRef imageRef = image.CGImage;
    NSDictionary *imageMetadata = @{(NSString *)kCGImagePropertyMakerAppleDictionary : @{@"17" : identifier}};
    CGImageDestinationRef dest = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)data, kUTTypeJPEG, 1, nil);
    CGImageDestinationAddImage(dest, imageRef, (__bridge CFDictionaryRef)imageMetadata);
    CGImageDestinationFinalize(dest);
    CFRelease(dest); // the destination is a CF object, not managed by ARC
    [data writeToFile:outputFile atomically:YES];
}

Here, the value of kCGImagePropertyMakerAppleDictionary is {MakerApple}, and identifier is generated with [NSUUID UUID].UUIDString.
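A typical call might look like this sketch (paths are placeholders); keep the generated identifier around, because the video must be tagged with the same value later:

NSString *identifier = [NSUUID UUID].UUIDString;
[self addMetadataToPhoto:[NSURL fileURLWithPath:@"/path/to/IMG_0001.JPG"]
         outputPhotoFile:@"/path/to/output/IMG_0001.JPG"
              identifier:identifier];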

Adding Metadata to the Video

Adding metadata to the video is a lot more work. It requires AVAssetReader and AVAssetWriter: the former reads while the latter writes.
Before getting to the actual code, let's build a rough picture of AVAssetReader and AVAssetWriter.

AVAssetReader

AVAsset can be thought of as the video object.
AVAssetReader can be seen as the read manager for an AVAsset; it does not read data itself, it only manages changes of the reading state.
AVAssetReaderOutput can be seen as the data reader that performs the actual reading; it must be added to an AVAssetReader before it can work. Multiple AVAssetReaderOutput instances can be added to one AVAssetReader.
AVAssetReaderTrackOutput is a subclass of AVAssetReaderOutput; you create a track reader by passing in one of the tracks from [AVAsset tracks].
[AVAssetReader startReading] means the AVAssetReaderTrackOutputs may begin reading data.
[AVAssetReaderOutput copyNextSampleBuffer] reads the next chunk of data, which may be audio, video, or some other kind of data.
[AVAssetReader cancelReading] stops reading; once stopped, no AVAssetReaderOutput can read any more data.
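To make these pieces concrete, here is a minimal read-only sketch, assuming videoURL points at a local movie file: it attaches one track output and counts the samples of the first video track.

AVAsset *asset = [AVAsset assetWithURL:videoURL];
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
AVAssetTrack *videoTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
// outputSettings = nil vends the samples in their stored (compressed) format.
AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:nil];
[reader addOutput:output];
[reader startReading];
NSInteger samples = 0;
CMSampleBufferRef buffer = NULL;
while ((buffer = [output copyNextSampleBuffer])) {
    ++samples;
    CFRelease(buffer);
}
NSLog(@"Read %ld samples, reader status: %ld", (long)samples, (long)reader.status);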

AVAssetWriter

AVAssetWriter can be seen as the write manager for video data; it does not write data itself, it only manages changes of the writing state.
AVAssetWriterInput can be seen as a track writer that performs the actual writing; it must be added to an AVAssetWriter before it can work. Multiple AVAssetWriterInput instances can be added to one AVAssetWriter.
[AVAssetWriter startWriting] means the AVAssetWriterInputs may begin writing data.
[AVAssetWriter startSessionAtSourceTime:kCMTimeZero] starts the writing session at second 0 of the media timeline.
AVAssetWriterInput.readyForMoreMediaData indicates whether there is buffer space available for more data.
[AVAssetWriterInput appendSampleBuffer:] appends a sample buffer to the input's buffer. When an AVAssetWriterInput's buffer fills up, the buffered data is processed and the buffer is emptied.
Note: with multiple AVAssetWriterInputs, when one input's buffer fills up the data is not processed right away; the writer waits until the other inputs have received a comparable duration of data before processing.
[AVAssetWriterInput markAsFinished] means there is no more data to write and no further data will be accepted.
[AVAssetWriter finishWritingWithCompletionHandler:] means all writing is finished; all data is processed and a complete video is produced.
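Putting the two halves together, the following is a minimal passthrough sketch, again assuming videoURL and outputFile: one reader output feeds one writer input, and the single video track is re-muxed without re-encoding.

AVAsset *asset = [AVAsset assetWithURL:videoURL];
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:nil];
AVAssetTrack *track = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:nil];
[reader addOutput:output];

AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:[NSURL fileURLWithPath:outputFile] fileType:AVFileTypeQuickTimeMovie error:nil];
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:nil];
[writer addInput:input];

[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
[reader startReading];
CMSampleBufferRef buffer = NULL;
while ((buffer = [output copyNextSampleBuffer])) {
    // Crude back-pressure for a sketch; the real code below uses
    // requestMediaDataWhenReadyOnQueue:usingBlock: instead.
    while (!input.readyForMoreMediaData) { [NSThread sleepForTimeInterval:0.01]; }
    [input appendSampleBuffer:buffer];
    CFRelease(buffer);
}
[input markAsFinished];
[writer finishWritingWithCompletionHandler:^{ NSLog(@"Writer status: %ld", (long)writer.status); }];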

The Read/Write Flow

1. Initialize the AVAssetReader and AVAssetWriter.
2. Get the tracks from the AVAsset and use them to create the AVAssetReaderTrackOutputs and the corresponding AVAssetWriterInputs.
3. Have the AVAssetReader read the top-level metadata; modify it, then have the AVAssetWriter write it back.
4. Put the AVAssetReader and AVAssetWriter into the reading and writing states.
5. The AVAssetReaderOutputs read the track data and the AVAssetWriterInputs write it.
6. Once all data has been read, stop the AVAssetReader and mark every AVAssetWriterInput as finished.
7. Put the AVAssetWriter into the finished state. The video is now complete.

The Code in Detail

Creating the top-level metadata
- (AVMetadataItem *)createContentIdentifierMetadataItem:(NSString *)identifier {
    AVMutableMetadataItem *item = [AVMutableMetadataItem metadataItem];
    item.keySpace = AVMetadataKeySpaceQuickTimeMetadata;
    item.key = AVMetadataQuickTimeMetadataKeyContentIdentifier;
    item.value = identifier;
    return item;
}

The video's identifier here must be the same value as the image's identifier.

Creating the metadata track
- (AVAssetWriterInput *)createStillImageTimeAssetWriterInput {
    NSArray *spec = @[@{(NSString *)kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier : @"mdta/com.apple.quicktime.still-image-time",
                        (NSString *)kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType : (NSString *)kCMMetadataBaseDataType_SInt8 }];
    CMFormatDescriptionRef desc = NULL;
    CMMetadataFormatDescriptionCreateWithMetadataSpecifications(kCFAllocatorDefault, kCMMetadataFormatType_Boxed, (__bridge CFArrayRef)spec, &desc);
    AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeMetadata outputSettings:nil sourceFormatHint:desc];
    return input;
}
Creating the metadata for the metadata track
- (AVMetadataItem *)createStillImageTimeMetadataItem {
    AVMutableMetadataItem *item = [AVMutableMetadataItem metadataItem];
    item.keySpace = AVMetadataKeySpaceQuickTimeMetadata;
    item.key = @"com.apple.quicktime.still-image-time";
    item.value = @(-1);
    item.dataType = (NSString *)kCMMetadataBaseDataType_SInt8;
    return item;
}

Note: dataType must be set here, otherwise inserting the item into the metadata track will fail.

Creating the AVAssetReader and AVAssetWriter

First, define an entry method that adds the metadata to the video.

- (void)addMetadataToVideo:(NSURL *)videoURL outputFile:(NSString *)outputFile identifier:(NSString *)identifier;

Then create the AVAssetReader and AVAssetWriter, and add the top-level metadata item com.apple.quicktime.content.identifier.

NSError *error = nil;
  
// Reader
AVAsset *asset = [AVAsset assetWithURL:videoURL];
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
if (error) {NSLog(@"Init reader error: %@", error);
    return;
}
  
// Add content identifier metadata item
NSMutableArray<AVMetadataItem *> *metadata = asset.metadata.mutableCopy;
AVMetadataItem *item = [self createContentIdentifierMetadataItem:identifier];
[metadata addObject:item];
  
// Writer
NSURL *videoFileURL = [NSURL fileURLWithPath:outputFile];
[self deleteFile:outputFile];
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:videoFileURL fileType:AVFileTypeQuickTimeMovie error:&error];
if (error) {NSLog(@"Init writer error: %@", error);
    return;
}
[writer setMetadata:metadata];
Creating the AVAssetReaderTrackOutputs and AVAssetWriterInputs
// Tracks
NSArray<AVAssetTrack *> *tracks = [asset tracks];
for (AVAssetTrack *track in tracks) {
    NSDictionary *readerOutputSettings = nil;
    NSDictionary *writerOutputSettings = nil;
    if ([track.mediaType isEqualToString:AVMediaTypeAudio]) {
        readerOutputSettings = @{AVFormatIDKey : @(kAudioFormatLinearPCM)};
        writerOutputSettings = @{AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                 AVSampleRateKey : @(44100),
                                 AVNumberOfChannelsKey : @(2),
                                 AVEncoderBitRateKey : @(128000)};
    }
    AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:readerOutputSettings];
    AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:track.mediaType outputSettings:writerOutputSettings];
    if ([reader canAddOutput:output] && [writer canAddInput:input]) {
        [reader addOutput:output];
        [writer addInput:input];
    }
}

For the audio track, the AVAssetReaderTrackOutput decodes the samples into kAudioFormatLinearPCM data as it reads, and the AVAssetWriterInput re-encodes them into kAudioFormatMPEG4AAC as it writes. If you don't want to re-encode, pass nil for outputSettings; the resulting audio track will then be in kAudioFormatLinearPCM format.
For the video track, passing nil for outputSettings means no re-encoding.

Note:
According to the official documentation, AVAssetReaderTrackOutput can only produce uncompressed output.
For audio tracks, that means kAudioFormatLinearPCM only.
For video tracks, the uncompressed format must follow the rules laid out in AVVideoSettings.h. For performance, though, no conversion is needed for codecs the device decodes natively; for example, an H.264 video using YUV420P color needs no conversion.
The full documentation excerpt follows.

The track must be one of the tracks contained by the target AVAssetReader's asset.  
  
A value of nil for outputSettings configures the output to vend samples in their original format as stored by the specified track.  
Initialization will fail if the output settings cannot be used with the specified track.  
  
AVAssetReaderTrackOutput can only produce uncompressed output.  
For audio output settings, this means that AVFormatIDKey must be kAudioFormatLinearPCM.  
For video output settings, this means that the dictionary must follow the rules for uncompressed video output, as laid out in AVVideoSettings.h.  
AVAssetReaderTrackOutput does not support the AVAudioSettings.h key AVSampleRateConverterAudioQualityKey or the following AVVideoSettings.h keys:  
  
  AVVideoCleanApertureKey  
  AVVideoPixelAspectRatioKey  
  AVVideoScalingModeKey  
  
When constructing video output settings the choice of pixel format will affect the performance and quality of the decompression.  
For optimal performance when decompressing video the requested pixel format should be one that the decoder supports natively to avoid unnecessary conversions.  
Below are some recommendations:  
  
For H.264 use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, or kCVPixelFormatType_420YpCbCr8BiPlanarFullRange if the video is known to be full range.  
For JPEG on iOS, use kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.  
  
For other codecs on OSX, kCVPixelFormatType_422YpCbCr8 is the preferred pixel format for video and is generally the most performant when decoding.  
If you need to work in the RGB domain then kCVPixelFormatType_32BGRA is recommended on iOS and kCVPixelFormatType_32ARGB is recommended on OSX.  
  
ProRes encoded media can contain up to 12bits/ch.  
If your source is ProRes encoded and you wish to preserve more than 8bits/ch during decompression then use one of the following pixel formats:  
kCVPixelFormatType_4444AYpCbCr16, kCVPixelFormatType_422YpCbCr16, kCVPixelFormatType_422YpCbCr10, or kCVPixelFormatType_64ARGB.  
AVAssetReader does not support scaling with any of these high bit depth pixel formats.  
If you use them then do not specify kCVPixelBufferWidthKey or kCVPixelBufferHeightKey in your outputSettings dictionary.  
If you plan to append these sample buffers to an AVAssetWriterInput then note that only the ProRes encoders support these pixel formats.  
  
ProRes 4444 encoded media can contain a mathematically lossless alpha channel.  
To preserve the alpha channel during decompression use a pixel format with an alpha component such as kCVPixelFormatType_4444AYpCbCr16 or kCVPixelFormatType_64ARGB.  
To test whether your source contains an alpha channel check that the track's format description has kCMFormatDescriptionExtension_Depth and that its value is 32.
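Following the H.264 recommendation above, if you did want decoded frames instead of passthrough, a reader output setting might look like this sketch (videoTrack is assumed to be the asset's video track):

// Ask the decoder for a pixel format it emits natively for H.264.
NSDictionary *videoReaderSettings = @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)};
AVAssetReaderTrackOutput *decodedOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoReaderSettings];
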
Adding the metadata track to the writer
// Metadata track
AVAssetWriterInput *input = [self createStillImageTimeAssetWriterInput];
AVAssetWriterInputMetadataAdaptor *adaptor = [AVAssetWriterInputMetadataAdaptor assetWriterInputMetadataAdaptorWithAssetWriterInput:input];
if ([writer canAddInput:input]) {
    [writer addInput:input];
}

The AVAssetWriterInputMetadataAdaptor's job is to package metadata items as timed metadata groups and write them into a single AVAssetWriterInput.

Starting the read/write session
// Start reading and writing
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
[reader startReading];
Writing the metadata into the metadata track
// Write metadata track's metadata
AVMetadataItem *timedItem = [self createStillImageTimeMetadataItem];
CMTimeRange timedRange = CMTimeRangeMake(kCMTimeZero, CMTimeMake(1, 100));
AVTimedMetadataGroup *timedMetadataGroup = [[AVTimedMetadataGroup alloc] initWithItems:@[timedItem] timeRange:timedRange];
[adaptor appendTimedMetadataGroup:timedMetadataGroup];

Note:
timedRange must have a non-zero duration; if it is zero, the write fails.
[AVAssetWriterInputMetadataAdaptor appendTimedMetadataGroup:] can only be called after [AVAssetWriter startWriting].

Reading and writing the track data asynchronously
// Write other tracks
self.reader = reader;
self.writer = writer;
self.queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
self.group = dispatch_group_create();
for (NSInteger i = 0; i < reader.outputs.count; ++i) {
    dispatch_group_enter(self.group);
    [self writeTrack:i];
}

Here we hold on to the AVAssetReader and AVAssetWriter objects as well as a dispatch_queue_t and a dispatch_group_t; they are used inside the main read/write method - (void)writeTrack:(NSInteger)trackIndex;.
The dispatch_group is used to run the final cleanup once the asynchronous reads and writes of all tracks have completed.
At this point, the code of the - (void)addMetadataToVideo:(NSURL *)videoURL outputFile:(NSString *)outputFile identifier:(NSString *)identifier; method is complete.
Below is the code that reads and writes the track data asynchronously.

- (void)writeTrack:(NSInteger)trackIndex {
    AVAssetReaderOutput *output = self.reader.outputs[trackIndex];
    AVAssetWriterInput *input = self.writer.inputs[trackIndex];
    
    [input requestMediaDataWhenReadyOnQueue:self.queue usingBlock:^{
        while (input.readyForMoreMediaData) {
            AVAssetReaderStatus status = self.reader.status;
            CMSampleBufferRef buffer = NULL;
            if ((status == AVAssetReaderStatusReading) &&
                (buffer = [output copyNextSampleBuffer])) {
                BOOL success = [input appendSampleBuffer:buffer];
                CFRelease(buffer);
                if (!success) {
                    NSLog(@"Track %d. Failed to append buffer.", (int)trackIndex);
                    [input markAsFinished];
                    dispatch_group_leave(self.group);
                    return;
                }
            } else {
                if (status == AVAssetReaderStatusReading) {
                    NSLog(@"Track %d complete.", (int)trackIndex);
                } else if (status == AVAssetReaderStatusCompleted) {
                    NSLog(@"Reader completed.");
                } else if (status == AVAssetReaderStatusCancelled) {
                    NSLog(@"Reader cancelled.");
                } else if (status == AVAssetReaderStatusFailed) {
                    NSLog(@"Reader failed.");
                }
                [input markAsFinished];
                dispatch_group_leave(self.group);
                return;
            }
        }
    }];
}

Inside the block passed to [AVAssetWriterInput requestMediaDataWhenReadyOnQueue:usingBlock:], you should keep appending data to the AVAssetWriterInput until AVAssetWriterInput.readyForMoreMediaData becomes NO, or until there is no data left to append (at which point [AVAssetWriterInput markAsFinished] is usually called), and then exit the block.
After the block exits, as long as [AVAssetWriterInput markAsFinished] has not been called, AVAssetWriterInput.readyForMoreMediaData flips back to YES once the input has processed its buffered data, and the block is invoked again to fetch more data.

Cleanup

Once all the data has been read and written, it's time to clean up.

- (void)finishWritingTracksWithPhoto:(NSString *)photoFile video:(NSString *)videoFile complete:(void (^)(BOOL success, NSString *photoFile, NSString *videoFile, NSError *error))complete {
    [self.reader cancelReading];
    [self.writer finishWritingWithCompletionHandler:^{
        if (complete) complete(YES, photoFile, videoFile, nil);
    }];
}

Simply cancel reading and finish writing. The completion handler fires once the video file has been fully generated; that is where you save the video to the photo library.

Wrapping It Up

The code that adds metadata to the image and to the video is now complete. Wrapping it all up gives the following.

- (void)useAssetWriter:(NSURL *)photoURL video:(NSURL *)videoURL identifier:(NSString *)identifier complete:(void (^)(BOOL success, NSString *photoFile, NSString *videoFile, NSError *error))complete {
    // Photo
    NSString *photoName = [photoURL lastPathComponent];
    NSString *photoFile = [self filePathFromDoc:photoName];
    [self addMetadataToPhoto:photoURL outputFile:photoFile identifier:identifier];
    
    // Video
    NSString *videoName = [videoURL lastPathComponent];
    NSString *videoFile = [self filePathFromDoc:videoName];
    [self addMetadataToVideo:videoURL outputFile:videoFile identifier:identifier];
    
    if (!self.group) return;
    dispatch_group_notify(self.group, dispatch_get_main_queue(), ^{
        [self finishWritingTracksWithPhoto:photoFile video:videoFile complete:complete];
    });
}
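A call site might look like the following sketch. The paths are placeholders, and saveLivePhotoToLibraryWithPhoto:video: stands for a hypothetical wrapper around the save code in the next section.

NSString *identifier = [NSUUID UUID].UUIDString;
[self useAssetWriter:[NSURL fileURLWithPath:@"/path/to/photo.jpg"]
               video:[NSURL fileURLWithPath:@"/path/to/video.mov"]
          identifier:identifier
            complete:^(BOOL success, NSString *photoFile, NSString *videoFile, NSError *error) {
    if (!success) { NSLog(@"Failed to add metadata: %@", error); return; }
    // Hypothetical wrapper around the PHPhotoLibrary code shown below.
    [self saveLivePhotoToLibraryWithPhoto:photoFile video:videoFile];
}];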

Saving to the Photo Library

Checking whether the device supports Live Photos

BOOL available = [PHAssetCreationRequest supportsAssetResourceTypes:@[@(PHAssetResourceTypePhoto), @(PHAssetResourceTypePairedVideo)]];
if (!available) {
    NSLog(@"Device does NOT support LivePhoto.");
    return;
}

Authorization

You must be granted authorization before accessing the photo library.
First, add the NSPhotoLibraryUsageDescription key to the Info.plist file.
Then request authorization at runtime.

[PHPhotoLibrary requestAuthorization:^(PHAuthorizationStatus status) {
    if (status != PHAuthorizationStatusAuthorized) {
        NSLog(@"Photo Library access denied.");
        return;
    }
}];

Saving to the photo library

NSURL *photo = [NSURL fileURLWithPath:photoFile];
NSURL *video = [NSURL fileURLWithPath:videoFile];
  
[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    PHAssetCreationRequest *request = [PHAssetCreationRequest creationRequestForAsset];
    [request addResourceWithType:PHAssetResourceTypePhoto fileURL:photo options:nil];
    [request addResourceWithType:PHAssetResourceTypePairedVideo fileURL:video options:nil];
} completionHandler:^(BOOL success, NSError * _Nullable error) {
    if (success) { NSLog(@"Saved."); }
    else { NSLog(@"Save error: %@", error); }
}];

Use PHAssetResourceTypePhoto for the photo.
Use PHAssetResourceTypePairedVideo for the video.
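To double-check that the pair actually landed as a Live Photo, a small verification sketch can fetch the newest image asset and test its media subtype:

PHFetchOptions *options = [PHFetchOptions new];
options.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"creationDate" ascending:NO]];
PHAsset *newest = [PHAsset fetchAssetsWithMediaType:PHAssetMediaTypeImage options:options].firstObject;
BOOL isLive = (newest.mediaSubtypes & PHAssetMediaSubtypePhotoLive) != 0;
NSLog(@"Newest asset %@ a Live Photo.", isLive ? @"is" : @"is NOT");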

Complete Code

The complete code has been uploaded to GitHub: DeviLeo/LivePhotoConverter.

References

  • Apple Live Photo file format
  • Is there a way to save a Live Photo to the Photo Library?
  • How do I export UIImage array as a movie?
  • genadyo/LivePhotoDemo
