To auto-play video cells while scrolling, the main idea is to read each cell's frame at scroll time: in IGListScrollDelegate's listAdapter(_:didEndDragging:willDecelerate:), get the visible cells, play the one video cell that matches the rules, and pause the rest.

This post mainly records the approach for reference. The rules are as follows:

  • Play the frontmost cell whose video frame is more than half on screen
  • If less than half is visible, stop playback; the reference line is the bottom of the navigation bar at y = 64
  • Call this method manually when the view controller appears
  • Release the player in the view controller's didMove(toParentViewController:)
  • Gotcha: visibleCells does not return cells in index order, so they must be re-sorted
extension VideoSectionController: IGListScrollDelegate {

// MARK: - IGListScrollDelegate

func listAdapter(_ listAdapter: IGListAdapter!, didScroll sectionController: IGListSectionController!) {

}

func listAdapter(_ listAdapter: IGListAdapter!, willBeginDragging sectionController: IGListSectionController!) {

}

func listAdapter(_ listAdapter: IGListAdapter!, didEndDragging sectionController: IGListSectionController!, willDecelerate decelerate: Bool) {

guard var cells = self.collectionContext?.visibleCells(for: sectionController) as? [VideoCell] else { return }

// Re-sort the cells by their index in the section controller, since visibleCells is not ordered
cells = cells.sorted { (cell0, cell1) -> Bool in
guard let index0 = collectionContext?.index(for: cell0, sectionController: sectionController),
let index1 = collectionContext?.index(for: cell1, sectionController: sectionController) else {
return true
}
return index0 < index1
}

var hasPlayingCell = false
for cell in cells {

// Convert the video cover's center to window coordinates
let videoCenter = cell.convert(cell.videoCoverImageView.center, to: nil)

if videoCenter.y < 64 || hasPlayingCell {
// Above the 64pt line, or another cell is already playing: pause
cell.pause()
} else {
// First cell whose video center is below the navigation bar: play it
cell.play()
hasPlayingCell = true
}
}

}
}
override func didMove(toParentViewController parent: UIViewController?) {
super.didMove(toParentViewController: parent)

if parent == self.navigationController?.parent {
print("Back tapped")
NotificationCenter.default.post(name: Notification.Name.VideoPlayer.VideoCellStopPlay, object: nil, userInfo: nil)
}
}
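The cell side is not shown in the post. Here is a minimal, hedged sketch of how a cell might observe that notification and release its player; the AutoPlayVideoCell name, the AVPlayer property and the awakeFromNib hook are illustrative assumptions, not the post's actual VideoCell:

import UIKit
import AVFoundation

// Assumes Notification.Name.VideoPlayer.VideoCellStopPlay is declared elsewhere
// in the project, as used by the posting code above.
final class AutoPlayVideoCell: UICollectionViewCell {

    var player: AVPlayer? // illustrative; the real cell may hold the player differently

    override func awakeFromNib() {
        super.awakeFromNib()
        // Stop playback and release the player when the owning VC posts the notification.
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(stopPlay),
                                               name: Notification.Name.VideoPlayer.VideoCellStopPlay,
                                               object: nil)
    }

    func play() { player?.play() }
    func pause() { player?.pause() }

    @objc func stopPlay() {
        pause()
        player = nil
    }

    deinit {
        NotificationCenter.default.removeObserver(self)
    }
}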

Official tutorial: https://www.paintcodeapp.com/examples

PaintCode can draw all kinds of custom curved shapes (no more worrying that a designer's curves and details can't be implemented), and it integrates easily into iOS projects, supporting both Swift and Objective-C. Dynamic Shapes in particular supports simple constraints, so a shape changes by rule as its size changes.

App screenshot:

The workflow is simple, much like working in Sketch.

[Screenshot: Screen Shot 2017-02-05 at 12.14.33]

Custom view code:

It implements an image view with a circular border and an arrow (speech-bubble style).

import UIKit

class ButtonView: UIView {

var image: UIImage? {
didSet {
if imageView == nil {
imageView = UIImageView()
imageView?.backgroundColor = .clear
imageView?.layer.masksToBounds = true
self.insertSubview(imageView!, at: 0)
}
imageView?.image = image
}
}

private var imageView: UIImageView?

override func draw(_ rect: CGRect) {
// The key line: one call draws the whole shape
JLXStyleKit.drawBubbleButton(frame: self.bounds)
}

override func layoutSubviews() {
super.layoutSubviews()

imageView?.frame = CGRect(x: self.bounds.width * 0.5 / 38.0, y: self.bounds.width * 0.5 / 38.0, width: self.bounds.width * 37 / 38.0, height: self.bounds.width * 37 / 38.0)
imageView?.layer.cornerRadius = bounds.width / 2.0
}

}
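For context, a minimal usage sketch inside a view controller's viewDidLoad; the frame and the "avatar" asset name are made up for illustration:

let avatarButton = ButtonView(frame: CGRect(x: 20, y: 80, width: 76, height: 76))
avatarButton.backgroundColor = .clear
avatarButton.image = UIImage(named: "avatar")
view.addSubview(avatarButton)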

Runtime screenshot:

[Screenshot: Simulator Screen Shot Feb 5, 2017, 12.16.37]

Demo:

https://github.com/gewill/PaintCode-Dynamic-Bezier-Shapes-Demo

Original Apple Swift blog post: https://developer.apple.com/swift/blog/?id=39

Main change: in Swift 3, Objective-C's id is imported as Any.

Objective-C      Swift 2                  Swift 3
id               AnyObject                Any
NSArray *        [AnyObject]              [Any]
NSDictionary *   [NSObject: AnyObject]    [AnyHashable: Any]
NSSet *          Set<NSObject>            Set<AnyHashable>
  • AnyObject in method and protocol signatures becomes Any
  • Calling most C and Objective-C APIs requires explicit type conversion; pointers are imported as UnsafePointer<AnyObject>
  • Objective-C protocols are still class-constrained, so structs and enums cannot conform; explicit bridging is needed, e.g. String as NSString, Array as NSArray
  • Any lacks some of AnyObject's magic dynamic lookup, e.g. (x as AnyObject).description
  • Swift value types now bridge implicitly to id (a short sketch follows this list)
  • Cocoa keeps evolving alongside Swift and becomes more powerful
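A short sketch of the bridging behavior described above; the values are made up for illustration:

import Foundation

// Swift value types now bridge to Any / id automatically.
let values: [Any] = [1, "two", 3.0, [4, 5]]
let info: [AnyHashable: Any] = ["count": values.count, 42: "answer"]

// Objective-C protocols are still class-constrained, so bridge explicitly when needed.
let title = "gewill" as NSString
let list = [1, 2, 3] as NSArray

// Any lacks AnyObject's dynamic lookup, so cast first to use it.
print((title as AnyObject).description)
print(info, list)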

Learn iOS Design is the first book from Design Code. It covers every aspect of iOS design in detail, and almost every chapter pairs theory with tools. Overall it is very comprehensive; the rest has to come from further practice.

Core Philosophies

Explains the philosophy behind the design and the three minimum requirements: consider the touch interface, make the text readable and optimize for the iPhone 5, 6 and 6 Plus.

iOS is driven by 3 core philosophies: deference, clarity and depth.

In Retina, typography should have a minimum size of 11pt. The optimal size for reading is around 16pt.

Designing for iOS 9

Explains in detail the sensible layout sizes of common controls on iOS 9.

iOS uses vibrant colors to bring out the buttons.

iOS often uses neutral colors to serve as the background and menu areas.

[Image: iOS-Colors]

Learn Colors

For applying and comparing colors, HSB values are easier to understand and compare than RGB.
The chapter then covers monochromatic, analogous, complementary, neutral, inverted and other color schemes, as well as commonly used neutral palettes, the meanings of colors, Material Design colors, and gradient palettes (UI Gradients).

use colors only to draw attention to a button or element of importance.

I suggest starting with a vibrant, pastel color that is Primary or Secondary.

These are the colors used by Apple in their native apps. They’re vibrant and perfect for buttons, icons and actionable items.

I can easily map in my mind how much Hue, Saturation and Brightness. Those values make a lot more sense to me.

Meaning In Colors: I suggest reading this guide about colors.

This is a nice collection of gradients: http://uigradients.com

Learn Typography

Introduces some common uses of typography, along with font websites.

Some typography basics: guide lines (baseline, x-height and so on) and serif vs. sans-serif typefaces.

[Image: Typography-Basics]

Let’s look at these 5 rules of good typography and apply them to modern design for mobile and for Websites.

The font size should be at least 11pt to be readable on the iPhone, iPad and Apple Watch. While that is the minimum value, the recommended size for the body text is actually 15-18pt.

At 12-18pt, use Regular. At 18-24pt, use Light, at 24-32pt, use Thin and at 32pt or more, use Ultralight. Notice that for each scale, the text remains readable while looking clean and sophisticated.

[Image: Typography-LineHeight]

“People say design isn’t art. It isn’t. Great design is art.”

Font resources:

Google Fonts
Typekit
fonts.com

Learn Animations

Covers the role of animation in interaction, some basic principles, and tools for prototyping animations.

Good animations enhance, bad animations distract.

Good animations should provide feedback on taps and gestures, and give a sense of direct manipulation.

Modern apps tend to use Spring and Ease animations much more than Linear.

[Image: Animation-Good]

[Image: Animation Curve]

On Spring, an animation framework that I created for iOS, I made available a bunch of preset animations that combine many transforms at once. They can be inexpensively integrated to your app, without even learning how to code. It is a code-free animation library similar to IBAnimatable.

Demo video: https://designcode.io/cloud/chapter1/Animation-Spring.mp4

Animations Shouldn’t Last Longer Than 1 second.

“Design is the fundamental soul of a human-made creation that ends up expressing itself in successive outer layers of the product or service.” 

– Steve Jobs

The Animation Tools:

  • Principle
  • Flinto for Mac
  • Pixate
  • Origami
  • Framer
  • After Effects

UI Icons

A fairly thin chapter that introduces some icon websites.

Icons should be expressive and avoid being confused with other apps' icons, unless that is intentional.

UI Sounds

Sound effects are another way to enhance the experience; they can be used for notifications, positive action feedback and negative action feedback.

Design Inspiration

Describes ways to find design inspiration: observing craftsmanship in everyday life, reading books, and a few websites.

Books:

  • Becoming Steve Jobs
  • Steve Jobs by Walter Isaacson
  • Jony Ive: The Genius Behind Apple’s Greatest Products
  • Dieter Rams
  • Elon Musk
  • The Tipping Point
  • Outliers
  • Blink
  • David and Goliath
  • What The Dog Saw

Websites: Twitter, Medium, Sidebar

it’s about 10% reading, 30% writing and collecting, and 60% design and code.

Design Principles

Covers several common design principles along with the author's personal advice: the ability to teach yourself, design as little as possible, the rule of three, the 10,000-hour rule, work on what you dream of, and rest is for the strong.

Getting Your Product Out There

Offers some advice on getting your product in front of people.

Benefits, Not Features

[weak self] is safer than [unowned self].
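A tiny sketch of the difference; the Downloader type is made up for illustration:

import Foundation

final class Downloader {
    var completion: (() -> Void)?

    func start() {
        // With [weak self], if the Downloader is deallocated before the closure runs,
        // self is simply nil and we bail out; [unowned self] would crash instead.
        completion = { [weak self] in
            guard let strongSelf = self else { return }
            print("finished in \(strongSelf)")
        }
    }
}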

Extensions: split code into blocks for cleaner organization, but don't overuse them.

Protocols: a more concise way to express an API. A protocol is also a type: it can be used for properties and for method parameters or return values.

Here is a protocol example:

protocol Moveable {
mutating func moveTo(p: CGPoint)
}

class Car: Moveable {
func moveTo(p: CGPoint) {
print("Car move to \(p)")
}

func changeOil() {
print("changeOil")
}

var name: String

init() {
self.name = ""
}

init(name: String) {
self.name = name
}

}

struct Shape: Moveable {
mutating func moveTo(p: CGPoint) {
print("Shape move to \(p)")
}

func draw() {

print("draw")
}
}

let prius: Car = Car(name: "A")
let square: Shape = Shape()

var thingToMove: Moveable = prius
thingToMove.moveTo(CGPoint(x: 100, y: 100))
var find: Car = prius
find.name

// A protocol can be used as a type to store any value that conforms to it, but you cannot call the concrete type's non-protocol methods or properties through it
//thingToMove.changeOil()
thingToMove = square

let thingsToMove: [Moveable] = [prius, square]

func slide(var slider: Moveable) {
let positionToSlideTo = CGPoint(x: 88, y: 88)
slider.moveTo(positionToSlideTo)
}

slide(prius)
slide(square)

protocol Slippery {
var speed: Double { get }
}

extension Car: Slippery {
var speed: Double {
return 1
}
}

func slipAndSlide(x: protocol<Slippery, Moveable>) {
print("slipAndSlide")
}
slipAndSlide(prius)


In a recent project, the non-persisted part staged data in arrays, which involved deduplication. I had always processed arrays with for-in loops; compared with that, the functional approach wins on modular composability.
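As a quick aside, the deduplication mentioned above can be written functionally too; a minimal, order-preserving sketch with reduce (the data is made up):

let ids = [3, 1, 3, 2, 1, 5]
let uniqueIDs = ids.reduce([Int]()) { result, id in
    result.contains(id) ? result : result + [id]
}
// uniqueIDs == [3, 1, 2, 5]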

The Transforming Arrays section of Advanced Swift walks through the basic mapping and transformation methods on arrays. It mainly reimplements some of them, and recommends writing your own extensions to the standard library.

Here are 13 standalone methods from the standard library that can be combined freely:

  • map and flatMap — how to transform an element
  • filter — should an element be included?
  • reduce — how to fold an element into an aggregate value
  • sort and lexicographicalCompare — in what order should two elements come?
  • indexOf and contains — does this element match?
  • minElement and maxElement — which is the min/max of two elements?
  • elementsEqual and startsWith — are two elements equivalent?
  • split — is this element a separator?

Programming is, after all, hands-on; the methods above all have very descriptive names, and calling them a few times in practice is enough to understand them. Below is the demo I wrote:

// : Playground - noun: a place where people can play

import UIKit

var fibs = [2, 3, 45, 53, 32, 12, 32, 1, 0, 1]

fibs.map {
Double($0 * 3)
}

let suits = ["♠", "♥", "♣", "♦"]

let ranks = ["J", "Q", "K", "A"]

let allCombinations = suits.flatMap { suit in
ranks.map { rank in
(suit, rank)
}
}

suits.flatMap { (suit) -> [String]? in
[suit, suit]
}

suits.flatMap { (suit) -> [String] in
if suit != "♥" {
return [suit]
} else {
return []
}
}

fibs.flatMap { (num) in
print(num)
}

fibs.sort {
$0 > $1
}

fibs.lexicographicalCompare([3])
fibs.lexicographicalCompare([1])
fibs.lexicographicalCompare([3]) { (num0, num1) -> Bool in
num0 > num1
}
fibs.lexicographicalCompare([1]) { (num0, num1) -> Bool in
num0 > num1
}

fibs.reduce(0) { (total, num) -> Int in
total + num
}

fibs.filter { (num) -> Bool in
num % 3 == 0
}

fibs.indexOf(4)
fibs.indexOf(1)
fibs.contains(4)
fibs.contains(1)

fibs.minElement()
fibs.minElement { (num0, num1) -> Bool in
num0 < num1
}
fibs.minElement { (num0, num1) -> Bool in
num0 > num1
}
fibs.maxElement()
fibs.maxElement { (num0, num1) -> Bool in
print("num0: \(num0), num1: \(num1)")
return num0 < num1
}

fibs.maxElement { (num0, num1) -> Bool in
num0 > num1
}

var strs = ["Lee", "Bee", "Will", "10", ""]
strs.maxElement { (str0, str1) -> Bool in
str0 < str1
}
strs.maxElement { (str0, str1) -> Bool in
str0 > str1
}

strs.elementsEqual(["Lee", "Bee", "Will", "10"])
strs.elementsEqual(["Lee", "Bee", "Will", "10", ""])

strs.startsWith(["Lee"])
strs.startsWith(["10"])
strs.startsWith(["Lee"]) { (str0, str1) -> Bool in
str0 == str1
}

strs.split("10")
strs.split("3")
fibs.split(1)

fibs.split(1, maxSplit: 2, allowEmptySlices: true)
fibs.split(1, maxSplit: 2, allowEmptySlices: false)
fibs.split(1, maxSplit: 3, allowEmptySlices: true)
fibs.split(1, maxSplit: 6, allowEmptySlices: false)
fibs.split(3333, maxSplit: 2, allowEmptySlices: true)
fibs.split(44444, maxSplit: 2, allowEmptySlices: false)

fibs.split(32, maxSplit: 0, allowEmptySlices: true)
fibs.split(32, maxSplit: 1, allowEmptySlices: false)
fibs.split(32, maxSplit: 2, allowEmptySlices: true)
fibs.split(32, maxSplit: 3, allowEmptySlices: false)

fibs.split { (num) -> Bool in
num % 2 == 0
}

fibs.split(1, allowEmptySlices: true) { (num) -> Bool in
num % 2 == 1
}

fibs.split(66, allowEmptySlices: true) { (num) -> Bool in
num % 2 == 1
}

fibs.forEach { (num) in
print(num - 44)
}

Demo download: https://github.com/gewill/test-projects/tree/master/collections%20transform.playground

Export

To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.

Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.

Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source like an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer’s inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.

1. Reading an Asset

Each AVAssetReader object can be associated only with a single asset at a time, but this asset may contain multiple tracks. For this reason, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader before you begin reading in order to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.

1.1. Creating the Asset Reader

All you need to initialize an AVAssetReader object is the asset that you want to read.

Just initialize it with the asset; it is a failable initializer, so remember to check whether it succeeded.

NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);

Note: Always check that the asset reader returned to you is non-nil to ensure that the asset reader was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.

1.2. Setting Up the Asset Reader Outputs

After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO. In this way, you reap the benefits of performance improvements. In all of the examples within this chapter, this property could and should be set to NO.

Set alwaysCopiesSampleData to NO to get the performance improvement.

If you want only to read media data from one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, using a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, you set up your track output as follows:

Setting up the track output:

AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
[assetReader addOutput:trackOutput];

Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.

You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.

With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the audio tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code displays how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.

AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
[assetReader addOutput:audioMixOutput];

Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.

The video composition output behaves in much the same way: You can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:

AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
[assetReader addOutput:videoCompositionOutput];

1.3. Reading the Asset’s Media Data

To start reading after setting up all of the outputs you need, call the startReading method on your asset reader. Next, retrieve the media data individually from each output using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:

Use copyNextSampleBuffer to obtain a CMSampleBufferRef: an opaque Core Media type that wraps zero or more samples of a media type.

// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
// Copy the next sample buffer from the reader output.
CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
if (sampleBuffer)
{
// Do something with sampleBuffer here.
CFRelease(sampleBuffer);
sampleBuffer = NULL;
}
else
{
// Find out why the asset reader output couldn't copy another sample buffer.
if (self.assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = self.assetReader.error;
// Handle the error here.
}
else
{
// The asset reader output has read all of its samples.
done = YES;
}
}
}

2. Writing an Asset

You use the AVAssetWriter class to write media data from multiple sources to a single file of a specified file format. You don’t need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.

CVPixelBufferRef: A reference to a Core Video pixel buffer object. The pixel buffer stores an image in main memory.

2.1. Creating the Asset Writer

To create an asset writer, specify the URL for the output file and the desired file type. The following code displays how to initialize an asset writer to create a QuickTime movie:

NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
fileType:AVFileTypeQuickTimeMovie
error:&outError];
BOOL success = (assetWriter != nil);

2.2. Setting Up the Asset Writer Inputs

For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:

// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};

// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};

// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
[assetWriter addInput:assetWriterInput];

Note: If you want the media data to be written in the format in which it was stored, pass nil in the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.

Your asset writer input can optionally include some metadata or specify a different transform for a particular track using the metadata and transform properties respectively. For an asset writer input whose data source is a video track, you can maintain the video’s original transform in the output file by doing the following:

AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;

Note: Set the metadata and transform properties before you begin writing with your asset writer for them to take effect.

When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer object working in the RGB domain that will use CGImage objects to create its pixel buffers.

NSDictionary *pixelBufferAttributes = @{
kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];

Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.

2.3. Writing Media Data

When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. You then need to start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions and the time range of each session defines the time range of media data included from within the source. For example, if your source is an asset reader that is supplying media data read from an AVAsset object and you don’t want to include media data from the first half of the asset, you would do the following:

Once everything is configured, just start: startWriting, used much the same way as the reader above.

CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.

Normally, to end a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end the writing session simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:

Either call can end the writing session: endSessionAtSourceTime: or finishWriting.

// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
while ([self.assetWriterInput isReadyForMoreMediaData])
{
// Get the next sample buffer.
CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
if (nextSampleBuffer)
{
// If it exists, append the next sample buffer to the output file.
[self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
CFRelease(nextSampleBuffer);
nextSampleBuffer = nil;
}
else
{
// Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
[self.assetWriterInput markAsFinished];
break;
}
}
}];

The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would need to insert some logic to return CMSampleBufferRef objects representing the media data that you want to write. One possible source of sample buffers is an asset reader output.

copyNextSampleBufferToWrite here is just a stub; this is where you would add the logic that returns the sample buffers you want to write.

3. Reencoding Assets

Reencoding.

You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is just to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, you start up both of them with calls to the startReading and startWriting methods, respectively. The following code snippet displays how to use a single asset writer input to write media data supplied by a single asset reader output:

Using an asset reader and writer in tandem lets you convert an asset from one format to another, with more control than AVAssetExportSession offers: you can specify your own output format and modify the asset during processing.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
while ([self.assetWriterInput isReadyForMoreMediaData])
{
// Get the asset reader output's next sample buffer.
CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
// If it exists, append this sample buffer to the output file.
BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
// Check for errors that may have occurred when appending the new sample buffer.
if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
{
NSError *failureError = self.assetWriter.error;
//Handle the error.
}
}
else
{
// If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
if (self.assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = self.assetReader.error;
//Handle the error here.
}
else
{
// The asset reader output must have vended all of its samples. Mark the input as finished.
[self.assetWriterInput markAsFinished];
break;
}
}
}
}];

4. Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset

This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:

  • Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data
  • Initialize an asset reader and configure two asset reader outputs, one for audio and one for video
  • Initialize an asset writer and configure two asset writer inputs, one for audio and one for video
  • Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations
  • Use a dispatch group to be notified of completion of the reencoding process
  • Allow a user to cancel the reencoding process once it has begun

Note: To focus on the most relevant code, this example omits several aspects of a complete application. To use AVFoundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

4.1. Handling the Initial Setup

Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];

// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];

// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation) and the other two serialization queues are used to serialize the reading and writing by each output/input combination with a potential cancellation.

The main serialization queue coordinates the starting and stopping of the reader and writer; the other two queues serialize the reading and writing of each output/input pair, along with any potential cancellation.

Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.

With the queues in place, you can load the asset's tracks and kick off the reencoding process.

self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
// Once the tracks have finished loading, dispatch the work to the main serialization queue.
dispatch_async(self.mainSerializationQueue, ^{
// Due to asynchronous nature, check to see if user has already cancelled.
if (self.cancelled)
return;
BOOL success = YES;
NSError *localError = nil;
// Check for success of loading the assets tracks.
success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
if (success)
{
// If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
NSFileManager *fm = [NSFileManager defaultManager];
NSString *localOutputPath = [self.outputURL path];
if ([fm fileExistsAtPath:localOutputPath])
success = [fm removeItemAtPath:localOutputPath error:&localError];
}
if (success)
success = [self setupAssetReaderAndAssetWriter:&localError];
if (success)
success = [self startAssetReaderAndWriter:&localError];
if (!success)
[self readingAndWritingDidFinishSuccessfully:success withError:localError];
});
}];

When the track loading process finishes, whether successfully or not, the rest of the work is dispatched to the main serialization queue to ensure that all of this work is serialized with a potential cancellation. Now all that’s left is to implement the cancellation process and the three custom methods at the end of the previous code listing.

4.2. Initializing the Asset Reader and Writer

The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.

- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
// Create and initialize the asset reader.
self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
BOOL success = (self.assetReader != nil);
if (success)
{
// If the asset reader was successfully initialized, do the same for the asset writer.
self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
success = (self.assetWriter != nil);
}

if (success)
{
// If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
if ([audioTracks count] > 0)
assetAudioTrack = [audioTracks objectAtIndex:0];
NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
if ([videoTracks count] > 0)
assetVideoTrack = [videoTracks objectAtIndex:0];

if (assetAudioTrack)
{
// If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
[self.assetReader addOutput:self.assetReaderAudioOutput];
// Then, set the compression settings to 128kbps AAC and create the asset writer input.
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
[self.assetWriter addInput:self.assetWriterAudioInput];
}

if (assetVideoTrack)
{
// If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
NSDictionary *decompressionVideoSettings = @{
(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
(id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
};
self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
[self.assetReader addOutput:self.assetReaderVideoOutput];
CMFormatDescriptionRef formatDescription = NULL;
// Grab the video format descriptions from the video track and grab the first one if it exists.
NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
if ([videoFormatDescriptions count] > 0)
formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
CGSize trackDimensions = {
.width = 0.0,
.height = 0.0,
};
// If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
if (formatDescription)
trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
else
trackDimensions = [assetVideoTrack naturalSize];
NSDictionary *compressionSettings = nil;
// If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
if (formatDescription)
{
NSDictionary *cleanAperture = nil;
NSDictionary *pixelAspectRatio = nil;
CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
if (cleanApertureFromCMFormatDescription)
{
cleanAperture = @{
AVVideoCleanApertureWidthKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
AVVideoCleanApertureHeightKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
AVVideoCleanApertureVerticalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
};
}
CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
if (pixelAspectRatioFromCMFormatDescription)
{
pixelAspectRatio = @{
AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
AVVideoPixelAspectRatioVerticalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
};
}
// Add whichever settings we could grab from the format description to the compression settings dictionary.
if (cleanAperture || pixelAspectRatio)
{
NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
if (cleanAperture)
[mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
if (pixelAspectRatio)
[mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
compressionSettings = mutableCompressionSettings;
}
}
// Create the video settings dictionary for H.264.
NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : [NSNumber numberWithDouble:trackDimensions.width],
AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
}];
// Put the compression settings into the video settings dictionary if we were able to grab them.
if (compressionSettings)
[videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
// Create the asset writer input and add it to the asset writer.
self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
[self.assetWriter addInput:self.assetWriterVideoInput];
}
}
return success;
}

4.3. Reencoding the Asset

Provided that the asset reader and writer are successfully initialized and configured, the startAssetReaderAndWriter: method described in Handling the Initial Setup is called. This method is where the actual reading and writing of the asset takes place.

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
BOOL success = YES;
// Attempt to start the asset reader.
success = [self.assetReader startReading];
if (!success)
*outError = [self.assetReader error];
if (success)
{
// If the reader started successfully, attempt to start the asset writer.
success = [self.assetWriter startWriting];
if (!success)
*outError = [self.assetWriter error];
}

if (success)
{
// If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
self.dispatchGroup = dispatch_group_create();
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
self.audioFinished = NO;
self.videoFinished = NO;

if (self.assetWriterAudioInput)
{
// If there is audio to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
[self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.audioFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next audio sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}

if (self.assetWriterVideoInput)
{
// If we had video to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
[self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.videoFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next video sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}
// Set up the notification that the dispatch group will send when the audio and video work have both finished.
dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
BOOL finalSuccess = YES;
NSError *finalError = nil;
// Check to see if the work has finished due to cancellation.
if (self.cancelled)
{
// If so, cancel the reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
}
else
{
// If cancellation didn't occur, first make sure that the asset reader didn't fail.
if ([self.assetReader status] == AVAssetReaderStatusFailed)
{
finalSuccess = NO;
finalError = [self.assetReader error];
}
// If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
if (finalSuccess)
{
finalSuccess = [self.assetWriter finishWriting];
if (!finalSuccess)
finalError = [self.assetWriter error];
}
}
// Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
[self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
});
}
// Return success here to indicate whether the asset reader and writer were started successfully.
return success;
}

During reencoding, the audio and video tracks are asynchronously handled on individual serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification when all of the work is done and the success of the reencoding process can be determined.

The audio and video tracks are handled asynchronously on their own serialization queues, but both queues belong to the same dispatch group, which makes it easy to get notified when the reencoding has completed and whether it succeeded.

4.4. Handling Completion

To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully: method is called—with parameters indicating whether or not the reencoding completed successfully. If the process didn’t finish successfully, the asset reader and writer are both canceled and any UI related tasks are dispatched to the main queue.

Handling completion: whether the process finished and whether it succeeded.

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
if (!success)
{
// If the reencoding process failed, we need to cancel the asset reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
dispatch_async(dispatch_get_main_queue(), ^{
// Handle any UI tasks here related to failure.
});
}
else
{
// Reencoding was successful, reset booleans.
self.cancelled = NO;
self.videoFinished = NO;
self.audioFinished = NO;
dispatch_async(dispatch_get_main_queue(), ^{
// Handle any UI tasks here related to success.
});
}
}

4.5. Handling Cancellation

Using multiple serialization queues, you can allow the user of your app to cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue where the cancelled property is set to YES. You might associate the cancel method from the following code listing with a button on your UI.

Handling cancellation.

- (void)cancel
{
// Handle cancellation asynchronously, but serialize it with the main queue.
dispatch_async(self.mainSerializationQueue, ^{
// If we had audio data to reencode, we need to cancel the audio work.
if (self.assetWriterAudioInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the audio queue.
dispatch_async(self.rwAudioSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
// Leave the dispatch group since the audio work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}

if (self.assetWriterVideoInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the video queue.
dispatch_async(self.rwVideoSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
// Leave the dispatch group, since the video work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}
// Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
self.cancelled = YES;
});
}

5. Asset Output Settings Assistant

The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high frame rate H264 movies that have a number of specific presets. Listing 5-1 shows an example that uses the output settings assistant.

The asset output settings assistant.

Listing 5-1 AVOutputSettingsAssistant sample

AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];

if (audioFormat != NULL)
[outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];

CMFormatDescriptionRef videoFormat = [self getVideoFormat];

if (videoFormat != NULL)
[outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];

CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];

[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];

AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];

Demo

AVReaderWriter: Offline Audio / Video Processing:
https://developer.apple.com/library/mac/samplecode/ReaderWriter/Introduction/Intro.html

Working with Core Data in a recent project, I realized I had forgotten most of the bits and pieces I had read before, and I also ran into asynchronous-processing issues. After going back over some material, here is a summary of the key points.

1.

Core Data is an object graph management and persistence framework. Simple, well-defined attributes, relationships and fetching are all wrapped up for you; regardless of the underlying database implementation, developers only need to care about their data and how to fetch it.

2.

Graphical editor: the xcdatamodel file

Managed object model:

  • Attributes support NSData (Binary Data) and any type conforming to the NSCoding protocol (Transformable)
  • Relationships should be given an inverse
  • Relationships can be to-one or to-many; a to-many relationship can be unordered (NSSet) or ordered (NSOrderedSet). See this article for details:

Core Data and Swift: Relationships and More Fetching :
http://code.tutsplus.com/tutorials/core-data-and-swift-relationships-and-more-fetching--cms-25070

3.

The Core Data stack involves four classes:

  • NSManagedObjectModel
  • NSPersistentStore
  • NSPersistentStoreCoordinator
  • NSManagedObjectContext

4.

NSManagedObjectContext:

  • An in-memory scratchpad for working with managed objects
  • Remember to call save()
  • Manages the lifecycle of managed objects, including creation and fetching
  • A managed object cannot exist independently of its context
  • Contexts are territorial: once a managed object is registered with a context, it stays bound to that context for its entire lifecycle
  • Multiple contexts are supported
  • Contexts are not thread-safe

5.

How to configure the Core Data stack:

Technical details like lazy and try/catch need no further explanation; multiple contexts and thread safety for asynchronous work are covered later.

import CoreData

class CoreDataStack {

let modelName = "Dog Walk"

lazy var context: NSManagedObjectContext = {

var managedObjectContext = NSManagedObjectContext(
concurrencyType: .MainQueueConcurrencyType)

managedObjectContext.persistentStoreCoordinator = self.psc
return managedObjectContext
}()

private lazy var psc: NSPersistentStoreCoordinator = {

let coordinator = NSPersistentStoreCoordinator(
managedObjectModel: self.managedObjectModel)

let url = self.applicationDocumentsDirectory
.URLByAppendingPathComponent(self.modelName)

do {
let options =
[NSMigratePersistentStoresAutomaticallyOption : true]

try coordinator.addPersistentStoreWithType(
NSSQLiteStoreType, configuration: nil, URL: url,
options: options)
} catch {
print("Error adding persistent store.")
}

return coordinator
}()

private lazy var managedObjectModel: NSManagedObjectModel = {

let modelURL = NSBundle.mainBundle()
.URLForResource(self.modelName,
withExtension: "momd")!
return NSManagedObjectModel(contentsOfURL: modelURL)!
}()

private lazy var applicationDocumentsDirectory: NSURL = {
let urls = NSFileManager.defaultManager().URLsForDirectory(
.DocumentDirectory, inDomains: .UserDomainMask)
return urls[urls.count-1]
}()

func saveContext () {
if context.hasChanges {
do {
try context.save()
} catch let error as NSError {
print("Error: \(error.localizedDescription)")
abort()
}
}
}
}

6.

Fetch

  • NSManagedObjectResultType: the default; returns managed objects
  • NSCountResultType: returns only the count
  • NSDictionaryResultType: returns computed values such as a sum; see the NSExpression documentation for details
  • NSManagedObjectIDResultType: returns only object IDs

From a performance point of view, consider using the latter result types.
iOS 8 adds asynchronous fetching with NSAsynchronousFetchRequest, plus batch updates and batch deletes; see the sketch below.
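A short Swift 2 sketch of the count-only result type and an asynchronous fetch; the "Dog" entity and the context variable (the stack's main context) are assumptions:

// NSCountResultType: executeFetchRequest returns [NSNumber] holding only the count.
let countRequest = NSFetchRequest(entityName: "Dog")
countRequest.resultType = .CountResultType
let countResult = (try? context.executeFetchRequest(countRequest)) as? [NSNumber]
print("Dog count: \(countResult?.first?.integerValue ?? 0)")

// iOS 8+: NSAsynchronousFetchRequest delivers results in a completion block,
// so a large fetch does not block the calling queue.
let allDogs = NSFetchRequest(entityName: "Dog")
let asyncFetch = NSAsynchronousFetchRequest(fetchRequest: allDogs) { result in
    guard let dogs = result.finalResult else { return }
    print("Fetched \(dogs.count) dogs asynchronously")
}
do {
    try context.executeRequest(asyncFetch)
} catch let error as NSError {
    print("Async fetch error: \(error.localizedDescription)")
}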

7.

A fetched results controller bridges Core Data and a table view data source; you can simply think of it as a purpose-built data source.

Remember to supply a cacheName. A minimal setup is sketched below.
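A minimal Swift 2 sketch of the setup; the "Dog" entity, the sort key, and coreDataStack are assumptions for illustration:

let fetchRequest = NSFetchRequest(entityName: "Dog")
fetchRequest.sortDescriptors = [NSSortDescriptor(key: "name", ascending: true)]   // at least one is required

let fetchedResultsController = NSFetchedResultsController(
    fetchRequest: fetchRequest,
    managedObjectContext: coreDataStack.context,
    sectionNameKeyPath: nil,
    cacheName: "DogCache")   // the cache speeds up section/row bookkeeping

do {
    try fetchedResultsController.performFetch()
} catch let error as NSError {
    print("Fetch error: \(error.localizedDescription)")
}

// The table view data source then simply forwards to the controller:
// rows  -> fetchedResultsController.sections?[section].numberOfObjects ?? 0
// cells -> fetchedResultsController.objectAtIndexPath(indexPath)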

8.

Use a context with PrivateQueueConcurrencyType for background processing; use the default MainQueueConcurrencyType otherwise, especially for anything that touches the UI.

You can also use a child context: saving the child only stores the changes in its parent's in-memory scratch pad, and nothing reaches disk until the parent context itself is saved.

This leads to a good practice: when multiple contexts are involved, always go through performBlock to stay safe. A child-context sketch follows.
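A child-context sketch in the same Swift 2 style (coreDataStack is assumed to be the stack from point 5): saving the child only pushes the changes into its parent; the data reaches disk only when the parent is saved.

let childContext = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
childContext.parentContext = coreDataStack.context

childContext.performBlock {
    // Do the heavy work here, on the child's private queue...

    do {
        try childContext.save()   // pushes changes up to the parent, still in memory
    } catch let error as NSError {
        print("Child save error: \(error.localizedDescription)")
    }

    coreDataStack.context.performBlock {
        coreDataStack.saveContext()   // only now do the changes hit the persistent store
    }
}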

Below is an example that does the work on a private context in the background and then returns to the main thread:



// 1
let privateContext = NSManagedObjectContext(
concurrencyType: .PrivateQueueConcurrencyType)
privateContext.persistentStoreCoordinator =
coreDataStack.context.persistentStoreCoordinator

// 2
privateContext.performBlock { () -> Void in
// 3
let results: [AnyObject]
do {
// Fetch on the private context so the work stays off the main queue.
results = try privateContext.executeFetchRequest(self.surfJournalFetchRequest())
} catch {
let nserror = error as NSError
print("ERROR: \(nserror)")
results = []
}

let exportFilePath =
NSTemporaryDirectory() + "export.csv"
let exportFileURL = NSURL(fileURLWithPath: exportFilePath)
NSFileManager.defaultManager().createFileAtPath(
exportFilePath, contents: NSData(), attributes: nil)

// 4
let fileHandle: NSFileHandle?
do {
fileHandle = try NSFileHandle(forWritingToURL: exportFileURL)
} catch {
let nserror = error as NSError
print("ERROR: \(nserror)")
fileHandle = nil
}

if let fileHandle = fileHandle {
// 5
for object in results {
let journalEntry = object as! JournalEntry

fileHandle.seekToEndOfFile()
let csvData = journalEntry.csv().dataUsingEncoding(
NSUTF8StringEncoding, allowLossyConversion: false)
fileHandle.writeData(csvData!)
}

// 6
fileHandle.closeFile()

// 7
dispatch_async(dispatch_get_main_queue(), { () -> Void in
self.navigationItem.leftBarButtonItem =
self.exportBarButtonItem()
print("Export Path: \(exportFilePath)")
self.showExportFinishedAlertView(exportFilePath)
})
} else {
dispatch_async(dispatch_get_main_queue(), { () -> Void in
self.navigationItem.leftBarButtonItem =
self.exportBarButtonItem()
})
}

} // Closing brace for performBlock()


9. References

AVCam-iOS: Using AVFoundation to Capture Images and Movies:
https://github.com/robovm/apple-ios-samples/tree/master/AVCam-iOSUsingAVFoundationtoCaptureImagesandMovies

This is Apple's Objective-C sample converted to Swift. I learned a lot along the way: nothing can simply be copy-pasted, so every concept involved (KVO, notifications, multithreading, and so on) has to be properly understood, and the comments are very detailed. The only sore points are that the AVFoundation API itself has not been optimized for Swift, and Swift KVO is a pitfall of its own.

Source code:

https://github.com/gewill/test-projects/tree/master/test%20AVCam

//
// JLXCameraViewController.swift
// test AVCam
//
// Created by Will on 4/15/16.
// Copyright © 2016 gewill.org. All rights reserved.
//

import UIKit
import Foundation
import AVFoundation
import Photos
import AssetsLibrary

protocol JLXCameraViewControllerDelegate: NSObjectProtocol {
func cameraViewController(vc: JLXCameraViewController, didFinishCaptureVideoUrl url: NSURL!)
func cameraViewControllerDidCancel(vc: JLXCameraViewController)
}

enum JLXAVCamSetupResult {
case Success
case CameraNotAuthorized
case SessionConfiguratonFailed
}

private var SessionRunningContext = 0

class JLXCameraViewController: UIViewController, AVCaptureFileOutputRecordingDelegate {
@IBOutlet var previewView: JLXPreviewView!

@IBOutlet var cameraUnavailableLabel: UILabel!
@IBOutlet var resumeButton: UIButton!

@IBOutlet var flashButton: UIButton!
@IBOutlet var changeCameraButton: UIButton!
@IBOutlet var cancelButton: UIButton!

@IBOutlet var durationLabel: UILabel!

@IBOutlet var recordButton: UIButton!

var delegate: JLXCameraViewControllerDelegate?

// Session management

// Communicate with the session and other session objects on this queue.
var sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL)
dynamic var session: AVCaptureSession!
var videoDeviceInput: AVCaptureDeviceInput!
var movieFileOutput: AVCaptureMovieFileOutput!

// Utilities
var setupResult: JLXAVCamSetupResult!
var sessionRunning: Bool!
var backgroundRecordingId: UIBackgroundTaskIdentifier!
var durationTimer: NSTimer?
var seconds: Int!
var isRecording = false

// MARK: - life cycle

override func viewDidLoad() {
super.viewDidLoad()

self.setupUI()

self.setupSession()
}

func setupUI() {
// Disable UI. The UI is enabled if and only if the session starts running.
self.changeCameraButton.enabled = false
self.recordButton.enabled = false
self.flashButton.enabled = false

self.resumeButton.setTitle("Tap to resume", forState: .Normal)
self.resumeButton.hidden = true
self.cameraUnavailableLabel.text = "Camera Unavailable"
self.cameraUnavailableLabel.hidden = true

let tapGesture = UITapGestureRecognizer(target: self, action: #selector(JLXCameraViewController.focusAndExposeTap(_:)))
self.previewView.addGestureRecognizer(tapGesture)
}

func setupAuthorization() {
// Check video authorization status. Video access is required and audio access is optional.
// If audio access is denied, audio is not recorded during movie recording.

switch AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo) {
case AVAuthorizationStatus.NotDetermined:
dispatch_suspend(self.sessionQueue)
AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted) in
if granted == false {
self.setupResult = JLXAVCamSetupResult.CameraNotAuthorized
}
dispatch_resume(self.sessionQueue)
})
case AVAuthorizationStatus.Authorized:
self.setupResult = JLXAVCamSetupResult.Success
default:
self.setupResult = JLXAVCamSetupResult.CameraNotAuthorized
}
}

// Setup the capture session.
// In general it is not safe to mutate an AVCaptureSession or any of its inputs, outputs, or connections from multiple threads at the same time.
// Why not do all of this on the main queue?
// Because -[AVCaptureSession startRunning] is a blocking call which can take a long time. We dispatch session setup to the sessionQueue
// so that the main queue isn't blocked, which keeps the UI responsive.
func setupSession() {
// Create the AVCaptureSession.
self.session = AVCaptureSession()

// Setup the preview view.
self.previewView.setSession(self.session)

self.setupResult = JLXAVCamSetupResult.Success

self.setupAuthorization()

dispatch_async(self.sessionQueue) {
if self.setupResult != JLXAVCamSetupResult.Success {
return
}

self.backgroundRecordingId = UIBackgroundTaskInvalid

let videoDevice: AVCaptureDevice = JLXCameraViewController.deviceWithMediaType(AVMediaTypeVideo, preferringPosition: AVCaptureDevicePosition.Back)

var videoDeviceInput: AVCaptureDeviceInput?
do {
videoDeviceInput = try AVCaptureDeviceInput.init(device: videoDevice)
} catch let error as NSError {
print("Could not create video device input: \(error.debugDescription)")
}

self.session.beginConfiguration()

if self.session.canAddInput(videoDeviceInput) {
self.session.addInput(videoDeviceInput)
self.videoDeviceInput = videoDeviceInput

dispatch_async(dispatch_get_main_queue()) {
// Why are we dispatching this to the main queue?
// Because AVCaptureVideoPreviewLayer is the backing layer for AAPLPreviewView and UIView
// can only be manipulated on the main thread.
// Note: As an exception to the above rule, it is not necessary to serialize video orientation changes
// on the AVCaptureVideoPreviewLayer’s connection with other session manipulation.

// Use the status bar orientation as the initial video orientation. Subsequent orientation changes are handled by
// -[viewWillTransitionToSize:withTransitionCoordinator:].
let orientation = AVCaptureVideoOrientation.LandscapeRight
let previewLayer = self.previewView.layer as! AVCaptureVideoPreviewLayer
previewLayer.connection.videoOrientation = orientation
}
} else {
print("Could not add video device input to the session")
self.setupResult = JLXAVCamSetupResult.SessionConfiguratonFailed
}

// Add an audio device input so that sound is recorded along with the video.
let audioDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
var audioDeviceInput: AVCaptureDeviceInput?
do {
audioDeviceInput = try AVCaptureDeviceInput.init(device: audioDevice)
} catch let error as NSError {
print("Could not create audio device input: \(error.debugDescription)")
}

if let audioDeviceInput = audioDeviceInput where self.session.canAddInput(audioDeviceInput) {
self.session.addInput(audioDeviceInput)
} else {
print("Could not add audio device input to the session")
}

let movieFileOutput = AVCaptureMovieFileOutput()
if self.session.canAddOutput(movieFileOutput) {
self.session.addOutput(movieFileOutput)
let connection = movieFileOutput.connectionWithMediaType(AVMediaTypeVideo)
if #available(iOS 8.0, *) {
if connection.supportsVideoStabilization {
connection.preferredVideoStabilizationMode = .Auto
}
} else {
connection.enablesVideoStabilizationWhenAvailable = true
}

if connection.supportsVideoOrientation {
connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
}

self.movieFileOutput = movieFileOutput
} else {
print("Could not add movie file output to the session")
self.setupResult = JLXAVCamSetupResult.SessionConfiguratonFailed
}

self.session.commitConfiguration()
}
}

override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}

override func viewWillAppear(animated: Bool) {
super.viewWillAppear(animated)

// response setupResult

dispatch_async(self.sessionQueue) {
if let result = self.setupResult {
switch result {
case .Success:
// Only setup observers and start the session running if setup succeeded.
self.addObservers()
self.session.startRunning()
self.sessionRunning = self.session.running
case .CameraNotAuthorized:
dispatch_async(dispatch_get_main_queue()) {
let title = NSBundle.mainBundle().localizedInfoDictionary!["CFBundleName"] as! String
let message = String.localizedStringWithFormat("AVCam doesn't have permission to use the camera, please change privacy settings", "Alert message when the user has denied access to the camera")
let cancelText = String.localizedStringWithFormat("OK", "Alert OK button")
let settingsText = String.localizedStringWithFormat("Settings", "Alert button to open Settings")
if #available(iOS 8.0, *) {
let alertController = UIAlertController(title: title, message: message, preferredStyle: UIAlertControllerStyle.Alert)
let cancelAction = UIAlertAction(title: cancelText, style: UIAlertActionStyle.Cancel, handler: nil)
alertController.addAction(cancelAction)
let settingsAction = UIAlertAction(title: settingsText, style: UIAlertActionStyle.Default, handler: { (action) in
UIApplication.sharedApplication().openURL(NSURL(string: UIApplicationOpenSettingsURLString)!)
})
alertController.addAction(settingsAction)
self.presentViewController(alertController, animated: true, completion: nil)
} else {
let alert = UIAlertView(title: title, message: message, delegate: nil, cancelButtonTitle: cancelText, otherButtonTitles: settingsText)
alert.show()
}
}
case .SessionConfiguratonFailed:
let title = NSBundle.mainBundle().localizedInfoDictionary!["CFBundleName"] as! String
let message = String.localizedStringWithFormat("Unable to capture media", "Alert message when something goes wrong during capture session configuration")
let cancelText = String.localizedStringWithFormat("OK", "Alert OK button")
if #available(iOS 8.0, *) {
let alertController = UIAlertController(title: title, message: message, preferredStyle: UIAlertControllerStyle.Alert)
let cancelAction = UIAlertAction(title: cancelText, style: UIAlertActionStyle.Cancel, handler: nil)
alertController.addAction(cancelAction)
self.presentViewController(alertController, animated: true, completion: nil)
} else {
let alert = UIAlertView(title: title, message: message, delegate: nil, cancelButtonTitle: cancelText)
alert.show()
}
}
}
}
}

override func viewDidDisappear(animated: Bool) {
dispatch_async(self.sessionQueue) {
if self.setupResult == JLXAVCamSetupResult.Success {
self.session.stopRunning()
self.removeObservers()
}
}

super.viewDidDisappear(animated)
}

// MARK: - Orientation

override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
return UIInterfaceOrientationMask.LandscapeRight
}

// MARK: - KVO and Notifications

func addObservers() {
self.session.addObserver(self, forKeyPath: "running", options: NSKeyValueObservingOptions.New, context: &SessionRunningContext)

NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(subjectAreaDidChange(_:)), name: AVCaptureDeviceSubjectAreaDidChangeNotification, object: self.videoDeviceInput.device)
// A session can only run when the app is full screen. It will be interrupted
// in a multi-app layout, introduced in iOS 9,
// see also the documentation of AVCaptureSessionInterruptionReason. Add
// observers to handle these session interruptions
// and show a preview is paused message. See the documentation of
// AVCaptureSessionWasInterruptedNotification for other
// interruption reasons.
NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(sessionRuntimeError(_:)), name: AVCaptureSessionRuntimeErrorNotification, object: self.session)
NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(sessionWasInterrupted(_:)), name: AVCaptureSessionWasInterruptedNotification, object: self.session)
NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(sessionInterruptionEnded(_:)), name: AVCaptureSessionInterruptionEndedNotification, object: self.session)
}

func removeObservers() {
self.session.removeObserver(self, forKeyPath: "running", context: &SessionRunningContext)

NSNotificationCenter.defaultCenter().removeObserver(self)
}

override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?, change: [String: AnyObject]?, context: UnsafeMutablePointer<Void>) {
if context == &SessionRunningContext {
if let isSessionRunning = change?[NSKeyValueChangeNewKey]?.boolValue where
isSessionRunning == true {
dispatch_async(dispatch_get_main_queue()) {
// Only enable the ability to change camera if the device has more than
// one camera.
self.changeCameraButton.enabled = isSessionRunning && (AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo).count > 1)
self.recordButton.enabled = isSessionRunning
}
}
} else {
super.observeValueForKeyPath(keyPath, ofObject: object, change: change, context: context)
}
}

func subjectAreaDidChange(notification: NSNotification) {
let devicePoint = CGPoint(x: 0.5, y: 0.5)
self.focusWithMode(AVCaptureFocusMode.ContinuousAutoFocus, exposureWithMode: AVCaptureExposureMode.ContinuousAutoExposure, atDevicePoint: devicePoint, monitorSubjectAreaChange: false)
}

func sessionRuntimeError(notification: NSNotification) {
// Automatically try to restart the session running if media services were
// reset and the last start running succeeded.
// Otherwise, enable the user to try to resume the session running.
if let error = notification.userInfo?[AVCaptureSessionErrorKey] where
error.code == AVError.MediaServicesWereReset.rawValue {
dispatch_async(self.sessionQueue, {
if self.sessionRunning == true {
self.session.startRunning()
self.sessionRunning = self.session.running
} else {
dispatch_async(dispatch_get_main_queue(), {
self.resumeButton.hidden = false
})
}
})
} else {
self.resumeButton.hidden = false
}
}

func sessionWasInterrupted(notification: NSNotification) {
// In some scenarios we want to enable the user to resume the session running.
// For example, if music playback is initiated via control center while using AVCam,
// then the user can let AVCam resume the session running, which will stop music playback.
// Note that stopping music playback in control center will not automatically resume the session running.
// Also note that it is not always possible to resume, see -[resumeInterruptedSession:].

var showResumeButton = false

// In iOS 9 and later, the userInfo dictionary contains information on why the
// session was interrupted.
if #available(iOS 9.0, *) {
if let reason = notification.userInfo?[AVCaptureSessionInterruptionReasonKey] where reason is Int
{
if (reason as! Int) == AVCaptureSessionInterruptionReason.AudioDeviceInUseByAnotherClient.rawValue || (reason as! Int) == AVCaptureSessionInterruptionReason.VideoDeviceInUseByAnotherClient.rawValue {
showResumeButton = true
} else if (reason as! Int) == AVCaptureSessionInterruptionReason.VideoDeviceNotAvailableWithMultipleForegroundApps.rawValue {
// Simply fade-in a label to inform the user that the camera is
// unavailable.
self.cameraUnavailableLabel.hidden = false
self.cameraUnavailableLabel.alpha = 0
UIView.animateWithDuration(0.25, animations: {
self.cameraUnavailableLabel.alpha = 1
})
}
}
} else {
print("Capture session was interrupted")
showResumeButton = UIApplication.sharedApplication().applicationState == UIApplicationState.Inactive
}

if showResumeButton {
// Simply fade-in a button to enable the user to try to resume the session
// running.
self.resumeButton.hidden = false
self.resumeButton.alpha = 0
UIView.animateWithDuration(0.25, animations: {
self.resumeButton.alpha = 1
})
}
}

func sessionInterruptionEnded(notification: NSNotification) {
print("Capture session interruption ended")

// hide buttons with animations
if !self.resumeButton.hidden {
UIView.animateWithDuration(0.25, animations: {
self.resumeButton.alpha = 0
}, completion: { (finished) in
self.resumeButton.hidden = true
})
}

if !self.cameraUnavailableLabel.hidden {
UIView.animateWithDuration(0.25, animations: {
self.cameraUnavailableLabel.alpha = 0
}, completion: { (finished) in
self.cameraUnavailableLabel.hidden = true
})
}
}

// MARK: - Response Actions

@IBAction func resumeButtonClick(sender: AnyObject) {
dispatch_async(self.sessionQueue) {
// The session might fail to start running, e.g., if a phone or FaceTime
// call is still using audio or video.
// A failure to start the session running will be communicated via a session
// runtime error notification.
// To avoid repeatedly failing to start the session running, we only try to
// restart the session running in the
// session runtime error handler if we aren't trying to resume the session
// running.
self.session.startRunning()
self.durationTimer = NSTimer(timeInterval: 1.0, target: self, selector: #selector(JLXCameraViewController.refreshDurationLabel), userInfo: nil, repeats: true)
NSRunLoop.currentRunLoop().addTimer(self.durationTimer!, forMode: NSRunLoopCommonModes)
self.durationTimer?.fire()

self.sessionRunning = self.session.running
if !self.session.running {
dispatch_async(dispatch_get_main_queue()) {
let title = NSBundle.mainBundle().localizedInfoDictionary!["CFBundleName"] as! String
let message = String.localizedStringWithFormat("Unable to resume", "Alert message when unable to resume the session running")
let cancelText = String.localizedStringWithFormat("OK", "Alert OK button")
if #available(iOS 8.0, *) {
let alertController = UIAlertController(title: title, message: message, preferredStyle: UIAlertControllerStyle.Alert)
let cancelAction = UIAlertAction(title: cancelText, style: UIAlertActionStyle.Cancel, handler: nil)
alertController.addAction(cancelAction)
self.presentViewController(alertController, animated: true, completion: nil)
} else {
let alert = UIAlertView(title: title, message: message, delegate: nil, cancelButtonTitle: cancelText)
alert.show()
}
}
} else {
dispatch_async(dispatch_get_main_queue()) {
self.resumeButton.hidden = true
}
}
}
}
@IBAction func recordButtonClick(sender: AnyObject) {
// Disable the Camera button until recording finishes, and disable the Record
// button until recording starts or finishes. See the
// AVCaptureFileOutputRecordingDelegate methods.
self.changeCameraButton.enabled = false
self.recordButton.enabled = false

if self.isRecording == true {
self.durationTimer?.invalidate()
self.durationTimer = nil
self.seconds = 0
self.durationLabel.text = secondsToFormatTimeFull(0)
} else {
self.seconds = 0
self.durationTimer = NSTimer(timeInterval: 1.0, target: self, selector: #selector(JLXCameraViewController.refreshDurationLabel), userInfo: nil, repeats: true)
NSRunLoop.currentRunLoop().addTimer(self.durationTimer!, forMode: NSRunLoopCommonModes)
self.durationTimer?.fire()
}

self.isRecording = !isRecording

dispatch_async(self.sessionQueue) {
if !self.movieFileOutput.recording && UIDevice.currentDevice().multitaskingSupported {
// Setup background task. This is needed because the
// -[captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:]
// callback is not received until AVCam returns to the foreground unless
// you request background execution time.
// This also ensures that there will be time to write the file to the
// photo library when AVCam is backgrounded.
// To conclude this background execution, -endBackgroundTask is called
// in
// -[captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:]
// after the recorded file has been saved.
self.backgroundRecordingId = UIApplication.sharedApplication().beginBackgroundTaskWithExpirationHandler(nil)

// Turn OFF flash for video recording.
JLXCameraViewController.setFlashMode(AVCaptureFlashMode.Off, forDevice: self.videoDeviceInput.device)

// Start recording to a temporary file.
let outputFileName = NSProcessInfo.processInfo().globallyUniqueString
let outputFileUrl = NSURL(fileURLWithPath: NSTemporaryDirectory()).URLByAppendingPathComponent(outputFileName).URLByAppendingPathExtension("mov")
self.movieFileOutput.startRecordingToOutputFileURL(outputFileUrl, recordingDelegate: self)
} else {
self.movieFileOutput.stopRecording()
}
}
}

@IBAction func changeCameraButtonClick(sender: AnyObject) {
self.changeCameraButton.enabled = false
self.recordButton.enabled = false

dispatch_async(self.sessionQueue) {
let currentVideoDivice = self.videoDeviceInput.device
var preferredPosition = AVCaptureDevicePosition.Unspecified
let currentPosition = currentVideoDivice.position

switch currentPosition {
case AVCaptureDevicePosition.Front:
preferredPosition = AVCaptureDevicePosition.Back
case AVCaptureDevicePosition.Back:
preferredPosition = AVCaptureDevicePosition.Front
default:
break
}

let videoDevice = JLXCameraViewController.deviceWithMediaType(AVMediaTypeVideo, preferringPosition: preferredPosition)

var videoDeviceInput: AVCaptureDeviceInput?
do {
videoDeviceInput = try AVCaptureDeviceInput.init(device: videoDevice)
} catch let error as NSError {
print("Could not create video device input: \(error.debugDescription)")
}

self.session.beginConfiguration()

// Remove the existing device input first, since using the front and back
// camera simultaneously is not supported.
self.session.removeInput(self.videoDeviceInput)

if self.session.canAddInput(videoDeviceInput) {
NSNotificationCenter.defaultCenter().removeObserver(self, name: AVCaptureDeviceSubjectAreaDidChangeNotification, object: currentVideoDivice)

JLXCameraViewController.setFlashMode(AVCaptureFlashMode.Auto, forDevice: videoDevice)
NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(JLXCameraViewController.subjectAreaDidChange(_:)), name: AVCaptureDeviceSubjectAreaDidChangeNotification, object: videoDevice)

self.session.addInput(videoDeviceInput)
self.videoDeviceInput = videoDeviceInput
} else {
self.session.addInput(self.videoDeviceInput)
}

let connection = self.movieFileOutput.connectionWithMediaType(AVMediaTypeVideo)
if connection.supportsVideoStabilization {
if #available(iOS 8.0, *) {
connection.preferredVideoStabilizationMode = .Auto
} else {
connection.enablesVideoStabilizationWhenAvailable = true
}
}

self.session.commitConfiguration()

dispatch_async(dispatch_get_main_queue()) {
self.changeCameraButton.enabled = true
self.recordButton.enabled = true
}
}
}

func focusAndExposeTap(gestureRecognizer: UIGestureRecognizer) {
let devicePoint = (self.previewView.layer as! AVCaptureVideoPreviewLayer).captureDevicePointOfInterestForPoint(gestureRecognizer.locationInView(gestureRecognizer.view))
self.focusWithMode(AVCaptureFocusMode.AutoFocus, exposureWithMode: AVCaptureExposureMode.AutoExpose, atDevicePoint: devicePoint, monitorSubjectAreaChange: true)
}

@IBAction func flashButtonClick(sender: AnyObject) {
// TODO: - should deal while changeCameraButton
}

func refreshDurationLabel() {
seconds = seconds + 1
self.durationLabel.text = secondsToFormatTimeFull(Double(self.seconds))
}

@IBAction func cancelButtonClick(sender: AnyObject) {
delegate?.cameraViewControllerDidCancel(self)
self.dismissViewControllerAnimated(true, completion: nil)
}

// MARK: - File Output Recording Delegate
func captureOutput(captureOutput: AVCaptureFileOutput!, didStartRecordingToOutputFileAtURL fileURL: NSURL!, fromConnections connections: [AnyObject]!) {
// Enable the Record button to let the user stop the recording.
dispatch_async(dispatch_get_main_queue()) {
self.recordButton.enabled = true
self.recordButton.setTitle(String.localizedStringWithFormat("Stop", "Recording button stop title"), forState: .Normal)
}
}

func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!) {
// Note that currentBackgroundRecordingID is used to end the background task
// associated with this recording.
// This allows a new recording to be started, associated with a new
// UIBackgroundTaskIdentifier, once the movie file output's isRecording
// property
// is back to NO — which happens sometime after this method returns.
// Note: Since we use a unique file path for each recording, a new recording
// will not overwrite a recording currently being saved.

self.delegate?.cameraViewController(self, didFinishCaptureVideoUrl: outputFileURL)
self.dismissViewControllerAnimated(true, completion: nil)
}

// MARK: - Device Configuration
func focusWithMode(focusMode: AVCaptureFocusMode, exposureWithMode exposureMode: AVCaptureExposureMode, atDevicePoint point: CGPoint, monitorSubjectAreaChange: Bool) {
dispatch_async(self.sessionQueue) {
let device = self.videoDeviceInput.device
do {
try device.lockForConfiguration()
// Setting (focus/exposure)PointOfInterest alone does not initiate a
// (focus/exposure) operation.
// Call -set(Focus/Exposure)Mode: to apply the new point of interest.
if device.focusPointOfInterestSupported && device.isFocusModeSupported(AVCaptureFocusMode.AutoFocus) {
device.focusPointOfInterest = point
device.focusMode = focusMode
}

if device.exposurePointOfInterestSupported && device.isExposureModeSupported(AVCaptureExposureMode.AutoExpose) {
device.exposurePointOfInterest = point
device.exposureMode = exposureMode
}

device.subjectAreaChangeMonitoringEnabled = monitorSubjectAreaChange

device.unlockForConfiguration()
} catch let error as NSError {
print(" \(error.debugDescription)")
}
}
}

class func setFlashMode(flashMode: AVCaptureFlashMode, forDevice device: AVCaptureDevice) {
if device.hasFlash && device.isFlashModeSupported(flashMode) {
do {
try device.lockForConfiguration()
device.flashMode = flashMode
device.unlockForConfiguration()
} catch let error as NSError {
print("Could not lock device for configuration: \(error.debugDescription)")
}
}
}

class func deviceWithMediaType(mediaType: String, preferringPosition position: AVCaptureDevicePosition) -> AVCaptureDevice {
let devices = AVCaptureDevice.devicesWithMediaType(mediaType) as! [AVCaptureDevice]
var captureDevice = devices.first

for device in devices {
if device.position == position {
captureDevice = device
break
}
}

return captureDevice!
}
}

WWDC 2011 - Session 405 - Exploring AV Foundation
https://developer.apple.com/videos/play/wwdc2011/405/

1. Preamble

English tutorials are worth studying, but take them step by step; summarizing and practicing afterwards is what matters, because you have to turn the material into your own understanding. I spent a while reading the documentation and the framework felt huge and complex. After writing some code and then rewatching the relevant session, everything clicked: it is really just the familiar Cocoa framework structure, and the only unfamiliar part is the video domain itself, which feels abstract if you jump straight into development. Keep learning while you build and you soon hit your stride.

When studying the videos, a second screen is indispensable, and taking notes on the session keynote with an iPad has recently made this very efficient.

The sample code shown throughout the session is very easy to follow.

2. Summary

Key points from the keynote:

  1. Five major capabilities: inspect, play, edit (compose), export, and capture.
  2. Two media models: static and dynamic, analogous to NSArray versus NSMutableArray, depending on whether the media can mutate while it is being read.
  3. Asynchronous loading.
  4. Key-value observing is supported for most properties.
  5. "There's a protocol for that"™
  6. AVPlayerItem: through AVAsynchronousKeyValueLoading's loadValuesAsynchronouslyForKeys(_:completionHandler:) you can load status and property values asynchronously and update the UI when they arrive.
  7. AVPlayer's time changes too quickly for asynchronous KVO; use synchronous observation instead, adding and removing observers with addPeriodicTimeObserverForInterval(_:queue:usingBlock:).
  8. AVPlayerItem exposes the media-related properties; add KVO on its status (see the Swift sketch at the end of this post).
  9. AVPlayerItemTrack: the enabled property lets you play tracks selectively.
  10. AVQueuePlayer: plays a queue of AVAssets, useful for playing back after editing.
  11. AVPlayerLayer displays the media on screen, with properties such as readyForDisplay and videoGravity.
  12. AVMediaSelectionGroup handles selectable tracks such as subtitles and alternate audio.
  13. iPod Library: MPMediaQuery
  14. Camera Roll: AssetsLibrary (iOS 9: Photos)
  15. The static and dynamic models use different observation mechanisms, as follows:

Matters of protocol and platform etiquette

  • AVAsynchronousKeyValueLoading
loadValuesAsynchronouslyForKeys:completionHandler:
statusOfValueForKey:error:
  • NSObject(NSKeyValueObserving)
addObserver:forKeyPath:options:context:
removeObserver:forKeyPath:
observeValueForKeyPath:ofObject:change:context:

3. Sample code excerpts:

NSArray *keys = [NSArray arrayWithObject:@"playable"];
[asset
loadValuesAsynchronouslyForKeys:keys
completionHandler:^{
NSError *error = nil;
AVKeyValueStatus playableStatus =
[asset statusOfValueForKey:@"playable" error:&error];
switch (playableStatus) {
case AVKeyValueStatusLoaded:
[self updateUIForAsset];
break;
case AVKeyValueStatusFailed:
[self reportError:error forAsset:asset];
break;
...
}
}];
- (void)setUpTransportUI {
CMTime interval = CMTimeMakeWithSeconds(0.5, NSEC_PER_SEC);
myObserver = // stored in an instance variable so -cleanUp can remove it later
[[myPlayer addPeriodicTimeObserverForInterval:interval
queue:dispatch_get_main_queue()
usingBlock:^(CMTime time) {
[self movePlayheadUI];
}] retain];
}

- (void)cleanUp {
[myPlayer removeTimeObserver:myObserver];
[myObserver release];
}
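To complement the Objective-C excerpts above, here is a Swift sketch of point 8 (observing AVPlayerItem.status with KVO before starting playback), written in the same Swift 2 style as the AVCam code earlier; the class name, context token, and URL handling are assumptions, not part of the session:

import AVFoundation

private var PlayerItemStatusContext = 0

class PlayerController: NSObject {
    let item: AVPlayerItem
    let player: AVPlayer

    init(URL: NSURL) {
        item = AVPlayerItem(URL: URL)
        player = AVPlayer(playerItem: item)
        super.init()
        // Observe status so play() is only called once the item is ready.
        item.addObserver(self, forKeyPath: "status", options: .New, context: &PlayerItemStatusContext)
    }

    deinit {
        item.removeObserver(self, forKeyPath: "status", context: &PlayerItemStatusContext)
    }

    override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?, change: [String: AnyObject]?, context: UnsafeMutablePointer<Void>) {
        guard context == &PlayerItemStatusContext else {
            super.observeValueForKeyPath(keyPath, ofObject: object, change: change, context: context)
            return
        }
        switch item.status {
        case .ReadyToPlay:
            player.play()
        case .Failed:
            print("Player item failed: \(item.error)")
        case .Unknown:
            break
        }
    }
}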