WWDC 2017 Session 402: https://developer.apple.com/videos/play/wwdc2017/402/

Some of the features are already demonstrated in the playground at https://github.com/ole/whats-new-in-swift-4, so I won't record those in detail.

Overall: the language changes are minor touch-ups and the migration cost is small. What pleases me most are the build improvements; combined with the Xcode optimizations, Swift 4 should be noticeably faster.

1. Language refinements and additions

  • private reverts to the Swift 2 behavior: visible to extensions in the same file
  • Composing classes and protocols, written as SomeClass & SomeProtocol
  • KeyPath
  • Native JSON parsing in Swift via the Codable protocol (see the sketch after this list)
  • Swift 3 compatibility: Swift 3 and Swift 4 targets can coexist in one project
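
A minimal Codable sketch (the User type and JSON here are my own illustration, not from the session):

import Foundation

struct User: Codable {
    let name: String
    let age: Int
}

let json = "{\"name\": \"Will\", \"age\": 28}".data(using: .utf8)!
do {
    let user = try JSONDecoder().decode(User.self, from: json)
    print(user.name) // "Will"
    _ = try JSONEncoder().encode(user) // and back to JSON
} catch {
    print(error)
}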

2. Build improvements

  • New build system: must be enabled manually (opt-in)
  • Indexing optimizations: great, this saves a lot of time; I once thought the old behavior was an Xcode bug
  • Fix for unpredictable performance in Swift 3: existential buffers are now copy-on-write (COW Existential Buffers)
  • Smaller binaries: unused @objc thunks are removed; see the build setting "Swift 3 @objc Inference" (change it to Default)
  • Smaller symbol size (Symbol Size)

3. Strings

  • Unicode support: an accented letter or an emoji counts as one letter, with a count of 1. In Swift, a Character is a grapheme.

  • Emoji (grapheme) processing is faster

  • characters is now a proper collection

  • String itself is a collection again too

  • Slicing is supported: let s = "one,two,three"; s.split(separator: ",")

  • A Substring shares its parent String's storage, which can pin memory; copy it out with String(substring) before keeping it around

  • Multiline string literals at last, delimited by """ at the start and end (see the sketch below)
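
A quick tour of the new String behavior (my own sketch):

let family = "👩‍👩‍👧‍👧"
family.count // 1 — a Character is a grapheme

let s = "one,two,three"
let parts = s.split(separator: ",")   // [Substring], sharing s's storage
let words = parts.map(String.init)    // copy out before storing long-term

let poem = """
    Multiline strings,
    at last.
    """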

4. Generics improvements

  • Extending Sequence
  • Sequence now has an Element associated type
  • Generic subscripts (see the sketch below)
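
A generic subscript sketch along the lines shown in the session (the JSONDictionary type is my own illustration):

struct JSONDictionary {
    private let storage: [String: Any]
    init(_ storage: [String: Any]) { self.storage = storage }

    // Swift 4: subscripts can take generic parameters
    subscript<T>(key: String) -> T? {
        return storage[key] as? T
    }
}

let dict = JSONDictionary(["name": "Will", "age": 28])
let name: String? = dict["name"]   // the calling context picks T
let age: Int? = dict["age"]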

5. Exclusive Access to Memory

Enforced at run time for:

  • Global variables
  • Properties of classes
  • Local variables captured in escaping closures

Compile-time checking is on by default; the runtime check is off by default and can be changed in build settings.
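
A sketch of the kind of conflict the checker catches (my own example): the inout access to count overlaps with the closure's read of the same variable.

func modifyTwice(_ value: inout Int, by modifier: (inout Int) -> Void) {
    modifier(&value)
    modifier(&value)
}

var count = 1
// Trap under exclusivity enforcement: `count` is read inside the closure
// while it is already exclusively accessed as an inout argument.
modifyTwice(&count) { $0 += count }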

This part is a bit dense; I'll revisit onevcat's annotated Chinese translation of the official Swift Ownership Manifesto (所有权宣言 译文评注版).

It's been about two years, on and off, since I started learning iOS development. Having just left my job, it's a good time to take stock and organize what I know.

I got started with Stanford University's Developing iOS 7 Apps course; it was in English, but that's where I picked up the most basic concepts bit by bit. Later I left my job for a training course, which was my period of fastest progress and where I built real self-teaching ability. Ever since starting iOS work, self-teaching is the skill I've prided myself on, because for iOS development, once the language, design patterns, frameworks, documentation, debugging and testing are mastered, day-to-day work is mostly concrete problems or new frameworks and system features, and consulting the docs or a quick Google search solves them.

Architecture

When it comes to iOS architecture it has to be MVC; every project since I started working has used it: layering that is simple enough, and a long-proven, solid design. Others such as MVVM I know only in passing and have never used in a real project. With fairly simple business logic and a small team, MVC works well enough.

Language

Objective-C was my first language; it's complex enough to scare off newcomers, though really only properties and method invocation are hard. Using Swift at work later made me feel how simple and powerful a modern language can be, especially Swift's multi-paradigm nature and support for functional programming. Out of preference for Swift, I gradually replaced third-party libraries with their Swift counterparts. CocoaPods supports frameworks well too: add use_frameworks! and even Objective-C libraries can be called in a Swifty way.

Design patterns

Delegates, notifications, KVO, target-action, blocks (closures), factory methods, singletons and the rest need no introduction; choose whatever fits the actual need and your taste. I especially recommend this article: Communication Patterns (消息传递机制).

Cocoa Touch

The iOS render tree, views, layers, the responder chain, Core Animation, custom views, the inheritance relationships of common UIKit controls, table views, collection views and so on should all be second nature.

Learning resources

Weibo, Twitter, blogs, books and video courses are all good. A few that influenced me most:

Frequently used libraries

These are the libraries I recommend most:


pod 'SwiftyJSON'
pod 'SnapKit'
pod 'IGListKit'
pod 'Kanna'
pod 'URLNavigator'
pod 'MXSegmentedPager'
pod 'QMUIKit'
pod 'IBAnimatable'
pod 'Ruler'
pod 'DZNEmptyDataSet'
pod 'CYLTabBarController'
pod 'FDFullscreenPopGesture'
pod 'MJRefresh'
pod 'SwiftyUserDefaults'
pod 'YYKit'

# Test tools
pod 'MLeaksFinder'
pod 'FLEX', '~> 2.0', :configurations => ['Debug']
pod 'Reveal-SDK', '~> 4.0', :configurations => ['Debug']

Auto-playing video cells while scrolling mainly comes down to reading each cell's frame as the list scrolls: in IGListScrollDelegate's listAdapter(_:didEndDragging:willDecelerate:), grab the visible cells, play the video cell that matches the rules, and pause the rest.

This mainly records the approach, for reference. The rules:

  • Play the front-most cell whose video frame is more than half on screen
  • If less than half is visible, stop playback; the guide line is the bottom of the navigation bar at y = 64
  • Call this method manually when the view controller appears
  • Release the player in the view controller's didMove(toParentViewController:)
  • visibleCells does not return cells in index order, which is a gotcha; re-sort them
extension VideoSectionController: IGListScrollDelegate {

    // MARK: - IGListScrollDelegate

    func listAdapter(_ listAdapter: IGListAdapter!, didScroll sectionController: IGListSectionController!) {
    }

    func listAdapter(_ listAdapter: IGListAdapter!, willBeginDragging sectionController: IGListSectionController!) {
    }

    func listAdapter(_ listAdapter: IGListAdapter!, didEndDragging sectionController: IGListSectionController!, willDecelerate decelerate: Bool) {
        guard var cells = self.collectionContext?.visibleCells(for: sectionController) as? [VideoCell] else { return }

        // Re-sort the cells: visibleCells does not return them in index order
        cells = cells.sorted { (cell0, cell1) -> Bool in
            guard let index0 = collectionContext?.index(for: cell0, sectionController: sectionController) else {
                return true
            }
            guard let index1 = collectionContext?.index(for: cell1, sectionController: sectionController) else {
                return true
            }
            return index0 < index1
        }

        for cell in cells {
            let videoCenter = cell.convert(cell.videoCoverImageView.center, to: nil)

            if videoCenter.y < 64 {
                // Above the guide line at the bottom of the navigation bar: pause
                cell.pause()
            } else {
                // First cell past the guide line: play it and stop checking the rest
                cell.play()
                break
            }
        }
    }
}
override func didMove(toParentViewController parent: UIViewController?) {
    super.didMove(toParentViewController: parent)

    if parent == self.navigationController?.parent {
        print("Back tapped")
        NotificationCenter.default.post(name: Notification.Name.VideoPlayer.VideoCellStopPlay, object: nil, userInfo: nil)
    }
}

Official examples: https://www.paintcodeapp.com/examples

PaintCode can draw all kinds of custom curves and shapes (no more worrying that a designer's curves and details can't be implemented), and it integrates neatly into iOS projects, supporting both Swift and Objective-C. Dynamic Shapes in particular supports simple constraints, so a shape keeps its rules as its size changes.

Screenshot:

The workflow is simple, much like Sketch.

Custom view code:

An image view with an arrow and a circular border:

import UIKit

class ButtonView: UIView {

    var image: UIImage? {
        didSet {
            if imageView == nil {
                imageView = UIImageView()
                imageView?.backgroundColor = .clear
                imageView?.layer.masksToBounds = true
                self.insertSubview(imageView!, at: 0)
            }
            imageView?.image = image
        }
    }

    private var imageView: UIImageView?

    override func draw(_ rect: CGRect) {
        // The key line: one call renders the PaintCode shape
        JLXStyleKit.drawBubbleButton(frame: self.bounds)
    }

    override func layoutSubviews() {
        super.layoutSubviews()

        imageView?.frame = CGRect(x: self.bounds.width * 0.5 / 38.0,
                                  y: self.bounds.width * 0.5 / 38.0,
                                  width: self.bounds.width * 37 / 38.0,
                                  height: self.bounds.width * 37 / 38.0)
        imageView?.layer.cornerRadius = bounds.width / 2.0
    }

}

Result:

Demo:

https://github.com/gewill/PaintCode-Dynamic-Bezier-Shapes-Demo

Original post on the Apple Swift blog: https://developer.apple.com/swift/blog/?id=39

The main change: in Swift 3, Objective-C's id is bridged to Swift's Any.

Objective-C      Swift 2                  Swift 3
id               AnyObject                Any
NSArray *        [AnyObject]              [Any]
NSDictionary *   [NSObject: AnyObject]    [AnyHashable: Any]
NSSet *          Set<NSObject>            Set<AnyHashable>
  • AnyObject in method and protocol signatures becomes Any
  • Calling most C and Objective-C APIs now needs explicit type conversion; pointers become UnsafePointer<AnyObject>
  • Objective-C protocols are still class-constrained, so structs and enums cannot conform; convert explicitly, e.g. String as NSString, Array as NSArray
  • Any lacks some of AnyObject's magic dynamic-lookup members, e.g. (x as AnyObject).description
  • Swift value types are implicitly bridged to id (see the sketch below)
  • Cocoa keeps evolving in step with Swift and becomes more powerful
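
A small sketch of the bridging in practice (my own example, not from the post):

import Foundation

// Swift value types now bridge to id implicitly
let values: [Any] = ["hello", 42, Date()]
let array = NSArray(array: values)   // NSArray accepts any Swift value

// Coming back from Objective-C you get Any; cast explicitly
if let first = array.firstObject as? String {
    print(first)
}

// AnyObject's dynamic lookup still works after an explicit cast
let description = (values[1] as AnyObject).description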

Learn iOS Design is the first book in the Design+Code series; it covers iOS design in detail, pairing theory with tools in nearly every chapter. Very comprehensive overall, though it will take more practice to sink in.

Core Philosophies

Covers the design philosophy and the three baseline requirements: consider the touch interface, make the text readable, and optimize for the iPhone 5, 6 and 6 Plus.

iOS is driven by 3 core philosophies: deference, clarity and depth.

In Retina, typography should have a minimum size of 11pt. The optimal size for reading is around 16pt.

Designing for iOS 9

Walks through sensible layout sizes for common controls on iOS 9.

iOS uses vibrant colors to bring out the buttons.

iOS often uses neutral colors to serve as the background and menu areas.


Learn Colors

For choosing and comparing colors, HSB values are easier to reason about than raw numbers elsewhere.
The chapter then covers monochromatic, analogous, complementary, neutral and inverted color schemes, along with useful neutral swatches, the meaning of colors, Material Design colors, and gradient palettes (UI Gradients).

use colors only to draw attention to a button or element of importance.

I suggest starting with a vibrant, pastel color that is Primary or Secondary.

These are the colors used by Apple in their native apps. They’re vibrant and perfect for buttons, icons and actionable items.

I can easily map in my mind how much Hue, Saturation and Brightness. Those values make a lot more sense to me.

Meaning In Colors: I suggest reading this guide about colors.

This is a nice collection of gradients: http://uigradients.com

Learn Typography

Covers common typography practices and some font websites.

Typography basics: guide lines (baseline and friends), serif vs. sans-serif typefaces


Let’s look at these 5 rules of good typography and apply them to modern design for mobile and for Websites.

The font size should be at least 11pt to be readable on the iPhone, iPad and Apple Watch. While that is the minimum value, the recommended size for the body text is actually 15-18pt.

At 12-18pt, use Regular. At 18-24pt, use Light, at 24-32pt, use Thin and at 32pt or more, use Ultralight. Notice that for each scale, the text remains readable while looking clean and sophisticated.


“People say design isn’t art. It isn’t. Great design is art.”

字体资源网站:

Google Fonts
Typekit
fonts.com

Learn Animations

Covers the role of animation in interaction, some basic principles, and tools for prototyping animations.

Good animations enhance, bad animations distract.

Good animations should provide feedback on taps and gestures, and give a sense of direct manipulation.

Modern apps tend to use Spring and Ease animations much more than Linear.


On Spring, an animation framework that I created for iOS, I made available a bunch of preset animations that combine many transforms at once. They can be inexpensively integrated to your app, without even learning how to code. (A no-code animation library similar to IBAnimatable.)

Demo video: https://designcode.io/cloud/chapter1/Animation-Spring.mp4

Animations Shouldn’t Last Longer Than 1 second.

“Design is the fundamental soul of a human-made creation that ends up expressing itself in successive outer layers of the product or service.” 

– Steve Jobs

The Animation Tools:

  • Principle
  • Flinto for Mac
  • Pixate
  • Origami
  • Framer
  • After Effects

UI Icons

A rather thin chapter that lists some icon websites.

Icons should convey meaning and avoid being confused with other apps' icons, unless done deliberately.

UI Sounds

Sound effects are another way to enhance the experience; they suit notifications, positive feedback, and negative feedback.

Design Inspiration

Ways to find design inspiration: observe craftsmanship in daily life, read books, and follow a few websites.

Books:

  • Becoming Steve Jobs
  • Steve Jobs by Walter Isaacson
  • Jony Ive: The Genius Behind Apple’s Greatest Products
  • Dieter Rams
  • Elon Musk
  • The Tipping Point
  • Outliers
  • Blink
  • David and Goliath
  • What The Dog Saw

Websites: Twitter, Medium, Sidebar

it’s about 10% reading, 30% writing and collecting, and 60% design and code.

Design Principles

Covers several common design principles plus the author's personal advice: self-teaching, designing as little as possible, the rule of three, the 10,000-hour rule, doing the work you dream of, and that rest is for the strong.

Getting Your Product Out There

Some advice on getting your product out there and running it.

Benefits, Not Features

[weak self] is safer than [unowned self].
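
A minimal sketch of the difference (my own example): [unowned self] traps if self has been deallocated when the closure runs, while [weak self] simply does nothing.

class Downloader {
    var onFinish: (() -> Void)?

    func start() {
        onFinish = { [weak self] in
            guard let strongSelf = self else { return } // self already gone: bail out
            strongSelf.handleFinish()
        }
    }

    func handleFinish() { print("finished") }
}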

Extensions: split code into blocks, keep it concise, but don't overuse them.

Protocols: a more concise way to express an API. A protocol is a type: it can appear as the type of a property, or as a parameter or return value of a method.

Here is a protocol example:

protocol Moveable {
    mutating func moveTo(p: CGPoint)
}

class Car: Moveable {
    func moveTo(p: CGPoint) {
        print("Car move to \(p)")
    }

    func changeOil() {
        print("changeOil")
    }

    var name: String

    init() {
        self.name = ""
    }

    init(name: String) {
        self.name = name
    }
}

struct Shape: Moveable {
    mutating func moveTo(p: CGPoint) {
        print("Shape move to \(p)")
    }

    func draw() {
        print("draw")
    }
}

let prius: Car = Car(name: "A")
let square: Shape = Shape()

var thingToMove: Moveable = prius
thingToMove.moveTo(CGPoint(x: 100, y: 100))
var find: Car = prius
find.name

// A protocol can be used as the type of a variable holding any conforming value,
// but you cannot call the value's non-protocol methods or properties through it
//thingToMove.changeOil()
thingToMove = square

let thingsToMove: [Moveable] = [prius, square]

func slide(var slider: Moveable) {
    let positionToSlideTo = CGPoint(x: 88, y: 88)
    slider.moveTo(positionToSlideTo)
}

slide(prius)
slide(square)

protocol Slippery {
    var speed: Double { get }
}

extension Car: Slippery {
    var speed: Double {
        return 1
    }
}

func slipAndSlide(x: protocol<Slippery, Moveable>) {
    print("slipAndSlide")
}
slipAndSlide(prius)


In a recent project the non-persistent layer used arrays as a temporary store, which involved de-duplication. I had always walked arrays with for-in loops; comparing that with functional programming, the latter wins on modular composability.

The Transforming Arrays section of Advanced Swift explains the basic array transformation methods, mostly by reimplementing some of them, and recommends writing your own extensions to the standard library.

Here are 13 independent methods from the standard library that can be combined freely:

  • map and flatMap — how to transform an element
  • filter — should an element be included?
  • reduce — how to fold an element into an aggregate value
  • sort and lexicographicCompare — in what order should two elements come?
  • indexOf and contains — does this element match?
  • minElement and maxElement — which is the min/max of two elements?
  • elementsEqual and startsWith — are two elements equivalent?
  • split — is this element a separator?

Programming is an operational craft, after all: all of the methods above have self-explanatory names, and a few real calls are enough to understand them. Below is my demo:

//: Playground - noun: a place where people can play

import UIKit

var fibs = [2, 3, 45, 53, 32, 12, 32, 1, 0, 1]

fibs.map {
    Double($0 * 3)
}

let suits = ["♠", "♥", "♣", "♦"]

let ranks = ["J", "Q", "K", "A"]

let allCombinations = suits.flatMap { suit in
    ranks.map { rank in
        (suit, rank)
    }
}

suits.flatMap { (suit) -> [String]? in
    [suit, suit]
}

suits.flatMap { (suit) -> [String] in
    if suit != "♥" {
        return [suit]
    } else {
        return []
    }
}

fibs.flatMap { (num) in
    print(num)
}

fibs.sort {
    $0 > $1
}

fibs.lexicographicalCompare([3])
fibs.lexicographicalCompare([1])
fibs.lexicographicalCompare([3]) { (num0, num1) -> Bool in
    num0 > num1
}
fibs.lexicographicalCompare([1]) { (num0, num1) -> Bool in
    num0 > num1
}

fibs.reduce(0) { (total, num) -> Int in
    total + num
}

fibs.filter { (num) -> Bool in
    num % 3 == 0
}

fibs.indexOf(4)
fibs.indexOf(1)
fibs.contains(4)
fibs.contains(1)

fibs.minElement()
fibs.minElement { (num0, num1) -> Bool in
    num0 < num1
}
fibs.minElement { (num0, num1) -> Bool in
    num0 > num1
}
fibs.maxElement()
fibs.maxElement { (num0, num1) -> Bool in
    print("num0: \(num0), num1: \(num1)")
    return num0 < num1
}

fibs.maxElement { (num0, num1) -> Bool in
    num0 > num1
}

var strs = ["Lee", "Bee", "Will", "10", ""]
strs.maxElement { (str0, str1) -> Bool in
    str0 < str1
}
strs.maxElement { (str0, str1) -> Bool in
    str0 > str1
}

strs.elementsEqual(["Lee", "Bee", "Will", "10"])
strs.elementsEqual(["Lee", "Bee", "Will", "10", ""])

strs.startsWith(["Lee"])
strs.startsWith(["10"])
strs.startsWith(["Lee"]) { (str0, str1) -> Bool in
    str0 == str1
}

strs.split("10")
strs.split("3")
fibs.split(1)

fibs.split(1, maxSplit: 2, allowEmptySlices: true)
fibs.split(1, maxSplit: 2, allowEmptySlices: false)
fibs.split(1, maxSplit: 3, allowEmptySlices: true)
fibs.split(1, maxSplit: 6, allowEmptySlices: false)
fibs.split(3333, maxSplit: 2, allowEmptySlices: true)
fibs.split(44444, maxSplit: 2, allowEmptySlices: false)

fibs.split(32, maxSplit: 0, allowEmptySlices: true)
fibs.split(32, maxSplit: 1, allowEmptySlices: false)
fibs.split(32, maxSplit: 2, allowEmptySlices: true)
fibs.split(32, maxSplit: 3, allowEmptySlices: false)

fibs.split { (num) -> Bool in
    num % 2 == 0
}

fibs.split(1, allowEmptySlices: true) { (num) -> Bool in
    num % 2 == 1
}

fibs.split(66, allowEmptySlices: true) { (num) -> Bool in
    num % 2 == 1
}

fibs.forEach { (num) in
    print(num - 44)
}

Demo download: https://github.com/gewill/test-projects/tree/master/collections%20transform.playground

Export

To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.

Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.

Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source like an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer’s inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.

1. Reading an Asset

Each AVAssetReader object can be associated only with a single asset at a time, but this asset may contain multiple tracks. For this reason, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader before you begin reading in order to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.

1.1. Creating the Asset Reader

All you need to initialize an AVAssetReader object is the asset that you want to read.

Just initialize it directly; it's a failable initializer, so check that it succeeded.

NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);

Note: Always check that the asset reader returned to you is non-nil to ensure that the asset reader was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.

1.2. Setting Up the Asset Reader Outputs

After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO. In this way, you reap the benefits of performance improvements. In all of the examples within this chapter, this property could and should be set to NO.

Set alwaysCopiesSampleData to NO to get the performance win.

If you want only to read media data from one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, using a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, you set up your track output as follows:

Setting up the track output:

AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];

Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.

You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.

With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the audio tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code displays how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.

AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];

Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.

The video composition output behaves in much the same way: You can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:

AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];

1.3. Reading the Asset’s Media Data

To start reading after setting up all of the outputs you need, call the startReading method on your asset reader. Next, retrieve the media data individually from each output using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:

Use copyNextSampleBuffer to obtain a CMSampleBufferRef: an opaque type wrapping zero or more samples of a media type.

// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
    // Copy the next sample buffer from the reader output.
    CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
    if (sampleBuffer)
    {
        // Do something with sampleBuffer here.
        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
    }
    else
    {
        // Find out why the asset reader output couldn't copy another sample buffer.
        if (self.assetReader.status == AVAssetReaderStatusFailed)
        {
            NSError *failureError = self.assetReader.error;
            // Handle the error here.
        }
        else
        {
            // The asset reader output has read all of its samples.
            done = YES;
        }
    }
}

2. Writing an Asset

You use the AVAssetWriter class to write media data from multiple sources to a single file of a specified file format. You don’t need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.

CVPixelBufferRef: A reference to a Core Video pixel buffer object. The pixel buffer stores an image in main memory.

2.1. Creating the Asset Writer

To create an asset writer, specify the URL for the output file and the desired file type. The following code displays how to initialize an asset writer to create a QuickTime movie:

NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:&outError];
BOOL success = (assetWriter != nil);

2.2. Setting Up the Asset Writer Inputs

For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:

// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};

// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
    AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
    AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
    AVSampleRateKey : [NSNumber numberWithInteger:44100],
    AVChannelLayoutKey : channelLayoutAsData,
    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};

// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];

Note: If you want the media data to be written in the format in which it was stored, pass nil in the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.

Your asset writer input can optionally include some metadata or specify a different transform for a particular track using the metadata and transform properties respectively. For an asset writer input whose data source is a video track, you can maintain the video’s original transform in the output file by doing the following:

AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;

Note: Set the metadata and transform properties before you begin writing with your asset writer for them to take effect.

When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer object working in the RGB domain that will use CGImage objects to create its pixel buffers.

NSDictionary *pixelBufferAttributes = @{
    kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
    kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
    kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];

Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.

2.3. Writing Media Data

When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. You then need to start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions and the time range of each session defines the time range of media data included from within the source. For example, if your source is an asset reader that is supplying media data read from an AVAsset object and you don’t want to include media data from the first half of the asset, you would do the following:

Once configured, call startWriting to begin; usage closely mirrors the reader above.

CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.

Normally, to end a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end the writing session simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:

Either method ends the writing session: endSessionAtSourceTime: or finishWriting.

// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    while ([self.assetWriterInput isReadyForMoreMediaData])
    {
        // Get the next sample buffer.
        CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
        if (nextSampleBuffer)
        {
            // If it exists, append the next sample buffer to the output file.
            [self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
            CFRelease(nextSampleBuffer);
            nextSampleBuffer = nil;
        }
        else
        {
            // Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
            [self.assetWriterInput markAsFinished];
            break;
        }
    }
}];

The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would need to insert some logic to return CMSampleBufferRef objects representing the media data that you want to write. One possible source of sample buffers is an asset reader output.

copyNextSampleBufferToWrite is only a stub here; this is where you'd add the logic that returns the sample buffers you want to write.

3. Reencoding Assets


You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is just to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, you start up both of them with calls to the startReading and startWriting methods, respectively. The following code snippet displays how to use a single asset writer input to write media data supplied by a single asset reader output:

Using an asset reader and writer in tandem converts an asset from one format to another, with more control than AVAssetExportSession: you can specify your own output format and modify the asset mid-conversion.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
    while ([self.assetWriterInput isReadyForMoreMediaData])
    {
        // Get the asset reader output's next sample buffer.
        CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
        if (sampleBuffer != NULL)
        {
            // If it exists, append this sample buffer to the output file.
            BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer);
            sampleBuffer = NULL;
            // Check for errors that may have occurred when appending the new sample buffer.
            if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
            {
                NSError *failureError = self.assetWriter.error;
                // Handle the error.
            }
        }
        else
        {
            // If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
            if (self.assetReader.status == AVAssetReaderStatusFailed)
            {
                NSError *failureError = self.assetReader.error;
                // Handle the error here.
            }
            else
            {
                // The asset reader output must have vended all of its samples. Mark the input as finished.
                [self.assetWriterInput markAsFinished];
                break;
            }
        }
    }
}];

4. Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset

This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:

  • Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data
  • Initialize an asset reader and configure two asset reader outputs, one for audio and one for video
  • Initialize an asset writer and configure two asset writer inputs, one for audio and one for video
  • Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations
  • Use a dispatch group to be notified of completion of the reencoding process
  • Allow a user to cancel the reencoding process once it has begun

Note: To focus on the most relevant code, this example omits several aspects of a complete application. To use AVFoundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

4.1. Handling the Initial Setup

Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];

// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];

// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation) and the other two serialization queues are used to serialize the reading and writing by each output/input combination with a potential cancellation.

The main queue coordinates starting and stopping of the reader and writer; the other two queues serialize each output/input pair's reading and writing, with potential cancellation.

Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.

With the queues in place, load the asset's tracks and start the reencoding process.

self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
    // Once the tracks have finished loading, dispatch the work to the main serialization queue.
    dispatch_async(self.mainSerializationQueue, ^{
        // Due to asynchronous nature, check to see if user has already cancelled.
        if (self.cancelled)
            return;
        BOOL success = YES;
        NSError *localError = nil;
        // Check for success of loading the assets tracks.
        success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
        if (success)
        {
            // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
            NSFileManager *fm = [NSFileManager defaultManager];
            NSString *localOutputPath = [self.outputURL path];
            if ([fm fileExistsAtPath:localOutputPath])
                success = [fm removeItemAtPath:localOutputPath error:&localError];
        }
        if (success)
            success = [self setupAssetReaderAndAssetWriter:&localError];
        if (success)
            success = [self startAssetReaderAndWriter:&localError];
        if (!success)
            [self readingAndWritingDidFinishSuccessfully:success withError:localError];
    });
}];

When the track loading process finishes, whether successfully or not, the rest of the work is dispatched to the main serialization queue to ensure that all of this work is serialized with a potential cancellation. Now all that’s left is to implement the cancellation process and the three custom methods at the end of the previous code listing.

4.2. Initializing the Asset Reader and Writer

The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.

- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
    // Create and initialize the asset reader.
    self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
    BOOL success = (self.assetReader != nil);
    if (success)
    {
        // If the asset reader was successfully initialized, do the same for the asset writer.
        self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
        success = (self.assetWriter != nil);
    }

    if (success)
    {
        // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
        AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
        NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
        if ([audioTracks count] > 0)
            assetAudioTrack = [audioTracks objectAtIndex:0];
        NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
        if ([videoTracks count] > 0)
            assetVideoTrack = [videoTracks objectAtIndex:0];

        if (assetAudioTrack)
        {
            // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
            NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
            self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
            [self.assetReader addOutput:self.assetReaderAudioOutput];
            // Then, set the compression settings to 128kbps AAC and create the asset writer input.
            AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
            };
            NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
            NSDictionary *compressionAudioSettings = @{
                AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
                AVSampleRateKey : [NSNumber numberWithInteger:44100],
                AVChannelLayoutKey : channelLayoutAsData,
                AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
            };
            self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
            [self.assetWriter addInput:self.assetWriterAudioInput];
        }

        if (assetVideoTrack)
        {
            // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
            NSDictionary *decompressionVideoSettings = @{
                (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
            };
            self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
            [self.assetReader addOutput:self.assetReaderVideoOutput];
            CMFormatDescriptionRef formatDescription = NULL;
            // Grab the video format descriptions from the video track and grab the first one if it exists.
            NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
            if ([videoFormatDescriptions count] > 0)
                formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
            CGSize trackDimensions = {
                .width = 0.0,
                .height = 0.0,
            };
            // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
            if (formatDescription)
                trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
            else
                trackDimensions = [assetVideoTrack naturalSize];
            NSDictionary *compressionSettings = nil;
            // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
            if (formatDescription)
            {
                NSDictionary *cleanAperture = nil;
                NSDictionary *pixelAspectRatio = nil;
                CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                if (cleanApertureFromCMFormatDescription)
                {
                    cleanAperture = @{
                        AVVideoCleanApertureWidthKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                        AVVideoCleanApertureHeightKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                        AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                        AVVideoCleanApertureVerticalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                    };
                }
                CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                if (pixelAspectRatioFromCMFormatDescription)
                {
                    pixelAspectRatio = @{
                        AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                        AVVideoPixelAspectRatioVerticalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                    };
                }
                // Add whichever settings we could grab from the format description to the compression settings dictionary.
                if (cleanAperture || pixelAspectRatio)
                {
                    NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                    if (cleanAperture)
                        [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                    if (pixelAspectRatio)
                        [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                    compressionSettings = mutableCompressionSettings;
                }
            }
            // Create the video settings dictionary for H.264 (mutable, so the compression settings can be added below).
            NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
                AVVideoCodecKey : AVVideoCodecH264,
                AVVideoWidthKey : [NSNumber numberWithDouble:trackDimensions.width],
                AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
            }];
            // Put the compression settings into the video settings dictionary if we were able to grab them.
            if (compressionSettings)
                [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
            // Create the asset writer input and add it to the asset writer.
            self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
            [self.assetWriter addInput:self.assetWriterVideoInput];
        }
    }
    return success;
}

4.3. Reencoding the Asset

Provided that the asset reader and writer are successfully initialized and configured, the startAssetReaderAndWriter: method described in Handling the Initial Setup is called. This method is where the actual reading and writing of the asset takes place.

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
    BOOL success = YES;
    // Attempt to start the asset reader.
    success = [self.assetReader startReading];
    if (!success)
        *outError = [self.assetReader error];
    if (success)
    {
        // If the reader started successfully, attempt to start the asset writer.
        success = [self.assetWriter startWriting];
        if (!success)
            *outError = [self.assetWriter error];
    }

    if (success)
    {
        // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
        self.dispatchGroup = dispatch_group_create();
        [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
        self.audioFinished = NO;
        self.videoFinished = NO;

        if (self.assetWriterAudioInput)
        {
            // If there is audio to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(self.dispatchGroup);
            // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
            [self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (self.audioFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next audio sample buffer, and append it to the output file.
                    CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
                    if (sampleBuffer != NULL)
                    {
                        BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                    BOOL oldFinished = self.audioFinished;
                    self.audioFinished = YES;
                    if (oldFinished == NO)
                    {
                        [self.assetWriterAudioInput markAsFinished];
                    }
                    dispatch_group_leave(self.dispatchGroup);
                }
            }];
        }

        if (self.assetWriterVideoInput)
        {
            // If we had video to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(self.dispatchGroup);
            // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
            [self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (self.videoFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next video sample buffer, and append it to the output file.
                    CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
                    if (sampleBuffer != NULL)
                    {
                        BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                    BOOL oldFinished = self.videoFinished;
                    self.videoFinished = YES;
                    if (oldFinished == NO)
                    {
                        [self.assetWriterVideoInput markAsFinished];
                    }
                    dispatch_group_leave(self.dispatchGroup);
                }
            }];
        }
        // Set up the notification that the dispatch group will send when the audio and video work have both finished.
        dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
            BOOL finalSuccess = YES;
            NSError *finalError = nil;
            // Check to see if the work has finished due to cancellation.
            if (self.cancelled)
            {
                // If so, cancel the reader and writer.
                [self.assetReader cancelReading];
                [self.assetWriter cancelWriting];
            }
            else
            {
                // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                if ([self.assetReader status] == AVAssetReaderStatusFailed)
                {
                    finalSuccess = NO;
                    finalError = [self.assetReader error];
                }
                // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                if (finalSuccess)
                {
                    finalSuccess = [self.assetWriter finishWriting];
                    if (!finalSuccess)
                        finalError = [self.assetWriter error];
                }
            }
            // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
            [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
        });
    }
    // Return success here to indicate whether the asset reader and writer were started successfully.
    return success;
}

During reencoding, the audio and video tracks are asynchronously handled on individual serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification when all of the work is done and the success of the reencoding process can be determined.

The audio and video tracks are processed asynchronously on their own queues, but within the same dispatch group, which makes it easy to get notified when reencoding succeeds.

4.4. Handling Completion

To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully: method is called—with parameters indicating whether or not the reencoding completed successfully. If the process didn’t finish successfully, the asset reader and writer are both canceled and any UI related tasks are dispatched to the main queue.

Handling completion: clean up and report whether reencoding succeeded.

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
    if (!success)
    {
        // If the reencoding process failed, we need to cancel the asset reader and writer.
        [self.assetReader cancelReading];
        [self.assetWriter cancelWriting];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Handle any UI tasks here related to failure.
        });
    }
    else
    {
        // Reencoding was successful, reset booleans.
        self.cancelled = NO;
        self.videoFinished = NO;
        self.audioFinished = NO;
        dispatch_async(dispatch_get_main_queue(), ^{
            // Handle any UI tasks here related to success.
        });
    }
}

4.5. Handling Cancellation

Using multiple serialization queues, you can allow the user of your app to cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue where the cancelled property is set to YES. You might associate the cancel method from the following code listing with a button on your UI.


- (void)cancel
{
    // Handle cancellation asynchronously, but serialize it with the main queue.
    dispatch_async(self.mainSerializationQueue, ^{
        // If we had audio data to reencode, we need to cancel the audio work.
        if (self.assetWriterAudioInput)
        {
            // Handle cancellation asynchronously again, but this time serialize it with the audio queue.
            dispatch_async(self.rwAudioSerializationQueue, ^{
                // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                BOOL oldFinished = self.audioFinished;
                self.audioFinished = YES;
                if (oldFinished == NO)
                {
                    [self.assetWriterAudioInput markAsFinished];
                }
                // Leave the dispatch group since the audio work is finished now.
                dispatch_group_leave(self.dispatchGroup);
            });
        }

        if (self.assetWriterVideoInput)
        {
            // Handle cancellation asynchronously again, but this time serialize it with the video queue.
            dispatch_async(self.rwVideoSerializationQueue, ^{
                // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                BOOL oldFinished = self.videoFinished;
                self.videoFinished = YES;
                if (oldFinished == NO)
                {
                    [self.assetWriterVideoInput markAsFinished];
                }
                // Leave the dispatch group, since the video work is finished now.
                dispatch_group_leave(self.dispatchGroup);
            });
        }
        // Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
        self.cancelled = YES;
    });
}

5. Asset Output Settings Assistant

The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high frame rate H264 movies that have a number of specific presets. Listing 5-1 shows an example that uses the output settings assistant.


Listing 5-1 AVOutputSettingsAssistant sample

AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];

if (audioFormat != NULL)
    [outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];

CMFormatDescriptionRef videoFormat = [self getVideoFormat];

if (videoFormat != NULL)
    [outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];

CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];

[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];

AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];

Demo

AVReaderWriter: Offline Audio / Video Processing:
https://developer.apple.com/library/mac/samplecode/ReaderWriter/Introduction/Intro.html

Writing Core Data in a recent project, I realized the bits and pieces I'd read before had mostly faded, and I ran into asynchronous-processing problems. After rereading some material, here are the key points.

1.

Core Data is an object graph management and persistence framework. Straightforward attributes, relationships and fetching all come wrapped; you can ignore the underlying database implementation and care only about your data and how to fetch it.

2.

Graphical editor: the xcdatamodel file

Managed object model:

  • Attributes support NSData (Binary Data) and, via Transformable, any type conforming to the NSCoding protocol
  • Relationships should define an inverse
  • Relationships can be to-one or to-many; a to-many can be unordered (NSSet) or ordered (NSOrderedSet). For details see the article linked below; a model sketch follows the link:

Core Data and Swift: Relationships and More Fetching:
http://code.tutsplus.com/tutorials/core-data-and-swift-relationships-and-more-fetching--cms-25070
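
A sketch of what the two relationship flavors look like on NSManagedObject subclasses (the Dog/Walk entities are borrowed from the "Dog Walk" model used later; my own illustration):

import CoreData

class Dog: NSManagedObject {
    @NSManaged var name: String?
    @NSManaged var walks: NSOrderedSet?   // ordered to-many: NSOrderedSet
}

class Walk: NSManagedObject {
    @NSManaged var date: NSDate?
    @NSManaged var dog: Dog?              // the to-one inverse
}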

3.

The Core Data stack involves four classes:

  • NSManagedObjectModel
  • NSPersistentStore
  • NSPersistentStoreCoordinator
  • NSManagedObjectContext

4.

NSManagedObjectContext:

  • An in-memory scratchpad for working with managed objects
  • Remember to call save()
  • Manages the lifecycle of managed objects, including creation and fetching
  • A managed object cannot exist independently of its context
  • Contexts are territorial: once a managed object is registered with a context, it stays bound to that context for its entire lifecycle
  • Multiple contexts are supported
  • A context is not thread-safe

5.

How to configure the Core Data stack:

Technical details like lazy and try/catch need no further explanation; multiple contexts and thread safety for asynchronous work come later.

import CoreData

class CoreDataStack {

    let modelName = "Dog Walk"

    lazy var context: NSManagedObjectContext = {
        var managedObjectContext = NSManagedObjectContext(
            concurrencyType: .MainQueueConcurrencyType)
        managedObjectContext.persistentStoreCoordinator = self.psc
        return managedObjectContext
    }()

    private lazy var psc: NSPersistentStoreCoordinator = {
        let coordinator = NSPersistentStoreCoordinator(
            managedObjectModel: self.managedObjectModel)
        let url = self.applicationDocumentsDirectory
            .URLByAppendingPathComponent(self.modelName)
        do {
            let options =
                [NSMigratePersistentStoresAutomaticallyOption : true]
            try coordinator.addPersistentStoreWithType(
                NSSQLiteStoreType, configuration: nil, URL: url,
                options: options)
        } catch {
            print("Error adding persistent store.")
        }
        return coordinator
    }()

    private lazy var managedObjectModel: NSManagedObjectModel = {
        let modelURL = NSBundle.mainBundle()
            .URLForResource(self.modelName, withExtension: "momd")!
        return NSManagedObjectModel(contentsOfURL: modelURL)!
    }()

    private lazy var applicationDocumentsDirectory: NSURL = {
        let urls = NSFileManager.defaultManager().URLsForDirectory(
            .DocumentDirectory, inDomains: .UserDomainMask)
        return urls[urls.count - 1]
    }()

    func saveContext() {
        if context.hasChanges {
            do {
                try context.save()
            } catch let error as NSError {
                print("Error: \(error.localizedDescription)")
                abort()
            }
        }
    }
}

6.

Fetch

  • NSManagedObjectResultType: the default; returns managed objects
  • NSCountResultType: returns a count
  • NSDictionaryResultType: returns computed values such as a sum; see the NSExpression documentation for details
  • NSManagedObjectIDResultType: returns object IDs

From a performance standpoint, consider the latter result types.
iOS 8 adds asynchronous fetching with NSAsynchronousFetchRequest, plus batch updates and batch deletes of properties (see the sketch below).
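
A minimal NSAsynchronousFetchRequest sketch (Swift 2 syntax to match the stack above; coreDataStack and the "JournalEntry" entity are assumptions):

let fetchRequest = NSFetchRequest(entityName: "JournalEntry")
let asyncFetch = NSAsynchronousFetchRequest(fetchRequest: fetchRequest) { result in
    // Called on the context's queue once the fetch completes.
    guard let entries = result.finalResult else { return }
    print("Fetched \(entries.count) entries")
}

do {
    try coreDataStack.context.executeRequest(asyncFetch)
} catch let error as NSError {
    print("Error: \(error.localizedDescription)")
}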

7.

A fetched results controller helps wire Core Data to a table view's data source; think of it simply as a purpose-built data source.

Remember to supply a cacheName (see the sketch below).
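
A typical setup looks like this (my own sketch; the entity and sort key are assumptions):

let fetchRequest = NSFetchRequest(entityName: "JournalEntry")
fetchRequest.sortDescriptors = [NSSortDescriptor(key: "date", ascending: false)]

// cacheName lets the controller reuse precomputed section info across launches
let fetchedResultsController = NSFetchedResultsController(
    fetchRequest: fetchRequest,
    managedObjectContext: coreDataStack.context,
    sectionNameKeyPath: nil,
    cacheName: "JournalEntries")

do {
    try fetchedResultsController.performFetch()
} catch let error as NSError {
    print("Error: \(error.localizedDescription)")
}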

8.

Use a context with PrivateQueueConcurrencyType for background work; MainQueueConcurrencyType (the default choice) is for anything touching the UI.

You can also use a child context: saving the child only pushes changes up to the in-memory parent; nothing reaches disk until the parent context saves (sketch below).
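
A child-context sketch (my own example): the child's save only moves changes up to its parent in memory; the parent must save for anything to hit the store.

let childContext = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
childContext.parentContext = coreDataStack.context

childContext.performBlock {
    // ... create or modify managed objects here ...
    do {
        try childContext.save()      // up to the parent, still in memory
    } catch let error as NSError {
        print("Error: \(error.localizedDescription)")
    }
    coreDataStack.context.performBlock {
        coreDataStack.saveContext()  // now it reaches the store
    }
}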

This leads to a good practice: with multiple contexts, always call performBlock to stay safe.

Below, a private context does the background work and then returns to the main thread:



// 1: A private context that shares the main context's coordinator.
let privateContext = NSManagedObjectContext(
    concurrencyType: .PrivateQueueConcurrencyType)
privateContext.persistentStoreCoordinator =
    coreDataStack.context.persistentStoreCoordinator

// 2: All work on this context goes through performBlock.
privateContext.performBlock { () -> Void in
    // 3: Fetch on the private context itself, never on the main context from here.
    let results: [AnyObject]
    do {
        results = try privateContext
            .executeFetchRequest(self.surfJournalFetchRequest())
    } catch {
        let nserror = error as NSError
        print("ERROR: \(nserror)")
        results = []
    }

    // 4: Create the export file.
    let exportFilePath = NSTemporaryDirectory() + "export.csv"
    let exportFileURL = NSURL(fileURLWithPath: exportFilePath)
    NSFileManager.defaultManager().createFileAtPath(
        exportFilePath, contents: NSData(), attributes: nil)

    // 5: Open a file handle for writing.
    let fileHandle: NSFileHandle?
    do {
        fileHandle = try NSFileHandle(forWritingToURL: exportFileURL)
    } catch {
        let nserror = error as NSError
        print("ERROR: \(nserror)")
        fileHandle = nil
    }

    if let fileHandle = fileHandle {
        // 6: Write each journal entry as a CSV row.
        for object in results {
            let journalEntry = object as! JournalEntry

            fileHandle.seekToEndOfFile()
            let csvData = journalEntry.csv().dataUsingEncoding(
                NSUTF8StringEncoding, allowLossyConversion: false)
            fileHandle.writeData(csvData!)
        }

        // 7: Close the file.
        fileHandle.closeFile()

        // 8: Hop back to the main queue for UI updates.
        dispatch_async(dispatch_get_main_queue(), { () -> Void in
            self.navigationItem.leftBarButtonItem =
                self.exportBarButtonItem()
            print("Export Path: \(exportFilePath)")
            self.showExportFinishedAlertView(exportFilePath)
        })
    } else {
        dispatch_async(dispatch_get_main_queue(), { () -> Void in
            self.navigationItem.leftBarButtonItem =
                self.exportBarButtonItem()
        })
    }
} // closing brace for performBlock


9. References
