
SwiftUI view recreation may cause bugs that are hard to debug

Today, I encountered an issue where VideoPlayer turned blank when the iPhone screen was rotated.

The VideoPlayer was in a modal view created with fullScreenCover. The sample code looked like this:

import SwiftUI
import AVKit

struct TopView: View {
    private let player = AVQueuePlayer(playerItem: nil)
    
    var body: some View {
        VideoPlayer(player: player)
            .onAppear {
                if let videoURL = Bundle.main.url(forResource: "sample", withExtension: "mov") {
                    let playerItem = AVPlayerItem(url: videoURL)
                    player.replaceCurrentItem(with: playerItem)
                    
                    Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
                        if player.status == .readyToPlay {
                            player.play()
                            
                            if player.timeControlStatus == .playing {
                                timer.invalidate()
                            }
                        }
                    }
                }
            }
    }
}
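
For completeness, the presenting side might look like the sketch below; ContentView and showPlayer are names I made up for illustration and are not from the original project.

import SwiftUI

// Hypothetical presenting view (names are illustrative only).
struct ContentView: View {
    @State private var showPlayer = false
    
    var body: some View {
        Button("Play Video") {
            showPlayer = true
        }
        .fullScreenCover(isPresented: $showPlayer) {
            TopView()
        }
    }
}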

Using a Publisher for UIDevice.orientationDidChangeNotification

At first, I thought I should recreate the player when the screen was rotated.

import SwiftUI
import AVKit

struct TopView: View {
    @State private var playerItem:AVPlayerItem?
    
    private let player = AVQueuePlayer(playerItem: nil)
    private let publisher = NotificationCenter.default.publisher(for: UIDevice.orientationDidChangeNotification)
    
    var body: some View {
        if playerItem != nil {
            VideoPlayer(player: player)
                .onReceive(publisher, perform: { _ in
                    self.player.pause()
                    self.playerItem = nil
                })
        } else {
            ProgressView()
                .onAppear(perform: setPlayer)
        }
    }
    
    private func setPlayer() {
        if let videoURL = Bundle.main.url(forResource: "sample", withExtension: "mov") {
            self.playerItem = AVPlayerItem(url: videoURL)
            player.replaceCurrentItem(with: playerItem)
            
            Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
                if player.status == .readyToPlay {
                    player.play()
                    
                    if player.timeControlStatus == .playing {
                        timer.invalidate()
                    }
                }
            }
        }
    }
}

However, the new code didn't work. I added more debug points and finally found that TopView was recreated when the screen was rotated. Since the view was recreated, the player in onReceive was a newly created instance, so it couldn't stop playback of the previously playing item.

The issue was that, unlike other structs and objects, which are automatically released when their container view is released, AVPlayer holds its own strong reference while playing. This behavior kept the SwiftUI view from being released properly.

Use a Binding from the parent view

The solution was easy. Since I wanted the player to stay constant, I had to set it up in the parent view and pass it down.

import SwiftUI
import AVKit

struct TopView: View {
    @Binding var player:AVQueuePlayer
    
    var body: some View {
        VideoPlayer(player: player)
            .onAppear {
                if let videoURL = Bundle.main.url(forResource: "sample", withExtension: "mov") {
                    let playerItem = AVPlayerItem(url: videoURL)
                    player.replaceCurrentItem(with: playerItem)
                    
                    Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
                        if player.status == .readyToPlay {
                            player.play()
                            
                            if player.timeControlStatus == .playing {
                                timer.invalidate()
                            }
                        }
                    }
                }
            }
    }
}
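
Correspondingly, the presenting view now owns the player and passes it down. This is only a sketch of that idea, reusing the hypothetical ContentView from above:

import SwiftUI
import AVKit

// Hypothetical presenting view. The player lives in @State here, so it survives
// the recreation of TopView when the device rotates.
struct ContentView: View {
    @State private var player = AVQueuePlayer(playerItem: nil)
    @State private var showPlayer = false
    
    var body: some View {
        Button("Play Video") {
            showPlayer = true
        }
        .fullScreenCover(isPresented: $showPlayer) {
            TopView(player: $player)
        }
    }
}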

Now everything worked fine.

Final Thoughts

Some objects hold strong references to themselves. Those objects may keep a view from being released and cause bugs. We should create such objects in a higher-level view that is not recreated and pass them down with a Binding. Then the bugs are fixed.

Remove the video part from a live photo

Someone may think the process is as easy as getting the video from a live photo, removing it, and saving the remaining parts back. That is wrong.

We can get the video from a live photo using PHAssetResource's class func assetResources(for livePhoto: PHLivePhoto) -> [PHAssetResource]. But the PHAssetResource it returns contains an empty assetLocalIdentifier, so you can't get its asset directly, and you can't remove it separately.
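
For illustration, this is roughly how the resources look; the sketch below assumes a livePhoto value of type PHLivePhoto is already at hand:

import Photos

// Sketch: inspect a live photo's resources (livePhoto is assumed to exist).
let resources = PHAssetResource.assetResources(for: livePhoto)

if let video = resources.first(where: { $0.type == .pairedVideo }) {
    // assetLocalIdentifier comes back empty, so we cannot fetch the underlying
    // PHAsset from it, nor delete the video resource on its own.
    print("video resource identifier: '\(video.assetLocalIdentifier)'")
}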

The correct way is to get the photo part, save it and remove the live photo.

Get the photo and save it

We could get the photo in three ways: two from PHImageManager and one from PHAssetResourceManager. However, only one of them is the right way.

requestImage(for:targetSize:contentMode:options:resultHandler:)

iOS, Mac Catalyst, tvOS
func requestImage(for asset: PHAsset, targetSize: CGSize, contentMode: PHImageContentMode, options: PHImageRequestOptions?, resultHandler: @escaping (UIImage?, [AnyHashable : Any]?) -> Void) -> PHImageRequestID
macOS
func requestImage(for asset: PHAsset, targetSize: CGSize, contentMode: PHImageContentMode, options: PHImageRequestOptions?, resultHandler: @escaping (NSImage?, [AnyHashable : Any]?) -> Void) -> PHImageRequestID

We should not use this method because it returns UIImage/NSImage, and according to Apple, those two classes lack metadata:

A UIImage object does not contain all metadata associated with the image file it was originally loaded from (for example, Exif tags such as geographic location, camera model, and exposure parameters). To ensure such metadata is saved in the Photos library, instead use the creationRequestForAssetFromImage(atFileURL:) method or the PHAssetCreationRequest class. To copy metadata from one file to another, see Image I/O.

(from Apple's documentation for creationRequestForAsset(from:))

requestData(for:options:dataReceivedHandler:completionHandler:)

We cannot use requestData(for:options:dataReceivedHandler:completionHandler:) of PHAssetResourceManager either. It does return Data instead of UIImage/NSImage, but the data it returns cannot be saved to the photo library correctly.

I think this is because the data it returns contains the same localIdentifier as the live photo, which is not allowed to be saved as a separate asset.

requestImageDataAndOrientation(for:options:resultHandler:)

This is the only method we should use to save the photo part of a live photo.
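
A minimal sketch of that approach; it assumes asset is the live photo's PHAsset and that photo library access has already been authorized:

import Photos

// Sketch: extract the still image data from the live photo's asset and
// save it back to the library as a new, plain photo.
func savePhotoPart(of asset: PHAsset, completion: @escaping (Bool, Error?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    
    PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
        guard let data else {
            completion(false, nil)
            return
        }
        
        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photo, data: data, options: nil)
        }, completionHandler: completion)
    }
}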

Remove the live photo

This is an easy job. Just use PHAssetChangeRequest.deleteAssets(_ assets: NSFastEnumeration).
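
For example, assuming asset is the live photo's PHAsset, something like this sketch should do:

import Photos

// Sketch: delete the original live photo (asset is assumed to be its PHAsset).
PHPhotoLibrary.shared().performChanges({
    PHAssetChangeRequest.deleteAssets([asset] as NSArray)
}) { success, error in
    print(success, error ?? "no error")
}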

Gain Insight into Serial and Concurrent Operations, Closure and async/await, Dispatch Semaphore and Dispatch Group

I came to understand these concepts more deeply through a recent project.

Serial and Concurrent Operations

Serial

func serial() {
    for i in 0..<5 {
        let seconds:UInt32 = (1...3).randomElement()!
        sleep(seconds)
        print("\tWait \(seconds)s.")
        print(i)
    }
}
serial()

	Wait 3s.
0
	Wait 2s.
1
	Wait 1s.
2
	Wait 2s.
3
	Wait 1s.
4

Concurrent

func concurrent() {
    let queue = DispatchQueue.global()
    
    for i in 0..<5 {
        queue.async {
            let seconds:UInt32 = (1...3).randomElement()!
            sleep(seconds)
            print("\tWait \(seconds)s.")
            print(i)
        }
    }
}
concurrent()

	Wait 2s.
2
	Wait 3s.
0
	Wait 3s.
	Wait 3s.
	Wait 3s.
4
3
1

We can see that with serial operations, the code runs one step after another, while with concurrent operations, the code runs in parallel.

Closure and async/await

Closure

func closure() {
    let url = URL(string: "https://zhaoxin.pro")!

    let task = URLSession.shared.dataTask(with: url) { data, urlResponse, error in
        if let error {
            print(error)
            return
        }
        
        if let httpResponse = urlResponse as? HTTPURLResponse,
           httpResponse.statusCode == 200,
           let data, let output = String(data: data, encoding: .utf8) {
            print(output)
        }
    }
    
    task.resume()
}
closure()

<!DOCTYPE html>
<!--[if IEMobile 7 ]><html class="no-js iem7"><![endif]-->
<!--[if lt IE 9]><html class="no-js lte-ie8"><![endif]-->
<!--[if (gt IE 8)|(gt IEMobile 7)|!(IEMobile)|!(IE)]><!--><html class="no-js"><!--<![endif]-->
<head>
  <meta charset="utf-8">
  <title>
  
  肇鑫的技术博客
  

  </title>
  <meta name="author" content="">
  <meta name="description" content="业精于勤,荒于嬉">
  ...

async/await

func async_await() async throws {
    let url = URL(string: "https://zhaoxin.pro")!
    let (data, urlResponse) = try await URLSession.shared.data(from: url)
    if let httpResponse = urlResponse as? HTTPURLResponse,
       httpResponse.statusCode == 200,
       let output = String(data: data, encoding: .utf8) {
        print(output)
    }
}
do {
    try await async_await()
} catch let error {
    print(error)
}

As you can see, most closure-based APIs can be converted to async/await, which makes them easier to understand. However, not all closures can be converted. For example, many APIs in Photos cannot be converted to async/await, because their closures are called more than once.
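
For single-shot closures, withCheckedThrowingContinuation is one way to do the conversion by hand. A continuation must be resumed exactly once, which is exactly why closures that are called repeatedly cannot be bridged this way. A sketch, wrapping the URLSession call from the closure example above:

import Foundation

// Sketch: convert the single-shot URLSession closure API to async/await by hand.
func fetchData(from url: URL) async throws -> Data {
    try await withCheckedThrowingContinuation { continuation in
        URLSession.shared.dataTask(with: url) { data, _, error in
            // The continuation may be resumed exactly once.
            if let error {
                continuation.resume(throwing: error)
            } else if let data {
                continuation.resume(returning: data)
            } else {
                continuation.resume(throwing: URLError(.badServerResponse))
            }
        }.resume()
    }
}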

Dispatch Semaphore and Dispatch Group

For concurrent operations, if we want to do something after all of them have finished, we can use a timer, a Dispatch Semaphore, or a Dispatch Group.

Timer

func concurrentWithTimer() {
    let queue = DispatchQueue.global()
    let total = 5
    var finished = 0
    
    let lock = NSRecursiveLock()
    
    for i in 0..<5 {
        queue.async {
            let seconds:UInt32 = (1...3).randomElement()!
            sleep(seconds)
            print("\tWait \(seconds)s.")
            print(i)

            lock.lock()
            finished += 1
            lock.unlock()
        }
    }
    
    Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true, block: { timer in
        // Guard the read with the same lock that protects the writes.
        lock.lock()
        let done = finished == total
        lock.unlock()
        
        if done {
            timer.invalidate()
            print("All Finished")
        }
    })
}
concurrentWithTimer()

	Wait 1s.
	Wait 1s.
0
2
	Wait 2s.
3
	Wait 3s.
	Wait 3s.
4
1
All Finished

Using a Timer is easy to understand, but it costs more because the timer keeps firing until the condition is met.

Dispatch Semaphore

func dispatch_semaphore() {
    let queue = DispatchQueue.global()
    let semaphore = DispatchSemaphore(value: 0)
    let total = 5
    var finished = 0
    
    let lock = NSRecursiveLock()
    
    for i in 0..<5 {
        queue.async {
            let seconds:UInt32 = (1...3).randomElement()!
            sleep(seconds)
            print("\tWait \(seconds)s.")
            print(i)

            lock.lock()
            
            finished += 1
            
            if finished == total {
                semaphore.signal()
            }
            
            lock.unlock()
        }
    }
    
    semaphore.wait()
    
    print("All Finished")
}

The semaphore approach runs the check only once, but you need to deal with extra local variables, and you even have to use a lock to avoid data races.

Dispatch Group

func dispatch_group() {
    let queue = DispatchQueue.global()
    let group = DispatchGroup()
    
    for i in 0..<5 {
        group.enter()
        
        queue.async {
            let seconds:UInt32 = (1...3).randomElement()!
            sleep(seconds)
            print("\tWait \(seconds)s.")
            print(i)

            group.leave()
        }
    }
    
    group.wait()
    
    print("All Finished")
}

Dispatch Group is the most elegant way to do the same job. There are no extra variables and no extra operations: just enter and leave, then wait. Everything works like a charm.
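
If blocking the current thread with wait() is not desirable, DispatchGroup also offers notify(queue:execute:), which runs a closure after all entered work has left. A sketch of the same example in that non-blocking form, using async(group:) so that enter and leave are paired automatically:

import Foundation

func dispatch_group_notify() {
    let queue = DispatchQueue.global()
    let group = DispatchGroup()
    
    for i in 0..<5 {
        // async(group:) pairs enter/leave automatically.
        queue.async(group: group) {
            let seconds: UInt32 = (1...3).randomElement()!
            sleep(seconds)
            print("\tWait \(seconds)s.")
            print(i)
        }
    }
    
    // Runs once, on the main queue, after all work items have finished,
    // without blocking the current thread.
    group.notify(queue: .main) {
        print("All Finished")
    }
}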