Tags: swift, multithreading, memory-management, grand-central-dispatch, retain-count

How does retain count with synchronous dispatch work?


I'm trying to explain ownership of objects and how GCD does its work. These are the things I've learned:

import Foundation

class C {
    var name = "Adam"

    func foo () {
        print("inside func before sync", CFGetRetainCount(self)) // 3
        DispatchQueue.global().sync {
            print("inside func inside sync", CFGetRetainCount(self)) // 4
        }
        sleep(2)
        print("inside func after sync", CFGetRetainCount(self)) // 4 ?????? I thought this would go back to 3
    }
}

Usage:

var c: C? = C()
print("before func call", CFGetRetainCount(c)) // 2
c?.foo()
print("after func call", CFGetRetainCount(c)) // 2

Solution

  • A couple of thoughts:

    1. If you ever have questions about precisely where ARC is retaining and releasing behind the scenes, just add a breakpoint after “inside func after sync”, run it, and when it stops go to “Debug” » “Debug Workflow” » “Always Show Disassembly” to see the assembly and precisely what’s going on. I’d also suggest doing this with release/optimized builds.

      Looking at the assembly, the releases are at the end of your foo method.

    2. As you pointed out, if you change your DispatchQueue.global().sync call to be async, you see the behavior you’d expect.

      Also, unsurprisingly, if you perform functional decomposition, moving the GCD sync call into a separate function, you’ll again see the behavior you were expecting.
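      For example, a minimal sketch of that decomposition (`bar` is a hypothetical helper name, and CFGetRetainCount is a debugging aid, not something to rely on in production code):

```swift
import Dispatch
import Foundation

class C {
    var name = "Adam"

    func foo() {
        print("before bar", CFGetRetainCount(self))
        bar()
        // The retain needed for bar's closure is released when bar returns,
        // so in debug builds this matches the "before bar" count again.
        print("after bar", CFGetRetainCount(self))
    }

    // Hypothetical helper: the GCD sync call now lives in its own function,
    // so its retain/release pair is scoped to bar rather than to foo.
    func bar() {
        DispatchQueue.global().sync {
            print("inside sync", CFGetRetainCount(self))
        }
    }
}

C().foo()
```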

    3. You said:

      a function will increase the retain count of the object its calling against

      Just to clarify what’s going on, I’d refer you to WWDC 2018 What’s New in Swift, about 12:43 into the video, in which they discuss where the compiler inserts the retain and release calls, and how it changed in Swift 4.2.

      In Swift 4.1, it used the “Owned” calling convention where the caller would retain the object before calling the function, and the called function was responsible for performing the release before returning.

      In 4.2 (shown in the WWDC screen snapshot below), they implemented a “Guaranteed” calling convention, eliminating a lot of redundant retain and release calls:

      (WWDC 2018 slide: the Swift 4.2 “guaranteed” calling convention)

      This results, in optimized builds at least, in more efficient and more compact code. So, do a release build and look at the assembly, and you’ll see that in action.

    4. Now we come to the root of your question: why the GCD sync function behaves differently from other scenarios (i.e., why its release call is inserted in a different place than it is for other non-escaping closures).

      It seems that this is potentially related to optimizations unique to GCD sync. Specifically, when you dispatch synchronously to some background queue, rather than blocking the current thread and running the code on one of the worker threads of the designated queue, GCD is smart enough to determine that the current thread would otherwise sit idle, and it will just run the dispatched code on the current thread if it can. I can easily imagine that this GCD sync optimization might have introduced wrinkles in the logic about where the compiler inserts the release call.
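      The thread-reuse part of that optimization is easy to observe. A small sketch (Apple’s documentation describes sync as invoking the block “on the current thread when possible”, so treat this as typical behavior rather than a guarantee):

```swift
import Dispatch
import Foundation

let caller = Thread.current
var ranOnCallerThread = false
DispatchQueue.global().sync {
    // Rather than parking this thread and waking a worker, GCD usually
    // just runs the synchronously-dispatched block right here.
    ranOnCallerThread = (Thread.current === caller)
}
print("sync block ran on the calling thread:", ranOnCallerThread)
```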

    IMHO, the fact that the release is done at the end of the method, as opposed to at the end of the closure, is a somewhat academic matter. I’m assuming they had good reasons (or practical reasons, at least) to defer this release to the end of the function. What’s important is that when you return from foo, the retain count is what it should be.