I am wondering about the implications of running code after resuming from a checked continuation, and I couldn't find any resources on that. Example:
func doSomething(value: Bool) {
    print(value)
}

func myAsyncMethod() async -> Bool {
    return await withCheckedContinuation { continuation in
        continuation.resume(returning: true)
        doSomething(value: true) // <---
    }
}
I understand that it's not the best way to do it, but I'm trying to understand the issues with it (if there are any). Does anyone have any ideas?
tl;dr
There is nothing technically invalid with your particular example, but this (anti)pattern of continuing to execute additional code after satisfying the await exposes one to certain risks, limitations, etc., outlined below.

The main concern is that this violates a core principle of Swift concurrency: when you await, the current execution context is suspended until the async work is done. The expectation is that when the await is satisfied and execution resumes in the caller’s function, the work that we had previously awaited is done; letting doSomething continue to run violates this assumption.
Consider:
func myAsyncMethod() async -> T {
    let result = await withCheckedContinuation { continuation in
        someLegacyFunction { (value: T) in
            continuation.resume(returning: value)
            doSomething(with: value)
        }
    }
    doSomethingElse(with: result)
    return result
}

func doSomething(with value: T) { … }
func doSomethingElse(with value: T) { … }
There are several concerns:
- You have no assurances about the order of execution of doSomething and doSomethingElse, much less whether they might even run in parallel.
- You have no way of knowing when doSomething is done. In fact, in extreme cases, myAsyncMethod could actually return before doSomething finishes.
- You have no way of cancelling doSomething.
- Swift concurrency’s “cooperative thread pool” makes assumptions about how many threads it can create (limited to the number of processors on the device, in order to avoid overcommitting the CPU), and by tying up a thread while doSomething continues to execute after the await is satisfied, Swift concurrency’s assumptions may not be valid.
For more information, see the WWDC 2021 video Swift concurrency: Behind the scenes. It doesn’t cover this issue specifically, but it offers insights regarding the Swift concurrency threading model, the contract it imposes upon developers, etc.
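By way of a rough, illustrative sketch of just how narrow that pool is (the exact sizing is an implementation detail, so treat this as an approximation): the pool is roughly as wide as the active processor count, so every pool thread tied up by post-resume synchronous work is one fewer thread available to the rest of your async work.

import Foundation

// Illustrative only: the cooperative pool is roughly this wide, so blocking
// even one of its threads with synchronous work after resume(returning:)
// noticeably reduces how much other async work can make progress.
let poolWidth = ProcessInfo.processInfo.activeProcessorCount
print("Cooperative pool width ≈ \(poolWidth)")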
In your example, you are not doing anything substantial in doSomething, but it would be easy for some future developer to add code in doSomething that could introduce problems.
The long and short of it is that this is an anti-pattern that unnecessarily exposes you to certain risks that are easily avoided by simply making resume(returning:) the last thing that the awaited withCheckedContinuation closure does. In your particular example, there are unlikely to be any egregious problems, but it is easy to construct examples that lead to very unusual behaviors.
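For completeness, a minimal sketch of how the original example could be restructured so that resume(returning:) is the last thing the continuation closure does (this assumes doSomething is a quick, synchronous call that can simply run after the await instead):

func myAsyncMethod() async -> Bool {
    let value = await withCheckedContinuation { (continuation: CheckedContinuation<Bool, Never>) in
        continuation.resume(returning: true)
    }
    doSomething(value: value) // now runs after the await, before myAsyncMethod returns
    return value
}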
For example, consider the following, where a wraps a legacy completion-handler-based method, b, in a continuation, but calls c after resuming; and a calls d after the await of the checked continuation:
import Foundation
import os

class Experiment {
    let poi = OSSignposter(subsystem: "Experiment", category: .pointsOfInterest)

    func a() async -> Int {
        let state = poi.beginInterval(#function, id: poi.makeSignpostID())

        let result = await withCheckedContinuation { continuation in
            b { value in
                continuation.resume(returning: value) // warning: should really do this *after* calling `c`
                self.c(with: value)
            }
        }
        defer { poi.endInterval(#function, state, "\(result)") }

        d(with: result)
        return result
    }

    func b(completion: @escaping (Int) -> Void) {
        let state = self.poi.beginInterval(#function, id: self.poi.makeSignpostID())
        DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
            completion(42)
            self.poi.endInterval(#function, state, "\(42)")
        }
    }

    func c(with value: Int) {
        poi.withIntervalSignpost(#function, id: poi.makeSignpostID(), "\(value)") {
            // We would generally never `Thread.sleep`; it is here for illustrative purposes only.
            // Usually we would use `Task.sleep`, which is non-blocking, but your example was using
            // synchronous methods, so I am giving a synchronous example.
            Thread.sleep(forTimeInterval: 2)
        }
    }

    func d(with value: Int) {
        poi.withIntervalSignpost(#function, id: poi.makeSignpostID(), "\(value)") {
            Thread.sleep(forTimeInterval: 1)
        }
    }
}
When I profile that with Instruments, a returns before b or c finish, and c and d can run in parallel, which is unlikely to be the intended behavior, introduces races, etc.
So, needless to say, a more idiomatic pattern would be to make the parallelism of c and d explicit (presuming that was the intent), and move the call to c outside of the completion-handler closure for b:
func a() async -> Int {
    let state = poi.beginInterval(#function, id: poi.makeSignpostID())

    let result = await withCheckedContinuation { continuation in
        b { value in
            continuation.resume(returning: value)
        }
    }
    defer { poi.endInterval(#function, state, "\(result)") }

    await withDiscardingTaskGroup { group in
        group.addTask { self.c(with: result) }
        group.addTask { self.d(with: result) }
    }

    return result
}
Profiling that revised version, a now satisfies the contract with the caller that the await of a will not return until everything is done. And we can now add cancellation support for c and d, too.
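To sketch what that cancellation support could look like (this assumes the synchronous c and d simply check for cancellation before doing their work, which is an assumption on my part), the task group inside a could become:

await withDiscardingTaskGroup { group in
    group.addTask {
        guard !Task.isCancelled else { return } // cooperative cancellation check
        self.c(with: result)
    }
    group.addTask {
        guard !Task.isCancelled else { return }
        self.d(with: result)
    }
}

Because these child tasks are structured, cancelling the task that is awaiting a automatically propagates cancellation to both of them.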
Probably needless to say, if c and d were really this slow, we would refactor them to be non-blocking async functions. But the above was merely for illustrative purposes.
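If we did refactor them, a non-blocking sketch of c and d might look like the following (using Task.sleep as a stand-in for real asynchronous work, which is an assumption on my part); the task group in a would then simply await self.c(with: result) and await self.d(with: result):

func c(with value: Int) async {
    // Non-blocking: suspends the task rather than tying up a thread.
    try? await Task.sleep(nanoseconds: 2_000_000_000)
}

func d(with value: Int) async {
    try? await Task.sleep(nanoseconds: 1_000_000_000)
}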