Tags: swift, multithreading, concurrency, actor, avaudioengine

Call a Swift actor from AVAudioEngine (synchronous) code


I'm using Apple's AVAudioEngine like this:

import AVFoundation

let audioEngine = AVAudioEngine()
let inputNode = audioEngine.inputNode

inputNode.installTap(onBus: 0, bufferSize: 1024,
                     format: inputNode.outputFormat(forBus: 0)) { buffer, when in
    let waveform = myGetWaveform(buffer)
    Task {
        await Something.shared.doSomethingWith(waveform)
    }
}

The closure passed to installTap is executed on some background audio thread.

I do this because Something is an actor, and it's hard for me to change it. The problem, of course, is that those Tasks can execute out of order. That would be bad: I need them to execute in order. How can I guarantee that they do? I suppose this question applies to any synchronous code that can't change its caller to be async. I can't seem to find any way to inject serialization into async actor code.
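
To illustrate the ordering problem away from audio entirely:

for i in 0..<5 {
    Task {
        // Each Task is scheduled independently; nothing guarantees
        // these print 0 through 4 in order.
        print(i)
    }
}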

The many versions I do find (using semaphores, etc.) all seem to break the 'threads must move forward' contract of async...

CLARIFICATION:

  1. I forgot the await, sorry.
  2. When you call installTap on the inputNode, you pass a callback closure. Once the inputNode is started, it calls that closure over and over, many times a second, with buffer containing audio waveform data sampled from the device microphone. This is the conundrum: with the implementation I've given, doSomethingWith will be called for each new audio buffer, but won't necessarily be called in order.
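
For context, myGetWaveform is just my helper that converts the tap's buffer into [Int16] samples. A minimal sketch, assuming the tap delivers non-interleaved Float32, might look like:

func myGetWaveform(_ buffer: AVAudioPCMBuffer) -> [Int16] {
    // Sketch only: read the first channel and rescale to Int16.
    guard let samples = buffer.floatChannelData?[0] else { return [] }
    return (0..<Int(buffer.frameLength)).map { i in
        Int16(max(-1, min(1, samples[i])) * Float(Int16.max))
    }
}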

Based on Rob's answer, I did this:

class AudioUploadQueue {

    // Create the stream and its continuation together (SE-0388, Swift 5.9+)
    // so the continuation exists before the first audio callback can yield.
    // With a lazy stream, yields made before the stream was first read
    // would be silently dropped.
    private let continuation: AsyncStream<[Int16]>.Continuation
    let stream: AsyncStream<[Int16]>

    init() {
        // Keep only the newest few buffers if the consumer falls behind;
        // delivery order is still preserved.
        (stream, continuation) = AsyncStream.makeStream(
            of: [Int16].self,
            bufferingPolicy: .bufferingNewest(3))
    }

    // Synchronous: safe to call from the audio thread.
    func handleWaveformData(_ waveform: [Int16]) {
        continuation.yield(waveform)
    }

    func stop() {
        continuation.finish()
    }

    // The consumer pulls waveforms one at a time, in order.
    func monitorForWaveforms() async {
        for await waveform in stream {
            await AudioUploader.shared.handleAudioWaveform(waveform)
        }
    }
}

I instantiate it when the microphone starts, then call stop() and nil it out when the microphone stops. TBH it's sort of ugly. I'm surprised Swift doesn't have a better built-in answer. There are still all sorts of non-async, thread-based parts of iOS, such as the AVAudioEngine stuff, that need to interact with the rest of the app's code...
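
Concretely, the wiring looks roughly like this (a sketch; buffer size and error handling to taste):

let uploadQueue = AudioUploadQueue()

// Start the consumer before any audio flows so nothing is dropped.
Task { await uploadQueue.monitorForWaveforms() }

let inputNode = audioEngine.inputNode
inputNode.installTap(onBus: 0, bufferSize: 1024,
                     format: inputNode.outputFormat(forBus: 0)) { buffer, _ in
    // Synchronous call; fine from the audio thread.
    uploadQueue.handleWaveformData(myGetWaveform(buffer))
}
try audioEngine.start()

// When the microphone stops:
// inputNode.removeTap(onBus: 0)
// uploadQueue.stop()   // finish() ends the for-await loop above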


Solution

  • This is possible with the experimental ClosureIsolation feature, which should come to Swift 6 soon.

    If you cannot wait for a version of Swift with this, then your best tool is an AsyncStream. You will yield messages to the stream, and the actor will consume them in a loop.

    Your actor needs to provide a nonisolated property containing the stream so that your synchronous code can access it; a minimal sketch of that shape follows below.

    If you want more details about the upcoming changes (not strictly needed for this question), there is a Mastodon thread with Holly Borla from the Swift team diving into some of that.
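
    For concreteness, here is a minimal sketch of that shape. The names (WaveformProcessor, send, run) are placeholders, and it exposes the continuation through a nonisolated method rather than a property, which amounts to the same thing. makeStream requires Swift 5.9+:

    actor WaveformProcessor {
        // Plain lets of Sendable type, created once in init so the
        // continuation exists before anything can yield to it.
        private let stream: AsyncStream<[Int16]>
        private let continuation: AsyncStream<[Int16]>.Continuation

        init() {
            (stream, continuation) = AsyncStream.makeStream(
                of: [Int16].self,
                bufferingPolicy: .bufferingNewest(3))
        }

        // nonisolated: callable synchronously from the audio thread.
        nonisolated func send(_ waveform: [Int16]) {
            continuation.yield(waveform)
        }

        // The actor drains the stream one element at a time, in order.
        func run() async {
            for await waveform in stream {
                process(waveform)
            }
        }

        private func process(_ waveform: [Int16]) {
            // actor-isolated work with each buffer goes here
        }
    }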