The documentation for AVAudioEngine says:

"You can connect, disconnect, and remove audio nodes during runtime with minor limitations. Removing an audio node that has differing channel counts, or that's a mixer, can break the graph. Reconnect audio nodes only when they're upstream of a mixer."

Is this not the case for AVAudioUnitSampler? I'm trying to create a new sampler and load a sound bank instrument while the engine is running, then attach the sampler to the engine and connect it to a mixer that sits upstream of the engine's main mixer. In that case there's no sound, but if the engine is stopped while the samplers are connected, everything works fine. Does this fall under one of the engine's "minor limitations"?
I recently came across a scenario like this, and Apple's documentation is perhaps not as clear as it could be.
Ultimately, the "hack" here is to connect the output of your sampler to the input of a new AVAudioMixerNode while the engine is running, and then run that mixer's output to the input you originally had the sampler connected to.
I found this out, in a roundabout way, from this GitHub thread: https://github.com/AudioKit/AudioKit/issues/2527#issuecomment-888801850.
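For concreteness, here is a rough sketch of what that routing looks like. This is my reconstruction of the workaround, not code from the thread; `soundBankURL` is a placeholder for your own sound bank file, and I'm assuming the engine was already started elsewhere:

```swift
import AVFoundation
import AudioToolbox  // for the kAUSampler_* bank constants

func addSampler(to engine: AVAudioEngine, soundBankURL: URL) throws {
    // Create the sampler and load the instrument while the engine is running.
    let sampler = AVAudioUnitSampler()
    try sampler.loadSoundBankInstrument(
        at: soundBankURL,
        program: 0,
        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
        bankLSB: UInt8(kAUSampler_DefaultBankLSB))

    // Instead of connecting the sampler straight to the existing mixer,
    // route it through a fresh AVAudioMixerNode first.
    let subMixer = AVAudioMixerNode()
    engine.attach(sampler)
    engine.attach(subMixer)

    let format = sampler.outputFormat(forBus: 0)
    engine.connect(sampler, to: subMixer, format: format)
    // Then send the new mixer to whatever input the sampler was
    // originally going to (here, the engine's main mixer).
    engine.connect(subMixer, to: engine.mainMixerNode, format: format)
}
```

The extra mixer node is what keeps the running graph happy: the connection you make on the fly terminates at a mixer, which matches the documentation's advice to reconnect nodes only upstream of a mixer.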