I am using AudioKit to build a fairly dynamic pipeline. I will need different filters, taps, and nodes depending on the state of the app (e.g. recording, playing, configuring, analysing, etc.).
What is the officially suggested approach here?

1. Have a single Conductor (wrapping the AudioEngine and managing the related Player, Recorder, Mixer, taps, etc.) with a kind of state machine reflecting the state of the UI, and on each change: stop the engine, reconfigure the settings and pipeline, then restart the engine and nodes (see the sketch after this list).
2. Have multiple conductors (similar to what I see in the Cookbook app) and somehow deallocate (or is stopping enough?) any active conductor when another takes priority and starts. Although simpler, this may require code duplication (for example, device handling and general AV settings), and it also appears to be discouraged in some other SO comments. In the Cookbook app the various "recipes" are completely unrelated and quite simple, so it's easy to completely deallocate a Conductor and start a new one for each recipe. In my app, a Conductor may need to keep its state while another is active.
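For clarity, here is roughly what I imagine option 1 would look like. This is only a sketch; everything apart from the AudioKit types (`AudioEngine`, `AudioPlayer`, `Mixer`, `NodeRecorder`) is a hypothetical name of mine:

```swift
import AudioKit
import AVFoundation
import Combine

final class Conductor: ObservableObject {
    enum State { case idle, configuring, playing, recording, analysing }

    let engine = AudioEngine()
    let player = AudioPlayer()
    let mixer = Mixer()
    private var recorder: NodeRecorder?

    @Published private(set) var state: State = .idle

    init() {
        mixer.addInput(player)
        engine.output = mixer // the single render chain for the whole app
    }

    func transition(to newState: State) {
        engine.stop() // tear down before rewiring

        // Reconfigure the pipeline for the new state.
        switch newState {
        case .recording:
            // Record whatever reaches the mixer; the actual input wiring
            // (microphone, taps, filters) would go here.
            recorder = try? NodeRecorder(node: mixer)
        default:
            recorder = nil
        }

        do {
            try engine.start()
            switch newState {
            case .recording: try recorder?.record()
            case .playing: player.play()
            default: break
            }
        } catch {
            print("Engine restart failed: \(error)")
        }
        state = newState
    }
}
```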
I figure the same question may also be valid for AVAudioEngine, and various answers here seem to suggest option 1, which makes me even more unsure when I look at the AudioKit Cookbook.
Thank you
The Cookbook might not be the ideal example of a full audio app; it is merely a collection of mini apps.
Typically an app has only one AudioEngine, since that is what renders the audio output. Therefore you would typically also have only one conductor.
Synth One might help you better understand the architecture: https://github.com/AudioKit/AudioKitSynthOne. It is already five years old and does not use the latest release, but the architecture has not changed much since then.
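To illustrate how per-feature state can survive under a single engine (a rough sketch, not a prescribed AudioKit pattern; everything except the AudioKit types is an assumption): each feature can be a plain object owning its own nodes, and the one conductor just swaps which feature feeds the shared output mixer. A detached feature keeps all of its state; it simply stops being rendered.

```swift
import AudioKit

// Each feature owns its own nodes and state; hypothetical protocol.
protocol Feature: AnyObject {
    var output: Node { get } // node to patch into the main mixer
}

final class Conductor {
    static let shared = Conductor()

    let engine = AudioEngine()
    private let mainMixer = Mixer()
    private var active: Feature?

    private init() {
        engine.output = mainMixer
    }

    func activate(_ feature: Feature) {
        engine.stop()
        if let current = active {
            // Detach the old feature from the render chain; the caller
            // still retains it, so it keeps its state for later.
            mainMixer.removeInput(current.output)
        }
        mainMixer.addInput(feature.output)
        active = feature
        do {
            try engine.start()
        } catch {
            print("Engine start failed: \(error)")
        }
    }
}
```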