I wrote a time stretch algorithm that sounds much better than AVAudioUnitTimePitch.
I was making an iOS app around it using AVAudioEngine, thinking I could put my algorithm into an AUv3 extension and simply replace AVAudioUnitTimePitch.
However, reading the documentation, it seems this approach may not be possible: time stretching is not a real-time effect, which is why AVAudioUnitTimePitch is of type 'aufc' (kAudioUnitType_FormatConverter).
The documentation for creating custom Audio Unit extensions mentions only four types, 'aufc' is not one of them, and the Xcode template supports only those four.
The glimmer of hope is that, while experimenting with instantiating a blank extension, I can set componentDescription.componentType = kAudioUnitType_FormatConverter and the instantiation apparently succeeds, although this may prove to be a dead end.
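For reference, the experiment boils down to something like the sketch below. The subtype, manufacturer, and the fourCC helper are made up for illustration; a real extension would expose the same description via its Info.plist so the system can find it.

```swift
import AVFoundation
import AudioToolbox

// Hypothetical helper to build four-char codes; values below are made up.
func fourCC(_ s: String) -> UInt32 {
    s.utf8.reduce(0) { ($0 << 8) | UInt32($1) }
}

let desc = AudioComponentDescription(
    componentType: kAudioUnitType_FormatConverter,  // 'aufc'
    componentSubType: fourCC("tstr"),               // made-up subtype
    componentManufacturer: fourCC("Demo"),          // made-up manufacturer
    componentFlags: 0,
    componentFlagsMask: 0
)

// Instantiating against the 'aufc' description appears to succeed, even
// though the extension docs only describe four component types.
AVAudioUnit.instantiate(with: desc, options: []) { avAudioUnit, error in
    guard let unit = avAudioUnit else { print(error as Any); return }
    // The unit could then be attached to an AVAudioEngine graph
    // in place of AVAudioUnitTimePitch.
    print("Instantiated:", unit.auAudioUnit.componentDescription)
}
```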
Has anyone successfully made a v3 extension of type 'aufc'? Given that I have written substantial code around AVAudioEngine, are there any other recommended approaches to do what I want if an AUv3 extension is not the way to go?
After a couple of days of experimentation, I discovered that it is perfectly possible to make your own time-stretching AUv3 Audio Unit. An AU is responsible for pulling frames from the upstream source, specifying the number of frames to pull and supplying the timestamp.
The hardest thing to figure out was setting the correct AudioTimeStamp value for mSampleTime when pulling frames from upstream (via the pull-input block passed to the AUInternalRenderBlock). As the documentation states, the sample time in the timestamp you receive from the downstream side is meaningless when time stretching, because input and output positions advance at different rates.
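Roughly, the idea looks like the sketch below. It is only a sketch: bus setup, buffer allocation, and the actual DSP are omitted, and TimeStretcher / MyTimeStretchAudioUnit are made-up names standing in for your own code. The key point is keeping your own running input sample time and using it to build the AudioTimeStamp passed to the upstream pull.

```swift
import AVFoundation
import AudioToolbox

// Hypothetical stand-in for the actual time-stretch DSP.
protocol TimeStretcher {
    func inputFramesNeeded(forOutput frames: AUAudioFrameCount) -> AUAudioFrameCount
    var inputBufferList: UnsafeMutablePointer<AudioBufferList> { get }
    func render(into output: UnsafeMutablePointer<AudioBufferList>,
                frameCount: AUAudioFrameCount)
}

class MyTimeStretchAudioUnit: AUAudioUnit {
    var stretcher: TimeStretcher!

    // Our own position on the *upstream* timeline. The sample time the
    // downstream hands us advances at output rate, which tells us nothing
    // about the input side while stretching.
    private var inputSampleTime: Double = 0

    override var internalRenderBlock: AUInternalRenderBlock {
        // Capturing self here is only for brevity; real render code should
        // capture a small state object instead.
        return { [unowned self] actionFlags, outputTimestamp, frameCount,
                  outputBusNumber, outputData, realtimeEventListHead, pullInputBlock in

            guard let pullInput = pullInputBlock else { return kAudioUnitErr_NoConnection }

            // Decide how many upstream frames are needed to produce
            // `frameCount` output frames at the current stretch ratio.
            let framesToPull = self.stretcher.inputFramesNeeded(forOutput: frameCount)

            // Build the upstream timestamp from our own counter; only the
            // sample time needs to be valid.
            var inputTimestamp = AudioTimeStamp()
            inputTimestamp.mSampleTime = self.inputSampleTime
            inputTimestamp.mFlags = .sampleTimeValid

            var pullFlags = AudioUnitRenderActionFlags()
            let status = pullInput(&pullFlags, &inputTimestamp, framesToPull, 0,
                                   self.stretcher.inputBufferList)
            if status != noErr { return status }

            // Advance the upstream position by what was actually consumed.
            self.inputSampleTime += Double(framesToPull)

            // Write `frameCount` stretched frames into the output buffers.
            self.stretcher.render(into: outputData, frameCount: frameCount)
            return noErr
        }
    }
}
```

Keeping the counter inside the AU means the upstream always sees a monotonically increasing sample time, regardless of what the downstream passes in, which is exactly the behaviour described above.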
I don't think the type of extension used ('aufc' vs. 'aufx') would have any impact.