Tags: ios, audio, core-audio, audiobuffer, mikmidi

AudioUnitRender and ExtAudioFileWrite error -50 in Swift: Trying to convert MIDI to Audio File


I'm trying to convert a MIDI file to an Audio File (.m4a) in Swift.

Right now I'm using MIKMIDI as a tool to sequence and play back MIDI files, but it does not include the ability to save the playback to a file. MIKMIDI's creator outlines the process to do this here. In an attempt to capture and save the output to an audio file, I've followed this example to try to replace the MIKMIDI graph's RemoteIO node with a GeneralIO node in Swift. When I try to save the output to a file using AudioUnitRender and ExtAudioFileWrite, both return error -50 (kAudio_ParamError).

    var channels = 2
    var buffFrames = 512
    var bufferList = AudioBufferList.allocate(maximumBuffers: 1)

    for i in 0..<bufferList.count {

        var buffer = AudioBuffer()
        buffer.mNumberChannels = 2
        buffer.mDataByteSize = UInt32(buffFrames * MemoryLayout<AudioUnitSampleType>.size)
        buffer.mData = calloc(buffFrames, MemoryLayout<AudioUnitSampleType>.size)

        bufferList[i] = buffer

        result = AudioUnitRender(generalIOAudioUnit, &flags, &inTimeStamp, busNum, UInt32(buffFrames), bufferList.unsafeMutablePointer)
        inTimeStamp.mSampleTime += 1

        result = ExtAudioFileWrite(extAudioFile, UInt32(buffFrames), bufferList.unsafeMutablePointer)

    }

What is causing error -50, and how can I resolve it to render the MIDI (offline) to .m4a files?

UPDATE: I have resolved the ExtAudioFileWrite error -50 by changing mNumberChannels and channels to 1. Now I get a one-second audio file containing noise. AudioUnitRender still returns error -50.


Solution

  • There are a couple of problems with your code:

    1. Your AudioBufferList doesn't agree with the client format; try

      let bufferList = AudioBufferList.allocate(maximumBuffers: Int(clientFormat.mChannelsPerFrame)) 
      
    2. you're removing the wrong node from the AUGraph, and connecting the remaining node to itself, which results in an infinite loop when you call AudioUnitRender (a sketch combining both fixes follows this list).
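
    Putting both fixes together, a minimal sketch might look like the following. The names graph, remoteIONode, mixerNode, clientFormat and buffFrames are assumptions standing in for whatever your project actually uses, and the buffer setup assumes a deinterleaved Float32 client format, so treat this as a starting point rather than a drop-in replacement:

      // Swap the RemoteIO node for a GenericOutput node -- not the other way around.
      var desc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                           componentSubType: kAudioUnitSubType_GenericOutput,
                                           componentManufacturer: kAudioUnitManufacturer_Apple,
                                           componentFlags: 0,
                                           componentFlagsMask: 0)
      var generalIONode = AUNode()
      AUGraphAddNode(graph, &desc, &generalIONode)
      AUGraphRemoveNode(graph, remoteIONode)

      // Feed the upstream node (e.g. your mixer) into the new output node.
      // Connecting the output node to itself is what causes the infinite loop.
      AUGraphConnectNodeInput(graph, mixerNode, 0, generalIONode, 0)
      AUGraphUpdate(graph, nil)

      // Allocate one AudioBuffer per channel of the client format.
      let bufferList = AudioBufferList.allocate(maximumBuffers: Int(clientFormat.mChannelsPerFrame))
      for i in 0..<bufferList.count {
          bufferList[i].mNumberChannels = 1
          bufferList[i].mDataByteSize = UInt32(buffFrames * MemoryLayout<Float32>.size)
          bufferList[i].mData = calloc(buffFrames, MemoryLayout<Float32>.size)
      }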

    But the main problem is that you aren't implementing the solution the author suggested. You would like to call AudioUnitRender with sample-time timestamps, faster than realtime, but the author said no: you would have to manually convert sample time to host time and implement the better part of a MIDI player yourself if you want that (a rough sketch of the conversion follows).
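
    If you did want to attempt that, the sample-time-to-host-time arithmetic would look roughly like the sketch below. The helper name hostTimeStamp and the idea of capturing a baseHostTime up front are assumptions of mine, not anything MIKMIDI provides:

      import Darwin
      import AudioToolbox

      // Host clock ticks per second, derived from the Mach timebase.
      var timebase = mach_timebase_info_data_t()
      mach_timebase_info(&timebase)
      let ticksPerSecond = Double(timebase.denom) / Double(timebase.numer) * 1_000_000_000

      // Hypothetical helper: build a timestamp whose host time matches a sample position.
      func hostTimeStamp(forSample sampleTime: Float64,
                         sampleRate: Float64,
                         baseHostTime: UInt64) -> AudioTimeStamp {
          var ts = AudioTimeStamp()
          ts.mSampleTime = sampleTime
          ts.mHostTime = baseHostTime + UInt64(sampleTime / sampleRate * ticksPerSecond)
          ts.mFlags = [.sampleTimeValid, .hostTimeValid]
          return ts
      }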

    So you could do that (it sounds hard), or file a feature request, or maybe record to file in realtime as you listen to the music, by adding a render notification to the graph's remote IO audio unit with AudioUnitAddRenderNotify and writing the samples out during the kAudioUnitRenderAction_PostRender phase, as sketched below.
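
    That last option is the most tractable. A minimal sketch, assuming remoteIOUnit and extAudioFile are your already-configured remote IO unit and output file:

      import AudioToolbox

      // Called before and after every render cycle of the remote IO unit.
      let renderNotify: AURenderCallback = { inRefCon, ioActionFlags, _, _, inNumberFrames, ioData in
          // Write only in the post-render phase, once the buffers hold fresh audio.
          if ioActionFlags.pointee.contains(.unitRenderAction_PostRender), let ioData = ioData {
              let extAudioFile = ExtAudioFileRef(inRefCon)
              // The async variant is safe to call from the realtime render thread.
              ExtAudioFileWriteAsync(extAudioFile, inNumberFrames, ioData)
          }
          return noErr
      }

      // Prime the async writer once before rendering starts, as the docs recommend.
      ExtAudioFileWriteAsync(extAudioFile, 0, nil)
      AudioUnitAddRenderNotify(remoteIOUnit, renderNotify, UnsafeMutableRawPointer(extAudioFile))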