I am creating a metronome as part of a larger app and I have a few very short wav files to use as the individual sounds. I would like to use AVAudioEngine because NSTimer has significant latency problems and Core Audio seems rather daunting to implement in Swift. I'm attempting the following, but I'm currently unable to implement the first 3 steps and I'm wondering if there is a better way.
Code outline:

1. Create an array of sound file URLs according to the metronome's current settings
2. Programmatically create silence of the appropriate length for the tempo and insert it into the array between the sounds
3. Read the files and silence into a single buffer
4. audioPlayer.scheduleBuffer(buffer, atTime: nil, options: .Loops, completionHandler: nil)
So far I have been able to play a looping buffer (step 4) of a single sound file, but I haven't been able to construct a buffer from an array of files or create silence programmatically, nor have I found any answers on StackOverflow that address this. So I'm guessing that this isn't the best approach.
My question is: Is it possible to schedule a sequence of sounds with low latency using AVAudioEngine and then loop that sequence? If not, which framework/approach is best suited for scheduling sounds when coding in Swift?
I was able to make a buffer containing the sound from the file followed by silence of the required length. Hope this helps:
// audioFile here – an instance of AVAudioFile initialized with one of the wav files
func tickBuffer(forBpm bpm: Int) -> AVAudioPCMBuffer {
    audioFile.framePosition = 0 // rewind to the start of the file; required if you read from the same AVAudioFile more than once
    // one tick's length in frames for the given bpm (sound length + silence length),
    // e.g. 44100 * 60 / 120 = 22050 frames at 120 bpm and a 44.1 kHz sample rate
    let periodLength = AVAudioFrameCount(audioFile.processingFormat.sampleRate * 60 / Double(bpm))
    let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: periodLength)
    try! audioFile.readIntoBuffer(buffer) // sorry for the forced try
    buffer.frameLength = periodLength // key to success: extending frameLength pads the sound with silence
    return buffer
}
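For context, the engine wiring this snippet assumes might look roughly like the following. This is only a sketch: engine and setupEngine are my names (only player is mentioned in the original), and connecting with the file's processingFormat is an assumption made to avoid an implicit format conversion.

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

func setupEngine() {
    engine.attachNode(player) // a node must be attached before it can be connected
    engine.connect(player, to: engine.mainMixerNode, format: audioFile.processingFormat)
    try! engine.start() // the engine must be running before player.play()
}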
// player – an instance of AVAudioPlayerNode attached to your AVAudioEngine, as sketched above
func startLoop() {
    player.stop()
    let buffer = tickBuffer(forBpm: bpm)
    player.scheduleBuffer(buffer, atTime: nil, options: .Loops, completionHandler: nil)
    player.play()
}
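To get from a single looping tick to the sequence of different sounds the question describes (e.g. an accented sound on beat one), the same trick extends naturally: build one padded tick buffer per beat, copy them back to back into a single bar-length buffer, and loop that instead. This is a sketch rather than part of the original answer; barBuffer is a hypothetical helper, and it assumes all tick buffers share the deinterleaved float format that AVAudioFile's processingFormat gives you.

func barBuffer(ticks: [AVAudioPCMBuffer]) -> AVAudioPCMBuffer {
    let format = ticks[0].format
    let barLength = ticks.reduce(AVAudioFrameCount(0)) { $0 + $1.frameLength }
    let buffer = AVAudioPCMBuffer(PCMFormat: format, frameCapacity: barLength)
    buffer.frameLength = barLength
    var offset = 0
    for tick in ticks {
        for channel in 0..<Int(format.channelCount) {
            // append this tick's samples (sound + its padding silence) at the current write offset
            memcpy(buffer.floatChannelData[channel] + offset,
                   tick.floatChannelData[channel],
                   Int(tick.frameLength) * sizeof(Float))
        }
        offset += Int(tick.frameLength)
    }
    return buffer
}

Scheduling it is the same as before: player.scheduleBuffer(barBuffer([accentTick, tick, tick, tick]), atTime: nil, options: .Loops, completionHandler: nil) would loop a four-beat bar, where accentTick and tick are buffers built by tickBuffer(forBpm:) from different files.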