So I'm trying to use the Web Audio API to decode & play MP3 file chunks streamed to the browser using Node.js & Socket.IO.
Is my only option, in this context, to create a new AudioBufferSourceNode for each audio data chunk received, or is it possible to create a single AudioBufferSourceNode for all chunks and simply append the new audio data to the end of the source node's buffer attribute?
Currently, this is how I'm receiving my MP3 chunks, decoding them, and scheduling them for playback. I've already verified that each chunk being received is a valid MP3 chunk and is being successfully decoded by the Web Audio API.
var audioContext = new AudioContext();
var startTime = 0;

socket.on('chunk_received', function(chunk) {
  // toArrayBuffer is my helper that converts the incoming chunk payload to an ArrayBuffer
  audioContext.decodeAudioData(toArrayBuffer(chunk.audio), function(buffer) {
    var source = audioContext.createBufferSource();
    source.buffer = buffer;
    source.connect(audioContext.destination);
    source.start(startTime);
    startTime += buffer.duration;
  });
});
Any advice or insight into how best to 'update' Web Audio API playback with new audio data would be greatly appreciated.
No, you can't reuse an AudioBufferSourceNode, and you can't push onto an AudioBuffer. Their lengths are immutable.
But you're on the right track: creating a new AudioBufferSourceNode per decoded chunk is the way to go. This article (http://www.html5rocks.com/en/tutorials/audio/scheduling/) has some good information about scheduling playback with the Web Audio API.
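To illustrate, here's a minimal sketch of one way to keep the scheduling gapless. It reuses your decode-per-chunk approach (toArrayBuffer and the chunk.audio payload shape are taken from your snippet), but anchors the running start time to audioContext.currentTime plus a small safety margin I've picked arbitrarily, so a chunk that arrives late doesn't get scheduled in the past:

var audioContext = new AudioContext();
var nextStartTime = 0;
var SCHEDULE_AHEAD = 0.1; // assumed 100 ms safety margin

socket.on('chunk_received', function(chunk) {
  audioContext.decodeAudioData(toArrayBuffer(chunk.audio), function(buffer) {
    // A fresh AudioBufferSourceNode per chunk -- they are single-use.
    var source = audioContext.createBufferSource();
    source.buffer = buffer;
    source.connect(audioContext.destination);

    // If we've fallen behind (e.g. a network stall), jump to "now" plus a margin.
    if (nextStartTime < audioContext.currentTime) {
      nextStartTime = audioContext.currentTime + SCHEDULE_AHEAD;
    }
    source.start(nextStartTime);
    nextStartTime += buffer.duration;
  });
});

One thing to keep in mind: decodeAudioData is asynchronous, so if chunks can finish decoding out of order you'd want to queue the decoded buffers and schedule them in sequence rather than scheduling straight from the callback.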