I'm facing an issue while migrating my library from the deprecated ScriptProcessorNode to AudioWorklet.
Current implementation with ScriptProcessor
It currently uses the AudioProcessingEvent's inputBuffer property, which is an AudioBuffer. I apply a lowpass filter to this inputBuffer using an OfflineAudioContext, then analyze the peaks of the bass frequencies to count and compute BPM candidates.
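Roughly, the per-buffer filtering step looks like this (a simplified sketch, not the exact library code; the buffer size and variable names are illustrative):
const scriptProcessorNode = audioContext.createScriptProcessor(4096, 1, 1);
scriptProcessorNode.onaudioprocess = (event) => {
  const inputBuffer = event.inputBuffer; // AudioBuffer for the current block
  // Render a lowpass-filtered copy of the buffer offline
  const offlineContext = new OfflineAudioContext(
    inputBuffer.numberOfChannels,
    inputBuffer.length,
    inputBuffer.sampleRate
  );
  const bufferSource = offlineContext.createBufferSource();
  bufferSource.buffer = inputBuffer;
  const lowpass = offlineContext.createBiquadFilter();
  lowpass.type = 'lowpass';
  bufferSource.connect(lowpass).connect(offlineContext.destination);
  bufferSource.start(0);
  offlineContext.startRendering().then((filteredBuffer) => {
    // find the peaks of the bass frequencies and collect BPM candidates
  });
};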
The issue is that the lowpass filtering can't be done within the AudioWorkletProcessor (OfflineAudioContext is not defined there).
How can I apply a lowpass filter to the samples provided to the process method of an AudioWorkletProcessor (the same way it's possible with the onaudioprocess event data)? Thanks.
[SOLUTION] AudioWorklet implementation
For the end user it will look like this:
import { createRealTimeBpmProcessor } from 'realtime-bpm-analyzer';
const realtimeAnalyzerNode = await createRealTimeBpmProcessor(audioContext);
// Create the source from the HTML audio element
const track = document.getElementById('track');
const source = audioContext.createMediaElementSource(track);
// Lowpass filter
const filter = audioContext.createBiquadFilter();
filter.type = 'lowpass';
// Connect stuff together
source.connect(filter).connect(realtimeAnalyzerNode);
source.connect(audioContext.destination);
realtimeAnalyzerNode.port.onmessage = (event) => {
  if (event.data.message === 'BPM') {
    console.log('BPM', event);
  }
  if (event.data.message === 'BPM_STABLE') {
    console.log('BPM_STABLE', event);
  }
};
You can find the full code in version 3 (currently in pre-release).
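For reference, the processor side is essentially an AudioWorkletProcessor that receives the already-filtered samples in process() and posts its results back through the MessagePort. A rough sketch (the processor name and internals are placeholders, not the library's actual code; the message names mirror the ones handled above):
// registered beforehand with audioContext.audioWorklet.addModule(...)
class RealTimeBpmProcessor extends AudioWorkletProcessor {
  process(inputs) {
    const input = inputs[0];
    if (input.length > 0) {
      const samples = input[0]; // Float32Array of filtered samples (128 per render quantum)
      // ...accumulate samples and run the peak/BPM detection here...
      // then report candidates to the main thread, e.g.:
      // this.port.postMessage({ message: 'BPM', result });
    }
    return true; // keep the processor alive
  }
}

registerProcessor('realtime-bpm-processor', RealTimeBpmProcessor);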
You could apply the lowpass filter to the signal before it reaches the AudioWorkletNode. Something like this should work.
// BiquadFilterNode's default type is already 'lowpass'
const biquadFilterNode = new BiquadFilterNode(audioContext);
const audioWorkletNode = new AudioWorkletNode(
  audioContext,
  'the-name-of-your-processor'
);

yourInput
  .connect(biquadFilterNode)
  .connect(audioWorkletNode);
As a result, your process() function inside the AudioWorkletProcessor gets called with the filtered signal.
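One thing the snippet above assumes: the processor module has to be registered on the context before the AudioWorkletNode can be constructed, typically something like this (the file path is a placeholder):
await audioContext.audioWorklet.addModule('your-processor-file.js');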
However, I think your current implementation doesn't really use a lowpass filter. I might be wrong, but it looks like startRendering() is never called, which means the OfflineAudioContext is not processing any data.
If that is true you may not need the lowpass filter at all for your algorithm to work.