Here is the setup:
I have a video file and an audio file with the same duration. The video file is played with [pix_film] and the audio file via [readsf~]; both are then distorted by several effects triggered by user interaction.
How can I keep the video and audio synchronised?
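To make it a bit more concrete, here is a stripped-down sketch of what I have (sound.wav and film.mov are placeholder names, and the effect chains are left out):

#N canvas 100 100 620 260 10;
#X msg 27 24 open sound.wav \, 1;
#X obj 27 64 readsf~;
#X obj 27 124 dac~;
#X obj 210 24 gemhead;
#X msg 320 24 open film.mov;
#X msg 320 54 auto 1;
#X obj 210 64 pix_film;
#X obj 210 94 pix_texture;
#X obj 210 124 rectangle 4 3;
#X msg 430 24 create \, 1;
#X obj 430 54 gemwin;
#X text 27 154 the audio effects sit between readsf~ and dac~ \, the video effects after pix_film;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 1 0 2 1;
#X connect 3 0 6 0;
#X connect 4 0 6 0;
#X connect 5 0 6 0;
#X connect 6 0 7 0;
#X connect 7 0 8 0;
#X connect 9 0 10 0;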
In case you want the user to change the playback speed (so not using [readsf~] but rather a sampler based on [vline~] or [phasor~]), you also need to adjust the playback speed of the video accordingly. A tutorial on how to do this, and on running the playback with more than one instance of Pd to avoid audio dropouts, can be found at https://github.com/mxa/AudioVideoPatches
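The underlying idea is to drive both media from one master clock instead of two independent players. Below is a rough sketch of that approach (not the patches from that repository): it assumes a 60-second file, 44.1 kHz audio loaded into a table, and a 25 fps film with 1500 frames, all placeholder values. One [phasor~] is the playback position, [tabread4~] reads the audio at that position, and a [snapshot~] of the same phasor, scaled to the frame count, goes to [pix_film]'s right (frame-number) inlet.

#N canvas 100 100 760 420 10;
#X obj 29 23 loadbang;
#X msg 29 49 read -resize sound.wav sample-table;
#X obj 29 77 soundfiler;
#X obj 29 107 table sample-table;
#X floatatom 262 23 5 0 0 0 - - -;
#X obj 262 49 / 60;
#X obj 262 77 phasor~;
#X obj 262 107 *~ 2646000;
#X obj 262 137 tabread4~ sample-table;
#X obj 262 167 dac~;
#X msg 448 23 1;
#X msg 478 23 0;
#X obj 448 49 metro 40;
#X obj 448 77 snapshot~;
#X obj 448 105 * 1500;
#X obj 448 133 int;
#X obj 29 237 gemhead;
#X msg 140 237 open film.mov;
#X obj 29 267 pix_film;
#X obj 29 297 pix_texture;
#X obj 29 327 rectangle 4 3;
#X msg 200 297 create \, 1;
#X obj 200 327 gemwin;
#X text 29 137 load the audio into a table (sound.wav is a placeholder);
#X text 262 197 speed 1 = normal: phasor~ frequency is speed / duration in seconds;
#X text 448 163 scale the same position to the frame count (1500 here) and send it to pix_film's frame inlet;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;
#X connect 6 0 7 0;
#X connect 6 0 13 0;
#X connect 7 0 8 0;
#X connect 8 0 9 0;
#X connect 8 0 9 1;
#X connect 10 0 12 0;
#X connect 11 0 12 0;
#X connect 12 0 13 0;
#X connect 13 0 14 0;
#X connect 14 0 15 0;
#X connect 15 0 18 1;
#X connect 16 0 18 0;
#X connect 17 0 18 0;
#X connect 18 0 19 0;
#X connect 19 0 20 0;
#X connect 21 0 22 0;

Create the Gem window, turn on DSP, start the [metro 40] and set the speed number to 1 for normal speed (0.5 for half speed, and so on). Since audio and video are read from the same phase, changing the speed keeps them locked together.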