I've researched this question far and wide, but I can't find any useful answers. Basically, I want to create a translucent (semi-transparent) audio-reactive overlay that can be superimposed on a generic video file. The idea is to give the video the appearance of pulsating with the audio track.
I think I can achieve this effect with Processing and the Minim library, but I don't know how to formulate the sketch. The output should be 1920x1080, and the pulsating overlay should produce a sense of vibrant luminosity (e.g. a light color with 30-50% brightness and perhaps 25-50% opacity).
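For concreteness, here is the kind of level-to-color mapping I have in mind, written as plain Java so it runs outside Processing. The exact ranges are my own assumption, and the `map()`/`clamp()` helpers just mimic Processing's `map()`/`constrain()` built-ins:

```java
// Hypothetical helper: maps an audio level in [0, 1] (e.g. Minim's in.mix.level())
// to the overlay parameters described above: 25-50% opacity and 30-50% brightness.
public class PulseMath {

  // Linear interpolation, same semantics as Processing's map().
  static float map(float v, float a, float b, float c, float d) {
    return c + (d - c) * ((v - a) / (b - a));
  }

  // Clamp v into [lo, hi], like Processing's constrain().
  static float clamp(float v, float lo, float hi) {
    return Math.max(lo, Math.min(hi, v));
  }

  // Opacity: 25% when silent up to 50% at full level, as a 0-255 alpha.
  public static float overlayAlpha(float level) {
    return map(clamp(level, 0f, 1f), 0f, 1f, 0.25f * 255f, 0.50f * 255f);
  }

  // Brightness: 30% up to 50%, as a 0-255 HSB brightness value.
  public static float overlayBrightness(float level) {
    return map(clamp(level, 0f, 1f), 0f, 1f, 0.30f * 255f, 0.50f * 255f);
  }

  public static void main(String[] args) {
    System.out.println(overlayAlpha(0f)); // 63.75 (25% of 255)
    System.out.println(overlayAlpha(1f)); // 127.5 (50% of 255)
  }
}
```

In a sketch, these values would feed `overlay.fill()`/`overlay.stroke()` each frame so the overlay pulses with the audio.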
I'm updating this question with the sketch provided by @george-profenza (modified to use video instead of camera input):
import processing.video.*;
import ddf.minim.*;

Movie movie;
PGraphics overlay;

Minim minim;
AudioInput in;

void setup() {
  size(320, 240);
  movie = new Movie(this, "input.mp4");
  movie.play();
  // setup sound
  minim = new Minim(this);
  in = minim.getLineIn();
  // setup overlay
  overlay = createGraphics(width, height);
  // initial draw attributes
  overlay.beginDraw();
  overlay.strokeWeight(3);
  overlay.rectMode(CENTER);
  overlay.noFill();
  overlay.stroke(255, 255, 255, 32);
  overlay.endDraw();
}

void draw() {
  // update overlay based on audio data
  overlay.beginDraw();
  overlay.background(0, 0);
  for (int i = 0; i < in.bufferSize() - 1; i++) {
    overlay.line(i, 50 + in.left.get(i) * 50, i + 1, 50 + in.left.get(i + 1) * 50);
    overlay.line(i, 150 + in.right.get(i) * 50, i + 1, 150 + in.right.get(i + 1) * 50);
  }
  overlay.endDraw();
  // render video, then composite the overlay on top
  image(movie, 0, 0);
  image(overlay, 0, 0);
}

// update movie frames
void movieEvent(Movie m) {
  m.read();
}
Presumably this sketch works, but unfortunately the underlying processing.video (GStreamer 1+) library seems to be malfunctioning on Ubuntu, and there doesn't appear to be a way to replace it with one of the community-provided forks (according to issue #90 on GitHub).
If anyone can suggest a way to fix this problem or has another solution, I'd appreciate it.
That's a broad question. I'll cover a few aspects:
If you simply want to output a video that has beautiful generative audio-responsive graphics but doesn't need to run in real time, I recommend taking a more "offline" approach: render your frames first, then merge them with the audio in a separate pass (using ffmpeg, etc.).
For reference, here is a very basic proof-of-concept sketch that demonstrates drawing audio waveforms into a translucent overlay layer and compositing it over live video. Note the low-res video size.
import processing.video.*;
import ddf.minim.*;

Capture cam;
PGraphics overlay;

Minim minim;
AudioInput in;

void setup() {
  size(320, 240);
  // setup video (may be a Movie instead of a webcam Capture in your case)
  cam = new Capture(this, width, height);
  cam.start();
  // setup sound
  minim = new Minim(this);
  in = minim.getLineIn();
  // setup overlay
  overlay = createGraphics(width, height);
  // initial draw attributes
  overlay.beginDraw();
  overlay.strokeWeight(3);
  overlay.rectMode(CENTER);
  overlay.noFill();
  overlay.stroke(255, 255, 255, 32);
  overlay.endDraw();
}

void draw() {
  // update overlay based on audio data
  overlay.beginDraw();
  overlay.background(0, 0);
  for (int i = 0; i < in.bufferSize() - 1; i++) {
    overlay.line(i, 50 + in.left.get(i) * 50, i + 1, 50 + in.left.get(i + 1) * 50);
    overlay.line(i, 150 + in.right.get(i) * 50, i + 1, 150 + in.right.get(i + 1) * 50);
  }
  overlay.endDraw();
  // render video, then composite the overlay on top
  image(cam, 0, 0);
  image(overlay, 0, 0);
}

// update video frames (may be movieEvent(Movie m) for you)
void captureEvent(Capture c) {
  c.read();
}