audio video shared-libraries processing minim

How to create a translucent audio-reactive overlay with Processing?


I've researched this question far and wide, but I can't find any useful answers. Basically, I want to create a translucent (or semi-transparent) audio-reactive overlay which can be transposed onto a generic video file. The idea is to give the video the appearance of pulsating with the audio track.

I think I can achieve this effect with Processing and the minim library, but I don't know how to formulate the sketch. The output should be 1920x1080 and the pulsating overlay should produce a sense of vibrant luminosity (e.g. a light color with 30-50% brightness and perhaps 25-50% opacity).
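
The closest I can sketch is something like this (untested; assuming Minim's AudioPlayer and its RMS level, with "track.mp3" standing in for the video's audio track), but I don't know how to composite it over the video:

import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup(){
  size(1920,1080);
  minim = new Minim(this);
  // "track.mp3" is a placeholder for the video's audio track
  player = minim.loadFile("track.mp3");
  player.play();
  noStroke();
}

void draw(){
  background(0);
  // mix.level() returns the RMS amplitude of the current buffer
  float level = player.mix.level();
  // map loudness to roughly 25-50% opacity
  float alpha = map(constrain(level, 0, 1), 0, 1, 64, 128);
  fill(255, 250, 220, alpha); // light colour, pulsating with the audio
  rect(0, 0, width, height);
}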

I'm updating this question with the sketch provided by @george-profenza (modified to use video instead of camera input):

import processing.video.*;
import ddf.minim.*;

Movie movie;
PGraphics overlay;

Minim minim;
AudioInput in;

void setup(){
  size(320,240);

  movie = new Movie(this, "input.mp4");
  movie.play();

  // setup sound
  minim = new Minim(this);
  in = minim.getLineIn();

  // setup overlay
  overlay = createGraphics(width,height);
  // initial draw attributes
  overlay.beginDraw();
  overlay.strokeWeight(3);
  overlay.rectMode(CENTER);
  overlay.noFill();
  overlay.stroke(255,255,255,32);
  overlay.endDraw();
}

void draw(){

  //update overlay based on audio data
  overlay.beginDraw();
  overlay.background(0,0);
  for(int i = 0; i < in.bufferSize() - 1; i++)
  {
    overlay.line( i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50 );
    overlay.line( i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50 );
  }
  overlay.endDraw();
  //render video then overlay composite
  image(movie,0,0);
  image(overlay,0,0);
}
// update movie
void movieEvent(Movie m){
  m.read();
}

Presumably this sketch works, but unfortunately the underlying processing.video (GStreamer 1+) library seems to be malfunctioning on Ubuntu, and there doesn't appear to be a way to replace it with one of the community-provided forks, according to issue #90 on GitHub.

If anyone can suggest a way to fix this problem or has another solution, I'd be appreciative.


Solution

  • That's a broad question. I'll cover a few aspects:

    1. translucent (audio-reactive) overlay: look into PGraphics. It works like layers in Processing: you can draw into a PGraphics (with translucency, etc.), then render the layers in whatever order you want. See the commented example below.
    2. audio-reactive: you can use Minim to get loudness or FFT data, or use other software that can do more advanced audio analysis and export data for Processing to read.
    3. 1920x1080 output: in my personal experience at the time of this writing, 1080p video playback within Processing was OK but not perfectly smooth (I saw occasional stuttering, tested on a MacBook with 16GB RAM and on a PC also with 16GB RAM). Doing sound analysis and overlay graphics on top may degrade performance even further; the main issue is keeping the audio and the composited graphics in sync, that is, if you want to do this in real time.
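
    If loudness alone isn't expressive enough, Minim's FFT class gives per-band energy. A minimal sketch of driving overlay opacity from the low bands (the band count and scale factor here are guesses to be tuned by ear):

    import ddf.minim.*;
    import ddf.minim.analysis.*;

    Minim minim;
    AudioInput in;
    FFT fft;

    void setup(){
      size(320,240);
      minim = new Minim(this);
      in = minim.getLineIn();
      fft = new FFT(in.bufferSize(), in.sampleRate());
    }

    void draw(){
      background(0);
      // compute the spectrum of the current mix buffer
      fft.forward(in.mix);
      // average the first few bands as a rough "bass" measure
      float bass = 0;
      for(int i = 0; i < 4; i++) bass += fft.getBand(i);
      bass /= 4;
      // drive a full-screen tint from the bass energy
      noStroke();
      fill(255, constrain(bass * 8, 0, 128));
      rect(0, 0, width, height);
    }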

    If you simply want to output a video with beautiful generative audio-responsive graphics that doesn't need to run in real time, I recommend taking a more "offline" approach.
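
    The offline workflow can be as simple as calling saveFrame() at the end of draw() to write each composited frame to disk, then merging the frames with the original audio using ffmpeg (the frame rate and paths below are placeholders for your own):

    // at the end of draw(), after compositing video + overlay:
    saveFrame("frames/frame-####.tif");

    and afterwards, on the command line:

    ffmpeg -r 30 -i frames/frame-%04d.tif -i input.mp4 -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -shortest output.mp4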

    For reference, here is a very basic proof-of-concept sketch:

    Note the low-res video size.

    import processing.video.*;
    import ddf.minim.*;

    Capture cam;
    PGraphics overlay;

    Minim minim;
    AudioInput in;
    
    
    void setup(){
      size(320,240);
    
      // setup video (use a Movie instead of webcam Capture in your case)
      cam = new Capture(this,width,height);
      cam.start();
    
      // setup sound
      minim = new Minim(this);
      in = minim.getLineIn();
    
      // setup overlay
      overlay = createGraphics(width,height);
      // initial draw attributes
      overlay.beginDraw();
      overlay.strokeWeight(3);
      overlay.rectMode(CENTER);
      overlay.noFill();
      overlay.stroke(255,255,255,32);
      overlay.endDraw();
    }
    
    void draw(){
    
      //update overlay based on audio data
      overlay.beginDraw();
      overlay.background(0,0);
      for(int i = 0; i < in.bufferSize() - 1; i++)
      {
        overlay.line( i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50 );
        overlay.line( i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50 );
      }
      overlay.endDraw();
      //render video then overlay composite
      image(cam,0,0);
      image(overlay,0,0);
    }
    // update video (may be movieEvent(Movie m) for you)
    void captureEvent(Capture c){
      c.read();
    }