opengl, glsl, shader, pixel-shader, pixel-bender

OpenGL Pixel Shader: how to generate random matrix of 0s and 1s (on each pixel)?


So what I need is simple: each time we run our shader (meaning for each pixel) I need to calculate a random matrix of 1s and 0s with resolution == originalImageResolution. How can I do such a thing?

For now I have created one for Shadertoy. The random matrix resolution is set to 15 by 15 here, because the GPU makes Chrome crash quite often when I try something like 200 by 200, while what I really need is the full image resolution:

#ifdef GL_ES
precision highp float;
#endif

uniform vec2 resolution;
uniform float time;
uniform sampler2D tex0;

// Pseudorandom value in [0,1), re-seeded with time so it changes every frame.
float rand(vec2 co){
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * (43758.5453 + time));
}

// Walk a 15x15 "matrix": for each cell, pseudorandomly decide whether to mix
// that texel into the result.
vec3 getOne(){
    vec2 p = gl_FragCoord.xy / resolution.xy;
    vec3 one = vec3(0.0);   // accumulator must be initialised
    for(int i = 0; i < 15; i++){
        for(int j = 0; j < 15; j++){
            if(rand(p) <= 0.5)
                one = (one.xyz + texture2D(tex0, vec2(float(j), float(i))).xyz) / 2.0;
        }
    }
    return one;
}

void main(void)
{
    gl_FragColor = vec4(getOne(), 1.0);
}

And one for Adobe Pixel Bender:

<languageVersion: 1.0;> 

kernel random
<   namespace : "Random";
    vendor : "Kabumbus";
    version : 3;
    description : "not as random as needed, not as fast as needed"; >
{

    input image4 src;
    output float4 outputColor;

    // Pseudorandom value in [0,1), seeded with the second coordinate pair.
    float rand(float2 co, float2 co2){
        return fract(sin(dot(co.xy, float2(12.9898, 78.233))) * (43758.5453 + (co2.x + co2.y)));
    }

    // Walk a 200x200 "matrix" and pseudorandomly mix selected texels in.
    float4 getOne(){
        float4 one = float4(0.0, 0.0, 0.0, 0.0);   // accumulator must be initialised
        float2 r = outCoord();
        for(int i = 0; i < 200; i++){
            for(int j = 0; j < 200; j++){
                // fract() never reaches 1.0, so the original ">= 1.0" test never
                // fired; compare against 0.5 as in the GLSL version.
                if(rand(r, float2(float(i), float(j))) <= 0.5)
                    one = (one + sampleLinear(src, float2(float(j), float(i)))) / 2.0;
            }
        }
        return one;
    }

    void
    evaluatePixel()
    {
        float4 oc = getOne();
        outputColor = oc;
    }
}

So my real problem is: my shaders make my GPU driver crash. How can I use GLSL for the same purpose as now, but without the crashes and, if possible, faster?

Update: What I want to create is called a Single-Pixel Camera (google Compressive Imaging or Compressive Sensing); I want to create a GPU-based software implementation.

The idea is simple:

What I tried to implement in my shaders was to simulate that very process.

What is really stupid about trying to do this on the GPU:


Solution

  • Thanks for adding more detail to clarify your question. My comments are getting too long, so I'm turning them into an answer. Moving my comments in here to keep them together:

    Sorry to be slow, but I am trying to understand the problem and the goal. In your GLSL sample, I don't see a matrix being generated. I see a single vec3 being generated by summing a random selection (varying over time) of cells from a 15 x 15 texture (matrix). And that vec3 is recomputed for each pixel. Then the vec3 is used as the pixel color.

    So I'm not clear whether you really want to create a matrix, or just want to compute a value for every pixel. The latter is in some sense a 'matrix', but computing a simple random value for 200 x 200 pixels would not strain your graphics driver. Also you said you wanted to use the matrix. So I don't think that's what you mean.

    I'm trying to understand why you want a matrix - to preserve a consistent random basis for all the pixels? If so, you can either precompute a random texture, or use a consistent pseudorandom function like you have in rand() except not use time. You clearly know about that so I guess I still don't understand the goal. Why are you summing a random selection of cells from the texture, for each pixel?
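
    Just to make the "consistent pseudorandom function" option concrete, here is a minimal sketch of what I mean, reusing the same hash constants as your rand() but dropping time. The helper names (hash01, maskBit) and the 0.1731 offset are mine, not anything from your code:

    float hash01(vec2 seed) {
      // Same sin/dot hash as the question's rand(), but with no time term,
      // so a given seed always produces the same value.
      return fract(sin(dot(seed, vec2(12.9898, 78.233))) * 43758.5453);
    }

    // 1.0 or 0.0, stable across frames for a given pixel coordinate and cell
    // index. The 0.1731 factor is just an arbitrary constant to decorrelate cells.
    float maskBit(vec2 pixelUV, vec2 cellIndex) {
      return step(0.5, hash01(pixelUV + cellIndex * 0.1731));
    }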

    I believe the reason your shader is crashing is that your main() function is exceeding its time limit - either for a single pixel, or for the whole set of pixels. Calling rand() 40,000 times per pixel (in a 200 * 200 nested loop) could certainly explain that! If you had 200 x 200 pixels, and are calling sin() 40k times for each one, that's 1,600,000,000 calls per frame. Poor GPU!

    I'm hopeful that if we understand the goal better, we'll be able to recommend a more efficient way to get the effect you want.

    Update.

    (Deleted this part, since it was mistaken. Even though many cells in the source matrix may each contribute less than a visually detectable amount of color to the result, the total of the many cells can contribute a visually detectable amount of color.)

    New update based on updated question.

    OK, (thinking "out loud" here so you can check whether I'm understanding correctly...) Since you need each of the random NxM values only once, there is no actual requirement to store them in a matrix; the values can simply be computed on demand and then thrown away. That's why your example code above does not actually generate a matrix.

    This means we cannot get away from generating (NxM)^2 random values per frame, that is, NxM random values per pixel, and there are NxM pixels. So for N=M=200, that's 40,000 values per pixel times 40,000 pixels, i.e. 1.6 billion random values per frame.
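
    To make that concrete, here is a rough sketch of the on-demand approach in GLSL, using the maskBit() helper sketched above. matrixSize and measure() are my placeholder names, and I'm assuming a square N = M source for simplicity:

    uniform sampler2D tex0;
    const int matrixSize = 200;   // N = M, assumed square for simplicity

    vec3 measure(vec2 pixelUV) {
      vec3 sum = vec3(0.0);
      for (int i = 0; i < matrixSize; i++) {
        for (int j = 0; j < matrixSize; j++) {
          // Decide on the fly whether cell (j, i) contributes; nothing is stored.
          vec2 cell = vec2(float(j), float(i));
          vec2 cellUV = (cell + 0.5) / float(matrixSize);
          sum += maskBit(pixelUV, cell) * texture2D(tex0, cellUV).rgb;
        }
      }
      // Average rather than the running (a + b) / 2.0 mix, so every selected
      // cell contributes with equal weight.
      return sum / float(matrixSize * matrixSize);
    }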

    However, we can still optimize some things.

    For one thing, the random values themselves can be made much cheaper. Instead of calling sin() in rand(), you could use an integer-based pseudorandom generator, for example this LFSR-style hash:

    int LFSR_Rand_Gen(in int n)
    {
      // <<, ^ and & require GL_EXT_gpu_shader4.
      n = (n << 13) ^ n;
      return (n * (n*n*15731+789221) + 1376312589) & 0x7fffffff;
    }
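
    As a usage sketch (my own wiring, not something from your code): build an integer seed from the pixel coordinate and the cell index, hash it, and test one bit of the result instead of comparing floats. This also needs GL_EXT_gpu_shader4 for the integer operators:

    float lfsrMaskBit(ivec2 pixel, ivec2 cell, int width)
    {
      // The multiplier is arbitrary; integer wrap-around is fine for hashing.
      int seed = (pixel.y * width + pixel.x) * 65537 + cell.y * width + cell.x;
      int r = LFSR_Rand_Gen(seed);
      // Use the low bit of the hash as the 0/1 mask value.
      return ((r & 1) == 1) ? 1.0 : 0.0;
    }

    That should be much cheaper per sample than calling sin() and fract() 40,000 times for every pixel.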
    

    Whether these optimizations will be enough to let your GPU driver handle 200x200 pixels, each sampling a 200x200 matrix, per frame, I don't know. They should definitely let you increase your resolution substantially.

    Those are the ideas that occur to me off the top of my head. I am far from being a GPU expert though. It would be great if someone more qualified can chime in with suggestions.

    P.S. In your comment, you jokingly (?) mentioned the option of precomputing N*M NxM random matrices. Maybe that's not a bad idea?? 40,000 x 40,000 is a big texture (1.6 billion cells, around 200 MB even at one bit per cell), but if you store 32 bits of random data per cell, that comes down to 1250 x 40,000 cells. Too bad vanilla GLSL doesn't help you with bitwise operators to extract the data, but even if you don't have the GL_EXT_gpu_shader4 extension you can still fake it. (Maybe you would also need a special extension then for non-square textures?)
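
    If you do go the packed-texture route, faking the bit extraction without GL_EXT_gpu_shader4 can be done with floor()/mod() arithmetic. A sketch (randomBits and the 8-bits-per-channel packing are my assumptions, not something from your setup):

    uniform sampler2D randomBits;   // precomputed texture of packed random bits

    // Extract bit `bitIndex` (0..7) from one channel value in [0, 1],
    // which stores a byte of random data as value / 255.
    float extractBit(float channel, float bitIndex)
    {
      float byteVal = floor(channel * 255.0 + 0.5);     // back to 0..255
      float shifted = floor(byteVal / exp2(bitIndex));  // "right shift" by bitIndex
      return mod(shifted, 2.0);                         // the bit: 0.0 or 1.0
    }

    With RGBA8 texels that gives you 32 random bits per fetch, which is how the 40,000 columns shrink to 1250.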