javascript, opengl-es, glsl, webgl

Reconstruct fragment position from depth


I want to reconstruct my xyz from my depth, so that I get the position of the actual fragment. What I already have:

This is how I calculate my values at the moment. hfar is the height of the far plane and wfar its width. vec2 tc is an NDC vector.

float LinearizeDepth (vec2 coord)
{
        // Convert the non-linear depth-buffer value into a linear depth
        // that reaches 1.0 at the far plane.
        float z = texture2D(depthTexture, coord*ssaoScale).x;
        float d = (2.0 * near) / (far + near - z * (far - near));
        return d;
}
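
(lineardepth further down is this function evaluated at the fragment's screen position, i.e. something like:)

    float lineardepth = LinearizeDepth(screenPos);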

vec3 getViewRay(vec2 tc)
{
        // x/y: offsets on the far plane, built from wfar/hfar as described
        // above (tc treated as NDC).
        float x = tc.x * 0.5 * wfar;
        float y = tc.y * 0.5 * hfar;
        float z = gl_FragCoord.z;

        vec3 ray = vec3(x, y, z);

        return ray;
}

    vec2 screenPos = vec2(gl_FragCoord.x / 1024.0, gl_FragCoord.y / 512.0);
    vec3 origin = getViewRay(screenPos) * lineardepth; 

but the origin doesn't seem right. Here is a jsfiddle:

http://jsfiddle.net/Peters_Stuff/s24TT/

I tried to read through http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/ but that confused me even more. I don't want a full solution or anything, just a push in the right direction :)

Any suggestions?


Edit:

I implemented the suggestion:

vec3 origin = vec3(vec4(screenPos, texture2D(depthTexture, screenPos).x, 1.0) * invPerspectiveMatrix).xyz;

so origin should be my fragment's view-space position. Does this look right? I don't know what the output should look like.


Edit 2:

Here is a fiddle: http://jsfiddle.net/Peters_Stuff/8ddkt/

I'm rendering a full-screen quad, generating the texture coordinates and vertex positions in the vertex shader, and passing them to the fragment shader. There I calculate my origin like this:

vec3 origin = vec3(vViewRay.xy * 0.5 + 0.5 , linearDepth);

vViewRay must be normalized to [0..1].
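
Roughly, this is what I mean by generating them in the vertex shader (the attribute/uniform names here are just placeholders, not the exact code from the fiddle):

    attribute vec2 aPosition;        // full-screen quad corners in [-1..1]

    uniform float uTanHalfFov;       // tan(fov / 2)
    uniform float uAspect;           // width / height

    varying vec2 vTexCoord;
    varying vec3 vViewRay;

    void main() {
        vTexCoord = aPosition * 0.5 + 0.5;                    // [0..1] texture coords
        vViewRay = vec3(aPosition.x * uTanHalfFov * uAspect,  // view-space direction
                        aPosition.y * uTanHalfFov,            // through this corner,
                        -1.0);                                // interpolated per fragment
        gl_Position = vec4(aPosition, 0.0, 1.0);
    }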


Edit 3:

I'm currently implementing SSAO, as you can see in the rest of the shader above. If this is right, then I don't understand why the current outcome is this: http://www.imgbox.de/users/public/images/ahmqLHYbvm.jpg Fiddle for the shader: http://jsfiddle.net/Peters_Stuff/8ddkt/

I create a random vector from my noise texture, use the Gram-Schmidt process to calculate a tangent, generate a bitangent, and build the transformation matrix.
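
In sketch form (normal, noiseTexture and noiseScale are just the names I use here):

    // Random rotation vector from the tiled noise texture, remapped to [-1..1].
    vec3 rvec = texture2D(noiseTexture, vTexCoord * noiseScale).xyz * 2.0 - 1.0;

    // Gram-Schmidt: remove the component of rvec that lies along the normal.
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);

    // Rotates the kernel samples from tangent space into view space.
    mat3 tbn = mat3(tangent, bitangent, normal);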

Then I go through the sample kernel and check whether each sample is occluded or not:

    float range_check = abs(origin.z - sampleDepth);
    if (range_check < radius && sampleDepth <= sample.z) {
        occlusion +=  1.0 ;
    }  
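
Put together, the loop around that check looks roughly like this (kernel, KERNEL_SIZE, projectionMatrix and radius are placeholder names):

    const int KERNEL_SIZE = 16;        // number of hemisphere samples (assumed)

    float occlusion = 0.0;
    for (int i = 0; i < KERNEL_SIZE; i++) {
        // Offset the fragment's view-space position by the oriented kernel sample.
        vec3 sample = origin + tbn * kernel[i] * radius;

        // Project the sample to find the texture coordinate it lands on.
        vec4 offset = projectionMatrix * vec4(sample, 1.0);
        vec2 sampleCoord = (offset.xy / offset.w) * 0.5 + 0.5;

        // Depth stored at that pixel, linearized the same way as origin.z.
        float sampleDepth = LinearizeDepth(sampleCoord);

        // Range check from above.
        float range_check = abs(origin.z - sampleDepth);
        if (range_check < radius && sampleDepth <= sample.z) {
            occlusion += 1.0;
        }
    }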

The point should only be occluded if:


Solution

  • (x,y,z) -> clip volume
    
    projMat * xyz = clip_coord;
    
    // - - - - - -
    
    clip_coord.x = gl_FragCoord.x / screenWidth;
    clip_coord.y = gl_FragCoord.y / screenHeight;
    clip_coord.z = texture2D(depthMap, clip_coord.xy).x;
    
    xyz = inverseProjMat * clip_coord;
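
    Written out in GLSL, with the [0..1] to [-1..1] remap and the perspective divide made explicit, the reconstruction sketched above would be something like this (uniform names follow the pseudocode and are only an example):

        uniform sampler2D depthMap;
        uniform mat4 inverseProjMat;     // inverse of the projection matrix
        uniform float screenWidth;
        uniform float screenHeight;

        vec3 reconstructViewPos(vec2 fragCoord)
        {
            // Texture coordinate of this fragment in [0..1].
            vec2 uv = fragCoord / vec2(screenWidth, screenHeight);

            // Stored depth for this fragment, also in [0..1].
            float depth = texture2D(depthMap, uv).x;

            // Remap to normalized device coordinates in [-1..1].
            vec4 clip = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0);

            // Unproject and undo the perspective divide.
            vec4 view = inverseProjMat * clip;
            return view.xyz / view.w;
        }

    Calling reconstructViewPos(gl_FragCoord.xy) then gives the fragment's view-space position.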
    

    EDIT:

    link

    It's a one-pass example, but it does exactly the same thing as the two-pass approach; because it's one-pass it doesn't have the reconstruction, just a varying. In the fragment shader, toggle the z component of the FragColor between 0.0 and k, where k is the depth component. With 0.0 it's green-red, but with k the colors change quite a bit, so the image you posted is probably not right (not what you wanted).
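
    For example (xyz being the reconstructed position, k the depth value; just a sketch):

        gl_FragColor = vec4(xyz.xy, 0.0, 1.0);    // z forced to 0.0: green-red image
        // gl_FragColor = vec4(xyz.xy, k, 1.0);   // z = k: the colors shift noticeably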

    Hope this helps.