Tags: c++, opengl, nvidia, ati

Rendering primitives through shader on ATI doesn't show


I am refactoring a physics simulator that previously used OpenGL 1.1. Now I am setting it up to use VBOs and GLSL shaders. Currently the particles in the simulation are all drawn from a VBO, whereas the physical boundaries are drawn with OpenGL immediate mode. (I know immediate mode isn't an end solution, but upgrading it is a task for another time.)

The Problem: After testing I have discovered that on ATI cards the boundaries are not being drawn. I set up a simple test project to figure out what is going on, and I see the same results as in my main project:

I can draw the VBO object (the Buddha, in this case), and when I disable the shaders I can draw in immediate mode. But when I try to use immediate mode with a shader bound to put the triangles in '3D' space, it won't render on ATI only.

ATI rendering http://s23.postimg.org/xcgjxl2h7/ATI.png

Nvidia rendering http://s23.postimg.org/qaimbdyvf/NVIDIA.png

So the white triangle is when I use

glUseProgram(0);
glBegin(GL_TRIANGLES);
    glVertex3f(-0.9, 1, -1.0);
    glVertex3f(-0.9, -1, -1.0);
    glVertex3f(-0.2, 1, -1.0);
glEnd();

The red triangle is

glUseProgram(shaderProgramPrimID);
glBegin(GL_TRIANGLES);
    glVertex3f(0.9, 1, -1.0);
    glVertex3f(0.9, -1, -1.0);
    glVertex3f(0.2, 1, -1.0);
glEnd();

And the Buddha is drawn with a glDrawElements call.
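For context, the Buddha draw presumably looks something like the sketch below. The names (`buddhaVAO`, `indexCount`, `shaderProgramID`) are illustrative, not taken from the original project:

```cpp
// Roughly how the indexed VBO draw would look: a VAO captures the
// VBO/attribute bindings, and glDrawElements pulls indexed triangles
// from the bound element array buffer.
glUseProgram(shaderProgramID);
glBindVertexArray(buddhaVAO);       // VBO + attribute pointer state
glDrawElements(GL_TRIANGLES,        // primitive type
               indexCount,          // number of indices to draw
               GL_UNSIGNED_INT,     // index type
               0);                  // byte offset into the element buffer
glBindVertexArray(0);
```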

Relevant shaders: VertexShaderPrim

#version 130

in vec4 s_vPosition;
in vec4 s_vColor;
out vec4 color;

void main () {
    color = s_vColor;
    gl_Position = s_vPosition;
}

FragmentShaderPrim

#version 130

in vec4 color;
out vec4 fColor;

void main () {
    fColor = color;
}

VertexShader

#version 130

in vec4 s_vPosition;    // vertex position (model space)
in vec4 s_vNormal;      // vertex normal

uniform mat4 mM;        // model matrix
uniform mat4 mV;        // view matrix
uniform mat4 mP;        // projection matrix
uniform mat4 mRotations;// rotation-only matrix for transforming normals

uniform vec4 vLight;    // light position

out vec3 fN;            // normal, interpolated to the fragment shader
out vec3 fL;            // light vector

out vec3 fE;            // eye-space position

void main () {
    fN = (mRotations*s_vNormal).xyz;    
    fL = (vLight).xyz;              
    fE = (mV*mM*s_vPosition).xyz;   
    gl_Position = mP*mV*mM*s_vPosition;
}

FragmentShader

#version 130

in vec3 fN;
in vec3 fL;
in vec3 fE;     

out vec4 fColor;

void main () {
    vec3 N = normalize(fN);
    vec3 L = normalize(fL);

    vec3 E = normalize(-fE);
    vec3 H = normalize(L + E);      

    float diffuse_intensity = max(dot(N, L), 0.0);
    vec4 diffuse_final = diffuse_intensity*vec4(0.1, 0.1, 0.1, 1.0);

    float spec_intensity = pow(max(dot(N, H), 0.0), 30.0);
    vec4 spec_final = spec_intensity*vec4(0.9, 0.1, 0.1, 1.0);

    fColor = diffuse_final + spec_final;
}

Solution

  • NVIDIA is known for doing things that it is not supposed to with respect to vertex attributes. In this case, you are taking advantage of vertex attribute aliasing, which actually violates the GLSL spec.

    If you are going to use fixed-function vertex attributes, do not write your shader the way you have. Either use glVertexAttrib*(0, ...) or use gl_Vertex instead of a generic vertex attribute (e.g. s_vPosition).

    Even though vertex attribute 0 generally aliases to gl_Vertex, you are relying on undefined behavior when you assume s_vPosition will be assigned to attribute slot 0. There is simply no guarantee that this will happen: s_vColor could be assigned to 0 instead, particularly since some GLSL implementations re-arrange your attribute variables alphabetically before automatically assigning them locations.

    The bottom line is you should not be using fixed-function vertex attributes the way you are. I would suggest either using generic attributes and binding their locations explicitly (e.g. with glBindAttribLocation before linking), or simply using gl_Vertex, gl_Color, etc. instead of generic attributes.
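A minimal sketch of the explicit-binding approach, using the question's shaderProgramPrimID and its s_vPosition/s_vColor attributes; the chosen slot numbers are an assumption. Pinning the locations before linking removes the aliasing guesswork, and submitting vertices through glVertexAttrib on slot 0 (which the spec defines to provoke a vertex, like glVertex) keeps the immediate-mode path portable:

```cpp
// Before glLinkProgram: pin the generic attributes to known slots so the
// linker cannot assign them in a different (e.g. alphabetical) order.
glBindAttribLocation(shaderProgramPrimID, 0, "s_vPosition");
glBindAttribLocation(shaderProgramPrimID, 1, "s_vColor");
glLinkProgram(shaderProgramPrimID);

// Immediate-mode submission without relying on gl_Vertex aliasing:
// set attribute 1 (color) as a constant, then issue each vertex via
// attribute 0, which provokes vertex processing.
glUseProgram(shaderProgramPrimID);
glBegin(GL_TRIANGLES);
    glVertexAttrib4f(1, 1.0f, 0.0f, 0.0f, 1.0f); // red
    glVertexAttrib3f(0, 0.9f,  1.0f, -1.0f);
    glVertexAttrib3f(0, 0.9f, -1.0f, -1.0f);
    glVertexAttrib3f(0, 0.2f,  1.0f, -1.0f);
glEnd();
```

Note that glBegin/glEnd and attribute 0's provoking behavior are only available in a compatibility profile context.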