I've been making my own OpenGL game for fun to learn C++ (coming from Java). I was testing it on another computer so I could check its performance on a weaker system, and I found a shader compilation error.
It seems that on my computer running with Intel Integrated Graphics, the following line causes a syntax error.
`float ambientLight = 2f;`
The error is just `'f' syntax error`, so naturally I removed the `f`, and it now runs fine on both machines. I'm guessing this is some sort of driver quirk, but I'm not really sure why there is this discrepancy, or whether it means I should stop putting `f` suffixes in my float declarations in GLSL altogether.
For reasons that continue to elude me, the GLSL specification requires that floating-point literal suffixes (`f`, `lf`) only appear after unambiguously floating-point values. `2` is an integer literal, not a floating-point literal, so it cannot be adorned with `f`. A literal is not a floating-point literal unless it clearly has a decimal point or an exponent somewhere in it (`1e4`, for example).
So you have to write it as `2.f`.
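To make the rule concrete, here is a minimal fragment-shader sketch (the variable names and the `0.1` scale factor are illustrative, not from the original code) showing which literal forms GLSL accepts:

```glsl
#version 330 core

out vec4 fragColor;

void main() {
    // float bad = 2f;         // rejected: 2 is an integer literal, so no f suffix allowed
    float ambientLight = 2.f;  // ok: the decimal point makes it a floating-point literal
    float a = 2.0;             // ok: the suffix is optional once the literal is floating-point
    float b = 1e4;             // ok: an exponent also makes the literal floating-point
    fragColor = vec4(vec3(ambientLight * 0.1 + b * 0.0 + a * 0.0), 1.0);
}
```

If portability across drivers matters, the plain `2.0` spelling is the safest choice, since some compilers are laxer than others about accepting the non-conforming `2f` form.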