I have this small GLSL function that calculates alpha given x. It's pretty simple:
```glsl
float a(float x) {
    // subdivision = 4 and sizeRatio = 1
    x = x * subdivision;
    float i = int(x);
    return pow((x - i - 0.5) * 2.0, 6.0 / sizeRatio) / 5.0;
}
```
Since it's nothing complicated, I can visualize the expected result in GeoGebra. As desired, the function increases up to 1, then decreases after 1, and the same pattern repeats around every integer.
Even in Excel, I get the expected curve using the same formula as in the GLSL code:

```
=POWER((x-FLOOR.MATH(x)-0,5)*2; 6)
```
So, when I run my shader, I expect to get a double gradient. The problem is that I get only one gradient, the one between 0.5 and 1.0, and nothing between 1.0 and 1.5, and so on. I have tried replacing `int(x)` with `floor(x)`, but the result is the same.
What causes this difference?
First off, using `floor(x)` instead of `int(x)` is the correct translation of the Excel formula to GLSL: `int(x)` rounds towards zero, while `floor(x)` rounds towards negative infinity. From the GLSL 330 specification, section 5.4.1:
> When constructors are used to convert a `float` to an `int` or `uint`, the fractional part of the floating-point value is dropped.
The other key difference lies in how Excel/GeoGebra and GLSL define their `pow` functions. From the GLSL 330 specification, section 8.2:
> genType pow (genType x, genType y)
>
> Results are undefined if x < 0.
Ignoring the initial multiplication by `subdivision` for a minute, this is exactly what happens in `(x - int(x) - 0.5) * 2.0` for x in [0, 0.5): the base passed to `pow` is negative there, so the result is undefined.
Since the graph is symmetrical around 0.0 over the range [-0.5, 0.5], you can use the absolute value to work around this:

```glsl
return pow(abs(fract(x) - 0.5) * 2.0, 6.0 / sizeRatio) / 5.0;
```
The function `fract` is built into GLSL and computes `x - floor(x)`.