IEEE floating point numbers have a bit assigned to indicate the sign, which means you can technically have different binary representations of zero (+0 and -0). Is there an arithmetic operation I could perform, for example in C, which results in a negative zero floating point value?
This question is inspired by another which called into question whether you can safely compare 0.0f using ==, and I wondered further whether there are other ways to represent zero which would cause float1 == 0.0f to break for seemingly perfectly equal values.
[Edit] Please, do not comment about the safety of comparing floats for equality! I am not trying to add to that overflowing bucket of duplicate questions.
According to the standard, negative zero exists but it compares equal to positive zero. For almost all purposes the two behave the same way, and many consider the existence of a negative zero to be an implementation detail. There are, however, some operations where the two behave quite differently, namely division by zero and atan2:
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 0.0;
    double y = -0.0;

    /* The two zeros compare equal... */
    printf("%.08f == %.08f: %d\n", x, y, x == y);
    /* ...but dividing by them yields infinities of opposite sign... */
    printf("%.08f == %.08f: %d\n", 1 / x, 1 / y, 1 / x == 1 / y);
    /* ...and atan2 distinguishes the sign of a zero argument. */
    printf("%.08f == %.08f: %d\n", atan2(x, y), atan2(y, y),
           atan2(x, y) == atan2(y, y));
    return 0;
}
The result from this code (compiled here with MSVC, hence the 1.#INF0000 spelling of infinity) is:
0.00000000 == -0.00000000: 1
1.#INF0000 == -1.#INF0000: 0
3.14159265 == -3.14159265: 0
This means such code can correctly handle certain limits without explicit special-casing. Whether relying on this behavior for values close to the limits is a good idea is another matter: a small calculation error can flip the sign and make the result wildly wrong. Still, you can take advantage of it if you avoid calculations that might change the sign.