I have a situation where it is reasonable to have a division by 0.0 or by -0.0 where I would expect to see +Inf and -Inf, respectively, as results. It seems that Python enjoys throwing a
ZeroDivisionError: float division by zero
in either case. Obviously, I figured that I could simply wrap this with a test for 0.0. However, I can't find a way to distinguish between +0.0 and -0.0. (FYI you can easily get a -0.0 by typing it or via common calculations such as -1.0 * 0.0).
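To make the problem concrete, here is what a standard CPython 3 session shows: negative zero is easy to produce, an equality test cannot see its sign, and the division that should yield -Inf raises instead.

>>> -1.0 * 0.0        # an ordinary calculation produces negative zero
-0.0
>>> -0.0 == 0.0       # but an equality test cannot tell the two zeros apart
True
>>> 1.0 / -0.0        # and the division that should give -inf raises instead
Traceback (most recent call last):
  ...
ZeroDivisionError: float division by zero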
IEEE 754 handles all of this very nicely, but Python seems to take pains to hide that well-thought-out behavior. In fact, 0.0 == -0.0 comparing equal is itself an IEEE feature, which is exactly why a simple equality test can't distinguish the two zeros, so Python's exception-raising behavior seriously breaks things. Division by signed zero works perfectly well in C, Java, Tcl, and even JavaScript.
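For what it's worth, Python's floats do follow IEEE 754 in other corner cases, which makes the division behavior stand out all the more:

>>> 1e308 * 10                     # overflow yields inf, as IEEE 754 prescribes
inf
>>> float('inf') - float('inf')    # invalid operations yield nan
nan
>>> 1.0 / 0.0                      # yet division by zero raises
Traceback (most recent call last):
  ...
ZeroDivisionError: float division by zero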
Suggestions?
from math import copysign

def divide(numerator, denominator):
    if denominator == 0.0:
        # 0.0 == -0.0 is True, but copysign() can still read the zero's sign bit.
        # Combine both operands' signs so that e.g. divide(-1, 0.0) -> -inf,
        # matching IEEE 754 (the numerator's sign matters too).
        sign = copysign(1.0, numerator) * copysign(1.0, denominator)
        return copysign(float('inf'), sign)
    return numerator / denominator
>>> divide(1, -0.0)
-inf
>>> divide(1, 0)
inf
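If pulling in a dependency is acceptable, NumPy's float64 follows IEEE 754 division directly, signaling a RuntimeWarning instead of raising. A minimal sketch, assuming NumPy is installed:

import numpy as np

with np.errstate(divide='ignore'):               # suppress the divide-by-zero warning
    print(np.float64(1.0) / np.float64(0.0))     # inf
    print(np.float64(1.0) / np.float64(-0.0))    # -inf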