python, floating-point, precision, quaternions, scipy-spatial

Python division of small floats in quaternion calculations


Consider the expression y = sin(x/2)/x, which shows up in the calculation of rotations. In the limit as x -> 0 this has a value of 1/2. Implementing it in Python (or any language with similar 64-bit floating-point types) seems to work fine for all values except x = 0.0 exactly.

Yet when this math shows up in books, it is often recommended to implement the function with a truncated Taylor series expansion that does not have divide-by-zero issues. Here is an example in SciPy where a seemingly arbitrary threshold of 1e-3 radians was chosen for this logic.

https://github.com/scipy/scipy/blob/9d5d4f86bd50e11ee6e593758ab06f9b6f7cf021/scipy/spatial/transform/_rotation.pyx#L1102
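
For concreteness, here is a rough sketch of the two approaches being compared. The function names half_sinc_direct and half_sinc_series are made up for illustration; the series coefficients are the standard truncated expansion of sin(x/2)/x about zero, and the 1e-3 cutoff mirrors the linked SciPy code without copying its implementation.

    import numpy as np

    def half_sinc_direct(x):
        # Straightforward evaluation; breaks only at x = 0.0 exactly.
        return np.sin(x / 2) / x

    def half_sinc_series(x, threshold=1e-3):
        # Truncated Taylor series near zero:
        # sin(x/2)/x = 1/2 - x**2/48 + x**4/3840 - ...
        if abs(x) < threshold:
            return 0.5 - x**2 / 48 + x**4 / 3840
        return np.sin(x / 2) / x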

The basic implementation y = np.sin(x/2)/x appears to work fine for small values of x (obviously except x = 0.0 exactly). Is there really a precision issue to be concerned about here? If so, what is a test case that shows it?

Code to check for 0.0 as a special case is cleaner than using the truncated Taylor series with an arbitrary threshold.


Solution

  • The basic implementation y = np.sin(x/2)/x appears to work fine for small values of x (obviously except x = 0.0 exactly). Is there really a precision issue to be concerned about here? If so, what is a test case that shows it?

    Yes. You can see this in the following code sample:

    >>> import numpy as np
    >>> angle = 5e-324
    >>> np.sin(angle/2)/angle
    0.0
    

    More generally, here is the function np.sin(x/2)/x plotted from -5e-322 to 5e-322. Recall that the limit of sin(x/2)/x as x approaches 0 is 1/2, so this graph should be a nearly straight horizontal line at y = 0.5, with one missing value in the middle.

    [Plot: sin(x/2)/x near zero]

    (The X axis is missing, because matplotlib freaks out when you try to plot extremely small X values.)
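
    For reference, here is a rough way to reproduce a graph like the one described above (this is a reconstruction, not the original plotting code; the x values are built as integer multiples of the smallest denormal, 5e-324, so that the grid spacing stays representable):

    import numpy as np
    import matplotlib.pyplot as plt

    # x runs over integer multiples of the smallest denormal double,
    # covering roughly -5e-322 to 5e-322.
    x = np.arange(-101, 102) * 5e-324
    with np.errstate(divide="ignore", invalid="ignore"):
        y = np.sin(x / 2) / x          # nan at x = 0.0, skipped by the plot
    plt.plot(x, y, ".")
    plt.ylabel("sin(x/2)/x")
    plt.show()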

    The number 5e-324 is an extreme case, because 5e-324 / 2 is not representable as a float (it rounds to 0.0), but the numbers just above it aren't much better. The computation np.sin(x/2)/x is not consistently accurate to three decimal places until around x = 2.5e-321.
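
    To see this concretely, here is a small sketch (the sample points, odd multiples of the smallest denormal 5e-324, are chosen only for illustration) that prints the error against the limiting value 0.5 for a few tiny inputs:

    import numpy as np

    # At these magnitudes the true value of sin(x/2)/x is 0.5 to far more
    # digits than shown, so any visible error comes from the evaluation itself.
    for k in (1, 11, 101, 501, 1001):
        x = k * 5e-324                    # k times the smallest denormal double
        y = np.sin(x / 2) / x
        print(f"x = {x:9.3e}   sin(x/2)/x = {y:.6f}   error = {abs(y - 0.5):.1e}")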

  • Code to check for 0.0 as a special case is cleaner than using the truncated Taylor series with an arbitrary threshold.

    I disagree: code that uses floating-point math and checks for exact equality with zero is almost always a code smell. Frequently, if the code divides by zero when the input is exactly zero, it is also numerically unstable at values near zero.
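
    As a minimal sketch of that point (the function name half_sinc_zero_check is made up), a zero check catches x == 0.0 but does nothing for the denormal inputs shown above, which are nonzero and still produce a badly wrong result:

    import numpy as np

    def half_sinc_zero_check(x):
        # The "check for 0.0" approach from the question: the special case
        # handles exact zero and nothing else.
        if x == 0.0:
            return 0.5
        return np.sin(x / 2) / x

    print(half_sinc_zero_check(0.0))     # 0.5, as intended
    print(half_sinc_zero_check(5e-324))  # 0.0 -- passes the zero check, still wrong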

  • Here is an example in SciPy where a seemingly arbitrary threshold of 1e-3 radians was chosen for this logic.

    I went looking through SciPy's history to find out why the threshold is set at 1e-3 specifically. The commit that originally added the 1e-3 threshold contains no discussion of why that value was picked, and neither does the associated PR. I don't know why this specific threshold value was chosen.