python, numpy, floating-point

Why is numpy's sine function so inaccurate at some points?


I just checked numpy's sine function. Apparently, it produces highly inaccurate results around pi.

In [26]: import numpy as np

In [27]: np.sin(np.pi)
Out[27]: 1.2246467991473532e-16

The expected result is 0. Why is numpy so inaccurate there?

To some extent, I am uncertain whether it is even fair to call this result inaccurate: its absolute error is within one machine epsilon (for binary64), whereas its relative error is +inf, because the true value is 0 -- which is why I feel confused. Any ideas?
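For concreteness, this is the comparison I have in mind (just a quick check; np.finfo supplies the binary64 epsilon):

import numpy as np

result = np.sin(np.pi)
expected = 0.0

abs_err = abs(result - expected)   # ~1.22e-16, below the binary64 eps of ~2.22e-16
rel_err = abs_err / abs(expected) if expected != 0 else float("inf")
print(abs_err, np.finfo(np.float64).eps, rel_err)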

[Edit] I fully understand that floating-point calculations can be inaccurate. But most floating-point libraries manage to deliver results within a small relative error. Here, the relative error is +inf, which seems unacceptable. Just imagine that we want to calculate

1/(1e-16 + sin(pi)) 

The result would be disastrously wrong if we used numpy's implementation.
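To make the concern concrete, here is what the expression above evaluates to (nothing beyond NumPy is assumed):

import numpy as np

# Using NumPy's sin, the denominator is about 2.22e-16 instead of 1e-16:
print(1 / (1e-16 + np.sin(np.pi)))   # roughly 4.5e15
# What we would get if sin(pi) returned exactly 0:
print(1 / (1e-16 + 0.0))             # 1e16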


Solution

  • The main problem here is that np.pi is not exactly π; it is a finite binary floating-point number that is close to the true irrational number π but still off by about 1e-16. np.sin(np.pi) actually returns a value closer to the true infinite-precision result for sin(np.pi) (i.e. the ideal mathematical sin() function evaluated at the approximated np.pi value) than 0 would be.
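One way to check this (a sketch that assumes the third-party mpmath package is installed) is to evaluate the sine of the rounded constant np.pi at much higher precision: the high-precision result matches what NumPy returns, and both equal the representation error π - np.pi.

import numpy as np
import mpmath

mpmath.mp.dps = 50                        # work with ~50 decimal digits

x = np.pi                                 # closest binary64 value to the true pi
print(float(mpmath.sin(mpmath.mpf(x))))   # ~1.2246467991473532e-16
print(np.sin(x))                          # essentially the same value
print(float(mpmath.pi - mpmath.mpf(x)))   # representation error of np.pi, also ~1.22e-16

So NumPy's sine is doing its job correctly for the input it was actually given; the apparent error comes entirely from rounding π to np.pi.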