
Haskell's double behaviour


It's well known that types like Double, which correspond to IEEE 754 floating-point, cannot represent numbers like 0.1 exactly.

But when I played around with it in the REPL, I was confused by the behaviour:

ghci> let x = 0.1 :: Double
ghci> :t x
x :: Double
ghci> import Text.Printf
ghci> printf "%.20f\n" x
0.10000000000000000000

To cross-check, I did the same in the Python REPL, where I get the expected result:

>>> x = 0.1
>>> type(x)
<class 'float'>
>>> print(f"{x:.20f}")
0.10000000000000000555

Why am I seeing a different and unexpected result in the GHCi REPL?


Solution

  • The libraries that come with GHC are all based around the idea that when converting to decimal, you want the shortest representation that rounds back to your float. If you want to see the most accurate representation instead, I believe you have to pull some tricks. Here's one example of a trick you can pull, using the numbers package:

    Data.Number.CReal> showCReal 100 (realToFrac 0.1)
    "0.1000000000000000055511151231257827021181583404541015625"
    

    This is <100 digits despite requesting 100 digits because showCReal drops trailing zeros. Be careful not to drop the realToFrac -- it is secretly there to suggest to GHC that it choose Double for the type of the 0.1, using defaulting and possibly ghci's extended defaulting. Contrast:

    Data.Number.CReal> showCReal 100 0.1
    "0.1"
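
    If you'd rather not pull in an extra package, here is a base-only sketch of the same idea: `toRational` exposes the exact binary value of the Double as a ratio of integers, and plain long division then yields as many decimal digits as you like. The `digits` helper below is a hypothetical illustration (it assumes a non-negative rational less than 10), not a library function.

    ```haskell
    import Data.Ratio (numerator, denominator)

    -- The exact value of the Double 0.1, as a ratio of integers.
    -- This is 3602879701896397 / 2^55; no rounding is involved.
    exact :: Rational
    exact = toRational (0.1 :: Double)

    -- Long division: produce n decimal digits of a non-negative
    -- rational less than 10, inserting the decimal point after the
    -- first digit.
    digits :: Int -> Rational -> String
    digits n r = go n (numerator r) (denominator r)
      where
        go 0 _ _ = ""
        go k p q =
          let (d, p') = p `divMod` q
          in show d ++ (if k == n then "." else "") ++ go (k - 1) (p' * 10) q

    main :: IO ()
    main = do
      print exact            -- 3602879701896397 % 36028797018963968
      putStrLn (digits 60 exact)
    ```

    The second line of output agrees with the showCReal result above (plus trailing zeros, which `digits` does not drop): the nonzero deviation from 0.1 first appears at the 18th decimal place, exactly as Python's `.20f` formatting shows.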