python · floating-point · numerical-methods

Why does no floating-point error appear in print(0.1 * 100000), but it does in Decimal(0.1) * 100000, given the FP representation of 0.1?


I am studying numerical analysis and I have come across this dilemma.

Running the following script,

from decimal import Decimal
a = 0.1
N = 100000

# product calculation
P = N*a

# Print product result with no apparent error
print(' %.22f ' % P)
# Print product result with full Decimal approximation of 0.1
print(Decimal(0.1) * 100000)

I realize that even though 0.1 does not have an exact floating-point representation, when I multiply it by 100000 (which does have an exact floating-point representation) and print the result with increased precision, I do not notice any error.

print(' %.22f ' % P) # Result: 10000.0000000000000000000000 

This is in contrast to the case where I use the Decimal class, where I can see the error in the product:

print(Decimal(0.1) * 100000)

Also, how come I can print a number to 55 digits of precision if the IEEE 754 standard only allows 53? I reproduced this with the following instruction:

print("%.55f" % 0.1)  # 0.1000000000000000055511151231257827021181583404541015625

Can anyone explain why this happens?


Solution

  • a = 0.1

    Assuming your Python implementation uses IEEE-754 binary64¹, this converts 0.1 to 0.1000000000000000055511151231257827021181583404541015625, because that is the representable value that is nearest to 0.1.
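
    One way to see this stored value, as a quick sketch (assuming CPython with binary64 floats), is to convert the float into types that represent it exactly:

    from decimal import Decimal
    from fractions import Fraction

    a = 0.1
    print(Decimal(a))   # 0.1000000000000000055511151231257827021181583404541015625 (exact stored value)
    print(Fraction(a))  # 3602879701896397/36028797018963968 (the same value as an exact ratio of integers)
    print(a.hex())      # 0x1.999999999999ap-4 (hex significand and binary exponent)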

    P = N*a

    The real-number arithmetic product of 100,000 and 0.1000000000000000055511151231257827021181583404541015625 is 10,000.00000000000055511151231257827021181583404541015625. This number is not representable in binary64. The two nearest representable values are 10,000 and 10,000.000000000001818989403545856475830078125. The floating-point multiplication produces the representable value that is closer, so N*a produces 10,000.
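
    Both claims can be checked directly; here is a minimal sketch (math.nextafter needs Python 3.9 or later):

    import math
    from decimal import Decimal

    P = 100000 * 0.1
    print(P == 10000.0)  # True: the floating-point product is exactly 10000
    # The next representable binary64 value above 10000:
    print(Decimal(math.nextafter(10000.0, math.inf)))
    # 10000.000000000001818989403545856475830078125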

    print(' %.22f ' % P)

    This prints the value stored in P, formatted with 22 digits after the decimal point, yielding “10000.0000000000000000000000”.
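
    This also bears on the second question: the 53 bits of IEEE-754 binary64 are bits of the binary significand, not decimal digits. The exact decimal expansion of the stored binary value can need many more decimal digits (55 digits after the decimal point in the case of 0.1), and the %f conversion prints the stored value to however many digits are requested, rounding it if fewer digits are asked for and padding with zeros once the exact expansion is exhausted. A small sketch, assuming CPython's float formatting:

    print("%.55f" % 0.1)  # 0.1000000000000000055511151231257827021181583404541015625
    print("%.60f" % 0.1)  # 0.100000000000000005551115123125782702118158340454101562500000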

    print(Decimal(0.1) * 100000)

    Here, 0.1 is first converted to binary floating-point, yielding 0.1000000000000000055511151231257827021181583404541015625. Then Decimal(0.1) converts that number to Decimal, which preserves the value exactly. Then the multiplication by 100,000 is performed. By default, Python uses only 28 significant digits for Decimal arithmetic, so the result of this multiplication is rounded to 10,000.00000000000055511151231.
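
    A sketch of how the Decimal context precision controls this (the commented outputs assume the default context, with the second print run after raising the precision; note that Decimal keeps the trailing zeros of the exact product):

    from decimal import Decimal, getcontext

    print(getcontext().prec)      # 28, the default number of significant digits
    print(Decimal(0.1) * 100000)  # 10000.00000000000055511151231 (rounded to 28 significant digits)

    getcontext().prec = 60        # enough precision to hold the exact product
    print(Decimal(0.1) * 100000)
    # 10000.0000000000005551115123125782702118158340454101562500000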

    Footnote

    ¹ This is common, but Python does not have a formal specification, and what documentation there is for it is weak about floating-point behavior.