Tags: javascript, floating-point, precision

Why does 0.1 + 0.2 return unpredictable float results in JavaScript while 0.2 + 0.3 does not?


0.1 + 0.2
// => 0.30000000000000004

0.2 + 0.2
// => 0.4

0.3 + 0.2
// => 0.5

I understand it has to do with floating points but what exactly is happening here?

As per @Eric Postpischil's comment, this isn't a duplicate:

That one only involves why “noise” appears in one addition. This one asks why “noise” appears in one addition and does not appear in another. That is not answered in the other question. Therefore, this is not a duplicate. In fact, the reason for the difference is not due to floating-point arithmetic per se but is due to ECMAScript 2017 7.1.12.1 step 5


Solution

  • When converting Number values to strings in JavaScript, the default is to use just enough digits to uniquely distinguish the Number value.¹ This means that when a number is displayed as “0.1”, that does not mean it is exactly 0.1, just that it is closer to 0.1 than any other Number value is, so displaying just “0.1” tells you it is this unique Number value, which is 0.1000000000000000055511151231257827021181583404541015625. We can write this in hexadecimal floating-point notation as 0x1.999999999999ap-4. (The p-4 means to multiply the preceding hexadecimal numeral by two to the power of −4, so mathematicians would write it as 1.999999999999A₁₆ · 2⁻⁴.)
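This can be observed directly: `toString` produces the shortest decimal that uniquely identifies the Number, while asking `toFixed` for more digits exposes more of the stored value. A small illustrative sketch:

```javascript
// The default conversion prints just enough digits to identify the Number.
console.log((0.1).toString());   // "0.1"

// Requesting 20 fractional digits shows the stored value is not exactly 0.1.
console.log((0.1).toFixed(20));  // "0.10000000000000000555"
```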

    Here are the values that result when you write 0.1, 0.2, and 0.3 in source code, and they are converted to JavaScript’s Number format:

    0.1 → 0x1.999999999999ap-4 = 0.1000000000000000055511151231257827021181583404541015625
    0.2 → 0x1.999999999999ap-3 = 0.200000000000000011102230246251565404236316680908203125
    0.3 → 0x1.3333333333333p-2 = 0.299999999999999988897769753748434595763683319091796875

    When we evaluate 0.1 + 0.2, we are adding 0x1.999999999999ap-4 and 0x1.999999999999ap-3. To do that manually, we can first adjust the latter by multiplying its significand (fraction part) by 2 and subtracting one from its exponent, producing 0x3.3333333333334p-4. (You have to do this arithmetic in hexadecimal. A₁₆ · 2 = 14₁₆, so the last digit is 4, and the 1 is carried. Then 9₁₆ · 2 = 12₁₆, and the carried 1 makes it 13₁₆. That produces a 3 digit and a 1 carry.)

    Now we have 0x1.999999999999ap-4 and 0x3.3333333333334p-4, and we can add them. This produces 0x4.ccccccccccccep-4. That is the exact mathematical result, but it has too many bits for the Number format. We can only have 53 bits in the significand. There are 3 bits in the 4 (100₂) and 4 bits in each of the trailing 13 digits, so that is 55 bits total. The computer has to remove 2 bits and round the result. The last digit, E₁₆, is 1110₂, so the low two bits, 10₂, have to go. Those bits are exactly ½ of the lowest retained bit, so it is a tie between rounding up or down. The rule for breaking ties says to round so the last bit is even, so we round up to make the trailing 11₂ become 100₂. The E₁₆ becomes 10₁₆, causing a carry to the next digit. The result is 0x4.cccccccccccd0p-4, which equals 0.3000000000000000444089209850062616169452667236328125.
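The rounded result can be verified by inspecting the raw IEEE 754 encodings of the Numbers involved; a sketch using a `DataView` (the helper name `bitsOf` is ours):

```javascript
// Return the 64-bit IEEE 754 encoding of a Number as a hex string.
function bitsOf(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian byte order by default
  let hex = "";
  for (let i = 0; i < 8; i++) {
    hex += view.getUint8(i).toString(16).padStart(2, "0");
  }
  return hex;
}

console.log(bitsOf(0.1));       // "3fb999999999999a" (0x1.999999999999ap-4)
console.log(bitsOf(0.2));       // "3fc999999999999a" (0x1.999999999999ap-3)
console.log(bitsOf(0.1 + 0.2)); // "3fd3333333333334" (one ULP above 0.3)
console.log(bitsOf(0.3));       // "3fd3333333333333"
```

The sum’s significand ends in …4 while 0.3’s ends in …3: the two Numbers are adjacent, one unit in the last place apart.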

    Now we can see why printing .1 + .2 shows “0.30000000000000004” instead of “0.3”. For the Number value 0.299999999999999988897769753748434595763683319091796875, JavaScript shows “0.3”, because that Number is closer to 0.3 than any other Number is. It differs from 0.3 by about 1.1 at the 17th digit after the decimal point, whereas the result of the addition differs from 0.3 by about 4.4 at the 17th digit. So:

    0.3       → 0.299999999999999988897769753748434595763683319091796875
    0.1 + 0.2 → 0.3000000000000000444089209850062616169452667236328125
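The gap is measurable in JavaScript itself: subtracting 0.3 from the sum leaves exactly one unit in the last place, which at this exponent is 2⁻⁵⁴. A quick sketch:

```javascript
// The sum is one ULP above the Number printed as "0.3".
// Near 0.3 the exponent is -2, so one ULP is 2^(-2-52) = 2^-54.
console.log(0.1 + 0.2 === 0.3);              // false
console.log((0.1 + 0.2) - 0.3 === 2 ** -54); // true — the subtraction is exact
```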

    Now consider 0.2 + 0.2. The result of this is 0.40000000000000002220446049250313080847263336181640625. That is the Number closest to 0.4, so JavaScript prints it as “0.4”.

    Finally, consider 0.3 + 0.2. We are adding 0x1.999999999999ap-3 and 0x1.3333333333333p-2. Again we adjust the second operand, producing 0x2.6666666666666p-3. Then adding produces 0x4.0000000000000p-3, which is 0x1p-1, which is ½ or 0.5. So it is printed as “0.5”.
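Putting the three additions side by side shows which sums land exactly on the Number that a short decimal denotes and which do not:

```javascript
console.log(0.1 + 0.2 === 0.3); // false — the sum is the Number just above 0.3
console.log(0.2 + 0.2 === 0.4); // true  — doubling is exact, and the result is the Number written "0.4"
console.log(0.3 + 0.2 === 0.5); // true  — the rounding errors cancel, giving exactly 0.5
```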


    Footnote

    ¹ This rule comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification.