Numbers sometimes cannot be represented exactly in double precision or single precision. I know that working with BigDecimal is one way around this, of course. Here is my question:
The number 0.20000000000000029 in double precision is

0011111111001001100110011001100110011001100110011001100110100100

This is actually the number

0.20000000000000028865798640254070051014423370361328125

Similarly, the number 0.3213213214213222219 in double precision is

0011111111010100100100001000011101001101110000001100010111111001

This is actually the number

0.321321321421322247946505967775010503828525543212890625
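(These values can be checked with something like the following sketch; the class name Inspect is just for illustration. The BigDecimal(double) constructor preserves the stored binary value exactly, and Double.doubleToRawLongBits exposes the bit pattern.)

import java.math.BigDecimal;

class Inspect {
    public static void main(String[] args) {
        double x = 0.20000000000000029;
        // 64-bit IEEE 754 pattern; Long.toBinaryString drops leading zero bits.
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(x)));
        // Exact decimal value of the stored double.
        System.out.println(new BigDecimal(x));
    }
}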
Now let's look at the Java code and its output below.
class Main {
    public static void main(String[] args) {
        double x = 0.2000000000000002900; // stored as 0.20000000000000028865798640254070051014423370361328125
        double y = 0.3213213214213222219; // stored as 0.321321321421322247946505967775010503828525543212890625
        System.out.printf("x = %.20f , y = %.20f%n", x, y);
    }
}
Output: x = 0.20000000000000030000 , y = 0.32132132142132225000
From here we understand that the output x = 0.20000000000000030000 is the number 0.20000000000000028865798640254070051014423370361328125 rounded to 16 significant digits, and the output y = 0.32132132142132225000 is the number 0.321321321421322247946505967775010503828525543212890625 rounded to 17 significant digits.
Here is the question: why was one of the numbers rounded to 16 significant digits while the other was rounded to 17? Why aren't they both 16, or both 17? Why do the two precisions differ?
This is essentially answered in this answer, but I will reprise it here since the question there asks about differences between C and Java.
The Java specification requires a troublesome double rounding in this situation. I will detail the steps below, but the effects in these cases are:

For the first case: the double is first converted to the 16-digit string “0.2000000000000003”, and then zeros are appended to reach the requested 20 digits after the decimal point.

For the second case: 16 digits are not enough to identify the double, so it is first converted to the 17-digit string “0.32132132142132225”, and zeros are then appended.
Step 1 is an attempt to find the shortest decimal numeral that can “represent” the double in some sense. Specifically, step 1 is to find the fewest decimal digits such that converting the double to decimal with that many digits and then converting back to double yields the original value.
In the first case, the 16 digits of 0.2000000000000003 are enough, because converting that to double produces 0.20000000000000028865798640254070051014423370361328125. However, in the second case, if we had the 16-digit number 0.3213213214213222, converting it to double would produce 0.321321321421322192435354736517183482646942138671875, which is different from the original value. So 16 digits are not enough; we need 17 in this case.
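This round-trip property is easy to check directly (a sketch; the class name RoundTrip is just for illustration):

class RoundTrip {
    public static void main(String[] args) {
        double x = 0.2000000000000002900;
        double y = 0.3213213214213222219;
        // 16 digits recover x exactly ...
        System.out.println(Double.parseDouble("0.2000000000000003") == x);   // true
        // ... but 16 digits do not recover y; 17 digits are needed.
        System.out.println(Double.parseDouble("0.3213213214213222") == y);   // false
        System.out.println(Double.parseDouble("0.32132132142132225") == y);  // true
    }
}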
This is specified in the Java documentation. The documentation for formatting with the Double type and the f conversion says:

… If the precision is less than the number of digits which would appear after the decimal point in the string returned by Float.toString(float) or Double.toString(double) respectively, then the value will be rounded using the round half up algorithm. Otherwise, zeros may be appended to reach the precision…
Let’s consider “the string returned by … Double.toString(double)”. For the number 0.20000000000000028865798640254070051014423370361328125, this string is “0.2000000000000003”. This is because the Java specification says that toString produces just enough decimal digits to uniquely distinguish the number within the set of Double values, and “0.2000000000000003” has just enough digits in this case.
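This can be seen directly (a small sketch, reusing the literals from the question):

class ShortestString {
    public static void main(String[] args) {
        double x = 0.2000000000000002900;
        double y = 0.3213213214213222219;
        // toString emits just enough digits to uniquely identify each double.
        System.out.println(Double.toString(x)); // 0.2000000000000003   (16 significant digits)
        System.out.println(Double.toString(y)); // 0.32132132142132225  (17 significant digits)
    }
}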
The passage quoted above refers to rounding “the value” or appending zeros. Which value does it mean: the actual operand of format, which is 0.20000000000000028865798640254070051014423370361328125, or the string it mentions, “0.2000000000000003”? Since the latter is not a numeric value (it is a character string), I would have expected “the value” to mean the former. However, the second sentence says “Otherwise [that is, if more digits are requested], zeros may be appended…” If we were using the actual operand of format, we would show its digits, not use zeros. But if we take the string as a numeric value, its decimal representation would have only zeros after the digits shown in it. So it seems this is the interpretation intended, and Java implementations appear to conform to it.
So, to format this number with ".20f", we first convert it to 0.2000000000000003 and then append zeros, yielding “0.20000000000000030000”.
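Something like the following reproduces this (a sketch; String.format and printf both go through java.util.Formatter):

class FormatDemo {
    public static void main(String[] args) {
        double x = 0.2000000000000002900;
        // The shortest string "0.2000000000000003" is produced first,
        // then zeros are appended to reach 20 digits after the point.
        System.out.println(String.format("%.20f", x)); // 0.20000000000000030000
    }
}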
This is a bad specification because the formatted result is not, in general, the actual operand correctly rounded to the requested precision: the operand is first converted to the shortest uniquely identifying string, and that string is then rounded or padded, and such double rounding can yield a different result than a single direct rounding would. (In these examples, .20f resulted in appending zeros, not rounding. But a shorter precision request, like .8f, would perform a second rounding.)

(Also, it is a shame they wrote zeros “may be” appended. Why not “Otherwise, zeros are appended to reach the precision”? With “may”, it seems like they are giving the implementation a choice, although I suspect they meant the “may” is predicated on whether zeros are needed to reach the precision, not on whether the implementor chooses to append them.)
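For completeness, here is a sketch of the shorter-precision case mentioned above; for these particular values the second rounding happens to give the same digits as rounding the actual operands directly would:

class SecondRounding {
    public static void main(String[] args) {
        double x = 0.2000000000000002900;
        double y = 0.3213213214213222219;
        // With fewer digits than the shortest strings contain, those strings
        // are rounded (round half up) instead of being padded with zeros.
        System.out.printf("%.8f %.8f%n", x, y); // 0.20000000 0.32132132
    }
}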