I have two conflicting mindsets:
Mindset 1: JSON numbers are always double-precision floating-point numbers. Therefore, 1 and 1.0 both represent exactly the same number, and 12345678901234567890 is actually 12345678901234567000, because 12345678901234567890 cannot be accurately represented as a double-precision floating-point number.
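This is indeed how JavaScript's built-in JSON.parse and JSON.stringify behave (for example, in Node.js):

// JavaScript's default parser maps every JSON number to a double-precision float.
console.log(JSON.parse("1") === JSON.parse("1.0"));  // true - the trailing ".0" is lost
console.log(JSON.parse("12345678901234567890"));     // 12345678901234567000 - precision is lost
console.log(JSON.stringify(JSON.parse("1.0")));      // "1" - a round trip drops the ".0"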
Mindset 2: JSON numbers cannot always be interpreted as double-precision floating-point numbers. JSON is a communication protocol that is distinct from JavaScript. The belief that JSON numbers are always double-precision floating-point numbers stems from conflating JSON with JavaScript, and from the idiosyncrasies of JavaScript's default JSON parser and serializer, which do interpret them that way. Therefore, 1 and 1.0 need not be the same. In particular, the presence or absence of the trailing .0 can be used to encode type information. Many programming languages, such as Java or C#, distinguish between integers and floating-point numbers, so it is reasonable to demand that integers in such languages always be serialized without the trailing .0, while floating-point numbers are always serialized with it (a small sketch of this convention follows below). Likewise, 12345678901234567890 and 12345678901234567000 are not the same number. Certain widely used parsers do interpret them, by default, as the same number, because they coerce every JSON number into a double-precision floating-point number - but that is on those parsers, not on JSON itself.

Which - if any - of these two mindsets is correct?
Googling seems to yield conflicting results.
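To make the second mindset concrete, here is a hypothetical sketch (not any existing library) of a serializer that encodes the integer/double distinction through the trailing .0, using a tagged value to stand in for a language that actually has both types:

// Hypothetical sketch only: encode type information via the presence or absence of ".0".
type TypedNumber =
  | { kind: "int"; value: bigint }
  | { kind: "double"; value: number };

function serializeNumber(n: TypedNumber): string {
  if (n.kind === "int") {
    return n.value.toString();  // integers: never a trailing ".0"
  }
  // doubles: always carry a decimal point, even for whole values
  return Number.isInteger(n.value) ? n.value.toFixed(1) : String(n.value);
}

console.log(serializeNumber({ kind: "int", value: 1n }));     // "1"
console.log(serializeNumber({ kind: "double", value: 1 }));   // "1.0"
console.log(serializeNumber({ kind: "double", value: 2.5 })); // "2.5"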
The JSON format does not set limits on the numbers it can represent: the following JSON is valid:
1e999999999999
...even though it represents a number that far exceeds the capacity of a double-precision floating point number.
Similarly, you can have this valid JSON:
1234567890123456789.01234567890123456789
...even though double-precision floating point numbers cannot represent that many significant digits.
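What happens to such documents is up to the implementation. For example, JavaScript's built-in parser accepts both, but approximates them to what a double can hold:

console.log(JSON.parse("1e999999999999"));
// Infinity - the value overflows what a double can hold

console.log(JSON.parse("1234567890123456789.01234567890123456789"));
// roughly 1234567890123456800 - the extra significant digits are rounded away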
Such concerns are not inherent to the JSON format, but to the implementations that read and write JSON. The RFC 8259 standard touches on this in section 6 on numbers:
This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.
Note that when such software is used, numbers that are integers and are in the range [-(2**53)+1, (2**53)-1] are interoperable in the sense that implementations will agree exactly on their numeric values.
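That interoperable range is exactly what JavaScript exposes as Number.MAX_SAFE_INTEGER; one step outside it, exact agreement is no longer guaranteed:

console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1);  // true (9007199254740991)
console.log(JSON.parse("9007199254740993"));           // 9007199254740992 - rounded to the nearest double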
All of this means that the first article you quoted is not entirely accurate - specifically, the statement that "the presence or absence of a decimal point is not enough to distinguish between integers and non-integers".
Although in practice this might be true, it is really an implementation aspect: we can imagine implementations for which the lexical form of a number would be enough to distinguish between integers and non-integers. This is not the business of the JSON format itself.
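As a minimal sketch (not a real parser) of one such lexical rule, an implementation could classify a number token by its form alone, following the RFC 8259 number grammar, before deciding how to represent its value:

// A JSON number token is an integer token exactly when it has neither a
// fraction part nor an exponent part (per the RFC 8259 number grammar).
function isIntegerToken(token: string): boolean {
  return /^-?(0|[1-9][0-9]*)$/.test(token);
}

console.log(isIntegerToken("1"));                    // true
console.log(isIntegerToken("1.0"));                  // false
console.log(isIntegerToken("12345678901234567890")); // true - no precision is lost at the token level
console.log(isIntegerToken("1e2"));                  // false - this sketch treats exponent forms as non-integers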