Tags: json, numbers, language-lawyer

Are JSON numbers always double-precision floating-point numbers?


I have two conflicting mindsets:

Mindset 1:

JSON numbers are always double-precision floating-point numbers.

Mindset 2:

JSON numbers cannot always be interpreted as double-precision floating-point numbers. JSON is a data-interchange format that is distinct from JavaScript. The belief that JSON numbers are always double-precision floating-point numbers stems from conflating JavaScript with JSON, and from the idiosyncrasies of JavaScript's default JSON parser and serializer, which happen to interpret them that way.

Which, if any, of these two mindsets is correct?

Googling seems to yield conflicting results.


Solution

  • The JSON format does not set limits on the numbers it can represent: the following JSON is valid:

    1e999999999999
    

    ...even though it represents a number that far exceeds the range of a double-precision floating-point number.

    Similarly, you can have this valid JSON:

    1234567890123456789.01234567890123456789 
    

    ...even though double-precision floating-point numbers cannot represent that many significant digits.

    Such concerns are not inherent to the JSON format, but to the implementations that read and write JSON. The RFC 8259 standard touches on this in section 6, on numbers:

    This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.

    Note that when such software is used, numbers that are integers and are in the range [-(2**53)+1, (2**53)-1] are interoperable in the sense that implementations will agree exactly on their numeric values.

    This means that the first article you quoted is not entirely accurate, namely the statement that "the presence or absence of a decimal point is not enough to distinguish between integers and non-integers".

    Although this might be true in practice, it is really an implementation aspect. We can imagine implementations for which the presence or absence of a decimal point would be enough to distinguish integers from non-integers. That is not the business of the JSON format itself.