javascript, numbers, theory

How is a 65-bit number stored in 64 bits in JavaScript?


In the chapter "How Numbers Work" of the book "How JavaScript Works" by Douglas Crockford, it is mentioned that a number in JavaScript is made up of 1 sign bit, 11 exponent bits, and 53 significand bits. That totals 65 bits, and some clever encoding allows these 65 bits to be stored in 64 bits, which we understand as a 64-bit floating-point number.

Going further, the significand is stored as a binary fraction in the range 0.5 <= significand < 1.0.

In that form, the most significant bit is always a 1. Since that bit is always a 1, it does not need to be stored in the number. This yields a bonus bit.

I do not understand:

  1. How is the most significant bit (the sign bit) always going to be 1?
  2. And if the sign bit is not stored, how does JavaScript differentiate between positive and negative numbers?

Please help me understand this concept, or point me in a direction that can help.


Solution

  • The Fraction (mantissa) portion of the double-precision floating-point format Crockford refers to has 52 bits, not 53:

    [Figure: IEEE 754 double-precision layout — 1 sign bit, 11 exponent bits, 52 fraction bits]

    The Wikipedia article refers to "effective precision." They explain it like this:

    The format is written with the significand having an implicit integer bit of value 1. With the 52 bits of the fraction (F) significand appearing in the memory format, the total precision is therefore 53 bits (approximately 16 decimal digits, 53 log10(2) ≈ 15.955).
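    You can see that 53-bit effective precision from the console (a quick illustration, not from the Wikipedia article):

    ```javascript
    // 52 stored fraction bits + 1 implicit bit = 53 bits of integer precision,
    // so exact integers run out just above 2^53.
    console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
    console.log(2 ** 53 === 2 ** 53 + 1); // true: 2^53 + 1 rounds back to 2^53
    ```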

    That implicit integer bit must be what Crockford refers to as "clever encoding." Note that the bit that goes unstored is the significand's leading 1, not the sign bit — the sign bit is always stored, which is how positive and negative numbers stay distinguishable. From a programmer's perspective, it doesn't matter all that much unless you're doing something exotic like bit twiddling or integer casting. That's why Crockford doesn't explain it further.
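    If you do want to poke at the encoding, here is a minimal sketch (my own illustration, not Crockford's) that unpacks a double's 64 bits with a DataView and reconstructs the value, including the implicit leading 1:

    ```javascript
    // Handles normal numbers only; zero, subnormals, Infinity and NaN
    // use special exponent values not covered here.
    function decodeDouble(x) {
      const view = new DataView(new ArrayBuffer(8));
      view.setFloat64(0, x); // big-endian by default
      const hi = view.getUint32(0); // sign (1) + exponent (11) + top 20 fraction bits
      const lo = view.getUint32(4); // low 32 fraction bits

      const sign = hi >>> 31;
      const exponent = (hi >>> 20) & 0x7ff; // biased by 1023
      const fraction = (hi & 0xfffff) * 2 ** 32 + lo; // the 52 stored bits

      // Normal numbers: significand = 1.fraction — the leading 1 is implied,
      // so 52 stored bits carry 53 bits of precision.
      const significand = 1 + fraction / 2 ** 52;
      const value = (sign ? -1 : 1) * significand * 2 ** (exponent - 1023);
      return { sign, exponent, fraction, value };
    }

    console.log(decodeDouble(6.5));
    // → { sign: 0, exponent: 1025, fraction: 2814749767106560, value: 6.5 }
    ```

    (IEEE 754 writes normal significands as 1.fraction, whereas Crockford's book normalizes them to the range 0.5 <= significand < 1.0; the two forms differ only by where the implied bit sits relative to the binary point, since 1.f × 2^e = 0.1f × 2^(e+1).)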

    Related: Is it 52 or 53 bits of floating point precision?