In other languages that can represent 64-bit integers, storing a 64-bit value in two 32-bit integers is easy to do, for example:
How to store a 64 bit integer in two 32 bit integers and convert back again
How to store a 64 bit integer in two 32 bit integers in Ruby
// convert 64-bit n to two 32-bit x and y
x = (n & 0xFFFFFFFF00000000) >> 32
y = n & 0xFFFFFFFF
But JavaScript cannot represent 64-bit integers. It can only represent integers of up to 52 bits without problems.
That means it is not possible to convert a 64-bit integer into two 32-bit integers, because it is not even possible to have a 64-bit integer in the first place.
But we still have 52 bits to work with. My question is: how can we split a 52-bit integer in JavaScript into two parts, the 20 high bits and the 32 low bits?
Can someone suggest bit-manipulation code like the above to do a 20-bit/32-bit split in JavaScript?
First of all, your link contains a minor inaccuracy in stating that "Any whole number less than 2^52 [...] will safely fit in a JavaScript number." While technically correct, it is not a tight bound: it can be verified without too much trouble that JavaScript numbers can store every positive integer up to 2^53 (but not 2^53+1).
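You can check that boundary directly in a console; the lines below (and the expected results in the comments) are just an illustration of where exactness stops:

Math.pow(2, 53);        // 9007199254740992
Math.pow(2, 53) + 1;    // still 9007199254740992: 2^53+1 has no exact representation
Math.pow(2, 53) - 1;    // 9007199254740991, exact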
Without further ado, the functions you requested, splitting 52-bit numbers into the bottom 32 bits and the 20 top bits:
function to_int52(hi, lo) {
    /* range checking: lo must fit in 32 bits (signed or unsigned),
       hi must be a non-negative integer below 2^20 */
    if ((lo !== (lo | 0)) && (lo !== (lo | 0) + 4294967296))
        throw new Error("lo out of range: " + lo);
    if ((hi !== (hi | 0)) || (hi < 0) || (hi >= 1048576))
        throw new Error("hi out of range: " + hi);
    if (lo < 0)
        lo += 4294967296;       /* treat lo as unsigned */
    return hi * 4294967296 + lo;
}
function from_int52(i) {
    var lo = i | 0;             /* low 32 bits, as a signed int32 */
    if (lo < 0)
        lo += 4294967296;       /* make it unsigned */
    var hi = i - lo;
    hi /= 4294967296;
    if ((hi < 0) || (hi >= 1048576))
        throw new Error("not an int52: " + i);
    return { lo: lo, hi: hi };
}
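A quick round trip through these two functions, with an arbitrary example value just for illustration:

var parts = from_int52(0x123456789ab);           // a 44-bit value, well within 52 bits
parts.hi.toString(16);                           // "123"
parts.lo.toString(16);                           // "456789ab"
to_int52(parts.hi, parts.lo) === 0x123456789ab;  // true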
I would not suggest using these, though. JavaScript bitwise ops are signed (@dandavis: JS does not have UInt32s), and the sign bit causes headaches when we actually want the positive value. Plus, V8 has optimizations for (signed) integers that can be stored in 31 bits. Combining these two facts, you should split at no more than 30 bits, the maximum positive size that will fit in a V8 small integer ("smi").
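The sign problem is easy to see in a console; any result with bit 31 set comes back negative from a bitwise op, and you need an extra >>> 0 to get the unsigned value back:

0xffffffff | 0;     // -1, not 4294967295
0x80000000 | 0;     // -2147483648
0xffffffff >>> 0;   // 4294967295, but that's an extra step every time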
Here's code to split numbers into 30 low bits and 22 high bits:
function int52_30_get(i) {
    var lo = i & 0x3fffffff;            /* low 30 bits */
    var hi = (i - lo) / 0x40000000;     /* remaining high 22 bits */
    return { lo: lo, hi: hi };
}
You probably don't want to be creating objects though. These should get inlined (if you're actually bothering with functions at all):
function int52_30_get_lo(i) {
    return i & 0x3fffffff;
}

function int52_30_get_hi(i) {
    return (i - (i & 0x3fffffff)) / 0x40000000;
}
And to create the numbers from the low and high parts:
function int52_30_new_safe(hi, lo) {
    return (hi & 0x3fffff) * 0x40000000 + (lo & 0x3fffffff);
}
If you're really sure hi and lo are in range you can skip the masking:
function int52_30_new(hi, lo) {
    return hi * 0x40000000 + lo;
}
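Round-tripping through the 30/22 split, again with arbitrary example values:

var i = int52_30_new(0x12345, 0x2468ace);   // hi fits in 22 bits, lo in 30 bits
int52_30_get_hi(i).toString(16);            // "12345"
int52_30_get_lo(i).toString(16);            // "2468ace"
int52_30_get(i);                            // { lo: 38177486, hi: 74565 }, i.e. 0x2468ace and 0x12345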
Set the high and low parts individually:
/* set high part of i to hi */
i = (hi & 0x3fffff) * 0x40000000 + (i & 0x3fffffff);
/* set low part of i to lo */
i += (lo & 0x3fffffff) - (i & 0x3fffffff);
If you're sure that hi and lo are in range:
/* set high part of i to hi */
i = hi * 0x40000000 + (i & 0x3fffffff);
/* set low part of i to lo */
i += lo - (i & 0x3fffffff);
(These aren't functions because they modify i.)
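A quick sanity check of the two setters, with small arbitrary values:

var i = 5 * 0x40000000 + 7;               // hi = 5, lo = 7
i = 9 * 0x40000000 + (i & 0x3fffffff);    // hi is now 9, lo is still 7
i += 11 - (i & 0x3fffffff);               // lo is now 11, hi is still 9
i === 9 * 0x40000000 + 11;                // true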
For extra fun, a function to pull out arbitrary bitfields:
function int52_30_get_bits(i, lsb, nbits) {
    /* bring the target bits below bit 32 so that 32-bit ops can reach them */
    while (lsb >= 32) {
        i /= 4294967296;
        lsb -= 32;
    }
    return (i / (1 << lsb)) & ((1 << nbits) - 1);
}
(nbits must be <= 31. The failure mode when nbits is 32 is interesting, and is due to only the 5 low bits of the right-hand operand of << being significant, a flaw the JavaScript spec shares with the x86 ISA.)
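For example, pulling fields out of the value built earlier (arbitrary values again):

var i = int52_30_new(0x12345, 0x2468ace);
int52_30_get_bits(i, 0, 4).toString(16);    // "e", the lowest 4 bits of lo
int52_30_get_bits(i, 30, 8).toString(16);   // "45", the lowest 8 bits of hi
int52_30_get_bits(i, 32, 8).toString(16);   // "d1", bits 32..39, exercising the while loop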
It's perfectly possible to use the sign bit to store 53-bit binary numbers as integers from -2^53 to 2^53-1. I haven't done this but it should be easy enough. After that it starts to get a bit hairy, and you'll eventually run into the fact that there aren't enough floats to go round (many are NaNs) before you get to 2^64. Packing 63 binary digits into a float should be theoretically doable, but is left as an exercise for the reader :)
Another approach is to use typed arrays and create a Float view and an Int view: this lets you manipulate the underlying binary representation of floats directly. But then you have to start worrying about endianness and the like.
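Something like this, for instance, viewing the same 8 bytes as one float64 and as two uint32s:

var buf = new ArrayBuffer(8);
var f64 = new Float64Array(buf);
var u32 = new Uint32Array(buf);

f64[0] = 1.5;           // bit pattern 0x3FF8000000000000
// on a little-endian machine u32[0] holds the low half and u32[1] the high half;
// on a big-endian machine the two are swapped
u32[0].toString(16);    // "0"
u32[1].toString(16);    // "3ff80000" (little-endian)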
All the people who are suggesting string manipulation are just crazy.