Tags: javascript, v8

Can you check if a JavaScript number is 32 bits?


A lesson learned from libs like asm.js and one of Brendan Eich's many talks is that JavaScript's 32-bit integers are much faster than its default IEEE 754 floating point numbers, provided they're used consistently.

I'm writing a video game that exclusively uses V8 and would like to benchmark 32-bit ints in critical areas, and have learned that we can easily force a value to become 32-bit internally by using a bitwise operator, for example:

const myValue = 12 | 0;

Of course, the benchmark is only valid if I know something hasn't accidentally turned it into a float somewhere. I would like to write tests that verify this. Is it possible to reliably confirm that a variable is in fact currently stored as a 32-bit integer?

EDIT:
Note that I'm not looking for a way of removing decimal points; functions like Math.floor() and Math.trunc() can already do that (testable via x === Math.floor(x)). What I'm looking for is specifically exploiting non-obvious intricacies, such as the fact that a browser engine will sometimes treat a number as a float but other times as a 32-bit integer (x = 3 is NOT necessarily an integer in JavaScript). A real-world example of where 32-bit coercion speeds things up is code like ~~number which is insane amounts faster than Math.floor(number), but only works on numbers smaller than 2,147,483,647 (because beyond that, you're overflowing).
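
For concreteness, a quick sketch of that overflow (and of the fact that ~~ truncates toward zero rather than flooring):

console.log(Math.floor(2147483648.5)); // 2147483648
console.log(~~2147483648.5);           // -2147483648 (wrapped around: int32 overflow)
console.log(Math.floor(-2.5));         // -3 (floors toward -Infinity)
console.log(~~-2.5);                   // -2 (truncates toward zero)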


Solution

  • V8 developer here.

    Is it possible to reliably confirm that a variable is in fact currently stored as a 32-bit integer?

    Not from JavaScript.

    Not with special engine introspection features either; one reason is that this will change depending on circumstances. For example, a baseline tier (interpreter or simple compiler) might well make different decisions about how to store things compared to an optimizing compiler that kicks in later.

    Speaking of the latter, the one thing you can do is inspect generated optimized code in V8 using --print-opt-code (with a build that has disassembler support enabled). That's rather tedious though.
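
    For illustration, here's roughly what such an inspection could look like. This is a sketch under assumptions: it needs a d8 shell from a build with disassembler support, %OptimizeFunctionOnNextCall requires --allow-natives-syntax, and the file name inspect.js is made up.

    // inspect.js -- run as: d8 --allow-natives-syntax --print-opt-code inspect.js
    function addInt(a, b) {
      return (a + b) | 0;
    }
    addInt(1, 2);  // warm up so the compiler has type feedback
    addInt(3, 4);
    %OptimizeFunctionOnNextCall(addInt);
    addInt(5, 6);  // triggers optimization; the generated machine code is printed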

    A few specific claims here also deserve closer scrutiny:

    A lesson learned from libs like asm.js and one of Brendan Eich's many talks is that JavaScript's 32-bit integers are much faster than its default IEEE 754 floating point numbers, provided they're used consistently.

    I don't think that statement holds up to scrutiny. Avoiding int<->float conversions certainly helps performance, but there are many cases where sticking with floats is no slower than sticking with ints.
    (Other nitpicks: asm.js is not a library, Brendan is not exactly an authority on JavaScript performance, and the term "JavaScript's 32-bit integers" misleadingly suggests that JavaScript has 32-bit integers that you can choose to use, which it does not have.)

    we can easily force a value to become 32-bit internally by using a bitwise operator, for example: const myValue = 12 | 0;

    Not quite. As far as JavaScript semantics are concerned, there is no difference between 12 and 12 | 0. As far as implementation details are concerned, in V8 const v = 12 | 0; is constant-folded by the parser and hence is exactly equivalent to const v = 12; (or to const v = 12.0; for that matter).

    The asm.js-style pattern you probably meant to refer to here is to put | 0 annotations after operations, as in let x = (y+z)|0. That does indeed have an effect: it changes observable semantics, and most modern engines' optimizing compilers will understand it as a hint that it's probably most convenient/efficient to keep the value in an integer register. It's still not a way to force integer representation, but it's a hint that integer representation is likely a good choice.
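
    For concreteness, here's what that annotation style looks like in practice (an illustrative sketch; again, a hint, not a way to force integer representation):

    // asm.js-style |0 annotations after each operation. Per JS semantics they
    // truncate to int32; optimizing compilers typically read them as a hint
    // to keep the value in an integer register.
    function sumArray(arr) {
      let sum = 0;
      for (let i = 0; i < arr.length; i = (i + 1) | 0) {
        sum = (sum + (arr[i] | 0)) | 0;  // stays in the int32 domain throughout
      }
      return sum | 0;
    }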

    code like ~~number which is insane amounts faster than Math.floor(number)

    That's a prime example of an utterly misleading microbenchmark. One telltale sign: when you see upwards of a billion operations per second, it's a safe bet that the optimizing compiler has completely optimized away your test and you're measuring nothing at all. Besides, measurethat.net's framework has been shown to produce bogus results in the past. I don't feel like spending an hour right now to figure out what weirdness is going on this time; aside from dead-code elimination, my guess would be that the slow case spends so much time allocating heap numbers that the floor operations barely show up on a profile. Before drawing any useful conclusions from any microbenchmark, you must understand what it's actually doing under the hood! Profiling it is the bare minimum; inspecting generated code is better. The various JS benchmarking sites out there all utterly fail at this basic requirement.
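
    If you want to measure something like this yourself, the bare minimum looks roughly like the sketch below. It's illustrative only (the harness and file name are mine): the live sink defeats trivial dead-code elimination, and running it under node --prof gives you V8's sampling profile.

    // bench.js -- run as: node --prof bench.js && node --prof-process isolate-*-v8.log
    let sink = 0;  // accumulating into a live variable defeats trivial DCE
    function measure(label, fn) {
      const start = Date.now();
      for (let i = 0; i < 1e7; i++) sink += fn(i + 0.1);
      console.log(label, Date.now() - start, "ms");
    }
    measure("~~        ", (x) => ~~x);
    measure("Math.floor", (x) => Math.floor(x));
    console.log("sink:", sink);  // keep the results observably alive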

    As a quick counter-example, the following two functions compile to exactly the same optimized code:

    function f() { for (let i = 0; i < 1000; i++) i = ~~i; }
    function g() { for (let i = 0; i < 1000; i++) i = Math.floor(i); }
    

    If you change the loop bodies to ~~(i + 0.1) and Math.floor(i + 0.1), then both versions perform conversions to float and back, so the example no longer supports any "integers are faster" claim. There will be a ~20% difference in favor of ~~ in that case, but (a) that isn't exactly "insane", and (b) it's due to Math.floor having to execute a few more machine instructions to get cases like -0 and NaN right, which ~~ can simply truncate to 0.
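
    Spelled out (renamed so they don't clobber f and g above):

    // Both versions now convert to float and back on every iteration:
    function f2() { for (let i = 0; i < 1000; i++) i = ~~(i + 0.1); }
    function g2() { for (let i = 0; i < 1000; i++) i = Math.floor(i + 0.1); }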


    Regarding things said in comments:

    (3 | 0) === (3.1 - 0.1) // true

    That only proves that in JavaScript, all numbers behave as if they were doubles. It doesn't tell you anything about how the engine stores numbers internally.
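
    A quick illustration of that distinction:

    // Observable semantics are always IEEE 754 double semantics, regardless
    // of how the engine stores the value internally:
    console.log((3 | 0) === 3.0);    // true  -- |0 changes nothing observable here
    console.log(0.1 + 0.2 === 0.3);  // false -- classic double rounding artifact
    console.log(~~3.0 === 3);        // true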

    the Number type (the only kind of number that exist in javascript)

    Well, there are also BigInts nowadays. But since BigInts have to worry about potentially getting big (and also because they haven't received as much optimization attention yet), they're not faster than Numbers for small values.
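
    (For completeness, a tiny sketch of the difference:)

    const big = 2n ** 64n;     // BigInt: exact, arbitrary precision
    console.log(big + 1n);     // 18446744073709551617n
    console.log(2 ** 64 + 1);  // 18446744073709552000 -- rounded to the nearest double
    // console.log(big + 1);   // TypeError: BigInts and Numbers can't be mixed implicitly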

    you'd have to look at the bytecode produced by the optimiser

    Almost. Bytecode is not optimized. The optimizing compiler produces machine code, not bytecode.

    you said that 3.0 is a 32-bit int, which is (by spec) completely wrong (it's a 64-bit IEEE 754 float)

    It's tricky. By JS spec, all numbers are 64-bit floats. This question was about 32-bit storage chosen by the engine, though, and 3.0 can absolutely be stored as a 32-bit integer by an engine. V8's parser does not distinguish between 3.0 and 3, and V8 will usually initially store it as a 31-bit (sic!) integer. But there's no way you could tell -- there must not be, because per JS spec, all numbers must behave as if represented by 64-bit floats.
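
    A quick way to convince yourself of that:

    console.log(3.0 === 3);              // true -- the trailing .0 is not observable
    console.log((3.0).toString());       // "3"
    console.log(Number.isInteger(3.0));  // true -- "is mathematically an integer",
                                         //         says nothing about internal storage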

    This is actually for an already-completed indie video game [...] I thought, why not squeeze out an extra ~2% performance if possible? It's (possibly premature) optimization that I can definitely do without, but I thought experimenting couldn't hurt

    That information would have belonged in the original question, because it provides relevant framing for what is or isn't a helpful answer.
    Experimenting with something that doesn't move the needle is a waste of time; whether that qualifies as "hurt" is up to your definition.
    As explained above, you can't check for internal representation choices, but you can provide hints. If you have significant amounts of computation that could remain in the integer domain, an experiment you could do is to add | 0 after these operations and see if that makes a difference.
    If I was in your shoes, I would first use profiling to see if the functions in question even matter in the big picture.