What are the differences between this line:
var a = parseInt("1", 10); // a === 1
and this line:
var a = +"1"; // a === 1
This jsperf test shows that the unary operator is much faster in the current Chrome version; does that also hold for node.js?
If I try to convert strings which are not numbers, both return NaN:
var b = parseInt("test", 10); // isNaN(b) === true
var b = +"test"; // isNaN(b) === true
So when should I prefer parseInt over the unary plus (especially in node.js)?
Edit: and how does the double tilde operator ~~ differ from these two?
Well, here are a few differences I know of:
An empty string "" evaluates to 0, while parseInt evaluates it to NaN. IMO, a blank string should be NaN.
+'' === 0;              //true
isNaN(parseInt('',10)); //true
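If you want the unary +'s conversion but with NaN for blank input, a minimal sketch (the helper name is my own) is to guard for whitespace-only strings first:

```javascript
// Hypothetical helper: treat blank/whitespace-only strings as NaN,
// otherwise convert with the unary plus.
function toNumberStrict(s) {
  return s.trim() === '' ? NaN : +s;
}

toNumberStrict('');     // NaN (unlike +'', which is 0)
toNumberStrict('2.3');  // 2.3
toNumberStrict('test'); // NaN
```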
The unary + acts more like parseFloat since it also accepts decimals. parseInt, on the other hand, stops parsing when it sees a non-numerical character, such as the period intended as a decimal point.
+'2.3' === 2.3;           //true
parseInt('2.3',10) === 2; //true
parseInt and parseFloat parse the string left to right and build the number as they go. If they see an invalid character, they return what has been parsed so far as a number, or NaN if nothing was parsed as a number. The unary +, on the other hand, returns NaN if the entire string is not convertible to a number.
parseInt('2a',10) === 2; //true
parseFloat('2a') === 2; //true
isNaN(+'2a'); //true
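If you want parseInt's integer semantics but the unary +'s strictness about trailing garbage, one option (a sketch; the regex only covers plain base-10 integers) is to validate the string before parsing:

```javascript
// Hypothetical helper: parse a base-10 integer, but reject strings
// with trailing non-digit characters instead of silently truncating.
function parseIntStrict(s) {
  return /^[-+]?\d+$/.test(s.trim()) ? parseInt(s, 10) : NaN;
}

parseIntStrict('2');  // 2
parseIntStrict('2a'); // NaN (parseInt('2a', 10) would return 2)
```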
As seen in the comment of @Alex K., parseInt and parseFloat parse character by character. This means hex notation fails for both, and exponent notation fails for parseInt, since the x and e are treated as non-numerical characters (at least in base 10; parseFloat does understand exponent notation, and parseInt can parse hex if you pass radix 16). The unary + converts both notations properly.
parseInt('2e3',10) === 2; //true. This is supposed to be 2000
+'2e3' === 2000; //true. This one's correct.
parseInt("0xf", 10) === 0; //true. This is supposed to be 15
+'0xf' === 15; //true. This one's correct.
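As for the double tilde ~~ from the question: it converts with the same rules as the unary + and then truncates the result to a 32-bit integer, so NaN becomes 0. A quick way to see all the differences side by side is a small comparison loop (the input list is just an illustration):

```javascript
// Compare the conversions side by side. ~~ is double bitwise NOT:
// it converts like the unary + and then truncates to a 32-bit
// integer, turning NaN into 0.
const inputs = ['', '2.3', '2a', '2e3', '0xf', 'test'];
for (const s of inputs) {
  console.log(JSON.stringify(s), parseInt(s, 10), parseFloat(s), +s, ~~s);
}
```

Note that ~~ silently loses the fractional part ( ~~'2.3' is 2 ) and masks failures ( ~~'test' is 0 ), so it is only suitable when you want an integer and a 0 fallback.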