I'm trying to write a short piece of code that calculates the Hamming weight of an integer:
```cpp
class Solution {
public:
    int hammingWeight(int n) {
        if(n==0){
            return 0;
        }else{
            int a=1;
            while(a<=(float)n/2){
                a*=2;
            }
            return 1+hammingWeight(n-a);
        }
    }
};
```
However, it gives an error for `n = 2147483645`:

```
Line 9: Char 18: runtime error: signed integer overflow: 1073741824 * 2 cannot be represented in type 'int' (solution.cpp)
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior solution.cpp:9:18
```
I don't understand how; I never need to compute `1073741824 * 2` anywhere in my calculation. Also, my code works if instead of `a<=(float)n/2` I just write `a<=n/2`.
The difference between `a<=(float)n/2` and `a<=n/2` is that in the former, `a` is converted to `float` so it can be compared with the `float` expression `(float)n/2`.
During this conversion some precision is lost, because the 32-bit `float` representation does not have enough bits to represent 2147483645 accurately. In this case the value becomes 2147483648 (the closest value that can be represented in a `float`), which changes the comparison outcome.
You can observe this with the following minimal example:

```cpp
#include <iostream>

int main() {
    int n = 2147483645;
    float f = n;
    std::cout << std::fixed << f;
}
```

Output:

```
2147483648.000000
```
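
To tie this back to the loop condition: when `a` reaches 1073741824, the `float` comparison is still true (because `(float)n/2` rounds to 1073741824.0f), so the loop body runs once more and `a*=2` overflows, whereas the integer comparison against `n/2 == 1073741822` is already false. Here is a small sketch of that difference (assuming the usual IEEE 754 single-precision `float`):

```cpp
#include <iostream>

int main() {
    int n = 2147483645;
    int a = 1073741824;  // the last value a takes inside the loop

    std::cout << std::boolalpha;
    // (float)n rounds up to 2147483648.0f, so (float)n/2 is exactly 1073741824.0f
    std::cout << (a <= (float)n / 2) << '\n';  // true  -> a *= 2 would overflow
    // integer division: n/2 is 1073741822
    std::cout << (a <= n / 2) << '\n';         // false -> the loop stops in time
}
```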
A side note: a 64-bit `double` does have enough bits to represent 2147483645 exactly, so if you change the cast to `double` you should get the same result as without the cast (see minimal demo).
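
A minimal sketch of the `double` variant (just an illustration of the claim above; the exact printed value assumes a 64-bit IEEE 754 `double`):

```cpp
#include <iostream>

int main() {
    int n = 2147483645;
    double d = n;                 // a 64-bit double stores 2147483645 exactly
    std::cout << std::fixed << d << '\n';      // prints 2147483645.000000

    int a = 1073741824;
    std::cout << std::boolalpha
              << (a <= (double)n / 2) << '\n'; // false, same as the plain integer comparison
}
```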
You can find more info about the limitations of floating point representations here: Which is the first integer that an IEEE 754 float is incapable of representing exactly?