Can someone explain why Brian Kernighan's algorithm takes O(log N) to count the set bits (1s) in an integer? A simple implementation of this algorithm is below (in Java):
int count_set_bits(int n) {
    int count = 0;
    while (n != 0) {
        n &= (n - 1);
        count++;
    }
    return count;
}
I understand how it works by clearing the rightmost set bit one by one until it becomes 0, but I just don't know how we get O(log N).
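For example (this is just an illustrative trace with an arbitrary value of 12, not something from the algorithm itself), each iteration clears one set bit:

public class BitTrace {
    public static void main(String[] args) {
        int n = 12;   // binary 1100, two set bits
        int count = 0;
        while (n != 0) {
            System.out.printf("n = %4s -> n & (n-1) = %4s%n",
                    Integer.toBinaryString(n), Integer.toBinaryString(n & (n - 1)));
            n &= (n - 1);   // 1100 -> 1000 -> 0000
            count++;
        }
        System.out.println("set bits: " + count);   // prints 2 after two iterations
    }
}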
This algorithm goes through as many iterations as there are set bits. So if we have a 32-bit word with only the high bit set, it goes through the loop only once. In the worst case, where every bit is set, it passes once per bit. An integer n has about log2(n) bits (more precisely, floor(log2 n) + 1), hence the worst case is O(log n).
Here's your code annotated at the important bits (pun intended):
int count_set_bits(int n) {
    int count = 0;        // count accumulates the total bits set
    while (n != 0) {
        n &= (n - 1);     // clear the least significant set bit
        count++;
    }
    return count;
}
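As a quick sanity check (not part of the answer above, and the class name is just a placeholder), you can compare the loop against Java's built-in Integer.bitCount and see the worst case of 32 iterations for an int with all bits set:

public class KernighanCheck {
    static int count_set_bits(int n) {
        int count = 0;
        while (n != 0) {
            n &= (n - 1);   // one iteration per set bit
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int allOnes = -1;                                // all 32 bits set: the worst case
        System.out.println(count_set_bits(allOnes));     // 32 (32 iterations)
        System.out.println(Integer.bitCount(allOnes));   // 32, same result
        System.out.println(count_set_bits(1 << 31));     // 1: only the high bit set, one iteration
    }
}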