Problem: Check whether a non-negative integer is of the form 2^j - 2^k with j >= k >= 0, i.e. a difference of two powers of 2.
My solution: such a number n is a contiguous run of 1's in binary, e.g. 00011110. So I turn off that (rightmost) contiguous run of 1's and then do a zero check on n.
The steps of the solution:

00011110
00011111 (turn on the trailing 0's)
00000000 (then turn off the trailing 1's)

using the formula (x | (x - 1)) & ((x | (x - 1)) + 1).
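For concreteness, here is that first version as a small C sketch (my own, for illustration; it assumes 32-bit unsigned arithmetic and the function name is made up):

#include <stdbool.h>
#include <stdint.h>

/* true if x == 2^j - 2^k for some j >= k >= 0,
   i.e. if x is a (possibly empty) contiguous run of 1-bits */
static bool is_diff_of_pow2(uint32_t x)
{
    uint32_t filled = x | (x - 1);        /* turn on the trailing 0's               */
    return ((filled + 1) & filled) == 0;  /* turn off the trailing 1's, then check  */
}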
But a more efficient formula (maybe because it uses fewer operations), which does not use literals, is ((x & -x) + x) & x, followed by a zero check. It is said to do the same thing, but I can't derive it from my formula. Can someone explain this to me?
EDIT: 32-bit word, 2's complement
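For reference, the shorter version written the same way (again my own sketch; the unsigned wraparound of uint32_t stands in for the 2's-complement negation mentioned in the edit):

#include <stdbool.h>
#include <stdint.h>

static bool is_diff_of_pow2_v2(uint32_t x)
{
    /* x & -x isolates the lowest set bit; adding it to x carries away the
       low run of 1's, and the final & x checks that no 1-bits are left */
    return (((x & -x) + x) & x) == 0;
}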
You asked for an algebraic proof connecting the two expressions, so here is one, though some of the steps are not entirely simple:
((x | (x - 1)) + 1) & (x | (x - 1))
// rename x | (x - 1) into blsfill(x)
(blsfill(x) + 1) & blsfill(x)
// the trailing zeroes that get filled on the right side of the & don't matter,
// they end up being reset by the & anyway
(blsfill(x) + 1) & x
// filling the trailing zeroes and adding 1,
// is the same thing as skipping the trailing zeroes and adding the least-set-bit
(x + blsi(x)) & x
// rewrite blsi into elementary operations
(x + (x & -x)) & x
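If the derivation is right, the two expressions should agree as values for every 32-bit input, not just in their zero/non-zero outcome. A small brute-force sketch (my own, for illustration) that checks this exhaustively:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t x = 0;
    do {
        uint32_t a = (x | (x - 1)) & ((x | (x - 1)) + 1);
        uint32_t b = ((x & -x) + x) & x;
        if (a != b) {
            printf("mismatch at %u: %u vs %u\n", x, a, b);
            return 1;
        }
    } while (++x != 0);
    puts("the two expressions agree for every 32-bit value");
    return 0;
}

It loops over all 2^32 values, so it takes a few seconds, but it is a handy cross-check of the steps above.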