While learning about Fenwick Trees, I found this implementation:
Source: https://algorithmist.com/wiki/Fenwick_tree
#include <vector>
using std::vector;

class Fenwick_Tree_Sum
{
public: // members must be public for the class to be usable from outside
    vector<int> tree;

    // Arg is the array on which we are going to work
    Fenwick_Tree_Sum(const vector<int>& Arg)
    {
        tree.resize(Arg.size());
        for (int i = 0; i < (int)tree.size(); i++)
            increase(i, Arg[i]);
    }

    // Increases the value of the i-th element by delta.
    void increase(int i, int delta)
    {
        for (; i < (int)tree.size(); i |= i + 1)
            tree[i] += delta;
    }

    // Returns the sum of the elements with indexes left..right, inclusive (zero-based).
    int getsum(int left, int right)
    {
        return sum(right) - sum(left - 1); // when left equals 0, sum(-1) returns 0
    }

    // Returns the prefix sum of the elements with indexes 0..ind, inclusive.
    int sum(int ind)
    {
        int sum = 0;
        while (ind >= 0)
        {
            sum += tree[ind];
            ind &= ind + 1;
            ind--;
        }
        return sum;
    }
};
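For reference, here is a minimal usage sketch with my own example data (not from the source); it assumes the class above is in the same file and that its members are public, as written above:

#include <iostream>

int main()
{
    std::vector<int> a = {3, 1, 4, 1, 5, 9, 2, 6}; // arbitrary example values
    Fenwick_Tree_Sum ft(a);
    std::cout << ft.getsum(2, 5) << '\n'; // 4 + 1 + 5 + 9 = 19
    ft.increase(3, 10);                   // element 3 becomes 11
    std::cout << ft.getsum(2, 5) << '\n'; // 4 + 11 + 5 + 9 = 29
    return 0;
}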
I can see i |= i + 1 and ind &= ind + 1; ind-- in the code. I know that i |= i + 1 just sets the lowest unset bit of i. But what does (i & (i + 1)) - 1 actually do?
Here are some examples:
----------------------------------------------
 i  (dec - binary)  |  (i & (i + 1)) - 1  (dec - binary)
----------------------------------------------
0 - 0000000000 | -1 - 1111111111
1 - 0000000001 | -1 - 1111111111
2 - 0000000010 | 1 - 0000000001
3 - 0000000011 | -1 - 1111111111
4 - 0000000100 | 3 - 0000000011
5 - 0000000101 | 3 - 0000000011
6 - 0000000110 | 5 - 0000000101
7 - 0000000111 | -1 - 1111111111
8 - 0000001000 | 7 - 0000000111
9 - 0000001001 | 7 - 0000000111
10 - 0000001010 | 9 - 0000001001
11 - 0000001011 | 7 - 0000000111
12 - 0000001100 | 11 - 0000001011
13 - 0000001101 | 11 - 0000001011
14 - 0000001110 | 13 - 0000001101
15 - 0000001111 | -1 - 1111111111
16 - 0000010000 | 15 - 0000001111
17 - 0000010001 | 15 - 0000001111
18 - 0000010010 | 17 - 0000010001
19 - 0000010011 | 15 - 0000001111
20 - 0000010100 | 19 - 0000010011
21 - 0000010101 | 19 - 0000010011
22 - 0000010110 | 21 - 0000010101
23 - 0000010111 | 15 - 0000001111
24 - 0000011000 | 23 - 0000010111
25 - 0000011001 | 23 - 0000010111
26 - 0000011010 | 25 - 0000011001
27 - 0000011011 | 23 - 0000010111
28 - 0000011100 | 27 - 0000011011
29 - 0000011101 | 27 - 0000011011
30 - 0000011110 | 29 - 0000011101
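The table above can be regenerated with a short sketch like the following (not from the original source; bitset<10> is used only to match the 10-bit display):

#include <bitset>
#include <iostream>

int main()
{
    for (int i = 0; i <= 30; i++)
    {
        int v = (i & (i + 1)) - 1;
        std::cout << i << " - " << std::bitset<10>(i)
                  << " | " << v << " - " << std::bitset<10>(v) << '\n';
    }
    return 0;
}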
If the bit pattern of i ends in a block of ones, say ...0111, then the bit pattern of i+1 ends in ...1000. Hence i & (i+1) clears all trailing ones of i: if i has k trailing ones, then i & (i+1) = i - (2^k - 1). In particular, if i is even (k = 0), i & (i+1) is simply equal to i. The subsequent -1 then turns the trailing zeros (which now include the positions of the ones that were just cleared) back into ones and clears the lowest set bit above them.

Taken together, one step ind = (ind & (ind + 1)) - 1 decreases ind by exactly 2^k, where k is the number of trailing ones of the previous value of ind, since i - (2^k - 1) - 1 = i - 2^k.
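If you want to check this identity mechanically, here is a small verification sketch (my own code, not part of the original; the bound 1000 is arbitrary):

#include <cassert>

int main()
{
    for (int i = 0; i <= 1000; i++)
    {
        // k = number of trailing 1-bits of i
        int k = 0;
        while ((i >> k) & 1)
            k++;
        // the combined step (i & (i + 1)) - 1 subtracts exactly 2^k
        assert((i & (i + 1)) - 1 == i - (1 << k));
    }
    return 0;
}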
What is the reason for doing this? It is exactly what makes computing a cumulative sum O(log n) in the worst case: each iteration of the loop in sum turns ind + 1 into ind & (ind + 1), which removes the lowest set bit of ind + 1, so the loop runs at most once per set bit of the index, i.e. O(log n) times.

To see why the result is correct, focus on what each node stores. In this zero-based layout, tree[i] is responsible for the sum of the elements with indexes (i - (1 << r)) + 1 to i, where r is the number of trailing 1-bits of i (equivalently, the position of the lowest set bit of i + 1); that lower bound is exactly i & (i + 1). To compute the total sum up to index i we therefore do not need to visit every index: because of this property it is enough to add tree[i], jump to (i & (i + 1)) - 1 = i - (1 << r), add that node, and so on until the index becomes negative.
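As a concrete illustration (my own example, not from the source), here is what sum(13) visits with the implementation above:

#include <iostream>

int main()
{
    int ind = 13; // arbitrary query index chosen for illustration
    // Prints the nodes visited by sum(13) and the range each one covers:
    // tree[13] covers 12..13, tree[11] covers 8..11, tree[7] covers 0..7,
    // then ind becomes -1 and the loop stops, having covered 0..13 in three steps.
    while (ind >= 0)
    {
        std::cout << "add tree[" << ind << "], which covers indexes "
                  << (ind & (ind + 1)) << ".." << ind << '\n';
        ind &= ind + 1;
        ind--;
    }
    return 0;
}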