Assume we have a Fenwick Tree (BIT) like below:
In the tree, the green color indicates the value of a node, and the red color the range covered by that node (inclusive). At the bottom of the picture is the array from which the Fenwick tree is built. The BIT here stores sums, so it can answer queries for the sum of the elements in a range [l, r].
We know how to query for some index: for an index x we get the prefix sum of [1, x] by repeatedly moving to the parent node (using some bitmasking) until we reach index 0. Then we have calculated the prefix sum for the range [1, x]. Knowing this, we can calculate the sum of any range [l, r] with the formula query(r) - query(l-1).
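The query walk described above might be sketched in Python like this (the function names and the 1-indexed array layout are my own choices, not anything prescribed by the question):

```python
def query(tree, x):
    """Prefix sum of [1, x]; tree is the 1-indexed Fenwick array, tree[0] unused."""
    total = 0
    while x > 0:
        total += tree[x]
        x -= x & (-x)  # drop the lowest set bit to jump to the parent range
    return total

def range_sum(tree, l, r):
    """Sum of [l, r] using the formula from the text: query(r) - query(l-1)."""
    return query(tree, r) - query(tree, l - 1)
```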
But I have a problem with understanding the update operation. I mean, it's clear to me how to update a single point: we add some value y to the node at index x, and then add y to all nodes that have x in their range. For instance, if we update x=5 by y=7, then we also update node 6, then 8, then 16, then 32, and so on until the end (bitmasking one more time). After this update, the value at index 5 is 8, at index 6 (range [5, 6]) it is 10, and at index 8 (range [1, 8]) it is 53.
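The point update described above could be sketched like this (again, a minimal illustration with names of my own choosing):

```python
def update(tree, x, y):
    """Add y at index x and propagate to every node whose range covers x."""
    while x < len(tree):
        tree[x] += y
        x += x & (-x)  # move to the next node whose range also covers x
```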
What is important to see is that updating a concrete point affects every node that has x in its range. And this is exactly where my problem occurs.
For a range update by value y on [l, r], every page I read says that we should update index l by y, and then cancel the effect on elements beyond the range by updating r+1 by -y. So we literally update only e.g. [5, 6], but this has no impact on [1, 8], so if we query for index 8 we will get an invalid result.
I would rather expect a range update to look like this: iterate over [l, r] and update every index by y, where by "update" I mean the whole update operation, not just a single index. But it seems I am thinking about this wrong. I can't understand why it has to work this way, and how we should interpret this BIT (since the update has no impact on elements other than those in the range, maybe there is a different way to read from the BIT?).
I think you're just mixing up two definitions of "range update".
In a Fenwick tree, each node stores the sum of some contiguous elements in the original array. If you add a delta to a whole range of elements in the original array, then every node in the tree that covers an overlapping range needs to be updated by the correct amount proportional to the size of the overlap. This can take O(n) time.
But there's a fairly common kind of programming problem you can answer with Fenwick trees that goes something like this: given an array A, support two operations: add a value to every element in a range A[i]...A[j], and read the value of a single element A[i].
You solve it like this:

1. Replace A with an array of differences B, so that B[i] = A[i] - A[i-1]. Note that the elements of A are the prefix sums of B.
2. Replace B with the Fenwick tree of B, so that the prefix sums are all available in O(log N) time.

Now:

- To add d to A[i]...A[j], you only have to change two differences in B: B[i] += d; B[j+1] -= d. We don't have B, though. We have its Fenwick tree instead, so we need to make the corresponding two point updates to the Fenwick tree in O(log N) time.
- To read an element of A, we just query the Fenwick tree for the corresponding prefix sum of B.

This 2-point change to the Fenwick tree represents a range update in A, but A is not the array underlying the Fenwick tree. The Fenwick tree refers to the successive differences in A.
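The scheme above might be sketched like this (the class and method names are mine, and A is assumed to start as all zeros; this is a minimal illustration, not a full library):

```python
class DiffFenwick:
    """Fenwick tree over B[i] = A[i] - A[i-1]; prefix sums of B recover A[i]."""

    def __init__(self, n):
        self.tree = [0] * (n + 1)  # 1-indexed; A starts as all zeros

    def _update(self, i, d):
        """Point update on B: add d at index i."""
        while i < len(self.tree):
            self.tree[i] += d
            i += i & (-i)

    def range_add(self, i, j, d):
        """Add d to every element A[i]..A[j] via two point updates on B."""
        self._update(i, d)
        if j + 1 < len(self.tree):   # beyond the array there is nothing to cancel
            self._update(j + 1, -d)

    def value_at(self, i):
        """A[i] is the prefix sum B[1] + ... + B[i]."""
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)
        return total
```

Note how a query here returns a single element of A, not a range sum, which is exactly the "different way to read from the BIT" the question was asking about.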