algorithm, time-complexity

O(n log n) algorithm to find number of large inversions


I am working on a problem where I need to find the number of large inversions in an array. A pair of indices (i, j) is considered a "large inversion" if i < j, arr[i] > arr[j], and j - i > k for some given k; in other words, the two elements of the inversion pair are more than k positions apart in the array.

For example, if we have [2, 4, 1, 3] and k = 1, we have 2 large inversions: the elements of (2, 1) are a distance of 2 apart, and the elements of (4, 3) are a distance of 2 apart. (4, 1) is another inversion, but its elements are only a distance of 1 apart, so the inversion is not large enough.

I am familiar with the traditional approach of using a modified merge sort to count regular inversions, which works in O(n log n) time. However, I am unsure how to extend this idea to count "large inversions".

Can the merge sort approach be extended to count large inversions in O(n log n) time? Or is there a different approach that achieves this bound?

I tried computing all the inversion pairs and filtering out the ones whose indices are too close together, but an array can have Ω(n^2) inversion pairs, so just enumerating them already takes O(n^2) time.
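
For reference, my brute-force attempt is equivalent to the following (Python):

```python
def count_large_inversions_naive(arr, k):
    """O(n^2) reference: check every pair, keep inversions more than k apart."""
    n = len(arr)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if arr[i] > arr[j] and j - i > k
    )

print(count_large_inversions_naive([2, 4, 1, 3], 1))  # 2
```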


Solution

  • You can use a binary indexed tree (after coordinate compression) or an order statistic tree to count, as you iterate over the array, how many previously seen elements are greater than the current value; each query and update costs O(log n), for O(n log n) overall. To enforce the minimum index distance, delay insertions: add an element to the data structure only once it is more than k positions before the current index (e.g. at the end of the ith iteration of the loop, add arr[i - k] if that index exists, so it first becomes queryable at iteration i + 1, a distance of k + 1 away). A sketch is given below.
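
For concreteness, here is a minimal sketch of this idea in Python (the function and helper names are my own; it assumes the pair condition i < j, arr[i] > arr[j], j - i > k inferred from the question's example):

```python
def count_large_inversions(arr, k):
    """Count pairs (i, j) with i < j, arr[i] > arr[j], and j - i > k."""
    n = len(arr)
    # Coordinate compression: map each value to a rank in 1..m (BIT is 1-indexed).
    ranks = {v: r for r, v in enumerate(sorted(set(arr)), start=1)}
    m = len(ranks)
    tree = [0] * (m + 1)  # Fenwick / binary indexed tree over value ranks

    def add(pos):  # record one occurrence of a value rank
        while pos <= m:
            tree[pos] += 1
            pos += pos & -pos

    def count_le(pos):  # how many recorded values have rank <= pos
        total = 0
        while pos > 0:
            total += tree[pos]
            pos -= pos & -pos
        return total

    inversions = 0
    inserted = 0
    for j in range(n):
        # Delayed insertion: arr[j - k - 1] is now more than k positions back.
        if j - k - 1 >= 0:
            add(ranks[arr[j - k - 1]])
            inserted += 1
        # Count the far-enough previous elements strictly greater than arr[j].
        inversions += inserted - count_le(ranks[arr[j]])
    return inversions
```

With [2, 4, 1, 3] and k = 1 this returns 2, matching the example above; with k = 0 it counts ordinary inversions (3 for that array). Because insertions lag the current index by k + 1 positions, each query only ever sees elements that are far enough to the left.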