python numpy scipy ranking

Ranking of numpy array with possible duplicates, in pure numpy/scipy


I have a numpy array of floats/ints and want to map each element to its rank.

If an array doesn't have duplicates the problem can be solved by the following code:

In [49]: a1
Out[49]: array([ 0.1,  5.1,  2.1,  3.1,  4.1,  1.1,  6.1,  8.1,  7.1,  9.1])

In [50]: a1.argsort()
Out[50]: array([0, 5, 2, 3, 4, 1, 6, 8, 7, 9])

...or .argsort().argsort() in the general case; for a 2D array, apply the same trick along an axis.
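To see why the double argsort is needed in general, here is a tiny made-up array whose sort permutation is not its own inverse (in the a1 example above, argsort alone happens to give the ranks only because that permutation is self-inverse):

```python
import numpy as np

# argsort alone returns the permutation that sorts the array, not the
# ranks; argsort applied twice returns the inverse permutation, which
# is exactly the rank of each element.
b = np.array([0.3, 0.1, 0.2])
b.argsort()             # [1 2 0] -- indices that would sort b
b.argsort().argsort()   # [2 0 1] -- rank of each element
```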

Now I want to extend this method to arrays with possible duplicates, so that duplicates are mapped to the same value. For example, I want this array:

a2 = np.array([0.1, 1.1, 2.1, 3.1, 4.1, 1.1, 6.1, 7.1, 7.1, 1.1])

to be mapped to any of the following three:

0 1 4 5 6 1 7 8 8 1       # a) minimum rank
0 3 4 5 6 3 7 9 9 3       # b) maximum rank
0 2 4 5 6 2 7 8.5 8.5 2   # c) average rank

In cases a) and b), duplicates are mapped to the minimum and maximum rank among them, respectively. Case c) is simply the average of the ranks from cases a) and b).
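For what it's worth, newer scipy versions expose exactly these three conventions via the method keyword of scipy.stats.rankdata; its ranks are 1-based, so subtracting 1 matches the outputs above:

```python
import numpy as np
from scipy.stats import rankdata

a2 = np.array([0.1, 1.1, 2.1, 3.1, 4.1, 1.1, 6.1, 7.1, 7.1, 1.1])

# rankdata returns 1-based ranks, so subtract 1 to get 0-based ranks
r_min = rankdata(a2, method='min') - 1      # a) 0 1 4 5 6 1 7 8 8 1
r_max = rankdata(a2, method='max') - 1      # b) 0 3 4 5 6 3 7 9 9 3
r_avg = rankdata(a2, method='average') - 1  # c) 0 2 4 5 6 2 7 8.5 8.5 2
```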

Any suggestions?

EDIT (efficiency requirements)

In the initial description I forgot to mention my speed requirement: I want a solution in pure numpy/scipy functions that avoids the overhead of native Python loops. For example, consider the solution proposed by Richard, which solves the problem correctly but is quite slow:

def argsortdup(a1):
    # searchsorted on the sorted copy returns, for each element, the index
    # of its leftmost occurrence, i.e. its minimum rank.
    sorted_a1 = np.sort(a1)  # renamed to avoid shadowing the built-in 'sorted'
    ranked = []
    for item in a1:
        ranked.append(sorted_a1.searchsorted(item))
    return np.array(ranked)

In [86]: a2 = np.array([ 0.1,  1.1,  2.1,  3.1,  4.1,  1.1,  6.1,  7.1,  7.1,  1.1])

In [87]: %timeit a2.argsort().argsort()
1000000 loops, best of 3: 1.55 us per loop

In [88]: %timeit argsortdup(a2)
10000 loops, best of 3: 25.6 us per loop

In [89]: a = np.arange(0.1, 1000.1)

In [90]: %timeit a.argsort().argsort()
10000 loops, best of 3: 24.5 us per loop

In [91]: %timeit argsortdup(a)
1000 loops, best of 3: 1.14 ms per loop

In [92]: a = np.arange(0.1, 10000.1)

In [93]: %timeit a.argsort().argsort()
1000 loops, best of 3: 303 us per loop

In [94]: %timeit argsortdup(a)
100 loops, best of 3: 11.9 ms per loop

It is clear from the timings above that argsortdup is roughly 15-45 times slower than a.argsort().argsort(). The main reason is its use of a Python-level loop and list appends.
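The Python loop in argsortdup can be eliminated entirely: np.searchsorted accepts a whole array of queries, so the same minimum-rank result comes out of a single vectorized call (argsortdup_vec is just an illustrative name):

```python
import numpy as np

def argsortdup_vec(a1):
    # One vectorized call: for each element, find the leftmost insertion
    # point in the sorted array, i.e. the minimum rank among its duplicates.
    return np.searchsorted(np.sort(a1), a1)

a2 = np.array([0.1, 1.1, 2.1, 3.1, 4.1, 1.1, 6.1, 7.1, 7.1, 1.1])
argsortdup_vec(a2)  # -> array([0, 1, 4, 5, 6, 1, 7, 8, 8, 1])
```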


Solution

  • After upgrading to the latest version of scipy, as suggested by @WarrenWeckesser in the comments, scipy.stats.rankdata turns out to be faster than both scipy.stats.mstats.rankdata and np.searchsorted, making it the fastest way to rank larger arrays.

    In [1]: import numpy as np
    
    In [2]: from scipy.stats import rankdata as rd
       ...: from scipy.stats.mstats import rankdata as rd2
       ...: 
    
    In [3]: array = np.arange(0.1, 1000000.1)
    
    In [4]: %timeit np.searchsorted(np.sort(array), array)
    1 loops, best of 3: 385 ms per loop
    
    In [5]: %timeit rd(array)
    10 loops, best of 3: 109 ms per loop
    
    In [6]: %timeit rd2(array)
    1 loops, best of 3: 205 ms per loop
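If scipy is not available, the average-rank variant can still be computed in pure numpy with the same sort-plus-cumsum approach that rankdata uses internally; rank_average below is a hypothetical helper written as a sketch of that idea, returning 0-based ranks to match the outputs earlier in the question:

```python
import numpy as np

def rank_average(a):
    # Stable sort so equal elements keep their original order.
    sorter = np.argsort(a, kind='mergesort')
    inv = np.empty_like(sorter)
    inv[sorter] = np.arange(len(a))  # inverse permutation = ordinal ranks
    a_sorted = a[sorter]
    # True at the start of each run of equal values in the sorted array.
    obs = np.r_[True, a_sorted[1:] != a_sorted[:-1]]
    dense = obs.cumsum()[inv]        # dense (1-based) rank of each element
    # Start index of each tie group, plus the array length as a sentinel.
    count = np.r_[np.nonzero(obs)[0], len(a)]
    # Average of the minimum and maximum 0-based ranks within each group.
    return 0.5 * (count[dense] + count[dense - 1] - 1)

a2 = np.array([0.1, 1.1, 2.1, 3.1, 4.1, 1.1, 6.1, 7.1, 7.1, 1.1])
rank_average(a2)  # -> [0. 2. 4. 5. 6. 2. 7. 8.5 8.5 2.]
```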