Tags: python, numpy, multidimensional-array, array-broadcasting, numpy-indexing

Vectorized way to contract Numpy array using advanced indexing


I have a NumPy array of dimensions (d1,d2,d3,d4), for instance A = np.arange(120).reshape((2,3,4,5)). I would like to contract it to obtain B of dimensions (d1,d2,d4). The d3-indices to pick are collected in an indexing array Idx of dimensions (d1,d2). For each pair (x1,x2) of indices along (d1,d2), Idx gives the index x3 for which B should retain the whole corresponding d4-line of A, for example Idx = rng.integers(4, size=(2,3)) (with rng a NumPy random Generator).

To sum up, for all (x1,x2), I want B[x1,x2,:] = A[x1,x2,Idx[x1,x2],:].

Is there an efficient, vectorized way to do that, without using a loop? I'm aware that this is similar to Easy way to do nd-array contraction using advanced indexing in Python but I have trouble extending the solution to higher dimensional arrays.

MWE

import numpy as np

rng = np.random.default_rng()
A = np.arange(120).reshape((2,3,4,5))
Idx = rng.integers(4, size=(2,3))

# correct result:
B = np.zeros((2,3,5))
for i in range(2):
    for j in range(3):
        B[i,j,:] = A[i,j,Idx[i,j],:]

# what I would like, which doesn't work:
B = A[:,:,Idx[:,:],:]
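
As a sketch of why this attempt fails: with a single advanced index array surrounded by slices, NumPy broadcasts the whole (2, 3) index array into axis 2 independently of the slices, so the result has both sets of (2, 3) dimensions rather than pairing them up:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.arange(120).reshape((2, 3, 4, 5))
Idx = rng.integers(4, size=(2, 3))

# The (2, 3) index array replaces axis 2 wholesale; the leading
# slices are not paired with it, so the shape is not (2, 3, 5):
C = A[:, :, Idx, :]
print(C.shape)  # (2, 3, 2, 3, 5)
```

To pair the leading axes with Idx, those axes must themselves be given as (broadcastable) index arrays, which is what the accepted solution below does.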

Solution

  • Times for 3 alternatives:

    In [91]: %%timeit
        ...: B = np.zeros((2,3,5),A.dtype)
        ...: for i in range(2):
        ...:     for j in range(3):
        ...:         B[i,j,:] = A[i,j,Idx[i,j],:]
        ...: 
    
    11 µs ± 48.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
    
    In [92]: timeit A[np.arange(2)[:,None],np.arange(3),Idx]
    8.58 µs ± 44 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
    
    In [94]: timeit np.squeeze(np.take_along_axis(A, Idx[:,:,None,None], axis=2), axis=2)
    29.4 µs ± 448 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
    

Relative times may differ with larger arrays, but this size is good for checking correctness.
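
A minimal sketch verifying that both vectorized alternatives match the reference loop. In the fancy-indexing version, `np.arange(2)[:,None]` (shape (2, 1)), `np.arange(3)` (shape (3,)), and `Idx` (shape (2, 3)) broadcast to a common (2, 3) shape, so the triple selects `A[i, j, Idx[i, j], :]` for every (i, j); the untouched last axis is carried along:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.arange(120).reshape((2, 3, 4, 5))
Idx = rng.integers(4, size=(2, 3))

# Reference result from the explicit loop
B_loop = np.zeros((2, 3, 5), A.dtype)
for i in range(2):
    for j in range(3):
        B_loop[i, j, :] = A[i, j, Idx[i, j], :]

# Broadcasted fancy indexing: the three index arrays broadcast
# to shape (2, 3), picking one d3-slice per (i, j) pair
B_fancy = A[np.arange(2)[:, None], np.arange(3), Idx]

# take_along_axis needs Idx expanded to A's ndim; the resulting
# singleton axis 2 is then squeezed away
B_tal = np.squeeze(np.take_along_axis(A, Idx[:, :, None, None], axis=2), axis=2)

assert np.array_equal(B_loop, B_fancy)
assert np.array_equal(B_loop, B_tal)
print(B_fancy.shape)  # (2, 3, 5)
```

For arbitrary d1 and d2, the two `arange` index arrays can be built with `np.ogrid[:d1, :d2]`, which yields them already shaped for broadcasting.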