
Fast square of absolute value of complex numbers with cupy or otherwise


When comparing the magnitudes of complex numbers (essentially sqrt(real² + imag²)) to find the largest absolute values, it suffices to compare the squares of the absolute values instead, which skips the slow sqrt() operation and is therefore faster.
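
For instance, since sqrt() is monotonic for non-negative values, picking the index of the largest squared magnitude gives the same result as comparing the magnitudes themselves (a small NumPy illustration):

import numpy as np

a = np.array([3 + 4j, 1 - 2j, -4 + 1j])
# sqrt is monotonic on non-negative values, so the index of the largest
# squared magnitude is also the index of the largest magnitude
assert np.argmax(a.real**2 + a.imag**2) == np.argmax(np.abs(a))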

How can this be done efficiently with CuPy or otherwise? I also did some benchmarking with the code below.

Comparing the abs_only() version (4381.648 us) with the abs_sq_temp() version, which calculates the absolute value, stores it into temp, and then squares it (4744.022 us), the squaring on the GPU adds a mere 362 us. So it seems plausible that an efficient way of calculating the absolute square could take about twice that, or even closer to 362 us total, if the real and imag parts of the complex numbers could be squared in place concurrently and then added. But how to do that in CuPy?
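
For instance, one variant along those lines (just a sketch using cp and arr from the benchmark code below, not included in the benchmark itself) would reuse the temp buffer so that the squaring step allocates no second array, although it still pays for the sqrt():

def abs_sq_temp_inplace():
    # like abs_sq_temp(), but the squaring writes back into temp,
    # so only one output-sized array is allocated
    temp = cp.absolute(arr)
    cp.multiply(temp, temp, out=temp)
    return temp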

It is regrettable that such a "square of the absolute value of a complex number" function is not already included in CuPy or NumPy. The implementation would likely be identical to np.absolute() and cp.absolute(), except that the final square root would not be taken, so it would be faster than .abs().
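
To spell out the identity: |z|² = z.real² + z.imag², which is exactly the quantity absolute() computes before taking the square root. For example, in NumPy:

import numpy as np

z = np.array([3 + 4j, -1 + 2j])
abs_sq_direct  = z.real**2 + z.imag**2   # what a hypothetical abs-squared function would compute
abs_sq_via_abs = np.abs(z)**2            # the current route: take the sqrt, then undo it by squaring
assert np.allclose(abs_sq_direct, abs_sq_via_abs)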

Below is the benchmark code:

import cupy as cp
from cupyx.profiler import benchmark
import time
import numpy as np

# Generate a large complex array
arr = cp.random.random(10000000) + 1j * cp.random.random(10000000)

def abs_only():
    return cp.absolute(arr)

def abs_sq():
    return cp.absolute(arr)**2

def abs_sq_temp():
    temp = cp.absolute(arr)
    return temp*temp

def conj():
    return arr*cp.conj(arr)

def real_imag():
    return cp.real(arr)**2 + cp.imag(arr)**2

# making benchmarks
bench0 = benchmark(abs_only, n_repeat=20)
bench1 = benchmark(abs_sq, n_repeat=20)
bench2 = benchmark(abs_sq_temp, n_repeat=20)
bench3 = benchmark(conj, n_repeat=20)
bench4 = benchmark(real_imag, n_repeat=20)

print(bench0)
print(bench1)
print(bench2)
print(bench3)
print(bench4)

# sanity check with numpy gives much longer, more realistic time for CPU
arr2 = np.random.random(10000000) + 1j * np.random.random(10000000)
start_time = time.time()
plain_abs_numpy = np.abs(arr2)
time_plain_abs = time.time() - start_time
print(f"\nOutside the benchmark() function, CPU takes {time_plain_abs*1e6:.3f} us with np.abs()")


'''  The results:
abs_only            GPU-0:  4381.648 us   +/- 292.389 (min:  4261.536 / max:  5529.536) us
abs_sq              GPU-0: 21369.104 us   +/- 180.435 (min: 21300.129 / max: 22085.632) us
abs_sq_temp         GPU-0:  4744.022 us   +/- 38.082 (min:  4694.080 / max:  4829.056) us
conj                GPU-0:  7396.042 us   +/- 60.760 (min:  7289.728 / max:  7508.256) us
real_imag           GPU-0: 38408.486 us   +/- 211.628 (min: 38300.770 / max: 39266.209) us

Outside the benchmark() function, CPU takes 32956.123 us with np.abs()'''


Solution

  • I found an answer.

    # this cupy kernel is to calculate cp.abs(complex64)**2
    
    squared_abs64 = cp.ElementwiseKernel(
        'complex64 x',
        'float64 z',
        'z = x.real() * x.real() + x.imag() * x.imag()',
        'squared_abs64')
    

    It can be called like any function, with squared_abs64(inputarray), as long as the inputarray elements are of cupy.complex64 type.
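
    For example (the names arr32 and squared_abs128 below are just illustrative, not part of the original answer):

    # the benchmark arrays above are complex128 by default, so cast first
    arr32 = arr.astype(cp.complex64)
    result = squared_abs64(arr32)        # array of |z|**2 values, no square root involved

    # an analogous kernel for complex128 input should look the same,
    # since the CUDA complex<double> type also provides .real() and .imag()
    squared_abs128 = cp.ElementwiseKernel(
        'complex128 x',
        'float64 z',
        'z = x.real() * x.real() + x.imag() * x.imag()',
        'squared_abs128')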