python python-3.x numba

From within a Python function, how can I tell whether it is being executed on the GPU with Numba or called regularly on the host/CPU?


I have a function that I sometimes call with Numba as a device function, to execute on the GPU, and sometimes call directly from regular Python on the host:

from numba import cuda

def process():
    # perform computation
    ...

process_cuda = cuda.jit(device=True)(process)

Sometimes I call process() directly from Python, and sometimes I call the process_cuda wrapper from within a Numba CUDA kernel, as sketched below.
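
For context, here is a minimal, self-contained sketch of the device-function path. The kernel name, the array argument, and the doubling computation are just placeholders for illustration (process is given an argument here, unlike the stub above; the real function performs other computation):

from numba import cuda
import numpy as np

def process(x):
    # placeholder computation
    return x * 2

process_cuda = cuda.jit(device=True)(process)

@cuda.jit
def my_kernel(arr):
    # each thread applies the device function to one element
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] = process_cuda(arr[i])

arr = np.arange(16, dtype=np.float64)
my_kernel[1, 32](arr)   # launching requires a CUDA-capable GPU
print(arr)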

My question: how can I tell, from within the process function, whether it was called directly from Python or whether it is executing as a Numba device function?


Solution

  • Numba offers a function called cuda.current_context(). When the code is executing on a GPU, this function returns the current CUDA context; otherwise, when running on the CPU, it returns None.

    So, we can store its return value in a variable inside process() and use it to check whether the code is running on the CPU or the GPU:

    from numba import cuda

    def process():
        # get the current CUDA context (None when there is no active GPU context)
        cuda_context = cuda.current_context()
        if cuda_context:
            return 'GPU'
        else:
            return 'CPU'
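
    For example, calling it directly from regular Python on the host is just a normal call (a minimal sketch; what it reports depends on whether a CUDA context is available on that machine):

    result = process()   # plain Python call on the host
    print(result)        # 'GPU' or 'CPU'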
    

    If there is anything I did wrong, please share it with me.