
Sharing GPU device memory between two separate CUDA processes


Let's take a scenario where there are two CUDA processes, P1 and P2, that both do some work on common data; the common data may come from a shared library.

  1. When launching the two processes, will the common data be loaded only once in GPU device memory?
  2. If P2 is launched some time later, will the common data be copied into the GPU again, or will P2 reuse the data already present in device memory?
  3. Is data sharing among different CUDA processes a common practice? Can you give an example where two different processes co-run on the GPU using a common data block (assuming there is only one copy of that data in device memory and the data is read-only)?

Solution

    1. When launching the two processes, will the common data be loaded only once in GPU device memory?

    No, the data will be loaded twice: each process has its own CUDA context and its own allocations. However, you can use the CUDA driver APIs cuMemExportToShareableHandle and cuMemImportFromShareableHandle to export an allocation from one process and import it into the other, so that both processes map the same physical device memory.
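    As a rough sketch, here is what the exporting side of that driver-API flow could look like. It assumes Linux (POSIX file-descriptor handles) and device 0, error handling is reduced to one macro, and the fd transport between processes is only described in comments; treat it as an illustration, not a complete program.

```c
// P1 (exporting side): allocate device memory with the CUDA virtual
// memory management API and export it as a shareable OS handle.
#include <cuda.h>
#include <stdio.h>
#include <stdlib.h>

#define CHECK(call) do { CUresult r = (call); if (r != CUDA_SUCCESS) { \
    fprintf(stderr, "CUDA error %d at line %d\n", (int)r, __LINE__);   \
    exit(1); } } while (0)

int main(void) {
    CHECK(cuInit(0));
    CUdevice dev;
    CHECK(cuDeviceGet(&dev, 0));
    CUcontext ctx;
    CHECK(cuCtxCreate(&ctx, 0, dev));

    // The allocation must request a shareable handle type up front.
    CUmemAllocationProp prop = {0};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = dev;
    prop.requestedHandleTypes = CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR;

    // Sizes passed to cuMemCreate must be a multiple of the granularity.
    size_t granularity;
    CHECK(cuMemGetAllocationGranularity(&granularity, &prop,
                                        CU_MEM_ALLOC_GRANULARITY_MINIMUM));

    CUmemGenericAllocationHandle handle;
    CHECK(cuMemCreate(&handle, granularity, &prop, 0));

    // Export the allocation as a POSIX file descriptor. P1 would send
    // this fd to P2 over a Unix domain socket (SCM_RIGHTS); P2 then calls
    //   cuMemImportFromShareableHandle(&handle, (void *)(uintptr_t)fd,
    //       CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR);
    // and maps the handle into its own address space with
    // cuMemAddressReserve + cuMemMap + cuMemSetAccess. Both processes
    // then see the same physical device memory.
    int fd;
    CHECK(cuMemExportToShareableHandle((void *)&fd, handle,
              CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR, 0));
    printf("exported fd %d; pass it to the importing process\n", fd);
    return 0;
}
```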

    2. If P2 is launched some time later, will the common data be copied into the GPU again, or will P2 reuse the data already present in device memory?

    No, the data will not be reused; P2 will copy it into the GPU again unless the two processes explicitly share the allocation through IPC.

    3. Is data sharing among different CUDA processes a common practice?

    Yes, it is. You can find an example of IPC (inter-process communication) with CUDA in the simpleIPC sample, which is part of NVIDIA's CUDA samples repository on GitHub.
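    For the read-only single-copy case described in question 3, the core of that sample boils down to the cudaIpcGetMemHandle / cudaIpcOpenMemHandle pair from the CUDA runtime. Below is a minimal sketch, with error checking omitted; the file name ipc_handle.bin and the argv-based role selection are illustrative assumptions (the real sample uses more robust inter-process plumbing).

```c
// Producer (P1) allocates device memory and exports an IPC handle;
// consumer (P2) opens that handle and maps the very same allocation,
// so only one copy of the data lives in device memory.
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc > 1) {                       /* producer role (P1) */
        float *d_data;
        cudaMalloc((void **)&d_data, 1024 * sizeof(float));

        cudaIpcMemHandle_t handle;
        cudaIpcGetMemHandle(&handle, d_data);

        FILE *f = fopen("ipc_handle.bin", "wb");
        fwrite(&handle, sizeof(handle), 1, f);
        fclose(f);

        getchar();                        /* keep the allocation alive for P2 */
        cudaFree(d_data);
    } else {                              /* consumer role (P2) */
        cudaIpcMemHandle_t handle;
        FILE *f = fopen("ipc_handle.bin", "rb");
        fread(&handle, sizeof(handle), 1, f);
        fclose(f);

        float *d_data;                    /* P1's allocation, mapped into P2 */
        cudaIpcOpenMemHandle((void **)&d_data, handle,
                             cudaIpcMemLazyEnablePeerAccess);
        /* ... launch kernels that read d_data ... */
        cudaIpcCloseMemHandle(d_data);
    }
    return 0;
}
```

    Note that the producer must outlive the consumer's use of the memory: closing the handle in P2 only unmaps it there, while the allocation itself is owned and freed by P1.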