cuda, gpu, nvidia, gpgpu, cuda-context

What is meant by "GPU context" and "GPU hardware channel" in NVIDIA's architecture?


While reading some papers related to GPU computing, I got stuck on two terms: GPU context and GPU hardware channel. Below is a brief mention of each, but I can't understand what they mean.

Command: The GPU operates using the architecture-specific commands. Each GPU context is assigned with a FIFO queue to which the program running on the CPU submits the commands. Computations and data transfers on the GPU are triggered only when the corresponding commands are dispatched by the GPU itself.

Channel: Each GPU context is assigned with a GPU hardware channel within which command dispatching is managed. Fermi does not permit multiple channels to access the same GPU functional unit simultaneously, but allows them to coexist, being switched automatically in hardware.

Is there a clear and simple explanation for these?


Solution

  • A GPU context is described here. It represents all the state (data, variables, conditions, etc.) that is collectively required and instantiated to perform certain tasks (e.g. CUDA compute, graphics, H.264 encode, etc.). A CUDA context is instantiated to perform CUDA compute activities on the GPU, either implicitly by the CUDA runtime API, or explicitly by the CUDA driver API.
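    To make the implicit/explicit distinction concrete, here is a minimal sketch of creating a context explicitly with the CUDA driver API. It assumes the CUDA toolkit is installed and an NVIDIA GPU is present, and error checking is trimmed for brevity:

    ```c
    /* Sketch: explicit context creation via the CUDA driver API.
       With the runtime API (cudaMalloc, kernel<<<>>> launches, ...)
       this same setup happens implicitly on first use. */
    #include <cuda.h>

    int main(void) {
        CUdevice dev;
        CUcontext ctx;

        cuInit(0);                 /* initialize the driver API */
        cuDeviceGet(&dev, 0);      /* select the first GPU */
        cuCtxCreate(&ctx, 0, dev); /* instantiate a context: the state
                                      needed for CUDA compute on dev */

        /* ... allocate memory, load modules, launch kernels ... */

        cuCtxDestroy(ctx);         /* tear the context state down */
        return 0;
    }
    ```

    Destroying the context releases everything instantiated within it (allocations, loaded modules, streams), which is why the context is a useful unit of isolation between applications sharing one GPU.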

    A command is simply a set of data, and instructions to be performed on that data. For example, a command could be issued to the GPU to launch a kernel, or to move a graphical window from one place to another on the desktop.

    A channel represents a communication path between host (CPU) and the GPU. In modern GPUs this makes use of PCI Express, and represents state and buffers in both host and device, that are exchanged over PCI express, to issue commands to, and provide other data to, the GPU, as well as to inform the CPU of GPU activity.
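    The FIFO-queue behavior described in the quoted paper can be sketched in software. The following is a hypothetical model, not real driver code: the CPU side submits commands into a per-context ring buffer, and the GPU side dispatches them in order. All names (`channel`, `channel_submit`, `channel_dispatch`) are illustrative:

    ```c
    #include <stdio.h>

    /* Hypothetical model of a hardware channel: a fixed-capacity FIFO
       that the CPU fills (submit) and the GPU drains (dispatch). */
    #define QUEUE_CAP 8

    typedef enum { CMD_LAUNCH_KERNEL, CMD_COPY_H2D, CMD_COPY_D2H } cmd_type;

    typedef struct { cmd_type type; int payload; } command;

    typedef struct {
        command slots[QUEUE_CAP];
        int head, tail, count;   /* ring-buffer bookkeeping */
    } channel;

    /* CPU side: enqueue a command; returns -1 if the FIFO is full. */
    int channel_submit(channel *ch, command c) {
        if (ch->count == QUEUE_CAP) return -1;
        ch->slots[ch->tail] = c;
        ch->tail = (ch->tail + 1) % QUEUE_CAP;
        ch->count++;
        return 0;
    }

    /* GPU side: dequeue the oldest command; returns -1 if empty. */
    int channel_dispatch(channel *ch, command *out) {
        if (ch->count == 0) return -1;
        *out = ch->slots[ch->head];
        ch->head = (ch->head + 1) % QUEUE_CAP;
        ch->count--;
        return 0;
    }

    int main(void) {
        channel ch = {0};
        channel_submit(&ch, (command){CMD_COPY_H2D, 1});
        channel_submit(&ch, (command){CMD_LAUNCH_KERNEL, 2});

        command c;
        while (channel_dispatch(&ch, &c) == 0)
            printf("dispatched: type=%d payload=%d\n", c.type, c.payload);
        return 0;
    }
    ```

    The point of the model is the ordering guarantee: work is triggered only when the consumer side dispatches a command, which matches the quoted description of computations being "triggered only when the corresponding commands are dispatched by the GPU itself."
    
    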

    For the most part, when using the CUDA runtime API it's not necessary to be familiar with these concepts, as they are all abstracted (hidden) underneath the runtime API.