parallel-processing · cuda · gpu · scalability

Scalability Analysis on GPU


I am trying to do a scalability analysis on my Quadro FX 5800, which has 240 cores: the classic parallel-computing study of how run time scales with the number of cores. I was wondering how the definition of a core fits into this, and how I can run on different core counts, say (8, 16, 32, 64, 128, 240 cores)? My test case is simple matrix multiplication.


Solution

  • Scalability on the GPU should not be measured in terms of CUDA cores but in terms of SM utilization. IPC (instructions per cycle) is probably the best single measure of SM utilization. When developing an algorithm, you want to partition your work so that you can distribute sufficient work to all SMs, such that on every cycle a warp scheduler has at least one warp eligible to issue an instruction. In general this means you need enough warps on each SM to hide instruction and memory latency, and a variety of instruction types to fill the execution pipeline.
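
    Concretely, it helps to query the device first so you can reason in SMs rather than cores; the FX 5800 reports 30 SMs (240 cores / 8 cores per SM). A minimal sketch using the standard CUDA runtime API:

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);            // properties of device 0
        printf("SMs:                %d\n", prop.multiProcessorCount);
        printf("Warp size:          %d\n", prop.warpSize);
        printf("Max threads/block:  %d\n", prop.maxThreadsPerBlock);
        printf("Shared mem/block:   %zu bytes\n", prop.sharedMemPerBlock);
        return 0;
    }
    ```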

    If you want to test scaling across CUDA cores (meaningless), you can launch thread blocks containing 1, 2, 3, ..., 32 threads per block. Launching a non-multiple of WARP_SIZE (= 32) threads per thread block will result in only a subset of the cores being used; the remainder are wasted execution slots.
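
    A sketch of that sweep (the `spin` kernel here is a made-up busy-loop, not from the original answer): under the reasoning above, launching 1 through 32 threads per block should take roughly constant time, because a partial warp occupies the same issue slots as a full one.

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // Made-up busy-loop kernel: each thread spins on dependent arithmetic
    // so the execution units have measurable work to do.
    __global__ void spin(float *out) {
        float v = threadIdx.x * 0.5f;
        for (int i = 0; i < 20000; ++i)
            v = v * 1.0001f + 0.5f;
        if (v < 0.0f) *out = v;   // never true; defeats dead-code elimination
    }

    int main() {
        float *d_out;
        cudaMalloc(&d_out, sizeof(float));
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        spin<<<30, 32>>>(d_out);           // warm-up launch
        cudaDeviceSynchronize();
        for (int t = 1; t <= 32; ++t) {
            cudaEventRecord(start);
            spin<<<30, t>>>(d_out);        // 30 blocks = one per SM on the FX 5800
            cudaEventRecord(stop);
            cudaEventSynchronize(stop);
            float ms;
            cudaEventElapsedTime(&ms, start, stop);
            printf("%2d threads/block: %.3f ms\n", t, ms);
        }
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_out);
        return 0;
    }
    ```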

    If you want to test scaling in terms of SMs, you can scale your algorithm from 1 thread block to thousands of thread blocks. To understand the scaling, you can also artificially limit the number of thread blocks resident per SM by increasing the dynamic shared memory per thread block when you launch.
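
    A sketch of that shared-memory trick, assuming the FX 5800's 16 KB of shared memory per SM (the `work` kernel is again an illustrative busy-loop; the exact per-block sizes are assumptions, since CC 1.x reserves some shared memory for kernel parameters):

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // The extern shared array does no useful work; it exists only to consume
    // shared memory so the hardware keeps fewer blocks resident per SM.
    __global__ void work(float *out) {
        extern __shared__ char pad[];
        pad[threadIdx.x] = 0;              // touch it so it is not optimized away
        float v = pad[threadIdx.x];
        for (int i = 0; i < 20000; ++i)
            v = v * 1.0001f + 0.5f;
        if (v < 0.0f) *out = v;            // never true; defeats dead-code elimination
    }

    int main() {
        float *d_out;
        cudaMalloc(&d_out, sizeof(float));
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        work<<<120, 256, 1024>>>(d_out);   // warm-up launch
        cudaDeviceSynchronize();
        // ~8 KB/block should allow 2 resident blocks/SM, ~14 KB only 1;
        // the grid is identical, only the effective parallelism changes.
        size_t sizes[] = {1024, 4096, 8192, 14336};
        for (int i = 0; i < 4; ++i) {
            cudaEventRecord(start);
            work<<<120, 256, sizes[i]>>>(d_out);   // 120 blocks over 30 SMs
            cudaEventRecord(stop);
            cudaEventSynchronize(stop);
            float ms;
            cudaEventElapsedTime(&ms, start, stop);
            printf("%5zu B shared/block: %.3f ms\n", sizes[i], ms);
        }
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_out);
        return 0;
    }
    ```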

    Re-writing matrix multiply to optimally scale in each of these directions is likely to be frustrating. Before you undertake that project, I would recommend understanding how a simple parallel computation such as summing the integers from 0 to 100000, or calculating a factorial, scales across multiple thread blocks. These algorithms are only a few lines of code, and the aforementioned scaling can be tried by varying the launch configuration (GridDim, BlockDim, SharedMemoryPerBlock) and 1-2 kernel parameters. You can time the different launches using the CUDA profiler, Visual Profiler, Parallel Nsight, or CUDA events.
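
    For instance, one way the summing experiment could look (a sketch, not the answer author's code): a grid-stride sum over 0..100000, launched with an increasing number of thread blocks and timed with CUDA events. Run time should drop until all 30 SMs are saturated, then flatten.

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // Grid-stride sum of 0..n-1: every thread accumulates a private partial
    // sum, then folds it into one global total. atomicAdd on unsigned long
    // long requires compute capability >= 1.2; the FX 5800 is 1.3.
    __global__ void sumRange(unsigned long long *total, int n) {
        unsigned long long local = 0;
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += gridDim.x * blockDim.x)
            local += i;
        atomicAdd(total, local);
    }

    int main() {
        const int n = 100000;
        unsigned long long *d_total;
        cudaMalloc(&d_total, sizeof(unsigned long long));
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        sumRange<<<30, 128>>>(d_total, n);   // warm-up launch
        cudaDeviceSynchronize();
        int grids[] = {1, 2, 4, 8, 15, 30, 60, 120, 240};
        for (int g = 0; g < 9; ++g) {
            cudaMemset(d_total, 0, sizeof(unsigned long long));
            cudaEventRecord(start);
            sumRange<<<grids[g], 128>>>(d_total, n);
            cudaEventRecord(stop);
            cudaEventSynchronize(stop);
            float ms;
            cudaEventElapsedTime(&ms, start, stop);
            unsigned long long total;
            cudaMemcpy(&total, d_total, sizeof(total), cudaMemcpyDeviceToHost);
            printf("%3d blocks: %.3f ms (sum = %llu)\n", grids[g], ms, total);
        }
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_total);
        return 0;
    }
    ```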