Tags: cuda, lwjgl

Questions on CUDA performance for simple non-shared task


I'm a newbie on GPU acceleration. I just tried a basic LWJGL binding to CUDA with a simple kernel, no shared memory; the function signature is as follows

__global__ void compute(
unsigned int n,
unsigned long long int* timeMs,
double* a, double* b, double* c, 
double *g_odata)

The kernel function basically retrieves the data for its thread id from the above arrays (timeMs, a, b, c, etc.), does some math, and puts the result in the g_odata array at the same thread id, with n being the number of threads to compute (it checks whether the thread id overshoots n, of course). There is no shared memory or reduction.
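Roughly, the body looks like this (a sketch; doSomeMath is just a placeholder for the actual per-thread computation):

__device__ double doSomeMath(unsigned long long int t, double x, double y, double z);

__global__ void compute(
    unsigned int n,
    unsigned long long int* timeMs,
    double* a, double* b, double* c,
    double* g_odata)
{
    // global thread id; one result per thread
    unsigned int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;  // guard against overshooting n

    // doSomeMath stands in for the actual computation
    g_odata[tid] = doSomeMath(timeMs[tid], a[tid], b[tid], c[tid]);
}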

Now here's the curious case about n (total thread count / parallelism) and block size when I measure the TOTAL time taken for the kernel to complete (I have a GPU with 80 multiprocessors):

[Plot: total kernel time vs. number of threads for different block sizes, showing the weird plateauing]

Using clock64() I added timestamps at the start and end of the kernel function and collected the total time for each thread, and it's apparent that the more threads there are, the longer they take for the SAME task.
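The timing pattern inside the kernel looks roughly like this (a sketch; elapsedCycles is an extra device buffer I added just for the measurement, and clock64() counts SM clock cycles rather than wall-clock time):

unsigned int tid = blockIdx.x * blockDim.x + threadIdx.x;
if (tid >= n) return;

unsigned long long start = clock64();  // cycle counter before the work

g_odata[tid] = doSomeMath(timeMs[tid], a[tid], b[tid], c[tid]);

unsigned long long stop = clock64();   // cycle counter after the work
elapsedCycles[tid] = stop - start;     // per-thread duration in cycles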

Now questions:

  1. Why does the total time take off after some 100 threads? Given 80 multiprocessors and 10K+ CUDA cores, I'd expect this number to be bigger, so maybe a config issue?
  2. Why is the kernel function taking more time for more threads? Is the execution interleaved (i.e. can the scheduler pause one thread before it completes and execute another)?
  3. Why is there a plateau after the bump at around 100 threads? And why does it take off again?
  4. The varying performance based on block size. I read that grid/block is just a developer-side perspective and has no bearing (especially for my fully independent threads with no sharing/reduction). So why does it matter, and how do I pick the best block size?

Solution

  • You don't show the relevant code or tell us the type of GPU, which makes answering specifics hard. First things first: if this is a consumer-level GPU, you aren't using your "CUDA cores", since those only do single precision (float). Just pulling up specs for a random GPU: going by TechPowerUp's excellent GPU database, the RTX 4090 has 82.58 TFLOPS in single precision and only 1.29 TFLOPS in double precision.

    Why is the kernel function taking more time for more threads? Is the execution interleaved (i.e. can the scheduler pause one thread before it completes and execute another)?

    Yes, that's how a GPU operates. If you look at the Compute Capabilities table in the CUDA Programming Guide, you will see that an SM (streaming multiprocessor) typically holds 1024-2048 resident threads, but if you compare that to the Arithmetic Instructions table, it only has a throughput of about 128 single precision instructions per clock cycle. In other words, roughly 2048 / 128 = 16 threads share each execution slot, and hiding latency by overcommitting GPU resources like this is how the hardware works.

    Why is there a plateau after the bump at around 100 threads? And why does it take off again?

    A logarithmic scale is hard to interpret, but it looks like the bump could be at 256 threads (?). It could be the point where the scheduler can no longer find a free double precision execution unit every clock cycle. The visual profiler Nsight Compute should be able to tell you.

    Note that a single block always executes on a single SM. So 256 threads with a 1024 block size means all threads execute on the same processor, leaving the compute resources on the other processors unoccupied.
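    As an illustration (a sketch, not your actual launch code): splitting the same amount of work into smaller blocks lets it spread over several SMs:

        // 256 work items in a single 1024-thread block -> one block on one SM
        compute<<<1, 1024>>>(256, timeMs, a, b, c, g_odata);

        // the same 256 work items in blocks of 32 -> 8 blocks, schedulable on up to 8 SMs
        unsigned int n = 256;
        unsigned int blockSize = 32;
        unsigned int gridSize = (n + blockSize - 1) / blockSize;  // round up
        compute<<<gridSize, blockSize>>>(n, timeMs, a, b, c, g_odata);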

    Overall, I think this metric is meaningless anyway. 100-1000 threads is far too few, and you need to look at throughput over all threads, meaning the number of work items divided by the total kernel execution time.
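    A sketch of that throughput measurement on the host side, using CUDA events (variable names are illustrative):

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        compute<<<gridSize, blockSize>>>(n, timeMs, a, b, c, g_odata);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);  // total kernel time in milliseconds
        double throughput = n / (ms * 1e-3);     // work items per second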

    The varying performance based on block size. I read that grid/block is just a developer-side perspective and has no bearing (especially for my fully independent threads with no sharing/reduction). So why does it matter, and how do I pick the best block size?

    That's wrong. Block size does matter. Again, if you look at the compute capabilities, there are limits on the number of blocks per SM as well as threads per SM. If your block size is below 64, you cannot reach 100% occupancy. A block size that is not a multiple of the warp size will also waste resources on deactivated threads. And of course a block size of 1024 will never reach more than 2/3 occupancy on compute capability 8.6-8.9 devices, which allow 1536 threads per SM.
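    If you'd rather not read the tables, the occupancy API gives a reasonable starting point (a sketch; error checking omitted):

        int minGridSize = 0, blockSize = 0;
        // block size that maximizes theoretical occupancy for this kernel
        cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, compute, 0, 0);

        int blocksPerSM = 0;
        // how many blocks of that size can be resident on one SM
        cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, compute, blockSize, 0);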

    Blocks also have launch overhead, so fewer blocks can be beneficial, but making blocks too large can have negative effects, too. A new block can only start once all threads of an old block have finished. With large blocks, at the end of the kernel (or at __syncthreads() barriers) many threads can end up waiting for a few stragglers, occupying resources for longer than necessary.

    As a rule of thumb, use about 128-256 threads per block. Benchmark different sizes if required, but make it a meaningful benchmark: look at throughput over all threads and fully occupy the GPU.
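    In practice, a small sweep like this (timed with the event approach above, with n large enough to saturate the GPU) tells you more than per-thread timestamps; the sizes shown are just examples:

        unsigned int sizes[] = {64, 128, 256, 512, 1024};
        for (unsigned int blockSize : sizes) {
            unsigned int gridSize = (n + blockSize - 1) / blockSize;
            // time this launch with cudaEvent as above and compare n / seconds per size
            compute<<<gridSize, blockSize>>>(n, timeMs, a, b, c, g_odata);
            cudaDeviceSynchronize();
        }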