Tags: python, cuda, padding, pycuda, gpu-shared-memory

Why does padding the shared memory array by one column increase the speed of the kernel by 40%?


Why is this matrix transpose kernel faster when the shared memory array is padded by one column?

I found the kernel at PyCuda/Examples/MatrixTranspose.

Source

import pycuda.driver        # needed explicitly for the Event timers below
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule
import numpy

block_size = 16    

def _get_transpose_kernel(offset):
    mod = SourceModule("""
    #define BLOCK_SIZE %(block_size)d
    #define A_BLOCK_STRIDE (BLOCK_SIZE * a_width)
    #define A_T_BLOCK_STRIDE (BLOCK_SIZE * a_height)

    __global__ void transpose(float *A_t, float *A, int a_width, int a_height)
    {
        // Base indices in A and A_t
        int base_idx_a   = blockIdx.x * BLOCK_SIZE + blockIdx.y * A_BLOCK_STRIDE;
        int base_idx_a_t = blockIdx.y * BLOCK_SIZE + blockIdx.x * A_T_BLOCK_STRIDE;

        // Global indices in A and A_t
        int glob_idx_a   = base_idx_a + threadIdx.x + a_width * threadIdx.y;
        int glob_idx_a_t = base_idx_a_t + threadIdx.x + a_height * threadIdx.y;

        /** why does the +1 offset make the kernel faster? **/
        __shared__ float A_shared[BLOCK_SIZE][BLOCK_SIZE+%(offset)d]; 

        // Store transposed submatrix to shared memory
        A_shared[threadIdx.y][threadIdx.x] = A[glob_idx_a];

        __syncthreads();

        // Write transposed submatrix to global memory
        A_t[glob_idx_a_t] = A_shared[threadIdx.x][threadIdx.y];
    }
    """% {"block_size": block_size, "offset": offset})

    kernel = mod.get_function("transpose")
    kernel.prepare("PPii", block=(block_size, block_size, 1))
    return kernel 


def transpose(tgt, src, offset):
    krnl = _get_transpose_kernel(offset)
    w, h = src.shape
    assert tgt.shape == (h, w)
    assert w % block_size == 0
    assert h % block_size == 0
    krnl.prepared_call((w / block_size, h / block_size), tgt.gpudata, src.gpudata, w, h)


def run_benchmark():
    from pycuda.curandom import rand
    print pycuda.autoinit.device.name()
    print "time\tGB/s\tsize\toffset\t"
    for offset in [0,1]:
        for size in [2048,2112]:
        
            source = rand((size, size), dtype=numpy.float32)
            target = gpuarray.empty((size, size), dtype=source.dtype)

            start = pycuda.driver.Event()
            stop = pycuda.driver.Event()

            warmup = 2
            for i in range(warmup):
                transpose(target, source, offset)

            pycuda.driver.Context.synchronize()
            start.record()

            count = 10          
            for i in range(count):
                transpose(target, source, offset)

            stop.record()
            stop.synchronize()

            elapsed_seconds = stop.time_since(start)*1e-3
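            # each transpose reads the matrix once and writes it once (factor 2),
            # repeated `count` times; the result is reported in GiB/s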
            mem_bw = source.nbytes / elapsed_seconds * 2 * count /1024/1024/1024
            
            print "%6.4fs\t%6.4f\t%i\t%i" %(elapsed_seconds,mem_bw,size,offset)


run_benchmark()

Output

Quadro FX 580
time    GB/s    size    offset  
0.0802s 3.8949  2048    0
0.0829s 4.0105  2112    0
0.0651s 4.7984  2048    1
0.0595s 5.5816  2112    1

Code adapted from the PyCuda MatrixTranspose example.


Solution

  • The answer is shared memory bank conflicts. The CUDA hardware you are using arranges shared memory into 16 banks, with successive 32-bit words striped sequentially across those banks. If two threads try to access the same bank simultaneously, a conflict occurs and the accesses must be serialized. That is exactly what you are seeing here: without padding, each half-warp reads a whole column of the tile, and every element of that column falls in the same bank. By extending the row stride of the shared memory array by 1, you ensure that the same column index in successive rows of the shared array falls in different banks, which eliminates most of the possible conflicts (the sketch after this answer works through the bank arithmetic).

    This phenomenon (and an associated global memory phenomenon called partition camping) is discussed in great depth in the "Optimizing Matrix Transpose in CUDA" paper which ships with the SDK matrix transpose example. It is well worth reading.
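
The bank arithmetic behind this is easy to check on the CPU. The following is a minimal sketch in plain Python (no GPU required), assuming 16 banks of 32-bit words as on the compute-capability-1.x Quadro FX 580 above. It lists which banks a half-warp touches when it reads one column of the tile (the A_shared[threadIdx.x][threadIdx.y] access in the kernel) for offsets 0 and 1; the helper name and the NUM_BANKS constant are illustrative, not part of PyCuda.

BLOCK_SIZE = 16
NUM_BANKS = 16   # 16 banks of 32-bit words on compute-capability-1.x hardware

def banks_for_column_read(offset, col=0):
    # A half-warp reads A_shared[0][col], A_shared[1][col], ..., A_shared[15][col].
    # The element at (row, col) sits at word address row * stride + col, and
    # that address maps to bank (address % NUM_BANKS).
    stride = BLOCK_SIZE + offset
    return [(row * stride + col) % NUM_BANKS for row in range(BLOCK_SIZE)]

for offset in [0, 1]:
    banks = banks_for_column_read(offset)
    print("offset=%d  distinct banks hit=%d  banks=%s"
          % (offset, len(set(banks)), banks))

With offset 0 all sixteen reads of the half-warp land in the same bank, so the hardware serializes them into sixteen transactions; with offset 1 they land in sixteen different banks and complete in a single transaction, which accounts for the speed-up in the benchmark above.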