c++ cuda gpu-shared-memory

Copying whole global memory buffer many times to shared memory buffer


I have a buffer in global memory that I want to copy into shared memory for each block so as to speed up my read-only access. Each thread in each block will use the whole buffer, at different positions, concurrently.

How does one do that?

I know the size of the buffer only at run time:

__global__ void foo( int *globalMemArray, int N )
{
    extern __shared__ int s_array[];

    int idx = blockIdx.x * blockDim.x + threadIdx.x;

    if( idx < N )
    {

       ...?
    }
}

Solution

  • The first point to make is that shared memory is limited to a maximum of either 16 KB or 48 KB per streaming multiprocessor (SM), depending on which GPU you are using and how it is configured, so unless your global memory buffer is very small, you will not be able to load all of it into shared memory at the same time.
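
    As an aside, here is a minimal sketch of how to check what you actually have available (the stub kernel below exists only to make the sketch self-contained): the runtime API lets you query the per-block shared memory limit and, on GPUs that split on-chip memory between L1 cache and shared memory, request the configuration that favours shared memory:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Stand-in for the real kernel, defined only so the sketch compiles on its own
    __global__ void foo( int *globalMemArray, int N ) { }

    int main()
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);

        // Maximum shared memory a single block may use on this device
        printf("Shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);

        // Ask for the on-chip memory split that favours shared memory for this kernel
        cudaFuncSetCacheConfig(foo, cudaFuncCachePreferShared);

        return 0;
    }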

    The second point to make is that the contents of shared memory only have the scope and lifetime of the block they are associated with. Your sample kernel only has a single global memory argument, which makes me think that you are either under the misapprehension that the contents of a shared memory allocation can be preserved beyond the lifespan of the block that filled it, or that you intend to write the results of the block calculations back into the same global memory array from which the input data was read. The first possibility is wrong and the second will result in memory races and inconsistent results. It is probably better to think of shared memory as a small, block-scope, fully programmer-managed L1 cache rather than as some sort of faster version of global memory.
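
    As a minimal illustration of why writing back into the same array is a problem (the kernels and the averaging operation here are made up purely for this example), compare an in-place update, where blocks overwrite elements that other blocks may still be reading, with a version that writes to a separate output buffer:

    // Racy: other blocks may read data[idx-1] or data[idx+1] before or after
    // this block has overwritten them, so the result is nondeterministic
    __global__ void smooth_in_place( int *data, int N )
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if( idx > 0 && idx < N - 1 )
            data[idx] = ( data[idx-1] + data[idx] + data[idx+1] ) / 3;
    }

    // Safe: the input is only ever read, results go to a separate buffer
    __global__ void smooth( const int *in, int *out, int N )
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if( idx > 0 && idx < N - 1 )
            out[idx] = ( in[idx-1] + in[idx] + in[idx+1] ) / 3;
    }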

    With those points out of the way, a kernel which loads successive segments of a large input array, processes them, and then writes a per-thread final result back to global memory might look something like this:

    template <int blocksize>
    __global__ void foo( int *globalMemArray, int *globalMemOutput, int N ) 
    { 
        __shared__ int s_array[blocksize];
        // number of blocksize-wide tiles needed to cover the N input elements
        int npasses = (N / blocksize) + (((N % blocksize) > 0) ? 1 : 0);

        // each pass stages one tile of the input array into shared memory
        for(int pos = threadIdx.x; pos < (blocksize*npasses); pos += blocksize) { 
            if( pos < N ) { 
                s_array[threadIdx.x] = globalMemArray[pos];
            }
            // wait until the whole tile is loaded before any thread reads from it
            __syncthreads(); 

            // Calculations using partial buffer contents
            .......

            // wait until all threads are finished with this tile before overwriting it
            __syncthreads(); 
        }
    
        // write final per thread result to output
        globalMemOutput[threadIdx.x + blockIdx.x*blockDim.x] = .....;
    } 
    

    In this case I have specified the shared memory array size as a template parameter, because it isn't really necessary to size the shared memory array dynamically at run time, and the compiler has a better chance of performing optimizations when the shared memory array size is known at compile time (in the worst case, you can select between different kernel instantiations at run time, as sketched below).
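
    A minimal sketch of that worst case (the wrapper name and the set of supported block sizes are arbitrary choices for illustration): instantiate the kernel for the block sizes you expect to use, then pick the matching instantiation when you launch. How many blocks you launch depends on how many per-thread results you need, so it is left as a parameter here:

    void launch_foo( int *globalMemArray, int *globalMemOutput, int N, int nblocks, int blocksize )
    {
        switch( blocksize ) {
            case 128: foo<128><<<nblocks, 128>>>(globalMemArray, globalMemOutput, N); break;
            case 256: foo<256><<<nblocks, 256>>>(globalMemArray, globalMemOutput, N); break;
            case 512: foo<512><<<nblocks, 512>>>(globalMemArray, globalMemOutput, N); break;
            default:  break; // unsupported block size
        }
    }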

    The CUDA SDK contains a number of good example codes which demonstrate different ways that shared memory can be used in kernels to improve memory read and write performance. The matrix transpose, reduction and 3D finite difference method examples are all good models of shared memory usage. Each also has a good paper which discusses the optimization strategies behind the shared memory use in the codes. You would be well served by studying them until you understand how and why they work.