indexing · cuda · wolfram-mathematica · mathematica-8

CUDAFunctionLoad in Mathematica - Indexing problem


I am trying to debug an indexing problem I am having on my CUDA machine.

CUDA machine info:

{1->{Name->Tesla C2050,Clock Rate->1147000,Compute Capabilities->2.,GPU Overlap->1,Maximum Block Dimensions->{1024,1024,64},Maximum Grid Dimensions->{65535,65535,65535},Maximum Threads Per Block->1024,Maximum Shared Memory Per Block->49152,Total Constant Memory->65536,Warp Size->32,Maximum Pitch->2147483647,Maximum Registers Per Block->32768,Texture Alignment->512,Multiprocessor Count->14,Core Count->448,Execution Timeout->0,Integrated->False,Can Map Host Memory->True,Compute Mode->Default,Texture1D Width->65536,Texture2D Width->65536,Texture2D Height->65535,Texture3D Width->2048,Texture3D Height->2048,Texture3D Depth->2048,Texture2D Array Width->16384,Texture2D Array Height->16384,Texture2D Array Slices->2048,Surface Alignment->512,Concurrent Kernels->True,ECC Enabled->True,Total Memory->2817982462}}

All this code does is set the values of a 3D array equal to the index that CUDA is using:

__global__ void cudaMatExp(
    float *matrix1, float *matrixStore, int lengthx, int lengthy, int lengthz) {

    long UniqueBlockIndex = blockIdx.y * gridDim.x + blockIdx.x;

    long index = UniqueBlockIndex * blockDim.z * blockDim.y * blockDim.x +
        threadIdx.z * blockDim.y * blockDim.x + threadIdx.y * blockDim.x +
        threadIdx.x;

    if (index < lengthx * lengthy * lengthz) {
        matrixStore[index] = index;
    }
}

For some reason, once the dimensions of my 3D array become too large, the indexing stops short of covering the whole array.

I have tried different block dimensions (blockDim.x by blockDim.y by blockDim.z):

8x8x8 only gives correct indexing up to array dimension 12x12x12

9x9x9 only gives correct indexing up to array dimension 14x14x14

10x10x10 only gives correct indexing up to array dimension 15x15x15

For dimensions larger than these, the maximum index still increases for each of the different block sizes, but it never reaches dim^3 - 1 (the maximum index that the CUDA threads should reach).

Here are some plots that illustrate this behavior:

For example: this plots the dimension of the 3D array on the x axis (the array is dim x dim x dim) and the maximum index number processed during the CUDA execution on the y axis. This particular plot is for block dimensions of 10x10x10.

[Plot: maximum processed index vs. array dimension, for 10x10x10 blocks]

Here is the Mathematica code that generates the plot; when I ran this one, I used block dimensions of 1024x1x1:

CUDAExp = CUDAFunctionLoad[codeexp, "cudaMatExp",
  {{"Float", _,"Input"}, {"Float", _,"Output"},
    _Integer, _Integer, _Integer},
  {1024, 1, 1}]; (*These last three numbers are the block dimensions*)

max = 100; (* the maximum dimension of the 3D array *)
hold = Table[1, {i, 1, max}];
compare = Table[i^3, {i, 1, max}];
Do[
   dim = ii;
   AA  = CUDAMemoryLoad[ConstantArray[1.0, {dim, dim, dim}], Real, 
                                     "TargetPrecision" -> "Single"];
   BB  = CUDAMemoryLoad[ConstantArray[1.0, {dim, dim, dim}], Real, 
                                     "TargetPrecision" -> "Single"];

   hold[[ii]] = Max[Flatten[
                  CUDAMemoryGet[CUDAExp[AA, BB, dim, dim, dim][[1]]]]];

 , {ii, 1, max}]

ListLinePlot[{compare, Flatten[hold]}, PlotRange -> All]

This is the same plot, but now with x^3 plotted alongside to show where the maximum index should be. Notice that it diverges once the dimension of the array exceeds 32.

[Plot: maximum processed index vs. array dimension, with x^3 for comparison]

I test different dimensions of the 3D array, look at how far the indexing goes, and compare that with dim^3 - 1. E.g. for dim = 32 the CUDA max index is 32767 (which is 32^3 - 1), but for dim = 33 the CUDA output is 33791 when it should be 35936 (33^3 - 1). Notice that 33791 - 32767 = 1024 = blockDim.x.

Question:

Is there a way to correctly index an array with dimensions larger than the block dimensions in Mathematica?

Now, I know that some people use __mul24(threadIdx.y, blockDim.x) in their index calculation to avoid errors in the multiplication, but it doesn't seem to help in my case.

Also, I have seen someone mention that you should compile your code with -arch=sm_11 because by default it is compiled for compute capability 1.0. I don't know whether this applies in Mathematica, though. I would assume that CUDAFunctionLoad[] knows to compile for 2.0 capability. Anyone know?

Any suggestions would be extremely helpful!


Solution

  • So, Mathematica has a somewhat hidden way of dealing with grid dimensions. To fix your grid dimension to something that will work, you have to add another number to the end of the function you are calling.

    The argument denotes the number of threads to launch (or grid dimension times block dimension).

    For example, in my code above:

    CUDAExp = 
      CUDAFunctionLoad[codeexp, 
       "cudaMatExp", {
               {"Float", _, "Input"}, {"Float", _,"Output"}, 
                            _Integer, _Integer, _Integer}, 
         {8, 8, 8}, "ShellOutputFunction" -> Print];
    

    {8, 8, 8} denotes the dimensions of the block.

    When you call CUDAExp[] in Mathematica, you can add an argument that denotes the number of threads to launch:

    In this example I finally got it to work with the following:

    (* AA and BB are 3D arrays with dimensions dim x dim x dim *)
    dim = 64;
    CUDAExp[AA, BB, dim, dim, dim, 4096];
    

    Note that when you compile with CUDAFunctionLoad[], it only declares 5 arguments: the first is the input array (of dimensions dim x dim x dim), the second is the output array the result is stored in, and the third, fourth, and fifth are the dimensions.

    When you pass it a 6th, Mathematica translates that as gridDim.x * blockDim.x. Since I know I need gridDim.x = 512 blocks of 512 threads in order for every element of the 64^3 array to be dealt with, I set this number equal to 512 * 8 = 4096.

    I hope this is clear and useful to someone in the future that comes across this issue.