Assume that I want to perform parallel computations on a large fixed object, e.g. a fixed large sparse (directed) graph, or any similar kind of object.
To do any reasonable computation on this graph or object, such as random walks in the graph, reading it from global memory every time is presumably out of the question for speed reasons.
That leaves local/private memory. If I have understood the GPU architecture correctly, there is virtually no speed difference between (read-only) access to local and private memory, is that correct? I'm reluctant to copy the graph to private memory, since that would mean every single work unit has to store the entire graph, which could eat up the GPU's memory very quickly (and for very large graphs even reduce the number of cores that can be used and/or make the OS unstable).
So, assuming I'm correct above about the read speed of local vs. private memory, how do I do this in practice? If, for simplicity, I reduce the graph to an int[] from and an int[] to (storing the start and end node of each directed edge), I can of course make the kernel look like this
__kernel void computeMe(__local const int *to, __local const int *from, __global int *result) {
    //...
}
but I don't see how I should call this from JOCL, since no private/local/global modifier is given there.
Will the arguments be copied automatically into each work-group's local memory? Or how does this work? It's not clear to me at all how to do this memory assignment correctly.
You can't pass values for __local memory arguments from the host; the host cannot read or write local memory at all. To use local memory, you still have to pass the data in as __global and copy it from global to local inside the kernel before using it. This is only beneficial if you read the data many times.
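A minimal kernel sketch of that global-to-local staging might look like the following (hypothetical names; the host sets the __local arguments by passing only a size and a NULL pointer, and n is the number of edges). This is OpenCL C, so it only compiles as device code, not as a standalone program:

```
// Hypothetical sketch: stage the edge arrays from global into local memory.
__kernel void computeMe(__global const int *gTo,
                        __global const int *gFrom,
                        __local  int *lTo,
                        __local  int *lFrom,
                        const int n,
                        __global int *result)
{
    // Each work-item copies a strided slice of the edge list.
    for (int i = get_local_id(0); i < n; i += get_local_size(0)) {
        lTo[i]   = gTo[i];
        lFrom[i] = gFrom[i];
    }
    // Wait until the whole work-group has finished copying
    // before anyone reads lTo/lFrom.
    barrier(CLK_LOCAL_MEM_FENCE);

    // ... now read lTo/lFrom many times ...
}
```

Note that this only pays off if the copy cost is amortized over many subsequent reads, and only if the arrays fit in the device's local memory (often on the order of tens of KB per work-group).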
How about constant memory? If your input data does not change and is not too large, putting it into constant memory might give you a considerable speedup. The available constant memory is typically around 16 KB to 64 KB.
__kernel void computeMe(__constant int *to, __constant int *from, __global int *result) {
    //...
}
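On the JOCL side, a __constant (or __global) argument is set like any read-only buffer, while a __local argument gets only a size and a null Pointer. A hypothetical host-side fragment (variable names invented, error handling omitted, assumes an existing context and kernel) could look like:

```
import org.jocl.*;
import static org.jocl.CL.*;

// Read-only buffer for a __constant (or __global) int* argument:
cl_mem toMem = clCreateBuffer(context,
        CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
        Sizeof.cl_int * n, Pointer.to(toArray), null);

// Argument 0: the buffer itself.
clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(toMem));

// A __local argument: only the size is given, the pointer is null;
// the runtime allocates that much local memory per work-group.
clSetKernelArg(kernel, 2, Sizeof.cl_int * n, null);
```

So the "modifier" you were missing never appears on the host side: the address space is fixed by the kernel signature, and the host merely supplies a buffer (for __global/__constant) or a size (for __local).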
Edit (add references):
For an example use of __local memory in OpenCL, see here.
For NVIDIA hardware, more performance details can be found in the NVIDIA OpenCL Best Practices Guide (PDF), which includes more information on the performance differences between the memory types.