I have an application where 96% of the time is spent in 3D texture memory interpolation reads (red points in the diagram).
My kernels are designed to do ~1000 memory reads along a line that crosses the texture memory arbitrarily, one thread per line (blue lines). These lines are densely packed, very close to each other, and travel in almost parallel directions.
The image shows the concept of what I am talking about. Imagine the image is a single "slice" from the 3D texture memory, e.g. z=24. The image is repeated for all z.
At the moment I am executing threads just one line after the other, but I realized that I might benefit from texture memory locality if I run adjacent lines in the same block, reducing the time spent on memory reads.
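To make the access pattern concrete, this is roughly what each of my threads does; a minimal sketch with illustrative names (tex, start, step), not my actual kernel:

    // Each thread follows one line through the volume and accumulates ~1000
    // hardware-interpolated texture reads (the red points in the diagram).
    // Launched with one thread per line/ray.
    __global__ void sampleLines(cudaTextureObject_t tex,    // 3D texture holding the volume
                                const float3 *start,        // entry point of each line
                                const float3 *step,         // per-sample increment along the line
                                float *result, int nSamples)
    {
        int ray = blockIdx.x * blockDim.x + threadIdx.x;    // one thread per line

        float3 p = start[ray];
        float3 d = step[ray];
        float acc = 0.0f;

        for (int i = 0; i < nSamples; i++) {
            acc += tex3D<float>(tex, p.x, p.y, p.z);        // interpolated read
            p.x += d.x;  p.y += d.y;  p.z += d.z;
        }
        result[ray] = acc;
    }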
My questions are:
if I have a 3D texture with linear interpolation, how could I benefit most from the data locality? By running adjacent lines in the same block in 2D, or adjacent lines in 3D (3D neighbors, or just neighbors per slice)?
How "big" is the cache (or how can I check this in the specs)? Does it load e.g. the asked voxel and +-50 around it in every direction? This will directly relate with the amount of neighboring lines I'd put in each block!
How does the interpolation apply to the texture memory cache? Is the interpolation also performed in the cache, or does the fact that it is interpolated reduce the memory-latency benefit, because the interpolation needs to be done in the texture memory itself?
I am working on an NVIDIA Tesla K40 with CUDA 7.5, if it helps.
As this question is getting old, and no answers seem to exist for some of the questions I asked, I will give a benchmark answer, based on my research building the TIGRE toolbox. You can get the source code from the GitHub repo.
As the answer is based on the toolbox's specific application, computed tomography, my results are not necessarily true for all applications using texture memory. Additionally, my GPU (see above) is quite a decent one, so your mileage may vary on different hardware.
It is important to note that this is a Cone Beam Computed Tomography application. This means that the lines (rays) are densely packed and travel in almost parallel directions, as described in the question. All this information is important for memory locality.
Additionally, as said in the question, 96% of the kernel time is spent on memory reads, so it is safe to assume that the variations in the kernel times reported here are due to changes in memory-read speed.
If I have a 3D texture with linear interpolation, how could I benefit most from the data locality? By running adjacent lines in the same block in 2D, or adjacent lines in 3D (3D neighbors, or just neighbors per slice)?
Once one gets a bit more experienced with texture memory, one sees that the straightforward answer is: run as many adjacent lines together as possible. The closer the memory reads are to each other in image index, the better.
For tomography this effectively means running square blocks of detector pixels, packing rays (blue lines in the original image) together.
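Concretely, packing the rays just means using a 2D thread block over the detector so that the 2D thread index selects the ray; a minimal sketch (the kernel name, detU/detV and the omitted geometry are illustrative, not the TIGRE code):

    // Each block covers a square patch of adjacent detector pixels, so the threads of one
    // block trace adjacent, almost parallel rays and can reuse the same texture cache lines.
    __global__ void projectKernel(cudaTextureObject_t tex, int detU, int detV)
    {
        int u = blockIdx.x * blockDim.x + threadIdx.x;   // detector column
        int v = blockIdx.y * blockDim.y + threadIdx.y;   // detector row
        if (u >= detU || v >= detV) return;

        // ... compute the entry point and step of ray (u, v) from the scan geometry,
        //     then run the sampling loop shown in the question and store the result
        //     for detector pixel (u, v).
    }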
How "big" is the cache (or how can I check this in the specs)? Does it load e.g. the asked voxel and +-50 around it in every direction? This will directly relate with the amount of neighboring lines I'd put in each block!
While it is impossible to say exactly, empirically I found that running smaller blocks is better. My results, for a 512^3 image with 512^2 rays and a sample rate of ~2 samples/voxel, per block size:
32x32 -> [18~25] ms
16x16 -> [14~18] ms
8x8 -> [11~14] ms
4x4 -> [25~29] ms
The block sizes are effectively the sizes of the squares of adjacent rays that are computed together. E.g. 32x32 means that 1024 X-rays will be computed in parallel, adjacent to each other in a 32x32 square. As the exact same operations are performed in each line, at each step the samples are taken over roughly a 32x32 patch of the image, covering approximately 32x32x1 indices.
Predictably, at some point reducing the block size makes the kernel slow again, but this happens at a (at least for me) surprisingly small value. I think this hints that the texture cache loads relatively small chunks of data from the image.
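Timings like these can be collected with a simple host-side loop over block sizes using CUDA events; a sketch, assuming the projectKernel and texture object tex from above:

    // Time the same kernel with different square block sizes.
    int sizes[4] = {32, 16, 8, 4};
    for (int k = 0; k < 4; k++) {
        dim3 block(sizes[k], sizes[k]);
        dim3 grid((512 + block.x - 1) / block.x,   // 512x512 rays in total
                  (512 + block.y - 1) / block.y);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        projectKernel<<<grid, block>>>(tex, 512, 512);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("%ux%u block: %.2f ms\n", block.x, block.y, ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
    }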
These results show an additional piece of information not asked for in the original question: what happens, speed-wise, with out-of-bounds samples. As adding any if condition to the kernel would slow it down significantly, I programmed the kernel to start sampling at a point on the line that is guaranteed to be outside the image, and to stop in a similar way. This is done by creating a fictional "sphere" around the image and always taking the same number of samples, independent of the angle between the image and the lines themselves.
If you look at the times I have shown for each kernel, you will notice that all of them lie in [t, ~sqrt(2)*t], and I have checked that the longer times indeed occur when the angle between the lines and the image is a multiple of 45 degrees, where more samples fall inside the image (texture).
This means that sampling outside the image indices (e.g. tex3D(tex, -5, -5, -5)) is computationally free. No time is spent reading out of bounds. It is better to read a lot of out-of-bounds points than to check whether the points fall inside the image, as the if condition would slow the kernel down while sampling out of bounds has zero cost.
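To illustrate the difference, here is a sketch of the two loop variants, assuming the texture's address mode is cudaAddressModeBorder so that out-of-range reads simply return zero (names are illustrative):

    // Variant with a bounds check: a divergent branch is evaluated on every one of the
    // ~1000 samples, which slows the kernel down.
    for (int i = 0; i < nSamples; i++) {
        if (p.x >= 0 && p.x < nx && p.y >= 0 && p.y < ny && p.z >= 0 && p.z < nz)
            acc += tex3D<float>(tex, p.x, p.y, p.z);
        p.x += d.x;  p.y += d.y;  p.z += d.z;
    }

    // Branch-free variant: sample unconditionally; reads that fall outside the volume
    // return the border value (0) and cost essentially nothing.
    for (int i = 0; i < nSamples; i++) {
        acc += tex3D<float>(tex, p.x, p.y, p.z);
        p.x += d.x;  p.y += d.y;  p.z += d.z;
    }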
How does the interpolation apply to the texture memory cache? Is the interpolation also performed in the cache, or does the fact that it is interpolated reduce the memory-latency benefit, because the interpolation needs to be done in the texture memory itself?
To test this, I ran the same code with linear interpolation (cudaFilterModeLinear) and nearest-neighbor interpolation (cudaFilterModePoint). As expected, there is a speed improvement when nearest-neighbor interpolation is used. For 8x8 blocks with the previously mentioned image sizes, on my PC:
Linear -> [11~14] ms
Nearest -> [ 9~10] ms
The speedup is not massive, but it is significant. This hints, as expected, that the time the interpolation takes is measurable, so one needs to be aware of it when designing applications.
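For reference, the two filter modes differ only in one field of the texture description when the texture object is created; a minimal setup sketch (the helper name and d_volumeArray are illustrative, and the border address mode is one way to make the out-of-bounds sampling discussed above return zero):

    // Build a 3D texture object over an existing cudaArray holding the volume.
    // Switching between linear and nearest-neighbor interpolation is a one-line change.
    cudaTextureObject_t makeVolumeTexture(cudaArray_t d_volumeArray, bool linear)
    {
        cudaResourceDesc resDesc = {};
        resDesc.resType = cudaResourceTypeArray;
        resDesc.res.array.array = d_volumeArray;

        cudaTextureDesc texDesc = {};
        texDesc.addressMode[0] = cudaAddressModeBorder;      // out-of-range reads return 0
        texDesc.addressMode[1] = cudaAddressModeBorder;
        texDesc.addressMode[2] = cudaAddressModeBorder;
        texDesc.filterMode = linear ? cudaFilterModeLinear   // hardware trilinear interpolation
                                    : cudaFilterModePoint;   // nearest neighbor
        texDesc.readMode = cudaReadModeElementType;
        texDesc.normalizedCoords = 0;

        cudaTextureObject_t tex = 0;
        cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);
        return tex;
    }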