I'm in the process of porting an OpenGL app to the web, and discovered that WebGL 1.0 does not support 3D textures (support arrives in WebGL 2.0). I use 16 x 16 x 16 textures for the colour information of some simple models (for a blocky kind of look).
Now, without support for 3D textures, I realized I could instead spread the 16 layers onto a 4 x 4 plane, like so:
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
0 = layer
The result being a 64 x 64 "pseudo-3D" texture.
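In case it's useful context, this is roughly how I intend to address the pseudo-3D texture in the fragment shader (GLSL ES 1.00). It's only a sketch with nearest-slice lookup and no filtering yet, and the uniform name is just a placeholder:

```glsl
precision mediump float;

uniform sampler2D atlasTex;   // 64 x 64 atlas holding 16 slices in a 4 x 4 grid

// Look up a "3D" texel by picking the nearest slice (no filtering in any axis yet).
vec4 samplePseudo3D(vec3 coord)
{
    float slice = clamp(floor(coord.z * 16.0), 0.0, 15.0);   // which of the 16 layers
    vec2 tile = vec2(mod(slice, 4.0), floor(slice / 4.0));   // position in the 4 x 4 grid
    vec2 uv = (tile + coord.xy) / 4.0;                       // remap slice UVs into the atlas
    return texture2D(atlasTex, uv);
}
```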
Which leads me to my question: what's the difference between 3D and 2D textures on GPUs? And in a situation where a 2D texture can be used instead of a 3D texture (as in my example), would there be a performance difference between the two?
I'm guessing 3D textures have hardware support for interpolation along the z-axis, whereas with a 2D texture two texel fetches would be required, and a manual interpolation between them, for the same result.
You are correct: the biggest difference that comes to mind immediately has to do with texture filtering, 3D vs. 2D. Linear filtering has to sample 4 texels (assuming a 2D image) if the texture coordinate is not precisely at the center of a texel, and then do a weighted average based on the distance the sample location is from each of the texels fetched. This is such a fundamental operation that the hardware is built specially for it (newer hardware even lets you take advantage of this in the form of instructions like gather4).
You can implement linear filtering yourself (and in fact, you sometimes have to for things like multisampled textures) by using a nearest-neighbor filter and computing the weighting factors, etc. yourself, but it will always come out slower than the hardware implementation.
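To illustrate, a hand-rolled 2D linear (bilinear) filter in GLSL might look something like this sketch, assuming the texture is bound with nearest-neighbor filtering and `texSize` holds its dimensions in texels:

```glsl
// Manual 2D linear (bilinear) filter: 4 fetches plus weighted averaging.
// Assumes 'tex' is bound with GL_NEAREST filtering and 'texSize' is its
// size in texels (e.g. vec2(64.0, 64.0)).
vec4 bilinearFilter(sampler2D tex, vec2 uv, vec2 texSize)
{
    vec2 pos = uv * texSize - 0.5;              // position in texel space
    vec2 f = fract(pos);                        // weighting factors
    vec2 base = (floor(pos) + 0.5) / texSize;   // center of the lower-left texel
    vec2 texel = 1.0 / texSize;                 // step to the next texel

    vec4 t00 = texture2D(tex, base);
    vec4 t10 = texture2D(tex, base + vec2(texel.x, 0.0));
    vec4 t01 = texture2D(tex, base + vec2(0.0, texel.y));
    vec4 t11 = texture2D(tex, base + texel);

    return mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
}
```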
In your case, if you wanted to implement linear filtering on a 3D texture you would actually have to sample 8 texels (2³, to handle interpolation across all 3 dimensions). Obviously hardware is biased toward working with 2D textures, as there is no gather8 instruction... but you can bet that native linear interpolation will be quicker than your manual hack.
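Just to show what that costs, a fully manual trilinear lookup on your 64 x 64 atlas would look roughly like the sketch below: 8 fetches and 7 lerps per sample. The `atlasUV` helper and the nearest-filtered `atlasTex` uniform are assumptions matching your 4 x 4 / 16-slice layout:

```glsl
uniform sampler2D atlasTex;   // 64 x 64 atlas, 16 slices in a 4 x 4 grid, GL_NEAREST filtering

// UV of the center of texel 'texel' inside slice 'slice' of the 4 x 4 atlas.
vec2 atlasUV(float slice, vec2 texel)
{
    vec2 tile = vec2(mod(slice, 4.0), floor(slice / 4.0));
    return (tile + (texel + 0.5) / 16.0) / 4.0;
}

// Fully manual trilinear filtering: 8 fetches and 7 lerps per sample.
vec4 trilinearManual(vec3 coord)
{
    vec3 pos = clamp(coord * 16.0 - 0.5, 0.0, 15.0);   // texel-space position
    vec3 p0 = floor(pos);                               // lower corner
    vec3 p1 = min(p0 + 1.0, 15.0);                      // upper corner
    vec3 f  = fract(pos);                               // weights per axis

    vec4 c000 = texture2D(atlasTex, atlasUV(p0.z, vec2(p0.x, p0.y)));
    vec4 c100 = texture2D(atlasTex, atlasUV(p0.z, vec2(p1.x, p0.y)));
    vec4 c010 = texture2D(atlasTex, atlasUV(p0.z, vec2(p0.x, p1.y)));
    vec4 c110 = texture2D(atlasTex, atlasUV(p0.z, vec2(p1.x, p1.y)));
    vec4 c001 = texture2D(atlasTex, atlasUV(p1.z, vec2(p0.x, p0.y)));
    vec4 c101 = texture2D(atlasTex, atlasUV(p1.z, vec2(p1.x, p0.y)));
    vec4 c011 = texture2D(atlasTex, atlasUV(p1.z, vec2(p0.x, p1.y)));
    vec4 c111 = texture2D(atlasTex, atlasUV(p1.z, vec2(p1.x, p1.y)));

    vec4 bottom = mix(mix(c000, c100, f.x), mix(c010, c110, f.x), f.y);
    vec4 top    = mix(mix(c001, c101, f.x), mix(c011, c111, f.x), f.y);
    return mix(bottom, top, f.z);
}
```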
WebGL does not even expose gather4/textureGather (as that is a DX10.1 / GL4 feature), but hardware worked this way long before that was an instruction you could use in a shader.
You might be able to come up with a compromise if you are clever, where you use the hardware's linear filtering capabilities for filtering in the S and T directions of each 2D slice and then perform your own linear filtering between the slices (R). You will have to be careful when dealing with texels at the edge of your image, however. Make sure there is at least 1 texel worth of border between the slice images in your virtual 3D texture so the hardware does not interpolate across slices.
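A rough sketch of that compromise might look like this: the hardware does the bilinear part within a slice, and you do one manual lerp between the two nearest slices. The uniform name and the 4 x 4 / 16-slice layout come from your example; how much border padding you need, and how you build the atlas, are up to you:

```glsl
uniform sampler2D atlasTex;   // 64 x 64 atlas, 16 slices in a 4 x 4 grid, GL_LINEAR filtering

// Hardware bilinear filtering within a slice (S/T), manual linear filtering
// between slices (R). Assumes each slice carries enough border padding that
// GL_LINEAR never bleeds into a neighbouring slice.
vec4 samplePseudo3DLinear(vec3 coord)
{
    float slice = clamp(coord.z * 16.0 - 0.5, 0.0, 15.0);   // continuous slice coordinate
    float s0 = floor(slice);                                 // lower slice
    float s1 = min(s0 + 1.0, 15.0);                          // upper slice
    float f  = fract(slice);                                 // blend factor along R

    vec2 tile0 = vec2(mod(s0, 4.0), floor(s0 / 4.0));
    vec2 tile1 = vec2(mod(s1, 4.0), floor(s1 / 4.0));

    vec4 a = texture2D(atlasTex, (tile0 + coord.xy) / 4.0);  // hardware does the bilinear part
    vec4 b = texture2D(atlasTex, (tile1 + coord.xy) / 4.0);
    return mix(a, b, f);                                     // one manual lerp between slices
}
```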