floating-point, glsl, gpu, vulkan, spir-v

How to get 16-bit floats in modern GLSL with Vulkan?


It appears at one point in time Nvidia had an extension that permitted half-precision floating-point values for OpenGL 1.1, but apparently the word half has since been reclaimed by the modern GLSL spec as a reserved keyword.

Today I can use 16-bit floating-point values in CUDA with no problem, so there should not be any hardware obstacle to NVIDIA supporting 16-bit floats. They appear to support them in HLSL, and, contradictorily, even seem to support them when cross-compiling HLSL to SPIR-V, while GLSL on Nvidia does not. SPIR-V seems to have all the primitives needed to support 16-bit floating point through first-party (KHR) extensions anyway, so there doesn't seem to be a reason why it should forbid me from using them.

I'm unsure why, despite having an Nvidia card, I can't take advantage of 16-bit floating-point arithmetic, and am apparently forced to switch to AMD, or to another API entirely, if I want it. Surely there must be some way to use true 16-bit floating-point values on both vendors?

I am NOT asking about host-to-device allocated buffers (i.e. vertex buffers). Yes, you can allocate those as 16-bit floats with a KHR extension without too much of an issue, but what I'm concerned with is using 16-bit floats inside the actual shader, not 16-bit floats coerced to 32-bit floats.


Solution

  • VK_KHR_shader_float16_int8 exposes FP16 capabilities (along with 8-bit integers) through both SPIR-V and Vulkan for use within shaders. This extension was promoted to core (as an optional feature) in Vulkan 1.2. Note that this capability only enables computations within a shader, not the use of 16-bit floats in shader interfaces (vertex shader inputs, UBOs, etc.); interfaces are covered by the separate VK_KHR_16bit_storage extension. Sketches of querying the feature and of the shader-side syntax follow after this list.

    SPV_AMD_gpu_shader_half_float exposed the Float16 capability to SPIR-V, but the corresponding Vulkan extension, VK_AMD_gpu_shader_half_float, did not actually enable the matching capability on the Vulkan side, so you could not really use it. This was eventually fixed.
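
On the API side, a minimal sketch of querying and enabling the feature might look like the following. This assumes a Vulkan 1.2 device and a `physicalDevice` handle already selected; queue setup and most error handling are omitted for brevity, so it is an illustration, not a complete device-creation routine:

```c
#include <vulkan/vulkan.h>
#include <stdbool.h>

/* Sketch: check for and enable FP16 shader arithmetic at device creation. */
bool create_device_with_float16(VkPhysicalDevice physicalDevice,
                                VkDevice *outDevice)
{
    VkPhysicalDeviceShaderFloat16Int8Features float16Int8 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_FLOAT16_INT8_FEATURES,
    };
    VkPhysicalDeviceFeatures2 features2 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
        .pNext = &float16Int8,
    };
    vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);

    if (!float16Int8.shaderFloat16)
        return false; /* driver does not expose FP16 arithmetic in shaders */

    /* Re-use the filled-in feature struct to enable the feature: chaining
       it into VkDeviceCreateInfo::pNext turns on whatever it reports. */
    VkDeviceCreateInfo deviceInfo = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .pNext = &float16Int8,
        /* .queueCreateInfoCount / .pQueueCreateInfos omitted for brevity */
    };
    return vkCreateDevice(physicalDevice, &deviceInfo, NULL, outDevice)
           == VK_SUCCESS;
}
```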
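On the GLSL side, the explicit 16-bit types come from GL_EXT_shader_explicit_arithmetic_types_float16 (understood by glslang/glslc when targeting Vulkan). A small compute-shader sketch; the buffer name and layout here are illustrative, not from the question:

```glsl
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require

layout(local_size_x = 64) in;

layout(set = 0, binding = 0) buffer Data { float values[]; };

void main() {
    uint i = gl_GlobalInvocationID.x;
    // True FP16 arithmetic: float16_t maps to a 16-bit OpTypeFloat in
    // SPIR-V, not a 16-bit value promoted to 32 bits.
    float16_t a = float16_t(values[i]);
    float16_t b = a * 2.0hf + 1.0hf;   // 'hf' suffix: half-float literal
    values[i] = float(b);
}
```

The SSBO deliberately stays 32-bit here so that only shaderFloat16 is needed; storing float16_t directly in the buffer would additionally require the 16-bit storage feature, which is exactly the interface caveat noted above.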