From "Image Format", we see:
An 11-bit float has no sign bit; it has 6 bits of mantissa and 5 bits of exponent.
From "Half-precision floating-point format", we can deduct that: The relative precision of 11-bit float is 2-6, which is 1/64
But for a 8-bit RGB image, the relative precision at 255, is (255-254)/255 = 1/255.
So does this mean the precision of GL_R11F_G11F_B10F is worse than 8-bit RGB at the bright end of the range (for example, at 255)?
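To make the comparison concrete, here is a minimal C sketch of a decoder for that layout (assuming an exponent bias of 15, as in IEEE-754 half precision; decode_uf11 is just a name I made up). It shows that the representable values around 255 are 254 and 256, i.e. the step there is 2, where an 8-bit normalized format steps by 1:

    #include <math.h>
    #include <stdio.h>

    /* Decode the 11-bit unsigned-float layout described above:
       5 exponent bits (assumed bias 15), 6 mantissa bits, no sign bit. */
    static float decode_uf11(unsigned bits)
    {
        unsigned e = (bits >> 6) & 0x1Fu;  /* exponent field */
        unsigned m = bits & 0x3Fu;         /* mantissa field */
        if (e == 0)                        /* denormal: 2^-14 * (m/64) */
            return ldexpf((float)m / 64.0f, -14);
        if (e == 31)                       /* Inf/NaN range; simplified */
            return INFINITY;
        return ldexpf(1.0f + (float)m / 64.0f, (int)e - 15);
    }

    int main(void)
    {
        /* 254 encodes as exponent 22 (i.e. 2^7) with mantissa 63; the
           encoding is monotonic, so +1 gives the next representable
           value. Note that 255 itself is not exactly representable. */
        unsigned near255 = (22u << 6) | 63u;
        printf("%g\n", decode_uf11(near255));     /* prints 254 */
        printf("%g\n", decode_uf11(near255 + 1)); /* prints 256 */
        return 0;
    }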
The reason for using a floating-point image is to allow you to express values outside the range [0, 1]. If that's what you need to do, then that's what you need to do, and normalized integer formats will not be helpful, regardless of their precision.
This particular floating-point format compacts floating-point RGB data down as far as possible without using a shared exponent. It does lose precision relative to normalized 8-bit integers, having only about 2.1 decimal digits of precision (thanks to the implicit leading bit in IEEE-754-style floats, you get the effect of one more mantissa bit than is actually stored).
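(For reference, that figure comes from the implicit bit making 6 + 1 = 7 effective mantissa bits, and 7 × log10(2) ≈ 2.1 decimal digits.)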
But as previously stated, if you need values greater than 1, normalized formats are not an option. This particular floating-point format is for cases where the loss of precision is tolerable, such as framebuffer outputs in HDR rendering. It's half the size of 16-bit float RGBA formats, so it can represent a non-trivial saving in framebuffer bandwidth.
It's an optimization. You might use half-floats for development, but then you switch to this and see if you can tell the difference. If not, you switch to it permanently (possibly with a user-facing setting for higher image fidelity).
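For concreteness, here is a minimal sketch of allocating such an HDR color attachment (desktop GL 3.0+; width and height are assumed to be defined, error checking omitted):

    /* Allocate an HDR color attachment using the packed float format. */
    GLuint fbo, color;

    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R11F_G11F_B10F,
                 width, height, 0, GL_RGB, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color, 0);

    /* To A/B against half floats, change only the internal format:
       GL_R11F_G11F_B10F -> GL_RGBA16F. */

Switching that one enum between GL_R11F_G11F_B10F and GL_RGBA16F is all it takes to run the comparison described above.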