In researching ways to minimize pose estimation inaccuracies, I found this Stack Overflow answer, which suggests that gamma compression could be one factor to consider. My question is: what is the best way to avoid this? I am using an industrial machine vision camera and have the ability to change gamma. Should I simply set gamma=1, since that implies no compression (or expansion)?
As background, I have taken the ordinary precautions to ensure a good pose.
I end up with what appears to be a slight rotational inaccuracy, as shown in the detected pose image. It is particularly apparent at the end of the red x-axis, which diverges from the chessboard edges. The ArUco markers as well as the chessboard corners appear to have been accurately located. Is this error to be expected, or are there ways I can improve upon it?
If you can control the gamma compression at the camera, it's definitely better to disable it. Even better, if the sensor actually captures at more than 8 bpp, you should process at whatever pixel depth it produces. Generally speaking, the closer you are to the metal, the better off you are.
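If you can't disable it at the camera, you can approximately undo it in software before feature detection. A minimal sketch, assuming the camera applies a simple power-law curve `out = in**(1/gamma)` (the actual value of `gamma`, often around 2.2, and the exact curve shape must come from the camera's documentation):

```python
import numpy as np

def linearize(img_u8, gamma=2.2):
    """Approximately undo power-law gamma compression on an 8-bit image.

    Assumes the camera applied out = in**(1/gamma); returns linear
    intensities in [0, 1] as float32. The gamma value here is a
    placeholder -- check your camera's documentation.
    """
    x = img_u8.astype(np.float32) / 255.0  # normalize to [0, 1]
    return x ** gamma                       # invert the compression

# Mid-gray (128) maps well below 0.5 after linearization, because
# gamma compression had boosted the dark tones.
img = np.array([[0, 128, 255]], dtype=np.uint8)
lin = linearize(img)
```

Note that this only approximates the original linear signal: quantization to 8 bits after compression has already discarded information in the highlights, which is why disabling compression at the source is preferable.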
That said, with good illumination and exposure, the effect of gamma compression should generally be quite small. For debugging your example, I'd look first at the distribution of the measurements in the work volume (did you capture enough depth? did you angle the target w.r.t. the focal axis?), and at focus, depth of field, and other sources of blur in the images. See also my other answer here.
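One concrete way to localize the problem is to compute the reprojection error per view: project the board's corners through the estimated pose and intrinsics, and compare against the detected corners. A sketch, assuming you already have lists of detected and projected corner arrays (hypothetical data here; in practice the projected points would come from `cv2.projectPoints` with the estimated pose and camera matrix):

```python
import numpy as np

def per_view_rms(detected, projected):
    """RMS reprojection error (pixels) for each calibration view.

    `detected` and `projected` are parallel lists of (N, 2) arrays of
    pixel coordinates. A view whose RMS is much larger than the others
    points to blur, a bad corner detection, or poor pose coverage.
    """
    return [float(np.sqrt(np.mean(np.sum((d - p) ** 2, axis=1))))
            for d, p in zip(detected, projected)]

# Toy example: one view with a small residual on the first corner.
detected  = [np.array([[0.0, 0.0], [1.0, 0.0]])]
projected = [np.array([[0.3, 0.4], [1.0, 0.0]])]
errs = per_view_rms(detected, projected)
```

If the error is small and uniform but the drawn axes still diverge from the board edges, the pose itself is likely fine and the discrepancy is in how the axes are rendered or in the intrinsics used to draw them.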