I am developing a simple 'laser line' scanner using C++ and OpenCV. So far I can detect the center of the laser line with an accuracy of 1 pixel, which gives me a starting point for a possible sub-pixel function/algorithm (the laser line is approx. 15-20 pixels wide).
Now I am interested in refining this to sub-pixel accuracy. I know OpenCV has some sub-pixel detection functions, but as far as I know these are only for detecting corners.
If anyone has any suggestions, I'd like to hear them.
Some information:
System: Qt Framework, C++, OpenCV library
Camera: monochrome (no color), equipped with a red filter
Image resolution: 2560 x 1920
Note: Only 1 image will be analyzed for the laser line.
There are two basic methods that I have used with good results:
Easy: on one frame, threshold and locate the region containing the image of the laser stripe, then, at each image row, fit a parabola to the raw pixel intensities in a small interval (5-7 pixels, depending on how well focused you are) around the intensity maximum. Your fitting routine must have a robustifier, because outliers are likely, e.g. near scene regions with significant specular reflections. A minimal sketch of this per-row fit follows the second method below.
Harder, but more precise if your camera's framerate is high enough (or the beam moves slowly enough): Curless's spacetime analysis.
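Here is a minimal sketch of the per-row parabola fit from the first method. It assumes an 8-bit monochrome image in which the stripe runs roughly vertically, so each row contains one peak; the function name `subpixelPeaks`, the window size and the brightness/saturation gates are illustrative choices standing in for a proper robustifier, not part of OpenCV.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Returns, for each image row, the sub-pixel column of the stripe centre,
// or -1.0 where no reliable peak was found.
std::vector<double> subpixelPeaks(const cv::Mat& gray,
                                  int halfWindow = 3,   // 7-pixel fit window
                                  int minPeak    = 40,  // reject rows that are too dark
                                  int satLevel   = 250) // reject clipped (flat-topped) peaks
{
    CV_Assert(gray.type() == CV_8UC1);
    std::vector<double> centres(gray.rows, -1.0);

    for (int r = 0; r < gray.rows; ++r)
    {
        const uchar* row = gray.ptr<uchar>(r);

        // 1-pixel peak: the starting point you already have.
        int peakCol = 0;
        for (int c = 1; c < gray.cols; ++c)
            if (row[c] > row[peakCol]) peakCol = c;

        if (row[peakCol] < minPeak) continue;                       // too dark
        if (peakCol < halfWindow || peakCol >= gray.cols - halfWindow)
            continue;                                               // window falls off the image

        // Fit y = a*x^2 + b*x + c to the window, with x measured from peakCol.
        const int n = 2 * halfWindow + 1;
        cv::Mat A(n, 3, CV_64F), y(n, 1, CV_64F);
        bool clipped = false;
        for (int i = 0; i < n; ++i)
        {
            const int x   = i - halfWindow;
            const uchar v = row[peakCol + x];
            if (v >= satLevel) clipped = true;                      // saturation check
            A.at<double>(i, 0) = double(x) * x;
            A.at<double>(i, 1) = x;
            A.at<double>(i, 2) = 1.0;
            y.at<double>(i, 0) = v;
        }
        if (clipped) continue;                                      // peak shape is unreliable

        cv::Mat coeff;                                              // [a, b, c]
        cv::solve(A, y, coeff, cv::DECOMP_SVD);                     // least-squares fit
        const double a = coeff.at<double>(0), b = coeff.at<double>(1);
        if (a >= 0.0) continue;                                     // not a maximum

        const double dx = -b / (2.0 * a);                           // parabola vertex offset
        if (std::abs(dx) <= halfWindow)                             // crude outlier gate
            centres[r] = peakCol + dx;
    }
    return centres;
}
```

If your stripe runs horizontally, transpose the image (or fit per column) instead, and consider replacing the simple gates with a RANSAC-style fit along the stripe for real robustness.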
A search for "subpixel laser fitting" returns several more recent results.
On the practical side, pay close attention to saturation: your exposure time (or lens aperture) should ensure that your sensor won't saturate even when the beam hits the lightest portions of the object surface. Searching for a peak in an area where the signal has been clipped by saturation is obviously pointless.
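For instance, before fitting anything you can verify that only a negligible fraction of the frame is clipped. A small sketch, where the 250 cut-off assumes an 8-bit sensor and the function name is illustrative:

```cpp
#include <opencv2/opencv.hpp>

// Fraction of pixels at or above the saturation level.
double saturatedFraction(const cv::Mat& gray, int satLevel = 250)
{
    CV_Assert(gray.type() == CV_8UC1);
    const int clipped = cv::countNonZero(gray >= satLevel);
    return static_cast<double>(clipped) / static_cast<double>(gray.total());
}
```

If this fraction is non-negligible along the stripe, shorten the exposure or stop down the lens before trusting any sub-pixel estimate.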
Focus (and depth of field) is another area to pay attention to - a blurred image of the beam on the object surface will yield a biased peak.