How can one design a line-following bot that uses only a camera sensor but works at high speeds? I am currently using the OpenCV library to process frames and calculate a steering angle from them. But at higher speeds, since the path changes rapidly, this approach does not work.
P.S. Is there a particular camera that works well with this application?
It is supposed to be a complicated system, and line following is worth a good scientific publication. So here is a very simplified answer.
When you think about the algorithm, it is important to understand that it is always a trade-off. You can use a camera with a low frame rate and poor resolution, but then you can only follow smooth lines with large turn radii.
If you need to follow some crazy curve with sharp turns, you need a good camera, or maybe several cameras.
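As a rough back-of-the-envelope check (the numbers below are assumptions, purely for illustration), the robot covers speed / FPS meters between two consecutive frames, so that is how often the steering can actually be corrected:

    # Hypothetical numbers, just to illustrate the FPS/speed trade-off
    speed_m_s = 2.0   # assumed robot speed, meters per second
    fps = 30          # assumed camera frame rate

    distance_per_frame = speed_m_s / fps                                  # meters between frames
    print(f"steering updated every {distance_per_frame * 100:.1f} cm")    # ~6.7 cm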
First of all, let's assume the following conditions:
There we go.
At the top level, your system looks like a control loop with negative feedback.
In your case:
Usually, in order to achieve a smooth reaction, you use PID controllers, and sometimes (in the space industry) Bellman equations.
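A minimal sketch of such a controller, assuming the error signal is the horizontal offset of the line from the frame center (the declination computed below); the gains are placeholders that you would have to tune on your robot:

    class PID:
        """Textbook PID controller; the gains are placeholders to tune."""
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical usage inside the control loop:
    # steering = PID(kp=0.005, ki=0.0, kd=0.001)
    # angle = steering.update(declination, dt=1.0 / fps)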
Schematically, your robot might look like this:
Now, having this schema at hand, we can talk a little bit about the algorithm.
As was said, a fast camera will allow you to follow sharper turns. But the achievable turn radius also depends on other physical properties of your robot: its mass, its wheels and tires.
If my understanding of your robot is correct, then processing the camera frames is just a few filters:
1. Thresholding. It might be Otsu thresholding or maybe even adaptive thresholding; both methods cope with varying lighting conditions.
2. Find the center of the line with image moments. Since the line is black, you should invert the frame; this can be done by passing THRESH_BINARY_INV instead of THRESH_BINARY in the previous step.
3. Pick the x coordinate of the centroid and compare it with the frame's vertical midline:

       declination = x - frame_width/2
That's it.
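Putting the three filters together, a minimal OpenCV sketch might look like this (the function name is mine, and THRESH_BINARY_INV is used on the assumption that the line is dark on a light background):

    import cv2

    def declination_from_frame(frame):
        """Horizontal offset of the line from the frame center, in pixels."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Otsu picks the threshold automatically; INV because the line is assumed dark
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        m = cv2.moments(mask)
        if m["m00"] == 0:          # no line visible in this frame
            return None
        cx = m["m10"] / m["m00"]   # x coordinate of the centroid
        return cx - frame.shape[1] / 2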
This set of filters uses very efficient image operations and should run with minimal delay even on the oldest Raspberry Pi versions.
If you need to improve the FPS, you can crop the frame at the top and bottom.
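For example (the band fractions here are arbitrary and depend on how the camera is mounted), cropping is just a NumPy slice, applied before the filters above:

    def crop_band(frame, top_frac=0.5, bottom_frac=0.8):
        """Keep only a horizontal band of the frame; the fractions are arbitrary."""
        h = frame.shape[0]
        return frame[int(top_frac * h):int(bottom_frac * h), :]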
This solution can deal even with right-angled turns like this one:
------
     |
     |
     ^
  <robot>
All you need is to tune the steering PID parameters.
But it might not work well with U-turns, where the camera captures both legs of the path:
 --
| |
| |
  ^
<robot>
In this case you should reduce the camera's angle of view (or crop the frame so that only the nearest part of the track is visible).
In fact, this last case, where your camera can see the whole U-turn, might be an advantage: the more of the curve you can recognize, the better you can plan the robot's movements. But that requires more robust and expensive algorithms.
You can also embed a simple LSTM model to recognize some dangerous cases.
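For instance, a tiny LSTM that classifies the recent history of declination values into a few track situations; the class labels, sequence length and the Keras framework are my assumptions here, not part of the original setup:

    import numpy as np
    from tensorflow.keras import layers, models

    SEQ_LEN = 30   # assumed: the last 30 declination samples

    # 3 hypothetical classes: straight, sharp turn ahead, U-turn ahead
    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, 1)),
        layers.LSTM(16),
        layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # At run time: feed the last SEQ_LEN (normalized) declinations and slow
    # down whenever a "dangerous" class gets a high probability.
    recent = np.zeros((1, SEQ_LEN, 1), dtype="float32")
    probs = model.predict(recent, verbose=0)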