Hi, I am a student doing research at my university. This is my first time using computer vision (OpenCV), and I am fairly new to image preprocessing. I have these images of license plates, and I would like to use EasyOCR/pytesseract to read the plates. Currently all I have done is convert the image to grayscale and rotate it by a few degrees, but the reading results are very inconsistent. How do I improve that?
I have tried using kernels to sharpen the images, but the results seem fairly inconsistent too.
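Simplified, my current code looks roughly like this (the rotation angle and sharpening kernel are just example values, and I'm only showing the pytesseract path):

```python
import cv2
import numpy as np
import pytesseract

# Simplified sketch of my current pipeline; the angle and kernel
# below are placeholder values I have been experimenting with.
img = cv2.imread("plate.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Rotate by a few degrees around the image center
h, w = gray.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), 3, 1.0)
rotated = cv2.warpAffine(gray, M, (w, h), flags=cv2.INTER_LINEAR)

# Sharpening kernel I tried
kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]])
sharpened = cv2.filter2D(rotated, -1, kernel)

text = pytesseract.image_to_string(sharpened)
print(text)
```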
Here are some of the images, to give you a general idea of what they look like:
I would start with image enhancement. It's hard to tell exactly what is applicable, but here are some possible maneuvers, sketched below:
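As a rough sketch of what I mean: boosting local contrast (CLAHE), denoising, upscaling, and binarizing with an adaptive threshold. The specific steps and every parameter value below are only suggestions and will need tuning on your actual plates:

```python
import cv2

# Sketch of possible enhancement steps; all parameter values here are
# guesses that need tuning on the real plate images.
img = cv2.imread("plate.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. Boost local contrast with CLAHE (helps with uneven lighting)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
contrast = clahe.apply(gray)

# 2. Reduce noise while keeping character edges
denoised = cv2.bilateralFilter(contrast, d=9, sigmaColor=75, sigmaSpace=75)

# 3. Upscale: OCR engines tend to do better on larger glyphs
resized = cv2.resize(denoised, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

# 4. Binarize with an adaptive threshold (robust to shadows/gradients)
binary = cv2.adaptiveThreshold(resized, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 15)

# 5. Light morphological opening to remove speckle noise
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

cv2.imwrite("plate_preprocessed.png", cleaned)
```

Try the steps one at a time and compare OCR output after each; on some plates the adaptive threshold alone makes the biggest difference, on others it destroys thin strokes and you are better off stopping after the denoising step.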
Anyway, looking at your images, I am not sure it will be easy to achieve the desired result without also diving into the OCR pipeline itself.
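By "diving into the OCR pipeline" I mean things like telling the engine what kind of text to expect. A hedged example (assuming a reasonably recent Tesseract 4/5 and EasyOCR): treat the plate as a single text line and restrict the character set to what can legally appear on a plate in your country.

```python
import cv2
import pytesseract
import easyocr

# Assumes the preprocessed image produced by the enhancement sketch above
plate = cv2.imread("plate_preprocessed.png")

# Tesseract: --psm 7 = "treat the image as a single text line";
# the whitelist limits recognition to plate-like characters.
config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
print(pytesseract.image_to_string(plate, config=config))

# EasyOCR: the same idea via the allowlist parameter of readtext()
reader = easyocr.Reader(["en"], gpu=False)
results = reader.readtext(plate, allowlist="ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
for bbox, text, conf in results:
    print(text, conf)
```

Adjust the whitelist/allowlist to your plate format; constraining the output space like this usually removes a lot of the "inconsistent" misreads where letters get swapped for punctuation or lowercase characters.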