I am curious what the PIL library does in terms of value scaling and normalization to show me a crisp image, and why plotting the extracted numpy array with matplotlib looks really bad.
Here is my code
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

# Open the source image and crop out the region containing the numbers
the_image = Image.open(temp_image_file)
sub_image = the_image.crop((520, 965, 565, 1900))
plt.imshow(sub_image, cmap='gray')
plt.show()

# Convert the cropped PIL image to a numpy array and remap one pixel value
si = np.array(sub_image, dtype=np.uint8)
si[np.where(si == 48)] = 255
plt.imshow(si, cmap='gray')
plt.show()
Attached are the two plots. The first, direct plot looks much crisper, while the second is rather illegible. This image is supposed to go into EasyOCR
for number recognition, and I would rather feed it the first, crisp image than what the numpy array produces.
Any ideas? It looks like a lowpass filter has been applied...?
From the autosuggestions I found this function:
sub_image = ImageOps.grayscale(sub_image)
and it did the trick :D
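For reference, here is roughly how the fix slots into the pipeline. This is a minimal sketch, assuming the same crop box as above; the EasyOCR reader setup and the allowlist parameter at the end are illustrative and not part of my original code.

from PIL import Image, ImageOps
import numpy as np
import easyocr  # assumed to be installed for the OCR step

the_image = Image.open(temp_image_file)
sub_image = the_image.crop((520, 965, 565, 1900))

# Collapse the crop to a single 8-bit luminance channel before converting
# to numpy, so the array stays 2D rather than RGB(A)
sub_image = ImageOps.grayscale(sub_image)
si = np.array(sub_image, dtype=np.uint8)
si[np.where(si == 48)] = 255

# Hypothetical hand-off to EasyOCR, restricted to digits
reader = easyocr.Reader(['en'])
results = reader.readtext(si, allowlist='0123456789')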