I would like to improve the resolution of images using AI upscaling, but the four SR models supported by OpenCV's dnn_superres module in Python (EDSR, ESPCN, FSRCNN, LapSRN) are nowhere near as good as the online upscaling websites. For example, here is the original image:
The best-performing model (EDSR), run with the code below, produces this:
import cv2
from cv2 import dnn_superres  # requires opencv-contrib-python

# Create the super-resolution object and load the pretrained EDSR x3 model
sr = dnn_superres.DnnSuperResImpl_create()
image = cv2.imread('Images/exported_image_76322.png')
path = "EDSR_x3.pb"
sr.readModel(path)
sr.setModel("edsr", 3)

# Upscale by 3x, then save and display the result
result = sr.upsample(image)
cv2.imwrite('ai_upscale.png', result)
display_image_in_actual_size('ai_upscale.png')
Websites such as zyro.com produce much nicer results:
How can I improve my results so that they are at the same level of quality as the AI upscaling websites? I want to automate the upscaling of hundreds of images, so simply using the websites is not feasible.
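For context, the batch loop I have in mind is simple; the folder layout and output naming below are just placeholders:

import glob
import os
import cv2
from cv2 import dnn_superres

# Load the model once, then upscale every PNG in the input folder
sr = dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x3.pb")
sr.setModel("edsr", 3)

for in_path in glob.glob('Images/*.png'):
    result = sr.upsample(cv2.imread(in_path))
    out_path = os.path.splitext(in_path)[0] + '_x3.png'
    cv2.imwrite(out_path, result)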
Different options:
If you have the original hi-res images, you can take an existing pretrained model and finetune it (i.e. continue training it) on your own data (input = low-res image, output = hi-res image); see the training sketch after this list. This is the best option because the model will be trained on the correct data distribution.
If you don't have access to the original hi-res images, but you believe zyro provides adequate quality for your purpose, you can create a dataset by running your images through zyro and finetune an existing SR model on those pairs.
If you have hi-res images that are not from your dataset, you could finetune an existing SR model on those (downscale them and train the model to recover the hi-res versions) and see whether it generalizes well to your dataset.
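OpenCV's dnn module is inference-only, so the finetuning itself has to happen in a training framework. Below is a minimal sketch in PyTorch, assuming paired low-res/hi-res images on disk with matching filenames (HR exactly 3x the LR size); SmallSRNet, the folder paths and the hyperparameters are illustrative placeholders, not the real EDSR training setup (to finetune EDSR itself, swap in a PyTorch reimplementation of it and load its pretrained weights).

import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from PIL import Image
import torchvision.transforms.functional as TF

SCALE = 3  # upscaling factor; must match the factor you use at inference time

class PairedSRDataset(Dataset):
    """Matching low-res / hi-res images from two folders with identical filenames."""
    def __init__(self, lr_dir, hr_dir):
        self.lr_dir, self.hr_dir = lr_dir, hr_dir
        self.names = sorted(os.listdir(lr_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        lr = TF.to_tensor(Image.open(os.path.join(self.lr_dir, name)).convert('RGB'))
        hr = TF.to_tensor(Image.open(os.path.join(self.hr_dir, name)).convert('RGB'))
        return lr, hr

class SmallSRNet(nn.Module):
    """Tiny ESPCN-style network: feature convs followed by pixel-shuffle upsampling."""
    def __init__(self, scale=SCALE):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.body(x)

def finetune(lr_dir='data/lr', hr_dir='data/hr', epochs=50):
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model = SmallSRNet().to(device)
    # model.load_state_dict(torch.load('pretrained.pth'))  # start from a checkpoint if you have one
    # batch_size=1 avoids shape mismatches; use larger batches if pairs are cropped to fixed-size patches
    loader = DataLoader(PairedSRDataset(lr_dir, hr_dir), batch_size=1, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()  # L1 is a common choice for SR training

    for epoch in range(epochs):
        total = 0.0
        for lr_img, hr_img in loader:
            lr_img, hr_img = lr_img.to(device), hr_img.to(device)
            opt.zero_grad()
            loss = loss_fn(model(lr_img), hr_img)
            loss.backward()
            opt.step()
            total += loss.item()
        print(f'epoch {epoch}: loss {total / len(loader):.4f}')

    torch.save(model.state_dict(), 'sr_finetuned.pth')

if __name__ == '__main__':
    finetune()

Once training looks reasonable, run the finetuned model on a few held-out images and compare against the zyro output before committing to the full batch.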