Tags: python, motion-blur

How to create a synthetic blurred image from a sharp image using a PSF kernel (in image format)


Update: following a suggestion from @Fix, I converted BGR to RGB, but the output is still not the same as the paper's output.

(Small note: this was already posted on https://dsp.stackexchange.com/posts/60670, but since I need help quickly I reposted it here; I hope this doesn't violate any policy.)

I am trying to create a synthetic blurred image from a ground-truth image using PSF kernels (in PNG format). Some papers only mention that I need to convolve the kernel with the image, but it seems I need more than that. Here is what I did:

import matplotlib.pyplot as plt
import cv2 as cv
import scipy
from scipy import ndimage
import matplotlib.image as mpimg
import numpy as np

img = cv.imread('../dataset/text_01.png')
# stretch the image intensities to [-0.1, 1.8] as described in the paper
norm_image = cv.normalize(img, None, alpha=-0.1, beta=1.8, norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F)

# read the PSF kernel and scale it to [0, 1]
f = cv.imread('../matlab/uniform_kernel/kernel_01.png')
norm_f = cv.normalize(f, None, alpha=0, beta=1, norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F)

# convolve the stretched image with the kernel
result = ndimage.convolve(norm_image, norm_f, mode='nearest')

# clip the result back to [0, 1]
result = np.clip(result, 0, 1)

imgplot = plt.imshow(result)
plt.show()

And this only gives me an entirely white image. I tried decreasing beta to a lower value, e.g. norm_f = cv.normalize(f, None, alpha=0, beta=0.03, norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F), and then an image appears, but its colours are very different from the expected output.

The paper I got the idea from, together with the dataset (ground-truth images and PSF kernels in PNG format), is here.

This is what they said:

We create the synthetic saturated images in a way similar to [3, 10]. Specifically, we first stretch the intensity range of the latent image from [0,1] to [−0.1,1.8], and convolve the blur kernels with the images. We then clip the blurred images into the range of [0,1]. The same process is adopted for generating non-uniform blurred images.
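
In code terms, I read those three steps roughly like this (how the kernel PNG itself should be scaled is exactly the part I'm unsure about, so the sketch just takes a 2-D float PSF as given; np and ndimage are imported as in my script above):

def synth_blur(latent, kernel):
    """latent: HxWx3 float image in [0, 1]; kernel: 2-D float PSF."""
    stretched = latent * 1.9 - 0.1                      # step 1: stretch [0, 1] -> [-0.1, 1.8]
    blurred = np.stack([ndimage.convolve(stretched[:, :, c], kernel, mode='nearest')
                        for c in range(3)], axis=2)     # step 2: convolve each channel with the PSF
    return np.clip(blurred, 0, 1)                       # step 3: clip back to [0, 1]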

These are some images I got from my source: [images]

And this is the ground-truth image: [image]

And this is the PSF kernel (PNG file): [image]

And this is their output (synthetic blurred image): [image]

Please help me out. Any solution is fine: other software, another language, another tool. All I care about is ending up with a synthetic blurred image produced from the original (sharp) image and a PSF kernel, with good performance. (I tried Matlab with imfilter and ran into a similar problem; another issue with Matlab is that it's slow.)

(Please don't judge me for only caring about the output of this process: I'm not using a deconvolution method to restore the blurred image back to the original; I just want enough (original, blurred) pairs to test my hypothesis/method.)

Thanks.


Solution

  • OpenCV reads/writes images in BGR format, while Matplotlib displays them as RGB. So if you want to see the right colours, you should first convert the result to RGB:

    result_rgb = cv.cvtColor(result, cv.COLOR_BGR2RGB)
    imgplot = plt.imshow(result_rgb)
    plt.show()
    

    Edit: You could convolve each channel separately and normalise your convolved image like this:

    # collapse the 3-channel kernel PNG to a single grayscale channel
    f = cv.cvtColor(f, cv.COLOR_BGR2GRAY)
    # scale image and kernel to [0, 1]
    norm_image = img / 255.0
    norm_f = f / 255.0
    # convolve each channel separately and divide by the kernel sum,
    # so the overall brightness is preserved
    result0 = ndimage.convolve(norm_image[:,:,0], norm_f) / np.sum(norm_f)
    result1 = ndimage.convolve(norm_image[:,:,1], norm_f) / np.sum(norm_f)
    result2 = ndimage.convolve(norm_image[:,:,2], norm_f) / np.sum(norm_f)
    result = np.stack((result0, result1, result2), axis=2).astype(np.float32)
    

    Then you should get the right colours. Note, though, that this normalises both the image and the kernel to [0.0, 1.0] (rather than stretching the image to [-0.1, 1.8] as the paper suggests).
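
    If you also want to follow the paper's saturation step, one possible sketch (not verified against the paper's outputs) is to stretch the normalised image to [-0.1, 1.8] before the per-channel convolution and clip afterwards, keeping the division by the kernel sum:

    stretched = norm_image * 1.9 - 0.1                 # stretch [0, 1] -> [-0.1, 1.8]
    result0 = ndimage.convolve(stretched[:,:,0], norm_f) / np.sum(norm_f)
    result1 = ndimage.convolve(stretched[:,:,1], norm_f) / np.sum(norm_f)
    result2 = ndimage.convolve(stretched[:,:,2], norm_f) / np.sum(norm_f)
    result = np.clip(np.stack((result0, result1, result2), axis=2), 0, 1).astype(np.float32)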