python image-processing noise-reduction

What is the best measurement for validating a denoising function in image processing? Signal-to-noise ratio seems to fail me


I'm using BrainWeb, a simulated dataset of normal brain MR images. I want to validate my MyDenoise function, which calls denoise_nl_means from the skimage.restoration package. To do so, I downloaded two sets of images from BrainWeb: an original image with 0% noise and 0% intensity non-uniformity, and a noisy image with the same options but 9% noise and 40% intensity non-uniformity. I then calculate the signal-to-noise ratio (SNR) using a function taken from a deprecated version of scipy.stats:

import numpy as np

def signaltonoise(a, axis=0, ddof=0):
    # SNR as mean divided by standard deviation along the given axis
    # (copied from the deprecated scipy.stats.signaltonoise).
    # Pass axis=None to get a single value for the whole image.
    a = np.asanyarray(a)
    m = a.mean(axis)
    sd = a.std(axis=axis, ddof=ddof)
    return np.where(sd == 0, 0, m / sd)
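
For context, MyDenoise is a thin wrapper around denoise_nl_means. A minimal sketch of what it does (the filtering strength h and the patch sizes here are illustrative, not my exact values):

import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def MyDenoise(image):
    # Estimate the noise standard deviation from the image itself.
    sigma_est = np.mean(estimate_sigma(image))
    # Non-local means denoising; h and the patch sizes are illustrative.
    return denoise_nl_means(image, h=1.15 * sigma_est, sigma=sigma_est,
                            patch_size=5, patch_distance=6, fast_mode=True)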

I assumed that, after denoising, we should have a higher SNR, and that is indeed always true. However, when comparing to the original image, the noisy image already has a higher SNR. I guess that's because the total mean of the image increases more significantly than the standard deviation. So SNR does not seem to be a good measurement for validating whether my denoised image is closer to the original image, since the noisy image already has a higher SNR than the original one. Are there better measurements for validating denoising functions on images?

Here is my result:

Original image SNR: 1.23
Noisy image SNR: 1.41
Denoised image SNR: 1.44

Thank you.


Solution

  • This is not how you calculate SNR.

    The core concept is that, for any one given image, you don’t know what is noise and what is signal. If we did, denoising wouldn’t be a problem. Therefore, it is impossible to measure the noise level from one image (it is possible to estimate it, but we cannot compute it).
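
    As an aside, a minimal sketch of such an estimate, using skimage's wavelet-based estimate_sigma (the camera test image is just a stand-in for any grayscale image):

    from skimage import data, img_as_float
    from skimage.restoration import estimate_sigma

    # Estimate the noise standard deviation from a single image.
    # This is only an estimate; the true noise cannot be computed
    # without a ground truth.
    image = img_as_float(data.camera())
    sigma_est = estimate_sigma(image)
    print(f"Estimated noise sigma: {sigma_est:.4f}")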

    The solution is to use that noise-free image. This is the ground truth, the objective of the denoise operation. We can thus compute the noise by comparing any one image to this ground truth; the difference is the noise:

    noise = image - ground_truth
    

    You can now compute the mean square error (MSE):

    mse = np.mean(noise**2)
    

    Or the signal to noise ratio:

    snr = np.mean(ground_truth) / np.mean(noise)
    

    (Note that this is one of many possible definitions of signal-to-noise ratio; often we use the power of the signals rather than just their means, and often it is measured in dB.)
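
    For example, a minimal sketch of the power-based definition in dB, assuming ground_truth and noise as defined above (and numpy imported as np):

    # SNR from signal power and noise power, expressed in decibels.
    snr_db = 10 * np.log10(np.mean(ground_truth**2) / np.mean(noise**2))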

    In general, MSE is a really good way to talk about the error in denoising. Most scientific papers in the field report the peak signal-to-noise ratio (PSNR) instead, which is just a scaling and logarithmic mapping of the MSE, so it is pointless to report both.
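
    A minimal sketch, computing PSNR by hand from the MSE above and, equivalently, with skimage's implementation (data_range is an assumption: 1.0 for float images scaled to [0, 1]):

    from skimage.metrics import peak_signal_noise_ratio

    data_range = 1.0  # assumed value range of the images

    # PSNR by hand: a scaling and logarithmic mapping of the MSE.
    psnr = 10 * np.log10(data_range**2 / mse)

    # The same value via skimage.
    psnr_skimage = peak_signal_noise_ratio(ground_truth, image,
                                           data_range=data_range)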

    You can also look at the mean absolute error (MAE), which is less sensitive than the MSE to individual pixels with a large error.
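
    A minimal sketch, again assuming noise = image - ground_truth:

    # MAE: the average magnitude of the per-pixel error.
    mae = np.mean(np.abs(noise))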