python · pytorch · computer-vision · signal-processing · noise

How do I add reversible noise to the MNIST dataset using PyTorch?


I would like to add reversible noise to the MNIST dataset for some experimentation.

Here's what I am trying atm:

import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt  # needed by display_img below
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from PIL import Image

def display_img(pixels, label=None):
    plt.imshow(pixels.squeeze(), cmap="gray")  # squeeze drops a leading (1, H, W) channel dim
    if label is not None:  # `if label:` would skip the title for digit 0
        plt.title("Label: %d" % label)
    plt.axis("off")
    plt.show()

class NoisyMNIST(torchvision.datasets.MNIST):
    def __init__(self, root, train=True, transform=None, target_transform=None, download=False):
        super(NoisyMNIST, self).__init__(root, train=train, transform=transform, target_transform=target_transform, download=download)

    def __getitem__(self, index):
        img, target = self.data[index], self.targets[index]
        img = Image.fromarray(img.numpy(), mode="L")

        if self.transform is not None:
            img = self.transform(img)
        
        # add the noise
        noise_level = 0.3
        noise = self.generate_safe_random_tensor(img) * noise_level
        noisy_img = img + noise
        
        return noisy_img, noise, img, target

    def generate_safe_random_tensor(self, img):
        """generates random noise for an image but limits the pixel values between -1 and 1""" 
       
        min_values = torch.clamp(-1 - img, max=0)
        max_values = torch.clamp(1 - img, min=0)
       
        return torch.rand(img.shape) * (max_values - min_values) + min_values



# Define transformations to apply to the data
transform = transforms.Compose([
    transforms.ToTensor(),  # Convert images to tensors
    transforms.Normalize((0.1307,), (0.3081,)),
])

train_dataset = NoisyMNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = NoisyMNIST(root='./data', train=False, download=True, transform=transform)

img_id = 2  # index of a training sample; this one is labelled 4
np_noise = train_dataset[img_id][1]  # the generated noise
np_data = train_dataset[img_id][0]   # the noisy image

np_data_sub_noise = np_data - np_noise  # attempt to recover the original

display_img(np_data_sub_noise, 4)

Ideally, this would give me the regular MNIST images along with the noisy MNIST images and a record of the noise that was added. Given this, I had assumed I could subtract the noise from the noisy image to get back to the original image, but my image operations are not reversible.

Any pointers or code snippets would be greatly appreciated. Below are the images I currently get with my code:

Original image:

[image: the original MNIST digit]

With added noise:

[image: the same digit with noise added]

And with the noise subtracted from the noisy image:

[image: the attempted reconstruction]


Solution

  • Don't store the noise you generated.

    Store the effective noise: the difference between the clean image and the noisy image. Those values you can safely subtract (the first sketch after this list demonstrates the difference).

    When you apply noise to a picture made of integers, you get either integer wraparound (200 + 200 = 400 ≡ 144 mod 256) or saturation (200 + 200 = 255, clipped). That is the source of the differences you see.

    The "effective" noise you added (and calculated by subtracting) will look weird. Where the source image is bright, the noise's values cannot be very positive. In dark regions, the noise cannot be very negative.

    You might want to work with numbers that aren't clipped/saturated. Floats are a better candidate for this (second sketch below).

    Also consider gamma compression. Your network might learn that you added synthetic noise, i.e. learn to distinguish real-noisy pictures from fake-noisy ones. The (assumed Gaussian) noise in real images is gamma-compressed along with the "signal"; if you add Gaussian noise to a gamma-compressed image, then in linear space that noise is no longer Gaussian (third sketch below).

    Remember that lossy image compression is lossy. Since you seem to care about exact pixel values, you should use lossless compression only (final snippet below).
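
A minimal sketch of the first point, assuming 8-bit pixels and a saturating add (the 28×28 size and the noise level of 30 are made-up demo values):

import torch

torch.manual_seed(0)
img = torch.randint(0, 256, (28, 28), dtype=torch.uint8)  # stand-in for an MNIST digit
noise = torch.randn(28, 28) * 30.0                        # the noise you *generated*

# simulate the usual saturating behaviour of 8-bit images
noisy = torch.clamp(img.float() + noise, 0, 255).to(torch.uint8)

# the noise that was *effectively* applied, after clipping
effective_noise = noisy.to(torch.int16) - img.to(torch.int16)

# subtracting the generated noise does NOT recover the original...
bad = torch.clamp(noisy.float() - noise, 0, 255).to(torch.uint8)
print(torch.equal(bad, img))        # usually False: clipped pixels are lost

# ...but subtracting the effective noise does
recovered = (noisy.to(torch.int16) - effective_noise).to(torch.uint8)
print(torch.equal(recovered, img))  # True: exact round trip

Computing the noise after the clipping has happened is what makes the subtraction exact.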
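Applied to the class in the question, the simplest reversible route is to stay in float space and skip the clamping entirely; a sketch of such a __getitem__, keeping the question's return signature (the 0.3 noise level is taken from the question; with no clipping, the generated noise and the effective noise are the same thing):

def __getitem__(self, index):
    img, target = self.data[index], self.targets[index]
    img = Image.fromarray(img.numpy(), mode="L")

    if self.transform is not None:
        img = self.transform(img)  # float tensor after ToTensor/Normalize

    # plain gaussian noise with no clamping: floats don't saturate,
    # so noisy_img - noise gives img back up to float rounding
    noise = torch.randn_like(img) * 0.3
    noisy_img = img + noise

    return noisy_img, noise, img, target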
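For the gamma point, a sketch of adding the noise in linear light instead; the pure power-law gamma of 2.2 is an assumption (sRGB's actual transfer function differs slightly), and the final clamp trades exact reversibility for realism:

def add_noise_linear(img01, sigma=0.05, gamma=2.2):
    """img01: float tensor in [0, 1], gamma-compressed as stored images usually are."""
    linear = img01.clamp(0, 1) ** gamma                 # decode to linear light
    noisy = linear + torch.randn_like(linear) * sigma   # gaussian noise in linear space
    return noisy.clamp(0, 1) ** (1.0 / gamma)           # re-encode with the same gamma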
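And for the last point: if you write intermediate images to disk, pick a lossless format such as PNG (noisy_uint8 below stands for any 2-D uint8 array):

Image.fromarray(noisy_uint8, mode="L").save("noisy.png")  # PNG round-trips pixel values exactly; JPEG would not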