Tags: google-colaboratory, torch, torchvision

After applying torchvision.transforms to the MNIST dataset, how do I view the images using cv2_imshow?


I am trying to implement a simple GAN in Google Colaboratory. After using transforms to normalize the images, I want to view the output: the fake image generated by the generator and a real image from the dataset, side by side, once every batch iteration, like a video.

from torchvision import datasets, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose(
[
  # Convert a PIL Image or numpy.ndarray to a tensor. This transform does not support torchscript.
  # Converts a PIL Image or numpy.ndarray (H x W x C) in the range
  # [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0].
  transforms.ToTensor(),

  # Normalize a tensor image with mean and standard deviation:
  # here (x - 0.5) / 0.5, which maps [0.0, 1.0] to [-1.0, 1.0].
  transforms.Normalize((0.5,), (0.5,))
])


dataset = datasets.MNIST(root="dataset/", transform=transform, download=True)

loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

After applying the transforms, the images are no longer in the range [0, 255]. How do I denormalize them and use cv2_imshow to show the series of real and fake images frame by frame in the same place?
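For reference, here is a quick way to check the value range after the transforms (a minimal sketch, assuming the loader defined above):

images, _ = next(iter(loader))
print(images.min().item(), images.max().item())  # roughly -1.0 and 1.0 after Normalize((0.5,), (0.5,))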

[screenshot of the output]

The above image shows the output I get; there are two problems here.

  1. The normalization rendered the image indistinguishable; it is just all black.
  2. The images are not shown frame by frame in the same place like a video; instead, each one is printed on a new line.

What approach do I take to solve these issues?


Solution

  • I found that I hadn't denormalized the images.

    def denormalize(x):
      # Undo Normalize((0.5,), (0.5,)) and rescale back to [0, 255]
      pixels = ((x * .5) + .5) * 255
      return pixels
    

    The above function converts the images back to the range [0, 255], undoing the Normalize((0.5,), (0.5,)) step; a usage sketch is included below.

    I haven't found a solution for problem 2 yet.
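For problem 1, here is a minimal sketch of how the denormalize function can be combined with cv2_imshow in Colab (assuming the loader from the question; the variable names are illustrative):

from google.colab.patches import cv2_imshow
import numpy as np

real, _ = next(iter(loader))                    # shape (batch_size, 1, 28, 28), values in [-1, 1]
img = denormalize(real[0])                      # back to the range [0, 255]
img = img.squeeze().numpy().astype(np.uint8)    # (28, 28) uint8 array, as expected by OpenCV
cv2_imshow(img)

A fake image produced by the generator can be shown the same way; it just needs to be detached from the computation graph first (e.g. fake[0].detach().cpu()) before denormalizing and converting to uint8.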