I'm going through this PyTorch tutorial: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html I've been able to show real images next to the fake ones I've generated:
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))
# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
plt.show()
With my dataset, that results in this:
I was wondering how I can show a single one of the generated fake images. I'd also like to show it as a 512 x 512 image if possible.
Edit: The img_list[-1].shape is torch.Size([3, 530, 530]).
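For what it's worth, that [3, 530, 530] shape is consistent with `vutils.make_grid` called on 64 images of 64 x 64 with `nrow=8` and `padding=2` (2 + 8 * (64 + 2) = 530), so one cell can be sliced back out of the grid by arithmetic. A minimal sketch, using a random tensor as a stand-in for `img_list[-1]` (the `tile` helper and the padding/cell numbers are my assumptions, not from the tutorial):

```python
import torch

# Stand-in for img_list[-1]: a make_grid output of 64 images,
# 64x64 each, nrow=8, padding=2 -> shape [3, 530, 530].
grid = torch.rand(3, 530, 530)

def tile(grid, row, col, pad=2, cell=64):
    """Cut one cell out of a make_grid collage (hypothetical helper)."""
    y = pad + row * (cell + pad)
    x = pad + col * (cell + pad)
    return grid[:, y:y + cell, x:x + cell]

one_fake = tile(grid, 0, 0)  # top-left fake, shape [3, 64, 64]
```

This only recovers the normalized pixels as they appear in the grid, so storing the images individually (before they're combined) is cleaner if you can change the training loop.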
This part of the training loop shows that img_list is a list of grid images, each one a collage of sub-images that I can't separate afterwards. Is there a way to change this so that img_list holds each generated fake image individually?
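One option is to append the raw generator batch instead of the `make_grid` collage, so each fake stays individually addressable. A sketch of that bookkeeping, with a toy stand-in module in place of the tutorial's `netG` (the real generator comes from the tutorial; the stand-in only needs to map [B, nz, 1, 1] noise to [B, 3, 64, 64] images):

```python
import torch
import torch.nn as nn

# Toy stand-in for the tutorial's netG (assumption, just for a runnable sketch).
nz = 100
netG = nn.Sequential(nn.ConvTranspose2d(nz, 3, 64, 1, 0), nn.Tanh())

img_list = []
fixed_noise = torch.randn(64, nz, 1, 1)

# Inside the training loop: keep the raw batch rather than a grid image.
with torch.no_grad():
    fake = netG(fixed_noise).detach().cpu()
img_list.append(fake)  # img_list[-1] has shape [64, 3, 64, 64]

single = img_list[-1][0]  # one fake image, shape [3, 64, 64]
```

You can still build the side-by-side comparison for plotting by calling `vutils.make_grid(img_list[-1], padding=2, normalize=True)` at display time instead of at storage time.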
Here is what I wanted:
noise = torch.randn(1, nz, 1, 1, device=device)
with torch.no_grad():
    newfake = netG(noise).detach().cpu()
plt.axis("off")
plt.imshow(np.transpose(newfake[0],(1,2,0)))
plt.show()
This generates a new image from fresh noise, whereas img_list was combining the generated images into one grid image. However, this code still only produces 64 by 64 pixel images.
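The 64 x 64 size is fixed by the generator's architecture, so short of retraining a deeper generator on larger images, the best you can do for display is resample the output up to 512 x 512. A sketch using `torch.nn.functional.interpolate`, with a random tensor standing in for `newfake` (this adds no real detail, it only enlarges the pixels):

```python
import torch
import torch.nn.functional as F

# Stand-in for newfake = netG(noise).detach().cpu(): batch of 1, 3x64x64.
newfake = torch.rand(1, 3, 64, 64)

# Resample 64x64 -> 512x512 for display only; no new detail is created.
big = F.interpolate(newfake, size=(512, 512), mode="bilinear",
                    align_corners=False)
```

You can then plot it the same way as before, e.g. `plt.imshow(np.transpose(big[0], (1, 2, 0)))`. For genuinely higher-resolution fakes you would need to add upsampling layers to `netG` (and matching layers to the discriminator) and train on 512 x 512 data.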