Tags: python, image, numpy, python-imaging-library

Converting an Image to numpy array, then converting it immediately back to an image gives two different results


As the title states, I'm converting my image to a numpy array and then converting it right back. Here's my code:

import os
import numpy as np
from PIL import Image
img = Image.open(os.path.join(no_black_border, png_files[0]))
img.show()

np_arr = np.asarray(img)
img1 = Image.fromarray(np_arr)
img1.show()

Here's my image before converting it: [image]

Here's my image after converting it back: [image]


Solution

  • Your image is not RGB, it is a palette image. That means it does not have a Red, a Green and a Blue value at every pixel location; instead it has a single 8-bit palette index at each location, which PIL uses to look up the colour. You lose the palette when you convert to a Numpy array.
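
    A quick way to see this (a minimal sketch, assuming the same img and imports from the question) is to check the image mode and the shape of the array Numpy gives you:

    # 'P' means palette mode; an RGB image would report 'RGB'
    print(img.mode)
    
    # A palette image becomes a 2-D array of palette indices,
    # not a (height, width, 3) array of RGB values
    print(np.asarray(img).shape)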

    You have 2 choices.

    Either convert your image to RGB when you open it and all 3 values will be carried across to Numpy:

    # Load image and make RGB
    im = Image.open(...).convert('RGB')
    
    # Convert to Numpy array and process
    numpyarray = np.array(im)
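
    With the image opened as RGB, the round trip from the question behaves as expected; a minimal sketch, assuming the numpyarray from above:

    # The array now holds actual R, G, B values, so converting back
    # to a PIL Image reproduces the original colours
    im2 = Image.fromarray(numpyarray)
    im2.show()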
    

    Or, do as you currently do, but re-apply the palette from the original image after converting back to a PIL Image:

    # Load image
    im = Image.open(...)
    
    # Convert to Numpy array
    numpyarray = np.array(im)
    
    ... do Numpy stuff ...
    
    # Convert back to PIL Image and re-apply original palette
    r = Image.fromarray(numpyarray, mode='P')
    r.putpalette(im.getpalette())
    
    # Optionally save
    r.save('result.png')
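
    To check the palette round trip, you can compare the two images; a minimal sketch, assuming the Numpy step did not change any pixel values:

    # Convert both to RGB and compare pixel values; they should match
    # as long as the palette indices were left untouched
    print(np.array_equal(np.array(im.convert('RGB')), np.array(r.convert('RGB'))))
    
    # Or simply display the rebuilt image for a visual comparison
    r.show()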
    

    See the answer here and the accompanying comments.