I am trying to process an image file into something that can be displayed on a Black/White/Red e-ink display, but I am running into a problem with the output resolution.
Based on the example code for the display, it expects two arrays of bytes (one for Black/White, one for Red), each 15,000 bytes. The resolution of the e-ink display is 400x300.
I'm using the following Python script to generate two BMP files: one for Black/White and one for Red. This is all working, but the file sizes are 360,000 bytes each, which won't fit in the ESP32 memory. The input image (a PNG file) is 195,316 bytes.
The library I'm using has a function, EPD_4IN2B_V2_Display(BLACKWHITEBUFFER, REDBUFFER), which wants the full image (one channel for BW, one for Red) to be in memory. With these image sizes, that won't fit on the ESP32. The example uses 15 KB for each color channel (BW, R), so I feel like I'm missing a step in the image processing needed to make this work.
Can anyone shed some light on what I'm missing? How would I update the Python image-processing script to account for this?
I am using the Waveshare 4.2inch E-Ink display and the Waveshare ESP32 driver board. A lot of the Python code is based on this StackOverflow post but I can't seem to find the issue.
import io
import traceback
from wand.image import Image as WandImage
from PIL import Image
# This function takes as input a filename for an image
# It resizes the image into the dimensions supported by the ePaper Display
# It then remaps the image into a tri-color scheme using a palette (affinity)
# for remapping, and the Floyd Steinberg algorithm for dithering
# It then splits the image into two component parts:
# a white and black image (with the red pixels removed)
# a white and red image (with the black pixels removed)
# It then converts these into PIL Images and returns them
# The PIL Images can be used by the ePaper library to display
def getImagesToDisplay(filename):
    print(filename)
    red_image = None
    black_image = None
    red_bytes = None
    black_bytes = None
    try:
        with WandImage(filename=filename) as img:
            img.resize(400, 300)
            with WandImage() as palette:
                with WandImage(width=1, height=1, pseudo="xc:red") as red:
                    palette.sequence.append(red)
                with WandImage(width=1, height=1, pseudo="xc:black") as black:
                    palette.sequence.append(black)
                with WandImage(width=1, height=1, pseudo="xc:white") as white:
                    palette.sequence.append(white)
                palette.concat()
                img.remap(affinity=palette, method='floyd_steinberg')
                red = img.clone()
                black = img.clone()
                red.opaque_paint(target='black', fill='white')
                black.opaque_paint(target='red', fill='white')
                red_image = Image.open(io.BytesIO(red.make_blob("bmp")))
                black_image = Image.open(io.BytesIO(black.make_blob("bmp")))
                red_bytes = io.BytesIO(red.make_blob("bmp"))
                black_bytes = io.BytesIO(black.make_blob("bmp"))
    except Exception as ex:
        print('traceback.format_exc():\n%s' % traceback.format_exc())
    return (red_image, black_image, red_bytes, black_bytes)
if __name__ == "__main__":
    print("Running...")
    file_path = "testimage-tree.png"
    with open(file_path, "rb") as f:
        image_data = f.read()
    red_image, black_image, red_bytes, black_bytes = getImagesToDisplay(file_path)
    print("bw: ", black_bytes)
    print("red: ", red_bytes)
    black_image.save("output/bw.bmp")
    red_image.save("output/red.bmp")
    print("BW file size:", len(black_image.tobytes()))
    print("Red file size:", len(red_image.tobytes()))
As requested, and in case it is useful to future readers, here is a slightly more extensive write-up of what I said in the comments (which was verified to be the cause of the problem).
An e-ink display usually needs a black & white image, that is, a 1-bit-per-pixel image. Not a grayscale image (one byte per pixel), and even less an RGB one (three channels/bytes per pixel).
I am not familiar with bi-color red/black displays, but it seems logical that they behave like two binary displays sharing the same panel: one black & white, and one red & white.
What your code seemingly does is remove all black pixels from an RGB image and use the result as the red image, and remove all red pixels from the same RGB image and use the result as the black image. But since those images are obtained with clone, they are still RGB images: RGB images that happen to contain only black and white pixels, or only red and white pixels, but RGB images nonetheless.
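You can see this directly from what your getImagesToDisplay function returns, for example by adding something like this at the end of your script:

# Both images still report a 3-channel mode, hence 400*300*3 bytes of raw data each
print(black_image.mode, len(black_image.tobytes()))  # expected: RGB 360000
print(red_image.mode, len(red_image.tobytes()))      # expected: RGB 360000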
With PIL, it is the mode that controls how images are represented in memory and, therefore, how they are saved to file.
The relevant modes are RGB, L (grayscale, one byte per pixel), and 1 (binary, one bit per pixel). So what you need is to convert to mode 1, using the .convert('1') method on both of your images.
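For example, something along these lines at the end of your script (a sketch, not tested against your display; dither is disabled because Wand has already done the Floyd-Steinberg pass, and plain convert('1') would otherwise re-dither the solid red areas when thresholding them; on Pillow versions before 9.1 the constant is Image.NONE rather than Image.Dither.NONE):

# Convert both PIL images from RGB (3 bytes/pixel) to mode 1 (1 bit/pixel).
# With dithering disabled, convert('1') simply thresholds the luminance:
# black and red pixels end up as 0, white pixels as 1.
black_1bit = black_image.convert('1', dither=Image.Dither.NONE)
red_1bit = red_image.convert('1', dither=Image.Dither.NONE)

black_1bit.save("output/bw.bmp")
red_1bit.save("output/red.bmp")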
Note that 400×300×3 (uncompressed RGB data for your image) is 360,000 bytes, which is what you got. 400×300 (mode L for the same image) is 120,000 bytes, and 400×300/8 (mode 1, one bit per pixel) is 15,000 bytes, which is precisely the expected size you mentioned. So that is another confirmation that a 1-bit-per-pixel image is indeed what is expected.
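If you want to check those numbers yourself, here is a quick sketch (tobytes() on a mode-1 image packs 8 pixels per byte, row by row; whether the Waveshare driver wants a bit set or cleared for an "on" pixel is something to verify against its examples):

from PIL import Image

img = Image.new('RGB', (400, 300), 'white')
print(len(img.tobytes()))               # 360000 bytes: 3 bytes per pixel
print(len(img.convert('L').tobytes()))  # 120000 bytes: 1 byte per pixel
print(len(img.convert('1').tobytes()))  # 15000 bytes: 1 bit per pixel, the buffer size the ESP32 expects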