I've used the Floyd-Steinberg dithering algorithm to reduce an image to a combination of 6 colours. From the pseudocode on the Wikipedia page, I've been able to implement the algorithm with NumPy.
I'd love to speed up the algorithm, and I'd like assistance in vectorizing my implementation. Here's my original implementation:
import numpy as np

def findClosestColour(pixel):
    # 6-colour palette: white, red, blue, yellow, green, orange
    colors = np.array([[255, 255, 255], [255, 0, 0], [0, 0, 255], [255, 255, 0], [0, 128, 0], [253, 134, 18]])
    # Manhattan distance from this pixel to each palette colour
    distances = np.sum(np.abs(pixel[np.newaxis, :] - colors), axis=1)
    shortest = np.argmin(distances)
    closest_color = colors[shortest]
    return closest_color
def floydDither(img_array):
    height, width, _ = img_array.shape
    for y in range(0, height-1):
        for x in range(1, width-1):
            old_pixel = img_array[y, x, :]
            new_pixel = findClosestColour(old_pixel)
            img_array[y, x, :] = new_pixel
            quant_error = new_pixel - old_pixel
            img_array[y, x+1, :] = img_array[y, x+1, :] + quant_error * 7/16
            img_array[y+1, x-1, :] = img_array[y+1, x-1, :] + quant_error * 3/16
            img_array[y+1, x, :] = img_array[y+1, x, :] + quant_error * 5/16
            img_array[y+1, x+1, :] = img_array[y+1, x+1, :] + quant_error * 1/16
    return img_array
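One thing to watch when calling this on an image loaded with OpenCV: cv2.imread returns a uint8 array, so the error arithmetic can wrap around. Here's a minimal sketch of how I guard against that, using a small random array purely for illustration:

# Sketch: dither a float copy so the quantization error can go negative
# or above 255 without uint8 wrap-around, then clip and convert back.
rng = np.random.default_rng(0)
demo = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
result = floydDither(demo)
result = np.clip(result, 0, 255).astype(np.uint8)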
Now here's my attempt at vectorizing some portions of the algorithm:
def closestColour(img_array):
    colors = np.array([[255, 255, 255], [255, 0, 0], [0, 0, 255], [255, 255, 0], [0, 128, 0], [253, 134, 18]])
    # Manhattan distance from every pixel to every palette colour, shape (H, W, 6)
    distances = np.sum(np.abs(img_array[..., np.newaxis] - colors.T), axis=2)
    nearest_colors = np.argmin(distances, axis=2)
    quantized = colors[nearest_colors]
    return quantized
def floydDitherVec(img_array):
    new_img = closestColour(img_array)
    error = img_array - new_img
    height, width, _ = new_img.shape
    for y in range(0, height-1):
        for x in range(1, width-1):
            new_img[y, x+1, :] = new_img[y, x+1, :] + (error[y, x+1, :] * 7/16)
            new_img[y+1, x-1, :] = new_img[y+1, x-1, :] + (error[y+1, x-1, :] * 3/16)
            new_img[y+1, x, :] = new_img[y+1, x, :] + (error[y+1, x, :] * 5/16)
            new_img[y+1, x+1, :] = new_img[y+1, x+1, :] + (error[y+1, x+1, :] * 1/16)
    return new_img
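To gauge how far this attempt drifts from the original, I compare the two outputs on a small random image; the size and seed here are arbitrary:

# Sketch: compare the vectorized attempt against the loop version
# on the same small random image.
rng = np.random.default_rng(1)
small = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float64)
reference = floydDither(small.copy())
attempt = floydDitherVec(small.copy())
print(np.abs(reference - attempt).mean())   # mean per-channel difference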
Finally, I'm making use of OpenCV to read images and process them like so:
import cv2

image = cv2.imread('test2.png')
img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
My overall goal is to make the algorithm run faster, and any means of achieving this is welcome.
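For reference, this is roughly how I time the two NumPy versions; the repeat count is arbitrary, and img is the RGB array loaded above:

import timeit

# Sketch: rough wall-clock comparison of the loop and vectorized versions.
img_float = img.astype(np.float64)
t_loop = timeit.timeit(lambda: floydDither(img_float.copy()), number=3)
t_vec = timeit.timeit(lambda: floydDitherVec(img_float.copy()), number=3)
print(f"loop: {t_loop:.2f}s, vectorized attempt: {t_vec:.2f}s")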
I've been able to speed up the algorithm using numba. The implementation is given below:
from numba import jit

@jit(nopython=True)
def floydDitherspeed(img_array):
    height, width, _ = img_array.shape
    colors = np.array([[255, 255, 255], [255, 0, 0], [0, 0, 255], [255, 255, 0], [0, 128, 0], [253, 134, 18]])
    for y in range(0, height-1):
        for x in range(1, width-1):
            old_pixel = img_array[y, x, :]
            # 195075 = 3 * 255**2, the largest possible squared RGB distance
            max_distance = 195075
            for color in colors:
                p_distances = old_pixel - color
                p_distances = np.power(p_distances, 2)
                distance = p_distances[0] + p_distances[1] + p_distances[2]
                if distance <= max_distance:
                    max_distance = distance
                    new_pixel = color
            img_array[y, x, :] = new_pixel
            quant_error = new_pixel - old_pixel
            img_array[y, x+1, :] = img_array[y, x+1, :] + quant_error * 7/16
            img_array[y+1, x-1, :] = img_array[y+1, x-1, :] + quant_error * 3/16
            img_array[y+1, x, :] = img_array[y+1, x, :] + quant_error * 5/16
            img_array[y+1, x+1, :] = img_array[y+1, x+1, :] + quant_error * 1/16
    return img_array
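When timing the numba version I call it once beforehand so the JIT compilation isn't included in the measurement; a sketch using the same timeit setup as above:

# Sketch: the warm-up call triggers compilation, then time the compiled function.
floydDitherspeed(img_float.copy())
t_numba = timeit.timeit(lambda: floydDitherspeed(img_float.copy()), number=3)
print(f"numba: {t_numba:.2f}s")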
In comparison with the floydDither function, the numba implementation runs approximately 5x faster.