Tags: python, tensorflow, machine-learning, keras

Dimension problems: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (26, 26, 1)


I have a CNN that takes as input images converted to binary edge maps with Canny edge detection, and outputs one of three categories. The images are prepared like this:

import cv2
import numpy as np

img = cv2.imread(path)              # read the image from disk
img = cv2.Canny(img, 33, 76)        # binary edge map, shape (H, W)
img = np.resize(img, (26, 26, 1))   # force the array into shape (26, 26, 1)
imgs.append(img)
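
The whole list is then stacked into a single 4D array before training; roughly like this, where labels is just an illustrative name for the matching list of class indices (0, 1, 2):

X = np.stack(imgs, axis=0)   # shape (num_images, 26, 26, 1): batch axis first
y = np.array(labels)         # illustrative: one integer class label per image
print(X.shape)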

As far as I understood, I have to convert each image to three dimensions, (26, 26, 1), so that the network can work with it. This is my network:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

IMG_HEIGHT = 26
IMG_WIDTH = 26
no_Of_Filters=60
size_of_Filter=(5,5)
size_of_pool=(2,2)
no_Of_Nodes = 500
model_new = Sequential([
    Conv2D(no_Of_Filters, size_of_Filter, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH , 1)),
    MaxPooling2D(pool_size=size_of_pool),
    Conv2D(no_Of_Filters, size_of_Filter, padding='same', activation='relu'),
    MaxPooling2D(pool_size=size_of_pool),
    Conv2D(64, size_of_Filter, padding='same', activation='relu'),
    MaxPooling2D(pool_size=size_of_pool),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(3, activation='softmax')
])
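
A minimal compile/fit sketch that works with this model, assuming X and y are the stacked images and integer labels from above (optimizer, loss, epoch count, and batch size are illustrative choices, not requirements):

model_new.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',  # integer labels 0..2
                  metrics=['accuracy'])
model_new.fit(X, y, epochs=10, batch_size=32, validation_split=0.1)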

Training works fine. After I trained and created the model, I want to test images against this network:

test_image = cv2.Canny(test_image, 33, 76)
test_image = np.resize(test_image, (26, 26, 1))
prediction = model.predict(test_image)
print(prediction)

Now I get the error:

ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (26, 26, 1)

Why does the trained model now want a 4-dimensional input?


Solution

  • You need to add a dimension to your array because, as the message says, Keras expects a 4D input:

    test_image = test_image[np.newaxis, ...]
    

    Keras works with shapes such as (1, 26, 26, 1), not (26, 26, 1). The added first dimension is the batch size, and Keras needs it even when predicting on a single image; np.expand_dims does the same job, as shown in the sketch below.
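
    Equivalently, np.expand_dims adds the same leading batch axis; a quick sketch of the full prediction step:

    import numpy as np

    # (26, 26, 1) -> (1, 26, 26, 1): a batch that contains a single image
    test_image = np.expand_dims(test_image, axis=0)
    print(test_image.shape)                   # (1, 26, 26, 1)

    prediction = model.predict(test_image)    # shape (1, 3): class probabilities
    print(np.argmax(prediction, axis=1))      # index of the predicted class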