I am getting the following error:
ValueError: Exception encountered when calling layer 'conv2d' (type Conv2D).
Negative dimension size caused by subtracting 3 from 1 for '{{node sequential/conv2d/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](sequential/Cast, sequential/conv2d/Conv2D/ReadVariableOp)' with input shapes: [?,200,1,1], [3,3,1,9].
Call arguments received by layer 'conv2d' (type Conv2D):
• inputs=tf.Tensor(shape=(None, 200, 1, 1), dtype=float32)
My model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(9, kernel_size=3, activation='relu', input_shape=(200, 200, 1)))
model.add(Conv2D(4, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(2, activation='sigmoid'))

model.predict(image)
Here "image" is a array of dimensions (200, 200, 1) with values in it as 0's&1's
What should I do?
Your model's input shape is (200, 200, 1), but don't forget that training and inference are done on batches, so the true input shape the model expects is (None, 200, 200, 1).
If you want a prediction for a single image, you need to add a batch dimension at the beginning, like this:
import numpy as np

image = np.expand_dims(image, 0)
# image shape is now (1, 200, 200, 1)
model.predict(image)
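
For reference, here is a minimal end-to-end sketch using a random 0/1 array as a stand-in for your real image, just to show the shapes lining up; image[np.newaxis, ...] or image.reshape(1, 200, 200, 1) would add the batch dimension equally well:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense

# same architecture as in the question
model = Sequential()
model.add(Conv2D(9, kernel_size=3, activation='relu', input_shape=(200, 200, 1)))
model.add(Conv2D(4, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(2, activation='sigmoid'))

# stand-in for your image: random 0/1 values with shape (200, 200, 1)
image = np.random.randint(0, 2, size=(200, 200, 1)).astype('float32')

# add the batch dimension: (200, 200, 1) -> (1, 200, 200, 1)
batch = np.expand_dims(image, 0)

pred = model.predict(batch)
print(pred.shape)  # (1, 2): one sample, two sigmoid outputs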