I have a 3-dimensional dataset of audio files where X.shape is (329, 20, 85). I want to have a simple bare-bones model running, so please don't nitpick and address only the issue at hand. Here is the code:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(32, return_sequences=True, stateful=False, input_shape = (20,85,1)))
model.add(tf.keras.layers.LSTM(20))
model.add(tf.keras.layers.Dense(nb_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=["accuracy"])
model.summary()
print("Train...")
model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=50, validation_data=(X_test, y_test))
But then I had the error mentioned in the title:
ValueError: Shapes (None, 1) and (None, 3) are incompatible
Here is the model.summary() output:
Model: "sequential_13"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_21 (LSTM) (None, 20, 32) 15104
_________________________________________________________________
lstm_22 (LSTM) (None, 20) 4240
_________________________________________________________________
dense_8 (Dense) (None, 3) 63
=================================================================
Total params: 19,407
Trainable params: 19,407
Non-trainable params: 0
_________________________________________________________________
Train...
For this, I followed this post and updated TensorFlow to the latest version, but the issue persists. This post is completely unrelated and highly unreliable. This post, although somewhat related, has been unanswered for a while now.
Update 1.0:
I strongly think the problem has something to do with the final Dense layer, where I pass nb_classes as 3, since I am classifying 3 categories in y.
So I changed the Dense layer's nb_classes to 1, which ran the model and gave me this output, which I am positive is wrong.
Train...
9/9 [==============================] - 2s 177ms/step - loss: 0.0000e+00 - accuracy: 0.1520 - val_loss: 0.0000e+00 - val_accuracy: 0.3418
<tensorflow.python.keras.callbacks.History at 0x7f50f1dcebe0>
Update 2.0:
I one-hot encoded the ys and resolved the shape issue. But the above output with <tensorflow.python.keras.callbacks.History at 0x7f50f1dcebe0> persists. Any help with this? Or should I post a new question for this? Thanks for all the help.
How should I proceed, or what should I be changing?
The first problem is with the LSTM input_shape: input_shape = (20, 85, 1).
From the docs: https://keras.io/layers/recurrent/
An LSTM layer expects a 3D tensor with shape (batch_size, timesteps, input_dim). input_shape omits the batch dimension, so with X.shape of (329, 20, 85) it should be (20, 85): 20 timesteps with 85 features each.
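A minimal sketch of the corrected model definition, assuming the shapes and nb_classes from the question:

model = tf.keras.models.Sequential()
# per-sample input is (timesteps, features) = (20, 85); the batch dimension is left out
model.add(tf.keras.layers.LSTM(32, return_sequences=True, stateful=False, input_shape=(20, 85)))
model.add(tf.keras.layers.LSTM(20))
model.add(tf.keras.layers.Dense(nb_classes, activation='softmax'))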
model.add(tf.keras.layers.Dense(nb_classes, activation='softmax')) - this suggests you're doing multi-class classification.
So your y_train and y_test have to be one-hot encoded. That means they must have shape (number_of_samples, 3), where 3 denotes the number of classes. You need to apply tensorflow.keras.utils.to_categorical to them.
from tensorflow.keras.utils import to_categorical

y_train = to_categorical(y_train, 3)
y_test = to_categorical(y_test, 3)
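For example, integer labels [0, 2, 1] become one-hot rows:

to_categorical([0, 2, 1], 3)
# array([[1., 0., 0.],
#        [0., 0., 1.],
#        [0., 1., 0.]], dtype=float32)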
ref: https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical
tf.keras.callbacks.History() - this callback is automatically applied to every Keras model, and the History object is what the fit method returns. So the <tensorflow.python.keras.callbacks.History at 0x7f50f1dcebe0> line from Update 2.0 is not an error; it's just the notebook echoing fit's return value.
ref: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/History
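A minimal sketch of capturing and inspecting it (note that fit's argument is epochs in current Keras, not nb_epoch; the 'val_accuracy' key assumes TF 2.x with metrics=["accuracy"]):

history = model.fit(X_train, y_train, batch_size=batch_size, epochs=50,
                    validation_data=(X_test, y_test))
print(history.history['loss'])          # per-epoch training loss
print(history.history['val_accuracy'])  # per-epoch validation accuracy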