machine-learning, keras, deep-learning, neural-network, recommendation-engine

Keras model not predicting values in the Test set


I'm building a Keras model to predict whether a user will select a certain product or not (binary classification).

The model seems to make progress on a validation set that is held out during training, but its predictions are all 0s when it comes to the test set.

My dataset looks something like this:

train_dataset
    
        customer_id   id  target  customer_num_id
0           TCHWPBT    4       0                1
1           TCHWPBT   13       0                1
2           TCHWPBT   20       0                1
3           TCHWPBT   23       0                1
4           TCHWPBT   28       0                1
...             ...  ...     ...              ...
1631695     D4Q7TMM  849       0             7417
1631696     D4Q7TMM  855       0             7417
1631697     D4Q7TMM  856       0             7417
1631698     D4Q7TMM  858       0             7417
1631699     D4Q7TMM  907       0             7417

I split it into Train/Val sets using:

from sklearn.model_selection import train_test_split

Train, Val = train_test_split(train_dataset, test_size=0.1, random_state=42, shuffle=False)

After I split the dataset, I select the features that are used when training and validating the model:

train_customer_id = Train['customer_num_id']
train_vendor_id = Train['id']
train_target = Train['target']

val_customer_id = Val['customer_num_id']
val_vendor_id = Val['id']
val_target = Val['target']

... and run the training loop:

from sklearn.metrics import f1_score  # needed for the F1 computations below

epochs = 2

for e in range(epochs):
  print('EPOCH: ', e)
  model.fit([train_customer_id, train_vendor_id], train_target, epochs=1, verbose=1, batch_size=384)

  prediction = model.predict(x=[train_customer_id, train_vendor_id], verbose=1, batch_size=384)
  train_f1 = f1_score(y_true=train_target.astype('float32'), y_pred=prediction.round())
  print('TRAIN F1: ', train_f1)

  val_prediction = model.predict(x=[val_customer_id, val_vendor_id], verbose=1, batch_size=384)
  val_f1 = f1_score(y_true=val_target.astype('float32'), y_pred=val_prediction.round())
  print('VAL F1: ', val_f1)

EPOCH: 0
1468530/1468530 [==============================] - 19s 13us/step - loss: 0.0891
TRAIN F1:  0.1537511577647422
VAL F1:  0.09745762711864409
EPOCH:  1
1468530/1468530 [==============================] - 19s 13us/step - loss: 0.0691
TRAIN F1:  0.308748569645272
VAL F1:  0.2076433121019108

The validation F1 score improves over time, and the model predicts both 1s and 0s on the validation set:

    prediction = model.predict(x=[val_customer_id, val_vendor_id], verbose=1, batch_size=384)
    np.unique(prediction.round())

    array([0., 1.], dtype=float32)

But when I try to predict on the test set, the model predicts 0 for every row:

prediction = model.predict(x=[test_dataset['customer_num_id'], test_dataset['id']], verbose=1, batch_size=384)
np.unique(prediction.round())

array([0.], dtype=float32)

The test dataset looks similar to the training and validation sets, and it was held out during training just like the validation set, yet the model never outputs anything other than 0.

Here's what the test dataset looks like:

 test_dataset

         customer_id   id  customer_num_id
0            Z59FTQD  243             7418
1            0JP29SK  243             7419
...              ...  ...              ...
1671995      L9G4OFV  907            17414
1671996      L9G4OFV  907            17414
1671997      FDZFYBA  907            17415

What might be the issue here?


Solution

  • Please take a look at the distribution of your data. In the sample data you've shown, target is all 0s. Consider that if most users don't select the product, then a model that always predicts 0 will be right most of the time. So it could be improving its accuracy simply by collapsing to the majority class (0).

    You can make this less likely by adjusting parameters like the learning rate, and by regularizing the model architecture, for example with dropout layers.

    Also, I'm not sure what your model looks like, but you're only training for 2 epochs, so it may not have had enough time to learn generalizable patterns; depending on how deep your model is, it could need considerably more training time.
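The distribution check suggested above is a one-liner with pandas. A minimal sketch, using a made-up 98/2 class split as a stand-in for the real `target` column:

```python
import pandas as pd

# Made-up stand-in for the real train_dataset; assumes ~2% positives.
train_dataset = pd.DataFrame({"target": [0] * 98 + [1] * 2})

# Fraction of rows per class: a model that always predicts 0 would be
# "right" on 98% of these rows while scoring an F1 of 0 on the positives.
print(train_dataset["target"].value_counts(normalize=True))
```

If the printed fractions are anywhere near this lopsided, accuracy and loss alone will look deceptively good for an all-zeros predictor, which is why the question's F1 scores are the more honest signal.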
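The post doesn't show the model, so the following is only a sketch of what an embedding-based two-input network with dropout and a reduced learning rate might look like. Every vocabulary size, layer width, dropout rate, and the learning rate itself are assumptions, not the poster's actual architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed vocabulary sizes; the real upper bounds on the id columns
# would come from the data.
n_customers = 20000
n_vendors = 1000

customer_in = keras.Input(shape=(1,), name="customer_num_id")
vendor_in = keras.Input(shape=(1,), name="vendor_id")

# Learn a dense vector per customer and per vendor id.
c = layers.Flatten()(layers.Embedding(n_customers, 16)(customer_in))
v = layers.Flatten()(layers.Embedding(n_vendors, 16)(vendor_in))

x = layers.Concatenate()([c, v])
x = layers.Dense(64, activation="relu")(x)
x = layers.Dropout(0.5)(x)  # dropout to curb over-fitting
out = layers.Dense(1, activation="sigmoid")(x)

model = keras.Model([customer_in, vendor_in], out)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # lowered LR
    loss="binary_crossentropy",
)
```

The dropout layer randomly zeroes half of the dense activations during training, and the smaller Adam learning rate slows how quickly the model can latch onto the majority class.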
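One further remedy for the imbalance, not spelled out in the answer above, is class weighting: compute weights inversely proportional to class frequency and pass them to Keras via `model.fit(..., class_weight=...)`, so each rare positive example contributes more to the loss. A sketch with a synthetic 98/2 target:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Synthetic stand-in for the real target column: 98 negatives, 2 positives.
y = np.array([0] * 98 + [1] * 2)

# "balanced" weights each class by n_samples / (n_classes * class_count).
weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1]), y=y)
class_weight = dict(zip([0, 1], weights))
print(class_weight)  # positives weighted roughly 50x more than negatives
# These weights would then go into model.fit(..., class_weight=class_weight).
```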