Tags: python, tensorflow, deep-learning

Do I need to define metrics in model.compile in order to use them later?


I am trying to train a model, and after it is trained I want to see TP, TN, FP, FN, recall, precision, and sensitivity.

Question 1: Do I need to define all these metrics when I compile the model like this?

from tensorflow.keras.metrics import (CategoricalAccuracy, Precision, Recall,
                                      TruePositives, TrueNegatives,
                                      FalseNegatives, FalsePositives,
                                      SensitivityAtSpecificity)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import CategoricalCrossentropy

metrics = [CategoricalAccuracy(), Precision(), Recall(), TruePositives(),
           TrueNegatives(), FalseNegatives(), FalsePositives(),
           SensitivityAtSpecificity(0.5)]
model.compile(optimizer=Adam(learning_rate=0.004),
              loss=CategoricalCrossentropy(from_logits=True),
              metrics=metrics)

What I want to do is evaluate the model with these metrics after training and see how it did on the test set.

Question 2: If I run model.evaluate, will the metrics passed to model.compile be used, or can I define additional metrics at evaluation time?

For example, I want to monitor only accuracy during training, and then recall, precision, and so on when I evaluate.


Solution

  • If you don't want to monitor precision and recall during training, you don't have to pass them to compile. You can compute them afterwards instead, e.g. with tf.keras.metrics.Precision(), on the predictions returned by model.predict. But if you want them reported by model.evaluate, you do need to pass them to model.compile: model.evaluate only uses the metrics that were specified when you compiled the model, since those metric objects are instantiated when you call model.compile.
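
A minimal sketch of the first approach, computing a metric outside of compile/evaluate. Here y_true and y_pred are hypothetical stand-ins for your test labels and the (already thresholded/argmaxed) output of model.predict:

```python
import numpy as np
import tensorflow as tf

# Stand-ins for the test labels and thresholded model.predict output
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

# Metric objects can be used standalone, without model.compile
precision = tf.keras.metrics.Precision()
precision.update_state(y_true, y_pred)
print(precision.result().numpy())  # 1.0 here: both predicted positives are correct

recall = tf.keras.metrics.Recall()
recall.update_state(y_true, y_pred)
print(recall.result().numpy())  # ~0.667 here: one true positive was missed
```

Note that for a multi-class model trained with CategoricalCrossentropy you would typically take np.argmax over the predicted probabilities before feeding them to binary metrics like Precision/Recall.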