keras, talos

Talos hyperparameter search: how to set the metric in the evaluation step


I want to learn about hyperparameter search in Talos, specifically the evaluation of the models. I was going through this example notebook: https://nbviewer.jupyter.org/github/autonomio/talos/blob/master/examples/Hyperparameter%20Optimization%20with%20Keras%20for%20the%20Iris%20Prediction.ipynb#seven

Now, my question is: In evaluation (step 7), how do I set a specific evaluation metric, e.g. F1 score for a classification problem? Do the metrics come from Keras or Talos? And what is the default if the parameter is not passed? I could not find it in the Talos docs. Did I overlook something? https://autonomio.github.io/docs_talos/#evaluate


Solution

  • Evaluation in Talos uses f1-score with binary averaging for binary classification, macro averaging for multi_label and multi_class tasks, and MAE for regression. These come from sklearn.

    The metric argument refers to any metric you've already used in the Scan() experiment, and is used first to pick the best model(s) to evaluate. You can use any Keras or custom metric in Scan(), just as you would with your Keras models. A sketch of the full flow is below.
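For illustration, here is a minimal sketch of that Scan()-then-Evaluate() flow on the Iris data. The search space, layer sizes, and epoch count are made up for the example, and keyword names such as task and folds may vary slightly between Talos versions, so check the docs linked above against your installed version.

```python
import talos
from sklearn.datasets import load_iris
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

x, y = load_iris(return_X_y=True)
y = to_categorical(y)

# a small, hypothetical search space
p = {'first_neuron': [8, 16, 32],
     'activation': ['relu', 'elu'],
     'batch_size': [16, 32]}

def iris_model(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    model.add(Dense(params['first_neuron'],
                    input_dim=x_train.shape[1],
                    activation=params['activation']))
    model.add(Dense(y_train.shape[1], activation='softmax'))
    # any Keras (or custom) metric compiled here is logged by Scan()
    # and can later be referenced via Evaluate()'s metric argument
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    out = model.fit(x_train, y_train,
                    batch_size=params['batch_size'],
                    epochs=50,
                    validation_data=(x_val, y_val),
                    verbose=0)
    return out, model

scan_object = talos.Scan(x=x, y=y, params=p, model=iris_model,
                         experiment_name='iris')

# Evaluate() first ranks the scanned models by `metric` (here the
# validation accuracy logged during Scan; the name may be 'val_acc'
# on older Keras versions), then cross-validates the best one.
# The cross-validation score itself is sklearn's f1-score,
# macro-averaged because task='multi_class'.
evaluate_object = talos.Evaluate(scan_object)
evaluate_object.evaluate(x, y,
                         task='multi_class',
                         metric='val_accuracy',
                         folds=10)
```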