I want to evaluate a tensorflow_decision_forests GradientBoostedTreesModel (Ranking task) along the training steps with my custom validation metrics. With other tf.keras models, I can do this with a custom validation-set class as part of a Callback object passed to .fit, but so far I cannot for GradientBoostedTreesModel: for this tree model, the evaluation of my custom metrics happens only once, at the end of training. This seems to be consistent with what the documentation says:
callbacks: Callbacks triggered during the training. The training runs in a
single epoch, itself run in a single step. Therefore, callback logic can
be called equivalently before/after the fit function.
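For context, this is roughly what I do with regular Keras models (a minimal sketch; valid_ds and my_custom_metric stand in for my own validation dataset and metric function):

```python
import tensorflow as tf

class ValidationMetricsCallback(tf.keras.callbacks.Callback):
    """Evaluates a custom metric on a held-out set at the end of each epoch."""

    def __init__(self, valid_ds):
        super().__init__()
        self.valid_ds = valid_ds

    def on_epoch_end(self, epoch, logs=None):
        predictions = self.model.predict(self.valid_ds, verbose=0)
        score = my_custom_metric(self.valid_ds, predictions)  # my own metric
        print(f"epoch={epoch} custom_metric={score:.4f}")

# With a regular Keras model this logs once per epoch; with
# GradientBoostedTreesModel it fires only once, because the whole
# training runs in a single epoch (and a single step).
model.fit(train_ds, callbacks=[ValidationMetricsCallback(valid_ds)])
```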
If anyone has a workaround to manually log validation metrics, or any other suggestion, it would be much appreciated. Thanks!
What I tried:
I have tried .make_inspector(), but from my understanding its training logs are hardcoded to only contain NDCG (and the loss, which is the negative of the NDCG); see the snippet after this list.
I tried to train the model multiple times, hoping as a workaround that the evaluation would happen at the end of each training. However, each new training restarts the model from scratch (the log says it compiled again even though I had already compiled it), unlike other Keras models where .fit continues from the already-trained weights.
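For reference, this is how I read the built-in training logs from the first item above (a minimal sketch; as far as I can tell, for a ranking task the per-tree evaluation only exposes NDCG and loss):

```python
# After .fit, inspect the per-tree training logs.
inspector = model.make_inspector()
for log in inspector.training_logs():
    # For the ranking task, evaluation seems to carry only NDCG
    # and loss (the negative of the NDCG).
    print(f"trees={log.num_trees} ndcg={log.evaluation.ndcg} "
          f"loss={log.evaluation.loss}")
```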
There is currently no built-in way in TF-DF to log custom metrics during training.
TF-DF allows resuming training with the try_resume_training option in the model constructor, see here for a usage example. This is probably a workaround for your problem; a sketch of the pattern is below. Keep in mind that, depending on the size of your dataset, this might be slow, since the dataset will be read into memory multiple times.
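For illustration, here is a minimal sketch of that pattern, not an official recipe: train_ds, valid_ds, the "group" column, and my_custom_metric are placeholders for your own setup, and checkpoint_dir is a directory of your choice. The assumption is that pointing each new model at the same temp_directory lets try_resume_training pick up the previous checkpoint instead of starting from scratch:

```python
import tensorflow_decision_forests as tfdf

checkpoint_dir = "/tmp/tfdf_checkpoint"  # shared across all rounds
trees_per_round = 20

for round_idx in range(1, 11):
    # Re-create the model each round with a larger tree budget; the shared
    # temp_directory lets try_resume_training resume from the last checkpoint.
    model = tfdf.keras.GradientBoostedTreesModel(
        task=tfdf.keras.Task.RANKING,
        ranking_group="group",  # placeholder: your ranking group column
        num_trees=trees_per_round * round_idx,
        try_resume_training=True,
        temp_directory=checkpoint_dir,
    )
    model.fit(train_ds)

    # Evaluate the partially trained model with your own metric.
    predictions = model.predict(valid_ds)
    score = my_custom_metric(valid_ds, predictions)  # placeholder metric
    print(f"round={round_idx} trees={trees_per_round * round_idx} "
          f"metric={score:.4f}")
```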
Full Disclosure: I'm one of the authors of Tensorflow Decision Forests.