I trained a model using the Object Detection API provided by TensorFlow, but I could not find many resources about the evaluation process for the resulting model.
When using the eval.py script I get a few results on screen, but I have some questions about them:
Which checkpoint from the ones stored in checkpoint_dir do the results correspond to?
I get a value of -1.00 in some cases. How do I interpret that?
What is the difference between the eval.py and model_main.py scripts provided?
Any resource related to evaluation and inference for the Object Detection API that you can refer me to?
- Which checkpoint from the ones stored in checkpoint_dir do the results correspond to?

In your train_dir you will find a file named checkpoint. If you open it, its first line points to your latest checkpoint, which is the one used for evaluation; you can change that first line to whichever checkpoint you want evaluated.
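For illustration only (the step numbers here are made up), the checkpoint file is a small text file that typically looks like the snippet below; editing model_checkpoint_path makes eval.py pick up that checkpoint instead of the latest one:

```
model_checkpoint_path: "model.ckpt-200000"
all_model_checkpoint_paths: "model.ckpt-150000"
all_model_checkpoint_paths: "model.ckpt-200000"
```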
- I get a value of -1.00 in some cases. How do I interpret that?

When a metric comes out as -1 it means that no ground truth meets that criterion. In your case it means your dataset doesn't contain any objects with a small area, so that bucket is discarded; if such objects existed and none of them were detected, you would see 0 instead of -1.
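For example (the numbers are illustrative), the COCO-style metrics printed during evaluation look roughly like the lines below; here the small-area bucket reports -1.000 because the dataset contains no small objects:

```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] =  0.512
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] =  0.433
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] =  0.587
```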
- What is the difference between the eval.py and model_main.py scripts provided?

The eval.py script only evaluates the model and returns the metrics. The model_main.py script combines the training and evaluation scripts, so it trains and runs evaluation at regular intervals within the same job; an example invocation of each is sketched below. In the latter you should provide your validation data, not your test data.
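As a rough sketch (paths and step counts are placeholders, and flag names may vary slightly between releases of the API), the two scripts are typically invoked like this:

```
# Evaluation only, against the checkpoint pointed to in checkpoint_dir:
python eval.py \
    --logtostderr \
    --pipeline_config_path=path/to/pipeline.config \
    --checkpoint_dir=path/to/train_dir \
    --eval_dir=path/to/eval_dir

# Training with periodic evaluation in one job:
python model_main.py \
    --pipeline_config_path=path/to/pipeline.config \
    --model_dir=path/to/model_dir \
    --num_train_steps=50000 \
    --alsologtostderr
```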
- Any resource related to evaluation and inference for the Object Detection API that you can refer me to?

I think you are looking for this Jupyter notebook for off-the-shelf inference.
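If you just want a minimal, self-contained starting point, here is a rough sketch of the same idea the notebook covers, assuming TensorFlow 1.x and a model that has been exported to a frozen graph (the paths are placeholders):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Placeholder paths: point these at your exported model and a test image.
PATH_TO_FROZEN_GRAPH = 'exported_model/frozen_inference_graph.pb'
PATH_TO_IMAGE = 'test.jpg'

# Load the frozen graph produced by the API's export script.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# Read the image as a uint8 batch of shape [1, height, width, 3].
image = np.array(Image.open(PATH_TO_IMAGE).convert('RGB'))
image_batch = np.expand_dims(image, axis=0)

with detection_graph.as_default(), tf.Session() as sess:
    # Standard output tensors of models exported by the Object Detection API.
    output_tensors = {
        name: detection_graph.get_tensor_by_name(name + ':0')
        for name in ['detection_boxes', 'detection_scores',
                     'detection_classes', 'num_detections']
    }
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    outputs = sess.run(output_tensors, feed_dict={image_tensor: image_batch})

# Boxes are normalized [ymin, xmin, ymax, xmax]; scores are sorted descending.
print(outputs['detection_scores'][0][:5])
print(outputs['detection_boxes'][0][:5])
```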