deep-learning, pytorch, google-colaboratory, object-detection, faster-rcnn

Evaluating my object detection model using COCO metric shows 0 and -1 values


I'm currently trying to solve an object detection problem and decided to use Faster R-CNN for it. I followed this Youtube video and their Code. The loss decreases, but the big problem is that the model won't evaluate correctly no matter what I try. I've checked the inputs for any kind of size mismatch or missing information (a minimal sketch of that check follows the notebook link below), but it still doesn't work. The evaluation always shows -1 and 0 values for all of its metrics, like this:

creating index...
index created!
Test:  [0/1]  eta: 0:00:08  model_time: 0.4803 (0.4803)  evaluator_time: 0.0304 (0.0304)  time: 8.4784  data: 7.9563  max mem: 7653
Test: Total time: 0:00:08 (8.6452 s / it)
Averaged stats: model_time: 0.4803 (0.4803)  evaluator_time: 0.0304 (0.0304)
Accumulating evaluation results...
DONE (t=0.01s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
<coco_eval.CocoEvaluator at 0x7ff9989fea10>

Here is my current code: Colab notebook
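
The kind of input check I mean is sketched below. This is a minimal, hypothetical version assuming the standard torchvision detection target format (each target a dict with "boxes" and "labels"); `dataset` stands in for the notebook's dataset object.

import torch

# Hypothetical sanity check, assuming the standard torchvision detection
# target format: "boxes" is a FloatTensor[N, 4] in [xmin, ymin, xmax, ymax]
# and "labels" is an Int64Tensor[N] with 0 reserved for the background class.
# `dataset` stands in for the notebook's dataset object.
for idx in range(len(dataset)):
    _, target = dataset[idx]
    boxes, labels = target["boxes"], target["labels"]
    assert boxes.ndim == 2 and boxes.shape[-1] == 4, f"bad box shape at {idx}"
    assert labels.dtype == torch.int64, f"labels must be int64 at {idx}"
    # Degenerate boxes (zero or negative width/height) break both training
    # and evaluation.
    assert (boxes[:, 2] > boxes[:, 0]).all(), f"xmax <= xmin at {idx}"
    assert (boxes[:, 3] > boxes[:, 1]).all(), f"ymax <= ymin at {idx}"
    # Class 0 is background, so foreground labels should start at 1.
    assert (labels >= 1).all(), f"label 0 used for a foreground class at {idx}"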


Solution

  • My labels were wrong. I figured this out by plotting my dataset images together with their labels: the boxes were either not showing up at all or not lining up with the objects (see the plotting sketch below).

    This evaluation function is based on the COCO metric, which reports AP/AR separately per object size (small, medium, large). My current guess is that it shows -1.000 for area=medium and area=large because my dataset doesn't have labels of varying sizes: the boxes are all roughly equal in size and apparently all fall into the small range, which would leave the medium and large buckets with nothing to evaluate. I might be wrong; see the note on COCO's area ranges below.
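
A minimal sketch of the kind of plot that exposed the problem, assuming the dataset returns an (image tensor, target dict) pair in the torchvision format; `dataset` again stands in for the notebook's dataset:

import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Draw the ground-truth boxes on one sample; if the rectangles don't line up
# with the objects, the labels are wrong. `dataset` is a stand-in and is
# assumed to return a CHW image tensor plus a target dict.
img, target = dataset[0]
fig, ax = plt.subplots()
ax.imshow(img.permute(1, 2, 0))  # CHW -> HWC for matplotlib
for box, label in zip(target["boxes"], target["labels"]):
    xmin, ymin, xmax, ymax = box.tolist()
    ax.add_patch(patches.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                                   fill=False, edgecolor="red", linewidth=2))
    ax.text(xmin, ymin, str(label.item()), color="red")
plt.show()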
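
On the -1.000 rows: pycocotools buckets ground-truth boxes by pixel area and prints -1 for any bucket that has nothing to evaluate, rather than 0. The default thresholds can be inspected directly:

from pycocotools.cocoeval import COCOeval

# Default bbox area ranges (in squared pixels):
#   all    = [0**2, 1e5**2]
#   small  = [0**2, 32**2]
#   medium = [32**2, 96**2]
#   large  = [96**2, 1e5**2]
# A row is reported as -1.000 when no ground-truth boxes fall in its range.
print(COCOeval(iouType="bbox").params.areaRng)
# [[0, 10000000000.0], [0, 1024], [1024, 9216], [9216, 10000000000.0]]

So -1.000 for medium and large just means no ground-truth box in the test set has an area above 32² = 1024 px², which is consistent with all the boxes being small.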