Tags: python, pytorch, tensorboard, coco, pycocotools

Write COCOeval summary to TensorBoard


I am using pycocotools to evaluate my R-CNN model:

import pycocotools.cocoeval

coco_eval = pycocotools.cocoeval.COCOeval(coco_gt)

I perform all of the necessary computations and then call:

coco_eval.accumulate()
coco_eval.summarize()

This prints a table that looks more or less like this:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.001
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.001
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.005

Is there some way to write this to SummaryWriter?

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
for category, mAP in coco_eval.summary():
    writer.add_scalar(category, mAP)

Something more or less like this? I can only find coco_eval.stats, which contains the mAP values, but where are the names of their corresponding categories, like Average Precision (AP) @[ IoU=0.50:0.95 | area=all | maxDets=100 ]?


Solution

  • I am assuming you are using the helper functions from the torchvision detection reference scripts (engine.py). If so, in your training loop you can capture the CocoEvaluator object returned by evaluate and iterate over its coco_eval dictionary, which maps each IoU type (e.g. "bbox") to a COCOeval object whose stats array holds the twelve summary values in the order summarize() prints them:

    from torch.utils.tensorboard import SummaryWriter
    from engine import evaluate, train_one_epoch  # torchvision detection reference helpers

    # Tags for the 12 entries of COCOeval.stats, in the order summarize() prints them.
    STAT_NAMES = [
        "AP/IoU/0.50-0.95/all/100",
        "AP/IoU/0.50/all/100",
        "AP/IoU/0.75/all/100",
        "AP/IoU/0.50-0.95/small/100",
        "AP/IoU/0.50-0.95/medium/100",
        "AP/IoU/0.50-0.95/large/100",
        "AR/IoU/0.50-0.95/all/1",
        "AR/IoU/0.50-0.95/all/10",
        "AR/IoU/0.50-0.95/all/100",
        "AR/IoU/0.50-0.95/small/100",
        "AR/IoU/0.50-0.95/medium/100",
        "AR/IoU/0.50-0.95/large/100",
    ]

    writer = SummaryWriter()
    for epoch in range(NUM_EPOCHS):
        train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)

        coco_evaluator = evaluate(model, data_loader_test, device)
        for iou_type, coco_eval in coco_evaluator.coco_eval.items():
            for name, value in zip(STAT_NAMES, coco_eval.stats):
                # Prefix with the IoU type so bbox and segm metrics don't overwrite each other.
                writer.add_scalar(f"{iou_type}/{name}", value, global_step=epoch)