I'm working on custom training for YOLOv5. I want to train YOLOv5 on the UA-DETRAC dataset, which provides one XML annotation file per folder of images. How can I visualize the dataset's XML annotations, and how can I use them for YOLOv5 custom training? I have read this tutorial, but this dataset only has one XML annotation per folder (covering several images), not one XML per image.
You can visualize all of your annotations and images using labelImg: https://github.com/tzutalin/labelImg
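If you just want to sanity-check the boxes without any tool, here is a minimal sketch that draws one frame's annotations with OpenCV. It assumes the usual UA-DETRAC layout (one XML per sequence with `<frame num="...">` elements whose `<box>` children carry pixel `left`/`top`/`width`/`height`, and images named `img00001.jpg`, `img00002.jpg`, ... in the sequence folder); check these against your copy of the dataset.

```python
# Hypothetical sketch: draw the boxes of one UA-DETRAC frame for inspection.
import xml.etree.ElementTree as ET
import cv2

def show_frame(xml_path, img_dir, frame_num):
    root = ET.parse(xml_path).getroot()
    # Assumed image naming: img00001.jpg, img00002.jpg, ...
    img = cv2.imread(f"{img_dir}/img{frame_num:05d}.jpg")
    for frame in root.iter("frame"):
        if int(frame.get("num")) != frame_num:
            continue
        for box in frame.iter("box"):
            x = int(float(box.get("left")))
            y = int(float(box.get("top")))
            w = int(float(box.get("width")))
            h = int(float(box.get("height")))
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("annotations", img)
    cv2.waitKey(0)

# Example: show_frame("MVI_20011.xml", "MVI_20011", 1)
```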
To convert the XML (Pascal VOC format) to txt (YOLO format) you can also use labelImg: click the "PascalVOC" button so it switches to "YOLO" format, then check the image and save. The program will save the annotations of the image you are on in YOLO format.
For an automated conversion, you can use this example as a template to write your own converter: https://github.com/rafael-junio/mio-dataset-converter/blob/master/mio-dataset-converter.py
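Since UA-DETRAC gives you one XML per folder rather than per image, you will end up writing one YOLO `.txt` per frame yourself. Below is a rough sketch of such a converter, under the same assumptions as above (UA-DETRAC-style `<frame>`/`<target>`/`<box>` XML with pixel coordinates, a single "vehicle" class, and 960x540 frames, which you should verify for your data):

```python
# Hypothetical sketch: UA-DETRAC sequence XML -> one YOLO txt file per frame.
import os
import xml.etree.ElementTree as ET

IMG_W, IMG_H = 960, 540   # assumed UA-DETRAC frame size, adjust if needed
CLASS_ID = 0              # single "vehicle" class for YOLOv5

def convert_sequence(xml_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    root = ET.parse(xml_path).getroot()
    for frame in root.iter("frame"):
        num = int(frame.get("num"))
        lines = []
        for box in frame.iter("box"):
            left = float(box.get("left"))
            top = float(box.get("top"))
            w = float(box.get("width"))
            h = float(box.get("height"))
            # YOLO format: class x_center y_center width height, all normalized
            xc = (left + w / 2) / IMG_W
            yc = (top + h / 2) / IMG_H
            lines.append(f"{CLASS_ID} {xc:.6f} {yc:.6f} {w / IMG_W:.6f} {h / IMG_H:.6f}")
        # Name each label file like its image (img00001.txt, img00002.txt, ...)
        with open(os.path.join(out_dir, f"img{num:05d}.txt"), "w") as f:
            f.write("\n".join(lines))

# Example: convert_sequence("MVI_20011.xml", "labels/MVI_20011")
```

Once every frame has a matching `.txt`, you can point your YOLOv5 dataset YAML at the images/labels folders as in the custom training tutorial.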