I'm currently working on an object detection project using YOLOv8 and a custom dataset. In my dataset, each image is accompanied by a corresponding label file (Data_1.png -> Data_1.txt).
The label file Data_1.txt follows the format:
Class_type, x_1 min, y_1 min, ..., x_4 min, y_4 min.
I'm interested in applying a perspective augmentation to my dataset, and I've decided to use the YOLOv8 augmentation functionality. However, I am unsure whether YOLOv8 generates labels for the augmented data or not. If it does not, I would greatly appreciate any suggestions or alternative approaches to handle the generation of labels for the augmented data.
YOLOv8's augmentation functionality is a convenient way to augment your dataset on the fly during training, increasing its diversity and effective size. These transformations only make sense if the image and the labeled instance coordinates in it are transformed together, so that the model learns to detect/segment the relevant instances in the augmented image. The instance coordinates are therefore augmented along with the image. You can verify this in the augmentation code, and the transformations are also visible in the batch examples saved to the current training experiment folder during training. All you need to do to apply YOLOv8 augmentation is provide a correctly labeled dataset in YOLOv8 format and, if you want something different from the default settings, adjust the augmentation parameters passed to model.train().
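As a minimal sketch of what that can look like, the snippet below trains a YOLOv8 model with an explicit perspective augmentation strength. The weights file, dataset YAML path, and all hyperparameter values here are placeholders chosen for illustration, not recommendations for your dataset:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 model (weights file name is a placeholder).
model = YOLO("yolov8n.pt")

# Train on a custom dataset described by a YOLO-format data YAML.
# `perspective` controls the perspective warp applied during augmentation;
# the label coordinates are transformed together with the image, so no
# separate label generation step is needed for the augmented samples.
model.train(
    data="my_dataset.yaml",   # placeholder path to your dataset YAML
    epochs=100,
    imgsz=640,
    perspective=0.0005,       # perspective augmentation strength (0.0 disables it)
    degrees=10.0,             # random rotation range in degrees
    translate=0.1,            # random translation fraction
    scale=0.5,                # random scaling gain
)
```

After training starts, you can open the train_batch*.jpg images in the run's experiment folder to confirm that the augmented images and their boxes stay consistent.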