python swift object-detection coreml yolov7

How can I convert a YOLOv8s model to a Core ML model using a custom dataset?


I have trained a YOLOv8 object detection model using a custom dataset, and I want to convert it to a Core ML model so that I can use it on iOS.

After exporting, the model has been converted to Core ML, but I need the coordinates (bounding boxes) of the detected objects as output so that I can draw rectangles around them.

As a beginner in this area, I am unsure how to achieve this. Can anyone help me with this problem?

Training model:

!yolo task=detect mode=train model=yolov8s.pt data=data.yaml epochs=25 imgsz=640 plots=True

Validation:

!yolo task=detect mode=val model=runs/detect/train/weights/best.pt data=data.yaml

Exporting the model to Core ML:

!yolo mode=export model=runs/detect/train/weights/best.pt format=coreml

How can I get the coordinate output?


Solution

  • To get the coordinates as output, export with nms=True:

    from ultralytics import YOLO
    
    # Load the trained weights and re-export with NMS included,
    # so the Core ML model outputs boxes and class scores directly
    model = YOLO('best.pt')
    model.export(format='coreml', nms=True)
    

    or

    yolo export model=path/to/best.pt format=coreml nms=True
    

    This will give you the option to preview your model in Xcode, and the output will include the detection coordinates. [Screenshot of the .mlmodel preview in Xcode]
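
If you want to sanity-check the export outside Xcode, below is a minimal sketch using coremltools (Core ML prediction only runs on macOS). The file name best.mlmodel, the input name image, and the output names confidence and coordinates are assumptions based on what the NMS export pipeline typically produces; print output_description to confirm the names on your own model.

    from PIL import Image
    import coremltools as ct
    
    # Load the exported model ('best.mlmodel' is an assumed file name;
    # newer exports may produce an .mlpackage instead)
    mlmodel = ct.models.MLModel('best.mlmodel')
    print(mlmodel.output_description)  # confirm the actual output names
    
    # Resize to the image size used at training/export time (imgsz=640 above)
    img = Image.open('test.jpg').resize((640, 640))
    result = mlmodel.predict({'image': img})  # 'image' is the assumed input name
    
    # With nms=True the pipeline typically returns, per detection:
    #   coordinates: [x_center, y_center, width, height], normalized to 0..1
    #   confidence:  one score per class
    for box, scores in zip(result['coordinates'], result['confidence']):
        x, y, w, h = box
        # Convert the normalized center/size box to pixel corners for drawing
        left, top = (x - w / 2) * 640, (y - h / 2) * 640
        right, bottom = (x + w / 2) * 640, (y + h / 2) * 640
        print(f'class={scores.argmax()} conf={scores.max():.2f} '
              f'box=({left:.0f}, {top:.0f}, {right:.0f}, {bottom:.0f})')

On iOS itself you would typically load the same model through Vision, where each detection comes back as a VNRecognizedObjectObservation whose boundingBox already gives you the rectangle to draw.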