I want to create an .mlpackage or .mlmodel file that I can import into Xcode to do image segmentation. For this, I want to use the segmentation variant of YOLO to see whether it fits my needs.
The problem is that this script creates an .mlpackage file that only accepts images of a fixed size (640x640):
from ultralytics import YOLO
model = YOLO("yolo11n-seg.pt")
model.export(format="coreml")
I want to change something here, probably with coremltools, so that the model handles unbounded ranges (i.e., arbitrarily sized images). It's described briefly here: https://apple.github.io/coremltools/docs-guides/source/flexible-inputs.html#enable-unbounded-ranges, but I don't understand how to implement it in my script.
How to export a YOLO segmentation model with flexible input sizes
from ultralytics import YOLO
import coremltools as ct
import torch

# Export to TorchScript first
model = YOLO("yolov8n-seg.pt")
model.export(format="torchscript")  # writes yolov8n-seg.torchscript

# Load the TorchScript module explicitly so coremltools does not have to
# guess the source framework from the unusual .torchscript extension
ts_model = torch.jit.load("yolov8n-seg.torchscript")

# Convert to Core ML with a flexible input size: height and width may vary
# between 32 and 1024 pixels, with 640x640 as the default
input_shape = ct.Shape(
    shape=(
        1,
        3,
        ct.RangeDim(lower_bound=32, upper_bound=1024, default=640),
        ct.RangeDim(lower_bound=32, upper_bound=1024, default=640),
    )
)

mlmodel = ct.convert(
    ts_model,
    inputs=[
        ct.ImageType(
            name="images",
            shape=input_shape,
            color_layout=ct.colorlayout.RGB,
            scale=1.0 / 255.0,
        )
    ],
    minimum_deployment_target=ct.target.iOS16,
)

mlmodel.save("yolov8n-seg-flexible.mlpackage")
This creates an .mlpackage that accepts images from 32x32 up to 1024x1024 (you can modify these bounds as needed); the default size is 640x640.
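If you want a truly unbounded range, as the question asks, the coremltools guide linked in the question says to set upper_bound=-1 on the RangeDim. A minimal sketch reusing the names from the script above (the guide recommends setting a concrete upper bound whenever possible):

# Same conversion as above, but with an unbounded upper bound.
# upper_bound=-1 enables an unbounded range, per the coremltools docs.
unbounded_shape = ct.Shape(
    shape=(
        1,
        3,
        ct.RangeDim(lower_bound=32, upper_bound=-1, default=640),
        ct.RangeDim(lower_bound=32, upper_bound=-1, default=640),
    )
)
# ...then pass unbounded_shape to ct.ImageType(shape=...) exactly as before.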
You can read more about flexible input shapes in the coremltools guide: https://apple.github.io/coremltools/docs-guides/source/flexible-inputs.html#enable-unbounded-ranges
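As a quick sanity check before wiring the model into Xcode, you can run the saved .mlpackage directly from Python (macOS only). This is a hypothetical snippet: the input name "images" comes from the ImageType above, while the output names and shapes depend on the converted model, so it just prints whatever comes back. Note that YOLO models typically expect sizes that are multiples of their stride (32):

from PIL import Image
import coremltools as ct

mlmodel = ct.models.MLModel("yolov8n-seg-flexible.mlpackage")

# Try two different input sizes to confirm the flexible range works;
# PIL sizes are (width, height), and both dimensions stay within 32-1024.
for size in [(320, 320), (960, 544)]:
    img = Image.new("RGB", size)  # blank test image, a real photo works too
    out = mlmodel.predict({"images": img})
    print(size, {k: getattr(v, "shape", type(v)) for k, v in out.items()})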