I am creating a YOLOv8 model and loading some pre-trained weights. I then want to use that model to run inference on some images, but I want to specify that the inference should run on the GPU. Is it possible to do this when creating the YOLO model?
I am loading the model like this:
model = YOLO("yolov8n.pt")
but when I pass in a device like so:
model = YOLO("yolov8n.pt", device='gpu')
I get an unexpected argument error:
TypeError: __init__() got an unexpected keyword argument 'device'
In order to move a YOLO model to the GPU you must use the PyTorch .to()
syntax, like so:
model = YOLO("yolov8n.pt")
model.to('cuda')
The PyTorch docs for torch.nn.Module.to cover this in more detail.
You can also explicitly run a prediction and specify the device. See the Ultralytics predict mode docs for the accepted values (e.g. a GPU index like 0, 'cuda', or 'cpu'):
model.predict(source, save=True, imgsz=320, conf=0.5, device=0)
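Putting both options together, here is a minimal sketch. `pick_device` is a hypothetical helper (not part of the Ultralytics API) that just builds a device string; the YOLO calls are shown in comments because they assume the `ultralytics` package, a CUDA-capable machine, and a local weights file.

```python
# Hypothetical helper: build a device string that both PyTorch's
# Module.to() and Ultralytics' predict(device=...) accept.
# "cuda:0" selects the first GPU; "cpu" is the fallback.
def pick_device(cuda_available: bool, gpu_index: int = 0) -> str:
    return f"cuda:{gpu_index}" if cuda_available else "cpu"

# Typical use (assumes `pip install ultralytics` and yolov8n.pt on disk):
#   import torch
#   from ultralytics import YOLO
#
#   device = pick_device(torch.cuda.is_available())
#   model = YOLO("yolov8n.pt")
#   model.to(device)                       # option 1: move the model
#   results = model.predict(               # option 2: set it per call
#       "image.jpg", save=True, imgsz=320, conf=0.5, device=device
#   )

print(pick_device(True))   # → cuda:0
print(pick_device(False))  # → cpu
```

Keeping the device in one place like this avoids mixing a GPU-resident model with a CPU-only predict call.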