I'm wondering whether it is possible for the Azure Cognitive Services Custom Vision Prediction API to return absolute coordinates instead of percentage-based ones:
In the above screenshot, you can see the top, left, width and height properties of the prediction results.
Is there any way to have the API return absolute coordinates instead of what I assume are percentage-based coordinates?
Extra: does anyone have an idea why it returns coordinates in this form?
There is no way of getting absolute values in the current API.
You just have to multiply those relative values by your image width/height. I wrote an answer about that a moment ago; you can have a look at it here: How to use Azure custom vision service response boundingBox to plot shape
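To give a rough idea, here is a minimal sketch of that conversion in Python, assuming you already know the original image dimensions and have a boundingBox-style dict with left/top/width/height as fractions (the names and sample values below are just for illustration):

```
# Minimal sketch: convert Custom Vision's relative boundingBox values
# (fractions of the image size) into absolute pixel coordinates.
# image_width, image_height and the bounding_box dict are assumed inputs.

def to_absolute(bounding_box, image_width, image_height):
    """Return (left, top, width, height) in pixels."""
    return (
        int(bounding_box["left"] * image_width),
        int(bounding_box["top"] * image_height),
        int(bounding_box["width"] * image_width),
        int(bounding_box["height"] * image_height),
    )

# Example with made-up values for a 1024x768 image:
box = {"left": 0.27, "top": 0.42, "width": 0.15, "height": 0.22}
print(to_absolute(box, 1024, 768))  # -> (276, 322, 153, 168)
```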
For your extra question: I guess the results are relative because the processing scales/resizes the image to a specific ratio. As you can see in the sample showing how to consume an exported Custom Vision model here, the image is rescaled to a 256x256 square.
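To make that concrete, here is a small illustration (the 1024x768 original size is made up; the 256x256 comes from that sample) of why fractions stay meaningful across resizes while raw pixel values do not:

```
# Illustration: a relative coordinate means the same thing at any resolution,
# but an absolute pixel value only makes sense for one specific image size.
ORIGINAL_W, ORIGINAL_H = 1024, 768   # made-up original image size
MODEL_W, MODEL_H = 256, 256          # size the export sample resizes to

relative_width = 0.5                       # "half the image", true at any size
pixels_in_model = relative_width * MODEL_W        # 128 px in the resized input
pixels_in_original = relative_width * ORIGINAL_W  # 512 px in the original image
print(pixels_in_model, pixels_in_original)
```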