Tags: ios, swift, coreml, coremltools, mlmodel

Use first MLModel MLMultiArray output as second MLModel MLMultiArray input


I have two CoreML MLModels (both converted from .pb files).
The first model outputs a Float32 3 × 512 × 512 MLMultiArray, which is essentially an image.
The second model's input is a Float32 1 × 360 × 640 × 3 MLMultiArray, also an image but with a different size and layout.

I know that, in theory, I can change the second model's input to accept an image, convert the first model's output to an image (post-prediction), resize it, and feed it to the second model. But that doesn't feel very efficient, and the models already introduce a significant delay, so I'm trying to improve performance.
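For concreteness, the image round trip I mean would look roughly like this with CoreMLHelpers. This is only a sketch: the helper names `cgImage(min:max:)` and `resized(to:)` are what I understand that library to provide (they may differ between versions), and the 0...255 value range is an assumption about the first model's output.

```swift
import CoreML
import UIKit
// Sketch of the image round trip, using helpers from
// https://github.com/hollance/CoreMLHelpers (names assumed, see above).

func imageForSecondModel(from firstOutput: MLMultiArray) -> UIImage? {
    // 3 x 512 x 512 float32 array -> CGImage (values assumed in 0...255).
    guard let cg = firstOutput.cgImage(min: 0, max: 255) else { return nil }
    // Resize to the second model's 640 x 360 spatial size.
    return UIImage(cgImage: cg).resized(to: CGSize(width: 640, height: 360))
}
```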

Is it possible to "resize"/"reshape"/"transpose" the first model's output to match the second model's input? I'm using the helpers from https://github.com/hollance/CoreMLHelpers (by the amazing Matthijs Hollemans), but I don't really understand how to do this without damaging the data, while keeping it as efficient as possible.

Thanks!


Solution

  • You don't have to turn them into images. Some options for using MLMultiArrays instead of images:
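    For illustration only, here is one hand-rolled way to go straight from the first array to the second: a nearest-neighbor resample combined with a CHW → NHWC transpose in a single pass. It assumes the 3 × 512 × 512 output is channels-first, both buffers are contiguous float32, and that the two models agree on channel order (RGB vs. BGR) and value range; none of that is guaranteed, so verify against your models.

```swift
import CoreML

// Minimal sketch: resize 3 x 512 x 512 (C,H,W) -> 1 x 360 x 640 x 3 (N,H,W,C)
// with nearest-neighbor sampling, assuming contiguous float32 buffers.
func resampleForSecondModel(_ src: MLMultiArray) throws -> MLMultiArray {
    let (srcH, srcW) = (512, 512)
    let (dstH, dstW) = (360, 640)
    let dst = try MLMultiArray(shape: [1, 360, 640, 3], dataType: .float32)

    let srcPtr = src.dataPointer.assumingMemoryBound(to: Float32.self)
    let dstPtr = dst.dataPointer.assumingMemoryBound(to: Float32.self)

    for y in 0..<dstH {
        let sy = y * srcH / dstH                  // nearest source row
        for x in 0..<dstW {
            let sx = x * srcW / dstW              // nearest source column
            for c in 0..<3 {
                // CHW layout: index = c*H*W + y*W + x
                let value = srcPtr[c * srcH * srcW + sy * srcW + sx]
                // NHWC layout: index = y*W*C + x*C + c
                dstPtr[y * dstW * 3 + x * 3 + c] = value
            }
        }
    }
    return dst
}
```

    Nearest-neighbor keeps the loop trivial; for better quality or speed you'd want bilinear sampling or a vImage/Accelerate resize, but the indexing above is the part that tends to trip people up.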