ios, mlmodel

How To Get Bounding Box Data For an mlmodel Created With a Playground


We created an mlmodel in a playground by following https://developer.apple.com/documentation/createml/creating_an_image_classifier_model.

Then we used the following code to get bounding box data for the objects in that mlmodel. But in "results" we only get prediction values and the names of the objects we trained on; even that was exciting, but it was not our aim.

print("detectOurModelHandler \(results)") shows us all the objects and prediction values in our mlmodel, and each result is a VNClassificationObservation.

So it is no surprise that we do not have box data.

So I think the problem is how to create a model whose results are VNRecognizedObjectObservation?

According to https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture we are supposed to get bounding box data.

But we cannot. print("detectOurModelHandler 2") is never reached, and neither is dump(objectBounds).

We call findOurModels from captureOutput, by the way, about once per second while testing the model.
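Since captureOutput fires for every frame (typically 30-60 times per second), gating the Vision request to once per second needs a small throttle. A minimal sketch of one way to do that; the `Throttle` type and its names are hypothetical, not part of the question's code:

```swift
import Foundation

// Hypothetical helper: only lets an action through if at least
// `interval` seconds have passed since the last accepted call.
struct Throttle {
    let interval: TimeInterval
    private var last: TimeInterval = -.infinity

    mutating func shouldFire(now: TimeInterval) -> Bool {
        guard now - last >= interval else { return false }
        last = now
        return true
    }
}

var gate = Throttle(interval: 1.0)
print(gate.shouldFire(now: 0.0))  // true  — first frame goes through
print(gate.shouldFire(now: 0.5))  // false — too soon, frame skipped
print(gate.shouldFire(now: 1.2))  // true  — a second has elapsed
```

Inside captureOutput you would feed it the buffer's presentation timestamp and only call findOurModels when shouldFire returns true.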

// Load the Core ML model once; try! will crash if the model fails to load.
lazy var ourModel: VNCoreMLModel = {
    return try! VNCoreMLModel(for: ImageClassifier().model)
}()

lazy var ourModelRequest: VNCoreMLRequest = {
    return VNCoreMLRequest(model: ourModel, completionHandler: detectOurModelHandler)
}()



func findOurModels(pixelbuffer: CVPixelBuffer) {
    let testImage = takeAFrameImage(imageBuffer: pixelbuffer)
    guard let imageForThis = testImage.cgImage else { return }

    // Raw value 6 is CGImagePropertyOrientation.right.
    let handler = VNImageRequestHandler(cgImage: imageForThis,
                                        orientation: .right,
                                        options: [:])

    try? handler.perform([ourModelRequest])
}




func detectOurModelHandler(request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results else { return }

        // With a classifier model these are VNClassificationObservation,
        // so the loop body below never runs.
        print("detectOurModelHandler \(results)")

        for observation in results {
            guard let objectObservation = observation as? VNRecognizedObjectObservation else {
                continue
            }

            print("detectOurModelHandler 2")

            // Convert the normalized (0...1) bounding box into pixel coordinates.
            let objectBounds = VNImageRectForNormalizedRect(objectObservation.boundingBox,
                                                            self.frameWidth,
                                                            self.frameHeight)
            dump(objectBounds)
        }
    }
}
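For context on what VNImageRectForNormalizedRect is doing in the handler above: Vision reports bounding boxes in a normalized coordinate space (0...1, origin at the bottom-left), so they must be scaled by the image's pixel dimensions before drawing. A sketch of the same scaling math in plain Swift; the `denormalize` helper is mine, for illustration, not a Vision API:

```swift
import Foundation

// Hypothetical helper mirroring the scaling done by VNImageRectForNormalizedRect:
// multiply each normalized component by the image's pixel dimensions.
func denormalize(_ box: CGRect, width: Int, height: Int) -> CGRect {
    return CGRect(x: box.origin.x * CGFloat(width),
                  y: box.origin.y * CGFloat(height),
                  width: box.size.width * CGFloat(width),
                  height: box.size.height * CGFloat(height))
}

// A box covering the center-right of a 400x200 image:
let box = denormalize(CGRect(x: 0.25, y: 0.5, width: 0.5, height: 0.25),
                      width: 400, height: 200)
print(box.minX, box.minY, box.width, box.height) // 100.0 100.0 200.0 50.0
```

Note that the result is still in Vision's bottom-left coordinate system; to draw in UIKit you would additionally flip the y axis.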

Solution

  • It cannot be done with Create ML: an image classifier model only produces labels and confidences (VNClassificationObservation), never bounding boxes. I have not tried it myself yet, but reportedly a model with bounding box data can be created with Turi Create's object detection toolkit.