My app uses Apple's example code for tracking multiple objects in a video using Vision (in my case, it tracks the path of a barbell during weightlifting exercises), but after updating to iOS 13 the video is not displayed properly. Instead of filling the screen like it used to, the video is now cropped and only a small portion of it is visible. I've talked to Apple Technical Support and they acknowledged the bug, but a fix is not in their plans.
What bugs me the most is that a) landscape videos work, but portrait videos don't, and b) the bug only happens on real devices, not in the simulator. Attached below is the portion of the code that displays the video depending on its proportions (landscape or portrait).
private func scaleImage(to viewSize: CGSize) -> UIImage? {
    guard self.image != nil && self.image.size != CGSize.zero else {
        return nil
    }

    self.imageAreaRect = CGRect.zero

    // There are two possible cases to fully fit self.image into the ImageTrackingView area:
    // Option 1) image.width = view.width ==> image.height <= view.height
    // Option 2) image.height = view.height ==> image.width <= view.width
    let imageAspectRatio = self.image.size.width / self.image.size.height

    // Check if we're in the Option 1) case and initialize self.imageAreaRect accordingly.
    let imageSizeOption1 = CGSize(width: viewSize.width, height: floor(viewSize.width / imageAspectRatio))
    if imageSizeOption1.height <= viewSize.height {
        print("Landscape. View size: \(viewSize)")
        let imageX: CGFloat = 0
        let imageY = floor((viewSize.height - imageSizeOption1.height) / 2.0)
        self.imageAreaRect = CGRect(x: imageX,
                                    y: imageY,
                                    width: imageSizeOption1.width,
                                    height: imageSizeOption1.height)
    }

    if self.imageAreaRect == CGRect.zero {
        // Check if we're in the Option 2) case if Option 1) didn't work out, and initialize imageAreaRect accordingly.
        print("Portrait. View size: \(viewSize)")
        let imageSizeOption2 = CGSize(width: floor(viewSize.height * imageAspectRatio), height: viewSize.height)
        if imageSizeOption2.width <= viewSize.width {
            let imageX = floor((viewSize.width - imageSizeOption2.width) / 2.0)
            let imageY: CGFloat = 0
            self.imageAreaRect = CGRect(x: imageX,
                                        y: imageY,
                                        width: imageSizeOption2.width,
                                        height: imageSizeOption2.height)
        }
    }

    // In the next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
    // Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(self.imageAreaRect.size, false, 0.0)
    self.image.draw(in: CGRect(x: 0.0, y: 0.0, width: self.imageAreaRect.size.width, height: self.imageAreaRect.size.height))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
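As an aside, the Option 1 / Option 2 aspect-fit math above can also be expressed with AVFoundation's AVMakeRect(aspectRatio:insideRect:), which computes the same rectangle in one call. A minimal sketch (the image and viewSize parameters here stand in for self.image and the view's size; the free function is just for illustration):

import AVFoundation
import UIKit

// Computes the same centered aspect-fit rectangle as the
// Option 1 / Option 2 logic in scaleImage(to:) above.
func aspectFitRect(for image: UIImage, in viewSize: CGSize) -> CGRect {
    return AVMakeRect(aspectRatio: image.size,
                      insideRect: CGRect(origin: .zero, size: viewSize))
}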
Any help would be much appreciated.
Thanks
After several weeks of stalking Apple Technical Support, they came up with a very simple workaround: you just need to do the conversion via CGImage instead of going directly from CIImage to UIImage.
This is the old code (as you can see in the current example in Apple's documentation):
if let frame = frame {
    let ciImage = CIImage(cvPixelBuffer: frame).transformed(by: transform)
    let uiImage = UIImage(ciImage: ciImage)
    self.trackingView.image = uiImage
}
And this is the correction. Going through a CGImage forces Core Image to actually render the frame, instead of handing UIImageView a CIImage-backed UIImage, which UIKit apparently does not always draw correctly:
if let frame = frame {
    let ciImage = CIImage(cvPixelBuffer: frame).transformed(by: transform)
    guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return
    }
    let uiImage = UIImage(cgImage: cgImage)
    self.trackingView.image = uiImage
}
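One caveat with the snippet above (my addition, not part of Apple's workaround): CIContext is expensive to create, so if you render every frame this way you'll want to create the context once and reuse it rather than allocating a new one per frame. A minimal sketch, with FrameRenderer and render(_:transform:) as hypothetical names:

import CoreImage
import CoreVideo
import UIKit

final class FrameRenderer {
    // Create the context once; recreating it for every frame wastes resources.
    private let ciContext = CIContext()

    func render(_ frame: CVPixelBuffer, transform: CGAffineTransform) -> UIImage? {
        let ciImage = CIImage(cvPixelBuffer: frame).transformed(by: transform)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}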
Hope this helps!