I have the following code to scale an input CIImage to the drawable size of an MTKView and center it after adjusting for aspect ratio. But the image placement is not correct for all aspect ratios, and I'm not sure why. In particular, for square images I get different placement for (1080, 1080) and (2160, 2160) inputs, and neither is centered. What am I doing wrong?
func drawCIImage(_ ciImage: CIImage?) {
    guard let image = ciImage,
          let currentDrawable = currentDrawable,
          let commandBuffer = commandQueue?.makeCommandBuffer() else {
        return
    }
    let drawableSize = self.drawableSize
    let adjustedDrawableSize = AVMakeRect(
        aspectRatio: CGSize(width: image.extent.width, height: image.extent.height),
        insideRect: CGRect(x: 0, y: 0, width: drawableSize.width, height: drawableSize.height)
    ).size
    let scaleX = adjustedDrawableSize.width / image.extent.width
    let scaleY = adjustedDrawableSize.height / image.extent.height
    NSLog("Adjusted size \(adjustedDrawableSize), \(drawableSize), \(image.extent), \(scaleX), \(scaleY)")
    let scaledImage = image
        .transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY)
            .translatedBy(x: (drawableSize.width - adjustedDrawableSize.width) / 2,
                          y: (drawableSize.height - adjustedDrawableSize.height) / 2))
        .composited(over: CIImage.red.clampedToExtent())
    let destination = CIRenderDestination(width: Int(drawableSize.width),
                                          height: Int(drawableSize.height),
                                          pixelFormat: self.colorPixelFormat,
                                          commandBuffer: commandBuffer,
                                          mtlTextureProvider: { () -> MTLTexture in
                                              return currentDrawable.texture
                                          })
    destination.colorSpace = metalLayer.colorspace
    _ = try? context.startTask(toRender: scaledImage, to: destination)
    commandBuffer.present(currentDrawable)
    commandBuffer.commit()
}
EDIT: The following code works, but I'm not sure what the difference is:
var scaledImage = image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
scaledImage = scaledImage.transformed(by: CGAffineTransform(
    translationX: (drawableSize.width - adjustedDrawableSize.width) / 2,
    y: (drawableSize.height - adjustedDrawableSize.height) / 2))
scaledImage = scaledImage.composited(over: CIImage.black.clampedToExtent())
In your unedited code, you called translatedBy on the scaling CGAffineTransform, which concatenates the two operations into one transform (via matrix multiplication). With Core Graphics, translatedBy prepends the translation, so the translation is applied before the scale; your centering offsets are therefore multiplied by scaleX and scaleY, which is why the error grows with the image size. To keep the chained form, you would have to express the translation in the original (unscaled) image coordinates, i.e. divide each offset by its scale factor.
It's easier to perform the two operations one after the other, as you did in your edited code.
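You can see the ordering with CGAffineTransform alone, without any Core Image involved. A minimal Apple-platform sketch, using an example scale of 2 and an intended offset of 10 (the values are illustrative, not from your code):

```swift
import CoreGraphics

let scale = CGAffineTransform(scaleX: 2, y: 2)
let origin = CGPoint.zero

// translatedBy PREPENDS the translation: the point is translated first,
// then scaled, so the offset itself is multiplied by the scale factors.
let chained = scale.translatedBy(x: 10, y: 10)
print(origin.applying(chained))     // (20.0, 20.0), not the intended (10.0, 10.0)

// concatenating APPENDS: scale first, then translate in output coordinates.
let sequential = scale.concatenating(CGAffineTransform(translationX: 10, y: 10))
print(origin.applying(sequential))  // (10.0, 10.0)

// Equivalent single-transform fix: pre-divide the offsets by the scale.
let fixed = scale.translatedBy(x: 10 / 2, y: 10 / 2)
print(origin.applying(fixed))       // (10.0, 10.0)
```

Your edited code works because two successive transformed(by:) calls behave like the concatenating case: the second transform operates on the already-scaled image, in drawable coordinates.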