I have this code in a UIImage category method:
// Scale the image with Core Image's Lanczos filter.
CIImage *input_ciimage = [CIImage imageWithCGImage:self.CGImage];
CIImage *output_ciimage =
    [[CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
        kCIInputImageKey, input_ciimage,
        kCIInputScaleKey, [NSNumber numberWithFloat:0.72], // e.g. 800.0 / self.size.width
        nil] outputImage];

// Render the filter output into a CGImage and wrap it in a UIImage.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef output_cgimage = [context createCGImage:output_ciimage
                                          fromRect:[output_ciimage extent]];
UIImage *output_uiimage = [UIImage imageWithCGImage:output_cgimage
                                              scale:1.0
                                        orientation:self.imageOrientation];
CGImageRelease(output_cgimage);
return output_uiimage;
When the value of kCIInputScaleKey is greater than some threshold, output_uiimage is a black image. In my case the threshold is 0.52: any scale above that produces a black image. If I first rotate the image by 90 degrees, I get the same black result, but the threshold is 0.72 instead of 0.52.
Is this a bug in the library, or a mistake in my code?
I'm testing on an iPhone 4 with iOS 7.1.2 and Xcode 6.0, in case that matters.
Here is what Apple said:
This scenario exposes a bug in Core Image. The bug occurs when rendering requires an intermediate buffer that has a dimension greater than the GPU texture limit (4096) AND the input image fits within that limit. This happens with any filter that performs a convolution (blur, Lanczos) on an input image whose width or height is close to the GL texture limit.
Note: the render is successful if one of the dimensions of the input image is increased to 4097. Replacing CILanczosScaleTransform with CIAffineTransform (lower quality) or resizing the image with CG are possible workarounds for the provided sample code.
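For reference, here is a minimal sketch of the two workarounds Apple mentions, written as drop-in replacements for the snippet above. The method names and the hard-coded scale are my own inventions, and both methods are assumed to live in the same UIImage category; this is an illustration under those assumptions, not Apple's sample code.

// Workaround 1 (illustrative name): scale with CIAffineTransform instead of
// CILanczosScaleTransform. No convolution is involved, so the oversized
// intermediate buffer is never needed, at the cost of Lanczos quality.
- (UIImage *)affineScaledImageWithScale:(CGFloat)scale {
    CIImage *input = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIAffineTransform"];
    [filter setValue:input forKey:kCIInputImageKey];
    CGAffineTransform t = CGAffineTransformMakeScale(scale, scale);
    [filter setValue:[NSValue valueWithCGAffineTransform:t]
              forKey:@"inputTransform"];
    CIImage *output = [filter outputImage];

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimage = [context createCGImage:output
                                       fromRect:[output extent]];
    UIImage *result = [UIImage imageWithCGImage:cgimage
                                          scale:1.0
                                    orientation:self.imageOrientation];
    CGImageRelease(cgimage);
    return result;
}

// Workaround 2 (illustrative name): resize with Core Graphics, bypassing
// Core Image and its GPU texture limit entirely.
- (UIImage *)cgScaledImageWithScale:(CGFloat)scale {
    CGSize target = CGSizeMake(self.size.width * scale,
                               self.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(target, NO, self.scale);
    CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(),
                                     kCGInterpolationHigh);
    [self drawInRect:CGRectMake(0, 0, target.width, target.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

The Core Graphics path also preserves the image orientation automatically, since drawInRect: renders the UIImage (not the raw CGImage), which may explain why the rotated and unrotated cases behave the same there.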