Tags: ios, swift, image, color, filter

Image manipulation with Swift has random effects


I have this ImageProcessor in Swift (this is not mine), and I am trying to use it with some filters that I made. While trying things out, I noticed that the output contained some random noise.

To test this, I made a custom filter, NoFilter, which does nothing:

public class NoFilter: ImageFilterProtocol {
    public func apply(pixel: Pixel) -> Pixel {
        return pixel
    }
}
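
The question doesn't show the Pixel type or the filter protocol; assuming they look roughly like the widely circulated RGBAImage playground code, a minimal sketch would be:

```swift
// Sketch of the types the question presumably relies on (not shown in the
// question itself; names and layout are assumptions).
public struct Pixel {
    public var value: UInt32   // packed RGBA, one byte per channel

    // Convenience accessors for the individual channels.
    public var red: UInt8 {
        get { return UInt8(value & 0xFF) }
        set { value = (value & 0xFFFFFF00) | UInt32(newValue) }
    }
}

public protocol ImageFilterProtocol {
    func apply(pixel: Pixel) -> Pixel
}
```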

This should output the same image it receives, yet it sometimes produces random artifacts in the image.

For example:

[screenshot: the same image processed with NoFilter twice, showing different noise artifacts each time]

Notice that it is the same code, yet it generates different artifacts every time it runs. At the end of the question is a link so you can try it yourself; every run can produce a different result. What could be causing this?

The current flow of the program is that the ImageProcessor receives an image and converts it to an RGBAImage. When apply is called with a filter, it applies that filter to every pixel in the RGBAImage (which in this case is no filter at all). Finally, when getImage is called, it converts the RGBAImage back to a UIImage. This suggests there may be a problem with the conversion to and/or from RGBAImage, but I can't find anything wrong with it.
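
The ImageProcessor class itself is not shown in the question; a minimal sketch of the flow described above, with all names assumed, would be:

```swift
// Hypothetical sketch of the ImageProcessor described in the question
// (the real class is not shown); class and method names are assumptions.
public class ImageProcessor {
    private var rgbaImage: RGBAImage

    public init?(image: UIImage) {
        guard let rgba = RGBAImage(image: image) else { return nil }
        self.rgbaImage = rgba
    }

    // Run the filter over every pixel in the backing buffer.
    public func apply(filter: ImageFilterProtocol) {
        for i in 0..<rgbaImage.pixels.count {
            rgbaImage.pixels[i] = filter.apply(rgbaImage.pixels[i])
        }
    }

    // Convert back to UIImage when done.
    public func getImage() -> UIImage? {
        return rgbaImage.toUIImage()
    }
}
```

With NoFilter, apply is an identity pass over the buffer, so any corruption must come from the UIImage ↔ RGBAImage conversions, as the question suspects.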

public struct RGBAImage {
    public var pixels: UnsafeMutableBufferPointer<Pixel>
    
    public var width: Int
    public var height: Int
    
    public init?(image: UIImage) {
        guard let cgImage = image.CGImage else { return nil }
        
        // Redraw image for correct pixel format
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        
        var bitmapInfo: UInt32 = CGBitmapInfo.ByteOrder32Big.rawValue
        bitmapInfo |= CGImageAlphaInfo.PremultipliedLast.rawValue & CGBitmapInfo.AlphaInfoMask.rawValue
        
        width = Int(image.size.width)
        height = Int(image.size.height)
        let bytesPerRow = width * 4
        
        let imageData = UnsafeMutablePointer<Pixel>.alloc(width * height)
        
        guard let imageContext = CGBitmapContextCreate(imageData, width, height, 8, bytesPerRow, colorSpace, bitmapInfo) else { return nil }
        CGContextDrawImage(imageContext, CGRect(origin: CGPointZero, size: image.size), cgImage)
        
        pixels = UnsafeMutableBufferPointer<Pixel>(start: imageData, count: width * height)
    }
    
    public func toUIImage() -> UIImage? {
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        var bitmapInfo: UInt32 = CGBitmapInfo.ByteOrder32Big.rawValue
        bitmapInfo |= CGImageAlphaInfo.PremultipliedLast.rawValue & CGBitmapInfo.AlphaInfoMask.rawValue
        
        let bytesPerRow = width * 4
        
        let imageContext = CGBitmapContextCreateWithData(pixels.baseAddress, width, height, 8, bytesPerRow, colorSpace, bitmapInfo, nil, nil)
        
        guard let cgImage = CGBitmapContextCreateImage(imageContext) else {return nil}
        let image = UIImage(CGImage: cgImage)
        
        return image
    }
}

This is the code I am currently testing; please try it.

Any ideas?

EDIT: I uploaded my code to git so it can be checked online.


Solution

  • You're forgetting to clear your image context when you draw your image into it. Try adding a call to CGContextClearRect:

    let rect = CGRect(origin: CGPointZero, size: image.size)
    CGContextClearRect(imageContext, rect)    // Avoid undefined pixels!
    CGContextDrawImage(imageContext, rect, cgImage)
    

    That will avoid undefined pixels peeking out from underneath your image's transparent areas.
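
The root cause is that `UnsafeMutablePointer<Pixel>.alloc` returns *uninitialized* memory, and `CGContextDrawImage` only overwrites the bytes the image actually covers; wherever the image is transparent, whatever garbage was in the buffer shows through, differently on every run. An equivalent fix, sketched against the same Swift 2 API the question uses, is to zero the buffer right after allocating it:

```swift
// Alternative fix: zero-initialize the pixel buffer immediately after
// allocation, so no undefined bytes survive under transparent areas.
// (Sketch only; drop into RGBAImage.init in place of the plain alloc.)
let imageData = UnsafeMutablePointer<Pixel>.alloc(width * height)
memset(imageData, 0, width * height * sizeof(Pixel))   // clear garbage bytes
```

Either approach works; `CGContextClearRect` clears through Core Graphics, while `memset` clears the raw allocation directly before the context ever touches it.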