ios • metal • core-image • cifilter • metalkit

Metal Shading language for Core Image color kernel, how to pass an array of float3


I'm trying to port some CIFilters from this source using the Metal Shading Language for Core Image.
I have a color palette composed of an array of RGB structs, and I want to pass it as an argument to a custom CI color image kernel.
The RGB struct is converted into an array of SIMD3<Float>.

static func SIMD3Palette(_ palette: [RGB]) -> [SIMD3<Float>] {
    return palette.map { $0.toFloat3() }
}
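For context, a minimal sketch of what the RGB struct and its toFloat3() helper might look like (the names RGB and toFloat3 come from the snippet above; the exact field layout is an assumption):

```swift
// Hypothetical sketch of the RGB struct used above; the real definition
// in the project may differ. SIMD3<Float> is part of the Swift standard library.
struct RGB {
    let r: Float
    let g: Float
    let b: Float

    // Convert to the SIMD type that will be handed to the kernel.
    func toFloat3() -> SIMD3<Float> {
        return SIMD3<Float>(r, g, b)
    }
}
```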

The kernel should take an array of simd_float3 values; the problem is that when I run the filter, it tells me that the argument at index 1 is expecting an NSData.

override var outputImage: CIImage? {
    guard let inputImage = inputImage else {
        return nil
    }
    let palette = EightBitColorFilter.palettes[Int(inputPaletteIndex)]
    let extent = inputImage.extent
    let arguments = [inputImage, palette, Float(palette.count)] as [Any]

    let final = colorKernel.apply(extent: extent, arguments: arguments)

    return final
}

This is the kernel:

float4 eight_bit(sample_t image, simd_float3 palette[], float paletteSize, destination dest) {
    float dist = distance(image.rgb, palette[0]);
    float3 returnColor = palette[0];
    for (int i = 1; i < floor(paletteSize); ++i) {
        float tempDist = distance(image.rgb, palette[i]);
        if (tempDist < dist) {
            dist = tempDist;
            returnColor = palette[i];
        }
    }
    return float4(returnColor, 1);
}

I'm wondering how I can pass a data buffer to the kernel, since converting it into an NSData doesn't seem to be enough.
I saw some examples, but they use the "full" shading language, which is not available for Core Image; Core Image only supports a subset for dealing with fragments.


Solution

  • Update

    We have now figured out how to pass data buffers directly into Core Image kernels. Using a CIImage as described below is not needed, but still possible.

    Assuming that you have your raw data as an NSData, you can just pass it to the kernel on invocation:

    kernel.apply(..., arguments: [data, ...])
    

Note: Data might also work, but I know that NSData is an argument type that allows Core Image to cache filter results based on input arguments. So when in doubt, it's better to cast to NSData.
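To connect this back to the question, the [SIMD3<Float>] palette can be flattened into an NSData before invocation. A sketch (the palette values are arbitrary; note that SIMD3<Float> has a 16-byte stride in Swift, padded like a float4, which matches the alignment of float3 in Metal's constant address space):

```swift
import Foundation

// Sketch: packing a palette of SIMD3<Float> into NSData for kernel.apply.
// MemoryLayout<SIMD3<Float>>.stride is 16 (padded like float4), matching
// float3 alignment in Metal's constant address space.
let palette: [SIMD3<Float>] = [
    SIMD3<Float>(1, 0, 0),
    SIMD3<Float>(0, 1, 0),
    SIMD3<Float>(0, 0, 1)
]
let paletteData = palette.withUnsafeBufferPointer { buffer -> NSData in
    NSData(bytes: buffer.baseAddress,
           length: buffer.count * MemoryLayout<SIMD3<Float>>.stride)
}
// kernel.apply(extent: extent,
//              arguments: [inputImage, paletteData, Float(palette.count)])
```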

    Then in the kernel function, you only need to declare the parameter with an appropriate constant type:

    extern "C" float4 myKernel(constant float3 data[], ...) {
        float3 data0 = data[0];
        // ...
    }
    

    Previous Answer

Core Image kernels don't seem to support pointer or array parameter types. Though there seems to be something coming with iOS 13. From the release notes:

    Metal CIKernel instances support arguments with arbitrarily structured data.

But, as so often with Core Image, there seems to be no further documentation for it…

    However, you can still use the "old way" of passing buffer data by wrapping it in a CIImage and sampling it in the kernel. For example:

    let array: [Float] = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
    let data = array.withUnsafeBufferPointer { Data(buffer: $0) }
    let dataImage = CIImage(bitmapData: data,
                            bytesPerRow: data.count,
                            size: CGSize(width: array.count / 4, height: 1),
                            format: .RGBAf,
                            colorSpace: nil)
    

Note that there is no CIFormat for 3-channel images, since GPUs don't support those. So you either have to use the single-channel .Rf format and re-pack the values into float3 inside your kernel, or add some padding to your data and use .RGBAf with float4 accordingly (which I'd recommend, since it reduces texture fetches).
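For the .RGBAf route, padding the 3-channel palette to 4 channels might look like this (a sketch; the alpha filler value 1.0 is an arbitrary choice):

```swift
import Foundation

// Sketch: padding each 3-channel palette entry to 4 channels so that one
// palette color occupies exactly one RGBAf pixel in the data image.
let palette: [SIMD3<Float>] = [SIMD3<Float>(1, 0, 0), SIMD3<Float>(0, 1, 0)]
let padded: [Float] = palette.flatMap { [$0.x, $0.y, $0.z, 1.0] }
let data = padded.withUnsafeBufferPointer { Data(buffer: $0) }
// Then wrap it in a CIImage as shown above:
// CIImage(bitmapData: data, bytesPerRow: data.count,
//         size: CGSize(width: palette.count, height: 1),
//         format: .RGBAf, colorSpace: nil)
```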

    When you pass that image into your kernel, you probably want to set the sampling mode to nearest, otherwise you might get interpolated values when sampling between two pixels:

    kernel.apply(..., arguments: [dataImage.samplingNearest(), ...])
    

In your (Metal) kernel, you can access the data as you would a normal input image, via a sampler:

    extern "C" float4 myKernel(coreimage::sampler data, ...) {
        float4 data0 = data.sample(data.transform(float2(0.5, 0.5))); // data[0]
        float4 data1 = data.sample(data.transform(float2(1.5, 0.5))); // data[1]
        // ...
    }
    

    Note that I added 0.5 to the coordinates so that they point in the middle of a pixel in the data image to avoid ambiguity and interpolation.

Also note that pixel values you get from a sampler always have 4 channels. So even when you create your data image with format .Rf, you'll get a float4 when sampling it (the other values are filled with 0.0 for G and B, and 1.0 for alpha). In this case, you can just do

    float data0 = data.sample(data.transform(float2(0.5, 0.5))).x;
    

    Edit

    I previously forgot to transform the sample coordinate from absolute pixel space (where (0.5, 0.5) would be the middle of the first pixel) to relative sampler space (where (0.5, 0.5) would be the middle of the whole buffer). It's fixed now.