I have a disparity image and I am normalizing it using the sample code below, but it is very slow. I need to accelerate it, for example with a custom CIFilter or some other technique, but I don't know how. I am currently running the code with CIContext() and I believe it runs on the CPU (not sure). Is there a way to run it on the GPU and speed it up without writing a custom CIFilter? Here is the current code:
extension CVPixelBuffer {
    func normalize() {
        let width = CVPixelBufferGetWidth(self)
        let height = CVPixelBufferGetHeight(self)

        CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))

        // Assumes a single-plane Float32 buffer (e.g. kCVPixelFormatType_DepthFloat32).
        let floatBuffer = CVPixelBufferGetBaseAddress(self)!
            .assumingMemoryBound(to: Float.self)

        // First pass: find the minimum and maximum disparity values.
        var minPixel: Float = 1.0
        var maxPixel: Float = 0.0
        for y in 0 ..< height {
            for x in 0 ..< width {
                let pixel = floatBuffer[y * width + x]
                minPixel = min(pixel, minPixel)
                maxPixel = max(pixel, maxPixel)
            }
        }

        // Second pass: rescale every pixel into the [0, 1] range.
        let range = maxPixel - minPixel
        for y in 0 ..< height {
            for x in 0 ..< width {
                let pixel = floatBuffer[y * width + x]
                floatBuffer[y * width + x] = (pixel - minPixel) / range
            }
        }

        CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
    }
}
You have the pixel values as Float values, so you could also use vDSP. vDSP_minv and vDSP_maxv compute the extrema, and the per-pixel update

floatBuffer[y * width + x] = (pixel - minPixel) / range

can be replaced by vDSP_vasm (you'll need to multiply by the reciprocal of range).
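For example, a sketch along these lines (the method name normalizeUsingVDSP is mine; it keeps the same assumptions as your code, i.e. a single-plane Float32 buffer with no row padding, and uses vDSP_vsmsa, a scalar-multiply-and-add, for the rescale step since that variant takes scalar operands directly, whereas vDSP_vasm expects a second vector):

```swift
import Accelerate
import CoreVideo

extension CVPixelBuffer {
    // Sketch (method name is mine): same min/max normalization as the loop
    // version, but with the per-pixel work handed to vDSP. Assumes a
    // single-plane Float32 buffer (e.g. kCVPixelFormatType_DepthFloat32)
    // with no row padding.
    func normalizeUsingVDSP() {
        CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
        defer { CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0)) }

        let count = vDSP_Length(CVPixelBufferGetWidth(self) * CVPixelBufferGetHeight(self))
        let floatBuffer = CVPixelBufferGetBaseAddress(self)!
            .assumingMemoryBound(to: Float.self)

        // Extrema in two vectorized passes.
        var minPixel: Float = 0
        var maxPixel: Float = 0
        vDSP_minv(floatBuffer, 1, &minPixel, count)
        vDSP_maxv(floatBuffer, 1, &maxPixel, count)

        let range = maxPixel - minPixel
        guard range > 0 else { return }

        // out[n] = in[n] * (1/range) + (-min/range)  ==  (in[n] - min) / range
        var scale = 1 / range
        var offset = -minPixel / range
        vDSP_vsmsa(floatBuffer, 1, &scale, &offset, floatBuffer, 1, count)
    }
}
```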
It might also be useful to look at vDSP_normalize, which does this calculation:
m = sum(A[n], 0 <= n < N) / N;
d = sqrt(sum(A[n]**2, 0 <= n < N) / N - m**2);
if (C)
{
    // Normalize.
    for (n = 0; n < N; ++n)
        C[n] = (A[n] - m) / d;
}
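Note that this is mean/standard-deviation (z-score) normalization, not the min/max rescale your current code does. If that is acceptable for your disparity data, a call could look roughly like this (a sketch; zScoreNormalized is just an illustrative wrapper name, not part of Accelerate):

```swift
import Accelerate

// Illustrative wrapper (name is mine): z-score normalize an array of Floats
// with a single vDSP_normalize call, returning the computed statistics too.
func zScoreNormalized(_ input: [Float]) -> (values: [Float], mean: Float, stddev: Float) {
    var mean: Float = 0
    var stddev: Float = 0
    var output = [Float](repeating: 0, count: input.count)
    // C[n] = (A[n] - mean) / stddev, with mean and stddev computed by vDSP.
    vDSP_normalize(input, 1, &output, 1, &mean, &stddev, vDSP_Length(input.count))
    return (output, mean, stddev)
}
```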