Hi, I'm working on getting raw image data from AVCaptureVideoDataOutput.
I have a lot of experience with AVFoundation and have used it in many projects, but this time I'm working on an image processing project, which is an area I have no experience in.
public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
}
I know I'm getting a CMSampleBuffer right here in the delegate callback.
My questions:
In this question, the author tried using OpenCV and got what he wanted. But how exactly does the OpenCV result differ from the CMSampleBuffer? (Why did the author say the OpenCV result is the real raw data?)
If I set things up as below,
if self.session.canAddOutput(self.captureOutput) {
    self.session.addOutput(self.captureOutput)
    captureOutput.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "capture"))
    captureOutput.alwaysDiscardsLateVideoFrames = true
}
by setting the pixel format key to kCVPixelFormatType_32BGRA, am I now getting raw data from the sample buffer?
It's not RAW in your case. All modern sensors are built around a Bayer filter, so what you get is an image that has already been converted from the Bayer format. You can't get a raw image with this API. There is a format called kCVPixelFormatType_14Bayer_BGGR, but the camera probably won't support it.
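To see what the output can actually deliver, you can query its supported pixel formats. This is just a sketch (the videoOutput name is mine); in practice the Bayer format will normally not show up in the list on iPhone cameras:

import AVFoundation

let videoOutput = AVCaptureVideoDataOutput()
for formatType in videoOutput.availableVideoPixelFormatTypes {
    // Each entry is an OSType (a FourCC such as 'BGRA' or '420f')
    print(String(format: "supported pixel format: 0x%08x", formatType))
}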
Maybe you will find an answer in WWDC session 419; I don't know.
It's the same; cv::Mat is just a wrapper around the image data from the CMSampleBuffer. If you save your data as PNG you will not lose any quality, since PNG compression is lossless; TIFF can also store the data without any compression.
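For example, here is a minimal sketch of saving one frame losslessly with Core Image; savePNG is just a name I made up, and it assumes the BGRA settings from your question:

import AVFoundation
import CoreImage

// Sketch: write one frame losslessly; assumes the pixel buffer is BGRA as configured above.
func savePNG(from sampleBuffer: CMSampleBuffer, to url: URL) throws {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let image = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    // PNG compression is lossless, so no image information is discarded here.
    try context.writePNGRepresentation(of: image,
                                       to: url,
                                       format: .BGRA8,
                                       colorSpace: CGColorSpaceCreateDeviceRGB())
}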
If you use the RGBA format, the data is converted from Bayer to RGBA behind the scenes. To get the Y channel you would additionally have to apply an RGBA-to-YUV conversion and take the Y channel. Or you can use the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format and read the Y channel directly from the first plane. Also note that the VideoRange variant has a different (narrower) luma/chroma output range.
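If you go that route, the only change to the setup in your question would be the videoSettings dictionary, roughly like this (sketch):

captureOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String:
        kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
]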
// Objective-C++ (.mm, since it uses cv::Mat): wrap the Y plane in a cv::Mat without copying
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
int width = (int)CVPixelBufferGetWidth(imageBuffer);
int height = (int)CVPixelBufferGetHeight(imageBuffer);
uint8_t *yBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // plane 0 = luma (Y)
size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
cv::Mat yChannel(height, width, CV_8UC1, yBuffer, yPitch); // no copy; clone() it if you need it after unlocking
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
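If you'd rather stay in Swift and skip OpenCV, the same idea looks roughly like this; the lumaBytes helper is just my sketch, not part of any API:

import AVFoundation

// Sketch: copy the luma (Y) plane of a biplanar YUV buffer into a plain byte array.
func lumaBytes(from sampleBuffer: CMSampleBuffer) -> [UInt8]? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return nil }
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
    let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    // Copies the whole plane, row padding included (bytesPerRow can exceed the width).
    let plane = UnsafeBufferPointer(start: base.assumingMemoryBound(to: UInt8.self),
                                    count: height * bytesPerRow)
    return Array(plane)
}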