I am using AVCaptureDataOutputSynchronizerDelegate to capture synchronized video, depth, and metadata, with these three outputs:
private let videoDataOutput = AVCaptureVideoDataOutput()
private let depthDataOutput = AVCaptureDepthDataOutput()
private let metadataOutput = AVCaptureMetadataOutput()
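For reference, the outputs are tied to the synchronizer roughly like this (the queue label and helper name below are just illustrative, not my exact code):
private var outputSynchronizer: AVCaptureDataOutputSynchronizer?
private let dataOutputQueue = DispatchQueue(label: "data output queue") // label is illustrative

// Called after the three outputs have been added to the capture session.
private func configureSynchronizer() {
    outputSynchronizer = AVCaptureDataOutputSynchronizer(
        dataOutputs: [videoDataOutput, depthDataOutput, metadataOutput])
    outputSynchronizer?.setDelegate(self, queue: dataOutputQueue)
}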
Using the code below, I am able to pull the video data out of the delegate method that AVCaptureDataOutputSynchronizerDelegate provides.
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    guard let syncedVideoData = synchronizedDataCollection.synchronizedData(for: self.videoDataOutput)
        as? AVCaptureSynchronizedSampleBufferData else { return }
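    // (Illustrative sketch, not my exact code:) the depth and metadata outputs
    // can be pulled out of the same collection in the same way.
    if let syncedDepthData = synchronizedDataCollection.synchronizedData(for: self.depthDataOutput)
        as? AVCaptureSynchronizedDepthData, !syncedDepthData.depthDataWasDropped {
        let depthData = syncedDepthData.depthData
        // ...use depthData
    }
    if let syncedMetadata = synchronizedDataCollection.synchronizedData(for: self.metadataOutput)
        as? AVCaptureSynchronizedMetadataObjectData {
        let metadataObjects = syncedMetadata.metadataObjects
        // ...use metadataObjects
    }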
The problem is, when I try to save the video data into an array as below, I get an OutOfBuffers error. The same thing happens no matter what I try to keep: the video data itself, imagery derived from it, or anything else backed by these buffers.
var array: [CMSampleBuffer] = []
...
array.append(syncedVideoData.sampleBuffer)
// Gets to about 5-6 sets of data, then it runs out of buffers.
// I think the buffers are being retained permanently since I am saving them
// to a global variable here, leading to the out-of-buffers error.
What I think is happening is that by saving this data into an array, I am keeping the underlying buffers alive, whereas they would normally be released back to the capture output.
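Just to illustrate that suspicion (a sketch of the pool behaviour, not a fix, since I actually want to keep the frames): if only the most recent few sample buffers are retained and older ones are released, they go back to the output's pool and frames keep arriving.
// Sketch: bound how many pooled sample buffers are retained at once.
// The limit of 3 is arbitrary; the point is that releasing older buffers
// hands them back to the capture output's pool.
private var recentBuffers: [CMSampleBuffer] = []

private func retainBounded(_ sampleBuffer: CMSampleBuffer) {
    recentBuffers.append(sampleBuffer)
    if recentBuffers.count > 3 {
        recentBuffers.removeFirst()
    }
}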
The page linked earlier for OutOfBuffers says:
"If you need to perform extended processing of captured data, copy that data into buffers whose lifetimes you manage instead of relying on buffers vended by the capture output."
So I attempted to copy the data into a new CMSampleBuffer:
extension VideoCapture: AVCaptureDataOutputSynchronizerDelegate {
    func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                                didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
        var newData: CMSampleBuffer?
        guard let syncedVideoData = synchronizedDataCollection.synchronizedData(for: self.videoDataOutput)
            as? AVCaptureSynchronizedSampleBufferData else { return }
        guard !syncedVideoData.sampleBufferWasDropped else {
            print(syncedVideoData.droppedReason.rawValue)
            return
        }
        let videoSampleBuffer = syncedVideoData.sampleBuffer
        // Copy the sample buffer and retain the copy instead of the original.
        CMSampleBufferCreateCopy(allocator: kCFAllocatorDefault,
                                 sampleBuffer: videoSampleBuffer,
                                 sampleBufferOut: &newData)
        if let newData = newData {
            self.buffer.append(newData)
        }
    }
}
but this causes the same issue: I get to about 5-6 sets of video data and then nothing more arrives. I suspect CMSampleBufferCreateCopy only copies the sample buffer structure and still references the same underlying pixel buffer from the capture output's pool.
Any guidance on how to "copy that data into buffers whose lifetimes you manage instead of relying on buffers vended by the capture output," as the OutOfBuffers page suggests?
I was able to create a buffer of my own by following this buffer and this guide, plus a few others in the Apple docs.
...
guard let imagePixelBuffer = CMSampleBufferGetImageBuffer(videoSampleBuffer) else { fatalError() }
// Lock the source buffer while reading from it
CVPixelBufferLockBaseAddress(imagePixelBuffer, .readOnly)
// Copy it into a buffer whose lifetime I manage
self.buffer = createMyBuffer(pixelBuffer: imagePixelBuffer)
// Unlock the source buffer
CVPixelBufferUnlockBaseAddress(imagePixelBuffer, .readOnly)
self.doSomething(self.buffer)
...
// Requires `import Accelerate` for the vImage calls.
// Note: vImageScale_ARGB8888 assumes a 32-bit, 4-channel buffer (e.g. kCVPixelFormatType_32BGRA).
func createMyBuffer(pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    // "Scale" to the source's own dimensions, which is effectively just a copy.
    let scaleWidth: Int = CVPixelBufferGetWidth(pixelBuffer)
    let scaleHeight: Int = CVPixelBufferGetHeight(pixelBuffer)

    let flags = CVPixelBufferLockFlags(rawValue: 0)
    guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(pixelBuffer, flags) else {
        return nil
    }
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, flags) }

    guard let srcData = CVPixelBufferGetBaseAddress(pixelBuffer) else {
        print("Error: could not get pixel buffer base address")
        return nil
    }
    let srcBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    var srcBuffer = vImage_Buffer(data: srcData,
                                  height: vImagePixelCount(CVPixelBufferGetHeight(pixelBuffer)),
                                  width: vImagePixelCount(CVPixelBufferGetWidth(pixelBuffer)),
                                  rowBytes: srcBytesPerRow)

    // Allocate destination memory that I own (freed via the release callback below).
    let destBytesPerRow = scaleWidth * 4
    guard let destData = malloc(scaleHeight * destBytesPerRow) else {
        print("Error: out of memory")
        return nil
    }
    var destBuffer = vImage_Buffer(data: destData,
                                   height: vImagePixelCount(scaleHeight),
                                   width: vImagePixelCount(scaleWidth),
                                   rowBytes: destBytesPerRow)

    let error = vImageScale_ARGB8888(&srcBuffer, &destBuffer, nil, vImage_Flags(kvImageLeaveAlphaUnchanged))
    if error != kvImageNoError {
        print("Error:", error)
        free(destData)
        return nil
    }

    // Free the malloc'd memory when the new pixel buffer is released.
    let releaseCallback: CVPixelBufferReleaseBytesCallback = { _, ptr in
        if let ptr = ptr {
            free(UnsafeMutableRawPointer(mutating: ptr))
        }
    }

    let pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer)
    var dstPixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreateWithBytes(nil, scaleWidth, scaleHeight,
                                              pixelFormat, destData,
                                              destBytesPerRow, releaseCallback,
                                              nil, nil, &dstPixelBuffer)
    if status != kCVReturnSuccess {
        print("Error: could not create new pixel buffer")
        free(destData)
        return nil
    }
    return dstPixelBuffer
}
This works, but it seems redundant: I'm using a function I found that "scales" the buffer, except I scale it to exactly the same size as the source, and it hands back a new buffer that I can release whenever I choose. It's a roundabout way to get a copy, but the functionality works.
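If the "scale" step really is only there to force a copy, a more direct route might be to allocate a fresh CVPixelBuffer and copy the bytes across plane by plane. The copyPixelBuffer function below is an untested sketch of that idea (the name and structure are mine, not from any of the guides above); it handles the planar 4:2:0 formats the camera usually vends as well as packed formats like 32BGRA.
// Untested sketch: deep-copy a CVPixelBuffer into memory allocated outside
// the capture output's pool. Assumes AVFoundation/CoreVideo are imported,
// as elsewhere in this class.
func copyPixelBuffer(_ src: CVPixelBuffer) -> CVPixelBuffer? {
    var dst: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     CVPixelBufferGetWidth(src),
                                     CVPixelBufferGetHeight(src),
                                     CVPixelBufferGetPixelFormatType(src),
                                     nil,
                                     &dst)
    guard status == kCVReturnSuccess, let dstBuffer = dst else { return nil }

    CVPixelBufferLockBaseAddress(src, .readOnly)
    CVPixelBufferLockBaseAddress(dstBuffer, [])
    defer {
        CVPixelBufferUnlockBaseAddress(dstBuffer, [])
        CVPixelBufferUnlockBaseAddress(src, .readOnly)
    }

    if CVPixelBufferIsPlanar(src) {
        // Planar formats (e.g. 420f/420v): copy each plane separately.
        for plane in 0..<CVPixelBufferGetPlaneCount(src) {
            guard let srcBase = CVPixelBufferGetBaseAddressOfPlane(src, plane),
                  let dstBase = CVPixelBufferGetBaseAddressOfPlane(dstBuffer, plane) else { return nil }
            let height = CVPixelBufferGetHeightOfPlane(src, plane)
            let srcRowBytes = CVPixelBufferGetBytesPerRowOfPlane(src, plane)
            let dstRowBytes = CVPixelBufferGetBytesPerRowOfPlane(dstBuffer, plane)
            // Copy row by row because bytes-per-row can differ between the two buffers.
            for row in 0..<height {
                memcpy(dstBase + row * dstRowBytes, srcBase + row * srcRowBytes,
                       min(srcRowBytes, dstRowBytes))
            }
        }
    } else {
        // Packed formats (e.g. 32BGRA): a single block of rows.
        guard let srcBase = CVPixelBufferGetBaseAddress(src),
              let dstBase = CVPixelBufferGetBaseAddress(dstBuffer) else { return nil }
        let height = CVPixelBufferGetHeight(src)
        let srcRowBytes = CVPixelBufferGetBytesPerRow(src)
        let dstRowBytes = CVPixelBufferGetBytesPerRow(dstBuffer)
        for row in 0..<height {
            memcpy(dstBase + row * dstRowBytes, srcBase + row * srcRowBytes,
                   min(srcRowBytes, dstRowBytes))
        }
    }
    return dstBuffer
}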