I’m working with Apple’s AVCamFilter demo app. Unfortunately it doesn’t include movie recording capability, so I tried creating a movie output, right below the photo output, at the top of the class:
let movieOutput = AVCaptureMovieFileOutput()
I then added the output (again, right below where the photo output is added) in configureSession(), like so:
if session.canAddOutput(movieOutput) {
    session.addOutput(movieOutput)
} else {
    print("Could not add movie output to the session")
    setupResult = .configurationFailed
    session.commitConfiguration()
    return
}
On my iPhone (13 Pro), this works and I can successfully record video. But on my iPad Air the camera preview just goes black. No error comes up, and setupResult prints as "success", but there's no video. Interestingly, if I take a photo with the shutter button, it captures successfully and the image shows up in the photo library.
I've made no other modifications to the project, so you can try this yourself by downloading the demo app and just adding those lines.
Update: I've found some related posts suggesting that AVCaptureMovieFileOutput may be incompatible with AVCaptureVideoDataOutput. But if that's true, why does it work on my iPhone?
I never figured out why it worked on my iPhone, but as a workaround I implemented recording with AVAssetWriter. This should allow AVCamFilter to record video (and audio!) on any device.
First, add AVCaptureAudioDataOutputSampleBufferDelegate to the class declaration.
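The declaration then looks roughly like this (the existing conformances are abbreviated here; keep whatever your copy of the demo already lists and just append the new protocol):
class CameraViewController: UIViewController,
                            AVCaptureVideoDataOutputSampleBufferDelegate,    // already in the demo
                            AVCaptureAudioDataOutputSampleBufferDelegate {   // new
    // ...
}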
Add these variables to the class:
let audioSession = AVCaptureSession()
private let audioDataOutputQueue = DispatchQueue(label: "AudioDataQueue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)
private let audioDataOutput = AVCaptureAudioDataOutput()
private var _assetWriter: AVAssetWriter?
private var _assetWriterVideoInput: AVAssetWriterInput?
private var _assetWriterAudioInput: AVAssetWriterInput?
private var _adapter: AVAssetWriterInputPixelBufferAdaptor?
private var _filename = ""
private var _time: Double = 0
private var _captureState = _CaptureState.idle

private enum _CaptureState {
    case idle, start, capturing, end
}
I put the audio on a separate session because I wanted to include haptics (see the note below), but you can put them on the same session if you wish.
In configureSession(), after the final session.commitConfiguration(), add:
audioSession.beginConfiguration()

// Add an audio input
let audioDevice = AVCaptureDevice.default(for: .audio)
let audioInput = try! AVCaptureDeviceInput(device: audioDevice!)
if audioSession.canAddInput(audioInput) {
    audioSession.addInput(audioInput)
}

// Add an audio data output
if audioSession.canAddOutput(audioDataOutput) {
    audioSession.addOutput(audioDataOutput)
    audioDataOutput.setSampleBufferDelegate(self, queue: audioDataOutputQueue)
} else {
    print("Could not add audio data output to the session")
}

audioSession.commitConfiguration()
Then replace the contents of captureOutput(_:didOutput:from:) with this:
if connection.output == videoDataOutput {
    processVideo(sampleBuffer: sampleBuffer)
}

let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds

switch _captureState {
case .start:
    // Set up the recorder. Waiting for a video frame ensures the writer is set up
    // with video; when it starts with an audio sample, it can fail.
    guard connection.output == videoDataOutput else { break }
    _filename = UUID().uuidString
    let videoPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent("\(_filename).mov")
    let writer = try! AVAssetWriter(outputURL: videoPath, fileType: .mov)

    let settings = videoDataOutput.recommendedVideoSettingsForAssetWriter(writingTo: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings) // or explicit settings, e.g. [AVVideoCodecKey: AVVideoCodecType.h264, AVVideoWidthKey: 1920, AVVideoHeightKey: 1080]
    input.mediaTimeScale = CMTimeScale(bitPattern: 600)
    input.expectsMediaDataInRealTime = true
    input.transform = CGAffineTransform(rotationAngle: .pi / 2)
    let adapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input, sourcePixelBufferAttributes: nil)
    if writer.canAdd(input) {
        writer.add(input)
    }

    let audioSettings = audioDataOutput.recommendedAudioSettingsForAssetWriter(writingTo: .mov)
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
    audioInput.expectsMediaDataInRealTime = true
    if writer.canAdd(audioInput) {
        writer.add(audioInput)
    }

    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    _assetWriter = writer
    _assetWriterVideoInput = input
    _assetWriterAudioInput = audioInput
    _adapter = adapter
    _captureState = .capturing
    _time = timestamp
case .capturing:
    if connection.output == videoDataOutput {
        // Write the filtered frames from the preview view to the video input
        if _assetWriterVideoInput?.isReadyForMoreMediaData == true {
            let time = CMTime(seconds: timestamp - _time, preferredTimescale: CMTimeScale(600))
            _adapter?.append(previewView.pixelBuffer!, withPresentationTime: time)
        }
    } else if connection.output == audioDataOutput {
        // Retime the audio samples and write them to the audio input
        if _assetWriterAudioInput!.isReadyForMoreMediaData {
            let time = CMTime(seconds: timestamp - _time, preferredTimescale: CMTimeScale(600))
            if let modifiedBuffer = setPresentationTimestamp(sampleBuffer: sampleBuffer, presentationTimestamp: time) {
                _assetWriterAudioInput!.append(modifiedBuffer)
            }
        }
    }
case .end:
    guard _assetWriterVideoInput?.isReadyForMoreMediaData == true, _assetWriter!.status != .failed else { break }
    let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent("\(_filename).mov")
    _assetWriterVideoInput?.markAsFinished()
    _assetWriterAudioInput?.markAsFinished()
    _assetWriter?.finishWriting { [weak self] in
        self?._captureState = .idle
        self?._assetWriter = nil
        self?._assetWriterVideoInput = nil
        self?._assetWriterAudioInput = nil
        UISaveVideoAtPathToSavedPhotosAlbum(url.path, nil, nil, nil)
    }
default:
    break
}
(You can do whatever you want with the URL; it's saved to the photo library here as an example.)
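For instance, if you'd rather use the Photos framework than UISaveVideoAtPathToSavedPhotosAlbum (it gives you a completion handler and an error), something like this works inside the finishWriting closure, assuming you import Photos and have photo-library add permission:
PHPhotoLibrary.shared().performChanges({
    // Create a new photo-library asset from the finished movie file
    _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
}) { saved, error in
    if !saved {
        print("Could not save video: \(String(describing: error))")
    }
}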
Lastly you’ll need this to sync the audio and video:
func setPresentationTimestamp(sampleBuffer: CMSampleBuffer, presentationTimestamp: CMTime) -> CMSampleBuffer? {
    var sampleBufferCopy: CMSampleBuffer? = nil
    var timingInfoArray = [CMSampleTimingInfo(duration: CMTimeMake(value: 1, timescale: 30), presentationTimeStamp: presentationTimestamp, decodeTimeStamp: CMTime.invalid)]
    let status = CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault, sampleBuffer: sampleBuffer, sampleTimingEntryCount: 1, sampleTimingArray: &timingInfoArray, sampleBufferOut: &sampleBufferCopy)
    if status == noErr {
        return sampleBufferCopy
    } else {
        // Copying with new timing failed; drop this sample
        return nil
    }
}
To actually trigger the video recording, change _captureState to .start; change it to .end to finish.
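For example, a record button action could just toggle the state (the button and method names here are hypothetical; the demo doesn't include them):
@IBAction private func recordButtonTapped(_ sender: UIButton) {
    switch _captureState {
    case .idle:
        _captureState = .start   // the next video frame sets up the writer
    case .capturing:
        _captureState = .end     // the next frame finishes writing and saves the file
    default:
        break                    // ignore taps while starting or finishing
    }
}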
Notes:
-You may see black frames at the start and end of videos. I tried a variety of approaches to solve this but none worked, so I changed writer.startSession(atSourceTime: .zero) to writer.startSession(atSourceTime: CMTime(seconds: 0.25, preferredTimescale: CMTimeScale(600))), which simply trims 0.25 seconds off the start. For the end, right before _assetWriterVideoInput?.markAsFinished(), adding this seems to fix it:
let endTime = timestamp - _time
_assetWriter?.endSession(atSourceTime: CMTime(seconds: endTime, preferredTimescale: CMTimeScale(600)))
-Running the mic disables haptics. This is why I used a separate audio session: I tell it to startRunning when the video recording starts and stopRunning when it ends, so haptics keep working whenever you're not recording. Note that you will of course have to get microphone permission before audio recording will work; see the sketch after these notes.
-Credit to here for some of the AVAssetWriter code: https://gist.github.com/yusuke024/b5cd3909d9d7f9e919291491f6b381f0
-I’m by no means an expert! So anyone feel free to point out if something here is unnecessary/wrong.
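Here's a minimal sketch of how the audio-only session and the permission request can fit together (the method names are made up, and sessionQueue is just the demo's existing session queue; any background queue will do). Call the first method right before setting _captureState to .start, and the second right after setting it to .end:
private func startAudioForRecording() {
    // Ask for microphone access (requires NSMicrophoneUsageDescription in Info.plist)
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        guard granted else { return }          // video still records, just without sound
        self.sessionQueue.async {
            self.audioSession.startRunning()   // mic goes live; haptics stop working
        }
    }
}

private func stopAudioAfterRecording() {
    sessionQueue.async {
        self.audioSession.stopRunning()        // mic off; haptics work again
    }
}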