Tags: ios, iphone, swift, webrtc, apprtcdemo

How can I modify (add filters to) the camera stream that WebRTC is sending to other peers/the server?


Scope

I am using RTCCameraPreviewView to show the local camera stream:

    let videoSource = self.pcFactory.avFoundationVideoSource(with: nil)
    let videoTrack = self.pcFactory.videoTrack(with: videoSource, trackId: "video0")

    //setting the capture session to my RTCCameraPreviewView:
    (self.previewView as! RTCCameraPreviewView).captureSession = (videoTrack.source as! RTCAVFoundationVideoSource).captureSession

    stream = self.pcFactory.mediaStream(withStreamId: "unique_label")
    audioTrack = self.pcFactory.audioTrack(withTrackId: "audio0")
    stream.addAudioTrack(audioTrack)

    var device: AVCaptureDevice?
    for captureDevice in AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) {
        if (captureDevice as AnyObject).position == AVCaptureDevicePosition.front {
            device = captureDevice as? AVCaptureDevice
            break
        }
    }
    if device != nil && videoTrack != nil {
        stream.addVideoTrack(videoTrack)
    }

    configuration = RTCConfiguration()

    configuration.iceServers = iceServers
    peerConnection = self.pcFactory.peerConnection(with: configuration, constraints: RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: ["DtlsSrtpKeyAgreement": "true"]), delegate: self)
    peerConnection.add(stream)

Things are working as expected.

Problem

Now I want to get hold of the frames from the camera, pre-process them to apply some filters (sepia, black-and-white, etc.), and then relay the frames to WebRTC. After struggling through the WebRTC docs I am still unable to figure out where to start and what to do.

Any kind of heads-up would be highly appreciated!


Solution

  • I found a way out. Basically, you need to build your own WebRTC pod, which lets you add a hook for using a custom AVCaptureVideoDataOutputSampleBufferDelegate on the videoOutput object. In that delegate you handle each sampleBuffer, modify it, and then pass it on to WebRTC.

    Implementation

    Open the file webrtc/sdk/objc/Framework/Classes/RTCAVFoundationVideoCapturerInternal.mm

    and on the line:

        [videoDataOutput setSampleBufferDelegate:self queue:self.frameQueue];

    use a custom delegate instead of self.

    In that delegate:

        class YourDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
            func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
                let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

                //modify the pixelBuffer
                //get the modifiedSampleBuffer from the modified pixelBuffer
                DispatchQueue.main.async {
                    //show the modified buffer to the user
                }

                //To pass the modified buffer to WebRTC (warning: this is Objective-C++,
                //not Swift; the _capturer object is found in RTCAVFoundationVideoCapturerInternal.mm):
                //_capturer->CaptureSampleBuffer(modifiedSampleBuffer, _rotation);
            }
        }
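
    For the "modify the pixelBuffer" step, here is a minimal sketch of one way the filtering and sample-buffer rebuilding could look, assuming a Core Image sepia filter. The helper names (applySepia, makeSampleBuffer) are illustrative, not part of WebRTC or AVFoundation:

        import AVFoundation
        import CoreImage
        import CoreMedia

        let ciContext = CIContext() // create once and reuse; contexts are expensive

        func applySepia(to pixelBuffer: CVPixelBuffer) {
            let inputImage = CIImage(cvPixelBuffer: pixelBuffer)
            guard let filter = CIFilter(name: "CISepiaTone") else { return }
            filter.setValue(inputImage, forKey: kCIInputImageKey)
            filter.setValue(0.8, forKey: kCIInputIntensityKey)
            guard let outputImage = filter.outputImage else { return }
            // Rendering back into the same buffer keeps the sketch short;
            // in practice you may want a separate buffer from a CVPixelBufferPool.
            ciContext.render(outputImage, to: pixelBuffer)
        }

        // Wrap the (modified) pixel buffer in a new CMSampleBuffer,
        // preserving the timing info of the original frame.
        func makeSampleBuffer(from pixelBuffer: CVPixelBuffer, timingFrom original: CMSampleBuffer) -> CMSampleBuffer? {
            var timingInfo = CMSampleTimingInfo()
            CMSampleBufferGetSampleTimingInfo(original, 0, &timingInfo)

            var formatDescription: CMVideoFormatDescription?
            CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &formatDescription)
            guard let format = formatDescription else { return nil }

            var sampleBuffer: CMSampleBuffer?
            CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, nil, nil, format, &timingInfo, &sampleBuffer)
            return sampleBuffer
        }

    Inside captureOutput you would then call applySepia(to:) on the unwrapped pixel buffer and makeSampleBuffer(from:timingFrom:) to get the modifiedSampleBuffer that is handed over to _capturer->CaptureSampleBuffer.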