Tags: webrtc, libjingle

WebRTC library remote audio rendering via AddSink


When the connection is set up and ready my webrtc::PeerConnectionObserver implementation receives a call to

void OnAddStream(webrtc::MediaStreamInterface* stream);

where I pull the webrtc::AudioTrackInterface out of the webrtc::MediaStreamInterface.

I get a valid (non-null) pointer back from this; call it track:

webrtc::AudioTrackInterface* track;

and I proceed to call track->AddSink(sink), where sink is my instance of a class that inherits from webrtc::AudioTrackSinkInterface and implements

  virtual void OnData(const void* audio_data,
                      int bits_per_sample,
                      int sample_rate,
                      int number_of_channels,
                      int number_of_frames) = 0;

At this point I expect to receive regular callbacks into my concrete class with the decoded audio data, just as I receive calls into my webrtc::VideoRendererInterface implementation with a cricket::VideoFrame* when video data is available. But I do not.

What am I doing wrong?


Solution

  • You're not doing anything wrong, except that you're relying on an interface that isn't fully wired up yet. The interface itself exists, but there's no code behind it that actually calls your OnData() method!

    The interface in question is in WebRTC's mediastreaminterface.h. There's a note farther down in the file that hints at the unimplemented status of AddSink() and OnData():

    // Get a pointer to the audio renderer of this AudioTrack.
    // The pointer is valid for the lifetime of this AudioTrack.
    // TODO(xians): Remove the following interface after Chrome switches to
    // AddSink() and RemoveSink() interfaces.
    virtual cricket::AudioRenderer* GetRenderer() { return NULL; }
    

    Unfortunately, the AudioRenderer class referenced here doesn't look very easy to work with. It's defined in Chromium's audio_renderer.h and uses all sorts of Chromium internal types. If you figure out what to do with it, please let me know, because I am trying to solve the same problem myself.

    I did notice some code in WebRTC's mediastreamhandler.cc that uses OnData() in the same way you and I are trying to. There is a LocalAudioTrackHandler constructor that calls track->AddSink() on the audio track, passing it an instance of the LocalAudioSinkAdapter class. That class has an OnData() method which simply forwards to sink_->OnData(). The track->AddSink() call does get executed, but the OnData() method never gets called!

    I think this AddSink()/OnData() code was added in anticipation of Chromium implementing these calls internally, so when they do switch over it will start using this code instead of the AudioRenderer code. That avoids the need to update both codebases in perfect sync with each other.

    So all I can suggest is to wait until the code to call OnData() is implemented inside Chromium.