There's simply no full example anywhere. Among the live555 test programs there's testRTSPClient.cpp, which connects to an RTSP server and receives raw RTP packets but does nothing with them; it receives them through its DummySink class.
There is an example of how to use testRTSPClient.cpp to receive H.264 NAL units, but live555 also has custom sink classes specifically for each codec (for example, H264or5VideoRTPSink.cpp), so it seems a lot better to use them.
So if I substitute the instance of DummySink in testRTSPClient.cpp with an instance of a subclass of H264or5VideoRTPSink, and make this subclass receive the frames, I think it might work. If I just follow the implementation of DummySink, I'd only need to write something like this:
```cpp
class MyH264VideoRTPSink : public H264VideoRTPSink {
public:
  static MyH264VideoRTPSink* createNew(UsageEnvironment& env,
        MediaSubsession& subsession, // identifies the kind of data that's being received
        char const* streamId = NULL); // identifies the stream itself (optional)

private:
  MyH264VideoRTPSink(UsageEnvironment& env, MediaSubsession& subsession,
                     char const* streamId);
    // called only by "createNew()"
  virtual ~MyH264VideoRTPSink();

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds);
  void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                         struct timeval presentationTime,
                         unsigned durationInMicroseconds);

  // redefined virtual functions:
  virtual Boolean continuePlaying();

  u_int8_t* fReceiveBuffer;
  MediaSubsession& fSubsession;
  char* fStreamId;
};
```
If we look at DummySink, it suggests that afterGettingFrame is the function that receives frames. But where is the frame received? How can I access it?
```cpp
void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                  struct timeval presentationTime,
                                  unsigned /*durationInMicroseconds*/) {
  // We've just received a frame of data. (Optionally) print out information about it:
  // ...
}
```
UPDATE:
I created my own H264 sink class: https://github.com/lucaszanella/jscam/blob/f6b38eea2934519bcccd76c8d3aee7f58793da00/src/jscam/android/app/src/main/cpp/MyH264VideoRTPSink.cpp, but it has a createNew different from the one in DummySink:
```cpp
createNew(UsageEnvironment& env, Groupsock* RTPgs, unsigned char rtpPayloadFormat);
```
There's simply no mention of what RTPgs is meant to be, nor rtpPayloadFormat. I don't even know if I'm on the right track...
The first confusion is between Source & Sink. The FAQ briefly describes the workflow:
'source1' -> 'source2' (a filter) -> 'source3' (a filter) -> 'sink'
The class H264VideoRTPSink is made to publish data through RTP, not to consume data. In the case of the RTSP client sample testRTSPClient.cpp, the source, which depends on the codec, is created while processing the DESCRIBE answer, by calling MediaSession::createNew.
The sink does not depend on the codec. The startPlaying method on the MediaSink registers the callback afterGettingFrame to be called when data is received from the source. When this callback is executed, you should call continuePlaying to register it again for the next incoming data.
In DummySink::afterGettingFrame, the buffer contains the H.264 elementary-stream frames extracted from the RTP payload. In order to dump the H.264 elementary stream frames, you can have a look at h264bitstream.