Thanks Ross. Instead of adding a new MediaSink subclass, I decided to identify 
the NAL units directly in "DummySink::afterGettingFrame()". I also added some 
members to DummySink for calculating the received bitrate; that seems to be 
working for me, and my decoder is now able to decode the streams. I was mainly 
interested in how much CPU load this consumes, which is why I didn't go for the 
MediaSink implementation you suggested.
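
Roughly, what I added looks like this (just a sketch; the members "fTotalBytes" 
and "fFirstFrameTime" are ones I added to DummySink myself, and it assumes that 
"fReceiveBuffer" starts at the NAL header, i.e. with no start code prepended):

// Additions inside the (non-static) DummySink::afterGettingFrame() in
// "testRTSPClient.cpp"; fTotalBytes (unsigned) and fFirstFrameTime
// (struct timeval) are members I added to DummySink myself.
void DummySink::afterGettingFrame(unsigned frameSize, unsigned /*numTruncatedBytes*/,
                                  struct timeval presentationTime,
                                  unsigned /*durationInMicroseconds*/) {
  // fReceiveBuffer holds one complete NAL unit (no start code), so the first
  // byte is the NAL header; its low 5 bits are the NAL unit type.
  u_int8_t nalUnitType = fReceiveBuffer[0] & 0x1F;

  // Rough received-bitrate estimate, based on presentation times.
  if (fTotalBytes == 0) fFirstFrameTime = presentationTime;
  fTotalBytes += frameSize;
  double elapsed = (presentationTime.tv_sec - fFirstFrameTime.tv_sec)
                 + (presentationTime.tv_usec - fFirstFrameTime.tv_usec) / 1e6;
  if (elapsed > 0.0) {
    envir() << "NAL type " << nalUnitType << ", ~"
            << (fTotalBytes * 8.0) / elapsed << " bits/sec\n";
  }

  continuePlaying(); // ask for the next frame, as the original DummySink does
}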

Since I just want to receive the stream and hand it to the decoder, do you 
think using testRTSPClient over openRTSP would be an advantage, or should there 
be no difference between the two from a CPU-cycles perspective? I'm about to 
profile both on my system.

I had read in one of the FAQs that LIVE555 is not thread-safe. Does that mean 
that if a multithreaded application wants to use it, the application itself has 
to synchronize access to the LIVE555 objects it uses?
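
My reading of the FAQ is that one thread should run the event loop and own all 
the LIVE555 objects, and that other threads should talk to it only through 
"TaskScheduler::triggerEvent()". Is a sketch like the following (the handler 
and trigger names are my own) the intended pattern?

#include "BasicUsageEnvironment.hh"

char eventLoopWatchVariable = 0;
EventTriggerId stopTrigger = 0;

// Runs inside the event-loop thread, so it may safely touch LIVE555 objects.
void stopHandler(void* /*clientData*/) {
  eventLoopWatchVariable = 1; // make doEventLoop() return
}

// In the LIVE555 thread:
//   TaskScheduler* scheduler = BasicTaskScheduler::createNew();
//   UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);
//   stopTrigger = scheduler->createEventTrigger(stopHandler);
//   ... set up the RTSP client, sinks, etc. ...
//   env->taskScheduler().doEventLoop(&eventLoopWatchVariable);
//
// From any other thread (the only LIVE555 call made from outside):
//   scheduler->triggerEvent(stopTrigger, NULL);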

Regards,
Yogesh.


From: [email protected] 
[mailto:[email protected]] On Behalf Of Ross Finlayson
Sent: Tuesday, January 29, 2013 3:22 AM
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] Parsing frames received by testRTSPClient

I want to know if testRTSPClient allows me to specify the codec type and if it 
can give me a single encoded frame every time afterGettingFrame() is invoked?

Yes.  Look at the "DummySink" class that the "testRTSPClient" demo application 
uses.  Note, in particular, the (non-static) "DummySink::afterGettingFrame()" 
function ("testRTSPClient.cpp", lines 479-500.  Note that when this function is 
called, a complete 'frame' (for H.264, this will be a "NAL unit") will have 
already been delivered into "fReceiveBuffer".  Note that our "DummySink" 
implementation doesn't actually do anything with this data; that's why it's 
called a 'dummy' sink.

If you wanted to decode these frames, you would replace "DummySink" with your 
own "MediaSink" subclass.  It's "afterGettingFrame()" function would pass the 
data (at "fReceiveBuffer", of length "frameSize") to a decoder.

Because you are receiving H.264 video data, there is one more thing that you 
have to do before you start feeding frames to your decoder.  H.264 streams have 
out-of-band configuration information (SPS and PPS NAL units) that you may need 
to feed to the decoder to initialize it.  To get this information, call 
"MediaSubsession::fmtp_spropparametersets()" (on the video 'subsession' 
object).  This will give you an (ASCII) character string.  You can then pass 
this to "parseSPropParameterSets()", to generate binary NAL units for your 
decoder.
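
For example (a sketch; "feedConfigToDecoder()" is a stand-in for however your 
decoder accepts the SPS/PPS NAL units):

#include "liveMedia.hh"

void feedConfigToDecoder(unsigned char* nalUnit, unsigned size); // hypothetical

void sendSPropsToDecoder(MediaSubsession* videoSubsession) {
  unsigned numSPropRecords;
  SPropRecord* sPropRecords =
    parseSPropParameterSets(videoSubsession->fmtp_spropparametersets(),
                            numSPropRecords);
  for (unsigned i = 0; i < numSPropRecords; ++i) {
    // Each record is one binary NAL unit (typically the SPS, then the PPS):
    feedConfigToDecoder(sPropRecords[i].sPropBytes, sPropRecords[i].sPropLength);
  }
  delete[] sPropRecords; // each record's destructor frees its sPropBytes
}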

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

