> Based on the testRTSPClient example, I've gotten a stable RTSP connection 
> working on my target platform; in this case to a video+audio RTSP source.  
> But, now I'm struggling to figure out the next step.  My custom MediaSink 
> classes do not receive any frame data via afterGettingFrame()

That is the first thing that you should fix.  Note that when the (non-static) 
"DummySink::afterGettingFrame()" function ("testRTSPClient.cpp", lines 479-500) 
is called, a complete frame will have already been delivered into 
"fReceiveBuffer".  Note that our "DummySink" implementation doesn't actually do 
anything with this data; that's why it's called a 'dummy' sink.

If you wanted to decode these frames, you would replace "DummySink" with your 
own "MediaSink" subclass.  It's "afterGettingFrame()" function would pass the 
data (at "fReceiveBuffer", of length "frameSize") to a decoder.


> , but guessing there is significantly more logic required in these classes 
> than shown in testRTSPClient.

Not really...


> What's a 'frame', exactly

It's a complete unit of data that can be passed to a decoder.  The specific 
details depend on the specific (audio and/or video) codecs that you're 
receiving.
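
For example, if you were receiving H.264 video, then each delivered 'frame' is 
a single NAL unit, *without* the 4-byte 'start code' that many H.264 decoders 
expect to precede it - so you would typically prepend one yourself before 
handing the data on.  A rough sketch (with "myDecoder_decodeFrame()" again 
standing in for your decoder's API, and "decodeBuf" being a staging buffer of 
your own):

  #include <cstring>
  #include <sys/time.h>

  // Hypothetical decoder entry point (whatever your decoding library provides):
  void myDecoder_decodeFrame(unsigned char const* data, unsigned size,
                             struct timeval presentationTime);

  void feedH264Decoder(unsigned char* decodeBuf, unsigned char const* nalUnit,
                       unsigned nalSize, struct timeval presentationTime) {
    static unsigned char const startCode[4] = { 0x00, 0x00, 0x00, 0x01 };
    memcpy(decodeBuf, startCode, sizeof startCode);  // Annex-B start code
    memcpy(decodeBuf + 4, nalUnit, nalSize);         // the NAL unit itself
    myDecoder_decodeFrame(decodeBuf, nalSize + 4, presentationTime);
  }

(You would also typically need to give the decoder the stream's SPS/PPS NAL 
units, which are carried in the stream's SDP description, in the 
'sprop-parameter-sets' attribute.)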


> And how about these higher-level MediaSinks - do those work right out of the 
> box?  Seems too good to be true.  Let's say I had AAC+VP8 streams coming in.  
> Would I conditionally create a MPEG4LATMAudioRTPSink (what if it's non-LATM 
> MPEG4?) and a VP8VideoRTPSink in continueAfterSETUP() based on inspecting the 
> subsession?  I suppose I will have to try this out myself ;)

No, you're confused here.  The "*RTPSink" classes are used only for 
*transmitting* RTP packets.  I.e., they're used by servers, not clients, and 
are therefore not classes that you would use.

If you want to decode (and then play) the received data, then you would need a 
decoder for each media type.  Note, however, that our code *does not* include 
any decoders.  For that, you would use a separate software library - or 
decoding hardware.
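
What you *would* do, in your version of "continueAfterSETUP()", is inspect each 
subsession and create your own (decoder-feeding) "MediaSink" subclass for it.  
A rough sketch, using the same variables as "testRTSPClient.cpp", where 
"MyAACSink" and "MyVP8Sink" are hypothetical classes of yours:

  // Inside "continueAfterSETUP()", where "scs.subsession" has just been set up:
  MediaSubsession* sub = scs.subsession;

  if (strcmp(sub->mediumName(), "audio") == 0 &&
      strcmp(sub->codecName(), "MPEG4-GENERIC") == 0) {  // AAC (non-LATM)
    sub->sink = MyAACSink::createNew(env, *sub);
  } else if (strcmp(sub->mediumName(), "video") == 0 &&
             strcmp(sub->codecName(), "VP8") == 0) {
    sub->sink = MyVP8Sink::createNew(env, *sub);
  } else {
    sub->sink = DummySink::createNew(env, *sub, rtspClient->url());
  }

  if (sub->sink != NULL) {
    // Start receiving frames into the sink, as "testRTSPClient.cpp" does:
    sub->sink->startPlaying(*(sub->readSource()),
                            subsessionAfterPlaying, sub);
  }

(For AAC, the RTP payload format name - i.e., "codecName()" - is 
"MPEG4-GENERIC" for the non-LATM packing, and "MP4A-LATM" for LATM; that is how 
you would tell the two apart.)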


> I know we're encouraged to study the examples and browse the library source, 
> but there is a LOT to scan without having a roadmap!  A lot of these examples 
> seem to have more to do with serving rather than consuming streams, can 
> someone point out a good one for study?  In the class graph I also see some 
> specific MediaSink classes, e.g. MPEG4ESVideoRTPSink.  Would it make more 
> sense to study these implementations?  I guess I'm struggling to understand 
> the high-level roadmap of information.  

Maybe you should begin by explaining what it is specifically that you're trying 
to do...

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
