Sorry. That wasn't clear. Yes, FramedSource derives from MediaSource, which 
derives from Medium. RTSPClient derives from Medium.

In the example program you have an ourRTSPClient with a StreamClientState 
object attached. You "strobe" the session object to cause the client to pump 
frames through. I suppose in my case I would have a session per camera.

In testH264VideoToTransportStream you have a FramedSource file input, a 
ByteStreamFileSource, which you pass to H264VideoStreamFramer as an 
argument within the context of a session object.
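
For reference, the wiring in that demo looks roughly like this (a simplified 
sketch; error handling and the transport-stream side are omitted, and "env" 
is the usual UsageEnvironment*):

  ByteStreamFileSource* fileSource =
      ByteStreamFileSource::createNew(*env, inputFileName);

  // "H264VideoStreamFramer" accepts any FramedSource delivering raw
  // H.264 bytes, and outputs discrete NAL units:
  FramedSource* videoSource =
      H264VideoStreamFramer::createNew(*env, fileSource);

The framer is itself a FramedSource, so it can in turn be handed to a sink's 
startPlaying().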

Looking at these two pieces together, I see what I need, but I also see two 
disparate paradigms. The framer wants a FramedSource, and RTSPClient presents a 
MediaSource. RTSPClient understands RTSP; the framer understands H.264. But 
they don't fit together in any way I can see. I see a Framed paradigm and a 
Media paradigm.

From: live-devel [mailto:live-devel-boun...@ns.live555.com] On Behalf Of Ross 
Finlayson
Sent: Monday, November 03, 2014 1:38 PM
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] Buffering an H264 Video Stream

I need to keep a constant 3-second buffer of an H.264 video stream. It's for 
security cameras. When something trips the camera, I replay the 3 seconds and 
then 6 more to see the event (one hopes I'll catch some ghosts or mountain 
lions, but really it's to catch car thieves!). With MPEG it was easy because 
MPEG has discrete frames, but the much-better-definition H.264 doesn't. I mean 
it does, but they're spread out over an indefinite series of NAL packets that 
can contain various varieties of slices. Squishing them together into a 
discrete frame is a problem.
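
A rolling pre-event buffer of timestamped NAL units might look like the 
following sketch (the names and types are illustrative, not live555 API; a 
real version would also have to retain the stream's SPS/PPS NAL units and 
begin replay at an IDR frame so that the saved video is decodable):

#include <cstdint>
#include <deque>
#include <vector>

// One H.264 NAL unit, tagged with its presentation time (in seconds):
struct TimedNALUnit {
  double pts;
  std::vector<uint8_t> data;  // NAL unit payload, without a start code
};

// Keeps only the NAL units from the last 'windowSeconds' seconds; on a
// trigger, takeSnapshot() hands back the buffered pre-event units.
class PreEventBuffer {
public:
  explicit PreEventBuffer(double windowSeconds) : fWindow(windowSeconds) {}

  void addNALUnit(double pts, const uint8_t* nal, size_t size) {
    fUnits.push_back({pts, std::vector<uint8_t>(nal, nal + size)});
    // Evict anything older than the window, relative to the newest unit:
    while (!fUnits.empty() && pts - fUnits.front().pts > fWindow)
      fUnits.pop_front();
  }

  std::deque<TimedNALUnit> takeSnapshot() const { return fUnits; }

private:
  double fWindow;
  std::deque<TimedNALUnit> fUnits;
};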


It seems to me that there are two different paradigms at work in live555: 
modules that derive from Medium, and modules that derive from MediaSource.

No, this is completely wrong.  (Note that “MediaSource” is a subclass of 
“Medium”.)

I suggest that you begin by reviewing the “testRTSPClient” demo application (in 
the “testProgs” directory).  Note, in particular, the “DummySink” object that 
receives ‘frames’ (for H.264 video, each ‘frame’ will actually be a NAL unit).  
Note the code for “DummySink::afterGettingFrame()” (“testRTSPClient.cpp”, 
lines 500-521).  For your own RTSP client application, you could write your own 
‘sink’ class (a subclass of “MediaSink”) that receives H.264 NAL units, and 
processes them however you want.
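
A minimal sketch of such a sink, modeled on “DummySink” (the class name, the 
buffer size, and the empty processing hook here are illustrative, not part of 
the library):

#include "liveMedia.hh"

class H264NALUnitSink: public MediaSink {
public:
  static H264NALUnitSink* createNew(UsageEnvironment& env,
                                    unsigned bufferSize = 100000) {
    return new H264NALUnitSink(env, bufferSize);
  }

protected:
  H264NALUnitSink(UsageEnvironment& env, unsigned bufferSize)
    : MediaSink(env), fBufferSize(bufferSize) {
    fBuffer = new u_int8_t[bufferSize];
  }
  virtual ~H264NALUnitSink() { delete[] fBuffer; }

private:
  // Static trampoline with the signature that getNextFrame() expects:
  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned /*durationInMicroseconds*/) {
    ((H264NALUnitSink*)clientData)->afterGettingFrame1(frameSize,
                                                       presentationTime);
  }

  void afterGettingFrame1(unsigned frameSize,
                          struct timeval /*presentationTime*/) {
    // fBuffer[0..frameSize-1] now holds one H.264 NAL unit (with no
    // leading start code).  Process or buffer it here, then ask the
    // upstream source for the next one:
    continuePlaying();
  }

  // Called by startPlaying(); requests the next NAL unit from fSource:
  virtual Boolean continuePlaying() {
    if (fSource == NULL) return False;
    fSource->getNextFrame(fBuffer, fBufferSize,
                          afterGettingFrame, this,
                          onSourceClosure, this);
    return True;
  }

  u_int8_t* fBuffer;
  unsigned fBufferSize;
};

You would then attach it the same way “testRTSPClient” attaches its 
“DummySink”: sink->startPlaying(*(subsession->readSource()), 
afterPlayingHandler, sink); where afterPlayingHandler is your own 
completion callback.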

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
