You need to create a filter and insert it into the chain.
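
In live555 terms that usually means a FramedSource subclass, most conveniently
via FramedFilter: it pulls frames from its input source and hands them
(possibly transformed, or just copied off to the side) to whatever is
downstream. A rough sketch only; the class name and the buffering comment are
made up for illustration, and error handling is omitted:

#include "FramedFilter.hh"

// Hypothetical filter that sits between the RTP source and the sink,
// seeing every frame as it passes through so it can also be stashed
// in a rolling buffer.
class FrameBufferFilter: public FramedFilter {
public:
  static FrameBufferFilter* createNew(UsageEnvironment& env, FramedSource* inputSource) {
    return new FrameBufferFilter(env, inputSource);
  }

protected:
  FrameBufferFilter(UsageEnvironment& env, FramedSource* inputSource)
    : FramedFilter(env, inputSource) {}

private:
  // Called when the downstream object asks us for the next frame.
  virtual void doGetNextFrame() {
    // Ask our input source for a frame, delivered straight into the
    // downstream object's buffer (fTo/fMaxSize were set by the caller).
    fInputSource->getNextFrame(fTo, fMaxSize,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  }

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds) {
    FrameBufferFilter* filter = (FrameBufferFilter*)clientData;
    // ... here the filter could copy filter->fTo[0..frameSize) into its own buffer ...
    filter->fFrameSize = frameSize;
    filter->fNumTruncatedBytes = numTruncatedBytes;
    filter->fPresentationTime = presentationTime;
    filter->fDurationInMicroseconds = durationInMicroseconds;
    FramedSource::afterGetting(filter); // tell the downstream object we're done
  }
};

It would get inserted into the chain by creating it with the upstream source
as its input (for an RTSP client that's the subsession's readSource()) and
then pointing the sink's startPlaying() at the filter instead of at the
source.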

I had this exact scenario, and what I had was my own filter that handled the
incoming frames. All my frames were small POD classes with a bit of metadata
and a buffer holding the frame. I had a pool of these in different sizes, and
they were Boost reference-counted to save on memory and copying overhead as
they went to disk, to HTTP Live Streaming, out the HTTP interface, or to the
browser plugin.
I may be mistaken, but I think the Framer filters do just this.
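
Something along these lines, as an illustration only; the names are made up,
and std::shared_ptr stands in for the Boost counted pointers:

#include <memory>
#include <vector>
#include <sys/time.h>

// A small frame object: a bit of metadata plus a buffer holding the frame.
struct Frame {
  struct timeval presentationTime;
  bool isKeyFrame;
  std::vector<unsigned char> data; // the encoded frame / NAL unit payload
};

// Reference counting lets the same frame be handed to the disk writer,
// the HLS segmenter and the HTTP interface without copying it.
typedef std::shared_ptr<Frame> FramePtr;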

I then had a GOP buffer. Each GOP contained the reference to one keyframe and
a list of references to all the difference frames. Then I had an adjustable
buffer of GOPs: at most 2-3 GOPs at a time were fully or partially decoded, to
allow forward and backward play and keyframe-only fast forward, depending on
whether it was live, recorded, or cached playback.
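
Again only a sketch of the structure described, reusing the made-up
Frame/FramePtr types from above:

#include <deque>
#include <vector>

// A GOP (group of pictures): one keyframe plus the difference frames
// that depend on it.
struct Gop {
  FramePtr keyFrame;
  std::vector<FramePtr> diffFrames;
};

// Adjustable rolling buffer holding the last few GOPs.
std::deque<Gop> gopBuffer;

void addFrame(const FramePtr& frame, size_t maxGops) {
  if (frame->isKeyFrame) {
    gopBuffer.push_back(Gop());
    gopBuffer.back().keyFrame = frame;
    if (gopBuffer.size() > maxGops) gopBuffer.pop_front(); // drop the oldest GOP
  } else if (!gopBuffer.empty()) {
    gopBuffer.back().diffFrames.push_back(frame);
  }
}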

Now, here you've lost me. I don't know what a GOP is. I don't need forward or
backward play. I just need to spit out a discrete, autonomous ten-second clip.
Someone else will play it.

On Mon, Nov 3, 2014 at 4:02 PM, Mark Bondurant
<ma...@virtualguard.com> wrote:
Hello,

Sorry if this is a repeat, but I/we have constant email problems (political
issues), which I'm fairly sure I've now found a workaround for. If this is a
repeat, I apologize; I didn't get your responses.

I need to keep a constant 3-second buffer of an H264 video stream. It's for
security cameras. When something trips the camera, I replay the 3 seconds and
then 6 more to see the event (one hopes I'll catch some ghosts or mountain
lions, but really it's to catch car thieves!). With MPEG it was easy, because
MPEG has discrete frames, but the much higher-definition H264 doesn't. I mean
it does, but they're spread out over an indefinite series of NAL packets that
can contain various varieties of slices. Squishing them together into a
discrete frame is the problem.
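
(A minimal sketch of the kind of rolling pre-buffer meant here, assuming the
frames have already been reassembled into complete access units; the names and
the use of plain std containers are illustrative only:)

#include <deque>
#include <vector>

// One fully reassembled frame and its timestamp, in seconds.
struct TimedFrame {
  double timestamp;
  std::vector<unsigned char> data;
};

std::deque<TimedFrame> preBuffer;

// Append a frame and drop anything older than `window` seconds (e.g. 3.0).
void addToPreBuffer(const TimedFrame& frame, double window) {
  preBuffer.push_back(frame);
  while (!preBuffer.empty() && frame.timestamp - preBuffer.front().timestamp > window)
    preBuffer.pop_front();
}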

It seems to me that there are two different paradigms at work in live555:
modules that derive from Medium, and modules that derive from MediaSource. One
seems to clock out of the env and the other out of the session, and they don't
fit together with each other.

I need RTSPClient, which derives from Medium, to interface with the camera. It
understands the RTSP DESCRIBE and SETUP responses, but it only fits with its
FileSinks. I need the H264DiscreteFramer filter because it understands how to
gather NAL packets together into frames. Unfortunately, the discrete framer
wants its input from an input source that derives from MediaSource, not
Medium, which is what RTSPClient derives from. RTSPClient doesn't fit together
with H264DiscreteFramer! (clunk, clunk, me trying to squish them together).
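
(For concreteness, the mismatch being described: H264VideoStreamDiscreteFramer's
factory takes a FramedSource* input, i.e. something derived from MediaSource,
so a Medium such as RTSPClient can't be handed to it directly. Sketch only:)

#include "H264VideoStreamDiscreteFramer.hh"

// The framer is created from a FramedSource* input.
FramedSource* makeFramer(UsageEnvironment& env, FramedSource* input) {
  return H264VideoStreamDiscreteFramer::createNew(env, input);
}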

When things don't fit, I think that I'm missing something important. So what 
I'm asking for is a clue. What am I missing?

Mark

_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
