On Sep 5, 2014, at 1:14 AM, Fabrice Triboix <fabri...@ovation.co.uk> wrote:

> You're thinking about this the wrong way.  "doGetNextFrame()" gets called 
> automatically (by the downstream, 'transmitting' object) whenever it needs a 
> new NAL unit to transmit.  So you should just deliver the next NAL unit (just 
> one!) whenever "doGetNextFrame()" is called.  If your encoder can generate 
> more than one NAL unit at a time, then you'll need to enqueue them in some 
> way.
> [Fabrice] I would be interested in understanding a bit more here. Is live555 
> a pull model?

Yes.

> How does the transmitting object know when to send the next frame? Who/what 
> decides to call doGetNextFrame() and when?

The transmitting object (a "MultiFramedRTPSink" subclass) uses the frame 
duration parameter ("fDurationInMicroseconds") to figure out how long to wait - 
after transmitting an RTP packet - before requesting another 'frame' from the 
upstream object.  (I put 'frame' in quotes here, because - for H.264 streaming 
- the piece of data being delivered is actually an H.264 NAL unit.)  If 
"fDurationInMicroseconds" is 0 (its default value), then the transmitting 
object will request another 'frame' immediately after transmitting an RTP 
packet.  If data is being delivered from a live encoder - as in your case - 
then that's OK, because the encoder won't actually deliver data until it 
becomes available.

That's why you don't need to set "fDurationInMicroseconds" if your data comes 
from a live source.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel