Hi Ross,

I am still somewhat confused.
I am setting the parameter fDurationInMicroseconds in the deliverFrame() 
function of my AudioEncSource class before calling 
FramedSource::afterGetting(this). Could you point me to where in your code it 
is actually used to decide when to make the next call?

As you mentioned, your code *sets* it anyway at line 144 of 
MPEG1or2AudioStreamFramer::continueReadProcessing(), so whatever value I set 
in my class does not seem to matter at all.
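
My mental model of what should happen downstream is roughly the following 
(this is only a sketch of the idea, not your actual MultiFramedRTPSink code; 
MySink, sendPacket() and requestNextFrame() are made-up names for 
illustration):

void MySink::afterGettingFrame(unsigned frameSize, unsigned /*numTruncatedBytes*/,
                               struct timeval presentationTime,
                               unsigned durationInMicroseconds) {
  sendPacket(frameSize, presentationTime); // deliver the frame that just arrived

  // Wait for the frame's duration before asking the source for the next one.
  // If durationInMicroseconds is 0 (or gets overwritten), the next request
  // happens immediately -- which is what I suspect is going on in my case.
  envir().taskScheduler().scheduleDelayedTask(durationInMicroseconds,
      (TaskFunc*)requestNextFrame, this);
}

Is that roughly how the duration is meant to be used?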

Thanks,
Debargha.


> -----Original Message-----
> From: live-devel-boun...@ns.live555.com
> [mailto:live-devel-boun...@ns.live555.com] On Behalf Of Ross Finlayson
> Sent: Thursday, March 26, 2009 6:00 PM
> To: LIVE555 Streaming Media - development & use
> Subject: Re: [Live-devel] Framed Source Issue
>
> >Also it seems to me that the MPEG1or2AudioStreamFramer class does
> >not use the fDurationInMicroseconds parameter at all.
>
> Yes it does, it *sets* this parameter (see line 144).
>
> Once again, look at the parameters that are passed to each call to
> "void MultiFramedRTPSink::afterGettingFrame()".  (These parameters
> include the frame size and duration; the 'duration' is what our code
> uses to decide when to ask for the next frame of data.)  This should
> help tell you what's going wrong.
> --
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/