> And the AudioFrameSource main code is like this:
> void AudioFrameSource::doGetNextFrame()
> {
>     CamerManager::GetInstance()->GetAudioFrame("test", (char*)fTo, fMaxSize, &fFrameSize, &fNumTruncatedBytes);
> fPresentationTime.tv_sec=usec/1000;
> fPresentationTime.tv_usec = usec%1000;
> usec+=200;

Why do you have a variable named "usec", when its value appears to be in 
milliseconds, not microseconds?

But anyway, assuming that "usec" is really intended to be a value in 
milliseconds, you need to be aware of the following:

1/ The very first value of "usec" - i.e., the value that's used when 
"doGetNextFrame()" is called for the first time - must be aligned with 'wall 
clock' time - i.e., the time that you'd get by calling "gettimeofday()".  This 
is important for RTCP-based timing to work properly.
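
A minimal sketch of this first point (illustration only, not code from LIVE555; the "fFirstTime" flag and "seedPresentationTime()" helper are hypothetical names):

```cpp
#include <sys/time.h>

static struct timeval fPresentationTime;
static bool fFirstTime = true; // hypothetical flag: true until the first frame

// On the very first frame, seed the presentation time from the wall clock,
// so that RTCP can map RTP timestamps to absolute time. Subsequent frames
// should advance fPresentationTime by each frame's duration instead.
void seedPresentationTime() {
    if (fFirstTime) {
        gettimeofday(&fPresentationTime, nullptr);
        fFirstTime = false;
    }
}
```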

2/ Because you have specified a sampling frequency of 8000 samples per second, 
the length of time that you increment "usec" by each time that 
"GetAudioFrame()" is called needs to correspond to the number of samples that 
"GetAudioFrame()" has returned.  The value you are currently incrementing 
"usec" by - 200 ms - is almost certainly wrong, because 200 ms corresponds to 
8000*0.2 == 1600 audio samples, which will be too big for an outgoing RTP 
packet (assuming default packet size settings).  (Note that each 1-channel 
u-law audio sample will take up 1 byte.)
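
The second point can be sketched like this (an illustrative helper, not LIVE555 code; "advancePresentationTime()" is a hypothetical name). Because each 1-channel u-law sample is 1 byte, a frame of fFrameSize bytes at 8000 Hz lasts fFrameSize*1000000/8000 microseconds:

```cpp
#include <sys/time.h>

// Advance the presentation time by the duration of the frame just delivered.
// At 8000 samples/sec, 1 byte per u-law sample, the frame duration in
// microseconds is fFrameSize * 1000000 / 8000.
void advancePresentationTime(struct timeval& pt, unsigned fFrameSize) {
    unsigned uSeconds = pt.tv_usec + (fFrameSize * 1000000) / 8000;
    pt.tv_sec  += uSeconds / 1000000; // carry whole seconds
    pt.tv_usec  = uSeconds % 1000000; // keep the microsecond remainder
}
```

For example, a typical 160-sample (20 ms) frame advances the presentation time by 20000 microseconds, which fits comfortably in one RTP packet.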

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
