I did as you said, but VLC still says the input buffer is empty.
At 2013-10-26 12:05:12, "Ross Finlayson" wrote:
> nextTask() =
> envir().taskScheduler().scheduleDelayedTask(2,(TaskFunc*)FramedSource::afterGetting,
> this);
[...]
> nextTask() =
> envir().taskScheduler().scheduleDelayedTask(8000,(TaskFunc*)FramedSource::afterGetting,
> this);
This is wrong. Once you've delivered a frame of data [...]
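The reply is cut off in the archive. For reference, a minimal sketch of the delivery idiom from live555's "DeviceSource.cpp" model, which the correction above points toward: answer each doGetNextFrame() call with exactly one FramedSource::afterGetting() call, and express pacing through fDurationInMicroseconds rather than through extra delayed tasks. The 20 ms value assumes 8 kHz G.711, where a 160-byte buffer holds 20 ms of audio.

    // One afterGetting() per doGetNextFrame(), with no artificial delay:
    void AudioFrameSource::doGetNextFrame() {
        // ... copy one 160-byte G.711 buffer into fTo (respecting fMaxSize)
        // and set fFrameSize = 160 ...
        gettimeofday(&fPresentationTime, NULL); // wall-clock stamp, for RTCP
        fDurationInMicroseconds = 20000;        // 160 samples @ 8 kHz = 20 ms
        FramedSource::afterGetting(this);       // deliver now, exactly once
    }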
But if I use VLC to play it, the audio's PTS is out of range, and the frames
get dropped. It works with ffplay, though.
I don't know where it goes wrong.
Could you help me check where it is wrong? This problem is driving me crazy.
Here is my code.
The audio is G.711; each buffer is 160 bytes.
void AudioFrameSource:: [...]
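The code is cut off here in the archive. A plausible sketch of a polled G.711 device source under the same class name (haveFrameReady(), deliverFrame(), and pollForFrame() are hypothetical helpers added for this sketch; nextTask() and isCurrentlyAwaitingData() are inherited live555 members):

    void AudioFrameSource::doGetNextFrame() {
        if (!haveFrameReady()) {    // hypothetical availability check
            // No data yet: re-poll shortly instead of calling afterGetting().
            // Note that scheduleDelayedTask() takes MICROseconds.
            nextTask() = envir().taskScheduler().scheduleDelayedTask(
                1000, (TaskFunc*)pollForFrame, this); // retry in 1 ms
            return;
        }
        deliverFrame(); // copies the frame, stamps it, calls afterGetting()
    }

    void AudioFrameSource::pollForFrame(void* clientData) {
        AudioFrameSource* src = (AudioFrameSource*)clientData;
        if (src->isCurrentlyAwaitingData()) src->doGetNextFrame();
    }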
> I saw the FAQ on the live555 website.
> It says live555 synchronizes the audio and video via RTCP SR packets.
> So should I create an RTCP instance for each RTP source explicitly?
No, because (assuming that you are controlling the streaming using RTSP) this
is done implicitly.
(In the RTSP server, this [...]
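The parenthetical is truncated, but the implicit behavior it refers to can be seen in a minimal server program: each OnDemandServerMediaSubsession subclass internally creates an RTPSink and an RTCPInstance for every client session, so the main program never constructs an RTCPInstance itself. MyAudioSubsession and MyVideoSubsession below stand in for whatever subsession subclasses wrap the audio and video sources.

    #include <liveMedia.hh>
    #include <BasicUsageEnvironment.hh>

    int main() {
        TaskScheduler* scheduler = BasicTaskScheduler::createNew();
        UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

        RTSPServer* server = RTSPServer::createNew(*env, 8554);
        if (server == NULL) return 1;

        ServerMediaSession* sms =
            ServerMediaSession::createNew(*env, "av", "av", "audio+video");
        // No explicit RTCPInstance anywhere: the subsessions create one
        // (alongside the RTPSink) per client session, and its SR packets
        // carry the wall-clock mapping used for A/V sync.
        sms->addSubsession(MyAudioSubsession::createNew(*env));
        sms->addSubsession(MyVideoSubsession::createNew(*env));
        server->addServerMediaSession(sms);

        env->taskScheduler().doEventLoop(); // does not return
        return 0;
    }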
I saw the FAQ on the live555 website.
It says live555 synchronizes the audio and video via RTCP SR packets.
So should I create an RTCP instance for each RTP source explicitly?
At 2013-10-24 05:48:40, "Ross Finlayson" wrote:
So how should I set the duration so that the audio and video will be in sync?
You don't. [...]
> So how should I set the duration so that the audio and video will be in sync?
You don't. The way you ensure that your audio and video streams are in sync is
by having your server give each (audio and video) frame accurate presentation
times - i.e., in the setting of "fPresentationTime" by the (audio and video) [...]
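The message breaks off here; the point above is that synchronization comes from accurate "fPresentationTime" values on both streams, not from tuning task delays. One common pattern (an illustration only; fFirstFrame is a hypothetical flag added for this sketch): anchor the first frame to the wall clock with gettimeofday(), then advance by exactly one frame duration, using the same clock in the audio and video sources so the RTCP SRs can map both onto one timeline.

    void AudioFrameSource::stampFrame() {
        if (fFirstFrame) {
            gettimeofday(&fPresentationTime, NULL); // anchor to wall clock
            fFirstFrame = False;
        } else {                                // one 20 ms G.711 frame later
            fPresentationTime.tv_usec += 20000;
            fPresentationTime.tv_sec  += fPresentationTime.tv_usec / 1000000;
            fPresentationTime.tv_usec %= 1000000;
        }
    }

The video source would do the same with a 40000 us step (25 fps), against the same clock.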
Thanks for your answer.
There is another question, about scheduleDelayedTask(duration, x, x).
So how should I set the duration so that the audio and video will be in sync?
Currently, each of my audio frames is 2 ms, and the video frame rate is 25 fps.
Now I set the audio's next getframe time to 2 ms, and the video [...]
> In the videoframesource's getnextframe, the buffer may be a single NALU,
> not a complete frame. So fPresentationTime and fDurationInMicroseconds
> should only be set when the buffer is the last NALU of the current frame.
> Is that right?
Not quite. "fPresentationTime" should be set for every NAL unit [...]
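The answer is truncated, but it opens by saying that "fPresentationTime" is set for every NAL unit. A sketch under that reading, with the additional (assumed, commonly given) rule that fDurationInMicroseconds stays 0 for all but the frame's final NALU; the NALU struct and its fields are hypothetical:

    struct NALU {                      // hypothetical container for one NAL unit
        unsigned char* data;
        unsigned size;
        struct timeval frameTimestamp; // capture time of its access unit
        bool isLastNALUOfFrame;
    };

    void VideoFrameSource::deliverNALU(NALU const& nalu) {
        if (nalu.size > fMaxSize) return;        // truncation handling omitted
        memmove(fTo, nalu.data, nalu.size);
        fFrameSize = nalu.size;
        fPresentationTime = nalu.frameTimestamp; // same value for every NALU
                                                 // of the same video frame
        fDurationInMicroseconds =
            nalu.isLastNALUOfFrame ? 40000 : 0;  // 40 ms per frame @ 25 fps
        FramedSource::afterGetting(this);
    }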
Hi,
In the videoframesource's getnextframe, the buffer may be a single NALU,
not a complete frame. So fPresentationTime and fDurationInMicroseconds
should only be set when the buffer is the last NALU of the current frame.
Is that right?