However, audio still gets called much more often than it needs to as
per the fDurationInMicroseconds parameter.
Remember, You Have Complete Source Code.
I have told you *repeatedly* what you need to look at to figure out
why your problem is happening. Why do you keep ignoring my advice??
> From: live-devel-boun...@ns.live555.com
> [mailto:live-devel-boun...@ns.live555.com] On Behalf Of Ross Finlayson
> Sent: Friday, March 27, 2009 3:41 PM
> To: LIVE555 Streaming Media - development & use
> Subject: Re: [Live-devel] Framed Source Issue

On Mar 27, 2009, at 3:22 PM, "Mukherjee, Debargha" wrote:
Thanks. How about the MPEG4 video? I am currently encoding video
frames into MPEG4 and then using the MPEG4VideoStreamFramer class
before feeding into MPEG4ESVideoRTPSink. Is that correct?
No. Because your input source delivers discrete MPEG-4 frames, one at
a time, rather than a continuous byte stream, you should be using
"MPEG4VideoStreamDiscreteFramer" instead of "MPEG4VideoStreamFramer".
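A minimal sketch of the resulting chain (assuming your encoder-fed
source class is named "VideoEncSource", that a UsageEnvironment "env"
and a Groupsock "rtpGroupsock" already exist, and that "afterPlaying"
is your completion handler):

    #include "liveMedia.hh"

    // "VideoEncSource" is your own FramedSource subclass.
    VideoEncSource* encSource = VideoEncSource::createNew(env);

    // The *Discrete* framer expects one complete MPEG-4 frame per
    // doGetNextFrame() delivery, which matches an encoder source:
    MPEG4VideoStreamDiscreteFramer* framer
      = MPEG4VideoStreamDiscreteFramer::createNew(env, encSource);

    // 96 is a dynamic RTP payload type:
    MPEG4ESVideoRTPSink* videoSink
      = MPEG4ESVideoRTPSink::createNew(env, &rtpGroupsock, 96);
    videoSink->startPlaying(*framer, afterPlaying, videoSink);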
> From: live-devel-boun...@ns.live555.com
> [mailto:live-devel-boun...@ns.live555.com] On Behalf Of Ross Finlayson
> Sent: Friday, March 27, 2009 2:42 PM
> To: LIVE555 Streaming Media - development & use
> Subject: Re: [Live-devel] Framed Source Issue

On Mar 27, 2009, at 9:28 AM, "Mukherjee, Debargha" wrote:
Hi Ross,
I am still somewhat confused.
The parameter fDurationInMicroseconds is being set correctly by me in
the deliverFrame() function of my AudioEncSource class before the call
to FramedSource::afterGetting(this). Could you point out what I might
be missing?
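For reference, the relevant part looks like this (a sketch modeled on
DeviceSource.cpp; encodedData(), encodedSize(), and frameDurationUs()
are my own helpers around the shared buffer):

    void AudioEncSource::deliverFrame() {
      if (!isCurrentlyAwaitingData()) return; // the sink hasn't asked yet

      // Truncate if the encoded frame is larger than the sink's buffer:
      unsigned newFrameSize = encodedSize();
      if (newFrameSize > fMaxSize) {
        fFrameSize = fMaxSize;
        fNumTruncatedBytes = newFrameSize - fMaxSize;
      } else {
        fFrameSize = newFrameSize;
      }

      gettimeofday(&fPresentationTime, NULL);
      fDurationInMicroseconds = frameDurationUs(); // 26122 for my audio
      memmove(fTo, encodedData(), fFrameSize);

      // Must be done last; it may cause us to be called again:
      FramedSource::afterGetting(this);
    }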
> From: live-devel-boun...@ns.live555.com
> [mailto:live-devel-boun...@ns.live555.com] On Behalf Of Ross Finlayson
> Sent: Thursday, March 26, 2009 6:00 PM
> To: LIVE555 Streaming Media - development & use
> Subject: Re: [Live-devel] Framed Source Issue
>
Also it seems to me that the MPEG1or2AudioStreamFramer class does
not use the fDurationInMicroseconds parameter at all.
Yes it does, it *sets* this parameter (see line 144).
Once again, look at the parameters that are passed to each call to
"void MultiFramedRTPSink::afterGettingFrame()". (The
"durationInMicroseconds" parameter, in particular.)
-----Original Message-----
> From: live-devel-boun...@ns.live555.com
> [mailto:live-devel-boun...@ns.live555.com] On Behalf Of Ross Finlayson
> Sent: Thursday, March 19, 2009 4:58 PM
> To: LIVE555 Streaming Media - development & use
> Subject: Re: [Live-devel] Framed Source Issue
>
>
First, is it possible to explain using
TaskScheduler::scheduleDelayedTask(), as you suggested, a little
better?
In your implementation of "doGetNextFrame()", if no input data is
currently available to be delivered to the downstream object, then you
could just call "TaskScheduler::scheduleDelayedTask()" to arrange for
delivery to be reattempted a short time later.
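A minimal sketch of that idea (the "inputIsAvailable()" test and the
10 ms retry interval are placeholders for your own logic; "retryLater"
is declared as a static member function of the class):

    void AudioEncSource::doGetNextFrame() {
      if (!inputIsAvailable()) {
        // No data yet: return to the event loop and retry shortly,
        // rather than blocking inside this call:
        envir().taskScheduler().scheduleDelayedTask(
            10000 /*microseconds*/,
            (TaskFunc*)AudioEncSource::retryLater, this);
        return;
      }
      deliverFrame();
    }

    // Static, so it can be handed to the scheduler as a plain TaskFunc:
    void AudioEncSource::retryLater(void* clientData) {
      ((AudioEncSource*)clientData)->doGetNextFrame();
    }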
Thanks Ross for the pointers.
First, is it possible to explain using TaskScheduler::scheduleDelayedTask(), as
you suggested, a little better?
Second, I also suspect that the freezing issue has to do with the
timestamps and the duration.
I am setting the duration in microsecs as 26122 (for 44.1 kHz MPEG
audio).
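(For reference, that figure is the per-frame duration implied by the
1152-sample MPEG audio frame size:)

    // 1152 samples per MPEG audio frame at a 44100 Hz sample rate:
    unsigned durationUs
      = (unsigned)(1152 * 1000000.0 / 44100.0 + 0.5); // = 26122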
I am having an issue on the streaming-out side. The audio and video
encoders read raw data from shared buffers using two derived
FramedSource classes modeled after DeviceSource.cpp. The
deliverFrame() function in these derived classes reads raw audio or
video from the respective shared buffer, encodes it, and delivers the
encoded frame downstream.
Hi,
I am developing a multi-threaded application using the live libraries
that receives audio and video from several Axis IP cameras concurrently
as RTSP clients, decodes into raw, processes the data, creates a single
outbound audio/video stream, re-encodes, and streams it out to a remote
receiver.