OK, then I think my mistake was to presume those timestamps were adjusted for
the server's timezone settings. In fact, they must be UTC values, with no
way to determine the timezone they originated from. In retrospect - duh,
that's how you'd expect timestamps to work, and that's probably already […]
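
A minimal sketch of that interpretation, assuming the presentation time
arrives as the struct timeval that Live555's afterGettingFrame()-style
callbacks supply: the value is seconds/microseconds since the Unix epoch in
UTC, so gmtime() is the faithful way to display it, and any timezone shown
is necessarily the receiver's own.

#include <cstdio>
#include <ctime>
#include <sys/time.h>

// Display a Live555-style presentation time. The timeval is UTC seconds/
// microseconds since the Unix epoch; no originating timezone is recoverable.
void printPresentationTimeUTC(const struct timeval& pt) {
  time_t seconds = pt.tv_sec;
  struct tm utc;
  gmtime_r(&seconds, &utc);  // interpret in UTC, not the receiver's zone
  char buf[32];
  strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &utc);
  std::printf("%s.%06u UTC\n", buf, (unsigned)pt.tv_usec);
}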
I had been going under the assumption that I could interpret the
presentation times in terms of the server's wall clock. I just realized
that, in fact, the presentation time appears to have been converted to the
client's local time, i.e. accounting for time zone, etc. Is this the case,
and if so, […]
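
One detail worth checking here, sketched under the assumption that the sink
keeps a reference to its MediaSubsession the way testRTSPClient's DummySink
does: before the first RTCP sender report arrives, Live555 presentation
times are derived from the receiver's own clock, and only afterwards from
the server's; RTPSource::hasBeenSynchronizedUsingRTCP() distinguishes the
two cases.

#include "liveMedia.hh"

// In a frame handler: has this subsession's presentation time been locked
// to the server's clock via RTCP yet? Until it has, the times are local
// estimates and may jump when the first sender report arrives.
bool isSynchronized(MediaSubsession& subsession) {
  RTPSource* src = subsession.rtpSource();
  return src != NULL && src->hasBeenSynchronizedUsingRTCP();
}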
[…] PLAY command has occurred, so YMMV.
Thanks,
Jesse
On Tue, Feb 19, 2013 at 8:35 AM, Jesse Hemingway <jesse.heming...@nerdery.com> wrote:
I see - thank you. That was a bit of a sanity check. In our case, we need
to keep server-to-client latency low and predictable. Our use case is
indeed like the exception you mention, with multiple receivers in the same
'room' that need to stay synced, i.e. having similar requirements as VoIP
apps. […]
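
Live555 itself doesn't schedule playout, but a common approach to this
multiple-receivers-in-one-room requirement (my generalization, not something
stated in this thread) is a fixed playout delay: every receiver presents
each frame at presentationTime + D against NTP-disciplined clocks. A sketch,
with D = 500 ms as an arbitrary example value:

#include <sys/time.h>

// How long to hold a frame so that all receivers (sharing NTP-disciplined
// clocks and RTCP-synchronized presentation times) display it at
// presentationTime + fixedDelayUs. Returns 0 if the target has passed.
long playoutDelayUs(const struct timeval& presentationTime,
                    long fixedDelayUs /* e.g. 500000 = 500 ms */) {
  struct timeval now;
  gettimeofday(&now, NULL);
  long elapsedUs = (now.tv_sec - presentationTime.tv_sec) * 1000000L
                 + (now.tv_usec - presentationTime.tv_usec);
  long waitUs = fixedDelayUs - elapsedUs;
  return waitUs > 0 ? waitUs : 0;
}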
Sorry if this is a stupid question, but I can't fully understand how
Live555's presentation times are to be applied, *aside* from synchronizing
parallel media streams. Is there any way to use these presentation times
to determine the receiver's time offset from the server? In my case, I'm
trying […]
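
For what it's worth, the naive version of that offset measurement looks like
the sketch below. The caveat: once presentation times are RTCP-synchronized,
receiver-now minus presentation time mixes true clock offset with
encode-plus-network delay, and the two can't be separated without an
independent mechanism such as NTP.

#include <sys/time.h>

// Naive receiver-vs-server offset estimate, in microseconds. Only
// meaningful after RTCP synchronization, and the result still includes
// the one-way encode + network delay, inseparable from real clock skew.
long estimateOffsetUs(const struct timeval& presentationTime) {
  struct timeval now;
  gettimeofday(&now, NULL);
  return (now.tv_sec - presentationTime.tv_sec) * 1000000L
       + (now.tv_usec - presentationTime.tv_usec);
}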
Hi Ross,
I've seen threads in which you state that Live555 does not offer a jitter
buffer, and I've also seen posts (in which you weren't involved) that claim
it does. I'm fairly certain that your logic is reordering packets so that
frames are delivered to a media sink in presentation order […]
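
What Live555 does provide is a short packet-reordering window on the RTP
source rather than a playout jitter buffer; the window (100 ms by default,
if memory serves) is adjustable, as in this sketch:

#include "liveMedia.hh"

// Tune how long Live555 waits for out-of-order RTP packets. This governs
// reordering only; it is not a playout/jitter buffer. The standard sources
// (H264VideoRTPSource etc.) all derive from MultiFramedRTPSource.
void tuneReorderingWindow(MediaSubsession* subsession) {
  MultiFramedRTPSource* src = (MultiFramedRTPSource*)subsession->rtpSource();
  if (src != NULL) {
    src->setPacketReorderingThresholdTime(30000);  // 30 ms, for lower latency
  }
}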
Jeff --
> A) Is it really necessary to collect all successive I-frames to send them
> all at once to avcodec_decode_video2(), or might this indicate some other,
> larger issue? If I don't collect them all, only one fraction of the image
> is clear at a time, with the rest of it totally blurred.
>
> […] or these sequences? It does seem odd to me that an encoder would
> generate these by default.
>
> Chris Richardson
> WTI
>
> From: live-devel-boun...@ns.live555.com [mailto:live-devel-boun...@ns.live555.com] On Behalf Of […]
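
On question A): the one-clear-strip symptom is what you'd expect when the
encoder splits each picture into several slice NALs and the slices reach the
decoder one call at a time; avcodec_decode_video2() wants a complete frame
per AVPacket (unless the codec is opened in truncated mode), so collecting
the slices of one access unit is the normal fix. A sketch, with the
surrounding assembly code assumed rather than taken from the thread:

extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdint>
#include <vector>

// Decode one complete access unit (all Annex-B framed slice NALs of one
// picture) in a single call. Feeding slices individually is what produces
// the "one fraction clear, rest blurred" symptom.
bool decodeAccessUnit(AVCodecContext* ctx, AVFrame* frame,
                      std::vector<uint8_t>& accessUnit) {
  AVPacket pkt;
  av_init_packet(&pkt);
  pkt.data = accessUnit.data();
  pkt.size = (int)accessUnit.size();
  int gotPicture = 0;
  int used = avcodec_decode_video2(ctx, frame, &gotPicture, &pkt);
  return used >= 0 && gotPicture != 0;
}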
[…] facts?
-Jesse
On Mon, Feb 11, 2013 at 10:10 AM, Jesse Hemingway <jesse.heming...@nerdery.com> wrote:
> Thanks Jeff,
>
> I tried out your suggestion of caching and passing the 7,8 NAL units
> (SPS/PPS) before every keyframe, so my sequence looked like
> 7,8,5,7,8,5,7,8,5,1,1,1,1,1,1,7,8,5 […]
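
A sketch of that caching scheme, assuming one NAL unit arrives per delivery;
the class and function names here are illustrative, not from the code under
discussion. (In H.264, nal_unit_type 7 = SPS, 8 = PPS, 5 = IDR slice,
1 = non-IDR slice.)

#include <cstdint>
#include <vector>

// Cache the latest SPS (type 7) and PPS (type 8) NAL units, and prepend
// them (with Annex-B start codes) to every IDR slice (type 5) before the
// result is handed to the decoder.
class SpsPpsCache {
  std::vector<uint8_t> sps_, pps_;
  static void append(std::vector<uint8_t>& out, const uint8_t* nal, size_t len) {
    static const uint8_t startCode[4] = {0, 0, 0, 1};
    out.insert(out.end(), startCode, startCode + 4);
    out.insert(out.end(), nal, nal + len);
  }
public:
  // Returns the bytes to feed the decoder for this NAL (empty when the
  // NAL was parameter data that is only cached).
  std::vector<uint8_t> onNal(const uint8_t* nal, size_t len) {
    std::vector<uint8_t> out;
    switch (nal[0] & 0x1F) {                        // H.264 nal_unit_type
      case 7: sps_.assign(nal, nal + len); break;   // SPS: cache it
      case 8: pps_.assign(nal, nal + len); break;   // PPS: cache it
      case 5:                                       // IDR slice: prepend 7,8
        if (!sps_.empty()) append(out, sps_.data(), sps_.size());
        if (!pps_.empty()) append(out, pps_.data(), pps_.size());
        append(out, nal, len);
        break;
      default: append(out, nal, len); break;        // everything else as-is
    }
    return out;
  }
};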
---
> From: live-devel-boun...@ns.live555.com [live-devel-boun...@ns.live555.com]
> on behalf of Jesse Hemingway [jhemi...@nerdery.com]
> Sent: Friday, February 08, 2013 8:36 PM
> To: LIVE555 Streaming Media - development & use
> Cc: LIV […]
> […] the encoder side?
> Is there a reason for generating 7, 8, 1, 1, 1 … 5?
>
> Chris Richardson
> WTI
>
> From: live-devel-boun...@ns.live555.com [mailto:live-devel-boun...@ns.live555.com] On Behalf Of Jesse Hemingway
> Sent: Friday, February 08, 2013 3:06 PM
>
Hello,
I apologize if this is noise - my question may well have nothing to do with
Live555, but I thought I'd post here in case anyone can help me rule it
out. It appears I'm successfully consuming H.264 via RTSP and acquiring
frames in my MediaSink.
Next, I set up ffmpeg's decoder with the SPS […]
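
A sketch of that setup step, assuming the SPS/PPS come from the subsession's
sprop-parameter-sets SDP attribute: Live555's parseSPropParameterSets()
decodes the attribute, and ffmpeg's H.264 decoder accepts the result as
Annex-B extradata (constant names match the 2013-era ffmpeg API used in this
thread).

#include "liveMedia.hh"
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>
}
#include <cstring>
#include <vector>

// Build start-code-framed SPS/PPS extradata for the H.264 decoder from
// the subsession's sprop-parameter-sets attribute.
bool setupExtradata(AVCodecContext* ctx, MediaSubsession& subsession) {
  unsigned numRecords = 0;
  SPropRecord* records =
      parseSPropParameterSets(subsession.fmtp_spropparametersets(), numRecords);
  if (records == NULL) return false;

  std::vector<uint8_t> extradata;
  static const uint8_t startCode[4] = {0, 0, 0, 1};
  for (unsigned i = 0; i < numRecords; ++i) {
    extradata.insert(extradata.end(), startCode, startCode + 4);
    extradata.insert(extradata.end(), records[i].sPropBytes,
                     records[i].sPropBytes + records[i].sPropLength);
  }
  delete[] records;

  ctx->extradata = (uint8_t*)av_mallocz(extradata.size() + FF_INPUT_BUFFER_PADDING_SIZE);
  if (ctx->extradata == NULL) return false;
  std::memcpy(ctx->extradata, extradata.data(), extradata.size());
  ctx->extradata_size = (int)extradata.size();
  return true;
}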
Thanks Ross,
> Based on the testRTSPClient example, I've gotten a stable RTSP connection
> working on my target platform; in this case to a video+audio RTSP source.
> But now I'm struggling to figure out the next step. My custom MediaSink
> classes do not receive any frame data via afterGettingFrame() […]
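
The usual culprit there, sketched along the lines of testRTSPClient's
DummySink (MySink and its buffer size are illustrative): the sink must
request each frame itself in continuePlaying(), and the afterGettingFrame()
callback must re-arm that request, or delivery stops after one frame - or
never starts.

#include "liveMedia.hh"

class MySink : public MediaSink {
public:
  MySink(UsageEnvironment& env, unsigned bufferSize)
    : MediaSink(env), fBufferSize(bufferSize) { fBuffer = new u_int8_t[bufferSize]; }
  virtual ~MySink() { delete[] fBuffer; }

private:
  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds) {
    MySink* sink = (MySink*)clientData;
    // ...consume sink->fBuffer[0 .. frameSize)...
    sink->continuePlaying();  // re-arm, or no further frames are delivered
  }

  virtual Boolean continuePlaying() {
    if (fSource == NULL) return False;
    fSource->getNextFrame(fBuffer, fBufferSize,
                          afterGettingFrame, this,
                          onSourceClosure, this);
    return True;
  }

  u_int8_t* fBuffer;   // receive buffer, owned by the sink
  unsigned fBufferSize;
};

Note also that nothing flows until startPlaying() has been called on the
sink, e.g. subsession->sink->startPlaying(*subsession->readSource(), NULL, NULL);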
Apologies in advance for the following deluge. I'm new to Live555, RTP and
RTSP in general, and am trying to gather resources to understand how to
consume video+audio streams.
Based on the testRTSPClient example, I've gotten a stable RTSP connection
working on my target platform; in this case to a video+audio RTSP source. […]
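
For orientation, the step testRTSPClient performs after setup, sketched here
with the hypothetical MySink from the earlier excerpt: iterate the
MediaSession's subsessions and attach a sink to each one that was
successfully set up.

#include "liveMedia.hh"

// After DESCRIBE/SETUP have produced a MediaSession, attach a sink to each
// subsession and start consuming. Error handling trimmed for brevity.
void attachSinks(UsageEnvironment& env, MediaSession* session) {
  MediaSubsessionIterator iter(*session);
  MediaSubsession* subsession;
  while ((subsession = iter.next()) != NULL) {
    if (subsession->readSource() == NULL) continue;  // SETUP failed/skipped
    // "video"/"audio" come from the SDP media descriptions.
    env << "Attaching sink to " << subsession->mediumName()
        << "/" << subsession->codecName() << "\n";
    subsession->sink = new MySink(env, 100000 /* example buffer size */);
    subsession->sink->startPlaying(*subsession->readSource(),
                                   NULL /* afterPlaying */, NULL);
  }
}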