I am looking at the source testProgs/playCommon.cpp, in particular
the shutdown() and afterPlaying() functions.
Does calling Medium::close(subsession->sink) also call the subsession
destructor? That's what the comment in subsessionAfterPlaying()
suggests.
That comment is perhaps a bit misleading.
>Is there a built-in assumption that the RTCP socket is blocking?
No. Both incoming RTCP and incoming RTP packets are read
asynchronously, from the event loop, so their sockets don't need to
be blocking.
>If I just change the code to make it non-blocking, will there be any
>ill effect on th…
I set a breakpoint on MediaSubsession::~MediaSubsession() …
I am having an extremely occasional hang of a live555-based linux rtsp
server under heavy load. I have induced a core dump to see where the
hang occurs. It seems that we hang waiting for a packet on the RTCP
socket. The RTCP socket does not appear to be set to non-blocking. Now,
at first glance …
Ross Finlayson wrote:
No, that's not correct. The RTSP server implementation's 'liveness
check' timer gets rescheduled only after the receipt of an incoming
*RTCP packet* (or an incoming RTSP command) - not on every (or any)
outgoing packet.
Ah good, that makes a great deal more sense.
How …
>Studying the performance of my own epoll()-based scheduler, I strongly
>suspect that the far bigger source of inefficiency is the DelayQueue
>implementation that BasicTaskScheduler0 uses. This queue is a linked
>list, which makes adding and deleting timers an O(n) operation, and
>that happens a lot. If I underst…
A couple of months ago, there was a discussion of the performance of the
live555 libraries on Linux, and the discussion turned to the efficiency
of select() vs. epoll().
Studying the performance of my own epoll()-based scheduler, I strongly
suspect that the far bigger source of inefficiency is the DelayQueue …
>So I tried to use your RTSP server but it didn't change the situation.
>My streams are still drifting. I investigated the problem further and
>found out that live555 queries my sound source much more often than it
>queries the video source. I suspect it has something to do with data
>size …
No, th…
Hello again,
I'm referring to the following mail:
I'm working on an application that streams live generated content (audio
and video) using the Darwin Streaming Server. I decided to use ffmpeg
for the encoding part and live555 for the streaming part. The basic
architecture is as follows: In a …
UDP checksumming is done by the OS or network adaptor, and has
absolutely nothing to do with the "LIVE555 Streaming Media" code,
which runs above all of this.
--
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
this "error" can also occur when the checksum process is offloaded
and performed by the network adapter ...
On Fri, 2007-06-01 at 13:05 +0200, Julian Lamberty wrote:
> Sorry, my fault ;) You should not run wireshark on the same computer
> that sends the packets...
Hi!
Since my transcoder now transcodes, I have a new problem:
I deliver complete MPEG4 Frames to MPEG4VideoStreamDiscreteFramer
followed by MPEG4ESVideoRTPSink.
But all the packets sent have a wrong UDP checksum. Wireshark reports
that a lot of subsequent UDP packets have the SAME checksum. For …
Here's the problem:
>Opened URL "rtsp://172.24.141.104:554/Video/edit.mpg", returning a SDP
>description:
>v=0
>o=- 3389626461 0 IN IP4 0.0.0.0
>s=Video RTSP Server
>t=3389626461 0
>m=video 0 RTP/AVP 33
>a=rtpmap:33 H264/9
>a=control:rtsp://172.24.141.104:554/Video/edit.mpg
>a=range:npt=0.0-14…