As you presumed, I run just a single program, and we use another tiny RTSP client
to generate the heavy load (it only streams, it does not decode).
Regarding the socket descriptor limitation,
I have already increased it to 1024. Before doing this, we stopped at 32 clients
(32 * 2 sockets: RTP and RTCP).
The strange thing is
Why
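(A side note on that 32-client ceiling: at two sockets per client it is exactly 64 descriptors, which is Winsock's default fd_set capacity. A minimal sketch of raising the limit, assuming this is the Windows build and that the 1024 figure mentioned above is the intended new value; the macro must be redefined before any winsock header is included:)

// Sketch only: raise Winsock's per-fd_set capacity at compile time.
#define FD_SETSIZE 1024          // must come before <winsock2.h>
#include <winsock2.h>
#include <cstdio>

int main() {
  fd_set readSet;
  FD_ZERO(&readSet);
  // With the default capacity of 64, FD_SET() silently ignores descriptors once
  // the set is full -- which shows up as a hard cap of 32 RTP+RTCP clients.
  std::printf("fd_set capacity: %d sockets\n", FD_SETSIZE);
  return 0;
}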
> We encountered significant performance degradation when porting live555 from
> Linux to Windows,
> even though the Windows server has a more powerful hardware spec than the Linux one.
>
> Test scenario
> 1. Run live555MediaServer (not modified)
> 2. Connect client to server stream (H264, 720p, 2mbps, RTP over TCP)
We encountered significant performance degradation when porting live555 from
Linux to Windows,
even though the Windows server has a more powerful hardware spec than the Linux one.
Test scenario
1. Run live555MediaServer (not modified)
2. Connect client to server stream (H264, 720p, 2mbps, RTP over TCP)
3. Increas
On Jul 28, 2011, at 10:19 PM, xue wrote:
> /* The queued data should be processed here before getting a new frame from the
> source. This is very important for IP network cameras; the video latency will be
> 50 ms shorter than before. Zack */
> if (tv_timeToDelay.tv_sec == 0 && tv_timeToDelay.tv_usec == 0) {
>
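(A hypothetical reading of where that fragment sits, based on the stock BasicTaskScheduler::SingleStep() quoted further down in this thread: when the next queued alarm is already due, i.e. tv_timeToDelay is zero, service the delay queue before blocking in select(), so data that is already queued is handled before a new frame is fetched from the source. The sketch below only illustrates that idea; it is not Zack's actual patch.)

// Sketch only: assumes this sits inside BasicTaskScheduler::SingleStep(), after
// tv_timeToDelay has been filled in from fDelayQueue.timeToNextAlarm().
if (tv_timeToDelay.tv_sec == 0 && tv_timeToDelay.tv_usec == 0) {
  // The next delayed task is already due: run it now, before waiting in
  // select() and before pulling a new frame from the source.
  fDelayQueue.handleAlarm();
}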
On Jul 28, 2011, at 10:21 PM, xue wrote:
> Linux Epoll patch. Just for reference
Thanks. However, I won't (can't) make such a change to the released code,
because "epoll()" - unlike "select()" - is not portable across multiple OSs.
It's important to understand that the "BasicTaskScheduler" cla
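(Presumably the truncated point is the usual one: "BasicTaskScheduler" is just one concrete scheduler, selected by the application at startup, so an epoll-based replacement can live entirely in application code without changing the released library. A minimal sketch of that wiring, using standard live555 calls; the MyEpollTaskScheduler name is hypothetical:)

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
  // The scheduler is chosen here, once, by the application; the rest of the
  // library only sees the abstract TaskScheduler interface.
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  // A custom scheduler would be swapped in on the line above, e.g.
  // (hypothetical)  TaskScheduler* scheduler = MyEpollTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    return 1;
  }
  env->taskScheduler().doEventLoop();   // does not return
  return 0;
}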
Linux Epoll patch. Just for reference
/**
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 2.1 of the License, or (at your
option) any later version
void BasicTaskScheduler::SingleStep(unsigned maxDelayTime) {
  fd_set readSet = fReadSet; // make a copy for this select() call
  fd_set writeSet = fWriteSet; // ditto
  fd_set exceptionSet = fExceptionSet; // ditto

  DelayInterval const& timeToDelay = fDelayQueue.timeToNextAlarm();
  struct timeval tv_timeToDelay;
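(The attached patch itself isn't reproduced in the archive. As a rough illustration only, the core of an epoll()-based wait that would replace the select() call in SingleStep() could look like the sketch below; it assumes an epoll descriptor created once with epoll_create1() and sockets already registered via epoll_ctl(), and the waitForEvents name and the batch size of 64 are made up for the example.)

#include <sys/epoll.h>
#include <cerrno>
#include <cstdio>

// Sketch: one wait for socket readiness, playing the role of the select() call
// (and of the fd_set scan that follows it) in the stock SingleStep().
int waitForEvents(int epollFd, int timeoutMs) {
  epoll_event events[64];                          // hypothetical batch size
  int n = epoll_wait(epollFd, events, 64, timeoutMs);
  if (n < 0 && errno != EINTR) {
    std::perror("epoll_wait");
    return -1;
  }
  for (int i = 0; i < n; ++i) {
    // Here the scheduler would look up and invoke the handler registered for
    // this socket, as the HandlerSet iteration does in the select() version.
    std::printf("socket %d is ready (events=0x%x)\n",
                events[i].data.fd, events[i].events);
  }
  return n;
}

Unlike select(), the cost of the wait does not grow with the size of the descriptor set, although, as noted later in this thread, the DelayQueue walk can still dominate.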
First of all, congratulations on Live555.
However, I am not getting the performance I had hoped for. I am developing an embedded
application running under Linux 2.6.19 on a 260 MHz processor
(with 128 MB of DDR2 memory), and I would like to use liveMediaServer as the
RTSP server and multiple openRTSP instances as RTSP clients. (I have m
Hello Marc,
We use the liveMedia library in a server-side application, and we noticed the same
performance issue with the DelayQueue class. We tried to optimize it but couldn't
get good results. Would you mind sharing your optimized code?
Ross, I understand your point regarding embedded
Ross Finlayson wrote:
No, that's not correct. The RTSP server implementation's 'liveness
check' timer gets rescheduled only after the receipt of an incoming
*RTCP packet* (or an incoming RTSP command) - not on every (or any)
outgoing packet.
Ah good, that makes a great deal more sense.
How
>Studying the performance of my own epoll()-based scheduler, I strongly
>suspect that the far bigger source of inefficiency is the DelayQueue
>implementation that BasicTaskScheduler0 uses. This queue is a linked
>list, causing O(n) cost for adding and deleting timers, which happens a
>lot. If I underst
A couple of months ago, there was a discussion of the performance of the
live555 libraries on Linux, and the discussion turned to the efficiency
of select() vs. epoll().
Studying the performance of my own epoll()-based scheduler, I strongly
suspect that the far bigger source of inefficiency is the Del
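(For anyone hitting the same wall, the follow-up idea is straightforward: a binary heap gives O(log n) insertion and O(log n) removal of the earliest deadline, versus the O(n) ordered insert of a linked list. A small, self-contained sketch of a heap-based timer queue; the TimerQueue type and its members are hypothetical and not part of live555:)

#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// Hypothetical heap-based timer queue: O(log n) schedule, O(log n) pop of the
// earliest deadline.
struct Timer {
  int64_t dueUsec;                        // absolute deadline, in microseconds
  std::function<void()> task;
  bool operator>(const Timer& other) const { return dueUsec > other.dueUsec; }
};

class TimerQueue {
public:
  void schedule(int64_t dueUsec, std::function<void()> task) {
    heap_.push(Timer{dueUsec, std::move(task)});
  }
  // Run every timer whose deadline is at or before 'nowUsec'.
  void runDue(int64_t nowUsec) {
    while (!heap_.empty() && heap_.top().dueUsec <= nowUsec) {
      Timer due = heap_.top();
      heap_.pop();
      due.task();
    }
  }
private:
  std::priority_queue<Timer, std::vector<Timer>, std::greater<Timer>> heap_;
};

One caveat: live555's DelayQueue also supports cancelling a pending task via its token (unscheduleDelayedTask()), which a plain priority queue cannot do directly; a common workaround is lazy deletion, i.e. mark the entry cancelled and skip it when it reaches the top of the heap.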