A couple of months ago, there was a discussion of the performance of the live555 libraries on Linux, and the discussion turned to the relative efficiency of select() vs. epoll().
Studying the performance of my own epoll()-based scheduler, I strongly suspect that the far bigger source of inefficiency is the DelayQueue implementation that BasicTaskScheduler0 uses. This queue is a linked list, so adding and deleting timers costs O(n), and that happens a lot. If I understand the behavior correctly, a 45-second idle timer is rescheduled on each packet, and that timer almost invariably goes to the end of the scheduling queue.

With the stock code, I had results similar to Vlad Seyakov's: I petered out at about 140-150 sessions. With my rewritten scheduler, I've been able to get to 400-500 sessions. My scheduling queue is based on an STL set<> with an appropriate less-than operator, which provides O(log n) insert/delete. Even so, I find that scheduling and unscheduling timers accounts for approximately 1/3 of the CPU at 400 sessions.

I made one other observation: readSocket() in GroupsockHelper.cpp calls blockUntilReadable(), which uses select() to wait for the socket to be ready. This has two problems. First, we really shouldn't ever be blocking, since this blocks all sessions: if the data isn't ready, we should go back to the event loop. This should happen rarely, if ever, of course, since presumably we're only calling this after a select()/epoll() has triggered. The larger problem is that the use of select() limits a server to 1024 file descriptors unless you override the size of fd_set in your build, and that, of course, creates a performance degradation.

Architecturally, it seems a little harder to replace parts of groupsock than to replace parts of UsageEnvironment to make changes like this.

I've appended rough sketches of both ideas (the set<>-based timer queue and the non-blocking read) below my signature.

Marc Neuberger
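For what it's worth, here is a minimal sketch of the kind of set<>-based timer queue I mean. This is my own illustration, not live555's DelayQueue API: the names are invented, and it uses std::chrono rather than the struct timeval the library actually works with.

// Illustrative sketch only -- not live555's DelayQueue.
// Timers live in a std::set ordered by expiry time, so both scheduling and
// unscheduling a timer (e.g. the per-packet liveness timer) cost O(log n).
#include <chrono>
#include <cstdint>
#include <functional>
#include <set>

struct TimerEntry {
  std::chrono::steady_clock::time_point expiry; // absolute time the task should fire
  uint64_t id;                                  // tie-breaker for timers with equal expiry
  std::function<void()> task;

  bool operator<(TimerEntry const& other) const {
    if (expiry != other.expiry) return expiry < other.expiry;
    return id < other.id;
  }
};

class TimerQueue {
public:
  using Handle = std::set<TimerEntry>::iterator;

  // O(log n): insert a timer; the returned handle can later be used to cancel it.
  Handle schedule(TimerEntry e) { return timers_.insert(std::move(e)).first; }

  // O(log n): remove a previously scheduled timer.
  void cancel(Handle h) { timers_.erase(h); }

  // Fire every timer whose expiry has passed; the soonest timer is always *begin(),
  // so the event loop can also use it to compute its select()/epoll_wait() timeout.
  void expire(std::chrono::steady_clock::time_point now) {
    while (!timers_.empty() && timers_.begin()->expiry <= now) {
      TimerEntry entry = *timers_.begin();
      timers_.erase(timers_.begin());
      entry.task();
    }
  }

private:
  std::set<TimerEntry> timers_;
};

The handle returned by schedule() is what makes rescheduling the 45-second timer cheap: cancel the old entry and insert a new one, each O(log n), instead of walking a list.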
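And a rough sketch of what I mean by not blocking in readSocket(): with the socket already set non-blocking, just attempt the recv() and bail out to the event loop on EAGAIN instead of sitting in blockUntilReadable(). Again, this is my own illustration, not the actual groupsock code.

// Illustrative sketch only -- not the actual GroupsockHelper code.
// Assumes the fd has already been made non-blocking (O_NONBLOCK).
#include <cerrno>
#include <sys/socket.h>
#include <sys/types.h>

enum class ReadResult { Data, WouldBlock, Error };

// Try to read without ever blocking. If the kernel has nothing for us
// (spurious wakeup, etc.), report WouldBlock so the caller returns to the
// event loop rather than stalling every other session.
ReadResult tryReadSocket(int fd, void* buf, size_t bufSize, ssize_t& bytesRead) {
  bytesRead = recv(fd, buf, bufSize, 0);
  if (bytesRead >= 0) return ReadResult::Data;   // for UDP, 0 is just an empty datagram
  if (errno == EAGAIN || errno == EWOULDBLOCK) return ReadResult::WouldBlock;
  return ReadResult::Error;
}

If you genuinely do need to wait on a single socket, poll() on that one fd avoids the FD_SETSIZE limit entirely, but the cleaner fix is simply to return to the event loop.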