This matches our recent findings on performance/scalability: we also found that CPU usage is what limits the number of clients that can connect to the RTSP server.
Attached is a plot of CPU vs. number of clients when:
- streaming only H.264 video per RTSP session (red)
- streaming only AAC audio per RTSP session (blue)
- streaming only H.264 video + AAC audio per RTSP session (green)

One question: was the following ever implemented/addressed since the post in 2007?
http://lists.live555.com/pipermail/live-devel/2007-June/006889.html

> At some point, I should get rid of these (few) remaining blocking
> socket reads, and remove the "select()" call from "readSocket()".
> Actually, as you're just running a RTSP server, you can probably
> remove the "select()" call right now. You could give that a try, to
> see if it improves performance on your system.

Thanks,
Ralf
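As a side note on the quoted 2007 suggestion: the idea is that, because the event loop already calls select() to decide which sockets are readable, a second select() inside the socket-read routine is redundant, and a non-blocking read can be issued directly. The sketch below is only a generic illustration of that pattern; the function name and signature are invented for this example and are not the actual LIVE555 readSocket() code.

// Generic illustration only (not the LIVE555 sources): read a UDP datagram
// without calling select() first, assuming the caller's event loop has already
// reported this socket as readable and the socket is in non-blocking mode.
#include <sys/socket.h>
#include <netinet/in.h>
#include <cerrno>

// Returns the number of bytes read, 0 if no data was actually available
// (a spurious wakeup), or -1 on a genuine error.
static int readDatagramNoSelect(int sock, unsigned char* buffer, unsigned bufferSize,
                                struct sockaddr_in& fromAddress) {
  socklen_t addrLen = sizeof fromAddress;
  // No select() here; readability was already established by the event loop.
  ssize_t n = recvfrom(sock, buffer, bufferSize, 0,
                       (struct sockaddr*)&fromAddress, &addrLen);
  if (n < 0) {
    if (errno == EAGAIN || errno == EWOULDBLOCK) return 0;
    return -1;
  }
  return (int)n;
}

Whether skipping the extra select() actually helps would have to be confirmed by profiling, as in the gprof output quoted below.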
>>> Konstantin Shpinev <k...@microimpuls.com> 07/16/17 5:28 PM >>>

40 clients, ~40% CPU usage. gprof results:

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds     calls  us/call  us/call  name
20.76      0.11     0.11     150000     0.73     3.47  BasicTaskScheduler::SingleStep(unsigned int)
11.32      0.17     0.06      61120     0.98     1.04  DelayQueue::addEntry(DelayQueueEntry*)
 9.43      0.22     0.05      58657     0.85     1.36  RTPInterface::sendPacket(unsigned char*, unsigned int)
 7.55      0.26     0.04      58296     0.69     0.69  DelayQueue::removeEntry(long)
 5.66      0.29     0.03      58611     0.51     0.51  MultiFramedRTPSink::buildAndSendPacket(unsigned char)
 3.77      0.31     0.02    3231064     0.01     0.01  HandlerIterator::next()
 3.77      0.33     0.02     326217     0.06     0.06  DelayQueue::synchronize()
 3.77      0.35     0.02     150000     0.13     0.18  DelayQueue::handleAlarm()
 3.77      0.37     0.02     117586     0.17     0.17  NetInterfaceTrafficStats::countPacket(unsigned int)
 3.77      0.39     0.02      58610     0.34     0.34  SimpleRTPSink::doSpecialFrameHandling(unsigned int, unsigned char*, unsigned int, timeval, unsigned int)
 3.77      0.41     0.02      58610     0.34     2.91  MultiFramedRTPSink::sendPacketIfNecessary()
 3.77      0.43     0.02      58610     0.34     3.76  MPEG2TransportStreamFramer::afterGettingFrame1(unsigned int, timeval)
 1.89      0.44     0.01     150000     0.07     0.12  DelayQueue::timeToNextAlarm()
 1.89      0.45     0.01     117222     0.09     0.09  FramedSource::getNextFrame(unsigned char*, unsigned int, void (*)(void*, unsigned int, unsigned int, timeval, unsigned int), void*, void (*)(void*), void*)
 1.89      0.46     0.01     116338     0.09     0.17  BasicTaskScheduler::setBackgroundHandling(int, int, void (*)(void*, int), void*)
 1.89      0.47     0.01      90105     0.11     0.11  HandlerIterator::reset()
 1.89      0.48     0.01      61120     0.16     1.21  BasicTaskScheduler0::scheduleDelayedTask(long, void (*)(void*), void*)
 1.89      0.49     0.01      58658     0.17     0.17  writeSocket(UsageEnvironment&, int, in_addr, unsigned short, unsigned char*, unsigned int)
 1.89      0.50     0.01      58610     0.17     3.42  MultiFramedRTPSink::afterGettingFrame1(unsigned int, unsigned int, timeval, unsigned int)
 1.89      0.51     0.01      58151     0.17     0.17  HandlerDescriptor::~HandlerDescriptor()
 1.89      0.52     0.01         22   454.56   454.56  MultiFramedRTPSink::MultiFramedRTPSink(UsageEnvironment&, Groupsock*, unsigned char, unsigned int, char const*, unsigned int)
 1.89      0.53     0.01                               BasicTaskScheduler::~BasicTaskScheduler()
 0.00      0.53     0.00     410270     0.00     0.00  MPEG2TransportStreamFramer::updateTSPacketDurationEstimate(unsigned char*, double)
 0.00      0.53     0.00     326218     0.00     0.00  TimeNow()
 0.00      0.53     0.00     326217     0.00     0.00  operator-(Timeval const&, Timeval const&)
 0.00      0.53     0.00     175833
 0.00      0.53     0.00     150000     0.00     0.00  HandlerIterator::HandlerIterator(HandlerSet&)
 0.00      0.53     0.00     150000     0.00     0.00  HandlerIterator::~HandlerIterator()

On Sat, 15 Jul 2017 at 22:11, Ross Finlayson <finlay...@live555.com> wrote:
>
> In my experience, scalability issues with our server software are usually
> caused by running up against limits in either (1) the capacity of their
> network, or (2) the number of open files (sockets) that are supported by
> the underlying operating system (see
> http://live555.com/liveMedia/faq.html#scalability ). Increasing this
> limit usually solves the problem.
>
> I'm using a Linux server, so it's not it.
>
> Note that Linux kernels also have a limit on the number of open
> files/sockets. This limit can be reconfigured.
>
> The problem is high CPU usage.
>
> So which part(s) of the code are contributing most to the CPU usage? Have
> you tried rebuilding the code for, and running it under, "gprof"?
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/

--
Best regards,
Konstantin Shpinev
Microimpuls LLC
office: +7 (499) 647-49-78
mobile: +7 (906) 844-57-66
skype: ksot1k
www.microimpuls.com
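Regarding the open-file/socket limit mentioned in Ross's quoted reply above: besides raising it from the shell (ulimit -n) or system-wide (/etc/security/limits.conf, fs.file-max), a process can raise its own soft limit toward the hard limit at startup. The following is a generic sketch of that, not LIVE555 code, and the helper name is invented for illustration.

// Generic sketch (not part of LIVE555): raise this process's soft limit on
// open file descriptors to the hard limit, so more client sockets can be open.
#include <sys/resource.h>
#include <cstdio>

static void raiseOpenFileLimit() {
  struct rlimit rl;
  if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
    perror("getrlimit(RLIMIT_NOFILE)");
    return;
  }
  rl.rlim_cur = rl.rlim_max;             // soft limit -> hard limit
  if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
    perror("setrlimit(RLIMIT_NOFILE)");  // the hard limit itself may still be too low
  }
}

In the case reported in this thread, though, the file-descriptor limit was not the bottleneck; CPU usage was.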
_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel