Hi,

I added some simple logic and a few member variables to DummySink to calculate the 
received bitrate in afterGettingFrame() of testRTSPClient, and print it every 30 
seconds. This shows the received bitrate per stream. When I opened 4 connections to 
IP cameras (each streaming at 8 Mbps CBR), I saw 30-32 Mbps of data received in the 
application, as expected. When I opened 8 streams from different IP cameras 
(effectively 64 Mbps coming in), testRTSPClient could not receive more than 25 Mbps 
of data collectively. However, 'ifconfig' shows that 64 Mbps is being received: if 
I run 'ifconfig' twice, 30 seconds apart, and compute the bitrate from the 
difference in 'RX bytes' (difference x 8 / 30 s), it comes out to approximately 64 
Mbps. I take that to mean the driver is not dropping large amounts of data. I also 
observed that the CPU load stayed below 50%, so the CPU is not overloaded.
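
For reference, here is a minimal sketch of the kind of accounting I added (the 
member names fTotalBytes and fLastReportTime are my own additions to DummySink, 
not part of the stock testRTSPClient code):

// Sketch only: assumes two extra DummySink members, both initialized in the
// constructor (hypothetical names):
//   unsigned long long fTotalBytes;      // bytes received since last report
//   struct timeval     fLastReportTime;  // time of the last report
// gettimeofday() comes from <sys/time.h>.

void DummySink::afterGettingFrame(unsigned frameSize, unsigned /*numTruncatedBytes*/,
                                  struct timeval /*presentationTime*/,
                                  unsigned /*durationInMicroseconds*/) {
  fTotalBytes += frameSize;

  struct timeval now;
  gettimeofday(&now, NULL);
  double elapsed = (now.tv_sec - fLastReportTime.tv_sec)
                 + (now.tv_usec - fLastReportTime.tv_usec) / 1000000.0;

  if (elapsed >= 30.0) { // report roughly every 30 seconds
    double mbps = (fTotalBytes * 8.0) / (elapsed * 1000000.0);
    envir() << "Stream \"" << fStreamId << "\": " << mbps << " Mbps received\n";
    fTotalBytes = 0;
    fLastReportTime = now;
  }

  // Then ask for the next frame, as the stock code does:
  continuePlaying();
}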

Why is the same bitrate not observed in user space? Where is the data being 
dropped? Is it that the application is not consuming data fast enough (i.e., 
select() is not being called often enough)?

Before doing this experiment I ensured the following (a sketch of these settings 
follows the list):

1. Changed DUMMY_SINK_RECEIVE_BUFFER_SIZE to 10000000.

2. Set unsigned RTSPClient::responseBufferSize = 10000000;

3. Tuned the system to raise net.core.rmem_max and related parameters. I am also 
using setReceiveBufferTo() to increase the socket receive buffer to 0xDA000.
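
In outline, the three changes look roughly like this; the placement of the 
setReceiveBufferTo() call in continueAfterSETUP(), using the RTP source's socket, 
is shown here for illustration and is not necessarily the only way to do it:

// 1. In testRTSPClient.cpp:
#define DUMMY_SINK_RECEIVE_BUFFER_SIZE 10000000

// 2. In liveMedia/RTSPClient.cpp, the static initializer was changed to:
unsigned RTSPClient::responseBufferSize = 10000000;

// 3. In testRTSPClient.cpp, after a successful "SETUP", enlarge the kernel
//    receive buffer of the RTP socket.  setReceiveBufferTo() is declared in
//    "GroupsockHelper.hh"; the size the kernel actually grants is capped by
//    net.core.rmem_max, which is why that sysctl was raised first.
#include "GroupsockHelper.hh"

void continueAfterSETUP(RTSPClient* rtspClient, int resultCode, char* resultString) {
  // ... stock testRTSPClient code ...
  StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs;
  if (scs.subsession->rtpSource() != NULL) {
    int sockNum = scs.subsession->rtpSource()->RTPgs()->socketNum();
    setReceiveBufferTo(rtspClient->envir(), sockNum, 0xDA000); // ~872 KB
  }
  // ... stock testRTSPClient code ...
}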

Please let me know if you can foresee where the bottleneck could be. I'm running 
Linux on the receiving side.

Regards.
Yogesh.