Hi everybody,

I am developing an RTSP server based on openRTSP. The source streams consist of H.264 video from IP cameras, which are handled by other processes. Basically, the server should forward the H.264 streams as they are received from the cameras, with the constraint that we have to use TCP as the transport protocol (although this is not recommended).

Things work well as long as there is enough bandwidth on the link between our RTSP server and the clients.

As a very basic form of bitrate adaptation, I would like to switch to streaming only key frames from the primary source whenever congestion is detected on a single client session.
This part is not hard to implement (a rough sketch of what I mean is below).
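Just to make it concrete, this is roughly the kind of NAL-unit filtering I have in mind during congestion (only a sketch; the function name and the way the NAL units reach it are assumptions, not my actual code):

    // Sketch of a keyframe-only filter for H.264.
    // Assumes each NAL unit's payload (without the start code) arrives as a byte buffer.
    static bool shouldForwardDuringCongestion(unsigned char const* nalUnit,
                                              unsigned nalUnitSize) {
      if (nalUnitSize == 0) return false;
      unsigned char nalType = nalUnit[0] & 0x1F;  // H.264 nal_unit_type (low 5 bits)
      // Keep IDR slices (5) plus SPS (7) and PPS (8) so the stream stays decodable:
      return nalType == 5 || nalType == 7 || nalType == 8;
    }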
The hard part seems to be detecting (the onset of) congestion.

The idea is to watch RTPTransmissionStats (which is updated from the RTCP RR packets sent by the client), looking for some kind of clue. In particular, I was trying to compare the rate of packets received from the camera source against the client's packet reception rate, inferred from lastPacketNumReceived() in RTPTransmissionStats (a rough sketch of the check is below). This seems to work with UDP, but fails when using TCP, mainly because the connection gets closed as soon as bandwidth becomes limited.
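For reference, this is approximately the kind of polling I was experimenting with (only a sketch, not my real code; the helper name and the lag threshold are made up for illustration):

    #include "liveMedia.hh"

    // Sketch: poll the sink's transmission stats (updated from the client's RTCP RRs)
    // and compare the highest packet number the client reports having received
    // against the sequence number we have sent most recently.
    Boolean sessionLooksCongested(RTPSink* videoSink, u_int16_t lagThreshold) {
      RTPTransmissionStatsDB::Iterator iter(videoSink->transmissionStatsDB());
      RTPTransmissionStats* stats;
      while ((stats = iter.next()) != NULL) {
        u_int16_t lastSent = videoSink->currentSeqNo();
        // lastPacketNumReceived() reflects the extended highest sequence number
        // from the RR; compare only the low 16 bits so wrap-around is handled:
        u_int16_t lastReceived = (u_int16_t)stats->lastPacketNumReceived();
        u_int16_t lag = (u_int16_t)(lastSent - lastReceived);
        if (lag > lagThreshold) return True;
      }
      return False;
    }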

Do you have any ideas or suggestions? Is this approach misguided?

Thanks in advance,

Massimo Perrone

Innova Spa

Trieste - Italy



