> I've noticed that a number of the largest consumer video manufacturers (Flir,
> Digital Watchdog, Hikvision, etc.) all seem to use RTSP over TCP by default.
> Is that more or less designed to combat the unknown network circumstances of
> varying consumer networks, or would there be another reason?
The best way to handle data loss is to stream over UDP, but also configure
your encoder so that ‘key frames’ are sent as a series of ‘slices’, rather
than as a single ‘frame’ (or NAL unit, in H.264 terminology). That way, the
loss of a packet will cause only a single ‘slice’ to be lost; the rest of the
frame can still be decoded.
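
For what it's worth, here's a rough sketch of what that configuration can look
like when driving ffmpeg's libx264 wrapper from Python. The input file, slice
count, and destination address are placeholders, not anything from a real
camera setup:

```python
import subprocess

# Re-encode a source with libx264, splitting each frame into 4 independent
# slices so that a lost RTP/UDP packet corrupts at most one slice.
cmd = [
    "ffmpeg",
    "-re", "-i", "input.mp4",        # placeholder input, read at native rate
    "-an",                           # RTP muxer carries one stream; drop audio
    "-c:v", "libx264",
    "-tune", "zerolatency",          # favor latency over compression efficiency
    "-x264-params", "slices=4",      # each frame becomes 4 slice NAL units
    "-f", "rtp",
    "rtp://192.0.2.10:5004",         # placeholder UDP destination
]
subprocess.run(cmd, check=True)
```

Whether the receiver actually conceals the damaged slice (rather than
discarding the whole frame) depends on the decoder's error-concealment
behavior, so it's worth testing with your actual player.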
> I'm now assuming that the mere fact that TCP is a reliable transport protocol
> is what's ensuring that I get all of the necessary packets to reassemble the
> image frame.
This is a common (though understandable) misconception. TCP is not some ‘magic
fairy dust’ that will always prevent data loss: it guarantees in-order
delivery only for as long as the link can keep up. On a congested network,
retransmissions and buffering add latency, the sender's buffers fill, and
eventually the application has to drop data anyway, or else fall further and
further behind real time.
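
To make that concrete, here's a minimal sketch (the function name and framing
are mine, not from any particular RTSP server) of the choice a live-video
sender faces once the TCP send buffer fills up:

```python
import select
import socket

def send_or_drop(sock: socket.socket, frame: bytes) -> bool:
    """Push one encoded frame down a TCP connection, or drop it.

    TCP guarantees that whatever we hand it arrives intact and in
    order, but it cannot create bandwidth. When the kernel send
    buffer is full, a live sender must either block (letting latency
    grow without bound) or drop the frame here, at the application
    layer. This sketch drops.
    """
    # Zero-timeout select: is there room in the send buffer right now?
    _, writable, _ = select.select([], [sock], [], 0)
    if not writable:
        return False      # network can't keep up; the frame is lost anyway
    sock.sendall(frame)   # may still block briefly if the frame is large
    return True
```

So over TCP the loss doesn't disappear; it just moves out of the transport and
into your buffers, your latency, or your application's drop logic.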
I really wanted to post this as a learning check. I think I was finally able
to wrap my head around an RTSP/RTP issue I've been having. Specifically, I
have been playing around with a couple of different IP cameras.
I've found that I have no real issues streaming (or proxying) cameras at 720p
or below.