Hello all! Over the last few days I have been debugging some problems that show up when using an on-demand server to stream MPEG-4/H.264 video as RTP-over-TCP. After taking a closer look at the TCP code, it seems they are not easily fixable individually; a major rewrite of the RTP-over-TCP code would be needed to fix them all. Are there currently any efforts underway in that direction?
If they're not already known, here are the problems I'm dealing with at the moment (you can reproduce them with the "testOnDemandRTSPServer" demo from the latest 2011.01.24 release). All of them arise when you use RTP-over-TCP streaming mode.

* "No response to TEARDOWN when the source ended"
If the source finishes playing (e.g. on EOF), StreamState::reclaim is called by the afterPlayingStreamState handler. The RTCPInstance and RTPSink are deleted, which causes them to de-register the TCP socket from the RTPInterface; this turns off reading on that socket. Deleting the RTCPInstance also sends a BYE to the client. The client then tries to TEARDOWN, but since nobody is listening on the still-open TCP socket any more, it never gets an answer. VLC (1.1.7) completely locks up in that situation - nice.

* "Defective reuseFirstSource when using OnDemandServerMediaSubsession with RTP-over-TCP"
Again, if the source ends (e.g. EOF), the StreamState "reclaims" itself, but the OnDemandServerMediaSubsession is not informed of that. So on the next request it happily re-uses a StreamState that is already dead, and no data is delivered at all. This is only cured once the RTSPClientSession objects owning the stream tokens are eventually garbage-collected by the liveness timeout. If they are (for whatever reason) kept alive, it is impossible to open a new stream with this subsession.

* "RTSP is dead after stopping all RTP-over-TCP streams"
I'm not too sure about this one, as I'm not deep into the RTSP RFC and I don't have test code to prove it. But from what I see, if within an RTSP session you start RTP-over-TCP streams and then stop them all, the RTSP session itself is dead as well: it seems to be impossible to start any other streams afterwards.

* "No error handling on TCP socket errors"
There are already some posts about this on the list. Personally I think not handling serious errors (socket closed => "Broken pipe") is not good. The server keeps trying to put packets into a socket that is known to be closed, so you carry extra load until, at some point, a liveness check kills the RTSPClientSession. (A sketch of the kind of check I mean follows after my signature.)

It seems that handing over ownership of the TCP socket to the RTPInterface when doing RTP-over-TCP is the root of all evil here. Wouldn't it be better to let the RTSPClientSession keep owning the socket, do the read handling and de-muxing there, and give the RTPInterface a delegate for sending data? (See the second sketch below for what I mean.)

Best regards, Andreas.
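P.S. To illustrate the fourth point: below is a minimal sketch of the kind of error check I mean. The helper name and the callback type are made up for illustration only, they are not the actual live555 API; the current code simply calls send() and ignores the result.

#include <sys/types.h>
#include <sys/socket.h>
#include <cerrno>

// Invented callback type: lets the socket owner (e.g. the RTSPClientSession)
// react to a fatal error instead of the writer silently retrying forever.
typedef void TCPFatalErrorHandler(void* clientData, int socketNum, int err);

// Invented helper: send one interleaved packet and report fatal errors.
// Assumes SIGPIPE is ignored, so that writing to a closed socket returns
// EPIPE instead of killing the process. (Partial sends are ignored here
// for brevity.)
bool sendRTPOverTCPSketch(int socketNum, unsigned char const* data,
                          unsigned size,
                          TCPFatalErrorHandler* onFatalError,
                          void* clientData) {
  ssize_t sent = send(socketNum, data, size, 0);
  if (sent == (ssize_t)size) return true;

  if (sent < 0 && (errno == EPIPE || errno == ECONNRESET || errno == EBADF)) {
    // The peer is gone; stop streaming on this socket right away,
    // instead of waiting for the liveness timeout to clean things up.
    if (onFatalError != 0) (*onFatalError)(clientData, socketNum, errno);
  }
  return false;
}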
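P.P.S. And here is a very rough sketch of the delegate idea from the last paragraph. All class and method names are invented to illustrate the direction, nothing of this exists in live555: the RTSPClientSession would keep the socket, implement the delegate, and do all reading and de-muxing itself; the RTPInterface would only call the delegate for sending.

// Sketch only - invented names, not the live555 API.

// Implemented by the object that owns the TCP socket (the RTSPClientSession
// in my proposal). It handles all reads itself and knows when the
// connection has died.
class TCPStreamDelegate {
public:
  virtual ~TCPStreamDelegate() {}

  // Send one RTP or RTCP packet, framed with the '$' + channel-id header.
  // Returns false on a fatal socket error so the caller can stop streaming.
  virtual bool sendInterleavedFrame(unsigned char channelId,
                                    unsigned char const* data,
                                    unsigned size) = 0;
};

// The RTPInterface would then hold a delegate instead of the raw socket:
class RTPInterfaceSketch {
public:
  RTPInterfaceSketch() : fTCPDelegate(0) {}

  void setTCPDelegate(TCPStreamDelegate* delegate) { fTCPDelegate = delegate; }

  bool sendPacket(unsigned char const* packet, unsigned packetSize,
                  unsigned char channelId) {
    if (fTCPDelegate != 0) {
      // RTP-over-TCP: the session owns the socket and does the actual write.
      return fTCPDelegate->sendInterleavedFrame(channelId, packet, packetSize);
    }
    // ... the normal RTP-over-UDP path stays as it is ...
    return true;
  }

private:
  TCPStreamDelegate* fTCPDelegate;
};

That way, when the source reaches EOF and the RTPSink/RTCPInstance are deleted, only the delegate wiring goes away; the session keeps reading on the socket and can still answer the client's TEARDOWN.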