Greetings to all,

As I explained in the previous emails, I'm trying to set up an RTSP server that streams H.263 video taken from a process that writes the stream to a Unix named pipe.
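For context, the server side is set up roughly like this. This is only a simplified sketch, not my exact code: the port number (8554), the stream name "camera0" and the description string are placeholders, while "camera0fifo" is the named pipe (the same path that shows up in the strace further down).

    #include "liveMedia.hh"
    #include "BasicUsageEnvironment.hh"

    int main() {
      TaskScheduler* scheduler = BasicTaskScheduler::createNew();
      UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

      RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
      if (rtspServer == NULL) return 1;

      ServerMediaSession* sms =
        ServerMediaSession::createNew(*env, "camera0", "camera0",
                                      "H.263 stream read from a fifo");

      // The second parameter is the "file" name; here it is the named pipe
      // (fifo) that the encoder process writes to.
      sms->addSubsession(
        H263plusVideoFileServerMediaSubsession::createNew(*env, "camera0fifo",
                                                          False));

      rtspServer->addServerMediaSession(sms);
      env->taskScheduler().doEventLoop();  // does not return
      return 0;
    }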
The "encoder" process sets up the ffmpeg libraries, takes video frames from an analog camera, encodes them in H.263 format, and writes them to a named pipe (fifo) on the filesystem. The server process adds an H263plusVideoFileServerMediaSubsession, using the filename of the named pipe as the second parameter.

When I start the encoder process, it sets itself up and blocks opening the fifo write-only. That's OK, because no one has yet opened the fifo for reading. Then I start the server, and start an RTSP client to make the right request. The server then open()s the fifo for reading, so the encoder process unblocks and starts encoding frames and writing them to the fifo. The odd thing is that the server then close()s the fifo, only to re-open() it after a bit. After the server close()s it, and before it re-open()s it, the encoder process has already encoded some frames and tries to write them to the fifo. But the read end of the fifo is now closed, so the encoder process gets a SIGPIPE and dies.

The strace of the server process goes like this:

    open("camera0fifo", O_RDONLY|O_LARGEFILE) = 5   --> (opens the fifo for the 1st time)
    [...]
    close(5)                                        --> (closes fd 5, which is the fifo descriptor)
    [...]
    ---> Here the encoder process tries to write() on the fifo, and gets the SIGPIPE
    [...]
    open("camera0fifo", O_RDONLY|O_LARGEFILE) = 5   --> (opens the fifo again, for the last time)

If I make the encoder process sleep() for two seconds, however - that's bad practice, I know, but just to prove my point - the server has time to re-open() the read end of the fifo, and all subsequent write()s on it succeed. To avoid this, I could either ignore SIGPIPE or poll until the read end of the fifo is opened again, but I don't particularly like these solutions (a sketch of the first one is in the P.S. at the end of this message).

Is there a particular reason why liveMedia open()s the file descriptor twice? Is it meant to be like this, or is it misbehaving? And if it is meant to, how would you recommend handling this situation? Maybe I'm missing something, and someone could kindly enlighten me :)

The second part of the question is a bit trickier. The server sends H.263+ packets correctly, as far as I can see. Unfortunately, my implementation is meant to stream to cellular phones using the standard RealPlayer. RealPlayer behaves well with the server in the RTSP/RTP interaction, but it won't read H.263+ packets as encoded by ffmpeg. Instead, it will read H.263 (non-plus, first version, with fixed picture sizes); the encoder-side codec choice is sketched in the P.P.S. below. When I try to stream H.263 streams with an H263plusVideoFileServerMediaSubsession, it behaves quite oddly: the server sends packets very slowly, with short bursts of RTP packets interleaved with 2-3 seconds of sending nothing, as I could observe in Wireshark. Since it consumes much more slowly than the encoder process produces, the fifo gets filled and the writes block; the server then finally consumes its buffer (taking minutes), the encoder process fills the fifo again, and so on.

I realize H263plusVideoFileServerMediaSubsession is meant to send H.263+ streams, which it does, but I thought H.263+ was backwards compatible with H.263. Has anyone managed to stream pure H.263 (first version) with liveMedia? Does anyone know how I can achieve this?

Thanks to everybody.

--
Belloni Cristiano
Imavis Srl.
www.imavis.com
[EMAIL PROTECTED]
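P.S. For reference, the writer-side workaround I mention above (ignoring SIGPIPE and re-opening the fifo when write() fails with EPIPE) would look roughly like the sketch below. It is only a sketch: the ffmpeg setup and encoding loop are left out, "camera0fifo" stands in for the real path, and send_frame() is an illustrative helper, not something from my actual code.

    #include <signal.h>
    #include <errno.h>
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int open_fifo(const char* path) {
      // Opening a fifo O_WRONLY blocks until some process opens it for reading.
      return open(path, O_WRONLY);
    }

    // Write one encoded frame, re-opening the fifo if the read end went away.
    static bool send_frame(int& fd, const char* path,
                           const unsigned char* data, size_t len) {
      while (len > 0) {
        ssize_t n = write(fd, data, len);
        if (n >= 0) { data += n; len -= (size_t)n; continue; }
        if (errno == EPIPE) {
          // The server close()d its end (the first open/close cycle):
          // block until it open()s the fifo again, then keep writing.
          close(fd);
          fd = open_fifo(path);
          if (fd < 0) return false;
          continue;
        }
        if (errno == EINTR) continue;
        return false;
      }
      return true;
    }

    int main() {
      signal(SIGPIPE, SIG_IGN);  // get EPIPE from write() instead of dying
      int fd = open_fifo("camera0fifo");
      if (fd < 0) { perror("open"); return 1; }
      // ... ffmpeg setup and encoding loop would go here, calling
      // send_frame(fd, "camera0fifo", frameData, frameSize) per frame ...
      close(fd);
      return 0;
    }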
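P.P.S. In case it helps, the encoder picks its codec from the ffmpeg libraries roughly as below (simplified; the exact header location and CODEC_ID_* names depend on the ffmpeg version, and the whole AVCodecContext setup is omitted). Plain H.263 would mean using the baseline encoder with one of its fixed picture sizes, e.g. 176x144 (QCIF).

    extern "C" {
    #include <avcodec.h>   // header path varies with the ffmpeg version
    }
    #include <stdio.h>

    int main() {
      avcodec_register_all();

      // H.263+ encoder: this is what currently goes into the fifo.
      AVCodec* h263p = avcodec_find_encoder(CODEC_ID_H263P);

      // Baseline H.263 encoder: what RealPlayer seems to accept; with this one
      // the picture size must be one of the fixed H.263 sizes (e.g. QCIF, CIF).
      AVCodec* h263 = avcodec_find_encoder(CODEC_ID_H263);

      printf("h263p encoder: %p, h263 encoder: %p\n", (void*)h263p, (void*)h263);
      return 0;
    }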