Hi! I'm writing code to stream live u-law audio over RTP multicast. I derived a class from live555's FramedSource to read the data from a memory buffer, which an audio recorder thread fills with u-law audio. In the derived class's "doGetNextFrame" function I set the frame size to 128 bytes, the duration to 16000 us, the presentation time, and so on. The network sink is an instance of "SimpleRTPSink". This seemed straightforward, but the audio is badly broken when played back in VLC. On closer inspection I found that each outgoing RTP packet carries a 1024-byte payload and the packets arrive 128 ms apart. What I intended was a 16-ms arrival interval with a 128-byte RTP payload (the audio recorder's output frame size). Does this behavior make sense, and how can I get the packetization I want?
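For reference, here is a stripped-down sketch of what my code does. ULawBufferSource and readFromRecorderBuffer() are placeholder names; the real code drains the shared buffer that the recorder thread fills:

    #include "FramedSource.hh"
    #include "SimpleRTPSink.hh"
    #include "GroupsockHelper.hh" // declares gettimeofday() where the platform lacks it

    // Placeholder source that reads frames from the recorder thread's buffer.
    class ULawBufferSource: public FramedSource {
    public:
      static ULawBufferSource* createNew(UsageEnvironment& env) {
        return new ULawBufferSource(env);
      }

    protected:
      ULawBufferSource(UsageEnvironment& env) : FramedSource(env) {}

    private:
      virtual void doGetNextFrame() {
        // One recorder frame: 128 bytes of 8-kHz u-law == 16 ms of audio.
        fFrameSize = 128;
        if (fFrameSize > fMaxSize) {             // respect the sink's buffer limit
          fNumTruncatedBytes = fFrameSize - fMaxSize;
          fFrameSize = fMaxSize;
        }
        readFromRecorderBuffer(fTo, fFrameSize); // placeholder buffer accessor

        fDurationInMicroseconds = 16000;
        if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
          gettimeofday(&fPresentationTime, NULL); // first frame: use wall-clock time
        } else {
          // Later frames: advance by one frame duration.
          fPresentationTime.tv_usec += fDurationInMicroseconds;
          fPresentationTime.tv_sec  += fPresentationTime.tv_usec / 1000000;
          fPresentationTime.tv_usec %= 1000000;
        }

        FramedSource::afterGetting(this); // hand the frame to the sink
      }

      void readFromRecorderBuffer(unsigned char* to, unsigned size); // placeholder
    };

    // Elsewhere, after setting up the multicast Groupsock "rtpGroupsock":
    //   SimpleRTPSink* sink = SimpleRTPSink::createNew(env, &rtpGroupsock,
    //       0 /*PCMU*/, 8000, "audio", "PCMU", 1 /*mono*/);
    //   sink->startPlaying(*ULawBufferSource::createNew(env), NULL, NULL);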
Thanks!