I'm writing some code to stream live audio in ulaw format over RTP
multicast. I derived a class from "FramedSource" to read the data from
a memory buffer; an audio recorder thread feeds ulaw audio data into
this buffer. In the derived class's "doGetNextFrame" function, I set
the frame size to 128 bytes, the duration to 16000 us, the
presentation time, etc. The network sink is an instance of
"SimpleRTPSink". It seems straightforward. However, the audio was very
broken when played back in VLC. Later I found that each outgoing RTP
packet has a 1024-byte payload, and the packets arrive 128 ms apart. I
intended a 16-ms arrival interval and a 128-byte RTP payload (which is
the audio recorder's output frame size). Does this make any sense, and
how should I do that?
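
Simplified, my derived class looks roughly like the sketch below (the
class name "BufferedULawSource" and the "readFromRecorderBuffer"
helper are placeholders standing in for my actual recorder-buffer
plumbing and its thread synchronization):

    #include "FramedSource.hh"
    #include <cstring>
    #include <sys/time.h>

    class BufferedULawSource: public FramedSource {
    public:
      static BufferedULawSource* createNew(UsageEnvironment& env) {
        return new BufferedULawSource(env);
      }

    protected:
      BufferedULawSource(UsageEnvironment& env): FramedSource(env) {}

    private:
      virtual void doGetNextFrame() {
        // 128 bytes of 8 kHz ulaw = 16 ms of audio:
        fFrameSize = 128;
        if (fFrameSize > fMaxSize) {
          fNumTruncatedBytes = fFrameSize - fMaxSize;
          fFrameSize = fMaxSize;
        }
        readFromRecorderBuffer(fTo, fFrameSize);
        fDurationInMicroseconds = 16000;
        gettimeofday(&fPresentationTime, NULL);
        // Hand the completed frame to the downstream RTP sink:
        FramedSource::afterGetting(this);
      }

      // Placeholder for the copy from the recorder thread's shared
      // buffer; the real version synchronizes with that thread.
      // (0xFF is ulaw silence.)
      void readFromRecorderBuffer(unsigned char* to, unsigned size) {
        memset(to, 0xFF, size);
      }
    };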

Yes. By default, "SimpleRTPSink" packs as many complete input frames as will fit into each outgoing RTP packet, which is why you're seeing eight of your 128-byte frames (1024 bytes, i.e. 128 ms of audio) per packet. The solution is to specify that your "SimpleRTPSink" object should pack no more than one input frame at a time into each outgoing RTP packet. You can do this by setting the "allowMultipleFramesPerPacket" parameter in the "SimpleRTPSink::createNew()" call to False.
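
For ulaw, that call might look something like the following (assuming
"env" (a UsageEnvironment*) and "rtpGroupsock" (a Groupsock*) have
already been set up, as in the live555 test programs; PCMU is static
RTP payload type 0 with an 8 kHz timestamp clock):

    SimpleRTPSink* audioSink = SimpleRTPSink::createNew(
        *env, rtpGroupsock,
        0,        // RTP payload format: 0 = PCMU (ulaw)
        8000,     // RTP timestamp frequency: 8 kHz
        "audio",  // SDP media type string
        "PCMU",   // RTP payload format name
        1,        // number of channels
        False);   // allowMultipleFramesPerPacket: one frame per packet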
--

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/