Here's how I implemented the subsession methods:

RTPSink* LiveG711MediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
        unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource)
{
    // G.711 u-law: static RTP payload type 0, 8000 Hz sampling, mono
    char const* mimeType = "PCMU";
    unsigned char payloadFormatCode = 0;
    unsigned sampleFrequency = 8000;
    unsigned numChannels = 1;

    return SimpleRTPSink::createNew(envir(), rtpGroupsock,
                                    payloadFormatCode, sampleFrequency,
                                    "audio", mimeType, numChannels);
}

If your input data is *already* u-law audio (i.e., 8 bits per sample), then this should work. (If, instead, it's 16-bit-per-sample PCM audio, then you need to insert a filter to convert it to u-law.)
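For 16-bit PCM input, a minimal sketch of that conversion, assuming little-endian samples and using the library's "uLawFromPCMAudioSource" filter (from "uLawAudioFilter.hh"), would be something like:

    // Sketch only: "pcmSource" is a placeholder for whatever FramedSource
    // delivers the 16-bit PCM samples; the filter's output is 8-bit u-law,
    // which can then be fed to the RTP sink.
    FramedSource* uLawSource =
        uLawFromPCMAudioSource::createNew(envir(), pcmSource, 1 /*little-endian*/);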


The "LiveG711AudioStreamFramer is also something I wrote, and "m_mediaSource" is the thing that supplies my audio. For the values I supply to SimpleRTPSink, I basically copied what WAVAudioFileServerMediaSubsession was delivering for PCMU audio (8000 hz, 1 channel, etc.). Still not sure what problem I'm having--I see VLC is receiving the audio stream, but it's dropping all of the data claiming the "PTS is out of range."

Ahh! The problem here is probably that you're not setting "fPresentationTime" properly in your 'framer' class.
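Typically, the first frame should get its presentation time from the wall clock, and each subsequent frame should advance from the previous one by the frame's duration. A sketch of that pattern in a framer's frame-delivery routine ("deliverFrame" and the copy step are placeholders; the timing math assumes 8000 one-byte u-law samples per second):

    void LiveG711AudioStreamFramer::deliverFrame()
    {
        // ... copy up to fMaxSize bytes of u-law data into fTo and set fFrameSize ...

        if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
            // First frame: start from the current wall-clock time.
            gettimeofday(&fPresentationTime, NULL);
        } else {
            // Later frames: advance by one frame's duration (this assumes
            // fixed-size frames; otherwise track the previous frame's duration).
            unsigned uSeconds = fPresentationTime.tv_usec
                + (fFrameSize * 1000000) / 8000;
            fPresentationTime.tv_sec += uSeconds / 1000000;
            fPresentationTime.tv_usec = uSeconds % 1000000;
        }
        fDurationInMicroseconds = (fFrameSize * 1000000) / 8000;

        // Hand the completed frame to the downstream RTP sink:
        FramedSource::afterGetting(this);
    }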
--

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
