First, as you know (because you've read the FAQ :-), you shouldn't modify the 
existing code 'in place'; see:
        http://www.live555.com/liveMedia/faq.html#modifying-and-extending
Instead, you should write your own new subclass(es) (perhaps using the existing 
code as a model, when necessary).

In any case, the "WAVAudioFileSource" code is the wrong code to be using as a 
model, because most of what it does is irrelevant for your application.  In 
particular:
        - It reads and processes a WAV audio file header, which you don't need 
to do (because you presumably know the audio parameters (# channels, sampling 
frequency, etc.) in advance).
        - It reads from a file, which you won't be doing (as you've noted).
        - It provides support for 'trick play' operations, which you won't 
support, because you'll be reading from a live input source, rather than from a 
static file.

So, instead, you should write your own "FramedSource" subclass (not based on 
"WAVAudioFileSource") to encapsulate your input audio sample buffer.  For this, 
I suggest that you use the "DeviceSource" code as a model (see 
"liveMedia/DeviceSource.cpp").
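As a rough illustration of the "DeviceSource" pattern, a minimal "FramedSource" subclass wrapping a live audio buffer might look like the sketch below. This is a sketch only: the class name "LiveAudioSource" and the buffer-fetch hook "fetchAudioFrame()" are hypothetical placeholders for however your application obtains samples, but the "doGetNextFrame()" / "deliverFrame()" structure and the "fTo", "fMaxSize", "fFrameSize", "fNumTruncatedBytes", and "fPresentationTime" members follow "liveMedia/DeviceSource.cpp":

```cpp
#include "FramedSource.hh"
#include <string.h>
#include <sys/time.h>

class LiveAudioSource: public FramedSource {
public:
  static LiveAudioSource* createNew(UsageEnvironment& env) {
    return new LiveAudioSource(env);
  }

protected:
  LiveAudioSource(UsageEnvironment& env): FramedSource(env) {}

private:
  // Called by the liveMedia event loop whenever the downstream object
  // (e.g. a RTPSink) wants the next frame of data:
  virtual void doGetNextFrame() {
    deliverFrame();
  }

  void deliverFrame() {
    if (!isCurrentlyAwaitingData()) return; // downstream isn't ready yet

    // "fetchAudioFrame()" is a hypothetical hook for however your
    // application obtains its next buffer of audio samples:
    u_int8_t* buffer; unsigned bufferSize;
    fetchAudioFrame(buffer, bufferSize);

    // Copy the data into the downstream object's buffer, truncating if
    // necessary (and recording how much was truncated):
    if (bufferSize > fMaxSize) {
      fFrameSize = fMaxSize;
      fNumTruncatedBytes = bufferSize - fMaxSize;
    } else {
      fFrameSize = bufferSize;
      fNumTruncatedBytes = 0;
    }
    gettimeofday(&fPresentationTime, NULL); // a live source uses 'now'
    memmove(fTo, buffer, fFrameSize);

    // Tell the downstream object that a frame is now available:
    FramedSource::afterGetting(this);
  }

  void fetchAudioFrame(u_int8_t*& buffer, unsigned& size); // hypothetical
};
```

In a real live source you would typically arrange for "deliverFrame()" to be called from the event loop whenever new data actually arrives (e.g. via the "triggerEvent()" mechanism), rather than assuming data is always ready inside "doGetNextFrame()"; the comments in "DeviceSource.cpp" show how.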

Also, of course, you will need to write your own 
"OnDemandServerMediaSubsession" subclass, and use that - instead of 
"WAVAudioFileServerMediaSubsession" - in your RTSP server.  Although you may 
want to use the "WAVAudioFileServerMediaSubsession" code as a model, you'll 
find that you won't need most of that code.  In fact, you'll probably need to 
implement only the "createNewStreamSource()" and "createNewRTPSink()" virtual 
functions (and their implementations will be much simpler than those in 
"WAVAudioFileServerMediaSubsession", because - unlike for a WAV file - you know 
the audio parameters in advance).
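To illustrate, those two virtual functions could be as simple as the following sketch.  The class name "LiveAudioSubsession" is hypothetical, as are the assumed audio parameters (16-bit PCM, 48000 Hz, 2 channels) and the estimated bitrate; "LiveAudioSource" stands in for whatever "FramedSource" subclass you write:

```cpp
#include "OnDemandServerMediaSubsession.hh"
#include "SimpleRTPSink.hh"

class LiveAudioSubsession: public OnDemandServerMediaSubsession {
public:
  static LiveAudioSubsession* createNew(UsageEnvironment& env) {
    // Reuse the first source, because there's a single live input:
    return new LiveAudioSubsession(env, True);
  }

protected:
  LiveAudioSubsession(UsageEnvironment& env, Boolean reuseFirstSource)
    : OnDemandServerMediaSubsession(env, reuseFirstSource) {}

  virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                              unsigned& estBitrate) {
    estBitrate = 1536; // kbps; e.g. 48000 Hz * 16 bits * 2 channels
    return LiveAudioSource::createNew(envir()); // your "FramedSource" subclass
  }

  virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                    unsigned char rtpPayloadTypeIfDynamic,
                                    FramedSource* /*inputSource*/) {
    // Because the audio parameters are known in advance, no header
    // parsing is needed; just create a sink that matches them.
    // ("L16" = 16-bit linear PCM; change the payload format name,
    // frequency, and channel count to match your actual encoding.)
    return SimpleRTPSink::createNew(envir(), rtpGroupsock,
                                    rtpPayloadTypeIfDynamic,
                                    48000 /*sampling frequency*/,
                                    "audio", "L16",
                                    2 /*channels*/);
  }
};
```

One caveat: if you stream 16-bit PCM as "L16", the samples must be in network (big-endian) byte order on the wire; the existing WAV code inserts an "EndianSwap16" filter for this, and you may need to do the same.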


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/

_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel