I am trying to understand the relationship and interaction of the different Live555 
classes so that I can implement an on-demand RTSP server able to send live 
video/audio frames that are generated by my encoder, associated with different 
video port channels.

To start, I derived myDynamicRTSPServer from RTSPServer. I also subclassed 
myDynamicRTSPClientSession from RTSPServer::RTSPClientSession within 
myDynamicRTSPServer and implemented the handleCmd_PLAY and handleCmd_TEARDOWN 
virtual methods. Additionally, I implemented the lookupServerMediaSession() 
virtual method in myDynamicRTSPServer. In lookupServerMediaSession(), if the 
channel name in the URI passed to this method does not yet exist, I create a 
ServerMediaSession and call addSubsession() for the codec and data port on my 
video board corresponding to the channel specified in the URI. Then, when 
handleCmd_PLAY() or handleCmd_TEARDOWN() is called, I start/stop my video 
encoder for that particular video data port. Lastly, when doGetNextFrame() is 
called in my subclassed VideoStreamFramer, I take the frame provided by my 
video encoder, copy it into the fTo buffer, and call afterGetting() to inform 
the sink that the frame was obtained.
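To show what I mean by the lazy lookup, here is a self-contained sketch of the flow (the class and member names below are illustrative stand-ins, not the real Live555 API): the session for a stream name is created only on the first request for that name.

```cpp
#include <cstddef>
#include <map>
#include <string>

// Stand-in for Live555's ServerMediaSession (illustrative only).
struct FakeServerMediaSession {
    std::string streamName;
    explicit FakeServerMediaSession(const std::string& name) : streamName(name) {}
};

class FakeDynamicRTSPServer {
public:
    // Mirrors the idea of lookupServerMediaSession(): create the session
    // for this channel only on the first request for that stream name.
    FakeServerMediaSession* lookupServerMediaSession(const std::string& streamName) {
        std::map<std::string, FakeServerMediaSession*>::iterator it =
            fSessions.find(streamName);
        if (it != fSessions.end()) return it->second;  // already created
        FakeServerMediaSession* sms = new FakeServerMediaSession(streamName);
        // Real code would call sms->addSubsession(...) here for the
        // channel's codec and data port.
        fSessions[streamName] = sms;
        return sms;
    }
    std::size_t numSessions() const { return fSessions.size(); }
private:
    std::map<std::string, FakeServerMediaSession*> fSessions;
};
```

Repeated lookups of the same name return the same session object; a new name creates a new one.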

My Questions are:

1) What is the relationship between ServerMediaSession and 
ServerMediaSubsession, and what is the role of each class? My understanding is 
that, in the video-only case, the relationship is one-to-one: I create a 
ServerMediaSession for the particular video port channel requested by the RTSP 
DESCRIBE command, then add my ServerMediaSubsession (the associated video 
encoder) to that ServerMediaSession. I also think that if I had audio I would 
add an audio ServerMediaSubsession. But can more than one audio or video 
ServerMediaSubsession be associated with a single ServerMediaSession? If yes, 
what is the purpose of that?
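To make the question concrete, here is a self-contained mock (illustrative names, not the real Live555 classes) of the one-session-to-many-subsessions arrangement I am asking about, with one subsession per track:

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Stand-in for ServerMediaSubsession: one per track (video, audio, ...).
struct MockSubsession {
    std::string trackKind;  // e.g. "video" or "audio"
    explicit MockSubsession(const std::string& kind) : trackKind(kind) {}
};

// Stand-in for ServerMediaSession: owns all the tracks of one stream name.
class MockSession {
public:
    explicit MockSession(const std::string& streamName) : fStreamName(streamName) {}
    void addSubsession(std::unique_ptr<MockSubsession> sub) {
        fSubsessions.push_back(std::move(sub));  // session holds many tracks
    }
    std::size_t numSubsessions() const { return fSubsessions.size(); }
    const std::string& streamName() const { return fStreamName; }
private:
    std::string fStreamName;
    std::vector<std::unique_ptr<MockSubsession>> fSubsessions;
};
```

In this picture a video+audio stream would be one MockSession holding two MockSubsessions.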

2) I need to know when to start/stop my video encoder. Currently, I do this by 
subclassing RTSPServer::RTSPClientSession and handling handleCmd_PLAY and 
handleCmd_TEARDOWN. Is this the correct place, or should I instead subclass the 
ServerMediaSubsession class and override the startStream() and deleteStream() 
virtual methods?
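What I have in mind for the subsession-based alternative, as a self-contained sketch (EncoderControl and the simplified signatures below are stand-ins, not the real OnDemandServerMediaSubsession API): start the hardware encoder on the first startStream() for the track and stop it when the last client's deleteStream() arrives.

```cpp
#include <set>

// Stand-in for my board's encoder control for one video data port.
struct EncoderControl {
    bool running = false;
    void start() { running = true; }
    void stop()  { running = false; }
};

class EncoderBackedSubsession {
public:
    explicit EncoderBackedSubsession(EncoderControl& enc) : fEncoder(enc) {}
    void startStream(unsigned clientSessionId) {
        if (fClients.empty()) fEncoder.start();  // first viewer: power the port on
        fClients.insert(clientSessionId);
    }
    void deleteStream(unsigned clientSessionId) {
        fClients.erase(clientSessionId);
        if (fClients.empty()) fEncoder.stop();   // last viewer gone: power it off
    }
private:
    EncoderControl& fEncoder;
    std::set<unsigned> fClients;  // clientSessionIds currently streaming
};
```

The reference counting matters because several clients can stream the same channel at once, and the encoder should only stop when the last one tears down.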

3) How does the clientSessionId that is passed to the various xxxStream() 
methods (e.g. startStream(), deleteStream()) relate to the particular URI 
associated with a ServerMediaSession?
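My current mental model, as an illustrative mock (not Live555 code): the server hands out one id per client session at SETUP time, and the stream name from the URI can then be recovered by remembering which session that id was set up against. I would like to know if this picture is right.

```cpp
#include <map>
#include <string>

class SessionIdRegistry {
public:
    // Called at SETUP time: associate a fresh id with the URI's stream name.
    unsigned setupClient(const std::string& streamName) {
        unsigned id = ++fNextId;  // a real server would use random ids
        fIdToStreamName[id] = streamName;
        return id;
    }
    // Later, e.g. inside startStream()/deleteStream(), the id alone is
    // enough to recover which stream (URI) the client set up.
    std::string streamNameFor(unsigned clientSessionId) const {
        std::map<unsigned, std::string>::const_iterator it =
            fIdToStreamName.find(clientSessionId);
        return it == fIdToStreamName.end() ? std::string() : it->second;
    }
private:
    unsigned fNextId = 0;
    std::map<unsigned, std::string> fIdToStreamName;
};
```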

4) Lastly, how do I relate doGetNextFrame(), which the framer calls to get the 
next video frame, to the channel name that was specified in the URI of the RTSP 
PLAY command?
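The approach I am considering, sketched with stand-in class names (not the real FramedSource API): bind the source to its channel when it is created for a given subsession, so that doGetNextFrame() already knows which encoder port to read and never has to parse the URI itself.

```cpp
#include <string>

// Stand-in for one hardware encoder channel on my video board.
struct ChannelEncoder {
    std::string channelName;
    int framesDelivered = 0;
    explicit ChannelEncoder(const std::string& name) : channelName(name) {}
    int grabFrame() { return ++framesDelivered; }  // stand-in for real capture
};

// Stand-in for my subclassed framer/source, bound to one channel at creation.
class ChannelFramedSource {
public:
    explicit ChannelFramedSource(ChannelEncoder& enc) : fEncoder(enc) {}
    // Analogue of doGetNextFrame(): pull from the channel bound at creation.
    int doGetNextFrame() { return fEncoder.grabFrame(); }
    const std::string& channel() const { return fEncoder.channelName; }
private:
    ChannelEncoder& fEncoder;
};
```

Is this binding-at-creation the intended way to carry the channel name down to the frame-delivery level?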

Thanks for your help.
_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
