Thanks Ross,

Attached is the testRTSPClient.cpp that will crash.

If you use rtsp://192.168.1.1:8554/main as a source,
then you will see heap corruption.

If you comment out the Medium::close(rtspClient); on line 398, then it will no longer crash, but you will leak a lot of resources.

Unfortunately, "exit()" is hiding a lot of memory leaks and other problems.

PS, I am running this on a Windows machine.

Regards,
Gord.


-----Original Message----- From: live-devel-requ...@ns.live555.com
Sent: Wednesday, April 18, 2012 1:15 AM
To: live-de...@ns.live555.com
Subject: live-devel Digest, Vol 102, Issue 17

Send live-devel mailing list submissions to
live-devel@lists.live555.com

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.live555.com/mailman/listinfo/live-devel
or, via email, send a message with subject or body 'help' to
live-devel-requ...@lists.live555.com

You can reach the person managing the list at
live-devel-ow...@lists.live555.com

When replying, please edit your Subject line so it is more specific
than "Re: Contents of live-devel digest..."


Today's Topics:

  1. Receive data from live server (i m what i m ~~~~)
  2. Re: Receive data from live server (Ross Finlayson)
  3. Re: ONVIF RTSP extension : Audio Backchannel Handling
     (Yuri Timenkov)
  4. Re: Receive data from live server (i m what i m ~~~~)
  5. testRTSPClient heap corruption (Gord Umphrey)
  6. Re: Receive data from live server (Ross Finlayson)
  7. Re: testRTSPClient heap corruption (Ross Finlayson)


----------------------------------------------------------------------

Message: 1
Date: Mon, 16 Apr 2012 15:02:00 +0530
From: "i m what i m ~~~~" <trn200...@gmail.com>
To: live-de...@ns.live555.com, "LIVE555 Streaming Media - development
& use" <live-de...@ns.live555.com>
Subject: [Live-devel] Receive data from live server
Message-ID:
<CAJmxhGUsN4Lg-Q=xq1dcfeew7q7wgiwafyssnpjvf1x-fm7...@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Hello Sir,
I made a C++ program (using sockets) to receive *.264 streaming from the RTSP
server (RTSP server.cpp), but it does not receive anything. When I use the
same code to receive streaming from the VLC player, it works well.

Can you suggest where I am going wrong, or any pseudocode?

Thank you.

------------------------------

Message: 2
Date: Tue, 17 Apr 2012 09:16:34 +1000
From: Ross Finlayson <finlay...@live555.com>
To: LIVE555 Streaming Media - development & use
<live-de...@ns.live555.com>
Subject: Re: [Live-devel] Receive data from live server
Message-ID: <1c13a0fb-925a-4053-959f-621434ec5...@live555.com>
Content-Type: text/plain; charset="iso-8859-1"

I made a C++ program (using sockets) to receive *.264 streaming from the RTSP server (RTSP server.cpp), but it does not receive anything. When I use the same code to receive streaming from the VLC player, it works well.

Can you suggest where I am going wrong, or any pseudocode?

Look at the "testRTSPClient" demo application.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


------------------------------

Message: 3
Date: Tue, 17 Apr 2012 12:25:17 +0400
From: Yuri Timenkov <yuri.timen...@itv.ru>
To: LIVE555 Streaming Media - development & use
<live-de...@ns.live555.com>
Cc: Ross Finlayson <finlay...@live555.com>
Subject: Re: [Live-devel] ONVIF RTSP extension : Audio Backchannel
Handling
Message-ID: <4f8d28ed.6090...@itv.ru>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Hi Ross,

Actually, the "extension" is implemented in terms of RFC 2326, by using the
"Require" header (12.32 Require).
So the question is how to get these "Require" headers from an OPTIONS
request and respond with "200 OK" or "551 Option not supported".

The second part of the question is how to make a track with the "a=sendonly"
attribute and accept data. Reverse media transport is part of the RTP
standard and is used in the SIP protocol. I suppose the only complication is
when the backchannel is interleaved over TCP, but liveMedia should handle it to
process RTCP feedback.
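
A sketch of the exchange in question, per RFC 2326 (12.32 Require; 12.42 Unsupported); the feature-tag name shown here is only illustrative, not something LIVE555 defines:

```
C->S: OPTIONS rtsp://camera.example/media RTSP/1.0
      CSeq: 1
      Require: www.onvif.org/ver20/backchannel

S->C: RTSP/1.0 551 Option not supported
      CSeq: 1
      Unsupported: www.onvif.org/ver20/backchannel
```

A server that does support the tag would instead answer 200 OK and omit the Unsupported header.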

Best wishes,
Yuri

On 14.04.2012 12:53, Ross Finlayson wrote:
Because - as you noted - these extensions are not part of the RTSP
standard, we don't support them.  Also, unfortunately, it would not be
possible to support them without modifying (not just subclassing) the
existing LIVE555 library code.  In particular, you would need to
reimplement the "MediaSubsession::initiate()" function.  For
'backchannel' subsessions, you would need to create an appropriate
"RTPSink" subclass, rather than the "RTPSource" subclass that the
existing code always creates.

Ross Finlayson
Live Networks, Inc.
http://www.live555.com/




------------------------------

Message: 4
Date: Tue, 17 Apr 2012 12:41:41 +0530
From: "i m what i m ~~~~" <trn200...@gmail.com>
To: "LIVE555 Streaming Media - development & use"
<live-de...@ns.live555.com>
Subject: Re: [Live-devel] Receive data from live server
Message-ID:
<cajmxhgvtjwvwnfcroojkzdcv8eucbtnksutwzow_gjogsvi...@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Sir, I don't want to use the LIVE555 media library; rather, I want to receive the
data through sockets.

------------------------------

Message: 5
Date: Tue, 17 Apr 2012 08:42:03 -0400
From: "Gord Umphrey" <go...@dvr2010.com>
To: <live-de...@ns.live555.com>
Subject: [Live-devel] testRTSPClient heap corruption
Message-ID: <E68B0D6CF2C84CCEBC5C6EFAFE3EB951@SmokeyPC>
Content-Type: text/plain; charset="utf-8"

Hi;

If you attempt to receive a stream from an invalid source (e.g. rtsp://non-existing_IP:554/main),
then the testRTSPClient application will crash.

It seems that calling Medium::close() is the culprit. Medium::close() works fine when connected to a valid stream, but will cause heap corruption when not streaming.

Steps to reproduce:

Two modifications are required in testRTSPClient.cpp to show the problem:

1.) In main(), follow the comments at the end of the procedure, i.e. comment out the "return 0;", and uncomment the last two lines.
2.) At the end of the shutdown() procedure, comment out the exit(), and replace it with "eventLoopWatchVariable = 1;".

Then just stream to a non-existing source.

The reason this is not showing up more frequently is that the exit() routine bypasses the run-time memory checks and just exits.

A quick work-around would be greatly appreciated!!

Regards,

Gord.

------------------------------

Message: 6
Date: Wed, 18 Apr 2012 14:45:42 +1000
From: Ross Finlayson <finlay...@live555.com>
To: LIVE555 Streaming Media - development & use
<live-de...@ns.live555.com>
Subject: Re: [Live-devel] Receive data from live server
Message-ID: <3b37a4aa-cc27-4154-b1dd-f54440763...@live555.com>
Content-Type: text/plain; charset="iso-8859-1"

Sir, I don't want to use the LIVE555 media library; rather, I want to receive the data through sockets.

Well, if you don't want to use our software, then your question is off-topic for this mailing list. You'll need to ask somewhere else. Sorry.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


------------------------------

Message: 7
Date: Wed, 18 Apr 2012 15:15:32 +1000
From: Ross Finlayson <finlay...@live555.com>
To: LIVE555 Streaming Media - development & use
<live-de...@ns.live555.com>
Subject: Re: [Live-devel] testRTSPClient heap corruption
Message-ID: <526f6d4b-7b7e-45cd-881b-6e8ad3115...@live555.com>
Content-Type: text/plain; charset="windows-1252"

If you attempt to receive a stream from an invalid source (e.g. rtsp://non-existing_IP:554/main),
then the testRTSPClient application will crash.

It seems that calling Medium::close() is the culprit. Medium::close() works fine when connected to a valid stream, but will cause heap corruption when not streaming.

Steps to reproduce:

Two modifications are required in testRTSPClient.cpp to show the problem:

1.) In main(), follow the comments at the end of the procedure, i.e. comment out the "return 0;", and uncomment the last two lines.
2.) At the end of the shutdown() procedure, comment out the exit(), and replace it with "eventLoopWatchVariable = 1;".

Then just stream to a non-existing source.

Sorry, but I can't reproduce this. However, I think you may be misunderstanding the purpose of the two lines:

env->reclaim(); env = NULL;
delete scheduler; scheduler = NULL;

You should execute those lines only if you're *completely done* with the "UsageEnvironment" and "TaskScheduler" objects; i.e. if you don't plan to execute any more "LIVE555 Streaming Media" library code. In particular, you should not be calling "Medium::close()" at all after this point (and that function won't get called from the event loop anymore - because you're no longer in the event loop).

So where exactly is this erroneous call to "Medium::close()" happening?


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


------------------------------

_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


End of live-devel Digest, Vol 102, Issue 17
*******************************************
/**********
This library is free software; you can redistribute it and/or modify it
under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 2.1 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)

This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS
FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for
more details.

You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation,
Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301  USA
**********/
// Copyright (c) 1996-2012, Live Networks, Inc.  All rights reserved
// A demo application, showing how to create and run a RTSP client (that can potentially receive multiple streams concurrently).
//
// NOTE: This code - although it builds a running application - is intended only to illustrate how to develop your own RTSP
// client application.  For a full-featured RTSP client application - with much more functionality, and many options - see
// "openRTSP": http://www.live555.com/openRTSP/

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

// Forward function definitions:

// RTSP 'response handlers':
void continueAfterDESCRIBE(RTSPClient* rtspClient, int resultCode, char* resultString);
void continueAfterSETUP(RTSPClient* rtspClient, int resultCode, char* resultString);
void continueAfterPLAY(RTSPClient* rtspClient, int resultCode, char* resultString);

// Other event handler functions:
void subsessionAfterPlaying(void* clientData); // called when a stream's subsession (e.g., audio or video substream) ends
void subsessionByeHandler(void* clientData); // called when a RTCP "BYE" is received for a subsession
void streamTimerHandler(void* clientData);
  // called at the end of a stream's expected duration (if the stream has not already signaled its end using a RTCP "BYE")

// The main streaming routine (for each "rtsp://" URL):
void openURL(UsageEnvironment& env, char const* progName, char const* rtspURL);

// Used to iterate through each stream's 'subsessions', setting up each one:
void setupNextSubsession(RTSPClient* rtspClient);

// Used to shut down and close a stream (including its "RTSPClient" object):
void shutdownStream(RTSPClient* rtspClient, int exitCode = 1);

// A function that outputs a string that identifies each stream (for debugging output).  Modify this if you wish:
UsageEnvironment& operator<<(UsageEnvironment& env, const RTSPClient& rtspClient) {
  return env << "[URL:\"" << rtspClient.url() << "\"]: ";
}

// A function that outputs a string that identifies each subsession (for debugging output).  Modify this if you wish:
UsageEnvironment& operator<<(UsageEnvironment& env, const MediaSubsession& subsession) {
  return env << subsession.mediumName() << "/" << subsession.codecName();
}

void usage(UsageEnvironment& env, char const* progName) {
 env << "Usage: " << progName << " <rtsp-url-1> ... <rtsp-url-N>\n";
 env << "\t(where each <rtsp-url-i> is a \"rtsp://\" URL)\n";
}

char eventLoopWatchVariable = 0;

int main(int argc, char** argv)
{
 // Begin by setting up our usage environment:
 TaskScheduler* scheduler = BasicTaskScheduler::createNew();
 UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

 // We need at least one "rtsp://" URL argument:
 if (argc < 2) {
   usage(*env, argv[0]);
   return 1;
 }

  // There are argc-1 URLs: argv[1] through argv[argc-1].  Open and start streaming each one:
  for (int i = 1; i <= argc-1; ++i) {
    openURL(*env, argv[0], argv[i]);
  }

  // All subsequent activity takes place within the event loop:
  env->taskScheduler().doEventLoop(&eventLoopWatchVariable);
    // This function call does not return, unless, at some point in time, "eventLoopWatchVariable" gets set to something non-zero.

  // return 0;

  // If you choose to continue the application past this point (i.e., if you comment out the "return 0;" statement above),
  // and if you don't intend to do anything more with the "TaskScheduler" and "UsageEnvironment" objects,
  // then you can also reclaim the (small) memory used by these objects by uncommenting the following code:

  env->reclaim(); env = NULL;
  delete scheduler; scheduler = NULL;

}

// Define a class to hold per-stream state that we maintain throughout each stream's lifetime:

class StreamClientState {
public:
 StreamClientState();
 virtual ~StreamClientState();

public:
 MediaSubsessionIterator* iter;
 MediaSession* session;
 MediaSubsession* subsession;
 TaskToken streamTimerTask;
 double duration;
};

// If you're streaming just a single stream (i.e., just from a single URL, once), then you can define and use just a single
// "StreamClientState" structure, as a global variable in your application.  However, because - in this demo application - we're
// showing how to play multiple streams, concurrently, we can't do that.  Instead, we have to have a separate "StreamClientState"
// structure for each "RTSPClient".  To do this, we subclass "RTSPClient", and add a "StreamClientState" field to the subclass:

class ourRTSPClient: public RTSPClient {
public:
  static ourRTSPClient* createNew(UsageEnvironment& env, char const* rtspURL,
                                  int verbosityLevel = 0,
                                  char const* applicationName = NULL,
                                  portNumBits tunnelOverHTTPPortNum = 0);

protected:
  ourRTSPClient(UsageEnvironment& env, char const* rtspURL,
                int verbosityLevel, char const* applicationName, portNumBits tunnelOverHTTPPortNum);
    // called only by createNew();
 virtual ~ourRTSPClient();

public:
 StreamClientState scs;
};

// Define a data sink (a subclass of "MediaSink") to receive the data for each subsession (i.e., each audio or video 'substream').
// In practice, this might be a class (or a chain of classes) that decodes and then renders the incoming audio or video.
// Or it might be a "FileSink", for outputting the received data into a file (as is done by the "openRTSP" application).
// In this example code, however, we define a simple 'dummy' sink that receives incoming data, but does nothing with it.

class DummySink: public MediaSink {
public:
  static DummySink* createNew(UsageEnvironment& env,
                              MediaSubsession& subsession, // identifies the kind of data that's being received
                              char const* streamId = NULL); // identifies the stream itself (optional)

private:
  DummySink(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId);
    // called only by "createNew()"
  virtual ~DummySink();

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds);
  void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                         struct timeval presentationTime, unsigned durationInMicroseconds);

private:
 // redefined virtual functions:
 virtual Boolean continuePlaying();

private:
 u_int8_t* fReceiveBuffer;
 MediaSubsession& fSubsession;
 char* fStreamId;
};

#define RTSP_CLIENT_VERBOSITY_LEVEL 1 // by default, print verbose output from each "RTSPClient"

static unsigned rtspClientCount = 0; // Counts how many streams (i.e., "RTSPClient"s) are currently in use.

void openURL(UsageEnvironment& env, char const* progName, char const* rtspURL) {
  // Begin by creating a "RTSPClient" object.  Note that there is a separate "RTSPClient" object for each stream that we wish
  // to receive (even if more than one stream uses the same "rtsp://" URL).
  RTSPClient* rtspClient = ourRTSPClient::createNew(env, rtspURL, RTSP_CLIENT_VERBOSITY_LEVEL, progName);
  if (rtspClient == NULL) {
    env << "Failed to create a RTSP client for URL \"" << rtspURL << "\": " << env.getResultMsg() << "\n";
    return;
  }

  ++rtspClientCount;

  // Next, send a RTSP "DESCRIBE" command, to get a SDP description for the stream.
  // Note that this command - like all RTSP commands - is sent asynchronously; we do not block, waiting for a response.
  // Instead, the following function call returns immediately, and we handle the RTSP response later, from within the event loop:
  rtspClient->sendDescribeCommand(continueAfterDESCRIBE);
}


// Implementation of the RTSP 'response handlers':

void continueAfterDESCRIBE(RTSPClient* rtspClient, int resultCode, char* resultString) {
  do {
    UsageEnvironment& env = rtspClient->envir(); // alias
    StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

    if (resultCode != 0) {
      env << *rtspClient << "Failed to get a SDP description: " << resultString << "\n";
      break;
    }

    char* const sdpDescription = resultString;
    env << *rtspClient << "Got a SDP description:\n" << sdpDescription << "\n";

    // Create a media session object from this SDP description:
    scs.session = MediaSession::createNew(env, sdpDescription);
    delete[] sdpDescription; // because we don't need it anymore
    if (scs.session == NULL) {
      env << *rtspClient << "Failed to create a MediaSession object from the SDP description: " << env.getResultMsg() << "\n";
      break;
    } else if (!scs.session->hasSubsessions()) {
      env << *rtspClient << "This session has no media subsessions (i.e., no \"m=\" lines)\n";
      break;
    }

    // Then, create and set up our data source objects for the session.  We do this by iterating over the session's 'subsessions',
    // calling "MediaSubsession::initiate()", and then sending a RTSP "SETUP" command, on each one.
    // (Each 'subsession' will have its own data source.)
    scs.iter = new MediaSubsessionIterator(*scs.session);
    setupNextSubsession(rtspClient);
    return;
  } while (0);

  // An unrecoverable error occurred with this stream.
  shutdownStream(rtspClient);
}

void setupNextSubsession(RTSPClient* rtspClient) {
  UsageEnvironment& env = rtspClient->envir(); // alias
  StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

  scs.subsession = scs.iter->next();
  if (scs.subsession != NULL) {
    if (!scs.subsession->initiate()) {
      env << *rtspClient << "Failed to initiate the \"" << *scs.subsession << "\" subsession: " << env.getResultMsg() << "\n";
      setupNextSubsession(rtspClient); // give up on this subsession; go to the next one
    } else {
      env << *rtspClient << "Initiated the \"" << *scs.subsession
          << "\" subsession (client ports " << scs.subsession->clientPortNum()
          << "-" << scs.subsession->clientPortNum()+1 << ")\n";

      // Continue setting up this subsession, by sending a RTSP "SETUP" command:
      rtspClient->sendSetupCommand(*scs.subsession, continueAfterSETUP);
    }
    return;
  }

  // We've finished setting up all of the subsessions.  Now, send a RTSP "PLAY" command to start the streaming:
  scs.duration = scs.session->playEndTime() - scs.session->playStartTime();
  rtspClient->sendPlayCommand(*scs.session, continueAfterPLAY);
}

void continueAfterSETUP(RTSPClient* rtspClient, int resultCode, char* resultString) {
  do {
    UsageEnvironment& env = rtspClient->envir(); // alias
    StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

    if (resultCode != 0) {
      env << *rtspClient << "Failed to set up the \"" << *scs.subsession << "\" subsession: " << env.getResultMsg() << "\n";
      break;
    }

    env << *rtspClient << "Set up the \"" << *scs.subsession
        << "\" subsession (client ports " << scs.subsession->clientPortNum()
        << "-" << scs.subsession->clientPortNum()+1 << ")\n";

    // Having successfully set up the subsession, create a data sink for it, and call "startPlaying()" on it.
    // (This will prepare the data sink to receive data; the actual flow of data from the client won't start happening until later,
    // after we've sent a RTSP "PLAY" command.)

    scs.subsession->sink = DummySink::createNew(env, *scs.subsession, rtspClient->url());
      // perhaps use your own custom "MediaSink" subclass instead
    if (scs.subsession->sink == NULL) {
      env << *rtspClient << "Failed to create a data sink for the \"" << *scs.subsession
          << "\" subsession: " << env.getResultMsg() << "\n";
      break;
    }

    env << *rtspClient << "Created a data sink for the \"" << *scs.subsession << "\" subsession\n";
    scs.subsession->miscPtr = rtspClient; // a hack to let subsession handler functions get the "RTSPClient" from the subsession
    scs.subsession->sink->startPlaying(*(scs.subsession->readSource()),
                                       subsessionAfterPlaying, scs.subsession);
    // Also set a handler to be called if a RTCP "BYE" arrives for this subsession:
    if (scs.subsession->rtcpInstance() != NULL) {
      scs.subsession->rtcpInstance()->setByeHandler(subsessionByeHandler, scs.subsession);
    }
  } while (0);

  // Set up the next subsession, if any:
  setupNextSubsession(rtspClient);
}

void continueAfterPLAY(RTSPClient* rtspClient, int resultCode, char* resultString) {
  do {
    UsageEnvironment& env = rtspClient->envir(); // alias
    StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

    if (resultCode != 0) {
      env << *rtspClient << "Failed to start playing session: " << resultString << "\n";
      break;
    }

    // Set a timer to be handled at the end of the stream's expected duration (if the stream does not already signal its end
    // using a RTCP "BYE").  This is optional.  If, instead, you want to keep the stream active - e.g., so you can later
    // 'seek' back within it and do another RTSP "PLAY" - then you can omit this code.
    // (Alternatively, if you don't want to receive the entire stream, you could set this timer for some shorter value.)
    if (scs.duration > 0) {
      unsigned const delaySlop = 2; // number of seconds extra to delay, after the stream's expected duration.  (This is optional.)
      scs.duration += delaySlop;
      unsigned uSecsToDelay = (unsigned)(scs.duration*1000000);
      scs.streamTimerTask = env.taskScheduler().scheduleDelayedTask(uSecsToDelay, (TaskFunc*)streamTimerHandler, rtspClient);
    }

    env << *rtspClient << "Started playing session";
    if (scs.duration > 0) {
      env << " (for up to " << scs.duration << " seconds)";
    }
    env << "...\n";

    return;
  } while (0);

  // An unrecoverable error occurred with this stream.
  shutdownStream(rtspClient);
}


// Implementation of the other event handlers:

void subsessionAfterPlaying(void* clientData) {
 MediaSubsession* subsession = (MediaSubsession*)clientData;
 RTSPClient* rtspClient = (RTSPClient*)(subsession->miscPtr);

 // Begin by closing this subsession's stream:
 Medium::close(subsession->sink);
 subsession->sink = NULL;

 // Next, check whether *all* subsessions' streams have now been closed:
 MediaSession& session = subsession->parentSession();
 MediaSubsessionIterator iter(session);
 while ((subsession = iter.next()) != NULL) {
   if (subsession->sink != NULL) return; // this subsession is still active
 }

 // All subsessions' streams have now been closed, so shutdown the client:
 shutdownStream(rtspClient);
}

void subsessionByeHandler(void* clientData) {
 MediaSubsession* subsession = (MediaSubsession*)clientData;
 RTSPClient* rtspClient = (RTSPClient*)subsession->miscPtr;
 UsageEnvironment& env = rtspClient->envir(); // alias

  env << *rtspClient << "Received RTCP \"BYE\" on \"" << *subsession << "\" subsession\n";

 // Now act as if the subsession had closed:
 subsessionAfterPlaying(subsession);
}

void streamTimerHandler(void* clientData) {
 ourRTSPClient* rtspClient = (ourRTSPClient*)clientData;
 StreamClientState& scs = rtspClient->scs; // alias

 scs.streamTimerTask = NULL;

 // Shut down the stream:
 shutdownStream(rtspClient);
}

void shutdownStream(RTSPClient* rtspClient, int exitCode) {
  UsageEnvironment& env = rtspClient->envir(); // alias
  StreamClientState& scs = ((ourRTSPClient*)rtspClient)->scs; // alias

  // First, check whether any subsessions have still to be closed:
  if (scs.session != NULL) {
    Boolean someSubsessionsWereActive = False;
    MediaSubsessionIterator iter(*scs.session);
    MediaSubsession* subsession;

    while ((subsession = iter.next()) != NULL) {
      if (subsession->sink != NULL) {
        Medium::close(subsession->sink);
        subsession->sink = NULL;

        if (subsession->rtcpInstance() != NULL) {
          subsession->rtcpInstance()->setByeHandler(NULL, NULL); // in case the server sends a RTCP "BYE" while handling "TEARDOWN"
        }

        someSubsessionsWereActive = True;
      }
    }

    if (someSubsessionsWereActive) {
      // Send a RTSP "TEARDOWN" command, to tell the server to shutdown the stream.
      // Don't bother handling the response to the "TEARDOWN".
      rtspClient->sendTeardownCommand(*scs.session, NULL);
    }
  }

  env << *rtspClient << "Closing the stream.\n";
  Medium::close(rtspClient);
    // Note that this will also cause this stream's "StreamClientState" structure to get reclaimed.

  if (--rtspClientCount == 0) {
    // The final stream has ended, so exit the application now.
    // (Of course, if you're embedding this code into your own application, you might want to comment this out,
    // and replace it with "eventLoopWatchVariable = 1;", so that we leave the LIVE555 event loop, and continue running "main()".)
    eventLoopWatchVariable = 1;
    //exit(exitCode);
  }
}


// Implementation of "ourRTSPClient":

ourRTSPClient* ourRTSPClient::createNew(UsageEnvironment& env, char const* rtspURL,
                                        int verbosityLevel, char const* applicationName, portNumBits tunnelOverHTTPPortNum) {
  return new ourRTSPClient(env, rtspURL, verbosityLevel, applicationName, tunnelOverHTTPPortNum);
}

ourRTSPClient::ourRTSPClient(UsageEnvironment& env, char const* rtspURL,
                             int verbosityLevel, char const* applicationName, portNumBits tunnelOverHTTPPortNum)
  : RTSPClient(env, rtspURL, verbosityLevel, applicationName, tunnelOverHTTPPortNum) {
}

ourRTSPClient::~ourRTSPClient() {
}


// Implementation of "StreamClientState":

StreamClientState::StreamClientState()
  : iter(NULL), session(NULL), subsession(NULL), streamTimerTask(NULL), duration(0.0) {
}

StreamClientState::~StreamClientState() {
  delete iter;
  if (session != NULL) {
    // We also need to delete "session", and unschedule "streamTimerTask" (if set):
   UsageEnvironment& env = session->envir(); // alias

   env.taskScheduler().unscheduleDelayedTask(streamTimerTask);
   Medium::close(session);
 }
}


// Implementation of "DummySink":

// Even though we're not going to be doing anything with the incoming data, we still need to receive it.
// Define the size of the buffer that we'll use:
#define DUMMY_SINK_RECEIVE_BUFFER_SIZE 100000

DummySink* DummySink::createNew(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId) {
  return new DummySink(env, subsession, streamId);
}

DummySink::DummySink(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId)
  : MediaSink(env),
    fSubsession(subsession) {
  fStreamId = strDup(streamId);
  fReceiveBuffer = new u_int8_t[DUMMY_SINK_RECEIVE_BUFFER_SIZE];
}

DummySink::~DummySink() {
 delete[] fReceiveBuffer;
 delete[] fStreamId;
}

void DummySink::afterGettingFrame(void* clientData, unsigned frameSize, unsigned numTruncatedBytes,
                                  struct timeval presentationTime, unsigned durationInMicroseconds) {
  DummySink* sink = (DummySink*)clientData;
  sink->afterGettingFrame(frameSize, numTruncatedBytes, presentationTime, durationInMicroseconds);
}

// If you don't want to see debugging output for each received frame, then comment out the following line:
#define DEBUG_PRINT_EACH_RECEIVED_FRAME 1

void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                  struct timeval presentationTime, unsigned /*durationInMicroseconds*/) {
  // We've just received a frame of data.  (Optionally) print out information about it:
#ifdef DEBUG_PRINT_EACH_RECEIVED_FRAME
  if (fStreamId != NULL) envir() << "Stream \"" << fStreamId << "\"; ";
  envir() << fSubsession.mediumName() << "/" << fSubsession.codecName() << ":\tReceived " << frameSize << " bytes";
  if (numTruncatedBytes > 0) envir() << " (with " << numTruncatedBytes << " bytes truncated)";
  char uSecsStr[6+1]; // used to output the 'microseconds' part of the presentation time
  sprintf(uSecsStr, "%06u", (unsigned)presentationTime.tv_usec);
  envir() << ".\tPresentation time: " << (unsigned)presentationTime.tv_sec << "." << uSecsStr;
  if (fSubsession.rtpSource() != NULL && !fSubsession.rtpSource()->hasBeenSynchronizedUsingRTCP()) {
    envir() << "!"; // mark the debugging output to indicate that this presentation time is not RTCP-synchronized
  }
  envir() << "\n";
#endif

  // Then continue, to request the next frame of data:
  continuePlaying();
}

Boolean DummySink::continuePlaying() {
 if (fSource == NULL) return False; // sanity check (should not happen)

  // Request the next frame of data from our input source.  "afterGettingFrame()" will get called later, when it arrives:
 fSource->getNextFrame(fReceiveBuffer, DUMMY_SINK_RECEIVE_BUFFER_SIZE,
                       afterGettingFrame, this,
                       onSourceClosure, this);
 return True;
}