I would like to provide some feedback on synchronization issues I ran into
with MPEG video and PCM audio. I think it might be useful to other
users of the library.
I have slightly modified two of the test applications included with Live555:
testMPEG1or2VideoStreamer and testMPEG1or2VideoReceiver. Both modified
applications are included with this email.
In testMPEG1or2VideoReceiver I have added logging to a file; logging to the
console was putting too much load on the environment thread and starving
the library of CPU time.
I first start the testMPEG1or2VideoStreamer application. It streams an MPEG-2
video elementary stream and a PCM audio (WAV) file. The audio file must be
48 kHz, 2-channel, 16-bit; otherwise the testMPEG1or2VideoReceiver application
will need to be modified. I have introduced a delay of 3 seconds between the
creation of the library source objects and the actual start of the sinks that
begin the streaming. This closely simulates how our own application uses the
library: it creates the objects when it is instantiated, but only starts the
sinks when the user presses the "play" button.
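For reference, the delay in the attached streamer is introduced with the
library's task scheduler; here is a simplified excerpt of the play()/start()
functions that appear in full further below:

void play() {
  // Create the framer and the byte-swapping audio filter now...
  MpegSource = MPEG1or2VideoStreamFramer::createNew(*env, fileVideoSource, iFramesOnly);
  audioSource = EndianSwap16::createNew(*env, pcmSource);
  // ...but only start the sinks 3 seconds (3,000,000 us) later:
  env->taskScheduler().scheduleDelayedTask(3000000, start, NULL);
}

void start(void*) {
  audioSink->startPlaying(*audioSource, afterPlaying, audioSink);
  videoSink->startPlaying(*MpegSource, afterPlaying, videoSink);
}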
Once testMPEG1or2VideoStreamer begins streaming, I start the second
application, testMPEG1or2VideoReceiver. I have modified it to closely match
the design of the main receiving loop of the VLC media player. The application
receives both the video and audio streams; in this test case the received
data is not stored anywhere.
Here is an excerpt of the log on the receiving client before the
RTCP SR has been received:
Video received 1311 bytes at time 0-00:20:13-702.255
Video received 1339 bytes at time 0-00:20:13-702.255
Video received 1348 bytes at time 0-00:20:13-702.255
Video received 1395 bytes at time 0-00:20:13-702.255
Video received 1068 bytes at time 0-00:20:13-702.255
Video received 506 bytes at time 0-00:20:13-702.255
Audio received 1400 bytes at time 0-00:20:13-720.633
Audio received 1400 bytes at time 0-00:20:13-727.924
Audio received 1400 bytes at time 0-00:20:13-735.215
(...)
At this point the video and audio presentation times are closely in sync;
I assume the client's wall clock is used until the RTCP SR arrives.
Here is an excerpt of the log after the RTCP SR has been received:
Video received 506 bytes at time 0-00:20:13-702.255
Audio received 1400 bytes at time 0-00:20:13-720.633
Audio received 1400 bytes at time 0-00:20:13-727.924
Audio received 1400 bytes at time 0-00:20:13-735.215
Audio hasBeenSynchronizedUsingRTCP()
Audio received 1400 bytes at time 0-00:33:24-123.145
Video hasBeenSynchronizedUsingRTCP()
Video received 1047 bytes at time 0-00:33:21-122.121
Video received 987 bytes at time 0-00:33:21-122.121
Video received 1205 bytes at time 0-00:33:21-122.121
Video received 1152 bytes at time 0-00:33:21-122.121
Audio received 1400 bytes at time 0-00:33:24-130.436 (3 Sec GAP)
Video received 1160 bytes at time 0-00:33:21-122.121
Video received 994 bytes at time 0-00:33:21-122.121
Video received 1007 bytes at time 0-00:33:21-122.121
Video received 559 bytes at time 0-00:33:21-122.121
Audio received 1400 bytes at time 0-00:33:24-137.727
Audio received 1400 bytes at time 0-00:33:24-145.018
Audio received 1400 bytes at time 0-00:33:24-152.309
Video received 1053 bytes at time 0-00:33:21-155.487
Video received 991 bytes at time 0-00:33:21-155.487
As soon as the RTCP SR is received on the client side, the library
resynchronizes the presentation times to the server's wall clock, but a gap
appears between the audio and the video: in the log above the audio jumps to
0-00:33:24 while the video jumps to 0-00:33:21, roughly 3 seconds apart. This
presentation-time gap matches the delay between the creation of the source
objects and the actual start of the sinks.
I noted that MPEGVideoStreamFramer sets the base presentation time when
MPEGVideoStreamFramer::reset() is called. This method is called when the
framer is created and when flushInput() is called; the comment on flushInput()
reads: // called if there is a discontinuity (seeking) in the input.
In our case the MPEG-2 stream came from a live source and flushInput() was
never called, so the base presentation time in effect when MPEG-2 streaming
actually began dated back to the object's creation time. This created a large
gap between the audio and the MPEG-2 video, and VLC media player was baffled
by the presentation times once RTCP synchronization was received: it got lost
trying to regain synchronization.
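For context, here is roughly what the unmodified reset() looks like
(reconstructed from the version of the library I am using; compare with the
modification below). The key point is that gettimeofday() is called here,
i.e. at framer creation or flush time, not when streaming actually starts:

void MPEGVideoStreamFramer::reset() {
  fPictureCount = 0;
  fPictureEndMarker = False;
  fPicturesAdjustment = 0;
  fPictureTimeBase = 0.0;
  fTcSecsBase = 0;
  fHaveSeenFirstTimeCode = False;
  // Use the current wallclock time as the base 'presentation time':
  gettimeofday(&fPresentationTimeBase, NULL);
}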
This gap can be avoided if the library source objects are instantiated only
when the sinks are started, but that was unacceptable in my case.
Alternatively, flushInput() can be called just before starting the sinks;
this is the easiest solution.
A small modification could also be made to the library. Since the computation
of the presentation time by MPEGVideoStreamFramer is internal to the library
and cannot be overridden, I feel the initialization of the base presentation
time should be deferred until computePresentationTime() is first called.
This could be done like this:
void MPEGVideoStreamFramer::reset() {
  fPictureCount = 0;
  fPictureEndMarker = False;
  fPicturesAdjustment = 0;
  fPictureTimeBase = 0.0;
  fTcSecsBase = 0;
  fHaveSeenFirstTimeCode = False;
  fPresentationTimeBase.tv_sec = 0;  // added
  fPresentationTimeBase.tv_usec = 0; // added
  fFlushedInput = True;              // added: new private member of the class
  // (call to gettimeofday() removed from here)
}

void MPEGVideoStreamFramer
::computePresentationTime(unsigned numAdditionalPictures) {
  // Use the current wallclock time as the base 'presentation time',
  // taken the first time this is called after a reset()/flushInput():
  if (fFlushedInput == True) {
    fFlushedInput = False;
    gettimeofday(&fPresentationTimeBase, NULL);
  }
  (...)
}
Alternatively (instead of modifying the library), I suggest simply adding a
call to flushInput() in the test application testMPEG1or2VideoStreamer just
before the sinks are started.
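In the attached streamer this amounts to uncommenting a single line in
start(); the relevant lines are reproduced here:

void start(void *) {
  *env << "Beginning to read from file...\n";
  MpegSource->flushInput(); // re-bases the presentation time just before streaming begins
  audioSink->startPlaying(*audioSource, afterPlaying, audioSink);
  videoSink->startPlaying(*MpegSource, afterPlaying, videoSink);
}

The full modified source of both test applications follows: first the
streamer, then the receiver.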
/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 2.1 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)
This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for
more details.
You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
**********/
// Copyright (c) 1996-2007, Live Networks, Inc. All rights reserved
// A test program that reads an MPEG-1 or 2 Video Elementary Stream file
// and a PCM WAV audio file, and streams both using RTP
// main program
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"
#include "GroupsockHelper.hh"
UsageEnvironment* env;
char const* inputFileName = "test.mpg";
char const* inputFileNamePcm = "test.wav";
WAVAudioFileSource* pcmSource;
ByteStreamFileSource* fileVideoSource;
FramedSource* audioSource;
FramedSource* videoSource;
RTPSink* audioSink;
RTPSink* videoSink;
MPEG1or2VideoStreamFramer* MpegSource;
char* mimeType;
unsigned char payloadFormatCode;
Boolean iFramesOnly = False;
void play(); // forward
void start(void *);
// To stream using "source-specific multicast" (SSM), uncomment the following:
//#define USE_SSM 1
#ifdef USE_SSM
Boolean const isSSM = True;
#else
Boolean const isSSM = False;
#endif
// To set up an internal RTSP server, uncomment the following:
#define IMPLEMENT_RTSP_SERVER 1
// (Note that this RTSP server works for multicast only)
// To stream *only* MPEG "I" frames (e.g., to reduce network bandwidth),
// change the "iFramesOnly" definition above from "False" to "True".
int main(int argc, char** argv) {
// Begin by setting up our usage environment:
TaskScheduler* scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
pcmSource = WAVAudioFileSource::createNew(*env, inputFileNamePcm);
if (pcmSource == NULL) {
*env << "Unable to open file \"" << inputFileNamePcm
<< "\" as a pcm audio file source: "
<< env->getResultMsg() << "\n";
exit(1);
}
// Get attributes of the audio source:
unsigned char const bitsPerSample = pcmSource->bitsPerSample();
if (bitsPerSample != 16) {
*env << "The input file contains " << bitsPerSample
<< " bit-per-sample audio, which we don't handle\n";
exit(1);
}
unsigned const samplingFrequency = pcmSource->samplingFrequency();
unsigned char const numChannels = pcmSource->numChannels();
unsigned bitsPerSecond = samplingFrequency*bitsPerSample*numChannels;
*env << "Audio source parameters:\n\t" << samplingFrequency << " Hz, ";
*env << bitsPerSample << " bits-per-sample, ";
*env << numChannels << " channels => ";
*env << bitsPerSecond << " bits-per-second\n";
mimeType = "L16";
if (samplingFrequency == 44100 && numChannels == 2) {
payloadFormatCode = 10; // a static RTP payload type
} else if (samplingFrequency == 44100 && numChannels == 1) {
payloadFormatCode = 11; // a static RTP payload type
} else {
payloadFormatCode = 96; // a dynamic RTP payload type
}
*env << "Converting to network byte order for streaming\n";
if ( samplingFrequency != 48000 || numChannels != 2 ) {
  *env << "BIG WARNING: The receiver client is hardcoded to PCM 48KHz 2 Channels\n";
  exit(1);
}
// Open the input file as a 'byte-stream file source':
fileVideoSource = ByteStreamFileSource::createNew(*env, inputFileName);
if (fileVideoSource == NULL) {
*env << "Unable to open MPEG video ES file \"" << inputFileName
<< "\" as a byte-stream file source\n";
exit(1);
}
// Create 'groupsocks' for RTP and RTCP:
char* destinationAddressStr
#ifdef USE_SSM
= "232.255.42.42";
#else
= "239.255.42.42";
// Note: This is a multicast address. If you wish to stream using
// unicast instead, then replace this string with the unicast address
// of the (single) destination. (You may also need to make a similar
// change to the receiver program.)
#endif
const unsigned short rtpPortNumAudio = 6666;
const unsigned short rtcpPortNumAudio = rtpPortNumAudio+1;
const unsigned short rtpPortNumVideo = 8888;
const unsigned short rtcpPortNumVideo = rtpPortNumVideo+1;
const unsigned char ttl = 7; // low, in case routers don't admin scope
const unsigned char rtpPayloadType = 96;
struct in_addr destinationAddress;
destinationAddress.s_addr = our_inet_addr(destinationAddressStr);
const Port rtpPortAudio(rtpPortNumAudio);
const Port rtcpPortAudio(rtcpPortNumAudio);
const Port rtpPortVideo(rtpPortNumVideo);
const Port rtcpPortVideo(rtcpPortNumVideo);
Groupsock rtpGroupsockAudio(*env, destinationAddress, rtpPortAudio, ttl);
Groupsock rtcpGroupsockAudio(*env, destinationAddress, rtcpPortAudio, ttl);
Groupsock rtpGroupsockVideo(*env, destinationAddress, rtpPortVideo, ttl);
Groupsock rtcpGroupsockVideo(*env, destinationAddress, rtcpPortVideo, ttl);
#ifdef USE_SSM
rtpGroupsockAudio.multicastSendOnly();
rtcpGroupsockAudio.multicastSendOnly();
rtpGroupsockVideo.multicastSendOnly();
rtcpGroupsockVideo.multicastSendOnly();
#endif
// Create an RTP sink for the PCM (L16) audio from the RTP 'groupsock':
// audioSink = MPEG1or2AudioRTPSink::createNew(*env, &rtpGroupsockAudio);
audioSink = SimpleRTPSink::createNew(*env, &rtpGroupsockAudio,
payloadFormatCode, samplingFrequency,
"audio", mimeType, numChannels);
// Create (and start) a 'RTCP instance' for this RTP sink:
const unsigned estimatedSessionBandwidthAudio = bitsPerSecond/1000; // in kbps; for RTCP b/w share
const unsigned maxCNAMElen = 100;
unsigned char CNAME[maxCNAMElen+1];
gethostname((char*)CNAME, maxCNAMElen);
CNAME[maxCNAMElen] = '\0'; // just in case
#ifdef IMPLEMENT_RTSP_SERVER
RTCPInstance* audioRTCP =
#endif
RTCPInstance::createNew(*env, &rtcpGroupsockAudio,
estimatedSessionBandwidthAudio, CNAME,
audioSink, NULL /* we're a server */, isSSM);
// Note: This starts RTCP running automatically
// Create a 'MPEG Video RTP' sink from the RTP 'groupsock':
videoSink = MPEG1or2VideoRTPSink::createNew(*env, &rtpGroupsockVideo);
// Create (and start) a 'RTCP instance' for this RTP sink:
const unsigned estimatedSessionBandwidthVideo = 4500; // in kbps; for RTCP b/w share
#ifdef IMPLEMENT_RTSP_SERVER
RTCPInstance* videoRTCP =
#endif
RTCPInstance::createNew(*env, &rtcpGroupsockVideo,
estimatedSessionBandwidthVideo, CNAME,
videoSink, NULL /* we're a server */, isSSM);
// Note: This starts RTCP running automatically
#ifdef IMPLEMENT_RTSP_SERVER
RTSPServer* rtspServer = RTSPServer::createNew(*env);
// Note that this (attempts to) start a server on the default RTSP server
// port: 554. To use a different port number, add it as an extra
// (optional) parameter to the "RTSPServer::createNew()" call above.
if (rtspServer == NULL) {
*env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
exit(1);
}
ServerMediaSession* sms
= ServerMediaSession::createNew(*env, "testStream", inputFileName,
"Session streamed by \"testMPEG1or2AudioVideoStreamer\"",
isSSM);
sms->addSubsession(PassiveServerMediaSubsession::createNew(*audioSink,
audioRTCP));
sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink,
videoRTCP));
rtspServer->addServerMediaSession(sms);
char* url = rtspServer->rtspURL(sms);
*env << "Play this stream using the URL \"" << url << "\"\n";
delete[] url;
#endif
// Finally, start the streaming:
*env << "Beginning streaming...\n";
play();
env->taskScheduler().doEventLoop(); // does not return
return 0; // only to prevent compiler warning
}
void afterPlaying(void* clientData) {
// One of the sinks has ended playing.
// Check whether any of the sources have a pending read. If so,
// wait until its sink ends playing also:
// if (audioSource->isCurrentlyAwaitingData()
// || videoSource->isCurrentlyAwaitingData()) return;
// Now that both sinks have ended, close both input sources,
// and start playing again:
*env << "...done reading from file\n";
audioSink->stopPlaying();
videoSink->stopPlaying();
// ensures that both are shut down
Medium::close(audioSource);
Medium::close(MpegSource); // note: "videoSource" is never assigned in this modified program
// Note: This also closes the input file that this source read from.
fileVideoSource = ByteStreamFileSource::createNew(*env, inputFileName);
pcmSource = WAVAudioFileSource::createNew(*env, inputFileNamePcm);
// Start playing once again:
play();
}
void play() {
// We must demultiplex Audio and Video Elementary Streams
// from the input source:
FramedSource* videoES = fileVideoSource;
// Create a framer for each Elementary Stream:
MpegSource = MPEG1or2VideoStreamFramer::createNew(*env, videoES,
iFramesOnly);
audioSource = EndianSwap16::createNew(*env, pcmSource);
/*******************************************************************/
/* Finally, start with some delay (3 seconds) */
/* Ouch...This will create a GAP between audio and video !!!!! */
/* Change 3000000 to 0 and no GAP */
/*******************************************************************/
env->taskScheduler().scheduleDelayedTask(3000000, start, NULL);
}
void start(void *) {
*env << "Beginning to read from file...\n";
//MpegSource->flushInput(); // !!!!! Remove comment to solve delay problem
audioSink->startPlaying(*audioSource, afterPlaying, audioSink);
videoSink->startPlaying(*MpegSource, afterPlaying, videoSink);
}
/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 2.1 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)
This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for
more details.
You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
**********/
// Copyright (c) 1996-2000, Live Networks, Inc. All rights reserved
// A test program that receives RTP/RTCP multicast MPEG video and PCM audio
// streams, and logs each received frame's size and presentation time
// (no data is written out)
// main program
#include "liveMedia.hh"
#include "GroupsockHelper.hh"
#include "BasicUsageEnvironment.hh"
#include "OutputFile.hh"
// To receive a "source-specific multicast" (SSM) stream, uncomment this:
//#define USE_SSM 1
char *mstrtime( char *psz_buffer, int64_t date );
void afterPlaying(void* clientData); // forward
static void FrameRead( void *clientData, unsigned int frameSize,
                       unsigned int numTruncatedBytes,
                       struct timeval presentationTime,
                       unsigned int durationInMicroseconds );
static void FrameClose( void *clientData );
// A structure to hold the state of the current session.
// It is used in the "afterPlaying()" function to clean up the session.
struct sessionState_t {
char* stopScheduler;
unsigned int read;
unsigned int rtcpSync;
unsigned char* buffer;
unsigned int bufSize;
RTPSource* source;
RTCPInstance* rtcpInstance;
} SessionState[2];
UsageEnvironment* env;
FILE* logFile;
int main(int argc, char** argv) {
// Begin by setting up our usage environment:
TaskScheduler* scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
logFile = OpenOutputFile(*env, "LogFile.txt" );
if (logFile == NULL)
exit(1);
char event = false;
SessionState[0].stopScheduler = &event;
SessionState[0].bufSize = 8192;
SessionState[0].buffer = new unsigned char [SessionState[0].bufSize];
SessionState[0].read = True;
SessionState[0].rtcpSync = False;
SessionState[1].stopScheduler = &event;
SessionState[1].bufSize = 8192;
SessionState[1].buffer = new unsigned char [SessionState[1].bufSize];
SessionState[1].read = True;
SessionState[1].rtcpSync = False;
// Create 'groupsocks' for RTP and RTCP:
char* sessionAddressStr
#ifdef USE_SSM
= "232.255.42.42";
#else
= "239.255.42.42";
// Note: If the session is unicast rather than multicast,
// then replace this string with "0.0.0.0"
#endif
const unsigned short rtpPortNumVideo = 8888;
const unsigned short rtcpPortNumVideo = rtpPortNumVideo+1;
const unsigned short rtpPortNumAudio = 6666;
const unsigned short rtcpPortNumAudio = rtpPortNumAudio+1;
#ifndef USE_SSM
const unsigned char ttl = 1; // low, in case routers don't admin scope
#endif
struct in_addr sessionAddress;
sessionAddress.s_addr = our_inet_addr(sessionAddressStr);
const Port rtpPortVideo(rtpPortNumVideo);
const Port rtcpPortVideo(rtcpPortNumVideo);
const Port rtpPortAudio(rtpPortNumAudio);
const Port rtcpPortAudio(rtcpPortNumAudio);
#ifdef USE_SSM
char* sourceAddressStr = "aaa.bbb.ccc.ddd";
// replace this with the real source address
struct in_addr sourceFilterAddress;
sourceFilterAddress.s_addr = our_inet_addr(sourceAddressStr);
Groupsock rtpGroupsockVideo(*env, sessionAddress, sourceFilterAddress,
rtpPortVideo);
Groupsock rtcpGroupsockVideo(*env, sessionAddress, sourceFilterAddress,
rtcpPortVideo);
Groupsock rtpGroupsockAudio(*env, sessionAddress, sourceFilterAddress,
rtpPortAudio);
Groupsock rtcpGroupsockAudio(*env, sessionAddress, sourceFilterAddress,
rtcpPortAudio);
rtcpGroupsockVideo.changeDestinationParameters(sourceFilterAddress,0,~0);
rtcpGroupsockAudio.changeDestinationParameters(sourceFilterAddress,0,~0);
// our RTCP "RR"s are sent back using unicast
#else
Groupsock rtpGroupsockVideo(*env, sessionAddress, rtpPortVideo, ttl);
Groupsock rtcpGroupsockVideo(*env, sessionAddress, rtcpPortVideo, ttl);
Groupsock rtpGroupsockAudio(*env, sessionAddress, rtpPortAudio, ttl);
Groupsock rtcpGroupsockAudio(*env, sessionAddress, rtcpPortAudio, ttl);
#endif
// Create the video and audio data sources.
// BIG WARNING: the audio source is hardcoded to 48 kHz, 2-channel PCM (L16):
SessionState[0].source = MPEG1or2VideoRTPSource::createNew(*env, &rtpGroupsockVideo);
SessionState[1].source = SimpleRTPSource::createNew(*env, &rtpGroupsockAudio,
                                                    96, 48000, "audio/L16", 0, 0);
// Create (and start) a 'RTCP instance' for the RTP source:
const unsigned estimatedSessionBandwidth = 160; // in kbps; for RTCP b/w share
const unsigned maxCNAMElen = 100;
unsigned char CNAME[maxCNAMElen+1];
gethostname((char*)CNAME, maxCNAMElen);
CNAME[maxCNAMElen] = '\0'; // just in case
SessionState[0].rtcpInstance =
    RTCPInstance::createNew(*env, &rtcpGroupsockVideo,
                            estimatedSessionBandwidth, CNAME,
                            NULL /* we're a client */, SessionState[0].source);
SessionState[1].rtcpInstance =
    RTCPInstance::createNew(*env, &rtcpGroupsockAudio,
                            estimatedSessionBandwidth, CNAME,
                            NULL /* we're a client */, SessionState[1].source);
// Note: This starts RTCP running automatically
// Finally, start receiving the multicast stream:
*env << "Beginning to receive multicast stream... (without sinking)\n";
// Loop forever... (this is only a test case)
while (1) {
sessionState_t * pSessionState;
for (int i=0; i < 2; i++) {
pSessionState = &SessionState[i];
while (pSessionState->read == True) {
pSessionState->read = False;
pSessionState->source->getNextFrame( pSessionState->buffer,
pSessionState->bufSize,
FrameRead, pSessionState,
FrameClose, pSessionState );
}
}
event = false;
env->taskScheduler().doEventLoop(&event);
}
return 0; // only to prevent compiler warning
}
void afterPlaying(void* /*clientData*/) {
*env << "...done receiving\n";
// End by closing the media:
Medium::close(SessionState[0].rtcpInstance); // Note: Sends a RTCP BYE
Medium::close(SessionState[1].rtcpInstance); // Note: Sends a RTCP BYE
Medium::close(SessionState[0].source);
Medium::close(SessionState[1].source);
delete[] SessionState[0].buffer;
delete[] SessionState[1].buffer;
}
static void FrameRead( void *clientData, unsigned int frameSize,
                       unsigned int numTruncatedBytes,
                       struct timeval presentationTime,
                       unsigned int durationInMicroseconds ) {
  sessionState_t * pSessionState = (sessionState_t *) clientData;
  char cbuf[32];
  // cast to int64_t to avoid 32-bit overflow of tv_sec*1000000:
  int64_t date = (int64_t) presentationTime.tv_sec * 1000000 + presentationTime.tv_usec;
  date &= 0x7fffffffffffffff; // don't want negative values in the math below
  if ( pSessionState == &SessionState[0] ) {
    if (pSessionState->source->hasBeenSynchronizedUsingRTCP() &&
        pSessionState->rtcpSync == False) {
      fprintf( logFile, "\nVideo hasBeenSynchronizedUsingRTCP()\n\n" );
      pSessionState->rtcpSync = True;
    }
    fprintf( logFile, "Video received %4u bytes at time %s\n",
             frameSize, mstrtime(cbuf, date) );
  }
  else {
    if (pSessionState->source->hasBeenSynchronizedUsingRTCP() &&
        pSessionState->rtcpSync == False) {
      fprintf( logFile, "\nAudio hasBeenSynchronizedUsingRTCP()\n\n" );
      pSessionState->rtcpSync = True;
    }
    fprintf( logFile, "Audio received %4u bytes at time %s\n",
             frameSize, mstrtime(cbuf, date) );
  }
  *pSessionState->stopScheduler = True;
  pSessionState->read = True;
}
static void FrameClose( void *clientData ){
}
//
// code inspired from VLC under GNU.
//
char *mstrtime( char *psz_buffer, int64_t date )
{
static int64_t ll1000 = 1000, ll60 = 60, ll24 = 24, ll365 = 365;
sprintf( psz_buffer, "%d-%02d:%02d:%02d-%03d.%03d",
(int) (date / (ll1000 * ll1000 * ll60 * ll60 * ll24) % ll365),
(int) (date / (ll1000 * ll1000 * ll60 * ll60) % ll24),
(int) (date / (ll1000 * ll1000 * ll60) % ll60),
(int) (date / (ll1000 * ll1000) % ll60),
(int) (date / ll1000 % ll1000),
(int) (date % ll1000) );
return( psz_buffer );
}
_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel