Dear Ross Finlayson,

Thanks for your quick response.

We have made the changes according to your response.
After these changes we are able to stream Audio+Video, but the audio and
video stay in sync only for about one minute; after that they drift out
of sync.

Below are the code changes we have made in testOnDemandRTSPServer.cpp.

1. {
     char const* streamName = "AudioVideoTest";
     char const* inputFileName = "/data/misc/qmmf/test_track_1_1920x1080.h264";
     char const* inputFileNameAud = "/data/misc/qmmf/recorder_fifo_audio.aac";
     ServerMediaSession* sms = ServerMediaSession::createNew(*env, streamName,
                                                             streamName, descriptionString);
     sms->addSubsession(ADTSAudioFileServerMediaSubsession::createNew(*env,
                                                                      inputFileNameAud, reuseFirstSource));
     sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env,
                                                                      inputFileName, reuseFirstSource));
     rtspServer->addServerMediaSession(sms);
     announceStream(rtspServer, sms, streamName, inputFileName);
   }

2. We had to modify ADTSAudioFileSource.cpp to read the AAC-LC header,
because we were getting an error while streaming the AAC-LC FIFO together
with the H.264-encoded video FIFO. (I have attached the modified file for
your reference.)

The remaining issue is that the audio and video do not stay in sync for
more than one minute.
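To help debug this on our side, we are thinking of logging how far each frame's fPresentationTime has drifted from the wall clock. A small sketch of such a helper (driftUSecs is our own hypothetical name, not a LIVE555 function):

```cpp
#include <sys/time.h>
#include <cstdlib>

// Hypothetical debugging helper: how far a frame's presentation time has
// drifted from the current wall-clock time, in microseconds.
// Positive => the presentation time is ahead of the wall clock.
static long long driftUSecs(struct timeval const& presentationTime) {
  struct timeval now;
  gettimeofday(&now, NULL);
  long long pt   = (long long)presentationTime.tv_sec * 1000000 + presentationTime.tv_usec;
  long long wall = (long long)now.tv_sec * 1000000 + now.tv_usec;
  return pt - wall;
}
```

Logging this value periodically for both the audio and the video source should show which stream's timestamps are drifting away from wall-clock time.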

Please provide your valuable suggestions to resolve this issue.

*With Best Regards,*
Dhrupal Tilava
VVDN Technologies Pvt. Ltd
Cell : +91 9428158863 | Skype : dhrupal.tilava_1


On Wed, Oct 17, 2018 at 12:35 AM <live-devel-requ...@ns.live555.com> wrote:

>
> Today's Topics:
>
>    1. Audio + Video streaming in sync using Live555 (Dhrupal Tilava)
>    2. Re: Audio + Video streaming in sync using Live555 (Ross Finlayson)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 16 Oct 2018 17:03:48 +0530
> From: Dhrupal Tilava <dhrupal.til...@vvdntech.in>
> To: live-de...@ns.live555.com
> Subject: [Live-devel] Audio + Video streaming in sync using Live555
> Message-ID:
>         <
> cab2edjwn_fuvoxbxl1insd8mzpgdytj7gw71wt9boyyvxfg...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear Team,
>
> we are using *testOnDemandRTSPServer.cpp *for the Live Video streaming.
> So, in the input I am giving H.264 Encoded fifo to the compile binary using
> this testOnDemandRTSPServer.cpp and I am getting RTSP URL for the Video
> streaming.
> This is working fine.
>
> Now,we want to add the Audio with Video in the Live streaming. For the
> Audio I have one AAC-LC Encoded fifo.
> But to stream Audio and Video fifo in time sync, I am not able to find any
> option in the testOnDemarRTSPServer.cpp file.
>
> It would be really helpful if you can provide any example/reference in
> which, I give the Audio fifo and video fifo as input and then I get the URL
> for the streaming.
> So, I can do the audio and video streaming smoothly.
>
> Note: we have separate fifo for the Audio and Video.
>
> Thank you so much in advance for your kind support.
>
> *With Best Regards,*
> Dhrupal Tilava
> VVDN Technologies Pvt. Ltd
> Cell : +91 9428158863 | Skype : dhrupal.tilava_1
>
> ------------------------------
>
> Message: 2
> Date: Tue, 16 Oct 2018 06:01:22 -0700
> From: Ross Finlayson <finlay...@live555.com>
> To: LIVE555 Streaming Media - development & use
>         <live-de...@ns.live555.com>
> Subject: Re: [Live-devel] Audio + Video streaming in sync using
>         Live555
> Message-ID: <3582fb49-5e1f-4a32-93bd-926a95292...@live555.com>
> Content-Type: text/plain;       charset=utf-8
>
> > we are using testOnDemandRTSPServer.cpp for the Live Video streaming.
> > So, in the input I am giving H.264 Encoded fifo to the compile binary
> using this testOnDemandRTSPServer.cpp and I am getting RTSP URL for the
> Video streaming.
> > This is working fine.
> >
> > Now,we want to add the Audio with Video in the Live streaming. For the
> Audio I have one AAC-LC Encoded fifo.
> > But to stream Audio and Video fifo in time sync, I am not able to find
> any option in the testOnDemarRTSPServer.cpp file.
> >
> > It would be really helpful if you can provide any example/reference in
> which, I give the Audio fifo and video fifo as input and then I get the URL
> for the streaming.
> > So, I can do the audio and video streaming smoothly.
>
> Streaming audio+video is easy: Just call "addSubsession()" twice - once
> for the video source, another time for the audio source.  (So that your
> "ServerMediaSession" object contains two "ServerMediaSubsession" objects -
> one for the video, one for the audio.)
>
> However, for audio/video synchronization to work properly, each source
> (video and audio) *must* generate proper "fPresentationTime" values for
> each frame, and these *must* be aligned with "wall clock" time - i.e., the
> same time that you'd get if you called "gettimeofday()".
>
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
>
>
>
>
/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 3 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)

This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for
more details.

You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301  USA
**********/
// "liveMedia"
// Copyright (c) 1996-2017 Live Networks, Inc.  All rights reserved.
// A source object for AAC audio files in ADTS format
// Implementation

#include "ADTSAudioFileSource.hh"
#include "InputFile.hh"
#include <GroupsockHelper.hh>

////////// ADTSAudioFileSource //////////

static unsigned const samplingFrequencyTable[16] = {
  96000, 88200, 64000, 48000,
  44100, 32000, 24000, 22050,
  16000, 12000, 11025, 8000,
  7350, 0, 0, 0
};

ADTSAudioFileSource*
ADTSAudioFileSource::createNew(UsageEnvironment& env, char const* fileName) {
  // VVDN change: our input is a FIFO, which cannot be rewound, so the four
  // header bytes consumed here would otherwise be lost to the streaming code.
  // Cache the first frame's fixed header on the first call, and reuse the
  // cached copy on subsequent calls instead of reading from the FIFO again:
  static Boolean haveCachedHeader = False;
  static unsigned char cachedHeader[4];

  FILE* fid = NULL;
  do {
    fid = OpenInputFile(env, fileName);
    if (fid == NULL) break;

    // Now, having opened the input file, read the fixed header of the first frame,
    // to get the audio stream's parameters:
    unsigned char fixedHeader[4]; // it's actually 3.5 bytes long
    if (haveCachedHeader) {
      fixedHeader[0] = cachedHeader[0];
      fixedHeader[1] = cachedHeader[1];
      fixedHeader[2] = cachedHeader[2];
      fixedHeader[3] = cachedHeader[3];
    } else {
      if (fread(fixedHeader, 1, sizeof fixedHeader, fid) < sizeof fixedHeader) break;
      cachedHeader[0] = fixedHeader[0];
      cachedHeader[1] = fixedHeader[1];
      cachedHeader[2] = fixedHeader[2];
      cachedHeader[3] = fixedHeader[3];
      haveCachedHeader = True;
    }

    // Check the 'syncword':
    if (!(fixedHeader[0] == 0xFF && (fixedHeader[1]&0xF0) == 0xF0)) {
      env.setResultMsg("Bad 'syncword' at start of ADTS file");
      break;
    }

    // Get and check the 'profile':
    u_int8_t profile = (fixedHeader[2]&0xC0)>>6; // 2 bits
    if (profile == 3) {
      env.setResultMsg("Bad (reserved) 'profile': 3 in first frame of ADTS file");
      break;
    }

    // Get and check the 'sampling_frequency_index':
    u_int8_t sampling_frequency_index = (fixedHeader[2]&0x3C)>>2; // 4 bits
    if (samplingFrequencyTable[sampling_frequency_index] == 0) {
      env.setResultMsg("Bad 'sampling_frequency_index' in first frame of ADTS file");
      break;
    }

    // Get and check the 'channel_configuration':
    u_int8_t channel_configuration
      = ((fixedHeader[2]&0x01)<<2)|((fixedHeader[3]&0xC0)>>6); // 3 bits

    // If we get here, the frame header was OK.
    // Reset the fid to the beginning of the file:
#ifndef _WIN32_WCE
    rewind(fid); // note: has no effect on a FIFO
#else
    SeekFile64(fid, SEEK_SET, 0);
#endif
#ifdef DEBUG
    fprintf(stderr, "Read first frame: profile %d, "
	    "sampling_frequency_index %d => samplingFrequency %d, "
	    "channel_configuration %d\n",
	    profile,
	    sampling_frequency_index, samplingFrequencyTable[sampling_frequency_index],
	    channel_configuration);
#endif
    return new ADTSAudioFileSource(env, fid, profile,
				   sampling_frequency_index, channel_configuration);
  } while (0);

  // An error occurred:
  CloseInputFile(fid);
  return NULL;
}

ADTSAudioFileSource
::ADTSAudioFileSource(UsageEnvironment& env, FILE* fid, u_int8_t profile,
		      u_int8_t samplingFrequencyIndex, u_int8_t channelConfiguration)
  : FramedFileSource(env, fid) {
  fSamplingFrequency = samplingFrequencyTable[samplingFrequencyIndex];
  fNumChannels = channelConfiguration == 0 ? 2 : channelConfiguration;
  fuSecsPerFrame
    = (1024/*samples-per-frame*/*1000000) / fSamplingFrequency/*samples-per-second*/;

  // Construct the 'AudioSpecificConfig', and from it, the corresponding ASCII string:
  unsigned char audioSpecificConfig[2];
  u_int8_t const audioObjectType = profile + 1;
  audioSpecificConfig[0] = (audioObjectType<<3) | (samplingFrequencyIndex>>1);
  audioSpecificConfig[1] = (samplingFrequencyIndex<<7) | (channelConfiguration<<3);
  sprintf(fConfigStr, "%02X%02x", audioSpecificConfig[0], audioSpecificConfig[1]);
}

ADTSAudioFileSource::~ADTSAudioFileSource() {
  CloseInputFile(fFid);
}

// Note: We should change the following to use asynchronous file reading, #####
// as we now do with ByteStreamFileSource. #####
void ADTSAudioFileSource::doGetNextFrame() {
  // Begin by reading the 7-byte fixed_variable headers:
  unsigned char headers[7];
  if (fread(headers, 1, sizeof headers, fFid) < sizeof headers
      || feof(fFid) || ferror(fFid)) {
    // The input source has ended:
    handleClosure();
    return;
  }

  // Extract important fields from the headers:
  Boolean protection_absent = headers[1]&0x01;
  u_int16_t frame_length
    = ((headers[3]&0x03)<<11) | (headers[4]<<3) | ((headers[5]&0xE0)>>5);
#ifdef DEBUG
  u_int16_t syncword = (headers[0]<<4) | (headers[1]>>4);
  fprintf(stderr, "Read frame: syncword 0x%x, protection_absent %d, frame_length %d\n", syncword, protection_absent, frame_length);
  if (syncword != 0xFFF) fprintf(stderr, "WARNING: Bad syncword!\n");
#endif
  unsigned numBytesToRead
    = frame_length > sizeof headers ? frame_length - sizeof headers : 0;

  // If there's a 'crc_check' field, skip it:
  if (!protection_absent) {
    SeekFile64(fFid, 2, SEEK_CUR);
    numBytesToRead = numBytesToRead > 2 ? numBytesToRead - 2 : 0;
  }

  // Next, read the raw frame data into the buffer provided:
  if (numBytesToRead > fMaxSize) {
    fNumTruncatedBytes = numBytesToRead - fMaxSize;
    numBytesToRead = fMaxSize;
  }
  int numBytesRead = fread(fTo, 1, numBytesToRead, fFid);
  if (numBytesRead < 0) numBytesRead = 0;
  fFrameSize = numBytesRead;
  fNumTruncatedBytes += numBytesToRead - numBytesRead;

  // Set the 'presentation time':
  if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
    // This is the first frame, so use the current wall-clock time:
    gettimeofday(&fPresentationTime, NULL);
  } else {
    // Increment by the play time of the previous frame:
    unsigned uSeconds = fPresentationTime.tv_usec + fuSecsPerFrame;
    fPresentationTime.tv_sec += uSeconds/1000000;
    fPresentationTime.tv_usec = uSeconds%1000000;
  }

  fDurationInMicroseconds = fuSecsPerFrame;

  // Switch to another task, and inform the reader that he has data:
  nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
				(TaskFunc*)FramedSource::afterGetting, this);
}
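One thing we noticed in the presentation-time arithmetic above: because fuSecsPerFrame is truncated to whole microseconds, incrementing fPresentationTime frame by frame slowly falls behind the true frame rate (e.g. at 44.1 kHz each 1024-sample frame really lasts about 23219.95 us, but the increment is 23219 us). One way to keep that truncation from accumulating, sketched below under the assumption that frames are numbered from zero (presentationTimeFor is our own hypothetical helper, not part of LIVE555), is to re-derive each timestamp from the total frame count:

```cpp
#include <sys/time.h>

// Hypothetical helper: the presentation time of AAC frame number 'frameIndex'
// (1024 samples per frame), derived from the wall-clock time of frame 0.
// Computing the elapsed time from the total sample count keeps the per-frame
// truncation error from accumulating.
static struct timeval presentationTimeFor(struct timeval const& firstFrameTime,
                                          unsigned long long frameIndex,
                                          unsigned samplingFrequency) {
  unsigned long long elapsedUSecs =
      frameIndex * 1024ULL * 1000000ULL / samplingFrequency;
  struct timeval pt;
  pt.tv_sec  = firstFrameTime.tv_sec  + (long)(elapsedUSecs / 1000000);
  pt.tv_usec = firstFrameTime.tv_usec + (long)(elapsedUSecs % 1000000);
  if (pt.tv_usec >= 1000000) { ++pt.tv_sec; pt.tv_usec -= 1000000; }
  return pt;
}
```

(Whether this removes the drift we are seeing depends on whether the FIFO really delivers frames at the nominal rate; if the capture clock differs from the wall clock, the timestamps would need to come from the capture side instead.)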