Hi Ross,

We are streaming H.264-encoded frames (over Wi-Fi) from a live source, using the 
testH264VideoStreamer class as a reference. The camera frame rate is 30 fps.
The receiver is an iOS application based on the testRTSPClient class.

We are able to stream with an initial propagation delay of around 200 ms. 
However, after about 20 minutes we observe a short period (200-250 ms) during 
which no frames arrive on the receiver side (afterGettingFrame is not called), 
and after that the propagation delay grows to more than 1 second. This happens 
every time we stream.

We are trying to analyze the Live555 source code to find a solution for the 
above, and came across the following sections that are unclear to us.

1. As per our understanding, RTP streaming is done over UDP, so why is data 
also being sent over TCP in the RTPInterface::sendPacket() method?

  // Normal case: Send as a UDP packet:
  if (!fGS->output(envir(), fGS->ttl(), packet, packetSize)) success = False;
   
  // Also, send over each of our TCP sockets:
  for (tcpStreamRecord* streams = fTCPStreams; streams != NULL;
       streams = streams->fNext) {
    if (!sendRTPOverTCP(packet, packetSize,
            streams->fStreamSocketNum, streams->fStreamChannelId)) {
      success = False;
    }
  }
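For what it is worth, our current understanding (please correct us if this is 
wrong) is that fTCPStreams is normally NULL, so the TCP loop does nothing 
unless a client requested RTP-over-TCP (RTSP interleaving) in its SETUP. Below 
is a rough sketch of the '$'-framing (RFC 2326, section 10.12) that we believe 
sendRTPOverTCP() performs; the helper name and error handling are our own, not 
Live555 code:

#include <cstdint>
#include <sys/socket.h>

// Hypothetical helper (not Live555 code): frame one RTP packet for
// RTSP interleaved transport:
//   '$' | 1-byte channel id | 2-byte big-endian length | RTP packet
// (Partial sends are ignored here for brevity.)
bool sendInterleavedFrame(int tcpSocket, uint8_t channelId,
                          uint8_t const* packet, uint16_t packetSize) {
  uint8_t header[4];
  header[0] = '$';
  header[1] = channelId;
  header[2] = static_cast<uint8_t>(packetSize >> 8);   // length, high byte
  header[3] = static_cast<uint8_t>(packetSize & 0xFF); // length, low byte

  if (send(tcpSocket, header, 4, 0) != 4) return false;
  return send(tcpSocket, packet, packetSize, 0) == (ssize_t)packetSize;
}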

2. In the MultiFramedRTPSink::sendPacketIfNecessary() method, since 
fNoFramesLeft is always false, a delay is scheduled before sending the next 
packet. What is the exact reason for this delay?

  if (fNoFramesLeft) {
    // We're done:
    onSourceClosure(this);
  } else {
    // We have more frames left to send.  Figure out when the next frame
    // is due to start playing, then make sure that we wait this long before
    // sending the next packet.
    struct timeval timeNow;
    gettimeofday(&timeNow, NULL);
    int secsDiff = fNextSendTime.tv_sec - timeNow.tv_sec;
    int64_t uSecondsToGo = secsDiff*1000000 + (fNextSendTime.tv_usec - timeNow.tv_usec);
    if (uSecondsToGo < 0 || secsDiff < 0) { // sanity check: Make sure that the time-to-delay is non-negative:
      uSecondsToGo = 0;
    }

    // Delay this amount of time:
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecondsToGo, (TaskFunc*)sendNext, this);
  }
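Our reading here (again, please correct us if wrong) is that this delay simply 
paces transmission: fNextSendTime is advanced by each frame's 
durationInMicroseconds, so packets leave at the stream's nominal rate (about 
33 ms apart at 30 fps) rather than as fast as the encoder delivers them. A 
small standalone sketch of the same arithmetic, with our own names:

#include <sys/time.h>
#include <cstdint>

// Hypothetical pacing helpers (our own sketch, not Live555 code).

// Given the time the next frame is due to be sent, return how long to delay.
int64_t microsecondsUntil(struct timeval const& nextSendTime) {
  struct timeval now;
  gettimeofday(&now, NULL);

  int64_t secsDiff = nextSendTime.tv_sec - now.tv_sec;
  int64_t uSecondsToGo = secsDiff * 1000000 + (nextSendTime.tv_usec - now.tv_usec);

  // If we are already late (e.g. the scheduler fell behind), send immediately:
  if (uSecondsToGo < 0) uSecondsToGo = 0;
  return uSecondsToGo;
}

// After sending the last packet of a frame of duration 'frameDurationUs'
// (33333 us at 30 fps), advance the due time for the next frame:
void advanceNextSendTime(struct timeval& nextSendTime, unsigned frameDurationUs) {
  nextSendTime.tv_usec += frameDurationUs;
  nextSendTime.tv_sec += nextSendTime.tv_usec / 1000000;
  nextSendTime.tv_usec %= 1000000;
}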

3. In the receiver application, the MultiFramedRTPSource::networkReadHandler1() 
function uses a ReorderingPacketBuffer object and calls its storePacket() 
function.
What is the actual purpose of the ReorderingPacketBuffer here? Is this class 
queueing the packets?
Also, what is the use of fThresholdTime in the ReorderingPacketBuffer class? 
(Our guess is sketched below.)
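Our guess is that it is indeed a queue: out-of-order packets are held until 
either the missing sequence numbers arrive or fThresholdTime (100 ms by 
default, if we are reading the constructor correctly) has passed, after which 
the gap is given up on. Below is a toy illustration of that idea only; it is 
not the Live555 implementation, and all names are our own:

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Toy reordering buffer (our own illustration, not the Live555 class).
// Packets are keyed by RTP sequence number; a gap is only skipped once the
// earliest queued packet has waited longer than 'thresholdUs'.
// (Sequence-number wraparound is ignored for brevity.)
class ToyReorderBuffer {
public:
  explicit ToyReorderBuffer(int64_t thresholdUs) : fThresholdUs(thresholdUs) {}

  void storePacket(uint16_t seq, std::vector<uint8_t> data, int64_t arrivalUs) {
    fPackets[seq] = Entry{std::move(data), arrivalUs};
  }

  // Returns true and fills 'out' if a packet may be delivered now.
  bool getNextCompletedPacket(uint16_t& expectedSeq, std::vector<uint8_t>& out,
                              int64_t nowUs) {
    if (fPackets.empty()) return false;

    auto it = fPackets.find(expectedSeq);
    if (it == fPackets.end()) {
      // The expected packet is missing: wait up to 'fThresholdUs' for it,
      // then give up and resume from the lowest sequence number we do have.
      auto oldest = fPackets.begin();
      if (nowUs - oldest->second.arrivalUs < fThresholdUs) return false;
      it = oldest;
      expectedSeq = it->first;
    }

    out = std::move(it->second.data);
    fPackets.erase(it);
    ++expectedSeq; // next sequence number we expect to deliver
    return true;
  }

private:
  struct Entry { std::vector<uint8_t> data; int64_t arrivalUs; };
  std::map<uint16_t, Entry> fPackets; // ordered by sequence number
  int64_t fThresholdUs;
};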

4. In RTPReceptionStats::noteIncomingPacket(), why is the presentation time of 
a packet initially derived from the wall-clock time, and later changed to be 
derived from the RTCP SR timestamps?

  // Return the 'presentation time' that corresponds to "rtpTimestamp":
  if (fSyncTime.tv_sec == 0 && fSyncTime.tv_usec == 0) {
    // This is the first timestamp that we've seen, so use the current
    // 'wall clock' time as the synchronization time.  (This will be
    // corrected later when we receive RTCP SRs.)
    fSyncTimestamp = rtpTimestamp;
    fSyncTime = timeNow;
  }
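Our understanding is that the first RTP timestamp has to be mapped to 
something, and the only reference available before any RTCP arrives is the 
receiver's own wall clock; once a Sender Report arrives it carries an 
(NTP wall-clock time, RTP timestamp) pair from the sender, so 
fSyncTime/fSyncTimestamp are replaced and later presentation times become 
sender-synchronized. A sketch of the mapping as we understand it (names are 
our own, not Live555's):

#include <sys/time.h>
#include <cstdint>

// Sketch of the RTP-timestamp -> presentation-time mapping (our own names).
// 'syncTime'/'syncTimestamp' form the reference pair: initially
// (local wall clock, first RTP timestamp seen), later replaced by the
// (NTP time, RTP timestamp) pair carried in each RTCP Sender Report.
struct timeval presentationTimeFor(uint32_t rtpTimestamp,
                                   uint32_t syncTimestamp,
                                   struct timeval syncTime,
                                   unsigned timestampFrequency /* e.g. 90000 for video */) {
  // Signed difference handles timestamps slightly before the sync point:
  int32_t timestampDiff = (int32_t)(rtpTimestamp - syncTimestamp);
  double seconds = (double)timestampDiff / timestampFrequency;

  double t = syncTime.tv_sec + syncTime.tv_usec / 1000000.0 + seconds;
  struct timeval result;
  result.tv_sec = (time_t)t;
  result.tv_usec = (suseconds_t)((t - result.tv_sec) * 1000000.0);
  return result;
}

If that reading is right, it would also explain why presentation times can 
jump when the first SR arrives: they switch from the receiver's clock to the 
sender's clock.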

Thanks & Regards,
Ashfaque 
_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
