If I write the bytes from the read operations to one file, and the bytes from 
the write operations to another file, so that I can play them with a video 
player such as ffplay, I get the following outputs:
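
(For completeness: the dumping itself is just an fwrite of each block of bytes. 
The file name and the static FILE* below are only for this test; on the read 
side I call it right after the buffer read, with the bytes that ended up in 
fTo, and the write side does the same with the bytes handed to the buffer.)

#include <cstdio>

// Debug-only: append every block of bytes to a raw .h264 file, so that it can
// be checked afterwards with "ffplay readSide.h264".
static FILE* readDumpFile = NULL;

static void dumpBytes(const unsigned char* data, unsigned size) {
  if (readDumpFile == NULL) readDumpFile = fopen("readSide.h264", "wb");
  if (readDumpFile != NULL) fwrite(data, 1, size, readDumpFile);
}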

When I'm using a ring buffer and I read up to fMaxSize bytes, ffplay doesn't 
show any error. That is expected, because I'm not losing a single byte.



When I'm using a buffer of NAL units and I try to send the whole NAL unit, I get:
[h264 @ 00000000007d1fa0] corrupted macroblock 3 12 (total_coeff=-1)
[h264 @ 00000000007d1fa0] error while decoding MB 3 12
[h264 @ 00000000007d1fa0] concealing 2304 DC, 2304 AC, 2304 MV errors in I frame
[h264 @ 0000000003c407c0] top block unavailable for requested intra mode at 32 
12
[h264 @ 0000000003c407c0] error while decoding MB 32 12
[h264 @ 0000000003c407c0] concealing 2304 DC, 2304 AC, 2304 MV errors in I frame
[h264 @ 0000000003dbc020] corrupted macroblock 19 24 (total_coeff=-1)
[h264 @ 0000000003dbc020] error while decoding MB 19 24
[h264 @ 0000000003dbc020] concealing 1536 DC, 1536 AC, 1536 MV errors in I frame
[h264 @ 00000000007d1fa0] Invalid level prefix
[h264 @ 00000000007d1fa0] error while decoding MB 19 22
[h264 @ 00000000007d1fa0] concealing 1694 DC, 1694 AC, 1694 MV errors in I frame
[h264 @ 0000000003c407c0] Invalid level prefix
[h264 @ 0000000003c407c0] error while decoding MB 8 20
[h264 @ 0000000003c407c0] concealing 1833 DC, 1833 AC, 1833 MV errors in I frame
[h264 @ 0000000003dbc020] concealing 2013 DC, 2013 AC, 2013 MV errors in I frame
[h264 @ 00000000007d1fa0] corrupted macroblock 20 18 (total_coeff=16)
[h264 @ 00000000007d1fa0] error while decoding MB 20 18
[h264 @ 00000000007d1fa0] concealing 1949 DC, 1949 AC, 1949 MV errors in I frame
[h264 @ 0000000003c407c0] Invalid level prefix
[h264 @ 0000000003c407c0] error while decoding MB 50 20
[h264 @ 0000000003c407c0] concealing 1791 DC, 1791 AC, 1791 MV errors in I frame
[h264 @ 0000000003dbc020] corrupted macroblock 19 19 (total_coeff=-1)
[h264 @ 0000000003dbc020] error while decoding MB 19 19
[h264 @ 0000000003dbc020] concealing 1886 DC, 1886 AC, 1886 MV errors in I frame
[h264 @ 00000000007d1fa0] concealing 1950 DC, 1950 AC, 1950 MV errors in I frame
[h264 @ 0000000003c407c0] Invalid level prefix
[h264 @ 0000000003c407c0] error while decoding MB 38 17
[h264 @ 0000000003c407c0] concealing 1995 DC, 1995 AC, 1995 MV errors in I frame
[h264 @ 0000000003dbc020] Invalid level prefix
[h264 @ 0000000003dbc020] error while decoding MB 14 17
[h264 @ 0000000003dbc020] concealing 2019 DC, 2019 AC, 2019 MV errors in I frame
[h264 @ 00000000007d1fa0] concealing 2047 DC, 2047 AC, 2047 MV errors in I frame
[h264 @ 0000000003c407c0] corrupted macroblock 12 15 (total_coeff=-1)
[h264 @ 0000000003c407c0] error while decoding MB 12 15
[h264 @ 0000000003c407c0] concealing 2149 DC, 2149 AC, 2149 MV errors in I frame
[h264 @ 0000000003dbc020] Invalid level prefix
[h264 @ 0000000003dbc020] error while decoding MB 9 16
[h264 @ 0000000003dbc020] concealing 2088 DC, 2088 AC, 2088 MV errors in I frame

This is somewhat expected, since some bytes are being truncated quite often...


If I play the file containing what I'm writing from the encoder, everything is 
correct.

Best,
Pablo

-----Original Message-----
From: Pablo Gomez 
Sent: Wednesday, January 23, 2013 11:28 AM
To: 'live-de...@ns.live555.com'
Subject: Re:Re: [Live-devel] unicast onDemand from live source NAL Units

>First, I assume that you are feeding your input source object (i.e., the 
>object that delivers H.264 NAL units) into a "H264VideoStreamDiscreteFramer" 
>object (and from there to a "H264VideoRTPSink").

I wrote the H264LiveServerMediaSubsession based on the 
H264VideoFileServerMediaSubsession.
I'm using H264VideoRTPSink, H264VideoStreamDiscreteFramer, and an object that 
inherits from FramedSource, in which I read the NAL units.

This is how it is connected in the media subsession:

FramedSource* H264LiveServerMediaSubsession::createNewStreamSource(
    unsigned /*clientSessionId*/, unsigned& estBitrate) {
  estBitrate = 10000; // kbps, estimate

  // Create the video source:
  H264LiveStreamFramedSource* liveFramer
    = H264LiveStreamFramedSource::createNew(envir(), liveBuffer);
  H264VideoStreamDiscreteFramer* discFramer
    = H264VideoStreamDiscreteFramer::createNew(envir(), liveFramer);

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), discFramer);
}

RTPSink* H264LiveServerMediaSubsession
::createNewRTPSink(Groupsock* rtpGroupsock,
                   unsigned char rtpPayloadTypeIfDynamic,
                   FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
                                     rtpPayloadTypeIfDynamic);
}

This is the doGetNextFrame in the H264LiveStreamFramedSource I'm using:

void H264LiveStreamFramedSource::doGetNextFrame() {
  // Try to read as many bytes as will fit in the buffer provided
  // (or "fPreferredFrameSize", if less):
  fFrameSize = fBuffer->read(fTo, fMaxSize, &fNumTruncatedBytes);

  // We don't know a specific play time duration for this data,
  // so just record the current time as being the 'presentation time':
  gettimeofday(&fPresentationTime, NULL);

  // Inform the downstream object that it has data:
  FramedSource::afterGetting(this);
}

About the call
fBuffer->read(fTo, fMaxSize, &fNumTruncatedBytes);
fBuffer is the object that contains the NAL units, and I have two 
implementations of it. The first one tries to copy the whole NAL unit: it sets 
fNumTruncatedBytes to the number of bytes truncated in the read operation, and 
it returns the number of bytes copied to fTo.
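
Schematically, this first implementation behaves roughly like the sketch below 
(simplified; the class name NALUnitBuffer is just a placeholder, and the real 
code also handles locking between the encoder thread and the LIVE555 thread):

#include <cstring>
#include <deque>
#include <vector>

class NALUnitBuffer {
public:
  // Called from the encoder side with one complete NAL unit.
  void write(const unsigned char* data, unsigned size) {
    fNALs.push_back(std::vector<unsigned char>(data, data + size));
  }

  // Copy at most 'maxSize' bytes of the oldest queued NAL unit into 'to';
  // whatever does not fit is reported via 'numTruncatedBytes' and lost.
  unsigned read(unsigned char* to, unsigned maxSize, unsigned* numTruncatedBytes) {
    *numTruncatedBytes = 0;
    if (fNALs.empty()) return 0;

    std::vector<unsigned char>& nal = fNALs.front();
    unsigned bytesToCopy = (unsigned)nal.size();
    if (bytesToCopy > maxSize) {
      *numTruncatedBytes = bytesToCopy - maxSize; // tail of the NAL unit is dropped
      bytesToCopy = maxSize;
    }
    memcpy(to, &nal[0], bytesToCopy);
    fNALs.pop_front();
    return bytesToCopy; // becomes fFrameSize upstream
  }

private:
  std::deque<std::vector<unsigned char> > fNALs;
};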


The second implementation of this buffer is a ring buffer. When I write to the 
ring buffer I write all the bytes, and when I read from it I read the minimum 
of the bytes available in the buffer and fMaxSize, starting from the last read 
position + 1. Thus, in this approach I never truncate anything. But I guess the 
NAL units get broken somehow, because if the last read position is in the 
middle of a NAL unit, the next read will start mid-NAL and will not carry any 
SPS/PPS.
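
Again as a rough sketch (simplified; RingBuffer is a placeholder name, and the 
real code handles locking and the full/empty cases differently), the ring-buffer 
variant looks like this:

#include <algorithm>
#include <vector>

class RingBuffer {
public:
  explicit RingBuffer(unsigned capacity)
    : fData(capacity), fReadPos(0), fWritePos(0), fAvailable(0) {}

  // Read min(available, maxSize) bytes, continuing from the last read position.
  // Nothing is ever truncated, but a read may stop in the middle of a NAL unit.
  unsigned read(unsigned char* to, unsigned maxSize, unsigned* numTruncatedBytes) {
    *numTruncatedBytes = 0; // never truncate in this variant
    unsigned bytesToCopy = std::min(fAvailable, maxSize);
    for (unsigned i = 0; i < bytesToCopy; ++i) {
      to[i] = fData[fReadPos];
      fReadPos = (fReadPos + 1) % fData.size();
    }
    fAvailable -= bytesToCopy;
    return bytesToCopy;
  }

  // Append as many bytes as fit (the real code blocks instead of dropping).
  void write(const unsigned char* from, unsigned size) {
    for (unsigned i = 0; i < size && fAvailable < fData.size(); ++i) {
      fData[fWritePos] = from[i];
      fWritePos = (fWritePos + 1) % fData.size();
      ++fAvailable;
    }
  }

private:
  std::vector<unsigned char> fData;
  unsigned fReadPos, fWritePos, fAvailable;
};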

>Setting "OutPacketBuffer::maxSize" to some value larger than the largest 
>expected NAL unit is correct - and should work.  However, setting >this value 
>to 10 million is insane.  You can't possibly expect to be generating NAL units 
>this large, can you??

Yes, 10 million is insane; there are no NAL units of that size. I just wrote it 
to test. Now I have set it to 250000, which is big enough, but it does not seem 
to matter: fMaxSize is always smaller than that, and I'm getting truncated 
frames quite often.
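
For reference, I set it in the server's main(), before any RTP sinks are 
created (a minimal sketch of my setup; everything except the 
OutPacketBuffer::maxSize line is abbreviated):

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main(int argc, char** argv) {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  // Must be set before any RTPSink is created, because the sink allocates
  // its output packet buffer from this value at construction time:
  OutPacketBuffer::maxSize = 250000; // larger than my largest expected NAL unit

  // ... create the RTSPServer, ServerMediaSession and
  //     H264LiveServerMediaSubsession here, as before ...

  env->taskScheduler().doEventLoop();
  return 0;
}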

>If possible, you should configure your encoder to generate a sequence of NAL 
>unit 'slices', rather than single large key-frame NAL units.  Streaming very 
>large NAL units is a bad idea, because - although our code will fragment them 
>correctly when they get packed into RTP packets - the loss of just one of 
>these fragments will cause the whole NAL unit to get discarded by receivers.

I have checked the NVIDIA encoder parameters, and there is one parameter to set 
the number of slices. I tried setting it to 4 and to 10, and I also tested the 
default mode, which lets the encoder decide the slice count. Nevertheless, I'm 
testing on a LAN, so the network is basically lossless; thus, I guess this 
parameter should not be the problem.
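
In case it matters, this is roughly how I set it (a sketch only; I'm assuming 
the NVENC API here, and the sliceMode/sliceModeData field names come from the 
NVENC headers, so treat the exact names as an assumption if a different NVIDIA 
encoder interface or SDK version is involved):

#include "nvEncodeAPI.h"

// Sketch: ask the encoder for a fixed number of slices per frame.
static void setSliceCount(NV_ENC_CONFIG* encodeConfig, unsigned numSlices) {
  NV_ENC_CONFIG_H264* h264Config = &encodeConfig->encodeCodecConfig.h264Config;
  h264Config->sliceMode = 3;             // 3 = sliceModeData is the slice count
  h264Config->sliceModeData = numSlices; // e.g. 4 or 10, as in my tests
}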

Best
Pablo

----------------------------------------------------------------------
Message: 1
Date: Tue, 22 Jan 2013 10:46:08 -0800
From: Ross Finlayson <finlay...@live555.com>
To: LIVE555 Streaming Media - development & use
        <live-de...@ns.live555.com>
Subject: Re: [Live-devel] unicast onDemand from live source NAL Units
        NVidia
Message-ID: <bfb7d2a7-9ede-4221-b5d9-6fcf2047c...@live555.com>
Content-Type: text/plain; charset="iso-8859-1"

First, I assume that you are feeding your input source object (i.e., the 
object that delivers H.264 NAL units) into a "H264VideoStreamDiscreteFramer" 
object (and from there to a "H264VideoRTPSink").


> I tried to set a large enough size for the "OutPacketBuffer" in the streamer 
> code, but this does not seem to work....
>   {
>        OutPacketBuffer::maxSize=10000000;

Setting "OutPacketBuffer::maxSize" to some value larger than the largest 
expected NAL unit is correct - and should work.  However, setting this value to 
10 million is insane.  You can't possibly expect to be generating NAL units 
this large, can you??

If possible, you should configure your encoder to generate a sequence of NAL 
unit 'slices', rather than single large key-frame NAL units.  Streaming very 
large NAL units is a bad idea, because - although our code will fragment them 
correctly when they get packed into RTP packets - the loss of just one of these 
fragments will cause the whole NAL unit to get discarded by receivers.

Nonetheless, if you set "OutPacketBuffer::maxSize" to a value larger than the 
largest expected NAL unit, then this should work (i.e., you should find that 
"fMaxSize" will always be large enough for you to copy a whole NAL unit).


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
----------------------------------------------------------------------
