Well, I wish it were that simple. I am not just trying to receive video and save 
it to disk here. What I am trying to do is video capturing and live-encoding on 
a server, streaming via live555, and live-decoding and playback on the client. 
This may already be taking up too much of your time, so feel free to ignore the 
thread from now on, but I'm trying to explain once more. One way or the other: 
thanks for your time.

> SPS, PPS, VPS NAL units are apparently mandatory, not just for ffplay, but 
> also for MPC, and VLC, and also for my own NvCodec based decoder.

Sorry for that confusing remark; what I meant to say was that the *dedicated* 
writing of the SPS, PPS, VPS the way you do it in 
H264or5VideoFileSink::afterGettingFrame, i.e., by decoding the Base64 strings 
taken from the SDP (and what I referred to as a "header"), is apparently NOT 
mandatory for generating a valid file playable by VLC, MPC, or ffplay, since 
I *never* did it that way and the files were *always* playable nonetheless. 
The reason for this is the following:

>Actually, your file “checker.265” *does* contain VPS,SPS,PPS NAL units at the 
>start

Yes, I know that. Those are the first three NAL units *as received from the 
server*. When I look at what I'm getting in my MediaSink, the first three 
packets are always 24, 46, and 7 bytes. That's the VPS, SPS, and PPS, as sent 
from the server as part of the HEVC stream, and exactly what you recreate from 
the SDP. So they are at the beginning of the file anyway. Your decoding them 
from the SDP and writing them as the first chunk of data in 
H264or5VideoFileSink actually causes them to be written *twice*; I can see 
this in bindumps of openRTSP-generated files.


Again, I am not trying to generate files on disk; I know there is code for that, 
and it is easy to use. The reason *I* am writing files in the first place is 
that I am trying to figure out *where* the HEVC data breaks. To do that, I'm 
simply writing the raw data I have, directly to disk, at several points of my 
pipeline, in order to verify its validity with players like ffplay, VLC, and 
MPC (so I can see that it is not my own decoder/player code that is to blame, 
but the HEVC data itself). Think of it as snapshots of the data at different 
points. This way, I figured out that the data, as it arrives after RTP streaming 
using live555, is bad, while prior to streaming it was not. My assumption is 
that the *plain data*, just as it comes out of the encoder, *without* any 
additional stuff like SPS, PPS, VPS regenerated from the SDP, represents valid 
data that can be saved to disk and then decoded by different players. I assume 
this because I never did anything else during my tests, and it always worked 
that way. The data I'm writing prior to sending, to verify the video is valid 
pre-sending, is written this way and it is *always* playable. It is of course 
preceded by the three NAL units containing VPS, SPS, PPS as well, just as the 
data arriving on the client side. The data received on the client, on the 
other hand, is *not* playable, although it should be exactly the video data I 
would otherwise feed into my decoder; no additionally generated VPS, SPS, PPS, 
or anything else should be needed. The reason I know this is that I tried 
feeding it the data I saved *pre-sending*, and it worked flawlessly, hundreds 
of times; I did this all the time during development of my client-side player, 
for testing purposes. Next, I also wrote the *received* data to disk, as-is, 
and tried to verify whether it was still valid post-sending, and it was *not*. Now, 
things got even more absurd as I noticed that openRTSP generates valid files 
from the same RTP-transmitted data, using the same server, via 
H265VideoFileSink, while DummySink (in testRTSPClient, which my client code is 
based on) did not. But as far as I can see, both classes are doing the same 
thing! The only difference is the writing of the extra VPS, SPS, PPS at the 
very beginning, as decoded from the SDP. But again, this is *not* mandatory, 
since I *never* had to write a "header" like this before, for months. I even 
tried to modify DummySink so that it does write it (thus also having it twice, 
just as in the files openRTSP generates; see the comment block in the 
DummySink constructor), and it made no difference: the files were still not 
playable. From that I infer that it must be the data itself that is messed up.
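To make the comparison concrete: the one extra step H265VideoFileSink performs is decoding the Base64 sprop-vps/sprop-sps/sprop-pps strings from the SDP and writing them, start-code-prefixed, before the first received NAL unit. A self-contained sketch of that decode step (my own minimal Base64 decoder, standing in for the one live555 uses internally):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Decode one Base64 "sprop" string (sprop-vps / sprop-sps / sprop-pps
// from the SDP) back into raw NAL-unit bytes.  Minimal decoder; stops
// at '=' padding or any other non-alphabet character.
std::vector<uint8_t> decodeSprop(const std::string& b64) {
    auto sextet = [](char c) -> int {
        if (c >= 'A' && c <= 'Z') return c - 'A';
        if (c >= 'a' && c <= 'z') return c - 'a' + 26;
        if (c >= '0' && c <= '9') return c - '0' + 52;
        if (c == '+') return 62;
        if (c == '/') return 63;
        return -1;
    };
    std::vector<uint8_t> out;
    unsigned buf = 0;
    int bits = 0;
    for (char c : b64) {
        int v = sextet(c);
        if (v < 0) break;
        buf = ((buf << 6) | unsigned(v)) & 0xFFFFu;  // <= 13 live bits
        bits += 6;
        if (bits >= 8) {                // a full byte is available
            bits -= 8;
            out.push_back(uint8_t((buf >> bits) & 0xFFu));
        }
    }
    return out;
}
```

Writing each decoded record with a start code in front, once, at the top of the file, is the whole "header" in question.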

As for the slicing: I have now configured my encoder so that all NAL units are 
smaller than 1000 bytes. I also increased the buffer sizes on client and 
server. It is still not working. I am even able to reproduce this with a 
live555MediaServer with its buffer size increased to 5000000, streaming a 
video file to openRTSP with buffer size 5000000 (which should make no 
difference when using slicing, but anyway) and a modified testRTSPClient that 
writes to a file instead of the console. Both setups produce invalid streams. I 
once more put a ZIP file up to Dropbox [2], containing input video file (note 
that there is no NAL unit bigger than 1000 bytes in there, so everything should 
be fine) and two resulting output video files. I also attached the code of my 
modified testRTSPClient. Let me know if this works for you; here, it doesn't. 
If you can reproduce this and tell me what DummySink is doing wrong, *then* I 
am confident that this is "End of Problem".
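By the way, the claim that no NAL unit in the input file exceeds 1000 bytes can be checked mechanically by scanning the Annex-B stream for start codes and measuring the distance between them. A rough sketch (operating on an in-memory buffer, file I/O omitted):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Return the size in bytes of the largest NAL unit in an Annex-B
// buffer (units delimited by 0x00 0x00 0x00 0x01 start codes).
size_t maxNalUnitSize(const std::vector<uint8_t>& data) {
    std::vector<size_t> starts;
    for (size_t i = 0; i + 3 < data.size(); ++i) {
        if (data[i] == 0 && data[i+1] == 0 &&
            data[i+2] == 0 && data[i+3] == 1) {
            starts.push_back(i + 4);   // payload begins after the code
            i += 3;                    // skip past this start code
        }
    }
    size_t maxSize = 0;
    for (size_t k = 0; k < starts.size(); ++k) {
        size_t end = (k + 1 < starts.size()) ? starts[k+1] - 4
                                             : data.size();
        if (end - starts[k] > maxSize) maxSize = end - starts[k];
    }
    return maxSize;
}
```

This is how I verified the "no unit bigger than 1000 bytes" property of the file in the ZIP.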

Note also that the behavior is not completely deterministic, which is one more 
thing that gives me a headache. Sometimes it just works. When I use smaller 
frame dimensions, it works more often. I tried with half the frame size 
(3088x2076) and it worked *almost* always. I spent the whole day yesterday 
trying everything I could possibly think of, and I found that even with the 
very same executables, the results differ. Sometimes it works flawlessly ten 
times in a row, and then you get broken video files for hours. I certainly 
don't get it.

Best,
Roland


[2] https://www.dropbox.com/s/od7014v3oqbsp1n/live555-streaming-2.zip?dl=0 


-----Original Message-----
From: live-devel [mailto:live-devel-boun...@ns.live555.com] On Behalf Of Ross 
Finlayson
Sent: Thursday, May 4, 2017 7:49 PM
To: LIVE555 Streaming Media - development & use <live-de...@ns.live555.com>
Subject: Re: [Live-devel] Broken data when streaming HEVC video

> And this is where I'm stuck. SPS, PPS, VPS NAL units are apparently 
> mandatory, not just for ffplay, but also for MPC, and VLC, and also for my 
> own NvCodec based decoder. I never wrote them into my files, and no program 
> ever complained about it. The files I'm writing from within the server prior 
> to sending (the one I gave you yesterday) are perfectly fine and do *not* 
> contain a header.

Actually, your file “checker.265” *does* contain VPS,SPS,PPS NAL units at the 
start (each prepended by a 0x00 0x00 0x00 0x01 ‘start code’):

%hexdump checker.265 | head
0000000 00 00 00 01 40 01 0c 01 ff ff 01 40 00 00 03 00
0000010 00 03 00 00 03 00 00 03 00 b4 ac 09 00 00 00 01
0000020 42 01 01 01 40 00 00 03 00 00 03 00 00 03 00 00
0000030 03 00 b4 a0 00 c1 08 00 82 1f 79 6b 4a 42 59 2e
0000040 30 10 10 00 00 03 00 10 00 00 03 01 e0 80 00 00
0000050 00 01 44 01 c0 f7 c0 cc 90 00 00 00 01 26 01 af
0000060 26 6c ca 03 76 1b e3 1a 3a 3d 74 9e fe e1 ab ef
0000070 d1 6d 37 5c c3 1f 69 57 fd 5c c0 e1 1b dd ee 8a
0000080 52 11 1e a0 02 97 a8 eb 9d ee 94 12 5c d7 ee b4
0000090 6f d2 07 37 7f 83 0c 80 02 b5 3b 0a dd 80 14 75

The VPS NAL unit (nal_unit_type 32 == (0x40&0x7E)>>1) is:
        40 01 0c 01 ff ff 01 40 00 00 03 00 00 03 00 00 03 00 00 03 00 b4 ac 09
The SPS NAL unit (nal_unit_type 33 == (0x42&0x7E)>>1) is:
        42 01 01 01 40 00 00 03 00 00 03 00 00 03 00 00 03 00 b4 a0 00 c1 08
        00 82 1f 79 6b 4a 42 59 2e 30 10 10 00 00 03 00 10 00 00 03 01 e0 80
The PPS NAL unit (nal_unit_type 34 == (0x44&0x7E)>>1) is:
        44 01 c0 f7 c0 cc 90

Your encoder is generating these NAL units at the start of the stream 
(*precisely because* they are mandatory).  And our streaming (server) software 
(e.g., when streaming your file “checker.265”) automatically parses these NAL 
units from the start of the file, and stores them (Base-64-encoded) in the 
stream’s SDP description, so that receivers (e.g., VLC, or “openRTSP”, using 
"H265VideoFileSink”) can regenerate them, in case they don’t receive them in 
the stream.


> Anyways, I found that H265VideoFileSink works while my modified DummySink (as 
> well as my own implementation of a MediaSink) does not.

That’s good.  You have something (A) that works, and something (B) that does 
not work.  Therefore, you simply compare (A) with (B), to figure out what your 
problem is.  Which you can easily do, because Remember, You Have Complete 
Source Code.

But in this case, you have already done the comparison, so you already know 
what the difference is:
        “H265VideoFileSink” writes VPS,SPS,PPS NAL units at the start of the 
output file, whereas your "modified DummySink” does not.
Therefore, either rewrite your “modified DummySink” so that it does the same 
thing as “H265VideoFileSink”, or else just use “H265VideoFileSink” instead.

End of Problem.


Ross Finlayson
Live Networks, Inc.
http://www.live555.com/


_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
