On 23/10/17 13:45, Anton Khirnov wrote:
> Quoting Mark Thompson (2017-10-02 00:01:04)
>> SEI headers should be inserted as generic raw data (the old specific
>> type has been deprecated in libva2).
>> ---
>>  libavcodec/vaapi_encode_h264.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c
>> index 833d442d0..388d950cc 100644
>> --- a/libavcodec/vaapi_encode_h264.c
>> +++ b/libavcodec/vaapi_encode_h264.c
>> @@ -252,7 +252,7 @@ static int vaapi_encode_h264_write_extra_header(AVCodecContext *avctx,
>>
>>      ff_cbs_fragment_uninit(&priv->cbc, au);
>>
>> -    *type = VAEncPackedHeaderH264_SEI;
>> +    *type = VAEncPackedHeaderRawData;
>
> This makes no difference for old libva?
The correct SEI is always inserted as raw data for all driver versions.  I thought that was the end of it, but stupidly I only tested VBR mode.

In CBR mode, all pre-2.0 drivers additionally generate a new, broken SEI message (one not matching the parameter sets) and dump it into the stream if we don't provide one with the deprecated type, because of the test at <https://github.com/01org/intel-vaapi-driver/blob/v1.8-branch/src/gen6_mfc_common.c#L647>.  That would suggest we should just #ifdef on the libva version.

Unfortunately, this implies there is another bug here in CBR mode: if the user disables timing SEI generation, the driver test still triggers and inserts an SEI message which makes no sense at all (the HRD parameters aren't present).  Before 7a4fac5e91789b73e07bd4ad20493cfde028df76 the SEI was always included, so this wasn't visible; since then it has been broken for that case.

I think that means the right solution is to always give pre-2.0 drivers a zero-length VAEncPackedHeaderH264_SEI, to stop them from breaking the stream by adding a new invalid message.  I'll look into how that can work.

- Mark
_______________________________________________
libav-devel mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-devel
