From: Fei Wang <[email protected]>

Use the DXGI 16-bit pixel formats to be compatible with 12-bit Y212/XV36,
since there is no 12-bit pixel format defined in D3D11.

Fixed command line on Windows:
$ ffmpeg.exe -hwaccel qsv -i input_12bit.bin -f null -

Signed-off-by: Fei Wang <[email protected]>
---
 libavutil/hwcontext_d3d11va.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/libavutil/hwcontext_d3d11va.c b/libavutil/hwcontext_d3d11va.c
index 9d81effe5e..2cde44af8e 100644
--- a/libavutil/hwcontext_d3d11va.c
+++ b/libavutil/hwcontext_d3d11va.c
@@ -104,6 +104,10 @@ static const struct {
     { DXGI_FORMAT_P016, AV_PIX_FMT_P012 },
     { DXGI_FORMAT_Y216, AV_PIX_FMT_Y216 },
     { DXGI_FORMAT_Y416, AV_PIX_FMT_XV48 },
+    // There is no 12-bit pixel format defined in D3D11, so use the 16-bit
+    // formats to be compatible with the 12-bit AV_PIX_FMT* formats.
+    { DXGI_FORMAT_Y216, AV_PIX_FMT_Y212 },
+    { DXGI_FORMAT_Y416, AV_PIX_FMT_XV36 },
     // Special opaque formats. The pix_fmt is merely a place holder, as the
     // opaque format cannot be accessed directly.
     { DXGI_FORMAT_420_OPAQUE, AV_PIX_FMT_YUV420P },
--
2.34.1

_______________________________________________
ffmpeg-devel mailing list
[email protected]
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
