Jul 8, 2020, 06:28 by [email protected]:
> Fixes: 4907
>
> Adds support for decoding of animated WebP.
>
> The WebP parser now splits the input stream into packets containing one frame.
>
> The WebP decoder adds the animation related features according to the specs:
> https://developers.google.com/speed/webp/docs/riff_container#animation
> The frames of the animation may be smaller than the image canvas.
> Therefore, the frame is decoded to a temporary frame,
> then it is blended into the canvas, the canvas is copied to the output frame,
> and finally the frame is disposed from the canvas.
>
> The output to AV_PIX_FMT_YUVA420P/AV_PIX_FMT_YUV420P is still supported.
> The background color is specified only as BGRA in the WebP file
> so it is converted to YUVA if YUV formats are output.
>
We don't convert pixel formats in decoders, and I wouldn't want to have
libavcodec depend on libswscale. I wouldn't trust libswscale to make
accurate conversions either.
Can you use the macros in libavutil/colorspace.h to convert the BGRA
value to YUVA and then just memcpy it across the frame?
Also, there are a lot of frame memcpys in the code. Could you get rid
of most of them by refcounting?
> -    .capabilities   = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS,
> +    .capabilities   = AV_CODEC_CAP_DR1,
Why?
> +    if (component == 1 || component == 2) {
> +        height = AV_CEIL_RSHIFT(height, desc->log2_chroma_h);
> +    }
We don't wrap 1-line if statements in brackets.
_______________________________________________
ffmpeg-devel mailing list
[email protected]
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
[email protected] with subject "unsubscribe".