On Sun, 26 Jul 2015, Michael Niedermayer wrote:
From: Michael Niedermayer <[email protected]>
Fixes some files from Ticket679
This also changes subtitles to 4:2:0 matching the output format and thus
simplifying the blend code.
This restricts subtitle placement to the chroma sample resolution, though. If you
consider this a problem, say so; the code could be changed to use YUV444 for
subtitles and scale them down while blending, but that would be slower.
The current code uses only a single swscale context and reinitializes it as
needed; this could be changed as well if needed.
It is fine by me as it is.
Signed-off-by: Michael Niedermayer <[email protected]>
---
ffplay.c | 275 ++++++++++++++------------------------------------------------
1 file changed, 62 insertions(+), 213 deletions(-)
[...]
for (;;) {
if (!(sp = frame_queue_peek_writable(&is->subpq)))
@@ -2348,14 +2170,41 @@ static int subtitle_thread(void *arg)
for (i = 0; i < sp->sub.num_rects; i++)
{
-            for (j = 0; j < sp->sub.rects[i]->nb_colors; j++)
-            {
-                RGBA_IN(r, g, b, a, (uint32_t*)sp->sub.rects[i]->pict.data[1] + j);
-                y = RGB_TO_Y_CCIR(r, g, b);
-                u = RGB_TO_U_CCIR(r, g, b, 0);
-                v = RGB_TO_V_CCIR(r, g, b, 0);
-                YUVA_OUT((uint32_t*)sp->sub.rects[i]->pict.data[1] + j, y, u, v, a);
+            int in_w = sp->sub.rects[i]->w;
+            int in_h = sp->sub.rects[i]->h;
+            int subw = is->subdec.avctx->width  ? is->subdec.avctx->width  : is->viddec.avctx->width;
+            int subh = is->subdec.avctx->height ? is->subdec.avctx->height : is->viddec.avctx->height;
+            int out_w = in_w * is->viddec.avctx->width  / subw;
+            int out_h = in_h * is->viddec.avctx->height / subh;
viddec.avctx may not be set here. I see no better way than to add two extra
fields to VideoState and update them when opening a video stream and whenever
a new picture is decoded.
+            AVPicture newpic;
+
+            //cant use avpicture_alloc as it is not compatible with avsubtitle_free()
+            av_image_fill_linesizes(newpic.linesize, AV_PIX_FMT_YUVA420P, out_w);
+            newpic.data[0] = av_malloc(newpic.linesize[0] * out_h);
+            newpic.data[3] = av_malloc(newpic.linesize[3] * out_h);
+            newpic.data[1] = av_malloc(newpic.linesize[1] * ((out_h+1)/2));
+            newpic.data[2] = av_malloc(newpic.linesize[2] * ((out_h+1)/2));
+
+            is->sub_convert_ctx = sws_getCachedContext(is->sub_convert_ctx,
+                in_w, in_h, AV_PIX_FMT_PAL8, out_w, out_h,
+                AV_PIX_FMT_YUVA420P, sws_flags, NULL, NULL, NULL);
+            if (!is->sub_convert_ctx || !newpic.data[0] || !newpic.data[3] ||
+                !newpic.data[1] || !newpic.data[2]
+            ) {
+                av_log(NULL, AV_LOG_FATAL, "Cannot initialize the sub conversion context\n");
+                exit(1);
             }
+            sws_scale(is->sub_convert_ctx,
+                      sp->sub.rects[i]->pict.data, sp->sub.rects[i]->pict.linesize,
+                      0, in_h, newpic.data, newpic.linesize);
+
+ av_free(sp->sub.rects[i]->pict.data[0]);
+ av_free(sp->sub.rects[i]->pict.data[1]);
+ sp->sub.rects[i]->pict = newpic;
+ sp->sub.rects[i]->w = out_w;
+ sp->sub.rects[i]->h = out_h;
+ sp->sub.rects[i]->x = sp->sub.rects[i]->x * out_w / in_w;
+ sp->sub.rects[i]->y = sp->sub.rects[i]->y * out_h / in_h;
}
Regards,
Marton
_______________________________________________
ffmpeg-devel mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel