https://bugs.kde.org/show_bug.cgi?id=512987

--- Comment #2 from kido <[email protected]> ---
(In reply to Bernd from comment #1)
> Thank you for your report.
> 
> Please note that rendering your project involves two steps: 1) applying all
> effects, compositions, and transitions; 2) encoding the rendered frames with
> the selected encoder and muxing them into the selected container.
> 
> Step 1) is done exclusively by MLT, the underlying framework for all the
> compositing and filtering, which does not utilize the GPU due to unresolved
> issues between MLT, movit (a library for GPU acceleration), and Kdenlive.
> Discussion and work are ongoing, but due to the small team size progress is
> slower than we would like.
> 
> Step 2) is the only one where GPU acceleration is possible, by selecting
> NVENC or VAAPI profiles. But encoding is only a small portion of the total
> rendering work, so the GPU is mostly idle during the Kdenlive rendering
> process.

Hi Bernd! Thank you for your work, first of all!
Now I understand why Kdenlive does not use the GPU constantly during rendering.
However, I ran the following experiment:
I took a 4K test video, and in the first case I transcoded it with the
command:
ffmpeg -hwaccel cuda -i 194.mp4 -c:v h264_nvenc -vf "scale=2560:1440" -b:v 12M
output.mp4
It took about 11 seconds, with the GPU loaded at 100%.
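One aside on the ffmpeg command above, in case it matters for the comparison: the software `scale` filter runs on the CPU, so decoded frames are downloaded from GPU memory, scaled, and uploaded again before NVENC. A sketch of a fully GPU-resident variant (assuming an ffmpeg build with CUDA filter support; the filenames are the ones from my test) would be:

```shell
# Keep frames in GPU memory end to end: decode with CUDA, scale with the
# scale_cuda filter instead of the software "scale" filter, encode with NVENC.
# Requires an ffmpeg build configured with --enable-cuda-nvcc / CUDA filters.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i 194.mp4 \
  -vf "scale_cuda=2560:1440" -c:v h264_nvenc -b:v 12M output.mp4
```

This avoids the GPU-to-CPU frame copies, but even the version with the software scaler was still about 8 times faster than Kdenlive in my test, so the copies alone do not explain the gap.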
In the second case, I did the same thing in Kdenlive: no filters, effects, or
compositing, only a rescale to 1440p, with the render parameters ab=160k
acodec=aac channels=2 f=mp4 real_time=-1 threads=0 vb=12000k vcodec=h264_nvenc.
Result: encoding time 1 minute 25 seconds, CPU load ~17%, GPU load ~25-35%.
That is roughly an 8x difference in encoding time in favor of ffmpeg!
Why doesn't Kdenlive use the GPU to its full capacity in this case? Or, if it
can't use the GPU, why doesn't it use the CPU to its full capacity?

-- 
You are receiving this mail because:
You are watching all bug changes.