Hi Jonathan,

Sorry, I had missed this mail.

On Wed, 7 Apr 2010, Bian, Jonathan wrote:

> The new post-processing flags you proposed look fine to me. As for the naming, I don't have a strong opinion as long as it conveys the different levels of trade-off. Perhaps we can use something like:
>
> VA_FILTER_LQ_SCALING -> VA_FILTER_SCALING_FAST
> VA_FILTER_MQ_SCALING -> VA_FILTER_SCALING_DEFAULT
> VA_FILTER_HQ_SCALING -> VA_FILTER_SCALING_HQ

Agreed, this looks better. Thanks.

> I have been thinking a little bit about how to support more advanced video post-processing capabilities with the API. As these advanced features will likely require passing more complex data structures than just flags or integer values, one possible solution is to use vaBeginPicture/vaRenderPicture/vaEndPicture for passing video post-processing data structures as buffers. For example, we can add a new VAVideoProcessingBufferType and a generalized VAVideoProcessingParameter data structure. This would make it easier to specify things like reference frames for doing motion-compensated de-interlacing etc. This should work for pre-processing as well if the source picture to be encoded needs some pre-processing, and as pre- and post-processing share a lot of common features they can be treated essentially the same.

The idea is appealing. At first sight, I thought there could be a problem if some postprocessing algorithms need to operate on up-scaled surfaces. On second thought, I don't know of any. So, your VAVideoProcessingBufferType looks interesting.
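To make the proposal concrete, here is a purely hypothetical sketch of what such a generalized parameter buffer could carry for motion-compensated de-interlacing. None of these names (VAVideoProcessingParameter, VAProcDeinterlaceAlgo, etc.) exist in libva today; the VASurfaceID stand-in below is a stub so the fragment is self-contained:

```c
/* HYPOTHETICAL sketch only: these names are proposals from this thread,
 * not part of the real libva API. */
#include <stdint.h>

typedef uint32_t VASurfaceID;           /* stub for the real VA type */

typedef enum {
    VAProcDeinterlaceNone = 0,
    VAProcDeinterlaceBob,
    VAProcDeinterlaceMotionCompensated  /* needs reference frames */
} VAProcDeinterlaceAlgo;                /* hypothetical name */

typedef struct {
    VAProcDeinterlaceAlgo algorithm;
    /* Reference surfaces the algorithm may look at, e.g. the previous
     * and next decoded frames for motion-compensated de-interlacing. */
    VASurfaceID backward_references[2];
    VASurfaceID forward_references[2];
    uint32_t    num_backward_references;
    uint32_t    num_forward_references;
} VAVideoProcessingParameter;           /* hypothetical name from the mail */
```

Such a struct would then be passed through vaCreateBuffer()/vaRenderPicture() with the proposed VAVideoProcessingBufferType, exactly like picture or slice parameters are today.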

However, I see the vaBeginPicture() .. vaEndPicture() functions as belonging to the decoding process and vaPutSurface() to the display process. Those two steps could be completely separated, even from a (helper) library point of view.

So, would this model work in the following scenario?

* decoder library:
- vaBeginPicture()
- vaRenderPicture() with PicParam, SliceParam, SliceData
- vaEndPicture()

* main application:
- vaBeginPicture()
- vaRenderPicture() with VideoProcessing params
- vaEndPicture()

i.e. decouple the decoding and post-processing steps while making sure the second vaRenderPicture() call in the user application won't tell the driver to decode the bitstream again.
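The decoupling concern above can be modelled with stubs. This toy (NOT real libva; all functions below are local stand-ins) shows a driver deciding what to do from the buffer types it received between begin/end, so a second pass carrying only the proposed video-processing buffer has no slice data to re-decode:

```c
/* Toy model of the decoupled two-pass flow; every name here is a stub. */
typedef enum {
    PictureParameterBufferType,
    SliceParameterBufferType,
    SliceDataBufferType,
    VideoProcessingBufferType   /* the proposed new type (hypothetical) */
} BufferType;

#define MAX_BUFS 8
static BufferType pending[MAX_BUFS];
static int num_pending;

static void stub_vaBeginPicture(void)          { num_pending = 0; }
static void stub_vaRenderPicture(BufferType t) { pending[num_pending++] = t; }

/* At "vaEndPicture" the driver inspects the submitted buffer types
 * instead of unconditionally decoding; returns 1 if it would decode. */
static int stub_vaEndPicture(void)
{
    int decode = 0;
    for (int i = 0; i < num_pending; i++)
        if (pending[i] == SliceDataBufferType)
            decode = 1;
    return decode;
}
```

In this model the decoder library's pass (PicParam + SliceParam + SliceData) triggers decoding, while the application's pass (video-processing params only) does not, which is exactly the guarantee the second vaRenderPicture() would need.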

vaPutSurface() could still be the most efficient way to get a decoded frame to the screen if no advanced video processing is required, or if the hardware can't process an image and write the output back to memory (e.g. a hardware overlay). But if the hardware is capable of taking an input image from memory, processing it and writing it back out to memory (whether through the GPU or a fixed-function unit), then the vaRenderPicture() path can enable more advanced features.

Could the vaPutSurface() postproc flags also be thought of as enabling flags, with the VAVideoProcessing structs carrying the algorithm options, and some defaults chosen if no such options are defined?

So, there are three possible models here:

1) VAVideoProcessing buffers controlling immediate execution of the postproc algorithms;

2) VAVideoProcessing buffers holding configuration only (e.g. a denoise level), executed later if the vaPutSurface() flags say so;

3) VAVideoProcessing buffers controlling immediate execution of some postproc algorithms, with vaPutSurface() flags controlling the other postproc algorithms with specific defaults.

In short, would it be desirable to keep the decoded surface as is, unprocessed [for vaGetImage()]? I believe so, and postproc should then be executed later, at vaPutSurface() time.
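Model 2 can be sketched the same way. In this toy (again, NOT real libva; the denoise here is a trivial clamp standing in for a real filter), the video-processing buffer only stores configuration, the work happens at vaPutSurface() time, and the decoded surface is never written to, so vaGetImage() would still see the raw decoded pixels:

```c
/* Toy model of option 2: config now, processing at display time. */
#define W 4
static unsigned char decoded_surface[W] = { 10, 250, 12, 0 };
static int denoise_level;  /* set via the hypothetical config buffer */

static void stub_render_videoproc_config(int level)
{
    denoise_level = level;  /* nothing is processed yet */
}

/* "Display": read the decoded surface, write processed pixels to a
 * separate output buffer; the source surface is left untouched. */
static void stub_vaPutSurface(const unsigned char *src, unsigned char *dst)
{
    for (int i = 0; i < W; i++) {
        int v = src[i];
        if (denoise_level > 0) {    /* trivial stand-in for a denoiser */
            if (v < denoise_level)       v = denoise_level;
            if (v > 255 - denoise_level) v = 255 - denoise_level;
        }
        dst[i] = (unsigned char)v;
    }
}
```

This keeps the property argued for above: the unprocessed decoded frame stays available for vaGetImage() while the display path applies the configured postproc.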

WDYT?

Regards,
Gwenole.
_______________________________________________
Libva mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/libva
