Awesome, I would like to see the result of HQ scaling sometime in the future. I am just using vaPutSurface() and don't want to go through the Proc pipeline if I don't have to. Is the performance penalty identical both ways? Is there a way I can measure how much GPU processing (utilization %, etc.) is being used?

Thanks,
Ratin

On Tue, Feb 19, 2013 at 7:24 PM, Xiang, Haihao <[email protected]> wrote:
> > I am using Intel_driver from the staging branch, on a Gen 3 HD4000. So
> > other algorithms like bi-cubic are not supported?
>
> You can select a scaling method other than the default via the flag to
> vaPutSurface() or the filter_flags field in VAProcPipelineParameterBuffer.
>
> /* Scaling flags for vaPutSurface() */
> #define VA_FILTER_SCALING_DEFAULT       0x00000000
> #define VA_FILTER_SCALING_FAST          0x00000100
> #define VA_FILTER_SCALING_HQ            0x00000200
> #define VA_FILTER_SCALING_NL_ANAMORPHIC 0x00000300
> #define VA_FILTER_SCALING_MASK          0x00000f00
>
> In VAProcPipelineParameterBuffer:
>
>  * - Scaling: \c VA_FILTER_SCALING_DEFAULT, \c VA_FILTER_SCALING_FAST,
>  *   \c VA_FILTER_SCALING_HQ, \c VA_FILTER_SCALING_NL_ANAMORPHIC.
>  */
> unsigned int filter_flags;
>
> For the Intel driver, currently only VA_FILTER_SCALING_NL_ANAMORPHIC and
> VA_FILTER_SCALING_DEFAULT/VA_FILTER_SCALING_FAST are supported. We will
> add support for VA_FILTER_SCALING_HQ.
>
> Thanks
> Haihao
>
> > On Mon, Feb 18, 2013 at 12:11 AM, Xiang, Haihao <[email protected]> wrote:
> > On Fri, 2013-02-15 at 16:18 -0800, Ratin wrote:
> > > I am decoding a 720p video stream from a camera to 1080p surfaces
> > > and displaying them on the screen. I am seeing noticeable noise and
> > > pulsating which is directly related to the I-frame interval
> > > (apparently). The lowest I-frame interval I can specify for the
> > > camera is 1 second, and selecting that in addition to a bitrate of
> > > 8192 kbps makes it slightly better, but there is still a lot of
> > > noise. A software decoded/scaled video looks all smooth.
> > > What I am wondering is: what is the default scaling algorithm being
> > > used in vaapi/intel driver, how do I specify better scaling
> > > algorithms like bi-cubic etc., and possibly specify the strength of
> > > the deblocking filter level as well, and what can I do to reduce
> > > the pulsating?
>
> > Which driver are you using? For Intel, it is bilinear.
>
> > > Any input would be much appreciated.
> > >
> > > Thanks
> > >
> > > Ratin
_______________________________________________
Libva mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/libva
