Eric Anholt <[email protected]> writes:

> Keith Packard <[email protected]> writes:
>
>> Michel Dänzer <[email protected]> writes:
>>
>>> On 08.04.2014 17:13, Keith Packard wrote:
>>>> Michel Dänzer <[email protected]> writes:
>>>> 
>>>>> Yes, that works fine now in Xephyr. The only remaining obvious problem
>>>>> there is the xfwm4 window decoration corruption regression from the
>>>>> master branch.
>>>> 
>>>> Ok, looks like that's caused by glamor re-using FBOs that were too large
>>>> for tile pixmaps. Oddly, the texture fetch doesn't work like you'd want
>>>> in that case.
>>>> 
>>>> I've added a patch to keep glamor from ever using over-sized FBOs. We'll
>>>> probably want to re-enable that optimization at some point in the future
>>>> as it does tend to save a ton of allocation overhead, but we'll need to
>>>> be careful to only use it when the object isn't being used as a texture
>>>> source.
>>>
>>> Right, or the texture coordinates need to be calculated according to the
>>> texture size as opposed to the pixmap size. Though that still wouldn't
>>> work e.g. for Render repeat modes.
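(As an aside, the mismatch is easy to see numerically. This is just an
illustrative sketch with made-up sizes, not actual glamor code:)

```python
# Sketch (made-up sizes, not glamor code): sampling an oversized FBO
# with pixmap-relative texture coordinates fetches the wrong texels.

pixmap_w = 100   # logical pixmap width
fbo_w = 128      # width of the oversized FBO actually backing it
pixel = 50       # texel we want to read back

# Normalizing against the pixmap size, as if FBO and pixmap matched:
s = pixel / pixmap_w                 # 0.5
# GL scales the coordinate by the *texture* size, so the fetch lands at:
fetched = s * fbo_w                  # 64.0, not the intended texel 50
# Normalizing against the real texture size fixes the plain fetch:
s_fixed = pixel / fbo_w
assert s_fixed * fbo_w == pixel      # back at texel 50
# ...but Render repeat modes would still wrap at fbo_w rather than
# pixmap_w, which is why exact sizing is the safe default.
```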
>>
>> I'm hoping we'll be able to simply remove all of the X-server level
>> pixmap caching; libdrm already caches stuff below us, and is doing a
>> much more polite job of it.
>
> Data today from removing the fbo cache entirely on poppler.trace,
> compared to just using exact sizing:
>
> x before
> + after
> +--------------------------------------------------------------------------------+
> |                                                                              +|
> |xx          x x x                                               + +   +      +|
> | |_______A__M___|                                                |____M_A______||
> +--------------------------------------------------------------------------------+
>     N           Min           Max        Median           Avg        Stddev
> x   5      2.562944      2.751878      2.712117     2.6680034   0.089980187
> +   5      3.334879      3.511184      3.408156     3.4232804   0.083419141
> Difference at 95.0% confidence
>       0.755277 +/- 0.126537
>       28.3087% +/- 4.74276%
>       (Student's t, pooled s = 0.0867617)
>
> So, while I don't like having this cache (which sucks memory from the
> system and doesn't give the kernel a chance to reclaim it), the
> performance delta's big enough to keep it for now.  I see some
> low-hanging fruit in Mesa, so let's revisit this if we clean up overhead
> in the rest of the stack.

Thanks for the analysis. I'll see if I can't get some internal
statistics about what the cache is actually being used for; it'd be
great if we could limit the number and size of entries without major
performance impacts.
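
For what it's worth, the confidence figures ministat printed above can be
reproduced from just the quoted summary statistics (a sketch; the t value
is the standard tabulated one for 8 degrees of freedom):

```python
import math

# Summary statistics from the ministat run quoted above (n = 5 each)
n1, avg1, sd1 = 5, 2.6680034, 0.089980187   # before (cache removed)
n2, avg2, sd2 = 5, 3.4232804, 0.083419141   # after (exact sizing only)

# Pooled standard deviation for two independent samples
pooled_s = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                     / (n1 + n2 - 2))

diff = avg2 - avg1
t = 2.306  # Student's t, 95% two-sided, n1 + n2 - 2 = 8 d.o.f.
half_width = t * pooled_s * math.sqrt(1.0 / n1 + 1.0 / n2)

print(f"{diff:.6f} +/- {half_width:.6f}")   # 0.755277 +/- 0.126537
print(f"{100 * diff / avg1:.4f}%")          # 28.3087%
```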

-- 
[email protected]


_______________________________________________
[email protected]: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel