On 01/27/2017 03:21 AM, Michel Dänzer wrote:
On 26/01/17 08:07 PM, Christian König wrote:
Am 26.01.2017 um 12:01 schrieb Samuel Pitoiset:
On 01/26/2017 03:45 AM, Michel Dänzer wrote:
On 25/01/17 11:19 PM, Samuel Pitoiset wrote:

I would like to approach the problem by reducing the amount of VRAM
needed by userspace, in order to prevent TTM from moving lots of data around...

One thing that might help there is not trying to put any buffers in VRAM
which will (likely) be accessed by the CPU and which are larger than, say,
1/4 the size of CPU-visible VRAM. And maybe also keeping track of the
total size of such buffers we're trying to put in VRAM, and stopping when
it exceeds, say, 3/4.
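
Something along these lines, purely as a sketch of the heuristic (all the
names below are made up for illustration, this isn't actual radeonsi/winsys
code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Decide whether we should even request VRAM placement for a buffer.
     * visible_vram_size is the size of the CPU-visible VRAM segment;
     * *cpu_vram_requested accumulates how much of it we've already asked
     * for with likely-CPU-accessed buffers. */
    static bool should_try_vram(uint64_t buf_size, bool likely_cpu_access,
                                uint64_t visible_vram_size,
                                uint64_t *cpu_vram_requested)
    {
       if (!likely_cpu_access)
          return true;

       /* Skip buffers larger than 1/4 of CPU-visible VRAM. */
       if (buf_size > visible_vram_size / 4)
          return false;

       /* Stop once we've already tried to fill ~3/4 of the visible
        * segment with such buffers. */
       if (*cpu_vram_requested + buf_size > visible_vram_size * 3 / 4)
          return false;

       *cpu_vram_requested += buf_size;
       return true;
    }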

That could be a solution, yes. But maybe we should also try to reduce
the amount of mapped VRAM (for buffers that are only mapped once).

For buffers mapped only once, I suggest just using a bounce buffer in
GART.
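
Roughly like this, as a sketch of the flow (the helper and type names are
invented and just stand in for whatever staging/DMA paths we'd actually
use):

    #include <string.h>

    /* Write the data into a GTT staging buffer with the CPU, then let the
     * GPU copy it into the VRAM buffer, so the VRAM buffer never needs a
     * CPU mapping at all. */
    static void upload_once_via_gart(struct winsys *ws, struct buffer *dst_vram,
                                     const void *data, unsigned size)
    {
       struct buffer *bounce = buffer_create(ws, size, DOMAIN_GTT);

       void *ptr = buffer_map(ws, bounce);   /* GTT mapping, cheap for the CPU */
       memcpy(ptr, data, size);
       buffer_unmap(ws, bounce);

       dma_copy(ws, dst_vram, bounce, size); /* e.g. an async SDMA copy */
       buffer_destroy(ws, bounce);
    }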

BTW: What kind of allocations are we talking about here? Allocations from
the application, or driver-internal allocations (shader code, for example)?

That's a good point about shader code, actually — we do upload the
shader machine code by writing with the CPU directly to VRAM. This could
account for at least some of the large number of mappings Sam is seeing.
It should be relatively easy to unmap these buffers immediately after
the code is written (in si_shader_binary_upload).

si_shader_binary_upload() already unmaps these buffers (i.e. it calls the buffer_unmap() winsys function).
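
For reference, the upload path there is roughly (paraphrased from memory,
not the exact code):

    /* The VRAM mapping only lives long enough to copy the shader machine
     * code in; buffer_unmap() is called right after the copy. */
    void *ptr = ws->buffer_map(shader->bo->buf, NULL, PIPE_TRANSFER_READ_WRITE);
    memcpy(ptr, binary->code, binary->code_size);
    ws->buffer_unmap(shader->bo->buf);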


Longer term, maybe we should consider doing this upload without the CPU
writing to VRAM at all.

