Hi,

On 10/3/18 2:00 AM, Marek Olšák wrote:
On Tue, Oct 2, 2018 at 6:36 PM Rob Clark <[email protected]> wrote:
On Tue, Oct 2, 2018 at 6:30 PM Marek Olšák <[email protected]> wrote:
From: Marek Olšák <[email protected]>
[...]
Just curious (and maybe I missed some previous discussion), would this
override taskset?

Asking because when benchmarking on big/little arm SoCs I tend to use
taskset to pin things to either the fast cores or slow cores, to
eliminate a source of uncertainty in the result.  (And I use u_queue
to split off the 2nd half of batch submits, i.e. the part that generates
gmem/tiling cmds and does the kernel submit ioctl.)  Would be slightly
annoying to lose that ability to control which group of cores the
u_queue thread runs on.

(But admittedly this is kind of an edge case, so I guess an env var to
override the behavior would be ok.)
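For illustration, assuming a hypothetical big.LITTLE layout where CPUs 4-7
are the fast cluster, the pinning that taskset applies (and that the u_queue
threads currently inherit from the process) boils down to something like:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <pthread.h>

  /* Restrict the calling thread to the (assumed) fast cores 4-7, roughly
   * what "taskset -c 4-7 <benchmark>" sets up for the whole process before
   * any Mesa threads are spawned. */
  static int pin_current_thread_to_big_cores(void)
  {
     cpu_set_t set;

     CPU_ZERO(&set);
     for (int cpu = 4; cpu <= 7; cpu++)
        CPU_SET(cpu, &set);

     /* Returns 0 on success, an errno value on failure. */
     return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
  }

An unconditional pin inside Mesa would replace exactly this kind of
externally chosen affinity mask.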

I don't know, but I guess it does affect taskset.

pipe_context::set_context_param(ctx,
PIPE_CONTEXT_PARAM_PIN_THREADS_TO_L3_CACHE, L3_group_index); is
similar to what you need.
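From the caller's side that would look roughly like the sketch below; the
wrapper function and the l3_group_index value are illustrative, only the
set_context_param hook itself comes from this series:

  #include "pipe/p_context.h"
  #include "pipe/p_defines.h"

  /* Ask the driver to pin its queue threads to the L3 group named by
   * l3_group_index; drivers that don't implement the hook leave the
   * thread affinity alone. */
  static void pin_to_l3_group(struct pipe_context *ctx, unsigned l3_group_index)
  {
     if (ctx->set_context_param)
        ctx->set_context_param(ctx, PIPE_CONTEXT_PARAM_PIN_THREADS_TO_L3_CACHE,
                               l3_group_index);
  }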

Ideally, the default behavior on ARM would be whatever is most desirable
there. An env var would be the second option.

Use of taskset is not ARM specific. I've seen nasty kernel scheduler CPU core bouncing issues on other platforms too; debugging those required using taskset for the given game.

Also, to get any kind of reliable CPU utilization figure, one needs to bind the task to a specific core (or build a kernel with freq stats support, which isn't enabled by default in distro kernels, and have suitable tooling to parse that data).
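
For what it's worth, a minimal sketch of what I mean: once the process is
bound to a single core, CPU time over wall-clock time gives that core's
utilization, whereas without binding the figure is blurred by migration
between differently clocked cores:

  #include <sys/resource.h>
  #include <sys/time.h>
  #include <time.h>

  /* (user + system CPU time) / wall-clock time between a start snapshot
   * and now; with the process bound to one core this is that core's
   * utilization. */
  static double pinned_core_utilization(const struct timespec *wall_start,
                                        const struct rusage *ru_start)
  {
     struct timespec wall_end;
     struct rusage ru_end;
     double wall, cpu;

     clock_gettime(CLOCK_MONOTONIC, &wall_end);
     getrusage(RUSAGE_SELF, &ru_end);

     wall = (wall_end.tv_sec - wall_start->tv_sec) +
            (wall_end.tv_nsec - wall_start->tv_nsec) / 1e9;
     cpu = (ru_end.ru_utime.tv_sec - ru_start->ru_utime.tv_sec) +
           (ru_end.ru_utime.tv_usec - ru_start->ru_utime.tv_usec) / 1e6 +
           (ru_end.ru_stime.tv_sec - ru_start->ru_stime.tv_sec) +
           (ru_end.ru_stime.tv_usec - ru_start->ru_stime.tv_usec) / 1e6;

     return wall > 0.0 ? cpu / wall : 0.0;
  }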

So, which games use thread affinity for threads that use Mesa, and how big a problem is that, i.e. why is this patch needed?


        - Eero