With the "core wars" starting (16-core/32-thread "consumer" CPUs
announced), maybe it's time to add an upper limit? Waking a core from
a low-power state has its own latency; it's sometimes faster to run
several jobs on an already-active core than to wake another one (which
also causes more lock contention — the woken core may just find that
the job was already taken by someone else and go back to sleep).

Gražvydas

On Thu, Jun 1, 2017 at 9:18 PM, Marek Olšák <[email protected]> wrote:
> From: Marek Olšák <[email protected]>
>
> Reserve one core for other things (like draw calls).
> ---
>  src/gallium/drivers/radeonsi/si_pipe.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/src/gallium/drivers/radeonsi/si_pipe.c 
> b/src/gallium/drivers/radeonsi/si_pipe.c
> index de4e5da..4704304 100644
> --- a/src/gallium/drivers/radeonsi/si_pipe.c
> +++ b/src/gallium/drivers/radeonsi/si_pipe.c
> @@ -874,22 +874,25 @@ struct pipe_screen *radeonsi_screen_create(struct 
> radeon_winsys *ws)
>
>         si_init_screen_state_functions(sscreen);
>
>         if (!r600_common_screen_init(&sscreen->b, ws) ||
>             !si_init_gs_info(sscreen) ||
>             !si_init_shader_cache(sscreen)) {
>                 FREE(sscreen);
>                 return NULL;
>         }
>
> -       /* Only enable as many threads as we have target machines and CPUs. */
> +       /* Only enable as many threads as we have target machines, but at most
> +        * the number of CPUs - 1 if there is more than one.
> +        */
>         num_cpus = sysconf(_SC_NPROCESSORS_ONLN);
> +       num_cpus = MAX2(1, num_cpus - 1);
>         num_compiler_threads = MIN2(num_cpus, ARRAY_SIZE(sscreen->tm));
>
>         if (!util_queue_init(&sscreen->shader_compiler_queue, "si_shader",
>                              32, num_compiler_threads)) {
>                 si_destroy_shader_cache(sscreen);
>                 FREE(sscreen);
>                 return NULL;
>         }
>
>         si_handle_env_var_force_family(sscreen);
> --
> 2.7.4
>
> _______________________________________________
> mesa-dev mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/mesa-dev