The idea behind this change is to move away from the existing spatial
vCPU handling approach, which requires costly modifications to the
scheduling logic to ensure the requested CPU count is obeyed (a 10%+
performance drop in some tests, see below), in favor of the temporal
isolation already provided by the cgroup2 cpu.max interface.
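
For reference, a minimal sketch of how cpu.max expresses such a temporal
limit (the cgroup path and container name below are illustrative, not
part of this series; a mounted cgroup2 hierarchy and root privileges are
assumed):

```shell
# cpu.max takes "$MAX $PERIOD", both in microseconds: the group may run
# for at most $MAX CPU time within each $PERIOD. To cap a container at
# the equivalent of N CPUs, grant N full periods worth of runtime:
NR_VCPUS=2
PERIOD=100000   # default period, 100 ms

# Hypothetical container cgroup path, shown for illustration only:
echo "$((NR_VCPUS * PERIOD)) $PERIOD" \
        > /sys/fs/cgroup/machine.slice/ct101/cpu.max

# The kernel then throttles the group once it has consumed 200 ms of
# CPU time in any 100 ms window, regardless of which CPUs it ran on.
cat /sys/fs/cgroup/machine.slice/ct101/cpu.max
```

Unlike the spatial approach, this leaves task placement entirely to the
scheduler and only bounds aggregate bandwidth per period.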

Reference test results:

1. Clean setup, no vCPU related modifications:
~/at_process_ctxswitch_pipe -w -p 2 -t 15
rate_total: 856509.625000, avg: 428254.812500

2. vCPU related modifications (present state):
~/at_process_ctxswitch_pipe -w -p 2 -t 15
rate_total: 735626.812500, avg: 367813.406250

3. Cleaned-up vCPU handling:
~/at_process_ctxswitch_pipe -w -p 2 -t 15
rate_total: 840074.750000, avg: 420037.375000

Changes in v4:
 - While at it, fix a locking issue in tg_set_cpu_limit()


Dmitry Sepp (3):
  sched: Clean up vCPU handling logic
  sched: Support nr_cpus in cgroup2 as well
  sched: Add missing cpus_read_lock() in tg_set_cpu_limit()

 include/linux/sched.h          |   6 -
 include/linux/sched/topology.h |   5 -
 kernel/sched/core.c            | 108 ++-------
 kernel/sched/fair.c            | 408 ---------------------------------
 kernel/sched/sched.h           |  10 -
 5 files changed, 22 insertions(+), 515 deletions(-)


base-commit: 9dc72d17b6abbe7353f01f6ac44551f75299a560
-- 
2.47.1

_______________________________________________
Devel mailing list
[email protected]
https://lists.openvz.org/mailman/listinfo/devel
