The idea behind the change is to move away from the existing spatial vCPU handling approach, which requires costly modifications to the scheduling logic to enforce the requested CPU count (a 10%+ performance drop in some tests, see below), in favor of the temporal isolation already provided by the cgroup2 cpu.max controller.
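As a rough sketch of the temporal approach (the cgroup path below is illustrative, not taken from these patches): cpu.max takes "<quota> <period>" in microseconds, so granting a container the equivalent of nr_cpus CPUs amounts to setting quota = nr_cpus * period.

```shell
# Emulate "nr_cpus = 2" via cgroup2 bandwidth control.
# quota = nr_cpus * period caps the group at nr_cpus worth of
# CPU time per period, regardless of which CPUs the tasks run on.
NR_CPUS=2
PERIOD=100000                 # default cpu.max period, 100ms in usec
QUOTA=$((NR_CPUS * PERIOD))
echo "$QUOTA $PERIOD"         # prints "200000 100000"
# In a real setup this string would be written to the group's
# cpu.max file, e.g. (path is hypothetical):
# echo "$QUOTA $PERIOD" > /sys/fs/cgroup/machine.slice/ct100.scope/cpu.max
```

Unlike pinning or masking CPUs, this bounds total CPU time per period, so the scheduler core needs no extra per-vCPU accounting.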
Reference test results:

1. Clean setup, no vCPU-related modifications:
   ~/at_process_ctxswitch_pipe -w -p 2 -t 15
   rate_total: 856509.625000, avg: 428254.812500

2. vCPU-related modifications (present state):
   ~/at_process_ctxswitch_pipe -w -p 2 -t 15
   rate_total: 735626.812500, avg: 367813.406250

3. Cleaned-up vCPU handling:
   ~/at_process_ctxswitch_pipe -w -p 2 -t 15
   rate_total: 840074.750000, avg: 420037.375000

Dmitry Sepp (2):
  sched: Clean up vCPU handling logic
  sched: Support nr_cpus in cgroup2 as well

 include/linux/sched.h |   6 -
 kernel/sched/core.c   |  96 ++--------
 kernel/sched/fair.c   | 405 ------------------------------------------
 kernel/sched/sched.h  |   3 -
 4 files changed, 10 insertions(+), 500 deletions(-)

base-commit: 9dc72d17b6abbe7353f01f6ac44551f75299a560
-- 
2.47.1
_______________________________________________
Devel mailing list
[email protected]
https://lists.openvz.org/mailman/listinfo/devel
