On Sun, Apr 18, 2021 at 10:17:51PM -0400, Rik van Riel wrote:
> The try_to_wake_up function has an optimization where it can queue
> a task for wakeup on its previous CPU, if the task is still in the
> middle of going to sleep inside schedule().
>
> Once schedule() re-enables IRQs, the task will be woken up with an
> IPI, and placed back on the runqueue.
>
> If we have such a wakeup pending, there is no need to search other
> CPUs for runnable tasks. Just skip (or bail out early from) newidle
> balancing, and run the just woken up task.
>
> For a memcache like workload test, this reduces total CPU use by
> about 2%, proportionally split between user and system time,
> and p99 and p95 application response time by 2-3% on average.
> The schedstats run_delay number shows a similar improvement.

Nice.
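For context (paraphrasing kernel/sched/core.c from memory, so not an exact
quote of the upstream code), the flag being tested below is set on the
TTWU_QUEUE wakelist path, roughly:

	static void __ttwu_queue_wakelist(struct task_struct *p, int cpu,
					  int wake_flags)
	{
		struct rq *rq = cpu_rq(cpu);

		/*
		 * Flag the remote rq and kick it with an IPI; the target CPU
		 * drains its wake list (and clears ttwu_pending again) once
		 * it re-enables IRQs on the way out of schedule().
		 */
		WRITE_ONCE(rq->ttwu_pending, 1);
		__smp_call_single_queue(cpu, &p->wake_entry.llist);
	}

So by the time pick_next_task_fair() observes rq->ttwu_pending, a runnable
task is already on its way to this CPU.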
> Signed-off-by: Rik van Riel <[email protected]>
> ---
>  kernel/sched/fair.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 69680158963f..19a92c48939f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7163,6 +7163,14 @@ done: __maybe_unused;
>  	if (!rf)
>  		return NULL;
>
> +	/*
> +	 * We have a woken up task pending here. No need to search for ones
> +	 * elsewhere. This task will be enqueued the moment we unblock irqs
> +	 * upon exiting the scheduler.
> +	 */
> +	if (rq->ttwu_pending)
> +		return NULL;

As reported by the robot, that needs a CONFIG_SMP guard of sorts; an #ifdef
might work, I suppose.

>  	new_tasks = newidle_balance(rq, rf);
>
>  	/*
> @@ -10661,7 +10669,8 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
>  		 * Stop searching for tasks to pull if there are
>  		 * now runnable tasks on this rq.
>  		 */
> -		if (pulled_task || this_rq->nr_running > 0)
> +		if (pulled_task || this_rq->nr_running > 0 ||
> +				this_rq->ttwu_pending)

Either cino=(0:0 or just bust the limit and make it 84 chars.
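To make both nits concrete, an untested sketch: in pick_next_task_fair() the
check could simply be wrapped (a tiny helper that compiles away on !SMP would
arguably be nicer, since newidle_balance() itself is already SMP-only):

#ifdef CONFIG_SMP
	/*
	 * We have a woken up task pending here. No need to search for ones
	 * elsewhere. This task will be enqueued the moment we unblock irqs
	 * upon exiting the scheduler.
	 */
	if (rq->ttwu_pending)
		return NULL;
#endif

and in newidle_balance(), either let the line run to 84 characters or align
the continuation with the opening parenthesis:

		if (pulled_task || this_rq->nr_running > 0 ||
		    this_rq->ttwu_pending)
			break;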

