> Date: Wed, 6 Apr 2016 11:03:29 +0200
> From: Martin Pieuchot <m...@openbsd.org>
> 
> On 27/03/16(Sun) 12:00, Martin Pieuchot wrote:
> > Diff below moves resetpriority() out of updatepri() and resched_proc()
> > out of resetpriority() in order to expose a double resched_proc() call
> > in setrunnable().
> > 
> > This happens when a thread has slept for more than 1 second.  Since
> > p_usrpri is equal to or greater than p_priority, if the first call
> > triggers a need_resched() then the second will also do so.
> > 
> > I'd like to commit this diff, which does not introduce any behavior
> > change, and then remove the first resched_proc() in setrunnable().
> 
> I'm still looking for oks for this diff.
> 
> I'm currently trying to get a better view of the differences in
> voluntary vs. involuntary context switches between SP and MP kernels
> for the same workload.  So removing a resched_proc() will help.
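
For reference, my reading of the current (pre-diff) path in setrunnable()
when a thread has slept for more than a second is roughly the following.
This is only a sketch pieced together from the diff context below; it
leaves out the scheduler lock, CPU selection and run queue handling:

	if (p->p_slptime > 1)
		updatepri(p);	/* decays p_estcpu; today this ends in
				 * resetpriority(p), which ends in
				 * resched_proc(p, p->p_usrpri) */
	p->p_slptime = 0;
	resched_proc(p, p->p_priority);	/* second call: p_usrpri >= p_priority,
					 * so if the first call triggered
					 * need_resched(), this one will too */

That is the duplication the diff wants to expose.
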
I am not in favour of this diff, simply because it increases the risk
that somebody forgets to call resched_proc() when it is needed.  I don't
think the optimization you're talking about will make a meaningful
difference.  That said, I don't feel too strongly about this: if there
is a consensus that this diff is the way to go, I won't object.

> > Index: kern/sched_bsd.c
> > ===================================================================
> > RCS file: /cvs/src/sys/kern/sched_bsd.c,v
> > retrieving revision 1.43
> > diff -u -p -r1.43 sched_bsd.c
> > --- kern/sched_bsd.c	9 Mar 2016 13:38:50 -0000	1.43
> > +++ kern/sched_bsd.c	24 Mar 2016 11:37:34 -0000
> > @@ -246,6 +246,7 @@ schedcpu(void *arg)
> >  		newcpu = (u_int) decay_cpu(loadfac, p->p_estcpu);
> >  		p->p_estcpu = newcpu;
> >  		resetpriority(p);
> > +		resched_proc(p, p->p_usrpri);
> >  		if (p->p_priority >= PUSER) {
> >  			if (p->p_stat == SRUN &&
> >  			    (p->p_priority / SCHED_PPQ) !=
> > @@ -284,7 +285,6 @@ updatepri(struct proc *p)
> >  			newcpu = (int) decay_cpu(loadfac, newcpu);
> >  		p->p_estcpu = newcpu;
> >  	}
> > -	resetpriority(p);
> >  }
> >  
> >  /*
> > @@ -454,7 +454,7 @@ mi_switch(void)
> >  #endif
> >  }
> >  
> > -static __inline void
> > +void
> >  resched_proc(struct proc *p, u_char pri)
> >  {
> >  	struct cpu_info *ci;
> > @@ -509,8 +509,11 @@ setrunnable(struct proc *p)
> >  	p->p_stat = SRUN;
> >  	p->p_cpu = sched_choosecpu(p);
> >  	setrunqueue(p);
> > -	if (p->p_slptime > 1)
> > +	if (p->p_slptime > 1) {
> >  		updatepri(p);
> > +		resetpriority(p);
> > +		resched_proc(p, p->p_usrpri);
> > +	}
> >  	p->p_slptime = 0;
> >  	resched_proc(p, p->p_priority);
> >  }
> > @@ -531,7 +534,6 @@ resetpriority(struct proc *p)
> >  	    NICE_WEIGHT * (p->p_p->ps_nice - NZERO);
> >  	newpriority = min(newpriority, MAXPRI);
> >  	p->p_usrpri = newpriority;
> > -	resched_proc(p, p->p_usrpri);
> >  }
> >  
> >  /*
> > @@ -556,6 +558,7 @@ schedclock(struct proc *p)
> >  	SCHED_LOCK(s);
> >  	p->p_estcpu = ESTCPULIM(p->p_estcpu + 1);
> >  	resetpriority(p);
> > +	resched_proc(p, p->p_usrpri);
> >  	if (p->p_priority >= PUSER)
> >  		p->p_priority = p->p_usrpri;
> >  	SCHED_UNLOCK(s);
> > Index: kern/kern_resource.c
> > ===================================================================
> > RCS file: /cvs/src/sys/kern/kern_resource.c,v
> > retrieving revision 1.55
> > diff -u -p -r1.55 kern_resource.c
> > --- kern/kern_resource.c	5 Dec 2015 10:11:53 -0000	1.55
> > +++ kern/kern_resource.c	24 Mar 2016 11:30:32 -0000
> > @@ -194,8 +194,10 @@ donice(struct proc *curp, struct process
> >  		return (EACCES);
> >  	chgpr->ps_nice = n;
> >  	SCHED_LOCK(s);
> > -	TAILQ_FOREACH(p, &chgpr->ps_threads, p_thr_link)
> > -		(void)resetpriority(p);
> > +	TAILQ_FOREACH(p, &chgpr->ps_threads, p_thr_link) {
> > +		resetpriority(p);
> > +		resched_proc(p, p->p_usrpri);
> > +	}
> >  	SCHED_UNLOCK(s);
> >  	return (0);
> >  }
> > Index: sys/sched.h
> > ===================================================================
> > RCS file: /cvs/src/sys/sys/sched.h,v
> > retrieving revision 1.41
> > diff -u -p -r1.41 sched.h
> > --- sys/sched.h	17 Mar 2016 13:18:47 -0000	1.41
> > +++ sys/sched.h	24 Mar 2016 11:35:38 -0000
> > @@ -166,6 +166,7 @@ void sched_stop_secondary_cpus(void);
> >  #define cpu_is_idle(ci) ((ci)->ci_schedstate.spc_whichqs == 0)
> >  
> >  void sched_init_runqueues(void);
> > +void resched_proc(struct proc *p, u_char);
> >  void setrunqueue(struct proc *);
> >  void remrunqueue(struct proc *);
> >  
> > 
> > 
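
To spell out my concern with an example: once resched_proc() is no
longer called from resetpriority(), every caller has to remember the
two-step pattern below (a sketch only, mirroring the hunks above):

	resetpriority(p);		/* recompute p_usrpri from p_estcpu
					 * and the nice value */
	resched_proc(p, p->p_usrpri);	/* separately ask for a resched if
					 * the new priority warrants one;
					 * leaving this line out still
					 * compiles, the priority change
					 * just stops triggering a resched */

Today the resched check comes for free inside resetpriority(), which is
harder to get wrong.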