On 10/08, Oleg Nesterov wrote:
>
> On 10/08, Peter Zijlstra wrote:
> >
> > On Thu, Oct 08, 2015 at 04:51:36PM +0200, Oleg Nesterov wrote:
> > > @@ -261,12 +276,8 @@ int stop_two_cpus(unsigned int cpu1, unsigned int cpu2, cpu_stop_fn_t fn, void *
> > >   set_state(&msdata, MULTI_STOP_PREPARE);
> > >
> > >   /*
> > > +  * We do not want to migrate to an inactive CPU. FIXME: move this
> > > +  * into the caller.
> > >    */
> > >   if (!cpu_active(cpu1) || !cpu_active(cpu2)) {
> > >           preempt_enable();
> >
> > So we cannot move that into the caller..
>
> Why?
>
> > because this function sleeps
> > with wait_for_completion().
> >
> > Or rather, it would force the caller to use get_online_cpus(), which we
> > worked really hard to avoid.
>
> Aaah, wait. Sorry for the confusion!
>
> I meant "move this into the callback, migrate_swap_stop()".

Forgot to mention... Note that both of these cpu_active() checks are
obviously racy; CPU_DOWN_PREPARE can make a CPU inactive right after
the check.
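
For illustration only, roughly what I have in mind. This is a sketch
against migrate_swap_stop() in kernel/sched/core.c, not the actual
patch; the surrounding details are elided:

static int migrate_swap_stop(void *data)
{
        struct migration_swap_arg *arg = data;

        /*
         * The (still racy) checks move here, into stopper context,
         * and stop_two_cpus() loses them.
         */
        if (!cpu_active(arg->src_cpu) || !cpu_active(arg->dst_cpu))
                return -EAGAIN;

        /* ... double-lock both runqueues and do the actual swap ... */
        return 0;
}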

> > Also, I think we still want the patch I proposed which ensures the
> > stopper thread is active 'early', because the load balancer pretty much
>
> Perhaps. Although I do not really understand why it is important.
> I mean, either way we unpark it at the CPU_ONLINE stage; it is just
> that sched_cpu_active() has a higher priority.
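
(For context: CPU_ONLINE notifiers run highest priority first, and
sched_cpu_active() is registered at the top. From include/linux/cpu.h
as I read it; the exact values may differ between trees:

enum {
        CPU_PRI_SCHED_ACTIVE    = INT_MAX,      /* first on CPU_ONLINE */
        CPU_PRI_CPUSET_ACTIVE   = INT_MAX - 1,
        CPU_PRI_SCHED_INACTIVE  = INT_MIN + 1,
        CPU_PRI_CPUSET_INACTIVE = INT_MIN,      /* last on CPU_DOWN */
};

so the notifier that unparks the per-cpu stopper thread runs after
sched_cpu_active() either way.)
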
>
> But this is off-topic, in the sense that the main point of this patch
> is that stop_two_cpus() no longer needs to abuse the cpu_active()
> checks to avoid racing with cpu_up/down; we can simply rely on
> ->enabled.
>
> And again, we need to take both stopper locks to remove the
> "lglock stop_cpus_lock".
>
> So I think your change can be applied after this series too. Or did I
> miss something?
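
To make the ->enabled part concrete, the two-cpu queueing path after
this series looks roughly like this. A sketch only, with names as in
kernel/stop_machine.c; the actual patch may differ in detail:

static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
                                    int cpu2, struct cpu_stop_work *work2)
{
        struct cpu_stopper *stopper1 = per_cpu_ptr(&cpu_stopper, cpu1);
        struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
        int err = -ENOENT;

        /* Both stopper locks; this is what lets the lglock go away. */
        spin_lock_irq(&stopper1->lock);
        spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);

        /*
         * ->enabled is cleared when the stopper is parked on the way
         * down and set when it is unparked, so checking it under both
         * locks closes the race with cpu_up/down without any
         * cpu_active() games.
         */
        if (!stopper1->enabled || !stopper2->enabled)
                goto unlock;

        err = 0;
        __cpu_stop_queue_work(stopper1, work1);
        __cpu_stop_queue_work(stopper2, work2);
unlock:
        spin_unlock(&stopper2->lock);
        spin_unlock_irq(&stopper1->lock);

        return err;
}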

So I think this series makes sense either way, and it should hopefully
fix the problem with or without your change.

Oleg.
