On 05/20/2014 03:12 PM, Peter Zijlstra wrote:
> On Tue, May 20, 2014 at 01:52:41AM +0530, Srivatsa S. Bhat wrote:
>> From: Srivatsa S. Bhat <[email protected]>
>> [PATCH v5 UPDATEDv3 3/3] CPU hotplug, smp: Flush any pending IPI callbacks
>> before CPU offline
>>
>> During CPU offline, in the stop-machine loop, we use 2 separate stages to
>> disable interrupts, to ensure that the CPU going offline doesn't get any new
>> IPIs from the other CPUs after it has gone offline.
>>
>> However, an IPI sent much earlier might arrive late on the target CPU
>> (possibly _after_ the CPU has gone offline) due to hardware latencies,
>> and due to this, the smp-call-function callbacks queued on the outgoing
>> CPU might not get noticed (and hence not executed) at all.
>>
>> This is somewhat theoretical, but in any case, it makes sense to explicitly
>> loop through the call_single_queue and flush any pending callbacks before the
>> CPU goes completely offline. So, flush the queued smp-call-function callbacks
>> in the MULTI_STOP_DISABLE_IRQ_ACTIVE stage, after disabling interrupts on the
>> active CPU. This can be trivially achieved by invoking the
>> generic_smp_call_function_single_interrupt() function itself (and since the
>> outgoing CPU is still online at this point, we won't trigger the "IPI to
>> offline CPU" warning in this function; so we are safe to call it here).
>>
>> This way, we would have handled all the queued callbacks before going
>> offline, and also, no new IPIs can be sent by the other CPUs to the outgoing
>> CPU at that point, because they will all be executing the stop-machine code
>> with interrupts disabled.
>>
>> Suggested-by: Frederic Weisbecker <[email protected]>
>> Reviewed-by: Tejun Heo <[email protected]>
>> Signed-off-by: Srivatsa S. Bhat <[email protected]>
>> ---
>>
>>  include/linux/smp.h   |    2 ++
>>  kernel/smp.c          |   17 ++++++++++++++---
>>  kernel/stop_machine.c |   11 +++++++++++
>>  3 files changed, 27 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/smp.h b/include/linux/smp.h
>> index 633f5ed..e6b090d 100644
>> --- a/include/linux/smp.h
>> +++ b/include/linux/smp.h
>> @@ -151,6 +151,8 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
>>
>>  static inline void kick_all_cpus_sync(void) { }
>>
>> +static inline void generic_smp_call_function_single_interrupt(void) { }
>> +
>>  #endif /* !SMP */
>>
>>  /*
>> diff --git a/kernel/smp.c b/kernel/smp.c
>> index 306f818..b765167 100644
>> --- a/kernel/smp.c
>> +++ b/kernel/smp.c
>> @@ -177,9 +177,18 @@ static int generic_exec_single(int cpu, struct call_single_data *csd,
>>  	return 0;
>>  }
>>
>> -/*
>> - * Invoked by arch to handle an IPI for call function single. Must be
>> - * called from the arch with interrupts disabled.
>> +/**
>> + * generic_smp_call_function_single_interrupt - Execute SMP IPI callbacks
>> + *
>> + * Invoked by arch to handle an IPI for call function single.
>> + *
>> + * This is also invoked by a CPU about to go offline, to flush any pending
>> + * smp-call-function callbacks queued on this CPU (including those for which
>> + * the source CPU's IPIs might not have been received on this CPU yet).
>> + * This ensures that all pending IPI callbacks are run before the CPU goes
>> + * completely offline.
>> + *
>> + * Must be called with interrupts disabled.
>>  */
>>  void generic_smp_call_function_single_interrupt(void)
>>  {
>> @@ -187,6 +196,8 @@ void generic_smp_call_function_single_interrupt(void)
>>  	struct call_single_data *csd, *csd_next;
>>  	static bool warned;
>>
>> +	WARN_ON(!irqs_disabled());
>> +
>>  	entry = llist_del_all(&__get_cpu_var(call_single_queue));
>>  	entry = llist_reverse_order(entry);
>>
>> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
>> index 288f7fe..4069c13 100644
>> --- a/kernel/stop_machine.c
>> +++ b/kernel/stop_machine.c
>> @@ -21,6 +21,7 @@
>>  #include <linux/smpboot.h>
>>  #include <linux/atomic.h>
>>  #include <linux/lglock.h>
>> +#include <linux/smp.h>
>>
>>  /*
>>   * Structure to determine completion condition and record errors. May
>> @@ -223,6 +224,16 @@ static int multi_cpu_stop(void *data)
>>  			if (is_active) {
>>  				local_irq_disable();
>>  				hard_irq_disable();
>> +
>> +				/*
>> +				 * IPIs (from the inactive CPUs) might
>> +				 * arrive late due to hardware latencies.
>> +				 * So flush out any pending IPI callbacks
>> +				 * explicitly, to ensure that the outgoing
>> +				 * CPU doesn't go offline with work still
>> +				 * pending (during CPU hotplug).
>> +				 */
>> +				generic_smp_call_function_single_interrupt();
>>  			}
>>  			break;
>>  		case MULTI_STOP_RUN:
>
> The multi_cpu_stop() path isn't exclusive to hotplug, so your changelog
> is wrong or the patch is.
>
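
For context, here is a rough sketch of the stage loop that the
kernel/stop_machine.c hunk above modifies. It is simplified from the real
multi_cpu_stop() and assumes the two-stage IRQ-disable split described in the
changelog (only MULTI_STOP_DISABLE_IRQ_ACTIVE is named above; the _INACTIVE
name below is a guess), so treat it as an illustration rather than the actual
code. The point is that every CPU taking part in *any* multi_cpu_stop() call
walks through these stages, not just the CPU-hotplug path:

/*
 * Rough sketch (not the actual kernel code) of the lock-step stage
 * machine in multi_cpu_stop(), with the two-stage IRQ-disable split
 * assumed from earlier patches in this series.
 */
static int multi_cpu_stop_sketch(void *data)
{
	struct multi_stop_data *msdata = data;
	enum multi_stop_state curstate = MULTI_STOP_NONE;
	int cpu = smp_processor_id(), err = 0;
	unsigned long flags;
	bool is_active;

	local_save_flags(flags);

	/* The "active" CPUs are the ones that get to run msdata->fn(). */
	if (!msdata->active_cpus)
		is_active = cpu == cpumask_first(cpu_online_mask);
	else
		is_active = cpumask_test_cpu(cpu, msdata->active_cpus);

	do {
		cpu_relax();
		if (msdata->state != curstate) {
			curstate = msdata->state;
			switch (curstate) {
			case MULTI_STOP_DISABLE_IRQ_INACTIVE:	/* assumed name */
				/* The other CPUs stop sending IPIs first... */
				if (!is_active) {
					local_irq_disable();
					hard_irq_disable();
				}
				break;
			case MULTI_STOP_DISABLE_IRQ_ACTIVE:
				/*
				 * ...then the outgoing CPU disables IRQs and
				 * drains whatever was already queued on it.
				 */
				if (is_active) {
					local_irq_disable();
					hard_irq_disable();
					generic_smp_call_function_single_interrupt();
				}
				break;
			case MULTI_STOP_RUN:
				if (is_active)
					err = msdata->fn(msdata->data);
				break;
			default:
				break;
			}
			/* Wait for all CPUs, then advance to the next stage. */
			ack_state(msdata);
		}
	} while (curstate != MULTI_STOP_EXIT);

	local_irq_restore(flags);
	return err;
}
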
Yes, I know that multi_cpu_stop() isn't exclusive to hotplug. That's why I
have explicitly referred to CPU hotplug in the comment as well as the
changelog. But I totally agree that, code-wise, this is not the best way to
do it, since it affects (although harmlessly) use cases other than hotplug
as well. Do you have any other suggestions?

> Furthermore, you yourself have worked on trying to remove stop machine
> from hotplug, that too should've been a big hint this is the wrong place
> for this.
>

I know :-( But for lack of any other solution (other than getting rid of
stop-machine from the hotplug path), I went with this rather poor fix as a
stop-gap arrangement until we implement the long-term solution
(stop-machine-free CPU hotplug).

While we are on the topic of stop-machine removal, I'll try to revive that
patchset again and hope that this time it will get more reviews and perhaps
a chance to go upstream. That should solve a _lot_ of problems with hotplug!

Regards,
Srivatsa S. Bhat
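
P.S. For anyone wondering what the "flush" amounts to concretely: the
function simply drains this CPU's call_single_queue llist and runs the
callbacks. The sketch below is modelled on kernel/smp.c of this kernel
version but is not verbatim; in particular it omits the "IPI to offline CPU"
warning logic, and csd_unlock() is the file-local helper in that file.

/*
 * Simplified sketch of what generic_smp_call_function_single_interrupt()
 * does (modelled on kernel/smp.c of this era; not verbatim).
 */
static void flush_call_single_queue_sketch(void)
{
	struct llist_node *entry;
	struct call_single_data *csd, *csd_next;

	/* Like the real handler, this must run with interrupts off. */
	WARN_ON(!irqs_disabled());

	/* Atomically take over everything queued on this CPU... */
	entry = llist_del_all(&__get_cpu_var(call_single_queue));
	/* ...and restore FIFO order (llist pushes are LIFO). */
	entry = llist_reverse_order(entry);

	/* Run each callback, then release the csd for reuse. */
	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
		csd->func(csd->info);
		csd_unlock(csd);
	}
}

Since the inactive CPUs have already disabled interrupts by the time the
active CPU runs this, nothing new can be queued behind its back.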

