Re: [PATCH 00/36] cpuidle,rcu: Cleanup the mess
On Wed, Jun 08, 2022 at 04:27:23PM +0200, Peter Zijlstra wrote:
> Hi All! (omg so many)

Hi Peter,

Sorry for the delay; my plate has also been rather full recently. I'm
beginning to page this in now.

> These here few patches mostly clear out the utter mess that is cpuidle vs
> rcuidle.
>
> At the end of the ride there's only 2 real RCU_NONIDLE() users left
>
>   arch/arm64/kernel/suspend.c:        RCU_NONIDLE(__cpu_suspend_exit());
>   drivers/perf/arm_pmu.c:             RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));

The latter of these is necessary because apparently PM notifiers are called
with RCU not watching. Is that still the case today (or at the end of this
series)? If so, that feels like fertile land for more issues (yaey...). If
not, we should be able to drop this.

I can go dig into that some more.

>   kernel/cfi.c:                       RCU_NONIDLE({
>
> (the CFI one is likely dead in the kCFI rewrite) and there's only a handful
> of trace_.*_rcuidle() left:
>
>   kernel/trace/trace_preemptirq.c:    trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
>   kernel/trace/trace_preemptirq.c:    trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
>   kernel/trace/trace_preemptirq.c:    trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
>   kernel/trace/trace_preemptirq.c:    trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
>   kernel/trace/trace_preemptirq.c:    trace_preempt_enable_rcuidle(a0, a1);
>   kernel/trace/trace_preemptirq.c:    trace_preempt_disable_rcuidle(a0, a1);
>
> All of them are in 'deprecated' code that is unused for GENERIC_ENTRY.

I think those are unused on arm64 too?

If not, I can go attack that.

> I've touched a _lot_ of code that I can't test and likely broken some of it :/
> In particular, the whole ARM cpuidle stuff was quite involved with OMAP being
> the absolute 'winner'.
>
> I'm hoping Mark can help me sort the remaining ARM64 bits as he moves that to
> GENERIC_ENTRY.

Moving to GENERIC_ENTRY as a whole is going to take a tonne of work
(refactoring both arm64 and the generic portion to be more amenable to each
other), but we can certainly move closer to that for the bits that matter
here.

Maybe we want a STRICT_ENTRY option to get rid of all the deprecated stuff
that we can select regardless of GENERIC_ENTRY to make that easier.

> I've also got a note that says ARM64 can probably do a WFE based
> idle state and employ TIF_POLLING_NRFLAG to avoid some IPIs.

Possibly; I'm not sure how much of a win that'll be given that by default
we'll have a ~10KHz WFE wakeup from the timer, but we could take a peek.

Thanks,
Mark.

_______________________________________________
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc
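For readers following along outside the kernel tree: RCU_NONIDLE() brackets a region with calls that make RCU watch again while the CPU is otherwise in an extended quiescent state (EQS). A rough user-space mock of that pattern — the stub functions stand in for the kernel's rcu_irq_enter_irqson()/rcu_irq_exit_irqson(), and this is an illustration of the shape of the macro, not the kernel implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Mock of RCU's "watching" state; in the kernel this is per-CPU context. */
static bool rcu_watching;

static void rcu_irq_enter_irqson(void) { rcu_watching = true;  }
static void rcu_irq_exit_irqson(void)  { rcu_watching = false; }

/*
 * Shape of RCU_NONIDLE(): make RCU watch, run the code, then return
 * to the extended quiescent state.
 */
#define RCU_NONIDLE(code)                 \
	do {                              \
		rcu_irq_enter_irqson();   \
		do { code; } while (0);   \
		rcu_irq_exit_irqson();    \
	} while (0)

/* A callback that, like armpmu_start(), must only run with RCU watching. */
static bool ran_while_watching;

static void needs_rcu(void)
{
	ran_while_watching = rcu_watching;
}
```

The point of the series is that once all callers enter these paths with RCU already watching, the bracketing (and hence the macro) becomes unnecessary.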
Re: [PATCH 14/36] cpuidle: Fix rcu_idle_*() usage
On Wed, Jun 08, 2022 at 04:27:37PM +0200, Peter Zijlstra wrote:
> --- a/kernel/time/tick-broadcast.c
> +++ b/kernel/time/tick-broadcast.c
> @@ -622,9 +622,13 @@ struct cpumask *tick_get_broadcast_onesh
>  	 * to avoid a deep idle transition as we are about to get the
>  	 * broadcast IPI right away.
>  	 */
> -int tick_check_broadcast_expired(void)
> +noinstr int tick_check_broadcast_expired(void)
>  {
> +#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
> +	return arch_test_bit(smp_processor_id(), cpumask_bits(tick_broadcast_force_mask));
> +#else
>  	return cpumask_test_cpu(smp_processor_id(), tick_broadcast_force_mask);
> +#endif
>  }

This is somewhat not-ideal. :/

Could we unconditionally do the arch_test_bit() variant, with a comment, or
does that not exist in some cases?

Thanks,
Mark.
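For context, cpumask_test_cpu() ultimately reduces to testing one bit in an array of unsigned longs; the #ifdef above only chooses between the instrumented wrapper and the raw arch_test_bit() around that same operation. A user-space sketch of the underlying bit test (illustrative, not the kernel code):

```c
#include <assert.h>
#include <limits.h>   /* CHAR_BIT */

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

/*
 * Core of test_bit()/arch_test_bit(): test bit 'nr' in an array of longs.
 * The instrumented kernel wrapper does the same thing but additionally
 * calls KASAN/KCSAN check hooks, which is why it cannot be used from
 * noinstr code.
 */
static int bit_test(unsigned long nr, const unsigned long *addr)
{
	return 1UL & (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG));
}
```

A cpumask is such an array with one bit per CPU, so testing "is this CPU in the mask" is exactly this operation with nr = cpu.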
Re: [PATCH 24/36] printk: Remove trace_.*_rcuidle() usage
On Thu, 9 Jun 2022 15:02:20 +0200 Petr Mladek wrote:
> > I'm somewhat curious whether we can actually remove that trace event.
>
> Good question.
>
> Well, I think that it might be useful. It allows to see trace and
> printk messages together.

Yes people still use it. I was just asked about it at Kernel Recipes. That
is, someone wanted printk mixed in with the tracing, and I told them about
this event (which they didn't know about but was happy to hear that it
existed).

-- Steve
Re: [PATCH 15/36] cpuidle,cpu_pm: Remove RCU fiddling from cpu_pm_{enter,exit}()
On Wed, Jun 08, 2022 at 04:27:38PM +0200, Peter Zijlstra wrote:
> All callers should still have RCU enabled.

IIUC with that true we should be able to drop the RCU_NONIDLE() from
drivers/perf/arm_pmu.c, as we only needed that for an invocation via a pm
notifier.

I should be able to give that a spin on some hardware.

> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  kernel/cpu_pm.c | 9 -
>  1 file changed, 9 deletions(-)
>
> --- a/kernel/cpu_pm.c
> +++ b/kernel/cpu_pm.c
> @@ -30,16 +30,9 @@ static int cpu_pm_notify(enum cpu_pm_eve
>  {
>  	int ret;
>
> -	/*
> -	 * This introduces a RCU read critical section, which could be
> -	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
> -	 * this.
> -	 */
> -	rcu_irq_enter_irqson();
>  	rcu_read_lock();
>  	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
>  	rcu_read_unlock();
> -	rcu_irq_exit_irqson();

To make this easier to debug, is it worth adding an assertion that RCU is
watching here? e.g.

	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "cpu_pm_notify() used illegally from EQS");

>
>  	return notifier_to_errno(ret);
>  }
>
> @@ -49,11 +42,9 @@ static int cpu_pm_notify_robust(enum cpu
>  	unsigned long flags;
>  	int ret;
>
> -	rcu_irq_enter_irqson();
>  	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
>  	ret = raw_notifier_call_chain_robust(&cpu_pm_notifier.chain, event_up, event_down, NULL);
>  	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
> -	rcu_irq_exit_irqson();

... and likewise here?

Thanks,
Mark.

>
>  	return notifier_to_errno(ret);
>  }
Re: [PATCH 16/36] rcu: Fix rcu_idle_exit()
On Wed, Jun 08, 2022 at 04:27:39PM +0200, Peter Zijlstra wrote:
> Current rcu_idle_exit() is terminally broken because it uses
> local_irq_{save,restore}(), which are traced which uses RCU.
>
> However, now that all the callers are sure to have IRQs disabled, we
> can remove these calls.
>
> Signed-off-by: Peter Zijlstra (Intel)
> Acked-by: Paul E. McKenney

Acked-by: Mark Rutland

Mark.

> ---
>  kernel/rcu/tree.c | 9 +++--
>  1 file changed, 3 insertions(+), 6 deletions(-)
>
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -659,7 +659,7 @@ static noinstr void rcu_eqs_enter(bool u
>   * If you add or remove a call to rcu_idle_enter(), be sure to test with
>   * CONFIG_RCU_EQS_DEBUG=y.
>   */
> -void rcu_idle_enter(void)
> +void noinstr rcu_idle_enter(void)
>  {
>  	lockdep_assert_irqs_disabled();
>  	rcu_eqs_enter(false);
> @@ -896,13 +896,10 @@ static void noinstr rcu_eqs_exit(bool us
>   * If you add or remove a call to rcu_idle_exit(), be sure to test with
>   * CONFIG_RCU_EQS_DEBUG=y.
>   */
> -void rcu_idle_exit(void)
> +void noinstr rcu_idle_exit(void)
>  {
> -	unsigned long flags;
> -
> -	local_irq_save(flags);
> +	lockdep_assert_irqs_disabled();
>  	rcu_eqs_exit(false);
> -	local_irq_restore(flags);
>  }
>  EXPORT_SYMBOL_GPL(rcu_idle_exit);
Re: [PATCH 20/36] arch/idle: Change arch_cpu_idle() IRQ behaviour
On Wed, Jun 08, 2022 at 04:27:43PM +0200, Peter Zijlstra wrote:
> Current arch_cpu_idle() is called with IRQs disabled, but will return
> with IRQs enabled.
>
> However, the very first thing the generic code does after calling
> arch_cpu_idle() is raw_local_irq_disable(). This means that
> architectures that can idle with IRQs disabled end up doing a
> pointless 'enable-disable' dance.
>
> Therefore, push this IRQ disabling into the idle function, meaning
> that those architectures can avoid the pointless IRQ state flipping.
>
> Signed-off-by: Peter Zijlstra (Intel)

Nice!

Acked-by: Mark Rutland [arm64]

Mark.

> ---
>  arch/alpha/kernel/process.c      | 1 -
>  arch/arc/kernel/process.c        | 3 +++
>  arch/arm/kernel/process.c        | 1 -
>  arch/arm/mach-gemini/board-dt.c  | 3 ++-
>  arch/arm64/kernel/idle.c         | 1 -
>  arch/csky/kernel/process.c       | 1 -
>  arch/csky/kernel/smp.c           | 2 +-
>  arch/hexagon/kernel/process.c    | 1 -
>  arch/ia64/kernel/process.c       | 1 +
>  arch/microblaze/kernel/process.c | 1 -
>  arch/mips/kernel/idle.c          | 8 +++-
>  arch/nios2/kernel/process.c      | 1 -
>  arch/openrisc/kernel/process.c   | 1 +
>  arch/parisc/kernel/process.c     | 2 --
>  arch/powerpc/kernel/idle.c       | 5 ++---
>  arch/riscv/kernel/process.c      | 1 -
>  arch/s390/kernel/idle.c          | 1 -
>  arch/sh/kernel/idle.c            | 1 +
>  arch/sparc/kernel/leon_pmc.c     | 4
>  arch/sparc/kernel/process_32.c   | 1 -
>  arch/sparc/kernel/process_64.c   | 3 ++-
>  arch/um/kernel/process.c         | 1 -
>  arch/x86/coco/tdx/tdx.c          | 3 +++
>  arch/x86/kernel/process.c        | 15 ---
>  arch/xtensa/kernel/process.c     | 1 +
>  kernel/sched/idle.c              | 2 --
>  26 files changed, 28 insertions(+), 37 deletions(-)
>
> --- a/arch/alpha/kernel/process.c
> +++ b/arch/alpha/kernel/process.c
> @@ -57,7 +57,6 @@ EXPORT_SYMBOL(pm_power_off);
>  void arch_cpu_idle(void)
>  {
>  	wtint(0);
> -	raw_local_irq_enable();
>  }
>
>  void arch_cpu_idle_dead(void)
> --- a/arch/arc/kernel/process.c
> +++ b/arch/arc/kernel/process.c
> @@ -114,6 +114,8 @@ void arch_cpu_idle(void)
>  		"sleep %0	\n"
>  		:
>  		:"I"(arg)); /* can't be "r" has to be embedded const */
> +
> +	raw_local_irq_disable();
>  }
>
>  #else	/* ARC700 */
> @@ -122,6 +124,7 @@ void arch_cpu_idle(void)
>  {
>  	/* sleep, but enable both set E1/E2 (levels of interrupts) before committing */
>  	__asm__ __volatile__("sleep 0x3	\n");
> +	raw_local_irq_disable();
>  }
>
>  #endif
> --- a/arch/arm/kernel/process.c
> +++ b/arch/arm/kernel/process.c
> @@ -78,7 +78,6 @@ void arch_cpu_idle(void)
>  		arm_pm_idle();
>  	else
>  		cpu_do_idle();
> -	raw_local_irq_enable();
>  }
>
>  void arch_cpu_idle_prepare(void)
> --- a/arch/arm/mach-gemini/board-dt.c
> +++ b/arch/arm/mach-gemini/board-dt.c
> @@ -42,8 +42,9 @@ static void gemini_idle(void)
>  	 */
>
>  	/* FIXME: Enabling interrupts here is racy! */
> -	local_irq_enable();
> +	raw_local_irq_enable();
>  	cpu_do_idle();
> +	raw_local_irq_disable();
>  }
>
>  static void __init gemini_init_machine(void)
> --- a/arch/arm64/kernel/idle.c
> +++ b/arch/arm64/kernel/idle.c
> @@ -42,5 +42,4 @@ void noinstr arch_cpu_idle(void)
>  	 * tricks
>  	 */
>  	cpu_do_idle();
> -	raw_local_irq_enable();
>  }
> --- a/arch/csky/kernel/process.c
> +++ b/arch/csky/kernel/process.c
> @@ -101,6 +101,5 @@ void arch_cpu_idle(void)
>  #ifdef CONFIG_CPU_PM_STOP
>  	asm volatile("stop\n");
>  #endif
> -	raw_local_irq_enable();
>  }
>  #endif
> --- a/arch/csky/kernel/smp.c
> +++ b/arch/csky/kernel/smp.c
> @@ -314,7 +314,7 @@ void arch_cpu_idle_dead(void)
>  	while (!secondary_stack)
>  		arch_cpu_idle();
>
> -	local_irq_disable();
> +	raw_local_irq_disable();
>
>  	asm volatile(
>  		"mov	sp, %0\n"
> --- a/arch/hexagon/kernel/process.c
> +++ b/arch/hexagon/kernel/process.c
> @@ -44,7 +44,6 @@ void arch_cpu_idle(void)
>  {
>  	__vmwait();
>  	/* interrupts wake us up, but irqs are still disabled */
> -	raw_local_irq_enable();
>  }
>
>  /*
> --- a/arch/ia64/kernel/process.c
> +++ b/arch/ia64/kernel/process.c
> @@ -241,6 +241,7 @@ void arch_cpu_idle(void)
>  		(*mark_idle)(1);
>
>  	raw_safe_halt();
> +	raw_local_irq_disable();
>
>  	if (mark_idle)
>  		(*mark_idle)(0);
> --- a/arch/microblaze/kernel/process.c
> +++ b/arch/microblaze/kernel/process.c
> @@ -138,5 +138,4 @@ int dump_fpu(struct pt_regs *regs, elf_f
>
>  void arch_cpu_idle(void)
>  {
> -	raw_local_irq_enable();
>  }
> --- a/arch/mips/kernel/idle.c
> +++ b/arch/mips/kernel/idle.c
> @@ -33,13 +33,13 @@ static void __cpuidle r3081_wait(void)
>  {
>  	unsigned long cfg
Re: [PATCH 23/36] arm64,smp: Remove trace_.*_rcuidle() usage
On Wed, Jun 08, 2022 at 04:27:46PM +0200, Peter Zijlstra wrote:
> Ever since commit d3afc7f12987 ("arm64: Allow IPIs to be handled as
> normal interrupts") this function is called in regular IRQ context.
>
> Signed-off-by: Peter Zijlstra (Intel)

[adding Marc since he authored that commit]

Makes sense to me:

Acked-by: Mark Rutland

Mark.

> ---
>  arch/arm64/kernel/smp.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -865,7 +865,7 @@ static void do_handle_IPI(int ipinr)
>  	unsigned int cpu = smp_processor_id();
>
>  	if ((unsigned)ipinr < NR_IPI)
> -		trace_ipi_entry_rcuidle(ipi_types[ipinr]);
> +		trace_ipi_entry(ipi_types[ipinr]);
>
>  	switch (ipinr) {
>  	case IPI_RESCHEDULE:
> @@ -914,7 +914,7 @@ static void do_handle_IPI(int ipinr)
>  	}
>
>  	if ((unsigned)ipinr < NR_IPI)
> -		trace_ipi_exit_rcuidle(ipi_types[ipinr]);
> +		trace_ipi_exit(ipi_types[ipinr]);
>  }
>
>  static irqreturn_t ipi_handler(int irq, void *data)
Re: [PATCH 25/36] time/tick-broadcast: Remove RCU_NONIDLE usage
On Wed, Jun 08, 2022 at 04:27:48PM +0200, Peter Zijlstra wrote:
> No callers left that have already disabled RCU.
>
> Signed-off-by: Peter Zijlstra (Intel)

Acked-by: Mark Rutland

Mark.

> ---
>  kernel/time/tick-broadcast-hrtimer.c | 29 -
>  1 file changed, 12 insertions(+), 17 deletions(-)
>
> --- a/kernel/time/tick-broadcast-hrtimer.c
> +++ b/kernel/time/tick-broadcast-hrtimer.c
> @@ -56,25 +56,20 @@ static int bc_set_next(ktime_t expires,
>  	 * hrtimer callback function is currently running, then
>  	 * hrtimer_start() cannot move it and the timer stays on the CPU on
>  	 * which it is assigned at the moment.
> +	 */
> +	hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED_HARD);
> +	/*
> +	 * The core tick broadcast mode expects bc->bound_on to be set
> +	 * correctly to prevent a CPU which has the broadcast hrtimer
> +	 * armed from going deep idle.
>  	 *
> -	 * As this can be called from idle code, the hrtimer_start()
> -	 * invocation has to be wrapped with RCU_NONIDLE() as
> -	 * hrtimer_start() can call into tracing.
> +	 * As tick_broadcast_lock is held, nothing can change the cpu
> +	 * base which was just established in hrtimer_start() above. So
> +	 * the below access is safe even without holding the hrtimer
> +	 * base lock.
>  	 */
> -	RCU_NONIDLE( {
> -		hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED_HARD);
> -		/*
> -		 * The core tick broadcast mode expects bc->bound_on to be set
> -		 * correctly to prevent a CPU which has the broadcast hrtimer
> -		 * armed from going deep idle.
> -		 *
> -		 * As tick_broadcast_lock is held, nothing can change the cpu
> -		 * base which was just established in hrtimer_start() above. So
> -		 * the below access is safe even without holding the hrtimer
> -		 * base lock.
> -		 */
> -		bc->bound_on = bctimer.base->cpu_base->cpu;
> -	} );
> +	bc->bound_on = bctimer.base->cpu_base->cpu;
> +
>  	return 0;
>  }
Re: [PATCH 14/36] cpuidle: Fix rcu_idle_*() usage
On Tue, Jun 14, 2022 at 01:41:13PM +0100, Mark Rutland wrote:
> On Wed, Jun 08, 2022 at 04:27:37PM +0200, Peter Zijlstra wrote:
> > --- a/kernel/time/tick-broadcast.c
> > +++ b/kernel/time/tick-broadcast.c
> > @@ -622,9 +622,13 @@ struct cpumask *tick_get_broadcast_onesh
> >  	 * to avoid a deep idle transition as we are about to get the
> >  	 * broadcast IPI right away.
> >  	 */
> > -int tick_check_broadcast_expired(void)
> > +noinstr int tick_check_broadcast_expired(void)
> >  {
> > +#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
> > +	return arch_test_bit(smp_processor_id(), cpumask_bits(tick_broadcast_force_mask));
> > +#else
> >  	return cpumask_test_cpu(smp_processor_id(), tick_broadcast_force_mask);
> > +#endif
> >  }
>
> This is somewhat not-ideal. :/

I'll say.

> Could we unconditionally do the arch_test_bit() variant, with a comment, or
> does that not exist in some cases?

Loads of build errors ensued, which is how I ended up with this mess ...
Re: [PATCH 15/36] cpuidle,cpu_pm: Remove RCU fiddling from cpu_pm_{enter,exit}()
On Tue, Jun 14, 2022 at 05:13:16PM +0100, Mark Rutland wrote:
> On Wed, Jun 08, 2022 at 04:27:38PM +0200, Peter Zijlstra wrote:
> > All callers should still have RCU enabled.
>
> IIUC with that true we should be able to drop the RCU_NONIDLE() from
> drivers/perf/arm_pmu.c, as we only needed that for an invocation via a pm
> notifier.
>
> I should be able to give that a spin on some hardware.
>
> > Signed-off-by: Peter Zijlstra (Intel)
> > ---
> >  kernel/cpu_pm.c | 9 -
> >  1 file changed, 9 deletions(-)
> >
> > --- a/kernel/cpu_pm.c
> > +++ b/kernel/cpu_pm.c
> > @@ -30,16 +30,9 @@ static int cpu_pm_notify(enum cpu_pm_eve
> >  {
> >  	int ret;
> >
> > -	/*
> > -	 * This introduces a RCU read critical section, which could be
> > -	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
> > -	 * this.
> > -	 */
> > -	rcu_irq_enter_irqson();
> >  	rcu_read_lock();
> >  	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
> >  	rcu_read_unlock();
> > -	rcu_irq_exit_irqson();
>
> To make this easier to debug, is it worth adding an assertion that RCU is
> watching here? e.g.
>
> 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
> 			 "cpu_pm_notify() used illegally from EQS");

My understanding is that rcu_read_lock() implies something along those
lines when PROVE_RCU.
Re: [Pv-drivers] [PATCH 29/36] cpuidle, xenpv: Make more PARAVIRT_XXL noinstr clean
On Mon, Jun 13, 2022 at 07:23:13PM +0000, Nadav Amit wrote:
> On Jun 13, 2022, at 11:48 AM, Srivatsa S. Bhat wrote:
>
> > ⚠ External Email
> >
> > On 6/8/22 4:27 PM, Peter Zijlstra wrote:
> >> vmlinux.o: warning: objtool: acpi_idle_enter_s2idle+0xde: call to wbinvd() leaves .noinstr.text section
> >> vmlinux.o: warning: objtool: default_idle+0x4: call to arch_safe_halt() leaves .noinstr.text section
> >> vmlinux.o: warning: objtool: xen_safe_halt+0xa: call to HYPERVISOR_sched_op.constprop.0() leaves .noinstr.text section
> >>
> >> Signed-off-by: Peter Zijlstra (Intel)
> >
> > Reviewed-by: Srivatsa S. Bhat (VMware)
> >
> >> -static inline void wbinvd(void)
> >> +extern noinstr void pv_native_wbinvd(void);
> >> +
> >> +static __always_inline void wbinvd(void)
> >>  {
> >>  	PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ALT_NOT(X86_FEATURE_XENPV));
> >>  }
>
> I guess it is yet another instance of wrong accounting of GCC for
> the assembly blocks' weight. I guess it is not a solution for older
> GCCs, but presumably PVOP_ALT_CALL() and friends should have
> used asm_inline or some new "asm_volatile_inline" variant.

Partially, some of the *SAN options also generate a metric ton of nonsense
when enabled and skew the compilers towards not inlining things.
Re: [PATCH 15/36] cpuidle,cpu_pm: Remove RCU fiddling from cpu_pm_{enter,exit}()
On Tue, Jun 14, 2022 at 06:42:14PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 14, 2022 at 05:13:16PM +0100, Mark Rutland wrote:
> > On Wed, Jun 08, 2022 at 04:27:38PM +0200, Peter Zijlstra wrote:
> > > All callers should still have RCU enabled.
> >
> > IIUC with that true we should be able to drop the RCU_NONIDLE() from
> > drivers/perf/arm_pmu.c, as we only needed that for an invocation via a pm
> > notifier.
> >
> > I should be able to give that a spin on some hardware.
> >
> > > Signed-off-by: Peter Zijlstra (Intel)
> > > ---
> > >  kernel/cpu_pm.c | 9 -
> > >  1 file changed, 9 deletions(-)
> > >
> > > --- a/kernel/cpu_pm.c
> > > +++ b/kernel/cpu_pm.c
> > > @@ -30,16 +30,9 @@ static int cpu_pm_notify(enum cpu_pm_eve
> > >  {
> > >  	int ret;
> > >
> > > -	/*
> > > -	 * This introduces a RCU read critical section, which could be
> > > -	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
> > > -	 * this.
> > > -	 */
> > > -	rcu_irq_enter_irqson();
> > >  	rcu_read_lock();
> > >  	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
> > >  	rcu_read_unlock();
> > > -	rcu_irq_exit_irqson();
> >
> > To make this easier to debug, is it worth adding an assertion that RCU is
> > watching here? e.g.
> >
> > 	RCU_LOCKDEP_WARN(!rcu_is_watching(),
> > 			 "cpu_pm_notify() used illegally from EQS");
>
> My understanding is that rcu_read_lock() implies something along those
> lines when PROVE_RCU.

Ah, duh. Given that:

Acked-by: Mark Rutland

Mark.
Re: [PATCH 00/36] cpuidle,rcu: Cleanup the mess
On Tue, Jun 14, 2022 at 12:19:29PM +0100, Mark Rutland wrote:
> On Wed, Jun 08, 2022 at 04:27:23PM +0200, Peter Zijlstra wrote:
> > Hi All! (omg so many)
>
> Hi Peter,
>
> Sorry for the delay; my plate has also been rather full recently. I'm
> beginning to page this in now.

No worries; we all have too much to do ;-)

> > These here few patches mostly clear out the utter mess that is cpuidle vs
> > rcuidle.
> >
> > At the end of the ride there's only 2 real RCU_NONIDLE() users left
> >
> >   arch/arm64/kernel/suspend.c:        RCU_NONIDLE(__cpu_suspend_exit());
> >   drivers/perf/arm_pmu.c:             RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));
>
> The latter of these is necessary because apparently PM notifiers are called
> with RCU not watching. Is that still the case today (or at the end of this
> series)? If so, that feels like fertile land for more issues (yaey...). If
> not, we should be able to drop this.

That should be fixed; fingers crossed :-)

> >   kernel/cfi.c:                       RCU_NONIDLE({
> >
> > (the CFI one is likely dead in the kCFI rewrite) and there's only a hand full
> > of trace_.*_rcuidle() left:
> >
> >   kernel/trace/trace_preemptirq.c:    trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
> >   kernel/trace/trace_preemptirq.c:    trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
> >   kernel/trace/trace_preemptirq.c:    trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
> >   kernel/trace/trace_preemptirq.c:    trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
> >   kernel/trace/trace_preemptirq.c:    trace_preempt_enable_rcuidle(a0, a1);
> >   kernel/trace/trace_preemptirq.c:    trace_preempt_disable_rcuidle(a0, a1);
> >
> > All of them are in 'deprecated' code that is unused for GENERIC_ENTRY.
>
> I think those are also unused on arm64 too?
>
> If not, I can go attack that.

My grep spots:

  arch/arm64/kernel/entry-common.c:	trace_hardirqs_on();
  arch/arm64/include/asm/daifflags.h:	trace_hardirqs_off();
  arch/arm64/include/asm/daifflags.h:	trace_hardirqs_off();

The _on thing should be replaced with something like:

	trace_hardirqs_on_prepare();
	lockdep_hardirqs_on_prepare();
	instrumentation_end();
	rcu_irq_exit();
	lockdep_hardirqs_on(CALLER_ADDR0);

(as I think you know, since you have some of that already).

And something similar for the _off thing, but with _off_finish().

> > I've touched a _lot_ of code that I can't test and likely broken some of it :/
> > In particular, the whole ARM cpuidle stuff was quite involved with OMAP being
> > the absolute 'winner'.
> >
> > I'm hoping Mark can help me sort the remaining ARM64 bits as he moves that to
> > GENERIC_ENTRY.
>
> Moving to GENERIC_ENTRY as a whole is going to take a tonne of work
> (refactoring both arm64 and the generic portion to be more amenable to each
> other), but we can certainly move closer to that for the bits that matter
> here.

I know ... been there etc.. :-)

> Maybe we want a STRICT_ENTRY option to get rid of all the deprecated stuff
> that we can select regardless of GENERIC_ENTRY to make that easier.

Possible yeah.

> > I've also got a note that says ARM64 can probably do a WFE based
> > idle state and employ TIF_POLLING_NRFLAG to avoid some IPIs.
>
> Possibly; I'm not sure how much of a win that'll be given that by default
> we'll have a ~10KHz WFE wakeup from the timer, but we could take a peek.

Ohh.. I didn't know it woke up *that* often. I just know Will made use of
it in things like smp_cond_load_relaxed() which would be somewhat similar
to a very shallow idle state that looks at the TIF word.
Re: [PATCH 14/36] cpuidle: Fix rcu_idle_*() usage
On Tue, Jun 14, 2022 at 06:40:53PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 14, 2022 at 01:41:13PM +0100, Mark Rutland wrote:
> > On Wed, Jun 08, 2022 at 04:27:37PM +0200, Peter Zijlstra wrote:
> > > --- a/kernel/time/tick-broadcast.c
> > > +++ b/kernel/time/tick-broadcast.c
> > > @@ -622,9 +622,13 @@ struct cpumask *tick_get_broadcast_onesh
> > >  	 * to avoid a deep idle transition as we are about to get the
> > >  	 * broadcast IPI right away.
> > >  	 */
> > > -int tick_check_broadcast_expired(void)
> > > +noinstr int tick_check_broadcast_expired(void)
> > >  {
> > > +#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
> > > +	return arch_test_bit(smp_processor_id(), cpumask_bits(tick_broadcast_force_mask));
> > > +#else
> > >  	return cpumask_test_cpu(smp_processor_id(), tick_broadcast_force_mask);
> > > +#endif
> > >  }
> >
> > This is somewhat not-ideal. :/
>
> I'll say.
>
> > Could we unconditionally do the arch_test_bit() variant, with a comment, or
> > does that not exist in some cases?
>
> Loads of build errors ensued, which is how I ended up with this mess ...

Yaey :(

I see the same is true for the thread flag manipulation too.

I'll take a look and see if we can layer things so that we can use the
arch_*() helpers and wrap those consistently so that we don't have to
check the CPP guard.

Ideally we'd have a better language that allows us to make some
context-sensitive decisions, then we could hide all this gunk in the
lower levels with something like:

	if (!THIS_IS_A_NOINSTR_FUNCTION()) {
		explicit_instrumentation(...);
	}

... ho hum.

Mark.
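The layering Mark is describing — a raw arch_*() primitive that noinstr code can call directly, plus an instrumented wrapper for everyone else — can be mocked in user space like so (the instrument_read() hook here is a stand-in for the kernel's KASAN/KCSAN checks, not a real API):

```c
#include <assert.h>
#include <limits.h>   /* CHAR_BIT */

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

static int instrumentation_calls;	/* counts mock sanitizer hook hits */

/* Mock instrumentation hook; real wrappers call kasan_check_read() etc. */
static void instrument_read(const volatile void *addr, unsigned long size)
{
	(void)addr;
	(void)size;
	instrumentation_calls++;
}

/* Raw primitive: no hooks, so it is safe to call from noinstr code. */
static int arch_test_bit(unsigned long nr, const unsigned long *addr)
{
	return 1UL & (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG));
}

/* Instrumented wrapper: what ordinary (instrumentable) code calls. */
static int test_bit(unsigned long nr, const unsigned long *addr)
{
	instrument_read(addr + nr / BITS_PER_LONG, sizeof(unsigned long));
	return arch_test_bit(nr, addr);
}
```

With this split, noinstr functions call the arch_*() form and everything else gets the checked wrapper — the CPP-guard hack in the patch exists only because that layering wasn't yet exposed uniformly.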
Re: [Buildroot] [PATCH] package/bpftool: add a patch to fix cross compilation
On 13/06/2022 22:48, Shahab Vahedi wrote:
> Arnout,
>
> On 6/13/22 22:21, Arnout Vandecappelle wrote:
>> The patch file should be git-formatted and have your Signed-off-by. I
>> simply took the patch from [1] (which has your signoff) and added the
>> additional Makefile.include changes
>
> Thanks for the tweak. I overlooked the "Sign-off" part. I used "diff"
> because of what BR's manual suggests [1]:
>
>     If the software is under version control, it is recommended to use
>     the upstream SCM software to generate the patch set. Otherwise,
>     concatenate the header with the output of the diff -purN ...
>
> The proposed patch was made for a tarball release that is not under any
> version control per se. With hindsight, maybe I should have checked for
> "v6.8.0" as branch/tag in the upstream repo.

Yeah, bpftool is actually a special situation since the real upstream
(where you contributed your patch) is a separate repository that gets
regularly synced into the bpftool repo. I think that at the time you sent
this to Buildroot, it was not yet synced to bpftool.

>> (of which I don't really understand BTW why they're relevant for
>> Buildroot).
>
> That extra change (setting "HOSTAR") is necessary, because in upstream
> that was already taken care of before the patch submission. However, in
> the v6.8.0 release "HOSTAR" is never initialised and the fix is actually
> using it.

Since we set it from the .mk file, it doesn't actually *need* to be
initialized in Buildroot context.

Also, if that came from a previously applied upstream patch, it's better
to backport the actual upstream patch. But that can be complicated as well
for something like bpftool, so it's fully understandable to do it like
this. It would be nice however in such a case to add a reference to the
upstream commit that added it.

Regards,
Arnout

> [1] https://buildroot.org/downloads/manual/manual.html#_format_and_licensing_of_the_package_patches
>
> Cheers,
> Shahab
Re: [PATCH v3 1/1] package/bpftool: revert bpf_cookie patch to allow building
On 14/06/2022 11:31, Shahab Vahedi wrote:
> Building bpftool on Debian 11 (bullseye) with kernel v5.10 and clang-11

How do you build host-bpftool with clang in Buildroot context? HOSTCC is
set to gcc in the Makefile... Do you supply an explicit HOSTCC= on the
Buildroot command line?

I'm not sure if we are really interested in carrying fixes for such exotic
and not-really-supported situations...

> fails:
>
> ---8<---
> $ make
> . . .
>   CLANG    pid_iter.bpf.o
> skeleton/pid_iter.bpf.c:47:14: error: incomplete definition of type 'struct bpf_perf_link'
>         perf_link = container_of(link, struct bpf_perf_link, link);
> . . .
> skeleton/pid_iter.bpf.c:49:30: error: no member named 'bpf_cookie' in 'struct perf_event'
>         return BPF_CORE_READ(event, bpf_cookie);
> . . .
> 10 errors generated.
> make: *** [Makefile:176: pid_iter.bpf.o] Error 1
> --->8---
>
> There are changes in bpftool v6.8.0 that assume the existence of
> particular data structures in the generated vmlinux.h that is obtained
> from the host machine. See [1] for further details. This commit adds a
> patch to revert that additional change in v6.8.0. There's a patch series
> pending to be submitted upstream [2]. Hopefully, those will take care of
> the problem if they land in the next release.
>
> Changelog:

Patch changelog should be under the --- line, it's not meant to be part of
the git history. But of course that means you need to put the references
[1][2] above the --- line.

(No need to resend for this or any of my other comments, just FYI for
future submissions.)

> v1: Initial submission [3]

It's nice, but not really necessary to have a reference to the old
versions. They're easy enough to find back.
v2: Use a full fledged git patch for bpftool [4] v3: Fix the snafu that resulted in malformed patch file [1] https://lore.kernel.org/bpf/c47f732d-dba8-2c13-7c72-3a651bf72...@synopsys.com/t/#u [2] https://lore.kernel.org/bpf/20220421003152.339542-1-aloba...@pm.me/ [3] https://lists.buildroot.org/pipermail/buildroot/2022-June/644819.html To refer to a buildroot patch, it's nicer to refer to patchwork, i.e. https://patchwork.ozlabs.org/project/buildroot/patch/97ea44bb-58fe-d6cb-6c79-9be0b245f...@synopsys.com/ Regards, Arnout [4] https://lists.buildroot.org/pipermail/buildroot/2022-June/644824.html Signed-off-by: Shahab Vahedi --- .../0002-revert-bpf_cookie-change.patch | 129 ++ 1 file changed, 129 insertions(+) create mode 100644 package/bpftool/0002-revert-bpf_cookie-change.patch diff --git a/package/bpftool/0002-revert-bpf_cookie-change.patch b/package/bpftool/0002-revert-bpf_cookie-change.patch new file mode 100644 index 00..6f9579bd23 --- /dev/null +++ b/package/bpftool/0002-revert-bpf_cookie-change.patch @@ -0,0 +1,129 @@ +From d7c78d1e38cde73c85b491a833f0e6e3f0d62654 Mon Sep 17 00:00:00 2001 +From: Shahab Vahedi +Date: Tue, 14 Jun 2022 10:12:21 +0200 +Subject: [PATCH] Revert commit "bpftool: Add bpf_cookie to link output" + +Building bpftool on a Debian bullseye with clang-11 fails [1]. +This patch reverts the offending commit [2]. If clang-11 is not +installed, then the "co-re" feature of bpf will not be enabled +and the issue remains dormant. 
+ +[1] Building release 6.8.0 on Debian 11 +https://lore.kernel.org/bpf/c47f732d-dba8-2c13-7c72-3a651bf72...@synopsys.com/t/#u + +[2] bpftool: Add bpf_cookie to link output +https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git/commit/?id=cbdaf71f + +Signed-off-by: Shahab Vahedi +--- + src/main.h | 2 -- + src/pids.c | 8 + src/skeleton/pid_iter.bpf.c | 22 -- + src/skeleton/pid_iter.h | 2 -- + 4 files changed, 34 deletions(-) + +diff --git a/src/main.h b/src/main.h +index aa99ffa..2f2b638 100644 +--- a/src/main.h b/src/main.h +@@ -111,9 +111,7 @@ struct obj_ref { + + struct obj_refs { + int ref_cnt; +- bool has_bpf_cookie; + struct obj_ref *refs; +- __u64 bpf_cookie; + }; + + struct btf; +diff --git a/src/pids.c b/src/pids.c +index e2d00d3..57f0d1b 100644 +--- a/src/pids.c b/src/pids.c +@@ -78,8 +78,6 @@ static void add_ref(struct hashmap *map, struct pid_iter_entry *e) + ref->pid = e->pid; + memcpy(ref->comm, e->comm, sizeof(ref->comm)); + refs->ref_cnt = 1; +- refs->has_bpf_cookie = e->has_bpf_cookie; +- refs->bpf_cookie = e->bpf_cookie; + + err = hashmap__append(map, u32_as_hash_field(e->id), refs); + if (err) +@@ -206,9 +204,6 @@ void emit_obj_refs_json(struct hashmap *map, __u32 id, + if (refs->ref_cnt == 0) + break; + +- if (refs->has_bpf_cookie) +- jsonw_lluint_field(json_writer, "bpf_cookie", refs->bpf_cookie); +- +
Re: [PATCH v3 1/1] package/bpftool: revert bpf_cookie patch to allow building
On 14/06/2022 19:14, Arnout Vandecappelle wrote:
(No need to resend for this or any of my other comments, just FYI for
future submissions.)

 Actually, the patch didn't appear on patchwork for some reason. It analyzed
it as a reply to the previous version of the patch. So you will have to
resend it after all.

 Regards,
 Arnout

___
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc
Re: [PATCH 00/36] cpuidle,rcu: Cleanup the mess
On Tue, Jun 14, 2022 at 06:58:30PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 14, 2022 at 12:19:29PM +0100, Mark Rutland wrote:
> > On Wed, Jun 08, 2022 at 04:27:23PM +0200, Peter Zijlstra wrote:
> > > Hi All! (omg so many)
> > 
> > Hi Peter,
> > 
> > Sorry for the delay; my plate has also been rather full recently. I'm
> > beginning to page this in now.
> 
> No worries; we all have too much to do ;-)
> 
> > > These here few patches mostly clear out the utter mess that is cpuidle vs
> > > rcuidle.
> > > 
> > > At the end of the ride there's only 2 real RCU_NONIDLE() users left
> > > 
> > > 	arch/arm64/kernel/suspend.c:	RCU_NONIDLE(__cpu_suspend_exit());
> > > 	drivers/perf/arm_pmu.c:	RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));
> > 
> > The latter of these is necessary because apparently PM notifiers are called
> > with RCU not watching. Is that still the case today (or at the end of this
> > series)? If so, that feels like fertile land for more issues (yaey...). If
> > not, we should be able to drop this.
> 
> That should be fixed; fingers crossed :-)

Cool; I'll try to give that a spin when I'm sat next to some relevant
hardware. :)

> > > 	kernel/cfi.c:	RCU_NONIDLE({
> > > 
> > > (the CFI one is likely dead in the kCFI rewrite) and there's only a hand
> > > full of trace_.*_rcuidle() left:
> > > 
> > > 	kernel/trace/trace_preemptirq.c:	trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
> > > 	kernel/trace/trace_preemptirq.c:	trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
> > > 	kernel/trace/trace_preemptirq.c:	trace_irq_enable_rcuidle(CALLER_ADDR0, caller_addr);
> > > 	kernel/trace/trace_preemptirq.c:	trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
> > > 	kernel/trace/trace_preemptirq.c:	trace_preempt_enable_rcuidle(a0, a1);
> > > 	kernel/trace/trace_preemptirq.c:	trace_preempt_disable_rcuidle(a0, a1);
> > > 
> > > All of them are in 'deprecated' code that is unused for GENERIC_ENTRY.
> > I think those are also unused on arm64 too?
> > 
> > If not, I can go attack that.
> 
> My grep spots:
> 
> arch/arm64/kernel/entry-common.c:	trace_hardirqs_on();
> arch/arm64/include/asm/daifflags.h:	trace_hardirqs_off();
> arch/arm64/include/asm/daifflags.h:	trace_hardirqs_off();

Ah; I hadn't realised those used trace_.*_rcuidle() behind the scenes.

That affects local_irq_{enable,disable,restore}() too (which is what the
daifflags.h bits are emulating), and also the generic entry code's
irqentry_exit(). So it feels to me like we should be fixing those more
generally? e.g. say that with a new STRICT_ENTRY[_RCU], we can only call
trace_hardirqs_{on,off}() with RCU watching, and alter the definition of
those?

> The _on thing should be replaced with something like:
> 
> 	trace_hardirqs_on_prepare();
> 	lockdep_hardirqs_on_prepare();
> 	instrumentation_end();
> 	rcu_irq_exit();
> 	lockdep_hardirqs_on(CALLER_ADDR0);
> 
> (as I think you know, since you have some of that already). And
> something similar for the _off thing, but with _off_finish().

Sure; I knew that was necessary for the outermost parts of entry (and I
think that's all handled), I just hadn't realised that
trace_hardirqs_{on,off} did the rcuidle thing in the middle.

It'd be nice to not have to open-code the whole sequence everywhere for the
portions which run after entry and are instrumentable, so (as above) I
reckon we want to make trace_hardirqs_{on,off}() not do the rcuidle part
unnecessarily (which IIUC is an end-goal anyway)?

> > > I've touched a _lot_ of code that I can't test and likely broken some of
> > > it :/ In particular, the whole ARM cpuidle stuff was quite involved with
> > > OMAP being the absolute 'winner'.
> > > 
> > > I'm hoping Mark can help me sort the remaining ARM64 bits as he moves
> > > that to GENERIC_ENTRY.
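The constraint behind the sequence quoted above — every instrumentable step (tracepoint, lockdep prepare) must run while RCU is still watching, and only the final noinstr lockdep call may follow rcu_irq_exit() — can be captured as a toy ordering model. This is illustrative Python, not kernel code; the names merely mirror the kernel calls from the quoted sequence:

```python
# Toy model of the irq-enable exit sequence: tracepoints used without the
# _rcuidle variant are only legal while "RCU is watching".
calls = []
rcu_watching = True

def tracepoint(name):
    # A plain (non-_rcuidle) tracepoint requires RCU to be watching.
    assert rcu_watching, f"{name} fired with RCU not watching"
    calls.append(name)

def irq_enable_exit():
    global rcu_watching
    tracepoint("trace_hardirqs_on_prepare")    # instrumentable: RCU still on
    calls.append("lockdep_hardirqs_on_prepare")
    calls.append("instrumentation_end")
    rcu_watching = False                       # models rcu_irq_exit()
    calls.append("lockdep_hardirqs_on")        # noinstr: no tracing past here

irq_enable_exit()
print(" -> ".join(calls))
```

Swapping the tracepoint() call to after the `rcu_watching = False` line trips the assertion, which is exactly the class of bug the series is hunting.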
> > 
> > Moving to GENERIC_ENTRY as a whole is going to take a tonne of work
> > (refactoring both arm64 and the generic portion to be more amenable to
> > each other), but we can certainly move closer to that for the bits that
> > matter here.
> 
> I know ... been there etc.. :-)
> 
> > Maybe we want a STRICT_ENTRY option to get rid of all the deprecated
> > stuff that we can select regardless of GENERIC_ENTRY to make that easier.
> 
> Possible yeah.
> 
> > > I've also got a note that says ARM64 can probably do a WFE based
> > > idle state and employ TIF_POLLING_NRFLAG to avoid some IPIs.
> > 
> > Possibly; I'm not sure how much of a win that'll be given that by default
> > we'll have a ~10KHz WFE wakeup from the timer, but we could take a peek.
> 
> Ohh.. I didn't know it woke up *that* often. I just know Will made use
> of it in things like smp_cond_load_relaxed() which would be s
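As a back-of-the-envelope check on the "~10KHz WFE wakeup" figure discussed above (assuming, for the sake of the arithmetic, that the timer event stream is the only spurious wakeup source):

```python
# Rough arithmetic for the ARM timer event stream rate quoted in the
# thread: a ~10 kHz event stream wakes a WFE-based idle loop roughly
# every 100 microseconds. Figures are illustrative, not measured.
event_stream_hz = 10_000                       # ~10KHz, as quoted
wakeup_period_us = 1_000_000 / event_stream_hz
print(wakeup_period_us)                        # 100.0 us between wakeups

# For an idle residency target of, say, 1 ms, that is ~10 spurious
# wakeups per sleep — the cost Mark is weighing against the saved IPIs.
wakeups_per_ms = 1_000 / wakeup_period_us
print(wakeups_per_ms)                          # 10.0
```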
Re: [PATCH 34.5/36] cpuidle,omap4: Push RCU-idle into omap4_enter_lowpower()
On Mon, Jun 13, 2022 at 03:39:05PM +0300, Tony Lindgren wrote:
> OMAP4 uses full SoC suspend modes as idle states, as such it needs the
> whole power-domain and clock-domain code from the idle path.
> 
> All that code is not suitable to run with RCU disabled, as such push
> RCU-idle deeper still.
> 
> Signed-off-by: Tony Lindgren
> ---
> 
> Peter here's one more for your series, looks like this is needed to avoid
> warnings similar to what you did for omap3.

Thanks Tony!

I've had a brief look at omap2_pm_idle() and do I understand it right
that something like the below patch would reduce it to a simple 'WFI'?

What do I do with the rest of that code, because I don't think this
thing has a cpuidle driver to take over, effectively turning it into
dead code.

--- a/arch/arm/mach-omap2/pm24xx.c
+++ b/arch/arm/mach-omap2/pm24xx.c
@@ -126,10 +126,20 @@ static int omap2_allow_mpu_retention(voi
 	return 1;
 }
 
-static void omap2_enter_mpu_retention(void)
+static void omap2_do_wfi(void)
 {
 	const int zero = 0;
 
+	/* WFI */
+	asm("mcr p15, 0, %0, c7, c0, 4" : : "r" (zero) : "memory", "cc");
+}
+
+#if 0
+/*
+ * possible cpuidle implementation between WFI and full_retention above
+ */
+static void omap2_enter_mpu_retention(void)
+{
 	/* The peripherals seem not to be able to wake up the MPU when
 	 * it is in retention mode. */
 	if (omap2_allow_mpu_retention()) {
@@ -146,8 +157,7 @@ static void omap2_enter_mpu_retention(vo
 		pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
 	}
 
-	/* WFI */
-	asm("mcr p15, 0, %0, c7, c0, 4" : : "r" (zero) : "memory", "cc");
+	omap2_do_wfi();
 
 	pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
 }
@@ -161,6 +171,7 @@ static int omap2_can_sleep(void)
 
 	return 1;
 }
+#endif
 
 static void omap2_pm_idle(void)
 {
@@ -169,6 +180,7 @@ static void omap2_pm_idle(void)
 	if (omap_irq_pending())
 		return;
 
+#if 0
 	error = cpu_cluster_pm_enter();
 	if (error || !omap2_can_sleep()) {
 		omap2_enter_mpu_retention();
@@ -179,6 +191,9 @@ static void omap2_pm_idle(void)
 
 out_cpu_cluster_pm:
 	cpu_cluster_pm_exit();
+#else
+	omap2_do_wfi();
+#endif
 }
 
 static void __init prcm_setup_regs(void)
Re: [PATCH 16/36] rcu: Fix rcu_idle_exit()
On Wed, Jun 08, 2022 at 04:27:39PM +0200, Peter Zijlstra wrote:
> Current rcu_idle_exit() is terminally broken because it uses
> local_irq_{save,restore}(), which are traced which uses RCU.
> 
> However, now that all the callers are sure to have IRQs disabled, we
> can remove these calls.
> 
> Signed-off-by: Peter Zijlstra (Intel)
> Acked-by: Paul E. McKenney

We have some fun conflicts between this series and Frederic's
context-tracking series. But it looks like these can be resolved by:

1. A patch on top of Frederic's series that provides the old rcu_*()
   names for the functions now prefixed with ct_*() such as
   ct_idle_exit().

2. Another patch on top of Frederic's series that takes the changes
   remaining from this patch, shown below. Frederic's series uses
   raw_local_irq_save() and raw_local_irq_restore(), which can then be
   removed.

Or is there a better way to do this?

							Thanx, Paul

commit f64cee8c159e9863a74594efe3d33fb513a6a7b5
Author: Peter Zijlstra
Date:   Tue Jun 14 17:24:43 2022 -0700

    context_tracking: Interrupts always disabled for ct_idle_exit()
    
    Now that the idle-loop cleanups have ensured that rcu_idle_exit() is
    always invoked with interrupts disabled, remove the interrupt disabling
    in favor of a debug check.
    
    Signed-off-by: Peter Zijlstra
    Cc: Frederic Weisbecker
    Signed-off-by: Paul E. McKenney

diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 1da44803fd319..99310cf5b0254 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -332,11 +332,8 @@ EXPORT_SYMBOL_GPL(ct_idle_enter);
  */
 void noinstr ct_idle_exit(void)
 {
-	unsigned long flags;
-
-	raw_local_irq_save(flags);
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !raw_irqs_disabled());
 	ct_kernel_enter(false, RCU_DYNTICKS_IDX - CONTEXT_IDLE);
-	raw_local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(ct_idle_exit);
Re: [PATCH 34.5/36] cpuidle,omap4: Push RCU-idle into omap4_enter_lowpower()
Hi,

Adding Aaro Koskinen and Peter Vasil for pm24xx for n800 and n810 related
idle.

* Peter Zijlstra [220614 22:07]:
> On Mon, Jun 13, 2022 at 03:39:05PM +0300, Tony Lindgren wrote:
> > OMAP4 uses full SoC suspend modes as idle states, as such it needs the
> > whole power-domain and clock-domain code from the idle path.
> > 
> > All that code is not suitable to run with RCU disabled, as such push
> > RCU-idle deeper still.
> > 
> > Signed-off-by: Tony Lindgren
> > ---
> > 
> > Peter here's one more for your series, looks like this is needed to avoid
> > warnings similar to what you did for omap3.
> 
> Thanks Tony!
> 
> I've had a brief look at omap2_pm_idle() and do I understand it right
> that something like the below patch would reduce it to a simple 'WFI'?

Yes that should do for omap2_do_wfi().

> What do I do with the rest of that code, because I don't think this
> thing has a cpuidle driver to take over, effectively turning it into
> dead code.

As we are establishing a policy where deeper idle states must be handled
by cpuidle, and for most part that has been the case for at least 10 years,
I'd just drop the unused functions with an explanation in the patch why
we're doing it. Or the functions could be tagged with __maybe_unused if
folks prefer that.

In the pm24xx case we are not really causing a regression for users as
there are still pending patches to make n800 and n810 truly usable with
the mainline kernel. At least the PMIC and LCD related patches need some
work [0]. The deeper idle states can be added back later using cpuidle as
needed so we have a clear path.

Aaro & Peter V, do you have any better suggestions here as this will
mostly affect you guys currently?
Regards,

Tony

[0] https://lore.kernel.org/linux-omap/20211224214512.1583430-1-peter.va...@gmail.com/

> --- a/arch/arm/mach-omap2/pm24xx.c
> +++ b/arch/arm/mach-omap2/pm24xx.c
> @@ -126,10 +126,20 @@ static int omap2_allow_mpu_retention(voi
>  	return 1;
>  }
>  
> -static void omap2_enter_mpu_retention(void)
> +static void omap2_do_wfi(void)
>  {
>  	const int zero = 0;
>  
> +	/* WFI */
> +	asm("mcr p15, 0, %0, c7, c0, 4" : : "r" (zero) : "memory", "cc");
> +}
> +
> +#if 0
> +/*
> + * possible cpuidle implementation between WFI and full_retention above
> + */
> +static void omap2_enter_mpu_retention(void)
> +{
>  	/* The peripherals seem not to be able to wake up the MPU when
>  	 * it is in retention mode. */
>  	if (omap2_allow_mpu_retention()) {
> @@ -146,8 +157,7 @@ static void omap2_enter_mpu_retention(vo
>  		pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
>  	}
>  
> -	/* WFI */
> -	asm("mcr p15, 0, %0, c7, c0, 4" : : "r" (zero) : "memory", "cc");
> +	omap2_do_wfi();
>  
>  	pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
>  }
> @@ -161,6 +171,7 @@ static int omap2_can_sleep(void)
>  
>  	return 1;
>  }
> +#endif
>  
>  static void omap2_pm_idle(void)
>  {
> @@ -169,6 +180,7 @@ static void omap2_pm_idle(void)
>  	if (omap_irq_pending())
>  		return;
>  
> +#if 0
>  	error = cpu_cluster_pm_enter();
>  	if (error || !omap2_can_sleep()) {
>  		omap2_enter_mpu_retention();
> @@ -179,6 +191,9 @@ static void omap2_pm_idle(void)
>  
>  out_cpu_cluster_pm:
>  	cpu_cluster_pm_exit();
> +#else
> +	omap2_do_wfi();
> +#endif
>  }
>  
>  static void __init prcm_setup_regs(void)
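The net effect of the `#if 0` patch quoted above is that the idle hook degenerates to "bail out if an interrupt is pending, otherwise plain WFI", with the deeper retention states left for a future cpuidle driver. As a toy illustration of that reduced control flow (Python stand-ins, not the C code):

```python
# Toy model of the reduced omap2_pm_idle() discussed above: with the
# retention path compiled out, all that remains is a pending-IRQ check
# followed by a plain wait-for-interrupt.
events = []

def omap_irq_pending():
    return False          # pretend no interrupt is already pending

def do_wfi():
    events.append("wfi")  # stands in for the MCR-based WFI

def pm_idle():
    if omap_irq_pending():
        events.append("abort")   # an IRQ is pending: don't sleep at all
        return
    do_wfi()

pm_idle()
print(events)
```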
Re: [PATCH 23/36] arm64,smp: Remove trace_.*_rcuidle() usage
On Tue, 14 Jun 2022 17:24:48 +0100, Mark Rutland wrote:
> 
> On Wed, Jun 08, 2022 at 04:27:46PM +0200, Peter Zijlstra wrote:
> > Ever since commit d3afc7f12987 ("arm64: Allow IPIs to be handled as
> > normal interrupts") this function is called in regular IRQ context.
> > 
> > Signed-off-by: Peter Zijlstra (Intel)
> 
> [adding Marc since he authored that commit]
> 
> Makes sense to me:
> 
> Acked-by: Mark Rutland
> 
> Mark.
> 
> > ---
> >  arch/arm64/kernel/smp.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -865,7 +865,7 @@ static void do_handle_IPI(int ipinr)
> >  	unsigned int cpu = smp_processor_id();
> >  
> >  	if ((unsigned)ipinr < NR_IPI)
> > -		trace_ipi_entry_rcuidle(ipi_types[ipinr]);
> > +		trace_ipi_entry(ipi_types[ipinr]);
> >  
> >  	switch (ipinr) {
> >  	case IPI_RESCHEDULE:
> > @@ -914,7 +914,7 @@ static void do_handle_IPI(int ipinr)
> >  	}
> >  
> >  	if ((unsigned)ipinr < NR_IPI)
> > -		trace_ipi_exit_rcuidle(ipi_types[ipinr]);
> > +		trace_ipi_exit(ipi_types[ipinr]);
> >  }
> >  
> >  static irqreturn_t ipi_handler(int irq, void *data)

Acked-by: Marc Zyngier

	M.

-- 
Without deviation from the norm, progress is not possible.