Emilio G. Cota <c...@braap.org> writes:
> Add a comment about how atomic_read works here. The comment refers to
> a "BQL-less CPU loop", which will materialize toward the end
> of this series.
>
> Note that the modifications to cpu_reset_interrupt are there to
> avoid deadlock during the CPU lock transition; once that is complete,
> cpu_interrupt_request will be simple again.
>
<snip>
> --- a/qom/cpu.c
> +++ b/qom/cpu.c
> @@ -98,14 +98,29 @@ static void cpu_common_get_memory_mapping(CPUState *cpu,
>   * BQL here if we need to. cpu_interrupt assumes it is held.*/
>  void cpu_reset_interrupt(CPUState *cpu, int mask)
>  {
> -    bool need_lock = !qemu_mutex_iothread_locked();
> +    bool has_bql = qemu_mutex_iothread_locked();
> +    bool has_cpu_lock = cpu_mutex_locked(cpu);
>
> -    if (need_lock) {
> -        qemu_mutex_lock_iothread();
> +    if (has_bql) {
> +        if (has_cpu_lock) {
> +            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
> +        } else {
> +            cpu_mutex_lock(cpu);
> +            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
> +            cpu_mutex_unlock(cpu);
> +        }
> +        return;
> +    }
> +
> +    if (has_cpu_lock) {
> +        cpu_mutex_unlock(cpu);
>      }
> -    cpu->interrupt_request &= ~mask;
> -    if (need_lock) {
> -        qemu_mutex_unlock_iothread();
> +    qemu_mutex_lock_iothread();
> +    cpu_mutex_lock(cpu);
> +    atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
> +    qemu_mutex_unlock_iothread();
> +    if (!has_cpu_lock) {
> +        cpu_mutex_unlock(cpu);
>      }
>  }

Yeah I can see this is pretty ugly but cleaned up by the end of the
series. If it's the same sequence as the previous patch I wonder if it's
possible to hide the ugliness in a common helper while we transition?

Otherwise:

Reviewed-by: Alex Bennée <alex.ben...@linaro.org>

--
Alex Bennée