On Wed, 18 Mar 2026 10:53:26 -0400 Mathieu Desnoyers <[email protected]> wrote:
> On 2026-03-18 10:19, Masami Hiramatsu (Google) wrote:
> > On Wed, 11 Mar 2026 10:32:29 +0900
> > "Masami Hiramatsu (Google)" <[email protected]> wrote:
> > 
> >> From: Masami Hiramatsu (Google) <[email protected]>
> >> 
> >> On real hardware, panic and machine reboot may not flush the hardware
> >> cache to memory. This means the persistent ring buffer, which relies
> >> on a coherent state of memory, may not have its events written to the
> >> buffer, and they may be lost. Moreover, there may be inconsistency in
> >> the counters which are used for validating the integrity of the
> >> persistent ring buffer, which may cause all data to be discarded.
> >> 
> >> To avoid this issue, stop recording to the ring buffer on panic and
> >> flush the cache of the ring buffer's memory.
> > 
> > Hmm, on some architectures, flush_cache_vmap() is implemented using
> > on_each_cpu(), which waits for IPIs. But that is not safe in a panic
> > notifier, because the notifier is called after smp_send_stop().
> > 
> > Since this cache flush issue is currently only confirmed on arm64,
> > I would like to make it a no-op (do { } while (0)) by default.
> 
> FWIW, I sent a related series a while ago about flushing pmem
> areas to memory on panic:
> 
> https://lore.kernel.org/lkml/[email protected]/

Ah, nice!

> When reading your patch, I feel like I'm missing something, so please
> bear with me for a few questions:
> 
> - What exactly are you trying to flush ? By "flush" do you mean
>   evict cache lines or write back cache lines ? (I expect you aim
>   at the second option)

Yes, I need to write back cache lines, so that the data can at least be
read back after a hot reboot. (Not evict the cache.)

> - AFAIU, you are not trying to evict cache lines after creation
>   of a new virtual mapping (which is the documented intent of
>   flush_cache_vmap).

Ah, OK. That's a good point! (Anyway, I will replace it with
do { } while (0) in the next version.)
> - AFAIU flush_cache_vmap maps to no code on arm64 (asm-generic), what
>   am I missing ? It makes sense for it to be a no-op, because AFAIR
>   arm64 does not have to deal with virtually aliasing caches.

Yeah, so my patch also introduces an arm64-specific implementation.

> See commit 8690bbcf3b7 ("Introduce cpu_dcache_is_aliasing() across all
> architectures").

OK, let me check.

> arch_wb_cache_pmem() is specific to pmem, which is not exactly what
> you want to use, but on arm64 it is implemented as:
> 
> 	/* Ensure order against any prior non-cacheable writes */
> 	dmb(osh);
> 	dcache_clean_pop((unsigned long)addr, (unsigned long)addr + size);
> 
> which I think has the write-back semantics you are looking for, and
> AFAIU should not require IPIs (at least on arm64) to flush cache lines
> across the entire system.

Yes, that's what I need. Thank you!

> Cheers,
> 
> Mathieu
> 
> -- 
> Mathieu Desnoyers
> EfficiOS Inc.
> https://www.efficios.com

-- 
Masami Hiramatsu (Google) <[email protected]>
