On Thu, 6 Feb 2025 10:24:25 -0500
Steven Rostedt <[email protected]> wrote:

> On Thu, 6 Feb 2025 14:22:32 +0900
> Masami Hiramatsu (Google) <[email protected]> wrote:
> 
> > On Wed, 05 Feb 2025 17:50:35 -0500
> > Steven Rostedt <[email protected]> wrote:
> > 
> > > From: Steven Rostedt <[email protected]>
> > > 
> > > There's no reason to save the KASLR offset for the ring buffer itself.
> > > That is used by the tracer. Now that the tracer has a way to save data in
> > > the persistent memory of the ring buffer, have the tracing infrastructure
> > > take care of the saving of the KASLR offset.
> > >   
> > 
> > Looks good to me. But note that the scratchpad size may not be enough
> > for the module table later, because 1 module requires at least the
> > name[] (64 bytes - sizeof(ulong)) and the base address (ulong). This
> > means 1 entry consumes 64 bytes, so there can be only 63 entries plus
> > metadata in a 4K page. My Ubuntu system loads 189(!) modules:
> > 
> > $ lsmod | wc -l
> > 190
> > 
> > so we want 255 entries, which requires 16KB.
> 
> So, I was thinking of modifying the allocation of the persistent ring
> buffer, which currently is
> 
> #define ring_buffer_alloc_range(size, flags, order, start, range_size)
> 
> [ it's a macro to add lockdep key information in it ]
> 
> But I should change it to include a scratch size, and allow the tracing
> system to define how much of the range it should allocate for scratch.
> 
> Then we could do:
> 
>               buf->buffer = ring_buffer_alloc_range(size, rb_flags, 0,
>                                                     tr->range_addr_start,
>                                                     tr->range_addr_size,
>                                                     struct_size(tscratch,
>                                                                 entries, 128));
> 
> Which would make sure that the scratch size contains enough memory to hold
> 128 modules.

Yeah, this idea looks good to me. BTW, will the scratch size be aligned to
the subbuffer size (or the page size)?

Thanks,

> 
> -- Steve
> 


-- 
Masami Hiramatsu (Google) <[email protected]>
