> From: Shani Peretz [mailto:shper...@nvidia.com]
> Sent: Monday, 7 July 2025 07.45
> 
> > From: Stephen Hemminger <step...@networkplumber.org>
> > Sent: Monday, 16 June 2025 18:30
> >
> > On Mon, 16 Jun 2025 10:29:05 +0300
> > Shani Peretz <shper...@nvidia.com> wrote:
> >
> > > This feature is designed to monitor the lifecycle of mempool objects
> > > as they move between the application and the PMD.
> > >
> > > It will allow us to track the operations and transitions of each
> > > mempool object throughout the system, helping with debugging and
> > > understanding object flow.
> > >
> > > The implementation includes several key components:
> > > 1. Added a bitmap to the mempool object header (rte_mempool_objhdr)
> > >    that represents the operation history.
> > > 2. Added functions that allow marking operations on
> > >    mempool objects.
> > > 3. Added rte_mempool_objects_dump, which dumps the history
> > >    to a file or the console.
> > > 4. Added a Python script that can parse and analyze the data and
> > >    present it in a human-readable format.
> > > 5. Added a compilation flag to enable the feature.
> > >
> >
> > Could this not already be done with tracing infrastructure?
> 
> Hey,
> We did consider tracing, but:
>       - The trace buffer has limited capacity, so events for older mbufs
> can be overwritten in the tracing output while those mbufs are still in use.
>       - Some operations may be lost, so we might not capture the complete
> picture, due to trace misses caused by the performance overhead of tracing
> on the datapath, as far as I understand.
> WDYT?

This looks like an alternative trace infrastructure, just for mempool objects.
But the list of operations is limited to basic operations on mbuf mempool 
objects.
It lacks support for other operations on mbufs, e.g. IP
fragmentation/defragmentation library operations, application-specific
operations, and transitions between the mempool cache and the mempool backing
store.
It also lacks support for operations on mempool objects other than mbufs.

You might be better off using the trace infrastructure, or something similar.
Using the trace infrastructure allows you to record more detailed information 
along with the transitions of "owners" of each mbuf.
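
For example, a dedicated tracepoint for owner transitions could carry the mbuf
pointer, the new owner, and whatever other details you care about. A rough
sketch, following the RTE_TRACE_POINT / RTE_TRACE_POINT_REGISTER pattern from
rte_trace_point.h (the tracepoint name, its fields, and the header file name
are made up for illustration):

/* In a header file, e.g. app_trace.h: */
#include <rte_trace_point.h>

RTE_TRACE_POINT(
	app_trace_mbuf_owner_change,
	RTE_TRACE_POINT_ARGS(const void *mbuf, uint8_t new_owner),
	rte_trace_point_emit_ptr(mbuf);
	rte_trace_point_emit_u8(new_owner);
)

/* In a .c file; rte_trace_point_register.h must be included before the
 * header defining the tracepoint: */
#include <rte_trace_point_register.h>
#include "app_trace.h"

RTE_TRACE_POINT_REGISTER(app_trace_mbuf_owner_change, app.mbuf.owner_change)

/* Call site, e.g. where ownership of the mbuf changes hands: */
static inline void
app_mark_owner(const void *mbuf, uint8_t new_owner)
{
	app_trace_mbuf_owner_change(mbuf, new_owner);
}

And as far as I recall, the trace library timestamps every event, so the
timing information I mention below comes for free.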

I'm not opposing this RFC, but I think it is very limited, and not sufficiently 
extensible.

I get the point that trace can cause old events on active mbufs to be lost, and 
the concept of a trace buffer per mempool object is a good solution to that.
But I think you need to be able to store much more information with each 
transition; at least a timestamp. And if you do that, you need much more than 4 
bits per event.
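
Just to illustrate what I mean (this is my own sketch, not a format from the
RFC), a per-object history entry with room for a timestamp and an 8-bit
operation code could look something like:

#include <stdint.h>

/* Illustrative only: one recorded event in a per-object history buffer. */
struct mempool_obj_hist_entry {
	uint64_t tsc;       /* timestamp, e.g. from rte_rdtsc() */
	uint16_t lcore_id;  /* lcore performing the transition */
	uint8_t  op;        /* operation/owner code; 8 bits, per suggestion 2 below */
	uint8_t  reserved;  /* padding / future use */
	uint32_t user_data; /* operation-specific detail, e.g. port or queue id */
};

That is 16 bytes per event, so the per-object history becomes a small ring
buffer rather than a bitmap packed into a uint64_t.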

Alternatively, if you do proceed with the RFC in the current form, I have two 
key suggestions:
1. Make it possible to register operations at runtime. (Look at dynamic mbuf 
fields for inspiration; a rough sketch follows below.)
2. Use 8 bits for the operation, instead of 4.
And if you need a longer trace history, you can use the rte_bitset library 
instead of a single uint64_t.
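
To make suggestion 1 more concrete, here is a rough sketch of runtime
registration, loosely modeled on how dynamic mbuf fields are registered.
None of these names exist in DPDK; they are purely for illustration:

#include <string.h>

#define MEMPOOL_HISTORY_OP_MAX 256	/* 8-bit operation codes */

static const char *mempool_history_op_names[MEMPOOL_HISTORY_OP_MAX];
static unsigned int mempool_history_op_count;

/* Register an operation name at runtime and return the 8-bit code to use
 * when marking objects, or -1 if the name is already taken or the table
 * is full. */
static int
mempool_history_op_register(const char *name)
{
	unsigned int i;

	for (i = 0; i < mempool_history_op_count; i++)
		if (strcmp(mempool_history_op_names[i], name) == 0)
			return -1;

	if (mempool_history_op_count == MEMPOOL_HISTORY_OP_MAX)
		return -1;

	mempool_history_op_names[mempool_history_op_count] = name;
	return (int)mempool_history_op_count++;
}

/* E.g. the IP reassembly library could do, at initialization:
 *
 *	int op_ip_reass = mempool_history_op_register("ip_frag.reassemble");
 *
 * and then mark mbufs with op_ip_reass when reassembling. */

With something like this, the IP fragmentation/defragmentation library, the
mempool cache code, or the application itself could add their own operations
without modifying the mempool library.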
