On 6/16/25 17:30, Morten Brørup wrote:
From: Sunil Kumar Kori [mailto:sk...@marvell.com]
Sent: Monday, 16 June 2025 10.36

From: Sunil Kumar Kori <sk...@marvell.com>
Sent: Monday, 12 May 2025 17.07

rte_eth_fp_ops contains ops for fast path APIs. Each API validates the
availability of its callback and then invokes it.
These checks impact data path performance.

Picking up the discussion from another thread [1]:

From: Konstantin Ananyev [mailto:konstantin.anan...@huawei.com]
Sent: Wednesday, 28 May 2025 11.14

So what we are saving with that patch: one cmp and one un-taken branch:
@@ -6399,8 +6399,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
                return -EINVAL;
  #endif

-       if (p->rx_queue_count == NULL)
-               return -ENOTSUP;
        return p->rx_queue_count(qd);
  }

These are inline functions, so we also save some code space,
instruction cache, and possibly an entry in the branch predictor -
everywhere these functions are instantiated by the compiler.


What I wonder is how realistic (and measurable) the gain is.

The performance optimization is mainly targeting the mbuf recycle
operations, i.e. the hot fast path, where every cycle counts.
And while optimizing those, the other ethdev fast path callbacks are
also optimized.

Yes, although we all agree that there is no downside to this
optimization, it would be nice to see some performance numbers.

Sure, I will get performance numbers for the Marvell platforms and will share.


Hi Morten,
I got performance numbers on multiple Marvell platforms and observed a gain of
around 0.1% (~20K pps) with this patch. Other than this, there are other fast
path callbacks (rx_pkt_burst and tx_pkt_burst) which avoid this check.

I'm really impressed that 0.1% is measurable, since it means that the measurement results across different runs are highly stable.

IMO, this patch has no negative impact, and it slightly improves and cleans up
the fast path. Please suggest.

I still like this patch, so I confirm my ACK:

Acked-by: Morten Brørup <m...@smartsharesystems.com>

