[PATCH v4 4/7] net/ena: support fragment bypass mode

2025-05-28 Thread Shai Brandes
Introduce devarg `enable_frag_bypass` to toggle the fragment bypass
mode for egress packets.

When enabled, this feature bypasses the PPS limit enforced by EC2 for
fragmented egress packets on every ENI. Note that enabling this might
negatively impact network performance.

By default, this feature is disabled. To enable it, set
`enable_frag_bypass=1`. If it cannot be enabled, a warning will be
printed, but driver initialization will proceed as normal.
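
For illustration, a minimal sketch of passing the devarg through the EAL
arguments (the PCI address 00:06.0 is hypothetical; any ENA port takes the
devarg the same way):

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>

    int main(void)
    {
        /* Devargs are appended to the allowed PCI address. */
        char *argv[] = { "app", "-a", "00:06.0,enable_frag_bypass=1" };
        int ret = rte_eal_init(3, argv);

        if (ret < 0)
            rte_exit(EXIT_FAILURE, "Cannot init EAL\n");
        return 0;
    }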

Signed-off-by: Yosef Raisman 
Signed-off-by: Shai Brandes 
Reviewed-by: Amit Bernstein 
---
 doc/guides/nics/ena.rst   |  9 
 doc/guides/rel_notes/release_25_07.rst|  5 ++
 drivers/net/ena/base/ena_com.c| 33 +
 drivers/net/ena/base/ena_com.h|  8 
 .../net/ena/base/ena_defs/ena_admin_defs.h| 15 ++
 drivers/net/ena/ena_ethdev.c  | 48 ++-
 drivers/net/ena/ena_ethdev.h  |  2 +
 7 files changed, 119 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst
index a34575dc9b..a42deccd81 100644
--- a/doc/guides/nics/ena.rst
+++ b/doc/guides/nics/ena.rst
@@ -141,6 +141,15 @@ Runtime Configuration
 **A non-zero value for this devarg is mandatory for control path functionality
 when binding ports to uio_pci_generic kernel module which lacks interrupt support.**
 
+   * **enable_frag_bypass** (default 0)
+
+ Enable fragment bypass mode for egress packets. This mode bypasses the PPS
+ limit enforced by EC2 for fragmented egress packets on every ENI. Note that
+ enabling it might negatively impact network performance.
+
+ 0 - Disabled (Default).
+
+ 1 - Enabled.
 
 ENA Configuration Parameters
 
diff --git a/doc/guides/rel_notes/release_25_07.rst b/doc/guides/rel_notes/release_25_07.rst
index e88fbcc38c..27749f232b 100644
--- a/doc/guides/rel_notes/release_25_07.rst
+++ b/doc/guides/rel_notes/release_25_07.rst
@@ -63,6 +63,11 @@ New Features
 * ixgbe
 * iavf
 
+* **Updated Amazon ENA (Elastic Network Adapter) net driver.**
+
+  * Added support for enabling fragment bypass mode for egress packets.
+    This mode bypasses the PPS limit enforced by EC2 for fragmented egress packets on every ENI.
+
 * **Updated virtio driver.**
 
   * Added support for Rx and Tx burst mode query.
diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index 588dc61387..9715a627c1 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -3459,3 +3459,36 @@ int ena_com_config_dev_mode(struct ena_com_dev *ena_dev,
 
return 0;
 }
+int ena_com_set_frag_bypass(struct ena_com_dev *ena_dev, bool enable)
+{
+   struct ena_admin_set_feat_resp set_feat_resp;
+   struct ena_com_admin_queue *admin_queue;
+   struct ena_admin_set_feat_cmd cmd;
+   int ret;
+
+   if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_FRAG_BYPASS)) {
+   ena_trc_dbg(ena_dev, "Feature %d isn't supported\n",
+   ENA_ADMIN_FRAG_BYPASS);
+   return ENA_COM_UNSUPPORTED;
+   }
+
+   memset(&cmd, 0x0, sizeof(cmd));
+   admin_queue = &ena_dev->admin_queue;
+
+   cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE;
+   cmd.aq_common_descriptor.flags = 0;
+   cmd.feat_common.feature_id = ENA_ADMIN_FRAG_BYPASS;
+   cmd.feat_common.feature_version = ENA_ADMIN_FRAG_BYPASS_FEATURE_VERSION_0;
+   cmd.u.frag_bypass.enable = (u8)enable;
+
+   ret = ena_com_execute_admin_command(admin_queue,
+   (struct ena_admin_aq_entry *)&cmd,
+   sizeof(cmd),
+   (struct ena_admin_acq_entry *)&set_feat_resp,
+   sizeof(set_feat_resp));
+
+   if (unlikely(ret))
+   ena_trc_err(ena_dev, "Failed to enable frag bypass. error: %d\n", ret);
+
+   return ret;
+}
diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h
index b2aede1be1..f064095fd1 100644
--- a/drivers/net/ena/base/ena_com.h
+++ b/drivers/net/ena/base/ena_com.h
@@ -1109,6 +1109,14 @@ static inline bool ena_com_get_missing_admin_interrupt(struct ena_com_dev *ena_d
return ena_dev->admin_queue.is_missing_admin_interrupt;
 }
 
+/* ena_com_set_frag_bypass - set fragment bypass
+ * @ena_dev: ENA communication layer struct
+ * @enable: true if fragment bypass is enabled, false otherwise.
+ *
+ * @return - 0 on success, negative value on failure.
+ */
+int ena_com_set_frag_bypass(struct ena_com_dev *ena_dev, bool enable);
+
 /* ena_com_io_sq_to_ena_dev - Extract ena_com_dev using contained field io_sq.
  * @io_sq: IO submit queue struct
  *
diff --git a/drivers/net/ena/base/ena_defs/ena_admin_defs.h b/drivers/net/ena/base/ena_defs/ena_admin_defs.h
index bdc6efadcf..d315014776 100644
--- a/drive

[PATCH v4 3/7] net/ena: separate doorbell logic for Rx and Tx

2025-05-28 Thread Shai Brandes
The function ena_com_write_sq_doorbell() currently
checks for LLQ mode using is_llq_max_tx_burst_exists(),
which is relevant only for TX queues.
Since RX queues do not operate in LLQ mode, this check
is unnecessary for the RX path.

This patch separates the doorbell write logic into two
distinct handlers for RX and TX, eliminating the
irrelevant LLQ check in the RX path.

Signed-off-by: Shai Brandes 
Reviewed-by: Amit Bernstein 
Reviewed-by: Yosef Raisman 
---
 drivers/net/ena/base/ena_eth_com.h | 15 ++-
 drivers/net/ena/ena_ethdev.c   |  6 +++---
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ena/base/ena_eth_com.h b/drivers/net/ena/base/ena_eth_com.h
index 8a12ed5fba..9e0a7af325 100644
--- a/drivers/net/ena/base/ena_eth_com.h
+++ b/drivers/net/ena/base/ena_eth_com.h
@@ -159,7 +159,20 @@ static inline bool ena_com_is_doorbell_needed(struct ena_com_io_sq *io_sq,
return num_entries_needed > io_sq->entries_in_tx_burst_left;
 }
 
-static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq)
+static inline int ena_com_write_rx_sq_doorbell(struct ena_com_io_sq *io_sq)
+{
+   u16 tail = io_sq->tail;
+
+   ena_trc_dbg(ena_com_io_sq_to_ena_dev(io_sq),
+   "Write submission queue doorbell for queue: %d tail: %d\n",
+   io_sq->qid, tail);
+
+   ENA_REG_WRITE32(io_sq->bus, tail, io_sq->db_addr);
+
+   return 0;
+}
+
+static inline int ena_com_write_tx_sq_doorbell(struct ena_com_io_sq *io_sq)
 {
u16 max_entries_in_tx_burst = io_sq->llq_info.max_entries_in_tx_burst;
u16 tail = io_sq->tail;
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index a13506890f..d8ff6851d2 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1835,7 +1835,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
/* When we submitted free resources to device... */
if (likely(i > 0)) {
/* ...let HW know that it can fill buffers with data. */
-   ena_com_write_sq_doorbell(rxq->ena_com_io_sq);
+   ena_com_write_rx_sq_doorbell(rxq->ena_com_io_sq);
 
rxq->next_to_use = next_to_use;
}
@@ -3163,7 +3163,7 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf)
PMD_TX_LOG_LINE(DEBUG,
"LLQ Tx max burst size of queue %d achieved, writing 
doorbell to send burst",
tx_ring->id);
-   ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
+   ena_com_write_tx_sq_doorbell(tx_ring->ena_com_io_sq);
tx_ring->tx_stats.doorbells++;
tx_ring->pkts_without_db = false;
}
@@ -3296,7 +3296,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* If there are ready packets to be xmitted... */
if (likely(tx_ring->pkts_without_db)) {
/* ...let HW do its best :-) */
-   ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
+   ena_com_write_tx_sq_doorbell(tx_ring->ena_com_io_sq);
tx_ring->tx_stats.doorbells++;
tx_ring->pkts_without_db = false;
}
-- 
2.17.1



[PATCH v4 2/7] net/ena/base: coding style changes

2025-05-28 Thread Shai Brandes
Reordered variable declarations to follow the
reverse Christmas tree style.
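
For readers unfamiliar with the convention, a small illustrative
before/after sketch (not taken from the patch itself):

    /* Before: declaration order is arbitrary. */
    unsigned long flags = 0;
    struct ena_comp_ctx *comp_ctx;

    /* After: longest declaration first, shortest last,
     * i.e. "reverse Christmas tree" ordering.
     */
    struct ena_comp_ctx *comp_ctx;
    unsigned long flags = 0;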

Signed-off-by: Shai Brandes 
Reviewed-by: Amit Bernstein 
Reviewed-by: Yosef Raisman 
---
 drivers/net/ena/base/ena_com.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index 238716de29..588dc61387 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -308,8 +308,8 @@ static struct ena_comp_ctx *ena_com_submit_admin_cmd(struct ena_com_admin_queue
 struct ena_admin_acq_entry *comp,
 size_t comp_size_in_bytes)
 {
-   unsigned long flags = 0;
struct ena_comp_ctx *comp_ctx;
+   unsigned long flags = 0;
 
ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
if (unlikely(!admin_queue->running_state)) {
@@ -616,10 +616,10 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c
  */
 static int ena_com_set_llq(struct ena_com_dev *ena_dev)
 {
+   struct ena_com_llq_info *llq_info = &ena_dev->llq_info;
struct ena_com_admin_queue *admin_queue;
-   struct ena_admin_set_feat_cmd cmd;
struct ena_admin_set_feat_resp resp;
-   struct ena_com_llq_info *llq_info = &ena_dev->llq_info;
+   struct ena_admin_set_feat_cmd cmd;
int ret;
 
memset(&cmd, 0x0, sizeof(cmd));
-- 
2.17.1



[PATCH v4 0/7] net/ena: release 2.13.0

2025-05-28 Thread Shai Brandes
This patchset includes an upgrade of the ENA HAL,
introduces a new feature, and addresses three bug fixes.

Thank you in advance to the net maintainers and community members
for your time and effort reviewing the code.

Best regards,
Shai Brandes
AWS Elastic Network Adapter team

---
v2:
Removed patch "net/ena: fix virtual address calc for unaligned BARs",
which contained problematic casting when compiling on 32-bit systems.

v3:
No change; there was a technical issue when sending the emails,
where some of the patches appeared in different series.

v4:
Fixed a compile error in patch 4/7 that slipped through review.
Each patch in the series should compile independently, but the error
was missed because the full series passed our directed tests when
applied together.


Shai Brandes (7):
  net/ena/base: avoid recalculating desc per entry
  net/ena/base: coding style changes
  net/ena: separate doorbell logic for Rx and Tx
  net/ena: support fragment bypass mode
  net/ena: fix unhandled interrupt config failure
  net/ena: fix aenq timeout with low poll interval
  net/ena: upgrade driver version to 2.13.0

 doc/guides/nics/ena.rst   | 13 ++-
 doc/guides/rel_notes/release_25_07.rst|  9 ++
 drivers/net/ena/base/ena_com.c| 39 +++-
 drivers/net/ena/base/ena_com.h|  8 ++
 .../net/ena/base/ena_defs/ena_admin_defs.h| 15 +++
 drivers/net/ena/base/ena_eth_com.c|  6 +-
 drivers/net/ena/base/ena_eth_com.h| 15 ++-
 drivers/net/ena/ena_ethdev.c  | 98 +++
 drivers/net/ena/ena_ethdev.h  |  5 +-
 9 files changed, 177 insertions(+), 31 deletions(-)

-- 
2.17.1



[PATCH v4 7/7] net/ena: upgrade driver version to 2.13.0

2025-05-28 Thread Shai Brandes
Upgraded the driver version to 2.13.0.

Signed-off-by: Shai Brandes 
Reviewed-by: Amit Bernstein 
Reviewed-by: Yosef Raisman 
---
 drivers/net/ena/ena_ethdev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index d249701144..7e75eddcd9 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -22,7 +22,7 @@
 #include 
 
 #define DRV_MODULE_VER_MAJOR   2
-#define DRV_MODULE_VER_MINOR   12
+#define DRV_MODULE_VER_MINOR   13
 #define DRV_MODULE_VER_SUBMINOR0
 
 #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l)
-- 
2.17.1



[v1 00/10] DPAA specific fixes

2025-05-28 Thread vanshika . shukla
From: Vanshika Shukla 

This series includes fixes for NXP DPAA drivers.

Gagandeep Singh (1):
  bus/dpaa: improve DPAA cleanup

Hemant Agrawal (2):
  bus/dpaa: avoid using same structure and variable name
  bus/dpaa: optimize qman enqueue check

Jun Yang (5):
  bus/dpaa: add FMan node
  bus/dpaa: enhance DPAA SoC version
  bus/dpaa: optimize bman acquire/release
  mempool/dpaa: fast acquire and release
  mempool/dpaa: adjust pool element for LS1043A errata

Vanshika Shukla (1):
  net/dpaa: add devargs for enabling err packets on main queue

Vinod Pullabhatla (1):
  net/dpaa: add Tx rate limiting DPAA PMD API

 .mailmap  |   1 +
 doc/guides/nics/dpaa.rst  |   3 +
 drivers/bus/dpaa/base/fman/fman.c | 278 -
 drivers/bus/dpaa/base/fman/netcfg_layer.c |   8 +-
 drivers/bus/dpaa/base/qbman/bman.c| 149 +--
 drivers/bus/dpaa/base/qbman/qman.c|  50 ++--
 drivers/bus/dpaa/base/qbman/qman_driver.c |   2 -
 drivers/bus/dpaa/bus_dpaa_driver.h|   9 +-
 drivers/bus/dpaa/dpaa_bus.c   | 179 +
 drivers/bus/dpaa/include/fman.h   |  40 +--
 drivers/bus/dpaa/include/fsl_bman.h   |  20 +-
 drivers/bus/dpaa/include/fsl_qman.h   |   2 +-
 drivers/bus/dpaa/include/netcfg.h |  14 --
 drivers/mempool/dpaa/dpaa_mempool.c   | 230 +
 drivers/mempool/dpaa/dpaa_mempool.h   |  13 +-
 drivers/net/dpaa/dpaa_ethdev.c| 291 +++---
 drivers/net/dpaa/dpaa_flow.c  |  87 ++-
 drivers/net/dpaa/dpaa_ptp.c   |  12 +-
 drivers/net/dpaa/dpaa_rxtx.c  |   4 +-
 drivers/net/dpaa/fmlib/fm_lib.c   |  30 +++
 drivers/net/dpaa/fmlib/fm_port_ext.h  |   2 +-
 drivers/net/dpaa/rte_pmd_dpaa.h   |  21 +-
 22 files changed, 1022 insertions(+), 423 deletions(-)

-- 
2.25.1



[v1 02/10] bus/dpaa: add FMan node

2025-05-28 Thread vanshika . shukla
From: Jun Yang 

Add FMan node(s) and associate each FMan with its interfaces (ports).
This method describes the FMan attributes and avoids accessing the FMan
from a port directly.
Logically, something like IEEE 1588 is an FMan-global resource,
which is in the range 0xF_E000–0xF_EFFF.
Port-specific resources are in the range 0x8_–0xB_.
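
As a rough sketch of that global-vs-port split (using only the global
range quoted above; the port range is truncated in the archive, and the
helper name below is invented for illustration):

    /* Offsets in 0xF_E000..0xF_EFFF (e.g. the IEEE 1588 timer) belong
     * to the FMan itself rather than to any single port.
     */
    static int fman_offset_is_global(uint32_t off)
    {
        return off >= 0xFE000 && off <= 0xFEFFF;
    }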

Signed-off-by: Jun Yang 
---
 drivers/bus/dpaa/base/fman/fman.c | 278 --
 drivers/bus/dpaa/base/fman/netcfg_layer.c |   8 +-
 drivers/bus/dpaa/dpaa_bus.c   |  17 +-
 drivers/bus/dpaa/include/fman.h   |  40 ++--
 drivers/bus/dpaa/include/netcfg.h |  14 --
 drivers/net/dpaa/dpaa_ethdev.c|  18 +-
 drivers/net/dpaa/dpaa_ptp.c   |  12 +-
 7 files changed, 206 insertions(+), 181 deletions(-)

diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index d49339d81e..1b3b8836a5 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -16,24 +16,15 @@
 #include 
 #include 
 
-#define QMI_PORT_REGS_OFFSET   0x400
-
-/* CCSR map address to access ccsr based register */
-void *fman_ccsr_map;
-/* fman version info */
-u16 fman_ip_rev;
-static int get_once;
-u32 fman_dealloc_bufs_mask_hi;
-u32 fman_dealloc_bufs_mask_lo;
-
 int fman_ccsr_map_fd = -1;
 static COMPAT_LIST_HEAD(__ifs);
-void *rtc_map;
+static COMPAT_LIST_HEAD(__fmans);
 
 /* This is the (const) global variable that callers have read-only access to.
  * Internally, we have read-write access directly to __ifs.
  */
 const struct list_head *fman_if_list = &__ifs;
+const struct list_head *fman_list = &__fmans;
 
 static void
 if_destructor(struct __fman_if *__if)
@@ -55,40 +46,99 @@ if_destructor(struct __fman_if *__if)
 }
 
 static int
-fman_get_ip_rev(const struct device_node *fman_node)
+_fman_init(const struct device_node *fman_node, int fd)
 {
-   const uint32_t *fman_addr;
-   uint64_t phys_addr;
-   uint64_t regs_size;
+   const struct device_node *ptp_node;
+   const uint32_t *fman_addr, *ptp_addr, *cell_idx;
+   uint64_t phys_addr, regs_size, lenp;
+   void *vir_addr;
uint32_t ip_rev_1;
-   int _errno;
+   int _errno = 0;
+   struct __fman *fman;
+
+   fman = rte_zmalloc(NULL, sizeof(struct __fman), 0);
+   if (!fman) {
+   FMAN_ERR(-ENOMEM, "malloc fman");
+   return -ENOMEM;
+   }
+
+   cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+   if (!cell_idx) {
+   FMAN_ERR(-ENXIO, "%s: no cell-index", fman_node->full_name);
+   return -ENXIO;
+   }
+   assert(lenp == sizeof(*cell_idx));
+   fman->idx = of_read_number(cell_idx, lenp / sizeof(phandle));
 
fman_addr = of_get_address(fman_node, 0, &regs_size, NULL);
if (!fman_addr) {
-   pr_err("of_get_address cannot return fman address\n");
+   FMAN_ERR(-EINVAL, "Get fman's CCSR failed");
return -EINVAL;
}
phys_addr = of_translate_address(fman_node, fman_addr);
if (!phys_addr) {
-   pr_err("of_translate_address failed\n");
+   FMAN_ERR(-EINVAL, "Translate fman's CCSR failed");
return -EINVAL;
}
-   fman_ccsr_map = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
-MAP_SHARED, fman_ccsr_map_fd, phys_addr);
-   if (fman_ccsr_map == MAP_FAILED) {
-   pr_err("Can not map FMan ccsr base");
+   vir_addr = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+   MAP_SHARED, fd, phys_addr);
+   if (vir_addr == MAP_FAILED) {
+   FMAN_ERR(-EINVAL, "Map fman's CCSR failed");
return -EINVAL;
}
+   fman->ccsr_phy = phys_addr;
+   fman->ccsr_size = regs_size;
+   fman->ccsr_vir = vir_addr;
+
+   fman->time_phy = 0;
+   for_each_compatible_node(ptp_node, NULL, "fsl,fman-ptp-timer") {
+   ptp_addr = of_get_address(ptp_node, 0, &regs_size, NULL);
+   if (!ptp_addr)
+   continue;
+   phys_addr = of_translate_address(ptp_node, ptp_addr);
+   if (phys_addr != (fman->ccsr_phy + fman->ccsr_size))
+   continue;
+   vir_addr = mmap(NULL, regs_size, PROT_READ | PROT_WRITE,
+   MAP_SHARED, fd, phys_addr);
+   if (vir_addr == MAP_FAILED) {
+   FMAN_ERR(-EINVAL, "Map fman's IEEE 1588 failed");
+   return -EINVAL;
+   }
+   fman->time_phy = phys_addr;
+   fman->time_size = regs_size;
+   fman->time_vir = vir_addr;
+   break;
+   }
 
-   ip_rev_1 = in_be32(fman_ccsr_map + FMAN_IP_REV_1);
-   fman_ip_rev = (ip_rev_1 & FMAN_IP_REV_1_MAJOR_MASK) >>
-   FMAN_IP_REV_1_MAJOR_SHIFT;
+   if (!fman->time_phy) {
+   FMAN_ERR(-EINVAL, "Map fman's 

[v1 04/10] bus/dpaa: optimize bman acquire/release

2025-05-28 Thread vanshika . shukla
From: Jun Yang 

1) Reduce byte swaps between big endian and little endian.
2) Reduce CI (cache-inhibited) accesses by using 128-bit R/W instructions.
These methods improve buffer acquire/release performance by ~10%.
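
A hedged sketch of the 48-bit address split performed by the new
HI16_OF_U48/LO32_OF_U48/U48_BY_HI16_LO32 helpers visible in the diff
below, to make the byte-swap-reduction point concrete:

    #include <stdint.h>

    uint64_t addr = 0x123456789ABCULL;          /* 48-bit bus address */
    uint16_t hi = (addr >> 32) & 0xFFFF;        /* HI16_OF_U48 */
    uint32_t lo = addr & 0xFFFFFFFFUL;          /* LO32_OF_U48 */
    uint64_t back = ((uint64_t)hi << 32) | lo;  /* U48_BY_HI16_LO32 */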

Signed-off-by: Jun Yang 
---
 drivers/bus/dpaa/base/qbman/bman.c  | 149 
 drivers/bus/dpaa/include/fsl_bman.h |  20 +++-
 2 files changed, 150 insertions(+), 19 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/bman.c b/drivers/bus/dpaa/base/qbman/bman.c
index 8a6290734f..13f535a679 100644
--- a/drivers/bus/dpaa/base/qbman/bman.c
+++ b/drivers/bus/dpaa/base/qbman/bman.c
@@ -1,18 +1,38 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2017 NXP
+ * Copyright 2017, 2024 NXP
  *
  */
+#include 
+#include 
+#include 
 
 #include "bman.h"
-#include 
 
 /* Compilation constants */
 #define RCR_THRESH 2   /* reread h/w CI when running out of space */
 #define IRQNAME"BMan portal %d"
 #define MAX_IRQNAME16  /* big enough for "BMan portal %d" */
 
+#ifndef MAX_U16
+#define MAX_U16 0x
+#endif
+#ifndef BIT_SIZE
+#define BIT_SIZE(t) (sizeof(t) * 8)
+#endif
+#ifndef MAX_U32
+#define MAX_U32 \
+   ((((uint32_t)MAX_U16) << BIT_SIZE(uint16_t)) | MAX_U16)
+#endif
+#define MAX_U48 \
+   ((((uint64_t)MAX_U16) << BIT_SIZE(uint32_t)) | MAX_U32)
+#define HI16_OF_U48(x) \
+   (((x) >> BIT_SIZE(rte_be32_t)) & MAX_U16)
+#define LO32_OF_U48(x) ((x) & MAX_U32)
+#define U48_BY_HI16_LO32(hi, lo) \
+   (((hi) << BIT_SIZE(uint32_t)) | (lo))
+
 struct bman_portal {
struct bm_portal p;
/* 2-element array. pools[0] is mask, pools[1] is snapshot. */
@@ -246,7 +266,52 @@ static void update_rcr_ci(struct bman_portal *p, int avail)
bm_rcr_cce_update(&p->p);
 }
 
-#define BMAN_BUF_MASK 0xfffffffffffful
+RTE_EXPORT_INTERNAL_SYMBOL(bman_release_fast)
+int
+bman_release_fast(struct bman_pool *pool, const uint64_t *bufs,
+   uint8_t num)
+{
+   struct bman_portal *p;
+   struct bm_rcr_entry *r;
+   uint8_t i, avail;
+   uint64_t bpid = pool->params.bpid;
+   struct bm_hw_buf_desc bm_bufs[FSL_BM_BURST_MAX];
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+   if (!num || (num > FSL_BM_BURST_MAX))
+   return -EINVAL;
+   if (pool->params.flags & BMAN_POOL_FLAG_NO_RELEASE)
+   return -EINVAL;
+#endif
+
+   p = get_affine_portal();
+   avail = bm_rcr_get_avail(&p->p);
+   if (avail < 2)
+   update_rcr_ci(p, avail);
+   r = bm_rcr_start(&p->p);
+   if (unlikely(!r))
+   return -EBUSY;
+
+   /*
+* we can copy all but the first entry, as this can trigger badness
+* with the valid-bit
+*/
+   bm_bufs[0].bpid = bpid;
+   bm_bufs[0].hi_addr = cpu_to_be16(HI16_OF_U48(bufs[0]));
+   bm_bufs[0].lo_addr = cpu_to_be32(LO32_OF_U48(bufs[0]));
+   for (i = 1; i < num; i++) {
+   bm_bufs[i].hi_addr = cpu_to_be16(HI16_OF_U48(bufs[i]));
+   bm_bufs[i].lo_addr = cpu_to_be32(LO32_OF_U48(bufs[i]));
+   }
+
+   rte_memcpy(r->bufs, bm_bufs, sizeof(struct bm_buffer) * num);
+
+   bm_rcr_pvb_commit(&p->p, BM_RCR_VERB_CMD_BPID_SINGLE |
+   (num & BM_RCR_VERB_BUFCOUNT_MASK));
+
+   return 0;
+}
+
 int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
 u32 flags __maybe_unused)
 {
@@ -256,7 +321,7 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
u8 avail;
 
 #ifdef RTE_LIBRTE_DPAA_HWDEBUG
-   if (!num || (num > 8))
+   if (!num || (num > FSL_BM_BURST_MAX))
return -EINVAL;
if (pool->params.flags & BMAN_POOL_FLAG_NO_RELEASE)
return -EINVAL;
@@ -276,11 +341,11 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
 */
r->bufs[0].opaque =
cpu_to_be64(((u64)pool->params.bpid << 48) |
-   (bufs[0].opaque & BMAN_BUF_MASK));
+   (bufs[0].opaque & MAX_U48));
if (i) {
for (i = 1; i < num; i++)
r->bufs[i].opaque =
-   cpu_to_be64(bufs[i].opaque & BMAN_BUF_MASK);
+   cpu_to_be64(bufs[i].opaque & MAX_U48);
}
 
bm_rcr_pvb_commit(&p->p, BM_RCR_VERB_CMD_BPID_SINGLE |
@@ -289,16 +354,70 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
return 0;
 }
 
+static inline uint64_t
+bman_extract_addr(struct bm_buffer *buf)
+{
+   buf->opaque = be64_to_cpu(buf->opaque);
+
+   return buf->addr;
+}
+
+static inline uint64_t
+bman_hw_extract_addr(struct bm_hw_buf_desc *buf)
+{
+   uint64_t hi, lo;
+
+   hi = be16_to_cpu(buf->hi_addr);
+   lo = be32_to_cpu(buf->lo_addr);
+   return U48_BY_HI16_LO32(hi, lo);
+}
+
+RTE_EXP

[v1 01/10] bus/dpaa: avoid using same structure and variable name

2025-05-28 Thread vanshika . shukla
From: Hemant Agrawal 

rte_dpaa_bus was being used as both a structure name and a variable name.
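
The ambiguity being removed, in sketch form:

    /* C permits a struct tag and an object to share one identifier: */
    struct rte_dpaa_bus { int detected; };
    static struct rte_dpaa_bus rte_dpaa_bus;    /* old: name shadows the tag */
    static struct rte_dpaa_bus s_rte_dpaa_bus;  /* new: distinct names */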

Signed-off-by: Jun Yang 
Signed-off-by: Hemant Agrawal 
---
 drivers/bus/dpaa/dpaa_bus.c | 56 ++---
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 5420733019..f5ce4a2761 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -54,7 +54,7 @@ struct rte_dpaa_bus {
int detected;
 };
 
-static struct rte_dpaa_bus rte_dpaa_bus;
+static struct rte_dpaa_bus s_rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
 
 /* define a variable to hold the portal_key, once created.*/
@@ -120,7 +120,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
struct rte_dpaa_device *dev = NULL;
struct rte_dpaa_device *tdev = NULL;
 
-   RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+   RTE_TAILQ_FOREACH_SAFE(dev, &s_rte_dpaa_bus.device_list, next, tdev) {
comp = compare_dpaa_devices(newdev, dev);
if (comp < 0) {
TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -130,7 +130,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
}
 
if (!inserted)
-   TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, newdev, next);
+   TAILQ_INSERT_TAIL(&s_rte_dpaa_bus.device_list, newdev, next);
 }
 
 /*
@@ -176,7 +176,7 @@ dpaa_create_device_list(void)
struct fm_eth_port_cfg *cfg;
struct fman_if *fman_intf;
 
-   rte_dpaa_bus.device_count = 0;
+   s_rte_dpaa_bus.device_count = 0;
 
/* Creating Ethernet Devices */
for (i = 0; dpaa_netcfg && (i < dpaa_netcfg->num_ethports); i++) {
@@ -187,7 +187,7 @@ dpaa_create_device_list(void)
goto cleanup;
}
 
-   dev->device.bus = &rte_dpaa_bus.bus;
+   dev->device.bus = &s_rte_dpaa_bus.bus;
dev->device.numa_node = SOCKET_ID_ANY;
 
/* Allocate interrupt handle instance */
@@ -226,7 +226,7 @@ dpaa_create_device_list(void)
dpaa_add_to_device_list(dev);
}
 
-   rte_dpaa_bus.device_count += i;
+   s_rte_dpaa_bus.device_count += i;
 
/* Unlike case of ETH, RTE_LIBRTE_DPAA_MAX_CRYPTODEV SEC devices are
 * constantly created only if "sec" property is found in the device
@@ -259,7 +259,7 @@ dpaa_create_device_list(void)
}
 
dev->device_type = FSL_DPAA_CRYPTO;
-   dev->id.dev_id = rte_dpaa_bus.device_count + i;
+   dev->id.dev_id = s_rte_dpaa_bus.device_count + i;
 
/* Even though RTE_CRYPTODEV_NAME_MAX_LEN is valid length of
 * crypto PMD, using RTE_ETH_NAME_MAX_LEN as that is the size
@@ -274,7 +274,7 @@ dpaa_create_device_list(void)
dpaa_add_to_device_list(dev);
}
 
-   rte_dpaa_bus.device_count += i;
+   s_rte_dpaa_bus.device_count += i;
 
 qdma_dpaa:
/* Creating QDMA Device */
@@ -287,7 +287,7 @@ dpaa_create_device_list(void)
}
 
dev->device_type = FSL_DPAA_QDMA;
-   dev->id.dev_id = rte_dpaa_bus.device_count + i;
+   dev->id.dev_id = s_rte_dpaa_bus.device_count + i;
 
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
sprintf(dev->name, "dpaa_qdma-%d", i+1);
@@ -297,7 +297,7 @@ dpaa_create_device_list(void)
 
dpaa_add_to_device_list(dev);
}
-   rte_dpaa_bus.device_count += i;
+   s_rte_dpaa_bus.device_count += i;
 
return 0;
 
@@ -312,8 +312,8 @@ dpaa_clean_device_list(void)
struct rte_dpaa_device *dev = NULL;
struct rte_dpaa_device *tdev = NULL;
 
-   RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
-   TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+   RTE_TAILQ_FOREACH_SAFE(dev, &s_rte_dpaa_bus.device_list, next, tdev) {
+   TAILQ_REMOVE(&s_rte_dpaa_bus.device_list, dev, next);
rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
@@ -537,10 +537,10 @@ rte_dpaa_bus_scan(void)
return 0;
}
 
-   if (rte_dpaa_bus.detected)
+   if (s_rte_dpaa_bus.detected)
return 0;
 
-   rte_dpaa_bus.detected = 1;
+   s_rte_dpaa_bus.detected = 1;
 
/* create the key, supplying a function that'll be invoked
 * when a portal affined thread will be deleted.
@@ -564,7 +564,7 @@ rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
 
BUS_INIT_FUNC_TRACE();
 
-   TAILQ_INSERT_TAIL(&rte_dpaa_bus.driver_list, driver, next);
+   TAILQ_INSERT_TAIL(&s_rte_dpaa_bus.driver_list, driver, next);
 }
 
 /* un-register a dpaa bus based dpaa driver */
@@ -574,7 +574,7 @@ rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
 {
B

[v1 03/10] bus/dpaa: enhance DPAA SoC version

2025-05-28 Thread vanshika . shukla
From: Jun Yang 

Provide an internal API to identify the DPAA1 SoC version instead of
accessing a global variable directly.

Signed-off-by: Jun Yang 
---
 drivers/bus/dpaa/base/qbman/qman.c |  9 +++---
 drivers/bus/dpaa/bus_dpaa_driver.h |  9 +++---
 drivers/bus/dpaa/dpaa_bus.c| 48 ++
 drivers/net/dpaa/dpaa_ethdev.c | 29 +-
 drivers/net/dpaa/dpaa_rxtx.c   |  4 +--
 5 files changed, 54 insertions(+), 45 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 11fabcaff5..fbce0638b7 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2017,2019-2024 NXP
+ * Copyright 2017,2019-2025 NXP
  *
  */
 
@@ -520,11 +520,12 @@ qman_init_portal(struct qman_portal *portal,
if (!c)
c = portal->config;
 
-   if (dpaa_svr_family == SVR_LS1043A_FAMILY)
+   if (dpaa_soc_ver() == SVR_LS1043A_FAMILY) {
portal->use_eqcr_ci_stashing = 3;
-   else
+   } else {
portal->use_eqcr_ci_stashing =
-   ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);
+   (qman_ip_rev >= QMAN_REV30 ? 1 : 0);
+   }
 
/*
 * prep the low-level portal struct with the mapped addresses from the
diff --git a/drivers/bus/dpaa/bus_dpaa_driver.h b/drivers/bus/dpaa/bus_dpaa_driver.h
index 26a83b2cdf..d64a8e80e0 100644
--- a/drivers/bus/dpaa/bus_dpaa_driver.h
+++ b/drivers/bus/dpaa/bus_dpaa_driver.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017-2022 NXP
+ *   Copyright 2017-2022, 2025 NXP
  *
  */
 #ifndef BUS_DPAA_DRIVER_H
@@ -55,11 +55,9 @@ dpaa_seqn(struct rte_mbuf *mbuf)
 /* DPAA SoC identifier; If this is not available, it can be concluded
  * that board is non-DPAA. Single slot is currently supported.
  */
-#define DPAA_SOC_ID_FILE   "/sys/devices/soc0/soc_id"
 
 #define SVR_LS1043A_FAMILY 0x8792
 #define SVR_LS1046A_FAMILY 0x8707
-#define SVR_MASK   0x
 
 /** Device driver supports link state interrupt */
 #define RTE_DPAA_DRV_INTR_LSC  0x0008
@@ -70,8 +68,6 @@ dpaa_seqn(struct rte_mbuf *mbuf)
 #define RTE_DEV_TO_DPAA_CONST(ptr) \
container_of(ptr, const struct rte_dpaa_device, device)
 
-extern unsigned int dpaa_svr_family;
-
 struct rte_dpaa_device;
 struct rte_dpaa_driver;
 
@@ -250,6 +246,9 @@ RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
 __rte_internal
 struct fm_eth_port_cfg *dpaa_get_eth_port_cfg(int dev_id);
 
+__rte_internal
+uint32_t dpaa_soc_ver(void);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index ed1f77bab7..7abc2235e7 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017-2020 NXP
+ * Copyright 2017-2025 NXP
  *
  */
 /* System headers */
@@ -46,12 +46,16 @@
 #include 
 #include 
 
+#define DPAA_SOC_ID_FILE   "/sys/devices/soc0/soc_id"
+#define DPAA_SVR_MASK 0x
+
 struct rte_dpaa_bus {
struct rte_bus bus;
TAILQ_HEAD(, rte_dpaa_device) device_list;
TAILQ_HEAD(, rte_dpaa_driver) driver_list;
int device_count;
int detected;
+   uint32_t svr_ver;
 };
 
 static struct rte_dpaa_bus s_rte_dpaa_bus;
@@ -60,9 +64,6 @@ static struct netcfg_info *dpaa_netcfg;
 /* define a variable to hold the portal_key, once created.*/
 static pthread_key_t dpaa_portal_key;
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_svr_family)
-unsigned int dpaa_svr_family;
-
 #define FSL_DPAA_BUS_NAME  dpaa_bus
 
 RTE_EXPORT_INTERNAL_SYMBOL(per_lcore_dpaa_io)
@@ -73,6 +74,13 @@ RTE_EXPORT_INTERNAL_SYMBOL(dpaa_seqn_dynfield_offset)
 int dpaa_seqn_dynfield_offset = -1;
 
 RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_eth_port_cfg)
+
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_soc_ver)
+uint32_t dpaa_soc_ver(void)
+{
+   return s_rte_dpaa_bus.svr_ver;
+}
+
 struct fm_eth_port_cfg *
 dpaa_get_eth_port_cfg(int dev_id)
 {
@@ -663,7 +671,7 @@ rte_dpaa_bus_probe(void)
struct rte_dpaa_device *dev;
struct rte_dpaa_driver *drv;
FILE *svr_file = NULL;
-   unsigned int svr_ver;
+   uint32_t svr_ver;
int probe_all = s_rte_dpaa_bus.bus.conf.scan_mode != RTE_BUS_SCAN_ALLOWLIST;
static int process_once;
 
@@ -671,6 +679,29 @@ rte_dpaa_bus_probe(void)
if (!s_rte_dpaa_bus.detected)
return 0;
 
+   if (s_rte_dpaa_bus.bus.conf.scan_mode != RTE_BUS_SCAN_ALLOWLIST)
+   probe_all = true;
+
+   svr_file = fopen(DPAA_SOC_ID_FILE, "r");
+   if (svr_file) {
+   if (fscanf(svr_file, "svr:%x", &svr_ver) > 0)
+   s_rte_dpaa_bus.svr_ver = svr_ver & DPAA_SVR_MASK;
+   else
+   s_

[v1 09/10] bus/dpaa: improve DPAA cleanup

2025-05-28 Thread vanshika . shukla
From: Gagandeep Singh 

This patch addresses DPAA driver issues introduced with
rte_eal_cleanup, which caused driver-specific destructors to fail
because memory had already been cleaned up.
To resolve this, we remove the driver destructor and relocate the
code to the bus cleanup function.

This patch therefore also implements DPAA bus cleanup.
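
A hedged sketch of the ordering problem being fixed: the bus cleanup
callback runs inside rte_eal_cleanup() while EAL memory is still valid,
whereas a driver destructor runs only after main() returns, when DPDK
memory may already be gone:

    int main(int argc, char **argv)
    {
        rte_eal_init(argc, argv);
        /* ... datapath ... */
        rte_eal_cleanup();  /* bus->cleanup() runs here, memory valid */
        return 0;
    }   /* __attribute__((destructor)) code runs only after this point
         * and must therefore not touch rte_malloc'd state. */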

Signed-off-by: Gagandeep Singh 
---
 drivers/bus/dpaa/base/qbman/qman_driver.c |   2 -
 drivers/bus/dpaa/dpaa_bus.c   |  58 ++
 drivers/net/dpaa/dpaa_ethdev.c| 217 --
 3 files changed, 215 insertions(+), 62 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index 7a129a2d86..cdce6b777b 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -228,8 +228,6 @@ int fsl_qman_fq_portal_destroy(struct qman_portal *qp)
if (ret)
pr_err("qman_free_global_portal() (%d)\n", ret);
 
-   kfree(qp);
-
process_portal_irq_unmap(cfg->irq);
 
addr.cena = cfg->addr_virt[DPAA_PORTAL_CE];
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 7abc2235e7..4d7e7ea3df 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -63,6 +63,9 @@ static struct netcfg_info *dpaa_netcfg;
 
 /* define a variable to hold the portal_key, once created.*/
 static pthread_key_t dpaa_portal_key;
+/* dpaa lcore specific  portals */
+struct dpaa_portal *dpaa_portals[RTE_MAX_LCORE] = {NULL};
+static int dpaa_bus_global_init;
 
 #define FSL_DPAA_BUS_NAME  dpaa_bus
 
@@ -402,6 +405,7 @@ int rte_dpaa_portal_init(void *arg)
 
return ret;
}
+   dpaa_portals[lcore] = DPAA_PER_LCORE_PORTAL;
 
DPAA_BUS_LOG(DEBUG, "QMAN thread initialized");
 
@@ -461,6 +465,8 @@ dpaa_portal_finish(void *arg)
rte_free(dpaa_io_portal);
dpaa_io_portal = NULL;
DPAA_PER_LCORE_PORTAL = NULL;
+   dpaa_portals[rte_lcore_id()] = NULL;
+   DPAA_BUS_DEBUG("Portal cleanup done for lcore = %d", rte_lcore_id());
 }
 
 static int
@@ -771,6 +777,7 @@ rte_dpaa_bus_probe(void)
break;
}
}
+   dpaa_bus_global_init = 1;
 
return 0;
 }
@@ -878,6 +885,56 @@ dpaa_bus_dev_iterate(const void *start, const char *str,
return NULL;
 }
 
+static int
+dpaa_bus_cleanup(void)
+{
+   struct rte_dpaa_device *dev, *tmp_dev;
+
+   BUS_INIT_FUNC_TRACE();
+   RTE_TAILQ_FOREACH_SAFE(dev, &s_rte_dpaa_bus.device_list, next, tmp_dev) {
+   struct rte_dpaa_driver *drv = dev->driver;
+   int ret = 0;
+
+   if (!rte_dev_is_probed(&dev->device))
+   continue;
+   if (!drv || !drv->remove)
+   continue;
+   ret = drv->remove(dev);
+   if (ret < 0) {
+   rte_errno = errno;
+   return -1;
+   }
+   dev->driver = NULL;
+   dev->device.driver = NULL;
+   }
+   dpaa_portal_finish((void *)DPAA_PER_LCORE_PORTAL);
+   dpaa_bus_global_init = 0;
+   DPAA_BUS_DEBUG("Bus cleanup done");
+
+   return 0;
+}
+
+/* Add a destructor as a double check in case of a non-graceful
+ * exit.
+ */
+static void __attribute__((destructor(102)))
+dpaa_cleanup(void)
+{
+   unsigned int lcore_id;
+
+   if (!dpaa_bus_global_init)
+   return;
+
+   /* cleanup portals in case of a non-graceful exit */
+   RTE_LCORE_FOREACH_WORKER(lcore_id) {
+   /* Check for non zero id */
+   dpaa_portal_finish((void *)dpaa_portals[lcore_id]);
+   }
+   dpaa_portal_finish((void *)DPAA_PER_LCORE_PORTAL);
+   dpaa_bus_global_init = 0;
+   DPAA_BUS_DEBUG("Worker thread clean up done");
+}
+
 static struct rte_dpaa_bus s_rte_dpaa_bus = {
.bus = {
.scan = rte_dpaa_bus_scan,
@@ -888,6 +945,7 @@ static struct rte_dpaa_bus s_rte_dpaa_bus = {
.plug = dpaa_bus_plug,
.unplug = dpaa_bus_unplug,
.dev_iterate = dpaa_bus_dev_iterate,
+   .cleanup = dpaa_bus_cleanup,
},
.device_list = TAILQ_HEAD_INITIALIZER(s_rte_dpaa_bus.device_list),
.driver_list = TAILQ_HEAD_INITIALIZER(s_rte_dpaa_bus.driver_list),
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 62cafb7073..85122c24b3 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -310,11 +310,12 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
 
if (!(default_q || fmc_q)) {
-   if (dpaa_fm_config(dev,
-   eth_conf->rx_adv_conf.rss_conf.rss_hf)) {
+   ret = dpaa_fm_config(dev,
+   eth_conf->rx_adv_conf.rss_conf.rss_hf);
+   if (ret) {
dpaa_write_fm_config_to_file();
-   

[v1 06/10] mempool/dpaa: adjust pool element for LS1043A errata

2025-05-28 Thread vanshika . shukla
From: Jun Yang 

Adjust every element of the pool via the populate callback:
1) Make sure the start DMA address is aligned to 16 B.
2) For buffers that cross a 4 KB boundary, make sure the start DMA
   address is aligned to 256 B (see the sketch below).
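
A hedged, illustrative restatement of rule (2) in C (the helper name
ls1043a_buf_ok is invented for this sketch):

    #include <stdbool.h>
    #include <stdint.h>

    /* A buffer may cross a 4 KB boundary only if its start address
     * is aligned to 256 B.
     */
    static bool ls1043a_buf_ok(uint64_t start, uint64_t len)
    {
        uint64_t first_pg = start & ~UINT64_C(0xFFF);
        uint64_t last_pg = (start + len - 1) & ~UINT64_C(0xFFF);

        if (first_pg == last_pg)
            return true;            /* stays inside one 4 KB window */
        return (start & 0xFF) == 0; /* crosses: need 256 B alignment */
    }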

Signed-off-by: Jun Yang 
---
 drivers/mempool/dpaa/dpaa_mempool.c | 145 +++-
 drivers/mempool/dpaa/dpaa_mempool.h |  11 ++-
 2 files changed, 150 insertions(+), 6 deletions(-)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 6c850f5cb2..2af6ebcee2 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- *   Copyright 2017,2019,2023 NXP
+ *   Copyright 2017,2019,2023-2025 NXP
  *
  */
 
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+
 #include 
 
 #include 
@@ -29,6 +30,9 @@
 #include 
 #include 
 
+#define FMAN_ERRATA_BOUNDARY ((uint64_t)4096)
+#define FMAN_ERRATA_BOUNDARY_MASK (~(FMAN_ERRATA_BOUNDARY - 1))
+
 /* List of all the memseg information locally maintained in dpaa driver. This
  * is to optimize the PA_to_VA searches until a better mechanism (algo) is
  * available.
@@ -51,6 +55,7 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
struct dpaa_bp_info *bp_info;
uint8_t bpid;
int num_bufs = 0, ret = 0;
+   uint16_t elem_max_size;
struct bman_pool_params params = {
.flags = BMAN_POOL_FLAG_DYNAMIC_BPID
};
@@ -101,9 +106,11 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
}
}
 
+   elem_max_size = rte_pktmbuf_data_room_size(mp);
+
rte_dpaa_bpid_info[bpid].mp = mp;
rte_dpaa_bpid_info[bpid].bpid = bpid;
-   rte_dpaa_bpid_info[bpid].size = mp->elt_size;
+   rte_dpaa_bpid_info[bpid].size = elem_max_size;
rte_dpaa_bpid_info[bpid].bp = bp;
rte_dpaa_bpid_info[bpid].meta_data_size =
sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);
@@ -296,6 +303,130 @@ dpaa_mbuf_get_count(const struct rte_mempool *mp)
return bman_query_free_buffers(bp_info->bp);
 }
 
+static int
+dpaa_check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
+{
+   if (!pg_sz || elt_sz > pg_sz)
+   return true;
+
+   if (RTE_PTR_ALIGN(obj, pg_sz) !=
+   RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
+   return false;
+   return true;
+}
+
+static void
+dpaa_adjust_obj_bounds(char *va, size_t *offset,
+   size_t pg_sz, size_t total, uint32_t flags)
+{
+   size_t off = *offset;
+
+   if (dpaa_check_obj_bounds(va + off, pg_sz, total) == false) {
+   off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
+   if (flags & RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ)
+   off += total - ((((size_t)va + off - 1) % total) + 1);
+   }
+
+   *offset = off;
+}
+
+static int
+dpaa_mbuf_ls1043a_errata_obj_adjust(uint8_t **pobj,
+   uint32_t header_size, size_t *poff, size_t data_room)
+{
+   uint8_t *obj = *pobj;
+   size_t off = *poff, buf_addr, end;
+
+   if (RTE_PKTMBUF_HEADROOM % FMAN_ERRATA_BUF_START_ALIGN) {
+   DPAA_MEMPOOL_ERR("RTE_PKTMBUF_HEADROOM(%d) NOT aligned to %d",
+   RTE_PKTMBUF_HEADROOM,
+   FMAN_ERRATA_BUF_START_ALIGN);
+   return -1;
+   }
+   if (header_size % FMAN_ERRATA_BUF_START_ALIGN) {
+   DPAA_MEMPOOL_ERR("Header size(%d) NOT aligned to %d",
+   header_size,
+   FMAN_ERRATA_BUF_START_ALIGN);
+   return -1;
+   }
+
+   /** All FMAN DMA start addresses (for example, BMAN buffer
+* address, FD[address] + FD[offset]) are 16B aligned.
+*/
+   buf_addr = (size_t)obj + header_size;
+   while (!rte_is_aligned((void *)buf_addr,
+   FMAN_ERRATA_BUF_START_ALIGN)) {
+   off++;
+   obj++;
+   buf_addr = (size_t)obj + header_size;
+   }
+
+   /** Frame buffers must not span a 4KB address boundary,
+* unless the frame start address is 256 byte aligned.
+*/
+   end = buf_addr + data_room;
+   if (((buf_addr + RTE_PKTMBUF_HEADROOM) &
+   FMAN_ERRATA_BOUNDARY_MASK) ==
+   (end & FMAN_ERRATA_BOUNDARY_MASK))
+   goto quit;
+
+   while (!rte_is_aligned((void *)(buf_addr + RTE_PKTMBUF_HEADROOM),
+   FMAN_ERRATA_4K_SPAN_ADDR_ALIGN)) {
+   off++;
+   obj++;
+   buf_addr = (size_t)obj + header_size;
+   }
+quit:
+   *pobj = obj;
+   *poff = off;
+
+   return 0;
+}
+
+static int
+dpaa_mbuf_op_pop_helper(struct rte_mempool *mp, uint32_t flags,
+   uint32_t max_objs, void *vaddr, rte_iova_t iova,
+   size_t len, struct dpaa_bp_info *bp_info,
+   rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+   char *va = vaddr;
+   size_t total_elt_sz, pg_sz, o

[PATCH v4 6/7] net/ena: fix aenq timeout with low poll interval

2025-05-28 Thread Shai Brandes
The driver can operate the admin queue in polling mode, eliminating
the need for interrupts in the control path. This mode is mandatory
when using the uio_pci_generic driver, which lacks interrupt support.

The control_path_poll_interval devarg accepts values in the range
[1..1000]; a value of 0 disables the polling mechanism. The value
defines the interval in milliseconds at which the driver checks for
asynchronous notifications from the device.

Testing revealed that setting this interval below 500 milliseconds
might lead to false detection of device unresponsiveness. This patch
clamps the user-defined value to the updated valid range [500..1000],
which ensures reliable AENQ monitoring.
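
In sketch form, the clamping behaviour this patch introduces (constant
names as used in the diff below; this is not the literal driver code):

    uint64_t v = user_value;        /* parsed from the devarg */

    if (v != 0) {                   /* 0 still selects interrupt mode */
        if (v < ENA_MIN_CONTROL_PATH_POLL_INTERVAL_MSEC)    /* 500 */
            v = ENA_MIN_CONTROL_PATH_POLL_INTERVAL_MSEC;
        if (v > ENA_MAX_CONTROL_PATH_POLL_INTERVAL_MSEC)    /* 1000 */
            v = ENA_MAX_CONTROL_PATH_POLL_INTERVAL_MSEC;
    }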

Fixes: ca1dfa85f0d3 ("net/ena: add control path pure polling mode")
Cc: stable@dpdk.org
Signed-off-by: Shai Brandes 
Reviewed-by: Amit Bernstein 
Reviewed-by: Yosef Raisman 
---
 doc/guides/nics/ena.rst|  4 +++-
 doc/guides/rel_notes/release_25_07.rst |  2 ++
 drivers/net/ena/ena_ethdev.c   | 24 ++--
 drivers/net/ena/ena_ethdev.h   |  3 ++-
 4 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst
index a42deccd81..decb6be766 100644
--- a/doc/guides/nics/ena.rst
+++ b/doc/guides/nics/ena.rst
@@ -136,7 +136,9 @@ Runtime Configuration
 
  0 - Disable (Admin queue will work in interrupt mode).
 
- [1..1000] - Number of milliseconds to wait between periodic inspection of the admin queues.
+ [500..1000] - Time in milliseconds to wait between periodic checks of the admin queues.
+ If a value outside this range is specified, the driver will automatically adjust it to
+ fit within the valid range.
 
 **A non-zero value for this devarg is mandatory for control path functionality
 when binding ports to uio_pci_generic kernel module which lacks interrupt support.**
diff --git a/doc/guides/rel_notes/release_25_07.rst b/doc/guides/rel_notes/release_25_07.rst
index 8c245d6805..9bff9a4627 100644
--- a/doc/guides/rel_notes/release_25_07.rst
+++ b/doc/guides/rel_notes/release_25_07.rst
@@ -69,6 +69,8 @@ New Features
    This mode bypasses the PPS limit enforced by EC2 for fragmented egress packets on every ENI.
  * Fixed the device initialization routine to correctly handle failure during the registration
    or enabling of interrupts when operating in control path interrupt mode.
+  * Fixed an issue where the device might be incorrectly reported as unresponsive when using
+    polling-based admin queue functionality with a poll interval of less than 500 milliseconds.
 
 * **Updated virtio driver.**
 
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 8ba4f3a9cf..d249701144 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -30,6 +30,8 @@
 #define GET_L4_HDR_LEN(mbuf)   \
((rte_pktmbuf_mtod_offset(mbuf, struct rte_tcp_hdr *,   \
mbuf->l3_len + mbuf->l2_len)->data_off) >> 4)
+#define CLAMP_VAL(val, min, max)   \
+   (RTE_MIN(RTE_MAX((val), (typeof(val))(min)), (typeof(val))(max)))
 
 #define ETH_GSTRING_LEN32
 
@@ -3756,25 +3758,19 @@ static int ena_process_uint_devarg(const char *key,
uint64_value * rte_get_timer_hz();
}
} else if (strcmp(key, ENA_DEVARG_CONTROL_PATH_POLL_INTERVAL) == 0) {
-   if (uint64_value > ENA_MAX_CONTROL_PATH_POLL_INTERVAL_MSEC) {
-   PMD_INIT_LOG_LINE(ERR,
-   "Control path polling interval is too long: %" 
PRIu64 " msecs. "
-   "Maximum allowed: %d msecs.",
-   uint64_value, ENA_MAX_CONTROL_PATH_POLL_INTERVAL_MSEC);
-   return -EINVAL;
-   } else if (uint64_value == 0) {
+   if (uint64_value == 0) {
PMD_INIT_LOG_LINE(INFO,
-   "Control path polling interval is set to zero. 
Operating in "
-   "interrupt mode.");
-   adapter->control_path_poll_interval = 0;
+   "Control path polling is disabled - Operating 
in interrupt mode");
} else {
+   uint64_value = CLAMP_VAL(uint64_value,
+   ENA_MIN_CONTROL_PATH_POLL_INTERVAL_MSEC,
+   ENA_MAX_CONTROL_PATH_POLL_INTERVAL_MSEC);
PMD_INIT_LOG_LINE(INFO,
-   "Control path polling interval is set to %" 
PRIu64 " msecs.",
+   "Control path polling interval is %" PRIu64 " 
msec",
uint64_value);
-   adapter->control_path_poll_interval = 
uint64_value * USEC_PER_MSEC;
   

[PATCH v4 5/7] net/ena: fix unhandled interrupt config failure

2025-05-28 Thread Shai Brandes
Fixed the device initialization routine to correctly handle
failure during the registration or enabling of interrupts
when operating in control path interrupt mode.

Fixes: ca1dfa85f0d3 ("net/ena: add control path pure polling mode")
Cc: stable@dpdk.org
Signed-off-by: Shai Brandes 
Reviewed-by: Amit Bernstein 
Reviewed-by: Yosef Raisman 
---
 doc/guides/rel_notes/release_25_07.rst |  2 ++
 drivers/net/ena/ena_ethdev.c   | 20 ++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_25_07.rst b/doc/guides/rel_notes/release_25_07.rst
index 27749f232b..8c245d6805 100644
--- a/doc/guides/rel_notes/release_25_07.rst
+++ b/doc/guides/rel_notes/release_25_07.rst
@@ -67,6 +67,8 @@ New Features
 
   * Added support for enabling fragment bypass mode for egress packets.
    This mode bypasses the PPS limit enforced by EC2 for fragmented egress packets on every ENI.
+  * Fixed the device initialization routine to correctly handle failure during the registration
+    or enabling of interrupts when operating in control path interrupt mode.
 
 * **Updated virtio driver.**
 
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 18aa25f6c7..8ba4f3a9cf 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -2465,8 +2465,16 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 
if (!adapter->control_path_poll_interval) {
/* Control path interrupt mode */
-   rte_intr_callback_register(intr_handle, ena_control_path_handler, eth_dev);
-   rte_intr_enable(intr_handle);
+   rc = rte_intr_callback_register(intr_handle, ena_control_path_handler, eth_dev);
+   if (unlikely(rc < 0)) {
+   PMD_DRV_LOG_LINE(ERR, "Failed to register control path 
interrupt");
+   goto err_stats_destroy;
+   }
+   rc = rte_intr_enable(intr_handle);
+   if (unlikely(rc < 0)) {
+   PMD_DRV_LOG_LINE(ERR, "Failed to enable control path 
interrupt");
+   goto err_control_path_destroy;
+   }
ena_com_set_admin_polling_mode(ena_dev, false);
} else {
/* Control path polling mode */
@@ -2485,6 +2493,14 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 
return 0;
 err_control_path_destroy:
+   if (!adapter->control_path_poll_interval) {
+   rc = rte_intr_callback_unregister_sync(intr_handle,
+   ena_control_path_handler,
+   eth_dev);
+   if (unlikely(rc < 0))
+   PMD_INIT_LOG_LINE(ERR, "Failed to unregister interrupt 
handler");
+   }
+err_stats_destroy:
rte_free(adapter->drv_stats);
 err_indirect_table_destroy:
ena_indirect_table_release(adapter);
-- 
2.17.1



[PATCH v4 1/7] net/ena/base: avoid recalculating desc per entry

2025-05-28 Thread Shai Brandes
descs_per_entry is precomputed in ena_com_config_llq_info() using
desc_stride_ctrl and desc_list_entry_size, which remain unchanged after
device negotiation. Reuse the existing value instead of recalculating it
in the fast path.
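
For reference, a hedged sketch of that one-time precomputation in
ena_com_config_llq_info(), mirroring the code deleted in the diff below:

    /* Computed once at device negotiation time, reused per packet. */
    if (llq_info->desc_stride_ctrl == ENA_ADMIN_SINGLE_DESC_PER_ENTRY)
        llq_info->descs_per_entry = 1;
    else
        llq_info->descs_per_entry =
            llq_info->desc_list_entry_size / io_sq->desc_entry_size;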

Signed-off-by: Shai Brandes 
Reviewed-by: Amit Bernstein 
Reviewed-by: Yosef Raisman 
---
 drivers/net/ena/base/ena_eth_com.c | 6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/net/ena/base/ena_eth_com.c b/drivers/net/ena/base/ena_eth_com.c
index 90dd85c7ff..c6668238e5 100644
--- a/drivers/net/ena/base/ena_eth_com.c
+++ b/drivers/net/ena/base/ena_eth_com.c
@@ -248,11 +248,7 @@ static int ena_com_sq_update_llq_tail(struct ena_com_io_sq *io_sq)
   0x0, llq_info->desc_list_entry_size);
 
pkt_ctrl->idx = 0;
-   if (unlikely(llq_info->desc_stride_ctrl == 
ENA_ADMIN_SINGLE_DESC_PER_ENTRY))
-   pkt_ctrl->descs_left_in_line = 1;
-   else
-   pkt_ctrl->descs_left_in_line =
-   llq_info->desc_list_entry_size / io_sq->desc_entry_size;
+   pkt_ctrl->descs_left_in_line = llq_info->descs_per_entry;
}
 
return ENA_COM_OK;
-- 
2.17.1



[v1 05/10] mempool/dpaa: fast acquire and release

2025-05-28 Thread vanshika . shukla
From: Jun Yang 

Use the new BMan APIs to improve performance and to support burst
release, which improves release performance by ~90%.
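
A hedged usage sketch of the new burst-release API (mirroring the
mempool caller in the diff below):

    uint64_t phys[FSL_BM_BURST_MAX];    /* buffer bus addresses */

    /* ... fill phys[] ... */
    while (bman_release_fast(pool, phys, FSL_BM_BURST_MAX))
        ;   /* retried while the release ring is busy */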

Signed-off-by: Jun Yang 
---
 drivers/mempool/dpaa/dpaa_mempool.c | 85 -
 drivers/mempool/dpaa/dpaa_mempool.h |  2 +-
 2 files changed, 36 insertions(+), 51 deletions(-)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 7dacaa9513..6c850f5cb2 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -157,61 +157,46 @@ dpaa_mbuf_free_pool(struct rte_mempool *mp)
}
 }
 
-static void
-dpaa_buf_free(struct dpaa_bp_info *bp_info, uint64_t addr)
-{
-   struct bm_buffer buf;
-   int ret;
-
-   DPAA_MEMPOOL_DPDEBUG("Free 0x%" PRIx64 " to bpid: %d",
-  addr, bp_info->bpid);
-
-   bm_buffer_set64(&buf, addr);
-retry:
-   ret = bman_release(bp_info->bp, &buf, 1, 0);
-   if (ret) {
-   DPAA_MEMPOOL_DEBUG("BMAN busy. Retrying...");
-   cpu_spin(CPU_SPIN_BACKOFF_CYCLES);
-   goto retry;
-   }
-}
-
 static int
 dpaa_mbuf_free_bulk(struct rte_mempool *pool,
void *const *obj_table,
-   unsigned int n)
+   unsigned int count)
 {
struct dpaa_bp_info *bp_info = DPAA_MEMPOOL_TO_POOL_INFO(pool);
int ret;
-   unsigned int i = 0;
+   uint32_t n = 0, i, left;
+   uint64_t phys[DPAA_MBUF_MAX_ACQ_REL];
 
DPAA_MEMPOOL_DPDEBUG("Request to free %d buffers in bpid = %d",
-n, bp_info->bpid);
+count, bp_info->bpid);
 
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: 
%d",
 ret);
-   return 0;
+   return ret;
}
}
 
-   while (i < n) {
-   uint64_t phy = rte_mempool_virt2iova(obj_table[i]);
-
-   if (unlikely(!bp_info->ptov_off)) {
-   /* buffers are from single mem segment */
-   if (bp_info->flags & DPAA_MPOOL_SINGLE_SEGMENT) {
-   bp_info->ptov_off = (size_t)obj_table[i] - phy;
-   rte_dpaa_bpid_info[bp_info->bpid].ptov_off
-   = bp_info->ptov_off;
-   }
+   while (n < count) {
+   /* Acquire is all-or-nothing, so we drain in 7s,
+* then the remainder.
+*/
+   if ((count - n) > DPAA_MBUF_MAX_ACQ_REL)
+   left = DPAA_MBUF_MAX_ACQ_REL;
+   else
+   left = count - n;
+
+   for (i = 0; i < left; i++) {
+   phys[i] = rte_mempool_virt2iova(obj_table[n]);
+   phys[i] += bp_info->meta_data_size;
+   n++;
}
-
-   dpaa_buf_free(bp_info,
- (uint64_t)phy + bp_info->meta_data_size);
-   i = i + 1;
+release_again:
+   ret = bman_release_fast(bp_info->bp, phys, left);
+   if (unlikely(ret))
+   goto release_again;
}
 
DPAA_MEMPOOL_DPDEBUG("freed %d buffers in bpid =%d",
@@ -226,9 +211,9 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
 unsigned int count)
 {
struct rte_mbuf **m = (struct rte_mbuf **)obj_table;
-   struct bm_buffer bufs[DPAA_MBUF_MAX_ACQ_REL];
+   uint64_t bufs[DPAA_MBUF_MAX_ACQ_REL];
struct dpaa_bp_info *bp_info;
-   void *bufaddr;
+   uint8_t *bufaddr;
int i, ret;
unsigned int n = 0;
 
@@ -240,7 +225,7 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
if (unlikely(count >= (RTE_MEMPOOL_CACHE_MAX_SIZE * 2))) {
DPAA_MEMPOOL_ERR("Unable to allocate requested (%u) buffers",
 count);
-   return -1;
+   return -EINVAL;
}
 
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
@@ -248,7 +233,7 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
if (ret) {
DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: 
%d",
 ret);
-   return -1;
+   return ret;
}
}
 
@@ -257,10 +242,11 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
 * then the remainder.
 */
if ((count - n) > DPAA_MBUF_MAX_ACQ_REL) {
-   ret = bman_acquire(bp_info->bp, bufs,
-  DPAA_MBUF_MAX_ACQ_REL, 0);
+   ret = bman_acquire_fast(bp_info-

[v1 08/10] net/dpaa: add devargs for enabling err packets on main queue

2025-05-28 Thread vanshika . shukla
From: Vanshika Shukla 

Currently, the error queue is mapped to the Rx queue and enabled by
default. This patch adds a devarg to control delivery of error packets
on the main queue. Also, in VSP mode the error queue should be disabled,
because error packets from the kernel are diverted to the Rx/error queue
and cause a crash.

Signed-off-by: Vanshika Shukla 
---
 doc/guides/nics/dpaa.rst   |  3 +++
 drivers/net/dpaa/dpaa_ethdev.c | 29 +
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index de3ae96e07..cc9aef7f83 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -277,6 +277,9 @@ for details.
 
 * Use dev arg option ``drv_ieee1588=1`` to enable IEEE 1588 support
   at driver level, e.g. ``dpaa:fm1-mac3,drv_ieee1588=1``.
+* Use dev arg option ``recv_err_pkts=1`` to receive all packets including
+  error packets and thus disabling hardware-based packet handling
+  at driver level, e.g. ``dpaa:fm1-mac3,recv_err_pkts=1``.
 
 FMAN Config
 ---
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 7d0f830204..62cafb7073 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -52,9 +52,10 @@
 #include 
 #include 
 
-#define DRIVER_IEEE1588"drv_ieee1588"
-#define CHECK_INTERVAL 100  /* 100ms */
-#define MAX_REPEAT_TIME90   /* 9s (90 * 100ms) in total */
+#define DRIVER_IEEE1588 "drv_ieee1588"
+#define CHECK_INTERVAL  100  /* 100ms */
+#define MAX_REPEAT_TIME 90   /* 9s (90 * 100ms) in total */
+#define DRIVER_RECV_ERR_PKTS  "recv_err_pkts"
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
@@ -87,6 +88,8 @@ static int is_global_init;
 static int fmc_q = 1;  /* Indicates the use of static fmc for distribution */
 static int default_q;  /* use default queue - FMC is not executed*/
 int dpaa_ieee_1588;/* use to indicate if IEEE 1588 is enabled for the driver */
+bool dpaa_enable_recv_err_pkts; /* Enable main queue to receive error packets */
+
 /* At present we only allow up to 4 push mode queues as default - as each of
  * this queue need dedicated portal and we are short of portals.
  */
@@ -1273,10 +1276,12 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
}
 
-   /* Enable main queue to receive error packets also by default */
+   /* Enable main queue to receive error packets */
if (fif->mac_type != fman_offline_internal &&
-   fif->mac_type != fman_onic)
+   fif->mac_type != fman_onic &&
+   dpaa_enable_recv_err_pkts && !fif->is_shared_mac) {
fman_if_set_err_fqid(fif, rxq->fqid);
+   }
 
return 0;
 }
@@ -2191,6 +2196,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
if (dpaa_get_devargs(dev->devargs, DRIVER_IEEE1588))
dpaa_ieee_1588 = 1;
 
+   if (dpaa_get_devargs(dev->devargs, DRIVER_RECV_ERR_PKTS))
+   dpaa_enable_recv_err_pkts = 1;
+
memset((char *)dev_rx_fqids, 0,
sizeof(uint32_t) * DPAA_MAX_NUM_PCD_QUEUES);
 
@@ -2418,8 +2426,12 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
fman_intf->mac_type != fman_offline_internal &&
fman_intf->mac_type != fman_onic) {
/* Configure error packet handling */
-   fman_if_receive_rx_errors(fman_intf,
- FM_FD_RX_STATUS_ERR_MASK);
+#ifndef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+   if (dpaa_enable_recv_err_pkts)
+#endif
+   fman_if_receive_rx_errors(fman_intf,
+   FM_FD_RX_STATUS_ERR_MASK);
+
/* Disable RX mode */
fman_if_disable_rx(fman_intf);
/* Disable promiscuous mode */
@@ -2619,5 +2631,6 @@ static struct rte_dpaa_driver rte_dpaa_pmd = {
 
 RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
 RTE_PMD_REGISTER_PARAM_STRING(net_dpaa,
-   DRIVER_IEEE1588 "=");
+   DRIVER_IEEE1588 "="
+   DRIVER_RECV_ERR_PKTS "=");
 RTE_LOG_REGISTER_DEFAULT(dpaa_logtype_pmd, NOTICE);
-- 
2.25.1



[PATCH] crypto/virtio: fix driver ID for virtio

2025-05-28 Thread Rajesh Mudimadugula
This patch corrects the driver ID for the virtio and virtio_user
PMDs.

Fixes: 25500d4b8076 ("crypto/virtio: support device init")

Signed-off-by: Rajesh Mudimadugula 
---
 drivers/crypto/virtio/virtio_cryptodev.c  | 4 ++--
 drivers/crypto/virtio/virtio_user_cryptodev.c | 5 +++--
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index fa215fe528..ca06cf2e37 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -550,7 +550,6 @@ crypto_virtio_dev_init(struct rte_cryptodev *cryptodev, uint64_t features,
 {
struct virtio_crypto_hw *hw;
 
-   cryptodev->driver_id = cryptodev_virtio_driver_id;
cryptodev->dev_ops = &virtio_crypto_dev_ops;
 
cryptodev->enqueue_burst = virtio_crypto_pkt_tx_burst;
@@ -599,6 +598,7 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
if (cryptodev == NULL)
return -ENODEV;
 
+   cryptodev->driver_id = cryptodev_virtio_driver_id;
if (crypto_virtio_dev_init(cryptodev, VIRTIO_CRYPTO_PMD_GUEST_FEATURES,
pci_dev) < 0)
return -1;
@@ -1666,7 +1666,7 @@ virtio_crypto_dev_info_get(struct rte_cryptodev *dev,
PMD_INIT_FUNC_TRACE();
 
if (info != NULL) {
-   info->driver_id = cryptodev_virtio_driver_id;
+   info->driver_id = dev->driver_id;
info->feature_flags = dev->feature_flags;
info->max_nb_queue_pairs = hw->max_dataqueues;
/* No limit of number of sessions */
diff --git a/drivers/crypto/virtio/virtio_user_cryptodev.c b/drivers/crypto/virtio/virtio_user_cryptodev.c
index 992e8fb43b..4daa188e1d 100644
--- a/drivers/crypto/virtio/virtio_user_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_user_cryptodev.c
@@ -26,6 +26,8 @@
 
 #define virtio_user_get_dev(hwp) container_of(hwp, struct virtio_user_dev, hw)
 
+uint8_t cryptodev_virtio_user_driver_id;
+
 static void
 virtio_user_read_dev_config(struct virtio_crypto_hw *hw, size_t offset,
 void *dst, int length __rte_unused)
@@ -460,6 +462,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
goto end;
}
 
+   cryptodev->driver_id = cryptodev_virtio_user_driver_id;
if (crypto_virtio_dev_init(cryptodev, 
VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES,
NULL) < 0) {
PMD_INIT_LOG(ERR, "crypto_virtio_dev_init fails");
@@ -563,8 +566,6 @@ static struct rte_vdev_driver virtio_user_driver = {
 
 static struct cryptodev_driver virtio_crypto_drv;
 
-uint8_t cryptodev_virtio_user_driver_id;
-
 RTE_PMD_REGISTER_VDEV(crypto_virtio_user, virtio_user_driver);
 RTE_PMD_REGISTER_CRYPTO_DRIVER(virtio_crypto_drv,
virtio_user_driver.driver,
-- 
2.34.1



[v1 10/10] bus/dpaa: optimize qman enqueue check

2025-05-28 Thread vanshika . shukla
From: Hemant Agrawal 

This patch improves data access during the qman enqueue ring check.

Signed-off-by: Jun Yang 
Signed-off-by: Hemant Agrawal 
---
 drivers/bus/dpaa/base/qbman/qman.c  | 41 -
 drivers/bus/dpaa/include/fsl_qman.h |  2 +-
 2 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/drivers/bus/dpaa/base/qbman/qman.c 
b/drivers/bus/dpaa/base/qbman/qman.c
index fbce0638b7..60087c55a1 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -1466,7 +1466,7 @@ int qman_create_fq(u32 fqid, u32 flags, struct qman_fq 
*fq)
}
spin_lock_init(&fq->fqlock);
fq->fqid = fqid;
-   fq->fqid_le = cpu_to_be32(fqid);
+   fq->fqid_be = cpu_to_be32(fqid);
fq->flags = flags;
fq->state = qman_fq_state_oos;
fq->cgr_groupid = 0;
@@ -2291,7 +2291,7 @@ int qman_enqueue_multi(struct qman_fq *fq,
struct qm_portal *portal = &p->p;
 
register struct qm_eqcr *eqcr = &portal->eqcr;
-   struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+   struct qm_eqcr_entry *eq = eqcr->cursor;
 
u8 i = 0, diff, old_ci, sent = 0;
 
@@ -2307,7 +2307,7 @@ int qman_enqueue_multi(struct qman_fq *fq,
 
/* try to send as many frames as possible */
while (eqcr->available && frames_to_send--) {
-   eq->fqid = fq->fqid_le;
+   eq->fqid = fq->fqid_be;
eq->fd.opaque_addr = fd->opaque_addr;
eq->fd.addr = cpu_to_be40(fd->addr);
eq->fd.status = cpu_to_be32(fd->status);
@@ -2317,8 +2317,9 @@ int qman_enqueue_multi(struct qman_fq *fq,
((flags[i] >> 8) & QM_EQCR_DCA_IDXMASK);
}
i++;
-   eq = (void *)((unsigned long)(eq + 1) &
-   (~(unsigned long)(QM_EQCR_SIZE << 6)));
+   eq++;
+   if (unlikely(eq >= (eqcr->ring + QM_EQCR_SIZE)))
+   eq = eqcr->ring;
eqcr->available--;
sent++;
fd++;
@@ -2332,11 +2333,11 @@ int qman_enqueue_multi(struct qman_fq *fq,
for (i = 0; i < sent; i++) {
eq->__dont_write_directly__verb =
QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
-   prev_eq = eq;
-   eq = (void *)((unsigned long)(eq + 1) &
-   (~(unsigned long)(QM_EQCR_SIZE << 6)));
-   if (unlikely((prev_eq + 1) != eq))
+   eq++;
+   if (unlikely(eq >= (eqcr->ring + QM_EQCR_SIZE))) {
eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+   eq = eqcr->ring;
+   }
}
 
/* We need  to flush all the lines but without load/store operations
@@ -2361,7 +2362,7 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct 
qm_fd *fd,
struct qm_portal *portal = &p->p;
 
register struct qm_eqcr *eqcr = &portal->eqcr;
-   struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
+   struct qm_eqcr_entry *eq = eqcr->cursor;
 
u8 i = 0, diff, old_ci, sent = 0;
 
@@ -2377,7 +2378,7 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct 
qm_fd *fd,
 
/* try to send as many frames as possible */
while (eqcr->available && frames_to_send--) {
-   eq->fqid = fq[sent]->fqid_le;
+   eq->fqid = fq[sent]->fqid_be;
eq->fd.opaque_addr = fd->opaque_addr;
eq->fd.addr = cpu_to_be40(fd->addr);
eq->fd.status = cpu_to_be32(fd->status);
@@ -2388,8 +2389,9 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct 
qm_fd *fd,
}
i++;
 
-   eq = (void *)((unsigned long)(eq + 1) &
-   (~(unsigned long)(QM_EQCR_SIZE << 6)));
+   eq++;
+   if (unlikely(eq >= (eqcr->ring + QM_EQCR_SIZE)))
+   eq = eqcr->ring;
eqcr->available--;
sent++;
fd++;
@@ -2403,11 +2405,11 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const 
struct qm_fd *fd,
for (i = 0; i < sent; i++) {
eq->__dont_write_directly__verb =
QM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;
-   prev_eq = eq;
-   eq = (void *)((unsigned long)(eq + 1) &
-   (~(unsigned long)(QM_EQCR_SIZE << 6)));
-   if (unlikely((prev_eq + 1) != eq))
+   eq++;
+   if (unlikely(eq >= (eqcr->ring + QM_EQCR_SIZE))) {
eqcr->vbit ^= QM_EQCR_VERB_VBIT;
+   eq = eqcr->ring;
+   }
}
 
/* We need  to flush all the lines but without load/store operations
@@ -2416,8 +2418,9 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct 
qm_fd *fd,
eq = eqcr->cursor;
for (i = 0; i < sent; i++) {
dcbf(eq);
- 
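
The heart of this optimization is visible in the hunks above: the
address-mask trick for wrapping the EQCR cursor is replaced by a plain
increment with an explicit, rarely-taken bounds check, and the prev_eq
bookkeeping disappears. A stand-alone sketch of the two wrap styles,
assuming 64-byte entries and a power-of-two ring as in qman (the masked
form additionally relies on the portal mapping being aligned so that
exactly one address bit flips on wrap):

#include <stdint.h>
#include <stdio.h>

#define QM_EQCR_SIZE 8 /* power-of-two ring of 64-byte entries */

struct qm_eqcr_entry {
	uint32_t fqid;
	uint8_t pad[60]; /* EQCR entries are 64 bytes in hardware */
};

/* Old style (shape copied from the removed lines): wrap by masking the
 * entry address; only valid for a suitably aligned, mapped ring. */
static struct qm_eqcr_entry *
next_masked(struct qm_eqcr_entry *eq)
{
	return (void *)((unsigned long)(eq + 1) &
			(~(unsigned long)(QM_EQCR_SIZE << 6)));
}

/* New style (as in the patch): increment plus an explicit bounds
 * check -- cheaper and independent of ring placement. */
static struct qm_eqcr_entry *
next_checked(struct qm_eqcr_entry *eq, struct qm_eqcr_entry *ring)
{
	eq++;
	if (eq >= ring + QM_EQCR_SIZE)
		eq = ring;
	return eq;
}

int main(void)
{
	static struct qm_eqcr_entry ring[QM_EQCR_SIZE];
	struct qm_eqcr_entry *eq = ring;
	int i;

	(void)next_masked; /* contrast only; needs the aligned mapping */
	for (i = 0; i < QM_EQCR_SIZE + 2; i++)
		eq = next_checked(eq, ring);
	printf("index after wrap: %td\n", eq - ring);
	return 0;
}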

RE: [PATCH v2 1/2] ethdev: remove unnecessary type conversion

2025-05-28 Thread Konstantin Ananyev



> 
> > From: Konstantin Ananyev [mailto:konstantin.anan...@huawei.com]
> > Sent: Wednesday, 28 May 2025 10.24
> >
> > > >
> > > > > From: Stephen Hemminger [mailto:step...@networkplumber.org]
> > > > > Sent: Tuesday, 27 May 2025 17.07
> > > > >
> > > > > On Mon, 12 May 2025 20:37:19 +0530
> > > > >  wrote:
> > > > >
> > > > > >  /**@{@name Rx hardware descriptor states
> > > > > > diff --git a/lib/ethdev/rte_ethdev_core.h
> > > > > b/lib/ethdev/rte_ethdev_core.h
> > > > > > index e55fb42996..4ffae4921a 100644
> > > > > > --- a/lib/ethdev/rte_ethdev_core.h
> > > > > > +++ b/lib/ethdev/rte_ethdev_core.h
> > > > > > @@ -45,7 +45,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
> > > > > >
> > > > > >
> > > > > >  /** @internal Get number of used descriptors on a receive
> > queue. */
> > > > > > -typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
> > > > > > +typedef int (*eth_rx_queue_count_t)(void *rxq);
> > > > > >
> > > > > >  /** @internal Check the status of a Rx descriptor */
> > > > > >  typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t
> > > > > offset);
> > > > >
> > > > >
> > > > > This gets reported as ABI breakage. The change will have to wait
> > until
> > > > > next LTS (25.11)
> > > >
> > > > The return type was weird (wrong) to begin with.
> > > > When used, it gets cast to int:
> > > >
> > https://elixir.bootlin.com/dpdk/v25.03/source/lib/ethdev/rte_ethdev.h#L
> > 6404
> > >
> > > Personally, I don't see anything strange here.
> > > devops rx_queue_count() supposed to return uint, because it should
> > never failed for
> > > valid queue.
> 
> The main thing wrong is inconsistency with its sibling API for TX queue count:
> /** @internal Get number of used descriptors on a receive queue. */
> typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
> /** @internal Get number of used descriptors on a transmit queue. */
> typedef int (*eth_tx_queue_count_t)(void *txq);
> 
> > > While rte_eth_rx_queue_count() itself can fail - wrong port/queue id,
> > etc.
> >
> > BTW, rx_queue_setup() accepts only uint16_t for number of rx
> > descritoirs:
> > int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> > uint16_t nb_rx_desc, unsigned int socket_id,
> > const struct rte_eth_rxconf *rx_conf,
> > struct rte_mempool *mb_pool);
> >
> > shouldn't dev->rx_queue_count() then also return uitn16_t (for
> > consistency)?
> 
> If neither the RX or TX queue count callbacks can fail, then yes, both APIs 
> could be updated to return uint16_t.
> But it would be more future proof to allow these callbacks to fail, even on a 
> valid queue.
> The public APIs rte_eth_rx/tx_queue_count() can fail, so passing on failures 
> from the callbacks would not be a change of the public
> API. (Assuming no new error codes are allowed.)
> 
> And the "future" already arrived:
> The return type must be int (or have the same size as int), for the 
> performance patch [1] replacing the ops->callback==NULL check
> with dummy callbacks returning -ENOTSUP.
> 
> [1]: https://inbox.dpdk.org/dev/20250512150732.65743-2-sk...@marvell.com/

So what we are saving with that patch: one cmp and one un-taken branch:
@@ -6399,8 +6399,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t 
queue_id)
return -EINVAL;
 #endif
 
-   if (p->rx_queue_count == NULL)
-   return -ENOTSUP;
return p->rx_queue_count(qd);
 }

I wonder how realistic (and measurable) the gain is?

>   
> >
> > > >
> > > > You are right that it formally changes the ABI, and we should go
> > through the LTS motions.
> > > > But, for this change, I'd favor an exception.
> > >
> > > Again, from my opinion, there is nothing that urgent/important why
> > such changes (if needed)
> > > can't wait till next LTS.
> > > For now, we can simply do type conversion explicitly at
> > rte_eth_rx_queue_count().
> 
> OK. No objections from me. Just trying to accelerate some cleanup work.
> 
> > >
> > > > PS: As a consequence of this change, a patch to update the return
> > type of the callback in all the ethdev drivers should be provided.
> > > >
> > > > >
> > > > >
> > > > >   [C] 'rte_eth_fp_ops rte_eth_fp_ops[32]' was changed at
> > > > > rte_ethdev.c:47:1:
> > > > > type of variable changed:
> > > > >   array element type 'struct rte_eth_fp_ops' changed:
> > > > > type size hasn't changed
> > > > > 1 data member change:
> > > > >   type of 'eth_rx_queue_count_t rx_queue_count' changed:
> > > > > underlying type 'uint32_t (*)(void*)' changed:
> > > > >   in pointed to type 'function type uint32_t
> > (void*)':
> > > > > return type changed:
> > > > >   entity changed from 'typedef uint32_t' to 'int'
> > > > >   type size hasn't changed
> > > > >   type size hasn't changed



RE: [PATCH v2 2/2] ethdev: remove callback checks from fast path

2025-05-28 Thread Morten Brørup
> From: Sunil Kumar Kori 
> Sent: Monday, 12 May 2025 17.07
> 
> rte_eth_fp_ops contains ops for fast path APIs. Each API
> validates availability of the callback and then invokes it.
> These checks impact data path performance.

Picking up the discussion from another thread [1]:

> From: Konstantin Ananyev [mailto:konstantin.anan...@huawei.com]
> Sent: Wednesday, 28 May 2025 11.14
> 
> So what we are saving with that patch: one cmp and one un-taken branch:
> @@ -6399,8 +6399,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t
> queue_id)
>   return -EINVAL;
>  #endif
> 
> - if (p->rx_queue_count == NULL)
> - return -ENOTSUP;
>   return p->rx_queue_count(qd);
>  }

These are inline functions, so we also save some code space, instruction cache, 
and possibly an entry in the branch predictor - everywhere these functions are 
instantiated by the compiler.

> 
> I wonder is how realistic (and measurable) is the gain?

The performance optimization is mainly targeting the mbuf recycle operations, 
i.e. the hot fast path, where every cycle counts.
And while optimizing those, the other ethdev fast path callbacks are also 
optimized.

Yes, although we all agree that there is no downside to this optimization, it 
would be nice to see some performance numbers.

[1]: https://inbox.dpdk.org/dev/581e7a5389f842a9824a365a46c47...@huawei.com/
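
For readers skimming the thread, the shape of the optimization is simple:
install a never-NULL dummy function pointer at device allocation time so
the fast path can drop its per-call NULL test. A minimal stand-alone
sketch (illustrative names, not the actual ethdev code):

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

typedef int (*queue_count_fn)(void *queue);

/* Installed at allocation time, so the pointer is never NULL. */
static int queue_count_dummy(void *queue)
{
	(void)queue;
	return -ENOTSUP;
}

struct fp_ops {
	queue_count_fn rx_queue_count;
};

/* Before: every call pays a compare and a (normally untaken) branch. */
static int count_checked(const struct fp_ops *p, void *q)
{
	if (p->rx_queue_count == NULL)
		return -ENOTSUP;
	return p->rx_queue_count(q);
}

/* After: unconditional indirect call; drivers without support keep
 * the dummy and still return -ENOTSUP. */
static int count_unchecked(const struct fp_ops *p, void *q)
{
	return p->rx_queue_count(q);
}

int main(void)
{
	struct fp_ops ops = { .rx_queue_count = queue_count_dummy };

	printf("%d %d\n", count_checked(&ops, NULL),
	       count_unchecked(&ops, NULL));
	return 0;
}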

> 
> Hence removing these NULL checks instead using dummy
> callbacks.
> 
> Signed-off-by: Sunil Kumar Kori 
> ---
>  lib/ethdev/ethdev_driver.c | 55 +
>  lib/ethdev/ethdev_driver.h | 71 ++
>  lib/ethdev/rte_ethdev.h| 29 ++--
>  3 files changed, 129 insertions(+), 26 deletions(-)
> 
> diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
> index ec0c1e1176..f89562b237 100644
> --- a/lib/ethdev/ethdev_driver.c
> +++ b/lib/ethdev/ethdev_driver.c
> @@ -75,6 +75,20 @@ eth_dev_get(uint16_t port_id)
>   return eth_dev;
>  }
> 
> +static void
> +eth_dev_set_dummy_fops(struct rte_eth_dev *eth_dev)
> +{
> + eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
> + eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
> + eth_dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
> + eth_dev->rx_queue_count = rte_eth_queue_count_dummy;
> + eth_dev->tx_queue_count = rte_eth_queue_count_dummy;
> + eth_dev->rx_descriptor_status = rte_eth_descriptor_status_dummy;
> + eth_dev->tx_descriptor_status = rte_eth_descriptor_status_dummy;
> + eth_dev->recycle_tx_mbufs_reuse =
> rte_eth_recycle_tx_mbufs_reuse_dummy;
> + eth_dev->recycle_rx_descriptors_refill =
> rte_eth_recycle_rx_descriptors_refill_dummy;
> +}
> +
>  RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_allocate)
>  struct rte_eth_dev *
>  rte_eth_dev_allocate(const char *name)
> @@ -115,6 +129,7 @@ rte_eth_dev_allocate(const char *name)
>   }
> 
>   eth_dev = eth_dev_get(port_id);
> + eth_dev_set_dummy_fops(eth_dev);
>   eth_dev->flow_fp_ops = &rte_flow_fp_default_ops;
>   strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
>   eth_dev->data->port_id = port_id;
> @@ -847,6 +862,46 @@ rte_eth_pkt_burst_dummy(void *queue __rte_unused,
>   return 0;
>  }
> 
> +RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_tx_pkt_prepare_dummy)
> +uint16_t
> +rte_eth_tx_pkt_prepare_dummy(void *queue __rte_unused,
> + struct rte_mbuf **pkts __rte_unused,
> + uint16_t nb_pkts)
> +{
> + return nb_pkts;
> +}
> +
> +RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_queue_count_dummy)
> +int
> +rte_eth_queue_count_dummy(void *queue __rte_unused)
> +{
> + return -ENOTSUP;
> +}
> +
> +RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_descriptor_status_dummy)
> +int
> +rte_eth_descriptor_status_dummy(void *queue __rte_unused,
> + uint16_t offset __rte_unused)
> +{
> + return -ENOTSUP;
> +}
> +
> +RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_recycle_tx_mbufs_reuse_dummy)
> +uint16_t
> +rte_eth_recycle_tx_mbufs_reuse_dummy(void *queue __rte_unused,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info
> __rte_unused)
> +{
> + return 0;
> +}
> +
> +RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_recycle_rx_descriptors_refill_dummy
> )
> +void
> +rte_eth_recycle_rx_descriptors_refill_dummy(void *queue __rte_unused,
> + uint16_t nb __rte_unused)
> +{
> + /* No action. */
> +}
> +
>  RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_representor_id_get)
>  int
>  rte_eth_representor_id_get(uint16_t port_id,
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 2b4d2ae9c3..71085bddff 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1874,6 +1874,77 @@ rte_eth_pkt_burst_dummy(void *queue
> __rte_unused,
>   struct rte_mbuf **pkts __rte_unused,
>   uint16_t nb_pkts __rte_unused);
> 
> +/**
> + * @internal
> + * Dummy DPDK callback for Tx packet prepare.
> + *
> + * @param queue
> + *  Pointer to Tx queue
> + * @param pkts

[PATCH] net/mlx5: avoid setting kernel MTU if not needed

2025-05-28 Thread Maxime Coquelin
This patch checks whether the kernel MTU has the same value
as the requested one at port configuration time, and skips
setting it if it is the same.

Doing this, we avoid requiring the application to have the
NET_ADMIN capability, as in v23.11.

Fixes: 10859ecf09c4 ("net/mlx5: fix MTU configuration")
Cc: sta...@dpdk.org

Signed-off-by: Maxime Coquelin 
---

Hi Dariuz,

I set priv->mtu as is done after the mlx5_set_mtu() call,
but I'm not sure it is necessary; the same goes for the existing
call to mlx5_get_mtu(), since it seems to already be done in
mlx5_dev_spawn().

---

 drivers/net/mlx5/mlx5_ethdev.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 7708a0b808..f2ae75a8e1 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -647,6 +647,14 @@ mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
ret = mlx5_get_mtu(dev, &kern_mtu);
if (ret)
return ret;
+
+   if (kern_mtu == mtu) {
+   priv->mtu = mtu;
+   DRV_LOG(DEBUG, "port %u adapter MTU was already set to %u",
+   dev->data->port_id, mtu);
+   return 0;
+   }
+
/* Set kernel interface MTU first. */
ret = mlx5_set_mtu(dev, mtu);
if (ret)
--
2.49.0



RE: [PATCH v2 1/2] ethdev: remove unnecessary type conversion

2025-05-28 Thread Morten Brørup
> From: Konstantin Ananyev [mailto:konstantin.anan...@huawei.com]
> Sent: Wednesday, 28 May 2025 11.14
> 
> >
> > > From: Konstantin Ananyev [mailto:konstantin.anan...@huawei.com]
> > > Sent: Wednesday, 28 May 2025 10.24
> > >
> > > > >
> > > > > > From: Stephen Hemminger [mailto:step...@networkplumber.org]
> > > > > > Sent: Tuesday, 27 May 2025 17.07
> > > > > >
> > > > > > On Mon, 12 May 2025 20:37:19 +0530
> > > > > >  wrote:
> > > > > >
> > > > > > >  /**@{@name Rx hardware descriptor states
> > > > > > > diff --git a/lib/ethdev/rte_ethdev_core.h
> > > > > > b/lib/ethdev/rte_ethdev_core.h
> > > > > > > index e55fb42996..4ffae4921a 100644
> > > > > > > --- a/lib/ethdev/rte_ethdev_core.h
> > > > > > > +++ b/lib/ethdev/rte_ethdev_core.h
> > > > > > > @@ -45,7 +45,7 @@ typedef uint16_t (*eth_tx_prep_t)(void
> *txq,
> > > > > > >
> > > > > > >
> > > > > > >  /** @internal Get number of used descriptors on a receive
> > > queue. */
> > > > > > > -typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
> > > > > > > +typedef int (*eth_rx_queue_count_t)(void *rxq);
> > > > > > >
> > > > > > >  /** @internal Check the status of a Rx descriptor */
> > > > > > >  typedef int (*eth_rx_descriptor_status_t)(void *rxq,
> uint16_t
> > > > > > offset);
> > > > > >
> > > > > >
> > > > > > This gets reported as ABI breakage. The change will have to
> wait
> > > until
> > > > > > next LTS (25.11)
> > > > >
> > > > > The return type was weird (wrong) to begin with.
> > > > > When used, it gets cast to int:
> > > > >
> > >
> https://elixir.bootlin.com/dpdk/v25.03/source/lib/ethdev/rte_ethdev.h#L
> > > 6404
> > > >
> > > > Personally, I don't see anything strange here.
> > > > devops rx_queue_count() supposed to return uint, because it
> should
> > > never failed for
> > > > valid queue.
> >
> > The main thing wrong is inconsistency with its sibling API for TX
> queue count:
> > /** @internal Get number of used descriptors on a receive queue. */
> > typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
> > /** @internal Get number of used descriptors on a transmit queue. */
> > typedef int (*eth_tx_queue_count_t)(void *txq);
> >
> > > > While rte_eth_rx_queue_count() itself can fail - wrong port/queue
> id,
> > > etc.
> > >
> > > BTW, rx_queue_setup() accepts only uint16_t for number of rx
> > > descritoirs:
> > > int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> > > uint16_t nb_rx_desc, unsigned int socket_id,
> > > const struct rte_eth_rxconf *rx_conf,
> > > struct rte_mempool *mb_pool);
> > >
> > > shouldn't dev->rx_queue_count() then also return uitn16_t (for
> > > consistency)?
> >
> > If neither the RX or TX queue count callbacks can fail, then yes,
> both APIs could be updated to return uint16_t.
> > But it would be more future proof to allow these callbacks to fail,
> even on a valid queue.
> > The public APIs rte_eth_rx/tx_queue_count() can fail, so passing on
> failures from the callbacks would not be a change of the public
> > API. (Assuming no new error codes are allowed.)
> >
> > And the "future" already arrived:
> > The return type must be int (or have the same size as int), for the
> performance patch [1] replacing the ops->callback==NULL check
> > with dummy callbacks returning -ENOTSUP.
> >
> > [1]: https://inbox.dpdk.org/dev/20250512150732.65743-2-
> sk...@marvell.com/
> 
> So what we are saving with that patch: one cmp and one un-taken branch:
> @@ -6399,8 +6399,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t
> queue_id)
>   return -EINVAL;
>  #endif
> 
> - if (p->rx_queue_count == NULL)
> - return -ENOTSUP;
>   return p->rx_queue_count(qd);
>  }
> 
> I wonder is how realistic (and measurable) is the gain?

Moving the performance discussion to the other thread [2].
[2]: 
https://inbox.dpdk.org/dev/98cbd80474fa8b44bf855df32c47dc35e9f...@smartserver.smartshare.dk/

> 
> >
> > >
> > > > >
> > > > > You are right that it formally changes the ABI, and we should
> go
> > > through the LTS motions.
> > > > > But, for this change, I'd favor an exception.
> > > >
> > > > Again, from my opinion, there is nothing that urgent/important
> why
> > > such changes (if needed)
> > > > can't wait till next LTS.
> > > > For now, we can simply do type conversion explicitly at
> > > rte_eth_rx_queue_count().
> >
> > OK. No objections from me. Just trying to accelerate some cleanup
> work.
> >
> > > >
> > > > > PS: As a consequence of this change, a patch to update the
> return
> > > type of the callback in all the ethdev drivers should be provided.
> > > > >
> > > > > >
> > > > > >
> > > > > >   [C] 'rte_eth_fp_ops rte_eth_fp_ops[32]' was changed at
> > > > > > rte_ethdev.c:47:1:
> > > > > > type of variable changed:
> > > > > >   array element type 'struct rte_eth_fp_ops' changed:
> > > > > > type size hasn't changed
> > > > > > 1 data member change:
> > > > > >   type of 'eth_rx_queue_count_t 

[v1 07/10] net/dpaa: add Tx rate limiting DPAA PMD API

2025-05-28 Thread vanshika . shukla
From: Vinod Pullabhatla 

Add support to set Tx rate on DPAA platform through PMD APIs

Signed-off-by: Vinod Pullabhatla 
Signed-off-by: Vanshika Shukla 
---
 .mailmap |  1 +
 drivers/net/dpaa/dpaa_flow.c | 87 +++-
 drivers/net/dpaa/fmlib/fm_lib.c  | 30 ++
 drivers/net/dpaa/fmlib/fm_port_ext.h |  2 +-
 drivers/net/dpaa/rte_pmd_dpaa.h  | 21 ++-
 5 files changed, 137 insertions(+), 4 deletions(-)

diff --git a/.mailmap b/.mailmap
index 563f602bcc..d2d9ad2758 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1635,6 +1635,7 @@ Vincent S. Cojot 
 Vinh Tran 
 Vipin Padmam Ramesh 
 Vinod Krishna 
+Vinod Pullabhatla 
 Vipin Varghese  
 Vipul Ashri 
 Visa Hankala 
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 2a22b23c8f..eb4bbb097c 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2019,2021-2024 NXP
+ * Copyright 2017-2019,2021-2025 NXP
  */
 
 /* System headers */
@@ -669,6 +669,22 @@ static inline int get_rx_port_type(struct fman_if *fif)
return e_FM_PORT_TYPE_DUMMY;
 }
 
+static inline int get_tx_port_type(struct fman_if *fif)
+{
+   if (fif->mac_type == fman_offline_internal ||
+   fif->mac_type == fman_onic)
+   return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
+   else if (fif->mac_type == fman_mac_1g)
+   return e_FM_PORT_TYPE_TX;
+   else if (fif->mac_type == fman_mac_2_5g)
+   return e_FM_PORT_TYPE_TX_2_5G;
+   else if (fif->mac_type == fman_mac_10g)
+   return e_FM_PORT_TYPE_TX_10G;
+
+   DPAA_PMD_ERR("MAC type unsupported");
+   return e_FM_PORT_TYPE_DUMMY;
+}
+
 static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
 uint64_t req_dist_set,
 struct fman_if *fif)
@@ -889,9 +905,9 @@ int dpaa_fm_init(void)
/* FM PCD Enable */
ret = fm_pcd_enable(pcd_handle);
if (ret) {
-   fm_close(fman_handle);
fm_pcd_close(pcd_handle);
DPAA_PMD_ERR("fm_pcd_enable: Failed");
+   fm_close(fman_handle);
return -1;
}
 
@@ -1073,3 +1089,70 @@ int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, 
struct fman_if *fif)
 
return E_OK;
 }
+
+int rte_pmd_dpaa_port_set_rate_limit(uint16_t port_id, uint16_t burst,
+uint32_t rate)
+{
+   t_fm_port_rate_limit port_rate_limit;
+   bool port_handle_exists = true;
+   void *handle;
+   uint32_t ret;
+   struct rte_eth_dev *dev;
+   struct dpaa_if *dpaa_intf;
+   struct fman_if *fif;
+
+   RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+   dev = &rte_eth_devices[port_id];
+   dpaa_intf = dev->data->dev_private;
+   fif = dev->process_private;
+
+   memset(&port_rate_limit, 0, sizeof(port_rate_limit));
+   port_rate_limit.max_burst_size = burst;
+   port_rate_limit.rate_limit = rate;
+
+   DPAA_PMD_DEBUG("Port:%s: set max Burst =%u max Rate =%u",
+   dpaa_intf->name, burst, rate);
+
+   if (!dpaa_intf->port_handle) {
+   t_fm_port_params fm_port_params;
+
+   /* Memset FM port params */
+   memset(&fm_port_params, 0, sizeof(fm_port_params));
+
+   /* Set FM port params */
+   fm_port_params.h_fm = fm_open(0);
+   fm_port_params.port_type = get_tx_port_type(fif);
+   fm_port_params.port_id = mac_idx[fif->mac_idx];
+
+   /* FM PORT Open */
+   handle = fm_port_open(&fm_port_params);
+   fm_close(fm_port_params.h_fm);
+   if (!handle) {
+   DPAA_PMD_ERR("Can't open handle %p",
+fm_info.fman_handle);
+   return -ENODEV;
+   }
+
+   port_handle_exists = false;
+   } else {
+   handle = dpaa_intf->port_handle;
+   }
+
+   if (burst == 0 || rate == 0)
+   ret = fm_port_delete_rate_limit(handle);
+   else
+   ret = fm_port_set_rate_limit(handle, &port_rate_limit);
+
+   if (ret) {
+   DPAA_PMD_ERR("Failed to %s rate limit ret = %#x.",
+   (!burst || !rate) ? "del" : "set", ret);
+   } else {
+   DPAA_PMD_DEBUG("Success to %s rate limit,",
+   (!burst || !rate) ? "del" : "set");
+   }
+
+   if (!port_handle_exists)
+   fm_port_close(handle);
+
+   return -ret;
+}
diff --git a/drivers/net/dpaa/fmlib/fm_lib.c b/drivers/net/dpaa/fmlib/fm_lib.c
index b35feba004..d34993de38 100644
--- a/drivers/net/dpaa/fmlib/fm_lib.c
+++ b/drivers/net/dpaa/fmlib/fm_lib.c
@@ -558,3 +558,33 @@ get_device_id(t_handle h_dev)
 
return (t_handle)p_dev->id;
 }

Re: [dpdk-dev] Regarding HQOS with run-to-completion Model

2025-05-28 Thread farooq basha
Thanks Stephen.

 While browsing the DPDK QoS code, I figured out that an existing
PIPE-PROFILE cannot be updated or deleted at run time.
Was there any reason for this limitation?

Thanks
Farooq.J

On Thu, 22 May, 2025, 20:51 Stephen Hemminger, 
wrote:

> On Thu, 22 May 2025 08:15:14 +0530
> farooq basha  wrote:
>
> > Thanks Stephen for addressing my queries , and it is helpful.
> >
> > One more follow up question on the same ,   Can DPDK HQOS be
> customized
> > based on Use case ?
> >
> > For example: Hqos config for one of the use cases ,  *One Port , One
> > Subport , 16 Pipes & Each Pipe with only one TC*.
> >  16 pipe config was allowed but changing the
> 13TCs
> > to 1TC is not allowed per Pipe.
> >
> > Can I still use 13 TCs but use the QueueSize as 0, Can that impact
> > performance ?
> >
>
> No. Current qos sched code has hard coded assumptions on number of pipes
> etc.
> I think it is modeled after some carrier standard and is not generally
> that useful.
>


Re: [EXTERNAL] Re: |FAILURE| pw153190 [PATCH V3] Add new tracepoint function for type time_t

2025-05-28 Thread Changqing Li

Hi,

Just a kind reminder: with the help of Jerin's comment, I sent
a V4 version yesterday.


Here is the patch link:

https://patches.dpdk.org/project/dpdk/patch/20250527120404.2027529-1-changqing...@windriver.com/

Regards

Sandy

On 5/28/25 16:06, Sunil Kumar Kori wrote:


Please mark this version as superseded as new version is available.

Thanks
Sunil Kumar Kori


-Original Message-
From: Jerin Jacob
Sent: Thursday, May 8, 2025 7:24 PM
To: Changqing Li; David Marchand
; Sunil Kumar Kori;
tho...@monjalon.net; Stephen Hemminger
Cc:dev@dpdk.org
Subject: RE: [EXTERNAL] Re: |FAILURE| pw153190 [PATCH V3] Add new tracepoint
function for type time_t


 I'm new to this project, and have no clue about the failure, could
experts at this project provide

 some help about the following failure?

 + sudo babeltrace
/home/runner/work/dpdk/dpdk/build/app/test/suites/rte-2025-04-30-AM-02
-
25-21

 Error: at line 2819: token "time_t": syntax error, unexpected
IDENTIFIER
 Error: Error creating AST


 I think this time_t type you added is not described in CTF.
 Have a look at 2114521cff91 ("trace: fix size_t field emitter").

 Copying the trace framework maintainers who will have a better idea.



time_t is not defined in CTF spec. See
https://diamon.org/ctf/v1.8.3/#specification.
You can create new type using typealias with integer or structure as backend
type. See meta_data_type_emit() in DPDK code base where we created new
types.
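
Concretely, a CTF 1.8 metadata typealias that models time_t as a plain
signed integer could look like the sketch below (the size and alignment
are assumptions and must match the target ABI; in DPDK this would be
emitted by the code around meta_data_type_emit(), not hand-written):

/* CTF metadata sketch: alias time_t to a signed 64-bit integer. */
typealias integer {
	size = 64;
	align = 8;
	signed = true;
} := time_t;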


RE: [PATCH v2 1/2] ethdev: remove unnecessary type conversion

2025-05-28 Thread Konstantin Ananyev



> >
> > > From: Stephen Hemminger [mailto:step...@networkplumber.org]
> > > Sent: Tuesday, 27 May 2025 17.07
> > >
> > > On Mon, 12 May 2025 20:37:19 +0530
> > >  wrote:
> > >
> > > >  /**@{@name Rx hardware descriptor states
> > > > diff --git a/lib/ethdev/rte_ethdev_core.h
> > > b/lib/ethdev/rte_ethdev_core.h
> > > > index e55fb42996..4ffae4921a 100644
> > > > --- a/lib/ethdev/rte_ethdev_core.h
> > > > +++ b/lib/ethdev/rte_ethdev_core.h
> > > > @@ -45,7 +45,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
> > > >
> > > >
> > > >  /** @internal Get number of used descriptors on a receive queue. */
> > > > -typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
> > > > +typedef int (*eth_rx_queue_count_t)(void *rxq);
> > > >
> > > >  /** @internal Check the status of a Rx descriptor */
> > > >  typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t
> > > offset);
> > >
> > >
> > > This gets reported as ABI breakage. The change will have to wait until
> > > next LTS (25.11)
> >
> > The return type was weird (wrong) to begin with.
> > When used, it gets cast to int:
> > https://elixir.bootlin.com/dpdk/v25.03/source/lib/ethdev/rte_ethdev.h#L6404
> 
> Personally, I don't see anything strange here.
> devops rx_queue_count() supposed to return uint, because it should never 
> failed for
> valid queue.
> While rte_eth_rx_queue_count() itself can fail - wrong port/queue id, etc.

BTW, rx_queue_setup() accepts only uint16_t for the number of rx descriptors:
int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);

shouldn't dev->rx_queue_count() then also return uint16_t (for consistency)?

> >
> > You are right that it formally changes the ABI, and we should go through 
> > the LTS motions.
> > But, for this change, I'd favor an exception.
> 
> Again, from my opinion, there is nothing that urgent/important why such 
> changes (if needed)
> can't wait till next LTS.
> For now, we can simply do type conversion explicitly at 
> rte_eth_rx_queue_count().
> 
> > PS: As a consequence of this change, a patch to update the return type of 
> > the callback in all the ethdev drivers should be provided.
> >
> > >
> > >
> > >   [C] 'rte_eth_fp_ops rte_eth_fp_ops[32]' was changed at
> > > rte_ethdev.c:47:1:
> > > type of variable changed:
> > >   array element type 'struct rte_eth_fp_ops' changed:
> > > type size hasn't changed
> > > 1 data member change:
> > >   type of 'eth_rx_queue_count_t rx_queue_count' changed:
> > > underlying type 'uint32_t (*)(void*)' changed:
> > >   in pointed to type 'function type uint32_t (void*)':
> > > return type changed:
> > >   entity changed from 'typedef uint32_t' to 'int'
> > >   type size hasn't changed
> > >   type size hasn't changed


RE: [PATCH v2 1/2] ethdev: remove unnecessary type conversion

2025-05-28 Thread Konstantin Ananyev




> 
> > From: Stephen Hemminger [mailto:step...@networkplumber.org]
> > Sent: Tuesday, 27 May 2025 17.07
> >
> > On Mon, 12 May 2025 20:37:19 +0530
> >  wrote:
> >
> > >  /**@{@name Rx hardware descriptor states
> > > diff --git a/lib/ethdev/rte_ethdev_core.h
> > b/lib/ethdev/rte_ethdev_core.h
> > > index e55fb42996..4ffae4921a 100644
> > > --- a/lib/ethdev/rte_ethdev_core.h
> > > +++ b/lib/ethdev/rte_ethdev_core.h
> > > @@ -45,7 +45,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
> > >
> > >
> > >  /** @internal Get number of used descriptors on a receive queue. */
> > > -typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
> > > +typedef int (*eth_rx_queue_count_t)(void *rxq);
> > >
> > >  /** @internal Check the status of a Rx descriptor */
> > >  typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t
> > offset);
> >
> >
> > This gets reported as ABI breakage. The change will have to wait until
> > next LTS (25.11)
> 
> The return type was weird (wrong) to begin with.
> When used, it gets cast to int:
> https://elixir.bootlin.com/dpdk/v25.03/source/lib/ethdev/rte_ethdev.h#L6404

Personally, I don't see anything strange here.
The dev ops rx_queue_count() is supposed to return uint, because it should
never fail for a valid queue.
While rte_eth_rx_queue_count() itself can fail - wrong port/queue id, etc. 

> 
> You are right that it formally changes the ABI, and we should go through the 
> LTS motions.
> But, for this change, I'd favor an exception.

Again, from my opinion, there is nothing that urgent/important why such changes 
(if needed)
can't wait till next LTS.
For now, we can simply do type conversion explicitly at 
rte_eth_rx_queue_count().
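
The interim fix suggested above is mechanical: keep the uint32_t callback
typedef (and thus the ABI) unchanged, and do the conversion in the public
wrapper. A simplified sketch, not the actual ethdev source:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);

static uint32_t my_rx_queue_count(void *rxq)
{
	(void)rxq;
	return 42; /* pretend 42 descriptors are in use */
}

/* The public API returns int; converting explicitly here keeps the
 * ABI-visible callback typedef untouched until the next LTS. */
static int
rx_queue_count_wrapper(eth_rx_queue_count_t cb, void *rxq)
{
	return (int)cb(rxq);
}

int main(void)
{
	printf("%d\n", rx_queue_count_wrapper(my_rx_queue_count, NULL));
	return 0;
}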

> PS: As a consequence of this change, a patch to update the return type of the 
> callback in all the ethdev drivers should be provided.
> 
> >
> >
> >   [C] 'rte_eth_fp_ops rte_eth_fp_ops[32]' was changed at
> > rte_ethdev.c:47:1:
> > type of variable changed:
> >   array element type 'struct rte_eth_fp_ops' changed:
> > type size hasn't changed
> > 1 data member change:
> >   type of 'eth_rx_queue_count_t rx_queue_count' changed:
> > underlying type 'uint32_t (*)(void*)' changed:
> >   in pointed to type 'function type uint32_t (void*)':
> > return type changed:
> >   entity changed from 'typedef uint32_t' to 'int'
> >   type size hasn't changed
> >   type size hasn't changed


RE: [EXTERNAL] Re: |FAILURE| pw153190 [PATCH V3] Add new tracepoint function for type time_t

2025-05-28 Thread Sunil Kumar Kori
Please mark this version as superseded as new version is available. 

Thanks
Sunil Kumar Kori

> -Original Message-
> From: Jerin Jacob 
> Sent: Thursday, May 8, 2025 7:24 PM
> To: Changqing Li ; David Marchand
> ; Sunil Kumar Kori ;
> tho...@monjalon.net; Stephen Hemminger 
> Cc: dev@dpdk.org
> Subject: RE: [EXTERNAL] Re: |FAILURE| pw153190 [PATCH V3] Add new tracepoint
> function for type time_t
> 
> > I'm new to this project, and have no clue about the failure, 
> > could
> > experts at this project provide
> >
> > some help about the following failure?
> >
> > + sudo babeltrace
> > /home/runner/work/dpdk/dpdk/build/app/test/suites/rte-2025-04-30-AM-02
> > -
> > 25-21
> >
> > Error: at line 2819: token "time_t": syntax error, unexpected
> > IDENTIFIER
> > Error: Error creating AST
> >
> >
> > I think this time_t type you added is not described in CTF.
> > Have a look at 2114521cff91 ("trace: fix size_t field emitter").
> >
> > Copying the trace framework maintainers who will have a better idea.
> 
> 
> 
> time_t is not defined in CTF spec. See
> https://diamon.org/ctf/v1.8.3/#specification.
> You can create new type using typealias with integer or structure as backend
> type. See meta_data_type_emit() in DPDK code base where we created new
> types.
> 



Re: [PATCH v2 1/1] net/cnxk: mark invalid MAC address if it doesn't exist

2025-05-28 Thread Nithin Dabilpuram
Acked-by: Nithin Dabilpuram

On Wed, May 28, 2025 at 11:51 AM  wrote:
>
> From: Sunil Kumar Kori 
>
> When a user requests to configure a device which is already in
> configured state, the device first gets reset to default and
> is then reconfigured with the latest parameters.
>
> While resetting the device, the MAC address table is left stale,
> which causes stale entry updates later on.
>
> Hence, mark the MAC address entries as invalid to avoid any error
> due to further operations on the MAC table.
>
> Fixes: b75e0aca84b0 ("net/cnxk: add device configuration operation")
> Cc: sta...@dpdk.org
>
> Signed-off-by: Sunil Kumar Kori 
> Signed-off-by: Nithin Dabilpuram 
> ---
> v1..v2:
> - Instead of restoring the MAC addresses, reset to them as invalid
>   during restore process.
>
>  drivers/net/cnxk/cnxk_ethdev.c | 26 --
>  drivers/net/cnxk/cnxk_ethdev.h |  3 ++
>  drivers/net/cnxk/cnxk_ethdev_ops.c | 57 --
>  3 files changed, 81 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
> index b9a0b37425..64d7937be6 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.c
> +++ b/drivers/net/cnxk/cnxk_ethdev.c
> @@ -1230,8 +1230,8 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
> uint16_t nb_rxq, nb_txq, nb_cq;
> struct rte_ether_addr *ea;
> uint64_t rx_cfg;
> +   int rc, i;
> void *qs;
> -   int rc;
>
> rc = -EINVAL;
>
> @@ -1286,6 +1286,12 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
> roc_nix_tm_fini(nix);
> nix_rxchan_cfg_disable(dev);
> roc_nix_lf_free(nix);
> +
> +   /* Reset to invalid */
> +   for (i = 0; i < dev->max_mac_entries; i++)
> +   dev->dmac_idx_map[i] = CNXK_NIX_DMAC_IDX_INVALID;
> +
> +   dev->dmac_filter_count = 1;
> }
>
> dev->rx_offloads = rxmode->offloads;
> @@ -1891,7 +1897,7 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
> struct rte_security_ctx *sec_ctx;
> struct roc_nix *nix = &dev->nix;
> struct rte_pci_device *pci_dev;
> -   int rc, max_entries;
> +   int rc, max_entries, i;
>
> eth_dev->dev_ops = &cnxk_eth_dev_ops;
> eth_dev->rx_queue_count = cnxk_nix_rx_queue_count;
> @@ -1993,6 +1999,17 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
> goto free_mac_addrs;
> }
>
> +   dev->dmac_addrs = rte_malloc("dmac_addrs", max_entries * 
> RTE_ETHER_ADDR_LEN, 0);
> +   if (dev->dmac_addrs == NULL) {
> +   plt_err("Failed to allocate memory for dmac addresses");
> +   rc = -ENOMEM;
> +   goto free_mac_addrs;
> +   }
> +
> +   /* Reset to invalid */
> +   for (i = 0; i < max_entries; i++)
> +   dev->dmac_idx_map[i] = CNXK_NIX_DMAC_IDX_INVALID;
> +
> dev->max_mac_entries = max_entries;
> dev->dmac_filter_count = 1;
>
> @@ -2051,6 +2068,8 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
>
>  free_mac_addrs:
> rte_free(eth_dev->data->mac_addrs);
> +   rte_free(dev->dmac_addrs);
> +   dev->dmac_addrs = NULL;
> rte_free(dev->dmac_idx_map);
>  dev_fini:
> roc_nix_dev_fini(nix);
> @@ -2182,6 +2201,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool 
> reset)
> rte_free(dev->dmac_idx_map);
> dev->dmac_idx_map = NULL;
>
> +   rte_free(dev->dmac_addrs);
> +   dev->dmac_addrs = NULL;
> +
> rte_free(eth_dev->data->mac_addrs);
> eth_dev->data->mac_addrs = NULL;
>
> diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
> index d62cc1ec20..1ced6dd65e 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.h
> +++ b/drivers/net/cnxk/cnxk_ethdev.h
> @@ -106,6 +106,8 @@
>  /* Fastpath lookup */
>  #define CNXK_NIX_FASTPATH_LOOKUP_MEM "cnxk_nix_fastpath_lookup_mem"
>
> +#define CNXK_NIX_DMAC_IDX_INVALID -1
> +
>  struct cnxk_fc_cfg {
> enum rte_eth_fc_mode mode;
> uint8_t rx_pause;
> @@ -342,6 +344,7 @@ struct cnxk_eth_dev {
> uint8_t max_mac_entries;
> bool dmac_filter_enable;
> int *dmac_idx_map;
> +   struct rte_ether_addr *dmac_addrs;
>
> uint16_t flags;
> uint8_t ptype_disable;
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
> b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index 9970c5ff5c..abef1a9eaf 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -473,11 +473,20 @@ cnxk_nix_mac_addr_add(struct rte_eth_dev *eth_dev, 
> struct rte_ether_addr *addr,
>  {
> struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
> struct roc_nix *nix = &dev->nix;
> +   struct rte_ether_addr *current;
> int rc;
>
> -   PLT_SET_USED(index);
> PLT_SET_USED(pool);
>
> +   if (dev->dmac_idx_map[index] != CNXK_NIX_DMAC_IDX_INVALID) {
> +   current = &dev->dmac_addrs[index];

[PATCH v1 0/1] compress/zsda: code cleanup

2025-05-28 Thread Hanxiao Li
v1:
- submit patch


Hanxiao Li (1):
  compress/zsda: compress code cleanup

 drivers/common/zsda/zsda_device.c |  3 +-
 drivers/common/zsda/zsda_device.h |  6 +--
 drivers/common/zsda/zsda_qp.h | 48 +-
 drivers/common/zsda/zsda_qp_common.h  | 43 +++-
 drivers/compress/zsda/zsda_comp.c |  2 +-
 drivers/compress/zsda/zsda_comp.h |  2 +-
 drivers/compress/zsda/zsda_comp_pmd.c | 73 ++-
 drivers/compress/zsda/zsda_comp_pmd.h | 12 +++--
 8 files changed, 96 insertions(+), 93 deletions(-)

--
2.27.0

[PATCH v1 1/1] compress/zsda: code cleanup

2025-05-28 Thread Hanxiao Li
zsda compress code cleanup.

Signed-off-by: Hanxiao Li 
---
 drivers/common/zsda/zsda_device.c |  3 +-
 drivers/common/zsda/zsda_device.h |  6 +--
 drivers/common/zsda/zsda_qp.h | 48 +-
 drivers/common/zsda/zsda_qp_common.h  | 43 +++-
 drivers/compress/zsda/zsda_comp.c |  2 +-
 drivers/compress/zsda/zsda_comp.h |  2 +-
 drivers/compress/zsda/zsda_comp_pmd.c | 73 ++-
 drivers/compress/zsda/zsda_comp_pmd.h | 12 +++--
 8 files changed, 96 insertions(+), 93 deletions(-)

diff --git a/drivers/common/zsda/zsda_device.c 
b/drivers/common/zsda/zsda_device.c
index 0d1e772fe4..72f017c699 100644
--- a/drivers/common/zsda/zsda_device.c
+++ b/drivers/common/zsda/zsda_device.c
@@ -213,7 +213,8 @@ static struct rte_pci_driver rte_zsda_pmd = {
.id_table = pci_id_zsda_map,
.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
.probe = zsda_pci_probe,
-   .remove = zsda_pci_remove };
+   .remove = zsda_pci_remove
+};
 
 RTE_PMD_REGISTER_PCI(ZSDA_PCI_NAME, rte_zsda_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(ZSDA_PCI_NAME, pci_id_zsda_map);
diff --git a/drivers/common/zsda/zsda_device.h 
b/drivers/common/zsda/zsda_device.h
index 036e157b8d..bb916f1e63 100644
--- a/drivers/common/zsda/zsda_device.h
+++ b/drivers/common/zsda/zsda_device.h
@@ -9,8 +9,8 @@
 #include "zsda_comp_pmd.h"
 #include "zsda_crypto_pmd.h"
 
-#define MAX_QPS_ON_FUNCTION128
-#define ZSDA_DEV_NAME_MAX_LEN  64
+#define MAX_QPS_ON_FUNCTION128
+#define ZSDA_DEV_NAME_MAX_LEN  64
 
 struct zsda_device_info {
const struct rte_memzone *mz;
@@ -45,8 +45,8 @@ struct zsda_qp_hw {
struct zsda_qp_hw_data data[MAX_QPS_ON_FUNCTION];
 };
 
+/* Data used by all services */
 struct zsda_pci_device {
-   /* Data used by all services */
char name[ZSDA_DEV_NAME_MAX_LEN];
/**< Name of zsda pci device */
uint8_t zsda_dev_id;
diff --git a/drivers/common/zsda/zsda_qp.h b/drivers/common/zsda/zsda_qp.h
index 486474ee70..4dbefe7bbd 100644
--- a/drivers/common/zsda/zsda_qp.h
+++ b/drivers/common/zsda/zsda_qp.h
@@ -9,43 +9,43 @@
 
 #include "zsda_device.h"
 
-#define ZSDA_ADMIN_Q_START 0x100
-#define ZSDA_ADMIN_Q_STOP  0x100
+#define ZSDA_ADMIN_Q_START 0x100
+#define ZSDA_ADMIN_Q_STOP  0x100
 #define ZSDA_ADMIN_Q_STOP_RESP 0x104
-#define ZSDA_ADMIN_Q_CLR   0x108
+#define ZSDA_ADMIN_Q_CLR   0x108
 #define ZSDA_ADMIN_Q_CLR_RESP  0x10C
 
-#define ZSDA_IO_Q_START0x200
-#define ZSDA_IO_Q_STOP 0x200
-#define ZSDA_IO_Q_STOP_RESP0x400
-#define ZSDA_IO_Q_CLR  0x600
-#define ZSDA_IO_Q_CLR_RESP 0x800
+#define ZSDA_IO_Q_START0x200
+#define ZSDA_IO_Q_STOP 0x200
+#define ZSDA_IO_Q_STOP_RESP0x400
+#define ZSDA_IO_Q_CLR  0x600
+#define ZSDA_IO_Q_CLR_RESP 0x800
 
-#define ZSDA_ADMIN_WQ  0x40
-#define ZSDA_ADMIN_WQ_BASE70x5C
-#define ZSDA_ADMIN_WQ_CRC  0x5C
+#define ZSDA_ADMIN_WQ  0x40
+#define ZSDA_ADMIN_WQ_BASE70x5C
+#define ZSDA_ADMIN_WQ_CRC  0x5C
 #define ZSDA_ADMIN_WQ_VERSION  0x5D
-#define ZSDA_ADMIN_WQ_FLAG 0x5E
-#define ZSDA_ADMIN_CQ  0x60
-#define ZSDA_ADMIN_CQ_BASE70x7C
-#define ZSDA_ADMIN_CQ_CRC  0x7C
+#define ZSDA_ADMIN_WQ_FLAG 0x5E
+#define ZSDA_ADMIN_CQ  0x60
+#define ZSDA_ADMIN_CQ_BASE70x7C
+#define ZSDA_ADMIN_CQ_CRC  0x7C
 #define ZSDA_ADMIN_CQ_VERSION  0x7D
-#define ZSDA_ADMIN_CQ_FLAG 0x7E
-#define ZSDA_ADMIN_WQ_TAIL 0x80
-#define ZSDA_ADMIN_CQ_HEAD 0x84
+#define ZSDA_ADMIN_CQ_FLAG 0x7E
+#define ZSDA_ADMIN_WQ_TAIL 0x80
+#define ZSDA_ADMIN_CQ_HEAD 0x84
 
 #define ZSDA_Q_START   0x1
-#define ZSDA_Q_STOP0x0
+#define ZSDA_Q_STOP0x0
 #define ZSDA_CLEAR_VALID   0x1
 #define ZSDA_CLEAR_INVALID 0x0
 #define ZSDA_RESP_VALID0x1
 #define ZSDA_RESP_INVALID  0x0
 
-#define ADMIN_BUF_DATA_LEN 0x1C
-#define ADMIN_BUF_TOTAL_LEN0x20
+#define ADMIN_BUF_DATA_LEN 0x1C
+#define ADMIN_BUF_TOTAL_LEN0x20
 
 #define IO_DB_INITIAL_CONFIG   0x1C00
-#define SET_CYCLE  0xff
+#define SET_CYCLE  0xff
 #define SET_HEAD_INTI  0x0
 
 #define ZSDA_TIME_SLEEP_US 100
@@ -55,8 +55,8 @@
 #define WQ_CSR_UBASE   0x1004
 #define CQ_CSR_LBASE   0x1400
 #define CQ_CSR_UBASE   0x1404
-#define WQ_TAIL0x1800
-#define CQ_HEAD0x1804
+#define WQ_TAIL0x1800
+#define CQ_HEAD0x1804
 
 /* CSR write macro */
 #define ZSDA_CSR_WR(csrAddr, csrOffset, val)   
\
diff --git a/drivers/common/zsda/zsda_qp_common.h 
b/drivers/common/zsda/zsda_qp_common.h
index e291cb1d60..50cfa9a550 100644
--- a/drivers/

RE: [PATCH] test/crypto: fix RSA decrypt op validation

2025-05-28 Thread Gowrishankar Muthukrishnan
Recheck unit test failure for openssl 1.1.1 in some distros.

Recheck-request: iol-unit-amd64-testing
--
Gowrishankar

> 
> Following the RSA encrypt op, the same plaintext buffer is used as the
> output buffer for the decrypt op, hence comparing the plaintext buffer
> against the same buffer pointer in the crypto op always succeeds
> irrespective of whether the decrypt op succeeds or not.
> This patch fixes this issue with a local buffer for the crypto op.
> 
> Fixes: 5ae36995f10 ("test/crypto: move RSA enqueue/dequeue into
> functions")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Gowrishankar Muthukrishnan 
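
The bug being fixed is a classic self-comparison: when the decrypt op
writes its result into the same buffer that holds the expected
plaintext, the subsequent memcmp() trivially passes. A stand-alone
illustration of the before/after pattern (fake_decrypt() is a stand-in,
not the actual test code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for an RSA decrypt op writing into the output buffer. */
static void
fake_decrypt(const uint8_t *in, uint8_t *out, size_t len)
{
	memcpy(out, in, len); /* pretend decryption */
}

int main(void)
{
	uint8_t plaintext[16] = "hello, rsa test";
	uint8_t ciphertext[16];
	uint8_t out[16];

	memcpy(ciphertext, plaintext, sizeof(plaintext));

	/* Buggy pattern: output aliases the expected data, so this
	 * comparison can never fail, even if decryption is broken. */
	fake_decrypt(ciphertext, plaintext, sizeof(plaintext));
	printf("aliased cmp: %d\n",
	       memcmp(plaintext, plaintext, sizeof(plaintext)));

	/* Fixed pattern: decrypt into a local buffer, then compare. */
	fake_decrypt(ciphertext, out, sizeof(out));
	printf("local-buffer cmp: %d\n",
	       memcmp(plaintext, out, sizeof(out)));
	return 0;
}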



RE: [PATCH v2 1/2] ethdev: remove unnecessary type conversion

2025-05-28 Thread Morten Brørup
> From: Konstantin Ananyev [mailto:konstantin.anan...@huawei.com]
> Sent: Wednesday, 28 May 2025 10.24
> 
> > >
> > > > From: Stephen Hemminger [mailto:step...@networkplumber.org]
> > > > Sent: Tuesday, 27 May 2025 17.07
> > > >
> > > > On Mon, 12 May 2025 20:37:19 +0530
> > > >  wrote:
> > > >
> > > > >  /**@{@name Rx hardware descriptor states
> > > > > diff --git a/lib/ethdev/rte_ethdev_core.h
> > > > b/lib/ethdev/rte_ethdev_core.h
> > > > > index e55fb42996..4ffae4921a 100644
> > > > > --- a/lib/ethdev/rte_ethdev_core.h
> > > > > +++ b/lib/ethdev/rte_ethdev_core.h
> > > > > @@ -45,7 +45,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
> > > > >
> > > > >
> > > > >  /** @internal Get number of used descriptors on a receive
> queue. */
> > > > > -typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
> > > > > +typedef int (*eth_rx_queue_count_t)(void *rxq);
> > > > >
> > > > >  /** @internal Check the status of a Rx descriptor */
> > > > >  typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t
> > > > offset);
> > > >
> > > >
> > > > This gets reported as ABI breakage. The change will have to wait
> until
> > > > next LTS (25.11)
> > >
> > > The return type was weird (wrong) to begin with.
> > > When used, it gets cast to int:
> > >
> https://elixir.bootlin.com/dpdk/v25.03/source/lib/ethdev/rte_ethdev.h#L
> 6404
> >
> > Personally, I don't see anything strange here.
> > devops rx_queue_count() supposed to return uint, because it should
> never failed for
> > valid queue.

The main thing wrong is inconsistency with its sibling API for TX queue count:
/** @internal Get number of used descriptors on a receive queue. */
typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
/** @internal Get number of used descriptors on a transmit queue. */
typedef int (*eth_tx_queue_count_t)(void *txq);

> > While rte_eth_rx_queue_count() itself can fail - wrong port/queue id,
> etc.
> 
> BTW, rx_queue_setup() accepts only uint16_t for number of rx
> descritoirs:
> int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> uint16_t nb_rx_desc, unsigned int socket_id,
> const struct rte_eth_rxconf *rx_conf,
> struct rte_mempool *mb_pool);
> 
> shouldn't dev->rx_queue_count() then also return uitn16_t (for
> consistency)?

If neither the RX or TX queue count callbacks can fail, then yes, both APIs 
could be updated to return uint16_t.
But it would be more future proof to allow these callbacks to fail, even on a 
valid queue.
The public APIs rte_eth_rx/tx_queue_count() can fail, so passing on failures 
from the callbacks would not be a change of the public API. (Assuming no new 
error codes are allowed.)

And the "future" already arrived:
The return type must be int (or have the same size as int), for the performance 
patch [1] replacing the ops->callback==NULL check with dummy callbacks 
returning -ENOTSUP.

[1]: https://inbox.dpdk.org/dev/20250512150732.65743-2-sk...@marvell.com/

> 
> > >
> > > You are right that it formally changes the ABI, and we should go
> through the LTS motions.
> > > But, for this change, I'd favor an exception.
> >
> > Again, from my opinion, there is nothing that urgent/important why
> such changes (if needed)
> > can't wait till next LTS.
> > For now, we can simply do type conversion explicitly at
> rte_eth_rx_queue_count().

OK. No objections from me. Just trying to accelerate some cleanup work.

> >
> > > PS: As a consequence of this change, a patch to update the return
> type of the callback in all the ethdev drivers should be provided.
> > >
> > > >
> > > >
> > > >   [C] 'rte_eth_fp_ops rte_eth_fp_ops[32]' was changed at
> > > > rte_ethdev.c:47:1:
> > > > type of variable changed:
> > > >   array element type 'struct rte_eth_fp_ops' changed:
> > > > type size hasn't changed
> > > > 1 data member change:
> > > >   type of 'eth_rx_queue_count_t rx_queue_count' changed:
> > > > underlying type 'uint32_t (*)(void*)' changed:
> > > >   in pointed to type 'function type uint32_t
> (void*)':
> > > > return type changed:
> > > >   entity changed from 'typedef uint32_t' to 'int'
> > > >   type size hasn't changed
> > > >   type size hasn't changed


[PATCH 1/7] common/cnxk: fix CQ tail drop feature

2025-05-28 Thread Rahul Bhansali
From: Nithin Dabilpuram 

CQ tail drop feature is currently supposed to be enabled
when inline IPsec is disabled. But since XQE drop is not
enabled, CQ tail drop is implicitly disabled. Fix the same.

Fixes: c8c967e11717 ("common/cnxk: support enabling AURA tail drop for RQ")

Signed-off-by: Nithin Dabilpuram 
---
 drivers/common/cnxk/roc_nix.h   |  2 ++
 drivers/common/cnxk/roc_nix_queue.c | 11 +--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 80392e7e1b..1e543d8f11 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -355,6 +355,8 @@ struct roc_nix_rq {
bool lpb_drop_ena;
/* SPB aura drop enable */
bool spb_drop_ena;
+   /* XQE drop enable */
+   bool xqe_drop_ena;
/* End of Input parameters */
struct roc_nix *roc_nix;
uint64_t meta_aura_handle;
diff --git a/drivers/common/cnxk/roc_nix_queue.c 
b/drivers/common/cnxk/roc_nix_queue.c
index e852211ba4..39bd051c94 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -530,7 +530,7 @@ nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, 
uint16_t qints,
aq->rq.rq_int_ena = 0;
/* Many to one reduction */
aq->rq.qint_idx = rq->qid % qints;
-   aq->rq.xqe_drop_ena = 1;
+   aq->rq.xqe_drop_ena = rq->xqe_drop_ena;
 
/* If RED enabled, then fill enable for all cases */
if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
@@ -613,6 +613,7 @@ nix_rq_cn10k_cfg(struct dev *dev, struct roc_nix_rq *rq, 
uint16_t qints, bool cf
aq->rq.wqe_skip = rq->wqe_skip;
aq->rq.wqe_caching = 1;
 
+   aq->rq.xqe_drop_ena = 0;
aq->rq.good_utag = rq->tag_mask >> 24;
aq->rq.bad_utag = rq->tag_mask >> 24;
aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0);
@@ -632,6 +633,8 @@ nix_rq_cn10k_cfg(struct dev *dev, struct roc_nix_rq *rq, 
uint16_t qints, bool cf
aq->rq.bad_utag = rq->tag_mask >> 24;
aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0);
aq->rq.cq = rq->cqid;
+   if (rq->xqe_drop_ena)
+   aq->rq.xqe_drop_ena = 1;
}
 
if (rq->ipsech_ena) {
@@ -680,7 +683,6 @@ nix_rq_cn10k_cfg(struct dev *dev, struct roc_nix_rq *rq, 
uint16_t qints, bool cf
aq->rq.rq_int_ena = 0;
/* Many to one reduction */
aq->rq.qint_idx = rq->qid % qints;
-   aq->rq.xqe_drop_ena = 0;
aq->rq.lpb_drop_ena = rq->lpb_drop_ena;
aq->rq.spb_drop_ena = rq->spb_drop_ena;
 
@@ -725,6 +727,7 @@ nix_rq_cn10k_cfg(struct dev *dev, struct roc_nix_rq *rq, 
uint16_t qints, bool cf
aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag;
aq->rq_mask.ltag = ~aq->rq_mask.ltag;
aq->rq_mask.cq = ~aq->rq_mask.cq;
+   aq->rq_mask.xqe_drop_ena = ~aq->rq_mask.xqe_drop_ena;
}
 
if (rq->ipsech_ena)
@@ -950,6 +953,10 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq 
*rq, bool ena)
rq->roc_nix = roc_nix;
rq->tc = ROC_NIX_PFC_CLASS_INVALID;
 
+   /* Enable XQE/CQ drop on cn10k to count pkt drops only when inline is 
disabled */
+   if (roc_model_is_cn10k() && !roc_nix_inl_inb_is_enabled(roc_nix))
+   rq->xqe_drop_ena = true;
+
if (is_cn9k)
rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, false, ena);
else if (roc_model_is_cn10k())
-- 
2.25.1



[PATCH 5/7] common/cnxk: disable xqe drop config in RQ context

2025-05-28 Thread Rahul Bhansali
Disable the XQE drop configuration in the RQ context when the
dis_xqe_drop parameter is set.

Signed-off-by: Rahul Bhansali 
---
 drivers/common/cnxk/roc_nix.h   | 1 +
 drivers/common/cnxk/roc_nix_queue.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 75b414a32a..a9cdc42617 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -487,6 +487,7 @@ struct roc_nix {
uint16_t inb_cfg_param1;
uint16_t inb_cfg_param2;
bool force_tail_drop;
+   bool dis_xqe_drop;
/* End of input parameters */
/* LMT line base for "Per Core Tx LMT line" mode*/
uintptr_t lmt_base;
diff --git a/drivers/common/cnxk/roc_nix_queue.c 
b/drivers/common/cnxk/roc_nix_queue.c
index 84a736e3bb..e19a6877e6 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -956,7 +956,7 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq 
*rq, bool ena)
/* Enable XQE/CQ drop on cn10k to count pkt drops only when inline is 
disabled */
if (roc_model_is_cn10k() &&
(roc_nix->force_tail_drop || !roc_nix_inl_inb_is_enabled(roc_nix)))
-   rq->xqe_drop_ena = true;
+   rq->xqe_drop_ena = roc_nix->dis_xqe_drop ? false : true;
 
if (is_cn9k)
rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, false, ena);
-- 
2.25.1



[PATCH 6/7] net/cnxk: devarg option to disable xqe drop

2025-05-28 Thread Rahul Bhansali
Provide a devarg option to disable XQE drop in the RQ context.
It can be set as disable_xqe_drop=1 for a NIX device.

e.g.: 0002:02:00.0,disable_xqe_drop=1

Signed-off-by: Rahul Bhansali 
---
 drivers/net/cnxk/cnxk_ethdev_devargs.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c 
b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 7013849ad3..e42344a2ec 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -282,6 +282,7 @@ parse_val_u16(const char *key, const char *value, void 
*extra_args)
 #define CNXK_CUSTOM_META_AURA_DIS "custom_meta_aura_dis"
 #define CNXK_CUSTOM_INB_SA   "custom_inb_sa"
 #define CNXK_FORCE_TAIL_DROP "force_tail_drop"
+#define CNXK_DIS_XQE_DROP"disable_xqe_drop"
 
 int
 cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev 
*dev)
@@ -308,6 +309,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, 
struct cnxk_eth_dev *dev)
uint16_t custom_sa_act = 0;
uint16_t custom_inb_sa = 0;
struct rte_kvargs *kvlist;
+   uint8_t dis_xqe_drop = 0;
uint32_t meta_buf_sz = 0;
uint16_t lock_rx_ctx = 0;
uint16_t rx_inj_ena = 0;
@@ -367,6 +369,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, 
struct cnxk_eth_dev *dev)
   &custom_meta_aura_dis);
rte_kvargs_process(kvlist, CNXK_CUSTOM_INB_SA, &parse_flag, 
&custom_inb_sa);
rte_kvargs_process(kvlist, CNXK_FORCE_TAIL_DROP, &parse_flag, 
&force_tail_drop);
+   rte_kvargs_process(kvlist, CNXK_DIS_XQE_DROP, &parse_flag, 
&dis_xqe_drop);
rte_kvargs_free(kvlist);
 
 null_devargs:
@@ -409,6 +412,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, 
struct cnxk_eth_dev *dev)
if (roc_feature_nix_has_rx_inject())
dev->nix.rx_inj_ena = rx_inj_ena;
dev->nix.force_tail_drop = force_tail_drop;
+   dev->nix.dis_xqe_drop = !!dis_xqe_drop;
return 0;
 exit:
return -EINVAL;
@@ -434,4 +438,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_cnxk,
  CNXK_FLOW_AGING_POLL_FREQ "=<10-65535>"
  CNXK_NIX_RX_INJ_ENABLE "=1"
  CNXK_CUSTOM_META_AURA_DIS "=1"
- CNXK_FORCE_TAIL_DROP "=1");
+ CNXK_FORCE_TAIL_DROP "=1"
+ CNXK_DIS_XQE_DROP "=1");
-- 
2.25.1



[PATCH 2/7] common/cnxk: set CQ drop and backpressure threshold

2025-05-28 Thread Rahul Bhansali
When force_tail_drop is enabled, a different set of CQ drop and
backpressure thresholds is configured to avoid CQ FULL interrupts.
Also, drop thresholds are optimized for security packets.

Signed-off-by: Rahul Bhansali 
---
 drivers/common/cnxk/roc_nix.h   |  4 
 drivers/common/cnxk/roc_nix_fc.c| 10 +
 drivers/common/cnxk/roc_nix_priv.h  | 14 +---
 drivers/common/cnxk/roc_nix_queue.c | 35 ++---
 4 files changed, 48 insertions(+), 15 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 1e543d8f11..75b414a32a 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -189,6 +189,7 @@ struct roc_nix_fc_cfg {
uint32_t rq;
uint16_t tc;
uint16_t cq_drop;
+   uint16_t cq_bp;
bool enable;
} cq_cfg;
 
@@ -196,6 +197,7 @@ struct roc_nix_fc_cfg {
uint32_t rq;
uint16_t tc;
uint16_t cq_drop;
+   uint16_t cq_bp;
uint64_t pool;
uint64_t spb_pool;
uint64_t pool_drop_pct;
@@ -371,6 +373,7 @@ struct roc_nix_cq {
uint8_t stash_thresh;
/* End of Input parameters */
uint16_t drop_thresh;
+   uint16_t bp_thresh;
struct roc_nix *roc_nix;
uintptr_t door;
int64_t *status;
@@ -483,6 +486,7 @@ struct roc_nix {
uint32_t root_sched_weight;
uint16_t inb_cfg_param1;
uint16_t inb_cfg_param2;
+   bool force_tail_drop;
/* End of input parameters */
/* LMT line base for "Per Core Tx LMT line" mode*/
uintptr_t lmt_base;
diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index 3e162ede8e..e35c993f96 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -157,7 +157,8 @@ nix_fc_cq_config_get(struct roc_nix *roc_nix, struct 
roc_nix_fc_cfg *fc_cfg)
if (rc)
goto exit;
 
-   fc_cfg->cq_cfg.cq_drop = rsp->cq.bp;
+   fc_cfg->cq_cfg.cq_drop = rsp->cq.drop;
+   fc_cfg->cq_cfg.cq_bp = rsp->cq.bp;
fc_cfg->cq_cfg.enable = rsp->cq.bp_ena;
fc_cfg->type = ROC_NIX_FC_CQ_CFG;
 
@@ -288,7 +289,7 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct 
roc_nix_fc_cfg *fc_cfg)
if (fc_cfg->cq_cfg.enable) {
aq->cq.bpid = nix->bpid[fc_cfg->cq_cfg.tc];
aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
-   aq->cq.bp = fc_cfg->cq_cfg.cq_drop;
+   aq->cq.bp = fc_cfg->cq_cfg.cq_bp;
aq->cq_mask.bp = ~(aq->cq_mask.bp);
}
 
@@ -310,7 +311,7 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct 
roc_nix_fc_cfg *fc_cfg)
if (fc_cfg->cq_cfg.enable) {
aq->cq.bpid = nix->bpid[fc_cfg->cq_cfg.tc];
aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
-   aq->cq.bp = fc_cfg->cq_cfg.cq_drop;
+   aq->cq.bp = fc_cfg->cq_cfg.cq_bp;
aq->cq_mask.bp = ~(aq->cq_mask.bp);
}
 
@@ -332,7 +333,7 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct 
roc_nix_fc_cfg *fc_cfg)
if (fc_cfg->cq_cfg.enable) {
aq->cq.bpid = nix->bpid[fc_cfg->cq_cfg.tc];
aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
-   aq->cq.bp = fc_cfg->cq_cfg.cq_drop;
+   aq->cq.bp = fc_cfg->cq_cfg.cq_bp;
aq->cq_mask.bp = ~(aq->cq_mask.bp);
}
 
@@ -389,6 +390,7 @@ nix_fc_rq_config_set(struct roc_nix *roc_nix, struct 
roc_nix_fc_cfg *fc_cfg)
tmp.cq_cfg.rq = fc_cfg->rq_cfg.rq;
tmp.cq_cfg.tc = fc_cfg->rq_cfg.tc;
tmp.cq_cfg.cq_drop = fc_cfg->rq_cfg.cq_drop;
+   tmp.cq_cfg.cq_bp = fc_cfg->rq_cfg.cq_bp;
tmp.cq_cfg.enable = fc_cfg->rq_cfg.enable;
 
rc = nix_fc_cq_config_set(roc_nix, &tmp);
diff --git a/drivers/common/cnxk/roc_nix_priv.h 
b/drivers/common/cnxk/roc_nix_priv.h
index 09a55e43ce..dc3450a3d4 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -15,10 +15,18 @@
 #define NIX_SQB_PREFETCH ((uint16_t)1)
 
 /* Apply BP/DROP when CQ is 95% full */
-#define NIX_CQ_THRESH_LEVEL(5 * 256 / 100)
-#define NIX_CQ_SEC_THRESH_LEVEL (25 * 256 / 100)
+#define NIX_CQ_THRESH_LEVEL   (5 * 256 / 100)
+#define NIX_CQ_SEC_BP_THRESH_LEVEL (25 * 256 / 100)
+
+/* Applicable when force_tail_drop is enabled */
+#define NIX_CQ_THRESH_LEVEL_REF1   (50 * 256 / 100)
+#define NIX_CQ_SEC_THRESH_LEVEL_REF1   (20 * 256 / 100)
+#define NIX_CQ_BP_THRESH_LEVEL_REF1(60 * 256 / 100)
+#define NIX_CQ_SEC_BP_THRESH_LEVEL_REF1 (50 * 256 / 100)

[PATCH 7/7] doc: updates cnxk doc for new devargs

2025-05-28 Thread Rahul Bhansali
Add details of the below nix devargs:
- force_tail_drop
- disable_xqe_drop

Signed-off-by: Rahul Bhansali 
---
 doc/guides/nics/cnxk.rst | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 35f95dcc0a..7f4ff7b4fb 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -470,6 +470,29 @@ Runtime Config Options
With the above configuration, inline inbound IPsec post-processing
should be done by the application.
 
+- ``Enable force tail drop feature`` (default ``0``)
+
+   Force tail drop can be enabled by specifying ``force_tail_drop`` ``devargs``
+   parameter.
+   This option is for OCTEON CN10K SoC family.
+
+   For example::
+
+  -a 0002:02:00.0,force_tail_drop=1
+
+   With the above configuration, descriptors are internally increased and back
+   pressures are optimized to avoid CQ full situation due to inflight packets.
+
+- ``Disable RQ XQE drop`` (default ``0``)
+
+   Rx XQE drop can be disabled by specifying ``disable_xqe_drop`` ``devargs``
+   parameter.
+   This option is for OCTEON CN10K SoC family.
+
+   For example::
+
+  -a 0002:02:00.0,disable_xqe_drop=1
+
 .. note::
 
Above devarg parameters are configurable per device, user needs to pass the
-- 
2.25.1



[PATCH 3/7] net/cnxk: devarg to set force tail drop

2025-05-28 Thread Rahul Bhansali
A new devarg is added to configure force tail drop.
Also, CQ descriptors are doubled under this option.

To enable this feature, the devarg needs to be passed as
force_tail_drop=1 for the nix device.
e.g.: 0002:02:00.0,force_tail_drop=1
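
For illustration (taken from the cnxk_ethdev.c hunk below), the effective
CQ depth under this devarg becomes:

    nb_desc = 2 * RTE_MAX(nb_desc, (uint32_t)4096);
    /* e.g. 1024 -> 8192, 8192 -> 16384 */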

Signed-off-by: Rahul Bhansali 
---
 drivers/net/cnxk/cnxk_ethdev.c | 4 
 drivers/net/cnxk/cnxk_ethdev_devargs.c | 7 ++-
 drivers/net/cnxk/cnxk_ethdev_ops.c | 2 ++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index b9a0b37425..1ba09c068b 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -708,6 +708,10 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, 
uint16_t qid,
if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
nb_desc = nix_inl_cq_sz_clamp_up(nix, lpb_pool, nb_desc);
 
+   /* Double the CQ descriptors */
+   if (nix->force_tail_drop)
+   nb_desc = 2 * RTE_MAX(nb_desc, (uint32_t)4096);
+
/* Setup ROC CQ */
cq = &dev->cqs[qid];
cq->qid = qid;
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c 
b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index aa2fe7dfe1..7013849ad3 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -281,6 +281,7 @@ parse_val_u16(const char *key, const char *value, void 
*extra_args)
 #define CNXK_NIX_RX_INJ_ENABLE "rx_inj_ena"
 #define CNXK_CUSTOM_META_AURA_DIS "custom_meta_aura_dis"
 #define CNXK_CUSTOM_INB_SA   "custom_inb_sa"
+#define CNXK_FORCE_TAIL_DROP "force_tail_drop"
 
 int
 cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev 
*dev)
@@ -301,6 +302,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, 
struct cnxk_eth_dev *dev)
uint16_t outb_nb_desc = 8200;
struct sdp_channel sdp_chan;
uint16_t rss_tag_as_xor = 0;
+   uint8_t force_tail_drop = 0;
uint16_t scalar_enable = 0;
uint16_t tx_compl_ena = 0;
uint16_t custom_sa_act = 0;
@@ -364,6 +366,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, 
struct cnxk_eth_dev *dev)
rte_kvargs_process(kvlist, CNXK_CUSTOM_META_AURA_DIS, &parse_flag,
   &custom_meta_aura_dis);
rte_kvargs_process(kvlist, CNXK_CUSTOM_INB_SA, &parse_flag, 
&custom_inb_sa);
+   rte_kvargs_process(kvlist, CNXK_FORCE_TAIL_DROP, &parse_flag, 
&force_tail_drop);
rte_kvargs_free(kvlist);
 
 null_devargs:
@@ -405,6 +408,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, 
struct cnxk_eth_dev *dev)
dev->npc.flow_age.aging_poll_freq = aging_thread_poll_freq;
if (roc_feature_nix_has_rx_inject())
dev->nix.rx_inj_ena = rx_inj_ena;
+   dev->nix.force_tail_drop = force_tail_drop;
return 0;
 exit:
return -EINVAL;
@@ -429,4 +433,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_cnxk,
  CNXK_SQB_SLACK "=<12-512>"
  CNXK_FLOW_AGING_POLL_FREQ "=<10-65535>"
  CNXK_NIX_RX_INJ_ENABLE "=1"
- CNXK_CUSTOM_META_AURA_DIS "=1");
+ CNXK_CUSTOM_META_AURA_DIS "=1"
+ CNXK_FORCE_TAIL_DROP "=1");
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c 
b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 9970c5ff5c..7c8a4d8416 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -313,6 +313,7 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
fc_cfg.rq_cfg.pool = rq->aura_handle;
fc_cfg.rq_cfg.spb_pool = rq->spb_aura_handle;
fc_cfg.rq_cfg.cq_drop = cq->drop_thresh;
+   fc_cfg.rq_cfg.cq_bp = cq->bp_thresh;
fc_cfg.rq_cfg.pool_drop_pct = ROC_NIX_AURA_THRESH;
 
rc = roc_nix_fc_config_set(nix, &fc_cfg);
@@ -1239,6 +1240,7 @@ nix_priority_flow_ctrl_rq_conf(struct rte_eth_dev 
*eth_dev, uint16_t qid,
fc_cfg.rq_cfg.pool = rxq->qconf.mp->pool_id;
fc_cfg.rq_cfg.spb_pool = rq->spb_aura_handle;
fc_cfg.rq_cfg.cq_drop = cq->drop_thresh;
+   fc_cfg.rq_cfg.cq_bp = cq->bp_thresh;
fc_cfg.rq_cfg.pool_drop_pct = ROC_NIX_AURA_THRESH;
rc = roc_nix_fc_config_set(nix, &fc_cfg);
if (rc)
-- 
2.25.1



[PATCH 4/7] net/cnxk: fix descriptor count update on reconfig

2025-05-28 Thread Rahul Bhansali
In Rx queue setup, the input descriptor count is updated as per
requirement and stored. But during port reconfiguration, this
descriptor count changes again in Rx queue setup.
Hence, the initial input descriptor count needs to be stored.

Fixes: a86144cd9ded ("net/cnxk: add Rx queue setup and release")

Signed-off-by: Rahul Bhansali 
---
 drivers/net/cnxk/cnxk_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 1ba09c068b..14e4e95419 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -653,6 +653,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, 
uint16_t qid,
struct roc_nix *nix = &dev->nix;
struct cnxk_eth_rxq_sp *rxq_sp;
struct rte_mempool_ops *ops;
+   uint32_t desc_cnt = nb_desc;
const char *platform_ops;
struct roc_nix_rq *rq;
struct roc_nix_cq *cq;
@@ -778,7 +779,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, 
uint16_t qid,
rxq_sp->qconf.conf.rx = *rx_conf;
/* Queue config should reflect global offloads */
rxq_sp->qconf.conf.rx.offloads = dev->rx_offloads;
-   rxq_sp->qconf.nb_desc = nb_desc;
+   rxq_sp->qconf.nb_desc = desc_cnt;
rxq_sp->qconf.mp = lpb_pool;
rxq_sp->tc = 0;
rxq_sp->tx_pause = (dev->fc_cfg.mode == RTE_ETH_FC_FULL ||
-- 
2.25.1



[v1 00/10] DPAA specific fixes

2025-05-28 Thread vanshika . shukla
From: Vanshika Shukla 

This series includes fixes for NXP DPAA drivers.

Gagandeep Singh (1):
  bus/dpaa: improve DPAA cleanup

Hemant Agrawal (2):
  bus/dpaa: avoid using same structure and variable name
  bus/dpaa: optimize qman enqueue check

Jun Yang (5):
  bus/dpaa: add FMan node
  bus/dpaa: enhance DPAA SoC version
  bus/dpaa: optimize bman acquire/release
  mempool/dpaa: fast acquire and release
  mempool/dpaa: adjust pool element for LS1043A errata

Vanshika Shukla (1):
  net/dpaa: add devargs for enabling err packets on main queue

Vinod Pullabhatla (1):
  net/dpaa: add Tx rate limiting DPAA PMD API

 .mailmap  |   1 +
 doc/guides/nics/dpaa.rst  |   3 +
 drivers/bus/dpaa/base/fman/fman.c | 278 -
 drivers/bus/dpaa/base/fman/netcfg_layer.c |   8 +-
 drivers/bus/dpaa/base/qbman/bman.c| 149 +--
 drivers/bus/dpaa/base/qbman/qman.c|  50 ++--
 drivers/bus/dpaa/base/qbman/qman_driver.c |   2 -
 drivers/bus/dpaa/bus_dpaa_driver.h|   9 +-
 drivers/bus/dpaa/dpaa_bus.c   | 179 +
 drivers/bus/dpaa/include/fman.h   |  40 +--
 drivers/bus/dpaa/include/fsl_bman.h   |  20 +-
 drivers/bus/dpaa/include/fsl_qman.h   |   2 +-
 drivers/bus/dpaa/include/netcfg.h |  14 --
 drivers/mempool/dpaa/dpaa_mempool.c   | 230 +
 drivers/mempool/dpaa/dpaa_mempool.h   |  13 +-
 drivers/net/dpaa/dpaa_ethdev.c| 291 +++---
 drivers/net/dpaa/dpaa_flow.c  |  87 ++-
 drivers/net/dpaa/dpaa_ptp.c   |  12 +-
 drivers/net/dpaa/dpaa_rxtx.c  |   4 +-
 drivers/net/dpaa/fmlib/fm_lib.c   |  30 +++
 drivers/net/dpaa/fmlib/fm_port_ext.h  |   2 +-
 drivers/net/dpaa/rte_pmd_dpaa.h   |  21 +-
 22 files changed, 1022 insertions(+), 423 deletions(-)

-- 
2.25.1



[v1 01/10] bus/dpaa: avoid using same structure and variable name

2025-05-28 Thread vanshika . shukla
From: Hemant Agrawal 

rte_dpaa_bus was being used as both a structure name and a variable name.

Signed-off-by: Jun Yang 
Signed-off-by: Hemant Agrawal 
---
 drivers/bus/dpaa/dpaa_bus.c | 56 ++---
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 5420733019..f5ce4a2761 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -54,7 +54,7 @@ struct rte_dpaa_bus {
int detected;
 };
 
-static struct rte_dpaa_bus rte_dpaa_bus;
+static struct rte_dpaa_bus s_rte_dpaa_bus;
 struct netcfg_info *dpaa_netcfg;
 
 /* define a variable to hold the portal_key, once created.*/
@@ -120,7 +120,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
struct rte_dpaa_device *dev = NULL;
struct rte_dpaa_device *tdev = NULL;
 
-   RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+   RTE_TAILQ_FOREACH_SAFE(dev, &s_rte_dpaa_bus.device_list, next, tdev) {
comp = compare_dpaa_devices(newdev, dev);
if (comp < 0) {
TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -130,7 +130,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
}
 
if (!inserted)
-   TAILQ_INSERT_TAIL(&rte_dpaa_bus.device_list, newdev, next);
+   TAILQ_INSERT_TAIL(&s_rte_dpaa_bus.device_list, newdev, next);
 }
 
 /*
@@ -176,7 +176,7 @@ dpaa_create_device_list(void)
struct fm_eth_port_cfg *cfg;
struct fman_if *fman_intf;
 
-   rte_dpaa_bus.device_count = 0;
+   s_rte_dpaa_bus.device_count = 0;
 
/* Creating Ethernet Devices */
for (i = 0; dpaa_netcfg && (i < dpaa_netcfg->num_ethports); i++) {
@@ -187,7 +187,7 @@ dpaa_create_device_list(void)
goto cleanup;
}
 
-   dev->device.bus = &rte_dpaa_bus.bus;
+   dev->device.bus = &s_rte_dpaa_bus.bus;
dev->device.numa_node = SOCKET_ID_ANY;
 
/* Allocate interrupt handle instance */
@@ -226,7 +226,7 @@ dpaa_create_device_list(void)
dpaa_add_to_device_list(dev);
}
 
-   rte_dpaa_bus.device_count += i;
+   s_rte_dpaa_bus.device_count += i;
 
/* Unlike case of ETH, RTE_LIBRTE_DPAA_MAX_CRYPTODEV SEC devices are
 * constantly created only if "sec" property is found in the device
@@ -259,7 +259,7 @@ dpaa_create_device_list(void)
}
 
dev->device_type = FSL_DPAA_CRYPTO;
-   dev->id.dev_id = rte_dpaa_bus.device_count + i;
+   dev->id.dev_id = s_rte_dpaa_bus.device_count + i;
 
/* Even though RTE_CRYPTODEV_NAME_MAX_LEN is valid length of
 * crypto PMD, using RTE_ETH_NAME_MAX_LEN as that is the size
@@ -274,7 +274,7 @@ dpaa_create_device_list(void)
dpaa_add_to_device_list(dev);
}
 
-   rte_dpaa_bus.device_count += i;
+   s_rte_dpaa_bus.device_count += i;
 
 qdma_dpaa:
/* Creating QDMA Device */
@@ -287,7 +287,7 @@ dpaa_create_device_list(void)
}
 
dev->device_type = FSL_DPAA_QDMA;
-   dev->id.dev_id = rte_dpaa_bus.device_count + i;
+   dev->id.dev_id = s_rte_dpaa_bus.device_count + i;
 
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
sprintf(dev->name, "dpaa_qdma-%d", i+1);
@@ -297,7 +297,7 @@ dpaa_create_device_list(void)
 
dpaa_add_to_device_list(dev);
}
-   rte_dpaa_bus.device_count += i;
+   s_rte_dpaa_bus.device_count += i;
 
return 0;
 
@@ -312,8 +312,8 @@ dpaa_clean_device_list(void)
struct rte_dpaa_device *dev = NULL;
struct rte_dpaa_device *tdev = NULL;
 
-   RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
-   TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+   RTE_TAILQ_FOREACH_SAFE(dev, &s_rte_dpaa_bus.device_list, next, tdev) {
+   TAILQ_REMOVE(&s_rte_dpaa_bus.device_list, dev, next);
rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
@@ -537,10 +537,10 @@ rte_dpaa_bus_scan(void)
return 0;
}
 
-   if (rte_dpaa_bus.detected)
+   if (s_rte_dpaa_bus.detected)
return 0;
 
-   rte_dpaa_bus.detected = 1;
+   s_rte_dpaa_bus.detected = 1;
 
/* create the key, supplying a function that'll be invoked
 * when a portal affined thread will be deleted.
@@ -564,7 +564,7 @@ rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
 
BUS_INIT_FUNC_TRACE();
 
-   TAILQ_INSERT_TAIL(&rte_dpaa_bus.driver_list, driver, next);
+   TAILQ_INSERT_TAIL(&s_rte_dpaa_bus.driver_list, driver, next);
 }
 
 /* un-register a dpaa bus based dpaa driver */
@@ -574,7 +574,7 @@ rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
 {
B

RE: [EXTERNAL] [PATCH v2 0/9] DPAA2 crypto driver changes

2025-05-28 Thread Akhil Goyal
> V2 changes:
>  - fix checkpatch warning
>  - fix 32 bit compilation error
>  - fix a commit message
>  - update document
> 
> Gagandeep Singh (5):
>   common/dpaax: fix invalid key command error
>   common/dpaax: fix for PDCP AES only 12bit SN case
>   common/dpaax: support 12bit SN in pdcp uplane
>   crypto/dpaa2_sec: change custom device API to standard
>   crypto/dpaa2_sec: add null algo capability
> 
> Jun Yang (3):
>   net/dpaa2: configure buffer layout
>   mempool/dpaa2: mempool operation index
>   crypto/dpaa2_sec: add support for simple IPsec FD
> 


> Vanshika Shukla (1):
>   crypto/dpaa2_sec: fix coverity Issues
Patch title updated for this one, as the Coverity issue number was missing.

Series applied to dpdk-next-crypto
Thanks.


> 
>  doc/guides/cryptodevs/dpaa2_sec.rst  |   2 +
>  doc/guides/cryptodevs/features/dpaa2_sec.ini |   2 +
>  drivers/bus/fslmc/portal/dpaa2_hw_pvt.h  |  10 +
>  drivers/common/dpaax/caamflib/desc/pdcp.h|  31 +++-
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c  | 182 ---
>  drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h|  45 -
>  drivers/crypto/dpaa2_sec/meson.build |   3 +-
>  drivers/crypto/dpaa_sec/dpaa_sec.c   |  42 ++---
>  drivers/mempool/dpaa2/dpaa2_hw_mempool.c |  20 +-
>  drivers/mempool/dpaa2/dpaa2_hw_mempool.h |   5 +-
>  drivers/net/dpaa2/base/dpaa2_hw_dpni.c   |  18 +-
>  drivers/net/dpaa2/base/dpaa2_hw_dpni_annot.h |   6 +
>  drivers/net/dpaa2/dpaa2_ethdev.h |   6 +-
>  13 files changed, 263 insertions(+), 109 deletions(-)
> 
> --
> 2.25.1



Re: [PATCH] net/mlx5: avoid setting kernel MTU if not needed

2025-05-28 Thread David Marchand
Hello,

On Wed, May 28, 2025 at 11:36 AM Maxime Coquelin
 wrote:
>
> This patch checks whether the Kernel MTU has the same value
> as the requested one at port configuration time, and skip
> setting it if it is the same.
>
> Doing this, we can avoid the application to require
> NET_ADMIN capability, as in v23.11.
>
> Fixes: 10859ecf09c4 ("net/mlx5: fix MTU configuration")
> Cc: sta...@dpdk.org
>
> Signed-off-by: Maxime Coquelin 
> ---
>
> Hi Dariuz,
>
> I set priv->mtu as it is done after the mlx5_set_mtu() call,
> but I'm not sure it is necessary, as is the existing call to
> mlx5_get_mtu() because it seems done in mlx5_dev_spawn().

It seems there were some back and forth on this priv->mtu topic
between Nelio and other devs in the past.

Atm, I don't see the need for keeping such a cached mtu value in priv.
There is only one user of the value, and it is for configuration
operation that can do a query to the kernel.
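
For reference, a minimal sketch of the skip logic being discussed, using
the function names already present in mlx5 (untested):

    uint16_t kern_mtu = 0;
    int ret;

    ret = mlx5_get_mtu(dev, &kern_mtu);
    if (ret == 0 && kern_mtu == mtu)
        return 0; /* kernel MTU already matches, nothing to set */
    ret = mlx5_set_mtu(dev, mtu);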


-- 
David Marchand



RE: [PATCH] crypto/virtio: fix DER encoding of RSA public key

2025-05-28 Thread Akhil Goyal



> -Original Message-
> From: Gowrishankar Muthukrishnan 
> Sent: Saturday, May 10, 2025 4:11 PM
> To: dev@dpdk.org; Jay Zhou 
> Cc: Anoob Joseph ; Akhil Goyal ;
> Gowrishankar Muthukrishnan 
> Subject: [PATCH] crypto/virtio: fix DER encoding of RSA public key
> 
> As per RFC 8017, RSA public key in ASN.1 should have only
> modulus and exponent values. Add a separate encoding function
> to follow this standard.
> 
> Fixes: 6fe6a7f7bcf ("crypto/virtio: add asymmetric RSA support")
Updated the Fixes tag 
Fixes: 10702138f1a1 ("crypto/virtio: support asymmetric RSA")
Cc: sta...@dpdk.org
> 
> Signed-off-by: Gowrishankar Muthukrishnan 
Acked-by: Akhil Goyal 

Applied to dpdk-next-crypto
Thanks.
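
For reference, the RSAPublicKey structure from RFC 8017 (Appendix A.1.1)
that the new encoding function should follow:

    RSAPublicKey ::= SEQUENCE {
        modulus           INTEGER,  -- n
        publicExponent    INTEGER   -- e
    }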



[PATCH RESEND 2/3] lib/lpm: R-V V rte_lpm_lookupx4

2025-05-28 Thread uk7b
From: sunyuechi 

bpi-f3:
scalar: 5.7 cycles
rvv:2.4 cycles

Maybe runtime detection in LPM should be added for all architectures,
but this commit is only about the RVV part.

Signed-off-by: sunyuechi 
---
 MAINTAINERS   |  2 +
 lib/lpm/meson.build   |  1 +
 lib/lpm/rte_lpm.h |  2 +
 lib/lpm/rte_lpm_rvv.h | 91 +++
 4 files changed, 96 insertions(+)
 create mode 100644 lib/lpm/rte_lpm_rvv.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 3e16789250..0f207ac129 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -340,6 +340,8 @@ M: Stanislaw Kardach 
 F: config/riscv/
 F: doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
 F: lib/eal/riscv/
+M: sunyuechi 
+F: lib/**/*rvv*
 
 Intel x86
 M: Bruce Richardson 
diff --git a/lib/lpm/meson.build b/lib/lpm/meson.build
index fae4f79fb9..09133061e5 100644
--- a/lib/lpm/meson.build
+++ b/lib/lpm/meson.build
@@ -17,6 +17,7 @@ indirect_headers += files(
 'rte_lpm_scalar.h',
 'rte_lpm_sse.h',
 'rte_lpm_sve.h',
+'rte_lpm_rvv.h',
 )
 deps += ['hash']
 deps += ['rcu']
diff --git a/lib/lpm/rte_lpm.h b/lib/lpm/rte_lpm.h
index 7df64f06b1..b06517206f 100644
--- a/lib/lpm/rte_lpm.h
+++ b/lib/lpm/rte_lpm.h
@@ -408,6 +408,8 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, 
uint32_t hop[4],
 #include "rte_lpm_altivec.h"
 #elif defined(RTE_ARCH_X86)
 #include "rte_lpm_sse.h"
+#elif defined(RTE_ARCH_RISCV) && defined(RTE_RISCV_FEATURE_V)
+#include "rte_lpm_rvv.h"
 #else
 #include "rte_lpm_scalar.h"
 #endif
diff --git a/lib/lpm/rte_lpm_rvv.h b/lib/lpm/rte_lpm_rvv.h
new file mode 100644
index 00..d6aa1500be
--- /dev/null
+++ b/lib/lpm/rte_lpm_rvv.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2025 Institute of Software Chinese Academy of Sciences 
(ISCAS).
+ */
+
+#ifndef _RTE_LPM_RVV_H_
+#define _RTE_LPM_RVV_H_
+
+#include 
+
+#include 
+#include 
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_LPM_LOOKUP_SUCCESS 0x0100
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
+
+typedef void (*lpm_lookupx4_fn)(const struct rte_lpm *, xmm_t, uint32_t[4], 
uint32_t);
+
+static lpm_lookupx4_fn lpm_lookupx4_impl;
+
+static inline void rte_lpm_lookupx4_scalar(
+   const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4], uint32_t defv)
+{
+   uint32_t nh;
+   int ret;
+
+   for (int i = 0; i < 4; i++) {
+   ret = rte_lpm_lookup(lpm, ip[i], &nh);
+   hop[i] = (ret == 0) ? nh : defv;
+   }
+}
+
+static inline void rte_lpm_lookupx4_rvv(
+   const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4], uint32_t defv)
+{
+   size_t vl = 4;
+
+   const uint32_t *tbl24_p = (const uint32_t *)lpm->tbl24;
+   uint32_t tbl_entries[4] = {
+   tbl24_p[((uint32_t)ip[0]) >> 8],
+   tbl24_p[((uint32_t)ip[1]) >> 8],
+   tbl24_p[((uint32_t)ip[2]) >> 8],
+   tbl24_p[((uint32_t)ip[3]) >> 8],
+   };
+   vuint32m1_t vtbl_entry = __riscv_vle32_v_u32m1(tbl_entries, vl);
+
+   vbool32_t mask = __riscv_vmseq_vx_u32m1_b32(
+   __riscv_vand_vx_u32m1(vtbl_entry, RTE_LPM_VALID_EXT_ENTRY_BITMASK, 
vl),
+   RTE_LPM_VALID_EXT_ENTRY_BITMASK, vl);
+
+   vuint32m1_t vtbl8_index = __riscv_vsll_vx_u32m1(
+   __riscv_vadd_vv_u32m1(
+   __riscv_vsll_vx_u32m1(__riscv_vand_vx_u32m1(vtbl_entry, 
0x00FF, vl), 8, vl),
+   __riscv_vand_vx_u32m1(
+   __riscv_vle32_v_u32m1((const uint32_t *)&ip, vl), 
0x00FF, vl),
+   vl),
+   2, vl);
+
+   vtbl_entry = __riscv_vluxei32_v_u32m1_mu(
+   mask, vtbl_entry, (const uint32_t *)(lpm->tbl8), vtbl8_index, vl);
+
+   vuint32m1_t vnext_hop = __riscv_vand_vx_u32m1(vtbl_entry, 0x00FF, 
vl);
+   mask = __riscv_vmseq_vx_u32m1_b32(
+   __riscv_vand_vx_u32m1(vtbl_entry, RTE_LPM_LOOKUP_SUCCESS, vl), 0, 
vl);
+
+   vnext_hop = __riscv_vmerge_vxm_u32m1(vnext_hop, defv, mask, vl);
+
+   __riscv_vse32_v_u32m1(hop, vnext_hop, vl);
+}
+
+static inline void rte_lpm_lookupx4(
+   const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4], uint32_t defv)
+{
+   lpm_lookupx4_impl(lpm, ip, hop, defv);
+}
+
+RTE_INIT(rte_lpm_init_alg)
+{
+   lpm_lookupx4_impl = rte_cpu_get_flag_enabled(RTE_CPUFLAG_RISCV_ISA_V)
+   ? rte_lpm_lookupx4_rvv
+   : rte_lpm_lookupx4_scalar;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LPM_RVV_H_ */
-- 
2.49.0



[PATCH RESEND 3/3] riscv: override machine_args only when default

2025-05-28 Thread uk7b
From: sunyuechi 

Support using -Dcpu_instruction_set=rv64gcv to enable the V extension.
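
For example (assuming the riscv cross file shipped in the tree):

    meson setup build --cross-file config/riscv/riscv64_linux_gcc \
        -Dcpu_instruction_set=rv64gcv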

Signed-off-by: sunyuechi 
---
 config/riscv/meson.build | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/config/riscv/meson.build b/config/riscv/meson.build
index e3694cf2e6..1036a86d05 100644
--- a/config/riscv/meson.build
+++ b/config/riscv/meson.build
@@ -111,6 +111,7 @@ arch_config = arch_config[arch_id]
 # Concatenate flags respecting priorities.
 dpdk_flags = flags_common + vendor_config['flags'] + arch_config.get('flags', 
[])
 
+if (cpu_instruction_set == 'rv64gc')
 # apply supported machine args
 machine_args = [] # Clear previous machine args
 foreach flag: arch_config['machine_args']
@@ -118,6 +119,7 @@ foreach flag: arch_config['machine_args']
 machine_args += flag
 endif
 endforeach
+endif
 
 # check if we can do buildtime detection of extensions supported by the target
 riscv_extension_macros = false
-- 
2.49.0



[PATCH RESEND 1/3] config/riscv: detect V extension

2025-05-28 Thread uk7b
From: sunyuechi 

This patch is derived from "config/riscv: detect presence of Zbc
extension", with modifications.

The RISC-V C API defines architecture extension test macros.
These let us detect whether the V extension is supported by the
compiler and the -march we're building with. The C API also defines V
intrinsics we can use rather than inline assembly on newer versions of
GCC (14.1.0+) and Clang (18.1.0+).

If the V extension and intrinsics are both present and we can detect
the V extension at runtime, we define a flag, RTE_RISCV_FEATURE_V.

Signed-off-by: sunyuechi 
---
 config/riscv/meson.build | 25 +
 1 file changed, 25 insertions(+)

diff --git a/config/riscv/meson.build b/config/riscv/meson.build
index 7562c6cb99..e3694cf2e6 100644
--- a/config/riscv/meson.build
+++ b/config/riscv/meson.build
@@ -119,6 +119,31 @@ foreach flag: arch_config['machine_args']
 endif
 endforeach
 
+# check if we can do buildtime detection of extensions supported by the target
+riscv_extension_macros = false
+if (cc.get_define('__riscv_arch_test', args: machine_args) == '1')
+  message('Detected architecture extension test macros')
+  riscv_extension_macros = true
+else
+  warning('RISC-V architecture extension test macros not available. Build-time 
detection of extensions not possible')
+endif
+
+# detect extensions
+# Requires intrinsics available in GCC 14.1.0+ and Clang 18.1.0+
+if (riscv_extension_macros and
+(cc.get_define('__riscv_vector', args: machine_args) != ''))
+  if ((cc.get_id() == 'gcc' and cc.version().version_compare('>=14.1.0'))
+  or (cc.get_id() == 'clang' and cc.version().version_compare('>=18.1.0')))
+if (cc.compiles('''#include 
+int main(void) { size_t vl = __riscv_vsetvl_e32m1(1); }''', args: 
machine_args))
+  message('Compiling with the V extension')
+  machine_args += ['-DRTE_RISCV_FEATURE_V']
+endif
+  else
+warning('Detected V extension but cannot use because intrinsics are not 
available (present in GCC 14.1.0+ and Clang 18.1.0+)')
+  endif
+endif
+
 # apply flags
 foreach flag: dpdk_flags
 if flag.length() > 0
-- 
2.49.0



RE: [PATCH v3 04/10] net/ice/base: set speculative execution barrier

2025-05-28 Thread Pillai, Dhanya R
Sounds good to me, Bruce.

/Dhanya
-Original Message-
From: Richardson, Bruce  
Sent: Wednesday, May 28, 2025 3:59 PM
To: Pillai, Dhanya R 
Cc: Burakov, Anatoly ; dev@dpdk.org; Krakowiak, 
LukaszX 
Subject: Re: [PATCH v3 04/10] net/ice/base: set speculative execution barrier

On Wed, May 28, 2025 at 02:07:25PM +0100, Bruce Richardson wrote:
> On Tue, May 27, 2025 at 01:17:23PM +, Dhanya Pillai wrote:
> > From: Lukasz Krakowiak 
> > 
> > Fix issues related to SPECULATIVE_EXECUTION_DATA_LEAK.
> > This changes set speculative execution barrier to functions:
> > 
> > * ice_sched_add_vsi_child_nodes,
> > * ice_sched_add_vsi_support_nodes,
> > * ice_sched_move_vsi_to_agg,
> > * ice_prof_has_mask_idx,
> > * ice_alloc_prof_mask.
> > 
> > Also, Added memfence definitions.
> > 
> > Signed-off-by: Lukasz Krakowiak 
> > Signed-off-by: Dhanya Pillai 
> > ---
> >  drivers/net/intel/ice/base/ice_flex_pipe.c | 2 ++
> >  drivers/net/intel/ice/base/ice_osdep.h | 6 ++
> >  drivers/net/intel/ice/base/ice_sched.c | 3 +++
> >  3 files changed, 11 insertions(+)
> > 
> > diff --git a/drivers/net/intel/ice/base/ice_flex_pipe.c 
> > b/drivers/net/intel/ice/base/ice_flex_pipe.c
> > index 6dd5588f85..dc8c92e203 100644
> > --- a/drivers/net/intel/ice/base/ice_flex_pipe.c
> > +++ b/drivers/net/intel/ice/base/ice_flex_pipe.c
> > @@ -1280,6 +1280,7 @@ ice_prof_has_mask_idx(struct ice_hw *hw, enum 
> > ice_block blk, u8 prof, u16 idx,
> > if (hw->blk[blk].masks.masks[i].in_use &&
> > hw->blk[blk].masks.masks[i].idx == idx) {
> > found = true;
> > +   ice_memfence_read();
> > if (hw->blk[blk].masks.masks[i].mask == mask)
> > match = true;
> > break;
> > @@ -1648,6 +1649,7 @@ ice_alloc_prof_mask(struct ice_hw *hw, enum ice_block 
> > blk, u16 idx, u16 mask,
> > /* if mask is in use and it exactly duplicates the
> >  * desired mask and index, then in can be reused
> >  */
> > +   ice_memfence_read();
> > if (hw->blk[blk].masks.masks[i].mask == mask &&
> > hw->blk[blk].masks.masks[i].idx == idx) {
> > found_copy = true;
> > diff --git a/drivers/net/intel/ice/base/ice_osdep.h 
> > b/drivers/net/intel/ice/base/ice_osdep.h
> > index ad6cde9896..7588ad3dbc 100644
> > --- a/drivers/net/intel/ice/base/ice_osdep.h
> > +++ b/drivers/net/intel/ice/base/ice_osdep.h
> > @@ -203,6 +203,12 @@ struct __rte_packed_begin ice_virt_mem {  
> > #define ice_memset(a, b, c, d) memset((a), (b), (c))  #define 
> > ice_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
> >  
> > +/* Memory fence barrier */
> > +#define ice_memfence_read()
> > +#define ice_memfence_read_write()
> > +#define ice_memfence_write()
> > +
> 
> I suspect rather than removing this, they would be better defined as
> rte_smp_* barriers. As in:
> 
> #define ice_memfence_read() rte_smp_rmb()
> 
Correcting my own suggestion - since this is a NIC driver, we probably want to 
use rte_io_* barriers, not the rte_smp_* ones.

/Bruce


Re: [PATCH v3 04/10] net/ice/base: set speculative execution barrier

2025-05-28 Thread Bruce Richardson
On Tue, May 27, 2025 at 01:17:23PM +, Dhanya Pillai wrote:
> From: Lukasz Krakowiak 
> 
> Fix issues related to SPECULATIVE_EXECUTION_DATA_LEAK.
> This changes set speculative execution barrier to functions:
> 
> * ice_sched_add_vsi_child_nodes,
> * ice_sched_add_vsi_support_nodes,
> * ice_sched_move_vsi_to_agg,
> * ice_prof_has_mask_idx,
> * ice_alloc_prof_mask.
> 
> Also, Added memfence definitions.
> 
> Signed-off-by: Lukasz Krakowiak 
> Signed-off-by: Dhanya Pillai 
> ---
>  drivers/net/intel/ice/base/ice_flex_pipe.c | 2 ++
>  drivers/net/intel/ice/base/ice_osdep.h | 6 ++
>  drivers/net/intel/ice/base/ice_sched.c | 3 +++
>  3 files changed, 11 insertions(+)
> 
> diff --git a/drivers/net/intel/ice/base/ice_flex_pipe.c 
> b/drivers/net/intel/ice/base/ice_flex_pipe.c
> index 6dd5588f85..dc8c92e203 100644
> --- a/drivers/net/intel/ice/base/ice_flex_pipe.c
> +++ b/drivers/net/intel/ice/base/ice_flex_pipe.c
> @@ -1280,6 +1280,7 @@ ice_prof_has_mask_idx(struct ice_hw *hw, enum ice_block 
> blk, u8 prof, u16 idx,
>   if (hw->blk[blk].masks.masks[i].in_use &&
>   hw->blk[blk].masks.masks[i].idx == idx) {
>   found = true;
> + ice_memfence_read();
>   if (hw->blk[blk].masks.masks[i].mask == mask)
>   match = true;
>   break;
> @@ -1648,6 +1649,7 @@ ice_alloc_prof_mask(struct ice_hw *hw, enum ice_block 
> blk, u16 idx, u16 mask,
>   /* if mask is in use and it exactly duplicates the
>* desired mask and index, then in can be reused
>*/
> + ice_memfence_read();
>   if (hw->blk[blk].masks.masks[i].mask == mask &&
>   hw->blk[blk].masks.masks[i].idx == idx) {
>   found_copy = true;
> diff --git a/drivers/net/intel/ice/base/ice_osdep.h 
> b/drivers/net/intel/ice/base/ice_osdep.h
> index ad6cde9896..7588ad3dbc 100644
> --- a/drivers/net/intel/ice/base/ice_osdep.h
> +++ b/drivers/net/intel/ice/base/ice_osdep.h
> @@ -203,6 +203,12 @@ struct __rte_packed_begin ice_virt_mem {
>  #define ice_memset(a, b, c, d) memset((a), (b), (c))
>  #define ice_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
>  
> +/* Memory fence barrier */
> +#define ice_memfence_read()
> +#define ice_memfence_read_write()
> +#define ice_memfence_write()
> +

I suspect rather than removing this, they would be better defined as
rte_smp_* barriers. As in:

#define ice_memfence_read() rte_smp_rmb()

Let me know if you agree with this proposal, and I'll add it on apply.

/Bruce


RE: Looks like a bug: operands are different enum types 'ibv_flow_attr_type' and 'ibv_flow_flags'

2025-05-28 Thread Suanming Mou
Hi Andre,


> -Original Message-
> From: Andre Muezerie 
> Sent: Wednesday, May 28, 2025 3:33 AM
> To: Dariusz Sosnowski ; Slava Ovsiienko
> ; Bing Zhao ; Ori Kam
> ; Suanming Mou ; Matan
> Azrad 
> Cc: dev@dpdk.org
> Subject: Looks like a bug: operands are different enum types
> 'ibv_flow_attr_type' and 'ibv_flow_flags'
> 
> Hi Folks,
> 
> 
> Compiling with MSCS resulted in the warning below:
> 
> ../drivers/net/mlx5/mlx5_flow_dv.c(19636): warning C5287: operands are
> different enum types 'ibv_flow_attr_type' and 'ibv_flow_flags'; use an 
> explicit
> cast to silence this warning
> 
> It looks like a legit bug. Here is the offending line:
> 
>   struct mlx5dv_flow_matcher_attr dv_attr = {
>   .type = IBV_FLOW_ATTR_NORMAL |
> IBV_FLOW_ATTR_FLAGS_EGRESS,
> 
> As the warning states, the constants in the bitwise operation belong to
> different enums, and these enums have overlaping values, which makes the
> bitwise operation very suspicious.
> On top of that, I see that struct mlx5dv_flow_matcher_attr has a field named
> "flags" which accepts values from ibv_flow_flags:
> 
>   struct mlx5dv_flow_matcher_attr {
>   enum ibv_flow_attr_type type;
>   uint32_t flags; /* From enum ibv_flow_flags. */
> 
> Could someone more familiar with the code take a look and make a fix if
> needed? My goal here is just to get the code to compile with MSVC without
> warnings. I can add a cast to remove the warning if this is indeed how the
> code should be, but I don't want to do this unless I get confirmation that 
> this
> would be the right course of action.

Yes, I think you are right. `IBV_FLOW_ATTR_FLAGS_EGRESS` should go to flags.
Thanks for reporting.
Will first try to verify how it works even with the incorrect initialization.
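
Presumably something like (untested sketch):

	struct mlx5dv_flow_matcher_attr dv_attr = {
		.type = IBV_FLOW_ATTR_NORMAL,
		.flags = IBV_FLOW_ATTR_FLAGS_EGRESS,
	};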

> 
> Thanks,
> 
> Andre Muezerie


[PATCH v3] drivers: remove __rte_used from functions for compatibility with MSVC

2025-05-28 Thread Andre Muezerie
With gcc, the macro __rte_used translates to __attribute__((used)).
MSVC has something to the same effect, but harder to use and with some
limitations (one being that it cannot be used with "static"). Therefore,
it makes sense to avoid __rte_used in some cases.

The functions modified in this patch don't really need to use __rte_used.
Instead, these functions can be enclosed in the same ifdefs used in the
callers. That way, they are only defined when needed (when
someone is actually calling the function). Doing so makes the code
compatible with MSVC and avoids compiler warnings about functions being
defined but not used.

Signed-off-by: Andre Muezerie 
Acked-by: Pavan Nikhilesh 
---
 drivers/net/cnxk/cn10k_rx_select.c | 4 +++-
 drivers/net/cnxk/cn10k_tx_select.c | 4 +++-
 drivers/net/cnxk/cn20k_rx_select.c | 4 +++-
 drivers/net/cnxk/cn20k_tx_select.c | 4 +++-
 drivers/net/cnxk/cn9k_rx_select.c  | 4 +++-
 drivers/net/cnxk/cn9k_tx_select.c  | 4 +++-
 6 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/drivers/net/cnxk/cn10k_rx_select.c 
b/drivers/net/cnxk/cn10k_rx_select.c
index fe1f0dda73..5258d7f745 100644
--- a/drivers/net/cnxk/cn10k_rx_select.c
+++ b/drivers/net/cnxk/cn10k_rx_select.c
@@ -5,7 +5,8 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-static __rte_used void
+#if defined(RTE_ARCH_ARM64) && !defined(CNXK_DIS_TMPLT_FUNC)
+static void
 pick_rx_func(struct rte_eth_dev *eth_dev,
 const eth_rx_burst_t rx_burst[NIX_RX_OFFLOAD_MAX])
 {
@@ -21,6 +22,7 @@ pick_rx_func(struct rte_eth_dev *eth_dev,
 
rte_atomic_thread_fence(rte_memory_order_release);
 }
+#endif
 
 static uint16_t __rte_noinline __rte_hot __rte_unused
 cn10k_nix_flush_rx(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)
diff --git a/drivers/net/cnxk/cn10k_tx_select.c 
b/drivers/net/cnxk/cn10k_tx_select.c
index 56fddac5a0..066c65c9b9 100644
--- a/drivers/net/cnxk/cn10k_tx_select.c
+++ b/drivers/net/cnxk/cn10k_tx_select.c
@@ -5,7 +5,8 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-static __rte_used inline void
+#if defined(RTE_ARCH_ARM64) && !defined(CNXK_DIS_TMPLT_FUNC)
+static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev,
 const eth_tx_burst_t tx_burst[NIX_TX_OFFLOAD_MAX])
 {
@@ -19,6 +20,7 @@ pick_tx_func(struct rte_eth_dev *eth_dev,
rte_eth_fp_ops[eth_dev->data->port_id].tx_pkt_burst =
eth_dev->tx_pkt_burst;
 }
+#endif
 
 #if defined(RTE_ARCH_ARM64)
 static int
diff --git a/drivers/net/cnxk/cn20k_rx_select.c 
b/drivers/net/cnxk/cn20k_rx_select.c
index 25c79434cd..d60f4e62f7 100644
--- a/drivers/net/cnxk/cn20k_rx_select.c
+++ b/drivers/net/cnxk/cn20k_rx_select.c
@@ -5,7 +5,8 @@
 #include "cn20k_ethdev.h"
 #include "cn20k_rx.h"
 
-static __rte_used void
+#if defined(RTE_ARCH_ARM64) && !defined(CNXK_DIS_TMPLT_FUNC)
+static void
 pick_rx_func(struct rte_eth_dev *eth_dev, const eth_rx_burst_t 
rx_burst[NIX_RX_OFFLOAD_MAX])
 {
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
@@ -18,6 +19,7 @@ pick_rx_func(struct rte_eth_dev *eth_dev, const 
eth_rx_burst_t rx_burst[NIX_RX_O
 
rte_atomic_thread_fence(rte_memory_order_release);
 }
+#endif
 
 static uint16_t __rte_noinline __rte_hot __rte_unused
 cn20k_nix_flush_rx(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)
diff --git a/drivers/net/cnxk/cn20k_tx_select.c 
b/drivers/net/cnxk/cn20k_tx_select.c
index fb62b54a5f..95cd1148a1 100644
--- a/drivers/net/cnxk/cn20k_tx_select.c
+++ b/drivers/net/cnxk/cn20k_tx_select.c
@@ -5,7 +5,8 @@
 #include "cn20k_ethdev.h"
 #include "cn20k_tx.h"
 
-static __rte_used inline void
+#if defined(RTE_ARCH_ARM64) && !defined(CNXK_DIS_TMPLT_FUNC)
+static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev, const eth_tx_burst_t 
tx_burst[NIX_TX_OFFLOAD_MAX])
 {
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
@@ -16,6 +17,7 @@ pick_tx_func(struct rte_eth_dev *eth_dev, const 
eth_tx_burst_t tx_burst[NIX_TX_O
if (eth_dev->data->dev_started)
rte_eth_fp_ops[eth_dev->data->port_id].tx_pkt_burst = 
eth_dev->tx_pkt_burst;
 }
+#endif
 
 #if defined(RTE_ARCH_ARM64)
 static int
diff --git a/drivers/net/cnxk/cn9k_rx_select.c 
b/drivers/net/cnxk/cn9k_rx_select.c
index 0d4031ddeb..bb943e694d 100644
--- a/drivers/net/cnxk/cn9k_rx_select.c
+++ b/drivers/net/cnxk/cn9k_rx_select.c
@@ -5,7 +5,8 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-static __rte_used void
+#if defined(RTE_ARCH_ARM64) && !defined(CNXK_DIS_TMPLT_FUNC)
+static void
 pick_rx_func(struct rte_eth_dev *eth_dev,
 const eth_rx_burst_t rx_burst[NIX_RX_OFFLOAD_MAX])
 {
@@ -19,6 +20,7 @@ pick_rx_func(struct rte_eth_dev *eth_dev,
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst =
eth_dev->rx_pkt_burst;
 }
+#endif
 
 #if defined(RTE_ARCH_ARM64)
 static void
diff --git a/drivers/net/cnxk/cn9k_tx_select.c 
b/drivers/net/cnxk/cn9k_tx_select.c
index 497449b1c4..784faa3b8b 100644
--- a/drivers/n

Re: [PATCH v3 04/10] net/ice/base: set speculative execution barrier

2025-05-28 Thread Bruce Richardson
On Wed, May 28, 2025 at 02:07:25PM +0100, Bruce Richardson wrote:
> On Tue, May 27, 2025 at 01:17:23PM +, Dhanya Pillai wrote:
> > From: Lukasz Krakowiak 
> > 
> > Fix issues related to SPECULATIVE_EXECUTION_DATA_LEAK.
> > This changes set speculative execution barrier to functions:
> > 
> > * ice_sched_add_vsi_child_nodes,
> > * ice_sched_add_vsi_support_nodes,
> > * ice_sched_move_vsi_to_agg,
> > * ice_prof_has_mask_idx,
> > * ice_alloc_prof_mask.
> > 
> > Also, Added memfence definitions.
> > 
> > Signed-off-by: Lukasz Krakowiak 
> > Signed-off-by: Dhanya Pillai 
> > ---
> >  drivers/net/intel/ice/base/ice_flex_pipe.c | 2 ++
> >  drivers/net/intel/ice/base/ice_osdep.h | 6 ++
> >  drivers/net/intel/ice/base/ice_sched.c | 3 +++
> >  3 files changed, 11 insertions(+)
> > 
> > diff --git a/drivers/net/intel/ice/base/ice_flex_pipe.c 
> > b/drivers/net/intel/ice/base/ice_flex_pipe.c
> > index 6dd5588f85..dc8c92e203 100644
> > --- a/drivers/net/intel/ice/base/ice_flex_pipe.c
> > +++ b/drivers/net/intel/ice/base/ice_flex_pipe.c
> > @@ -1280,6 +1280,7 @@ ice_prof_has_mask_idx(struct ice_hw *hw, enum 
> > ice_block blk, u8 prof, u16 idx,
> > if (hw->blk[blk].masks.masks[i].in_use &&
> > hw->blk[blk].masks.masks[i].idx == idx) {
> > found = true;
> > +   ice_memfence_read();
> > if (hw->blk[blk].masks.masks[i].mask == mask)
> > match = true;
> > break;
> > @@ -1648,6 +1649,7 @@ ice_alloc_prof_mask(struct ice_hw *hw, enum ice_block 
> > blk, u16 idx, u16 mask,
> > /* if mask is in use and it exactly duplicates the
> >  * desired mask and index, then in can be reused
> >  */
> > +   ice_memfence_read();
> > if (hw->blk[blk].masks.masks[i].mask == mask &&
> > hw->blk[blk].masks.masks[i].idx == idx) {
> > found_copy = true;
> > diff --git a/drivers/net/intel/ice/base/ice_osdep.h 
> > b/drivers/net/intel/ice/base/ice_osdep.h
> > index ad6cde9896..7588ad3dbc 100644
> > --- a/drivers/net/intel/ice/base/ice_osdep.h
> > +++ b/drivers/net/intel/ice/base/ice_osdep.h
> > @@ -203,6 +203,12 @@ struct __rte_packed_begin ice_virt_mem {
> >  #define ice_memset(a, b, c, d) memset((a), (b), (c))
> >  #define ice_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
> >  
> > +/* Memory fence barrier */
> > +#define ice_memfence_read()
> > +#define ice_memfence_read_write()
> > +#define ice_memfence_write()
> > +
> 
> I suspect rather than removing this, they would be better defined as
> rte_smp_* barriers. As in:
> 
> #define ice_memfence_read() rte_smp_rmb()
> 
Correcting my own suggestion - since this is a NIC driver, we probably want
to use rte_io_* barriers, not the rte_smp_* ones.
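
i.e. presumably something like:

#define ice_memfence_read()        rte_io_rmb()
#define ice_memfence_read_write() rte_io_mb()
#define ice_memfence_write()       rte_io_wmb()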

/Bruce


[PATCH] test/crypto: test vectors for additional ECDH groups

2025-05-28 Thread Gowrishankar Muthukrishnan
Add test vectors for ECDH groups 19, 20 and 21.

Signed-off-by: Gowrishankar Muthukrishnan 
---
 app/test/test_cryptodev_asym.c   |  39 ++-
 app/test/test_cryptodev_ecdh_test_vectors.h  | 344 +++
 app/test/test_cryptodev_ecdsa_test_vectors.h |   6 +
 3 files changed, 386 insertions(+), 3 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index fcaf73aa1a..46bbc520e6 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -1676,7 +1676,9 @@ test_ecdsa_sign_verify_all_curve(void)
const char *msg;
 
for (curve_id = SECP192R1; curve_id < END_OF_CURVE_LIST; curve_id++) {
-   if (curve_id == ED25519 || curve_id == ED448)
+   if (curve_id == ED25519 || curve_id == ED448 ||
+   curve_id == ECGROUP19 || curve_id == ECGROUP20 ||
+   curve_id == ECGROUP21)
continue;
 
status = test_ecdsa_sign_verify(curve_id);
@@ -1840,7 +1842,9 @@ test_ecpm_all_curve(void)
const char *msg;
 
for (curve_id = SECP192R1; curve_id < END_OF_CURVE_LIST; curve_id++) {
-   if (curve_id == SECP521R1_UA || curve_id == ED25519 || curve_id 
== ED448)
+   if (curve_id == SECP521R1_UA || curve_id == ECGROUP19 ||
+   curve_id == ECGROUP20 || curve_id == ECGROUP21 ||
+   curve_id == ED25519 || curve_id == ED448)
continue;
 
status = test_ecpm(curve_id);
@@ -2043,6 +2047,15 @@ test_ecdh_pub_key_generate(enum curve curve_id)
case SECP521R1:
input_params = ecdh_param_secp521r1;
break;
+   case ECGROUP19:
+   input_params = ecdh_param_group19;
+   break;
+   case ECGROUP20:
+   input_params = ecdh_param_group20;
+   break;
+   case ECGROUP21:
+   input_params = ecdh_param_group21;
+   break;
case ED25519:
input_params = ecdh_param_ed25519;
break;
@@ -2204,6 +2217,15 @@ test_ecdh_pub_key_verify(enum curve curve_id)
case SECP521R1:
input_params = ecdh_param_secp521r1;
break;
+   case ECGROUP19:
+   input_params = ecdh_param_group19;
+   break;
+   case ECGROUP20:
+   input_params = ecdh_param_group20;
+   break;
+   case ECGROUP21:
+   input_params = ecdh_param_group21;
+   break;
default:
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
@@ -2334,6 +2356,15 @@ test_ecdh_shared_secret(enum curve curve_id)
case SECP521R1:
input_params = ecdh_param_secp521r1;
break;
+   case ECGROUP19:
+   input_params = ecdh_param_group19;
+   break;
+   case ECGROUP20:
+   input_params = ecdh_param_group20;
+   break;
+   case ECGROUP21:
+   input_params = ecdh_param_group21;
+   break;
default:
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
@@ -2556,7 +2587,9 @@ test_ecdh_all_curve(void)
const char *msg;
 
for (curve_id = SECP192R1; curve_id < END_OF_CURVE_LIST; curve_id++) {
-   if (curve_id == SECP521R1_UA || curve_id == ED25519 || curve_id 
== ED448)
+   if (curve_id == SECP521R1_UA || curve_id == ECGROUP19 ||
+   curve_id == ECGROUP20 || curve_id == ECGROUP21 ||
+   curve_id == ED25519 || curve_id == ED448)
continue;
 
status = test_ecdh_priv_key_generate(curve_id);
diff --git a/app/test/test_cryptodev_ecdh_test_vectors.h 
b/app/test/test_cryptodev_ecdh_test_vectors.h
index 36f92b223f..91719b11c1 100644
--- a/app/test/test_cryptodev_ecdh_test_vectors.h
+++ b/app/test/test_cryptodev_ecdh_test_vectors.h
@@ -553,6 +553,350 @@ struct crypto_testsuite_ecdh_params ecdh_param_secp521r1 
= {
.curve = RTE_CRYPTO_EC_GROUP_SECP521R1
 };
 
+/** 256 bit Random ECP group
+ * https://datatracker.ietf.org/doc/html/rfc5903#section-8.1
+ */
+
+static uint8_t i_group19[] = {
+   0xC8, 0x8F, 0x01, 0xF5, 0x10, 0xD9, 0xAC, 0x3F,
+   0x70, 0xA2, 0x92, 0xDA, 0xA2, 0x31, 0x6D, 0xE5,
+   0x44, 0xE9, 0xAA, 0xB8, 0xAF, 0xE8, 0x40, 0x49,
+   0xC6, 0x2A, 0x9C, 0x57, 0x86, 0x2D, 0x14, 0x33
+};
+
+static uint8_t gix_group19[] = {
+   0xDA, 0xD0, 0xB6, 0x53, 0x94, 0x22, 0x1C, 0xF9,
+   0xB0, 0x51, 0xE1, 0xFE, 0xCA, 0x57, 0x87, 0xD0,
+   0x98, 0xDF, 0xE6, 0x37, 0xFC, 0x90, 0xB9, 0xEF,
+   0x94, 0x5D, 0x0C, 0x37, 0x72, 0x58, 0x11, 0x80
+};
+
+static uint8_t giy_group19[] = {
+   0x52, 0x71, 0xA0, 0x46, 0x1C, 0xDB, 0x82, 0x52,
+   0xD6, 0x1F, 0x1C, 0x45, 0x6F, 0xA3, 0xE5, 0x9A,
+   0xB1, 0xF4, 0x5B, 0x33, 

RE: [PATCH] doc: fix anchors namespace in guides

2025-05-28 Thread Hemant Agrawal



> -Original Message-
> From: Nandini Persad 
> Subject: [PATCH] doc: fix anchors namespace in guides
> Importance: High
> 
> I modified the anchor names within the guides to have a clear prefix so that
> they don't collide, based on advice from Thomas.
> 
> Signed-off-by: Nandini Persad 

Acked-by: Hemant Agrawal 


Re: [PATCH v1 3/3] dts: fix doc generation bug

2025-05-28 Thread Patrick Robb
On Tue, May 27, 2025 at 11:37 AM Dean Marx  wrote:

> Fix a bug in the port stats check test suite that was causing
> the DTS doc generation to fail.
>
> Fixes: 8f21210b1d50 ("dts: add port stats check test suite")
>
> Signed-off-by: Dean Marx 
> ---
>  dts/tests/TestSuite_port_stats_checks.py | 13 +
>  1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/dts/tests/TestSuite_port_stats_checks.py
> b/dts/tests/TestSuite_port_stats_checks.py
> index 2a3fb06946..491c2263b6 100644
> --- a/dts/tests/TestSuite_port_stats_checks.py
> +++ b/dts/tests/TestSuite_port_stats_checks.py
> @@ -51,10 +51,15 @@ class TestPortStatsChecks(TestSuite):
>
>  #: Length of the packet being sent including the IP and frame headers.
>  total_packet_len: ClassVar[int] = 100
> -#: Packet to send during testing.
> -send_pkt: ClassVar[Packet] = (
> -Ether() / IP() / Raw(b"X" * (total_packet_len - ip_header_len -
> ether_header_len))
> -)
> +
> +@property
> +def send_pkt(self) -> Packet:
> +"""Packet to send during testing."""
> +return (
> +Ether()
> +/ IP()
> +/ Raw(b"X" * (self.total_packet_len - self.ip_header_len -
> self.ether_header_len))
> +)
>

Looks good. In my opinion this should be a standalone patch, since it's
unrelated to patch 1 and 2, but it's not functionally relevant since there
is no such thing as a series in git, so you don't need to resubmit. Keep in
mind for future series though that they should be logically grouped.

Reviewed-by: Patrick Robb 


>
>  def extract_noise_information(
>  self, verbose_out: list[TestPmdVerbosePacket]
> --
> 2.49.0
>
>


Re: [PATCH 2/2] dts: use tmp dir and DPDK tree dir

2025-05-28 Thread Dean Marx
Tested-by: Dean Marx 


Re: [PATCH 1/2] dts: add remote create dir function

2025-05-28 Thread Dean Marx
Tested-by: Dean Marx 


Re: [PATCH v1 1/3] dts: rewrite README

2025-05-28 Thread Patrick Robb
On Tue, May 27, 2025 at 11:37 AM Dean Marx  wrote:

> Remove unnecessary information from README.md, and add new sections to
> clarify
> the purpose/use of DTS along with clear setup instructions.
>
> Signed-off-by: Dean Marx 
> ---
>  dts/README.md | 104 +++---
>  1 file changed, 39 insertions(+), 65 deletions(-)
>
> diff --git a/dts/README.md b/dts/README.md
> index 2b3a7f89c5..879cf65abc 100644
> --- a/dts/README.md
> +++ b/dts/README.md
> @@ -1,81 +1,55 @@
> -# DTS Environment
> +# Description
>
> -The execution and development environments for DTS are the same,
> -a [Docker](https://docs.docker.com/) container defined by our
> [Dockerfile](./Dockerfile).
> -Using a container for the development environment helps with a few things.
> +DTS is a testing framework and set of testsuites for end to end testing
> of DPDK and DPDK
> +enabled hardware. Unlike the DPDK unit test application, DTS is intended
> to be used to


Maybe change to "unlike DPDK's dpdk-test application, which is used for
running unit tests, DTS is intended to be used to evaluate real DPDK
workloads run over supported hardware."


>
> +evaluate real DPDK workloads run over supported hardware. For instance,
> DTS will control
> +a traffic generator node which will send packets to a system under test
> node which is
> +running a DPDK application, and evaluate those results.
>
>
Change to "evaluate the resulting DPDK application behavior."


> -1. It helps enforce the boundary between the DTS environment and the
> TG/SUT,
> -   something which caused issues in the past.
> -2. It makes creating containers to run DTS inside automated tooling much
> easier, since
> -   they can be based off of a known-working environment that will be
> updated as DTS is.
> -3. It abstracts DTS from the server it is running on. This means that the
> bare-metal OS
> -   can be whatever corporate policy or your personal preferences dictate,
> -   and DTS does not have to try to support all distros that are supported
> by DPDK CI.
> -4. It makes automated testing for DTS easier,
> -   since new dependencies can be sent in with the patches.
> -5. It fixes the issue of undocumented dependencies,
> -   where some test suites require Python libraries that are not installed.
> -6. Allows everyone to use the same Python version easily,
> -   even if they are using a distribution or Windows with out-of-date
> packages.
> -7. Allows you to run the tester on Windows while developing via Docker
> for Windows.
> +# Supported Test Node Topologies
>
> -## Tips for setting up a development environment
> +DTS is a python application which will control a traffic generator node
> (TG) and system
> +under test node (SUT). The nodes represent a DPDK device (usually a NIC)
> located on a
> +host. The devices/NICs can be located on two separate servers, or on the
> same server. If you use
> +the same server for both NICs, install them on separate NUMA domains if
> possible (this is ideal
> +for performance testing.)
>
> -### Getting a docker shell
> +1. 2 link: Represents a topology in which the TG node and SUT node both
> have two network interfaces
>

2 links topology


> +which form the TG <-> SUT connection. An example of this would be a dual
> interface NIC which is the
> +TG node connected to a dual interface NIC which is the SUT node.
> Interface 0 on TG <-> interface 0
> +on SUT, interface 1 on TG <-> interface 1 on SUT.
> +2. 1 link: Works, but may result in skips for testsuites which are
> explicitly decorated with a
>

1 links topology


> +2 link requirement. Represents a topology in which the TG node and SUT
> node are both located on one
> +network interface. An example of this would be a dual interface NIC with
> a connection between
> +its own ports.
>

I wonder whether it makes sense to provide an ascii drawing of the various
topologies?

I googled an online ascii art tool and got these two drawings, showing the
2 links topology for 2 servers and for 1 server:

+------------------------------+          +------------------------------+
|                              |          |                              |
|                              | -------- |                              |
|                              |          |                              |
|  Tester (Traffic Generator)  |          |      System Under Test       |
|                              |          |                              |
|                              | -------- |                              |
|                              |          |                              |
+------------------------------+          +------------------------------+





+------------------------------+
|                              |
|  Tester / System Under Test  |
|                              |
|      port 0 ----+            |
|                 |            |
|      port 1 ----+            |
|                              |
+------------------------------+

[DPDK/other Bug 1715] Enabling tracepoints in dpdk-test causes app to exit immediately without running any tests

2025-05-28 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1715

Bug ID: 1715
   Summary: Enabling tracepoints in dpdk-test causes app to exit
immediately without running any tests
   Product: DPDK
   Version: 25.03
  Hardware: POWER
OS: Linux
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: other
  Assignee: dev@dpdk.org
  Reporter: d...@linux.ibm.com
  Target Milestone: ---

While attempting to create a custom tracepoint, I discovered that enabling any
tracepoints in the dpdk-test app causes it to exit (exit code 255) without
running any tests.  Here's an example on a ppc64le system:


> $ ~/src/dpdk/build/app/test/dpdk-test -l 2-127 --no-pci --no-huge
> --trace=lib.mempool.create
> EAL: Detected CPU lcores: 128
> EAL: Detected NUMA nodes: 2
> EAL: Static memory layout is selected, amount of reserved memory can be
> adjusted with -m or --socket-mem
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: Trace dir: /home/drc/dpdk-traces/rte-2025-05-28-PM-05-10-59


The resulting trace looks like this:


> $ babeltrace2 ~/dpdk-traces/rte-2025-05-28-PM-05-10-59
> 05-28 17:13:00.352 484540 484540 F LIB/ASSERT-COND
> bt_lib_assert_cond_failed@lib/assert-cond.c:64 Babeltrace 2 library
> precondition not satisfied.
> 05-28 17:13:00.352 484540 484540 F LIB/ASSERT-COND
> bt_lib_assert_cond_failed@lib/assert-cond.c:65
> 
> 05-28 17:13:00.352 484540 484540 F LIB/ASSERT-COND
> bt_lib_assert_cond_failed@lib/assert-cond.c:66 Condition ID:
> `pre:message-packet-beginning-create-with-default-clock-snapshot:without-default-clock-snapshot`.
> 05-28 17:13:00.352 484540 484540 F LIB/ASSERT-COND
> bt_lib_assert_cond_failed@lib/assert-cond.c:68 Function:
> bt_message_packet_beginning_create_with_default_clock_snapshot().
> 05-28 17:13:00.352 484540 484540 F LIB/ASSERT-COND
> bt_lib_assert_cond_failed@lib/assert-cond.c:69
> 
> 05-28 17:13:00.352 484540 484540 F LIB/ASSERT-COND
> bt_lib_assert_cond_failed@lib/assert-cond.c:70 Error is:
> 05-28 17:13:00.352 484540 484540 F LIB/ASSERT-COND
> bt_lib_assert_cond_failed@lib/assert-cond.c:72 Unexpected stream class
> configuration when creating a packet beginning or end message: no default
> clock snapshot is needed, but one was provided: stream-addr=0x476bcfb0,
> stream-id=0,
> stream-name="/home/drc/dpdk-traces/rte-2025-05-28-PM-05-10-59/channel0_0",
> stream-stream-class-addr=0x47aae210, stream-stream-class-id=0,
> stream-trace-class-addr=0x47aae0d0, stream-trace-addr=0x474f0800,
> stream-trace-name="", stream-packet-pool-size=0, stream-packet-pool-cap=0,
> sc-addr=0x47aae210, sc-id=0, sc-is-frozen=0, sc-event-class-count=532,
> sc-packet-context-fc-addr=0x47aae5b0, sc-event-common-context-fc-addr=(nil),
> sc-assigns-auto-ec-id=0, sc-assigns-auto-stream-id=0, sc-supports-packets=1,
> sc-packets-have-begin-default-cs=0, sc-packets-have-end-default-cs=0,
> sc-supports-discarded-events=0, sc-discarded-events-have-default-cs=0,
> sc-supports-discarded-packets=0, sc-discarded-packets-have-default-cs=0,
> sc-trace-class-addr=0x47aae0d0, sc-pcf-pool-size=0, sc-pcf-pool-cap=0,
> with-cs=1, cs-val=0
> 05-28 17:13:00.352 484540 484540 F LIB/ASSERT-COND
> bt_lib_assert_cond_failed@lib/assert-cond.c:75 Aborting...
[1]484540 abort (core dumped)  babeltrace2
~/dpdk-traces/rte-2025-05-28-PM-05-10-59

-- 
You are receiving this mail because:
You are the assignee for the bug.

DPDK Ring Q

2025-05-28 Thread Lombardo, Ed
Hi,
I have an issue with DPDK 24.11.1 and a 2-port 100G Intel NIC (E810-C) on a
22-core dual-socket server.

There is a dedicated CPU core that gets the packets from DPDK using
rte_eth_rx_burst() and enqueues the mbufs into a worker ring queue.  This
thread does nothing else.  The NIC is dropping packets at 8.5 Gbps per port.

Studying the perf report, I was interested in common_ring_mc_dequeue().  The
perf tool shows common_ring_mc_dequeue() at 92.86% Self and 92.86% Children.

Further down, I see rte_ring_enqueue_bulk() and
rte_ring_enqueue_bulk_elem().  These are at 0.00% Self and 0.05% Children.
The perf tool also shows rte_ring_sp_enqueue_bulk_elem (inlined), which is
what I wanted to see (single producer), representing the enqueue of the mbuf
pointers to the worker ring queue.

Is it possible to change common_ring_mc_dequeue() to
common_ring_sc_dequeue()?  Can it be set to one consumer on single queue 0?
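
(For reference, a minimal untested sketch of what I have in mind, forcing
multi-producer/single-consumer ring ops on the mbuf pool at creation time:

    struct rte_mempool *mp;

    /* NB_MBUFS is a placeholder for our pool size; frees come from many
     * worker cores (multi-producer) but allocs happen only on the rx core
     * (single-consumer).
     */
    mp = rte_pktmbuf_pool_create_by_ops("mbuf_pool", NB_MBUFS, 256, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), "ring_mp_sc");
)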

I believe this is limiting DPDK from reaching 90 Gbps or higher in my setup, 
which is my goal.

I made sure the E810-C firmware is up to date: NIC FW version 4.80 0x80020543 
1.3805.0.

Perf report shows:
   - 99.65% input_thread
  - 99.35% rte_eth_rx_burst (inlined)
 - ice_recv_scattered_pkts
  92.83% common_ring_mc_dequeue

Any thoughts or suggestions?

Thanks,
Ed


RE: [PATCH] crypto/virtio: add request check on request side

2025-05-28 Thread Jiang, YuX
> -Original Message-
> From: Radu Nicolau 
> Sent: Friday, May 23, 2025 10:05 PM
> To: Jay Zhou ; Fan Zhang
> ; Chenbo Xia 
> Cc: dev@dpdk.org; Nicolau, Radu ;
> roy.fan.zh...@intel.com; sta...@dpdk.org
> Subject: [PATCH] crypto/virtio: add request check on request side
> 
> Add the same request checks on the request side.
> 
> Fixes: b2866f473369 ("vhost/crypto: fix missed request check for copy
> mode")
> Cc: roy.fan.zh...@intel.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Radu Nicolau 
> ---
>  drivers/crypto/virtio/virtio_rxtx.c | 40 +
>  1 file changed, 40 insertions(+)
> 
Tested-by:  Yu Jiang 

Best regards,
Yu Jiang


Re: [PATCH v3] drivers: remove __rte_used from functions for compatibility with MSVC

2025-05-28 Thread Jerin Jacob
On Thu, May 29, 2025 at 5:42 AM Andre Muezerie
 wrote:
>
> With gcc, the macro __rte_used translates to __attribute__((used)).
> MSVC has something to the same effect, but it is harder to use and has
> some limitations (one being that it cannot be used with "static").
> Therefore, it makes sense to avoid __rte_used in some cases.
>
> The functions modified in this patch don't really need __rte_used.
> Instead, they can be wrapped in the same ifdefs used by the callers.
> That way, they are only defined when needed (when someone is actually
> calling the function). Doing so makes the code compatible with MSVC and
> avoids compiler warnings about functions being defined but not used.
>
> Signed-off-by: Andre Muezerie 
> Acked-by: Pavan Nikhilesh 

Updated the git commit as follows and applied to
dpdk-next-net-mrvl/for-main. Thanks


net/cnxk: improve MSVC compatibility

With gcc, the macro __rte_used translates to __attribute__((used)).
MSVC has something to the same effect, but it is harder to use and has
some limitations (one being that it cannot be used with "static").
Therefore, it makes sense to avoid __rte_used in some cases.

The functions modified in this patch don't really need __rte_used.
Instead, they can be wrapped in the same ifdefs used by the callers.
That way, they are only defined when needed (when someone is actually
calling the function). Doing so makes the code compatible with MSVC and
avoids compiler warnings about functions being defined but not used.

Signed-off-by: Andre Muezerie 
Acked-by: Pavan Nikhilesh 
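
To illustrate the pattern (a generic sketch with a made-up condition, not the
actual cnxk code):

   /* Before: kept alive unconditionally just to silence
    * -Wunused-function; MSVC has no easy equivalent for statics. */
   static __rte_used void
   helper(void)
   {
           /* ... */
   }

   /* After: compile the helper only under the same condition as its
    * callers, so there is no unused-function warning to suppress. */
   #if defined(RTE_ARCH_ARM64)
   static void
   helper(void)
   {
           /* ... */
   }
   #endif

   void
   caller(void)
   {
   #if defined(RTE_ARCH_ARM64)
           helper();
   #endif
   }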


[PATCH] ethdev: fix null error struct in flow configure

2025-05-28 Thread Maayan Kashani
rte_flow_configure() returned an error value without filling the
error struct, which caused a crash in the complain function.

Filling the error struct fixes the issue.

Signed-off-by: Maayan Kashani 
Fixes: 4ff58b734bc9 ("ethdev: introduce flow engine configuration")
Cc: sta...@dpdk.org
---
 lib/ethdev/rte_flow.c | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 9f8d8f3dc2d..fe8f43caff7 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1725,21 +1725,21 @@ rte_flow_configure(uint16_t port_id,
FLOW_LOG(INFO,
"Device with port_id=%"PRIu16" is not configured.",
port_id);
-   return -EINVAL;
+   goto error;
}
if (dev->data->dev_started != 0) {
FLOW_LOG(INFO,
"Device with port_id=%"PRIu16" already started.",
port_id);
-   return -EINVAL;
+   goto error;
}
if (port_attr == NULL) {
FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.", port_id);
-   return -EINVAL;
+   goto error;
}
if (queue_attr == NULL) {
FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.", port_id);
-   return -EINVAL;
+   goto error;
}
if ((port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) &&
 !rte_eth_dev_is_valid_port(port_attr->host_port_id)) {
@@ -1760,6 +1760,10 @@ rte_flow_configure(uint16_t port_id,
return rte_flow_error_set(error, ENOTSUP,
  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
  NULL, rte_strerror(ENOTSUP));
+error:
+   return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, rte_strerror(EINVAL));
 }
 
 RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pattern_template_create, 22.03)
-- 
2.21.0
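
For context on the crash: callers commonly poison the error struct and then
print error.message after a failure, so returning -EINVAL without filling the
struct leaves a garbage pointer behind. A simplified, testpmd-style sketch of
the caller side (not the exact code):

   struct rte_flow_error error;

   /* Poison the struct so stale/unset pointers are caught. */
   memset(&error, 0x22, sizeof(error));
   if (rte_flow_configure(port_id, &port_attr, nb_queue,
                          queue_attr, &error) != 0)
           /* Crashes if the API returned without setting error.message. */
           printf("flow configure failed: %s\n", error.message);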



RE: [EXTERNAL] [PATCH V4] Add new tracepoint function for type time_t

2025-05-28 Thread Sunil Kumar Kori
> From: Changqing Li 
> 
> To address the Y2038 issue on 32-bit systems, -D_TIME_BITS=64 is passed
> to gcc, so struct timespec's tv_sec becomes 64-bit while size_t stays
> 32-bit, and DPDK fails to compile with:
> "../git/lib/ethdev/ethdev_trace.h: In function
> 'rte_eth_trace_timesync_write_time':
> ../git/lib/eal/include/rte_common.h:498:55: error: size of unnamed array is
> negative
>   498 | #define RTE_BUILD_BUG_ON(condition) ((void)sizeof(char[1 -
>   2*!!(condition)]))"
> 
> Add a new tracepoint function for type time_t to fix this issue.
> 
> Signed-off-by: Changqing Li 
> ---
>  lib/eal/common/eal_common_trace_ctf.c | 5 +
>  lib/eal/include/rte_trace_point.h | 5 +
>  lib/ethdev/ethdev_trace.h | 8 
>  3 files changed, 14 insertions(+), 4 deletions(-)
> 
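
For context, the trace emit helpers are type-specific macros, so the fix
boils down to adding one for time_t. A rough sketch of the shape (paraphrased,
not the verbatim patch):

   /* Emit time_t as a fixed 64-bit value so the RTE_BUILD_BUG_ON
    * size check holds on both 32-bit and 64-bit ABIs. */
   #define rte_trace_point_emit_time_t(val) \
           rte_trace_point_emit_i64(val)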

Acked-by: Sunil Kumar Kori 



Re: [PATCH] config/arm: update neoverse n3 SoC and add neoverse V3

2025-05-28 Thread Thomas Monjalon
16/05/2025 18:15, Doug Foster:
> The Arm Neoverse N3 build configuration is updated to use an mcpu value
> that aligns with the latest GCC. Also, add an Arm Neoverse V3 build
> configuration.
> 
> Signed-off-by: Doug Foster 
> Reviewed-by: Wathsala Vithanage 
> Reviewed-by: Dhruv Tripathi 

Split in 2 patches and pushed, thanks.





Re: [PATCH 1/3] eventdev: introduce event vector adapter

2025-05-28 Thread Jerin Jacob
On Fri, Apr 11, 2025 at 12:10 AM  wrote:
>
> From: Pavan Nikhilesh 
>
> The event vector adapter supports offloading creation of
> event vectors by vectorizing objects (mbufs/ptrs/u64s).
> Applications can create a vector adapter associated with
> an event queue and enqueue objects to be vectorized.
> When the vector reaches the configured size or when the timeout
> is reached, the vector adapter will enqueue the vector to the
> event queue.
>
> Signed-off-by: Pavan Nikhilesh 
> ---
>  config/rte_config.h   |   1 +
>  doc/api/doxy-api-index.md |   1 +
>  doc/guides/eventdevs/features/default.ini |   7 +
>  .../eventdev/event_vector_adapter.rst | 208 
>  doc/guides/prog_guide/eventdev/eventdev.rst   |  10 +-
>  doc/guides/prog_guide/eventdev/index.rst  |   1 +
>  doc/guides/rel_notes/release_25_07.rst|   6 +
>  lib/eventdev/event_vector_adapter_pmd.h   |  85 
>  lib/eventdev/eventdev_pmd.h   |  36 ++
>  lib/eventdev/meson.build  |   3 +
>  lib/eventdev/rte_event_vector_adapter.c   | 472 +
>  lib/eventdev/rte_event_vector_adapter.h   | 481 ++


Update the MAINTAINERS file for the new file additions.


>  lib/eventdev/rte_eventdev.c   |  22 +
>  lib/eventdev/rte_eventdev.h   |  10 +
>  14 files changed, 1338 insertions(+), 5 deletions(-)
>  create mode 100644 doc/guides/prog_guide/eventdev/event_vector_adapter.rst
>  create mode 100644 lib/eventdev/event_vector_adapter_pmd.h
>  create mode 100644 lib/eventdev/rte_event_vector_adapter.c
>  create mode 100644 lib/eventdev/rte_event_vector_adapter.h
>
> diff --git a/config/rte_config.h b/config/rte_config.h
> index 86897de75e..9535c48d81 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -92,6 +92,7 @@
>  #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
>  #define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
>  #define RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE 32
> +#define RTE_EVENT_VECTOR_ADAPTER_MAX_INSTANCE_PER_QUEUE 32
>
>  /* rawdev defines */
>  #define RTE_RAWDEV_MAX_DEVS 64
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index 5c425a2cb9..a11bd59526 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -30,6 +30,7 @@ The public API headers are grouped by topics:
>[event_timer_adapter](@ref rte_event_timer_adapter.h),
>[event_crypto_adapter](@ref rte_event_crypto_adapter.h),
>[event_dma_adapter](@ref rte_event_dma_adapter.h),
> +  [event_vector_adapter](@ref rte_event_vector_adapter.h),
>[rawdev](@ref rte_rawdev.h),
>[metrics](@ref rte_metrics.h),
>[bitrate](@ref rte_bitrate.h),
> diff --git a/doc/guides/eventdevs/features/default.ini 
> b/doc/guides/eventdevs/features/default.ini
> index fa24ba38b4..9fb68f946e 100644
> --- a/doc/guides/eventdevs/features/default.ini
> +++ b/doc/guides/eventdevs/features/default.ini
> @@ -64,3 +64,10 @@ internal_port_vchan_ev_bind =
>  [Timer adapter Features]
>  internal_port  =
>  periodic   =
> +
> +;
> +; Features of a default Vector adapter
> +;
> +[Vector adapter Features]
> +internal_port  =
> +sov_eov=
> diff --git a/doc/guides/prog_guide/eventdev/event_vector_adapter.rst 
> b/doc/guides/prog_guide/eventdev/event_vector_adapter.rst
> new file mode 100644
> index 00..e257552d22
> --- /dev/null
> +++ b/doc/guides/prog_guide/eventdev/event_vector_adapter.rst
> @@ -0,0 +1,208 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +Copyright(c) 2025 Marvell International Ltd.
> +
> +Event Vector Adapter Library
> +
> +
> +The Event Vector Adapter library extends the event-driven model by 
> introducing
> +a mechanism to aggregate multiple 8B objects (e.g., mbufs, u64s) into a 
> single

Add a link to the 8B object structure.

Also, describe the use case for this, i.e. when and why to use it.

> +vector event and enqueue it to an event queue. It provides an API to create,
> +configure, and manage vector adapters.
> +
> +The Event Vector Adapter library is designed to interface with hardware or
> +software implementations of vector aggregation. It queries an eventdev PMD
> +to determine the appropriate implementation.
> +
> +Examples of using the API are presented in the `API Overview`_ and
> +`Processing Vector Events`_ sections.
> +
> +.. _vector_event:
> +
> +Vector Event
> +
> +
> +A vector event is enqueued in the event device when the vector adapter
> +reaches the configured vector size or timeout. The event device uses the
> +attributes configured by the application when scheduling it.
> +
> +Fallback Behavior
> +~
> +
> +If the vector adapter cannot aggregate objects into a vector event, it
> +enqueues the objects as single events with fallback event properties 
> configured
> +by the application.
> +
> +Timeout and Size
> +
> +
>
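
From the description, the intended flow is roughly as follows (a sketch
paraphrased from the new rte_event_vector_adapter.h; exact field names and
signatures may differ):

   /* Configure an adapter bound to one event queue; assumed fields. */
   struct rte_event_vector_adapter_conf conf = {
           .event_dev_id = dev_id,
           .ev.queue_id = vec_queue_id,   /* queue receiving vectors */
           .vector_sz = 64,               /* flush at 64 objects */
           .vector_timeout_ns = 100000,   /* or after 100 us */
   };
   struct rte_event_vector_adapter *va;

   va = rte_event_vector_adapter_create(&conf);

   /* Enqueue 8B objects (mbuf pointers here); the adapter emits a
    * vector event once vector_sz or vector_timeout_ns is reached,
    * falling back to single events if aggregation is not possible. */
   rte_event_vector_adapter_enqueue(va, (uint64_t *)mbufs, n, 0);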

Re: [v1 01/10] bus/dpaa: avoid using same structure and variable name

2025-05-28 Thread Stephen Hemminger
On Wed, 28 May 2025 17:25:04 +0530
vanshika.shu...@nxp.com wrote:

> From: Hemant Agrawal 
> 
> rte_dpaa_bus was being used as both a structure and a variable name.
> 
> Signed-off-by: Jun Yang 
> Signed-off-by: Hemant Agrawal 
> ---

That is perfectly OK in C.
I would still prefer that you not use the rte_ prefix for variable names
that are local to the driver.
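
(C keeps struct tags and ordinary identifiers in separate namespaces, so a
declaration like the following is legal, if confusing; the field is
illustrative:)

   /* The struct tag and the variable name do not clash. */
   struct rte_dpaa_bus {
           int bus_id;
   };
   static struct rte_dpaa_bus rte_dpaa_bus;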


Re: [PATCH v1 2/3] dts: rewrite dts rst

2025-05-28 Thread Patrick Robb
On Tue, May 27, 2025 at 11:37 AM Dean Marx  wrote:

> Modify dts.rst to exclude redundant/outdated information about the project,
> and add new information regarding setup and framework design.
>
> Signed-off-by: Dean Marx 
> ---
>  doc/guides/tools/dts.rst | 310 +--
>  1 file changed, 99 insertions(+), 211 deletions(-)
>
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> index fcc6d22036..0aa6663b9f 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -1,6 +1,7 @@
>  ..  SPDX-License-Identifier: BSD-3-Clause
>  Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>  Copyright(c) 2024 Arm Limited
> +Copyright(c) 2025 University of New Hampshire
>
>  DPDK Test Suite
>  ===
> @@ -20,31 +21,18 @@ DTS runtime environment
>
>  DTS runtime environment node
>A node where at least one DTS runtime environment is present.
> -  This is the node where we run DTS and from which DTS connects to other
> nodes.
>

Leave this in.


>
>  System under test
> -  An SUT is the combination of DPDK and the hardware we're testing
> -  in conjunction with DPDK (NICs, crypto and other devices).
> +  Node with DPDK and relevant hardware (NICs, crypto, etc.).
>

Maybe change to "The system which runs a DPDK application on relevant
hardware (NIC, accelerator cards, etc) and from which the DPDK behavior is
observed for tests."


>
>  System under test node
>A node where at least one SUT is present.
>
>  Traffic generator
> -  A TG is either software or hardware capable of sending packets.
> +  Node that sends traffic; can be hardware or software-based.
>

"Node that sends traffic to the SUT;"

Sorry for being so particular. :)


>  Traffic generator node
>A node where at least one TG is present.
> -  In case of hardware traffic generators, the TG and the node are
> literally the same.
> -
> -
> -In most cases, interchangeably referring to a runtime environment, SUT,
> TG or the node
> -they're running on (e.g. using SUT and SUT node interchangeably) doesn't
> cause confusion.
> -There could theoretically be more than of these running on the same node
> and in that case
> -it's useful to have stricter definitions.
> -An example would be two different traffic generators (such as Trex and
> Scapy)
> -running on the same node.
> -A different example would be a node containing both a DTS runtime
> environment
> -and a traffic generator, in which case it's both a DTS runtime
> environment node and a TG node.
>
>
>  DTS Environment
> @@ -195,12 +183,28 @@ These need to be set up on a Traffic Generator Node:
>  Running DTS
>  ---
>
> -DTS needs to know which nodes to connect to and what hardware to use on
> those nodes.
> -Once that's configured, either a DPDK source code tarball or tree folder
> -need to be supplied whether these are on your DTS host machine or the SUT
> node.
> -DTS can accept a pre-compiled build placed in a subdirectory,
> -or it will compile DPDK on the SUT node,
> -and then run the tests with the newly built binaries.
> +To run DTS, use ``main.py`` with Poetry:
> +
> +.. code-block:: console
> +
> +   docker build --target dev -t dpdk-dts .
> +   docker run -v $(pwd)/..:/dpdk -v /home/dtsuser/.ssh:/root/.ssh:ro -it dpdk-dts bash
> +   $ poetry install
> +   $ poetry run ./main.py
> +
> +Common options include:
> +
> +- ``--output-dir``: Custom output location.
> +- ``--remote-source``: Use sources stored on the SUT.
> +- ``--tarball``: Specify the tarball to be tested.
> +
> +For a full list:
> +
> +.. code-block:: console
> +
> +   poetry run ./main.py --help
>

I think we should keep the full list of flags here instead of replacing it
with this subset. It's a bit of a maintenance burden and it makes the file
longer, but it's important info. I think it's good to present it here even
if it is only "a --help away."


>
>
>  Configuring DTS
> @@ -220,71 +224,6 @@ The user must have :ref:`administrator privileges
> `
>  which don't require password authentication.
>
>
> -DTS Execution
> -~
> -
> -DTS is run with ``main.py`` located in the ``dts`` directory after
> entering Poetry shell:
> -
> -.. code-block:: console
> -
> -   (dts-py3.10) $ ./main.py --help
> -   usage: main.py [-h] [--test-run-config-file FILE_PATH]
> [--nodes-config-file FILE_PATH] [--tests-config-file FILE_PATH]
> -  [--output-dir DIR_PATH] [-t SECONDS] [-v] [--dpdk-tree
> DIR_PATH | --tarball FILE_PATH] [--remote-source]
> -  [--precompiled-build-dir DIR_NAME] [--compile-timeout
> SECONDS] [--test-suite TEST_SUITE [TEST_CASES ...]]
> -  [--re-run N_TIMES] [--random-seed NUMBER]
> -
> -   Run DPDK test suites. All options may be specified with the
> environment variables provided in brackets. Command line arguments have
> higher
> -   priority.
> -
> -   options:
> - -h, --helpshow this help message and exit
> - --test-run-config-file FILE_

[PATCH] doc: update ABI reference version in example

2025-05-28 Thread Stephen Hemminger
The example should list the current ABI reference version.

Signed-off-by: Stephen Hemminger 
---
 doc/guides/contributing/patches.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/guides/contributing/patches.rst 
b/doc/guides/contributing/patches.rst
index 8ad6b6e715..fc9844242e 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -539,7 +539,7 @@ to a different location.
 
 Sample::
 
-   DPDK_ABI_REF_VERSION=v19.11 DPDK_ABI_REF_DIR=/tmp 
./devtools/test-meson-builds.sh
+   DPDK_ABI_REF_VERSION=v24.11 DPDK_ABI_REF_DIR=/tmp 
./devtools/test-meson-builds.sh
 
 
 Sending Patches
-- 
2.47.2