Re: [PATCH] net/mlx5: add GRE as L4 layer for entropy calculation
Hi,

On 26/02/2025 11:51 AM, Yaniv Rosner wrote:
> Signed-off-by: Yaniv Rosner
> Acked-by: Bing Zhao
> ---
>  drivers/net/mlx5/mlx5_flow_hw.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
> index e72b87d70f..27ee9d6cd3 100644
> --- a/drivers/net/mlx5/mlx5_flow_hw.c
> +++ b/drivers/net/mlx5/mlx5_flow_hw.c
> @@ -14875,6 +14875,9 @@ flow_hw_calc_encap_hash(struct rte_eth_dev *dev,
>  	case RTE_FLOW_ITEM_TYPE_ICMP6:
>  		data.next_protocol = IPPROTO_ICMPV6;
>  		break;
> +	case RTE_FLOW_ITEM_TYPE_GRE:
> +		data.next_protocol = IPPROTO_GRE;
> +		break;
>  	default:
>  		break;
>  	}

Patch applied to next-net-mlx,

--
Kindest regards
Raslan Darawsheh
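As a usage illustration, here is a minimal sketch of how an application could query the encap entropy for a GRE-terminated pattern once this mapping is in place. It assumes the rte_flow_calc_encap_hash() API from recent DPDK releases; the helper name is hypothetical, and a real caller would fill the item spec/mask fields with actual packet values.

```c
#include <stdint.h>
#include <rte_flow.h>

/* Hypothetical helper: ask the PMD for the entropy value it would place
 * in the outer header when encapsulating traffic whose inner headers end
 * in GRE. With this patch, mlx5 folds IPPROTO_GRE into the calculation. */
static int
calc_gre_encap_entropy(uint16_t port_id, uint16_t *entropy)
{
	struct rte_flow_item pattern[] = {
		/* Real callers must set spec/mask with concrete values. */
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_GRE },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_error error;

	/* Request a 16-bit hash destined for the outer source port. */
	return rte_flow_calc_encap_hash(port_id, pattern,
					RTE_FLOW_ENCAP_HASH_FIELD_SRC_PORT,
					sizeof(*entropy),
					(uint8_t *)entropy, &error);
}
```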
Re: [PATCH v1] net/mlx5: support multi-host lag probe
Hi,

On 11/03/2025 10:31 AM, Rongwei Liu wrote:
> In multi-host environments, the NIC exposes 4 ports in total and each
> host gets 2 ports. The identifiers of those 2 ports are no longer
> contiguous, which causes an out-of-bounds access of the LAG port array.
>
> Increase the LAG port array and allow holes in the middle.
>
> Signed-off-by: Rongwei Liu
> Acked-by: Dariusz Sosnowski

Patch applied to next-net-mlx,

--
Kindest regards
Raslan Darawsheh
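The gist of the fix can be sketched as follows. This is an illustration of the commit's description, not the PMD's actual code; all names and the 4-port constant are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical: the array is sized for every port the NIC exports,
 * not just the ports visible to this host. */
#define LAG_MAX_PORTS 4

struct lag_port {
	bool present;		/* false for a hole owned by another host */
	uint16_t ifindex;
};

static void
lag_probe(struct lag_port ports[LAG_MAX_PORTS])
{
	for (unsigned int i = 0; i < LAG_MAX_PORTS; i++) {
		if (!ports[i].present)
			continue;	/* tolerate non-contiguous ids */
		/* ... probe ports[i] ... */
	}
}
```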
Re: [PATCH v2] net/mlx5: mitigate the Tx queue parameter adjustment
Hi,

On 24/04/2025 4:31 PM, Viacheslav Ovsiienko wrote:
> The DPDK API rte_eth_tx_queue_setup() has a parameter nb_tx_desc
> specifying the desired queue capacity, measured in packets.
>
> The ConnectX NIC series has a hardware-imposed queue size limit of
> 32K WQEs (packet hardware descriptors). Typically, one packet requires
> one WQE to be sent.
>
> There is a special offload option, data inlining, to improve
> performance for small packets. Also, NICs in some configurations
> require a minimum amount of inline data for the steering engine to
> operate correctly. With inline data, more than one WQE might be
> required to send a single packet. The mlx5 PMD takes this into
> account and adjusts the number of queue WQEs accordingly. If the
> requested queue capacity could not be satisfied due to the hardware
> queue size limit, the mlx5 PMD rejected the queue creation, causing
> an unresolvable application failure.
>
> This patch provides the following:
>
> - Fixes the calculation of the number of WQEs required to send a
>   single packet with inline data, making it more precise and
>   extending the painless operating range.
>
> - If the requested queue capacity cannot be satisfied due to the WQE
>   number adjustment for inline data, it no longer causes a severe
>   error. Instead, a warning message is emitted and the queue is
>   created with the maximum available size, with success reported.
>
> Please note that the inline data size depends on many options (NIC
> configuration, queue offload flags, packet offload flags, packet
> size, etc.), so the actual queue capacity might not be impacted at
> all.
>
> Signed-off-by: Viacheslav Ovsiienko
> Acked-by: Dariusz Sosnowski

Patch applied to next-net-mlx,

--
Kindest regards
Raslan Darawsheh
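A rough sketch of the clamping policy described above, assuming queue capacity shrinks in proportion to the WQEs needed per packet; the constant and function names are hypothetical, not the PMD's internals.

```c
#include <stdint.h>
#include <stdio.h>

#define HW_MAX_WQE (32u * 1024u)	/* ConnectX queue size limit in WQEs */

static uint16_t
adjust_txq_capacity(uint16_t nb_tx_desc, uint32_t wqes_per_packet)
{
	uint32_t wqe_n = (uint32_t)nb_tx_desc * wqes_per_packet;

	if (wqe_n <= HW_MAX_WQE)
		return nb_tx_desc;	/* the request fits as-is */

	/* Clamp to the largest capacity the hardware can provide and
	 * report success with a warning instead of failing setup. */
	uint16_t max_desc = (uint16_t)(HW_MAX_WQE / wqes_per_packet);

	fprintf(stderr, "Tx queue capacity lowered: %u -> %u packets\n",
		nb_tx_desc, max_desc);
	return max_desc;
}
```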
Re: [PATCH] net/mlx5: fix modify field action on group 0
Hi,

On 25/04/2025 10:32 PM, Dariusz Sosnowski wrote:
> HW modify header commands generated for multiple modify field flow
> actions which modify/access the same packet fields do not have to be
> separated by NOPs when used on group 0. This is because:
>
> - On group > 0, HW uses Modify Header Pattern objects which require
>   explicit NOPs.
> - On group 0, the modify field action is implemented using a Modify
>   Header Context object managed by FW. FW inserts the required NOPs
>   internally.
>
> The mlx5 PMD always inserted a NOP, which caused flow/table creation
> failures for group 0 flow rules. This patch addresses that.
>
> Fixes: 0f4aa72b99da ("net/mlx5: support flow modify field with HWS")
> Cc: suanmi...@nvidia.com
> Cc: sta...@dpdk.org
>
> Signed-off-by: Dariusz Sosnowski
> Acked-by: Bing Zhao

Patch applied to next-net-mlx,

--
Kindest regards
Raslan Darawsheh
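The corrected rule reduces to a simple predicate. The sketch below uses hypothetical names and captures only the group-0 distinction described in the commit message.

```c
#include <stdbool.h>
#include <stdint.h>

/* Explicit NOPs are only needed on non-root groups, where HW Modify
 * Header Pattern objects are used; on group 0 the FW-managed Modify
 * Header Context inserts any required NOPs itself. */
static bool
needs_explicit_nop(uint32_t group, bool same_field_conflict)
{
	if (group == 0)
		return false;		/* FW adds NOPs internally */
	return same_field_conflict;	/* HW pattern objects need them */
}
```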
Re: [PATCH] net/mlx5: validate GTP PSC QFI width
Hi,

On 25/04/2025 10:35 PM, Dariusz Sosnowski wrote:
> Add the missing validation of the GTP PSC QFI flow field width for
> the modify field flow action.
>
> Fixes: 0f4aa72b99da ("net/mlx5: support flow modify field with HWS")
> Cc: suanmi...@nvidia.com
> Cc: sta...@dpdk.org
>
> Signed-off-by: Dariusz Sosnowski
> Acked-by: Bing Zhao

Patch applied to next-net-mlx,

--
Kindest regards
Raslan Darawsheh
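For context, the QFI field in the GTP PSC extension header is 6 bits wide, so a plausible shape for the added check is sketched below; the names are illustrative and not taken from the PMD.

```c
#include <errno.h>
#include <stdint.h>

#define GTP_PSC_QFI_WIDTH 6u	/* QFI is a 6-bit field */

static int
validate_qfi_width(uint32_t width)
{
	if (width == 0 || width > GTP_PSC_QFI_WIDTH)
		return -EINVAL;	/* reject zero or oversized widths */
	return 0;
}
```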
Re: [PATCH 0/2] net/mlx5: flow counter pool fixes
Hi,

On 25/04/2025 10:41 PM, Dariusz Sosnowski wrote:
> This patch series includes several fixes for the flow counter pool
> used with the HW Steering flow engine.
>
> Dariusz Sosnowski (2):
>   net/mlx5: fix counter pool init error propagation
>   net/mlx5: fix counter service thread init
>
>  drivers/net/mlx5/mlx5_hws_cnt.c | 29 +++--
>  1 file changed, 19 insertions(+), 10 deletions(-)
>
> --
> 2.39.5

Series applied to next-net-mlx,

--
Kindest regards
Raslan Darawsheh
Re: [PATCH] mem: fix infinite loop
LGTM,

Acked-by: Huisong Li

On 2025/4/2 20:42, Dengdui Huang wrote:
> When the process address space is insufficient, mmap will fail, which
> will cause an infinite loop. This patch fixes it.
>
> Fixes: c4b89ecb64ea ("eal: introduce memory management wrappers")
> Cc: sta...@dpdk.org
>
> Signed-off-by: Dengdui Huang
> ---
<...>
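To illustrate the failure mode, here is a minimal sketch (not the EAL's actual code) of an address-space reservation loop: without the MAP_FAILED check, an exhausted address space makes mmap() fail on every iteration and the loop never terminates.

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Hypothetical helper: reserve an aligned anonymous region, retrying
 * until the mapping is suitable. */
static void *
reserve_aligned(size_t len, size_t align)
{
	for (;;) {
		void *addr = mmap(NULL, len, PROT_NONE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* The missing exit: once address space is exhausted,
		 * mmap fails forever, so bail out instead of retrying. */
		if (addr == MAP_FAILED)
			return NULL;
		if (((uintptr_t)addr & (align - 1)) == 0)
			return addr;	/* suitably aligned: done */
		munmap(addr, len);	/* unsuitable: release and retry */
	}
}
```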
Re: [PATCH 0/2] add debug capabilities to ipool
Hi,

On 24/03/2025 10:18 AM, Shani Peretz wrote:
> Enhanced ipool debugging: added a new log component and verbosity
> levels for operations. Introduced a bitmap in debug mode to track
> allocations and deallocations, preventing double allocations and
> double frees in per-core cache mode.
>
> Shani Peretz (2):
>   net/mlx5: add ipool debug capabilities
>   net/mlx5: added a bitmap that tracks ipool allocs and frees
>
>  drivers/net/mlx5/mlx5_utils.c | 151 +-
>  drivers/net/mlx5/mlx5_utils.h |  21 +
>  2 files changed, 171 insertions(+), 1 deletion(-)

Series applied to next-net-mlx,

Kindest regards
Raslan Darawsheh
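The bitmap idea can be sketched in a few lines. This is an illustration of the described mechanism with hypothetical names, not the PMD's implementation: one bit per ipool index, set on allocation and cleared on free, so a double allocation or double free is detected at the offending call.

```c
#include <stdbool.h>
#include <stdint.h>

struct ipool_dbg_bitmap {
	uint64_t *bits;		/* one bit per ipool entry index */
	uint32_t n_entries;
};

static bool
dbg_mark_alloc(struct ipool_dbg_bitmap *bm, uint32_t idx)
{
	uint64_t mask = 1ULL << (idx & 63);

	if (bm->bits[idx >> 6] & mask)
		return false;	/* bit already set: double allocation */
	bm->bits[idx >> 6] |= mask;
	return true;
}

static bool
dbg_mark_free(struct ipool_dbg_bitmap *bm, uint32_t idx)
{
	uint64_t mask = 1ULL << (idx & 63);

	if (!(bm->bits[idx >> 6] & mask))
		return false;	/* bit already clear: double free */
	bm->bits[idx >> 6] &= ~mask;
	return true;
}
```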