Re: [dpdk-dev] imissed drop with mellanox connectx5

2021-07-22 Thread Yaron Illouz
Hi Matan, We work with mbufs in all threads and lcores; we pass them from one thread to another through a DPDK ring before releasing them. There are drops of 10K to 100K pps, and we can't live with these drop rates. The drops show up in the imissed counter from rte_eth_stats_get, so I thought that the drops …
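
A minimal sketch of the hand-off pattern described above, assuming a single RX lcore and a single worker lcore; the ring name, sizes, and port/queue ids are placeholders invented for illustration (the thread does not include the actual code):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define BURST 32

    static struct rte_ring *work_ring;  /* RX lcore -> worker lcore */

    /* init (on the main lcore): single-producer/single-consumer ring */
    static void init_ring(void)
    {
        work_ring = rte_ring_create("work_ring", 4096, rte_socket_id(),
                                    RING_F_SP_ENQ | RING_F_SC_DEQ);
    }

    /* RX lcore: pull packets from the NIC, enqueue them for the worker */
    static int rx_loop(void *arg __rte_unused)
    {
        struct rte_mbuf *bufs[BURST];
        for (;;) {
            uint16_t n = rte_eth_rx_burst(0 /* port */, 0 /* queue */,
                                          bufs, BURST);
            if (n == 0)
                continue;
            unsigned sent = rte_ring_enqueue_burst(work_ring,
                                                   (void **)bufs, n, NULL);
            while (sent < n)  /* drop what the ring could not absorb */
                rte_pktmbuf_free(bufs[sent++]);
        }
        return 0;
    }

    /* worker lcore: dequeue, process, then free the mbufs */
    static int worker_loop(void *arg __rte_unused)
    {
        struct rte_mbuf *bufs[BURST];
        for (;;) {
            unsigned n = rte_ring_dequeue_burst(work_ring, (void **)bufs,
                                                BURST, NULL);
            for (unsigned i = 0; i < n; i++) {
                /* ... process bufs[i] ... */
                rte_pktmbuf_free(bufs[i]);
            }
        }
        return 0;
    }

Note that the worker frees mbufs that were allocated on the RX lcore, which is exactly the pattern Matan's reply below identifies as a source of per-allocation cache misses.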

Re: [dpdk-dev] imissed drop with mellanox connectx5

2021-07-22 Thread Matan Azrad
Hi Yaron, Freeing mbufs from a different lcore than the one that allocated them causes a miss in the original lcore's mempool cache on every mbuf allocation, so the PMD constantly gets non-hot mbufs to work with. That can be one of the reasons for the drops you described earlier. Matan
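
One possible mitigation consistent with Matan's explanation (not something the thread itself prescribes) is to return processed mbufs to the allocating lcore over a second ring and free them there, so every free lands in that lcore's hot per-lcore mempool cache. A hedged sketch, with free_ring and both helper names invented here:

    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define BURST 32

    static struct rte_ring *free_ring;  /* worker -> RX lcore return path */

    /* worker side: hand processed mbufs back instead of freeing them */
    static void worker_return(struct rte_mbuf **bufs, unsigned n)
    {
        unsigned sent = rte_ring_enqueue_burst(free_ring, (void **)bufs,
                                               n, NULL);
        while (sent < n)            /* fall back to a cold free if full */
            rte_pktmbuf_free(bufs[sent++]);
    }

    /* RX-lcore side: called periodically from the RX loop */
    static void rx_reclaim(void)
    {
        struct rte_mbuf *bufs[BURST];
        unsigned n = rte_ring_dequeue_burst(free_ring, (void **)bufs,
                                            BURST, NULL);
        for (unsigned i = 0; i < n; i++)
            rte_pktmbuf_free(bufs[i]);  /* free on the allocating lcore */
    }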

[dpdk-dev] imissed drop with mellanox connectx5

2021-07-21 Thread Yaron Illouz
Hi, We are trying to read from a 100G Mellanox ConnectX-5 NIC without drops at the NIC. All threads use core pinning and CPU isolation, and we use DPDK 19.11. I tried to apply all the configurations in https://fast.dpdk.org/doc/perf/DPDK_19_08_Mellanox_NIC_performance_report.pdf. We have a strange behavior, …
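
For reference, the imissed drops discussed in this thread can be observed by polling rte_eth_stats_get(): imissed counts packets the NIC dropped because the host did not drain the RX queues in time, while rx_nombuf counts mbuf allocation failures, which helps separate a slow consumer from an exhausted mempool. A minimal sketch, with the port id and one-second sampling interval chosen arbitrarily:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>
    #include <rte_cycles.h>

    static void watch_imissed(uint16_t port_id)
    {
        uint64_t prev = 0;
        for (;;) {
            struct rte_eth_stats stats;
            if (rte_eth_stats_get(port_id, &stats) == 0) {
                printf("imissed: +%" PRIu64 " (total %" PRIu64 "), "
                       "rx_nombuf: %" PRIu64 "\n",
                       stats.imissed - prev, stats.imissed,
                       stats.rx_nombuf);
                prev = stats.imissed;
            }
            rte_delay_ms(1000);  /* sample once per second */
        }
    }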