> > Occasionally, the number of packets to free from the work queue ends
> > perfectly on a boundary such that nb_free = 0 and pool = 0. This causes a
> > segfault as follows:
> >
> >  (gdb) bt
> >  #0  rte_mempool_default_cache (mp=<optimized out>, mp=<optimized out>,
> >      lcore_id=<optimized out>)
> >      at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1017
> >  #1  rte_mempool_put_bulk (n=0, obj_table=0x7f10deff2530, mp=0x0)
> >      at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1174
> >  #2  enic_free_wq_bufs (wq=wq@entry=0x7efabffcd5b0,
> >      completed_index=completed_index@entry=33)
> >      at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:429
> >  #3  0x00007f11e9c86e17 in enic_cleanup_wq (enic=<optimized out>,
> >      wq=wq@entry=0x7efabffcd5b0)
> >      at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:442
> >  #4  0x00007f11e9c86e5f in enic_xmit_pkts (tx_queue=0x7efabffcd5b0,
> >      tx_pkts=0x7f10deffb1a8, nb_pkts=<optimized out>)
> >      at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:470
> >  #5  0x00007f11e9e147ad in rte_eth_tx_burst (nb_pkts=<optimized out>,
> >      tx_pkts=0x7f10deffb1a8, queue_id=0, port_id=<optimized out>)
> >
> > This commit makes the enic wq driver match other drivers that call the
> > bulk free, by checking that there are actual packets to free.
> >
> > Fixes: 36935afbc53c ("net/enic: refactor Tx mbuf recycling")
> > CC: sta...@dpdk.org
> > Reported-by: Vincent S. Cojot <vco...@redhat.com>
> > Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1468631
> > Signed-off-by: Aaron Conole <acon...@redhat.com>
> > Reviewed-by: John Daley <johnd...@cisco.com>
Applied, thanks

With more context in the title:
net/enic: fix crash when freeing 0 packets to mempool
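
For anyone reading this thread without the patch in front of them, below is a
minimal sketch of the shape of the fix described in the commit message. It is
not the actual enic driver code: the function name free_wq_bufs_sketch, the
WQ_FREE_BATCH constant, and the use of rte_pktmbuf_prefree_seg() are
illustrative assumptions; only the final "if (nb_free > 0)" guard around
rte_mempool_put_bulk() corresponds to what the patch changes.

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Illustrative batch size, not the driver's actual value. */
    #define WQ_FREE_BATCH 64

    static void
    free_wq_bufs_sketch(struct rte_mbuf **bufs, unsigned int count)
    {
            void *free_bufs[WQ_FREE_BATCH];
            struct rte_mempool *pool = NULL;
            unsigned int nb_free = 0;
            unsigned int i;

            for (i = 0; i < count; i++) {
                    struct rte_mbuf *m = rte_pktmbuf_prefree_seg(bufs[i]);

                    if (m == NULL)
                            continue;

                    /* Flush the batch when the pool changes or it is full. */
                    if (nb_free > 0 &&
                        (m->pool != pool || nb_free == WQ_FREE_BATCH)) {
                            rte_mempool_put_bulk(pool, free_bufs, nb_free);
                            nb_free = 0;
                    }

                    pool = m->pool;
                    free_bufs[nb_free++] = m;
            }

            /*
             * The fix: only do the final bulk put when mbufs were actually
             * accumulated.  With nb_free == 0, pool is still NULL and
             * rte_mempool_put_bulk() would dereference it, which is the
             * crash shown in the backtrace above.
             */
            if (nb_free > 0)
                    rte_mempool_put_bulk(pool, free_bufs, nb_free);
    }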