On Wed, Feb 6, 2019 at 3:36 PM Eric Dumazet <eric.duma...@gmail.com> wrote:
>
>
> On 02/06/2019 03:00 PM, Cong Wang wrote:
> > mlx5_eq_cq_get() is called in the IRQ handler; the spinlock inside
> > sees a lot of contention when we test a heavy workload
> > with 60 RX queues and 80 CPUs, and this shows up clearly in the
> > flame graph.
> >
> > In fact, radix_tree_lookup() is perfectly fine under the RCU read lock,
> > so we don't have to take a spinlock on this hot path. This is pretty
> > much similar to commit 291c566a2891
> > ("net/mlx4_core: Fix racy CQ (Completion Queue) free"). Slow paths
> > are still serialized with the spinlock, and with synchronize_irq()
> > it should be safe to just move the fast path to the RCU read lock.
> >
> > This patch by itself reduces the latency by about 50% for our memcached
> > workload on the 4.14 kernel we tested. Upstream, as pointed out by Saeed,
> > this spinlock was reworked in commit 02d92f790364
> > ("net/mlx5: CQ Database per EQ"), so the difference could be smaller there.
> >
> > Cc: Saeed Mahameed <sae...@mellanox.com>
> > Cc: Tariq Toukan <tar...@mellanox.com>
> > Acked-by: Saeed Mahameed <sae...@mellanox.com>
> > Signed-off-by: Cong Wang <xiyou.wangc...@gmail.com>
> > ---
> >  drivers/net/ethernet/mellanox/mlx5/core/eq.c | 12 ++++++------
> >  1 file changed, 6 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > index ee04aab65a9f..7092457705a2 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> > @@ -114,11 +114,11 @@ static struct mlx5_core_cq *mlx5_eq_cq_get(struct mlx5_eq *eq, u32 cqn)
> >  	struct mlx5_cq_table *table = &eq->cq_table;
> >  	struct mlx5_core_cq *cq = NULL;
> >
> > -	spin_lock(&table->lock);
> > +	rcu_read_lock();
> >  	cq = radix_tree_lookup(&table->tree, cqn);
> >  	if (likely(cq))
> >  		mlx5_cq_hold(cq);
>
> I suspect that you need a variant that makes sure the refcount is not zero.
>
> ( Typical RCU rules apply )
>
>	if (cq && !refcount_inc_not_zero(&cq->refcount))
>		cq = NULL;
>
> See commit 6fa19f5637a6c22bc0999596bcc83bdcac8a4fa6 ("rds: fix refcount bug
> in rds_sock_addref") for a similar issue I fixed recently.
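For reference, the variant you describe would look roughly like the sketch
below (assuming mlx5_cq_hold() is a plain refcount_inc() on cq->refcount,
which is what would make the _not_zero form matter here):

	rcu_read_lock();
	cq = radix_tree_lookup(&table->tree, cqn);
	/* Under the usual RCU rules, the object found may already be on
	 * its way to being freed, so only take a reference if the
	 * refcount has not yet dropped to zero.
	 */
	if (cq && !refcount_inc_not_zero(&cq->refcount))
		cq = NULL;
	rcu_read_unlock();

But I don't think that check is needed in this case: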
synchronize_irq() is called before mlx5_cq_put(), so I don't see how readers
could observe a zero refcount. In the rds case you mention, the destroy path
doesn't wait for readers at all, which is why it has to check against zero;
that is what makes it different from this one. Roughly, the ordering on the
destroy side looks like the sketch below.
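(A sketch only; the function and field names, e.g. cq->irqn, are from this
era of the driver and may differ slightly in other trees.)

	/* Unpublish the CQ so that new lookups can no longer find it.
	 * Updaters are still serialized by table->lock.
	 */
	spin_lock(&table->lock);
	radix_tree_delete(&table->tree, cq->cqn);
	spin_unlock(&table->lock);

	/* Wait for any IRQ handler that may still be running inside
	 * mlx5_eq_cq_get(). A handler that won the lookup above has
	 * already taken its own reference via mlx5_cq_hold().
	 */
	synchronize_irq(cq->irqn);

	/* Only now drop the table's reference. The refcount can reach
	 * zero only after every reader has either missed the lookup or
	 * taken a reference, so no reader ever sees it at zero.
	 */
	mlx5_cq_put(cq);

Thanks.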