On Wed, Sep 04, 2019 at 09:50:15AM +0200, Eric Dumazet wrote:
> > +static int queue_count(struct mr_table *mrt)
> > +{
> > + struct list_head *pos;
> > + int count = 0;
> > +
> > + spin_lock_bh(&mfc_unres_lock);
> > + list_for_each(pos, &mrt->mfc_unres_queue)
> > + count++;
> > + spin_unlock_bh(&mfc_unres_lock);
> > +
> > + return count;
> > +}
>
> I guess that even if we remove a limit on the number of items, we probably should keep the atomic counter (no code churn, patch much easier to review...)
>
> Your patch could be a one liner really [1]
>
> Eventually replacing this linear list with an RB-tree, so that we can be on
> the safe side.
>
> [1]
> diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
> index c07bc82cbbe96d53d05c1665b2f03faa055f1084..313470f6bb148326b4afbc00d265b6a1e40d93bd 100644
> --- a/net/ipv4/ipmr.c
> +++ b/net/ipv4/ipmr.c
> @@ -1134,8 +1134,8 @@ static int ipmr_cache_unresolved(struct mr_table *mrt, vifi_t vifi,
>
> if (!found) {
> /* Create a new entry if allowable */
> - if (atomic_read(&mrt->cache_resolve_queue_len) >= 10 ||
> - (c = ipmr_cache_alloc_unres()) == NULL) {
> + c = ipmr_cache_alloc_unres();
> + if (!c) {
> spin_unlock_bh(&mfc_unres_lock);
>
> kfree_skb(skb);
Hmm, that does look clearer and easier to review.
Hi David, Alexey,
What do you think? If you also agree, I can post a new version of the patch.
Thanks
Hangbin