On 22/10/15(Thu) 20:41, Bret Lambert wrote:
> On Thu, Oct 22, 2015 at 07:42:24PM +0200, Martin Pieuchot wrote:
> > Now that we have a single refcounting mechanism for route entries, I'd
> > like to use atomic operations and grab the KERNEL_LOCK only if a CPU is
> > dropping the last reference on an entry.
> > 
> > Currently this only matters for MPLS.  I intentionally use atomic_* ops
> > because I'd like to be able to see if a counter goes negative.
> > 
> > For symmetry reasons I'm also moving the KERNEL_LOCK() inside rtalloc().
> > These two functions are my current targets.
> > 
> > Comments, oks?
> 
> One comment inline...

Thanks for looking into this.

> >  void
> > @@ -348,14 +350,16 @@ rtfree(struct rtentry *rt)
> >     if (rt == NULL)
> >             return;
> >  
> > -   if (--rt->rt_refcnt <= 0) {
> > +   if (atomic_dec_int_nv(&rt->rt_refcnt) <= 0) {
> >             KASSERT(!ISSET(rt->rt_flags, RTF_UP));
> >             KASSERT(!RT_ROOT(rt));
> > -           rttrash--;
> > +           atomic_dec_int(&rttrash);
> 
> Are you using rttrash for debugging? It's unused anywhere else,
> and if it's just incrementing and decrementing a counter only
> used for debugging (or possibly not at all!), it might be
> better to put it in DEBUG kernels, or just remove it entirely.

Yes, I use it for debugging.  I'm all for such cleanup (including
removing rt_use, exporting the real refcnt to userland and using
the refcnt(9) API) once possible bugs are found.
