On Fri, 2016-05-20 at 05:44 -0700, Eric Dumazet wrote:
> On Thu, 2016-05-19 at 19:45 -0700, Eric Dumazet wrote:
> > On Thu, 2016-05-19 at 18:50 -0700, Cong Wang wrote:
> > > On Thu, May 19, 2016 at 5:35 AM, Eric Dumazet <eric.duma...@gmail.com> 
> > > wrote:
> > > >
> > > > These stats are using u64 or u32 fields, so reading integral values
> > > > should not prevent writers from doing concurrent updates if the kernel
> > > > arch is a 64bit one.
> > > >
> > > > Being able to atomically fetch all counters like packets and bytes sent
> > > > at the expense of interfering in fast path (queue and dequeue packets)
> > > > is simply not worth the pain, as the values are generally stale after 1
> > > > usec.
> > > 
> > > I think one purpose of this lock is to make sure we have an atomic
> > > snapshot of these counters as a whole. IOW, we may need another
> > > lock rather than the qdisc root lock to guarantee this.
> > 
> > Right, this was stated in the changelog.
> > 
> > I played a bit with changing qdisc->__state to a seqcount.
> > 
> > But this would add 2 additional smp_wmb() barriers.
> 
> Although this would allow the mechanism to be used on both 32bit and
> 64bit kernels.
> 
> This would also add LOCKDEP annotations which can be nice for debugging.
> 
> Also the seqcount value >> 1 would give us the number of __qdisc_run()
> invocations, and we could compute packets/(seqcount>>1) to get the
> average number of packets processed per round.

Tricky, since sch_direct_xmit() releases the qdisc spinlock and grabs it
again, while still owning the 'running' seqcount.

Needs more LOCKDEP tricks ;)
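
For reference, a minimal sketch of the idea being discussed (this is not
the actual patch; the struct, field and helper names below are made up
for illustration): protect the per-qdisc counters with a seqcount_t so
that a stats reader can get a consistent snapshot without taking the
qdisc root lock, at the cost of the two extra write barriers mentioned
above.

#include <linux/seqlock.h>

/* Illustrative only: stand-in for the relevant parts of struct Qdisc */
struct sketch_qdisc {
	seqcount_t	running;	/* odd while a __qdisc_run() is in flight */
	u64		bytes;
	u64		packets;
};

/* Writer side: called with the qdisc root lock held */
static void sketch_run_begin(struct sketch_qdisc *q)
{
	write_seqcount_begin(&q->running);	/* first smp_wmb() */
}

static void sketch_run_end(struct sketch_qdisc *q)
{
	write_seqcount_end(&q->running);	/* second smp_wmb() */
}

/* Reader side: consistent snapshot without the root lock */
static void sketch_read_stats(const struct sketch_qdisc *q,
			      u64 *bytes, u64 *packets)
{
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&q->running);
		*bytes = q->bytes;
		*packets = q->packets;
	} while (read_seqcount_retry(&q->running, seq));
}

/*
 * The sequence is bumped twice per begin/end pair, so >> 1 counts
 * completed runs, which gives the average packets per run mentioned
 * earlier in the thread.
 */
static u64 sketch_avg_pkts_per_run(const struct sketch_qdisc *q)
{
	u64 runs = raw_read_seqcount(&q->running) >> 1;

	return runs ? q->packets / runs : 0;
}

The sch_direct_xmit() case is exactly where this sketch falls short:
the spinlock is dropped and re-acquired while the seqcount write
section stays open, which is what would need the extra LOCKDEP
annotations.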
