On Fri, Jan 17, 2014 at 7:26 AM, Hugh Dickins <[email protected]> wrote:
> Commit 74e72f894d56 ("lib/percpu_counter.c: fix __percpu_counter_add()")
> looked very plausible, but its arithmetic was badly wrong: obvious once
> you see the fix, but maddening to get there from the weird tmpfs ENOSPCs
>
> Signed-off-by: Hugh Dickins <[email protected]>
> ---
>
>  lib/percpu_counter.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- 3.13-rc8+/lib/percpu_counter.c	2014-01-15 09:53:27.768111792 -0800
> +++ linux/lib/percpu_counter.c	2014-01-16 14:58:54.156555308 -0800
> @@ -82,7 +82,7 @@ void __percpu_counter_add(struct percpu_
>  		unsigned long flags;
>  		raw_spin_lock_irqsave(&fbc->lock, flags);
>  		fbc->count += count;
> -		__this_cpu_sub(*fbc->counters, count);
> +		__this_cpu_sub(*fbc->counters, count - amount);
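
For anyone else puzzling over the arithmetic, here is a sketch of the slow
path of __percpu_counter_add() with the fix applied. It is reconstructed
from the quoted hunk plus my reading of lib/percpu_counter.c, so the lines
around the hunk may not match the source exactly:

	/*
	 * Sketch of __percpu_counter_add() with the fix applied
	 * (surrounding lines reconstructed, may differ from the file).
	 */
	void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
	{
		s64 count;

		preempt_disable();
		/* count = old per-cpu delta + the new amount */
		count = __this_cpu_read(*fbc->counters) + amount;
		if (count >= batch || count <= -batch) {
			unsigned long flags;

			raw_spin_lock_irqsave(&fbc->lock, flags);
			/* fold the whole local delta into the global count */
			fbc->count += count;
			/*
			 * The per-cpu counter still holds only the old delta,
			 * i.e. count - amount, because amount was never added
			 * to it.  Subtracting count - amount resets it to zero,
			 * while subtracting the full count (the buggy version)
			 * left a stray -amount behind on every slow-path fold.
			 */
			__this_cpu_sub(*fbc->counters, count - amount);
			raw_spin_unlock_irqrestore(&fbc->lock, flags);
		} else {
			this_cpu_add(*fbc->counters, amount);
		}
		preempt_enable();
	}

So the global count drifts by -amount each time the slow path fires, which
is exactly the kind of slow leak that shows up as spurious tmpfs ENOSPC.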
Hmm, you are right, thanks for the fix. I did test the patch by re-inserting the 'null_blk' module after lots of IOs, but now I see why that missed the bug: the scale of my test was too small to push a per-cpu delta up to the batch of 1000000, so only the fast path was exercised and verified, and the slow path was never hit.

Thanks,
--
Ming Lei

