Jesper Dangaard Brouer <bro...@redhat.com> wrote:
> > I take 2) back.  It's wrong to do this; for large NR_CPU values it
> > would even overflow.
> 
> Alternatively solution 3:
> Why do we want to maintain a (4MBytes) memory limit, across all CPUs?
> Couldn't we just allow each CPU to have a memory limit?

Consider ipv4, ipv6, nf ipv6 defrag, 6lowpan, and 8k CPUs... This would
render any limit useless.

> > > To me it looks like we/I have been using the wrong API for comparing
> > > against percpu_counters.  I guess we should have used 
> > > __percpu_counter_compare().  
> > 
> > Are you sure?  For liujian's use case (64 cores) it looks like we would
> > always fall through to percpu_counter_sum(), so we would eat the
> > spinlock_irqsave cost on every compare.
> > 
> > Before we entertain this we should consider reducing 
> > frag_percpu_counter_batch
> > to a smaller value.
> 
> Yes, I agree, we really need to lower frag_percpu_counter_batch.
> As you say, otherwise the __percpu_counter_compare() call will be
> useless (on systems with >= 32 CPUs).
> 
> I think the bug is in frag_mem_limit().  It just reads the global
> counter (fbc->count), without considering that other CPUs can each
> hold up to 130K that hasn't been subtracted yet (with the 3M low
> limit, this becomes dangerous at >= 24 CPUs).
> __percpu_counter_compare() does the right thing: it takes the number
> of (online) CPUs and the batch size into account.

Right, I think we should at the very least use __percpu_counter_compare()
before denying a new frag queue allocation request.

I'll create a patch.
