On Tue, 2018-02-13 at 11:34 -0600, Dennis Zhou wrote:
> Hi Eric,
> 
> On Tue, Feb 13, 2018 at 05:35:26AM -0800, Eric Dumazet wrote:
> > 
> > Also I would consider using this fix as I had warnings of cpus being
> > stuck there for more than 50 ms :
> > 
> > 
> > diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c
> > index 9158e5a81391ced4e268e3d5dd9879c2bc7280ce..6309b01ceb357be01e857e5f899429403836f41f 100644
> > --- a/mm/percpu-vm.c
> > +++ b/mm/percpu-vm.c
> > @@ -92,6 +92,7 @@ static int pcpu_alloc_pages(struct pcpu_chunk *chunk,
> >                     *pagep = alloc_pages_node(cpu_to_node(cpu), gfp, 0);
> >                     if (!*pagep)
> >                             goto err;
> > +                   cond_resched();
> >             }
> >     }
> >     return 0;
> > 
> > 
> 
> This function is called from pcpu_populate_chunk() while holding
> pcpu_alloc_mutex, and that happens in two scenarios: first, when an
> allocation hits a region without backing pages, and second, when the
> workqueue item is scheduled to replenish the number of empty pages.
> So I don't think this is a good idea.
> 

That _is_ a good idea: we already do this in vmalloc(), and vmalloc()
can absolutely be called while mutexes are held.
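
Roughly, the pattern in the vmalloc() allocation loop looks like this
(a simplified sketch, not the exact mm/vmalloc.c code):

/*
 * One page is allocated per iteration; cond_resched() gives the
 * scheduler a chance to run between allocations.  This is safe even
 * though callers may hold mutexes: cond_resched() only yields the
 * CPU, it does not sleep waiting on a lock.
 */
for (i = 0; i < nr_pages; i++) {
        struct page *page = alloc_page(gfp_mask);

        if (!page)
                goto fail;
        pages[i] = page;
        cond_resched();
}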


> My understanding is that if we're seeing warnings here, we're
> struggling to find backing pages. I believe adding __GFP_NORETRY on
> the workqueue path, as Tejun mentioned above, would help with the
> warnings as well, but not if they are caused by the allocation path.
> 

That is a separate concern.
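
(For reference, the workqueue-path change being discussed would look
roughly like the following; this is a sketch only, and the exact
signature of pcpu_populate_chunk() in mm/percpu.c may differ:)

/*
 * Hypothetical sketch: pass __GFP_NORETRY only on the async balance
 * path, so the background replenish gives up quickly under memory
 * pressure instead of retrying reclaim, while normal allocations
 * keep plain GFP_KERNEL.
 */
ret = pcpu_populate_chunk(chunk, page_start, page_end,
                          GFP_KERNEL | __GFP_NORETRY);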

My patch simply avoids latency spikes when huge percpu allocations
are happening on systems with, say, 1024 CPUs.
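
To make the scale concrete, here is the loop structure around the
hunk above (a sketch; the 8-pages-per-CPU figure is only
illustrative):

/*
 * pcpu_alloc_pages() walks every possible CPU for every page in the
 * requested region.  With 1024 possible CPUs and an allocation
 * spanning, say, 8 pages per CPU, that is 8192 back-to-back
 * alloc_pages_node() calls under pcpu_alloc_mutex.  Without the
 * cond_resched() there is no scheduling point at all in that window
 * on a non-preemptible kernel, which is what trips the 50ms "stuck"
 * warnings.
 */
for_each_possible_cpu(cpu) {
        for (i = page_start; i < page_end; i++) {
                struct page **pagep = &pages[pcpu_page_idx(cpu, i)];

                *pagep = alloc_pages_node(cpu_to_node(cpu), gfp, 0);
                if (!*pagep)
                        goto err;
                cond_resched();
        }
}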

