On Tue, 2021-01-12 at 16:37 +0200, Vladimir Oltean wrote:
> On Mon, Jan 11, 2021 at 03:38:49PM -0800, Saeed Mahameed wrote:
> > GFP_ATOMIC is a little bit aggressive, especially when user daemons
> > are periodically reading stats. This can be avoided.
> > 
> > You can pre-allocate with GFP_KERNEL an array with an "approximate"
> > size, then fill the array up with whatever slaves the bond has at
> > that moment. num_of_slaves can be less than, equal to, or more than
> > the array you just allocated, but we shouldn't care...
> > 
> > something like:
> > 
> > rcu_read_lock();
> > nslaves = bond_get_num_slaves(bond);
> > rcu_read_unlock();
> > 
> > sarray = kcalloc(nslaves, sizeof(struct bonding_slave_dev),
> >                  GFP_KERNEL);
> > 
> > rcu_read_lock();
> > bond_fill_slaves_array(bond, sarray); /* also do: dev_hold() */
> > rcu_read_unlock();
> > 
> > bond_get_slaves_array_stats(sarray);
> > 
> > bond_put_slaves_array(sarray);
> 
> I don't know what to say about acquiring RCU read lock twice and
> traversing the list of interfaces three or four times.

You can optimize this by having the bond track num_slaves, so the
counting pass becomes a simple read instead of a list traversal.

> On the other hand, what's the worst that can happen if the GFP_ATOMIC
> memory allocation fails. It's not like there is any data loss.
> User space will retry when there is less memory pressure.

Anyway, up to you. I just don't like using GFP_ATOMIC when it can be
avoided, especially for periodic jobs like stats polling.
