On 03/20, Loktionov, Aleksandr wrote:
> 
> 
> > -----Original Message-----
> > From: Intel-wired-lan <[email protected]> On Behalf
> > Of Stanislav Fomichev
> > Sent: Friday, March 20, 2026 2:25 AM
> > To: [email protected]
> > Cc: [email protected]; [email protected]; [email protected];
> > [email protected]; [email protected]; [email protected];
> > [email protected]; [email protected];
> > [email protected]; [email protected]; Nguyen, Anthony
> > L <[email protected]>; Kitszel, Przemyslaw
> > <[email protected]>; [email protected]; [email protected];
> > [email protected]; [email protected]; [email protected];
> > [email protected]; [email protected]; [email protected];
> > [email protected]; [email protected]; [email protected]; Keller,
> > Jacob E <[email protected]>; [email protected];
> > [email protected]; [email protected]; Loktionov, Aleksandr
> > <[email protected]>; [email protected]; linux-
> > [email protected]; [email protected]; intel-wired-
> > [email protected]; [email protected]; linux-
> > [email protected]; [email protected];
> > [email protected]
> > Subject: [Intel-wired-lan] [PATCH net-next v3 03/13] net: introduce
> > ndo_set_rx_mode_async and dev_rx_mode_work
> > 
> > Add ndo_set_rx_mode_async callback that drivers can implement instead
> > of the legacy ndo_set_rx_mode. The legacy callback runs under the
> > netif_addr_lock spinlock with BHs disabled, preventing drivers from
> > sleeping. The async variant runs from a work queue with rtnl_lock and
> > netdev_lock_ops held, in fully sleepable context.
> > 
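(For illustration, the dispatch choice described above can be modeled in plain C. This is a userspace sketch with mocked-up types: the names `ndo_set_rx_mode_async` and `rx_mode_wq` come from the patch, but the `fake_*` types, the counters, and the exact callback signatures here are simplifications, not the real kernel definitions.)

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for struct net_device / net_device_ops. */
struct fake_dev;
struct fake_ops {
	void (*set_rx_mode)(struct fake_dev *dev);       /* legacy, atomic */
	int  (*set_rx_mode_async)(struct fake_dev *dev); /* new, sleepable */
};
struct fake_dev {
	const struct fake_ops *ops;
	int work_queued;   /* models queue_work(rx_mode_wq, ...) */
	int inline_calls;  /* legacy callback invocations */
};

/* Mirrors the control flow of __dev_set_rx_mode() after this patch:
 * if the driver provides the async callback, defer to a work item and
 * return early; otherwise fall through to the legacy inline call. */
static void fake_set_rx_mode(struct fake_dev *dev)
{
	if (dev->ops->set_rx_mode_async) {
		dev->work_queued++;
		return;          /* driver runs later, in sleepable context */
	}
	if (dev->ops->set_rx_mode)
		dev->ops->set_rx_mode(dev);
}

static void legacy_cb(struct fake_dev *dev) { dev->inline_calls++; }
static int async_cb(struct fake_dev *dev) { (void)dev; return 0; }
```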
> > When __dev_set_rx_mode() sees ndo_set_rx_mode_async, it schedules
> > dev_rx_mode_work instead of calling the driver inline. The work
> > function takes two snapshots of each address list (uc/mc) under the
> > addr_lock, then drops the lock and calls the driver with the work
> > copies. After the driver returns, it reconciles the snapshots back to
> > the real lists under the lock.
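(A rough userspace model of that snapshot-and-reconcile scheme, with a pthread mutex standing in for netif_addr_lock. It is simplified to a single uc snapshot, and the reconcile step here just copies per-address driver state back under the lock; the real dev_rx_mode_work() is more involved than this.)

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

#define MAX_ADDRS 8

struct addr_list {
	int count;
	unsigned char addrs[MAX_ADDRS][6];
	int synced[MAX_ADDRS];     /* per-address driver state */
};

struct fake_dev {
	pthread_mutex_t addr_lock; /* stands in for netif_addr_lock */
	struct addr_list uc;       /* the "real" list */
};

/* Models the shape of dev_rx_mode_work(): snapshot under the lock,
 * call the driver on the copy with the lock dropped (so the driver may
 * sleep), then merge the driver-updated state back under the lock. */
static void fake_rx_mode_work(struct fake_dev *dev,
			      void (*driver)(struct addr_list *uc))
{
	struct addr_list snap;

	pthread_mutex_lock(&dev->addr_lock);
	snap = dev->uc;                     /* snapshot */
	pthread_mutex_unlock(&dev->addr_lock);

	driver(&snap);                      /* sleepable: no lock held */

	pthread_mutex_lock(&dev->addr_lock);
	memcpy(dev->uc.synced, snap.synced, /* reconcile driver state */
	       sizeof(dev->uc.synced));
	pthread_mutex_unlock(&dev->addr_lock);
}

/* A toy "driver" that marks every snapshotted address as programmed. */
static void mark_all_synced(struct addr_list *uc)
{
	for (int i = 0; i < uc->count; i++)
		uc->synced[i] = 1;
}
```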
> > 
> > Reviewed-by: Aleksandr Loktionov <[email protected]>
> > Signed-off-by: Stanislav Fomichev <[email protected]>
> > ---
> >  Documentation/networking/netdevices.rst |  8 +++
> >  include/linux/netdevice.h               | 20 ++++++
> >  net/core/dev.c                          | 95 +++++++++++++++++++++++--
> >  3 files changed, 116 insertions(+), 7 deletions(-)
> > 
> > diff --git a/Documentation/networking/netdevices.rst
> > b/Documentation/networking/netdevices.rst
> > index 35704d115312..dc83d78d3b27 100644
> > --- a/Documentation/networking/netdevices.rst
> > +++ b/Documentation/networking/netdevices.rst
> > @@ -289,6 +289,14 @@ struct net_device synchronization rules
> >  ndo_set_rx_mode:
> >     Synchronization: netif_addr_lock spinlock.
> >     Context: BHs disabled
> 
> ...
> 
> > to device
> > + * and configure RX filtering.
> > + * @dev: device
> > + *
> > + * When the device doesn't support unicast filtering it is put in promiscuous
> > + * mode while unicast addresses are present.
> >   */
> >  void __dev_set_rx_mode(struct net_device *dev)  {
> >     const struct net_device_ops *ops = dev->netdev_ops;
> > 
> >     /* dev_open will call this function so the list will stay sane. */
> > -   if (!(dev->flags&IFF_UP))
> > +   if (!netif_up_and_present(dev))
> >             return;
> > 
> > -   if (!netif_device_present(dev))
> > +   if (ops->ndo_set_rx_mode_async) {
> > +           queue_work(rx_mode_wq, &dev->rx_mode_work);
> >             return;
> This early return skips the legacy core fallback below.
> Before this patch, __dev_set_rx_mode() continued into the
> existing unicast-filter handling when the device did not
> advertise IFF_UNICAST_FLT.
> 
> After this patch, any driver that implements
> ndo_set_rx_mode_async but does not set IFF_UNICAST_FLT
> will never hit that fallback path.

I believe this is addressed later in "net: move promiscuity handling into
dev_rx_mode_work"? That should take care of doing __dev_set_promiscuity
for !IFF_UNICAST_FLT+ndo_set_rx_mode_async. Not sure if there is a
better way to rearrange the chunks in the patches.

        if (ops->ndo_set_rx_mode_async) {
                ...

+               promisc_inc = dev_uc_promisc_update(dev);
+
+               netif_addr_unlock_bh(dev);
+       } else {
+               netif_addr_lock_bh(dev);
+               promisc_inc = dev_uc_promisc_update(dev);
+               netif_addr_unlock_bh(dev);
+       }
+
+       if (promisc_inc)
+               __dev_set_promiscuity(dev, promisc_inc, false);
+
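As a rough userspace model of that arrangement: the helper computes a promiscuity delta under the addr lock, and the caller applies it afterwards via __dev_set_promiscuity(). The `fake_*` names and fields below are simplifications; that `dev_uc_promisc_update()` returns a +1/0/-1 delta is an inference from how `promisc_inc` is used above, not something quoted from the patch.

```c
#include <assert.h>

#define IFF_UNICAST_FLT 0x1  /* device can filter unicast in hardware */

struct fake_dev {
	unsigned int priv_flags;
	int uc_count;        /* number of secondary unicast addresses */
	int uc_promisc;      /* did we already bump promiscuity for uc? */
};

/* Returns the promiscuity delta to apply: +1 when unicast addresses
 * appear on a device without unicast filtering, -1 when the last one
 * goes away, 0 otherwise.  The caller feeds this delta to the
 * equivalent of __dev_set_promiscuity() outside the addr lock. */
static int fake_uc_promisc_update(struct fake_dev *dev)
{
	int want = dev->uc_count > 0 &&
		   !(dev->priv_flags & IFF_UNICAST_FLT);

	if (want && !dev->uc_promisc) {
		dev->uc_promisc = 1;
		return 1;
	}
	if (!want && dev->uc_promisc) {
		dev->uc_promisc = 0;
		return -1;
	}
	return 0;
}
```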
