On 8/6/20 2:55 AM, Felix Fietkau wrote:
> For some drivers (especially 802.11 drivers), doing a lot of work in the NAPI
> poll function does not perform well. Since NAPI poll is bound to the CPU it
> was scheduled from, we can easily end up with a few very busy CPUs spending
> most of their time in softirq/ksoftirqd and some idle ones.
> 
> Introduce threaded NAPI for such drivers based on a workqueue. The API is the
> same except for using netif_threaded_napi_add instead of netif_napi_add.
> 
> In my tests with mt76 on MT7621, using threaded NAPI plus a thread for tx
> scheduling improves LAN->WLAN bridging throughput by 10-50%. Throughput
> without threaded NAPI is wildly inconsistent, depending on which CPU runs
> the tx scheduling thread.
> 
> With threaded NAPI, throughput seems stable and consistent (and higher than
> the best results I got without it).
> 
> Based on a patch by Hillf Danton
> 
> Cc: Hillf Danton <hdan...@sina.com>
> Signed-off-by: Felix Fietkau <n...@nbd.name>

...

> index e353b822bb15..99233e86f4c5 100644
> --- a/net/core/net-sysfs.c
> +++ b/net/core/net-sysfs.c
> @@ -471,6 +471,47 @@ static ssize_t proto_down_store(struct device *dev,
>  }
>  NETDEVICE_SHOW_RW(proto_down, fmt_dec);
>  


This belongs to a separate patch, with correct attribution.

> +static int change_napi_threaded(struct net_device *dev, unsigned long val)
> +{
> +     struct napi_struct *napi;
> +
> +     if (list_empty(&dev->napi_list))
> +             return -EOPNOTSUPP;
> +     list_for_each_entry(napi, &dev->napi_list, dev_list) {
> +             if (val)
> +                     set_bit(NAPI_STATE_THREADED, &napi->state);
> +             else
> +                     clear_bit(NAPI_STATE_THREADED, &napi->state);
> +     }
> +
> +     return 0;
> +}
> +
> +static ssize_t napi_threaded_store(struct device *dev,
> +                             struct device_attribute *attr,
> +                             const char *buf, size_t len)
> +{
> +     return netdev_store(dev, attr, buf, len, change_napi_threaded);
> +}
> +
> +static ssize_t napi_threaded_show(struct device *dev,
> +                               struct device_attribute *attr,
> +                               char *buf)
> +{
> +     struct net_device *netdev = to_net_dev(dev);
> +     struct napi_struct *napi;
> +     bool enabled = false;
> +


You probably want RTNL protection here; the list could change under us
otherwise.

The write side is already protected in netdev_store().
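
Untested sketch of what I mean for the read side, following the
rtnl_trylock()/restart_syscall() pattern used by the other show handlers in
this file:

```c
static ssize_t napi_threaded_show(struct device *dev,
				  struct device_attribute *attr,
				  char *buf)
{
	struct net_device *netdev = to_net_dev(dev);
	struct napi_struct *napi;
	bool enabled = false;

	/* Take the rtnl lock so napi_list cannot change while we walk it. */
	if (!rtnl_trylock())
		return restart_syscall();

	list_for_each_entry(napi, &netdev->napi_list, dev_list) {
		if (test_bit(NAPI_STATE_THREADED, &napi->state))
			enabled = true;
	}

	rtnl_unlock();

	return sprintf(buf, fmt_dec, enabled);
}
```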


> +     list_for_each_entry(napi, &netdev->napi_list, dev_list) {
> +             if (test_bit(NAPI_STATE_THREADED, &napi->state))
> +                     enabled = true;
> +     }
> +
> +     return sprintf(buf, fmt_dec, enabled);
> +}
> +DEVICE_ATTR_RW(napi_threaded);
> +
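
Minor note: with the attribute in net_class_attrs, this becomes a per-netdev
sysfs knob, e.g. (eth0 is a placeholder device name):

```shell
# enable threaded NAPI for all NAPI instances of the device
echo 1 > /sys/class/net/eth0/napi_threaded
# read back; reports 1 if any instance has NAPI_STATE_THREADED set
cat /sys/class/net/eth0/napi_threaded
```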
>  static ssize_t phys_port_id_show(struct device *dev,
>                                struct device_attribute *attr, char *buf)
>  {
> @@ -563,6 +604,7 @@ static struct attribute *net_class_attrs[] __ro_after_init = {
>       &dev_attr_tx_queue_len.attr,
>       &dev_attr_gro_flush_timeout.attr,
>       &dev_attr_napi_defer_hard_irqs.attr,
> +     &dev_attr_napi_threaded.attr,
>       &dev_attr_phys_port_id.attr,
>       &dev_attr_phys_port_name.attr,
>       &dev_attr_phys_switch_id.attr,
> 
