On Tue, 2017-01-24 at 15:00 -0800, Stephen Hemminger wrote:
> On Tue, 24 Jan 2017 14:39:19 -0800
> Eric Dumazet <eric.duma...@gmail.com> wrote:
> 
> > On Tue, 2017-01-24 at 13:06 -0800, Stephen Hemminger wrote:
> > > To improve performance, netvsc can call network stack directly and
> > > avoid the local backlog queue. This is safe since incoming packets are
> > > handled in softirq context already because the receive function
> > > callback is called from a tasklet.  
> > 
> > Is this tasklet implementing a limit or something ?
> 
> The ring only holds a fixed amount of data so there is a limit but
> it is quite large.
> 
> > 
> > netif_rx() queues packets to the backlog, which is processed later by
> > net_rx_action() like other NAPI, with limit of 64 packets per round.
> 
> Since netvsc_receive has to copy all incoming data, it is a bottleneck
> in itself. By the time net_rx_action is invoked, the copied data is no
> longer cache-hot.
> 
> > 
> > Calling netif_receive_skb() directly means you escape this mechanism
> > that fairly distributes the CPU cycles among multiple NAPI instances.
> > 
> > I do not see range_cnt being capped in netvsc_receive()
> 
> There is no cap. NAPI is coming and will help.

This was my point really.

If you call netif_receive_skb() in a loop, it is not NAPI anymore, and
it becomes a potential latency spike point: BH stays disabled for the
whole burst while other queues depending on this CPU go unserviced.

(Whereas softirqs processing NAPI (including the netif_rx() backlog)
can be scheduled to ksoftirqd.)
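
To make that concrete, here is a rough sketch of the pattern I am
worried about. This is not the actual netvsc code, the example_* names
are made up: a tasklet-context receive path that feeds every ring entry
straight into netif_receive_skb().

/* Hypothetical tasklet-context receive path, for illustration only;
 * struct example_ring and example_ring_next_skb() are made-up names.
 */
static void example_channel_callback(struct example_ring *ring)
{
        struct sk_buff *skb;

        /* Runs in softirq (tasklet) context with BH disabled.  Every
         * packet in the ring is pushed up the stack right here, so a
         * large burst means a long stretch with no budget and no
         * chance to yield to other NAPI users of this CPU.
         */
        while ((skb = example_ring_next_skb(ring)) != NULL)
                netif_receive_skb(skb);

        /* netif_rx(skb) here instead would queue to the per-CPU
         * backlog, which net_rx_action() drains later like any other
         * NAPI instance, 64 packets per round.
         */
}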

Not a big deal, I only want to point out that netif_receive_skb() can be
dangerous if used in a loop.
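
And for contrast, roughly what the NAPI conversion buys you. Again only
a sketch, not the pending netvsc patch; the example_* names are
invented and netif_napi_add() is the current signature. The budget
argument caps the work per poll, net_rx_action() round-robins between
NAPI instances on this CPU, and the whole softirq can be handed off to
ksoftirqd under load.

struct example_ring {
        struct napi_struct napi;
        /* ... ring state ... */
};

/* Illustrative NAPI poll. */
static int example_poll(struct napi_struct *napi, int budget)
{
        struct example_ring *ring =
                container_of(napi, struct example_ring, napi);
        struct sk_buff *skb;
        int work_done = 0;

        /* Stop after "budget" packets even if the ring still has
         * data; net_rx_action() will poll us again on a later round.
         */
        while (work_done < budget &&
               (skb = example_ring_next_skb(ring)) != NULL) {
                netif_receive_skb(skb);
                work_done++;
        }

        if (work_done < budget)
                napi_complete_done(napi, work_done);

        return work_done;
}

/* Registered once at probe time with something like
 *      netif_napi_add(ndev, &ring->napi, example_poll, NAPI_POLL_WEIGHT);
 * and kicked from the channel callback via napi_schedule(&ring->napi).
 */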


