On 2019/6/14 2:00 PM, Michael S. Tsirkin wrote:
On Fri, Jun 14, 2019 at 11:28:59AM +0800, Jason Wang wrote:
On 2019/6/14 12:24 AM, Willem de Bruijn wrote:
From: Willem de Bruijn <will...@google.com>
NAPI tx mode improves TCP behavior by enabling TCP small queues (TSQ).
TSQ reduces queuing ("bufferbloat") and burstiness.
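For background on the mechanism: in tx NAPI mode, tx completions are cleaned
from a NAPI poll scheduled by the tx interrupt rather than opportunistically
on the next transmit, so sent skbs are freed promptly and TSQ's per-socket
byte budget is released without waiting for more traffic. A sketch of the
interrupt-side dispatch, loosely following the virtio_net tx NAPI patches
(not verbatim upstream code):

    static void skb_xmit_done(struct virtqueue *vq)
    {
            struct virtnet_info *vi = vq->vdev->priv;
            struct napi_struct *napi = &vi->sq[vq2txq(vq)].napi;

            /* Suppress further interrupts until the poll runs. */
            virtqueue_disable_cb(vq);

            if (napi->weight)
                    /* tx napi: clean completions from the napi poll. */
                    virtqueue_napi_schedule(napi, vq);
            else
                    /* legacy: completions are cleaned from start_xmit. */
                    netif_wake_subqueue(vi->dev, vq2txq(vq));
    }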
Previous measurements have shown significant improvement for
TCP_STREAM style workloads, such as those in commit 86a5df1495cc
("Merge branch 'virtio-net-tx-napi'").
There has been uncertainty about possible small latency regressions
due to increased reliance on tx interrupts.
The above results did not show such a regression, nor did I observe
one when rerunning TCP_RR on Linux 5.1 this week on a pair of guests
in the same rack. This may depend on other settings, notably
interrupt coalescing.
In the unlikely case of a regression, a credible runtime solution has
landed: NAPI tx mode can be toggled with ethtool -C tx-frames [0|1] as of
commit 0c465be183c7 ("virtio_net: ethtool tx napi configuration").
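Concretely, disabling tx NAPI at runtime is ethtool -C <iface> tx-frames 0,
and re-enabling it is tx-frames 1. The driver-side mapping in that commit
has roughly this shape (a sketch, not the verbatim upstream handler):

    static int virtnet_set_coalesce(struct net_device *dev,
                                    struct ethtool_coalesce *ec)
    {
            struct virtnet_info *vi = netdev_priv(dev);
            int i, napi_weight;

            /* tx-frames 0 disables tx NAPI, 1 enables it; reject the rest. */
            if (ec->tx_max_coalesced_frames > 1)
                    return -EINVAL;

            napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
            if (napi_weight ^ vi->sq[0].napi.weight) {
                    /* Only switch modes while the device is down. */
                    if (dev->flags & IFF_UP)
                            return -EBUSY;
                    for (i = 0; i < vi->max_queue_pairs; i++)
                            vi->sq[i].napi.weight = napi_weight;
            }
            return 0;
    }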
NAPI tx mode has been the default in Google Container-Optimized OS
(COS) for over half a year, as of release M70 in October 2018,
without any negative reports.
Link: https://marc.info/?l=linux-netdev&m=149305618416472
Link: https://lwn.net/Articles/507065/
Signed-off-by: Willem de Bruijn <will...@google.com>
---
Now that we have ethtool support and real production deployment,
it seemed like a good time to revisit this discussion.
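The diff is not quoted above; the change under discussion amounts to
flipping the default of the napi_tx module parameter in
drivers/net/virtio_net.c. A sketch of the expected one-line change (not
the applied patch itself):

    -static bool csum = true, gso = true, napi_tx;
    +static bool csum = true, gso = true, napi_tx = true;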
I agree with enabling it by default, but we need input from Michael. One
possible issue is that we may see some regression on older machines without
APICv (where injecting the extra tx interrupts into the guest is more
expensive), but considering that most modern CPUs have this feature, it
probably doesn't matter.
Thanks
Right. If the issue does arise we can always add e.g. a feature flag
to control the default from the host.
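Such a flag is purely hypothetical at this point, but one possible shape is
a host-advertised virtio feature bit that the guest driver consults when
picking its default; the feature name and bit number below are made up for
illustration:

    /* Hypothetical: VIRTIO_NET_F_TX_NAPI_DEFAULT does not exist; the name
     * and bit number are made up for illustration only.
     */
    #define VIRTIO_NET_F_TX_NAPI_DEFAULT 63

    static bool virtnet_want_napi_tx(struct virtio_device *vdev)
    {
            /* Let the host's advertised preference pick the default; an
             * explicit napi_tx module parameter would override this in a
             * real design.
             */
            return virtio_has_feature(vdev, VIRTIO_NET_F_TX_NAPI_DEFAULT);
    }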
Yes.
So
Acked-by: Jason Wang <jasow...@redhat.com>