On Mon, 30 Apr 2018 09:14:04 -0700
Ben Greear <gree...@candelatech.com> wrote:

> >> As part of VMware's performance testing with the Linux 4.15 kernel,
> >> we identified CPU cost and throughput regressions when comparing
> >> against the Linux 4.14 kernel. The impacted test cases are mostly
> >> TCP_STREAM send tests using small message sizes. The regressions are
> >> significant (up to 3x) and were tracked down to a side effect of Eric
> >> Dumazet's RB tree changes that went into the Linux 4.15 kernel.
> >> Further investigation showed that our use of the TCP_NODELAY flag in
> >> conjunction with Eric's change caused the regressions, and that
> >> simply disabling TCP_NODELAY brought performance back to normal.
> >> Eric's change also resulted in significant improvements in our
> >> TCP_RR test cases.
> >>
> >>
> >>
> >> Based on these results, our theory is that Eric's change made the
> >> system overall faster (reduced latency), but as a side effect less
> >> aggregation is happening (with TCP_NODELAY) and that results in
> >> lower throughput. Previously, even though TCP_NODELAY was set, the
> >> system was slower and we still got some benefit from aggregation.
> >> Aggregation improves efficiency and throughput, although it can
> >> increase latency. If you are seeing a regression in your
> >> application throughput after this change, using TCP_NODELAY might
> >> help bring performance back; however, that might increase latency.  
> 
> I guess you mean _disabling_ TCP_NODELAY instead of _using_ TCP_NODELAY?

Yes, thank you for catching that.

-- Steve
