Much appreciated.

Thank you everyone.
/Robert


On Mon, 24 Aug 2020 at 20:39, Priyaranjan Jha <priyar...@google.com> wrote:
>
> Thank you, Eric and Robert.
> We will try to provide the backport for the patch soon.
>
> Thanks,
> Priyaranjan
>
> (resending since previous reply bounced back)
> On Mon, Aug 24, 2020 at 9:14 AM Eric Dumazet <eric.duma...@gmail.com> wrote:
> >
> >
> >
> > On 8/24/20 7:35 AM, Robert Bengtsson-Ölund wrote:
> > > Hi everyone
> > >
> > > We stumbled upon a TCP BBR throughput issue that the following change fixes.
> > > git: 78dc70ebaa38aa303274e333be6c98eef87619e2
> > >
> > > Our issue:
> > > We have a transmission that is application-limited to 20 Mbps on an
> > > Ethernet connection that has ~1 Gbps capacity.
> > > Without this change our transmission seems to settle at ~3.5 Mbps.
> > >
> > > We have seen the issue on a slightly different network setup as well
> > > between two fiber internet connections.
> > >
> > > Given what the mentioned commit changes, we suspect that in both of
> > > our cases some middlebox is altering the ACK frequency.
> > >
> > > Our transmission is basically an RTMP feed through ffmpeg to MistServer.
> > >
> > > Best regards
> > > /Robert
> > >
> >
> > Please always CC patch authors on this kind of request.
> >
> > Thanks.
> >
> > Patch was :
> >
> > commit 78dc70ebaa38aa303274e333be6c98eef87619e2
> > Author: Priyaranjan Jha <priyar...@google.com>
> > Date:   Wed Jan 23 12:04:54 2019 -0800
> >
> >     tcp_bbr: adapt cwnd based on ack aggregation estimation
> >
> >     Aggregation effects are extremely common with wifi, cellular, and cable
> >     modem link technologies, ACK decimation in middleboxes, and LRO and GRO
> >     in receiving hosts. The aggregation can happen in either direction,
> >     data or ACKs, but in either case the aggregation effect is visible
> >     to the sender in the ACK stream.
> >
> >     Previously BBR's sending was often limited by cwnd under severe ACK
> >     aggregation/decimation because BBR sized the cwnd at 2*BDP. If packets
> >     were acked in bursts after long delays (e.g. one ACK acking 5*BDP after
> >     5*RTT), BBR's sending was halted after sending 2*BDP over 2*RTT, leaving
> >     the bottleneck idle for potentially long periods. Note that loss-based
> >     congestion control does not have this issue because when facing
> >     aggregation it continues increasing cwnd after bursts of ACKs, growing
> >     cwnd until the buffer is full.
> >
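> > To make that stall concrete, with made-up numbers: at BW = 100 Mbit/s
> > and RTT = 40 ms, BDP = BW * RTT = 500 kB, so BBR's cwnd = 2*BDP = 1 MB.
> > If a middlebox holds back ACKs so that one stretched ACK arrives only
> > every 5*RTT = 200 ms, the sender drains its 1 MB window in 2*RTT = 80 ms
> > and then sits idle for the remaining 120 ms of every ACK interval,
> > i.e. it achieves only 2/5 of the available bandwidth.
> >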
> >     To achieve good throughput in the presence of aggregation effects, this
> >     algorithm allows the BBR sender to put extra data in flight to keep the
> >     bottleneck utilized during silences in the ACK stream that it has evidence
> >     to suggest were caused by aggregation.
> >
> >     A summary of the algorithm: when a burst of packets are acked by a
> >     stretched ACK or a burst of ACKs or both, BBR first estimates the expected
> >     amount of data that should have been acked, based on its estimated
> >     bandwidth. Then the surplus ("extra_acked") is recorded in a windowed-max
> >     filter to estimate the recent level of observed ACK aggregation. Then cwnd
> >     is increased by the ACK aggregation estimate. The larger cwnd avoids BBR
> >     being cwnd-limited in the face of ACK silences that recent history suggests
> >     were caused by aggregation. As a sanity check, the ACK aggregation degree
> >     is upper-bounded by the cwnd (at the time of measurement) and a global max
> >     of BW * 100ms. The algorithm is further described by the following
> >     presentation:
> >     https://datatracker.ietf.org/meeting/101/materials/slides-101-iccrg-an-update-on-bbr-work-at-google-00
> >
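> > In rough pseudo-C, that estimator looks something like the sketch below.
> > This is my simplified reading of the patch, not the exact
> > net/ipv4/tcp_bbr.c code; the names, types, and units are approximate:
> >
> >     #include <stdint.h>
> >
> >     /* Hypothetical state; the kernel keeps this in struct bbr and uses
> >      * two alternating half-windows as a cheap windowed-max filter
> >      * (half-window rotation on round starts is omitted here).
> >      */
> >     struct aggr_est {
> >             uint64_t epoch_start_us; /* start of current ACK epoch    */
> >             uint64_t epoch_acked;    /* bytes ACKed since epoch start */
> >             uint64_t win_max[2];     /* windowed-max of extra_acked   */
> >             int      win_idx;        /* half-window currently written */
> >     };
> >
> >     /* On every ACK: compare actual vs. expected delivery and feed the
> >      * surplus into the windowed-max filter.
> >      */
> >     static void aggr_on_ack(struct aggr_est *a, uint64_t now_us,
> >                             uint64_t acked_bytes,
> >                             double bw_bytes_per_us, uint64_t cwnd_bytes)
> >     {
> >             uint64_t expected, extra;
> >
> >             /* Data the estimated bandwidth should have ACKed by now. */
> >             expected = (uint64_t)(bw_bytes_per_us *
> >                                   (now_us - a->epoch_start_us));
> >
> >             /* ACKs no faster than bw predicts: no aggregation signal,
> >              * so restart the epoch.
> >              */
> >             if (a->epoch_acked <= expected) {
> >                     a->epoch_acked = 0;
> >                     a->epoch_start_us = now_us;
> >                     expected = 0;
> >             }
> >
> >             a->epoch_acked += acked_bytes;
> >             extra = a->epoch_acked - expected; /* extra_acked surplus */
> >             if (extra > cwnd_bytes)            /* sanity bound: <= cwnd */
> >                     extra = cwnd_bytes;
> >             if (extra > a->win_max[a->win_idx])
> >                     a->win_max[a->win_idx] = extra;
> >     }
> >
> >     /* cwnd provisioning: 2*BDP plus the recent max surplus, with the
> >      * surplus globally capped at bw * 100 ms.
> >      */
> >     static uint64_t aggr_cwnd(const struct aggr_est *a,
> >                               double bw_bytes_per_us, uint64_t bdp_bytes)
> >     {
> >             uint64_t extra = a->win_max[0] > a->win_max[1] ?
> >                              a->win_max[0] : a->win_max[1];
> >             uint64_t cap = (uint64_t)(bw_bytes_per_us * 100000); /* 100 ms */
> >
> >             return 2 * bdp_bytes + (extra < cap ? extra : cap);
> >     }
> >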
> >     In our internal testing, we observed a significant increase in BBR
> >     throughput (measured using netperf), in a basic wifi setup.
> >     - Host1 (sender on ethernet) -> AP -> Host2 (receiver on wifi)
> >     - 2.4 GHz -> BBR before: ~73 Mbps; BBR after: ~102 Mbps; CUBIC: ~100 Mbps
> >     - 5.0 GHz -> BBR before: ~362 Mbps; BBR after: ~593 Mbps; CUBIC: ~601 Mbps
> >
> >     Also, this code is running globally on YouTube TCP connections and produced
> >     significant bandwidth increases for YouTube traffic.
> >
> >     This is based on Ian Swett's max_ack_height_ algorithm from the
> >     QUIC BBR implementation.
> >
> >     Signed-off-by: Priyaranjan Jha <priyar...@google.com>
> >     Signed-off-by: Neal Cardwell <ncardw...@google.com>
> >     Signed-off-by: Yuchung Cheng <ych...@google.com>
> >     Signed-off-by: David S. Miller <da...@davemloft.net>
> >



-- 
Robert Bengtsson-Ölund, System Developer
Software Development
+46(0)90-349 39 00

www.intinor.com

