Jonathan Morton <[email protected]> writes:

>>>> your solution significantly hurts performance in the common case
>>>
>>> I'm sorry - did someone actually describe such a case? I must have
>>> missed it.
>>
>> I started this whole thread by pointing out that this behaviour results
>> in the delay of the TCP flows scaling with the number of active flows;
>> and that for 32 active flows (on a 10Mbps link), this results in the
>> latency being three times higher than for FQ-CoDel on the same link.
>
> Okay, so intra-flow latency is impaired for bulk flows sharing a
> relatively low-bandwidth link. That's a metric which few people even
> know how to measure for bulk flows, though it is of course important
> for sparse flows. I was hoping you had a common use-case where
> *sparse* flow latency was impacted, in which case we could actually
> discuss it properly.
>
> But *inter-flow* latency is not impaired, is it? Nor intra-sparse-flow
> latency? Nor packet loss, which people often do measure (or at least
> talk about measuring) - quite the opposite? Nor goodput, which people
> *definitely* measure and notice, and is influenced more strongly by
> packet loss when in ingress mode?
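[Editor's note: the linear scaling claimed above - intra-flow delay growing with the number of active bulk flows - can be sketched with a quick back-of-envelope calculation. This is illustrative only; the 1500-byte MTU and the assumption of one queued packet per flow are simplifications, not figures from the actual test run.]

```python
# Rough sketch: if each of N bulk flows keeps at least one MTU-sized
# packet queued at a 10 Mbps bottleneck, the extra queueing delay a
# packet sees grows linearly with N. Numbers are illustrative, not
# measured results.

def added_queue_delay_ms(n_flows, mtu_bytes=1500, link_bps=10_000_000):
    """Serialization delay of one queued packet per flow, in milliseconds."""
    return n_flows * mtu_bytes * 8 / link_bps * 1000

for n in (1, 8, 32):
    print(f"{n:2d} flows -> ~{added_queue_delay_ms(n):.1f} ms extra queueing delay")
```

Under these assumptions, 32 flows add roughly 38 ms of queueing delay on a 10 Mbps link, which is the same order of magnitude as the 50 ms baseline path latency discussed below.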
As I said, I'll run more tests and post more data once I have time.

> The measurement you took had a baseline latency in the region of 60ms.

The baseline link latency is 50 ms, which is sorta what you'd expect
from a median non-CDN'ed internet connection.

> That's high enough for a couple of packets per flow to be in flight
> independently of the bottleneck queue.

Yes. As is the case for most flows going over the public internet...

> I would take this argument more seriously if a use-case that mattered
> was identified.

Use cases where intra-flow latency matters, off the top of my head:

- Real-time video with congestion response
- Multiple connections multiplexed over a single flow (HTTP/2 or
  QUIC-style)
- Anything that behaves more sanely than TCP at really low bandwidths.

But yeah, you're right, no one uses any of those... /s

> So far, I can't even see a coherent argument for making this tweak
> optional (which is of course possible), let alone removing it
> entirely; we only have a single synthetic benchmark which shows one
> obscure metric move in the "wrong" direction, versus a real use-case
> identified by an actual user in which this configuration genuinely
> helps.

And I've been trying to explain why you are the one optimising for
pathological cases at the expense of the common case. But I don't think
we are going to agree based on a theoretical discussion. So let's just
leave this and I'll return with some data once I've had a chance to run
some actual tests of the different use cases.

-Toke

_______________________________________________
Cake mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cake
