Do we know a reason why the system's behavior won't move beyond the new limit 
the same way it moved beyond the old one? If it's some bizarre kind of leaky 
bucket, let's have the showdown now rather than later, when everything is 
larger and ossification has begun.


p vixie 


On Jul 30, 2024 07:16, Nick Banks <[email protected]> 
wrote:

Hello Folks,


We’ve had this discussion on Slack in the past, and I wanted to bring it here 
to get some additional feedback. As some of you know, I have a project on 
GitHub (microsoft/quicreach) that is a simple ping-like reachability tool for 
QUIC, and I run a periodic action that tests the top 5000 hostnames for 
QUIC reachability and then breaks each handshake down by whether it (a) requires 
multiple round trips, (b) exceeds the specified amplification limit, or (c) 
completes in 1-RTT under the limit. It produces this dashboard:

[dashboard image]

The main point of this email is to focus on the large percentage of servers 
that ignore the 3x amplification limit today, and what we should do (if 
anything) about that. I ran a quick experiment (PR) this morning to see how 
the breakdown would look with different amplification limits (3x, 4x, 5x) and 
found that with a 5x limit, most servers would fall under the limit.

[chart: handshake breakdown at 3x, 4x, and 5x limits]
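For anyone who hasn't looked at this part of the spec recently, the rule being tested is the server-side anti-amplification budget from RFC 9000 (Section 8.1): until the client's address is validated, the server may send at most three times as many bytes as it has received. A minimal illustrative sketch (not MsQuic or quicreach code; the class and names are invented for this email):

```python
# Illustrative model of RFC 9000's anti-amplification budget.
# `factor` is 3 per the RFC; the proposal discussed here would use 5.

class AmplificationBudget:
    def __init__(self, factor: int = 3):
        self.factor = factor
        self.bytes_received = 0
        self.bytes_sent = 0
        self.address_validated = False

    def on_receive(self, n: int) -> None:
        # Every byte received from the unvalidated address grows the budget.
        self.bytes_received += n

    def sendable(self) -> float:
        """Bytes the server may still send right now."""
        if self.address_validated:
            return float("inf")
        return max(0, self.factor * self.bytes_received - self.bytes_sent)

    def on_send(self, n: int) -> None:
        if n > self.sendable():
            raise ValueError("would exceed the amplification limit")
        self.bytes_sent += n


budget = AmplificationBudget(factor=3)
budget.on_receive(1200)       # client's Initial datagram
print(budget.sendable())      # 3600
```

A server "exceeding the limit" in the dashboard is one whose first flight is larger than this `sendable()` budget before validation completes.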

So, my ask to the group is whether we should more officially bless a 5x limit 
as ‘OK’ for servers to use. This would most impact servers that currently take 
multiple round trips because they correctly enforce the 3x limit on 
themselves, resulting in longer handshake times. If we say they can/should 
change their logic from 3x to 5x, their handshake times will improve, and 
things will largely speed up for clients using QUIC. Personally, I’d like to 
update MsQuic to use this new limit based on this data, but I wanted to get a 
feel from the group first.
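As rough intuition for why the factor matters (the server flight size below is an assumption for illustration, not a measurement): 1200 bytes is the minimum client Initial datagram size from RFC 9000, and a server first flight carrying a typical certificate chain can easily exceed a 3x budget while fitting in a 5x one.

```python
# Back-of-the-envelope: does the server's first flight fit in the
# pre-validation budget, or does it force an extra round trip?
CLIENT_INITIAL = 1200        # minimum Initial datagram size (RFC 9000)
SERVER_FIRST_FLIGHT = 5500   # assumed: Initial + Handshake with cert chain

for factor in (3, 5):
    budget = factor * CLIENT_INITIAL
    extra_rtt = SERVER_FIRST_FLIGHT > budget
    print(f"{factor}x -> budget {budget} bytes, extra round trip: {extra_rtt}")
# 3x -> budget 3600 bytes, extra round trip: True
# 5x -> budget 6000 bytes, extra round trip: False
```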


Thanks,

- Nick
