________________________________
From: Gautam Akiwate <[email protected]>
Sent: Thursday, May 29, 2025 2:50 PM

Thanks for helping me understand the proposal.  Drilling down to a few topics:

...

5. Standardization is principally relevant in an "open" deployment, where 
components from separate vendors must interoperate.  Do you imagine a 
deployment where the client and service are developed by separate parties?  If 
they are developed by the same party, any setup can be done by private 
arrangement.

> We do. We want this to work broadly. For instance, applications that are aware 
> of it on Apple devices can mark a traffic class (background / interactive), 
> which we map to the appropriate “sla” tier. More generally, the OS is also 
> typically aware of when something is background / foreground, so the OS can 
> choose to do this without explicit signals from applications.

For a standard like this to be necessary, all of these parties must be unrelated, 
with the app connecting to a service based only on a single URL.  Otherwise, 
they can coordinate a solution by private arrangement (e.g. providing multiple 
URLs for different purposes).  Do you have a use case in mind that fits this 
pattern?  Or is this more of a convenience to simplify API design?

Some concrete use cases would help to make the motivation clearer.

> While the draft does not prescribe how the clients choose the service level, 
> one potential way to do it would be to use traffic class. Concretely, if an 
> application marks a request with a specific traffic class, the underlying 
> system can use it as a signal of what service level to use. On Apple devices, 
> we support the following traffic classes 
> (https://developer.apple.com/documentation/network/nwparameters/serviceclass-swift.enum), 
> which applications can use currently. We envision that applications should 
> be able to set a traffic class or something similar so that they can leverage 
> the “sla” parameter.
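
For concreteness, that marking looks roughly like this on Apple platforms (the 
mapping from service class to an "sla" tier is the proposal's behavior, not 
anything the Network framework does today):

import Network

// Mark the connection's traffic class; under the proposal, the OS would map
// this to an "sla" tier when filtering SVCB endpoints (that mapping is an
// assumption here, not part of the framework).
let params = NWParameters.tls
params.serviceClass = .background   // one of the six defined service classes

let conn = NWConnection(host: "example.com", port: 443, using: params)
conn.start(queue: .main)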

I note that there are six traffic classes listed there, which seems like a poor 
fit for the three non-extensible classes proposed.  Making the classes extensible 
(via an IANA registry) and reserving a private-use range could help.
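
A sketch of what an extensible codepoint space could look like (the specific 
codepoints and private-use range below are invented for illustration):

// Hypothetical extensible "sla" codepoint space, assuming an IANA registry
// plus a reserved private-use range.
enum SlaClass {
    case background             // registered codepoint 0 (assumed)
    case interactive            // registered codepoint 1 (assumed)
    case realtime               // registered codepoint 2 (assumed)
    case privateUse(UInt16)     // e.g. 0xFF00-0xFFFF reserved for private use

    init?(code: UInt16) {
        switch code {
        case 0: self = .background
        case 1: self = .interactive
        case 2: self = .realtime
        case 0xFF00...0xFFFF: self = .privateUse(code)
        default: return nil     // unknown registered codepoint: ignore, don't fail
        }
    }
}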

6. Respecting this flag is potentially adverse to the client's interest 
(degrading the service level it uses, and potentially delaying the connection 
if one is already open for a different service level).  Are you sure clients 
will respect this indicator, which does not appear to be enforced by any 
technical mechanism?

> If the default is the same as high priority, then only the well-behaved 
> clients will act to improve the services they care about.

By "high priority" do you mean "interactive" or "realtime"?

> We cannot stop a malicious client, since this is not a security feature but 
> more of a performance feature that helps improve the performance of the 
> servers when their interests do align.

This seems strange because the client and server are separate entities (or this 
wouldn't be necessary), and their interests do not align.  Or is this really a 
convenience for developers who control both the client and server?

One can imagine a version of this parameter that says "This option is 15 ms 
closer to you, but it costs me 2.3x more to operate."  But then why would the 
client choose the slower option?

...

7. Sharing the origin across multiple service levels adds complexity to connection 
establishment and pooling.  Have you considered how this interacts with HEv3 
(Happy Eyeballs v3)?

> We have tested this with HEv3. Since the “sla” parameter is intended as a 
> filter, we do not expect any anti-patterns with HEv3.

The antipattern I see is that this competes with connection pooling, by 
breaking up the origin into multiple destinations.  That will impair connection 
establishment, congestion control, etc.
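
To illustrate (names are mine, not from any real HTTP stack): if the pool key 
becomes (origin, sla) rather than origin alone, one origin fragments into 
multiple pools, each paying its own handshake and congestion-control warmup.

struct PoolKey: Hashable {
    let origin: String
    let sla: String             // "background" / "interactive" / "realtime"
}

var pools: [PoolKey: [String]] = [:]   // strings stand in for live connections

let interactive = PoolKey(origin: "example.com:443", sla: "interactive")
let background  = PoolKey(origin: "example.com:443", sla: "background")
assert(interactive != background)      // same origin, two disjoint pools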

Also, this parameter partly serves as a latency hint, but the client can pretty 
easily measure the latency, and is expected to do so in some HE situations.  
Should we really trust the DNS server's estimate over the client's measurement?  
An alternative would be to split the parameter in two: "predicted latency" and 
"operator's preference".  The predicted latency would bootstrap the client's 
latency estimate, and the client would use the most preferred endpoint whose 
latency (predicted or estimated) meets its requirements.  (Also, "operator's 
preference" looks a lot like SvcPriority...)

--Ben