Thanks for the detailed feedback, Ben.

> On Mar 13, 2025, at 11:27 PM, Ben Schwartz <[email protected]> 
> wrote:
> 
> draft-gakiwate-dnsop-svcb-sla-parameter proposes a new SVCB SvcParam, 
> "sla=0,1,2" indicating whether an endpoint is suitable for "background", 
> "interactive", and/or "realtime" use.
> 
> The proposal is well-formed as a matter of SVCB syntax, but I have some 
> questions about the design.

> 
> 1. SLA is well-known as "service level agreement".  Are you sure you want to 
> use that name?

That is a fair point. We are thinking of shortening the parameter name to just “sl”, 
short for “service level”. We are also open to alternative names.

> 
> 2. The document explains that this parameter is necessary because
> 
>    there are scenarios where a service with a single hostname needs to
>    be used by both "background" and "interactive" clients
> 
> What are those scenarios?  None are mentioned.  Note that SVCB records are 
> specific to a "service" such as an HTTP "origin", including the URI scheme, 
> not just a "hostname".  Why not use distinct origins for distinct performance 
> characteristics?

We do note in the draft that the current approach is to use distinct origins for 
distinct server performance characteristics. With the “sla” parameter we effectively 
still have distinct origins for distinct performance characteristics, but we avoid any 
static mappings: the client uses the parameter to dynamically discover the candidate 
origins and filter down to the right one. Another advantage is that the application 
does not need to be explicitly aware of the different endpoints, and no additional 
certificate is needed. I would say the main advantage of the “sla” parameter is that it 
lets the client dynamically discover these origins and choose between them using some 
local context (e.g., traffic class), without having to know about them firsthand.
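
To make the filtering concrete, here is a minimal sketch in Swift. The SVCBRecord type 
is hypothetical, we assume the 0/1/2 values correspond to background/interactive/realtime 
in that order, and keeping records that lack the parameter is our assumption, not 
something the draft mandates:

    // Hypothetical model of an SVCB/HTTPS record, for illustration only;
    // a real client would parse these out of the resolved RRset.
    struct SVCBRecord {
        let priority: Int            // SvcPriority
        let targetName: String       // TargetName
        let serviceLevels: Set<Int>  // proposed "sla" values: 0 = background, 1 = interactive, 2 = realtime (assumed)
    }

    // Keep only endpoints advertising the level the client wants.
    // Records without the parameter are kept, so legacy RRsets behave as today (assumption).
    func filterByServiceLevel(_ records: [SVCBRecord], wanted: Int) -> [SVCBRecord] {
        records.filter { $0.serviceLevels.isEmpty || $0.serviceLevels.contains(wanted) }
    }

    // Example: a background fetch keeps only the "background" endpoints.
    let rrset = [
        SVCBRecord(priority: 1, targetName: "edge.example.net", serviceLevels: [1, 2]),
        SVCBRecord(priority: 2, targetName: "bulk.example.net", serviceLevels: [0]),
    ]
    let backgroundCandidates = filterByServiceLevel(rrset, wanted: 0)

An interactive request would pass wanted: 1 instead; everything else about resolution 
and connection establishment stays the same.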


> 
> 3. It seems like this is primarily geared toward cases where the 
> lowest-latency nodes are not preferred for latency-insensitive use.  Why not? 
>  Usually, we prefer lowest latency for all connections.  (I imagine this is 
> because the lowest-latency nodes might be approaching a load limit or have 
> higher marginal cost for utilization.)

Yes, the primary use case is one where the lowest-latency nodes are not preferred for 
latency-insensitive traffic. Dynamic discovery allows us to balance both load and cost 
across the nodes when latency is not a primary consideration.

> 
> 4. Are you sure that these 3 categories will be sufficient indefinitely?  In 
> my experience, multipurpose traffic steering can grow to cover more than 3 
> traffic classes.

We believe three categories should be sufficient. If there are scenarios where more 
than three are needed, we would be happy to revisit that number. We do not view the 
categories as traffic classes, though I now realize that “background”, “interactive”, 
and “real-time” might evoke DiffServ-like traffic classes, which could be confusing. 
The categories are not traffic classes but tiers of latency and performance profiles 
that the servers can support.

As such, even though we may have more than three traffic classes, we still map them to 
three tiers of performance and latency profiles that servers can support.
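
As a rough illustration of that collapse (the client-side traffic-class names below are 
hypothetical and not part of the draft; only the three tiers come from the proposal):

    // The three tiers of latency/performance profiles from the draft.
    enum ServiceTier: Int {
        case background = 0, interactive = 1, realtime = 2
    }

    // A richer, purely illustrative set of client-side traffic classes.
    enum TrafficClass {
        case bulkTransfer, prefetch, userInitiated, signaling, voice, video
    }

    // However many traffic classes exist, they still collapse onto the three tiers.
    func tier(for trafficClass: TrafficClass) -> ServiceTier {
        switch trafficClass {
        case .bulkTransfer, .prefetch:   return .background
        case .userInitiated, .signaling: return .interactive
        case .voice, .video:             return .realtime
        }
    }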
 
> 
> 5. Standardization is principally relevant in an "open" deployment, where 
> components from separate vendors must interoperate.  Do you imagine a 
> deployment where the client and service are developed by separate parties?  
> If they are developed by the same party, any setup can be done by private 
> arrangement.

We do. We want this to work broadly. For instance, applications on Apple devices that 
are aware of it can mark a traffic class (background / interactive), which we map to 
the appropriate “sla” tier. More generally, the OS is typically aware of when something 
is in the background or foreground, so the OS can also choose to do this without an 
explicit signal from the application.

While the draft does not prescribe how clients choose the service level, one potential 
way to do it would be to use the traffic class. Concretely, if an application marks a 
request with a specific traffic class, the underlying system can use that as a signal 
for which service level to use. On Apple devices, applications can already use the 
following traffic classes 
(https://developer.apple.com/documentation/network/nwparameters/serviceclass-swift.enum).
We envision that applications should be able to set a traffic class, or something 
similar, so that they can leverage the “sla” parameter.
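
For example, an application can already mark its connections with one of those traffic 
classes via NWParameters, and the system underneath could translate that into a service 
level when filtering SVCB endpoints. The mapping below is only our sketch; neither the 
draft nor the framework defines it:

    import Network

    // Application side: mark the connection with a traffic class it already understands.
    let parameters = NWParameters.tls
    parameters.serviceClass = .background
    let connection = NWConnection(host: "example.com", port: 443, using: parameters)
    // connection.start(queue:) would follow as usual.

    // System side (illustrative): translate the traffic class into a service level
    // (0 = background, 1 = interactive, 2 = realtime, assumed) before filtering SVCB records.
    func serviceLevel(for serviceClass: NWParameters.ServiceClass) -> Int {
        switch serviceClass {
        case .background:                           return 0
        case .interactiveVoice, .interactiveVideo:  return 2
        default:                                    return 1  // bestEffort, responsiveData, signaling
        }
    }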

> 
> 6. Respecting this flag is potentially adverse to the client's interest 
> (degrading the service level it uses, and potentially delaying the connection 
> if one is already open for a different service level).  Are you sure clients 
> will respect this indicator, which does not appear to be enforced by any 
> technical mechanism?

If the default is the same as the highest service level, then only well-behaved clients 
need to act on the parameter, and they do so to improve the services they care about. 
We cannot stop a malicious client; this is not a security feature but a performance 
feature that helps improve server performance when the interests of client and server 
align. As you pointed out, the default behavior is to use the most performant node, and 
the use of lower service levels is opportunistic. At worst, there is no change in 
behavior; in the average case this helps.
> 
> 7. Sharing the origin for multiple service levels adds complexity to 
> connection establishment and pooling.  Have you considered how this interacts 
> with HEv3?

We have tested this with HEv3. Since the “sla” parameter is intended as a 
filter, we do not expect any anti-patterns with HEv3. 
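
A minimal sketch of what we mean (the Endpoint type is hypothetical, and 
sortForHappyEyeballs stands in for whatever ordering and racing the stack already does):

    // Hypothetical endpoint model carrying the advertised "sla" values.
    struct Endpoint {
        let targetName: String
        let serviceLevels: Set<Int>
    }

    // Stand-in for the existing HEv3 logic; the real implementation interleaves
    // address families, staggers attempts, etc.
    func sortForHappyEyeballs(_ endpoints: [Endpoint]) -> [Endpoint] {
        endpoints
    }

    // The "sla" filter runs first, so HEv3 only races endpoints that match the
    // requested service level; the racing logic itself is unchanged.
    func candidates(from endpoints: [Endpoint], wanted: Int) -> [Endpoint] {
        sortForHappyEyeballs(endpoints.filter { $0.serviceLevels.isEmpty || $0.serviceLevels.contains(wanted) })
    }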

Thanks
Gautam

> 
> --Ben Schwartz
_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]
