> On May 29, 2025, at 12:47 PM, Ben Schwartz <[email protected]>
> wrote:
> 
> From: Gautam Akiwate <[email protected]>
> Sent: Thursday, May 29, 2025 2:50 PM
> 
> Thanks for helping me understand the proposal. Drilling down to a few topics:
> 
> ...
> 
> 5. Standardization is principally relevant in an "open" deployment, where
> components from separate vendors must interoperate. Do you imagine a
> deployment where the client and service are developed by separate parties?
> If they are developed by the same party, any setup can be done by private
> arrangement.
> 
> > We do. We want this to broadly work. For instance, applications who are
> > aware of it on Apple devices can mark traffic class (background /
> > interactive) which we map to the appropriate “sla” tier. More generally,
> > the OS is also typically aware of when something is background /
> > foreground, so the OS too can choose to do this without explicit signals
> > from applications too.
> 
> For a standard like this to be necessary, these parties all must be
> unrelated, with the app connecting to a service based only on a single URL.
> Otherwise, they can coordinate a solution by private arrangement (e.g.
> providing multiple URLs for different purposes). Do you have a use case in
> mind that fits this pattern? Or is this more like a convenience to simplify
> API design?

There seem to be two underlying questions here: (1) why use SVCB to learn
these URLs, and (2) why standardize this SVCB approach?

The primary benefit of the SVCB approach is that it allows us to dynamically
learn the different URLs, which simplifies design. Additional benefits are
that clients do not have to be updated to learn new URLs, nor do endpoints
need to be provisioned with additional certificates. While coordination by
private arrangement is always an option, we believe the SVCB approach to
learning the multiple URLs is a better one.

As for why standardize this SVCB approach: standardization allows the
underlying OS/system to facilitate this service mapping for applications and
the servers controlled by them. For instance, consider a weather application
that fetches data in the background but wants to use more optimal endpoints
when it is in the foreground. The application might want to use a slower
server for background traffic to save on costs and to reduce the load and
cost on the interactive edge servers. The SVCB approach could allow the
underlying OS/system to do this transparently for the application, greatly
simplifying its design.

> Some concrete use cases would help to make the motivation clearer.
> 
> > While the draft does not prescribe how the clients choose the service
> > level, one potential way to do it would be to use traffic class.
> > Concretely, if an application marks a request with a specific traffic class
> > the underlying system can use it as a signal of what service level to use.
> > On Apple devices, we support the following traffic classes
> > (https://developer.apple.com/documentation/network/nwparameters/serviceclass-swift.enum)
> > which applications can use currently. We envision that applications should
> > be able to set traffic class or something similar so that they can
> > leverage the “sla” parameter.
> 
> I note that there are 6 traffic classes listed there, which seems like a poor
> fit for 3 non-extensible classes as proposed. Making the classes extensible
> (via an IANA registry) and reserving a private-use range could help.

There is some potential confusion here. The three classes were meant to map
to three performance profiles for servers: a cost-effective server in a data
center (background), an edge location (interactive), and perhaps a performant
cache at the ISP edge (real-time). The many different traffic classes would
each map to one of these three profiles; for instance, bestEffort and
responsiveData could map to the interactive server class. Do you think
calling the three classes something other than background, interactive, and
real-time might reduce confusion?
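
To make the intended mapping a bit more concrete, here is a rough Swift
sketch of how a client system might fold the six Network framework service
classes into the three server profiles. The SLATier type and the particular
mapping are purely illustrative assumptions on my part; neither is
prescribed by the draft nor part of any shipping API.

    import Network

    // Illustrative only: the three server performance profiles described
    // above. Not part of the draft or of any existing API.
    enum SLATier: String {
        case background             // cost-effective data-center servers
        case interactive            // edge locations
        case realTime = "real-time" // e.g. a cache at the ISP edge
    }

    // One possible way to collapse the six NWParameters.ServiceClass
    // values into the three tiers.
    func slaTier(for serviceClass: NWParameters.ServiceClass) -> SLATier {
        switch serviceClass {
        case .background:
            return .background
        case .interactiveVideo, .interactiveVoice:
            return .realTime
        default:
            // bestEffort, responsiveData, signaling, and any future classes
            return .interactive
        }
    }

The application only marks its traffic class (or the OS infers background
versus foreground), and the system then limits endpoint selection to the
published endpoints whose “sla” value matches the resulting tier.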

> 6. Respecting this flag is potentially adverse to the client's interest
> (degrading the service level it uses, and potentially delaying the connection
> if one is already open for a different service level). Are you sure clients
> will respect this indicator, which does not appear to be enforced by any
> technical mechanism?
> 
> > If the default is the same as high priority then only the well-behaved
> > clients to improve the services they care about.
> 
> By "high priority" do you mean "interactive" or "realtime"?
> 
> > We cannot stop a malicious client since this is not a security feature and
> > more of a performance feature that helps improve the performance of the
> > servers when their interests do align.
> 
> This seems strange because the client and server are separate entities (or
> this wouldn't be necessary), and their interests do not align. Or is this
> really a convenience for developers who control both the client and server?
> 
> One can imagine a version of this parameter that says "This option is 15 ms
> closer to you, but it costs me 2.3x more to operate.". But then why would
> the client choose the slower option?

This issue seems to be rooted in the subtle distinction between what we each
mean by a client and a server. While in most cases we do not need to
distinguish between an application and the client system/OS, it might be
worthwhile to do so here. An application running atop a client system relies
on that system to facilitate its network traffic. In this case, the
application and the server, even if coordinating, need support from the
client system to do the service binding mapping. Moreover, the client system
can ensure that an application running in the background marks its traffic
appropriately.

> ...
> 7. Sharing the origin for multiple service levels adds complexity to
> connection establishment and pooling. Have you considered how this interacts
> with HEv3?
> 
> > We have tested this with HEv3. Since the “sla” parameter is intended as a
> > filter, we do not expect any anti-patterns with HEv3.
> 
> The antipattern I see is that this competes with connection pooling, by
> breaking up the origin into multiple destinations. That will impair
> connection establishment, congestion control, etc.
> 
> Also, this parameter partly serves as a latency hint, but the client can
> pretty easily measure the latency, and is expected to do so in some HE
> situations. Should we really trust the DNS server's estimate over the
> client's measurement? An alternative would be to split the parameter in two:
> "predicted latency" and "operator's preference". The predicted latency would
> bootstrap the client's latency estimate, and the client would use the most
> preferred endpoint whose latency (predicted or estimated) meets its
> requirements. (Also, "operator's preference" looks a lot like SvcPriority...)

While the choice of server is likely correlated with latency, that is not
always the case; the choice on the server side will primarily be driven by
cost. Also, we do not currently pool connections for requests to the same
origin that carry different traffic classes (a rough sketch of what we mean
is in the P.S. below).

> 
> —Ben

Gautam
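
P.S. Since the pooling question has come up a couple of times, here is a
rough Swift sketch of what "not pooling across traffic classes" means:
connections are keyed on both the origin and the tier, so a background and
an interactive request to the same origin never share a connection. The
types and names here are made up for illustration; this is not code from
the draft or from any shipping stack.

    // Connection stands in for whatever transport object the stack
    // actually manages (a TCP or QUIC connection, an HTTP/3 session, ...).
    final class Connection {}

    struct PoolKey: Hashable {
        let origin: String  // e.g. "https://svc.example.com"
        let tier: String    // "background" / "interactive" / "real-time"
    }

    final class ConnectionPool {
        private var pools: [PoolKey: [Connection]] = [:]

        // Requests to the same origin but with a different tier never
        // reuse an existing connection; each (origin, tier) pair has its
        // own entry in the pool.
        func connection(for origin: String, tier: String,
                        makeNew: () -> Connection) -> Connection {
            let key = PoolKey(origin: origin, tier: tier)
            if let existing = pools[key]?.last {
                return existing
            }
            let fresh = makeNew()
            pools[key, default: []].append(fresh)
            return fresh
        }
    }

The intent is that the tier acts purely as a filter on candidate endpoints,
so connection establishment and HEv3-style racing still happen as usual
within each tier.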
