Hi Vittorio,

I'm going to answer your questions from my standpoint, but realise that the 
browser folks may not agree with what I say, or may have additional context. 
Much of the draft's design is an attempt to be responsive to their concerns, 
and I don't want to speak for them too much.

> On 27 Feb 2025, at 8:55 pm, Vittorio Bertola 
> <[email protected]> wrote:
> 
>> Il 26/02/2025 08:20 CET Mark Nottingham <[email protected]> ha 
>> scritto:
>> 
>> The intent is not to scale to that degree -- indeed, that would be 
>> considered a failure, because it would indicate widespread censorship on the 
>> Internet. Instead, it's to selectively surface legally mandated censorship 
>> when it impacts 'large' services (such as public resolvers) to raise user 
>> awareness and reduce confusion.
> 
> This depends on the definition of "censorship", and also on whether you 
> envisage this system only for EDE 17 (user-requested blocking) or also for 
> EDE 16 and 15, which should be used for law-mandated and 
> operator's-own-initiative blocking.

My interest is only in exposing legally-mandated blocking. 
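(For anyone following along without RFC 8914 open: the EDE codes discussed above travel as an EDNS0 option, option code 15, consisting of a 2-octet INFO-CODE followed by optional UTF-8 EXTRA-TEXT. A minimal stdlib-only sketch of that wire format, where the function names are mine rather than from any spec or draft:)

```python
import struct

# RFC 8914: Extended DNS Errors are EDNS0 option code 15.
EDE_OPTION_CODE = 15

# The three INFO-CODEs discussed in this thread:
EDE_BLOCKED = 15   # blocked by the operator's own policy
EDE_CENSORED = 16  # blocked due to a legal mandate
EDE_FILTERED = 17  # filtering requested by the client

def encode_ede(info_code: int, extra_text: str = "") -> bytes:
    """Encode one EDE option (OPTION-CODE, OPTION-LENGTH, INFO-CODE,
    EXTRA-TEXT) as it would appear inside an OPT RR's RDATA."""
    payload = struct.pack("!H", info_code) + extra_text.encode("utf-8")
    return struct.pack("!HH", EDE_OPTION_CODE, len(payload)) + payload

def decode_ede(option: bytes) -> tuple[int, str]:
    """Decode a single EDE option back into (info_code, extra_text)."""
    opt_code, opt_len = struct.unpack("!HH", option[:4])
    if opt_code != EDE_OPTION_CODE:
        raise ValueError("not an EDE option")
    payload = option[4:4 + opt_len]
    (info_code,) = struct.unpack("!H", payload[:2])
    return info_code, payload[2:].decode("utf-8")
```

The EXTRA-TEXT is exactly the unauthenticated, resolver-supplied string whose display the draft is trying to gate.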

> In any case, a lot of countries mandate some kind of blocking (the USA might 
> join the list soon, apparently - see the new FADPA proposal), and AFAIK there 
> are many ISPs that proactively block phishing etc, so, in case of success, I 
> would expect the number of resolver operators that need an operator ID to be 
> at least in the thousands, perhaps an order of magnitude more. On the other 
> hand, most ISPs could be too "low tech" to use this mechanism, so the list 
> might be kept short by lack of adoption.
> 
>>> possibly the browsers should focus on controlling what kind of message is 
>>> presented to the users, rather than who is sending it.
>> 
>> If you have a means of doing so without increasing the risk of arbitrary 
>> censorship that *isn't* legally mandated, I'm very receptive.
> 
> I need to understand your threat model, then.
> 
> Are you concerned about someone injecting phishing URLs as fake blocking 
> messages? If so, perhaps the best solution would be to implement some kind of 
> content-based and metadata-based heuristics that would also help against any 
> other phishing attempt (not sure how well this could work, of course, but 
> somehow browsers could use the same sources of abusive domain lists that ISPs 
> employ, or AI).

Browsers already deploy extensive mechanisms to block phishing at the endpoint 
-- including but not limited to Safe Browsing.

> On the other hand, are you concerned about resolvers blocking arbitrary 
> stuff? In this case, I think that making "censorship" (however defined) 
> visible to the end-user by showing the ISP's message that says "censored" 
> would actually help countering it, as it could create end-user outrage and 
> motivate the end-user to change resolver or ISP. If the browser ignores the 
> message and just lets the connection drop, the user will think that "the 
> Internet doesn't work" and not do much.

Personally, I'm concerned that making it too easy to surface censorship will 
normalise it -- i.e., it will just become part of how the Internet works in 
many places. I'm not going to hang onto that as a primary motivation for the 
draft's design, because by nature it's speculative. 

What's more relevant is that it's prudent to be cautious in exposing new 
surface area like this, in particular because what you see in a browser is only 
between you and the site you're talking to (provided you're using HTTPS, which 
basically everyone is now) -- changing that to allow a third party to interpose 
needs to be done very carefully, especially when that third party is 
unauthenticated.

As the draft attempts to explain, that's the root of the threat model here -- 
allowing an unauthenticated party to interpose into communication when users 
are already easily confused by the underlying technology. Keep in mind that 
while enterprise and operator networks might consider themselves trustworthy 
in this regard, many people still get their DNS settings via DHCP in coffee 
shops and other inherently untrustworthy places.

Yes, browsers have mitigations against phishing, but they also practice defence 
in depth, and opening up new vectors for attacks is a no-go for them -- hence 
the careful design here.

> In the end, this draft is about showing the resolver's message or not, not 
> about accepting the resolver's blocking or not - unless what you imagine is 
> that browsers will "re-resolve" blocked domains when they come from 
> unknown/untrusted sources but not when they come from a trusted resolver.
> 
>> From my standpoint, it's necessary to have some party making a judgement 
>> call about who is using this mechanism responsibly, and while I share your 
>> discomfort with concentration of power, browser vendors are well placed for 
>> this, experienced in making such calls, already in place, and seemingly 
>> distant from any significant conflict of interest (at least as far as I can 
>> see).
> 
> I think that any government's viewpoint on this point would be dramatically 
> different from yours :) In the EU, we just had a CJEU ruling that Google 
> cannot legally exclude from Android Auto third party apps on the grounds that 
> they do not meet their unilaterally imposed requirements, if this alters 
> market competition. In cases where the browser maker is also a significant 
> player in the resolver market, the conflict of interest is obvious. In cases 
> where the browser is recognized as a gatekeeper service, in the EU there also 
> are constraints on what it can do.

We may be talking past each other here. The idea here is that browsers would 
recognise resolvers that were using the mechanism to implement legally-mandated 
filtering -- thereby keeping control of filtering firmly in the hands of the 
state.

> Personally, I think that the only party that should decide whether to trust 
> whatever comes from their resolver is the end-user, rather than an 
> intermediary. If you don't trust a resolver or dislike its policies, just 
> pick another one; if you continue using it, then it would be better for you 
> to receive the information made available by the resolver you use. So my 
> threat model is just about an attacker faking the DNS response, not about a 
> malicious resolver.

That sounds nice in theory, but most users accept the default resolver offered 
to them by the network. Choice exhaustion is a thing.

Cheers,

--
Mark Nottingham   https://www.mnot.net/

_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]
