>> If we make a change now (in requirements for signing) then in some number
>> of years validators can reject DNSKEY RRsets that have key tag collisions,
>> or at least strongly limit the number of such sets that are accepted.
>> Validators can also reject RRSIG sets that have multiple RRSIGs with the
>> same key tag, or even just give up after a single RRSIG fails to validate.
>
> Sounds like we agree, except for the detail of how much easier it is to
> stop after one collision than 2 or 3. Since the code for 2 or 3 already
> exists (see below) that doesn't strike me as very persuasive.
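For reference, the key tag is only the 16-bit checksum from RFC 4034 Appendix B computed over the DNSKEY RDATA, so collisions are cheap to construct. A minimal sketch of how a signer or validator could detect duplicates within a DNSKEY RRset (function names are illustrative, not from any particular implementation):

```python
from collections import Counter

def key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B key tag over DNSKEY RDATA
    (flags | protocol | algorithm | public key), for algorithms != 1."""
    acc = 0
    for i, b in enumerate(rdata):
        acc += (b << 8) if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

def has_key_tag_collision(rdatas) -> bool:
    """True if two or more DNSKEY RDATAs in the RRset share a key tag."""
    tags = Counter(key_tag(rd) for rd in rdatas)
    return any(count > 1 for count in tags.values())
```

A signer that checks this at key-generation time (and regenerates on a hit) never publishes a colliding RRset, which is exactly the requirement being proposed here.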
For validators, the desired endgame is one where duplicate key tags are not supported at all; that significantly reduces complexity and attack surface. The only way to get there is to start telling signers that they must not generate key tag collisions.

The problem for operators of signed zones is that key tag collisions are rare but quite easy to abuse for a DoS attack. This means validators are tempted to keep reducing the number of collisions they accept, and since no standard sets a floor, doing so is not technically wrong. Then, when an operator is unlucky and does get a key tag collision, they may find their zone bogus, or perhaps only bogus on a cold cache.

Some resolvers have resource limits per query. A single query may trigger more than one DNSSEC validation check in a single zone, and if that zone has a key tag collision, each validation check can independently eat into the limited number of collisions that is tolerated. These interactions are very hard to predict in advance and may lead to bogus results in ways nobody anticipated.

_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]
