In your letter dated Tue, 15 Jul 2025 12:24:03 -0400 (EDT) you wrote:
>> It is exactly this limit that causes trouble for some validator software.
>
>Can you explain in some detail what the complexity here is? I thought it
>was an easy limit to add based on number of RRSIGs of a single RRset to
>count, or perhaps for more complicated scenarios, counting the
>validation failures per QNAME being resolved? And since there are other
>things that could cause these, eg trees of NS/CNAME redirects to other
>abusive failing RRSIGs, wouldn't this complexity need to be implemented
>regardless of keytag collisions?

There is no easy limit. The number of queries a recursor has to send to
authoritative servers can be very high when the resolver has a cold cache,
so limits need to be set very high to handle that case. That gives an
attacker plenty of room to mount a denial of service. Resolvers therefore
have to perform a balancing act: weird but legitimate queries must not fail,
yet attackers must not be able to bring down a resolver with weird setups.
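
To make that concrete, here is a minimal sketch, in Python and purely
illustrative, of the kind of per-resolution work budget a validator might
enforce; the names and constants are assumptions, not taken from any real
resolver:

    # Hypothetical per-resolution budget; all names and limits are made up.
    MAX_UPSTREAM_QUERIES = 100  # must allow deep cold-cache resolutions
    MAX_SIG_VALIDATIONS = 50    # crypto budget for the whole resolution

    class ServFail(Exception):
        """Abort the resolution when a budget is exhausted."""

    class Resolution:
        def __init__(self):
            self.queries = 0
            self.validations = 0

        def charge_query(self):
            """Called before each query sent to an authoritative server."""
            self.queries += 1
            if self.queries > MAX_UPSTREAM_QUERIES:
                raise ServFail("query budget exhausted")

        def charge_validation(self):
            """Called before each RRSIG verification."""
            self.validations += 1
            if self.validations > MAX_SIG_VALIDATIONS:
                raise ServFail("validation budget exhausted")

The hard part is not the counting but the constants: high enough that
legitimate deep, cold-cache resolutions succeed, low enough that an attacker
cannot turn each incoming query into that much forced work.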

If we additionally allow a number of validation failures per RRset, then the
number of signature checks that has to be allowed multiplies: every RRset
touched during resolution contributes its own failure budget.
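
A back-of-the-envelope illustration, with numbers that are pure assumptions:

    # Per-RRset failure limits multiply; figures chosen only for illustration.
    F = 8   # assumed failed-validation limit per RRset
    R = 20  # assumed RRsets touched in one cold-cache resolution
    print(F * R)  # 160 signature checks an attacker can force with one query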

>I was putting the CPU cost in perspective, as I thought that was the
>main motivation these days for this whole keytag discussion. But I think
>you are now referring to "causes trouble for some validator software"
>being some code complexity that I don't fully understand?

A TLS connection typically requires two public key operations: one to
establish a session key and one to sign or verify the session state.

In contrast, validating a single DNS query with a cold cache can easily
lead to tens or even hundreds of signature validations, so the scale is
not comparable.
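
A rough cost model makes the gap visible; every figure below is an
assumption about a typical setup, not a measurement:

    # Illustrative cold-cache validation cost vs. one TLS handshake.
    TLS_HANDSHAKE_OPS = 2  # key establishment + one signature operation

    zone_cuts = 4          # e.g. root -> org -> example.org -> sub
    per_zone = 2           # validate the DS RRset and the DNSKEY RRset
    answer = 1             # RRSIG on the final answer RRset
    dns_ops = zone_cuts * per_zone + answer
    print(dns_ops)         # 9 validations before retries, extra RRSIGs,
                           # CNAME chains, or NSEC(3) proofs

Multiply that base figure by several DNSKEYs and RRSIGs per zone and a
CNAME chain or two, and the tens to hundreds of validations mentioned
above follow quickly.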

At the same time, we think of setting up a TLS connection as expensive and
of answering a DNS request over UDP as cheap, so the query load on resolvers
is typically very high.

