On 20/10/2025 20:50, Ondřej Surý wrote:
Hey,

Is it worth the complexity? There are so many things that can go wrong with signaling over a stateless protocol.

Yes, I think it is. Right now we have to wait far too long for every new DNSSEC algorithm to become widely used. Yes, a cache has state; the protocol itself does not need to. Sure, there are implementations that still have not gotten DO bit support right.

It can be used for already known algorithms, but above all it should make deployment of new algorithms into production more efficient.
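For illustration, such a signal can already be crafted by hand with dig; this is just a sketch of the existing, purely informational mechanism (DAU is EDNS option code 5 per RFC 6975, the hex value 0d0f10 advertises algorithms 13, 15 and 16, and example.com is only a placeholder):

# advertise "I understand algorithms 13, 15 and 16" alongside DO=1
dig +dnssec +ednsopt=5:0d0f10 example.com A

Today no authoritative server changes its answer based on this option; it is informational only.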

I work with older versions of validators and keep them without significant holes.
Yes, your employer is part of the problem. But I am not sure if I get your point – any new DAU-like signaling will get implemented only in very recent DNS implementation versions, which in turn means that you are not going to get any benefit out of it for a long time. It feels like shifting the complexity into the protocol instead of fixing it in the place where it needs to be fixed. When quantum computing gets to a point where RSA and ECC are no longer secure, it doesn't matter if you keep the software without significant security holes, because the underlying cryptography will be broken. In turn, it will actually be better to go insecure instead of pretending everything is secure and peachy.

Ondrej

Yes, once quantum computing is able to crack the underlying cryptography, we need to be ready. Red Hat is not the only company that provides long-term-support software without adding new (breaking) features to it. We received some well-deserved shaming when our products stopped validating algorithms 5 and 7 too early. But to be ready, we must not discourage deployment of new algorithms until there is no other alternative left. I think deploying new software should start sooner.

We need to deploy them early. But to do that, their deployment must not make older implementations behave worse. TLS already allows this: a server can hold two separately signed certificate chains, but it sends only one of them to each client.
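As a rough sketch of how that looks in practice (openssl s_server; the file names are placeholders), a test server can be started with both an RSA and an ECDSA chain, and each handshake delivers only the one the client negotiates:

# serve two separately signed chains; each client receives only one of them
openssl s_server -accept 8443 \
  -cert rsa-chain.pem -key rsa.key \
  -dcert ecdsa-chain.pem -dkey ecdsa.key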

DNSSEC now only has the ability to send either no signatures at all (DO=0) or all of them (DO=1). BIND 9.16+ no longer has the ability to avoid caching DNSSEC signatures; a conforming implementation must store all signatures. But to deploy new algorithms, I think we need an opt-in approach.
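Today the client-side choice really is all-or-nothing, e.g. with dig (example.com is a placeholder):

# DO=0: no signatures at all
dig +nodnssec example.com A
# DO=1: every covering RRSIG, whether or not the client understands its algorithm
dig +dnssec example.com A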

My employer offers older, stable versions. Yes, they lack new features, but many of them already support algorithms 15 and 16. Yet adoption is almost non-existent, at least among TLDs. Why is that? Are algorithms 15 and 16 too bad, too computationally intensive? I don't think so. According to dnsthought [1], validator support is not that bad.

# count root-zone delegations (DS records) per DNSSEC algorithm
# output columns: <number of delegations> <algorithm number>
awk '$4 == "DS" {print $6}' </tmp/root.db | sort -n | uniq -c | sort -rn
   1259 8
    197 13
     31 10
     11 7
      2 15
      1 14

The fj. and pg. TLDs are backed by it. I have no idea where they lie without consulting a map. Why are they the pioneers here? Probably because not many people will suffer if they are considered insecure. I do not consider algorithms 15 and 16 experimental anymore. You authored RFC 8080 in 2017; more than 8 years later, no important TLD uses it. I think the protocol support needed to adopt new algorithms faster is missing. If PQC support were expected only in our RHEL 12, I would not be asking. But our crypto team is already testing PQC support for TLS in RHEL 10. From that I expect the switch of important zones to PQ algorithms will have to happen sooner than 15+ years from now.
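Anyone can check which algorithm such a TLD uses from its DS record in the root zone; the second field of the DS RDATA is the algorithm number (15 is Ed25519):

dig +short DS fj.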

To adopt new algorithms faster, we need a way to serve them only to implementations able to process them. There was a smart way to do that with DO=1: it was able to work with older implementations, but it offered something better to new software. I think we need a similar approach for introducing new algorithms: serve new signatures only to new clients, but still serve older validators the signatures they can use, if there (still) are any. To do that, we need multiple different signature sets to be created.
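Signing with two algorithms already produces those multiple sets today, for example with BIND's tools (a rough sketch; the key directory and zone names are placeholders), but the protocol then delivers all of the signatures to every DO=1 client:

# generate KSK+ZSK pairs for the old and the new algorithm
dnssec-keygen -K keys -a ECDSAP256SHA256 -f KSK example.com
dnssec-keygen -K keys -a ECDSAP256SHA256 example.com
dnssec-keygen -K keys -a ED25519 -f KSK example.com
dnssec-keygen -K keys -a ED25519 example.com
# smart signing picks up all keys and emits RRSIGs for both algorithms
dnssec-signzone -S -K keys -o example.com example.com.zone

What is missing is a way to hand each client only the set it can actually validate.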

Did the switch from IPv4 to IPv6 happen fast? I don't think so. Do you expect the switch from current algorithms to not-yet-published ones to happen without some "dual stack" transition phase allowing both old and new? Can you name protocols where such an approach worked? I do not know of any; it might just be a gap in my knowledge. Please share articles I should read, if you know of them.

Thanks,
Petr

1. https://dnsthought.nlnetlabs.nl/vis/

--
Petr Menšík
Senior Software Engineer, RHEL
Red Hat, https://www.redhat.com/
PGP: DFCF908DB7C87E8E529925BC4931CA5B6C9FC5CB

_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]
