One validating path is enough, correct. But unlike with TLS, a DNSSEC client cannot receive only that one validating path; the same response has to satisfy _every_ validator. If the zone is signed with multiple algorithms, every validating client must receive all of the signatures. I think that needs to change to make adoption of new algorithms faster.

I don't expect we can switch the root zone and most TLDs straight from algorithm 8 to algorithms that do not even have assigned numbers today. Unless we want the whole tree to go insecure, I think the only way to start deploying PQC is dual signing.

I want clients to be able to receive, cache and distribute only the signatures somebody actually uses. The current DNSSEC protocol forces clients to receive signatures for every algorithm the authoritative serves, including algorithms unknown to them, even when they are able to signal that they won't use them or don't trust them.
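
To make the idea concrete, here is a minimal sketch of the kind of filtering I have in mind on the serving side, assuming an RFC 6975 DAU-style signal from the client. Every name in it is hypothetical; it is not an existing API, just an illustration:

# Hypothetical sketch: an authoritative (or forwarding resolver) keeps only
# the RRSIGs a client has signalled it understands, RFC 6975 DAU-style.
# Nothing here is an existing API; it only illustrates the behaviour.

RSASHA256 = 8          # widely deployed legacy algorithm
ECDSAP256SHA256 = 13
PRIVATE_PQC = 253      # private-use number, stand-in for a future PQC algorithm

def select_rrsigs(rrsigs, understood):
    """Return only the RRSIGs the client can use.

    rrsigs: list of (algorithm, signature_bytes) for one RRset
    understood: set of algorithm numbers the client signalled,
                or None if the client sent no signal at all
    """
    if understood is None:
        return rrsigs            # no signal: behave exactly as today
    usable = [(alg, sig) for alg, sig in rrsigs if alg in understood]
    # Never strip the answer down to nothing; an empty signature set
    # would look bogus, so fall back to the full set in that case.
    return usable or rrsigs

rrsigs = [(RSASHA256, b"legacy sig"), (PRIVATE_PQC, b"large PQC sig")]
print(select_rrsigs(rrsigs, {RSASHA256, ECDSAP256SHA256}))  # legacy client
print(select_rrsigs(rrsigs, {PRIVATE_PQC}))                 # new client
print(select_rrsigs(rrsigs, None))                          # no signal: both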

I am trying to solve cache usage for zones signed with more than one algorithm: one widely supported, like 8 or 13, and a second that is more modern and less adopted. This can already be tested with algorithms 15 and 16, which are implemented but rarely used. With much bigger signatures the waste gets far worse, because many legacy resolvers supporting only traditional algorithms would have most of their caches filled with signatures that offer them no value, only increased traffic and memory consumption. I think we should offer those resolvers only the legacy signatures and reserve the modern ones for modern clients, much like clients that did not signal DNSSEC support received no signatures at all when DNSSEC deployment started.
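
For a rough feel of how much cache a legacy-only resolver would waste, here is a back-of-envelope calculation. The sizes are approximate signature sizes in bytes, and the post-quantum entries are published candidate schemes with no DNSSEC code points assigned, so treat them as assumptions:

# Back-of-envelope cache cost per dual-signed RRset, in bytes of RRSIG data.
# Approximate signature sizes; the "candidate" rows are published PQC schemes
# that have no DNSSEC algorithm number today.
SIG_BYTES = {
    "RSASHA256 (8), 2048-bit key": 256,
    "ECDSAP256SHA256 (13)":        64,
    "ED25519 (15)":                64,
    "Falcon-512 (candidate)":      666,
    "ML-DSA-44 (candidate)":       2420,
    "SLH-DSA-128s (candidate)":    7856,
}

legacy = SIG_BYTES["RSASHA256 (8), 2048-bit key"]
for name, size in SIG_BYTES.items():
    if "candidate" not in name:
        continue
    both = legacy + size
    print(f"{name}: {both} B cached instead of {legacy} B, "
          f"{both / legacy:.1f}x for a client that can only use algorithm 8")

Even with the smallest candidate, a legacy client ends up caching several times the data it can actually validate.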

Every client should receive only the best signature available per record. A resolver may receive more if some of its own clients need something different. It should fetch only the RRSIGs that are actually used, without becoming any less secure.

Just as with TLS, this does not require online signing. It only needs the ability to choose the best available signature set and receive just that part of the response.
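
A minimal sketch of what I mean by choosing only the best available set, assuming the validator keeps a local preference order (again hypothetical code, not any existing resolver API):

# Hypothetical sketch: a validator picks the single most preferred RRSIG it
# supports instead of validating whatever happens to arrive.

PREFERENCE = [253, 16, 15, 13, 8]   # local policy, strongest first;
                                    # 253 is a private-use stand-in for a future PQC algorithm

def best_rrsig(rrsigs, preference=PREFERENCE):
    """Return (algorithm, signature) for the most preferred supported RRSIG."""
    available = dict(rrsigs)
    for alg in preference:
        if alg in available:
            return alg, available[alg]
    return None                      # nothing supported: fall back to today's behaviour

rrsigs = [(8, b"rsa sig"), (253, b"pqc sig")]
print(best_rrsig(rrsigs))            # modern validator: picks the PQC signature
print(best_rrsig(rrsigs, [13, 8]))   # legacy validator: picks algorithm 8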

I work with older versions of validators and keep them free of significant security holes. When new algorithms get implemented in fresh new releases, I expect those older versions to keep validating zones signed with the older algorithms; they should not stop validating, and the DNS tree should not become insecure to them. At the same time, we want new versions to use the new algorithms, not the old ones.

We have algorithm 8 on the root zone. Is it the best we have? No, just the most widely supported. How will we get to algorithm 15 or newer on the root zone? What is the process and timeline for getting there? That is what I am thinking about.

Regards,

Petr

On 20/10/2025 19:50, Ondřej Surý wrote:
There has also been some research in this area - mine and UTwente's on usable
algorithms.

We already have algorithm flexibility in DNSSEC (one validating path is
enough), so I am not really sure what problem you are trying to solve.

Ondrej
--
Ondřej Surý (He/Him)
[email protected]

On 14. 10. 2025, at 17:13, Petr Menšík <[email protected]> 
wrote:

Hello!

I have been thinking about whether there is a good plan for switching to
post-quantum resistant algorithms in DNSSEC. I am a software engineer at Red Hat,
and there are many people more qualified than me on the cryptographic part.

But one thing seems obvious to me. New signatures, whatever the algorithm, will
be rather large compared to what is used today. I think the only working model
for early adoption, one that does not punish zones signing with less common
algorithms, is to dual sign with a proven and common algorithm at the same time.
OpenSSL has nice support for this kind of thing on TLS channels. But dual signing
in DNSSEC has mostly disadvantages and is avoided for good reason.

I think we need some way to make it easier to offer less common algorithms.
I have been thinking about how to do that and have put together a document with
my idea. It is not of draft quality; I have never written an RFC draft for even
a trivial EDNS extension. But I failed to find anything similar.

I think it would need to support new and old algorithms together for some time,
just as is expected on TLS channels.

My idea is to have something similar to the RFC 6975 DAU option, but a modified
variant with primary and backup algorithm sets. Authoritative servers would then
send only the signature types requested. I expect authoritative zones would be
dual signed, but validating clients could fetch only the signatures they want,
or that their own clients want.

More at my GitLab [1]. I have explained it in the DNS-OARC DNSdev room. One of
the results was feedback that it is way too complicated to get through DNSOP.

I think such support might also help the already standardized algorithms 15 and
16 become more widely used; they see minimal usage today.

Is there any other plan for gradually deploying newer DNSSEC algorithms, even
experimental ones, without trying them only on zones that appear insecure to the
majority of older validators, or waiting ages until they are supported by almost
everyone?

Answer here, in the DNSdev room, or create issues at my GitLab. Is my idea worth
pursuing as a proper draft? Does it make sense to you?

Thank you in advance.

Best Regards,
Petr

1. 
https://gitlab.com/pemensik/dns-rfc-proposals/-/blob/main/dnssec-dual-signing.md?ref_type=heads

--
Petr Menšík
Senior Software Engineer, RHEL
Red Hat, https://www.redhat.com/
PGP: DFCF908DB7C87E8E529925BC4931CA5B6C9FC5CB
