On 20/10/2025 22:06, Michael Richardson wrote:
What I'm hearing is that even if you receive the extra signatures, you'd like
to only cache ones you can understand, and then only answer with that smaller
set.

I think the answer side is a problem, but I don't think the cache side is a 
problem.

...

No, I want that for legacy clients only, since they are unable to signal which signatures are useful to them.

For more modern resolvers and their clients, I want them to receive only the smallest set of signatures that still allows verification of the record. Storing and fetching more is extra garbage, never useful for anything. While these signatures are small, that is not a big problem.

It depends on how large the wasted signatures are. If I visit rootcanary.org and run the test, I also receive some signatures I have no use for. On secure.d2a7n3.rootcanary.net the RRSIG is 181 bytes, while the A record is 16 bytes. On secure.d4a1n1.rootcanary.net, the RRSIG is again 181 bytes, while the useful AAAA record is 28 bytes.

That means the stored signature I do not intend to use, and must not use, is more than 11 times larger than the part of the record I am actually using. Okay, RAM is cheap; we do not have to count every byte. But multiple parties need to store it, waste memory on it, and transmit it over the wire. This is per record and multiplies with more records.
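The ratio above can be checked with a short sketch; the byte counts are the ones I measured from the rootcanary tests:

```python
# Overhead of an RRSIG a resolver cannot use, relative to the record
# data it actually wants. Byte counts as measured in the rootcanary tests.
rrsig_len = 181          # unusable RRSIG, bytes
a_record_len = 16        # A record, bytes
aaaa_record_len = 28     # AAAA record, bytes

print(rrsig_len / a_record_len)     # A case: more than 11x the useful data
print(rrsig_len / aaaa_record_len)  # AAAA case: still over 6x
```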

(auth) --- (recursive resolver) --- (validating forwarder1) --- (validating site forwarder2) --- (client)

If this client has visited rootcanary.org, it forces the recursive and forwarding caches to store and transmit that garbage as well.

But it seems post-quantum algorithm signatures will be much bigger. According to PowerDNS tests [1][2], a signature might be 2486 bytes long for Dilithium2, while the useful answer remains 16 bytes for any older resolver that does not understand that algorithm. But such a resolver must cache these responses and offer them to its clients. The cache of any older resolver would be filled with content not useful to it, and very likely not useful to any of its clients either. But they have no opt-out, unless they set the DO=0 flag. With this I would be caching 155 times more than I want, *per record*.

Falcon512 has only 731 bytes per RRSIG, which is at least usable over UDP. Still, that is 45 times bigger than the response it signs.
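The two post-quantum cases can be put side by side; the signature sizes are the ones from the PowerDNS post [1], and the 1232-byte EDNS buffer is the value recommended since DNS Flag Day 2020 (an illustrative assumption on my part, not from those tests):

```python
# Compare PQ RRSIG sizes against the useful answer and a common EDNS
# UDP payload limit. Sizes from the PowerDNS Falcon post; the 1232-byte
# buffer is the DNS Flag Day 2020 recommendation, used here as an assumption.
EDNS_BUFFER = 1232
record_len = 16  # bytes of useful answer (A record)

for algo, sig_len in [("Dilithium2", 2486), ("Falcon512", 731)]:
    overhead = sig_len // record_len
    fits_udp = sig_len <= EDNS_BUFFER
    print(f"{algo}: {overhead}x overhead, fits in UDP payload: {fits_udp}")
```

Falcon512 at least leaves room in a single UDP response; a Dilithium2 signature alone already exceeds the recommended buffer.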

There seems to be a solution involving Merkle trees, which would make the signatures in responses smaller. That is great. But it would only reduce, not eliminate, the space wasted in all the older resolvers. I hope such resolvers will not disappear within a few days, nor have their caches filled mostly with content unusable to them.

When we sign a zone with multiple algorithms, every record will have one more signature than is needed. If we double-sign with the current approach, it would be hated by both old and new implementations.

It would not be a problem on the auth server; there it is not avoidable. But its clients should be able to receive and store only the useful part. If a recursive resolver cached only the interesting part, that would help that machine. But if its client supported different algorithms, what should it do then? Would it prevent successful verification by the client?

1. https://blog.powerdns.com/2022/04/07/falcon-512-in-powerdns
2. https://pq-dnssec.dedyn.io/

--
Petr Menšík
Senior Software Engineer, RHEL
Red Hat, https://www.redhat.com/
PGP: DFCF908DB7C87E8E529925BC4931CA5B6C9FC5CB

_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]