On Sun, 25 Jan 2026, Steve Crocker wrote:
> Yes, I would have thought all this was obvious. The Red Hat incident caught
> my attention. I spent a few minutes analyzing their mistake, and then I went
> looking for guidance that would have addressed it. I didn't find it. Even
> RFC 8624 seemed a bit fuzzy to me.
> In my view, there are two issues. First, the introduction and removal of
> algorithms require multiple steps, and these steps are separate for the
> introduction/removal of the initiating algorithm (signing or encrypting) and
> for the receiving algorithm (validating or decrypting). There's nothing
> super deep or complicated here, but it's slightly more complicated than
> "obvious."
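
For concreteness, a rough sketch in Python of those separate steps (the phase
names and the ordering shown are my own illustration, loosely mirroring the
separate signing and validation columns in the 8624-style tables, not quoted
from any RFC):

    from enum import Enum

    class Phase(Enum):
        ABSENT = "absent"
        MAY = "may use"
        RECOMMENDED = "recommended"
        MUST_NOT = "must not use"

    # Introduction: the receiving (validation) side should be widely deployed
    # before the initiating (signing) side starts using the algorithm.
    introduction = [
        ("validation", Phase.MAY),
        ("validation", Phase.RECOMMENDED),
        ("signing", Phase.MAY),
        ("signing", Phase.RECOMMENDED),
    ]

    # Removal runs the other way: signing stops first, and validation support
    # is withdrawn only much later, once signatures made with the algorithm
    # have aged out of zones and caches.
    removal = [
        ("signing", Phase.MAY),       # no longer recommended for new signatures
        ("signing", Phase.MUST_NOT),
        ("validation", Phase.MAY),
        ("validation", Phase.MUST_NOT),
    ]

    if __name__ == "__main__":
        for side, phase in introduction + removal:
            print(f"{side:>10}: {phase.value}")
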
> Second, although each implementer and each operator makes their own
> decisions, there's a need for competent and useful guidance. Current
> guidance, as expressed in 8624 and 8624-bis, is perhaps adequate at the
> moment, but algorithm changes will likely continue indefinitely. It seems to
> me appropriate to create a process for dealing with these changes on a
> regular basis. "Regular basis" here means several years or perhaps even
> multiple decades, of course, but nonetheless often enough to need a process.
When the Red Hat SHA1 situation started to happen, I worked there. I gave
them all the arguments about not breaking things, lifecycles, RFCs, etc.
They just deemed it less important than a consistent "no SHA1 for digital
signatures" policy across all software and standards. They had the author
of RFC 8624 working for them and still ignored it. Writing up another
RFC wouldn't have done anything different.
> Perhaps the key point, which may not have been emphasized enough in the
> draft, is that implementers and operators are really dealing with multiple
> algorithms. It's important to have the next algorithm at the ready. And, of
> course, the retraction process, particularly on the receiving side, requires
> an extended period of time. (See below for an additional comment on this
> point.)
> How can we provide the community with competent advice? Well, the judgment
> of crypto experts is definitely relevant, but it's not sufficient. They will
> tell us when they think an existing algorithm is too weak to survive
> indefinitely, but there's some distance between that sort of declaration and
> actually getting the community to stop using a fading algorithm.
> Similarly, a declaration from the crypto experts that a new algorithm is fit
> to use doesn't lead immediately to widespread implementation and deployment.
> In both cases, measurement of actual use across the community is needed.
This was (and is) all done with the DNSSEC algorithm usage and deployment
series, i.e. RFC 8624 and RFC 9904.
> If this lifecycle model is adopted, I'm hoping:
>
> * The IETF will establish an expert advisory group that will review the
>   status of each active algorithm and make a formal declaration whenever
>   it's time to move an algorithm to its next phase.
That group is DNSOP, and it does so with the 8624 -> 9904 -> xxx guidance
RFCs.
> * Implementers will schedule the inclusion and retraction of algorithms in
>   their packages using the IETF advice, and will inform and alert their
>   customers accordingly. (Which reminds me that someone suggested it was
>   impossible to remove signing/encrypting from a particular package without
>   also removing validation/decrypting. For the architects of that package,
>   please provide separate access to the initiating and receiving uses.)
We would hope they do that based on the above docs. If they choose not
to, another RFC telling them they are going to break things or get
broken won't make a difference.
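
To make the "separate access" request in that parenthetical concrete: inside
a package it roughly amounts to keeping two allow-lists with different
lifetimes, one for creating signatures and a longer-lived one for verifying
them. A minimal sketch in Python (names and algorithm sets are made up for
illustration, not any real package's API or policy file):

    # Which algorithms this package will use to create new signatures.
    SIGNING_ALLOWED = {"ECDSAP256SHA256", "ED25519"}

    # Verification keeps tolerating retired algorithms for as long as
    # signatures made with them are still in the wild.
    VERIFICATION_ALLOWED = SIGNING_ALLOWED | {"RSASHA256", "RSASHA1-NSEC3-SHA1"}

    def may_sign(algorithm: str) -> bool:
        return algorithm in SIGNING_ALLOWED

    def may_verify(algorithm: str) -> bool:
        return algorithm in VERIFICATION_ALLOWED

Dropping an algorithm from the first set is the easy, early step; dropping it
from the second is the one that needs the extended timeline discussed above.
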
> * Operators will similarly plan the inclusion and retraction of algorithms
>   in their systems, and alert their users accordingly.
> Thanks,
See above.
Paul