Paul and Philip,

Thanks for your notes.

Yes, I would have thought all this was obvious.  The Red Hat incident
caught my attention.  I spent a few minutes analyzing their mistake, and
then I went looking for guidance that would have addressed it.  I didn't
find it.  Even RFC 8624 seemed a bit fuzzy to me.

In my view, there are two issues.  First, the introduction and removal of
algorithms require multiple steps, and these steps are separate for the
introduction/removal of the initiating algorithm (signing or encrypting)
and for the receiving algorithm (validating or decrypting).  There's
nothing super deep or complicated here, but it's slightly more complicated
than "obvious."
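
To make that concrete, here's a minimal sketch (Python, with phase names
and per-algorithm assignments I invented purely for illustration; nothing
here is normative).  The point is just that each algorithm carries two
independent statuses, one per role:

    from enum import Enum

    class Phase(Enum):
        EXPERIMENTAL = "experimental"  # implemented, not yet recommended
        RECOMMENDED = "recommended"    # preferred for new use
        DEPRECATED = "deprecated"      # stop initiating new use
        OBSOLETE = "obsolete"          # support may be removed entirely

    # Each algorithm carries two independent phases: one for the
    # initiating role (signing/encrypting), one for the receiving role
    # (validating/decrypting).  These assignments are illustrative only.
    lifecycle = {
        "RSASHA1":         {"sign": Phase.OBSOLETE,    "validate": Phase.DEPRECATED},
        "ECDSAP256SHA256": {"sign": Phase.RECOMMENDED, "validate": Phase.RECOMMENDED},
    }

    def may_sign(alg):
        return lifecycle[alg]["sign"] in (Phase.EXPERIMENTAL, Phase.RECOMMENDED)

    def may_validate(alg):
        return lifecycle[alg]["validate"] is not Phase.OBSOLETE

Note that may_validate() stays true for RSASHA1 long after may_sign() goes
false.  Collapsing the two roles into a single on/off switch is, as I
understand it, essentially the mistake in the Red Hat incident.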

Second, although each implementer and each operator makes their own
decisions, there's a need for competent and useful guidance.  Current
guidance, as expressed in RFC 8624 and the 8624bis draft, is perhaps adequate at the
moment, but algorithm changes will likely continue indefinitely.  It seems
to me appropriate to create a process for dealing with these changes on a
regular basis.  "Regular basis" here means several years or perhaps even
multiple decades, of course, but nonetheless often enough to need a process.

Perhaps the key point, which may not have been emphasized enough in the
draft, is that implementers and operators are really dealing with multiple
algorithms.  It's important to have the next algorithm at the ready.  And,
of course, the retraction process, particularly on the receiving side,
requires an extended period of time.  (See below for an additional comment
on this point.)

How can we provide the community with competent advice?  Well, the judgment
of crypto experts is definitely relevant, but it's not sufficient.  They
will tell us when they think an existing algorithm is too weak to survive
indefinitely, but there's some distance between that sort of declaration
and actually getting the community to stop using a fading algorithm.

Similarly, a declaration from the crypto experts that a new algorithm is
fit to use doesn't lead immediately to widespread implementation and
deployment.  In both cases, measurement of actual use across the community
is needed.

Further, and continuing on the retraction process, there can be wide
variation depending on the intended usage.  Checking DNSSEC signatures of
current DNS records doesn't require a lengthy retraction period after use
of the algorithm for signing has subsided.  On the other hand, checking the
signature on a heavyweight contract that might have a lifetime of multiple
decades requires keeping the validation process alive even if the use of
the signing algorithm has been deprecated and ceased many years earlier.
Same for encryption/decryption.
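
As a toy illustration of that variation (the usage classes and retention
windows below are numbers I made up, not a proposal), the question a
receiver has to answer is roughly:

    from datetime import date, timedelta

    # Illustrative retention windows per usage class; the real numbers
    # would come out of the advisory process, not from me.
    RETENTION = {
        "dns_record": timedelta(days=90),                # short: records get re-signed
        "long_term_contract": timedelta(days=365 * 30),  # decades-long lifetime
    }

    def must_still_validate(usage, signing_retired_on, today):
        # Receivers must keep the algorithm alive until the retention
        # window for this usage class has elapsed.
        return today <= signing_retired_on + RETENTION[usage]

    retired = date(2015, 1, 1)
    print(must_still_validate("dns_record", retired, date(2026, 1, 24)))          # False
    print(must_still_validate("long_term_contract", retired, date(2026, 1, 24)))  # True

The shape of the check is the same either way; only the window differs,
and for some usages it differs by decades.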

If this lifecycle model is adopted, I'm hoping:

   - The IETF will establish an expert advisory group that will review the
   status of each active algorithm and make a formal declaration whenever it's
   time to move an algorithm to its next phase.

   - Implementers will schedule the inclusion and retraction of algorithms
   in their packages using the IETF advice, and will inform and alert their
   customers accordingly.  (Which reminds me that someone suggested it was
   impossible to remove signing/encrypting from a particular package without
   also removing validation/decrypting.  For the architects of that package,
   please provide separate access to the initiating and receiving uses.)

   - Operators will similarly plan the inclusion and retraction of
   algorithms in their systems, and alert their users accordingly.

Thanks,

Steve




On Sat, Jan 24, 2026 at 2:09 PM Philip Homburg <[email protected]>
wrote:

> > As I have stated during the life of this document, I do not think
> > it is a useful document as the envisioned lifecycles so far seem
> to have all been changed due to unexpected events (SHA1 breakage,
> new DoS attacks, etc.) and having a theoretical lifecycle of "don't
> break things" seems so obvious it doesn't need writing down, and the
> > specific actions will always have to be taken after discussion
> > through the WG via Standards Track actions anyway in response to
> > specific events.
>
> In my opinion this is an important document. During the discussion about
> allowing multi-signers to use different algorithms, it became clear that
> a lifecycle document is needed.
>
> SHA1 breakage can hardly be called an unexpected event where DNSSEC is
> concerned. First, SHA1 was assumed to be broken long before the first
> public attack.
>
> However, SHA1 is broken where attackers can create collisions. For most
> of DNSSEC that is not a problem. As far as I know, people have described how
> to abuse SHA1 collisions in theory, but nobody has done so in practice.
>
> The main SHA1 breakage came from a vendor who decided to release an OS
> that just broke RSASHA1. Making RSASHA1 unsupported might just be annoying
> for people who need DNSSEC validation to work, for example for DANE. But
> letting RSASHA1 become bogus is very bad.
>
> However, the IETF cannot do anything about vendors releasing broken
> software for weird reasons, so that should not stop us from setting
> policies.
>
> With upcoming PQC algorithms, it will be good to have a clear policy
> on how to introduce and retire algorithms.
>
> And if the lifecycle of algorithms is obvious, then it should not take much
> time to get consensus and be done.
>

