On Mon, Nov 27, 2017 at 12:54 PM, Hubert Kario <[email protected]> wrote:
>
> > On the realm of CA policy, we're discussing two matters:
> > 1) What should the certificates a CA issue be encoded as
> > 2) How should the CA protect and use its private key.
> >
> > While it may not be immediately obvious, both your proposal and 4055
> > attempt to address #2 via #1, but they're actually separate issues.
> > This mistake shows up in treating PSS params on CA certificates as an
> > important signal for reducing cross-protocol attacks, when they're not.
> > This is because the same public/private key pair can be associated with
> > multiple certificates, with multiple params encodings (and potentially
> > the same subject), and clients that enforced the silly 4055 restrictions
> > would happily accept these.
>
> the CA can also use sexy primes as the private key, making the private key
> easy to derive from the modulus... We can't list every possible way you can
> overturn the intention of the RFCs.
>
> we need to assume well-meaning actors, at least to a certain degree
>

First, I absolutely disagree with your assumption - we need to assume
hostility, and design our code and policies to be robust against that. I
should hope that was uncontroversial, but it doesn't seem to be.

Second, the only reason this is an issue was your suggestion (derived from
4055, to be fair) about restricting the params<->signature interaction. The
flexibility afforded by 4055 in expressing the parameters, and then
subsequently constraining the validation rules, is not actually justified
by the threat model.

That is, if it's dangerous to mix hash algorithms in PSS signatures (and
I'm not aware of literature suggesting this is necessary, versus being a
speculative concern), then we should explicitly prohibit it via policy.
Requiring the parameters in the certificates does not, in any way, mitigate
this risk - and their presumptive inclusion in 4055 was to constrain how
signature-creating software behaves, rather than how signature-accepting
clients should behave.

Alternatively, if mixing hash algorithms is not fundamentally unsafe in the
case of RSA-PSS, then it's unnecessary, and overly complicated, to include
the params in the SPKI of the CA's certificate. The fact that
'rsaEncryption' needs to be accepted as valid for the issuance of RSA-PSS
signatures already implies it's acceptable, and so the whole SHOULD
construct imposes an unsupported policy on the ecosystem.
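That cross-use is easy to demonstrate - nothing in an RSA key (or an
'rsaEncryption' SPKI) binds it to one scheme, so only policy can prevent
mixing. A minimal sketch, using the pyca/cryptography library (my choice
purely for illustration):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# One RSA key pair; an 'rsaEncryption' SPKI says nothing about which
# signature scheme will be used with it.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
msg = b"example tbsCertificate bytes"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=32)
sig_v15 = key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
sig_pss = key.sign(msg, pss, hashes.SHA256())

# Both signatures verify against the same public key - nothing but CA
# policy stops one key from being used under both schemes.
pub = key.public_key()
pub.verify(sig_v15, msg, padding.PKCS1v15(), hashes.SHA256())  # no exception
pub.verify(sig_pss, msg, pss, hashes.SHA256())                 # no exception
```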

So no, we should not assume well-meaning actors, and we should be explicit
about what the "intention" of the RFCs is, and whether they actually
achieve that.


> > So I think it's useful to instead work from a clean set of principles,
> > and try to express them:
> >
> > 1) The assumption, although the literature doesn't suggest it's
> > necessary, and it's not presently enforced in the existing WebPKI, is
> > that the hash algorithm for both PKCS#1 v1.5 and RSA-PSS should be
> > limited to a single hash algorithm for the private key.
> >   a) One way to achieve this is via policy - to state that all signatures
> > produced by a CA with a given private key must use the same set of
> > parameters
> >   b) Another way is to try and achieve this via encoding (as 4055
> > attempts), but as I noted, this is entirely toothless (and somewhat
> > incorrectly presumes X.500's DIT as the mechanism of enforcing policy a)
>
> just because the mechanism can be abused, doesn't make it useless for
> people
> that want to use it correctly. It still will protect people that use it
> correctly.
>

B is absolutely useless as a security mechanism against threats; it is
instead a way for signature-producing software to bake an API contract
into an RFC. We shouldn't encourage that, nor should the ecosystem have to
bear that complexity.

If it's not a security mechanism, then it's unnecessary.


> > 2) We want to ensure there is a bounded, unambiguous set of accepted
> > encodings for what a CA directly controls
> >   a) The "signature" fields of TBSCertificate (Certs) and TBSCertList
> > (CRL). OCSP does not duplicate the signature algorithm in the
> > ResponseData of a BasicOCSPResponse, so it's not necessary
>
> that's already a MUST requirement, isn't it?
>

It may be, but it's not what NSS has implemented and is shipping, as
captured on the bug.

And this matters, because permissive bugs in client implementations
absolutely lead to widespread ossification of server bugs - which is why I
specifically requested that the NSS developers unship RSA-PSS support
until they can implement it correctly and properly.

We already saw this with RSA-PKCS#1 v1.5 - it shouldn't be repeated.


>
> >   b) The "subjectPublicKeyInfo" of a TBSCertificate
>
> that's the biggest issue
>
> > 3) We want to make sure to set expectations around what is supported in
> > the signatureAlgorithm fields of a Certificate (certs), CertificateList
> > (CRLs), and BasicOCSPResponse (OCSP).
> >   - Notably, these fields are mutable by attackers as they're part of
> > the 'unsigned' portion of the certificate, so we must be careful here
> > about the flexibility
>
> true, but a). there's no chance that a valid PKCS#1 v1.5 signature will
> be accepted as an RSA-PSS signature or vice versa, b). I'm proposing
> addition of only 3 valid encodings, modulo salt size
>

IMO, a) is not relevant to the set of concerns, which I echoed on the bug
and again above.

And I'm suggesting that while you're proposing, on paper, only three valid
encodings, this community has ample demonstration that CAs have difficulty
implementing things correctly - in part due to clients such as NSS
shipping Postel-liberal parsers - and so the policy should be as
unambiguous as possible. The best way to make it unambiguous is to provide
the specific encodings - byte for byte.

Then a correct implementation can do a byte-for-byte evaluation of the
algorithm, without needing to parse at all - a net win.
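As a sketch of what that buys a client - the PKCS#1 v1.5 entries below are
the well-known DER encodings of AlgorithmIdentifier, compared byte for
byte much as mozilla::pkix does; an RSA-PSS policy would enumerate its
allowed encodings the same way:

```python
# Byte-exact DER encodings of AlgorithmIdentifier a policy could enumerate.
# Each is SEQUENCE { OID, NULL }; e.g. 1.2.840.113549.1.1.11 is
# sha256WithRSAEncryption. An RSA-PSS policy would list its allowed
# encodings (hash, MGF, saltLength) the same way.
ALLOWED_SIG_ALGS = {
    bytes.fromhex("300d06092a864886f70d01010b0500"): "sha256WithRSAEncryption",
    bytes.fromhex("300d06092a864886f70d01010c0500"): "sha384WithRSAEncryption",
    bytes.fromhex("300d06092a864886f70d01010d0500"): "sha512WithRSAEncryption",
}

def signature_algorithm(alg_id_der):
    """Byte-for-byte lookup: no DER parser, so no BER leniency to exploit."""
    return ALLOWED_SIG_ALGS.get(bytes(alg_id_der))
```

Anything not byte-identical to a listed encoding - including a BER-ish
re-encoding of the same values - simply fails the lookup.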


> > 4) We want to define what the behaviour will be for NSS (and Mozilla)
> > clients if/when these constraints are violated
> >   - Notably, is the presence of something awry a sign of a bad
> > certification path (which can be recovered by trying other paths) or is
> > it a sign of bad CA action (in which case, it should be signalled as an
> > error and non-functioning)
>
> it's an invalid signature, needs to be treated as that
>

I think my point still stands that 'invalid signature' can be treated as
either case I mentioned, and so your answer doesn't actually resolve the
matter.



> > However, if we chose to avoid simplicity and pursue complexity, then I
> > think we'd want to treat this as:
> >
> > 1) A policy restriction that a CA MUST NOT use a private key that has
> > been used for one algorithm to be used with another (no mixing PKCS#1
> > v1.5 and RSA-PSS)
> > 2) Optionally, a policy restriction that a CA MUST NOT use a private key
> > with one set of RSA-PSS params to issue signatures with another set of
> > RSA-PSS params
> > 3) Optionally, a policy restriction that a CA MUST NOT use a private key
> > with one RSA-PKCS#1v1.5 hash algorithm to issue signatures with another
> > RSA-PKCS#1v1.5 hash algorithm
> >
> > I say "optionally", because a substantial number of the CAs already do
> > and have done #3, and it was critically necessary, for example, for the
> > transition from SHA-1 to SHA-256 - which is why I think #2 is silly and
> > unnecessary.
>
> I don't consider allowing for encoding such restrictions hugely important
> either, but I don't see a reason to forbid CAs from doing that to CA
> certificates either, if they decide that they want to do that
>

Why one and not the other? Personal preference? There's a lack of tight
proof either way as to the harm.


> > 5) A policy requirement that CAs MUST encode the signature field of
> > TBSCertificate and TBSCertList in an unambiguous form (the policy would
> > provide the exact bytes of the DER encoded structure).
> >   - This is necessary because despite PKCS#1v1.5 also having specified
> > how the parameters were encoded, CAs still screwed this up
>
> that was because NULL versus empty was ambiguous - that's not the case
> for RSA-PSS - empty params means SHA-1 and SHA-1 is forbidden, missing
> params is unbounded so there's nothing to fail interop
>

I disagree with your assessment - again borne out by the experience here
in the community.

I can easily see a CA mistaking "MGF is MGF1" - leading them to encode the
hashAlgorithm as SHA-1 and the MGF as id-mgf1 without realizing that the
params also need to be specified.

Consider, for example, that RFC 4055's rsaSSA-PSS-SHA256-Params,
SHA384-Params, and SHA512-Params all set saltLength to 20. The subtlety of
the policy requiring 32/48/64 rather than 20/20/20 is absolutely a mistake
a CA can make: their software may say "PSS/SHA-256" and produce 4055's
PSS-SHA256-Params rather than the proposed requirement.
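The required lengths are just the hash output sizes, which makes the
20/20/20 defaults easy to spot-check:

```python
import hashlib

# Proposed policy: saltLength equals the hash output length (32/48/64
# octets), not the 20-octet value that RFC 4055's parameter sets carry
# over from SHA-1.
for name in ("sha256", "sha384", "sha512"):
    print(f"{name}: required saltLength {hashlib.new(name).digest_size}, "
          f"RFC 4055 default 20")
```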


> > 6) A policy requirement that CAs MUST encode the subjectPublicKeyInfo
> > field of TBSCertificate in an unambiguous form (the policy would
> > provide the exact bytes of the DER-encoded structure)
> > 7) Changes to NSS to ensure it did NOT attempt to DER-decode the
> > structure (especially given NSS's liberal acceptance of invalid
> > DER-like BER), but instead did a byte-for-byte comparison - much like
> > mozilla::pkix does for PKCS#1v1.5 (thus avoiding past CVEs in NSS)
>
> that would require hardcoding salt lengths, given their meaning in
> subjectPublicKeyInfo, and I wouldn't be too happy about it
>
> looking at OpenSSL behaviour, it would likely render all past signatures
> invalid and make creating signatures with already-released software
> unnecessarily complex (OpenSSL defaults to as large a salt as possible)
>

That's OK, because none of these certificates are publicly trusted, and
there's zero reason for a client to support all of the ill-considered
flexibility of 4055.


> > If this is adopted, it still raises the question of whether 'past'
> > RSA-PSS issuances are misissued - whether improperly DER-like-BER
> > encoded, or mixed hash algorithms, or mixed parameter encodings - but
> > this is somewhat an intrinsic result of not carefully specifying the
> > algorithms and not having implementations be appropriately strict.
>
> for X.509 only DER is allowed; if the tags or values are not encoded
> with the minimal number of bytes necessary, or with indeterminate length,
> it's not DER, it's BER - and that's strictly forbidden


I appreciate your contribution, but I think it's not borne out by the real
world. If it were, then
https://wiki.mozilla.org/SecurityEngineering/Removing_Compatibility_Workarounds_in_mozilla::pkix
wouldn't have been necessary.

"strictly forbidden, but not enforced by clients" is just another way of
saying "implicitly permitted and likely to ossify". I would like to avoid
that, since many of the issues mentioned here were caused in part by NSS's
past acceptance of DER-like BER.
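To illustrate the kind of laxity at issue with a single DER rule (a toy
check written purely for illustration; real DER validation covers far
more than this):

```python
# Two BER encodings of the same INTEGER value 5. DER permits only the
# first; a Postel-liberal parser accepts both, and the difference ossifies.
der = bytes.fromhex("020105")    # INTEGER, minimal one-byte content
ber = bytes.fromhex("02020005")  # same value, padded with a leading zero

def is_minimal_integer(tlv: bytes) -> bool:
    """DER's minimal-encoding rule for a short-form INTEGER."""
    if len(tlv) < 3 or tlv[0] != 0x02 or tlv[1] != len(tlv) - 2:
        return False
    body = tlv[2:]
    # A leading 0x00 is only allowed to keep a high-bit value positive.
    if len(body) > 1 and body[0] == 0x00 and body[1] < 0x80:
        return False
    return True
```

A byte-comparing client never even reaches this question, which is the
point of specifying exact encodings.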
_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy
