On 3/21/07, Gervase Markham <[EMAIL PROTECTED]> wrote:
> >
> > All of the workarounds that have been emplaced are limited, necessarily,
> > by these two concepts.  Now, you're advocating placing an external limit
> > on the trust allowed to be delegated from a trust anchor.  (which is
> > also what EV requires.)  However, this is an explicit violation of the
> > concept of a trust anchor: a trust anchor is the anchor for absolute trust.
>
> We already place external limits on trust anchors in this way. For
> example, certificates in the root store can be allowed to sign websites
> (or not), emails (or not) and code (or not). Why are you happy with a
> distinction being made between "websites" and "email", but not with one
> being made between "this set of websites" and "that set of websites"?

All told, I'm not.  I think it's a horribly coarse and nearly
completely useless way of doing things.  See, identity is identity.
The only function of limiting the types of things that a root can
sign certificates for is to raise the bar and force people who want to
do certain things (like sign code) to get identity certificates from
more expensive sources.  The net result is something that smells of
collusion: "If you want to run code on our browsers, you have to pay
dearly for the privilege."  The only thing that makes it stink less is
that the browser manufacturer doesn't get a cut of the profit.

To be perfectly honest, in my view X.509 is nearly completely broken
as a protocol, and even more broken as a paradigm.  The original
specification was created at a time when the only users of
cryptography were spies and national
intelligence/counterintelligence networks, and its designers had no
realistic deployment experience to back it up.  X.509v2 was intended to patch it to say
"oh, wait, we need to have some means of telling people when a
certificate is invalid if it becomes invalid before it expires".
X.509v3 wasn't even done by the ITU, it was done by the IETF.

However, idealistic arguments aren't going to sway anyone.  For better
or for worse, we have an infrastructure that requires additional
patching to make up for the deficiencies in the paradigm (including
the concept of "an anchor is an emplacement of absolute trust", when
the original paradigm also specified a singleton trust anchor as "the
anchor is the emplacement of absolute trust").

Incidentally, this problem has been in place since before the ITAR was
replaced by the EAR -- I recall "how to get 128-bit encryption" how-to
documents and tools for Netscape Navigator's certificate store.  (This
was during the days of server gated cryptography.)

> In fact, one could argue that the Mozilla Foundation is already the
> ultimate trust anchor, as we choose the certificates to place in the
> root store. Most users of products which use the store (e.g. Firefox)
> are ultimately trusting us to make good decisions about what CA root
> certs to include.

No, a 'trust anchor' is the technical point from which all trust can
be proven to devolve (the private key, via its one-to-one
correspondence with the public key).

The choice of certificates is made by an authority.  Without an
anchor, though, it's possible to willy-nilly add certificates to the
database and mark them as trusted (and I hope no one decides to do so
as part of a combined phishing attack).
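To make the distinction concrete, here is a deliberately simplified model (no real cryptography, invented names) of what "all trust devolves from the anchor" means: a certificate is trusted only if its issuer chain can be walked back to the one configured anchor, so a self-signed certificate dropped into the database does not become trusted merely by being there:

```python
# Simplified structural model of anchor-rooted trust.  Real validation
# also checks signatures, validity periods, and constraints; this sketch
# only models the chain-walking part.

TRUST_ANCHOR = "mozilla-root"

# Toy issuer relationships: subject -> issuer.
ISSUED_BY = {
    "www.example.com": "some-ca",
    "some-ca": "mozilla-root",
    "mozilla-root": "mozilla-root",  # self-signed anchor
    "rogue-site": "rogue-ca",
    "rogue-ca": "rogue-ca",          # self-signed, but not the anchor
}

def chains_to_anchor(subject: str) -> bool:
    """Walk issuer links until we hit the anchor or loop back on ourselves."""
    seen = set()
    while subject not in seen:
        if subject == TRUST_ANCHOR:
            return True
        seen.add(subject)
        subject = ISSUED_BY.get(subject, subject)
    return False

print(chains_to_anchor("www.example.com"))  # True
print(chains_to_anchor("rogue-site"))       # False
```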

> > Within the X.500 model, the better way to do this would be to generate a
> > CA for Mozilla, convert the self-signed trust anchors into CSRs, have
> > the Mozilla CA re-sign them with the appropriate restrictions and
> > validations, and embed the Mozilla root into the library as the absolute
> > trust anchor.
>
> We could do this. However, it would place a highly important private key
> into the hands of an organisation with little experience in key
> management at the level required (that is, the Mozilla Project). I'm not
> sure the security of the system would improve as a result.

This is part of why I say X.509 is so broken.  The paradigms in place
only work when reality is clubbed like a baby seal to "conform".  They
don't really work in any kind of open-source environment, and
implementing them requires a certain degree of organizational
inflexibility.
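For what it's worth, the re-signing flow quoted above can be sketched with OpenSSL.  Every filename and subject here is illustrative, and the keys are throwaway test keys:

```shell
# A CA's existing self-signed root (toy stand-in for a real trust anchor):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca-key.pem -out ca-root.pem -subj "/CN=Example CA Root"

# The hypothetical Mozilla meta-root that would be embedded in the library:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout moz-key.pem -out moz-root.pem -subj "/CN=Mozilla Meta Root"

# 1. The CA converts its self-signed root into a CSR (requires its own key):
openssl x509 -x509toreq -in ca-root.pem -signkey ca-key.pem -out ca-root.csr

# 2. Mozilla re-signs the CSR under its meta-root; the "appropriate
#    restrictions and validations" would be added here via an -extfile:
openssl x509 -req -in ca-root.csr -CA moz-root.pem -CAkey moz-key.pem \
  -set_serial 1 -days 365 -out ca-cross.pem

# 3. The cross-signed CA certificate now chains to the single anchor:
openssl verify -CAfile moz-root.pem ca-cross.pem
```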

The folks at Microsoft have done a lot more thinking about the concept
of a public key infrastructure, and what it means, and what kinds of
extensions have to be in place to properly support it.  One of their
extensions is "this certificate will be used to verify organizational
acceptance of lists of trusted roots."  I haven't the faintest idea
how to USE it, but it at least exists in their certificate services
server.

> Is there any audit trail today if you go to a site with a CAcert
> certificate, get an error dialog and ask "who said that this CA can't
> sign this site's certificate"?

That's straightforward when the anchor isn't in the store.

What happens when the anchor is already there?

> > I am ABSOLUTELY against any concept of a "silent and unaccountable"
> > restriction being placed, on anything.  With unaccountability and
> > silence comes a "what the hell is it set that way for?  I'll just fix
> > it..." mentality without the benefit of being able to see the reasoning
> > behind it.  At least if there's a true anchor that signs things, an
> > explanatory URL could be placed in its X.509 package and they could see
> > an explanation of the reasoning.
>
> I agree that we need a "lack of silence" and we need accountability for
> our actions in this regard, but they do not have to be embedded in the
> certificate chain.

I completely disagree, partly for the "local store manipulation"
reason earlier, and partly because an in-chain statement is a positive
affirmation that a given policy was intended.
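For concreteness, the usual place to carry such an explanatory URL is the certificatePolicies extension's CPS pointer, which anyone inspecting the chain can read.  The OID, filenames, and URL below are invented for the example:

```shell
# Write a config that embeds a policy OID plus an explanatory CPS URL.
cat > policy.cnf <<'EOF'
[req]
prompt = no
distinguished_name = dn
x509_extensions = v3_policy
[dn]
CN = Example Trust Anchor
[v3_policy]
basicConstraints = critical, CA:TRUE
certificatePolicies = @polsect
[polsect]
# Invented OID; a real deployment would use one from its own arc.
policyIdentifier = 1.3.6.1.4.1.99999.1.1
CPS.1 = "http://policy.example.org/root-inclusion"
EOF

# Generate a toy self-signed anchor carrying the extension:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -config policy.cnf -keyout anchor-key.pem -out anchor.pem

# The explanatory URL is now visible to anyone who inspects the chain:
openssl x509 -in anchor.pem -noout -text | grep -A 3 "Certificate Policies"
```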

> I fancy that people who are technically capable of, when faced with a
> certificate problem, analysing the chain, finding the embedded URL which
> explains the policy, visiting it and reading it, are also capable of
> (in the alternative technical scenario) realising that the organisation
> which shipped the product must have put a restriction on, and heading
> over to their website to find out why (if it's not obvious).

Challenge: go to http://www.mozilla.org/ and find the root inclusion
policy following links (and only following links) from that URL.
Report how many links you had to go through, and how many unhelpful or
otherwise useless links you also followed.  For bonus points, find a
root inclusion policy on a TLS/SSL-encrypted page served with a
"Mozilla Foundation" certificate which additionally states all of the
approved root certificates and their thumbprints.

Embedding a URL in the certificate chain makes things a lot easier for
the person who's trying to do the troubleshooting.  It also makes the
support burden for the browser vendor lighter.

> Particularly if the user agent makes it clear why the error has occurred.

Much of this set of problems could be worked around if we could touch
the chrome, but we've had many arguments on this list about the fact
that we can't.  Has this policy been changed?

-Kyle H
_______________________________________________
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
