<Snip>
>  Are there any other benefits?

IIRC, libpkix is an RFC 3280 and RFC 4158 conforming implementation, while
non-libpkix is not. That isn't to say the primitives don't exist - they
do, and libpkix uses them - but that the non-libpkix path doesn't use them
presently, and some may be non-trivial work to implement.

One benefit of libpkix is that it reflects much of the real world
experience and practical concerns re: PKI that were distilled in RFC 4158.
I also understand that it passes all the PKITS tests (
http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html ),
while non-libpkix does not (is this correct?)

Don't get me wrong, I'm not trying to be a libpkix apologist - I've had
more than my share of annoyances (latest is http://crbug.com/108514#c3 ),
but I find it much more predictable and reasonable than some of the
non-3280 implementations - both non-libpkix and entirely non-NSS
implementations (e.g., OS X's Security.framework).

The problem I fear is that once you start down the route of replacing
libpkix while still maintaining 3280 (or, even better, 5280) compliance,
plus some of the path building (/not/ verification) strategies of RFC
4158, you end up with a lot of 'big' and 'complex' code that is a chore
to maintain, because PKI/PKIX is an inherently hairy and complicated
beast.

So what new value is being sought? As best I can tell, the concerns are
that libpkix is big, scary (macro-based error handling galore), and has
bugs, but only a few people with the expert/domain knowledge of the code
to fix them. Does a new implementation really solve much of that?

From your list of pros/cons, it sounds like you're primarily focused on
the path verification aspects (policies, revocation), but a very important
part of what libpkix does is the path building/locating aspects (depth
first search, policy/constraint based edge filtering, etc). While it's not
perfect ( https://bugzilla.mozilla.org/show_bug.cgi?id=640892 ), as an
algorithm it's more robust than the non-libpkix implementation in my
experience.
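
To make that concrete, here is a rough C++ sketch of the kind of
depth-first, constraint-filtered path building that RFC 4158 describes.
This is purely illustrative - it is not libpkix's actual code, and
findIssuerCandidates / satisfiesConstraints / isTrustAnchor are
hypothetical helpers standing in for cert store lookup and constraint
checks:

    #include <vector>

    struct Cert;  // opaque certificate handle

    // Hypothetical helpers: cert store lookup and RFC 5280-style
    // name/policy constraint checks against the chain built so far.
    std::vector<const Cert*> findIssuerCandidates(const Cert& subject);
    bool satisfiesConstraints(const Cert& issuer,
                              const std::vector<const Cert*>& chainSoFar);
    bool isTrustAnchor(const Cert& cert);

    // Depth-first search over candidate issuers, pruning edges that
    // already violate constraints and backtracking on dead ends.
    bool buildPath(const Cert& current, std::vector<const Cert*>& chain) {
      chain.push_back(&current);
      if (isTrustAnchor(current)) {
        return true;  // candidate path found; full verification comes next
      }
      for (const Cert* issuer : findIssuerCandidates(current)) {
        if (!satisfiesConstraints(*issuer, chain)) {
          continue;  // filter this edge instead of descending into it
        }
        if (buildPath(*issuer, chain)) {
          return true;
        }
      }
      chain.pop_back();  // backtrack
      return false;
    }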

>  As for #5, I don't think Firefox is going to be able to use libpkix's
>  current OCSP/CRL fetching anyway, because libpkix's fetching is
>  serialized and we will need to be able to fetch revocation for every
>  cert in the chain in parallel in order to avoid regressing performance
>  (too much) when we start fetching intermediate certificates' revocation
>  information. I have an idea for how to do this without changing
>  anything in NSS, doing all the OCSP/CRL fetching in Gecko instead.

A word of caution - this is a very contentious area in the PKIX WG. The
argument is that a "correct" implementation should only trust data as far
as it can throw it (or as far as it can be chained to a trusted root).
Serializing revocation checking by beginning at the root and then working
down /is/ the algorithm described in RFC 3280 Section 6.3. In short, the
argument goes that you shouldn't be trusting/operating on ANY information
from the intermediate until you've processed the root - since it may be a
hostile intermediate.

libpkix, like CryptoAPI and other implementations, defers revocation
checking until all trust paths are validated, but even then checks
revocation serially/carefully.
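
A minimal sketch of that serialized, root-first ordering (illustrative
only - this is not the NSS/libpkix API, and Cert, RevStatus, and
fetchAndCheckRevocation are assumed placeholders):

    #include <cstddef>
    #include <vector>

    struct Cert;  // opaque certificate handle
    enum class RevStatus { Good, Revoked, Unknown };

    // Hypothetical placeholder: fetch and check OCSP/CRL data for `cert`,
    // validated against its (already-checked) issuer.
    RevStatus fetchAndCheckRevocation(const Cert& cert, const Cert& issuer);

    // chain[0] is the trust anchor, chain.back() is the end-entity cert.
    // No revocation data from a cert lower in the chain is consulted
    // until everything above it has already been checked - the "root
    // down" ordering described above.
    bool checkRevocationSerially(const std::vector<const Cert*>& chain) {
      for (std::size_t i = 1; i < chain.size(); ++i) {
        if (fetchAndCheckRevocation(*chain[i], *chain[i - 1]) ==
            RevStatus::Revoked) {
          return false;
        }
      }
      return true;
    }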

Now, I recognize that such an approach/interpretation is not universally
agreed upon, but I just want to make sure you realize there is reasoning
behind the approach libpkix currently uses. For some people, even AIA
chasing is seen as a 'bad' idea - even if, in practice, every sane user
agent does it because of the many broken TLS implementations/webservers
out there.

While not opposed to exploring, I am trying to play the proverbial devil's
advocate for security-sensitive code used by millions of users, especially
for what sounds at first blush like a "cut our losses" proposal.

Ryan