Ryan Sleevi wrote:
> IIRC, libpkix is an RFC 3280 and RFC 4158 conforming implementation,
> while non-libpkix is not. That isn't to say the primitives don't exist -
> they do, and libpkix uses them - but that the non-libpkix path doesn't use
> them presently, and some may be non-trivial work to implement.

It would be helpful to get links to some real-world servers that would 
require Firefox to do complex path building.

No conformant TLS server can require RFC 4158 path building. I would like to 
understand better how much of RFC 3280, 4158, and 5280 is actually required for 
an HTTPS client. (Non-TLS usage like S/MIME in Thunderbird is a separate 
issue.) After all, the TLS specifications are pretty clear that the server is 
*supposed* to provide the full path to the root in its Certificate message, so 
even the dumbest path building code will work with any TLS-conformant server. 
Then, for Firefox, all of the complexity of the libpkix path building is purely 
there to handle non-conformant servers.
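
To make the point concrete, here is a toy sketch (not any real NSS or Gecko API; the `Cert` type and names are illustrative stand-ins) of the "dumbest" path building that suffices when a TLS server sends a complete, correctly ordered chain in its Certificate message: just check that each cert is issued by the next one and that the last one chains to a trusted root.

```python
# Toy sketch: naive, order-dependent chain checking that works for any
# TLS-conformant server. A real implementation would also verify
# signatures, validity periods, name constraints, key usage, etc.
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str

def chain_is_ordered(chain, trusted_roots):
    """Return True if chain[0] is the end-entity cert, each cert was
    issued by the next one in the list, and the final cert chains to a
    trusted root."""
    if not chain:
        return False
    for child, parent in zip(chain, chain[1:]):
        if child.issuer != parent.subject:
            return False
    return chain[-1].issuer in trusted_roots

roots = {"Example Root CA"}
good = [Cert("example.com", "Example Intermediate"),
        Cert("Example Intermediate", "Example Root CA")]
bad = [Cert("example.com", "Some Other CA")]
```

A non-conformant server that omits the intermediate, or sends the chain out of order, is exactly where this naive approach fails and something smarter is needed.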

AFAICT, we can split these non-conformant servers into two classes: 
misconfigured servers, and enterprise/government servers. It seems very likely 
to me that simpler-than-RFC4158 processing will work very well for 
misconfigured servers (maybe "just do AIA cert fetching" is enough?). But, how 
much of the RFC 3280/4158 processing do real-world, TLS-non-conformant 
government/enterprise servers (those without AIA cert information in their 
certs) actually require? (Knowing nothing about this topic, I wouldn't be 
surprised if "just do AIA cert fetching" works even for these cases.)

> I find it much more predictable and reasonable than some of the
> non-3280 implementations - both non-libpkix and entirely non-NSS
> implementations (eg: OS X's Security.framework)

Thanks. This is very helpful to know.

> The problem that I fear is that once you start trying to go down the
> route of replacing libpkix, while still maintaining 3280 (or even
> better, 5280) compliance, in addition to some of the path building
> (/not/ verification) strategies of RFC 4158, you end up with a lot
> of 'big' and 'complex' code that can be a chore to maintain because
> PKI/PKIX is an inherently hairy and complicated beast.

> So what is the new value trying to be accomplished? As best I can
> tell, it seems focused around that libpkix is big, scary (macro-based
> error handling galore), and has bugs but only few people with
> expert/domain knowledge of the code to fix them? Does a new
> implementation solve that by much?

I am not thinking of converting any existing code into another conformant RFC 
3280/4158/5280 implementation. My goal is to make things work in Firefox. It 
seems like "conform to RFC 3280/4158/5280" isn't a sufficient condition, and I 
am curious whether it is even a necessary one. If RFC 3280/4158/5280 
conformance is a necessary condition (again, for a *web browser* only, not for 
S/MIME and related things), then fixing the existing problems with libpkix 
seems like the more reasonable path. My question is whether those RFCs actually 
describe what a web browser needs to do.

> > As for #5, I don't think Firefox is going to be able to use
> > libpkix's current OCSP/CRL fetching anyway, because libpkix's
> > fetching is serialized and we will need to be able to fetch
> > revocation for every cert in the chain in parallel in order
> > to avoid regressing performance (too much) when we start
> > fetching intermediate certificates' revocation information. I
> > have an idea for how to do this without changing anything in NSS,
> > doing all the OCSP/CRL fetching in Gecko instead.
> 
> A word of caution - this is a very contentious area in the PKIX WG.

I am aware of all of that. But, I know some people don't want to turn on 
intermediate revocation fetching in Firefox at all (by default) because of the 
horrible performance regression it will induce. We can (and should) also 
improve our caching of revocation information to help mitigate that, but the 
fact is that there will be many important cases where fetching intermediate 
certs will cause a serious performance regression. There are other things we 
could do to avoid the performance regression instead of parallelizing the 
revocation status requests but they are also significant departures from the 
standards.
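
A rough sketch of the parallelization idea (the names and the simulated fetch are illustrative, not the actual Gecko design): issue the revocation fetch for every cert in the chain at once, so total latency is roughly one network round trip instead of one round trip per certificate.

```python
# Toy sketch: parallel revocation fetching for a whole chain, in
# contrast to libpkix's serialized fetching. check_one simulates a
# single OCSP/CRL network fetch.
import time
from concurrent.futures import ThreadPoolExecutor

def check_one(cert_name, fetch_delay=0.05):
    time.sleep(fetch_delay)  # stand-in for a network round trip
    return (cert_name, "good")

def check_chain_parallel(chain):
    """Issue all revocation fetches concurrently and collect the
    per-certificate statuses."""
    with ThreadPoolExecutor(max_workers=len(chain)) as pool:
        return dict(pool.map(check_one, chain))

chain = ["example.com", "Example Intermediate", "Example Root CA"]
statuses = check_chain_parallel(chain)
```

With serialized fetching the delays add up linearly with chain length; with parallel fetching the chain's revocation latency is bounded by the slowest single responder.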

> While not opposed to exploring, I am trying to play the proverbial
> devil's advocate for security-sensitive code used by millions of
> users, especially for what sounds at first blush like a "cut our
> losses" proposal.

A few months ago, I had a discussion with Kai, in which he asked me a question 
that he said Wan-Teh had asked him: are we committed to making libpkix work or 
not? This thread is the start of answering that question.

I am concerned that the libpkix code is hard to maintain and that there are 
very few people available to maintain it. If we have a group of people who are 
committed to making it work, then Mozilla relying on libpkix is probably 
workable. But, it is a little distressing that Google Chrome seems to avoid 
libpkix whenever possible, and that Sun/Oracle [redacted]. And, generally, 
nobody I have talked to seems happy with libpkix in practice, even though it 
seems to be the right choice in theory. Literally, the best thing that has been 
said about it is "it's the only choice we have." I wonder if that is really 
true.

- Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
