On 01/03/2012 04:59 PM, Brian Smith wrote:
> 1. libpkix can handle cross-signed certificates correctly, without getting 
> stuck in loops. Non-libpkix validation cannot.
>
> 2. libpkix can accept parameters that control each individual validation, 
> whereas non-libpkix validation relies on global settings.
> 2.a. libpkix can control OCSP/CRL/cert fetching on a per-validation basis.
> 2.b. libpkix can restrict the set of roots that are validated. non-libpkix 
> validation cannot.
>
> 3. libpkix can enforce certificate policies (e.g. requiring EV policy OIDs). 
> Can the non-libpkix validation?
>
> 4. libpkix can return the full certificate chain to the caller. The 
> non-libpkix validation cannot.
>
> 5. libpkix has better AIA/CRL fetching:
> 5.a. libpkix can fetch revocation information for every cert in a chain. The 
> non-libpkix validation cannot (right?).
                Yes, at least for OCSP. For CRLs, non-libpkix does check the
revocation status, but it doesn't refresh or even update the CRL. If the
CRL is out of date, the validation just fails (though I'm not sure what
the current definition of 'out-of-date' is for the old code).
> 5.b. libpkix can (in theory) fetch using LDAP in addition to HTTP. 
> non-libpkix validation cannot.
> 5.c. libpkix checks for revocation information while walking from a trusted 
> root to the EE. The non-libpkix validation does the fetching while walking 
> from the EE to the root.
>
> Are there any other benefits?
6. libpkix can actually fetch missing certs in the chain. This has been
an issue for a very long time.

(Actually, most of the features in libpkix address issues that have been
open for a very long time.)

7. libpkix can actually fetch CRLs on the fly. The old code can only
use CRLs that have been manually downloaded. We have hacks in PSM to
periodically load CRLs, which work for certain enterprises, but not
for the internet at large.
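
For reference, this is roughly how points 2-4 (and the AIA fetching in 6)
surface to a caller: everything is passed per validation through
CERT_PKIXVerifyCert's in/out parameter lists. A rough sketch -- the policy
tag, anchor list, usage, and pin argument are placeholders the caller
would supply:

/* Sketch of a per-validation libpkix call. Illustrates a policy OID
 * constraint (#3), an explicit trust-anchor list (#2.b), AIA fetching of
 * missing intermediates (#6), and getting the built chain back (#4). */
#include "cert.h"
#include "certt.h"
#include "prtypes.h"

SECStatus
verify_with_pkix(CERTCertificate *cert,      /* the EE cert to validate */
                 SECOidTag policyOID,        /* e.g. an EV policy OID tag */
                 CERTCertList *trustAnchors, /* only chain to these roots */
                 void *pinArg)
{
    CERTValInParam cvin[4];
    CERTValOutParam cvout[3];
    SECStatus rv;

    /* require this certificate policy on the chain */
    cvin[0].type = cert_pi_policyOID;
    cvin[0].value.arraySize = 1;
    cvin[0].value.array.oids = &policyOID;

    /* restrict the set of acceptable trust anchors for this call only */
    cvin[1].type = cert_pi_trustAnchors;
    cvin[1].value.pointer.chain = trustAnchors;

    /* allow fetching of missing intermediates via AIA */
    cvin[2].type = cert_pi_useAIACertFetch;
    cvin[2].value.scalar.b = PR_TRUE;

    cvin[3].type = cert_pi_end;

    /* ask for the trust anchor and the full constructed chain back */
    cvout[0].type = cert_po_trustAnchor;
    cvout[0].value.pointer.cert = NULL;
    cvout[1].type = cert_po_certList;
    cvout[1].value.pointer.chain = NULL;
    cvout[2].type = cert_po_end;

    rv = CERT_PKIXVerifyCert(cert, certificateUsageSSLServer,
                             cvin, cvout, pinArg);

    /* release whatever was returned; we own these references */
    if (cvout[0].value.pointer.cert) {
        CERT_DestroyCertificate(cvout[0].value.pointer.cert);
    }
    if (cvout[1].value.pointer.chain) {
        CERT_DestroyCertList(cvout[1].value.pointer.chain);
    }
    return rv;
}

None of this per-call control exists for CERT_VerifyCert; the old code
only honors the global settings.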
>
> As for #5, I don't think Firefox is going to be able to use libpkix's current 
> OCSP/CRL fetching anyway, because libpkix's fetching is serialized and we 
> will need to be able to fetch revocation for every cert in the chain in 
> parallel in order to avoid regressing performance (too much) when we start 
> fetching intermediate certificates' revocation information. I have an idea 
> for how to do this without changing anything in NSS, doing all the OCSP/CRL 
> fetching in Gecko instead.
OCSP responses are cached, so OCSP fetching on common intermediates
should not be a significant performance hit. Chrome is using this
feature (we know because we've had some intermediates that were revoked).
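
(For anyone wiring this up: the cache and fetch behavior are controlled
through the existing NSS OCSP calls. A minimal sketch -- the numbers are
illustrative, not recommendations:)

#include "cert.h"
#include "ocsp.h"

void
configure_ocsp(void)
{
    /* turn on OCSP checking for validations against the default cert DB */
    CERT_EnableOCSPChecking(CERT_GetDefaultCertDB());

    /* keep up to 1000 responses; don't re-fetch a cached response for at
     * least 60 seconds, and refresh it after at most 24 hours */
    CERT_OCSPCacheSettings(1000, 60, 24 * 60 * 60);

    /* give up on an unresponsive responder after 10 seconds */
    CERT_SetOCSPTimeout(10);

    /* treat a fetch failure as "no information" rather than a hard error */
    CERT_SetOCSPFailureMode(ocspMode_FailureIsNotAVerificationFailure);
}

Once a response for a common intermediate is in that cache, later
validations reuse it until it goes stale instead of hitting the network
again.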
>
> It seems to me that it would be relatively easy to add #2, #3, and #4 to the 
> non-libpkix validation engine, especially since we can reference the libpkix 
> code
No, it's going to be a real bear to do so. And even then, we would
still be far from our goal of compliance with RFC 3280. Number 1 is
*very* tricky, which is why it was punted in the original code.

Also, 5 is a *very* important feature in the new world. We now have
revoked intermediates in the wild!
>
> I don't know how much effort it would take to implement #1, but to my naive 
> mind it seems like we could get something very serviceable pretty easily by 
> trying every matching cert at each point in the chain, instead of checking 
> only the "best" match. Is there some complexity there that I am missing?
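For concreteness, the shape of that idea is a depth-first path builder
that backtracks over every name-matching issuer instead of committing to
one. A toy sketch -- the Cert type and the helpers below are hypothetical
placeholders, not NSS APIs, and real code would also have to notice when
a cert is already on the current path:

#include <stddef.h>

typedef struct Cert Cert;

/* hypothetical helpers a real implementation would have to supply */
size_t find_candidate_issuers(const Cert *subject, const Cert **out,
                              size_t max);
int is_trust_anchor(const Cert *c);
int checks_pass(const Cert *subject, const Cert *issuer);

#define MAX_CANDIDATES 16
#define MAX_DEPTH 10   /* simple guard against cross-signing loops */

/* Returns 1 if some chain from `cert` up to a trust anchor validates. */
static int
build_path(const Cert *cert, int depth)
{
    const Cert *candidates[MAX_CANDIDATES];
    size_t n, i;

    if (is_trust_anchor(cert)) {
        return 1;
    }
    if (depth >= MAX_DEPTH) {
        return 0;
    }

    /* every issuer whose name matches, not just the "best" one */
    n = find_candidate_issuers(cert, candidates, MAX_CANDIDATES);
    for (i = 0; i < n; i++) {
        if (checks_pass(cert, candidates[i]) &&
            build_path(candidates[i], depth + 1)) {
            return 1;   /* this branch reached a trusted root */
        }
    }
    return 0;   /* backtrack: no candidate led to a trusted root */
}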
>
> I know that just about everybody has expressed concerns about how difficult 
> libpkix is to maintain. And, also, it is huge. I am not aware of all the 
> problems that the older validation code has, so it seems like it might be 
> somewhat reasonable to extend the old validation code to add the features it 
> is missing, and avoid using libpkix at all. 
I'm OK if someone wanted to rework the libpkix code itself, but trying
to shoehorn the libpkix features into the old cert processing code is
the longer path to getting something stable. Note that the decision
to move away from the old code was made by those who knew it best.
RFC 3280-compliant certificate processing is long overdue in NSS,
and in Firefox in particular. It's time to just get on with it. We have
code that works. I'm OK with a plan to replace it with something else,
but right now it's the code we have. Trying to graft things onto the old
code (which is really 4 separate implementations anyway) is not a good
path forward.

bob
>
> Thoughts?
>
> Thanks,
> Brian


-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
