One of the main features of the sqlite key storage engine is that multiple processes can read from and write to it at once, relying on sqlite's file-locking and ACID semantics to keep the store from being corrupted by multiple accessors. My guess is that, in order to implement this, the NSS developers decided not to use the token object cache, because the cache didn't meet their needs.
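To make the concurrency point concrete, here is a minimal sketch of how two processes can safely share a single sqlite file through the public sqlite3 C API. The table, schema, and function name are invented for illustration only and have nothing to do with the actual NSS key/cert store layout:

    #include <sqlite3.h>

    /* Illustration only: store one labelled blob in a shared sqlite file.
     * sqlite's file locks serialize concurrent writers from other processes. */
    int store_blob(const char *path, const char *label, const void *data, int len)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        int rc = -1;

        if (sqlite3_open(path, &db) != SQLITE_OK)
            return -1;

        /* Wait up to 2 seconds instead of failing immediately when another
         * process holds the write lock. */
        sqlite3_busy_timeout(db, 2000);

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS objects (label TEXT PRIMARY KEY, value BLOB)",
            NULL, NULL, NULL);

        /* BEGIN IMMEDIATE takes the write lock up front, so the whole update
         * is atomic with respect to readers and writers in other processes. */
        sqlite3_exec(db, "BEGIN IMMEDIATE", NULL, NULL, NULL);
        if (sqlite3_prepare_v2(db,
                "INSERT OR REPLACE INTO objects (label, value) VALUES (?, ?)",
                -1, &stmt, NULL) == SQLITE_OK) {
            sqlite3_bind_text(stmt, 1, label, -1, SQLITE_STATIC);
            sqlite3_bind_blob(stmt, 2, data, len, SQLITE_STATIC);
            if (sqlite3_step(stmt) == SQLITE_DONE)
                rc = 0;
            sqlite3_finalize(stmt);
        }
        sqlite3_exec(db, rc == 0 ? "COMMIT" : "ROLLBACK", NULL, NULL, NULL);
        sqlite3_close(db);
        return rc;
    }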
The cache needs to be invalidated if the file -- or, preferably, only the affected entry -- has been updated since the cache entries were created. The NSS developers live in a world of hardware PKCS#11 tokens that are many thousands of times slower than NSS's own PKCS#11 software implementation, and thus (in my opinion) it is likely that they don't typically think about processing speed or about any path deeper than 5 CAs.

That said, if you can figure out how to update the cache-validation code, it would probably make deep CA chains and a large number of certificates in the store more bearable.

(I am not an NSS developer, and I offer this analysis with many weasel-words: that is, these are *my opinions and conjectures*. I have not worked with them other than over this mailing list and dev-security-policy, and I cannot speak for them.)

-Kyle H
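To make the suggestion concrete, one possible shape for such a validation check is sketched below. The meta table, its single generation row, and the function names are all invented for illustration; this is not how NSS or softoken actually track changes. The idea is that writers bump a generation counter inside the same transaction as their update, and readers drop their in-memory cache whenever the counter has moved:

    #include <sqlite3.h>

    /* Assumes a table created once as:
     *   CREATE TABLE meta (id INTEGER PRIMARY KEY, gen INTEGER);
     *   INSERT INTO meta VALUES (1, 0);
     */

    /* Read the current generation; returns -1 on error. */
    static sqlite3_int64 read_generation(sqlite3 *db)
    {
        sqlite3_stmt *stmt;
        sqlite3_int64 gen = -1;

        if (sqlite3_prepare_v2(db, "SELECT gen FROM meta WHERE id = 1",
                               -1, &stmt, NULL) != SQLITE_OK)
            return -1;
        if (sqlite3_step(stmt) == SQLITE_ROW)
            gen = sqlite3_column_int64(stmt, 0);
        sqlite3_finalize(stmt);
        return gen;
    }

    /* Called by a writer, inside the same BEGIN IMMEDIATE ... COMMIT
     * transaction that modifies a certificate or key entry. */
    static void bump_generation(sqlite3 *db)
    {
        sqlite3_exec(db, "UPDATE meta SET gen = gen + 1 WHERE id = 1",
                     NULL, NULL, NULL);
    }

    /* Called by a reader before trusting its in-memory object cache; if this
     * returns 0, the cache should be flushed and repopulated. */
    static int cache_is_still_valid(sqlite3 *db, sqlite3_int64 cached_gen)
    {
        sqlite3_int64 now = read_generation(db);
        return now >= 0 && now == cached_gen;
    }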
On Sun, Sep 12, 2010 at 5:41 AM, Wolter Eldering <wolter.elder...@vanad.com.cn> wrote:

I'm using NSS with a sqlite database. I noticed that CERT_GetCertChainFromCert will rebuild the whole chain again and again by making PKCS#11 calls that all go to the sqlite database. Sqlite is very fast, but if you have a deep CA chain and a larger number of certificates it starts to add up. Is there a reason an internal slot can't work with the cache?

    if (!PK11_IsInternal(nss3slot) && PK11_IsHW(nss3slot)) {
        rvToken->cache = nssTokenObjectCache_Create(rvToken, PR_TRUE, PR_TRUE, PR_TRUE);
        if (!rvToken->cache)
            goto loser;
    }
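For what it's worth, the change the question seems to be pointing at would look roughly like the sketch below: relaxing the hardware-token requirement so that the sqlite-backed internal slot also gets an object cache. This is only an illustration of the question, not a reviewed or tested patch, and without the cache-invalidation work described above it could serve stale objects when another process updates the database:

    /* Hypothetical variant, for illustration only: also create the object
     * cache for the internal slot, not just for hardware tokens. */
    if (PK11_IsInternal(nss3slot) || PK11_IsHW(nss3slot)) {
        rvToken->cache = nssTokenObjectCache_Create(rvToken, PR_TRUE, PR_TRUE, PR_TRUE);
        if (!rvToken->cache)
            goto loser;
    }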
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto