David,

I don't find it particularly surprising that an effort directed towards the 
enterprise and qualified individuals using desktop computers doesn't come to 
exactly the same conclusions as a project targeting consumers who use mobile 
devices to access applications like on-line banking.  I.e., "my" users don't 
even know what a CA is.
"Yeah, Consumers Just Wanna Login,.. Login,.. Login" :-)

In the SKS/KeyGen2 scheme, PKCS #11 et al. have been "degraded" to perform only 
the task they are really good at: exposing already provisioned keys to applications.
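
A minimal sketch of that role (Java 9+ with the SunPKCS11 bridge; the module 
path, PIN and key alias are made-up examples, not anything taken from SKS):

  import java.nio.charset.StandardCharsets;
  import java.security.KeyStore;
  import java.security.PrivateKey;
  import java.security.Provider;
  import java.security.Security;
  import java.security.Signature;

  public class UseProvisionedKey {
      public static void main(String[] args) throws Exception {
          // Point the SunPKCS11 bridge at whatever PKCS #11 module holds
          // the already provisioned key (the path below is just an example).
          Provider p11 = Security.getProvider("SunPKCS11").configure(
              "--name=Token\nlibrary=/usr/lib/opensc-pkcs11.so");
          Security.addProvider(p11);

          // The PIN is all the application needs to know; the private key
          // itself never leaves the module.
          KeyStore ks = KeyStore.getInstance("PKCS11", p11);
          ks.load(null, "1234".toCharArray());
          PrivateKey key = (PrivateKey) ks.getKey("vpn-client", null);

          // Use the key for whatever the application is about.
          Signature signer = Signature.getInstance("SHA256withRSA", p11);
          signer.initSign(key);
          signer.update("challenge".getBytes(StandardCharsets.UTF_8));
          System.out.println("Signature: " + signer.sign().length + " bytes");
      }
  }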

In theory you could upgrade cryptographic APIs to support provisioning [1], but 
10+ years of zero success indicate that this is indeed just a theory.

Therefore I have defined a specific credential provisioning/management API 
which, unlike PKCS #11, does not abstract a CM (Cryptographic Module).  It is 
rather the opposite: it specifies an on-line-adapted CM down to the bit level.
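
Just to give a feel for the scope (a hand-waving sketch only; the method names 
and parameters below are invented for illustration and are NOT the actual SKS 
interface):

  // Hypothetical provisioning interface, sketched for illustration only.
  // The point is that session setup, key generation, certification and
  // session close-out are all part of the specified API rather than being
  // left to an abstracted cryptographic module.
  public interface ProvisioningSession {

      // Opened against the issuer over an end-to-end-secured channel;
      // the attestation ties the session to a specific device.
      byte[] getSessionAttestation();

      // The issuer asks the device to generate a key pair in protected
      // storage; the public key goes back for certification.
      java.security.PublicKey createKeyEntry(String id, String algorithm,
                                             byte[] pinPolicy);

      // The certified result is stored alongside the generated key.
      void setCertificatePath(String id,
                              java.security.cert.X509Certificate[] path);

      // Both ends verify a closing MAC before the keys become usable.
      void closeSession(byte[] issuerCloseMac);
  }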

Technically this is already a done deal [2]; the challenge is convincing 
various parties that this is a good approach.  I obviously have a bit left to 
do here with respect to Intel...

br
ar

1] Using an on-line end-to-end-secured process

2] Specifications, application notes, and fairly extensive PoC code including a 
JUnit suite

On 2012-07-26 14:15, David Woodhouse wrote:
> On Thu, 2012-07-26 at 14:00 +0200, Anders Rundgren wrote:
>> On 2012-07-26 12:24, David Woodhouse wrote:
>> <snip>
>>>
>>>> My same concerns would apply to private keys.
>>>
>>> An application-specific 'third slot' would certainly address that
>>> concern.
>>
>> </snip>
>>
>> IMO, private keys are a very different topic because they are not
>> really "owned" by the user, and if misused they could hurt not only
>> the user but the RP as well.
> 
> They are a very different topic, and one I was planning to look at
> *after* sorting out the basic "sysadmin wants to install all $CORP CA
> keys to be trusted system-wide" and "user wants to install her own CA
> and the CACert one to be trusted for all purposes" use cases.
> 
> It's *insane* that we can't even cope with those two simple use cases
> properly.
> 
>> There seem to be two ways ahead [for private keys]:
>>
>> 1. Let each application manage/own its private keys
>> 2. Let the system manage private keys and limit misuse by ACLs
> 
> Or 3. Let the system manage private keys, and let them have a PIN.
> 
> Which is what you get with a hardware token anyway. If I want to connect
> to the VPN using a key from a hardware token, I just do:
> 
>  openconnect -c pkcs11:object=AnyConnect%20Remote%20Access $VPNSERVER
> 
> ... and then I'm asked for the PIN. If the application knows the PIN¹,
> it gets to use the key. If not, it doesn't. 
> 
> Since this scheme already *exists* and is used for hardware tokens, why
> would we do something different for software storage of keys?
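
To illustrate the point: with a software PKCS #11 module such as SoftHSM, the 
exact same application code as in the sketch further up works unchanged; only 
the provider configuration differs (the library path, token name and PIN below 
are assumptions):

  import java.security.KeyStore;
  import java.security.Provider;
  import java.security.Security;

  public class SoftTokenDemo {
      public static void main(String[] args) throws Exception {
          // Same SunPKCS11 bridge, but pointed at a software token.
          Provider p11 = Security.getProvider("SunPKCS11").configure(
              "--name=SoftToken\nlibrary=/usr/lib/softhsm/libsofthsm2.so");
          Security.addProvider(p11);

          // The per-token PIN gates access, exactly as with hardware.
          KeyStore ks = KeyStore.getInstance("PKCS11", p11);
          ks.load(null, "1234".toCharArray());
          System.out.println("Keys visible to this application: " + ks.size());
      }
  }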
> 
> We thus get to conveniently punt the question of how the application
> *gets* the PIN. It could ask the user once and store the PIN in "secure"
> storage that we don't have to worry about. It could ask the user every
> time. Or it could interact with some secret storage infrastructure which
> will provide stored passwords only to the appropriate applications, and
> already *has* the ACL stuff sorted out. But ideally, I lost you at
> 'conveniently punt'. We don't want to have to reinvent this particular
> wheel.
> 
> All we need is a per-key passphrase, and I think we have a fairly good
> (and realistic, and deployable, and not pie-in-the-sky) solution.
> 
> For private keys, which wasn't where I intended to start.
> 
