On Sat, Mar 21, 2009 at 5:57 PM, Nelson B Bolyard <nel...@bolyard.me> wrote:
> Kyle Hamilton wrote, On 2009-03-21 15:49:
>> On Sat, Mar 21, 2009 at 2:58 PM, Nelson B Bolyard <nel...@bolyard.me> wrote:
>
>> I blame NSS for choosing not to adhere to certain aspects of the SSL
>> 3.0 and TLS 1.0 standards (accepting a CertificateRequest with a
>> zero-length list of identifiers of acceptable CAs), enforcing others
>> (including the 'fatal protocol_error alert' I alluded to above)
>
> NSS did enforce that for a long time, but then certain misconfigured servers
> began, in large numbers, to request client auth without sending
> any issuer names, and browsers simply stopped working with those servers.
> So, NSS was changed to forgive that error.  There seemed to be no downside
> to doing so.  On behalf of the NSS team, I supported the change in
> TLS 1.1 to allow zero-length lists of issuer names.
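For concreteness, the field at issue is the certificate_authorities list in
the CertificateRequest handshake message.  A rough Python sketch of the
TLS 1.0 wire layout (my own illustration, not NSS code):

```python
import struct

def parse_certificate_request_cas(body):
    """Parse the certificate_authorities list from a TLS 1.0
    CertificateRequest body (RFC 2246, section 7.4.4).

    Layout: certificate_types<1..2^8-1>, then
    certificate_authorities<0..2^16-1>, a sequence of
    length-prefixed DER-encoded DistinguishedNames.
    An empty return list is the contested "no issuer names" case.
    """
    n_types = body[0]                      # one-byte count of cert types
    off = 1 + n_types                      # skip the cert-type list
    (ca_len,) = struct.unpack_from(">H", body, off)
    off += 2
    end = off + ca_len
    cas = []
    while off < end:
        (dn_len,) = struct.unpack_from(">H", body, off)
        off += 2
        cas.append(bytes(body[off:off + dn_len]))
        off += dn_len
    return cas
```

A server sending one cert type and a zero-length CA list produces an empty
result, which is exactly the case TLS 1.1 legitimized.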

So, essentially, you allowed servers to dictate client behavior, and
it's turning around and biting you in the ass.  This explains why
you're so averse to changing the client/browser at this point --
because you already made one change to appease the server crowd, and
it's simply opened an additional can of worms.

The downside to supporting that behavior is that the UI didn't keep
up.  We're still provided with a single list of available
certificates, rather than being able to group them or tree them or do
anything useful with the information.  (Not to mention that the list
of available certificates only appears in a pulldown menu, when it
would likely be more useful to put it in a two-paned, Explorer-like
window with the certificate selection on the left and the appropriate
details in a pane on the right -- so that related identity groups can
be created, maintained, and used.)

Oh wait, I'm bitching about the UI again.  (Have I not tried to
explain before this that all of these problems are interconnected?
The UI is how the user interfaces to the system, and it needs to be
examined and built for adequacy to the task at hand, capability for
those who use it often, and simplicity for those who don't use it
often.)

>> I *do*, however, blame NSS for requiring client keys and certificates to
>> be installed in the current user's certificate store in order to use
>> them.
>
> NSS requires them to be in some (any) "device" (could be virtual) that is
> accessible through the PKCS#11 API.  Remember that NSS can use certs and
> keys from ANY PKCS#11 software module, including those that make the OS's
> native cert/key store appear to be a PKCS#11 token.  There's a PKCS#11
> module that looks in the computer's on-board TPM chip.  There's even a
> PKCS#11 module that looks in a directory of PEM files.

Perfect!  Now how does one install it?
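(Answering my own question for the record: assuming the module ships as a
shared library, the NSS-native route is modutil.  The library path, module
name, and profile directory below are my guesses, not gospel:)

```shell
# Register a PKCS#11 module (e.g. a PEM-file reader) in an NSS database.
# Substitute your own profile directory and library path.
modutil -dbdir "$HOME/.mozilla/firefox/XXXX.default" \
        -add "PEM reader" -libfile /usr/lib/libnsspem.so

# Confirm the module is now listed:
modutil -dbdir "$HOME/.mozilla/firefox/XXXX.default" -list
```

Which only underscores the point below: this answers Firefox on one
platform, not the OS key managers.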

> NSS offers a capable and secure means of storage of keys and certs, and
> offers an extensible API through which any other scheme you can name can be
> plugged in.
>
> Now, is your complaint still valid?  Or have you merely not yet availed
> yourself of the available solutions?

PKCS#11 has at least two major flaws, neither of which has been
addressed at all to my knowledge.

1) It is platform/OS-specific -- you're provided a binary module which
you must trust not to do anything inimical with your certificates or
keys.
1a) Platform unsupported by the provider of the token?  You're SOL.
1b) You think the binary module is spyware-laden?  You're SOL.
2) The installation into each environment is different.
2a) I don't know how to install such a module in OSX for its key manager to use.
2b) I don't know how to install such a module in OSX for Firefox to
use (other than secmod)
2c) I don't know how to install such a module in Windows for CryptoAPI to use.

Even EFI (Extensible Firmware Interface) has addressed problem #1: it
defines a platform-independent bytecode for drivers which conforming
implementations must interpret in a manner which will allow the device
to initialize and begin providing services.

>> I still, however, believe that server-auth is -- if not the worst
>> feature of TLS -- certainly the most overhyped and misused.
>
> You mean, most ignored by users?  Absolutely.  Most users always blindly
> assume that they're connected to the server they want to be connected to.
> They have no CONCEPT of MITM in their heads.  They cannot imagine that the
> server to which they're talking would be an attacker's.  They're the ones
> who say "I don't need authentication, only encryption". They'll type their
> user name and password into any screen from any server, which is exactly why
> phishing exists.

I've already stated my thoughts as to why this is a bogus argument --
including "there's no branding of CAs" -- so I'm not going to repeat
it here.  (They don't recognize the importance of the CA because the
CA is never actually presented to them as being something worth
noticing.)

> That's also why user identification and authentication schemes -- wherein
> the info that the user presents to the remote server CANNOT be used by that
> server to impersonate the user -- are SO important.
> SSL client auth is such a scheme.

At least we agree on this point.

>> Interesting.  So my friend in Ann Arbor who has a 6-to-4 IPsec tunnel
>> should be able to use it without problems (and can't, as the tunnel
>> uses IPsec and is blocked), and my friend in San Jose who needed to
>> upgrade to a business account so that he could do work from home
>> (using the Cisco VPN utility) shouldn't have needed to?
>
> I suspect so.  I use the Cisco VPN utility from home on my ordinary home
> Comcast cable modem in a suburb of San Jose.  I think Comcast customer
> service reps may be too quick to upsell users who have problems to the
> more expensive accounts.  It's also possible that Comcast doesn't operate
> uniformly in all markets.

Alright, thank you (very much!) for the information.  I'll see if I
can't help him troubleshoot what's going on.

>> Alright, fair point.  I am (and have been) looking at it from the view
>> of a single webserver providing a single service.
>
> Didn't you recently accuse me of that?

I likely did.  I'm in a bit of a quandary, though, because I'm
failing to understand how sessions (negotiated before any data,
including the GET request, is sent to the server) can be separated at
the path level while still allowing each path its own authentication
requirement -- meaning, allowing the session ID to be changed during a
renegotiation for a specific path.

>> I realize that there are other implementations in production
>
> If one looks at the number of different products that use SSL/TLS, and
> counts each product as one (counting products, not instances of products)
> browsers and web servers are a small percentage of the total applications
> that use SSL/TLS.  That's also true of the products that use NSS.
> It's very common to use SSL/TLS between servers, acting as clients, and
> other servers.  In such applications of SSL, (in)activity based session
> lifetimes are unwanted.  NSS is useful to all of those classes of products.

It'd work wonderfully for one of the things I'm working on, except I
have to use OpenSSL, because it's the one TLS implementation that I
know well enough to comment out the "protocol_error fatal alert" for
the issue related to TLS servers asking clients for authentication
before they choose to identify themselves.

>> This wouldn't particularly work  [...] Particularly if the session
>> has to ask, every time it wants to switch credentials, what
>> certificate to use for it.
>
> NSS supports having multiple simultaneous sessions between a client and
> a server, each bearing different credentials.

Well, yes.  The question is as I asked above: How can a client know
that it's not supposed to reuse the same session, with the same
credentials, with a different path on the server?

>>> The problem here is that people so desperately (or ignorantly) want the
>>> TLS sessions to be application sessions, that they configure the TLS
>>> sessions as if they WERE application sessions.  They configure TLS
>>> sessions with an upper bound of a few minutes (or seconds), thinking
>>> (or wishing) that TLS sessions are based on inactivity times.  The
>>> solution is not to change TLS to work the way that some single application
>>> wants its sessions to work, and it is not to misconfigure TLS sessions
>>> with absurdly short maximum lifetimes.  That is a server problem, perhaps
>>> a server admin problem.  It is not a browser problem.
>>
>> Is it?  Or is it something that, like the CA/B forum, needs a S/B forum?
>
> If TLS were a standard only for browsers and web servers, then browser and
> web server representatives could get together and define it to work just for
> them.  But it isn't.

It's not, no, and I'm not suggesting that they try to change the
protocol just for themselves.  What I'm proposing is that they try to
work out the semantics of the protocol -- what each part and
interaction *means* -- and then write to those semantics, those
assumptions.

>> Rather than making all the assumptions from the RFCs and
>> standards that exist, which have been shown to have failed in the
>> underlying goal of interoperability,
>
> All the interoperation that happens on the internet is on the basis of those
> IETF standards.  That's hardly "shown to have failed".

Client certificates have been "shown to have failed", simply because
they're not in common use by the common person that the Mozilla
Foundation exists to build a browser for.  The capability has existed
for fifteen years.

>> perhaps the most important thing would be to settle down, stop blaming
>> the other, sit down, and negotiate on exactly what the various things
>> *mean*. (I swear, this is almost worse than the Hatfields and McCoys...)
>
> Here's an idea.  Settle down, stop blaming the browser, and get the
> developers and admins of the products to start using the standards as
> they exist, configuring their products to use the standard protocol
> features as they are actually defined, rather than perpetually whining
> that the standards don't work the way they'd personally like.

How about coming to the same table and explaining why you, as a
browser creator, have interpreted the standards to mean what you have?
How about coming to the same table and listening to the reasons why
the developers and admins of the products involved have interpreted
the standards to mean what *they* have?

Ian has used the term "impedance mismatch".  That's the difference
between the expectations that browser vendors have, and the
expectations that server vendors have.  Sitting down and talking so
that you can understand each others' positions will go a LONG way
toward making things interoperable and useful.

>>> Configuring TLS session lifetimes in the server with absurdly short maximum
>>> lifetimes (including zero lifetimes) is THE SOLE CAUSE of all that
>>> re-prompting for TLS client auth.  It's stupid.  To all the admins who do
>>> that, and then whine that the browser prompts often, I repeat: "Doctor,
>>> it hurts when I do this".
>>
>> Maybe, just maybe, you should get off your high-horse.  You are
>> describing this as if it is The Way Things Are And Cannot Be Changed
>> (like the human body).  This isn't the human body, it's software.
>> It's insanely configurable.  It's insanely re-editable.
>>
>> It may be THE SOLE CAUSE as far as YOUR IMPLEMENTATION goes...
>
> There is no other cause for browsers (or any TLS client, regardless of
> implementation) to ever need to choose a client cert, by any means, other
> than to receive a client auth request from the server, which BY DEFINITION
> only happens when the server has chosen not to reuse an existing session.

No, actually.  A server can send a CertificateRequest at any time, by
renegotiating: for instance, when a client has requested a resource
that the server's security policy states must have an underlying
certificate authentication, or when the current authentication doesn't
have the proper access rights to the underlying resource.  This can
all be done within a single connection.

>> but if there's enough people who don't understand the configuration,
>> that means that there's a major disconnect in communication somewhere.
>
> OK, so the server products don't educate their users (admins) how to
> properly configure them.  Hey, let's insist that the browser change instead!

You're breaking your own argument here.  First, you say that NSS is
too useful to non-web-(browser|server)s to allow resetting session
timeouts.  You then say that servers are the only ones that actually
have any control over session timeouts.  THEN you say that I'm
insisting that the browser change?!

TLS is a *very* flexible protocol.  It does not mandate that sessions
time out, though it suggests that an upper bound of 24 hours be in
place.  (Ah, the difference between SHOULD and MUST.)  It does not
mandate that any security policy regarding a session's lifetime be
implemented.  It doesn't even mandate that the session lifetime not be
reset to zero on every request.  All of this is decided *by the TLS
library's client* -- the implementation of the application protocol
that is running atop TLS.  That application implementation interfaces
to and with the TLS library through a sideband channel that is akin to
ioctl() (though, through custom and practice, much more complex).
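To make the "upper bound" point concrete, here's a toy server-side session
cache, purely my own sketch: nothing in TLS forces the lifetime value
below; the 24-hour figure is only RFC 2246's SHOULD-level suggestion.

```python
import time

SUGGESTED_MAX = 24 * 60 * 60   # RFC 2246's suggested (SHOULD, not MUST) bound

class SessionCache:
    """Toy server-side cache: a session is resumable until its
    configured lifetime elapses.  The lifetime is pure server policy;
    setting it absurdly low is what causes constant re-handshakes."""

    def __init__(self, lifetime=SUGGESTED_MAX):
        self.lifetime = lifetime
        self._created = {}          # session_id -> creation timestamp

    def add(self, session_id, now=None):
        self._created[session_id] = time.time() if now is None else now

    def resumable(self, session_id, now=None):
        now = time.time() if now is None else now
        t = self._created.get(session_id)
        return t is not None and (now - t) < self.lifetime
```

An admin who configures `lifetime=0` here gets a full handshake -- and a
client-cert prompt -- on every connection, which is the "Doctor, it hurts
when I do this" case.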

>> Yes, the problem exists on the servers -- but until you sit down and
>> explain *why* certain decisions on the client have been made, to the
>> people working with the servers, so that they can go "oh, but what
>> about *this?*" and explain why certain decisions on the server have
>> been made, and you can come to a real workable compromise that
>> addresses your concerns AND the concerns on the server side...
>
> See, there are these things called standards.  Interoperation on the
> Internet is a function of conformance to standards.  They've already been
> negotiated.  It's not an endless renegotiation of protocols to accommodate
> every Johnny Come Lately.  Now people (including developers and admins) must
> take the time to learn to use them properly.

Funny, the "HTTP over TLS" RFC isn't a standard.  It's an
"informational RFC".  This means that it was a documentation of
current practice -- and it requested comments on it.  You're acting
like it's as important as RFCs 822 or 793.  Funny, these are comments
on it, and instead of recognizing the points that I and Ian and Anders
have made, you're trying to shield every design decision that's making
it impossible to support client certificates in any kind of useful
manner as though It's A Standard.

You're even the one who pointed out that it was an "informational" RFC.

> There will always be those who, when confronted with the fact that they're
> abusing the standards, will revile the standards to save face.  And there
> will always be those who imagine that the product they use *IS* the
> standard.  (I am reminded of a comment I read recently, saying that there
> must be a flaw in the RFC, because OpenSSL doesn't work the way the RFC
> says. :)
>
> Standards do evolve.  If people want to change the standards, the place
> to do that is in the IETF working groups, not in the NSS mailing list.

Except for the problem I alluded to above: HTTP over TLS *IS NOT A STANDARD*.

> I guess that would be the paradigm of following the standards, eh?
> You'd rather have every thing be constantly renegotiated, I gather?

I'd rather things that are shown not to work, fifteen years after
they're implemented, be renegotiated.  I'd rather things that are
shown to work within five years not be.

> Let me suggest that you clearly separate UI issues from protocol issues.
> There are standards for the security protocols.  Those standards govern
> practically every aspect of security protocols.  By contrast, there are
> far fewer standards for UI.  UI is infinitely malleable.  So, perhaps
> you should expend your efforts on trying to get browser UI to change.
> IMO, this list is not the place to do that, because this is not where
> the UI guys hang out.  (One of them lurks here sometimes, but ...)

It is rather difficult to do this, simply because of one major, major issue:

TLS as a standard is very much like English as a language.  It follows
certain rules, certain standards, certain forms, certain syntax.

HTTP over TLS is very much like a technical dialect.  It uses TLS as a
basis, certainly -- but it has its own assumptions, its own forms, its
own acceptable and unacceptable grammars, and its own acceptable and
unacceptable syntax.

I'm not saying (other than my ONE criticism of TLS, the
"unauthenticated server cannot ask for client authentication"
criticism, which -- as you have correctly pointed out -- rightfully
doesn't apply in the case of HTTP over TLS) that TLS has any need for
modification.  It is, as a language, sufficient for the task at hand.
(There may be other issues raised during a discussion between the web
browsers and web servers that might suggest that it should be modified
in other ways, in later updates to the protocol, to enable certain
things that might be found useful -- I don't know, because I'm not at
the table, because the table doesn't exist yet.)

The technical dialect of HTTP over TLS, though, is not functional.
There are assumptions that you make (as a browser vendor) that are not
the same as the assumptions that others make (as server vendors).
This is very much akin to one group using imperial units of yards and
inches versus the other group using metric units of meters and
centimeters.  Until both sides sit down and figure out which
measurement standard to use, nothing's ever going to happen to change
the status quo, which is that Mozilla née Netscape has spent millions
of dollars on a feature which isn't used by the people it's chartered
to support.

(Why am I seeing parallels to
http://www.tysknews.com/Depts/Metrication/mars_orbiters_demise_avoidable.htm
?)

>> Interesting.  So my friends and contacts in the various IT
>> substructures in the Navy, Marines, and Air Force have been
>> misrepresenting that to me all these years.  They have stated to me
>> that Firefox doesn't go on computers that connect to the Official
>> Network, but only on the Morale & Recreation machines.
>
> There are FF web browsers on subs and in tanks.  Is that M&R?
> (Do tank operators play first person shooter games between battles? :)
>
> Seriously, there are DoD installations that are still running FF2 because
> FF3 uses a newer version of NSS that has not yet been FIPS 140 certified.
> The certification of NSS 3.12.x (the version in FF3) begins next month.

FIPS 140-2 validation -- per both the NIST, which is not the final
arbiter in the matter, and the NSA, which is -- is only good for
systems that handle sensitive but unclassified data.  AES-256 has
been accepted as an algorithm to protect classified data as well, but
FIPS 140-2 hasn't (as far as I can tell) been deemed acceptable proof
that a particular implementation meets the requirements for classified
data encryption.

So, please forgive me, but I must seriously doubt this assertion.
(Especially since DES was dropped from FIPS 140-1 validity several
years ago.)

-Kyle H
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
