I think that the approaches described in that paper have some serious
flaws which need to be addressed.

"Man in the Middle" (MITM) attacks against the client are
theoretically supposed to be avoided by having the public key of the
site signed by a trusted third party, which is where the concept of
the X.509 "certificate" comes in.  MITM attacks against the server are
theoretically supposed to be avoided by having the public key of the
client signed by a trusted third party.  There are three instances I'm
aware of that cause issues in this MITM-avoidance system, though...
the first two are attacks that have actually occurred, and the third
is the one that you seem to want to propose a 'fix' to:

1) A 'trusted third party' (i.e., a root authority which has its
certificate embedded in the browser) signs a certificate that belongs
to a same-named company in a different area of commerce, which (even
though it has a different domain name) allows the 'lock' icon to show
up.  In this case, the holder of that other certificate decrypts and
re-encrypts the traffic.  (This is mitigated by the use of client
certificates, which very few places have implemented.)

2) A proxy server that is trusted (such as the Chinese Firewall),
whose use is mandated and which holds a wildcard certificate for '*',
receives the request, decrypts it, and forwards it on.  This almost
certainly precludes the use of client certificates: the only entity
that can verify the client's certificate is the firewall itself, and
since the firewall doesn't have the private key associated with the
certified public key, it cannot re-prove the client's identity to the
real server.

3) The client talking with the server has not cryptographically proven
its identity to the server.

There are issues with the third option, to be sure -- at this point,
very few servers require the client to cryptographically identify
itself, which is what allows the MITM attacks to occur.  And, to be
fair, the SSL and TLS specifications compound this problem by
suggesting that the client must be authenticated with the same level
of third-party assurance as the server is.  This requirement, though,
need not hold -- a given public key, together with demonstrated
possession of the corresponding private key (for example, by signing
data that the public key verifies), is enough to prove a 'unique
identity' which has no characteristics other than uniqueness.
Anything done within the context of that 'unique identity' can then
add characteristics to it: if the holder logs into a website with a
unique set of credentials, for instance, those credentials can be
viewed as properties of that unique identity.
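The "unique identity" idea above boils down to a proof of possession
of a private key: a verifier issues a fresh challenge, and only the
key holder can produce a signature on it that the public key verifies.
A minimal sketch, using deliberately tiny textbook-RSA parameters
(purely illustrative, never usable in practice):

```python
# Toy proof-of-possession: a verifier sends a random challenge; the
# claimant signs it with the private exponent; anyone holding the
# public key can check the result.  Textbook RSA with tiny primes --
# illustrative only, utterly insecure at this size.
import secrets

# Hypothetical toy key: n = p*q with p=61, q=53, public exponent e=17.
p, q, e = 61, 53, 17
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # private exponent (2753)

def sign(challenge: int) -> int:
    """Claimant: prove possession of d by signing the challenge."""
    return pow(challenge, d, n)

def verify(challenge: int, signature: int) -> bool:
    """Verifier: anyone holding (n, e) can check the proof."""
    return pow(signature, e, n) == challenge

challenge = secrets.randbelow(n - 2) + 1   # fresh nonce per handshake
sig = sign(challenge)
assert verify(challenge, sig)   # identity is simply 'whoever holds d'
```

Note that nothing here involves a third party: the key pair alone
establishes uniqueness, and any attributes (login credentials, etc.)
accumulate on top of it.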

There are certainly issues with the entire SSL/TLS approach.  First
off, there are too many trusted third parties (root certificates
embedded in clients) for easy management.  Second, from the viewpoint
of the client, a given root authority is generally (but not
necessarily) used for initial certificate issuance and for later
renewals.  This suggests that a client could warn the user if the root
certificate used to sign a particular site's certificate has changed. 
In addition, browsers need to encourage or enforce the creation of
client certificates (self-signed, if necessary), along the lines of
the "unique identity" assertion I suggested above.
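The warning I suggest above amounts to issuer pinning: remember which
root signed a site's certificate on first contact, and flag any later
change.  A rough sketch (the cache and the certificate bytes are
hypothetical stand-ins for what a browser would persist and receive):

```python
# Sketch of 'warn if the signing root changed': remember the SHA-256
# fingerprint of the root certificate first seen for each host, and
# flag any later mismatch.  Persistence, cert-chain extraction, and
# the user-facing warning are all out of scope here.
import hashlib

_pinned_roots: dict = {}   # hostname -> fingerprint of first-seen root

def fingerprint(root_der: bytes) -> str:
    """Stable identifier for a certificate: hash of its DER encoding."""
    return hashlib.sha256(root_der).hexdigest()

def check_root(hostname: str, root_der: bytes) -> bool:
    """True if the signing root matches the pinned one (or is new);
    False means the root changed and the user should be warned."""
    fp = fingerprint(root_der)
    pinned = _pinned_roots.setdefault(hostname, fp)
    return pinned == fp

# First visit pins the root; a different root later triggers a warning.
assert check_root("example.com", b"root-A-der-bytes")
assert not check_root("example.com", b"root-B-der-bytes")
```

A real implementation would also need an expiry/override path, since
sites do legitimately change certificate authorities.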

In any case, "SSL-session-aware user authentication" is only useful if
neither the server nor the client has decided to prevent the
resumption of a session.  (Per the SSL and TLS specifications, either
side has the option of doing so, whether by local security policy or
simply because it does not implement session resumption.  This is
particularly true on the server side, which may have so many sessions
open that it doesn't have enough space to cache them all.)  SSL/TLS
sessions are finite in number, have a finite lifetime, and are often
maintained only as long as a single SSL/TLS connection.  They are not
permanent, and may well change even within a given "authenticated
transactional session".
However, private keys are generally more closely guarded by current
SSL/TLS implementations than most other data they hold, and thus it is
less likely that a private key will be compromised.  (Unless, of
course, the MITM has the ability to look into the guarded store, or
the memory of the process -- in which case the security of the client
has already been compromised and nothing that the user can do short of
reinstalling the OS can fix it.)
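The finite, evictable nature of the server-side session cache is the
crux of the argument above, and can be sketched as a small bounded
store with LRU eviction and a lifetime (the sizes and names here are
invented for illustration, not taken from any real TLS stack):

```python
# Sketch of a server-side SSL/TLS session cache: bounded size, bounded
# lifetime, least-recently-used eviction.  A resumption attempt can
# always miss, which is why nothing durable can hang off the session.
import time
from collections import OrderedDict
from typing import Optional

class SessionCache:
    def __init__(self, max_entries: int = 2, lifetime_s: float = 300.0):
        self.max_entries = max_entries
        self.lifetime_s = lifetime_s
        self._cache: "OrderedDict[bytes, tuple]" = OrderedDict()

    def store(self, session_id: bytes, master_secret: bytes) -> None:
        self._cache[session_id] = (time.monotonic(), master_secret)
        self._cache.move_to_end(session_id)
        while len(self._cache) > self.max_entries:
            self._cache.popitem(last=False)   # evict least recently used

    def resume(self, session_id: bytes) -> Optional[bytes]:
        entry = self._cache.get(session_id)
        if entry is None:
            return None                       # evicted: full handshake again
        created, secret = entry
        if time.monotonic() - created > self.lifetime_s:
            del self._cache[session_id]       # expired: same outcome
            return None
        self._cache.move_to_end(session_id)
        return secret

cache = SessionCache(max_entries=2)
cache.store(b"s1", b"secret1")
cache.store(b"s2", b"secret2")
cache.store(b"s3", b"secret3")        # pushes s1 out of the cache
assert cache.resume(b"s1") is None    # client is forced to renegotiate
assert cache.resume(b"s3") == b"secret3"
```

Anything the client binds to a cached session evaporates the moment
the entry is evicted or expires.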

In short, access to the master secret won't help to maintain security
-- every time a session was forced to be renegotiated due to being
dropped from the cache, the client would have to reauthenticate, and
even then there would be no guarantee that it wasn't a MITM.  (There
is no guarantee that a MITM isn't occurring with a self-signed client
certificate, either, but it would appear to be less likely at the
moment.  However, there is at least one case I can think of -- the
case of the Chinese Firewall -- which would cause issues, but in that
case you're dealing with an adversary with infinite resources [aka the
"brain in a jar" -- the client's behind the firewall, which is the
only way to the outside world, therefore the Firewall is the jar]...
which no cryptographic scheme can defeat.)
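For concreteness, the kind of binding under discussion in this thread
(Ralf's fall-back below of hashing the session key together with the
server certificate) would look roughly like the following.  All inputs
are placeholder bytes; the point is only that the binding is exactly
as ephemeral as the session that produced the master secret:

```python
# Rough sketch of a 'session-aware' binding: derive a per-session
# identifier from the master secret plus the server certificate.
# A renegotiated session yields a new master secret, hence a new
# binding -- authentication tied to the old one does not carry over.
import hashlib

def session_binding(master_secret: bytes, server_cert_der: bytes) -> str:
    h = hashlib.sha256()
    h.update(master_secret)
    h.update(server_cert_der)
    return h.hexdigest()

b1 = session_binding(b"master-secret-1", b"server-cert-der")
b2 = session_binding(b"master-secret-2", b"server-cert-der")
assert b1 != b2                 # new session, new binding
```

And, per the argument above, a MITM that already sits inside the
handshake computes the same binding on its own leg of the connection.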

-Kyle H

On 3/17/06, Ralf Hauser <[EMAIL PROTECTED]> wrote:
> Nelson,
>
> Thx for the quick follow-up.
> >> Is it possible to access the 36 bytes as per
> >> http://www.rfc.net/rfc2246.html#s7.4.9.
> >> (20 bytes of SHA-1 and 16 bytes of MD5)?
> > Of what value are these items to anything outside of the SSL/TLS protocol
> > itself?
> We are trying to build a plugin to prevent phishing. See also
> http://www.w3.org/2005/Security/usability-ws/papers/08-esecurity-browser-enhancements/
> and https://bugzilla.mozilla.org/show_bug.cgi?id=322661 .
> >
> >> P.S.: Alternatively, after the handshake has completed, is it possible to
> >> access the SSL/TLS session key that was negotiated.
> > Directly?   To the bits of they key itself?  or to a handle for that key?
> I guess either the bits of the key itself or a digest thereof
> >
> > Again, why is this value of interest to anything outside of SSL itself?
> Probably, the approaches are best described in
> http://www.esecurity.ch/OHB06b.pdf .
> >
> >> I see that I can get at the serverCert in
> >> http://xulplanet.com/references/xpcomref/ifaces/nsISSLStatus.html
> >
> > Yes, unlike the other values requested above, the cert is public info,
> > and is typically sent over the wire in the clear.
> So, if the 36 bytes are not available, hashing the session key plus the
> server certificate most likely identify an SSL session pretty uniquely as a
> fall-back as well - would you agree?
>
>    Ralf
>
>
> _______________________________________________
> dev-tech-crypto mailing list
> dev-tech-crypto@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>