On Tue, Oct 5, 2010 at 11:41 PM, Kurt Seifried <k...@seifried.org> wrote:
>
> >> Google is currently communicating about how they will use SSL False Start
> >> to "accelerate the web", even if it means breaking a small fraction of
> >> incompatible sites (they will use a blacklist that should mitigate most of
> >> the problem).
> >> See http://news.cnet.com/8301-30685_3-20018437-264.html
> >>
> >
> > Interestingly, the folks at CNET made a huge mistake in their calculations,
> > since only a fraction of the 227 million web sites are SSL secured. Of that,
> > 0.05% appears to be rather tiny, certainly not the 114,000 sites they
> > claimed in the article.
>
>
> From the EFF SSL Observatory (pretty recent data):
>
> 10.8M started an SSL handshake
> 4.3+M used valid cert chains
> 1.3+M distinct valid leaves
>
> so that's more like 2,000 sites that will be broken, assuming Google's
> numbers are legit (of course, if those were the top 500 sites it would
> be rather painful, but a blacklist of 2,000 entries is pretty simple to
> maintain). So he's only off by a factor of 50 or so.
>
> > Signer:  Eddy Nigg, StartCom Ltd.
>
> --
> Kurt Seifried
> k...@seifried.org
> tel: 1-703-879-3176

Thanks for the information, Kurt (and indirectly, Eddy). I would like
to be accurate on this point and correct the story as necessary, but I
need help in ensuring I have the right information and understand what
it means, first.

I took the 0.05 percent statistic from Adam Langley's blog (offline
right now--perhaps he took it down?), where he said (as I quoted in the
story), "this change will cause issues with about 0.05% of websites on
the Internet." As far as I can tell, he was talking about the whole
Internet, not just the TLS/SSL-secured subset. I welcome expert opinions
on whether he was unclear or wrong, whether I misunderstood, and what
the true statistic is.

For context, here's the quotation with some of the paragraphs before and after:


  "...That might not seem like very much. But these costs are
multiplied when loading a complex site. If it takes an extra 100ms to
start fetching the contents of the page, then that's 100ms until the
browser discovers resources on other websites that will be needed to
render its contents. If loading those resources reveals more
dependents then they are delayed by three round trips.

  "And this change disproportionately benefits smaller websites (who
aren't multihomed around the world) and mobile users or areas with
poorer Internet connectivity (who have longer round trip times).

  "Most attractively, this change can be made unilaterally. Browsers
can implement False Start and benefit without having to update
webservers.

  "However, we are aware that this change will cause issues with about
0.05% of websites on the Internet. There are a number of possible
responses to this:

  "The most common would be to admit defeat. Rightly or wrongly, users
assign blame to the last thing to change. Thus, no matter how grievous
or damaging their behaviour, anything that worked at some point in the
past is regarded as blameless for any future problem. As the Internet
becomes larger and more diverse, more and more devices are attached
that improperly implement Internet protocols. This means that any
practice that isn't common is likely to be non-functional for a
significant number of users...."
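
(As an aside, to check that I follow the latency argument: as I
understand it, False Start saves one round trip per new TLS connection,
and the quoted paragraph's point is that the saving compounds with the
depth of a page's dependency chain. Here's a toy Python sketch of that
arithmetic--the 100 ms figure is Adam's; the depth of 3 and the variable
names are my own made-up example:)

  # Toy model (mine, not Adam's): a fixed extra delay per new HTTPS
  # connection stacks up along the critical path, because each level of
  # dependent resources is discovered only after the previous one loads.
  rtt = 0.100               # Adam's 100 ms example for one round trip
  saved_rtts_per_conn = 1   # round trips False Start removes per handshake
  dependency_depth = 3      # e.g. page -> stylesheet -> web font (made up)

  saving = dependency_depth * saved_rtts_per_conn * rtt
  print("latency removed from the critical path: %.0f ms" % (saving * 1000))

Please tell me if that model oversimplifies what False Start actually
changes.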


Kurt, I gather your SSL data is from July's Defcon paper (available at
https://www.eff.org/observatory). First, could you folks explain to me
why the 4.3M sites with a valid certificate chain would be the ones to
look at (vs. all that offer an SSL handshake)? Second, why would Google
be wrong in saying it's 0.05 percent of all sites vs. just
SSL/TLS-encrypted sites?
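
For what it's worth, here is the back-of-the-envelope arithmetic as I
currently understand it, so you can correct the assumptions directly
(the 227 million figure is the overall web-site count cited in the
article; the 4.3M figure is the EFF valid-chain count you quoted):

  # Two readings of Adam Langley's "about 0.05% of websites" -- figures
  # are the ones quoted in this thread, not independently verified.
  all_sites = 227000000            # total web sites cited in the article
  ssl_valid_chain_sites = 4300000  # EFF Observatory: sites with valid cert chains
  breakage_rate = 0.0005           # "about 0.05%"

  print(all_sites * breakage_rate)              # ~113,500: the story's current reading
  print(ssl_valid_chain_sites * breakage_rate)  # ~2,150: Kurt's reading

If the 0.05 percent applies only to the SSL-capable subset, the 114,000
figure in the story is off by roughly the factor of 50 Kurt mentions,
which is exactly what I'd like to nail down.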

sts

--
stephen.shankl...@cbs.com
http://news.cnet.com/deep-tech
Twitter/Skype: stshank
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
