On 12/12/2017 22:51, Ryan Sleevi wrote:
> On Tue, Dec 12, 2017 at 3:44 PM, Jakob Bohm via dev-security-policy <
> [email protected]> wrote:
>
>> What you are writing below, with far too many words, is that you think
>> that URLs are the only identities that matter in this world, and
>> therefore DV certificates are enough security for everyone.
>
> Yes. This is the foundation and limit of Web Security.
> https://en.wikipedia.org/wiki/Same-origin_policy
>
> This is what is programmatically enforced. Anything else either
> requires new technology to technically enforce it (such as a new
> scheme), or is offloading the liability to the user.
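The same-origin policy referenced above can be illustrated with a minimal sketch: two URLs share an origin only when scheme, host, and port all match. This is an illustration only, not browser code; real browsers additionally handle things like `document.domain`, opaque origins, and scheme-specific rules.

```python
# Minimal sketch of a same-origin check: an origin is the
# (scheme, host, port) tuple, with default ports normalized.
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str) -> tuple:
    parts = urlsplit(url)
    # A missing port falls back to the scheme's default port.
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    return origin(a) == origin(b)

print(same_origin("https://example.com/a", "https://example.com:443/b"))  # True
print(same_origin("https://example.com/", "https://www.example.com/"))    # False
```

Note that nothing in this tuple carries organizational identity; the check is purely syntactic, which is exactly the gap the EV debate is about.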
What is *programmatically* enforced is too little for human safety.
Believing that computers can replace human judgement is a big mistake.
Most of the world knows this.
That is why there is such a thing as identity documents in the real
world. Because humans often need to know who they are talking to, not
just that they have a vanity plate and a company logo on their white
van.
Humans have opinions about and relationships with other humans and
human-operated companies. The prominent display of CA vetted identity
information in addition to the self-selected network address (URLs)
provides this information to human users as part of their decision
process. The way the information is presented is very similar to how
such information is presented in real world trust scenarios: Id cards
pinned to the clothes or hanging around the neck. Official business
license framed on the wall behind the counter. Official health and
safety inspection report posted at the door. People glance to see it is
there, occasionally reading just enough to check that it looks right,
taking comfort in the other party not knowing whether today is the day
they will actually read it rather than just glance.
You need to understand that not every trust relationship begins and
ends with a Google search for a URL. The more real the stakes are, the more real
the basis of trust needs to be. Sometimes people are just commenting on
a blog and don't care much whether the blogger is even a real person.
Sometimes people buy cheaper items online and just need to know that
their credit card transaction is not visible to a random company (hence
the common practice of outsourcing the entry of card details to a
reputable clearing service that promises not to hand the credit card
number back to the seller). Sometimes people make bigger purchases and
need the assurance that there is a real company at the other end, which
can (if necessary) be sued for non-delivery. Sometimes people make
really big transactions and need to know that they are dealing with a
real world entity that they have a real world trust relationship with.
> Respectfully, I would encourage you to re-read both Ian's and James'
> research. For example, you will find that the organization being
> discussed is "Stripe, Inc", not "Spring, Inc" - a mistake made
> frequently enough not to be charitably attributed to a typo. The
> question about the level of stringency of the validation requirements
> has also been responded to, as well as the deficiencies of "Well,
> they'd have to lie to do so" as a response.
I have been copying the example name from message to message, with no
one objecting. Saving up this mistake for use as ammunition when you
run out of arguments is not a nice way to argue.
> The remainder of your argument basically boils down to "But banks
> already are offloading the liability to users when they say check for
> the green bar" (and that is bad, user-hostile, and unsustainable), and
> the "Look for the corporate identity" has been shown repeatedly to be
> so insufficient and incomplete that, if that is the response you'd
> offer, then it's not introducing new information into the conversation.
No, I was using the awareness campaigns by banks as an example of how
users can be, and have been, trained to use the EV UI even if they don't
fully understand it. It was a counterexample to your use of misleading
statistics about how few users understand the nuances of EV
certificates.
> I agree that we should be concerned about potential fraud, and there
> are far more user-friendly technologies that can help mitigate that -
> as I mentioned. That doesn't mean that getting rid of EV UI is
> throwing the proverbial baby out - it means having the maturity to
> accept that some technological experiments don't pan out, and as good
> engineers and socially-responsible developers, we should recognize
> when certain features are causing systemic harm to users' overall
> security. I realize the innate appeal of "Let users decide" by giving
> them an option, but a trivial survey of human-computer interaction
> literature should reveal the flaw in that. If that is too much to ask,
> reading about "Analysis Paralysis", "Decision Fatigue", and
> "Information Overload" on Wikipedia should all provide sufficient
> background context.
I am saying that your view of what the EV system achieves and has
already achieved is completely biased and flawed.
> So we have to circle back to the core question:
>
> - Is the display of the UI, as implemented today, meaningful and
> useful for the problems it tries to solve and the cognitive overhead
> it introduces to billions of users? If not, are there plans to remove
> it?
I am saying it *does* achieve its goal of helping to protect users
against misleading domain names, though not perfectly. It also achieved
the goal of getting CAs to provide more thorough vetting of OV
certificates, by providing a way to sell thoroughly vetted EV
certificates despite their higher real cost (in man-hours etc.)
compared to sloppier OV practices.
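For readers unfamiliar with the mechanics behind the EV UI being debated: browsers decide whether to show the EV treatment by checking the leaf certificate's certificatePolicies extension against policy OIDs recognized as EV for the chain's root, and requiring a trusted chain. The sketch below is a simplification with an assumed allowlist shape; the CA/Browser Forum EV OID 2.23.140.1.1 and the DV OID 2.23.140.1.2.1 are real published values, but real browsers keep per-root mappings and apply further checks.

```python
# Simplified sketch of the EV-UI decision, not any browser's actual code.
# Assumed flat allowlist of EV policy OIDs (real browsers map OIDs per root).
EV_POLICY_OIDS = {
    "2.23.140.1.1",  # CA/Browser Forum reserved EV policy OID
}

def shows_ev_ui(leaf_policy_oids, chain_is_trusted: bool) -> bool:
    """EV UI requires both a trusted chain and a recognized EV policy OID."""
    return chain_is_trusted and any(
        oid in EV_POLICY_OIDS for oid in leaf_policy_oids
    )

print(shows_ev_ui(["2.23.140.1.1"], True))    # True  (EV policy, trusted)
print(shows_ev_ui(["2.23.140.1.2.1"], True))  # False (DV policy OID)
print(shows_ev_ui(["2.23.140.1.1"], False))   # False (untrusted chain)
```

The point of the mechanism is that the EV bit is asserted by the CA under the stricter EV vetting rules, which is why the argument above turns on how trustworthy that vetting actually is.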
"Showing more information" is not a viable answer - it results in a worse
outcome for users.
Only if you believe that hiding essential information from users is an
honest thing to do.
"Improve the validation" presumes that the information is viable and
useful, which goes against the SOP. (Read [1] if you're not sure why that's
bad)
[1] http://www.adambarth.com/papers/2008/jackson-barth-b.pdf
EV requirements are supposed to ensure that the information is viable,
useful, and correct; they are specifically supposed to ban the
problematic operating procedures of 10 years ago. If there are CA
practices that undercut that, the requirements need to be strengthened.
If there are government practices undercutting it (e.g. by allowing
companies to register misleading names, or to be created with no firm
link to a prosecutable person), then that is a general societal problem
not limited to the Internet context.
Enjoy
Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy