On Friday, November 27, 2015 at 7:59:37 AM UTC-5, Gijs Kruitbosch wrote:
> On 27/11/2015 12:16, Gervase Markham wrote:
> > On 26/11/15 17:13, Mike Hoye wrote:
> >> Stillman wrote some new code and put it through a process meant to catch
> >> problems in old code, and it passed. That's unfortunate, but does it
> >> really surprise anyone that security is an evolving process? That it
> >> might be full of hard tradeoffs? There is a _huge_ gap between "new
> >> code can defeat old security measures" and "therefore all the old
> >> security measures are useless".
> >
> > But the thing is, members of our security group are now piling into the
> > bug pointing out that trying to find malicious JS code by static code
> > review is literally _impossible_ (and perhaps hinting that they'd have
> > said so much earlier if someone had asked them).
> 
> That's not what they're saying. They're saying it is impossible to 
> guarantee with static analysis that code is not malicious. Nobody 
> disputes that, and nobody did before they started saying it, either. The 
> distinction is that the static code review finds *some* malicious JS 
> code. Dan's charge is that this is not useful because malware authors 
> will realize this (we tell them that their add-on got rejected and why!) 
> and try to bypass that review until they manage it.

Repeatedly claiming (as a number of Mozilla folks have now done) that "nobody 
ever said we could detect all malicious code" isn't helpful. That's a strawman 
argument.

The issue here is that this new system -- specifically, an automated scanner 
sending extensions to manual review -- has been defended by Jorge's repeatedly 
stated belief, from March, when I first brought this up, through yesterday on 
the hardening bug [1], that the scanner can "block the majority of malware". 
That's
simply not the case. The scanner cannot block even trivial attempts at 
obfuscation. That's what members of your security group are saying, and what my 
PoC demonstrates. You literally cannot block the sorts of examples in my PoC 
without blocking all extensions. (As I note on the hardening bug, the confusion 
here may be partly my fault. The examples in my PoC -- e.g., 'e'.replace() + 
'val' -- were meant to be somewhat humorous, but the point is just that you can 
bypass the scanner by generating a string at runtime, and it's provably 
impossible, in the general case, to determine what those strings will be 
through static analysis.)
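
To make that concrete, here's a minimal sketch of the general technique 
(illustrative code in the same spirit as the PoC, not the PoC itself; 
payload() is just a stand-in):

    // What a signature-based rule catches: the literal token.
    eval("payload()");                  // payload() is a stand-in here

    // What it can't catch: the same call, with the name built at runtime.
    // Any string-producing expression works; 'e'.replace() + 'val' is one.
    var name = "ev" + "al";
    window[name]("payload()");          // resolves to window.eval at run time

    // The argument can just as easily be assembled or downloaded at run time,
    // so neither the function name nor the code it runs ever appears as a
    // literal string for the scanner to inspect.

A scanner could only catch this by flagging every computed property access and 
every dynamically built string, which is exactly the "blocking all extensions" 
problem above.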

The implication of this is that manual review would be strictly voluntary on 
the part of the extension developer: since bypassing the scanner is trivial, 
the only developers you'd be sending to manual review are, almost by 
definition, legitimate ones. Malware that's willing to be detected is sort of, 
by definition, not malware.

Jorge has been saying he believes the scanner can block most malware because he 
genuinely doesn't understand the technical issues here, as his statements (and 
his absurd blocklisting of the PoC) make clear. It's hard not to make this 
sound like a personal attack, but that's really not my intention. Throughout 
this process, Jorge should have had the support of Mozilla engineers who 
understand that what he's been claiming about the scanner isn't technically 
possible.
Some people are now attempting to defend him by saying "well we never said it 
could block all malware", but that's a misrepresentation of the disconnect here.

Even the developer of the validator, asking a month ago whether combating this 
sort of trivial obfuscation was possible (it's not, as he was told), said, 
"Without it the validator remains more an advisory/helpful tool than something 
we could use to automate security validation." [2]

> > And not only that, but attempting it seems
> > to be causing significant collateral damage.
> 
> This is the interesting bit. The reason Dan is bringing this up is not 
> his concern for users' safety but the fact that the same automated 
> scanning is flagging things in his add-on that he claims aren't "real" 
> issues, this causes him to land in the "manual review" queue, and that 
> takes time.

Zotero is flagged for manual review 1) because we use things like nsIProcess 
and js-ctypes and 2) because we have hundreds of thousands of lines of code, 
which both time out the validator (though presumably that could be fixed) and 
all but guarantee that something in every update will be flagged, because the 
scanner flags all sorts of highly ambiguous things.
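
To illustrate the first point, here's a hedged sketch (hypothetical code, not 
Zotero's actual source) of the kind of nsIProcess use that a scanner pretty 
much has to flag, since it can't know anything about the program being 
launched:

    // Launching a local helper program via XPCOM (classic, pre-WebExtensions
    // extension code). To a static scanner this is "arbitrary process
    // execution" and gets flagged, however legitimate the actual use.
    var file = Components.classes["@mozilla.org/file/local;1"]
                         .createInstance(Components.interfaces.nsIFile);
    file.initWithPath("/usr/local/bin/helper");   // hypothetical helper path

    var process = Components.classes["@mozilla.org/process/util;1"]
                            .createInstance(Components.interfaces.nsIProcess);
    process.init(file);
    process.run(false, ["--integrate"], 1);       // hypothetical arguments

The scanner has no way to evaluate what the helper actually does, so its only 
real choice is to send every use of this API to the manual queue, legitimate 
or not.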

And yes, in our case the things it is flagging are indeed not "real" issues, as 
AMO editors have acknowledged in every review we've ever received. An AMO 
review has never identified a legitimate security issue in Zotero -- if they 
had, 1) we would've been grateful and 2) they wouldn't have approved us.

The collateral damage here is legitimate developers and the users of their 
extensions, who are denied timely updates (including important bug fixes) based 
on a scanner that blocks legitimate extensions but that by definition cannot 
block anyone who doesn't want to be blocked.

> To my mind, the logical conclusion of Dan's post is that he should want 
> all add-ons to be manually reviewed all the time.

No, because that would be massively disruptive to all, not just some, 
developers of unlisted extensions, who are likely unlisted specifically because 
they want or need to release timely updates to their users. Furthermore, in 
Zotero's case, we have a vastly better understanding of our very complex code 
base than an AMO reviewer, and we have every incentive to keep our users, whose 
trust we've earned over a decade of direct distribution and support from our 
site, safe.

> He claims that if 
> static analysis does not guarantee benign code, it is not worth having 
> it at all (something which I would dispute (see distinction drawn 
> above), but let's stick with it for now).

It is not worth causing massive disruption for legitimate developers in the 
name of a system that cannot stop even trivial attempts at obfuscation, yes.

> The reason we're drawing different conclusions is that I believe we 
> still want to do something about the issues caused by frontloaded 
> add-ons not distributed through AMO, and so we will need to do some kind 
> of review, and if we get rid of the static analysis, the logical thing 
> to do would be to use manual review.
> 
> Of course, having to manually review all the add-ons is going to cause 
> even more delays, and is therefore not in Dan's interest, so instead he 
> posits that we should just drop all our attempts to control issues 
> caused by frontloaded add-ons not distributed through AMO.

But you are doing something: you're introducing signing, you're forcing manual 
review for the far more dangerous and problematic category of sideloaded 
extensions, and you're creating a record of deployed code that can be reviewed 
later by AMO editors in order to provide feedback or (reliably) blocklist. I 
suggest, additionally, using the time saved by discontinuing repeated reviews 
of legitimate extensions to perform an initial review of submissions, which 
wouldn't prevent malware later but would reduce ID churn and provide an 
opportunity to give initial feedback to new developers.

The problem is thinking that automated scanning also lets you block malware in 
any meaningful way. As I note in my post, onerous but trivially bypassable 
automated rules could actually work against the benefits we do agree on, by 
reducing the amount of code you have available to search through or review 
later.

> The interesting thing is that Dan is nominally talking about how we're 
> trying to moderate quality and safety in a space where we haven't before 
> (ie add-ons not distributed through AMO), but then says stuff like "And 
> it's just depressing that the entire Mozilla developer community spent 
> the last year debating extension signing and having every single 
> counterargument be dismissed only to end up with a system that is 
> utterly incapable of actually combating malware."
> 
> which basically boils down to an ad-hominem on Mozilla and an indictment 
> of "the system" and signing and the add-ons space generally, when 
> really, all we're talking about right now is how/whether to review 
> non-AMO-distributed add-ons before signing them. Dan acknowledges 
> elsewhere in his post that signing has other benefits, but the polemic 
> tone sure makes it seem like the entire plan we have here is rubbish 
> from start to finish.

It's the people defending automated scanning as a meaningful deterrent against 
malware who are failing to make a distinction between different parts of the 
system, not me. I make extremely clear in my post that signing has benefits. 
What I find frustrating is the seemingly reflexive unwillingness, in the name 
of defending some colleagues who are out of their technical depth, to 
acknowledge that some parts of the system don't and can't achieve the goals 
that Mozilla has laid out for them.

> There's been a general trend there that Dan sees our attempts to try to 
> do something in that space as a one-way street where Mozilla should 
> basically make sure that all add-ons that used to work and weren't 
> distributed through AMO should not be disrupted, and we have been saying 
> that it's hard to improve user experience here if there are 0 
> restrictions, and so "something's gotta give". Dan wants a system where 
> he can (grudgingly) submit his add-on to AMO, and AMO gives it back to 
> him signed (ideally automatically via APIs) and nobody from Mozilla 
> (human or otherwise) reviews his code or tells him how to do stuff.

Read my post. I'm not calling for no signing. I'm not calling for no 
restrictions. I'm not calling for no review. I'm calling for changing the parts 
of the process that provide essentially no additional protection against 
malicious code but that are hugely disruptive to legitimate developers.

But wait, now it's unreasonable that I should want us to be able to release 
Zotero using automated build tools rather than by running a build script, 
downloading the XPI, manually uploading it to a web app, waiting four days for 
a review that may then require weeks of back-and-forth discussion resulting in 
no actual changes, going back to the web app and downloading the signed XPI 
manually, uploading the XPI to our servers, and running additional build steps 
to update manifests and distribute it? And you wonder why some extension 
developers feel there's been a lack of respect demonstrated by Mozilla 
throughout this discussion?

> And 
> we're trying to improve the state of non-AMO add-ons. Those two desires 
> are fundamentally very hard to reconcile. And that has very little to do 
> with whether or not static analysis can guarantee you non-malicious code 
> or not.

It's Mozilla that has repeatedly called for balancing user safety and developer 
freedom. What you have now is a system that is extremely disruptive to 
legitimate developers while providing almost no additional security, because it 
can be trivially bypassed by anyone with a passing knowledge of JavaScript.

Fortunately, as Jorge has said, the process, and the manual review step in 
particular, is being "reconsidered".

- Dan

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1227867#c26
[2] 
https://groups.google.com/forum/#!topic/mozilla.dev.static-analysis/qTCBKh2bRsE
