Nelson B Bolyard wrote:
(Snipped. Your interpretation is not inaccurate but isn't where we are heading.)

I think this list is NOT the place for the debate over the superiority
of open vs. closed source software.  This is the open source locker room,
not the open/closed source battle field.


Agreed, all.

Now, if the discussion can be steered to how Mozilla's crypto can succeed at
becoming as popular as Skype may be, WITHOUT it having to resort to
- closed source,

Indeed, if we look at Skype's closed source, it isn't really an advantage to them in security or popularity terms, only in competition, which doesn't interest us here so much.


- proprietary designs (restricted intellectual property),

Sure, I concur with that, for the same reasons as open source.


- being a closed system with no interoperability,
that may be worthwhile for this forum, IMO.


This may be an issue, but let's see how it develops. My issue is not one of source, open v. closed, but one of:

    standards v. non-standards.

Before we get to that, let's see how the open source thing works and see if we have an agreement on that.

Open source leads to interoperability and the ability for any number of players to play in the market. This counterbalances the economies of scale in IT, and holds back the dominant players -- Microsoft -- by ensuring there is a steady series of small ideas from small players. Because it is *our net* we also feel good about these things, which has a very important side-effect: people want to contribute and they want to download when the product is open.

That's Mozilla's space & place; agreed? And, we are probably agreed that open source is better for security [1] so I'll assume that too.

Now, there is a side-effect of the interoperability argument that creates a need for *standards*. Simply put, so that all can play, the new successful technology needs to be turned into a standard.

The problem with this is that it slows down any change. Indeed, by some lights it may make change impossible, or reduce it to only trivial or inoffensive things [2].

What's the problem with that? Well, it might be that slowness is the price of open source and interoperability, combined, and for general purpose open source we might accept that price. The IETF is the expression of our acceptance of the price, in the net. However, there is one exception to that where it isn't so rosy: security.



For this, let me leap across to Boyd's OODA model. In Boyd's world of observe-orient-decide-act, he modelled combat between adversaries. This happens to match security; we have the defenders and attackers. OODA is a loop, one for each adversary: one adversary goes through OODA, the other does too, again the first, again the other... each time bettering his position w.r.t. the last round, each time trying to out-do the other.

Here's the punchline: the one who can turn faster wins. The one who can turn inside the other guy's OODA (this metaphor is taken from the old fighter combat days of Spitfires v. Messerschmitt 109s) is the one that gains more each time, and eventually wins [3].
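To make the punchline concrete, here is a minimal sketch of the arithmetic behind "turning inside the other guy's loop." All the numbers are hypothetical, chosen only to illustrate the shape of the argument: if each completed loop yields one unit of positional advantage, the faster turner accumulates more over any fixed period.

```python
# Illustrative sketch of Boyd's OODA argument: assume each completed
# loop yields one unit of positional advantage, so over any fixed
# period the faster-turning adversary ends up ahead.

def loops_completed(period_days: float, loop_time_days: float) -> int:
    """Number of full OODA loops an adversary turns in a given period."""
    return int(period_days // loop_time_days)

def net_advantage(period_days: float, attacker_loop: float,
                  defender_loop: float) -> int:
    """Attacker's loops minus defender's loops over the period."""
    return (loops_completed(period_days, attacker_loop)
            - loops_completed(period_days, defender_loop))

# Hypothetical loop times: the attacker re-tools in a week; the
# defender (vendors, standards body, user updates) takes a year.
print(net_advantage(365, attacker_loop=7, defender_loop=365))  # 51
```

The exact figures don't matter; what matters is that the gap compounds every round, which is why the model predicts the faster side wins not once but every time.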

We can measure the times that defender and attacker take to turn their OODA loops. Suffice to say, in the internet world of security, the attacker turns his loop around much faster than the defender.

To the extent that this model applies, the difference is so severe in some areas that it is "game over." Assuming a real combat as opposed to a parade ground review, the attackers will turn their OODA loop faster, much faster, and win. Examples are spam, phishing, viruses, malware, etc; as soon as a defence comes out, the attacker has turned inside and attacked us from elsewhere. Proof? When was the last time that you heard of one of these things going down in volume/losses?

Why is this? Well, one reason is that the defender OODA loop is simply larger. If it includes extra nodes, then it takes time to travel all the nodes. E.g., anti-virus has to include the OS, and PKI has to include the standards body [4], and a bug fix has to include the user-update effort. The entire loop must then be slower, even if we know everyone is working at the same speed.
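The node argument above can be sketched in a few lines. The chain of nodes (vendor, standards body, user update) and the latencies below are assumptions for illustration, not measurements; the point is structural: the loop time is at least the sum of the latencies of every node the fix must travel through, so adding a node can only lengthen the loop, even when every node works at the same speed.

```python
# A defender's OODA loop time is bounded below by the sum of the
# per-node latencies in the chain; the attacker's chain is just himself.

def loop_time(node_latencies_days):
    """Total loop time for a chain of nodes, each with its own latency."""
    return sum(node_latencies_days)

# Hypothetical latencies, in days, for the chain described in the text:
attacker = loop_time([7])              # one node: the attacker himself
defender = loop_time([30, 180, 60])    # vendor fix, standards body, user update
print(attacker, defender)              # 7 270
```

Extending the loop to four organisations, as in [4], just adds more terms to the sum.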



Why doesn't this matter in general purpose open source? Because outside security, we have an absence of active attacker and an absence of real losses. When Firefox blows up, the user restarts, switches browser or even files a bug. Only slight losses of time & patience there. By contrast, when an Internet security system is breached by an active attacker, the value that can be lost could be an entire bank account [5]. In general purpose software failures, there isn't an attacker who turns faster, and takes more of our money when he wins.

In summary, there is no disagreement between open source and general internet software. But there is a clash between security and standards [6]. Standards slow down response to evolving attacks, and to the extent that this is not addressed, the attacker turns faster inside the standards defender's OODA loop. Not only does the attacker win this time, he wins every time, because of the rules of the game. No matter how much effort you put into the standards game, the only way out of the OODA trap is to change the parameters of combat and shrink your OODA loop. Dramatically. It may be that standards cannot compete in a real battle, in that marketplace.

Mozilla's policy however has chosen to do all software in a standards setting. Including security. What I am about here is exploring whether it is even possible to do active security -- where you have a real combat with a real enemy -- in a standards setting.

Skype is useful for that discussion because it succeeded, delivers security, and has no standards body.



iang



[1] Economist on open source, last para:
http://www.economist.com/science/tq/displaystory.cfm?story_id=12673385

[2] recent examples:  TLS/SNI, green bars, revocation.

[3] Related discussion: http://www.financialcryptography.com/mt/archives/000991.html

[4] And, we can extend the loop with some security models to as many as 4 organisations, so these must be even slower to respond.

[5] rule of thumb is that average loss is $1000.

[6] The obvious rejoinder to this model is to say it doesn't apply, because the security design is right and was always right, and there are no weaknesses. This is what we call "Maginot Line thinking." To those arguments, we have evidence that shows losses, real money losses. If we show that (c.f., phishing, K's MITM, etc) then we also show the design as not being perfect. Ergo, we must adjust, or accept and pay the losses. Either way, the design wasn't perfect.
_______________________________________________
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto