On Tue, Nov 13, 2012 at 9:59 AM, Randell Jesup <rjesup.n...@jesup.org> wrote:
> The WebRTC API (and MediaStream API via the Media Capture Task Force and
> getUserMedia()) is very much still in flux.

I’m not familiar with these specs, so I don’t know why they are still in flux.

> Chrome is shipping enabled-by-default soon, and will do so as
> RTCPeerConnection.

Why do they want to ship something that is still in flux? Who is
expected to use an in-flux API and be able to deal with the API being
in flux? (Web developers in general are not up to the task of tracking
in-flux APIs.)

>   Once WebRTC goes into the stable channel of Chrome, API changes will
>   be done with a longer, smoother transition period.

There’s a risk that the old API versions stick around forever, then.
Apple will basically never remove their -webkit- CSS stuff. Also,
Chrome’s transition period from H.264 to WebM seems to be continuing
indefinitely.

> However, I'd be *quite* surprised if there were no more breaking changes
> to RTCPeerConnection.

Why and how do the breaking changes arise? Shouldn’t implementors just
say “no” to mere “nice to have” tweaks at this point?

> So our problem will be: what do we ship in FF20?

I am not familiar with the APIs in question, but the policy I proposed
would answer:

Either don’t ship in Firefox 20, if there’s something worthwhile to
wait for, or, if waiting is not an option, ship in Firefox 20 and tell
the WG to stop making breaking changes (enforced by refusing to
implement breaking changes if the WG tries to make them nonetheless).

> I'll point out that APIs have a lot of options other than simple naming:
> (please excuse any syntax/etc errors):
>
> * You can directly query for features 
> (mypeercon.supports("JSEP-with-swizzling"));
> * You can test for sub-APIs (if (mypeercon.createFunnyOffer != undefined));

If "JSEP-with-swizzling" or createFunnyOffer ever went away, Web apps
would break. (One can be sure that a scenario in which all Web apps
gracefully handle the feature going away is purely theoretical.)
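
To make the failure mode concrete, feature-tested app code would look
roughly like this (a sketch only; the constructor argument and the
callbacks are illustrative, and createFunnyOffer /
"JSEP-with-swizzling" are the hypothetical names from the quoted
options):

    // Sketch of feature-tested app code using the hypothetical names
    // from the options above.
    var mypeercon = new RTCPeerConnection({ iceServers: [] });
    var hasFunnyOffer =
        typeof mypeercon.createFunnyOffer === "function" ||
        (typeof mypeercon.supports === "function" &&
         mypeercon.supports("JSEP-with-swizzling"));
    if (hasFunnyOffer) {
      // The path the app was actually written and tested against.
      mypeercon.createFunnyOffer(function (offer) { /* send the offer */ },
                                 function (error) { /* report failure */ });
    } else {
      // In practice this branch tends to be missing or untested, so if
      // the feature is later removed, the app simply breaks.
    }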

If the plan is for them to never go away, they should just be part of
the spec the way document.write() is, even though document.write() is
not a nice feature. In that case the issue reduces to shipping
partial features rather than shipping features with expected breaking
changes.

> * You can design it to support multiple versions (mypeercon = new
>   RTCPeerConnection("API version NN please"));
> * You can use the NN suffix naming ala Chrome (though they've dropped it now):
>   RTCPeerConnection00/01/etc as breaking changes are incorporated, and try
>   to support the previous version "for a while" (or for a long while).

Again, if previous versions never go away, the spec should just define
them all instead of pretending that the old versions went away. But
see also http://lists.w3.org/Archives/Public/public-html/2007Apr/0279.html
and http://robert.ocallahan.org/2008/01/http-equiv_22.html .

OTOH, if the plan is that old versions go away, how do you ensure that
developers of Web apps are on board with tracking the API breakage? So
far, telling Web devs that an API is subject to change has never
avoided breakage.

In any case, both Firefox and Chrome supporting RTCPeerConnection00
makes the API more compatible across browsers than Firefox supporting
mozRTCPeerConnection and Chrome supporting webkitRTCPeerConnection, so
I don’t see the WebRTC scenario as being an argument *for* *prefixes*
even though I see WebRTC *might* be an argument *against* refraining
from shipping experimental features on the release channel.
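
For what it's worth, the suffix scheme at least lets the same shim
resolve the constructor in every browser, while prefixes force
per-browser branches. A rough sketch, assuming the 00/01 globals from
Chrome's old convention (none of these exact names is guaranteed to
exist):

    // Suffixed revisions: one lookup works in any browser shipping
    // some revision of the API.
    var PeerConnection = window.RTCPeerConnection01 ||
                         window.RTCPeerConnection00;

    // Vendor prefixes: each browser needs its own branch, and the
    // objects found this way need not behave the same at all.
    var PrefixedPeerConnection = window.RTCPeerConnection ||
                                 window.mozRTCPeerConnection ||
                                 window.webkitRTCPeerConnection;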

> Devs will use polyfills to smooth over these differences, as they always
> do.  These would at least make the polyfills fairly deterministic if the
> editors and implementors are careful about marking 'breaking' changes.

Polyfills work when the person writing a polyfill for an API knows
what the API will be and has a browser that already implements the API
to test with, so that the polyfill developer can verify against a real
host browser that the polyfill really moves out of the way once the
browser supports the API natively.

You can't avoid future breakage with polyfills in the scenario where
you assume that the APIs you currently know will go away and you don't
yet know what they will be replaced with.
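
For comparison, the classic polyfill pattern only works because the
final shape of the API is known; a minimal sketch (the fallback here
is a stand-in, not a real implementation):

    // Only define the API if the browser doesn't already provide it,
    // so native support automatically wins.
    if (!window.RTCPeerConnection) {
      window.RTCPeerConnection = function (config) {
        // A real polyfill would need something to emulate the API
        // with, which doesn't exist for WebRTC itself.
        throw new Error("RTCPeerConnection is not supported here");
      };
    }
    // Nothing in this pattern helps if the native API later changes
    // shape under code that has already detected it.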

> WebRTC is a big spec, with lots still to be defined.  It will be a Long
> Time before it moves past editors-draft stage.  We will *not* have a
> reasonable option to avoid exposing it; the only question is how to do
> so, and how to minimize problems in the future - especially given others
> ahead of us on implementation (though we're catching up).  We will try
> hard to be API-equivalent to Chrome, but that may not be possible
> without downgrading our spec-compliance in issues where we're closer to
> the draft, for example. If we can't be 100% API-compatible with Chrome,
> we'll need to define how a site/app will notice and handle the
> differences (i.e. some level of polyfill...).

Why does the draft differ from what Chrome does? Is the difference
worth all the versioning trouble? That is, why not make Chrome’s
behavior the standard if Chrome already has the code that behaves like
that (by definition) and we value (according to what you said)
compatibility with Chrome?

> I wish I had a magic answer that would get the spec finished before
> anyone ships this, but that's not happening.  Sorry.

To me, this says that either you will break Web apps or you will end
up supporting all the API revisions that were ever shipped (or no one
ended up using the API before it changed). If the plan is to support
all the API revisions that ever got shipped, the solution is not to
pretend that the later revisions replace old APIs and instead make the
spec grow with side-by-side API alternatives. If the plan is to remove
old API versions, who is the expected user of the APIs that will get
broken? Are the expected users really up to the task of changing their
Web apps to track changing API versions so that browser end users
don’t experience breakage?

Either way, a moz prefix won’t help. (It’s worthless as a “this will
break” signal.)

If the in-flux APIs are shipped only to be used by Google+, Facebook
and a Skype Web app, expecting the Web devs to track API changes might
be realistic. If the expected users are Web developers in general, I
think it's completely unrealistic to expect them to track API changes.

If the idea is to ship something that we know is going to change on
the assumption that it will be used only by the likes of Google,
Facebook or Skype, the right solution might be shipping the APIs,
continuing to break the APIs, and having the APIs work only on a short
whitelist of domains whose dev teams are believed to be up to the task
of tracking in-flux APIs (to prevent devs who aren’t up to the task
from writing apps that depend on stuff we still want to break).
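
To be concrete about that last option, the gating would have to be
browser-internal; conceptually something like the following sketch,
with entirely made-up names and origins:

    // Hypothetical sketch of the kind of check a browser could make
    // before exposing an in-flux constructor to a page at all.
    var TRUSTED_RTC_ORIGINS = [
      "https://plus.google.com",
      "https://www.facebook.com"
    ];

    function shouldExposeInFluxRTC(origin) {
      // Only origins whose dev teams are believed to track breaking
      // changes see the experimental API; everyone else never gets a
      // chance to depend on it.
      return TRUSTED_RTC_ORIGINS.indexOf(origin) !== -1;
    }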

-- 
Henri Sivonen
hsivo...@iki.fi
http://hsivonen.iki.fi/