Hi James, thanks very much for your thoughts on this.

First, you're correct that there's a lot more involved in the selection,
implementation, adoption and maintenance of the web platform than what
we've covered. We picked a couple of areas where we think small specific
changes can have positive downstream effects.

Additionally, much of what we're proposing is based directly on the
interviews we had with people in different roles in the development of the
web platform. The common themes were: a lack of data for making
selection/prioritization decisions, limited visibility into what is in
flight both in Gecko and across all vendors, a lack of overall coordination
across vendors, and limited visibility into adoption. Those themes are the
first priority, and drove this first set of actions.

Much of what you discuss is, as you noted, far better than it was in the
past, which may be why those topics didn't come up much in the interviews.

More thoughts inline below.

> There are a number of things we can do to help ensure that the cost to
developers of targeting multiple implementations is relatively low:

I've coupled these items with your status summary, for readability:

> 1) Write standards for each feature, detailed enough to implement without
ambiguity.
> 1) Compared to 14 years ago, we have got a lot better at this. Standards
are usually written to be unambiguous and produce defined behaviour for all
cases. Where they fall short of this we aren't always disciplined at
providing feedback on the problems, and there are certainly other areas we
can improve.

YES. Huge improvements here, e.g. WebIDLing all the things. It would be
great to hear from our spec authors about what needs improving here.

> 2) Write a testsuite for each feature, ensure that it's detailed enough
to catch issues and ensure that we are passing those tests when we ship a
new feature.
> 2) We now have a relatively well established cross-browser testsuite in
web-platform-tests. We are still relatively poor at ensuring that features
we implement are adequately tested (essentially the only process here is
the informal one related to Intent to Implement emails) or that we actually
match other implementations before we ship a feature.

Can you share more about this, with some examples? My understanding is
that this lies mostly in the reviewer's hands. If we have these testsuites,
are they just not in automation, or not being used?
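
For concreteness, here's roughly what a minimal web-platform-tests test
looks like with testharness.js; the specific feature checked here (Service
Worker exposure) is just an illustrative pick on my part:

    <!-- exposure.html: a minimal wpt test using testharness.js -->
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <script>
    test(() => {
      // Assert the API surface exists; real coverage would go on to
      // exercise behaviour, not just exposure.
      assert_true('serviceWorker' in navigator,
                  'navigator.serviceWorker should be exposed');
    }, 'Service Worker API is exposed on navigator');
    </script>

Tests like this run in automation against every engine, which is what makes
the "do we actually match other implementations" question answerable.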

> 3) Ensure that the performance profile of the feature is good enough
compared to other implementations (in particular if it's relatively easy to
hit performance problems in one implementation, that may prevent it being
useful in that implementation even though it "works")
> 3) Performance testing is obviously hard and whilst benchmarks are a
thing, it's hard to make them representative of the entire gamut of
possible uses of a feature. We are starting to work on more cross-browser
performance testing, but this is difficult to get right. The main strategy
seems to be just to try to be fast in general. Devtools can be helpful in
bridging the gap here if they can identify the cause of slowness either in
general or in a specific engine.

There is a lot of focus and work on perf generally, so it's not something
that really came up in the interviews. I'm interested in learning about
gaps in developer tooling, if you have some examples.
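
One concrete example of the kind of gap I mean: the Long Tasks API lets a
page observe main-thread jank from script, but engine support and devtools
integration for it differ across browsers. A rough sketch (the logging is
just a stand-in for whatever attribution a real tool would do):

    // Observe tasks that block the main thread for over 50ms (the
    // threshold the Long Tasks API itself defines), where supported.
    if ('PerformanceObserver' in window) {
      const observer = new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
          console.log(`Long task: ${entry.duration.toFixed(0)}ms`,
                      entry.attribution);
        }
      });
      observer.observe({ entryTypes: ['longtask'] });
    }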

> 4) Ensure that developers using the feature have a convenient way to
develop and debug the feature in each implementation.
> 4) This is obviously the role of devtools, making it convenient to
develop inside the browser and possible to debug implementation-specific
problems even where a developer isn't using a specific implementation all
the time. Requiring devtools support for new features where it makes sense
seems like a good step forward.

We've seen success and excitement when features are well supported with
tooling. We're asserting that *always* shipping tooling to release
concurrently with features will amplify adoption.

We'd also like to dig deeper into how the way we ship large feature sets
affects adoption. For example, we don't know the costs or benefits of
waiting to ship things like the Web Animations API, Service Workers, or Web
Components to the release channel only once all the parts are complete,
versus shipping them piecemeal.

> 5) Ensure that developers have a convenient way to do ongoing testing of
their site against multiple different implementations so that it continues
to work over time.
> 5) This is something we support via WebDriver, but it doesn't cover all
features, and there seems to be some movement toward vendor-specific
replacements (e.g. Google's Puppeteer), which prioritise the goal of making
development and testing in a single browser easy at the cost of making
cross-browser development / testing harder. This seems like an area where we
need to do much better, by ensuring we can offer web developers a
compelling story on how to test their products in multiple browsers.

Definitely agree on easing cross-browser development. There are a few
services that do this, but a paid service is a huge barrier, and because
they aren't standardized they aren't integrated into tooling. They're not
where developers are already working.

Btw, I implemented a subset of the Puppeteer API for Firefox so that I
could easily run the same tests against Chrome and Firefox:

https://github.com/autonome/puppeteer-fx
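
To give a flavor of what that enables, the sketch below runs the same
smoke test against both engines. It assumes puppeteer-fx mirrors the core
Puppeteer API (launch/newPage/goto/title); the exact subset it supports is
an assumption here:

    // Run one check against Chrome (puppeteer) and Firefox (puppeteer-fx).
    const puppeteer = require('puppeteer');   // drives Chrome
    const firefox = require('puppeteer-fx');  // drives Firefox

    async function pageTitle(driver, url) {
      const browser = await driver.launch();
      const page = await browser.newPage();
      await page.goto(url);
      const title = await page.title();
      await browser.close();
      return title;
    }

    (async () => {
      const url = 'https://example.com/';
      console.log('Chrome: ', await pageTitle(puppeteer, url));
      console.log('Firefox:', await pageTitle(firefox, url));
    })();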

> So, to bring this back to your initiative, it seems that the only point
above you really address is number 4 by recommending that devtools support
is required for shipping new features. I fully agree that this is a good
recommendation, but I think we need to go further and ensure that we are
improving on all the areas listed above.

Yes, lots more work to do in the areas you listed, by a number of different
groups! Thanks for sharing your thoughts.



On Thu, Jul 26, 2018 at 2:13 PM James Graham <ja...@hoppipolla.co.uk> wrote:

> On 26/07/2018 19:15, Dietrich Ayala wrote:
>
> > Why are we doing this?
> >
> > The goals of this effort are to ensure that the web platform
> technologies we're investing in are meeting the highest priority needs of
> today's designers and developers, and to accelerate availability and
> maximize adoption of the technologies we've prioritized to meet these needs.
>
> I think this is a great effort, and all the recommendations you make
> seem sensible.
>
> Taking half a step back, the overriding goal seems to be to make
> developing for the web platform a compelling experience. I think one way
> to subdivide this overall goal is into two parts
>
> * Ensure that the features that are added to the platform meet the
> requirements of content creators (i.e. web developers).
>
> * Ensure that once shipped, using the features is as painless as
> possible. In particular for the web this means that developing content
> that works in multiple implementations should not be substantially more
> expensive than the cost of developing for a single implementation.
>
> The first point seems relatively well covered by your plans; it's true
> that so far the approach to selecting which features to develop has been
> ad-hoc, and there's certainly room to improve.
>
> The second point seems no less crucial to the long term health of the
> web; there is a lot of evidence that having multiple implementations of
> the platform is not a naturally stable equilibrium and in the absence of
> continued effort to maintain one it will drift toward a single dominant
> player and de-facto vendor control. The cheaper it is to develop content
> that works in many browsers, the easier it will be to retain this
> essential distinguishing feature of the web.
>
> There are a number of things we can do to help ensure that the cost to
> developers of targeting multiple implementations is relatively low:
>
> 1) Write standards for each feature, detailed enough to implement
> without ambiguity.
>
> 2) Write a testsuite for each feature, ensure that it's detailed enough
> to catch issues and ensure that we are passing those tests when we ship
> a new feature.
>
> 3) Ensure that the performance profile of the feature is good enough
> compared to other implementations (in particular if it's relatively easy
> to hit performance problems in one implementation, that may prevent it
> being useful in that implementation even though it "works")
>
> 4) Ensure that developers using the feature have a convenient way to
> develop and debug the feature in each implementation.
>
> 5) Ensure that developers have a convenient way to do ongoing testing of
> their site against multiple different implementations so that it
> continues to work over time.
>
> There are certainly more things I've missed.
>
> On each of those items we are currently at a different stage of progress:
>
> 1) Compared to 14 years ago, we have got a lot better at this. Standards
> are usually written to be unambiguous and produce defined behaviour for
> all cases. Where they fall short of this we aren't always disciplined at
> providing feedback on the problems, and there are certainly other areas
> we can improve.
>
> 2) We now have a relatively well established cross-browser testsuite in
> web-platform-tests. We are still relatively poor at ensuring that
> features we implement are adequately tested (essentially the only
> process here is the informal one related to Intent to Implement emails)
> or that we actually match other implementations before we ship a feature.
>
> 3) Performance testing is obviously hard and whilst benchmarks are a
> thing, it's hard to make them representative of the entire gamut of
> possible uses of a feature. We are starting to work on more
> cross-browser performance testing, but this is difficult to get right.
> The main strategy seems to be just to try to be fast in general. Devtools
> can be helpful in bridging the gap here if they can identify the cause of
> slowness either in general or in a specific engine.
>
> 4) This is obviously the role of devtools, making it convenient to
> develop inside the browser and possible to debug implementation-specific
> problems even where a developer isn't using a specific implementation
> all the time. Requiring devtools support for new features where it makes
> sense seems like a good step forward.
>
> 5) This is something we support via WebDriver, but it doesn't cover all
> features, and there seems to be some movement toward vendor-specific
> replacements (e.g. Google's Puppeteer), which prioritise the goal of
> making development and testing in a single browser easy at the cost of
> making cross-browser development / testing harder. This seems like an area
> where we need to do much better, by ensuring we can offer web developers
> a compelling story on how to test their products in multiple browsers.
>
> So, to bring this back to your initiative, it seems that the only point
> above you really address is number 4 by recommending that devtools
> support is required for shipping new features. I fully agree that this
> is a good recommendation, but I think we need to go further and ensure
> that we are improving on all the areas listed above.
>