Re: Intent to implement and ship: ping, rel, referrerPolicy, relList, hreflang, type and text properties on SVG elements

2018-04-11 Thread david
On Tuesday, 10 April 2018 00:57:43 UTC-7, Gijs Kruitbosch  wrote:
> On 10/04/2018 03:07, Cameron McCormack wrote:
> > On Tue, Apr 10, 2018, at 11:58 AM, Jeff Gilbert wrote:
> >> Do we have a heuristic for when to /not/ include something from HTML in 
> >> SVG?
> > 
> > If it doesn't make two features which already exist in both HTML and SVG 
> > more consistent, then I wouldn't include it.
> > 
> >> More or less, these additions to SVG just strike me as having solid
> >> potential risk (for both spec-interaction and implementation bugs) and
> >> negligible upside. Do we have people asking for this?
> > 
> > I don't know of people asking for this, but I would hope that we could 
> > share the implementations of these properties between HTMLAnchorElement and 
> > SVGAElement.  The closer the two <a> elements are in behavior, and the more 
> > we can share implementation between them, the lower the risk for bugs 
> > between the two.  (Ignoring the general risk of bugs from touching code at 
> > all.)
> 
> I don't know about the C++ side of things here in terms of shared 
> implementation, but *behavior* is already different as per spec. And we 
> (frontend folks who have to make sure things like context menus deal 
> with arbitrary web content) regularly forget that, and then end up being 
> bitten by it. Most annoyingly, for SVG <a> the `.href` property is an 
> SVGAnimatedString object with `baseVal` and `animVal` properties, not a 
> string like it is in HTML.
> 
> I assume that the same thing applies to the properties this is adding, 
> which means this is on the one hand more consistency with HTML (same 
> properties) and on the other hand, less (different values anyway).
> 
> ~ Gijs

Hi, I work on MS Edge and I'm helping drive these changes. 

The idea here is quite the opposite. We're trying to make the same concepts in 
SVG share the same specified behavior as HTML. For example, in SVG today these 
properties are specified to return an SVGAnimatedString. With this change they 
return a DOMString (except relList, which returns a DOMTokenList object) and 
thus match the HTML definition. The defined behavior will live in HTML, as Anne 
mentions. We've already made a similar change for SVGElement, where focus() et 
al. were defined in the SVG spec with a note that they behave the same as HTML. 
Now they're defined in an HTMLOrSVGElement mixin in HTML and the definition is 
removed from the SVG spec. That one was an easy change, as the behavior was 
already aligned.


Your point about href is a good one. That is one property we can't redefine as 
a DOMString, since it is already implemented and has usage on the web. We're 
trying to avoid future cases like this by aligning behavior and specs. There 
may be ways we can align href while keeping legacy behavior so things don't 
break, but we don't have a concrete proposal for this yet.
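For illustration, the `href` divergence described above is the kind of thing frontend code has to normalize today. A minimal sketch (the `hrefOf` helper name is hypothetical; the property shapes follow the HTML and SVG specs, mocked below as plain objects so the snippet runs without a DOM):

```javascript
// Normalize .href across HTML and SVG anchors.
// HTML <a>: .href reflects as a plain string.
// SVG <a>:  .href is an SVGAnimatedString with baseVal/animVal.
function hrefOf(anchor) {
  const h = anchor.href;
  return typeof h === "object" && h !== null && "baseVal" in h ? h.baseVal : h;
}

// Plain-object stand-ins for the two element shapes:
console.log(hrefOf({ href: "https://example.org/" }));              // HTML-like
console.log(hrefOf({ href: { baseVal: "https://example.org/" } })); // SVG-like
```

Both calls print the same URL; without the branch, SVG anchors hand callers an object where a string is expected.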
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-23 Thread David Bolter
Hi all, so as not to leave this hanging:

On Tue, Jan 17, 2017 at 9:24 AM, Axel Hecht  wrote:

> Am 16/01/2017 um 21:43 schrieb Dave Townsend:
>
>>
>> What other features do we depend on in XUL that I haven't listed?
>>
>>
> Accessibility? Not sure how big the difference is there between XUL and
> HTML.
>

Thanks for bringing this up. There is almost no difference and we're
covered here.

Cheers,
D


Re: Deprecating XUL in new UI

2017-01-23 Thread David Bolter
On Tue, Jan 17, 2017 at 12:55 PM, Bobby Holley 
wrote:

> On Tue, Jan 17, 2017 at 8:56 AM, Boris Zbarsky  wrote:
>
> > On 1/16/17 4:28 PM, Matthew N. wrote:
> >
> >> Does it just work from XHTML documents?
> >>
> >
> > Yes, as far as I know.
> >
> > Is our implementation of Web Components ready to replace it and riding
> the
> >> trains?
> >>
> >
> > No.
> >
>
>
> From the perspective of the platform, my general experience is that XBL is
> much, much more of a PITA than XUL itself. I would be very skeptical of a
> plan to start using some form of heavily-XBL-reliant HTML.
>
> I have generally ambivalent feelings towards XUL, but XBL cannot die fast
> enough.
>

Should (can) it die in the Quantum development timeframe?  What does that
do to shipping risk? I realize churn creates risk, but I seem to recall that
XBL is getting in the way of Quantum styling.

Cheers,
David


>
> bholley
>
>
> >
> > -Boris
> >


Re: Adding Rust code to Gecko, now documented

2017-01-26 Thread David Teller
Bug 1231711, but I never got to do it, unfortunately.

On 26/01/17 08:01, zbranie...@mozilla.com wrote:
> On Thursday, November 10, 2016 at 5:15:26 AM UTC-8, David Teller wrote:
>> Ok. My usecase is the reimplementation of OS.File in Rust, which should
>> be pretty straightforward and shave a few Mb of RAM and possibly a few
>> seconds during some startups. The only difficulty is the actual JS
>> binding. I believe that the only DOM object involved would be Promise,
>> I'll see how tricky it is to handle with a combo of Rust and C++.
> 
> Did you ever get to do this? Is there a bug?
> 
> zb.


Re: [ANN] Changing the default bug view for BMO

2017-01-30 Thread David Lawrence
Unfortunately, we will need to postpone the changeover, and it will not
be happening tomorrow as announced. Due to some unplanned security-related
changes, and because some blockers are still being worked on, it has taken
longer than expected to bring you the best possible experience
for the new default bug page.

Also special thanks to Marco Zehe for his invaluable feedback and
testing for some remaining accessibility issues with the new modal bug page.

Will keep you posted.
dkl

On Friday, January 13, 2017 at 5:04:35 PM UTC-5, dlaw...@mozilla.com
 wrote:
> Some fixes are taking longer than expected, so we are going to push the
cutover date to January 30th, 2017. Sorry for the delay.
>
> Follow along at:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1150541

>
> Thanks
> BMO Team
>
> On Friday, January 6, 2017 at 12:11:50 PM UTC-5, dlaw...@mozilla.com
 wrote:
> > Announcement: Changing the default bug view for BMO
> >
> >    As some of you may already know, and maybe even have been using
it for a while, the BMO [1] team has been working on a new bug view page
(a.k.a. show_bug.cgi) for some time [2]. The older, stock bug form was
not well laid out and was difficult to understand, with all of the
fields visible regardless of how often they were used. The stock form
was originally designed to display all of a bug’s data, and users had
to adapt their workflows accordingly. The BMO team designed a
completely new and more efficient view for the workflows of the majority
of BMO users.
> >
> >    The new bug-editing form has been added to BMO alongside the
stock form and can be enabled by toggling a user preference [3]. We have
been incrementally improving it and collecting feedback from those brave
enough to use it early on. The new form uses more modern design
practices and is therefore easier for us to improve and expand on. Any
new enhancements will be done only on the new form going forward. It was
code-named ‘Bug Modal’ due to the modular layout of the page. Each
submodule can be collapsed out of view and expanded when needed.
> >
> >    At this time we feel that the form is feature complete and ready
to become the new default bug form for BMO. We have been tracking the
final blockers in a bug report [4], and the last blockers are being
wrapped up this week. We wanted to let everyone know of the coming
changeover in advance so that we can get last-minute feedback. The
older, stock bug view form will still be around, and we will update the
user preference to let users switch back to the old one [3]. In the
future, we will be removing the old form altogether after fixing a few
more bugs [6], and we will make another announcement beforehand. Removing
the old code will make it easier on us from a maintenance standpoint.
> >
> >    Barring any issues with landing the last blockers, we are hoping
to do the changeover on January 16th, 2017. Until then, if this is the
first time you have heard of this new feature, go ahead and give it a
try. Any bugs, comments, or ideas for improvement of the modal form can
be reported in BMO [5].
> >
> > Thanks!
> > BMO Team
> >
> > [1] https://bugzilla.mozilla.org
> > [2] https://globau.wordpress.com/2015/03/31/bmo-new-look/

> > [3]
https://bugzilla.mozilla.org/userprefs.cgi?tab=settings#ui_experiments_row

> > [4] https://bugzilla.mozilla.org/show_bug.cgi?id=1150541

> > [5]
https://bugzilla.mozilla.org/enter_bug.cgi?product=bugzilla.mozilla.org&component=User%20Interface:%20Modal

> > [6] https://bugzilla.mozilla.org/show_bug.cgi?id=1273046




Re: Please don't abuse "No bug" in commit messages

2017-02-03 Thread David Burns
I have raised a bug[1] to block these types of commits in the future. This
is an unnecessary risk that we are taking.

I also think that we need to remove this as acceptable practice from the
MDN page.

David

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1336459

On 3 February 2017 at 15:11, Ryan VanderMeulen 
wrote:

> A friendly reminder that per the MDN commit rules, the use of "No bug" in
> the commit message is to be used sparingly - in general for minor things
> like whitespace changes/comment fixes/etc where traceability isn't as
> important.
> https://developer.mozilla.org/docs/Mozilla/Developer_guide/Committing_Rules_and_Responsibilities
>
> I've come across uses of "No bug" commits recently where entire upstream
> imports of vendored libraries were done. This is bad for multiple reasons:
> * It makes sheriffing difficult - if the patch is broken and needs backing
> out, where should the comment go? When it gets merged to mozilla-central,
> who gets notified?
> * It makes tracking a pain - what happens if that patch ends up needing
> uplift? What about when that patch causes conflicts with another patch
> needing uplift? What if it causes a regression and a blocking bug needs to
> be filed?
>
> Bugs are cheap. Please use them.
>
> Thanks,
> Ryan
>
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
>
>


Re: Please don't abuse "No bug" in commit messages

2017-02-03 Thread David Burns
Ideally we would be doing it in a way that we can audit things quickly in
the case we think a bad actor has compromised someone's machine.

While we can say what we want and how we want the process, we need to work
with what we have until we have a better process. This could be between now
and the end of the year. Ideally, and hopefully it's not too far away, all
pushes will be going through autoland and then we won't have situations
like this.

In the short term, when the sheriffs do merges I will be getting them to
have a look at all "no bug" pushes. Anything that goes outside of
https://developer.mozilla.org/docs/Mozilla/Developer_guide/Committing_Rules_and_Responsibilities
will be followed up.

David



On 3 February 2017 at 19:01, Steve Fink  wrote:

> On 02/03/2017 09:29 AM, Gijs Kruitbosch wrote:
>
>> On 03/02/2017 15:11, Ryan VanderMeulen wrote:
>>
>>> A friendly reminder that per the MDN commit rules, the use of "No bug"
>>> in the commit message is to be used sparingly - in general for minor things
>>> like whitespace changes/comment fixes/etc where traceability isn't as
>>> important.
>>> https://developer.mozilla.org/docs/Mozilla/Developer_guide/Committing_Rules_and_Responsibilities
>>>
>>> I've come across uses of "No bug" commits recently where entire upstream
>>> imports of vendored libraries were done. This is bad for multiple reasons:
>>> * It makes sheriffing difficult - if the patch is broken and needs
>>> backing out, where should the comment go? When it gets merged to
>>> mozilla-central, who gets notified?
>>>
>> As Greg said, the committer / pusher, via IRC and/or email.
>>
>>> * It makes tracking a pain - what happens if that patch ends up needing
>>> uplift?
>>>
>> Generally, the person committing it will know whether it needs uplift,
>> and would have filed a bug if it did - and would file one if it does after
>> the fact. We can already not rely on bugs being marked fixed/verified on a
>> trunk branch when searching bugzilla for uplift requests (cf. "disable
>> feature Y on beta" bugs) and so I don't see how this is relevant.
>>
>>> What about when that patch causes conflicts with another patch needing
>>> uplift?
>>>
>> That seems like it hardly ever happens in the example you gave (vendored
>> libraries and other wholesale "update this dir to match external canonical
>> version"), and if/when it does, the people who would be likely to be
>> involved in such changes (effectively changes to vendored deps that aren't
>> copied from the same canonical source!) are also likely to be aware of what
>> merged when.
>>
>>> What if it causes a regression and a blocking bug needs to be filed?
>>>
>> Then you file a bug and needinfo the person who landed the commit (which
>> one would generally do anyway, besides just marking it blocking the
>> regressor).
>>
>>
>> If there's an overwhelming majority of people who think using "no bug"
>> for landing 'sync m-c with repo X' commits is bad, we can make a more
>> explicit change in the rules here. But in terms of reducing development
>> friction, if we think bugs are necessary at this point, I would prefer
>> doing something like what Greg suggested, ie auto-file bugs for commits
>> that get pushed that don't have a bug associated with them.
>>
>>
>> More generally, I concur with Greg that we should pivot to having the
>> repos be our source of truth about what csets are present on which
>> branches. I've seen cases recently where e.g. we land a feature, then there
>> are regressions, and some of them are addressed in followup bugs, and then
>> eventually we back everything out of one of the trains because we can't fix
>> *all* the regressions in time. At that point, the original feature bug's
>> flags are updated ('disabled' on the branch with backouts), but not those
>> of the regression fix deps, and so if *those* have regressions, people
>> filing bugs make incorrect assumptions about what versions are affected.
>> Manually tracking branch fix state in bugzilla alone is a losing battle.
>>
>
> For the immediate term, I must respectfully disagree. Sheriffs are the
> people most involved with and concerned with the changeset management
> procedures, so if the sheriffs (and Ryan "I'm not a sheriff!" VM) are
> claiming that No Bug landings are being overused and causing issues, I'm
> inclined to adjust current practices first and hold 

Re: Changing the representation of rectangles in platform code

2017-02-08 Thread David Major
Is there a specific problem that's being solved by this proposal? It would
be helpful to make this a bit more concrete, like "these benchmarks go x%
faster", or "here's a list of overflow bugs that will just vanish", or
"here's some upcoming work that this would facilitate".
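For concreteness, the efficiency and overflow claims in Botond's proposal can be sketched as follows (a JavaScript sketch with hypothetical helper names; the real code is the C++ BaseRect family):

```javascript
// x1/y1/x2/y2: intersection is just min/max over the two corner points.
function intersectCorners(a, b) {
  const x1 = Math.max(a.x1, b.x1), y1 = Math.max(a.y1, b.y1);
  const x2 = Math.min(a.x2, b.x2), y2 = Math.min(a.y2, b.y2);
  return x2 > x1 && y2 > y1 ? { x1, y1, x2, y2 } : null; // null = empty rect
}

// x/y/w/h: the right/bottom edges must be derived first; in fixed-width
// integer code the x + w / y + h sums are where overflow can occur.
function intersectSize(a, b) {
  const x1 = Math.max(a.x, b.x), y1 = Math.max(a.y, b.y);
  const x2 = Math.min(a.x + a.w, b.x + b.w);
  const y2 = Math.min(a.y + a.h, b.y + b.h);
  return x2 > x1 && y2 > y1 ? { x: x1, y: y1, w: x2 - x1, h: y2 - y1 } : null;
}

console.log(intersectCorners({ x1: 0, y1: 0, x2: 10, y2: 10 },
                             { x1: 5, y1: 5, x2: 20, y2: 20 }));
// → { x1: 5, y1: 5, x2: 10, y2: 10 }
```

JavaScript numbers don't overflow the way 32-bit ints do, so the sketch only illustrates where the extra additions sit, not the overflow itself.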

On Thu, Feb 9, 2017 at 1:56 PM, Botond Ballo  wrote:

> Hi everyone!
>
> I would like to propose changing the internal representation of
> rectangles in platform code.
>
> We currently represent a rectangle by storing the coordinates of its
> top-left corner, its width, and its height. I'll refer to this
> representation as "x/y/w/h".
>
> I would like to propose storing instead the coordinates of the
> top-left corner, and the coordinates of the bottom-right corner. I'll
> refer to this representation as "x1/y1/x2/y2".
>
> The x1/y1/x2/y2 representation has several advantages over x/y/w/h:
>
>   - Several operations are more efficient with x1/y1/x2/y2, including
> intersection,
> union, and point-in-rect.
>   - The representation is more symmetric, since it stores two quantities
> of the
> same kind (two points) rather than a point and a dimension
> (width/height).
>   - The representation is less susceptible to overflow. With x/y/w/h,
> computation
> of x2/y2 can overflow for a large range of values of x/y and w/h.
> However,
> with x1/y1/x2/y2, computation of w/h cannot overflow if the
> coordinates are
> signed and the resulting w/h is unsigned.
>
> A known disadvantage of x1/y1/x2/y2 is that translating the rectangle
> requires translating both points, whereas translating x/y/w/h only
> requires translating one point. I think this disadvantage is minor in
> comparison to the above advantages.
>
> The proposed change would affect the class mozilla::gfx::BaseRect, and
> classes that derive from it (such as CSSRect, LayoutRect, etc., and,
> notably, nsRect and nsIntRect), but NOT other rectangle classes like
> DOMRect.
>
> I would like to propose making the transition as follows:
>
>   - Replace direct accesses to the 'width' and 'height' fields throughout
> the codebase with calls to getter and setter methods. (There aren't
> that many - on the order of a few dozen, last I checked.)
>
>   - Make the representation change, which is non-breaking now that
> the direct accesses to 'width' and 'height' have been removed.
>
>   - Examine remaining calls to the getters and setters for width and
> height and see if any can be better expressed using operations
> on the points instead.
>
> The Graphics team, which owns this code, is supportive of this change.
> However, since this is a fundamental utility type that's used by a
> variety of platform code, I would like to ask the wider platform
> development community for feedback before proceeding. Please let me
> know if you have any!
>
> Thanks,
> Botond
>
> [1] http://searchfox.org/mozilla-central/rev/672c83ed65da286b68be1d02799c35fdd14d0134/gfx/2d/BaseRect.h#46


Re: Key measurements from memory reports are now shown by crash-stats.m.o

2017-02-13 Thread David Major
Nice!

I see that these fields are available in Super Search already, which is
great. This is going to make search queries really powerful.

On Tue, Feb 14, 2017, at 12:37 PM, Nicholas Nethercote wrote:
> Hi,
> 
> For a long time we have collected a memory report for most crash reports
> where the crash is caused by an out-of-memory problem. But the memory
> report data hasn't been incorporated nicely into crash-stats -- you have
> to
> save the memory report file to disk and then load it in about:memory to
> see
> anything.
> 
> I'm happy to say that things have now improved, via bug 1291173. For
> crash
> reports that contain a memory report, the following 16 key measurements
> are
> now shown in the default "Details" tab at crash-stats.m.o:
> 
> - explicit
> - gfx-textures
> - ghost-windows
> - heap-allocated
> - heap-overhead
> - heap-unclassified
> - host-object-urls
> - images
> - js-main-runtime
> - private
> - resident
> - resident-unique
> - system-heap-allocated
> - top(none)/detached
> - vsize
> - vsize-max-contiguous
> 
> See
> https://crash-stats.mozilla.org/report/index/c012f28b-cc09-48ba-9cfd-1e6c32170202
> for an example.
> 
> This change should give us better insight into causes of OOM crashes. For
> example, we will be able to see correlations between suspicious
> measurements (e.g. ghost-windows > 0) and add-ons.
> 
> Many thanks to Adrian Gaudebert for lots of help getting this working.
> 
> Nick


Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread David Major
One thing I like about trailing operators is that they tend to match
what you'd find in bullet-point prose. Here's a made-up example:

You can apply for a refund of your travel insurance policy if:
* You cancel within 7 days of purchase, and
* You have not yet begun your journey, and
* You have not used any benefits of the plan.

Over time my eyes have come to expect the conjunction on the right.
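In code, the two line-breaking styles under discussion look like this (a sketch of the refund example above; the flag names are made up):

```javascript
// Flags for the refund example above (hypothetical names).
const cancelledWithin7Days = true;
const journeyStarted = false;
const benefitsUsed = false;

// Operators at line end, as the coding style currently prescribes:
const refundableTrailing =
  cancelledWithin7Days &&
  !journeyStarted &&
  !benefitsUsed;

// Operators at line start, as proposed in the thread:
const refundableLeading =
  cancelledWithin7Days
  && !journeyStarted
  && !benefitsUsed;

console.log(refundableTrailing, refundableLeading); // true true
```

Both forms parse identically in JavaScript (a leading `&&` cannot start a statement, so automatic semicolon insertion does not split the expression); the debate is purely about readability.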

On Fri, Feb 17, 2017, at 07:28 PM, gsquel...@mozilla.com wrote:
> While it's good to know how many people are for or against it, so that we
> get a sense of where the majority swings, I'd really like to know *why*
> people have their position. (I could learn something!)
> 
> So Nick, would you have some reasons for your "strong preference"? And
> what do you think of the opposite rationale as found in [2]?
> 
> (But I realize it's more work, so no troubles if you don't have the time
> to expand on your position here&now; thank you for your feedback so far.)
> 
> On Friday, February 17, 2017 at 5:16:42 PM UTC+11, Nicholas Nethercote
> wrote:
> > I personally have a strong preference for operators at the end of lines.
> > The codebase seems to agree with me, judging by some rough grepping ('fff'
> > is an alias I have that's roughly equivalent to rgrep):
> > 
> > $ fff "&&$" | wc -l
> >   28907
> > $ fff "^ *&&" | wc -l
> >3751
> > 
> > $ fff "||$" | wc -l
> >   26429
> > $ fff "^ *||" | wc -l
> >2977
> > 
> > $ fff " =$" | wc -l
> >   39379
> > $ fff "^ *= " | wc -l
> > 629
> > 
> > $ fff " +$" | wc -l
> >   31909
> > $ fff "^ *+$" | wc -l
> > 491
> > 
> > $ fff " -$" | wc -l
> >2083
> > $ fff "^ *-$" | wc -l
> >  52
> > 
> > $ fff " ==$" | wc -l
> >   1501
> > $ fff "^ *== " | wc -l
> >   161
> > 
> > $ fff " !=$" | wc -l
> >   699
> > $ fff "^ *!= " | wc -l
> >   129
> > 
> > They are rough regexps and probably have some false positives, but the
> > numbers aren't even close; operators at the end of the line clearly
> > dominate.
> > 
> > I will conform for the greater good but silently weep inside every time I
> > see it.
> > 
> > Nick
> > 
> > On Fri, Feb 17, 2017 at 8:47 AM,  wrote:
> > 
> > > Question of the day:
> > > When breaking overlong expressions, should &&/|| go at the end or the
> > > beginning of the line?
> > >
> > > TL;DR: Coding style says 'end', I&others think we should change it to
> > > 'beginning' for better clarity, and consistency with other operators.
> > >
> > >
> > > Our coding style reads:
> > > "Break long conditions after && and || logical connectives. See below for
> > > the rule for other operators." [1]
> > > """
> > > Overlong expressions not joined by && and || should break so the operator
> > > starts on the second line and starts in the same column as the start of 
> > > the
> > > expression in the first line. This applies to ?:, binary arithmetic
> > > operators including +, and member-of operators (in particular the .
> > > operator in JavaScript, see the Rationale).
> > >
> > > Rationale: operator at the front of the continuation line makes for faster
> > > visual scanning, because there is no need to read to end of line. Also
> > > there exists a context-sensitive keyword hazard in JavaScript; see bug
> > > 442099, comment 19, which can be avoided by putting . at the start of a
> > > continuation line in long member expression.
> > > """ [2]
> > >
> > >
> > > I initially focused on the rationale, so I thought *all* operators should
> > > go at the front of the line.
> > >
> > > But it seems I've been living a lie!
> > > &&/|| should apparently be at the end, while other operators (in some
> > > situations) should be at the beginning.
> > >
> > >
> > > Now I personally think this just doesn't make sense:
> > > - Why the distinction between &&/|| and other operators?
> > > - Why would the excellent rationale not apply to &&/||?
> > > - Pedantically, the style talks about 'expression *not* joined by &&/||,
> > > but what about expression that *are* joined by &&/||? (Undefined 
> > > Behavior!)
> > >
> > > Based on that, I believe &&/|| should be made consistent with *all*
> > > operators, and go at the beginning of lines, aligned with the first 
> > > operand
> > > above.
> > >
> > > And therefore I would propose the following changes to the coding style:
> > > - Remove the lonely &&/|| sentence at [1].
> > > - Rephrase the first sentence at [2] to something like: "Overlong
> > > expressions should break so that the operator starts on the following 
> > > line,
> > > in the same column as the first operand for that operator. This applies to
> > > all binary operators, including member-of operators (in particular the .
> > > operator in JavaScript, see the Rationale), and extends to ?: where the 
> > > 2nd
> > > and third operands should be on separate lines and start in the same 
> > > column
> > > as the first operand."
> > > - Keep the rationale at [2].
> > >
> > > Also, I think we should add something about where to break expressions
> > > with oper

ANN: Finally changing the default bug view for BMO on March 1st, 2017

2017-02-22 Thread David Lawrence
tl;dr

   On March 1st, 2017, the BMO Team is going to make another attempt at
changing the default bug view to the new modal view that has been in
development for a while. For those who would like to use the old form,
instructions on how to switch back are below in the background
information. The old form will however be removed one day so please try
to use the new one and make suggestions on how it can be improved.

Thanks!

Background:

As some of you may already know, and maybe even have been using it
for a while, the BMO [1] team has been working on a new bug view page
(a.k.a. show_bug.cgi) for some time [2]. The older, stock bug form was
not well laid out and was difficult to understand, with all of the
fields visible regardless of how often they were used. The stock form
was originally designed to display all of a bug’s data, and users had
to adapt their workflows accordingly. The BMO team designed a
completely new and more efficient view for the workflows of the majority
of BMO users.

The new bug-editing form has been added to BMO alongside the stock
form and can be enabled by toggling a user preference [3]. We have been
incrementally improving it and collecting feedback from those brave enough
to use it early on. The new form uses more modern design practices
and is therefore easier for us to improve and expand on. Any new
enhancements will be done only on the new form going forward. It was
code named ‘Bug Modal’ due to the modular layout of the page. Each
submodule can be collapsed out of view and expanded when needed.

At this time we feel that the form is feature complete and ready to
become the new default bug form for BMO. We have been tracking the final
blockers in a bug report [4], and the last blocker(s) are being wrapped
up this week. We wanted to let everyone know of the coming
changeover in advance so that we can get last-minute feedback. The
older, stock bug view form will still be around, and we will update the
user preference to let users switch back to the old one [3]. In the
future, we will be removing the old form altogether after fixing a few
more bugs [6], and we will make another announcement before doing so.
Removing the old code will make it easier on us from a maintenance
standpoint.

Barring any issues with landing the last blocker(s), we are hoping
to do the changeover on March 1st, 2017. Until then, if this is the
first time you have heard of this new feature, go ahead and give it a
try. Any bugs, comments, or ideas for improvement of the modal form can
be reported in BMO [5].
Thanks!
BMO Team

[1] https://bugzilla.mozilla.org
[2] https://globau.wordpress.com/2015/03/31/bmo-new-look
[3]
https://bugzilla.mozilla.org/userprefs.cgi?tab=settings#ui_experiments_row
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1150541
[5]
https://bugzilla.mozilla.org/enter_bug.cgi?product=bugzilla.mozilla.org&component=User%20Interface:%20Modal
[6] https://bugzilla.mozilla.org/show_bug.cgi?id=1273046


-- 
David Lawrence
d...@mozilla.com
bugzilla.mozilla.org


Re: Please consider whether new APIs/functionality should be disabled by default in sandboxed iframes

2017-02-27 Thread David Bruant
Hi Boris,

Did a particular feature trigger your message?

Would it make sense to add the question to the "Intent to Implement" email 
template?
https://wiki.mozilla.org/WebAPI/ExposureGuidelines#Intent_to_Implement

"Intent to" emails seem like a good time to raise this question:
* the feature is not implemented yet
* other browser vendors read the "intent to" emails, so there is an 
opportunity for this question to be addressed in an interoperable manner

David


Le mercredi 11 janvier 2017 18:34:56 UTC+1, Boris Zbarsky a écrit :
> When adding a new API or CSS/HTML feature, please consider whether it 
> should be disabled by default in sandboxed iframes, with a sandbox token 
> to enable.
> 
> Note that this is impossible to do post-facto to already-shipped APIs, 
> due to breaking compat.  But for an API just being added, this is a 
> reasonable option and should be strongly considered.
> 
> -Boris



ANN: Default bug view for BMO changed today!

2017-03-01 Thread David Lawrence
   Today, the BMO Team changed the default bug view to the new modal
view that has been in development for a while. For those who would like
to use the old form, instructions on how to switch back are below in the
background information. The old form will, however, be removed one day,
so please try to use the new one and make suggestions on how it can be
improved.

As some of you may already know, and maybe even have been using it
for a while, the BMO [1] team has been working on a new bug view page
(a.k.a. show_bug.cgi) for some time [2]. The older, stock bug form was
not well laid out and was difficult to understand, with all of the
fields visible regardless of how often they were used. The stock form
was originally designed to display all of a bug’s data, and users had
to adapt their workflows accordingly. The BMO team designed a completely
new and more efficient view for the workflows of the majority of BMO users.

The new bug-editing form has been added to BMO alongside the stock
form and can be enabled by toggling a user preference [3]. We have been
incrementally improving it and collecting feedback from those brave enough
to use it early on. The new form uses more modern design practices
and is therefore easier for us to improve and expand on. Any new
enhancements will be done only on the new form going forward. It was
code named ‘Bug Modal’ due to the modular layout of the page. Each
submodule can be collapsed out of view and expanded when needed.

At this time we feel that the form is feature complete and has now
become the new default bug form for BMO. We have finished all the
blockers in our tracking bug report [4]. The older, stock bug view form
will still be around, and you can use the user preferences form to
switch back to the old one [3]. In the future, we will be removing the
old form completely after fixing a few more bugs [6], and we will make
another announcement before doing so. Removing the old code will make it
easier on us from a maintenance standpoint. Any bugs, comments, or ideas
for improvement of the modal form can be reported in BMO [5].

Thanks!
BMO Team

[1] https://bugzilla.mozilla.org
[2] https://globau.wordpress.com/2015/03/31/bmo-new-look
[3]
https://bugzilla.mozilla.org/userprefs.cgi?tab=settings#ui_experiments_row
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1150541
[5]
https://bugzilla.mozilla.org/enter_bug.cgi?product=bugzilla.mozilla.org&component=User%20Interface:%20Modal&format=__default__
[6] https://bugzilla.mozilla.org/show_bug.cgi?id=1273046


Re: ANN: Default bug view for BMO changed today!

2017-03-02 Thread David Lawrence
Thanks for the feedback. Please file this as a bug if you haven't
already, outlining the proposed workflow change. I could see how
automatically exposing the milestone form field when resolving a bug
could be useful for a lot of people.

Thanks
dkl

On 3/2/17 3:13 AM, Jörg Knobloch wrote:
> On 01/03/2017 21:47, David Lawrence wrote:
>> Today, the BMO Team changed the default bug view to the new modal
>> view that has been in development for a while.
> 
> I like the fact that you can now change the Product in one step.
> 
> Sadly when resolving a bug, there's now an additional click necessary on
> "Edit Bug" to set the Target Milestone. And at least once I had to click
> once more to expand the Tracking section.
> 
> Sorry, some people are never happy with progress ;-)
> 
> Jörg.
> 

-- 
David Lawrence
d...@mozilla.com
bugzilla.mozilla.org


Re: Sheriff Highlights and Summary in February 2017

2017-03-07 Thread David Burns
One thing that we have also noticed is that the backout rate on autoland is
lower than on inbound.

In the last 7 days, the backout rate has averaged (merges have been removed):

   - Autoland: 6% (24 backouts out of 381 pushes)
   - Inbound: 12% (30 backouts out of 251 pushes)

I don't have graphs to show this, but when I look at
https://futurama.theautomatedtester.co.uk/ each week, I have seen this result
consistently for about a month.

David


On 7 March 2017 at 09:03, Carsten Book  wrote:

> Hi Lawrence,
>
> most (i would say 95 %) of the backouts are for Code issues - this include
> bustages and test failures.
>
> From this Code Issues i would guess its about 2/3 for breaking tests and
> 1/3 Build Bustages.
>
> The other backout reasons are merge conflicts / backout requests for
> changes causes new regressions out of our test suites / backout requests
> from developers etc -
>
> In February the backout rate was (excluding the servo merge with 8315
> changesets from the 13242) = 297 in 4927 changesets = ~ 6 %
> In January the backout rate = 302 backouts in 4121 changesets = 7 %
>
> We had a much higher backout rate in the past i think - so it stabilized
> now with this 6-7 % backout rate in the last months,
>
> If you think its useful, i can provide for the next monthly report a more
> detailed analysis like x % =  backouts because builds bustages, y %=
> backouts for test failures etc .
>
> Cheers,
>  -Tomcat
>
>
> On Mon, Mar 6, 2017 at 11:13 PM, Lawrence Mandel 
> wrote:
>
>> Hi Tomcat,
>>
>> Do you have any more details about the reasons why the 297 changesets
>> needed to be backed out?
>>
>> Thanks,
>>
>> Lawrence
>>
>> On Wed, Mar 1, 2017 at 6:05 AM, Carsten Book  wrote:
>>
>>> Hi,
>>>
>>> We will be more active in 2017 and inform more about whats happening in
>>> Sheriffing and since its already March :)
>>>
>>> In February we had about 13242 changesets landed on mozilla-central
>>> monitored by Sheriffs.
>>>
>>> (The high number of changes is because of the merge from servo to
>>> mozilla-central with about 8315 changesets).
>>>
>>> 297 changesets were backed out in February.
>>>
>>> Besides this, Sheriffs took part in doing uplifts and checkin-needed bugs.
>>>
>>> The Current Orangefactor is 10.43 (7250 test failures in 695
>>> pushes in the last 7 days).
>>> You can find the list of top Intermittent failure bugs here
>>> https://brasstacks.mozilla.com/orangefactor/
>>>
>>> You can find more statistics here: https://futurama.theautomatedt
>>> ester.co.uk/
>>>
>>> A big thanks to the Team especially our Community Sheriffs for handling
>>> the new Tier 2 Stylo Build on integration trees  and mozilla-central  and
>>> the teamwork with the Developers!
>>>
>>> If you want also to be a Community Sheriffs, as part of more blogging
>>> about Sheriffing i published a blog post about it :
>>> https://blog.mozilla.org/tomcat/2017/02/27/community-sheriffs/
>>>
>>> Let us know when you have any Question or Feedback about Sheriffing.
>>>
>>> Cheers,
>>>  -Tomcat
>>>
>>
>>
>


Re: Sheriff Highlights and Summary in February 2017

2017-03-07 Thread David Burns
This is just a rough number of how many pushes had a backout and how many
didn't. I don't have any data on whether this is a full or partial backout.

If there are multiple bugs in a push on inbound, a sheriff may revert the
entire push (or might not, depending on how obvious the error is and the
availability of the person pushing).

As jmaher pointed out, and so did you, pushes are more succinct on autoland,
which could mean it's simpler to know what to revert. Other than doing
some A/B testing to prove the sheriffs' hypothesis in this thread, a lot of
the data as to why there is a difference is going to be anecdotal.

David

On 7 March 2017 at 12:57, Boris Zbarsky  wrote:

> On 3/7/17 6:23 AM, David Burns wrote:
>
>>- Autoland 6%.(24 backouts out of 381 pushes)
>>- Inbound 12% (30 backouts out of 251 pushes)
>>
>
> Were those full backouts or partial backouts?
>
> That is, how are we counting a multi-bug push to inbound where one of the
> bugs gets backed out?  Note that such a thing probably doesn't happen on
> autoland.
>
> -Boris


Re: Sheriff Highlights and Summary in February 2017

2017-03-10 Thread David Burns
I went back and did some checks on the servo syncs to autoland, and the
difference is negligible. The numbers below cover 01 February 2017 to 10 March
2017 (as of sending this email); merge commits have been removed.

Autoland:
Total Servo Sync Pushes: 152
Total Pushes: 1823
Total Backouts: 144
Percentage of backouts: 7.90%
Percentage of backouts without Servo: 8.62%

Mozilla-Inbound:
Total Pushes: 1472
Total Backouts: 166
Percentage of backouts: 11.28%


I will look into why autoland, with more pushes, results in fewer backouts.
The thing to note is that autoland, by its nature, does not allow us to fail
forward like inbound without having to get a sheriff to land the code.

My next area to investigate is whether landing one bug per push (the autoland
model) could be helping to keep the percentage of backouts lower.

David

On 7 March 2017 at 21:29, Chris Peterson  wrote:

> On 3/7/2017 3:38 AM, Joel Maher wrote:
>
>> One large difference I see between autoland and mozilla-inbound is that on
>> autoland we have many single commits/push whereas mozilla-inbound it is
>> fewer.  I see the Futurama data showing pushes and the sheriff report
>> showing total commits.
>>
>
> autoland also includes servo commits imported from GitHub that won't break
> Gecko. (They might break the linux64-stylo builds, but they won't be backed
> out for that.)
>


Re: The future of commit access policy for core Firefox

2017-03-13 Thread David Burns
As the manager of the sheriffs, I am in favour of this proposal.

The reasons are as follows (note that there are only 3 paid sheriffs trying
to cover the world):

* A number of r+-with-nits patches end up in the sheriffs' queue as
checkin-needed. This puts the onus on the sheriffs, not the reviewer, to
ensure that the right thing has been done. The sheriffs do not have the
context knowledge of the patch, never mind the knowledge of the system being
changed.

* The throughput of patches into the trees is only going to increase. If
there are failures and the sheriffs need to back out, this can be a long
process depending on the failure, leading to pile-ups of broken patches.

A number of people have complained that using autoland doesn't allow us to
fail forward on patches. While that is true, there is the ability to do
T-shaped try pushes to make sure that you at least compile on all
platforms. This can easily be done from MozReview (note: I am not suggesting
we move to MozReview).

Regarding the burden on reviewers, the comments in this thread just highlight
how broken our current process is in having to flag individual people for
reviews. This leads to a handful of people doing 50%+ of the reviews on the
code. We, at least visibly, don't do enough to encourage new committers who
would lighten the load enough to allow re-reviews when there are nits. Also,
we need to work on automating the removal of nits to limit the number of
re-reviews that a reviewer has to do.

We should try to mitigate the security problem and fix our nit problem
instead of complaining that we can't handle re-reviews because of nits.

David

On 9 March 2017 at 21:53, Mike Connor  wrote:

> (please direct followups to dev-planning, cross-posting to governance,
> firefox-dev, dev-platform)
>
>
> Nearly 19 years after the creation of the Mozilla Project, commit access
> remains essentially the same as it has always been.  We've evolved the
> vouching process a number of times, CVS has long since been replaced by
> Mercurial & others, and we've taken some positive steps in terms of
> securing the commit process.  And yet we've never touched the core idea of
> granting developers direct commit access to our most important
> repositories.  After a large number of discussions since taking ownership
> over commit policy, I believe it is time for Mozilla to change that
> practice.
>
> Before I get into the meat of the current proposal, I would like to
> outline a set of key goals for any change we make.  These goals have been
> informed by a set of stakeholders from across the project including the
> engineering, security, release and QA teams.  It's inevitable that any
> significant change will disrupt longstanding workflows.  As a result, it is
> critical that we are all aligned on the goals of the change.
>
>
> I've identified the following goals as critical for a responsible commit
> access policy:
>
>
>- Compromising a single individual's credentials must not be
>sufficient to land malicious code into our products.
>- Two-factor auth must be a requirement for all users approving or
>pushing a change.
>- The change that gets pushed must be the same change that was
>approved.
>- Broken commits must be rejected automatically as a part of the
>commit process.
>
>
> In order to achieve these goals, I propose that we commit to making the
> following changes to all Firefox product repositories:
>
>
>- Direct commit access to repositories will be strictly limited to
>sheriffs and a subset of release engineering.
>   - Any direct commits by these individuals will be limited to fixing
>   bustage that automation misses and handling branch merges.
>- All other changes will go through an autoland-based workflow.
>   - Developers commit to a staging repository, with scripting that
>   connects the changeset to a Bugzilla attachment, and integrates with 
> review
>   flags.
>   - Reviewers and any other approvers interact with the changeset as
>   today (including ReviewBoard if preferred), with Bugzilla flags as the
>   canonical source of truth.
>   - Upon approval, the changeset will be pushed into autoland.
>   - If the push is successful, the change is merged to
>   mozilla-central, and the bug updated.
>
> I know this is a major change in practice from how we currently operate,
> and my ask is that we work together to understand the impact and concerns.
> If you find yourself disagreeing with the goals, let's have that discussion
> instead of arguing about the solution.  If you agree with the goals, but
> not the solution, I'd love to hear alternative ideas for how we can achieve
> the outcomes outlined above.
>
> -- Mike
>

Renaming nsAString_internal to nsAString

2017-03-13 Thread David Major
TL;DR this will change crash signatures and possibly other tool output;
talk to me if this adversely affects you.

Now that bug 1332639 has removed the external string API, we no longer need
to differentiate between internal and external strings.

Currently we #define nsAString to nsAString_internal, so even though we
write nsAString in the source, the binary thinks the name is
nsAString_internal. Same for nsACString.

In bug 1346078 I plan to remove the #defines so that we just have
nsA(C)String in the binary.

If you have tools outside the tree that depend on the _internal names,
please let me know. (I'm already aware of the servo bindings and will
address those)

Thanks,
David


Re: windows build anti-virus exclusion list?

2017-03-16 Thread David Major
Try using Sysinternals Process Monitor to see what files MsMpEng.exe is
reading.

On Fri, Mar 17, 2017, at 04:26 PM, Ben Kelly wrote:
> Hi all,
> 
> I'm trying to configure my new windows build machine and noticed that
> builds were still somewhat slow.  I did:
> 
> 1) Put it in high performance power profile
> 2) Made sure my mozilla-central dir was not being indexed for search
> 3) Excluded my mozilla-central directory from windows defender
> 
> Watching the task monitor during a build, though, I still saw MsMpEng.exe
> (antivirus) running during the build.
> 
> I ended up added some very broad exclusions to get this down close to
> zero.  I am now excluding:
> 
>   - mozilla-central checkout
>   - mozilla-build install dir
>   - visual studio install dir
>   - /users/bkelly/appdata/local/temp
>   - /users/bkelly (because temp dir was not enough)
> 
> I'd like to narrow this down a bit.  Does anyone have a better list of
> things to exclude from virus scanning for our build process?
> 
> Thanks.
> 
> Ben


Re: unowned module: Firefox::New Tab Page, help me find an owner

2017-03-22 Thread David Burns
On 22 March 2017 at 13:49, Ben Kelly  wrote:

> On Wed, Mar 22, 2017 at 9:39 AM,  wrote:
>
>
> Finding someone to own the feature and investigate intermittents is
> important too, but that doesn't mean the tests have zero value.
>

This just strikes me as: we disable tests until they are all gone, and then
we have dead code in the tree and still no one to own it. It's a longer
process that could end up at the same end point.

David


Re: A reminder about commit messages: they should be useful

2017-04-17 Thread David Major
I'd like to add to this a reminder that commit messages should describe
the _change_ and not the _symptom_. In other words, "Bug XYZ: Crash at
Foo::Bar" is not a good summary.

This is implied by what Boris said, but I've seen enough of these on my
pulsebot backscroll that it's worth mentioning explicitly.

Thanks!

On Mon, Apr 17, 2017, at 11:16 AM, Boris Zbarsky wrote:
> A quick reminder to patch authors and reviewers.
> 
> Changesets should have commit messages.  The commit message should 
> describe not just the "what" of the change but also the "why".  This is 
> especially true in cases when the "what" is obvious from the diff 
> anyway; for larger changes it makes sense to have a summary of the 
> "what" in the commit message.
> 
> As a specific example, if your diff is a one-line change that changes a 
> method call argument from "true" to "false", having a commit message 
> that says "change argument to mymethod from true to false" is not very 
> helpful at all.  A good commit message in this situation will at least 
> mention the meaning for the argument.  If that does not make it clear 
> why the change is being made, the commit message should explain the
> "why".
> 
> Thank you,
> Boris
> 
> P.S.  Yes, this was prompted by a specific changeset I saw.  This 
> changeset had been marked r+, which means neither the patch author nor 
> the reviewer really thought about this problem.


Fwd: Web Compatibility Testing Survey

2017-05-03 Thread David Burns
Hi All,

If you are working on web-exposed features, could you please take a few
minutes to complete the survey. This will help us prioritize work on Web
Platform Tests, which helps us with our web compatibility story.

David
-- Forwarded message --
From: 
Date: 28 April 2017 at 14:19
Subject: Web Compatibility Testing Survey
To: dev-platform@lists.mozilla.org


I've invited you to fill in the following form:
Web Platform Tests Survey

To fill it in, visit:
https://docs.google.com/forms/d/e/1FAIpQLScsbrVikqdnH32Qs-hV
8YWgBi6kb5QRJtYG3KGqHba2eO2Dzw/viewform?c=0&w=1&usp=mail_form_link

Hi All,

The Product Integrity Automation Harnesses team would like your help
prioritizing web compatibility work.

We are trying to understand how people test web-exposed features they are
working on.

Please can you fill in the following survey[1], it shouldn’t take more than
5 minutes. It will help us ensure that we focus on the projects that will
have the greatest impact on gecko development.

Thanks

David

Google Forms: Create and analyse surveys.


Gecko Profiler Win64 stability

2017-05-18 Thread David Major
Hi,

The Gecko Profiler used to be notoriously unstable on 64-bit Windows. (If
you're curious: it's mostly because the unwinding rules for Win64 require a
lot of synchronization in the system libraries, which our stack walker would
often call in ways that could deadlock.)

Ever since the last Quantum Flow work week, I've been working on fixing
these issues. As of this week's nightlies, all of the hangs that have been
reported to me are resolved fixed. There might be some rare corner cases
still remaining, but I'm confident that the most frequent problems have
been addressed.

So my ask is: if you've previously been deterred from using the profiler on
Win64 builds, please give it another try! Flag me on any issues you find
and/or mark them blocking bug 1366030. I will treat these bugs with high
priority since I want to keep Quantum activities unblocked.

Thanks!


Re: Changing our thread APIs for Quantum DOM scheduling

2017-05-19 Thread David Teller
Out of curiosity, how will this interact with nsCOMPtr thread-safe (or
thread-unsafe) refcounting?

Also, in code I have seen, `NS_IsMainThread` is used mainly for
assertion checking. I *think* that the semantics you detail below will
work, but do you know if there is a way to make sure of that?

Also, I had the impression that Quantum DOM scheduling made JS event
loop spinning unnecessary. Did I miss something?

Cheers,
 David

On 5/19/17 1:38 AM, Bill McCloskey wrote:
> Hi everyone,
> 
> One of the challenges of the Quantum DOM project is that we will soon have
> multiple "main" threads [1]. These will be real OS threads, but only one of
> them will be allowed to run code at any given time. We will switch between
> them at well-defined points (currently just the JS interrupt callback).
> This cooperative scheduling will make it much easier to keep our global
> state consistent. However, having multiple "main" threads is likely to
> cause confusion.
> 
> To simplify things, we considered trying to make these multiple threads
> "look" like a single main thread at the API level, but it's really hard to
> hide something like that. So, instead, we're going to be transitioning to
> APIs that try to avoid exposing threads at all. This post will summarize
> that effort. You can find more details in this Google doc:
> 
> https://docs.google.com/document/d/1MZhF1zB5_dk12WRiq4bpmNZUJWmsIt9OTpFUWAlmMyY/edit?usp=sharing
> 
> [Note: I'd like this thread (and the Google doc) to be for discussing
> threading APIs. If you have more general questions about the project,
> please contact me personally.]
> 
> The main API change is that we're going to make it a lot harder to get hold
> of an nsIThread for the main thread. Instead, we want people to work with
> event targets (nsIEventTarget). The advantage of event targets is that all
> the main threads will share a single event loop, and therefore a single
> nsIEventTarget. So code that once expected a single main thread can now
> expect a single main event target.
> 
> The functions NS_GetMainThread, NS_GetCurrentThread, and
> do_Get{Main,Current}Thread will be deprecated. In their place, we'll
> provide mozilla::Get{Main,Current}ThreadEventTarget. These functions will
> return an event target instead of a thread.
> 
> More details:
> 
> NS_IsMainThread: This function will remain pretty much the same. It will
> return true on any one of the main threads and false elsewhere.
> 
> Dispatching runnables: NS_DispatchToMainThread will still work, and you
> will still be able to dispatch using Get{Main,Current}ThreadEventTarget.
> From JS, we want people to use Services.tm.dispatchToMainThread.
> 
> Thread-safety assertions: Code that used PR_GetCurrentThread for thread
> safety assertions will be converted to use NS_ASSERT_OWNINGTHREAD, which
> will allow code from different main threads to touch the same object.
> PR_GetCurrentThread will be deprecated. If you really want to get the
> current PRThread*, you can use GetCurrentPhysicalThread, which will return
> a different value for each main thread.
> 
> Code that uses NS_GetCurrentThread for thread safety assertions will be
> converted to use nsIEventTarget::IsOnCurrentThread. The main thread event
> target will return true from IsOnCurrentThread if you're on any of the main
> threads.
> 
> Nested event loop spinning: In the future, we want people to use
> SpinEventLoopUntil to spin a nested event loop. It will do the right thing
> when called on any of the main threads. We'll provide a similar facility to
> JS consumers.
> 


Re: Improving visibility of compiler warnings

2017-05-19 Thread David Major
I'm not sure if this is exactly the same as the ALLOW_COMPILER_WARNINGS
that we use for warnings-as-errors, but FWIW as of a couple of months
ago I cleaned out the last warning-allowance in our "own" code.
ALLOW_COMPILER_WARNINGS usage is now only in external libraries (I'm
counting NSS and NSPR as "external" for this purpose since I can't land
code to m-c directly).

On Fri, May 19, 2017, at 02:55 PM, Kris Maglione wrote:
> I've actually been meaning to post about this, with some questions.
> 
> I definitely like that we now print warnings at the end of builds, since
> it 
> makes them harder to ignore. But the current amount of warnings spew at
> the 
> end of every build is problematic, especially when a build fails and I
> need 
> to scroll up several pages to find out why.
> 
> Can we make some effort to get clean warnings output at the end of
> standard 
> builds? A huge chunk of the warnings come from NSS and NSPR, and should
> be 
> easily fixable. Most of the rest seem to come from vendored libraries,
> and 
> should probably just be whitelisted if we can't get fixes upstreamed.
> 
> On Fri, May 19, 2017 at 11:44:08AM -0700, Gregory Szorc wrote:
> >`mach build` attempts to parse compiler warnings to a persisted "database."
> >You can view a list of compiler warnings post build by running `mach
> >warnings-list`. The intent behind this feature was to make it easier to
> >find and fix compiler warnings. After all, something out of sight is out of
> >mind.
> >
> >There have been a few recent changes to increase the visibility of compiler
> >warnings with the expectation being that raising visibility will increase
> >the chances of someone addressing them. After all, a compiler warning is
> >either a) valid and should be fixed or b) invalid and should be ignored.
> >Either way, a compiler warning shouldn't exist.
> >
> >First, `mach build` now prints a list of all parsed compiler warnings at
> >the end of the build. More details are in the commit message:
> >https://hg.mozilla.org/mozilla-central/rev/4abe611696ab
> >
> >Second, Perfherder now records the compiler warning count as a metric. This
> >means we now have alerts when compiler warnings are added or removed, like
> >[1]. And, we can easily see graphs of compiler warnings over time, like
> >[2]. The compiler warnings are also surfaced in the "performance" tab of
> >build jobs on Treeherder, like [3].
> >
> >The Perfherder alerts and tracking are informational only: nobody will be
> >backing you out merely because you added a compiler warning. While the
> >possibility of doing that now exists, I view that decision as up to the C++
> >module. I'm not going to advocate for it. So if you feel passionately, take
> >it up with them :)
> >
> >Astute link clickers will note that the count metrics in CI are often noisy
> >- commonly fluctuating by a value of 1-2. I suspect there are race
> >conditions with interleaved process output and/or bugs in the compiler
> >warnings parser/aggregator. Since the values are currently advisory only,
> >I'm very much taking a "perfect is the enemy of good" attitude and have no
> >plans to improve the robustness.
> >
> >[1] https://treeherder.mozilla.org/perf.html#/alerts?id=6700
> >[2] https://mzl.la/2qFN0me and https://mzl.la/2rAkWR5
> >[3]
> >https://treeherder.mozilla.org/#/jobs?repo=autoland&selectedJob=100509284
> 
> -- 
> Kris Maglione
> Senior Firefox Add-ons Engineer
> Mozilla Corporation
> 
> The X server has to be the biggest program I've ever seen that doesn't
> do anything for you.
>   --Ken Thompson
> 


BaseThreadInitThunk potential collateral benefits

2017-06-05 Thread David Durst
Hi.

I wanted to call attention to
https://bugzilla.mozilla.org/show_bug.cgi?id=1322554 (Startup crashes in
BaseThreadInitThunk since 2016-12-03) -- a startup crash resulting from DLL
injection early in the process -- which was resolved fixed for 32bit
Windows on 5/24.

This particular fix[0] has the potential side effect of solving other bugs
as well, ones that may have very different signatures that deal with
remotely-injected code[1].

Just a caution that it's not going to impact all injection issues. We know
that there was an as-yet-unexplained increase and decrease in this crash
signature prior to the patch landing[2]. Carl Corcoran, who landed this, will be looking
at that time period as well to attempt to find contributing causes (any
help diagnosing is probably welcome, though I defer to ccorcoran).

Also, just a shout out that this was Carl's first patch.

Thanks!


[0] The proposed/chosen solution is roughly here:
https://bugzilla.mozilla.org/show_bug.cgi?id=1322554#c69
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1322554#c168
[2]
https://crash-stats.mozilla.com/signature/?product=Firefox&release_channel=release&signature=BaseThreadInitThunk&date=%3E%3D2016-12-05T14%3A42%3A31.000Z&date=%3C2017-06-05T14%3A42%3A31.000Z#graphs

--
David Durst [:ddurst]


Re: JSBC: JavaScript Start-up Bytecode Cache

2017-06-13 Thread David Teller


On 6/13/17 5:37 PM, Nicolas B. Pierron wrote:
> Also, the chrome files are stored in the jar file (If I recall
> correctly), and we might want to generate the bytecode ahead of time,
> such that users don't have to go through the encoding-phase.

How large is the bytecode?

I suspect that if it's too large, we'll be okay with generating the
bytecode on the user's computer.

Cheers,
 David


Re: Phabricator Update, July 2017

2017-07-13 Thread David Anderson
On Thursday, July 13, 2017 at 1:38:18 PM UTC-7, Joe Hildebrand wrote:
> I'm responding at the top of the thread here so that I'm not singling out any 
> particular response.
> 
> We didn't make clear in this process how much work Mark and his team did 
> ahead of the decision to gather feedback from senior engineers on both Selena 
> and my teams, and how deeply committed the directors have been in their 
> support of this change.
> 
> Seeing a need for more modern patch reviewing at Mozilla, Laura Thomson's 
> team did an independent analysis of the most popular tools available on the 
> market today.  Working with the NSS team to pilot their selected tool, they 
> found that Phabricator is a good fit for our development approach 
> (coincidentally a good enough fit that the Graphics team was also piloting 
> Phabricator in parallel).  After getting the support of the Engineering 
> directors, they gathered feedback from senior engineers and managers on the 
> suggested approach, and tweaked dates and features to match up with our 
> release cycles more appropriately.

The problem is this hasn't been transparent. It was announced as an edict, and 
I don't remember seeing any public discussion beforehand. If senior engineers 
were consulted, it wasn't many of them - and the only person I know who was, 
was consulted after the decision was made.

I've contributed thousands of patches over many years and haven't really seen 
an explanation of how this change will make my development process easier. 
Maybe it will, or maybe it won't. It probably won't be a big deal because this 
kind of tooling is not really what makes development hard. I don't spend most 
of my day figuring out how to get a changeset from one place to another.

The fact is that no one really asked us beforehand, "What would make 
development easier?" We're just being told that Phabricator will. That's why 
people are skeptical.


Re: Care in the use of MOZ_CRASH_UNSAFE_* macros: data review needed

2017-07-17 Thread David Major
As of bug 1275780, rust panic text gets reported as a MOZ_CRASH reason.

On Mon, Jul 17, 2017 at 12:42 PM, Benjamin Smedberg 
wrote:

> I don't know really anything about how rust panics get reflected into
> crash-data. Who would be the right person to talk to about that?
>
> --BDS
>
> On Mon, Jul 17, 2017 at 12:40 PM, Emilio Cobos Álvarez 
> wrote:
>
> > On 07/17/2017 05:18 PM, Benjamin Smedberg wrote:
> > > Unlike MOZ_CRASH, which only annotates crashes with compile-time constants
> > > which are inherently not risky, both MOZ_CRASH_UNSAFE_OOL and
> > > MOZ_CRASH_UNSAFE_PRINTF can annotate crashes with arbitrary data. Crash
> > > reasons are publicly visible in crash reports, and in general it is not
> > > appropriate to send any kind of user data as part of the crash reason.
> >
> > I suppose the same happens with rust panics, should those also be
> reviewed?
> >
> >  -- Emilio


removing "the old way" of signing add-ons

2017-07-19 Thread David Keeler
[dev-apps-thunderbird and dev-apps-seamonkey cc'd, but please discuss on
dev-platform]

Hello Everyone,

You may or may not be surprised to learn that Gecko contains two
different ways to verify that an add-on has been signed. The primary
method is nsIX509CertDB.openSignedAppFileAsync. This is what Gecko-based
products that require add-on signing use. However, there is also
nsIZipReader.getSigningCert (plus some additional glue code).

The only place where these two implementations share code is in the
actual signature verification. That is, the logic to ensure that every
file in the archive has a corresponding valid entry in the manifest (and
that every entry in the manifest has a corresponding file in the
archive) and so on appears in Gecko twice.

From what I can tell, the actual functionality provided by the second
API (which is only applicable in builds that do not require add-on
signing) is a small text label in the install dialog that identifies the
certificate that signed the add-on. Note that this isn't even the dialog
that Firefox uses by default - you have to flip
"xpinstall.customConfirmationUI" to false to see it. In the default
dialog in Firefox, there is no difference between an unsigned add-on and
an add-on signed by a non-Mozilla root certificate that has been trusted
for code signing (and note that soon no certificate in Mozilla's root CA
program will have the code signing trust bit enabled by default [0])
(and again, this only applies to builds where add-on signing isn't
required - for builds where it is required, this API is not used at all).

Given all this, the question is do we still need this second API? Does
Thunderbird or SeaMonkey use it for any reason, or can we simplify the
code-base, reduce build size, etc.?

Cheers,
David

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1366243





Re: More Rust code

2017-07-23 Thread David Teller
Thanks for starting this conversation. I'd love to be able to use more
Rust in Firefox.

In SpiderMonkey, the main blocker I encounter is interaction with all
the nice utility classes we have in C++, in particular templatized ones.

Also, for the rest of Gecko, my main blocker was the lack of support for
Rust-implemented webidl in m-c, which meant that roughly 50% of the code
I would be writing would have been bug-prone adapters.

Cheers,
 David

On 10/07/17 12:29, Nicholas Nethercote wrote:
> Hi,
> 
> Firefox now has multiple Rust components, and it's on track to get a
> bunch more. See https://wiki.mozilla.org/Oxidation for details.
> 
> I think this is an excellent trend, and I've been thinking about how to
> accelerate it. Here's a provocative goal worth considering: "when
> writing a new compiled-code component, or majorly rewriting an existing
> one, Rust should be considered / preferred / mandated."
> 
> What are the obstacles? Here are some that I've heard.
> 
> - Lack of Rust expertise for both writing and reviewing code. We have
> some pockets of expertise, but these need to be expanded greatly. I've
> heard that there has been some Rust training in the Paris and Toronto
> offices. Would training in other offices (esp. MV and SF, given their
> size) be a good idea? What about remoties?
> 
> - ARM/Android is not yet a Tier-1 platform for Rust. See
> https://forge.rust-lang.org/platform-support.html and
> https://internals.rust-lang.org/t/arm-android-to-tier-1/5227 for some
> details.
> 
> - Interop with existing components can be difficult. IPDL codegen rust
> bindings could be a big help.
> 
> - Compile times are high, especially for optimized builds.
> 
> Anything else?
> 
> Nick
> 
> 
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
> 


Re: Extensions and Gecko specific APIs

2017-07-25 Thread David Teller
Should we moz-prefix moz-specific extensions?

On 25/07/17 20:45, Jet Villegas wrote:
> Based on product plans I've heard, this sounds like the right approach. We
> should try to limit the scope of such browser-specific APIs but it's likely
> necessary in some cases (e.g., in the devtools.)
> 
> 
> On Tue, Jul 25, 2017 at 2:44 AM, Gabor Krizsanits 
> wrote:
> 
>> In my mind at least the concept is to share the API across all browsers
>> where we can, but WebExtensions should not be limited to APIs that are
>> accepted and implemented by all browser vendors.
>>


Re: Extensions and Gecko specific APIs

2017-07-26 Thread David Teller
Well, at least there is the matter of feature detection, for people who
want to write code that will work in more than just Firefox.
moz-prefixing makes it clear that the feature can be absent on some
browsers.

Cheers,
 David

On 26/07/17 05:55, Martin Thomson wrote:
> On Wed, Jul 26, 2017 at 6:20 AM, Andrew Overholt  wrote:
>> On Tue, Jul 25, 2017 at 3:06 PM, David Teller  wrote:
>>> Should we moz-prefix moz-specific extensions?
>>
>> We have been trying not to do that for Web-exposed APIs but maybe the
>> extensions case is different?
> 
> I don't think that it is.  If there is any risk at all that someone
> else might want to use it, then prefixing will only make our life
> harder long term.  Good names are cheap enough that we don't need to
> wall ours off.
> 
> See also https://tools.ietf.org/html/rfc6648
> 


Re: nodejs for extensions ?

2017-07-31 Thread David Teller
Node dependency trees tend to be pretty large, so I'm a little concerned
here. Has the memory footprint been measured?

Cheers,
 David

On 31/07/17 19:45, Michael Cooper wrote:
> If you mean using modules from NPM in a browser add-on, the Shield client
> extension recently started doing this <
> https://github.com/mozilla/normandy/tree/master/recipe-client-addon>
> 
> We do this by using webpack to process the node modules, bundling the
> entire dependency tree of a library into a single file. We then add a few
> more bits to make the resulting file compatible with `Chrome.utils.import`.
> You can see the webpack config file here <
> https://github.com/mozilla/normandy/blob/master/recipe-client-addon/webpack.config.js>
> and the way we use the resulting files here <
> https://github.com/mozilla/normandy/blob/48a446cab33d3b261b87c3d509964987e044289d/recipe-client-addon/lib/FilterExpressions.jsm#L12
>>
> 
> We suspect that this approach won't be compatible with all Node libraries,
> because it is fairly naive. But it has worked well for the ones we've used
> (React, ReactDOM, ajv, and mozjexl, so far).
> 


Re: Actually-Infallible Fallible Allocations

2017-08-01 Thread David Major
I don't think that anyone deliberately set out to write the code this way.
Likely this is fallout from the mass-refactorings in bug 968520 and related
bugs. I'd recommend working with poiru and froydnj to see if there's any
automated follow-up we could do to remove/improve this pattern.

On Tue, Aug 1, 2017 at 12:31 PM, Alexis Beingessner  wrote:

> TL;DR: we're using fallible allocations without checking the result in
> cases where we're certain there is enough space already reserved. This is
> confusing and potentially dangerous for refactoring. Should we stop doing
> this?
>
> -
>
> I was recently searching through our codebase to look at all the ways we
> use fallible allocations, and was startled when I came across several lines
> that looked like this:
>
> dom::SocketElement &mSocket = *sockets.AppendElement(fallible);
>
> For those who aren't familiar with how our allocating APIs work:
>
> * by default we hard abort on allocation failure
> * but if you add `fallible` (or use the Fallible template), you will get
> back nullptr on allocation failure
>
> So in isolation this code is saying "I want to handle allocation failure"
> and then immediately not doing that and just dereferencing the result. This
> turns allocation failure into Undefined Behaviour, rather than a process
> abort.
>
> Thankfully, all the cases where I found this were preceded by something
> like the following:
>
> uint32_t length = socketData->mData.Length();
> if (!sockets.SetCapacity(length, fallible)) {
>   JS_ReportOutOfMemory(cx);
>   return NS_ERROR_OUT_OF_MEMORY;
> }
> for (uint32_t i = 0; i < socketData->mData.Length(); i++) {
>   dom::SocketElement &mSocket = *sockets.AppendElement(fallible);
>
>
>
> So really, the fallible AppendElement *is* infallible, but we're just
> saying "fallible" anyway. I find this pattern concerning for two reasons:
>
> * It's confusing to anyone who encounters this for the first time.
> * It's a time bomb for any bad refactoring which makes the allocations
> *actually* fallible
>
> I can however think of two reasons why this might be desirable
>
> * It encourages the optimizer to understand that the allocations can't
> fail, removing branches (dubious, these branches would predict perfectly if
> they exist at all)
>
> * It's part of a guideline/lint that mandates only fallible allocations be
> used in this part of the code.
>
> If the latter is a concern, I would much rather we use infallible
> allocations with a required comment, or some other kind of decoration to
> state "this is fine". In the absence of a checker that can statically prove
> that the AppendElement calls will never fail (which in the given example
> would in fact warn that we don't use `length` in the loop condition), I
> would rather mistakes lead to us crashing more, rather than Undefined
> Behaviour.
>
> Thoughts?


the root CA module now loads asynchronously

2017-08-09 Thread David Keeler
Hello Folks,

Bug 1372656 landed this morning, which means that Nightly now loads the
root certificate authority trust database asynchronously and shouldn't
block the main thread*. This should make startup faster, but it's a bit
experimental and improving PSM/NSS startup time is a work in progress
overall. There's some more context in bug 1370834.

If you notice any strange behavior related to certificates (e.g.
unexpected "Your connection is not secure" errors, missing EV indicators
(the larger green lock icon when you visit your bank), etc.), please
file a bug in PSM:
https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=Security%3A%20PSM

Thanks to everyone who made this possible!

Cheers,
David

* the catch is that if any thread needs to query the trust of a
certificate, that thread will block until this operation has completed.
In the worst case, we should be no slower than before.





Re: Quantum Flow Engineering Newsletter #18

2017-08-10 Thread David Durst
While we're pushing forward on Quantum Flow things, there are a lot of
people looking to help out where they can.

To make this at least a little organized, please remember to assign a bug
to yourself if you're working on it (or planning to).
https://charts.mozilla.org/quantum/blockers.html# is showing a few
unassigned that are actually assigned (judging from the comments).

Similarly, if something IS assigned to you, but it's not actively being
worked on and you're OK with someone else doing so, please un-assign
yourself.

Thanks...


--
David Durst [:ddurst]

On Tue, Aug 8, 2017 at 9:26 AM, Mike Conley  wrote:

> Mike Conley ported scrollbox to use smooth scrolling instead of JS driven
>> scrolling <https://bugzilla.mozilla.org/show_bug.cgi?id=1356705>.  This
>> affects most importantly scrolling the tab bar, and should make it more
>> smooth by removing a lot of slow code that used to run and off-loading that
>> work to the compositor through CSS-based smooth scrolling!  He notes
>> <https://bugzilla.mozilla.org/show_bug.cgi?id=1356705#c67> on the bug
>> that in order to achieve great performance some follow-ups may be needed.
>
>
> Just to ensure credit is given where it's due - that was 95% Dão
> Gottwald's work. I just helped push the last 5% over the line when he got
> focused on getting square tabs landed. Thanks Dão!
>
> On 4 August 2017 at 04:04, Ehsan Akhgari  wrote:
>
>> Hi everyone,
>>
>> This has been a busy week.  A lot of fixes have landed, setting up the
>> Firefox 57 cycle for a good start.  On the platform side, a notable change
>> that will be in the upcoming Nightly is the fix for document.cookie using
>> synchronous IPC.  This super popular API call slows down various web pages
>> today in Firefox, and starting from tomorrow, the affected pages should
>> experience a great speedup.  I have sometimes seen the slowdown caused by
>> this one issue to amount to a second or more in some situations, thanks a
>> lot to Amy and Josh for their hard work on this feature.  The readers of
>> these newsletters know that the work on fixing this issue has gone on for a
>> long time, and it's great to see it land early in the cycle.
>>
>> On the front-end side, more and more of the UI changes of Photon are
>> landing in Nightly.  One of the overall changes that I have seen is that
>> the interface is starting to feel a lot more responsive and snappy than it
>> was around this time last year.  This is due to many different details.  A
>> lot of work has gone into fixing rough edges in the performance of the
>> existing code, some of which I have covered but most of which is under the 
>> Photon
>> Performance project
>> <https://bugzilla.mozilla.org/show_bug.cgi?id=1363750>.  Also the new UI
>> is built with performance in mind, so for example where animations are
>> used, they use the compositor and don't run on the main thread.  All of the
>> pieces of this performance puzzle are nicely coming to fit in together, and
>> it is great to see that this hard work is paying off.
>>
>> On the Speedometer front, things are progressing with fast pace.  We have
>> been fixing issues that have been on our list from the previous findings,
>> which has somewhat slowed down the pace of finding new issues to work on.
>> The SpiderMonkey team, though, haven't waited around
>> <https://bugzilla.mozilla.org/show_bug.cgi?id=1245279> and are
>> continually finding new optimization opportunities out of further
>> investigations.  There is still more work to be done there!
>>
>> I will now move on to acknowledge the great work of all of those who
>> helped make Firefox faster last week.  I hope I am not mistakenly
>> forgetting any names here!
>>
>>- Andrew McCreight got rid of some cycle collector overhead
>><https://bugzilla.mozilla.org/show_bug.cgi?id=1385459> related to
>>using QueryInterface to canonicalize the nsISupports pointers stored in 
>> the
>>purple buffer, and similarly for pointers encountered during
>>traversal of native roots
>><https://bugzilla.mozilla.org/show_bug.cgi?id=1385474> as well.
>>- Kris Maglione added some utilities to BrowserUtils
>><https://bugzilla.mozilla.org/show_bug.cgi?id=1383367> that should
>>help our front-end devs avoid synchronous layout and style flushes.
>>- Amy Chung got rid of the sync IPC messages in the cookie service
>><https://bugzilla.mozilla.org/show_bug.cgi?id=1331680>! This was a
>>substantial amount of work and should eliminate jank on a number of

JavaScript Binary AST Engineering Newsletter #1

2017-08-18 Thread David Teller
Hey, all cool kids have exciting Engineering Newsletters these days, so
it's high time the JavaScript Binary AST got one!


# General idea

JavaScript Binary AST is a joint project between Mozilla and Facebook to
rethink how JavaScript source code is stored/transmitted/parsed. We
expect that this project will help visibly speed up the loading of large
codebases of JS applications, including web applications, and will have
a large impact on the JS development community, including both web
developers, Node developers, add-on developers and ourselves.


# Context

The size of JavaScript-based applications – starting with webpages –
keep increasing, while the parsing speed of JavaScript VMs has basically
peaked. The result is that the startup of many web/js applications is
now limited by JavaScript parsing speed. While there are measures that
JS developers can take to improve the loading speed of their code, many
applications have reached a situation in which such an effort becomes
unmanageable.

The JavaScript Binary AST is a novel format for storing JavaScript code.
The global objective is to decrease the time spent between
first-byte-received and code-execution-starts. To achieve this, we focus
on a new file format, which we hope will aid our goal by:

- making parsing easier/faster
- supporting parallel parsing
- supporting lazy parsing
- supporting on-the-fly bytecode generation
- decreasing file size.

For more context on the JavaScript Binary AST and alternative
approaches, see the companion high-level blog post [1].


# Progress

## Benchmarking Prototype

The first phase of the project was spent developing an early prototype
format and parser to validate our hypothesis that:

- we can make parsing much faster
- we can make lazy parsing much faster
- we can reduce the size of files.

The prototype built for benchmarking was, by definition, incomplete, but
sufficient to represent ES5-level source code. All our benchmarking was
performed on snapshots of Facebook’s chat and of the JS source code of our
own DevTools.

While numbers are bound to change as we progress from a proof-of-concept
prototype towards a robust and future-proof implementation, the results
we obtained from the prototype are very encouraging.

- parsing time *0.3 (i.e. parsing time is less than a third of the original time)
- lazy parsing time *0.1
- gzipped file size vs gzipped human-readable source code *0.3
- gzipped file size vs gzipped minified source code *0.95.

Please keep in mind that future versions may have very different
results. However, these values confirm that the approach can
considerably improve performance.

More details in bug 1349917.


## Reference Prototype

The second phase of the project consisted of building a second prototype
format designed to:

- support future evolutions of JavaScript
- support annotations designed to allow safe lazy/concurrent parsing
- serve as a reference implementation for third-party developers

This reference prototype has been implemented (minus compression) and is
currently being tested. It is entirely independent from SpiderMonkey and
uses Rust (for all the heavy data structure manipulation) and Node (to
benefit from existing parsing/pretty-printing tool Babylon). It is
likely that, as data structures stabilize, the reference prototype will
migrate to a full JS implementation, so as to make it easier for
third-party contributors to join in.

More details on the tracker [2].


## Standard tracks

Like any proposed addition to the JavaScript language, the JavaScript
Binary AST needs to go through a standards body.

The project has successfully been accepted as an ECMA TC39 Stage 1
Proposal. Once we have a working Firefox implementation and compelling
results, we will proceed towards Stage 2.

More details on the tracker [3].



# Next few steps

There is still lots of work before we can land this on the web.


## SpiderMonkey implementation

We are currently working on a SpiderMonkey implementation of the
Reference Prototype, initially without lazy or concurrent parsing. We
need to finish it to be able to properly test that JavaScript decoding
works.

## Compression

The benchmarking prototype only implemented naive compression, while the
reference prototype – which carries more data – doesn’t implement any
form of compression yet. We need to reintroduce compression to be able
to measure the impact on file size.



# How can I help?

If you wish to help with the project, please get in touch with either
Naveed Ihsanullah (IRC: naveed, mail: nihsanullah) or myself (IRC:
Yoric, mail: dteller).


[1] https://yoric.github.io/post/binary-ast-newsletter-1/
[2] https://github.com/Yoric/binjs-ref/
[3] https://github.com/syg/ecmascript-binary-ast/


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-08-30 Thread David Burns
Do we know if the other vendors would see value in having this spec'ed
properly so that we have true interop here? Reverse engineering seems like
a "fun" project, but what stops people from breaking stuff without realising?

David

On 30 August 2017 at 22:55, Michael Smith  wrote:

> Hi everyone,
>
> Mozilla DevTools is exploring implementing parts of the Chrome DevTools
> Protocol ("CDP") [0] in Firefox. This is an HTTP, WebSockets, and JSON
> based protocol for automating and inspecting running browser pages.
>
> Originally built for the Chrome DevTools, it has seen wider adoption with
> outside developers. In addition to Chrome/Chromium, the CDP is supported by
> WebKit, Safari, Node.js, and soon Edge, and an ecosystem of libraries and
> tools already exists which plug into it, for debugging, extracting
> performance data, providing live-preview functionality like the Brackets
> editor, and so on. We believe it would be beneficial if these could be
> leveraged with Firefox as well.
>
> The initial implementation we have in mind is an alternate target for
> third-party integrations to connect to, in addition to the existing Firefox
> DevTools Server. The Servo project has also expressed interest in adding
> CDP support to improve its own devtools story, and a PR is in flight to
> land a CDP server implementation there [1].
>
> I've been working on this project with guidance from Jim Blandy. We've
> come up with the following approach:
>
> - A complete, typed Rust implementation of the CDP protocol messages and
> (de)serialization lives in the "cdp" crate [2], automatically generated
> from the protocol's JSON specification [3] using a build script (this
> happens transparently as part of the normal Cargo compilation process).
> This comes with Rustdoc API documentation of all messages/types in the
> protocol [4] including textual descriptions bundled with the specification
> JSON. The cdp crate will likely track the Chrome stable release for which
> version of the protocol is supported. A maintainers' script exists which
> can find and fetch the appropriate JSON [5].
>
> - The "tokio-cdp" crate [6] builds on the types and (de)serialization
> implementation in the cdp crate to provide a server implementation built on
> the Tokio asynchronous I/O system. The server side provides traits for
> consuming incoming CDP RPC commands, executing them concurrently and
> sending back responses, and simultaneously pushing events to the client.
> They are generic over the underlying transport, so the same backend
> implementation could provide support for "remote" clients plugging in over
> HTTP/WebSockets/JSON or, for example, a browser-local client communicating
> over IPDL.
>
> - In Servo, a new component plugs into the cdp and tokio-cdp crates and
> acts on behalf of connected CDP clients in response to their commands,
> communicating with the rest of the Servo constellation. This server is
> disabled by default and can be started by passing a "--cdp" flag to the
> Servo binary, binding a TCP listener to the loopback interface at the
> standard CDP port 9222 (a different port can be specified as an option to
> the flag).
>
> - The implementation we envision in Firefox/Gecko would act similarly: a
> new Rust component, disabled by default and switched on via a command line
> flag, which binds to a local port and mediates between Gecko internals and
> clients connected via tokio-cdp.
>
> We chose to build this on Rust and the Tokio event loop, along with the
> hyper HTTP library and rust-websocket which plug into Tokio.
>
> Rust and Cargo provide excellent facilities for compile-time code
> generation which integrate transparently into the normal build process,
> avoiding the need to invoke scripts by hand to keep generated artifacts in
> sync. The Rust ecosystem provides libraries such as quote [7] and serde [8]
> which allow us to auto-generate an efficient, typed, and self-contained
> interface for the entire protocol. This moves the complexity of ingesting,
> validating, and extracting information from client messages out of the
> Servo- and Gecko-specific backend implementations, helps to ensure they
> conform correctly to the protocol specification, and provides a structured
> way of upgrading to new protocol versions.
>
> As for Tokio, the event loop and Futures-based model of concurrency it
> offers maps well to the Chrome DevTools Protocol. RPC commands typically
> execute simultaneously, returning responses in order of completion, while
> the server continuously generates events to which the client has
> subscribed. Under Tokio we can spawn multiple lightweight Tasks, dispatch
> messages to the

Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-09-04 Thread David Burns
I don't think anyone would disagree with the reasons for doing this. Like
James, who brought it up earlier, I am concerned that, judging from the
emails, we appear to think that implementing the wire protocol would be
sufficient to ensure we have the same semantics.

As mentioned by Karl earlier, it was part of the Browser Testing Tools WG,
but was dropped due to lack of interest from vendors but this thread is now
suggesting that feeling has changed. I am happy, as chair, to see about
adding it back when we recharter, which luckily happens at the end of September!

I am happy to create a separate thread to get this started.

David

On 31 August 2017 at 21:51, Jim Blandy  wrote:

> Sorry for the premature send. The complete message should read:
>
> The primary goals here are not related to automation and testing.
>
> - We want to migrate the Devtools console and the JS debugger to the CDP,
> to replace an unpopular protocol with a more widely-used one.
>
> - Servo wants its devtools server to use an industry-standard protocol, not
> Firefox's custom thing.
>
> - We'd like to have a devtools server we can share between Gecko and Servo.
>
> - We'd like to move devtools server code out of JS and into a language that
> gives us better control over memory use and performance, because the
> current server, implemented in JS, introduces a lot of noise into
> measurements, affecting the quality of the performance data we're able to
> collect.
>
> - We'd like to share front ends (i.e. clients) between Firefox and Servo.
> devtools.html already implements both the Firefox protocol and the CDP.
>
> Using a protocol that's more familiar to web developers doing automation
> and testing is also good, and we're hoping that will have ancillary
> benefits. For example, it turns out that there is lots of interest in our
> new JS debugger UI, which has hundreds of contributors now. I don't know
> why people want a JS debugger UI, but they do. I believe that Firefox's use
> of a bespoke protocol prevents many similar opportunities for
> collaboration.
>
>
> On Thu, Aug 31, 2017 at 1:36 PM, Jim Blandy  wrote:
>
> > Certain bits of the original post are getting more emphasis than I had
> > anticipated. Let me try to clarify why we in Devtools want this change or
> > something like it.
> >
> > The primary goals here are not related to automation and testing. They
> are:
> >
> >- to allow Devtools to migrate the console and the JS debugger to the
> >CDP;
> >- to start a tools server that can be shared between Gecko and Servo;
> >- to replace Gecko's devtools server, implemented in JS, with one
> >implemented in Rust, to reduce memory consumption and introduce less
> noise
> >into performance and memory measurements
> >
> >
> >
> > and to help us share code with Servo. Our user interfaces already work
> > with the CDP.
> >


Re: Eliminating nsIDOM* interfaces and brand checks

2017-09-15 Thread David Bruant
Hi,

Sorry, arriving a bit late to the party.
I was about to propose something related to @@toStringTag, but reading the 
discussions about how it may/will work [1][2][3], I realize it may not be your 
preferred solution.

Maybe @@toStringTag will end up not working well enough for your need anyway.
But another solution could be to define a chromeonly symbol for the brand.

obj[Symbol.brand] === 'HTMLEmbedElement'
(`Symbol.brand` is chromeonly. `obj[Symbol.brand]` too)

No function call, no string to allocate, nothing to import (`Symbol` is a 
standard ECMAScript global).
It might look weird because symbols are new, but maybe it's just something to 
get used to, hard to tell.

David

[1] https://github.com/heycam/webidl/issues/54
[2] https://github.com/heycam/webidl/pull/357
[3] https://github.com/heycam/webidl/issues/419


On Friday, 1 September 2017 at 17:01:58 UTC+2, Boris Zbarsky wrote:
> Now that we control all the code that can attempt to touch 
> Components.interfaces.nsIDOM*, we can try to get rid of these interfaces 
> and their corresponding bloat.
> 
> The main issue is that we have consumers that use these for testing what 
> sort of object we have, like so:
> 
>if (obj instanceof Ci.nsIDOMWhatever)
> 
> and we'd need to find ways to make that work.  In some cases various 
> hacky workarounds are possible in terms of property names the object has 
> and maybe their values, but they can end up pretty non-intuitive and 
> fragile.  For example, this:
> 
>element instanceof Components.interfaces.nsIDOMHTMLEmbedElement
> 
> becomes:
> 
>element.localName === "embed" &&
>element.namespaceURI === "http://www.w3.org/1999/xhtml";
> 
> and even that is only OK at the callsite in question because we know it 
> came from 
> http://searchfox.org/mozilla-central/rev/51b3d67a5ec1758bd2fe7d7b6e75ad6b6b5da223/dom/interfaces/xul/nsIDOMXULCommandDispatcher.idl#17
>  
> and hence we know it really is an Element...
> 
> Anyway, we need a replacement.  Some possible options:
> 
> 1)  Use "obj instanceof Whatever".  The problem is that we'd like to 
> maybe kill off the cross-global instanceof behavior we have now for DOM 
> constructors.
> 
> 2)  Introduce chromeonly "is" methods on all DOM constructors.  So 
> "HTMLEmbedElement.is(obj)".  Possibly with some nicer but uniform name.
> 
> 3)  Introduce chromeonly "isWhatever" methods (akin to Array.isArray) on 
> DOM constructors.  So "HTMLEmbedElement.isHTMLEmbedElement(obj)".  Kinda 
> wordy and repetitive...
> 
> 4)  Something else?
> 
> Thoughts?  It really would be nice to get rid of some of this stuff 
> going forward.  And since the web platform seems to be punting on 
> branding, there's no existing solution we can use.
> 
> -Boris
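
For illustration, the fragile duck-typing workaround Boris describes can be captured in a small helper. The element objects used to exercise it below are plain stand-ins, since outside Gecko there is no real `<embed>` to test against:

```javascript
// Sketch of the duck-typing replacement for
// `element instanceof Ci.nsIDOMHTMLEmbedElement`: match by local name
// and namespace URI rather than XPCOM interface identity.
const XHTML_NS = "http://www.w3.org/1999/xhtml";

function isHTMLEmbedElement(node) {
  return node != null &&
         node.localName === "embed" &&
         node.namespaceURI === XHTML_NS;
}
```

As noted in the thread, this is only safe at call sites where the value is already known to be an Element; an arbitrary object with matching properties would pass.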



Re: Creating a content process during shutdown...

2017-09-21 Thread David Bolter
Hi Gabor, I'm interested in shutdown issues. Is there a bug # for this case?
Cheers,
D

On Thu, Sep 21, 2017 at 4:08 AM, Gabor Krizsanits 
wrote:

> I guess the question is how do you define "after we've entered shutdown".
>
> For the preallocated process manager both "profile-change-teardown" and
> "xpcom-shutdown" will prevent any further process spawning. For the
> preloaded browser in tabbrowser.xml, instead of creating a process for
> the hidden about:blank page we usually select an existing one; however,
> if there is none around (the browser has only some chrome pages open when
> the shutdown is initiated) then it can create a process, and I'm not sure
> if anything would prevent the process creation, now that I'm thinking
> about it.
>
> I think we should set some flag when one of these shutdown-related
> notifications is fired and, based on that, prevent any new content process
> creation somewhere in ContentParent::LaunchSubprocess, since I don't see
> any legit use case for it and I'm pretty sure it usually ends up with
> crashes or hangs.
>
> Gabor
>
> On Wed, Sep 20, 2017 at 8:38 PM, Milan Sreckovic 
> wrote:
>
> > I've spoken to some of you about this, but at this point need a larger
> > audience to make sure we're covering all the bases.
> >
> > Do we have code in Firefox that would cause us to create a new content
> > process, after we've entered shutdown?
> >
> > I understand the possibility of user action that would create a new
> > content process followed quickly by one that would cause shutdown, and
> the
> > timing making it so that the new content process initialization can then
> > happen out of order, but I'm more asking about the explicit scenario.
> >
> > Thanks!
> >
> > --
> > - Milan (mi...@mozilla.com)
> >


Re: removing "the old way" of signing add-ons

2017-09-22 Thread David Keeler
Hi Onno,

The work was done in bug 1382749. The first post in this thread outlined
what would be removed as a result of doing this - namely the upper right
corner of the label in the add-on installation dialog as you mentioned.
Note that as of bug 1366243 (shipping in 56), by default Gecko-based
products don't trust any code-signing roots, so this wouldn't work as
before even without removing the now-dead code.

Cheers,
David

On 09/22/2017 01:35 AM, Onno Ekker wrote:
> Op 27-7-2017 om 07:03 schreef Andrew Swan:
>> On Wed, Jul 26, 2017 at 2:49 AM, Frank-Rainer Grahl  wrote:
>>
>>> I need to look at the notifications for SeaMonkey anyway but how could
>>> Thunderbird implement the standard doorhanger with no location bar? I think
>>> the dialog should be retained for projects which do not have a location bar
>>> and/or tabbrowser.
>>
>>
>> That was poorly worded -- these applications should create listeners for
>> the various events that are generated during the install process.  Whether
>> you try to adapt the Firefox doorhangers somehow or keep some version of
>> the current dialog (but apropos the original message in this thread, even
>> that is likely to change) doesn't matter to me, but the existing code that
>> displays a modal xul dialog from the guts of the addons manager isn't long
>> for this world...
>>
>> This is straying off-topic for dev-platform, please follow up with me
>> individually or on the dev-addons list if you have more questions.
>>
>> -Andrew
>>
> 
> Did something change here in TB/SM-code or in the methods they call?
> When I now add/update an add-on manager (in TB 57.0a1), I don't see any
> notice about the signature anymore. The upper right corner from my
> previous screenshot is now empty.
> 
> Is this on purpose or did something break? If on purpose, it would be
> nice if it could be communicated. My old signing key is about to expire
> and in order to get a new key, the CA now requires me to store it on a
> smart card, so I almost bought a smart card and a card reader and
> renewed my signing certificate for the price of some € 130,- and it
> would be totally useless.
> 
> Is there a bug about the removing of the "old way" of signing add-ons?
> 
> Onno





Re: C++ function that the optimizer won't eliminate

2017-10-06 Thread David Major
I bet Google Benchmark will have what you want.

As a first guess, maybe this?
https://github.com/google/benchmark/blob/master/include/benchmark/benchmark.h#L297

(And if godbolt says they are wrong, please send them a PR :))


On Fri, Oct 6, 2017 at 9:16 AM, Gabriele Svelto  wrote:

> On 06/10/2017 11:00, Henri Sivonen wrote:
> > Do we already have a C++ analog of Rust's test::black_box() function?
> > I.e. a function that just passes through a value but taints it in such
> > a way that the optimizer can't figure out that there are no side
> > effects. (For the purpose of ensuring that the compiler can't
> > eliminate computation that's being benchmarked.)
> >
> > If we don't have one, how should one be written so that it works in
> > GCC, clang and MSVC?
> >
> > It's surprisingly hard to find an answer to this on Google or
> > StackOverflow, and experimentation on godbolt.org suggests that easy
> > answers that are found are also wrong.
> >
> > Specifically, this isn't the answer for GCC:
> > void* black_box(void* foo) {
> >   asm ("":"=r" (foo): "r" (foo):"memory");
> >   return foo;
> > }
>
> IIUC what you are looking for is the '+' constraint which implies the
> parameter is both read and written in the asm statement, e.g.:
>
> void* black_box(void* foo) {
>   asm ("":"+r" (foo): "r" (foo):"memory");
>   return foo;
> }
>
>  Gabriele
>
>


Re: C++ function that the optimizer won't eliminate

2017-10-09 Thread David Major
> On Fri, Oct 6, 2017 at 5:41 PM, David Major  wrote:
> > I bet Google Benchmark will have what you want.
> >
> > As a first guess, maybe this?
> > https://github.com/google/benchmark/blob/master/include/benchmark/benchmark.h#L297
>
> Thank you. I guess it's best to import that into the tree even
> though it seems to only consume values without returning a value.
> (I've been under the impression that it would be prudent to taint
> benchmark inputs as untrackable by the optimizer also.)
>
> I'm a bit surprised that the MSVC code path relies on an empty
> out-of-line function and not on inline asm.
>
>
MSVC doesn't allow inline assembly in 64-bit code.
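
Putting the thread's pieces together, a cross-compiler helper might look like the sketch below. The GCC/Clang branch uses the `"+r"` constraint Gabriele suggested; the MSVC fallback is an assumption for illustration (since x64 MSVC has no inline asm, real code such as Google Benchmark's uses an opaque out-of-line function instead):

```cpp
#include <cassert>
#if defined(_MSC_VER) && !defined(__clang__)
#include <intrin.h>
#endif

// Sketch only: keep a benchmarked value alive past the optimizer.
// GCC/Clang: "+r" marks the operand as read *and* written, so the
// compiler cannot assume the computation producing it is dead; the
// "memory" clobber also inhibits reordering around the barrier.
// MSVC (no x64 inline asm): a compiler barrier is a weaker stand-in.
template <typename T>
T black_box(T value) {
#if defined(_MSC_VER) && !defined(__clang__)
  _ReadWriteBarrier();  // legacy intrinsic; weaker than the asm version
#else
  asm volatile("" : "+r"(value) : : "memory");
#endif
  return value;
}
```

The helper is transparent to callers: it returns its argument unchanged, so `sum += black_box(compute(i));` benchmarks `compute` without letting the loop be folded away.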


We need better canaries for JS code

2017-10-18 Thread David Teller
Hi everyone,

  Yesterday, Nightly was broken on Linux and MacOS because of a typo in
JS code [1]. If I understand correctly, this triggered the usual
"undefined is not a function", which was

1/ uncaught during testing, as these things often are;
2/ basically impossible to diagnose in the wild, because there was no
error message of any kind.

I remember that we had bugs of this kind lurking for years in our
codebase, in code that was triggered daily and that everybody believed
to be tested.

I'd like to think that there is a better way to handle these bugs,
without waiting for them to explode in our users' faces. Opening this
thread to see if we can find a way to somehow "solve" these bugs, either
by making them impossible, or by making them much easier to solve.

I have one proposal. Could we change the behavior of the JS VM as follows?

- The changes affect only Nightly.
- The changes affect only mozilla chrome code (including system add-ons
but not user add-ons or test code).
- Any programmer error (e.g. SyntaxError) causes a crash that
displays (and attaches to the CrashReporter) both the JS stack and
the native stack.
- Any SyntaxError is a programmer error.
- Any TypeError is a programmer error.

I expect that this will find a number of lurking errors, so we may want
to migrate code progressively, using a directive, say "use strict
moz-platform" and static analysis to help encourage using this directive.

What do you think?

Cheers,
 David



[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1407351#c28


Re: We need better canaries for JS code

2017-10-18 Thread David Teller


On 18/10/17 10:45, Gregory Szorc wrote:
> I agree that errors like this should have better visibility in order to
> help catch bugs.
> 
> I'm not sure changing behavior of the JS VM is the proper layer to
> accomplish this. I think reporting messages from the JS console is a
> better place to start. We could change the test harnesses to fail tests
> if certain errors (like SyntaxError or TypeError) are raised. If there
> is a "hook" in the JS VM to catch said errors at error time, we could
> have the test harnesses run Firefox in a mode that makes said errors
> more fatal (even potentially crashing as you suggest) and/or included
> additional metadata, such as stacks.

Works for me. I'd need to check how much performance this would cost.

> Another idea would be to require all non-log output in the JS console to
> be accounted for. Kinda like reftest's expected assertion count.
> Assuming most JS errors/warnings are unwanted, this would allow us to
> fail all tests reporting JS errors/warnings while allowing wanted/known
> failures to not fail the test. A problem though is background services
> "randomly" injecting their output during test execution depending on
> non-deterministic timing. It could be difficult to roll this out in
> practice. But it feels like we should be able to filter out messages or
> stacks accordingly.

This looks like a much larger undertaking.

> But why stop there? Shouldn't Firefox installs in the wild report JS
> errors and warnings in Firefox code back to Mozilla (like we do
> crashes)? I know this has been discussed. I'm not sure what we're
> doing/planning about it though.

I would be for it, but as a followup.


Re: We need better canaries for JS code

2017-10-18 Thread David Teller
> I'm not sure changing behavior of the JS VM is the proper layer to
> accomplish this. I think reporting messages from the JS console is a
> better place to start. We could change the test harnesses to fail tests
> if certain errors (like SyntaxError or TypeError) are raised. If there
> is a "hook" in the JS VM to catch said errors at error time, we could
> have the test harnesses run Firefox in a mode that makes said errors
> more fatal (even potentially crashing as you suggest) and/or included
> additional metadata, such as stacks.

Ok, I discussed this with jorendorff.

It shouldn't be too hard to add this hook, plus it should have basically
no overhead. The next step would be to register a test harness handler
to crash (or do something else).

This would later open the door to reporting errors (possibly through
crashing) from Nightly, Beta, Release, ...

My main worry, at this stage, is what we encountered when we started
flagging uncaught async errors: some module owners simply never fixed
their errors, so we had to whitelist large swaths of Firefox code,
knowing that it was misbehaving.


Cheers,
 David


Re: We need better canaries for JS code

2017-10-18 Thread David Teller


On 18/10/17 14:16, Boris Zbarsky wrote:
> On 10/18/17 4:28 AM, David Teller wrote:
>> 2/ basically impossible to diagnose in the wild, because there was no
>> error message of any kind.
> 
> That's odd.  Was the exception caught or something?  If not, it should
> have shown up in the browser console, at least

In this case, the browser console couldn't be opened. Also, as suggested
by gps, we can probably reuse the same (kind of) mechanism to report
stacks of programming errors in the wild.

> 
>> I have one proposal. Could we change the behavior of the JS VM as
>> follows?
> 
> Fwiw, the JS VM doesn't really do exception handling anymore; we handle
> all that in dom/xpconnect code.

Mmmh... I was looking at setPendingException at
http://searchfox.org/mozilla-central/source/js/src/jscntxtinlines.h#435
. Can you think of a better place to handle this?

>> - The changes affect only Nightly.
>> - The changes affect only mozilla chrome code (including system add-ons
>> but not user add-ons or test code).
> 
> What about test chrome code?  The browser and chrome mochitests are
> pretty hard to tell apart from "normal" chrome code...

Good question. I'm not sure yet. I actually don't know how the tests are
loaded, but I hope that there is a way. Also, we need to test: it is
possible that the code of tests might not be a (big) problem.

>> - Any programmer error (e.g. SyntaxError) causes a crash that
>> displays (and attaches to the CrashReporter) both the JS stack and
>> the native stack.
> 
> We would have to be a little careful to only include the chrome frames
> in the JS stack.
> 
> But the more important question is this: should this only apply to
> _uncaught_ errors, or also to ones inside try/catch?  Doing the former
> is pretty straightforward, actually.  Just hook into
> AutoJSAPI::ReportException and have it do whatever work you want.  It
> already has a concept of "chrome" (though it may not match the
> definition above; it basically goes by "system principal or not?") and
> should be the bottleneck for all uncaught exceptions, except:
> 
> * Toplevel evaluation errors (including syntax errors) in worker scripts.
> * "uncaught" promise rejections of various sorts
> 
> Doing this might be a good idea.  It's _definitely_ a good experiment...

My idea would be to do it even on caught errors. It is too easy to catch
errors accidentally, in particular in Promise.
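
The accidental-catch hazard can be illustrated with a hypothetical helper (the function and its typo are invented for this sketch): a `try/catch` written for an "expected" failure silently converts a programmer error into a normal return value, which is exactly what crashing even on caught errors would surface:

```javascript
// Hypothetical illustration of why caught errors matter too.
function describeUser(user) {
  try {
    // The author expects `user` may be null, but the catch below would
    // equally hide a genuine typo such as `user.nmae.toUpperCase()`.
    return user.name.toUpperCase();
  } catch (e) {
    // Intended for the "no user" case; also swallows any TypeError above.
    return "unknown";
  }
}
```

With a typo inside the `try`, every call would quietly return `"unknown"` and nothing would ever reach the console, the test harness, or the crash reporter.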

Cheers,
 David


Re: We need better canaries for JS code

2017-10-18 Thread David Teller
This should be feasible.

Opening bug 1409852 for the low-level support.

On 18/10/17 22:22, Dan Mosedale wrote:
> Could we do this on a per-module opt-in basis to allow for gradual
> migration?  That is to say, assuming there's enough information in the
> stack to tell where it was thrown from (I'm guessing that's the case
> most of the time), by default, ignore these errors unless they're from
> code in an opted-in module?
> 
> Dan
> 


Re: We need better canaries for JS code

2017-10-19 Thread David Teller
Btw, I believe that there is already support for reporting uncaught
errors and that it is blocked by the lack of test harness support.

Cheers,
 David

On 18/10/17 19:37, Steve Fink wrote:
> My gut feeling is that you'd only want uncaught errors, and
> AutoJSAPI::ReportException is a better place than setPendingException. I
> don't know how common things like
> 
>   if (eval('nightlyOnlyFeature()')) { ... }
> 
> are, but they certainly seem reasonable. And you'd have to do a bunch of
> work for every one to decide whether the catch was appropriate or not.
> It may be worth doing too, if you could come up with some robust
> whitelisting mechanisms, but at least starting with uncaught exceptions
> seems more fruitful.
> 
> As for the Promise case, I don't know enough to suggest anything, but
> surely there's a way to detect those particular issues separately? Is
> there any way to detect if a finalized Promise swallowed an exception
> without "handling" it in some way or other, even if very heuristically?
> 
> 


Visual Studio 2017 coming soon

2017-10-25 Thread David Major
I'm planning to move production Windows builds to VS2017 (15.4.1) in bug
1408789.

VS2017 has optimizer improvements that produce faster code. I've seen 3-6%
improvement on Speedometer. There is also increased support for C++14 and
C++17 language features:
https://docs.microsoft.com/en-us/cpp/visual-cpp-language-conformance

These days we tend not to support older VS for too long, so after some
transition period you can probably expect that VS2017 will be required to
build locally, ifdefs can be removed, etc. VS2017 Community Edition is a
free download and it can coexist with previous compilers. Installation
instructions are at:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Windows_Prerequisites#Visual_Studio_2017

If you have concerns, please talk to me or visit the bug. Thanks!


Re: Visual Studio 2017 coming soon

2017-10-26 Thread David Major
Agreed, changing compilers of an already-released ESR isn't a good idea.

You could use 2017 to build ESR52 locally though, if that's what you're
asking. Our tree has supported 2017 builds for a good while, since it's the
default VS download from Microsoft and a number of Mozillians have been
using it.

On Thu, Oct 26, 2017 at 10:33 AM, Jonathan Kew  wrote:

> On 26/10/2017 15:14, Milan Sreckovic wrote:
>
>> Are we locked into using the same compiler for the ESR updates?  In other
>> words, do we need to keep VS2015 for ESR52 builds until they are not needed
>> anymore?
>>
>>
> Yes, IMO.
>
> Whether or not we're "locked" in any technical sense, I think we should
> probably lock ourselves there by policy, unless a specific bug in the older
> compiler is directly affecting ESR builds in a serious way, and can only be
> solved by updating.
>
> Short of something like that (which seems pretty unlikely!), the stability
> risk involved in switching compilers doesn't sound like it belongs anywhere
> near the ESR world.
>
> JK
>


Re: Visual Studio 2017 coming soon

2017-10-26 Thread David Major
It would be great to get these speed gains for 58, hot on the heels of the
57 release.

My plan is this: if I can get this landed by Monday, that still leaves two
weeks in the cycle. Based on my positive experience thus far with this
compiler (this update has been much more smooth than past ones), I'm
comfortable with that number. If it goes longer than that, I agree it makes
sense to wait for a new train.



On Thu, Oct 26, 2017 at 3:31 AM, Sylvestre Ledru  wrote:

> Hello,
>
>
> On 25/10/2017 23:48, David Major wrote:
> > I'm planning to move production Windows builds to VS2017 (15.4.1) in bug
> > 1408789.
> >
> In which version are you planning to land this change?
> As we are close to the end of the 58 cycle in nightly, it would be great
> to wait for 59.
>
> Thanks,
> Sylvestre
>
>


Re: Visual Studio 2017 coming soon

2017-10-29 Thread David Major
This has reached mozilla-central and nightlies are now being built with
VS2017.

The clang-cl builds (which still rely on a VS package for link.exe, among
other things) remain on VS2015 while I work out some issues in clang-cl's
path detection. All other Windows jobs have moved to VS2017.

While I'm here, I want to give a shout out to everyone who helped with the
move to Taskcluster and in-tree job definitions. In previous compiler
upgrades I had to beg for a lot of handholding from busy releng folks. This
time was an absolute piece of cake. Everything was self-explanatory,
self-service, and easily testable on Try. Thank you!


On Wed, Oct 25, 2017 at 5:48 PM, David Major  wrote:

> I'm planning to move production Windows builds to VS2017 (15.4.1) in bug
> 1408789.
>
> VS2017 has optimizer improvements that produce faster code. I've seen 3-6%
> improvement on Speedometer. There is also increased support for C++14 and
> C++17 language features:
> https://docs.microsoft.com/en-us/cpp/visual-cpp-language-conformance
>
> These days we tend not to support older VS for too long, so after some
> transition period you can probably expect that VS2017 will be required to
> build locally, ifdefs can be removed, etc. VS2017 Community Edition is a
> free download and it can coexist with previous compilers. Installation
> instructions are at:
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Windows_Prerequisites#Visual_Studio_2017
>
> If you have concerns, please talk to me or visit the bug. Thanks!
>
>


Re: Website memory leaks

2017-11-02 Thread David Durst
What you're describing is, I think, exactly what I've been trying to
reproduce in a now-blocking-57 bug (bug 1398652).

By your description, the only thing that makes sense to me -- to account
for unknown/unknowable changes on the site -- is to track potential runaway
growth of the content process. I realize how stupid this sounds, but what I
observed was that when the content process was in trouble, it continued to
grow. And the effect/impact builds: first the tab, then the process, then
the browser, then the OS. So if there's a way to determine that "this
process has only grown in the past X amount of time" or past a certain
threshold, that's the best indicator I've come up with.
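
That "only grown for the past X" indicator could be sketched as a check over periodic memory samples; every name and threshold below is invented for illustration, not anything Firefox actually implements:

```javascript
// Hypothetical sketch of the runaway-growth heuristic: samples of a
// content process's memory over an observation window must be
// near-monotonically increasing (small GC dips tolerated) to be
// flagged as a likely leak.
function looksLikeRunawayGrowth(samplesMiB,
                                { minSamples = 10, dipTolerance = 0.05 } = {}) {
  if (samplesMiB.length < minSamples) {
    return false; // not enough history to judge
  }
  let peak = samplesMiB[0];
  for (const sample of samplesMiB) {
    if (sample < peak * (1 - dipTolerance)) {
      return false; // a real drop: memory was reclaimed at some point
    }
    peak = Math.max(peak, sample);
  }
  return samplesMiB[samplesMiB.length - 1] > samplesMiB[0];
}
```

A sawtooth pattern (growth followed by collection) passes as healthy; only a window where the process never meaningfully shrinks trips the check.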

I don't know what we'd do at that point -- force killing the content
process sounds severe (though possibly correct) -- or some alert similar to
the dreaded slow script one.


--
David Durst [:ddurst]

On Thu, Nov 2, 2017 at 11:31 AM, Randell Jesup 
wrote:

> >about:performance can provide the tab/pid mapping, with some performance
> >indexes.
> >This might help solve your issue listed in the side note.
>
> mconley told me in IRC that today's nightly has brought back the PID in
> the tooltip (in Nightly only); it was accidentally removed.
>
> about:performance can be useful, but 3 tabs vs all tabs is too coarse
> for me, and things like "site leaked memory and is slowing CC" I presume
> doesn't show up in the 'heat' for the tab there.
>
> --
> Randell Jesup, Mozilla Corp
> remove "news" for personal email


verifying unpacked signed add-ons

2017-11-03 Thread David Keeler
[firefox-dev, dev-addons, and the enterprise mailing list cc'd - please
direct follow-up discussion to dev-platform]

Hello All,

As you're no doubt aware, from 57 onwards, only signed WebExtensions
will be available as add-ons for the general release population. My
understanding is these are all packaged as "xpi" files (zip files,
really, but what's important is that they're bundled up as a single file
rather than a directory). Add-on developers can develop their add-ons by
temporarily loading them as unsigned packages or unsigned unbundled
directories (again, if my understanding is correct).

This leaves the question of what use we have for verifying unbundled
add-ons. Is there ever a case where we want to verify an unbundled yet
signed add-on? For example, do we ever do this with system add-ons? (And
if we do, I've been told this would be bad for performance, so perhaps
we should disallow this?)

Thanks,
David





Re: verifying unpacked signed add-ons

2017-11-03 Thread David Keeler
On 11/03/2017 03:34 PM, Robert Helmer wrote:
> On Fri, Nov 3, 2017 at 3:25 PM, David Keeler  wrote:
>> [firefox-dev, dev-addons, and the enterprise mailing list cc'd - please
>> direct follow-up discussion to dev-platform]
>>
>> Hello All,
>>
>> As you're no doubt aware, from 57 onwards, only signed WebExtensions
>> will be available as add-ons for the general release population. My
>> understanding is these are all packaged as "xpi" files (zip files,
>> really, but what's important is that they're bundled up as a single file
>> rather than a directory). Add-on developers can develop their add-ons by
>> temporarily loading them as unsigned packages or unsigned unbundled
>> directories (again, if my understanding is correct).
>>
>> This leaves the question of what use we have for verifying unbundled
>> add-ons. Is there ever a case where we want to verify an unbundled yet
>> signed add-on? For example, do we ever do this with system add-ons? (And
>> if we do, I've been told this would be bad for performance, so perhaps
>> we should disallow this?)
> 
> 
> System add-on updates must be packed into a XPI[1]. Built-in add-ons are 
> always
> shipped packed (along with Firefox in the application directory), but
> unpacked will
> work for builds so you can modify a file in ./browser/extensions/ and
> see the change
> without a rebuild.

I imagine those directories are unsigned when you're doing that work,
though, right?

> We plan to move built-in add-ons into the omni jar eventually (bug 1357205)
> 
> 
>>
>> Thanks,
>> David
>>
>>
> 
> 1 - 
> http://searchfox.org/mozilla-central/rev/af86a58b157fbed26b0e86fcd81f1b421e80e60a/toolkit/mozapps/extensions/internal/XPIProvider.jsm#6561
> 





Re: Website memory leaks

2017-11-06 Thread David Teller
As a user, I would definitely love to have this.

I wanted to add something like that to about:performance, but at the
time, my impression was that we did not have sufficient platform data on
where allocations come from to provide something convincing.

Cheers,
 David

On 02/11/17 15:34, Randell Jesup wrote:
> [Note: I'm a tab-hoarder - but that doesn't really cause this problem]
> 
> tl;dr: we should look at something (roughly) like the existing "page is
> making your browser slow" dialog for website leaks.
> 
> 


Re: Pulsebot in #developers

2017-11-06 Thread David Major
I use #developers for two things:

1. I prefer to keep my discussions in smaller topic channels, but for my
sanity I also try to keep my channel list small. There is a large set of
people whom I ping roughly once a month and can't be bothered matching
channels with. #developers is my "lowest common denominator" for talking to
those people.

2. To cast the widest possible net for immediate help with some sudden
roadblock or unfamiliar tool etc.

I don't really care whether pulsebot gets moved, but it won't change my use
of #developers.


On Mon, Nov 6, 2017 at 8:13 AM, Gijs Kruitbosch 
wrote:

> On 06/11/2017 12:49, Philipp Kewisch wrote:
>
>> If there is a better place to ask ad-hoc questions about Gecko, I am
>> happy to go there (and we should make sure the channel is promoted in
>> our docs).
>>
>
> I think different teams have ended up with different IRC channels, as Kris
> said. Frontend Firefox stuff tends to go in #fx-team, and #content, #jsapi
> and similar channels each cater to their own "niche", so to speak.
>
> I can see how less usage of IRC can lead to the situation we have now.
>> As long as there is an (open) replacement that is more accepted nowadays
>> I think this is fine, but aside from that I think we should find a way
>> to promote discussing Firefox and Platform issues in the open again.
>>
>
> I am not aware of people discussing Firefox or Platform *engineering*
> issues in other places, open or otherwise. I am aware of incidental
> non-engineering folks using Slack for bugreporting (unsurprisingly, a lot
> of those questions get answered with "file a bug in bugzilla"), but it
> seems to me that's not what you're talking about...
>
> ~ Gijs
>


Re: Intent to require Python 3 to build Firefox 59 and later

2017-11-10 Thread David Burns
My only concern about this is how local developer environments are going
to be affected when it comes to testing. While I am sympathetic to moving to Python
we need to make sure that all the test harnesses have been moved over and
this is something that needs a bit of coordination. Luckily a lot of the
mozbase stuff is already moving to python 3 support but that still means we
need to have web servers and the actual test runners moved over too.

David



On 10 November 2017 at 23:27, Gregory Szorc  wrote:

> For reasons outlined at https://bugzilla.mozilla.org/show_bug.cgi?id=1388447#c7,
> we would like to make Python 3 a requirement
> to build Firefox sometime in the Firefox 59 development cycle. (Firefox 59
> will be an ESR release.)
>
> The requirement will likely be Python 3.5+. Although I would love to make
> that 3.6 if possible so we can fully harness modern features and
> performance.
>
> I would love to hear feedback - positive or negative - from downstream
> packagers and users of various operating systems and distributions about
> this proposal.
>
> Please send comments to dev-bui...@lists.mozilla.org or leave them on bug
> 1388447.
>


Re: Intent to require Python 3 to build Firefox 59 and later

2017-11-12 Thread David Burns
I am not saying it should, but if we have a requirement for Python 3, we
are also going to need Python 2 to remain available for local development.

David

On 11 November 2017 at 14:10, Andrew Halberstadt 
wrote:

> On Fri, Nov 10, 2017 at 9:44 PM David Burns  wrote:
>
>> My only concern about this is how local developer environments are going
>> to be when it comes to testing. While I am sympathetic to moving to python
>> 3 we need to make sure that all the test harnesses have been moved over and
>> this is something that needs a bit of coordination. Luckily a lot of the
>> mozbase stuff is already moving to python 3 support but that still means we
>> need to have web servers and the actual test runners moved over too.
>>
>> David
>>
>
> For libraries like mozbase, I think the plan will be to support both 2 and
> 3 at
> the same time. There are libraries (like 'six') that make this possible.
> I'd bet
> there are even parts of the build system that will still need to support
> both at
> the same time.
>
> With that in mind, I don't think python 3 support for test harnesses needs
> to
> block the build system.
>
> Andrew
>
> On 10 November 2017 at 23:27, Gregory Szorc  wrote:
>> [snip]
>
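For reference, the single-source pattern that libraries like 'six' wrap up
looks roughly like this (a stdlib-only sketch, purely illustrative;
`ensure_text` mirrors six's helper of the same name but is not actual
harness code):

```python
import sys

PY3 = sys.version_info[0] >= 3

if PY3:
    text_type, binary_type = str, bytes
else:  # Python 2 branch; `unicode` only exists there
    text_type, binary_type = unicode, str  # noqa: F821

def ensure_text(value, encoding="utf-8"):
    """Return `value` as text on both Python 2 and 3 (mirrors six.ensure_text)."""
    if isinstance(value, binary_type):
        return value.decode(encoding)
    if isinstance(value, text_type):
        return value
    raise TypeError("not a string: %r" % (value,))
```

Code written this way runs unchanged on both interpreters, which is what
lets mozbase-style libraries straddle the migration.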


Intent to unship: Linux 32bit Geckodriver executable

2017-11-21 Thread David Burns
For the next version of geckodriver I intend not to ship a Linux 32-bit
binary. Currently it accounts for 0.1% of downloads, and we regularly get
somewhat cryptic intermittent failures on it which are hard to diagnose.

*What does this mean for most people?* We will be turning off the WDSpec
tests (a subset of web-platform-tests used for testing the WebDriver
specification) on that platform. Testharness.js tests and reftests in
web-platform-tests will keep working, as they use Marionette via another
route.

Let me know if you have any questions.

David


Re: Intent to unship: Linux 32bit Geckodriver executable

2017-11-21 Thread David Burns
Answered inline below.

On 21 November 2017 at 19:03, Nicholas Alexander 
wrote:

>
>
> On Tue, Nov 21, 2017 at 5:25 AM, David Burns  wrote:
>
>> For the next version of geckodriver I am intending that it not ship a
>> Linux
>> 32 bit version of Geckodriver. Currently it accounts of 0.1% of downloads
>> and we regularly get somewhat cryptic intermittents which are hard to
>> diagnose.
>>
>
> I don't see the connection between 32-bit geckodriver and the test changes
> below.  Is it that the test suites we run require 32-bit geckodriver, and
> that's the only consumer?
>

The Linux 32-bit geckodriver is only used on that platform for running the
wdspec tests. It is built as part of the Linux 32-bit build and then
shipped to the test machines.


>
>
>> *What does this mean for most people?* We will be turning off the WDSpec
>> tests, a subset of Web-Platform Tests used for testing the WebDriver
>> specification.
>
>
> Are these WDSpec tests run anywhere?  My long play here is to use a Java
> Web Driver client to drive web content to test interaction with GeckoView,
> so I'm pretty interested in our implementation conforming to the Web Driver
> spec ('cuz any Java Web Driver client will expect it to do so).  Am I
> missing something here?
>

They are currently run on macOS, Windows 32-bit and 64-bit, and Linux
64-bit. We are not dropping support for WebDriver; in fact, this will
allow us to focus more on where our users are.

As for mobile, geckodriver is designed to speak to Marionette over TCP. As
long as we can reach the view, probably over adb, geckodriver can then
speak to Marionette. This makes the host mostly irrelevant, and seeing how
Linux 32-bit is barely used, it's not going to affect any work that you do.
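To illustrate the transport in question: Marionette commands travel over
TCP as length-prefixed JSON. A rough re-implementation of the framing
(illustrative only, for readers; not geckodriver's actual code):

```python
import json

def marionette_frame(message):
    """Frame a command the way the Marionette TCP protocol does: the JSON
    body preceded by its byte length and a colon. Illustrative sketch."""
    body = json.dumps(message, separators=(",", ":"))
    return ("%d:%s" % (len(body.encode("utf-8")), body)).encode("utf-8")

# A protocol-v3 command is [type, message-id, command-name, parameters].
frame = marionette_frame([0, 1, "WebDriver:Navigate", {"url": "https://example.org"}])
```

Because the protocol is just framed JSON over a socket, the host running
geckodriver doesn't matter as long as the socket can be reached (e.g.
forwarded over adb).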


>
> This is all rather vaporish, so if my concerns aren't concrete or
> immediate enough, I'll accept that.
>

Hopefully this gives you a little more confidence :)

David


Performance issue on 57 with some a11y clients

2017-11-28 Thread David Bolter
Hi folks,


As you may have heard, some users on 57 are experiencing performance
problems due to third party clients that are interacting poorly with
Firefox accessibility. We have been able to reproduce only one case
(Realplayer) and need more. So here’s what we are asking. If you use
Windows can you try this:


   1. In about:config, ensure accessibility.force_disabled is 0.
   2. Restart Firefox (57, 58, or 59).
   3. Check about:support:
      1. Down in the “Accessibility” section, is “Activated” true? And,
      2. Do you experience performance degradation?
      3. If so, please contact jmathies and/or davidb.


If you are aware of friends or family that have experienced bad performance
on Firefox 57 for Windows please also contact us. In particular, if they
solved their problem by disabling accessibility these are exactly the
systems from which we need to gather more info.

Thanks,

Jim and David


ARIA implementation plans (was Re: W3C Proposed Recommendations: WAI-ARIA and Core Accessibility API Mappings)

2017-11-30 Thread David Bolter
Hi Tantek,

I spun this off the rec proposals thread as per your suggestion.

On Thu, Nov 30, 2017 at 8:58 PM, Tantek Çelik 
wrote:

> On Thu, Nov 30, 2017 at 4:10 PM, L. David Baron  wrote:
>
>
[snip]


> Two things:
>
> 1. Do we have an Intent to Implement / Ship for the full testable
> feature set of these specifications? (I couldn't find any such
> "Intent" "ARIA" emails in dev-platform, but I may have missed it).
> If not, could the implementing team please send after-the-fact
> either/both Intent to Implement / Intent to Ship emails for both specs
> to dev-platform?
>

This work is unscheduled and TBD, but we will send an intent when ready.
The accessibility team has explicitly deprioritized work on new ARIA
support until the Windows multiprocess accessibility work is in good
shape, our priority backlog is addressed, and priority front-end work is
done.

Nonetheless, Joanie (chair of the W3C ARIA Working Group) has recently
been implementing some ARIA support in Firefox!



>
> 2. Assuming we have such intent, do we have bugs filed in Bugzilla to
> implement the remaining testable features of both specifications?  (to
> get the following %s to 100)
>

Not exhaustively, no (for similar reasons).


> From Core Mappings report above:
> Firefox on Linux using ATK: 79% of mappings successfully implemented
> (188/237)
> Firefox on macOS using AX API:: 41% of mappings successfully
> implemented (84/205)
> Firefox on Windows using MSAA + IAccessible2: 75% of mappings
> successfully implemented (181/242)
> (or do we have documented somewhere reasoning why we won't implement
> to 100% - and if so, do we have problems with some of the features?)
>
> From WAI ARiA report above:
> no % tallies provided, but lots of red and yellow squares in the FF**
> columns here:
> https://w3c.github.io/test-results/wai-aria/all.html
>
>
It will be really great to get to a place where turning these green gets
back into our top priorities, unfortunately we're not there yet.



> Feel free to follow-up to this part (re: specs that we implement) with
> a reply-with-subject-change to start a new thread as needed.
>
> Thanks,
>
> Tantek
>

Thanks for your detailed attention here!

Cheers,
David


Re: W3C Proposed Recommendations: WAI-ARIA and Core Accessibility API Mappings

2017-11-30 Thread David Bolter
Hi Jonathan,

On Thu, Nov 30, 2017 at 9:23 PM, Jonathan Kingston  wrote:

>
> *Thoughts*
> We should ensure ARIA provides clear justification for any other roles that
> already have HTML representation.
> I'm pretty sceptical of ARIA helping Accessibility. I think there is more
> impact when assistive and non-assistive improvements work together like
> .
>
>
I definitely agree with the impact part. But as long as web developers are
able to build a dialog out of divs and spans, at least there is a way to
give that jumble a role. Yes, the use of ARIA should be a smell*, but it
has been essential to making what already exists out there accessible.

* the problem is either: the web dev has reinvented something that would
have had baked-in accessibility had they followed best practice instead,
or best practice doesn't yet cover what web apps want or need to do
(related, see the rules at https://www.w3.org/TR/using-aria/#rule1).

Cheers,
D


IMEs can instantiate accessibility (was Re: INTENT TO DEPRECATE (taskcluster l10n routing))

2017-12-04 Thread David Bolter
On Mon, Dec 4, 2017 at 8:58 AM, Axel Hecht  wrote:

> Am 04.12.17 um 05:42 schrieb Jet Villegas:
>
>>
>> On Sun, Dec 3, 2017 at 05:15 Axel Hecht wrote:
>>
>> Am 01.12.17 um 16:45 schrieb Justin Wood:
>> > Hey Everyone,
>>
>>
[snip]

>> I hope we can change that (testing on localized builds) with this
>> proposed change. We’ve gotten reports that localized builds (and related
>> usage; e.g., input method editors) cause A11y API activation, which
>> triggers other bugs for us.
>>
>
> My gut reaction is "that shouldn't happen", though, well, no idea what
> IMEs do. Do we have bugs tracking these? I'd love to be on CC on those.
>

It can happen. IMEs often have prediction features, and they need to
examine the context in which the input is happening. One example we
recently learned of is "Insight", a feature of the Japanese IME ATOK that
analyzes focused web content for input prediction and dictionary
switching.

The main side effect is on performance.

We need to do two things here:
1. Add more caching to our Windows e10s a11y solution.
2. Do something smart about the different kinds of clients that
instantiate a11y. This will likely mean two things: blocking a11y clients
that are just being ridiculous, and better, copying what Chrome does and
instantiating only 'portions' of our a11y support... think of this as
having a 'lightweight' mode, although it may be that we have multiple
levers.

Cheers,
D


Re: Next year in web-platform-tests

2017-12-15 Thread David Burns
On 15 December 2017 at 21:29, smaug  wrote:

>
> Not being able to send (DOM) events which mimic user input prevents
> converting
> many event handling tests to wpt.
> Not sure if WebDriver could easily do some of this, or should browsers
> have some testing mode which exposes
> APIs for this kinds of cases.
>

The TestDriver work does a lot of this; it is currently being worked on
and will hopefully solve this use case. It uses WebDriver under the hood
to do the user mimicking.

David
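Under the hood, the trusted-input synthesis WebDriver exposes is the
"Perform Actions" command. A sketch of the payload a testdriver-style
click helper would generate (the helper name is hypothetical; the body
shape follows the W3C WebDriver actions format):

```python
import json

def click_actions_payload(x, y):
    """Build a WebDriver "Perform Actions" body for a trusted mouse click
    at (x, y). Hypothetical helper; testdriver's real plumbing lives in
    wptrunner."""
    return {
        "actions": [{
            "type": "pointer",
            "id": "mouse",
            "parameters": {"pointerType": "mouse"},
            "actions": [
                {"type": "pointerMove", "x": x, "y": y, "duration": 0},
                {"type": "pointerDown", "button": 0},
                {"type": "pointerUp", "button": 0},
            ],
        }]
    }

body = json.dumps(click_actions_payload(10, 20))
```

Because the browser's remote agent dispatches these as real OS-level-style
input, the resulting DOM events are trusted, unlike events synthesized
from page script.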



>
>
> -Olli
>
>
>
> or in the workflow that lead to you writing other
>> test types for cross-browser features.
>>
>> Thanks
>>
>> (Note: I set the reply-to for the email version of this message to be
>> off-list as an experiment to see if that avoids the anchoring effect where
>> early
>> replies set the direction of all subsequent discussion. But I'm very
>> happy to have an on-list conversation about anything that you feel merits a
>> broader audience).
>>
>


Re: Refactoring proposal for the observer service

2018-01-03 Thread David Teller
That would be great!

On 03/01/18 23:09, Gabriele Svelto wrote:
> TL;DR this is a proposal to refactor the observer service to use a
> machine-generated list of integers for the topics (disguised as enums/JS
> constants) instead of arbitrary strings.
> 
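In other words (a hypothetical sketch, not the actual patch): instead of
notifying with a free-form string like "xpcom-shutdown", topics would
become generated constants, so a typo fails at build/lint time rather than
silently notifying nobody:

```python
from enum import IntEnum, auto

class Topic(IntEnum):
    """Stand-in for the machine-generated topic list; real string topics
    like "xpcom-shutdown" would map to constants like these."""
    XPCOM_SHUTDOWN = auto()
    PROFILE_AFTER_CHANGE = auto()

_observers = {}

def add_observer(topic, callback):
    _observers.setdefault(topic, []).append(callback)

def notify_observers(topic, data=None):
    # With arbitrary strings a typo'd topic notifies nobody; with
    # generated constants the typo is caught before runtime.
    for callback in _observers.get(topic, []):
        callback(topic, data)

seen = []
add_observer(Topic.XPCOM_SHUTDOWN, lambda t, d: seen.append((t, d)))
notify_observers(Topic.XPCOM_SHUTDOWN, "now")
```

Integer comparison is also cheaper than string hashing/comparison on the
notification hot path.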


Re: New prefs parser has landed

2018-02-02 Thread David Teller
Pretty complicated in the general case, but it might be simple in the
specific case of a number overflow.

Also, while we shouldn't depend on the UI in libpref, could we send some
kind of event or observer notification that the UI could use to display a
detailed error message? It would be a shame if Firefox were broken and
impossible to diagnose because of a number overflow, for instance.
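For the overflow case specifically, line-oriented recovery is easy to
sketch (hypothetical `pref(...)` grammar and limits, not libpref's actual
parser or syntax):

```python
def parse_prefs(text):
    """Parse simple `pref("name", int);` lines, collecting errors instead
    of aborting on the first bad line. Illustrative sketch only."""
    prefs, errors = {}, []
    for lineno, line in enumerate(text.splitlines(), 1):
        line = line.strip()
        if not line or line.startswith("//"):
            continue
        try:
            if not (line.startswith("pref(") and line.endswith(");")):
                raise ValueError("malformed pref line")
            name, _, value = line[5:-2].partition(",")
            value = int(value)  # the overflow/format check happens here
            if value > 2**31 - 1:
                raise ValueError("integer overflow")
            prefs[name.strip().strip('"')] = value
        except ValueError as e:
            errors.append((lineno, str(e)))  # recover: skip to next line
    return prefs, errors

prefs, errors = parse_prefs('pref("a", 1);\npref("b", 99999999999999);')
```

The collected `errors` list is exactly what a UI-facing notification could
carry, without libpref itself knowing anything about the UI.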

On 02/02/2018 14:42, Boris Zbarsky wrote:
> You mean pick up parsing again after hitting an error?   That sounds
> complicated...
> 
> -Boris


Who can review licenses these days?

2018-03-09 Thread David Teller
I'll need a license review for a vendored Rust package. Who can perform
these reviews these days?

Thanks,
 Yoric


Re: Who can review licenses these days?

2018-03-10 Thread David Teller


On 09/03/2018 19:39, Gregory Szorc wrote:
> On Fri, Mar 9, 2018 at 7:28 AM, David Teller  <mailto:dtel...@mozilla.com>> wrote:
> 
> I'll need a license review for a vendored Rust package. Who can perform
> these reviews these days?
> 
> 
> We have an allow list of licenses in our Cargo config. So if the license
> is already allowed, you can reference the crate in a Cargo.toml and
> `mach vendor rust` will "just work." Otherwise, we need to review the
> license before any action is taken.
> 
> **Only attorneys or someone empowered by them should review licenses and
> give sign-off on a license.**

Yes, that's exactly what I'm talking about. We have whitelisted
BSD-3-Clause, but I have a build-time dependency on vendored BSD-2-Clause
code.

> 
> You should file a Legal bug at
> https://bugzilla.mozilla.org/enter_bug.cgi?product=Legal. I think the
> component you want is "Product - feature." Feel free to CC me and/or
> Ted, as we've dealt with these kinds of requests in the past and can
> help bridge the gap between engineering and legal. We also know about
> how to follow up with the response (e.g. if we ship code using a new
> license, we need to add something to about:license).

Ok, thanks.

Cheers,
 David


Re: Prefs overhaul

2018-03-12 Thread David Teller
Out of curiosity, why is the read handled by C++ code?

On 12/03/2018 10:38, Nicholas Nethercote wrote:
> I don't know. But libpref's file-reading is done by C++ code, which passes
> a string to the Rust code for parsing.
> 
> Nick


Please try out clang-cl and lld-link on Windows

2018-03-13 Thread David Major
Link xul.dll in 20 seconds with this one weird trick!


Hi everyone,

clang-cl builds of Firefox have come a long way, from being a hobby project
of a few developers to running static analysis in CI for more than a year
now. The tools are in really good shape and should be ready for broader use
within Mozilla at this point.

Bug 1443590 is looking into what it would take to ship official builds with
clang-cl and lld-link, but in the meantime it's possible to do local builds
already. I'd like to invite people who develop on Windows to give it a try.

*** Reasons to use clang-cl and lld-link locally ***

- Speed! lld is known for being very fast. I'm serious about 20-second
libxuls. That's a non-incremental link, with identical code folding
enabled. For comparison, MSVC takes me over two minutes.

- Speed again! clang-cl will integrate with upcoming sccache/icecream work.

- Much clearer and more actionable error messages than MSVC

- Make your own ASan and static analysis builds (the latter need an LLVM
before r318304, see bug 1427808)

- Help ship Firefox with clang-cl by getting more eyes and machines on
these tools

*** Reasons not to use clang-cl and lld-link locally (yet) ***

- You are testing codegen-level fixes or optimizations and need to see the
exact bits that will be going out to users

- lld-link currently doesn’t support incremental linking -- but with full
linking being so fast, this might not matter

- You do artifact builds that don't use a local compiler

*** How do I get started? ***

https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Building_Firefox_on_Windows_with_clang-cl

A number of build system changes have landed that make these builds much
easier than before. For example you no longer need to use old versions of
MozillaBuild.

Note that clang-cl builds still depend on an MSVC installation for headers,
libraries, and auxiliary build tools, so don't go uninstalling your Visual
Studio just yet.

If you run into any problems, please stop by #build or visit the shiny new
Firefox Build System product in Bugzilla (formerly Core :: Build Config).

Thanks!


Re: Please try out clang-cl and lld-link on Windows

2018-03-14 Thread David Major
The SDK requirement is due to some nonstandard code in the latest Microsoft
headers that clang rejects:
https://developercommunity.visualstudio.com/content/problem/132223/clang-cant-compile-wrlimplementsh.html

For now the easiest way forward is to go into Visual Studio Installer, add
"Windows 10 SDK (10.0.15063.0) for Desktop C++ [x86 and x64]" and remove
16299.

On Wed, Mar 14, 2018 at 6:20 AM, Marco Bonardo  wrote:

> It sounds cool and I'd really love to try, but:
> "ERROR: Found SDK version 10.0.16299.0 but clang-cl builds currently
> don't work with SDK version 10.0.16299.0 and later..."
>
> I know that I can set WINDOWSSDKDIR, but I'm not willing to mess too
> much with the env. Is there a bug tracking the update to the latest
> sdk, or automatically use the right one, that I can follow?
>
>
> On Tue, Mar 13, 2018 at 3:31 PM, David Major  wrote:
> > [snip]
>


Re: CPU core count game!

2018-03-31 Thread David Durst
Just noting that there's bug 1399962 right now -- and if you're on a
single-core machine (like a large portion of very affordable PCs from the
past two years or so), the problem can be so bad (bug 1431835) that common
sites like Amazon can make Firefox no longer a viable option.

--
David Durst [:ddurst]

On Tue, Mar 27, 2018 at 5:02 PM, Mike Conley  wrote:

> Thanks for drawing attention to this, sfink.
>
> This is likely to become more important as we continue to scale up our
> parallelization with content processes and threads.
>
> On 21 March 2018 at 14:54, Steve Fink  wrote:
>
> > Just to drive home a point, let's play a game.
> >
> > First, guesstimate what percentage of our users have systems with 2 or
> > fewer cores.
> >
> > Then visit https://hardware.metrics.mozilla.com/#goto-cpu-and-memory to
> > check your guess.
> >
> > (I didn't say it was a *fun* game.)
> >
> >


Intent to implement: Early, experimental support for application/javascript+binast

2018-04-18 Thread David Teller
# Summary

JavaScript parsing and compilation are performance bottlenecks. The
JavaScript Binary AST is a domain-specific content encoding for
JavaScript, designed to speed up parsing and compilation of JavaScript,
as well as to allow streaming compilation of JavaScript (and possibly
streaming startup interpretation).

We already get a 30-50% parsing improvement by just switching to this
format, without any streaming code optimization, and we believe that we
can go much further. We wish to implement
`application/javascript+binast` so as to start experiments with partners.
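One way a server-side experiment could opt clients in is content
negotiation on the script request (a purely hypothetical deployment
sketch; the file names and mechanism here are illustrative assumptions,
and the DOM-level loading story has no proposal yet):

```python
def pick_script_response(accept_header):
    """Choose a representation for a script request based on the Accept
    header. Hypothetical sketch of a partner server's negotiation."""
    if "application/javascript+binast" in accept_header:
        return ("app.binjs", "application/javascript+binast")
    # Fall back to plain JS for clients that don't advertise support.
    return ("app.js", "application/javascript")
```

Browsers without the feature (or with the pref off) would keep receiving
ordinary JavaScript, which is what makes partner experiments low-risk.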


# Bug

Bug 1451344


# Link to standard

This content encoding is a JS VM technology, with an entry point for
loading in the DOM.

- DOM level: No proposal yet.
https://github.com/binast/ecmascript-binary-ast/issues/27
- JS level (high): https://binast.github.io/ecmascript-binary-ast/
- JS level (low): No proposal yet.
https://binast.github.io/binjs-ref/binjs_io/multipart/index.html#overview


# Platform coverage

All.


# Estimated or target release

For the moment, no target release. We are still in the experimentation
phase.


# Preference behind which this will be implemented

dom.script.enable.application_javascript_binast


# Is this feature enabled by default in sandboxed iframes?

This is just a compression format; it should make no change with respect
to security.

# DevTools bug

Bug 1454990


# Do other browser engines implement this?

Not yet. We are still in the experimentation phase.


# web-platform-tests

No web platform specification yet.


# Secure contexts

Let's restrict this to secure contexts.


Re: Intent to implement: Early, experimental support for application/javascript+binast

2018-04-18 Thread David Teller
No plans yet, but it's a good idea. The only reason not to do this (that I
can think of) is that we might prefer switching to the Bytecode Cache,
which would probably give us even better speedups.

I understand that we can't use the Bytecode Cache for our chrome code
yet due to the fact that it uses a very different path in Necko, which
is the Source of Truth for the Bytecode Cache, but I may be wrong, and
it might be fixable.

Cheers,
 David

On 18/04/2018 19:09, Dave Townsend wrote:
> This is awesome. I understand that we already do some kind of
> pre-compile for our chrome code, is there any plan/benefit to switch to
> this eventually there?

> 


License of test data?

2018-04-24 Thread David Teller
Ideally, I'd like to put a few well-known frameworks in jsapi tests, to
be used as data for SpiderMonkey integration tests.

What's our policy for this? Are there any restrictions? All the frameworks
I currently have at hand have either an MIT or an MIT-like license, so in
theory we need to copy the license somewhere in the test repo, right?

Cheers,
 David


Re: Firefox 60 Beta build error on ARM

2018-05-02 Thread David Major
This sounds like https://bugzilla.mozilla.org/show_bug.cgi?id=1434589
which currently doesn't have a fix. You might be able to work around
it for now with --disable-webrtc.

On Wed, May 2, 2018 at 1:08 PM, Charles G Robertson
 wrote:
> Hi,
>
> I'm trying to build Firefox 60 Beta on Arm64 and seeing this error:
> ...
> g++: error: unrecognized command line option '-msse2'
> gmake[4]: *** [/home/abuild/rpmbuild/BUILD/mozilla/config/rules.mk:1049: 
> Unified_cpp_common_audio_sse2_gn0.o] Error 1
> gmake[3]:
>   *** [/home/abuild/rpmbuild/BUILD/mozilla/config/recurse.mk:73:
> media/webrtc/trunk/webrtc/common_audio/common_audio_sse2_gn/target]
> Error 2
> gmake[3]: *** Waiting for unfinished jobs
> ...
>
> Had no issue building Firefox 59 for Arm64, so something changed between
> 59 and 60. Is there some configure option I should be enabling/disabling
> when building FF 60 on Arm?
>
> Thanks,
> Charles Robertson
> SUSE
>


Re: Update on rustc/clang goodness

2018-05-14 Thread David Major
We've confirmed that this issue with debug symbols comes from lld-link and
not from clang-cl. This will likely need a fix from the LLVM side, but in
the meantime I'd like to encourage people not to be deterred from using
clang-cl as your compiler.

On Thu, May 10, 2018 at 9:12 PM Xidorn Quan  wrote:

> On Fri, May 11, 2018, at 10:35 AM, Anthony Jones wrote:
> > I have some specific requests for you:
> >
> > - Let me know if you have specific Firefox-related cases where Rust is
> >   slowing you down (thanks Jeff [7])
> > - Cross-language inlining is coming - avoid duplication between Rust
> >   and C++ in the name of performance
> > - Do developer builds with clang

> Regarding the last item about building with clang on Windows, I'd not
> recommend that people who use Visual Studio for debugging Windows builds
> build with clang at this moment.

> I've tried using lld-link as the linker (while continuing to use cl
> rather than clang-cl) for my local Windows build, and it seems to cause
> problems when debugging with Visual Studio. Specifically, you may not be
> able to invoke debugging functions like DumpJSStack or DumpFrameTree in
> the Immediate window, and variable value watching doesn't seem to work
> well either.
>
> I've filed a bug[1] for the debugging function issue (and should probably
> file another for the watching issue as well).
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1458109


> - Xidorn


Windows Address Sanitizer enabled on trunk

2018-06-01 Thread David Major
Bug 1360120 on inbound enables Windows ASan builds and tests on trunk branches.

Initially these are tier-2 while we confirm that this doesn't
introduce test flakiness. If nothing catches fire, I intend to bump
them to tier-1 in the near future.

You can run these jobs on try under the platform name "win64-asan" or
build locally with the mozconfig here:
https://developer.mozilla.org/en-US/docs/Mozilla/Testing/Firefox_and_Address_Sanitizer#Creating_local_builds_on_Windows

If you're building ASan locally on Windows 10 version 1803, you'll
want to run mach bootstrap to pick up a fresh clang with an
1803-specific fix.

This platform has taken several years to stand up. Thank you to
everyone who helped out, especially Ehsan for getting this started and
Ting-Yu for working through a bunch of hurdles on automation.

Happy sanitizing!


Re: Windows Address Sanitizer enabled on trunk

2018-06-19 Thread David Major
As of bug 1467126 these jobs are now running at tier 1.

On Fri, Jun 1, 2018 at 3:16 PM David Major  wrote:
>
> [snip]


Intent to ship: Retained Display Lists (rollout plan)

2018-06-19 Thread David Bolter
Hi All,

Retained Display Lists (RDL) will be enabled for release-channel users in
Firefox 61, with a gradual rollout [1].

   - RDL will initially be disabled for all users; 2 days after the FF61
     go-live, we push RDL to 25% of users.
   - We then monitor for a week, then push RDL to 50% of users.
   - After that week, we push RDL to 100% of users.
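A staged rollout like this is usually driven by deterministic client-side sampling, so that growing the percentage only ever adds users. A rough sketch of the idea (hypothetical; this is not the actual rollout mechanism):

```python
import hashlib

def in_rollout(client_id: str, sample_percent: float) -> bool:
    """Deterministically decide whether a client is in an N% rollout."""
    # Hash the client id into a stable bucket in [0, 100).
    digest = hashlib.sha256(client_id.encode("utf-8")).hexdigest()
    bucket = (int(digest[:8], 16) % 10000) / 100.0
    # Because the bucket is fixed per client, raising sample_percent
    # (25 -> 50 -> 100) only ever adds clients, never removes any.
    return bucket < sample_percent

print(in_rollout("example-client-id", 25.0))
```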

Please reply if you have any questions or concerns about this plan.

Thanks,

David and Matt

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1467514


Using clang-cl to ship Windows builds

2018-07-10 Thread David Major
Bug 1443590 is switching our official Windows builds to use clang-cl
as the compiler.

Please keep an eye out for regressions and file a blocking bug for
anything that might be fallout from this change. I'm especially
interested in hearing about the quality of the debugging experience.

It's possible that the patch may bounce and we'll go back and forth to
MSVC for a while. You can check your build's compiler at
`about:buildconfig`. Treeherder is running an additional set of MSVC
jobs on mozilla-central to make sure we can fall back to a green MSVC
if needed.

Watch for more toolchain changes to come. The next steps after this
will be to switch to lld-link and enable ThinLTO. That will open the
door to a cross-language LTO that could inline calls between Rust and
C++. In the longer term we can look into cross-compiling from Linux.

But for now, shipping our most-used platform with an open-source
compiler is a huge milestone in and of itself. Big thanks to everyone
who has contributed to this effort on the Mozilla side, and also big
thanks to the developers of LLVM and Chromium who helped make clang on
Windows a realistic possibility.


Re: Using clang-cl to ship Windows builds

2018-07-10 Thread David Major
At the moment, performance is a mixed bag. Some tests are up and some
are down. In particular I believe Speedometer is down a few percent.

Note however that clang-cl is punching above its weight. These builds
currently have neither LTO nor PGO, while our MSVC builds use both of
those. Any regressions that we're seeing ought to be short-lived. Once
we enable LTO and PGO, I expect clang to be a clear performance win.

If the short-term regressions end up being unacceptable, we can revert
to MSVC later in the release, but at this early point in the cycle
that shouldn't prevent us from collecting data on Nightly.

On Tue, Jul 10, 2018 at 4:31 PM Chris Peterson  wrote:
>
> How does the performance of clang-cl builds compare to MSVC builds on
> benchmarks like Speedometer?
>
>
> On 2018-07-10 1:29 PM, David Major wrote:
> > Bug 1443590 is switching our official Windows builds to use clang-cl
> > as the compiler.
> >
> > Please keep an eye out for regressions and file a blocking bug for
> > anything that might be fallout from this change. I'm especially
> > interested in hearing about the quality of the debugging experience.
> >
> > It's possible that the patch may bounce and we'll go back and forth to
> > MSVC for a while. You can check your build's compiler at
> > `about:buildconfig`. Treeherder is running an additional set of MSVC
> > jobs on mozilla-central to make sure we can fall back to a green MSVC
> > if needed.
> >
> > Watch for more toolchain changes to come. The next steps after this
> > will be to switch to lld-link and enable ThinLTO. That will open the
> > door to a cross-language LTO that could inline calls between Rust and
> > C++. In the longer term we can look into cross-compiling from Linux.
> >
> > But for now, shipping our most-used platform with an open-source
> > compiler is a huge milestone in and of itself. Big thanks to everyone
> > who has contributed to this effort on the Mozilla side, and also big
> > thanks to the developers of LLVM and Chromium who helped make clang on
> > Windows a realistic possibility.
> > ___
> > firefox-dev mailing list
> > firefox-...@mozilla.org
> > https://mail.mozilla.org/listinfo/firefox-dev
>


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-11 Thread David Bruant
Thanks Kris for all this information and the beginning of the first issue
of this newsletter!

2018-07-10 20:19 GMT+02:00 Kris Maglione :

> The problem is thus: In order for site isolation to work, we need to be
> able to run *at least* 100 content processes in an average Firefox session

I've seen this figure of 100 content processes in a couple of places, but
I haven't been able to find the rationale for it. How was the number 100
picked? Would 90 prevent a release of Project Fission?
How will the rollout happen? Will it happen progressively (say, 2 content
processes soon, 4 soon after, 10 some time later, etc.), or does it have
to go from 1 (the current situation, IIUC) straight to 100?


* Andrew McCreight created a tool for tracking JS memory usage, and figuring
>   out which scripts and objects are responsible for how much of it
>   (https://bugzil.la/1463569).
>
How often is this code run? Is there a place to find the daily output of
this tool applied to a nightly build for instance?

Thanks again,

David


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-13 Thread David Major
This touches on a really important point: we're not the only ones
allocating memory.

Just a few that come to mind: GPU drivers, system media codecs, a11y
tools, and especially on Windows we have to deal with "utility"
applications, corporate-mandated gunk, and downright crapware.

When we're measuring progress toward our goals, look not only at your
own pristine dev box but also at that one neighbor's machine whose
adware you're always cleaning out.


On Fri, Jul 13, 2018 at 7:57 AM Gabriele Svelto  wrote:
>
> Just another bit of info to raise awareness on a thorny issue we have to
> face if we want to significantly raise the number of content processes.
> On 64-bit Windows we often consume significantly more commit space than
> physical memory. This consumption is currently unaccounted for in
> about:memory, though I've seen hints of it being caused by the GPU driver
> (or other parts of the graphics pipeline). I've filed bug 1475518 [1] so
> that I don't forget and I encourage anybody with Windows experience to
> have a look because it's something we _need_ to solve to reduce content
> process memory usage.
>
>  Gabriele
>
> [1] Commit-space usage investigation
> https://bugzilla.mozilla.org/show_bug.cgi?id=1475518
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Ship: Updated third-party iframe storage behavior

2015-08-18 Thread David Illsley
Is this pref something that's exposed to users, or is it only in
about:config (I can't seem to find any UI for it)?

If it's about:config-only, this seems like a step away from ever being
able to expose it, as more apps will be built assuming IndexedDB is
unconditionally available in 3rd-party iframes. This change would make
the 'it breaks the web' argument against exposing it stronger. From my
perspective, that would be undesirable.

On Tue, Aug 18, 2015, at 04:20 PM, Michael Layzell wrote:
> Summary: Currently, there are inconsistent rules about the availability 
> of persistent storage in third-party iframes across different types of 
> storage (such as caches, IndexedDB, localstorage, sessionstorage, and 
> cookies). We are looking to unify these behaviors into a consistent set 
> of rules for when persistent storage should be available. We have 
> modeled this after our cookie rules, and now use the cookie behavior 
> preference to control third party access to these forms of persistent 
> storage. This means that IndexedDB (which was previously unconditionally 
> disabled in 3rd-party iframes) is now available in 3rd party iframes 
> when the accept third-party cookies preference is set to "Always". As 
> our current definition of accepting third-party cookies from "Only 
> Visited" makes no sense for non-cookie storage, we currently treat this 
> preference for these forms of storage as though the preference was
> "Never".
> 
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1184973
> 
> Link to standard: N/A.
> 
> Platform coverage: All platforms.
> 
> Target release: Firefox 43.
> 
> Preference behind which this will be implemented: None, although the 
> preference
> "network.cookie.cookieBehavior" will be used to guide the behavior of 
> storage in third-party iFrames.
> 
> DevTools bug: N/A.
> 
> Do other browser engines implement this: Based on my quick testing: 
> Chrome uses its third-party preference to control access to 
> localStorage and sessionStorage, but not IndexedDB or caches. Safari 
> appears to use its preference to control IndexedDB, but not 
> sessionStorage or localStorage. IE appears to only use its 3rd party 
> preference for cookies. All other browsers allow IndexedDB in 3rd party 
> iframes with default settings.
> 
> Security & Privacy Concerns: This changes how websites can store data on 
> the user's machine.
> 
> Web designer / developer use-cases: Previously, we had made IndexedDB 
> unavailable in 3rd-party iframes. Web developers will now be able to use 
> IndexedDB in 3rd party iframes when the user has the accept cookies 
> preference set to always.
> 
> Michael
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
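The rule Michael describes (mirror the cookieBehavior pref, but treat "Only Visited" as "Never" for non-cookie storage) can be sketched as follows. This is an illustration of the stated policy, not the actual Gecko code; the numeric values follow the long-standing network.cookie.cookieBehavior encoding.

```python
# network.cookie.cookieBehavior values (0 = accept all, 1 = reject
# third-party, 2 = reject all, 3 = accept only from visited sites).
ACCEPT_ALL, REJECT_THIRD_PARTY, REJECT_ALL, ONLY_VISITED = 0, 1, 2, 3

def third_party_storage_allowed(cookie_behavior: int) -> bool:
    # "Only Visited" has no sensible meaning for IndexedDB or
    # localStorage, so it is treated the same as "Never" here.
    return cookie_behavior == ACCEPT_ALL

print(third_party_storage_allowed(ACCEPT_ALL))    # True
print(third_party_storage_allowed(ONLY_VISITED))  # False
```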


Re: Dan Stillman's concerns about Extension Signing

2015-11-26 Thread David Burns
Another data point that we seem to have overlooked is that users want to be
able to side-load their extensions for many different reasons. We see this
with apps on phones and with extensions today. I appreciate that users
have grown warning-blind, but, as others have pointed out, this feels like
a sure way to have users move from us to Chrome if their extension lives
there too. Once they are lost, it will be non-trivial to get them back.

My main gripe is that we will be breaking tools like WebDriver[1] (better
known as Selenium) and not once have we approached that community. Luckily
we have Marionette being developed as a replacement, and it was being
developed before we started add-on signing. When this was mentioned, I was
told that since it instruments the browser it can never be signed, and
that we need to get a move on or get everyone to switch to the
"whitelabel" version to use WebDriver. Having spoken to peers at other
large tech companies, they said no: they will remain on older versions,
and if those break they will stop supporting Firefox until they have a
like-for-like replacement. They will stop caring about WebCompat in the
meantime. We will drive away other users because Firefox doesn't work as
well on their favourite website.

There are also companies that have developed internal tools as add-ons
that they don't want on AMO. We are essentially telling them that we don't
care how much effort they have put in or how "sooper sekrit" their add-on
is. It's in AMO or else...

I honestly thought we would do the "signing keys to developers" approach
and revoke when they are being naughty.

David

[1] http://github.com/seleniumhq/selenium

On 26 November 2015 at 13:50, Thomas Zimmermann 
wrote:

> Hi
>
> Am 26.11.2015 um 13:56 schrieb Till Schneidereit:
>
> > I read the blog post, too, and if that were the final, uncontested word
> on
> > the matter, I think I would agree. As it is, this assessment strikes me
> as
> > awfully harsh: many people have put a lot of thought and effort into
> this,
> > so calling for it to simply be canned should require a substantial amount
> > of background knowledge.
>
> Ok, I take back the 'complete nonsense' part. There can be ways of
> improving security that involve signing, but the proposed one isn't. I
> think the blog post makes this obvious.
>
>
> >
> > I should also give a bit more information about the feedback I received:
> in
> > both cases, versions of the extensions exist for at least Chrome and
> > Safari. In at least one case, the extension uses a large framework that
> > needs to be reviewed in full for the extension to be approved. Apparently
> > this'd only need to happen once per framework, but it hasn't, yet. That
> > means that the review is bound to take much longer than if just the
> > extension's code was affected. While I think this makes sense, two things
> > strike me as very likely that make it a substantial problem: many authors
> > of extensions affected in similar ways will come out of the woodwork very
> > shortly before 43 is released or even after that, in reaction to users'
> > complaints. And many of these extensions will use large frameworks not
> > encountered before, or simply be too complex to review within a day or
> two.
>
> Thanks for this perspective. He didn't seem to use any frameworks, but
> the review process failed for an apparently trivial case. Regarding
> frameworks in general: there are many and there are usually different
> versions in use. Sometimes people make additional modifications. So this
> helps only partially.
>
> And of course reviews are not a panacea at all. Our own Bugzilla is
> proof of that. ;) Pretending that a reviewed extension (or any other
> piece of code) is more trust-worthy is not credible IMHO. Code becomes
> trust-worthy by working successfully in "the real world."
>
> >
> > I *do* think that we shouldn't ship enforced signing without having a
> solid
> > way of dealing with this problem. Or without having deliberately decided
> > that we're willing to live with these extensions' authors recommending
> (or
> > forcing, as the case may be) their users to switch browsers.
>
> I think, a good approach would be to hand-out signing keys to extension
> developers and require them to sign anything they upload to AMO. That
> would establish a trusted path from developers to users; so users would
> know they downloaded the official release of an extension. A malicious
> extensions can then be disabled/blacklisted by simply revoking the keys
> and affected users would notice. For anything non-AMO, the user is on
> their own

Re: ESLint is now available in the entire tree

2015-11-29 Thread David Bruant

Hi,

Just a drive-by comment to inform folks that there is an effort to 
transition the Mozilla JavaScript codebase to standard JavaScript.

The main bug is: https://bugzilla.mozilla.org/show_bug.cgi?id=867617

And https://bugzilla.mozilla.org/show_bug.cgi?id=1103158 is about 
removing non-standard features from SpiderMonkey.
Of course this can rarely be done right away; it most often requires 
dependent bugs to move code to standard ECMAScript (with a period of 
warnings about usage of the non-standard feature).


Le 27/11/2015 23:53, Dave Townsend a écrit :

> We also know that some of our JS syntax isn't supported by ESLint. Array
> generators look like they might not be standardized anyway so please
> switch them to Array.map.
If anyone notices such cases and doesn't have the time to fix them right 
away, please file a bug that blocks

https://bugzilla.mozilla.org/show_bug.cgi?id=1220564

...and of course, you meant Array.prototype.map given Array.map is 
non-standard and meant to disappear ;-)

https://bugzilla.mozilla.org/show_bug.cgi?id=1222547
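Concretely, the migration from the non-standard SpiderMonkey forms to standard code looks roughly like this (a sketch; the removed syntax is shown only in comments, since it no longer parses):

```javascript
"use strict";
const xs = [1, 2, 3];

// Non-standard array comprehension (removed): [for (x of xs) x * 2]
const doubled = xs.map(x => x * 2);

// Non-standard generic Array.map(xs, fn) becomes a plain method call,
// or Array.prototype.map.call for array-like objects:
const tripled = Array.prototype.map.call(xs, x => x * 3);

console.log(doubled);  // [ 2, 4, 6 ]
console.log(tripled);  // [ 3, 6, 9 ]
```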


> For conditional catch expressions we may
> want to add support to eslint somehow but for now any of those cases
> will just have to be ignored.
I created https://bugzilla.mozilla.org/show_bug.cgi?id=1228841 for 
eventual removal of this non-standard feature.

Adding support to ESLint might be a sensible choice in the meantime indeed.

David


Re: Thanks for all the great teamwork with the Sheriffs in 2015!

2015-12-30 Thread David Burns
Well done Sheriffs! Really proud of all the work you did this year!

David

On 30 December 2015 at 14:19, Carsten Book  wrote:

> Hi,
>
> Sheriffing is not just about Checkins, Uplifts and Backouts - its also a
> lot of teamwork with different Groups and our Community like Developers, IT
> Teams and Release Engineering and a lot more to keep the trees up and
> running. And without this great teamwork our job would be nearly impossible!
>
> So far in 2015 we had around:
>
> 56471 changesets with 336218 changes to 70807 files in mozilla-central
> and 4391 Bugs filed for intermittent failures (and a lot of them fixed).
>
> So thanks a lot for the great teamwork with YOU in 2015 - especially also
> a great thanks to our Community Sheriffs like philor, nigelb and Aryx who
> done great work!
>
> I hope we can continue this great teamwork in 2016 and also the monthly
> sheriff report with interesting news from the sheriffs and how you can
> contribute will continue then :)
>
> Have a great start into 2016!
>
> Tomcat
> on behalf of the Sheriffs-Team
>
>
>


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread David Keeler
> { "aus5.mozilla.org", true, true, true, 7, &kPinset_mozilla },

Just for clarification and future reference, the second "true" means this
entry is in test mode, so it's not actually enforced by default.

On Mon, Jan 4, 2016 at 1:08 PM, Dave Townsend  wrote:

> aus5 (the server the app updater checks) is still pinned:
>
> https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/StaticHPKPins.h#739
>
> On Mon, Jan 4, 2016 at 12:54 PM, Robert Strong 
> wrote:
> > On Mon, Jan 4, 2016 at 12:46 PM, Jesper Kristensen <
> > moznewsgro...@something.to.remove.jesperkristensen.dk> wrote:
> >
> >> Den 04-01-2016 kl. 19:45 skrev Daniel Holbert:
> >>
> >>> On 01/04/2016 10:33 AM, Josh Matthews wrote:
> >>>
>  Wouldn't the SSL cert failures also prevent submitting the telemetry
>  payload to Mozilla's servers?
> 
> >>>
> >>> Hmm... actually, I'll bet the cert errors will prevent Firefox updates,
> >>> for that matter! (I'm assuming the update-check is performed over
> HTTPS.)
> >>>
> >>
> >> If I remember correctly, update checks are pinned to a specific CA, so
> >> updates for users with software that MITM AUS would already be broken?
> >
> > That was removed awhile ago in favor of using mar signing as an exploit
> > mitigation.
> >
> >
> >
> >>
> >> ___
> >> dev-platform mailing list
> >> dev-platform@lists.mozilla.org
> >> https://lists.mozilla.org/listinfo/dev-platform
> >>
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Taking screenshots of single elements (XUL/XULRunner)

2016-01-19 Thread David Burns
You can try getting access to
https://dxr.mozilla.org/mozilla-central/source/testing/marionette/capture.js
and then that will give you everything you want or you can just "borrow"
the code from there.

David

On 19 January 2016 at 11:22, Ted Mielczarek  wrote:

> On Tue, Jan 19, 2016, at 01:39 AM, m.bauermeis...@sto.com wrote:
> > As part of my work on a prototyping suite I'd like to take screenshots
> > (preferably retaining the alpha channel) of single UI elements. I'd like
> > to do so on an onclick event.
> >
> > Is there a straightforward way to accomplish this? Possibly with XPCOM or
> > js-ctypes?
>
> You can use the drawWindow method of CanvasRenderingContext2D:
>
> https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/drawWindow
>
> You just need to create a canvas element, call getContext('2d') on it,
> and then calculate the offset of the element you want to screenshot.
>
> -Ted
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
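Ted's recipe, put together, looks roughly like the sketch below. This is privileged chrome code only (drawWindow is not exposed to web content), the function and variable names are illustrative, and scroll offsets are ignored for brevity:

```javascript
function screenshotElement(element) {
  const win = element.ownerDocument.defaultView;
  const rect = element.getBoundingClientRect();
  const canvas = element.ownerDocument.createElementNS(
    "http://www.w3.org/1999/xhtml", "canvas");
  canvas.width = rect.width;
  canvas.height = rect.height;
  const ctx = canvas.getContext("2d");
  // Draw only the element's region of the window; a fully transparent
  // background color ("rgba(0,0,0,0)") preserves the alpha channel.
  ctx.drawWindow(win, rect.left, rect.top, rect.width, rect.height,
                 "rgba(0,0,0,0)");
  return canvas.toDataURL("image/png");
}
```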


Report from Build Team Sprint

2016-03-08 Thread David Burns
February 22-26 saw the build team gathered in San Francisco to discuss all
the work that is required to speed up developer ergonomics around building
Firefox.

After discussing all the major items of work we want to work on in the
first half of this year (of which there are many), we started working on
replacing the Autoconf configure script. We are planning to replace the old
shell and m4 code with sandboxed python, which will allow us to have
improved performance (especially on Windows), parallel configure checks,
better caching, and improved maintainability.

We have also added more support for artifact builds
<https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Artifact_builds>.
If you use Git as part of your Firefox development, you will now be able to
get the latest artifacts for your builds. This will benefit anyone not
currently working on C++ code in Mozilla-Central. Support for C++
developers will be worked on later in the year.

We were also able to add more data points to the telemetry work. This is
still in development and being beta tested by Engineering Productivity
before we ask for more people to submit their build data.

There was also work to see if we can get speed-ups by using different
compiler versions. We are seeing huge gains from upgrading to newer
versions, but some of the upgrades require work before we can deploy
them. When we are ready to do an upgrade, we will email the list to warn
you.

David Burns


Build System Project - Update from the last 2 weeks

2016-03-21 Thread David Burns
Below is a highlight of all work the build peers have done in the last 2
weeks as part of their work to modernise the build infrastructure.

Since the last report[1] we have seen thousands of lines of configure and
m4 code removed from mozilla-central. We have removed over 30 Makefiles
from mozilla-central. The removal of the Makefiles gives us what we need to
use more performant build backends in the future.

In case you missed it, you can now make Artifact Builds[3] if you are using
git[4]. If you find any issues, please raise bugs against Core :: Build
Config.

Finally, we are seeing huge improvements in build times by switching to
Visual Studio 2015. There are however a few regressions in moving to the
latest version. gps has asked for assistance[5] in helping finalise what we
need to make the move. If you can, help get this over the line!

Regards,

David

[1] https://groups.google.com/forum/#!topic/mozilla.dev.platform/PD7Lmot1H3I

[2]
https://groups.google.com/forum/#!msg/mozilla.dev.builds/0Wo_8Vhgu9Y/XFCCSmKABQAJ

[3] https://developer.mozilla.org/en-US/docs/Artifact_builds

[4] https://groups.google.com/forum/#!topic/mozilla.dev.builds/XtyrGfY48wo

[5]
https://groups.google.com/d/msg/mozilla.dev.platform/2dja0nkKq_w/5iJPJXCaBwAJ


#include sorting: case-sensitive or -insensitive?

2016-03-28 Thread David Keeler
(Everyone, start your bikesheds.)

The style guidelines at
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
indicate that #includes are to be sorted. It does not say whether or not
to consider case when doing so (and if so, which case goes first?). That
is, should it be:

#include "Foo.h"
#include "bar.h"

or

#include "bar.h"
#include "Foo.h"

Based on the "Java practices" section of that document, I'm assuming
it's the former, but that's just an assumption and in either case it
would be nice to document the accepted style for C/C++.
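For concreteness, the two candidate orderings are just a byte-wise sort versus a case-folded sort; Python is used here purely for illustration:

```python
headers = ['"bar.h"', '"Foo.h"', '"baz.h"']

# Case-sensitive (byte-wise ASCII) order: uppercase sorts before
# lowercase, so "Foo.h" comes first.
print(sorted(headers))                 # ['"Foo.h"', '"bar.h"', '"baz.h"']

# Case-insensitive order: compare on the lower-cased name.
print(sorted(headers, key=str.lower))  # ['"bar.h"', '"baz.h"', '"Foo.h"']
```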

Cheers,
David



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Build System Project - Update from the last 2 weeks

2016-04-05 Thread David Burns
Below is a highlight of all work the build peers have done in the last 2
weeks as part of their work to modernise the build infrastructure.

Since the last report[1] a large number of improvements have landed in
Mozilla Central.

The build system now lazily installs test files. Before, the build copied
tens of thousands of test and support files. This could take dozens of
seconds on Windows or machines with slow I/O. Now, the build system defers
installing test files until they are needed there (e.g. when running tests
or creating test packages). Furthermore, only the test files relevant to
the action performed are installed. Mach commands running tests should be
significantly faster, as they no longer examine the state of tens of
thousands of files on every invocation.

After upgrading build machines to use VS2015, we have seen a decrease in
build times[2] for PGO on Windows by around 100 minutes. This brings PGO
times on Windows in line with PGO (strictly speaking, LTO) times on
Linux.

This work, coupled with the build promotion work[3], has reduced the time
it takes automation to release Firefox on Windows by over 10 hours.

We are continuing to remove configure, m4 code, and Makefiles from
mozilla-central. As mentioned in the previous status email, this will allow
us to replace the build backend with a more performant tool.

David

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/IKRdGCjdN_Y/sK2QbXqmCAAJ

[2]
https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000&series=%5Bmozilla-inbound,04b9f1fd5577b40a555696555084e68a4ed2c28f,1%5D&series=%5Bmozilla-inbound,c0018285639940579da345da71bb7131d372c41e,1%5D&series=%5Bmozilla-inbound,65e0ddb3dc085864cbee77ab034dead6323a1ce6,1%5D

[3] https://rail.merail.ca/posts/release-build-promotion-overview.html


  1   2   3   4   5   6   7   8   9   10   >