Intent to prototype ARIA reflection (non-IDREF)

2020-04-11 Thread Alexander
/Summary/: ARIA reflection is an extension of the Element interface which 
provides a handy way to manage ARIA attributes on a DOM element, making 
web authors' lives easier. Note that the IDREF-attribute part is not 
included at this point (see https://github.com/whatwg/html/issues/4925).
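
For example, a short sketch of the API shape (the attribute names follow the 
ARIA spec linked under /Standard/ below; the IDL attributes are simply 
another way to read and write the corresponding content attributes):

  // A sketch: with reflection, the IDL attribute and the content attribute
  // are two views of the same thing.
  const slider = document.createElement("div");

  slider.role = "slider";       // == setAttribute("role", "slider")
  slider.ariaValueNow = "42";   // == setAttribute("aria-valuenow", "42")

  console.log(slider.getAttribute("aria-valuenow"));  // "42"
  console.log(slider.ariaValueNow);                    // "42"

  // IDREF-based attributes (e.g. aria-labelledby, aria-activedescendant)
  // are not part of this prototype and still require setAttribute.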


/Bug/: https://bugzilla.mozilla.org/show_bug.cgi?id=1628418

/Standard/: https://w3c.github.io/aria/#idl-interface

/Platform coverage/: web

/Preference/: accessibility.ARIAReflection.enable

/DevTools bug/: no extra effort should be required, since the interfaces 
are extensions of the Element interface and should be picked up 
automatically by DevTools.


/Other browsers/:

* Safari shipped (Aug 2018, https://bugs.webkit.org/show_bug.cgi?id=184676)

* Chrome shipped behind a flag (Jun 2018, 
https://bugs.chromium.org/p/chromium/issues/detail?id=844540)


/web-platform-tests/: 
https://searchfox.org/mozilla-central/source/testing/web-platform/tests/dom/nodes/aria-attribute-reflection.tentative.html


/Secure contexts/: security/privacy is scoped by attribute 
reflection 
(https://html.spec.whatwg.org/multipage/common-dom-interfaces.html#reflecting-content-attributes-in-idl-attribute), 
which shouldn't raise any security/privacy issues, since the API doesn't 
allow anything beyond setAttribute/getAttribute.


/Is this feature enabled by default in sandboxed iframes?/ yes


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Status of support for Android on ARMv7 without NEON

2017-01-23 Thread Nicholas Alexander
On Mon, Jan 23, 2017 at 9:58 AM, Henri Sivonen  wrote:

> Do we support Android on ARMv7 without NEON?
>

Ralph Giles told me just a few days ago that yes, we support ARMv7 with and
without NEON.  This is relevant to the Oxidation project.

I may have been incorrect; I'm CCing Ralph to verify or correct me.


> Does the answer differ for our different API level builds?
>

Right now, we ship only a single Fennec APK that supports Android API 15+.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: All platform uses ICU as default

2017-02-07 Thread Nicholas Alexander
On Tue, Feb 7, 2017 at 1:03 AM, Makoto Kato  wrote:

> I will land bug 1215247 [1] this week since the Fennec team has approved it,
> so all platforms will use ICU and the JS Intl API by default.
>

This is a huge amount of work -- congratulations to all involved!

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please write good commit messages before asking for code review

2017-03-09 Thread Nicholas Alexander
On Thu, Mar 9, 2017 at 1:48 PM, Boris Zbarsky  wrote:

> On 3/9/17 4:35 PM, Eric Rescorla wrote:
>
>> I'm in favor of good commit messages, but I would note that current m-c
>> convention really pushes against this, because people seem to feel that
>> commit messages should be one line.
>>
>
> They feel wrong, and we should tell them so.  ;)  The first line should
> include a brief summary of the change.  The rest of the commit message
> should explain details as needed.


Greg Szorc has been a vocal proponent of descriptive commit messages -- I
consider https://bugzilla.mozilla.org/show_bug.cgi?id=1271035 a work of art
-- and has converted many, including myself, to the cause.  I'm thrilled to
see this practice get traction!

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please write good commit messages before asking for code review

2017-03-09 Thread Nicholas Alexander
On Thu, Mar 9, 2017 at 2:07 PM, Andrew McCreight 
wrote:

> On Thu, Mar 9, 2017 at 1:55 PM, Nicholas Alexander  >
> wrote:
>
> > On Thu, Mar 9, 2017 at 1:48 PM, Boris Zbarsky  wrote:
> >
> > > On 3/9/17 4:35 PM, Eric Rescorla wrote:
> > >
> > >> I'm in favor of good commit messages, but I would note that current
> m-c
> > >> convention really pushes against this, because people seem to feel
> that
> > >> commit messages should be one line.
> > >>
> > >
> > > They feel wrong, and we should tell them so.  ;)  The first line should
> > > include a brief summary of the change.  The rest of the commit message
> > > should explain details as needed.
> >
> >
> > Greg Szorc has been a vocal proponent of descriptive commit messages -- I
> > consider https://bugzilla.mozilla.org/show_bug.cgi?id=1271035 a work of
> > art
> > -- and has converted many, including myself, to the cause.  I'm thrilled
> to
> > see this practice get traction!
> >
>
> While that is certainly entertaining to read, personally I don't think it
> is a great commit message. Anybody who wants to figure out what the patch
> is actually doing and why it is doing it has to read through paragraphs
> about barleywine to find that out.
>

I suppose art is in the eye of the beholder.


>
> On the subject of long commit messages, here's a commit message I wrote
> that had 3 paragraphs to explain a patch that just changed a 0 to a 1:
> https://hg.mozilla.org/integration/autoland/rev/bf059ec2bdc9


This must have come up in a different discussion, 'cuz I've read (and
admired!) this particular commit message before.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-25 Thread Alexander Surkov
On Tuesday, April 18, 2017 at 6:53:14 PM UTC-4, smaug wrote:
> On 04/18/2017 04:24 PM, Ehsan Akhgari wrote:
> > On 2017-04-18 12:30 AM, Mike Hommey wrote:
> >>> I've yet to see that to happen. What is crucial is fast way to browse
> >>> through the blame in time. So commit messages should be short and
> >>> descriptive. Telling what and why. (I very often need to go back to CVS 
> >>> era
> >>> code). I won't spend time reading several paragraphs of commit messages.
> >>> Those are just too long.
> >>> Basically first line should tell the essentials 'what' and maybe the most 
> >>> obvious 'why' and the next couple of lines explaining 'why' more 
> >>> precisely.
> >>> Don't write stories which will make blame-browsing slower.
> >>
> >> What sort of blame-browsing makes more than the first line of the commit
> >> message a burden? I'm curious, because that doesn't match my use of
> >> blame.
> >
> > I can't say I have had this issue, and I also do a lot of code
> > archaeology as well.  I suppose to a large extent this does depend on
> > what tools you use, perhaps.  These days I use searchfox.org for a lot
> > of blame browsing that I do, which displays only the first line in the
> > default UI and only displays the full commit message when you look at
> > the full commit.  I find this a pretty good balance.
> >
> >> And I've done my share of archeology and for old enough stuff you often
> >> end up with one of:
> >> - a completely useless commit message *and* useless bug (if there's even
> >>   a bug number)
> >> - a mostly useless commit message and some information in the bug.
> >>   You're lucky if it's all contained in a few sentences.
> >> - a mostly useless commit message and tons of information in the bug,
> >>   that you have to spend quite some time parsing through.
> >>
> >> In *all* cases, you have to go switch between your blame and bugzilla.
> >> It's a PITA. Now, when you have everything in VCS, you just work with
> >> your blame. Obviously, CVS-era commits are not going to change, but do
> >> you really want to keep things in the same state for commits done now,
> >> when you (or someone else) goes through blame in a few years?
> >
> > Agreed.
> >
> > Also even for newer code you often end up in the third category.  Maybe
> > it's just me but I spend a lot of time reading old bugs, and I find it a
> > huge time sink to read through tons of comments just to find where the
> > actual discussion about why the fix was done in the way that it was done
> > happened in the bug.  Sometimes I have to really read 100+ comments.
> > And the sad reality is that all too often I find very questionable
> > things in patches happen in bugs without any discussion whatsoever which
> > means that sometimes one can spend half an hour reading those 100+
> > comments very carefully to find out in the end that they have just
> > wasted their time and the bug actually did not contain the interesting
> > information in the first place.
> >
> > I think I would probably sympathize more with the side of the argument
> > arguing for bugs as the source of the truth if this wasn't really true,
> > but I think part of the reason why people don't bother writing good
> > commit messages is that they assume that the reasons behind the
> > approaches they take and the design decisions and the like are obvious,
> > and while that may be true *now* and to them and the code reviewer, it
> > will not be true to everyone and in the future, and because of that if
> > you're not careful the important information is just not captured
> > anywhere.  I hope we can all agree that it's important that we capture
> > this information for the maintainability of our code, even if we can't
> > agree where the information needs to be captured.  :-)
> >
> 
> The important part of documentation should be in code comments, not commit 
> messages.
> We aren't too good at commenting code.
> 
> 
> Another thing, today I was looking at a bug which has several patches, and 
> realized one can't really understand the big picture by looking at commit 
> messages of some particular commit. There needs to be some centralized place 
> for that, and it is the bug. Going through each patch individually and 
> looking at the commit messages would be a really painful way to see what and 
> why was changed.

As a guy who has stayed with Mozilla for a while, here's my two cents. I give a 
thumbs-up to Olli's short, descriptive "what and why" commit messages over long 
stories stretched across multiple patches. I'm also keen on going to the bug for 
details if a commit doesn't look quite clear to me.

I realize that bugs might have thousands of comments, which makes them hard to 
read, but that's probably a matter of bug organization: one meta bug and several 
small dependent bugs might help here. I don't want to claim that this approach 
suits every Mozilla module, but it seems to be working well in relatively small 
modules like the accessibility one.
___

Re: A reminder about commit messages: they should be useful

2017-04-25 Thread Alexander Surkov
On Tuesday, April 25, 2017 at 11:11:28 AM UTC-4, Boris Zbarsky wrote:
> On 4/25/17 10:50 AM, Alexander Surkov wrote:
> > I don't want to claim that this approach suits every Mozilla module, but 
> > it seems to be working well in relatively small modules like the accessibility one.
> 
> Just as a counterpoint... as non-regular contributor to the 
> accessibility module, I have a _very_ hard time making sense of 
> accessibility commits, precisely because the commit messages are often 
> not very descriptive and the bugs are often hard to make sense of 
> for a non-expert.
> 
> I don't have this problem in cases where I'm similarly out of my depth 
> but commit messages contain more information.

I bet there's always room for improvements, and I hope this was a counterpoint 
for the example only, not for the bug organization approach.

Overall it feels to me that long commit messages vs. check-the-bug are rather 
different styles, each with its own benefits and disadvantages, and different 
people prefer one over the other.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-25 Thread Alexander Surkov
On Tuesday, April 25, 2017 at 1:20:29 PM UTC-4, Boris Zbarsky wrote:
> On 4/25/17 1:07 PM, Alexander Surkov wrote:
> > I bet there's always room for improvements, and I hope this was a 
> > counterpoint for the example only, not for the bug organization approach.
> 
> Sort of.
> 
> It was a counterpoint to "just check the bug; all the info is there". 
> Often it's not there, or not there in usable form.
> 
> If people included a summary of the discussion in the bug right about 
> when they commit, or had bugs that actually explained what's going on 
> clearly, I would be a lot more OK with the "check the bug" approach.
> 
> > Overall it feels to me that long commit messages vs. check-the-bug are rather 
> > different styles
> 
> To be clear, I don't think commit messages need to be _long_.  They need 
> to be _useful_.  A commit message pointing to a particular bug comment 
> that explains all the ins and outs is no worse, from my point of view, 
> than a commit message that explains the ins and outs.
> 
> The problem I started this thread to address is things like a commit 
> message that says "flip this bit" and references a bug entitled "flip 
> this bit", with no explanation of what the bit does or why it should be 
> flipped.  I hope we can all agree that _that_ is not OK.  And it's far 
> too common.
> 
> -Boris

Maybe we should have a style guide, explaining what makes a good commit message 
and what makes a good and descriptive bug, with a number of (good and bad) 
examples.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal for informing Developers and Sheriffs about new Testsuites

2017-06-14 Thread Nicholas Alexander
On Wed, Jun 14, 2017 at 1:58 AM, Carsten Book  wrote:

> Hi,
>
> we had a case where a new testsuite was enabled and one test of that
> suite resulted in a new perma-orange failure. It's a tier-2 testsuite, but of
> course tier 2 also causes additional work for sheriffs.
>
> The problem with a new testsuite is that sheriffs are in a lot of cases not
> informed about it (so we need to find out who to contact,
> etc.), and it also affects developers, because they might be confused about why
> their try run is showing an orange result they don't even know of (and waste
> time finding out whether that test is red/orange because of their change).
>
> So I propose:
>
> People should email with an intent to implement when adding a new
> platform/suite, and that email should contain links to docs/wiki/... and
> the persons (in a super optimal case more than one just for timezone
> coverage if sheriffs have questions) we can contact in case of questions.
>
> This would benefit us in the following ways:
>
> 1) sheriffs will know who to speak to if it's failing or needs to be
> hidden - so we can assign bugs or even ping people directly on irc
> 2) developers know what this new test is when they break it on their try
> push and so save time and resources if the failure is known and sheriffs or
> others can point to a bug/newsgroup mail
> 3) discussions can be had about any overlap the new tests have with other
> suites or say deciding what platforms it should run on etc
>
> I guess this would help sheriffs and developers. I was initially thinking
> about a changelog on Treeherder for such changes, but that has the
> disadvantage that it would not include all the information we need to
> contact someone, etc.
>

> I hope that this proposal makes sense.
>

There are requirements for the tiers already laid out that have not been
linked in this thread:

https://wiki.mozilla.org/Sheriffing/Job_Visibility_Policy

I found the requirements clear but onerous.

I am actively pushing a set of testsuites across the tier 2 to tier 1
boundary as we speak (for those interested in what's involved, see [1]).  I
was planning to get a sheriff's review of all the various pieces
(documentation, failure messages, etc) by flagging an individual sheriff,
but it would be better to have an intent-to-change-tier email for broader
discussion.

Nick

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1372075 and especially
https://bugzilla.mozilla.org/show_bug.cgi?id=1370539#c3
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New string types: nsAutoStringN<> and nsAutoCStringN<>

2017-08-22 Thread Nicholas Alexander
Hi Nicholas,

On Mon, Aug 21, 2017 at 11:38 PM, Nicholas Nethercote <
n.netherc...@gmail.com> wrote:

> I really wish our top-level namespace was "moz", rather than "mozilla".
>

Can you say why?  Is it "just" shorter?  Is it more pleasant to s/std/moz/
and vice versa?

Nick


>
> Nick
>
> On Tue, Aug 22, 2017 at 10:31 AM, Eric Rahm  wrote:
>
> > On Mon, Aug 21, 2017 at 8:32 AM, Jonathan Kew 
> wrote:
> >
> > >
> > > Wouldn't it be more "modern Gecko-ish", though, to drop the "ns"
> prefix,
> > > and perhaps put Auto[C]String into the mozilla namespace?
> > >
> > >
> > As Nick said, renaming all the things is a job for another day :)
> >
> > Coincidentally I have some pending changes that affect the internal
> naming
> > of all of our strings. Externally (outside of xpcom/string) there will be
> > no discernible change but I *could* move everything to the mozilla
> > namespace and drop the 'ns' prefix. We could then gradually migrate to
> the
> > new naming scheme externally. I think we'd eventually want to move the
> > include path to 'mozilla/String.h' as well if we went this route, in the
> > interim we could just have stubs that redirect to the mozilla path.
> >
> > I'm not sure how much backing that has -- we'd be going from nsString =>
> > String which is pretty close to std::string -- it would be interesting to
> > get some feedback.
> >
> > -e
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: #pragma once?

2017-10-12 Thread Nicholas Alexander
On Thu, Oct 12, 2017 at 8:16 AM, Gian-Carlo Pascutto 
wrote:

> On 11-10-17 11:45, Emilio Cobos Álvarez wrote:
> > Hi,
> >
> > I'm adding a header to the build, and I'm wondering: Can we use pragma
> once?
> >
> > I don't see it anywhere in the build except third-party paths and:
> >
> >   dom/svg/nsISVGPoint.h
> >   xpcom/io/nsAnonymousTemporaryFile.h
> >
> > So I'm not sure if it's because it's somehow prohibited or not
> > recommended, or just because of inertia.
>
> It's covered in our Coding Style:
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_
> guide/Coding_Style#CC_practices
>
> The #pragma is non-standard (though fairly widely supported), but the
> "portable" way is recognized as equivalent for several compilers. It
> actually seems MSVC is the only one that we (still) use that doesn't
> know about the equivalence.
>

To be fair, the Coding Style doesn't mention `#pragma once`, nor does it
have the discussion provided by gcp.  Should it be added?

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experimenting with a shared review queue for Core::Build Config

2017-10-13 Thread Nicholas Alexander
On Fri, Oct 13, 2017 at 7:47 AM, Andreas Tolfsen  wrote:

> Also sprach smaug:
>
> How did the setup actually work?
>>
>
> I think farre gave a satisfactory summary of the review tool above.
>
> I've asked this from farre too before, and IIRC, the reply was
>> that it wasn't working that well. Certain devs still ended up
>> doing the majority of the reviews.  But perhaps I misremember or
>> certainly I don't know the details.
>>
>
> Shared review queues aren't a silver bullet for balancing reviews
> between peers, for a couple of different reasons, but they arguably
> make it easier for peers to collaborate on (and be informed of)
> reviews.
>
> For certain parts of the codebase, it is a sad fact the bus factor
> is low and we shouldn’t be fooled that a system which practically
> allows any one of one’s peers to pick up the review, will somehow
> magically improve the turnaround time for patches in those areas.
> You will still have reviews that can practically only be reviewed by
> one single person who is the domain expert.
>
> On the flipside, when you have a patch for a piece of code multiple
> peers know well and feel comfortable accepting, turnaround time
> could be improved compared to the status quo where the single
> r? could be travelling, on PTO, or otherwise preoccupied.


This last point is what I think matters.  As a build peer (for a subset of
the build system), I know exactly who has the expertise to review my build
system patches -- and I also know which of my patches don't require deep
expertise and can be reviewed by any build peer.  I manually balance my
requests to keep this chaff out of the busy build peers' queues.  I'm
hoping the shared queue can let the set of build peers opt in to this
balancing, keeping the simple patches out of the busy build peers' queues.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Stylo is built as default even if Fennec/Android

2017-10-27 Thread Nicholas Alexander
On Fri, Oct 27, 2017 at 6:24 AM, Makoto Kato 
wrote:

> Hi, all.
>
> You know, stylo (Quantum CSS) is turned on for Firefox Desktop only.
> The Stylo team is working very hard on Android too, and all reftests and
> mochitests now pass on Fennec/Android as well.
>
> So I would like to turn on the stylo build on Android in the Nightly channel
> for feedback.  Although the preference stays off by default in the
> 58 cycle, it will be turned on in the 59 cycle.
>

Great work, Makoto and team!  Excited to see new technology on Android.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: Linux 32bit Geckodriver executable

2017-11-21 Thread Nicholas Alexander
On Tue, Nov 21, 2017 at 5:25 AM, David Burns  wrote:

> For the next version of geckodriver I am intending that it not ship a Linux
> 32 bit version of Geckodriver. Currently it accounts for 0.1% of downloads
> and we regularly get somewhat cryptic intermittents which are hard to
> diagnose.
>

I don't see the connection between 32-bit geckodriver and the test changes
below.  Is it that the test suites we run require 32-bit geckodriver, and
that's the only consumer?


> *What does this mean for most people?* We will be turning off the WDSpec
> tests, a subset of Web-Platform Tests used for testing the WebDriver
> specification.


Are these WDSpec tests run anywhere?  My long play here is to use a Java
Web Driver client to drive web content to test interaction with GeckoView,
so I'm pretty interested in our implementation conforming to the Web Driver
spec ('cuz any Java Web Driver client will expect it to do so).  Am I
missing something here?

This is all rather vaporish, so if my concerns aren't concrete or immediate
enough, I'll accept that.


> Testharness.js and reftests in the Web-Platform tests will
> still be working as they use Marionette via another means.
>
> Let me know if you have any questions.
>

Thanks!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: Linux 32bit Geckodriver executable

2017-11-21 Thread Nicholas Alexander
David,

On Tue, Nov 21, 2017 at 12:54 PM, David Burns  wrote:

> Answered inline below.
>
> On 21 November 2017 at 19:03, Nicholas Alexander 
> wrote:
>
>>
>>
>> On Tue, Nov 21, 2017 at 5:25 AM, David Burns  wrote:
>>
>>> For the next version of geckodriver I am intending that it not ship a
>>> Linux
>>> 32 bit version of Geckodriver. Currently it accounts for 0.1% of downloads
>>> and we regularly get somewhat cryptic intermittents which are hard to
>>> diagnose.
>>>
>>
>> I don't see the connection between 32-bit geckodriver and the test
>> changes below.  Is it that the test suites we run require 32-bit
>> geckodriver, and that's the only consumer?
>>
>
> Linux 32 bit Geckodriver is only used on that platform for testing wdspec
> tests. It is built as part of the Linux 32 bit build and then moved to
> testers.
>
>
>>
>>
>>> *What does this mean for most people?* We will be turning off the WDSpec
>>> tests, a subset of Web-Platform Tests used for testing the WebDriver
>>> specification.
>>
>>
>> Are these WDSpec tests run anywhere?  My long play here is to use a Java
>> Web Driver client to drive web content to test interaction with GeckoView,
>> so I'm pretty interested in our implementation conforming to the Web Driver
>> spec ('cuz any Java Web Driver client will expect it to do so).  Am I
>> missing something here?
>>
>
> They are currently run on OSX, Windows 32bit and 64bit and Linux 64 bit.
> We are not dropping support for WebDriver. Actually this will allow us to
> focus more on where our users are.
>

Beautiful :)


> As for mobile, geckodriver is designed to speak to Marionette over TCP. As
> long as we can speak to the view, probably over adb, geckodriver can
> then speak to Marionette. This would make the host mostly irrelevant, and
> seeing how Linux 32 is barely used, it's not going to affect any work that
> you do.
>

I have an unusual desire to drive Web Driver from the mobile device
(without having a hosted geckodriver) for Android workflow reasons, but
that's not relevant to this 32-bit issue.


>
>
>>
>> This is all rather vaporish, so if my concerns aren't concrete or
>> immediate enough, I'll accept that.
>>
>
> Hopefully this gives you a little more confidence :)
>

It does!  Thanks for clarifying.
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: No more #ifdef MOZ_CRASHREPORTER directives needed when using crash reporter functions

2017-11-24 Thread Nicholas Alexander
On Fri, Nov 24, 2017 at 2:55 AM, Gabriele Svelto 
wrote:

> [cross-posting to firefox-dev]
>
>  Fellow mozillians,
> bug 1402519 [1] just landed and it introduces a dummy implementation of
> the crash reporter which is built when configuring with
> --disable-crash-reporter. This removes the need for bracketing calls to
> the crash reporter with #ifdef MOZ_CRASHREPORTER / #endif preprocessor
> directives. You can now freely use the crash reporter methods without
> worrying about whether it is enabled at compile time or not.
>

Thanks for doing this work!  These kinds of incremental improvements that
reduce preprocessing and unify the code base really help over time.

Best,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: new code to help with cross-process windows/workers (and Clients API)

2017-12-08 Thread Nicholas Alexander
Hi Ben,

On Fri, Dec 8, 2017 at 12:23 PM, Ben Kelly  wrote:

> Hi all,
>
> I just want to give you a heads up about some new infrastructure that is
> landing in the tree.  In particular, we now have:
>


1. A central list of windows and workers for the entire browser in the
> parent process.  This includes information on their origin, url, and what
> process they are running in.  This lives in ClientManagerService.
>

This is the kind of work that makes me wonder how we _didn't_ have this
central list before :)




> P.S. I was also thinking of doing a blog post with diagrams of how it works
> if people think that would be useful.
>

I personally think this would be useful, and it will (probably) only happen if
you do it now, before you move on to the next thing.  +1 for documentation!

Thanks for this work!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: searchfox now indexes rust

2018-01-12 Thread Nicholas Alexander
On Mon, Jan 8, 2018 at 1:51 PM, Mike Hommey  wrote:

> On Mon, Jan 08, 2018 at 11:15:13AM -0500, Kartikaya Gupta wrote:
> > Just a heads-up, thanks to a bunch of work by Emilio, searchfox.org
> > now indexes rust code as well, so you can do things like jump to
> > function definitions and call sites and whatnot. Please use it and
> > file bugs under Webtools :: Searchfox for defects or feature requests.
> > I'm not sure how quickly we'll be able to fix them but it'll be good
> > to have stuff on file.
>
> So, now that searchfox is definitely more useful than dxr, can we do
> something about not having two such services?
>

Does SF index non-trunk trees?  I can't find a way to search
mozilla-{release,beta} right now.  Please correct me if it does?

If SF does not index non-trunk trees, I think we should invest into making
SF do so and then deprecate DXR.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please stop using keypress event to handle non-printable keys

2018-01-18 Thread Nicholas Alexander
On Wed, Jan 17, 2018 at 6:34 PM, Masayuki Nakano 
wrote:

> Hello, everyone.
>
> Please stop using the keypress event for handling non-printable keys when
> you write new code and new automated tests. Firefox will stop
> dispatching keypress events for non-printable keys in order to conform to UI
> Events and to deal with web compatibility. (A non-printable key means a key
> combination which won't input a character, e.g., arrow keys, the
> Backspace key, and Ctrl (and/or Alt) + "A", etc.)
>
> You can perhaps just use the keydown event instead. KeyboardEvent.key and
> KeyboardEvent.keyCode of non-printable key events are always the same.
> The difference between the keydown event and the keypress event is
> KeyboardEvent.charCode of printable keys (and Ctrl/Alt + printable keys).
> So, when you only need key or keyCode, please use the keydown event.
> Otherwise, when you need charCode, please keep using the keypress event.
>
> Background:
>
> We need to fix bug 968056 (*1) for web-compat issues.
>
> Currently, Firefox dispatches keypress events for any keys except modifier
> keys. This is traditional behavior from Netscape Navigator. However, this
> is now invalid behavior from a standards point of view (*2).
>
> I'm going to start work on the bug next week. However, this
> requires rewriting too many keypress event handlers in our internal code
> and our automated tests.  So, please stop using the keypress event when you
> want to handle non-printable keys at least in new code.
>

Could someone who is knowledgeable about ESLint comment on whether it is
possible to leverage ESLint to draw a line in the sand, at least on the
JavaScript side of the house?
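
For instance, even the stock no-restricted-syntax rule can flag new keypress
listeners -- a rough, untested sketch of what such a config might look like:

  // .eslintrc.js -- a sketch; a dedicated rule in eslint-plugin-mozilla
  // could give friendlier messages and grandfather in existing call sites.
  module.exports = {
    rules: {
      "no-restricted-syntax": [
        "error",
        {
          selector:
            "CallExpression[callee.property.name='addEventListener']" +
            "[arguments.0.value='keypress']",
          message:
            "Use keydown for non-printable keys; keypress is going away " +
            "for them (bug 968056).",
        },
      ],
    },
  };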

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-24 Thread Nicholas Alexander



> Would it be possible that when I do an hg pull of mozilla-central or
> mozilla-inbound, I can also choose to download the object files from the
> most recent ancestor that had an automation build? (It could be a separate
> command, or ./mach pull.) They would go into a local ccache (or probably
> sccache?) directory. The files would need to be atomically updated with
> respect to my own builds, so I could race my build against the download.
> And preferably the download would go roughly in the reverse order as my own
> build, so they would meet in the middle at some point, after which only the
> modified files would need to be compiled. It might require splitting debug
> info out of the object files for this to be practical, where the debug info
> could be downloaded asynchronously in the background after the main build
> is complete.
>

Just FYI, in Austin (December 2017, for the archives) the build peers
discussed something like this.  The idea would be to figure out how to
slurp (some part of) an object directory produced in automation, in order
to get cache hits locally.  We really don't have a sense for how much of an
improvement this might be in practice, and it's a non-trivial effort to
investigate enough to find out.  (I wanted to work on it but it doesn't fit
my current hats.)

My personal concern is that our current build system doesn't have a single
place that can encode policy about our build.  That is, there's nothing to
control the caching layers and to schedule jobs intelligently (i.e., push
Rust and SpiderMonkey forward, and work harder to get them from a remote
cache).  That could be a distributed job server, but it doesn't have to be:
it just needs to be able to control our build process.  None of the current
build infrastructure (sccache, the recursive make build backend, the
in-progress Tup build backend) is a good home for those kind of policy
choices.  So I'm concerned that we'd find that an object directory caching
strategy is a good idea... and then have a chasm when it comes to
implementing it and fine-tuning it.  (The chasm from artifact builds to a
compile environment build is a huge pain point, and we don't want to
replicate that.)

Or, a different idea: have Rust "artifact builds", where I can download
> prebuilt Rust bits when I'm only recompiling C++ code. (Tricky, I know,
> when we have code generation that communicates between Rust and C++.) This
> isn't fundamentally different from the previous idea, or distributed
> compilation in general, if you start to take the exact interdependencies
> into account.


In theory, caching Rust crate artifacts is easier than caching C++ object
files.  (At least, so I'm told.)  In practice, nobody has tried to push
through the issues we might see in the wild.  I'd love to see investigation
into this area, since it seems likely to be fruitful on a short time
scale.  In a different direction, I am aware of some work (cited in this
thread?) towards an icecream-like job server for distributed Rust
compilation.  Doesn't hit the artifact build style caching, but related.

Best,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to require Node to build Firefox 61 and later

2018-02-28 Thread Nicholas Alexander
Hello dev-platform,

For the reasons outlined at
https://docs.google.com/document/d/1tOA2aeyjT93OoMv5tUMhAPOkf4rF_IJIHCAoJlwmDHI/edit?usp=sharing,
we would like to make Node a requirement to build Firefox sometime in the
Firefox 61 development cycle. (Firefox 60 will be an ESR release, so this
provides a complete ESR cycle without requiring Node.)

The requirement will likely be Node v8.9.4, the current LTS release.

I would like feedback -- positive and negative -- from downstream
packagers, users of various operating systems and distributions, and
interested developers about this proposal.  There has already been some
discussion on dev-builds:
https://groups.google.com/d/msg/mozilla.dev.builds/L2Tp2uS1PGE/yiy30e1EAgAJ.

Please comment on the Google Doc linked above (everybody with the link
should be able to comment), or reply with comments on
dev-bui...@lists.mozilla.org.

Thanks!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: FYI: Short Nightly Shield Study involving DNS over HTTPs (DoH)

2018-03-19 Thread Nicholas Alexander
Hi all,

On Mon, Mar 19, 2018 at 3:56 PM, Xidorn Quan  wrote:

> It's fine to embed this experiment in the product, and blog about it, but
> it's definitely not fine to have it enabled by default and send every DNS
> request to a third-party.
>
> I can understand that the intent must be good, and for better privacy, but
> the approach of doing so is not acceptable. Users would think Firefox is
> going to just send data to arbitrary third-party without agreement from
> them.
>
> As you can see from the replies, almost all people outside the network
> team have expressed concerns about this, which should already be considered a
> signal of how other technical users may feel about this experiment, and of
> how technical news outlets would headline it.
>

Let me add my voice as a person outside the network team who can understand
the concerns and _still thinks we should be doing this_.

In particular, I'd like to argue against Henri Sivonen's rhetorical
question, "Why risk upsetting users in this case instead of obtaining
consent first?"

In today's age of impenetrable licensing agreements, the defaults matter.
It's not reasonable for users of the Web to assume the totality of the
risks of using the Web, and I think it's critical that Mozilla assume some
risk for its users.  That's why we should be bold, try things, and figure
out if we can move the default to be better for the mass market.  (This was
one of the points that Mikko Hyppönen emphasized for the security industry
in his recent talk to Mozilla.)

With regard to this experiment: we have a default right now that has
evolved over the last two decades to privilege forces close to the user
(ISP, DNS provider).  This experiment privileges forces farther away from
the user (the DoH provider).  The hope, as I see it, is that there will be
more robust competition in the market when the DoH provider can be
unbundled from the last mile connectivity provider.  (We've seen that last
mile connectivity providers don't have a lot of competition in many parts
of the world.)  I am interpreting this as something parallel to VPN
providers, where there's a robust market with diversified offerings.  Right
now, users have two functional choices: ISP-provided DNS or Google's DNS,
and both have serious downsides.  I think it's 100% Mozilla's role to
negotiate privacy-respecting agreements and service contracts -- things
that no individual user can arrange at this time.

I'm willing to upset some users in order to shift the defaults at scale.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Browser Architecture Newsletter 6 (S02E01)

2018-03-27 Thread Nicholas Alexander
the Firefox-related modules under control of a
new “Technical Leadership Module Committee” (TLMC).”  Our own Dave Townsend
is one of the six proposed committee members. The TLMC will be working on
improving engineering practices that apply across all modules that feed
into Firefox.

Is it time for Node?

We’re tackling how to better support the many parts of Firefox that use
Node.js.  We’re starting to see Node gain more acceptance: thanks to Mark
Banner and others we’re installing Node as part of mach bootstrap
<https://bugzilla.mozilla.org/show_bug.cgi?id=1424921>, we’re running
v8.9.4 in the lint automation tasks
<https://bugzilla.mozilla.org/show_bug.cgi?id=1443547>, and we’ll be upgrading
to v8.9.4 for Windows developers
<https://bugzilla.mozilla.org/show_bug.cgi?id=1443545> in Q2.  To further
the effort, Nick Alexander and Gregory Szorc have proposed to require Node
in the build system on dev-platform
<https://groups.google.com/d/msg/mozilla.dev.platform/7sPFmewLoUg/bO7oon4sAAAJ>.
That post talks about Firefox 61, but we’re still working through technical
details and don’t expect to require Node before Firefox 62 at the earliest.

Come work with us!

Joe Walker, who manages Browser Architecture, is spearheading
temporary rotations
into the Browser Architecture group
<https://docs.google.com/document/d/1SDFupachWhy1r4Ww6-FiZ1P25QRzRZg5KOKl5Kj5zzg>.
Engineers from across the organization will join the group for a few months
to work on a cross-cutting concern in their area with the support of the
team and our architecture review process
<https://mozilla.github.io/firefox-browser-architecture/text/0006-architecture-review-process.html>.
If you’re interested in rotating in, contact Joe directly.  (We always need
guest stars!)

You can always reach us on Slack or IRC (#browser-arch). This newsletter is
also available as a Google Doc
<https://docs.google.com/document/d/1V-O-5OGx42XSj-cG41X-DSFaBs1emkTJO3HNAAAMTfw/edit?usp=sharing>
.

Nick (cinematic editor for the Browser Architecture production studio)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of "remote XUL"

2018-03-27 Thread Nicholas Alexander
Hi Boris!

On Tue, Mar 27, 2018 at 8:36 AM, Boris Zbarsky  wrote:

> Background: We currently have various provisions for "remote XUL", wherein
> a hostname is whitelisted to:
>
> 1)  Allow parsing XUL coming from that hostname (not normally allowed for
> the web).
>
> 2)  Allow access to XPConnect-wrapped objects, assuming someone hands out
> such an object.
>
> 3)  Run XBL JS in the same global as the webpage.
>
> 4)  Allow access to a "cut down" Components object, which has
> Components.interfaces but not Components.classes, for example.
>
> This machinery is also used for the "dom.allow_XUL_XBL_for_file"
> preference.
>
> The question is what we want to do with this going forward.  From my point
> of view, I would like to eliminate item 4 above, to reduce complexity.  I
> would also like to eliminate item 2 above, because that would get us closer
> to having the invariant that XPConnect is only used from system
> principals.  These two changes are likely to break some remote XUL
> consumers.
>
> The question is whether we should just go ahead and disable remote XUL
> altogether, modulo its usage in our test suites and maybe
> "dom.allow_XUL_XBL_for_file" (for local testing).  It seems like that might
> be clearer than a dribble of breakage as we remove bits like items 2/4
> above, slowly remove various bindings, slowly remove support for some XUL
> tags, etc...
>
> Thoughts?  My gut feeling is that we should just turn off remote XUL
> outside the IsInAutomation() and maybe the "dom.allow_XUL_XBL_for_file"
> case.
>

I am not an expert in this area, but this sounds like a vestigial feature of
the Mozilla XUL application layer that should be removed immediately.  Can
you elaborate on:

- some of the details of "likely to break remote XUL consumers"?  Which
consumers are these -- internal?  External?
- do we have an estimate of how much remote XUL is used in our own test
suite?  Is this days/weeks/months of labour to replace?
- do we have any idea of the popularity of `dom.allow_XUL_XBL_for_file`?
Do we expect this usage is all internal?  (I really hope so!)

Sorry to ask for work (before you do the real work),
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent To Require Manifests For Vendored Code In mozilla-central

2018-04-10 Thread Nicholas Alexander
Hi all,

On Mon, Apr 9, 2018 at 9:25 PM, glob  wrote:

> mozilla-central contains code vendored from external sources. Currently
> there is no standard way to document and update this code. In order to
> facilitate automation around auditing, vendoring, and linting we intend to
> require all vendored code to be annotated with an in-tree YAML file, and
> for the vendoring process to be standardised and automated.
>
> The plan is to create a YAML file for each library containing metadata
> such as the homepage url, vendored version, bugzilla component, etc. See
> https://goo.gl/QZyz4x for the full specification.
>

Generally, I think this is a great plan, and I'm pleased to see it moving
forward.

A few notes:

1) for the Node.js work I'm actively pursuing, I expect a |mach vendor
rust|-like solution rather than a mozvendor.yaml-like solution (at least at
first).  This is because we need to integrate with Node-y workflows, where
things are captured in package.json files -- a situation parallel to Rust,
where things are captured in Cargo.toml files.

2) Firefox for Android vendored many dependencies into
mobile/android/third_party, modified them, and has more-or-less not updated
them since.  That's seen as a problem, and some folks are pushing to
upgrade those dependencies (see
https://bugzilla.mozilla.org/show_bug.cgi?id=1438716).  However, those
upgrades will be captured as regular Android Maven dependencies and not as
mozvendor.yaml dependencies.

I don't think either of those things are controversial, just adding to the
list of "things a little outside the system" on day one.

Best,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to require Node 8.9.1/npm 5.5.1 for ESLint

2018-04-23 Thread Nicholas Alexander
On Mon, Apr 23, 2018 at 4:20 AM, Mark Banner  wrote:

> I would like to increase the minimum requirements for node with ESLint to
> node v8.9.1, npm v5.5.1 for the following reasons:
>
>- ESLint 5.x is now in alpha, and raises its minimum node requirement
>level to 6.14.0 (ours is currently 6.9.1)
>- MozillaBuild & our automation already use node 8.9.1
>- node 8.9.1 ships with npm 5.5.1
>- A lot has changed in npm between 3.10.x and 5.5.x, upgrading the
>minimum will provide better consistency for developers, especially with
>respect to npm-shrinkwrap.json/package-lock.json
>- This brings us closer to what was suggested in the "Intent to
>require Node to build..." thread.
>
> I'm thinking about bumping this the week of 7th May - after the merges
> have completed.
>
> I would like to hear feedback - positive or negative - from anyone likely
> to be affected by this proposal.
>
Full steam ahead from me!

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New Policy: Marking Bugzilla bugs for features riding behind a pref

2018-05-03 Thread Nicholas Alexander
On Wed, May 2, 2018 at 4:57 PM, Emma Humphries  wrote:

> Hello,
>
> We control enabling many features and changes to Firefox using preferences.
>
> Program and Release management as well as PI need a better view of this.
>
> We've written a new policy which you can read on our nascent bug-handling
> mini-site:
>
> https://github.com/mozilla/bug-handling/blob/master/
> policy/feature-flags.md
>
> To summarize, when you are releasing a feature that "rides behind a flag",
> on the bug for the feature:
>
> * set the behind-pref flag to +
> * set the qa-verify flag to ?
> * note the bug in the Firefox Feature Trello board
>
> We expect qa-verify to be set to + before enabling prefs on features.
>
> We'll be going over this at two upcoming meetings, with Reese's and JoeH's
> teams.
>
> There are two, known open questions to resolve on the policy:
>
> * Features developed over multiple releases with individual patches not
> verifiable by external testing (passing unit tests, but not integration
> tests) such as Hello and WebRTC
> * Features are often turned on in Nightly, ignoring prefs using compiler
> directives, and kept off in Beta and Release. Is this the right thing to
> do, or should we be flipping prefs from on to off when going from Nightly
> to Beta and forwards?
>

Not all features are feasible to ship behind feature flags.  Fennec
features that interact with the OS directly, in particular, can sometimes
just be all or nothing; and I would anticipate that things that interact
directly with newer App stores (iOS features, say, or Windows Store
features in the future) will become more common.  We can't have a policy
that fights against that trend.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Re-run old (non-syntax) try pushes with |mach try again|

2018-07-17 Thread Nicholas Alexander
Ahal,

On Tue, Jul 17, 2018 at 11:55 AM, Andrew Halberstadt 
wrote:

> While |mach try fuzzy| is generally a better experience than try
> syntax, there are a few cases where it can be annoying. One
> common case was when you selected a bunch of tasks in the
> interface and pushed. Then at a later date you wanted to push
> the exact same set of tasks again. This used to be a really poor
> experience as you needed to re-select all the same tasks
> manually.
>
> As of now, you can use |mach try again| instead. The general
> workflow is:
>

This is awesome, thank you for building it!

Can it be extended to "named pushes"?  That is, right now I use my shell
history to do `mach try fuzzy -q "'build-android | 'robocop", but nobody
else will find that without me telling them, and it won't be automatically
updated when robocop gets renamed.  That is, if I could `mach try fuzzy
--named android-tier1` or something I could save myself some manual command
editing and teach other people what a green try run means in my area.

Thanks again!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: jar: URIs from content

2015-10-15 Thread Nicholas Alexander
On Thu, Oct 15, 2015 at 10:58 AM, Ehsan Akhgari 
wrote:

> We currently support URLs such as <jar:http://mxr.mozilla.org/mozilla-central/source/modules/libjar/test/mochitest/bug403331.zip?raw=1&ctype=application/java-archive!/test.html>.
> This is a Firefox specific feature that no other engine implements, and it
> increases our attack surface unnecessarily.  As such, I would like to put
> it behind a pref and disable it for Web content by default.
>

I've always been surprised by this (and resource:, although I think there's
a story behind that one).  Glad to see it go.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Decommissioning "dumbmake"

2015-10-18 Thread Nicholas Alexander
On Thu, Oct 15, 2015 at 5:15 PM, Mike Hommey  wrote:

> Hi,
>
> I started a thread with the same subject almost two years ago. The
> motivation hasn't changed, but the context surely has, so it's probably
> time to reconsider.
>
> As a reminder, "dumbmake" is the feature that makes "mach build foo/bar"
> sometimes rebuild in some other directories as well. For example, "mach
> build gfx" will build gfx, as well as toolkit/library.
>
> OTOH, it is pretty limited, and, for instance, "mach build gfx/2d" will
> only build gfx/2d.
>
> There are however now two build targets that can do the right thing for
> most use cases:
> - mach build binaries, which will build C/C++ related things
>   appropriately
> - mach build faster, which will build JS, XUL, CSS, etc. (iow, non
>   C/C++) (although it skips what doesn't end up in dist/bin)
>
> At this point, I think "dumbmake" is more harmful than helpful, and the
> above two targets should be used instead. Removing "dumbmake" would mean
> that "mach build foo/bar" would still work, but would stop to "magically"
> do something else than what was requested (or fail to do that thing for
> all the cases it doesn't know about).
>
> Are there still objections to go forward, within the new context?
>

I agree that dumbmake is more harmful than helpful.  I did some of the work
to port it forward to the mach world; IIRC, my motivation was to ease the
transition to "mach and smarter build targets".  We're now seeing smarter
build targets that aren't based on (recursive) Make, and dumbmake hasn't
helped anybody transition to anything: there have been a whopping 10 commits
to the underlying dependency tree [1], some of which are backouts and
relandings.

If we need smarter build targets, let's make that happen at the mach level,
not at the Make level.

Nick

[1]
https://hg.mozilla.org/mozilla-central/filelog/e8c7dfe727cd970e2c3294934e2927b14143c205/build/dumbmake-dependencies
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Decommissioning "dumbmake"

2015-10-18 Thread Nicholas Alexander
On Sun, Oct 18, 2015 at 3:12 PM, Eric Rescorla  wrote:

>
>
> On Sun, Oct 18, 2015 at 2:52 PM, Nicholas Alexander <
> nalexan...@mozilla.com> wrote:
>
>>
>>
>> On Thu, Oct 15, 2015 at 5:15 PM, Mike Hommey  wrote:
>>
>>> Hi,
>>>
>>> I started a thread with the same subject almost two years ago. The
>>> motivation hasn't changed, but the context surely has, so it's probably
>>> time to reconsider.
>>>
>>> As a reminder, "dumbmake" is the feature that makes "mach build foo/bar"
>>> sometimes rebuild in some other directories as well. For example, "mach
>>> build gfx" will build gfx, as well as toolkit/library.
>>>
>>> OTOH, it is pretty limited, and, for instance, "mach build gfx/2d" will
>>> only build gfx/2d.
>>>
>>> There are however now two build targets that can do the right thing for
>>> most use cases:
>>> - mach build binaries, which will build C/C++ related things
>>>   appropriately
>>> - mach build faster, which will build JS, XUL, CSS, etc. (iow, non
>>>   C/C++) (although it skips what doesn't end up in dist/bin)
>>>
>>> At this point, I think "dumbmake" is more harmful than helpful, and the
>>> above two targets should be used instead. Removing "dumbmake" would mean
>>> that "mach build foo/bar" would still work, but would stop to "magically"
>>> do something else than what was requested (or fail to do that thing for
>>> all the cases it doesn't know about).
>>>
>>> Are there still objections to go forward, within the new context?
>>>
>>
>> I agree that dumbmake is more harmful than helpful.  I did some of the
>> work to port it forward to the mach world; IIRC, my motivation was to ease
>> the transition to "mach and smarter build targets".  We're now seeing
>> smarter build targets that aren't based on (recursive) Make, and dumbmake
>> hasn't helped anybody transition to anything: there have been a wopping 10
>> commits to the underlying dependency tree [1], some of which are backouts
>> and relandings.
>>
>> If we need smarter build targets, let's make that happen at the mach
>> level, not at the Make level
>>
>
> Slightly off-topic, but I'd like to argue against smarter build targets,
> if by that you mean more stuff like "mach build binaries"
>
> What's needed here is a dependency management system that
> simply builds what's needed regardless of what's changed, not more
> ways for the user to tell the build system "only rebuild some stuff".
>

In general, I agree, but there are legitimate differences in user intention
that do need to be captured somewhere.  I claim that should be at the mach
command level, but I don't really care if those are top-level (mach
rebuild-xul) or subcommand level (mach build xul), and whether they are
internally "smarter build targets".

In Android land, we're using Gradle to task switch like this -- build, yes;
but also lint; and also run tests; etc.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Merging comm-central into mozilla-central

2015-10-23 Thread Nicholas Alexander
+1 from me.  I'd just like to speak to one small part below.

On Fri, Oct 23, 2015 at 12:57 AM, Mike Hommey  wrote:

> Hi,
>
> This has been discussed in the past, to no avail. I would like to reopen
> the discussion.
>



- It sets a bad precedent, other Gecko-based projects might want to
>   merge.
>   - mobile/ set the precedent half a decade ago.
>   - as mentioned above, historically, everything was in the same
> repository, and the split can be argued to be the oddity here
>   - there are barely any Gecko-based projects left that are not in
> comm-central.
>

I've done a lot of mobile-specific build things.  mobile/ was merged into
m-c before I began working on it, but I can't imagine the overhead of doing
half the work in m-c and half of it in a separate repository.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ESLint is now available in the entire tree

2015-11-30 Thread Nicholas Alexander
On Mon, Nov 30, 2015 at 6:34 AM, Tim Guan-tin Chien 
wrote:

> Thanks, bug 1150859 only covers ./browser/.
> Is this a done deal which we want to get rid of #ifdef in all JS code
> everywhere?
>
> (My particular interest would be obviously ./dom/ and ./b2g/)
>

Make this happen!  Fennec (mobile/android) already did this:
https://bugzilla.mozilla.org/show_bug.cgi?id=1093358.

Nick


>
> On Mon, Nov 30, 2015 at 5:44 PM, Gijs Kruitbosch  >
> wrote:
>
> > Yes. See bug 1150859 and friends.
> >
> > ~ Gijs
> >
> > On 30/11/2015 09:05, Tim Guan-tin Chien wrote:
> >
> >> The Gecko JavaScript is also littered with #ifdef and # is really not a
> >> token for comment in JS... is there any plan to migrate that away since
> >> there is ESLint present?
> >>
> >> On Sun, Nov 29, 2015 at 10:37 PM, Vivien Nicolas 
> >> wrote:
> >>
> >> On Sun, Nov 29, 2015 at 2:30 PM, David Bruant 
> wrote:
> >>>
> >>> Hi,
> 
>  Just a drive-by comment to inform folks that there is an effort to
>  transition Mozilla JavaScript codebase to standard JavaScript.
>  Main bug is: https://bugzilla.mozilla.org/show_bug.cgi?id=867617
> 
>  And https://bugzilla.mozilla.org/show_bug.cgi?id=1103158 is about
>  removing non-standard features from SpiderMonkey.
>  Of course this can rarely be done right away and most often requires
>  dependent bugs to move code to standard ECMAScript (with a period with
>  warnings about the usage of the non-standard feature).
> 
> 
> >>> What about .jsm modules ? Or is that not really considered ?
> >>>
> >>> I have been told that ES6 modules may help to solve some of the
> problems
> >>> covered by .jsm but I don't see how you can create a ES6 module that is
> >>> can
> >>> be accessed from multiple js context from the same origin. Mostly
> >>> interested as it would be nice to be able to write a module once and
> >>> share
> >>> it between multiple tabs instead of having to reload the same JS script
> >>> for
> >>> all similar tabs, like all the bugzilla tabs many of us have open for
> >>> example.
> >>> ___
> >>> dev-platform mailing list
> >>> dev-platform@lists.mozilla.org
> >>> https://lists.mozilla.org/listinfo/dev-platform
> >>>
> >>>
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using the Taskcluster index to find builds

2015-11-30 Thread Nicholas Alexander
On Mon, Nov 30, 2015 at 12:43 PM, Chris AtLee  wrote:

> The RelEng, Cloud Services and Taskcluster teams have been doing a lot of
> work behind the scenes over the past few months to migrate the backend
> storage for builds from the old "FTP" host to S3. While we've tried to make
> this as seamless as possible, the new system is not a 100% drop-in
> replacement for the old system, resulting in some confusion about where to
> find certain types of builds.
>
> At the same time, we've been working on publishing builds to the
> Taskcluster Index [1]. This service provides a way to find a build given
> various different attributes, such as its revision or date it was built.
> Our plan is to make the index be the primary mechanism for discovering
> build artifacts. As part of the ongoing buildbot to Taskcluster migration
> project, builds happening on Taskcluster will no longer upload to
> https://archive.mozilla.org (aka https://ftp.mozilla.org). Once we shut
> off
> platforms in buildbot, the index will be the only mechanism for discovering
> new builds.
>
> I posted to planet Mozilla last week [2] with some more examples and
> details. Please explore the index, and ask questions about how to find what
> you're looking for!
>

Also FWIW -- the Taskcluster Index has been backing |mach artifact| since
day one.  It's nice!
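
If you want to poke at the index by hand, a lookup is just an HTTP GET
against a namespace.  Something like the following should work (the
namespace below is from memory, so treat it as illustrative and check
Chris's post for the authoritative naming scheme):

  curl https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.latest.firefox.linux64-opt

That returns JSON including the taskId of the most recent matching
build; from there |mach artifact| (or the queue API) can fetch the
actual artifacts.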

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Submit to MozReview using Git

2016-02-05 Thread Nicholas Alexander
On Fri, Feb 5, 2016 at 12:24 PM, Gregory Szorc  wrote:

> It is now possible to submit reviews to MozReview using Git. Instructions
> are at
>
> https://mozilla-version-control-tools.readthedocs.org/en/latest/mozreview/install-git.html
>
> Bugs should be filed against Developer Services :: MozReview. #mozreview is
> your support IRC channel.
>

Wow!  Great work gps and everybody else  involved!

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Gecko/Firefox stats and diagrams wanted

2016-02-09 Thread Nicholas Alexander
+Kyle, +Nathan

On Tue, Feb 9, 2016 at 9:00 AM, Chris Mills  wrote:

> Hi all,
>
> I’m writing a presentation about browsers, standards implementation, and
> cross-browser coding to give at some universities. As a part of it, I
> wanted to present some stats about Firefox/Gecko to show how many people on
> average commit to it (say, every month, every year?), how many people work
> on localising the content strings, how many people work on platform/UI
> features, etc.
>

Kyle Lahnakoski has done some work in this area -- he set up a neat
contributor dashboard.  Perhaps Kyle has more data about paid activity
too.  I'm CCing him to see if he can say more.  I imagine Mike Hoye has
much to say here.


> I also wanted to try to find some diagrams to show how Firefox and Gecko
> work/their architecture, from a high level perspective (not too insane a
> level of detail, but reasonable).
>

Nathan Froyd worked up a very high-level slide deck for his onboarding
sessions; they're amazing.  I'm not sure how public those slides are, so
I've CCed him and he may choose to link to those.  I would really love to
see these worked up into a document rather than a presentation.

Thanks for doing this work!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Testing Wanted: APZ Scrollbar dragging

2016-02-17 Thread Nicholas Alexander
Benoit, (possibly kats),

On Wed, Feb 17, 2016 at 10:35 AM, Benoit Girard  wrote:

> Currently APZ does not cause scrollbar initiated scrolling to be async.
> I've been working in fixing this and I'd like some help testing it out
> before enabling it on Nightly. If you're interested please flip
> 'apz.drag.enabled' to true and restart. If you find any issue please make
> it block https://bugzilla.mozilla.org/show_bug.cgi?id=1211610.
>

Does this apply to Fennec?  Do you also want testing on Fennec?  A
cross-post to mobile-firefox-dev might be in order.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to implement: DOM Push API in Firefox for Android

2016-03-02 Thread Nicholas Alexander
Summary: "The Push API gives web applications the ability to receive
messages pushed to them from a server, whether or not the web app is in the
foreground, or even currently loaded, on a user agent. This lets developers
deliver asynchronous notifications and updates to users that opt in,
resulting in better engagement with timely new content."

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1206207

Link to standard: https://w3c.github.io/push-api/

Platform coverage: Firefox for Android.  This is already shipping in
Firefox Desktop:
the Firefox Desktop intent to implement message is at [1] and the intent to
ship message is at [2].

Estimated or target release: Firefox for Android 47 is desired.  48 seems
most likely.

Preference behind which this will be implemented: this is both behind a
build flag (due to Android-specific permission changes, etc) and behind
dom.push.enabled.

DevTools bug: this should be covered by the Desktop devtools ticket:
https://bugzilla.mozilla.org/show_bug.cgi?id=1214248.

Best,
Nick

[1] https://lists.mozilla.org/pipermail/dev-platform/2014-July/005828.html
[2]
https://groups.google.com/d/msg/mozilla.dev.platform/vU4NsuKhTOY/wc2PviRUBAAJ
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOM Push API in Firefox for Android

2016-03-09 Thread Nicholas Alexander
Hi folks,

On Wed, Mar 2, 2016 at 10:06 AM, Nicholas Alexander 
wrote:

> Summary: "The Push API gives web applications the ability to receive
> messages pushed to them from a server, whether or not the web app is in the
> foreground, or even currently loaded, on a user agent. This lets developers
> deliver asynchronous notifications and updates to users that opt in,
> resulting in better engagement with timely new content."
>
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1206207
>
> Link to standard: https://w3c.github.io/push-api/
>
> Platform coverage: Firefox for Android.  This is already shipping in
> Firefox Desktop:
> the Firefox Desktop intent to implement message is at [1] and the intent
> to ship message is at [2].
>
> Estimated or target release: Firefox for Android 47 is desired.  48 seems
> most likely.
>
> Preference behind which this will be implemented: this is both behind a
> build flag (due to Android-specific permission changes, etc) and behind
> dom.push.enabled.
>
> DevTools bug: this should be covered by the Desktop devtools ticket:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1214248.
>

This should be in tonight's Firefox for Android Nightly [1].  If you have
Google Play Services on your device and it is in good health (up to date,
etc.), you should be able to subscribe to push notifications from DOM
Push-consuming sites (like serviceworke.rs).

I'm tracking a number of improvements and follow-ups [2] [3] that will keep
this in Nightly for the 48 cycle, but progress is progress.

Yours,
Nick

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1252666
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1252650
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1229835
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_LOG_FILE and MOZ_LOG_MODULES are now the environment variables for logging

2016-03-21 Thread Nicholas Alexander
On Mon, Mar 21, 2016 at 9:13 AM, Honza Bambas  wrote:

> tl;dr: Start migrating to use of MOZ_LOG_* instead of NSPR_LOG_*.  Don't
> worry about backward compatibility tho, when MOZ_LOG_* is not set, we
> fallback to NSPR_LOG_* values.
>

Hi Honza,

Thanks for making the world a better place!  I'm sorry to add to your
burden, but could you drop links to updated MDN documentation?  I know that
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/NSPR_LOG_MODULES
probably wants to reference your new work, and there are probably
additional places.  Is this in the coding styles document?  (Yet?)
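
For anyone who just wants to flip this on locally, my understanding is
that a typical invocation looks something like this (the module names
and level are only examples -- see Honza's message and the docs for the
real list):

  MOZ_LOG_MODULES=nsHttp:5,cache2:5 MOZ_LOG_FILE=/tmp/gecko.log ./mach run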

Thanks again!
Nick


>
>
> The longer version:
>
> Since the new logging code sits in XPCOM and has no longer anything to do
> with NSPR and NSPR logging, we want to modernize it and release it from the
> NSPR chains even more.
>
> Two days ago a patch for bug 1248565 [1] has landed.  That patch
> introduces two new environment variables - MOZ_LOG_MODULES and
> MOZ_LOG_FILE.  The values are expected to be the same as for the NSPR_LOG_*
> variables pair.
>
> If you don't specify MOZ_LOG_* vars but NSPR_LOG_* vars are specified, we
> will fallback to their values (i.e. those are then used for the new XPCOM
> logging as it was before the patch).  This preserves compatibility and
> there is no rush to redo all existing systems to use MOZ_LOG_*.
>
> The NSPR_LOG_* pair can be specified along with the MOZ_LOG_* pair.
> Actually, NSPR_LOG_* is still needed for code that haven't migrated to the
> new logging API, see [1] and [3], and also for modules defined in NSPR and
> NSS code.
>
>
> The rational for this move:
>
> - the new logging has nothing to do with NSPR, it's now in XPCOM
>
> - bug 1248565 [1] -> NSPR logging and XPCOM logging both write to the same
> file
>
>
> -hb-
>
>
> [1]
> https://mxr.mozilla.org/mozilla-central/search?string=PR_NewLogModule(%22
>
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1248565
>
> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1219461
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: #include sorting: case-sensitive or -insensitive?

2016-03-28 Thread Nicholas Alexander
On Mon, Mar 28, 2016 at 1:28 PM, David Keeler  wrote:

> (Everyone, start your bikesheds.)
>
> The style guidelines at
>
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
> indicate that #includes are to be sorted. It does not say whether or not
> to consider case when doing so (and if so, which case goes first?). That
> is, should it be:
>
> #include "Foo.h"
> #include "bar.h"
>
> or
>
> #include "bar.h"
> #include "Foo.h"
>
> Based on the "Java practices" section of that document, I'm assuming
> it's the former, but that's just an assumption and in either case it
> would be nice to document the accepted style for C/C++.
>

Not a comprehensive argument, but could we do what moz.build does for file
names?  I believe that is case insensitive, and foo.h sorts before foox.h.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Triage Plan for Firefox Components

2016-04-05 Thread Nicholas Alexander
Emma,

On Tue, Apr 5, 2016 at 5:27 PM, Emma Humphries  wrote:

> It's been a week since I asked for your comments on the plan for triage,
> thank you.
>
> I'm going reply to some general comments on the plan, and outline next
> steps.
>

I just want to say that I have been following this thread, and that I
haven't been a fan of the thrust of the initiative, but I really appreciate
this summation and your effort to be transparent.


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: libc++ enabled for Android / C++11 standard library update

2016-05-03 Thread Nicholas Alexander
On Tue, May 3, 2016 at 7:57 AM, Nathan Froyd  wrote:

> As the subject suggests.  It is also strongly suggested that you now
> use NDK r11b or above for your local Android development; this is what
> automation uses and what |mach bootstrap| installs.
>
> This change leaves Mac as our only tier-1 platform without a C++11
> standard library.
>
> Given the recent announcement that Mac 10.6-10.8 support will be
> dropped, the path to moving Mac to a C++11 standard library is much
> clearer.  Bug 1246743 will be repurposed for moving Mac to use
> -stdlib=libc++, and the changeover should happen in short order.  Once
> that's done, there are a large number of polyfills and non-C++11
> workarounds that need to be removed, and I'm happy to review those
> sorts of patches.
>

Thanks for driving this through, Nathan (and others!).

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ARIA membership and role="password"

2016-06-02 Thread Alexander Surkov
Hi, Jonathan.

As far as I can tell, no one in Mozilla is super active in ARIA
nowadays; I personally (among others, I think) skim through ARIA email
threads and provide occasional feedback on a feature-by-feature basis.

One of the reasons keeping me off, I think, is that I'm worried about
the direction taken in ARIA 1.1, which looks to me like an attempt to
turn ARIA into a universal markup language for describing web page
semantics, so that the ARIA vocabulary can be used to describe the
whole of HTML and more. I feel like every single feature from HTML
eventually gets added to ARIA, and I'm not always convinced that each
requested feature has a real use case on the web. I expressed these
worries, but my voice probably wasn't strong enough to turn the wheel
back.

So I'm not confident about role='password' either. I don't have strong
objections, but I have never heard web authors complaining about the
missing feature. I would say that if we do have strong concerns and
cannot negotiate with the group to keep the feature out of the spec,
that doesn't mean we have to implement the feature in Gecko. After all,
if the feature doesn't get two implementations, it gets removed from
the spec, as far as I know.

Thanks.
Alexander.

On Thu, Jun 2, 2016 at 8:55 AM, Jonathan Kingston  wrote:
> Hey,
>
> So I was just informed that Mozilla isn't a member of the ARIA working group
> which shocked me, we have however had a hand in the spec over the years (I
> have cc'd those mentioned).
>
> I notice over the years some disappointment with the spec as it being a
> separate module to semantics in HTML itself however I don't see a real
> opposition not to be in the group. This seems more of a formality when the
> group split into two working groups.
>
> It appears that ARIA 1.1 is moving to be resolved in the next few weeks so
> if we had any objections we would need to move now I have been told.
>
> role="password" has been added to the spec:
> https://rawgit.com/w3c/aria/master/aria/aria.html#password and I jotted my
> objections in a post:
> https://jotter.jonathankingston.co.uk/blog/2016/05/16/role-password-is-not-wise/
>
> My post was largely dismissed in this mail:
> https://lists.w3.org/Archives/Public/public-aria/2016May/0126.html
>
> Is it worth us joining? Can we discuss the wider use of ARIA itself? Rushing
> to standardise features seems a shame.
>
> Thanks
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Gecko shutdown (London session)

2016-06-30 Thread Nicholas Alexander
Hi Aaron, others,

On Thu, Jun 30, 2016 at 8:41 AM, Aaron Klotz  wrote:

> Did the now-defunct exit(0) project ever come up in this discussion?
>
> See bugs 662444 and 826143. This was a perf team project back in the
> Snappy days where the objective was to write important data to disk ASAP,
> then exit(0) without doing a bunch of cleanup. The argument for this was
> that, yes, during development we want to make sure that we are properly
> cleaning up after ourselves, but there is no reason for end users with opt
> builds to be waiting around while Firefox spends a bunch of time destroying
> things that are going to be wiped anyway by process termination.
>

I'd like to (re-)surface that exit(0) is never going to be feasible on
Android [1].  The Android process hosts many long-lived services with
lifecycles distinct from the lifecycle of the Gecko rendering engine.  In
some way, Fennec needs to be able to "shutdown" Gecko without Gecko killing
its process.  (For the record: separating Gecko into its own process is
technically feasible but I expect it to be a great deal of work that I
definitely think we should not do.  JNI across processes is probably
possible but definitely not pleasant!)

Nick

[1] If necessary, I can dig out Bugzilla links to discussions that have
taken place about this.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing navigator.buildID?

2016-10-29 Thread Nicholas Alexander
On Sat, Oct 29, 2016 at 7:21 AM, Kohei Yoshino 
wrote:

> So the Battery Status API has just been removed, I think now is a good
> time to think about navigator.buildID again, which bug [1] has been
> inactive for a whole year.
>
> 4 years ago, Firefox 16 removed a minor version number from the user agent
> string to mitigate fingerprinting [2][3]. However, the build ID unique to
> each minor version is still exposed via the non-standard navigator.buildID
> property. Since trackers can easily retrieve build IDs from Mozilla Wiki
> [4] to map them to minor version numbers, the fix in Firefox 16 was totally
> meaningless.
>
> There were some legitimate use cases on Mozilla properties, for example,
> warning visitors who are using an outdated Firefox, but those usages have
> been replaced with the UITour API [5]. A comment in the bug [1] explains
> that Netflix was also using the build ID to detect a specific playback bug
> in Firefox, but it's probably no longer relevant. Given that, I believe
> the buildID property should be removed, or at least made chrome-only.
>

I concur; we shouldn't leak such fine-grained information about the UA to
content.  For future discussion, my Nightly uses

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:52.0)
Gecko/20100101 Firefox/52.0

but navigator.buildID is 20161015030203, revealing much more than 52.0.

As for chrome-only -- I wonder how many consumers there are.
about:support, perhaps?

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: Web2Native Bridge

2016-11-29 Thread Nicholas Alexander
Hi Anders,

On Tue, Nov 29, 2016 at 9:10 AM, Anders Rundgren <
anders.rundgren@gmail.com> wrote:

> There are virtually tons of developers out there using Android Intents to
> start "Apps" from the Web.
>

Aye, and Firefox does support this custom URI scheme.


> However, this mechanism sucks big-time since:
> 1. It leaves the invoking Web page in an "orphaned" state
> 2. There's no way to "talk back" to the invoked Web page
> 3. There's no Web page security context available to invoked "App"
> 4. There's no App/Web life-cycle control
>
> The Web2Native Bridge does all that:
> https://github.com/cyberphone/web2native-bridge#api


I took a look at this and don't yet see how this translates to Android.  It
all seems very Chrome OS specific: "Applications callable by the Web2Native
Bridge emulator *must* be written in Java and stored in a for the purpose
dedicated directory."

Is your intention to define a new specification for how an Android App
advertises this capability and how the browser connects to it?  (Android
Intents, an AIDL and handshake protocol, etc...)  This would be
interesting; it's similar in spirit to the Chrome Custom Tabs work, which
goes in the opposite direction: it makes Apps have better entry into the
browser (where-as you want content in the browser to have better entry into
a companion App).

Unfortunately (or maybe you guys think fortunately) I will most likely
> implement this in Chromium since there is more activity in that channel.
>

I never think it's fortunate when folks passionate enough to implement a
thing don't implement under the Mozilla umbrella!  I can provide some
guidance on doing this in Firefox for Android, if you'd like to try it
under our umbrella.  I can't speak to things like ship criteria and release
schedules, though :(

As a bizarre "Mozilla bonus point", I bet you can do this in a bootstrapped
extension using the essentially undocumented "Java addons" feature!  It's
100% non-obvious how to use it, but you can add a classes.dex to a Firefox
addon and load it using a Java class loader.  See
https://dxr.mozilla.org/mozilla-central/source/mobile/android/javaaddons/java/org/mozilla/javaaddons/JavaAddonInterfaceV1.java
and the test at
https://dxr.mozilla.org/mozilla-central/source/mobile/android/tests/browser/chrome/test_java_addons.html,
which appears to be still running in our automation.  Using this, you could
have your extension inject the `navigator.nativeConnect` method into the
content context (at least I think this is possible, I did it once -- see
https://github.com/ncalexan/bootstrapped-webapi-skeleton -- but I think
this approach may no longer work) and then use a Java addon to handle
proxying out to your test application, either using Intents or binding a
Service or whatever. Wild stuff!

But honestly, you might find it easier to hack up Fennec, since Fennec +
extensions + a new Web API + Java addons probably requires you to be a
Fennec hacker in the first place...

Mozilla have already focused on a similar concept which I doubt will ever
> be supported in Android: https://browserext.github.io/native-messaging/
> In Android Apps does a better job than "in-browser" extensions.
>

Is this the same as
https://mail.mozilla.org/pipermail/firefox-dev/2016-July/004461.html?

In general, I think there are really hard security and permission questions
that need to be raised and answered around this work.  For example, it's a
fundamental tenet of "the Web" that sites are isolated.  How does one
ensure that origin "foo.com" can only access "com.foo.application"?  What
does this even mean on Android, where the first App with an Android package
name is the winner (leading to wild security holes, some of which I have
had to fix in Fennec), regardless of who publishes the App?  These are hard
questions that can't be punted in a shipping product.  (Of course, they can
be punted in an experiment or tech demo!)

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: Web2Native Bridge

2016-12-01 Thread Nicholas Alexander
On Wed, Nov 30, 2016 at 2:40 AM, Anders Rundgren <
anders.rundgren@gmail.com> wrote:

> On Tuesday, November 29, 2016 at 10:01:38 PM UTC+1, Nicholas Alexander
> wrote:
> Hi Nick,
> Many thanks for your elaborate comments!
>

Heh, that message kept growing and probably could be a blog post when it's
all done :)


> If we begin with security, Android already allows Web-sites to invoke apps
> which they have no specific relation to using the custom URI scheme.  I
> don't see that the ability of "talking back" to the calling Web page would
> introduce new vulnerabilities. I haven't noted any web-site registry
> involved in this operation either, you do a one-time grant for selecting
> browser (if needed) and that's it.  The only quirk seems to be that the
> invocation must be related to a user action.
>

For Web -> App, there is one piece (at least) that prevents the user being
fooled: the invocation must be in response to a user action.  I think the
invoked App could silently process that Intent without indicating anything
to the user, so it would be up to the OS to ensure the user knows *which*
App is handling the action.  I've never used this flow so I don't know if
that happens, or how easy it would be to spoof.


> Anyway, the *long-term* goal for this API is that Web-enabled Apps should
> be *vetted* for this kind of usage which also involves changes in
> "AppStores" and OSes to work.  The need for vetting may also depend on what
> local resources an App access.  This part will likely be vendor-and
> OS-specific.
>

I agree that this would be best at the OS level -- Apple's App Store
certainly vets the URL space associated to an App -- but you might be able
to do it at the browser level too.  It seems like what we're trying to do
is extend the concept of "secure origin" to "secure App".  That is, we need
a dictionary between {SSL certificate, origin, ...} and {App ID, App
signature, ...}.  If you think of Chrome's Origin Trials, this might be
similar: an origin could register an App ID *and* App signature that the
browser could verify before opening a channel to the App, for instance.  In
fact, this could just be part of the JS API you propose, which is already
extremely Android specific...

In general, my point remains: the security model of the Web is relatively
well understood, and users trust it.  The security model of Apps is very
poorly understood by users and I claim there is no "informed consent".  I
would like the many, many people who have thought about this problem
involved before we punch a bi-directional hole between these two models.
After all, isn't the Canonical Use Case to provide login credentials
between Web and App so you get a seamless experience with a single
sign-on?  That requires some level of trust between the two models.


> A security review of mine regarding the Google/Mozilla/Microsoft take on
> the matter:
> https://lists.w3.org/Archives/Public/public-webappsec/2015Oct/0071.html
>
>
> If you look into the W2NB docs it says it is an "emulator", not the real
> deal.  The application ID is just a proposal which though should fit
> Android pretty well.
>
> Now over to the actual implementation...
>
> I see two possibilities, one which is more of a PoC allowing people
> testing the core concept on a standard Android phone.  It is still not
> clear to me if this is technically feasible :-(


Having thought about this during my email, I guarantee you it is
technically feasible :)  That doesn't mean it's easy, and definitely
doesn't mean that it's a good idea.


>   Hopefully the links you provided give some answers.


More like starting points, but it will point you in the right direction.


>   I'm also looking into Web Payment API which I (believe) does quite
> similar things albeit in a very specific manner.


I doubt you would find Firefox's (or Chrome's) Web Payments API
implementation instructive at this time, but more power to you if I am
incorrect.


>   If this doesn't work I'm will try another route which I cannot write
> about on a public list.
>
> If Mozilla could consider giving me some support, I would be more than
> happy to build on Firefox!
>

I can commit to answering some questions, but little more.  (In fact, I'm
on parental leave right now, so this is truly a sideline.)  I'm not sure
what else you'd need, other than patience and ingenuity.


> The Web2Native Bridge is the core of a more ambitious effort which among
> many things is about transcending on-line payments from the pitiful state
> it has been in the last 20 years or so.  Well, Google and Apple have
> already done that, but I'm targeting the other 99.9% who don't have a

smartmake-like functionality has landed in mach

2013-05-02 Thread Nick Alexander

Hello dev-platform,

If you don't use mach, this message does not concern you.  If you use
mach, in particular |mach build DIRECTORY|, keep reading.

Bug 677452 landed "smartmake-like" functionality into mach, which I have
dubbed dumbmake.  smartmake (and dumbmake) [1] maintains a list of
dependencies between source directories that are not captured in our
recursive make configuration [2].  This means that

|mach build DIRECTORY|

might actually build several directories.  This change does not affect
|mach build|, and you can disable the new functionality on the command
line with |mach build -X|.
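
For example (the directory name here is purely illustrative -- the
actual extra directories come from the dependency list described
below):

  ./mach build netwerk      # may also build follow-on directories, e.g. toolkit/library
  ./mach build -X netwerk   # builds only netwerk, as before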

This is a short term step towards making |mach build| create functional
builds more often, while requiring developers to internalize fewer
dependencies.  Long term, any dependency information added by Bug 677452
should be captured by the ongoing efforts to Build Faster [3].

I would appreciate help improving the list of dependencies.  The list is
maintained at build/dumbmake-dependencies; see
python/mozbuild/dumbmake/README.rst for a description of the format.

Finally, I'd like to thank Josh Matthews, the original author of
smartmake, and Till Schneidereit, who added significant functionality
and provided valuable guidance on how to integrate some of these ideas
into mach.

Yours,
Nick Alexander

[1] http://hg.mozilla.org/users/josh_joshmatthews.net/smartmake

[2] smartmake can also maintain timestamps and deduce what directories
need to be built, but this isn't implemented in mach.

[3] Harder, better, ..., stronger.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: smartmake-like functionality has landed in mach

2013-05-02 Thread Nick Alexander

On 13-05-02 3:09 PM, Josh Matthews wrote:

According to
http://mxr.mozilla.org/mozilla-central/source/build/dumbmake-dependencies#8,
it is equivalent to the following:

./mach build -X chrome xpcom toolkit/library


That's correct.  Or, if you're not a mach user, it translates to

make -f client.mk chrome
make -f client.mk xpcom
make -f client.mk toolkit/library

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: smartmake-like functionality has landed in mach

2013-05-02 Thread Nick Alexander

On 13-05-02 4:23 PM, Dave Townsend wrote:

On 5/2/2013 3:45 PM, Nick Alexander wrote:

On 13-05-02 3:09 PM, Josh Matthews wrote:

According to
http://mxr.mozilla.org/mozilla-central/source/build/dumbmake-dependencies#8,


it is equivalent to the following:

./mach build -X chrome xpcom toolkit/library


That's correct.  Or, if you're not a mach user, it translates to

make -f client.mk chrome
make -f client.mk xpcom
make -f client.mk toolkit/library

Nick


Does it also build browser/app on OSX after every build? Since that is
pretty much required all the time and often missed.


Looking at 
http://mxr.mozilla.org/mozilla-central/source/build/dumbmake-dependencies, 
it does not.


I considered supporting conditional dependencies like this, but was torn 
between using the preprocessor or using the expression parser used for 
test manifests.  I decided to land the simplest thing that could 
possibly work first, but I'm game for improving the format to support 
conditionals.


Yours,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: smartmake-like functionality has landed in mach

2013-05-03 Thread Nick Alexander

On 13-05-03 12:13 PM, Neil wrote:

Ehsan Akhgari wrote:


On 2013-05-02 7:23 PM, Dave Townsend wrote:


On 5/2/2013 3:45 PM, Nick Alexander wrote:


On 13-05-02 3:09 PM, Josh Matthews wrote:


According to
http://mxr.mozilla.org/mozilla-central/source/build/dumbmake-dependencies#8,



it is equivalent to the following:

./mach build -X chrome xpcom toolkit/library


Or, if you're not a mach user, it translates to

make -f client.mk chrome
make -f client.mk xpcom
make -f client.mk toolkit/library



Not quite true, it translates to make $MAKEFLAGS -C $OBJDIR/chrome &&
make $MAKEFLAGS -C $OBJDIR/xpcom && make $MAKEFLAGS -C
$OBJDIR/toolkit/library where OBJDIR should be obvious and MAKEFLAGS is
usually -w -s and -j with an appropriate value.


I stand corrected.  For the curious, the definitive source is

http://mxr.mozilla.org/mozilla-central/source/python/mozbuild/mozbuild/base.py#201

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: smartmake-like functionality has landed in mach

2013-05-06 Thread Nick Alexander

On 13-05-06 9:03 PM, Josh Matthews wrote:

On 05/05/2013 09:07 PM, Felipe Gomes wrote:

Is the idea of smartmake to make things also work for non-toplevel
folders? For example, if I edit .cpp only in content/base/src, it
should be enough to rebuild that and toolkit/library. However, `mach
build content/base/src` won't add toolkit/library in that case.

I think this would be a nice use case to cover for the workflow of:
- enter folder
- make changes to files
- `mach build .`


In any case, I filed bug 868880 to include some of the browser/app
dependencies into smartmake.



I thought at one point I had longest substring matching in smartmake,
but the details are fuzzy. It feels like it should be fine to take the
dependency that is the longest substring match of the target and start
building from there, which would avoid the need to add
browser/devtools/* to the dependency list.


I commented on Bug 868880 to this effect, but: what if the longest 
substring is /?  Then we're forcing a top-level build.  Or we're special 
casing directories at the root (which I suppose we already are, since 
|mach build| and |mach build DIR| do different things).


Personally, adding browser/devtools/* to the dependency list prompts us 
developers to externalize our knowledge of the dependencies not captured 
by the current build system, which seems like an artifact we will 
appreciate in the future.


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Persistently storing build system state

2013-05-16 Thread Nick Alexander

On 13-05-16 12:53 PM, Gregory Szorc wrote:

As I wrote at [1] there are some scenarios where the build system and
related tools would like to store persistent state. Some uses for this
include:

* Automatically recording previous build logs, compiler warnings, and
test results.
* Holding build environments (downloaded Mercurial extensions,
downloaded or built toolchains, etc).
* Global mach/mozconfig config files
* Caching of state such as last bootstrap time, last Mercurial extension
update check, etc.

Some of these we can't do today because state in the object directory
frequently gets lost (from clobbers) or is not shared among source
directories. As a result, I'd argue that build system UX and
productivity suffers because we can't tap this potential. I'd like to
tap this potential.

Since we can't store all state in the object directory or in the source
directory, I'm proposing that at some time in the future the build
system will automatically use ~/.mozbuild for persistent state storage.
The location will be configurable via an environment variable or
something, of course. And, when the directory is initially created, we
can add a prompt or a long delay with a prominent notification message
or something. Of course, things would be implemented such that multiple
source directories and multiple object directories continue to work.

Initially, I intend to utilize the state directory for holding a global
mach config file and bootstrap state (so mach and/or the build system
can prompt/notify you to rebootstrap every N days or something).

Are there any objections or concerns?


I also think we should tap this potential.  But:

Some of the things you mention are truly global: mach configuration, hg 
extensions.  But some things are more source-tree specific: build logs, 
compiler warnings, test results.  I would not want to share too much 
state between my mozilla-inbound, mozilla-central, and services-central 
trees.


I think making this distinction explicit will help: ~/.mozbuild and 
$SRCDIR/.mozbuild.


Best,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Persistently storing build system state

2013-05-16 Thread Nick Alexander

On 13-05-16 12:53 PM, Gregory Szorc wrote:

As I wrote at [1] there are some scenarios where the build system and
related tools would like to store persistent state. Some uses for this
include:

* Automatically recording previous build logs, compiler warnings, and
test results.
* Holding build environments (downloaded Mercurial extensions,
downloaded or built toolchains, etc).
* Global mach/mozconfig config files
* Caching of state such as last bootstrap time, last Mercurial extension
update check, etc.


Let me suggest another: keeping track of successfully built object 
directories, so that remote testing commands can guess MOZ_HOST_BIN 
candidates.


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to make Firefox open in the foreground on Mac when launched from terminal

2013-05-29 Thread Nick Alexander

So I'd like to ask, if you care about this,

which way would _you_ have as the default?


I want |mach run| to start the application in the foreground.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


[ann] |mach python| command for running the Python in the object directory's virtual environment

2013-07-02 Thread Nick Alexander

Hello all,

Bug 818744 just landed a new |mach python| command that runs the Python 
in the object directory's virtual environment.  Command line arguments 
are passed along as is, so |mach python --version| prints the version, 
and input and output are passed through, so |mach python| starts the 
interactive REPL.
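
For example (the module chosen here is just an illustration):

  ./mach python --version
  ./mach python -c "import mozbuild; print(mozbuild.__file__)"

The first prints the interpreter version; the second shows that packages 
installed into the objdir virtualenv (like mozbuild) are importable.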


Best,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Poll: What do you need in MXR/DXR?

2013-10-02 Thread Nick Alexander

On 13-10-02 2:09 PM, Gijs Kruitbosch wrote:

On 02/10/13 21:33 , Erik Rose wrote:

What features do you most use in MXR and DXR?

Over in the recently renamed Web Engineering group, we're working hard
to retire MXR. It hasn't been maintained for a long time, and there's
a lot of duplication between it and DXR, which rests upon a more
modern foundation and has been developing like crazy. However, there
are some holes we need to fill before we can expect you to make a Big
Switch. An obvious one is indexing more trees: comm-central, aurora,
etc. And we certainly have some bothersome UI bugs to squash. But I'd
like to hear from you, the actual users, so it's not just me and Taras
guessing at priorities.

What keeps you off DXR? (What are the MXR things you use constantly?
Or the things which are seldom-used but vital?)

If you're already using DXR as part of your workflow, what could it do
to make your work more fun?

Feel free to reply here, or attach a comment to this blog post, which
talks about some of the things we've done recently and are considering
for the future:

https://blog.mozilla.org/webdev/2013/09/30/dxr-gets-faster-hardware-vcs-integration-and-snazzier-indexing/


We'll use your input to build our priorities for Q4, so wish away!

Cheers,
Erik




Something that's been bothering me about MXR for many years: When I
search for any file of which there is only one copy, like any of the
interface definition files, or the singleton implementations thereof, it
should just open the file. The extra click on the unique search result
is just wasted time.


For every use case, there is an equal and contradictory use case: I find 
DXR's "jump to the only result" behaviour frustrating.  I have separate 
mxr and mxrf keywords for searching in files; if mxrf jumped to the 
unique result, I would be irritated (because I often just want to see 
what the full path to the file is, like `git ls-files | grep`).


Nick

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

2013-10-31 Thread Alexander Keybl
I think it makes a lot of sense to test the spread. +1

- Original Message -
From: Armen Zambrano G. 
To: dev-platform@lists.mozilla.org
Sent: Tue, 29 Oct 2013 13:31:33 -0700 (PDT)
Subject: Re: Improving Mac OS X 10.6 test wait times by reducing 10.7 load

Hello all,
I would like to re-visit this.

I would like to look into turning off tests and talos for 10.7 and
re-purposing those machines as 10.6 machines.
* We have many more users on 10.6 than on 10.7.
* No new updates have been given to 10.6 since July 2011 [1]
* No new updates have been given to 10.7 since October, 2012 [2]

This will improve our current Mac OSX testing wait times.

On another note, 10.9 has come out and I have already started seeing a
decent dip in 10.8 users (since it is a free update).

On another note, I would like to consider no longer running jobs on 10.8
and only running them on 10.9 once we have the infrastructure up and
running.

cheers,
Armen

[1] https://en.wikipedia.org/wiki/Mac_OS_X_Snow_Leopard#Release_history
[2] https://en.wikipedia.org/wiki/Mac_OS_X_Lion#Release_history

On 2013-04-25 1:30 PM, Armen Zambrano G. wrote:
> (please follow up through mozilla.dev.planning)
> 
> Hello all,
> I have recently been looking into our Mac OS X test wait times which
> have been bad for many months and progressively getting worst.
> Less than 80% of test jobs on OS X 10.6 and 10.7 are able to start
> within 15 minutes of being requested.
> This slows down getting tests results for OS X and makes tree closures
> longer if we have Mac OS X test back logs.
> Unfortunately, we can't buy any more revision 4 Mac minis (they're not
> sold anymore) as Apple discontinues old hardware as new ones comes out.
> 
> In order to improve the turnaround time for Mac testing, we have to look
> into reducing our test load in one of these two OSes (both of them run
> on revision 4 minis).
> We have over a third of our OS X users running 10.6. Eventually, down
> the road, we could drop 10.6 but we still have a significant amount of
> our users there; even though Mac stopped serving them major updates
> since July 2011 [1].
> 
> Our current Mac OS X distribution looks like this:
> * 10.6 - 43%
> * 10.7 - 30%
> * 10.8 - 27%
> OS X 10.8 is the only version that is growing.
> 
> In order to improve our wait times, I propose that we stop testing on
> tbpl per-checkin [2] on OS X 10.7 and re-purpose the 10.7 machines as
> 10.6 to increase our capacity.
> 
> Please let us know if this plan is unacceptable and needs further
> discussion.
> 
> best regards,
> Armen Zambrano - Mozilla's Release Engineering

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: JavaScript Style Guide. Emacs mode line.

2014-01-08 Thread Nick Alexander
> BTW, do java or javascript programmers use Eclipse with its built-in editor
> with suitable editor configuration,
> and that is the end of the story for such Eclipse users when they tinker
> with Mozilla source code?

Android Background Services are developed in a separate git repo that we
then merge to mobile/android [1].

It's a small development team, usually only 2 or 3 at any one time.  We
develop using Eclipse and have shared or equivalently configured Eclipse
to format code how we like.  The Java formatter is not perfect but it
works well in practice, and we rarely have style issues.  I use emacs at
the same time, especially when touching up rebases [2].

Fennec is almost all text editors.  We have an open ticket to support
IDEs [3], but there are a lot of things working against us.

Nick

[1] This merge process is essentially manual and is similar to the
Jetpack SDK merge process.

[2] I use magit for all vcs interactions, so it's natural to do all of
this stuff in Emacs.

[3] https://bugzilla.mozilla.org/show_bug.cgi?id=853045
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Individual files should have a reviewer reference

2014-02-13 Thread Nick Alexander

On 2/13/2014, 1:23 PM, Honza Bambas wrote:

An optional /* @reviewer: m...@foo.com */ comment under the license would
do.  If not present, find reviewer the usual way (not always hitting the
right person).


-1 from me on anything that even *looks* like a file owner in the file 
itself.  This is really meta-data, and I think it will diverge so fast 
that it's not worth trying.


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Including Adobe CMaps

2014-02-26 Thread Nick Alexander

On 2/26/2014, 11:56 AM, Andreas Gal wrote:


This randomly reminds me that it might be time to review zip as our compression 
format for omni.ja.

ls -l omni.ja

7862939

ls -l omni.tar.xz (tar and then xz -z)

4814416

LZMA2 is available as a public domain implementation. It uses a bit more memory 
than zip, but it's still within reason (the default level 6 is around 1MB to 
decode I believe). A fairly easy way to use it would be to add support for a 
custom compression format for our version of libjar.


Is there a meta ticket for this?  I'm interested in evaluating how much 
this would trim the mobile/android APK size.


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Including Adobe CMaps

2014-02-27 Thread Nick Alexander

On 2/27/2014, 12:30 AM, Axel Hecht wrote:

The feature of zip we want is the index, which lets us seek to a
position in the bundle and start unpacking, just given the filename.

How hard is it to actually create a data structure for the same purpose for
a tar.xz or so? I don't know really anything about the uncompression
algorithms to know if we could do something like
- seek to position N in bundle
- set state to X, if applicable
- uncompress, skip M bytes
-- you get your file contents, L bytes long

Or so. Yes, it'd be a new file format, I guess, at least as far as I can
tell? Maybe it's worth it.


Big -1 to a new format.  glandium's testing earlier in this thread was 
made trivial by the fact that we're using zip and other standard 
formats; that kind of tooling is the first thing to go when we introduce 
a custom format.  As an Android developer particularly interested in 
packaging, I inspect and rebuild omni.ja with a whole bunch of different 
tools.
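
For instance, plain zip/unzip get you a long way (flags from memory, so 
double-check before relying on them):

  unzip -l omni.ja                         # list contents; it's a (specially optimized) zip
  unzip omni.ja -d unpacked                # unpack for editing
  cd unpacked && zip -qr9XD ../omni.ja *   # repack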


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Tabs and spaces

2014-03-02 Thread Nick Alexander

On 2014-03-02, 9:12 AM, Ted Mielczarek wrote:

On 3/2/2014 5:25 AM, Neil wrote:

I've noticed seven moz.build files containing tabs, I assume this is
undesirable?


Yes. We should probably make the moz.build reader error on tabs.


+1, and filed Bug 978582 [1].

Nick

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=978582
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Oculus VR support & somehwat-non-free code in the tree

2014-04-14 Thread Nick Alexander

On 2014-04-14, 9:47 PM, Andreas Gal wrote:


Vlad asked a specific question in the first email. Are we comfortable using 
another open (albeit not open enough for MPL) license on trunk while we rewrite 
the library? Can we compromise on trunk in order to innovate faster and only 
ship to GA once the code is MPL friendly via re-licensing or re-writing? What 
is our view on this narrow question?


The idea that we will only "ship to GA once the code is MPL friendly via 
re-licensing or re-writing" sounds tricky in practice.  If including 
libovr has value, the incentive will be to ship, regardless of license. 
 (Personally, I am a VR skeptic.)  We could view this as an experiment: 
include libovr until it's dead weight or proven to be valuable enough to 
demand the resources to re-write it for GA.


We want to move faster and experiment more freely, and in tree has huge 
developer benefits, so my first impression is to include it, but I have 
a question. vlad said:


> The goal would be to remove LibOVR before we ship (or keep it in 
assuming it gets relicensed, if appropriate), and replace it with a 
standard "Open VR" library.


Can somebody save me some license reading and explain what the existing 
framework around shipping libovr is?  Is it explicitly allowed? 
Explicitly dis-allowed?  If I read gerv's post [1] correctly, it is 
allowed, but it's hard to distinguish gerv's opinion from Mozilla legal's.


Nick

[1] http://blog.gerv.net/2014/03/mozilla-and-proprietary-software/, in 
particular section 1B.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Oculus VR support & somehwat-non-free code in the tree

2014-04-14 Thread Nick Alexander

1. Check in the LibOVR sources as-is, in other-licenses/oculus.  Add a
configure flag, maybe --disable-non-free, that disables building it.
Build and ship it as normal in our builds.


Should be opt-in, not opt-out.


+1

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Nick Alexander

On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:

On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier  wrote:


isIdentity() indeed suffers from rounding errors but since it's useful, I'm
hesitant to remove it.
In our rendering libraries at Adobe, we check if a matrix is *almost*
identity. Maybe we can do the same here?



One option would be to make "isIdentity" and "is2D" state bits in the
object rather than predicates on the matrix coefficients. Then for each
matrix operation, we would define how it affects the isIdentity and is2D
bits. For example we could say translate(tx, ty, tz)'s result isIdentity if
and only if the source matrix isIdentity and tx, ty and tz are all exactly
0.0, and the result is2D if and only if the source matrix is2D and tz is
exactly 0.0.

With that approach, isIdentity and is2D would be much less sensitive to
precision issues. In particular they'd be independent of the precision used
to compute and store store matrix elements, which would be helpful I think.


I agree that most mathematical ways of determining a matrix (as a 
rotation, or a translation, etc) come with isIdentity for free; but are 
most matrices derived from some underlying transformation, or are they 
given as a list of coefficients?


If the latter, the isIdentity flag needs to be determined by the 
constructor, or fed as a parameter.  Exactly how does the constructor 
determine the parameter?  Exactly how does the user?


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Nick Alexander

On 2014-06-02, 9:59 AM, Rik Cabanier wrote:




On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander  wrote:

On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:

On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier  wrote:

isIdentity() indeed suffers from rounding errors but since
it's useful, I'm
hesitant to remove it.
In our rendering libraries at Adobe, we check if a matrix is
*almost*
identity. Maybe we can do the same here?


One option would be to make "isIdentity" and "is2D" state bits
in the
object rather than predicates on the matrix coefficients. Then
for each
matrix operation, we would define how it affects the isIdentity
and is2D
bits. For example we could say translate(tx, ty, tz)'s result
isIdentity if
and only if the source matrix isIdentity and tx, ty and tz are
all exactly
0.0, and the result is2D if and only if the source matrix is2D
and tz is
exactly 0.0.

With that approach, isIdentity and is2D would be much less
sensitive to
precision issues. In particular they'd be independent of the
precision used
to compute and store store matrix elements, which would be
helpful I think.


I agree that most mathematical ways of determining a matrix (as a
rotation, or a translation, etc) come with isIdentity for free; but
are most matrices derived from some underlying transformation, or
are they given as a list of coefficients?


You can do it either way. Here are the constructors:
http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix

So you can do:

var m = new DOMMatrix(); // identity = true, 2d = true
var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); // identity = depends, 2d = depends
var m = new DOMMatrix(otherdommatrix); // identity = inherited, 2d = inherited
var m = new DOMMatrix([a b c d e f]); // identity = depends, 2d = true
var m = new DOMMatrix([m11 m12 ... m44]); // identity = depends, 2d = depends

If the latter, the isIdentity flag needs to be determined by the
constructor, or fed as a parameter.  Exactly how does the
constructor determine the parameter?  Exactly how does the user?


The constructor would check the incoming parameters as defined:

http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity


Thanks for providing these references.  As an aside -- it worries me 
that these are defined rather differently:  is2d says "are equal to 0", 
while isIdentity says "are '0'".  Is this a syntactic or a semantic 
difference?


But, to the point, the idea of "carrying around the isIdentity flag" is 
looking bad, because we either have that A*A.inverse() will never have 
isIdentity() == true; or we promote the idiom that to check for 
identity, one always creates a new DOMMatrix, so that the constructor 
determines isIdentity, and then we query it.  This is no better than 
just having isIdentity do the (badly-rounded) check.


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-08 Thread Nick Alexander

On 2014-06-07, 9:38 PM, Benoit Jacob wrote:




2014-06-07 12:49 GMT-04:00 L. David Baron :

On Monday 2014-06-02 20:45 -0700, Rik Cabanier wrote:
 > - change isIdentity() so it's a flag.

I'm a little worried about this one at first glance.

I suspect isIdentity is going to be used primarily for optimization.
But we want optimizations on the Web to be good -- we should care
about making it easy for authors to care about performance.  And I'm
worried that a flag-based isIdentity will not be useful for
optimization because it won't hit many of the cases that authors
care about, e.g., translating and un-translating, or scaling and
un-scaling.


Note that the current way that isIdentity() works also fails to offer
that characteristic, outside of accidental cases, due to how floating
point works.

The point of this optimizations is not so much to detect when a generic
transformation happens to be of a special form, it is rather to
represent transformations as a kind of variant type where "matrix
transformation" is one possible variant type, and exists alongside the
default, more optimized type, "identity transformation".

Earlier in this thread I pleaded for the removal of isIdentity(). What I
mean is that as it only is defensible as a "variant" optimization as
described above, it doesn't make sense in a _matrix_ class. If we want
to have such a variant type, we should call it a name that does not
contain the word "matrix", and we should have it one level above where
we actually do matrix arithmetic.

Strawman class diagram:

                Transformation
               /      |       \
              /       |        \
             /        |         \
            /         |          \
     Identity       Matrix       Other transform types
                                 e.g. Translation

In such a world, the class containing the word "Matrix" in its name
would not have a isIdentity() method; and for use cases where having a
"variant type" that can avoid being a full blown matrix is meaningful,
we would have such a variant type, like "Transformation" in the above
diagram, and the isIdentity() method there would be merely asking the
variant type for its type field.


I agree with this approach, but I don't think that we want to expose 
this division (directly) to the Web.  It would be great to see these 
divisions used internally to optimize: composing translations gives 
another translation, etc.
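
For instance, an internal-only variant (hypothetical names and shapes, not
a proposal for the Web-facing API) could keep composition cheap; a rough
sketch:

// compose() picks the cheapest representation it can; multiplyMatrices()
// is a hypothetical helper for the general fallback case.
function compose(a, b) {
  if (a.kind === "identity") return b;
  if (b.kind === "identity") return a;
  if (a.kind === "translation" && b.kind === "translation") {
    return { kind: "translation", x: a.x + b.x, y: a.y + b.y };
  }
  return { kind: "matrix", m: multiplyMatrices(a, b) };
}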


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-08 Thread Nick Alexander

On 2014-06-08, 2:44 AM, Neil wrote:

Benoit Jacob wrote:


Strawman class diagram:

            Transformation
           /      |       \
          /       |        \
         /        |         \
        /         |          \
   Identity     Matrix     Other transform types
                           e.g. Translation


In such a world, the class containing the word "Matrix" in its name
would not have a isIdentity() method; and for use cases where having a
"variant type" that can avoid being a full blown matrix is meaningful,
we would have such a variant type, like "Transformation" in the above
diagram, and the isIdentity() method there would be merely asking the
variant type for its type field.


I think roc suggested the possibility of something similar i.e.

            Transformation
           /      |      \
          /       |       \
         /        |        \
        /         |         \
   Identity   2DMatrix   3DMatrix

Then from JS you would just write (matrix instanceof Identity). I don't
know whether this would make the implementation unduly complex though.


I worked a good deal on Sage [1], a general algebra computation 
platform, and one lesson we learned again and again is that baking 
mathematical properties into the type hierarchy directly caused more 
problems than it solved.  I'd like to avoid properties encoded as 
instance types if we possibly can.  (Exposing isIdentity as an internal 
instanceof is fine.)


Nick

[1] http://sagemath.org/


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Reordering opened windows

2014-07-07 Thread Nick Alexander

On 2014-07-04, 7:38 AM, David Rajchenbach-Teller wrote:

Hi,

  We are considering redesigning slightly how windows are reopened by
Session Restore, to ensure that most recently used windows are loaded
first. I believe that, in many cases, this would enable users to start
browsing faster.


Do we have data on how many users have multiple windows?  I expect that 
we have very few such users, but data will carry the day.


I'm mostly asking out of interest, because even if we have few multiple 
window users, those users probably have lots of tabs.  That's a good 
proxy for being a "power user", and likely one who cares about restore 
responsiveness.


Nick

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: Disabling auto-play videos on mobile networks/devices?

2014-08-21 Thread Nick Alexander

Hi Wes,

On 2014-08-21, 10:29 AM, Wesley Johnston wrote:

Summary: We've had some complaints at times about videos autoplaying on mobile devices 
when sites request autoplay. We should be more mindful of users and try to avoid using 
data if they don't want it. Sites should be doing this for us, but we've encountered 
pages where that is not the case. I'm proposing that we at least disable this if the 
audio/video has to be pulled over a (paid?) mobile network. It may, because of the 
context that phones are used, be something we'd disable on mobile in general. i.e. The 
bug report mentions someone using their phone in a quiet setting at home. Theres also 
some power usage concerns that this would help with. The spec allows this explicitly 
"Authors are urged to use the autoplay attribute rather than using script to trigger 
automatic playback, as this allows the user to override the automatic playback when it is 
not desired". Sites (like games) that want to override this could still use 
scripting to autoplay (and probably already do).


In general, I'm in favour of not autoplaying at all on mobile devices.


Link to standard: 
http://www.w3.org/TR/html5/embedded-content-0.html#attr-media-autoplay
Platform coverage: Where will this be available? Android, Firefox OS
Estimated or target release: Aiming for Firefox 35.
Preference behind which this will be implemented: Not sure. We already have a 
boolean media.autoplay.enabled. I think the best thing would probably be to 
make it a tri-state pref.


Why is it not sufficient to just set media.autoplay.enabled=false on 
mobile platforms?  (MXR suggests autoplay is enabled on all platforms.) 
 Is the concern that disabling autoplay too widely will lead to 
widespread scripted-autoplay, reducing user control yet further?
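
To be concrete about the two paths (a rough sketch; the markup and script
are illustrative only):

// Declarative autoplay -- the path a pref like media.autoplay.enabled
// (or the proposed tri-state) can gate:
//   <video src="clip.webm" autoplay></video>

// Scripted autoplay -- the fallback a site could use if the attribute is
// ignored, sidestepping the user's setting:
var v = document.querySelector("video");
v.play();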


Can you be explicit about the three states of this proposed tri-state pref?

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: Web Speech API - Speech Recognition with Pocketsphinx

2014-10-30 Thread Nick Alexander

On 2014-10-30, 4:18 PM, Andre Natal wrote:

I've been researching speech recognition in Firefox for two years. First
SpeechRTC, then emscripten, and now Web Speech API with CMU pocketsphinx
[1] embedded in Gecko C++ layer, project that I had the luck to develop for
Google Summer of Code with the mentoring of Olli Pettay, Guilherme
Gonçalves, Steven Lee, Randell Jesup plus others and with the management of
Sandip Kamat.

The implementation already works in B2G, Fennec and all FF desktop
versions, and the first language supported will be english. The API and
implementation are in conformity with W3C standard [2]. The preference to
enable it is: media.webspeech.service.default = pocketsphinx


First, Andre, let me offer my congratulations on getting this project to 
this point.  We've talked a few times and I've always been impressed.


Can you point me at Fennec try builds?  I vaguely recall that these 
speech recognition approaches require large pattern matching files, and 
I'd like to see what including the Speech API does to the Fennec APK 
size.  We're pushing pretty hard on reducing our APK size right now 
because we believe it's a big barrier to entry and especially to 
upgrading older devices.


Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we make try builds default to a different profile than Nightly/Aurora/Beta/Release builds?

2015-04-08 Thread Nicholas Alexander
On Wed, Apr 8, 2015 at 4:06 PM, Mike Hommey  wrote:

> On Wed, Apr 08, 2015 at 12:08:27PM -0700, Seth Fowler wrote:
> > I sometimes ask users to test a try build to verify that the build
> > fixes their problem. This is an important tool, especially when the
> > problem is hard to reproduce or when the definition of “fix” is a bit
> > nebulous (as it sometimes is in performance-related bugs).
> >
> > However, I don’t want to run the risk of causing users to lose data
> > when they test these builds! That’s especially true in light of the
> > fact that we sometimes make backwards incompatible changes to on-disk
> > data structures between releases (like the recent IndexedDB changes),
> > which could result in users being stuck on Nightly if they experiment
> > with a try build.
>
> It seems to me *this* is the actual problem that needs solving.
> Switching between esr-release-beta-aurora-nightly is *very* common. If
> running nightly screws up profiles for older versions, that's a serious
> problem imho.
>

Really?  Presumably not every forward DB migration can be reverted without
some data loss in theory, and in my limited practice, handling DB
downgrades is pretty damn hard.  Is there an expectation that Release ->
Nightly -> Release will work?  Preserve data?

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we make try builds default to a different profile than Nightly/Aurora/Beta/Release builds?

2015-04-08 Thread Nicholas Alexander
Hi Seth, others,

On Wed, Apr 8, 2015 at 12:08 PM, Seth Fowler  wrote:

> I sometimes ask users to test a try build to verify that the build fixes
> their problem. This is an important tool, especially when the problem is
> hard to reproduce or when the definition of “fix” is a bit nebulous (as it
> sometimes is in performance-related bugs).
>
> However, I don’t want to run the risk of causing users to lose data when
> they test these builds! That’s especially true in light of the fact that we
> sometimes make backwards incompatible changes to on-disk data structures
> between releases (like the recent IndexedDB changes), which could result in
> users being stuck on Nightly if they experiment with a try build.
>

It's quite important for Firefox for Android that try builds look and
behave exactly like Nightly builds.  On Android, it's not really possible
to move/copy/duplicate profiles, and it's rocket science to do it across
products (Release, Beta, ..., Nightly, custom).  So we need to keep try
builds installing over top of Nightly builds and vice versa, and we accept
that some migrations are one-way -- the nature of the testing beast, I think.

So whatever we do needs to be Desktop only, or otherwise account for Fennec.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How does stl_wrappers works in mozilla building system?

2015-04-17 Thread Nicholas Alexander
On Fri, Apr 17, 2015 at 2:58 PM, Yonggang Luo  wrote:

> I cannot figure out how the Mozilla build system uses stl_wrappers.
>

What do you want to know?
https://dxr.mozilla.org/mozilla-central/search?tree=mozilla-central&q=stl_wrapper&redirect=true
clearly shows a Python script that generates the wrapper header files and
also where the includes are added in configure.in.  Not standard but easy
to see the broad strokes.

Nick


> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: You can now log into BMO with your GitHub account

2015-04-27 Thread Nicholas Alexander
On Mon, Apr 27, 2015 at 7:15 AM, Mark Côté  wrote:

> This morning we enabled a feature on bugzilla.mozilla.org that allows
> users to log in with their GitHub credentials, similar to our existing
> Persona support.  If you have several email addresses associated with
> your GitHub account, you will be prompted to choose one.  In either
> case, if your chosen GitHub email address does not match an existing
> user, you will be prompted to create an account.
>
> Also note that, as with Persona, GitHub login is not available to users
> with special administrative or security permissions.
>
> Although hopefully convenient on its own, this feature is also a step
> toward our plan to enabling importing of GitHub pull requests into
> MozReview.
>

Hi Mark, hi BMO team,

As a developer working on Firefox for iOS (join us at
https://github.com/mozilla/firefox-ios) this is awesome.  I hope this will
reduce our contributors' overhead when reporting and tracking issues.  I
will update our README to mention logging in with GitHub credentials.

Thanks!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Web Speech API Installation Build Flags

2015-05-06 Thread Nicholas Alexander
On Tue, May 5, 2015 at 10:36 PM,  wrote:

> We would like some feedback on build flags for the Web Speech API
> installation.
>
> More specifically, we are planning to land an initial version of the Web
> Speech API[1] into Geko. However, due to a number of factors, model size
> being one of them, we plan to introduce various build flags which
> install/do not install parts of the Web Speech API for various build
> targets.
>
> Our current plan for B2G is as follows:
>
> 1. Introduce a flag to control installation of the Web Speech API
> 2. Introduce a flag to control installation of  Pocketsphinx[2], the
> STT/TTS engine.
> 3. Introduce a script to allow installation of models, allowing developers
> to test the Web Speech API (They can test once they've made a build with
> the previous two flags on)
>
> Our question is related to desktop and Fennec. Our current plan is to:
>
> 1. Introduce a flag to control installation of the Web Speech API +
> Pocketsphinx + English model[3]
>

For Fennec, that's about right -- a build flag should control building and
shipping all (or parts) of this, and a Gecko pref controls enabling the
feature (exposing it to web content).  There are numerous examples, and see
[1] and [2] for Fennec specific notes.

Fennec is extremely concerned about its APK size and is very unlikely to
ship the large model files that the Web Speech API relied on many moons ago
when I looked at it.  I encourage you to f? me and mfinkle on Fennec build
patches, and to pursue Fennec-specific discussion on mobile-firefox-dev.

Nick

[1]
http://www.ncalexander.net/blog/2014/07/09/how-to-land-a-fennec-feature-behind-a-build-flag/
[2]
http://www.ncalexander.net/blog/2015/02/13/how-to-update-and-test-fennec-feature-build-flags/


>
> The question is: Is this a good plan for desktop and Fennec? Should there
> be more/less fine grade control for installation there?
>
> [1] https://dvcs.w3.org/hg/speech-api/raw-file/tip/webspeechapi.html
> [2] http://cmusphinx.sourceforge.net/
> [3] Initially we will work only with English and introduce a mechanism to
> install other models later.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: gecko.readthedocs.org

2015-05-25 Thread Nicholas Alexander
Hi Philip,

On Fri, May 22, 2015 at 10:08 PM, Philip Chee  wrote:

> >
> https://gecko.readthedocs.org/en/latest/mobile/android/base/fennec/adjust.html


I am the author of this particular page.


> I had a very confusing time the first time I tried to use readthedocs.
> Clicking on every link I could see always appeared to land me back at
> the TOC. It took me a while to realize that I had to scroll down to the
> bottom of the page to see the relevant content.
>
> Every other similar documentation site that I've been to puts the TOC in
> a separate navigation column on the left (or right). Please, whoever is
> in charge of this, fix the problem.
>

Is this specific to the Fennec Adjust documentation, or is it general to
all of the documentation pages?

Could you include a lot more detail, including screenshots and STR?  For
me, RTD has a left sidebar and clicking links in the navigation bar (and
footnote markers in the text, etc) does exactly what I expect.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How to getting html files under URI resource://URI-path have the same permission that chrome://URI-path does?

2015-06-07 Thread Nicholas Alexander
Hi Yonggang,

On Sun, Jun 7, 2015 at 8:47 AM, 罗勇刚(Yonggang Luo) 
wrote:

> I am trying to place HTML files under the resource directory without needing
> to maintain chrome:// mappings for the resource files.
>

I did something like this to make regular HTML files have chrome://
privileges in the fennec-bootstrapper [1].  Check out the custom
bootstrap:// protocol, which might Just Work for your resource:// URIs.

Good luck!
Nick

[1] https://github.com/ncalexan/fennec-bootstrapper


>
> --
>  此致
> 礼
> 罗勇刚
> Yours
> sincerely,
> Yonggang Luo
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Disabling the startup cache

2015-06-22 Thread Nicholas Alexander
On Sat, Jun 20, 2015 at 10:01 PM, Gregory Szorc  wrote:

>
>
> > On Jun 20, 2015, at 20:55, Mike Hommey  wrote:
> >
> >> On Sat, Jun 20, 2015 at 02:36:26PM -0600, Aaron Klotz wrote:
> >>> On 6/20/2015 10:20 AM, Philip Chee wrote:
> >>> Anyone want to try the same with Firefox nightly?
> >>
> >>
> >> Already did it and subsequently filed bug 873638. We can do it if we
> modify
> >> the encoding of the JS source at omnijar build time.
> >>
> >> I did some measurements last year at the 2014 JS workweek and it turns
> out
> >> that the cost of converting the omnijar's JS source to jschars during
> >> parsing is equivalent to the benefits we gain from StartupCache.
> Assuming
> >> that nothing has changed in SM with respect to the time required to do
> the
> >> conversion, if we eliminate that conversion step, we can eliminate
> >> XDR-encoded JS from the StartupCache.
> >
> > Would encoding the js files in UTF-16 eliminate the conversion step? If
> > so, we can easily do that at packaging time, although local builds would
> > not benefit from that.
>
> We could have local builds install a re-encoded file instead of doing a
> simple copy/preprocess/symlink. Might add overhead to builds and break some
> workflows where people rely on symlinks to avoid rebuilds though. I argue
> most JS developers don't care about startupcache most of the time. So no
> net win?
>

On Android, every developer build (roughly, !MOZILLA_OFFICIAL) purges the
startup cache every time Gecko is started.  See [1].  This avoids having to
understand a chain of delicate dependencies on the build ID and makes
everyone's life easier.  It has bitten developers trying to profile
start-up time -- we warn loudly in the logcat but it's easy to miss.

Nick

[1]
https://wiki.mozilla.org/Mobile/Fennec/Android#Invalidate_the_JavaScript_startup_cache
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How to getting custom protocol custom-protocol://some-uri-path to act like http to support page browser?

2015-06-24 Thread Nicholas Alexander
On Wed, Jun 24, 2015 at 5:53 AM, Yonggang Luo  wrote:

> On Wednesday, June 24, 2015 at 7:38:01 PM UTC+8, Philipp Kewisch wrote:
> > On 6/24/15 1:10 PM, Yonggang Luo wrote:
> > > For example.
> > > custom-protocol://some-uri-path/test.html
> > >
> > > I want the test.html to work like
> > >
> > > http://some-web-site/test.html
> > > so that I can navigate within the HTML page.
> > >
> > Check out nsIProtocolHandler
> >
> > Here is an implementation that directly forwards to http(s):
> >
> >
> http://mxr.mozilla.org/comm-central/source/calendar/base/src/calProtocolHandler.js
> >
> > Philipp
>
> This forwards to https, but I am forwarding to resource://.
> That's really different.
>

I'm quite sure I already told you about this, but
https://github.com/ncalexan/fennec-bootstrapper includes a bootstrapper://
protocol which does something quite similar: it handles bootstrapper://<URI>
as if it were <URI>.  It does some things around privileges that I found necessary to
load http (and resource, I think) schemes in a privileged context.
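
Roughly, the interesting part is an nsIProtocolHandler whose newChannel maps
the custom scheme onto the target scheme.  A from-memory sketch in the style
of the old JS protocol handlers (contract IDs and exact signatures may need
adjusting for your tree; component registration is omitted):

// Sketch only: maps custom-protocol://path onto resource://path and lets
// necko build the real channel.
var Ci = Components.interfaces, Cc = Components.classes;

function CustomProtocolHandler() {}
CustomProtocolHandler.prototype = {
  scheme: "custom-protocol",
  defaultPort: -1,
  protocolFlags: Ci.nsIProtocolHandler.URI_IS_LOCAL_RESOURCE,

  newURI: function(spec, charset, baseURI) {
    var uri = Cc["@mozilla.org/network/simple-uri;1"].createInstance(Ci.nsIURI);
    uri.spec = spec;
    return uri;
  },

  newChannel: function(uri) {
    var target = uri.spec.replace("custom-protocol://", "resource://");
    var io = Cc["@mozilla.org/network/io-service;1"].getService(Ci.nsIIOService);
    return io.newChannel(target, null, null);
  },
};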

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: [feature] open certain domains into a private window

2015-06-24 Thread Nicholas Alexander
On Tue, Jun 23, 2015 at 9:46 PM, Aaron Klotz  wrote:

> I would like to see this as an addition to tab queues: having an option in
> the toast to choose to open the link in a private tab.
>

This would be valuable, I think.  We have a (new) flag for opening all
links in PB mode; this should work with that.  Worth filing a ticket (if
there is not yet one).

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to remove DHE ciphers from WebRTC DTLS handshake

2018-08-30 Thread Nicholas Alexander
On Wed, Aug 29, 2018 at 3:56 PM, Nils Ohlmeier 
wrote:

> Summary:
>
> We are looking at removing the DHE cipher suites from the DTLS handshake
> in Firefox soon.
>
> Ciphers:
> - TLS_DHE_RSA_WITH_AES_128_CBC_SHA
> - TLS_DHE_RSA_WITH_AES_256_CBC_SHA
> are the  two suites which we want to remove, because they are considered
> too weak.
>

Are these suites considered "too weak" across the board?  For historical
reasons Firefox for Android will handshake to Firefox Sync servers using
these suites:
https://searchfox.org/mozilla-central/rev/05d91d3e02a0780f44599371005591d7988e2809/mobile/android/services/src/main/java/org/mozilla/gecko/background/common/GlobalConstants.java#73.
Sounds like we should drop those suites there too -- can you confirm?

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to remove DHE ciphers from WebRTC DTLS handshake

2018-08-31 Thread Nicholas Alexander
On Thu, Aug 30, 2018 at 2:15 PM, Nicholas Alexander 
wrote:

>
>
> On Wed, Aug 29, 2018 at 3:56 PM, Nils Ohlmeier 
> wrote:
>
>> Summary:
>>
>> We are looking at removing the DHE cipher suites from the DTLS handshake
>> in Firefox soon.
>>
>> Ciphers:
>> - TLS_DHE_RSA_WITH_AES_128_CBC_SHA
>> - TLS_DHE_RSA_WITH_AES_256_CBC_SHA
>> are the  two suites which we want to remove, because they are considered
>> too weak.
>>
>
> Are these suites considered "too weak" across the board?  For historical
> reasons Firefox for Android will handshake to Firefox Sync servers using
> these suites: https://searchfox.org/mozilla-central/rev/
> 05d91d3e02a0780f44599371005591d7988e2809/mobile/android/
> services/src/main/java/org/mozilla/gecko/background/
> common/GlobalConstants.java#73.  Sounds like we should drop those suites
> there too -- can you confirm?
>

After a little (off-list) discussion, I've filed
https://bugzilla.mozilla.org/show_bug.cgi?id=1487842 tracking dropping
these.

Thanks, Nils (and others)!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Browser Architecture Newsletter #7 (S02E02)

2018-09-19 Thread Nicholas Alexander
Browser Architecture Newsletter #7 (S02E02)

It’s been a long six months since the last Browser Architecture newsletter
.
Short and sweet and no theme for this newsletter.

XUL/XBL Replacement

The XUL/XBL replacement project is churning ahead.  There are two major
goals right now:

   1.

   Replacing XBL bindings with Web technologies like Custom Elements; and
   2.

   Making the main browser window be an HTML document with (mostly) HTML
   DOM elements instead of a XUL document with (mostly) XUL DOM elements.

There’s been progress on both fronts; more details can be found in the
latest XUL/XBL Replacement newsletter
.

Fluent

Fluent is the localization system that Firefox intends as the replacement
for XML and DTD files.  The first Firefox feature to convert to Fluent is
about:preferences ,
which now has only 10 old-style strings remaining (down from an initial
1100 old-style strings)!  Behind the scenes, a big hurdle in the path to
smooth future conversions landed recently: Bug 1455649
 integrated
localization deeply into the (chrome-only) DOM.

Sync and storage

rkv  is a new lightweight Rust key-value
storage engine built on top of LMDB.  We’re looking at using it for a
variety of components, including XULStore and the search cache.  In order
to let others “kick the tires” easily, Myk Melez landed rkv into
mozilla-central ,
although it’s not yet compiled into Firefox for Desktop.  If you’ve got
some keys and some values that you’d like to persist efficiently, I know a
guy.

Mentat  was to be our heavyweight store
for user data that we wanted to sync between devices.  But we couldn’t
bring the technology to a meaningful market quickly enough, so we’re not
pursuing Mentat any further
.
The Application Services team is rapidly building a cross-platform Sync 1.5
stack in Rust  and that
will be the vehicle for improving the Firefox Sync experience for the
foreseeable future.

Node time!

We have made significant progress toward enabling Node.js tooling in the
Firefox build system.  This effort truly spans teams and projects: the
build maintainers and the browser architecture team made a case for Node.js
and set technical direction; the GitHub Integration working group incubated
the effort; and Firefox feature teams including the Activity Stream team
(Dan Mosedale), the ESLint team (Mark Banner), and the Devtools team (Jason
Laster and Alexandre Poirot) are driving the work across the line!

The build now requires Node.js by default
.  Up next is adding
the ability to use this from moz.build files and then drafting proposals
for how we’d like to manage node_modules.  The first consumers will be the
Firefox debugger, Activity Stream, and the ESLint integration. Thanks to
the many folks who have helped and continue to help this project forward.

Firefox technical leadership in the module ownership system

Since the last newsletter the new Firefox Technical Leadership Module has
been formed. You can see some of the discussion around it and its purpose
in the governance thread
.
The FTLM has representation from the Browser Architecture team in the form
of Dave Townsend.

You can always reach us on Slack or IRC (#browser-arch). This newsletter is
also available as a Google Doc

.

Nick (who promises a theme next time)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Browser Architecture Newsletter #7 (S02E02)

2018-09-20 Thread Nicholas Alexander
On Thu, Sep 20, 2018 at 7:25 AM smaug  wrote:

> On 09/20/2018 04:21 PM, Mike Hommey wrote:
> > On Thu, Sep 20, 2018 at 12:18:49PM +0300, smaug wrote:
> >> On 09/19/2018 08:34 PM, Nicholas Alexander wrote:
> >>> 2.
> >>>
> >>> Making the main browser window be an HTML document with (mostly)
> HTML
> >>> DOM elements instead of a XUL document with (mostly) XUL DOM
> elements.
> >>
> >> It is still mystery to me how the performance can be anywhere close to
> XUL when
> >> starting browser or opening a new window.
> >> XUL's prototype cache was explicitly added because of performance long
> ago.
> >> That is why we need to only load already parsed document in a binary
> format,
> >> or in case of new window, just clone from the prototype.
> >> Last time I heard, moving to use HTML document does cause significant
> regressions.
> >
> > I'm reminded of https://bugzilla.mozilla.org/show_bug.cgi?id=618912 but
> > IIRC there were similar experiments back then on desktop, and basic html
> > chrome was significantly faster than basic xul chrome.
> That bug seems to be more about the layout.
>
>
> https://screenshotscdn.firefoxusercontent.com/images/d1753829-3ebd-4c42-a757-14757051accf.png
> is
> the latest numbers I've seen. That isn't pgo, so may or many not be very
> accurate, but the regression is
> very significant.
>

I'm not expert in these areas, so I hope the experts chime in, but I think
there are lots of trade offs here.  I believe that you are correct: the XUL
prototype cache and similar mechanisms significantly impact browser startup
and related metrics.  But there is a general belief, which I do not have
references for, that the HTML widget set is either faster than or will be
faster than the XUL widget set.  Certainly folks believe that effort should
be put into optimizing core Web Platform technologies (rather than
Mozilla-specific extensions).

In any case, I do not personally know enough to put the regressions you're
highlighting into context, so I'll leave that for others.  Let me close
with an honest question: why can't we do the equivalent of the XUL
prototype cache for the browser HTML document window?

Best,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A tool to put Phabricator and Bugzilla count into your browser toolbar

2018-10-17 Thread Nicholas Alexander
Mike,

On Wed, Oct 17, 2018 at 7:09 AM Mike Conley  wrote:

> Hi folks,
>
> I wrote a WebExtension to put your total review count (Phabricator +
> Bugzilla) into your browser UI.
>
> So if you've ever wanted your review queue to haunt you while you surf
> the web, you can have that now.
>
> Here it is: https://addons.mozilla.org/en-US/firefox/addon/myqonly/
>
> The WebExtension uses good ol' page scraping to pull your Phabricator
> review count, and only works if you've got a live Phabricator session
> cookie. For Bugzilla, you need to supply an API token and put it in the
> add-on config in about:addons.
>
> Source: https://github.com/mikeconley/myqonly
>
> Issues, feature requests, pull requests and punnier names welcome.
>

I haven't tried this and all I can say is THANK YOU THANK YOU THANK YOU.

Now, how do we add GH reviews as well?  It's ridiculous that my daily flow
includes three systems with no plan (that I am aware of) to integrate them
or even provide a dashboard, but here we are.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Announcing new test platform "Android 7.0 x86"

2018-11-01 Thread Nicholas Alexander
On Thu, Nov 1, 2018 at 2:44 PM Geoffrey Brown  wrote:

> This week some familiar tier 1 test suites began running on a new test
> platform labelled "Android 7.0 x86" on treeherder. Only a few test suites
> are running so far; more are planned.
>
> Like the existing "Android 4.2" and "Android 4.3" test platforms, these
> tests run in an Android emulator running in a docker container (the same
> Ubuntu-based image used for linux64 tests).  The new platform runs an x86
> emulator using kvm acceleration, enabling tests to run much, much faster
> than on the older platforms. As a bonus, the new platform uses Android 7.0
> ("Nougat", API 24) - more modern, more relevant.
>

This is incredibly awesome news!  For those not in the know, the reason
this is so noteworthy is that most cloud-based hosts don't expose KVM
acceleration, since it can be a security concern.  In particular, AWS doesn't
expose KVM to most (all?) instances.  There's an awful lot of plumbing
required to get jobs running in different cloud hosts... but here we are.

Bravo to everybody involved!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to adjust testing to run on PGO builds only and not test on OPT builds

2019-01-03 Thread Nicholas Alexander
On Thu, Jan 3, 2019 at 8:43 AM Brian Grinstead 
wrote:

> Artifact builds don’t work with PGO, do they? When I do `-p all` on an
> artifact try push I get busted PGO builds (for example:
> https://treeherder.mozilla.org/#/jobs?repo=try&revision=7f8ead55ca97821c60ef38af4dec01b8bff0fdf3&selectedJob=219655864).
> What's needed to make it work? Requiring a full build for frontend-only
> changes would increase the turnaround time and resource savings in (3).
>

I can partly address this.  There are two things at play (at least):

1) automation builds need a special configuration piece in place to
properly support artifact builds.  Almost certainly that's not in place for
PGO builds, since it's such an unusual thing to do: "you want to pack PGO
binaries into a development build... why?"  But there's really no reason we
can't do that in automation so I've filed
https://bugzilla.mozilla.org/show_bug.cgi?id=15175323 for these things.
It's not high priority but we might as well capture the request; in
general, we always want try pushes to succeed with sensible results if we
can arrange it.

2) locally, we need to teach the artifact code to sniff whatever mozconfig
options say "I'm doing PGO" and fetch the right binaries based on that.  I
think that enabling PGO locally is a little delicate, and I know that
chmanchester (and others?) is working hard to make this more robust, so
perhaps this is easy or becomes easy soon.  I've filed
https://bugzilla.mozilla.org/show_bug.cgi?id=1517532 to track this.

If I'm wrong about the feasibility of these things, please update the
tickets!

Best,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to adjust testing to run on PGO builds only and not test on OPT builds

2019-01-04 Thread Nicholas Alexander
On Thu, Jan 3, 2019 at 1:47 PM Chris AtLee  wrote:

> Thank you Joel for writing up this proposal!
>
> Are you also proposing that we stop the linux64-opt and win64-opt builds as
> well, except for leaving them as an available option on try? If we're not
> testing them on integration or release branches, there doesn't seem to be
> much purpose in doing the builds.
>

One reason we might not want to stop producing opt builds: we produce
artifact builds against opt (and debug, with --enable-debug in the local
mozconfig).  It'll be very odd to have --enable-artifact-build and
_require_ --enable-pgo or whatever it is in the local mozconfig.

I expect that these opt build platforms will be relatively inexpensive to
preserve, because step one (IIUC) of pgo is to build the same source files
as the opt builds.  So with luck we get sccache hits between the jobs.
Perhaps somebody with more knowledge of pgo and sccache can confirm or
refute that assertion?

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cookie policy/permission in live documents - proposal

2019-01-23 Thread Nicholas Alexander
Hi Andrea, others,

I am mostly ignorant of these matters, so please correct me when I'm wrong.

On Wed, Jan 23, 2019 at 5:24 AM Andrea Marchesini 
wrote:

> Hi all,
>
> When the cookie policy or a cookie permission changes, firefox applies the
> new behavior to any existing documents immediately. I would like to change
> this, applying the new behavior to new documents only.
>




> The reasons why I think we should not apply the new behavior to
> live-documents are this:
>
> *websites can break*
> If we go from BEHAVIOR_ACCEPT to BEHAVIOR_REJECT (or ACCEPT_DENY), all the
> storage APIs will start throwing exceptions (or reject promises). This is
> an unpredictable behavior which can break the website's logic and makes the
> storage APIs inconsistent. Note that this is different than blocking
> cookies since the document loading.
> Plus it doesn't give any extra security/privacy benefit: the website could
> have already copied cookie data into local variables and it can still send
> this data to the server.
> If we go from BEHAVIOR_REJECT to BEHAVIOR_ACCEPT (or ACCEPT_ALLOW),
> basically no website can "recover" itself without a reloading.
>




> subdocument content, stores the cookie behavior (permission + policy) for
> its principal at creation time. The policy is applied to the whole
> document's cookie jar and it doesn't change.
> For BEHAVIOR_REJECT_TRACKER, when the storage access is granted (see
> AccessStorage API and the anti-tracking heuristics), we recalculate the
> document's cookie policy.
>
> When the cookie policy or a cookie permission is changed, we inform the
> user that the current tabs must be reloaded in order to apply the new
> settings.
>

You pointed out one case of unpredictable behaviour: a website's logic
cannot preserve assumptions across the entire duration of its JS execution
context.  But if we don't apply the policy instantly, isn't the reverse
situation also possible?  When the browser tells me to reload a tab I
generally open a new tab and navigate back -- who knows what state I wanted
to keep would be lost.  Now suppose I do that and have the same origin in two
tabs.  In your situation the two tabs can have different policies for their
JS execution lifetimes, correct?  And some of these "cookie" mechanisms --
particularly localStorage -- are used for communicating between tabs,
IIUC.  I am not sure if cookies themselves matter between tabs, but perhaps
they do.  Now my two tabs might see subtly different data for the same
origin.  Is this an issue in theory?  Will it be an issue in the wild?
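
For example (plain JS, hypothetical page logic), with two same-origin tabs
under different policies:

// Tab A: document created while storage was allowed.
localStorage.setItem("cart", JSON.stringify({ items: 3 }));

// Tab B: same origin, document created after the policy flipped to
// BEHAVIOR_REJECT.
try {
  var cart = JSON.parse(localStorage.getItem("cart"));
} catch (e) {
  // This access throws, so Tab B never sees the data Tab A wrote, and
  // Tab A's "storage" events aren't delivered here either.
}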

I'm totally willing to hear that I don't understand our
document/origin/something else model.

Thanks!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Cookie policy/permission in live documents - proposal

2019-01-23 Thread Nicholas Alexander
On Wed, Jan 23, 2019 at 11:33 AM Andrea Marchesini 
wrote:

>
> You pointed out one case of unpredictable behaviour: a website's logic
>> cannot preserve assumptions across the entire duration of it's JS
>> execution
>> context.  But if we don't apply the policy instantly, isn't the reverse
>> situation also possible?
>
>
> With my proposal, you will have 2 tabs, loading the same origin with 2
> different cookie behaviors.
> Let's assume that one is BEHAVIOR_ACCEPT and the other one
> BEHAVIOR_REJECT, doesn't matter the order.
> The 2 tabs will not be able to communicate to each other because:
>
> - we don't dispatch storage events, and/or they will not considered by the
> other tab.
> - sessionStorage, localStorage, indexedDB, ... let's say storage APIs
> throw exceptions in the tab with BEHAVIOR_REJECT policy.
> - that tab will not be able to use APIs such as SharedWorkers, or
> BroadcastChannels.
>
> In general, we allow tab communication only if they have both
> BEHAVIOR_ACCEPT cookie policy (or the corresponding permission:
> ACCEPT_ALLOW).
>
> Note that what I'm describing here already exists for private browsing
> contexts which are unable to talk with same origins in normal contexts.
>

Ah -- this private browsing analogy clarifies some things for me.  For PB
we make private tabs be in a different window.  I don't know of that's a
choice or a technical limitation.  If it's a choice, should we be
separating tabs with different cookie behaviours in some way?  If we're
concerned about this being confusing (like private tabs mixed with
non-private), can we force existing tabs to close or reload right then so
there's never this dual state you're working to change?

Best,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Searchfox support for mobile/android just got a little better

2019-02-28 Thread Nicholas Alexander
Kats, others,

On Thu, Feb 28, 2019 at 9:49 AM Kartikaya Gupta  wrote:

> As of today Searchfox is providing C++/Rust analysis for
> Android-specific code. So stuff in widget/android and behind android
> ifdefs should be turning up when you search for symbols. Note that
> we're using data from an armv7 build, so code that is specific to
> aarch64 or x86 is not covered. That can be added easily if there's a
> need for it.
>
> But wait, there's more! We now host a new top-level repo,
> "mozilla-mobile" [1] that includes a bunch of stuff from the
> mozilla-mobile github org, where much of the mobile work happens these
> days. This includes focus, the reference-browser and much more. At the
> moment searchfox only provides text search over this codebase (not
> even blame yet), but we'll add more functionality to this repo as time
> permits.
>

Burying the lede: this got a *lot* better!  Thanks for making it happen!
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intermediate CA Preloading is enabled for desktop Nightly users

2019-03-14 Thread Nicholas Alexander
On Wed, Mar 13, 2019 at 2:23 PM J.C. Jones  wrote:

> Tom,
>
> Kinto provides the whole list of metadata to clients as soon as it syncs
> [1].  The metadata uses the Kinto attachment
>  mechanism to store the
> DER-encoded certificate for separate download.
>
> Firefox maintains a "local field" boolean in the dataset recording whether a
> given metadata entry's certificate attachment has been downloaded or not,
> toggling as each one is pulled. Currently we don't deduplicate with the
> local NSS Cert DB, the inserts that are already there will fail and emit
> telemetry -- the amount of data saved didn't seem worth it for the
> experimental phase.
>

J.C. -- I don't think this answers Tom's question, but perhaps it does.  In
that case I'll ask what I think is the same question:

How is the set of certificates that _might_ be pushed to clients
determined?  In some way we must determine a set of relevant intermediate
certificates: how do we determine that set?  Is it that the set of
intermediates for every CA that we trust is known?

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: To what extent is sccache's distributed compilation usable?

2019-04-01 Thread Nicholas Alexander
On Mon, Apr 1, 2019 at 1:03 PM Emilio Cobos Álvarez 
wrote:

> On 01/04/2019 21:40, Bobby Holley wrote:
> > My (possibly outdated) understanding is that the sccache distributed
> > compilation stuff is restricted to the build machines we control.
> Naively,
> > I'm not sure there's a secure way to have machines in the wild contribute
> > build artifacts, given the risk of malicious code.
> >
> > It seems plausible that we could at least allow read-only access to the
> > artifacts generated by the build machines, but that would presumably only
> > be useful if the developer's system configuration exactly matched that of
> > the build machine.
>
> Oh, the setup I was thinking of would be similar to how icecc works now,
> on a local network. So, for example, N developers in the Berlin office
> sharing a scheduler and build resources in the same network.
>
> I'm not 100% sure whether the distributed compilation part of sccache is
> only about sharing compilation artifacts, or also compiling stuff in
> remote machines (like icecc does), or both, but afaik even in a local
> network it could still be a pretty nice build-time / productivity
> improvement.
>

It does distribute compilation tasks to remote hosts.  At this time those
hosts must be running Linux (for the `bubblewrap` chroot-like package).
Months ago I configured this, following
https://github.com/mozilla/sccache/blob/master/docs/DistributedQuickstart.md,
for a Linux host and a macOS device.  Setting up the toolchains was
frustrating but do-able.  I did not get this working on the YVR office
build monster, just 'cuz I don't do a lot of native compilation.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Change to preferences via StaticPrefs, tremoval of gfxPrefs

2019-05-15 Thread Nicholas Alexander
On Wed, May 15, 2019 at 6:03 AM Jean-Yves Avenard 
wrote:

> Dear all.
>




> /Possibility to dynamically set a StaticPref on any threads (however,
> the changes aren't propagated to other processes; doing otherwise is
> certainly doable, I'm not convinced of the use case however)./
>

Forgive my rank ignorance here, but as an outsider this surprises me: I
expect "Gecko preferences" to be (eventually) consistent across processes.
Is this just not the case, and it's common for prefs to vary between the
main and content/gfx processes?  Is there a `user.js` equivalent for main
and child processes?

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Running source code analysis on Try

2019-05-15 Thread Nicholas Alexander
Bastien, others,

On Tue, May 7, 2019 at 9:38 AM Bastien Abadie  wrote:

> TL;DR: We are leveraging the try infrastructure to perform source code
> analysis (linters, static analyzers, etc). You can take advantage of this
> to trigger other try jobs on Phabricator revisions.
>

What is the status of this work?  I want to take advantage of the new
infrastructure to finally turn certain Android analysis tasks into proper
lints (see Bug 1512487 [1]) but the "Plan 4 Source Code Analysis" build
plan appears to both instantly fail and be opaque (see, for example [2]).
I believe that "reviewbot" was very unhappy last night; perhaps that is
related?

Even with setbacks, I'm excited for these changes to make it easier to
build analysis tasks!
Nick

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1512487
[2] https://phabricator.services.mozilla.com/harbormaster/build/89310/


>
> For more than a year, the Release Management & Quality team has been
> running the Code review bot
>  using several
> Mozilla in-house projects: Taskcluster  &
> release-services .
>
> Our process was simple at first: for every new patch on Mozreview, then
> Phabricator, we would run a few analyzers in a Taskcluster task. It worked
> well for some time, but did not provide the flexibility nor the performance
> needed to support more analyzers (clang-format, Infer, Coverity, …)
>
> So we decided to experiment with a more scalable architecture, using
> Treeherder/Try to run all necessary analyzers for your patches, in
> parallel.
>
> This move will give us a lot of benefits:
>
>-
>
>The analyzers themselves are well defined and exposed in
>mozilla-central, allowing us to use the exact same tools as our usual CI
>and the developers on their computers.
>-
>
>Better overall performance and stability (from mercurial clones to
>running analyzers safely in parallel)
>-
>
>A standard analyzer output in JSON will allow us to easily add new
>analyzers. These new implementations could even come from you!
>
>
> Over the last few months, we have been building the different pieces needed
> to move the bot on the Try infrastructure, and today, I’m glad to announce
> it’s running in production, on your revisions!
>
> Any new Diff on Phabricator should now have a build plan named Source Code
> Analysis .
> Phabricator triggers a code review as soon as the diff is published,
> automatically creating a new try job. You’ll get a “Treeherder Jobs” link
> next to the build plan when it’s created by the bot, allowing you to check
> on the analysis progress, and dive into the issues.
>
>
> You will also be able to use Treeherder to add new jobs, effectively making
> it possible to run try tests for Phabricator revisions in just a few
> clicks. On a Treeherder push, open its action menu on the right side (right
> after the Pin button). The "Add new jobs" options shows all the available
> jobs grouped by platform and test suite, and you can click to select and
> then submit. "Add new jobs (Search)" allows you to search in the job names
> like mach try fuzzy and add them.
>
> If the patch contains any issues, they will be reported as before, through
> inline comments. You can now restart a build if it fails (click on the
> Build, there is a “Restart Build” link in the right sidebar). Please note
> that secure revisions are not supported for now, but we are actively
> working on that (a build failure will be shown for secure revisions for the
> time being).
>
> We would like to thank the different people and teams involved in this
> effort: Andrew from the Automation team; Tom, Rail, Chris and Rok from
> Release Engineering; Dustin and Peter from the Taskcluster team; Steven,
> glob and David from the Engineering Workflow team.
>
> If you have any questions regarding this move or the code review bot, feel
> free to contact us by mail or on IRC #static-analyzers.
>
> Bastien, on the behalf of the Release Management and Quality team.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


[ann] Slides from Mozilla Android Bootcamp presentation

2019-06-19 Thread Nicholas Alexander
Hello folks,

As part of the June 2019 Whistler All Hands
, I delivered a
presentation titled "Android Bootcamp" for Gecko/platform engineers working
on Android.  It's a 10,000 foot view of Mozilla's Android ecosystem and how
to get started building and running Gecko and GeckoView on Android.

Here are the publicly available slides

.

Questions?  dev-platform (this mailing list) is best, but #mobile in IRC
works too.

The session was videotaped and I am told it will be available on
air.mozilla.org but I don't know when it will be posted.

Yours,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: [ann] Slides from Mozilla Android Bootcamp presentation

2019-06-26 Thread Nicholas Alexander
Hello all,

On Wed, Jun 19, 2019 at 10:19 AM Nicholas Alexander 
wrote:

> Hello folks,
>
> As part of the June 2019 Whistler All Hands
> <https://wiki.mozilla.org/All_Hands/Whistler2019>, I delivered a
> presentation titled "Android Bootcamp" for Gecko/platform engineers working
> on Android.  It's a 10,000 foot view of Mozilla's Android ecosystem and how
> to get started building and running Gecko and GeckoView on Android.
>
> Here are the publicly available slides
> <https://docs.google.com/presentation/d/1MzU9q2wCwojC0kb1eVfma8hrQ-KayCRFFd_mV5Gx1F4/edit#slide=id.g37695b23f5_0_10>
> .
>

Many people reached out to inform me that the slide footer said "Mozilla
Confidential".  Nothing here is confidential and I have removed the
footer.  Sorry for the confusion, and many thanks to the folks who informed
me of the footer.


> The session was videotaped and I am told it will be available on
> air.mozilla.org but I don't know when it will be posted.
>

It is available on air.mozilla.org now:
https://onlinexperiences.com/Launch/Event.htm?ShowKey=44908&DisplayItem=E333861,
but I think that the recording is private to Mozilla.  The quality of the
recording is not great -- somebody kept walking in front of the projector
-- so I intend to record a webinar version.  More if and when I get to it!

Yours,
Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

