IndexedDB access in third-party iFrames disabled or not?

2016-01-25 Thread jason
Is IndexedDB supposed to be accessible in third-party iFrames? From what I 
understand, the MDN documentation says IndexedDB is *not* accessible in 
third-party iFrames, but from my own tests, I have been fully able to use 
IndexedDB in third-party iFrames.

The MDN documentation link: 
https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB#Security
Search for "It's important to note that IndexedDB doesn't work for content 
loaded into a frame"

MDN links to RESOLVED FIXED Bugzilla 595307: 
https://bugzilla.mozilla.org/show_bug.cgi?id=595307

I guess I don't really understand what the status is. I'm able to use IndexedDB 
in third-party iFrames, but maybe I shouldn't be able to? What are Firefox's 
plans for IndexedDB in third-party iFrames? 

If necessary, I can upload some test code. Hopefully this is the right place to 
ask, thanks for reading this!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: IndexedDB access in third-party iFrames disabled or not?

2016-01-25 Thread jason
Thank you so much for the prompt reply Kyle! Everything has been cleared up now!


Re: Race Cache With Network experiment on Nightly

2017-05-25 Thread Jason Duell
I'm worried we're going from too little process here to too much (at least
for this bug).  Opening a meta-bug + 4 sub-bugs and doing a legal review,
etc., is a lot of overhead to test some network plumbing that is not going
to be especially noticeable to users.

Also, we expect that this code will mostly benefit users with slow hardware
(disk drives especially).  We'll need to cast a very wide net to get
nightly users that match that profile.  The Shield docs say that
"participation for Shield Studies is currently around 1-2% of randomly
selected participants" (does that map to 1-2% of nightly users?), so I'm
not sure we'd get enough coverage if we used Shield.

Jason

On Thu, May 25, 2017 at 11:30 AM,  wrote:

> Hey folks. I run the Shield team. Pref flipping experiments ARE available
> on Nightly and will be available in all channels (including Release) at
> some point in Firefox 54.
>
> Since the process is still relatively new, I've been hacking on some how
> to docs: https://docs.google.com/document/d/16bpDZGCPKrOIgkkIo5mWKHPTlYXOatyg_-CUi-3-e54/edit#heading=h.mzzhkdagng85
>
> Feel free to give those a spin. Feedback on the docs/process is welcome.
>
> On Wednesday, May 24, 2017 at 6:14:55 PM UTC-7, Patrick McManus wrote:
> > a howto for a pref experiment would be awesome..
> >
> > On Wed, May 24, 2017 at 9:03 PM, Eric Rescorla  wrote:
> >
> > > What's the state of pref experiments? I thought they were not yet
> ready.
> > >
> > > -Ekr
> > >
> > >
> > > On Thu, May 25, 2017 at 7:15 AM, Benjamin Smedberg <
> benja...@smedbergs.us>
> > > wrote:
> > >
> > > > Is there a particular reason this is landing directly to nightly
> rather
> > > > than using a pref experiment? A pref experiment is going to provide
> much
> > > > more reliable comparative data. In general we're pushing everyone to
> use
> > > > controlled experiments for nightly instead of landing experimental
> work
> > > > directly.
> > > >
> > > > --BDS
> > > >
> > > > On Wed, May 24, 2017 at 11:36 AM, Valentin Gosu <
> valentin.g...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > As part of the Quantum Network initiative we are working on a
> project
> > > > > called "Race Cache With Network" (rcwn) [1].
> > > > >
> > > > > This project changes the way the network cache works. When we
> detect
> > > that
> > > > > disk IO may be slow, we send a network request in parallel, and we
> use
> > > > the
> > > > > first response that comes back. For users with slow spinning disks
> and
> > > a
> > > > > low latency network, the result would be faster loads.
> > > > >
> > > > > This feature is currently preffed off - network.http.rcwn.enabled
> > > > > In bug 1366224, which is about to land on m-c, we plan to enable
> it on
> > > > > nightly for one or two days, to get some useful telemetry for our
> > > future
> > > > > work.
> > > > >
> > > > > For any crashes or unexpected behaviour, please file bugs blocking
> > > > 1307504.
> > > > >
> > > > > Thanks!
> > > > >
> > > > > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=rcwn
> > > > > [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1366224


Re: Race Cache With Network experiment on Nightly

2017-05-25 Thread Jason Duell
> I think you were looking at the docs for opt-in Shield studies

Ah, right you are :)

OK, I've filed a bug for the pref study:

   https://bugzilla.mozilla.org/show_bug.cgi?id=1367951

Thanks!

Jason

On Thu, May 25, 2017 at 2:27 PM, Matthew Grimes  wrote:

> I think you were looking at the docs for opt-in Shield studies
> (experiments deployed as add-ons), not for pref flipping experiments. Due
> to the nature of some of the opt-in studies we run they require a different
> approval process. Pref flipping is available for all users, it is not
> opt-in. The process currently requires one bug and an email to release
> drivers. Feedback on the doc/process is always welcome!


Re: Is Quantum DOM affecting DevTools?

2017-09-28 Thread Jason Duell
We did do some work to slow down network loads in background tabs (in order
to prioritize the active tab).  Is the issue that you need to switch to the
background tab in order to make it connect faster?  Or does the foreground
tab not load quickly unless you switch back and forth from it?  I'm hoping
the former.

If you've got reliable STR it's probably time to open a bug and cc me
(:jduell), :mcmanus and Honza (:mayhemer).

Jason

On Thu, Sep 28, 2017 at 10:19 AM, Bill McCloskey 
wrote:

> If that's caused by anything Quantum-related, it's more likely to be
> Quantum networking stuff.
> -Bill
>
> On Thu, Sep 28, 2017 at 2:30 AM, Salvador de la Puente <
> sdelapue...@mozilla.com> wrote:
>
> > Hello there!
> >
> > I was testing some WebRTC demos in two separate tabs in Nightly. I
> realized
> > I needed to switch from one to another for them to connect faster. I
> > thought it would be related to Quantum DOM throttling and I was
> > wondering...
> >
> >- Is it possible that throttling would be affecting DevTools?
> >- Would it be possible to disable throttling if DevTools are open?
> >
> > What do you think?
> >


Yes=0, No=1

2018-07-12 Thread Jason Orendorff
The codebase has a few bool-like enum classes like this:

enum class HolodeckSafetyProtocolsEnabled {
  Yes,
  No
};

Note that `bool(HolodeckSafetyProtocolsEnabled::Yes)` is false.

...This is bad, right? Asking for a friend.

-j


Re: Intent to unship: jar: URIs from content

2015-10-15 Thread Jason Duell
OMG yes please.

Jason

On Thu, Oct 15, 2015 at 11:31 AM, Ehsan Akhgari 
wrote:

> On 2015-10-15 1:58 PM, Ehsan Akhgari wrote:
>
>> We currently support URLs such as
>> jar:http://mxr.mozilla.org/mozilla-central/source/modules/libjar/test/mochitest/bug403331.zip?raw=1&ctype=application/java-archive!/test.html.
>>   This is a Firefox specific feature that no other engine implements,
>> and it increases our attack surface unnecessarily.  As such, I would
>> like to put it behind a pref and disable it for Web content by default.
>>
>
> FWIW I filed bug 1215235 for this.  We'll wait for this discussion before
> landing code there.
>
>



-- 

Jason


Re: about:profiles and the new profile manager

2015-12-18 Thread Jason Duell
I think we need to compact the new UI.  The old profile manager shows me up
to 5 profiles that I can simply click on to launch the browser.  With the
new UI the info for the first profile fills up the whole popup window, so
launching other profiles now requires me to scroll down, then click.  That
sounds minor (and in some sense it is) but 95% of the time the profile
manager is launched to select which profile to run (versus trying to find
where the profile is stored on disk, etc), so making that a slower
experience seems like a step backward.

It's great to see it running as a normal tab, though!

Do we have any plans to make Profile manager more prominent in our user
experience?  I've used it for years now to keep one instance of firefox
running with work stuff, and one for personal use.  But it remains a buried
secret, when it's really handy for a lot of use cases (people sharing a
computer, etc).  I know a lot of people who wind up using different
browsers to achieve the same thing, which seems like a waste.

Jason


On Fri, Dec 18, 2015 at 12:54 PM, Andrea Marchesini  wrote:

> >
> >
> > The replacement for the ProfileManager probably needs some UX work,
> > though.  It was not clear to me which profile was actually going to be
> > launched if I clicked the "Start Nightly" button.
> >
> >
> Right. This is one of the bugs I'm working on (bug 1233032). The new UI has
> been written but, definitely, we still need some UX work.



-- 

Jason


Re: Too many oranges!

2015-12-22 Thread Jason Duell
On Tue, Dec 22, 2015 at 11:38 AM, Ben Kelly  wrote:

>
> I'd rather see us do:
>
> 1) Raise the visibility of oranges.  Post the most frequent intermittents
> without an owner to dev-platform every N days.
> 2) Make its someone's job to find owners for top oranges.  I believe RyanVM
> used to do that, but not sure if its still happening now that he has
> changed roles.
>
> Ben
>
>
I'm with bkelly on this one.  Maybe with some additional initial messaging
("War on Orange raised in priority!") too.  I don't think we want to pivot
all work in platform for this.

Jason



>
> >
> > On Tue, Dec 22, 2015 at 7:41 AM Mike Conley  wrote:
> >
> > > I would support scheduled time[1] to do maintenance[2] and help improve
> > our
> > > developer tooling and documentation. I'm less sure how to integrate
> such
> > a
> > > thing in practice.
> > >
> > > [1]: A day, a week, heck maybe even a release cycle
> > > [2]: Where maintenance is fixing oranges, closing out papercuts,
> > > refactoring, etc.
> > >
> > > On 21 December 2015 at 17:35,  wrote:
> > >
> > > > On Monday, December 21, 2015 at 1:16:13 PM UTC-6, Kartikaya Gupta
> > wrote:
> > > > > So, I propose that we create an orangefactor threshold above which
> > the
> > > > > tree should just be closed until people start fixing intermittent
> > > > > oranges. Thoughts?
> > > > >
> > > > > kats
> > > >
> > > > How about regularly scheduled test fix days where everyone drops what
> > > they
> > > > are doing and spends a day fixing tests? mc could be closed to
> > everything
> > > > except critical work and test fixes. Managers would be able to opt
> > > > individuals out of this as needed but generally everyone would be
> > > expected
> > > > to take part.
> > > >
> > > > Jim



-- 

Jason


Re: nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Jason Duell
Cameron,

The way the builtin protocols (HTTP/FTP/Websockets/etc) handle this is that
the protocol handler code checks whether we're in a child process or not
when a channel is created, and we hand out different things depending on
that.  In the parent, we hand out a "good old HTTP channel" (nsHttpChannel)
just as we've always done in single-process Firefox.  In the child we hand
out a stub channel (HttpChannelChild) that looks and smells like an
nsIHttpChannel, but actually uses IPDL (our C++ cross-process messaging
language) to essentially shunt all the real work to the parent.  When
AsyncOpen is called on the child, the stub channel winds up telling the
parent to create a regular "real" http channel, which does the actual work
of creating/sending an HTTP request, and as the reply comes back, sending
the data to the child, which dispatches OnStart/OnData/OnStopRequest
messages as they arrive from the parent.

One key ingredient here is to make sure all cross-process communication is
asynchronous whenever possible (and that should be 95%+ of the time).  You
want to avoid blocking the child process waiting for synchronous
cross-process communication (and you're not allowed to block the parent
waiting to the child to respond).  You also generally want to have the
parent channel send all the relevant data to the child that it will need to
service all nsI[Foo]Channel requests, as opposed to doing a "remote object"
style approach (where you'd send a message off to the parent process to ask
the "real" channel for the answer).  This is both because 1) that would be
painfully slow, and 2) the parent and child objects may not be in the same
state. For instance, if client code calls channel.isPending() on a child
channel that hasn't dispatched OnStopRequest yet, the answer should be
'true'.  But if you ask the parent channel, it may have already hit
OnStopRequest and sent the data for that to the child (where it's waiting
to be dispatched).  So for instance, HTTP channels ship the entire set of
HTTP response headers to the child as part of receiving OnStartRequest from
the parent, so that they can service any GetResponseHeader() calls without
asking the parent.

From talking to folks who know JS better than I do, it sounds like the
mechanism you'll want to use for all your cross-process communication is
the Message Manager:


https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox/Message_Manager

https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox/Message_Manager/Message_manager_overview

http://mxr.mozilla.org/mozilla-central/source/dom/base/nsIMessageManager.idl?force=1#15

One difference between C++/IPDL and JS/MM  is that IPDL has the builtin
concept of an IPDL "channel": it's like a pipe you set up.  Each C++ necko
channel in e10s sets up its own IPDL 'channel' (which is really just a
unique ID under the covers).  So when, for instance, an OnDataAvailable
message gets sent from the parent to the child, we automatically know which
necko channel it belongs to (from the IPDL channel it arrives on).  The
Message Manager's messages are more like DOM events--there's no notion of a
channel that they belong to, so you'll need to include as part of the
message some kind of ID that you map back to the necko channel that it's
for (I'd say use the URI, but that wouldn't work if you've got multiple
channels open to the same URI.  So you'll probably assign each channel a
GUID and keep a hashtable on both the parent and child that lets you map
from GUID->channel.

I'm happy to help some more off-list with examples of how our current
protocols handle various things as you have questions.  Hopefully this will
get you started, along with this inspirational gopher:// video:

   https://www.youtube.com/watch?v=WaSUyYSQie8

:)

Jason

On Mon, Jan 4, 2016 at 4:03 PM, Cameron Kaiser  wrote:

> On 1/4/16 12:09 PM, Dave Townsend wrote:
>
>> On Mon, Jan 4, 2016 at 12:03 PM, Cameron Kaiser 
>> wrote:
>>
>>> What's different about nsIProtocolHandler in e10s? OverbiteFF works in 45
>>> aurora without e10s on, but fails to recognize the protocol it defines
>>> with
>>> e10s enabled. There's no explanation of this in the browser console and
>>> seemingly no error. Do I have to do extra work to register the protocol
>>> handler component, or is there some other problem? A cursory search of
>>> MDN
>>> was no help.
>>>
>> Assuming you are registering the protocol handler in chrome.manifest
>> it will only be registered in the parent process but you will probably
>> need to register it in the child process too and make it do something
>> sensible in each case. You'll have to do that with JS in a frame or
>

Re: Coding style for C++ enums

2016-04-12 Thread Jason Orendorff
On Mon, Apr 11, 2016 at 7:58 PM, Jeff Gilbert  wrote:

> On Mon, Apr 11, 2016 at 4:00 PM, Bobby Holley 
> wrote:
> > On Mon, Apr 11, 2016 at 2:12 PM, Jeff Gilbert 
> wrote:
> >> I think the whole attempt is
> >> increasingly a distraction vs the alternative, and I say this as
> >> someone who writes and reviews under at least three differently-styled
> >> code areas. The JS engine will, as ever, remain distinct
> >
> > Jason agreed to converge SM style with Gecko.
>
> The SM engineers I talked to are dismissive of this. I will solicit
> responses.
>

For the record:

SM is converging to Gecko style, at the rate that people contribute patches
to re-style our existing code. I.e., very slowly.

The one exception is indentation level: enough SM devs prefer 4-space
indents that we aren't changing that without further (internal) discussion.

-j


Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies

2016-04-15 Thread Jason Duell
On Thu, Apr 14, 2016 at 10:54 PM, Chris Peterson 
wrote:

>
> Focusing on third-party session cookies is an interesting idea.
> "Sessionizing" non-HTTPS third-party cookies would encourage ad networks
> and CDNs to use HTTPS, allowing content sites to use HTTPS without mixed
> content problems. Much later, we could consider sessionizing even HTTPS
> third-party cookies.
>

How about we sessionize only 3rd party HTTP cookies from sites that are on
our tracking protection list?  That seems the most targeted way to
encourage ad networks to bump up to HTTPS with a minimal amount of
collateral damage to other users of 3rd party HTTP cookies.

> We seem to have this already: network.cookie.thirdparty.sessionOnly

Correct, that's what it does.

Jason



>
> On 4/14/16 1:54 AM, Chris Peterson wrote:
>
>> Summary: Treat cookies set over non-secure HTTP as session cookies
>>
>> Exactly one year ago today (!), Henri Sivonen proposed [1] treating
>> cookies without the `secure` flag as session cookies.
>>
>> PROS:
>>
>> * Security: login cookies set over non-secure HTTP can be sniffed and
>> replayed. Clearing those cookies at the end of the browser session would
>> force the user to log in again next time, reducing the window of
>> opportunity for an attacker to replay the login cookie. To avoid this,
>> login-requiring sites should use HTTPS for at least their login page
>> that set the login cookie.
>>
>> * Privacy: most ad networks still use non-secure HTTP. Content sites
>> that use these ad networks are prevented from deploying HTTPS themselves
>> because of HTTP/HTTPS mixed content breakage. Clearing user-tracking
>> cookies set over non-secure HTTP at the end of every browser session
>> would be a strong motivator for ad networks to upgrade to HTTPS, which
>> would unblock content sites' HTTPS rollouts.
>>
>> However, my testing of Henri's original proposal shows that too few
>> sites set the `secure` cookie flag for this to be practical. Even sites
>> that primarily use HTTPS, like google.com, omit the `secure` flag for
>> many cookies set over HTTPS.
>>
>> Instead, I propose treating all cookies set over non-secure HTTP as
>> session cookies, regardless of whether they have the `secure` flag.
>> Cookies set over HTTPS would be treated as "secure so far" and allowed
>> to persist beyond the current browser session. This approach could be
>> tightened so any "secure so far" cookies later sent over non-secure HTTP
>> could be downgraded to session cookies. Note that Firefox's session
>> restore will persist "session" cookies between browser restarts for the
>> tabs that had been open. (This is "eternal session" feature/bug 530594.)
>>
>> To test my proposal, I loaded the home pages of the Alexa Top 25 News
>> sites [2]. These 25 pages set over 1300 cookies! Fewer than 200 were set
>> over HTTPS and only 7 had the `secure` flag. About 900 were third-party
>> cookies. Treating non-secure cookies as session cookies means that over
>> 1100 cookies would be cleared at the end of the browser session!
>>
>> CONS:
>>
>> * Sites that allow users to configure preferences without logging into
>> an account would forget the users' preferences if they are not using
>> HTTPS. For example, companies that have regional sites would forget the
>> user's selected region at the end of the browser session.
>>
>> * Ad networks' opt-out cookies (for what they're worth) set over
>> non-secure HTTP would be forgotten at the end of the browser session.
>>
>> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1160368
>>
>> Link to standard: N/A
>>
>> Platform coverage: All platforms
>>
>> Estimated or target release: Firefox 49
>>
>> Preference behind which this will be implemented:
>> network.cookie.lifetime.httpSessionOnly
>>
>> Do other browser engines implement this? No
>>
>> [1]
>>
>> https://groups.google.com/d/msg/mozilla.dev.platform/xaGffxAM-hs/aVgYuS3QA2MJ
>>
>> [2] http://www.alexa.com/topsites/category/Top/News
>>
>



-- 

Jason


Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies

2016-04-15 Thread Jason Duell
On Fri, Apr 15, 2016 at 2:12 AM, Jason Duell  wrote:

> On Thu, Apr 14, 2016 at 10:54 PM, Chris Peterson 
> wrote:
>
>>
>> Focusing on third-party session cookies is an interesting idea.
>> "Sessionizing" non-HTTPS third-party cookies would encourage ad networks
>> and CDNs to use HTTPS, allowing content sites to use HTTPS without mixed
>> content problems. Much later, we could consider sessionizing even HTTPS
>> third-party cookies.
>>
>
> How about we sessionize only 3rd party HTTP cookies from sites that are on
> our tracking protection list?  That seems the most targeted way to
> encourage ad networks to bump up to HTTPS with a minimal amount of
> collateral damage to other users of 3rd party HTTP cookies.
>

(We could presumably keep a list of CDNs too and sessionize those as well)

Jason





Re: Out parameters, References vs. Pointers (was: Proposal: use nsresult& outparams in constructors to represent failure)

2016-04-21 Thread Jason Orendorff
More evidence that our coding conventions need an owner...

-j


On Wed, Apr 20, 2016 at 10:07 PM, Kan-Ru Chen (陳侃如) 
wrote:

> Nicholas Nethercote  writes:
>
> > Hi,
> >
> > C++ constructors can't be made fallible without using exceptions. As a
> result,
> > for many classes we have a constructor and a fallible Init() method
> which must
> > be called immediately after construction.
> >
> > Except... there is one way to make constructors fallible: use an
> |nsresult&
> > aRv| outparam to communicate possible failure. I propose that we start
> doing
> > this.
>
> Current coding style guidelines suggest that out parameters should use
> pointers instead of references. The suggested |nsresult&| will be
> consistent with |ErrorResult&| usage from DOM but against many other out
> parameters, especially XPCOM code.
>
> Should we special case that nsresult and ErrorResult as output
> parameters should always use references, or make it also the default
> style for out parameters?
>
> I think this topic has been discussed before but didn't reach a
> consensus. Given the recent effort to make the code use a somewhat
> consistent style, should we expand on this in the wiki?
>
>Kanru
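The pattern under discussion — a constructor made fallible through an |nsresult&| outparam — can be sketched as below. The nsresult values are simplified stand-ins, not Gecko's actual definitions:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Simplified stand-ins for Gecko's nsresult codes.
using nsresult = uint32_t;
constexpr nsresult NS_OK = 0;
constexpr nsresult NS_ERROR_OUT_OF_MEMORY = 0x8007000E;

// A constructor made fallible via an |nsresult&| outparam, replacing the
// usual constructor-plus-fallible-Init() pattern.
class Buffer {
 public:
  Buffer(size_t aSize, nsresult& aRv) : mSize(aSize) {
    // Pretend that requests above some limit fail.
    aRv = (aSize > 1024) ? NS_ERROR_OUT_OF_MEMORY : NS_OK;
  }
  size_t Size() const { return mSize; }

 private:
  size_t mSize;
};

// The caller checks the outparam immediately after construction.
nsresult MakeBuffer(size_t aSize) {
  nsresult rv = NS_OK;
  Buffer buffer(aSize, rv);
  return rv;
}
```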


Re: Static analysis for "use-after-move"?

2016-05-02 Thread Jason Orendorff
On Sun, May 1, 2016 at 7:39 PM, Gerald Squelart  wrote:

> Thinking of it, I suppose lots (all?) of these optimized content-stealing
> actions could be done through differently-named methods (e.g. 'Take()'), so
> they could not possibly be confused with C++ move semantics.
>

Yes. I think that will make for better code.

So I could live with a communal decision to forbid any use-after-move ever.
>

Good! Let's do it.

-j
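A sketch of the "differently-named methods" idea mentioned above: an explicit Take() leaves the source in a documented empty state, so it cannot be confused with a raw move and there is no use-after-move ambiguity. The class and names are illustrative:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Content stealing through an explicitly named Take() rather than a raw
// std::move: the post-state is part of the contract, not the "valid but
// unspecified" state of a moved-from object.
class Holder {
 public:
  explicit Holder(std::string aValue) : mValue(std::move(aValue)) {}

  // Steal the content; the holder is guaranteed empty afterwards.
  std::string Take() {
    std::string out = std::move(mValue);
    mValue.clear();  // explicit, documented post-state
    return out;
  }

  bool IsEmpty() const { return mValue.empty(); }

 private:
  std::string mValue;
};

// Using the holder after Take() is well-defined, unlike use-after-move.
bool TakeLeavesHolderEmpty() {
  Holder holder(std::string("payload"));
  std::string stolen = holder.Take();
  return stolen == "payload" && holder.IsEmpty();
}
```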


Re: ICU proposing to drop support for WinXP (and OS X 10.6)

2016-05-03 Thread Jason Orendorff
It's not just strange. It's against Ecma's explicit organization-wide
policy.

-j

On Tue, May 3, 2016 at 1:13 AM, Anne van Kesteren  wrote:

> On Tue, May 3, 2016 at 2:17 AM, Jeff Walden  wrote:
> > Using a library to do certain things we do other ways right now, in
> sometimes inferior fashion, doesn't seem inherently objectionable to me. So
> long as the library's internal decisions don't bleed too far into the
> visible API, which I don't believe they did here.
>
> Note that the ECMA-402 folks have hinted that if Microsoft adopted ICU
> it would make it much easier to expose a bunch of new stuff and have
> been pushing Microsoft in that direction. Were that to happen you have
> software monoculture, bugs that cannot be fixed, etc. It's especially
> strange that a standards body is encouraging this kind of thing.
>
>
> --
> https://annevankesteren.nl/


How should I measure responsiveness?

2016-05-17 Thread Jason Orendorff
Hi everyone.

I'm trying to figure out how to measure the effects of a possible change
Morgan Phillips is making to the Slow Script dialog.[1] One specific thing
we want to measure is "responsiveness" in the few seconds after a user
chooses to stop a slow script. Whatever "responsiveness" means.

We have some Telemetry probes that seem related to responsiveness[2][3],
but I... can't really tell what they measure. The blurb on
telemetry.mozilla.org is not always super helpful.

Also I'm probably missing stuff. I don't see anything related to
frames-per-second while scrolling, for example.

Who knows more? Which Telemetry probes (or internal-to-Gecko
measurements/stats) are relevant to what users call "responsiveness"? Is
there one in particular that you've used in the past? Where can I get more
info?

Thanks,
-j

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1270729
[2]: http://mzl.la/1Nxj8BL EVENTLOOP_UI_ACTIVITY_EXP_MS
[3]: http://mzl.la/1YzQQXO INPUT_EVENT_RESPONSE_MS


Re: How should I measure responsiveness?

2016-05-18 Thread Jason Orendorff
On Tue, May 17, 2016 at 4:13 PM, Jason Orendorff 
wrote:

> I'm trying to figure out how to measure the effects of a possible change
> Morgan Phillips is making to the Slow Script dialog.[1] One specific thing
> we want to measure is "responsiveness" in the few seconds after a user
> chooses to stop a slow script. Whatever "responsiveness" means.
>

Summarizing the replies, since they all came to me personally somehow :)

*   Everyone agrees Morgan and I will need to make a custom probe
(as expected) if we want to correlate anything with the Slow Script
dialog.

*   We don't have a "responsiveness index", but several people have
found INPUT_EVENT_RESPONSE_MS useful.

*   We don't have a Telemetry probe for dropping graphics frames.

-j


Re: The Whiteboard Tag Amnesty

2016-06-08 Thread Jason Duell
Emma,

> it's not an indexed field or a real tag system, making it hard to parse,
search, and update.

Could we dig into details a little more here?  I assume we could add a
database index for the whiteboard field if performance is an issue.
Do we give keywords an enum value or something (so bugzilla can
index/search them faster)?  I'm not clear on what a "real tag system" means
concretely here.

thanks!

Jason

On Wed, Jun 8, 2016 at 3:29 PM, Emma Humphries  wrote:

> There is a ticket for a "proper" tag system,
> https://bugzilla.mozilla.org/show_bug.cgi?id=1266609 which given time, I'd
> like to get implemented, but with limited resources, this is what I can do
> now.
>
> On Wed, Jun 8, 2016 at 2:08 PM, Patrick McManus 
> wrote:
>
> > as you note the whiteboard tags are permissionless. That's their killer
> > property. Keywords as you note are not, that's their critical weakness.
> >
> > instead of fixing that situation in the "long term" can we please fix
> that
> > as a precondition of converting things? Mozilla doesn't need more
> > centralized systems. If they can't be 100% automated to be permissionless
> > (e.g. perhaps because they don't scale) then the new arrangement of
> things
> > is definitely worse.
> >
> > I'll note that even for triage, our eventual system evolved rapidly and
> > putting an administrator in the middle to add and drop keywords and
> > indicies would have just slowed stuff down. Permissionless to me is a
> > requirement.
> >
> >
> > On Wed, Jun 8, 2016 at 2:43 PM, Kartikaya Gupta 
> > wrote:
> >
> >> What happens after June 24? Is the whiteboard field going to be removed?
> >>
> >> On Wed, Jun 8, 2016 at 4:32 PM, Emma Humphries 
> wrote:
> >> > tl;dr -- nominate whiteboard tags you want converted to keywords. Do
> it
> >> by
> >> > 24 June 2016.
> >> >
> >> > We have a love-hate relationship with the whiteboard field in
> bugzilla.
> >> On
> >> > one hand, we can add team-specific meta data to a bug. On the other
> >> hand,
> >> > it's not an indexed field or a real tag system, making it hard to
> parse,
> >> > search, and update.
> >> >
> >> > But creating keywords is a hassle since you have to request them.
> >> >
> >> > The long term solution is to turn whiteboard into proper tag system,
> but
> >> > the Bugzilla Team's offering to help with some bulk conversion of
> >> > whiteboard tags your teams use into keywords.
> >> >
> >> > To participate:
> >> >
> >> > 1. Create a Bug in the bugzilla.mozilla.org::Administration component
> >> for each
> >> > whiteboard tag you want to convert.
> >> >
> >> > 2. The bug's description should have the old keyword, the new keyword
> >> you
> >> > want to replace it with, and the description of this new keyword which
> >> will
> >> > appear in the online help.
> >> >
> >> > 3. Make sure your keyword doesn't conflict with existing keywords, so
> be
> >> > prepared to rename it. If your keyword is semantically similar to an
> >> > existing keyword or other existing bugzilla field we'll talk you
> about a
> >> > mass change to your bugs.
> >> >
> >> > 4. Make the parent bug,
> >> https://bugzilla.mozilla.org/show_bug.cgi?id=1279022,
> >> > depend on your new bug.
> >> >
> >> > 5. CC Emma Humphries on the bug
> >> >
> >> > We will turn your whiteboard tag into a keyword and remove your old
> tag
> >> > from the whiteboard tags, so make sure your dashboards and other tools
> >> that
> >> > consume Bugzilla's API are updated to account for this.
> >> >
> >> > Please submit your whiteboard fields to convert by Friday 24 June
> 2016.
> >> >
> >> > Cheers,
> >> >
> >> > Emma Humphries
> >> ___
> >> firefox-dev mailing list
> >> firefox-...@mozilla.org
> >> https://mail.mozilla.org/listinfo/firefox-dev
> >>
> >
> >
>



-- 

Jason


Re: Basic Auth Prevalence (was Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies)

2016-06-10 Thread Jason Duell
This data also smells weird to me.  8% of pages using basic auth seems very
very high, and only 0.7% of basic auth being done unencypted seems low.

Perhaps we should chat in London (ideally with Honza Bambas) and make sure
we're getting the telemetry right here.

Jason

On Fri, Jun 10, 2016 at 2:15 PM, Adam Roach  wrote:

> On 4/18/16 09:59, Richard Barnes wrote:
>
>> Could we just disable HTTP auth for connections not protected with TLS?
>> At
>> least Basic auth is manifestly insecure over an insecure transport.  I
>> don't have any usage statistics, but I suspect it's pretty low compared to
>> form-based auth.
>>
>
> As a follow up from this: we added telemetry to answer the exact question
> about how prevalent Basic auth over non-TLS connections was. Now that 49 is
> off Nightly, I pulled the stats for our new little counter.
>
> It would appear telemetry was enabled for approximately 109M page
> loads[1], of which approximately 8.7M[2] used HTTP auth -- or approximately
> 8% of all pages. (This is much higher than I expected -- approximately 1
> out of 12 page loads uses HTTP auth? It seems far less dead than we
> anticipated).
>
> 749k of those were unencrypted basic auth[2]; this constitutes
> approximately 0.7% of all recorded traffic.
>
> I'll look at the 49 Aurora stats when it has enough data -- it'll be
> interesting to see how much if it is nontrivially different.
>
> /a
>
>
> [1]
> https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0&end_date=2016-06-06&keys=__none__!__none__!__none__&max_channel_version=nightly%252F49&measure=HTTP_PAGELOAD_IS_SSL&min_channel_version=null&product=Firefox&sanitize=1&sort_keys=submissions&start_date=2016-05-04&table=0&trim=1&use_submission_date=0
>
> [2]
> https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0&end_date=2016-06-06&keys=__none__!__none__!__none__&max_channel_version=nightly%252F49&measure=HTTP_AUTH_TYPE_STATS&min_channel_version=null&product=Firefox&sanitize=1&sort_keys=submissions&start_date=2016-05-04&table=0&trim=1&use_submission_date=0
>
>
> --
> Adam Roach
> Principal Platform Engineer
> Office of the CTO



-- 

Jason
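The shares quoted in this thread can be sanity-checked with rough arithmetic; the counts below are rounded from the post, not exact telemetry values:

```cpp
#include <cassert>
#include <cmath>

// Percentage share of page loads, given a part and a total.
double Share(double aPart, double aTotal) { return 100.0 * aPart / aTotal; }

// ~8.7M HTTP-auth loads out of ~109M total  -> roughly 8%.
// ~749k unencrypted basic-auth loads out of ~109M -> roughly 0.7%.
```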


Re: Linux content sandbox tightened

2016-10-07 Thread Jason Duell
It sounds like this is going to break all file:// URI accesses until we
finish implementing e10s support for them:

  https://bugzilla.mozilla.org/show_bug.cgi?id=922481

That may be more bustage on nightly than is acceptable?

Jason


On Fri, Oct 7, 2016 at 9:49 AM, Gian-Carlo Pascutto  wrote:

> Hi all,
>
> the next Nightly build will have a significantly tightened Linux
> sandbox. Writes are no longer allowed except to shared memory (for IPC),
> and to the system TMPDIR (and we're eventually going to get rid of the
> latter, perhaps with an intermediate step to a Firefox-content-specific
> tmpdir).
>
> There might be some compatibility fallout from this. Extensions/add-ons
> that try to write from the content process will no longer work, but the
> impact there should be limited given that similar (and stricter)
> restrictions have been tried out on macOS. (See bug 1187099 and bug
> 1288874 for info/discussion). Because Firefox currently still loads a
> number of external libraries into the content process (glib, gtk,
> pulseaudio, etc) there is some risk of breakage there as well. You know
> where to report (Component: Security - Process Sandboxing).
>
> This behavior can be controlled via a pref:
> pref("security.sandbox.content.level", 2);
>
> Reverting this to 1 goes back to the previous behavior where the set of
> allowable system calls is restricted, but no filtering happens on
> filesytem IO.
>
> When Firefox is built with debugging enabled, it will log any policy
> violations. Currently, a clean Nightly build will show some of those.
> They are inconsequential, and we'll deal with them, eventually. (Patches
> welcome though!)
>
> --
> GCP



-- 

Jason


Re: Linux content sandbox tightened

2016-10-07 Thread Jason Duell
Never mind--file:// only does reads.

Haven't had my coffee yet this morning :)

Jason

On Fri, Oct 7, 2016 at 10:13 AM, Jason Duell  wrote:

> It sounds like this is going to break all file:// URI accesses until we
> finish implementing e10s support for them:
>
>   https://bugzilla.mozilla.org/show_bug.cgi?id=922481
>
> That may be more bustage on nightly than is acceptable?
>
> Jason
>
>
> On Fri, Oct 7, 2016 at 9:49 AM, Gian-Carlo Pascutto 
> wrote:
>
>> Hi all,
>>
>> the next Nightly build will have a significantly tightened Linux
>> sandbox. Writes are no longer allowed except to shared memory (for IPC),
>> and to the system TMPDIR (and we're eventually going to get rid of the
>> latter, perhaps with an intermediate step to a Firefox-content-specific
>> tmpdir).
>>
>> There might be some compatibility fallout from this. Extensions/add-ons
>> that try to write from the content process will no longer work, but the
>> impact there should be limited given that similar (and stricter)
>> restrictions have been tried out on macOS. (See bug 1187099 and bug
>> 1288874 for info/discussion). Because Firefox currently still loads a
>> number of external libraries into the content process (glib, gtk,
>> pulseaudio, etc) there is some risk of breakage there as well. You know
>> where to report (Component: Security - Process Sandboxing).
>>
>> This behavior can be controlled via a pref:
>> pref("security.sandbox.content.level", 2);
>>
>> Reverting this to 1 goes back to the previous behavior where the set of
>> allowable system calls is restricted, but no filtering happens on
>> filesytem IO.
>>
>> When Firefox is built with debugging enabled, it will log any policy
>> violations. Currently, a clean Nightly build will show some of those.
>> They are inconsequential, and we'll deal with them, eventually. (Patches
>> welcome though!)
>>
>> --
>> GCP
>
>
>
> --
>
> Jason
>



-- 

Jason


Re: Intent to ship: NetworkInformation

2016-12-16 Thread Jason Duell
On Fri, Dec 16, 2016 at 11:35 AM, Tantek Çelik 
wrote:

>
> Honestly this is starting to sound more and more like a need for a
> "Minimal Network" variant of the "Work Offline" option we have in
> Firefox (which AFAIK no other current browser has), since no amount of
> OS-level guess-work is going to give you a reliable answer (as this
> thread has documented).
>

So a switch that toggles the "network is expensive" bit, plus turns off
browser updates, phishing list fetches, etc?  I can see how this would be
nice for power users on a tethered cell phone network.  One issue would be
to make sure users don't forget to turn it off (and never update their
browser again, etc).  Maybe it could time out.

Jason


Re: Introducing mozilla::Result for better error handling

2016-12-21 Thread Jason Orendorff
On Tue, Dec 20, 2016 at 10:31 AM, Ehsan Akhgari 
wrote:

> >   Result<V, E> Baz() { MOZ_TRY(Bar()); ... }
>
> I have one question: what is Bar() expected to return here?


Result<V, F> where F is implicitly convertible to E.

> An E type, or a type that's implicitly convertible to E
> ( bb5d005749/mfbt/Result.h#198>
> suggests that this is the case), or something else?


The implicit conversion solves a real problem. Imagine these two operations
have two different error types:

MOZ_TRY(JS_DoFirstThing()); // JS::Error&
MOZ_TRY(mozilla::pkix::DoSecondThing()); // pkix::Error

We don't want our error-handling scheme getting in the way of using them
together. So we need a way of unifying the two error types: a shared base
class, perhaps, or a variant.

Experience with Rust says that MOZ_TRY definitely needs to address this
problem somehow. C++ implicit conversion is just one way to go; we can
discuss alternatives in the bug.

-j
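To make the error-unification point concrete, here is a minimal, self-contained sketch of a Result type whose early returns convert two distinct error types into a shared one via implicit conversion. It only illustrates the idea: the real mozilla::Result lives in mfbt/Result.h and is richer, and MOZ_TRY is a macro, expanded by hand below.

```cpp
#include <cassert>

// Minimal sketch of a Result<V, E> carrying either a value or an error.
template <typename V, typename E>
class Result {
 public:
  Result(V aValue) : mIsOk(true), mValue(aValue) {}
  Result(E aError) : mIsOk(false), mError(aError) {}
  bool isOk() const { return mIsOk; }
  V unwrap() const { return mValue; }
  E unwrapErr() const { return mError; }

 private:
  bool mIsOk;
  V mValue{};
  E mError{};
};

// Two operations with two different error types, as in the example above.
struct JsError { int code; };
struct PkixError { int code; };

// A shared error type implicitly constructible from both, which is the role
// implicit conversion plays in unifying error types for MOZ_TRY.
struct SharedError {
  int code = 0;
  SharedError() = default;
  SharedError(JsError aErr) : code(aErr.code) {}
  SharedError(PkixError aErr) : code(aErr.code) {}
};

Result<int, JsError> DoFirstThing(bool aFail) {
  if (aFail) {
    return JsError{7};
  }
  return 1;
}

Result<int, PkixError> DoSecondThing() { return 2; }

// MOZ_TRY(expr) expanded by hand: return early on error, converting the
// operation's error type into the caller's shared error type.
Result<int, SharedError> DoBoth(bool aFailFirst) {
  Result<int, JsError> first = DoFirstThing(aFailFirst);
  if (!first.isOk()) {
    return SharedError(first.unwrapErr());
  }
  Result<int, PkixError> second = DoSecondThing();
  if (!second.isOk()) {
    return SharedError(second.unwrapErr());
  }
  return first.unwrap() + second.unwrap();
}
```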


Re: LOAD_ANONYMOUS + LOAD_NOCOOKIES

2013-02-28 Thread Jason Duell

On 02/28/2013 07:29 PM, bernhardr...@gmail.com wrote:

Just to keep this thread up to date: I asked jduell if it is possible to change 
long to int64_t.


We're going to upgrade to 64 bits in

   https://bugzilla.mozilla.org/show_bug.cgi?id=846629

Jason



Fwd: Re: Help wanted: Contact sites about Firefox OS UA detection

2013-03-01 Thread Jason Smith

+dev-platform - as this would be relevant to dev-platform as well

Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com



 Original Message 
Subject:Re: Help wanted: Contact sites about Firefox OS UA detection
Date:   Fri, 1 Mar 2013 10:35:14 -0800 (PST)
From:   Lawrence Mandel 
To: compatibil...@lists.mozilla.org, dev-b2g 



- Original Message -

tl;dr: Please take 5-10 minutes to contact at least one site on this
list about updating their UA detection to recognize Firefox OS
http://mzl.la/VideYT


Forgot to mention that if you don't want to be too involved, cc me on an e-mail 
to your contact. I'll drive the conversation after the initial introduction.

Lawrence



The success of Firefox OS will partially depend on the content that
is available to this platform. As a mitigation strategy for top
sites that do not serve mobile content to Firefox OS, we implemented
a UA override mechanism to which we added top global and locale
specific sites. We want to clear out this list by having the sites
that are currently listed recognize Firefox OS as mobile.

We need your help. There are currently 132 sites on this list.
(http://mzl.la/VideYT) Please take 5-10 minutes today to review the
list and, if you know anyone that works for/with one of the listed
sites, ask them about updating the site's UA detection to recognize
Firefox OS as mobile. Please comment in the related bug so that we
can track progress.

I also blogged about this where I provide a bit more background and
include the site list.

http://lawrencemandel.com/2013/03/01/help-wanted-firefox-os-ua-detection/

Thanks!

Lawrence

___
compatibility mailing list
compatibil...@lists.mozilla.org
https://lists.mozilla.org/listinfo/compatibility





Bug 851818 - Modernizing the Enter Bug Entry Page in Bugzilla

2013-03-18 Thread Jason Smith

Hi Everyone,

In bug 851818 (https://bugzilla.mozilla.org/show_bug.cgi?id=851818), 
I've suggested rethinking our enter bug entry page for users who file 
bugs with or without canconfirm access. I'd like to discuss what changes 
we should make to that page, such as adding new products to the list, 
keeping products that are important enough to remain, and removing 
products that don't need to be there. I've outlined the state of the 
enter bug entry pages under canconfirm and no canconfirm below, along 
with my initial thoughts. Thoughts?


*existing canconfirm access* *enter bug entry page list*

- Core
- Firefox
- Thunderbird
- Calendar
- Camino
- SeaMonkey
- Firefox for Android
- Mozilla Localizations
- Mozilla Labs
- Mozilla Services
- Other Products

*existing no canconfirm access enter bug entry page list*

- Firefox
- Firefox for Android
- Thunderbird
- Mozilla Services
- SeaMonkey
- Mozilla Localizations
- Mozilla Labs
- Calendar
- Core

*Ideas for what products we should add*

- BootToGecko (canconfirm and not canconfirm)
- Firefox for Metro (canconfirm and not canconfirm)
- Marketplace (canconfirm and not canconfirm)

*Ideas for what products we should remove*

- Thunderbird (canconfirm and not canconfirm)
- Calendar (canconfirm and not canconfirm)
- Camino (canconfirm and not canconfirm)
- SeaMonkey (canconfirm and not canconfirm)
- Mozilla Labs (canconfirm and not canconfirm)

*Rationale for Additions*

- BootToGecko - Big focus at Mozilla with a lot of people working on this 
project, so I think this could benefit greatly from being on the front page.
- Firefox for Metro - Important for our Windows 8 story and could probably 
benefit for contributors from knowing up front where to file bugs on this while 
it's under development
- Marketplace - Important in conjunction with BootToGecko and a common bugzilla 
product I don't think people realize exists. In doing bug triage for B2G, 
there are many cases where bugs for Marketplace have ended up in the
incorrect component, so I'm hoping putting this on the front page will increase 
likelihood that the bug ends up in the right spot.

*Rationale for Removals*

- Thunderbird - Decreased focus at Mozilla now that it has moved to contribution 
only, so I don't think this needs to be on the front page.
- Calendar - Decreased focus at Mozilla overall in comparison to other products 
on the front page
- Camino - Low focus at Mozilla overall in comparison to other products on the 
front page
- SeaMonkey - Low focus at Mozilla overall in comparison to other products on 
the front page
- Mozilla Labs - Looks like an outdated Bugzilla product that may not have high 
bug filing usage - probably does not need to be on the front page

--
Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com



Re: Bug 851818 - Modernizing the Enter Bug Entry Page in Bugzilla

2013-03-18 Thread Jason Smith

Sounds good to me.

Also to add in a piece of private feedback I received - SeaMonkey wants 
to remain on the front page as there's a strong community behind it that 
still works on that project.


Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com

On 3/18/2013 9:39 AM, L. David Baron wrote:

On Monday 2013-03-18 09:27 -0700, Jason Smith wrote:

*existing canconfirm access* *enter bug entry page list*

- Core
- Firefox
- Thunderbird
- Calendar
- Camino
- SeaMonkey
- Firefox for Android
- Mozilla Localizations
- Mozilla Labs
- Mozilla Services
- Other Products

*existing no canconfirm access enter bug entry page list*

- Firefox
- Firefox for Android
- Thunderbird
- Mozilla Services
- SeaMonkey
- Mozilla Localizations
- Mozilla Labs
- Calendar
- Core

I'd actually like to see Core higher on the list for the
no-canconfirm case.  I think it's common for reasonably
well-informed Web developers (who would have been able to choose a
reasonably correct component within Core, given the list) to file
standards bugs and end up with them languishing in Firefox::General.
And I think appropriate guiding for Web developers filing bugs
against the rendering engine is an important case.

-David





Re: Resigning as the Necko module owner

2013-03-22 Thread Jason Duell

On 03/22/2013 11:46 AM, Christian Biesinger wrote:

Hi all,

I've been the necko module owner for many years now, but lately I have had
very little time to work on Mozilla-related things. Therefore, I decided
it's time to hand over the module ownership to Patrick McManus, who has
been doing excellent work in Necko.


Thanks for the many years of leadership, Biesi!   And congrats to 
Patrick, who has earned it.


Jason



nsITimers are officially now safe to use on threads besides the main thread

2013-04-05 Thread Jason Duell
Not that this had been stopping us from using them off-main thread anyway :) 

nsITimer.idl now clarifies thread usage.  For gory details:

  https://bugzilla.mozilla.org/show_bug.cgi?id=792920


Landing on Trunk - Target Milestone vs. Status Flag Usage vs. Tracking for Future Releases

2013-04-10 Thread Jason Smith

Hi Everyone,

Right now, when a landing occurs, my understanding is that we're 
typically setting the target milestone field to what Firefox release the 
code lands on if it lands on trunk. If a patch is uplifted, then the 
status flag is set appropriately for the target Firefox release.


However, after working with three distinct teams, talking with Ed, and 
analyzing status flag usage, I have seen these contention points 
dependent on a team's method for tracking that might happen:


 * The need for a particular team to track the concept of "We would
   like to get this fixed in this Firefox release." Some teams I've
   worked with have considered using the target milestone field here,
   but that collides with the trunk management definition, which often
   causes contention during the landing or post landing process.
 * The need to be able to query the status-firefox flags to determine
   when a patch lands on a per branch basis even if has landed on
   trunk. This helps for those tracking fixes to use one universal flag
   to query on to determine such things such as:
 o What bugs that are features are worth verifying on X branch?
 o What bugs that defects are worth verifying on X branch?

Knowing this, why not consider just using the status-flags purely to 
track landings and let the team determine how to use target milestone? 
Also, why not set the status-flag in general for the appropriate Firefox 
release when a patch lands on trunk?


--
Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com



Re: Landing on Trunk - Target Milestone vs. Status Flag Usage vs. Tracking for Future Releases

2013-04-10 Thread Jason Smith

Comments inline.

Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com

On 4/10/2013 2:19 PM, Justin Lebar wrote:

Right now the status and tracking flags for a version get hidden when
that version becomes old.  If we switched away from using
target-milestone, we'd need to prevent this from happening.


Ah, good point. Didn't think of that initially. That would actually 
argue in favor of target milestone being the permanent entity and the 
status flags only existing within the release --> beta --> aurora --> 
nightly timeframe.




On Wed, Apr 10, 2013 at 4:53 PM, Alex Keybl  wrote:

* The need for a particular team to track the concept of "We would
   like to get this fixed in this Firefox release." Some teams I've
   worked with have considered using the target milestone field here,
   but that collides with the trunk management definition, which often
   causes contention during the landing or post landing process.

We may want to go into more detail here and see if we can change the trunk 
management details in such a way that teams own the target milestone until the 
bug is resolved/fixed.


I think that sounds reasonable. We could grant control of the flag for 
planning purposes until the patch heads to land. When it lands, the 
patch reflects the actual milestone the patch lands in. So the before 
and after effect I think achieves both needs here for planning and tree 
management.





* The need to be able to query the status-firefox flags to determine
   when a patch lands on a per branch basis even if has landed on
   trunk. This helps for those tracking fixes to use one universal flag
   to query on to determine such things such as:

This does add one extra step, since the person querying needs to include target 
milestone and bug resolution in their query. I'd like to hear if this has been 
a major pain point for others before reacting.


Follow-up question on this - what's the right way to query for all bugs 
fixed in a Firefox release? For example, for Firefox 23:


Target Milestone = Firefox 23 & Resolution = FIXED

or

status-firefox23 = fixed, verified, verified disabled

The one challenge I've had is that the target milestone isn't always set 
on landing, so there's always a risk that bugs fall through the cracks.





Knowing this, why not consider just using the status-flags purely to track 
landings and let the team determine how to use target milestone? Also, why not 
set the status-flag in general for the appropriate Firefox release when a patch 
lands on trunk?

The difference between version-specific status flags (fixed) and target 
milestone (along with resolved/fixed) is subtle. Status flags being set to 
fixed mean this bug has been fixed in this particular version of Firefox. The 
combination of a target milestone and resolved/fixed means that a bug is fixed 
in all Firefox versions after the one specified. That may prove a little 
difficult to untangle.

-Alex

On Apr 10, 2013, at 1:38 PM, Jason Smith  wrote:


Hi Everyone,

Right now, when a landing occurs, my understanding is that we're typically 
setting the target milestone field to what Firefox release the code lands on if 
it lands on trunk. If a patch is uplifted, then the status flag is set 
appropriately for the target Firefox release.

However, after working with three distinct teams, talking with Ed, and 
analyzing status flag usage, I have seen these contention points dependent on a 
team's method for tracking that might happen:

* The need for a particular team to track the concept of "We would
   like to get this fixed in this Firefox release." Some teams I've
   worked with have considered using the target milestone field here,
   but that collides with the trunk management definition, which often
   causes contention during the landing or post landing process.
* The need to be able to query the status-firefox flags to determine
   when a patch lands on a per-branch basis even if it has landed on
   trunk. This helps those tracking fixes use one universal flag
   to query on to determine such things such as:
 o What bugs that are features are worth verifying on X branch?
 o What bugs that defects are worth verifying on X branch?

Knowing this, why not consider just using the status-flags purely to track 
landings and let the team determine how to use target milestone? Also, why not 
set the status-flag in general for the appropriate Firefox release when a patch 
lands on trunk?

--
Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

Re: Landing on Trunk - Target Milestone vs. Status Flag Usage vs. Tracking for Future Releases

2013-04-10 Thread Jason Smith

That's a good idea. Should we file a bug to do this?

Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.com

On 4/10/2013 5:14 PM, Gregory Szorc wrote:

On 4/10/2013 1:38 PM, Jason Smith wrote:

Hi Everyone,

Right now, when a landing occurs, my understanding is that we're
typically setting the target milestone field to what Firefox release
the code lands on if it lands on trunk. If a patch is uplifted, then
the status flag is set appropriately for the target Firefox release.

However, after working with three distinct teams, talking with Ed, and
analyzing status flag usage, I have seen these contention points
dependent on a team's method for tracking that might happen:

  * The need for a particular team to track the concept of "We would
like to get this fixed in this Firefox release." Some teams I've
worked with have considered using the target milestone field here,
but that collides with the trunk management definition, which often
causes contention during the landing or post landing process.
  * The need to be able to query the status-firefox flags to determine
when a patch lands on a per-branch basis even if it has landed on
trunk. This helps those tracking fixes use one universal flag
to query on to determine such things such as:
  o What bugs that are features are worth verifying on X branch?
  o What bugs that defects are worth verifying on X branch?

Knowing this, why not consider just using the status-flags purely to
track landings and let the team determine how to use target milestone?
Also, why not set the status-flag in general for the appropriate
Firefox release when a patch lands on trunk?


I don't really have much to say on the specifics of this post other than
a simple question: why don't we have a checkin hook or bot automatically
update Bugzilla flags when changesets are pushed? This would save
developer time and would reduce the error rate of bugs not updated
properly (especially when you consider that policies change over time).


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Extensibility of JavaScript modules

2013-10-08 Thread Jason Orendorff
On Tue, Oct 8, 2013 at 8:24 AM, Till Schneidereit 
 wrote:

> Interesting. I wonder if the monkey patching will even still work once we
> have es6 modules and use them in the platform.
>
> Jason and Eddy, you probably know, but I'm under the impression that
> monkey patching an es6 module requires funneling it through a custom
> module loader. Maybe the platform should do that by default and by
> default allow addons to apply transforms to them?

Here's the simplest way to monkeypatch an ES6 module:

1. Use System.get("x") or System.import("x", callback) to get the 
Module object.
2. Make your own Module object to replace it (which is like, `new 
Module({a: my_a, b: my_b})`)
3. Use System.set("x", my_module) to put your patched module into the 
loader.


No custom module loader is needed for that. However, the scope of this 
style of monkeypatching is strictly per-Loader, and each global gets its 
own Loader (like all other builtins). We do need some customization to 
support system-wide monkeypatching. But we were going to need some 
customization anyway just for stuff like mapping module names to chrome: 
URLs or whatever.
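For illustration, here is the shape of those three steps with a plain Map standing in for the loader's registry. `System`, `Module`, and the `FakeLoader` scaffolding mirror the then-draft ES6 loader API, which never shipped in this form, so treat this purely as a sketch of the per-loader idea:

```javascript
// Sketch of the per-loader monkeypatching steps above. "System" and the
// get/set methods mirror the draft ES6 loader API (which never shipped
// in this form); a plain Map stands in for the loader's module registry,
// and a plain object stands in for a Module.
class FakeLoader {
  constructor() { this.registry = new Map(); }
  get(name) { return this.registry.get(name); }
  set(name, module) { this.registry.set(name, module); }
}

const System = new FakeLoader();

// The "real" module, as the loader would have registered it.
System.set("x", { a: () => "real a", b: () => "real b" });

// Steps 1-3: fetch the module, build a patched replacement, put it back.
const original = System.get("x");                       // step 1
const patched = { ...original, a: () => "patched a" };  // step 2
System.set("x", patched);                               // step 3

// Subsequent lookups through *this* loader see the patch...
console.log(System.get("x").a());  // "patched a"

// ...but a different loader (e.g. another global's) is unaffected.
const otherLoader = new FakeLoader();
console.log(otherLoader.get("x"));  // undefined
```

This is exactly the per-Loader scoping point: the patch is visible only through the loader you mutated, which is why system-wide monkeypatching needs loader customization.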


-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Extensibility of JavaScript modules

2013-10-09 Thread Jason Orendorff

On 10/8/13 4:27 PM, David Rajchenbach-Teller wrote:

That sounds quite sufficient for me.
Do we have plans to backport Cu.import to ES6 modules?


No plans yet. Want to work on it with us? We're not ready to start just 
now, but parser support for ES6 modules is being added, and the 
self-hosted implementation of Loader is underway at 
.


-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Extensibility of JavaScript modules

2013-10-09 Thread Jason Orendorff

On 10/9/13 12:56 PM, David Rajchenbach-Teller wrote:

I am interested, although my buglist is rather full. What kind of help
would be useful?


When it's time, we'll need to:

1. write Loader hooks to make the `import` keyword behave like Cu.import

2. somehow have those hooks installed by default in every chrome window

And maybe:

3. migrate existing Cu.import call sites to ES6 `import`

4. reimplement Cu.import and friends on top of the Loader API

But I'm not sure 3 and 4 are possible. ES6 modules are designed for the 
web and so are inherently asynchronous. Cu.import is synchronous. 
Switching poses some risks.


-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What platform features can we kill?

2013-10-10 Thread Jason Orendorff

On 10/10/13 2:54 AM, Chris Peterson wrote:
The meta bug for removing JSD is bug 800200. I believe the primary 
blocking issue is bug 716647 ("allow Debugger to be enabled with 
debuggee frames on the stack"), which jorendorff is starting to work on.


Well, I tried it a year and a half ago. jandem and jimb are working on 
it now though (see the dependency tree).


-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Analyze C++/compiler usage and code stats easily

2013-11-15 Thread Jason Orendorff
On 11/14/13 11:43 PM, Gregory Szorc wrote:
> * Why does lots of js/'s source code gravitate towards the "bad"
> extreme for most of the metrics (code size, compiler time,
> preprocessor size)?

We use templates very heavily and we inline like mad.

Templates make C++ compilers go slowly. The GC smart pointer classes are
all templates. They are used everywhere. This will not change.

On to inlines.

"-inl.h" and "inlines.h" headers in mozilla-central
(js/src has more than everything else put together):
https://gist.github.com/jorendorff/7484945

Inlining a function in C++ often requires moving something else, like a
class definition the inline function requiers, into a header too. So the
affordances of C++ are such that unless you pay extremely close
attention to code organization when writing inline-heavy code, you make
a huge mess.

We have made a huge mess.

I don't yet know how to tidy up this kind of mess. I'd love to adopt C++
style rules that would help, but I don't know what those would be.

Nicholas Nethercote and I have spent time fighting this fire:

Bug 872416 -js/src #includes are completely out of control
https://bugzilla.mozilla.org/show_bug.cgi?id=872416

Bug 879831 - js/src #includes are still out of control
https://bugzilla.mozilla.org/show_bug.cgi?id=879831

Bug 886205 - Slim down inlines.h/-inl.h files
https://bugzilla.mozilla.org/show_bug.cgi?id=886205

Bug 888083 - Do not #include inline-headers from non-inline headers
https://bugzilla.mozilla.org/show_bug.cgi?id=888083

Bug 880088 - Break the build on #include madness
https://bugzilla.mozilla.org/show_bug.cgi?id=880088

This kind of work is high-opportunity-cost. It involves a lot of
clobbering and try servering, and it doesn't get any JS engine bugs
fixed or features implemented.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is for...of on a live DOM node list supposed to do the right thing?

2013-11-22 Thread Jason Orendorff
On 11/22/13 8:40 AM, Benjamin Smedberg wrote:
> Is for..of on live DOM nodelists supposed to correctly iterate even
> when items are removed from the list? I have a testcase where this
> does not seem to be working correctly:
>
> http://jsfiddle.net/f8xzQ/
>
> Is there a simple way to do this correctly?

Well, there's this:
http://jsfiddle.net/f8xzQ/1/

I think those iterators should be "live", like everything else in the
DOM! I mentioned this to anyone who would listen back when I made DOM
NodeLists iterable in the dumbest possible way, and got a lot of blank
stares. Certainly no one was interested in doing the work.

It's also possible to do this using a hand-JS-coded wrapper around
document.createNodeIterator, which is "live". Maybe not in all cases.
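The fiddle itself isn't quoted here, but the hazard (and the snapshot workaround) can be shown with a plain array standing in for the node list; a real live NodeList additionally reindexes itself on every DOM mutation, which arrays only loosely model:

```javascript
// Removing items while iterating skips elements, because the iterator's
// index keeps advancing while the collection shrinks underneath it.
// A plain array stands in for the node list here.
const items = ["a", "b", "c", "d"];
const visitedBuggy = [];
for (const item of items) {
  visitedBuggy.push(item);
  items.splice(items.indexOf(item), 1);  // "remove the node"
}
console.log(visitedBuggy);  // ["a", "c"] -- "b" and "d" were skipped

// The usual workaround: snapshot the collection before mutating
// (for a NodeList, Array.from(list) or [...list]).
const items2 = ["a", "b", "c", "d"];
const visitedFixed = [];
for (const item of Array.from(items2)) {
  visitedFixed.push(item);
  items2.splice(items2.indexOf(item), 1);
}
console.log(visitedFixed);  // ["a", "b", "c", "d"]
```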

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread Jason Orendorff
On 12/10/13 9:09 AM, Boris Zbarsky wrote:
> On 12/10/13 9:49 AM, Benoit Jacob wrote:
>> since AFAIK we don't have a HashSet class in mfbt or xpcom.
>
> It's called nsBaseHashtable.  Granted, using it as a hashset is not
> that intuitive.

There's also js::HashSet in js/public/HashTable.h.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: [JS-internals] Improvements to the Sphinx in-tree documentation generation

2013-12-19 Thread Jason Orendorff
Moving back to dev-platform.

On 12/17/13 4:41 PM, Gregory Szorc wrote:
> I guess what I'm trying to say is that SpiderMonkey/JavaScript appears
> lacking in the "language services" arena.

The concrete features you mentioned are extracting doc comments and
minimization, so that's what I'll address here. (What else did you have
in mind? Tokenization, perhaps, for syntax highlighting? Anything else?)

Doc comments can be done two ways:

* In pure JS, using a Reflect.parse() implementation that provides
comment syntax. This requires either supporting SpiderMonkey extensions
in Esprima or supporting comment output in our parser. Neither sounds hard.

* We could bless a particular notion of what a doc comment is and
implement that in our parser, and surface the results via Reflect.parse.

The former sounds better. It's more flexible. It's less code. And
improving Esprima and/or our Reflect.parse implementation is valuable
for tools other than documentation tools.
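To make the "pure JS" route concrete, here is a toy doc-comment extractor. It has nothing to do with the engine or with Esprima's actual comment output; it's a regex sketch whose names are invented, only meant to show the shape of a documentation tool written in plain JS:

```javascript
// Toy doc-comment extractor: pairs each /** ... */ block with the
// function declaration that immediately follows it. A real tool would
// use a parser that preserves comments; this regex sketch only shows
// the shape of the pure-JS route.
function extractDocComments(source) {
  const docs = [];
  const re = /\/\*\*([\s\S]*?)\*\/\s*function\s+([A-Za-z_$][\w$]*)/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    docs.push({ name: m[2], comment: m[1].trim() });
  }
  return docs;
}

const sample = `
/** Adds two numbers. */
function add(a, b) { return a + b; }

/** Greets someone. */
function greet(name) { return "hi " + name; }
`;

console.log(extractDocComments(sample));
// [ { name: "add", comment: "Adds two numbers." },
//   { name: "greet", comment: "Greets someone." } ]
```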


> FWIW, this issue dovetails with us trying to minify chrome JS (bug
> 903149). One can argue that if we had access to robust JavaScript
> language services (possibly including minifying) from the engine
> itself, we wouldn't be stuck in a rut there either.

I interpret this as saying: We should have JS minimization, and
architecturally it makes sense for that to be implemented in the JS
engine. But I'm not sure I agree on either point.

According to the latest in bug 903149, we're not sure minimization is
worth pursuing.

Architecturally, I think the argument is that minimization will be
brittle in the face of new syntax, unless the JS engine hackers maintain
it. OK. But I don't think this balances all the arguments against.

There are already several good open-source minimizers for JS. We don't
have the bandwidth to stand up and maintain a new minimizer nearly as
good as (say) Google Closure Compiler. The reason existing minimizers
don't work with Mozilla code is SpiderMonkey extensions; I think the
long-term fix for that is to converge on ES6. (Short-term fixes are
possible too.)

Minimization comes in many flavors (consider bug 903149 comment 37) and
I tend to think the JS engine should provide powerful general APIs that
can be used to implement many kinds of minimization and many other
things too. Reflect.parse is that kind of API. In bug 903149, I got to
write a 7-line program with Reflect.parse that detected bugs in jsmin.
You could write a primitive minimizer by starting with the code in
Reflect_stringify.js, in
, and
just deleting the bits that output indentation and spacing. This is
really cool! It's pure JS. Anyone who can hack JS could do it. That's
the kind of API I want to support.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


js-inbound as a separate tree

2013-12-19 Thread Jason Orendorff
On dev-tech-js-engine-internals, there's been some discussion about
reviving a separate tree for JS engine development.

The tradeoffs are like any other team-specific tree.

Pro:
- protect the rest of the project from closures and breakage due to JS
patches
- protect the JS team from closures and breakage on mozilla-inbound
- avoid perverse incentives (rushing to land while the tree is open)

Con:
- more work for sheriffs (mostly merges)
- breakage caused by merges is a huge pain to track down
- makes it harder to land stuff that touches both JS and other modules

We did this before once (the badly named "tracemonkey" tree), and it
was, I dunno, OK. The sheriffs have leveled up a *lot* since then.

There is one JS-specific downside: because everything else in Gecko
depends on the JS engine, JS patches might be extra likely to conflict
with stuff landing on mozilla-inbound, causing problems that only
surface after merging (the worst kind). I don't remember this being a
big deal when the JS engine had its own repo before, though.

We could use one of these to start:


Thoughts?

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: js-inbound as a separate tree

2013-12-19 Thread Jason Orendorff
On 12/19/13 4:55 PM, David Burns wrote:
> On 19/12/2013 18:48, Jason Orendorff wrote:
>> Con:
>> - more work for sheriffs (mostly merges)
>
> If mostly merges, are you suggesting there will be little traffic on
> the branch or the JS team will watch the tree for failures?

Neither, I'm just saying the overall rate of broken patches wouldn't
increase much, which I think shouldn't be controversial.

That is, sheriffing is not watching trees, it's fighting bustage. Each
busted patch and each intermittent orange creates a ton of work. It
stands to reason that diverting some patches to a separate tree won't
increase the volume of patches, except to the degree it actually
improves developer efficiency (and let's have that problem, please).

> 2013-07 : 6 days, 13:46:11
> 2013-08 : 4 days, 5:42:17
> 2013-09 : 4 days, 20:59:41
> 2013-10 : 4 days, 21:22:40
> 2013-11 : 8 days, 4:58:30
> 2013-12 : 2 days, 16:47:42

I know the point of including these numbers was, "hey look it's not that
bad", but this is really shocking. We're looking at an average of
something like 125 hours per month that developers can't check stuff in.
Even if the breakage is evenly distributed across time zones
(optimistic) we're looking at zero 9s of availability.
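As a sanity check on that arithmetic, averaging the six durations quoted above:

```javascript
// Sanity-check the "something like 125 hours per month" figure from the
// closure durations quoted above (2013-07 through 2013-12),
// each given as [days, hours, minutes, seconds].
const closures = [
  [6, 13, 46, 11],  // 2013-07
  [4, 5, 42, 17],   // 2013-08
  [4, 20, 59, 41],  // 2013-09
  [4, 21, 22, 40],  // 2013-10
  [8, 4, 58, 30],   // 2013-11
  [2, 16, 47, 42],  // 2013-12
];
const toHours = ([d, h, m, s]) => d * 24 + h + m / 60 + s / 3600;
const avgHours =
  closures.reduce((sum, c) => sum + toHours(c), 0) / closures.length;
console.log(avgHours.toFixed(1));  // "125.9" -- roughly 125 hours/month

// A month is ~730 hours, so availability is nowhere near one nine (90%).
const availability = 1 - avgHours / 730;
console.log(availability.toFixed(2));  // "0.83"
```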

We've all gotten used to it, but it's kind of nuts.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread Jason Orendorff
On 1/6/14 3:27 PM, Brian Smith wrote:
> On Sun, Jan 5, 2014 at 6:34 PM, Nicholas Nethercote
>  wrote:
>> - There is a semi-official policy that the owner of a module can dictate its
>>   style. Examples: SpiderMonkey, Storage, MFBT.
> AFAICT, there are not many rules that module owners are bound by. The
> reason module owners can dictate style is because module owners can
> dictate everything in their module. I think we should wait until we've
> heard from module owners that strongly oppose the style changes and
> then decide how to deal with that.

I'm fine with changing SpiderMonkey in whatever way is necessary to stop
these threads forever.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Jason Duell

On 01/06/2014 06:35 PM, Joshua Cranmer 🐧 wrote:


Side-by-side diffs are one use case I can think of. Another is that some
people prefer to develop by keeping tiled copies of windows; wider lines
reduce the number of tiled windows that one can see.


Yes--if we jump to >80 chars per line, I won't be able to keep two 
columns open in my editor (vim, but emacs would be the same) on my 
laptop, which would suck.


(Yes, my vision is not what it used to be--I'm using 10 point font. But 
that's not so huge.)


Jason



People who use

terminal-based editors for their coding are probably going to be rather
likely to use the default window size for terminals: 80x24. Given that
our style guide also requires adding vim and emacs modelines to files
(aside: if we're talking about doing mass style-conversions, can we also
fix modelines?), it seems reasonable that enough of our developers use
vim and emacs to make force-resizing of terminal size defaults a
noticeable cost.

With 2-space indent, parsimonious indenting requirements (e.g., don't
indent case: statements in switch or public in class), and liberal use
of importing names into localized namespaces, the 80-column width isn't
a big deal for most code.

I don't think most JS hackers care for abuse of Hungarian notation for
scope-based (or const) naming.  Every member/argument having a capital
letter in it surely makes typing slower.  And extra noise in every
name but locals seems worse for new-contributor readability.
Personally this doesn't bother me much (although "aCx" will always be
painful compared to "cx" as two no-cap letters, I'm sure), but others
are much more bothered.


And a '_' at the end of member names requires less typing than 'm' +
capital letter?

My choice of when to use or not use Hungarian notation is often messy
and seemingly inconsistent, although there is some method to my madness.
I find prefixing member variables with 'm' to be useful, although I
dislike using it in POD-ish structs where all the members are public.
The use of 'a' for arguments is where I am least consistent, especially
as I extremely dislike it being used for an outparam return value
(disclaimer: I'm used to XPCOM-taminated code, so having to write
NS_IMETHODIMP nsFoo::GetBoolValue(bool *retval) is common for me, and
this colors my judgements a lot). I've never found much use for the 's',
'g', and 'k' prefixes, although that may just as well be because I've
never found much use for using those types of variables in the first
place (or when I do, it's because I'm being dictated by other concerns
instead, e.g., type traits-like coding or C++11 polyfilling).



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Exact rooting is now enabled on desktop

2014-01-17 Thread Jason Orendorff
On 1/17/14 3:24 PM, Terrence Cole wrote:
> Exact stack rooting is now enabled by default on desktop builds of firefox.

*standing ovation forever*

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship DOM Promises

2014-01-30 Thread Jason Orendorff
On 1/30/14 9:06 AM, Boris Zbarsky wrote:
> On 1/30/14 5:03 AM, Till Schneidereit wrote:
>> What are the plans for moving Promises into SpiderMonkey?
>
> Moving Promises per se is not hard.
>
> The hard part is that this requires SpiderMonkey to grow a concept of
> an event loop.

The event loop itself would still be implemented by Gecko. Promises
really just need to be able to post events to it. Or "microtasks" as
they are called in the specification.

The necessary API surface is not much; here's a sketch:
https://gist.github.com/jorendorff/8724830
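The gist isn't quoted here, but a mock of the flavor of such a hook might look like the following. All names are made up for illustration; the host's event loop is faked as a plain array:

```javascript
// Guess at the flavor of the embedding API: the engine only needs a
// host-provided "post a microtask" hook, and the host (Gecko's event
// loop, faked here as an array) drains the queue. Names are invented.
const microtasks = [];
function enqueuePromiseJob(job) {  // the hook the engine would call
  microtasks.push(job);
}
function drainMicrotasks() {       // called by the host's event loop
  while (microtasks.length > 0) microtasks.shift()();
}

// A toy then() built on the hook: callbacks never run synchronously.
function toyThen(value, callback, log) {
  enqueuePromiseJob(() => callback(value));
  log.push("then() returned");
}

const log = [];
toyThen(42, (v) => log.push(`callback got ${v}`), log);
log.push("still synchronous");
drainMicrotasks();  // the host turns the crank
console.log(log);
// ["then() returned", "still synchronous", "callback got 42"]
```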

However, a Promise would then be a kind of JSObject, the Promise API
would be a JS-language API, and the JSAPI doesn't do a good job of
exposing those to C++ code. DOM bindings are better at it.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New necko cache?

2014-02-19 Thread Jason Duell


On Tue, Feb 18, 2014 at 7:56 PM, Neil <n...@parkwaycc.co.uk> wrote:


Where can I find documentation for the new necko cache? So far
I've only turned up some draft planning documents. In particular,
I understand that there is a preference to toggle the cache. What
does application code have to do in order to work with whichever
cache has been enabled?



Application code generally doesn't have to do anything to work with the 
new cache, unless it's a direct consumer of the cache API (we changed a 
few APIs from sync to async IIRC).


Mostly we just have the new code, but as you point out there are some 
design docs and blogs posts and meta-bugs tracking the work:


http://www.janbambas.cz/new-firefox-http-cache-backend-implementation/

   https://wiki.mozilla.org/Necko/Cache/Plans

   https://bugzilla.mozilla.org/show_bug.cgi?id=913806

Is there something specific you're wondering about?

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We live in a memory-constrained world

2014-02-21 Thread Jason Duell

On 02/21/2014 01:38 PM, Nicholas Nethercote wrote:

Greetings,

We now live in a memory-constrained world. By "we", I mean anyone
working on Mozilla platform code. When desktop Firefox was our only
product, this wasn't especially true -- bad leaks and the like were a
problem, sure, but ordinary usage wasn't much of an issue. But now
with Firefox on Android and particularly Firefox OS, it is most
definitely true.

In particular, work is currently underway to get Firefox OS working on
devices that only have 128 MiB of RAM. The codename for these devices
is Tarako (https://wiki.mozilla.org/FirefoxOS/Tarako). In case it's
not obvious, the memory situation on these devices is *tight*.

Optimizations that wouldn't have been worthwhile in the desktop-only
days are now worthwhile. For example, an optimization that saves 100
KiB of memory per process is pretty worthwhile for Firefox OS.


Thanks for the heads-up.  Time to throw

   https://bugzilla.mozilla.org/show_bug.cgi?id=807359

back on the barbie... (Feel free to give it a new memshrink priority as 
you see fit, Nicholas)


Jason

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Column numbers appended to URLs recently

2014-03-04 Thread Jason Orendorff
On 3/3/14, 12:54 PM, Jan Honza Odvarko wrote:
> URLs in stack traces for exception objects have been recently changed. There 
> is a column number appended at the end (I am seeing this in Nightly, but it 
> could be also in Aurora).

Code that parses error.stack now needs to handle the case where both a
line number and a column number appear after the URL.

This should be easy to fix. What's affected? Just Firebug?
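A tolerant parser looks something like this; the sample stack is hardcoded in the SpiderMonkey `func@url:line[:column]` shape so the behavior is deterministic:

```javascript
// Parse SpiderMonkey-style stack frames, tolerating both the old
// "func@url:line" form and the new "func@url:line:column" form.
function parseStack(stack) {
  return stack.trim().split("\n").map((frame) => {
    // Match trailing ":line" and optional ":column" lazily from the
    // right so URLs containing colons (http://...) are not mangled.
    const m = /^(.*)@(.*?):(\d+)(?::(\d+))?$/.exec(frame);
    if (!m) return null;
    return {
      func: m[1],
      url: m[2],
      line: Number(m[3]),
      column: m[4] !== undefined ? Number(m[4]) : null,
    };
  });
}

const sample =
  "doStuff@http://example.com/app.js:10:5\n" +  // new form, with column
  "main@http://example.com/app.js:3";           // old form, no column

console.log(parseStack(sample));
// [ { func: "doStuff", url: "http://example.com/app.js", line: 10, column: 5 },
//   { func: "main",    url: "http://example.com/app.js", line: 3,  column: null } ]
```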

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-21 Thread Jason Orendorff
On 3/21/14, 5:42 PM, Jim Blandy wrote:
> On 03/21/2014 03:34 PM, Jim Blandy wrote:
>> What if these DOM nodes could use a special class of observers /
>> listeners that automatically set themselves aside when the node is
>> deleted from the document, and re-instate themselves if the node is
>> re-inserted in the document? Similarly for when the window goes away.
>
> Instead of addObserver or addMessageListener, you'd have
> observeWhileInserted or listenWhileInserted. Implemented in some
> clever and efficient way to avoid thrashing during heavy DOM manipulation.

+1e9

More of this sort of thing please. Deterministic, declarative, does what
it says on the label, easy to use correctly. The opposite of weak
references on every axis.

Stop me if you've heard this one. Some people, when confronted with a
problem, think, "I know, I'll use weak references." —Oh you've heard it?
Sorry.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-21 Thread Jason Orendorff
On 3/21/14, 10:23 PM, K. Gadd wrote:
> A hypothetical scenario (please ignore any minor detail errors, I'm
> using this to illustrate the scenario): Let's say I have a main
> document and it spawns 3 popup windows as documents. The popup windows
> have references to the parent document and use them to post messages
> to the parent document for the purposes of collaboration. Now, let's
> say I want the popups to *keep working* even if the parent document is
> closed.

By "keep working" do you mean that the popup windows should still be
able to post messages to the main document, and its event handlers
should still be firing?

> In this scenario, the popups *need* to retain their references
> to the parent document, but we don't want to leak the parent document
> after all 3 of them are closed.

Why are weak references useful here? Suppose we're just using strong
references, i.e. the web as it is today. After the main window and all
three popups are closed, why would the parent document leak?

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-23 Thread Jason Orendorff
On 3/22/14, 7:26 AM, K. Gadd wrote:
> The window and children example is intended to illustrate
> that manual cycle-breaking is non-trivial; in this case it requires a
> robust reference counting mechanism and code audits to ensure
> reference increments/decrements are carefully matched and put in every
> necessary location.

Kevin, I don't think that's true. Would you please state the desired
behavior for your example, what leaks, and how you would fix it using
weak references?

It seems like you're under the impression that the JS engine doesn't
collect cycles. It does.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Memory management in features implemented in JS

2014-03-24 Thread Jason Orendorff
On 3/24/14, 2:42 AM, K. Gadd wrote:
> It is, however, my understanding that cycle collections in
> SpiderMonkey are separate from non-cycle-collections and occur less
> often; is that still true?

There are two collectors.

The JS engine's garbage collector runs more frequently. It collects
cycles among ordinary JS objects, functions, arrays, that sort of thing.

The XPCOM cycle collector runs less often. It is only required to
collect cycles that involve reference-counted C++ objects — which
includes DOM nodes. So, oddly enough, self-hosting the DOM would mean
that your cycles would be collected *sooner*.


> My example was merely intended to demonstrate a case where you can
> easily end up with a scenario where manual cycle-breaking has to
> happen, not something that absolutely is unsolvable without WRs.

I am starting to think the problem as posed can't be solved using weak
references.

A few solutions not involving weak references have been suggested. (I
still like Jim's formula: "identify the points at which an observer
becomes useless". Someone should write that library.)
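A sketch of what that library might look like: every listener is registered against an explicit point at which it becomes useless. The `Lifetime` API and the emitter are invented for illustration; a real version would key teardown off DOM removal or window close:

```javascript
// Sketch of "that library": listeners are tied to an explicit lifetime,
// so teardown is deterministic -- no weak references involved.
class Emitter {
  constructor() { this.listeners = new Set(); }
  on(fn) { this.listeners.add(fn); return () => this.listeners.delete(fn); }
  emit(msg) { for (const fn of [...this.listeners]) fn(msg); }
}

class Lifetime {
  constructor() { this.teardowns = []; }
  listen(emitter, fn) { this.teardowns.push(emitter.on(fn)); }
  end() { for (const t of this.teardowns) t(); this.teardowns.length = 0; }
}

const messages = new Emitter();
const received = [];

const pageLifetime = new Lifetime();  // e.g. "while the node is inserted"
pageLifetime.listen(messages, (m) => received.push(m));

messages.emit("one");  // delivered
pageLifetime.end();    // the node left the document
messages.emit("two");  // not delivered: listener already torn down

console.log(received); // ["one"]
```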

> If
> the browser is clever about using WRs internally for references to
> windows, or between windows, then less objects stay alive longer than
> they're supposed to.

We use revocable proxies for references between windows, and in some
cases we will mass-revoke all proxies pointing into a particular window,
and blow that whole window out the air lock.

That's not the same thing as weak references. In fact weak references
would not help you implement it, even though the user-visible behavior
seems similar (stuff nondeterministically vanishes). The decision of
when to kill a window is not made by the garbage collector. It's a
totally different heuristic, one that takes into account that we really
*don't* want windows to be dropped as soon as the GC runs. (See Jim's
post quoting Allen quoting George Bosworth on heuristics.) Details of
garbage collection are not exposed to user code. Finalization is not
observable.


> If you move everything entirely into
> user-authored JS, various problems get harder because now you can't
> rely on the C++ parts of the browser to break cycles for you.

Revocable proxies are in ES6.
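Concretely, `Proxy.revocable` is the primitive: once `revoke()` is called, every operation on the proxy throws, so a window's outgoing references can all be cut at once by keeping the revokers together. The per-window bookkeeping names below are invented for illustration:

```javascript
// Proxy.revocable in action: keep the revoker for every cross-window
// reference, then revoke them all to cut a window loose.
const windowObject = { title: "popup", close() { return "closed"; } };

const revokersForWindow = [];
function makeCrossWindowRef(target) {
  const { proxy, revoke } = Proxy.revocable(target, {});
  revokersForWindow.push(revoke);
  return proxy;
}

const ref = makeCrossWindowRef(windowObject);
const titleBefore = ref.title;
console.log(titleBefore);  // "popup" -- proxy works while the window lives

// "Blow the whole window out the air lock": revoke every outstanding ref.
for (const revoke of revokersForWindow) revoke();

let threw = false;
try {
  ref.title;  // any operation on a revoked proxy throws a TypeError
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw);  // true
```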

> The
> canonical example I used in some es-discuss threads is the idea of an
> 'application' that hosts multiple isolated 'modes', where the modes
> can reference each other and the application references all active
> modes. In such an application, it becomes trivial for a single active
> mode to hold references to dozens of dead modes, as a result leaking
> *all* of them until the cycle is manually broken or all modes are
> closed.

I do love examples, but maybe we can get to the second example after the
first one is well understood.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Jason Orendorff
On 4/1/14, 9:57 AM, Benjamin Smedberg wrote:
> On 4/1/2014 10:54 AM, Benoit Jacob wrote:
>> Let's see if we can wrap up this conversation soon now. How about:
>>
>>  MOZ_MAKE_COMPILER_BELIEVE_IS_UNREACHABLE
> I counter-propose that we remove the macro entirely. I don't believe
> that the potential performance benefits we've identified are worth the
> risk.

I agree.

What should we replace MOZ_ASSUME_UNREACHABLE with?

If the code is truly unreachable, it doesn't matter what we replace it
with. The question is what to do when the impossible occurs. To me,
letting control flow past such a place is just as scary as undefined
behavior. So I think our replacement for MOZ_ASSUME_UNREACHABLE should
crash even in non-debug builds. This suggests MOZ_CRASH.

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Jason Orendorff
On 4/1/14, 10:05 AM, Benoit Jacob wrote:
> This macro is especially heavily used in SpiderMonkey. Maybe
> SpiderMonkey developers could weigh in on how large are the benefits
> brought by UNREACHABLE there?

I don't believe there are any benefits. Those uses of
MOZ_ASSUME_UNREACHABLE are not intended as optimizations. When they were
inserted, they used JS_NOT_REACHED, which called the assertion failure
code and crashed hard, even in non-debug builds.

Bug 723114 changed the semantics of the existing macro. Later, the name
was changed. In hindsight, that was a big safety regression. By all
means back it out.

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Jason Orendorff
On 4/1/14, 4:37 PM, Karl Tomlinson wrote:
> Jason and I spoke on irc and realised that we were talking about 2
> slightly different things.

Yep. Filed bug 990764. If someone wants to take, we can continue
discussion there.

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to (finish) implementing: Resource Timing API

2014-04-30 Thread Jason Duell

We have landed (so far pref'd off) most of the Resource Timing API:

  http://www.w3.org/TR/resource-timing/
  https://bugzilla.mozilla.org/show_bug.cgi?id=822480

We've opened a meta-bug for the followups that are needed for the full API:

  https://bugzilla.mozilla.org/show_bug.cgi?id=1002855

It's not clear to those of us toiling in the necko trenches whether the 
full API is actually needed to pref on or not (I hear rumors the code 
we've got so far might help devtools people with timings that they need).


Have thoughts?  Discuss in the meta-bug.

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Enabling new HTTP cache on nightly (browser only, not automated tests) soon

2014-04-30 Thread Jason Duell
In February we briefly turned on the new HTTP cache ("cache2") for a few 
days--it was quite useful at shaking out some bugs.  We're planning to 
do this again starting in the next day or two--if this seems like a Bad 
Idea to you please comment ASAP in


   https://bugzilla.mozilla.org/show_bug.cgi?id=1004185

Note: like last time, cache2 will be turned on for nightly Firefox 
desktop users only, and not for android/b2g.  It will also not be turned 
on for the buildbots, as we still have a few orange/perf bugs that we're 
tracking down.


We have a number of people who have been using cache2 for their daily 
browsing for quite some time, so we don't expect catastrophic failure. 
Please file any bugs you see in Bugzilla under Networking:Cache...


Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Enabling new HTTP cache on nightly (browser only, not automated tests) soon

2014-05-01 Thread Jason Duell

On 05/01/2014 08:01 AM, Gavin Sharp wrote:

I had the same concern in bug 967693. There was some back and forth in
a private email thread (we should have discussed it in the bug...)
that essentially boiled down to "the orange/perf investigations are
blocked, we want more nightly crash/bug reports to work on in parallel
while those are figured out", if I recall correctly.


Yes, that's exactly right.  The orange/perf bugs are not major bugs, but 
they're enough to keep us from doing a full landing.  But meanwhile a 
wider test audience is likely to shake out bugs that don't emerge from 
our automated test coverage.  We found a few bugs that way in our last 
trial run.  It'd be good to be able to work on those in parallel.


> Do we have the telemetry or feedback channels in place to
> measure what you need to measure?

We'll be keeping an eye on telemetry measures like cache hit rate and 
response time.  The other feedback channel is to alert engineers and 
other nightly users that this is happening so we're more likely to get 
bugs filed.


Jason




Gavin

On Thu, May 1, 2014 at 10:00 AM, Benjamin Smedberg
 wrote:

On 5/1/2014 2:23 AM, Jason Duell wrote:




Note: like last time, cache2 will be turned on for nightly Firefox desktop
users only, and not for android/b2g.  It will also not be turned on for the
buildbots, as we still have a few orange/perf bugs that we're tracking down.



This part doesn't make much sense to me. What is the goal of getting nightly
feedback if we know that there are unresolved test failures and performance
issues? Do we think that enabling it on nightly will help fix the known
issues?

What kind of feedback are you hoping to get from nightly? Do we have the
telemetry or feedback channels in place to measure what you need to measure?

--BDS

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Enabling new HTTP cache on nightly (browser only, not automated tests) soon

2014-05-05 Thread Jason Duell

Our trial run of the HTTP cache v2 is done and we will be back to using
the old cache as of tonight's nightly.  We found one very important bug
that didn't show up in automated tests, which is great.

Jason

https://bugzilla.mozilla.org/show_bug.cgi?id=1006181
https://bugzilla.mozilla.org/show_bug.cgi?id=1006197




On 04/30/2014 11:23 PM, Jason Duell wrote:

In February we briefly turned on the new HTTP cache ("cache2") for a few
days--it was quite useful at shaking out some bugs.  We're planning to
do this again starting in the next day or two--if this seems like a Bad
Idea to you please comment ASAP in

https://bugzilla.mozilla.org/show_bug.cgi?id=1004185

Note: like last time, cache2 will be turned on for nightly Firefox
desktop users only, and not for android/b2g.  It will also not be turned
on for the buildbots, as we still have a few orange/perf bugs that we're
tracking down.

We have a number of people who have been using cache2 for their daily
browsing for quite some time, so we don't expect catastrophic failure.
Please file any bugs you see in Bugzilla under Networking:Cache...

Jason
___
dev-planning mailing list
dev-plann...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-planning


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Argument validation as a JSM?

2014-05-19 Thread Jason Orendorff


On 05/15/2014 10:58 AM, ajvinc...@gmail.com wrote:

Re: readability, that's something to think about, but when I write code like 
this:

 if ((typeof num != "number") ||
     (Math.floor(num) != num) ||
     isNaN(num) ||
     (num < 0) ||
     Math.abs(num) == Infinity) {
   throw new Error("This needs to be a non-negative whole number");
 }

Well, it adds up.  :)  Even now I can replace the fifth condition with 
!Number.isFinite(num).


FWIW, the newish function Number.isInteger covers every requirement here 
except the fourth:

http://people.mozilla.org/~jorendorff/es6-draft.html#sec-number.isinteger

Also FWIW, I agree with your main point about boilerplate: even if you 
use the nice standard library, you end up with

    if (!Number.isInteger(num) || num < 0) {
      throw new Error("handwritten error message");
    }

and I would rather see

    checkArgIsNonNegativeInteger(num);

But I'm not the one you have to convince.
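
To make the helper style concrete, here is a minimal sketch of what such a
check could look like (the name checkArgIsNonNegativeInteger is hypothetical,
not an existing Gecko utility):

```javascript
// Hypothetical helper: validate that `num` is a non-negative integer.
// Number.isInteger already rejects non-numbers, NaN, Infinity, and
// fractional values, so only the sign check remains.
function checkArgIsNonNegativeInteger(num) {
  if (!Number.isInteger(num) || num < 0) {
    throw new TypeError("expected a non-negative whole number, got " + num);
  }
  return num;
}
```

Call sites then shrink to a single line, and the error message lives in one
place instead of being handwritten at every check.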

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Are you interested in doing dynamic analysis of JS code?

2014-06-25 Thread Jason Orendorff

We're considering building a JavaScript API for dynamic analysis of JS code.
Here's the sort of thing you could do with it:

  - Gather code coverage information (useful for testing/release mgmt?)

  - Trace all object mutation and method calls (useful for devtools?)

  - Record/replay of JS execution (useful for devtools?)

  - Implement taint analysis (useful for the security team or devtools?)

  - Detect when a mathematical operation returns NaN (useful for game
developers?)

Note that the API would not directly offer all these features. Instead, it
would offer some powerful but mind-boggling way of instrumenting all JS
code. It would be up to you, the user, to configure the instrumentation, get
useful data out of it, and display or analyze it. There would be some
overhead when you turn this on; we don't know how much yet.

We would present a detailed example of how to use the proposed API, but we
are so early in the process that we're not even sure what it would look like.
There are several possibilities.

We need to know how to prioritize this work. We need to know what kind of
API we should build. So we're looking for early adopters. If that's you, please
speak up and tell us how you'd like to instrument JS code.
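
To make the "trace all object mutation and method calls" bullet concrete,
here is the kind of data such instrumentation might collect, sketched as a
plain-JS wrapper (traceCalls and the log format are invented for
illustration; the proposed engine-level API would record this without
modifying user objects at all):

```javascript
// Illustrative only: wrap every method of `obj` so each call is appended
// to `log`. A real engine API would observe calls without touching objects.
function traceCalls(obj, log) {
  for (const name of Object.getOwnPropertyNames(obj)) {
    const fn = obj[name];
    if (typeof fn !== "function") continue;
    obj[name] = function (...args) {
      log.push({ name, args });       // record method name and arguments
      return fn.apply(this, args);    // then run the original method
    };
  }
  return obj;
}

const log = [];
const counter = traceCalls({
  total: 0,
  add(n) { this.total += n; return this.total; },
}, log);

counter.add(2);
counter.add(3);
// log now holds one record per call: name "add" with args [2], then [3]
```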

--
Nicolas B. Pierron
Jason Orendorff
(JavaScript engine developers)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Are you interested in doing dynamic analysis of JS code?

2014-06-25 Thread Jason Orendorff


On 06/25/2014 01:04 PM, Fitzgerald, Nick wrote:
Yes! We'd absolutely love to show code coverage in the debugger's 
source editor! We've played with implementations based on 
Debugger.prototype.onEnterFrame and hidden breakpoints, but it is 
pretty slow and feels like a huge hack. Would this work likely yield 
improved performance over that approach?


If this pans out, I'd expect it to be faster, but how much depends on a 
lot of variables. I wish I could be more specific.


Note that the API would not directly offer all these features. Instead, it
would offer some powerful but mind-boggling way of instrumenting all JS
code. It would be up to you, the user, to configure the instrumentation, get
useful data out of it, and display or analyze it. There would be some
overhead when you turn this on; we don't know how much yet.


Would the API be something like DTrace? Just want to figure out what 
kind of thing we are talking about here.


Very good question. I too am interested in figuring out what kind of 
thing we're talking about.


One proposal is to build something like strace: it would be impossible 
to modify the instrumented code or its execution, only observe. 
User-specified data about JS execution would be logged, then delivered 
asynchronously. (Don't read too much into this -- the implementation 
would be completely unlike strace. Note too that records would *not* be 
delivered synchronously and would not contain stacks, though you could 
recover the stack from enter/leave records.)


An alternative involves letting you modify JS code just before it's 
compiled (source-to-source transformation). This is more general (you 
could modify the instrumented code arbitrarily, and react synchronously 
as it executes) but maybe that's undesirable. It's not clear that 
transformed source would interact nicely with other tools, like the 
debugger. And a usable API for this is a tall order.


So. Tradeoffs.


/me raises hand

Mostly interested in tracing calls and mutations. Also code coverage, 
but to a bit of a lesser extent.


Great, we'll get in touch off-list!

Record/replay is such a holy grail (eclipsed only by "time traveling" 
/ reverse and replay interactive (as opposed to a static recording) 
debugging with live, on-stack code editing) that I hesitate to even 
get my hopes up...


Don't get your hopes up.

A dynamic analysis API would help with the "record" side of rr-style 
record/replay **of JS code alone**. You'd draw a boundary around all JS 
code and capture all input across that boundary. But replay requires 
more work in the engine. A bigger problem is that in such a scheme, 
replay only reproduces what happens inside the boundary, and the DOM, in 
this scenario, is still on the outside. A real record/replay product 
would have to support the DOM too. We're far from being able to do that.


The dynamic analysis API would only be one part of the puzzle. I'm sorry 
for the misleading gloss.


-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Are you interested in doing dynamic analysis of JS code?

2014-06-27 Thread Jason Orendorff


On 06/25/2014 05:40 PM, Robert O'Callahan wrote:
On Thu, Jun 26, 2014 at 8:06 AM, Jason Orendorff
<jorendo...@mozilla.com> wrote:


An alternative involves letting you modify JS code just before
it's compiled (source-to-source transformation). This is more
general (you could modify the instrumented code arbitrarily, and
react synchronously as it executes) but maybe that's undesirable.
It's not clear that transformed source would interact nicely with
other tools, like the debugger. And a usable API for this is a
tall order.


Why is a usable S2S API difficult to produce?

A while ago I spent a few years doing dynamic analysis of Java code. 
Although the VMs had a lot of tracing and logging hooks, bytecode 
instrumentation was always more flexible and, done carefully, almost 
always more performant. I had to write some libraries to make it easy 
but tool builders enjoy writing and reusing those :-).


I meant "usable" in the Donald Norman sense. The underlying 
source-transformation hook could be quite simple, and yet completely 
beyond the people who actually have a use for it. How many potential 
users are willing and ready to munge ASTs by hand?


Bug 884602 contains a patch for the hook alone. I'd love to
- land that patch (it needs tests)
- write an addon that makes it easy to play with instrumentation of content JS
- post a video demoing it

But it's very hard to justify doing that instead of, say, ES6 classes.

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


JS assertions that get optimized away! (was: Are you interested in doing dynamic analysis of JS code?)

2014-06-27 Thread Jason Orendorff


On 06/26/2014 06:59 AM, David Rajchenbach-Teller wrote:

I would be interested in adding boundary checks and invariant checks
that could be eliminated in opt builds. Is this in the scope of your
project?


No, I don't think so. S2S could certainly do it, but you wouldn't want 
to use S2S on all browser code in opt builds. Too expensive.


But: there's already a pattern you can use to enable assertions that 
will be zero-cost if your code runs in IonMonkey! I didn't realize this 
until you mentioned it. But suppose you start out with this:


function DEBUG() {
  return false;
}

function assert(condition) {
  if (!condition) {
    throw new Error("assertion failed");
  }
}

Then you can write:

if (DEBUG()) assert(flowers.areBeautiful());

Now of course `assert(flowers.areBeautiful())` will not execute whether 
IonMonkey kicks in or not, because DEBUG() returns false.


But when IonMonkey compiles this, it can do even better. First, DEBUG() 
will be inlined. Then, since the return value is always `false`, the 
whole if-statement will be optimized away. (Currently, we don't optimize 
away if-statements that depend on global vars or even consts. Only 
functions.)


Admittedly this pattern is not quite as pretty as 
`assert(flowers.areBeautiful())`. Here's why that wouldn't work. JS 
doesn't have macros. :( Arguments to a function are evaluated before the 
function is called. This means that if we wrote assertions this way, 
`flowers.areBeautiful()` would have to be called even in non-debug 
builds. The language spec requires it. We need the if-statement to get 
the right debug-only behavior.


-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Exposing JIT performance faults in profiler/devtools (was: Are you interested in doing dynamic analysis of JS code?)

2014-06-27 Thread Jason Orendorff


On 06/25/2014 05:51 PM, Katelyn Gadd wrote:

Maybe you could use mutation/method call tracing to do a
profile-guided-optimization equivalent for JS where you record
argument types and clone functions per-callsite to ensure everything
is monomorphic?


Yes, I think this is something that could be done with it.

But! Note that we're separately working on exposing JIT performance 
faults in the profiler/devtools---directly reporting the kind of 
problems you're thinking about painstakingly inferring. \o/


I don't know how far along it is, and it's maybe too early to guess how 
actionable the output will be, but it's a much more certain thing than 
this dynamic analysis API. No meta bug, but several people are working 
on aspects of it. Maybe Kannan can tell more. (cc-ing him)


-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: c++ unit test in content process

2014-10-06 Thread Jason Orendorff


On 10/03/2014 08:59 AM, Benjamin Smedberg wrote:

On 10/3/2014 9:46 AM, Patrick Wang wrote:

The test I am writing is to test an implementation of WebRTC's TCP
socket in the content process. This code is built on top of TCPSocket's
IPDL in C++ and has no IDL, so it cannot be called directly from JS,
and the tests for the chrome-process version are written with gtest.
Therefore I am thinking of writing a similar one with gtest but run in
the content process.


Does the unit of code you're testing include two processes 
communicating via IPDL? If so, I think you're going to need something 
more than a C++ unit test, and should probably focus your efforts on a 
more end-to-end integration test. Setting up IPDL connections and all 
of the machinery that goes with that doesn't sounds like something we 
should be trying in low-level unit tests.


I agree.

This was hard for me to accept when I first started working on the JS 
engine. It seemed like there should be more C++ unit tests. And maybe if 
there were, we'd have a better, less spaghetti-coupled design at that level.


But as it stands, both in the JS engine and in Gecko generally, the 
dependencies are such that bootstrapping one subsystem usually amounts 
to firing up the whole thing. Mocking "the rest of the browser" is a 
long hard road.


The good news is, using integration test frameworks to test unit 
functionality works astonishingly, deplorably well in Gecko. A mochitest 
often lets you target the exact code you wish to test using a tiny 
amount of HTML and JS. And it isolates your test from all sorts of 
future C++-level API churn.


That said, if you see ways to improve the situation, go strong. The 
reason we have tests and a testing culture here at all is herculean 
efforts from Rob Sayre and others who saw what needed doing and did it.


-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: About the bitfield requirement for portibility

2014-11-06 Thread Jason Orendorff
I guess I was a little irked that people are still tripping over this 
ancient document (didn't we delete that?), because I just took the time 
to clobber most of it and update what was left.


https://developer.mozilla.org/en-US/docs/Mozilla/Cpp_portability_guide

-j

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Is anyone still using JS strict warnings?

2014-12-19 Thread Jason Orendorff
So if you go to about:config and set the javascript.options.strict pref,
you'll get warnings about accessing undefined properties.

js> Math.TAU
undefined
/!\ ReferenceError: reference to undefined property Math.TAU

(It says "ReferenceError", but your code still runs normally; it really is
just a warning.)

Is anyone using this? Bug 1113380 points out that the rules about what kind
of code can cause a warning are a little weird (on purpose, I think). Maybe
it's time to retire this feature.

https://bugzilla.mozilla.org/show_bug.cgi?id=1113380

Please speak up now, if you're still using it!

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is anyone still using JS strict warnings?

2014-12-19 Thread Jason Orendorff
On Fri, Dec 19, 2014 at 5:13 PM, Jim Blandy  wrote:

> The bug is surprising, in that it claims that the bytecode that consumes
> the value determines whether a warning is issued (SETLOCAL;CALL), rather
> than the bytecode doing the fetch.
>
> Is that the intended behavior? I can't see how that makes much sense.
>

That's intentional, yes. The idea is that phrases like these:

if (obj.prop === undefined)  // ok

if (obj.prop && obj.prop.prop2)  // ok

shouldn't cause warnings. These are "detecting" uses: obj.prop is fetched
for the purpose of determining whether or not it exists.

But to distinguish "detecting" uses from others, we have to look at the
bytecode that consumes the value. All this is one reason I'd like to remove
the feature --- the bytecode inspection here is pretty sloppy. See comment
4 in the bug.

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: HTTP/1.1 Multiplexing

2015-04-09 Thread Jason Duell
At this point the HTTP/2 ship has sailed.  It's exceedingly unlikely that
we or any other browser vendor or the IETF are going to pivot to a modified
HTTP/1.1 to get the feature set here.

Sorry!

Jason

On Thu, Apr 9, 2015 at 11:53 AM, Daniel Stenberg  wrote:

> On Wed, 8 Apr 2015, max.bruc...@gmail.com wrote:
>
>  A request begins by adding a header: X-Req-ID, set to a connection-unique
>> value. The server responded with an exact copy of this ID, and a
>> X-Req-Target header which specifies the location of the response(for server
>> pushing mostly). The server sets the X-Req-ID = push/based-id for pushed
>> responses. In doing so, we do away with the standard in-order processing &
>> one-response-per-request model.
>>
>
> In "doing away" with that, you're also no longer HTTP/1.1 compliant (which
> is a pretty significant drawback) and you also miss a few of the definite
> benefits that HTTP/2 brings: like priorities and HPACK on the first request.
>
> My guess is that you will have a hard time to convince HTTP people that
> this is a good idea worthy to implement even when you call it something
> else than 1.1. But then that's just my opinion.
>
> --
>
>  / daniel.haxx.se
>



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using rust in Gecko. rust-url compatibility

2015-04-30 Thread Jason Duell
+1 to asserting during tests. I'd feel better about doing it on nightly too
if there were a way to include the offending URI in the crash report.  But
I'm guessing there's not?

On Thu, Apr 30, 2015 at 3:42 PM, Jet Villegas  wrote:

> I wonder why we'd allow *any* parsing differences here? Couldn't you just
> assert and fail hard while you're testing against our tests and in Nightly?
> I imagine the differences you don't catch this way will be so subtle that
> crowd-sourcing is unlikely to catch them either.
>
> --Jet
>
> On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu 
> wrote:
>
> > As some of you may know, Rust is approaching its 1.0 release in a couple
> of
> > weeks. One of the major goals for Rust is using a rust library in Gecko.
> > The specific one I'm working at the moment is adding rust-url as a safer
> > alternative to nsStandardURL.
> >
> > This project is still in its infancy, but we're making good progress. A
> WIP
> > patch is posted in bug 1151899, while infrastructure support for the rust
> > compiler is tracked in bug 1135640.
> >
> > One of the main problems in this endeavor is compatibility. It would be
> > best if this change wouldn't introduce any changes in the way we parse
> and
> > encode/decode URLs, however rust-url does differ a bit from Gecko's own
> > parser. While we can account for the differences we know of, there may
> be a
> > lot of other cases we are not aware of. I propose using our volunteer
> base
> > in trying to find more of these differences by reporting them on Nightly.
> >
> > My patch currently uses printf to note when a parsing difference occurs,
> or
> > when any of the getters (GetHost, GetPath, etc) returns a string that's
> > different from our native implementation. Printf might not be the best
> way
> > of logging these differences though. NSPR logging might work, or even
> > writing to a log file in the current directory.
> >
> > These differences are quite privacy sensitive, so an automatic reporting
> > tool probably wouldn't work. Has anyone done something like this before?
> > Would fuzzing be a good way of finding more cases?
> >
> > I'm waiting for any comments and suggestions you may have.
> > Thanks!



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Voting in BMO

2015-06-09 Thread Jason Duell
I've never seen votes make a real difference in the 6 years I've been
around on Bugzilla.  The one use case I can think for keeping them is as an
"escape valve" for user frustration on old, long-standing bugs like

  https://bugzilla.mozilla.org/show_bug.cgi?id=41489

I.e. when people start griping about "I can't believe lame Mozilla hasn't
fixed this yet" we can tell people to vote instead of filling the comments
with complaints.  But that's a rare case and I'm not sure it's worth
keeping voting just for that.

On Tue, Jun 9, 2015 at 2:24 PM, Chris Peterson 
wrote:

> On 6/9/15 2:09 PM, Mark Côté wrote:
>
>> In a quest to simplify both the interface and the maintenance of
>> bugzilla.mozilla.org, we're looking for features that are of
>> questionable value to see if we can get rid of them.  As I'm sure
>> everyone knows, Bugzilla grew organically, without much of a road map,
>> over a long time, and it experienced a lot of scope bloat, which has
>> made it complex both on the inside and out.  I'd like to cut that down
>> at least a bit if I can.
>>
>> To that end, I'd like to consider the voting feature.  While it is
>> enabled on a quite a few products, anecdotally I have heard
>> many times that it isn't actually useful, that is, votes aren't really
>> being used to prioritize features & fixes.  If your team uses voting,
>> I'd like to talk about your use case and see if, in general, it makes
>> sense to continue to support this feature.
>>
>
> I vote for bugs as a polite (sneaky?) way to watch a bug's bugmail without
> spamming all the other CCs by adding myself to the bug's "real" CC list.
>
>
> chris
>
>



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Type-based alias analysis and Gecko C++

2019-02-15 Thread Jason Orendorff
On Fri, Feb 15, 2019 at 8:43 AM David Major  wrote:

> [...] If we can easily remove (or reduce) uses of this flag, I think
> that would be pretty uncontroversial.
>

It sounds risky to me. My impression is we have a lot of TBAA violations,
which would make every compiler version bump a game of UB roulette, right?

Probably looks like I'm weighing in on both sides here. I'm saying the
situation is not ideal.

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Type-based alias analysis and Gecko C++

2019-02-19 Thread Jason Orendorff
I accidentally sent a question to Henri only. Re-adding the list.

-j

On Mon, Feb 18, 2019 at 1:24 AM Henri Sivonen  wrote:

> On Fri, Feb 15, 2019 at 6:27 PM Jason Orendorff 
> wrote:
> >
> > On Fri, Feb 15, 2019 at 3:00 AM Henri Sivonen 
> wrote:
> >>
> >> If we have no intention of getting rid of -fno-strict-aliasing, it
> >> would make sense to document this at
> >>
> https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code
> >> and make it explicitly OK for Gecko developers not to worry about
> >> type-based alias analysis UB--just like we don't worry about writing
> >> exception-safe code.
> >
> >
> > Not worrying about exceptions is easy for me to understand: I can assume
> that exceptions are never thrown.
> >
> > What does it mean to not worry about type-based alias analysis UB?
>
> (This needs a big "as I understand it" disclaimer from me:)
>
> It means that it's OK to reinterpret_cast T* into U* even if T and U
> differ by more than signedness and neither T nor U is char (or signed
> char or unsigned char).
>
> > What kind of code is OK under -fno-strict-aliasing,
>
> E.g. code that views a buffer of doubles via uint64_t* (without a
> memcpy in between). Or code that casts char16_t* to wchar_t* in clang
> on Windows. (MSVC doesn't implement type-based alias analysis anyway.)
>
> More importantly in terms of usage frequency but less importantly in
> terms of what C++ implementations we actually use, viewing a buffer of
> uint8_t via a pointer to some other type or vice versa. (Whether
> uint8_t is the same as unsigned char and, therefore, whether uint8_t*
> can alias anything, or whether uint8_t is a non-char type of the same
> size that doesn't inherit char's aliasing rules is
> implementation-defined. So whether something is Undefined Behavior is
> implementation-defined behavior. AFAIK, in the implementations we
> actually use, uint8_t is unsigned char. See
> https://bugzilla.mozilla.org/show_bug.cgi?id=1426909#c21 and
> https://bugzilla.mozilla.org/show_bug.cgi?id=1426909#c30 .)
>
> > and what does it do?
>
> My understanding is that the original purpose is that if you in a loop
> read from T* and write on each iteration via U* (where T and U differ
> by more than signedness and neither is char), without
> -fno-strict-aliasing the compiler doesn't need to pessimize on the
> assumption that a write via U* might change the next read via T*. With
> -fno-strict-aliasing, the compiler has to pessimize on the assumption
> that a write via U* might affect the next read via T*. The problem is
> that if accessing a T-typed value via U* is UB *in general*, who knows
> what *else* the compiler might do if -fno-strict-aliasing is not used
> and there are reinterpret_casts between pointer types, especially with
> LTO giving it a broader view to spot UB.
>
> The unproductivity is being afraid of reinterpret_cast on pointers due
> to not knowing what it might let the compiler do and whatever
> counter-measures and debates arise from not knowing. With
> -fno-strict-aliasing, reinterpret_cast on pointers works in a way that
> programmers can reason about.
>
> As for whether the optimizations enabled by the absence of
> -fno-strict-aliasing are valuable: On one hand, MSVC doesn't have the
> optimizations to begin with (AFAIK). On the other hand, they are perceived
> as so valuable that a notable component of the motivation of char8_t
> is to make it a separate aliasing domain from char. Similarly, C++
> already makes char16_t and char32_t distinct aliasing domains from
> uint16_t and uint32_t while C does not. (So compiling C code that does
> not have UB in C++ mode can make the code have UB if
> -fno-strict-aliasing is not in use.)
>
> [...]

>
> --
> Henri Sivonen
> hsivo...@mozilla.com
>

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to ship: String.prototype.matchAll

2019-03-04 Thread Jason Orendorff
In Firefox 67 I plan to ship String.prototype.matchAll. It's behind a
Nightly-only #ifdef right now (since December).

Volunteer contributor André Bargull wrote our implementation of this
feature. Tests too. Thank him if you see him.

## Standards

Proposal (with examples and rationale):
https://github.com/tc39/proposal-string-matchall

Specification: https://tc39.github.io/proposal-string-matchall/

This feature is at Stage 3 in the TC39 process. The spec is complete and no
significant changes are expected. The main hurdle left before
standardization is compatible shipping implementations. TC39 process
explainer: https://tc39.github.io/process-document/

## Tracking bugs

Implementation: https://bugzilla.mozilla.org/show_bug.cgi?id=1435829

Enable by default: https://bugzilla.mozilla.org/show_bug.cgi?id=1531830

## MDN

Documentation wanted. You can help:
https://bugzilla.mozilla.org/show_bug.cgi?id=1435829#c35

## Support in other browsers / JS engines

Enabled by default in Chrome 73.

JSCore (Safari) bug here: https://bugs.webkit.org/show_bug.cgi?id=186694

## Test coverage

More than you can shake a stick at.

https://searchfox.org/mozilla-central/source/js/src/tests/test262/built-ins/String/prototype/matchAll
https://searchfox.org/mozilla-central/source/js/src/tests/test262/built-ins/RegExp/prototype/Symbol.matchAll
https://searchfox.org/mozilla-central/source/js/src/tests/test262/built-ins/RegExpStringIteratorPrototype
https://searchfox.org/mozilla-central/source/js/src/tests/non262/String/matchAll.js

-j


Re: android-em-4-3-armv7-api16 possibly falsy timeout

2019-03-19 Thread Jason Orendorff
Here is another bug with the same pattern (recently added, unrelated to the
others, no other failures on this test, timeout "literally" "impossible",
log shows that the test is not timing out):

https://bugzilla.mozilla.org/show_bug.cgi?id=1533500

It would be good to have a single bug to dup all of these to. Who's working
on this platform?

-j



On Mon, Mar 18, 2019 at 11:20 PM  wrote:

> Hi,
>
> There are some strange intermittent timeout reports on
> android-em-4-3-armv7-api16 platform, I suspect there is something wrong on
> this test platform.
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=1533737
> https://bugzilla.mozilla.org/show_bug.cgi?id=1535286
> https://bugzilla.mozilla.org/show_bug.cgi?id=1534079
>
> All of the tests are recently added, they are completely unrelated to each
> other, and there are no other failures on these tests.
>
> But they have one or two intermittent failure on
> "android-em-4-3-armv7-api16". I'm familiar with 2 of the testcases; they
> can't possibly cause a timeout. In particular, Bug 1535286 is just a patch
> to remove a bogus assertion with a trivial testcase ` keyTimes='.1;.6;.6' path='m0,1l7,3' keyPoints='.6;1;.7'/>`, which literally
> cannot time out.
>
> Could anyone investigate whether there are some problems on
> "android-em-4-3-armv7-api16" to cause newly added testcase to timeout?
>
> Thanks.


Danger: `hg rebase` with format-source seems to be eating patches

2019-04-09 Thread Jason Orendorff
If you use `hg rebase` or `hg pick`, you should disable the format-source
extension until this is resolved:

(data loss) Suspected bad interaction between hg rebase and format-source
https://bugzilla.mozilla.org/show_bug.cgi?id=1542871

You can disable this extension by commenting out a line in your ~/.hgrc,
like this:

[extensions]
...
#format-source =
/Users/you/.mozbuild/version-control-tools/hgext/format-source

-j


Re: Upcoming changes to our JS coding style

2019-06-13 Thread Jason Orendorff
On Thu, Jun 13, 2019 at 1:42 PM Victor Porof  wrote:

> To improve developer productivity, we plan to automate JS formatting using
> Prettier , a broadly adopted tool specifically
> designed for this task and maintained by well known stakeholders. This
> should reduce the amount of time spent writing and reviewing JS code.
>

Thanks for doing this!

This could cause some JS engine frontend tests (under js/src/tests/non262
and js/src/jit-test) to silently stop testing what they're meant to test.
For example, Prettier inserts missing semicolons (which is GREAT) but we
want to keep our Automatic Semicolon Insertion test coverage.

What should we do? The number of test files affected is probably a very
small fraction of the whole, but they're important (try implementing ASI
sometime) and I don't know how to identify them. It's not obvious from the
filenames. Some directory names (non262/{jit,JSON,Map}) imply that it's
probably fine to beautify everything in there, but a lot of files are named
stuff like "tests/non262/extensions/regress-474771-01.js".
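To make the concern concrete, here's an invented example of the kind of layout-sensitive test that reformatting could silently defang:

```js
// This function relies on Automatic Semicolon Insertion: there is no
// semicolon after `return`, so ASI terminates the statement and the
// function returns undefined -- the object-like block below is dead code.
function asiReturn() {
  return
  { value: 1 }
}
console.log(asiReturn()); // undefined
```

If a tool normalizes this to `return { value: 1 };` (or even just adds the explicit semicolon), the exact token layout the test was written to exercise is gone, even though the file still passes.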

-j


Re: Upcoming changes to our JS coding style

2019-06-13 Thread Jason Orendorff
On Thu, Jun 13, 2019 at 3:32 PM Dave Townsend  wrote:

> Prettier will share eslint's list of things to ignore and those tests are
> already ignored:
> https://searchfox.org/mozilla-central/rev/1133b6716d9a8131c09754f3f29288484896b8b6/.eslintignore#239
> .
>
> Of course if you want parts of that directory to be linted and formatted
> then you can modify that file as necessary.
>

Oh, this is perfect. Thanks.

-j


Re: Upcoming C++ standards meeting in Cologne

2019-07-18 Thread Jason Orendorff
Botond,

Presumably it's too late for the ongoing meeting, but I'm very concerned
about C++20 assertions.

The proposal says that in a release build, any contract violation is
undefined behavior. Sounds reasonable enough.

Every assertion adds potential UB. Hmm.

ISTM this makes the feature very much off-limits for us. Undefined behavior
is bad. It accounts for most security vulnerabilities in Firefox. We could
use this only if Clang adds a mode that, contrary to the standard,
guarantees that unchecked contracts do exactly nothing, while retaining all
the other optimizations that we rely on.

But is it really just us? Are these semantics good for anyone? C++20
assertions seem to be a software-engineering quality-assurance feature
crazy-glued to a very tricky, dangerous,
squeeze-out-the-last-few-drops-of-CPU performance feature. But those two
things are more useful separately; and surely the latter should not be
designed to look like the former. I would like to understand the motivation
behind this.

-j

On Fri, Jul 12, 2019 at 11:25 PM Botond Ballo  wrote:

> Hi everyone!
>
> The next meeting of the C++ Standards Committee (WG21) will be July
> 15-20 in Cologne, Germany.
>
> (Apologies for not sending this announcement sooner!)
>
> This is a particularly important meeting because the committee aims to
> publish the C++20 Committee Draft, a feature-complete draft of C++20
> that will be sent out for balloting and comment from national
> standards bodies, at the end of this meeting. C++20 is probably going
> to be the language's largest release since C++11, containing an
> extensive lineup of significant new features (Concepts, Modules,
> Coroutines, Contracts, Ranges, and default comparisons (<=>), among
> others). Modules has just merged into the working draft at the last
> meeting, and continues to be under active discussion due to tooling
> impacts. Contracts are at risk of being pulled due to controversy.
> Even for uncontroversial features, technical issues that we can't fix
> in time are sometimes discovered and result in the feature being
> bumped to the next release. All in all, this is going to be a busy and
> eventful meeting.
>
> If you're curious about the state of C++ standardization, I encourage
> you to check out my blog posts where I summarize each meeting in
> detail (most recent one here [1]), and the list of proposals being
> considered by the committee (new ones since the last meeting can be
> found here [2] and here [3]).
>
> I will be attending this meeting, likely splitting my time between the
> Evolution Working Group (where new language features are discussed at
> the design level) and interesting Study Groups such as SG7
> (Reflection) and SG15 (Tooling). As always, if there's anything you'd
> like me to find out for you at the meeting, or any feedback you'd like
> me to communicate, please let me know!
>
> Finally, I encourage you to reach out to me if you're thinking of
> submitting a proposal to the committee. I'm always happy to help with
> formulating and, if necessary, presenting a proposal.
>
> Cheers,
> Botond
>
> [1]
> https://botondballo.wordpress.com/2019/03/20/trip-report-c-standards-meeting-in-kona-february-2019/
> [2]
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/#mailing2019-03
> [3]
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/#mailing2019-06


Intent to ship: Promise.allSettled

2019-10-30 Thread Jason Orendorff
In Firefox 71, we'll ship Promise.allSettled, a standard way to `await`
several promises at once. André Bargull [:anba] contributed the
implementation of this feature. It's in Nightly now.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1539694
Shipped in: https://bugzilla.mozilla.org/show_bug.cgi?id=1549176

Standard: https://tc39.es/ecma262/#sec-promise.allsettled

MDN:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/allSettled

Platform coverage: All, no pref

DevTools bug: N/A. The DevTools don't currently have any custom support
for peeking at the internal state of Promise objects.

Other browsers: Shipped in Chrome 76, Safari 13.

Testing: There are test262 tests covering this feature:
https://github.com/tc39/test262/tree/master/test/built-ins/Promise/allSettled

Use cases: Promise.allSettled is useful in async code. It's used to wait
for several tasks to finish in parallel. What sets it apart from the
existing methods Promise.race and Promise.all is that it *doesn't*
short-circuit as soon as a single task succeeds/fails.
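To illustrate the result shape (my own sketch, not from the spec text):

```js
// allSettled never short-circuits and never rejects: it resolves with one
// { status, value } or { status, reason } record per input promise.
async function demo() {
  const results = await Promise.allSettled([
    Promise.resolve(42),
    Promise.reject(new Error("boom")),
  ]);
  console.log(results[0]);                // { status: "fulfilled", value: 42 }
  console.log(results[1].status);         // "rejected"
  console.log(results[1].reason.message); // "boom"
}
demo();
```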

Secure contexts: This is a JS language feature and is therefore present
in all contexts.

-j


Re: Intent to ship: Promise.allSettled

2019-10-30 Thread Jason Orendorff
On Wed, Oct 30, 2019 at 5:20 PM Jan-Ivar Bruaroey  wrote:

> This always seemed trivial to me to do with:
>
>  Promise.all(promises.map(p => p.catch(e => e)))
>

I guess it depends on your definition of "trivial". If everyone knows de
Morgan's laws, we don't need *both* `||` and `&&` operators...

> But I guess it's too late to make that point. [...]
>

Rather; but there is still a (rapidly closing) window to comment on the
*next* proposal in this vein: Promise.any <
https://github.com/tc39/proposal-promise-any/#ecmascript-proposal-promiseany>.
It is a stage 3 proposal now. Our representative on the ECMAScript standard
committee is Yulia Startsev (or you can just email me).

-j


Re: Consider avoiding allSettled in tests (was: Re: Intent to ship: Promise.allSettled)

2019-10-31 Thread Jason Orendorff
On Thu, Oct 31, 2019 at 4:10 AM Paolo Amadini 
wrote:

>// INCORRECT
>//await Promise.allSettled([promise1, promise2]);
>
> The last example silently loses any rejection state.
>

Ignoring the awaited value here is like using `catch {}` to squelch all
exceptions, or ignoring the return value of an async function or method, or
any other expression that produces a Promise. Do we have lints for those
pitfalls? I'm kind of surprised that there doesn't seem to be a built-in
ESLint lint for `catch {}`.

There's nothing special about tests here, right? Those patterns are fishy
whether you're in a test or not.
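For what it's worth, a sketch of a helper (invented name, not an existing utility) that awaits everything but still surfaces the first rejection:

```js
// Wait for all promises, then re-throw the first rejection so callers
// (and test harnesses) can't silently lose a failure.
async function allSettledOrThrow(promises) {
  const results = await Promise.allSettled(promises);
  const failed = results.find(r => r.status === "rejected");
  if (failed) {
    throw failed.reason;
  }
  return results.map(r => r.value);
}

// Resolves to [1, 2] here, but would reject if either input rejected.
allSettledOrThrow([Promise.resolve(1), Promise.resolve(2)])
  .then(values => console.log(values)); // [ 1, 2 ]
```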

-j


Intent to Ship: String.prototype.replaceAll()

2019-11-14 Thread Jason Orendorff
String.prototype.replaceAll() is a convenience method for text processing.
André Bargull [:anba] contributed the implementation of this feature. It's
in Nightly now.

The proposal is at Stage 3 of the TC39 Process[1].

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1540021

Proposed standard: https://tc39.es/proposal-string-replaceall/

Proposal repository: https://github.com/tc39/proposal-string-replaceall

MDN: no page yet (dev-doc-needed is set on the bug)

Platform coverage: All, Nightly only, no pref

DevTools bug: N/A

Other browsers: Recently implemented in V8[2]. Not in JSC yet.[3]

Testing: Landed with automated tests.[4] test262 will supply independent
tests later.

Use cases: This is a quality-of-life feature. `string.replace(searchFor,
replacement)` has already been around for many years, but replaceAll is
what you usually mean. Spot the difference:

js> "words with spaces between".replace(" ", "-")
"words-with spaces between"
js> "words with spaces between".replaceAll(" ", "-")
"words-with-spaces-between"

The old-school workaround is to pass a RegExp with the `g` flag as the
first argument, or use `.split(" ").join("-")`. But neither hack is exactly
obvious--one StackOverflow answer explaining how to do it has 4,000+
upvotes.[5] The new method is clearer and more discoverable.

[1]: https://tc39.es/process-document/
[2]: https://bugs.chromium.org/p/v8/issues/detail?id=9801
[3]: https://bugs.webkit.org/show_bug.cgi?id=202471
[4]: https://phabricator.services.mozilla.com/D51842
[5]: https://stackoverflow.com/a/1144788/94977

-j


Intent to Implement and Ship: Promise.any

2019-11-14 Thread Jason Orendorff
We intend to implement and ship Promise.any, a proposed way to `await`
several promises and continue as soon as any of them becomes
fulfilled. André Bargull [:anba] has contributed patches implementing
this feature. It will land in Nightly soon.

We'll ship it once we're confident the specification is stable
(Promise.any is at Stage 3 in the TC39 Process[1], but some minor
changes are still in flight).

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1568903

Proposed standard: https://tc39.es/proposal-promise-any/

Proposal repository: https://github.com/tc39/proposal-promise-any/

MDN:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/any

Platform coverage: All, Nightly only, no pref

DevTools bug: N/A

Other browsers: Not implementing yet, as far as I can tell. I expect V8
will start soon and JSC will implement it eventually.

Polyfills are already available for this proposal[2][3].

Testing: The patch[4] contains tests, including tests for aspects of
behavior that we do not expect test262 to test thoroughly: error stacks
and cross-realm behavior. Test262 will eventually provide independent
tests.

Use cases: Promise.any is similar to Promise.race. It's for cases in
async code when you would use Promise.race, but you're interested in the
first success, not the first failure. If you were implementing something
like Gecko's "Race Cache With Network" behavior[5][6], you might write:

```js
let response = await Promise.any([network_fetch, cache_fetch]);
```
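One behavioral detail worth noting (my own sketch, based on the current draft): when every input rejects, Promise.any rejects with an AggregateError collecting the individual reasons.

```js
// If no input fulfills, the combined promise rejects with an AggregateError
// whose .errors array holds each rejection reason in input order.
Promise.any([
  Promise.reject(new Error("network down")),
  Promise.reject(new Error("cache miss")),
]).catch(err => {
  console.log(err instanceof AggregateError); // true
  console.log(err.errors.length);             // 2
});
```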

[1]: https://tc39.es/process-document/
[2]: https://github.com/zloirock/core-js#promiseany
[3]: https://github.com/es-shims/Promise.any#promiseany-
[4]: https://phabricator.services.mozilla.com/D51659
[5]:
https://groups.google.com/forum/#!topic/mozilla.dev.platform/fvCmc6kR9Uk
[6]: about:networking#rcwn

-j


Re: Please don't use functions from ctype.h and strings.h

2020-06-26 Thread Jason Orendorff
This turns out to be super easy to do in clang-tidy, so I took bug 1485588.

-j

On Wed, Jun 24, 2020 at 2:35 PM Chris Peterson 
wrote:

> On 8/27/2018 7:00 AM, Henri Sivonen wrote:
> > I think it's worthwhile to have a lint, but regexps are likely to have
> > false positives, so using clang-tidy is probably better.
> >
> > A bug is on file:https://bugzilla.mozilla.org/show_bug.cgi?id=1485588
> >
> > On Mon, Aug 27, 2018 at 4:06 PM, Tom Ritter  wrote:
> >> Is this something worth making a lint over?  It's pretty easy to make
> >> regex-based lints, e.g.
> >>
> >> yml-only based lint:
> >>
> https://searchfox.org/mozilla-central/source/tools/lint/cpp-virtual-final.yml
> >>
> >> yml+python for slightly more complicated regexing:
> >>
> https://searchfox.org/mozilla-central/source/tools/lint/mingw-capitalization.yml
> >>
> https://searchfox.org/mozilla-central/source/tools/lint/cpp/mingw-capitalization.py
>
>
> Bug 1642825 recently added a "rejected words" lint. It was intended to
> warn about words like "blacklist" and "whitelist", but dangerous C
> function names could easily be added to the list:
>
> https://searchfox.org/mozilla-central/source/tools/lint/rejected-words.yml
>
> A "good enough" solution that can find real bugs now is preferable to a
> cleaner clang-tidy solution someday, maybe. (The clang-tidy lint bug
> 1485588 was filed two years ago.)


New JavaScript engine module owner

2021-03-09 Thread Jason Orendorff
Hi, everyone.

I'm pleased to announce that Jan De Mooij has agreed to take ownership of
the JavaScript engine module.

Following a Mozilla tradition that was venerable when I got here, Jan has
been doing the job already for quite some time. Please join me in
congratulating Jan and thanking him for his ongoing leadership.

Sincerely,
Jason


nsITimer thread safety

2012-07-19 Thread Jason Duell

nsITimer.idl says nothing about thread safety. (Can we add some?)

From a glance at the code, it appears that it's safe to Cancel and 
null-out an nsITimer from any given thread, so long as the timer 
callback that's pointed to has thread-safe refcounting.  Correct?


Jason



Re: nsITimer thread safety

2012-07-19 Thread Jason Duell

On 07/19/2012 01:16 PM, Boris Zbarsky wrote:

On 7/19/12 4:04 PM, Jason Duell wrote:

nsITimer.idl says nothing about thread safety. (Can we add some?)

 From a glance at the code, it appears that it's safe to Cancel and
null-out an nsITimer from any given thread, so long as the timer
callback that's pointed to has thread-safe refcounting. Correct?


If I recall correctly, it's "unsafe" to fire an nsITimer on any thread 
other than the main thread.  At least insofar as it will cause races 
on the timers' delay adjustment mechanism  It won't crash, but can 
cause timer firing times to get all weird.


(Note that it's not _intended_ that this be unsafe; it's just 
implemented in an unsafe way.)


Right--but whether we can *cancel* timers from any thread is my specific 
question here.  Looks like you can if your callback is thread-safe IIUTCC


Jason



How to get Docshell for plugin?

2012-07-23 Thread Jason Duell
Do we have any way within our NPAPI functions to determine what docshell 
the plugin is displaying content in?  I don't see an obvious one.


For both per-tab private browsing (bug 722850)  and B2G application 
"cookie jars" (bug 756648) we need to implement separate simultaneous 
cookie databases, i.e. have separate cookie namespaces.  For most 
requests we can get the AppId and/or private browsing mode from the 
necko channel's callbacks (which now implement nsILoadContext on both 
parent/child in e10s).   But the cookie API allows callers to skip 
passing in a channel to get/setCookieString, and that's what NPAPI does:


http://mxr.mozilla.org/mozilla-central/source/dom/plugins/base/nsNPAPIPlugin.cpp#2736

So I'm wondering how I can get the info some other way.

thanks,

Jason



Re: "Touch" or "Tablet"?

2012-08-02 Thread Jason Smith
On Thursday, August 2, 2012 7:09:43 AM UTC-7, Lawrence Mandel wrote:
> > http://blogs.msdn.com/b/ie/archive/2012/07/12/ie10-user-agent-string-update.aspx
> >
> > IE10 has introduced the Touch token to the UA string, which overlaps
> > in intent with our Tablet token.
> >
> > Dao suggests it would be nice to get cross-browser consistency here.
> > https://bugzilla.mozilla.org/show_bug.cgi?id=773355
> >
> > This makes some sense to me. "Touch" is more the matter of interest
> > than the form factor. If we do switch, we should do it before we ship
> > native Fennec on tablets.
> >
> > Anyone want to argue for or against?
>
> My argument against is that this change continues the churn on our UAs, which
> have already been publicized [1]. Unless there is a clear benefit, I think we
> should avoid shooting ourselves in the foot by making changes that may impact
> compatibility. However, in all fairness, I have no data on how many sites
> currently recognize the Firefox Tablet UA.
>
> Lawrence
>
> [1] https://developer.mozilla.org/en/Gecko_user_agent_string_reference

I agree with Lawrence. We know we have had a problem with churn on the user 
agent and partners have complained about it for being one of the reasons why X 
site stopped working on Firefox for Android.

As a point of comparison, what does the user agent of Safari on the iPad 
look like? The iPad is dominant in the tablet market right now, so it 
might be a good point of comparison.


Re: Verification Culture

2012-08-10 Thread Jason Smith

Hi Everyone,

Let's try posting this again. Disregard the comments I put on the other 
thread.


I think this is a good time to re-think our process for testing for 
something that is fixed or not fixed. I think a better approach that 
maybe we need to consider is something similar to what the security 
reviews do. They flag certain special areas for needing attention from a 
security perspective to do "deep dives" to find out the risks of certain 
areas and flag them as such. In the context of QA, we could easily do 
something like that by reusing that process with the qawanted keyword. 
Some suggestions to build off of this:


 * Continue doing the process when a try build is created and needing
   QA testing to make use of the qawanted keyword to call out for
   explicit QA help.
 * Add a new process where more testing is needed after a bug has
   landed for some purpose. We could reuse the qawanted keyword to call
   attention to the bug and we can take it from there. For our QA
   process, we might want to file bugs in our component (Mozilla QA) to
   track any specialized testing requests that are needed
 * Throw out the general concept of blind verifications. Maybe we
   should even take out the verified portion of the bug workflow to
   ensure this, but that may need further discussion to get agreement
   on that.

Note that not everything to test is necessarily testing that "X bug 
works as expected." There's other cases that may warrant calling out 
attention to certain bugs for explicit testing after the bug lands. Here 
are two examples:


 * Unprefixing of a DOM or CSS property - It's useful to call out for
   explicit testing here not necessarily for actually testing if the
   property works (as we have a lot of automation that usually does
   some of the testing here), but instead call out testing for web
   compatibility concerns to find out if there are web compatibility
   risks with unprefixing X DOM or CSS property.
 * Flagging of a risky patch needing to be sure it works - This is
   where doing a deep dive generally by formulating a test plan and
   executing it is useful.

Note - I still think it's useful for a QA driver to look through a set 
of bugs fixed for a certain Firefox release, it's just the process would 
be re-purposed for flagging a bug for needing more extensive testing for 
X purpose (e.g. web compatibility).


Thoughts?

Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.org/

On 8/10/2012 1:41 PM, Anthony Hughes wrote:

Sorry, this should have went to dev-platform...

- Original Message -
From: "Anthony Hughes" 
To: "dev-planning" 
Cc: dev-qual...@lists.mozilla.org
Sent: Friday, August 10, 2012 1:40:15 PM
Subject: Fwd: Verification Culture

I started this discussion on dev-quality[1] but there has been some suggestion that the 
dev-planning list is more appropriate so I'm moving the discussion here. There's been a 
couple of great responses to the dev-quality thread so far but I won't repost them here 
verbatim. The general consensus in the feedback was that QA spending a lot of time simply 
verifying the immediate test conditions (or test case) is not a valuable practice. 
It was suggested that it would be a far more valuable use of QA's time and be of greater 
benefit to the quality of our product if we pulled out a subset of "critical" 
issues and ran deep-diving sprints around those issues to touch on edge-cases.

I, for one, support this idea in the hypothetical form. I'd like to get various 
peoples' perspectives on this issue (not just QA).

Thank you to David Baron, Ehsan Akhgari, Jason Smith, and Boris Zbarsky for the 
feedback that was the catalyst for me starting this discussion. For reference, 
it might help to have a look at my dev-planning post[2] which spawned the 
dev-quality post, which in turn has spawned the post you are now reading.

Anthony Hughes
Mozilla Quality Engineer

1. https://groups.google.com/forum/#!topic/mozilla.dev.quality/zpK52mDE2Jg
2. https://groups.google.com/forum/#!topic/mozilla.dev.planning/15TSrCbakEc

- Forwarded Message -
From: "Anthony Hughes" 
To: dev-qual...@lists.mozilla.org
Sent: Thursday, August 9, 2012 5:14:02 PM
Subject: Verification Culture

Today, David Baron brought to my attention an old bugzilla comment[1] about 
whether or not putting as much emphasis on bug fix verification was a useful 
practice or not. Having read the comment for the first time, it really got me 
wondering whether our cultural desire to verify so many bug fixes before 
releasing Firefox to the public was a prudent one.

Does verifying as many fixes as we do really raise the quality bar for Firefox?
Could the time we spend be better used elsewhere?

If I were to ballpark it, I'd guess that nearly half of the testing we do 
during Beta is for bug fix verifications. N

Re: Verification Culture

2012-08-13 Thread Jason Smith
Reply on:

How are we planning to test this?  We have seen bugs in obscure web
sites which use the name of a new DOM property for example, but it seems
to me like there is no easy way for somebody to verify that adding such
a property doesn't break any popular website, even, since sometimes the
bug needs special interactions with the website to be triggered. 

Response:

You'd first crawl around thousands of sites to generate statistics on where the 
property is used in a prefixed form currently on the web (a-team btw is working 
on a tool for this). Next, you would prioritize the result list of sites using 
that prefix based on various factors such as site popularity using alexa data, 
frequency of prefix use, etc. Then, you would select a small subset of sites 
and do an exploratory test of those sites. If you notice immediately that there 
are general problems with the site, then you likely have found a web 
compatibility problem. Knowing the problem proactively gives you advantages 
such as knowing to double-check the implementation, but more importantly, 
knowing when and what level of outreach is needed.

Reply on:

I'm not quite sure why that would be useful.  If we believe that doing
blind verification is not helpful, why is doing that on a subset of bugs
fixed in a given release better? 

Response:

Probably because there are bugs that don't get flagged that should get flagged 
for testing. It can be useful a component owner to track to know what has 
landed to know what testing they need to follow up with, if any. The difference 
is that I'm not implying in generally going with "go verify that this works," 
but instead "go test these scenarios that would likely be useful to investigate 
as a result of this change."

Reply on:

I think QA should do some exploratory testing of major new features as
time allows, but just verifying existing test cases that often are
running automatically anyhow isn't a good use of time, I guess. 

Response:

Right, we should focus effort on areas not covered by automation primarily.

Reply on:

We (mostly) send Gecko developers to participate in Web
standardization. Opera (mostly) sends QA people. This results in Opera
QA having a very deep knowledge and understanding of Web standards.
(I'm not suggesting that we should stop sending Gecko developers to
participate. I think increasing QA attention on spec development could
be beneficial to us.) It seems (I'm making inferences from outside
Opera; I don't really know what's going on inside Opera) that when a
new Web platform feature is being added to Presto, Opera assigns the
QA person who has paid close attention to the standardization of the
feature to write test cases for the feature. This way, the cases that
get tested aren't limited by the imagination of the person who writes
the implementation.

So instead of verifying that patches no longer make bugs reproduce
with it steps to reproduce provided by the bug reporter, I think QA
time would be better used by getting to know a spec, writing
Mochitest-independent cross-browser test cases suitable for
contribution to an official test suite for the spec, running not only
Firefox but also other browsers against the tests and filing spec bugs
or Firefox bugs as appropriate (with the test case imported from the
official test suite to our test suite). (It's important to
sanity-check the spec by seeing what other browsers do. It would be
harmful for Firefox to change to match the spec if the spec is
fictional and Firefox already matches the other browsers.) 

Response:

I'd generally agree these are all good ideas. I've been recently exploring some 
of the ideas you propose by getting involved early with the specification and 
development work for getUserMedia and other WebRTC related parts. Providing the 
test results and general feedback immediately in the early phases of 
development and the spec process already seems to be useful - it provides 
context into early problems, especially in the unknown areas not immediately 
identified when building the spec in the first place. I'll keep these ideas in 
mind as I continue to work with the WebRTC folks.

Reply on:

Verifications are important. I've seen way too many fixes go in across
my career that didn't really fix the bug to think that we should take
the workflow out completely, and I would never call them "blind" if
they're against a valid testcase. They might be naive, they might be
shallow, but they aren't blind. That's a misnomer. 

Response:

Right, we shouldn't take the workflow out entirely. I think the general suggestion 
is to focus our efforts on the "right" bugs - the ones we're bound to dig into and find 
problems in. The reality is that we can't verify every single bug in a deep 
dive (there simply isn't enough time to do so). The blind verifications point 
being made above was more suggesting that I don't think it's a good idea to do 
a large amount of verifications with a simple point and click operation on 
every si

Re: Increase in mozilla-inbound bustage due to people not using Try

2012-08-16 Thread Jason Duell

On 08/16/2012 12:03 AM, Nicholas Nethercote wrote:

On Wed, Aug 15, 2012 at 11:41 PM, Mike Hommey  wrote:

A few months back, John Ford wrote a standalone win32 executable
that used the proper APIs to delete an entire directory. I think he
said that it deleted the object directory 5-10x faster or something.
No clue what happened with that.

I wish this were true, but I seriously doubt it. I can buy that it's
faster, but not 5-10 times so.

http://blog.johnford.org/writting-a-native-rm-program-for-windows/
says that it deleted a mozilla-central clone 3x faster.


And renaming the directory (then deleting it in parallel with the build, 
or later) ought to be some power of ten faster than that, at least from 
the build-time perspective. At least if you don't do anything expensive 
like our nsIFile NTFS renaming goopage (that traverses the directory 
tree making sure NTFS ACLs are preserved for all files). Which most 
versions of 'rm' aren't going to do, I'd guess.
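The rename-then-delete-later idea above can be sketched in a few lines. This 
is a minimal illustration, not Mozilla's build code: plain Python, no NTFS 
ACL handling, and `fast_remove` is a hypothetical helper name.

```python
import os
import shutil
import tempfile
import threading
import uuid

def fast_remove(objdir):
    """Free up the objdir name immediately by renaming it aside,
    then delete the renamed directory on a background thread."""
    # Rename within the same parent directory is near-instant,
    # so the build can recreate `objdir` right away.
    doomed = objdir + ".doomed-" + uuid.uuid4().hex
    os.rename(objdir, doomed)
    worker = threading.Thread(target=shutil.rmtree, args=(doomed,), daemon=True)
    worker.start()
    return worker

# Usage: the build proceeds while the old objdir is reaped in the background.
objdir = tempfile.mkdtemp()
open(os.path.join(objdir, "stale.o"), "w").close()
worker = fast_remove(objdir)
assert not os.path.exists(objdir)  # the name is free immediately
worker.join()
```

From the build's perspective the expensive recursive delete is off the 
critical path; only the single rename is synchronous.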


Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Increase in mozilla-inbound bustage due to people not using Try

2012-08-16 Thread Jason Duell

On 08/16/2012 06:23 AM, Aryeh Gregor wrote:

On Thu, Aug 16, 2012 at 4:18 PM, Ben Hearsum  wrote:

I don't think this would be any more than a one-time win until the disk
fills up. At the start of each job we ensure there's enough space to do
the current job. By moving the objdir away we'd avoiding doing any clean
up until we need more space than is available. After that, each job
would still end up cleaning up roughly one objdir to clean up enough
space to run.

Why can't you move it, then spawn a background thread to remove it at
minimum priority?  IIUC, Vista and later support I/O prioritization,


Brian Bondy just added I/O prioritization to our code that removes 
corrupt HTTP caches, in bug 773518, in case that code helps.


Jason




Re: Adding hardware tokens to UA string

2012-09-24 Thread Jason Smith
On Sunday, September 23, 2012 6:02:35 PM UTC-7, Nicholas Nethercote wrote:
> On Fri, Sep 21, 2012 at 12:41 PM, Benoit Jacob  wrote:
>
> > I would like to +1 on Henri's answer to make it clear that the outcome
> > of this thread is not quite a nod to go ahead.
>
> I'd upgrade your "not quite" to "definitely not" :)
>
> We got a bit distracted by the net neutrality comparison, but I'd
> summarize the thread as "lots of opposition and little (if any)
> support".  In the past we've fought tooth and nail to (a) simplify the
> UA and (b) get people to stop looking at it, for good reasons.  We'd
> be crazy to change tack now.
>
> Nick

Just to quickly add in the conclusion that was brought up in the mobile web 
compatibility meeting about this thread:

We concluded that this isn't a good idea to go forward with for the following 
reasons:

1. It could promote UA sniffing behavior, which is exactly what we're trying to 
move away from.
2. We're getting lots of push back from WURFL that we need to finalize a 
consistent UA. Right now, we can't complete our outreach to WURFL because the 
assumption is that we're changing our UA too much, too quickly.
3. For v1, we probably still need to stick to the original plan for a UA that 
does include Android, even though that's sub-optimal for receiving 
Android-specific content. In the long term, though, we should move to the 
optimal UA without the platform identifier.
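To illustrate why point 1 (and the tension in point 3) matters: a naive 
server-side sniffer that keys on an "Android" token will lock sites into that 
token, so dropping it later changes what they serve. The UA strings and the 
sniffing rule below are made up for illustration only.

```python
import re

def naive_sniff(ua):
    """Classic substring sniffing: a single token decides everything."""
    if re.search(r"Android", ua):
        return "android"
    if re.search(r"Firefox", ua):
        return "firefox-desktop"
    return "unknown"

# Hypothetical Firefox OS UA strings, with and without the platform token.
ua_with_android = "Mozilla/5.0 (Android; Mobile; rv:18.0) Gecko/18.0 Firefox/18.0"
ua_without_android = "Mozilla/5.0 (Mobile; rv:18.0) Gecko/18.0 Firefox/18.0"

# With the token, sites serve Android-targeted content...
assert naive_sniff(ua_with_android) == "android"
# ...and once that rule is embedded, removing the token later changes the
# classification entirely - which is how the UA gets "stuck".
assert naive_sniff(ua_without_android) == "firefox-desktop"
```

Feature detection avoids this trap entirely, which is why promoting more 
sniffing works against the simplification effort mentioned in the thread.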


Re: Adding hardware tokens to UA string

2012-09-25 Thread Jason Smith

Hi Gerv,

Don't know. We should probably define a strategy for how we're going to 
get web content to move towards supporting a UA without the platform 
identifier, so that we don't get endlessly stuck with Android in the UA 
for FF OS in the long term.


For the v1 shipment - I originally nominated a bug in triage to ship v1 
with the UA without the platform identifier and with the whitelist 
implemented, but it got unnominated because a larger discussion was 
needed about whether that's what we want to do for v1. Perhaps it's time 
to revisit that discussion and push for v1 support.


Sincerely,
Jason Smith

Desktop QA Engineer
Mozilla Corporation
https://quality.mozilla.org/

On 9/25/2012 5:22 AM, Gervase Markham wrote:

On 24/09/12 20:06, Jason Smith wrote:

3. For v1, we probably need to still stick to the original plan for
the UA that does include Android in the UA, even though that's
sub-optimal to receive Android specific content. We should move to
the optimal UA in long-term though without the platform identifier.


Do we really think we will be able to make this shift post-v1? Surely 
shipping Firefox OS v1 with an Android-containing UA will simply embed 
the problem, as lots of sites will assume that the Firefox OS UA 
contains the word "Android"?


Gerv






Re: Proposal for reorganizing test directories

2012-10-25 Thread Jason Duell
> Why have those tests been placed in this location and not beside the 
actual implementation code?


Necko has had a general policy of not allowing mochitests within the 
/netwerk tree, so any cookie, etc tests that need browser functionality 
(i.e. more than xpcshell) live elsewhere.  It's really mostly a nervous 
tic at this point, left over from when we had the priority of making our 
network libraries a browser-independent module that other projects could 
take up (few have, and we officially Don't Care® as of a few years ago).


Jason

