RE: DevTools: how to get list of mutation observers for an element

2013-09-05 Thread Jan Odvarko
> > Should I file a bug for this?
> Yes, please. CC me
Done
https://bugzilla.mozilla.org/show_bug.cgi?id=912874
Honza


> -----Original Message-----
> From: smaug [mailto:sm...@welho.com]
> Sent: Wednesday, September 04, 2013 9:21 PM
> To: Jan Odvarko
> Subject: Re: DevTools: how to get list of mutation observers for an
> element
> 
> On 09/04/2013 09:43 AM, Jan Odvarko wrote:
> > It's currently possible to get registered event listeners for a
> > specific target (element, window, xhr, etc.) using
> > nsIEventListenerService.getListenerInfoFor.
> >
> > Is there any API that would also allow getting mutation observers?
> no
>
> > Should I file a bug for this?
> Yes, please. CC me
>
> -Olli :smaug
>
> > Honza



XUL splitmenu

2013-09-05 Thread Jan Odvarko
Two questions about the <splitmenu> element:

#1) I wanted to display a check-box in front of the <splitmenu> element,
but setting type="checkbox" and checked="true" doesn't help.
Shouldn't this just work? Is this a bug?

#2) It looks like the <splitmenu> element doesn't work
on OSX. Correct?

Honza



nsTHashtable changes

2013-09-05 Thread Robert O'Callahan
nsTHashtable and its subclasses no longer have an Init method; they are
fully initialized on construction, like any good C++ object. You can
specify an initial number of buckets by passing an integer parameter to the
constructor.

nsTHashtables are always initialized now; there is no uninitialized state.
If you need an uninitialized nsTHashtable you'll have to wrap it in an
nsAutoPtr and treat null as uninitialized, or something like that.

Fallible initialization of nsTHashtables was removed because no-one uses
it. Fallible Put is still available.
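
For illustration, the change looks roughly like this (nsUint32HashKey is
just an example key class; this is a sketch, not code from the patch):

   #include "nsTHashtable.h"
   #include "nsHashKeys.h"
   #include "nsAutoPtr.h"

   // Before: construction followed by a separate (formerly fallible)
   // Init() call:
   //   nsTHashtable<nsUint32HashKey> table;
   //   table.Init(16);

   // After: the constructor fully initializes the table; the optional
   // integer argument is the initial number of buckets.
   nsTHashtable<nsUint32HashKey> table(16);

   // If code really needs an "uninitialized" state, heap-allocate and
   // treat null as uninitialized, as suggested above:
   nsAutoPtr<nsTHashtable<nsUint32HashKey> > lazyTable;
   // ... later, on first use ...
   lazyTable = new nsTHashtable<nsUint32HashKey>(16);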

The "MT" thread-safe hashtable subclasses were removed because no-one uses
them.

Rob


Re: nsTHashtable changes

2013-09-05 Thread Kyle Huey
Did you fix LDAP in comm-central to not use nsInterfaceHashtableMT?  That's
why I haven't finished Bug 849654.  I guess that should get duped to
wherever this happened.

- Kyle


On Thu, Sep 5, 2013 at 3:08 AM, Robert O'Callahan wrote:

> nsTHashtable and its subclasses no longer have an Init method; they are
> fully initialized on construction, like any good C++ object. You can
> specify an initial number of buckets by passing an integer parameter to the
> constructor.
>
> nsTHashtables are always initialized now; there is no uninitialized state.
> If you need an uninitialized nsTHashtable you'll have to wrap it in an
> nsAutoPtr and treat null as uninitialized, or something like that.
>
> Fallible initialization of nsTHashtables was removed because no-one uses
> it. Fallible Put is still available.
>
> The "MT" thread-safe hashtable subclasses were removed because no-one uses
> them.
>
> Rob


Re: No more Makefile.in boilerplate

2013-09-05 Thread Axel Hecht

Hi,

out of curiosity, I recall that relativesrcdir was actually the trigger
to switch on and off some l10n functionality in jar packaging.


Is that now on everywhere?

Axel

On 9/5/13 2:34 AM, Mike Hommey wrote:
> Hi,
>
> Assuming it sticks, bug 912293 made it unnecessary to start Makefile.in
> files with the usual boilerplate:
>
>    DEPTH = @DEPTH@
>    topsrcdir = @top_srcdir@
>    srcdir = @srcdir@
>    VPATH = @srcdir@
>    relativesrcdir = @relativesrcdir@
>
>    include $(DEPTH)/config/autoconf.mk
>
> All of the above can now be skipped. Directories that do require a
> different value for e.g. VPATH or relativesrcdir can still place a value
> that will be taken instead of the default. It is not recommended to do
> that in new Makefile.in files, or to change existing files to do that,
> but the existing files that did require such different values still do
> use those different values.
>
> Also, if the last line of a Makefile.in is:
>
>    include $(topsrcdir)/config/rules.mk
>
> that can be skipped as well.
>
> Mike





Re: Changes to how EXPORTS are handled

2013-09-05 Thread Neil

Mike Hommey wrote:
> On Wed, Sep 04, 2013 at 11:35:19AM -0700, Gregory Szorc wrote:
> > It's worth explicitly mentioning that tiers limit the ability of the
> > build system to build concurrently. So, we have to choose between
> > speed and a moving/complex target of "dependency correctness." We have
> > chosen to sacrifice this dependency correctness to achieve faster
> > build speeds.
> >
> > If we want to keep ensuring dependency correctness, I believe we
> > should accomplish that via static analysis, compiler plugins,
> > standalone builds, or special build modes.
>
> Or not using -I$(DIST)/include.

Some headers install into $(DIST)/include/mozilla and there's no other
way to include them.


--
Warning: May contain traces of nuts.


Re: No more Makefile.in boilerplate

2013-09-05 Thread Mike Hommey
On Thu, Sep 05, 2013 at 12:24:11PM +0200, Axel Hecht wrote:
> Hi,
> 
> out of curiosity, I recall that relativesrcdir was actually the
> trigger to switch on and off some l10n functionality in jar
> packaging.
> 
> Is that now on everywhere?

I didn't find any l10n jar.mn without a relativesrcdir in the
corresponding Makefile.in. But maybe I didn't look properly?

Mike


Re: nsTHashtable changes

2013-09-05 Thread Robert O'Callahan
On Thu, Sep 5, 2013 at 10:16 PM, Kyle Huey  wrote:

> Did you fix LDAP in comm-central to not use nsInterfaceHashtableMT?
> That's why I haven't finished Bug 849654.  I guess that should get duped to
> wherever this happened.
>

No, I completely stuffed up my search through comm-central and didn't find
any uses of nsTHashtable and its subclasses. I'll fix that now.

Rob


Re: partial GL buffer swap

2013-09-05 Thread Nicolas Silva
From an API/feature point of view the partial buffer swap does not sound
like a bad idea, especially since, as Matt said, the OMTC BasicLayers will
need something along these lines to work efficiently.
One thing to watch out for, though, is that this is the kind of fine
tuning that, I suspect, will give very different results depending on the
hardware. On tile-based GPUs, doing this without well-working extensions
like QCOM_tiled_rendering will most likely yield bad performance, for
example. More importantly, I am not sure how much we can rely on these
extensions behaving consistently across different hardware.
Would we use something like WebGL's blacklisting for this optimization?
I heard that our WebGL blacklisting is a bit of a mess.

Are these the lowest-hanging fruits for improving performance?
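
For concreteness, the per-layer bookkeeping in Andreas's proposal (quoted
below) could look roughly like the following sketch. ClearDamage and
NotifyDamage are the names he proposes; the nsIntRect member and the
accumulation logic are my assumptions, not an actual implementation:

   // Sketch only: damage tracking on Layer, per the proposal quoted below.
   class Layer {
   public:
     // Called for every update to the layer, in window coordinates.
     void NotifyDamage(const nsIntRect& aRect) {
       mDamage.UnionRect(mDamage, aRect);
     }

     // Called by the compositor after the buffer swap.
     void ClearDamage() {
       mDamage.SetEmpty();
     }

     const nsIntRect& GetDamage() const { return mDamage; }

   private:
     nsIntRect mDamage;  // damage accumulated since the last composition
   };

The compositor would then union GetDamage() across layers while
compositing and, where the driver supports it (AGL_SWAP_RECT,
eglPostSubBufferNV, setUpdateRect), swap only that rect.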




On Sun, Sep 1, 2013 at 6:53 AM, Matt Woodrow  wrote:

> We actually have code that does the computation of the dirty area
> already, see
>
> http://mxr.mozilla.org/mozilla-central/ident?i=LayerProperties&tree=mozilla-central
> .
>
> The idea is that we take a snapshot of the layer tree before we update
> it, and then do a comparison after we've finished updating it.
>
> We're currently only using this for main-thread BasicLayers, but we're
> almost certainly going to need to extend it to work on the compositor
> side too for OMTC BasicLayers.
>
> It shouldn't be too much work, we just need to make ThebesLayers shadow
> their invalid region, and update some of the LayerProperties comparison
> > code to understand the LayerComposite way of doing things.
>
> Once we have that, adding compositor specific implementations of
> restricting composition to that area should be easy!
>
> - Matt
>
> On 1/09/13 4:50 AM, Andreas Gal wrote:
> >
> > Soon we will be using GL (and its Windows equivalent) on most
> > platforms to implement a hardware accelerated compositor. We draw into
> > a back buffer and with up to 60hz we perform a buffer swap to display
> > the back buffer and make the front buffer the new back buffer (double
> > buffering). As a result, we have to recomposite the entire window with
> > up to 60hz, even if we are only animating a single pixel.
> >
> > On desktop, this is merely bad for battery life. On mobile, this can
> > genuinely hit hardware limits and we won't hit 60 fps because we waste
> > a lot of time recompositing pixels that don't change, sucking up
> > memory bandwidth.
> >
> > Most platforms support some way to only update a partial rect of the
> > frame buffer (AGL_SWAP_RECT on Mac, eglPostSubBufferNV for Linux,
> > setUpdateRect for Gonk/JB).
> >
> > I would like to add a protocol to layers to indicate that the layer
> > has changed since the last composition (or not). I propose the
> > following API:
> >
> > void ClearDamage(); // called by the compositor after the buffer swap
> > void NotifyDamage(Rect); // called for every update to the layer, in
> > window coordinate space (is that a good choice?)
> >
> > I am using Damage here to avoid overloading Invalidate. Bike shedding
> > welcome. I would put these directly on Layer. When a color layer
> > changes, we damage the whole layer. Thebes layers receive damage as
> > the underlying buffer is updated.
> >
> > The compositor accumulates damage rects during composition and then
> > does a buffer swap of that rect only, if supported by the driver.
> >
> > Damage rects could also be used to shrink the scissor rect when
> > drawing the layer. I am not sure yet whether its easily doable to take
> > advantage of this, but we can try as a follow-up patch.
> >
> > Feedback very welcome.
> >
> > Thanks,
> >
> > Andreas
> >
> > PS: Does anyone know how this works on Windows?


Re: Detection of unlabeled UTF-8

2013-09-05 Thread Henri Sivonen
On Fri, Aug 30, 2013 at 6:17 PM, Adam Roach  wrote:
>
> It seems to me that there's an important balance here between (a) letting 
> developers discover their configuration error and (b) allowing users to 
> render misconfigured content without specialized knowledge.

It's worth noting that for other classes of authoring errors (except
for errors in https deployment) we don't give the user the tools to
remedy them.

> Both of these are valid concerns, and I'm afraid that we're not assigning 
> enough weight to the user perspective.

Assigning weight to the *short-term* user perspective seems to be what
got us into this mess in the first place. If Netscape had never had a
manual override for the character encoding or locale-specific
differences, user-exposed brokenness would have quickly taught
authors to get their encoding act together--especially in the context
of languages like Japanese where a wrong encoding guess makes the page
completely unreadable.

(The obvious counter-argument is that in the case of languages that
use a non-Latin script, getting the encoding wrong is near the YSoD
level of disaster, and it's agreed that XML's error handling was a
mistake compared to HTML's. However, HTML's error handling surfaces no
UI choices to the user, works without having to reload the page and is
now well specified. Furthermore, even in the case of HTML, hindsight
says we'd be better off if no browser had tried to be too helpful
about fixing encodings in the first place.)

> I think we can find some middle ground here, where we help developers 
> discover their misconfiguration, while also handing users the tool they need 
> to fix it. Maybe an unobtrusive bar (similar to the password save bar) that 
> says something like: "This page's character encoding appears to be 
> mislabeled, which might cause certain characters to display incorrectly. 
> Would you like to reload this page as Unicode? [Yes] [No] [More Information] 
> [x]".

Why should we surface this class of authoring error to the UI in a way
that asks the user to make a decision considering how rare this class
of authoring error is? Are there other classes of authoring errors
that you think should have UI for the user to second-guess the author?
If yes, why? If not, why not?

That is, why is the case where text/html is in fact valid UTF-8 and
contains non-ASCII characters but has not been declared as UTF-8 so
special compared to other possible authoring errors that it should
have special treatment?

On Fri, Aug 30, 2013 at 8:24 PM, Mike Hoye  wrote:
> For what it's worth Internet Explorer handled this (before UTF-8 and caring
> about JS performance were a thing) by guessing what encoding to use,
> comparing a letter-frequency-analysis of a page's content to a table of what
> bytes are most common in which in what encodings of whatever languages.

Is there evidence of IE doing this in locales other than Japanese,
Russian and Ukrainian? Or even locales other than Japanese? Firefox
does this only for the Japanese, Russian and Ukrainian locales.

(FWIW, studying whether this is still needed for the Russian and
Ukrainian locales is
https://bugzilla.mozilla.org/show_bug.cgi?id=845791 .  As for
Japanese, some sort of detection magic is probably staying for the
foreseeable future. It appears that Microsoft fairly recently tried to
take ISO-2022-JP out of their detector for security reasons but had to
put it back for compatibility: http://support.microsoft.com/kb/2416400
http://support.microsoft.com/kb/2482017 )

> It's
> probably not a suitable approach in modernity, because of performance
> problems and horrible-though-rare edge cases.

See point #3 in https://bugzilla.mozilla.org/show_bug.cgi?id=910211#c2

On Fri, Aug 30, 2013 at 9:33 PM, Joshua Cranmer 🐧  wrote:
> The problem I have with this approach is that it assumes that the page is
> authored by someone who definitively knows the charset, which is not a
> scenario which universally holds. Suppose you have a page that serves up the
> contents of a plain text file, so your source data has no indication of its
> charset. What charset should the page report?

Your scenario assumes that the page template is ASCII-only. If it
isn't, browser-side guessing doesn't solve the problem. Even when the
template is ASCII-only, whoever wrote the inclusion on the server
probably has better contextual knowledge about what the encoding of
the input text could be than the browser has.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
http://hsivonen.iki.fi/


Re: No more Makefile.in boilerplate

2013-09-05 Thread Axel Hecht

On 9/5/13 1:17 PM, Mike Hommey wrote:
> On Thu, Sep 05, 2013 at 12:24:11PM +0200, Axel Hecht wrote:
> > Hi,
> >
> > out of curiosity, I recall that relativesrcdir was actually the
> > trigger to switch on and off some l10n functionality in jar
> > packaging.
> >
> > Is that now on everywhere?
>
> I didn't find any l10n jar.mn without a relativesrcdir in the
> corresponding Makefile.in. But maybe I didn't look properly?

There shouldn't have been.

But 
https://hg.mozilla.org/mozilla-central/file/45097bc3a578/config/config.mk#l681 
seems to be always on now? (also 685)


Axel



Re: No more Makefile.in boilerplate

2013-09-05 Thread Mike Hommey
On Thu, Sep 05, 2013 at 04:08:50PM +0200, Axel Hecht wrote:
> On 9/5/13 1:17 PM, Mike Hommey wrote:
> >On Thu, Sep 05, 2013 at 12:24:11PM +0200, Axel Hecht wrote:
> >>Hi,
> >>
> >>out of curiousity, I recall that relativesrcdir was actually the
> >>trigger to switch on and off some l10n functionality in jar
> >>packaging.
> >>
> >>Is that now on everywhere?
> >
> >I didn't find any l10n jar.mn without a relativesrcdir in the
> >corresponding Makefile.in. But maybe I didn't look properly?
> 
> There shouldn't have been.
> 
> But 
> https://hg.mozilla.org/mozilla-central/file/45097bc3a578/config/config.mk#l681
> seems to be always on now? (also 685)

Indeed. Although it's also possible to set relativesrcdir to nothing in
Makefile.in to get back to the case without relativesrcdir.

Mike


Re: No more Makefile.in boilerplate

2013-09-05 Thread Ted Mielczarek
On 9/5/2013 10:25 AM, Mike Hommey wrote:
> > There shouldn't have been.
> >
> > But
> > https://hg.mozilla.org/mozilla-central/file/45097bc3a578/config/config.mk#l681
> > seems to be always on now? (also 685)
>
> Indeed. Although it's also possible to set relativesrcdir to nothing in
> Makefile.in to get back to the case without relativesrcdir.

I think we should fix anything that depends on relativesrcdir being
unset. This has become a standard part of our Makefiles.

-Ted



Re: Detection of unlabeled UTF-8

2013-09-05 Thread Adam Roach

On 9/5/13 09:10, Henri Sivonen wrote:
> Why should we surface this class of authoring error to the UI in a way
> that asks the user to make a decision considering how rare this class
> of authoring error is?

It's not a matter of the user judging the rarity of the condition; it's
the user being able to, by casual observation, look at a web page and
tell that something is messed up in a way that makes it unusable for them.



> Are there other classes of authoring errors
> that you think should have UI for the user to second-guess the author?
> If yes, why? If not, why not?

In theory, yes. In practice, I can't immediately think of any instances
that fit the class other than this one and certain Content-Encoding issues.


If you want to reduce it to principle, I would say that we should 
consider it for any authoring error that is (a) relatively common in the 
wild; (b) trivially detectable by a lay user; (c) trivially detectable 
by the browser; (d) mechanically reparable by the browser; and (e) has 
the potential to make a page completely useless.


I would argue that we do, to some degree, already do this for things 
like Content-Encoding. For example, if a website attempts to send 
gzip-encoded bodies without a Content-Encoding header, we don't simply 
display the compressed body as if it were encoded according to the 
indicated type; we pop up a dialog box to ask the user what to do with 
the body.


I'm proposing nothing more radical than this existing behavior, except 
in a more user-friendly form.


As to the "why," it comes down to balancing the need to let the 
publisher know that they've done something wrong against punishing the 
user for the publisher's sins.



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Detection of unlabeled UTF-8

2013-09-05 Thread Mike Hoye

On 2013-09-05 10:10 AM, Henri Sivonen wrote:
> It's worth noting that for other classes of authoring errors (except
> for errors in https deployment) we don't give the user the tools to
> remedy them.

Firefox silently remedies all kinds of authoring errors.

- mhoye


Re: Deploying more robust hg-git replication for gecko

2013-09-05 Thread Ehsan Akhgari

On 2013-09-03 9:39 PM, Aki Sasaki wrote:

> On 9/3/13 8:25 AM, Ehsan Akhgari wrote:
> > Thanks for the update on this, John!  I have a number of questions
> > about this:
> >
> > 1. On the issue of the hg tags, can you please be more specific about
> > the problem that you're encountering?  In my experience, git deals
> > with updated tags the same way as it does with any non-fast-forward
> > merge, and I've never experienced any problems with that.
>
> I'm explicitly not converting most tags, and only whitelisting certain
> ones to avoid the issue caused by moving tags.
>
> To quote
> https://www.kernel.org/pub/software/scm/git/docs/git-tag.html#_on_re_tagging
> on moving git tags:
>
> ``Does this seem a bit complicated?  It should be. There is no way that
> it would be correct to just "fix" it automatically. People need to know
> that their tags might have been changed.''
>
> We move tags regularly on hg repos; this is standard operating procedure
> for a release build 2, or if a build 1 has an automation hiccup.  While
> we *could* convert the tags automatically, then either never move them
> or move them behind users' backs, users would then never get the updated
> tags unless they explicitly delete and re-fetch the tag by name...
> something people wouldn't typically do without prompting.  In my
> opinion, tags pointing at the wrong revision are worse than no tags.


Huh, interesting!  I was actually under the impression that tags could be
updated in a non-fast-forward manner similar to branches if you wanted,
but it seems not to be the case given the documentation.



> Also, I need to limit the tags pushed to gecko.git, as there is a hard
> fastforward-only rule there, and notifying partners to delete and
> recreate tags seems like a non-starter.  So I built in tag-limiting
> whitelists for safety.
>
> However, there appears to be an issue with the way I'm limiting tags.
> Rather than delay things further, I decided to publish as-is and see if
> anyone really cares about the tags, or if they would be fine using the
> tip of each relbranch instead.
>
> Since you bring up the point of tags, can you give examples of how you
> use tags, or how you have seen others use tags?


I don't use tags myself at all, I remember someone pointing out that my 
repo did not contain the hg tags a long time ago (2+ years) so I 
unfortunately don't remember who that person was.  That's when I started 
pushing the tags as well.  I just pointed this out as something that 
caught my attention.



> > 2. How frequently are these branches updated?
>
> The current job is running on a 5 minute cron job. We can, of course,
> change that if needed. When I add a new repo or a slew of new branches
> to convert, the job can take longer to complete, but it typically
> finishes in about 6 minutes.


Sounds good, that's exactly the same setup and frequency that I have as 
well.  It seems to be working fine for most people.



> > 3. What are the plans for adding the rest of the branches that we
> > currently have in https://github.com/mozilla/mozilla-central, and what
> > is the process for developers to request their project branches to be
> > added to the conversion jobs?  Right now the process is pretty
> > light-weight (they usually just ping me on IRC or send me an email).
> > It would be nice if we kept that property.
>
> There are two concerns here: Project branches are often reset, and also
> individual developers care about different subsets of branches.
>
> Providing a core set of branches that everyone uses, and which we can
> safely support at scale, seems a good set to include by default. For
> users of other branches, one approach we're looking at is to provide
> supported documentation that shows developers how to add
> any-branch-of-their-choice to their local repo. We're still figuring out
> what this default set should be, and as you have clearly expressed
> opinions on this in the past, we'd of course be interested in your
> current opinions here.


I strongly disagree that providing the current subset of branches is 
enough.  Firstly, to address your point about project branches being 
reset, it's true and it works perfectly fine for the people who are 
using those branches, since they will get a non-fast-forward merge which 
signals them about this change, which they can choose to avoid if they 
want.  Also, if somebody gives up their project branch (such as is the 
case for twigs) we can always delete the branch from the "main" remote 
and doing that will not affect people who have local branches based on 
that.  Git does the right thing in every case.


About the fact that individual developers care about different subsets 
of branches, that's precisely correct, and it's fine, since when using 
git, you *always* ignore branches that you're not interested in, and are 
never affected by what changes happen in those branches.  We have an 
extensive amount of experience with the current github mirror about both 
of these points and we _know_ that neither of these two are issues here.


Part of the value of having a git mirror for developers is that we don't
require individual people to have their own mirroring infrastructure, so
just providing documentation that tells people how to set up their own
branches by setting up hg-git locally, etc. is kind of defeating the
purpose of having a mirror that helps everybody.

Re: Deploying more robust hg-git replication for gecko

2013-09-05 Thread Nicolas B. Pierron

On 09/05/2013 09:51 AM, Ehsan Akhgari wrote:
> On 2013-09-03 9:39 PM, Aki Sasaki wrote:
> > On 9/3/13 8:25 AM, Ehsan Akhgari wrote:
> > > 2. How frequently are these branches updated?
> >
> > The current job is running on a 5 minute cron job. We can, of course,
> > change that if needed. When I add a new repo or a slew of new branches
> > to convert, the job can take longer to complete, but it typically
> > finishes in about 6 minutes.
>
> Sounds good, that's exactly the same setup and frequency that I have as
> well.  It seems to be working fine for most people.


I had the same frequency before, but the problem is that when you are
trying to push, you are 5 minutes late compared to the others.  It
happened to me that I had to rebase multiple times before being able to
push anything to inbound.

Since then, I changed to a system which looks every 10s for modifications
in the pushlogs; if there is a modification, the script updates the
modified branch.

The reason I brought the timer so low is that the pushlog is served over
http, and the server is likely to cache it, knowing that it is used by
tbpl (which, by the way, uses https).  I did not do it before because "hg
id" still implies that we have to establish a secure connection.



> Part of the value of having a git mirror for developers is that we don't
> require individual people to have their own mirroring infrastructure, so
> just providing documentation that tells people how to set up their own
> branches by setting up hg-git locally, etc. is kind of defeating the
> purpose of having a mirror that helps everybody.

I agree.

It is for this exact reason that I moved all of this processing to
another computer.  Having to use a special tool for pushing makes for
terrible git integration.



> > > 4. About the interaction between gecko.git and beagle, when you say
> > > that they cannot be identical, are you talking about the location
> > > where they're hosted or the SHA1s for the same commits?  I think we
> > > need to guarantee the latter being identical going forward, but the
> > > former doesn't matter as much.
> >
> > It's a requirement to keep the SHA1s the same, and we are confident we
> > can do that.
> > The location and naming for legacy b2g branches will be different, as
> > well as a hard fastforward-only rule on gecko.git.
>
> That sounds fine.
>
> One point to note here is that because of reasons which I have not
> explored, some of our branches such as aurora and beta are regularly
> updated in a non-fast-forward manner.  You can see this if you run git
> fetch in a local clone (there will be a "(forced update)" next to the
> name of the branch in the output.)  You might want to investigate why
> that happens in case B2G starts to follow regular release trains in the
> future (but please also note that it might just be a bug in my existing
> setup.)


I never experienced that before.  I guess this might be related to the fact 
that I pull all mercurial repositories into one before doing the conversion 
to git.


--
Nicolas B. Pierron


Re: Detection of unlabeled UTF-8

2013-09-05 Thread Robert Kaiser

Zack Weinberg schrieb:
> It is possible to distinguish UTF-8 from most legacy
> encodings heuristically with high reliability, and I'd like to suggest
> that we ought to do so, independent of locale.


I would very much agree with doing that. UTF-8 is what is being 
suggested everywhere as the encoding to go with, and as we should be 
able to detect it easily enough, we should do it and switch to it when 
we find it.
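
For reference, the core of such a check is small: a byte stream that
decodes as well-formed UTF-8 and actually contains non-ASCII bytes is
overwhelmingly likely to be UTF-8, because legacy-encoded non-ASCII text
almost never forms valid multi-byte sequences by accident. A minimal
sketch (this is not Gecko's detector, and the function name is made up):

   #include <stddef.h>
   #include <stdint.h>

   // True if aData is well-formed UTF-8 *and* contains at least one
   // non-ASCII byte. Pure ASCII is excluded because any ASCII-compatible
   // legacy encoding decodes it identically anyway.
   bool LooksLikeNonAsciiUtf8(const uint8_t* aData, size_t aLength)
   {
     bool sawNonAscii = false;
     for (size_t i = 0; i < aLength; ) {
       uint8_t b = aData[i++];
       if (b < 0x80) {
         continue;
       }
       sawNonAscii = true;
       size_t trail;
       uint32_t cp;
       if ((b & 0xE0) == 0xC0)      { trail = 1; cp = b & 0x1F; }
       else if ((b & 0xF0) == 0xE0) { trail = 2; cp = b & 0x0F; }
       else if ((b & 0xF8) == 0xF0) { trail = 3; cp = b & 0x07; }
       else { return false; }  // bare trail byte or invalid lead byte
       if (i + trail > aLength) {
         return false;  // truncated sequence
       }
       for (size_t j = 0; j < trail; j++) {
         uint8_t t = aData[i++];
         if ((t & 0xC0) != 0x80) {
           return false;  // not a trail byte
         }
         cp = (cp << 6) | (t & 0x3F);
       }
       // Reject overlong forms, surrogates, and out-of-range code points.
       static const uint32_t kMinByTrail[] = { 0, 0x80, 0x800, 0x10000 };
       if (cp < kMinByTrail[trail] || cp > 0x10FFFF ||
           (cp >= 0xD800 && cp <= 0xDFFF)) {
         return false;
       }
     }
     return sawNonAscii;
   }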


Robert Kaiser



Re: Getting the current release version

2013-09-05 Thread Robert Kaiser

Robert Helmer schrieb:
> Thanks for the shout-out - releases-api exposes the metadata that's only
> available on ftp.m.o (this is something we need for Socorro so we decided
> to split this out into its own service). Ideally this would come from a
> better source, but FTP is as "official" as we can get at the moment for
> things like getting the buildid for the latest nightly, that sort of thing.


Also note that this only knows what builds have been *generated* and not
what has been *released* to users, which will only happen after QA signs
off on those generated builds.


If you want what has been released, product-details is the place to go.

Robert Kaiser


Python 2.7.3 now required to build

2013-09-05 Thread Gregory Szorc
Just landed in inbound and making its way to a tree near you is the 
requirement that you use Python 2.7.3 or greater (but not Python 3) to 
build the tree. If you see any issues, bug 870420 is responsible.
Previously we required Python 2.7.0 or greater. This change has been 
planned and agreed to for months and was recently unblocked to land.


Hopefully things should "just work" for most people. MozillaBuild 
(Windows dev environment) is shipping Python 2.7.5 and most Linux 
distros are running 2.7.3+. OS X, however, lags. OS X 10.8 ships 2.7.2, 
so attempting to build out of the box will give you an error.


If your Python is too old, the build system will instruct you what to 
do. For most people, run `mach bootstrap` and it will hopefully install 
a modern Python on your machine. If that doesn't work, patches welcome 
(code in /python/mozboot).



Re: Detection of unlabeled UTF-8

2013-09-05 Thread Boris Zbarsky

On 9/5/13 11:15 AM, Adam Roach wrote:
> I would argue that we do, to some degree, already do this for things
> like Content-Encoding. For example, if a website attempts to send
> gzip-encoded bodies without a Content-Encoding header, we don't simply
> display the compressed body as if it were encoded according to the
> indicated type


Actually, we do, unless the indicated type is text/plain.

The one fixup I'm aware of with Content-Encoding is that if the content 
type is application/gzip and the Content-Encoding is gzip and the file 
extension is .gz we ignore the Content-Encoding.
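
In pseudo-Gecko terms, that fixup amounts to something like the sketch
below; every name here is invented for illustration, and the real necko
logic is more involved:

   #include "nsStringAPI.h"

   // Sketch: ignore a bogus Content-Encoding only when all three of the
   // conditions described above hold at once.
   bool ShouldIgnoreContentEncoding(const nsACString& aContentType,
                                    const nsACString& aContentEncoding,
                                    const nsACString& aFileExtension)
   {
     return aContentType.EqualsLiteral("application/gzip") &&
            aContentEncoding.EqualsLiteral("gzip") &&
            aFileExtension.EqualsLiteral("gz");
   }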


Both of these are workarounds for a very widespread server
misconfiguration (in particular, the default Apache configuration for
many years had the text/plain problem and the default Apache
configuration on most Linux distributions had the gzip problem).


-Boris


window.opener always returns null

2013-09-05 Thread digitalcre8
I'm creating a FF extension that needs to detect whether a window was opened 
via javascript with the window.open() command.  I've tried many different ways: 
window.opener, parent.window.opener, this.opener, etc. and it always returns 
null.

I was also trying it using the code editor here:  
http://www.w3schools.com/js/tryit.asp?filename=try_win_focus   The only way 
that it returned a valid opener object was when I changed the line to:
myWindow.document.write(myWindow.opener);

However, this implies that you have to know the object reference name in order 
to get the opener, but there is no way that I'm aware of to detect this in a 
loaded website.  Any ideas of how I can reliably detect if a window was opened 
via javascript?


Re: vsync proposal

2013-09-05 Thread Robert O'Callahan
I had some off-thread discussion with Bas about lowering latency. We seem
to have agreed on the following plan:

On some non-main thread:
1. Wait for vsync event
2. Dispatch refresh driver ticks to all windows that don't already have a
pending refresh driver tick unacknowledged by a layer tree update response
3. Wait for N ms, processing any layer-tree updates (where N = vsync
interval - generously estimated time to composite all windows)
4. Composite all updated windows
5. Goto 1

This could happen on the compositor thread or some other thread depending
on how "wait for vsync event" is implemented on each platform.

Rob


Re: window.opener always returns null

2013-09-05 Thread Boris Zbarsky

On 9/5/13 10:41 PM, digitalc...@gmail.com wrote:
> I'm creating a FF extension that needs to detect whether a window was
> opened via javascript with the window.open() command.


Generally, testing .opener on the relevant window will work.  However 
note that pages can explicitly null this out to break references to the 
opener...


-Boris


How to create a child process in a chrome mochitest?

2013-09-05 Thread Nicholas Nethercote
Hi,

I want to create a child process in
toolkit/components/aboutmemory/tests/test_memoryReporters.xul, so I
can test the cross-process memory reporting.

In theory, this is as easy as <iframe remote="true"> or
<browser remote="true">, or something like that.  But I've tried about 80
different variations on these, and I cannot get a child process.

This is the closest I've got:

   <html:iframe remote="true" src="about:about"></html:iframe>

AIUI, the |html:| prefix and the |="true"| attribute values are
necessary because this is a XUL file and hence XML.  My <window>
element looks like this:

<window xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"
        xmlns:html="http://www.w3.org/1999/xhtml">

With the above code I do get an iframe that loads about:about, which
is good.  But there's no child process created, and when I inspect the
|remote| attribute of the iframe it is |undefined|, as if something
prevented it from being set to true.

What am I doing wrong?  The docs
(https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe and
https://developer.mozilla.org/en-US/docs/WebAPI/Browser?redirectlocale=en-US&redirectslug=DOM%2FUsing_the_Browser_API
and https://developer.mozilla.org/en-US/docs/XUL/iframe) make it
sound like this should be easy.

Nick