Counter-intuitively, having multiple content processes may use less memory than
taking screenshots per tab. Especially if we use the same COW forking that FFOS
uses, the overhead of a content process should be very small, certainly less
than a high-resolution screenshot kept around. Not sure do wha
I guess we can add a command-line option to our executable that calls the
function, prints the results, and exits, and then invoke ourselves in a new
process and parse the output. What a silly bug.
Thanks,
Andreas
Sent from Mobile.
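For illustration only, here is the general shape of that pattern, sketched in
Node.js terms rather than Gecko's actual process APIs; the real change would be
a new flag in our own executable, and "--version" below merely stands in for it.

  // Illustrative sketch: re-invoke the binary we are currently running as,
  // let the child run the fragile call and print the result, then parse its
  // stdout in the parent.
  const { execFileSync } = require("child_process");

  function queryInNewProcess(flag) {
    const output = execFileSync(process.execPath, [flag], { encoding: "utf8" });
    return output.trim();
  }

  // With a real flag the executable understands; "--version" just demonstrates
  // the round trip, the proposed option would be new.
  console.log(queryInNewProcess("--version"));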
> On Mar 26, 2015, at 07:03, Daniel Stenberg wr
Force a buffer in <2GB memory and always copy into/out of that buffer?
Thanks,
Andreas
> On Mar 25, 2015, at 11:17 PM, Randell Jesup wrote:
>
> Thanks to detective work by a subscriber to dev.media (Tor-Einar
> Jarnbjo), we've found the cause of unexplained ICE (NAT-traversal)
> failures in We
> On Mar 6, 2015, at 6:18 PM, Ehsan Akhgari wrote:
>
> On 2015-03-06 1:14 PM, andreas@gmail.com wrote:
>>
>>> On Mar 6, 2015, at 5:52 PM, Anne van Kesteren wrote:
>>>
>>> On Fri, Mar 6, 2015 at 6:33 PM, wrote:
Is the threat model for all of these permissions significant enough to
> On Mar 6, 2015, at 5:52 PM, Anne van Kesteren wrote:
>
> On Fri, Mar 6, 2015 at 6:33 PM, wrote:
>> Is the threat model for all of these permissions significant enough to
>> warrant the breakage?
>
> What breakage do you envision?
I can no longer unblock popups on sites that use HTTP. The
>
> You might say that having a local network attacker able to see what
> your webcam is looking at is not scary, but I'm going to disagree.
> Also c.f. RFC 7258.
I asked for something very specific: popups. What is the threat model for the
popup permission state?
Thanks,
Andreas
Is the threat model for all of these permissions significant enough to warrant
the breakage? Popups for example are annoying, but a spoofed origin to take
advantage of whitelisted popups seems not terribly dangerous.
Thanks,
Andreas
> On Mar 6, 2015, at 5:27 PM, Anne van Kesteren wrote:
>
>
Would it make sense to check in some of the libraries we build that we very
rarely change, and that don’t have a lot of configure dependencies people
twiddle with? (icu, pixman, cairo, vp8, vp9). This could speed up build times
in our infrastructure and for developers. This doesn’t have to be i
>
> Are we using the discrete GPU when Chrome is not?
That was my first guess as well. As far as I can tell we fall back to the
integrated GPU just fine, according to Activity Monitor. Even App Nap seems
to work when FF is occluded. Yet our average energy impact is 5x that of Chrome.
Andreas
>
> - K
I am using Nightly on Yosemite and power use is pretty atrocious. The battery
menu tags Firefox Nightly as a significant battery hog, and I can confirm this
from the user experience perspective as well. My battery time is a fraction of
using Chrome for the same tasks.
Not every kind of content
> On Oct 27, 2014, at 1:16 AM, Karl Dubost wrote:
>
> Andreas,
>
> Le 27 oct. 2014 à 08:15, Andreas Gal a écrit :
>> What happens when a user types letters into the Google search box we ship by
>> default in Firefox?
>
> Do you mean desktop? Sorry I was n
What happens when a user types letters into the Google search box we ship by
default in Firefox?
Thanks,
Andreas
> On Oct 27, 2014, at 12:08 AM, Karl Dubost wrote:
>
> In Firefox 2.2.0, each time you try to enter a letter, there is a list of
> icons displayed which seems to be delivered by
longer the expert there; I took over a bunch of his
> work to clean it up a year or two ago, and Seth is the benevolent dictator
> now and has done some good cleanup work on it as well.
>
> Cheers,
> Josh
>
> On 2014-10-16 6:45 PM, Andreas Gal wrote:
>>
>> The co
, Nicholas Nethercote wrote:
> On Fri, Oct 17, 2014 at 8:55 AM, Andreas Gal wrote:
>>
>> I would like to nominate image/src/* and in particular its class hierarchy
>> which completely doesn’t make any sense what so ever. imgRequest,
>> imgIRequest, we got i
I would like to nominate image/src/* and in particular its class hierarchy,
which completely doesn't make any sense whatsoever. imgRequest, imgIRequest,
we've got it all.
Andreas
On Oct 16, 2014, at 6:44 PM, Randell Jesup wrote:
>> On Fri, Oct 17, 2014 at 1:32 AM, Nicholas Nethercote wrote:
I looked at lzma2 a while ago for FFOS. I got pretty consistently 30% smaller
omni.ja with that. We could add it pretty easily to our decompression code but
it has slightly different memory behavior.
Andreas
On Oct 13, 2014, at 5:39 PM, Gregory Szorc wrote:
> On 10/13/14 4:54 PM, Chris More
On Jun 18, 2014, at 2:03 AM, Vivien Nicolas wrote:
>
> On 06/17/2014 09:18 PM, James Burke wrote:
>> On 6/17/14, 10:08 AM, Vivien Nicolas wrote:
>>> That's true. Actually there are many other hacks that depends on the fact
>>> that application are certified. So even if I would like to have mor
Please read my email again. This kind of animation cannot be rendered with high
FPS by any engine. It's simply conceptually expensive and inefficient for the
DOM rendering model. We will work on matching other engines if we are slightly
slower than we could be, but you will never reach solid per
There are likely two causes here.
First, until we have APZ enabled it's very unlikely that we can ever maintain
high frame-rate scrolling on low-end hardware. OMTC is a prerequisite for APZ
(async pan/zoom). Low-end hardware is simply not fast enough to repaint and
buffer-rotate at 60 FPS.
I think we should shift the conversation to how we actually animate here.
Animating by trying to reflow and repaint at 60fps is just a bad idea. This
might work on very high-end hardware, but it will cause poor performance on the
low-end Windows notebooks people buy these days. In other words
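As a rough sketch of the distinction being drawn here (the element id and the
distance are arbitrary): animating "left" forces a reflow and repaint every
frame, while animating "transform" can be interpolated on the compositor.

  const box = document.getElementById("box");

  // The pattern to avoid: updating layout-affecting properties per frame,
  // which forces reflow and repaint at (hopefully) 60fps.
  // box.style.left = x + "px";

  // Compositor-friendly alternative: one style change, and the interpolation
  // can happen off the main thread where async animation is available.
  box.style.transition = "transform 300ms ease-out";
  box.style.transform = "translateX(200px)";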
On Apr 15, 2014, at 9:00 PM, Robert O'Callahan wrote:
> On Wed, Apr 16, 2014 at 11:14 AM, Vladimir Vukicevic
> wrote:
>
>> Note that for purposes of this discussion, "VR support" is minimal.. some
>> properties to read to get some info about the output device (resolution,
>> eye distance, dist
On Apr 15, 2014, at 4:17 PM, Benoit Jacob wrote:
>
>
>
> 2014-04-15 18:28 GMT-04:00 Andreas Gal :
>
> You can’t beat the competition by fast following the competition. Our
> competition are native, closed, proprietary ecosystems. To beat them, the Web
> has to be
You can’t beat the competition by fast following the competition. Our
competition are native, closed, proprietary ecosystems. To beat them, the Web
has to be on the bleeding edge of technology. I would love to see VR support in
the Web platform before it's available as a built-in capability in an
Vlad asked a specific question in the first email. Are we comfortable using
another open (albeit not open enough for MPL) license on trunk while we rewrite
the library? Can we compromise on trunk in order to innovate faster and only
ship to GA once the code is MPL friendly via re-licensing or r
just sticking with zip
we should get better compression and likely better load times. Wdyt?
Andreas
On Feb 27, 2014, at 12:25 AM, Mike Hommey wrote:
> On Wed, Feb 26, 2014 at 08:56:37PM +0100, Andreas Gal wrote:
>>
>> This randomly reminds me that it might be time to re
This sounds like quite an opportunity to shorten download times and reduce CDN
load. Who wants to file the bug? :)
Andreas
On Feb 26, 2014, at 9:44 PM, Benjamin Smedberg wrote:
> On 2/26/2014 3:21 PM, Jonathan Kew wrote:
>> On 26/2/14 19:57, Andreas Gal wrote:
>>>
>> them down to 1.1MB binary which gzips to 990KB. This seems like a
>> reasonable size to me and involves a lot less work than setting up a
>> process for distributing these files via CDN.
>>
>> Brendan
>>
>> On Feb 24, 2014, at 10:14 PM, Rik Cabanier
>>
>>
>> On Mon, Feb 24, 2014 at 5:01 PM, Andreas Gal wrote:
>>
>> My assumption is that certain users only need certain CMaps because they
>> tend to read only documents in certain languages. This seems like something
>> we can really optimize a
irefox resources?
>
> From what I’ve seen, many PDFs use CMaps even if they don’t necessarily have
> CJK characters, so it may just be better to include them. FWIW both Poppler
> and MuPDF embed the CMaps.
>
> Brendan
>
> On Feb 24, 2014, at 3:01 PM, Andreas Gal wrote:
Is this something we could load dynamically and cache offline?
Andreas
Sent from Mobile.
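One possible shape of "load dynamically and cache offline", sketched with fetch
plus the Cache API (a newer mechanism than what PDF.js had available at the
time); the URL and cache name are invented for the example.

  async function loadCMap(name) {
    const url = `https://example.org/cmaps/${name}.bcmap`; // hypothetical path
    const cache = await caches.open("pdfjs-cmaps");

    // Serve from the offline cache when we already have the file.
    const cached = await cache.match(url);
    if (cached) {
      return cached.arrayBuffer();
    }

    // Otherwise fetch it once, store it, and return it.
    const response = await fetch(url);
    await cache.put(url, response.clone());
    return response.arrayBuffer();
  }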
> On Feb 24, 2014, at 23:41, Brendan Dahl wrote:
>
> PDF.js plans to soon start including and using Adobe CMap files for
> converting character codes to character IDs (CIDs) and mapping character
> codes
On Feb 22, 2014, at 1:12 PM, David Rajchenbach-Teller
wrote:
> So, I'm wondering how much effort we should put in reducing the number
> of ChromeWorkers. On Desktop, we have a PageThumbsWorker and a
> SessionWorker, which are both useful but not strictly necessary. I seem
> to remember that the
Caches are fine in the child process as long as they are reasonably sized and
purge themselves in response to a low-memory notification.
On FFOS the parent process never dies, so leaks or long-lived memory allocations
there tend to hurt. Responding to low-memory notifications is really important
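A minimal chrome-JS sketch of that pattern: a plain Map standing in for the
cache, emptied when the "memory-pressure" observer notification fires (the
cache itself is illustrative; the observer topic is the real Gecko one).

  Components.utils.import("resource://gre/modules/Services.jsm");

  const cache = new Map();

  const observer = {
    observe(subject, topic, data) {
      if (topic === "memory-pressure") {
        // Drop everything when the system reports it is low on memory.
        cache.clear();
      }
    },
  };
  Services.obs.addObserver(observer, "memory-pressure", false);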
We could easily add a time multiplier pref and you could set that during your
test. This is probably cheap enough and useful enough to do in production
builds.
Andreas
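A sketch of how a test might use such a pref; "layout.transition.time-multiplier"
is an invented name for the hypothetical multiplier, not an existing pref.

  Components.utils.import("resource://gre/modules/Services.jsm");

  // Stretch every transition/animation 10x so intermediate frames can be
  // inspected, then restore the default after the test.
  Services.prefs.setIntPref("layout.transition.time-multiplier", 10);
  // ... run the animation under test ...
  Services.prefs.clearUserPref("layout.transition.time-multiplier");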
On Feb 16, 2014, at 7:24 PM, Andrew Sutherland
wrote:
> In Gaia, the system and many of the apps use transitions/animations
It seems to me that we have arrived at the conclusion that a good drawing API
should be mostly stateless (like Moz2D), instead of Cairo's stateful API. As a
result we are currently removing all uses of the Cairo API and we will
eventually remove Cairo from our codebase altogether (in favor of D
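To make the stateful/stateless distinction concrete, here is a small browser-JS
illustration: canvas 2D stands in for the stateful style, and the "fillRect"
helper below is invented in the spirit of Moz2D, not a real binding.

  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");

  // Stateful style: the fill depends on context state set earlier.
  ctx.fillStyle = "red";
  ctx.fillRect(10, 10, 100, 50); // implicitly uses ctx.fillStyle

  // Stateless style: every call carries all the information it needs, so
  // calls are order-independent and easier to record, batch, or hand off.
  function fillRect(target, rect, pattern) {
    const c = target.getContext("2d");
    c.save();
    c.fillStyle = pattern.color;
    c.fillRect(rect.x, rect.y, rect.width, rect.height);
    c.restore();
  }
  fillRect(canvas, { x: 10, y: 70, width: 100, height: 50 }, { color: "blue" });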
On Nov 7, 2013, at 3:06 PM, "L. David Baron" wrote:
> On Thursday 2013-11-07 13:24 -0800, Andreas Gal wrote:
>> On Nov 7, 2013, at 1:19 PM, Karl Tomlinson wrote:
>>> Will any MoCo developers be permitted to spend some time fixing
>>> these or the already-kn
On Nov 7, 2013, at 1:48 PM, Karl Tomlinson wrote:
> Andreas Gal writes:
>
>> It's not a priority to fix Linux/X11. We will happily take
>> contributed patches, and people are welcome to fix issues they
>> see, as long as it's not at the expense of the things that matter.
>
On Nov 7, 2013, at 1:19 PM, Karl Tomlinson wrote:
> Nicholas Cameron writes:
>
>> Currently on Linux our only 'supported' graphics backend is the
>> main-thread software backend (basic layers).
>
> FWIW basic layers is predominantly GPU-based compositing
> (not software) on most X11 systems.
If you can access the remaining battery status of a large enough
population over time, it should be easy to use telemetry to measure
this pre- and post-patch.
Andreas
Sent from Mobile.
> On Nov 5, 2013, at 16:46, David Rajchenbach-Teller
> wrote:
>
> Context: I am currently working on patches de
Looks like the comms app has some residual use of the old audio API:
apps/communications/dialer/js/keypad.js: this._audio.mozSetup(1, this._sampleRate);
apps/system/emergency-call/js/keypad.js: this._audio.mozSetup(2, this._sampleRate);
Should be easy to replace. I will file a bug and mak
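A rough sketch of what a replacement could look like using the Web Audio API;
"playTone" and the plain sine wave are placeholders for the dialer's real DTMF
generator, not the actual Gaia patch.

  const audioCtx = new AudioContext();

  function playTone(durationSeconds, sampleRate) {
    const frameCount = Math.floor(durationSeconds * sampleRate);
    const buffer = audioCtx.createBuffer(1, frameCount, sampleRate);
    const channel = buffer.getChannelData(0);
    for (let i = 0; i < frameCount; i++) {
      // Placeholder waveform: a 440 Hz sine instead of the real DTMF mix.
      channel[i] = Math.sin(2 * Math.PI * 440 * (i / sampleRate));
    }
    const source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    source.start();
  }

  playTone(0.2, 44100);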
The experiment here is quite a bit different from what the current patch is
proposing (6 shader programs, only drive swizzle and alpha/no-alpha via
uniforms). Benoit is redoing the measurements for that scenario. More data
coming shortly.
Andreas
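In WebGL/GLSL terms, the consolidated approach looks roughly like the fragment
shader below, with the red/blue swap and forced alpha driven by uniforms
instead of separate compiled programs; the uniform names are illustrative, not
the ones used in the actual patch.

  const fragmentShaderSource = `
    precision mediump float;
    uniform sampler2D uTexture;
    uniform bool uSwapRB;   // BGRA-style source: swap red and blue channels
    uniform bool uOpaque;   // RGBX/BGRX-style source: ignore alpha
    varying vec2 vTexCoord;

    void main() {
      vec4 color = texture2D(uTexture, vTexCoord);
      if (uSwapRB) {
        color = color.bgra;
      }
      if (uOpaque) {
        color.a = 1.0;
      }
      gl_FragColor = color;
    }
  `;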
On Oct 16, 2013, at 7:00 AM, Benoit Jacob wro
Hi,
we currently have a zoo of shaders to render layers:
RGBALayerProgramType,
BGRALayerProgramType,
RGBXLayerProgramType,
BGRXLayerProgramType,
RGBARectLayerProgramType,
RGBXRectLayerProgramType,
BGRARectLayerProgramType,
RGBAExternalLayerProgramType,
ColorLayerProgramType,
Y
Pepper is not an API; it's basically a huge set of Chromium guts exposed for you
to link against. The only documentation is the source, and that source keeps
constantly changing. I don't think it's viable for anyone to implement Pepper
without also pulling in most or all of Chromium. Pepper is Chrom
Can you delete this boilerplate from existing makefiles if not already
done? That will prevent people from adding it since people look at
examples when adding new makefiles.
Andreas
Mike Hommey wrote:
Hi,
Assuming it sticks, bug 912293 made it unnecessary to start Makefile.in
files with th
also working on a set of CSS benchmarks to measure fill rate (CSS
allows us to measure both the 2D compositor and the GPU). That should shed
some light on this variance between devices.
Andreas
Sent from Mobile.
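A rough sketch of what such a fill-rate benchmark could look like in page JS:
stack several full-screen translucent layers, force a repaint every frame, and
count requestAnimationFrame ticks over one second (layer count and colors are
arbitrary).

  const LAYERS = 8;
  for (let i = 0; i < LAYERS; i++) {
    const div = document.createElement("div");
    div.style.cssText =
      "position:fixed; top:0; left:0; width:100%; height:100%;" +
      "background:rgba(255,0,0,0.1);";
    document.body.appendChild(div);
  }

  const topLayer = document.body.lastElementChild;
  let frames = 0;
  const start = performance.now();

  function tick(now) {
    frames++;
    // Nudge the top layer each frame so the whole stack must be re-blended.
    topLayer.style.opacity = frames % 2 ? "0.9" : "1";
    if (now - start < 1000) {
      requestAnimationFrame(tick);
    } else {
      console.log("approx frames per second under load:", frames);
    }
  }
  requestAnimationFrame(tick);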
On Aug 31, 2013, at 17:40, Benoit Jacob wrote:
2013/8/31 Andreas Gal
>
> S
Soon we will be using GL (and its Windows equivalent) on most platforms
to implement a hardware-accelerated compositor. We draw into a back
buffer, and at up to 60 Hz we perform a buffer swap to display the back
buffer and make the front buffer the new back buffer (double buffering).
As a res
First of all, thanks for raising this. It's definitely a problem that
needs fixing.
I am not convinced by your approach, though. A few months from now,
disabling WebRTC will be like calling for the DOM or JS or CSS to be disabled
in local developer builds. It will become a natural part of the cor
+1
Sent from Mobile.
On Aug 14, 2013, at 9:25, Chris Peterson wrote:
> We could also send a weekly congratulations to the person who removed the
> most lines of code that week. :)
>
> chris
>
>
> On 8/13/13 1:57 PM, Jet Villegas wrote:
>> This is awesome! Is it possible to see a log of the rec
We are working on ways to make add-ons like adblock work with e10s on
desktop without major changes to the add-on. That mechanism might work
for the thumbnail case. Gavin can reach out to trev and discuss whether
this is something we should try to make work. I do agree this isn't
super high p
What's the main pain point? Whether promises are resolved immediately or
from a future event loop iteration?
Andreas
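The semantic difference in question, in a few lines: a spec-compliant promise
never runs its reaction synchronously, even when it is already resolved.

  const p = Promise.resolve("done");

  console.log("before then()");
  p.then(value => console.log("reaction:", value));
  console.log("after then()");

  // Logs "before then()", "after then()", then "reaction: done" -- the
  // callback runs from a later microtask, never immediately inside then().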
Gavin Sharp wrote:
On Tue, Jul 30, 2013 at 11:17 AM, Boris Zbarsky wrote:
On 7/30/13 11:13 AM, Dave Townsend wrote:
The JS promise implementation came out of a desire to use
Yeah, I just saw that grepping through the tree. Both completely
independent, too. On the upside, this might solve Jan's problem.
Andreas
Boris Zbarsky wrote:
On 7/30/13 7:36 AM, Andreas Gal wrote:
For that we would have to implement Promise via IDL. Definitely
possible. All you need
For that we would have to implement Promise via IDL. Definitely
possible. All you need is a bit of IDL and some JS that implements it. It
would be a lot slower than the jsm since it wraps into C++ objects that
call into JS, but in most cases that doesn't really matter.
Andreas
janjongb...@gmai
On May 1, 2013, at 1:06 AM, "Robert O'Callahan" wrote:
> On Wed, May 1, 2013 at 6:41 PM, Andreas Gal wrote:
> Both Skia/SkiaGL and D2D support basically all the effects and filters we
> want.
>
> D2D does not support GLSL custom filters. We'd need A
On May 1, 2013, at 12:14 AM, Nicholas Cameron wrote:
> This sounds like an awful lot of work, a lot more than some glue code and
> code deletion. It sounds like you are proposing to make Moz2D pretty much a
> general purpose 2D and 3D graphics library,
Minus support for 3D transforms which bo
What do people think?
Andreas
On Apr 30, 2013, at 10:36 PM, "Robert O'Callahan" wrote:
> On Wed, May 1, 2013 at 5:28 PM, Andreas Gal wrote:
> Should we hide the temporary surface generation (when needed) within the API?
>
> GLContext::Composite(Target, Source, EffectChai
On Apr 30, 2013, at 10:36 PM, "Robert O'Callahan" wrote:
> On Wed, May 1, 2013 at 5:28 PM, Andreas Gal wrote:
> Should we hide the temporary surface generation (when needed) within the API?
>
> GLContext::Composite(Target, Source, EffectChain, Filters)
>
>
On Apr 30, 2013, at 10:28 PM, Andreas Gal wrote:
>
> On Apr 30, 2013, at 9:56 PM, "Robert O'Callahan" wrote:
>
>> On Wed, May 1, 2013 at 4:11 PM, Andreas Gal wrote:
>> I wonder whether we should focus on one fast GPU path via GLSL, and have one
On Apr 30, 2013, at 9:56 PM, "Robert O'Callahan" wrote:
> On Wed, May 1, 2013 at 4:11 PM, Andreas Gal wrote:
> I wonder whether we should focus on one fast GPU path via GLSL, and have one
> precise, working, I-don't-care-how-slow CPU fallback.
>
> I agree t
eading) and adequate performance on-GPU?
> It is my understanding that OpenCL can manipulate textures, but I don't know
> what the constraints are (or whether ordinary users actually have a working
> OpenCL implementation).
>
> -kg
>
>
> On Tue, Apr 30, 2013 at 9:
You propose SIMD optimization for the software fallback path. I wonder whether
we should focus on one fast GPU path via GLSL, and have one precise, working,
I-don't-care-how-slow CPU fallback. All hardware made in the last few years will
have a GPU we support. Really old XP hardware might not, but
We filed a bug for this and I am working on the patch.
Andreas
Sent from Mobile.
On Apr 26, 2013, at 16:06, Mounir Lamouri wrote:
> On 26/04/13 11:17, Gregory Szorc wrote:
>> Anyway, I just wanted to see if others have thought about this. Do
>> others feel it is a concern? If so, can we formul
Preferences are, as the name implies, intended for preferences. There is no sane
use case for storing data in preferences. I would give any patch I come across
doing that an automatic sr- for poor taste and general insanity.
SQLite is definitely not cheap, and we should look at more suitable back
How many 10.7 machines do we operate in that pool?
Andreas
On Apr 25, 2013, at 10:30 AM, "Armen Zambrano G." wrote:
> (please follow up through mozilla.dev.planning)
>
> Hello all,
> I have recently been looking into our Mac OS X test wait times which have
> been bad for many months and prog
JS is a big advantage for rapid implementation of features and it's
easier to avoid exploitable mistakes. Also, in many cases JS code
(bytecode, not data) should be slimmer than C++. Using JS for
infrequently executing code should be a memory win. I think I would
like to hear from the JS team on re
I assume all this data/reasoning will be posted in the bug. People
just didn't get around to it yet. The idea was to use the bug to
discuss the issue. There is definitely no decision yet to ship, just a
decision to take a look at some additional data point someone raised.
Andreas
Sent from Mobile
Do we actually need the tab, or just the document? If it's the latter, can we
just keep the document around invisibly?
Andreas
On Feb 25, 2013, at 10:14 PM, Zack Weinberg wrote:
> https://bugzilla.mozilla.org/show_bug.cgi?id=650960 seeks to replace the
> existing print progress bars with some
On Feb 25, 2013, at 8:22 PM, George Wright wrote:
> On 02/23/2013 04:00 PM, Andreas Gal wrote:
>> OpenVG is a Khronos standard API for GPU accelerated 2D rendering. Its very
>> similar to OpenGL in design. In fact, its an alternative API to OpenGL ES on
>> top of EG
and Mali seem to have special dedicated hardware for
OpenVG (for tessellation I am guessing). If we find OpenVG too unstable for the
broad set of Android hardware, we can always limit OpenVG use to B2G devices
where we can work with the vendor.
Andreas
>
> Benoit
>
> 2013/2/23 Andreas
That approach is strictly superior, but probably not
available broadly for quite a while. Over time we will want to switch over to
this if it becomes more common, but for now, especially for mobile, OpenVG
seems attractive.
Andreas
>
> -kg
>
> On Sat, Feb 23, 2013 at 1:00 PM, Andrea
OpenVG is a Khronos standard API for GPU-accelerated 2D rendering. It's very
similar to OpenGL in design; in fact, it's an alternative API to OpenGL ES on
top of EGL. It looks like OpenVG is supported on most Android devices and
is used there by Flash (or, well, used to be used). B2G devices h
animate? I am sure you guys
considered this, so I am curious why this was excluded.
Thanks,
Andreas
On Feb 12, 2013, at 11:29 PM, Asa Dotzler wrote:
> On 2/12/2013 8:05 PM, Andreas Gal wrote:
>> Hey Asa,
>>
>> where does the magic 20 pages deep history number c
Hey Asa,
where does the magic 20 pages deep history number come from? Why not 1? Or 999?
Andreas
On Feb 12, 2013, at 9:40 PM, Asa Dotzler wrote:
> On 2/12/2013 3:08 PM, Ed Morley wrote:
>> On 12 February 2013 22:11:12, Stephen Pohl wrote:
>>> I wanted to give a heads up that we're in the proce
There are definitely a lot of prerequisites we can work on first. Making all the
display-list code self-contained means eliminating much of nsCSSRendering.cpp,
which is probably quite a bit of refactoring work we can do right now.
Andreas
On Feb 12, 2013, at 12:24 PM, Jet Villegas wrote:
> I'm not too con
On Feb 12, 2013, at 9:50 AM, Milan Sreckovic wrote:
>
> I think we need a stronger statement than "worthwhile" in this:
>
> It would be worthwhile to wait for the Layers refactoring to be completed to
> avoid too many conflicts.
>
> when it comes to actually landing code. Something like "we
Hi Anthony,
thanks for bringing this up. I completely agree that we have to unify all that
scrolling code. Chris Jones and Doug Sherk wrote most of the current C++ async
scrolling code. We should definitely unify around that. Once we support
multiple concurrent scrollable regions instead of jus
"object" for typeof.
Andreas
On Dec 31, 2012, at 8:44 AM, Boris Zbarsky wrote:
> On 12/30/12 10:34 PM, Andreas Gal wrote:
>> In this sea of terrible choices, how about making HTMLAnchorElement an
>> actual function, but having it return "object" for typeof?
On Dec 31, 2012, at 8:08 AM, Boris Zbarsky wrote:
> On 12/30/12 2:16 PM, Robert O'Callahan wrote:
>> How bad would it be to make " instanceof > WebIDL interface>" special-cased to re-map the RHS to the appropriate
>> WebIDL interface object for ?
>
> In terms of implementation complexity on our
I think it would be extremely surprising to chrome JS authors if instanceof
worked differently in content and chrome, resulting in very hard-to-diagnose
bugs.
Andreas
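The kind of surprise being described, shown with the classic cross-global case
as an analogy (an iframe standing in for the content/chrome boundary):

  const iframe = document.createElement("iframe");
  document.body.appendChild(iframe);

  // Create an array with the iframe's Array constructor.
  const ForeignArray = iframe.contentWindow.Array;
  const arr = new ForeignArray(1, 2, 3);

  console.log(Array.isArray(arr));   // true
  console.log(arr instanceof Array); // false: another global's Array.prototype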
On Dec 31, 2012, at 12:16 AM, "Robert O'Callahan" wrote:
> On Fri, Dec 28, 2012 at 8:20 PM, Boris Zbarsky wrote:
>
>> Well,
We have test coverage using emulators, and actual hardware (panda boards) is
being set up. Reporting of the results and integration is very lacking, and
until those pieces fall into place (e.g. try integration), the developer
experience is going to suck a lot if we enforce the rule below (backo
If you disable NPAPI for B2G with a fatal configure warning, I think I could
live with that. In the alternative, we could disallow instantiating plugins in
a (packaged) app context.
Plugins delenda est.
Andreas
On Jul 23, 2012, at 9:36 PM, Jason Duell wrote:
> Do we have any way within our N