Re: Intent to ship: WebVR

2015-10-29 Thread vladimir
On Wednesday, October 28, 2015 at 11:38:26 AM UTC-4, Gervase Markham wrote:
> On 26/10/15 19:19, Kearwood "Kip" Gilbert wrote:
> > As of Oct 29, 2015 I intend to turn WebVR on by default for all
> > platforms. It has been developed behind the dom.vr.enabled preference. 
> > A compatible API has been implemented (but not yet shipped) in Chromium
> > and Blink.
> 
> At one point, integrating with available hardware required us to use
> proprietary code. Is shipping proprietary code in Firefox any part of
> this plan, or not?

No.

 - Vlad
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: WebVR

2015-10-29 Thread vladimir
On Monday, October 26, 2015 at 9:39:57 PM UTC-4, Ehsan Akhgari wrote:
> First things first, congratulations on getting this close!
> 
> What's the status of the specification?  I just had a quick skim and it 
> seems extremely light on details.

The spec is still a draft, and the API is expected to change significantly 
(specifically the fullscreen window integration is going to change).  The 
intent to ship here is a bit premature; the intent is to pref it on in nightly 
& aurora, not ship it all the way to release.

> There are quite a few details missing.  The security model is 
> essentially blank, and the descriptions in section 4 seem to be 
> high-level overviews of what the DOM interfaces do, rather than detailed 
> descriptions that can be used in order to implement the specification.

Yep.

> Also some things that I was expecting to see in the API seem to be 
> missing.  For example, what should happen if the VR device is 
> disconnected as the application is running?  It seems like right now the 
> application can't even tell that happened.

Also something that's coming in an upcoming revision of the API.

> Another question: do you know if Chrome is planning to ship this feature 
> at some point?  Has there been interoperability tests?

They are currently in the same boat as us, shipping it in dev or one-off 
builds.  We're working with them on the specification, and we're generally 
interoperable currently.

- Vlad

> On 2015-10-26 3:19 PM, Kearwood "Kip" Gilbert wrote:
> > As of Oct 29, 2015 I intend to turn WebVR on by default for all
> > platforms. It has been developed behind the dom.vr.enabled preference.
> > A compatible API has been implemented (but not yet shipped) in Chromium
> > and Blink.
> >
> > Bug to turn on by default:
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1218482
> >
> > Link to standard: https://mozvr.github.io/webvr-spec/webvr.html
> >
> >



crash stats query tools

2016-04-21 Thread vladimir
Hi all,

I wrote some tools a while back intended to make it possible to do complex 
crash stats queries locally, using downloaded crash stats data.  They can run 
queries written in a mongodb-like query language, or even queries based on 
functions (a function runs on each crash to decide whether it should be 
included or not).  You can use these queries/buckets to create custom top 
crash lists, or otherwise pull out data from crash stats.
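
For illustration, here's a minimal sketch of what a mongodb-style query over downloaded crash records could look like.  The crash fields, the `matches` helper, and the operator set are assumptions for this example, not crystalball's actual interface:

```javascript
// Hypothetical crash records, shaped loosely like crash stats data.
const crashes = [
  { signature: "OOM | large | NS_ABORT_OOM", platform: "Windows NT", uptime: 12 },
  { signature: "js::gc::GCRuntime::collect", platform: "Linux", uptime: 90210 },
];

// matches() is an illustrative stand-in for a mongodb-like matcher:
// a query maps field names to exact values, operator objects, or functions.
function matches(crash, query) {
  return Object.entries(query).every(([field, cond]) => {
    if (typeof cond === "function") return cond(crash[field]);  // function-based query
    if (cond !== null && typeof cond === "object") {
      if ("$lt" in cond) return crash[field] < cond.$lt;
      if ("$regex" in cond) return cond.$regex.test(crash[field]);
    }
    return crash[field] === cond;                               // exact match
  });
}

// Bucket: short-uptime OOM crashes on Windows.
const bucket = crashes.filter(c => matches(c, {
  platform: "Windows NT",
  uptime: { $lt: 60 },
  signature: { $regex: /^OOM/ },
}));
console.log(bucket.length); // → 1
```

Function-based queries would just be `{ uptime: v => v > 1000 }`-style predicates run per crash, as described above.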

They're node.js tools; you can find the repository and some instructions here: 
https://github.com/vvuk/crystalball  You'll need an API key from crash stats, 
and be aware that the initial data download is expensive on the server; you can 
copy the cache files to multiple machines instead of re-downloading (they're 
static; each file contains all the data for a given day).

Let me know if anyone finds this useful, or if there are features you'd like to 
see added (pull requests accepted as well).

- Vlad


Re: Request Feedback - Submitting Canvas Frames, WebVR Compositor

2016-05-19 Thread Vladimir Vukicevic
This looks good to me in general -- for Gecko, this combined with offscreen
canvas and canvas/webgl in workers is going to be the best way to get
performant WebGL-based VR.  This is likely going to be the better way to
solve the custom-vsync for VR issue; while the large patch queue that I
have does work, it adds significant complexity to Gecko's vsync, and is
unlikely to get used by any other system.  You'll want to make sure that
the gfx folks weigh in on this as well.

Some comments -- if you want direct front-buffer rendering from canvas,
this will be tricky; you'll have to add support to canvas itself, because
right now the rendering context always allocates its own buffers.  That is,
you won't be able to render directly to the texture that goes in to an
Oculus compositor layer textureset, for example, even though that's what
you really want to do.  But I'd get the core working first and then work on
eliminating that copy and sharing the textureset surfaces with webgl canvas.

Same thing with support for Oculus Home as well as allowing for HTML
layers; those should probably be later steps (HTML/2D layers will need to
be rendered on the main thread and submitted from there, so timing them
between worker-offscreen-canvas layers and the main thread could be tricky).

- Vlad

On Tue, May 10, 2016 at 6:18 PM Kearwood "Kip" Gilbert wrote:

> Hello All,
>
> In order to support features in the WebVR 1.0 API (
> https://mozvr.com/webvr-spec/) and to improve performance for WebVR, I
> would like to implement an optimized path for submitting Canvas and
> OffscreenCanvas frames to VR headsets.  The WebVR 1.0 API introduces "VR
> Layers", explicit frame submission, and presenting different content to the
> head mounted display independently of the output the regular 2d monitor.  I
> would like some feedback on a proposed “VR Compositor” concept that would
> enable this.
>
> *What would be common between the “VR Compositor” and the regular “2d
> Compositor”?*
> - TextureHost and TextureChild would be used to transfer texture data
> across processes.
> - When content processes crash, the VR Compositor would continue to run.
> - There is a parallel between regular layers created by layout and “VR
> Layers”.
> - There would be one VR Compositor serving multiple content processes.
> - The VR Compositor would not allow unprivileged content to read back
> frames submitted by other content and chrome ux.
> - Both compositors would exist in the “Compositor” process, but in
> different threads.
>
> *What is different about the “VR Compositor”?*
> - The VR Compositor would extend the PVRManager protocol to include VR
> Layer updates.
> - The VR Compositor will not obscure the main 2d output window or require
> entering full screen to activate a VR Headset.
> - In most cases, there will be no visible window created by the VR
> Compositor as the VR frames are presented using VR specific API’s that
> bypass the OS-level window manager.
> - The VR Compositor will not run synchronously with a refresh driver as it
> can simultaneously present content with mixed frame rates.
> - Texture updates submitted for VR Layers would be rendered as soon as
> possible, often asynchronously with other VR Layer updates.
> - VR Layer textures will be pushed from both Canvas elements and
> OffscreenCanvas objects, enabling WebVR in WebWorkers.
> - The VR compositor will guarantee perfect frame uniformity, with each
> frame associated with a VR headset pose frame explicitly passed into
> VRDisplay.SubmitFrame.  No frames will be dropped, even if multiple frames
> are sent within a single hardware vsync.
> - For most devices (i.e. Oculus and HTC Vive), the VR Compositor will
> perform front-buffer rendering.
> - VR Layers asynchronously move with the user’s HMD pose between VR Layer
> texture updates if given geometry and a position within space.
> - The VR Compositor implements latency hiding effects such as Asynchronous
> Time Warp and Pose Prediction.
> - The VR Compositor will be as minimal as possible.  In most cases, the VR
> Compositor will offload the actual compositing to the VR device runtimes.
>  (Both Oculus and HTC Vive include a VR compositor)
> - When the VR device runtime does not supply a VR Compositor, we will
> emulate this functionality.  (i.e. for Cardboard VR)
> - All VR hardware API calls will be made exclusively from the VR
> Compositor’s thread.
> - The VR Compositor will implement focus handling, window management, and
> other functionality required for Firefox to be launched within environments
> such as Oculus Home and SteamVR.
> - To support backwards compatibility and fall-back views of 2d web content
> within the VR headset, the VR compositor could provide an nsWidget /
> nsWindow interface to the 2d compositor.  The 2d compositor output would be
> projected onto the geometry of a VR Layer and updated asynchronously with
> HMD poses.
> - The VR Compositor will not allocate unnecessary resources until either
> Web

Re: UNIFIED_SOURCES breaks breakpoints in LLDB (Was: Unified builds)

2013-11-20 Thread Vladimir Vukicevic
I just did a unified and non-unified build on my Windows desktop -- non-SSD.  
VS2012, using mozmake.  Full clobber. (mozmake -s -j8)

Unified: 20 min
Non-Unified: 36 min

This is huge!  I was curious about the cost for incremental builds...

touch gfx/2d/Factory.cpp (part of a unified file), rebuild using binaries 
target:

Unified: 53s
Non-Unified: 58s

touch gfx/thebes/gfxPlatform.cpp (note: this dir/file is not unified), rebuild 
using binaries target:

Unified: 56s
Non-Unified: 56s

(I need to rerun this on my computer with an SSD; I had a single-file binaries 
rebuild down to 10s there.)

... and was very surprised to see no real difference, often non-unified taking 
slightly longer.  So.  Big win, thanks guys!

- Vlad


Oculus VR support & somewhat-non-free code in the tree

2014-04-14 Thread Vladimir Vukicevic
Hey all,

I have a prototype of VR display and sensor integration with the web, along 
with an implementation for the Oculus VR.  Despite there really being only one 
vendor right now, there is a lot of interest in VR.  I'd like to add the web 
and Firefox to that flurry of activity... especially given our successes and 
leadership position on games and asm.js.

I'd like to get this checked in so that we can either have it enabled by 
default in nightlies (and nightlies only), or at least allow it enabled via a 
pref.  However, there's one issue -- the LibOVR library has a 
not-fully-free-software license [1].  It's compatible with our licenses, but it 
is not fully "free".

There are a couple of paths forward, many of which can take place 
simultaneously.  I'd like to suggest that we do all of the following:

1. Check in the LibOVR sources as-is, in other-licenses/oculus.  Add a 
configure flag, maybe --disable-non-free, that disables building it.  Build and 
ship it as normal in our builds.

2. Contact Oculus with our concerns about the license, and see if they would be 
willing to relicense to something more standard.  The MPL might actually fit 
their needs pretty well, though we are effectively asking them to relicense 
their SDK code.  There is no specific driver for the OVR; it shows up as a USB 
HID device, and LibOVR knows how to interpret the data stream coming from it.  
This gets them easy compat with all operating systems, and the support I'd add 
would be for Windows, Mac, and Linux.

3. Start investigating "Open VR", with the intent being to replace the 
Oculus-specific library with a more standard one before we standardize and ship 
the API more broadly than to nightly users.

The goal would be to remove LibOVR before we ship (or keep it in assuming it 
gets relicensed, if appropriate), and replace it with a standard "Open VR" 
library.

There are a few other options that are worse:

1. We could ship the VR glue in nightly, but with the Oculus support packaged 
as an addon.  This is doable, but it requires significant rework: changing the 
interfaces to use XPCOM, doing category-based registration of the Oculus 
provider, building and packaging the addon, etc.  It also requires a 
separate install step for developers/users.

2. We could ship the VR integration as a plugin.  vr.js does this already.  But 
we are trying to move away from plugins, and there's no reason why the Oculus 
can't function in places where plugins are nonexistent, such as mobile.  
Delivering this to developers via a plugin would be admitting that we can't 
actually deliver innovative features without the plugin API, which is untrue 
and pretty silly.

3. Require developers to install the SDK themselves, and deploy it to all of 
the build machines so that we can build it.  This is IMO a very non-pragmatic 
option; it requires a ton more fragile work (everyone needs to get and keep the 
SDK updated; releng needs to do the same on build machines) and sacrifices 
developer engagement (additional SDKs suck -- see the DirectX SDK that we're 
working on eliminating the need for) in order to try to preserve some form of 
purity.

4. We do nothing.  This option won't happen: I'm tired of not having Gecko and 
Firefox at the forefront of web technology in all aspects.

Any objections to the above, or alternative suggestions?  This is a departure 
in our current license policy, but not a huge one.  There were some concerns 
expressed about that, but I'm hoping that we can take a pragmatic path here.

   - Vlad

[1] https://developer.oculusvr.com/license


Re: Oculus VR support & somewhat-non-free code in the tree

2014-04-14 Thread Vladimir Vukicevic
On Monday, April 14, 2014 7:29:43 PM UTC-4, Ralph Giles wrote:
> > The goal would be to remove LibOVR before we ship (or keep it in assuming 
> > it gets relicensed, if appropriate), and replace it with a standard "Open 
> > VR" library.
> 
> Can you dlopen the sdk, so it doesn't have to be in-tree? That still
> leaves the problem of how to get it on a user's system, but perhaps an
> add-on can do that part while the interface code is in-tree.

Unfortunately, no -- the interface is all C++, and the headers are licensed 
under the same license.  A C layer could be written, but then we're back to 
having to ship it separately via addon or plugin anyway.

> Finally, did you see Gerv's post at
> 
> http://blog.gerv.net/2014/03/mozilla-and-proprietary-software/

Yes -- perhaps unsurprisingly, I disagree with Gerv on some of the particulars 
here.  Gerv's opinions are his own, and are not official Mozilla policy.  That 
post I'm sure came out of a discussion regarding this very issue here.  In 
particular, my stance is that we build open source software because we believe 
there is value in that, and that it is the best way to build innovative, 
secure, and meaningful software.  We don't build open source software for the 
sake of building open source.

- Vlad


Re: Oculus VR support & somewhat-non-free code in the tree

2014-04-15 Thread Vladimir Vukicevic
On Tuesday, April 15, 2014 5:57:13 PM UTC-4, Robert O'Callahan wrote:
> On Wed, Apr 16, 2014 at 3:14 AM, Benoit Jacob wrote:
> 
> > I'm asking because the Web has so far mostly been a common denominator,
> > conservative platform. For example, WebGL stays at a distance behind the
> > forefront of OpenGL innovation. I thought of that as being intentional.
> 
> That is not intentional. There are historical and pragmatic reasons why the
> Web operates well in "fast follow" mode, but there's no reason why we can't
> lead as well. If the Web is going to be a strong platform it can't always
> be the last to get shiny things. And if Firefox is going to be strong we
> need to lead on some shiny things.
> 
> So we need to solve Vlad's problem.

It's very much a question of pragmatism, and where we draw the line.  There are 
many options available that avoid having to consider almost-open or 
almost-free licenses, or difficulties such as not being able to accept 
contributions for this one chunk of code.  But they all weaken the end 
result: developers, or worse, users have to go through extra steps and 
barriers to access the functionality.  I think that putting up those 
barriers dogmatically doesn't really serve our goals well; instead, we need to 
find a way to be fast and scrappy while still staying within the spirit of our 
mission.

Note that for purposes of this discussion, "VR support" is minimal: some 
properties to read to get some info about the output device (resolution, eye 
distance, distortion characteristics, etc.) and some more to get the orientation 
of the device.  This is not a highly involved API, nor is it specific to Oculus; 
it's more of a first pass based on hardware that's easily available.

I also briefly suggested an entirely separate non-free repository -- you can 
clone non-free into the top level mozilla-central directory, or create it in 
other ways, and configure can figure things out based on what's present or not. 
That's an option, and it might be a way to avoid some of these issues.

- Vlad


Re: Standards side of VR

2014-04-16 Thread Vladimir Vukicevic
Yep, my plan was to not let this get beyond Nightly, maybe Aurora, but
not further until the functionality and standards were firmer.

- Vlad


On Wed, Apr 16, 2014 at 12:08 PM, Anne van Kesteren wrote:

> On Wed, Apr 16, 2014 at 4:59 PM, Ehsan Akhgari 
> wrote:
> > I think a great way to deal with that is to keep features on the beta
> > channel and continue to make breaking changes to them before we feel
> ready
> > to ship them.  The reality is that once we ship an API our ability to
> make
> > any backwards incompatible changes to it will be severely diminished if
> > websites that our users depend on break because of that (albeit in the
> case
> > of a very new technology such as VR which will not be useful on all Web
> > applications, the trade-off might be different.)
>
> Yeah, enabling on beta combined with talks with our competitors about
> shipping might be a good way to go.
>
>
> --
> http://annevankesteren.nl/
>


Re: Oculus VR support & somewhat-non-free code in the tree

2014-04-16 Thread Vladimir Vukicevic
On Wednesday, April 16, 2014 9:00:40 PM UTC-4, Eric Rahm wrote:
> So who actually needs to talk to Oculus? I can try to reach out to some 
> folks I used to work with who are there now and see if they're 
> interested in making license modifications.

Already in the works. :)

The good news is that with the preview release of the latest SDK, they added a 
C API that does everything that we need.  So this might become a moot point; we 
can dlopen/dlsym our way to victory, and I'm already reworking the code that I 
have in terms of that.

We'll have to build and package the DLLs for all the platforms for nightly 
builds, but that's not a huge deal.

- Vlad


Re: Oculus VR support & somewhat-non-free code in the tree

2014-04-16 Thread Vladimir Vukicevic
On Tuesday, April 15, 2014 8:17:44 PM UTC-4, Rob Manson wrote:
> We've also put together a plugin for our open source awe.js framework 
> that uses getUserMedia() to turn the Rift into a video-see-thru AR 
> device too. And for the 6dof tracking we just use the open source 
> oculus-bridge app that makes this data available via a WebSocket which 
> is enough for this type of proof of concept.
> 
> Of course if that just turned up as the DeviceOrientation API when you 
> plugged in the Rift then that would be even better.

This is actually not a good API for this; as you know, latency is death in VR.  
For this to work well, the most up-to-date orientation information needs to be 
available right when a frame is being rendered, and ideally needs to be 
predicted for the time that the frame will be displayed on-screen.

Currently the prototype API I have allows for querying VR devices, and then 
returns a bag of HMDs and various positional/orientation sensors that might be 
present (looking towards a future with sixense and similar support; Leap might 
also be interesting).  Once those device objects are queried, methods on them 
return the current, immediate state of the position/orientation, and optionally 
take a time delta for prediction.

Conveniently, requestAnimationFrame is passed in a frame time which at some 
point in the near future (!) will become the actual scheduled frame time for 
that frame, so we have a nice system whereby we can predict and render things 
properly.
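
As a rough sketch of that polling-and-prediction pattern (the sensor object, getState(), and the prediction math here are all illustrative assumptions, not the prototype's actual interface):

```javascript
// Fake orientation sensor: getState() returns the freshest reading,
// optionally predicted forward by a time delta (in seconds) using the
// current angular velocity -- the simplest possible prediction model.
function makeFakeSensor() {
  const yaw = 0.0;          // current yaw, in radians
  const yawVelocity = 0.5;  // radians/second
  return {
    getState(predictSeconds = 0) {
      return { yaw: yaw + yawVelocity * predictSeconds };
    },
  };
}

const sensor = makeFakeSensor();

// Render loop: sample the pose immediately before drawing, predicted to
// the scheduled display time, instead of using a stale event-delivered value.
function renderFrame(nowMs, scheduledFrameTimeMs) {
  const state = sensor.getState((scheduledFrameTimeMs - nowMs) / 1000);
  // ...build the view matrix from state.yaw and draw...
  return state.yaw;
}

// Predicting 20 ms ahead at 0.5 rad/s yields a yaw of 0.01 rad.
console.log(renderFrame(0, 20)); // → 0.01
```

In a real page, `scheduledFrameTimeMs` would come from the requestAnimationFrame callback's timestamp, as described above.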

Very cool to hear about awe.js and similar.  Will definitely take a look.

> On a slightly related note we've also implemented Kinect support that 
> exposes the OpenNI Skeleton data via a WebSocket. This allows you to use 
> the Kinect to project your body into a WebGL scene. This is great for VR 
> and is definitely a new area where no existing open web standard is 
> already working.

Also interesting -- Kinect was brought up earlier as another device to explore, 
and I think there's value in figuring out how to add it to this framework.

- Vlad


Re: Oculus VR support & somewhat-non-free code in the tree

2014-04-17 Thread Vladimir Vukicevic
On Thursday, April 17, 2014 10:18:01 AM UTC-4, Gervase Markham wrote:
> > The good news is that with the preview release of the latest SDK,
> > they added a C API that does everything that we need.  So this might
> > become a moot point; we can dlopen/dlsym our way to victory, and I'm
> > already reworking the code that I have in terms of that.
> > 
> > We'll have to build and package the DLLs for all the platforms for
> > nightly builds, but that's not a huge deal.
> 
> At the risk of sounding like a broken record: shipping not-open-source
> code in Firefox nightly builds is a big deal. As Mike says, can't we
> just go for an addon at this point, while we work on the open source
> replacement code? How many Rifts are there in the world right now anyway?

An addon is not sufficient to get the strategic and mindshare benefits of VR 
support.

There are around 60,000 Rift DK1 units in the world today, with 25,000 of the 
DK2 units having been preordered as of yesterday.  So around 85,000 total in a 
month or two. The units cost $350 each, well within the reach of many 
bleeding-edge consumers, despite it not being a commercial unit yet -- they are 
not a $1500 piece of hardware as you mentioned before.  These are virtually all 
in the hands of tech leaders and innovators; and I mean leaders in the sense of 
"people in their family who others will go to for help with their computer".

These 85,000 people are the very people who likely used to be Firefox fans and 
are now firmly in the Chrome camp.  I want to win them back.

 - Vlad


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Vladimir Vukicevic
On Tuesday, May 6, 2014 7:30:42 PM UTC-4, Ehsan Akhgari wrote:
> On 2014-05-06, 6:41 PM, Jonas Sicking wrote:
> 
> >> That's why if we just expose different features on the object returned by
> >> getContext("webgl") depending on client hardware details, we will create a
> >> compatibility mess, unlike other Web APIs.
> 
> > The main problem that you have is that you haven't designed the API
> > to allow authors to test if the API is available.
> > If you had, this discussion would be moot.
> >
> > But since you haven't, you're now stuck having to find some other way
> > of detecting if these features are implemented or not.

This was explicitly not designed, because it is not the way that WebGL works.  
The API that allows authors to test whether something is available is 
getExtension()/getSupportedExtensions().  A feature is either part of the core, 
in which case it's always available, or you call getExtension().  The 
alternative is something like D3D's "caps bits", which are basically 
equivalent, just less flexible.

The API calls we're talking about here aren't of the form "is function 
frobFoo() available"; they're more like "is it valid to pass FROB to function 
Foo if BLARG is enabled and there is a FOO texture bound".  If you have the 
EXT_frob_blarg_foo extension then it is (assuming that extension is enabled).  
Otherwise that state is an error.

FWIW, the reason you have to explicitly enable extensions is that we didn't 
want content that "accidentally works".  In contrast with regular OpenGL, where 
every extension is always enabled and the query just tells you what is 
available, WebGL requires explicit author action to enable an extension.  This 
has been a big boon to WebGL compatibility.
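
To make that opt-in model concrete, here's a sketch of the pattern.  The stub context object below stands in for a real WebGLRenderingContext so the snippet runs outside a browser, and its supported-extension list is made up; getExtension()/getSupportedExtensions() are the real WebGL entry points:

```javascript
// Stub standing in for canvas.getContext("webgl").
const gl = {
  _supported: ["OES_texture_float", "ANGLE_instanced_arrays"],
  getSupportedExtensions() { return this._supported.slice(); },
  // In real WebGL, getExtension() both enables the extension and returns
  // its object (or null if unavailable); until it's called, the
  // extension's features are errors -- nothing works "accidentally".
  getExtension(name) {
    return this._supported.includes(name) ? { name } : null;
  },
};

// Content must opt in explicitly before using floating-point textures.
const floatTex = gl.getExtension("OES_texture_float");
if (floatTex) {
  // Safe to create FLOAT textures from here on.
}

console.log(floatTex !== null);                            // → true
console.log(gl.getExtension("EXT_disjoint_timer_query"));  // → null
```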

> Yeah, I think this is the core of the issue.  Can we just make sure that 
> WebGL2 features which are not in WebGL1 are all designed so that they 
> are individually feature detectible?  And if we do that, would there be 
> any other reason why we would want to define a new version of the 
> context object?

What's the value of this?  The current set of WebGL extensions to WebGL 1 was 
carefully chosen such that the baseline WebGL capability (OpenGL ES 2.0) was 
present on all devices we cared about.  The extensions that are defined for 
WebGL 1 are present on either all or most devices as well.

WebGL 2 features (ES 3) are generally *not* available on many of these devices, 
except for a feature here or there.  However, if the device supports ES 3.0 
(as basically all even remotely recent desktop GPUs do), then *all* of ES 
3.0/WebGL 2 is available.  So, in practice, either all of WebGL 2 will be 
available, or basically the subset that is WebGL 1 + WebGL 1 extensions will be.

Defining every feature of WebGL 2 as an extension would result in a huge amount 
of busy work -- because then enabling those features is optional.  Much of this 
busy work is even more painful, because it might require explicitly not 
supporting certain GLSL language features (e.g. do you still support 32-bit and 
16-bit integers if someone doesn't enable the extension that allows them? 
what's the value in spending time writing the different paths in the shader 
validator?).

There is no value in not defining "webgl2" (and later "webgl3", "webgl4", 
etc.).  Existing content can continue to use "webgl", as they wouldn't use the 
new functionality anyway.  A "webgl2" context can be used just like a "webgl" 
context can be, it just has additional pre-enabled functionality.  Crucially, 
it doesn't give you the choice of enabling those things piecemeal or not.

WebGL is already following the OpenGL path.  Trying to make it more "webby" by 
mushing the APIs together isn't doing the web a favor, since the API is already 
OpenGL-like; isn't doing developers a favor, since they'd then have to carry 
around a pile of getExtension() code; and is definitely not doing us a favor, 
because we'd have to support the explosion of all combinations of extensions.

- Vlad


Re: Intent to implement: WebGL 2.0

2014-05-07 Thread Vladimir Vukicevic
On Wednesday, May 7, 2014 9:20:18 AM UTC-4, Ehsan Akhgari wrote:
> On 2014-05-07, 6:15 AM, Anne van Kesteren wrote:
> 
> >> WebGL is already following the OpenGL path.  Trying to make it more 
> >> "webby" by trying to mush the APIs together isn't doing the web a favor 
> >> since the API is already more OpenGL-like, isn't doing developers a favor 
> >> since they now have to have this pile of getExtension() code around, and 
> >> is definitely not doing us a favor, because we have to support the 
> >> explosion of all combo of extensions.
> 
> >
> 
> > Again, this seems like a false comparison. No need for an explosion of
> > extensions. Furthermore, what you're advocating will give us an
> > enormous API surface area long term, which we'll have to maintain and
> > test, even though most new content will not use it (they'll get the
> > newer classes).
> 
> So earlier I suggested feature detecting based on _a_ new WebGL2 
> feature, to which Benoit replied that is a bad idea since your code 
> would be testing something and assuming unrelated support of other 
> features based on that (which _is_ bad practice, but I thought it was a 
> goal of the all-or-nothing approach of the WebGL2 API.)
> 
> So in response, I suggested making individual features feature 
> detectible (which is totally doable in your FOO/FROB/BLARG example BTW!) 
> to which you replied saying that is a bad idea since all of the features 
> are either supported or not (if I understand your response correctly.)
> 
> 
> I think one possible middle ground would be to define the API as follows:
> 
> interface WebGL2RenderingContext : WebGLRenderingContext {
>// new methods go here
> };
> 
> partial interface HTMLCanvasElement {
>(CanvasRenderingContext2D or
> WebGLRenderingContext or
> WebGL2RenderingContext) getContext(DOMString id, optional any 
> options = null);
> };

Totally with you.  That's how the current implementation works.

> And keep the string "webgl" as the name of the context.  This way, Web 
> content can do the following checks:
> 
> if (window.WebGL2RenderingContext) {
>// this browser supports the WebGL2 APIs
> }
> 
> var ctx = canvas.getContext("webgl");
> 
> if (ctx instanceof WebGL2RenderingContext) {
>// I actually got a WebGL2 context, my hardware also supports it!
> }

Ok, now you lost me for a bit.  Why is this better than:

var ctx = canvas.getContext("webgl2");
if (!ctx) {
  ctx = canvas.getContext("webgl");
  useWebGL1Renderer = true;
}

> This will preserve the all-or-nothing-ness of the WebGL2 feature set, it 
> will not require maintaining different code paths depending on which set 
> of individual extensions are enabled, is completely forward/backward 
> compatible, and at least limits the API surface that we will need to 
> support forever.

WebGL 2 is a superset of WebGL 1.  So the API surface we need to support 
doesn't get any more complicated, except for explicit "is webgl2 in use" 
checks.  The actual C++ implementation has explicit inheritance even.

Doing getExtension("webgl2") seems weird, and also requires extra 
implementation work since we need to create the right kind of OpenGL context 
under the hood.  Currently we do that when we call getContext.  If we had to 
wait until getExtension("webgl2"), then we'd need to recreate the context.  To 
do that, we'd have to either define that you're not allowed to call any context 
API functions before calling getExtension("webgl2"), making that the only 
"extension" to have that limitation, or we'd need to trigger a context lost and 
require all content to wait for a content restored event in order to continue 
rendering.  This is added complexity both for developers and implementations 
for zero benefit.

In the future, there might be a WebGL 3 that's a complete API departure from 
WebGL 1/2, which would require a new context name anyway.

WebGL made an explicit decision to do things this way -- the one thing that I 
would go back and change is getExtension() returning an extension object, 
instead of acting as an enableExtension().  I don't remember *why* we did this, 
but I seem to recall some decent arguments for it.

There really isn't a good solution to this problem in general.  The Web has 
tried multiple things, including the WebGL approach.  All of them have a set of 
upsides and downsides; there isn't a perfect option.

- Vlad
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-07 Thread Vladimir Vukicevic
On Wednesday, May 7, 2014 9:30:27 AM UTC-4, Henri Sivonen wrote:

> 
> In general, I'm worried about groups that are rather isolated and that
> have non-Web background members making decisions that go against the
> Web wisdom gained from painful experience.

The WebGL group wasn't/isn't isolated from the general web.  I chaired the 
group when the original decisions were made, and the other members of the 
group were all core developers of WebKit, Chrome, and Opera.  That's still the 
case.

> WebGL already made the Web little-endian forever.

And good thing it did!  The "Web wisdom" also had a bad habit of making 
non-pragmatic decisions in the past in order to appease everyone with a seat on 
whatever committee was making whatever recommendation to some working group.  
Thankfully, that's gone now, but it's not like the Web is a bastion of 
fantastic API design.  It's getting better, though.

- Vlad


Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Vladimir Vukicevic
On Thursday, May 8, 2014 5:25:49 AM UTC-4, Henri Sivonen wrote:
> Making the Web little-endian may indeed have been the right thing.
> Still, at least from the outside, it looks like the WebGL group didn't
> make an intentional wise decision to make the Web little-endian but
> instead made a naive decision that coupled with the general Web
> developer behavior and the dominance of little-endian hardware
> resulted in the Web becoming little-endian.
> 
> http://www.khronos.org/registry/typedarray/specs/latest/#2.1 still
> says "The typed array view types operate with the endianness of the
> host computer. " instead of saying "The typed array view types operate
> in the little-endian byte order. Don't build big endian systems
> anymore."
> 
> *Maybe* that's cunning politics to get a deliberate
> little-endianization pass without more objection, but from the spec
> and a glance at the list archives it sure looks like the WebGL group
> thought that it's reasonable to let Web developers deal with the API
> behavior differing on big-endian and little-endian computers, which
> isn't at all a reasonable expectation given everything we know about
> Web developers.

This is a digression, and I'm happy to discuss the endianness of typed 
arrays/webgl in a separate thread, but this decision was made because it made 
the most sense, both from a technical perspective (even for big endian 
machines!) and from an implementation perspective.

You seem to have a really low opinion of Web developers.  That's unfortunate, 
but it's your opinion.  It's not one that I share.  The Web is a complex 
platform.  It lets you do simple things simply, and it makes complex/difficult 
things possible.  You need to have some development skill to do the 
complex/difficult things.  I'd rather have that than make those things 
impossible.

- Vlad


Re: OMTC on Windows

2014-05-28 Thread Vladimir Vukicevic
(Note: I have not looked into the details of CART/TART and their interaction 
with OMTC)

It's entirely possible that (b) is true *now* -- the test may have been good 
and proper for the previous environment, but the environment's 
characteristics have since changed such that the test needs tweaks.  Empirically, I 
have not seen any regressions on any of my Windows machines (which is basically 
all of them); things like tab animations and the like have started feeling 
smoother even after a long-running browser session with many tabs.  I realize 
this is not the same as cold hard numbers, but it does suggest to me that we 
need to take another look at the tests now.

- Vlad

- Original Message -
> From: "Gijs Kruitbosch"
> To: "Bas Schouten", "Gavin Sharp"
> Cc: dev-tech-...@lists.mozilla.org, "mozilla.dev.platform group", "release-drivers"
>
> Sent: Thursday, May 22, 2014 4:46:29 AM
> Subject: Re: OMTC on Windows
> 
> Looking on from m.d.tree-management, on Fx-Team, the merge from this
> change caused a >40% CART regression, too, which wasn't listed in the
> original email. Was this unforeseen, and if not, why was this
> considered acceptable?
> 
> As gavin noted, considering how hard we fought for 2% improvements (one
> of the Australis folks said yesterday "1% was like Christmas!") despite
> our reasons of why things were really OK because of some of the same
> reasons you gave (e.g. running in ASAP mode isn't realistic, "TART is
> complicated", ...), this hurts - it makes it seem like (a) our
> (sometimes extremely hacky) work was done for no good reason, or (b) the
> test is fundamentally flawed and we're better off without it, or (c)
> when the gfx team decides it's OK to regress it, it's fine, but not when
> it happens to other people, quite irrespective of reasons given.
> 
> All/any of those being true would give me the sad feelings. Certainly it
> feels to me like (b) is true if this is really meant to be a net
> perceived improvement despite causing a 40% performance regression in
> our automated tests.
> 
> ~ Gijs
> 
> On 18/05/2014 19:47, Bas Schouten wrote:
> > Hi Gavin,
> >
> > There have been several e-mails on different lists, and some communication
> > on some bugs. Sadly the story is at this point not anywhere in a condensed
> > form, but I will try to highlight a couple of core points, some of these
> > will be updated further as the investigation continues. The official bug
> > is bug 946567 but the numbers and the discussion there are far outdated
> > (there's no 400% regression ;)):
> >
> > - What OMTC does to tart scores differs wildly per machine, on some
> > machines we saw up to 10% improvements, on others up to 20% regressions.
> > There also seems to be somewhat more of a regression on Win7 than there is
> > on Win8. What the average is for our users is very hard to say, frankly I
> > have no idea.
> > - One core cause of the regression is that we're now dealing with two D3D
> > devices when using Direct2D since we're doing D2D drawing on one thread,
> > and D3D11 composition on the other. This means we have DXGI locking
> > overhead to synchronize the two. This is unavoidable.
> > - Another cause is that we're now having two surfaces in order to do double
> > buffering, this means we need to initialize more resources when new layers
> > come into play. This again, is unavoidable.
> > - Yet another cause is that for some tests we composite 'ASAP' to get
> interesting numbers, but this causes some contention scenarios which are
> > less likely to occur in real-life usage. Since the double buffer might
> > copy the area validated in the last frame from the front buffer to the
> > backbuffer in order to prevent having to redraw much more. If the
> > compositor is compositing all the time this can block the main thread's
> > rasterization. I have some ideas on how to improve this, but I don't know
> > how much they'll help TART, in any case, some cost here will be
> > unavoidable as a natural additional consequence of double buffering.
> > - The TART number story is complicated, sometimes it's hard to know what
> > exactly they do, and don't measure (which might be different with and
> > without OMTC) and how that affects practical performance. I've been told
> > this by Avi and it matches my practical experience with the numbers. I
> > don't know the exact reasons and Avi is probably a better person to talk
> > about this than I am :-).
> >
> > These are the core reasons that we were able to identify from profiling.
> > Other than that the things I said in my previous e-mail still apply. We
> > believe we're offering significant UX improvements with async video and
> > are enabling more significant improvements in the future. Once we've fixed
> > the obvious problems we will continue to see if there's something that can
> > be done, either through tiling or through other improvements, particularly
> > in the last point I mentioned there might be some, not 't

PSA: Windows builds will default to XPCOM_DEBUG_BREAK=warn

2014-08-12 Thread Vladimir Vukicevic
I'm about to land bug 1046222, which changes the default behaviour of 
XPCOM_DEBUG_BREAK on Windows to "warn", instead of "trap".  In debug builds 
run under a debugger, "trap" causes the debugger to stop, as if it had hit a 
breakpoint, whenever an assertion fires.

This would be useful behaviour if we didn't still have a whole ton of 
assertions, but as it is, it's an unnecessary papercut for Windows developers 
and for people doing testing/debugging on Windows -- some of whom may not know 
that they should set XPCOM_DEBUG_BREAK in the debugger, and are instead 
clicking "continue" through tons of assertions until they get to what they 
care about!

The change will bring Windows assertion behaviour in line with other platforms.

- Vlad


Re: Getting rid of already_AddRefed?

2014-08-12 Thread Vladimir Vukicevic
On Tuesday, August 12, 2014 11:22:05 AM UTC-4, Aryeh Gregor wrote:
> For refcounted types, isn't a raw pointer in a local variable a red
> flag to reviewers to begin with?  If GetT() returns a raw pointer
> today, like nsINode::GetFirstChild() or something, storing the result
> in a raw pointer is a potential use-after-free, and that definitely
> has happened already.  Reviewers need to make sure that refcounted
> types aren't ever kept in raw pointers in local variables, unless
> perhaps it's very clear from the code that nothing can possibly call
> Release() (although it still makes me nervous).

Putting the burden on reviewers when something can be automatically checked 
doesn't seem like a good idea -- it requires reviewers to know that GetT() 
*does* return a refcounted type, for example.  As dbaron pointed out, there are 
cases where we do actually return and keep things around as bare pointers.

It's unfortunate that we can't create a nsCOMPtr<> that will disallow 
assignment to a bare pointer without an explicit .get(), but will still allow 
conversion to a bare pointer for arg passing purposes.  (Or can we? I admit my 
C++-fu is not that strong in this area...)  It would definitely be nice to get 
rid of already_AddRefed<> (not least because the spelling of "Refed" always 
grates when I see it :).

- Vlad


landing soon: core APIs for VR

2014-11-19 Thread Vladimir Vukicevic
Hi all,

We've had a lot of excitement around our VR efforts and the MozVR site, and we 
want to capitalize on this momentum.  Very soon, I'll be landing the early 
support code for VR in mozilla-central, pref'd off by default.  This includes 
adding the core VR interfaces, display item and layers functionality for VR 
rendering, as well as supporting code such as extensions to the Fullscreen API.

Core VRDevices API:
https://bugzilla.mozilla.org/show_bug.cgi?id=1036604

Layers/gfx pieces:
https://bugzilla.mozilla.org/show_bug.cgi?id=1036602

Fullscreen API extensions:
https://bugzilla.mozilla.org/show_bug.cgi?id=1036606
https://bugzilla.mozilla.org/show_bug.cgi?id=1036597

This code is sufficient to perform WebGL-based VR rendering with an output 
going to an Oculus Rift.  None of the CSS-based VR code is ready to land yet.  
Additionally, this won't work out of the box, even if the pref is flipped, 
until we ship the Oculus runtime pieces (but initially instructions will be 
available on where to get the relevant DLLs and where to put them).

The following things need to take place to pref this code on by default for 
nightly/aurora builds (not for release):

- Figure out how to ship/package/download/etc. the Oculus runtime pieces.
- Add support for Linux
- Add basic tests for VRDevice APIs

Beyond that there is a lot of work left to be done, both around supporting 
additional headset and input devices (Cardboard, etc.) as well as platforms 
(Android, FxOS), but that can be done directly in the tree instead of needing 
to maintain separate one-off builds for this work.

- Vlad


Re: Proposal to remove the function timer code

2012-09-19 Thread Vladimir Vukicevic

On 9/19/2012 12:04 AM, Ehsan Akhgari wrote:

> A while ago (I think more than a couple of years ago now), Vlad
> implemented FunctionTimer which is a facility to time how much each
> function exactly takes to run.  Then, I went ahead and instrumented a
> whole bunch of code which was triggered throughout our startup path to
> get a sense of what things are expensive there and what we can do about
> that.  That code is hidden behind the NS_FUNCTION_TIMER build-time flag,
> turned on by passing --enable-functiontimer.
> 
> This dates back to the infancy of our profiling tools, and we have a
> much better built-in profiler these days which is usable without needing
> a build-time option.  I don't even have the scripts I used to parse the
> output and the crude UI I used to view the log around any more.  I've
> stopped building with --enable-functiontimer for a long time now, and I
> won't be surprised if that flag would even break the builds these days.
> 
> So, I'd like to propose that we should remove all of that code.  Is
> anybody using function timers these days?  (I'll take silence as
> consent! :-)


Yep, sounds fine to me -- though we don't have equivalent functionality 
right now (e.g. we don't quite have the ability to time/measure 
"regions"), if it's not being maintained it's not useful.


- Vlad

