On Tuesday, May 6, 2014 7:30:42 PM UTC-4, Ehsan Akhgari wrote:
> On 2014-05-06, 6:41 PM, Jonas Sicking wrote:
> >> That's why if we just expose different features on the object returned by
> >> getContext("webgl") depending on client hardware details, we will create a
> >> compatibility mess, unlike other Web APIs.
>
> > The main problem that you have is that you haven't designed the API
> > so as to allow authors to test if the API is available.
> > If you had, this discussion would be moot.
> >
> > But since you haven't, you're now stuck having to find some other way
> > of detecting if these features are implemented or not.
This was explicitly not designed, because it is not the way that WebGL works. The API that allows authors to test whether something is available is getExtension()/getSupportedExtensions(). A feature is either part of the core, in which case it is always available, or you query and enable it with getExtension(). The alternative is something like D3D's "caps bits", which are basically equivalent, just less flexible.

The API calls we're talking about here aren't of the form "is function frobFoo() available?"; they're more like "is it valid to pass FROB to function Foo if BLARG is enabled and there is a FOO texture bound?". If you have the EXT_frob_blarg_foo extension (and have enabled it), then it is valid. Otherwise, that state is an error.

FWIW, the reason you have to explicitly enable extensions is that we didn't want content that "accidentally works". In contrast with regular OpenGL, where every extension is always enabled and the query just tells you what is available, WebGL requires explicit author action to enable an extension (see the sketch in [1] below). This has been a big boon to WebGL compatibility.

> Yeah, I think this is the core of the issue. Can we just make sure that
> WebGL2 features which are not in WebGL1 are all designed so that they
> are individually feature detectible? And if we do that, would there be
> any other reason why we would want to define a new version of the
> context object?

What's the value of this? The current set of WebGL extensions to WebGL 1 was carefully chosen such that the baseline WebGL capability (OpenGL ES 2.0) was present on all devices we cared about. The extensions that are defined for WebGL 1 are present on either all or most devices as well. WebGL 2 features (ES 3) are generally *not* available on many of these devices, except for a feature here or there. However, if a device supports ES 3.0 (as do basically all even remotely recent desktop GPUs), then *all* of ES 3.0/WebGL 2 is available. So, in practice, either all of WebGL 2 will be available, or only the subset that is WebGL 1 + WebGL 1 extensions will be.

Defining every feature of WebGL 2 as an extension would result in a huge amount of busywork, because enabling each of those features would then be optional. Much of this busywork is even more painful because it might require explicitly not supporting certain GLSL language features (e.g. do you still support 32-bit and 16-bit integers if someone doesn't enable the extension that allows them? What's the value in spending time writing the different paths in the shader validator?).

There is no value in not defining "webgl2" (and, later, "webgl3", "webgl4", etc.). Existing content can continue to use "webgl", as it wouldn't use the new functionality anyway. A "webgl2" context can be used just like a "webgl" context can be; it just has additional pre-enabled functionality (see [2] below). Crucially, it doesn't give you the choice of enabling those things piecemeal.

WebGL is already following the OpenGL path. Trying to make it more "webby" by mushing the APIs together isn't doing the web a favor, since the API is already more OpenGL-like; isn't doing developers a favor, since they would then have to keep this pile of getExtension() code around; and is definitely not doing us a favor, because we would have to support the explosion of every combination of extensions.

    - Vlad
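
[1] A minimal sketch of the WebGL 1 detect-and-enable pattern (illustrative, not tested here; "OES_texture_float" is just an example extension name, and canvas is assumed to be an HTMLCanvasElement):

    var gl = canvas.getContext("webgl");
    // getExtension() both tests for and enables the feature; it
    // returns null when the extension is unavailable.
    // (getSupportedExtensions() merely lists names; it enables nothing.)
    var floatTex = gl.getExtension("OES_texture_float");
    if (floatTex) {
      // Float textures are now enabled on this context. Without this
      // explicit call, passing gl.FLOAT to texImage2D is an error,
      // so content can't "accidentally work".
    }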
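
[2] And a sketch of how a "webgl2" context string coexists with "webgl" under this proposal (again illustrative; it assumes a browser that implements WebGL 2):

    // Either the context comes back with all of WebGL 2 pre-enabled,
    // or it doesn't come back at all; there are no piecemeal
    // getExtension() calls for individual ES 3.0 features.
    var gl = canvas.getContext("webgl2");
    if (!gl) {
      // Fall back to WebGL 1 plus whatever extensions are present.
      gl = canvas.getContext("webgl");
    }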