Vlad filed a "let's see if we can cache compiled shaders" bug a few weeks ago, and perhaps that is something we should consider when discussing shaders in general (a sketch of what such a cache might look like follows below). I didn't know about recompiling when some uniforms change, though; that's good intel.

- Milan
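A minimal sketch of such a program-binary cache, assuming glGetProgramBinary/glProgramBinary are available (ARB_get_program_binary on desktop GL, core in OpenGL ES 3.0). The in-memory map and string key are hypothetical stand-ins for whatever the actual bug ends up using, and GL entry points are assumed to come from the usual headers/loader:

    // Sketch: cache a linked program's binary so later runs can skip
    // the compile+link. Hypothetical in-memory cache keyed by a string.
    #include <map>
    #include <string>
    #include <vector>

    struct CachedBinary {
      GLenum format;           // driver-specific binary format token
      std::vector<char> blob;  // opaque binary blob
    };

    static std::map<std::string, CachedBinary> sProgramCache;

    void SaveProgramBinary(const std::string& key, GLuint program) {
      GLint length = 0;
      glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
      if (length <= 0)
        return;
      CachedBinary entry;
      entry.blob.resize(length);
      glGetProgramBinary(program, length, nullptr, &entry.format,
                         entry.blob.data());
      sProgramCache[key] = std::move(entry);
    }

    bool LoadProgramBinary(const std::string& key, GLuint program) {
      auto it = sProgramCache.find(key);
      if (it == sProgramCache.end())
        return false;
      glProgramBinary(program, it->second.format, it->second.blob.data(),
                      (GLsizei)it->second.blob.size());
      // The driver may reject a stale binary (e.g. after a driver
      // update), so callers must fall back to a full compile+link.
      GLint linked = 0;
      glGetProgramiv(program, GL_LINK_STATUS, &linked);
      return linked == GL_TRUE;
    }

A real cache would presumably also persist blobs to disk and key them on driver and GL version, since program binaries are not portable across drivers.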
On 2013-10-10, at 15:13 , Jeff Gilbert <jgilb...@mozilla.com> wrote:

> I'll also add a note that just because we aren't recompiling doesn't mean
> the driver isn't. If we change enough (or maybe just the right) uniforms,
> this can cause the driver to recompile the shader, which is indeed slow.
> Trying to unify too many shader types might just tickle this.
>
> Some drivers will shoot us a warning via KHR_debug that we can catch when
> shader recompilation happens (a sketch of such a callback follows at the
> end of this thread).
>
> -Jeff
>
> ----- Original Message -----
> From: "Nicolas Silva" <nical.si...@gmail.com>
> To: "Benoit Jacob" <jacob.benoi...@gmail.com>
> Cc: "Benoit Girard" <bgir...@mozilla.com>, dev-platform@lists.mozilla.org,
> "Andreas Gal" <andreas....@gmail.com>
> Sent: Thursday, October 10, 2013 11:23:45 AM
> Subject: Re: unified shader for layer rendering
>
> I do appreciate the fact that it reduces complexity (in addition to fewer
> state changes).
>
> I agree that the decision to dedicate resources to this rather than to
> other high-priority projects in the pipe should be motivated by some
> numbers.
>
> Cheers,
>
> Nical
>
> On Thu, Oct 10, 2013 at 11:04 AM, Benoit Jacob
> <jacob.benoi...@gmail.com> wrote:
>
>> 2013/10/10 Benoit Jacob <jacob.benoi...@gmail.com>
>>
>>> I'll pile on what Benoit G said: this is the kind of work that would
>>> require very careful performance measurements before we commit to it.
>>>
>>> Also, like Benoit said, we have seen no indication that glUseProgram is
>>> hurting us. General GPU "wisdom" is that switching programs is not per
>>> se expensive as long as one is not relinking them; besides, the general
>>> performance caveat with any state change (it forces drawing to be split
>>> into multiple draw calls) also applies to updating uniforms, so we're
>>> not escaping it here.
>>>
>>> In addition to that, not all GPUs have real branching. My Sandy Bridge
>>> Intel chipset has real branching, but older Intel integrated GPUs
>>> don't, and I'd be very surprised if all of the mobile GPUs we're
>>> currently supporting did. To put this in perspective, in the world of
>>> discrete desktop NVIDIA GPUs, this was only introduced with the
>>> GeForce 6 series.
>>
>> In fact, even on a GeForce 6, we only get full real CPU-like ("MIMD")
>> branching in vertex shaders, not in fragment shaders.
>>
>> http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter34.html
>>
>> Benoit
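For the KHR_debug warnings Jeff mentions, a minimal sketch of hooking the debug output could look like the following. It assumes a GL context created with the debug flag; the function and enum names are the GL 4.3 core spellings (on ES the KHR_debug entry points carry a KHR suffix, e.g. glDebugMessageCallbackKHR), and filtering on the performance message type is only a heuristic, since how drivers classify and word recompile warnings is driver-specific:

    // Sketch: catch driver performance warnings (such as shader
    // recompiles) via the KHR_debug / GL 4.3 debug-output machinery.
    #include <cstdio>

    static void GLAPIENTRY
    OnGLDebugMessage(GLenum source, GLenum type, GLuint id,
                     GLenum severity, GLsizei length,
                     const GLchar* message, const void* user) {
      // Recompiles typically surface as performance-type messages; the
      // exact text varies by driver, so log everything in that class.
      if (type == GL_DEBUG_TYPE_PERFORMANCE) {
        fprintf(stderr, "GL perf warning: %.*s\n", (int)length, message);
      }
    }

    void InstallGLDebugHandler() {
      glEnable(GL_DEBUG_OUTPUT);
      glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  // fire at the offending call
      glDebugMessageCallback(OnGLDebugMessage, nullptr);
      // Restrict reporting to performance messages to keep the noise down.
      glDebugMessageControl(GL_DONT_CARE, GL_DEBUG_TYPE_PERFORMANCE,
                            GL_DONT_CARE, 0, nullptr, GL_TRUE);
    }

In Gecko this would presumably hang off the GL context wrapper rather than free functions; it is written standalone here only for brevity.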