On Tue, May 13, 2014 at 11:20 AM, Ehsan Akhgari <ehsan.akhg...@gmail.com> wrote:
> Can you please provide some examples of actual web applications that do
> this, and what they're exactly trying to do with the number once they
> estimate one?  (Eli's timing attack demos don't count. ;-)
One example of a website in the wild that is currently using
navigator.hardwareConcurrency with my polyfill is
http://danielsadventure.info/html5fractal/

On Tue, May 13, 2014 at 11:20 AM, Ehsan Akhgari <ehsan.akhg...@gmail.com> wrote:
> I don't think that the value exposed by the native platforms is
> particularly useful.  Really if the use case is to try to adapt the number
> of workers to a number that will allow you to run them all concurrently,
> that is not the same number as reported traditionally by the native
> platforms

Can you back that up with a real-world example of a desktop application that
behaves that way? Every highly parallel desktop application that I have
(HandBrake, xz, Photoshop, GIMP, Blender (CPU-based render modes)) uses all
available CPU cores and keeps the same threadpool size throughout the
application's lifetime. Can you provide a single example of a desktop
application that resizes its threadpool based on load, as opposed to letting
the OS scheduler do its job?

The use case of navigator.hardwareConcurrency is not to "adapt the number of
workers to a number that will allow you to run them all concurrently". The
use case is sizing a threadpool so that an application can perform parallel
tasks using as many system CPU resources as it can get.

You state that "this API is too focused on the past/present". I may be
compressing some data with xz while also compiling Firefox. If both of these
applications use 12 threads on my 12-thread Intel CPU, the OS scheduler
balances the loads so that they both finish as fast as possible. If I use
only 1 thread for compression while compiling Firefox, Firefox may finish
compiling faster, but my compression will undoubtedly take longer.
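To make that use case concrete, here is a minimal sketch of the intended
pattern. The pickPoolSize helper and its fallback value of 4 are my own
illustration, not part of any proposal or spec:

```javascript
// Sketch: size a worker pool once from navigator.hardwareConcurrency.
// The fallback default (4) is an arbitrary conservative choice for
// environments that don't expose the property.
function pickPoolSize(env) {
  const reported = env && env.hardwareConcurrency;
  // Use the reported logical core count when it is a sane positive
  // integer; otherwise fall back to the default.
  return Number.isInteger(reported) && reported > 0 ? reported : 4;
}

// In a real page you would then spawn that many Workers once and keep
// the pool alive for the application's lifetime, e.g.:
//   const pool = Array.from({ length: pickPoolSize(navigator) },
//                           () => new Worker('task.js'));

console.log(pickPoolSize({ hardwareConcurrency: 12 })); // 12
console.log(pickPoolSize(undefined));                   // 4 (fallback)
```

The point is that the number is read once to size a static pool; the OS
scheduler, not the application, then balances the resulting threads against
whatever else is running.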
On Tue, May 13, 2014 at 11:20 AM, Ehsan Akhgari <ehsan.akhg...@gmail.com> wrote:
> On Tue, May 13, 2014 at 2:37 AM, Rik Cabanier <caban...@gmail.com> wrote:
>
>> On Mon, May 12, 2014 at 10:15 PM, Joshua Cranmer 🐧 <pidgeo...@gmail.com> wrote:
>>
>> > On 5/12/2014 7:03 PM, Rik Cabanier wrote:
>> >
>> >> *Concerns*
>> >>
>> >> The original proposal required that a platform must return the exact
>> >> number of logical CPU cores. To mitigate the fingerprinting concern,
>> >> the proposal was updated so a user agent can "lie" about this.
>> >> In the case of WebKit, it will return a maximum of 8 logical cores so
>> >> high-value machines can't be discovered. (Note that it's already
>> >> possible to do a rough estimate of the number of cores)
>> >
>> > The discussion on the WHATWG mailing list covered a lot more than the
>> > fingerprinting concern. Namely:
>> > 1. The user may not want to let web applications hog all of the cores
>> > on a machine, and exposing this kind of metric makes it easier for
>> > (good-faith) applications to inadvertently do this.
>>
>> Web applications can already do this today. There's nothing stopping
>> them from figuring out the CPUs and trying to use them all.
>> Worse, I think they will likely optimize for popular platforms, which
>> either overtaxes or underutilizes non-popular ones.
>
> Can you please provide some examples of actual web applications that do
> this, and what they're exactly trying to do with the number once they
> estimate one?  (Eli's timing attack demos don't count. ;-)
>
>> > 2. It's not clear that this feature is necessary to build high-quality
>> > threading workload applications. In fact, it's possible that this
>> > technique makes it easier to build inferior applications, relying on a
>> > potentially inferior metric. (Note, for example, the disagreement on
>> > figuring out what you should use for make -j if you have N cores).
>>
>> Everyone is in agreement that that is a hard problem to fix and that
>> there is no clear answer.
>> Whatever solution is picked (maybe like Grand Central or Intel TBB),
>> most solutions will still want to know how many cores are available.
>> Looking at the native platform (and Adobe's applications), many query
>> the operating system for this information to balance the workload. I
>> don't see why this would be different for the web platform.
>
> I don't think that the value exposed by the native platforms is
> particularly useful.  Really if the use case is to try to adapt the number
> of workers to a number that will allow you to run them all concurrently,
> that is not the same number as reported traditionally by the native
> platforms.  If you try Eli's test case in Firefox under different
> workloads (for example, while building Firefox, doing a disk-intensive
> operation, etc.), the utter inaccuracy of the results is proof of the
> ineffectiveness of this number in my opinion.
>
> Also, I worry that this API is too focused on the past/present.  For
> example, I don't think anyone sufficiently addressed Boris' concern on
> the whatwg thread about AMP vs SMP systems.  This proposal also assumes
> that the UA itself is mostly content with using a single core, which is
> true for the current browser engines, but we're working on changing that
> assumption in Servo.  It also doesn't take into account the possibility
> of several of these web applications running at the same time.
>
> Until these issues are addressed, I do not think we should implement or
> ship this feature.
>
> Cheers,
> Ehsan
> _______________________________________________
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform