On Mon, May 19, 2014 at 3:27 PM, Jonas Sicking <jo...@sicking.cc> wrote:

> On Mon, May 12, 2014 at 5:03 PM, Rik Cabanier <caban...@gmail.com> wrote:
> > Primary eng emails
> > caban...@adobe.com, bugm...@eligrey.com
> >
> > *Proposal*
> > http://wiki.whatwg.org/wiki/NavigatorCores
> >
> > *Summary*
> > Expose a property on navigator called hardwareConcurrency that returns
> > the number of logical cores on a machine.
> >
> > *Motivation*
> > All native platforms expose this property. It's reasonable to expose the
> > same capabilities that native applications get, so web applications can
> > be developed with equivalent features and performance.
> >
> > *Mozilla bug*
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1008453
> > The patch is currently not behind a runtime flag, but I could add it if
> > requested.
> >
> > *Concerns*
> > The original proposal required that a platform must return the exact
> > number of logical CPU cores. To mitigate the fingerprinting concern, the
> > proposal was updated so a user agent can "lie" about this.
> > In the case of WebKit, it will return a maximum of 8 logical cores so
> > high-value machines can't be discovered. (Note that it's already
> > possible to do a rough estimate of the number of cores.)
>
> Here are the responses that I sent to blink-dev before you sent the
> above email here.
>
> "For what it's worth, in Firefox we've avoided implementing this due to
> the increased fingerprintability. Obviously we can't forbid any APIs
> which increase fingerprintability, however in this case we felt that
> the utility wasn't high enough given that the number of cores on the
> machine often does not equal the number of cores available to a
> particular webpage.
>
> A better approach is an API which enables the browser to determine how
> much to parallelize a particular computation."
>

Devising a complex API to divide up the work is:
- a lot of work that will be controversial
- difficult, because those algorithms rely on shared memory (which workers
don't offer)
- not a fix for the case where I want to break up different tasks (i.e.
the network logic + physics example I provided earlier)

Other platforms offer an API that returns the number of CPUs, and authors
use it successfully. (See the tens of thousands of examples on GitHub.)
I don't see why the web platform is special here; we should trust that
authors can do the right thing.
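For illustration, here is a minimal sketch of how authors typically use the property: size a worker pool from the reported core count, with a fallback for user agents that don't expose it. The helper name, fallback value, and cap are mine, not from the proposal.

```javascript
// Hypothetical helper: pick a worker-pool size from the core count the
// UA reports. reportedCores may be undefined where the property is
// missing; maxWorkers caps the pool much like WebKit caps the report at 8.
function choosePoolSize(reportedCores, maxWorkers = 8) {
  const cores = reportedCores || 2; // conservative fallback
  return Math.max(1, Math.min(cores, maxWorkers));
}

// Browser usage (not exercised here):
//   const n = choosePoolSize(navigator.hardwareConcurrency);
//   const pool = Array.from({ length: n }, () => new Worker("worker.js"));
```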


> and
>
> "Do note that the fact that you can already approximate this API using
> workers is just as much an argument that no additional fingerprinting
> entropy is exposed here as it is an argument that this use case has
> already been addressed.
>
> Additionally many of the people that are fingerprinting right now are
> unlikely to be willing to peg the CPU for 20 seconds in order to get a
> reliable fingerprint. Though obviously there are exceptions.
>

The fingerprinting can be done in a worker, where there would be no delay
on the main thread.
Fingerprinting only seems useful for identifying high-value systems (i.e.
ones with many CPUs), since the measured output fluctuates a lot.
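To make the "rough estimate" mentioned above concrete: the usual trick is to run the same busy loop at increasing worker concurrency and note where per-task time starts to degrade. A hedged sketch of just the scoring step (the function name and tolerance value are mine; the timing collection itself would run in workers):

```javascript
// timingsByConcurrency maps a concurrency level to the measured
// per-task time in ms, e.g. { 1: 100, 2: 102, 4: 105, 8: 190 }.
// The estimate is the largest level whose per-task time is still
// within `tolerance` of the single-worker baseline.
function estimateCores(timingsByConcurrency, tolerance = 1.25) {
  const levels = Object.keys(timingsByConcurrency)
    .map(Number)
    .sort((a, b) => a - b);
  const baseline = timingsByConcurrency[levels[0]];
  let estimate = levels[0];
  for (const n of levels) {
    if (timingsByConcurrency[n] <= baseline * tolerance) estimate = n;
  }
  return estimate;
}
```

This also illustrates why the estimate is noisy: a loaded machine degrades earlier, which is the fluctuation referred to above.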


> Another relevant piece of data here is that we simply haven't gotten
> high priority requests for this feature. This lowers the relative
> value-to-risk ratio."
>
> I still feel like the value-to-risk ratio here isn't good enough. It
> would be relatively easy to define a WorkerPool API which spins up
> additional workers as needed.
>

I don't really follow.
Yes, this is not a very important feature, which is a reason to provide a
simple API that people can use as they see fit, not a reason to come up
with a complex one.


> A very simple version could be something as simple as:
>
> page.html:
> var wp = new WorkerPool("worker.js");
> wp.onmessage = resultHandler;
> myArrayOfWorkTasks.forEach(x => wp.postMessage(x));
>
> worker.js:
> onmessage = function(e) {
>   var res = doHeavyComputationWith(e.data);
>   postMessage(res);
> }
> function doHeavyComputationWith(val) {
>   ...
> }
>
> This obviously is very handwavey. It's definitely missing some
> mechanism to make sure that you get the results back in a reasonable
> order. But it's not rocket science to get this to be a coherent
> proposal.
>

You assume that people just want to use workers to break up one complex
problem into a bunch of smaller tasks.
I don't think this is a problem in the real world, because you can just
spin up a large number of threads and rely on the OS scheduler to sort it
out.
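In other words, with plain workers you can simply over-provision and hand out tasks round-robin, leaving the scheduling to the OS. A sketch of the distribution step (the helper name is mine; the postMessage dispatch is shown as browser-side usage only):

```javascript
// Assign tasks to workerCount buckets round-robin; the actual
// workers[i].postMessage(...) dispatch is left to browser code.
function roundRobin(tasks, workerCount) {
  const buckets = Array.from({ length: workerCount }, () => []);
  tasks.forEach((task, i) => buckets[i % workerCount].push(task));
  return buckets;
}

// Browser usage (not exercised here):
//   const workers = Array.from({ length: 16 }, () => new Worker("worker.js"));
//   roundRobin(jobs, workers.length).forEach((bucket, i) =>
//     bucket.forEach(job => workers[i].postMessage(job)));
```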
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
