On Thu, Mar 14, 2013 at 03:56:33PM -0700, "Martin v. Löwis" wrote:
> On 14.03.13 11:23, Trent Nelson wrote:
> > Porting the Py_PXCTX part is trivial compared to the work that is
> > going to be required to get this stuff working on POSIX where none
> > of the sublime Windows concurrency, synchronisation and async IO
> > primitives exist.
>
> I couldn't understand from your presentation why this is essential
> to your approach.  IIUC, you are "just" relying on the OS providing
> a thread pool (and the sublime concurrency and synchronization
> routines are nothing more than that, ISTM).
Right, there's nothing Windows* does that can't be achieved on
Linux/BSD; it'll just take more scaffolding (i.e. we'll need to manage
our own thread pool at the very least).

[*]: Actually, the interlocked singly-linked list stuff concerns me;
the API seems straightforward enough, but the implementation becomes
deceptively complex once you factor in the ABA problem.  (I'm not
aware of a portable open source alternative for that stuff.  A
caller-side sketch of that API is included at the end of this
message.)

> Implementing a thread pool on top of select/poll/kqueue seems
> straightforward.

Nod, that's exactly what I've got in mind: spin up a bunch of threads
that sit there and call poll/kqueue in an endless loop.  That'll work
just fine for Linux/BSD/OSX.  (A rough sketch of such a worker loop is
also included below.)

Actually, what's really interesting are the new registered IO
facilities in Windows 8/2012.  The Microsoft recommendation for
achieving the ultimate performance (least jitter, lowest latency,
highest throughput) is to do something like this:

    while (1) {
        if (!DequeueCompletionRequests(...)) {
            YieldProcessor();
            continue;
        } else {
            /* Handle requests */
        }
    }

That pattern looks a lot more like what you'd do on Linux/BSD (spin up
a thread per CPU and call epoll/kqueue endlessly) than any of the
previous Windows IO patterns.

    Trent.
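For context on the interlocked singly-linked list footnote above, here
is roughly what the Win32 SList API looks like from the caller's side.
This is purely an illustrative sketch (the WORK_ITEM structure and its
payload are made up for the example); the point is that the ABA
handling the footnote worries about lives inside the OS
implementation, which is exactly what a portable replacement would
have to reproduce.

    #include <windows.h>
    #include <malloc.h>
    #include <stdio.h>

    /* The SLIST_ENTRY must be the first member, and nodes must be
       aligned to MEMORY_ALLOCATION_ALIGNMENT. */
    typedef struct _WORK_ITEM {
        SLIST_ENTRY entry;
        int payload;
    } WORK_ITEM;

    int main(void)
    {
        SLIST_HEADER head;
        InitializeSListHead(&head);

        WORK_ITEM *item = (WORK_ITEM *)_aligned_malloc(
            sizeof(WORK_ITEM), MEMORY_ALLOCATION_ALIGNMENT);
        if (!item)
            return 1;
        item->payload = 42;

        /* Lock-free push/pop; the tricky ABA-related details are
           handled inside the OS implementation, not by the caller. */
        InterlockedPushEntrySList(&head, &item->entry);

        PSLIST_ENTRY popped = InterlockedPopEntrySList(&head);
        if (popped) {
            WORK_ITEM *w = (WORK_ITEM *)popped; /* entry is first member */
            printf("payload = %d\n", w->payload);
            _aligned_free(w);
        }
        return 0;
    }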
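And a rough sketch of the "thread per CPU, endless poll loop" idea on
Linux, using epoll and pthreads.  Again, this is illustrative only and
not code from the actual patch: fd registration (epoll_ctl), error
handling, work dispatch and shutdown are all omitted.

    #include <pthread.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    #define MAX_EVENTS  64
    #define MAX_THREADS 64

    /* Each worker blocks in epoll_wait() forever and hands completed
       events off to whatever processes them; that dispatch is the part
       the Windows thread pool and IOCP would otherwise do for us. */
    static void *io_worker(void *arg)
    {
        int epfd = *(int *)arg;
        struct epoll_event events[MAX_EVENTS];

        for (;;) {
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                /* Dispatch events[i].data.ptr to a handler here. */
            }
        }
        return NULL;
    }

    int main(void)
    {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        long nthreads = (ncpus > 0) ? ncpus : 1;
        if (nthreads > MAX_THREADS)
            nthreads = MAX_THREADS;

        int epfd = epoll_create1(0);
        pthread_t threads[MAX_THREADS];

        /* One worker per online CPU, all waiting on the same epoll fd. */
        for (long i = 0; i < nthreads; i++)
            pthread_create(&threads[i], NULL, io_worker, &epfd);
        for (long i = 0; i < nthreads; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

A BSD/OSX variant would have the same shape, with kqueue()/kevent() in
place of epoll_create1()/epoll_wait().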