Antoine Pitrou writes:
> Yes, in hindsight I think Guido was right.
Guido does too. One of the benefits of having a time machine is
getting to turn your hindsight into foresight.
Antoine Pitrou wrote:
Also, the MSDN doc (*) says timeBeginPeriod() can have a detrimental effect on
system performance; I don't know how much of it is true.
(*) http://msdn.microsoft.com/en-us/library/dd757624(VS.85).aspx
Indeed it does. This is ancient, dusty wisdom, from the days of 50 MHz
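For reference, timeBeginPeriod() raises the global timer resolution and must be paired with timeEndPeriod(); the sketch below shows that usual pattern. The 1 ms period and the helper function are illustrative assumptions, not code from this thread.

/* Minimal sketch: temporarily raise the Windows timer resolution with the
 * winmm multimedia timer API.  The 1 ms period is an example value only. */
#include <windows.h>
#include <mmsystem.h>   /* link against winmm.lib */

static void with_high_resolution_timer(void (*work)(void))
{
    if (timeBeginPeriod(1) == TIMERR_NOERROR) {
        work();
        /* Every successful timeBeginPeriod() needs a matching timeEndPeriod(). */
        timeEndPeriod(1);
    }
    else {
        /* Higher resolution unavailable; run at the default resolution. */
        work();
    }
}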
Hello Eric,
> I notice that Guido vetoed this idea, but just in case it should come up
> again, I have some thoughts that likely have already occurred to people,
> but I didn't notice on the list.
Yes, in hindsight I think Guido was right.
(no, I'm not trying to make myself agreeable :-))
> Also
Eric Hopper:
> I don't suppose it will ever be ported back to Python 2.x? It doesn't
> look like the whole GIL concept has changed much between Python 2.x and
> 3.x so I expect back-porting it would be pretty easy.
There was a patch but it has been rejected.
http://bugs.python.org/issue7753
I recently saw the video of David Beazley's presentation on how poorly
the old GIL implementation handled certain cases and thought "I can fix
that!". Unfortunately for me, someone else has beaten me to it, and
done a somewhat better job than I would've because I wasn't thinking of
doing anything
> I built something very similar for my company last year, and it’s been running
> flawlessly in production at a few customer sites since, with avg. CPU usage ~50%
> around the clock. I even posted about it on the Python mailing list [1] where
> there was almost no resonance at that time. I neve
Stefan Ring wrote:
> [2] http://www.bestinclass.dk/index.php/2009/10/python-vs-clojure-evolving/
> [3] www.dabeaz.com/python/GIL.pdf
>
> PS On a slightly different note, I came across some Python bashing [2] yesterday
> and somehow from there to David Beazley’s presentation about the GIL [3].
Hello,
I built something very similar for my company last year, and it’s been running
flawlessly in production at a few customer sites since, with avg. CPU usage ~50%
around the clock. I even posted about it on the Python mailing list [1] where
there was almost no resonance at that time. I never p
Hello again,
I've now removed priority requests, tried to improve the internal doc a
bit, and merged the changes into py3k.
Afterwards, the new Windows 7 buildbot has hung in test_multiprocessing,
but I don't know whether it's related.
Regards
Antoine.
Guido van Rossum python.org> writes:
Baptiste Lepilleur gmail.com> writes:
>
> I've tried, but there is no change in result (the regexp does not use \w &
> co but specifies a lot of unicode ranges). All strings are already of unicode
> type in 2.6.
No they aren't. You should add "from __future__ import unicode_literals" at the
start of
2009/11/7 Antoine Pitrou
>
> Hello again,
>
> > It shows that, on my platform for this specific benchmark:
> > * newgil manages to leverage a significant amount of parallelism
> > (1.7) where python 3.1 does not (3.1 is 80% slower)
>
> I think you are mistaken:
>
> -j0 (main thread
Hello again,
> It shows that, on my platform for this specific benchmark:
> * newgil manages to leverage a significant amount of parallelism
> (1.7) where python 3.1 does not (3.1 is 80% slower)
I think you are mistaken:
-j0 (main thread only)
newgil: 47.483s, 47.605s, 47.512s
-j4
2009/11/7 Antoine Pitrou
>
> [...]
> So, to sum it up, the way the current GIL manages to have good latencies is
> by issuing an unreasonable number of system calls on a contended lock, and
> potentially killing throughput performance (this depends on the OS too,
> because numbers under Linu
On Sat, Nov 7, 2009 at 9:01 AM, Antoine Pitrou wrote:
>
> Hello Guido,
>
>> How close are you to merging this into the Py3k branch? It looks like
>> a solid piece of work, that can only get better in the period between
>> now and the release of 3.2. But I don't want to rush you, and I only
>> have
Hello Guido,
> How close are you to merging this into the Py3k branch? It looks like
> a solid piece of work, that can only get better in the period between
> now and the release of 3.2. But I don't want to rush you, and I only
> have had a brief look at your code.
The code is ready. Priority re
Antoine,
How close are you to merging this into the Py3k branch? It looks like
a solid piece of work, that can only get better in the period between
now and the release of 3.2. But I don't want to rush you, and I only
have had a brief look at your code. (I whipped up a small Dave Beazley
example a
Hello,
> Solaris X86, 16 cores: some python extensions are likely missing (see
> config.log)
> Windows XP SP3, 4 cores: all python extensions but TCL (I didn't bother
> checking why it failed as it is not used in the benchmark). It is a release
> build.
>
> The results look promising but I let you sh
Hi Antoine,
I was finally able to compile py3k and run the benchmark (my compilation
issue was caused by checking out on Windows and compiling on Unix. Some
Makefile templates are missing correct EOL properties in SVN I think).
The benchmark results can be obtained from:
http://gaiacrtn.free.fr/py
Martin v. Löwis skrev:
I did, and it does nothing of what I suggested. I am sure I can make the
Windows GIL in ceval_gil.h and the mutex in thread_nt.h a lot more precise and
efficient.
Hmm. I'm skeptical that your code makes it more accurate, and I
completely fail to see that it makes it
On Mon, Nov 2, 2009 at 4:15 AM, Antoine Pitrou wrote:
> Martin v. Löwis v.loewis.de> writes:
>>
> [gil_drop_request]
>> Even if it is read from memory, I still wonder what might happen on
>> systems that require explicit memory barriers to synchronize across
>> CPUs. What if CPU 1 keeps reading a
On Mon, Nov 2, 2009 at 11:27 AM, "Martin v. Löwis" wrote:
> Hmm. This creates a busy wait loop; if you add larger sleep values,
> then it loses accuracy.
>
I thought that at first, too, but then I checked the documentation for
Sleep(0):
"A value of zero causes the thread to relinquish the remain
> I did, and it does nothing of what I suggested. I am sure I can make the
> Windows GIL in ceval_gil.h and the mutex in thread_nt.h a lot more precise
> and
> efficient.
Hmm. I'm skeptical that your code makes it more accurate, and I
completely fail to see that it makes it more efficient (by wh
Sturla Molden molden.no> writes:
>
> I'd love to try, but I don't have VC++ to build Python, I use GCC on
> Windows.
You can use Visual Studio Express, which is free (gratis).
Sturla Molden wrote:
> Antoine Pitrou skrev:
>> It certainly is.
>> But once again, I'm no Windows developer and I don't have a native
>> Windows host
>> to test on; therefore someone else (you?) has to try.
>>
> I'd love to try, but I don't have VC++ to build Python, I use GCC on
> Windows.
>
Antoine Pitrou skrev:
It certainly is.
But once again, I'm no Windows developer and I don't have a native Windows host
to test on; therefore someone else (you?) has to try.
I'd love to try, but I don't have VC++ to build Python, I use GCC on
Windows.
Anyway, the first thing to try then is t
Sturla Molden molden.no> writes:
>
> By the way Antoine, if you think granularity of 1-2 ms is sufficient,
It certainly is.
But once again, I'm no Windows developer and I don't have a native Windows host
to test on; therefore someone else (you?) has to try.
Also, the MSDN doc (*) says timeBegi
Sturla Molden skrev:
And just so you don't ask: There should not just be a Sleep(0) in the
loop, but a sleep that gets shorter and shorter until a lower
threshold is reached, where it skips to Sleep(0). That way we avoid
hammering on WaitForMultipleObjects and QueryPerformanceCounter more
th
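A minimal sketch of that decreasing-sleep idea, assuming the deadline is expressed in QueryPerformanceCounter ticks; the function name, the 8 ms starting slice and the halving step are illustrative choices, not code taken from this thread:

#include <windows.h>

/* Wait until `deadline_ticks` (QueryPerformanceCounter units), sleeping in
 * ever shorter slices and dropping to Sleep(0) below a small threshold. */
static void wait_until(LONGLONG deadline_ticks)
{
    DWORD slice_ms = 8;              /* initial sleep slice (example value) */
    LARGE_INTEGER now;

    for (;;) {
        QueryPerformanceCounter(&now);
        if (now.QuadPart >= deadline_ticks)
            break;
        if (slice_ms > 1) {
            Sleep(slice_ms);         /* sleep, then try a shorter slice */
            slice_ms /= 2;
        }
        else {
            Sleep(0);                /* below the threshold: just yield */
        }
    }
}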
Sturla Molden skrev:
I would turn on the multimedia timer (it is not on by default), and
replace this call with a loop, approximately like this:
for (;;) {
    r = WaitForMultipleObjects(2, objects, TRUE, 0);
    /* blah blah blah */
    QueryPerformanceCounter(&cnt);
    if (cnt.QuadPart > timeout)
        break;
Sturla Molden molden.no> writes:
>
> And the timeout "milliseconds" would now be computed from querying the
> performance counter, instead of unreliably by the Windows NT kernel.
Could you actually test your proposal under Windows and report what kind of
concrete benefits it brings?
Thank you
Martin v. Löwis skrev:
Maybe you should study the code under discussion before making such
a proposal.
I did, and it does nothing of what I suggested. I am sure I can make the
Windows GIL
in ceval_gil.h and the mutex in thread_nt.h a lot more precise and
efficient.
This is the kind of code
> - all machines Python runs on should AFAIK be cache-coherent: CPUs synchronize
> their views of memory in a rather timely fashion.
Ok. I thought that Itanium was an example where this assumption is
actually violated (as many web pages claim such a restriction), however,
it seems that on Itanium,
Martin v. Löwis v.loewis.de> writes:
>
[gil_drop_request]
> Even if it is read from memory, I still wonder what might happen on
> systems that require explicit memory barriers to synchronize across
> CPUs. What if CPU 1 keeps reading a 0 value out of its cache, even
> though CPU 2 has written an
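The kind of explicit ordering Martin is asking about is nowadays usually expressed with atomic loads and stores. The sketch below is only an illustration of that general pattern (C11 atomics postdate this discussion, and this is not the code in ceval_gil.h, which relies on the platform lock and condition-variable primitives); the names are hypothetical.

/* Illustrative pattern for a cross-thread "please drop the GIL" flag with
 * explicit memory ordering; names are hypothetical. */
#include <stdatomic.h>

static atomic_int drop_request = 0;

/* Called by a thread that wants the GIL: the release store makes the write
 * visible to the matching acquire load in the eval loop. */
static void request_drop(void)
{
    atomic_store_explicit(&drop_request, 1, memory_order_release);
}

/* Polled by the GIL-holding thread inside the eval loop. */
static int drop_requested(void)
{
    return atomic_load_explicit(&drop_request, memory_order_acquire);
}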
>> c) is the gil_drop_request handling thread-safe? If your answer is
>>"yes", you should add a comment as to what the assumptions are of
>>this code (ISTM that multiple threads may simultaneously attempt
>>to set the drop request, overlapping with the holding thread actually
>>drop
Martin v. Löwis v.loewis.de> writes:
>
> I've looked at this part of the implementation, and have a few comments.
> a) why is gil_interval a double? Make it an int, counting in
>microseconds.
Ok, I'll do that.
> b) notice that, on Windows, minimum wait resolution may be as large as
>15m
Sturla Molden wrote:
> Martin v. Löwis skrev:
>> b) notice that, on Windows, minimum wait resolution may be as large as
>>15ms (e.g. on XP, depending on the hardware). Not sure what this
>>means for WaitForMultipleObjects; most likely, if you ask for a 5ms
>>wait, it waits until the nex
Martin v. Löwis skrev:
b) notice that, on Windows, minimum wait resolution may be as large as
15ms (e.g. on XP, depending on the hardware). Not sure what this
means for WaitForMultipleObjects; most likely, if you ask for a 5ms
wait, it waits until the next clock tick. It would be bad if,
> The new GIL does away with this by ditching _Py_Ticker entirely and
> instead using a fixed interval (by default 5 milliseconds, but settable)
> after which we ask the main thread to release the GIL and let another
> thread be scheduled.
I've looked at this part of the implementation, and have a
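The interval-based hand-off described in the quoted paragraph can be pictured with a greatly simplified sketch; it mirrors the idea (a timed wait, after which the waiter sets a drop request that the holder honours), not the actual ceval_gil.h code, and it uses pthreads for brevity even though much of this thread is about Windows. All names here are illustrative.

#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t gil_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gil_cond  = PTHREAD_COND_INITIALIZER;
static bool gil_locked = false;             /* is the GIL currently held? */
static volatile bool drop_request = false;  /* holder polls this in the eval loop */
static long interval_us = 5000;             /* 5 ms default, settable */

/* Called by a thread that wants the GIL. */
static void take_gil(void)
{
    pthread_mutex_lock(&gil_mutex);
    while (gil_locked) {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_nsec += interval_us * 1000;
        if (deadline.tv_nsec >= 1000000000L) {
            deadline.tv_sec += 1;
            deadline.tv_nsec -= 1000000000L;
        }
        /* Wait up to one interval; if the holder still has the GIL after
         * that, ask it to drop and keep waiting. */
        if (pthread_cond_timedwait(&gil_cond, &gil_mutex, &deadline) != 0
            && gil_locked)
            drop_request = true;
    }
    gil_locked = true;
    drop_request = false;
    pthread_mutex_unlock(&gil_mutex);
}

/* Called by the holder when it notices drop_request (or blocks on I/O). */
static void drop_gil(void)
{
    pthread_mutex_lock(&gil_mutex);
    gil_locked = false;
    pthread_cond_signal(&gil_cond);
    pthread_mutex_unlock(&gil_mutex);
}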
On Sun, Nov 1, 2009 at 3:33 AM, Antoine Pitrou wrote:
>
> Hello again,
>
> Brett Cannon python.org> writes:
>>
>> I think it's worth it. Removal of the GIL is a totally open-ended problem
>> with no solution in sight. This, on the other hand, is a performance benefit
>> now. I say move forward wi
Antoine Pitrou wrote:
> Christian Heimes cheimes.de> writes:
>> +1 from me. I trust you like Brett does.
>>
>> How much work would it cost to make your patch optional at compile time?
>
> Quite a bit, because it changes the logic for processing asynchronous pending
> calls (signals) and asynchron
Christian Heimes cheimes.de> writes:
>
> +1 from me. I trust you like Brett does.
>
> How much work would it cost to make your patch optional at compile time?
Quite a bit, because it changes the logic for processing asynchronous pending
calls (signals) and asynchronous exceptions in the eval lo
Antoine Pitrou wrote:
> Based on this whole discussion, I think I am going to merge the new GIL work
> into the py3k branch, with priority requests disabled.
>
> If you think this is premature or uncalled for, or if you just want to review
> the changes before making a judgement, please voice up :
On Sun, Nov 1, 2009 at 03:33, Antoine Pitrou wrote:
>
> Hello again,
>
> Brett Cannon python.org> writes:
>>
>> I think it's worth it. Removal of the GIL is a totally open-ended problem
>> with no solution in sight. This, on the other hand, is a performance benefit
>> now. I say move forward with
Hello again,
Brett Cannon python.org> writes:
>
> I think it's worth it. Removal of the GIL is a totally open-ended problem
> with no solution in sight. This, on the other hand, is a performance benefit
> now. I say move forward with this. If it happens to be short-lived because
> some actua
Kristján Valur Jónsson ccpgames.com> writes:
>
> In my experience (from stackless python) using priority wakeup for IO can
> result in very erratic scheduling when there is much IO going on, every IO
> trumping another.
I whipped up a trivial multithreaded HTTP server using
socketserver.ThreadingM
On 26Oct2009 22:45, exar...@twistedmatrix.com wrote:
| On 04:18 pm, dan...@stutzbachenterprises.com wrote:
| >On Mon, Oct 26, 2009 at 10:58 AM, Antoine Pitrou wrote:
| >Do we really need priority requests at all? They seem counter to your
| >desire for simplicity and allowing the operating system
On Oct 26, 2009, at 6:45 PM, exar...@twistedmatrix.com wrote:
Despite what I said above, however, I would also take a default
position against adding any kind of more advanced scheduling system
here. It would, perhaps, make sense to expose the APIs for
controlling the platform scheduler, t
On 04:18 pm, dan...@stutzbachenterprises.com wrote:
On Mon, Oct 26, 2009 at 10:58 AM, Antoine Pitrou
wrote:
Er, I prefer to keep things simple. If you have lots of I/O you should
probably use an event loop rather than separate threads.
On Windows, sometimes using a single-threaded event loop i
On Mon, Oct 26, 2009 at 2:43 PM, Antoine Pitrou wrote:
> Collin Winter gmail.com> writes:
> [the Dave Beazley benchmark]
>> The results below are benchmarking py3k as the control, newgil as the
>> experiment. When it says "x% faster", that is a measure of newgil's
>> performance over py3k's.
>>
>
Collin Winter gmail.com> writes:
>
> My results for a 2.4 GHz Intel Core 2 Duo MacBook Pro (OS X 10.5.8):
Thanks!
[the Dave Beazley benchmark]
> The results below are benchmarking py3k as the control, newgil as the
> experiment. When it says "x% faster", that is a measure of newgil's
> perfor
On Sun, Oct 25, 2009 at 1:22 PM, Antoine Pitrou wrote:
> Having other people test it would be fine. Even better if you have an
> actual multi-threaded py3k application. But ccbench results for other
> OSes would be nice too :-)
My results for a 2.4 GHz Intel Core 2 Duo MacBook Pro (OS X 10.5.8):
Daniel Stutzbach stutzbachenterprises.com> writes:
>
> Do we really need priority requests at all? They seem counter to your
> desire for simplicity and allowing the operating system's scheduler to do
> its work.
No, they can be disabled (removed) if we prefer. With priority requests
disabled,
On Mon, Oct 26, 2009 at 10:58 AM, Antoine Pitrou wrote:
> Er, I prefer to keep things simple. If you have lots of I/O you should
> probably use an event loop rather than separate threads.
>
On Windows, sometimes using a single-threaded event loop is
impossible. WaitForMultipleObjects
Sturla Molden molden.no> writes:
>
> Antoine Pitrou skrev:
> > - priority requests, which is an option for a thread requesting the GIL
> > to be scheduled as soon as possible, and forcibly (rather than any other
> > threads). T
> Should a priority request for the GIL take a priority number?
Er,
Antoine Pitrou skrev:
- priority requests, which is an option for a thread requesting the GIL
to be scheduled as soon as possible, and forcibly (rather than any other
threads). T
Should a priority request for the GIL take a priority number?
> - If two threads make priority requests for the GIL,
On Oct 26, 2009, at 10:09 AM, Kristján Valur Jónsson wrote:
-Original Message-
From: python-dev-bounces+kristjan=ccpgames@python.org
[mailto:python-dev-bounces+kristjan=ccpgames@python.org] On
Behalf
Of Sturla Molden
time.sleep should generate a priority request to re-acqu
> -Original Message-
> From: python-dev-bounces+kristjan=ccpgames@python.org
> [mailto:python-dev-bounces+kristjan=ccpgames@python.org] On Behalf
> Of Sturla Molden
> time.sleep should generate a priority request to re-acquire the GIL; and
> so should all other blocking standard
Antoine Pitrou skrev:
- priority requests, which is an option for a thread requesting the GIL
to be scheduled as soon as possible, and forcibly (rather than any other
threads).
So Python threads become preemptive rather than cooperative? That would
be great. :-)
time.sleep should generate a p
Brett Cannon wrote:
It's up to Andrew to get the support in. While I have faith he will,
this is why we have been scaling back the support for alternative OSs
for a while and will continue to do so. I suspect the day Andrew stops
keeping up will be the day we push to have OS/2 be externally mai
Terry Reedy udel.edu> writes:
>
> I am curious as to whether the entire mechanism is or can be turned off
> when not needed -- when there are not threads (other than the main,
> starting thread)?
It is an implicit feature: when no thread is waiting on the GIL, the GIL-holding
thread isn't noti
Antoine Pitrou wrote:
Hello there,
The last couple of days I've been working on an experimental rewrite of
the GIL. Since the work has been turning out rather successful (or, at
least, not totally useless and crashing!) I thought I'd announce it
here.
I am curious as to whether the entire mech
>
> [SNIP - a lot of detail on what sounds like a good design]
>
> Now what remains to be done?
>
> Having other people test it would be fine. Even better if you have an
> actual multi-threaded py3k application. But ccbench results for other
> OSes would be nice too :-)
> (I get good results under