Re: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up function calls)

2005-01-31 Thread Barry Warsaw
On Mon, 2005-01-31 at 00:17, Guido van Rossum wrote:
> > I had hoped for the core of p3k to be built for scratch [...]
> 
> Stop right there.

Phew!
-Barry



___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up function calls)

2005-01-31 Thread Barry Warsaw
On Mon, 2005-01-31 at 00:00, Skip Montanaro wrote:
> Raymond> I had hoped for the core of p3k to be built for scratch ...
> 
> Then we should just create a new CVS module for it (or go whole hog and try
> a new revision control system altogether - svn, darcs, arch, whatever).

I've heard rumors that SF was going to be making svn available.  Anybody
know more about that?  I'd be +1 on moving from cvs to svn.

-Barry





Re: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up function calls)

2005-01-31 Thread Evan Jones
On Jan 31, 2005, at 0:17, Guido van Rossum wrote:
> The "just kidding" applies to the whole list, right? None of these
> strike me as good ideas, except for improvements to function argument
> passing.
Really? You see no advantage to moving to garbage collection, nor 
allowing Python to leverage multiple processor environments? I'd be 
curious to hear your reasons why not.

My knowledge about garbage collection is weak, but I have read a little 
bit of Hans Boehm's work on garbage collection. For example, his 
"Memory Allocation Myths and Half Truths" presentation 
(http://www.hpl.hp.com/personal/Hans_Boehm/gc/myths.ps) is quite 
interesting. On page 25 he examines reference counting. The biggest 
disadvantage mentioned is that simple pointer assignments end up 
becoming "increment ref count" operations as well, which can "involve 
at least 4 potential memory references." The next page has a 
micro-benchmark that shows reference counting performing very poorly. 
Not to mention that Python has a garbage collector *anyway,* so 
wouldn't it make sense to get rid of the reference counting?
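Evan's point about plain assignments turning into refcount traffic is easy to observe from Python itself; a small illustration (not part of the original mail) using `sys.getrefcount`:

```python
import sys

x = object()
before = sys.getrefcount(x)   # includes the temporary reference for the call itself
y = x                         # a simple name binding...
after = sys.getrefcount(x)    # ...incremented the object's refcount
print(after - before)         # 1
```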

My only argument for making Python capable of leveraging multiple 
processor environments is that multithreading seems to be where the big 
performance increases will be in the next few years. I am currently 
using Python for some relatively large simulations, so performance is 
important to me.

Evan Jones


Re: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up function calls)

2005-01-31 Thread Bob Ippolito
On Jan 31, 2005, at 10:43, Evan Jones wrote:
> On Jan 31, 2005, at 0:17, Guido van Rossum wrote:
>> The "just kidding" applies to the whole list, right? None of these
>> strike me as good ideas, except for improvements to function argument
>> passing.
> Really? You see no advantage to moving to garbage collection, nor
> allowing Python to leverage multiple processor environments? I'd be
> curious to hear your reasons why not.
>
> My knowledge about garbage collection is weak, but I have read a
> little bit of Hans Boehm's work on garbage collection. For example,
> his "Memory Allocation Myths and Half Truths" presentation
> (http://www.hpl.hp.com/personal/Hans_Boehm/gc/myths.ps) is quite
> interesting. On page 25 he examines reference counting. The biggest
> disadvantage mentioned is that simple pointer assignments end up
> becoming "increment ref count" operations as well, which can "involve
> at least 4 potential memory references." The next page has a
> micro-benchmark that shows reference counting performing very poorly.
> Not to mention that Python has a garbage collector *anyway,* so
> wouldn't it make sense to get rid of the reference counting?
>
> My only argument for making Python capable of leveraging multiple
> processor environments is that multithreading seems to be where the
> big performance increases will be in the next few years. I am
> currently using Python for some relatively large simulations, so
> performance is important to me.
Wouldn't it be nicer to have a facility that let you send messages 
between processes and manage concurrency properly instead?  You'll need 
most of this anyway to do multithreading sanely, and the benefit to the 
multiple process model is that you can scale to multiple machines, not 
just processors.  For brokering data between processes on the same 
machine, you can use mapped memory if you can't afford to copy it 
around, which gives you basically all the benefits of threads with 
fewer pitfalls.

-bob


[Python-Dev] Re: Moving towards Python 3.0 (was Re: Speed up functioncalls)

2005-01-31 Thread Fredrik Lundh
Bob Ippolito wrote:

> Wouldn't it be nicer to have a facility that let you send messages between 
> processes and manage 
> concurrency properly instead?  You'll need most of this anyway to do 
> multithreading sanely, and 
> the benefit to the multiple process model is that you can scale to multiple 
> machines, not just 
> processors.

yes, please!

> For brokering data between processes on the same machine, you can use
> mapped memory if you can't afford to copy it around

this mechanism should be reasonably hidden, of course, at least for "normal
use".

 





RE: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up functioncalls)

2005-01-31 Thread Michael Chermside
Evan Jones writes:
> My knowledge about garbage collection is weak, but I have read a little
> bit of Hans Boehm's work on garbage collection. [...] The biggest
> disadvantage mentioned is that simple pointer assignments end up
> becoming "increment ref count" operations as well...

Hans Boehm certainly has some excellent points. I believe a little
searching through the Python dev archives will reveal that attempts
have been made in the past to use his GC tools with CPython, and that
the results have been disappointing. That may be because other parts
of CPython are optimized for reference counting, or it may be just
because this stuff is so bloody difficult!

However, remember that changing away from reference counting is a change
to the semantics of CPython. Right now, people can (and often do) assume
that objects which don't participate in a reference loop are collected
as soon as they go out of scope. They write code that depends on
this... idioms like:

>>> text_of_file = open(file_name, 'r').read()

Perhaps such idioms aren't a good practice (they'd fail in Jython or
in IronPython), but they ARE common. So we shouldn't stop using
reference counting unless we can demonstrate that the alternative is
clearly better. Of course, we'd also need to devise a way for extensions
to cooperate (which is a problem Jython, at least, doesn't face).
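The portable alternative to the refcount-dependent idiom is to make the close explicit; with the later-added ``with`` statement (PEP 343) the one-liner stays nearly as terse and behaves the same under any collector (the temporary file here exists only to make the sketch self-contained):

```python
import tempfile

# Set up a throwaway file so the example can actually run.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("contents")
    file_name = tmp.name

# Deterministic on CPython, Jython, and IronPython alike: the file is
# closed when the block exits, not when some collector happens to run.
with open(file_name) as f:
    text_of_file = f.read()

print(text_of_file)  # contents
```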

So it's NOT an obvious call, and so far numerous attempts to review
other GC strategies have failed. I wouldn't be so quick to dismiss
reference counting.

> My only argument for making Python capable of leveraging multiple
> processor environments is that multithreading seems to be where the big
> performance increases will be in the next few years. I am currently
> using Python for some relatively large simulations, so performance is
> important to me.

CPython CAN leverage such environments, and it IS used that way.
However, this requires using multiple Python processes and inter-process
communication of some sort (there are lots of choices, take your pick).
It's a technique which is more trouble for the programmer, but in my
experience usually has less likelihood of containing subtle parallel
processing bugs. Sure, it'd be great if Python threads could make use
of separate CPUs, but if the cost of that were that Python dictionaries
performed as poorly as a Java Hashtable or synchronized HashMap, then it
wouldn't be worth the cost. There's a reason why Java moved away from
Hashtable (the threadsafe data structure) to HashMap (not threadsafe).

Perhaps the REAL solution is just a really good IPC library that makes
it easier to write programs that launch "threads" as separate processes
and communicate with them. No change to the internals, just a new
library to encourage people to use the technique that already works.

-- Michael Chermside



RE: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up functioncalls)

2005-01-31 Thread Skip Montanaro

Michael> CPython CAN leverage such environments, and it IS used that
Michael> way.  However, this requires using multiple Python processes
Michael> and inter-process communication of some sort (there are lots of
Michael> choices, take your pick).  It's a technique which is more
Michael> trouble for the programmer, but in my experience usually has
Michael> less likelihood of containing subtle parallel processing
Michael> bugs.

In my experience, when people suggest that "threads are easier than ipc", it
means that their code is sprinkled with "subtle parallel processing bugs".

Michael> Perhaps the REAL solution is just a really good IPC library
Michael> that makes it easier to write programs that launch "threads" as
Michael> separate processes and communicate with them. 

Tuple space, anyone?
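For readers who haven't met Linda-style tuple spaces: workers coordinate only by putting tuples into a shared space and taking ones that match a pattern. A toy in-process sketch (class name and API invented for illustration; a real one would live behind IPC):

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space: coordination happens only through
    putting tuples and taking ones that match a pattern."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def put(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def take(self, pattern):
        # pattern is a tuple in which None matches any value
        def matches(t):
            return len(t) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, t))
        with self._cond:
            while True:
                for t in self._tuples:
                    if matches(t):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()   # block until someone puts a new tuple

space = TupleSpace()
space.put(("result", 42))
print(space.take(("result", None)))   # ('result', 42)
```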

Skip


Re: [Python-Dev] Re: PEP 309

2005-01-31 Thread Michael Hudson
Paul Moore <[EMAIL PROTECTED]> writes:

> Also, while looking at patches I noticed 1077106. It doesn't apply to
> me - I don't use Linux - but it looks like this may have simply been
forgotten. The last comment is in December from Michael Hudson,
> saying in effect "I'll commit this tomorrow". Michael?

Argh.  Committed.

Cheers,
mwh

-- 
  LINTILLA:  You could take some evening classes.
ARTHUR:  What, here?
  LINTILLA:  Yes, I've got a bottle of them.  Little pink ones.
   -- The Hitch-Hikers Guide to the Galaxy, Episode 12


RE: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up functioncalls)

2005-01-31 Thread Glyph Lefkowitz
On Mon, 2005-01-31 at 08:51 -0800, Michael Chermside wrote:

> However, remember that changing away from reference counting is a change
> to the semantics of CPython. Right now, people can (and often do) assume
> that objects which don't participate in a reference loop are collected
> as soon as they go out of scope. They write code that depends on
> this... idioms like:
> 
> >>> text_of_file = open(file_name, 'r').read()
> 
> Perhaps such idioms aren't a good practice (they'd fail in Jython or
> in IronPython), but they ARE common. So we shouldn't stop using
> reference counting unless we can demonstrate that the alternative is
> clearly better. Of course, we'd also need to devise a way for extensions
> to cooperate (which is a problem Jython, at least, doesn't face).

I agree that the issue is highly subtle, but this reason strikes me as
kind of bogus.  The problem here is not that the semantics are really
different, but that Python doesn't treat file descriptors as an
allocatable resource, and therefore doesn't trigger the GC when they are
exhausted.

As it stands, this idiom works most of the time, and if an EMFILE errno
triggered the GC, it would always work.
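Glyph's "collect on EMFILE" idea can be approximated in pure Python today, without interpreter support (the helper name is invented for this sketch):

```python
import errno
import gc
import tempfile

def open_with_gc_retry(path, mode="r"):
    """Open a file; if the process is out of descriptors (EMFILE),
    run a collection to reclaim forgotten file objects, then retry once."""
    try:
        return open(path, mode)
    except OSError as exc:
        if exc.errno != errno.EMFILE:
            raise
        gc.collect()               # may close files held only by garbage
        return open(path, mode)

# Self-contained usage: make a file, then open it through the helper.
with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write("x")
    name = tmp.name
f = open_with_gc_retry(name)
print(f.read())                    # x
f.close()
```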

Obviously this would be difficult to implement pervasively, but maybe it
should be a guideline for alternative implementations to follow so as
not to fall into situations where tricks like this one, which are
perfectly valid both semantically and in regular python, would fail due
to an interaction with the OS...?




Re: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up function calls)

2005-01-31 Thread Martin v. Löwis
Evan Jones wrote:
> The next page has a
> micro-benchmark that shows reference counting performing very poorly.
> Not to mention that Python has a garbage collector *anyway,* so wouldn't
> it make sense to get rid of the reference counting?
It's not clear what these numbers exactly mean, but I don't believe
them. With the Python GIL, the increments/decrements don't have to
be atomic, which already helps in a multiprocessor system (as you
don't need a bus lock). The actual costs of GC occur when a
collection happens - and it should always be possible to construct
cases where a collection takes longer, because it has to look
at so much memory.
I like reference counting because of its predictability. I
deliberately do
data = open(filename).read()
without having to worry about closing the file - just because
reference counting does it for me. I guess a lot of code will
break when you drop refcounting - unless, perhaps, an fopen
failure triggers a GC.
Regards,
Martin


Re: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up function calls)

2005-01-31 Thread Martin v. Löwis
Barry Warsaw wrote:
> I've heard rumors that SF was going to be making svn available.  Anybody
> know more about that?  I'd be +1 on moving from cvs to svn.
It was on their "things we do in 2005" list. 2005 isn't over yet...
I wouldn't be surprised if it gets moved to their "things we do in 2006"
list in November (just predicting from past history, without any
insight).
Regards,
Martin


[Python-Dev] Re: Moving towards Python 3.0

2005-01-31 Thread Michael Hudson
Evan Jones <[EMAIL PROTECTED]> writes:

> On Jan 31, 2005, at 0:17, Guido van Rossum wrote:
>> The "just kidding" applies to the whole list, right? None of these
>> strike me as good ideas, except for improvements to function argument
>> passing.
>
> Really? You see no advantage to moving to garbage collection, nor
> allowing Python to leverage multiple processor environments? I'd be
> curious to hear your reasons why not.

Obviously, if one could wave a wand and make it so, we would.  The
argument is about whether the cost (in backwards compatibility,
portability, uniprocessor performance, developer time, etc.) outweighs
the benefit.

> My knowledge about garbage collection is weak, but I have read a
> little bit of Hans Boehm's work on garbage collection. For example,
> his "Memory Allocation Myths and Half Truths" presentation
> (http://www.hpl.hp.com/personal/Hans_Boehm/gc/myths.ps) is quite
> interesting. On page 25 he examines reference counting. The biggest
> disadvantage mentioned is that simple pointer assignments end up
> becoming "increment ref count" operations as well, which can "involve
> at least 4 potential memory references." The next page has a
> micro-benchmark that shows reference counting performing very
> poorly.

Given the current implementation's *extreme* malloc-happiness, I posit
that it would be more or less impossible to make any form of
non-copying garbage collector go faster for Python than refcounting.
I may be wrong, but I don't think so and I have actually thought about
this a little bit :)

The "non-copying" bit is important for backwards compatibility of C
extensions (unless there's something I don't know).

> Not to mention that Python has a garbage collector *anyway,* so
> wouldn't it make sense to get rid of the reference counting?

Here you're confused.  Python's cycle collector depends utterly on
reference counting.

(And what is it with this "let's ditch refcounting and use a garbage
collector" thing that people always wheel out?  Refcounting *is* a
form of garbage collection by most reasonable definitions, esp. when
you add Python's cycle collector).
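The interplay mwh describes is observable directly: refcounting reclaims acyclic garbage the instant its count hits zero, while cycles wait for the cycle collector. A sketch (automatic collection is disabled so the timing stays visible):

```python
import gc
import weakref

class Node:
    pass

gc.disable()                 # keep automatic collection from muddying the demo

# Acyclic garbage: reference counting reclaims it immediately.
a = Node()
r = weakref.ref(a)
del a
print(r() is None)           # True on CPython: gone at once

# Cyclic garbage: the refcount never reaches zero, so only the
# cycle collector can reclaim it.
b = Node()
b.self = b                   # self-reference forms a cycle
r2 = weakref.ref(b)
del b
print(r2() is None)          # False: still alive
gc.collect()
print(r2() is None)          # True: the cycle collector got it
gc.enable()
```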

> My only argument for making Python capable of leveraging multiple
> processor environments is that multithreading seems to be where the
> big performance increases will be in the next few years. I am
> currently using Python for some relatively large simulations, so
> performance is important to me.

I'm sure you're tired of hearing it, but I think processes are your
friend...

Cheers,
mwh

-- 
  It is time-consuming to produce high-quality software. However,
  that should not alone be a reason to give up the high standards
  of Python development.  -- Martin von Loewis, python-dev


[Python-Dev] linux executable - how?

2005-01-31 Thread apocalypznow
How can I take my Python scripts and create a Linux executable out of them
(to be distributed without having to also distribute Python)?



Re: [Python-Dev] linux executable - how?

2005-01-31 Thread Aahz
On Mon, Jan 31, 2005, apocalypznow wrote:
>
> How can I take my Python scripts and create a Linux executable out of them
> (to be distributed without having to also distribute Python)?

python-dev is for discussion of patches and bugs to Python itself.
Please post your question on comp.lang.python.  Thanks!
-- 
Aahz ([EMAIL PROTECTED])   <*> http://www.pythoncraft.com/

"Given that C++ has pointers and typecasts, it's really hard to have a serious 
conversation about type safety with a C++ programmer and keep a straight face.
It's kind of like having a guy who juggles chainsaws wearing body armor 
arguing with a guy who juggles rubber chickens wearing a T-shirt about who's 
in more danger."  --Roy Smith, c.l.py, 2004.05.23


[Python-Dev] python-dev Summary for 2004-12-16 through 2004-12-31 [draft]

2005-01-31 Thread Brett C.
Nice and short summary this time.  Plan to send this off Wednesday or Thursday 
so get corrections in before then.

--
=
Summary Announcements
=
You can still `register `__ for 
`PyCon`_.  The `schedule of talks`_ is now online.  Jim Hugunin is lined up to
be the keynote speaker on the first day, with Guido giving the keynote on
Thursday.  Once again PyCon looks like it is going to be great.

On a different note, as I am sure you are all aware, I am still about a month
behind in summaries.  School this quarter has turned out hectic, and I think a
lack of motivation has set in after finishing my 14 doctoral applications just
over a week ago (and no, that number is not a typo).  For the first time in my
life I am going to draw up a regimented study schedule that fits in weekly
Python time, so I can catch up on the summaries.

This summary is short not because I rushed to finish it: 2.4 was released
just before the period this summary covers, so most discussion was about bug
fixes discovered after the release.

.. _PyCon: http://www.pycon.org/
.. _schedule of talks: http://www.python.org/pycon/2005/schedule.html
===
Summary
===
-
PEP movements
-
I introduced a `proto-PEP 
`__ to 
the list on how one can go about changing CPython's bytecode.  It will need 
rewriting once the AST branch is merged into HEAD on CVS.  Plus I need to get a 
PEP number assigned to me.  =)

Contributing threads:
  - ` proto-pep: How to change Python's bytecode <>`__

Handling versioning within a package

The suggestion of extending import syntax to support explicit version
importation came up.  The idea was to have something along the lines of
``import foo version 2, 4`` so that a package could contain several versions
of itself, with an easy way to specify which version was desired.

The idea didn't fly, though.  The main objection was that import-as support
was all you really needed: ``import foo_2_4 as foo``.  And if you have a ton
of references to a specific package and don't want to burden yourself with
explicit imports, you can always do ``import foo_2_4;
sys.modules["foo"] = foo_2_4`` in a single place before code starts
executing.  That can even be automated by creating a foo.py file that does
the above for you.
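The ``sys.modules`` trick from the thread, made runnable with a stand-in module (the name ``foo_2_4`` is hypothetical, standing for an installed versioned package):

```python
import sys
import types

# Stand-in for an installed versioned package.
foo_2_4 = types.ModuleType("foo_2_4")
foo_2_4.VERSION = (2, 4)
sys.modules["foo_2_4"] = foo_2_4

# The aliasing the summary describes: every later "import foo"
# now resolves to the chosen version.
sys.modules["foo"] = sys.modules["foo_2_4"]

import foo
print(foo.VERSION)   # (2, 4)
```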

You can also look at how wxPython handles it at 
http://wiki.wxpython.org/index.cgi/MultiVersionInstalls .

Contributing threads:
  - `Re: [Pythonmac-SIG] The versioning question... <>`__
===
Skipped Threads
===
- Problems compiling Python 2.3.3 on Solaris 10 with gcc 3.4.1
- 2.4 news reaches interesting places
 see `last summary`_ for coverage of this thread
- RE: [Python-checkins] python/dist/src/Modules posixmodule.c, 2.300.8.10, 
2.300.8.11
- mmap feature or bug?
- Re: [Python-checkins]	python/dist/src/Pythonmarshal.c, 1.79, 1.80
- Latex problem when trying to build documentation
- Patches: 1 for the price of 10.
- Python for Series 60 released
- Website documentation - link to descriptor information
- Build extensions for windows python 2.4 what are the compiler rules?
- Re: [Python-checkins] python/dist/src setup.py, 1.208, 1.209
- Zipfile needs?
fake 32-bit unsigned int overflow with ``x = x & 0xFFFFFFFFL`` and signed
ints with the additional ``if x & 0x80000000L: x -= 0x100000000L``.
- Re: [Python-checkins] python/dist/src/Mac/OSX	fixapplepython23.py, 1.1, 1.2
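The masking idiom from the Zipfile item above can be wrapped as a helper; a sketch in present-day Python, where the ``L`` suffix is gone (helper name invented here):

```python
def to_int32(x):
    """Emulate 32-bit wraparound arithmetic on Python's unbounded ints."""
    x = x & 0xFFFFFFFF        # fake 32-bit unsigned overflow
    if x & 0x80000000:        # sign bit set: sign-extend to a negative value
        x -= 0x100000000
    return x

print(to_int32(0xFFFFFFFF))   # -1
print(to_int32(2 ** 31))      # -2147483648
print(to_int32(123))          # 123
```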


Re: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up function calls)

2005-01-31 Thread Nathan Binkert
> Wouldn't it be nicer to have a facility that let you send messages
> between processes and manage concurrency properly instead?  You'll need
> most of this anyway to do multithreading sanely, and the benefit to the
> multiple process model is that you can scale to multiple machines, not
> just processors.  For brokering data between processes on the same
> machine, you can use mapped memory if you can't afford to copy it
> around, which gives you basically all the benefits of threads with
> fewer pitfalls.

I don't think this is an answered problem.  There are plenty of
researchers on both sides of this fence.  It has not been proven at all
that threads are a bad model.

http://capriccio.cs.berkeley.edu/pubs/threads-hotos-2003.pdf or even
http://www.python.org/~jeremy/weblog/030912.html


Re: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up function calls)

2005-01-31 Thread Donovan Baarda
On Mon, 2005-01-31 at 15:16 -0500, Nathan Binkert wrote:
> > Wouldn't it be nicer to have a facility that let you send messages
> > between processes and manage concurrency properly instead?  You'll need
> > most of this anyway to do multithreading sanely, and the benefit to the
> > multiple process model is that you can scale to multiple machines, not
> > just processors.  For brokering data between processes on the same
> > machine, you can use mapped memory if you can't afford to copy it
> > around, which gives you basically all the benefits of threads with
> > fewer pitfalls.
> 
> I don't think this is an answered problem.  There are plenty of
> researchers on both sides of this fence.  It is not been proven at all
> that threads are a bad model.
> 
> http://capriccio.cs.berkeley.edu/pubs/threads-hotos-2003.pdf or even
> http://www.python.org/~jeremy/weblog/030912.html

These are both threads vs events discussions (ie, threads vs an
async-event handler loop). This has nearly nothing to do with multiple
CPU utilisation. The real discussion for multiple CPU utilisation is
threads vs processes.

Once again, my knowledge of this is old and possibly out of date, but
threads do not scale well on multiple CPU's because threads use shared
memory between each thread. Multiple CPU hardware _can_ have physically
shared memory, but it is hardware hell keeping CPU caches in sync etc.
It is much easier to build a multi-CPU machine with separate memory for
each CPU, and high speed communication channels between each CPU. I
suspect most modern multi-CPU's use this architecture. 

Assuming they have the separate-memory architecture, you get much better
CPU utilisation if you design your program as separate processes
communicating together, not threads sharing memory. In fact, it wouldn't
surprise me if most Operating Systems that support threads don't support
distributing threads over multiple CPU's at all.

A quick google search revealed this;

http://www.heise.de/ct/english/98/13/140/

Keeping in mind the high overheads of sharing memory between CPU's, the
discussion about threads at this url seems to confirm; threads with
shared memory are hard to distribute over multiple CPU's. Different OS's
and/or thread implementations have tried (or just outright rejected)
different ways of doing it, to varying degrees of success. IMHO, the
fact that QNX doesn't distribute threads speaks volumes.

-- 
Donovan Baarda <[EMAIL PROTECTED]>
http://minkirri.apana.org.au/~abo/



Re: Moving towards Python 3.0 (was Re: [Python-Dev] Speed up function calls)

2005-01-31 Thread Donovan Baarda
On Tue, 2005-02-01 at 10:30 +1100, Donovan Baarda wrote:
> On Mon, 2005-01-31 at 15:16 -0500, Nathan Binkert wrote:
> > > Wouldn't it be nicer to have a facility that let you send messages
> > > between processes and manage concurrency properly instead?  You'll need
[...]
> A quick google search revealed this;
> 
> http://www.heise.de/ct/english/98/13/140/
> 
> Keeping in mind the high overheads of sharing memory between CPU's, the
> discussion about threads at this url seems to confirm; threads with
> shared memory are hard to distribute over multiple CPU's. Different OS's
> and/or thread implementations have tried (or just outright rejected)
> different ways of doing it, to varying degrees of success. IMHO, the
> fact that QNX doesn't distribute threads speaks volumes.

Sorry for replying to my reply, but I forgot the bit that brings it all
back On Topic :-)

The belief that the opcode granularity thread-switch driven by the GIL
is the cause of Python's threads being non-distributable is only half
true. 

Since OS's don't distribute threads well, any attempt to "Fix Python's
Threading" so as to make its threads distributable is a waste of
time. The only thing that this might achieve would be to reduce the
latency on thread switches, maybe allowing faster response to OS events
like signals. However, the complexity introduced would cause more
problems than it would fix, and could easily result in worse
performance, not better.

-- 
Donovan Baarda <[EMAIL PROTECTED]>
http://minkirri.apana.org.au/~abo/
