Re: [Python-Dev] PEP 309 enhancements

2005-02-26 Thread Paul Moore
On Sat, 26 Feb 2005 16:50:06 +1000, Nick Coghlan <[EMAIL PROTECTED]> wrote:
> Moving a discussion from the PEP309 SF tracker (Patch #941881) to here, since
> it's gone beyond the initial PEP 309 concept (and the SF tracker is a lousy
> place to have a design discussion, anyway).
> 
> The discussion started when Steven Bethard pointed out that partial objects
> can't be used as instance methods (using new.instancemethod means that the
> automatically supplied 'self' argument ends up in the wrong place, after the
> originally supplied arguments).
[...]
> However, this breaks down for nested partial functions - the call to the 
> nested
> partial again moves the 'self' argument to after the originally supplied
> argument list. This can be addressed by automatically 'unfolding' nested
> partials (which should also give a speed benefit when supplying arguments
> piecemeal, since building incrementally or all at once will get you to the 
> same
> place):
[...]
> At this point, the one thing you can't do is use a partial function as a 
> *class*
> method, as the classmethod implementation doesn't give descriptors any special
> treatment.

While I see a theoretical need for this functionality, I don't know of
a practical use case. PEP 309 is marked as accepted in its current
form, and has an implementation in C (which appears to be justified on
speed grounds - see the SF tracker). The new code is in Python - if
it's to be included, then at *some* stage it needs translating to C.

Personally, I'd rather see partial as it stands, with its current
limitations, included. The alternative seems to be a potentially long
discussion, petering out without conclusion, and the whole thing
missing Python 2.5. (I know that's a long way off, but this already
happened with 2.4...)

> So, instead of the above, I propose the inclusion of a callable 
> 'partialmethod'
> descriptor in the functional module that takes the first positional argument
> supplied at call time and prepends it in the actual function call (this still
> requires automatic 'unfolding' in order to work correctly with nested partial
> functions):

No problem with this, as long as:

1. An implementation (which needs to work alongside the existing C
code) is provided
2. PEP 309 is updated to include an explanation of the issue and a
justification for this solution.
3. The documentation is updated to make it clear why there are two
callables, and which to use where. (That could be difficult to explain
clearly - I understand some of the issues here, but just barely - the
docs would need to be much clearer).

Personally, I feel this classifies as YAGNI (whereas, I'd use the
current partial a *lot*).

OTOH, I think the fact that the existing partial can't be used as an
instance method does deserve mention in the documentation, regardless.
For that your explanation (quoted above, and repeated here) would
suffice:

"""
partial objects can't be used as instance methods (using
new.instancemethod means that the automatically supplied 'self'
argument ends up in the wrong place, after the originally supplied
arguments)
"""

Paul.


Re: [Python-Dev] PEP 309 enhancements

2005-02-26 Thread Nick Coghlan
Paul Moore wrote:
> Personally, I'd rather see partial as it stands, with its current
> limitations, included.  The alternative seems to be a potentially long
> discussion, petering out without conclusion, and the whole thing
> missing Python 2.5. (I know that's a long way off, but this already
> happened with 2.4...)
Yes - I certainly don't think this is a reason to hold off checking in what we 
already have. The only question is whether or not it gets tweaked before the 
first 2.5 alpha.

> So, instead of the above, I propose the inclusion of a callable 'partialmethod'
> descriptor in the functional module that takes the first positional argument
> supplied at call time and prepends it in the actual function call (this still
> requires automatic 'unfolding' in order to work correctly with nested partial
> functions):
> No problem with this, as long as:
> 1. An implementation (which needs to work alongside the existing C
> code) is provided
That's the plan - the Python code is just the easiest way to make the intended 
semantics clear for the discussion.

> 2. PEP 309 is updated to include an explanation of the issue and a
> justification for this solution.
Seems like the most sensible place to record it for posterity (assuming we 
actually end up doing anything about it).

> 3. The documentation is updated to make it clear why there are two
> callables, and which to use where. (That could be difficult to explain
> clearly - I understand some of the issues here, but just barely - the
> docs would need to be much clearer).
The basic concept is that if you're building a partial function in general, use 
'partial'. If you're building an instance method or class method (i.e. the two 
cases where Python automatically provides the first argument), use 
'partialmethod', because the standard 'partial' generally won't do the right 
thing with the first argument.
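
As a rough pure-Python reading of that idea (illustrative only - this is not
the actual patch, and it omits the nested-partial flattening mentioned
earlier):

    class partialmethod(object):
        # freeze args/kw now; at call time the *first* positional
        # argument (normally 'self' or the class) is prepended to the
        # underlying call, and the rest are appended as usual
        def __init__(self, fn, *args, **kw):
            self.fn, self.args, self.kw = fn, args, kw

        def __get__(self, obj, objtype=None):
            # descriptor protocol, so instances can sit directly in a
            # class body and still bind like an ordinary method
            if obj is None:
                return self
            def bound(*args, **kw):
                return self(obj, *args, **kw)
            return bound

        def __call__(self, first, *args, **kw):
            d = dict(self.kw)
            d.update(kw)
            return self.fn(first, *(self.args + args), **d)

    # e.g. with  def greet(self, greeting): ...
    #   class C(object):
    #       hello = partialmethod(greet, 'hello')
    #   C().hello()  ->  greet(<C instance>, 'hello')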

> Personally, I feel this classifies as YAGNI (whereas, I'd use the
> current partial a *lot*).
I can certainly appreciate that point of view. It's a rather large conceptual 
hole though, so if it can be plugged elegantly, I'd like to see that happen 
before 2.5 goes live.

> OTOH, I think the fact that the existing partial can't be used as an
> instance method does deserve mention in the documentation, regardless.
> For that your explanation (quoted above, and repeated here) would
> suffice:
> """
> partial objects can't be used as instance methods (using
> new.instancemethod means that the automatically supplied 'self'
> argument ends up in the wrong place, after the originally supplied
> arguments)
> """
The distinction is actually finer than that - it can work in some cases, 
provided the predefined arguments are all given as keyword arguments rather than 
positional arguments.

The real issue is that there may be situations where you don't have control over 
how the function you want to turn into an instance method was created, and if 
someone has used partial with positional arguments to create the function, you 
may have no practical way out. 'partialmethod' fixes that - it allows creating a 
partial function which expects the next positional argument to be the first 
argument to the underlying function, while remaining positional arguments are 
appended as usual.
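
Concretely (again using a toy pure-Python partial purely for illustration):

    import new

    class partial(object):
        # toy stand-in: positional args frozen first, keywords merged
        def __init__(self, fn, *args, **kw):
            self.fn, self.args, self.kw = fn, args, kw
        def __call__(self, *args, **kw):
            d = dict(self.kw)
            d.update(kw)
            return self.fn(*(self.args + args), **d)

    def greet(self, greeting='hi'):
        print greeting, self

    class A(object):
        pass

    # Works: 'greeting' is frozen as a keyword, so the first positional
    # slot is still free for the automatically supplied 'self'.
    A.hello = new.instancemethod(partial(greet, greeting='hello'), None, A)
    A().hello()        # greet(<A instance>, greeting='hello')

    # Breaks: the positionally frozen 'hello' claims the 'self' slot.
    A.broken = new.instancemethod(partial(greet, 'hello'), None, A)
    A().broken()       # greet('hello', <A instance>) - arguments swapped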

Regards,
Nick.
--
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
http://boredomandlaziness.skystorm.net


[Python-Dev] pystone

2005-02-26 Thread stelios xanthakis
Hi,
It seems that in pystone's Proc1, the else branch is never reached.
Is this OK?
Stelios


[Python-Dev] PEP 309

2005-02-26 Thread Raymond Hettinger
Paul Moore wrote:
> Personally, I'd rather see partial as it stands, with its current
> limitations, included.  The alternative seems to be a potentially long
> discussion, petering out without conclusion, and the whole thing
> missing Python 2.5. (I know that's a long way off, but this already
> happened with 2.4...)

-0  My preference is that it not go in as-is.  

It is better to teach how to write a closure than to introduce a new
construct that has its own problems and doesn't provide a real
improvement over what we have now.  Despite my enthusiasm for functional
programming and the ideas in the PEP, I find the PFA implementation
annoying.

I had tried out the implementation that was being pushed for Py2.4 and
found it wanting.  Hopefully, it has improved since then.  Here are a
few thoughts based on trying to apply it to my existing code.

* The PFA implementation proposed for Py2.4 ran slower than an
equivalent closure.  If the latest implementation offers better
performance, then that may be a reason for having it around.

* Having PFA only handle the first argument was a PITA with Python.  In
Haskell and ML, function signatures seem to have been designed with
argument ordering better suited for left curries.  In contrast, Python
functions seem to have adverse argument ordering where the variables you
want to freeze appear toward the right.  This is subjective and may just
reflect the way I was aspiring to use "partial" to freeze options and
flags rather than the first argument of a binary operator.  Still, I
found closures to be more flexible in that they could handle any
argument pattern and could freeze more than one variable or keyword at a
time.

* The instance method limitation never came up for me.  However, it
bites to have a tool working in a way that doesn't match your mental
model.  We have to document the limitations, keep them in mind while
programming, and hope to remember them as possible causes if bugs ever
arise.  It would be great if these limitations could get ironed out.

* Using the word "partial" instead of "lambda" traded one bit of
unreadability for another.  The "lambda" form was often better because
it didn't abstract away its mechanism and because it supported more
general expressions.  I found that desk-checking code was harder because
I had to mentally undo the abstraction to check that the argument
signature was correct.

* It is not clear that the proposed implementation achieves one of the
principal benefits laid out in the PEP:  "I agree that lambda is usually
good enough, just not always. And I want the possibility of useful
introspection and subclassing."

If we get a better implementation, it would be nice if the PEP were
updated with better examples.  The TkInter example is weak because we
often want to set multiple defaults at the same time (foreground,
background, textsize, etc) and often those values are config options
rather than hardwired constants.


Raymond



Re: [Python-Dev] PEP 309

2005-02-26 Thread Paul Moore
On Sat, 26 Feb 2005 13:20:46 -0500, Raymond Hettinger
<[EMAIL PROTECTED]> wrote:
> It is better to teach how to write a closure than to introduce a new
> construct that has its own problems and doesn't provide a real
> improvement over what we have now.

You make some good points. But this all reminds me of the discussion
over itemgetter/attrgetter. They also special-case particular uses of
lambda, and in those cases the stated benefits were speed and
(arguably) readability (I still dislike the names, personally).

I think partial hits a similar spot - it covers a fair number of
common cases, and the C implementation is quoted as providing a speed
advantage over lambda. On the minus side, I'm not sure it covers as
many uses as {item,attr}getter, but on the plus side, I like the name
better :-) Seriously, not needing to explicitly handle *args and **kw
is a genuine readability benefit of partial.

Of course, optimising Python function calls, and optimising lambda to
death, would remove the need for any of these discussions. But there's
no real indication that this is likely in the short term...

This got me thinking, so I did a quick experiment:

>python -m timeit -s "from operator import itemgetter; l=range(8)"
"itemgetter(1)(l)"
100 loops, best of 3: 0.548 usec per loop

>python -m timeit -s "l=range(8)" "(lambda x:x[1])(l)"
100 loops, best of 3: 0.597 usec per loop

That's far less of a difference than I expected from itemgetter! The
quoted speed improvement in the C implementation of partial is far
better... So I got worried, and tried a similar experiment with the C
implementation of the functional module:

>python -m timeit -s "import t" "t.partial(t.f, 1, 2, 3, a=4, b=5)(6,
7, 8, c=9, d=10)"
10 loops, best of 3: 3.91 usec per loop

>python -m timeit -s "import t" "(lambda *args, **kw: t.f(1, 2, 3,
a=4, b=5, *args, **kw))(6, 7, 8, c=9, d=10)"
10 loops, best of 3: 3.6 usec per loop

[Here, t is just a helper which imports partial, and defines f as def
f(*args, **kw): return (args, kw)]
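
For reference, a helper matching that description is just the following
(reconstructed from the bracketed description above, assuming the PEP 309
extension module is importable as 'functional'; the exact file isn't shown
in the thread):

    # t.py - throwaway timing helper
    from functional import partial   # C implementation from the PEP 309 patch

    def f(*args, **kw):
        # return the full call signature so nothing gets optimised away
        return (args, kw)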

Now I wonder. Are my tests invalid, did lambda get faster, or is the
"lambda is slow" argument a myth?

Hmm, I'm starting to go round in circles here. I'll post this as it
stands, with apologies if it's incoherent. Blame it on a stinking cold
:-(

Paul.


RE: [Python-Dev] PEP 309

2005-02-26 Thread Raymond Hettinger
> But this all reminds me of the discussion
> over itemgetter/attrgetter. They also special-case particular uses of
> lambda, and in those cases the stated benefits were speed and
> (arguably) readability (I still dislike the names, personally).

I wouldn't use those as justification for partial().  The names suck and
the speed-up is small.  They were directed at a specific and recurring
use case related to key= arguments.



> I think partial hits a similar spot - it covers a fair number of
> common cases, 

Are you sure about that?  Contriving examples is easy, but download a
few modules, scan them for use cases, and you may find, as I did, that
partial() rarely applies.  The argument order tends to be problematic.

Grepping through the standard library yields no favorable examples.  In
inspect.py, you could replace "formatvarkw=lambda name: '**' + name"
with "partial(operator.add, '**')", but that would not be an improvement.

Looking through the builtin functions also provides a clue:

  cmp(x,y)          # partial(cmp, refobject) may be useful.
  coerce(x,y)       # not suitable for partial().
  divmod(x,y)       # we would want a right curry.
  filter(p,s)       # partial(filter, p) might be useful.
  getattr(o,n,d)    # we would want a right curry.
  hasattr(o,n)      # we would want a right curry.
  int(x,b)          # we would want a right curry.
  isinstance(o,c)   # we would want a right curry.
  issubclass(a,b)   # we would want a right curry.
  iter(o,s)         # we would want a right curry.
  long(x,b)         # we would want a right curry.
  map(f,s)          # partial(map, f) may be useful.
  pow(x,y,z)        # more likely to want to freeze y or z.
  range([a],b,[c])  # not a good candidate.
  reduce(f,s,[i])   # could work for operator.add and .mul
  round(x, n)       # we would want a right curry.
  setattr(o,n,v)    # more likely to want to freeze n.
 


> the C implementation is quoted as providing a speed
> advantage over lambda. 

Your recent timings and my old timings show otherwise.



> Seriously, not needing to explicitly handle *args and **kw
> is a genuine readability benefit of partial.

I hope that is not the only real use case.  How often do you need to
curry a function with lots of positional and keyword arguments?  Even
when it does arise, it may be a code smell indicating that subclassing
ought to be used.



> Now I wonder. Are my tests invalid, did lambda get faster, or is the
> "lambda is slow" argument a myth?

The test results are similar to what I got when I had tested the version
proposed for Py2.4.  The lambda version will win by an even greater
margin if you put it on an equal footing by factoring out the attribute
lookup with something like f=t.f.

Calling Python functions (whether defined with lambda or def) is slower
than C function calls because of the time to set up the stack frame.
While partial() saves that cost, it has to spend some time building the
new argument tuple and forwarding the call.  Your timings show that to
be a net loss.

Sidenote:  Some C methods with exactly zero or one argument have
optimized paths that save time spent constructing, passing, and
unpacking an argument tuple.  Since partial() is aimed at multi-arg
functions, that part of "lambda is slower" is not relevant to the
comparison.



> Hmm, I'm starting to go round in circles here. 

I also wish that partial() ran faster than closures, that it didn't have
limitations, and that it applied in more situations.  C'est la vie.



Raymond



RE: [Python-Dev] PEP 309

2005-02-26 Thread Raymond Hettinger
> I did a quick experiment:
> 
> >python -m timeit -s "from operator import itemgetter; l=range(8)"
> "itemgetter(1)(l)"
> 100 loops, best of 3: 0.548 usec per loop
> 
> >python -m timeit -s "l=range(8)" "(lambda x:x[1])(l)"
> 100 loops, best of 3: 0.597 usec per loop
> 
> That's far less of a difference than I expected from itemgetter! 

You've timed how long it takes to both construct and apply the retrieval
function.  The relevant part is only the application:

C:\pydev>python -m timeit -r9 -s "from operator import itemgetter;
s=range(8); f=itemgetter(1)" "f(s)"
100 loops, best of 9: 0.806 usec per loop

C:\pydev>python -m timeit -r9 -s "s=range(8); f=lambda x:x[1]" "f(s)"
10 loops, best of 9: 1.18 usec per loop

So the savings is about 30%, which is neither astronomical nor
negligible.



Raymond



Re: [Python-Dev] PEP 309

2005-02-26 Thread Steven Bethard
On Sat, 26 Feb 2005 19:26:11 -0500, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> Are you sure about that?  Contriving examples is easy, but download a
> few modules, scan them for use cases, and you may find, as I did, that
> partial() rarely applies.  The argument order tends to be problematic.
>
> Grepping through the standard library yields no favorable examples.

I also didn't find many the last time I looked through:

http://mail.python.org/pipermail/python-list/2004-December/257990.html

> In inspect.py, you could replace "formatvarkw=lambda name: '**' + name"
> with "partial(operator.add, '**') but that would not be an improvement.

Yeah, I remember thinking that the nicer way to write this was probably
   formatvarkw='**%s'.__mod__

Steve
-- 
You can wordify anything if you just verb it.
--- Bucky Katt, Get Fuzzy


Re: [Python-Dev] PEP 309

2005-02-26 Thread Nick Coghlan
Raymond Hettinger wrote:
> * The PFA implementation proposed for Py2.4 ran slower than an
> equivalent closure.  If the latest implementation offers better
> performance, then that may be a reason for having it around.
Not having done the timing, I'll defer to Paul and yourself here. However, one 
of the proposed enhancements is to automatically flatten out nested partial 
calls - this won't speed up the basic cases, but will allow incremental 
construction in two or more stages without a speed loss at the final call.

> flags rather than the first argument of a binary operator.  Still, I
> found closures to be more flexible in that they could handle any
> argument pattern and could freeze more than one variable or keyword at a
> time.
I'm not sure what this one is about - the PEP 309 implementation allows a single 
partial call to freeze an arbitrary number of positional arguments (starting 
from the left), and an arbitrary number of keyword arguments at any position. 
(This is why the name was changed from curry to partial - it was general-purpose 
partial function application, rather than left currying.)

> * The instance method limitation never came up for me.  However, it
> bites to have a tool working in a way that doesn't match your mental
> model.  We have to document the limitations, keep them in mind while
> programming, and hope to remember them as possible causes if bugs ever
> arise.  It would be great if these limitations could get ironed out.
The 'best' idea I've come up with so far is to make partial a class factory 
instead of a straight class, taking an argument that states how many positional 
arguments to prepend at call time. A negative value would result in the addition 
of (len(callargs)+1) to the position at call time.

Then, partial() would return a partial application class which appended all 
positional arguments at call time, partial(-1) a class which prepended all 
positional arguments. partial(1) would be the equivalent of partialmethod, with 
the first argument prepended, and the rest appended.

In the general case, partial(n)(fn, *args1)(*args2) would give a call that 
looked like fn(args2[:n] + args1 + args2[n:]) for positive n, and 
fn(args2[:len(args2)+n+1] + args1 + args2[len(args2)+n+1:]) for negative n. n==0 
and n==-1 being obvious candidates for tailored implementations that avoided the 
unneeded slicing. The presence of keyword arguments at any point wouldn't affect 
the positional arguments.

With this approach, it may be necessary to ditch the flattening of nested 
partials in the general case. For instance, partial(partial(-1)(fn, c), a)(b) 
should give an ultimate call that looks like fn(a, b, c). Simple cases where the 
nested partial application has the same number of prepended arguments as the 
containing partial application should still permit flattening, though.
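
A rough sketch of those semantics (illustrative only, not a proposed patch,
and ignoring the nested-partial flattening issue just mentioned):

    def partial(n=0):
        # class factory: instances of the returned class insert their
        # frozen positional args after the first n call-time positionals;
        # a negative n counts from the right as described above
        class _partial(object):
            def __init__(self, fn, *args, **kw):
                self.fn, self.args, self.kw = fn, args, kw
            def __call__(self, *callargs, **callkw):
                kw = dict(self.kw)
                kw.update(callkw)
                if n >= 0:
                    pos = n
                else:
                    pos = len(callargs) + 1 + n
                args = callargs[:pos] + self.args + callargs[pos:]
                return self.fn(*args, **kw)
        return _partial

    # partial()(cmp, ref)(x)     ->  cmp(ref, x)     (frozen args first)
    # partial(-1)(divmod, y)(x)  ->  divmod(x, y)    (frozen args last)
    # partial(1)(pow, y)(x, z)   ->  pow(x, y, z)    (partialmethod-like)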

* Using the word "partial" instead of "lambda" traded one bit of
unreadability for another.
The class does do partial function application though - I thought the name fit 
pretty well.

> * It is not clear that the proposed implementation achieves one of the
> principal benefits laid out in the PEP:  "I agree that lambda is usually
> good enough, just not always. And I want the possibility of useful
> introspection and subclassing."
I think it succeeds on the introspection part, since the flattening of nested 
partials relies on the introspection abilities. Not so much on the subclassing - 
partialmethod wasn't able to reuse too much functionality from partial.

> If we get a better implementation, it would be nice if the PEP were
> updated with better examples.  The TkInter example is weak because we
> often want to set multiple defaults at the same time (foreground,
> background, textsize, etc) and often those values are config options
> rather than hardwired constants.
Hmm - the PEP may give a misleading impression of what the current 
implementation can and can't do. It's already significantly more flexible than 
what you mention here. For instance, most of the examples you give below could 
be done using keyword arguments.

That's likely to be rather slow though, since you end up manipulating 
dictionaries rather than tuples, so I won't pursue that aspect. Instead, I'm 
curious how many of them could be implemented using positional arguments and the 
class factory approach described above:

  cmp(x,y)          # partial()(cmp, refobject)
  divmod(x,y)       # partial(-1)(divmod, y)
  filter(p,s)       # partial()(filter, p)
  getattr(o,n,d)    # partial(-1)(getattr, n, d)
  hasattr(o,n)      # partial(-1)(hasattr, n)
  int(x,b)          # partial(-1)(int, b)
  isinstance(o,c)   # partial(-1)(isinstance, c)
  issubclass(a,b)   # partial(-1)(issubclass, b)
  iter(o,s)         # partial(-1)(iter, s)
  long(x,b)         # partial(-1)(long, b)
  map(f,s)          # partial()(map, f)
  pow(x,y,z)        # partial(1)(pow, y) OR partial(-1)(pow, z)
                    # OR partial(-1)(pow, y, z)
  range([a],b,[c])  # partial(-1)(range, c) (Default step other than 1)
 

Re: [Python-Dev] PEP 309

2005-02-26 Thread Dima Dorfman
Nick Coghlan <[EMAIL PROTECTED]> wrote:
> Raymond Hettinger wrote:
> >* The instance method limitation never came up for me.  However, it
> >bites to have a tool working in a way that doesn't match your mental
> >model.  We have to document the limitations, keep them in mind while
> >programming, and hope to remember them as possible causes if bugs ever
> >arise.  It would be great if these limitations could get ironed out.
> 
> The 'best' idea I've come up with so far is to make partial a class factory 
> instead of a straight class, taking an argument that states how many 
> positional arguments to prepend at call time. A negative value would result 
> in the addition of (len(callargs)+1) to the position at call time.

Other dynamic languages, like Lisp, are in the same boat in this
respect--real currying (and things that look like it) doesn't work too well
because the API wasn't designed with it in mind (curried languages have
functions that take less-frequently-changing parameters first, but most
other languages take them last). Scheme has a nice solution for this in
SRFI 26 (http://srfi.schemers.org/srfi-26/). It looks like this:

(cut vector-set! x <> 0)

That produces a function that takes one argument. The <> is an
argument slot; for every <> in the cut form, the resultant callable
takes another argument. (This explanation is incomplete, and there are
some other features; read the SRFI for details.) I've been using
something similar in Python for a while, and I really like it. It
doesn't look as good because the slot has to be a real object and not
punctuation, but it works just as well. For example:

cut(islice, cutslot, 0, 2)

That's pretty readable to me. My version also allows the resultant
callable to take any number of parameters after the slots have been
satisfied, so partial is just the special case of no explicit slots.
Perhaps a full example will make it clearer:

>>> def test(a, b, c):
... print 'a', a, 'b', b, 'c', c
... 
>>> f = cut(test, cutslot, 'bravo')
>>> f('alpha', 'charlie')
a alpha b bravo c charlie

Here, b is specialized at cut time, a is passed through the slot, and
c is passed through the implicit slots at the end. The only thing this
can't do is a generic right-"curry"--where we don't know how many
parameters come before the one we want to specialize. If someone wants
to do that, they're probably better off using keyword arguments.
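
A rough pure-Python guess at such a cut helper (the actual implementation
isn't shown here, so this only reproduces the behaviour of the examples;
keyword handling is left out):

    class _CutSlot(object):
        pass
    cutslot = _CutSlot()     # the explicit argument-slot marker

    def cut(fn, *spec):
        def call(*callargs):
            callargs = list(callargs)
            args = []
            for item in spec:
                if item is cutslot:
                    # each explicit slot consumes the next call-time
                    # positional argument, in order
                    args.append(callargs.pop(0))
                else:
                    args.append(item)
            # leftovers fall through the implicit slots at the end, so
            # cut with no explicit slots behaves like partial
            args.extend(callargs)
            return fn(*args)
        return call

    # cut(test, cutslot, 'bravo')('alpha', 'charlie')
    #   ->  test('alpha', 'bravo', 'charlie')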

So far, my most common use for this is to specialize the first
argument to map, zip, or reduce. Very few cases actually need an
explicit cutslot, but those that do (like the islice example above)
look pretty good with it. My reasons for using cut instead of lambda
are usually cosmetic--the cut form is shorter and reads better when
what I'm doing would be a curry in a language designed for that. Throw
in a compose function and I almost never need to use lambda in a
decorator.


Re: [Python-Dev] PEP 309

2005-02-26 Thread Nick Coghlan
Dima Dorfman wrote:
> Nick Coghlan <[EMAIL PROTECTED]> wrote:
> Here, b is specialized at cut time, a is passed through the slot, and
> c is passed through the implicit slots at the end. The only thing this
> can't do is a generic right-"curry"--where we don't know how many
> parameters come before the one we want to specialize. If someone wants
> to do that, they're probably better off using keyword arguments.
I think Raymond posted some decent examples using the builtins where being able 
to bind the last few arguments on the right, with decent performance, would be 
desirable. As you yourself said - Python functions tend to have the arguments 
one is most likely to want to lock down on the right of the function signature, 
rather than on the left.

The current PEP 309 certainly supports that in the form of keyword arguments, 
but anyone interested in performance is going to revert to the lambda solution.

The class factory approach relies on shaping the partial application of the 
arguments by specifying where the call time positional arguments are to be 
placed (with 'all at the start', 'all at the end' and 'one at the start, rest at 
the end' being the most common options).

Regards,
Nick.
--
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
http://boredomandlaziness.skystorm.net