Terry Reedy wrote:
I almost agree, except that it uses the dict API, not the list API.
Yes, as long as the API is dict-like, it really needs to
be thought of as a kind of dict.
Perhaps the terminology should be
ordereddict -- what we have here
sorteddict -- hypothetical future type that k
rdmur...@bitdance.com wrote:
I actually like StableDict best. When I hear that I think, "ah, the
key order is stable in the face of insertions, unlike a regular dict".
But it still doesn't convey what the ordering actually *is*.
--
Greg
___
Python-
Neil Schemenauer wrote:
What I would like to see is a module that provides a low-level API
for doing cross-platform asynchronous IO. The two necessary parts
are:
* a wrapper that allows non-blocking reads and writes on
channels (sockets, file descriptors, serial ports, etc)
*
Daniel Stutzbach wrote:
If you have a working select(), it will tell you the sockets on which
read() and write() won't block, so non-blocking reads and writes are not
necessary.
No, but there should be an interface that lets you say
"when something comes in on this fd, call this function
for
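The kind of interface being described ("when something comes in on this fd, call this function") can be sketched on top of select(). The names here are hypothetical illustration, not asyncore's actual API:

```python
import select
import socket

def run_select_loop(read_callbacks):
    """Call read_callbacks[fd]() whenever fd is ready for reading.

    read_callbacks: dict mapping sockets (or file descriptors) to
    zero-argument callables.  A callback returning False removes its fd,
    and the loop exits once no fds remain registered.
    """
    while read_callbacks:
        ready, _, _ = select.select(list(read_callbacks), [], [])
        for fd in ready:
            if read_callbacks[fd]() is False:
                del read_callbacks[fd]
```

A real event loop would also dispatch on writability, errors, connects and accepts, which is essentially the handle_*() family Josiah mentions below.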
Collin Winter wrote:
Reusing the Pickler without clearing the
memo will produce pickles that are, as best I can see, invalid
I'm not sure what you mean by "reusing the pickler" here,
and how it can produce an invalid pickle.
I think what the docs mean by it is continuing to pickle
objects to
Josiah Carlson wrote:
A read callback, a write callback. What about close, error, connect,
and accept callbacks?
Yep, all those as well.
I hate to say it (not really), but that's pretty much the handle_*()
methods of asyncore :/ .
Well, then, what's the problem?
Is there anything else pe
Guido van Rossum wrote:
This was a bad idea (*), and I'd be happy to ban it -- but we'd
probably have to bump the pickle protocol version in order to maintain
backwards compatibility.
If you're talking about multiple calls to dump() on the same
pickler, it might be a bad idea for a network con
Guido van Rossum wrote:
Then it'd be better to have a method clear_memo() on pickle objects.
You should have that anyway. I was just suggesting a
way of preserving compatibility with old code without
exposing all the details of the memo.
--
Greg
Antoine Pitrou wrote:
For starters, since py3k is supposed to support non-blocking IO, why not write a
portable API to make a raw file or socket IO object non-blocking?
I think we need to be clearer what we mean when we talk
about non-blocking in this context. Normally when you're
using select
Hrvoje Niksic wrote:
Under Linux, select() may report a socket file descriptor
as "ready for reading", while nevertheless
a subsequent read blocks.
Blarg. Linux is broken, then. This should not happen.
This could for example
happen when data has arrived b
Michael Haggerty wrote:
A similar effect could *almost* be obtained without accessing the memos
by saving the pickled primer itself in the database. The unpickler
would be primed by using it to load the primer before loading the record
of interest. But AFAIK there is no way to prime new pickle
Antoine Pitrou wrote:
If these strings are not interned, then perhaps they should be.
I think this is a different problem. Even if the strings are
interned, if you start with a fresh pickler each time, you
get a copy of the strings in each pickle. What he wants is
to share strings between diff
Michael Haggerty wrote:
Typically, the purpose of a database is to persist data across program
runs. So typically, your suggestion would only help if there were a way
to persist the primed Pickler across runs.
I don't think you need to be able to pickle picklers.
In the case in question, the
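The memo-sharing effect under discussion can be sketched with the stdlib pickle API: a single Pickler keeps its memo across dump() calls, so a large object pickled in the first record is emitted once and referenced thereafter (this is exactly the practice Guido questions elsewhere in the thread; Pickler.clear_memo() resets it, and such a stream must be read back with a single Unpickler calling load() repeatedly):

```python
import io
import pickle

shared = "x" * 1000  # a large string appearing in many records

# Fresh pickler per record: the string is embedded in every pickle.
separate = sum(len(pickle.dumps((i, shared))) for i in range(10))

# One pickler, one memo: later dumps refer back to the first copy.
buf = io.BytesIO()
p = pickle.Pickler(buf)
for i in range(10):
    p.dump((i, shared))
combined = len(buf.getvalue())

assert combined < separate
```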
Barry Warsaw wrote:
Of course, a careful *nix application can ensure that the file owners
and mod bits are set the way it needs them to be set. A convenience
function might be useful though.
A specialised function would also provide a place for
dealing with platform-specific extensions, su
Lie Ryan wrote:
I actually prefer strings. Just like 'w' or 'r' in open().
Or why not add "f" "c" as modes?
open('file.txt', 'wf')
I like this, because it doesn't expand the signature that
file-like objects need to support. If you're wrapping
another file object you just need to pass on the
Martin v. Löwis wrote:
That should be implemented by passing O_SYNC on open, rather than
explicitly calling fsync.
On platforms which have it (MacOSX doesn't seem to,
according to the man page).
This is another good reason to put these things in the
mode string.
--
Greg
Antoine Pitrou wrote:
What do you mean? open() doesn't allow you to wrap other file objects.
I'm talking about things like GzipFile that take a
filename and mode, open the file and then wrap the
file object.
--
Greg
Nick Coghlan wrote:
[[fill]align][sign][#][0][minimumwidth][,sep][.precision][type]
'sep' is the new field that defines the thousands separator.
Wouldn't it be better to use a locale setting for this,
instead of having to specify it in every format string?
If an app is using a particular th
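Both options being debated ended up expressible in the format mini-language: the ',' option (as eventually adopted in PEP 378) hard-codes a comma, while the 'n' type takes its separator from the current locale. A small sketch:

```python
import locale

# The ',' option: always a comma, regardless of locale.
assert format(1234567, ',d') == '1,234,567'
assert format(1234567.891, ',.2f') == '1,234,567.89'

# The 'n' type consults the locale; the 'C' locale does no grouping.
locale.setlocale(locale.LC_ALL, 'C')
assert format(1234567, 'n') == '1234567'
```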
James Y Knight wrote:
You might be interested to know that in India, the commas don't come
every 3 digits. In india, they come every two digits, after the first
three. Thus one billion = 1,00,00,00,000. How are you gonna represent
*that* in a formatting mini-language? :)
We outsource it.
Stephen J. Turnbull wrote:
Greg Ewing writes:
> If an app is using a particular thousands separator in
> one place, it will probably want to use it everywhere.
Not if that app is internationalized (eg, a webapp that serves both
Americans and Chinese).
I don't think you'll
Nick Coghlan wrote:
It actually wouldn't be a bad place to put a "create a temporary file
and rename it to when closing it" helper class.
I'm not sure it would be a good idea to make that
behaviour automatic on closing. If anything goes
wrong while writing the file, you *don't* want the
renam
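The helper Nick describes, amended per Greg's concern, could be sketched as a context manager that renames only on a clean exit and discards the temp file on error (names hypothetical):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def atomic_write(path, mode='w'):
    """Write to a temp file; rename it over `path` only on success."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    f = os.fdopen(fd, mode)
    try:
        yield f
        f.close()
        os.replace(tmp, path)   # atomic on POSIX
    except BaseException:
        f.close()
        os.unlink(tmp)          # on error, leave the original untouched
        raise
```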
Aahz wrote:
This is pretty much the canonical example showing why control-flow
exceptions are a Good Thing. They're a *structured* goto.
I'm wondering whether what we really want is something
that actually *is* a structured goto. Or something like
a very light-weight exception that doesn't ca
Nick Coghlan wrote:
One of the
premises of PEP 343 was "Got a frequently recurring block of code that
only has one variant sequence of statements somewhere in the middle?
Well, now you can factor that out
Um, no -- it says explicitly right at the very top of
PEP 343 that it's only about factori
Antoine Pitrou wrote:
Do we really want to add a syntactic feature which has such a complicated
expansion? I fear it will make code using "yield from" much more difficult to
understand and audit.
As I've said before, I don't think the feature itself is
difficult to understand. You're not meant
P.J. Eby wrote:
My concern is that allowing 'return value' in generators is going to be
confusing, since it effectively causes the return value to "disappear"
if you're not using it in this special way with some framework that
takes advantage.
But part of all this is that you *don't* need a
P.J. Eby wrote:
(I'm thus finding it hard
to believe there's a non-contrived example that's not doing I/O,
scheduling, or some other form of co-operative multitasking.)
Have you seen my xml parser example?
http://www.cosc.canterbury.ac.nz/greg.ewing/python/yield-from/
Whether you'll conside
Antoine Pitrou wrote:
If it's really enough to understand and debug all corner cases of using "yield
from", then fair enough.
In the case where the subiterator is another generator and
isn't shared, it's intended to be a precise and complete
specification. That covers the vast majority of the
Nick Coghlan wrote:
The main problem is that many of these methods are not only used
internally, but are *also* part of the public C API made available to
extension modules. We want misuse of the latter to trigger exceptions,
not segfault the interpreter.
But is it worth slowing everything dow
Guido van Rossum wrote:
I really don't like to have things whose semantics is
defined in terms of code inlining -- even if you don't mean that as
the formal semantics but just as a mnemonic hint.
Think about it the other way around, then. Take any chunk
of code containing a yield, factor it ou
Steve Holden wrote:
What about extending the syntax somewhat to
yield expr for x from X
I can't see much advantage that would give you
over writing
for x in X:
yield expr
There would be little or no speed advantage,
since you would no longer be able to shortcut
the intermediate gener
Guido van Rossum wrote:
The way I think of it, that refactoring has nothing to do with
yield-from.
I'm not sure what you mean by that. Currently it's
*impossible* to factor out code containing a yield.
Providing a way to do that is what led me to invent
this particular version of yield-from in
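The refactoring Greg means can be sketched with the syntax the PEP proposes (Python 3.3+): a block containing yields is factored into a subgenerator, and `yield from` both forwards every yield (and send/throw) and delivers the subgenerator's return value. Function names here are illustrative:

```python
def sum_chunk(values):
    # Factored-out code containing yields.
    total = 0
    for v in values:
        yield v          # passes straight through to the outer caller
        total += v
    return total         # becomes the value of the 'yield from' expression

def totals(chunks):
    grand = 0
    for chunk in chunks:
        grand += yield from sum_chunk(chunk)
    yield ('total', grand)
```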
P.J. Eby wrote:
Now, if somebody came up with a different way to spell the extra value
return, I wouldn't object as much to that part. I can just see people
inadvertently writing 'return x' as a shortcut for 'yield x; return',
Well, they need to be educated not to do that. I'm
not sure they
Guido van Rossum wrote:
That's all good. I just don't think that a presentation in terms of
code in-lining is a good idea.
I was trying to describe it in a way that would give
some insight into *why* the various aspects of the
formal definition are the way they are. The inlining
concept seemed
Nick Coghlan wrote:
I think the main thing that may be putting me off is the amount of
energy that went into deciding whether or not to emit Py3k warnings or
DeprecationWarning or PendingDeprecationWarning for use of the old
threading API.
Having made that decision, though, couldn't the result
Nick Coghlan wrote:
Although the PEP may still want to mention how one would write *tests*
for these things. Will the test drivers themselves need to be generators
participating in some kind of trampoline setup?
I don't see that tests are fundamentally different
from any other code that wants
Trying to think of a better usage example that
combines send() with returning values, I've realized
that part of the problem is that I don't actually
know of any realistic uses for send() in the first
place.
Can anyone point me to any? Maybe it will help
to inspire a better example.
--
Greg
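For what it's worth, the stock textbook use of send() is a consumer coroutine such as a running averager, where each send() both feeds in a value and receives the updated statistic:

```python
def averager():
    """Yield the running mean of every value sent in so far."""
    total = 0.0
    count = 0
    mean = None
    while True:
        value = yield mean
        total += value
        count += 1
        mean = total / count
```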
Here's a new draft of the PEP. I've added a Motivation
section and removed any mention of inlining.
There is a new expansion that incorporates recent ideas,
including the suggested handling of StopIteration raised
by a throw() call (i.e. if it wasn't the one thrown in,
treat it as a return value)
Antoine Pitrou wrote:
There seems to be a misunderstanding as to how generators
are used in Twisted. There isn't a global "trampoline" to schedule generators
around. Instead, generators are wrapped with a decorator (*) which collects each
yielded value (it's a Deferred object) and attaches to it
Guido van Rossum wrote:
I'll gladly take that as an added rationalization of my plea not to
change datetime.
In the case of datetime, could perhaps just the
module name be changed so that it's not the same
as a name inside the module? Maybe call it
date_time or date_and_time.
--
Greg
Guido van Rossum wrote:
Can I suggest that API this takes a glob-style pattern?
Globs would be nice to have, but the minimum
needed is some kind of listdir-like functionality.
Globbing can be built on that if need be.
--
Greg
Olemis Lang wrote:
... well ... it is too long ... :-§ ... perhaps it is better this way ...
--lmdtbicdfyeiwdimoweiiiapiyssiansey ... :P
Isn't that the name of a town in Wales somewhere?
--
Greg
Guido van Rossum wrote:
(Well here is Greg's requested use case for .send(). :-)
There was a complaint that my return-value-with-send
example was too much of a coroutine scenario, so I
was hoping to find something un-coroutine-like. But
if coroutines are the main uses for send in the first
pla
P.J. Eby wrote:
And they *still* wouldn't be able to do away with their trampolines --
It's not really about doing away with trampolines anyway.
You still need at least one trampoline-like thing at the
top. What you do away with is the need for creating
special objects to yield, and the attend
P.J. Eby wrote:
In particular, it should explain why these choices are so costly as to
justify new syntax and a complex implementation:
If avoiding trampolines was the only reason for
yield-from, that mightn't be enough justification
on its own. But it addresses several other use
cases as well.
Nick Coghlan wrote:
Since correctly written generators are permitted to
convert GeneratorExit to StopIteration, the 'yield from' expression
should detect when that has happened and reraise the original exception.
I'll have to think about that a bit, but you're
probably right.
it is also neces
Guido van Rossum wrote:
That +0 could turn into a +1 if there was a way to flag this as an
error (at runtime), at least if the return is actually executed:
def g():
    yield 42
    return 43

for x in g():
    print x   # probably expected to print 42 and then 43
Perhaps the exception used i
P.J. Eby wrote:
Could we at least have some syntax like 'return from yield with 43', to
distinguish it from a regular return, clarify that it's returning a
value to a yield-from statement, and emphasize that you need a
yield-from to call it?
You don't, though -- yield-from just happens to be
Jim Jewett wrote:
I still don't see why it needs to be a return statement. Why not make
the intent of g explicit
def g():
    yield 42
    raise StopIteration(43)
Because it would be tedious and ugly, and would actually make
the intent *less* clear in the intended use cases. When
Steve Holden wrote:
I am a *bit* concerned, without really being able to put my finger on
it, that the "yield from" expression's value comes from inside (the
"return" from the nested generator) while the "yield from" expression's
value comes from "outside" (the value passed to a .send() method c
anatoly techtonik wrote:
Correct me if I'm wrong, but shouldn't Python include a function for
version comparisons?
Can't you just compare sys.version_info tuples?
>>> sys.version_info
(2, 5, 0, 'final', 0)
Assuming the other possibilities for 'final' are
'alpha' and 'beta', these should compare
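They do: tuples compare element by element, and the release-level strings ('alpha', 'beta', 'candidate', 'final') happen to sort correctly in lexicographic order, so plain comparison works:

```python
import sys

assert (2, 5, 0, 'final', 0) < (2, 6, 0, 'alpha', 1)
assert (2, 6, 0, 'alpha', 1) < (2, 6, 0, 'beta', 1)    # 'alpha' < 'beta'
assert (2, 6, 0, 'beta', 1) < (2, 6, 0, 'final', 0)    # 'beta' < 'final'

# Checking the running interpreter against a minimum version:
assert sys.version_info >= (2, 0)
```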
Draft 10 of the PEP. Removed the outer try-finally
from the expansion and fixed it to re-raise
GeneratorExit if the throw call raises StopIteration.
--
Greg
PEP: XXX
Title: Syntax for Delegating to a Subgenerator
Version: $Revision$
Last-Modified: $Date$
Author: Gregory Ewing
Status: Draft
Type
Guido van Rossum wrote:
Perhaps the crux is that *if* you accidentally use "return " in
a vanilla generator expecting the value to show up somewhere, you are
probably enough of a newbie that debugging this will be quite hard.
I'd like not to have such a newbie trap lying around.
Okay, so would
Guido van Rossum wrote:
The new exception could either be a designated (built-in) subclass of
StopIteration, or not;
I think it would have to not be; otherwise any existing
code that catches StopIteration would catch the new
exception as well without complaint.
Using a different exception rai
Guido van Rossum wrote:
I think in either case a check in
PyIter_Next() would cover most cases
If that's acceptable, then the check might as well
be for None as the StopIteration value, and there's
no need for a new exception.
I don't understand this.
Maybe I misunderstood what you were s
Guido van Rossum wrote:
But it's been answered already -- we can't change the meaning of
StopIteration() with a value unequal to None, so it has to be a
separate exception, and it should not derive from StopIteration.
How about having StopIteration be a subclass of the
new exception? Then thin
Nick Coghlan wrote:
Jim Fulton's example in that tracker issue shows that with a bit of
creativity you can provoke this behaviour *without* using a from-style
import. Torsten Bronger later brought up the same issue that Fredrik did
- it prevents some kinds of explicit relative import that look l
Mike Coleman wrote:
I mentioned this once on the git list and Linus' response was
something like "C lets me see exactly what's going on". I'm not
unsympathetic to this point of view--I'm really growing to loathe C++
partly because it *doesn't* let me see exactly what's going on--but
I'm not con
Nick Coghlan wrote:
'import a.b.c' will look in sys.modules for "a.b.c", succeed and work,
even if "a.b.c" is in the process of being imported.
'from a.b import c' (or 'from . import c' in a subpackage of "a.b") will
only look in sys.modules for "a.b", and then look on that object for a
"c" att
Jim Fulton wrote:
The only type-safety mechanism for a CObject is its identity. If you
want to make sure you're using the foomodule api, make sure the address
of the CObject is the same as the address of the api object exported by
the module.
I don't follow that. If you already have the
Hrvoje Niksic wrote:
I thought the entire *point* of C object was that it's an opaque box
without any info whatsoever, except that which is known and shared by
its creator and its consumer.
But there's no way of telling who created a given
CObject, so *nobody* knows anything about it for
cert
Jim Fulton wrote:
The original use case for CObjects was to export an API from a module,
in which case, you'd be importing the API from the module. The presence
in the module indicates the type.
Sure, but it can't hurt to have an additional sanity
check.
Also, there are wider uses for CObje
C. Titus Brown wrote:
we're having a discussion over on the GSoC mailing list about basic
math types, and I was wondering if there is any history that we should
be aware of in python-dev.
Something I've suggested before is to provide a set of
functions for doing elementwise arithmetic operatio
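A minimal sketch of what such elementwise functions might look like on plain sequences (names and signature hypothetical, just to make the suggestion concrete):

```python
from operator import add, mul

def elementwise(op, xs, ys):
    """Apply a binary operator pairwise across two equal-length sequences."""
    if len(xs) != len(ys):
        raise ValueError("sequences must have the same length")
    return [op(x, y) for x, y in zip(xs, ys)]
```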
Antoine Pitrou wrote:
Again, I don't want to spoil the party, but multidimensional buffers are
not implemented, and neither are buffers of anything other than single-byte
data.
When you say "buffer" here, are you talking about the
buffer interface itself, or the memoryview object?
--
Greg
Antoine Pitrou wrote:
Both.
Well, taking a buffer or memoryview to non-bytes data is supported, but since
it's basically unused, some things are likely missing or broken
So you're saying the buffer interface *has* been fully
implemented, it just hasn't been tested very well?
If so, writing so
Nick Coghlan wrote:
Actually *finishing* parts 2 and 3 of PEP 3118 would be a good precursor
to having some kind of multi-dimensional mathematics in the standard
library though.
Even if they only work on the existing one-dimensional
sequence types, elementwise operations would still be
useful
Brian Quinlan wrote:
if not self.__closed:
    try:
-       self.flush()
+       IOBase.flush(self)
    except IOError:
        pass  # If flush() fails, just give up
    self.__closed = True
That doesn't seem like a good idea to me at
Nick Coghlan wrote:
Still, as both you and Greg have pointed out, even in its current form
memoryview is already useful as a replacement for buffer that doesn't
share buffer's problems
That may be so, but I was more pointing out that the
elementwise functions I'm talking about would be useful
Firephoenix wrote:
I basically agreed with renaming the next() method to __next__(), so as
to follow the naming of other similar methods (__iter__() etc.).
But I noticed then that all the other methods of the generator had
stayed the same (send, throw, close...)
Keep in mind that next() is pa
Antoine Pitrou wrote:
Your proposal looks sane, although the fact that a semi-private method
(starting with an underscore) is designed to be overriden in some classes is a
bit annoying.
The only other way I can see is to give up any attempt
in the base class to ensure that flushing occurs befor
Steve Holden wrote:
Isn't it strange how nobody ever complained about the significance of
whitespace in makefiles: only the fact that leading tabs were required
rather than just-any-old whitespace.
Make doesn't care how *much* whitespace there
is, though, only whether it's there or not. If
it
David Cournapeau wrote:
Having a full-fledged language for complex builds is nice; I think most
people familiar with complex makefiles would agree with this.
Yes, people will still need general computation in their
build process from time to time whether the build tool
they're using supports it or not
Alexander Neundorf wrote:
My experience is that people don't need
general computation in their build process.
> ...
CMake supports now more general purpose programming features than it
did 2 years ago, e.g. it has now functions with local variables, it
can do simple math, regexps and other thin
John Arbash Meinel wrote:
And when you look at the intern function, it doesn't use
setdefault logic, it actually does a get() followed by a set(), which
means the cost of interning is 1-2 lookups depending on likelihood, etc.
Keep in mind that intern() is called fairly rarely, mostly
only at mo
John Arbash Meinel wrote:
And the way intern is currently
written, there is a third cost when the item doesn't exist yet, which is
another lookup to insert the object.
That's even rarer still, since it only happens the first
time you load a piece of code that uses a given variable
name anywhere
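The single-lookup variant John alludes to can be sketched in plain Python (this is an illustration of the dict.setdefault idea, not CPython's actual C implementation of intern):

```python
_interned = {}

def my_intern(s):
    """Return the canonical copy of s, storing it on first sight.

    dict.setdefault does the membership test and the insertion in a
    single lookup, unlike a get() followed by a separate store.
    """
    return _interned.setdefault(s, s)
```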
Nick Coghlan wrote:
I sometimes wish for a nice, solid lazy
module import mechanism that manages to avoid the potential deadlock
problems created by using import statements inside functions.
I created an ad-hoc one of these for PyGUI recently.
I can send you the code if you're interested.
I d
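One common shape for such a lazy importer is a proxy that defers the real import to first attribute access. This is a generic sketch, not Greg's PyGUI code, and it does not by itself address the import-lock deadlock issue Nick raises:

```python
import importlib

class LazyModule:
    """Proxy that imports the named module on first attribute access."""

    def __init__(self, name):
        self.__name = name
        self.__module = None

    def __getattr__(self, attr):
        # Only called for attributes not found normally, i.e. everything
        # except our own private state set in __init__.
        if self.__module is None:
            self.__module = importlib.import_module(self.__name)
        return getattr(self.__module, attr)
```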
Paul Moore wrote:
3. Encoding
JSON text SHALL be encoded in Unicode. The default encoding is
UTF-8.
This is at best confused (in my utterly non-expert opinion :-)) as
Unicode isn't an encoding...
I'm inclined to agree. I'd go further and say that if JSON
is really meant to be a text f
Chris Withers wrote:
Nick Coghlan wrote:
A similar naming scheme (i.e. msg.headers and msg.headersb) would
probably work for email as well.
That just feels nasty though :-(
It does tend to look like a typo to me. Inserting an
underscore (headers_b) would make it look less
accidental.
--
Gr
Antoine Pitrou wrote:
Say you are filtering or sorting data based on some URL parameters. If the user
wants to remove one of those filters, you have to remove the corresponding query
parameter.
For an application like that, I would be keeping the
parameters as a list or some other structured w
Barry Warsaw wrote:
The default
would probably be some unstructured parser for headers like Subject.
Only for headers known to be unstructured, I think.
Completely unknown headers should be available only
as bytes.
--
Greg
Barry Warsaw wrote:
For an
Originator or Destination address, what does str(header) return?
It should be an error, I think.
--
Greg
R. David Murray wrote:
That doesn't make sense to me. str() should return
_something_.
Well, it might return something like "". But you shouldn't rely on it
to give you anything useful for an arbitrary header.
--
Greg
Alexandre Vassalotti wrote:
print("Content-Type: application/json; charset=utf-8")
input_object = json.loads(sys.stdin.read())
output_object = do_some_work(input_object)
print(json.dumps(output_object))
print()
That assumes the encoding being used by stdout has
ascii as a subset.
--
Greg
Jess Austin wrote:
This is a perceptive observation: in the absence of parentheses to
dictate a different order of operations, the third quantity will
differ from the second.
Another aspect of this is the use case mentioned right
at the beginning of this discussion concerning a recurring
event
Steven D'Aprano wrote:
"2rd of March on leap years,
^^^
The turd of March?
--
Greg
Steven D'Aprano wrote:
it should be obvious in the
same way that string concatenation is different from numerical
addition:
1 + 2 = 2 + 1
'1' + '2' != '2' + '1'
However, the proposed arithmetic isn't just non-
commutative, it's non-associative, which is a
much rarer and more surprising thing
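The non-associativity is easy to exhibit with a simple day-clamping month adder (a hypothetical helper, written here only to illustrate the arithmetic at issue): adding one month twice need not equal adding two months at once.

```python
import calendar
import datetime

def add_months(d, n):
    """Add n months, clamping the day to the target month's length."""
    y, m = divmod(d.month - 1 + n, 12)
    y += d.year
    m += 1
    day = min(d.day, calendar.monthrange(y, m)[1])
    return datetime.date(y, m, day)
```

Starting from 31 January: +1 month clamps to 28 February, and a further +1 month gives 28 March, whereas +2 months directly gives 31 March.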
Steven Bethard wrote:
That's an unfortunate decision. When the 2.X line stops being
maintained (after 2.7 maybe?) we're going to be stuck with the "3"
suffix forever for the "real" Python.
I don't see why we have to be stuck with it forever.
When 2.x has faded into the sunset, we can start
ali
Nick Coghlan wrote:
Note that such an approach would then require an altaltinstall command
in order to be able to install a specific version of python 3.x without
changing the python3 alias (e.g. installing 3.2 without overriding 3.1).
Seems like what we need is something in between altinstall
Benjamin Peterson wrote:
What's the status of yield from? There's still a small window open for
a patch to be checked into 3.1's branch. I haven't been following the
python-ideas threads, so I'm not sure if it's ready yet.
The PEP itself seems to have settle down, and is
awaiting a verdict from
Larry Hastings wrote:
Removing tp_reserved would affect everybody, with inscrutable
compiler errors.
This would have to be considered in conjunction with the
proposed programmatic type-building API, I think.
I'd like to see a migration towards something like that,
BTW. Recently I had occasio
Are we solving an actual problem by changing the
behaviour here, or is it just a case of foolish
consistency?
Seems to me that trying to pin down exactly what
constitutes a "special method" is a fool's errand,
especially if you want it to include __enter__ and
__exit__ but not __reduce__, etc.
-
MRAB wrote:
Next you'll be saying that they should be named after years. Python
2010, anyone? :-)
To keep people on their toes, we should switch to a
completely random new naming scheme with every release,
like Microsoft has been doing with Windows.
--
Greg
Antoine Pitrou wrote:
you can't be sure all the responders are
over 18. Actually, they might even not be human beings!
(hint: I'm not)
Not over 18, or not a human being?
--
Greg
Robert Kern wrote:
The 'single' mode, which is used for the REPL, is a bit different than
'exec', which is used for modules. This difference lets you insert
"blank" lines of whitespace into a function definition without exiting
the definition.
All that means is that the REPL needs to keep re
Michael Foord wrote:
if you are added as nosy on a tracker item (which happens when you
make a comment, or which you can do yourself) then you get emailed
about new comments.
That's good, but...
only going to the tracker to add responses.
is not so good. If the goal is to ensure that all previou
Antoine Pitrou wrote:
The original docstring for peek() says:
...we
do at most one raw read to satisfy it.
In that light, I'm not sure it's a bug
It may be behaving according to the docs, but is that
behaviour useful?
Seems to me that if you're asking for n bytes, then it's
b
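The behaviour in question is easy to see with io.BufferedReader: peek() returns bytes from the buffer without advancing the position, doing at most one raw read, so the result's length is not guaranteed to be at least what you asked for (and may also be more):

```python
import io

raw = io.BytesIO(b"hello world")
buffered = io.BufferedReader(raw, buffer_size=16)

head = buffered.peek(5)
assert head.startswith(b"hello")     # may return more than requested
assert buffered.read(5) == b"hello"  # peek did not advance the position
```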
Zooko Wilcox-O'Hearn wrote:
1. Add a "st_crtime" field which gets populated on filesystems
(Windows, ZFS, Mac) which can do so.
"crtime" looks rather too similar to "ctime" for my
liking. People who think that the "c" in "ctime"
means "creation" are still likely to confuse them.
Why not give
Cameron Simpson wrote:
For myself, I'd expect more often to want to see if there's stuff in the
buffer _without_ doing any raw reads at all.
What uses do you have in mind for that?
--
Greg
Cameron Simpson wrote:
It seems like whenever I want to do some kind of opportunistic but
non-blocking stuff with a remote service
Do you actually do this with buffered streams? I find
it's better to steer well clear of buffered I/O objects
when doing non-blocking stuff, because they don't pla
Cameron Simpson wrote:
I normally avoid
non-blocking requirements by using threads, so that the thread gathering
from the stream can block.
If you have a thread dedicated to reading from that
stream, then I don't see why you need to peek into
the buffer. Just have it loop reading a packet at a
Lenard Lindstrom wrote:
I assumed that since PyModule_AddObject is documented as
stealing a reference, it always stole a reference. But in reality it
only does so conditionally, when it succeeds.
As an aside, is this a general feature of functions
that steal references, or is PyModule_AddObje