> > I think that realization is important. It would be great to have a
> > section of the PEP that focuses on separability and matching
> > features to benefits. Start with the above observation that the
> > proposed examples can be achieved with generators driving the block
> > statement.
>
> Good idea
> > (So do you want this feature now or not? Earlier you said it was no
> > big deal.)
>
> It *isn't* a big deal; but it'd still be nice, and I'd happily
> volunteer to do the actual implementation of the 'close()' method
> myself, because it's about the same amount of work as updating PEP 333 and
not a good idea. Putting tools in the standard library should be the
last evolutionary step, not the first.
Raymond Hettinger
> > Ultimately, some of these will likely end up in the library. For
> > the time being, I think it best that these get posted and evolve
> > either as Wiki entries or as ASPN entries. The best practices and
> > proven winners have yet to emerge. Solidifying first attempts is
> > likely not a good
>
> > Yes, there has been quite a bit of interest including several ASPN
> > recipes and a wiki:
> >
> > http://www.python.org/moin/PythonDecoratorLibrary
>
> Thanks, I didn't know about that page. BTW, I notice that all the
> decorators on that page are improper, in the sense that they change
> > I'd be willing to break these off into a separate PEP if people
> > think it's a good idea. I've seen very few complaints about any of
> > these pieces of the proposal. If possible, I'd like to see these
> > things approved now, so that the discussion could focus more
> > directly on the bloc
func.func_defaults, func.func_closure)
>
> and I *hate* it!
Sounds reasonable.
Choices:
- submit a patch adding a __copy__ method to functions,
- submit a patch for the copy module, or
- submit a feature request, assign to me, and wait.
Raymond Hettinger
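For context, the quoted fragment above is the tail of the era's
hand-rolled function copy, which looked roughly like this (a sketch in
Python 2 terms; copyfunc is a hypothetical name, and a __copy__ method
would simply wrap the same call):

import new  # Python 2 module for building core objects

def copyfunc(f):
    # Rebuild a function object from its parts -- the boilerplate
    # that a __copy__ method or copy-module support would hide.
    return new.function(f.func_code, f.func_globals, f.func_name,
                        f.func_defaults, f.func_closure)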
> -was, under the covers, a (optential) looping construct. This
> +was, under the covers, a (potential) looping construct. This
I'm glad I didn't fix this one.
I thought he meant to use "optional".
Raymond Hettinger
save, alter, and then restore:
oldprec = decimal.getcontext().prec
decimal.getcontext().prec += 2
yield None
decimal.getcontext().prec = oldprec
Raymond Hettinger
> What's the advantage of using two calls to getcontext() vs. saving the
> context in a local variable?
I prefer saving the context in a local variable but that is just a
micro-optimization. The presentation with multiple calls to
getcontext() was kept just to match the style of the original -- t
e whole context and let the wrapped block only work with a copy.
oldcontext = decimal.getcontext()
newcontext = oldcontext.copy()
newcontext.prec += 2
yield None
decimal.setcontext(oldcontext)
This approach defends against various kinds of unruly behavior by the
> I think you're missing a decimal.setcontext(newcontext) before the
> yield..
Right.
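Folding that correction in, the copy-based helper reads as follows (a
sketch, not a quote from the thread; with_extra_precision is the name
used earlier, and the try/finally around the yield assumes PEP 342/343
era generators where that placement is legal):

import decimal

def with_extra_precision():
    # Work on a copy so the wrapped block can never corrupt the
    # caller's context; restore the original unconditionally.
    oldcontext = decimal.getcontext()
    newcontext = oldcontext.copy()
    newcontext.prec += 2
    decimal.setcontext(newcontext)   # the line noted as missing above
    try:
        yield None
    finally:
        decimal.setcontext(oldcontext)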
> I don't see a call to setcontext() in the sin() example in the library
> reference. Is that document wrong? I thought that simply modifying the
> parameters of the current context would be sufficient.
The sin() example is correct. The precision is changed and restored in
the current context.
H
> +def sin(x):
> +    "Return the sine of x as measured in radians."
> +    do with_extra_precision():
> +        i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
> +        while s != lasts:
> +            lasts = s
> +            i += 2
> +
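For reference, the quoted diff tracks the sin() recipe in the decimal
docs; the loop continues along these lines (a sketch shown with plain
precision save/restore in place of the proposed do-statement):

from decimal import Decimal, getcontext

def sin(x):
    "Return the sine of x as measured in radians."
    getcontext().prec += 2           # extra digits for intermediate results
    i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
    while s != lasts:
        lasts = s
        i += 2
        fact *= i * (i - 1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2           # restore the caller's precision
    return +s                        # unary plus rounds to that precision

# e.g. sin(Decimal('0.5')) -> Decimal('0.4794255386042030002732879352')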
[Raymond]
> > The following example shows the kind of oddity that can arise when
> > working with quantities that have not been rounded to the current
> > precision:
> >
> > >>> from decimal import getcontext, Decimal as D
> > >>> getcontext().prec = 3
> > >>> D('3.104') + D('2.104')
> > Decimal("5.21")
> I'd like to propose to make that a separate PEP, which can combine
> elements of PEP 288 and PEP 325.
+1
Overall, the combined PEP proposal looks pretty good.
> - g.throw(type, value, traceback) causes the specified exception to be
> thrown at the place where the generator g is currently suspended
> > Should this maybe just be added to PEP 342? To me, PEP 342 has
> > always seemed incomplete without ways to throw() and close(), but
> > that could easily be just me. In any case I'd expect the
> > implementation of 'next(arg)' to have some overlap with the
> > implementation of 'throw()'.
>
> M
> Okay. Maybe we should just update PEP 325, then?
-1.
Keep this separate.
Raymond
> [Raymond Hettinger]
> > Are the value and traceback arguments optional as they are with the
> > current raise statement? If they are optional, what would the
> > default be? I think the preferred choice is to have the call to the
> > throw method be the anchor po
> > So at this point it seems your proposal is just nailing down
> > specifics for the open parts of PEP 325.
>
> Or PEP 288? That has throw() (albeit with a different signature). I
> could do without the attributes though (PEP 342 provides a much better
> solution IMO).
>
> If either of those
> Not sure what the "right" answer is, but I wanted to stick my oar in
> to say that I think that Decimal has not been in the field long
> enough or widely enough used that we should feel that the API has
> been set in stone. If there's agreement that a mistake was made,
> let's fix it!
There is no
> BTW I think that close() and __del__() should raise an exception when
> the throw(GeneratorExit) call doesn't end up either re-raising
> GeneratorExit or raising StopIteration. The framework for calling
> __del__() takes care of handling this exception (by printing and then
> ignoring it).
Raymond
[Tim suggesting that I'm clueless and dazzled by sparkling lights]
> There seems to be an unspoken "wow that's cool!" kind of belief
> that because Python's Decimal representation is _potentially_
> unbounded, the constructor should build an object big enough to
> hold any argument exactly (up t
I sense a religious fervor about this so go ahead and do whatever you
want.
Please register my -1 for the following reasons:
a.) It re-introduces representation error into a module that worked so
hard to overcome that very problem. The PEP explicitly promises that a
transformation from a literal
Addenda:
j.) The same rules would need to apply to all forms of the Decimal
constructor, so Decimal(someint) would also need to truncate/round if it
has more than precision digits -- likewise with Decimal(fromtuple) and
Decimal(fromdecimal). All are problematic. Integer conversions are
expected t
> Does it really need to be argued interminably that deviating from a
> standard is a Big Deal?
The word deviate inaccurately suggests that we do not have a compliant
method which, of course, we do. There are two methods, one context
aware and the other context free. The proposal is to change t
[Michael Chermside]
> Frankly, I have no idea WHAT purpose is served by passing a context
> to the decimal constructor... I didn't even realize it was allowed!
Quoth the docs for the Decimal constructor:
"""
The context precision does not affect how many digits are stored. That
is determined excl
[Guido]
> > You know that, but Raymond seems confused. From one of his posts
> > (point (k)):
[Raymond]
> > "Throughout the implementation, the code calls the Decimal
> > constructor to create intermediate values. Every one of those calls
> > would need to be changed to specify a context."
[Facun
> It looks like if you pass in a context, the Decimal constructor still
> ignores that context:
>
> >>> import decimal as d
> >>> d.getcontext().prec = 4
> >>> d.Decimal("1.234567890123456789012345678901234567890123456789",
> ...           d.getcontext())
> Decimal("1.234567890123456789012345678901234567890
[Tim]
> I'm sorry, but if you mentally reduced everything I've written about
> this to "the sole argument", rational discussion has become impossible
> here.
Forgive me one melodramatic email.
I've laid out my reasoning and understand yours.
Crossing light sabers with one such as yourself is of
Some of the private email I've received indicates a need for a decimal
FAQ that would shorten the module's learning curve. A discussion draft
follows.
Raymond
---
Q. It is cumbersome to type decimal.Decimal('1234.5'). Is there a way
to mi
Let's start out with CVS tracker permissions.
When you have a patch that is really to apply,
upload it to the tracker and assign to me.
Raymond Hettinger
> Let's start out with CVS tracker permissions.
> When you have a patch that is really to apply,
> upload it to the tracker and assign to me.
really --> ready
> As the process of deprecating old bugs evolves, the following
> categories became empty:
>
> Python 2.1.1
> Python 2.1.2
> Python 2.2.1
> Python 2.2.1 candidate
> Python 2.2.2
That's great news.
> The SF interface doesn't allow deleting old categories, but maybe we
> could
There should be some greater care exercised in closing old bugs.
Marking them "deprecated" and then erasing them is only a good strategy
if we have no means of reproducing the error or ascertaining what the OP
was talking about.
For instance, in www.python.org/sf/640553 , it was possible for a
re
> If Raymond would rather defer to me, I can give it a shot in a revised
> version of PEP 343,
Thanks, that would be great. The decimal conversation used up a lot of
my available cycles.
Raymond
___
Python-Dev mailing list
Python-Dev@python.org
http
> > Old age and a missing OP is not sufficient reason to close a bug.
> >
> > But if closing a bug is an effective way of kicking things into life
> > again...
>
> I'm seeing this effect in a lot of bugs I closed as old ones.
That means they shouldn't have been closed and that we almost lost a v
> I've seen some systems that solve this problem by allowing users to
> "vote" for favorite bugs... then you can tell the "important" bugs
> because they are more likely to have lots of votes. As I see it,
> Facundo is using a variant of that system. He is asking whether there
> is *ONE PERSON* out
>
> skipyes = _len(code); emit(0)
> _compile(code, av[1], flags)
> if av[2]:
The times two operation also occurs twice in nearby code. Are those
also incorrect?
Raymond Hettinger
If Fred were up for it, I think ElementTree would be a wonderful,
must-have addition.
Raymond Hettinger
> > If Fred were up for it, I think ElementTree would be a wonderful,
> > must-have addition.
> I might be missing fine details of the English language here
> (what does "to be up for something" mean?), however, I believe
> ElementTree is an unlikely addition to the standard library.
Rewritten:
> - throw() is a term taken from Java & C++.
This was intended. There was much discussion about this for PEP 288 and
this was more or less a consensus choice. The term is already
associated with exceptions in other languages and it captures the
concept of the raise occurring somewhere else (what
More than case-statement semantics or PEP 343, I wish for a dowhile
statement.
The most straightforward way is to put the conditional expression at
the beginning of the block with the understanding that a dowhile
keyword will evaluate the condition only after the block runs:
dowhile <condition>:
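For comparison, the behavior being asked for is what the while
True/break idiom spells today: the body always runs once before the
condition is tested (a minimal sketch of the intended semantics):

# What "dowhile <condition>:" would buy, in today's spelling:
n = 170
digits = []
while True:
    digits.append(n % 10)    # body executes first ...
    n //= 10
    if not n:                # ... condition tested afterwards
        break
assert digits == [0, 7, 1]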
> I'm horsing around with recognizing switch-like if statements like:
>
> if x == 1:
>     print 1
> elif x == 2:
>     print 2
> else:
>     print "unknown"
>
> in the compiler and generating O(1) code. "x" can be any expression,
> but must be precisely the same in each
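Absent that compiler optimization, the usual O(1) spelling is a dict
dispatch (a sketch; like the proposed optimization, it works only when
every case is a hashable constant):

def describe(x):
    # Dict dispatch: the hand-written O(1) version of the elif chain.
    cases = {
        1: lambda: "one",
        2: lambda: "two",
    }
    return cases.get(x, lambda: "unknown")()

assert describe(2) == "two"
assert describe(9) == "unknown"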
> By the way, whatever happened to "and while"?
That is making something hard and weird out of something simple.
There was no shortage of odd suggestions to force the condition line
to appear lower in the block than the starting line. All of them
smelled of rotten eggs -- they just don't fit th
[Björn Lindqvist]
> I would like to have do-while's like this:
>
> do:
>     <statements>
> until <condition>
>
> But I'm sure that has problems too.
That looks nice to me.
Raymond
[Jp Calderone]
> Anything can be a for loop.
>
> for chunk in iter(lambda: f1.read(CHUNK_SIZE), ''):
>     f2.write(chunk)
It would be nice to have this encapsulated in a file method:
for chunk in f1.iterblocks(CHUNK_SIZE):
    f2.write(chunk)
Raymond
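Pending such a method, a free-standing generator gives the same shape
(a sketch; iterblocks is the hypothetical method name from the post,
recast here as a function):

CHUNK_SIZE = 8192  # assumed block size for the sketch

def iterblocks(f, size=CHUNK_SIZE):
    # Yield successive reads until read() returns the empty string (EOF).
    while True:
        chunk = f.read(size)
        if not chunk:
            break
        yield chunk

# for chunk in iterblocks(f1):
#     f2.write(chunk)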
> >> By the way, whatever happened to "and while"? i.e.:
> >>
> >> while True:
> >>     data = inp.read(blocksize)
> >> and while data:
> >>     out.write(data)
> >>
> >
> > My favourite version of this is
> >
> > while:
> >     data = inp.read(blocksize)
> > gives data:
> def readby(inp, blocksize=1024):
>     while True:
>         data = inp.read(blocksize)
>         if not data:
>             break
>         yield data
>
> for data in readby(inp, blocksize):
>     . . .
readby() relies on the existence of a read() method for inp.
itertools work with gen
[Bob]
> islice depends on __getitem__.
Incorrect. It uses the iterator protocol.
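The point is easy to verify: islice happily consumes a generator, which
has no __getitem__ at all (a quick check):

from itertools import islice

def naturals():
    # A pure iterator: defines only the iterator protocol, no __getitem__.
    n = 0
    while True:
        yield n
        n += 1

assert list(islice(naturals(), 5)) == [0, 1, 2, 3, 4]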
In gen.throw(), are all three arguments required? Or do the value and
traceback Nones need to be listed explicitly?
g.throw(MyException)
or
g.throw(MyException, None, None)
FWIW, I prefer the former. That will make throw() as flexible as
the raise statement.
PEP 288 is now withdrawn. The generator exceptions portion is subsumed
by PEP 343, and the generator attributes portion never garnered any
support.
The fate of generator attributes is interesting vis-à-vis PEP 342. The
motivation was always related to supporting advanced generator uses such
as e
May I suggest rejecting PEP 265.
As of Py2.4, its use case is easily solved with:
>>> sorted(d.iteritems(), key=itemgetter(1), reverse=True)
[('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]
Further, Py2.5 offers a parallel solution to the more likely use case of
wanting to access only the l
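Spelled out with its import and a dictionary reconstructed from the
output shown above (an assumption), the one-liner runs as:

from operator import itemgetter

d = dict(a=2, b=23, c=5, d=17, e=1)
# Sort the (key, value) pairs from highest to lowest value.
pairs = sorted(d.iteritems(), key=itemgetter(1), reverse=True)
assert pairs == [('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]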
The need for the indices() proposal was mostly met by PEP 279's
enumerate() builtin.
Commenting on 279 before it was accepted for Py2.3, PEP 281's author,
Magnus Lie Hetland, wrote, "I'm quite happy to have it make PEP 281
obsolete."
Raymond
[Phillip]
> I could definitely go for dropping __next__ and the next() builtin
> from PEP 342, as they don't do anything extra. I also personally
> don't care about the new continue feature, so I could do without
> for-loop alteration too. I'd be perfectly happy passing arguments to
> next() exp
[Phillip]
> > I also personally don't care about the new continue feature,
> > so I could do without for-loop alteration too.
[Guido]
> I do like "continue EXPR" but I have to admit I haven't even tried to
> come up with examples -- it may be unnecessary. As Phillip says, yield
> expressions an
The principal use case was largely met by enumerate(). From PEP 276's
rationale section:
"""
A common programming idiom is to take a collection of objects and apply
some operation to each item in the collection in some established
sequential order. Python provides the "for in" looping control
st
While the majority of Python users deem this to be a nice-to-have
feature, the community has been unable to reach a consensus on the
proper syntax after more than two years of intensive debate (the PEP was
introduced in early April 2003).
Most agree that there should be only-one-way-to-do-it; howe
After nine months, no support has grown beyond the original poster. The
PEP did, however, generate some negative responses when brought up on
comp.lang.python (it made some people's stomachs churn).
The PEP fails the tests of obviousness and necessity. The PEP's switch
example is easily handled by
These PEPs are four years old. Nothing is intrinsically wrong with them, but they have garnered little enthusiasm, discussion, or support, suggesting that the original need was somewhat modest. In addition, the principal (but not only) use cases for a builtin rational type and corresponding
This PEP has been open for two and a half years without generating
discussion or support.
Its primary case (converting cumulative seconds into a tuple of days,
hours, minutes, and seconds) is a bit wanting because it doesn't
generalize to months and years. That need is already met in a more
robust and
This PEP is an excellent example of improving readability and usability
by omitting a keyword and simplifying syntax. It neither provides nor
takes away functionality; instead, it is a bit of a beautification
effort.
Essentially it creates a lighter-weight, more usable syntax for
specifying defer
IIRC, there was a decision to not implement phase C and to keep the
trailing L in representations of long integers.
If so, I believe the PEP can be marked as final. We've done all we're
going to do.
Raymond
[Joachim Koenig-Baltes]
> > My use case for this is a directory tree walking generator that
> > yields all the files including the directories in a depth first
> > manner. If a directory satisfies a condition (determined by the
> > caller) the generator shall not descend into it.
> >
> > Something
I recommend that the proposed syntax be altered to be more parallel with
the existing for-loop syntax to make it more parsable for both humans
and for the compiler. Like existing for-statements, the target
expression should immediately follow the 'for' keyword. Since this is
known to be a range a
[Raymond Hettinger]
> > I think it unwise to allow x to be any expression. Besides altering
> > existing semantics, it leads to code redundancy and to a fragile
> > construct (where the slightest alteration of any of the expressions
> > triggers a silent reversion to O(n)
[Raymond Hettinger]
> > > I recommend that the proposed syntax be altered to be more
> > > parallel with the existing for-loop syntax to make it more
> > > parsable for both humans and for the compiler.
[Michael Hudson]
> > Although all your suggestions are imp
[Donovan Baarda]
> As I see it, a lambda is an anonymous function. An anonymous function
> is a function without a name. We already have a syntax for a
> function... why not use it. ie:
>
> f = filter(def (a): return a > 1, [1,2,3])
This approach is entirely too obvious. If we want to be on th
> [Donovan Baarda]
> > As I see it, a lambda is an anonymous function. An anonymous
> > function is a function without a name. We already have a syntax for
> > a function... why not use it. ie:
> >
> > f = filter(def (a): return a > 1, [1,2,3])
[Me]
> This approach is entirely too obvious. If
Introducing a new set of duplicate type names and deprecating old ones
causes a certain amount of disruption. Given the age of the types
module, the disruption is likely to be greater than any potential
benefit that could be realized. Plenty of people will have to incur the
transition costs, but
Do we have *any* known use cases where we would actually run bytecode
that was suspicious enough to warrant running a well-formedness check?
In assessing security risks, the PEP notes, "Practically, it would be
difficult for a malicious user to 'inject' invalid bytecode into a PVM
for the purposes
This PEP is an empty stub that is unlikely to ever get filled-out in a
way that adds anything beyond what is already implemented and
documented.
Raymond
Please do not spam multiple mail lists with these posts (edu-sig,
python-dev, and tutor).
Raymond
- Original Message -
From: "Smith" <[EMAIL PROTECTED]>
Sent: Monday, February 13, 2006 12:10 PM
Subject: Re: [Python-Dev] nice()
[Guido van Rossum]
> Somewhat controversial:
>
> - bytes("abc") == bytes(map(ord, "abc"))
At first glance, this seems obvious and necessary, so if it's somewhat
controversial, then I'm missing something. What's the issue?
Raymond
[Smith]
> The following discussion ends with things that python-dev might want to
> consider in terms of adding a function that allows something other than the
> default 12- and 17-digit precision representations of numbers that str() and
> repr() give. Such a function (like nice(), perhaps name
>> Over lunch with Alex Martelli, he proposed that a subclass of dict
>> with this behavior (but implemented in C) would be a good addition to
>> the language
I would like to add something like this to the collections module, but a PEP is
probably needed to deal with issues like:
* implications
> My conclusion is that setdefault() is a failure -- it was a
> well-intentioned construct, but doesn't actually create more readable
> code.
It was an across the board failure: naming, clarity, efficiency.
Can we agree to slate dict.setdefault() to disappear in Py3.0?
Raymond
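The contrast driving that verdict, side by side (a sketch using the
defaultdict that eventually shipped in Py2.5):

from collections import defaultdict

# Grouping with setdefault: a throwaway default is built on every
# call, and the intent is buried in the method name.
index = {}
for word in ['ant', 'bee', 'ant']:
    index.setdefault(word[0], []).append(word)

# The same task with a defaulting dict: the factory runs only on
# missing keys, and the loop body says what it means.
index2 = defaultdict(list)
for word in ['ant', 'bee', 'ant']:
    index2[word[0]].append(word)

assert index == dict(index2)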
[Greg Ewing]
> Would people perhaps feel better if defaultdict
> *wasn't* a subclass of dict, but a distinct mapping
> type of its own? That would make it clearer that it's
> not meant to be a drop-in replacement for a dict
> in arbitrary contexts.
Absolutely. That's the right way to avoid Liskov
> > Also, I think has_key/in should return True if there is a default.
> It certainly seems desirable to see True where d[some_key]
> doesn't raise an exception, but one could argue either way.
Some things can be agreed by everyone:
* if __contains__ always returns True, then it is a useless fea
[Martin v. Löwis]
> If you have a default value, you cannot ultimately del a key. This
> sequence is *not* a basic mapping invariant.
You believe that key deletion is not basic to mappings?
> This kind of invariant doesn't take into account
> that there might be a default value.
Precisely. The
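The sequence under dispute, made concrete (a sketch using the
defaultdict that eventually shipped): the key can be deleted, but a
later lookup quietly resurrects it via the factory.

from collections import defaultdict

d = {'k': 1}
del d['k']
assert 'k' not in d        # plain dict: deletion makes the key absent

dd = defaultdict(int)
dd['k'] += 1
del dd['k']
assert 'k' not in dd       # the key really is gone ...
assert dd['k'] == 0        # ... but lookup recreates it on the spot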
[Terry Reedy]
> One is a 'universal dict' that maps every key to something -- the default if
> nothing else. That should not have the default ever explicitly entered.
> Udict.keys() should only give the keys *not* mapped to the universal value.
Would you consider it a mapping invariant that "k
[Crutcher Dunnavant]
> Anyway, I'm looking for feedback, feature requests before starting the
> submission process.
With respect to the API, the examples tend to be visually dominated by
the series of decorators. The three decorators do nothing more than add
a function attribute, so
[Raymond Hettinger]
> 1) The "chars" variable can be eliminated and the "while chars" and
> "c=chars.pop(0)" sequence simplified to just:
>for c in reversed(str):
Actually, that should have been just:
>> @cmdloop.aliases('goodbye')
>> @cmdloop.shorthelp('say goodbye')
>> @cmdloop.usage('goodbye TARGET')
>>
>> to just:
>>
>> @cmdloop.addspec(aliases=['goodbye'], shorthelp='say goodbye',
>>                  usage='goodbye TARGET')
>>
>> leaving the possibility of multiple decorator
[GvR]
> I'm not convinced by the argument
> that __contains__ should always return True
Me either. I cannot think of a more useless behavior or one more likely
to have unexpected consequences. Besides, as Josiah pointed out, it is
much easier for a subclass override to substitute always True
[Crutcher Dunnavant ]
>> There are many times that I want d[key] to give me a value even when
>> it isn't defined, but that doesn't always mean I want to _save_ that
>> value in the dict.
How does that differ from the existing dict.get method?
Raymond
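The difference the question is probing, in two lines each (a sketch):
get() supplies a value without saving it, while the defaulting lookup
stores what it supplies.

from collections import defaultdict  # the eventual Py2.5 type

d = {}
assert d.get('k', 0) == 0   # a value is returned ...
assert 'k' not in d         # ... and nothing is stored

dd = defaultdict(int)
assert dd['k'] == 0         # a value is returned ...
assert 'k' in dd            # ... and it is now stored in the dict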
[Steven Bethard]
> * Should default_factory be an argument to the constructor? The three
> answers I see:
>
> - "No." I'm not a big fan of this answer. Since the whole point of
> creating a defaultdict type is to provide a default, requiring two
> statements (the constructor call and the defaul
[Alex]
>> I see d[k]+=1 as a substantial improvement -- conceptually more
>> direct, "I've now seen one more k than I had seen before".
[Guido]
> Yes, I now agree. This means that I'm withdrawing proposal A (new
> method) and championing only B (a subclass that implements
> __getitem__() calling o
>> > Just one more thing -- have you made a final decision
>> > about the name yet? I'd still prefer something like
>> > 'autodict', because to me 'defaultdict' suggests
>>
>> autodict is shorter and sharper and I prefer it, too: +1
>
> Apart from it somehow hashing to the same place as "autodidact
Then you will likely be happy with Guido's current version of the patch.
- Original Message -
From: "Crutcher Dunnavant" <[EMAIL PROTECTED]>
To: "Raymond Hettinger" <[EMAIL PROTECTED]>
Cc: "Python Dev"
Sent: Monday, February 20, 200
> Speaking of which, I suspect it'll be a lot more common to need integer
> objects in the full range [0, 255] than it is now.
>
> Perhaps we should extend the pre-allocated integer objects to cover the
> full byte range.
+1
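What would be extended, for the curious (an illustration only; int
identity is a CPython implementation detail and the cached range has
varied across releases):

def make(n):
    # Run-time arithmetic defeats compile-time constant sharing,
    # so identity here reflects the interpreter's int cache.
    return n + 1 - 1

assert make(7) is make(7)   # served from the pre-allocated pool
# make(100000) is make(100000) is typically False: a fresh object each time.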
> Alex Martelli wrote:
>
>> If we call the type autodict, then having the factory attribute named
>> autofactory seems to fit.
>
> Or just 'factory', since it's the only kind of factory
> the object is going to have.
Gack, no. You guys are drifting towards complete ambiguity.
You might as wel
I'm concerned that the on_missing() part of the proposal is gratuitous.
The main use cases for defaultdict have a simple factory that supplies
a zero, empty list, or empty set. The on_missing() hook is only there
to support the rarer case of needing a key to compute a default value.
The h
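The two cases being weighed, concretely (a sketch; shown with the
spellings that eventually shipped rather than the proposed on_missing()
name):

from collections import defaultdict

# The common case: a zero-argument factory, no key in sight.
counts = defaultdict(int)
counts['spam'] += 1

# The rarer case on_missing() targets: a default computed from the
# key.  __missing__ is the hook that eventually shipped in Py2.5.
class KeyedDefault(dict):
    def __missing__(self, key):
        value = key.upper()     # any key-dependent computation
        self[key] = value
        return value

kd = KeyedDefault()
assert kd['abc'] == 'ABC'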
> >>> from operator import isSequenceType, isMappingType
> >>> class anything(object):
> ...     def __getitem__(self, index):
> ...         pass
> ...
> >>> something = anything()
> >>> isMappingType(something)
> True
> >>> isSequenceType(something)
> True
>
> I suggest we either deprecate these f
[Alex]
> I'd love to remove setdefault in 3.0 -- but I don't think it can be done
> before that: default_factory won't cover the occasional use cases where
> setdefault is called with different defaults at different locations, and,
> rare as those cases may be, any 2.* should not break any e
[Guido van Rossum]
> If we removed on_missing() from dict, we'd have to override
> __getitem__ in defaultdict (regardless of whether we give
> defaultdict an on_missing() hook or in-line it).
You have another option. Keep your current modifications to
dict.__getitem__ but do not include dict.on_m
[Ian Bicking]
> They seem terribly pointless to me.
FWIW, here is the script that I used while updating and improving the
two functions (can't remember whether it was for Py2.3 or Py2.4). It
lists comparative results for many different types of inputs. Since
perfection was not possible, t
> But given:
>
> True True Instance w getitem
> True True NewStyle Instance w getitem
> True True []
> True True {}
>
> (Last one is UserDict)
>
> I can't conceive of circumstances where this is useful without duck
> typing *as well*.
Yawn. Give it up. For user defined instances, these fun
> Michael Chermside wrote:
>> The next() method of iterators was an interesting
>> object lesson. ... Since it was sometimes invoked by name
>> and sometimes by special mechanism, the choice was to use the
>> unadorned name, but later experience showed that it would have been
>> better the other wa
[Skip]
>I just noticed that cProfile (like profile) prints to stdout. Yuck. I
> guess that's to be expected because the pstats module does the actual
> printing and it's used by both modules. I'm willing to give up backward
> compatibility to achieve a little more sanity and flexibility here. I