Re: [Python-Dev] LinkedHashSet/LinkedHashMap equivalents
Steven Bethard wrote:
> Thomas Heller <[EMAIL PROTECTED]> wrote:
>> [About an ordered dictionary]
>
> Well, that was basically the question I posed. So far I've seen only
> one use for it, and that one is better served by adding a function to
> itertools. What use do you have for it other than filtering
> duplicates from a list while retaining order?
>
> Steve

Using a LinkedHashMap generally cuts down on the amount of apparent randomness in a program. This is especially helpful when it comes time to debug a really complicated program by diffing log files, since it prevents slightly different maps from having wildly different iteration orders. Often using a plain HashMap can introduce enough randomness to make two otherwise similar log files nearly impossible to compare.

The other use case I have is for dealing with data where the iteration order doesn't matter to the program but does matter to users. For instance, consider ConfigParser's write method. Any ordering of values in the output is functionally equivalent, but the original data is likely to have come from a file that was arranged in some meaningful order, and it would be nice to preserve that order, especially if it can be done with no extra effort.

--jw

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
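[As an illustration of the ConfigParser use case above, here is a minimal sketch using `collections.OrderedDict`, the standard-library analogue of Java's LinkedHashMap that arrived later; the config keys are made up for the example.]

```python
from collections import OrderedDict

# Hypothetical settings read from a file that was arranged in a
# meaningful order by its author.
settings = OrderedDict()
settings["host"] = "localhost"
settings["port"] = "8080"
settings["debug"] = "true"

# Iteration follows insertion order, so writing the data back out
# preserves the arrangement of the original file -- and the order is
# deterministic, which keeps log diffs comparable.
for key, value in settings.items():
    print("%s = %s" % (key, value))
```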
Re: [Python-Dev] Adding any() and all()
Jim Jewett wrote:
> Guido van Rossum: [Why any() and all() shouldn't need to be imported.]
>
> Is that so bad? If you plan to use them often, then
>     from itertools import any, every
> is reasonable. If you only use them once and weren't expecting it (and
> want your imports at the top) ... well how awful is it to have an extra
> line or two in your code?

The problem with this approach is that any() and all() are so fundamental* that you should just use them without thinking about it, just as when you use "+" to concatenate strings, you don't have to stop and think to yourself, "Ah, this program needs to be able to manipulate strings. I'd better make sure string operations are available in this module." Thinking such thoughts takes you away from thinking about the problem you're trying to solve by manipulating strings. Likewise, programmers solve a lot of problems with boolean expressions, and it seems silly to require a special declaration just to make the full complement of boolean operations available.

I can think of three ways of coping with any() and all() being in a module:

First, I could just not use them. In that case all the effort here is wasted, and my code becomes less readable than it would have been otherwise. This is the approach I usually take with modules like "operator", where I can just as easily write a lambda expression (for now at least).

Second, I could break my concentration to think about import statements every time I have a use for these particular functions.

Third, I could import them at the top of every module. Since one of the distinguishing features of Python is a lack of gratuitous boilerplate code everywhere, I would find it very sad to add even a little bit.

So while putting any() and all() into a module isn't that bad in itself, it seems like the start of a slippery slope that has Python at the top and C++ at the bottom.
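[For readers following along: the functions under discussion are simple enough to write out in pure Python, roughly equivalent to what eventually became the 2.5 builtins.]

```python
def any(iterable):
    """Return True if any element of the iterable is true."""
    for element in iterable:
        if element:
            return True
    return False

def all(iterable):
    """Return True if all elements of the iterable are true."""
    for element in iterable:
        if not element:
            return False
    return True
```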
--jw

*I appreciate the irony of calling something "fundamental" when we've all gotten by just fine without it for many years--I'm trying to think from the perspective of someone used to dealing with a later (and hopefully better) version of Python.
Re: [Python-Dev] Rationale for sum()'s design?
Michael Walter wrote:
> On Tue, 15 Mar 2005 07:47:20 -0800, Guido van Rossum <[EMAIL PROTECTED]> wrote:
>> But I'm not so sure now. Thinking ahead to generic types, I'd like
>> the full signature to be:
>>     def sum(seq: sequence[T], initial: T = 0) -> T
>
> Would this _syntax_ work with generic types:
>     def sum(seq: sequence[T], initial: T = T()) -> T

This doesn't make sense with existing semantics because default arguments are evaluated when the function is defined, but T() can't be evaluated until the function is called. I'm not sure there's a way around that problem without turning default arguments into a trap for the unwary.

jw
Re: [Python-Dev] Numerical robustness, IEEE etc.
On 6/19/06, Michael Hudson <[EMAIL PROTECTED]> wrote:
> Nick Maclaren <[EMAIL PROTECTED]> writes:
>> 2) Because some people are dearly attached to the current behaviour,
>> warts and all, and there is a genuine quandary of whether the 'right'
>> behaviour is trap-and-diagnose, propagate-NaN or whatever-IEEE-754R-
>> finally-specifies (let's ignore C99 and Java as beyond redemption),
>
> Why? Maybe it's clear to you, but it's not totally clear to me, and in
> any case the discussion would be better informed for not being too
> dismissive.

I just happened to be reading this, which I found very convincing:

How Java's Floating-Point Hurts Everyone Everywhere
http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
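[An illustration of the trap-versus-propagate quandary as it shows up in CPython today; my own example, not from the thread. Some float operations raise an exception while others silently propagate IEEE special values.]

```python
inf = float('inf')

nan = inf - inf          # propagates: inf - inf is NaN, no exception
print(nan == nan)        # False -- NaN compares unequal to itself

try:
    1.0 / 0.0            # traps: ZeroDivisionError, not inf
except ZeroDivisionError as exc:
    print("trapped:", exc)

print(1e308 * 10)        # propagates: overflow yields inf silently
```

Neither behaviour is consistently applied, which is part of what makes "the right behaviour" a genuine quandary.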
Re: [Python-Dev] For Python 3k, drop default/implicit hash, and comparison
(This is kind of on a tangent to the original discussion, but I don't want to create yet another subject line about object comparisons.)

Lately I've found that virtually all my implementations of __cmp__, __hash__, etc. can be factored into this form, inspired by the "key" parameter to the built-in sorting functions:

    class MyClass:
        def __key(self):
            # Return a tuple of attributes to compare.
            return (self.foo, self.bar, ...)
        def __cmp__(self, that):
            return cmp(self.__key(), that.__key())
        def __hash__(self):
            return hash(self.__key())

I wonder if it wouldn't make sense to formalize this pattern with a magic __key__ method, such that a class with a __key__ method would behave as if it had inherited the definitions of __cmp__ and __hash__ above. This scheme would eliminate the tedium of keeping the __hash__ method in sync with the __cmp__/__eq__ method, and writing a __key__ method would involve writing less code than a naive __eq__ method, since each attribute name only needs to be mentioned once instead of appearing on either side of a "==" expression.

On the other hand, this idea doesn't work in all situations (for instance, I don't think you could define the default __cmp__/__hash__ semantics in terms of __key__), it would only eliminate two one-line methods for each class, and it would further complicate the "==" operator (__key__, falling back to __eq__, falling back to __cmp__, falling back to object identity--ouch!)

If anyone thinks this is a good idea, I'll investigate how many places in the standard library this pattern would apply.

--jw
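[For concreteness, here is a runnable adaptation of the pattern for Python 3, where __cmp__/cmp no longer exist; __eq__ and __lt__ take their place, and the class name is invented for the example.]

```python
class Point:
    """Key-tuple pattern: each attribute is named exactly once in
    __key, keeping __eq__, __lt__ and __hash__ consistent for free."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __key(self):
        # The single source of truth for comparison and hashing.
        return (self.x, self.y)

    def __eq__(self, other):
        return self.__key() == other.__key()

    def __lt__(self, other):
        return self.__key() < other.__key()

    def __hash__(self):
        return hash(self.__key())

# Equal keys mean equal objects with equal hashes, so instances
# behave correctly in sets and as dictionary keys.
assert Point(1, 2) == Point(1, 2)
assert len({Point(1, 2), Point(1, 2)}) == 1
```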