Re: [Python-Dev] PEP Proposal: Revised slice objects & lists use slice objects as indexes
> I am not sure what you are trying to propose here. The slice object
> isn't special, it's just a regular built-in type.

The idea is to have slice objects be generators. You could make a slice
like 1:10:2, and that would make a slice object which could be used as a
list index. The list would return a list with the corresponding item for
every index in the generator. Then, lists could transparently be used as
list indexes, or you could supply your own generator instead of a slice
object.

> I don't see how introducing new syntax would simplify indexing.

It would move the slice logic from the list to the slice object. Now the
slice object is just a container, but with this it would be useful.

> Why should lists accept a list or a generator as an index? What is the
> use case you have in mind?

For example, a multiple selection box's selection would be changed into
its values by supplying the chosen indexes into a list of values.

> > Optionally, the 1:2 syntax would create a slice object outside of list
> > index areas.
>
> Again, I don't see how this could be useful...
>
> > >>> list(1:5)
> > [1, 2, 3, 4]
> >
> > >>> list(1:5:2)
> > [1, 3]
>
> list(range(1,5,2))?

It would only be useful as shorthand for xrange, but its addition
capabilities would be useful.

> > >>> range(30)[1:5 + 15:17]
> > [1, 2, 3, 4, 15, 16]
>
> This is confusing, IMHO, and doesn't provide any advantage over:
>
> >>> s = list(range(30))
> >>> s[1:5] + s[15:17]

I don't think it's confusing. Also, an advantage would be that if the
slice object were being automatically generated, this would simplify the
code.

> If you really needed it, you could define a custom class with a fancy
> __getitem__:
>
> class A:
>     def __getitem__(self, x):
>         return x
>
> >>> A()[1:3, 2:5]
> (slice(1, 3, None), slice(2, 5, None))
>
> P.S. You should consider using the python-ideas
> (http://mail.python.org/mailman/listinfo/python-ideas) mailing list,
> instead of python-dev, for posting suggestions.
> Cheers,
> -- Alexandre

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
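As background for the thread above: a slice object in current Python is indeed just a container for start/stop/step, and the sequence being indexed decides what those mean. A minimal sketch of how a slice resolves to concrete indices, using the real `slice.indices()` method (this is illustrative context, not part of the proposal being discussed):

```python
# A slice object stores start/stop/step; slice.indices(length) clamps
# them against a given sequence length and resolves negative values.
s = slice(1, 10, 2)
print(list(range(*s.indices(20))))   # indices selected from a length-20 sequence
print(list(range(*s.indices(5))))    # the same slice, clamped to length 5

# Negative indices only become concrete once a length is known:
t = slice(1, -1)
print(list(range(*t.indices(6))))
```

This is also why a slice cannot simply *be* a generator: until it meets a sequence of a particular length, a slice like `1:-1` has no definite set of indices.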
Re: [Python-Dev] Equality on method objects
Hi Phillip,

On Sun, Mar 09, 2008 at 07:05:12PM -0400, Phillip J. Eby wrote:
> I did not, however, need the equality of bound methods to be based on
> object value equality, just value identity.
>
> ...at least until recently, anyway. I do have one library that wants
> to have equality-based comparison of im_self. What I ended up doing
> is writing code that tests what the current Python interpreter is
> doing, and if necessary implements a special method type, just for
> purposes of working around the absence of im_self equality
> testing. However, it's a pretty specialized case (...)

I found myself in exactly the same case: a pretty specialized example
where I wanted bound methods to use im_self equality rather than
identity, solved by writing my own bound-method-like object. But that's
not really hard to do, and the general tendency (which matches my own
opinion too) seems to be that using im_self identity is less surprising.

In general, "x.append" is interchangeable with "x.append" even if
"x.append is not x.append", so let's go for the least surprising
behavior: "m1.im_self is m2.im_self and m1.im_func == m2.im_func".
Objection?

A bientot,

Armin
Re: [Python-Dev] Equality on method objects
At 12:26 PM 3/10/2008 +0100, Armin Rigo wrote:
> Hi Phillip,
>
> On Sun, Mar 09, 2008 at 07:05:12PM -0400, Phillip J. Eby wrote:
> > I did not, however, need the equality of bound methods to be based on
> > object value equality, just value identity.
> >
> > ...at least until recently, anyway. I do have one library that wants
> > to have equality-based comparison of im_self. What I ended up doing
> > is writing code that tests what the current Python interpreter is
> > doing, and if necessary implements a special method type, just for
> > purposes of working around the absence of im_self equality
> > testing. However, it's a pretty specialized case (...)
>
> I found myself in exactly the same case: a pretty specialized example
> where I wanted bound methods to use im_self equality rather than
> identity, solved by writing my own bound-method-like object. But that's
> not really hard to do, and the general tendency (which matches my own
> opinion too) seems to be that using im_self identity is less surprising.
>
> In general, "x.append" is interchangeable with "x.append" even if
> "x.append is not x.append", so let's go for the least surprising
> behavior: "m1.im_self is m2.im_self and m1.im_func == m2.im_func".
> Objection?

Nope; that's exactly what I proposed at the end of the email quoted
above.
Re: [Python-Dev] PEP Proposal: Revised slice objects & lists use slice objects as indexes
Forrest Voight wrote:
> > I am not sure what you are trying to propose here. The slice object
> > isn't special, it's just a regular built-in type.
>
> The idea is to have slice objects be generators. You could make a
> slice like 1:10:2 , and that would make a slice object which could be
> used as a list index. The list would return a list with the
> corresponding item for every index in the generator. Then, lists could
> transparently be used as list indexes, or you could supply your own
> generator instead of a slice object.

You can already pass whatever items you like to __getitem__ for your own
sequences without even touching the builtin slice(). You can even write
a decorator to convert slice objects to the relatively arbitrary index
iterators you appear to favour (I wish you good luck in getting those to
play well with numpy and the myriad of other C extensions that rely on
the existing extended slicing semantics, or in explaining how they work
to a Python novice - you're going to need it).

That said, and as Alexandre already pointed out, this thread is
off-topic for python-dev - please take it to python-ideas to thrash out
whether or not it has any practical applications, and whether those
applications (assuming they exist) are even remotely close to compelling
enough to justify the pain involved in making such a major change to the
core language.

Regards,
Nick.

--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
http://www.boredomandlaziness.org
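The decorator Nick alludes to is straightforward to write in user code today. The sketch below is hypothetical (the name `expand_slices` and the `Squares` class are invented for illustration, not anything Nick proposed), but it shows how slice handling can live entirely outside the sequence type:

```python
def expand_slices(getitem):
    """Wrap a __getitem__ that only understands integers so it also
    accepts slice objects, by expanding the slice into integer lookups."""
    def wrapper(self, index):
        if isinstance(index, slice):
            return [getitem(self, i) for i in range(*index.indices(len(self)))]
        return getitem(self, index)
    return wrapper

class Squares:
    def __len__(self):
        return 10

    @expand_slices
    def __getitem__(self, i):
        return i * i

sq = Squares()
print(sq[3])       # a plain integer index
print(sq[2:8:2])   # a slice, expanded by the decorator into lookups
```

No change to the language is needed; the existing `__getitem__` protocol already hands the slice object through untouched.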
Re: [Python-Dev] Equality on method objects
Phillip J. Eby wrote:
> At 12:26 PM 3/10/2008 +0100, Armin Rigo wrote:
> > In general, "x.append" is interchangeable with "x.append" even if
> > "x.append is not x.append", so let's go for the least surprising
> > behavior: "m1.im_self is m2.im_self and m1.im_func == m2.im_func".
> > Objection?
>
> Nope; that's exactly what I proposed at the end of the email quoted above.

+1 here - this is the behaviour I would expect if attempting to provide
a called-once-only guarantee for a callback list.

Regards,
Nick.

--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
http://www.boredomandlaziness.org
Re: [Python-Dev] Get rid of strerror.c and memmove.c?
> Since strerror() and memmove() are both in C89 (they are listed in
> K&R), can we ditch these files?

I think they can safely be deleted.

Regards,
Martin
Re: [Python-Dev] Equality on method objects
On Mon, Mar 10, 2008 at 4:26 AM, Armin Rigo <[EMAIL PROTECTED]> wrote:
> I found myself in exactly the same case: a pretty specialized example
> where I wanted bound methods to use im_self equality rather than
> identity, solved by writing my own bound-method-like object. But that's
> not really hard to do, and the general tendency (which matches my own
> opinion too) seems to be that using im_self identity is less surprising.
>
> In general, "x.append" is interchangeable with "x.append" even if
> "x.append is not x.append", so let's go for the least surprising
> behavior: "m1.im_self is m2.im_self and m1.im_func == m2.im_func".
> Objection?

+1

--
Adam Olsen, aka Rhamphoryncus
Re: [Python-Dev] Complexity documentation request
Hello again,

Guido van Rossum wrote:
> Well, there you have hit the nail on the head -- should we document
> the actual or the guaranteed O() expression? I think this is a can of
> worms better left unopened. At best we should include some hints to

I will open this can and say that average-case complexity is the most
important: for example, sorting is O(n log n) and dict lookup is O(1).
But because the worst case is important too, it would be nice (but not
necessary) if there was an extra column on the table with that
information, or blank if not applicable.

> clarify that random access to list and tuple elements is constant time
> in CPython, and that dicts and sets are implemented using a hash table
> with open hashing -- readers can draw their own conclusions from that.

Notes are nice and already exist at random places in the *huge*
documentation archive. But I still believe that the best place for this
is the already existing tables in the docs (I pointed them out in my
initial post). One should trivially be able to find this information. On
the other hand, notes could be added for various oddities according to
experience - for example, that for sets of up to 10(?) elements a list
would probably be better because of the hashing overhead.

> What other implementations do should be up to them -- it becomes a
> Quality of Implementation issue.

I agree that CPython docs should document CPython behaviour. This would
be the most helpful for someone writing a program in CPython. People who
use other implementations should consult the appropriate docs.
Implementations with worse algorithms should try to reach CPython
efficiency.

> Regarding the OP's need for this kind of information (deciding between
> the various standard data types), I would recommend a different
> approach -- pick the data type that produces the shortest code. In all
> likelihood it's also the most efficient.

Hmmm, the first thing that comes to mind is prepending an item to a
list, which is short and seems nice, but is O(n) I think! And besides
that, for someone who *cares* about performance, good looks are not
enough...

Thanks for your answers,
Dimitris
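The prepend example above is the classic trap: `list.insert(0, x)` shifts every element, while `collections.deque` is designed for constant-time appends at either end. A small illustration (not from the thread, just the standard idiom):

```python
from collections import deque

items = [1, 2, 3]
items.insert(0, 0)   # O(n): every existing element is shifted one slot right
print(items)

d = deque([1, 2, 3])
d.appendleft(0)      # O(1): deques support constant-time appends at both ends
print(list(d))
```

Both produce the same sequence; only the cost per prepend differs as the container grows.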
Re: [Python-Dev] Complexity documentation request
Fred Drake wrote:
> On Mar 9, 2008, at 10:22 AM, Aahz wrote:
> > This has been discussed before and rejected for two reasons:
> >
> > * Other Python implementations (Jython, PyPy, IronPython) may not be
> >   able to provide the same type implementations
> >
> > * Algorithmic information does sometimes change between versions, and
> >   keeping the docs updated is not trivial
>
> Also, given the significance of the constant factors and the fact that
> these are significantly dependent, especially for containers, on the
> objects passed in (either to be contained or tested), it's not clear
> that it often makes sense to worry about it at too detailed a level.
> Common sense, knowledge of the interpreter, and experience are often
> more valuable than easily-outdated documentation.

Fair enough, but the fact is that this documentation already exists, at
random locations unfortunately. Who would expect to find such valuable
info in a rejected PEP (3128)!

I will agree that experience and interpreter inside knowledge are most
valuable for choosing the right structure, but isn't this too much for
occasional Python programmers?

Thanks,
Dimitris
Re: [Python-Dev] Complexity documentation request
> I will open this can and say that average case complexity is the most
> important. For example sorting O(nlogn) and dict lookup O(1).

I would still debate the complexity of dict lookup. What are you
averaging over? In the absence of any distribution property of hash
functions in the average case, you can't say anything about dictionary
performance.

I also disagree that the average complexity is "most important". I find
the worst-case complexity most important. For average-case complexity, I
just measure my application, and don't care about theoretical numbers.

> Notes are nice and already exist at random places in the *huge*
> documentation archive. But I still believe that the best place for this
> are the already existing tables in the docs (I pointed them at my
> initial post). One should trivially be able to find this information.

Feel free to start a Wiki page then. With the right keywords on the
page, it shouldn't take long for Google to pick up the page and return
it to people asking the right questions.

> I agree that CPython docs should document CPython behaviour.

Actually, no. The "CPython documentation" documents Python, the
language, and its standard library. It is a specification that CPython
conforms to (hopefully). There might be fragments in it that are both
worthwhile to mention and implementation-specific, but they should be
marked as the latter.

> Hmmm, the first thing that comes to mind is prepending an item in a list
> which is small and seems nice, but is O(n) I think! And besides that,
> for someone who *cares* about his code good looks is not enough...

Define "small". For a theoretically reasonable definition of "small",
prepending is O(1): namely, when a "small" list is one whose size is
bounded by some upper bound (say, M). For such a list, prepending is
O(1) (namely, not more than M copies). Of course, you can only prepend a
finite number of times to such a list, unless you also delete
in-between. Over a long series of insertions and deletions, prepending
will be O(1) (if M is large, with a large factor).

Regards,
Martin
Re: [Python-Dev] Complexity documentation request
On Sun, Mar 9, 2008 at 9:22 AM, Aahz <[EMAIL PROTECTED]> wrote:
> There probably would be some value in a wiki page on python.org that
> provides this information, particularly across versions. You may be
> able to find volunteers to help on comp.lang.python.

I just created a very basic one at
http://wiki.python.org/moin/TimeComplexity?action=show

I'm not that familiar with the Wiki syntax, so the tables are kind of
ugly at the moment. I wasn't sure about many of the set() operations, so
I didn't include those.

--
Daniel Stutzbach, Ph.D.
President, Stutzbach Enterprises LLC
Re: [Python-Dev] Complexity documentation request
On Mon, Mar 10, 2008, Daniel Stutzbach wrote:
> On Sun, Mar 9, 2008 at 9:22 AM, Aahz <[EMAIL PROTECTED]> wrote:
> > There probably would be some value in a wiki page on python.org that
> > provides this information, particularly across versions. You may be
> > able to find volunteers to help on comp.lang.python.
>
> I just created a very basic one at
> http://wiki.python.org/moin/TimeComplexity?action=show
>
> I'm not that familiar with the Wiki syntax, so the tables are kind of
> ugly at the moment.

Please specify which Python version you're using. Thanks!

--
Aahz ([EMAIL PROTECTED]) <*> http://www.pythoncraft.com/

"All problems in computer science can be solved by another level of
indirection." --Butler Lampson
Re: [Python-Dev] Complexity documentation request
> I just created a very basic one at
> http://wiki.python.org/moin/TimeComplexity?action=show

I just knew that this could be a field of endless nitpicking.

I think the dict.copy classification is incorrect. A dict.copy can take
unbounded time if the dict has seen many recent deletions (which didn't
shrink it). Copying will have to iterate over all slots of the
dictionary, and the ratio of slots to keys can grow unbounded if you
have just deletions without insertions. IOW, if you do

d = {}
for i in xrange(N):
    d[i] = i
for i in xrange(N-1):
    del d[i]

then doing d.copy() takes O(N), not constant time (even though there is
only one key in the dictionary).

> I wasn't sure about many of the set() operations, so I didn't include those.

set is just implemented like a dictionary, without values.

Regards,
Martin
Re: [Python-Dev] Complexity documentation request
On Mon, Mar 10, 2008 at 5:06 PM, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> > I just created a very basic one at
> > http://wiki.python.org/moin/TimeComplexity?action=show
>
> I just knew that this could be a field of endless nitpicking.

Certainly. I am hoping that this thread will soon wind down and future
nitpicking can come in the form of edits to the wiki page, with
footnotes or links to other pages for anything non-obvious.

> I think the dict.copy classification is incorrect. A dict.copy
> can take unbounded time, if the dict has seen many recent
> deletions (which didn't shrink it). Copying will have to iterate
> over all slots of the dictionary, and the ratio of slots to
> keys can grow unbounded if you have just deletions without
> insertions.

I have updated the wiki page accordingly. I assume there is a reason
that PyDict_DelItem never calls dictresize?

--
Daniel Stutzbach, Ph.D.
President, Stutzbach Enterprises LLC
Re: [Python-Dev] Complexity documentation request
> I assume there is a reason that PyDict_DelItem never calls dictresize?

Yes - the assumption is that more del calls will follow, so that the
dictionary eventually ends up empty. Only when new inserts are made is
that assumption proven wrong, and then the shrinking can be done in one
sweep.

Regards,
Martin
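The effect Martin describes is observable from pure Python in CPython: deleting keys never shrinks the allocated table, so `sys.getsizeof` reports the same size before and after mass deletion (the exact sizes are implementation- and version-specific; this is a CPython-only sketch):

```python
import sys

d = {i: i for i in range(1000)}
size_full = sys.getsizeof(d)
for i in range(999):
    del d[i]
size_after = sys.getsizeof(d)

# One key is left, but the table allocated for 1000 keys is still there,
# which is exactly why d.copy() must still walk all the slots.
print(len(d), size_after == size_full)
```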
Re: [Python-Dev] Equality on method objects
On Mon, Mar 10, 2008 at 9:20 AM, Adam Olsen <[EMAIL PROTECTED]> wrote:
> On Mon, Mar 10, 2008 at 4:26 AM, Armin Rigo <[EMAIL PROTECTED]> wrote:
> > In general, "x.append" is interchangeable with "x.append" even if
> > "x.append is not x.append", so let's go for the least surprising
> > behavior: "m1.im_self is m2.im_self and m1.im_func == m2.im_func".
> > Objection?
>
> +1
>
> --
> Adam Olsen, aka Rhamphoryncus

+1 here too. For 2.6 as well as 3.0.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
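The semantics agreed on in this thread, identity for the bound object plus equality for the function, are what CPython shipped; in Python 3 the attributes are spelled __self__ and __func__. A quick check of the behavior:

```python
x = [1]
y = [1]

# Two bound methods of the *same* object compare equal, even though
# each attribute access creates a fresh method object:
print(x.append is x.append)   # False: distinct method objects
print(x.append == x.append)   # True: same __self__ (by identity), same __func__

# Equal-but-distinct objects do NOT make their bound methods equal,
# because __self__ is compared by identity, not by value:
print(x == y, x.append == y.append)
```

This is the "least surprising" rule Armin proposed: `x.append` is interchangeable with `x.append` even when the two method objects are not the same object.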
Re: [Python-Dev] Complexity documentation request
Martin v. Löwis wrote:
> Yes - the assumption is that more del calls will follow, so that the
> dictionary eventually ends up empty. Only when new inserts are made,
> that assumption is proven wrong, and the shrinking can be done in
> one sweep.

Hmmm. So the most efficient way to copy a dict that you've just deleted
a lot of things from is to insert something, then delete it again, and
then copy. :-)

--
Greg
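Greg's tongue-in-cheek trick relies on the resize-on-insert behaviour Martin described. Whether a single insert actually shrinks the table is version-specific (modern CPython dicts behave differently from the 2008 implementation), so treat this as a historical sketch rather than an optimization to rely on; the helper name `compact_copy` is invented here:

```python
def compact_copy(d):
    """Copy a dict that has seen mass deletions, nudging CPython of that
    era to resize first (deletions never trigger a resize; inserts do)."""
    sentinel = object()
    d[sentinel] = None   # insertion runs the resize check...
    del d[sentinel]      # ...then the dummy key is removed again
    return d.copy()

d = {i: i for i in range(1000)}
for i in range(999):
    del d[i]
print(compact_copy(d))   # the copy is correct either way
```

The copy is identical in content regardless of whether the resize happens; only the copying cost differed.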
Re: [Python-Dev] Complexity documentation request
Martin v. Löwis wrote:
> I would still debate the complexity of dict lookup. What are you
> averaging over?

All the Python programs I've ever run. :-)

> I also disagree that the average complexity is "most important".
> I find the worst-case complexity most important.

While in theory the worst-case behaviour of a hash table is O(n), in
practice the worst case is so vanishingly rare that nobody worries about
it. Can you really say that you don't make any design decisions early on
based on the assumption that dict lookup will almost certainly be a lot
faster than searching a list?

> > Hmmm, the first thing that comes to mind is prepending an item in a list
> > which is small and seems nice, but is O(n) I think!
>
> Define "small".

I think he was talking about the size of the code. In other words, just
because the code is small doesn't necessarily mean the algorithm is
efficient.

> For a theoretically-reasonable definition of "small",
> prepending is O(1):

Big-O notation is all about the limit when things get big, so it doesn't
make sense to talk about "O() when something is small".

--
Greg
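The design assumption Greg mentions is easy to demonstrate with a rough measurement; the exact numbers depend on the machine, but the gap between an O(n) list scan and a hashed lookup is orders of magnitude for a container of this size:

```python
import timeit

n = 100_000
haystack_list = list(range(n))
haystack_set = set(haystack_list)
needle = -1   # absent, forcing a full scan of the list on every test

t_list = timeit.timeit(lambda: needle in haystack_list, number=100)
t_set = timeit.timeit(lambda: needle in haystack_set, number=100)
print(t_list > t_set)   # the linear scan loses badly
```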
Re: [Python-Dev] PEP Proposal: Revised slice objects & lists use slice objects as indexes
Forrest Voight wrote:
> Slice objects that are produced in a list index area would be different,
> and optionally the syntax for slices in list indexes would be expanded
> to work everywhere.

Something like this was quite close to getting in a while back, but it
was eventually dropped. Anyone advocating this should probably look back
over the discussion and find out why. I think they were to be called
"range expressions" or something like that.

A point to consider is that iterating over a range of integers is
actually quite a rare thing to do in idiomatic Python. It's much more
common to iterate directly over a sequence of things you want to operate
on. This is even more true now that we have enumerate(). So this
proposal is addressing a fairly small set of use cases.

--
Greg
Re: [Python-Dev] Complexity documentation request
Dimitrios Apostolou wrote:
> On the other hand notes could be added for various oddities according to
> experience. For example that for sets up to 10(?) elements, a list would
> probably be better because of the hashing overhead.

That's the sort of thing that tends to be *very* implementation
dependent - not just between CPython and others, but between different
releases of CPython.

> Hmmm, the first thing that comes to mind is prepending an item in a list
> which is small and seems nice, but is O(n) I think!

Yeah, there's no substitute for having at least a rough idea of how the
object you're using is implemented, e.g. array vs. linked list.

This kind of very basic information is something that I think ought to
be documented, and some guarantees made in the language definition. For
example, I think a Python implementation that implemented lists as
linked lists would make many people unhappy, as their algorithms
suddenly went from O(n**m) to O(n**(m+1)) without anyone telling them.

--
Greg
Re: [Python-Dev] PEP Proposal: Revised slice objects & lists use slice objects as indexes
On Mon, Mar 10, 2008 at 3:57 AM, Forrest Voight <[EMAIL PROTECTED]> wrote:
> > I am not sure what you are trying to propose here. The slice object
> > isn't special, it's just a regular built-in type.
>
> The idea is to have slice objects be generators. You could make a
> slice like 1:10:2 , and that would make a slice object which could be
> used as a list index. The list would return a list with the
> corresponding item for every index in the generator. Then, lists could

And what indices would the slice 1:-1 return, for example? Your proposal
can't play well with slices including negative indices.

Also, your desired use case of alist[indices] is already pretty well
covered by [alist[i] for i in indices].

Alex
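Alex's existing idiom in runnable form, covering the multiple-selection use case raised earlier in the thread (the sample data is invented for illustration):

```python
values = ['apple', 'banana', 'cherry', 'date', 'elderberry']
chosen = [0, 2, 4]   # e.g. the indexes selected in a multi-selection box

# Fancy indexing today: a list comprehension over the chosen indices.
print([values[i] for i in chosen])

# Negative indices keep their usual length-relative meaning with real
# slices, which a generator-style slice could not easily preserve:
print(values[1:-1])
```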
Re: [Python-Dev] Complexity documentation request
"Greg Ewing" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] || Yeah, there's no substitute for having at least a | rough idea of how the object you're using is implemented, | e.g. array vs. linked list. As I understand it, the new 2.6 docs include a new one on CPython specifically. A page there might be appropriate. But someone has to write and submit a patch for review. | This kind of very basic information is something that | I think ought to be documented, and some guarantees | made in the language definition. For example, I think | a Python implementation that implemented lists as | linked lists would make many people unhappy, as their | algorithms suddenly went from O(n**m) to O(n**(m+1)) | without anyone telling them. Such an implementation should document such a design decision, but I don't see that an interpreter that runs the test suite should be prohibited from calling itself a 'Python interpreter' tjr ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com