Re: [Python-Dev] Proposed: drop unnecessary "context" pointer from PyGetSetDef
On May 4, 2009, at 3:10 AM, Larry Hastings wrote:

> I should have brought this up to python-dev before--sorry for being so
> slow. It's already been in the tracker for a couple of days:
> http://bugs.python.org/issue5880
>
> The idea: PyGetSetDef has this "void *closure" field that acts like a
> context pointer. You stick it in the PyGetSetDef, and it gets passed back
> to you when your getter or setter is called. It's a reasonable API
> design, but in practice you almost never need it. Meanwhile, it clutters
> up CPython, particularly typeobject.c; there are all these function calls
> that end with ", NULL);", just to satisfy the getter/setter prototype
> internally.

I think this is an important feature: it allows you to define generic,
reusable getter and setter functions and pass static metadata to them at
runtime. Admittedly I have never needed the full pointer; my typical usage
is to pass in an offset. I think this should only be removed if a suitable
mechanism replaces it; otherwise it will require some needless duplication
of code in extensions that use it (in particular my own) 8^)

-Casey

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
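The reuse Casey describes is the C-level analogue of parameterizing one
generic accessor with per-attribute metadata. A pure-Python sketch of the
same idea (all names here are hypothetical), where one factory function
plays the role that a shared getter plus a closure pointer plays in a
PyGetSetDef table:

```python
def make_getset(key):
    # 'key' plays the role of the "void *closure" context pointer: one
    # generic getter/setter pair is reused for every attribute, with the
    # per-attribute metadata supplied when the descriptor is built.
    def getter(self):
        return self._data[key]

    def setter(self, value):
        self._data[key] = value

    return property(getter, setter)


class Point(object):
    # Without the context argument, each attribute would need its own
    # hand-written getter and setter, duplicating the same logic.
    x = make_getset("x")
    y = make_getset("y")

    def __init__(self, x, y):
        self._data = {"x": x, "y": y}


p = Point(1, 2)
p.x = 10
assert (p.x, p.y) == (10, 2)
```

Removing the context argument from the C prototype would force one
dedicated getter and setter function per attribute, which is exactly the
duplication the closure field exists to avoid.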
[Python-Dev] In late this am
Going to the Dr., will be in thereafter.

-Casey
Re: [Python-Dev] In late this am
Heh, wrong dev list 8^). Sorry for the noise.

-Casey

On Aug 3, 2009, at 8:47 AM, Casey Duncan wrote:

> Going to the Dr., will be in thereafter.
Re: [Python-Dev] Why is nan != nan?
On Mar 25, 2010, at 7:19 PM, P.J. Eby wrote:

> At 11:57 AM 3/26/2010 +1100, Steven D'Aprano wrote:
>> But they're not -- they're *signals* for "your calculation has gone
>> screwy and the result you get is garbage", so to speak. You shouldn't
>> even think of a specific NAN as a piece of specific garbage, but merely
>> a label on the *kind* of garbage you've got (the payload): INF-INF is,
>> in some sense, a different kind of error to log(-1). In the same way you
>> might say "INF-INF could be any number at all, therefore we return NAN",
>> you might say "since INF-INF could be anything, there's no reason to
>> think that INF-INF == INF-INF."
>
> So, are you suggesting that maybe the Pythonic thing to do in that case
> would be to cause any operation on a NAN (including perhaps comparison)
> to fail, rather than allowing garbage to silently propagate?
>
> In other words, if NAN is only a signal that you have garbage, is there
> really any reason to keep it as an *object*, instead of simply raising an
> exception? Then, you could at least identify what calculation created the
> garbage, instead of it percolating up through other calculations.
>
> In low-level languages like C or Fortran, it obviously makes sense to
> represent NAN as a value, because there's no other way to represent it.
> But in a language with exceptions, is there a use case for it existing as
> a value?

If a NaN object is allowed to exist (that is, if a float operation that
does not return a real number does not itself raise an exception
immediately), then it will always be possible to get (seemingly)
nonsensical behavior when it is used in containers that do not themselves
"operate" on their elements. So even provided that performing any
"operation" on a NaN object raises an exception, it would still be
possible to add such an object to a list or tuple and have subsequent
containment checks for that object return false. This "solution" would
simply narrow the problem posed, but not eliminate it.
None of the solutions posed seem very ideal, particularly when they
deviate from the standard in arbitrary ways someone deems "better". It's
obvious to me that no ideal solution exists so long as you attempt to
represent non-numeric values in a numeric type. So unless you simply
eliminate NaNs (thus breaking the standard), you are going to confuse
somebody. And I think having float deviate from the IEEE standard is ill
advised unless there is no alternative (i.e., the standard cannot be
practically implemented); breaking it will confuse people too (and
probably the ones that know this domain).

I propose that the current behavior stand as is, and that the
documentation mention that NaN values are unordered, so some float values
may not behave intuitively wrt hashing, equality, etc.

The fact of the matter is that using floats as dict keys or set values, or
even just checking equality, is much more complex in practice than you
would expect. I mean, even representing 1.1 is problematic ;^). Unless the
float values you are using are constants, how would you practically use
them as dict keys or hash set members anyway? I'm not saying it can't be
done, but is a hash table with float keys ever a data structure that
someone on this list would recommend? If so, good luck and godspeed 8^)

-Casey
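The unintuitive behavior under discussion is easy to demonstrate. A short
sketch of how reflexive-equality assumptions break down for NaN in Python
(CPython's identity shortcut in containment checks is what makes the
same-object and distinct-object cases differ):

```python
# NaN compares unequal to everything, including itself (per IEEE 754).
a = float("nan")
b = float("nan")

assert a != a          # equality is not reflexive for NaN
assert not (a == b)    # two NaNs never compare equal

# Containers use an identity-then-equality shortcut, so the *same* NaN
# object is "found", while a distinct NaN object is not.
items = [a]
assert a in items       # identity match succeeds
assert b not in items   # equality check fails for a different NaN object
```

This is why "just make operations on NaN raise" narrows but does not
eliminate the surprises: containment and hashing never invoke a float
operation on the NaN at all.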
Re: [Python-Dev] Why is nan != nan?
On Mar 26, 2010, at 3:16 PM, Xavier Morel wrote:

> On 26 Mar 2010, at 18:40 , Casey Duncan wrote:
>>
>> On Mar 25, 2010, at 7:19 PM, P.J. Eby wrote:
>>
>>> At 11:57 AM 3/26/2010 +1100, Steven D'Aprano wrote:
>>>> But they're not -- they're *signals* for "your calculation has gone
>>>> screwy and the result you get is garbage", so to speak. You shouldn't
>>>> even think of a specific NAN as a piece of specific garbage, but
>>>> merely a label on the *kind* of garbage you've got (the payload):
>>>> INF-INF is, in some sense, a different kind of error to log(-1). In
>>>> the same way you might say "INF-INF could be any number at all,
>>>> therefore we return NAN", you might say "since INF-INF could be
>>>> anything, there's no reason to think that INF-INF == INF-INF."
>>>
>>> So, are you suggesting that maybe the Pythonic thing to do in that case
>>> would be to cause any operation on a NAN (including perhaps comparison)
>>> to fail, rather than allowing garbage to silently propagate?
>>>
>>> In other words, if NAN is only a signal that you have garbage, is there
>>> really any reason to keep it as an *object*, instead of simply raising
>>> an exception? Then, you could at least identify what calculation
>>> created the garbage, instead of it percolating up through other
>>> calculations.
>>>
>>> In low-level languages like C or Fortran, it obviously makes sense to
>>> represent NAN as a value, because there's no other way to represent it.
>>> But in a language with exceptions, is there a use case for it existing
>>> as a value?
>>
>> If a NaN object is allowed to exist, that is a float operation that does
>> not return a real number does not itself raise an exception immediately,
>> then it will always be possible to get (seemingly) nonsensical behavior
>> when it is used in containers that do not themselves "operate" on their
>> elements.
> How about raising an exception instead of creating nans in the first
> place, except maybe within specific contexts (so that the IEEE-754 minded
> can get their nans working as they currently do)?
>
> That way, there cannot be any nan-induced seemingly nonsensical behavior
> except within known scopes.

Having NaN creation raise an exception would undoubtedly break plenty of
existing code that either expects and deals with NaNs itself, or works
accidentally because the NaNs do not cause harm. I don't sympathize much
with the latter case, since those are probably just hidden bugs, but the
former makes it hard to justify raising exceptions for NaNs as the default
behavior. But since I assume we're talking Python 3 here, maybe arguments
containing the phrase "existing code" can be dutifully ignored, I dunno.

-Casey
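For comparison, the stdlib decimal module already implements the
context-based approach Xavier suggests: invalid operations raise by
default, and the trap can be disabled within a context to get IEEE-style
quiet NaNs. A minimal sketch:

```python
from decimal import Decimal, InvalidOperation, localcontext

# By default the InvalidOperation trap is enabled, so an operation that
# would produce a NaN raises instead of returning one.
raised = False
try:
    Decimal("Infinity") - Decimal("Infinity")
except InvalidOperation:
    raised = True
assert raised

# Disabling the trap within a known scope gives quiet NaN propagation,
# confining any NaN-induced surprises to that scope.
with localcontext() as ctx:
    ctx.traps[InvalidOperation] = 0
    result = Decimal("Infinity") - Decimal("Infinity")
assert result.is_nan()
```

Binary floats have no such context machinery in Python, which is part of
why changing their default behavior is so disruptive.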
Re: [Python-Dev] Continuing 2.x
On Oct 28, 2010, at 10:59 PM, Stephen J. Turnbull wrote:

> Mark's position is different. His words suggest that he thinks that
> Python.org owes the users something, although if pressed I imagine he'd
> present some argument that more users will lead to development of a
> better language. I think the developers universally consider that to be
> objectively false: Python 3 is a much better language, and is on track to
> be a much better environment for development -- of itself and of
> applications -- in 2013 than Python 2 could conceivably be.

There is tension here. python-dev wants Python to succeed, and now Python
== Python 3.x. That means end-of-lifing Python 2.x, for many reasons, not
the least of which is that more Python 2.x releases are a disincentive for
folks to move their projects to Python 3.x.

However, there are many, many more users of Python 2.x than Python 3.x.
Many may never upgrade for the life of their projects, because if it ain't
broke, why fix it? It doesn't matter how much better Python 3 is than
Python 2. It isn't better enough.

I like Python 3; I am using it for my latest projects, but I am also
keeping Python 2 compatibility. This incurs some overhead, and basically
means I am still really only using Python 2 features. So in some respects
my Python 3.x support is only tacit: it works as well as for Python 2, but
it's not really taking advantage of Python 3. I haven't run into a
situation yet where I really want to or have to use Python 3 exclusive
features, but then again I'm not really learning to use Python 3 either,
short of the new C API. In this regard the existence of Python 3 is a
disadvantage, not an advantage, for my new code, regardless of how much
better a language or dev environment it may be. Of course I made the
choice to support both 2 and 3, but it was largely informed by the fact
that other dependencies for my projects currently only support Python 2
and I don't have the spare time to port them right now.
So at least right now, for me, Python 3 is not helping make new projects
easier; it is the contrary, unfortunately. Yeah, I am getting older and
the years are going by faster, but gosh, 2013 still feels like a ways off.

-Casey
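The "common subset" overhead Casey describes typically looks something
like the following sketch (a hypothetical module header, not his actual
code): pin Python 3 semantics via __future__ imports, alias the names that
moved, and avoid syntax that only one major version accepts.

```python
# Runs unmodified on Python 2.6+ and 3.x; the __future__ imports pin
# Python 3 semantics while remaining valid Python 2.
from __future__ import print_function, division, unicode_literals

import sys

PY3 = sys.version_info[0] >= 3

# Names that moved between 2 and 3 get aliased once, up front.
if PY3:
    text_type = str
    from io import StringIO
else:
    text_type = unicode  # noqa: F821 -- only defined on Python 2
    from StringIO import StringIO

def describe(value):
    # Sticking to the shared subset means no Python 3-only features
    # (keyword-only arguments, extended unpacking, etc.) can appear here,
    # which is the sense in which such code "only uses Python 2 features".
    return "%s: %s" % (type(value).__name__, value)

print(describe(3.14))
```

The cost is exactly the one described above: the code works on Python 3,
but gains nothing from it.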
Re: [Python-Dev] Relocating Python
On Jul 29, 2008, at 12:56 PM, Lupusoru, Razvan A wrote:

> Hello,
>
> I am trying to get Python 2.5.2 working for an IA32 system. The
> compilation is done on an Ubuntu 8.04.1 dev system. I am using a custom
> gcc and ld specific to the IA32 system. This is my makefile:
>
> ##
> BUILD_DEST = /i686-custom-kernel
> CC = $(BUILD_DEST)/bin/i686-linux-gcc
> CPP = $(CC) -E
> CXX = $(BUILD_DEST)/bin/i686-linux-g++
> LD = $(BUILD_DEST)/bin/i686-linux-ld
> PYTHONINSTALLPATH = $(BUILD_DEST)/usr
> export
>
> all:
> 	tar xzfv Python-2.5.2.tgz
> 	./Python-2.5.2/configure --prefix=${PYTHONINSTALLPATH} --host=i686-linux --enable-shared
> 	cd Python-2.5.2
> 	make
> 	make install
> ##
>
> Everything compiles correctly. I then copy the contents of the
> $BUILD_DEST and put them on the hard drive for my IA32 system. I
> basically use the contents of $BUILD_DEST as the root directory on my
> IA32 system. Python seems to run correctly when I run it, but when I do
> things like "import pysqlite", it cannot find it. Is there anything
> special I have to do to relocate my python (since on my IA32 system it
> runs from /usr/bin/python but it originally gets created in
> ${BUILD_DEST}/usr/bin/python)?

You'll want to pass configure the prefix where python will ultimately be
installed; otherwise the paths baked in during make won't make sense on
the destination system. That said, pysqlite is not part of the stdlib, so
your actual problem may have more to do with how you've installed it than
anything else. When you run python once relocated, what does sys.path
contain?

What we do for packaging is run 'configure' normally and then 'make', then
override some make variables for 'make install' to temporarily install it
in a different place for package staging.
It looks something like this:

./configure && \
make && \
make BINDIR=$(shell pwd)/path/to/tmp/bin \
     CONFINCLUDEDIR=$(shell pwd)/path/to/tmp/include \
     INCLUDEDIR=$(shell pwd)/path/to/tmp/include \
     LIBDIR=$(shell pwd)/path/to/tmp/lib \
     MANDIR=$(shell pwd)/path/to/tmp/man \
     SCRIPTDIR=$(shell pwd)/path/to/tmp/lib \
     install

Then when the package is deployed, the files are actually installed under
the standard 'configure' prefix (/usr/local I think).

hth,
-Casey
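A quick way to check what search paths a relocated interpreter actually
baked in is to inspect them from the interpreter itself on the target
system; if sys.prefix still points at the build-time staging path, imports
of installed third-party modules will fail:

```python
import sys

# The prefix chosen at configure time determines where the stdlib and
# site-packages directories are searched for at startup. On a relocated
# install, these should reflect the target system's layout (e.g. /usr),
# not the staging directory used on the build host.
print("prefix:     ", sys.prefix)
print("exec_prefix:", sys.exec_prefix)
for entry in sys.path:
    print("search path:", entry)
```

If the printed paths still contain the staging prefix, reconfigure with
the final prefix and stage the install with overridden make variables (or
DESTDIR) instead.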
Re: [Python-Dev] Things to Know About Super
On Aug 29, 2008, at 11:46 AM, Michele Simionato wrote:

> On Fri, Aug 29, 2008 at 6:15 PM, Nick Coghlan <[EMAIL PROTECTED]> wrote:
>> The mixin methods in the ABC machinery would be a lot less useful
>> without multiple inheritance (and the collections ABCs would be a whole
>> lot harder to define and to write). So if you're looking for use cases
>> for multiple inheritance, I'd suggest starting with the Python 2.6
>> collections module and seeing how you would go about rewriting it using
>> only single inheritance. I believe the new io module is also fairly
>> dependent on multiple inheritance.
>
> I am very well aware of the collections module and the ABC mechanism.
> However, you are missing that mixins can be implemented in a
> single-inheritance world without the complications of the MRO. See my
> answer to Alex Martelli in this same thread.

As interesting as this conversation is at a meta-level, I'm not sure how
much more can be accomplished here by debating the merits of multiple vs.
single inheritance. Unfortunately, I think this is a case where there is
not just one good way to do it in all cases, especially given the
subjective nature of "good" in this context.

This is what I take away from this:

- super() is tricky to use at best, and its documentation is inaccurate
and incomplete. I think it should also be made more clear that super() is
really mostly useful for framework developers, including users extending
frameworks. Unfortunately, many frameworks require you to extend them in
order to write useful applications in my experience, so it trickles down
to the app developer at times. In short, more correct documentation ==
good.

- The difficulties of super() are really symptomatic of the difficulties
of multiple inheritance. I think it's clear that multiple inheritance is
here to stay in Python, and it solves certain classes of problems quite
well.
But it requires careful consideration; it's easy to get carried away and
create a huge mess (a la Zope2, with which I am all too familiar), and it
can hinder code clarity as much as it facilitates reuse.

- There are good alternatives to multiple inheritance for many cases, but
there are cases where multiple inheritance is arguably best. Traits are a
possible alternative that deserve further study. I think that study would
be greatly aided by a third-party library implementing traits for Python.
If traits are to gain any traction or ever be considered for inclusion
into the language, such a library would need to exist first and
demonstrate its utility.

I know I'm probably just stating the obvious here, but I found it
therapeutic ;^)

-Casey
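The cooperative-super() pattern at the heart of this thread can be shown
in a minimal sketch (class names are hypothetical). The tricky part is
that super() delegates to the next class in the instance's MRO, not to a
fixed parent, which is both what makes mixins compose and what makes the
call chain hard to reason about:

```python
class Base(object):
    def greet(self):
        return ["Base"]

class LoggingMixin(Base):
    # A cooperative mixin: super() delegates to whatever comes next in
    # the MRO of the actual instance, not to a fixed parent class.
    def greet(self):
        return ["LoggingMixin"] + super(LoggingMixin, self).greet()

class CachingMixin(Base):
    def greet(self):
        return ["CachingMixin"] + super(CachingMixin, self).greet()

class App(LoggingMixin, CachingMixin):
    def greet(self):
        return ["App"] + super(App, self).greet()

# C3 linearization flattens the diamond into a single chain.
assert [c.__name__ for c in App.__mro__] == [
    "App", "LoggingMixin", "CachingMixin", "Base", "object"
]
# Each super() call advances one step along that chain.
assert App().greet() == ["App", "LoggingMixin", "CachingMixin", "Base"]
```

Note that LoggingMixin's super() call reaches CachingMixin, a class
LoggingMixin knows nothing about; that non-local behavior is exactly why
super() is documentation-hostile yet indispensable for frameworks built
on cooperative multiple inheritance.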