> Because isinstance is faster and handier than testing with try/except
> around (say) "x+0".
Are you sure? The former has two builtin lookups (which also entail two
failed global lookups), a function call, and a test/jump for the result.
The latter approach has no lookups (just a load constant), a try-block
setup, an add operation (optimized for integers, a fast slot lookup
otherwise), and a block end. Even if there were a performance edge, I
doubt that type checking is the time-critical part of most scripts.

> If an abstract basetype 'basenumber' caught many useful cases, I'd put
> it right at the start of the KnownNumberTypes tuple, omit all
> subclasses thereof from it, get better performance, AND be able to
> document very simply what the user must do to ensure his own custom
> type is known to me as "a number".

So, this would essentially become a required API? It would no longer be
enough to duck-type a number; you would also have to subclass from
basenumber? Wouldn't this preclude custom number types that also need to
derive from another builtin such as str? For instance, somewhere I have
Gram-Schmidt orthogonalization code for computing orthonormal bases, and
the only requirement for the input basis is that it be an inner-product
space -- accordingly, the inputs can be functions, symbols (a subclass
of str), or vectors (pretty much anything supporting multiplication and
subtraction). Similar conflicts arise for any pure computation
function -- I should be able to supply either numbers or symbolic inputs
(a subclass of str) that act like numbers.

> In Python/bltinmodule.c, function builtin_sum uses C-coded
> typechecking to single out strings as an error case:
>
>     /* reject string values for 'start' parameter */
>     if (PyObject_TypeCheck(result, &PyBaseString_Type)) {
>         PyErr_SetString(PyExc_TypeError,
>             "sum() can't sum strings [use ''.join(seq) instea
>
> [etc].
> Now, what builtin_sum really "wants" to do is to accept numbers,
> only -- it's _documented_ as being meant for "numbers": it uses +, NOT
> +=, so its performance on sequences, matrix and array-ish things, etc,
> is not going to be good. But -- it can't easily _test_ whether
> something "is a number". If we had a PyBaseNumber_Type to use here, it
> would be smooth, easy, and fast to check for it.

I think this is really a counter-example. We wanted to preclude
sequences so that sum() didn't become a low-performance synonym for
''.join(). However, there's no reason to preclude user-defined matrix or
vector classes. And, if you're suggesting that the type-check should
have been written as isinstance(result, basenumber), one consequence
would be that existing number-like classes would start to fail unless
retro-fitted with a basenumber superclass. Besides being a PITA,
retro-fitting existing, working code could also entail changing from an
old-style to a new-style class (possibly changing the semantics of the
class).

> A fast rational number type, see http://gmpy.sourceforge.net for
> details (gmpy wraps LGPL'd library GMP, and gets a lot of speed and
> functionality thereby).
>
> > if the parameter belongs to some algebraic ring homomorphic
> > with the real numbers, or some such. Are complex numbers also
> > numbers? Is it meaningful to construct gmpy.mpqs out of them? What
> > about Z/nZ?
>
> If I could easily detect "this is a number" about an argument x, I'd
> then ask x to change itself into a float, so complex would be easily
> rejected (while decimals would mostly work fine, although a bit slowly
> without some special-casing, due to the Stern-Brocot-tree algorithm I
> use to build gmpy.mpq's from floats). I can't JUST ask x to "make
> itself into a float" (without checking for x's "being a number")
> because that would wrongfully succeed for many cases such as strings.
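The trade-off debated at the top of this thread is easy to measure
directly. Here is a minimal sketch (not from the original discussion)
comparing the two idioms with the standard-library timeit module; no
figures are claimed, since the numbers vary by machine and interpreter,
and the tuple of "known number types" is just an illustrative choice:

```python
import timeit

# Idiom 1: isinstance() against a tuple of known number types.
def check_isinstance(x):
    return isinstance(x, (int, float, complex))

# Idiom 2: try/except around "x + 0", as quoted above.
def check_add(x):
    try:
        x + 0
        return True
    except TypeError:
        return False

# Both idioms agree on plain numbers and on rejecting strings...
assert check_isinstance(3) and check_add(3)
assert not check_isinstance("3") and not check_add("3")

# ...but they can disagree on duck-typed number-like objects:
# a class defining __add__ passes idiom 2 yet fails idiom 1
# unless it is added to the tuple (or a basenumber existed).

# Time each approach; actual results depend on the Python version
# and hardware, so none are asserted here.
t1 = timeit.timeit(lambda: check_isinstance(3), number=100_000)
t2 = timeit.timeit(lambda: check_add(3), number=100_000)
print(f"isinstance: {t1:.4f}s  try/except x+0: {t2:.4f}s")
```

The disagreement on duck-typed values, rather than raw speed, is the
crux of the argument above: idiom 1 rejects anything not enumerated in
the tuple, which is exactly what basenumber was proposed to fix.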
Part of the rationale for basestring was convenience of writing
isinstance(x, basestring) instead of isinstance(x, (str, unicode)). No
existing code needed to be changed for this to work.

This proposal, on the other hand, is different. To get the purported
benefits, everyone has to play along. All existing number-look-alike
classes would have to be changed in order to work with functions testing
for basenumber. That would be too bad for existing, working code.
Likewise, it would be impossible for symbolic code which already
subclasses from str (as discussed above).

> >> If I do write the PEP, should it be just about basenumber, or
> >> should it include baseinteger as well?

Introducing a baseinteger type is likely to create confusion because all
integers are real, but baseintegers would not be a subclass of floats. I
don't see a way of creating an integer-recognition tool that doesn't
conflate its terminology with broadly-held, pre-existing math knowledge:
complex is a superset of the reals; the reals include rationals and
irrationals, some of which are transcendental; and the rationals include
the integers, which are an infinite superset of the non-negative
integers, whole numbers, negative numbers, etc.

The decimal class only makes this more complicated. All binary floats
can be translated exactly to decimal, but not vice-versa. I'm not sure
where they would fit into an inheritance hierarchy. On the one hand,
decimals are very much like floats. On the other hand, they can be used
as integers and should not be rejected by code testing for baseinteger
unless the value actually has a fractional part. The latter isn't a nit
in the same league as wanting float('2.0') to be acceptable as an
integer argument; rather, the decimal class was specifically designed to
be usable in integer-only situations (the spec holds itself out as
encompassing integer math, fixed-point, and floating-point models).
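The point that a decimal's "integer-ness" depends on its value rather
than its type can be made concrete. This is a hedged sketch (not
anything proposed in the thread, and the helper name is invented)
showing a value test that accepts integral decimals where a hypothetical
isinstance(x, baseinteger) type test would reject them all:

```python
from decimal import Decimal

def has_fractional_part(d):
    """Return True if the Decimal value d is not an exact integer."""
    # to_integral_value() rounds to an integer; Decimal comparison
    # is by numeric value, so Decimal("2.00") == Decimal("2").
    return d != d.to_integral_value()

# Integral values pass, regardless of how they are written...
assert not has_fractional_part(Decimal("2"))
assert not has_fractional_part(Decimal("2.00"))
# ...while values with a genuine fractional part do not.
assert has_fractional_part(Decimal("2.5"))
```

A type-based check could only accept or reject all three of these at
once; the value-based check distinguishes them, which is what the
"unless the value actually has a fractional part" caveat above requires.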
Raymond (my real name is unpronounceable by humans)

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com