On 05/03/2011 08:19 PM, Robert Bradshaw wrote:

Btw, we shouldn't count on pruning in the design of this; I think it will in large part be used with def functions. And if you use a cdef function from another module through a pxd, you also need all versions.

Well, we'll want to avoid compiler warnings. E.g. floating might
include long double, but only float and double may be used. In pxd and
def functions, however, we will make all versions available.

Which is a reminder to hash out exactly how the dispatch will be resolved when coming from Python space (we do want to support "f(x, y)", without the []-qualifier, when calling from Python, right?).
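For concreteness, the two call forms being weighed would look roughly like the sketch below (the []-qualifier spelling here is only illustrative, nothing is settled):

f[double](x, y)   # explicit specialization via the []-qualifier
f(x, y)           # plain call from Python; specialization resolved from the runtime types of x and y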

Fused types mostly make sense when used through PEP 3118 memory views (using the planned syntax for brevity):

def f(floating[:] x, floating y): ...

I'm thinking that in this kind of situation we let the array override how y is interpreted (coming from Python, y would always arrive as a double here, but if x is passed as a float32 buffer then use float32 for y as well and coerce y).

Does this make sense as a general rule -- if there's a conflict between array arguments and scalar arguments (the array's base type is narrower than the scalar type), the array argument wins? It makes sense because we can easily convert a scalar while we can't convert an array; and there's no "3.4f" notation in Python.
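As a sketch of the rule (assuming a NumPy array purely for the sake of the example):

import numpy as np
x = np.zeros(10, dtype=np.float32)
f(x, 3.4)   # 3.4 arrives as a Python float (i.e. double), but since x's
            # base type float32 is narrower, f specializes on float and
            # coerces y down to float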

This makes less sense:

def f(floating x): ...

as, when called from Python, it can only ever resolve to double; although I guess we should allow it for consistency with use cases that do make sense, such as "real_or_complex" and "int_or_float".
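For reference, one possible spelling of such fused types (just a sketch; the declaration syntax itself is still open):

ctypedef fused real_or_complex:
    double
    double complex

ctypedef fused int_or_float:
    long
    double

def g(real_or_complex x): ...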

The final and most difficult problem is what Python ints resolve to in this context. The widest integer type available in the fused type? Always Py_ssize_t? -1 on making the dispatch depend on the actual run-time value.
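To make the ambiguity concrete (sketch only; "integral" is a made-up fused type for illustration):

ctypedef fused integral:
    short
    int
    long

def h(integral x): ...

h(3)       # from Python: should 3 pick short, int, long, or Py_ssize_t?
h(2**40)   # doesn't fit in a 32-bit int; do we dispatch on the run-time value?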

Dag Sverre
