On Tue, May 3, 2011 at 12:59 AM, mark florisson
<markflorisso...@gmail.com> wrote:
> On 3 May 2011 00:21, Robert Bradshaw <rober...@math.washington.edu> wrote:
>> On Mon, May 2, 2011 at 1:56 PM, mark florisson
>> <markflorisso...@gmail.com> wrote:
>>> On 2 May 2011 18:24, Robert Bradshaw <rober...@math.washington.edu> wrote:
>>>> On Sun, May 1, 2011 at 2:38 AM, mark florisson
>>>> <markflorisso...@gmail.com> wrote:
>>>>> A remaining issue which I'm not quite certain about is the
>>>>> specialization through subscripts, e.g. func[double]. How should this
>>>>> work from Python space (assuming cpdef functions)? Would we want to
>>>>> pass in cython.double etc.? Because it would only work for builtin
>>>>> types, so what about types that aren't exposed to Python but can
>>>>> still be coerced to and from Python? Perhaps it would be better to
>>>>> pass in strings instead. I also think e.g. "int *" reads better than
>>>>> cython.pointer(cython.int).
>>>>
>>>> That's why we offer cython.p_int. On that note, we should support
>>>> cython.astype("int *") or something like that. Generally, I don't like
>>>> encoding semantic information in strings.
>>>>
>>>> OTOH, since it'll be a mapping of some sort, there's no reason we
>>>> can't support both. Most of the time it should dispatch (at runtime or
>>>> compile time) based on the type of the arguments.
>>>
>>> If we have an argument type that is composed of a fused type, would we
>>> want the indexing to specify the composed type or the fused type? e.g.
>>>
>>>     ctypedef floating *floating_p
>>
>> How should we support this? It's clear in this case, but only because
>> you chose good names. Another option would be to require
>> parameterization of floating_p, with floating_p[floating] the
>> "as-yet-unparameterized" version. Explicit but redundant. (The same
>> applies to structs and classes as well as typedefs.) On the other hand,
>> the above is very succinct and clear in context, so I'm leaning
>> towards it. Thoughts?
>
> Well, it is already supported. floating is fused, so any composition
> of floating is also fused.
>
>>>     cdef func(floating_p x):
>>>         ...
>>>
>>> Then do we want
>>>
>>>     func[double](10.0)
>>>
>>> or
>>>
>>>     func[double_p](10.0)
>>>
>>> to specialize func?
>>
>> The latter.
>
> I'm really leaning towards the former.
Ugh. I totally changed the meaning of that when I refactored my email.
I'm in agreement with you: func[double].

> What if you write
>
>     cdef func(floating_p x, floating_p *y):
>         ...
>
> Then specializing floating_p using double_p sounds slightly
> nonsensical, as you're also specializing floating_p *.
>
>>> FYI, the type checking works like 'double_p is
>>> floating_p' and not 'double is floating_p'. But for functions this is
>>> a little different. On the one hand specifying the full types
>>> (double_p) makes sense as you're kind of specifying a signature, but
>>> on the other hand you're specializing fused types and you don't care
>>> how they are composed -- especially if they occur multiple times with
>>> different composition. So I'm thinking we want 'func[double]'.
>>
>> That's what I'm thinking too. The type you're branching on is
>> floating, and within that block you can declare variables as
>> floating*, ndarray[dtype=floating], etc.
>
> What I actually meant there was "I think we want func[double] for the
> func(floating_p x) signature".
>
> Right, people can already say 'cdef func(floating *p): ...' and then
> use 'floating'. However, if you do 'cdef func(floating_p x): ...', then
> 'floating' is not available, only 'floating_p'. It would be rather
> trivial to also support 'floating' in the latter case, which I think
> we should,

floating could be implicitly available, or we could require making it
explicit.

> unless you are adamant about prohibiting regular typedefs
> of fused types.

No, I'm not adamant against it, just wanted to get some discussion going.

- Robert
_______________________________________________
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel
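
For readers following the thread, here is a minimal Cython sketch of the
constructs being debated above. It assumes the behaviour proposed in the
thread (a ctypedef composed from a fused type, with the underlying fused
type still usable inside the function body); the names floating,
floating_p and first_elem are made up for illustration, and the explicit
first_elem[double] spelling in the comment reflects the func[double]
preference expressed above rather than settled syntax.

    # sketch.pyx -- illustration only, assuming the behaviour discussed above
    ctypedef fused floating:
        float
        double

    ctypedef floating *floating_p    # composed typedef of a fused type

    cdef floating first_elem(floating_p p):
        # 'floating' (the underlying fused scalar) is used for a local here,
        # which is the "also support 'floating'" case mentioned above.
        cdef floating value = p[0]
        return value

    def run():
        cdef double x = 10.0
        cdef float y = 2.5
        # Ordinary calls dispatch on the argument type; explicit
        # specialization would be written first_elem[double](&x) rather
        # than first_elem[double_p](&x), per the preference above.
        print(first_elem(&x), first_elem(&y))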