[Cython] Dead code removal

2011-05-03 Thread Vitja Makarov
Hi!

I can move unreachable code removal into its own branch and then make
a pull request.

But it seems to me that dead code removal should be done in control
flow analysis, so there are a few options:

1. remove unreachable code in the control flow transformation
2. set an is_terminator flag in control flow analysis and then run the
removal transformation later

For instance, RemoveUnreachableCode doesn't handle this case:

try:
    raise Error
finally:
    pass
print 'unreachable'

This case could be easily handled if I set the is_terminator flag on
the whole TryFinallyStatNode, but I'm not sure that this is the only
unhandled case.
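The terminator-flag approach (option 2) can be sketched as a toy Python model; the node shapes and helper names below are invented for illustration and are not Cython's actual pass:

```python
# Toy model of option 2: mark statements that can never complete
# normally as "terminators", then prune everything after the first
# terminator in a statement list.

def is_terminator(stmt):
    # A raise/return never completes normally; a try/finally node is a
    # terminator if its try body always terminates (finally still runs,
    # then the exception propagates) or the finally clause terminates.
    kind = stmt[0]
    if kind in ("raise", "return"):
        return True
    if kind == "try_finally":
        body, finally_body = stmt[1], stmt[2]
        return (any(is_terminator(s) for s in body)
                or any(is_terminator(s) for s in finally_body))
    return False

def remove_unreachable(stmts):
    out = []
    for stmt in stmts:
        out.append(stmt)
        if is_terminator(stmt):
            break  # everything after this point is unreachable
    return out

# Mirrors the example above: try: raise ... finally: pass; print ...
block = [("try_finally", [("raise",)], [("pass",)]),
         ("print", "unreachable")]
assert remove_unreachable(block) == [
    ("try_finally", [("raise",)], [("pass",)])]
```

Because the whole TryFinallyStatNode carries the flag, the trailing print is dropped without inspecting the node's internals.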

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread mark florisson
On 3 May 2011 00:21, Robert Bradshaw  wrote:
> On Mon, May 2, 2011 at 1:56 PM, mark florisson
>  wrote:
>> On 2 May 2011 18:24, Robert Bradshaw  wrote:
>>> On Sun, May 1, 2011 at 2:38 AM, mark florisson
>>>  wrote:
 A remaining issue which I'm not quite certain about is the
 specialization through subscripts, e.g. func[double]. How should this
 work from Python space (assuming cpdef functions)? Would we want to
 pass in cython.double etc? Because it would only work for builtin
 types, so what about types that aren't exposed to Python but can still
 be coerced to and from Python? Perhaps it would be better to pass in
 strings instead. I also think e.g. "int *" reads better than
 cython.pointer(cython.int).
>>>
>>> That's why we offer cython.p_int. On that note, we should support
>>> cython.astype("int *") or something like that. Generally, I don't like
>>> encoding semantic information in strings.
>>>
>>> OTOH, since it'll be a mapping of some sort, there's no reason we
>>> can't support both. Most of the time it should dispatch (at runtime or
>>> compile time) based on the type of the arguments.
>>
>> If we have an argument type that is composed of a fused type, would we
>> want the indexing to specify the composed type or the fused type? e.g.
>>
>> ctypedef floating *floating_p
>
> How should we support this? It's clear in this case, but only because
> you chose good names. Another option would be to require
> parameterization floating_p, with floating_p[floating] the
> "as-yet-unparameterized" version. Explicit but redundant. (The same
>>> applies to structs and classes as well as typedefs.) On the other hand,
> the above is very succinct and clear in context, so I'm leaning
> towards it. Thoughts?

Well, it is already supported. floating is fused, so any composition
of floating is also fused.

>> cdef func(floating_p x):
>>    ...
>>
>> Then do we want
>>
>>    func[double](10.0)
>>
>> or
>>
>>    func[double_p](10.0)
>>
>> to specialize func?
>
> The latter.

I'm really leaning towards the former. What if you write

cdef func(floating_p x, floating_p *y):
    ...

Then specializing floating_p using double_p sounds slightly
nonsensical, as you're also specializing floating_p *.

>> FYI, the type checking works like 'double_p is
>> floating_p' and not 'double is floating_p'. But for functions this is
>> a little different. On the one hand specifying the full types
>> (double_p) makes sense as you're kind of specifying a signature, but
>> on the other hand you're specializing fused types and you don't care
>> how they are composed -- especially if they occur multiple times with
>> different composition. So I'm thinking we want 'func[double]'.
>
> That's what I'm thinking too. The type you're branching on is
> floating, and within that block you can declare variables as
> floating*, ndarray[dtype=floating], etc.

What I actually meant there was "I think we want func[double] for the
func(floating_p x) signature".

Right, people can already say 'cdef func(floating *p): ...' and then
use 'floating'. However, if you do 'cdef func(floating_p x): ...',
then 'floating' is not available, only 'floating_p'. It would be
rather trivial to also support 'floating' in the latter case, which I
think we should, unless you are adamant about prohibiting regular
typedefs of fused types.
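The subscript-specialization being debated can be pictured with a rough Python analogy; the class and mapping below are invented for illustration, not Cython's actual machinery:

```python
# Rough analogy of func[double]: a function declared with a fused
# argument type carries one pre-generated specialization per member of
# the fused base type, and indexing selects one of them.

class FusedFunction:
    def __init__(self, specializations):
        # mapping: fused base type name -> compiled specialization
        self.specializations = specializations

    def __getitem__(self, base_type):
        # func[double] indexes on the fused *base* type (the "former"
        # option above), not on a composed typedef such as double_p.
        return self.specializations[base_type]

func = FusedFunction({
    "float": lambda x: ("float version", x),
    "double": lambda x: ("double version", x),
})
assert func["double"](10.0) == ("double version", 10.0)
```

Under this reading, a signature using floating_p is still selected with func[double], since double is the only thing that actually varies.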

> - Robert


Re: [Cython] Fused Types

2011-05-03 Thread Dag Sverre Seljebotn

On 05/03/2011 09:59 AM, mark florisson wrote:

On 3 May 2011 00:21, Robert Bradshaw  wrote:

On Mon, May 2, 2011 at 1:56 PM, mark florisson
  wrote:

On 2 May 2011 18:24, Robert Bradshaw  wrote:

On Sun, May 1, 2011 at 2:38 AM, mark florisson
  wrote:

A remaining issue which I'm not quite certain about is the
specialization through subscripts, e.g. func[double]. How should this
work from Python space (assuming cpdef functions)? Would we want to
pass in cython.double etc? Because it would only work for builtin
types, so what about types that aren't exposed to Python but can still
be coerced to and from Python? Perhaps it would be better to pass in
strings instead. I also think e.g. "int *" reads better than
cython.pointer(cython.int).


That's why we offer cython.p_int. On that note, we should support
cython.astype("int *") or something like that. Generally, I don't like
encoding semantic information in strings.

OTOH, since it'll be a mapping of some sort, there's no reason we
can't support both. Most of the time it should dispatch (at runtime or
compile time) based on the type of the arguments.


If we have an argument type that is composed of a fused type, would we
want the indexing to specify the composed type or the fused type? e.g.

ctypedef floating *floating_p


How should we support this? It's clear in this case, but only because
you chose good names. Another option would be to require
parameterization floating_p, with floating_p[floating] the
"as-yet-unparameterized" version. Explicit but redundant. (The same
applies to structs and classes as well as typedefs.) On the other hand,
the above is very succinct and clear in context, so I'm leaning
towards it. Thoughts?


Well, it is already supported. floating is fused, so any composition
of floating is also fused.


cdef func(floating_p x):
...

Then do we want

func[double](10.0)

or

func[double_p](10.0)

to specialize func?


The latter.


I'm really leaning towards the former. What if you write

cdef func(floating_p x, floating_p *y):
 ...

Then specializing floating_p using double_p sounds slightly
nonsensical, as you're also specializing floating_p *.


I made myself agree with both of you in turn, but in the end I think I'm 
with Robert here.


Robert's approach sounds perhaps slightly simpler if you think of it 
this way:


ctypedef fused_type(float, double) floating
ctypedef floating* floating_p

is really a short-hand for

ctypedef fused_type(float*, double*) floating_p

I.e., when using a fused_type in a typedef you simply get a new 
fused_type. This sounds in a sense simpler without extra complexity 
getting in the way ("which was my fused base type again...").


Dag Sverre


Re: [Cython] Fused Types

2011-05-03 Thread mark florisson
On 3 May 2011 10:07, Dag Sverre Seljebotn  wrote:
> On 05/03/2011 09:59 AM, mark florisson wrote:
>>
>> On 3 May 2011 00:21, Robert Bradshaw  wrote:
>>>
>>> On Mon, May 2, 2011 at 1:56 PM, mark florisson
>>>   wrote:

 On 2 May 2011 18:24, Robert Bradshaw
  wrote:
>
> On Sun, May 1, 2011 at 2:38 AM, mark florisson
>   wrote:
>>
>> A remaining issue which I'm not quite certain about is the
>> specialization through subscripts, e.g. func[double]. How should this
>> work from Python space (assuming cpdef functions)? Would we want to
>> pass in cython.double etc? Because it would only work for builtin
>> types, so what about types that aren't exposed to Python but can still
>> be coerced to and from Python? Perhaps it would be better to pass in
>> strings instead. I also think e.g. "int *" reads better than
>> cython.pointer(cython.int).
>
> That's why we offer cython.p_int. On that note, we should support
> cython.astype("int *") or something like that. Generally, I don't like
> encoding semantic information in strings.
>
> OTOH, since it'll be a mapping of some sort, there's no reason we
> can't support both. Most of the time it should dispatch (at runtime or
> compile time) based on the type of the arguments.

 If we have an argument type that is composed of a fused type, would we
 want the indexing to specify the composed type or the fused type? e.g.

 ctypedef floating *floating_p
>>>
>>> How should we support this? It's clear in this case, but only because
>>> you chose good names. Another option would be to require
>>> parameterization floating_p, with floating_p[floating] the
>>> "as-yet-unparameterized" version. Explicit but redundant. (The same
>>> applies to structs and classes as well as typedefs.) On the other hand,
>>> the above is very succinct and clear in context, so I'm leaning
>>> towards it. Thoughts?
>>
>> Well, it is already supported. floating is fused, so any composition
>> of floating is also fused.
>>
 cdef func(floating_p x):
    ...

 Then do we want

    func[double](10.0)

 or

    func[double_p](10.0)

 to specialize func?
>>>
>>> The latter.
>>
>> I'm really leaning towards the former. What if you write
>>
>> cdef func(floating_p x, floating_p *y):
>>     ...
>>
>> Then specializing floating_p using double_p sounds slightly
>> nonsensical, as you're also specializing floating_p *.
>
> I made myself agree with both of you in turn, but in the end I think I'm
> with Robert here.
>
> Robert's approach sounds perhaps slightly simpler if you think of it this
> way:
>
> ctypedef fused_type(float, double) floating
> ctypedef floating* floating_p
>
> is really a short-hand for
>
> ctypedef fused_type(float*, double*) floating_p
>
> I.e., when using a fused_type in a typedef you simply get a new fused_type.
> This sounds in a sense simpler without extra complexity getting in the way
> ("which was my fused base type again...").
>
> Dag Sverre

Ok, if those typedefs should be disallowed, then specialization
through indexing should definitely get the types listed in the
fused_type typedef.


Re: [Cython] Fused Types

2011-05-03 Thread Dag Sverre Seljebotn

On 05/03/2011 10:42 AM, mark florisson wrote:

On 3 May 2011 10:07, Dag Sverre Seljebotn  wrote:

On 05/03/2011 09:59 AM, mark florisson wrote:


On 3 May 2011 00:21, Robert Bradshawwrote:


On Mon, May 2, 2011 at 1:56 PM, mark florisson
wrote:


On 2 May 2011 18:24, Robert Bradshaw
  wrote:


On Sun, May 1, 2011 at 2:38 AM, mark florisson
wrote:


A remaining issue which I'm not quite certain about is the
specialization through subscripts, e.g. func[double]. How should this
work from Python space (assuming cpdef functions)? Would we want to
pass in cython.double etc? Because it would only work for builtin
types, so what about types that aren't exposed to Python but can still
be coerced to and from Python? Perhaps it would be better to pass in
strings instead. I also think e.g. "int *" reads better than
cython.pointer(cython.int).


That's why we offer cython.p_int. On that note, we should support
cython.astype("int *") or something like that. Generally, I don't like
encoding semantic information in strings.

OTOH, since it'll be a mapping of some sort, there's no reason we
can't support both. Most of the time it should dispatch (at runtime or
compile time) based on the type of the arguments.


If we have an argument type that is composed of a fused type, would we
want the indexing to specify the composed type or the fused type? e.g.

ctypedef floating *floating_p


How should we support this? It's clear in this case, but only because
you chose good names. Another option would be to require
parameterization floating_p, with floating_p[floating] the
"as-yet-unparameterized" version. Explicit but redundant. (The same
applies to structs and classes as well as typedefs.) On the other hand,
the above is very succinct and clear in context, so I'm leaning
towards it. Thoughts?


Well, it is already supported. floating is fused, so any composition
of floating is also fused.


cdef func(floating_p x):
...

Then do we want

func[double](10.0)

or

func[double_p](10.0)

to specialize func?


The latter.


I'm really leaning towards the former. What if you write

cdef func(floating_p x, floating_p *y):
 ...

Then specializing floating_p using double_p sounds slightly
nonsensical, as you're also specializing floating_p *.


I made myself agree with both of you in turn, but in the end I think I'm
with Robert here.

Robert's approach sounds perhaps slightly simpler if you think of it this
way:

ctypedef fused_type(float, double) floating
ctypedef floating* floating_p

is really a short-hand for

ctypedef fused_type(float*, double*) floating_p

I.e., when using a fused_type in a typedef you simply get a new fused_type.
This sounds in a sense simpler without extra complexity getting in the way
("which was my fused base type again...").

Dag Sverre



Ok, if those typedefs should be disallowed then specialization through
indexing should then definitely get the types listed in the fused_type
typedef.


I'm not sure what you mean here. What is disallowed exactly?

DS


Re: [Cython] Fused Types

2011-05-03 Thread mark florisson
On 3 May 2011 10:44, Dag Sverre Seljebotn  wrote:
> On 05/03/2011 10:42 AM, mark florisson wrote:
>>
>> On 3 May 2011 10:07, Dag Sverre Seljebotn
>>  wrote:
>>>
>>> On 05/03/2011 09:59 AM, mark florisson wrote:

 On 3 May 2011 00:21, Robert Bradshaw
  wrote:
>
> On Mon, May 2, 2011 at 1:56 PM, mark florisson
>     wrote:
>>
>> On 2 May 2011 18:24, Robert Bradshaw
>>  wrote:
>>>
>>> On Sun, May 1, 2011 at 2:38 AM, mark florisson
>>>     wrote:

 A remaining issue which I'm not quite certain about is the
 specialization through subscripts, e.g. func[double]. How should
 this
 work from Python space (assuming cpdef functions)? Would we want to
 pass in cython.double etc? Because it would only work for builtin
 types, so what about types that aren't exposed to Python but can
 still
 be coerced to and from Python? Perhaps it would be better to pass in
 strings instead. I also think e.g. "int *" reads better than
 cython.pointer(cython.int).
>>>
>>> That's why we offer cython.p_int. On that note, we should support
>>> cython.astype("int *") or something like that. Generally, I don't
>>> like
>>> encoding semantic information in strings.
>>>
>>> OTOH, since it'll be a mapping of some sort, there's no reason we
>>> can't support both. Most of the time it should dispatch (at runtime
>>> or
>>> compile time) based on the type of the arguments.
>>
>> If we have an argument type that is composed of a fused type, would we
>> want the indexing to specify the composed type or the fused type? e.g.
>>
>> ctypedef floating *floating_p
>
> How should we support this? It's clear in this case, but only because
> you chose good names. Another option would be to require
> parameterization floating_p, with floating_p[floating] the
> "as-yet-unparameterized" version. Explicit but redundant. (The same
> applies to structs and classes as well as typedefs.) On the other hand,
> the above is very succinct and clear in context, so I'm leaning
> towards it. Thoughts?

 Well, it is already supported. floating is fused, so any composition
 of floating is also fused.

>> cdef func(floating_p x):
>>    ...
>>
>> Then do we want
>>
>>    func[double](10.0)
>>
>> or
>>
>>    func[double_p](10.0)
>>
>> to specialize func?
>
> The latter.

 I'm really leaning towards the former. What if you write

 cdef func(floating_p x, floating_p *y):
     ...

 Then specializing floating_p using double_p sounds slightly
 nonsensical, as you're also specializing floating_p *.
>>>
>>> I made myself agree with both of you in turn, but in the end I think I'm
>>> with Robert here.
>>>
>>> Robert's approach sounds perhaps slightly simpler if you think of it this
>>> way:
>>>
>>> ctypedef fused_type(float, double) floating
>>> ctypedef floating* floating_p
>>>
>>> is really a short-hand for
>>>
>>> ctypedef fused_type(float*, double*) floating_p
>>>
>>> I.e., when using a fused_type in a typedef you simply get a new
>>> fused_type.
>>> This sounds in a sense simpler without extra complexity getting in the
>>> way
>>> ("which was my fused base type again...").
>>>
>>> Dag Sverre
>>
>> Ok, if those typedefs should be disallowed then specialization through
>> indexing should then definitely get the types listed in the fused_type
>> typedef.
>
> I'm not sure what you mean here. What is disallowed exactly?

ctypedef cython.fused_type(float, double) floating
ctypedef floating *floating_p

That is what you meant, right? Because prohibiting that makes it easier
to see where a type is variable (as the entire type always is, and not
some base type of it).

> DS


Re: [Cython] Fused Types

2011-05-03 Thread Dag Sverre Seljebotn

On 05/03/2011 10:49 AM, mark florisson wrote:

On 3 May 2011 10:44, Dag Sverre Seljebotn  wrote:

On 05/03/2011 10:42 AM, mark florisson wrote:


On 3 May 2011 10:07, Dag Sverre Seljebotn
  wrote:


On 05/03/2011 09:59 AM, mark florisson wrote:


On 3 May 2011 00:21, Robert Bradshaw
  wrote:


On Mon, May 2, 2011 at 1:56 PM, mark florisson
  wrote:


On 2 May 2011 18:24, Robert Bradshaw
  wrote:


On Sun, May 1, 2011 at 2:38 AM, mark florisson
  wrote:


A remaining issue which I'm not quite certain about is the
specialization through subscripts, e.g. func[double]. How should
this
work from Python space (assuming cpdef functions)? Would we want to
pass in cython.double etc? Because it would only work for builtin
types, so what about types that aren't exposed to Python but can
still
be coerced to and from Python? Perhaps it would be better to pass in
strings instead. I also think e.g. "int *" reads better than
cython.pointer(cython.int).


That's why we offer cython.p_int. On that note, we should support
cython.astype("int *") or something like that. Generally, I don't
like
encoding semantic information in strings.

OTOH, since it'll be a mapping of some sort, there's no reason we
can't support both. Most of the time it should dispatch (at runtime
or
compile time) based on the type of the arguments.


If we have an argument type that is composed of a fused type, would we
want the indexing to specify the composed type or the fused type? e.g.

ctypedef floating *floating_p


How should we support this? It's clear in this case, but only because
you chose good names. Another option would be to require
parameterization floating_p, with floating_p[floating] the
"as-yet-unparameterized" version. Explicit but redundant. (The same
applies to structs and classes as well as typedefs.) On the other hand,
the above is very succinct and clear in context, so I'm leaning
towards it. Thoughts?


Well, it is already supported. floating is fused, so any composition
of floating is also fused.


cdef func(floating_p x):
...

Then do we want

func[double](10.0)

or

func[double_p](10.0)

to specialize func?


The latter.


I'm really leaning towards the former. What if you write

cdef func(floating_p x, floating_p *y):
 ...

Then specializing floating_p using double_p sounds slightly
nonsensical, as you're also specializing floating_p *.


I made myself agree with both of you in turn, but in the end I think I'm
with Robert here.

Robert's approach sounds perhaps slightly simpler if you think of it this
way:

ctypedef fused_type(float, double) floating
ctypedef floating* floating_p

is really a short-hand for

ctypedef fused_type(float*, double*) floating_p

I.e., when using a fused_type in a typedef you simply get a new
fused_type.
This sounds in a sense simpler without extra complexity getting in the
way
("which was my fused base type again...").

Dag Sverre



Ok, if those typedefs should be disallowed then specialization through
indexing should then definitely get the types listed in the fused_type
typedef.


I'm not sure what you mean here. What is disallowed exactly?


ctypedef cython.fused_type(float, double) floating
ctypedef floating *floating_p

That is what you meant right? Because prohibiting that makes it easier
to see where a type is variable (as the entire type always is, and not
some base type of it).



No. I meant that the above is automatically transformed into

ctypedef cython.fused_type(float, double) floating
ctypedef cython.fused_type(float*, double*) floating_p
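That transformation rule can be sketched in a few lines of Python; the list-of-strings representation is invented for illustration and is not Cython's internal type model:

```python
# Sketch of the rule above: a typedef that applies a type constructor
# (here, pointer) to a fused type yields a *new* fused type with the
# constructor applied to every member.

def pointer_typedef(fused_members):
    # ctypedef floating *floating_p  ->  fused_type(float*, double*)
    return [member + "*" for member in fused_members]

floating = ["float", "double"]    # cython.fused_type(float, double)
floating_p = pointer_typedef(floating)
assert floating_p == ["float*", "double*"]
```

The key point is that floating_p is its own fused type afterwards; it no longer remembers floating as a base.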


DS


Re: [Cython] Fused Types

2011-05-03 Thread mark florisson
On 3 May 2011 10:57, Dag Sverre Seljebotn  wrote:
> On 05/03/2011 10:49 AM, mark florisson wrote:
>>
>> On 3 May 2011 10:44, Dag Sverre Seljebotn
>>  wrote:
>>>
>>> On 05/03/2011 10:42 AM, mark florisson wrote:

 On 3 May 2011 10:07, Dag Sverre Seljebotn
  wrote:
>
> On 05/03/2011 09:59 AM, mark florisson wrote:
>>
>> On 3 May 2011 00:21, Robert Bradshaw
>>  wrote:
>>>
>>> On Mon, May 2, 2011 at 1:56 PM, mark florisson
>>>       wrote:

 On 2 May 2011 18:24, Robert Bradshaw
  wrote:
>
> On Sun, May 1, 2011 at 2:38 AM, mark florisson
>       wrote:
>>
>> A remaining issue which I'm not quite certain about is the
>> specialization through subscripts, e.g. func[double]. How should
>> this
>> work from Python space (assuming cpdef functions)? Would we want
>> to
>> pass in cython.double etc? Because it would only work for builtin
>> types, so what about types that aren't exposed to Python but can
>> still
>> be coerced to and from Python? Perhaps it would be better to pass
>> in
>> strings instead. I also think e.g. "int *" reads better than
>> cython.pointer(cython.int).
>
> That's why we offer cython.p_int. On that note, we should support
> cython.astype("int *") or something like that. Generally, I don't
> like
> encoding semantic information in strings.
>
> OTOH, since it'll be a mapping of some sort, there's no reason we
> can't support both. Most of the time it should dispatch (at runtime
> or
> compile time) based on the type of the arguments.

 If we have an argument type that is composed of a fused type, would
 we
 want the indexing to specify the composed type or the fused type?
 e.g.

 ctypedef floating *floating_p
>>>
>>> How should we support this? It's clear in this case, but only because
>>> you chose good names. Another option would be to require
>>> parameterization floating_p, with floating_p[floating] the
>>> "as-yet-unparameterized" version. Explicit but redundant. (The same
>>> applies to structs and classes as well as typedefs.) On the other hand,
>>> the above is very succinct and clear in context, so I'm leaning
>>> towards it. Thoughts?
>>
>> Well, it is already supported. floating is fused, so any composition
>> of floating is also fused.
>>
 cdef func(floating_p x):
    ...

 Then do we want

    func[double](10.0)

 or

    func[double_p](10.0)

 to specialize func?
>>>
>>> The latter.
>>
>> I'm really leaning towards the former. What if you write
>>
>> cdef func(floating_p x, floating_p *y):
>>     ...
>>
>> Then specializing floating_p using double_p sounds slightly
>> nonsensical, as you're also specializing floating_p *.
>
> I made myself agree with both of you in turn, but in the end I think
> I'm
> with Robert here.
>
> Robert's approach sounds perhaps slightly simpler if you think of it
> this
> way:
>
> ctypedef fused_type(float, double) floating
> ctypedef floating* floating_p
>
> is really a short-hand for
>
> ctypedef fused_type(float*, double*) floating_p
>
> I.e., when using a fused_type in a typedef you simply get a new
> fused_type.
> This sounds in a sense simpler without extra complexity getting in the
> way
> ("which was my fused base type again...").
>
> Dag Sverre

 Ok, if those typedefs should be disallowed then specialization through
 indexing should then definitely get the types listed in the fused_type
 typedef.
>>>
>>> I'm not sure what you mean here. What is disallowed exactly?
>>
>> ctypedef cython.fused_type(float, double) floating
>> ctypedef floating *floating_p
>>
>> That is what you meant right? Because prohibiting that makes it easier
>> to see where a type is variable (as the entire type always is, and not
>> some base type of it).
>
>
> No. I meant that the above is automatically transformed into
>
> ctypedef cython.fused_type(float, double) floating
> ctypedef cython.fused_type(float*, double*) floating_p

I see, so you want to allow it but you consider the entire thing
variable. Ok sure, in a way it makes as much sense as the other. So
then 'floating *' would get specialized using 'double', and
'floating_p' would get specialized using 'double_p'.

>
> DS

Re: [Cython] Fused Types

2011-05-03 Thread Greg Ewing

I'm a bit confused about how fused types combine to
create further fused types. If you have something
like

  ctypedef struct Vector:
      floating x
      floating y
      floating z

then is it going to generate code for all possible
combinations of types for x, y and z?

That's probably not what the user intended -- it's
more likely that he wants *two* versions of type
Vector, one with all floats and one with all doubles.
But without explicit type parameters, there's no way
to express that.

I worry that you're introducing a parameterised type
system in a half-baked way here without properly
thinking through all the implications.

--
Greg



Re: [Cython] Fused Types

2011-05-03 Thread mark florisson
On 3 May 2011 07:47, Greg Ewing  wrote:
> I'm a bit confused about how fused types combine to
> create further fused types. If you have something
> like
>
>  ctypedef struct Vector:
>    floating x
>    floating y
>    floating z
>
> then is it going to generate code for all possible
> combinations of types for x, y and z?
>
> That's probably not what the user intended -- it's
> more likely that he wants *two* versions of type
> Vector, one with all floats and one with all doubles.
> But without explicit type parameters, there's no way
> to express that.
>
> I worry that you're introducing a parameterised type
> system in a half-baked way here without properly
> thinking through all the implications.
>
> --
> Greg
>

Currently you cannot use them at all in structs, only as cdef function
argument types. Then if you have

cdef func(floating x, floating y):
    ...

you get a "float, float" version, and a "double, double" version, but
not "float, double" or "double, float". When and if support for
structs is there, it will work the same way as with functions.
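The pairing rule described here can be sketched in Python; the enumeration below is an illustration of the described behavior, not Cython's implementation:

```python
from itertools import product

# One fused type used for several arguments specializes them together,
# not independently: all uses of a given fused type share one choice.

def specializations(arg_fused_types, members=("float", "double")):
    distinct = sorted(set(arg_fused_types))
    result = []
    for choice in product(members, repeat=len(distinct)):
        binding = dict(zip(distinct, choice))
        result.append(tuple(binding[t] for t in arg_fused_types))
    return result

# cdef func(floating x, floating y): both args use the same fused type,
# so only the matched pairs are generated.
assert specializations(["floating", "floating"]) == [
    ("float", "float"), ("double", "double")]
```

This is why Greg's Vector struct, if it were supported the same way, would get two versions (all float, all double) rather than the full cross product.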


Re: [Cython] Fused Types

2011-05-03 Thread mark florisson
On 3 May 2011 15:17, mark florisson  wrote:
> On 3 May 2011 07:47, Greg Ewing  wrote:
>> I'm a bit confused about how fused types combine to
>> create further fused types. If you have something
>> like
>>
>>  ctypedef struct Vector:
>>    floating x
>>    floating y
>>    floating z
>>
>> then is it going to generate code for all possible
>> combinations of types for x, y and z?
>>
>> That's probably not what the user intended -- it's
>> more likely that he wants *two* versions of type
>> Vector, one with all floats and one with all doubles.
>> But without explicit type parameters, there's no way
>> to express that.
>>
>> I worry that you're introducing a parameterised type
>> system in a half-baked way here without properly
>> thinking through all the implications.
>>
>> --
>> Greg
>>
>
> Currently you cannot use them at all in structs, only as cdef function
> argument types. Then if you have
>
> cdef func(floating x, floating y):
>    ...
>
> you get a "float, float" version, and a "double, double" version, but
> not "float, double" or "double, float". When and if support for
> structs is there, it will work the same way as with functions.
>

However, the difference here is a ctypedef:

ctypedef floating *floating_p

Now a parameter typed floating_p may get a specialized base type that
differs from the specialization chosen for a floating parameter.


Re: [Cython] Fused Types

2011-05-03 Thread Stefan Behnel

mark florisson, 03.05.2011 15:17:

if you have

cdef func(floating x, floating y):
 ...

you get a "float, float" version, and a "double, double" version, but
not "float, double" or "double, float".


So, what would you have to do in order to get a "float, double" and 
"double, float" version then? Could you get that with


ctypedef fused_type(double, float) floating_df
ctypedef fused_type(float, double) floating_fd

cdef func(floating_df x, floating_fd y):

?

I assume there's no special casing for floating point types in the 
compiler, is there?


Stefan


Re: [Cython] Fused Types

2011-05-03 Thread Dag Sverre Seljebotn

On 05/03/2011 03:51 PM, Stefan Behnel wrote:

mark florisson, 03.05.2011 15:17:

if you have

cdef func(floating x, floating y):
...

you get a "float, float" version, and a "double, double" version, but
not "float, double" or "double, float".


So, what would you have to do in order to get a "float, double" and
"double, float" version then? Could you get that with

ctypedef fused_type(double, float) floating_df
ctypedef fused_type(float, double) floating_fd

cdef func(floating_df x, floating_fd y):

?


Well, if you do something like

ctypedef fused_type(float, double) speed_t
ctypedef fused_type(float, double) acceleration_t

cdef func(speed_t x, acceleration_t y)

then you get 4 specializations. Each new typedef gives a new polymorphic 
type.


OTOH, with

ctypedef speed_t acceleration_t

I guess only 2 specializations.

Treating the typedefs in this way is slightly fishy of course. It may 
hint that "ctypedef" is the wrong way to declare a fused type *shrug*.


To only get the "cross-versions" you'd need something like what you 
wrote + Pauli's "paired"-suggestion.
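The counting rule above reduces to a one-liner; this is a sketch of the described semantics (hypothetical helper, member set assumed to be float/double), not compiler code:

```python
# Each *distinct* fused typedef is an independent choice; an alias
# (ctypedef speed_t acceleration_t) shares its choice, so it does not
# multiply the number of generated specializations.

def count_specializations(arg_fused_types, members=("float", "double")):
    distinct = set(arg_fused_types)
    return len(members) ** len(distinct)

# ctypedef fused_type(float, double) speed_t
# ctypedef fused_type(float, double) acceleration_t  -> independent: 4
assert count_specializations(["speed_t", "acceleration_t"]) == 4

# ctypedef speed_t acceleration_t -> same fused type everywhere: 2
assert count_specializations(["speed_t", "speed_t"]) == 2
```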



Dag Sverre


Re: [Cython] Fused Types

2011-05-03 Thread Sturla Molden

On 03.05.2011 16:06, Dag Sverre Seljebotn wrote:


Well, if you do something like

ctypedef fused_type(float, double) speed_t
ctypedef fused_type(float, double) acceleration_t

cdef func(speed_t x, acceleration_t y)

then you get 4 specializations. Each new typedef gives a new 
polymorphic type.


OTOH, with

ctypedef speed_t acceleration_t

I guess only 2 specializations.

Treating the typedefs in this way is slightly fishy of course. It may 
hint that "ctypedef" is the wrong way to declare a fused type *shrug*.


To only get the "cross-versions" you'd need something like what you 
wrote + Pauli's "paired"-suggestion.




This is a bloatware generator.

It might not be used right, or it might generate bloat due to small 
mistakes (which will go unnoticed/silent).



Sturla



___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread Robert Bradshaw
On Tue, May 3, 2011 at 12:59 AM, mark florisson
 wrote:
> On 3 May 2011 00:21, Robert Bradshaw  wrote:
>> On Mon, May 2, 2011 at 1:56 PM, mark florisson
>>  wrote:
>>> On 2 May 2011 18:24, Robert Bradshaw  wrote:
 On Sun, May 1, 2011 at 2:38 AM, mark florisson
  wrote:
> A remaining issue which I'm not quite certain about is the
> specialization through subscripts, e.g. func[double]. How should this
> work from Python space (assuming cpdef functions)? Would we want to
> pass in cython.double etc? Because it would only work for builtin
> types, so what about types that aren't exposed to Python but can still
> be coerced to and from Python? Perhaps it would be better to pass in
> strings instead. I also think e.g. "int *" reads better than
> cython.pointer(cython.int).

 That's why we offer cython.p_int. On that note, we should support
 cython.astype("int *") or something like that. Generally, I don't like
 encoding semantic information in strings.

 OTOH, since it'll be a mapping of some sort, there's no reason we
 can't support both. Most of the time it should dispatch (at runtime or
 compile time) based on the type of the arguments.
>>>
>>> If we have an argument type that is composed of a fused type, would we
>>> want the indexing to specify the composed type or the fused type? e.g.
>>>
>>> ctypedef floating *floating_p
>>
>> How should we support this? It's clear in this case, but only because
>> you chose good names. Another option would be to require
>> parameterization floating_p, with floating_p[floating] the
>> "as-yet-unparameterized" version. Explicit but redundant. (The same
>> applies to struct as classes as well as typedefs.) On the other had,
>> the above is very succinct and clear in context, so I'm leaning
>> towards it. Thoughts?
>
> Well, it is already supported. floating is fused, so any composition
> of floating is also fused.
>
>>> cdef func(floating_p x):
>>>    ...
>>>
>>> Then do we want
>>>
>>>    func[double](10.0)
>>>
>>> or
>>>
>>>    func[double_p](10.0)
>>>
>>> to specialize func?
>>
>> The latter.
>
> I'm really leaning towards the former.

Ugh. I totally changed the meaning of that when I refactored my email.
I'm in agreement with you: func[double].

> What if you write
>
> cdef func(floating_p x, floating_p *y):
>    ...
>
> Then specializing floating_p using double_p sounds slightly
> nonsensical, as you're also specializing floating_p *.
>
>>> FYI, the type checking works like 'double_p is
>>> floating_p' and not 'double is floating_p'. But for functions this is
>>> a little different. On the one hand specifying the full types
>>> (double_p) makes sense as you're kind of specifying a signature, but
>>> on the other hand you're specializing fused types and you don't care
>>> how they are composed -- especially if they occur multiple times with
>>> different composition. So I'm thinking we want 'func[double]'.
>>
>> That's what I'm thinking too. The type you're branching on is
>> floating, and within that block you can declare variables as
>> floating*, ndarray[dtype=floating], etc.
>
> What I actually meant there was "I think we want func[double] for the
> func(floating_p x) signature".
>
> Right, people can already say 'cdef func(floating *p): ...' and then
> use 'floating'. However, if you do 'cdef floating_p x): ...', then
> 'floating' is not available, only 'floating_p'. It would be rather
> trivial to also support 'floating' in the latter case, which I think
> we should,

floating is implicitly available, we could require making it explicit.

> unless you are adamant about prohibiting regular typedefs
> of fused types.

No, I'm not adamant against it, just wanted to get some discussion going.

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread Robert Bradshaw
On Tue, May 3, 2011 at 7:06 AM, Dag Sverre Seljebotn
 wrote:
> On 05/03/2011 03:51 PM, Stefan Behnel wrote:
>>
>> mark florisson, 03.05.2011 15:17:
>>>
>>> if you have
>>>
>>> cdef func(floating x, floating y):
>>> ...
>>>
>>> you get a "float, float" version, and a "double, double" version, but
>>> not "float, double" or "double, float".
>>
>> So, what would you have to do in order to get a "float, double" and
>> "double, float" version then? Could you get that with
>>
>> ctypedef fused_type(double, float) floating_df
>> ctypedef fused_type(float, double) floating_fd
>>
>> cdef func(floating_df x, floating_fd y):
>>
>> ?
>
> Well, if you do something like
>
> ctypedef fused_type(float, double) speed_t
> ctypedef fused_type(float, double) acceleration_t
>
> cdef func(speed_t x, acceleration_t y)
>
> then you get 4 specializations. Each new typedef gives a new polymorphic
> type.

Yep.

> OTOH, with
>
> ctypedef speed_t acceleration_t
>
> I guess only 2 specializations.
>
> Treating the typedefs in this way is slightly fishy of course. It may hint
> that "ctypedef" is the wrong way to declare a fused type *shrug*.

True, but if we start supporting nested things, I think this still
makes the most sense. E.g.

ctypedef floating speed_t
ctypedef floating acceleration_t

struct delta:
speed_t v
acceleration_t a

would still be exactly two versions (or three, if we throw long-double
in there), rather than having unintended combinatorial explosion. The
as-yet-unspecialized version of delta would be delta[floating] or,
possibly, just delta. One could explicitly ask for delta[double] or
delta[float].

In terms of floating_p, note that "floating is double" and "floating_p
is double*" could both make perfect sense for that particular
specialization.

In terms of compiler support, as long as

ctypedef double my_double

produced something "in floating" then I think it could all be done in
a header file.

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread mark florisson
On 3 May 2011 16:36, Sturla Molden  wrote:
> Den 03.05.2011 16:06, skrev Dag Sverre Seljebotn:
>>
>> Well, if you do something like
>>
>> ctypedef fused_type(float, double) speed_t
>> ctypedef fused_type(float, double) acceleration_t
>>
>> cdef func(speed_t x, acceleration_t y)
>>
>> then you get 4 specializations. Each new typedef gives a new polymorphic
>> type.
>>
>> OTOH, with
>>
>> ctypedef speed_t acceleration_t
>>
>> I guess only 2 specializations.
>>
>> Treating the typedefs in this way is slightly fishy of course. It may hint
>> that "ctypedef" is the wrong way to declare a fused type *shrug*.
>>
>> To only get the "cross-versions" you'd need something like what you wrote
>> + Pauli's "paired"-suggestion.
>>
>
> This is a bloatware generator.
>
> It might not be used right, or it might generate bloat due to small mistakes
> (which will go unnoticed/silent).

Except that we could decide to not generate code for unused
specializations. In any case, the C compiler will know it, and issue
warnings if we don't, as the functions are static.

>
> Sturla
>
>
>
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
>
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread mark florisson
On 3 May 2011 18:00, Robert Bradshaw  wrote:
> On Tue, May 3, 2011 at 12:59 AM, mark florisson
>  wrote:
>> On 3 May 2011 00:21, Robert Bradshaw  wrote:
>>> On Mon, May 2, 2011 at 1:56 PM, mark florisson
>>>  wrote:
 On 2 May 2011 18:24, Robert Bradshaw  wrote:
> On Sun, May 1, 2011 at 2:38 AM, mark florisson
>  wrote:
>> A remaining issue which I'm not quite certain about is the
>> specialization through subscripts, e.g. func[double]. How should this
>> work from Python space (assuming cpdef functions)? Would we want to
>> pass in cython.double etc? Because it would only work for builtin
>> types, so what about types that aren't exposed to Python but can still
>> be coerced to and from Python? Perhaps it would be better to pass in
>> strings instead. I also think e.g. "int *" reads better than
>> cython.pointer(cython.int).
>
> That's why we offer cython.p_int. On that note, we should support
> cython.astype("int *") or something like that. Generally, I don't like
> encoding semantic information in strings.
>
> OTOH, since it'll be a mapping of some sort, there's no reason we
> can't support both. Most of the time it should dispatch (at runtime or
> compile time) based on the type of the arguments.

 If we have an argument type that is composed of a fused type, would we
 want the indexing to specify the composed type or the fused type? e.g.

 ctypedef floating *floating_p
>>>
>>> How should we support this? It's clear in this case, but only because
>>> you chose good names. Another option would be to require
>>> parameterization floating_p, with floating_p[floating] the
>>> "as-yet-unparameterized" version. Explicit but redundant. (The same
>>> applies to struct as classes as well as typedefs.) On the other had,
>>> the above is very succinct and clear in context, so I'm leaning
>>> towards it. Thoughts?
>>
>> Well, it is already supported. floating is fused, so any composition
>> of floating is also fused.
>>
 cdef func(floating_p x):
    ...

 Then do we want

    func[double](10.0)

 or

    func[double_p](10.0)

 to specialize func?
>>>
>>> The latter.
>>
>> I'm really leaning towards the former.
>
> Ugh. I totally changed the meaning of that when I refactored my email.
> I'm in agreement with you: func[double].

I see, however Dag just agreed on double_p :) So it depends, as Dag
said, we can view

ctypedef floating *floating_p

as a fused type with variable part double * and float *. But you can
also view the variable part as double and float. Either way makes
sense, but the former allows you to differentiate floating from
floating_p.

So I suppose that if we want func[double] to specialize 'cdef
func(floating_p x, floating y)', then it would specialize both
floating_p and floating. However, if we settle on Dag's proposal, we
can differentiate 'floating' from 'floating_p' and we could make
'speed_t' and 'acceleration_t' a ctypedef of floating.

So I guess Dag's proposal makes sense, because if you want a single
specialization, you'd write 'cdef func(floating *x, floating y)'. So
overall you get more flexibility.

>> What if you write
>>
>> cdef func(floating_p x, floating_p *y):
>>    ...
>>
>> Then specializing floating_p using double_p sounds slightly
>> nonsensical, as you're also specializing floating_p *.
>>
 FYI, the type checking works like 'double_p is
 floating_p' and not 'double is floating_p'. But for functions this is
 a little different. On the one hand specifying the full types
 (double_p) makes sense as you're kind of specifying a signature, but
 on the other hand you're specializing fused types and you don't care
 how they are composed -- especially if they occur multiple times with
 different composition. So I'm thinking we want 'func[double]'.
>>>
>>> That's what I'm thinking too. The type you're branching on is
>>> floating, and within that block you can declare variables as
>>> floating*, ndarray[dtype=floating], etc.
>>
>> What I actually meant there was "I think we want func[double] for the
>> func(floating_p x) signature".
>>
>> Right, people can already say 'cdef func(floating *p): ...' and then
>> use 'floating'. However, if you do 'cdef floating_p x): ...', then
>> 'floating' is not available, only 'floating_p'. It would be rather
>> trivial to also support 'floating' in the latter case, which I think
>> we should,
>
> floating is implicitly available, we could require making it explicit.

How would we make it explicit?

>> unless you are adamant about prohibiting regular typedefs
>> of fused types.
>
> No, I'm not adamant against it, just wanted to get some discussion going.
>
> - Robert
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
>
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel

Re: [Cython] Fused Types

2011-05-03 Thread Dag Sverre Seljebotn
I was wrong. We need

cdef f(floating x, floating_p y)

...to get 2 specializations, not 4. And the rest follows from there. So I'm 
with Robert's real stance :-)

I don't think we want flexibility, we want simplicity above all. You can always 
use a templating language.

Btw we shouldn't count on pruning for the design of this, I think this will for 
a large part be used with def functions. And if you use a cdef function from 
another module through a pxd, you also need all versions.

DS
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Re: [Cython] Fused Types

2011-05-03 Thread Robert Bradshaw
On Tue, May 3, 2011 at 10:06 AM, mark florisson
 wrote:
> On 3 May 2011 18:00, Robert Bradshaw  wrote:
>> floating is implicitly available, we could require making it explicit.
>
> How would we make it explicit?

Require the parameterization, i.e.

floating_p[floating]

would be the as-yet-unspecified type. In this particular example, it
does seem unnecessarily verbose. Without it, one would have to know
that

cdef object foo(Vector v):
...

may depend on floating if Vector does.


On Tue, May 3, 2011 at 10:52 AM, Dag Sverre Seljebotn
 wrote:
> I was wrong. We need
>
> cdef f(floating x, floating_p y)
>
> ...to get 2 specializations, not 4. And the rest follows from there. So I'm
> with Robert's real stance :-)
>
> I don't think we want flexibility, we want simplicity over all. You can
> always use a templating language.

+1

> Btw we shouldn't count on pruning for the design of this, I think this will
> for a large part be used with def functions. And if you use a cdef function
> from another module through a pxd, you also need all versions.

Well, we'll want to avoid compiler warnings. E.g. floating might
include long double, but only float and double may be used. In pxd and
def functions, however, we will make all versions available.

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread Dag Sverre Seljebotn

On 05/03/2011 08:19 PM, Robert Bradshaw wrote:


Btw we shouldn't count on pruning for the design of this, I think this will
for a large part be used with def functions. And if you use a cdef function
from another module through a pxd, you also need all versions.


Well, we'll want to avoid compiler warnings. E.g. floating might
include long double, but only float and double may be used. In pxd and
def functions, however, we will make all versions available.


Which is a reminder to hash out exactly how the dispatch will be 
resolved when coming from Python space (we do want to support "f(x, y)", 
without []-qualifier, when calling from Python, right?)


Fused types mostly make sense when used through PEP 3118 memory views 
(using the planned syntax for brevity):


def f(floating[:] x, floating y): ...

I'm thinking that in this kind of situation we let the array override 
how y is interpreted (y will always be a double here, but if x is passed 
as a float32 then use float32 for y as well and coerce y).


Does this make sense as a general rule -- if there's a conflict between 
array arguments and scalar arguments (array base type is narrower than 
the scalar type), the array argument wins? It makes sense because we can 
easily convert a scalar while we can't convert an array; and there's no 
"3.4f" notation in Python.
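The "array wins" rule can be sketched in plain Python using the stdlib array module as a stand-in for a typed buffer. This is a hypothetical model of the proposed dispatch, not Cython's actual dispatcher: the buffer's element type picks the specialization, and the scalar is coerced to it, since buffers cannot be converted cheaply while scalars can.

```python
import array

# Illustrative mapping from buffer typecode to the C type it selects.
TYPECODE_TO_CTYPE = {"f": "float", "d": "double"}

def pick_specialization(buf, scalar):
    # The buffer argument decides the specialization...
    ctype = TYPECODE_TO_CTYPE[buf.typecode]
    # ...and the scalar is coerced (possibly narrowed) to match it.
    coerced = array.array(buf.typecode, [scalar])[0]
    return ctype, coerced

xs = array.array("f", [1.0, 2.0])
ctype, y = pick_specialization(xs, 3.4)
print(ctype)  # 'float': the array argument wins; y was narrowed to float32
```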


This makes less sense

def f(floating x): ...

as it can only ever resolve to double; although I guess we should allow 
it for consistency with usecases that do make sense, such as 
"real_or_complex" and "int_or_float"


The final and most difficult problem is what Python ints resolve to in 
this context. The widest integer type available in the fused type? 
Always Py_ssize_t? -1 on making the dispatch depend on the actual 
run-time value.


Dag Sverre
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread Greg Ewing

mark florisson wrote:


cdef func(floating x, floating y):
...

you get a "float, float" version, and a "double, double" version, but
not "float, double" or "double, float".


It's hard to draw conclusions from this example because
it's degenerate. You don't really need multiple versions of a
function like that, because of float <-> double coercions.

A more telling example might be

  cdef double dot_product(floating *u, floating *v, int length)

By your current rules, this would give you one version that
takes two float vectors, and another that takes two double
vectors.

But if you want to find the dot product of a float vector and
a double vector, you're out of luck.

--
Greg
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread Greg Ewing

Dag Sverre Seljebotn wrote:


ctypedef fused_type(float, double) speed_t
ctypedef fused_type(float, double) acceleration_t

then you get 4 specializations.

ctypedef speed_t acceleration_t

I guess only 2 specializations.

Treating the typedefs in this way is slightly fishy of course.


Indeed. This whole business seems rather too implicit to
me. I think I'd rather have explicit type parameters in
some form. Maybe

  cdef func2[floating F](F x, F y):
# 2 specialisations

  cdef func4[floating F, floating G](F x, G y):
# 4 specialisations

This also makes it clear how to refer to particular
specialisations: func2[float] or func4[float, double].

Pointers are handled in a natural way:

  cdef funcfp[floating F](F x, F *y):
# 2 specialisations

It also extends naturally to fused types used in other
contexts:

  cdef struct Vector[floating F]:
F x, y, z
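One appeal of the explicit form is that the specialization count and the names of the specializations fall directly out of the declared parameters. A hypothetical rendering of that counting in Python (the names func2/func4 mirror the sketch above; nothing here is Cython syntax):

```python
from itertools import product

floating = ("float", "double")

# cdef func2[floating F](F x, F y): one parameter -> 2 specializations
func2 = {f: f"func2[{f}]" for f in floating}

# cdef func4[floating F, floating G](F x, G y): two parameters -> 4
func4 = {(f, g): f"func4[{f}, {g}]" for f, g in product(floating, repeat=2)}

print(len(func2), len(func4))            # 2 4
print(func4[("float", "double")])        # func4[float, double]
```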

--
Greg
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread Robert Bradshaw
On Tue, May 3, 2011 at 12:59 PM, Dag Sverre Seljebotn
 wrote:
> On 05/03/2011 08:19 PM, Robert Bradshaw wrote:
>
>>> Btw we shouldn't count on pruning for the design of this, I think this
>>> will
>>> for a large part be used with def functions. And if you use a cdef
>>> function
>>> from another module through a pxd, you also need all versions.
>>
>> Well, we'll want to avoid compiler warnings. E.g. floating might
>> include long double, but only float and double may be used. In pxd and
>> def functions, however, we will make all versions available.
>
> Which is a reminder to hash out exactly how the dispatch will be resolved
> when coming from Python space (we do want to support "f(x, y)", without
> []-qualifier, when calling from Python, right?)
>
> Fused types mostly make sense when used through PEP 3118 memory views (using
> the planned syntax for brevity):
>
> def f(floating[:] x, floating y): ...
>
> I'm thinking that in this kind of situation we let the array override how y
> is interpreted (y will always be a double here, but if x is passed as a
> float32 then use float32 for y as well and coerce y).
>
> Does this make sense as a general rule -- if there's a conflict between
> array arguments and scalar arguments (array base type is narrower than the
> scalar type), the array argument wins? It makes sense because we can easily
> convert a scalar while we can't convert an array; and there's no "3.4f"
> notation in Python.

I hadn't thought of that... it's a bit odd, but we do have object ->
float but no float[:] -> double[:], so the notion of something being
the only match, even if it involves truncation, would be OK here. I'd
like the rules to be clear. Perhaps we allow object -> float (e.g. in
the auto-dispatching mode) but not double -> float.

> This makes less sense
>
> def f(floating x): ...
>
> as it can only ever resolve to double; although I guess we should allow it
> for consistency with usecases that do make sense, such as "real_or_complex"
> and "int_or_float"
>
> The final and most difficult problem is what Python ints resolve to in this
> context. The widest integer type available in the fused type? Always
> Py_ssize_t? -1 on making the dispatch depend on the actual run-time value.

-1 to run-time cutoffs. I'd choose the widest integral type.
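The "widest integral type" rule is easy to state as a sketch. This is a hypothetical model of the proposed resolution; the byte sizes are illustrative, not platform guarantees:

```python
# Illustrative sizes for the integer members of a fused type.
SIZEOF = {"short": 2, "int": 4, "long": 8}

def resolve_python_int(fused):
    # No run-time value inspection: statically pick the widest member.
    return max(fused, key=lambda t: SIZEOF[t])

print(resolve_python_int(("short", "int", "long")))  # long
```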

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-03 Thread Robert Bradshaw
On Tue, May 3, 2011 at 4:40 PM, Greg Ewing  wrote:
> Dag Sverre Seljebotn wrote:
>
>> ctypedef fused_type(float, double) speed_t
>> ctypedef fused_type(float, double) acceleration_t
>>
>> then you get 4 specializations.
>>
>> ctypedef speed_t acceleration_t
>>
>> I guess only 2 specializations.
>>
>> Treating the typedefs in this way is slightly fishy of course.
>
> Indeed. This whole business seems rather too implicit to
> me. I think I'd rather have explicit type parameters in
> some form. Maybe
>
>  cdef func2[floating F](F x, F y):
>    # 2 specialisations
>
>  cdef func4[floating F, floating G](F x, G y):
>    # 4 specialisations
>
> This also makes it clear how to refer to particular
> specialisations: func2[float] or func4[float, double].
>
> Pointers are handled in a natural way:
>
>  cdef funcfp[floating F](F x, F *y):
>    # 2 specialisations
>
> It also extends naturally to fused types used in other
> contexts:
>
>  cdef struct Vector[floating F]:
>    F x, y, z

That's an idea, it is nice and explicit without being too verbose. Any
thoughts on how one would define one's own "floating?"

I presume one would then use a Vector[F] inside of func2[floating F]?

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


[Cython] jenkins problems

2011-05-03 Thread Vitja Makarov
Hi!

Jenkins doesn't work for me. It seems that it can't do a pull and is
running tests against obsolete sources.
Maybe because of a forced push.

There are only 6 errors here:
https://sage.math.washington.edu:8091/hudson/view/cython-vitek/job/cython-vitek-tests-py27-c/

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] jenkins problems

2011-05-03 Thread Vitja Makarov
2011/5/4 Vitja Makarov :
> Hi!
>
> Jenkins doesn't work for me. It seems that it can't do a pull and is
> running tests against obsolete sources.
> Maybe because of a forced push.
>
> There are only 6 errors here:
> https://sage.math.washington.edu:8091/hudson/view/cython-vitek/job/cython-vitek-tests-py27-c/
>

Can you please provide me a Jenkins account and I'll try to fix the issues myself?


-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] jenkins problems

2011-05-03 Thread Vitja Makarov
2011/5/4 Vitja Makarov :
> 2011/5/4 Vitja Makarov :
>> Hi!
>>
>> Jenkins doesn't work for me. It seems that it can't do a pull and is
>> running tests against obsolete sources.
>> Maybe because of a forced push.
>>
>> There are only 6 errors here:
>> https://sage.math.washington.edu:8091/hudson/view/cython-vitek/job/cython-vitek-tests-py27-c/
>>
>
> Can you please provide me a Jenkins account and I'll try to fix the issues
> myself?
>

It's better to use:

$ git fetch origin
$ git checkout -f origin/master

Instead of git pull

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel