Re: [Cython] Fused Types

2011-05-02 Thread Stefan Behnel

Robert Bradshaw, 30.04.2011 08:16:

On Fri, Apr 29, 2011 at 8:04 AM, mark florisson

With the type matching it matches on exactly 'if src_type is
dst_type:' so you can't use 'and' and such... perhaps I should turn
these expression into a node with the constant value first and then
see if the result of the entire expression is known at compile time?


Yes, that's exactly what I was thinking you should do.


For now, you could simply (try to) re-run the ConstantFolding transform 
after the type splitting. It will evaluate constant expressions and also 
drop unreachable branches from conditionals.
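For illustration, a hypothetical sketch (using the cython.fused_type syntax under discussion in this thread; all names made up) of the kind of branch that becomes constant once the fused type is split:

```cython
cimport cython

ctypedef cython.fused_type(float, double) real_t

cdef real_t twice(real_t x):
    # After type splitting, 'real_t is double' is a compile-time
    # constant in each specialization, so ConstantFolding can drop
    # the unreachable branch entirely.
    if real_t is double:
        return x * 2.0
    else:
        return x * <float> 2.0
```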


Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Fused Types

2011-05-02 Thread Dag Sverre Seljebotn

On 05/01/2011 06:25 PM, Sturla Molden wrote:

Den 01.05.2011 16:36, skrev Stefan Behnel:


Not everyone uses C++. And the C++ compiler cannot adapt the code to
specific Python object types.


Ok, that makes sense.

Second question: Why not stay with the current square-bracket syntax?
Does Cython
need a fused-type in addition?


There is currently no feature for templates in Cython, only for 
interfacing with C++ templates, which is rather different.


I.e., your question is very vague.

You're welcome to draft your own proposal for full-blown templates in 
Cython, if that is what you mean. When we came up with this idea, we 
felt that bringing the full power of C++ templates (including pattern 
matching etc.) into Cython would be a bit too much; I think Cython devs 
are more sceptical than average of C++ and the mixed blessings of 
templates.


E.g., one reason for not wanting to do it the C++ way is the need to 
stick large parts of your program in header files. With fused types, the 
valid instantiations are determined up front.


DS


Re: [Cython] Fused Types

2011-05-02 Thread mark florisson
On 2 May 2011 11:08, Stefan Behnel  wrote:
> Robert Bradshaw, 30.04.2011 08:16:
>>
>> On Fri, Apr 29, 2011 at 8:04 AM, mark florisson
>>>
>>> With the type matching it matches on exactly 'if src_type is
>>> dst_type:' so you can't use 'and' and such... perhaps I should turn
>>> these expression into a node with the constant value first and then
>>> see if the result of the entire expression is known at compile time?
>>
>> Yes, that's exactly what I was thinking you should do.
>
> For now, you could simply (try to) re-run the ConstantFolding transform
> after the type splitting. It will evaluate constant expressions and also
> drop unreachable branches from conditionals.

Right, thanks! I actually ran it on the condition only, as I overlooked
its branch pruning capability (I sometimes see 'if (1)' in generated
code, so that probably happens after ConstantFolding). So wouldn't it
be a good idea to insert another ConstantFolding transform at the end
of the pipeline as well, and have it recalculate constants in case an
expression was previously determined not_a_constant?

> Stefan


Re: [Cython] Fused Types

2011-05-02 Thread Stefan Behnel

mark florisson, 02.05.2011 11:22:

On 2 May 2011 11:08, Stefan Behnel wrote:

Robert Bradshaw, 30.04.2011 08:16:


On Fri, Apr 29, 2011 at 8:04 AM, mark florisson


With the type matching it matches on exactly 'if src_type is
dst_type:' so you can't use 'and' and such... perhaps I should turn
these expression into a node with the constant value first and then
see if the result of the entire expression is known at compile time?


Yes, that's exactly what I was thinking you should do.


For now, you could simply (try to) re-run the ConstantFolding transform
after the type splitting. It will evaluate constant expressions and also
drop unreachable branches from conditionals.


Right thanks! I actually ran it on the condition only, as I overlooked
its branch pruning capability (sometimes I see 'if (1)' generated
code, so that's probably because that happens after ConstantFolding).


... or because something generates that explicitly (e.g. to simplify some 
specific code generation), but I couldn't find this with a quick grep right 
now. I once put a C for-loop into straight sequential argument unpacking 
code, so that I could "break" out of it at any point, rather than using 
gotos and labels for that. But that was only half-way hackish because that 
particular code mimics an unrolled loop anyway.


In the future, Vitja's dead code removal branch will handle branch pruning 
anyway. I just put it into the ConstantFolding transform at the time as it 
fit in there quite well, and because I thought it would help also during 
the constant calculation to get rid of dead code while we're at it, e.g. in 
conditional expressions (x if c else y) or later on in combination with 
things like the FlattenInList/SwitchTransform.




So wouldn't it be a good idea to just insert another ConstantFolding
transform at the end of the pipeline also, and have it recalculate
constants in case it was previously determined not_a_constant?


An additional run would also make sense before the optimisation steps that 
run after type analysis. Some of them can take advantage of constants, 
including those created before and during type analysis. Maybe even a third 
run right before the code generation, i.e. after the optimisations.


Not sure if "not_a_constant" values can become a problem. They'll be 
ignored in the next run, only "constant_value_not_set" values will be 
recalculated. While newly created expressions will usually be virgins, new 
values/expressions that are being added to existing "not_a_constant" 
expressions may end up being ignored, even if the whole expression could 
then be condensed into a constant.


OTOH, these cases may be rare enough to not deserve the additional 
performance penalty of looking at all expression nodes again. It may be 
better to call back into ConstantFolding explicitly when generating 
expressions at a later point, or to just calculate and set the constant 
values properly while creating the nodes.


While you're at it, you can test if it works to reuse the same transform 
instance for all runs. That should be a tiny bit quicker, because it only 
needs to build the node dispatch table once (not that I expect it to make 
any difference, but anyway...).


Stefan


Re: [Cython] Fused Types

2011-05-02 Thread mark florisson
On 2 May 2011 12:32, Stefan Behnel  wrote:
> mark florisson, 02.05.2011 11:22:
>>
>> On 2 May 2011 11:08, Stefan Behnel wrote:
>>>
>>> Robert Bradshaw, 30.04.2011 08:16:

 On Fri, Apr 29, 2011 at 8:04 AM, mark florisson
>
> With the type matching it matches on exactly 'if src_type is
> dst_type:' so you can't use 'and' and such... perhaps I should turn
> these expression into a node with the constant value first and then
> see if the result of the entire expression is known at compile time?

 Yes, that's exactly what I was thinking you should do.
>>>
>>> For now, you could simply (try to) re-run the ConstantFolding transform
>>> after the type splitting. It will evaluate constant expressions and also
>>> drop unreachable branches from conditionals.
>>
>> Right thanks! I actually ran it on the condition only, as I overlooked
>> its branch pruning capability (sometimes I see 'if (1)' generated
>> code, so that's probably because that happens after ConstantFolding).
>
> ... or because something generates that explicitly (e.g. to simplify some
> specific code generation), but I couldn't find this with a quick grep right
> now. I once put a C for-loop into straight sequential argument unpacking
> code, so that I could "break" out of it at any point, rather than using
> gotos and labels for that. But that was only half-way hackish because that
> particular code mimics an unrolled loop anyway.
>
> In the future, Vitja's dead code removal branch will handle branch pruning
> anyway. I just put it into the ConstantFolding transform at the time as it
> fit in there quite well, and because I thought it would help also during the
> constant calculation to get rid of dead code while we're at it, e.g. in
> conditional expressions (x if c else y) or later on in combination with
> things like the FlattenInList/SwitchTransform.
>
>
>> So wouldn't it be a good idea to just insert another ConstantFolding
>> transform at the end of the pipeline also, and have it recalculate
>> constants in case it was previously determined not_a_constant?
>
> An additional run would also make sense before the optimisation steps that
> run after type analysis. Some of them can take advantage of constants,
> including those created before and during type analysis. Maybe even a third
> run right before the code generation, i.e. after the optimisations.
>
> Not sure if "not_a_constant" values can become a problem. They'll be ignored
> in the next run, only "constant_value_not_set" values will be recalculated.
> While newly created expressions will usually be virgins, new
> values/expressions that are being added to existing "not_a_constant"
> expressions may end up being ignored, even if the whole expression could
> then be condensed into a constant.

Right, in my branch I introduced a new instance variable; when it is
set, the transform ignores "not_a_constant" and tries to compute the
value anyway, since the pruning is mandatory even though the expression
was previously determined "not_a_constant".

> OTOH, these cases may be rare enough to not deserve the additional
> performance penalty of looking at all expression nodes again. It may be
> better to call back into ConstantFolding explicitly when generating
> expressions at a later point, or to just calculate and set the constant
> values properly while creating the nodes.

So far (from what I've seen, I haven't actually done any benchmarks) I
think the C compiler is still about five times as slow as Cython.

> While you're at it, you can test if it works to reuse the same transform
> instance for all runs. That should be a tiny bit quicker, because it only
> needs to build the node dispatch table once (not that I expect it to make
> any difference, but anyway...).

Ok, sure.

> Stefan


Re: [Cython] Fused Types

2011-05-02 Thread Sturla Molden

Den 02.05.2011 11:15, skrev Dag Sverre Seljebotn:


I.e., your question is very vague.


Ok, what I wanted to ask was "why have one syntax for interfacing C++ 
templates and another for generics?" It seems like syntax bloat to me.



You're welcome to draft your own proposal for full-blown templates in 
Cython, if that is what you mean. When we came up with this idea, we 
felt that bringing the full power of C++ templates (including pattern 
matching etc.) into Cython would be a bit too much; I think Cython 
devs are above average sceptical to C++ and the mixed blessings of 
templates.


E.g., one reason for not wanting to do it the C++ way is the need to 
stick large parts of your program in header files. With fused types, 
the valid instantiations are determined up front.


C++ templates are evil. They require huge header files (compiler 
dependent, but they all do) and make debugging a nightmare. Template 
metaprogramming in C++ is crazy; we have optimizing compilers to avoid 
that. Java and C# have a simpler form of generics, but even that can 
be too general.


Java and C# can specialize code at run-time, because there is a 
JIT-compiler. Cython must do this in advance, and fused_types will 
give us a combinatorial bloat of specialized code. That is why I 
suggested using run-time type information from test runs to select 
the specializations we want.


Personally I solve this by "writing code that writes code". It is easy 
to use a Python script to generate and print specialized C or Cython code.
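As a concrete sketch of that approach (not from the thread; names are made up), a few lines of Python suffice to stamp out specialized Cython functions from a template:

```python
# Hypothetical example: generate specialized Cython source by
# substituting concrete C types into a template string.
TEMPLATE = """\
cdef {ctype} plus_one_{suffix}({ctype} x):
    return x + 1
"""

def specialize(ctypes):
    """Return Cython source containing one function per C type."""
    return "\n".join(
        TEMPLATE.format(ctype=t, suffix=t.replace(" ", "_"))
        for t in ctypes
    )

print(specialize(["float", "double"]))
```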


Sturla







Re: [Cython] Fused Types

2011-05-02 Thread Dag Sverre Seljebotn

On 05/02/2011 03:00 PM, Sturla Molden wrote:

Den 02.05.2011 11:15, skrev Dag Sverre Seljebotn:


I.e., your question is very vague.


Ok, what I wanted to ask was "why have one syntax for interfacing C++
templates and another for generics?" It seems like syntax bloat to me.


But we do that. The CEP specifies that if you have

def f(floating x): return x**2

then "f[double]" will refer to the specialization where 
floating==double, and calling f[double](3.4f) will cause the float to 
be upcast to a double.


There's no [] within the function definition, but there's no "prior art" 
for how that would look within Cython.
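For concreteness, a minimal sketch of the CEP's indexing at the call site (assuming floating is the built-in fused float/double type; names otherwise made up):

```cython
def f(floating x):
    return x ** 2

# f[double] names the floating == double specialization; a float
# argument is upcast to double before the call.
result = f[double](3.4)
```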


Dag Sverre


Re: [Cython] Fused Types

2011-05-02 Thread Dag Sverre Seljebotn

On 05/02/2011 03:00 PM, Sturla Molden wrote:

Den 02.05.2011 11:15, skrev Dag Sverre Seljebotn:


I.e., your question is very vague.


Ok, what I wanted to ask was "why have one syntax for interfacing C++
templates and another for generics?" It seems like syntax bloat to me.



You're welcome to draft your own proposal for full-blown templates in
Cython, if that is what you mean. When we came up with this idea, we
felt that bringing the full power of C++ templates (including pattern
matching etc.) into Cython would be a bit too much; I think Cython
devs are above average sceptical to C++ and the mixed blessings of
templates.

E.g., one reason for not wanting to do it the C++ way is the need to
stick large parts of your program in header files. With fused types,
the valid instantiations are determined up front.


C++ templates are evil. They require huge header files (compiler
dependent, but they all do) and make debugging a nightmare. Template
metaprogramming in C++ is crazy; we have optimizing compilers for
avoiding that. Java and C# have a simpler form of generics, but even that
can be too general.

Java and C# can specialize code at run-time, because there is a
JIT-compiler. Cython must do this in advance, and fused_types
will give us a combinatorial bloat of specialized code. That is why
I suggested using run-time type information from test runs to select
those we want.


Well, I think that what you see about "fused_types(object, list)" is 
mainly a theoretical exercise at this point.


When fused_types was discussed originally the focus was very much on 
just finding something that would allow people to specialise for 
"float,double", or real and complex.


IOW, the kind of specializations people would have generated themselves 
using a templating language anyway.


Myself, I see typing from profile-assisted compilation as a completely 
separate feature (and something that's internal to "cython 
optimization"), even though the two may share most implementation 
details, and fused types makes such things easier (but so would 
C++-style templates have done).



Personally I solve this by "writing code that writes code". It is easy
to use a Python script to generate ad print specialized C or Cython code.


fused_types is simply a proposal to make people resort to this a little 
less often (not everybody is comfortable generating source code -- 
though I think everybody reading cython-devel is). Basically: we don't 
want C++ templates, but can we extend the language in a way that deals 
with the most common situations? fused_types was the compromise we 
ended up with.


DS


Re: [Cython] Fused Types

2011-05-02 Thread Robert Bradshaw
On Sun, May 1, 2011 at 2:38 AM, mark florisson
 wrote:
> On 30 April 2011 09:51, Dag Sverre Seljebotn  
> wrote:
>> On 04/30/2011 08:39 AM, Robert Bradshaw wrote:
>>>
>>> On Fri, Apr 29, 2011 at 3:53 AM, mark florisson
>>>   wrote:

 On 29 April 2011 12:28, Pauli Virtanen  wrote:
>
> No, just that real_t is specialized to float whenever struct_t is
> specialized
> to A and to double when B. Or a more realistic example,
>
>        ctypedef cython.fused_type(float, double) real_t
>        ctypedef cython.fused_type(float complex, double complex)
> complex_t
>
>        cdef real_plus_one(complex_t a):
>            real_t b = a.real
>            return b + 1
>
> which I suppose would not be a very unusual thing in numerical codes.
> This would also allow writing the case you had earlier as
>
>        cdef cython.fused_type(string_t, int, paired=struct_t) attr_t
>
>        cdef func(struct_t mystruct, int i):
>            cdef attr_t var
>
>            if typeof(mystruct) is typeof(int):
>                var = mystruct.attrib + i
>                ...
>            else:
>                var = mystruct.attrib + i
>                ...
>
> Things would need to be done explicitly instead of implicitly, though,
> but it would remove the need for any special handling of
> the "complex" keyword.
>>>
>>> If we're going to introduce pairing, another option would be
>>>
>>>     ctypedef fused_type((double complex, double), (float complex,
>>> float)) (complex_t, real_t)
>>>
>>> though I'm not sure I like that either. We're not trying to create the
>>> all-powerful templating system here, and anything that can be done
>>> with pairing can be done (though less elegantly) via branching on the
>>> types, or, as Pauli mentions, using a wider type is often (but not
>>> always) a viable option.
>>
>> Keeping the right balance is difficult. But, at least there's some cases of
>> needing this in various codebases when interfacing with LAPACK.
>>
>> Most uses of templating with Cython code I've seen so far does a similar
>> kind of "zip" as what you have above (as we discussed on the workshop). So
>> at least the usage pattern you write above is very common.
>>
>> float32 is not about to disappear, it really is twice as fast when you're
>> memory IO bound.
>>
>> Using a wider type is actually quite often not possible; any time the type
>> is involved as the base type of an array it is not possible, and that's a
>> pretty common case.
>
> Well, if the array is passed into the function directly (and not e.g.
> as an attribute of something passed in), then you can just write
> 'my_fused_type *array' or 'my_fused_type array[]', and the base type
> will be available as 'my_fused_type'.
>
>> (With LAPACK you take the address of the variable and
>> pass it to Fortran, so using a wider type is not possible there either,
>> although I'll agree that's a more remote case.)
>>
>> My proposal: Don't support either "real_t complex" or paired fused types for
>> the time being. Then see.
>
> Ok, sounds good.
>
>> But my vote is for paired fused types instead of "real_t complex".
>>
>> Dag Sverre
>
> A remaining issue which I'm not quite certain about is the
> specialization through subscripts, e.g. func[double]. How should this
> work from Python space (assuming cpdef functions)? Would we want to
> pass in cython.double etc? Because it would only work for builtin
> types, so what about types that aren't exposed to Python but can still
> be coerced to and from Python? Perhaps it would be better to pass in
> strings instead. I also think e.g. "int *" reads better than
> cython.pointer(cython.int).

That's why we offer cython.p_int. On that note, we should support
cython.astype("int *") or something like that. Generally, I don't like
encoding semantic information in strings.

OTOH, since it'll be a mapping of some sort, there's no reason we
can't support both. Most of the time it should dispatch (at runtime or
compile time) based on the type of the arguments.

> It also sounds bad to rely on objects from the Shadow module, as
> people using Cython modules may not have Cython (or the Shadow module
> shipped as cython) installed. And what happens if they import it under
> a different name and we import another cython module?

The (vague) idea is that one could ship as much of the cython shadow
module with your own code as needed to run without Cython.

> Perhaps the
> specializations of the signature could also be exposed on the function
> object, so users can see which ones are available.

Sure.

- Robert


Re: [Cython] Fused Types

2011-05-02 Thread mark florisson
On 2 May 2011 18:24, Robert Bradshaw  wrote:
> On Sun, May 1, 2011 at 2:38 AM, mark florisson
>  wrote:
>> On 30 April 2011 09:51, Dag Sverre Seljebotn  
>> wrote:
>>> On 04/30/2011 08:39 AM, Robert Bradshaw wrote:

 On Fri, Apr 29, 2011 at 3:53 AM, mark florisson
   wrote:
>
> On 29 April 2011 12:28, Pauli Virtanen  wrote:
>>
>> No, just that real_t is specialized to float whenever struct_t is
>> specialized
>> to A and to double when B. Or a more realistic example,
>>
>>        ctypedef cython.fused_type(float, double) real_t
>>        ctypedef cython.fused_type(float complex, double complex)
>> complex_t
>>
>>        cdef real_plus_one(complex_t a):
>>            real_t b = a.real
>>            return b + 1
>>
>> which I suppose would not be a very unusual thing in numerical codes.
>> This would also allow writing the case you had earlier as
>>
>>        cdef cython.fused_type(string_t, int, paired=struct_t) attr_t
>>
>>        cdef func(struct_t mystruct, int i):
>>            cdef attr_t var
>>
>>            if typeof(mystruct) is typeof(int):
>>                var = mystruct.attrib + i
>>                ...
>>            else:
>>                var = mystruct.attrib + i
>>                ...
>>
>> Things would need to be done explicitly instead of implicitly, though,
>> but it would remove the need for any special handling of
>> the "complex" keyword.

 If we're going to introduce pairing, another option would be

     ctypedef fused_type((double complex, double), (float complex,
 float)) (complex_t, real_t)

 though I'm not sure I like that either. We're not trying to create the
 all-powerful templating system here, and anything that can be done
 with pairing can be done (though less elegantly) via branching on the
 types, or, as Pauli mentions, using a wider type is often (but not
 always) a viable option.
>>>
>>> Keeping the right balance is difficult. But, at least there's some cases of
>>> needing this in various codebases when interfacing with LAPACK.
>>>
>>> Most uses of templating with Cython code I've seen so far does a similar
>>> kind of "zip" as what you have above (as we discussed on the workshop). So
>>> at least the usage pattern you write above is very common.
>>>
>>> float32 is not about to disappear, it really is twice as fast when you're
>>> memory IO bound.
>>>
>>> Using a wider type is actually quite often not possible; any time the type
>>> is involved as the base type of an array it is not possible, and that's a
>>> pretty common case.
>>
>> Well, if the array is passed into the function directly (and not e.g.
>> as an attribute of something passed in), then you can just write
>> 'my_fused_type *array' or 'my_fused_type array[]', and the base type
>> will be available as 'my_fused_type'.
>>
>>> (With LAPACK you take the address of the variable and
>>> pass it to Fortran, so using a wider type is not possible there either,
>>> although I'll agree that's a more remote case.)
>>>
>>> My proposal: Don't support either "real_t complex" or paired fused types for
>>> the time being. Then see.
>>
>> Ok, sounds good.
>>
>>> But my vote is for paired fused types instead of "real_t complex".
>>>
>>> Dag Sverre
>>
>> A remaining issue which I'm not quite certain about is the
>> specialization through subscripts, e.g. func[double]. How should this
>> work from Python space (assuming cpdef functions)? Would we want to
>> pass in cython.double etc? Because it would only work for builtin
>> types, so what about types that aren't exposed to Python but can still
>> be coerced to and from Python? Perhaps it would be better to pass in
>> strings instead. I also think e.g. "int *" reads better than
>> cython.pointer(cython.int).
>
> That's why we offer cython.p_int. On that note, we should support
> cython.astype("int *") or something like that. Generally, I don't like
> encoding semantic information in strings.
>
> OTOH, since it'll be a mapping of some sort, there's no reason we
> can't support both. Most of the time it should dispatch (at runtime or
> compile time) based on the type of the arguments.

If we have an argument type that is composed of a fused type, would we
want the indexing to specify the composed type or the fused type? e.g.

ctypedef floating *floating_p

cdef func(floating_p x):
...

Then do we want

func[double](10.0)

or

func[double_p](10.0)

to specialize func? FYI, the type checking works like 'double_p is
floating_p' and not 'double is floating_p'. But for functions this is
a little different. On the one hand specifying the full types
(double_p) makes sense as you're kind of specifying a signature, but
on the other hand you're specializing fused types and you don't care
how they are composed -- especially if they occur multiple times with
different composition. So I'm thinking we want 'func[double]'.

Re: [Cython] Fused Types

2011-05-02 Thread Robert Bradshaw
On Mon, May 2, 2011 at 1:56 PM, mark florisson
 wrote:
> On 2 May 2011 18:24, Robert Bradshaw  wrote:
>> On Sun, May 1, 2011 at 2:38 AM, mark florisson
>>  wrote:
>>> A remaining issue which I'm not quite certain about is the
>>> specialization through subscripts, e.g. func[double]. How should this
>>> work from Python space (assuming cpdef functions)? Would we want to
>>> pass in cython.double etc? Because it would only work for builtin
>>> types, so what about types that aren't exposed to Python but can still
>>> be coerced to and from Python? Perhaps it would be better to pass in
>>> strings instead. I also think e.g. "int *" reads better than
>>> cython.pointer(cython.int).
>>
>> That's why we offer cython.p_int. On that note, we should support
>> cython.astype("int *") or something like that. Generally, I don't like
>> encoding semantic information in strings.
>>
>> OTOH, since it'll be a mapping of some sort, there's no reason we
>> can't support both. Most of the time it should dispatch (at runtime or
>> compile time) based on the type of the arguments.
>
> If we have an argument type that is composed of a fused type, would we
> want the indexing to specify the composed type or the fused type? e.g.
>
> ctypedef floating *floating_p

How should we support this? It's clear in this case, but only because
you chose good names. Another option would be to require
parameterization of floating_p, with floating_p[floating] as the
"as-yet-unparameterized" version. Explicit but redundant. (The same
applies to structs and classes as well as typedefs.) On the other hand,
the above is very succinct and clear in context, so I'm leaning
towards it. Thoughts?

> cdef func(floating_p x):
>    ...
>
> Then do we want
>
>    func[double](10.0)
>
> or
>
>    func[double_p](10.0)
>
> to specialize func?

The latter.

> FYI, the type checking works like 'double_p is
> floating_p' and not 'double is floating_p'. But for functions this is
> a little different. On the one hand specifying the full types
> (double_p) makes sense as you're kind of specifying a signature, but
> on the other hand you're specializing fused types and you don't care
> how they are composed -- especially if they occur multiple times with
> different composition. So I'm thinking we want 'func[double]'.

That's what I'm thinking too. The type you're branching on is
floating, and within that block you can declare variables as
floating*, ndarray[dtype=floating], etc.
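A hedged sketch of that pattern (assuming floating is fused over float and double, and the usual numpy buffer support; names are made up):

```cython
cimport numpy as np

cdef floating total(np.ndarray[floating] arr):
    # Within each specialization, 'floating' is a concrete C type and
    # can be reused for locals and pointers.
    cdef floating acc = 0
    cdef floating *p = <floating *> arr.data
    cdef Py_ssize_t i
    for i in range(arr.shape[0]):
        acc += p[i]
    return acc
```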

- Robert


[Cython] Fwd: automatic character conversion problem

2011-05-02 Thread Robert Bradshaw
Dear Cython developers,

Recently I encountered a problem with Cython's automatic char* to string
conversion (Cython version 0.14.1). I'll attach two sample source files. The
first one, char2str_a.pyx prints "The quick...", just as I expected. But the
second example prints "... lazy dog.". In the original situation I had a call to
free() instead of the call to strcpy() which I use here for illustration
purposes. Then I got unpredictable results. Apparently the Python string object
keeps referring to the C char* a bit longer than I would expect. A previous
version (0.11.2) didn't have this problem.

Best regards,
Hans Terlouw

--
J. P. Terlouw
Kapteyn Astronomical Institute
University of Groningen
Postbus 800
NL-9700 AV Groningen
The Netherlands

Phone: +31-(0)50-3634068/73
Fax:   +31-(0)50-3636100
Web:   http://www.astro.rug.nl/~terlouw/


cdef extern from "stdlib.h":
  void free(void* ptr)
  void* malloc(size_t size)

cdef extern from "string.h":
  char *strcpy(char *dest, char *src)

def char2str():
  cdef char *c_str_a = malloc(80)
  cdef char *c_str_b = "The quick...   "
  cdef char *c_str_c = "... lazy dog.  "

  strcpy(c_str_a, c_str_b)

  p_str = c_str_a
  strcpy(c_str_a, c_str_c)
  p_str = p_str.rstrip()
  print p_str



cdef extern from "stdlib.h":
  void free(void* ptr)
  void* malloc(size_t size)

cdef extern from "string.h":
  char *strcpy(char *dest, char *src)

def char2str():
  cdef char *c_str_a = malloc(80)
  cdef char *c_str_b = "The quick...   "
  cdef char *c_str_c = "... lazy dog.  "

  strcpy(c_str_a, c_str_b)

  p_str = c_str_a
  strcpy(c_str_a, c_str_c)
  print p_str.rstrip()




Re: [Cython] automatic character conversion problem

2011-05-02 Thread Robert Bradshaw
On Fri, Apr 29, 2011 at 6:57 AM, Hans Terlouw  wrote:
> Dear Cython developers,
>
> Recently I encountered a problem with Cython's automatic char* to string
> conversion (Cython version 0.14.1). I'll attach two sample source files. The
> first one, char2str_a.pyx prints "The quick...", just as I expected. But the
> second example prints "... lazy dog.". In the original situation I had a
> call to
> free() instead of the call to strcpy() which I use here for illustration
> purposes. Then I got unpredictable results. Apparently the Python string
> object
> keeps referring to the C char* a bit longer than I would expect. A previous
> version (0.11.2) didn't have this problem.

This is due to type inference, in the second example, p_str is
inferred to be of type char*.

- Robert

> cdef extern from "stdlib.h":
>   void free(void* ptr)
>   void* malloc(size_t size)
>
> cdef extern from "string.h":
>   char *strcpy(char *dest, char *src)
>
> def char2str():
>   cdef char *c_str_a = malloc(80)
>   cdef char *c_str_b = "The quick...   "
>   cdef char *c_str_c = "... lazy dog.  "
>
>   strcpy(c_str_a, c_str_b)
>
>   p_str = c_str_a
>   strcpy(c_str_a, c_str_c)
>   p_str = p_str.rstrip()
>   print p_str
>
>
>
> cdef extern from "stdlib.h":
>   void free(void* ptr)
>   void* malloc(size_t size)
>
> cdef extern from "string.h":
>   char *strcpy(char *dest, char *src)
>
> def char2str():
>   cdef char *c_str_a = malloc(80)
>   cdef char *c_str_b = "The quick...   "
>   cdef char *c_str_c = "... lazy dog.  "
>
>   strcpy(c_str_a, c_str_b)
>
>   p_str = c_str_a
>   strcpy(c_str_a, c_str_c)
>   print p_str.rstrip()


Re: [Cython] automatic character conversion problem

2011-05-02 Thread Stefan Behnel

[moving this to cython-users]

Robert Bradshaw, 03.05.2011 06:38:

On Fri, Apr 29, 2011 at 6:57 AM, Hans Terlouw wrote:

Recently I encountered a problem with Cython's automatic char* to string
conversion (Cython version 0.14.1). I'll attach two sample source files. The
first one, char2str_a.pyx prints "The quick...", just as I expected. But the
second example prints "... lazy dog.". In the original situation I had a
call to
free() instead of the call to strcpy() which I use here for illustration
purposes. Then I got unpredictable results. Apparently the Python string
object
keeps referring to the C char* a bit longer than I would expect. A previous
version (0.11.2) didn't have this problem.


This is due to type inference, in the second example, p_str is
inferred to be of type char*.


Just to make this a bit clearer:


cdef extern from "stdlib.h":
   void free(void* ptr)
   void* malloc(size_t size)

cdef extern from "string.h":
   char *strcpy(char *dest, char *src)

def char2str():
   cdef char *c_str_a = malloc(80)
   cdef char *c_str_b = "The quick...   "
   cdef char *c_str_c = "... lazy dog.  "

   strcpy(c_str_a, c_str_b)

   p_str = c_str_a
   strcpy(c_str_a, c_str_c)
   p_str = p_str.rstrip()
   print p_str


In this example, p_str is assigned both a char* and a Python object, so 
type inference makes it a Python object. The first assignment is therefore 
a copy operation that creates a Python bytes object, and the second 
operation assigns the object returned from the .rstrip() call.




cdef extern from "stdlib.h":
   void free(void* ptr)
   void* malloc(size_t size)

cdef extern from "string.h":
   char *strcpy(char *dest, char *src)

def char2str():
   cdef char *c_str_a = malloc(80)
   cdef char *c_str_b = "The quick...   "
   cdef char *c_str_c = "... lazy dog.  "

   strcpy(c_str_a, c_str_b)

   p_str = c_str_a
   strcpy(c_str_a, c_str_c)
   print p_str.rstrip()


Here, p_str is only assigned once from a pointer, so the type is inferred 
as a char*, and the first assignment is a pointer assignment, not a copy 
operation.
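One way to avoid relying on inference here (a sketch, assuming the same extern declarations as in the examples above): give p_str an explicit Python type so the copy out of the C buffer happens immediately:

```cython
def char2str_fixed():
    cdef char *c_str_a = <char *> malloc(80)
    cdef char *c_str_b = "The quick...   "
    strcpy(c_str_a, c_str_b)
    cdef bytes p_str = c_str_a   # explicit Python type forces a copy now
    free(c_str_a)                # safe: p_str owns its own buffer
    print p_str.rstrip()
```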


You can see the difference with "cython -a", which generates an HTML 
representation of your code that highlights Python object operations. 
(Click on a source line to see the C code).


Stefan