Re: [Cython] CF based type inference

2012-05-10 Thread Vitja Makarov
2012/5/10 Stefan Behnel :
> Vitja Makarov, 08.05.2012 15:47:
>> 2012/5/8 Stefan Behnel:
>>> Vitja has rebased the type inference on the control flow, so I wonder if
>>> this will enable us to properly infer this:
>>>
>>>  def partial_validity():
>>>    """
>>>    >>> partial_validity()
>>>    ('Python object', 'double', 'str object')
>>>    """
>>>    a = 1.0
>>>    b = a + 2   # definitely double
>>>    a = 'test'
>>>    c = a + 'toast'  # definitely str
>>>    return typeof(a), typeof(b), typeof(c)
>>>
>>> I think, what is mainly needed for this is that a NameNode with an
>>> undeclared type should not report its own entry as dependency but that of
>>> its own cf_assignments. Would this work?
>>>
>>> (Haven't got the time to try it out right now, so I'm dumping it here.)
>>
>> Yeah, that might work. The other way to go is to split entries:
>>
>>  def partial_validity():
>>    """
>>    >>> partial_validity()
>>    ('str object', 'double', 'str object')
>>    """
>>    a_1 = 1.0
>>    b = a_1 + 2   # definitely double
>>    a_2 = 'test'
>>    c = a_2 + 'toast'  # definitely str
>>    return typeof(a_2), typeof(b), typeof(c)
>>
>> And this should work better because it allows to infer a_1 as a double
>> and a_2 as a string.
>
> How would type checks fit into this? Stupid example:
>
>   def test(x):
>       if isinstance(x, MyExtType):
>           x.call_c_method()    # type known, no None check needed
>       else:
>           x.call_py_method()   # type unknown, may be None
>
> Would it work to consider a type checking branch an assignment to a new
> (and differently typed) entry?
>

No, at least not without a special handler for this case.
Anyway, that's not that hard to implement: an isinstance() condition may
mark x as being assigned to MyExtType, e.g.:

if isinstance(x, MyExtType):
    x = x  # Fake assignment
    x.call_c_method()
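
Combined with the entry-splitting idea above, the effect would roughly be
this (the x_1/x_2 names are purely illustrative, not actual compiler output):

    def test(x):
        if isinstance(x, MyExtType):
            x_1 = x              # branch-local entry, typed as MyExtType
            x_1.call_c_method()  # no None check needed
        else:
            x_2 = x              # separate entry, stays a plain object
            x_2.call_py_method()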





-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-10 Thread Dag Sverre Seljebotn

On 05/09/2012 09:08 PM, mark florisson wrote:

On 9 May 2012 19:56, Robert Bradshaw  wrote:

On Tue, May 8, 2012 at 3:35 AM, mark florisson
  wrote:

On 8 May 2012 10:47, Dag Sverre Seljebotn  wrote:


After some thinking I believe I can see more clearly where Mark is coming
from. To sum up, it's either

A) Keep both np.ndarray[double] and double[:] around, with clearly defined
and separate roles. np.ndarray[double] implementation is revamped to allow
fast slicing etc., based on the double[:] implementation.

B) Deprecate np.ndarray[double] sooner rather than later, but make double[:]
have functionality that is *really* close to what np.ndarray[double]
currently does. In most cases one should be able to basically replace
np.ndarray[double] with double[:] and the code should continue to work just
like before; difference is that if you pass in anything else than a NumPy
array, it will likely fail with a runtime AttributeError at some point
rather than fail a PyType_Check.


That's a good summary. I have a big preference for B here, but I agree
that treating a typed memoryview as both a user object (possibly
converted through callback) and a typed memoryview "subclass" is quite
magicky.


With the talk of overlay modules and go-style interface, being able to
specify the type of an object as well as its bufferness could become
more interesting than it even is now. The notion of supporting
multiple interfaces, e.g.

cdef np.ndarray&  double[:] my_array

could obviate the need for np.ndarray[double]. Until we support
something like this, or decide to reject it, I think we need to keep
the old-style syntax around. (np.ndarray[double] could even become
this intersection type to gain all the new features before we decide
on an appropriate syntax).


It's kind of interesting but also kind of a pain to declare everywhere
like that. Buffer syntax should by no means be deprecated in the near
future, but at some point it will be better to have one way to do
things, whether slightly magicky or more convoluted or not. Also, as
Dag mentioned, if we want fused extension types it makes more sense to
remove buffer syntax to disambiguate this and avoid context-dependent
special casing (e.g. np.ndarray and array.array).


I don't think it hurts to have two ways of doing things if they are 
sufficiently well-motivated, sufficiently well-defined, and sufficiently 
different from one another.


The original reason I wanted double[:] was to stop tying ourselves to 
NumPy and not promise to be compatible, because of the polymorphic 
aspect of NumPy. I think in the future, the Python behaviour of, say, +, 
in np.ndarray is going to be different from what we have today. You'll 
have the + fetching data over the network in some cases, or treating NA 
in special ways (I think there might be over a thousand emails about NA 
on the NumPy list by now?). In short, lots of stuff can be going on that 
we can't emulate in Cython.


OTOH, perhaps that doesn't matter -- we just raise an exception for the 
NumPy arrays that we can't deal with, and move on...



I wouldn't particularly mind something concise like 'm.obj'.
The AttributeError would be the case as usual, when a python object
doesn't have the right interface.


Having to insert the .obj in there does make it more painful to
convert existing Python code.


Yes, hence my slight bias towards magicky. But I do fully agree with
all opposing arguments that say "too much magic". I just prefer to be
pragmatic here :)


It's a very big decision. I think two or three alternatives are starting 
to crystallise; but to choose between them I think it calls for a CEP 
with code examples, and a request for comment on both cython-users and 
numpy-discussion.


Until that happens, avoiding any magic seems like a conservative 
forward-compatible default.


Dag
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-10 Thread mark florisson
On 10 May 2012 08:27, Vitja Makarov  wrote:
> 2012/5/10 Stefan Behnel :
>> Vitja Makarov, 08.05.2012 15:47:
>>> 2012/5/8 Stefan Behnel:
 Vitja has rebased the type inference on the control flow, so I wonder if
 this will enable us to properly infer this:

  def partial_validity():
    """
    >>> partial_validity()
    ('Python object', 'double', 'str object')
    """
    a = 1.0
    b = a + 2   # definitely double
    a = 'test'
    c = a + 'toast'  # definitely str
    return typeof(a), typeof(b), typeof(c)

 I think, what is mainly needed for this is that a NameNode with an
 undeclared type should not report its own entry as dependency but that of
 its own cf_assignments. Would this work?

 (Haven't got the time to try it out right now, so I'm dumping it here.)
>>>
>>> Yeah, that might work. The other way to go is to split entries:
>>>
>>>  def partial_validity():
>>>    """
>>>    >>> partial_validity()
>>>    ('str object', 'double', 'str object')
>>>    """
>>>    a_1 = 1.0
>>>    b = a_1 + 2   # definitely double
>>>    a_2 = 'test'
>>>    c = a_2 + 'toast'  # definitely str
>>>    return typeof(a_2), typeof(b), typeof(c)
>>>
>>> And this should work better because it allows to infer a_1 as a double
>>> and a_2 as a string.
>>
>> How would type checks fit into this? Stupid example:
>>
>>   def test(x):
>>       if isinstance(x, MyExtType):
>>           x.call_c_method()    # type known, no None check needed
>>       else:
>>           x.call_py_method()   # type unknown, may be None
>>
>> Would it work to consider a type checking branch an assignment to a new
>> (and differently typed) entry?
>>
>
> No, at least not without a special handler for this case.
> Anyway, that's not that hard to implement: an isinstance() condition may
> mark x as being assigned to MyExtType, e.g.:
>
> if isinstance(x, MyExtType):
>    x =  x  # Fake assignment
>    x.call_c_method()
>

That would be nice. It might also be useful to do branch pruning
before that stage, which may avoid a merge after the branch leading to
a different (unknown, i.e. object) type. That could be useful in the
face of fused types, where people write generic code triggering only a
certain branch depending on the specialization. Bit of a special case
maybe :)
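
The sort of fused-types case I have in mind, as a contrived sketch
(purely illustrative):

    ctypedef fused number:
        int
        double

    def process(number x):
        if number is int:
            result = x // 2     # this branch only makes sense for the int specialization
        else:
            result = x / 2.0    # and this one only for the double specialization
        return result           # without pruning, both branches feed the inferred type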

>
>
>
> --
> vitja.
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-10 Thread mark florisson
On 10 May 2012 08:37, Dag Sverre Seljebotn  wrote:
> On 05/09/2012 09:08 PM, mark florisson wrote:
>>
>> On 9 May 2012 19:56, Robert Bradshaw  wrote:
>>>
>>> On Tue, May 8, 2012 at 3:35 AM, mark florisson
>>>   wrote:

 On 8 May 2012 10:47, Dag Sverre Seljebotn
  wrote:
>
>
> After some thinking I believe I can see more clearly where Mark is
> coming
> from. To sum up, it's either
>
> A) Keep both np.ndarray[double] and double[:] around, with clearly
> defined
> and separate roles. np.ndarray[double] implementation is revamped to
> allow
> fast slicing etc., based on the double[:] implementation.
>
> B) Deprecate np.ndarray[double] sooner rather than later, but make
> double[:]
> have functionality that is *really* close to what np.ndarray[double]
> currently does. In most cases one should be able to basically replace
> np.ndarray[double] with double[:] and the code should continue to work
> just
> like before; difference is that if you pass in anything else than a
> NumPy
> array, it will likely fail with a runtime AttributeError at some point
> rather than fail a PyType_Check.


 That's a good summary. I have a big preference for B here, but I agree
 that treating a typed memoryview as both a user object (possibly
 converted through callback) and a typed memoryview "subclass" is quite
 magicky.
>>>
>>>
>>> With the talk of overlay modules and go-style interface, being able to
>>> specify the type of an object as well as its bufferness could become
>>> more interesting than it even is now. The notion of supporting
>>> multiple interfaces, e.g.
>>>
>>> cdef np.ndarray&  double[:] my_array
>>>
>>>
>>> could obviate the need for np.ndarray[double]. Until we support
>>> something like this, or decide to reject it, I think we need to keep
>>> the old-style syntax around. (np.ndarray[double] could even become
>>> this intersection type to gain all the new features before we decide
>>> on an appropriate syntax).
>>
>>
>> It's kind of interesting but also kind of a pain to declare everywhere
>> like that. Buffer syntax should by no means be deprecated in the near
>> future, but at some point it will be better to have one way to do
>> things, whether slightly magicky or more convoluted or not. Also, as
>> Dag mentioned, if we want fused extension types it makes more sense to
>> remove buffer syntax to disambiguate this and avoid context-dependent
>> special casing (e.g. np.ndarray and array.array).
>
>
> I don't think it hurts to have two ways of doing things if they are
> sufficiently well-motivated, sufficiently well-defined, and sufficiently
> different from one another.
>
> The original reason I wanted double[:] was to stop tying ourselves to NumPy
> and not promise to be compatible, because of the polymorphic aspect of
> NumPy. I think in the future, the Python behaviour of, say, +, in np.ndarray
> is going to be different from what we have today. You'll have the + fetching
> data over the network in some cases, or treating NA in special ways (I think
> there might be over a thousand emails about NA on the NumPy list by now?). In
> short, lots of stuff can be going on that we can't emulate in Cython.
>
> OTOH, perhaps that doesn't matter -- we just raise an exception for the
> NumPy arrays that we can't deal with, and move on...
>

Basically, the only thing that both np.ndarray and memoryviews
guarantee is that they operate through the buffer interface, and that
they obtain this view at certain points (assignment). Hence, if you
decide to resize your array, or swap your axes or whatever, then your
object view may no longer be consistent with your buffer. When or if
your buffer view changes isn't even defined, but kind of dictated by
the implementation.
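
A small illustration of the "view is acquired at assignment" point (assuming
NumPy is available; purely illustrative):

    import numpy as np

    def example():
        cdef double[:] view
        arr = np.zeros(10)
        view = arr            # the buffer is acquired here, at the assignment
        arr = np.zeros(20)    # rebinding the name does not re-acquire the view
        return view.shape[0]  # still 10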

Hence, if memoryviews overload +, then that + will always be triggered
on a typed view. I do suppose that if people rely on type inference
getting the type right, things start to get messy. As for NA, maybe
they will extend the buffer interface at some point, but on the other
hand Python people may feel that it will be too specific of a use case
(wild guess). Until then, keep your separate masks around :)

Anyway, a valid point. It's hard to see where this is going and how
future proof it is.

 I wouldn't particularly mind something concise like 'm.obj'.
 The AttributeError would be the case as usual, when a python object
 doesn't have the right interface.
>>>
>>>
>>> Having to insert the .obj in there does make it more painful to
>>> convert existing Python code.
>>
>>
>> Yes, hence my slight bias towards magicky. But I do fully agree with
>> all opposing arguments that say "too much magic". I just prefer to be
>> pragmatic here :)
>
>
> It's a very big decision. I think two or three alternatives are starting to
> crystallise; but to choose between them I think it calls for a CEP with code
> examples, and a request for comment on both cython-users and numpy-discussion.

Re: [Cython] Bug in print statement

2012-05-10 Thread Vitja Makarov
2012/5/9 Vitja Makarov :
> 2012/5/9 Stefan Behnel :
>> Vitja Makarov, 09.05.2012 18:31:
>>> Del statement inference enabled the pyregr.test_descr testcase, and it SIGSEGVs.
>>> Here is a minimal example:
>>>
>>> import unittest
>>> import sys
>>>
>>> class Foo(unittest.TestCase):
>>>     def test_file_fault(self):
>>>         # Testing sys.stdout is changed in getattr...
>>>         test_stdout = sys.stdout
>>>         class StdoutGuard:
>>>             def __getattr__(self, attr):
>>>                 test_stdout.write('%d\n' % sys.getrefcount(self))
>>>                 sys.stdout =  test_stdout #sys.__stdout__
>>>                 test_stdout.write('%d\n' % sys.getrefcount(self))
>>>                 test_stdout.write('getattr: %r\n' % attr)
>>>                 test_stdout.flush()
>>>                 raise RuntimeError("Premature access to sys.stdout.%s" % attr)
>>>         sys.stdout = StdoutGuard()
>>>         try:
>>>             print "Oops!"
>>>         except RuntimeError:
>>>             pass
>>>         finally:
>>>             sys.stdout = test_stdout
>>>
>>>     def test_getattr_hooks(self):
>>>         pass
>>>
>>> from test import test_support
>>> test_support.run_unittest(Foo)
>>>
>>> It works in Python and segfaults in Cython.
>>> It seems to me that the problem is that StdoutGuard() is still used when
>>> its reference count is zero, since the Python interpreter does
>>> Py_XINCREF() on the file object while __Pyx_Print() doesn't.
>>
>> Makes sense to change that, IMHO. An additional INCREF during something as
>> involved as a print() will not hurt anyone.
>>
>> IIRC, I had the same problem with PyPy - guess I should have fixed it back
>> then instead of taking the lazy escape towards using the print() function.
>>
>
> I've moved the printing function to Utility/ and fixed the refcount bug.
> If Jenkins is OK, I'm going to push this commit to master:
>
> https://github.com/vitek/cython/commit/83eceb31b4ed9afc0fd6d24c9eda5e52d9420535
>

I've pushed fixes to master.

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


[Cython] pure mode quirk

2012-05-10 Thread Stefan Behnel
Hi,

when declaring a C function in pure mode, you eventually end up with this:

    @cython.cfunc
    @cython.returns(cython.bint)
    @cython.locals(a=cython.int, b=cython.int, c=cython.int)
    def c_compare(a,b):
        c = 5
        return a == b + c

That is very verbose, making it hard to find the name of the actual
function. It's also not very intuitive that @cython.locals() is the way to
declare arguments.

I would find it more readable to support this:

    @cython.cfunc(cython.bint, a=cython.int, b=cython.int)
    @cython.locals(c=cython.int)
    def c_compare(a,b):
        c = 5
        return a == b

But the problem here is that it conflicts with

    @cython.cfunc
    def c_compare(a,b):
        c = 5
        return a == b

when executed from Shadow.py. How should the fake decorator know that it is
being called with a type as first argument and not with the function it
decorates? Legacy, legacy ...

An alternative would be this:

    @cython.cfunc(a=cython.int, b=cython.int, _returns=cython.bint)
    @cython.locals(c=cython.int)
    def c_compare(a,b):
        c = 5
        return a == b

But that's not clearer than an explicit decorator for the return value.

I'm somewhat concerned about the redundancy this introduces with @locals(),
which could still be used to declare argument types (even conflicting
ones). However, getting rid of the need for a separate @returns() seems
worthwhile by itself, so this might provide a compromise:

    @cython.cfunc(returns=cython.bint)
    @cython.locals(a=cython.int, b=cython.int, c=cython.int)
    def c_compare(a,b):
        c = 5
        return a == b + c

This would work in Shadow.py because it's easy to distinguish between a
positional argument (the decorated function) and a keyword argument
("returns"). It might lead to bugs in user code, though, if they forget to
pass the return type as a keyword argument. Maybe just a minor concern,
because the decorator doesn't read well without the keyword.
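
A sketch of how the Shadow.py side could tell these apart (simplified, not
the actual Shadow.py code):

    def cfunc(func=None, **kwargs):
        if func is not None and not kwargs:
            # bare use: @cython.cfunc directly on the function
            return func
        # parametrised use: @cython.cfunc(returns=..., a=..., b=...)
        def decorator(f):
            return f
        return decorator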

What do you think? Is this worth doing something about at all?

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-10 Thread Vitja Makarov
2012/5/9 Robert Bradshaw :
> On Wed, May 9, 2012 at 6:33 AM, Stefan Behnel  wrote:
>> mark florisson, 09.05.2012 15:18:
>>> On 9 May 2012 14:16, Vitja Makarov wrote:
 from cython cimport typeof

 def foo(float[::1] a):
    b = a
    #del b
    print typeof(b)
    print typeof(a)


 In this example `b` is inferred as 'Python object' and not
 `float[::1]`, is that correct?

>>> That's the current behaviour, but it would be better if it inferred a
>>> memoryview slice instead.
>>
>> +1
>
> +1. This looks like it would break inference of extension classes as well.
>
> https://github.com/vitek/cython/commit/f5acf44be0f647bdcbb5a23c8bfbceff48f4414e#L0R336
>
> could be changed to check if it's already a py_object_type (or memory
> view) as a quick fix, but it's not as pure as adding the constraints
> "can be del'ed" to the type inference engine.
>

Here is the fixed version:

https://github.com/vitek/cython/commit/0f122b6dfb6d0c7932b08cc35cdcc90c3c30257b#L0R334


-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] pure mode quirk

2012-05-10 Thread mark florisson
On 10 May 2012 10:25, Stefan Behnel  wrote:
> Hi,
>
> when declaring a C function in pure mode, you eventually end up with this:
>
>    @cython.cfunc
>    @cython.returns(cython.bint)
>    @cython.locals(a=cython.int, b=cython.int, c=cython.int)
>    def c_compare(a,b):
>        c = 5
>        return a == b + c
>
> That is very verbose, making it hard to find the name of the actual
> function. It's also not very intuitive that @cython.locals() is the way to
> declare arguments.
>
> I would find it more readable to support this:
>
>    @cython.cfunc(cython.bint, a=cython.int, b=cython.int)
>    @cython.locals(c=cython.int)
>    def c_compare(a,b):
>        c = 5
>        return a == b
>
> But the problem here is that it conflicts with
>
>    @cython.cfunc
>    def c_compare(a,b):
>        c = 5
>        return a == b
>
> when executed from Shadow.py. How should the fake decorator know that it is
> being called with a type as first argument and not with the function it
> decorates? Legacy, legacy ...

I personally don't care much for pure mode, but it could just do an
instance check for a function. You only accept real def functions
anyway.
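
I.e., something along these lines in Shadow.py (just a sketch):

    import types

    def cfunc(arg=None, **kwargs):
        if isinstance(arg, types.FunctionType):
            return arg            # bare @cython.cfunc on a def function
        def decorator(f):         # called form, first argument is a type
            return f
        return decorator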

> An alternative would be this:
>
>    @cython.cfunc(a=cython.int, b=cython.int, _returns=cython.bint)
>    @cython.locals(c=cython.int)
>    def c_compare(a,b):
>        c = 5
>        return a == b
>
> But that's not clearer than an explicit decorator for the return value.
>
> I'm somewhat concerned about the redundancy this introduces with @locals(),
> which could still be used to declare argument types (even conflicting
> ones). However, getting rid of the need for a separate @returns() seems
> worthwhile by itself, so this might provide a compromise:
>
>    @cython.cfunc(returns=cython.bint)
>    @cython.locals(a=cython.int, b=cython.int, c=cython.int)
>    def c_compare(a,b):
>        c = 5
>        return a == b + c
>
> This would work in Shadow.py because it's easy to distinguish between a
> positional argument (the decorated function) and a keyword argument
> ("returns"). It might lead to bugs in user code, though, if they forget to
> pass the return type as a keyword argument. Maybe just a minor concern,
> because the decorator doesn't read well without the keyword.
>
> What do you think? Is this worth doing something about at all?
>
> Stefan
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] pure mode quirk

2012-05-10 Thread Stefan Behnel
mark florisson, 10.05.2012 11:41:
> On 10 May 2012 10:25, Stefan Behnel wrote:
>> when declaring a C function in pure mode, you eventually end up with this:
>>
>>@cython.cfunc
>>@cython.returns(cython.bint)
>>@cython.locals(a=cython.int, b=cython.int, c=cython.int)
>>def c_compare(a,b):
>>c = 5
>>return a == b + c
>>
>> That is very verbose, making it hard to find the name of the actual
>> function. It's also not very intuitive that @cython.locals() is the way to
>> declare arguments.
>>
>> I would find it more readable to support this:
>>
>>@cython.cfunc(cython.bint, a=cython.int, b=cython.int)
>>@cython.locals(c=cython.int)
>>def c_compare(a,b):
>>c = 5
>>return a == b
>>
>> But the problem here is that it conflicts with
>>
>>@cython.cfunc
>>def c_compare(a,b):
>>c = 5
>>return a == b
>>
>> when executed from Shadow.py. How should the fake decorator know that it is
>> being called with a type as first argument and not with the function it
>> decorates? Legacy, legacy ...
> 
> I personally don't care much for pure mode, but it could just do an
> instance check for a function. You only accept real def functions
> anyway.

Hmm, maybe, yes. IIRC, non-Cython decorators are otherwise forbidden on
cdef functions (but also on cpdef functions?), so the case that another
decorator replaces the function with something else in between isn't very
likely to occur.

In any case, the fix would be to change the decorator order to move the
Cython decorators right at the function declaration. Not sure if that'd be
completely obvious to everyone, but, as I said, not very likely to be a
problem ...

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] pure mode quirk

2012-05-10 Thread Robert Bradshaw
On Thu, May 10, 2012 at 2:25 AM, Stefan Behnel  wrote:
> Hi,
>
> when declaring a C function in pure mode, you eventually end up with this:
>
>    @cython.cfunc
>    @cython.returns(cython.bint)
>    @cython.locals(a=cython.int, b=cython.int, c=cython.int)
>    def c_compare(a,b):
>        c = 5
>        return a == b + c
>
> That is very verbose, making it hard to find the name of the actual
> function. It's also not very intuitive that @cython.locals() is the way to
> declare arguments.
>
> I would find it more readable to support this:
>
>    @cython.cfunc(cython.bint, a=cython.int, b=cython.int)
>    @cython.locals(c=cython.int)
>    def c_compare(a,b):
>        c = 5
>        return a == b
>
> But the problem here is that it conflicts with
>
>    @cython.cfunc
>    def c_compare(a,b):
>        c = 5
>        return a == b
>
> when executed from Shadow.py. How should the fake decorator know that it is
> being called with a type as first argument and not with the function it
> decorates? Legacy, legacy ...
>
> An alternative would be this:
>
>    @cython.cfunc(a=cython.int, b=cython.int, _returns=cython.bint)
>    @cython.locals(c=cython.int)
>    def c_compare(a,b):
>        c = 5
>        return a == b
>
> But that's not clearer than an explicit decorator for the return value.
>
> I'm somewhat concerned about the redundancy this introduces with @locals(),
> which could still be used to declare argument types (even conflicting
> ones). However, getting rid of the need for a separate @returns() seems
> worthwhile by itself, so this might provide a compromise:
>
>    @cython.cfunc(returns=cython.bint)
>    @cython.locals(a=cython.int, b=cython.int, c=cython.int)
>    def c_compare(a,b):
>        c = 5
>        return a == b + c
>
> This would work in Shadow.py because it's easy to distinguish between a
> positional argument (the decorated function) and a keyword argument
> ("returns"). It might lead to bugs in user code, though, if they forget to
> pass the return type as a keyword argument. Maybe just a minor concern,
> because the decorator doesn't read well without the keyword.
>
> What do you think? Is this worth doing something about at all?

I didn't implement it this way originally because of the whole
called/not-called ambiguity, but I didn't think about taking a keyword
and using that to distinguish. (Testing the type of the input seemed
too hackish...) I'm +1 on this, as well as on accepting argument types
in the cfunc decorator. There is a bit of overlap as "returns" has a
special meaning (vs. an argument named "returns"), but I think that's
OK, and cython.locals should still work.

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.16.1

2012-05-10 Thread mark florisson
On 9 May 2012 08:49, Stefan Behnel  wrote:
> Stefan Behnel, 09.05.2012 08:41:
>> Robert Bradshaw, 09.05.2012 00:16:
>>> If we're looking at doing 0.17 soon, lets just do that.
>> I think it's close enough to be released.
>
> ... although one thing I just noticed is that the "numpy_memoryview" test
> is still disabled because it led to crashes in recent Py3.2 releases (and
> thus most likely also in the latest Py3k). Not sure if it still crashes,
> but should be checked before going for a release.
>
> Stefan
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel

Hurgh. Disabling tests in bugs.txt is terrible, there should have been
a comment in numpy_memoryview saying DISABLED and the testcase
function should have been a noop. Your commit
e3838e42c4b6f67f180d06b8cd75566f3380ab95 broke how typedef types are
compared, which makes the test get a temporary of the wrong type. Let
me try reverting that commit, what was it needed for?
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.16.1

2012-05-10 Thread Stefan Behnel
mark florisson, 10.05.2012 21:13:
> On 9 May 2012 08:49, Stefan Behnel wrote:
>> Stefan Behnel, 09.05.2012 08:41:
>>> Robert Bradshaw, 09.05.2012 00:16:
 If we're looking at doing 0.17 soon, lets just do that.
>>> I think it's close enough to be released.
>>
>> ... although one thing I just noticed is that the "numpy_memoryview" test
>> is still disabled because it led to crashes in recent Py3.2 releases (and
>> thus most likely also in the latest Py3k). Not sure if it still crashes,
>> but should be checked before going for a release.
> 
> Hurgh. Disabling tests in bugs.txt is terrible, there should have been
> a comment in numpy_memoryview saying DISABLED and the testcase
> function should have been a noop.

... or have a release mode in the test runner that barks at disabled tests.


> Your commit
> e3838e42c4b6f67f180d06b8cd75566f3380ab95 broke how typedef types are
> compared, which makes the test get a temporary of the wrong type. Let
> me try reverting that commit, what was it needed for?

It was meant to fix the comparison of different char* ctypedefs. However,
seeing it in retrospect now, it would definitely break user code to compare
ctypedefs by their underlying base type because it's common for users to be
lax about ctypedefs, e.g. for integer types.

I think a better (and substantially safer) way to do it would be to use the
hash value of the underlying declared type, but to make the equals
comparison based on the typedef-ed name. That, plus a special case
treatment for char* compatible types.
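
Roughly what I mean, as a sketch (not written against the actual PyrexTypes
code):

    class CTypedefTypeSketch(object):
        def __init__(self, typedef_name, base_type):
            self.typedef_name = typedef_name
            self.base_type = base_type

        def __hash__(self):
            # hash via the underlying declared type ...
            return hash(self.base_type)

        def __eq__(self, other):
            # ... but compare by the typedef-ed name, so that distinct
            # ctypedefs don't silently collapse into one type
            # (the special-casing for char*-compatible types is left out here)
            if isinstance(other, CTypedefTypeSketch):
                return self.typedef_name == other.typedef_name
            return NotImplemented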

Thanks for figuring out the problem.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.16.1

2012-05-10 Thread mark florisson
On 10 May 2012 21:21, Stefan Behnel  wrote:
> mark florisson, 10.05.2012 21:13:
>> On 9 May 2012 08:49, Stefan Behnel wrote:
>>> Stefan Behnel, 09.05.2012 08:41:
 Robert Bradshaw, 09.05.2012 00:16:
> If we're looking at doing 0.17 soon, lets just do that.
 I think it's close enough to be released.
>>>
>>> ... although one thing I just noticed is that the "numpy_memoryview" test
>>> is still disabled because it led to crashes in recent Py3.2 releases (and
>>> thus most likely also in the latest Py3k). Not sure if it still crashes,
>>> but should be checked before going for a release.
>>
>> Hurgh. Disabling tests in bugs.txt is terrible, there should have been
>> a comment in numpy_memoryview saying DISABLED and the testcase
>> function should have been a noop.
>
> ... or have a release mode in the test runner that barks at disabled tests.
>
>
>> Your commit
>> e3838e42c4b6f67f180d06b8cd75566f3380ab95 broke how typedef types are
>> compared, which makes the test get a temporary of the wrong type. Let
>> me try reverting that commit, what was it needed for?
>
> It was meant to fix the comparison of different char* ctypedefs. However,
> seeing it in retrospect now, it would definitely break user code to compare
> ctypedefs by their underlying base type because it's common for users to be
> lax about ctypedefs, e.g. for integer types.
>
> I think a better (and substantially safer) way to do it would be to use the
> hash value of the underlying declared type, but to make the equals
> comparison based on the typedef-ed name. That, plus a special case
> treatment for char* compatible types.

Yeah, I was thinking the same thing. I pushed a revert commit; if
you want, you can try out that scheme and see if it works.

> Thanks for figuring out the problem.

No problem. We learned that disabled tests aren't very good for
continuous integration now :)

> Stefan
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


[Cython] weird declarations in fused types C code

2012-05-10 Thread Stefan Behnel
Hi,

while trying to replace the "import sys; if sys.version_info >= (3,0)" in
the fused types dispatch code with the more straightforward "if
PY_MAJOR_VERSION >= 3" (before I came to think that this particular case
only guards useless code that does the wrong thing), I noticed that the
code generates a declaration of PyErr_Clear() into the outside environment.
When used in cdef classes, this leads to an external method being declared
in the class, essentially like this:

    cdef class MyClass:
        cdef extern from *:
            void PyErr_Clear()

Surprisingly enough, this actually works. Cython assigns the real C-API
function pointer to it during type initialisation and even calls the
function directly (instead of going through the vtab) when used. A rather
curious feature that I would never have thought of.

Anyway, this side effect is obviously a bug in the fused types dispatch,
but I don't have a good idea on how to fix it. I'm sure Mark put some
thought into this while trying hard to make it work and just didn't notice
the impact on type namespaces.

I've put up a pull request to remove the Py3 specialisation code, but this
is worth some more consideration.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel