Re: [Cython] callable() optimization

2012-05-09 Thread Stefan Behnel
Vitja Makarov, 08.05.2012 13:27:
> I've noticed regression related to callable() optimization.
> 
> https://github.com/cython/cython/commit/a40112b0461eae5ab22fbdd07ae798d4a72ff523
> 
> class C:
> pass
> print callable(C())
> 
> It prints True: the optimized version checks the ((obj)->ob_type->tp_call !=
> NULL) condition, which is True for both the class and its instances.
> 
> >>> help(callable)
> callable(...)
> callable(object) -> bool
> 
> Return whether the object is callable (i.e., some kind of function).
> Note that classes are callable, as are instances with a __call__() method.

Ah, right - old style classes are special cased in Py2.

I'll make this a Py3-only optimisation then.
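
A minimal sketch of what such a version guard could look like (an
assumption about the approach, not the actual Cython utility code):

"""
#include <Python.h>

/* Hypothetical helper: only inline the cheap slot check on Py3, where
   old-style instances no longer exist; on Py2, fall back to
   PyCallable_Check(), which special-cases them via a __call__ lookup. */
static int __Pyx_Callable_sketch(PyObject *obj) {
#if PY_MAJOR_VERSION >= 3
    return Py_TYPE(obj)->tp_call != NULL;
#else
    return PyCallable_Check(obj);
#endif
}
"""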

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] callable() optimization

2012-05-09 Thread Vitja Makarov
2012/5/9 Stefan Behnel :
> Vitja Makarov, 08.05.2012 13:27:
>> I've noticed regression related to callable() optimization.
>>
>> https://github.com/cython/cython/commit/a40112b0461eae5ab22fbdd07ae798d4a72ff523
>>
>> class C:
>>     pass
>> print callable(C())
>>
>> It prints True optimized version checks ((obj)->ob_type->tp_call !=
>> NULL) condition that is True for both class and instance.
>>
>> >>> help(callable)
>> callable(...)
>>     callable(object) -> bool
>>
>>     Return whether the object is callable (i.e., some kind of function).
>>     Note that classes are callable, as are instances with a __call__() 
>> method.
>
> Ah, right - old style classes are special cased in Py2.
>
> I'll make this a Py3-only optimisation then.
>

I don't see a difference between py2 and py3 here:

Python 3.2.3 (default, May  3 2012, 15:51:42)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class Foo: pass
...
>>> callable(Foo())
False
>>>

There is PyCallable_Check() CPython function:

int
PyCallable_Check(PyObject *x)
{
    if (x == NULL)
        return 0;
    if (PyInstance_Check(x)) {
        PyObject *call = PyObject_GetAttrString(x, "__call__");
        if (call == NULL) {
            PyErr_Clear();
            return 0;
        }
        /* Could test recursively but don't, for fear of endless
           recursion if some joker sets self.__call__ = self */
        Py_DECREF(call);
        return 1;
    }
    else {
        return x->ob_type->tp_call != NULL;
    }
}



-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.16.1

2012-05-09 Thread Stefan Behnel
Stefan Behnel, 09.05.2012 08:41:
> Robert Bradshaw, 09.05.2012 00:16:
>> If we're looking at doing 0.17 soon, lets just do that.
> I think it's close enough to be released.

... although one thing I just noticed is that the "numpy_memoryview" test
is still disabled because it led to crashes in recent Py3.2 releases (and
thus most likely also in the latest Py3k). Not sure if it still crashes,
but should be checked before going for a release.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Vitja Makarov
2012/5/9 Stefan Behnel :
> Robert Bradshaw, 09.05.2012 00:12:
>> On Tue, May 8, 2012 at 6:47 AM, Vitja Makarov wrote:
>>> 2012/5/8 Stefan Behnel:
 Vitja has rebased the type inference on the control flow, so I wonder if
 this will enable us to properly infer this:

  def partial_validity():
    """
    >>> partial_validity()
    ('Python object', 'double', 'str object')
    """
    a = 1.0
    b = a + 2   # definitely double
    a = 'test'
    c = a + 'toast'  # definitely str
    return typeof(a), typeof(b), typeof(c)

 I think, what is mainly needed for this is that a NameNode with an
 undeclared type should not report its own entry as dependency but that of
 its own cf_assignments. Would this work?

 (Haven't got the time to try it out right now, so I'm dumping it here.)

>>>
>>> Yeah, that might work. The other way to go is to split entries:
>>>
>>>  def partial_validity():
>>>   """
>>>   >>> partial_validity()
>>>   ('str object', 'double', 'str object')
>>>   """
>>>   a_1 = 1.0
>>>   b = a_1 + 2   # definitely double
>>>   a_2 = 'test'
>>>   c = a_2 + 'toast'  # definitely str
>>>   return typeof(a_2), typeof(b), typeof(c)
>>>
>>> And this should work better because it allows to infer a_1 as a double
>>> and a_2 as a string.
>>
>> This already works, right?
>
> It would work if it was implemented. *wink*
>
>
>> I agree it's nicer in general to split
>> things up, but not being able to optimize a loop variable because it
>> was used earlier or later in a different context is a disadvantage of
>> the current system.
>
> Absolutely. I was considering entry splitting more of a "soon, maybe not
> now" type of thing because it isn't entire clear to me what needs to be
> done. It may not even be all that hard to implement, but I think it's more
> than just a local change in the scope implementation because the current
> lookup_here() doesn't know what node is asking.
>

That could be done the following way:
 - Before running type inference, find independent assignment groups
   and split entries
 - Run type inference
 - Join entries of the same type or of PyObject base type
 - Then change names to private ones "{old_name}.{index}"
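
Roughly, the first step could look like this (a hypothetical sketch, not
Cython's implementation): union-find over one variable's assignments,
merging any two assignments that can reach the same use.

"""
#include <stdio.h>

#define N_ASSIGN 3   /* three assignments to 'a' in an imagined function */

static int parent[N_ASSIGN];

static int find(int x) {
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];   /* path halving */
        x = parent[x];
    }
    return x;
}

static void merge(int a, int b) { parent[find(a)] = find(b); }

int main(void) {
    /* Assignments:  0: a = 1.0   1: a = 2.0 (in a branch)   2: a = 'test'
       Reaching assignments per use of 'a', as the CFG would report them:
         b = a + 2     <- {0}
         c = a * 3     <- {0, 1}  -> 0 and 1 must share an entry
         d = a + '!'   <- {2}     -> 2 gets its own entry                  */
    int reaching[3][N_ASSIGN] = {
        {0, -1, -1},
        {0,  1, -1},
        {2, -1, -1},
    };
    int i, j;

    for (i = 0; i < N_ASSIGN; i++)
        parent[i] = i;

    for (i = 0; i < 3; i++)
        for (j = 1; j < N_ASSIGN && reaching[i][j] >= 0; j++)
            merge(reaching[i][0], reaching[i][j]);

    for (i = 0; i < N_ASSIGN; i++)
        printf("assignment %d -> entry group %d\n", i, find(i));
    return 0;
}
"""

Here the float assignments share one entry while the str assignment gets
another, analogous to the a_1/a_2 split in the example above; the groups
then get the private names in the last step.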

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] callable() optimization

2012-05-09 Thread Stefan Behnel
Vitja Makarov, 09.05.2012 09:43:
> 2012/5/9 Stefan Behnel :
>> Vitja Makarov, 08.05.2012 13:27:
>>> I've noticed regression related to callable() optimization.
>>>
>>> https://github.com/cython/cython/commit/a40112b0461eae5ab22fbdd07ae798d4a72ff523
>>>
>>> class C:
>>> pass
>>> print callable(C())
>>>
>>> It prints True optimized version checks ((obj)->ob_type->tp_call !=
>>> NULL) condition that is True for both class and instance.
>>>
>> help(callable)
>>> callable(...)
>>> callable(object) -> bool
>>>
>>> Return whether the object is callable (i.e., some kind of function).
>>> Note that classes are callable, as are instances with a __call__() 
>>> method.
>>
>> Ah, right - old style classes are special cased in Py2.
>>
>> I'll make this a Py3-only optimisation then.
>>
> 
> I don't see difference between py2 and py3 here:
> 
> Python 3.2.3 (default, May  3 2012, 15:51:42)
> [GCC 4.6.3] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> class Foo: pass
> ...
> >>> callable(Foo())
> False
> >>>
> 
> There is PyCallable_Check() CPython function:
> 
> int
> PyCallable_Check(PyObject *x)
> {
> if (x == NULL)
> return 0;
> if (PyInstance_Check(x)) {
> PyObject *call = PyObject_GetAttrString(x, "__call__");
> if (call == NULL) {
> PyErr_Clear();
> return 0;
> }
> /* Could test recursively but don't, for fear of endless
>recursion if some joker sets self.__call__ = self */
> Py_DECREF(call);
> return 1;
> }
> else {
> return x->ob_type->tp_call != NULL;
> }
> }

That's the Py2 version. In Py3, it looks as follows, because old-style
"instances" no longer exist:

"""
int
PyCallable_Check(PyObject *x)
{
    if (x == NULL)
        return 0;
    return x->ob_type->tp_call != NULL;
}
"""

That's what I had initially based my optimisation on.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] callable() optimization

2012-05-09 Thread Vitja Makarov
2012/5/9 Stefan Behnel :
> Vitja Makarov, 09.05.2012 09:43:
>> 2012/5/9 Stefan Behnel :
>>> Vitja Makarov, 08.05.2012 13:27:
 I've noticed regression related to callable() optimization.

 https://github.com/cython/cython/commit/a40112b0461eae5ab22fbdd07ae798d4a72ff523

 class C:
     pass
 print callable(C())

 It prints True optimized version checks ((obj)->ob_type->tp_call !=
 NULL) condition that is True for both class and instance.

>>> help(callable)
 callable(...)
     callable(object) -> bool

     Return whether the object is callable (i.e., some kind of function).
     Note that classes are callable, as are instances with a __call__() 
 method.
>>>
>>> Ah, right - old style classes are special cased in Py2.
>>>
>>> I'll make this a Py3-only optimisation then.
>>>
>>
>> I don't see difference between py2 and py3 here:
>>
>> Python 3.2.3 (default, May  3 2012, 15:51:42)
>> [GCC 4.6.3] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> class Foo: pass
>> ...
>> >>> callable(Foo())
>> False
>> >>>
>>
>> There is PyCallable_Check() CPython function:
>>
>> int
>> PyCallable_Check(PyObject *x)
>> {
>>     if (x == NULL)
>>         return 0;
>>     if (PyInstance_Check(x)) {
>>         PyObject *call = PyObject_GetAttrString(x, "__call__");
>>         if (call == NULL) {
>>             PyErr_Clear();
>>             return 0;
>>         }
>>         /* Could test recursively but don't, for fear of endless
>>            recursion if some joker sets self.__call__ = self */
>>         Py_DECREF(call);
>>         return 1;
>>     }
>>     else {
>>         return x->ob_type->tp_call != NULL;
>>     }
>> }
>
> That's the Py2 version. In Py3, it looks as follows, because old-style
> "instances" no longer exist:
>
> """
> int
> PyCallable_Check(PyObject *x)
> {
>        if (x == NULL)
>                return 0;
>        return x->ob_type->tp_call != NULL;
> }
> """
>
> That's what I had initially based my optimisation on.
>

Ok, so why don't you want to use PyCallable_Check() in all cases?

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread mark florisson
On 9 May 2012 09:02, Vitja Makarov  wrote:
> 2012/5/9 Stefan Behnel :
>> Robert Bradshaw, 09.05.2012 00:12:
>>> On Tue, May 8, 2012 at 6:47 AM, Vitja Makarov wrote:
 2012/5/8 Stefan Behnel:
> Vitja has rebased the type inference on the control flow, so I wonder if
> this will enable us to properly infer this:
>
>  def partial_validity():
>    """
>    >>> partial_validity()
>    ('Python object', 'double', 'str object')
>    """
>    a = 1.0
>    b = a + 2   # definitely double
>    a = 'test'
>    c = a + 'toast'  # definitely str
>    return typeof(a), typeof(b), typeof(c)
>
> I think, what is mainly needed for this is that a NameNode with an
> undeclared type should not report its own entry as dependency but that of
> its own cf_assignments. Would this work?
>
> (Haven't got the time to try it out right now, so I'm dumping it here.)
>

 Yeah, that might work. The other way to go is to split entries:

  def partial_validity():
   """
   >>> partial_validity()
   ('str object', 'double', 'str object')
   """
   a_1 = 1.0
   b = a_1 + 2   # definitely double
   a_2 = 'test'
   c = a_2 + 'toast'  # definitely str
   return typeof(a_2), typeof(b), typeof(c)

 And this should work better because it allows to infer a_1 as a double
 and a_2 as a string.
>>>
>>> This already works, right?
>>
>> It would work if it was implemented. *wink*
>>
>>
>>> I agree it's nicer in general to split
>>> things up, but not being able to optimize a loop variable because it
>>> was used earlier or later in a different context is a disadvantage of
>>> the current system.
>>
>> Absolutely. I was considering entry splitting more of a "soon, maybe not
>> now" type of thing because it isn't entire clear to me what needs to be
>> done. It may not even be all that hard to implement, but I think it's more
>> than just a local change in the scope implementation because the current
>> lookup_here() doesn't know what node is asking.
>>
>
> That could be done the following way:
>  - Before running type inference find independent assignment groups
> and split entries
>  - Run type inference
>  - Join entries of the same type or of PyObject base type
>  - Then change names to private ones "{old_name}.{index}"

Sounds like a good approach. Do you think it would be useful, when a
variable can be type-inferred at some point but not at another point in
the function, to specialize for both the first type you find and
object? i.e.

i = 0
while something:
    use i
    i = something_not_inferred()

and specialize on 'i' being an int? Bonus points maybe :)

If these entries are different depending on control flow, it's
basically a form of SSA, which is cool. Then optimizations like
none-checking, bounds checking, wraparound etc. can insert a single
check for each new variable (for bounds checking it depends on the
entire expression, but...). The only thing I'm not entirely sure about
is when the user eliminates your check through try/finally or
try/except, e.g.

try:
    buf[i]
except IndexError:
    print "no worries"

buf[i]

Here you basically want a new (virtual) reference of "i". Maybe that
could just be handled in the optimization transform though, where it
invalidates the previous check (especially since there is no
assignment here).

> --
> vitja.
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.16.1

2012-05-09 Thread mark florisson
On 9 May 2012 07:41, Stefan Behnel  wrote:
> Robert Bradshaw, 09.05.2012 00:16:
>> On Tue, May 8, 2012 at 12:04 PM, Vitja Makarov wrote:
>>> 2012/5/8 mark florisson:
 On 8 May 2012 19:36, mark florisson wrote:
> Ok, so for the bugfix release 0.16.1 I propose that everyone cherry
> picks over its own fixes into the release branch (at least Stefan,
> since your fixes pertain to your newly merged branches and sometimes
> to the master branch itself). This branch should not be merged back
> into master, and any additional fixes should go into master and be
> picked over to release.
>
> Some things that should still be fixed:
>    - nonechecks for memoryviews
>    - memoryview documentation
>    - more?
>
> We can then shortly-ish after release 0.17 with actual features (and
> new bugs, lets call those features too), depending on how many bugs
> are still found in 0.16.1.

 TBH, if we're actually close to a major release, the usefulness of a
 bugfix release is imho not that great.
>>>
>>> There are some fixes to generators implementation that depend on
>>> "yield from" that can't be easily cherry-picked.
>>> So I think you're right about 0.17 release. But new features may
>>> introduce new bugs and we'll have to release 0.17.1 soon.
>>
>> If we're looking at doing 0.17 soon, lets just do that.
>
> I think it's close enough to be released. I'll try to get around to list
> the changes in the release notes (and maybe even add a note about alpha
> quality PyPy support to the docs), but I wouldn't mind if someone else was
> quicker, at least for a start. ;)
>
>
>> In the future,
>> we could have a bugfix branch that all bugfixes get checked into,
>> regularly merged into master, which we could release more often as
>> x.y.z releases.
>
> +11. We have the release branch for that, it just hasn't been used much
> since the last release.

Yeah, I like it too. It's much easier than cherry-picking stuff over
in a large history, where fixes may depend (partially) on features.

> I also don't mind releasing a 0.16.1 shortly before (or even after) a 0.17.
> Distributors (e.g. Debian) often try to stick to a given release series
> during their support time frame (usually more than a year), so unless we
> release fixes, they'll end up cherry picking or porting their own fixes,
> each on their own. Applying at least the obvious fixes to the release
> branch and then merging it into the master from there would make it easier
> for them.

Debian stable? :) Good point though, I think we can manage that.

> Stefan
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.16.1

2012-05-09 Thread mark florisson
On 9 May 2012 08:49, Stefan Behnel  wrote:
> Stefan Behnel, 09.05.2012 08:41:
>> Robert Bradshaw, 09.05.2012 00:16:
>>> If we're looking at doing 0.17 soon, lets just do that.
>> I think it's close enough to be released.
>
> ... although one thing I just noticed is that the "numpy_memoryview" test
> is still disabled because it lead to crashes in recent Py3.2 releases (and
> thus most likely also in the latest Py3k). Not sure if it still crashes,
> but should be checked before going for a release.

Hm, all the tests or just one? Was that the problem with gc_refs != 0?
That should be fixed now.

> Stefan
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] callable() optimization

2012-05-09 Thread Stefan Behnel
Vitja Makarov, 09.05.2012 10:21:
> 2012/5/9 Stefan Behnel :
>> Vitja Makarov, 09.05.2012 09:43:
>>> 2012/5/9 Stefan Behnel :
 Vitja Makarov, 08.05.2012 13:27:
> I've noticed regression related to callable() optimization.
>
> https://github.com/cython/cython/commit/a40112b0461eae5ab22fbdd07ae798d4a72ff523
>
> class C:
> pass
> print callable(C())
>
> It prints True optimized version checks ((obj)->ob_type->tp_call !=
> NULL) condition that is True for both class and instance.
>
 help(callable)
> callable(...)
> callable(object) -> bool
>
> Return whether the object is callable (i.e., some kind of function).
> Note that classes are callable, as are instances with a __call__() 
> method.

 Ah, right - old style classes are special cased in Py2.

 I'll make this a Py3-only optimisation then.

>>>
>>> I don't see difference between py2 and py3 here:
>>>
>>> Python 3.2.3 (default, May  3 2012, 15:51:42)
>>> [GCC 4.6.3] on linux2
>>> Type "help", "copyright", "credits" or "license" for more information.
>> class Foo: pass
>>> ...
>> callable(Foo())
>>> False
>>
>>>
>>> There is PyCallable_Check() CPython function:
>>>
>>> int
>>> PyCallable_Check(PyObject *x)
>>> {
>>> if (x == NULL)
>>> return 0;
>>> if (PyInstance_Check(x)) {
>>> PyObject *call = PyObject_GetAttrString(x, "__call__");
>>> if (call == NULL) {
>>> PyErr_Clear();
>>> return 0;
>>> }
>>> /* Could test recursively but don't, for fear of endless
>>>recursion if some joker sets self.__call__ = self */
>>> Py_DECREF(call);
>>> return 1;
>>> }
>>> else {
>>> return x->ob_type->tp_call != NULL;
>>> }
>>> }
>>
>> That's the Py2 version. In Py3, it looks as follows, because old-style
>> "instances" no longer exist:
>>
>> """
>> int
>> PyCallable_Check(PyObject *x)
>> {
>>if (x == NULL)
>>return 0;
>>return x->ob_type->tp_call != NULL;
>> }
>> """
>>
>> That's what I had initially based my optimisation on.
> 
> Ok, so why don't you want to use PyCallable_Check() in all cases?

Well, maybe this isn't performance critical enough to merit inlining. Do
you think it matters?

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] callable() optimization

2012-05-09 Thread mark florisson
On 9 May 2012 09:02, Stefan Behnel  wrote:
> Vitja Makarov, 09.05.2012 09:43:
>> 2012/5/9 Stefan Behnel :
>>> Vitja Makarov, 08.05.2012 13:27:
 I've noticed regression related to callable() optimization.

 https://github.com/cython/cython/commit/a40112b0461eae5ab22fbdd07ae798d4a72ff523

 class C:
     pass
 print callable(C())

 It prints True optimized version checks ((obj)->ob_type->tp_call !=
 NULL) condition that is True for both class and instance.

>>> help(callable)
 callable(...)
     callable(object) -> bool

     Return whether the object is callable (i.e., some kind of function).
     Note that classes are callable, as are instances with a __call__() 
 method.
>>>
>>> Ah, right - old style classes are special cased in Py2.
>>>
>>> I'll make this a Py3-only optimisation then.
>>>
>>
>> I don't see difference between py2 and py3 here:
>>
>> Python 3.2.3 (default, May  3 2012, 15:51:42)
>> [GCC 4.6.3] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> class Foo: pass
>> ...
>> >>> callable(Foo())
>> False
>> >>>
>>
>> There is PyCallable_Check() CPython function:
>>
>> int
>> PyCallable_Check(PyObject *x)
>> {
>>     if (x == NULL)
>>         return 0;
>>     if (PyInstance_Check(x)) {
>>         PyObject *call = PyObject_GetAttrString(x, "__call__");
>>         if (call == NULL) {
>>             PyErr_Clear();
>>             return 0;
>>         }
>>         /* Could test recursively but don't, for fear of endless
>>            recursion if some joker sets self.__call__ = self */
>>         Py_DECREF(call);
>>         return 1;
>>     }
>>     else {
>>         return x->ob_type->tp_call != NULL;
>>     }
>> }
>
> That's the Py2 version. In Py3, it looks as follows, because old-style
> "instances" no longer exist:
>
> """
> int
> PyCallable_Check(PyObject *x)
> {
>        if (x == NULL)
>                return 0;
>        return x->ob_type->tp_call != NULL;
> }
> """
>
> That's what I had initially based my optimisation on.
>
> Stefan
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel

Huh, so __call__ in a user-defined new-style class could never end up
in ob_type.tp_call, right? So how can it avoid that dict lookup?
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread mark florisson
On 9 May 2012 09:28, mark florisson  wrote:
> On 9 May 2012 09:02, Vitja Makarov  wrote:
>> 2012/5/9 Stefan Behnel :
>>> Robert Bradshaw, 09.05.2012 00:12:
 On Tue, May 8, 2012 at 6:47 AM, Vitja Makarov wrote:
> 2012/5/8 Stefan Behnel:
>> Vitja has rebased the type inference on the control flow, so I wonder if
>> this will enable us to properly infer this:
>>
>>  def partial_validity():
>>    """
>>    >>> partial_validity()
>>    ('Python object', 'double', 'str object')
>>    """
>>    a = 1.0
>>    b = a + 2   # definitely double
>>    a = 'test'
>>    c = a + 'toast'  # definitely str
>>    return typeof(a), typeof(b), typeof(c)
>>
>> I think, what is mainly needed for this is that a NameNode with an
>> undeclared type should not report its own entry as dependency but that of
>> its own cf_assignments. Would this work?
>>
>> (Haven't got the time to try it out right now, so I'm dumping it here.)
>>
>
> Yeah, that might work. The other way to go is to split entries:
>
>  def partial_validity():
>   """
>   >>> partial_validity()
>   ('str object', 'double', 'str object')
>   """
>   a_1 = 1.0
>   b = a_1 + 2   # definitely double
>   a_2 = 'test'
>   c = a_2 + 'toast'  # definitely str
>   return typeof(a_2), typeof(b), typeof(c)
>
> And this should work better because it allows to infer a_1 as a double
> and a_2 as a string.

 This already works, right?
>>>
>>> It would work if it was implemented. *wink*
>>>
>>>
 I agree it's nicer in general to split
 things up, but not being able to optimize a loop variable because it
 was used earlier or later in a different context is a disadvantage of
 the current system.
>>>
>>> Absolutely. I was considering entry splitting more of a "soon, maybe not
>>> now" type of thing because it isn't entire clear to me what needs to be
>>> done. It may not even be all that hard to implement, but I think it's more
>>> than just a local change in the scope implementation because the current
>>> lookup_here() doesn't know what node is asking.
>>>
>>
>> That could be done the following way:
>>  - Before running type inference find independent assignment groups
>> and split entries
>>  - Run type inference
>>  - Join entries of the same type or of PyObject base type
>>  - Then change names to private ones "{old_name}.{index}"
>
> Sounds like a good approach. Do you think it would be useful if a
> variable can be type inferred at some point, but at no other point in
> the function, to specialize for both the first type you find and
> object? i.e.
>
> i = 0
> while something:
>    use i
>    i = something_not_inferred()
>
> and specialize on 'i' being an int? Bonus points maybe :)
>
> If these entries are different depending on control flow, it's
> basically a form of ssa, which is cool.

You could reuse entry cnames if you re-encounter the same type though,
but it would be nice if they were different, uniquely referenceable
objects if they originate from different assignment or merge points.

> Then optimizations like
> none-checking, boundschecking, wraparound etc can, for each new
> variable insert a single check (for bounds checking it depends on the
> entire expression, but...). The only thing I'm not entirely sure about
> is this when the user eliminates your check through try/finally or
> try/except, e.g.
>
> try:
>    buf[i]
> except IndexError:
>    print "no worries"
>
> buf[i]
>
> Here you basically want a new (virtual) reference of "i". Maybe that
> could just be handled in the optimization transform though, where it
> invalidates the previous check (especially since there is no
> assignment here).
>
>> --
>> vitja.
>> ___
>> cython-devel mailing list
>> cython-devel@python.org
>> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] callable() optimization

2012-05-09 Thread Stefan Behnel
mark florisson, 09.05.2012 10:37:
> On 9 May 2012 09:02, Stefan Behnel  wrote:
>> Vitja Makarov, 09.05.2012 09:43:
>>> 2012/5/9 Stefan Behnel :
 Vitja Makarov, 08.05.2012 13:27:
> I've noticed regression related to callable() optimization.
>
> https://github.com/cython/cython/commit/a40112b0461eae5ab22fbdd07ae798d4a72ff523
>
> class C:
> pass
> print callable(C())
>
> It prints True optimized version checks ((obj)->ob_type->tp_call !=
> NULL) condition that is True for both class and instance.
>
 help(callable)
> callable(...)
> callable(object) -> bool
>
> Return whether the object is callable (i.e., some kind of function).
> Note that classes are callable, as are instances with a __call__() 
> method.

 Ah, right - old style classes are special cased in Py2.

 I'll make this a Py3-only optimisation then.

>>>
>>> I don't see difference between py2 and py3 here:
>>>
>>> Python 3.2.3 (default, May  3 2012, 15:51:42)
>>> [GCC 4.6.3] on linux2
>>> Type "help", "copyright", "credits" or "license" for more information.
>> class Foo: pass
>>> ...
>> callable(Foo())
>>> False
>>
>>>
>>> There is PyCallable_Check() CPython function:
>>>
>>> int
>>> PyCallable_Check(PyObject *x)
>>> {
>>> if (x == NULL)
>>> return 0;
>>> if (PyInstance_Check(x)) {
>>> PyObject *call = PyObject_GetAttrString(x, "__call__");
>>> if (call == NULL) {
>>> PyErr_Clear();
>>> return 0;
>>> }
>>> /* Could test recursively but don't, for fear of endless
>>>recursion if some joker sets self.__call__ = self */
>>> Py_DECREF(call);
>>> return 1;
>>> }
>>> else {
>>> return x->ob_type->tp_call != NULL;
>>> }
>>> }
>>
>> That's the Py2 version. In Py3, it looks as follows, because old-style
>> "instances" no longer exist:
>>
>> """
>> int
>> PyCallable_Check(PyObject *x)
>> {
>>if (x == NULL)
>>return 0;
>>return x->ob_type->tp_call != NULL;
>> }
>> """
>>
>> That's what I had initially based my optimisation on.
> 
> Huh, so __call__ in a user defined new style class could never end up
> in ob_type.tp_call right?

Yes it does. CPython special cases these method names.
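
For illustration, a small embedding sketch (assuming a build against the
CPython headers; not taken from CPython's or Cython's sources) that shows
the slot being filled for a new-style class that defines __call__:

"""
#include <Python.h>

int main(void) {
    Py_Initialize();

    /* Borrowed references to __main__ and its namespace. */
    PyObject *main_mod = PyImport_AddModule("__main__");
    PyObject *ns = PyModule_GetDict(main_mod);

    /* Define a new-style class with a __call__ method and make an instance. */
    PyRun_String(
        "class A(object):\n"
        "    def __call__(self):\n"
        "        return 42\n"
        "a = A()\n",
        Py_file_input, ns, ns);

    PyObject *a = PyDict_GetItemString(ns, "a");   /* borrowed reference */

    /* The check the optimisation relies on: type creation fills tp_call
       for classes that define __call__, so the slot is non-NULL here.   */
    printf("tp_call set: %d\n", Py_TYPE(a)->tp_call != NULL);

    Py_Finalize();
    return 0;
}
"""

The __call__ lookup still happens inside the slot's dispatcher when the
instance is actually called; callable() itself only has to test the slot.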

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] callable() optimization

2012-05-09 Thread Vitja Makarov
2012/5/9 Stefan Behnel :
> Vitja Makarov, 09.05.2012 10:21:
>> 2012/5/9 Stefan Behnel :
>>> Vitja Makarov, 09.05.2012 09:43:
 2012/5/9 Stefan Behnel :
> Vitja Makarov, 08.05.2012 13:27:
>> I've noticed regression related to callable() optimization.
>>
>> https://github.com/cython/cython/commit/a40112b0461eae5ab22fbdd07ae798d4a72ff523
>>
>> class C:
>>     pass
>> print callable(C())
>>
>> It prints True optimized version checks ((obj)->ob_type->tp_call !=
>> NULL) condition that is True for both class and instance.
>>
> help(callable)
>> callable(...)
>>     callable(object) -> bool
>>
>>     Return whether the object is callable (i.e., some kind of function).
>>     Note that classes are callable, as are instances with a __call__() 
>> method.
>
> Ah, right - old style classes are special cased in Py2.
>
> I'll make this a Py3-only optimisation then.
>

 I don't see difference between py2 and py3 here:

 Python 3.2.3 (default, May  3 2012, 15:51:42)
 [GCC 4.6.3] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
>>> class Foo: pass
 ...
>>> callable(Foo())
 False
>>>

 There is PyCallable_Check() CPython function:

 int
 PyCallable_Check(PyObject *x)
 {
     if (x == NULL)
         return 0;
     if (PyInstance_Check(x)) {
         PyObject *call = PyObject_GetAttrString(x, "__call__");
         if (call == NULL) {
             PyErr_Clear();
             return 0;
         }
         /* Could test recursively but don't, for fear of endless
            recursion if some joker sets self.__call__ = self */
         Py_DECREF(call);
         return 1;
     }
     else {
         return x->ob_type->tp_call != NULL;
     }
 }
>>>
>>> That's the Py2 version. In Py3, it looks as follows, because old-style
>>> "instances" no longer exist:
>>>
>>> """
>>> int
>>> PyCallable_Check(PyObject *x)
>>> {
>>>        if (x == NULL)
>>>                return 0;
>>>        return x->ob_type->tp_call != NULL;
>>> }
>>> """
>>>
>>> That's what I had initially based my optimisation on.
>>
>> Ok, so why don't you want to use PyCallable_Check() in all cases?
>
> Well, maybe this isn't performance critical enough to merit inlining. Do
> you think it matters?
>

The Py3k case is quite a simple expression, so I think it may be inlined. On
the other hand, it's not often used.

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.16.1

2012-05-09 Thread Stefan Behnel
Stefan Behnel, 09.05.2012 08:41:
> Robert Bradshaw, 09.05.2012 00:16:
>> If we're looking at doing 0.17 soon, lets just do that.
> 
> I think it's close enough to be released. I'll try to get around to list
> the changes in the release notes (and maybe even add a note about alpha
> quality PyPy support to the docs), but I wouldn't mind if someone else was
> quicker, at least for a start. ;)

Well, here's a start:

http://wiki.cython.org/ReleaseNotes-0.17

Please add to it if you see anything missing.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.16.1

2012-05-09 Thread Stefan Behnel
Stefan Behnel, 09.05.2012 10:51:
> Stefan Behnel, 09.05.2012 08:41:
>> Robert Bradshaw, 09.05.2012 00:16:
>>> If we're looking at doing 0.17 soon, lets just do that.
>>
>> I think it's close enough to be released. I'll try to get around to list
>> the changes in the release notes (and maybe even add a note about alpha
>> quality PyPy support to the docs), but I wouldn't mind if someone else was
>> quicker, at least for a start. ;)
> 
> Well, here's a start:
> 
> http://wiki.cython.org/ReleaseNotes-0.17

Oh, and I think this makes it pretty clear that this is a 0.17 and not a
0.16.1.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.17

2012-05-09 Thread Stefan Behnel
Stefan Behnel, 06.05.2012 20:22:
> Dag Sverre Seljebotn, 06.05.2012 19:51:
>> On 05/06/2012 04:28 PM, mark florisson wrote:
>>> I think we already have quite a bit of functionality (nearly) ready,
>>> after merging some pending pull requests maybe it will be a good time
>>> for a 0.17 release? I think it would be good to also document to what
>>> extent pypy support works, what works and what doesn't. Stefan, since
>>> you added a large majority of the features, would you want to be the
>>> release manager?
>>>
>>> In summary, the following pull requests should likely go in
>>>  - array.array support (unless further discussion prevents that)
>>>  - fused types runtime buffer dispatch
>>>  - newaxis
>>>  - more?
>>
>>
>> Sounds more like a 0.16.1? (Did we have any rules for that -- except the
>> obvious one that breaking backwards compatibility in noticeable ways has to
>> increment the major?)
> 
> Those are only the pending pull requests, the current feature set in the
> master branch is way larger than that. I'll start writing up the release
> notes soon.

Reviving this thread because it's the proper one to discuss 0.17 (instead
of the "0.16.1" thread).

So, here are the release notes so far:

http://wiki.cython.org/ReleaseNotes-0.17

There are a couple of bugs targeted for 0.17 that have not been closed (or
worked on?) yet. Please look through them as well to see if they a) have
been fixed, b) will be fixed soon or c) should be postponed.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Stefan Behnel
Stefan Behnel, 08.05.2012 14:24:
> Vitja has rebased the type inference on the control flow

On a related note, is this fixable now?

  def test():
      x = 1    # inferred as int
      del x    # error: Deletion of non-Python, non-C++ object

http://trac.cython.org/cython_trac/ticket/768

It might be enough to infer "object" for names that are being del-ed for
now, and to fix "del" The Right Way when we split entries.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] 0.17

2012-05-09 Thread Stefan Behnel
mark florisson, 06.05.2012 16:28:
> I think we already have quite a bit of functionality (nearly) ready,
> after merging some pending pull requests maybe it will be a good time
> for a 0.17 release? I think it would be good to also document to what
> extent pypy support works, what works and what doesn't. Stefan, since
> you added a large majority of the features, would you want to be the
> release manager?
> 
> In summary, the following pull requests should likely go in
> - array.array support (unless further discussion prevents that)
> - fused types runtime buffer dispatch
> - newaxis
> - more?

Looks like it was not a good idea to disable the numpy_memoryview tests:

"""
numpy_memoryview.cpp: In function ‘PyObject*
__pyx_pf_16numpy_memoryview_32test_coerce_to_numpy(PyObject*)’:
numpy_memoryview.cpp:15069: error: cannot convert ‘td_h_short*’ to ‘int*’
in assignment
numpy_memoryview.cpp:15118: error: cannot convert ‘td_h_double*’ to
‘float*’ in assignment
"""

https://sage.math.washington.edu:8091/hudson/job/cython-devel-tests/BACKEND=cpp,PYVERSION=py32-ext/374/consoleFull

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Vitja Makarov
2012/5/9 Stefan Behnel :
> Stefan Behnel, 08.05.2012 14:24:
>> Vitja has rebased the type inference on the control flow
>
> On a related note, is this fixable now?
>
>  def test():
>      x = 1    # inferred as int
>      del x    # error: Deletion of non-Python, non-C++ object
>
> http://trac.cython.org/cython_trac/ticket/768
>
> It might be enough to infer "object" for names that are being del-ed for
> now, and to fix "del" The Right Way when we split entries.
>

Do you mean that `x` should be inferred as "python object" in your example?

Yes, we may add a workaround for the del case.
Del is now represented by NameDeletion with the same rhs and lhs.

We can add an infer_type() method to NameAssignment and use it instead of
Node.infer_type().


-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Vitja Makarov
2012/5/9 Vitja Makarov :
> 2012/5/9 Stefan Behnel :
>> Stefan Behnel, 08.05.2012 14:24:
>>> Vitja has rebased the type inference on the control flow
>>
>> On a related note, is this fixable now?
>>
>>  def test():
>>      x = 1    # inferred as int
>>      del x    # error: Deletion of non-Python, non-C++ object
>>
>> http://trac.cython.org/cython_trac/ticket/768
>>
>> It might be enough to infer "object" for names that are being del-ed for
>> now, and to fix "del" The Right Way when we split entries.
>>
>
> Do you mean that `x` should be inferred as "python object" in your example?
>
> Yes, we may add workaround for del  case.
> Del is represented now by NameDeletion with the same rhs and lhs.
>
> We can add method infer_type() to NameAssignment and use it instead of
> Node.infer_type()
>
>

Here I've tried to fix it, now deletion always infers as python_object

https://github.com/vitek/cython/commit/225c9c60bed6406db46e87da31596e053056f8b7


That may break C++ object deletion

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread mark florisson
On 9 May 2012 13:39, Vitja Makarov  wrote:
> 2012/5/9 Vitja Makarov :
>> 2012/5/9 Stefan Behnel :
>>> Stefan Behnel, 08.05.2012 14:24:
 Vitja has rebased the type inference on the control flow
>>>
>>> On a related note, is this fixable now?
>>>
>>>  def test():
>>>      x = 1    # inferred as int
>>>      del x    # error: Deletion of non-Python, non-C++ object
>>>
>>> http://trac.cython.org/cython_trac/ticket/768
>>>
>>> It might be enough to infer "object" for names that are being del-ed for
>>> now, and to fix "del" The Right Way when we split entries.
>>>
>>
>> Do you mean that `x` should be inferred as "python object" in your example?
>>
>> Yes, we may add workaround for del  case.
>> Del is represented now by NameDeletion with the same rhs and lhs.
>>
>> We can add method infer_type() to NameAssignment and use it instead of
>> Node.infer_type()
>>
>>
>
> Here I've tried to fix it, now deletion always infers as python_object
>
> https://github.com/vitek/cython/commit/225c9c60bed6406db46e87da31596e053056f8b7
>
>
> That may break C++ object deletion
>
> --
> vitja.
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel

Memoryviews can be deleted as well.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Vitja Makarov
2012/5/9 mark florisson :
> On 9 May 2012 13:39, Vitja Makarov  wrote:
>> 2012/5/9 Vitja Makarov :
>>> 2012/5/9 Stefan Behnel :
 Stefan Behnel, 08.05.2012 14:24:
> Vitja has rebased the type inference on the control flow

 On a related note, is this fixable now?

  def test():
      x = 1    # inferred as int
      del x    # error: Deletion of non-Python, non-C++ object

 http://trac.cython.org/cython_trac/ticket/768

 It might be enough to infer "object" for names that are being del-ed for
 now, and to fix "del" The Right Way when we split entries.

>>>
>>> Do you mean that `x` should be inferred as "python object" in your example?
>>>
>>> Yes, we may add workaround for del  case.
>>> Del is represented now by NameDeletion with the same rhs and lhs.
>>>
>>> We can add method infer_type() to NameAssignment and use it instead of
>>> Node.infer_type()
>>>
>>>
>>
>> Here I've tried to fix it, now deletion always infers as python_object
>>
>> https://github.com/vitek/cython/commit/225c9c60bed6406db46e87da31596e053056f8b7
>>
>>
>> That may break C++ object deletion
>>
>
> Memoryviews can be deleted as well.


That code is run for entries with unspecified_type only


-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Vitja Makarov
2012/5/9 Vitja Makarov :
> 2012/5/9 mark florisson :
>> On 9 May 2012 13:39, Vitja Makarov  wrote:
>>> 2012/5/9 Vitja Makarov :
 2012/5/9 Stefan Behnel :
> Stefan Behnel, 08.05.2012 14:24:
>> Vitja has rebased the type inference on the control flow
>
> On a related note, is this fixable now?
>
>  def test():
>      x = 1    # inferred as int
>      del x    # error: Deletion of non-Python, non-C++ object
>
> http://trac.cython.org/cython_trac/ticket/768
>
> It might be enough to infer "object" for names that are being del-ed for
> now, and to fix "del" The Right Way when we split entries.
>

 Do you mean that `x` should be inferred as "python object" in your example?

 Yes, we may add workaround for del  case.
 Del is represented now by NameDeletion with the same rhs and lhs.

 We can add method infer_type() to NameAssignment and use it instead of
 Node.infer_type()


>>>
>>> Here I've tried to fix it, now deletion always infers as python_object
>>>
>>> https://github.com/vitek/cython/commit/225c9c60bed6406db46e87da31596e053056f8b7
>>>
>>>
>>> That may break C++ object deletion
>>>
>>
>> Memoryviews can be deleted as well.
>
>
> That code is run for entries with unspecified_type only
>
>

Yeah, this code doesn't work now:

cdef extern from "foo.h":
cdef cppclass Foo:
Foo()

def foo():
foo = new Foo()
print typeof(foo)
del foo

And I'm not sure how to fix it.

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Vitja Makarov
2012/5/9 Vitja Makarov :
> 2012/5/9 Vitja Makarov :
>> 2012/5/9 mark florisson :
>>> On 9 May 2012 13:39, Vitja Makarov  wrote:
 2012/5/9 Vitja Makarov :
> 2012/5/9 Stefan Behnel :
>> Stefan Behnel, 08.05.2012 14:24:
>>> Vitja has rebased the type inference on the control flow
>>
>> On a related note, is this fixable now?
>>
>>  def test():
>>      x = 1    # inferred as int
>>      del x    # error: Deletion of non-Python, non-C++ object
>>
>> http://trac.cython.org/cython_trac/ticket/768
>>
>> It might be enough to infer "object" for names that are being del-ed for
>> now, and to fix "del" The Right Way when we split entries.
>>
>
> Do you mean that `x` should be inferred as "python object" in your 
> example?
>
> Yes, we may add workaround for del  case.
> Del is represented now by NameDeletion with the same rhs and lhs.
>
> We can add method infer_type() to NameAssignment and use it instead of
> Node.infer_type()
>
>

 Here I've tried to fix it, now deletion always infers as python_object

 https://github.com/vitek/cython/commit/225c9c60bed6406db46e87da31596e053056f8b7


 That may break C++ object deletion

>>>
>>> Memoryviews can be deleted as well.
>>
>>
>> That code is run for entries with unspecified_type only
>>
>>
>
> Yeah, this code doesn't work now:
>
> cdef extern from "foo.h":
>    cdef cppclass Foo:
>        Foo()
>
> def foo():
>    foo = new Foo()
>    print typeof(foo)
>    del foo
>
> And I'm not sure how to fix it.

I've fixed cppclasses:

https://github.com/vitek/cython/commit/f5acf44be0f647bdcbb5a23c8bfbceff48f4414e

About memoryviews:

from cython cimport typeof

def foo(float[::1] a):
    b = a
    #del b
    print typeof(b)
    print typeof(a)


In this example `b` is inferred as 'Python object' and not
`float[::1]`, is that correct?

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread mark florisson
On 9 May 2012 14:16, Vitja Makarov  wrote:
> 2012/5/9 Vitja Makarov :
>> 2012/5/9 Vitja Makarov :
>>> 2012/5/9 mark florisson :
 On 9 May 2012 13:39, Vitja Makarov  wrote:
> 2012/5/9 Vitja Makarov :
>> 2012/5/9 Stefan Behnel :
>>> Stefan Behnel, 08.05.2012 14:24:
 Vitja has rebased the type inference on the control flow
>>>
>>> On a related note, is this fixable now?
>>>
>>>  def test():
>>>      x = 1    # inferred as int
>>>      del x    # error: Deletion of non-Python, non-C++ object
>>>
>>> http://trac.cython.org/cython_trac/ticket/768
>>>
>>> It might be enough to infer "object" for names that are being del-ed for
>>> now, and to fix "del" The Right Way when we split entries.
>>>
>>
>> Do you mean that `x` should be inferred as "python object" in your 
>> example?
>>
>> Yes, we may add workaround for del  case.
>> Del is represented now by NameDeletion with the same rhs and lhs.
>>
>> We can add method infer_type() to NameAssignment and use it instead of
>> Node.infer_type()
>>
>>
>
> Here I've tried to fix it, now deletion always infers as python_object
>
> https://github.com/vitek/cython/commit/225c9c60bed6406db46e87da31596e053056f8b7
>
>
> That may break C++ object deletion
>

 Memoryviews can be deleted as well.
>>>
>>>
>>> That code is run for entries with unspecified_type only
>>>
>>>
>>
>> Yeah, this code doesn't work now:
>>
>> cdef extern from "foo.h":
>>    cdef cppclass Foo:
>>        Foo()
>>
>> def foo():
>>    foo = new Foo()
>>    print typeof(foo)
>>    del foo
>>
>> And I'm not sure how to fix it.
>
> I've fixed cppclasses:
>
> https://github.com/vitek/cython/commit/f5acf44be0f647bdcbb5a23c8bfbceff48f4414e
>
> About memoryviews:
>
> from cython cimport typeof
>
> def foo(float[::1] a):
>    b = a
>    #del b
>    print typeof(b)
>    print typeof(a)
>
>
> In this example `b` is inferred as 'Python object' and not
> `float[::1]`, is that correct?
>
> --
> vitja.
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel

That's the current behaviour, but it would be better if it inferred a
memoryview slice instead.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Stefan Behnel
Vitja Makarov, 09.05.2012 15:16:
> 2012/5/9 Vitja Makarov:
 On 9 May 2012 13:39, Vitja Makarov wrote:
> 2012/5/9 Vitja Makarov:
>> 2012/5/9 Stefan Behnel:
>>>  def test():
>>>  x = 1# inferred as int
>>>  del x# error: Deletion of non-Python, non-C++ object
>>>
>>> http://trac.cython.org/cython_trac/ticket/768
>>>
>>> It might be enough to infer "object" for names that are being del-ed for
>>> now, and to fix "del" The Right Way when we split entries.
>>
>> Do you mean that `x` should be inferred as "python object" in your 
>> example?
>>
>> Yes, we may add workaround for del  case.
>> Del is represented now by NameDeletion with the same rhs and lhs.
>>
>> We can add method infer_type() to NameAssignment and use it instead of
>> Node.infer_type()

Yes, looks ok.


> Here I've tried to fix it, now deletion always infers as python_object
>
> https://github.com/vitek/cython/commit/225c9c60bed6406db46e87da31596e053056f8b7
>
> That may break C++ object deletion
>>
>> Yeah, this code doesn't work now:
>>
>> cdef extern from "foo.h":
>>cdef cppclass Foo:
>>Foo()
>>
>> def foo():
>>foo = new Foo()
>>print typeof(foo)
>>del foo
>>
>> And I'm not sure how to fix it.
> 
> I've fixed cppclasses:
> 
> https://github.com/vitek/cython/commit/f5acf44be0f647bdcbb5a23c8bfbceff48f4414e

Sure, that makes sense. If the type cannot be del-ed, we'll get an error
elsewhere - not a concern of type inference.


> About memoryviews:
> 
> from cython cimport typeof
> 
> def foo(float[::1] a):
> b = a
> #del b
> print typeof(b)
> print typeof(a)
> 
> In this example `b` is inferred as 'Python object' and not
> `float[::1]`, is that correct?

I think it currently is, but it may no longer be in the future. See the
running ML thread about the future of the buffer syntax and the memoryview
syntax.

If we're up to changing this, it would be good to give it the right
behaviour already for the next release, so that users don't start relying on
the above.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Stefan Behnel
mark florisson, 09.05.2012 15:18:
> On 9 May 2012 14:16, Vitja Makarov wrote:
>> from cython cimport typeof
>>
>> def foo(float[::1] a):
>>b = a
>>#del b
>>print typeof(b)
>>print typeof(a)
>>
>>
>> In this example `b` is inferred as 'Python object' and not
>> `float[::1]`, is that correct?
>>
> That's the current behaviour, but it would be better if it inferred a
> memoryview slice instead.

+1

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Stefan Behnel
Dag Sverre Seljebotn, 08.05.2012 18:52:
> Vitja Makarov wrote:
>> def partial_validity():
>>   """
>>   >>> partial_validity()
>>   ('str object', 'double', 'str object')
>>   """
>>   a_1 = 1.0
>>   b = a_1 + 2   # definitely double
>>   a_2 = 'test'
>>   c = a_2 + 'toast'  # definitely str
>>   return typeof(a_2), typeof(b), typeof(c)
>>
>> And this should work better because it allows to infer a_1 as a double
>> and a_2 as a string.
> 
> +1 (as also Mark has hinted several times). I also happen to like that
> typeof returns str rather than object... I don't think type inferred code
> has to restrict itself to what you could do using *only* declarations.
> 
> To go out on a hyperbole: Reinventing compiler theory to make things 
> fit better with our current tree and the Pyrex legacy isn't sustainable 
> forever, at some point we should do things the standard way and
> refactor some code if necessary.

That's how these things work, though. It's basically register allocation
and variable renaming mapped to a code translator (rather than a compiler
that emits assembly or byte code).

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread mark florisson
On 9 May 2012 16:13, Stefan Behnel  wrote:
> Dag Sverre Seljebotn, 08.05.2012 18:52:
>> Vitja Makarov wrote:
>>> def partial_validity():
>>>   """
>>>   >>> partial_validity()
>>>   ('str object', 'double', 'str object')
>>>   """
>>>   a_1 = 1.0
>>>   b = a_1 + 2   # definitely double
>>>   a_2 = 'test'
>>>   c = a_2 + 'toast'  # definitely str
>>>   return typeof(a_2), typeof(b), typeof(c)
>>>
>>> And this should work better because it allows to infer a_1 as a double
>>> and a_2 as a string.
>>
>> +1 (as also Mark has hinted several times). I also happen to like that
>> typeof returns str rather than object... I don't think type inferred code
>> has to restrict itself to what you could do using *only* declarations.
>>
>> To go out on a hyperbole: Reinventing compiler theory to make things
>> fit better with our current tree and the Pyrex legacy isn't sustainable
>> forever, at some point we should do things the standard way and
>> refactor some code if necessary.
>
> That's how these things work, though. It's basically register allocation
> and variable renaming mapped to a code translator (rather than a compiler
> that emits assembly or byte code).
>
> Stefan
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel

That's not what he was hinting at though. Many of these things we're
doing are standard in compiler theory, and inventing our own ad-hoc
ways and sloppy algorithms for things like control flow, type
inference, variable renaming, bounds check optimizations, none
checking optimizations, etc., isn't going to cut it. As we have already
seen, standard ways to do control flow have worked out very well thanks
to Vitja's work.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Dag Sverre Seljebotn

On 05/09/2012 05:13 PM, Stefan Behnel wrote:

Dag Sverre Seljebotn, 08.05.2012 18:52:

Vitja Makarov wrote:

def partial_validity():
   """
   >>>  partial_validity()
   ('str object', 'double', 'str object')
   """
   a_1 = 1.0
   b = a_1 + 2   # definitely double
   a_2 = 'test'
   c = a_2 + 'toast'  # definitely str
   return typeof(a_2), typeof(b), typeof(c)

And this should work better because it allows to infer a_1 as a double
and a_2 as a string.


+1 (as also Mark has hinted several times). I also happen to like that
typeof returns str rather than object... I don't think type inferred code
has to restrict itself to what you could do using *only* declarations.

To go out on a hyperbole: Reinventing compiler theory to make things
fit better with our current tree and the Pyrex legacy isn't sustainable
forever, at some point we should do things the standard way and
refactor some code if necessary.


That's how these things work, though. It's basically register allocation
and variable renaming mapped to a code translator (rather than a compiler
that emits assembly or byte code).


Yes, to be crystal clear, I was actually hinting at your original 
proposal here, and applauding Vitja's counter-proposal as a more 
standard way of doing things.


But I regretted posting at all afterwards; I do so little coding on
Cython these days that I shouldn't interfere at this level. I'll try to
leave such rants to Mark in the future :-)


Dag
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


[Cython] Bug in print statement

2012-05-09 Thread Vitja Makarov
The del statement type inference change enabled the pyregr.test_descr
testcase, and it now SIGSEGVs. Here is a minimal example:

import unittest
import sys

class Foo(unittest.TestCase):
    def test_file_fault(self):
        # Testing sys.stdout is changed in getattr...
        test_stdout = sys.stdout
        class StdoutGuard:
            def __getattr__(self, attr):
                test_stdout.write('%d\n' % sys.getrefcount(self))
                sys.stdout = test_stdout  # sys.__stdout__
                test_stdout.write('%d\n' % sys.getrefcount(self))
                test_stdout.write('getattr: %r\n' % attr)
                test_stdout.flush()
                raise RuntimeError("Premature access to sys.stdout.%s" % attr)
        sys.stdout = StdoutGuard()
        try:
            print "Oops!"
        except RuntimeError:
            pass
        finally:
            sys.stdout = test_stdout

    def test_getattr_hooks(self):
        pass

from test import test_support
test_support.run_unittest(Foo)

It works in Python and sigsegvs in Cython.
It seems to me that the problem is that the StdoutGuard() instance is still
used when its reference counter is zero, since the Python interpreter does
Py_XINCREF() for the file object while __Pyx_Print() doesn't.
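
A minimal sketch of the kind of guard that would avoid this (an assumption
about the fix, not the actual __Pyx_Print() code): hold an extra reference
to the stream while writing, so a __getattr__ hook that rebinds sys.stdout
cannot drop the last reference mid-print.

"""
#include <Python.h>

static int print_object_sketch(PyObject *obj) {
    int result;
    PyObject *stream = PySys_GetObject("stdout");   /* borrowed reference */
    if (!stream) {
        PyErr_SetString(PyExc_RuntimeError, "lost sys.stdout");
        return -1;
    }
    Py_INCREF(stream);   /* the incref that is missing in the report above */
    result = PyFile_WriteObject(obj, stream, Py_PRINT_RAW);
    Py_DECREF(stream);
    return result;
}
"""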

-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Bug in print statement

2012-05-09 Thread Stefan Behnel
Vitja Makarov, 09.05.2012 18:31:
> Del statement inference enabled pyregr.test_descr testcase and it SIGSEGVs.
> Here is minimal example:
> 
> import unittest
> import sys
> 
> class Foo(unittest.TestCase):
> def test_file_fault(self):
> # Testing sys.stdout is changed in getattr...
> test_stdout = sys.stdout
> class StdoutGuard:
> def __getattr__(self, attr):
> test_stdout.write('%d\n' % sys.getrefcount(self))
> sys.stdout =  test_stdout #sys.__stdout__
> test_stdout.write('%d\n' % sys.getrefcount(self))
> test_stdout.write('getattr: %r\n' % attr)
> test_stdout.flush()
> raise RuntimeError("Premature access to sys.stdout.%s" % attr)
> sys.stdout = StdoutGuard()
> try:
> print "Oops!"
> except RuntimeError:
> pass
> finally:
> sys.stdout = test_stdout
> 
> def test_getattr_hooks(self):
> pass
> 
> from test import test_support
> test_support.run_unittest(Foo)
> 
> It works in python and sigsegvs in cython.
> It seems to me that the problem is StdoutGuard() is still used when
> its reference counter is zero since Python interpreter does
> Py_XINCREF() for file object and __Pyx_Print() doesn't.

Makes sense to change that, IMHO. An additional INCREF during something as
involved as a print() will not hurt anyone.

IIRC, I had the same problem with PyPy - guess I should have fixed it back
then instead of taking the lazy escape towards using the print() function.

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] Bug in print statement

2012-05-09 Thread Vitja Makarov
2012/5/9 Stefan Behnel :
> Vitja Makarov, 09.05.2012 18:31:
>> Del statement inference enabled pyregr.test_descr testcase and it SIGSEGVs.
>> Here is minimal example:
>>
>> import unittest
>> import sys
>>
>> class Foo(unittest.TestCase):
>>     def test_file_fault(self):
>>         # Testing sys.stdout is changed in getattr...
>>         test_stdout = sys.stdout
>>         class StdoutGuard:
>>             def __getattr__(self, attr):
>>                 test_stdout.write('%d\n' % sys.getrefcount(self))
>>                 sys.stdout =  test_stdout #sys.__stdout__
>>                 test_stdout.write('%d\n' % sys.getrefcount(self))
>>                 test_stdout.write('getattr: %r\n' % attr)
>>                 test_stdout.flush()
>>                 raise RuntimeError("Premature access to sys.stdout.%s" % 
>> attr)
>>         sys.stdout = StdoutGuard()
>>         try:
>>             print "Oops!"
>>         except RuntimeError:
>>             pass
>>         finally:
>>             sys.stdout = test_stdout
>>
>>     def test_getattr_hooks(self):
>>         pass
>>
>> from test import test_support
>> test_support.run_unittest(Foo)
>>
>> It works in python and sigsegvs in cython.
>> It seems to me that the problem is StdoutGuard() is still used when
>> its reference counter is zero since Python interpreter does
>> Py_XINCREF() for file object and __Pyx_Print() doesn't.
>
> Makes sense to change that, IMHO. An additional INCREF during something as
> involved as a print() will not hurt anyone.
>
> IIRC, I had the same problem with PyPy - guess I should have fixed it back
> then instead of taking the lazy escape towards using the print() function.
>

I've moved the printing function to Utility/ and fixed the refcount
bug. If Jenkins is OK I'm gonna push this commit to master:

https://github.com/vitek/cython/commit/83eceb31b4ed9afc0fd6d24c9eda5e52d9420535


-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Robert Bradshaw
On Wed, May 9, 2012 at 6:33 AM, Stefan Behnel  wrote:
> mark florisson, 09.05.2012 15:18:
>> On 9 May 2012 14:16, Vitja Makarov wrote:
>>> from cython cimport typeof
>>>
>>> def foo(float[::1] a):
>>>    b = a
>>>    #del b
>>>    print typeof(b)
>>>    print typeof(a)
>>>
>>>
>>> In this example `b` is inferred as 'Python object' and not
>>> `float[::1]`, is that correct?
>>>
>> That's the current behaviour, but it would be better if it inferred a
>> memoryview slice instead.
>
> +1

+1. This looks like it would break inference of extension classes as well.

https://github.com/vitek/cython/commit/f5acf44be0f647bdcbb5a23c8bfbceff48f4414e#L0R336

could be changed to check whether it's already a py_object_type (or a
memory view) as a quick fix, but it's not as pure as adding a "can be
del'ed" constraint to the type inference engine.

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Vitja Makarov
2012/5/9 Robert Bradshaw :
> On Wed, May 9, 2012 at 6:33 AM, Stefan Behnel  wrote:
>> mark florisson, 09.05.2012 15:18:
>>> On 9 May 2012 14:16, Vitja Makarov wrote:
 from cython cimport typeof

 def foo(float[::1] a):
    b = a
    #del b
    print typeof(b)
    print typeof(a)


 In this example `b` is inferred as 'Python object' and not
 `float[::1]`, is that correct?

>>> That's the current behaviour, but it would be better if it inferred a
>>> memoryview slice instead.
>>
>> +1
>
> +1. This looks like it would break inference of extension classes as well.
>
> https://github.com/vitek/cython/commit/f5acf44be0f647bdcbb5a23c8bfbceff48f4414e#L0R336
>
> could be changed to check if it's already a py_object_type (or memory
> view) as a quick fix, but it's not as pure as adding the constraints
> "can be del'ed" to the type inference engine.
>

Yeah, right. It must be something like this:

if not inferred_type.is_pyobject and \
        inferred_type.can_coerce_to_pyobject(scope):



-- 
vitja.
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-09 Thread Robert Bradshaw
On Tue, May 8, 2012 at 2:48 AM, Stefan Behnel  wrote:
> mark florisson, 08.05.2012 11:24:
> Dag Sverre Seljebotn, 08.05.2012 09:57:
>>  1) We NEVER deprecate "np.ndarray[double]", we commit to keeping that in
>> the language. It means exactly what you would like double[:] to mean,
>> i.e.
>> a variable that is memoryview when you need to and an object otherwise.
>> When you use this type, you bear the consequences of early-binding things
>> that could in theory be overridden.
>>
>>  2) double[:] is for when you want to access data of *any* Python object
>> in a generic way. Raw PEP 3118. In those situations, access to the
>> underlying object is much less useful.
>>
>>   2a) Therefore we require that you do "mview.asobject()" manually; doing
>> "mview.foo()" is a compile-time error
>>> [...]
>>> Character pointers coerce to strings. Hell, even structs coerce to and
>>> from python dicts, so disallowing the same for memoryviews would just
>>> be inconsistent and inconvenient.
>
> Two separate things to discuss here: the original exporter and a Python
> level wrapper.
>
> As long as wrapping the memoryview in a new object is can easily be done by
> users, I don't see a reason to provide compiler support for getting at the
> exporter. After all, a user may have a memory view that is backed by a
> NumPy array but wants to reinterpret it as a PIL image. Just because the
> underlying object has a specific object type doesn't mean that's the one to
> use for a given use case. If a user requires a specific object *instead* of
> a bare memory view, we have the object type buffer syntax for that.

On the other hand, if the object type buffer syntax is to be deprecated
and replaced by bare memory views, then a user-specified exporter is, I
think, quite important so that, e.g., slicing NumPy arrays gives NumPy
arrays back.

Is slicing the only way to get new memoryviews from old ones? If so,
perhaps we could use a Python __getitem__ call with the appropriate
slice to create a new underlying object from the original underlying
object (only when needed, of course). This assumes that the underlying
object supports it.
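
As a rough illustration of the idea in plain Python (the wrapper class
is made up here, not a proposed API):

import numpy as np

class SlicedView(object):
    """Toy stand-in for a typed memoryview that remembers its exporter."""
    def __init__(self, obj):
        self.obj = obj                 # original exporter, e.g. a NumPy array
        self.view = memoryview(obj)

    def __getitem__(self, index):
        new = SlicedView.__new__(SlicedView)
        new.view = self.view[index]    # slice the raw buffer
        new.obj = self.obj[index]      # ask the exporter for a matching object
        return new

a = SlicedView(np.arange(10.0))
b = a[2:5]
print type(b.obj)   # slicing a NumPy-backed view yields a NumPy array again

The __getitem__ call on the exporter could of course be deferred until
somebody actually asks for the object.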

> It's also not necessarily more efficient to access the underlying object
> than to create a new one if the underlying exporter has to learn about the
> mapped layout first.
>
> Regarding the coercion to Python, I do not see a problem with providing a
> general Python view object for memory views that arbitrary Cython memory
> views can coerce to. In fact, I consider that a useful feature. The builtin
> memoryview type in Python (at least the one in CPython 3.3) should be quite
> capable of providing this, although I don't mind what exactly this becomes.

I'd rather not make things global, but for memory views that were
created without an underlying object, having a good default (I'd
rather not have a global registry) makes a lot of sense.

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-09 Thread mark florisson
On 9 May 2012 19:35, Robert Bradshaw  wrote:
> On Tue, May 8, 2012 at 2:48 AM, Stefan Behnel  wrote:
>> mark florisson, 08.05.2012 11:24:
>> Dag Sverre Seljebotn, 08.05.2012 09:57:
>>>  1) We NEVER deprecate "np.ndarray[double]", we commit to keeping that 
>>> in
>>> the language. It means exactly what you would like double[:] to mean,
>>> i.e.
>>> a variable that is memoryview when you need to and an object otherwise.
>>> When you use this type, you bear the consequences of early-binding 
>>> things
>>> that could in theory be overridden.
>>>
>>>  2) double[:] is for when you want to access data of *any* Python object
>>> in a generic way. Raw PEP 3118. In those situations, access to the
>>> underlying object is much less useful.
>>>
>>>   2a) Therefore we require that you do "mview.asobject()" manually; 
>>> doing
>>> "mview.foo()" is a compile-time error
 [...]
 Character pointers coerce to strings. Hell, even structs coerce to and
 from python dicts, so disallowing the same for memoryviews would just
 be inconsistent and inconvenient.
>>
>> Two separate things to discuss here: the original exporter and a Python
>> level wrapper.
>>
>> As long as wrapping the memoryview in a new object is can easily be done by
>> users, I don't see a reason to provide compiler support for getting at the
>> exporter. After all, a user may have a memory view that is backed by a
>> NumPy array but wants to reinterpret it as a PIL image. Just because the
>> underlying object has a specific object type doesn't mean that's the one to
>> use for a given use case. If a user requires a specific object *instead* of
>> a bare memory view, we have the object type buffer syntax for that.
>
> On the other hand, if the object type buffer syntax to be deprecated
> and replaced by bare memory views, then a user-specified exporter is I
> think quite important so that, e.g. when slicing NumPy arrays one gets
> NumPy arrays back.
>
> Is slicing the only way in which to get new memoryviews from old? If
> this is the case, perhaps we could use a Python __getitem__ call with
> the appropriate slice to create a new underlying object from the
> original underlying object (only when needed of course). This is
> assuming that the underlying object supports it.

You can also use newaxis indexing or transpose the view, but I think
those are the only ways to change the view. I like the idea quite a
bit, as the callback has no sane way of getting registered. For new
axes we could pass None in the right places to __getitem__; as for
transposing, the 'T' attribute works for NumPy, though I don't know
about other exporters.
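
For NumPy at least, both operations have direct counterparts on the
exporter (plain NumPy, just for illustration):

import numpy as np

a = np.arange(12.0).reshape(3, 4)
print a.T.shape            # (4, 3) -- transposing the exporter matches a transposed view
print a[:, None, :].shape  # (3, 1, 4) -- None inserts the new axis on the exporter too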

>> It's also not necessarily more efficient to access the underlying object
>> than to create a new one if the underlying exporter has to learn about the
>> mapped layout first.
>>
>> Regarding the coercion to Python, I do not see a problem with providing a
>> general Python view object for memory views that arbitrary Cython memory
>> views can coerce to. In fact, I consider that a useful feature. The builtin
>> memoryview type in Python (at least the one in CPython 3.3) should be quite
>> capable of providing this, although I don't mind what exactly this becomes.
>
> I'd rather not make things global, but for memory views that were
> created without an underlying object, having a good default (I'd
> rather not have a global registry) makes a lot of sense.
>
> - Robert
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-09 Thread Stefan Behnel
mark florisson, 09.05.2012 20:45:
> You can also use newaxis indexing or transpose the view

What is "newaxis indexing"?

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-09 Thread Robert Bradshaw
On Tue, May 8, 2012 at 3:35 AM, mark florisson
 wrote:
> On 8 May 2012 10:47, Dag Sverre Seljebotn  wrote:
>>
>> After some thinking I believe I can see more clearly where Mark is coming
>> from. To sum up, it's either
>>
>> A) Keep both np.ndarray[double] and double[:] around, with clearly defined
>> and separate roles. np.ndarray[double] implementation is revamped to allow
>> fast slicing etc., based on the double[:] implementation.
>>
>> B) Deprecate np.ndarray[double] sooner rather than later, but make double[:]
>> have functionality that is *really* close to what np.ndarray[double]
>> currently does. In most cases one should be able to basically replace
>> np.ndarray[double] with double[:] and the code should continue to work just
>> like before; difference is that if you pass in anything else than a NumPy
>> array, it will likely fail with a runtime AttributeError at some point
>> rather than fail a PyType_Check.
>
> That's a good summary. I have a big preference for B here, but I agree
> that treating a typed memoryview as both a user object (possibly
> converted through callback) and a typed memoryview "subclass" is quite
> magicky.

With the talk of overlay modules and Go-style interfaces, being able to
specify the type of an object as well as its bufferness could become
even more interesting than it is now. The notion of supporting
multiple interfaces, e.g.

cdef np.ndarray & double[:] my_array

could obviate the need for np.ndarray[double]. Until we support
something like this, or decide to reject it, I think we need to keep
the old-style syntax around. (np.ndarray[double] could even become
this intersection type to gain all the new features before we decide
on an appropriate syntax.)

> I wouldn't particularly mind something concise like 'm.obj'.
> The AttributeError would be the case as usual, when a python object
> doesn't have the right interface.

Having to insert the .obj in there does make it more painful to
convert existing Python code.

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-09 Thread mark florisson
On 9 May 2012 19:55, Stefan Behnel  wrote:
> mark florisson, 09.05.2012 20:45:
>> You can also use newaxis indexing or transpose the view
>
> What is "newaxis indexing"?
>
> Stefan
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel

It's when you introduce a new dimension of size one. E.g. if you have
a 1D array with shape (10,) and index it like myarray[None, :], you get
a 2D array with shape (1, 10). There is a pending pull request for
that (which should make it into 0.17).
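
In plain NumPy terms:

import numpy as np

a = np.arange(10)     # shape (10,)
b = a[None, :]        # new leading axis: shape (1, 10)
c = a[:, np.newaxis]  # np.newaxis is just an alias for None: shape (10, 1)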
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-09 Thread mark florisson
On 9 May 2012 19:56, Robert Bradshaw  wrote:
> On Tue, May 8, 2012 at 3:35 AM, mark florisson
>  wrote:
>> On 8 May 2012 10:47, Dag Sverre Seljebotn  wrote:
>>>
>>> After some thinking I believe I can see more clearly where Mark is coming
>>> from. To sum up, it's either
>>>
>>> A) Keep both np.ndarray[double] and double[:] around, with clearly defined
>>> and separate roles. np.ndarray[double] implementation is revamped to allow
>>> fast slicing etc., based on the double[:] implementation.
>>>
>>> B) Deprecate np.ndarray[double] sooner rather than later, but make double[:]
>>> have functionality that is *really* close to what np.ndarray[double]
>>> currently does. In most cases one should be able to basically replace
>>> np.ndarray[double] with double[:] and the code should continue to work just
>>> like before; difference is that if you pass in anything else than a NumPy
>>> array, it will likely fail with a runtime AttributeError at some point
>>> rather than fail a PyType_Check.
>>
>> That's a good summary. I have a big preference for B here, but I agree
>> that treating a typed memoryview as both a user object (possibly
>> converted through callback) and a typed memoryview "subclass" is quite
>> magicky.
>
> With the talk of overlay modules and go-style interface, being able to
> specify the type of an object as well as its bufferness could become
> more interesting than it even is now. The notion of supporting
> multiple interfaces, e.g.
>
> cdef np.ndarray & double[:] my_array
>
> could obviate the need for np.ndarray[double]. Until we support
> something like this, or decide to reject it, I think we need to keep
> the old-style syntax around. (np.ndarray[double] could even become
> this intersection type to gain all the new features before we decide
> on a appropriate syntax).

It's kind of interesting but also kind of a pain to declare everywhere
like that. Buffer syntax should by no means be deprecated in the near
future, but at some point it will be better to have one way to do
things, whether that way is slightly magicky or a bit more convoluted.
Also, as Dag mentioned, if we want fused extension types it makes more
sense to remove buffer syntax, to disambiguate this and avoid
context-dependent special casing (e.g. np.ndarray and array.array).

>> I wouldn't particularly mind something concise like 'm.obj'.
>> The AttributeError would be the case as usual, when a python object
>> doesn't have the right interface.
>
> Having to insert the .obj in there does make it more painful to
> convert existing Python code.

Yes, hence my slight bias towards magicky. But I do fully agree with
all opposing arguments that say "too much magic". I just prefer to be
pragmatic here :)

> - Robert
> ___
> cython-devel mailing list
> cython-devel@python.org
> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-09 Thread mark florisson
On 9 May 2012 20:08, mark florisson  wrote:
> On 9 May 2012 19:56, Robert Bradshaw  wrote:
>> On Tue, May 8, 2012 at 3:35 AM, mark florisson
>>  wrote:
>>> On 8 May 2012 10:47, Dag Sverre Seljebotn  
>>> wrote:

 After some thinking I believe I can see more clearly where Mark is coming
 from. To sum up, it's either

 A) Keep both np.ndarray[double] and double[:] around, with clearly defined
 and separate roles. np.ndarray[double] implementation is revamped to allow
 fast slicing etc., based on the double[:] implementation.

 B) Deprecate np.ndarray[double] sooner rather than later, but make 
 double[:]
 have functionality that is *really* close to what np.ndarray[double]
 currently does. In most cases one should be able to basically replace
 np.ndarray[double] with double[:] and the code should continue to work just
 like before; difference is that if you pass in anything else than a NumPy
 array, it will likely fail with a runtime AttributeError at some point
 rather than fail a PyType_Check.
>>>
>>> That's a good summary. I have a big preference for B here, but I agree
>>> that treating a typed memoryview as both a user object (possibly
>>> converted through callback) and a typed memoryview "subclass" is quite
>>> magicky.
>>
>> With the talk of overlay modules and go-style interface, being able to
>> specify the type of an object as well as its bufferness could become
>> more interesting than it even is now. The notion of supporting
>> multiple interfaces, e.g.
>>
>> cdef np.ndarray & double[:] my_array
>>
>> could obviate the need for np.ndarray[double]. Until we support
>> something like this, or decide to reject it, I think we need to keep
>> the old-style syntax around. (np.ndarray[double] could even become
>> this intersection type to gain all the new features before we decide
>> on a appropriate syntax).
>
> It's kind of interesting but also kind of a pain to declare everywhere
> like that.

Although I suppose a typedef could help. But then it's harder to see
the dtype without looking up the typedef declaration. Oh well :)

> Buffer syntax should by no means deprecated in the near
> future, but at some point it will be better to have one way to do
> things, whether slightly magicky or more convoluted or not. Also, as
> Dag mentioned, if we want fused extension types it makes more sense to
> remove buffer syntax to disambiguate this and avoid context-dependent
> special casing (e.g. np.ndarray and array.array).
>
>>> I wouldn't particularly mind something concise like 'm.obj'.
>>> The AttributeError would be the case as usual, when a python object
>>> doesn't have the right interface.
>>
>> Having to insert the .obj in there does make it more painful to
>> convert existing Python code.
>
> Yes, hence my slight bias towards magicky. But I do fully agree with
> all opposing arguments that say "too much magic". I just prefer to be
> pragmatic here :)
>
>> - Robert
>> ___
>> cython-devel mailing list
>> cython-devel@python.org
>> http://mail.python.org/mailman/listinfo/cython-devel
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] buffer syntax vs. memory view syntax

2012-05-09 Thread Robert Bradshaw
On Wed, May 9, 2012 at 12:09 PM, mark florisson
 wrote:
> On 9 May 2012 20:08, mark florisson  wrote:
>> On 9 May 2012 19:56, Robert Bradshaw  wrote:
>>> On Tue, May 8, 2012 at 3:35 AM, mark florisson
>>>  wrote:
 On 8 May 2012 10:47, Dag Sverre Seljebotn  
 wrote:
>
> After some thinking I believe I can see more clearly where Mark is coming
> from. To sum up, it's either
>
> A) Keep both np.ndarray[double] and double[:] around, with clearly defined
> and separate roles. np.ndarray[double] implementation is revamped to allow
> fast slicing etc., based on the double[:] implementation.
>
> B) Deprecate np.ndarray[double] sooner rather than later, but make 
> double[:]
> have functionality that is *really* close to what np.ndarray[double]
> currently does. In most cases one should be able to basically replace
> np.ndarray[double] with double[:] and the code should continue to work 
> just
> like before; difference is that if you pass in anything else than a NumPy
> array, it will likely fail with a runtime AttributeError at some point
> rather than fail a PyType_Check.

 That's a good summary. I have a big preference for B here, but I agree
 that treating a typed memoryview as both a user object (possibly
 converted through callback) and a typed memoryview "subclass" is quite
 magicky.
>>>
>>> With the talk of overlay modules and go-style interface, being able to
>>> specify the type of an object as well as its bufferness could become
>>> more interesting than it even is now. The notion of supporting
>>> multiple interfaces, e.g.
>>>
>>> cdef np.ndarray & double[:] my_array
>>>
>>> could obviate the need for np.ndarray[double]. Until we support
>>> something like this, or decide to reject it, I think we need to keep
>>> the old-style syntax around. (np.ndarray[double] could even become
>>> this intersection type to gain all the new features before we decide
>>> on a appropriate syntax).
>>
>> It's kind of interesting but also kind of a pain to declare everywhere
>> like that.
>
> Although I suppose a typedef could help. But then it's harder to see
> the dtype without lookup up the typedef declaration. Oh well :)

One would only use this syntax when one wanted to use features from both.

>> Yes, hence my slight bias towards magicky. But I do fully agree with
>> all opposing arguments that say "too much magic". I just prefer to be
>> pragmatic here :)

Same here. I think part of the magic feel is due to the ambiguity; a
concrete and simple declaration of when it acts as an object and when
it doesn't could help here. Auto-coercion is well engrained into the
Cython language (and one of the big selling points) so I think that's
OK.

- Robert
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel


Re: [Cython] CF based type inference

2012-05-09 Thread Stefan Behnel
Vitja Makarov, 08.05.2012 15:47:
> 2012/5/8 Stefan Behnel:
>> Vitja has rebased the type inference on the control flow, so I wonder if
>> this will enable us to properly infer this:
>>
>>  def partial_validity():
>>"""
>>>>> partial_validity()
>>('Python object', 'double', 'str object')
>>"""
>>a = 1.0
>>b = a + 2   # definitely double
>>a = 'test'
>>c = a + 'toast'  # definitely str
>>return typeof(a), typeof(b), typeof(c)
>>
>> I think, what is mainly needed for this is that a NameNode with an
>> undeclared type should not report its own entry as dependency but that of
>> its own cf_assignments. Would this work?
>>
>> (Haven't got the time to try it out right now, so I'm dumping it here.)
> 
> Yeah, that might work. The other way to go is to split entries:
> 
>  def partial_validity():
>"""
>>>> partial_validity()
>('str object', 'double', 'str object')
>"""
>a_1 = 1.0
>b = a_1 + 2   # definitely double
>a_2 = 'test'
>c = a_2 + 'toast'  # definitely str
>return typeof(a_2), typeof(b), typeof(c)
> 
> And this should work better because it allows to infer a_1 as a double
> and a_2 as a string.

How would type checks fit into this? Stupid example:

    def test(x):
        if isinstance(x, MyExtType):
            x.call_c_method()    # type known, no None check needed
        else:
            x.call_py_method()   # type unknown, may be None

Would it work to consider a type checking branch an assignment to a new
(and differently typed) entry?
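
One way to picture it (a conceptual sketch of the entry split, not
actual compiler output):

    def test(x):
        if isinstance(x, MyExtType):
            x_1 = x               # new entry: known MyExtType, not None in this branch
            x_1.call_c_method()
        else:
            x_2 = x               # separate entry: plain Python object
            x_2.call_py_method()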

Stefan
___
cython-devel mailing list
cython-devel@python.org
http://mail.python.org/mailman/listinfo/cython-devel