[Numpy-discussion] [f2py] f2py ignores 'fortranname' inside F90 modules?

2012-04-16 Thread Irwin Zaid
Hi all,

I've been having an issue with f2py simply ignoring the fortranname 
option when the Fortran subroutine is inside an F90 module. That option 
renames a Fortran subroutine in the generated Python wrapper.

I don't know if this behaviour is to be expected, or if I am doing 
something wrong. I would definitely appreciate any help!

As an example, here is code that correctly produces a Python module 
'test' with a single Fortran subroutine 'my_wrapped_subroutine'.

TEST_SUBROUTINE.F90
---
subroutine my_subroutine()
  write (*,*) 'Hello, world!'
end subroutine my_subroutine

TEST_SUBROUTINE.PYF
---
python module test
  interface
    subroutine my_wrapped_subroutine()
      fortranname my_subroutine
    end subroutine my_wrapped_subroutine
  end interface
end python module test

But, when the Fortran subroutine 'my_subroutine' is placed inside a 
module, the fortranname option seems to be entirely ignored. The 
following example fails to compile. The error is "Error: Symbol 
'my_wrapped_subroutine' referenced at (1) not found in module 'my_module'".

TEST_MODULE.F90
---
module my_module
contains
  subroutine my_subroutine()
    write (*,*) 'Hello, world!'
  end subroutine my_subroutine
end module my_module

TEST_MODULE.PYF
---
python module test
  interface
    module my_module
      contains
        subroutine my_wrapped_subroutine()
          fortranname my_subroutine
        end subroutine my_wrapped_subroutine
    end module my_module
  end interface
end python module test
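
For reference, I am building each example with a standard f2py invocation 
along the lines of

  f2py -c test_module.pyf test_module.f90

(with the corresponding file names for the first example); the error 
quoted above comes from the compilation step.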

F2py is a great tool aside from this and a few other minor quibbles. So 
thanks a lot!

Cheers,

Irwin


[Numpy-discussion] Using NumPy iterators in C extension

2011-06-06 Thread Irwin Zaid
Hi all,

I am writing a C extension for NumPy that implements a custom function.
I have a minor question about iterators, and their order of iteration,
that I would really appreciate some help with. Here goes...

My function takes a sequence of N-dimensional input arrays and a single
(N+1)-dimensional output array. For each input array, it iterates over
the N dimensions of that array and the leading N dimensions of the
output array. It stores a result in the trailing dimension of the output 
array that is cumulative for the entire sequence of input arrays.

Crucially, I need the input and output elements to always match up during 
an iteration; otherwise the output array becomes incorrect. In other 
words, is the k-th element visited in the output array during a single 
iteration the same for every input array?

So far, I have implemented this function using two iterators created
with NpyIter_New(), one for an N-dimensional input array and one for the 
(N+1)-dimensional output array. I call NpyIter_RemoveAxis() on the
output array iterator. I am using NPY_KEEPORDER for the NPY_ORDER flag,
which I suspect is not what I want.

How should I guarantee that I preserve order? Should I be using a
different order flag, like NPY_CORDER? Or should I be creating a
multi-array iterator with NpyIter_MultiNew for a single input array and
the output array? I don't see how to do that in my case, though.
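
To make the question concrete, here is roughly the shape of what I have 
now (a sketch only, with error handling trimmed), except with NPY_CORDER 
substituted for NPY_KEEPORDER -- is this the right way to guarantee a 
matching visitation order?

#include <Python.h>
#include <numpy/arrayobject.h>

/* 'input' is N-dimensional; 'output' is (N+1)-dimensional with its
 * leading N dimensions equal to input's dimensions. */
static int
accumulate_one(PyArrayObject *input, PyArrayObject *output)
{
    /* NPY_CORDER instead of NPY_KEEPORDER, so that both iterators
     * visit their elements in the same, well-defined (C) order. */
    NpyIter *it_in = NpyIter_New(input, NPY_ITER_READONLY,
                                 NPY_CORDER, NPY_NO_CASTING, NULL);
    NpyIter *it_out = NpyIter_New(output,
                                  NPY_ITER_READWRITE | NPY_ITER_MULTI_INDEX,
                                  NPY_CORDER, NPY_NO_CASTING, NULL);

    /* Drop the trailing axis of the output iterator, so it iterates
     * over the leading N dimensions only (NPY_ITER_MULTI_INDEX is
     * required by NpyIter_RemoveAxis). */
    NpyIter_RemoveAxis(it_out, NpyIter_GetNDim(it_out) - 1);

    NpyIter_IterNextFunc *next_in = NpyIter_GetIterNext(it_in, NULL);
    NpyIter_IterNextFunc *next_out = NpyIter_GetIterNext(it_out, NULL);
    char **data_in = NpyIter_GetDataPtrArray(it_in);
    char **data_out = NpyIter_GetDataPtrArray(it_out);

    do {
        /* data_in[0] is the current input element; data_out[0] is the
         * start of the output's trailing dimension at the (hopefully)
         * matching position. Accumulate into that trailing dimension. */
    } while (next_in(it_in) && next_out(it_out));

    NpyIter_Deallocate(it_in);
    NpyIter_Deallocate(it_out);
    return 0;
}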

Thanks in advance for your help! I can provide more information if
something is unclear.

Best,

Irwin


Re: [Numpy-discussion] Notes from the numpy dev meeting at scipy 2015

2015-08-26 Thread Irwin Zaid
Hello everyone,

Mark and I thought it would be good to weigh in here and make ourselves
explicitly available to discuss DyND. To be clear, neither of us has strong feelings on
what NumPy *should* do -- we are both long-time NumPy users and we both see
NumPy being around for a while. But, as Francesc mentioned, there is also
the open question of where the community should be implementing new
features. It would certainly be nice to not have duplication of effort, but
a decision like that can only arise naturally from a broad consensus.

Travis covered DyND's history and its relationship with Continuum pretty
well, so what's really missing here is what DyND is, where it is going, and
how long we think it'll take to get there. We'll try to stick to those topics.

We designed DyND to fill what we saw as fundamental gaps in NumPy. These are
not only missing features, but also limitations of its architecture. Many of
these gaps have been mentioned several times before in this thread and
elsewhere, but a brief list would include: better support for missing
values, variable-length strings, GPUs, more extensible types, categoricals,
more datetime features, ... Some of these were indeed on Nathaniel's list
and many of them are already working (albeit sometimes partially) in DyND.

And, yes, we strongly feel that NumPy's fundamental dependence on Python
itself is a limitation. Why should we not take the fantastic success of
NumPy and generalize it across other languages?

So, we see DyND as having a twofold purpose. The first is to expand upon the
kinds of data that NumPy can represent and do computations upon. The second
is to provide a standard array package that can cross the language barrier
and easily interoperate between C++, Python, and whatever else you want.

DyND, at the moment, is quite functional in some areas and lacking a bit in
others. There is no doubt that it is still "experimental" and a bit
unstable. But, it has advanced by a lot recently, and we are steadily
working towards something like a version 1.0. In fact, DyND's internal C++
architecture stabilized some time ago -- what's missing now is really solid
coverage of some common use cases, alongside up-to-date Python bindings and
an easy installation process. All of these are in progress and advancing as
quickly as we can make them.

On the other hand, we are also building out some other features. To give
just one example that might excite people, DyND now has Numba
interoperability -- one can write DyND's equivalent of a ufunc in Python
and, with a single decorator, have a broadcasting or reduction callable that
gets JITed or (soon) ahead-of-time compiled.

Over the next few months, we are hopeful that we can get DyND into a state
where it is largely usable by those familiar with NumPy semantics. The
reason why we can be a bit more aggressive in our timeline now is because of
the great support we are getting from Continuum.

With all that said, we are happy to be a part of any broader conversation
involving NumPy and the community.

All the best,

Irwin and Mark



Re: [Numpy-discussion] Notes from the numpy dev meeting at scipy 2015

2015-08-26 Thread Irwin Zaid
On Wed, Aug 26, 2015 at 6:11 PM, Antoine Pitrou wrote:

> One possible limitation is that the lingua franca for language
> interoperability is C, not C++. DyND doesn't have to be written in C,
> but exposing a nice C API may help make it attractive to the various
> language runtimes out there.
>

That is absolutely true and a C API is on the long-term roadmap. At the
moment, a C API is not needed for DyND to be stable and usable from Python,
which is one reason we aren't doing it now.

Irwin


[Numpy-discussion] DyND 0.7.1 Release

2016-02-15 Thread Irwin Zaid
Hello everyone,

I'm pleased to announce the latest 0.7.1 release of DyND. The release notes
are at https://github.com/libdynd/libdynd/blob/master/docs/release_notes.txt.

Over the last 6 months, DyND has really matured a lot and many features
that were "experimental" before are quite usable at the moment. At the same
time, we still have bugs and issues to sort out, so I wouldn't claim DyND
has reached stability just yet. Nevertheless, I'd encourage early adopters
to try it out. You can install it easily using conda, via "conda install -c
dynd/channel/dev dynd-python".

Presently, the core DyND team consists of myself, Mark Wiebe, and Ian
Henriksen, alongside several other contributors. Our focus is almost
entirely on closing gaps in stability and usability -- the novel features in DyND
that people find attractive (including missing values, ragged arrays,
variable-sized strings, dynamic custom types, and overloadable callables,
among others) are functioning pretty well now.

NumPy compatibility and interoperability is very important to us, and is
something we are constantly improving. We would eventually like to have an
optional NumPy-like API that is fully consistent with what a NumPy user
would expect, but we're not there just yet.

The DyND team would be happy to answer any questions people have about
DyND, like "what is working and what is not" or "what do we still need to
do to hit DyND 1.0".

All the best,

Irwin


Re: [Numpy-discussion] Starting work on ufunc rewrite

2016-03-31 Thread Irwin Zaid
Hey guys,

I figured I'd just chime in here.

Over in DyND-town, we've spent a lot of time figuring out how to structure
DyND callables, which are actually more general than NumPy gufuncs. We've
just recently gotten them to a place where we are very happy with them; they
are able to represent a wide range of computations.

Our callables use a two-fold approach to evaluation. The first pass is a
resolution pass, where a callable can specialize what it is doing based on
the input types. It is able to deduce the return type, multidispatch, or
even perform some sort of recursive analysis in the form of computations
that call themselves. The second pass is construction of a kernel object
that is exactly specialized to the metadata (e.g., strides, contiguity,
...) of the array.

The callable itself can store arbitrary data, as can each pass of the
evaluation. Either (or both) of these passes can be done ahead of time,
allowing one to have a callable exactly specialized for your array.
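
To make that shape concrete, here is a minimal sketch in C (illustrative 
only -- these are invented names and signatures, not DyND's actual C++ 
API) of a callable split into the two passes described above:

#include <stddef.h>

/* Illustrative two-pass callable, not DyND's actual API.
 * Pass 1 resolves types; pass 2 builds a kernel specialized to
 * concrete array metadata (strides, contiguity, ...). */

typedef struct callable callable_t;

typedef struct {
    /* inner loop produced by pass 2, exactly specialized to the array */
    void (*run)(void *state, char **data, const ptrdiff_t *strides,
                size_t count);
    void *state;
} kernel_t;

struct callable {
    /* pass 1: deduce the output type from the input types; this is
     * also where multidispatch or recursive analysis would happen */
    int (*resolve)(callable_t *self, int nin, const int *in_types,
                   int *out_type);
    /* pass 2: construct a kernel from the concrete strides/metadata */
    int (*instantiate)(callable_t *self, int nin,
                       const ptrdiff_t *in_strides, kernel_t *out);
    /* arbitrary per-callable state, usable by both passes */
    void *data;
};

Either pass can be run (and its result cached) ahead of time, which is 
what lets a callable be specialized once for a given array and then reused.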

If NumPy is looking to change its ufunc design, we'd be happy to share our
experiences with this.

Irwin

On Thu, Mar 31, 2016 at 4:00 PM, Jaime Fernández del Río wrote:

> I have started discussing with Nathaniel the implementation of the ufunc
> ABI break that he proposed in a draft NEP a few months ago:
>
> http://thread.gmane.org/gmane.comp.python.numeric.general/61270
>
> His original proposal was to make the public portion of PyUFuncObject be:
>
>     typedef struct {
>         PyObject_HEAD
>         int nin, nout, nargs;
>     } PyUFuncObject;
>
> Of course the idea is that internally we would use a much larger struct
> that we could change at will, as long as its first few entries matched
> those of PyUFuncObject. My problem with this, and I may very well be
> missing something, is that in PyUFunc_Type we need to set the tp_basicsize
> to the size of the extended struct, so we would end up having to expose
> its contents. This is somewhat similar to what now happens with
> PyArrayObject: anyone can #include "ndarraytypes.h", cast PyArrayObject*
> to PyArrayObjectFields*, and access the guts of the struct without using
> the supplied API inline functions. Not the end of the world, but if you
> want to make something private, you might as well make it truly private.
>
> I think it would be better to have something similar to what NpyIter does:
>
>     typedef struct {
>         PyObject_HEAD
>         NpyUFunc *ufunc;
>     } PyUFuncObject;
>
> where NpyUFunc would, at this level, be an opaque type of which nothing
> would be known. We could have some of the NpyUFunc attributes cached on
> the PyUFuncObject struct for easier access, as is done in
> NewNpyArrayIterObject. This would also give us more liberty in making
> NpyUFunc be whatever we want it to be, including a variable-sized memory
> chunk that we could use and access at will. NpyIter is again a good
> example, where rather than storing pointers to strides and dimensions
> arrays, these are made part of the NpyIter memory chunk, effectively being
> equivalent to having variable sized arrays as part of the struct. And I
> think we will probably no longer trigger the Cython warnings about size
> changes either.
>
> Any thoughts on this approach? Is there anything fundamentally wrong with
> what I'm proposing here?
>
> Also, this is probably going to end up being a rewrite of a pretty large
> and complex codebase. I am not sure that working on this on my own and
> eventually sending a humongous PR is the best approach. Any thoughts on
> how best to handle turning this into a collaborative, incremental effort?
> Anyone who would like to join in the fun?
>
> Jaime
>
> --
> (\__/)
> ( O.o)
> ( > <) This is Bunny. Copy Bunny into your signature and help him with
> his plans for world domination.