On 13.06.2010 05:47, David Cournapeau wrote:
>
> This only works in simple cases. What do you do when you don't know
> the output size ?
First: If you don't know, you don't know. Then you're screwed and C is
not going to help.
Second: If we cannot figure out how much to allocate before starting
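A quick Python-level illustration of the caller-allocates convention being argued for here: the caller works out (or over-estimates) the output size and hands the kernel a preallocated buffer, so nothing inside the computational core needs to allocate. The existing out= argument is used only as an analogy for the proposed C-level convention:

    import numpy as np

    a = np.arange(5.0)
    b = np.ones(5)

    out = np.empty(5)        # caller decides the size and owns the memory
    np.add(a, b, out=out)    # the kernel only fills the buffer it was given
    print(out)               # [1. 2. 3. 4. 5.]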
On Sun, Jun 13, 2010 at 11:39 AM, Sturla Molden wrote:
> If NumPy does not allocate memory on its own, there will be no leaks
> due to errors in NumPy.
>
> There is still work to do in the core, i.e. the computational loops in
> array operators, broadcasting, ufuncs, copying data between buffers
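To make the "no allocation inside NumPy" point concrete, here is a small Python-level sketch of an array that is only a view on memory somebody else allocated and owns; the ctypes buffer stands in for a memmap, a user-supplied buffer, or memory handed over from another language or VM:

    import ctypes
    import numpy as np

    n = 8
    raw = (ctypes.c_double * n)()                # caller-owned allocation
    arr = np.frombuffer(raw, dtype=np.float64)   # zero-copy view, NumPy allocates nothing

    arr[:] = np.arange(n)                        # NumPy computes into borrowed memory
    print(raw[3])                                # 3.0 -- same storage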
On 13.06.2010 02:39, David Cournapeau wrote:
>
> But the point is to get rid of the python dependency, and if you don't
> allow any api call to allocate memory, there is not much left to
> implement in the core.
>
>
Memory allocation is platform dependent. A CPython version could use
bytearray
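A sketch of what the bytearray idea could look like from the CPython side: the interpreter owns the allocation (an ordinary bytearray) and NumPy only gets a writable, zero-copy view of it. The sizes and dtype below are arbitrary:

    import numpy as np

    buf = bytearray(8 * 10)                    # 10 float64 slots, owned by CPython
    a = np.frombuffer(buf, dtype=np.float64)   # writable, zero-copy view
    a[:] = np.linspace(0.0, 1.0, 10)
    print(buf[:8])                             # the same storage, seen as raw bytes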
On Sun, Jun 13, 2010 at 2:00 AM, Sturla Molden wrote:
> On 12.06.2010 15:57, David Cournapeau wrote:
>> Anything non trivial will require memory allocation and object
>> ownership conventions. If the goal is interoperation with other
>> languages and vm, you may want to use something else than pl
I apologize ahead of time for anything I might be totally missing, but in
order to make PyArray_Scalar() work on non-CPython interpreters, it's
necessary for me to significantly refactor that function. I've made
(untested but correct looking) changes to the function to handle all of the
data types
Sat, 12 Jun 2010 16:30:14 -0400, Alan Bromborsky wrote:
> If I have a single numpy array, for example with 3 indices T_{ijk} and I
> want to sum over two of them in the sense of tensor contraction -
>
> T_{k} = \sum_{i=0}^{n-1} T_{iik}. Is there an easy way to do this with
> numpy?
HTH, (not really
Sat, 12 Jun 2010 23:15:16 +0200, Friedrich Romstedt wrote:
[clip]
> But note that for:
> T[:, I, I]
> the shape is reversed with respect to that of:
> T[I, :, I] and T[I, I, :] .
>
> I think it should be written in the docs how the shape is derived.
It's explained there in detail (although mayb
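For anyone following along, a small demonstration of the rule in question (standard advanced-indexing behaviour, nothing specific to this thread): adjacent advanced indices keep their position in the result, while advanced indices separated by a slice push the index axis to the front, which is why T[:, I, I] comes out "reversed" relative to the other two forms:

    import numpy as np

    n = 3
    T = np.arange(n**3).reshape(n, n, n)
    I = np.arange(n)

    # element [i, k] of T[I, I, :] is T[i, i, k]; the index axis comes first
    print(np.allclose(T[I, I, :], [[T[i, i, k] for k in I] for i in I]))  # True
    # element [i, j] of T[I, :, I] is T[i, j, i]; index axis first again
    print(np.allclose(T[I, :, I], [[T[i, j, i] for j in I] for i in I]))  # True
    # element [j, i] of T[:, I, I] is T[j, i, i]; here the index axis comes second
    print(np.allclose(T[:, I, I], [[T[j, i, i] for i in I] for j in I]))  # True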
Friedrich Romstedt wrote:
> 2010/6/12 Alan Bromborsky :
>
>> If I have a single numpy array, for example with 3 indices T_{ijk} and I
>> want to sum over two of them in the sense of tensor contraction -
>>
>> T_{k} = \sum_{i=0}^{n-1} T_{iik}. Is there an easy way to do this with
>> numpy?
>>
2010/6/12 Alan Bromborsky :
> If I have a single numpy array, for example with 3 indices T_{ijk} and I
> want to sum over two of them in the sense of tensor contraction -
>
> T_{k} = \sum_{i=0}^{n-1} T_{iik}. Is there an easy way to do this with
> numpy?
Also you can give:
T[I, I, :].sum(axis=0)
a
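Putting the suggestions in this thread together, here are a few equivalent spellings of the contraction T_{k} = \sum_i T_{iik}, checked against an explicit loop (the array here is random and serves only the comparison):

    import numpy as np

    n, m = 4, 5
    T = np.random.rand(n, n, m)
    I = np.arange(n)

    via_index = T[I, I, :].sum(axis=0)          # the fancy-indexing form suggested above
    via_trace = np.trace(T, axis1=0, axis2=1)   # trace over the two repeated axes
    via_loop  = sum(T[i, i, :] for i in range(n))

    print(np.allclose(via_index, via_trace))    # True
    print(np.allclose(via_trace, via_loop))     # True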
On Sat, Jun 12, 2010 at 2:56 PM, Dag Sverre Seljebotn <
da...@student.matnat.uio.no> wrote:
> Charles Harris wrote:
> > On Sat, Jun 12, 2010 at 11:38 AM, Dag Sverre Seljebotn <
> > da...@student.matnat.uio.no> wrote:
> >
> >> Christopher Barker wrote:
> >> > David Cournapeau wrote:
> >> >>> In the
On Sat, Jun 12, 2010 at 4:30 PM, Alan Bromborsky wrote:
> If I have a single numpy array, for example with 3 indices T_{ijk} and I
> want to sum over two of them in the sense of tensor contraction -
>
> T_{k} = \sum_{i=0}^{n-1} T_{iik}. Is there an easy way to do this with
> numpy?
looking at numpy
Charles Harris wrote:
> On Sat, Jun 12, 2010 at 11:38 AM, Dag Sverre Seljebotn <
> da...@student.matnat.uio.no> wrote:
>
>> Christopher Barker wrote:
>> > David Cournapeau wrote:
>> >>> In the core C numpy library there would be new "numpy_array" struct
>> >>> with attributes
>> >>>
>> >>> numpy_
If I have a single numpy array, for example with 3 indices T_{ijk} and I
want to sum over two of them in the sense of tensor contraction -
T_{k} = \sum_{i=0}^{n-1} T_{iik}. Is there an easy way to do this with
numpy?
On Sat, Jun 12, 2010 at 3:57 PM, David Cournapeau wrote:
> On Sat, Jun 12, 2010 at 10:27 PM, Sebastian Walter
> wrote:
>> On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden wrote:
>>>
>>> I have a few radical suggestions:
>>>
>>> 1. Use ctypes as glue to the core DLL, so we can completely forget abo
If I may, I would like to throw out another possible feature that might
need to be taken into consideration when designing the implementation of
numpy arrays.
One thing I found somewhat lacking -- if that is the right term -- is a way
to convolve a numpy array with an arbitrary windowing element.
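For the 1-D case at least, this can already be written with np.convolve and a window of the caller's choosing; the Hanning window below is just one possible windowing element, and the normalization is a matter of taste. (The n-dimensional case is where the feature request above really bites.)

    import numpy as np

    x = np.random.rand(200)       # some signal
    w = np.hanning(25)            # arbitrary windowing element, here a Hanning window
    w /= w.sum()                  # normalize so the window preserves the mean

    smoothed = np.convolve(x, w, mode='same')
    print(x.shape, smoothed.shape)   # (200,) (200,)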
2010/6/12 Charles R Harris
>
> This is more the way I see things, except I would divide the bottom layer
> into two parts, views and memory. The memory can come from many places --
> memmaps, user supplied buffers, etc. -- but we should provide a simple
> reference counted allocator for the defau
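A toy Python sketch of the "simple reference counted allocator" idea, just to pin down the ownership rule being proposed: blocks carry an explicit refcount and the storage is released only when the last view lets go. Every name below is made up for illustration; it is not a proposed API.

    import numpy as np

    class RefCountedBlock:
        # Toy model: bytearray stands in for malloc'd memory, None for free().
        def __init__(self, nbytes):
            self.data = bytearray(nbytes)
            self.refcount = 1

        def incref(self):
            self.refcount += 1

        def decref(self):
            self.refcount -= 1
            if self.refcount == 0:
                self.data = None

    block = RefCountedBlock(8 * 16)
    block.incref()                                     # a second view takes a reference
    view = np.frombuffer(block.data, dtype=np.float64)
    view[:] = 1.0
    block.decref()                                     # first owner goes away
    print(block.refcount, view.sum())                  # 1 16.0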
On Sat, Jun 12, 2010 at 1:35 PM, Charles R Harris wrote:
>
>
> On Sat, Jun 12, 2010 at 11:38 AM, Dag Sverre Seljebotn <
> da...@student.matnat.uio.no> wrote:
>
>> Christopher Barker wrote:
>> > David Cournapeau wrote:
>> >>> In the core C numpy library there would be new "numpy_array" struct
>>
On Sat, Jun 12, 2010 at 11:38 AM, Dag Sverre Seljebotn <
da...@student.matnat.uio.no> wrote:
> Christopher Barker wrote:
> > David Cournapeau wrote:
> >>> In the core C numpy library there would be new "numpy_array" struct
> >>> with attributes
> >>>
> >>> numpy_array->buffer
> >
> >> Anything n
Christopher Barker wrote:
> David Cournapeau wrote:
>>> In the core C numpy library there would be new "numpy_array" struct
>>> with attributes
>>>
>>> numpy_array->buffer
>
>> Anything non trivial will require memory allocation and object
>> ownership conventions.
>
> I totally agree -- I've bee
David Cournapeau wrote:
>> In the core C numpy library there would be new "numpy_array" struct
>> with attributes
>>
>> numpy_array->buffer
> Anything non trivial will require memory allocation and object
> ownership conventions.
I totally agree -- I've been thinking for a while about a core ar
On 12.06.2010 15:57, David Cournapeau wrote:
> Anything non trivial will require memory allocation and object
> ownership conventions. If the goal is interoperation with other
> languages and vm, you may want to use something else than plain
> malloc, to interact better with the allocation strateg
On Sat, Jun 12, 2010 at 10:27 PM, Sebastian Walter
wrote:
> On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden wrote:
>>
>> I have a few radical suggestions:
>>
>> 1. Use ctypes as glue to the core DLL, so we can completely forget about
>> refcounts and similar mess. Why put manual reference counting
On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden wrote:
>
> I have a few radical suggestions:
>
> 1. Use ctypes as glue to the core DLL, so we can completely forget about
> refcounts and similar mess. Why put manual reference counting and error
> handling in the core? It's stupid.
I totally agree,
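Purely as a sketch of how the ctypes-as-glue idea could meet the proposed numpy_array struct: the Python layer mirrors the struct and keeps all refcounting and error handling on its side, while the core DLL sees only plain pointers and error codes. The library name, the fields beyond ->buffer, and the function below are all invented for illustration.

    import ctypes

    class numpy_array(ctypes.Structure):
        # Only ->buffer is taken from the thread; the remaining fields are guesses.
        _fields_ = [
            ("buffer",  ctypes.c_void_p),
            ("nd",      ctypes.c_int),
            ("dims",    ctypes.POINTER(ctypes.c_ssize_t)),
            ("strides", ctypes.POINTER(ctypes.c_ssize_t)),
        ]

    # Hypothetical glue; uncomment against a real core DLL with this interface.
    # core = ctypes.CDLL("libndarray.so")
    # core.ndarray_add.argtypes = [ctypes.POINTER(numpy_array)] * 3
    # core.ndarray_add.restype = ctypes.c_int    # plain error code, no exceptions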