On Tue, Mar 29, 2011 at 11:03 AM, Dag Sverre Seljebotn <d.s.seljeb...@astro.uio.no> wrote:
>
> I think it should be a(1:n*stride:stride) or something.
>
>
Yes, it was my typo and I assumed that n is the length of the original
array.
Pearu
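A minimal NumPy-side sketch of the indexing being corrected here (the buffer, n, and stride values are made up for illustration): a strided view of n elements over a flat buffer selects the same elements as the Fortran section a(1:n*stride:stride), without copying anything.

    import numpy as np

    n, stride = 5, 3
    buf = np.arange(n * stride, dtype=np.float64)    # flat, contiguous buffer

    # Element i of the view sits at buf[i*stride]; this matches the 1-based
    # Fortran section a(1:n*stride:stride).
    view = buf[::stride]

    assert view.shape == (n,)
    assert view.base is buf                          # a view, not a copy
    assert view.strides == (stride * buf.itemsize,)  # byte stride of the view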
On Tue, Mar 29, 2011 at 8:13 AM, Pearu Peterson wrote:
>
>
> On Mon, Mar 28, 2011 at 10:44 PM, Sturla Molden wrote:
>
>> On 28.03.2011 19:12, Pearu Peterson wrote:
>> >
>> > FYI, f2py in numpy 1.6.x also supports assumed-shape arrays.
>>
>> How did you do that? Chasm-interop, C bindings from F03, or marshalling
>> through explicit-shape?
On Mon, Mar 28, 2011 at 10:44 PM, Sturla Molden wrote:
> On 28.03.2011 19:12, Pearu Peterson wrote:
> >
> > FYI, f2py in numpy 1.6.x also supports assumed-shape arrays.
>
> How did you do that? Chasm-interop, C bindings from F03, or marshalling
> through explicit-shape?
>
The latter.
Basically, f2py generates a Fortran wrapper with explicit-shape array
arguments that then calls the routine taking assumed-shape arrays.
On 28.03.2011 19:12, Pearu Peterson wrote:
>
> FYI, f2py in numpy 1.6.x also supports assumed-shape arrays.
How did you do that? Chasm-interop, C bindings from F03, or marshalling
through explicit-shape?
Can f2py pass strided memory from NumPy to Fortran?
Sturla
On Mon, Mar 28, 2011 at 6:01 PM, Sturla Molden wrote:
>
>
> I'll try to clarify this:
>
> ** Most Fortran 77 compilers (and beyond) assume explicit-shape and
> assumed-size arrays are contiguous blocks of memory. That is, arrays
> declared like a(m,n) or a(m,*). They are usually passed as a pointer to the
> first element.
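A short NumPy-side illustration of that assumption (array names are made up): a strided section is not such a contiguous block, and the .flags attribute shows whether a contiguous copy would be needed before handing it to an explicit-shape routine.

    import numpy as np

    a = np.zeros((100, 100), order='F')   # Fortran-contiguous block
    sub = a[::2, ::2]                     # strided section of it

    print(sub.flags['F_CONTIGUOUS'])      # False: not a plain a(m,n)-style block
    tmp = np.asfortranarray(sub)          # contiguous copy, made only when needed
    print(tmp.flags['F_CONTIGUOUS'])      # True
    print(np.may_share_memory(tmp, a))    # False: tmp is a separate, temporary copy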
On 28.03.2011 14:28, Dag Sverre Seljebotn wrote:
>
> Sure, I realize that it is not standard. I'm mostly wondering whether
> major Fortran compilers support working with strided memory in practice
> (defined as: you won't get out-of-memory errors when passing around a huge
> strided array subset).
On 28.03.2011 17:01, Sturla Molden wrote:
>
> ** Most Fortran compilers will make a temporary copy when passing a
> non-contiguous array section to a subroutine expecting an explicit-shape
> or assumed-shape array.
Sorry, typo. The latter should be "assumed-size array".
Sturla
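The same copy-in behaviour can be mimicked on the NumPy side (a sketch only; it says nothing about what any particular compiler does internally): a contiguous array passes through np.require untouched, while a non-contiguous section becomes a temporary contiguous copy.

    import numpy as np

    a = np.arange(20.0)

    whole = np.require(a, requirements=['C'])      # already contiguous
    print(np.may_share_memory(whole, a))           # True: no copy was needed

    section = a[::2]                               # non-contiguous section
    tmp = np.require(section, requirements=['C'])  # temporary contiguous copy
    print(np.may_share_memory(tmp, a))             # False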
On 28.03.2011 14:28, Dag Sverre Seljebotn wrote:
>
> Sure, I realize that it is not standard. I'm mostly wondering whether
> major Fortran compilers support working with strided memory in practice
> (defined as: you won't get out-of-memory errors when passing around a huge
> strided array subset).
I'll try to clarify this:
On 03/28/2011 12:55 PM, Sturla Molden wrote:
> On 28.03.2011 09:34, Dag Sverre Seljebotn wrote:
>> What would we do exactly -- pass the entire underlying buffer to Fortran
>> and then re-slice it Fortran side?
> Pass a C pointer to the first element along with shape and strides, get
> a Fortran pointer using c_f_pointer, then reslice the Fortran pointer.
On 28.03.2011 09:34, Dag Sverre Seljebotn wrote:
> What would we do exactly -- pass the entire underlying buffer to Fortran
> and then re-slice it Fortran side?
Pass a C pointer to the first element along with shape and strides, get
a Fortran pointer using c_f_pointer, then reslice the Fortran pointer.
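For reference, the ingredients mentioned here are all readily available on the NumPy side (a sketch; the Fortran receiver itself is not shown): the address of the first element, the shape, and the strides converted from bytes to elements.

    import numpy as np

    a = np.arange(12.0).reshape(3, 4)[:, ::2]   # a strided, non-contiguous view

    ptr = a.ctypes.data                         # C address of the first element
    shape = a.shape                             # (3, 2)
    strides_in_elements = tuple(s // a.itemsize for s in a.strides)

    # These three pieces are what a c_f_pointer-based wrapper would need in
    # order to rebuild and reslice the array on the Fortran side.
    print(hex(ptr), shape, strides_in_elements)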
On 03/27/2011 08:54 PM, Sturla Molden wrote:
> On 26.03.2011 19:31, Christopher Barker wrote:
>
>> To understand all this, you'll need to study up a bit on how numpy
>> arrays lay out and access the memory that they use: they use a concept
>> of "strided" memory. It's very powerful and flexible, but most other
>> numeric libs can't use those same data structures.
On 26.03.2011 19:31, Christopher Barker wrote:
> To understand all this, you'll need to study up a bit on how numpy
> arrays lay out and access the memory that they use: they use a concept
> of "strided" memory. It's very powerful and flexible, but most other
> numeric libs can't use those same data structures.
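A quick way to see the strided layout being described (purely illustrative values): slicing changes .shape and .strides while reusing the same memory, which is exactly the kind of layout most external libraries cannot consume directly.

    import numpy as np

    b = np.zeros((4, 5))              # C-ordered: rows are contiguous
    print(b.strides)                  # (40, 8): bytes to the next row / next column

    col = b[:, 1]                     # one column, a view into b
    print(col.shape, col.strides)     # (4,) (40,): elements are 40 bytes apart
    print(col.flags['C_CONTIGUOUS'])  # False: strided, not a plain contiguous block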
On Sat, Mar 26, 2011 at 3:16 PM, srean wrote:
>
> Ah! very nice. I did not know that numpy-1.6.1 supports in place 'dot',
>
"In place" is perhaps not the right word; I meant "in a specified location".
Ah! very nice. I did not know that numpy-1.6.1 supports in place 'dot', nor
that you could access the underlying BLAS functions like that. This is pretty
neat. Thanks. Now I at least have an idea how the sparse version might work.
If I get time I will probably give numpy-1.6.1 a shot.
On Sat, 26 Mar 2011 19:13:43 +0000, Pauli Virtanen wrote:
[clip]
> If you want to have control over temporaries, you can make use of the
> out= argument of ufuncs (`numpy.dot` will gain it in 1.6.1 --- you can
> call LAPACK routines from scipy.lib in the meantime, if your data is in
> Fortran order).
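A hedged sketch of the out= idiom being referred to (array names are made up; numpy.dot accepts out= from 1.6.1 onwards): the result is written into a preallocated array instead of into a freshly allocated temporary.

    import numpy as np

    A = np.random.rand(1000, 1000)
    x = np.random.rand(1000)
    y = np.empty(1000)             # preallocated output buffer

    np.dot(A, x, out=y)            # result lands in y, no new array is created
    np.multiply(y, 2.0, out=y)     # ufuncs take out= as well, here updating in place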
On Sat, 26 Mar 2011 12:32:24 -0500, srean wrote:
[clip]
> Is there any way, apart from using ufuncs, that I can make updater() write
> the result directly into b and not create a new temporary column that is
> then copied into b? Say for the matrix-vector multiply example.
Hi Christopher,
thanks for taking the time to reply at length. I do understand the concept
of striding in general but was not familiar with the NumPy way of accessing
that information. So thanks for pointing me to .flags and .strides.
That said, BLAS/LAPACK do have APIs that take the stride length as an argument.
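Tying those two together (an illustrative sketch, not taken from the thread): the increment argument that BLAS/LAPACK routines accept corresponds directly to a NumPy column view's stride measured in elements rather than bytes.

    import numpy as np

    b = np.zeros((6, 4))                  # C-ordered
    col = b[:, 2]                         # column view, non-contiguous

    inc = col.strides[0] // col.itemsize  # elements between consecutive entries
    print(inc)                            # 4, i.e. the row length of b

    # This is the value one would pass as incx/incy to a BLAS routine so it
    # can walk the column in place, without copying it first.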
On 3/26/11 10:12 AM, Pauli Virtanen wrote:
> On Sat, 26 Mar 2011 13:10:42 -0400, Hugo Gagnon wrote:
> [clip]
>> a1 = b[:,0]
>> a2 = b[:,1]
>> ...
>>
>> and it works but that doesn't help me for my problem. Is there a way to
>> reformulate the first code snippet above but with shallow copying?
>
> No. You need a 2-D array to "own" the data.
On 3/26/11 10:32 AM, srean wrote:
> I am also interested in this. In my application there is a large 2d
> array, let's call it 'b' to keep the notation consistent in the thread.
> b's columns need to be recomputed often. Ideally this re-computation
> happens in a function. Let's call that function updater(b, col_index).
Hi,
I am also interested in this. In my application there is a large 2d array,
let's call it 'b' to keep the notation consistent in the thread. b's
columns need to be recomputed often. Ideally this re-computation happens in
a function. Let's call that function updater(b, col_index).
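One way such an updater can avoid the temporary column discussed in the thread (a sketch; updater and the arithmetic inside it are made up): write straight into the column view, either by slice assignment or through a ufunc's out= argument.

    import numpy as np

    def updater(b, col_index):
        # b[:, col_index] is a view, so writing through it updates b directly
        # and no separate column array has to be copied back afterwards.
        np.multiply(b[:, col_index], 0.5, out=b[:, col_index])
        b[:, col_index] += 1.0

    b = np.arange(12.0).reshape(3, 4)
    updater(b, 2)
    print(b[:, 2])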
On Sat, 26 Mar 2011 13:10:42 -0400, Hugo Gagnon wrote:
[clip]
> a1 = b[:,0]
> a2 = b[:,1]
> ...
>
> and it works but that doesn't help me for my problem. Is there a way to
> reformulate the first code snippet above but with shallow copying?
No. You need a 2-D array to "own" the data. The second form works because a1,
a2, ... are then views into b.
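A compact illustration of that answer (using the thread's names b, a1, a2): create the 2-D array first and take the 1-D arrays as views of its columns; they then share data with b.

    import numpy as np

    b = np.zeros((5, 2))
    a1 = b[:, 0]          # views into b, no data is copied
    a2 = b[:, 1]

    a1[:] = 7.0           # writing through the view updates b as well
    print(b[:, 0])        # all 7.0
    print(a1.base is b)   # True: b owns the data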
Hello,
Say I have a few 1d arrays and one 2d array whose columns I want to be
the 1d arrays.
I also want all the a arrays to share the *same data* with the b
array.
If I call my 1d arrays a1, a2, etc. and my 2d array b, then
b[:,0] = a1[:]
b[:,1] = a2[:]
...
won't work because apparently copying is involved.
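A short demonstration of the behaviour described above (same names as the snippet): assignment into b copies the values, so a1 and b remain independent afterwards.

    import numpy as np

    a1 = np.arange(5.0)
    b = np.empty((5, 2))

    b[:, 0] = a1                         # copies a1's values into b's first column
    a1[0] = 99.0                         # later changes to a1 ...
    print(b[0, 0])                       # ... do not show up in b (still 0.0)
    print(np.may_share_memory(a1, b))    # False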