On 26/10/2007, Travis E. Oliphant <[EMAIL PROTECTED]> wrote:
> There is an optimization wherein the inner loops are done over the
> dimension with the smallest stride.
>
> What other cache-coherent optimizations do you recommend?
That sounds like a very good first step. I'm far from an expert on
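(As a rough illustration of the stride effect being discussed, not part of the quoted mail: the sketch below compares copying a C-contiguous array with copying its transposed view; the shape is arbitrary and timings vary by machine.)

    import numpy as np
    import timeit

    a = np.zeros((2000, 2000))        # C order: the last axis has the smallest stride
    print(a.strides)                  # e.g. (16000, 8): neighbours along axis 1 are adjacent in memory

    # Copying the array walks memory in order; copying the transposed view has to
    # gather elements that sit a full row apart, which is much harder on the cache.
    print(timeit.timeit(a.copy, number=10))
    print(timeit.timeit(a.T.copy, number=10))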
Hi all,
I cannot install numpy from recent svn
creating build/temp.linux-x86_64-2.5
creating build/temp.linux-x86_64-2.5/numpy
creating build/temp.linux-x86_64-2.5/numpy/core
creating build/temp.linux-x86_64-2.5/numpy/core/src
compile options:
'-Ibuild/src.linux-x86_64-2.5/numpy/core/src
-Inump
> What are people's opinions about the value of NumPy and SciPy on
> the CLR?
If anything, wouldn't the "big win" (if it's a win at all) be to get
NumPy/SciPy working on top of the JVM (as T. Hochberg tried)? This way
it's pretty much universally portable.
I know Jython isn't as up to speed
On 10/26/07, Robert Crida <[EMAIL PROTECTED]> wrote:
> Hi all
>
> I recently posted about a memory leak in numpy and failed to mention the
> version. The leak manifests itself in numpy-1.0.3.1 but is not present in
> numpy-1.0.2
>
> The following code reproduces the bug:
>
> import numpy as np
>
>
Travis E. Oliphant wrote:
>>> An IronPython compatible version of NumPy would be great. Of course
>>> it could be done by using C# to write NumPy, but I'm not sure that this
>>> would really be any less work than creating a "glue" layer that allowed
>>> most (or all) C-Python extensions to work with IronPython.
Scott Ransom wrote:
> > What are people's opinions about the value of NumPy and SciPy on the
> > CLR?
> As someone who uses Numpy/Scipy almost exclusively on Linux workstations
> or on clusters (in coordination with lots of C code), I wouldn't value
> NumPy and SciPy on the CLR at all.
> I am kind of
> What are people's opinions about the value of NumPy and SciPy on the
> CLR?
As someone who uses Numpy/Scipy almost exclusively on Linux workstations
or on clusters (in coordination with lots of C code), I wouldn't value
NumPy and SciPy on the CLR at all.
I am kind of curious, though, to see
On 10/26/07, Travis E. Oliphant <[EMAIL PROTECTED]> wrote:
>
> >> An IronPython compatible version of NumPy would be great. Of course
> >> it could be done by using C# to write NumPy, but I'm not sure that this
> >> would really be any less work than creating a "glue" layer that allowed
> >> mos
>> An IronPython compatible version of NumPy would be great. Of course
>> it could be done by using C# to write NumPy, but I'm not sure that this
>> would really be any less work than creating a "glue" layer that allowed
>> most (or all) C-Python extensions to work with IronPython.
>>
I
On 10/26/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
>
> On 10/26/07, dmitrey <[EMAIL PROTECTED]> wrote:
> > Travis E. Oliphant wrote:
> > > Giles Thomas wrote:
> > >
> > >> Hi,
> > >>
> > >> At Resolver Systems, we have a product that is written in IronPython -
> > >> the .NET Python impleme
On 10/26/07, dmitrey <[EMAIL PROTECTED]> wrote:
> Travis E. Oliphant wrote:
> > Giles Thomas wrote:
> >
> >> Hi,
> >>
> >> At Resolver Systems, we have a product that is written in IronPython -
> >> the .NET Python implementation - and allows users to use that language
> >> to script a spreadsheet-
Hi,
On 10/26/07, Timothy Hochberg <[EMAIL PROTECTED]> wrote:
>
>
> On 10/26/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> > On 10/26/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
> > > P.S: IMHO, this is one of the main limitations of numpy (or any language
> > > using arrays for speed; and th
Travis E. Oliphant wrote:
> Giles Thomas wrote:
>
>> Hi,
>>
>> At Resolver Systems, we have a product that is written in IronPython -
>> the .NET Python implementation - and allows users to use that language
>> to script a spreadsheet-like interface. Because they're using
>> IronPython, they
Anne Archibald wrote:
> On 26/10/2007, Georg Holzmann <[EMAIL PROTECTED]> wrote:
>
>
>> if in that example I also change the strides:
>>
>>int s = tmp->strides[1];
>>tmp->strides[0] = s;
>>tmp->strides[1] = s * dim0[0];
>>
>> Then I get in python the fortran-style array in right order.
David Cournapeau wrote:
> Oliver Kranz wrote:
>> Hi,
>>
>> I am working on a Python extension module using of the NumPy C-API. The
>> extension module is an interface to an image processing and analysis
>> library written in C++. The C++ functions are exported with
>> boost::python. Currently I a
On 26/10/2007, Georg Holzmann <[EMAIL PROTECTED]> wrote:
> if in that example I also change the strides:
>
>int s = tmp->strides[1];
>tmp->strides[0] = s;
>tmp->strides[1] = s * dim0[0];
>
> Then I get in python the fortran-style array in right order.
This is the usual way. More or le
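(A Python-level analogue of the stride change above, not part of the original exchange; the buffer here is just a stand-in for the data handed over from C.)

    import numpy as np
    from numpy.lib import stride_tricks

    data = np.arange(6.0)             # stand-in for the flat, Fortran-ordered buffer
    s = data.itemsize
    rows, cols = 2, 3

    # Same idea as the C snippet: strides[0] = s, strides[1] = s * rows,
    # so the flat buffer is read in column-major (Fortran) order.
    f_view = stride_tricks.as_strided(data, shape=(rows, cols), strides=(s, s * rows))
    print(f_view)                     # [[0. 2. 4.], [1. 3. 5.]]
    print(data.reshape(cols, rows).T) # equivalent view built with reshape + transpose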
>
> What does NOT work
> ==
>
> One important target missing is windows 64, but this should not be too
> difficult to solve.
>
> There are still many corner cases not yet solved (in particular some
> windows things, most library paths cannot yet be overridden in
> site.cfg); also, I
On 10/26/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
> On 10/26/07, Robert Crida <[EMAIL PROTECTED]> wrote:
> > Hi again
> >
> > I watch the VmSize of the process using eg top or ps
> >
> > If a is a list then it remains constant. If a is an ndarray as shown in the
> > example, then the VmSize
On 10/26/07, Robert Crida <[EMAIL PROTECTED]> wrote:
> Hi again
>
> I watch the VmSize of the process using eg top or ps
>
> If a is a list then it remains constant. If a is an ndarray as shown in the
> example, then the VmSize grows quite rapidly.
>
Actually, I made a typo while copying your example.
I can confirm the same behaviour with numpy '1.0.4.dev4271' on OS X 10.4 with Python 2.5.1 (installer from python.org).
For me the memory used by the python process grows at about 1MB/sec. The
memory isn't released when the loop is canceled.
On 10/26/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
>
> On 10/26/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
> > P.S: IMHO, this is one of the main limitations of numpy (or any language
> > using arrays for speed; and this is really difficult to optimize: you
> > need compilation, JIT or sim
Hi again
I watch the VmSize of the process using e.g. top or ps.
If a is a list then it remains constant. If a is an ndarray as shown in the
example, then the VmSize grows quite rapidly.
Cheers
Robert
On 10/26/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
>
> Robert Crida wrote:
> > Hi
> >
> > I
Hallo!
> This depends on what you are trying to do, but generally, I find that if
> you can afford it memory-wise, it is much faster to just get a C
> contiguous array if you treat your C array element by element. If you
Yes, but the problem is that this data is very big (up to my memory
limit).
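(A rough sketch of the trade-off being discussed, with made-up data: the copy makes C-style element-by-element access cache-friendly, but it temporarily needs an extra array's worth of memory.)

    import numpy as np

    f = np.asfortranarray(np.arange(12.0).reshape(3, 4))   # stand-in for the wrapped Fortran data

    c = np.ascontiguousarray(f)       # full copy in C order; costs one extra array of memory
    print(f.flags['F_CONTIGUOUS'], c.flags['C_CONTIGUOUS'])  # True True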
Georg Holzmann wrote:
> Hallo!
>
> I found now a way to get the data:
>
>
>> Therefore I do the following (2D example):
>>
>>obj = PyArray_FromDimsAndData(2, dim0, PyArray_DOUBLE, (char*)data);
>>PyArrayObject *tmp = (PyArrayObject*)obj;
>>tmp->flags = NPY_FARRAY;
>>
>
> if in t
Hallo!
I found now a way to get the data:
> Therefore I do the following (2D example):
>
>obj = PyArray_FromDimsAndData(2, dim0, PyArray_DOUBLE, (char*)data);
>PyArrayObject *tmp = (PyArrayObject*)obj;
>tmp->flags = NPY_FARRAY;
if in that example I also change the strides:
int s
Hi there,
I've finally managed to implement most of the things I wanted for
numpy.scons, hence a first alpha. Compared to the 2nd milestone from a
few days ago, a few optimized libraries are supported (ATLAS, Sun
sunperf, Apple Accelerate and vecLib, Intel MKL).
Who
===
Outside people inte
On 10/26/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
> P.S: IMHO, this is one of the main limitations of numpy (or any language
> using arrays for speed; and this is really difficult to optimize: you
> need compilation, JIT or similar to solve those efficiently).
This is where the scipy - sandb
Robert Crida wrote:
> Hi
>
> I don't think it is a python issue because if you change the line b =
> str(a) to just read
> str(a)
> then the problem still occurs.
>
> Also, if you change a to be a list instead of ndarray then the problem
> does not occur.
How do you know there is a memory leak?
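(One way to put a number on it from inside the process; this reads /proc and is therefore Linux-only, and the iteration count is arbitrary.)

    import numpy as np

    def vmsize_kb():
        # Linux-only: pull VmSize out of /proc/self/status
        for line in open('/proc/self/status'):
            if line.startswith('VmSize:'):
                return int(line.split()[1])

    a = np.array([1.0, 2.0, 3.0])
    before = vmsize_kb()
    for _ in range(100000):
        str(a)
    print(vmsize_kb() - before)       # large and growing on a leaking build, near 0 otherwise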
Hi
I don't think it is a python issue because if you change the line b = str(a)
to just read
str(a)
then the problem still occurs.
Also, if you change a to be a list instead of ndarray then the problem does
not occur.
Cheers
Robert
On 10/26/07, Matthew Brett <[EMAIL PROTECTED]> wrote:
>
> Hi,
Hi,
> I seem to have tracked down a memory leak in the string conversion mechanism
> of numpy. It is demonstrated using the following code:
>
> import numpy as np
>
> a = np.array([1.0, 2.0, 3.0])
> while True:
>     b = str(a)
Would you not expect python rather than numpy to be dealing with the
Which version of Python are you using?
Matthieu
2007/10/26, Robert Crida <[EMAIL PROTECTED]>:
>
> Hi all
>
> I recently posted about a memory leak in numpy and failed to mention the
> version. The leak manifests itself in numpy-1.0.3.1 but is not present in
> numpy-1.0.2
>
> The following code repr
Hi all
I recently posted about a memory leak in numpy and failed to mention the
version. The leak manifests itself in numpy-1.0.3.1 but is not present in
numpy-1.0.2
The following code reproduces the bug:
import numpy as np
a = np.array([1.0, 2.0, 3.0])
while True:
    b = str(a)
What happens
Robert Crida wrote:
> Hi all
>
> I seem to have tracked down a memory leak in the string conversion
> mechanism of numpy. It is demonstrated using the following code:
>
> import numpy as np
>
> a = np.array([1.0, 2.0, 3.0])
> while True:
>     b = str(a)
>
> What happens above is that a is repeatedl
Hi all
I seem to have tracked down a memory leak in the string conversion mechanism
of numpy. It is demonstrated using the following code:
import numpy as np
a = np.array([1.0, 2.0, 3.0])
while True:
    b = str(a)
What happens above is that a is repeatedly converted to a string. The process
size
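(A complementary check, not from the thread: see whether the growth shows up as Python objects at all. If the object count stays flat while VmSize climbs, the leak is at the C level rather than in Python reference handling.)

    import gc
    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    gc.collect()
    n_before = len(gc.get_objects())
    for _ in range(100000):
        str(a)
    gc.collect()
    print(len(gc.get_objects()) - n_before)   # a flat count points at a C-level leak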
I've seen a few references on this, but hadn't found a proper solution...
I'm doing Lattice-Boltzmann simulations with periodic boundary conditions,
which always necessarily involve either padding the edges and doing
additional steps, or making a "wrapped" array (for example, if I have an
array
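(For reference, two common ways to express the wrap-around in plain NumPy; the lattice below is a made-up 1-D example, and both approaches generalize axis by axis.)

    import numpy as np

    a = np.arange(8.0)               # toy 1-D lattice

    # Shifted copies of the whole lattice: simple, but each roll allocates a new array
    from_left  = np.roll(a,  1)      # value arriving from the left-hand neighbour
    from_right = np.roll(a, -1)

    # A padded copy with one ghost cell on each side, built by modular indexing
    idx = np.arange(-1, a.size + 1) % a.size
    padded = a[idx]
    print(from_left, from_right, padded)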
On Fri, Oct 26, 2007 at 01:56:26AM -0500, Robert Kern wrote:
> Gael Varoquaux wrote:
> > On Thu, Oct 25, 2007 at 04:16:06PM -0700, Mathew Yeates wrote:
> >> Anybody know of any tricks for handling something like
> >> z[0]=1.0
> >> for i in range(100):
> >>     out[i]=func1(z[i])
> >>     z[i+1]=fu
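(The quoted loop is cut off, but the difficulty is the z[i+1] <- z[i] dependence. Below is a simplified single-array sketch; func1 here is a hypothetical linear stand-in, not the poster's function.)

    import numpy as np

    def func1(x):                     # hypothetical stand-in: linear, func1(x) = c*x + d
        return 0.9 * x + 0.1

    n = 100
    z = np.empty(n + 1)
    z[0] = 2.0
    # The data dependence forces a Python-level loop (or compiled code via
    # weave/Cython); it cannot be written as a single ufunc expression.
    for i in range(n):
        z[i + 1] = func1(z[i])

    # Only because this func1 happens to be linear can the recurrence be closed:
    # z[i] = c**i * z[0] + d * sum_{k<i} c**k, which vectorizes with cumsum.
    c, d = 0.9, 0.1
    i = np.arange(n + 1)
    z_vec = c ** i * z[0] + d * np.concatenate(([0.0], np.cumsum(c ** i[:-1])))
    assert np.allclose(z, z_vec)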