Hi all-
Thanks for the info re: memory leak. In trying to work around it, I think I’ve
discovered another (still using SuperPack). This leaks ~30MB / run:
hists = zeros((50,64), dtype=int)
for i in range(50):
    for j in range(2**13):
        hists[i,j%64] += 1
The code leaks using hists[i,j]
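For what it's worth, one Python-level way to quantify growth like this is tracemalloc (a sketch, not from the original thread; tracemalloc only sees allocations routed through Python's allocators, so a leak inside NumPy's C internals may not show up here):

```python
import tracemalloc
import numpy as np

def run():
    # Scaled-down version of the snippet under suspicion.
    hists = np.zeros((5, 64), dtype=int)
    for i in range(5):
        for j in range(2**8):
            hists[i, j % 64] += 1

tracemalloc.start()
run()                                  # warm-up; first call may fill caches
before = tracemalloc.take_snapshot()
for _ in range(10):
    run()
after = tracemalloc.take_snapshot()
tracemalloc.stop()

# Net Python-level allocation growth across the ten extra runs; a genuine
# per-run leak grows roughly linearly with the number of runs.
growth = sum(stat.size_diff for stat in after.compare_to(before, 'lineno'))
print("net growth (bytes):", growth)
```

On a build with a Python-visible leak the growth climbs with each additional run; on a fixed build it stays near zero.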
Just to chime in here about the SciPy Superpack... this distribution tracks
the master branch of many projects, and then puts out releases, on the
assumption that master contains pristine code, I guess. I have gone down
strange rabbit holes thinking that a particular bug was fixed already and
the u
On Fri, Jan 31, 2014 at 3:14 PM, Chris Laumann wrote:
>
> Current scipy superpack for osx so probably pretty close to master.
What does numpy.__version__ say?
-n
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman
Current scipy superpack for osx so probably pretty close to master. So it's a
known leak? Hmm. Maybe I'll have to work on a different machine for a bit.
Chris
which version of numpy are you using?
there seems to be a leak in the scalar return due to the PyObject_Malloc
usage in git master, but it doesn't affect 1.8.0
Hi all-
The following snippet appears to leak memory badly (about 10 MB per execution):
P = randint(0,2,(30,13))
for i in range(50):
    print "\r", i, "/", 50
    for ai in ndindex((2,)*13):
        j = np.sum(P.dot(ai))
If instead you execute (no np.sum call):
P = randint(0,2,(30,13))
for i
Subject: Re: [Numpy-discussion] Memory leak in numpy?
On Wed, Jan 29, 2014 at 7:39 PM, Joseph McGlinchy wrote:
> Upon further investigation, I do believe it is within the scipy code where
> there is a leak. I commented out my call to processBinaryImage(), which is
> all scipy code calls, and my memory usage remains flat with approximately a
> 1MB var
Perhaps it is an ESRI/Arcpy issue then. I don't see anything that could be
doing that, though, as it is very minimal.
Hmmm, I see no reason why that would eat up memory. I just tried it out on
my own system (numpy 1.6.1, CentOS 6, python 2.7.1), and had no issues.
Memory usage stayed flat for the 10 seconds it took to go through the
loop. Note, I am not using ATLAS or BLAS, so maybe the issue lies there?
(i don'
Hi all-
I think I just found a memory leak in numpy, or maybe I just don’t understand
generators. Anyway, the following snippet will quickly eat a ton of RAM:
P = randint(0,2, (20,13))
for i in range(50):
    for ai in ndindex((2,)*13):
        j = P.dot(ai)
If you replace the last line with
Upgrading numpy solved the problem!
Thanks!
The numpy version is 1.7.0
In [5]: numpy.version.full_version
Out[5]: '1.7.0b2'
Thanks,
Pietro
Hi Petro,
What version of numpy are you running?
A
Dear Numpy users,
I have a memory leak in my code. A simple way to reproduce my problem is:
import numpy

class test():
    def __init__(self):
        pass

    def t(self):
        temp = numpy.zeros([200,100,100])
        A = numpy.zeros([200], dtype = numpy.float)
        for i in range(200):
I've tracked down and fixed a memory leak in 1.7 and master. The pull
request to check and backport is here:
https://github.com/numpy/numpy/pull/2928
Thanks,
Mark
On Mon, Sep 24, 2012 at 07:59:11PM +0100, Nathaniel Smith wrote:
> > which means I probably forgot a DECREF while doing the
> > PyArray_Diagonal changes...
> Yep: https://github.com/numpy/numpy/pull/457
Awesome. I can confirm that this fixes the problem. Script below to check.
You are my hero!
Hi Fred,
On Mon, Sep 24, 2012 at 02:17:16PM -0400, Frédéric Bastien wrote:
> with numpy '1.6.1', I have no problem.
> With numpy 1.7.0b2, I can reproduce the problem.
OK, thanks. I think that I'll start a bisect to figure out when it crept
in.
Gael
Hi,
with numpy '1.6.1', I have no problem.
With numpy 1.7.0b2, I can reproduce the problem.
HTH
Fred
Hi list,
I think that I have hit a memory leak with numpy master. The following code
reproduces it:
import numpy as np

n = 100
m = np.eye(n)
for i in range(3):
    #np.linalg.slogdet(m)
    t, result_t =
Subject: [Numpy-discussion] Memory Leak
The attached program leaks about 24 bytes per loop. The comments give a
bit more detail as to when the leak occurs and doesn't. How can I track
down where this leak is actually coming from?
Here is a sample run on my machine:
$ python simple.py
Python Version: 2.7.3 (default, Apr 20 2012, 22:
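One generic way to answer the "how do I track it down" question (an illustrative sketch, independent of the attached program, which is not reproduced here): snapshot the counts of live objects by type with the gc module and diff them across iterations; a type whose count keeps climbing points at the leaked objects.

```python
import gc
from collections import Counter

def live_type_counts():
    gc.collect()                      # drop collectable garbage before counting
    return Counter(type(o).__name__ for o in gc.get_objects())

stash = []                            # deliberate leak for demonstration

def leaky_step():
    stash.append([0] * 10)            # strands one list per call

before = live_type_counts()
for _ in range(100):
    leaky_step()
after = live_type_counts()

# Only report types whose live count grew.
grown = {t: after[t] - before[t] for t in after if after[t] > before[t]}
print(grown)                          # the 'list' count grows by ~100
```

This only catches leaked Python objects; raw C-level allocations, the more common NumPy case, need a process-size or tracemalloc check instead.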
Dear Community,
I have an issue with numpy consuming more and more memory.
According to ticket: http://projects.scipy.org/numpy/ticket/1427
This is a known issue. It should be fixed in 2.0.0.dev-9451260. What does this
mean? It sounds like a development release.
Is there any chance to fix it in 1
A memory leak was observed in numpy versions 1.5.1 and the latest git trunk:

from numpy import *

for i in range(10):
    if i % 100 == 0:
        print(i)
    a = empty(1, object)
    for j in range(1):
        a[j] = array(1)
    a = take(a, range(9000), out=a[:9000])
On Wed, 18 May 2011 16:36:31 -0700, G Jones wrote:
[clip]
> As a followup, I managed to install tcmalloc as described in the article
> I mentioned. Running the example I sent now shows a constant memory foot
> print as expected. I am surprised such a solution was necessary.
> Certainly others must
Hello,
I have seen the effect you describe, I had originally assumed this was the
case, but in fact there seems to be more to the problem. If it were only the
effect you mention, there should not be any memory error because the OS will
drop the pages when the memory is actually needed for something
On Wed, 18 May 2011 15:09:31 -0700, G Jones wrote:
[clip]
> import numpy as np
>
> x = np.memmap('mybigfile.bin', mode='r', dtype='uint8')
> print x.shape  # prints (42940071360,) in my case
> ndat = x.shape[0]
> for k in range(1000):
>     y = x[k*ndat/1000:(k+1)*ndat/1000].astype('float32')  # The ast
Hello,
I need to process several large (~40 GB) files. np.memmap seems ideal for
this, but I have run into a problem that looks like a memory leak or memory
fragmentation. The following code illustrates the problem
import numpy as np
x = np.memmap('mybigfile.bin',mode='r',dtype='uint8')
print x.s
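For reference, the chunked pattern being described, as a self-contained sketch (a small temporary file stands in for the 40 GB one; the chunk count is illustrative, and this alone does not cure the allocator behaviour the thread goes on to discuss):

```python
import os
import tempfile
import numpy as np

# Small stand-in for the big binary file.
path = os.path.join(tempfile.mkdtemp(), 'mybigfile.bin')
np.arange(10000, dtype=np.uint8).tofile(path)   # values wrap modulo 256

x = np.memmap(path, mode='r', dtype='uint8')
ndat = x.shape[0]
nchunks = 10

totals = []
for k in range(nchunks):
    # Slicing a memmap is lazy; astype() materializes only this chunk.
    y = x[k * ndat // nchunks:(k + 1) * ndat // nchunks].astype('float32')
    totals.append(y.sum())
    del y                 # release the temporary copy before the next chunk

print(sum(totals))
```

Only one chunk-sized float32 copy is alive at a time, so peak Python-visible memory stays bounded regardless of the file size.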
Wes McKinney writes:
> Did you mean to post a different link? That's the ticket I just created :)
How silly of me! I meant http://projects.scipy.org/numpy/ticket/1427
This memory leak may be related: http://projects.scipy.org/numpy/ticket/1542
It shows what appears to be a memory leak when calling astype('float')
on an array of dtype 'object'.
This one was quite a bear to track down, starting from, of course, the very
high-level observation of "why is my application leaking memory".
I've reproduced it on Windows XP using NumPy 1.3.0 on Python 2.5 and
1.4.1 on Python 2.6 (EPD). Basically it seems that calling
.astype(bool) on an ndarray sli
Fri, 02 Jul 2010 14:56:47 +0200, Tillmann Falck wrote:
> I am hitting a memory leak with the combination of numpy and
> cvxopt.matrix. As I am not sure where it occurs, I am cross posting.
Probably a bug in cvxopt, as also the following leaks memory:
from cvxopt import
Hi all,
I am hitting a memory leak with the combination of numpy and cvxopt.matrix. As
I am not sure where it occurs, I am cross posting.
On my machine (Fedora 13, x86_64) this example quickly eats up all my memory.
---
from cvxopt import matrix
import numpy as np
N = 2000
X = np.ones((N,
Hello,
I have the following code, where I noticed a memory leak with +=, but
not with + alone.
import numpy

m = numpy.matrix(numpy.ones((23,23)))

for i in range(1000):
    m += 0.0     # keeps growing in memory
    #m = m+0.0   # is stable in memory
My version of python is 2.5, numpy 1.3.0,
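A sketch of the two forms side by side (illustrative; np.matrix is deprecated in modern NumPy, where neither form leaks). Rebinding with `m = m + 0.0` allocates a fresh matrix each pass and was reported stable, as was in-place `+=` on a plain ndarray:

```python
import numpy as np

m = np.matrix(np.ones((23, 23)))
for i in range(1000):
    m = m + 0.0        # rebinding: reported stable in memory

a = np.ones((23, 23))  # plain ndarray
for i in range(1000):
    a += 0.0           # in-place on ndarray: also reported fine

print(float(m.sum()), float(a.sum()))
```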
Hi,
Just another update:
signal.convolve and signal.fftconvolve indeed do not seem to have the
problem; however, they are slower by at least a factor of 2 for my situation.
Moreover, I also tried out the numpy 1.4.x branch and the latest scipy svn,
and a short test seemed to indicate that the me
On Tue, Mar 9, 2010 at 4:24 PM, David Reichert
wrote:
> Hm, upgrading scipy from 0.7.0 to 0.7.1 didn't do the trick for me (still
> running numpy 1.3.0).
> I'm not sure if I feel confident enough to use developer versions, but I'll
> look into it.
If you don't need the extra options, you could al
Hm, upgrading scipy from 0.7.0 to 0.7.1 didn't do the trick for me (still
running numpy 1.3.0).
I'm not sure if I feel confident enough to use developer versions, but I'll
look into it.
Cheers
David
Hi,
I just reported a memory leak with matrices, and I might have found
another (unrelated) one in the convolve2d function:
import scipy.signal
from numpy import ones

while True:
    scipy.signal.convolve2d(ones((1,1)), ones((1,1)))
Is there an alternative implementation of a 2d convolution? O
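On the alternative question: a "full" 2-D linear convolution can be built from NumPy's FFT routines alone (a sketch via the convolution theorem; the function name `fftconvolve2d` is made up here, and it does not replicate convolve2d's mode/boundary options):

```python
import numpy as np

def fftconvolve2d(a, b):
    """'Full' 2-D linear convolution via the convolution theorem."""
    s0 = a.shape[0] + b.shape[0] - 1          # output size that avoids
    s1 = a.shape[1] + b.shape[1] - 1          # circular wrap-around
    fa = np.fft.rfft2(a, (s0, s1))            # real-input FFTs keep it cheap
    fb = np.fft.rfft2(b, (s0, s1))
    return np.fft.irfft2(fa * fb, (s0, s1))

a = np.array([[1., 2.], [3., 4.]])
k = np.array([[1., 0.], [0., 1.]])
print(np.round(fftconvolve2d(a, k), 6))
```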
Thanks for the reply.
Yes never mind the second issue, I had myself confused there.
Any comments on the memory leak?
Hi,
I've got two issues:
First, the following seems to cause a memory leak,
using numpy 1.3.0:
a = matrix(ones(1))

while True:
    a += 0
This only seems to happen when a is a matrix rather
than an array, and when the short hand '+=' is used.
Second, I'm not sure whether that's a bug or whet
Hi Karol
Thank you very much for the sleuth work here. We are in the midst of
software ATP so it helps a lot that I will be able to resolve this bug
properly.
Robert
Karol Langner wrote:
> I opened a ticket for this (#602). Hopefully someone will confirm that adding
> that Py_DECREF call fixes the leak and someone with write access patches it
> in svn.
>
Thanks for looking into this and isolating the problem...
-Travis O.
I opened a ticket for this (#602). Hopefully someone will confirm that adding
that Py_DECREF call fixes the leak and someone with write access patches it
in svn.
- Karol
--
written by Karol Langner
Sun Oct 28 23:29:18 EDT 2007
On 10/26/07, Robert Crida <[EMAIL PROTECTED]> wrote:
> Hi again
>
> I watch the VmSize of the process using eg top or ps
>
> If a is a list then it remains constant. If a is an ndarray as shown in the
> example, then the VmSize grows quite rapidly.
>
Actually, I made a typo while copying your exampl
I can confirm the same behaviour with numpy '1.0.4.dev4271' on OS X 10.4
with python 2.5.1 (installer from python.org).
For me the memory used by the python process grows at about 1MB/sec. The
memory isn't released when the loop is canceled.
Hi again
I watch the VmSize of the process using eg top or ps
If a is a list then it remains constant. If a is an ndarray as shown in the
example, then the VmSize grows quite rapidly.
Cheers
Robert
Robert Crida wrote:
> Hi
>
> I don't think it is a python issue because if you change the line b =
> str(a) to just read
> str(a)
> then the problem still occurs.
>
> Also, if you change a to be a list instead of ndarray then the problem
> does not occur.
How do you know there is a memory leak ?
Hi
I don't think it is a python issue because if you change the line b = str(a)
to just read
str(a)
then the problem still occurs.
Also, if you change a to be a list instead of ndarray then the problem does
not occur.
Cheers
Robert
Hi,
> I seem to have tracked down a memory leak in the string conversion mechanism
> of numpy. It is demonstrated using the following code:
>
> import numpy as np
>
> a = np.array([1.0, 2.0, 3.0])
> while True:
> b = str(a)
Would you not expect python rather than numpy to be dealing with the
Which version of Python are you using?
Matthieu
Hi all
I recently posted about a memory leak in numpy and failed to mention the
version. The leak manifests itself in numpy-1.0.3.1 but is not present in
numpy-1.0.2
The following code reproduces the bug:
import numpy as np

a = np.array([1.0, 2.0, 3.0])
while True:
    b = str(a)
What happens
Hi all
I seem to have tracked down a memory leak in the string conversion mechanism
of numpy. It is demonstrated using the following code:
import numpy as np

a = np.array([1.0, 2.0, 3.0])
while True:
    b = str(a)

What happens above is that a is repeatedly converted to a string. The process
size
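A quick reference-count check for this kind of report (a sketch, not from the thread; it only catches leaks that hold extra references to the array itself, so memory leaked without a visible reference still needs a process-size check):

```python
import sys
import numpy as np

a = np.array([1.0, 2.0, 3.0])

before = sys.getrefcount(a)   # includes one temporary ref for the call itself
for _ in range(1000):
    b = str(a)                # the conversion under suspicion
after = sys.getrefcount(a)

# A conversion that leaked a reference to `a` on every call would make
# `after` exceed `before` by roughly the iteration count.
print(before, after)
```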
Hi Cyrille
On Mon, Oct 08, 2007 at 01:49:44AM +0200, Cyrille Rosset wrote:
> Hello,
> the following code seems to create a memory leak in Python.
> (around 230 MB).
> Any ideas what's wrong ?
The relevant ticket is here:
http://projects.scipy.org/scipy/numpy/ticket/570
Regards
Stéfan
Hello,
the following code seems to create a memory leak in Python.
(around 230 MB).
Any ideas what's wrong ?
I'm using python 2.5 and numpy 1.0.3
---
def toto(x):
    return x**2

tutu = vectorize(toto)
Nbins = 1
for i in xrange(1000):
    c = tutu(arange(Nbins))
--
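As an aside, for a pure elementwise expression like this the vectorize wrapper can be skipped entirely; ufunc arithmetic computes the same result in one shot (a sketch in Python 3 syntax, unlike the Python 2.5 original):

```python
import numpy as np

def toto(x):
    return x ** 2

tutu = np.vectorize(toto)       # the wrapper used in the report

Nbins = 1000
direct = np.arange(Nbins) ** 2  # plain ufunc arithmetic, no Python-level loop
via_vec = tutu(np.arange(Nbins))

print(np.array_equal(direct, via_vec))
```

Beyond sidestepping the reported leak, the ufunc form avoids one Python function call per element, which is usually much faster as well.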
Hi,
There seems to be a memory leak when arrays are added in-place
for mixed Numeric/numpy applications. For example, memory usage
quickly ramps up when the following program is executed:
import Numeric,numpy
x = Numeric.zeros((2000,2000),typecode=Numeric.Float64)
for j in range(200):
Cyrille Rosset wrote:
> Ok, that works fine with python.
> But not in ipython... is there some other trick ?
> (there's a whole collection of _* variables in there...)
And the Out[NN]'s, too. You should be able to del all of them:
del Out[NN], _NN, _, __, ___
--
Robert Kern
Ok, that works fine with python.
But not in ipython... is there some other trick ?
(there's a whole collection of _* variables in there...)
Cyrille.
Hi,
I'm not sure this is the right mailing list for this, but it seems
there's a memory leak when looking at flags :
>>> from numpy import *
>>> x=ones(5000) #==> python use 25% of memory (ok)
>>> del x
#==> memory usage fall back to almost zero (as seen in top)
That's good.
but if
I'm not sure where best to post this, but I get a memory leak when using
code with both numpy and FFT(from Numeric) together:
e.g.
>>> import numpy
>>> import FFT
>>> def test():
...     while 1:
...         data = numpy.random.random(2048)
...         newdata = FFT.real_fft(data)
>>> test()
and m
Keith Goodman wrote:
> Will the latest numpy from svn work with matplotlib 0.87.7?
It should. We are committed to backwards compatibility both at the Python level
and the C binary level.
--
Robert Kern
On Mon, 2007-02-05 at 08:45 -0800, Keith Goodman wrote:
> This eats up memory quickly on my system.
>
> import numpy.matlib as M
>
> def memleak():
>     a = M.randn(500, 1)
>     while True:
>         a =
Yeah, the culprit in this case is argsort():
http://proje
This eats up memory quickly on my system.
import numpy.matlib as M

def memleak():
    a = M.randn(500, 1)
    while True:
        a = a.argsort(0)
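Assuming, as the follow-up suggests, that the leak sits in the matrix argsort() path, an obvious workaround can be sketched: run the same loop on a plain ndarray with the np.argsort function (illustrative; it reproduces the shapes of the original, not its matrix semantics):

```python
import numpy as np

def memleak_free(n=500, iters=100):
    a = np.random.randn(n, 1)       # ndarray instead of numpy.matlib matrix
    for _ in range(iters):
        a = np.argsort(a, axis=0)   # function form on a plain ndarray
    return a

out = memleak_free()
print(out.shape, out.dtype.kind)
```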