Hi,
Following an ongoing discussion with S. Johnson, one of the developers
of fftw3, I would be interested in what people think about adding
infrastructure to numpy related to SIMD alignment (that is, 16-byte
alignment for SSE/AltiVec; I don't know anything about other
architectures).
The problem is that SIMD instructions generally require (or are much
faster with) data aligned on those boundaries, and numpy currently has
no easy way to request or check 16-byte-aligned buffers.
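(Until something like this exists in numpy itself, one user-level
workaround is to over-allocate a raw byte buffer and take a suitably
offset view of it. This is only a sketch; aligned_zeros below is an
illustrative name, not an existing numpy function.)

import numpy as np

def aligned_zeros(shape, dtype=np.float64, align=16):
    # Illustrative helper, not a numpy API: over-allocate a byte buffer
    # and start the array at the first address that is a multiple of
    # `align`.
    dtype = np.dtype(dtype)
    nbytes = int(np.prod(shape)) * dtype.itemsize
    buf = np.zeros(nbytes + align, dtype=np.uint8)
    offset = (-buf.ctypes.data) % align
    return buf[offset:offset + nbytes].view(dtype).reshape(shape)

a = aligned_zeros((1024,))
assert a.ctypes.data % 16 == 0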
What is ugly about the module? I like it!
What do you mean about recarrays? Do you think they are not
appropriate for this type of thing?
When I get some time I'll run some tests against SAS for the same
operations and do a speed comparison.
Question: Would there be an easy way to merge the
On Thu, 2 Aug 2007, Charles R Harris wrote:
> On x86 machines the main virtue would be smaller and more cache-friendly
> arrays, because double-precision arithmetic is about the same speed as single
> precision, sometimes even a bit faster. The PPC architecture does have
> faster single than double precision.
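(The memory half of that argument is easy to check directly; the speed
half depends on the particular CPU:)

import numpy as np

a64 = np.zeros(100000, dtype=np.float64)
a32 = np.zeros(100000, dtype=np.float32)
# Same number of elements, half the bytes, so twice as much of the data
# fits in cache.
assert a32.nbytes == a64.nbytes // 2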
As PyArray_DescrConverter returns new references, I think there could
be many places where PyArray_Descr* objects get their reference count
incremented without a matching DECREF.
Here I send a patch correcting this for array() and arange(), but I am
not sure whether this is the most general solution.
BTW, please see my previous comment
This patch corrected the problem for me; the numpy tests pass...
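(For anyone who wants to reproduce this without a debug interpreter:
built-in dtypes are shared objects, so the leaked reference shows up
directly on the dtype. A rough sketch; getrefcount numbers are
inherently noisy, but the growing-versus-flat difference is what
matters.)

import sys
import numpy

dt = numpy.dtype(float)            # built-in dtypes are shared, so the leak
a = numpy.zeros(5, dtype=float)    # accumulates on this one object

before = sys.getrefcount(dt)
for i in range(1000):
    numpy.asarray(a, dtype=dt)     # goes through _array_fromobject
after = sys.getrefcount(dt)

# On an affected numpy the difference grows by roughly one per call;
# with the fix applied it stays essentially constant.
print(after - before)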
On 8/2/07, Lisandro Dalcin <[EMAIL PROTECTED]> wrote:
> I think the problem is in _array_fromobject (seen as numpy.array in Python)
--
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
On 8/2/07, Warren Focke <[EMAIL PROTECTED]> wrote:
>
>
>
> On Thu, 2 Aug 2007, Lars Friedrich wrote:
>
> > What I understood is that numpy uses FFTPACK's algorithms.
>
> Sort of. It appears to be a hand translation from F77 to C.
>
> > From www.netlib.org/fftpack (is this the right address?) I took it that
I think the problem is in _array_fromobject (seen as numpy.array in Python).
This function parses its arguments using the converter
PyArray_DescrConverter2, which RETURNS A NEW REFERENCE!!! That
reference is never DECREF'ed.
BTW, a lesson I've learned from the pattern
if (!PyArg_ParseXXX())
Oops, I forgot to mention I was using gc.collect(); I accidentally
cleaned it out of my mail.
Anyway, the following:
import sys, gc
import numpy

def test():
    a = numpy.zeros(5, dtype=float)
    while 1:
        gc.collect()
        b = numpy.asarray(a, dtype=float); del b
        gc.collect()
        print sys.gettotalrefcount()
On 8/2/07, Lisandro Dalcin <[EMAIL PROTECTED]> wrote:
>
> Using numpy-1.0.3, I believe there is a reference leak somewhere.
> Using a debug build of Python 2.5.1 (--with-pydebug), I get the
> following
>
> import sys, gc
> import numpy
>
> def testleaks(func, args=(), kargs={}, repeats=5):
> for i in xrange(repeats):
On Thu, 2 Aug 2007, Lars Friedrich wrote:
> What I understood is that numpy uses FFTPACK's algorithms.
Sort of. It appears to be a hand translation from F77 to C.
> From www.netlib.org/fftpack (is this the right address?) I took it that
> there is a single-precision and a double-precision version
Ryan May wrote:
> Hi,
>
> I ran into this while debugging a script today:
>
> In [1]: import numpy as N
>
> In [2]: N.__version__
> Out[2]: '1.0.3'
>
> In [3]: d = N.array([32767], dtype=N.int16)
>
> In [4]: d + 32767
> Out[4]: array([-2], dtype=int16)
>
> In [5]: d[0] + 32767
> Out[5]: 65534
Using numpy-1.0.3, I believe there is a reference leak somewhere.
Using a debug build of Python 2.5.1 (--with-pydebug), I get the
following:
import sys, gc
import numpy

def testleaks(func, args=(), kargs={}, repeats=5):
    for i in xrange(repeats):
        r1 = sys.gettotalrefcount()
        func(*args, **kargs)
        gc.collect()
        r2 = sys.gettotalrefcount()
        # report the change in the interpreter's total reference count
        print func.__name__, 'leaked references:', r2 - r1
Hi,
I ran into this while debugging a script today:
In [1]: import numpy as N
In [2]: N.__version__
Out[2]: '1.0.3'
In [3]: d = N.array([32767], dtype=N.int16)
In [4]: d + 32767
Out[4]: array([-2], dtype=int16)
In [5]: d[0] + 32767
Out[5]: 65534
In [6]: type(d[0] + 32767)
Out[6]:
In [7]: t
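(What seems to be happening: in the array operation the Python int
32767 is treated as an int16, so the sum stays in int16 and wraps
around modulo 2**16 (65534 becomes -2), whereas the array scalar d[0]
gets promoted to a wider integer before the addition, which is why the
two results disagree. If the wraparound is unwanted, upcast explicitly
before adding. A small illustration; scalar promotion rules have
changed in later numpy releases, so only the array cases are shown.)

import numpy as np

d = np.array([32767], dtype=np.int16)

# Both operands are int16, so the result stays int16 and wraps:
# 32767 + 32767 == 65534 == -2 (mod 2**16).
wrapped = d + np.int16(32767)         # array([-2], dtype=int16)

# Upcasting before the addition gives the mathematically expected value.
widened = d.astype(np.int32) + 32767  # array([65534], dtype=int32)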
Hello,
David Cournapeau wrote:
> As far as I can read from the fft code in numpy, only double is
> supported at the moment, unfortunately. Note that you can get some speed
> by using scipy.fftpack methods instead, if scipy is an option for you.
What I understood is that numpy uses FFTPACK's algorithms.
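(A quick way to see the double-precision-only point from the Python
side; this is just a small check, and the exact dtype behaviour of
scipy.fftpack is not shown here.)

import numpy as np

x = np.random.rand(1024).astype(np.float32)   # single-precision input
X = np.fft.fft(x)
# The transform is carried out in double precision, so the result comes
# back as complex128 even though the input was float32.
print(X.dtype)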
        continue
    except KeyError:
        pass
    P.__dict__[k] = v
P.ion()
del matplotlib, new, pylab
The result is "some" reduction in the number of non-pylab-specific
names in my "P" module. However, there still seem to be many extra
names left.
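(For context, here is roughly what the whole construction looks like
around that fragment. This is a reconstruction, not the original code:
it targets Python 2, using the old `new` module as in the fragment, and
the rule "skip names that numpy already provides" is my guess at the
condition being tested.)

import new                        # Python 2 module for creating module objects
import numpy
import matplotlib                 # imported only so the final del matches the fragment
import matplotlib.pylab as pylab

P = new.module('P', 'pylab-specific names only')
for k, v in pylab.__dict__.items():
    try:
        numpy.__dict__[k]         # assumed skip rule: name already comes from numpy
        continue
    except KeyError:
        pass
    P.__dict__[k] = v
P.ion()                           # switch on pylab's interactive mode
del matplotlib, new, pylab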