[Numpy-discussion] ~2**32 byte tofile()/fromfile() limit in 64-bit Windows?

2010-11-03 Thread Martin Spacek
I just opened a new ticket (http://projects.scipy.org/numpy/ticket/1660), but I thought I'd bring it up here as well. I can't seem to get tofile() or save() to write anything much bigger than a 2**32-byte array to a file in Py 2.6.6 on 64-bit Windows. They both hang with no errors. Also, fromfile()…
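
For reference, a common way around such 4 GB write limits is to push the bytes out in sub-4-GiB chunks; a minimal sketch (the helper name and chunk size are my own, not from the ticket), assuming a contiguous array:

    import numpy as np

    def tofile_chunked(arr, fname, chunk=2**30):
        # Write the array's bytes in ~1 GiB pieces so that no single
        # low-level write exceeds the apparent 2**32-byte limit.
        buf = np.ascontiguousarray(arr).view(np.uint8).ravel()
        with open(fname, 'wb') as f:
            for start in range(0, buf.size, chunk):
                buf[start:start + chunk].tofile(f)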

Re: [Numpy-discussion] use index array of len n to select columns of n x m array

2010-08-06 Thread Martin Spacek
On 2010-08-06 13:11, Martin Spacek wrote: > Josef, I'd forgotten you could use None to increase the dimensionality of an > array. Neat. And, somehow, it's almost twice as fast as the Cython version!: > > >>> timeit a[np.arange(a.shape[0])[:, None], i] > 10…
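
For readers skimming the archive, here is the trick being timed above as a small self-contained sketch (the array contents are illustrative):

    import numpy as np

    a = np.arange(12).reshape(4, 3)
    i = np.array([[0, 2], [1, 1], [2, 0], [0, 1]])  # per-row column picks

    # np.arange(4)[:, None] has shape (4, 1); broadcasting it against
    # the (4, 2) column-index array pairs each row with its columns.
    print(a[np.arange(a.shape[0])[:, None], i])
    # [[ 0  2]
    #  [ 4  4]
    #  [ 8  6]
    #  [ 9 10]]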

Re: [Numpy-discussion] use index array of len n to select columns of n x m array

2010-08-06 Thread Martin Spacek
On 2010-08-06 06:57, Keith Goodman wrote: > You can speed it up by getting rid of two copies: > > idx = np.arange(a.shape[0]) > idx *= a.shape[1] > idx += i Keith, you're right of course. I'd forgotten about your earlier suggestion about operating in-place. Here's my new version: def rowta…
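
The quoted snippet, completed into a runnable helper (the truncated function name is reconstructed here as a guess):

    import numpy as np

    def rowtake(a, i):
        # Build the flat indices in place (no temporary copies),
        # then pull the values out through a.flat.
        idx = np.arange(a.shape[0])
        idx *= a.shape[1]   # offset of the start of each row
        idx += i            # add the chosen column within each row
        return a.flat[idx]

    a = np.arange(10).reshape(5, 2)
    print(rowtake(a, np.array([0, 1, 1, 0, 1])))  # [0 3 5 6 9]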

Re: [Numpy-discussion] use index array of len n to select columns of n x m array

2010-08-06 Thread Martin Spacek
Keith Goodman wrote: > Here's one way: > >>> a.flat[i + a.shape[1] * np.arange(a.shape[0])] > array([0, 3, 5, 6, 9]) I'm afraid I made my example a little too simple. In retrospect, what I really want is to be able to use a 2D index array "i", like this: >>> a = np.array([[ 0, 1, 2,…
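
Keith's flat-index approach does extend to a 2D "i" by broadcasting the row offsets; a hedged sketch (the example values are mine):

    import numpy as np

    a = np.arange(12).reshape(4, 3)
    i = np.array([[0, 2], [1, 1], [2, 0], [0, 1]])  # two columns per row

    # Row offsets have shape (4, 1) and broadcast against the (4, 2)
    # column indices; a.flat preserves the index array's shape.
    offsets = a.shape[1] * np.arange(a.shape[0])[:, None]
    print(a.flat[i + offsets])
    # [[ 0  2]
    #  [ 4  4]
    #  [ 8  6]
    #  [ 9 10]]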

Re: [Numpy-discussion] use index array of len n to select columns of n x m array

2010-08-05 Thread Martin Spacek
josef.pkt wrote: >>> a = np.array([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]) >>> i = np.array([0, 1, 1, 0, 1]) >>> a[range(a.shape[0]), i] array([0, 3, 5, 6, 9]) >>> a[np.arange(a.shape[0]), i] array([0, 3, 5, 6, 9]) Thank…

[Numpy-discussion] use index array of len n to select columns of n x m array

2010-08-05 Thread Martin Spacek
I want to take an n x m array "a" and index into it using an integer index array "i" of length n that will pull out the value at the designated column from each corresponding row of "a". >>> a = np.arange(10) >>> a.shape = 5, 2 >>> a array([[0, 1], [2, 3], [4, 5], [6, 7]…

Re: [Numpy-discussion] pickling/unpickling numpy.void and numpy.record for multiprocessing

2010-02-26 Thread Martin Spacek
On 2010-02-26 15:26, Pauli Virtanen wrote: > No, the unpickled void scalar will own its data. The problem is that > either the data is not saved correctly (unlikely), or it is unpickled > incorrectly. > > The relevant code path to look at is multiarraymodule:array_scalar -> > scalarapi.c:PyArray_Scalar…

Re: [Numpy-discussion] pickling/unpickling numpy.void and numpy.record for multiprocessing

2010-02-26 Thread Martin Spacek
On 2010-02-26 15:02, Robert Kern wrote: >> Is this a known limitation? > > Nope. New bug! Thanks! Good. I'm not crazy after all :) > Pickling of complete arrays works. A quick workaround would be to send > rank-0 scalars: > >Pool.map(map(np.asarray, x)) > > Or just tuples: > >Pool.map(map…
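
Robert Kern's rank-0 workaround, fleshed out as a hedged sketch (the dtype, field names, and worker function are invented for illustration):

    import numpy as np
    from multiprocessing import Pool

    def work(rec):
        # rec arrives as a rank-0 structured array, not an np.void
        return float(rec['x']) * 2

    if __name__ == '__main__':
        recs = np.zeros(4, dtype=[('x', np.float64), ('y', np.int32)])
        recs['x'] = [1.0, 2.0, 3.0, 4.0]
        with Pool(2) as pool:
            # np.asarray turns each np.void record into a rank-0
            # ndarray, which pickles and unpickles correctly.
            print(pool.map(work, [np.asarray(r) for r in recs]))
        # [2.0, 4.0, 6.0, 8.0]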

[Numpy-discussion] pickling/unpickling numpy.void and numpy.record for multiprocessing

2010-02-26 Thread Martin Spacek
I have a 1D structured ndarray with several different fields in the dtype. I'm using multiprocessing.Pool.map() to iterate over this structured ndarray, passing one entry (of type numpy.void) at a time to the function to be called by each process in the pool. After much confusion about why this…

Re: [Numpy-discussion] intersect1d for N input arrays

2009-10-16 Thread Martin Spacek
Robert Cimrman writes: > > Hi Martin, > > thanks for your ideas and contribution. > > A few notes: I would leave intersect1d as it is, and create a new function with another name for that (any proposals?). Considering that most of the arraysetops functions are based on sort, and in pa…

[Numpy-discussion] intersect1d for N input arrays

2009-10-15 Thread Martin Spacek
I have a list of many arrays (in my case each is unique, ie has no repeated elements), and I'd like to extract the intersection of all of them, all in one go. I'm running numpy 1.3.0, but looking at today's rev of numpy.lib.arraysetops (http://svn.scipy.org/svn/numpy/trunk/numpy/lib/arraysetops.py)…
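
Today this can be done without touching arraysetops at all, by folding the pairwise function over the list; a minimal sketch (the sample arrays are mine):

    import numpy as np
    from functools import reduce

    arrays = [np.array([1, 2, 3, 4, 5]),
              np.array([2, 3, 5, 8]),
              np.array([0, 2, 5, 9])]

    # Pairwise intersect1d folded across all inputs; valid when each
    # array has no repeated elements, as described above.
    print(reduce(np.intersect1d, arrays))  # [2 5]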

Re: [Numpy-discussion] Loading a > GB file into array

2007-12-20 Thread Martin Spacek
>> By the way, I installed 64-bit Linux (Ubuntu 7.10) on the same machine, >> and now numpy.memmap works like a charm. Slicing around a 15 GB file is fun! >> > Thanks for the feedback! > Did you get the kind of speed you need and/or the speed you were hoping for? Nope. Like I wrote earlier, it s…

Re: [Numpy-discussion] Loading a > GB file into array

2007-12-19 Thread Martin Spacek
Sebastian Haase wrote: > b) To my knowledge, any OS (Linux, Windows, and OSX) can allocate at most > about 1 GB of data, assuming you have a 32-bit machine. > The actual numbers I measured varied from about 700 MB to maybe 1.3 GB. > In other words, you would be right at the limit. > (For 64-bit, you would h…

Re: [Numpy-discussion] Loading a > GB file into array

2007-12-03 Thread Martin Spacek
Gael Varoquaux wrote: > Very interesting. Have you made measurements to see how many times you > lost one of your cycles? I made this kind of measurement on Linux using > the real-time clock with C and it was very interesting ( > http://www.gael-varoquaux.info/computers/real-time ). I want to red…

Re: [Numpy-discussion] Loading a > GB file into array

2007-12-03 Thread Martin Spacek
Francesc Altet wrote: > Perhaps something that can surely improve your timings is first > performing a read of your data file(s) while throwing away the data as you > are reading it. This serves only to load the file entirely (if you have > enough memory, but this seems to be your case) into the OS page cache…
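
A minimal sketch of the cache-warming read Francesc describes (the helper name and chunk size are mine):

    def warm_page_cache(fname, chunk=2**24):
        # Read the file once in 16 MiB pieces and discard the bytes;
        # they stay resident in the OS page cache for later fast access.
        with open(fname, 'rb') as f:
            while f.read(chunk):
                pass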

Re: [Numpy-discussion] Loading a > GB file into array

2007-12-02 Thread Martin Spacek
Sebastian Haase wrote: > Reading this thread, I have two comments. > a) *Displaying* at 200 Hz probably makes little sense, since humans > only perceive up to about 30 Hz (aka video frame rate). > Consequently you would want to separate your data frame rate, which (as > I understand) you want to sav…

Re: [Numpy-discussion] Loading a > GB file into array

2007-11-30 Thread Martin Spacek
Martin Spacek wrote: > Would it be better to load the file one > frame at a time, generating nframes arrays of shape (height, width), > and sticking them consecutively in a Python list? I just tried this, and it works. Looks like it's all in physical RAM (no disk thrashing on t…
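
The frame-at-a-time approach sketched as code (the names and the uint8 frame layout are assumptions based on the thread):

    import numpy as np

    def load_frames(fname, nframes, height, width):
        # Read one (height, width) frame at a time and collect them in
        # a Python list, avoiding one giant contiguous allocation.
        frames = []
        with open(fname, 'rb') as f:
            for _ in range(nframes):
                frame = np.fromfile(f, dtype=np.uint8,
                                    count=height * width)
                frames.append(frame.reshape(height, width))
        return frames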

Re: [Numpy-discussion] Loading a > GB file into array

2007-11-30 Thread Martin Spacek
Kurt Smith wrote: > You might try numpy.memmap -- others have had success with it for > large files (32-bit should be able to handle a 1.3 GB file, AFAIK). Yeah, I looked into numpy.memmap. Two issues with that. I need to eliminate as much disk access as possible while my app is running. I'm d…
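
For reference, the suggested memmap usage looks roughly like this (the filename and shape are made up; the thread's point is that pages are still read from disk on first touch):

    import numpy as np

    nframes, height, width = 6000, 480, 640  # illustrative dimensions

    # Map the file rather than reading it into RAM: the OS faults
    # pages in on demand, so only the frames actually touched hit disk.
    data = np.memmap('movie.bin', dtype=np.uint8, mode='r',
                     shape=(nframes, height, width))
    frame = data[1000]  # loads just that frame's pages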

[Numpy-discussion] Loading a > GB file into array

2007-11-30 Thread Martin Spacek
I need to load a 1.3 GB binary file entirely into a single numpy.uint8 array. I've been using numpy.fromfile(), but for files > 1.2 GB on my win32 machine, I get a memory error. Actually, since I have several other Python modules imported at the same time, including pygame, I get a "pygame parachute"…

Re: [Numpy-discussion] BLAS and LAPACK used?

2007-05-17 Thread Martin Spacek
lorenzo bolla wrote: > Hi all, > I need to know the libraries (BLAS and LAPACK) that numpy was > linked against when I compiled it. > I can't remember which ones I used (ATLAS, MKL, etc.)... > Is there an easy way to find out? > Thanks in advance, > Lorenzo Bolla. Yup: >>> import numpy…
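
The reply is cut off above; it presumably continues with numpy's built-in config dump, which is the standard way to answer Lorenzo's question:

    import numpy as np

    # Prints the BLAS/LAPACK setup (ATLAS, MKL, ...) that this numpy
    # build was linked against, including library names and paths.
    np.show_config()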

Re: [Numpy-discussion] Type conversion weirdness in numpy-1.0.2.win32-py2.4 binary

2007-05-08 Thread Martin Spacek
I just tried building the 1.0.2 release, and I still get the type conversion problem. Building from 1.0.3dev3736 makes the problem disappear. Was this an issue that was fixed recently? Martin Martin Spacek wrote: > On Linux and Win32 (numpy 1.0.1 release compiled from source,…

[Numpy-discussion] Type conversion weirdness in numpy-1.0.2.win32-py2.4 binary

2007-05-07 Thread Martin Spacek
On Linux and Win32 (numpy 1.0.1 release compiled from source, and 1.0.3dev3726 respectively), I get the following normal behaviour: >>> import numpy as np >>> np.array([1.0, 2.0, 3.0, 4.0]) array([ 1., 2., 3., 4.]) >>> np.int32(np.array([1.0, 2.0, 3.0, 4.0])) array([ 1, 2, 3, 4]) But on the…

Re: [Numpy-discussion] Searching object arrays

2007-05-07 Thread Martin Spacek
Great, thanks Tim! Martin Timothy Hochberg wrote: > > Using np.equal instead of == seems to work: > > >>> i = np.array([0,1,2,None,3,4,None]) > >>> i > array([0, 1, 2, None, 3, 4, None], dtype=object) > >>> np.where(i == None) > () > >>> i == None > False > >>> np.where(np.equal(i, None))…
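
The truncated session, reconstructed as a runnable snippet (output shown per the behaviour described in the thread):

    import numpy as np

    i = np.array([0, 1, 2, None, 3, 4, None])  # dtype=object

    # `i == None` collapsed to a single False on the numpy discussed
    # here, but the ufunc np.equal compares object arrays elementwise.
    print(np.where(np.equal(i, None))[0])  # [3 6]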

[Numpy-discussion] Searching object arrays

2007-05-07 Thread Martin Spacek
I want to find the indices of all the None objects in an object array: >> import numpy as np >> i = np.array([0, 1, 2, None, 3, 4, None]) >> np.where(i == None) () Using == doesn't work the same way on object arrays as it does on, say, an array of int32. Any suggestions? Do I have to use a loop…

Re: [Numpy-discussion] building with MKL in windows

2007-04-20 Thread Martin Spacek
Also, I found I had to remove the two lines in distutils/system_info.py that mention pthread, mkl_lapack32, and mkl_lapack64 (see attached patch), since libraries with those names don't seem to exist in the MKL for Windows and were generating linking errors. This obviously isn't the right thing to do, and…

[Numpy-discussion] building with MKL in windows

2007-04-19 Thread Martin Spacek
Does anyone know the right way to get numpy to build on Windows using Intel's MKL for the LAPACK and BLAS libraries, under MSVC7.1? I just did a whole lot of trial and error getting it to build. I downloaded and installed MKL for Windows from http://www.intel.com/cd/software/products/asmo-na/eng/3077

[Numpy-discussion] added 1.0.1 release notes to wiki

2007-01-22 Thread Martin Spacek
Just a note that I've copied the 1.0.1 release notes from SourceForge: http://sourceforge.net/project/shownotes.php?group_id=1369&release_id=468153 over to the wiki: http://scipy.org/ReleaseNotes/NumPy_1.0 Should 1.0.1 get its own page, as previous 0.9.x releases did? Martin

[Numpy-discussion] concatenate a tuple of lists 10x slower in numpy 1.0.1

2007-01-19 Thread Martin Spacek
Hello, I just upgraded from numpy 1.0b5 to 1.0.1, and I noticed that a part of my code that was using concatenate() was suddenly far slower. I downgraded to 1.0, and the slowdown disappeared. Here's the code and the profiler results for 1.0 and 1.0.1: >>> import numpy as np >>> np.version.version…
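
A hypothetical micro-benchmark in the spirit of the report, for anyone wanting to reproduce it across versions (the sizes and shapes are mine):

    import timeit
    import numpy as np

    # Concatenate a tuple of many small Python lists, the pattern the
    # post reports as roughly 10x slower in 1.0.1 than in 1.0.
    lists = tuple([float(j) for j in range(10)] for _ in range(1000))
    t = timeit.timeit(lambda: np.concatenate(lists), number=100)
    print(np.__version__, t)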