I just opened a new ticket (http://projects.scipy.org/numpy/ticket/1660), but I
thought I'd bring it up here as well. I can't seem to get tofile() or save() to
write anything much bigger than a 2**32 byte array to a file in Py 2.6.6 on
64-bit Windows. They both hang with no errors. Also, fromfil
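A possible workaround, assuming the hang only affects single writes larger than
2**32 bytes, is to push the data out in smaller pieces. A minimal sketch (the
helper name and chunk size are arbitrary, not numpy API):

import numpy as np

def tofile_chunked(arr, f, chunk_bytes=2**30):
    # Write arr to the open binary file f in pieces no larger than chunk_bytes.
    flat = arr.reshape(-1)  # a view for contiguous arrays, a copy otherwise
    step = max(1, chunk_bytes // flat.itemsize)
    for start in range(0, flat.size, step):
        flat[start:start + step].tofile(f)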
On 2010-08-06 13:11, Martin Spacek wrote:
> Josef, I'd forgotten you could use None to increase the dimensionality of an
> array. Neat. And, somehow, it's almost twice as fast as the Cython version!:
>
> >>> timeit a[np.arange(a.shape[0])[:, None], i]
> 10
On 2010-08-06 06:57, Keith Goodman wrote:
> You can speed it up by getting rid of two copies:
>
> idx = np.arange(a.shape[0])
> idx *= a.shape[1]
> idx += i
Keith, you're right of course. I'd forgotten about your earlier suggestion
about
operating in-place. Here's my new version:
def rowta
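The definition is cut off in the archive; a minimal sketch of such a helper,
following the in-place idea quoted above (the name rowtake and the example
values are assumptions):

import numpy as np

def rowtake(a, i):
    # Pick one element per row of the 2D array a; i gives the column per row.
    idx = np.arange(a.shape[0])
    idx *= a.shape[1]   # in place: offset of each row in the flattened array
    idx += i            # in place: add the per-row column index
    return a.flat[idx]

a = np.arange(10).reshape(5, 2)
i = np.array([0, 1, 1, 0, 1])
rowtake(a, i)   # -> array([0, 3, 5, 6, 9])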
Keith Goodman wrote:
> Here's one way:
>
>>> a.flat[i + a.shape[1] * np.arange(a.shape[0])]
> array([0, 3, 5, 6, 9])
I'm afraid I made my example a little too simple. In retrospect, what I really
want is to be able to use a 2D index array "i", like this:
>>> a = np.array([[ 0, 1, 2,
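The example is cut off in the archive; here is a small self-contained sketch of
the 2D-index case (the concrete arrays are made up), using a broadcast row
index of shape (n, 1):

import numpy as np

a = np.arange(12).reshape(3, 4)
i = np.array([[0, 2],
              [1, 3],
              [0, 1]])                  # per-row column indices, shape (3, 2)
rows = np.arange(a.shape[0])[:, None]   # shape (3, 1), broadcasts against i
a[rows, i]
# array([[0, 2],
#        [5, 7],
#        [8, 9]])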
josef.pkt wrote:
>>> a = np.array([[0, 1],
                  [2, 3],
                  [4, 5],
                  [6, 7],
                  [8, 9]])
>>> i = np.array([0, 1, 1, 0, 1])
>>> a[range(a.shape[0]), i]
array([0, 3, 5, 6, 9])
>>> a[np.arange(a.shape[0]), i]
array([0, 3, 5, 6, 9])
Thank
I want to take an n x m array "a" and index into it using an integer index
array
"i" of length n that will pull out the value at the designated column from each
corresponding row of "a".
>>> a = np.arange(10)
>>> a.shape = 5, 2
>>> a
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7],
       [8, 9]])
On 2010-02-26 15:26, Pauli Virtanen wrote:
> No, the unpickled void scalar will own its data. The problem is that
> either the data is not saved correctly (unlikely), or it is unpickled
> incorrectly.
>
> The relevant code path to look at is multiarraymodule:array_scalar ->
> scalarapi.c:PyArray_Sc
On 2010-02-26 15:02, Robert Kern wrote:
>> Is this a known limitation?
>
> Nope. New bug! Thanks!
Good. I'm not crazy after all :)
> Pickling of complete arrays works. A quick workaround would be to send
> rank-0 scalars:
>
> Pool.map(map(np.asarray, x))
>
> Or just tuples:
>
> Pool.map(map
I have a 1D structured ndarray with several different fields in the dtype. I'm
using multiprocessing.Pool.map() to iterate over this structured ndarray,
passing one entry (of type numpy.void) at a time to the function to be called
by
each process in the pool. After much confusion about why this
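For reference, a minimal sketch of the rank-0 workaround suggested in the reply
quoted above (the field names, dtype and worker function are invented for
illustration): wrapping each record in np.asarray before Pool.map makes it
pickle as a complete 0-d array rather than a numpy.void scalar.

import numpy as np
from multiprocessing import Pool

def process(rec):
    rec = np.asarray(rec)   # already a 0-d structured array here; harmless
    return float(rec['x']) + float(rec['y'])

if __name__ == '__main__':
    data = np.zeros(4, dtype=[('x', np.float64), ('y', np.float64)])
    data['x'] = np.arange(4)
    data['y'] = 10 * np.arange(4)
    pool = Pool(2)
    results = pool.map(process, [np.asarray(rec) for rec in data])
    pool.close()
    pool.join()
    print(results)   # [0.0, 11.0, 22.0, 33.0]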
Robert Cimrman writes:
>
> Hi Martin,
>
> thanks for your ideas and contribution.
>
> A few notes: I would leave intersect1d as it is, and create a new function
> with another name for that (any proposals?). Considering that most of the
> arraysetops functions are based on sort, and in pa
I have a list of many arrays (in my case each is unique, ie has no repeated
elements), and I'd like to extract the intersection of all of them, all in one
go. I'm running numpy 1.3.0, but looking at today's rev of numpy.lib.arraysetops
(http://svn.scipy.org/svn/numpy/trunk/numpy/lib/arraysetops.py)
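For comparison, the obvious approach is to fold intersect1d over the list with
reduce (the example arrays are made up; since each input is unique,
intersect1d's uniqueness assumption holds for every intermediate result too):

import numpy as np
from functools import reduce

arrays = [np.array([1, 2, 3, 4, 5]),
          np.array([2, 3, 5, 7]),
          np.array([0, 2, 3, 5, 8])]
common = reduce(np.intersect1d, arrays)
common   # -> array([2, 3, 5])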
>> By the way, I installed 64-bit linux (ubuntu 7.10) on the same machine,
>> and now numpy.memmap works like a charm. Slicing around a 15 GB file is fun!
>>
> Thanks for the feedback !
> Did you get the kind of speed you need and/or the speed you were hoping for?
Nope. Like I wrote earlier, it s
Sebastian Haase wrote:
> b) To my knowledge, any OS (Linux, Windows, OS X) can allocate at most
> about 1GB of data - assuming you have a 32-bit machine.
> The actual numbers I measured varied from about 700MB to maybe 1.3GB.
> In other words, you would be right at the limit.
> (For 64bit, you would h
Gael Varoquaux wrote:
> Very interesting. Have you made measurements to see how many times you
> lost one of your cycles? I made this kind of measurement on Linux using
> the real-time clock with C and it was very interesting
> (http://www.gael-varoquaux.info/computers/real-time). I want to red
Francesc Altet wrote:
> Perhaps something that can surely improve your timings is first
> performing a read of your data file(s) while throwing away the data as
> you are reading it. This serves only to load the file entirely (if you
> have enough memory, which seems to be your case) into the OS page cache. T
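A minimal sketch of that warm-up read (the file name and chunk size are
placeholders): stream through the file once and discard the data, so the
timing-critical pass can then read from the OS page cache instead of the disk.

def warm_cache(filename, chunk=64 * 1024 * 1024):
    # Read the whole file once, throwing the data away, to fill the page cache.
    with open(filename, 'rb') as f:
        while f.read(chunk):
            pass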
Sebastian Haase wrote:
> reading this thread I have two comments.
> a) *Displaying* at 200Hz probably makes little sense, since humans
> would only see a maximum of about 30Hz (aka video frame rate).
> Consequently you would want to separate your data frame rate, that (as
> I understand) you want to sav
Martin Spacek wrote:
> Would it be better to load the file one
> frame at a time, generating nframes arrays of shape (height, width),
> and sticking them consecutively in a python list?
I just tried this, and it works. Looks like it's all in physical RAM (no
disk thrashing on t
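A sketch of that frame-at-a-time load (file name, dtype and frame shape are
assumptions): read nframes separate (height, width) arrays and keep them in a
Python list rather than one giant block.

import numpy as np

def load_frames(filename, nframes, height, width, dtype=np.uint8):
    frames = []
    with open(filename, 'rb') as f:
        for _ in range(nframes):
            frame = np.fromfile(f, dtype=dtype, count=height * width)
            frames.append(frame.reshape(height, width))
    return frames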
Kurt Smith wrote:
> You might try numpy.memmap -- others have had success with it for
> large files (32 bit should be able to handle a 1.3 GB file, AFAIK).
Yeah, I looked into numpy.memmap. Two issues with that. I need to
eliminate as much disk access as possible while my app is running. I'm
d
I need to load a 1.3GB binary file entirely into a single numpy.uint8
array. I've been using numpy.fromfile(), but for files > 1.2GB on my
win32 machine, I get a memory error. Actually, since I have several
other python modules imported at the same time, including pygame, I get
a "pygame parachute"
lorenzo bolla wrote:
> Hi all,
> I need to know which libraries (BLAS and LAPACK) numpy was linked
> against when I compiled it.
> I can't remember which ones I used (ATLAS, MKL, etc...)...
> Is there an easy way to find it out?
> Thanks in advance,
> Lorenzo Bolla.
Yup:
>>> import numpy
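The reply is cut off in the archive; presumably it continues with numpy's
build-configuration report, along these lines:

>>> import numpy
>>> numpy.show_config()   # prints the BLAS/LAPACK info found at build time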
I just tried building the 1.0.2 release, and I still get the type
conversion problem. Building from 1.0.3dev3736 makes the problem
disappear. Was this an issue that was fixed recently?
Martin
Martin Spacek wrote:
> In linux and win32 (numpy 1.0.1 release compiled from source,
In linux and win32 (numpy 1.0.1 release compiled from source, and
1.0.3dev3726 respectively), I get the following normal behaviour:
>>> import numpy as np
>>> np.array([1.0, 2.0, 3.0, 4.0])
array([ 1., 2., 3., 4.])
>>> np.int32(np.array([1.0, 2.0, 3.0, 4.0]))
array([ 1, 2, 3, 4])
But on thr
Great, thanks Tim!
Martin
Timothy Hochberg wrote:
>
> Using np.equal instead of == seems to work:
>
> >>> i = np.array([0,1,2,None,3,4,None])
> >>> i
> array([0, 1, 2, None, 3, 4, None], dtype=object)
> >>> np.where(i == None)
> ()
> >>> i == None
> False
> >>> np.where(np.equal(i, None))
I want to find the indices of all the None objects in an object array:
>>> import numpy as np
>>> i = np.array([0, 1, 2, None, 3, 4, None])
>>> np.where(i == None)
()
Using == doesn't work the same way on object arrays as it does on, say,
an array of int32. Any suggestions? Do I have to use a lo
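A compact, runnable version of the fix quoted above: np.equal compares
element-wise on the object array, whereas == (in the numpy of that era)
collapses to a single False.

import numpy as np

i = np.array([0, 1, 2, None, 3, 4, None])
np.where(np.equal(i, None))   # -> (array([3, 6]),)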
Also, I found I had to remove the 2 lines in distutils/system_info.py
that mention pthread, mkl_lapack32, mkl_lapack64 (see attached patch)
since libraries with such names don't seem to exist in the MKL for
windows and were generating linking errors.
This obviously isn't the right thing to do, an
Does anyone know the right way to get numpy to build on windows using
Intel's MKL for LAPACK and BLAS libraries, under MSVC7.1?
I just did a whole lot of trial-and-error getting it to build. I
downloaded and installed MKL for windows from
http://www.intel.com/cd/software/products/asmo-na/eng/3077
Just a note that I've copied over the 1.0.1 release notes from SourceForge:
http://sourceforge.net/project/shownotes.php?group_id=1369&release_id=468153
over to the wiki:
http://scipy.org/ReleaseNotes/NumPy_1.0
Should 1.0.1 get its own page, as previous 0.9.x releases did?
Martin
Hello,
I just upgraded from numpy 1.0b5 to 1.0.1, and I noticed that a part of
my code that was using concatenate() was suddenly far slower. I
downgraded to 1.0, and the slowdown disappeared. Here's the code
and the profiler results for 1.0 and 1.0.1:
>>> import numpy as np
>>> np.version.version