Re: [Numpy-discussion] Viewer for 2D Numpy arrays (GUI)

2010-09-17 Thread Kim Hansen
2010/9/17 Mayank P Jain:
> I thought about these options, but what I need is an Excel-like interface
> that displays the values for each cell, and one can modify and save the
> files.
>
> This would be a convenient way of saving large files in less space and at
> the same time see them, and would remove th…

Re: [Numpy-discussion] Efficient array equivalent to cmp(x,y)

2009-09-17 Thread Kim Hansen
> In [1]: a = np.array([0, 2, 4, 1, 3, 0, 3, 4, 0, 1])
> In [2]: lim = 2
> In [3]: np.sign(a - lim)
> Out[3]: array([-1,  0,  1, -1,  1, -1,  1,  1, -1, -1])

Dooh. Facepalm. I should have thought of that myself! Only one intermediate array needs to be created then. Thank you. That was…
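For reference, a minimal sketch of the vectorized cmp idea from this thread (the array values are taken from the quoted session; the scalar reference using (x > y) - (x < y) is my own stand-in, since cmp() no longer exists in Python 3):

import numpy as np

a = np.array([0, 2, 4, 1, 3, 0, 3, 4, 0, 1])
lim = 2

# np.sign(a - lim) yields -1, 0 or 1 per element, matching cmp(a[i], lim)
vectorized = np.sign(a - lim)

# Scalar reference: (x > y) - (x < y) reproduces Python 2's cmp()
reference = np.array([(x > lim) - (x < lim) for x in a])
assert (vectorized == reference).all()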

Re: [Numpy-discussion] Efficient array equivalent to cmp(x,y)

2009-09-17 Thread Kim Hansen
> If there are no NaNs, you only need to make 2 masks by using ones
> instead of empty. Not elegant, but a little faster.

Good point! Thanks.
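The two-mask variant itself is not quoted in full, so the following is a hedged reconstruction of what it might look like: start from an array of ones (covering the a > lim case) and overwrite the other two cases with boolean masks:

import numpy as np

a = np.array([0, 2, 4, 1, 3, 0, 3, 4, 0, 1])
lim = 2

# Start from ones so the "greater than" case needs no mask; the two
# remaining cases are filled in with boolean masks. Assumes no NaNs in a.
result = np.ones(a.shape, dtype=int)
result[a < lim] = -1
result[a == lim] = 0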

[Numpy-discussion] Efficient array equivalent to cmp(x,y)

2009-09-17 Thread Kim Hansen
Hi, Is there an array function equivalent to the built-in Python single-value comparison cmp(x, y)? What I would like is a cmp(a, lim), where a is an ndarray and lim is a single value, and then I need an array back of a's shape giving the elementwise comparison array([cmp(a[0…

Re: [Numpy-discussion] Huge arrays

2009-09-10 Thread Kim Hansen
> On 9-Sep-09, at 4:48 AM, Francesc Alted wrote:
>> Yes, this latter is supported in PyTables as long as the underlying
>> filesystem supports files > 2 GB, which is very usual in modern
>> operating systems.
>
> I think the OP said he was on Win32, in which case it should be noted:
> FAT…

Re: [Numpy-discussion] saving incrementally numpy arrays

2009-08-10 Thread Kim Hansen
I have had some similar challenges in my work, and here appending the numpy arrays to HDF5 files using PyTables has been the solution for me; that, used in combination with LZO compression/decompression, has led to very high read/write performance in my application with low memory consumption. Y…
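A minimal sketch of the approach described here (the file name, atom type and chunk sizes are illustrative; LZO must be available in your PyTables build, with zlib or blosc as common fallbacks):

import numpy as np
import tables

filters = tables.Filters(complevel=5, complib='lzo')
with tables.open_file('data.h5', mode='w') as h5:
    # An EArray is extendable along its first axis, so chunks can be
    # appended incrementally without holding everything in RAM.
    earray = h5.create_earray(h5.root, 'data',
                              atom=tables.Float64Atom(),
                              shape=(0,),
                              filters=filters)
    for _ in range(10):            # e.g. one append per chunk read
        chunk = np.random.rand(100000)
        earray.append(chunk)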

Re: [Numpy-discussion] Not enough storage for memmap on 32 bit WinXP for accumulated file size above approx. 1 GB

2009-07-27 Thread Kim Hansen
2009/7/27 Sebastian Haase:
> Is PyTables any option for you?
>
> --
> Sebastian Haase

That may indeed be something for me! I had heard the name before, but I never realized exactly what it was. However, I have just seen their first tutorial video, and it seems like a very, very useful package a…

Re: [Numpy-discussion] Not enough storage for memmap on 32 bit WinXP for accumulated file size above approx. 1 GB

2009-07-27 Thread Kim Hansen
> You could think about using some kind of virtualisation - this is
> exactly the sort of situation where I find it really useful.
>
> You can run a 64 bit host OS, then have 32 bit XP as a 'guest' in
> VMware or VirtualBox or some other virtualisation software. With
> recent CPUs there is very…

Re: [Numpy-discussion] Not enough storage for memmap on 32 bit WinXP for accumulated file size above approx. 1 GB

2009-07-27 Thread Kim Hansen
> I think it would be quite complicated. One fundamental "limitation" of
> numpy is that it views a contiguous chunk of memory. You can't have one
> numpy array which is the union of two memory blocks with a hole in
> between, so if you slice every 1000 items, the underlying memory of the
> array…
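A short sketch of the view-vs-copy point made above: a strided slice is still backed by the entire original buffer, while an explicit copy is not:

import numpy as np

a = np.arange(10000000, dtype=np.uint8)

# A stepped slice is a strided *view*: no data is copied, and the view
# keeps the whole underlying buffer alive in the address space.
v = a[::1000]
assert v.base is a

# An explicit copy allocates a new, small, contiguous block instead.
c = a[::1000].copy()
assert c.base is None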

Re: [Numpy-discussion] Not enough storage for memmap on 32 bit WinXP for accumulated file size above approx. 1 GB

2009-07-27 Thread Kim Hansen
2009/7/24 David Cournapeau:
> Well, the question has popped up a few times already, so I guess this
> is not so obvious :) A 32-bit architecture fundamentally means that a
> pointer is 32 bits, so you can only address 2^32 different memory
> locations. The 2 GB instead of 4 GB is a consequence on…
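As a quick aside, a two-line check of whether the running Python itself is a 32- or 64-bit process (and hence which address-space ceiling applies):

import struct
import sys

# Pointer size in bits: 32 or 64 depending on the Python build.
print(struct.calcsize("P") * 8)
print(sys.maxsize > 2**32)   # True on a 64-bit Python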

Re: [Numpy-discussion] Not enough storage for memmap on 32 bit WinXP for accumulated file size above approx. 1 GB

2009-07-24 Thread Kim Hansen
>> I tried adding the /3GB switch to boot.ini as you suggested:
>> multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP
>> Professional" /noexecute=optin /fastdetect /3GB
>> and rebooted the system.
>>
>> Unfortunately that did not change anything for me. I still hit a hard
>> deck…

Re: [Numpy-discussion] Not enough storage for memmap on 32 bit WinXP for accumulated file size above approx. 1 GB

2009-07-24 Thread Kim Hansen
2009/7/24 Citi, Luca:
> Hello!
> I have access to both a 32bit and a 64bit linux machine.
>
> I had to change your code (appended) because I got an error about
> not being able to create a mmap larger than the file.
> Here are the results...
>
> On the 32bit machine:
>
> lc...@xps2:~$ python /tmp/…
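The test script is only referenced above, not shown; a hedged reconstruction of this kind of address-space probe might look like the following, which maps the same file repeatedly until the OS refuses:

import numpy as np
import os
import tempfile

# Rough probe: map a 100 MB file again and again until the process runs
# out of address space. The exact failure point depends on the OS and on
# address-space fragmentation.
block = 100 * 1024 * 1024
path = os.path.join(tempfile.gettempdir(), 'mmap_probe.dat')
with open(path, 'wb') as f:
    f.truncate(block)

maps = []
try:
    while True:
        maps.append(np.memmap(path, dtype=np.uint8, mode='r', shape=(block,)))
except Exception as exc:
    print('Failed after %.1f GB: %s' % (len(maps) * block / 1024.0 ** 3, exc))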

Re: [Numpy-discussion] Not enough storage for memmap on 32 bit Win XP for accumulated file size above approx. 1 GB

2009-07-24 Thread Kim Hansen
2009/7/23 Charles R Harris:
>> Maybe I am measuring memory usage wrong?
>
> Hmm, I don't know what you should be looking at in XP. Memmapped files are
> sort of like virtual memory and exist in the address space even if they
> aren't in physical memory. When you address an element that isn't in…

Re: [Numpy-discussion] Not enough storage for memmap on 32 bit Win XP for accumulated file size above approx. 1 GB

2009-07-23 Thread Kim Hansen
2009/7/23 Charles R Harris:
>> Is it due to the 32 bit OS I am using?
>
> It could be. IIRC, 32-bit Windows gives user programs 2 GB of addressable
> memory, so your files need to fit in that space even if the data is on disk.
> You aren't using that much memory but you are close, and it cou…

[Numpy-discussion] Not enough storage for memmap on 32 bit Win XP for accumulated file size above approx. 1 GB

2009-07-23 Thread Kim Hansen
OS: Win XP SP3, 32-bit
Python: 2.5.4
Numpy: 1.3.0

I am having some major problems converting a 750 MB recarray into an 850 MB recarray. To save RAM I would like to use a read-only and a writeable memmap for the two recarrays during the conversion. So I do something like:

import os
from stat…
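The rest of the code is cut off above; a hedged sketch of the general pattern (the record layouts and the derived field are hypothetical, since the real dtypes are not shown):

import numpy as np

# Hypothetical record layouts; the real ones are not shown in the post.
src_dtype = np.dtype([('a', np.uint32), ('b', np.float32)])
dst_dtype = np.dtype([('a', np.uint32), ('b', np.float32), ('c', np.float32)])

n = 1000000
# mode='w+' creates the files at the right size here; an existing source
# would be opened read-only with mode='r' instead.
src = np.memmap('src.dat', dtype=src_dtype, mode='w+', shape=(n,))
dst = np.memmap('dst.dat', dtype=dst_dtype, mode='w+', shape=(n,))

# Convert in chunks so only a small slice is resident at a time.
step = 100000
for i in range(0, n, step):
    s = src[i:i + step]
    d = dst[i:i + step]
    d['a'] = s['a']
    d['b'] = s['b']
    d['c'] = s['b'] * 2.0   # illustrative derived field
dst.flush()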

Re: [Numpy-discussion] extract elements of an array that are contained in another array?

2009-06-04 Thread Kim Hansen
Concerning the name setmember1d_nu, I personally find it quite verbose, and not the name I would expect as a non-insider coming to numpy, not knowing the names of the more specialized hidden-away functions, and not being a Python wiz either. I think ain(a, b) would be the name I had expected, as an…

Re: [Numpy-discussion] Numpy array in iterable

2009-03-05 Thread Kim Hansen
2009/3/5 Robert Cimrman:
> Great! It's a nice use case for return_inverse=True in unique1d().
>
> I have fixed the formatting, but cannot remove the previous comment.
>
> r.

;-) Thank you for fixing the formatting, --Kim

Re: [Numpy-discussion] Numpy array in iterable

2009-03-05 Thread Kim Hansen
2009/3/5 Robert Cimrman:
> I have added your implementation to
> http://projects.scipy.org/numpy/ticket/1036 - is it ok with you to add
> the function eventually into arraysetops.py, under the numpy (BSD) license?
>
> cheers,
> r.

Yes, that would be fine with me. In fact that would be an honor!

Re: [Numpy-discussion] Numpy array in iterable

2009-03-05 Thread Kim Hansen
…which was longer and included a loop. I do not know which is the most efficient one, but I understand this one better. -- Slaunger

2009/2/25:
> On Wed, Feb 25, 2009 at 7:28 AM, Kim Hansen wrote:
>> Hi Numpy discussions
>> Quite often I find myself wanting to generate a boolean m…

Re: [Numpy-discussion] Numpy array in iterable

2009-02-25 Thread Kim Hansen
> I just looked under "set routines" in the help file. I really like the
> speed of the windows help file.

Is there a Numpy Windows help file? Cool! But where is it? I can't find it in my numpy 1.2.1 installation?!? I like the Python 2.5 Windows help file too, and I agree it is a fast and effic…

Re: [Numpy-discussion] Numpy array in iterable

2009-02-25 Thread Kim Hansen
Yes, this is exactly what I was after, only the function name did not ring a bell (I still cannot associate it with something meaningful for my use case). Thanks! -- Slaunger

2009/2/25:
> On Wed, Feb 25, 2009 at 7:28 AM, Kim Hansen wrote:
>> Hi Numpy discussions
>> Quite ofte…

[Numpy-discussion] Numpy array in iterable

2009-02-25 Thread Kim Hansen
Hi Numpy discussions, Quite often I find myself wanting to generate a boolean mask for fancy slicing of some array, where the mask itself is generated by checking if its value has one of several relevant values (corresponding to states). So at the element level this corresponds to checking if ele…
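For the record, the idiom that resolves this thread in current NumPy (the example values are illustrative; at the time of the thread the proposed name was setmember1d_nu, later np.in1d, now np.isin):

import numpy as np

a = np.array([0, 2, 4, 1, 3, 0, 3, 4, 0, 1])
states = [1, 3]   # the "relevant values"

# True where a[i] is one of the states; usable directly for fancy slicing.
mask = np.isin(a, states)
selected = a[mask]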

Re: [Numpy-discussion] How to make "lazy" derived arrays in a recarray view of a memmap on large files

2009-01-19 Thread Kim Hansen
> …as your starting point.
>
> If your code is meant only for yourself, I'd suggest going with the
> inconvenient but working method approach. If other people will want to
> use your class in an array-like manner, you'd have to properly define
> all the functionality that…

[Numpy-discussion] How to make "lazy" derived arrays in a recarray view of a memmap on large files

2009-01-16 Thread Kim Hansen
Hi numpy forum, I need to efficiently handle some large (300 MB) record-like binary files, where some data fields are less than a byte and thus cannot be mapped to a record dtype immediately. I would like to be able to access these derived arrays in a memory-efficient manner, but I cannot figure out…
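One way to get the "lazy" behaviour asked about here is a thin wrapper class whose properties decode the packed fields only when read; a hedged sketch (the field name and bit layout are hypothetical):

import numpy as np

class LazyRecords(object):
    """Wrap a memmapped record array and decode sub-byte fields on access."""

    def __init__(self, path, dtype, shape):
        self._rec = np.memmap(path, dtype=dtype, mode='r', shape=shape)

    def __getitem__(self, idx):
        return self._rec[idx]

    @property
    def high(self):
        # Upper 4 bits of the packed byte; computed only when read.
        return self._rec['packed'] >> 4

    @property
    def low(self):
        # Lower 4 bits of the packed byte.
        return self._rec['packed'] & 0x0F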

[Numpy-discussion] How to handle fields less than one byte in a recarray

2009-01-15 Thread Kim Hansen
Hi Numpy forum, let me start out with a generic example:

In [3]: test_byte_str = "".join(["\x12\x03", "\x23\x05", "\x35\x08"])
In [4]: desc = dtype({'names': ["HIGHlow", "HIGH + low"], 'formats': [uint8, uint8]})
In [5]: r = rec.fromstring(test_byte_str, dtype=desc)
In [6]: r[0]
Out[6]: (18,…
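One common answer to this kind of question is to map the packed byte as uint8 and split it with bitwise operations afterwards; a small sketch continuing the example above (the b'' bytes literal is the Python 3 spelling of the same string):

import numpy as np

desc = np.dtype({'names': ['HIGHlow', 'HIGH + low'],
                 'formats': [np.uint8, np.uint8]})
r = np.rec.fromstring(b'\x12\x03\x23\x05\x35\x08', dtype=desc)

# The two 4-bit halves of the packed first byte:
high = r['HIGHlow'] >> 4     # array([1, 2, 3], dtype=uint8)
low = r['HIGHlow'] & 0x0F    # array([2, 3, 5], dtype=uint8)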

[Numpy-discussion] How to efficiently reduce a 1-dim 100-10000 element array with user defined binary function

2008-11-14 Thread Kim Hansen
Dear numpy-discussion, I am quite new to Python and numpy. I am working on a Python application using Scipy, where I need to unpack and pack quite large amounts of data (GBs) into data structures and convert them into other data structures. Until now the program has been running amazingly efficie…
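On the subject-line question (reducing a 1-D array with a user-defined binary function), one approach I know of is np.frompyfunc, which turns a Python function into a ufunc whose .reduce folds the array; a sketch with an illustrative operation (convenient rather than fast, since it runs on object dtype):

import numpy as np

def f(x, y):
    # Illustrative binary op standing in for the real user-defined function.
    return x ^ (y << 1)

uf = np.frompyfunc(f, 2, 1)   # 2 inputs, 1 output
a = np.arange(100)
result = uf.reduce(a)         # f(...f(f(a[0], a[1]), a[2])..., a[99])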