[Numpy-discussion] record array performance issue / bug

2015-11-21 Thread G Jones
Hi, Using the latest numpy from anaconda (1.10.1) on Python 2.7, I found that the following code works OK if npackets = 2, but acts bizarrely if npackets is large (2**12):

    npackets = 2**12
    dlen = 2048
    PacketType = np.dtype([('timestamp','float64'), ('pkts',np.
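
The archive truncates the snippet above mid-dtype. A minimal sketch of what the compound dtype plausibly looked like; the shape of the 'pkts' field and the fill step are assumptions for illustration, not the original post's code:

    import numpy as np

    npackets = 2**12
    dlen = 2048
    # Assumed completion: 'pkts' as a fixed-length uint8 sub-array per record
    PacketType = np.dtype([('timestamp', 'float64'),
                           ('pkts', np.uint8, (dlen,))])
    data = np.zeros(npackets, dtype=PacketType)
    data['timestamp'] = np.arange(npackets)  # fill one field across all records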

Re: [Numpy-discussion] Transparently reading complex arrays from netcdf4

2014-03-30 Thread G Jones
. The latter is generally a good practice when > writing library code, anyway, to catch unusual ndarray subclasses like > np.matrix. > > Stephan > > > On Sat, Mar 29, 2014 at 8:42 PM, G Jones wrote: > >> Hi Stephan, >> Thanks for the reply. I was thinking of somethin

Re: [Numpy-discussion] Transparently reading complex arrays from netcdf4

2014-03-29 Thread G Jones
> need will be read from disk and converted on the fly. > > Hope this helps! > > Cheers, > Stephan > > > > > On Sat, Mar 29, 2014 at 6:13 PM, G Jones wrote: > >> Hi, >> I am using netCDF4 to store complex data using the recommended strategy >

[Numpy-discussion] Transparently reading complex arrays from netcdf4

2014-03-29 Thread G Jones
Hi, I am using netCDF4 to store complex data using the recommended strategy of creating a compound data type with the real and imaginary parts. This all works well, but reading the data into a numpy array is a bit clumsy. Typically I do:

    nc = netCDF4.Dataset('my.nc')
    cplx_data = nc.groups['mygrou
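
The approach suggested in the replies above is to reinterpret the compound data as complex without copying. A minimal sketch, assuming hypothetical group/variable names and a two-float64 field layout (the exact names in the original post are truncated in the archive):

    import numpy as np
    import netCDF4

    nc = netCDF4.Dataset('my.nc')
    var = nc.groups['mygroup'].variables['mydata']  # hypothetical names
    raw = var[:]  # structured array, e.g. dtype [('real','f8'), ('imag','f8')]
    # Two contiguous float64 fields share complex128's memory layout,
    # so a view reinterprets the data in place with no copy.
    cplx = raw.view(np.complex128)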

Re: [Numpy-discussion] binary to ascii

2011-11-29 Thread G Jones
With pure python you can do: chr(int('1000001', base=2)) which gives 'A'. On Tue, Nov 29, 2011 at 12:13 PM, Alex Ter-Sarkissov wrote: > hi everyone, > > is there a simple command in numpy similar to matlab char(bin2dec('//some > binary value//')) to convert binary to characters and back? > > thanks
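
A short sketch of both directions in pure Python; the reverse (character to binary) conversion is an addition here, not part of the archived reply:

    # binary string -> character
    ch = chr(int('1000001', base=2))    # 'A'
    # character -> binary string
    bits = format(ord('A'), 'b')        # '1000001'
    # zero-padded to a full byte:
    bits8 = format(ord('A'), '08b')     # '01000001'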

Re: [Numpy-discussion] Dealing with arrays

2011-09-20 Thread G Jones
    data = [[1,2,1,1,4,2,1], [1,2,1,1,4,2,1,2,2,2,1,1,1], [1], [2]]

    def count_dict(arr):
        return dict([(x, (arr == x).sum()) for x in np.unique(arr)])

    [count_dict(x) for x in data]

yields:

    [{1: 4, 2: 2, 4: 1}, {1: 7, 2: 5, 4: 1}, {1: 1}, {2: 1}]

not efficient, but it works. On Tue, Sep 20, 2011 at 7:27
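
For larger inputs, a faster variant counts every value in a single pass. Note that np.unique's return_counts argument arrived in numpy 1.9, well after this 2011 thread, so this is a retrospective sketch rather than what was available then:

    import numpy as np

    data = [[1,2,1,1,4,2,1], [1,2,1,1,4,2,1,2,2,2,1,1,1], [1], [2]]

    def count_dict(arr):
        # Unique values and their counts in one call
        values, counts = np.unique(arr, return_counts=True)
        return dict(zip(values.tolist(), counts.tolist()))

    print([count_dict(x) for x in data])
    # [{1: 4, 2: 2, 4: 1}, {1: 7, 2: 5, 4: 1}, {1: 1}, {2: 1}]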

Re: [Numpy-discussion] Dealing with arrays

2011-09-20 Thread G Jones
If you know the values that you want to count, you could just do: (data_array == value).sum() to find the number of times that "value" occurs in "data_array". You could use np.unique(data_array) to find the unique values and then count the number of occurrences of each value. On Tue, Sep 20, 201

Re: [Numpy-discussion] Memory leak/fragmentation when using np.memmap

2011-05-18 Thread G Jones
others must work with such large datasets using numpy/python? Thanks, Glenn On Wed, May 18, 2011 at 4:21 PM, Pauli Virtanen wrote: > On Wed, 18 May 2011 15:09:31 -0700, G Jones wrote: > [clip] > > import numpy as np > > > > x = np.memmap('mybigfile.bin',mod

[Numpy-discussion] Memory leak/fragmentation when using np.memmap

2011-05-18 Thread G Jones
Hello, I need to process several large (~40 GB) files. np.memmap seems ideal for this, but I have run into a problem that looks like a memory leak or memory fragmentation. The following code illustrates the problem:

    import numpy as np

    x = np.memmap('mybigfile.bin', mode='r', dtype='uint8')
    print x.s
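
The snippet cuts off mid-statement in the archive. One common way to keep resident memory bounded when scanning a memmap this large is to walk it in fixed-size chunks; the chunk size and the sum reduction below are assumptions, not the original post's code:

    import numpy as np

    x = np.memmap('mybigfile.bin', mode='r', dtype='uint8')

    chunk = 256 * 1024 * 1024  # 256 MB window; an arbitrary choice
    total = 0
    for start in range(0, x.shape[0], chunk):
        # Only this slice's pages are touched; the OS can evict them
        # once the loop moves past the window.
        total += int(x[start:start + chunk].sum())
    print(total)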