Hi,
Using the latest numpy from anaconda (1.10.1) on Python 2.7, I found that
the following code works OK if npackets = 2, but acts bizarrely if npackets
is large (2**12):
---
npackets = 2**12
dlen=2048
PacketType = np.dtype([('timestamp','float64'),
('pkts',np.
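The dtype definition above is cut off; a hypothetical completion, assuming 'pkts' is meant to be a uint8 subarray of length dlen with one record allocated per packet, might look like:

import numpy as np

npackets = 2**12
dlen = 2048

# assumed shape: 'pkts' as a dlen-long uint8 subarray per record
PacketType = np.dtype([('timestamp', 'float64'),
                       ('pkts', np.uint8, dlen)])
data = np.zeros(npackets, dtype=PacketType)  # one record per packet (assumed usage)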
> The latter is generally a good practice when
> writing library code anyway, to catch unusual ndarray subclasses like
> np.matrix.
>
> Stephan
>
>
> On Sat, Mar 29, 2014 at 8:42 PM, G Jones wrote:
>
>> Hi Stephan,
>> Thanks for the reply. I was thinking of something
> need will be read from disk and converted on the fly.
>
> Hope this helps!
>
> Cheers,
> Stephan
>
> On Sat, Mar 29, 2014 at 6:13 PM, G Jones wrote:
>
>> Hi,
>> I am using netCDF4 to store complex data using the recommended strategy
>
Hi,
I am using netCDF4 to store complex data using the recommended strategy of
creating a compound data type with the real and imaginary parts. This all
works well, but reading the data into a numpy array is a bit clumsy.
Typically I do:
nc = netCDF4.Dataset('my.nc')
cplx_data = nc.groups['mygrou
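The read shown above is cut off; a minimal sketch of the kind of read-and-convert step being described, with hypothetical group and variable names ('mygroup', 'cplx') and assuming the compound type has 'real' and 'imag' fields, is:

import numpy as np
import netCDF4

nc = netCDF4.Dataset('my.nc')
raw = nc.groups['mygroup'].variables['cplx'][:]   # structured array with 'real'/'imag' fields (assumed names)
cplx_data = raw['real'] + 1j * raw['imag']        # assemble an ordinary complex ndarray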
With pure python you can do:
chr(int('1000001', base=2))
'A'
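If a numpy-flavoured round trip is wanted, a rough sketch using np.binary_repr for the reverse direction (the example strings are arbitrary):

import numpy as np

bits = ['1000001', '1000010']                    # binary for 'A', 'B'
chars = [chr(int(b, base=2)) for b in bits]      # ['A', 'B']
back = [np.binary_repr(ord(c)) for c in chars]   # ['1000001', '1000010']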
On Tue, Nov 29, 2011 at 12:13 PM, Alex Ter-Sarkissov wrote:
> hi everyone,
>
> is there a simple command in numpy similar to matlab char(bin2dec('//some
> binary value//')) to convert binary to characters and back?
>
> thanks
>
import numpy as np
data = [[1,2,1,1,4,2,1], [1,2,1,1,4,2,1,2,2,2,1,1,1], [1], [2]]
def count_dict(arr):
    arr = np.asarray(arr)  # accept plain lists as well as ndarrays
    return dict([(x, (arr == x).sum()) for x in np.unique(arr)])
[count_dict(x) for x in data]
yields:
[{1: 4, 2: 2, 4: 1}, {1: 7, 2: 5, 4: 1}, {1: 1}, {2: 1}]
not efficient, but it works
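A slightly tighter variant, assuming a numpy recent enough (>= 1.9) to have return_counts, does each count in a single np.unique call:

import numpy as np

def count_dict(arr):
    # unique values and their counts in one pass
    values, counts = np.unique(np.asarray(arr), return_counts=True)
    return dict(zip(values, counts))

[count_dict(x) for x in data]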
On Tue, Sep 20, 2011 at 7:27
If you know the values that you want to count, you could just do:
(data_array == value).sum()
to find the number of times that "value" occurs in "data_array".
You could use np.unique(data_array) to find the unique values and then count
the number of occurrences of each value.
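For nonnegative integer data, np.bincount gives all the counts in one call; a small sketch (the example array is made up):

import numpy as np

data_array = np.array([1, 2, 1, 1, 4, 2, 1])
counts = np.bincount(data_array)   # counts[v] is the number of times v occurs
# counts -> array([0, 4, 2, 0, 1])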
On Tue, Sep 20, 201
others
must work with such large datasets using numpy/python?
Thanks,
Glenn
On Wed, May 18, 2011 at 4:21 PM, Pauli Virtanen wrote:
> On Wed, 18 May 2011 15:09:31 -0700, G Jones wrote:
> [clip]
> > import numpy as np
> >
> > x = np.memmap('mybigfile.bin',mod
Hello,
I need to process several large (~40 GB) files. np.memmap seems ideal for
this, but I have run into a problem that looks like a memory leak or memory
fragmentation. The following code illustrates the problem:
import numpy as np
x = np.memmap('mybigfile.bin', mode='r', dtype='uint8')
print x.s
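One workaround sketch is to walk the memmap in fixed-size slices so only a bounded window of the file is touched at a time; the chunk size and the sum reduction below are assumptions, not the original code:

import numpy as np

x = np.memmap('mybigfile.bin', mode='r', dtype='uint8')

chunk = 2**24                                    # ~16 MB per slice (arbitrary choice)
total = 0
for start in xrange(0, x.shape[0], chunk):
    total += x[start:start + chunk].sum(dtype='uint64')
print total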