lib/em64t/libmkl_intel_thread.a
$(MKLROOT)/lib/em64t/libmkl_core.a -Wl,--end-group -openmp -lpthread
On Mon, Mar 14, 2011 at 11:58 PM, Ralf Gommers wrote:
> On Tue, Mar 15, 2011 at 8:12 AM, Mag Gam wrote:
>> Trying to compile NumPy with Intel's MKL. I have exported the proper
>> paths for BLAS and LAPACK, and I think the build script found them.
Trying to compile NumPy with Intel's MKL. I have exported the proper
paths for BLAS and LAPACK, and I think the build script found them.
However, I am having a lot of trouble with ATLAS. What library file
should I use for it?
tia
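
Once a build goes through, one quick way to check whether it actually picked up MKL rather than ATLAS or the reference BLAS is to print NumPy's build configuration; a minimal check:

import numpy as np

# Prints the BLAS/LAPACK configuration NumPy was built against;
# a build that found MKL lists the mkl libraries in the blas/lapack sections.
np.show_config()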
Planning to compile NumPy with the Intel C compiler
(http://www.scipy.org/Installing_SciPy/Linux#head-7ce43956a69ec51c6f2cedd894a4715d5bfff974).
I was wondering if there is a benefit.
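
One rough way to see whether the Intel build is worth it for a given workload is to time the same operations under both builds; a small sketch (the array size is arbitrary):

import time
import numpy as np

a = np.random.rand(2000, 2000)

start = time.time()
np.dot(a, a)              # BLAS-bound; mostly reflects the linked BLAS (e.g. MKL)
print("dot     :", time.time() - start)

start = time.time()
np.sin(a).sum()           # ufunc-bound; more sensitive to how NumPy itself was compiled
print("sin+sum :", time.time() - start)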
en the access to the multiple dictionary accesses. But
> don't forget, you are replacing an O(n) algorithm with an O(n log n) one
> that has a lower constant, so n should not be too big. Just try different values.
>
> Frédéric Bastien
>
> On Thu, Jul 9, 2009 at 7:14 AM, Mag Gam wrote:
>>
The problem is the array is very large. We are talking about 200+ million rows.
On Thu, Jul 9, 2009 at 4:41 AM, David Warde-Farley wrote:
> On 9-Jul-09, at 1:12 AM, Mag Gam wrote:
>
>> Here is what I have, which does it 1x1:
>>
>> z={} #dictionary
>> r=csv.reader(file)
Hey All,
I am reading through a file and trying to store the values into another
array, but instead of storing the values one by one, I would like to store
them in bulk sets for optimization purposes.
Here is what I have, which does it 1x1:

import csv

z = {}  # dictionary keyed by row number
r = csv.reader(open('data.csv'))  # 'data.csv' stands in for the real input file
for i, row in enumerate(r):
    z[i] = row  # store one row at a time ("1x1")
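
One way to store the rows in bulk rather than one at a time, sketched under the assumption that the rows are numeric and that 'data.csv' again stands in for the real file: pull a block of rows with itertools.islice and convert each block to an array in a single call.

import csv
import itertools
import numpy as np

CHUNK = 100000                  # rows per bulk set; tune to your memory budget
blocks = []                     # one 2-D array per chunk
with open('data.csv') as f:     # placeholder file name
    reader = csv.reader(f)
    while True:
        rows = list(itertools.islice(reader, CHUNK))
        if not rows:
            break
        blocks.append(np.array(rows, dtype=float))  # one conversion per chunk, not per row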
Is it possible to use loadtxt in a multi-threaded way? Basically, I want
to process a very large CSV file (100+ million records), and instead of
loading a thousand elements into a buffer, processing them, then loading
another thousand elements and processing those, and so on, I was wondering
if there is a technique which would let me do this in parallel.
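
np.loadtxt itself is single-threaded, but it does accept a list of lines as input, so one hedged workaround (file name and chunk size below are made up) is to feed it the file one slice at a time and hand each resulting block to the processing code:

import itertools
import numpy as np

def iter_blocks(path, rows_per_block=100000, delimiter=','):
    # Yield the CSV as a sequence of 2-D arrays, one block of rows at a time,
    # so the whole file is never held in memory at once.
    with open(path) as f:
        while True:
            lines = list(itertools.islice(f, rows_per_block))
            if not lines:
                break
            yield np.loadtxt(lines, delimiter=delimiter, ndmin=2)

for block in iter_blocks('records.csv'):  # 'records.csv' is a placeholder
    pass  # process each block here (e.g. hand it off to a worker)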
On Fri, Jun 26, 2009 at 7:31 AM, Francesc Alted wrote:
> On Friday 26 June 2009 13:09:13, Mag Gam wrote:
>> I really like the slice by slice idea!
>
> Hmm, after looking at the np.loadtxt() docstrings it seems it works by loading
> the complete file at once, so you shouldn't use it for a file this large.
> On Friday 26 June 2009 12:38:11, Mag Gam wrote:
>> Thanks everyone for the great and well thought out responses!
>>
>> To make matters worse, this is actually a 50gb compressed csv file. So
>> it looks like this, 2009.06.01.plasmasub.csv.gz
>> We get this data from anot
Do you have some sample code for mapping a compressed csv file into
memory and loading the dataset into a dset (hdf5 structure)?
TIA
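
A rough sketch of that workflow, assuming h5py is available and using made-up output/dataset names and an assumed column count: gzip.open gives a file-like object that csv.reader can consume without decompressing the whole file to disk, and each block of rows is appended to a resizable HDF5 dataset.

import csv
import gzip
import itertools
import numpy as np
import h5py

CHUNK = 100000
ncols = 5                                    # assumed number of columns in the CSV

with gzip.open('2009.06.01.plasmasub.csv.gz', 'rt') as gz, \
     h5py.File('plasmasub.h5', 'w') as h5:   # output file name is made up
    dset = h5.create_dataset('data', shape=(0, ncols),
                             maxshape=(None, ncols), dtype='f8',
                             chunks=(CHUNK, ncols))
    reader = csv.reader(gz)
    while True:
        rows = list(itertools.islice(reader, CHUNK))
        if not rows:
            break
        block = np.array(rows, dtype='f8')
        dset.resize(dset.shape[0] + block.shape[0], axis=0)  # grow the dataset
        dset[-block.shape[0]:] = block                       # write this chunk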
On Thu, Jun 25, 2009 at 9:50 PM, Anne Archibald wrote:
> 2009/6/25 Mag Gam :
>> Hello.
>>
>> I am very new to NumPy and Python. We are doing
Hello.
I am very new to NumPy and Python. We are doing some research in our
Physics lab and we need to store massive amounts of data (100GB
daily). I am therefore going to use hdf5 and h5py. The problem is that I
am using np.loadtxt() to create my array and then create a dataset
according to that, and np.loadtxt() reads the whole file into memory,
which will not work for files this size.
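
A sketch of the "slice by slice" idea mentioned elsewhere in the thread, under the assumptions that the total row and column counts are known up front and that 'plasmasub.csv' / 'plasmasub.h5' stand in for the real file names: pre-create the HDF5 dataset, read the text file in blocks with np.loadtxt, and assign each block to the matching slice of the dataset.

import itertools
import numpy as np
import h5py

ROWS, COLS = 200000000, 5      # assumed totals for the full file
CHUNK = 1000000                # rows read and written per iteration

with open('plasmasub.csv') as f, h5py.File('plasmasub.h5', 'w') as h5:
    dset = h5.create_dataset('data', shape=(ROWS, COLS), dtype='f8',
                             chunks=(CHUNK, COLS))
    start = 0
    while True:
        lines = list(itertools.islice(f, CHUNK))
        if not lines:
            break
        block = np.loadtxt(lines, delimiter=',', ndmin=2)
        dset[start:start + block.shape[0]] = block  # write this slice of rows
        start += block.shape[0]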