Pauli Virtanen :
> On Thu, 11 Nov 2010 00:43:32 +0100, LittleBigBrain wrote:
>
>> I am wondering, is numpy.convolve based on a LAPACK routine? Can it be
>> sped up by using ATLAS?
>>
>
> LAPACK and Atlas do not AFAIK have convolution routines -- that's not
> linear algebra. MKL on the other
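For reference, `numpy.convolve` computes the discrete linear convolution directly in C; it does not call into BLAS/LAPACK, so ATLAS cannot accelerate it (for large inputs an FFT-based routine such as `scipy.signal.fftconvolve` is the usual speedup). A minimal sketch:

```python
import numpy as np

# np.convolve loops over the products directly; no BLAS involved.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])
full = np.convolve(a, b)              # length len(a) + len(b) - 1
same = np.convolve(a, b, mode="same") # central part, same length as a
```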
Matthew Brett :
> Hi,
>
> On Mon, Nov 8, 2010 at 10:34 AM, Pauli Virtanen wrote:
>
>> Mon, 08 Nov 2010 19:31:31 +0100, Pauli Virtanen wrote:
>>
>>
>>> Mon, 2010-11-08 at 18:56 +0100, LittleBigBrain wrote:
>>>
In my system '<' is the native byte-order, but unless I change
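A short sketch of how byte order shows up on NumPy dtypes ('<' is little-endian, '>' big-endian, '=' native), assuming a common little-endian host:

```python
import numpy as np

a = np.arange(3, dtype='<i4')            # explicitly little-endian int32
b = a.astype(a.dtype.newbyteorder('>'))  # byte-swapped big-endian copy
# Comparisons are value-based, so the swap is invisible to ==
assert np.array_equal(a, b)
```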
Hi everyone,
I believe the overwrite option is meant to reduce memory usage. But I
ran the following test and found that it does not seem to work at all.
Maybe I misunderstood the purpose of the overwrite option. If anybody
could explain this, I would highly appreciate your help.
>>> a=npy.random.randn(20,20)
>>>
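One likely explanation, as I understand it: in `scipy.linalg` the `overwrite_a` flag is only *permission* to reuse the input buffer, and it is silently ignored whenever the routine must copy anyway (wrong dtype, non-Fortran memory layout). A numpy-only sketch of genuinely in-place work uses the `out=` parameter of ufuncs instead:

```python
import numpy as np

a = np.random.randn(20, 20)
buf = a.copy()
# out=buf writes the result into the existing buffer; no new array
res = np.add(buf, 1.0, out=buf)
assert res is buf  # same object, no extra allocation
```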
Hi Everyone,
I am trying out sparse matrices these days. I am wondering, is there any
way I can access a sparse matrix with a flattened index?
For example:
>>> a=numpy.matrix([[0,1,2],[3,4,5]])
>>> a
matrix([[0, 1, 2],
        [3, 4, 5]])
>>> print a.flat[3]
3
>>> a.flat[3]=10
>>> print a
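One way around the missing `.flat` on sparse formats: for a 2-D matrix, flat index `i` maps to `(i // ncols, i % ncols)` in C order, and `np.unravel_index` does that conversion for you, so the same indexing works on anything that accepts `[row, col]`. A sketch with a dense array standing in:

```python
import numpy as np

a = np.array([[0, 1, 2], [3, 4, 5]])
row, col = np.unravel_index(3, a.shape)  # flat index 3 -> (1, 0)
assert a[row, col] == 3
a[row, col] = 10                         # same effect as a.flat[3] = 10
```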
braingateway :
>>> aa=matrix([[-1, 2, 0],[0, 0, 3]])
>>> aa
matrix([[-1, 2, 0],
        [ 0, 0, 3]])
>>> aa.nonzero()
(matrix([[0, 0, 1]], dtype=int64), matrix([[0, 1, 2]], dtype=int64))
*OK*
>>> npy.nonzero(aa.flat)
(array([0, 1, 5], dtype=int64),)
*OK*
>>> flatnonzero(aa)
matrix([[0, 0,
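The odd `flatnonzero` result comes from `np.matrix.ravel()` staying 2-D, so the internal `ravel().nonzero()` step returns row indices of an 1xN matrix rather than flat positions. Converting to a plain ndarray first gives the expected flat indices:

```python
import numpy as np

aa = np.array([[-1, 2, 0], [0, 0, 3]])  # ndarray, not np.matrix
idx = np.flatnonzero(aa)                # flat indices of nonzeros, C order
```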
Vincent Schut :
> Hi, I'm running into this strange issue when using some pretty large
> float32 arrays. In the following code I create a large array filled with
> ones, and calculate mean and sum, first with a float64 version, then
> with a float32 version. Note the difference between the two. NB
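The mean/sum difference is most likely accumulator precision: reductions over a float32 array accumulate in float32 by default, and the `dtype=` argument requests a wider accumulator. A sketch (the array size here is made up for illustration):

```python
import numpy as np

a = np.full(10_000_000, 0.1, dtype=np.float32)
m32 = a.mean()                   # accumulated in float32
m64 = a.mean(dtype=np.float64)   # wider accumulator, more accurate
# Note float32 cannot represent 0.1 exactly, so even m64 differs
# slightly from 0.1; the accumulation error is a separate effect.
```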
Charles R Harris :
> On Sat, Oct 23, 2010 at 10:27 AM, braingateway wrote:
Charles R Harris :
> On Sat, Oct 23, 2010 at 10:15 AM, Charles R Harris wrote:
>> On Sat, Oct 23, 2010 at 9:44 AM, braingateway wrote:
David Cournapeau :
2010/10/23 braingateway :
Hi everyone,
I noticed that numpy.memmap uses RAM to buffer data from memmap files.
If I have a 100GB array in a memmap file and process it block by block,
the RAM usage keeps increasing as the process runs until there is no
available space in RAM (4GB), even though the block size is only
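For what it's worth, the memory reported for a memmap is mostly the OS page cache, which the kernel reclaims under pressure; what the process itself can control is dropping its own copies promptly. A small sketch of block-wise processing (file path and shape are stand-ins for the 100GB case):

```python
import numpy as np
import os
import tempfile

# Create a small memmap-backed file to stand in for the 100GB array.
path = os.path.join(tempfile.mkdtemp(), "data.dat")
shape = (1000, 100)
mm = np.memmap(path, dtype=np.float32, mode="w+", shape=shape)
mm[:] = 1.0
mm.flush()
del mm  # close the writable view

total = 0.0
block = 100
mm = np.memmap(path, dtype=np.float32, mode="r", shape=shape)
for start in range(0, shape[0], block):
    chunk = np.array(mm[start:start + block])  # copy one block into RAM
    total += chunk.sum()
    del chunk  # drop the copy so it can be freed immediately
del mm  # release the mapping; cached pages are reclaimed by the OS
```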
2010/7/28 Ken Watford :
> 2010/7/28 脑关生命科学仪器 :
>> it seems like PyTables only supports HDF5. I have some 500GB numerical
>> arrays to process. PyTables claims to have some advanced features to
>> enhance processing speed and greatly reduce physical memory
>> requirements. However, I do not want to touch the
it seems like PyTables only supports HDF5. I have some 500GB numerical
arrays to process. PyTables claims to have some advanced features to
enhance processing speed and greatly reduce physical memory requirements.
However, I do not want to touch the raw data I have, simply because I do
not have doubled disks
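If duplicating the raw files into HDF5 is the concern, one numpy-only alternative (not PyTables) is a read-only memmap over the existing binary file: nothing is copied, and writes to the raw data are rejected. A sketch with a made-up stand-in file:

```python
import numpy as np
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "raw.dat")
np.arange(10, dtype=np.int64).tofile(path)  # stand-in for the raw file

mm = np.memmap(path, dtype=np.int64, mode="r")  # read-only, no copy
total = int(mm.sum())
try:
    mm[0] = 99  # write attempt is rejected; raw data stays untouched
except ValueError:
    pass
```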