I'm sorry I don't have any example code for you, but if the problem
doesn't fit in memory you probably need to break it down:
http://en.wikipedia.org/wiki/Overlap-add_method
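The overlap-add idea can be sketched in a few lines (illustrative only: the function name and default block size are made up, real-valued 1-D inputs are assumed, and the result should match np.convolve(x, h) in 'full' mode):

```python
import numpy as np

def overlap_add_convolve(x, h, block_size=2**16):
    """Convolve a long signal x with a shorter kernel h by overlap-add:
    FFT-convolve x block by block, so only one block (plus the kernel)
    has to be transformed at a time."""
    n = len(h)
    fft_size = block_size + n - 1          # linear-convolution length per block
    H = np.fft.rfft(h, fft_size)           # kernel spectrum, computed once
    y = np.zeros(len(x) + n - 1)
    for start in range(0, len(x), block_size):
        block = x[start:start + block_size]
        seg = np.fft.irfft(np.fft.rfft(block, fft_size) * H, fft_size)
        # each block contributes len(block) + n - 1 samples; the tails
        # of consecutive blocks overlap and are summed
        y[start:start + len(block) + n - 1] += seg[:len(block) + n - 1]
    return y
```

For a 250 M-element array this keeps every FFT small, at the price of a single pass over the data.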

Jonathan

On Tue, Feb 15, 2011 at 10:27 AM,  <josef.p...@gmail.com> wrote:
> On Tue, Feb 15, 2011 at 11:42 AM, Davide Cittaro
> <davide.citt...@ifom-ieo-campus.it> wrote:
>> Hi all,
>> I have to work with huge numpy arrays (up to 250 M elements) and I have to
>> perform either np.correlate or np.convolve on pairs of them.
>> The process only works on big-memory machines and it takes ages. I'm
>> writing to get some hints on how to speed things up (at the cost of some
>> precision, maybe...), possibly using a moving window. Is it correct to perform this:
>>
>> - given a window W and a step size S
>> - given data1 and data2
>> - pad data1 and data2 with W/2 zeros
>> - get the np.correlation like
>>
>> y = np.array([np.correlate(data1[x:x+W], data2[x:x+W], mode='valid')
>>               for x in np.arange(0, len(data1) - W, S)]).ravel()
>>
>> instead of np.correlate(data1, data2, mode='same')
>
> If data2 is of similar length to data1, then you should use
> scipy.signal.fftconvolve, which is much faster for long arrays than
> np.correlate or np.convolve. I'm not sure about the rest.
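For example (a sketch: correlating with data2 is equivalent to convolving with data2 reversed, with an extra conjugate for complex data):

```python
import numpy as np
from scipy.signal import fftconvolve

data1 = np.random.rand(4096)
data2 = np.random.rand(4096)

# FFT-based cross-correlation: reverse the second input (and conjugate
# it, if complex), then convolve.  Matches np.correlate(..., 'full')
# up to floating-point error, but in O(N log N) instead of O(N*M).
corr = fftconvolve(data1, data2[::-1], mode='full')
```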
>
> Josef
>
>
>> - interpolate the correlation using scipy.interpolate
>> ?
>>
>> Thanks
>> d
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>