On Mon, Jan 3, 2011 at 11:26, Eric Firing wrote:
> Instead of calculating statistics independently each time the window is
> advanced one data point, the statistics are updated. I have not done
> any benchmarking, but I expect this approach to be quick.
This might accumulate numerical errors. Bu
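To make the concern concrete, here is a small illustrative sketch (my own code, not from the patch under discussion): updating a float32 moving sum by adding the sample entering the window and subtracting the one leaving it is O(1) per step, but the result drifts away from a freshly computed sum as rounding errors accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20_000).astype(np.float32)
window = 100
n = len(x) - window + 1

# Incremental update: add the entering sample, subtract the leaving one.
# O(1) per step, but rounding errors accumulate over many updates.
s = np.float32(x[:window].sum())
inc = np.empty(n, dtype=np.float32)
inc[0] = s
for i in range(1, n):
    s = np.float32(s + x[i + window - 1] - x[i - 1])
    inc[i] = s

# Reference: recompute each window from scratch in float64.
ref = np.array([x[i:i + window].astype(np.float64).sum() for i in range(n)])
drift = np.abs(inc - ref).max()  # nonzero, and grows with the update count
```

One common remedy is to periodically recompute the sum from scratch, or to use Kahan/compensated summation for the running total.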
On Mon, Jan 3, 2011 at 10:52, Keith Goodman wrote:
> On Mon, Jan 3, 2011 at 7:41 AM, Erik Rigtorp wrote:
>> On Mon, Jan 3, 2011 at 10:36, Keith Goodman wrote:
>>> On Mon, Jan 3, 2011 at 5:37 AM, Erik Rigtorp wrote:
>>>
>>>> It's only a view of the array, no copying is done.
On Mon, Jan 3, 2011 at 10:36, Keith Goodman wrote:
> On Mon, Jan 3, 2011 at 5:37 AM, Erik Rigtorp wrote:
>
>> It's only a view of the array, no copying is done. Some operations
>> like np.std() will copy the array, but that's more of a
>> bug. In genera
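To make the "no copying" point concrete, here is a small check (my example; `np.shares_memory` is a later NumPy addition used here purely for illustration). A window view built with stride tricks reuses the original buffer, so writes to the base array show up in every window that covers the modified element:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(6.0)
# A 4x3 window view built with stride tricks: same buffer, no copy.
w = as_strided(a, shape=(4, 3), strides=(a.strides[0], a.strides[0]))
assert np.shares_memory(a, w)

a[2] = 99.0
# Index 2 of `a` is covered by windows 0, 1, and 2 of `w`.
```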
On Mon, Jan 3, 2011 at 05:13, Sebastian Haase wrote:
> Hi Erik,
> This is really neat ! Do I understand correctly, that you mean by
> "stride tricks", that your rolling_window is _not_ allocating any new
> memory ?
Yes, it's only a view.
> IOW, If I have a large array using 500MB of memory, say
Hi,
Implementing moving average, moving std and other functions working
over rolling windows using Python for loops is slow. This is an
effective stride trick I learned from Keith Goodman's
Bottleneck code but generalized to arrays of
any dimension. This trick allows the loop to be performed in C.
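A minimal version of the trick (my naming and code; the Bottleneck-derived version in the post may differ in details): append a trailing window axis whose stride equals the element stride of the last axis, so that ordinary NumPy reductions over that axis perform the rolling loop in C.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def rolling_window(a, window):
    # Add a new last axis of length `window`; stepping along the
    # second-to-last axis advances one element. No data is copied.
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return as_strided(a, shape=shape, strides=strides)

x = np.arange(10.0)
w = rolling_window(x, 3)
# Reductions over the last axis give moving statistics without a Python loop:
moving_mean = w.mean(axis=-1)
moving_std = w.std(axis=-1)
```

Note that the result is a view with overlapping rows, so writing to it is unsafe, and reductions that internally buffer may still copy.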
On Fri, Dec 31, 2010 at 02:13, Paul Ivanov wrote:
> Erik Rigtorp, on 2010-12-30 21:30, wrote:
>> Hi,
>>
>> I was trying to parallelize some algorithms and needed a writable
>> array shared between processes. It turned out to be quite simple and
>> gave a nice speed up, almost linear in the number of cores.
Hi,
I just sent a pull request for some faster NaN functions,
https://github.com/rigtorp/numpy.
I implemented the following generalized ufuncs: nansum(), nancumsum(),
nanmean(), nanstd(), and for fun mean() and std(). It turns out that
the generalized ufunc mean() and std() are faster than the curr
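For reference, a pure-NumPy sketch of what nanmean() computes along one axis (my illustration, not the generalized-ufunc implementation from the pull request): sum the non-NaN entries and divide by their count.

```python
import numpy as np

def nanmean_ref(a, axis=-1):
    # Treat NaNs as missing: sum the valid entries and divide by their
    # count along `axis`. An all-NaN slice yields a division by zero.
    mask = np.isnan(a)
    total = np.where(mask, 0.0, a).sum(axis=axis)
    count = (~mask).sum(axis=axis)
    return total / count

x = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, 6.0]])
result = nanmean_ref(x)
```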
Hi,
I was trying to parallelize some algorithms and needed a writable
array shared between processes. It turned out to be quite simple and
gave a nice speed up, almost linear in the number of cores. Of course you
need to know what you are doing to avoid segfaults and such. But I
still think something l