This would certainly be useful in my case as well. I originally tried doing
something similar:
fun = lambda x: (x.min(), x.max())
apply_along_axis(fun, -1, val_pts)
It turned out to be much slower, which I guess isn't too surprising.
Brad
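For anyone following along, here is a small sketch contrasting the two approaches (the array name, input length, and chunk size n = 100 are assumptions based on the code earlier in the thread). The gap Brad saw is expected: np.apply_along_axis invokes the Python lambda once per chunk, while the reshape-based reductions run entirely in C.

```python
import numpy as np

n = 100                            # chunk size (assumed)
data = np.random.rand(100_000)

# Trim so the length is a multiple of n, then view as rows of n samples.
offset = data.size % n
chunks = data[offset:].reshape(-1, n)

# Vectorized: one C-level reduction per axis.
mins = chunks.min(axis=-1)
maxs = chunks.max(axis=-1)

# apply_along_axis: calls the Python lambda once per chunk -- much slower.
fun = lambda x: (x.min(), x.max())
pairs = np.apply_along_axis(fun, -1, chunks)   # shape (n_chunks, 2)

assert np.allclose(pairs[:, 0], mins)
assert np.allclose(pairs[:, 1], maxs)
```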
On Sat, Jun 19, 2010 at 4:45 PM, Warren Weckesser wrote:
Hmmm, if I force the reshaped array to be copied, it speeds up the min/max
and makes the overall operation a bit faster (times are below, generated
using line profiler with kernprof.py). I'd certainly like to get rid of
this copy() operation if possible. Is there any way to avoid it?
Brad
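A quick way to check whether the reshape is actually copying in a given setup is np.shares_memory (the names here are illustrative). For a plain 1-D slice, reshape normally returns a view, so if an explicit copy() speeds things up, the gain likely comes from something other than avoiding a reshape-time copy, and is worth verifying directly:

```python
import numpy as np

data = np.random.rand(100_000)
offset = data.size % 100

view = data[offset:].reshape(-1, 100)
copied = data[offset:].reshape(-1, 100).copy()

# For basic 1-D slicing, the reshape is normally a view, not a copy:
print(np.shares_memory(view, data))    # True
print(np.shares_memory(copied, data))  # False
print(view.flags['C_CONTIGUOUS'], copied.flags['C_CONTIGUOUS'])
```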
On Thu, Jun 17, 2010 at 4:50 PM, Brad Buran wrote:
> I have a 1D array with >100k samples that I would like to reduce by
> computing the min/max of each "chunk" of n samples. Right now, my
> code is as follows:
>
> n = 100
> offset = array.size % n
> array_min = array[offset:].reshape((-1, n)).min(-1)
> array_max = array[offset:].reshape((-1, n)).max(-1)
On Mon, Jun 21, 2010 at 10:40 AM, Neil Crighton wrote:
> Warren Weckesser writes:
>
>>
>> Benjamin Root wrote:
>> > Brad, I think you are doing it the right way, but I think what is
>> > happening is that the reshape() call on the sliced array is forcing a
>> > copy to be made first.
Warren Weckesser writes:
>
> Benjamin Root wrote:
> > Brad, I think you are doing it the right way, but I think what is
> > happening is that the reshape() call on the sliced array is forcing a
> > copy to be made first. The fact that the copy has to be made twice
> > just worsens the issue.
Benjamin Root wrote:
> Brad, I think you are doing it the right way, but I think what is
> happening is that the reshape() call on the sliced array is forcing a
> copy to be made first. The fact that the copy has to be made twice
> just worsens the issue. I would save a copy of the reshape result
On Sat, Jun 19, 2010 at 15:37, Benjamin Root wrote:
> On that note, would it be a bad idea to have a function that returns a
> min/max tuple? A single iteration gathering both at the same time, rather
> than two separate iterations for the min and the max, would be useful.
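NumPy has no built-in reduction that returns both the min and the max in one pass; a rough pure-NumPy sketch of the idea (the minmax function and blocksize parameter are illustrative, not an existing API) could look like:

```python
import numpy as np

def minmax(a, blocksize=4096):
    """Return (min, max) of 1-D array `a` in one sweep over blocks.

    Illustrative only: a real single-pass C implementation would be
    faster; this just avoids reading the data twice from main memory.
    """
    lo, hi = np.inf, -np.inf
    for start in range(0, a.size, blocksize):
        block = a[start:start + blocksize]
        lo = min(lo, block.min())
        hi = max(hi, block.max())
    return lo, hi

a = np.random.rand(100_000)
assert minmax(a) == (a.min(), a.max())
```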
Brad, I think you are doing it the right way, but I think what is happening
is that the reshape() call on the sliced array is forcing a copy to be made
first. The fact that the copy has to be made twice just worsens the issue.
I would save a copy of the reshape result (it is usually a view of the
I have a 1D array with >100k samples that I would like to reduce by
computing the min/max of each "chunk" of n samples. Right now, my
code is as follows:
n = 100
offset = array.size % n
array_min = array[offset:].reshape((-1, n)).min(-1)
array_max = array[offset:].reshape((-1, n)).max(-1)
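Putting the snippet above into a self-contained, runnable form (the input length and variable names are assumptions; leading samples that do not fill a whole chunk are dropped by the offset slice):

```python
import numpy as np

n = 100
data = np.random.rand(123_456)          # length deliberately not a multiple of n

offset = data.size % n                  # leading samples that don't fill a chunk
chunks = data[offset:].reshape(-1, n)   # drop them, then view as (n_chunks, n)

array_min = chunks.min(axis=-1)
array_max = chunks.max(axis=-1)

print(offset, array_min.shape)          # 56 (1234,)
```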