Hi,
On Mon, Jun 4, 2012 at 12:44 AM, srean wrote:
> Hi Wolfgang,
>
> I think you are looking for reduceat( ), in particular add.reduceat()
>
Indeed, the OP could utilize add.reduceat(...), like:
# tst.py
import numpy as np

def reduce(data, lengths):
    # truncated in the original; reconstructed assuming back-to-back segments
    starts = np.add.accumulate(lengths) - lengths
    # add.reduceat sums data[starts[i]:starts[i+1]]; the last slice runs to the end
    return np.add.reduceat(data, starts)
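For instance, with made-up data:

data = np.array([1., 2., 3., 4., 5., 6.])
lengths = np.array([2, 4])
reduce(data, lengths)    # -> array([ 3., 18.])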
Hi Wolfgang,
I think you are looking for reduceat( ), in particular add.reduceat()
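For example (made-up numbers), add.reduceat sums the slices between
consecutive indices:

import numpy as np
np.add.reduceat(np.arange(8), [0, 4, 6])    # -> array([ 6,  9, 13])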
-- srean
On Thu, May 31, 2012 at 12:36 AM, Wolfgang Kerzendorf wrote:
> Dear all,
>
> I have an ndarray which consists of many arrays stacked behind each other
> (only conceptually, in truth it's a normal 1d float64 array). [...]
Hi Wolfgang,
I thought maybe there is a trick for your specific operation.
Your array stacking is a simple case of the group-by operation, and
normalization is aggregation followed by an update.
I believe group-by and aggregation are on the NumPy todo-list.
You may have to write a small extension module.
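In the meantime, the aggregate-then-update pattern can be written in pure
NumPy; a sketch with made-up data, assuming the segments are contiguous:

import numpy as np

data = np.array([1., 2., 3., 4., 5., 6.])   # two segments, lengths 2 and 4
lengths = np.array([2, 4])
starts = np.add.accumulate(lengths) - lengths
sums = np.add.reduceat(data, starts)        # aggregation: one sum per group
data /= np.repeat(sums, lengths)            # update: normalize each group in place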
Hey Val,
Well it doesn't matter what I do, but specifically I do factor =
sum(data_array[start_point:start_point+length_data]) and then
data_array[start_point:start_point+length_data] /= factor, and that for every
start_point and length_data.
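Spelled out as a loop (a sketch; the data values here are made up):

import numpy as np

data_array = np.array([1., 2., 3., 4., 5., 6.])
start_points, length_datas = [0, 2], [2, 4]
for start_point, length_data in zip(start_points, length_datas):
    factor = np.sum(data_array[start_point:start_point + length_data])
    data_array[start_point:start_point + length_data] /= factor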
How to do this fast?
Cheers
Wolfgang
What do you mean by "normalized it"?
Could you give the output of your procedure for the sample input data?
Val
On Thu, May 31, 2012 at 12:36 AM, Wolfgang Kerzendorf wrote:
> Dear all,
>
> I have an ndarray which consists of many arrays stacked behind each other
> (only conceptually, in truth it's a normal 1d float64 array). [...]
Dear all,
I have an ndarray which consists of many arrays stacked behind each other (only
conceptually, in truth it's a normal 1d float64 array).
I have a second array which tells me the start of each individual data set in
the 1d float64 array, and another one which tells me its length.
Example:
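A made-up instance of this layout (names and values hypothetical):

import numpy as np

# three conceptual sub-arrays of lengths 2, 3 and 1, stored back-to-back
data = np.array([1., 2., 3., 4., 5., 6.])
start_points = np.array([0, 2, 5])   # start of each data set within data
lengths = np.array([2, 3, 1])        # length of each data set
# data[0:2], data[2:5] and data[5:6] are the individual data sets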