>
> In [4]: timeit get_slices_slow(data)
> 100 loops, best of 3: 3.51 ms per loop
>
> In [5]: timeit get_slices_fast(data)
> 1000 loops, best of 3: 1.76 ms per loop
>
> In [6]: timeit get_slices_faster(data)
> 1 loops, best of 3: 116 us per loop
>
So using the fast bincount and array indexing approach is roughly 30x faster
than the original loop-based version (3.51 ms down to 116 us).
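
The bodies of those functions aren't quoted in the digest; the sketch below is
one guess at what the loop version and the bincount version could look like,
assuming the rows are sorted by a non-negative integer key column. The function
names come from the timings above; everything else is an assumption.

import numpy as np

def get_slices_slow(data, col=0):
    # Naive version: walk the key column in Python and record the
    # (start, stop) row range of every run of equal values.
    slices = []
    start = 0
    for i in range(1, len(data)):
        if data[i, col] != data[start, col]:
            slices.append((start, i))
            start = i
    slices.append((start, len(data)))
    return np.array(slices)

def get_slices_faster(data, col=0):
    # bincount counts the rows per key value in one C-level pass; a
    # cumulative sum of the non-empty counts gives each run's end offset,
    # and the start offsets follow by subtraction.
    counts = np.bincount(data[:, col])
    counts = counts[counts > 0]
    ends = np.cumsum(counts)
    starts = ends - counts
    return np.column_stack((starts, ends))

On sorted integer keys both return the same (start, stop) pairs; the speedup
comes from replacing the per-row Python loop with two vectorized passes.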
Is there a more efficient way to calculate the "slices" array below?
import numpy
import numpy.random
# In reality, this is between 1 and 50.
DIMENSIONS = 20
# In my real app, I have 100...1M data rows.
ROWS = 1000
DATA = numpy.random.random_integers(0,100,(ROWS,DIMENSIONS))
# This is between 0
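
The definition of the slices array is cut off above. Assuming it holds the
(start, stop) row range for each distinct value in a key column,
numpy.searchsorted can compute it without a Python loop; the key column choice
and the variable names here are assumptions (randint with an exclusive upper
bound of 101 stands in for the inclusive random_integers call above):

import numpy

DIMENSIONS = 20
ROWS = 1000
DATA = numpy.random.randint(0, 101, (ROWS, DIMENSIONS))
SLICE_BY = 0  # assumed key column

# Sort rows by the key column, then let searchsorted find where each
# distinct key's block begins and ends in the sorted order.
DATA = DATA[DATA[:, SLICE_BY].argsort()]
key = DATA[:, SLICE_BY]
values = numpy.unique(key)
starts = numpy.searchsorted(key, values, side='left')
ends = numpy.searchsorted(key, values, side='right')
slices = numpy.column_stack((starts, ends))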
Hi All,
I have this example program:
import numpy as np
import numpy.random as rnd

def dim_weight(X):
    weights = X[0]
    volumes = X[1]*X[2]*X[3]
    res = np.empty(len(volumes), dtype=np.double)
    for i, v in enumerate(volumes):
        if v > 5184:
            # dimensional weight applies above 5184 cubic units
            res[i] = v/194.0
        else:
            res[i] = weights[i]
    return res
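
Assuming the cut-off question is how to speed this up, the usual numpy move is
to replace the per-row branch with numpy.where, which evaluates the condition
elementwise (dim_weight_vec is a name invented here for the sketch):

def dim_weight_vec(X):
    # Elementwise: take volume/194 where the volume exceeds 5184,
    # and the actual weight everywhere else.
    weights = X[0]
    volumes = X[1]*X[2]*X[3]
    return np.where(volumes > 5184, volumes/194.0, weights)

# Quick check against the loop version defined above, on random data:
X = rnd.uniform(1.0, 100.0, (4, 1000))
assert np.allclose(dim_weight(X), dim_weight_vec(X))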
Given an array with two axes, sorted by a column 'SLICE_BY', how can I
extract slice indexes for rows with the same 'SLICE_BY' value?
Here is an example program, demonstrating the problem:
from numpy import *

a = random.randint(0, 100, (20, 4))
SLICE_BY = 0  # Make slices of array 'a' by column SLICE_BY
a = a[a[:, SLICE_BY].argsort()]  # the question assumes 'a' is sorted by this column
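
The message breaks off here. One way to finish the example, assuming the goal
is a (start, stop) pair per distinct SLICE_BY value, is to compare each key
with its neighbour, since block boundaries sit exactly where consecutive
sorted keys differ:

from numpy import *

# setup as in the snippet above
a = random.randint(0, 100, (20, 4))
SLICE_BY = 0
a = a[a[:, SLICE_BY].argsort()]

# A boundary sits wherever the sorted key changes between adjacent rows;
# flatnonzero finds those positions without a Python loop.
key = a[:, SLICE_BY]
breaks = flatnonzero(key[1:] != key[:-1]) + 1
starts = concatenate(([0], breaks))
ends = concatenate((breaks, [len(key)]))
slices = column_stack((starts, ends))

# Each a[s:e] block now shares a single SLICE_BY value:
blocks = [a[s:e] for s, e in zip(starts, ends)]

Unlike the bincount approach, this neighbour-comparison version never indexes
by key value, so it also works when the keys aren't small non-negative
integers.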