That's a very clever approach. I also found a way using the pandas library
with the groupby function.
points_df = pandas.DataFrame.from_records(buffer)
new_buffer = points_df.groupby(pandas.qcut(points_df.index, resolution**3)).mean()
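For reference, here is a minimal runnable form of those two lines; the demo buffer and resolution below are assumptions, and the plain DataFrame constructor is used since the data is an ordinary (N, 3) float array. Note that pandas.qcut on the positional index splits the rows into equal-sized groups by row order, not by spatial cube:

import numpy as np
import pandas as pd

buffer = np.random.rand(1000, 3)      # assumed demo data: N x 3 points
resolution = 4

points_df = pd.DataFrame(buffer)
# qcut on the index makes resolution**3 equal-sized groups of consecutive
# rows, so the grouping is by row position, not by spatial location
groups = pd.qcut(points_df.index, resolution**3)
new_buffer = points_df.groupby(groups).mean().to_numpy()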
I did the original approach with all of those loops because I need a
Eh. The order of the outputs will be different from that of your code, if
that makes a difference.
On Tue, Apr 5, 2016 at 3:31 PM, Eric Moore wrote:
> def reduce_data(buffer, resolution):
>     thinned_buffer = np.zeros((resolution**3, 3))
>
>     min_xyz = buffer.min(axis=0)
>     max_xyz = buffer.max(axis=0)
def reduce_data(buffer, resolution):
    thinned_buffer = np.zeros((resolution**3, 3))

    min_xyz = buffer.min(axis=0)
    max_xyz = buffer.max(axis=0)
    delta_xyz = max_xyz - min_xyz

    inds_xyz = np.floor(resolution * (buffer - min_xyz) / delta_xyz).astype(int)
    # handle values right at the max
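The function is cut off here in the archive; what follows is my own hedged sketch of one way the binning-and-averaging step could be completed, not Eric's original continuation.

import numpy as np

def reduce_data(buffer, resolution):
    # Average the points of an (N, 3) array inside resolution**3 cubes.
    min_xyz = buffer.min(axis=0)
    max_xyz = buffer.max(axis=0)
    delta_xyz = max_xyz - min_xyz

    inds_xyz = np.floor(resolution * (buffer - min_xyz) / delta_xyz).astype(int)
    # points sitting exactly at the max would land in bin == resolution
    inds_xyz[inds_xyz == resolution] = resolution - 1

    # flatten the per-axis bin indices into one integer bin id per point
    flat = (inds_xyz[:, 0] * resolution + inds_xyz[:, 1]) * resolution + inds_xyz[:, 2]

    # accumulate coordinate sums and point counts per cube in one pass
    sums = np.zeros((resolution**3, 3))
    np.add.at(sums, flat, buffer)
    counts = np.bincount(flat, minlength=resolution**3)

    # return the mean point of each occupied cube
    occupied = counts > 0
    return sums[occupied] / counts[occupied][:, None]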
This wasn't intended to be a histogram, but you're right that it would be
much better if I can just go through each point once and bin the results.
That makes more sense, thanks!
The points are indeed arbitrarily spaced, and yes, I have heard tell of using
spatial indices for this sort of problem, and it looks like that would be
the best bet for me. Thanks for the other suggestions as well!
On Tue, 2016-04-05 at 20:19 +0200, Sebastian Berg wrote:
> On Tue, 2016-04-05 at 09:48 -0700, mpc wrote:
> > The idea is that I want to thin a large 2D buffer of x,y,z points to
> > a given resolution by dividing the data into equal sized "cubes"
> > (i.e. resolution is number of cubes along each axis) and averaging
> > the points inside each cube (if any).
On Tue, 2016-04-05 at 09:48 -0700, mpc wrote:
> The idea is that I want to thin a large 2D buffer of x,y,z points to
> a given resolution by dividing the data into equal sized "cubes"
> (i.e. resolution is number of cubes along each axis) and averaging
> the points inside each cube (if any).
>
On Tue, Apr 5, 2016 at 9:48 AM, mpc wrote:
> The idea is that I want to thin a large 2D buffer of x,y,z points to
> a given resolution by dividing the data into equal sized "cubes"
> (i.e. resolution is number of cubes along each axis) and averaging
> the points inside each cube (if any).
>
On Apr 5, 2016 9:39 AM, "mpc" wrote:
>
> This is the reason I'm doing this in the first place, because I made a
> pure python version but it runs really slow for larger data sets, so I'm
> basically rewriting the same function but using the Python and Numpy C
> API, but if you're saying it won't r
You might do better using scipy.spatial. It has very useful data structures
for handling spatial coordinates. I am not exactly sure how to use them for
this specific problem (not a domain expert), but I would imagine that the
QHull wrappers there might give you some useful tools.
Ben Root
On Tue,
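Not a solution to the thinning problem itself, but a hedged illustration of the kind of spatial structure scipy.spatial provides; the point cloud here is made-up demo data.

import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(1000, 3)          # assumed demo point cloud
tree = cKDTree(points)

# e.g. find all points within radius 0.1 of the first point
neighbors = tree.query_ball_point(points[0], r=0.1)
print(len(neighbors), "points within 0.1 of the first point")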
The idea is that I want to thin a large 2D buffer of x,y,z points to a given
resolution by dividing the data into equal sized "cubes" (i.e. resolution is
number of cubes along each axis) and averaging the points inside each cube
(if any).
# Fill up buffer data for demonstration purposes with
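The message is cut off above; below is a hedged guess, not the poster's actual code, at the demo setup plus a straightforward pure-Python version of the cube-averaging idea. All names are illustrative.

import numpy as np
from collections import defaultdict

# fill up buffer data for demonstration purposes with random points
n_points = 100000
buffer = np.random.uniform(-10.0, 10.0, size=(n_points, 3))

def reduce_data_python(buffer, resolution):
    min_xyz = buffer.min(axis=0)
    delta_xyz = buffer.max(axis=0) - min_xyz
    cubes = defaultdict(list)
    for point in buffer:                        # the slow Python-level loop
        ind = np.floor(resolution * (point - min_xyz) / delta_xyz).astype(int)
        ind = np.minimum(ind, resolution - 1)   # clamp points on the max edge
        cubes[tuple(ind)].append(point)
    # average the points inside each occupied cube
    return np.array([np.mean(pts, axis=0) for pts in cubes.values()])

thinned = reduce_data_python(buffer, resolution=10)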
On 05.04.2016 13:24, Antoine Pitrou wrote:
> On Tue, 5 Apr 2016 08:39:39 -0700 (MST)
> mpc wrote:
>> This is the reason I'm doing this in the first place, because I made a pure
>> python version but it runs really slow for larger data sets, so I'm
>> basically rewriting the same function but using
On Tue, 5 Apr 2016 08:39:39 -0700 (MST)
mpc wrote:
> This is the reason I'm doing this in the first place, because I made a pure
> python version but it runs really slow for larger data sets, so I'm
> basically rewriting the same function but using the Python and Numpy C API,
> but if you're sayin
It's difficult to say why your code is slow without seeing it, i.e. are you
generating large temporaries? Or doing loops in Python that can be pushed
down to C via vectorizing? It may or may not be necessary to leave Python
to get things to run fast enough.
-Eric
On Tue, Apr 5, 2016 at 11:39 AM
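As a generic illustration of "pushing loops down to C via vectorizing" (not code from the thread): both computations below give equivalent results, but the second never loops at the Python level.

import numpy as np

x = np.random.rand(1_000_000)

# Python-level loop: one interpreter round trip per element
total = 0.0
for v in x:
    total += v * v

# vectorized: the loop runs inside NumPy's compiled code
total_vec = np.dot(x, x)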
This is the reason I'm doing this in the first place, because I made a pure
python version but it runs really slow for larger data sets, so I'm
basically rewriting the same function but using the Python and Numpy C API,
but if you're saying it won't run any faster, then maybe I'm going at it the
wrong way.
On Apr 4, 2016 1:58 PM, "mpc" wrote:
>
> Thanks for responding.
>
> It looks like you made/found these yourself since I can't find anything
> like this in the API. I can't believe it isn't; it's so convenient!
>
> By the way, from what I understand, the ':' is represented as
> PySlice_New(NULL, NULL, NULL)
I think that I do, since I intend to do array-specific operations on the
resulting column of data, e.g.:

PyArray_Min
PyArray_Max

which require a PyArrayObject argument.

I also plan to use PyArray_Where to find individual point locations in
data columns x,y,z within a 3D range, but it doesn't
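As a Python-level sketch of the range query described above: np.where is the Python counterpart of PyArray_Where. The array and bounds here are assumptions for illustration.

import numpy as np

buffer = np.random.rand(1000, 3)             # assumed x, y, z points
lo = np.array([0.2, 0.2, 0.2])
hi = np.array([0.8, 0.8, 0.8])

inside = np.all((buffer >= lo) & (buffer <= hi), axis=1)
idx = np.where(inside)[0]                    # indices of points in the 3D range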
Yes, PySlice_New(NULL, NULL, NULL) is the same as ':'. Depending on what
exactly you want to do with the column once you've extracted it, this may
not be the best way to do it. Are you absolutely certain that you actually
need a PyArrayObject that points to the column?
Eric
On Mon, Apr 4, 2016
Thanks for responding.

It looks like you made/found these yourself since I can't find anything like
this in the API. I can't believe it isn't; it's so convenient!

By the way, from what I understand, the ':' is represented as
PySlice_New(NULL, NULL, NULL) in the C API when accessing by index,
correct?
T
/* obj[ind] */
PyObject* DoIndex(PyObject* obj, int ind)
{
    PyObject *oind, *ret;

    oind = PyLong_FromLong(ind);
    if (!oind) {
        return NULL;
    }
    ret = PyObject_GetItem(obj, oind);
    Py_DECREF(oind);
    return ret;
}

/* obj[inds[0], inds[1], ... inds[n_ind-1]] */
PyObject* D
Hello,
is there a C-API function for numpy that can implement Python's
multidimensional indexing?
For example, if I had a 2d array:
PyArrayObject * M;
and an index
int i;
how do I extract the i-th row M[i,:] or i-th column M[:,i]?
Ideally it would be great if it returned another
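For reference, a Python-level sketch of what the C calls in this thread build: M[i, :] is M[(i, slice(None))], and slice(None) is what PySlice_New(NULL, NULL, NULL) creates at the C level.

import numpy as np

M = np.arange(12).reshape(3, 4)

row_i = M[(1, slice(None))]      # same as M[1, :]
col_j = M[(slice(None), 2)]      # same as M[:, 2]

assert (row_i == M[1, :]).all()
assert (col_j == M[:, 2]).all()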