On Tue, Apr 5, 2016 at 7:11 PM, Todd wrote:
> When you try to transpose a 1D array, it does nothing. This is the correct
> behavior, since transposing a 1D array is meaningless. However, this can
> often lead to unexpected errors since this is rarely what you want. You can
> convert the array to 2D, using `np.atleast_2d` or `arr[None]` [...]
Todd,
Would you consider a 1D array to be a row vector or a column vector for the
purposes of transposition? The "correct" answer is not clear to me.
Juan.
On Wed, Apr 6, 2016 at 12:26 PM, Alan Isaac wrote:
> On 4/5/2016 10:11 PM, Todd wrote:
>
>> When you try to transpose a 1D array, it does nothing. [...]
On 4/5/2016 10:11 PM, Todd wrote:
When you try to transpose a 1D array, it does nothing. This is the
correct behavior, since transposing a 1D array is meaningless.
However, this can often lead to unexpected errors since this is rarely
what you want. You can convert the array to 2D, using `np.atleast_2d`
or `arr[None]` [...]
When you try to transpose a 1D array, it does nothing. This is the correct
behavior, since transposing a 1D array is meaningless. However, this
can often lead to unexpected errors since this is rarely what you want.
You can convert the array to 2D, using `np.atleast_2d` or `arr[None]`, but
this [...]
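For anyone following along, a minimal illustration of the behavior Todd
describes (the array name is mine):

import numpy as np

a = np.arange(3)

a.T.shape                 # (3,)   -- .T on a 1D array is a no-op
np.atleast_2d(a).shape    # (1, 3) -- promoted to a row vector
a[None].shape             # (1, 3) -- same, via a new leading axis
a[:, None].shape          # (3, 1) -- a column vector instead
a[:, None].T.shape        # (1, 3) -- now .T actually swaps axes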
That's a very clever approach. I also found a way using the pandas library
with the groupby function.
import pandas

points_df = pandas.DataFrame.from_records(buffer)
# chunk the points by index into resolution**3 equal groups and average each
new_buffer = points_df.groupby(pandas.qcut(points_df.index, resolution**3)).mean()

I did the original approach with all of those loops because I need a [...]
Eh. The order of the outputs will be different from your code's, if that
makes a difference.
On Tue, Apr 5, 2016 at 3:31 PM, Eric Moore wrote:
> def reduce_data(buffer, resolution):
>     thinned_buffer = np.zeros((resolution**3, 3))
>
>     min_xyz = buffer.min(axis=0)
>     max_xyz = buffer.max(axis=0)
> [...]
def reduce_data(buffer, resolution):
    thinned_buffer = np.zeros((resolution**3, 3))

    min_xyz = buffer.min(axis=0)
    max_xyz = buffer.max(axis=0)
    delta_xyz = max_xyz - min_xyz

    inds_xyz = np.floor(resolution * (buffer - min_xyz) / delta_xyz).astype(int)
    # handle values right at the max [...]
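The rest of Eric's function is cut off in the archive. A plausible
completion of the bin-and-average step, purely my reconstruction (the
`counts`/`flat_inds` names and the np.add.at accumulation are my choices,
not necessarily Eric's):

import numpy as np

def reduce_data(buffer, resolution):
    thinned_buffer = np.zeros((resolution**3, 3))
    counts = np.zeros(resolution**3)

    min_xyz = buffer.min(axis=0)
    max_xyz = buffer.max(axis=0)
    delta_xyz = max_xyz - min_xyz

    inds_xyz = np.floor(resolution * (buffer - min_xyz) / delta_xyz).astype(int)
    # points sitting exactly at the max get index == resolution; clip them
    inds_xyz[inds_xyz == resolution] = resolution - 1

    # collapse each point's 3D cube index into one flat bin number
    flat_inds = np.ravel_multi_index(inds_xyz.T, (resolution,) * 3)

    # accumulate per-cube coordinate sums and point counts, then average
    np.add.at(thinned_buffer, flat_inds, buffer)
    np.add.at(counts, flat_inds, 1)
    occupied = counts > 0
    thinned_buffer[occupied] /= counts[occupied][:, None]
    return thinned_buffer[occupied]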
This wasn't intended to be a histogram, but you're right that it would be
much better if I can just go through each point once and bin the results.
That makes more sense, thanks!
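For concreteness, the one-pass binning can also be done with
np.histogramdd; this sketch is mine, not from the thread, and assumes the
points sit in an (n, 3) array:

import numpy as np

def bin_means(points, resolution):
    # shared bin edges so the counts and coordinate sums use the same cubes
    edges = [np.linspace(points[:, i].min(), points[:, i].max(),
                         resolution + 1) for i in range(3)]
    counts, _ = np.histogramdd(points, bins=edges)
    # per-cube sum of each coordinate, via the weights argument
    sums = [np.histogramdd(points, bins=edges, weights=points[:, i])[0]
            for i in range(3)]
    occupied = counts > 0
    # mean position of the points inside each occupied cube
    return np.stack([s[occupied] / counts[occupied] for s in sums], axis=-1)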
The points are indeed arbitrarily spaced, and yes, I have heard tell of using
spatial indices for this sort of problem; it looks like that would be
the best bet for me. Thanks for the other suggestions as well!
On Tue, 2016-04-05 at 20:19 +0200, Sebastian Berg wrote:
> On Tue, 2016-04-05 at 09:48 -0700, mpc wrote:
> > The idea is that I want to thin a large 2D buffer of x,y,z points to
> > a given resolution by dividing the data into equal-sized "cubes"
> > (i.e. resolution is the number of cubes along each axis) [...]
On Tue, 2016-04-05 at 09:48 -0700, mpc wrote:
> The idea is that I want to thin a large 2D buffer of x,y,z points to
> a given resolution by dividing the data into equal-sized "cubes"
> (i.e. resolution is the number of cubes along each axis) and averaging
> the points inside each cube (if any).
>
On Tue, Apr 5, 2016 at 9:48 AM, mpc wrote:
> The idea is that I want to thin a large 2D buffer of x,y,z points to a
> given resolution by dividing the data into equal-sized "cubes" (i.e.
> resolution is the number of cubes along each axis) and averaging the
> points inside each cube (if any).
>
On Apr 5, 2016 9:39 AM, "mpc" wrote:
>
> This is the reason I'm doing this in the first place, because I made a pure
> Python version but it runs really slow for larger data sets, so I'm
> basically rewriting the same function but using the Python and Numpy C API,
> but if you're saying it won't run any faster [...]
You might do better using scipy.spatial. It has very useful data structures
for handling spatial coordinates. I am not exactly sure how to use them for
this specific problem (not a domain expert), but I would imagine that the
Qhull wrappers there might give you some useful tools.
Ben Root
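As a concrete starting point (my sketch, not from Ben's message),
scipy.spatial.cKDTree indexes the points so that neighbor queries become
cheap:

import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(100000, 3)
tree = cKDTree(points)

# indices of every point within radius 0.05 of the first point
near = tree.query_ball_point(points[0], r=0.05)

# distances and indices of each point's 4 nearest neighbors (incl. itself)
dists, idx = tree.query(points, k=4)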
The idea is that I want to thin a large 2D buffer of x,y,z points to a given
resolution by dividing the data into equal-sized "cubes" (i.e. resolution is
the number of cubes along each axis) and averaging the points inside each
cube (if any).

# Fill up buffer data for demonstration purposes with [...]
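(The demonstration setup is truncated in the archive; a minimal stand-in,
assuming uniformly random points -- the count and coordinate range are my
guesses:)

import numpy as np

# Fill up buffer data for demonstration purposes with random x, y, z points
buffer = np.random.uniform(0.0, 100.0, size=(1000000, 3))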
On Apr 5, 2016 10:23 AM, "Matthew Brett" wrote:
>
> On Mon, Mar 28, 2016 at 2:33 PM, Matthew Brett wrote:
> > Hi,
> >
> > Olivier Grisel and I are working on building and testing manylinux
> > wheels for numpy and scipy.
> >
> > We first thought that we should use ATLAS BLAS, but Olivier found that
> > my build of these could be very slow [1]. [...]
On 05.04.2016 13:24, Antoine Pitrou wrote:
> On Tue, 5 Apr 2016 08:39:39 -0700 (MST)
> mpc wrote:
>> This is the reason I'm doing this in the first place, because I made a pure
>> Python version but it runs really slow for larger data sets, so I'm
>> basically rewriting the same function but using the Python and Numpy C API [...]
> Xianyi, the maintainer of OpenBLAS, is very helpfully running the
> OpenBLAS buildbot nightly tests with numpy and scipy:
>
> http://build.openblas.net/builders
>
> There is still one BLAS-related failure on these tests on AMD chips:
>
> https://github.com/xianyi/OpenBLAS-CI/issues/10
>
> I propose [...]
On Tue, 5 Apr 2016 08:39:39 -0700 (MST)
mpc wrote:
> This is the reason I'm doing this in the first place, because I made a pure
> Python version but it runs really slow for larger data sets, so I'm
> basically rewriting the same function but using the Python and Numpy C API,
> but if you're saying [...]
On Mon, Mar 28, 2016 at 2:33 PM, Matthew Brett wrote:
> Hi,
>
> Olivier Grisel and I are working on building and testing manylinux
> wheels for numpy and scipy.
>
> We first thought that we should use ATLAS BLAS, but Olivier found that
> my build of these could be very slow [1]. I set up a testing [...]
It's difficult to say why your code is slow without seeing it. Are you
generating large temporaries? Or doing loops in Python that can be pushed
down to C via vectorizing? It may or may not be necessary to leave Python
to get things to run fast enough.
-Eric
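A small illustration (mine, not Eric's) of pushing a Python-level loop down
into numpy:

import numpy as np

xyz = np.random.rand(1000000, 3)

# Python-level loop: one interpreter round-trip per point
centroid_loop = np.zeros(3)
for row in xyz:
    centroid_loop += row
centroid_loop /= len(xyz)

# vectorized: the same reduction runs in compiled code inside numpy
centroid_vec = xyz.mean(axis=0)

assert np.allclose(centroid_loop, centroid_vec)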
This is the reason I'm doing this in the first place, because I made a pure
Python version but it runs really slow for larger data sets, so I'm
basically rewriting the same function but using the Python and Numpy C API,
but if you're saying it won't run any faster then maybe I'm going at it the
wrong way [...]