On Mon, Jan 10, 2011 at 11:35 AM, Mark Wiebe wrote:
> I'm a bit curious why the jump from 1 to 2 threads is scaling so poorly.
> Your timings have improvement factors of 1.85, 1.68, 1.64, and 1.79. Since
> the computation is trivial data parallelism, and I believe it's still pretty
> far off the memory bandwidth limit, I would expect a speedup of 1.95 or
I'm a bit curious why the jump from 1 to 2 threads is scaling so poorly.
Your timings have improvement factors of 1.85, 1.68, 1.64, and 1.79. Since
the computation is trivial data parallelism, and I believe it's still pretty
far off the memory bandwidth limit, I would expect a speedup of 1.95 or
On Mon, Jan 10, 2011 at 12:15, Nils Becker wrote:
> Robert,
>
> your answer does work: after indexing with () I can then further index
> into the datatype.
>
> In [115]: a_rank_0[()][0]
> Out[115]: 0.0
>
> I guess I just found the fact confusing that a_rank_1[0] and a_rank_0
> compare and print equal but behave differently under indexing.
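A minimal session illustrating the distinction; the two-field dtype here is made up for illustration, not taken from Nils's code:

```python
import numpy as np

# Hypothetical structured dtype, just for illustration
dt = np.dtype([('x', '<f8'), ('y', '<f8')])
a_rank_1 = np.zeros(1, dtype=dt)   # shape (1,)
a_rank_0 = np.zeros((), dtype=dt)  # shape ()

# a_rank_1[0] consumes the single axis; a_rank_0 has no axis,
# so the empty tuple () extracts the scalar record first
print(a_rank_1[0][0])    # 0.0
print(a_rank_0[()][0])   # 0.0
```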
On Mon, Jan 10, 2011 at 9:47 AM, Francesc Alted wrote:
>
>
> so, the new code is just < 5% slower. I suppose that removing the
> NPY_ITER_ALIGNED flag would give us a bit more performance, but that's
> great as it is now. How did you do that? Your new_iter branch in NumPy
> already deals with
Robert,
your answer does work: after indexing with () I can then further index
into the datatype.
In [115]: a_rank_0[()][0]
Out[115]: 0.0
I guess I just found the fact confusing that a_rank_1[0] and a_rank_0
compare and print equal but behave differently under indexing.
More precisely if I do
I
A Monday 10 January 2011 17:54:16 Mark Wiebe escrigué:
> > Apparently, you forgot to add the new_iterator_pywrap.h file.
>
> Oops, that's added now.
Excellent. It works now.
> The aligned case should just be a matter of conditionally removing
> the NPY_ITER_ALIGNED flag in two places.
Wow, t
On Mon, Jan 10, 2011 at 10:08, Nils Becker wrote:
> Hi,
>
> I noticed that I can index into a dtype when I take an element
> of a rank-1 array but not if I make a rank-0 array directly. This seems
> inconsistent. A bug?
Not a bug. Since there is no axis, you cannot use integers to index
into a rank-0 array.
On Mon, Jan 10, 2011 at 2:05 AM, Francesc Alted wrote:
>
>
> Your patch looks mostly fine to my eyes; good job! Unfortunately, I've
> been unable to compile your new_iterator branch of NumPy:
>
> numpy/core/src/multiarray/multiarraymodule.c:45:33: fatal error:
> new_iterator_pywrap.h: El fitxer
Hi,
Spatial hashes are the common solution.
Another common optimization is comparing squared distances for
collision detection, since that avoids the expensive sqrt in the
calculation.
cu.
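A minimal sketch of the squared-distance trick (function name and shapes are mine, not from the thread): sqrt is monotonic, so the nearest point by squared distance is also the nearest by true distance.

```python
import numpy as np

def nearest_index(points, point):
    # Compare squared distances only; take the single sqrt at the end
    d2 = ((points - point) ** 2).sum(axis=1)
    i = int(np.argmin(d2))
    return i, float(np.sqrt(d2[i]))
```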
On Mon, Jan 10, 2011 at 3:25 PM, Pascal wrote:
> Hi,
>
> On 01/10/2011 09:09 AM, EMMEL Thomas wrote:
>>
Hi,
I noticed that I can index into a dtype when I take an element
of a rank-1 array but not if I make a rank-0 array directly. This seems
inconsistent. A bug?
Nils
In [76]: np.version.version
Out[76]: '1.5.1'
In [78]: dt = np.dtype([('x', '<f8'), ...
IndexError: 0-d arrays can't be indexed
In [
Hi,
On 01/10/2011 09:09 AM, EMMEL Thomas wrote:
>
> No I didn't, due to the fact that these values are coordinates in 3D (x,y,z).
> In fact I work with a list/array/tuple of arrays with 10 to 1M of
> elements or more.
> What I need to do is to calculate the distance of each of these elements
Hey back...
> > #~~~~~~~~~~
> > def bruteForceSearch(points, point):
> >
> > minpt = min([(vec2Norm(pt, point), pt, i)
> > for i, pt in enumerate(points)], key=itemgetter(0))
> > return sqrt(minpt[0]), m
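The quoted snippet is cut off by the archive; a self-contained version, assuming `vec2Norm` is a squared-Euclidean-distance helper and that the function returns (distance, point, index) — both assumptions, not confirmed by the thread:

```python
from math import sqrt
from operator import itemgetter

def vec2Norm(pt1, pt2):
    # Assumed helper: squared Euclidean distance (hence the sqrt below)
    return sum((a - b) ** 2 for a, b in zip(pt1, pt2))

def bruteForceSearch(points, point):
    # Scan all points, keeping the one with the smallest squared distance
    minpt = min([(vec2Norm(pt, point), pt, i)
                 for i, pt in enumerate(points)], key=itemgetter(0))
    return sqrt(minpt[0]), minpt[1], minpt[2]
```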
Hi all,
I have this problem: Given some point draw a circle centered in this
point with radius r. I'm doing that using numpy this way (Snippet code
from here [1]):
>>> # Create the initial black and white image
>>> import numpy as np
>>> from scipy import ndimage
>>> a = np.zeros((512, 512)).asty
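The snippet above is truncated; one common way to finish the idea is a boolean mask over the coordinate grid. The center and radius below are example values, not the original's:

```python
import numpy as np

# Create the initial black and white image
a = np.zeros((512, 512), dtype=np.uint8)

# Fill a circle via a mask on the coordinate grid
# (cy, cx, r are example values)
cy, cx, r = 256, 256, 80
yy, xx = np.ogrid[:512, :512]
a[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1
```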
A Monday 10 January 2011 11:05:27 Francesc Alted escrigué:
> Also, I'd like to try out the new thread scheduling that you
> suggested to me privately (i.e. T0T1T0T1... vs T0T0...T1T1...).
I've just implemented the new partition schema in numexpr
(T0T0...T1T1..., being the original T0T1T0T1...).
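The two schedules can be sketched as follows; the function names and block granularity are mine, not numexpr's internals:

```python
def interleaved_blocks(nblocks, nthreads, tid):
    # T0T1T0T1...: thread tid takes every nthreads-th block
    return list(range(tid, nblocks, nthreads))

def contiguous_blocks(nblocks, nthreads, tid):
    # T0T0...T1T1...: thread tid takes one contiguous range of blocks
    start = tid * nblocks // nthreads
    stop = (tid + 1) * nblocks // nthreads
    return list(range(start, stop))
```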
Hey,
On Mon, 2011-01-10 at 08:09 +, EMMEL Thomas wrote:
> #~
> def bruteForceSearch(points, point):
>
> minpt = min([(vec2Norm(pt, point), pt, i)
> for i, pt in enumerate(points)], key=itemgetter(0))
A Sunday 09 January 2011 23:45:02 Mark Wiebe escrigué:
> As a benchmark of C-based iterator usage and to make it work properly
> in a multi-threaded context, I've updated numexpr to use the new
> iterator. In addition to some performance improvements, this also
> made it easy to add optional out=
> -Original Message-
> From: numpy-discussion-boun...@scipy.org [mailto:numpy-discussion-
> boun...@scipy.org] On Behalf Of David Cournapeau
> Sent: Montag, 10. Januar 2011 10:15
> To: Discussion of Numerical Python
> Subject: Re: [Numpy-discussion] speed of numpy.ndarray compared
> toNumer
On Mon, Jan 10, 2011 at 6:04 PM, EMMEL Thomas wrote:
>
> Yes, of course and my real implementation uses exactly these methods,
> but there are still issues with the arrays.
Did you try kd-trees in scipy ?
David
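For reference, a minimal scipy kd-tree query looks like this; the array size and query point are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((1000, 3))           # example 3D coordinates
tree = cKDTree(points)                   # build once...
dist, idx = tree.query([0.5, 0.5, 0.5])  # ...then query nearest neighbor
```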
> On Mon, Jan 10, 2011 at 5:09 PM, EMMEL Thomas
> wrote:
> > To John:
> >
> >> Did you try larger arrays/tuples? I would guess that makes a
> significant
> >> difference.
> >
> > No I didn't, due to the fact that these values are coordinates in 3D
> (x,y,z).
> > In fact I work with a list/array/tuple of arrays with 10 to 1M of
On Mon, Jan 10, 2011 at 5:09 PM, EMMEL Thomas wrote:
> To John:
>
>> Did you try larger arrays/tuples? I would guess that makes a significant
>> difference.
>
> No I didn't, due to the fact that these values are coordinates in 3D (x,y,z).
> In fact I work with a list/array/tuple of arrays with 100
To John:
> Did you try larger arrays/tuples? I would guess that makes a significant
> difference.
No I didn't, due to the fact that these values are coordinates in 3D (x,y,z).
In fact I work with a list/array/tuple of arrays with 10 to 1M of elements
or more.
What I need to do is to calculate the distance of each of these elements