Message: 2
Date: Wed, 7 May 2014 19:25:32 +0100
From: Nathaniel Smith
Subject: Re: [Numpy-discussion] IDL vs Python parallel computing
On 08.05.2014 02:48, Frédéric Bastien wrote:
> Just a quick question/possibility.
>
> What about just parallelizing ufuncs with only one input that is C- or
> Fortran-contiguous, like the trigonometric functions? Is there a fast
> path in the ufunc mechanism when the input is Fortran/C-contiguous?
Message: 1
Date: Wed, 07 May 2014 20:11:13 +0200
From: Sturla Molden
Subject: Re: [Numpy-discussion] IDL vs Python parallel computing
Just a quick question/possibility.

What about just parallelizing ufuncs with only one input that is C- or
Fortran-contiguous, like the trigonometric functions? Is there a fast path
in the ufunc mechanism when the input is Fortran/C-contiguous? If that is
the case, it would be relatively easy to add an OpenMP p
On 07.05.2014 20:11, Sturla Molden wrote:
> On 03/05/14 23:56, Siegfried Gonzi wrote:
>
> A more technical answer is that NumPy's internals do not play very
> nicely with multithreading. For example, the array iterators used in
> ufuncs store internal state. Multithreading would imply an ex
On 03/05/14 23:56, Siegfried Gonzi wrote:
> I noticed IDL uses at least 400% (4 processors or cores) out of the box
> for simple things like reading and processing files, calculating the
> mean etc.

The DMA controller is working at its own pace, regardless of what the
CPU is doing. You cannot
On 05/05/14 17:02, Francesc Alted wrote:
> Well, this might be because it is the place where using several
> processes makes more sense. Normally, when you are reading files, the
> bottleneck is the I/O subsystem (at least if you don't have to convert
> from text to numbers), and for calculating
Hi all
I noticed IDL uses at least 400% (4 processors or cores) out of the box
for simple things like reading and processing files, calculating the
mean etc.
I have never seen this happening with numpy except for the linear algebra
stuff (e.g. LAPACK).
Any comments?
Thanks,
Siegfried