Hi. That is really amazing.
I checked out that numexpr branch and saw some strange results when
evaluating expressions on a multi-core i7 processor.
Running numexpr.test() yields a few 'F's, which I suppose indicate failing
tests. I tried to let the tests finish, but it takes more than 20 min; is
ther
>> This is a slight digression: is there a way to have a out parameter
>> like semantics with numexpr. I have always used it as
>>
>> a[:] = numexpr(expression)
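As far as I know, numexpr.evaluate() accepts an out argument, which gives exactly these out-parameter semantics without the extra pass that a[:] = ... implies. A minimal sketch (array size arbitrary):

```python
import numpy as np
import numexpr as ne

m = np.ones((1000, 1000))
a = np.empty_like(m)

# a[:] = ne.evaluate("m - 0.5") would first materialize the result and
# then copy it into a; the out argument writes into a directly instead.
ne.evaluate("m - 0.5", out=a)
print(a[0, 0])  # 0.5
```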
> In order to make sure the 1.6 nditer supports multithreading, I adapted
> numexpr to use it. The branch which does this is here:
> htt
On Wed, Jul 20, 2011 at 5:52 PM, srean wrote:
> >> I think this is essential to speed up numpy. Maybe numexpr could handle
> >> this in the future? Right now the general use of numexpr is result =
> >> numexpr.evaluate("whatever"), so the same problem seems to be there.
> >>
> >> With this I am not say
>> I think this is essential to speed up numpy. Maybe numexpr could handle this
>> in the future? Right now the general use of numexpr is result =
>> numexpr.evaluate("whatever"), so the same problem seems to be there.
>>
>> With this I am not saying that numpy is not worth it, just that for many
Wed, 20 Jul 2011 11:31:41 +0000, Pauli Virtanen wrote:
[clip]
> There is a sharp order-of-magnitude change of speed in malloc+memset of
> an array, which is not present in memset itself. (This is then also
> reflected in the Numpy performance -- floating point operations probably
> don't cost much
> with "gcc -O3 -ffast-math -march=native -mfpmath=sse" optimizations
> for the C code (involving SSE2 vectorization and whatnot, looking at
> the assembler output). Numpy is already going essentially at the maximum
> speed.
As a related side question that I've been wondering myself for some time
On Wed, Jul 20, 2011 at 3:57 AM, eat wrote:
> Perhaps slightly OT, but there is something very odd going on here. I would
> expect the performance to be in a totally different ballpark.
>>
>> >>> t = timeit.Timer('m =- 0.5', setup='import numpy as np; m = np.ones([8092,8092], float)')
>> >>> np.mean(t.
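One likely explanation for the odd timing: 'm =- 0.5' is not an in-place subtraction. Python parses it as m = -0.5, so the statement just rebinds m to a scalar float and no work proportional to the 8092x8092 array is done at all. A quick check:

```python
import numpy as np

m = np.ones((4, 4))
m =- 0.5          # parsed as m = -0.5: rebinds the name m to a scalar
print(type(m))    # <class 'float'>
print(m)          # -0.5
```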
Wed, 20 Jul 2011 09:04:09 +0000, Pauli Virtanen wrote:
> Wed, 20 Jul 2011 08:49:21 +0200, Carlos Becker wrote:
>> Those are very interesting examples. I think that pre-allocation is
>> very important, and something similar happens in Matlab if no
>> pre-allocation is done: it takes 3-4x longer than
I will be away from my computer for a week, but what I could try today
shows that the Matlab JIT is doing some tricks, so the results I have shown
previously for Matlab are likely to be wrong.
In this sense, it seems that timings are similar between numpy
and Matlab if JIT tricks are avoided.
On Tue, Jul 19, 2011 at 11:49 PM, Carlos Becker wrote:
> Those are very interesting examples.
Cool.
> I think that pre-allocation is very
> important, and something similar happens in Matlab if no pre-allocation is
> done: it takes 3-4x longer than with pre-allocation.
Can you provide a simple
Hi,
On Wed, Jul 20, 2011 at 2:42 AM, Chad Netzer wrote:
> On Tue, Jul 19, 2011 at 6:10 PM, Pauli Virtanen wrote:
>
> >k = m - 0.5
> >
> > does here the same thing as
> >
> >k = np.empty_like(m)
> >np.subtract(m, 0.5, out=k)
> >
> > The memory allocation (empty_like and t
Wed, 20 Jul 2011 09:04:09 +0000, Pauli Virtanen wrote:
> Wed, 20 Jul 2011 08:49:21 +0200, Carlos Becker wrote:
>> Those are very interesting examples. I think that pre-allocation is
>> very important, and something similar happens in Matlab if no
>> pre-allocation is done: it takes 3-4x longer tha
Wed, 20 Jul 2011 08:49:21 +0200, Carlos Becker wrote:
> Those are very interesting examples. I think that pre-allocation is very
> important, and something similar happens in Matlab if no pre-allocation
> is done: it takes 3-4x longer than with pre-allocation. The main
> difference is that Matlab i
On 20.07.2011 09:35, Carlos Becker wrote:
>
> In my case, sometimes it is required to process 1k images or more, and
> 2x speed improvement in this case means 2 hours of processing vs 4.
Can you demonstrate that Matlab is faster than NumPy for this task?
Sturla
On 19.07.2011 11:05, Carlos Becker wrote:
m = rand(2000,2000);
N = 100;
tic;
for I=1:N
    k = m - 0.5;
end
toc / N
Here, Matlab's JIT compiler can probably hoist the loop-invariant
computation out of the loop, and effectively just do
I = N;
k = m - 0.5;
Try thi
Hi all. Thanks for the feedback.
My point is not to start a Matlab/numpy war. This comes out of my wish to
switch from Matlab to something more appealing.
I like numpy and Python, which is a proper language (not like Matlab
scripts, whose syntax is patched and broken as new versions come out).
I
On Wednesday, July 20, 2011, Carlos Becker wrote:
> Those are very interesting examples. I think that pre-allocation is very
> important, and something similar happens in Matlab if no pre-allocation is
> done: it takes 3-4x longer than with pre-allocation. The main difference is
> that Matlab is
On 20.07.2011 08:49, Carlos Becker wrote:
>
> The main difference is that Matlab is able to take into account a
> pre-allocated array/matrix, probably avoiding the creation of a
> temporary and writing the results directly in the pre-allocated array.
>
> I think this is essential to speed up num
On 19.07.2011 17:49, Carlos Becker wrote:
>
> - Matlab: 0.0089 sec avg
> - Numpy: 0.051 sec avg
> - Numpy with blitz: 0.0043 sec avg
>
> Now blitz is considerably faster! Anyways, I am concerned about numpy
> being much slower, in this case taking 2x the time of the previous
> operation.
> I guess this is because of th
Those are very interesting examples. I think that pre-allocation is very
important, and something similar happens in Matlab if no pre-allocation is
done: it takes 3-4x longer than with pre-allocation.
The main difference is that Matlab is able to take into account a
pre-allocated array/matrix, prob
Your code is completely unvectorized, so you are right
about the lack of vectorization.
What happens is that you see the result of the Matlab JIT compiler
speeding up the loop.
With a vectorized array expression, there will hardly be any difference.
Sturla
On 19.07.2011 11:
On Tue, Jul 19, 2011 at 6:10 PM, Pauli Virtanen wrote:
> k = m - 0.5
>
> does here the same thing as
>
> k = np.empty_like(m)
> np.subtract(m, 0.5, out=k)
>
> The memory allocation (empty_like and the subsequent deallocation)
> costs essentially nothing, and there are no temp
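Pauli's equivalence can be checked directly; a small self-contained sketch (array size arbitrary):

```python
import numpy as np

m = np.random.rand(2000, 2000)

# One-step expression: numpy allocates the result array internally.
k1 = m - 0.5

# Explicit two-step version: allocate first, then write into the buffer.
k2 = np.empty_like(m)
np.subtract(m, 0.5, out=k2)

print(np.array_equal(k1, k2))  # True
```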
On Tue, 19 Jul 2011 17:15:47 -0500, Chad Netzer wrote:
> On Tue, Jul 19, 2011 at 3:35 PM, Carlos Becker
[clip]
>> However, if I don't, I obtain this 4x penalty with numpy, even with the
>> 8092x8092 array. Would it be possible to do k = m - 0.5 and
>> pre-allocate k such that python does not have
On Jul 19, 2011, at 3:15 PM, Chad Netzer wrote:
> %python
> import timeit
> import numpy as np
>
> t = timeit.Timer('k = m - 0.5', setup='import numpy as np; m = np.ones([8092,8092], float); k = np.zeros(m.size, m.dtype)')
> np.mean(t.repeat(repeat=10, number=1))
> 0.5855752944946288
On Tue, Jul 19, 2011 at 3:35 PM, Carlos Becker wrote:
> Thanks Chad for the explanation on those details. I am new to python and I
> However, if I don't, I obtain this 4x penalty with numpy, even with the
> 8092x8092 array. Would it be possible to do k = m - 0.5 and pre-allocate k
> such that py
Thanks Chad for the explanation on those details. I am new to python and I
still have a lot to learn, this was very useful.
Now I get similar results between Matlab and numpy when I re-use the memory
allocated for m with 'm -= 0.5'.
However, if I don't, I obtain this 4x penalty with numpy, even wi
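A sketch of the comparison being discussed, using timeit directly (array size and repeat counts arbitrary; absolute numbers will of course vary by machine):

```python
import timeit

setup = "import numpy as np; m = np.ones((2000, 2000))"

# Fresh result array each run: pays for allocation plus the write.
t_alloc = min(timeit.repeat("k = m - 0.5", setup=setup, repeat=5, number=10))

# In-place subtraction: reuses m's buffer, no new allocation.
t_inplace = min(timeit.repeat("m -= 0.5", setup=setup, repeat=5, number=10))

print(t_alloc, t_inplace)
```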
Carlos Becker wrote:
> Besides the matlab/numpy comparison, I think that there is an inherent
> problem with how expressions are handled, in terms of efficiency.
> For instance, k = (m - 0.5)*0.3 takes 52 msec average here (2000x2000
> array), while k = (m - 0.5)*0.3*0.2 takes 0.079 s, and k = (m -
For such expressions you should try the numexpr package: it allows the same
type of optimisation as Matlab does: a single loop over the matrix elements
instead of repeated loops and the creation of intermediate objects.
Nadav
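A sketch of what Nadav describes, assuming numexpr is installed (array size arbitrary):

```python
import numpy as np
import numexpr as ne

m = np.random.rand(2000, 2000)

# Plain numpy evaluates this in several passes, materializing an
# intermediate array for each binary operation.
k_np = (m - 0.5) * 0.3 * 0.2

# numexpr compiles the whole expression and runs a single loop over
# the elements, with no intermediate arrays.
k_ne = ne.evaluate("(m - 0.5) * 0.3 * 0.2")

print(np.allclose(k_np, k_ne))  # True
```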
> Besides the matlab/numpy comparison, I think that there is an inheren
On Tue, 19 Jul 2011 17:49:14 +0200, Carlos Becker wrote:
> I made more tests with the same operation, restricting Matlab to use a
> single processing unit. I got:
>
> - Matlab: 0.0063 sec avg
> - Numpy: 0.026 sec avg
> - Numpy with weave.blitz: 0.0041
To check if it's an issue with building witho
On Tue, Jul 19, 2011 at 2:27 PM, Carlos Becker wrote:
> Hi, everything was run on linux.
> Placing parentheses around the scalar multipliers shows that it seems to
> have to do with how expressions are handled; is there something that can be
> done about this so that numpy can deal with expressio
On Tue, Jul 19, 2011 at 4:05 AM, Carlos Becker wrote:
> Hi, I started with numpy a few days ago. I was timing some array operations
> and found that numpy takes 3 or 4 times longer than Matlab on a simple
> array-minus-scalar operation.
Doing these kinds of timings correctly is a tricky issue, a
Hi, everything was run on linux.
I am using numpy 2.0.0.dev-64fce7c, but I tried an older version (cannot
remember which one now) and saw similar results.
Matlab is R2011a, and I used taskset to assign its process to a single core.
Linux is 32-bit, on Intel Core i7-2630QM.
Besides the matlab/num
On Tue, Jul 19, 2011 at 11:19 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>
>
> On Tue, Jul 19, 2011 at 9:49 AM, Carlos Becker wrote:
>
>> I made more tests with the same operation, restricting Matlab to use a
>> single processing unit. I got:
>>
>> - Matlab: 0.0063 sec avg
>> - Numpy
On Tue, Jul 19, 2011 at 9:49 AM, Carlos Becker wrote:
> I made more tests with the same operation, restricting Matlab to use a
> single processing unit. I got:
>
> - Matlab: 0.0063 sec avg
> - Numpy: 0.026 sec avg
> - Numpy with weave.blitz: 0.0041
>
> Note that weave.blitz is even faster than Mat
I made more tests with the same operation, restricting Matlab to use a
single processing unit. I got:
- Matlab: 0.0063 sec avg
- Numpy: 0.026 sec avg
- Numpy with weave.blitz: 0.0041
Note that weave.blitz is even faster than Matlab (slightly).
I tried on an older computer, and I got similar resul
Hi Pauli, thanks for the quick answer.
Is there a way to check the optimization flags of numpy after
installation?
I am away from a Matlab installation now, but I remember I saw a single
processor active with Matlab. I will check it again soon.
Thanks!
On 19/07/2011, at 13:10, Pauli Virtan
Tue, 19 Jul 2011 11:05:18 +0200, Carlos Becker wrote:
> Hi, I started with numpy a few days ago. I was timing some array
> operations and found that numpy takes 3 or 4 times longer than Matlab on
> a simple array-minus-scalar operation.
> This looks as if there is a lack of vectorization, even thou
Hi, I started with numpy a few days ago. I was timing some array operations
and found that numpy takes 3 or 4 times longer than Matlab on a simple
array-minus-scalar operation.
This looks as if there is a lack of vectorization, even though this is just
a guess. I hope this is not a repost. I tried