On 19.07.2011 17:49, Carlos Becker wrote:
>
> - Matlab: 0.0089
> - Numpy: 0.051
> - Numpy with blitz: 0.0043
>
> Now blitz is considerably faster! Anyway, I am concerned about numpy
> being much slower, in this case taking 2x the time of the previous
> operation.
> I guess this is because of th
Those are very interesting examples. I think that pre-allocation is very
important, and something similar happens in Matlab if no pre-allocation is
done: it takes 3-4x longer than with pre-allocation.
The main difference is that Matlab is able to take into account a
pre-allocated array/matrix, prob
There is a total lack of vectorization in your code, so you are right
about that.
What happens is that you see the result of the Matlab JIT compiler
speeding up the loop.
With a vectorized array expression, there will hardly be any difference.
Sturla
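Sturla's point can be sketched with a small comparison (the array size here
is arbitrary): the explicit loop is what a JIT can speed up, while plain
CPython executes it interpreted, element by element.

```python
import numpy as np

m = np.ones((300, 300))

# Explicit Python loop: the kind of code Matlab's JIT accelerates,
# but which CPython runs interpreted, one element at a time.
def loop_subtract(m):
    k = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            k[i, j] = m[i, j] - 0.5
    return k

# Vectorized expression: a single C-level loop inside NumPy.
def vectorized_subtract(m):
    return m - 0.5
```

Both produce the same result; only the vectorized form avoids the
per-element interpreter overhead.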
On 19.07.2011 11:
On Tue, Jul 19, 2011 at 6:10 PM, Pauli Virtanen wrote:
> k = m - 0.5
>
> does here the same thing as
>
> k = np.empty_like(m)
> np.subtract(m, 0.5, out=k)
>
> The memory allocation (empty_like and the subsequent deallocation)
> costs essentially nothing, and there are no temp
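Pauli's equivalence is easy to check directly; a minimal sketch:

```python
import numpy as np

m = np.ones((100, 100))

# Plain expression: NumPy allocates the output array internally.
k1 = m - 0.5

# Explicit pre-allocation plus out=: same result, and the allocation
# itself is cheap compared with the element-wise subtraction.
k2 = np.empty_like(m)
np.subtract(m, 0.5, out=k2)
```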
On Tue, 19 Jul 2011 17:15:47 -0500, Chad Netzer wrote:
> On Tue, Jul 19, 2011 at 3:35 PM, Carlos Becker
[clip]
>> However, if I don't, I obtain this 4x penalty with numpy, even with the
>> 8092x8092 array. Would it be possible to do k = m - 0.5 and
>> pre-allocate k such that python does not have
On Jul 19, 2011, at 3:15 PM, Chad Netzer wrote:
> %python
> >>> import timeit
> >>> import numpy as np
> >>> t = timeit.Timer('k = m - 0.5', setup='import numpy as np; m = np.ones([8092, 8092], float); k = np.zeros(m.size, m.dtype)')
> >>> np.mean(t.repeat(repeat=10, number=1))
> 0.5855752944946288
On Tue, Jul 19, 2011 at 3:35 PM, Carlos Becker wrote:
> Thanks Chad for the explanation on those details. I am new to python and I
> However, if I don't, I obtain this 4x penalty with numpy, even with the
> 8092x8092 array. Would it be possible to do k = m - 0.5 and pre-alllocate k
> such that py
https://github.com/numpy/numpy/pull/116
This pull request deprecates direct access to PyArrayObject fields. This
direct access has been discouraged for a while through comments in the
header file and documentation, but up till now, there was no way to disable
it. I've created such a mechanism, and
Thanks Chad for the explanation on those details. I am new to Python and
still have a lot to learn; this was very useful.
Now I get similar results between matlab and numpy when I re-use the memory
allocated for m with 'm -= 0.5'.
However, if I don't, I obtain this 4x penalty with numpy, even wi
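The difference Carlos describes comes down to whether a new output buffer is
allocated. One way to observe it, using the buffer address exposed by
`__array_interface__`:

```python
import numpy as np

m = np.ones((100, 100))
addr_before = m.__array_interface__['data'][0]

m -= 0.5                     # in-place: reuses m's own buffer
addr_after = m.__array_interface__['data'][0]

k = m - 0.5                  # out-of-place: a fresh array is allocated for k
addr_k = k.__array_interface__['data'][0]
```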
Carlos Becker wrote:
> Besides the matlab/numpy comparison, I think that there is an inherent
> problem with how expressions are handled, in terms of efficiency.
> For instance, k = (m - 0.5)*0.3 takes 52 msec average here (2000x2000
> array), while k = (m - 0.5)*0.3*0.2 takes 0.079, and k = (m -
For such expressions you should try the numexpr package: it allows the same
type of optimisation as Matlab does, running a single loop over the matrix
elements instead of repeated loops and intermediate object creation.
Nadav
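The intermediate-array cost that numexpr eliminates automatically can also be
reduced by hand with the `out=` arguments of NumPy's ufuncs. A sketch in
plain NumPy (no numexpr dependency; the array shape is arbitrary):

```python
import numpy as np

m = np.ones((2000, 2000))

# Naive: (m - 0.5) * 0.3 creates a temporary for (m - 0.5),
# then allocates another array for the product.
k_naive = (m - 0.5) * 0.3

# Manual fusion: one pre-allocated buffer, reused for both steps,
# so no temporaries are created.
k = np.empty_like(m)
np.subtract(m, 0.5, out=k)
np.multiply(k, 0.3, out=k)
```

numexpr goes further: `numexpr.evaluate("(m - 0.5)*0.3")` evaluates the whole
expression in one pass over the data.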
> Besides the matlab/numpy comparison, I think that there is an inheren
On Tue, 19 Jul 2011 17:49:14 +0200, Carlos Becker wrote:
> I made more tests with the same operation, restricting Matlab to use a
> single processing unit. I got:
>
> - Matlab: 0.0063 sec avg
> - Numpy: 0.026 sec avg
> - Numpy with weave.blitz: 0.0041
To check if it's an issue with building witho
On Sun, Jul 17, 2011 at 11:55 PM, Chris Barker wrote:
> On 7/14/2011 8:04 PM, Christoph Gohlke wrote:
>
>> A patch for the build issues is attached. Remove the build directory
>> before rebuilding.
>>
>> Christoph,
>
> I had other issues (I think in one case, a *.c file was not getting
> re-built
On Sun, Jul 17, 2011 at 11:48 PM, Darren Dale wrote:
> In numpy.distutils.system info:
>
> default_x11_lib_dirs = libpaths(['/usr/X11R6/lib', '/usr/X11/lib',
>                                  '/usr/lib'], platform_bits)
> default_x11_include_dirs = ['/usr/X11R6/include', '/usr/X11/include
On Tue, Jul 19, 2011 at 2:27 PM, Carlos Becker wrote:
> Hi, everything was run on linux.
> Placing parentheses around the scalar multipliers shows that it seems to
> have to do with how expressions are handled; is there something that can be
> done about this so that numpy can deal with expressio
On Tue, Jul 19, 2011 at 4:05 AM, Carlos Becker wrote:
> Hi, I started with numpy a few days ago. I was timing some array operations
> and found that numpy takes 3 or 4 times longer than Matlab on a simple
> array-minus-scalar operation.
Doing these kinds of timings correctly is a tricky issue, a
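One common convention for such micro-benchmarks, sketched here with an
arbitrary array shape: report the minimum over several repeats, since the
minimum is least affected by other processes and cache warm-up.

```python
import timeit

# Time the array-minus-scalar operation; the minimum over several
# repeats is the most stable figure to report.
t = timeit.Timer('k = m - 0.5',
                 setup='import numpy as np; m = np.ones((2000, 2000))')
per_call = min(t.repeat(repeat=5, number=10)) / 10
print('best time per call: %.4f s' % per_call)
```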
Hi, everything was run on linux.
I am using numpy 2.0.0.dev-64fce7c, but I tried an older version (cannot
remember which one now) and saw similar results.
Matlab is R2011a, and I used taskset to assign its process to a single core.
Linux is 32-bit, on Intel Core i7-2630QM.
Besides the matlab/num
On Tue, Jul 19, 2011 at 11:19 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>
>
> On Tue, Jul 19, 2011 at 9:49 AM, Carlos Becker wrote:
>
>> I made more tests with the same operation, restricting Matlab to use a
>> single processing unit. I got:
>>
>> - Matlab: 0.0063 sec avg
>> - Numpy
On Tue, Jul 19, 2011 at 9:49 AM, Carlos Becker wrote:
> I made more tests with the same operation, restricting Matlab to use a
> single processing unit. I got:
>
> - Matlab: 0.0063 sec avg
> - Numpy: 0.026 sec avg
> - Numpy with weave.blitz: 0.0041
>
> Note that weave.blitz is even faster than Mat
Yes, you're right. The problem is, when you use the first one, you may
pollute the current namespace ('name pollution').
read this: http://bytebaker.com/2008/07/30/python-namespaces/
cheers,
Chao
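The pollution Chao mentions is easy to demonstrate; here with `math` rather
than `tkinter`, to keep the sketch dependency-free:

```python
# A star import silently rebinds names already defined in the namespace:
pow = 'my own value'
from math import *     # brings in math.pow, overwriting our 'pow'

# The explicit form keeps everything behind the module name instead:
import math
```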
2011/7/19 Alex Ter-Sarkissov
> this is probably a silly question; I've seen this in one of the
On Tue, Jul 19, 2011 at 07:38, Andrea Cimatoribus
wrote:
> Dear all,
> I would like to avoid the use of a boolean array (mask) in the following
> statement:
>
> mask = (A != 0.)
> B = A[mask]
>
> in order to be able to move this bit of code in a cython script (boolean
> arrays are not yet im
I made more tests with the same operation, restricting Matlab to use a
single processing unit. I got:
- Matlab: 0.0063 sec avg
- Numpy: 0.026 sec avg
- Numpy with weave.blitz: 0.0041
Note that weave.blitz is even slightly faster than Matlab.
I tried on an older computer, and I got similar resul
Dear all,
I would like to avoid the use of a boolean array (mask) in the following
statement:
mask = (A != 0.)
B = A[mask]
in order to be able to move this bit of code in a cython script (boolean
arrays are not yet implemented there, and they slow down execution a lot as
they can't be defin
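One possible workaround, assuming integer index arrays are acceptable in the
Cython code: `np.flatnonzero` yields the indices of the nonzero elements
directly, so the boolean mask never needs to be materialized.

```python
import numpy as np

A = np.array([0.0, 1.5, 0.0, -2.0, 3.0])

# Boolean-mask version from the question:
mask = (A != 0.0)
B_mask = A[mask]

# Equivalent with integer indices, which are plain int arrays and
# often easier to handle in Cython than boolean arrays:
idx = np.flatnonzero(A)
B_idx = A[idx]
```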
Hi Pauli, thanks for the quick answer.
Is there a way to check the optimization flags of numpy after
installation?
I am away from a Matlab installation now, but I remember I saw a single
processor active with Matlab. I will check it again soon.
Thanks!
On 19/07/2011, at 13:10, Pauli Virtan
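For reference, one way to inspect a NumPy build after installation (the
exact output format varies between versions):

```python
import numpy as np

# Prints the BLAS/LAPACK libraries and related build-time options
# that this NumPy installation was compiled against:
np.show_config()
```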
Tue, 19 Jul 2011 11:05:18 +0200, Carlos Becker wrote:
> Hi, I started with numpy a few days ago. I was timing some array
> operations and found that numpy takes 3 or 4 times longer than Matlab on
> a simple array-minus-scalar operation.
> This looks as if there is a lack of vectorization, even thou
this is probably a silly question; I've seen this in one of the tutorials:
from tkinter import *
import tkinter.messagebox
given that * implies importing the whole module, why would anyone bother
with importing a specific command on top of it?
Hi, I started with numpy a few days ago. I was timing some array operations
and found that numpy takes 3 or 4 times longer than Matlab on a simple
array-minus-scalar operation.
This looks as if there is a lack of vectorization, even though this is just
a guess. I hope this is not reposting. I tried