On 02/23/2011 05:45 AM, Sturla Molden wrote:
> I came across some NumPy performance tests by NASA. Comparisons against
> pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
> and Java. For those that are interested, it is here:
This is mostly a test of the blas/lapack used.
On 23.02.2011 00:19, Gökhan Sever wrote:
>
> I am guessing ATLAS is thread-aware, since with N=15000 each of the
> quad cores runs at 100%. Probably the MKL build doesn't bring much speed
> advantage in this computation. Any thoughts?
>
There are still things like optimal cache use, SIMD extensions, and so on.
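One way to see how much of that N=15000 result comes from threading alone is to cap the BLAS at a single thread and re-time the same product. The sketch below is mine, not from the thread: MKL and OpenBLAS-style builds honour MKL_NUM_THREADS / OMP_NUM_THREADS, while ATLAS usually fixes its thread count at build time, so the caps may simply have no effect on an ATLAS build.

    # Rough sketch: time np.dot with the threaded BLAS capped to one thread.
    # The environment variables must be set before numpy loads the BLAS, and
    # they only apply to MKL / OpenBLAS-style builds (ATLAS ignores them).
    import os
    os.environ["MKL_NUM_THREADS"] = "1"
    os.environ["OMP_NUM_THREADS"] = "1"

    import time
    import numpy as np

    N = 5000  # big enough that threading would normally show up
    A = np.random.random((N, N))
    B = np.random.random((N, N))

    t0 = time.time()
    np.dot(A, B)
    print("dot with threads capped: %.2f s" % (time.time() - t0))

Running it once with the caps and once without, while watching the cores in top, gives a rough idea of whether the 100% load on all four cores really comes from the BLAS threading.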
On Tue, Feb 22, 2011 at 1:48 PM, Gael Varoquaux
wrote:
> Probably because the numpy binary that the author was using was compiled
> without a blas implementation, and just using numpy's internal
> lapack_lite. This is a common problem in real life.
Is there an easy way to check from within numpy?
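One thing that does work from within numpy is numpy.show_config() (equivalently numpy.__config__.show()), which prints the blas/lapack information recorded at build time; a plain lapack_lite build typically reports the optimized sections as NOT AVAILABLE. A minimal check:

    # Print the blas/lapack libraries this numpy binary was built against.
    import numpy as np
    np.show_config()
    # A build without an external BLAS typically shows blas_opt_info /
    # lapack_opt_info as NOT AVAILABLE.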
On Tue, Feb 22, 2011 at 2:44 PM, Alan G Isaac wrote:
>
>
> I don't believe the matrix multiplication results.
> Maybe I misunderstand them ...
>
> >>> t = timeit.Timer("np.dot(A,B)", "import numpy as np;N=1500;A=np.random.random((N,N));B=np.random.random((N,N))")
> >>> print t.timeit(numb
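For reference, a self-contained version of that timing; the repeat count here is an arbitrary choice of mine, not the one from the original message:

    # Time a 1500x1500 matrix product, averaging over a few repeats.
    import timeit

    setup = ("import numpy as np; N = 1500; "
             "A = np.random.random((N, N)); B = np.random.random((N, N))")
    t = timeit.Timer("np.dot(A, B)", setup)
    n = 10  # arbitrary repeat count
    print("%.3f s per np.dot call" % (t.timeit(number=n) / n))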
On Tue, Feb 22, 2011 at 09:59:26PM, Pauli Virtanen wrote:
> > Probably because the numpy binary that the author was using was compiled
> > without a blas implementation, and just using numpy's internal
> > lapack_lite. This is a common problem in real life.
> It doesn't use blas_lite at the moment.
On Tue, 22 Feb 2011 22:48:09 +0100, Gael Varoquaux wrote:
[clip]
> Probably because the numpy binary that the author was using was compiled
> without a blas implementation, and just using numpy's internal
> lapack_lite. This is a common problem in real life.
It doesn't use blas_lite at the moment.
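A related check for dot() specifically, on the numpy of that era (pre-1.10): the optional _dotblas extension is only built when a CBLAS is found at compile time, so its presence tells you whether np.dot goes through an optimized BLAS at all. This is a historical detail about those versions, not something spelled out in the thread:

    # Heuristic for numpy < 1.10: _dotblas exists only in builds that found
    # a CBLAS at compile time; without it, np.dot uses numpy's own C loops.
    try:
        from numpy.core import _dotblas  # only checking that it imports
        print("np.dot is BLAS-accelerated (_dotblas present)")
    except ImportError:
        print("np.dot falls back to numpy's unoptimized implementation")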
On Tue, 22 Feb 2011 16:44:56 -0500, Alan G Isaac wrote:
[clip]
> I don't believe the matrix multiplication results. Maybe I misunderstand
> them ...
>
> >>> t = timeit.Timer("np.dot(A,B)", "import numpy as np;N=1500;A=np.random.random((N,N));B=np.random.random((N,N))")
> >>> print t.timeit(numb
On Tue, Feb 22, 2011 at 04:44:56PM -0500, Alan G Isaac wrote:
> On 2/22/2011 3:45 PM, Sturla Molden wrote:
> > I came across some NumPy performance tests by NASA. Comparisons against
> > pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
> > and Java. For those that are interested, it is here:
On 2/22/2011 3:45 PM, Sturla Molden wrote:
> I came across some NumPy performance tests by NASA. Comparisons against
> pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
> and Java. For those that are interested, it is here:
> https://modelingguru.nasa.gov/docs/DOC-1762
I don't believe the matrix multiplication results. Maybe I misunderstand them ...
Thanks for posting a nice report.
Akand
On Tue, Feb 22, 2011 at 2:45 PM, Sturla Molden wrote:
> I came across some NumPy performance tests by NASA. Comparisons against
> pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
> and Java. For those that are interested, it is here:
I came across some NumPy performance tests by NASA. Comparisons against
pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
and Java. For those that are interested, it is here:
https://modelingguru.nasa.gov/docs/DOC-1762
Sturla