On Thu, 24 Feb 2011 14:26:01 -0500
Frédéric Bastien wrote:
> Hi,
>
> To print the information you can do:
>
> python -c 'import numpy;numpy.__config__.show()'
>
> You can access the info directly with:
>
> numpy.distutils.__config__.blas_opt_info['library_dirs']
> numpy.distutils.__config__
Hi,
To print the information you can do:
python -c 'import numpy;numpy.__config__.show()'
You can access the info directly with:
numpy.distutils.__config__.blas_opt_info['library_dirs']
numpy.distutils.__config__.blas_opt_info['libraries']
numpy.distutils.__config__.blas_opt_info['include_dirs']
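[Ed.: the one-liners above can be collected into a short script. A hedged sketch — blas_opt_info is only present on the distutils-based NumPy builds of this era, so the lookup is guarded rather than assumed:]

```python
import numpy

# Print the build-time BLAS/LAPACK configuration.
numpy.__config__.show()

# On NumPy releases of this era the same data is exposed as plain dicts;
# the getattr guard keeps the script running on builds where the
# distutils-based attributes no longer exist.
info = getattr(numpy.__config__, "blas_opt_info", {})
print("libraries:", info.get("libraries", "n/a"))
print("library_dirs:", info.get("library_dirs", "n/a"))
```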
Den 23.02.2011 18:34, skrev Benjamin Root:
>
> Just to point out, this person used Matlab 2008. While I do not use
> Matlab on a regular basis, I have seen marked improvements in
> performance in subsequent versions.
They have got a JIT compiler for loops. Matlab is actually an easy
language t
On Wed, Feb 23, 2011 at 11:00 AM, Sturla Molden wrote:
> Den 23.02.2011 08:32, skrev David:
> > This is mostly a test of the blas/lapack used, so it is not very useful
> > IMO, except maybe to show that you can deal with non trivial problems on
> > top of python (surprisingly, many scientists pro
Den 23.02.2011 08:32, skrev David:
> This is mostly a test of the blas/lapack used, so it is not very useful
> IMO, except maybe to show that you can deal with non trivial problems on
> top of python (surprisingly, many scientists programming a fair bit are
> still unaware of the vectorizing concept).
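[Ed.: the "vectorizing concept" David mentions is easy to demonstrate. A small sketch (array size chosen arbitrarily) timing a pure-Python accumulation loop against the equivalent single np.dot call:]

```python
import time
import numpy as np

n = 200_000
x = np.random.random(n)

# Pure-Python loop: one interpreted iteration per element.
t0 = time.perf_counter()
total = 0.0
for v in x:
    total += v * v
loop_time = time.perf_counter() - t0

# Vectorized: a single call; the loop runs in compiled code
# (and in the BLAS dot, when one is linked in).
t0 = time.perf_counter()
total_vec = float(np.dot(x, x))
vec_time = time.perf_counter() - t0

print("loop: %.4f s   vectorized: %.6f s" % (loop_time, vec_time))
assert abs(total - total_vec) < 1e-6 * n
```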
On 02/23/2011 05:45 AM, Sturla Molden wrote:
> I came across some NumPy performance tests by NASA. Comparisons against
> pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
> and Java. For those that are interested, it is here:
This is mostly a test of the blas/lapack used, so i
Den 23.02.2011 00:19, skrev Gökhan Sever:
>
> I am guessing ATLAS is thread aware since with N=15000 each of the
> quad cores runs at 100%. Probably the MKL build doesn't bring much speed
> advantage in this computation. Any thoughts?
>
There are still things like optimal cache use, SIMD extensions,
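[Ed.: whether the BLAS thread count can be changed at runtime depends on the build: ATLAS fixes it at compile time, while MKL and OpenBLAS read environment variables when the library is first loaded. A minimal sketch — the variable names below are the conventional ones for those libraries, not NumPy API, and must be set before the first numpy import:]

```python
import os

# Must be set before numpy (and hence the BLAS) is imported.
os.environ["OMP_NUM_THREADS"] = "1"        # MKL / OpenMP-based builds
os.environ["OPENBLAS_NUM_THREADS"] = "1"   # OpenBLAS builds

import numpy as np

a = np.random.random((500, 500))
b = np.random.random((500, 500))
c = np.dot(a, b)   # with the settings above this runs single-threaded
print(c.shape)     # -> (500, 500)
```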
On Tue, Feb 22, 2011 at 1:48 PM, Gael Varoquaux
wrote:
> Probably because the numpy binary that the author was using was compiled
> without a blas implementation, and just using numpy's internal
> lapack_lite. This is a common problem in real life.
Is there an easy way to check from within numpy
On Tue, Feb 22, 2011 at 2:44 PM, Alan G Isaac wrote:
>
>
> I don't believe the matrix multiplication results.
> Maybe I misunderstand them ...
>
> >>> t = timeit.Timer("np.dot(A,B)", "import numpy as np; N=1500; A=np.random.random((N,N)); B=np.random.random((N,N))")
> >>> print t.timeit(numb
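[Ed.: Alan's benchmark can be rerun as a self-contained script. In this sketch N is reduced from the thread's 1500 to 300 so it finishes quickly, and the minimum over repeats is taken to damp timer noise:]

```python
import timeit

setup = ("import numpy as np; N = 300; "
         "A = np.random.random((N, N)); "
         "B = np.random.random((N, N))")
t = timeit.Timer("np.dot(A, B)", setup)

# Best of 3 runs of 10 products each: milliseconds with an optimized
# BLAS, far slower with NumPy's internal fallback.
best = min(t.repeat(repeat=3, number=10)) / 10
print("300x300 dot: %.5f s per call" % best)
```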
On Tue, Feb 22, 2011 at 09:59:26PM +0000, Pauli Virtanen wrote:
> > Probably because the numpy binary that the author was using was compiled
> > without a blas implementation, and just using numpy's internal
> > lapack_lite. This is a common problem in real life.
> It doesn't use blas_lite at the moment.
On Tue, 22 Feb 2011 22:48:09 +0100, Gael Varoquaux wrote:
[clip]
> Probably because the numpy binary that the author was using was compiled
> without a blas implementation, and just using numpy's internal
> lapack_lite. This is a common problem in real life.
It doesn't use blas_lite at the moment.
On Tue, 22 Feb 2011 16:44:56 -0500, Alan G Isaac wrote:
[clip]
> I don't believe the matrix multiplication results. Maybe I misunderstand
> them ...
>
> >>> t = timeit.Timer("np.dot(A,B)", "import numpy as np; N=1500; A=np.random.random((N,N)); B=np.random.random((N,N))")
> >>> pr
On Tue, Feb 22, 2011 at 04:44:56PM -0500, Alan G Isaac wrote:
> On 2/22/2011 3:45 PM, Sturla Molden wrote:
> > I came across some NumPy performance tests by NASA. Comparisons against
> > pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
> > and Java. For those that are interest
On 2/22/2011 3:45 PM, Sturla Molden wrote:
> I came across some NumPy performance tests by NASA. Comparisons against
> pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
> and Java. For those that are interested, it is here:
> https://modelingguru.nasa.gov/docs/DOC-1762
I don't believe the matrix multiplication results.
Thanks for posting a nice report.
Akand
On Tue, Feb 22, 2011 at 2:45 PM, Sturla Molden wrote:
> I came across some NumPy performance tests by NASA. Comparisons against
> pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
> and Java. For those that are interested, it is here:
I came across some NumPy performance tests by NASA. Comparisons against
pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
and Java. For those that are interested, it is here:
https://modelingguru.nasa.gov/docs/DOC-1762
Sturla