[Numpy-discussion] Revisiting numpy/scipy on 64 bit OSX

2008-08-21 Thread Michael Abshoff
Hi,

I have been playing with 64 bit numpy/scipy on OSX 10.5 Intel again. I 
thought everything worked after the last time we discussed this, but I 
noticed that I had Scipy import failures, for example in optimize, since 
the gfortran I used creates 32 bit code by default.

So I built gcc 4.2.4 on OSX, then built Python in 64 bit mode (fixing 
configure.in, since Python assumes flags only available with the Apple 
gcc on Darwin), and added a wrapper around gfortran that injects a "-m64" 
compile flag. Then I checked out and built numpy (r5671), scipy (r4661) 
and the current nose release and ran the tests. I had more failures 
than I expected (details below).
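Such a wrapper can be a couple of lines; here is a minimal sketch 
(hypothetical - the actual script may have differed, and the path to 
the real gfortran is illustrative):

    #!/usr/bin/env python
    # Forward every call to the real gfortran with "-m64" injected so
    # all Fortran objects come out 64 bit. Using the full path avoids
    # the wrapper re-invoking itself via $PATH.
    import os
    import sys

    os.execvp("/usr/local/bin/gfortran", ["gfortran", "-m64"] + sys.argv[1:])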

Any thoughts on the failures? I am about to build ATLAS and a couple 
other dependencies on that box.

William Stein is at Scipy 08 and there seems to be at least some 
interest in 64 bit OSX binaries. If anyone wants one let me know and I 
can put one together once Scipy 0.7 and Numpy 1.1.2 are out.

Cheers,

Michael

numpy failed tests:


File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/numpy/lib/tests/test_polynomial.py", line 29, in test_polynomial
Failed example:
 p / q
Expected:
 (poly1d([ 0.]), poly1d([ 1.,  2.6667]))
Got:
 (poly1d([ 0.333]), poly1d([ 1.333,  2.667]))
**********************************************************************
File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/numpy/lib/tests/test_polynomial.py", line 51, in test_polynomial
Failed example:
 p.integ()
Expected:
 poly1d([ 0.,  1.,  3.,  0.])
Got:
 poly1d([ 0.333,  1.   ,  3.   ,  0.   ])
**********************************************************************
File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/numpy/lib/tests/test_polynomial.py", line 53, in test_polynomial
Failed example:
 p.integ(1)
Expected:
 poly1d([ 0.,  1.,  3.,  0.])
Got:
 poly1d([ 0.333,  1.   ,  3.   ,  0.   ])
**********************************************************************
File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/numpy/lib/tests/test_polynomial.py", line 55, in test_polynomial
Failed example:
 p.integ(5)
Expected:
 poly1d([ 0.00039683,  0.0028,  0.025 ,  0.,  0.,  0.,  0.,  0.])
Got:
 poly1d([ 0.   ,  0.003,  0.025,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ])


File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/numpy/lib/tests/test_ufunclike.py", line 48, in test_ufunclike
Failed example:
 U.log2(a)
Expected:
 array([ 2.169925  ,  1.20163386,  2.70043972])
Got:
 array([ 2.17 ,  1.202,  2.7  ])
**********************************************************************
File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/numpy/lib/tests/test_ufunclike.py", line 53, in test_ufunclike
Failed example:
 U.log2(a, y)
Expected:
 array([ 2.169925  ,  1.20163386,  2.70043972])
Got:
 array([ 2.17 ,  1.202,  2.7  ])
**********************************************************************
File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/numpy/lib/tests/test_ufunclike.py", line 55, in test_ufunclike
Failed example:
 y
Expected:
 array([ 2.169925  ,  1.20163386,  2.70043972])
Got:
 array([ 2.17 ,  1.202,  2.7  ])
======================================================================
FAIL: test_umath.TestComplexFunctions.test_it
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/nose/case.py", line 182, in runTest
    self.test(*self.arg)
  File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/numpy/core/tests/test_umath.py", line 196, in test_it
    assert_almost_equal(fz.real, fr, err_msg='real part %s'%f)
  File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal: real part 
  ACTUAL: -0.3010299956639812
  DESIRED: 0.0
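Most of the failures above look like doctest precision mismatches (the 
expected output carries more digits than the installed numpy prints) 
rather than wrong numerics. A minimal check of the first case, assuming 
the same p and q as numpy's test_polynomial:

    import numpy as np

    p = np.poly1d([1., 2., 3.])   # x**2 + 2x + 3
    q = np.poly1d([3., 2., 1.])   # 3x**2 + 2x + 1

    # poly1d division returns (quotient, remainder); the values agree
    # with the "Got" output above, only the print precision differs.
    quot, rem = p / q
    print(quot)   # 0.3333
    print(rem)    # 1.333 x + 2.667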


scipy:

======================================================================
ERROR: test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 204, in test_nonsymmetric_modes
    self.eval_evec(m,typ,k,which)
  File "/Users/mabshoff/64bit_numpy_scipy/lib/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 186, in eval_evec
 eval,evec

Re: [Numpy-discussion] Revisiting numpy/scipy on 64 bit OSX

2008-08-21 Thread Michael Abshoff
Stéfan van der Walt wrote:
> 2008/8/21 Michael Abshoff <[EMAIL PROTECTED]>:
>> William Stein is at Scipy 08 and there seems to be at least some
>> interest in 64 bit OSX binaries. If anyone wants one let me know and I
>> can put one together once Scipy 0.7 and Numpy 1.1.2 are out.
> 
> There is still interest in having a 64-bit OSX buildbot, which would
> allow us to support this platform.  Would you still be able to host
> it, as per our previous discussion?

Yep, I really got distracted the last few months doing other things, but 
now the numpy and scipy upgrades in Sage are high priority, since they 
fix a number of build issues and segfaults on Solaris and 64 bit OSX :)

> Regards
> Stéfan

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Revisiting numpy/scipy on 64 bit OSX

2008-08-22 Thread Michael Abshoff
Robert Kern wrote:
> On Fri, Aug 22, 2008 at 07:00, Chris Kees
> <[EMAIL PROTECTED]> wrote:

Hi,

>> I've been experimenting with both a non-framework, non-universal 64-bit
>> build and a 4-way universal build of the python (2.6) trunk with numpy
>> 1.1.1. The non-framework 64 build appears to give me exactly the same
>> results from numpy.test() as the standard 32-bit version (as well as
>> allowing large arrays like numpy.zeros((1000,1000,1000),'d') ), which is
>>
>> 

I used a gcc 4.2.4 built from sources, and the last time I used XCode 
(plus a gfortran built from 4.2.3 sources) most things went fine. 
Note that I am using Python 2.5.2 and that I had to make configure.in 
not add some default BASEFLAGS, since those added flags only exist for 
the Apple version of gcc. This was also numpy and scipy svn tip :)

>> Our numerical models also seem to run fine with it using 8-16G. The 4-way
>> universal python gives the same results in 32-bit  but when running in
>> 64-bit I get an error in the tests below, which I haven't had time to look
>> at.  It also gives the error
>>
> a = numpy.zeros((1000,1000,1000),'d')
>> Traceback (most recent call last):
>>  File "", line 1, in 
>> ValueError: dimensions too large.
> 
> Much of our configuration occurs by compiling small C programs and
> executing them. Probably, one of these got run in 32-bit mode, and
> that fooled the numpy build into thinking that it was for 32-bit only.

Yeah, building universal binaries is still fraught with issues, since 
much code out there uses values derived at configure time for endianness 
and other settings. IIRC Apple patches Python to turn some of those 
constants into functions, but that recollection might be wrong in the 
case of Python. I usually use lipo to make universal binaries these days 
to get around that limitation, but that means four compilations instead 
of two.
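A quick way to check which mode a given interpreter (and its numpy) 
actually runs in, and whether large allocations work - a minimal sketch:

    import struct
    import numpy as np

    # pointer size of the running interpreter: 32 or 64
    print(struct.calcsize("P") * 8)

    # an array like the one above (~7.5 GB) only allocates when both
    # the interpreter and numpy are genuinely 64 bit:
    a = np.zeros((1000, 1000, 1000), 'd')
    print(a.nbytes / 2.0 ** 30)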

My main goal in all of this is a 64 bit Sage on OSX (which I am 
reasonably close to having fully working), but due to the above-mentioned 
problems, for example with gmp, it seems unlikely that I can produce a 
universal version directly, and lipo is a way out of this.

> Unfortunately, what you are trying to do is tantamount to
> cross-compiling, and neither distutils nor the additions we have built
> on top of it work very well with cross-compiling. It's possible that
> we could special case the configuration on OS X, though. Instead of
> trusting the results of the executables, we can probably recognize
> each of the 4 OS X variants through #ifdefs and reset the discovered
> results. This isn't easily extended to all platforms (which is why we
> went with the executable approach in the first place), but OS X on
> both 32-bit and 64-bit will be increasingly common but still
> manageable. I would welcome contributions in this area.

I am actually fairly confident that 64 bit Intel will dominate the Apple 
userbase in the short term. Every laptop and workstation sold by Apple 
now and in the last two years or so has been 64 bit capable, and for 
console applications OSX 10.4 or higher will do. So I see little benefit 
in doing 32 bit on OSX except for legacy support :). I know that Apple 
hardware tends to stick around longer, so even Sage will support 32 bit 
OSX for a while.

Cheers,

Michael
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] building numpy locally but get error: undefined symbol: zgesdd_

2008-09-18 Thread Michael Abshoff
Robert Kern wrote:
> On Thu, Sep 18, 2008 at 06:26, Francis <[EMAIL PROTECTED]> wrote:
>> Thank you for your effort. I guess garnumpy reflects the idea in this
>> Pylab discussion: http://www.scipy.org/PyLab
>>
>> Again I get errors in libblas/lapack related to gfortran (local
>> variable problems). I replaced the libblas.a and the liblaplack.a by
>> the ones of sage. And started make install again. It seems to work
>> until it tries to configure/install umfpack.
>>
>> /users/francisd/garnumpy/lib/libblas.a(xerbla.o): In function
>> `xerbla_':
>> xerbla.f:(.text+0x1b): undefined reference to `_g95_get_ioparm'
> 
> Yes, SAGE uses g95, not gfortran.
> 

It depends :)

By default on OSX and Linux x86 and x86-64, Sage provides a binary g95. 
You can use the environment variable SAGE_FORTRAN to point it at 
gfortran instead. Additionally, there is some hacked-in code in various 
places in Scipy's setup.py that autodetects which Fortran runtime (g95 
or gfortran) ATLAS was built with and adds those libraries to the 
LDFLAGS dynamically. That code is butt ugly (and it is Perl, too :)), 
but it does the job. Obviously it will not help if you mix and match 
Fortran compilers.
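For example (purely illustrative - normally you would export the 
variable in the shell before running make; the gfortran path is an 
assumption):

    # select gfortran instead of the bundled g95 for a Python-driven build
    import os
    os.environ["SAGE_FORTRAN"] = "/usr/local/bin/gfortran"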

Cheers,

Michael
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] complex roots() segfault on Solaris 10/x86 with numpy 1.2.rc1 using python 2.5.2

2008-09-22 Thread Michael Abshoff
Hi,

I have been having trouble with the following computation with numpy 
1.2.rc1:

Python 2.5.2 (r252:60911, Jul 14 2008, 15:20:38)
[GCC 4.2.4] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
 >>> from numpy import *
 >>> a = array([1,0,0,0,0,-1],dtype=complex)
 >>> a
array([ 1.+0.j,  0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j, -1.+0.j])
 >>> roots(a)
Segmentation Fault  (core dumped) python "$@"

This is Python 2.5.2 built with gcc 4.2.4; numpy itself is built with 
"-O0", i.e. this is unlikely to be a compiler bug IMHO. This bug has 
been present in 1.0.4 and 1.1.0, and it seems unfixed in 1.2.rc1. The 
numpy 1.1 test suite passed with that install; I did not run the 1.2.rc1 
one yet since I do not have nose installed.
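For context, numpy's roots() builds the companion matrix of the 
polynomial and computes its eigenvalues via LAPACK (eigvals), which is 
why a broken BLAS/Lapack surfaces here. A minimal sketch of the 
equivalent computation:

    import numpy as np

    a = np.array([1, 0, 0, 0, 0, -1], dtype=complex)

    # companion matrix of x**5 - 1; roots(a) reduces to this eigenproblem
    A = np.diag(np.ones(4, dtype=complex), -1)
    A[0, :] = -a[1:] / a[0]
    print(np.linalg.eigvals(A))   # the five fifth roots of unity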

gdb says nothing particularly helpful:

-bash-3.00$ gdb python
GNU gdb 6.6
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-pc-solaris2.10"...
(gdb) r
Starting program: /home/mabshoff/sage-3.0.5-x86-solaris/local/bin/python
warning: rw_common (): unable to read at addr 0xd3f08
warning: sol_thread_new_objfile: td_ta_new: Debugger service failed
warning: Unable to find dynamic linker breakpoint function.
GDB will be unable to debug shared library initializers
and track explicitly loaded dynamic code.
Python 2.5.2 (r252:60911, Jul 14 2008, 15:20:38)
[GCC 4.2.4] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
 >>> from numpy import *
 >>> a = array([1,0,0,0,0,-1],dtype=complex)
 >>> a
array([ 1.+0.j,  0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j, -1.+0.j])
 >>> roots(a)

Program received signal SIGSEGV, Segmentation fault.
0xfed622b3 in csqrt () from /lib/libm.so.2
(gdb) bt
#0  0xfed622b3 in csqrt () from /lib/libm.so.2
#1  0xfe681d5a in ?? ()
#2  0x in ?? ()
(gdb) quit

Any pointers?

Cheers,

Michael

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] complex roots() segfault on Solaris 10/x86 with numpy 1.2.rc1 using python 2.5.2

2008-09-22 Thread Michael Abshoff
David Cournapeau wrote:
> Michael Abshoff wrote:

Hi David,

>> This is Python 2.5.2 built with gcc 4.2.4; numpy itself is built with 
>> "-O0", i.e. this is unlikely to be a compiler bug IMHO. This bug has 
>> been present in 1.0.4 and 1.1.0, and it seems unfixed in 1.2.rc1. The 
>> numpy 1.1 test suite passed with that install; I did not run the 
>> 1.2.rc1 one yet since I do not have nose installed.
>>
>> gdb says nothing particularly helpful:
> 
> Yes, because you built python with -O0; you should add at least -g to
> the cflags, because here you don't have debugging symbols, meaning
> gdb won't be of much help, unfortunately.
> 
> Would it be possible for you to rebuild python and numpy with debugging
> info ? 

Sorry for not being precise: both python and numpy have been built with

OPT=-DNDEBUG -g -O0 -fwrapv -Wall -Wstrict-prototypes

i.e. "-O0" instead of "-O3". I am using ATLAS and netlib.org Lapack, so 
I will rebuild everything and run the BLAS as well as Lapack testers to 
make 100% sure everything is working correctly. Since the above python 
is part of a Sage build and passes 99% of Sage's doctests, I am pretty 
sure that ATLAS and Lapack do work, but one never knows. Since I am 
sleepy I will do all that tomorrow.

> I tried to look into that bug, but I can't reproduce it on
> opensolaris with gcc and python 2.4

Ok. Which compiler did you use?

> cheers,
> 
> David


Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] complex roots() segfault on Solaris 10/x86 with numpy 1.2.rc1 using python 2.5.2

2008-09-23 Thread Michael Abshoff
David Cournapeau wrote:
> Michael Abshoff wrote:

Hi David,

>> Sorry for not being precise: both python and numpy have been built with
>>
>> OPT=-DNDEBUG -g -O0 -fwrapv -Wall -Wstrict-prototypes
> 
> Hm, strange. I don't know why you can't get any debug info, then.

Well, it looks like some sort of stack corruption.

>> i.e. "-O0" instead of "-O3". I am using ATLAS and netlib.org Lapack, so 
>> I will rebuild everything and run the BLAS as well as Lapack testers to 
>> make 100% sure everything is working correctly. Since the above python 
>> is part of a Sage build and passes 99% of Sage's doctests, I am pretty 
>> sure that ATLAS and Lapack do work, but one never knows. Since I am 
>> sleepy I will do all that tomorrow.
> 
> It is indeed very likely the problem is there: typically, problems when
> passing complex numbers at the C/Fortran boundary, this kind of thing.
> 
>> Ok. Which compiler did you use?
> 
> gcc 3.4 (the one given by OpenSolaris by default), no blas/lapack. I
> actually would try this configuration first (no blas/lapack at all), to
> be sure whether it is blas/lapack or us.

I had a second wind and just built python+numpy without blas and 
lapack, and lo and behold, it works:

Python 2.5.2 (r252:60911, Sep 23 2008, 04:25:09)
[GCC 4.2.4] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
 >>> from numpy import *
 >>> a = array([1,0,0,0,0,-1],dtype=complex)
 >>> a
array([ 1.+0.j,  0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j, -1.+0.j])
 >>> roots(a)
array([-0.80901699+0.58778525j, -0.80901699-0.58778525j,
        0.30901699+0.95105652j,  0.30901699-0.95105652j,  1.+0.j])
 >>> ^D

So it looks like gfortran 4.2.4 on Solaris + "-O3" miscompiles lapack, 
which is honestly unbelievable considering netlib.org lapack+BLAS are 
the yardstick for any Fortran compiler.

Sorry for the noise and thanks for your help.

> cheers,
> 
> David


Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] can't build numpy 1.2.0 under python 2.6 (windows-amd64) using VS9

2008-10-10 Thread Michael Abshoff
David Cournapeau wrote:
> Ravi wrote:

Hi,

>> Michael Abshoff already responded to the ATLAS question. I don't have access 
>> to a 64-bit Windows. Given the volume of legacy 32-bit applications where I 
>> work, there is no chance of 64-bit Windows access for me for at least 2 
>> years.
> 
> Windows 64 actually has a very nice feature: WoW (Windows on Windows).
> It can execute any 32 bit software, AFAIK (which does not run in ring
> 0, of course).

Well, I think that having a 64 bit native build of numpy/scipy using an 
efficient and non-commercially licensed BLAS/Lapack (i.e. not Intel MKL) 
can't be a bad thing :)

>> VS2008 with 32-bit windows should not have any problems (as you mentioned on 
>> the Wiki page referenced above).
> 
> I wish it were true :) I can't build numpy with mingw ATM, because of
> bugs in mingw. Things like:
> 
> http://bugs.python.org/issue3308

It might be possible to build python [2.5|2.6|3.0] with MinGW itself to 
avoid the runtime issue. At least Python 2.4 had problems when built 
with MinGW, and I never investigated whether 2.5.x fixed those issues.

>>  What needs to be done to figure out msvc9 
>> support on mingw and how can I help? I am a Windows n00b (mostly by choice) 
>> when it comes to platform-specific issues.
> 
> Then, I am afraid you won't be of much help, unfortunately.
> 
> cheers,
> 
> David

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] can't build numpy 1.2.0 under python 2.6 (windows-amd64) using VS9

2008-10-10 Thread Michael Abshoff
David Cournapeau wrote:

Hi David,

> 
> I started a wiki page on the issues related to windows, 64 bits and
> python 2.6 (those issues are somewhat related at some level):
> 
> http://scipy.org/scipy/numpy/wiki/MicrosoftToolchainSupport

Cool.

> If you want to help, you can try solving one problem. In particular, if
> you know how to build ATLAS with Visual studio (for 64 bits support), it
> would be really helpful,

The problem with 64 bit ATLAS support for MSVC is that ATLAS uses 
(AFAIK) assembly code that cannot be compiled with the MSVC toolchain. 
Since the official MinGW cannot create 64 bit code (there is some 
experimental support I have not tried yet), the only hope at the moment 
(without converting the assembly) is to use the Intel toolchain. I have 
not tried that yet.

The current ATLAS code even requires Cygwin, but there was recent 
activity on the ATLAS mailing list to support plain MinGW. There are 
also issues with MinGW threading support via winpthread, so there is 
some work ahead to fully support ATLAS with MSVC.

Clint Whaley is supposed to speak at Sage Days 11 in Austin in about a 
month, and I had planned to investigate the possibilities of native 64 
bit ATLAS support for VC9. I had also planned to spend some days after 
SD 11 at Enthought and work on MSVC build issues as well as 64 bit OSX 
stuff, but I need to make plans shortly since I need to buy my plane 
ticket soon.

> cheers,
> 
> David

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] can't build numpy 1.2.0 under python 2.6 (windows-amd64) using VS9

2008-10-10 Thread Michael Abshoff
David Cournapeau wrote:

Hi David,

> Michael Abshoff wrote:
>> Well, I think that having a 64 bit native build of numpy/scipy using an 
>> efficient and non-commercial licensed BLAS/Lapack (i.e. not Intel MKL) 
>> can't be a bad thing :)
> 
> Yes, of course. But it is useful to be able to use a 32 bits toolchain
> to produce 64 bits software.

Sure, but there isn't even a 32 bit gcc out there that can produce 64 
bit PE binaries (aside from the MinGW fork that AFAIK does not work 
particularly well and allegedly has issues with the cleanliness of some 
of its code, which is allegedly why the official MinGW people will not 
touch the code base). It has been rumored for a while that a new version 
of SFU by Microsoft is in the works, based on gcc 4.2.x, that will be 
able to create 64 bit PE binaries, but I have not actually talked to 
anybody who has access, so it could be just a rumor.

>> It might be possible to build python [2.5|2.6|3.0] with MinGW itself to 
>> avoid the runtime issue. At least Python 2.4 had problems when building 
>> it with MinGW and I never investigated if 2.5.x had fixed those issues.
> 
> Not being ABI compatible with Python from python.org is not an option,
> and building it with mingw would make it ABI incompatible for sure. I
> certainly wished they used an open source compiler to build the official
> python binary, but that's not the case.

Ok, that is a concern I usually do not have since I tend to build my own 
Python :).

I am pretty sure that building Python with MinGW will break ABI 
compatibility with Python 2.6. As has been discussed on this list more 
than once, not even Python 2.5 built with MSVC 2003 is really compatible 
with C++ extensions built with MinGW.

> cheers,
> 
> David

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] can't build numpy 1.2.0 under python 2.6 (windows-amd64) using VS9

2008-10-11 Thread Michael Abshoff
David Cournapeau wrote:

> Michael Abshoff wrote:

Hi David,

>> Sure, but there isn't even a 32 bit gcc out there that can produce 64 
>> bit PE binaries (aside from the MinGW fork that AFAIK does not work 
>> particularly well and allegedly has issues with the cleanliness of some 
>> of the code which is allegedly the reason that the official MinGW people 
>> will not touch the code base) .
> 
> The biggest problem is that officially, there is still no gcc 4 release
> for mingw. I saw a gcc 4 section in cygwin, though, so maybe it is about
> to be released. There is no support at all for 64 bits PE in the 3 serie.

Yes, you are correct and I was wrong. I just checked out the mingw-64 
project and there has been a lot of activity over the last couple of 
months, including a patch to build pthread-win32 in 64 bit mode.

> I think binutils officially support 64 bits PE (I can build a linux
> hosted binutils for 64 bits PE with x86_64-pc-mingw32 as a target, and
> it seems to work: disassembling and co). gcc 4 can work, too (you can
> build a bootstrap C compiler which targets windows 64 bits IIRC). The
> biggest problem AFAICS is the runtime (mingw64, which is indeed legally
> murky).

I would really like to find the actual reason *why* the legal status of 
the 64 bit MinGW port is murky (to my knowledge it has to do with taking 
code from the MS Platform toolkit - but that is conjecture), so I guess 
I will do the obvious thing and ask on the MinGW list :)

>> Ok, that is a concern I usually do not have since I tend to build my own 
>> Python :).
> 
> I would say that if you can build python by yourself on windows, you can
> certainly build numpy by yourself :) It took me quite a time to be able
> to build python on windows by myself from scratch.

Sure, I do see your point.

Incidentally, someone posted about

http://debian-interix.net/

on the sage-windows list today. It offers a gcc 4.2 toolchain, and AFAIK 
there is at least a patch set for ATLAS to make it work on Interix.

> cheers,
> 
> David

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simplifying compiler optimization flags logic (fortran compilers)

2008-11-01 Thread Michael Abshoff
Jarrod Millman wrote:
> On Sat, Nov 1, 2008 at 1:07 AM, Robert Kern <[EMAIL PROTECTED]> wrote:

Hi,

>> On Fri, Oct 31, 2008 at 05:25, David Cournapeau
>> <[EMAIL PROTECTED]> wrote:
>>>I was wondering whether it was really worth having a lot of magic
>>> going on in fcompilers for flags like -msse2 and co (everything done in
>>> get_flags_arch, for example). It is quite fragile (we had several
>>> problems wrt buggy compilers, buggy CPU detection), and I am not sure it
>>> buys us much anyway. Did some people notice a difference between
>>> gfortran -O3 -msse2 and gfortran -O3 ?
>> You're probably right.

We removed setting the various SSE flags in Sage's numpy install because 
they caused segfaults when using gfortran. I don't think there is a 
significant performance difference with SSE for that code, because we 
use Lapack and ATLAS built with SSE when it is available.

> I think it is probably best to take out some of the magic in fcompilers as 
> well.
> 


Cheers,

Michael
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fink python26 and numpy 1.2.1

2008-11-02 Thread Michael Abshoff
Gideon Simpson wrote:
> Not sure if this is an issue with numpy or an issue with fink python  
> 2.6, but when trying to build numpy, I get the following error:

Unfortunately numpy 1.2.x does not support Python 2.6. IIRC support is 
planned for numpy 1.3.

> gcc -L/sw/lib -bundle /sw/lib/python2.6/config -lpython2.6 build/ 
> temp.macosx-10.5-i386-2.6/numpy/core/src/multiarraymodule.o -o build/ 
> lib.macosx-10.5-i386-2.6/numpy/core/multiarray.so
> ld: library not found for -lpython2.6
> collect2: ld returned 1 exit status
> ld: library not found for -lpython2.6
> collect2: ld returned 1 exit status
> error: Command "gcc -L/sw/lib -bundle /sw/lib/python2.6/config - 
> lpython2.6 build/temp.macosx-10.5-i386-2.6/numpy/core/src/ 
> multiarraymodule.o -o build/lib.macosx-10.5-i386-2.6/numpy/core/ 
> multiarray.so" failed with exit status 1
> 
> 
> -gideon

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Segfault with dotblas on OS X 10.5.5/PPC (but not on Intel?)

2008-11-12 Thread Michael Abshoff
David Warde-Farley wrote:
> Hello folks,

Hi David,

> I'm doing some rather big matrix products on a G5, and ran into this.  
> Strangely on the same OS version on my Intel laptop, this isn't an  
> issue. Available memory isn't the problem either, I don't think, this  
> machine is pretty beefy.

Can you define that? People's definitions vary :)

> I'm running the python.org 2.5.2 build of Python, and the latest SVN  
> build of numpy (though the same thing happened with 1.1.0).

IIRC that is a universal build for 32 bit PPC and Intel, so depending on 
the problem size, 32 bits might be insufficient.
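A back-of-the-envelope check of how quickly 32 bit addressing runs out 
on dense products (the size here is illustrative, not taken from the 
original report):

    n = 15000
    bytes_per_matrix = n * n * 8        # float64 entries
    total = 3 * bytes_per_matrix        # A, B and the product
    print(total / 2.0 ** 30)            # ~5 GiB, beyond any 32 bit process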

Cheers,

Michael
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy and MKL, update

2008-11-13 Thread Michael Abshoff
David Cournapeau wrote:
> On Fri, Nov 14, 2008 at 5:23 AM, frank wang <[EMAIL PROTECTED]> wrote:
>> Hi,

Hi,

>> Can you provide a working example to build Numpy with MKL in window and
>> linux?
>> The reason I am thinking to build the system is that I need to make the
>> speed match with matlab.
> 
> The MKL will only help you for linear algebra, and more specifically
> for big matrices. If you build your own atlas, you can easily match
> matlab speed in that area, I think.

That is pretty much true in my experience for anything but Core2 Intel 
CPUs, where GotoBLAS and the latest MKL have about a 25% advantage for 
large problems. That is to a large extent fixed in the development 
version of ATLAS, i.e. 3.9.4, where on Core2 the advantage shrinks to 
about 5% to 8%. Clint Whaley gave a talk at the BOF linear algebra 
session of Sage Days 11 this week, but his slides are not up on the 
wiki yet.

The advantage of the MKL is that one library works more or less 
optimally on all platforms, i.e. with and without SSE2 for example, 
since the "right" routines are selected at run time. That makes the MKL 
much larger, too, so depending on what your goal is, either one could be 
"better".
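A crude benchmark of the kind used for such BLAS comparisons (a minimal 
sketch; the number reflects whatever BLAS numpy was linked against):

    import time
    import numpy as np

    n = 2000
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)

    t0 = time.time()
    C = np.dot(A, B)                    # dgemm in the underlying BLAS
    elapsed = time.time() - t0

    # a dense matrix-matrix multiply costs about 2*n**3 flops
    print("%.2f GFlop/s" % (2.0 * n ** 3 / elapsed / 1e9))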

> David

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy and MKL, update

2008-11-13 Thread Michael Abshoff
David Cournapeau wrote:
> On Fri, Nov 14, 2008 at 11:07 AM, Michael Abshoff
> <[EMAIL PROTECTED]> wrote:
>> David Cournapeau wrote:
>>> On Fri, Nov 14, 2008 at 5:23 AM, frank wang <[EMAIL PROTECTED]> wrote:
>>>> Hi,
>> Hi,
>>
>>>> Can you provide a working example to build Numpy with MKL in window and
>>>> linux?
>>>> The reason I am thinking to build the system is that I need to make the
>>>> speed match with matlab.
>>> The MKL will only help you for linear algebra, and more specifically
>>> for big matrices. If you build your own atlas, you can easily match
>>> matlab speed in that area, I think.
>> That is pretty much true in my experience for anything but Core2 Intel
>> CPUs where GotoBLAS and the latest MKL have about a 25% advantage for
>> large problems.
> 
> Note that I never said that ATLAS was faster than MKL/GotoBLAS :) 

:)

> I said you could match matlab performances (which itself, up to 6.* at
> least, used ATLAS; you could increase matlab performances by using
> your own ATLAS BTW).

Yes, back in the day I got a threefold speedup for a certain workload 
in Matlab by replacing the BLAS and UMFPACK libraries.

> I don't think 25 % matter that much, because if
> it does, then you should not use python anyway in many cases (depends
> on the kind of problems of course, but I don't think most scientific
> problems reduce to just matrix product/inversion).

Sure, I agree here. But 25% performance on dgemm is significant for 
some workloads; if you spend the vast majority of time in Python code 
it won't matter. And sometimes it is way more than that - see my 
remarks below.

>> The advantage of the MKL is that one library works more or less optimal
>> on all platforms, i.e. with and without SSE2 for example since the
>> "right" routines are selected at run time.
> 
> Agreed. As a numpy/scipy developer, I would actually be much more
> interested in work into that direction for ATLAS than trying to get a
> few % of peak speed.

Note that selecting non-SSE2 versions of ATLAS can cause a significant 
slowdown. One day not too long ago, Ondrej Certik and I were sitting in 
IRC in #sage-devel benchmarking some things. His Debian install was a 
factor of 12 slower than the same software he had built with Sage, and 
in the end it boiled down to non-SSE2 ATLAS vs. SSE2 ATLAS. That is a 
freak case, but I am sure more than enough people will get bitten by 
that issue, since they installed "ATLAS" in Debian but did not know 
about the SSE2 ATLAS packages.
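Two quick checks that would have exposed such a mismatch (the /proc 
path is Linux-specific):

    import numpy as np

    # which BLAS/LAPACK numpy was built against
    np.show_config()

    # Linux-specific: does the CPU actually advertise SSE2?
    print('sse2' in open('/proc/cpuinfo').read())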

And a while back someone compared various numerical closed and open 
source projects in an article for some renowned Linux magazine, among 
them Sage. They ran a bunch of numerical benchmarks, namely FFT and SVD, 
and Sage via numpy blew Matlab away by a factor of three on the SVD 
(the FFT did not look so good because Sage is still using GSL for FFT, 
but we will change that). Obviously that was not because numpy was 
clever about the SVD (I know there are several versions in Lapack, but 
the performance difference is usually small), but because Matlab used 
some generic build of the BLAS (it was unclear from the article whether 
it was MKL or ATLAS) while Sage used a custom-built SSE2 version. The 
reviewer expressed admiration for numpy and its clever SVD 
implementation - sigh.

> Deployment of ATLAS is really difficult ATM, and
> it means that practically, we lose a lot of performances because for
> distribution, you can't tune for every CPU out there, so we just use
> safe defaults. Same for linux distributions. It is a shame that Apple
> did not open source their Accelerate framework (based on ATLAS, at
> least for the BLAS/LAPACK part), because that's exactly what they did.

Yes, Clint has been in contact with Apple, but never got anything out of 
them. Too bad. The new ATLAS release should fix some build issues 
regarding the dreaded timing tolerance problem, and it will also work 
much better with threads, since Clint rewrote the threading module so 
that memory allocation is no longer the bottleneck. He also added native 
threading support for Windows, but that is not being tested yet, so 
hopefully it will work in a future version. The main issue here is that 
for assembly support Clint relies on gcc, which is hardcoded into the 
Makefiles; we discussed various options for how that can be avoided, but 
so far no progress can be reported.

> David

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing

2008-11-26 Thread Michael Abshoff
Travis E. Oliphant wrote:
> Hello,



Hi Travis,

> It is currently available as a single-click installer for Windows XP
> (x86), Mac OS X (a universal binary for OS X 10.4 and above), and
> RedHat 3 and 4 (x86 and amd64).

I am sure you mean RHEL 3 and 4? This "Redhat 3 and 4" always strikes me 
as vague :)



> 
> Enthought Build Team

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy errors when importing in Picalo

2008-11-26 Thread Michael Abshoff
igor Halperin wrote:
> Hi,

Hi

> I get numpy errors after I install Picalo (www.picalo.org) on Mac OS X 
> 10.4.11 Tiger. I have tried to import numpy in Picalo using the 
> instructions in PicaloCookBook, p.101. I get this error message, which 
> I don't understand. Per the Picalo author (see below for his reply to 
> my email to the Picalo discussion forum), I am trying it here.
> 
>  I use numpy v. 1.0.4.  distributed with Scipy superpack 
> (http://macinscience.org/?page_id=6) 
> 
> Could anyone please help?

The problem is that numpy was built using a Python that was built with 
ucs4 (it is a unicode thing), while the Python you run (I assume the 
Apple one) is ucs2. To fix this, either build your own numpy or get a 
binary one that is ucs2, but I have no clue where one would get such a 
thing.
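A quick way to see which unicode width an interpreter was built with is 
to run the following under both Pythons; the two must agree for the 
extension to import (hence the missing _PyUnicodeUCS4_FromUnicode symbol 
in the traceback quoted below):

    import sys

    # 65535 on a UCS-2 build, 1114111 on a UCS-4 build
    print(sys.maxunicode)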

> Thanks, and cheers
> Igor

Cheers,

Michael

> sys.path.append('/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/')
> import numpy
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/Applications/sage/local/lib/python2.5/site-packages/numpy/__init__.py", line 93, in <module>
>   File "/Applications/sage/local/lib/python2.5/site-packages/numpy/add_newdocs.py", line 9, in <module>
>   File "/Applications/sage/local/lib/python2.5/site-packages/numpy/lib/__init__.py", line 4, in <module>
>   File "/Applications/sage/local/lib/python2.5/site-packages/numpy/lib/type_check.py", line 8, in <module>
>   File "/Applications/sage/local/lib/python2.5/site-packages/numpy/core/__init__.py", line 5, in <module>
> ImportError: dlopen(/Applications/sage/local/lib/python2.5/site-packages/numpy/core/multiarray.so, 2): Symbol not found: _PyUnicodeUCS4_FromUnicode
>   Referenced from: /Applications/sage/local/lib/python2.5/site-packages/numpy/core/multiarray.so
>   Expected in: dynamic lookup
> 
> Conan C. Albrecht replied (Nov 23):
>
> You're doing everything right from my perspective.  It looks like a 
> problem with NumPy.  The stack trace goes to multiarray.so in their core 
> toolkit.  I think you should hit their forums and see if they can help.
> 
> One idea is that Picalo uses unicode for all data values.  Perhaps numpy 
> can't handle unicode?
> 
> 
> 
> 
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Problems building numpy on solaris 10 x86

2008-11-26 Thread Michael Abshoff
David Cournapeau wrote:
> On Thu, Nov 27, 2008 at 1:16 AM, Peter Norton
> <[EMAIL PROTECTED]> wrote:
>>
>> On Tue, Nov 25, 2008 at 11:28 PM, David Cournapeau
>> <[EMAIL PROTECTED]> wrote:
>>> Charles R Harris wrote:

>>>> What happens if you go the usual python setup.py {build,install} route?
>>> Won't go far since it does not handle sunperf.
>>>
>>> David
>>
>> Even though the regular build process appears to complete, it seems to be
>> doing the wrong thing. It seems, for instance, that lapack_lite.so is being
>> built as an executable:
>>
>> [EMAIL PROTECTED] 11:14 ~ $ gnu file
>> /usr/local/python-2.5.1/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
>> /usr/local/python-2.5.1/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so:
>> ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked
>> (uses shared libs), not stripped
>> ???

Hi,

> I think this is "expected" if python was built with one compiler and
> numpy with another (python with Forte and numpy with gcc). Distutils
> knows the options from python itself, whereas it is optional in
> numscons (in theory, you can set it up to use python options or known
> configurations).

Hmm, I recently built numpy 1.2.1 on FreeBSD 7 and had trouble with 
lapack_lite.so. The fix was to add a "-shared" flag. I needed the same 
fix on Cygwin.

> I don't think you will have much hope with distutils, unless you are
> ready to add code by yourself (sunperf will be very difficult to
> support, though). 

Why? What do you think makes sunperf problematic? [Not that I want to do 
the work, just curious :)]

> The numscons error has nothing to do with solaris,
> the scons scripts should be there. Could you give me the full output
> of python setupscons.py scons ?
> 
> David

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy build error on Solaris, No module named _md5

2008-12-09 Thread Michael Abshoff
Gong, Shawn (Contractor) wrote:
> hi list,

Hi Shawn,

> I tried to build numpy 1.2.1 on Solaris 9 with gcc 3.4.6
> 
> when I typed “python setup.py build”, I got an error from hashlib.py
> 
>   File "/home/sgong/dev181/dist/lib/python2.5/hashlib.py", line 133, in <module>
>     md5 = __get_builtin_constructor('md5')
>   File "/home/sgong/dev181/dist/lib/python2.5/hashlib.py", line 60, in __get_builtin_constructor
>     import _md5
> ImportError: No module named _md5
> 
> I then tried python 2.6.1 instead of 2.5.2, but got the same error.
> 
> I did not get the error while building on Linux. But I performed these 
> steps on Linux:
> 
> 1) copy *.a Atlas libraries to my local_install/atlas/
> 
> 2) ranlib *.a
> 
> 3) created a site.cfg
> 
> Do I need to do the same on Solaris?
> 
> Any help is appreciated.

This is a pure Python issue and has nothing to do with numpy. When 
Python was built for that install, it either did not have access to 
OpenSSL or the Sun crypto libs, or you are missing some bits that need 
to be installed on Solaris. Did you build that Python on your own, or 
where did it come from?
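A quick way to confirm it is the interpreter and not numpy is to run 
this under the exact Python used for the build:

    # If this already fails, the Python build itself lacks _md5/_hashlib
    # (typically because no usable crypto library was found at build
    # time), and numpy is not at fault.
    import hashlib
    print(hashlib.md5('x').hexdigest())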

> thanks,
> 
> Shawn
> 

Cheers,

Michael
> 
> 
> 
> 
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Singular Matrix problem with Matplitlib in Numpy (Windows - AMD64)

2008-12-17 Thread Michael Abshoff
George wrote:
> David Cournapeau  gmail.com> writes:



Hi George,

> I have debugged some more but I am in deep (murky) waters, but I have also ran
> out of ideas. If anybody has some more suggestions, please post them.

Could you post a full example, with version info for everything you are 
using? Ever since Sage upgraded to Matplotlib 0.98.3 I have been seeing 
issues with uninitialized values being used in certain code paths. This 
could be the source of potential trouble, even though it doesn't seem to 
cause any observable problems with gcc, for example.

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Build errors

2009-01-15 Thread Michael Abshoff
Gael Varoquaux wrote:

Hi Gael,

> OK, here we go for the stupid questions showing that I really don't
> understaind building well:
> 
> I am building numpy on a Mandriva x86_64. I built an optimized ATLAS, and
> added the relevant lines to the site.cfg so that numpy does find the
> libraries. But now I get the following error at build:
> 
> /usr/bin/ld: /volatile/varoquau/usr/lib//libptcblas.a(cblas_dptgemm.o):
> relocation R_X86_64_32 against `a local symbol' can not be used when
> making a shared object; recompile with -fPIC
> /volatile/varoquau/usr/lib//libptcblas.a: could not read symbols: Bad
> value
> collect2: ld returned 1 exit status
> 
> I must confess I really have no clue what this means, and how I solve
> this.

You need to build a dynamic version of ATLAS, or alternatively make 
ATLAS use -fPIC when building the static libs. Note that AFAIK ATLAS 
3.8.2's make install does not copy over the dynamic libs, but you should 
easily be able to copy them over manually.

The patch I am using is

--- Make.top.orig   2009-01-01 19:20:21.0 -0800
+++ Make.top2008-03-20 02:26:35.0 -0700
@@ -298,5 +298,11 @@
- chmod 0644 $(INSTdir)/libf77blas.a
- cp $(LIBdir)/libptcblas.a $(INSTdir)/.
- cp $(LIBdir)/libptf77blas.a $(INSTdir)/.
+   - cp $(LIBdir)/libatlas.so $(INSTdir)/.
+   - cp $(LIBdir)/libcblas.so $(INSTdir)/.
+   - cp $(LIBdir)/libf77blas.so $(INSTdir)/.
+   - cp $(LIBdir)/liblapack.so $(INSTdir)/.
+   - chmod 0644 $(INSTdir)/libatlas.so $(INSTdir)/liblapack.so \
+$(INSTdir)/libcblas.so $(INSTdir)/libcblas.so
- chmod 0644 $(INSTdir)/libptcblas.a $(INSTdir)/libptf77blas.a


but you would need to add the appropriate lines for the multi-threaded 
libs, too. The install issue is fixed in ATLAS 3.9.x AFAIK.

> Cheers,
> 
> Gaël

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Having trouble installing Numpy on OS X

2009-01-24 Thread Michael Abshoff
Robert Kern wrote:
> On Sat, Jan 24, 2009 at 18:31, Nat Wilson  wrote:
>> It throws this out.
>>
>> Python 2.6.1 (r261:67515, Jan 24 2009, 16:08:37)
>> [GCC 4.0.0 20041026 (Apple Computer, Inc. build 4061)] on darwin
>> Type "help", "copyright", "credits" or "license" for more information.
>>  >>> import os
>>  >>> os.urandom(16)
>> '\xe0;n\x8a*\xb4\x08N\x80<\xef\x9b*\x06\x1b\xc4'
>>  >>>
> 
> Well, looking at the C code for random_seed(), I don't see a way for
> it to return NULL without having an exception set (assuming that the
> Python API calls aren't buggy). Except maybe the assert() call in
> there. When you built your Python, are you sure that -DNDEBUG was
> being used?
> 

Well, the gcc used to compile Python is rather ancient, and that gcc 
release by Apple has the reputation of being "buggier than a Florida 
swamp in July"; at least for building Sage it is blacklisted. So I would 
suggest updating gcc by installing a more recent XCode and trying again.

Cheers,

Michael
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] glibc error

2009-01-25 Thread Michael Abshoff
David Cournapeau wrote:
> Hoyt Koepke wrote:



> Actually, I would advise using only 3.8.2. Previous versions had bugs
> for some core routines used by numpy (at least 3.8.0 did). I am a bit
> surprised that a 64 bits-built atlas would be runnable at all in a 32
> bits binary - I would expect the link phase to fail if two different
> object formats are linked together.

Linking 32 and 64 bit ELF objects together in an extension will fail on 
any system but OSX, where ld will happily link together anything. Since 
that linker also does missing-symbol lookup at runtime, you will see 
some surprising distutils bugs when you thought the build went 
perfectly. For example, scipy 0.6 would not use the Fortran compiler I 
told it to use: one extension would use gfortran instead of sage_fortran 
when it was available in $PATH. sage_fortran just injects an "-m64" into 
the options and calls gfortran. But with a few Fortran objects being 32 
bit, some extensions in scipy would fail to import, and it took me quite 
a while to track this one down. I haven't had time to test 0.7rc2 yet, 
but hopefully will do so in the next day or two.

> cheers,
> 
> David

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fast threading solution thoughts

2009-02-12 Thread Michael Abshoff
Sturla Molden wrote:
> On 2/12/2009 12:20 PM, David Cournapeau wrote:

Hi,

>> It does if you have access to the parallel toolbox I mentioned earlier
>> in this thread (again, no experience with it, but I think it is
>> specially popular on clusters; in that case, though, it is not limited
>> to thread-based implementation).
> 
> As has been mentioned, Matlab is a safer language for parallel computing 
>   as arrays are immutable. There is almost no need for synchronization 
> of any sort, except barriers.
> 
> Maybe it is time to implement an immutable ndarray subclass?
> 
> With immutable arrays we can also avoid making temporary arrays in 
> expressions like y = a*b + c. y just gets an expression and three 
> immutable buffers. And then numexpr (or something like it) can take care 
> of the rest.
> 
> As for Matlab, I have noticed that they are experimenting with CUDA now, 
> to use nvidia's processors for hardware-acceleration. As even modest 
> GPUs can yield hundreds of gigaflops, 

Not even close. The current generation peaks at around 1.2 TFlops single 
precision and 280 GFlops double precision for ATI's hardware. The main 
problem with those numbers is that the memory on the graphics card 
cannot feed data fast enough into the GPU to reach theoretical peak. So 
those hundreds of GFlops are pure marketing :)

So in reality you might get anywhere from 20% to 60% of peak (if you are 
lucky) locally, before accounting for transfers from main memory to GPU 
memory and so on. Given that recent Intel CPUs give you about 7 to 11 
GFlops double precision per core, and that libraries like ATLAS give you 
that performance today without the need to jump through hoops, those 
numbers start to look a lot less impressive.

And NVidia's numbers are lower than ATI's. NVidia's programming solution 
is much more advanced and rounded out compared to ATI's, which is 
largely in closed beta. OpenCL is mostly vaporware at this point.

 > that is going to be hard to match
> (unless we make an ndarray that uses the GPU). But again, as the 
> performance of GPUs comes from massive multithreading, immutability may 
> be the key here as well.

I have a K10 system with two Tesla C1060 GPUs to play with and have 
thought about adding CUDABlas support to Numpy/Scipy, but it hasn't been 
a priority for me. My main interest here is finite field arithmetic, by 
making FFPack via LinBox use CUDABlas. If anyone wants an account to 
make numpy/scipy optionally use CUDABlas, feel free to ping me off list 
and I can set you up.

> Sturla Molden

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fast threading solution thoughts

2009-02-12 Thread Michael Abshoff
David Cournapeau wrote:
> Matthieu Brucher wrote:
>> For BLAS level 3, the MKL is parallelized (so matrix multiplication is).
>>   

Hi David,

> Same for ATLAS: thread support is one focus in the 3.9 serie, currently
> in development. 

ATLAS has had thread support for a long, long time. The 3.9 series just 
improves it substantially by using affinity when available and removes 
some long standing issues with allocation performance that one had to 
work around before by setting some defines at compile time.

 > I have never used it, I don't know how it compare to the
> MKL,

It compares quite well: ATLAS in the 3.9 series is more or less on par 
with the latest MKL releases, and 3.8.2 is maybe 10% to 15% slower than 
the MKL on i7 as well as Core2 cores. One big advantage of ATLAS is that 
it tends to just work with numpy/scipy, unlike the Intel MKL, where one 
has to work around a bunch of oddities and jump through hoops to get it 
to work. It seems that Intel must rename at least one library in each 
MKL release just to keep build-system maintainers occupied :)

The big disadvantage of ATLAS is that Windows support is currently 
limited to 32 bits, but 3.9.5 and higher have SFU/SUA support, so 64 bit 
support is possible. Clint told me the main issue here was lack of 
access, and it isn't too high on his priority list.

> David

Cheers,

Michael

> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fast threading solution thoughts

2009-02-12 Thread Michael Abshoff
Nathan Bell wrote:
> On Thu, Feb 12, 2009 at 8:19 AM, Michael Abshoff
>  wrote:

Hi,

>> No even close. The current generation peaks at around 1.2 TFlops single
>> precision, 280 GFlops double precision for ATI's hardware. The main
>> problem with those numbers is that the memory on the graphics card
>> cannot feed the data fast enough into the GPU to achieve theoretical
>> peak. So those hundreds of GFlops are pure marketing :)
>>
> 
> If your application is memory bandwidth limited, then yes you're not
> likely to see 100s of GFlops anytime soon.  However, compute limited
> application can and do achieve 100s of GFlops on GPUs.  Basic
> operations like FFTs and (level 3) BLAS are compute limited, as are
> the following applications:
> http://www.ks.uiuc.edu/Research/gpu/
> http://www.dam.brown.edu/scicomp/scg-media/report_files/BrownSC-2008-27.pdf

Yes, certainly. But Sturla implied that some "random consumer GPU" (to 
put a negative spin on it :)) could do the above. There also seems to be 
a huge expectation that "porting your code to the GPU" will make it 10 
to 100 times faster. There are cases like that, as mentioned above, but 
it only applies to a subset of problems.

Another problem is RAM: for many datasets I work with, 512 to 1024 MB 
simply aren't cutting it. That means Tesla cards at $1k and upward, and 
all of a sudden we are playing a different game.

Nine months ago, when we started playing with CUDA, we took a MacBook 
Pro with a decent NVidia card and laughed hard after it became clear 
that its Core2, with either ATLAS or the Accelerate framework (which is 
more or less ATLAS for its BLAS bits), was faster than the built-in 
NVidia card at either single or double precision. Surely this is a 
consumer-level laptop GPU, but I did expect more.

>> So in reality you might get anywhere from 20% to 60% (if you are lucky)
>> locally before accounting for transfers from main memory to GPU memory
>> and so on. Given that recent Intel CPUs give you about 7 to 11 Glops
>> Double per core and libraries like ATLAS give you that performance today
>> without the need to jump through hoops these number start to look a lot
>> less impressive.
> 
> You neglect to mention that CPUs, which have roughly 1/10th the memory
> bandwidth of high-end GPUs, are memory bound on the very same
> problems.  You will not see 7 to 11 GFLops on a memory bound CPU code
> for the same reason you argue that GPUs don't achieve 100s of GFLops
> on memory bound GPU codes.

I am seeing 7 to 11 GFlops per core for matrix-matrix multiplies on 
Intel CPUs using Strassen. And we scale linearly on 16-core Opterons as 
well as a 64-core Itanium box using ATLAS for BLAS level 3 matrix-matrix 
multiply. When you have multiple GPUs you do not have a shared-memory 
architecture (AFAIK the 4-GPU boxes sold by NVidia have fast buses 
between the cards, but aren't ccNUMA or anything like that - please 
correct me if I am wrong).

> In severely memory bound applications like sparse matrix-vector
> multiplication (i.e. A*x for sparse A) the best GPU performance you
> can expect is ~10 GFLops on the GPU and ~1 GFLop on the CPU (in double
> precision).  We discuss this problem in the following tech report:
> http://forums.nvidia.com/index.php?showtopic=83825

Ok, I care about dense operations primarily, but it is interesting to 
see that the GPU fares well on sparse LA.

> It's true that host<->device transfers can be a bottleneck.  In many
> cases, the solution is to simply leave the data resident on the GPU.

Well, that assumes you have enough memory locally for your working set. 
If not, you need to be clever about caching, and I have not seen any 
code in CUDA that takes care of that job for you. I have seen libraries 
like libflame that claim to do it for you, but I have not played with 
them yet.

> For instance, you could imagine a variant of ndarray that held a
> pointer to a device array.  Of course this requires that the other
> expensive parts of your algorithm also execute on the GPU so you're
> not shuttling data over the PCIe bus all the time.

Absolutely. I think that GPUs can fill a large niche in scientific 
computing, but they are not (yet?) the general-purpose CPUs they are 
sometimes made out to be.

> 
> Full Disclosure: I'm a researcher at NVIDIA

Cool. Thanks for the links by the way.

As I mentioned, we have bought Tesla hardware and are working on getting 
our code to use GPUs for numerical linear algebra, exact linear algebra, 
and shortly also things like Monte Carlo simulation. I do think that the 
GPU is extremely useful for much of the above, but there are plenty of 
programming issues to resolve and a lot of infrastructure code to be 
written before 

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Michael Abshoff
David Cournapeau wrote:
> Christian Heimes wrote:
>> David Cournapeau wrote:

Hi,

>> You may call me naive and ignorant. Is it really that hard to achieve
>> some kind of poor man's concurrency? You don't have to parallelize
>> everything to get a speed-up on multi-core machines. Usually the compile
>> step from C/C++ files to object files takes up most of the time.
>>
>> How about
>>
>> * assemble a list of all C/C++ source files of all extensions.
>> * compile all source files in parallel
>> * do the rest (linking etc.) in serial
>>   

With Sage we do the cythonization in parallel and for now build the
extensions serially, but we have code to do that in parallel, too.
Given that we are building 180 or so extensions, the speedup is
linear. I often do this with 24 cores, and it seems robust: I work on
Sage daily, often test builds from scratch, and have never had any
problems with that code.

We use pyprocessing to launch the jobs, and the changes to distutils
are surprisingly small. The original version of the patch broke the
build of numpy/scipy, but I believe the author already has a fix for
that, too - he is just busy finishing his PhD thesis next month and
will then be back to work on Sage. The plan is definitely to push
things back into Python so that everyone building extensions can
benefit.
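
The scheme boils down to something like the following minimal sketch
(using the multiprocessing module, the Python 2.6 incarnation of
pyprocessing; the file names and compiler flags are made up, and this
is not the actual Sage patch):

from multiprocessing import Pool
import subprocess

sources = ["foo.c", "bar.c", "baz.c"]   # hypothetical extension sources

def compile_one(src):
    obj = src.replace(".c", ".o")
    subprocess.check_call(["gcc", "-fPIC", "-O2", "-c", src, "-o", obj])
    return obj

if __name__ == "__main__":
    # compile every translation unit in parallel ...
    objects = Pool(processes=4).map(compile_one, sources)
    # ... then link serially once all object files exist
    subprocess.check_call(["gcc", "-shared", "-o", "ext.so"] + objects)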

> That's more or less how make works - it does not work very well IMHO.
> And doing the above correctly in distutils may be harder than it seems:
> both scons and waf had numerous problems with calling subtasks because
> of race conditions in subprocess for example (both have their own module
> for that).
> 
> More fundamentally though, I have no interest in working on distutils.

:)

> Not working on a DAG is fundamentally and hopelessly broken for a build
> tool, and this is unfixable in distutils. Everything is wrong, from the
> concepts to the UI through the implementation, to paraphrase a famous
> saying. There is nothing to save IMHO. Of course, someone else can work
> on it. I prefer working on a sane solution myself,
> 
> cheers,
> 
> David

To taunt Ondrej: a one-minute build isn't forever - numpy is tiny and
I understand why it might seem long compared to SymPy, but just wait
until you add Cython extensions by default and those build times go up
substantially ;).

Cheers,

Michael



Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Michael Abshoff
David Cournapeau wrote:
> Michael Abshoff wrote:
>> David Cournapeau wrote:

Hi David,

>> With Sage we do the cythonization in parallel and for now build 
>> extension serially, but we have code to do that in parallel, too. Given 
>> that we are building 180 extensions or so the speedup is linear. I often 
>> do this using 24 cores, so it seems robust since I do work on Sage daily 
>> and often to test builds from scratch and I never had any problems with 
>> that code.
>>   
> 
> Note that building from scratch is the easy case, specially in the case
> of parallel builds.

Sure, it also works for incremental builds, and I do that many, many
times a day, i.e. for each patch I merge into the Sage library. What
gets recompiled is decided by our own dependency tracking code, which
we want to push into Cython itself. Figuring out the dependencies on
the fly, without caching, takes about 1s for the whole Sage library,
and that includes parsing every Cython file.

Note that we build the extensions in parallel, not the files within
one extension, so if you depend on a lot of Fortran or C code being
linked into a single extension this obviously doesn't help much. The
situation with Sage's extensions is that we build external libraries
ahead of time, 99% of the extensions have no additional C/C++ files,
and those that do usually have one to three extra files, so for our
purposes this scales linearly.
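
The core of such a timestamp-based rebuild decision is tiny; a sketch
(not the actual Sage code, file names hypothetical):

import os

def needs_rebuild(target, sources):
    # rebuild if the built extension is missing, or if any source or
    # declared dependency is newer than it
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)

print needs_rebuild("matrix.so", ["matrix.pyx", "matrix.pxd"])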

> Also, I would guess "cythonizing" is easy, at least
> if it is done entirely in python. Race conditions in subprocess are a
> real problem; they caused numerous issues in scons and waf, so I would
> be really surprised if they did not cause any trouble in distutils.
> Particularly on windows, subprocess up to python 2.4 was problematic, I
> believe (I should really check, because I was not involved in the
> related discussions nor with the fixes in scons).

We used to use threads for the "parallel stuff" and it is indeed racy,
but that was mostly observed when running doctests, since we only had
one current directory. All those problems went away once we switched
to pyprocessing, and while there is some overhead for the forks, it is
drowned out by the build time even with just 2 cores.

>> To taunt Ondrej: A one minute build isn't forever - numpy is tiny and I 
>> understand why it might seem long compared to SymPy, but just wait until 
>> you add Cython extensions per default and those build times will go up 
>> substantially
>>   
> 
> Building scipy installer on windows takes 1 hour, which is already
> relatively significant. 

Ouch. Is that without the dependencies, i.e. ATLAS?

I was curious how you build the various versions of ATLAS, i.e. no
SSE, SSE, SSE2, etc. Do you just set the arch via -A and build them
all on the same box? [sorry for getting slightly OT here :)]

> But really, parallel builds is just a nice
> consequence of using a sane build tool. I simply cannot stand distutils
> anymore; it now feels even more painful than developing on windows.
> Every time you touch something, something else, totally unrelated breaks.

Yeah, distutils is a pain, and numpy extending/modifying it doesn't
make things any cleaner :(. I am looking forward to the day NumScons
is part of numpy, though.

> cheers,
> 
> David

Cheers,

Michael



Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Michael Abshoff
David Cournapeau wrote:
> Michael Abshoff wrote:

Hi David,

>> Sure, it also works for incremental builds and I do that many, many 
>> times a day, i.e. for each patch I merge into the Sage library. What 
>> gets recompiled is decided by our own dependency tracking code which we 
>> want to push into Cython itself. Figuring out dependencies on the fly 
>> without caching takes about 1s for the whole Sage library which includes 
>> parsing every Cython file.
>>   
> 
> Hm, I think I would have to look at how Sage works internally to
> really understand the implications. But surely, if you can figure out
> the whole dependency tree for scipy in one second, I would be more
> than impressed: you would beat waf and make at their own game.

I didn't write the code, but thanks :)

We used to cache the dependency tree by pickling it, and if no time
stamps changed we would reuse it, so that phase would be instant.
Alas, there was one unfixed bug in it that hit you if you removed an
extension or a file. But the main author of that code (Craig Citro -
to give credit where credit is due) has an idea how to fix it and will
stamp that bug out once his PhD thesis is handed in.

>> We used to use threads for the "parallel stuff" and it is indeed racy, 
>> but that was mostly observed when running doctests since we only had one 
>> current directory. All those problems went away once we started to use 
>> Pyprocessing and while there is some overhead for the forks it is 
>> drowned out by the build time when using 2 cors.
>>   
> 
> Does pyprocessing work well on windows as well ? I have 0 experience
> with it.

It should, but I haven't tested it. The last official, stand-alone
pyprocessing we ship in Sage segfaults on FreeBSD 7, so we will likely
switch to the multiprocessing backport from Python 2.6 soon; for now
we are stuck on Python 2.5 until numpy/scipy and a bunch of other
Python projects like NetworkX officially support 2.6 ;).

>> Ouch. Is that without the dependencies, i.e. ATLAS?
>>   
> 
> Yes - but I need to build scipy three times, for each ATLAS (if I could
> use numscons, it would be much better, since a library change is handled
> as a dependency in scons; with distutils, the only safe way is to
> rebuild from scratch for every configuration).

In Sage, if we link static libs into an extension, we add a dependency
on the header. But you could do the same for libatlas.a itself, so
dropping in a new version and touching it would rebuild just the
extensions that depend on ATLAS. I have tested this extensively and
have not found any problems with that approach.
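
With plain distutils the same idea looks roughly like this - the
Extension class takes a depends list of files whose timestamps trigger
rebuilds (the paths here are hypothetical):

from distutils.core import setup, Extension

ext = Extension(
    "myext",
    sources=["myext.c"],
    libraries=["atlas"],
    library_dirs=["/usr/local/atlas/lib"],        # hypothetical prefix
    depends=["/usr/local/atlas/lib/libatlas.a"],  # touch to force rebuild
)

setup(name="myext", ext_modules=[ext])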

>> I was curious how you build the various version of ATLAS, i.e. no SSE, 
>> SSE, SSE2, etc. Do you just set the arch via -A and build them all on 
>> the same box? [sorry for getting slightly OT here :)]
>>   
> 
> This does not work: ATLAS will still use SSE if your CPU supports it,
> even if you force an arch without SSE. I tried two different things:
> first, using a patched qemu with options to emulate a P4 without SSE, with
> SSE2 and with SSE3, but this does not work so well (the generated
> versions are too slow, and handling virtual machines in qemu is a bit of
> a pain). Now, I just build on different machines, and hope I won't need
> to rebuild them too often.

OK, I now remember what I did about this problem two days ago, since I
want to build SSE2-only binary releases of Sage. Apparently there are
people out there whose Intel/AMD CPUs don't have SSE3 :)

To make ATLAS build without, say, SSE3, go into the config system and
have the SSE3 probe return "FAILURE" unconditionally. That way ATLAS
will pick at most SSE2 even if the CPU can do more. Using objdump I
verified that the resulting lib no longer contains any PNI (== SSE3)
instructions; see

  http://trac.sagemath.org/sage_trac/ticket/5219
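
The verification can be as simple as scanning the disassembly for
SSE3 mnemonics, e.g. (a sketch - the mnemonic list is not exhaustive
and the library name is hypothetical):

import subprocess

SSE3 = ("haddps", "haddpd", "addsubps", "addsubpd",
        "movsldup", "movshdup", "lddqu", "fisttp")

# disassemble the library and look for any SSE3 (PNI) instruction
asm = subprocess.Popen(["objdump", "-d", "libatlas.so"],
                       stdout=subprocess.PIPE).communicate()[0]
hits = [m for m in SSE3 if m in asm]
print hits if hits else "no SSE3 instructions found"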

A problem here is that you get odd arches without tuning info, i.e.
P464SSE2, so one has to build the tuning info once and drop it into
subsequent builds of ATLAS. You will also have the problem of Hammer
vs. P4 ATLAS kernels, so one day I will measure the performance.

I meant to ask Clint about adding a configure switch for a maximum SSE
level to ATLAS itself, but since I only got the problem solved two
days ago I haven't gotten around to it. Given that everything else is
configurable, it seems like something he would welcome.

If you want more details on where to poke around in the config system 
let me know.

> cheers,
> 
> David

Cheers,

Michael



Re: [Numpy-discussion] Got: "undefined symbol: PyUnicodeUCS2_FromUnicode" error

2009-04-08 Thread Michael Abshoff
charlie wrote:
> Hi All,

Hi Charlie,

> I got the  "undefined symbol: PyUnicodeUCS2_FromUnicode" error when
> importing numpy.
> 
> I have my own non-root version of python 2.5.4 final installed with
> --prefix=$HOME/usr.
> PYTHONHOME=$HOME/usr;
> PYTHONPATH="$PYTHONHOME/lib:$PYTHONHOME/lib/python2.5/site-packages/"
> install and import other modules works fine.
> I install numpy-1.3.0rc2 from the svn repository with "python setup.py
> install"
> then import numpy results in following error:
> Python 2.5 (release25-maint, Jul 23 2008, 17:54:01)
> [GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.



> PyUnicodeUCS2_FromUnicode

numpy was built with a Python configured for UCS2, while the Python
you have was built with UCS4.

> I am not sure where is trick is. As I checked the previous discussion, I
> found several people raised similar issue but no one has posted a final
> solution to this yet. So I can only ask for help here again! Thanks in
> advance!

To fix this, build Python with UCS2: check "configure --help" and
configure Python with --enable-unicode=ucs2.
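
You can check which flavor a given interpreter has via sys.maxunicode:

import sys

# 65535 (0xffff) means a UCS2 ("narrow") build,
# 1114111 (0x10ffff) means a UCS4 ("wide") build
print sys.maxunicode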

> Charlie
> 

Cheers,

Michael



Re: [Numpy-discussion] numpy 1.3.0 test() failures on ubuntu 9.04

2009-04-23 Thread Michael Abshoff
Pauli Virtanen wrote:
> Thu, 23 Apr 2009 14:54:10 -0400, Chris Colbert wrote:
> [clip]
>> libatlas-sse2-dev
> [clip]
>> The pastebin links to site.cfg build.log and test.log are at the end of
>> this email. If anyone could help me out here as to what the problems may
>> be, i would appreciate it!
> 
> The SSE-optimized Atlas libraries shipped with Ubuntu 9.04 are broken:
> 
>   https://bugs.launchpad.net/bugs/363510
> 

Slightly OT: Is Ubuntu still shipping ATLAS 3.6.0 or am I 
misinterpreting the version number somehow?

Cheers,

Michael


Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development

2008-05-20 Thread Michael Abshoff
Stéfan van der Walt wrote:
> 2008/5/20 Charles R Harris <[EMAIL PROTECTED]>:
>   
>> On Tue, May 20, 2008 at 9:48 AM, Jarrod Millman <[EMAIL PROTECTED]>
>> wrote:
>> 
>>> On Tue, May 20, 2008 at 8:37 AM, Charles R Harris
>>> <[EMAIL PROTECTED]> wrote:
>>>   
>>>> Two of the buildbots are showing problems, probably installation
>>>> related, but it would be nice to see all green before the release.
>>> Absolutely.  Thanks for catching this.
>>>   
>> It would be good if we could find a PPC to add to the buildbot in order to
>> catch endianness problems.
>> SPARC might also do for this.
>> 
>
> Absolutely!  If anybody has access to such a machine and is willing to
> let a build-slave run on it, please let me know and I'll send you the
> necessary configuration files.
>
>   

Hi Stefan,

I got access to Solaris 9/Sparc and Solaris 10/Sparc and am certainly 
willing to help out.

> Regards
> Stéfan
>   

Cheers,

Michael



Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development

2008-05-20 Thread Michael Abshoff
Charles R Harris wrote:

Hi Chuck,
> Curious bug on Stefan's Ubuntu build client:
>
> ImportError: /usr/lib/atlas/libblas.so.3gf: undefined symbol: 
> _gfortran_st_write_done
> make[1]: *** [test] Error 1
> Anyone know what that is about?
>
You need to link with -lgfortran, since the BLAS you use was compiled
with gfortran.
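
For a distutils-built extension, that is one extra entry in the
libraries list, roughly (the module name is made up):

from distutils.core import setup, Extension

ext = Extension(
    "example",
    sources=["example.c"],
    # gfortran's runtime provides the _gfortran_* symbols
    libraries=["blas", "gfortran"],
)

setup(name="example", ext_modules=[ext])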

> Chuck
>
Cheers,

Michael



Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Michael Abshoff
Dan Yamins wrote:
>
>
Hello folks,

I did port Sage - and hence Python with numpy and scipy - to 64-bit
OSX, and below are some sample build instructions for building just
Python and numpy in 64-bit mode.
>
>
> Try
>
> In [3]: numpy.dtype(numpy.uintp).itemsize
> Out[3]: 4
>
> which is the size in bytes of the integer needed to hold a
> pointer. The output above is for 32 bit python/numpy.
>
> Chuck
>
>
> Check, the answer is 4, as you got for the 32-bit.   What would the 
> answer be on a 64-bit architecture?  Why is this diagnostic?
>
> Thanks!
> Dan
>
First try:

export OPT="-g -fwrapv -O3 -m64 -Wall -Wstrict-prototypes"
./configure --disable-toolbox-glue 
--prefix=/Users/mabshoff/64bitnumpy/python-2.5.2-bin  


checking for int... yes
checking size of int... 4
checking for long... yes
checking size of long... 4


Oops, make fails because of the above - a 4-byte long means we still
got a 32-bit build. Let's try again:

 ./configure --disable-toolbox-glue 
--prefix=/Users/mabshoff/64bitnumpy/python-2.5.2-bin --with-gcc="gcc -m64"


checking for int... yes
checking size of int... 4
checking for long... yes
checking size of long... 8


make && make install

then:

bsd:python-2.5.2-bin mabshoff$ file bin/python
bin/python: Mach-O 64-bit executable x86_64

Let's make the 64 bit python default:

bsd:64bitnumpy mabshoff$ export 
PATH=/Users/mabshoff/64bitnumpy/python-2.5.2-bin/bin/:$PATH
bsd:64bitnumpy mabshoff$ which python
/Users/mabshoff/64bitnumpy/python-2.5.2-bin/bin//python
bsd:64bitnumpy mabshoff$ file `which python`
/Users/mabshoff/64bitnumpy/python-2.5.2-bin/bin//python: Mach-O 64-bit 
executable x86_64

Let's build numpy 1.1.0:

bsd:64bitnumpy mabshoff$ tar xf numpy-1.1.0.tar.gz
bsd:64bitnumpy mabshoff$ cd numpy-1.1.0
bsd:numpy-1.1.0 mabshoff$ python setup.py install


bsd:python-2.5.2-bin mabshoff$ python
Python 2.5.2 (r252:60911, Jun  4 2008, 20:47:16)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy
 >>> numpy.dtype(numpy.uintp).itemsize
8
 >>> ^D
bsd:python-2.5.2-bin mabshoff$  

Voila ;)

With numpy 1.0.4svn from 20080104 the numpy setup did not work since a
conftest failed. I reported that to Stefan V. on IRC in #sage-devel
and thought this was still a problem with numpy 1.1.0, but fortunately
it has been fixed.

Now on to the test suite:

 >>> numpy.test()
Numpy is installed in 
/Users/mabshoff/64bitnumpy/python-2.5.2-bin/lib/python2.5/site-packages/numpy
Numpy version 1.1.0
Python version 2.5.2 (r252:60911, Jun  4 2008, 20:47:16) [GCC 4.0.1 
(Apple Inc. build 5465)]
  Found 18/18 tests for numpy.core.tests.test_defmatrix
  Found 3/3 tests for numpy.core.tests.test_errstate
  Found 3/3 tests for numpy.core.tests.test_memmap
  Found 286/286 tests for numpy.core.tests.test_multiarray
  Found 70/70 tests for numpy.core.tests.test_numeric
  Found 36/36 tests for numpy.core.tests.test_numerictypes
  Found 12/12 tests for numpy.core.tests.test_records
  Found 143/143 tests for numpy.core.tests.test_regression
  Found 7/7 tests for numpy.core.tests.test_scalarmath
  Found 2/2 tests for numpy.core.tests.test_ufunc
  Found 16/16 tests for numpy.core.tests.test_umath
  Found 63/63 tests for numpy.core.tests.test_unicode
  Found 4/4 tests for numpy.distutils.tests.test_fcompiler_gnu
  Found 5/5 tests for numpy.distutils.tests.test_misc_util
  Found 2/2 tests for numpy.fft.tests.test_fftpack
  Found 3/3 tests for numpy.fft.tests.test_helper
  Found 24/24 tests for numpy.lib.tests.test__datasource
  Found 10/10 tests for numpy.lib.tests.test_arraysetops
  Found 1/1 tests for numpy.lib.tests.test_financial
  Found 53/53 tests for numpy.lib.tests.test_function_base
  Found 5/5 tests for numpy.lib.tests.test_getlimits
  Found 6/6 tests for numpy.lib.tests.test_index_tricks
  Found 15/15 tests for numpy.lib.tests.test_io
  Found 1/1 tests for numpy.lib.tests.test_machar
  Found 4/4 tests for numpy.lib.tests.test_polynomial
  Found 1/1 tests for numpy.lib.tests.test_regression
  Found 49/49 tests for numpy.lib.tests.test_shape_base
  Found 15/15 tests for numpy.lib.tests.test_twodim_base
  Found 43/43 tests for numpy.lib.tests.test_type_check
  Found 1/1 tests for numpy.lib.tests.test_ufunclike
  Found 89/89 tests for numpy.linalg.tests.test_linalg
  Found 3/3 tests for numpy.linalg.tests.test_regression
  Found 94/94 tests for numpy.ma.tests.test_core
  Found 15/15 tests for numpy.ma.tests.test_extras
  Found 17/17 tests for numpy.ma.tests.test_mrecords
  Found 36/36 tests for numpy.ma.tests.test_old_ma
  Found 4/4 tests for numpy.ma.tests.test_subclassing
  Found 7/7 tests for numpy.tests.test_random
  Found 16/16 tests for numpy.testing.tests.test_utils
  Found 5/5 tests for numpy.tests.test_ctypeslib
.

Re: [Numpy-discussion] A memory problem: why does mmap come up in numpy.inner?

2008-06-04 Thread Michael Abshoff
Jonathan Wright wrote:
> Dan Yamins wrote:
>   
>> On Wed, Jun 4, 2008 at 9:06 PM, Charles R Harris 
>> <[EMAIL PROTECTED]> wrote:
>>
>>
>> Are both python and your version of OS X fully 64 bits?
>>
>>
>> I'm not sure.  
>> 
> From python:
>
> python2.5 -c 'import platform;print platform.architecture()'
> ('32bit', 'ELF')
>
> versus :
>
> ('64bit', 'ELF')
>
> You can also try the unix file command (eg: from a terminal):
>
>   
Hi Jon,

as described in my other email in this thread about an hour ago, I
built python 2.5.2 on OSX in 64-bit mode. As is, the ctypes extension
does not build because libffi is too old. I manually got it to build
[hackish, long story; details will go to the python-dev list in a
couple of days] and now it works, i.e. twisted works correctly and the
Sage notebook, which depends on twisted, passes its doctests.

> $ file `which python2.5`
> /sware/exp/fable/standalone/redhate4-a64/bin/python: ELF 64-bit LSB 
> executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0, 
> dynamically linked (uses shared libs), not stripped
>
> ...etc. We needed this for generating the .so library file name for 
> ctypes, and got the answer from comp.lang.python. I hope it also works 
> for OS X.
>
>   

Can you elaborate on this a little? I have a 64 bit python 2.5.2 and 
ctypes imports fine:

bsd:64bitnumpy mabshoff$ file `which python`
/Users/mabshoff/64bitnumpy/python-2.5.2-bin/bin//python: Mach-O 64-bit 
executable x86_64
bsd:64bitnumpy mabshoff$ python
Python 2.5.2 (r252:60911, Jun  4 2008, 21:59:02)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
 >>> import _ctypes
 >>> ^D

But since I manually built it and copied it over, python might not
"know" it exists [excuse my lack of python-internals foo here]. Since
I had tested numpy 1.1 before I copied over the ctypes extension, I
deleted numpy 1.1 from site-packages and rebuilt from fresh sources,
but despite the ctypes extension existing and working, the numpy
ctypes test is still skipped:


  Found 5/5 tests for numpy.tests.test_ctypeslib
ctypes is not available on this python: skipping the 
test (import error was: ctypes is not available.)
.
--
Ran 1275 tests in 1.235s

I am not what I would call familiar with numpy internals, so is there a 
magic thing I can do to make numpy aware that ctypes exists? I am 
willing to find out myself, but do not have the time today to go off and 
spend a day or two on this, especially if somebody on the list can just 
point me to the right spot ;)

Any input is appreciated.

> Best,
>
> Jon
>
>
>   
Cheers,

Michael



Re: [Numpy-discussion] Building 64-bit numpy on OSX

2008-06-06 Thread Michael Abshoff
Dan Yamins wrote:

Hi Dan,
> >./configure --disable-toolbox-glue 
> --prefix=/Users/mabshoff/64bitnumpy/python-2.5.2-bin --with-gcc="gcc 
> -m64"
>
>
> Let's build numpy 1.1.0:
>
> bsd:64bitnumpy mabshoff$ tar xf numpy-1.1.0.tar.gz
> bsd:64bitnumpy mabshoff$ cd numpy-1.1.0
> bsd:numpy-1.1.0 mabshoff$ python setup.py install
> 
>
>
>  
> Michael, thanks for your help on building 64-bit python/numpy. I
> followed your instructions and was able to build the python portion.
> (I did the test "python -c 'import platform; print
> platform.architecture()'" suggested by Jon and I got the 64-bit
> result, so I think it worked properly.) Thanks very much for this!
> However, I was unable to get the numpy build to work. When running
> setup.py install, I get all sorts of permission errors, so I'm
> forced to run it as su. (Is this a bad idea?) Anyhow, when I do run
> "sudo python setup.py install" in the numpy-1.1.0 directory I
> downloaded from the SciPy website, the build apparently works.
If you were able to install python without sudo you should not need to 
use sudo to build numpy. I certainly did not need to do so. Can you post 
the exact sequence of commands you ran?

>   But when I go into python, I'm not able to import numpy.  
> Specifically, I get:
>
>daniel-yaminss-mac-pro:Desktop danielyamins$ python
>Python 2.5.2 (r252:60911, Jun  5 2008, 19:13:50)
>[GCC 4.0.1 (Apple Inc. build 5465)] on darwin  
>Type "help", "copyright", "credits" or "license" for more information.
>>>> import numpy
>   Traceback (most recent call last):
>  File "", line 1, in 
>  File "/usr/local/lib/python2.5/site-packages/numpy/__init__.py", 
> line 93, in 
>import add_newdocs
>  File 
> "/usr/local/lib/python2.5/site-packages/numpy/add_newdocs.py", line 9, 
> in 
>from lib import add_newdoc
>  File 
> "/usr/local/lib/python2.5/site-packages/numpy/lib/__init__.py", line 
> 4, in 
>from type_check import *
>  File 
> "/usr/local/lib/python2.5/site-packages/numpy/lib/type_check.py", line 
> 8, in 
>import numpy.core.numeric as _nx
>  File 
> "/usr/local/lib/python2.5/site-packages/numpy/core/__init__.py", line 
> 5, in 
>import multiarray
>   ImportError: 
> dlopen(/usr/local/lib/python2.5/site-packages/numpy/core/multiarray.so, 
> 2): no suitable image found.  Did find:
>   /usr/local/lib/python2.5/site-packages/numpy/core/multiarray.so: 
> no matching architecture in universal wrapper
>
>
> Does anyone know what this error means, and/or how to fix it? Sorry
> if it's obvious and I should know it already.
It is not obvious to me; I don't see what is wrong either. Apple's
python is not in /usr/local, so it does not sound like you ended up
using the "wrong" python when you built numpy.

>
>
> Thanks!
> Dan
>
>
Cheers,

Michael


Re: [Numpy-discussion] Building 64-bit numpy on OSX

2008-06-06 Thread Michael Abshoff
Charles R Harris wrote:
>
>
> On Fri, Jun 6, 2008 at 3:50 PM, Dan Yamins <[EMAIL PROTECTED]> wrote:
>
>
>
>
>
> > I'm forced to run it as su.  (Is this a bad idea?)
> Anyhow, when I
> > do run "sudo python setup.py install" in the numpy-1.1.0
> directory I
> > downloaded from SciPy website, the build apparently works.
> If you were able to install python without sudo you should not
> need to
> use sudo to build numpy. I certainly did not need to do so.
> Can you post
> the exact sequence of commands you ran?
>
>
> Here they are:
>
>daniel-yaminss-mac-pro:numpy-1.1.0 danielyamins$ python
> setup.py install
>Running from numpy source directory.
>
>[a whole bunch of output, things apparently working for a
> little while. then:]
>
>copying build/lib.macosx-10.3-i386-2.5/numpy/__config__.py ->
> /usr/local/lib/python2.5/site-packages/numpy
>error: could not delete
> '/usr/local/lib/python2.5/site-packages/numpy/__config__.py':
> Permission denied
>
> This is what happens when I try to run the install without
> "sudo".  When I do "sudo python setup.py install" instead, the
> process finishes but then, as I said before, when I open a python
> interpreter and try to import numpy it fails in the way I posted
> previously.
>
>
>
> >It is not obvious to me, but I do't see what is wrong either. Apple's
> >python is not in /usr/local, so it does not sound like when you build
> >numpy you ended up using the "wrong" python.
>
> So, I think it's using the "right" python, since I modified my
> .profile to make the 64-bit version stored in /usr/local the
> default (and when I open python, the "build" information gives
> the time of the new one I built).
>
>
> I don't think the permissions problem really matters; it just means
> you don't have write permissions in /usr/local/lib/python2.5/site-packages,
> which is normal.  What I think matters is "no matching architecture in 
> universal wrapper". Hmmm. I wonder if you and Michael have the same 
> versions of OS X?

For the record: I am running OSX 10.5.2 on an Intel Mac.
> And why is dlopen looking for a universal library? One would hope that 
> distutils would have taken care of that.
>
Maybe Dan overwrote an older 32-bit Python with the new 64-bit build?

> Out of curiosity, where are the 32/64 bit libraries normally put? Do 
> you have a /usr/local/lib32 or a /usr/local/lib64? What does
>
> file /usr/local/lib/python2.5/site-packages/numpy/core/multiarray.so
>
> do?
>
> Chuck
>
I will be traveling, so I will drop off the net for the next day or two, 
but I expect to catch up with email then.

Cheers,

Michael



Re: [Numpy-discussion] Code samples in docstrings mistaken as doctests

2008-06-23 Thread Michael Abshoff
Stéfan van der Walt wrote:
> 2008/6/24 Stéfan van der Walt <[EMAIL PROTECTED]>:
>> It should be fairly easy to execute the example code, just to make
>> sure it runs.  We can always work out a scheme to test its validity
>> later.

Hi,

> Mike Hansen just explained to me that the Sage doctest system sets the
> random seed before executing each test.  If we address
> 
> a) Random variables

we have some small extensions to the doctesting framework that allow
us to mark doctests as "#random" so that the result is not checked.
Carl Witty wrote some code that makes the random number generators in
a lot of the Sage components behave consistently on all supported
platforms.
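
In a numpy docstring such a marker might look like this (illustrative
only - plain doctest knows nothing about "#random", so a custom runner
has to skip the comparison for marked lines):

def draw():
    """Return two uniform samples.

    >>> import numpy as np
    >>> np.random.rand(2)        # random: output not compared
    array([ 0.42,  0.17])
    """
    import numpy as np
    return np.random.rand(2)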

> b) Plotting representations and
> c) Endianness

Yeah, the Sage test suite seems to catch at least one of those in every 
release cycle.

Another thing we just implemented is a "jar of pickles" that lets us
verify that there are no cross-platform issues (32 vs. 64 bit and big
vs. little endian) as well as no problems with loading pickles from
previous releases.
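
The scheme is simple: pickle a set of known objects once, ship the
files, and on every platform and release unpickle and compare. A
sketch (the file name and contents are made up):

import cPickle as pickle
import numpy as np

expected = {"vec": np.arange(10, dtype=np.int64), "mat": np.eye(3)}

# run once to create the jar for a release
pickle.dump(expected, open("jar-1.1.0.pickle", "wb"), protocol=2)

# run everywhere else to verify
loaded = pickle.load(open("jar-1.1.0.pickle", "rb"))
for name, value in expected.items():
    assert np.all(loaded[name] == value), "mismatch in %s" % name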

> we're probably halfway there.
> 
> Regards
> Stéfan

Cheers,

Michael



Re: [Numpy-discussion] Code samples in docstrings mistaken as doctests

2008-06-23 Thread Michael Abshoff
Fernando Perez wrote:
> On Mon, Jun 23, 2008 at 4:58 PM, Michael Abshoff
> <[EMAIL PROTECTED]> wrote:

Hi Fernando,

>>> a) Random variables
>> we have some small extensions to the doctesting framework that allow us
>> to mark doctests as "#random" so that the result is not checked. Carl
>> Witty wrote some code that makes the random number generators in a lot
>> of the Sage components behave consistently on all supported platforms.
> 
> Care to share? (BSD, we can't even look at the Sage code).

I am not the author, so I need to find out who wrote the code, but I am 
sure it can be made BSD. We are also working on "doctest+timeit" to hunt 
for performance regressions, but that one is not ready for prime time yet.

> Cheers,
> 
> f

Cheers,

Michael




Re: [Numpy-discussion] Code samples in docstrings mistaken as doctests

2008-06-23 Thread Michael Abshoff
Charles R Harris wrote:
> 
> 
> On Mon, Jun 23, 2008 at 5:58 PM, Michael Abshoff
> <[EMAIL PROTECTED]> wrote:
> 
> Stéfan van der Walt wrote:
>  > 2008/6/24 Stéfan van der Walt <[EMAIL PROTECTED]>:
>  >> It should be fairly easy to execute the example code, just to make
>  >> sure it runs.  We can always work out a scheme to test its validity
>  >> later.
> 
> Hi,
> 
>  > Mike Hansen just explained to me that the Sage doctest system
> sets the
>  > random seed before executing each test.  If we address
>  >
>  > a) Random variables
> 
> we have some small extensions to the doctesting framework that allow us
> to mark doctests as "#random" so that the result is not checked. Carl
> Witty wrote some code that makes the random number generators in a lot
> of the Sage components behave consistently on all supported platforms.

Hi,

> 
> But there is more than one possible random number generator. If you do 
> that you are tied into one kind of generator and one kind of 
> initialization implementation.
> 
> Chuck
> 

Correct, but so far Carl has hooked into six of the many random number
generators in the various components of Sage. This way we can set a
global seed and more easily reproduce issues with algorithms where
randomness plays a role, without being forced to be on the same
platform. There are still doctests in Sage where the randomness comes
from sources not covered by randgen (Carl's code), but sooner or later
we will get around to all of them.
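
The basic idea (a sketch, not Carl's actual code) is a single entry
point that seeds every generator a test run might touch:

import random
import numpy as np

def set_global_seed(seed):
    # seed each random number generator the doctests may use, so a
    # failure involving randomness can be replayed elsewhere with the
    # same random stream
    random.seed(seed)
    np.random.seed(seed)

set_global_seed(0)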

Cheers,

Michael
