Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
> > > Oops, save for the "/trunk:1-2871" part. This should be deleted before > > a commit to the trunk, I think. > Yes, that's what I (quite unclearly) meant: since revision numbers are > per-repository in svn, I don't understand the point of tracking trunk > revisions: I would think that tracking

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Matthieu Brucher wrote: > > > 2008/1/8, Matthieu Brucher <[EMAIL PROTECTED] > >: > > > > 2008/1/8, David Cournapeau <[EMAIL PROTECTED] > >: > > Matthieu Brucher wrote: > > Hi David, > > > > How did you init

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
2008/1/8, Matthieu Brucher <[EMAIL PROTECTED]>: > > > > 2008/1/8, David Cournapeau <[EMAIL PROTECTED]>: > > > > Matthieu Brucher wrote: > > > Hi David, > > > > > > How did you initialize svnmerge ? > > As said in the numpy wiki. More precisely: > > - In a svn checkout of the trunk, do svn up to

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
2008/1/8, David Cournapeau <[EMAIL PROTECTED]>: > > Matthieu Brucher wrote: > > Hi David, > > > > How did you initialize svnmerge ? > As said in the numpy wiki. More precisely: > - In a svn checkout of the trunk, do svn up to be up to date > - svn copy TRUNK MY_BRANCH > - use svnmerge i

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Matthieu Brucher wrote: > Hi David, > > How did you initialize svnmerge ? As said in the numpy wiki. More precisely: - In a svn checkout of the trunk, do svn up to be up to date - svn copy TRUNK MY_BRANCH - use svnmerge init MY_BRANCH - svn ci -F svnmerge-commit.txt - svn switch

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
Hi David, How did you initialize svnmerge? Matthieu 2008/1/8, David Cournapeau <[EMAIL PROTECTED]>: > > Fernando Perez wrote: > > On Jan 7, 2008 10:41 PM, David Cournapeau <[EMAIL PROTECTED]> > wrote: > > > >> Hi, > >> > >> for my work related on scons, I have a branch build_with_scons in >

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Fernando Perez wrote: > On Jan 7, 2008 10:54 PM, David Cournapeau <[EMAIL PROTECTED]> wrote: > > >> I understand this if doing the merge at hand with svn merge (that's what >> I did previously), but I am using svnmerge, which is supposed to avoid >> all this (I find the whole process extremely e

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Fernando Perez
On Jan 7, 2008 10:54 PM, David Cournapeau <[EMAIL PROTECTED]> wrote: > I understand this if doing the merge at hand with svn merge (that's what > I did previously), but I am using svnmerge, which is supposed to avoid > all this (I find the whole process extremely error-prone). More > specifically,

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Fernando Perez wrote: > On Jan 7, 2008 10:41 PM, David Cournapeau <[EMAIL PROTECTED]> wrote: > >> Hi, >> >> for my work related to scons, I have a branch build_with_scons in >> the numpy trunk, which I have initialized exactly as documented on the >> numpy wiki (http://projects.scipy.org/sci

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Fernando Perez
On Jan 7, 2008 10:41 PM, David Cournapeau <[EMAIL PROTECTED]> wrote: > Hi, > > for my work related to scons, I have a branch build_with_scons in > the numpy trunk, which I have initialized exactly as documented on the > numpy wiki (http://projects.scipy.org/scipy/numpy/wiki/MakingBranches). > W

[Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Hi, for my work related to scons, I have a branch build_with_scons in the numpy trunk, which I have initialized exactly as documented on the numpy wiki (http://projects.scipy.org/scipy/numpy/wiki/MakingBranches). When I try to update my branch with the trunk, I got surprising merge requests,

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread David Cournapeau
Andrew Straw wrote: > dmitrey wrote: > >> The only thing I'm really interested in for now is why the simplest >> matrix operations are not yet implemented in parallel in numpy >> (for several-CPU computers, like my AMD Athlon X2). >> > For what it's worth, sometimes I *want* my n

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Andrew Straw
dmitrey wrote: > The only thing I'm really interested in for now is why the simplest > matrix operations are not yet implemented in parallel in numpy > (for several-CPU computers, like my AMD Athlon X2). For what it's worth, sometimes I *want* my numpy operations to happen only on one c

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread eric jones
Robert Kern wrote: > dmitrey wrote: > >> The only thing I'm really interested in for now is why the simplest >> matrix operations are not yet implemented in parallel in numpy >> (for several-CPU computers, like my AMD Athlon X2). First of all >> it's related to matrix multiplication

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Timothy Hochberg
On Jan 7, 2008 2:00 PM, Charles R Harris <[EMAIL PROTECTED]> wrote: > Hi, > > On Jan 7, 2008 1:16 PM, Timothy Hochberg <[EMAIL PROTECTED]> wrote: > > > > > Another possible approach is to treat downcasting similar to underflow. > > That is, give it its own flag in the errstate and people can set i

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Matthieu Brucher
2008/1/7, dmitrey <[EMAIL PROTECTED]>: > > The only thing I'm really interested in for now is why the simplest > matrix operations are not yet implemented in parallel in numpy > (for several-CPU computers, like my AMD Athlon X2). First of all > it's related to matrix multiplication and dev

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Robert Kern
dmitrey wrote: > The only thing I'm really interested in for now is why the simplest > matrix operations are not yet implemented in parallel in numpy > (for several-CPU computers, like my AMD Athlon X2). First of all > it's related to matrix multiplication and division, either elementwise or

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread dmitrey
The only thing I'm really interested in for now is why the simplest matrix operations are not yet implemented in parallel in numpy (for several-CPU computers, like my AMD Athlon X2). First of all it's related to matrix multiplication and division, either elementwise or matrix-wise (i.e. like A\B,

Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Travis E. Oliphant
Andrew Straw wrote: > Hi, > > I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is > having troubles accessing a numpy scalar with the __array_interface__. > Is this supposed to work? Or should __array_interface__ trigger an > AttributeError on a numpy scalar? Note that I haven't do

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Darren Dale
On Monday 07 January 2008 03:09:33 pm Travis E. Oliphant wrote: > Darren Dale wrote: > > One of my collaborators would like to use 16bit float arrays. According > > to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to > > float16 in numpy.core.numerictypes, it appears that this shoul

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Brian Granger
Dmitrey, This work is being funded by a new NASA grant that I have at Tech-X Corporation where I work. The grant officially begins as of Jan 18th, so not much has been done as of this point. We have however been having some design discussions with various people. Here is a broad overview of the

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Darren Dale
On Monday 07 January 2008 03:53:06 pm Charles R Harris wrote: > Hi, > > On Jan 7, 2008 1:00 PM, Darren Dale <[EMAIL PROTECTED]> wrote: > > One of my collaborators would like to use 16bit float arrays. According > > to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to > > float16 in >

[Numpy-discussion] Unexpected integer overflow handling

2008-01-07 Thread Zachary Pincus
Hello all, On my (older) version of numpy (1.0.4.dev3896), I found several oddities in the handling of assignment of long-integer values to integer arrays: In : numpy.array([2**31], dtype=numpy.int8) --- ValueError
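
A minimal reproduction of the oddity described above, as a sketch assuming a reasonably current NumPy; whether out-of-range construction raises or silently wraps has changed across versions, so the sketch guards for both:

    import numpy as np

    # 2**31 cannot be represented in an int8; depending on the NumPy
    # version this either raises (as reported above) or silently wraps.
    try:
        a = np.array([2**31], dtype=np.int8)
        print("wrapped:", a)
    except (OverflowError, ValueError) as err:
        print("rejected:", err)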

Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Travis E. Oliphant
Andrew Straw wrote: > Travis E. Oliphant wrote: > >> Andrew Straw wrote: >> >>> Hi, >>> >>> I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is >>> having troubles accessing a numpy scalar with the __array_interface__. >>> Is this supposed to work? Or should __array_interfa

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Charles R Harris
Hi, On Jan 7, 2008 1:16 PM, Timothy Hochberg <[EMAIL PROTECTED]> wrote: > > Another possible approach is to treat downcasting similar to underflow. > That is, give it its own flag in the errstate and people can set it to > ignore, warn, or raise on downcasting as desired. One could potentially hav

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Charles R Harris
Hi, On Jan 7, 2008 1:00 PM, Darren Dale <[EMAIL PROTECTED]> wrote: > One of my collaborators would like to use 16bit float arrays. According to > http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 > in > numpy.core.numerictypes, it appears that this should be possible, but t

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Zachary Pincus
Hello all, In order to help make things regarding this casting issue more explicit, let me present the following table of potential "down-casts". (Also, for the record, nobody is proposing automatic up-casting of any kind. The proposals on the table focus on preventing some or all implicit
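
For concreteness, one of the implicit down-casts under discussion, sketched with a current NumPy: assigning floats into an integer array silently truncates.

    import numpy as np

    a = np.zeros(3, dtype=np.int32)
    a[...] = np.array([0.5, 1.7, -2.9])   # float -> int assignment
    print(a)                              # [ 0  1 -2]: fractional parts dropped silently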

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Robert Kern
dmitrey wrote: > Some days ago a parallel numpy being developed by Brian Granger was > mentioned. > > Does the project have any blog or website? Is there any description of > the API and capabilities? When is the first release intended? It is just starting development. There is nothing to see, yet,

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Anne Archibald
On 07/01/2008, Timothy Hochberg <[EMAIL PROTECTED]> wrote: > I'm fairly dubious about assigning float to ints as is. First off it looks > like a bug magnet to me due to accidentally assigning a floating point value > to a target that one believes to be float but is in fact integer. Second, > C-sty

Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Andrew Straw
Travis E. Oliphant wrote: > Andrew Straw wrote: >> Hi, >> >> I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is >> having troubles accessing a numpy scalar with the __array_interface__. >> Is this supposed to work? Or should __array_interface__ trigger an >> AttributeError on a nu

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Timothy Hochberg
Another possible approach is to treat downcasting similar to underflow. That is, give it its own flag in the errstate, and people can set it to ignore, warn, or raise on downcasting as desired. One could potentially have two flags, one for downcasting across kinds (float->int, int->bool) and one for
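
The existing machinery this proposal would extend is numpy.errstate, which already lets underflow be ignored, warned about, or raised. The 'downcast' key sketched in the comment below is only the proposal, not a real NumPy flag:

    import numpy as np

    # Today: underflow handling is configurable per block of code.
    with np.errstate(under='warn'):
        np.array([1e-300]) * np.array([1e-300])   # underflows to 0.0 and warns

    # The proposal: an analogous, hypothetical key such as
    #     np.errstate(downcast='raise')
    # so that float -> int (or int -> bool) assignments could ignore/warn/raise.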

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Francesc Altet
On Monday 07 January 2008, Nils Wagner wrote: > On Mon, 7 Jan 2008 19:42:40 +0100 > > Francesc Altet <[EMAIL PROTECTED]> wrote: > > On Monday 07 January 2008, Nils Wagner wrote: > >> >>> numpy.sqrt(numpy.array([-1.0], > >> > >>dtype=numpy.complex192)) > >> > >> Traceback (most recent call las

[Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread dmitrey
Some days ago a parallel numpy being developed by Brian Granger was mentioned. Does the project have any blog or website? Is there any description of the API and capabilities? When is the first release intended? Regards, D

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Travis E. Oliphant
Darren Dale wrote: > One of my collaborators would like to use 16bit float arrays. According to > http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in > numpy.core.numerictypes, it appears that this should be possible, but the > following doesn't work: > No, it's only p

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Matthieu Brucher
float16 is not defined in my version of Numpy :( Matthieu 2008/1/7, Darren Dale <[EMAIL PROTECTED]>: > > One of my collaborators would like to use 16bit float arrays. According to > http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 > in > numpy.core.numerictypes, it appears

Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Travis E. Oliphant
Andrew Straw wrote: > Hi, > > I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is > having troubles accessing a numpy scalar with the __array_interface__. > Is this supposed to work? Or should __array_interface__ trigger an > AttributeError on a numpy scalar? Note that I haven't do

[Numpy-discussion] Does float16 exist?

2008-01-07 Thread Darren Dale
One of my collaborators would like to use 16bit float arrays. According to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in numpy.core.numerictypes, it appears that this should be possible, but the following doesn't work: a=arange(10, dtype='float16') TypeError: data t
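
Half precision did not exist in NumPy at the time of this thread; numpy.float16 was added later. A defensive sketch that works either way:

    import numpy as np

    if hasattr(np, 'float16'):
        a = np.arange(10, dtype=np.float16)   # 2 bytes per element
        print(a.dtype, a.itemsize)
    else:
        a = np.arange(10, dtype=np.float32)   # narrowest float actually available
        print("no float16 on this NumPy; using", a.dtype)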

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Scott Ransom
On Monday 07 January 2008 02:13:56 pm Charles R Harris wrote: > On Jan 7, 2008 12:00 PM, Travis E. Oliphant <[EMAIL PROTECTED]> wrote: > > Charles R Harris wrote: > > > Hi All, > > > > > > I'm thinking that one way to make the automatic type conversion a > > > bit safer to use would be to add a CA

[Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Andrew Straw
Hi, I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is having troubles accessing a numpy scalar with the __array_interface__. Is this supposed to work? Or should __array_interface__ trigger an AttributeError on a numpy scalar? Note that I haven't done any digging on this myself..
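
A small probe of the question, assuming a current NumPy; whether a scalar exposes the attribute has varied, so the portable route is to promote it to a 0-d array first:

    import numpy as np

    arr = np.zeros(3)
    scalar = arr[0]                                   # numpy scalar, not a 0-d array

    print(type(arr.__array_interface__))              # dict: always present on ndarrays
    print(hasattr(scalar, '__array_interface__'))     # the behavior being asked about
    print(np.asarray(scalar).__array_interface__['typestr'])   # e.g. '<f8'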

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Charles R Harris
On Jan 7, 2008 12:08 PM, Anne Archibald <[EMAIL PROTECTED]> wrote: > On 07/01/2008, Charles R Harris <[EMAIL PROTECTED]> wrote: > > > > One place where Numpy differs from MatLab is the way memory is handled. > > MatLab is always generating new arrays, so for efficiency it is worth > > preallocatin

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Charles R Harris
On Jan 7, 2008 12:00 PM, Travis E. Oliphant <[EMAIL PROTECTED]> wrote: > Charles R Harris wrote: > > Hi All, > > > > I'm thinking that one way to make the automatic type conversion a bit > > safer to use would be to add a CASTABLE flag to arrays. Then we could > > write something like > > > > a[..

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Anne Archibald
On 07/01/2008, Charles R Harris <[EMAIL PROTECTED]> wrote: > > One place where Numpy differs from MatLab is the way memory is handled. > MatLab is always generating new arrays, so for efficiency it is worth > preallocating arrays and then filling in the parts. This is not the case in > Numpy where
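
A sketch of the contrast being drawn, using nothing beyond core NumPy: MATLAB-style preallocation works, but building the result in one vectorized step is the idiomatic (and usually faster) NumPy pattern.

    import numpy as np

    n = 1000

    # MATLAB-style: preallocate, then fill element by element.
    a = np.empty(n)
    for i in range(n):
        a[i] = 0.5 * i

    # Idiomatic NumPy: create the result directly, no preallocation step.
    b = 0.5 * np.arange(n)

    assert np.array_equal(a, b)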

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Ryan May
Zachary Pincus wrote: >>> For large arrays, it makes sense to do automatic >>> conversions, as is also the case in functions taking output arrays, >>> because the typecast can be pushed down into C where it is time and >>> space efficient, whereas explicitly converting the array uses up >>> tempora

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Nils Wagner
On Mon, 7 Jan 2008 19:42:40 +0100 Francesc Altet <[EMAIL PROTECTED]> wrote: > On Monday 07 January 2008, Nils Wagner wrote: >> >>> numpy.sqrt(numpy.array([-1.0], >>dtype=numpy.complex192)) >> >> Traceback (most recent call last): >>File "", line 1, in >> AttributeError: 'module' object ha

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Travis E. Oliphant
Charles R Harris wrote: > Hi All, > > I'm thinking that one way to make the automatic type conversion a bit > safer to use would be to add a CASTABLE flag to arrays. Then we could > write something like > > a[...] = typecast(b) > > where typecast returns a view of b with the CASTABLE flag set so

Re: [Numpy-discussion] [C++-sig] Overloading sqrt(5.5)*myvector

2008-01-07 Thread Travis E. Oliphant
Bruce Sherwood wrote: > Okay, I've implemented the scheme below that was proposed by Scott > Daniels on the VPython mailing list, and it solves my problem. It's also > much faster than using numpy directly: even with the "def" and "if" > overhead, sqrt(scalar) is over 3 times faster than the num
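
The message is truncated, but the scheme it refers to is a scalar fast path in front of numpy.sqrt. A hedged sketch of that general idea (the names are illustrative, not VPython's actual code):

    import math
    import numpy as np

    def sqrt(x):
        # Plain Python scalars take the cheap math.sqrt path;
        # everything else falls through to numpy.sqrt.
        if isinstance(x, (int, float)):
            return math.sqrt(x)
        return np.sqrt(x)

    print(sqrt(5.5))                     # scalar: math.sqrt
    print(sqrt(np.array([1.0, 4.0])))    # array: numpy.sqrt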

[Numpy-discussion] CASTABLE flag

2008-01-07 Thread Charles R Harris
Hi All, I'm thinking that one way to make the automatic type conversion a bit safer to use would be to add a CASTABLE flag to arrays. Then we could write something like a[...] = typecast(b) where typecast returns a view of b with the CASTABLE flag set so that the assignment operator can check wh
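
The typecast helper proposed here does not exist in NumPy; a purely illustrative sketch of the intent, using an explicit conversion instead of an actual CASTABLE flag on the array:

    import numpy as np

    def typecast(b, dtype):
        # Hypothetical stand-in for the proposed helper: the caller states
        # explicitly that converting b to dtype is acceptable.
        return np.asarray(b, dtype=dtype)

    a = np.zeros(4, dtype=np.int16)
    b = np.linspace(0.0, 3.0, 4)

    a[...] = typecast(b, a.dtype)        # intent is explicit, so no silent surprise
    print(a)                             # [0 1 2 3]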

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Zachary Pincus
>> For large arrays, it makes sense to do automatic >> conversions, as is also the case in functions taking output arrays, >> because the typecast can be pushed down into C where it is time and >> space efficient, whereas explicitly converting the array uses up >> temporary space. However, I can im
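
The output-array case mentioned above, sketched with a current NumPy: the float result is cast into the integer output inside the ufunc, with no separate user-level temporary. Newer NumPy requires opting in to the down-cast with casting='unsafe':

    import numpy as np

    a = np.linspace(0.0, 1.0, 5)
    b = np.linspace(1.0, 2.0, 5)
    out = np.empty(5, dtype=np.int32)

    # The cast happens inside the ufunc loop; no explicit float temporary here.
    np.add(a, b, out=out, casting='unsafe')
    print(out)    # [1 1 2 2 3]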

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Francesc Altet
On Monday 07 January 2008, Nils Wagner wrote: > >>> numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192)) > > Traceback (most recent call last): >File "", line 1, in > AttributeError: 'module' object has no attribute > 'complex192' > > >>> numpy.__version__ > > '1.0.5.dev4673' It seems li

Re: [Numpy-discussion] Overloading sqrt(5.5)*myvector

2008-01-07 Thread Travis E. Oliphant
Bruce Sherwood wrote: > Sorry to repeat myself and be insistent, but could someone please at > least comment on whether I'm doing anything obviously wrong, even if you > don't immediately have a solution to my serious problem? There was no > response to my question (see copy below) which I sent

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Ryan May
Charles R Harris wrote: > > > On Jan 7, 2008 8:47 AM, Ryan May <[EMAIL PROTECTED]> wrote: > > Stuart Brorson wrote: > >>> I realize NumPy != Matlab, but I'd wager that most users would think > >>> that this is the natural behavior.. > >> Well, th

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Nils Wagner
On Mon, 7 Jan 2008 10:10:50 +0100 "Matthieu Brucher" <[EMAIL PROTECTED]> wrote: > Hi, > > I managed to reproduce your bugs on an FC6 box: import numpy as n > n.sqrt(n.array([-1.0],dtype = n.complex192)) > array([0.0+9.2747134e+492j], dtype=complex192) > n.sqrt(n.array([-1.0],dtyp

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Charles R Harris
On Jan 7, 2008 8:47 AM, Ryan May <[EMAIL PROTECTED]> wrote: > Stuart Brorson wrote: > >>> I realize NumPy != Matlab, but I'd wager that most users would think > >>> that this is the natural behavior.. > >> Well, that behavior won't happen. We won't mutate the dtype of the > array because > >>

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Ryan May
Stuart Brorson wrote: >>> I realize NumPy != Matlab, but I'd wager that most users would think >>> that this is the natural behavior.. >> Well, that behavior won't happen. We won't mutate the dtype of the array >> because >> of assignment. Matlab has copy(-on-write) semantics for things like s

Re: [Numpy-discussion] Moving away from svn ?

2008-01-07 Thread David Cournapeau
On Jan 7, 2008 4:41 AM, Travis E. Oliphant <[EMAIL PROTECTED]> wrote: > Robert Kern wrote: > > Travis E. Oliphant wrote: > > > > > >> I don't think it is time to move wholesale to something like Mercurial > >> or bzr. I would prefer it if all of the Enthought-hosted projects > >> moved to the (ne

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Matts Bjoerck
Thanks for the fast replies, now I know it's not my machine that gives me trouble. In the meantime I tested a couple of other functions. It seems that all of them fail with complex192. /Matts In [19]: x192 = arange(0,2,0.5,dtype = complex192)*pi In [20]: x128 = arange(0,2,0.5,dtype = complex128

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread lorenzo bolla
It doesn't work on Windows, either. In [35]: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192)) Out[35]: array([0.0+2.9996087e-305j], dtype=complex192) In [36]: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex128)) Out[36]: array([ 0.+1.j]) In [37]: numpy.__version__ Out[37]: '1.0.5.dev45

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Matthieu Brucher
Hi, I managed to reproduce your bugs on an FC6 box: >>> import numpy as n >>> n.sqrt(n.array([-1.0],dtype = n.complex192)) array([0.0+9.2747134e+492j], dtype=complex192) >>> n.sqrt(n.array([-1.0],dtype = n.complex128)) array([ 0.+1.j]) >>> x=n.array([0.0+0.0j, 1.0+0.0j], dtype=n.complex192) >>>

[Numpy-discussion] Bugs using complex192

2008-01-07 Thread Matts Bjoerck
Hi, I've started using complex192 for some calculations and came across two things that seem to be bugs: In [1]: sqrt(array([-1.0],dtype = complex192)) Out[1]: array([0.0+-6.1646549e-4703j], dtype=complex192) In [2]: sqrt(array([-1.0],dtype = complex128)) Out[2]: array([ 0.+1.j]) In [3]: x Out
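
For anyone retracing this thread, complex192 was the 32-bit x86 name for the extended-precision complex type (complex256 on most 64-bit builds); numpy.clongdouble is the portable spelling. A hedged reproduction against a current NumPy, where the result should match the double-precision reference:

    import numpy as np

    # np.clongdouble is the portable name for the extended-precision complex
    # type known as complex192 (32-bit x86) or complex256 (most 64-bit builds).
    x = np.array([-1.0], dtype=np.clongdouble)
    print(np.sqrt(x))                         # expected: [0.+1.j]
    print(np.sqrt(x.astype(np.complex128)))   # double-precision reference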