Re: [Numpy-discussion] How to tell if I succeeded to build numpy with amd, umfpack and lapack

2011-01-26 Thread Paul Ivanov
Samuel John, on 2011-01-26 15:08, wrote:
> Hi there!
>
> I have successfully built numpy 1.5 on ubuntu lucid (32 for now).
> I think I got ATLAS/lapack/BLAS support, and if I
>
> > ldd linalg/lapack_lite.so
>
> I see that my libptf77blas.so etc. are successfully linked. :-)
>
> However, how to I fi

Re: [Numpy-discussion] einsum

2011-01-26 Thread Mark Wiebe
On Wed, Jan 26, 2011 at 8:29 PM, Joshua Holbrook wrote:
> >> The only disadvantage I see, is that choosing the axes to operate on
> >> in a program or function requires string manipulation.
> >
> > One possibility would be for the Python exposure to accept lists or tuples
> > of integer

Re: [Numpy-discussion] einsum

2011-01-26 Thread Joshua Holbrook
>> The only disadvantage I see, is that choosing the axes to operate on
>> in a program or function requires string manipulation.
>
> One possibility would be for the Python exposure to accept lists or tuples
> of integers.  The subscript 'ii' could be [(0,0)], and 'ij,jk->ik' could be
> [(0,1
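
The integer-sublist spelling discussed here did land in released NumPy alongside the string form; a minimal sketch (array values chosen arbitrarily):

```python
import numpy as np

a = np.arange(9).reshape(3, 3)

# String subscripts: 'ii' with no output indices contracts the
# repeated index, i.e. computes the trace.
t1 = np.einsum('ii', a)

# Equivalent integer-sublist spelling, avoiding string manipulation:
# each operand is followed by a list of integer axis labels.
t2 = np.einsum(a, [0, 0])

print(t1, t2)  # both equal a.trace() == 12
```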

Re: [Numpy-discussion] einsum

2011-01-26 Thread Mark Wiebe
On Wed, Jan 26, 2011 at 5:23 PM, wrote:
> So, if I read the examples correctly we finally get dot along an axis
>
> np.einsum('ijk,ji->', a, b)
> np.einsum('ijk,jik->k', a, b)
>
> or something like this.
>
> the notation might require getting used to but it doesn't look worse
> than figuring o
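
A small sketch of the "dot along an axis" reading quoted above; shapes are chosen arbitrarily for illustration:

```python
import numpy as np

a = np.arange(24.0).reshape(2, 3, 4)
b = np.arange(6.0).reshape(3, 2)

# Contract i and j everywhere: a full "dot", reduced to a scalar.
total = np.einsum('ijk,ji->', a, b)

# Leave k free: one such contraction per position along a's last axis.
per_k = np.einsum('ijk,ji->k', a, b)

# Summing the per-axis results recovers the full contraction.
assert np.isclose(per_k.sum(), total)
```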

Re: [Numpy-discussion] einsum

2011-01-26 Thread Mark Wiebe
On Wed, Jan 26, 2011 at 6:41 PM, Jonathan Rocher wrote:
> Nice function, and wonderful that it speeds some tasks up.
>
> some feedback: the following notation is a little counter intuitive to me:
>
> >>> np.einsum('i...->', a)
> array([50, 55, 60, 65, 70])
> >>> np.sum(a, axis=0)
> a

Re: [Numpy-discussion] Numpy 2.0 schedule

2011-01-26 Thread Charles R Harris
On Wed, Jan 26, 2011 at 1:10 PM, Mark Wiebe wrote:
> On Wed, Jan 26, 2011 at 2:23 AM, Ralf Gommers wrote:
>> On Wed, Jan 26, 2011 at 12:28 PM, Mark Wiebe wrote:
>>> On Tue, Jan 25, 2011 at 5:18 PM, Charles R Harris
>>> <charlesr.har...@gmail.com> wrote:
>>>> On Tue, Jan 25, 2011 at

Re: [Numpy-discussion] einsum

2011-01-26 Thread Jonathan Rocher
Nice function, and wonderful that it speeds some tasks up.

some feedback: the following notation is a little counter intuitive to me:

>>> np.einsum('i...->', a)
array([50, 55, 60, 65, 70])
>>> np.sum(a, axis=0)
array([50, 55, 60, 65, 70])

Since there is nothing after the ->, I exp
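
Whatever one makes of `'i...->'`, the two possible intents can each be spelled unambiguously; a sketch:

```python
import numpy as np

a = np.arange(25).reshape(5, 5)

# Sum over the labelled axis i only, keeping the broadcast dimensions:
# the ellipsis appears on BOTH sides of '->'.
print(np.einsum('i...->...', a))   # same as np.sum(a, axis=0)

# An empty explicit output with all axes labelled sums everything.
print(np.einsum('ij->', a))        # same as a.sum(), i.e. 300
```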

Re: [Numpy-discussion] einsum

2011-01-26 Thread josef . pktd
On Wed, Jan 26, 2011 at 7:35 PM, Benjamin Root wrote:
> On Wednesday, January 26, 2011, Gael Varoquaux wrote:
>> On Thu, Jan 27, 2011 at 12:18:30AM +0100, Hanno Klemm wrote:
>>> interesting idea. Given the fact that in 2-d euclidean metric, the
>>> Einstein summation conventions are only a way

Re: [Numpy-discussion] einsum

2011-01-26 Thread T J
On Wed, Jan 26, 2011 at 5:02 PM, Joshua Holbrook wrote:
>> Ah, sorry for misunderstanding.  That would actually be very difficult,
>> as the iterator required a fair bit of fixes and adjustments to the core.
>> The new_iterator branch should be 1.5 ABI compatible, if that helps.
>
> I see. Perhaps

Re: [Numpy-discussion] einsum

2011-01-26 Thread Joshua Holbrook
> Ah, sorry for misunderstanding. That would actually be very difficult,
> as the iterator required a fair bit of fixes and adjustments to the core.
> The new_iterator branch should be 1.5 ABI compatible, if that helps.

I see. Perhaps the fixes and adjustments can/should be included with numpy st

Re: [Numpy-discussion] einsum

2011-01-26 Thread Benjamin Root
On Wednesday, January 26, 2011, Gael Varoquaux wrote:
> On Thu, Jan 27, 2011 at 12:18:30AM +0100, Hanno Klemm wrote:
>> interesting idea. Given the fact that in 2-d euclidean metric, the
>> Einstein summation conventions are only a way to write out
>> conventional matrix multiplications, do you co

Re: [Numpy-discussion] einsum

2011-01-26 Thread Hanno Klemm
On 27.01.2011, at 00:29, Mark Wiebe wrote:
> On Wed, Jan 26, 2011 at 3:18 PM, Hanno Klemm wrote:
>> Mark,
>>
>> interesting idea. Given the fact that in 2-d euclidean metric, the
>> Einstein summation conventions are only a way to write out
>> conventional matrix multiplications, do you consider at some po

Re: [Numpy-discussion] einsum

2011-01-26 Thread Mark Wiebe
On Wed, Jan 26, 2011 at 3:18 PM, Hanno Klemm wrote:
> Mark,
>
> interesting idea. Given the fact that in 2-d euclidean metric, the
> Einstein summation conventions are only a way to write out
> conventional matrix multiplications, do you consider at some point to
> include a non-euclidean metri

Re: [Numpy-discussion] einsum

2011-01-26 Thread Mark Wiebe
On Wed, Jan 26, 2011 at 3:05 PM, Joshua Holbrook wrote:
>> I think his real question is whether einsum() and the iterator stuff
>> can live in a separate module that *uses* a released version of numpy
>> rather than a development branch.
>>
>> --
>> Robert Kern
>
> Indeed, I would l

Re: [Numpy-discussion] einsum

2011-01-26 Thread Gael Varoquaux
On Thu, Jan 27, 2011 at 12:18:30AM +0100, Hanno Klemm wrote:
> interesting idea. Given the fact that in 2-d euclidean metric, the
> Einstein summation conventions are only a way to write out
> conventional matrix multiplications, do you consider at some point to
> include a non-euclidean metr

Re: [Numpy-discussion] einsum

2011-01-26 Thread Hanno Klemm
Mark, interesting idea. Given the fact that in 2-d euclidean metric, the Einstein summation conventions are only a way to write out conventional matrix multiplications, do you consider at some point to include a non-euclidean metric in this thing? (As you have in special relativity, for e
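
einsum itself carries no notion of a metric, but the non-euclidean case asked about here can be handled by contracting with an explicit metric tensor; a sketch using the Minkowski metric from the special-relativity example (values illustrative):

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -)
g = np.diag([1.0, -1.0, -1.0, -1.0])

# A contravariant four-vector v^nu
v = np.array([2.0, 1.0, 0.0, 0.0])

# Lower the index: v_mu = g_{mu nu} v^nu
v_low = np.einsum('mn,n->m', g, v)

# Lorentz-invariant square: v^mu v_mu = 2^2 - 1^2 = 3
s = np.einsum('m,m->', v, v_low)
print(s)  # 3.0
```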

Re: [Numpy-discussion] einsum

2011-01-26 Thread Joshua Holbrook
> I think his real question is whether einsum() and the iterator stuff
> can live in a separate module that *uses* a released version of numpy
> rather than a development branch.
>
> --
> Robert Kern

Indeed, I would like to be able to install and use einsum() without having to install another

Re: [Numpy-discussion] einsum

2011-01-26 Thread Robert Kern
On Wed, Jan 26, 2011 at 16:43, Mark Wiebe wrote:
> On Wed, Jan 26, 2011 at 2:01 PM, Joshua Holbrook wrote:
>>
>> How closely coupled is this new code with numpy's internals? That is,
>> could you factor it out into its own package? If so, then people could
>> have immediate use out of it wi

Re: [Numpy-discussion] einsum

2011-01-26 Thread Mark Wiebe
On Wed, Jan 26, 2011 at 2:01 PM, Joshua Holbrook wrote:
> How closely coupled is this new code with numpy's internals? That is,
> could you factor it out into its own package? If so, then people could
> have immediate use out of it without having to integrate it into numpy
> proper.

The code

Re: [Numpy-discussion] einsum

2011-01-26 Thread Joshua Holbrook
On Wed, Jan 26, 2011 at 12:48 PM, Mark Wiebe wrote:
> On Wed, Jan 26, 2011 at 1:36 PM, Joshua Holbrook wrote:
>> On Wed, Jan 26, 2011 at 11:27 AM, Mark Wiebe wrote:
>> > I wrote a new function, einsum, which implements Einstein summation
>> > notation, and I'd like comments/thoughts from pe

Re: [Numpy-discussion] einsum

2011-01-26 Thread Mark Wiebe
On Wed, Jan 26, 2011 at 1:36 PM, Joshua Holbrook wrote:
> On Wed, Jan 26, 2011 at 11:27 AM, Mark Wiebe wrote:
> > I wrote a new function, einsum, which implements Einstein summation
> > notation, and I'd like comments/thoughts from people who might be
> > interested in this kind of thing.
>
> T

Re: [Numpy-discussion] einsum

2011-01-26 Thread Joshua Holbrook
On Wed, Jan 26, 2011 at 11:27 AM, Mark Wiebe wrote:
> I wrote a new function, einsum, which implements Einstein summation
> notation, and I'd like comments/thoughts from people who might be interested
> in this kind of thing.

This sounds really cool! I've definitely considered doing something lik

[Numpy-discussion] einsum

2011-01-26 Thread Mark Wiebe
I wrote a new function, einsum, which implements Einstein summation notation, and I'd like comments/thoughts from people who might be interested in this kind of thing. In testing it, it is also faster than many of NumPy's built-in functions, except for dot and inner. At the bottom of this email y
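
For readers new to the notation, a couple of minimal examples of what einsum expresses (array contents and shapes chosen arbitrarily):

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
x = np.arange(3.0)

# Matrix product: contract the shared index j.
assert np.allclose(np.einsum('ij,jk->ik', A, B), np.dot(A, B))

# Inner product: contract i, leaving no free indices.
assert np.allclose(np.einsum('i,i->', x, x), np.inner(x, x))

# Transpose is just a relabelling, with no contraction at all.
assert np.allclose(np.einsum('ij->ji', A), A.T)
```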

Re: [Numpy-discussion] Numpy 2.0 schedule

2011-01-26 Thread Mark Wiebe
On Wed, Jan 26, 2011 at 2:23 AM, Ralf Gommers wrote:
> On Wed, Jan 26, 2011 at 12:28 PM, Mark Wiebe wrote:
>> On Tue, Jan 25, 2011 at 5:18 PM, Charles R Harris
>> <charlesr.har...@gmail.com> wrote:
>>> On Tue, Jan 25, 2011 at 1:13 PM, Travis Oliphant wrote:
>>>> It may make s

Re: [Numpy-discussion] 3d plane to point cloud fitting using SVD

2011-01-26 Thread Huan Liu
Hi,

I just confirmed Stefan's answer on one of the examples in
http://www.mathworks.co.jp/matlabcentral/newsreader/view_thread/262996

matlab:

A = randn(100,2)*[2 0;3 0;-1 2]';
A = A + randn(size(A))/3;
[U,S,V] = svd(A);
X = V(:,end)

python:

from numpy import *
A = random.randn(100
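
The Python snippet above is cut off; a complete sketch of the same plane fit (note that unlike MATLAB's `svd`, `np.linalg.svd` returns V transposed, so the normal is the last *row* of Vt, not the last column):

```python
import numpy as np

rng = np.random.default_rng(0)  # modern RNG API, not the thread's random.randn

# 100 noisy points near the plane through the origin spanned by the
# columns of `basis`, mirroring the MATLAB example above.
basis = np.array([[2.0, 0.0],
                  [3.0, 0.0],
                  [-1.0, 2.0]])
A = rng.standard_normal((100, 2)) @ basis.T
A += rng.standard_normal(A.shape) / 3

# The plane normal is the right singular vector of the smallest singular
# value: the last ROW of Vt here, vs. MATLAB's column V(:, end).
U, s, Vt = np.linalg.svd(A)
normal = Vt[-1]

# The noise-free plane's unit normal is proportional to (3, -2, 0).
true_normal = np.array([3.0, -2.0, 0.0]) / np.sqrt(13.0)
print(abs(normal @ true_normal))  # close to 1
```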

Re: [Numpy-discussion] Numpy 2.0 schedule

2011-01-26 Thread Bruce Southey
On 01/25/2011 10:28 PM, Mark Wiebe wrote:
> On Tue, Jan 25, 2011 at 5:18 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
>> On Tue, Jan 25, 2011 at 1:13 PM, Travis Oliphant
>> <oliph...@enthought.com> wrote:
>>> On Jan 25, 2011, at 10:42 AM, Charles R Harris wrote:

[Numpy-discussion] How to tell if I succeeded to build numpy with amd, umfpack and lapack

2011-01-26 Thread Samuel John
Hi there!

I have successfully built numpy 1.5 on ubuntu lucid (32 for now). I think I
got ATLAS/lapack/BLAS support, and if I

> ldd linalg/lapack_lite.so

I see that my libptf77blas.so etc. are successfully linked. :-)

However, how do I find out, if (and where) libamd.a and libumfpack.a have been
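
A sketch of how one might check what actually got linked; the extension path below is illustrative, so adjust it to your install:

```shell
# Show the shared libraries a compiled extension links against, and
# grep for the BLAS/LAPACK/UMFPACK names you expect to see.
ldd /usr/lib/python2.6/dist-packages/numpy/linalg/lapack_lite.so \
    | grep -iE 'blas|lapack|umfpack|amd'

# Static archives such as libamd.a / libumfpack.a will NOT appear in
# ldd output (they are folded into the .so at link time); check the
# build-time configuration instead.
python -c "import numpy; numpy.__config__.show()"
```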

Re: [Numpy-discussion] Numpy 2.0 schedule

2011-01-26 Thread David Cournapeau
On Wed, Jan 26, 2011 at 6:47 PM, Dag Sverre Seljebotn wrote:
> On 01/26/2011 02:05 AM, David wrote:
>> On 01/26/2011 01:42 AM, Charles R Harris wrote:
>>> Hi All,
>>>
>>> Just thought it was time to start discussing a release schedule for
>>> numpy 2.0 so we have something to aim at. I'm thinki

Re: [Numpy-discussion] tril, triu, document/ implementation conflict

2011-01-26 Thread eat
Hi,

On Wed, Jan 26, 2011 at 2:35 PM, wrote:
> On Wed, Jan 26, 2011 at 7:22 AM, eat wrote:
>> Hi,
>>
>> I just noticed a document/ implementation conflict with tril and triu.
>> According tril documentation it should return of same shape and data-type
>> as called. But this is not the ca

Re: [Numpy-discussion] tril, triu, document/ implementation conflict

2011-01-26 Thread josef . pktd
On Wed, Jan 26, 2011 at 7:22 AM, eat wrote:
> Hi,
>
> I just noticed a document/ implementation conflict with tril and triu.
> According tril documentation it should return of same shape and data-type as
> called. But this is not the case at least with dtype bool.
>
> The input shape is referred a

[Numpy-discussion] tril, triu, document/ implementation conflict

2011-01-26 Thread eat
Hi,

I just noticed a document/implementation conflict with tril and triu.
According to the tril documentation it should return an array of the same
shape and data-type as the input. But this is not the case, at least with
dtype bool.

The input shape is referred to as (M, N) in tril and triu, but as (N, M) in
tri. Inconsi
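
A sketch of the behavior being reported (in the NumPy of this thread, tril did not preserve a bool dtype; the shape and the triangular values, at least, always match the docstring):

```python
import numpy as np

m = np.ones((3, 3), dtype=bool)
t = np.tril(m)

# Shape matches the input, as the docstring promises.
print(t.shape)                        # (3, 3)

# Values are the lower triangle regardless of dtype...
print(bool(t[2, 0]), bool(t[0, 2]))   # True False

# ...but the dtype is what the thread is about: inspect whether it
# survived as bool on your NumPy version.
print(t.dtype)
```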

Re: [Numpy-discussion] Numpy 2.0 schedule

2011-01-26 Thread Ralf Gommers
On Wed, Jan 26, 2011 at 12:28 PM, Mark Wiebe wrote:
> On Tue, Jan 25, 2011 at 5:18 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
>> On Tue, Jan 25, 2011 at 1:13 PM, Travis Oliphant wrote:
>>> It may make sense for a NumPy 1.6 to come out in March / April in the

Re: [Numpy-discussion] Numpy 2.0 schedule

2011-01-26 Thread Dag Sverre Seljebotn
On 01/26/2011 02:05 AM, David wrote: > On 01/26/2011 01:42 AM, Charles R Harris wrote: > >> Hi All, >> >> Just thought it was time to start discussing a release schedule for >> numpy 2.0 so we have something to aim at. I'm thinking sometime in the >> period April-June might be appropriate. Ther