Re: [Numpy-discussion] Two questions about PEP 465 dot product

2015-05-22 Thread Nathaniel Smith
On May 22, 2015 2:40 PM, "Benjamin Root" wrote: > > Then add in broadcasting behavior... Vectorized functions broadcast over the vectorized dimensions; there's nothing special about @ in this regard. -n > On Fri, May 22, 2015 at 4:58 PM, Nathaniel Smith wrote: ...
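A minimal sketch of that broadcasting behavior (illustrative, not from the original message; assumes NumPy 1.10+, where np.matmul implements the @ semantics):

    import numpy as np

    # matmul is a gufunc with signature (n,m),(m,k)->(n,k): the matrix
    # axes are the core dimensions, and any leading "stack" dimensions
    # broadcast like in any other vectorized function.
    a = np.random.rand(10, 3, 4)
    b = np.random.rand(10, 4, 5)
    print(np.matmul(a, b).shape)       # (10, 3, 5); a @ b in Python 3.5+

    # The stack dimensions broadcast against each other too:
    print(np.matmul(a[:1], b).shape)   # (1, 3, 4) vs (10, 4, 5) -> (10, 3, 5)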

Re: [Numpy-discussion] Two questions about PEP 465 dot product

2015-05-22 Thread Alexander Belopolsky
On Fri, May 22, 2015 at 4:58 PM, Nathaniel Smith wrote: > For higher dimension inputs like (i, j, n, m) it acts like any other > gufunc (e.g., everything in np.linalg) Unfortunately, not everything in linalg acts the same way. For example, matrix_rank and lstsq don't.
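For instance, np.linalg.lstsq accepts only a single 2-D coefficient matrix, so a stack of problems needs an explicit loop. A minimal workaround sketch (shapes made up for illustration):

    import numpy as np

    A = np.random.rand(10, 5, 3)   # a stack of 10 independent (5, 3) systems
    y = np.random.rand(10, 5)

    # lstsq does not broadcast over the leading dimension the way a
    # gufunc would, so loop and stack the per-problem solutions:
    coefs = np.array([np.linalg.lstsq(Ai, yi)[0] for Ai, yi in zip(A, y)])
    print(coefs.shape)             # (10, 3)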

Re: [Numpy-discussion] Two questions about PEP 465 dot product

2015-05-22 Thread Benjamin Root
Then add in broadcasting behavior... On Fri, May 22, 2015 at 4:58 PM, Nathaniel Smith wrote: > On May 22, 2015 1:26 PM, "Benjamin Root" wrote: > > > > That assumes that the said recently-confused ever get to the point of > understanding it... > > Well, I don't think it's that complicated really ...

Re: [Numpy-discussion] Two questions about PEP 465 dot product

2015-05-22 Thread Nathaniel Smith
On May 22, 2015 1:26 PM, "Benjamin Root" wrote: > > That assumes that the said recently-confused ever get to the point of understanding it... Well, I don't think it's that complicated really. For whatever that's worth :-). My best attempt is here, anyway: https://www.python.org/dev/peps/pep-0465/ ...

Re: [Numpy-discussion] Two questions about PEP 465 dot product

2015-05-22 Thread Benjamin Root
That assumes that the said recently-confused ever get to the point of understanding it... and I personally don't do much matrix math work, so I don't have the proper mental context. I just know that coworkers are going to be coming to me asking questions because I am the de facto "python guy". So, ...

Re: [Numpy-discussion] Two questions about PEP 465 dot product

2015-05-22 Thread Nathaniel Smith
On May 22, 2015 11:34 AM, "Benjamin Root" wrote: > > At some point, someone is going to make a single documentation page describing all of this, right? Tables, mathtex, and such? I get woozy whenever I see this discussion go on. That does seem like a good idea, doesn't it. Following the principle ...

Re: [Numpy-discussion] Two questions about PEP 465 dot product

2015-05-22 Thread Benjamin Root
At some point, someone is going to make a single documentation page describing all of this, right? Tables, mathtex, and such? I get woozy whenever I see this discussion go on. Ben Root On Fri, May 22, 2015 at 2:23 PM, Nathaniel Smith wrote: > On May 22, 2015 11:00 AM, "Alexander Belopolsky" wrote: ...

Re: [Numpy-discussion] Two questions about PEP 465 dot product

2015-05-22 Thread Nathaniel Smith
On May 22, 2015 11:00 AM, "Alexander Belopolsky" wrote: > > > On Thu, May 21, 2015 at 9:37 PM, Nathaniel Smith wrote: > > > > .. there's been some discussion of the possibility of > > > adding specialized gufuncs for broadcasted vector-vector, > > vector-matrix, matrix-vector multiplication, which wouldn't do the magic vector promotion that dot and @ do. ...

Re: [Numpy-discussion] Two questions about PEP 465 dot product

2015-05-22 Thread Alexander Belopolsky
On Thu, May 21, 2015 at 9:37 PM, Nathaniel Smith wrote: > > .. there's been some discussion of the possibility of > adding specialized gufuncs for broadcasted vector-vector, > vector-matrix, matrix-vector multiplication, which wouldn't do the > magic vector promotion that dot and @ do. This would ...
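The "magic vector promotion" being referred to: dot and @ treat a 1-D operand as a row or column vector and then drop the inserted axis from the result. A small sketch (the einsum spelling here only stands in for what an explicit matrix-vector routine would do; the dedicated gufuncs are just a proposal in this thread):

    import numpy as np

    M = np.random.rand(3, 4)
    v = np.random.rand(4)

    # dot promotes the 1-D v to a (4, 1) column, multiplies, then
    # squeezes the result back down to 1-D:
    print(np.dot(M, v).shape)                # (3,)

    # An explicit matrix-vector contraction, with no promotion involved:
    print(np.einsum('ij,j->i', M, v).shape)  # (3,)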

Re: [Numpy-discussion] np.diag(np.dot(A, B))

2015-05-22 Thread Daπid
On 22 May 2015 at 12:15, Mathieu Blondel wrote: > Right now I am using np.sum(A * B.T, axis=1) for dense data and I have > implemented a Cython routine for sparse data. > I haven't benched np.sum(A * B.T, axis=1) vs. np.einsum("ij,ji->i", A, B) > yet since I am mostly interested in the sparse case right now. ...
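A minimal benchmark sketch for the dense comparison (sizes and repeat counts are arbitrary choices here, not from the thread):

    import numpy as np
    import timeit

    n = 1000
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)

    # Both compute diag(dot(A, B)) in O(n^2) work; the einsum version
    # also avoids materializing the (n, n) temporary that A * B.T creates.
    print(timeit.timeit(lambda: np.sum(A * B.T, axis=1), number=10))
    print(timeit.timeit(lambda: np.einsum('ij,ji->i', A, B), number=10))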

Re: [Numpy-discussion] np.diag(np.dot(A, B))

2015-05-22 Thread Mathieu Blondel
Right now I am using np.sum(A * B.T, axis=1) for dense data and I have implemented a Cython routine for sparse data. I haven't benched np.sum(A * B.T, axis=1) vs. np.einsum("ij,ji->i", A, B) yet since I am mostly interested in the sparse case right now. When A and B are C-style and Fortran-style, ...
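On the layout point: if A is C-ordered and B is Fortran-ordered, then B.T is a C-contiguous view, so the elementwise product walks both operands in memory order. A quick illustrative check:

    import numpy as np

    A = np.random.rand(1000, 1000)                     # C-contiguous
    B = np.asfortranarray(np.random.rand(1000, 1000))  # Fortran-ordered

    # Transposing a Fortran-ordered array yields a C-contiguous view,
    # so A * B.T reads both arrays sequentially:
    print(A.flags['C_CONTIGUOUS'], B.T.flags['C_CONTIGUOUS'])  # True True
    d = np.sum(A * B.T, axis=1)                        # diag(dot(A, B))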

Re: [Numpy-discussion] np.diag(np.dot(A, B))

2015-05-22 Thread Nadav Horesh
There was an idea on this list to provide a function that runs multiple dot products on several vectors/matrices. This seems to be a particular implementation of that proposed function. Nadav. On 22 May 2015 11:58, David Cournapeau wrote: On Fri, May 22, 2015 at 5:39 PM, Mathieu Blondel wrote: ...

Re: [Numpy-discussion] np.diag(np.dot(A, B))

2015-05-22 Thread David Cournapeau
On Fri, May 22, 2015 at 5:39 PM, Mathieu Blondel wrote: > Hi, > > I often need to compute the equivalent of > > np.diag(np.dot(A, B)). > > Computing np.dot(A, B) is highly inefficient if you only need the diagonal > entries. Two more efficient ways of computing the same thing are > > np.sum(A * B.T, axis=1) ...

[Numpy-discussion] np.diag(np.dot(A, B))

2015-05-22 Thread Mathieu Blondel
Hi, I often need to compute the equivalent of np.diag(np.dot(A, B)). Computing np.dot(A, B) is highly inefficient if you only need the diagonal entries. Two more efficient ways of computing the same thing are np.sum(A * B.T, axis=1) and np.einsum("ij,ji->i", A, B). The first can allocate quite a lot of temporary memory ...
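A small self-contained check that the three expressions agree (shapes chosen arbitrarily for illustration):

    import numpy as np

    A = np.random.rand(4, 5)
    B = np.random.rand(5, 4)

    d1 = np.diag(np.dot(A, B))          # computes the full product first
    d2 = np.sum(A * B.T, axis=1)        # skips the off-diagonal work, but
                                        # allocates a temporary A * B.T
    d3 = np.einsum('ij,ji->i', A, B)    # no large temporary

    print(np.allclose(d1, d2), np.allclose(d1, d3))   # True True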