On May 22, 2015 2:40 PM, "Benjamin Root" wrote:
>
> Then add in broadcasting behavior...
Vectorized functions broadcast over the vectorized dimensions, there's
nothing special about @ in this regard.
-n
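To illustrate the point (my own sketch, not code from the thread; assumes a modern NumPy with `@` support): `matmul` treats the last two axes as the matrix core and broadcasts over any leading axes, the same way other gufuncs do.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stack of 5 matrices of shape (3, 4) times a stack of 5 matrices of shape (4, 2).
A = rng.standard_normal((5, 3, 4))
B = rng.standard_normal((5, 4, 2))

# @ acts on the last two axes and broadcasts over the leading "stack" axis.
C = A @ B  # shape (5, 3, 2)

# Equivalent explicit loop over the broadcast dimension:
C_loop = np.stack([A[i] @ B[i] for i in range(5)])

assert C.shape == (5, 3, 2)
assert np.allclose(C, C_loop)
```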
On Fri, May 22, 2015 at 4:58 PM, Nathaniel Smith wrote:
> For higher dimension inputs like (i, j, n, m) it acts like any other
> gufunc (e.g., everything in np.linalg)
Unfortunately, not everything in linalg acts the same way. For example,
matrix_rank and lstsq don't.
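For context, most of `np.linalg` does follow the gufunc convention — e.g. `inv` and `det` operate on the trailing two axes and broadcast over the rest. A small illustration of mine, not from the thread:

```python
import numpy as np

rng = np.random.default_rng(1)

# A stack of 4 well-conditioned 3x3 matrices (shifted diagonal keeps them invertible).
M = rng.standard_normal((4, 3, 3)) + 3.0 * np.eye(3)

# inv and det are gufuncs: they operate on the trailing (3, 3) core
# and broadcast over the leading stack axis.
M_inv = np.linalg.inv(M)  # shape (4, 3, 3)
dets = np.linalg.det(M)   # shape (4,)

# Each slice really is the inverse of the corresponding input slice.
assert np.allclose(M @ M_inv, np.eye(3))
```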
Then add in broadcasting behavior...
On Fri, May 22, 2015 at 4:58 PM, Nathaniel Smith wrote:
> On May 22, 2015 1:26 PM, "Benjamin Root" wrote:
> >
> > That assumes that the said recently-confused ever get to the point of
> > understanding it...
>
> Well, I don't think it's that complicated really
On May 22, 2015 1:26 PM, "Benjamin Root" wrote:
>
> That assumes that the said recently-confused ever get to the point of
> understanding it...
Well, I don't think it's that complicated really. For whatever that's worth
:-). My best attempt is here, anyway:
https://www.python.org/dev/peps/pep-04
That assumes that the said recently-confused ever get to the point of
understanding it... and I personally don't do much matrix math work, so I
don't have the proper mental context. I just know that coworkers are going
to be coming to me asking questions because I am the de facto "python guy".
So,
On May 22, 2015 11:34 AM, "Benjamin Root" wrote:
>
> At some point, someone is going to make a single documentation page
> describing all of this, right? Tables, mathtex, and such? I get woozy
> whenever I see this discussion go on.
That does seem like a good idea, doesn't it? Following the principle
At some point, someone is going to make a single documentation page
describing all of this, right? Tables, mathtex, and such? I get woozy
whenever I see this discussion go on.
Ben Root
On Thu, May 21, 2015 at 9:37 PM, Nathaniel Smith wrote:
>
> .. there's been some discussion of the possibility of
> adding specialized gufuncs for broadcasted vector-vector,
> vector-matrix, matrix-vector multiplication, which wouldn't do the
> magic vector promotion that dot and @ do.
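Such specialized products can already be spelled with `einsum`, with no implicit promotion of 1-d inputs — a sketch of the idea (my example; `vecvec`, `matvec`, and `vecmat` are just labels here, not a proposed API):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stacks of vectors and matrices sharing a leading broadcast axis.
u = rng.standard_normal((5, 3))     # 5 vectors of length 3
v = rng.standard_normal((5, 3))
M = rng.standard_normal((5, 3, 3))  # 5 matrices of shape (3, 3)

# Broadcasted vector-vector product: one scalar per stacked pair.
vecvec = np.einsum('...i,...i->...', u, v)    # shape (5,)

# Broadcasted matrix-vector product: one vector per stacked pair.
matvec = np.einsum('...ij,...j->...i', M, v)  # shape (5, 3)

# Broadcasted vector-matrix product.
vecmat = np.einsum('...i,...ij->...j', u, M)  # shape (5, 3)

assert np.allclose(vecvec, np.sum(u * v, axis=1))
assert np.allclose(matvec[0], M[0] @ v[0])
assert np.allclose(vecmat[0], u[0] @ M[0])
```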
Right now I am using np.sum(A * B.T, axis=1) for dense data and I have
implemented a Cython routine for sparse data.
I haven't benched np.sum(A * B.T, axis=1) vs. np.einsum("ij,ji->i", A, B)
yet since I am mostly interested in the sparse case right now.
When A and B are C-style and Fortran-style,
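For the dense case, the two expressions agree, and the memory layout is easy to check before benchmarking — a quick sanity check of mine (assumed shapes, modern NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 300))                      # C-contiguous
B = np.asfortranarray(rng.standard_normal((300, 200)))   # Fortran-contiguous

# The transpose of a Fortran-contiguous B is C-contiguous, so A * B.T is an
# elementwise product of two C-contiguous arrays.
assert B.T.flags['C_CONTIGUOUS']

d1 = np.sum(A * B.T, axis=1)      # materializes a full (200, 300) temporary
d2 = np.einsum('ij,ji->i', A, B)  # avoids the full temporary

assert np.allclose(d1, d2)
```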
There was an idea on this list to provide a function to run multiple dot
products on several vectors/matrices. This seems to be a particular case of
that proposed function.
Nadav.
On 22 May 2015 11:58, David Cournapeau wrote:
Hi,
I often need to compute the equivalent of
np.diag(np.dot(A, B)).
Computing np.dot(A, B) is highly inefficient if you only need the diagonal
entries. Two more efficient ways of computing the same thing are
np.sum(A * B.T, axis=1)
and
np.einsum("ij,ji->i", A, B).
The first can allocate quite a large temporary array, since A * B.T is the
same size as A.
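A quick check (my example, not from the original message) that the three expressions compute the same thing:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 80))
B = rng.standard_normal((80, 50))

# Naive: computes the full (50, 50) product just to keep its 50 diagonal entries.
naive = np.diag(np.dot(A, B))

# The two cheaper alternatives from the message above.
via_sum = np.sum(A * B.T, axis=1)
via_einsum = np.einsum('ij,ji->i', A, B)

assert np.allclose(naive, via_sum)
assert np.allclose(naive, via_einsum)
```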