On Mon, Jun 6, 2016 at 3:32 PM, Jaime Fernández del Río <jaime.f...@gmail.com> wrote:
> Since we are at it, should quadratic/bilinear forms get their own function
> too? That is, after all, what the OP was asking for.
>
If we have matvecmul and vecmul, then how to implement bilinear forms
efficiently…
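For concreteness, here is one way the batched bilinear form could be composed from the two proposed primitives. Note that matvecmul and vecmul are only proposed names in this thread, not existing NumPy functions; the sketch below emulates them with matmul and einsum:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((10, 4))   # 10 stacked vectors

# matvecmul(A, x_i) for every row x_i: the rows of X @ A.T are A @ x_i
Ax = X @ A.T
# vecmul(x_i, A @ x_i) row-wise gives the bilinear form x_i @ A @ x_i
forms = np.einsum('ij,ij->i', X, Ax)

# reference: explicit Python loop
ref = np.array([x @ A @ x for x in X])
assert np.allclose(forms, ref)
```

So a dedicated quadratic-form function would essentially be this two-step composition, fused.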
On Mon, Jun 6, 2016 at 9:35 AM, Sebastian Berg wrote:
> On So, 2016-06-05 at 19:20 -0600, Charles R Harris wrote:
> > On Sun, Jun 5, 2016 at 6:41 PM, Stephan Hoyer wrote:
> > > If possible, I'd love to add new functions for "generalized ufunc"
> > > linear algebra […]
There I was thinking vector-vector inner product was in fact covered by
`np.inner`. Yikes, half inner, half outer.
As for names, I think `matvecmul` and `vecmul` do seem quite OK (probably
need `vecmatmul` as well, which does the same as `matmul` would for 1-D
first argument).
But as other sugges…
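A rough sketch of the semantics those proposed names would have, emulated with plain matmul. The functions themselves are hypothetical; only the intended broadcasting behavior (matrix/vector products over the last dimensions, broadcasting over the leading ones) is being illustrated:

```python
import numpy as np

def matvecmul(a, x):
    # proposed: matrix @ vector over the last two dims of `a`,
    # broadcasting over any leading dims
    return (a @ x[..., np.newaxis])[..., 0]

def vecmatmul(x, a):
    # proposed: vector @ matrix, i.e. what matmul does for a 1-D first arg
    return (x[..., np.newaxis, :] @ a)[..., 0, :]

def vecmul(x, y):
    # proposed: vector-vector inner product over the last axis
    return (x[..., np.newaxis, :] @ y[..., :, np.newaxis])[..., 0, 0]

a = np.arange(6.0).reshape(2, 3)
x = np.ones(3)
y = np.ones(2)
print(matvecmul(a, x))   # [ 3. 12.]
print(vecmatmul(y, a))   # [3. 5. 7.]
print(vecmul(x, x))      # 3.0
```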
On Sun, Jun 5, 2016 at 6:41 PM, Stephan Hoyer wrote:
> If possible, I'd love to add new functions for "generalized ufunc" linear
> algebra […]
A simple workaround gets the speed back:
In [11]: %timeit (X.T * A.dot(X.T)).sum(axis=0)
1 loop, best of 3: 612 ms per loop
In [12]: %timeit np.einsum('ij,ji->j', A.dot(X.T), X)
1 loop, best of 3: 414 ms per loop
If working as advertised, the code in gh-5488 will convert the
three-argument einsum…
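For reference, a small correctness check (random data at reduced sizes; the timings above will of course vary by machine) that the workaround, the two-argument einsum, and the straight three-argument einsum all compute the same batched quadratic form:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
X = rng.standard_normal((500, 100))

# the workaround from the message above
w1 = (X.T * A.dot(X.T)).sum(axis=0)
# the two-argument einsum applied to a precomputed matrix product
w2 = np.einsum('ij,ji->j', A.dot(X.T), X)
# the straight three-argument einsum
w3 = np.einsum('Na,ab,Nb->N', X, A, X)

assert np.allclose(w1, w2) and np.allclose(w1, w3)
```

The first two do the heavy lifting in one BLAS-backed matmul, which is where the speed comes back from.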
On Sun, Jun 5, 2016 at 5:08 PM, Mark Daoust wrote:
> Here's the einsum version:
>
> `es = np.einsum('Na,ab,Nb->N',X,A,X)`
>
> But that's running ~45x slower than your version.
>
> OT: anyone know why einsum is so bad for this one?
>
I think einsum can create some large intermediate arrays. It c…
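One way to see the contraction-order effect: in later NumPy releases (1.12+), einsum grew an optimize keyword that factors a multi-operand contraction into pairwise products (which can use matmul-style kernels) instead of running one nested loop over all operands at once. The results agree either way, only the evaluation strategy differs:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 200))
X = rng.standard_normal((1000, 200))

# single-pass contraction: loops over all three operands at once, no BLAS
slow = np.einsum('Na,ab,Nb->N', X, A, X, optimize=False)

# optimized path: the contraction is split into pairwise products,
# at the cost of materializing an intermediate array
fast = np.einsum('Na,ab,Nb->N', X, A, X, optimize=True)

assert np.allclose(slow, fast)
```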
If possible, I'd love to add new functions for "generalized ufunc" linear
algebra, and then deprecate (or at least discourage) using the older
versions with inferior broadcasting rules. Adding a new keyword arg means
we'll be stuck with an awkward API for a long time to come.
There are three types…
Here's the einsum version:
`es = np.einsum('Na,ab,Nb->N',X,A,X)`
But that's running ~45x slower than your version.
OT: anyone know why einsum is so bad for this one?
Mark Daoust
On Sat, May 28, 2016 at 11:53 PM, Scott Sievert wrote:
> I recently ran into an application where I had to compute many inner
> products […]
I recently ran into an application where I had to compute many inner products
quickly (roughly 50k inner products in less than a second). I wanted a vector of
inner products over the 50k vectors, or `[x1.T @ A @ x1, …, xn.T @ A @ xn]`
with A.shape = (1k, 1k).
My first instinct was to look for a…