On May 22, 2015 2:40 PM, "Benjamin Root" wrote:
>
> Then add in broadcasting behavior...

Vectorized functions broadcast over the vectorized dimensions; there's
nothing special about @ in this regard.

-n
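
A minimal illustration of that broadcasting behavior, assuming a NumPy new
enough to provide the @ operator (NumPy >= 1.10 on Python >= 3.5); the shapes
are invented for the example:

    import numpy as np

    a = np.random.rand(2, 3, 4, 5)   # stack of matrices, shape (i, j, n, m)
    b = np.random.rand(2, 3, 5, 6)   # stack of matrices, shape (i, j, m, k)

    # @ multiplies the trailing two dimensions and broadcasts over the rest,
    # exactly like any other gufunc.
    print((a @ b).shape)             # (2, 3, 4, 6)

    # A single matrix broadcasts against the whole stack.
    m = np.random.rand(5, 6)
    print((a @ m).shape)             # (2, 3, 4, 6)
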
On Fri, May 22, 2015 at 4:58 PM, Nathaniel Smith wrote:
> For higher dimension inputs like (i, j, n, m) it acts like any other
> gufunc (e.g., everything in np.linalg)
Unfortunately, not everything in linalg acts the same way. For example,
matrix_rank and lstsq don't.
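
As a concrete sketch of the difference (exact behavior depends on the NumPy
version): solve is a gufunc and broadcasts over a stack of matrices, while
lstsq wants one 2-D matrix at a time and has to be looped by hand:

    import numpy as np

    stack = np.random.rand(3, 4, 4)      # three 4x4 coefficient matrices
    rhs = np.random.rand(3, 4, 1)        # matching right-hand sides

    # solve broadcasts over the leading "stack" dimension.
    print(np.linalg.solve(stack, rhs).shape)   # (3, 4, 1)

    # lstsq does not, so the stacked case needs an explicit Python loop.
    sols = [np.linalg.lstsq(A, b)[0] for A, b in zip(stack, rhs)]
    print(np.array(sols).shape)                # (3, 4, 1)
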
Then add in broadcasting behavior...
On Fri, May 22, 2015 at 4:58 PM, Nathaniel Smith wrote:
> On May 22, 2015 1:26 PM, "Benjamin Root" wrote:
> >
> > That assumes that the said recently-confused ever get to the point of
> > understanding it...
>
> Well, I don't think it's that complicated really. ...

On May 22, 2015 1:26 PM, "Benjamin Root" wrote:
>
> That assumes that the said recently-confused ever get to the point of
> understanding it...

Well, I don't think it's that complicated really. For whatever that's worth
:-). My best attempt is here, anyway:
https://www.python.org/dev/peps/pep-0465/
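
For anyone skimming, the vector-handling rules the PEP spells out come down
to a handful of shape cases (again assuming an @-capable NumPy; shapes
invented for the example):

    import numpy as np

    M = np.ones((3, 4))
    v3 = np.ones(3)
    v4 = np.ones(4)

    print((v3 @ M).shape)    # (4,)    1-D left operand acts as a row vector
    print((M @ v4).shape)    # (3,)    1-D right operand acts as a column vector
    print(v3 @ v3)           # 3.0     1-D @ 1-D is the inner product (a scalar)
    print((M @ M.T).shape)   # (3, 3)  2-D @ 2-D is ordinary matrix multiplication
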
That assumes that the said recently-confused ever get to the point of
understanding it... and I personally don't do much matrix math work, so I
don't have the proper mental context. I just know that coworkers are going
to be coming to me asking questions because I am the de facto "python guy".
So, ...

On May 22, 2015 11:34 AM, "Benjamin Root" wrote:
>
> At some point, someone is going to make a single documentation page
> describing all of this, right? Tables, mathtex, and such? I get woozy
> whenever I see this discussion go on.

That does seem like a good idea, doesn't it. Following the principle ...

At some point, someone is going to make a single documentation page
describing all of this, right? Tables, mathtex, and such? I get woozy
whenever I see this discussion go on.

Ben Root

On Fri, May 22, 2015 at 2:23 PM, Nathaniel Smith wrote:
> ...

On May 22, 2015 11:00 AM, "Alexander Belopolsky" wrote:
>
> On Thu, May 21, 2015 at 9:37 PM, Nathaniel Smith wrote:
> >
> > ... there's been some discussion of the possibility of
> > adding specialized gufuncs for broadcasted vector-vector,
> > vector-matrix, matrix-vector multiplication, which ...

On Thu, May 21, 2015 at 9:37 PM, Nathaniel Smith wrote:
>
> ... there's been some discussion of the possibility of
> adding specialized gufuncs for broadcasted vector-vector,
> vector-matrix, matrix-vector multiplication, which wouldn't do the
> magic vector promotion that dot and @ do.

This would ...
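
For reference, the kind of non-promoting, broadcasted products described in
the quoted text can already be spelled with np.einsum; the subscript strings
below are just one possible spelling, not a settled API:

    import numpy as np

    x = np.random.rand(10, 3)      # a stack of ten 3-vectors
    y = np.random.rand(10, 3)
    M = np.random.rand(10, 3, 3)   # a stack of ten 3x3 matrices

    vv = np.einsum('...i,...i->...', x, y)     # vector-vector: shape (10,)
    vm = np.einsum('...i,...ij->...j', x, M)   # vector-matrix: shape (10, 3)
    mv = np.einsum('...ij,...j->...i', M, y)   # matrix-vector: shape (10, 3)
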
On Fri, May 22, 2015 at 12:02 AM, Charles R Harris
<charlesr.har...@gmail.com> wrote:
>
> On Thu, May 21, 2015 at 7:06 PM, Alexander Belopolsky wrote:
>
>> 1. Is there a simple expression using existing numpy functions that
>> implements PEP 465 semantics for @?
>>
>> 2. Suppose I have a function ...

On Thu, May 21, 2015 at 7:06 PM, Alexander Belopolsky wrote:
> 1. Is there a simple expression using existing numpy functions that
> implements PEP 465 semantics for @?
>
> 2. Suppose I have a function that takes two vectors x and y, and a matrix
> M and returns x.dot(M.dot(y)). I would like to ...

On Thu, May 21, 2015 at 6:06 PM, Alexander Belopolsky wrote:
> 1. Is there a simple expression using existing numpy functions that
> implements PEP 465 semantics for @?

Not yet.

> 2. Suppose I have a function that takes two vectors x and y, and a matrix M
> and returns x.dot(M.dot(y)). I would ...
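
To give a sense of why there is no one-liner: a rough sketch of the PEP's
promotion rules written out with existing primitives might look like the
following (the helper name is invented for illustration, np.einsum does the
stacked matrix product, and this is not a proposed API):

    import numpy as np

    def pep465_matmul(a, b):
        # Illustrative only: emulate @'s documented semantics with einsum.
        a = np.asarray(a)
        b = np.asarray(b)
        if a.ndim == 0 or b.ndim == 0:
            raise ValueError("@ is not defined for scalar operands")
        # Promote 1-D operands: left becomes a row, right becomes a column.
        A = a[np.newaxis, :] if a.ndim == 1 else a
        B = b[:, np.newaxis] if b.ndim == 1 else b
        # Stack-of-matrices product, broadcasting over leading dimensions.
        out = np.einsum('...ij,...jk->...ik', A, B)
        # Strip the axes that exist only because of the promotion.
        if b.ndim == 1:
            out = out[..., 0]
        if a.ndim == 1:
            out = out[..., 0, :] if b.ndim > 1 else out[..., 0]
        return out

For instance, pep465_matmul(np.ones((2, 3, 4)), np.ones(4)) comes out with
shape (2, 3), matching the stacked matrix-times-vector case.
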
1. Is there a simple expression using existing numpy functions that
implements PEP 465 semantics for @?

2. Suppose I have a function that takes two vectors x and y, and a matrix M
and returns x.dot(M.dot(y)). I would like to "vectorize" this function so
that it works with x and y of any ndim >= 1 ...
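
And for question 2, one possible spelling today is a single einsum call,
which broadcasts x and y over any leading dimensions while keeping M fixed
(a sketch; the function name is made up for the example):

    import numpy as np

    def bilinear(x, M, y):
        # x.dot(M.dot(y)) in the 1-D case, broadcast over leading dimensions.
        return np.einsum('...i,ij,...j->...', x, M, y)

    M = np.random.rand(3, 3)
    x = np.random.rand(3)
    y = np.random.rand(3)
    print(np.allclose(bilinear(x, M, y), x.dot(M.dot(y))))   # True

    xs = np.random.rand(5, 3)                # a stack of five x vectors
    print(bilinear(xs, M, y).shape)          # (5,)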