In mathematics, an inner product is a sesquilinear form on pairs of vectors, so at the least it should return a scalar. In numpy, inner is a sum over the last indices. OK, so we have
In [10]: inner(ones(2),ones(2))
Out[10]: 2.0

This doesn't work as an inner product for column vectors, which would be the usual textbook convention, but that's alright, it's not a 'real' inner product. But what happens when matrices are involved?

In [11]: inner(mat(ones(2)),mat(ones(2)))
Out[11]: array([[ 2.]])

Hmm, we get an array, not a scalar. Maybe we can cheat:

In [12]: mat(ones(2))*mat(ones(2)).T
Out[12]: matrix([[ 2.]])

What about vdot (the conjugate of the mathematical convention, i.e., the Dirac convention)?

In [17]: vdot(mat(ones(2)),mat(ones(2)))
---------------------------------------------------------------------------
exceptions.ValueError                     Traceback (most recent call last)

/home/charris/<ipython console>

ValueError: vectors have different lengths

In [18]: vdot(mat(ones(2)),mat(ones(2)).T)
---------------------------------------------------------------------------
exceptions.ValueError                     Traceback (most recent call last)

/home/charris/<ipython console>

ValueError: vectors have different lengths

Nope, vdot doesn't work for row and column vectors.

So there is *no* builtin inner product that works for matrices. I wonder if we should have one, and if so, what it should be called. I think that vdot should probably be modified to do the job.

There is also the question of whether or not v.T * v should be a scalar when v is a column vector. I believe that construction is commonly used in matrix algebra as an alias for the inner product, although strictly speaking it uses the mapping between a vector space and its dual that the inner product provides.

Chuck
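As a minimal sketch of the sort of behaviour suggested above (the helper name mat_vdot is hypothetical, not an existing numpy function), one could simply flatten both arguments before handing them to vdot, so that row vectors, column vectors, and plain 1-D arrays all give a scalar:

import numpy as np

def mat_vdot(a, b):
    # Hypothetical helper, not part of numpy: flatten both arguments to
    # 1-D so that row vectors, column vectors, and 1-D arrays all reduce
    # to the same scalar.  np.vdot conjugates its first argument, i.e.
    # the Dirac convention mentioned above.
    return np.vdot(np.asarray(a).ravel(), np.asarray(b).ravel())

# For example, mat_vdot(np.ones((1, 2)), np.ones((2, 1))) gives 2.0,
# matching the matrix-product cheat mat(ones(2))*mat(ones(2)).T above.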