Dear numpy historians,

When multiplying two arrays with numpy.dot, the summation runs over
the last axis of the first argument and over the *second-to-last*
axis of the second argument.  I wonder why this convention was
chosen?
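To make the convention concrete, here is a small sketch (shapes chosen arbitrarily) showing which axes numpy.dot contracts in the >2d case:

```python
import numpy as np

# dot(a, b)[i, j, k, m] = sum over n of a[i, j, n] * b[k, n, m]
# i.e. the last axis of `a` (length 4) is contracted with the
# second-to-last axis of `b` (also length 4).
a = np.ones((2, 3, 4))
b = np.ones((5, 4, 6))
c = np.dot(a, b)

print(c.shape)     # (2, 3, 5, 6)
print(c[0, 0, 0, 0])  # 4.0 -- sum of four 1*1 products
```

The result shape is a.shape[:-1] + b.shape[:-2] + b.shape[-1:], which is consistent with the 2d matrix-product case.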

The only reason I can think of is that this allows GEMM to be used as
a building block for the >2d case as well.  Is this the motivation?
However, the actual implementation of numpy.dot uses GEMM only in the
2d x 2d case...

Summation over the last axis of the first argument and the first axis
of the second would seem the more obvious choice.
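For comparison, the proposed convention is exactly what numpy.tensordot with axes=1 does (again with arbitrary example shapes):

```python
import numpy as np

# tensordot(a, b, axes=1) contracts the last axis of `a` with the
# *first* axis of `b` -- the "obvious" convention suggested above.
a = np.ones((2, 3, 4))
b = np.ones((4, 5, 6))
c = np.tensordot(a, b, axes=1)

print(c.shape)  # (2, 3, 5, 6)
```

Here the result shape is simply a.shape[:-1] + b.shape[1:].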

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion