> Ben's suggestion is identical to:
>
> A = numpy.tensordot(P, C, axes=(1, 0))
Yes, that does the trick! Thanks, very good idea.
Since I've built ATLAS with threading support, all four CPUs go to
100% during the computation of the dot product, which makes it quite fast.
I'm starting to love numpy arrays.
2011/2/9 Davide Lasagna:
> Hi,
>
> I want to compute the following dot product:
>
> P = np.array( [[ p11, p12 ], [p21, p22]] )
> C = np.array( [c1, c2] )
>
> a1 = p11*c1 + p12*c2
> a2 = p21*c1 + p22*c2
>
> P.shape = (n, n)
> C.shape = (n, m, l)
>
> and with a result as:
>
> A.shape = (n, m, l)
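Ben's one-liner can be checked quickly for the general shapes quoted above. A minimal sketch, with small values of n, m, l chosen here purely for illustration, and an einsum cross-check of the same contraction:

```python
import numpy as np

# Illustrative sizes only.
n, m, l = 4, 3, 5
rng = np.random.default_rng(1)
P = rng.standard_normal((n, n))      # shape (n, n)
C = rng.standard_normal((n, m, l))   # shape (n, m, l)

# Contract P's axis 1 with C's axis 0: A[i] = sum_j P[i, j] * C[j]
A = np.tensordot(P, C, axes=(1, 0))

assert A.shape == (n, m, l)
# Same contraction spelled out with einsum.
assert np.allclose(A, np.einsum('ij,jkl->ikl', P, C))
```

Since the contraction reduces to matrix products internally, it dispatches to the BLAS layer, which is where the threaded ATLAS build pays off.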
Hi,
I want to compute the following dot product:
P = np.array( [[ p11, p12 ], [p21, p22]] )
C = np.array( [c1, c2] )
where c1 and c2 are m*m matrices, so that
C.shape = (2,m,m)
I want to compute:
A = np.array([a1, a2])
where a1 and a2 are two m*m matrices, obtained from the dot product of P and C.
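For reference, the computation asked for here can be sketched against the explicit formulas a1 = p11*c1 + p12*c2 and a2 = p21*c1 + p22*c2; m = 3 is chosen only for illustration:

```python
import numpy as np

m = 3
rng = np.random.default_rng(0)
P = rng.standard_normal((2, 2))      # [[p11, p12], [p21, p22]]
C = rng.standard_normal((2, m, m))   # stack of two m*m matrices [c1, c2]

# Contract P's axis 1 with C's axis 0: A[i] = sum_j P[i, j] * C[j]
A = np.tensordot(P, C, axes=(1, 0))

# Compare with the hand-written linear combinations.
a1 = P[0, 0] * C[0] + P[0, 1] * C[1]
a2 = P[1, 0] * C[0] + P[1, 1] * C[1]
assert A.shape == (2, m, m)
assert np.allclose(A[0], a1)
assert np.allclose(A[1], a2)
```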