I have to reshape an array beta of shape (4**N, 4**N, 4**N, 4**N)
into an array betam of shape (16**N, 16**N), following:

betam = np.zeros((16**N, 16**N), dtype=complex)
for k in xrange(16**N):
    ind1 = np.mod(k, 4**N)
    ind2 = k / 4**N
    for l in xrange(16**N):
        betam[k, l] = beta[np.mod(l, 4**N), l / 4**N, ind1, ind2]
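Assuming the cut-off loop body continues the same (mod, div) pattern shown above, i.e. betam[k, l] = beta[np.mod(l, 4**N), l/4**N, ind1, ind2], the whole double loop collapses into a single transpose plus reshape. A sketch, with N kept small so the result can be checked against the loop:

import numpy as np

N = 1
d = 4**N   # so betam has shape (d*d, d*d) == (16**N, 16**N)

# small random complex test array with the shape from the question
beta = np.random.rand(d, d, d, d) + 1j * np.random.rand(d, d, d, d)

# betam[k, l] = beta[l % d, l // d, k % d, k // d]:
# put the axes in the order (k // d, k % d, l // d, l % d), i.e. reverse them,
# then merge each pair of length-d axes into one axis of length d*d
betam = beta.transpose(3, 2, 1, 0).reshape(d * d, d * d)

# check against the explicit double loop
betam_loop = np.zeros((d * d, d * d), dtype=complex)
for k in range(d * d):
    for l in range(d * d):
        betam_loop[k, l] = beta[l % d, l // d, k % d, k // d]
assert np.allclose(betam, betam_loop)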
We had that discussion ... two days ago. Please look up the thread
'How to sum weighted matrices'; it contains at least two efficient solutions.
On Fri, Mar 11, 2011 at 02:00:09PM -0500, Josh Hykes wrote:
>I think you can use tensordot here. Maybe something like the
>following:
>
>from numpy
For your problem, you can do:
import numpy as np

weights = np.array([1, 2])
matrix1 = np.ones((2, 3))
matrix2 = 2 * np.ones((2, 3))
matrices = np.array([matrix1, matrix2])   # shape (2, 2, 3): stack of matrices
# contract the weights against the stacking axis:
# weighted_sum == 1*matrix1 + 2*matrix2, shape (2, 3)
weighted_sum = np.tensordot(weights, matrices, (0, 0))
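For comparison (not necessarily one of the solutions from that thread), the same contraction can be written with np.einsum, which reads off the index pattern directly:

import numpy as np

weights = np.array([1, 2])
matrices = np.array([np.ones((2, 3)), 2 * np.ones((2, 3))])

# sum_i weights[i] * matrices[i, :, :] -- identical to the tensordot call above
weighted_sum = np.einsum('i,ijk->jk', weights, matrices)

assert np.allclose(weighted_sum, np.tensordot(weights, matrices, (0, 0)))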
--
On Mon, Mar 07
I'll have to work with large Hermitian matrices and calculate traces,
eigenvalues, and several matrix products. To speed those up, I noticed
that BLAS includes a level-3 routine called 'zhemm' that computes matrix
products with at least one Hermitian factor in an efficient manner.
However ... how can I access that function and apply it directly to
numpy arrays? If you know of alternatives that achieve the same, please
let me know.
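One possibility (a sketch, not from the original thread, and assuming a SciPy recent enough to wrap the level-3 BLAS routines): scipy.linalg.get_blas_funcs picks the type-matched variant, i.e. zhemm for complex128 input. The exact argument names of the wrapper can vary between SciPy versions, so check help(hemm) before relying on it:

import numpy as np
from scipy.linalg import get_blas_funcs

n = 500
rng = np.random.default_rng(0)

# Hermitian matrix A and a general complex matrix B
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X + X.conj().T
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Fortran (column-major) order lets the wrapper hand the arrays to BLAS
# without making internal copies
A = np.asfortranarray(A)
B = np.asfortranarray(B)

# selects the complex128 variant, i.e. zhemm
hemm, = get_blas_funcs(('hemm',), (A, B))

# C = alpha * A @ B with A Hermitian; only one triangle of A is read
C = hemm(1.0, A, B)

assert np.allclose(C, A @ B)

Since only the specified triangle of A is referenced, this also works if you only ever fill in one half of the Hermitian matrix.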
My own solution (I just heard that a very similar fix is (about to be)
included in the new svn version) - for those who need a quick fix
(a sketch along these lines follows below):
*) This quick and dirty solution is about a factor of 300 faster for an
input of 10^5 random numbers, and probably a lot more for larger vectors.
*) The deviation fr
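The quickfix code itself is not included above; a minimal FFT-based autocorrelation in plain numpy, along the lines described (the helper name autocorr_fft is just for illustration), could look like this:

import numpy as np

def autocorr_fft(v):
    """FFT-based autocorrelation of a real 1-D signal, matching
    np.correlate(v, v, mode='full') (and MATLAB's xcorr(v))."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    # zero-pad so the circular correlation computed by the FFT
    # contains the full linear correlation without wrap-around
    nfft = 2 ** int(np.ceil(np.log2(2 * n - 1)))
    V = np.fft.rfft(v, nfft)
    c = np.fft.irfft(V * np.conj(V), nfft)
    # positive lags sit at the front, negative lags at the back of the
    # circular result; reorder to lags -(n-1) .. +(n-1)
    return np.concatenate((c[nfft - (n - 1):], c[:n]))

# check against the direct implementation on a smaller vector
v = np.random.rand(2048)
assert np.allclose(autocorr_fft(v), np.correlate(v, v, mode='full'))

For real input, scipy.signal.fftconvolve(v, v[::-1], mode='full') gives the same result in a single call.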
The correlation of a large data set (about 250k points) v can be computed
via
correlate(v, v, mode='full')
and ought to give the same result as the MATLAB function
xcorr(v)
FFT might speed up the evaluation ...
In my specific case:
xcorr takes about 0.2 seconds.
correlate takes about 70 seconds.