[Numpy-discussion] suggestions on optimising a special matrix reshape

2011-07-05 Thread qubax
I have to reshape a matrix beta of the form (4**N, 4**N, 4**N, 4**N) into betam like (16**N, 16**N) following:

betam = np.zeros((16**N, 16**N), dtype=complex)
for k in xrange(16**N):
    ind1 = np.mod(k, 4**N)
    ind2 = k/4**N
    for l in xrange(16**N):
        betam[k,l] = beta[np.mod(l,4**N),
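The double loop above can be replaced by a single transpose-plus-reshape. A minimal sketch; since the quoted snippet is truncated, the exact index order inside the loop is an assumption (adjust the transpose axes if the real loop differs):

```python
import numpy as np

N = 1
d = 4**N          # length of each of beta's four axes
D = 16**N         # = d*d, the side of the reshaped matrix

rng = np.random.default_rng(0)
beta = rng.standard_normal((d, d, d, d)) + 1j * rng.standard_normal((d, d, d, d))

# Explicit double loop as in the post (index order assumed, since the
# original snippet is cut off mid-expression)
betam_loop = np.zeros((D, D), dtype=complex)
for k in range(D):
    for l in range(D):
        betam_loop[k, l] = beta[k % d, k // d, l % d, l // d]

# Vectorized equivalent: reorder the axes, then flatten pairs of axes
betam_fast = beta.transpose(1, 0, 3, 2).reshape(D, D)

assert np.allclose(betam_loop, betam_fast)
```

The transpose is free (it only changes strides); the reshape then triggers at most one copy, so the whole operation is O(16**N * 16**N) memory traffic instead of that many Python-level loop iterations.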

Re: [Numpy-discussion] loop vectorization

2011-03-11 Thread qubax
We have had that discussion about ... two days ago. Please look up 'How to sum weighted matrices' with at least two efficient solutions.

On Fri, Mar 11, 2011 at 02:00:09PM -0500, Josh Hykes wrote:
> I think you can use tensordot here. Maybe something like the
> following:
>
> from numpy
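The fix referenced in both messages replaces the Python loop with np.tensordot. A minimal self-contained check of that equivalence (array names and sizes are illustrative, not from the original post), with the equivalent np.einsum spelling alongside:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal(5)
matrices = rng.standard_normal((5, 3, 3))

# Python loop: sum_i weights[i] * matrices[i]
loop_sum = np.zeros((3, 3))
for w, m in zip(weights, matrices):
    loop_sum += w * m

# Vectorized: contract axis 0 of weights with axis 0 of matrices
tensordot_sum = np.tensordot(weights, matrices, axes=(0, 0))

# Same contraction written as an einsum
einsum_sum = np.einsum('i,ijk->jk', weights, matrices)

assert np.allclose(loop_sum, tensordot_sum)
assert np.allclose(loop_sum, einsum_sum)
```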

Re: [Numpy-discussion] How to sum weighted matrices

2011-03-07 Thread qubax
For your problem, you can do:

import numpy as np
weights = np.array([1, 2])
matrix1 = np.ones((2, 3))
matrix2 = 2 * np.ones((2, 3))
matrices = np.array([matrix1, matrix2])
weighted_sum = np.tensordot(weights, matrices, (0, 0))

--
On Mon, Mar 07

[Numpy-discussion] How to efficiently multiply 2**10 x 2**10 hermitian matrices

2010-12-30 Thread qubax
I'll have to work with large hermitian matrices and calculate traces and eigenvalues, and perform several matrix products. In order to speed those up, I noticed that BLAS includes a function called 'zhemm' for efficient matrix products with at least one hermitian matrix. Is there a way to call that on
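For the traces and eigenvalues mentioned above, NumPy already exploits hermitian structure: np.linalg.eigvalsh uses a symmetric/hermitian eigensolver and returns real eigenvalues. A small sketch with hypothetical data (not the poster's matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (M + M.conj().T) / 2      # force H to be hermitian

# eigvalsh uses a hermitian-specific LAPACK routine and returns
# real eigenvalues in ascending order
evals = np.linalg.eigvalsh(H)

# the trace of a hermitian matrix is real up to rounding,
# and equals the sum of the eigenvalues
tr = np.trace(H)

assert evals.dtype.kind == 'f'
assert np.isclose(tr.real, evals.sum())
```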

[Numpy-discussion] Efficient Matrix-matrix product of hermitian matrices, zhemm (blas) and numpy

2010-12-20 Thread qubax
I need to calculate several products of matrices where at least one of them is always hermitian. The function zhemm (in BLAS, level 3) seems to directly do that in an efficient manner. However ... how can I access that function and directly apply it to numpy arrays? If you know alternatives that a
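In current SciPy the low-level BLAS routines are exposed under scipy.linalg.blas, so zhemm can be called on NumPy arrays directly. A sketch assuming that wrapper (the thread predates the public interface, so this is one possible answer rather than the one given on the list):

```python
import numpy as np
from scipy.linalg import blas

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = M + M.conj().T            # hermitian matrix
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# zhemm computes alpha * H @ B, reading only one triangle of H.
# Fortran-ordered inputs avoid internal copies in the f2py wrapper.
Hf = np.asfortranarray(H)
Bf = np.asfortranarray(B)
C = blas.zhemm(1.0, Hf, Bf)   # defaults: side='left', upper triangle of H

assert np.allclose(C, H @ B)
```

Because only one triangle of H is referenced, zhemm does roughly half the memory reads of a general zgemm for the hermitian operand.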

Re: [Numpy-discussion] Correlation function about a factor of 100 slower than matlab/mathcad ... but with fft even worse ?

2009-11-25 Thread qubax
My own solution (I just heard that a very similar fix is (about to be) included in the new svn version) - for those who need a quickfix:

*) This quick and dirty solution is about a factor of 300 faster for an input of 10^5 random numbers. Probably a lot more for larger vectors.
*) The deviation fr

[Numpy-discussion] Correlation function about a factor of 100 slower than matlab/mathcad ... but with fft even worse ?

2009-11-25 Thread qubax
The correlation of a large data set (about 250k points) v can be checked via correlate(v, v, mode='full') and ought to give the same result as the Matlab function xcorr(v). FFT might speed up the evaluation ... In my specific case: xcorr takes about 0.2 seconds. correlate takes about 70 seco
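The FFT speedup alluded to in the thread works by zero-padding to at least 2n-1 samples so that circular correlation equals linear correlation. A sketch of the standard technique (not necessarily the exact fix that landed in svn):

```python
import numpy as np

rng = np.random.default_rng(4)
v = rng.standard_normal(1000)

# Direct O(n^2) autocorrelation, as in the post
direct = np.correlate(v, v, mode='full')

# FFT-based O(n log n) version: pad to 2n-1 samples so the circular
# correlation computed by the FFT equals the linear one
n = len(v)
nfft = 2 * n - 1
V = np.fft.rfft(v, nfft)
fft_corr = np.fft.irfft(V * V.conj(), nfft)

# irfft returns lags [0..n-1, -(n-1)..-1]; roll so lags run
# from -(n-1) to n-1, matching mode='full'
fft_corr = np.roll(fft_corr, n - 1)

assert np.allclose(direct, fft_corr)
```

For 250k points this turns an O(n^2) sum of products into a handful of O(n log n) transforms, which is where the factor-of-several-hundred speedup reported in the follow-up comes from.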