> Ben's suggestion is identical to:
>
> A = numpy.tensordot(P, C, axes=(1, 0))
Yes, that does the trick! Thanks, very good idea.
Since I've built ATLAS with threading support, all four CPUs go to 100% in
the computation of the dot product, which makes it quite fast.
I'm starting to love numpy arrays.
Hi,
I want to compute the following dot product:
P = np.array( [[ p11, p12 ], [p21, p22]] )
C = np.array( [c1, c2] )
where c1 and c2 are m*m matrices, so that
C.shape = (2,m,m)
I want to compute:
A = np.array([a1, a2])
where a1 and a2 are two m*m matrices given by the dot product of P and C,
i.e. a1 = p11*c1 + p12*c2 and a2 = p21*c1 + p22*c2.
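For reference, a minimal self-contained sketch of the np.tensordot approach
suggested above; the size m and the values of p11..p22, c1, c2 are made up
purely for illustration:

import numpy as np

m = 4
p11, p12, p21, p22 = 1.0, 2.0, 3.0, 4.0
c1 = np.random.rand(m, m)
c2 = np.random.rand(m, m)

P = np.array([[p11, p12], [p21, p22]])   # shape (2, 2)
C = np.array([c1, c2])                   # shape (2, m, m)

# Contract axis 1 of P with axis 0 of C: A[i] = sum_k P[i, k] * C[k]
A = np.tensordot(P, C, axes=(1, 0))      # shape (2, m, m)

# Check against the explicit linear combinations.
assert np.allclose(A[0], p11*c1 + p12*c2)
assert np.allclose(A[1], p21*c1 + p22*c2)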
Well, actually np.arange(2**24) was just to test the following line ;). I'm
particularly concerned about memory consumption rather than speed.
On 16 May 2010 22:53, Brent Pedersen wrote:
> On Sun, May 16, 2010 at 12:14 PM, Davide Lasagna
> wrote:
> > Hi all,
> > What is the fastest and lowest memory consumption way to compute this?
Hi all,
What is the fastest and lowest memory consumption way to compute this?
y = np.arange(2**24)
bases = y[1:] + y[:-1]
Actually it is already quite fast, but I'm not sure whether it allocates some
temporary memory during the summation. Any help is appreciated.
Cheers
Davide
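A minimal sketch of one way to keep the allocations explicit for the sum
above: the slices y[1:] and y[:-1] are views, so the only new array is the
output, which can be preallocated and filled in place (the variable names are
just the ones from the question):

import numpy as np

y = np.arange(2**24)

# Preallocate the result and let the ufunc write straight into it,
# so no intermediate array is created beyond the output itself.
bases = np.empty(y.size - 1, dtype=y.dtype)
np.add(y[1:], y[:-1], out=bases)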
If your x data are equispaced I would do something like this:

def derive(func, x):
    """
    Approximate the first derivative of function func at points x.
    """
    # compute the values of y = func(x)
    y = func(x)
    # compute the step
    dx = x[1] - x[0]
    # kernel array for second order accuracy centered
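The listing above is cut off at this point. A possible completion of the
centred, second-order scheme it sketches, using plain slicing instead of a
convolution kernel and simple one-sided differences at the boundaries (both
assumptions on my part):

import numpy as np

def derive(func, x):
    """Approximate the first derivative of func at the equispaced points x."""
    y = func(x)
    dx = x[1] - x[0]
    dydx = np.empty_like(y)
    # centred differences, second-order accurate, in the interior
    dydx[1:-1] = (y[2:] - y[:-2]) / (2.0 * dx)
    # one-sided, first-order differences at the two ends
    dydx[0] = (y[1] - y[0]) / dx
    dydx[-1] = (y[-1] - y[-2]) / dx
    return dydx

# e.g. derive(np.sin, np.linspace(0.0, np.pi, 101)) approximates np.cos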
Hi all,
I noticed some performance problems with the np.mean and np.std functions.
Here is the console output in ipython:
# make some test data
>>>: a = np.arange(80*64, dtype=np.float64).reshape(80, 64)
>>>: c = np.tile( a, [1, 1, 1])
>>>: timeit np.mean(c, axis=0)
1 loops, best of 3: 2.09 s per loop
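One way to narrow down where the time goes is to compare np.mean against the
same reduction written out explicitly on the same tiled array; this is only a
diagnostic sketch, not an explanation of the slowdown:

import numpy as np

a = np.arange(80 * 64, dtype=np.float64).reshape(80, 64)
c = np.tile(a, [1, 1, 1])            # shape (1, 80, 64)

# The same reduction done by hand; timing this next to
# np.mean(c, axis=0) isolates the cost of np.mean itself.
m1 = np.mean(c, axis=0)
m2 = c.sum(axis=0) / c.shape[0]
assert np.allclose(m1, m2)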
Is there any way to do what I have in mind? Can I obtain, "pythonically", a
list of column arrays? (See the sketch below.)
Any help is appreciated.
Cheers..
Davide Lasagna
Dip. Ingegneria Aerospaziale
Politecnico di Torino
Italia
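Regarding the column-array question above, a minimal sketch of two common
idioms, assuming a is a 2-D array (the name a and its contents are just for
illustration):

import numpy as np

a = np.arange(12).reshape(3, 4)

# Iterating over a.T walks the columns of a, so list() yields
# a list of 1-D column views.
cols = list(a.T)

# An explicit list comprehension does the same thing.
cols2 = [a[:, j] for j in range(a.shape[1])]

Both give views into a rather than copies; wrap each element in np.copy() if
independent arrays are needed.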