I have a computation whose run time is dominated by one step, and I have always wondered how to make it fast enough to be usable. I suspect that I have to use an approximation, but I was hoping someone would spot a major inefficiency in my implementation.

The calculation is a kind of outer product of two sets of time series.
Stéfan van der Walt writes:
>
> I haven't checked your code in detail, but I'll mention two common
> problems with this approach in case it fits:
>
> 1) When convolving a KxL with an MxN array, they both need to be
> zero-padded to (K+M-1)x(L+N-1).
> 2) One of the signals needs to be reversed (equivalently, conjugated
> in the frequency domain), otherwise you compute a convolution rather
> than a correlation.
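For reference, a minimal sketch of both points using numpy.fft and scipy.signal; the shapes here are illustrative, not taken from the original code:

import numpy as np
from scipy.signal import convolve2d, correlate2d

# Illustrative sizes: a K x L kernel and an M x N image.
K, L, M, N = 3, 4, 5, 6
h = np.random.randn(K, L)
g = np.random.randn(M, N)

# 1) Zero-pad both arrays to the full linear size (K+M-1) x (L+N-1);
#    without the padding the FFT product gives a *circular* convolution.
shape = (K + M - 1, L + N - 1)
G = np.fft.rfft2(g, shape)
conv_fft = np.fft.irfft2(np.fft.rfft2(h, shape) * G, shape)
assert np.allclose(conv_fft, convolve2d(g, h, mode='full'))

# 2) For a correlation, one signal must additionally be reversed
#    (for complex data, conjugated in the frequency domain).
corr_fft = np.fft.irfft2(np.fft.rfft2(h[::-1, ::-1], shape) * G, shape)
assert np.allclose(corr_fft, correlate2d(g, h, mode='full'))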
On Mon, Apr 19, 2010 at 2:36 AM, Paul Northug wrote:
>
> #
> import numpy as np
> from scipy.fftpack import fftn, ifftn
> from scipy.signal import correlate
>
> M, N, P, T = 2, 3, 4, 5
>
> phi = np.random.randn
I am having trouble reformulating a series of correlations as a single fft, ifft pair.

I have a set of kernels phi : (M = channel, N = kernel, T = time) correlated with signal a : (N, P+T-1) yielding x : (M, T). The correlation, for now, is only in the last dimension, with the result kept separate per channel M and accumulated over the N kernels.
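Without the rest of the code it is hard to say where the inefficiency is, but here is one way the whole batch can be folded into a single forward/inverse FFT pair. This is a sketch, not the original implementation: it assumes a 'valid'-mode correlation along the last axis and a sum over the N kernels, which is what the shapes above suggest.

import numpy as np

M, N, P, T = 2, 3, 4, 5
phi = np.random.randn(M, N, T)        # kernels: (channel, kernel, time)
a = np.random.randn(N, P + T - 1)     # signals

# Reference: loop version, 'valid' correlation along time, summed over N.
L = a.shape[-1] - phi.shape[-1] + 1   # output length in 'valid' mode
x_ref = np.zeros((M, L))
for m in range(M):
    for n in range(N):
        x_ref[m] += np.correlate(a[n], phi[m, n], mode='valid')

# Same result with one rfft/irfft pair: correlation == convolution with
# the time-reversed kernel, and the sum over N happens in the frequency
# domain via the einsum.
nfft = a.shape[-1] + phi.shape[-1] - 1             # zero-pad to linear length
A = np.fft.rfft(a, nfft, axis=-1)                  # (N, F)
PHI = np.fft.rfft(phi[:, :, ::-1], nfft, axis=-1)  # (M, N, F)
X = np.einsum('nf,mnf->mf', A, PHI)                # contract over kernels
x_fft = np.fft.irfft(X, nfft, axis=-1)[:, phi.shape[-1] - 1:phi.shape[-1] - 1 + L]

assert np.allclose(x_ref, x_fft)

Depending on the exact definition of the correlation (valid/same/full, and which axis is summed), the padding and the final slice change, but the einsum over the kernel axis is the part that removes the Python loops.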
Stéfan van der Walt writes:
>
> On 16 April 2010 21:35, Paul Northug wrote:
> > how is it stored in memory, as [1, 2, 3, 4] or [1, 3, 2, 4]?
>
> The latter:
>
> In [22]: np.fromstring(str(x.data))
> Out[22]: array([ 1.,  3.,  2.,  4.])
I'd like to use numpy fortran ordering in order to use some external libraries more easily, but it looks like I don't understand how it works and it is causing transposes that are confusing me.

When I define an array as:

a = np.array([[1.,2.],[3.,4.]], order='F', dtype=np.float32)

how is it stored in memory, as [1, 2, 3, 4] or [1, 3, 2, 4]?
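Stéfan's answer above can be checked directly. A small sketch (np.frombuffer/tobytes is the current spelling of the np.fromstring idiom quoted earlier):

import numpy as np

a = np.array([[1., 2.], [3., 4.]], order='F', dtype=np.float32)

# Indexing is unaffected by memory order: a[i, j] is still row i, column j.
print(a[0, 1])                    # 2.0

# The underlying buffer, however, is column-major: 1, 3, 2, 4.
print(a.ravel(order='K'))         # [1. 3. 2. 4.]  (elements in memory order)
print(np.frombuffer(a.tobytes(order='A'), dtype=np.float32))

# The strides show why: moving down a column steps 4 bytes (one float32),
# moving along a row steps 8 bytes (one whole column).
print(a.strides)                  # (4, 8)
print(a.flags['F_CONTIGUOUS'])    # True

Note also that a.T of a C-ordered array is already F-contiguous: numpy only swaps shape and strides, no data is copied, which is often the source of the "mystery transposes" when handing arrays to Fortran libraries.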
I would like to 'memoize' the objective, derivative and hessian functions, each taking a 1d double ndarray argument X, that are passed as arguments to scipy.optimize.fmin_ncg.

Each of these 3 functions has calculations in common that are expensive to compute and are a function of X. It seems fmin_ncg calls each of them separately, so the shared work is recomputed for the same X.
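One common way to do this is to key a small cache on the bytes of X, so whichever of f/fprime/fhess is called first does the shared work and the other two reuse it. A sketch, not from the original post; the least-squares objective (A, b) below is only a stand-in for the real functions:

import numpy as np
from scipy.optimize import fmin_ncg

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))   # stand-in problem data
b = rng.standard_normal(50)

class Memoized:
    """Share the expensive intermediate (here the residual A @ x - b)
    between objective, gradient and hessian evaluated at the same X."""

    def __init__(self):
        self._key = None      # bytes of the last X seen
        self._resid = None    # cached intermediate for that X

    def _shared(self, x):
        key = x.tobytes()
        if key != self._key:              # recompute only when X changes
            self._key = key
            self._resid = A @ x - b       # stand-in for the expensive part
        return self._resid

    def f(self, x):
        r = self._shared(x)
        return 0.5 * r @ r

    def fprime(self, x):
        return A.T @ self._shared(x)

    def fhess(self, x):
        self._shared(x)                   # hessian is constant for this toy problem
        return A.T @ A

m = Memoized()
x_opt = fmin_ncg(m.f, np.zeros(10), fprime=m.fprime, fhess=m.fhess, disp=False)

A single-entry cache like this is usually enough, since the three functions tend to be evaluated at the same X in sequence; a dict keyed on x.tobytes() works too if you want to keep more history.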