On Tue, May 4, 2010 at 9:23 PM, wrote:
> In [2] I didn't see anything about higher derivatives, so to get the
> Hessian I still had to do a finite difference (Jacobian) on the
> complex_step_grad. Even then the results look pretty good.
Yes, the traditional complex step does not solve the second-derivative problem.
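[For reference, here is a minimal sketch of the approach described above: a complex-step gradient, plus a Hessian obtained by central finite differences of that gradient. The names complex_step_grad and hessian are illustrative, not the poster's actual code, and f must accept complex input.]

import numpy as np

def complex_step_grad(f, x, h=1.0e-20):
    """Gradient of a scalar function f at x via the complex-step derivative.

    f must tolerate complex input (no abs() or comparisons that break
    complex arithmetic); h can be tiny because there is no subtractive
    cancellation in the complex-step formula.
    """
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        xc = x.astype(complex)
        xc[i] += 1j * h
        grad[i] = f(xc).imag / h
    return grad

def hessian(f, x, dx=1.0e-7):
    """Hessian by central finite differences of the complex-step gradient."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = dx
        H[:, j] = (complex_step_grad(f, x + e)
                   - complex_step_grad(f, x - e)) / (2.0 * dx)
    return 0.5 * (H + H.T)   # symmetrize to clean up finite-difference noise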
On Tue, May 4, 2010 at 8:23 PM, Guilherme P. de Freitas
wrote:
> On Tue, May 4, 2010 at 2:57 PM, Sebastian Walter
> wrote:
>> playing devil's advocate I'd say use Algorithmic Differentiation
>> instead of finite differences ;)
>> that would probably speed things up quite a lot.
>
> I would suggest that too
Hello,
I have the following arrays, read in as masked arrays.
I[10]: basic.data['Air_Temp'].mask
O[10]: array([ True, False, False, ..., False, False, False], dtype=bool)
I[12]: basic.data['Press_Alt'].mask
O[12]: False
I[13]: len basic.data['Air_Temp']
-> len(basic.data['Air_Temp'])
O[13]: 1758
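[The asymmetry above is normal numpy.ma behaviour: when nothing is masked, .mask can be the scalar np.ma.nomask, which prints as a bare False rather than a full boolean array. np.ma.getmaskarray() or .count() treats both cases uniformly. basic.data itself is not shown, so the small arrays below are made up for illustration.]

import numpy as np

air_temp = np.ma.masked_invalid([np.nan, 1.0, 2.0])   # one masked element
press_alt = np.ma.masked_array([10.0, 20.0, 30.0])    # nothing masked

print(air_temp.mask)                    # [ True False False]
print(press_alt.mask)                   # False  (this is np.ma.nomask)

# getmaskarray always returns a full boolean array, whatever .mask holds
print(np.ma.getmaskarray(press_alt))    # [False False False]

# counting the unmasked values works the same way in both cases
print(air_temp.count(), press_alt.count())   # 2 3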
I forgot to mention one thing: if you are doing optimization, a good
solution is a modeling package like AMPL (or GAMS or AIMMS, but I only
know AMPL, so I will restrict my attention to it). AMPL has a natural
modeling language and provides you with automatic differentiation.
It's not free, but the
On Tue, May 4, 2010 at 2:57 PM, Sebastian Walter
wrote:
> playing devil's advocate I'd say use Algorithmic Differentiation
> instead of finite differences ;)
> that would probably speed things up quite a lot.
I would suggest that too, but aside from FuncDesigner[0] (reference at
the end), I could
playing devil's advocate I'd say use Algorithmic Differentiation
instead of finite differences ;)
that would probably speed things up quite a lot.
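[For readers who have not met algorithmic differentiation before, the core idea fits in a few lines with dual numbers: every intermediate value carries its derivative along, so the result is exact to machine precision rather than a finite-difference approximation. This toy class is only a sketch of the mechanics; the speed-up Sebastian mentions comes from full (typically reverse-mode) implementations such as the packages discussed in this thread.]

import math

class Dual:
    """Forward-mode AD value: a (value, derivative) pair."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)
    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = self._lift(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def f(x):
    return 3 * x * x + x.sin()

x = Dual(2.0, 1.0)        # seed the derivative of the input with 1
y = f(x)
print(y.val, y.dot)       # f(2) and f'(2) = 12 + cos(2), no truncation error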
On Tue, May 4, 2010 at 11:36 PM, Davide Lasagna wrote:
> If your x data are equispaced I would do something like this
> def derive(func, x):
> """
If your x data are equispaced I would do something like this
def derive(func, x):
"""
Approximate the first derivative of function func at points x.
"""
# compute the values of y = func(x)
y = func(x)
# compute the step
dx = x[1] - x[0]
# kernel array for second order accuracy centered
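[The preview cuts off right at the kernel. One plausible completion, purely as an illustration and not necessarily what Davide posted, uses a [1, 0, -1] / (2*dx) convolution kernel for the second-order centred difference and patches the two end points with one-sided differences:]

import numpy as np

def derive(func, x):
    """
    Approximate the first derivative of func at the equispaced points x.
    """
    y = func(x)
    dx = x[1] - x[0]
    # kernel array for second order accuracy, centred differences;
    # np.convolve flips the kernel, hence [1, 0, -1] rather than [-1, 0, 1]
    kernel = np.array([1.0, 0.0, -1.0]) / (2.0 * dx)
    dydx = np.convolve(y, kernel, mode='same')
    # 'same' zero-pads at the ends, so fix the boundaries with one-sided steps
    dydx[0] = (y[1] - y[0]) / dx
    dydx[-1] = (y[-1] - y[-2]) / dx
    return dydx

x = np.linspace(0.0, 2.0 * np.pi, 200)
print(np.abs(derive(np.sin, x) - np.cos(x)).max())   # small (~1e-4 on this grid)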
On Tue, May 4, 2010 at 4:06 PM, gerardob wrote:
>
> Hello, I have written a very simple piece of code that computes the gradient by finite
> differences of any general function. Keeping the same idea, I would like to
> modify the code using numpy to make it faster.
> Any ideas?
> Thanks.
>
> def grad_finite_dif(self,x,user_data = None):
Hello, I have written a very simple piece of code that computes the gradient by finite
differences of any general function. Keeping the same idea, I would like to
modify the code using numpy to make it faster.
Any ideas?
Thanks.
def grad_finite_dif(self,x,user_data = None):
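[The preview stops at the signature, but the question is general enough that a sketch may help. One numpy-flavoured version builds all the perturbed points in a single array; the remaining Python loop over coordinates only disappears completely if the objective itself can evaluate a whole batch of points at once. The Rosenbrock function below is just a stand-in test case, not part of the original code.]

import numpy as np

def grad_finite_dif(f, x, eps=1.0e-6):
    """Forward-difference approximation of the gradient of a scalar f at x."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    # row i of X is x with eps added to coordinate i, built without a loop
    X = x + eps * np.eye(x.size)
    fX = np.array([f(row) for row in X])
    return (fX - fx) / eps

def rosenbrock(z):
    return (1.0 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2

print(grad_finite_dif(rosenbrock, [1.2, 1.0]))   # roughly [ 211.6  -88. ]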
On Thu, Apr 29, 2010 at 12:30 PM, Pauli Virtanen wrote:
> Wed, 28 Apr 2010 14:12:07 -0400, Alan G Isaac wrote:
> [clip]
> > Here is a related ticket that proposes a more explicit alternative:
> > adding a ``dot`` method to ndarray.
> > http://projects.scipy.org/numpy/ticket/1456
>
> I kind of like
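[For anyone skimming, the ticket is about the readability of chained products. On NumPy versions that provide the ndarray.dot method, the difference looks like this; the arrays are arbitrary examples.]

import numpy as np

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)
c = np.random.rand(5, 2)

r1 = np.dot(np.dot(a, b), c)   # nested calls read inside-out
r2 = a.dot(b).dot(c)           # the proposed method chains left to right

print(np.allclose(r1, r2))     # True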
On Tue, May 4, 2010 at 12:20 PM, S. Chris Colbert wrote:
> On Thu, 2009-03-12 at 19:59 +0100, Dag Sverre Seljebotn wrote:
> > (First off, is it OK to continue polling the NumPy list now and then on
> > Cython language decisions? Or should I expect that any interested Cython
> > users follow the Cython list?)
On Thu, 2009-03-12 at 19:59 +0100, Dag Sverre Seljebotn wrote:
> (First off, is it OK to continue polling the NumPy list now and then on
> Cython language decisions? Or should I expect that any interested Cython
> users follow the Cython list?)
>
> In Python, if I write "-1 % 5", I get 4. However, in C the same expression gives -1.
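[The difference Dag is asking about, sketched in plain Python: Python's % (floored division) follows the sign of the divisor, while C's % (truncated division) follows the sign of the dividend, which math.fmod reproduces.]

import math

print(-1 % 5)             # 4   (Python: result takes the sign of the divisor)
print(-1 // 5)            # -1, and (-1 // 5) * 5 + (-1 % 5) == -1 still holds

print(math.fmod(-1, 5))   # -1.0 (the C convention: sign of the dividend)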
On 04/05/2010 14:09, Neal Becker wrote:
> denis wrote:
>> Neal,
>> I like the idea of a faster np.histogram / histogramdd;
>> but it would have to be compatible with numpy and pylab
>> or at least a clear, documented subset (doc first).
>
> The point is not to be faster, it's to be incremental
denis wrote:
> On 03/05/2010 16:02, Neal Becker wrote:
>> I have coded in c++ a histogram object that can be used as:
>>
>> h += my_sample
>>
>> or
>>
>> h += my_vector
>>
>> This is very useful in simulations which are looping and developing results
>> incrementally. It would be great to have such a feature in numpy.
On 03/05/2010 16:02, Neal Becker wrote:
> I have coded in c++ a histogram object that can be used as:
>
> h += my_sample
>
> or
>
> h += my_vector
>
> This is very useful in simulations which are looping and developing results
> incrementally. It would be great to have such a feature in numpy.
Neal,
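[A rough numpy-level sketch of the same interface, assuming the bin edges are fixed up front; it only shows the incremental accumulation Neal describes, not the performance of his C++ object. All names below are made up for illustration.]

import numpy as np

class IncrementalHistogram:
    """Accumulate histogram counts over fixed bin edges, chunk by chunk."""
    def __init__(self, edges):
        self.edges = np.asarray(edges, dtype=float)
        self.counts = np.zeros(self.edges.size - 1, dtype=np.intp)

    def __iadd__(self, data):
        # np.histogram handles a single sample or a whole vector alike
        new, _ = np.histogram(np.atleast_1d(data), bins=self.edges)
        self.counts += new
        return self

h = IncrementalHistogram(np.linspace(-4.0, 4.0, 41))
for _ in range(100):             # a simulation loop developing results
    h += np.random.randn(1000)   # h += my_vector
h += 0.5                         # h += my_sample
print(h.counts.sum())            # 100001 minus any samples outside +/- 4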
On Tue, May 4, 2010 at 7:05 AM, David Cournapeau wrote:
> On Mon, May 3, 2010 at 7:23 PM, Austin Bingham
> wrote:
>> Hi everyone,
>>
>> I've recently been developing a python module and C++ library in
>> parallel, with core functionality in python and C++ largely just
>> layered on top of the py