The general problem is not computationally tractable. You can try
stochastic algorithms, like simulated annealing or genetic programming, but
the results depend on the problem. There is no point in computing derivatives
in that case either.
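For illustration only, a bare-bones simulated-annealing loop in plain Java might look like the sketch below. The toy objective, step size and geometric cooling schedule are made up; real use needs problem-specific tuning.

import java.util.Random;
import java.util.function.DoubleUnaryOperator;

public class AnnealingSketch {
    public static void main(String[] args) {
        // Toy 1-D objective; replace with your own function.
        DoubleUnaryOperator f = x -> Math.pow(x - 2.0, 2) + Math.sin(5 * x);

        Random rng = new Random(42);
        double x = 0.0;                 // current point
        double fx = f.applyAsDouble(x); // current value
        double temperature = 1.0;       // initial temperature (problem dependent)

        for (int i = 0; i < 10_000; i++) {
            double candidate = x + rng.nextGaussian() * 0.1;  // random neighbour
            double fc = f.applyAsDouble(candidate);
            // Always accept improvements; accept uphill moves with Boltzmann probability.
            if (fc < fx || rng.nextDouble() < Math.exp((fx - fc) / temperature)) {
                x = candidate;
                fx = fc;
            }
            temperature *= 0.999;  // geometric cooling schedule
        }
        System.out.println("best x ~ " + x + ", f(x) ~ " + fx);
    }
}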
On Sunday, August 18, 2013, Ajo Fod wrote:
> Looks like JOptimizer is restricted to solving convex problems.
Looks like JOptimizer is restricted to solving convex problems.
My application is to minimize a generic non-linear function with linear
constraints. Do you know of anything that does that?
-Ajo
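One possible workaround, sketched here purely as an illustration and not as a pointer to any particular library, is to fold the linear constraints into the objective as a quadratic penalty and minimize the penalized function with any unconstrained optimizer. The class below only assumes Commons Math's MultivariateFunction interface; f, a, b and mu are placeholders you would supply.

import org.apache.commons.math3.analysis.MultivariateFunction;

/**
 * Quadratic-penalty wrapper: approximately minimizes f(x) subject to A x = b
 * by minimizing f(x) + mu * ||A x - b||^2 with an unconstrained optimizer.
 */
public class PenalizedObjective implements MultivariateFunction {
    private final MultivariateFunction f;
    private final double[][] a;
    private final double[] b;
    private final double mu;

    public PenalizedObjective(MultivariateFunction f, double[][] a, double[] b, double mu) {
        this.f = f;
        this.a = a;
        this.b = b;
        this.mu = mu;
    }

    @Override
    public double value(double[] x) {
        double penalty = 0.0;
        for (int i = 0; i < a.length; i++) {
            double r = -b[i];
            for (int j = 0; j < x.length; j++) {
                r += a[i][j] * x[j];
            }
            penalty += r * r;  // squared violation of the i-th constraint
        }
        return f.value(x) + mu * penalty;
    }
}

Increasing mu over a few outer iterations drives the constraint violation toward zero, at the cost of worse conditioning.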
On Thu, Aug 15, 2013 at 5:48 PM, Konstantin Berlin wrote:
> There would be an advantage, true. I don't know if commons has one
> (doesn't look like it). You can also try http://www.joptimizer.com/
If you're talking about the analysis.differentiation package, I'm looking
forward to using it.
My main point in this thread, though, was that if one can compute the
Hessian analytically, it should be possible in certain cases to speed up
convergence rather than depending on an update rule to estimate an
*approximate* Hessian.
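To make that idea concrete, here is a minimal sketch of a damped Newton step that consumes an analytic gradient and Hessian directly, using Commons Math's linear-algebra classes only to solve the Newton system. The toy quadratic, step length and hand-coded derivatives are made up for illustration.

import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.ArrayRealVector;
import org.apache.commons.math3.linear.LUDecomposition;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.RealVector;

public class NewtonStepSketch {
    /**
     * One damped Newton step: xNext = x - step * H(x)^{-1} g(x).
     * 'gradient' and 'hessian' are assumed to come from analytic formulas.
     */
    static double[] newtonStep(double[] x, double[] gradient, double[][] hessian, double step) {
        RealMatrix h = new Array2DRowRealMatrix(hessian, false);
        RealVector g = new ArrayRealVector(gradient, false);
        RealVector direction = new LUDecomposition(h).getSolver().solve(g);
        return new ArrayRealVector(x, false).subtract(direction.mapMultiply(step)).toArray();
    }

    public static void main(String[] args) {
        // Toy quadratic f(x, y) = x^2 + 2 y^2: gradient = (2x, 4y), Hessian = diag(2, 4).
        double[] x = {3.0, -1.5};
        double[] grad = {2 * x[0], 4 * x[1]};
        double[][] hess = {{2, 0}, {0, 4}};
        double[] next = newtonStep(x, grad, hess, 1.0);  // exact minimum (0, 0) in one step
        System.out.println(java.util.Arrays.toString(next));
    }
}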
On 16/08/2013 18:55, Ajo Fod wrote:
> The algorithm computes the Hessian using an update rule. My question was:
> what if you can compute the Hessian analytically?
>
> Hessian: http://en.wikipedia.org/wiki/Hessian_matrix
> Gradient: http://en.wikipedia.org/wiki/Gradient
We do have support to help with this; have a look at the
analysis.differentiation package.
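In case it helps to see it in code, here is a small sketch of what using DerivativeStructure from the analysis.differentiation package looks like, assuming commons-math3 3.1+; the function and evaluation point are made up for illustration.

import org.apache.commons.math3.analysis.differentiation.DerivativeStructure;

public class DifferentiationSketch {
    public static void main(String[] args) {
        // f(x, y) = x^2 * y + sin(y), evaluated at (1.5, 2.0) with derivatives up to order 2.
        DerivativeStructure x = new DerivativeStructure(2, 2, 0, 1.5); // 2 variables, order 2, index 0
        DerivativeStructure y = new DerivativeStructure(2, 2, 1, 2.0); // index 1
        DerivativeStructure f = x.multiply(x).multiply(y).add(y.sin());

        System.out.println("f        = " + f.getValue());
        System.out.println("df/dx    = " + f.getPartialDerivative(1, 0)); // gradient component
        System.out.println("d2f/dx2  = " + f.getPartialDerivative(2, 0)); // Hessian diagonal entry
        System.out.println("d2f/dxdy = " + f.getPartialDerivative(1, 1)); // mixed second derivative
    }
}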
The algorithm computes the Hessian using an update rule. My question was:
what if you can compute the Hessian analytically?
Hessian: http://en.wikipedia.org/wiki/Hessian_matrix
Gradient: http://en.wikipedia.org/wiki/Gradient
Cheers,
-Ajo
On Fri, Aug 16, 2013 at 9:39 AM, Luc Maisonobe wrote:
On 15/08/2013 22:59, Ajo Fod wrote:
> Hello,
>
> Isn't there an advantage to being able to compute the Jacobian of the
> gradient precisely at a point?
>
> If so, is there a class that uses the Jacobian instead of estimating the
> Jacobian from the last few iterations, as NonLinearConjugateGradientOptimizer
> does?
Yes that works. Thanks!
On Thursday, August 15, 2013, Konstantin Berlin wrote:
> There would be an advantage, true. I don't know if commons has one
> (doesn't look like it). You can also try http://www.joptimizer.com/
There would be an advantage, true. I don't know if commons has one
(doesn't look like it). You can also try http://www.joptimizer.com/
On Thu, Aug 15, 2013 at 4:59 PM, Ajo Fod wrote:
> Hello,
>
> Isn't there an advantage to being able to compute the Jacobian of the
> gradient precisely at a point?
Hello,
Isn't there an advantage to being able to compute the Jacobian of the
gradient precisely at a point?
If so, is there a class that uses the Jacobian instead of estimating the
Jacobian from the last few iterations, as NonLinearConjugateGradientOptimizer
does?
Thanks,
-Ajo
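For reference, here is a minimal sketch of feeding an exact, hand-coded gradient to NonLinearConjugateGradientOptimizer, assuming the commons-math3 optim API (3.2-era); the Rosenbrock test function, tolerances and starting point are only illustrative. Note that this optimizer consumes the gradient but never a Hessian, whether exact or approximate.

import org.apache.commons.math3.analysis.MultivariateFunction;
import org.apache.commons.math3.analysis.MultivariateVectorFunction;
import org.apache.commons.math3.optim.InitialGuess;
import org.apache.commons.math3.optim.MaxEval;
import org.apache.commons.math3.optim.PointValuePair;
import org.apache.commons.math3.optim.SimpleValueChecker;
import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;
import org.apache.commons.math3.optim.nonlinear.scalar.ObjectiveFunction;
import org.apache.commons.math3.optim.nonlinear.scalar.ObjectiveFunctionGradient;
import org.apache.commons.math3.optim.nonlinear.scalar.gradient.NonLinearConjugateGradientOptimizer;

public class CgWithAnalyticGradient {
    public static void main(String[] args) {
        // Rosenbrock function with its exact, hand-coded gradient.
        MultivariateFunction f = p -> {
            double x = p[0], y = p[1];
            return 100 * Math.pow(y - x * x, 2) + Math.pow(1 - x, 2);
        };
        MultivariateVectorFunction grad = p -> {
            double x = p[0], y = p[1];
            return new double[] {
                -400 * x * (y - x * x) - 2 * (1 - x),
                200 * (y - x * x)
            };
        };

        NonLinearConjugateGradientOptimizer optimizer =
            new NonLinearConjugateGradientOptimizer(
                NonLinearConjugateGradientOptimizer.Formula.POLAK_RIBIERE,
                new SimpleValueChecker(1e-10, 1e-10));

        PointValuePair result = optimizer.optimize(
            new MaxEval(10_000),
            new ObjectiveFunction(f),
            new ObjectiveFunctionGradient(grad),
            GoalType.MINIMIZE,
            new InitialGuess(new double[] {-1.2, 1.0}));

        System.out.println(java.util.Arrays.toString(result.getPoint())); // expect ~ (1, 1)
    }
}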