BFGS is actually Fletcher's (1970) variable metric code, which I modified with him in
January 1976 in Dundee and then modified very, very slightly since (to insist that
termination only occur on a steepest descent search -- this messes up the approximate
inverse Hessian, but I have an experimental Rvmminx in the optimization and solver
project on R-forge for those interested).

I haven't looked at the internals of nlm to judge the precise update used. However,
the code is there in R-2.13.1 under src/appl/uncmin.c, where at first glance a simple
finite-difference approximation to the Hessian seems to be used. (numDeriv would
likely do better, even without the analytic gradient, because it uses an extrapolation
process to improve the approximation.)
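To illustrate the extrapolation point, here is a small sketch (the toy objective and
evaluation point are made up for illustration; it assumes the numDeriv package is
installed) comparing numDeriv's Richardson-extrapolated Hessian against the exact one:

```r
## Sketch only: numDeriv::hessian uses Richardson extrapolation by default,
## which typically beats a one-shot finite-difference approximation.
library(numDeriv)

f  <- function(x) sum((x - c(1, 2))^2) + x[1] * x[2]  # toy objective
x0 <- c(0.5, 0.5)

H <- hessian(f, x0)  # Richardson extrapolation (method = "Richardson")
H
## The exact Hessian of this quadratic is matrix(c(2, 1, 1, 2), 2, 2),
## and numDeriv recovers it essentially to machine precision.
```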

Given that you do not have a huge problem, it is likely safer to use numDeriv to
compute the Hessian when you need it, preferably by applying the jacobian function to
the (analytic) gradient if the latter is available. That is what we do in optimx.
Currently I am doing a "back to basics" refactoring of optimx. The updated version is
working, but I expect it will be a few more weeks before I have a sufficiently
comprehensive set of tests in place and run. However, if someone is eager, I can
provide access to the code earlier; contact me off-list. The existing version on CRAN
works reasonably well, but the code was getting too heavily patched.
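The jacobian-of-gradient approach can be sketched as follows (the objective, analytic
gradient, and point are illustrative inventions, not the actual optimx internals):
differentiating an analytic gradient once numerically is usually more accurate than
differentiating the objective twice.

```r
## Sketch: Hessian as the Jacobian of an analytic gradient (assumes numDeriv).
library(numDeriv)

f  <- function(x) sum((x - c(1, 2))^2) + x[1] * x[2]
gr <- function(x) c(2 * (x[1] - 1) + x[2],   # analytic gradient of f
                    2 * (x[2] - 2) + x[1])

x0 <- c(0.5, 0.5)
H1 <- jacobian(gr, x0)  # one level of numerical differencing
H2 <- hessian(f, x0)    # two levels of numerical differencing
## H1 is generally the more accurate of the two when gr is exact.
```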

John Nash



Date: Thu, 22 Sep 2011 12:15:41 +1000
From: Amy Willis <amy.wil...@anu.edu.au>
To: r-help@r-project.org
Subject: [R] nlm's Hessian update method
Message-ID: <343e6871-4564-4d9d-90f0-f2c9b30ea...@anu.edu.au>
Content-Type: text/plain; charset=us-ascii

Hi R-help!

I'm trying to understand how R's nlm function updates its estimate of the Hessian
matrix. The Dennis/Schnabel book cited in the references presents a number of
different ways to do this, and seems to conclude that the positive-definite secant
method (BFGS) works best in practice (p. 201). However, when I run my code through
the optim function with method "BFGS", slightly different estimates are produced
from those of nlm:
> optim(strt,jointll2,method="BFGS",hessian=T)$par
[1] -0.4016808  0.6057144  0.3744790 -7.1819734  3.0230386  0.4446641

> nlm(jointll2,strt,hessian=T)$estimate
[1] -0.4016825  0.6057159  0.3744765 -7.1819737  3.0230386  0.4446623

Can anyone tell me if nlm employs the BFGS method for updating its estimates? Or
does it use another algorithm?


Thank you!



------------------------------


______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.