As Gabor indicates, a start based on a good approximation is usually
helpful, and nls() will generally find solutions to problems where such
starts are available, hence the SelfStart models. The Marquardt
approaches take more of a pit-bull attitude to the original
specification: they grind away at the problem without much finesse, but
generally get there eventually. If one is solving many problems of a
similar type, good starts are the way to go. For one-off problems (or
when being lazy), I like Marquardt.
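For illustration, here is a minimal self-starting fit, taken from the
SSlogis help page and using the DNase data shipped with R; the SelfStart
model supplies its own starting values, so no start argument is needed:

DNase1 <- subset(DNase, Run == 1)
# SSlogis computes its own initial estimates for Asym, xmid and scal
fm1 <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), data = DNase1)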
It would be interesting to know what proportion of random starting
points in some reasonable bounding box produce the "singular gradient"
message or some other early termination with nls() versus a Marquardt
approach, especially as this is a tiny problem. This is just one example
of the trade-off R developers face between performance and robustness:
the Gauss-Newton method in nls() is almost always a good deal more
efficient than Marquardt approaches when it works, but it suffers from a
fairly high failure rate.
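A rough sketch of such an experiment (the bounding box here is an
arbitrary choice of mine, and DF is the data frame from Gabor's example
quoted below):

library(nlmrt)  # provides nlxb()
set.seed(42)
n <- 100; fail_nls <- 0; fail_nlxb <- 0
for (i in seq_len(n)) {
  # random start drawn from an assumed bounding box
  st <- c(a = runif(1, -1, 1), b = runif(1, -0.1, 0.1), d = runif(1, -10, 10))
  if (inherits(try(nls(y ~ exp(a + b*x) + d, DF, start = st),
                   silent = TRUE), "try-error")) fail_nls <- fail_nls + 1
  if (inherits(try(nlxb(y ~ exp(a + b*x) + d, start = st, data = DF),
                   silent = TRUE), "try-error")) fail_nlxb <- fail_nlxb + 1
}
cat("nls failures:", fail_nls, "of", n,
    "; nlxb failures:", fail_nlxb, "of", n, "\n")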
JN
On 13-03-15 10:01 AM, Gabor Grothendieck wrote:
On Fri, Mar 15, 2013 at 9:45 AM, Prof J C Nash (U30A) <nas...@uottawa.ca> wrote:
Actually, it likely won't matter where you start. The Gauss-Newton
direction is nearly always close to 90 degrees from the gradient, as can
be seen by setting trace=TRUE in the function nlxb() from package nlmrt,
which performs a safeguarded Marquardt calculation. nlxb() can be used
in place of nls(), except that you need to put your data in a data
frame. It finds a solution pretty straightforwardly, though with quite a
few iterations and function evaluations.
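For reference, a minimal version of that call (assuming the DF data
frame defined below):

library(nlmrt)
# safeguarded Marquardt fit; trace = TRUE prints the iteration details
nlxb(y ~ exp(a + b*x) + d, start = c(a = 0, b = 0, d = 1),
     trace = TRUE, data = DF)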
Interesting observation, but nls() does converge in 5 iterations with
the improved starting value, whereas it fails with a singular gradient
from the original starting value.
Lines <- "
+ x y
+ 60 0.8
+ 80 6.5
+ 100 20.5
+ 120 45.9
+ "
DF <- read.table(text = Lines, header = TRUE)
# original starting value - singular gradient
nls(y ~ exp(a + b*x) + d, DF, start = list(a = 0, b = 0, d = 1))
Error in nlsModel(formula, mf, start, wts) :
singular gradient matrix at initial parameter estimates
# better starting value - converges in 5 iterations
lm1 <- lm(log(y) ~ x, DF)
st <- setNames(c(coef(lm1), 0), c("a", "b", "d"))
nls(y ~ exp(a + b*x) + d, DF, start = st)
Nonlinear regression model
model: y ~ exp(a + b * x) + d
data: DF
a b d
-0.1492 0.0342 -6.1966
residual sum-of-squares: 0.5743
Number of iterations to convergence: 5
Achieved convergence tolerance: 6.458e-07