The optimization algorithms did converge to a limit point, but not to a 
stationary point, i.e., a point in parameter space where the first- and 
second-order KKT conditions are satisfied.  If you check the gradient at the 
solution, you will see that its magnitude is quite large relative to 0.  So 
why did the algorithms declare convergence?  Convergence is declared based on 
the absolute change in the function value and/or the relative change in the 
parameter values between consecutive iterations; this does not ensure that 
the KKT conditions are satisfied.
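
For example, here is a minimal sketch of how to check the gradient at the 
reported solution (the objective 'negloglik' and the data below are 
hypothetical stand-ins; substitute your own):

library(numDeriv)

## toy normal negative log-likelihood in (mean, log-sd)
negloglik <- function(theta, y) {
  -sum(dnorm(y, mean = theta[1], sd = exp(theta[2]), log = TRUE))
}

set.seed(1)
y   <- rnorm(50, mean = 2, sd = 3)
fit <- optim(c(0, 0), negloglik, y = y, method = "BFGS")

## gradient at the reported solution; should be near 0 at a
## stationary point even though optim() reported convergence
g <- grad(negloglik, fit$par, y = y)
max(abs(g))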

Now, to the real issue: your problem is ill-posed.  As you can tell from the 
eigenvalues of the Hessian, they span 9 orders of magnitude.  This may 
indicate a problem with the data, in the sense that the log-likelihood is 
over-parametrized relative to the information in the data set.  Get a better 
data set or formulate a simpler model, and the problem will disappear.
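
As a sketch (continuing the hypothetical fit above), you can inspect the 
Hessian eigenvalues at the solution yourself:

## Hessian eigenvalues at the solution; a spread of many orders of
## magnitude (i.e., a huge condition number) signals ill-conditioning
H  <- hessian(negloglik, fit$par, y = y)
ev <- eigen(H, symmetric = TRUE)$values
ev
max(ev) / min(ev)   # condition number (meaningful when all ev > 0)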

Best,
Ravi
