Of course, a closed-form Hessian is great, but I would still check it using deriv3(), or using hessian() from numDeriv. It should be noted that hessian() is so accurate that it can serve as a surrogate for the exact Hessian. It is computationally demanding, but that is tolerable when you are computing it only once, at the end of a converged MLE.

Ravi.

____________________________________________________________________
Ravi Varadhan, Ph.D.
Assistant Professor, Division of Geriatric Medicine and Gerontology
School of Medicine, Johns Hopkins University
Ph: (410) 502-2619
Email: rvarad...@jhmi.edu
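For illustration, a sketch of the check Ravi describes, with a hypothetical gamma(shape, rate) negative log-likelihood standing in for whatever model is actually being fitted; only the pattern matters:

library(numDeriv)                      # provides hessian()

set.seed(1)
x <- rgamma(500, shape = 2, rate = 3)  # stand-in data
n <- length(x)

## negative log-likelihood of a gamma(shape = p[1], rate = p[2]) model
negll <- function(p) -sum(dgamma(x, shape = p[1], rate = p[2], log = TRUE))

## its closed-form Hessian
hess_exact <- function(p) {
  a <- p[1]; b <- p[2]
  matrix(c(n * trigamma(a), -n / b,
           -n / b,           n * a / b^2), 2, 2)
}

## fit, then compare analytic and numDeriv Hessians at the converged MLE
start <- c(mean(x)^2 / var(x), mean(x) / var(x))   # method-of-moments start
fit   <- optim(start, negll, method = "L-BFGS-B", lower = c(1e-6, 1e-6))
max(abs(hess_exact(fit$par) - hessian(negll, fit$par)))  # negligible if both are right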
----- Original Message -----
From: avraham.ad...@guycarp.com
Date: Thursday, June 18, 2009, 10:54 am
Subject: RE: [R] Matrix inversion-different answers from LAPACK and LINPACK
To: Ravi Varadhan <rvarad...@jhmi.edu>
Cc: r-help@r-project.org

> Thank you. One question, though: in the case where I have closed-form
> formulæ for the Hessian, or the functions are parseable by deriv3(), would
> it be better to use that than the approximation?
>
> --Avraham
>
>
> From: "Ravi Varadhan" <rvarad...@jhmi.edu>
> Date: 06/17/2009 06:50 PM
> To: <avraham.ad...@guycarp.com>, "'Douglas Bates'" <ba...@stat.wisc.edu>
> Cc: <dmba...@gmail.com>, <r-help@r-project.org>
> Subject: RE: [R] Matrix inversion-different answers from LAPACK and LINPACK
>
> Avraham,
>
> The Hessian calculated by optim() can be inaccurate, since it uses an
> inaccurate finite-difference approximation to the second partial
> derivatives. A better approach is to use the hessian function, hessian(),
> from the numDeriv package, which uses an accurate, higher-order Richardson
> extrapolation scheme.
>
> Ravi.
>
> ----------------------------------------------------------------------------
> Ravi Varadhan, Ph.D.
> Assistant Professor, The Center on Aging and Health
> Division of Geriatric Medicine and Gerontology
> Johns Hopkins University
> Ph: (410) 502-2619
> Fax: (410) 614-9625
> Email: rvarad...@jhmi.edu
> ----------------------------------------------------------------------------
>
> -----Original Message-----
> From: r-help-boun...@r-project.org On Behalf Of avraham.ad...@guycarp.com
> Sent: Wednesday, June 17, 2009 6:11 PM
> To: Douglas Bates
> Cc: dmba...@gmail.com; r-help@r-project.org
> Subject: Re: [R] Matrix inversion-different answers from LAPACK and LINPACK
>
> I will be the first one to admit I may be doing something stupid, which is
> why I am asking here.
>
> Yes, it is supposed to be a variance-covariance matrix, but one found by
> numerical iteration (a call to optim). I am actually trying, in a sense, to
> reproduce "hessian=TRUE" in cases where I know the analytic form of the
> first and second partial derivatives of the distribution I am fitting by
> maximum likelihood, so I cannot guarantee that the result will be positive
> semi-definite. I wanted to try the supposed gain in speed and precision
> from passing these to optim using BFGS, for example, as well as possibly
> being able to get parameter cross-correlations even in cases where the
> simplex result of optim returns a degenerate Hessian. I am learning this as
> I go, using various texts (MASS, Crawley, etc.) and internet pages, so it
> is more than likely that my ignorance is making something go awry. If you
> can point me to any texts or sources you would consider helpful and
> educational, I would very much appreciate it.
>
> Thank you,
>
> --Avi
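A sketch of the workflow Avraham describes, again with a hypothetical gamma likelihood standing in for the real model: the closed-form gradient is supplied to optim() through gr=, and the parameter cross-correlations are read off the inverted Hessian at the optimum.

library(numDeriv)

set.seed(1)
x <- rgamma(500, shape = 2, rate = 3)  # stand-in data
n <- length(x)

negll <- function(p) -sum(dgamma(x, shape = p[1], rate = p[2], log = TRUE))

## closed-form gradient of negll, passed to optim() via gr=
grad_exact <- function(p) {
  a <- p[1]; b <- p[2]
  c(-n * log(b) + n * digamma(a) - sum(log(x)),
    -n * a / b + sum(x))
}

start <- c(mean(x)^2 / var(x), mean(x) / var(x))
fit   <- optim(start, negll, gr = grad_exact, method = "BFGS")

## the inverse Hessian of negll at the MLE approximates the covariance of
## the estimates; cov2cor() turns it into parameter cross-correlations
H <- hessian(negll, fit$par)   # accurate numDeriv Hessian, per Ravi's advice
cov2cor(solve(H))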
> From: Douglas Bates <ba...@stat.wisc.edu> (sent by dmba...@gmail.com)
> Date: 06/17/2009 05:55 PM
> To: Albyn Jones <jo...@reed.edu>
> Cc: avraham.ad...@guycarp.com, r-help@r-project.org
> Subject: Re: [R] Matrix inversion-different answers from LAPACK and LINPACK
>
> On Wed, Jun 17, 2009 at 2:02 PM, Albyn Jones <jo...@reed.edu> wrote:
> > As you seem to be aware, the matrix is poorly conditioned:
> >
> >   > kappa(PLLH, exact=TRUE)
> >   [1] 115868900869
> >
> > It might be worth your while to think about reparametrizing.
>
> Also, if it is to be a variance-covariance matrix then it must be positive
> semidefinite, so you should be considering a Cholesky decomposition, not a
> QR decomposition.
>
> I think we should insert code to print a warning
>
>   "Just because you found a formula involving the inverse of a matrix
>   doesn't mean that this is a good way to calculate the result - in fact
>   it is usually a very bad way."
>
> whenever a user asks for
>
>   solve(x)
>
> I was corresponding with Tim Davis, a renowned numerical analyst who wrote
> the sparse matrix software used in the Matrix package, and he was horrified
> that we even allow the one-argument form of the solve function. He said,
> "But people will do very stupid things if you provide them with an easy way
> of asking for a matrix inverse," and I said, "Yep."
>
> I would amend
>
>   fortune("rethink")
>
>   If the answer is parse() you should usually rethink the question.
>      -- Thomas Lumley
>         R-help (February 2005)
>
> to say "parse() or solve(x)".
>
> > albyn
> >
> > On Wed, Jun 17, 2009 at 11:37:48AM -0400, avraham.ad...@guycarp.com wrote:
> >>
> >> Hello.
> >>
> >> I am trying to invert a matrix, and I am finding that I can get different
> >> answers depending on whether I set LAPACK true or false using qr(). I had
> >> understood that LAPACK is, in general, more robust and faster than
> >> LINPACK, so I am confused as to why I am getting what seem to be invalid
> >> answers. The matrix is ostensibly the Hessian of a function I am
> >> optimizing. I want the parameter correlations, so I need to invert the
> >> matrix. There are times when the standard solve(X) returns an error but
> >> solve(qr(X, LAPACK=TRUE)) returns values; however, there are also times
> >> when the latter returns what seem to be bizarre results.
> >>
> >> For example, one such matrix is PLLH (a Pareto log-likelihood Hessian):
> >>
> >>                        alpha                     theta
> >> alpha  1144.6262175141619082  -0.012907777205604828788
> >> theta    -0.0129077772056048   0.000000155437688485563
> >>
> >> Running plain solve(PLLH) or solve(qr(PLLH)) returns:
> >>
> >>                         [,1]                   [,2]
> >> alpha     0.0137466171688024       1141.53956787721
> >> theta  1141.5395678772131305  101228592.41439932585
> >>
> >> However, running solve(qr(PLLH, LAPACK=TRUE)) returns:
> >>
> >>                         [,1]                    [,2]
> >> [1,]     0.0137466171688024      0.0137466171688024
> >> [2,]  1141.5395678772131305   1141.5395678772131305
> >>
> >> which seems wrong, as the original matrix had identical entries on the
> >> off-diagonals.
> >>
> >> I am neither a programmer nor an expert in matrix calculus, so I do not
> >> understand why I should be getting different answers when using different
> >> libraries to perform the ostensibly same function. I understand that the
> >> extremely small value of d²X/d(theta)² (PLLH[2,2]) may be contributing to
> >> the error, but I would appreciate confirmation, or correction, from the
> >> experts on this list.
> >>
> >> Thank you very much,
> >>
> >> --Avraham Adler
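For a Hessian that is numerically positive definite, the Cholesky route Douglas Bates suggests avoids the one-argument solve() altogether; a sketch using the PLLH values quoted above (entered symmetrically, to the printed precision):

PLLH <- matrix(c(1144.6262175141619082, -0.0129077772056048,
                   -0.0129077772056048,  0.000000155437688485563),
               nrow = 2, dimnames = list(c("alpha", "theta"),
                                         c("alpha", "theta")))

kappa(PLLH, exact = TRUE)   # ~1.16e11, as Albyn reported: badly conditioned

## chol() stops with an informative error if PLLH is not positive definite;
## otherwise chol2inv() returns the inverse without calling solve(PLLH)
Vcov <- chol2inv(chol(PLLH))
cov2cor(Vcov)               # parameter correlation, roughly 0.97 here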
> >> PS: For completeness, the QR decompositions per R under LINPACK and
> >> LAPACK are shown below:
> >>
> >>   > qr(PLLH)
> >>   $qr
> >>                             alpha                      theta
> >>   alpha -1144.6262175869414932095  0.01290777720653695122277
> >>   theta    -0.0000112768491646264  0.00000000987863187747112
> >>
> >>   $rank
> >>   [1] 2
> >>
> >>   $qraux
> >>   [1] 1.99999999993641619511209 0.00000000987863187747112
> >>
> >>   $pivot
> >>   [1] 1 2
> >>
> >>   attr(,"class")
> >>   [1] "qr"
> >>
> >>   > qr(PLLH, LAPACK=TRUE)
> >>   $qr
> >>                              alpha                      theta
> >>   alpha -1144.62621758694149320945  0.01290777720653695122277
> >>   theta    -0.00000563842458249248  0.00000000987863187747112
> >>
> >>   $rank
> >>   [1] 2
> >>
> >>   $qraux
> >>   [1] 1.99999999993642 0.00000000000000
> >>
> >>   $pivot
> >>   [1] 1 2
> >>
> >>   attr(,"useLAPACK")
> >>   [1] TRUE
> >>   attr(,"class")
> >>   [1] "qr"
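As for Albyn's suggestion to reparametrize, one simple version (an illustration, not necessarily the reparametrization he had in mind) is to rescale the parameters so that the Hessian's diagonal is of order one, which shrinks the condition number here from about 1e11 to about 60; the inverse is then recovered by undoing the scaling:

PLLH <- matrix(c(1144.6262175141619082, -0.0129077772056048,
                   -0.0129077772056048,  0.000000155437688485563), 2, 2)

## rescale so the diagonal becomes 1 -- equivalent to measuring alpha and
## theta in different units, i.e. a simple linear reparametrization
D <- diag(1 / sqrt(diag(PLLH)))
S <- D %*% PLLH %*% D

kappa(PLLH, exact = TRUE)   # ~1.16e11
kappa(S,    exact = TRUE)   # ~61: far better behaved

## since PLLH = D^{-1} %*% S %*% D^{-1}, its inverse is D %*% S^{-1} %*% D
Vcov <- D %*% chol2inv(chol(S)) %*% D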