Thanks Xiaohui. I saw your solution a few days ago but only now realize what you're doing. At the time, it didn't make sense to me to use the density from R directly. Now I get it.
The solution you get also makes sense, because par[2] is ~ 1/par[1], which is what the analytical MLEs imply. I had an error in my likelihood because I left out a term, which Haris Skiadis pointed out privately. I fixed it (but it still didn't converge), and now I have something to shoot towards, which will help me a lot in figuring out what's going on, so thanks very much.

-----Original Message-----
From: Xiaohui Chen [mailto:[EMAIL PROTECTED]]
Sent: Saturday, May 24, 2008 3:04 AM
To: [EMAIL PROTECTED]
Cc: r-help@r-project.org
Subject: Re: [R] maximizing the gamma likelihood

Hi Mark,

As I said in earlier emails, you do NOT need to explicitly code the likelihood function for the gamma distribution. I just coded your example as below:

vsamples <- c(14.7, 18.8, 14, 15.9, 9.7, 12.8)
gamma.nll <- function(par, data) -sum(dgamma(data, shape = par[1], scale = par[2], log = TRUE))
optim(c(1, 1), gamma.nll, data = vsamples, method = "BFGS")

$par
[1] 24.9327797  0.5743043

$value
[1] 14.72309

The MLE is pretty robust to the starting values: if you change the initialization to c(10, 10), the results would be

$par
[1] 25.0285880  0.5721142

$value
[1] 14.72299

Anyway, for the standard distributions that R already implements, you definitely don't have to do everything from scratch.

Xiaohui

[EMAIL PROTECTED] wrote:
> For learning purposes, and also to help someone, I used Roger Peng's
> document to get the MLEs of the gamma, where the gamma is defined as
>
> f(y_i) = (1/gammafunction(shape)) * (scale^shape) * (y_i^(shape-1)) * exp(-scale*y_i)
>
> (I'm defining the scale as lambda rather than 1/lambda; various books
> define it differently.)
>
> I found the log-likelihood to be
>
> n*shape*log(scale) + (shape-1)*sum(log(y_i)) - scale*sum(y_i)
>
> Then I wrote the code below, which is just Roger Peng's likelihood example
> but using the gamma instead of the normal.
>
> I get estimates back, but I separately found that the analytical MLE of
> the scale is equal to 1 / analytical MLE(shape), and my estimates aren't
> consistent with that fact. This leads me to assume that my estimates are
> not correct.
>
> Can anyone tell me what I'm doing wrong? Maybe my starting values are
> too far off? Thanks.
>
> make.negloglik <- function(data, fixed = c(FALSE, FALSE)) {
>     op <- fixed
>     function(p) {
>         op[!fixed] <- p
>         shape <- exp(op[1])
>         scale <- exp(op[2])
>         a <- length(data) * shape * log(scale)
>         b <- (shape - 1) * sum(log(data))
>         c <- -1.0 * scale * sum(data)
>         -(a + b + c)
>     }
> }
>
> vsamples <- c(14.7, 18.8, 14, 15.9, 9.7, 12.8)
> nLL <- make.negloglik(vsamples)
> temp <- optim(c(scale = 1, shape = 1), nLL, method = "BFGS")[["par"]]
> estimates <- log(temp)
> print(estimates)
>
> check <- estimates[1] / mean(vsamples)
> print(check)
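[Postscript, not part of the original messages: a minimal sketch of what the hand-coded negative log-likelihood might look like once the -n*log(Gamma(shape)) term, which is the term missing from the log-likelihood written above, is put back. It keeps the rate ("lambda") parameterization and the exp() trick from the original post; the function name make.gamma.negloglik, the other names, and the starting values are my own, and this is illustrative rather than the poster's actual fix.]

make.gamma.negloglik <- function(data) {
    n <- length(data)
    sum.log.y <- sum(log(data))
    sum.y <- sum(data)
    function(p) {
        # optimize on the log scale so shape and rate stay positive
        shape <- exp(p[1])
        rate  <- exp(p[2])
        # full gamma log-likelihood in the rate parameterization,
        # including the -n*lgamma(shape) term that was left out above
        loglik <- n * shape * log(rate) - n * lgamma(shape) +
            (shape - 1) * sum.log.y - rate * sum.y
        -loglik
    }
}

vsamples <- c(14.7, 18.8, 14, 15.9, 9.7, 12.8)
nLL2 <- make.gamma.negloglik(vsamples)
fit <- optim(c(0, 0), nLL2, method = "BFGS")   # c(0, 0) means shape = rate = 1
exp(fit$par)          # estimated shape and rate (exp, not log, undoes the transform)
1 / exp(fit$par)[2]   # implied scale, comparable to par[2] ~ 0.574 from the dgamma fit
fit$value             # should be close to the ~14.723 from the dgamma-based fit

Optimizing log(shape) and log(rate) keeps both parameters positive during the BFGS line search, which avoids NaN values from log() and lgamma(); the estimates are then recovered with exp(), not log().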