Luke Tierney wrote:

[misc snippage]

>>
>> But I'd prefer to avoid the necessity for users to manipulate the
>> environment of a function. I think the pattern
>>
>> model( f, data=d )
>
> For working at the general likelihood I think it is better to
> encourage the approach of defining likelihood constructor functions.
> The problem with using f, data is that you need to match the names
> used in f and in data, so either you have to explicitly write out f
> with the names you have in data or you have to modify data to use the
> names f likes -- in the running example think
>
> f <- function(lambda) -sum(dpois(x, lambda, log=T))
> d <- data.frame(y=rpois(10000, 12.34))
>
> somebody has to connect up the x in f with the y in d.

[more snippage]
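A minimal sketch of the constructor style described above, using the running Poisson example (mkPoisNegLogLik is only an illustrative name, not existing code): the data are captured in the closure when the likelihood is built, so the x inside the function and the y in the data frame never have to be matched up by the fitting machinery.

## Constructor: bind the data when the likelihood is created
mkPoisNegLogLik <- function(x) {
    force(x)                      # evaluate the data argument now
    function(lambda) -sum(dpois(x, lambda, log = TRUE))
}

d <- data.frame(y = rpois(10000, 12.34))
f <- mkPoisNegLogLik(d$y)         # connects up the y in d with the x in f
f(12.34)                          # negative log-likelihood at lambda = 12.34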
That's not really worse than having to match the names in a model formula
to the names of the data frame in lm(), is it?

The thing that I'm looking for in these matters is a structure which
allows us to operate on likelihood functions in a rational way, e.g.
reparametrize them, join multiple likelihoods with some parameters in
common, or integrate them.

The join operation is illustrative: You can easily do

negljoint <- function(alpha, beta, gamma, delta)
    negl1(alpha, beta, gamma) + negl2(beta, gamma, delta)

and with a bit of diligence, this could be the result of
Join(negl1, negl2). But if the convention is that likelihoods have their
data as an argument, you also need to automatically define a data
argument for negljoint (presumably a list of two) and arrange that the
calls to negl1 and negl2 contain the appropriate subdata. It is the sort
of thing that might be doable, but that you'd rather do without.

   -pd

-- 
   O__  ---- Peter Dalgaard             Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics     PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
~~~~~~~~~~ - ([EMAIL PROTECTED])                  FAX: (+45) 35327907

______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
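For the case where the likelihoods carry no data argument and parameters are matched purely by name, a rough sketch of what such a Join might look like (Join, negl1 and negl2 are illustrative names, not existing R functions):

## Sketch: build the joint negative log-likelihood from two negative
## log-likelihoods, merging their formal arguments by name.
Join <- function(negl1, negl2) {
    a1 <- names(formals(negl1))
    a2 <- names(formals(negl2))
    allargs <- union(a1, a2)          # e.g. alpha, beta, gamma, delta
    joint <- function() {
        ## collect all supplied parameter values from this call's frame
        args <- mget(allargs, envir = environment())
        do.call(negl1, args[a1]) + do.call(negl2, args[a2])
    }
    ## give the joint likelihood one formal argument per distinct parameter
    formals(joint) <- setNames(rep(alist(x = ), length(allargs)), allargs)
    joint
}

## Toy usage with made-up quadratic "likelihoods":
negl1 <- function(alpha, beta, gamma) (alpha - 1)^2 + (beta - 2)^2 + gamma^2
negl2 <- function(beta, gamma, delta) (beta - 2)^2 + (gamma - 3)^2 + delta^2
negljoint <- Join(negl1, negl2)
names(formals(negljoint))             # "alpha" "beta" "gamma" "delta"
negljoint(alpha = 1, beta = 2, gamma = 3, delta = 0)

This only works so smoothly because each component likelihood already has its data baked in; as soon as the convention is that the data come in as an argument, Join would also have to construct and split a composite data argument, which is exactly the complication described above.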