It's a bug resulting from the new svyquantile() implementation. It's fixed in
the development version, which you can get from r-forge here:
https://r-forge.r-project.org/R/?group_id=1788
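A one-line sketch (not part of the original reply) of installing that development version, using the usual R-Forge repository argument:

## the development version mentioned above; standard R-Forge install pattern
install.packages("survey", repos = "http://R-Forge.R-project.org")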
-thomas
Thomas Lumley
Professor of Biostatistics
University of Auckland
No, you can't (at the moment), though it shouldn't be too hard to extend.
I can't run your example, though. I get:
Error in eval(expr, envir, enclos) : object 'M' not found
-thomas
> $Px
> [1] 4 10 10 8
>
> $tailleP
> [1] 4
>
> $res
> [1] 4 0 0 0
>
> I don't have a problem in the "essai" function, but I can't correctly return the "Px"
> vector.
> I don't understand why I get only the first number (the 4 in my example).
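A minimal sketch (not from the thread; the function body is invented around the values shown above) of returning several objects from an R function as a named list, so that the whole Px vector comes back rather than a single number:

essai_sketch <- function() {
  Px      <- c(4, 10, 10, 8)
  tailleP <- 4
  res     <- c(4, 0, 0, 0)
  list(Px = Px, tailleP = tailleP, res = res)  # return everything in one named list
}
out <- essai_sketch()
out$Px   # [1] 4 10 10 8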
nder complex
sampling. Biometrika, 100, 831-842.
-thomas
.m12, pop.m13, pop.m14, pop.m15,
>> pop.m16))
>>
>>
>> -Original Message-
>> From: Michael Willmorth
>> Sent: Saturday, June 07, 2014 9:23 AM
>> To: r-help@R-project.org
>> Subject: rake() error message
>>
>> I'm tea
If you supply population sizes, they are used
to compute probabilities, which are then used to compute weights.
The code works in terms of probabilities because that's fairly
standard in textbooks. It makes it easier for me to get the formulas
right.
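A small illustration (not from the original message; the numbers are invented) of the relationship described above:

n <- 50; N <- 1000        # hypothetical: n units sampled from a stratum of size N
prob   <- n / N           # inclusion probability
weight <- 1 / prob        # the corresponding sampling weight (here 20)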
-thomas
> ... on the database backend part of the process.
You might try MonetDB and its R interface -- it is fast for
aggregation operations, and either the current version or the upcoming
version has dplyr support.
-thomas
another
> stratum. See 3.2.1 in http://books.google.fr/books?id=L96ludyhFBsC
> (look for "single" in the whole book to find it).
>
>
Or set options(survey.lonely.psu) to one of the other values. But merging
strata is probably better.
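A short sketch (not from the original message) of setting that option; "adjust" is one of the values the survey package accepts:

## tell the survey package how to handle strata with a single PSU
options(survey.lonely.psu = "adjust")   # other values include "remove", "certainty", "average"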
-thomas
One way to load only the variables used in a given analysis, but not all
the variables in your dataset, is to use the database-backed designs and put
the data in something like SQLite.
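A sketch (not from the original message) of a database-backed design; the file, table, and variable names are hypothetical, and dbtype/dbname are the svydesign() arguments used for database-backed designs:

library(survey)
## only the variables named in a given analysis are read from the SQLite table
dbdes <- svydesign(id = ~psu, strata = ~stratum, weights = ~pw,
                   data = "mytable", dbtype = "SQLite", dbname = "mydata.db")
svymean(~some_variable, dbdes)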
-thomas
pse=".")
>
>
Backreferences:

cat(
  gsub("(([[:alnum:]]+\\.){3})([[:alnum:]]+)\\.",
       "\\1\\3\n",
       fake)
)
That is, match three word/period sequences, match a word, match a period,
and output the first two things.
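A quick check (not from the original message; the value of fake is invented here) of what the substitution does, breaking a dot-separated string into groups of four:

fake <- paste(letters[1:8], collapse = ".")   # "a.b.c.d.e.f.g.h"
cat(gsub("(([[:alnum:]]+\\.){3})([[:alnum:]]+)\\.",
         "\\1\\3\n", fake))
## a.b.c.d
## e.f.g.h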
-thomas
the same name, and too many people would have to understand how they
are scoped.
-thomas
test.
>
>
This was a recent (well, 2007) change in behaviour. Previously the function
did some tricks to make either approach work, which could be described as
'clever' or 'too clever by half'.
-thomas
-0.5810 0.1741 -3.34 0.0008
> stage=ib -0.4394 0.1899 -2.31 0.0207
> stage=iia 1.6565 0.2097 7.90 <0.0001
> stage=iib 1.6928 0.1979 8.55 <0.0001
> stage=iii 1.8211 0.2411 7.55 <0.0001
> sta
"The standard errors agree closely with survfit.coxph for independent
sampling when the model fits well, but are larger when the model fits
poorly. "
That is, the note is for the survival curve rather than the coefficients.
It's still surprising that there's a big difference,
something? Here is what I'm doing:
>>
>
> Survreg treats weights as case weights, and lm treats them as sampling
> weights.
>
Actually, lm() treats them as precision weights, not sampling weights, but
that's still the explanation.
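A short sketch (not from the original message; data and variable names are hypothetical) of the distinction, with sampling weights going into a survey design rather than into lm():

## precision weights: lm() assumes Var(y_i) = sigma^2 / w_i
fit_lm <- lm(y ~ x, data = d, weights = w)
## sampling weights: declare them in a design object and fit with svyglm()
library(survey)
des <- svydesign(ids = ~1, weights = ~w, data = d)
fit_svy <- svyglm(y ~ x, design = des)

The two fits give the same coefficients but different standard errors, which is where the treatment of the weights matters.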
-thomas
LSE
> needed
>
> Thanks for reading my post, and thanks in advance for any help!
> Sincerely,
> Claire
>
e)-sigma)))
>
You can fit this accelerated-failure parametrisation of the Weibull with
survreg() in the survival package.
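A minimal sketch (not from the original message; the data frame and variables are hypothetical) of fitting a Weibull accelerated-failure model with survreg():

library(survival)
fit <- survreg(Surv(time, status) ~ x, data = d, dist = "weibull")
summary(fit)   # coefficients are on the log(time), i.e. accelerated-failure, scale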
-thomas
iles) for
unequal-probability samples.
--
Thomas Lumley
og(longest chain) and copying can be reduced by using an index
> i and subsetting the original vector on each iteration. I think you could
> test for circularity by checking that the updated x are not a permutation
> of the kept x, all(x[y_idx[keep]] %in% x[keep]))
>
> Martin
>
If the number of dimensions is not small, I don't think there are any
algorithms taking less than n^2 time even on average.
In applications where I have seen large-n clustering it has mostly been
variants of k-means, which take kn time and space, not n^2.
Look at the Bioconductor flow-cytometry package
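A minimal sketch (not from the original message; the data are simulated) of the O(kn) style of clustering being described, using base R's kmeans():

set.seed(1)
x  <- matrix(rnorm(1e5 * 5), ncol = 5)        # 100,000 points in 5 dimensions
cl <- kmeans(x, centers = 10, iter.max = 50)  # roughly k*n work per iteration
table(cl$cluster)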
--
Thomas Lumley
a spline)
I'm not generally a fan of global goodness-of-fit tests, but this is
straightforward enough that I might add it to the survey package (though
that's not going to happen for a month or so).
-thomas
urvey package to use sampling weights in the past,
> but according to post I found online from Thomas Lumley in mid-2012, R is
> currently not equipped to be able to do this.
>
> His post is here:
>
> http://r.789695.n4.nabble.com/sampling-weights-for-multilevel-models-tp4632947p4632
gaussian(link = "identity"))
>
> Thanks again,
> Sebastian
>
000 AIC: 2599
So, perhaps you could show us what you actually did, and what actually
happened, as the posting guidelines request.
-thomas
information at http://www.biostat.washington.edu/suminst/sisg/schedule
-thomas
I will just point
out that this 'simple model' is not lognormal. It is a model with normal
errors and log link, i.e.
y ~ N(mu, sigma^2)
log(mu) = x \beta
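A minimal sketch (not from the original message; data names are hypothetical) of fitting that model, contrasted with a lognormal-type model:

## normal errors, log link: E[y] = exp(x*beta); assumes y > 0 so the default
## starting values work
fit_loglink <- glm(y ~ x, data = d, family = gaussian(link = "log"))
## a lognormal-type model would instead put the normal errors on log(y):
fit_lognorm <- lm(log(y) ~ x, data = d)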
-thomas
uld reproduce the
reported error.
-thomas
> -pd
>
> On Apr 12, 2013, at 04:51 , Thomas Lumley wrote:
>
> > I don't get an error message (after I correct the missing line break
> after
> > the comment
> >
> >> b<- sapply(a, Cfun, upper=1)
>
ling -- to save memory it computes the standard errors
only at event times. It shouldn't be too hard to get it to extend that to
the last censoring time, but the reason it isn't too hard is that the curve
and standard error estimates don't change after the last failure tim
On the other hand, if it is very large, you can thin it out to a uniform
sample by sampling from it with probability inversely proportional to the
original sampling probability.
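A rough sketch (not from the original message; the data frame and the incl_prob column are hypothetical) of that thinning step:

## keep each row with probability proportional to 1/its original inclusion
## probability, so the kept rows are roughly a uniform subsample
p_keep  <- (1 / d$incl_prob) / max(1 / d$incl_prob)
thinned <- d[runif(nrow(d)) < p_keep, ]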
- thomas
> ...", srt=90)
>
> I would like the text to read:
>
> capacity 10^3 m^3
>
> (with ^ denoting superscript (i.e. each '3' as superscript).
>
What did you try?
Anyhow, this works
text(1,1,expression("capacity"~10^3~m^3))
-thomas
The classical exact distribution for the Wilcoxon signed-rank test is
derived by assuming the data are from a continuous distribution, which
implies that there cannot be any ties in the differences. If there are
ties, the function uses a Normal approximation.
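A small illustration (not from the original message; the differences are invented) of the behaviour being described:

x <- c(1.2, 0.8, 0.8, -0.5, 2.0)   # hypothetical paired differences; note the tied 0.8s
wilcox.test(x)   # warns that an exact p-value cannot be computed with ties
                 # and falls back to the normal approximation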
-thomas
standard errors for a linear model whose coefficients have the
same interpretation as those from lmer.
If you need to estimate variance components, you currently need to use some
other software. Mixed-model estimation based on composite likelihood is on
my list of things to do, but not terribly high up.
--
Thomas Lumley
AUC
were identical in all three cases.
-thomas
If you really want differences in medians, look at differences in
medians. A permutation test or a bootstrap confidence interval is probably
the best way to do this.
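A minimal sketch (not from the original message; the samples are simulated) of a percentile bootstrap interval for a difference in medians:

set.seed(1)
x <- rexp(40); y <- rexp(50, rate = 0.8)      # hypothetical two samples
boot_diff <- replicate(2000,
  median(sample(x, replace = TRUE)) - median(sample(y, replace = TRUE)))
quantile(boot_diff, c(0.025, 0.975))          # simple percentile interval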
-thomas
> >>
> > > >> (Dispersion parameter for quasibinomial family taken to be 23.14436)
> > > >>
> > > >> Null deviance: 7318.5 on 246 degrees of freedom
> > > >> Residual deviance: 5692.8 on 235 degrees of freedom
> > > >
e?
That should tell you whether there's some strange non-convexity going on or
whether the variable is just being put into the calculations backwards.
-thomas
th 1.
The fact that the variable is defined by the wrong name if yyrandom[1] is more of a problem.
-thomas
hts*(ReZ)^2) # SSR
> Rsq <- SSR/SST
>
> I don't understand what is wrong with the code. The sum square regression
> plus the sum square error do not add up to the sum square total in both the
> Y scale and Z scale. Y is a normal distribution and Z is log normal
With this density you can write down the density of the median or other
order statistic and thus write down an integral that gives the exact
variance. Better still, it's a polynomial, so you could evaluate the
integral exactly.
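A sketch (not from the original message; the polynomial density here is invented for illustration) of writing down the order-statistic density of the median and integrating it:

n <- 11; k <- (n + 1) / 2                      # the median of an odd-sized sample
dens <- function(x) 6 * x * (1 - x)            # hypothetical polynomial density on (0, 1)
cdf  <- function(x) 3 * x^2 - 2 * x^3          # its CDF
dmed <- function(x)
  factorial(n) / (factorial(k - 1) * factorial(n - k)) *
  cdf(x)^(k - 1) * (1 - cdf(x))^(n - k) * dens(x)
m1 <- integrate(function(x) x   * dmed(x), 0, 1)$value
m2 <- integrate(function(x) x^2 * dmed(x), 0, 1)$value
m2 - m1^2                                      # variance of the sample median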
-thomas
and remember where the branch cut goes on
the phase coordinate.
-thomas
When I add random noise in the
fourth decimal place, the matrix stops being singular.
-thomas
ur predictor variable.
Using svsv[i] rather than names[i] should work. Or you can insert the
value of names[i] into the formula with
survdiff(eval(bquote(Surv(survival.m, survival) ~ .(names[i]))), data=svsv)
Even after you fix that, there's another problem, which is that your
code do
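An alternative sketch (not from the original message; it assumes names[i] is a character string naming a column of svsv) that builds the formula as text:

library(survival)
f <- as.formula(paste("Surv(survival.m, survival) ~", names[i]))
survdiff(f, data = svsv)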
> sqrt(diag(coef(vv)))
   api00    api99
105.7489 112.8504
## delta method for standard error of square root of variance
> sqrt(vcov(vv)["api00","api00"]/(4*coef(vv)["api00","api00"]))
[1] 6.555219
-thomas
> data=sample, pps="brewer")
>
> svyciprop(~I(candidate1/totalVotes), design)
>
> ... I am assuming that the unit of analysis is the voting unit, right?
> and I am estimating an average among voting units?
>
You want a ratio estimator:
svyratio(~candidate1, ~totalVotes, design)
yped at the global command prompt can return
anything other than 1, but maybe something is getting in between the
console and the evaluator. For example:
> testfun()
[1] 1
> print(testfun())
[1] 2
> capture.output(testfun())
[1] "[1] 6"
I don't see why a pure console prog
ent.frame(x1)))
}
testfun2(x1=1);
testfun1() never finds a1==1, but testfun2(3) does.
Remember, actual arguments to sapply() will be evaluated in the frame
sapply() is called from. It's only default arguments that are
evaluated inside the function.
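A self-contained illustration (not from the original message; the function name is invented) of that rule:

g <- function(x = y) {   # default: 'y' is looked up inside g() when x is forced
  y <- "inside g"
  x
}
y <- "in caller"
g()    # "inside g"  -- the default argument is evaluated in g()'s own frame
g(y)   # "in caller" -- a supplied argument is evaluated in the caller's frame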
--
Thomas Lumley
Why have you asked this question three times?
--
Thomas Lumley
a. Total sample size 7800
> Household is the BSU and where we need to calculate information on the
> individual level we are confident to be able to correct the sample weights
> for that.
That sounds plausible.
-thomas
ree.
>
> Also I do not understand, which models are shown there, e.g. the simple
> model just with an intercept and the variable GNI is not shown in the plot,
> why?
You asked for the two best models of each size, so you get the two
best models of each size.
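A minimal sketch (not from the original message; the formula and data are hypothetical) of requesting and viewing the two best models of each size with the leaps package:

library(leaps)
rs <- regsubsets(y ~ ., data = d, nbest = 2)   # the two best subsets of each size
plot(rs)             # one row per retained model, shaded by which terms it includes
summary(rs)$which    # logical matrix of the terms in each retained model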
male 0.5089488 0.5318942
>
> **
>
> use http://www.stata-press.com/data/r11/nhanes2f, clear
> svyset [pweight=finalwgt]
> svy: tabulate sex, percent se ci
> (running tabulate on estimation sample)
>
> Number of strata = 1
[I thought the Stata intervals were
asymmetric, but in fact they aren't]
-thomas
Statisticians who like the Wilcoxon test
(Frank Harrell comes to mind) like it because they believe stochastic
ordering is a reasonable assumption in the problems they work in, not
because they think you can do non-parametric testing in its absence.
-thomas
' is higher.
The Wilcoxon test probably isn't very useful in a setting like this,
since its results really make sense only under 'stochastic ordering',
where the shift is in the same direction across the whole
distribution.
-thomas
o sophisticated things with it, but actually just
returns NA for all errors.
tryCatch() is also quieter.
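A minimal sketch (not from the original message) of the quieter tryCatch() pattern, returning NA for any error:

safe_log <- function(x) tryCatch(log(x), error = function(e) NA)
safe_log(10)      # 2.302585
safe_log("oops")  # NA, with no error message printed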
-thomas
> e <- parse(text = "df$str==12")[[1]]
> e
df$str == 12
> bquote(function(df) b<-.(e))
function(df) b <- df$str == 12
> eval(bquote(function(df) b<-.(e)))
function (df)
b <- df$str == 12
This saves more time than I expected, about 100ms per evaluation on my
computer.
-thomas
parametrisation. In a model with only an intercept, that would
be exp(intercept).
-thomas
m numbers from such a distribution?
Not directly, as far as I know, but you can easily simulate X|X>c by
transforming uniform random numbers using the inverse CDF, and Y|X=x
is univariate Normal with mean linear in x and variance independent of
x.
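A sketch (not from the original message; all parameter values are invented) of that two-step simulation for a bivariate normal (X, Y) conditional on X > c:

n <- 1e4
mux <- 0; sx <- 1; muy <- 0; sy <- 1; rho <- 0.6; cc <- 1.5   # hypothetical parameters
u <- runif(n, pnorm(cc, mux, sx), 1)            # uniform over the upper tail of X's CDF
x <- qnorm(u, mux, sx)                          # X | X > c via the inverse CDF
y <- rnorm(n, muy + rho * sy / sx * (x - mux),  # conditional mean, linear in x
           sy * sqrt(1 - rho^2))                # conditional sd, free of x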
-thomas
m the_appropriate_table
where firm=", firms," and date>=", begindts, " and date <=", enddts)
lapply(queries, sqlQuery, channel=my.database.connection)
will return a list of data frames, one for each set of values.
-thomas
18.82551 5 0.00207139
>
> Using svydesign and svytable I _think_ this is how one would go about
> constructing a 2 x 2 table:
>
> tbl2<-svydesign( ~ Gender + Admit+Dept, weights=~Freq, data=DF)
> summary(dclus1)
> (tbl2by2 <- svytable(~ Gender + Admit+Dept,
n't be appropriate for
the survey design.
-thomas
svyby(~HI_CHOL, ~race + RIAGENDR,
      design = subset(postStratify(design1, ~race + RIAGENDR + agecat, racegenderage),
                      RIDAGEYR >= 20),
      svymean, na.rm = TRUE)
-thomas
advance for all replies!
>
> Peter
>
by hand.
The function step() uses AIC. As far as I know, no-one has yet
constructed valid analogues of AIC, BIC, CIC, ... under complex sampling
(Alastair Scott and I are looking into it), so p-values are the only
option, making the process even less useful.
- thomas
lled "rqbr", which looks like it's part of rq(), called from
rq.fit.br()
Since the problem seems to be data-dependent, and happens with fairly
high frequency, you might want to use trace() to stick some sort of
data summary in before the call to rqbr, to see if anything obvious is
wrong w
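A rough sketch (not from the original message) of the kind of trace() call being suggested; the argument names x and y are assumptions about rq.fit.br()'s internals, and where= points the trace at quantreg's namespace so calls made from rq() are caught:

library(quantreg)
trace("rq.fit.br", where = asNamespace("quantreg"),
      tracer = quote({ cat("dim(x):", dim(x), "\n"); print(summary(y)) }))
## ... run the model fit that triggers the error ...
untrace("rq.fit.br", where = asNamespace("quantreg"))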
/* includes and function header reconstructed; the original snippet began mid-function, so the function name here is a placeholder */
#include <R.h>
#include <Rinternals.h>
#include <Rmath.h>
SEXP draw_one_gamma(void)
{
    SEXP a;
    PROTECT(a = allocVector(REALSXP, 1));
    GetRNGstate();
    REAL(a)[0] = rgamma(5000, 1);  /* one draw from Gamma(shape = 5000, scale = 1) */
    PutRNGstate();
    UNPROTECT(1);
    return a;
}
- thomas
error: incompatible types in return
I thought the ANSI standard actually *required* a diagnostic for the
incompatible return types.
-thomas
'd be prepared to
implement it anyway.
-thomas
t package. The example on the
withReplicates() help page shows how to do this for quantile
regression, and it should be similar.
-thomas
They
influence the results more than observations with low weight. Your
code does the opposite.
-thomas
.R")
> function() { for (funcit.i in 1:k) { expr } }
>
> function() { for (funcit.i in 1:k) { expr } }
>
>
> This is on the same machine using (as far as I can tell) the
> same R engine. So why is the output different?
The "ugl
mydesign <- svydesign(id = ~sib.pair.id, weights = ~sampling.weights,
                      data = mydataset)
svyglm(response ~ predictor, family = quasibinomial(), design = mydesign)
-thomas
with the package vcd assocstats but
> without considering the survey package.
You can use svytable() to generate an estimated population table and
then feed that to assocstats().
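A minimal sketch (not from the original message; the variable and design names are hypothetical) of that route:

library(survey)
library(vcd)
tab <- svytable(~var1 + var2, design = mydesign)  # estimated population table
assocstats(tab)                                   # association statistics on that table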
-thomas
If you want the variance components for their own sake,
you need some other software.
I do have longer-term plans to add multilevel modelling capabilities
to the survey package, but it's harder than it may appear.
-thomas
It can be used as a way of viewing a large collection of best models, as in
the example for the plot() method, by setting nbest fairly large.
-thomas
11_AUTOPERCEPCIONSALUDGENERAL,Muestra.comp)
or if you want shorter names, create renamed variables in the design object:
Muestra.comp <- update(Muestra.comp,
                       ocupacion = M1_19_OCUPACIONPRINCIPALACTUAL,
                       APES = M3_11_AUTOPERCEPCIONSALUDGENERAL)
-thomas
; - attr(*, "dimnames")=List of 2
> ..$ : chr [1:260617] "1" "2" "3" "4" ...
> ..$ : NULL
>
As the help says, the default is predictions of the linear predictor.
To get predictions of the probability, use type="response".
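A minimal sketch (not from the original message; the model and data names are hypothetical) of the two prediction types:

eta <- predict(fit, newdata = newdat)                     # linear predictor (the default)
p   <- predict(fit, newdata = newdat, type = "response")  # predicted probabilities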
Chisq", na.rm = TRUE)
>
>
> (I feel like I may be overthinking this and the answer is much simpler)
>
>
You don't need to create two new variables; you just need a year variable:
svychisq(~MyVar + Year, BothYears, statistic = "Chisq", na.rm = TRUE)
tests whether MyVar is independent of Year.
been
> assigned separately for each of the variable. My question is: Is it
> possible to get 1 weight for each subject instead of 3 weights as shown in
> the package?
There *is* only one weight for each subject.
You are misinterpreting the internal structures of the
--
Thomas Lumley
You're not fitting the same model. Like SAS, Stata by default assumes
that random effects are independent of each other, so your Stata model
has correlation between the random effects forced to zero. The R
model estimates the correlation, and finds it to be far from zero
(-0.69).
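A sketch (not from the original message; the formula pieces are generic) of the two specifications in lme4 syntax:

library(lme4)
## correlated random intercept and slope (what the R fit estimated)
m1 <- lmer(y ~ x + (1 + x | group), data = d)
## independent random intercept and slope (the Stata/SAS default described above)
m2 <- lmer(y ~ x + (1 | group) + (0 + x | group), data = d)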
rgument has been evaluated."
-thomas