Generally you should do the power analysis before collecting any data.
Since you have results, it looks like you already have the data
collected.
But if you want to compute the power for a future study, one option is
to use simulation.
1. decide what the data will look like
2. decide how you will
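A minimal sketch of those simulation steps, assuming a two-sample t-test
design and an effect of 0.5 SD (both assumptions are illustrative, not from
the original question):

```r
# Simulation-based power: generate data under the assumed effect, run the
# planned test, and count how often it reaches significance.
set.seed(42)
sim_power <- function(n_per_group, effect, nsim = 2000) {
  pvals <- replicate(nsim, {
    x <- rnorm(n_per_group, mean = 0)
    y <- rnorm(n_per_group, mean = effect)
    t.test(x, y)$p.value
  })
  mean(pvals < 0.05)  # proportion significant = estimated power
}
sim_power(n_per_group = 64, effect = 0.5)  # roughly 0.80 for this design
```

Any part of the planned design (unequal groups, a different test, missing
data) can be folded into the data-generation step without changing the logic.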
(Some statisticians of excellent repute would say you never have justification
to do so, regardless of any testing.)
--
David.
Please share your thoughts...
Thanks
John
From: Christopher W. Ryan
Sent: Tuesday, November 12, 2013 6:53 PM
Subject: Re: [R] power analysis is applicable or not
John--
Well, my simple-minded way of thinking about these issues goes something
like this:
You want to know if
On Nov 12, 2013, at 6:10 PM, array chip wrote:
> Hi, this is a statistical question rather than a pure R question. I have gotten
> much help from the R mailing list in the past, so would like to try here and
> would appreciate any input:
>
> I conducted Mantel-Haenszel test to show that the performance of
Hi Terry, Greg, and Marc,
Thanks for your advice about this. I think I have a pretty good starting point
now for the analysis.
Appreciate your help.
Paul
--- On Wed, 7/18/12, Terry Therneau wrote:
From: Terry Therneau
Subject: Re: [R] Power analysis for Cox regression with a time
Marc gave the reference for Schoenfeld's article. It's actually quite
simple.
Sample size for a Cox model has two parts:
1. Easy part: how many deaths do I need?
d = (za + zb)^2 / [var(x) * coef^2]
za = cutoff for your alpha, usually 1.96 (.05 two-sided)
zb = cutoff for power
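In R, with qnorm() supplying the two cutoffs (the hazard ratio and the
covariate variance below are illustrative assumptions, not from the thread):

```r
# Schoenfeld's required number of deaths, exactly as in the formula above.
schoenfeld_d <- function(alpha = 0.05, power = 0.80, var_x, coef) {
  za <- qnorm(1 - alpha / 2)  # 1.96 for two-sided alpha = .05
  zb <- qnorm(power)          # 0.84 for 80% power
  (za + zb)^2 / (var_x * coef^2)
}
# Balanced two-arm trial (x is 0/1, so var(x) = 0.25), hazard ratio of 2:
ceiling(schoenfeld_d(var_x = 0.25, coef = log(2)))  # 66 deaths
```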
the actual analysis itself, I'll start out using the steps
you've listed and see where that takes me.
Paul
--- On Fri, 7/13/12, Greg Snow <538...@gmail.com> wrote:
From: Greg Snow <538...@gmail.com>
Subject: Re: [R] Power analysis for Cox regression with a time-varying covariate
For something like this the best (and possibly only reasonable) option
is to use simulation. I have posted on the general steps for using
simulation for power studies in this list and elsewhere before, but
probably never with coxph.
The general steps still hold, but the complicated part here will
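For the simple (non-time-varying) case, the skeleton might look like the
following; the baseline hazard, censoring rate, and sample size are all
assumptions chosen for illustration:

```r
# Simulate survival data under an assumed hazard ratio, fit coxph(), and
# count how often the treatment coefficient reaches significance.
library(survival)
set.seed(1)
sim_cox_power <- function(n, hr, nsim = 500) {
  pvals <- replicate(nsim, {
    x    <- rbinom(n, 1, 0.5)            # 1:1 treatment assignment
    time <- rexp(n, rate = 0.10 * hr^x)  # event times, HR = hr when x = 1
    cens <- rexp(n, rate = 0.05)         # independent censoring
    fit  <- coxph(Surv(pmin(time, cens), time <= cens) ~ x)
    summary(fit)$coefficients["x", "Pr(>|z|)"]
  })
  mean(pvals < 0.05)
}
sim_cox_power(n = 100, hr = 2)
```

The time-varying covariate would replace the data-generation lines with a
(start, stop] counting-process layout, which is where the work is.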
May I suggest you consult your local statistician. For reasons that (s)he
can answer, your request makes little sense.
Hint: Nonlinear regression is much different than linear regression: The
design matrix -- and hence the variance of estimators -- is a function of
the parameters being estimated.
Dear Tom,
I think you failed to generate simulated outcome from the correct model. Hence
the zero variance of your random effects. Here is a better working example.
library(lme4)
fake2 <- expand.grid(Bleach = c("Control","Med","High"), Temp =
c("Cold","Hot"), Rep = factor(seq_len(3)), ID = seq
On Apr 19, 2011, at 8:43 AM, Schatzi wrote:
> "Inter ocular data"
> Quite amusing :)
> Thank you for the help. For some reason I was thinking that I could get the
> n values for the combined test, but that doesn't make sense as there could
> be an infinite number of combinations of n values.
> Tha
"Inter ocular data"
Quite amusing :)
Thank you for the help. For some reason I was thinking that I could get the
n values for the combined test, but that doesn't make sense as there could
be an infinite number of combinations of n values.
Thanks again for the replies.
Yes, Richard Savage used to call this "inter ocular data";
the answer should leap up and strike you right between the eyes...
albyn
On Mon, Apr 18, 2011 at 05:23:05PM -0500, David Cross wrote:
It seems to me, with deltas this large (relative to the SD), that a
significance test is a moot point!
David Cross
d.cr...@tcu.edu
www.davidcross.us
On Apr 18, 2011, at 5:14 PM, Albyn Jones wrote:
First, note that you are doing two separate power calculations,
one with n=2 and sd = 1.19, the other with n=3 and sd = 4.35.
I will assume this was on purpose. Now...
> power.t.test(n = 2, delta = 13.5, sd = 1.19, sig.level = 0.05)
Two-sample t test power calculation
n = 2
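Albyn's two calculations can be reproduced side by side (the values are taken
from the thread):

```r
# The two separate power calculations, differing in both n and sd.
p1 <- power.t.test(n = 2, delta = 13.5, sd = 1.19, sig.level = 0.05)
p2 <- power.t.test(n = 3, delta = 13.5, sd = 4.35, sig.level = 0.05)
c(power_n2 = p1$power, power_n3 = p2$power)
```

Note that delta/sd (the standardized effect) differs by nearly a factor of
four between the two calls, which matters far more than the change in n.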
Lewis G. Dean wrote:
>
>> post-hoc power analysis on a Wilcoxon test.
>
There is a (somewhat dated) list of "why-not" papers in
http://www.childrens-mercy.org/stats/size/posthoc.asp
Dieter
--
View this message in context:
http://r.789695.n4.nabble.com/Power-analysis-tp2524729p2525333.h
Hi:
Just to add to the discussion, see the following article by Russell Lenth on
the subject:
http://www.stat.uiowa.edu/techrep/tr378.pdf
Dennis
On Thu, Sep 2, 2010 at 3:59 PM, C Peng wrote:
Agree with Greg's point. In fact it does not make logical sense in many
cases. Similar to the use of the "statistically unreliable" reliability
measure Cronbach's alpha in some non-statistical fields.
Be happy, don't do post-hoc power analyses.
The standard "post-hoc power analysis" is actually counterproductive. It is
much better to just create confidence intervals. Or give a better
description/justification showing that your case is not the standard,
worse-than-useless version.
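The alternative Gregor recommends is cheap to produce; on made-up
illustrative data:

```r
# Report a confidence interval instead of post-hoc power (data made up).
set.seed(3)
x <- rnorm(20, mean = 0.2)  # assumed small true effect
y <- rnorm(20, mean = 0)
tt <- t.test(x, y)
tt$conf.int  # shows directly which effect sizes remain compatible with the data
```

An interval that covers zero but excludes large effects says everything a
"non-significant, low power" statement was trying to say, only more precisely.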
--
Gregor
Breslow & Day has a nice three page discussion in volume 2 of their
"Statistical Methods in Cancer Research". See pages 285-7. Most of the
gain in power comes from the decrease in degrees of freedom and only
if the trend is approximately linear. Alternatives that are quadratic
are not well
Hi Rick,
I understand the authors' point and also agree that post-hoc power
analysis is basically not telling me anything more than the p-value and
initial statistic for the test I am interested in computing power for.
Beta is a simple function of alpha, p, and the statistic.
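For a two-sided z-test this is easy to make concrete; the function below
illustrates Adam's point and is not code from the thread:

```r
# "Observed power" is a deterministic transform of the observed statistic
# and alpha -- it carries no information beyond the p-value itself.
observed_power <- function(z, alpha = 0.05) {
  zc <- qnorm(1 - alpha / 2)
  1 - pnorm(zc - abs(z)) + pnorm(-zc - abs(z))
}
observed_power(qnorm(0.975))  # a result exactly at p = .05 gives ~0.50
```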
On Wed, 2009-01-28 at 21:21 +0100, Stephan Kolassa wrote:
Thanks for the response, Stephan.
Really, I am trying to say, "My result is insignificant, my effect sizes are
tiny, you may want to consider the possibility that there really are no
meaningful differences." Computing post-hoc power would make a somewhat
stronger claim in this setting.
My real goal i
Hi Adam,
first: I really don't know much about MANOVA, so I sadly can't help you
without learning about it and Pillai's V... which I would be glad to do,
but I really don't have the time right now. Sorry!
Second: you seem to be doing a kind of "post-hoc power analysis", "my
result isn't signi
On Mon, 26 Jan 2009, Adam D. I. Kramer wrote:
On Mon, 26 Jan 2009, Charles C. Berry wrote:
If you know what a 'general linear hypothesis test' is see
http://cran.r-project.org/src/contrib/Archive/hpower/hpower_0.1-0.tar.gz
I do, and am quite interested, however this package will not install on R
2.8.1: First, it said that ther
If you know what a 'general linear hypothesis test' is see
http://cran.r-project.org/src/contrib/Archive/hpower/hpower_0.1-0.tar.gz
HTH,
Chuck
On Mon, 26 Jan 2009, Adam D. I. Kramer wrote:
On Mon, 26 Jan 2009, Stephan Kolassa wrote:
Hi Adam,
My (and, judging from previous traffic on R-help about power analyses,
also some other people's) preferred approach is to simply simulate an
effect size you would like to detect a couple of thousand times, run
your proposed analysis and look how often you get significance. In your
si
http://www.amazon.com/Statistical-Power-Analysis-Behavioral-Sciences/dp/0805802835
Cohen's book was in fact the basis for the "pwr" package at CRAN.
And it does have a MANOVA power analysis, which was left out of the
"pwr" package.
On Mon, Jan 26, 2009 at 4:12 PM, Adam D. I. Kramer wrote:
> H