On 7 Apr 2000, dennis roberts wrote:

> i was not suggesting taking away from our arsenal of tricks ... but, since 
> i was one of those old guys too ... i am wondering if we were mostly lead 
> astray ...?
> 
> the more i work with statistical methods, the less i see any meaningful (at 
> the level of dominance that we see it) applications of hypothesis testing ...
> 
> here is a typical problem ... and we teach students this!
> 
> 1. we design a new treatment
> 2. we do an experiment
> 3. our null hypothesis is that both 'methods', new and old, produce the 
> same results
> 4. we WANT to reject the null (especially if OUR method is better!)
> 5. we DO a two sample t test (our t was 2.98 with 60 df)  and reject the 
> null ... and in our favor!
> 6. what has this told us?
> 
> if this is ALL you do ... what it has told you AT BEST is that ... the 
> methods probably are not the same ... but, is that the question of interest 
> to us?
> 
> no ... the real question is: how much difference is there in the two methods?
---------------------- >8 -----------------------
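As an aside, the numbers in the quoted example are enough to answer the "how much" question approximately. Here is a minimal sketch, assuming equal group sizes (so n = 31 per group for 60 df) and using the usual large-sample approximation for the standard error of Cohen's d; these assumptions are mine, not part of the original example:

```python
import math

t, df = 2.98, 60          # from the quoted example
n1 = n2 = df // 2 + 1     # 31 per group, assuming equal n

# Cohen's d for a two-sample t test with equal n: d = t * sqrt(1/n1 + 1/n2)
d = t * math.sqrt(1 / n1 + 1 / n2)

# Approximate 95% CI for d (common large-sample SE formula)
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
lo, hi = d - 1.96 * se_d, d + 1.96 * se_d
print(f"d = {d:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

So a significant t of 2.98 corresponds to a roughly "medium-to-large" standardized difference, but with a wide interval; which is more or less the point of the quoted complaint.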

In one of his papers, Bob Frick argues very persuasively that very
often (in experimental psychology, at least), this is NOT the real
question at all.  I think that is especially the case when you are testing
theories.  Suppose, for example, that my theory of selective attention
posits that inhibition of the internal representations of distracting
items is an important mechanism of selection.  This idea has been tested
in so-called "negative priming" experiments.  (Negative priming refers to
the fact that subjects respond more slowly to an item that was previously
ignored, or is semantically related to a previously ignored item, than
they do to a novel item.) Negative priming is measured as a response time
difference between 2 conditions in an experiment.  The difference is
typically between about 20 and 40 milliseconds.  I think the important
thing to remember about this is that the researcher is not trying to
account for variability in response time per se, even though response time
is the dependent variable:  He or she is just using response time to
indirectly measure the object of real interest.  If one were trying to
account for overall variability in response time, the conditions of this
experiment would almost certainly not make the list of important
variables.  The researcher KNOWS that a lot of other things affect
response time, and some of them a LOT more than his experimental
conditions do.  However, because one is interested in testing a theory of
selective attention, this small difference between conditions is VERY
important, provided it is statistically significant (and there is
sufficient power); measures of effect size are not all that relevant here. 
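The power caveat in that last sentence can be made concrete with a small
simulation.  The numbers below are invented purely for illustration (a true
30 ms priming effect, a 60 ms SD of within-subject difference scores, and 30
subjects); the question is how often a paired t test would detect an effect
of that size:

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 30.0   # ms; hypothetical negative-priming effect
SD_DIFF = 60.0       # ms; assumed SD of within-subject difference scores
N_SUBJECTS = 30
T_CRIT = 2.045       # two-tailed .05 critical t for df = 29

def one_experiment() -> bool:
    """Simulate one experiment; return True if the paired t test is significant."""
    diffs = [random.gauss(TRUE_EFFECT, SD_DIFF) for _ in range(N_SUBJECTS)]
    m = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(N_SUBJECTS)
    return abs(m / se) > T_CRIT

power = sum(one_experiment() for _ in range(2000)) / 2000
print(f"estimated power: {power:.2f}")
```

With these made-up numbers the standardized effect is d = 0.5, and the
estimated power comes out around three-quarters: a 20-40 ms effect is
detectable, but not trivially so, which is why checking power matters before
treating a significant (or non-significant) result as evidence about the
theory.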

Just my 2 cents.
-- 
Bruce Weaver
[EMAIL PROTECTED]
http://www.angelfire.com/wv/bwhomedir/




===========================================================================
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===========================================================================
