[R] Bonferroni post hoc test in R for repeated measure ANOVA with mixed within and between subjects design

2015-07-06 Thread angelo.arc...@virgilio.it
Dear List Members, 



I need to perform a Bonferroni post hoc test in R on a table with
three within-subjects factors (Emotion, with 5 levels; Material,
with 4 levels; Shoes, with 2 levels) and one between-subjects factor
(Musician, with 2 levels).


I normally use the Tukey method with the following code:

require(nlme)
lme_H2H <- lme(H2H ~ Emotion*Material*Shoes*Musician, data = scrd,
               random = ~1 | Subject)
require(multcomp)
summary(glht(lme_H2H, linfct = mcp(Emotion = "Tukey")))



I am not able to find any reference that shows, with example R code,
how to perform a post hoc test with the Bonferroni procedure.
Can anyone provide an example that performs the same post hoc test as the
code above, but with Bonferroni instead of Tukey?


Thank you in advance



Angelo


__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Sphericity for a repeated measure ANOVA with within and between subjects design, using R

2015-07-06 Thread angelo.arc...@virgilio.it
 Dear List Members,



I need to perform a four-way ANOVA with repeated measures and to
test sphericity for a possible correction of the degrees of freedom of
the F ratio.
I have three within-subjects factors (Emotion, with 5 levels; Material,
with 4 levels; Shoes, with 2 levels) and a between-subjects factor
(Musician, with 2 levels). Without considering sphericity I use the
following code:

aov_SPL <- aov(SPL ~ Emotion*Material*Shoes*Musician +
               Error(Subject/(Emotion*Material*Shoes)), data = scrd)



Unfortunately, after having read a lot of material online, I have not
found a solution for how to calculate, in my case, the epsilon of
the Greenhouse-Geisser method for degrees-of-freedom adjustment.


I load my data into R with this command:

scrd <- read.csv(file='/path to data/data.csv', sep=',', header=TRUE)



and this is the structure of the imported table:

> head(scrd)
    Subject Material       Shoes    Emotion  H2H H2H_factor SPL_factor SPL_variation     SPL Musician Weight Height
1  Subject1   Gravel dress_shoes AGGRESSIVE  468      0.736     11.591        21.283  97.383       no     90    183
2  Subject1   Gravel dress_shoes      HAPPY  719      1.129      3.188        10.071  86.171       no     90    183
3  Subject1   Gravel dress_shoes     TENDER 1129      1.774      5.114        14.176  90.276       no     90    183
4  Subject1   Gravel dress_shoes        SAD 1010      1.587     13.102        22.347  98.447       no     90    183
5  Subject1   Gravel dress_shoes    NEUTRAL  736      1.156      3.161         9.995  86.095       no     90    183
6 Subject10   Gravel dress_shoes AGGRESSIVE  635      0.998     15.849        24.000 100.100      yes     70    178
>



I noticed that in the car package there is the Anova() function, which
comes with Mauchly's sphericity test, but it does not take as
input a model generated with aov(). I need to use lm() instead, but
since I am not really proficient in R I do not know how to use lm()
starting from the loaded data, nor how to use Anova(). In addition,
since this is a mixed design involving within- and between-subjects
factors, I wonder whether Anova() is the most appropriate function
for my case.


Is there anyone who could provide me with the R code to calculate
Mauchly's test and the Greenhouse-Geisser epsilon for my case?


Thank you in advance
Best

Angelo





[R] R: RE: Bonferroni post hoc test in R for repeated measure ANOVA with mixed within and between subjects design

2015-07-06 Thread angelo.arc...@virgilio.it
Dear Michael,
thank you for your answer; however, I am not asking for the Tukey
contrasts with the Bonferroni adjustment, but for the post hoc test with
the Bonferroni method.
Apparently this is done easily in SPSS; I am wondering whether it is
possible in R.

Can anyone help me?

Thanks in advance


Angelo



----Original Message----
From: meyner...@pg.com
Date: 6 Jul 2015 17:52
To: "angelo.arc...@virgilio.it",
"r-help@r-project.org"
Subject: RE: [R] Bonferroni post hoc test in R for repeated measure ANOVA with
mixed within and between subjects design

Untested, but if anything, your best bet is likely something like

summary(glht(lme_H2H, linfct = mcp(Emotion = "Tukey")),
        test = adjusted("bonferroni"))

(leaving aside the question of why you would want to use Bonferroni rather
than Tukey). For a reference, see the book on the topic by the package
authors. It might be in the paper, too, which is given by

citation("multcomp")

HTH, Michael


> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of
> angelo.arc...@virgilio.it
> Sent: Montag, 6. Juli 2015 16:01
> To: r-help@r-project.org
> Subject: [R] Bonferroni post hoc test in R for repeated measure ANOVA with
> mixed within and between subjects design
> 
> Dear List Members,
> 
> 
> 
> I need to perform a Bonferroni post hoc test in R on a table with three within
> subjects factors (Emotion, having 5 levels, Material, having 4 levels, Shoes,
> having 2 levels) and one between subject factor (Musician, having 2 levels).
> 
> 
> I normally use the Tukey method with the following formula
> 
> require(nlme)
> lme_H2H = lme(H2H ~ Emotion*Material*Shoes*Musician, data=scrd,
> random = ~1|Subject)
> require(multcomp)
> summary(glht(lme_H2H, linfct=mcp(Emotion = "Tukey")))
> 
> 
> 
> I am not able to find any reference that explains with an example of R  code
> how to perform a post hoc test with the Bonferroni procedure.
> Can anyone provide an example to perform the same post hoc test in the
> code above but with Bonferroni instead of Tukey?
> 
> 
> Thank you in advance
> 
> 
> 
> Angelo
> 
> 






[R] R: Re: Sphericity for a repeated measure ANOVA with within and between subjects design, using R

2015-07-07 Thread angelo.arc...@virgilio.it
Thank you John; so far the easiest way is to use ezANOVA(), since I do not
have to reorder the data, create the idata frame, etc.


Best





----Original Message----
From: pda...@gmail.com
Date: 7 Jul 2015 10:12
To: "John Fox"
Cc: ,
Subject: Re: [R] Sphericity for a repeated measure ANOVA with within and between
subjects design, using R

Another method goes via anova.mlm(), which is pretty straightforward once you
figure out how to set up the necessary within-subject projections (contrasts). 
See

@Article{Rnews:Dalgaard:2007,
 author   = {Peter Dalgaard},
 title= {New Functions for Multivariate Analysis},
 journal  = {R News},
 year = 2007,
 volume   = 7,
 number   = 2,
 pages= {2--7},
}

and ?anova.mlm.

-pd
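Peter's anova.mlm() route can be sketched on simulated data; the layout below (one within-subject factor with 3 levels, a between-subjects group, and all variable names) is illustrative, not Angelo's dataset:

```r
# Simulated wide-format data: one row per subject, one column per
# within-subject condition, plus a between-subjects grouping factor.
set.seed(1)
n     <- 10                                   # subjects per group
Y     <- matrix(rnorm(2 * n * 3), ncol = 3)   # response, wide format
grp   <- factor(rep(c("musician", "non_musician"), each = n))
idata <- data.frame(cond = gl(3, 1))          # within-subject layout

fit  <- lm(Y ~ grp)                           # multivariate linear model
fit0 <- lm(Y ~ 1)                             # same model without the group

# Mauchly's sphericity test on the within-subject contrasts
mt <- mauchly.test(fit, X = ~1, idata = idata)
print(mt)

# F test for the group-by-condition effect, with Greenhouse-Geisser and
# Huynh-Feldt epsilons reported by test = "Spherical"
av <- anova(fit, fit0, X = ~1, idata = idata, test = "Spherical")
print(av)
```

With more within-subject factors, the M and X formulas select which contrasts are tested, as described in ?anova.mlm.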

> On 06 Jul 2015, at 18:17 , John Fox  wrote:
> 
> Dear Angelo,
> 
> One way to do this is with the Anova() function in the car package; see the
> following article in the R Journal:
> 
> @Article{RJournal:2013-1:fox-friendly-weisberg,
>   author  = {John Fox and Michael Friendly and Sanford Weisberg},
>   title   = {Hypothesis Tests for Multivariate Linear Models Using the
>              {car} Package},
>   journal = {The R Journal},
>   year    = 2013,
>   volume  = 5,
>   number  = 1,
>   pages   = {39--53},
>   month   = jun,
>   url     = {http://journal.r-project.org/archive/2013-1/RJournal_2013-1_fox-friendly-weisberg.pdf}
> }
> 
> I hope this helps,
> John
> 
> 
> ---
> John Fox, Professor
> McMaster University
> Hamilton, Ontario, Canada
> http://socserv.socsci.mcmaster.ca/jfox/
> 
> 
> 
>> -Original Message-
>> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of
>> angelo.arc...@virgilio.it
>> Sent: July-06-15 10:05 AM
>> To: r-help@r-project.org
>> Subject: [R] Sphericity for a repeated measure ANOVA with within and
>> between subjects design, using R
>> 
>> Dear List Members,
>> 
>> 
>> 
>> I need to perform a four-way ANOVA with repeated measures and to
>> calculate the sphericity for eventual correction of the degree of
>> freedom of the F ratio.
>> I have three within subjects factors (Emotion, having 5 levels,
>> Material, having 4 levels, Shoes, having 2 levels) and a between subject
>> factor (Musician, having 2 levels). Without considering the sphericity
>> I
>> use the following formula
>> 
>> aov_SPL = aov(SPL ~ Emotion*Material*Shoes*Musician +
>> Error(Subject/(Emotion*Material*Shoes)), data=scrd)
>> 
>> 
>> 
>> Unfortunately after having read lot of material online I did not
>> arrive to a solution about how to calculate for my case the epsilon of
>> the Greenhouse-Geisser method for degree of freedom adjustment.
>> 
>> 
>> I load my data in R with this formula:
>> 
>> scrd<- read.csv(file='/path to data/data.csv',sep=',',header=T)
>> 
>> 
>> 
>> and this is the structure of the imported table:
>> 
>>> head(scrd)
>>     Subject Material       Shoes    Emotion  H2H H2H_factor SPL_factor SPL_variation     SPL Musician Weight Height
>> 1  Subject1   Gravel dress_shoes AGGRESSIVE  468      0.736     11.591        21.283  97.383       no     90    183
>> 2  Subject1   Gravel dress_shoes      HAPPY  719      1.129      3.188        10.071  86.171       no     90    183
>> 3  Subject1   Gravel dress_shoes     TENDER 1129      1.774      5.114        14.176  90.276       no     90    183
>> 4  Subject1   Gravel dress_shoes        SAD 1010      1.587     13.102        22.347  98.447       no     90    183
>> 5  Subject1   Gravel dress_shoes    NEUTRAL  736      1.156      3.161         9.995  86.095       no     90    183
>> 6 Subject10   Gravel dress_shoes AGGRESSIVE  635      0.998     15.849        24.000 100.100      yes     70    178
>>>
>> 
>> 
>> 
>> I noticed that in the car package there is the Anova() function that
>> comes with Mauchly's sphericity test, but it does not take as an
>> input a model generated with aov(). I need to use lm() instead, but
>> since
>> I am not really proficient in R I do not know how to use the lm()
>> starting from the loaded data and how to use Anova(). In addition, being
>> a mixed design involving within and between subjects factors I wonder
>> if Anova() is the most appropriate function to use for my case.
>> 
>> 
>> Is there anyone who could provide me with the R code to calculate the
>> Mauchly's test and the epsilon of Greenhouse-Geisser for my case?

[R] R: Re: R: RE: Bonferroni post hoc test in R for repeated measure ANOVA with mixed within and between subjects design

2015-07-07 Thread angelo.arc...@virgilio.it
Dear John,
thank you; I normally use Tukey, but I was told that Bonferroni is better, so I
wanted to try it.

By "number of tests", what do you mean exactly? How do I calculate it?

In the end, is it sufficient to add test=adjusted("bonferroni"), as previously
suggested?


Best

A.




----Original Message----
From: j...@mcmaster.ca
Date: 7 Jul 2015 03:23
To: "angelo.arc...@virgilio.it"
Cc: , "r-help@r-project.org"
Subject: Re: [R] R: RE: Bonferroni post hoc test in R for repeated measure ANOVA
with mixed within and between subjects design

Dear Angelo,

The Bonferroni p-value is just the ordinary p-value times the number of tests, 
so, since R supports multiplication, you can apply the Bonferroni adjustment in 
R. Because Bonferroni tests for multiple comparisons can be very conservative, 
asking why you want to use them is a fair question.
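John's multiply-the-p-values point can be sketched in base R; the data here are simulated and the factor name is purely illustrative:

```r
# Simulated response with a 3-level factor (not Angelo's scrd)
set.seed(42)
emotion <- gl(3, 20, labels = c("happy", "sad", "neutral"))
y       <- rnorm(60) + as.numeric(emotion)

# Raw p-values for all pairwise t-tests, with no adjustment
pmat  <- pairwise.t.test(y, emotion, p.adjust.method = "none")$p.value
p_raw <- pmat[!is.na(pmat)]

# Bonferroni "by multiplication": p times the number of tests, capped at 1
p_bonf <- pmin(p_raw * length(p_raw), 1)

# Identical to R's built-in adjustment
all.equal(p_bonf, p.adjust(p_raw, method = "bonferroni"))  # TRUE
```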

Best,
 John


John Fox, Professor
McMaster University
Hamilton, Ontario, Canada
http://socserv.mcmaster.ca/jfox/



On Tue, 7 Jul 2015 00:50:49 +0200 (CEST)
 "angelo.arc...@virgilio.it"  wrote:
> Dear Michael,
> thank you for your answer, however, I am not asking for the tukey
> with the bonferroni adjustment, but doing the post hoc with the bonferroni 
> method.
> Apparently this is done easily in SPSS, I am wondering whether it is possible 
> with R.
> 
> Can anyone help me?
> 
> Thanks in advance
> 
> 
> Angelo
> 
> 
> 
> ----Original Message----
> From: meyner...@pg.com
> Date: 6 Jul 2015 17:52
> To: "angelo.arc...@virgilio.it",
> "r-help@r-project.org"
> Subject: RE: [R] Bonferroni post hoc test in R for repeated measure ANOVA with
> mixed within and between subjects design
> 
> Untested, but if anything, your best bet is likely something like 
> 
> summary(glht(lme_H2H, linfct=mcp(Emotion = "Tukey")), 
> test=adjusted("bonferroni"))
> 
> should work (despite the question why you'd want to use Bonferroni rather 
> than Tukey
> For a reference, see the book on the topic by the package authors. Might be 
> in the paper, too, which is given by
> 
> citation("multcomp")
> 
> HTH, Michael
> 
> 
> > -Original Message-
> > From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of
> > angelo.arc...@virgilio.it
> > Sent: Montag, 6. Juli 2015 16:01
> > To: r-help@r-project.org
> > Subject: [R] Bonferroni post hoc test in R for repeated measure ANOVA with
> > mixed within and between subjects design
> > 
> > Dear List Members,
> > 
> > 
> > 
> > I need to perform a Bonferroni post hoc test in R on a table with three 
> > within
> > subjects factors (Emotion, having 5 levels, Material, having 4 levels, 
> > Shoes,
> > having 2 levels) and one between subject factor (Musician, having 2 levels).
> > 
> > 
> > I normally use the Tukey method with the following formula
> > 
> > require(nlme)
> > lme_H2H = lme(H2H ~ Emotion*Material*Shoes*Musician, data=scrd,
> > random = ~1|Subject)
> > require(multcomp)
> > summary(glht(lme_H2H, linfct=mcp(Emotion = "Tukey")))
> > 
> > 
> > 
> > I am not able to find any reference that explains with an example of R  code
> > how to perform a post hoc test with the Bonferroni procedure.
> > Can anyone provide an example to perform the same post hoc test in the
> > code above but with Bonferroni instead of Tukey?
> > 
> > 
> > Thank you in advance
> > 
> > 
> > 
> > Angelo
> > 
> > 






[R] R: RE: R: RE: Bonferroni post hoc test in R for repeated measure ANOVA with mixed within and between subjects design

2015-07-07 Thread angelo.arc...@virgilio.it
Dear Michael,
I am very grateful to you for the detailed explanation; now everything is
clear. In the end I will opt for the Tukey method then.

Thanks

Best

A.




----Original Message----
From: meyner...@pg.com
Date: 7 Jul 2015 08:58
To: "angelo.arc...@virgilio.it"
Cc: "r-help@r-project.org", "John Fox"
Subject: RE: [R] R: RE: Bonferroni post hoc test in R for repeated measure ANOVA
with mixed within and between subjects design

Angelo,

the conservatism of Bonferroni aside for a moment, I don't really see what you 
are after. 

mcp(Emotion = "Tukey")

just defines the contrasts to test; in this case, all pairwise comparisons. 

The test=adjusted("bonferroni") argument then says that these contrasts should 
not be corrected for according to Tukey's procedure, but using Bonferroni 
adjustment. 

So this should give you tests for all pairwise comparisons with multiplicity 
correction according to Bonferroni. I wonder how this differs from the results 
you get from SPSS. 

Seemingly, this is not what you want, so the question is what you really want.
"Bonferroni method" does not indicate which comparisons / contrasts to look at;
it just takes all those that you are interested in and multiplies the
corresponding p-values by the number of comparisons (making sure they do not
exceed 1). As John indicated, that can easily be handled manually. Yet, you
need to create the tests for all comparisons that you are interested in - if
not via linfct=mcp(Emotion = "Tukey"), you need to specify them otherwise (see
the three options described on ?glht). The code I suggested offers a convenient
shortcut in case you are interested in all pairwise comparisons and want them
corrected according to Bonferroni, but if something else is of interest,
you'd need to specify it (and let us know, as "Bonferroni method" alone does
not give a clue about which comparisons to test).

NB: You may want to pay attention to the warning halfway down the help page for
?mcp; it may not be clear exactly which effects you want to compare; mcp uses
just the main effects, without interactions etc.
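In base R terms, Michael's point — first choose the comparisons, then Bonferroni-adjust — might look like the sketch below; the data are simulated and the level names merely echo Angelo's design:

```r
# Simulated 5-level factor (names illustrative, data invented)
set.seed(9)
emotion <- gl(5, 12, labels = c("AGGRESSIVE", "HAPPY", "TENDER",
                                "SAD", "NEUTRAL"))
y <- rnorm(60) + (emotion == "AGGRESSIVE")

# Suppose only three comparisons are of interest, not all 10 pairwise ones
cmp <- list(c("AGGRESSIVE", "NEUTRAL"),
            c("HAPPY",      "NEUTRAL"),
            c("SAD",        "NEUTRAL"))

# Raw p-values for exactly those comparisons
p_raw <- sapply(cmp, function(cc)
  t.test(y[emotion == cc[1]], y[emotion == cc[2]])$p.value)

# Bonferroni correction over exactly those three tests
p.adjust(p_raw, method = "bonferroni")
```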

Michael


> -Original Message-
> From: John Fox [mailto:j...@mcmaster.ca]
> Sent: Dienstag, 7. Juli 2015 03:24
> To: angelo.arc...@virgilio.it
> Cc: Meyners, Michael; r-help@r-project.org
> Subject: Re: [R] R: RE: Bonferroni post hoc test in R for repeated measure
> ANOVA with mixed within and between subjects design
> 
> Dear Angelo,
> 
> The Bonferroni p-value is just the ordinary p-value times the number of
> tests, so, since R supports multiplication, you can apply the Bonferroni
> adjustment in R. Because Bonferroni tests for multiple comparisons can be
> very conservative, asking why you want to use them is a fair question.
> 
> Best,
>  John
> 
> 
> John Fox, Professor
> McMaster University
> Hamilton, Ontario, Canada
> http://socserv.mcmaster.ca/jfox/
> 
> 
> 
> On Tue, 7 Jul 2015 00:50:49 +0200 (CEST)  "angelo.arc...@virgilio.it"
>  wrote:
> > Dear Michael,
> > thank you for your answer, however, I am not asking for the tukey with
> > the bonferroni adjustment, but doing the post hoc with the bonferroni
> method.
> > Apparently this is done easily in SPSS, I am wondering whether it is 
> > possible
> with R.
> >
> > Can anyone help me?
> >
> > Thanks in advance
> >
> >
> > Angelo
> >
> >
> >
> > ----Original Message----
> > From: meyner...@pg.com
> > Date: 6 Jul 2015 17:52
> > To: "angelo.arc...@virgilio.it",
> > "r-help@r-project.org"
> > Subject: RE: [R] Bonferroni post hoc test in R for repeated measure ANOVA
> > with mixed within and between subjects design
> >
> > Untested, but if anything, your best bet is likely something like
> >
> > summary(glht(lme_H2H, linfct=mcp(Emotion = "Tukey")),
> > test=adjusted("bonferroni"))
> >
> > should work (despite the question why you'd want to use Bonferroni
> > rather than Tukey For a reference, see the book on the topic by the
> > package authors. Might be in the paper, too, which is given by
> >
> > citation("multcomp")
> >
> > HTH, Michael
> >
> >
> > > -Original Message-
> > > From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of
> > > angelo.arc...@virgilio.it
> > > Sent: Montag, 6. Juli 2015 16:01
> > > To: r-help@r-project.org
> > > Subject: [R] Bonferroni post hoc test in R for repeated measure
> > > ANOVA with mixed within and between subjects design
> > >
> > > 

[R] Differences in output of lme() when introducing interactions

2015-07-20 Thread angelo.arc...@virgilio.it
Dear List Members, 



I am searching for correlations between a dependent variable and a
factor, or a combination of factors, in a repeated measures design, so I
use the lme() function in R. However, I am getting very different results
depending on whether I add various factors to the lme formula or
only one is present. If a factor is found to be significant,
shouldn't it remain significant also when more factors are introduced in
the model?


I give an example of the outputs I get using the two models. In the first model 
I use one single factor:

library(nlme)
summary(lme(Mode ~ Weight, data = Gravel_ds, random = ~1 | Subject))
Linear mixed-effects model fit by REML
 Data: Gravel_ds 
  AIC  BIC   logLik
  2119.28 2130.154 -1055.64

Random effects:
 Formula: ~1 | Subject
(Intercept) Residual
StdDev:1952.495 2496.424

Fixed effects: Mode ~ Weight 
Value Std.Error DF   t-value p-value
(Intercept) 10308.966 2319.0711 95  4.445299   0.000
Weight-99.036   32.3094 17 -3.065233   0.007
 Correlation: 
   (Intr)
Weight -0.976

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max 
-1.74326719 -0.41379593 -0.06508451  0.39578734  2.27406649 

Number of Observations: 114
Number of Groups: 19 


As you can see, the p-value for the factor Weight is significant.
This is the second model, in which I add several factors to search for their
correlations:

library(nlme)
summary(lme(Mode ~ Weight*Height*Shoe_Size*BMI, data = Gravel_ds,
            random = ~1 | Subject))
Linear mixed-effects model fit by REML
 Data: Gravel_ds 
   AIC  BIClogLik
  1975.165 2021.694 -969.5825

Random effects:
 Formula: ~1 | Subject
(Intercept) Residual
StdDev:1.127993 2494.826

Fixed effects: Mode ~ Weight * Height * Shoe_Size * BMI
                                Value Std.Error DF    t-value p-value
(Intercept)                   5115955  10546313 95  0.4850941  0.6287
Weight                      -13651237   6939242  3 -1.9672518  0.1438
Height                         -18678     53202  3 -0.3510740  0.7487
Shoe_Size                       93427    213737  3  0.4371115  0.6916
BMI                         -13011088   7148969  3 -1.8199949  0.1663
Weight:Height                   28128     14191  3  1.9820883  0.1418
Weight:Shoe_Size               351453    186304  3  1.8864467  0.1557
Height:Shoe_Size                 -783      1073  3 -0.7298797  0.5183
Weight:BMI                      19475     11425  3  1.7045450  0.1868
Height:BMI                     226512    118364  3  1.9136867  0.1516
Shoe_Size:BMI                  329377    190294  3  1.7308827  0.1819
Weight:Height:Shoe_Size          -706       371  3 -1.9014817  0.1534
Weight:Height:BMI                -109        63  3 -1.7258742  0.1828
Weight:Shoe_Size:BMI             -273       201  3 -1.3596421  0.2671
Height:Shoe_Size:BMI            -5858      3200  3 -1.8306771  0.1646
Weight:Height:Shoe_Size:BMI         2         1  3  1.3891782  0.2589
 Correlation:
   [large correlation matrix of the fixed effects, garbled in transit; many
   entries are at or near ±1 (e.g. Weight with BMI: 0.998; Weight with
   Weight:Height: -1.000); message truncated here]

[R] R: Re: Differences in output of lme() when introducing interactions

2015-07-20 Thread angelo.arc...@virgilio.it
Dear Michael,
thanks for your answer. Although it answers my initial question, it
unfortunately does not help me find the solution to my problem.

Could you please tell me which analysis of the two models I should trust, then?
My goal is to know whether participants' choices
of the dependent variable are linearly related to their own weight, height,
shoe size and the combination of those effects.
Would the analysis of model 2 be more
correct than that of model 1? Which of the two analyses should I trust,
given my goal?
What is your recommendation?


Thanks in advance

Angelo





----Original Message----
From: li...@dewey.myzen.co.uk
Date: 20 Jul 2015 17:56
To: "angelo.arc...@virgilio.it",

Subject: Re: [R] Differences in output of lme() when introducing interactions

In-line

On 20/07/2015 15:10, angelo.arc...@virgilio.it wrote:
> Dear List Members,
>
>
>
> I am searching for correlations between a dependent variable and a
> factor or a combination of factors in a repeated measure design. So I
> use lme() function in R. However, I am getting very different results
> depending on whether I add on the lme formula various factors compared
> to when only one is present. If a factor is found to be significant,
> shouldn't remain significant also when more factors are introduced in
> the model?
>

The short answer is 'No'.

The long answer is contained in any good book on statistics which you 
really need to have by your side as the long answer is too long to 
include in an email.
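A tiny simulation (invented data and names, nothing to do with Angelo's dataset) illustrates why the short answer is 'No': a predictor that is clearly significant on its own can look very different once a strongly correlated covariate and its interaction enter the model.

```r
# Two nearly collinear predictors; y depends only on weight
set.seed(7)
weight <- rnorm(100, mean = 70, sd = 10)
height <- weight / 0.43 + rnorm(100, sd = 2)   # almost collinear with weight
y      <- 0.5 * weight + rnorm(100)

# p-value for weight alone, and with height plus the interaction added
p_alone <- summary(lm(y ~ weight))$coefficients["weight", "Pr(>|t|)"]
p_joint <- summary(lm(y ~ weight * height))$coefficients["weight", "Pr(>|t|)"]

p_alone   # essentially zero
p_joint   # far larger: collinearity inflates the standard error
```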

>
> I give an example of the outputs I get using the two models. In the first 
> model I use one single factor:
>
> library(nlme)
> summary(lme(Mode ~ Weight, data = Gravel_ds, random = ~1 | Subject))
> Linear mixed-effects model fit by REML
>   Data: Gravel_ds
>AIC  BIC   logLik
>2119.28 2130.154 -1055.64
>
> Random effects:
>   Formula: ~1 | Subject
>  (Intercept) Residual
> StdDev:1952.495 2496.424
>
> Fixed effects: Mode ~ Weight
>  Value Std.Error DF   t-value p-value
> (Intercept) 10308.966 2319.0711 95  4.445299   0.000
> Weight-99.036   32.3094 17 -3.065233   0.007
>   Correlation:
> (Intr)
> Weight -0.976
>
> Standardized Within-Group Residuals:
>  Min  Q1 Med  Q3 Max
> -1.74326719 -0.41379593 -0.06508451  0.39578734  2.27406649
>
> Number of Observations: 114
> Number of Groups: 19
>
>
> As you can see the p-value for factor Weight is significant.
> This is the second model, in which I add various factors for searching their 
> correlations:
>
> library(nlme)
> summary(lme(Mode ~ Weight*Height*Shoe_Size*BMI, data = Gravel_ds, random = ~1 
> | Subject))
> Linear mixed-effects model fit by REML
>   Data: Gravel_ds
> AIC  BIClogLik
>1975.165 2021.694 -969.5825
>
> Random effects:
>   Formula: ~1 | Subject
>  (Intercept) Residual
> StdDev:1.127993 2494.826
>
> Fixed effects: Mode ~ Weight * Height * Shoe_Size * BMI
>                                 Value Std.Error DF    t-value p-value
> (Intercept)                   5115955  10546313 95  0.4850941  0.6287
> Weight                      -13651237   6939242  3 -1.9672518  0.1438
> Height                         -18678     53202  3 -0.3510740  0.7487
> Shoe_Size                       93427    213737  3  0.4371115  0.6916
> BMI                         -13011088   7148969  3 -1.8199949  0.1663
> Weight:Height                   28128     14191  3  1.9820883  0.1418
> Weight:Shoe_Size               351453    186304  3  1.8864467  0.1557
> Height:Shoe_Size                 -783      1073  3 -0.7298797  0.5183
> Weight:BMI                      19475     11425  3  1.7045450  0.1868
> Height:BMI                     226512    118364  3  1.9136867  0.1516
> Shoe_Size:BMI                  329377    190294  3  1.7308827  0.1819
> Weight:Height:Shoe_Size          -706       371  3 -1.9014817  0.1534
> Weight:Height:BMI                -109        63  3 -1.7258742  0.1828
> Weight:Shoe_Size:BMI             -273       201  3 -1.3596421  0.2671
> Height:Shoe_Size:BMI            -5858      3200  3 -1.8306771  0.1646
> Weight:Height:Shoe_Size:BMI         2         1  3  1.3891782  0.2589
>   Correlation:
> [large correlation matrix of the fixed effects, garbled in transit;
> quoted message truncated here]

[R] R: Re: R: Re: Differences in output of lme() when introducing interactions

2015-07-21 Thread angelo.arc...@virgilio.it
Dear Bert,
thank you for your feedback. Can you please provide some references online so I
can improve "my ignorance"?
Anyway, please note that it is not true that I do not know statistics and
regression at all, and I am strongly
convinced that my question can be of interest to someone else in the future.

This is what forums are for, isn't it? This is why people help each other,
isn't it?

Moreover, don't you think that I would not have asked this R forum if I had
the possibility to ask or pay a statistician?
Don't you think I have done my best to study and learn before posting
this message? Trust me, I have read several
online tutorials on lme and lmer, and I am confident that I have got the basic
concepts. Still, I have not found the answer
to my problem, so if you know it, can you please give me some
suggestions that can help me?

I do not have a book from which to learn, and unfortunately I have to analyze
the results soon. Any help? Any to-the-point online reference
that can help me solve this problem?

Thank you in advance

Best regards

Angelo 





----Original Message----
From: bgunter.4...@gmail.com
Date: 21 Jul 2015 03:45
To: "angelo.arc...@virgilio.it"
Cc: ,
Subject: Re: [R] R: Re: Differences in output of lme() when introducing interactions

I believe Michael's point is that you need to STOP asking such
questions and START either learning some statistics or working with
someone who already knows some. You should not be doing such analyses
on your own given your present state of statistical ignorance.

Cheers,
Bert


Bert Gunter

"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
   -- Clifford Stoll


On Mon, Jul 20, 2015 at 5:45 PM, angelo.arc...@virgilio.it
 wrote:
> Dear Michael,
> thanks for your answer. Despite it answers to my initial question, it does 
> not help me in finding the solution to my problem unfortunately.
>
> Could you please tell me which analysis of the two models should I trust then?
> My goal is to know whether participants’ choices
>  of the dependent variable are linearly related to their own weight, height, 
> shoe size and
>  the combination of those effects.
> Would the analysis of model 2 be more
> correct than that of model 1? Which of the two analysis should I trust 
> according to my goal?
> What is your recommendation?
>
>
> Thanks in advance
>
> Angelo
>
>
>
>
>
> ----Original Message----
> From: li...@dewey.myzen.co.uk
> Date: 20 Jul 2015 17:56
> To: "angelo.arc...@virgilio.it",
>
> Subject: Re: [R] Differences in output of lme() when introducing interactions
>
> In-line
>
> On 20/07/2015 15:10, angelo.arc...@virgilio.it wrote:
>> Dear List Members,
>>
>>
>>
>> I am searching for correlations between a dependent variable and a
>> factor or a combination of factors in a repeated measure design. So I
>> use lme() function in R. However, I am getting very different results
>> depending on whether I add on the lme formula various factors compared
>> to when only one is present. If a factor is found to be significant,
>> shouldn't remain significant also when more factors are introduced in
>> the model?
>>
>
> The short answer is 'No'.
>
> The long answer is contained in any good book on statistics which you
> really need to have by your side as the long answer is too long to
> include in an email.
>
>>
>> I give an example of the outputs I get using the two models. In the first 
>> model I use one single factor:
>>
>> library(nlme)
>> summary(lme(Mode ~ Weight, data = Gravel_ds, random = ~1 | Subject))
>> Linear mixed-effects model fit by REML
>>   Data: Gravel_ds
>>AIC  BIC   logLik
>>2119.28 2130.154 -1055.64
>>
>> Random effects:
>>   Formula: ~1 | Subject
>>  (Intercept) Residual
>> StdDev:1952.495 2496.424
>>
>> Fixed effects: Mode ~ Weight
>>  Value Std.Error DF   t-value p-value
>> (Intercept) 10308.966 2319.0711 95  4.445299   0.000
>> Weight-99.036   32.3094 17 -3.065233   0.007
>>   Correlation:
>> (Intr)
>> Weight -0.976
>>
>> Standardized Within-Group Residuals:
>>  Min  Q1 Med  Q3 Max
>> -1.74326719 -0.41379593 -0.06508451  0.39578734  2.27406649
>>
>> Number of Observations: 114
>> Number of Groups: 19
>>
>>
>> As you can see the p-value for factor Weight is significant.
>> This is the second model, in which I add various factors for searching their 
>> correlations:
>>
>> library(nlm

[R] R: Re: R: Re: R: Re: Differences in output of lme() when introducing interactions

2015-07-21 Thread angelo.arc...@virgilio.it
Dear Michael,
thanks a lot. I am studying marginality and I came across this post:

http://www.ats.ucla.edu/stat/r/faq/type3.htm

Do you think that the procedure described there is the right one to solve my 
problem?

Would you have any other online resources to suggest, especially ones dealing 
with R?

My department does not have a statistician, so I have to find a solution on my 
own.

Thanks in advance

Angelo




Messaggio originale
Da: li...@dewey.myzen.co.uk
Data: 21-lug-2015 11.58
A: "angelo.arc...@virgilio.it", 

Cc: 
Ogg: Re: R: Re: [R] R: Re: Differences in output of lme() when introducing 
interactions

Dear Angelo

I suggest you do an online search for marginality which may help to 
explain the relationship between main effects and interactions. As I 
said in my original email this is a complicated subject which we are not 
going to retype for you.

If you are doing this as a student I suggest you sue your university for 
failing to train you appropriately and if it is part of your employment 
I suggest you find a better employer.

On 21/07/2015 10:04, angelo.arc...@virgilio.it wrote:
> Dear Bert,
> thank you for your feedback. Can you please provide some references
> online so I can improve "my ignorance"?
> Anyways, please notice that it is not true that I do not know statistics
> and regressions at all, and I am strongly
> convinced that my question can be of interest for some one else in the
> future.
>
> This is what forums serve for, isn't it? This is why people help each
> other, isn't it?
>
> Moreover, don't you think that I would not have asked to this R forum if
> I had the possibility to ask or pay a statician?
> Don't you think I have done already my best to study and learn before
> posting this message? Trust me, I have read different
> online tutorials on lme and lmer, and I am confident that I have got the
> basic concepts. Still I have not found the answer
> to solve my problem, so if you know the answer can you please give me
> some suggestions that can help me?
>
> I do not have a book where to learn and unfortunately I have to analyze
> the results soon. Any help? Any online reference to-the-point
> that can help me in solving this problem?
>
> Thank you in advance
>
> Best regards
>
> Angelo
>
>
> Messaggio originale
> Da: bgunter.4...@gmail.com
> Data: 21-lug-2015 3.45
> A: "angelo.arc...@virgilio.it"
> Cc: , 
> Ogg: Re: [R] R: Re: Differences in output of lme() when introducing
> interactions
>
> I believe Michael's point is that you need to STOP asking such
> questions and START either learning some statistics or work with
> someone who already knows some. You should not be doing such analyses
> on your own given your present state of statistical ignorance.
>
> Cheers,
> Bert
>
>
> Bert Gunter
>
> "Data is not information. Information is not knowledge. And knowledge
> is certainly not wisdom."
> -- Clifford Stoll
>
>
> On Mon, Jul 20, 2015 at 5:45 PM, angelo.arc...@virgilio.it
>  wrote:
>  > Dear Michael,
>  > thanks for your answer. Despite it answers to my initial
> question, it does not help me in finding the solution to my problem
> unfortunately.
>  >
>  > Could you please tell me which analysis of the two models should
> I trust then?
>  > My goal is to know whether participants’ choices
>  >  of the dependent variable are linearly related to their own
> weight, height, shoe size and
>  >  the combination of those effects.
>  > Would the analysis of model 2 be more
>  > correct than that of model 1? Which of the two analysis should I
> trust according to my goal?
>  > What is your recommendation?
>  >
>  >
>  > Thanks in advance
>  >
>  > Angelo
>  >
>  >
>  >
>  >
>  >
>  > Messaggio originale
>  > Da: li...@dewey.myzen.co.uk
>  > Data: 20-lug-2015 17.56
>  > A: "angelo.arc...@virgilio.it",
> 
>  > Ogg: Re: [R] Differences in output of lme() when introducing
> interactions
>  >
>  > In-line
>  >
>  > On 20/07/2015 15:10, angelo.arc...@virgilio.it wrote:
>  >> Dear List Members,
>  >>
>  >>
>  >>
>  >> I am searching for correlations between a dependent variable and a
>  >> factor or a combination of factors in a repeated measure design.
> So I
>  >> use lme() function in R. However

[R] R: Re: Differences in output of lme() when introducing interactions

2015-07-22 Thread angelo.arc...@virgilio.it
Dear Terry,
I am very grateful to you for such a detailed and helpful answer.
Following your recommendation, I will skip the method presented at 
http://www.ats.ucla.edu/stat/r/faq/type3.htm

So far, based on my understanding of R, I have arrived at the conclusion that the 
correct way to test whether there is
a correlation between my dependent variable (the spectral centroid of a sound) and 
the weight, height, and interaction
between weight and height of the participants asked to create those sounds (in a 
repeated-measures design) is:


lme_centroid <- lme(Centroid ~ Weight*Height*Shoe_Size, data = My_data, random = ~1 | Subject)

anova.lme(lme_centroid,type = "marginal")


Can anyone please confirm that these formulas are correct and give the 
significant or non-significant p-values for the main effects and their 
interactions? I would prefer to use lme(), not lmer().

I am making the assumption of course that the model I am using (Centroid ~ 
Weight*Height*Shoe_Size) is 
the best fit for my data.

Thanks in advance

Angelo




Messaggio originale
Da: thern...@mayo.edu
Data: 22-lug-2015 15.15
A: , 
Ogg: Re:  Differences in output of lme() when introducing interactions

"Type III" is a peculiarity of SAS, which has taken root in the world.  There 
are 3 main 
questions with respect to it:

1. How to compute it (outside of SAS).  There is a trick using contr.treatment 
coding that 
works if the design has no missing factor combinations, your post has a link to 
such a 
description.  The SAS documentation is very obtuse, thus almost no one knows 
how to 
compute the general case.

2. What is it?  It is a population average.  The predicted average treatment 
effect in a 
balanced population-- one where all the factor combinations appeared the same 
number of 
times.  One way to compute 'type 3' is to create such a data set, get all the 
predicted 
values, and then take the average prediction for treatment A, average for 
treatment B, 
average for C, ...  and test "are these averages the same".   The algorithm of 
#1 above 
leads to another explanation which is a false trail, in my opinion.

3. Should you ever use it?  No.  There is a very strong inverse correlation 
between 
"understand what it really is" and "recommend its use".   Stephen Senn has 
written very 
intelligently on the issues.

Terry Therneau
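Terry's point 2 describes an algorithm, which can be sketched in R as follows. This is a minimal illustration, not his code: the data are simulated and all names (trt, grp, y) are invented for the example.

```r
# Sketch of "type III as a population average": fit on unbalanced data,
# then average predictions over a balanced grid of factor combinations.
set.seed(1)
d <- expand.grid(trt = c("A", "B", "C"), grp = c("g1", "g2"))
d <- d[rep(seq_len(nrow(d)), times = c(5, 3, 4, 2, 6, 4)), ]  # unbalanced cells
d$y <- rnorm(nrow(d), mean = as.numeric(factor(d$trt)))

fit <- lm(y ~ trt * grp, data = d)

# Balanced reference population: every trt x grp cell exactly once
grid <- expand.grid(trt = c("A", "B", "C"), grp = c("g1", "g2"))
grid$pred <- predict(fit, newdata = grid)

# "Type III" asks whether these balanced-population averages are equal
tapply(grid$pred, grid$trt, mean)
```

The test "are these averages the same" is then a contrast among the three `tapply` values.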


On 07/22/2015 05:00 AM, r-help-requ...@r-project.org wrote:
> Dear Michael,
> thanks a lot. I am studying the marginality and I came across to this post:
>
> http://www.ats.ucla.edu/stat/r/faq/type3.htm
>
> Do you think that the procedure there described is the right one to solve my 
> problem?
>
> Would you have any other online resources to suggest especially dealing with 
> R?
>
> My department does not have a statician, so I have to find a solution with my 
> own capacities.
>
> Thanks in advance
>
> Angelo



   

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Which is the best way of reporting results of lme() in two different possible cases?

2015-07-24 Thread angelo.arc...@virgilio.it
 
Dear List Members,
I need some advice about how to report the output of lme(). When searching for 
correlations between a dependent variable 
and a factor or a combination of factors in a repeated-measures design 
with lme(), I noticed that I can encounter two types of results, and I am
 wondering which is the best way to report each of them in a journal 
publication. It is not clear to me when I should report the values of 
the beta coefficient together with the t-test value and p-value, or the 
beta coefficient with the F value and p-value.


Let’s have as a reference the following two models:


MODEL TYPE 1: fixed effects only 

lme_Weight <- lme(Sound_Feature ~ Weight, data = My_Data, random = ~1 | Subject)
summary(lme_Weight)

lme_Height <- lme(Sound_Feature ~ Height, data = My_Data, random = ~1 | Subject)
summary(lme_Height)


MODEL TYPE 2: Fixed and interaction effects together

lme_Interaction <- lme(Sound_Feature ~ Weight*Height, data = My_Data, random = ~1 | Subject)

summary(lme_Interaction)
anova.lme(lme_Interaction, type = "marginal")


RESULTS CASE 1: Applying model type 2 I do not get any significant p-value, so 
there is no interaction effect. Therefore I check
the simplified model type 1, and I get significant p-values for both Height and 
Weight.


RESULTS CASE 2: Applying model type 2 I get a significant p-value, so 
there is an interaction effect. Therefore I do not check
the simplified model type 1 for the two factors separately. Moreover, in 
the results of model type 2 I can also see that the fixed effects of 
both factors are significant.


I am not sure whether, in the presence of an interaction, it is correct to 
also report the significant effects of the separate factors, since I 
read somewhere that it does not make much sense. Am I wrong?


My attempt at reporting the results for the two cases is the following. Can you 
please tell me if I am right?


“We performed a linear mixed effects analysis of the relationship 
between Sound_Feature and Height and Weight. As fixed effects, we 
entered Height and Weight (without interaction term) into a first model,
 and we included the interaction effect into a second model. As random 
effects, we had intercepts for subjects.”


RESULTS CASE 1: “Results showed that Sound_Feature was linearly 
related to Height (beta = value, t(df)= value, p < 0.05) and Weight 
(beta = value, t(df) = value, p < 0.05), but not to their interaction 
effect.”


RESULTS CASE 2:  “Results showed that Sound_Feature was linearly 
related to Height (beta = value, F(df)= value, p < 0.05) and Weight 
(beta = value, F(df)= value, p < 0.05), and to their interaction 
effect (beta = value, F(df)= value, p < 0.05).”


Basically, for reporting the beta value in the two cases I use the 
output of summary(). In case 1, I also report the value of the 
t-test, taken from summary(). But for case 2 I do not report the 
t-test; I report the F value resulting from anova.lme(lme_Interaction, 
type = "marginal").


Is this the correct way of proceeding in the results reporting?
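As a side note on where those numbers live: nlme exposes both tables directly. The sketch below simulates a stand-in for My_Data, since the real data are not available; all names and values are invented to match the thread.

```r
library(nlme)

# Simulated stand-in for My_Data (19 subjects, 6 trials each)
set.seed(42)
My_Data <- data.frame(Subject = factor(rep(1:19, each = 6)),
                      Weight  = rep(runif(19, 50, 100), each = 6),
                      Height  = rep(runif(19, 150, 200), each = 6))
My_Data$Sound_Feature <- 0.5 * My_Data$Weight + 0.2 * My_Data$Height +
  rnorm(nrow(My_Data))

lme_Interaction <- lme(Sound_Feature ~ Weight * Height, data = My_Data,
                       random = ~1 | Subject)

# beta, t(df) and p for the write-up: columns Value, Std.Error, DF,
# t-value, p-value
summary(lme_Interaction)$tTable

# F(numDF, denDF) and p, as produced by the marginal ANOVA
anova.lme(lme_Interaction, type = "marginal")
```

The `$tTable` rows supply the "beta = value, t(df) = value, p = value" pieces, and the marginal ANOVA table supplies the "F(numDF, denDF) = value, p = value" pieces.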


I give an example of the outputs I get using the two models:


RESULTS CASE 1:

> ### Sound_Level_Peak vs Weight*Height ###
> 
>
> 
> library(nlme)
> lme_Sound_Level_Peak <- lme(Sound_Level_Peak ~ Weight*Height, data = 
> My_Data1, random = ~1 | Subject)
> 
> summary(lme_Sound_Level_Peak)
Linear mixed-effects model fit by REML
 Data: My_Data1 
       AIC      BIC    logLik
  716.2123 732.4152 -352.1061

Random effects:
 Formula: ~1 | Subject
(Intercept) Residual
StdDev:    5.470027 4.246533

Fixed effects: Sound_Level_Peak ~ Weight * Height 
                  Value Std.Error DF    t-value p-value
(Intercept)   -7.185833  97.56924 95 -0.0736485  0.9414
Weight 0.993543   1.63151 15  0.6089715  0.5517
Height-0.076300   0.55955 15 -0.1363592  0.8934
Weight:Height -0.005403   0.00898 15 -0.6017421  0.5563
 Correlation: 
  (Intr) Weight Height
Weight-0.927  
Height-0.994  0.886   
Weight:Height  0.951 -0.996 -0.919

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max 
-2.95289464 -0.51041805 -0.06414148  0.48562230  2.95415889 

Number of Observations: 114
Number of Groups: 19 


> anova.lme(lme_Sound_Level_Peak,type = "marginal")
              numDF denDF   F-value p-value
(Intercept)       1    95 0.0054241  0.9414
Weight            1    15 0.3708463  0.5517
Height            1    15 0.0185938  0.8934
Weight:Height     1    15 0.3620936  0.5563
> 
> 





> ### Sound_Level_Peak vs Weight ###
> 
> library(nlme)
> summary(lme(Sound_Level_Peak ~ Weight, data = My_Data1, random = ~1 | 
> Subject))
Linear mixed-effects model fit by REML
 Data: My_Data1 
       AIC      BIC    logLik
  706.8101 717.6841 -349.4051

Random effects:
 Formula: ~1 | Subject
(Intercept) Residual
StdDev:    5.717712 4.246533

Fixed effects: Sound_Lev

[R] How to plot results from lme in presence of a significant interaction

2015-11-22 Thread angelo.arc...@virgilio.it
Dear list members,
I wonder which is the best way to plot in R the results from the lme 
function, in the presence of a significant interaction. My model has two 
interacting fixed effects and a random effect. The analysis is from an 
experiment where 19 participants had to adjust the Centroid parameter of 
some sound stimuli, and I want to assess whether there is a relationship 
between their choices and their height and weight. There were 12 stimuli 
repeated twice for a total of 24 trials.
Here is the output of my analysis:

> library(nlme)
> lme_Centroid <- lme(Centroid ~ Weight*Height, data = scrd, random = ~1 | 
Subject)
> 
> summary(lme_Centroid)
Linear mixed-effects model fit by REML
 Data: scrd 
       AIC      BIC    logLik
  25809.38 25840.69 -12898.69

Random effects:
 Formula: ~1 | Subject
(Intercept) Residual
StdDev:    398.9658  3027.67

Fixed effects: Centroid ~ Weight * Height 
   Value Std.Error   DF   t-value p-value
(Intercept)   -20232.203  9101.096 1349 -2.223051  0.0264
Weight   478.854   152.184   15  3.146536  0.0067
Height   140.44052.194   15  2.690751  0.0168
Weight:Height -2.725 0.838   15 -3.253770  0.0053
 Correlation: 
  (Intr) Weight Height
Weight-0.927  
Height-0.994  0.886   
Weight:Height  0.951 -0.996 -0.919

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max 
-1.5059828 -0.8664208 -0.213  0.7098706  2.3620633 

Number of Observations: 1368
Number of Groups: 19 



I do not know how to represent these results in R. I tried
xyplot(Centroid ~ Weight * Height, type = c("p","r"), data = scrd)
but I guess it is wrong.

Thank you in advance

Best regards

Angelo


[R] R: Re: How to plot results from lme in presence of a significant interaction

2015-11-23 Thread angelo.arc...@virgilio.it

Dear Bert and R list,
actually I had already tried the interaction.plot function, but I concluded 
that it was not the correct one, since it gave me an empty plot (no lines). 
This cannot be right because, as you can see from the results of the analysis 
reported in my previous e-mail, there is a significant interaction.

The syntax I used was interaction.plot(scrd$Weight, scrd$Height, scrd$Centroid).

I want to plot my data and visualize the regression line showing the linear 
relationship between my dependent variable and the interaction between the 
Weight and Height of the participants.
How is this done in R?
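One likely reason for the empty plot (an assumption, since the data are not shown): interaction.plot() draws one line per level of a factor trace variable, and with continuous Height nearly every (Weight, Height) pair is unique, so no lines connect. A common workaround, sketched here on simulated data using the thread's variable names, is to bin the moderator first:

```r
# Simulated stand-in for scrd; all names and values are invented.
set.seed(1)
scrd <- data.frame(Weight = rep(round(runif(19, 50, 100)), each = 6),
                   Height = rep(round(runif(19, 150, 200)), each = 6))
scrd$Centroid <- 100 * scrd$Weight - 2 * scrd$Weight * scrd$Height +
  rnorm(nrow(scrd), sd = 500)

# Bin the continuous moderator so interaction.plot() has a factor to trace
scrd$HeightGroup <- cut(scrd$Height, breaks = 2,
                        labels = c("shorter", "taller"))
interaction.plot(scrd$Weight, scrd$HeightGroup, scrd$Centroid,
                 xlab = "Weight", trace.label = "Height group",
                 ylab = "mean Centroid")
```

For continuous-by-continuous interactions, the effects-package approach suggested later in the thread is usually the cleaner choice.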

Thank you

Best regards

Angelo



>Messaggio originale
>Da: bgunter.4...@gmail.com
>Data: 22-nov-2015 23.45
>A: "angelo.arc...@virgilio.it"
>Cc: "r-help"
>Ogg: Re: [R] How to plot results from lme in presence of a significant 
interaction
>
>I would have thought the first place to look would be ?interaction.plot
>
>Cheers,
>
>Bert
>Bert Gunter
>
>"Data is not information. Information is not knowledge. And knowledge
>is certainly not wisdom."
>   -- Clifford Stoll
>
>
>On Sun, Nov 22, 2015 at 2:44 PM, angelo.arc...@virgilio.it
> wrote:
>> Dear list members,
>> I wonder which is the best way to plot in r the
>> results from the lme function, in presence of a significant interaction.
>>  My model has two interacting fixed effects and a random effect. The
>> analysis is from an experiment where 19 participants had to adjust the
>> Centroid parameter of some sounds stimuli, and I want to assess whether
>>  there is a relationship between their choices and their height and
>> weight. There were 12 stimuli repeated twice for a total of 24 trials.
>> Here is the output of my analysis:
>>
>> > library(nlme)
>> > lme_Centroid <- lme(Centroid ~ Weight*Height, data = scrd, random = 
~1 | Subject)
>> >
>> > summary(lme_Centroid)
>> Linear mixed-effects model fit by REML
>>  Data: scrd
>>AIC  BIClogLik
>>   25809.38 25840.69 -12898.69
>>
>> Random effects:
>>  Formula: ~1 | Subject
>> (Intercept) Residual
>> StdDev:398.9658  3027.67
>>
>> Fixed effects: Centroid ~ Weight * Height
>>Value Std.Error   DF   t-value p-value
>> (Intercept)   -20232.203  9101.096 1349 -2.223051  0.0264
>> Weight   478.854   152.184   15  3.146536  0.0067
>> Height   140.44052.194   15  2.690751  0.0168
>> Weight:Height -2.725 0.838   15 -3.253770  0.0053
>>  Correlation:
>>   (Intr) Weight Height
>> Weight-0.927
>> Height-0.994  0.886
>> Weight:Height  0.951 -0.996 -0.919
>>
>> Standardized Within-Group Residuals:
>>Min Q1Med Q3Max
>> -1.5059828 -0.8664208 -0.213  0.7098706  2.3620633
>>
>> Number of Observations: 1368
>> Number of Groups: 19
>>
>>
>>
>> I
>>  do not know how to represent in R these results. I tried
>> xyplot(Centroid ~  Weight * Height, type = c("p","r"), data = scrd) but I
>>  guess it is wrong.
>>
>> Thank you in advance
>>
>> Best regards
>>
>> Angelo
>> [[alternative HTML version deleted]]
>>
>



[R] Question about lme syntax

2015-11-23 Thread angelo.arc...@virgilio.it
Dear list,
I need help understanding the syntax of lme() to fit my model according to the 
analysis I want to perform.

My dependent variable resulted from a perceptual experiment in which responses 
of participants were measured twice for each provided stimulus. My goal is to 
verify whether the responses depend on two properties of the participants that 
I know to be related to each other (height and weight, so they need to be 
considered together as an interaction). More importantly, I need to understand 
how this relationship modulates according to the type of stimulus participants 
were presented to.

Based on my understanding of lme syntax, the formula I have to use should be 
the following (because I am only interested in the interaction factor of Weight 
and Height)

lme_dv <- lme(dv ~ Weight:Height:Stimulus_Type, data = scrd, random = ~ 1 | Subject)

Am I correct?


Thank you in advance

Best regards

Angelo





[R] R: RE: How to plot results from lme in presence of a significant interaction

2015-11-23 Thread angelo.arc...@virgilio.it

Dear prof. Fox,
thank you very much

Best regards

Angelo

>Messaggio originale
>Da: j...@mcmaster.ca
>Data: 23-nov-2015 12.50
>A: "angelo.arc...@virgilio.it"
>Cc: "r-help@r-project.org"
>Ogg: RE: [R] How to plot results from lme in presence of a significant 
interaction
>
>Dear Angelo,
>
>You might try the Effect() function in the effects package: plot(Effect(c("Weight", "Height"), lme_Centroid)) .
>
>I hope this helps,
> John
>
>-
>John Fox, Professor
>McMaster University
>Hamilton, Ontario
>Canada L8S 4M4
>Web: socserv.mcmaster.ca/jfox
>
>
>
>> -Original Message-
>> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of
>> angelo.arc...@virgilio.it
>> Sent: November 22, 2015 5:44 PM
>> To: r-help@r-project.org
>> Subject: [R] How to plot results from lme in presence of a significant 
interaction
>> 
>> Dear list members,
>> I wonder which is the best way to plot in r the results from the lme 
function, in
>> presence of a significant interaction.
>>  My model has two interacting fixed effects and a random effect. The 
analysis is
>> from an experiment where 19 participants had to adjust the Centroid
>> parameter of some sounds stimuli, and I want to assess whether  there is a
>> relationship between their choices and their height and weight. There were 
12
>> stimuli repeated twice for a total of 24 trials.
>> Here is the output of my analysis:
>> 
>> > library(nlme)
>> > lme_Centroid <- lme(Centroid ~ Weight*Height, data = scrd, random = 
~1 |
>> Subject)
>> >
>> > summary(lme_Centroid)
>> Linear mixed-effects model fit by REML
>>  Data: scrd
>>AIC  BIClogLik
>>   25809.38 25840.69 -12898.69
>> 
>> Random effects:
>>  Formula: ~1 | Subject
>> (Intercept) Residual
>> StdDev:398.9658  3027.67
>> 
>> Fixed effects: Centroid ~ Weight * Height
>>Value Std.Error   DF   t-value p-value
>> (Intercept)   -20232.203  9101.096 1349 -2.223051  0.0264
>> Weight   478.854   152.184   15  3.146536  0.0067
>> Height   140.44052.194   15  2.690751  0.0168
>> Weight:Height -2.725 0.838   15 -3.253770  0.0053
>>  Correlation:
>>   (Intr) Weight Height
>> Weight-0.927
>> Height-0.994  0.886
>> Weight:Height  0.951 -0.996 -0.919
>> 
>> Standardized Within-Group Residuals:
>>Min Q1Med Q3Max
>> -1.5059828 -0.8664208 -0.213  0.7098706  2.3620633
>> 
>> Number of Observations: 1368
>> Number of Groups: 19
>> 
>> 
>> 
>> I
>>  do not know how to represent in R these results. I tried xyplot(Centroid ~
>> Weight * Height, type = c("p","r"), data = scrd) but I  guess it is wrong.
>> 
>> Thank you in advance
>> 
>> Best regards
>> 
>> Angelo
>



[R] R: Re: Question about lme syntax

2015-11-23 Thread angelo.arc...@virgilio.it
Dear Prof. Andrew Robinson,
I am very grateful to you for your enlightening answer

All the best

Angelo





Messaggio originale
Da: a.robin...@ms.unimelb.edu.au
Data: 23-nov-2015 20.50
A: "angelo.arc...@virgilio.it"
Cc: "R help (r-help@r-project.org)"
Ogg: Re: [R] Question about lme syntax


Hi Angelo,
it's dangerous to fit a model that includes interaction effects but omits main 
effects.  Among other things, what can happen is that the statistical tests 
become scale dependent, which is most unattractive.  
I think that you should include the main effects in your model, even as 
nuisance variables, and test the interaction using the model that includes them.
BTW, your question might better be located with the mixed-effects models 
special interest group. 
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models

Best wishes
Andrew
On Mon, Nov 23, 2015 at 9:19 PM, angelo.arc...@virgilio.it 
 wrote:
Dear list,
I need an help to understand the syntax of lme to fit my model according to the 
analysis I want to perform.

My dependent variable resulted from a perceptual experiment in which responses 
of participants were measured twice for each provided stimulus. My goal is to 
verify whether the responses depend on two properties of the participants that 
I know to be related to each other (height and weight, so they need to be 
considered together as an interaction). More importantly, I need to understand 
how this relationship modulates according to the type of stimulus participants 
were presented to.

Based on my understanding of lme syntax, the formula I have to use should be 
the following (because I am only interested in the interaction factor of Weight 
and Height)

lme_dv <- lme(dv ~ Weight:Height:Stimulus_Type, data = scrd, random = ~ 1 | Subject)

Am I correct?

Thank you in advance

Best regards

Angelo



-- 
Andrew Robinson
Deputy Director, CEBRA, School of Biosciences
Reader & Associate Professor in Applied Statistics   Tel: (+61) 0403 138 955
School of Mathematics and Statistics                 Fax: +61-3-8344 4599
University of Melbourne, VIC 3010 Australia
Email: a.robin...@ms.unimelb.edu.au   Website: http://www.ms.unimelb.edu.au/~andrewpr

MSME: http://www.crcpress.com/product/isbn/9781439858028
FAwR: http://www.ms.unimelb.edu.au/~andrewpr/FAwR/
SPuR: http://www.ms.unimelb.edu.au/spuRs/
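Andrew's point about retaining main effects can be sketched as follows. The data are simulated and all names follow the thread but are otherwise assumptions; this is an illustration of the two formulas, not a recommended analysis.

```r
library(nlme)

# Simulated stand-in for scrd (10 subjects, 4 trials each)
set.seed(7)
scrd <- data.frame(Subject = factor(rep(1:10, each = 4)),
                   Stimulus_Type = factor(rep(c("A", "B"), 20)),
                   Weight = rep(runif(10, 50, 100), each = 4),
                   Height = rep(runif(10, 150, 200), each = 4))
scrd$dv <- scrd$Weight * scrd$Height / 1000 + rnorm(40)

# Interaction-only model: tests are scale dependent (discouraged above)
m1 <- lme(dv ~ Weight:Height:Stimulus_Type, data = scrd,
          random = ~1 | Subject)

# Main effects retained as nuisance terms; the interaction is tested
# on top of them
m2 <- lme(dv ~ Weight * Height * Stimulus_Type, data = scrd,
          random = ~1 | Subject)
anova.lme(m2, type = "marginal")
```

Rescaling Weight (say, to kilograms vs. grams) changes the tests in m1 but not the highest-order test in m2, which is the scale-dependence problem Andrew describes.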








[R] Is manova() correct for this analysis?

2015-11-23 Thread angelo.arc...@virgilio.it
Dear list members,
I have to perform the following analysis but I do not know which function in R 
must be used. My guess is to use manova().

During an experiment I presented participants with some sound stimuli and I 
asked them to 
modify two parameters of the sound (Centroid and Sound_Level_Peak) to reach a 
given 
experimental goal. The initial sounds (preset sounds) had a value for those two 
parameters. 
What I am interested in is whether participants' modifications of the two 
parameters of the 
sound stimuli resulted in values actually different from the initial values of 
the parameters
of the preset sounds.

To give an idea, some rows of my data set are the following:

> head(scrd) 
  Stimulus_Type Centroid Sound_Level_Peak Preset
  Stimulus_A      1960.2          -20.963     no
  Stimulus_A      5317.2          -42.741     no
  .
  Stimulus_B     11256.0          -16.480     no
  Stimulus_B      9560.3          -19.682     no
  .
  .
  Stimulus_A      1900.2          -18.63     yes
  Stimulus_A      5617.6          -44.41     yes
  Stimulus_B     12056.0          -17.80     yes
  Stimulus_B      8960.5          -21.82     yes


This is the analysis I performed with manova():

> fit <- manova(cbind(Centroid,Sound_Level_Peak)~ Stimulus_Type*Preset, 
> data=scrd)
> summary(fit, test="Pillai")
                       Df  Pillai approx F num Df den Df  Pr(>F)    
Stimulus_Type          11 0.91888  106.629     22   2760 < 2e-16 ***
Preset                  1 0.00343    2.371      2   1379 0.09378 .  
Stimulus_Type:Preset   11 0.01155    0.729     22   2760 0.81348    
Residuals            1380
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> 

If I am not wrong, these results say that for each stimulus type there is no 
difference 
between the patterns of the two parameters in the preset and modified 
conditions.

Can anyone please tell me if I am correct, or suggest how to perform such 
an analysis in R?

Thanks in advance

Best

Angelo



[R] Syntax error in using Anova (car package)

2015-11-25 Thread angelo.arc...@virgilio.it
Dear list members,
I am getting an error while performing a repeated measures MANOVA using the 
Anova function 
of the "car" package. I want to apply it on the results of an experiment 
involving 19 participants, 
who were subjected to 36 stimuli, each stimulus was repeated twice for a total 
of 72 trials 
per subject. Participants had to adjust two parameters of sounds, Centroid and 
Sound_Level_Peak, 
for each stimulus. This is the head of my dataset (dependent variables: 
Centroid and 
Sound_Level_Peak; independent variables: Mat (6 levels) and Sh (2 levels)). 

> head(scrd)
   Subject Mat Sh Centroid Sound_Level_Peak
1 Subject1   C DS   1960.2          -20.963
2 Subject1   C SN   5317.2          -42.741
3 Subject1   G DS  11256.0          -16.480
4 Subject1   G SN   9560.3          -19.682
5 Subject1   M DS   4414.1          -33.723
6 Subject1   M SN   4946.1          -23.648


Based on my understanding of the online material I found, this is the procedure 
I used:

idata <- data.frame(scrd$Subject)
mod.ok <- lm(cbind(Centroid,Sound_Level_Peak) ~  Mat*Sh,data=scrd)
av.ok <- Anova(mod.ok, idata=idata, idesign=~scrd$Subject)


I get the following error

Error in check.imatrix(X.design) : 
  Terms in the intra-subject model matrix are not orthogonal.


Can anyone please tell me what is wrong in my formulas?

Thanks in advance

Best regards

Angelo







[R] R: RE: Syntax error in using Anova (car package)

2015-11-25 Thread angelo.arc...@virgilio.it


Dear Prof. John Fox, 
thanks a lot for your answer. Do you mean that my data set should have 19 rows 
(one for each of the 19 subjects) and 144 columns (that is, 72 trials * 2 
dependent variables)? So should the dataframe look like this?

Subject   Stimulus_1.Centroid.repetition1  Stimulus_1.Centroid.repetition2  Stimulus_1.Peak.repetition1  Stimulus_1.Peak.repetition2
Subject1  1000                             2000                             10                           20
Subject2  500                              600                              5                            6
..
SubjectN
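Building that one-row-per-subject layout can be sketched with base R's reshape(). The data below are simulated, and the Trial identifier is an assumption, since the data shown in the thread do not include one:

```r
# Simulated long-format stand-in for scrd; all names/values are invented.
scrd <- expand.grid(Subject = factor(paste0("Subject", 1:19)),
                    Mat = factor(c("C", "G", "M")),
                    Sh = factor(c("DS", "SN")))
scrd$Centroid <- rnorm(nrow(scrd), 5000, 1000)
scrd$Sound_Level_Peak <- rnorm(nrow(scrd), -25, 5)

# Label each within-subject cell, then spread to one row per subject
scrd$Trial <- interaction(scrd$Mat, scrd$Sh)
wide <- reshape(scrd[, c("Subject", "Trial", "Centroid", "Sound_Level_Peak")],
                idvar = "Subject", timevar = "Trial", direction = "wide")
dim(wide)  # 19 rows; 1 id column + 2 measures x 6 cells = 13 columns
```

The resulting columns (e.g. Centroid.C.DS) are the per-cell measures that a wide-format call to car::Anova() expects.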


However, differently from the example reported in the document you kindly 
provided, my experiment
has two dependent variables. 
My guess is that the analysis should be the following (considering 12 types of 
stimuli and 6 repetitions 
for each of them, and 2 dependent variables)



stim_levels <- paste0("Stimulus_", 1:12)
stimulus_type <- factor(rep(stim_levels, each = 6), levels = stim_levels)
 
repetitions <- ordered(rep(1:6, 12))

idata <- data.frame(stimulus_type, repetitions)

Notably, now idata has 72 rows (should it have 144 rows instead?). Then I 
continue with:


mod.ok <- lm(cbind(Stimulus_1.Centroid.repetition1, ., Stimulus_12.Peak.repetition2) ~ Subject, data=scrd)

av.ok <- Anova(mod.ok, idata=idata, idesign=~stimulus_type*repetitions)


Am I correct?

Thanks in advance

Best regards

Angelo




>Messaggio originale
>Da: j...@mcmaster.ca
>Data: 25-nov-2015 17.23
>A: "angelo.arc...@virgilio.it"
>Cc: "r-help@r-project.org"
>Ogg: RE: Syntax error in using Anova (car package)
>
>Dear Angelo,
>
>I'm afraid that this is badly confused. To use Anova() for repeated measures, 
the data must be in "wide" format, with one row per subject. To see how this 
works, check out the OBrienKaiser example in ?Anova and ?OBrienKaiser, or for 
more detail, the R Journal paper at <http://journal.r-project.org/archive/2013-1/RJournal_2013-1_fox-friendly-weisberg.pdf>.
>
>I hope this helps,
> John
>
>---
>John Fox, Professor
>McMaster University
>Hamilton, Ontario, Canada
>http://socserv.socsci.mcmaster.ca/jfox/
>
>
>
>> -----Original Message-----
>> From: angelo.arc...@virgilio.it [mailto:angelo.arc...@virgilio.it]
>> Sent: Wednesday, November 25, 2015 11:30 AM
>> To: r-help@r-project.org
>> Cc: Fox, John
>> Subject: Syntax error in using Anova (car package)
>> 
>> Dear list members,
>> I am getting an error while performing a repeated measures MANOVA using
>> the Anova function
>> of the "car" package. I want to apply it on the results of an experiment
>> involving 19 participants,
>> who were subjected to 36 stimuli, each stimulus was repeated twice for a
>> total of 72 trials
>> per subject. Participants had to adjust two parameters of sounds,
>> Centroid and Sound_Level_Peak,
>> for each stimulus. This is the head of my dataset (dependent variables:
>> Centroid and
>> Sound_Level_Peak; independent variables: Mat (6 levels) and Sh (2
>> levels)).
>> 
>> > head(scrd)
>>    Subject Mat  Sh  Centroid  Sound_Level_Peak
>> 1 Subject1   C  DS    1960.2           -20.963
>> 2 Subject1   C  SN    5317.2           -42.741
>> 3 Subject1   G  DS   11256.0           -16.480
>> 4 Subject1   G  SN    9560.3           -19.682
>> 5 Subject1   M  DS    4414.1           -33.723
>> 6 Subject1   M  SN    4946.1           -23.648
>> 
>> 
>> Based on my understanding of the online material I found, this is the
>> procedure I used:
>> 
>> idata <- data.frame(scrd$Subject)
>> mod.ok <- lm(cbind(Centroid,Sound_Level_Peak) ~  Mat*Sh,data=scrd)
>> av.ok <- Anova(mod.ok, idata=idata, idesign=~scrd$Subject)
>> 
>> 
>> I get the following error
>> 
>> Error in check.imatrix(X.design) :
>>   Terms in the intra-subject model matrix are not orthogonal.
>> 
>> 
>> Can anyone please tell me what is wrong in my formulas?
>> 
>> Thanks in advance
>> 
>> Best regards
>> 
>> Angelo
>> 
>> 
>> 
>> 
>
>

__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R: Re: R: RE: Syntax error in using Anova (car package)

2015-11-26 Thread angelo.arc...@virgilio.it
Dear Bert,
I do not have any statistical consultant near me that I can find or pay. Yes, I was confused by some statistical concepts, but I remind you, with humility, that human beings are able to learn. Some even learn fast! Don't underestimate a person's will to learn.

However, I hope you are the only one who is so annoyed by my requests for HELP!

The point of a forum is to provide help, don't you think? And there may be people in the future who, reading my posts on the internet, find solutions to the same problem. Do you know that there are many people around who do not have a statistical consultant available and who try to learn by themselves?

I do hope that someone can answer my last question, since it is purely a syntax question, and this is the most appropriate forum in the world to help me (and maybe others in the future) understand and solve the problem.

Thank you 

Best regards

Angelo


>Original message
>From: bgunter.4...@gmail.com
>Date: 26-Nov-2015 3:24
>To: "angelo.arc...@virgilio.it"
>Cc: , "r-help@r-project.org"
>Subject: Re: [R] R: RE: Syntax error in using Anova (car package)
>
>At this point, it seems obvious to me that you would benefit from local
>statistical consulting, rather than further badgering on this list, as
>you seem confused by both the underlying statistical concepts and how
>they need to be handled in R/car. Pursuing your current course seems
>destined to lead to folly.
>
>Of course, both you and John (and others) are free to disagree ...
>
>Cheers,
>Bert
>
>
>
>Bert Gunter
>
>"Data is not information. Information is not knowledge. And knowledge
>is certainly not wisdom."
>   -- Clifford Stoll
>
>
>On Wed, Nov 25, 2015 at 7:04 PM, angelo.arc...@virgilio.it
> wrote:
>> [quoted message elided; identical to the message at the top of this thread]

[R] Problem with ANOVA repeated measures: "Error() model is singular"

2011-05-21 Thread angelo.arc...@virgilio.it
Hello everybody,
I need help because I don't know whether the command for the ANOVA analysis I am performing in R is correct. Indeed, using the function aov I get the following warning: "In aov(..) Error() model is singular"

The structure of my table is the following: subject, stimulus, condition, sex, 
response

Example:

subject    stimulus  condition  sex  response
subject1   gravel    EXP1       M    59.8060
subject2   gravel    EXP1       M    49.9880
subject3   gravel    EXP1       M    73.7420
subject4   gravel    EXP1       M    45.5190
subject5   gravel    EXP1       M    51.6770
subject6   gravel    EXP1       M    42.1760
subject7   gravel    EXP1       M    56.1110
subject8   gravel    EXP1       M    54.9500
subject9   gravel    EXP1       M    62.6920
subject10  gravel    EXP1       M    50.7270
subject1   gravel    EXP2       M    70.9270
subject2   gravel    EXP2       M    61.3200
subject3   gravel    EXP2       M    70.2930
subject4   gravel    EXP2       M    49.9880
subject5   gravel    EXP2       M    69.1670
subject6   gravel    EXP2       M    62.2700
subject7   gravel    EXP2       M    70.9270
subject8   gravel    EXP2       M    63.6770
subject9   gravel    EXP2       M    72.4400
subject10  gravel    EXP2       M    58.8560
subject11  gravel    EXP1       F    46.5750
subject12  gravel    EXP1       F    58.1520
subject13  gravel    EXP1       F    57.4490
subject14  gravel    EXP1       F    59.8770
subject15  gravel    EXP1       F    55.5480
subject16  gravel    EXP1       F    46.2230
subject17  gravel    EXP1       F    63.3260
subject18  gravel    EXP1       F    60.6860
subject19  gravel    EXP1       F    59.4900
subject20  gravel    EXP1       F    52.6630
subject11  gravel    EXP2       F    55.7240
subject12  gravel    EXP2       F    66.4220
subject13  gravel    EXP2       F    65.9300
subject14  gravel    EXP2       F    61.8120
subject15  gravel    EXP2       F    62.5160
subject16  gravel    EXP2       F    65.5780
subject17  gravel    EXP2       F    59.5600
subject18  gravel    EXP2       F    63.8180
subject19  gravel    EXP2       F    61.4250
...



As you can notice, each subject repeated the evaluation in 2 conditions (EXP1 and EXP2).

What I am interested in is whether there are significant differences between the evaluations of the males and the females.

This is the command I used to perform the ANOVA with repeated measures:

aov1 = aov(response ~ stimulus*sex + Error(subject/(stimulus*sex)), data=scrd)
summary(aov1)


I get the following warning:

> aov1 = aov(response ~ stimulus*sex + Error(subject/(stimulus*sex)), 
data=scrd)
Warning message:
In aov(response ~ stimulus * sex + Error(subject/(stimulus * sex)),  :
Error() model is singular
> summary(aov1)

Error: subject
  Df  Sum Sq Mean Sq F value Pr(>F)
sex1  166.71  166.72   1.273  0.274
Residuals 18 2357.29  130.96   

Error: subject:stimulus
  Df Sum Sq Mean Sq F value Pr(>F)
stimulus   6 7547.9 1257.98 35.9633 <2e-16 ***
stimulus:sex   6   94.2   15.70  0.4487 0.8445
Residuals108 3777.8   34.98   
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 

Error: Within
   Df Sum Sq Mean Sq F value Pr(>F)
Residuals 420 9620.6  22.906   
> 


The thing is that, looking at the data, it is evident to me that there is a difference between males and females, because for each stimulus the mean for males is always higher than the mean for females.
Therefore the ANOVA should indicate a significant difference.


Is there anyone who can suggest where I am wrong?


Finally, I know that R has two packages for linear mixed models, nlme and lme4, but I have never used them and I don't know whether I should use one for my case.
Should I? If yes, could you please provide a quick R example of a command that could solve my problem?

Thanks in advance!

Best regards  



[R] Error with glht function: Error in mcp2matrix(model, linfct = linfct) : Variable(s) 'Type' have been specified in 'linfct' but cannot be found in 'model'!

2012-06-22 Thread angelo.arc...@virgilio.it
 Dear list members,
I get the following error when using the glht function to perform a post hoc 
analysis for an ANOVA with repeated measures:


require(nlme)
lme_H2H_musicians = lme(H2H ~ Emotion*Material, data=musicians, random = ~1|Subject)
require(multcomp)
summary(glht(lme_H2H_musicians, linfct=mcp(Type = "Tukey")), test = adjusted(type = "bonferroni"))
Error in mcp2matrix(model, linfct = linfct) : 
  Variable(s) 'Type' have been specified in 'linfct' but cannot be found in 'model'!



I don't understand why I get this error, but at the same time I am not at all an expert in R.

I can give you some more details about the experimental design I used and what I want to do with the analysis; maybe there is a problem with the model I am trying to use, and I need some advice.


I performed an experiment where some musicians had to walk with 5 different emotions (sad, tender, neutral, happy, aggressive) while listening to 4 types of sounds (metal, wood, gravel, snow), and I want to see if the sound affects the walking pace.
For each walk I measured the average time between steps (H2H, i.e. heel-to-heel).
Each trial was repeated twice.

The table in .csv format can be downloaded here: 
https://dl.dropbox.com/u/3288659/Results_data_R.csv


The ANOVA that I performed before the post hoc test is the following:

aov_H2H_musicians = aov(H2H ~ Emotion*Material + Error(Subject/(Emotion*Material)), data=musicians)
summary(aov_H2H_musicians)




Thanks in advance

All the best  



[R] R: Error with glht function: Error in mcp2matrix(model, linfct = linfct) : Variable(s) 'Type' have been specified in 'linfct' but cannot be found in 'model'!

2012-06-22 Thread angelo.arc...@virgilio.it
Hello everybody,
problem solved, there was a typo.

I wrote Type instead of Material
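For future readers, the corrected call, with the factor name matching the one actually in the model:

```r
summary(glht(lme_H2H_musicians, linfct = mcp(Material = "Tukey")),
        test = adjusted(type = "bonferroni"))
```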


Best




Original message

From: angelo.arc...@virgilio.it

Date: 22-Jun-2012 11:05

To: 

Subject: Error with glht function: Error in mcp2matrix(model, linfct = linfct) : Variable(s) 'Type' have been specified in 'linfct' but cannot be found in 'model'!



[quoted message elided; identical to the message above]



[R] MANOVA with repeated measures in R

2012-08-03 Thread angelo.arc...@virgilio.it
  

Dear list members,

I desperately need help in performing a MANOVA in R, but I have encountered some problems both in the design and in the syntax.



I conducted a listening experiment in which 16 participants had to rate audio stimuli along 5 scales, each representing an emotion (sad, tender, neutral, happy and aggressive). Each audio stimulus was synthesized to represent a particular emotion. Participants had to move 5 sliders, each of which corresponded to one of the 5 emotions. The sliders' range was [0,10], but participants were only informed about the extremities of the sliders (not at all - very much). There was no forced choice, so potentially each audio stimulus could be rated on all the scales (e.g. sad = 0.1, tender = 2.5, neutral = 2., happy = 8.3, aggressive = 1.7). There were 40 stimuli, and each stimulus was repeated twice, for a total of 80 trials.

I want to demonstrate that the created stimuli were actually correctly classified in the corresponding emotion. For example, I expect that happy sounds result in happy ratings by participants, and that these happy ratings are greater than the other 4 responses.

To analyze the data I want to use a MANOVA with repeated measures. For this purpose I would like to use the audio stimulus as the independent variable, having 40 levels, and the 5 responses as dependent variables. Since each individual has been measured twice, I include a within-subjects factor for trial number.

However, with 40 levels I would have 39 degrees of freedom, which with only 16 participants is not appropriate. For this reason I have also grouped the audio stimuli by their type, so my independent variable could be Trial_type, having 20 levels. Unfortunately, this reduction still leaves too many levels for 16 participants. Therefore my idea is to perform the MANOVA not on the whole table, but separately for each subset defining an emotion. In this way I would have just 4 levels.

My question is: is this a correct approach to analyze the data? Or would it be better to use another strategy?



For example, looking at the following .csv table, which can be downloaded here:

https://dl.dropbox.com/u/3288659/Results_data_listening_Test_Grouped.csv

I create the subset for the emotion Sad, and then I try to perform the MANOVA with repeated measures on it:

Sad <- subset(scrd, Emotion == "Sad")

model.emotions <- lm(cbind(Sad,Tender,Neutral,Happy,Aggressive) ~ Trial_type, data=scrd)
idata <- data.frame(scrd$Trial_number)
aov.emotions <- anova(model.emotions, idata=idata, idesign=~Trial_type, type="III")

Unfortunately I get the following error, which I am not able to solve:

> aov.emotions <- anova(model.emotions, idata=idata, idesign=~Trial, type="III")
Error in cbind(M, X) : number of rows of matrices must match (see arg 2)

I am not fully sure of the above code, since I am not an expert in R. Can you please correct it, showing the right R code?
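[One likely culprit, sketched here with made-up column names and random data: for Anova()/anova() on a multivariate linear model, idata must describe the columns of the response matrix, one row per column, whereas data.frame(scrd$Trial_number) has one row per observation — hence the row mismatch in cbind(M, X).]

```r
library(car)
set.seed(1)
# Hypothetical wide layout: 16 subjects, 2 trial types x 2 trial numbers
wide <- as.data.frame(matrix(rnorm(16 * 4), nrow = 16))
names(wide) <- c("t1.trial1", "t1.trial2", "t2.trial1", "t2.trial2")

# idata: one row per response column, in the same order as cbind() below
idata <- expand.grid(trial = ordered(1:2), type = factor(c("t1", "t2")))

mod <- lm(cbind(t1.trial1, t1.trial2, t2.trial1, t2.trial2) ~ 1, data = wide)
Anova(mod, idata = idata, idesign = ~ type * trial, type = "III")
```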



The experiment was performed by two groups of listeners: musicians and non-musicians. I created two plots of the results, one for the group of musicians and the other for the group of non-musicians:

https://dl.dropbox.com/u/3288659/exp2_musicians.pdf
https://dl.dropbox.com/u/3288659/exp2_non_musicians.pdf

Finally, I was not able to find any post hoc test to apply to the result of the MANOVA in case of a significant main effect. Any suggestions?

Thanks in advance



Best regards



Angelo





   



[R] prop.test() and the simultaneous confidence interval for multiple proportions in R

2011-12-08 Thread angelo.arc...@virgilio.it
Dear list members,
I want to perform in R the analysis "simultaneous confidence interval for 
multiple proportions", as illustrated in the article of Agresti et al. (2008) 
"Simultaneous confidence intervals for comparing binomial parameter", 
Biometrics 64, 1270-1275.

If I am not wrong, the R function implementing the Agresti et al. method is prop.test(). I am asking for help because I have some difficulty reading the output of that function.

As a case study, I need to apply such an analysis to the following simple problem:

I ran an experiment in which 12 participants had to choose between 3 conditions when presented with 3 stimuli.

Stimulus   Condition1  Condition2  Condition3
A                   9           1           2
B                  10           2           0
C                   8           2           2

My goal is to prove that it is not by chance that Condition 1 is preferred over the other two conditions.

So, I apply the function prop.test(), summing the values of Conditions 2 and 3:

table<-matrix(c(9,3,10,2,8,4),ncol=2,byrow=T)
rownames(table)<-c("stimulusA","stimulusB","stimulusC")
colnames(table)<-c("Condition1","Conditions2and3")

> table
  Condition1 Conditions2and3
stimulusA  9   3
stimulusB 10   2
stimulusC  8   4


prop.test(table)

> prop.test(table)

3-sample test for equality of proportions without continuity correction

data:  table 
X-squared = 0.8889, df = 2, p-value = 0.6412
alternative hypothesis: two.sided 
sample estimates:
   prop 1prop 2prop 3 
0.750 0.833 0.667 

Warning message:
In prop.test(table) : Chi-squared approximation may be incorrect

I don't understand how I can deduce from this output that Condition1 is preferred over Conditions 2 and 3.
Should I simply look at the p-value?
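[Note for future readers: prop.test() on this matrix tests whether the proportion choosing Condition1 is the same across the three stimuli; it does not test whether Condition1 is chosen more often than chance. A sketch of a per-stimulus test against chance, assuming each of the three conditions is equally likely under the null (p = 1/3):]

```r
# Stimulus A: 9 of 12 participants chose Condition1; chance rate is 1/3
binom.test(9, 12, p = 1/3, alternative = "greater")
```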

The fact is that I tried with a more extreme example, but the p-value is still above 0.05. This is the table I used:

> table2
  Condition1 Condition2
stimulusA 12  0
stimulusB 10  2
stimulusC 11  1

> table2<-matrix(c(12,0,10,2,11,1),ncol=2,byrow=T)
> rownames(table2)<-c("stimulusA","stimulusB","stimulusC")
>  colnames(table2)<-c("Condition1","Condition2")
>  prop.test(table2)

3-sample test for equality of proportions without continuity correction

data:  table2 
X-squared = 2.1818, df = 2, p-value = 0.3359
alternative hypothesis: two.sided 
sample estimates:
   prop 1prop 2prop 3 
1.000 0.833 0.917 

Warning message:
In prop.test(table2) : Chi-squared approximation may be incorrect



Could you please enlighten me?


Thanks in advance

   



[R] R: prop.test() and the simultaneous confidence interval for multiple proportions in R

2011-12-09 Thread angelo.arc...@virgilio.it
Hello, 
is there anyone who has some ideas about the problem I posted?

Help!   
