Hello, I am having trouble interpreting the results of a Friedman test. It seems to me that the p-value from a Friedman test, and with it the "significance", has to be interpreted differently from the p-value from, e.g., an ANOVA. Is that so?
Let me describe the problem in some detail: I am testing many different hypotheses in my observer study, and only for some of them are the assumptions of an ANOVA met (checked with Shapiro-Wilk and Bartlett tests). For the others I perform a Friedman test. To my surprise, the p-value of the Friedman test is < 0.05 for every hypothesis I tested. So I tried to compare the two methods by running both (Friedman and ANOVA) on the same set of data. While the ANOVA gives p = 0.34445 (no significant difference between the groups), the Friedman test gives p = 1.913e-06 (a significant difference between the groups?). How can this be? Or am I doing something wrong?

I have three measured values for each condition. For the ANOVA I use all of them; for the Friedman test I collapse the three values to their geometric mean, since friedman.test() does not work with replicated values. Is this a crude mistake? (A minimal sketch of what I am running is in the P.S. below.)

Thanks in advance for any help.
Doerte
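
P.S. In case it helps, here is a minimal sketch of the kind of thing I am running, with made-up data (the design of 5 conditions x 10 cases x 3 replicates and the lognormal values are just placeholders, not my real data):

set.seed(1)
d <- expand.grid(condition = factor(1:5),   # made-up design
                 case      = factor(1:10),
                 rep       = 1:3)           # 3 replicate measurements
d$value <- rlnorm(nrow(d))                  # placeholder values

## ANOVA using all replicates, blocking on case:
summary(aov(value ~ condition + case, data = d))

## For the Friedman test, first collapse the replicates to their
## geometric mean, since friedman.test() needs an unreplicated
## complete block design (one value per condition/case cell):
gm <- aggregate(value ~ condition + case, data = d,
                FUN = function(x) exp(mean(log(x))))
friedman.test(value ~ condition | case, data = gm)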