What you may want is a measure of inter-rater reliability, though what you are discussing is not the way it is normally used. Try googling "inter-rater reliability", or have a look at a textbook such as: Crocker, L. and Algina, J. (1986). Introduction to Classical and Modern Test Theory. New York: Holt, Rinehart, and Winston.
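Since you didn't post data, here is a small sketch in base R with made-up ratings (an assumption, not your data) showing two common single-number summaries of agreement among N > 2 raters: the average pairwise inter-rater correlation, and Cronbach's alpha treating the raters as "items". Neither is quite a significance test, but both condense the correlation matrix into one interpretable number.

```r
## Hypothetical data: 10 items rated by 4 raters on a 1..5 scale
set.seed(1)
ratings <- matrix(sample(1:5, 40, replace = TRUE), nrow = 10, ncol = 4)

## Average pairwise inter-rater correlation:
## one number summarizing the off-diagonal of the correlation matrix
cm    <- cor(ratings)
avg_r <- mean(cm[lower.tri(cm)])

## Cronbach's alpha, treating raters as "items":
## high alpha means the raters' scores covary strongly
k     <- ncol(ratings)
alpha <- k / (k - 1) *
         (1 - sum(apply(ratings, 2, var)) / var(rowSums(ratings)))

avg_r
alpha
```

For categorical agreement measures (Fleiss' kappa, Krippendorff's alpha, intraclass correlation), the `irr` package on CRAN may also be worth a look.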
--- context grey <[EMAIL PROTECTED]> wrote:

> Hello,
>
> I have a survey in which a number of people rated a set of items on a
> 1..5 scale. I believe it would be desirable to argue that the
> people's responses are correlated, and thus that the rating task
> makes sense to people.
>
> Is there a standard approach to this? With only 2 people, the
> correlation coefficient between their responses would be an
> interpretable number (though probably there is some stronger way to
> assess whether the results are _significantly_ correlated).
>
> With N > 2 people, there is the correlation matrix, but it does not
> give a nice single number. Thinking, the determinant of the matrix of
> response vectors might be a possibility, since it will be low if the
> rows are correlated. Though, it should be normalized somehow to be
> interpretable.
>
> Is there a standard approach to this problem?
>
> Thanks for any advice

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.