I'm looking for an extension of kappa to measure agreement among multiple
raters when each subject can receive more than one response.  For example,
say a group of doctors assign diseases to patients.  Each patient may be
assigned one or more diseases, and the number of doctors diagnosing any
one patient can range from two up to the whole group.

Here's an extremely simple example of the type of data I might have (two
patients, three doctors, five diagnoses):

pat <- c('a', 'a', 'b', 'b', 'b')   # patient identifiers
doc <- c('x', 'y', 'x', 'y', 'z')   # doctor (rater) identifiers
dx1 <- c('1', '2', '3', '4', '5')   # first diagnosis from each doctor
dx2 <- c('2', '',  '4', '',  '')    # second diagnosis, if any
df  <- data.frame(pat = pat, doc = doc, dx1 = dx1, dx2 = dx2)
df
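
To show the shape of the problem, here is a rough sketch (only a
descriptive overlap measure, not the chance-corrected statistic I'm
after) that collects each doctor's diagnoses for a patient into a set
and compares doctors pairwise within a patient via the Jaccard index:

## descriptive check only: per-doctor diagnosis sets and pairwise overlap
dx_sets <- lapply(split(df, interaction(df$pat, df$doc), drop = TRUE),
                  function(d) setdiff(as.character(unlist(d[c('dx1', 'dx2')])), ''))

pairwise_overlap <- function(sets) {
  pairs <- combn(names(sets), 2, simplify = FALSE)
  res <- sapply(pairs, function(p) {
    a <- sets[[p[1]]]
    b <- sets[[p[2]]]
    length(intersect(a, b)) / length(union(a, b))   # Jaccard index
  })
  names(res) <- sapply(pairs, paste, collapse = ' vs ')
  res
}

## overlap among the three doctors who saw patient 'b'
pairwise_overlap(dx_sets[grep('^b\\.', names(dx_sets))])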

I found a paper that addresses this, although I can't find any
implementation of it on CRAN.  The kappa implementations I have found on
CRAN don't handle the multi-response case.  I would rather not reinvent the
wheel if this has already been implemented in R.  Any help would be
greatly appreciated.
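
For contrast, the closest I've found is the single-response, multi-rater
case, e.g. kappam.fleiss in the irr package, which (if I understand its
interface correctly) wants a subjects-by-raters matrix with exactly one
rating per cell, so the multi-diagnosis data above don't fit directly.
A toy illustration restricted to the two doctors who saw both patients
and their first diagnosis only:

library(irr)   # assumes the irr package is installed
ratings <- matrix(c('1', '2',    # patient a: first diagnosis from doctors x, y
                    '3', '4'),   # patient b: first diagnosis from doctors x, y
                  nrow = 2, byrow = TRUE,
                  dimnames = list(c('a', 'b'), c('x', 'y')))
kappam.fleiss(ratings)   # works only because each cell holds a single diagnosis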

Cheers,

Luk Arbuckle
