I want to evaluate the overall accuracy rate of some clustering methods. For
that, I have created 10 different data sets (by simulation) and I have
measured the accuracy (number of well-classified events divided by the total
number of events) of each method on each data set. So in the end I have 10
accuracy measures for each method.
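To make the setup concrete, here is a minimal sketch in R of how such per-data-set accuracies might be collected; the simulated labels and the 80% agreement rate are purely hypothetical stand-ins for the actual clustering output:

```r
## Hypothetical example: accuracy of one clustering method on 10 simulated data sets
set.seed(42)
accuracies <- replicate(10, {
  truth <- sample(1:3, 100, replace = TRUE)  # simulated true class labels
  ## stand-in for a clustering method: agrees with truth ~80% of the time
  pred <- ifelse(runif(100) < 0.8, truth, sample(1:3, 100, replace = TRUE))
  mean(pred == truth)                        # well-classified events / total events
})
mean(accuracies)  # average accuracy across the 10 data sets
sd(accuracies)    # spread of accuracy across the 10 data sets
```

Here `sd(accuracies)` is the quantity the question is about: the variability of the method's accuracy across the simulated data sets.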

If I calculate the standard deviation of the 10 accuracy measures, what does
that quantity mean? Is the method with the lowest standard deviation of
accuracy the one with the highest reproducibility/repeatability, or something
else? What would be the right term?

Thank you
-- 
View this message in context: 
http://n4.nabble.com/Standard-deviation-of-an-accuracy-rate-tp1819788p1819788.html
Sent from the R help mailing list archive at Nabble.com.

