On Thu, Jan 20, 2011 at 03:14:01PM -0800, Changbin Du wrote:
> ROCR
>
I appreciate this information, which is new to me. Up to now, I was using the
function

  get.auc <- function(statistic, label, negative, positive) {
      xmove <- as.numeric(label == negative)
      ymove <- as.numeric(label == positive)
      stopifnot(xmove + ymove == 1)
      rank.stat <- rank(statistic, ties.method="min")
      steps <- aggregate(cbind(xmove, ymove), by=list(rank.stat), sum)
      n <- nrow(steps)
      x <- c(0, cumsum(steps[n:1, 2]))
      y <- c(0, cumsum(steps[n:1, 3]))
      sum(diff(x) * (y[1:n] + y[2:(n+1)])) / (2*max(x)*max(y))
  }

The CRAN package ROCR allows one to compute many different measures and
visualisations of classifier performance. In particular, AUC may be computed
as follows:

  library(ROCR)
  n <- 50
  label <- ordered(rep(c("c1", "c2"), length=n))
  set.seed(12345)
  statistic <- rnorm(n) + (label == "c2")
  pred <- prediction(statistic, label)
  AUC <- performance(pred, "auc")@y.values[[1]]
  cbind(AUC, diff=AUC - get.auc(statistic, label, "c1", "c2"))
  #         AUC diff
  # [1,] 0.7392    0

The difference is not always exactly zero, but it is at the level of the
machine rounding error.

Petr Savicky.

> On Thu, Jan 20, 2011 at 3:04 PM, He, Yulei <h...@hcp.med.harvard.edu> wrote:
>
> > Hi, there.
> >
> > Suppose I already have sensitivities and specificities. What is the quick
> > R-function to calculate AUC for the ROC plot? There seem to be many R
> > functions to calculate AUC.
> >
> > Thanks.
> >
> > Yulei
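
For the question quoted above, where sensitivities and specificities are
already in hand rather than raw scores and labels, one common option is to
apply the trapezoidal rule to the ROC points directly. The following is a
minimal sketch, not taken from the original post: the function name
auc.from.sens.spec and the vectors sens and spec are made up for
illustration, and it assumes each pair (1 - specificity, sensitivity) is one
operating point and that the curve can be closed at (0, 0) and (1, 1).

  ## Minimal sketch: trapezoidal-rule AUC computed directly from
  ## sensitivities and specificities (hypothetical names and data).
  auc.from.sens.spec <- function(sens, spec) {
      fpr <- 1 - spec                    # x-axis of the ROC curve
      ord <- order(fpr, sens)            # sort operating points left to right
      x <- c(0, fpr[ord], 1)             # close the curve at (0, 0) and (1, 1)
      y <- c(0, sens[ord], 1)
      sum(diff(x) * (y[-length(y)] + y[-1]) / 2)   # trapezoidal rule
  }

  ## made-up operating points, for illustration only
  sens <- c(0.95, 0.85, 0.60, 0.30)
  spec <- c(0.40, 0.70, 0.90, 0.98)
  auc.from.sens.spec(sens, spec)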
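
On the visualisation side of ROCR mentioned in the reply above, the ROC curve
itself can be plotted from the same prediction object. This is a minimal
sketch, not part of the original post; it reuses the object pred from the
ROCR example above and assumes the ROCR package is installed.

  ## Minimal sketch: plot the ROC curve for the prediction object 'pred'
  ## created in the ROCR example above.
  library(ROCR)
  perf <- performance(pred, "tpr", "fpr")  # true positive rate vs. false positive rate
  plot(perf, colorize = TRUE)              # colour encodes the score cutoff
  abline(0, 1, lty = 2)                    # diagonal = performance of a random classifier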