Hi Andra, I have been doing some ROC analysis for a new diagnostic test. I used the pROC package to assess thresholds and to compare different diagnostic tests against a "gold standard". In your case, let's say the gold standard is the observed values y0.
Here is an example:

y0 <- sample(0:1, 50, replace = TRUE)            # simulated observed binary outcome (the "gold standard")

test1 <- sample(0:100, 50, replace = TRUE) / 100
y1 <- ifelse(y0 == 0, test1, 1 - test1)          # predicted values from the first model

test2 <- sample(0:100, 50, replace = TRUE) / 100
y2 <- ifelse(y0 == 0, test2, 1 - test2)          # predicted values from the second model

library(pROC)

i1 <- roc(response = y0, predictor = y1, percent = TRUE, plot = TRUE,
          of = "threshold", ci = TRUE, lwd = 1, lty = 2, thresholds = "best", asp = 1)
i2 <- roc(response = y0, predictor = y2, percent = TRUE, plot = TRUE,
          of = "threshold", ci = TRUE, lwd = 1, lty = 3, thresholds = "best", add = TRUE)

coords(i1, x = "best", best.method = "youden")   # best threshold for y1 (Youden index)
coords(i2, x = "best", best.method = "youden")   # best threshold for y2 (Youden index)

roc.test(i1, i2)                                 # compare the AUCs of the two ROC curves

See ?pROC for more details.

Hope this helps,
Rock
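
P.S. If you also want confidence intervals around the AUCs or around the Youden-best threshold, something along these lines should work (a minimal sketch reusing the i1 and i2 objects from above; the ret = columns are just the ones I usually look at):

ci.auc(i1)    # confidence interval for the AUC of the first test
ci.auc(i2)    # ... and for the second test

# bootstrap CI for the Youden-best threshold and its sensitivity/specificity
ci.coords(i1, x = "best", best.method = "youden",
          ret = c("threshold", "sensitivity", "specificity"))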