Dear R-Users!

I am using the svm() function from the e1071 package to classify two groups
from a set of 180 indicator variables, but I am confused about the
cross-validation procedure.

(A) On the one hand, I can set cross=10 in the svm() call to perform 10-fold
cross-validation and obtain an estimate of the SVM's predictive performance.
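
In code, I am doing roughly the following (here 'dat' and 'group' are
placeholders for my actual data frame and its factor response; the 180
indicators are the remaining columns):

library(e1071)

fit <- svm(group ~ ., data = dat, cross = 10)  # 10-fold cross-validation
summary(fit)       # prints the per-fold and total CV accuracy
fit$accuracies     # accuracy in each of the 10 folds
fit$tot.accuracy   # overall cross-validated accuracy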

(B) On the other hand, most tutorials I have found recommend splitting the
data into two sets, using one set to train the SVM and the other to test the
trained SVM's predictive power.
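
I take that to mean something along these lines (again with the hypothetical
'dat' from above; the 70/30 split ratio is just an example):

set.seed(1)                                          # reproducible split
idx   <- sample(nrow(dat), floor(0.7 * nrow(dat)))   # 70% of rows for training
train <- dat[idx, ]
test  <- dat[-idx, ]

fit  <- svm(group ~ ., data = train)   # train on one part only
pred <- predict(fit, newdata = test)   # predict the held-out part
mean(pred == test$group)               # hold-out accuracy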

My understanding is that (A) and (B) are alternative ways to estimate the
SVM's predictive performance. Or do I need to implement both?

Many thanks for your help!
Jokel
