Evan DeCorte wrote:
Thanks for the great feedback. Conceptually, I understand how you would go about testing out-of-sample performance. It seems like accuracy() would be the best way to measure out-of-sample forecast performance, and it will help automate the construction of statistics I would otherwise have calculated by hand. However, the real question now is how to loop through a time series and automatically split it into training and testing sets. I know how I would do it for an individual series, but doing so manually over a large number of time series seems excessively burdensome.

You don't have to do it manually. For example, to do 10-fold cross-validation on a time series of length n, split it into 10 blocks of n/10 observations each, e.g. using the index i <- rep(1:10, each = n/10) (assuming n is divisible by 10). Then loop 10 times, using 9 blocks as the training set and the remaining block as the test set (a different test block each time), and record the MSE for each fold. Repeat this for each of your time series; a sketch of the loop is below.
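Here is a rough sketch of that loop. The simulated series y, the fold count k, and the names idx and mse are just placeholders of mine, and I've used a simple trend-plus-season lm() fit because its predictions depend only on covariates, so the non-contiguous training blocks cause no trouble; for ARIMA-type models you would more commonly use a rolling-origin split instead.

set.seed(1)
n <- 120                                  # toy series length, divisible by 10
y <- ts(10 + 0.1 * (1:n) + sin(2 * pi * (1:n) / 12) + rnorm(n),
        frequency = 12)

k   <- 10
idx <- rep(1:k, each = n / k)             # block membership of each observation
dat <- data.frame(y = as.numeric(y),
                  t = 1:n,                # linear trend term
                  season = factor(cycle(y)))

mse <- numeric(k)
for (fold in 1:k) {
  train <- dat[idx != fold, ]             # 9 blocks for training
  test  <- dat[idx == fold, ]             # 1 block for testing
  fit   <- lm(y ~ t + season, data = train)
  pred  <- predict(fit, newdata = test)
  mse[fold] <- mean((test$y - pred)^2)    # MSE for this fold
}
mean(mse)                                 # cross-validated MSE

To handle many series, wrap the loop in a function taking the series as an argument and lapply() it over a list of your time series.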


--
Gad Abraham
Dept. CSSE and NICTA
The University of Melbourne
Parkville 3010, Victoria, Australia
email: gabra...@csse.unimelb.edu.au
web: http://www.csse.unimelb.edu.au/~gabraham

