On Sat, Aug 21, 2010 at 6:48 PM, Laura S <lesla...@gmail.com> wrote:
> Dear all:
>
> Any suggestions are much appreciated. I am looking for a way to make a
> series of similar, but slightly modified, .r files.
>
> My issue is automating the creation of 320 .r files that change the
> for(i in 1:x) in my base .r file (as well as other elements, e.g., the
> load(...), setwd(...)). For smaller jobs running on a single computer
> with batch files, I have been manually changing the for(i in 1:x) line,
> etc.
>
> Why does this matter to me? I am planning to run a simulation experiment
> on a linux cluster as a serial job. Although not elegant, it has been
> suggested I make 320 .r files so qsub runs one .r file and then selects
> other jobs. Thus, the manual route I am currently using would take a
> very long time (given multiple runs of 320 .r files, given experimental
> replication).
qsub? Are you using the Sun Grid Engine or some other queue submission
system? It should be possible to pass a parameter through to your R
process. I wrote some docs on something like that, geared for our local
HPC, which uses SGE:

http://www.maths.lancs.ac.uk/~rowlings/HPC/RJobs/

The crux of it is to get the SGE_TASK_ID variable from the environment
and use that to do slightly different things in a batched submission. You
get an SGE_TASK_ID if you submit the job as a task array.

If you really do have to make 320 .R files, then look into the brew
package, which is a simple templating system. Create an analysis.brew
file that has tagged template variables in it (it's something like
<%= i %>) and then run brew 320 times.

Barry

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
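To make the task-array idea concrete, here is a minimal sketch of a job
script. The file names (run_sim.sh, simulation.R), the fallback default,
and the R one-liner in the comments are my assumptions for illustration,
not something from Barry's docs; only SGE_TASK_ID itself is what SGE sets
for array jobs.

```shell
#!/bin/sh
# Hypothetical SGE task-array job script (file name run_sim.sh assumed).
# Submitted once with:  qsub -t 1-320 run_sim.sh
# SGE then runs 320 tasks, setting SGE_TASK_ID to 1, 2, ..., 320.
TASK_ID="${SGE_TASK_ID:-1}"   # fall back to 1 when run outside the queue
echo "running replicate $TASK_ID"
# Hand the id to ONE unmodified R script instead of 320 edited copies:
#   R --vanilla --args "$TASK_ID" < simulation.R
# and read it back inside simulation.R with:
#   x <- as.numeric(commandArgs(trailingOnly = TRUE)[1])
#   for (i in 1:x) { ... }
```

The point is that the loop bound (and any load()/setwd() path) becomes a
function of one environment variable, so a single .R file serves all 320
replicates.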
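If generating 320 separate files really is unavoidable, the templating
idea can be sketched without brew as well. The @N@ placeholder and file
names below are invented for this plain-shell stand-in (brew's own tag
syntax is the <%= i %> form mentioned above, expanded from R with
brew("analysis.brew", "analysis_1.R")); sed does the same substitution
here:

```shell
#!/bin/sh
# Sketch: stamp out near-identical .R files from one template.
# @N@ is an invented placeholder; sed substitutes a concrete number
# into a fresh copy of the template for each replicate.
printf 'for (i in 1:@N@) {\n  # simulation body\n}\n' > analysis.template
for n in 1 2 3; do                      # 1..320 in the real run
  sed "s/@N@/$n/g" analysis.template > "analysis_$n.R"
done
```

Each generated analysis_$n.R then carries its own for(i in 1:n) line, so
no file needs manual editing between runs.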