Hi All!
I have some experience with R, but less with writing R scripts, and I
have run into a challenge that I hope someone can help me with.
I have multiple .csv files, each with the same 3 columns but
potentially varying numbers of rows (some files are from short
measurements, others from longer ones). One file, for example, might
look like this:
Time,O2_conc,Chla_conc
0,270,300
10,260,280
20,245,268
30,233,238
40,222,212
50,215,201
60,208,193
70,206,191
80,207,189
90,206,186
100,206,183
110,207,178
120,205,174
130,240,171
140,270,155
I am looking for an efficient means of batch (or sequential)
processing of these files so that I can
1. import each data file,
2. find the minimum value recorded in column 2 along with the 5
preceding data points, and
3. average these 6 values to get a mean minimum value (see the sketch
below).
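For a single file, my calculation currently looks something like this
(a rough sketch; "example.csv" is a placeholder name, and I am
assuming the window of interest is the minimum plus the 5 rows before
it):

d <- read.csv("example.csv")
i <- which.min(d[[2]])            # row index of the minimum in column 2
win <- d[[2]][max(1, i - 5):i]    # the minimum plus up to 5 preceding values
mean(win)                         # the mean minimum value for this file

The max(1, ...) is just a guard in case the minimum falls within the
first 5 rows.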
Currently I import the data files like this:

library(plyr)
filenames <- list.files(pattern = "\\.csv$")    # only the .csv files
import.list <- adply(filenames, 1, read.csv)
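If it matters, my understanding is that adply() stacks every file
into one long data frame with an index column, so the file boundaries
get lost. A list with one data frame per file might be easier to work
with; here is a base-R sketch of what I mean (assuming all the .csv
files are in the working directory):

filenames <- list.files(pattern = "\\.csv$")
import.list <- lapply(filenames, read.csv)   # one data frame per list element
names(import.list) <- filenames              # remember which file is which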
I know how to write code that calculates the minimum value and the 5
preceding values in a single column of a single file. The problem I am
running into is scaling this code up so that I can import multiple
files and calculate the mean minimum value for the 2nd column in each
of them.
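My best guess at scaling this up is something like the sketch below,
but I am not sure it is the right approach (mean_min is a hypothetical
helper; I am again assuming column 2 is the one of interest and the
window is the minimum plus its 5 preceding values):

mean_min <- function(d) {    # mean of the minimum and its 5 preceding values
  i <- which.min(d[[2]])
  mean(d[[2]][max(1, i - 5):i])
}
filenames <- list.files(pattern = "\\.csv$")
results <- sapply(filenames, function(f) mean_min(read.csv(f)))
results                      # a named vector: one mean minimum per file

Is sapply() over the file names a reasonable way to do this, or is
there a better idiom?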
Can anyone offer some advice on how to batch process a whole set of
files? I need to load them in and then analyze each of them.
Thank you so much,
Nate