Not sure about getting the file names, but you are 'extending' the
data structure on each iteration, which is inefficient (R copies the
object every time it grows); try 'lapply' instead:
small.data <- do.call(rbind, lapply(mysites, function(.file) {
    ## try() keeps one bad file/URL from aborting the whole run
    try(read.table(.file, sep = ";", header = TRUE, as.is = TRUE,
                   fileEncoding = "windows-1252"))
}))
Hello all,
Is there any way to fetch each file from a list of web addresses and
aggregate them into a single data frame?
Otherwise I have to type 23 thousand web addresses into a long script like this:
base1 <- read.table("site 1", sep=";", header=T,
fileEncoding="windows-1252")
base2 <- read.table("site 2", sep=";", header=T,
fileEncoding="windows-1252")