h the table and build a new table up with the
info I want, or is there a smarter way to do it?
If there is a smarter way, what is it?
Thanks,
Bryan Rasmussen
__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Hi,
I have some XML files that have a processing instruction directly
after the XML declaration. When I do
kgroup.reading <- list()
for (file in file_list) {
  # collect each parsed doc; direct assignment overwrote it every pass
  kgroup.reading[[file]] <-
    xmlParseDoc(file.path("c:","projects","respositories","dk","004",file))
}
I get the error
file name :1: parser error :
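Separate from the parser error, note that the loop above keeps only the last file if each iteration assigns straight to `kgroup.reading`. A minimal base-R sketch of collecting one result per file; `parse_one` is a hypothetical stand-in for `xmlParseDoc()`, and the file names are invented:

```r
# Hypothetical file names; parse_one stands in for xmlParseDoc()
file_list <- c("week1.xml", "week2.xml", "week3.xml")
parse_one <- function(f) paste("parsed:", f)  # placeholder parser

# lapply collects one element per file instead of overwriting a variable
kgroup.reading <- lapply(file_list, parse_one)
names(kgroup.reading) <- file_list

length(kgroup.reading)          # one entry per file
kgroup.reading[["week2.xml"]]   # look up a single file's result by name
```

A named list also makes it easy to see which file produced which document (or which one failed) after the loop finishes.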
I suppose what is really wanted is a way to associate the parts of a
graph with a timeline a la gapminder.
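One base-R way to sketch the timeline idea is to split the data by time point and draw one frame per point; the toy data below is invented, and stitching the frames into an actual gapminder-style animation would need an extra package or external tool (an assumption; the post names none):

```r
# Invented toy data: two years, three (x, y) points each
dat <- data.frame(year = rep(c(2000, 2005), each = 3),
                  x = c(1, 2, 3, 2, 3, 4),
                  y = c(1, 4, 9, 2, 6, 8))
out <- tempdir()
for (yr in unique(dat$year)) {
  pdf(file.path(out, sprintf("frame_%d.pdf", yr)))  # one frame per year
  with(subset(dat, year == yr), plot(x, y, main = paste("Year", yr)))
  dev.off()
}
frames <- list.files(out, pattern = "^frame_.*\\.pdf$")
length(frames)  # one file per distinct year
```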
Cheers,
Bryan Rasmussen
On 8/8/07, Mike Lawrence <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> Just thought I'd share something I discovered last night. I was
>
obviously because otherwise why would the data
be maintained in 20 files of web site usage per week if the files
themselves were not very simple) I am looking for examples of going
over a bunch of Excel files with R, extracting the data, and then
aggregating it for analysis.
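The read-many-files-then-aggregate pattern can be sketched in base R. CSV stand-ins keep the example self-contained; real .xls files would need a reader such as readxl::read_excel(), which is an assumed package choice, since the post names none. The file names and columns below are invented:

```r
# Write three small stand-in files, one per week of invented usage data
dir <- tempdir()
for (wk in 1:3) {
  df <- data.frame(week = wk, page = c("home", "about"),
                   hits = c(10 * wk, 5 * wk))
  write.csv(df, file.path(dir, sprintf("usage_week%d.csv", wk)),
            row.names = FALSE)
}

# Read every file, stack into one data frame, then aggregate by page
files <- list.files(dir, pattern = "^usage_week.*\\.csv$", full.names = TRUE)
usage <- do.call(rbind, lapply(files, read.csv))
totals <- aggregate(hits ~ page, data = usage, FUN = sum)
```

Swapping `read.csv` for an Excel reader is the only change the real files would need; the `lapply`/`do.call(rbind, ...)` stacking and `aggregate` step stay the same.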
Cheers,
Bryan Rasmussen