I don't have R on the netbook I'm writing this from, but I *think* the
following will work:
> 1. How can I read in only the files with the string patterns ggg or fff as
> part of the file names?
> For instance, I only need the file names with ggg or fff in them
> x_ggg_y_1.txt
>
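A minimal sketch using list.files() and its pattern argument; the scratch directory and stand-in file names below are based on the question:

```r
# create a scratch directory with a few stand-in files (names from the question)
d <- file.path(tempdir(), "demo")
dir.create(d, showWarnings = FALSE)
file.create(file.path(d, c("x_ggg_y_1.txt", "x_fff_y_2.txt", "notes.txt")))

# keep only the file names containing "ggg" or "fff"
fnames <- list.files(path = d, pattern = "ggg|fff")
fnames
# [1] "x_fff_y_2.txt" "x_ggg_y_1.txt"

# then read just those in, e.g.:
# contents <- lapply(file.path(d, fnames), readLines)
```

The pattern argument is a regular expression, so "ggg|fff" means "ggg or fff anywhere in the name".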
Dear all,
I am aware of the URLdecode() function and was wondering if there
was something similar for HTML entities?
For example, I would like to convert strings like this:
> x <- "isn&apos;t"
into this:
> "isn't"
Many thanks for your time,
Tony Breyal
# O/S: Windows Vista 32 bit
# R version 2.11.0 (201
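For what it's worth, a minimal hand-rolled decoder for the five predefined entities can be written with gsub(); anything beyond these is better left to a real HTML parser such as the XML package:

```r
# decode the five predefined XML/HTML entities; "&amp;" must go last so that
# double-encoded input like "&amp;lt;" decodes to "&lt;" rather than "<"
html_unescape <- function(x) {
  x <- gsub("&lt;",   "<",  x, fixed = TRUE)
  x <- gsub("&gt;",   ">",  x, fixed = TRUE)
  x <- gsub("&quot;", "\"", x, fixed = TRUE)
  x <- gsub("&apos;", "'",  x, fixed = TRUE)
  gsub("&amp;", "&", x, fixed = TRUE)
}

html_unescape("isn&apos;t")
# [1] "isn't"
```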
sub(".*\\[ID: ([[:digit:]]+).*", "\\1 ", input)
> >> x.writer <- sub(".*\\[Writer:([^]]+).*", '\\1', input)
> >> x.rating <- sub(".*\\[Rating: ([0-9.]+).*", '\\1', input)
> >> x.prog <- sub(".*\\](.*)&
Dear all
Let's say I have a plain text file as follows:
> cat(c("[ID: 001 ] [Writer: Steven Moffat ] [Rating: 8.9 ] Doctor Who",
+ "[ID: 002 ] [Writer: Joss Whedon ] [Rating: 8.8 ] Buffy",
+ "[ID: 003 ] [Writer: J. Michael Straczynski ] [Rating: 7.4 ] Babylon [5]"),
+ sep = "\n",
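Piecing together the (truncated) sub() calls quoted in the reply above, a full version might look like this; the trimws() call (R >= 3.2.0) and the title regex are my guesses, and a title that itself contains "]" (like "Babylon [5]") would still need special handling:

```r
input <- c("[ID: 001 ] [Writer: Steven Moffat ] [Rating: 8.9 ] Doctor Who",
           "[ID: 002 ] [Writer: Joss Whedon ] [Rating: 8.8 ] Buffy")

x.id     <- sub(".*\\[ID: ([[:digit:]]+).*", "\\1", input)
x.writer <- trimws(sub(".*\\[Writer: ([^]]+)\\].*", "\\1", input))
x.rating <- sub(".*\\[Rating: ([0-9.]+).*",  "\\1", input)
x.prog   <- sub(".*\\] *(.*)$",              "\\1", input)  # text after the last "]"

data.frame(x.id, x.writer, x.rating, x.prog)
```

Each pattern lets a greedy ".*" eat the line, then backtracks to the bracketed field and keeps only the captured group.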
Thank you guys, both solutions work great! Seems I have two new
packages to investigate :)
Regards,
Tony Breyal
On 31 Mar, 14:20, Tony B wrote:
> Dear all,
>
> Lets say I have the following:
>
> > x <- c("Eve: Going to try something new today...", "Adam:
Dear all,
Let's say I have the following:
> x <- c("Eve: Going to try something new today...", "Adam: Hey @Eve, how are
> you finding R? #rstats", "Eve: @Adam, It's awesome, so much better at
> statistics that #Excel ever was! @Cain & @Able disagree though :(", "Adam:
> @Eve I'm sure they'll so
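The question is cut off here, but with strings like these a common first step is pulling out the @mentions and #hashtags. A base-R sketch using gregexpr()/regmatches() (R >= 2.14); the character class is my assumption about what counts as a name:

```r
x <- c("Eve: Going to try something new today...",
       "Adam: Hey @Eve, how are you finding R? #rstats")

# all @mentions and #hashtags, one character vector per input string
mentions <- regmatches(x, gregexpr("@[[:alnum:]_]+", x))
hashtags <- regmatches(x, gregexpr("#[[:alnum:]_]+", x))

mentions
# [[1]]
# character(0)
#
# [[2]]
# [1] "@Eve"
```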
I only really need the base packages, but otherwise I suppose the most
useful for me are:
(1) RCurl
(2) plyr
(3) XML
On 2 Mar, 20:13, Ralf B wrote:
> Hi R-fans,
>
> I would like put out a question to all R users on this list and hope
> it will create some feedback and discussion.
>
> 1) What are
Background: During my uni days, I was taught to use MAPLE, MATLAB,
SPSS, SAS, C++ and Java. Then after uni, several years went by without
me ever using any of them again, and I was told to just use Excel. Then I
started my PhD and was told I should start using R instead (something
I'd never even heard
2 3 1
Cheers!
Tony Breyal
On 20 Jan, 16:37, Henrique Dallazuanna wrote:
> Try with tapply:
>
> with(do.call(rbind, df.list), tapply(Score, list(Date, Time, Show), length))
>
>
>
> On Wed, Jan 20, 2010 at 10:20 AM, Tony B wrote:
> > Dear all,
>
> >
Dear all,
Let's say I have several data frames as follows:
> set.seed(42)
> dates <- as.Date(c("2010-01-19", "2010-01-20"))
> times <- c("09:30:00", "11:30:00", "13:30:00", "15:30:00")
> shows <- c("Red Dwarf", "Being Human", "Doctor Who")
>
> df1 <- data.frame(Date = dates[1], Time = times[1], Sh
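The definitions are cut off above, so the Show and Score columns below are stand-ins, but they are enough to show what Henrique's tapply() reply (quoted earlier) produces: a Date x Time x Show array of counts.

```r
dates <- as.Date(c("2010-01-19", "2010-01-20"))
times <- c("09:30:00", "11:30:00")
shows <- c("Red Dwarf", "Being Human")

# two toy data frames standing in for the truncated ones above
df1 <- data.frame(Date = dates[1], Time = times[1], Show = shows[1], Score = c(7, 8))
df2 <- data.frame(Date = dates[2], Time = times[2], Show = shows[2], Score = c(6, 9, 5))
df.list <- list(df1, df2)

# count the scores per Date/Time/Show combination
counts <- with(do.call(rbind, df.list),
               tapply(Score, list(Date, Time, Show), length))
counts["2010-01-19", "09:30:00", "Red Dwarf"]
# [1] 2
```

Combinations that never occur come back as NA rather than 0, which is worth knowing before summing over the array.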
Copied/pasted from my earlier reply:
It's been a while since I've
done this, but if memory serves, the basic process was to download
xpdf and add it to the Windows path, thus making it accessible from
within R. Two methods follow:
Method One (easiest) - using the awesome ?system command:
(1) Do
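The steps are cut off here, but the system() approach presumably reduces to something like the sketch below; the file paths are hypothetical, and it assumes xpdf's pdftotext is on the PATH:

```r
# hypothetical input/output paths; shQuote() protects paths containing spaces
pdf.file <- "C:/docs/report.pdf"
txt.file <- "C:/docs/report.txt"

# build the command line for pdftotext (part of xpdf)
cmd <- paste("pdftotext", shQuote(pdf.file), shQuote(txt.file))

# system(cmd)                 # run it (needs xpdf installed and on the PATH)
# txt <- readLines(txt.file)  # then pull the extracted text into R
```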
To be honest, I've never figured this out either. However, if you're
familiar with the gmail way of threading emails, just bookmark and use
google groups:
http://groups.google.co.uk/group/r-help-archive
It's the easiest way I've personally found, and has a very intuitive
interface.
HTH,
Tony Bre
It's been a long time since I read the tutorials, but I think the
reason you get those notifications is that the HTML code is
malformed, meaning that some of the opening tags don't have
corresponding end tags, etc.
The XML package seems rather good at working with malformed code, and
ther
Not sure if my code was attached in that last post:
library(RCurl)
library(XML)
html <- getURL("http://www.omegahat.org/RSXML/index.html")
html.tree <- htmlTreeParse(html, useInternalNodes = TRUE, error =
function(...){})
On 25 Nov, 16:21, Peng Yu wrote:
> On Wed, Nov 25, 2009 at 12:19 AM, cls
Cls59 is correct that there is a lot of example code; just look in
?htmlTreeParse and you'll get most of what you need, I think.
Here's some simplified code I use a lot (XPath expressions are used
to parse the code):
# libraries
library(RCurl)
library(XML)
# google url
my.url <- "http://www.g
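Since the listing is cut off, here is a self-contained version of the same idea: the inline HTML string stands in for a fetched page (a real run would getURL() the google URL first), and the h3/class="r" structure is an assumption about how the results were marked up:

```r
library(XML)

# stand-in for a downloaded results page
html <- '<html><body>
           <h3 class="r"><a href="http://www.omegahat.org/">Omegahat</a></h3>
           <h3 class="r"><a href="http://www.r-project.org/">R Project</a></h3>
         </body></html>'

doc <- htmlTreeParse(html, asText = TRUE, useInternalNodes = TRUE,
                     error = function(...) {})

# XPath: the href attribute of each result link
links <- unname(xpathSApply(doc, "//h3[@class='r']/a/@href"))
links
# [1] "http://www.omegahat.org/"  "http://www.r-project.org/"
```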
Hi Philip,
If I understood correctly, you just wish to get the URLs from a given
google search? I have some old code you could adapt which extracts the
main links from a google search. It makes use of XPath expressions
using the lovely XML and RCurl packages:
> library(XML)
> library(RCurl)
>
> g