> You can use "getNodeSet" function to extract whatever links or texts that
> you want from that page.
>
>
> I hope this helps.
>
> Best,
> Heramb
>
>
>
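For the getNodeSet() step Heramb mentions, a minimal sketch using the RCurl
and XML packages might look like the following; the URL and XPath here are
placeholders, not anything from the thread:

  library(RCurl)
  library(XML)

  ## Fetch the page as text and parse it into a document tree.
  txt <- getURL("http://www.example.com")        # placeholder URL
  doc <- htmlParse(txt, asText = TRUE)

  ## Grab all <a> nodes, then pull out their hrefs and link text.
  links <- getNodeSet(doc, "//a")
  hrefs <- sapply(links, xmlGetAttr, "href")
  texts <- sapply(links, xmlValue)

Any other XPath expression can be substituted for "//a" to pull out
whichever nodes are of interest.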
> On Wed, Sep 19, 2012 at 10:26 PM, CPV wrote:
>
>> Thanks again,
>>
>> I run the script [...]
>
> git clone g...@github.com:omegahat/RHTMLForms.git
>
> and that has the fixes for handling the degenerate forms with
> no arguments.
>
> D.
>
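One way to install the package from that clone, assuming it builds as a
standard R source package (the thread does not say how Duncan intended it
to be installed):

  ## Option 1: after the git clone, install the local source directory.
  ## (Run R in the directory that contains the cloned RHTMLForms/ folder.)
  install.packages("RHTMLForms", repos = NULL, type = "source")

  ## Option 2: fetch and install straight from GitHub via devtools.
  devtools::install_github("omegahat/RHTMLForms")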
> On 9/19/12 7:51 AM, CPV wrote:
> > Thank you for your help, Duncan,
> >
> > I have been trying what you suggested [...]
>
> forms = getHTMLFormDescription(u, FALSE)
> fun = createFunction(forms[[1]])
>
> Then we can use
>
> fun(.curl = curl)
>
> instead of
>
> postForm(site, disclaimer_action="I Agree")
>
> This helps to abstract the details of the form.
>
> D.
>
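Assembled into one piece, Duncan's suggestion amounts to roughly the
following; a sketch assuming RCurl and RHTMLForms are installed, with u
standing in for the disclaimer page's URL (the real URL never appears in
the thread):

  library(RCurl)
  library(RHTMLForms)

  u <- "http://www.example.com/disclaimer"   # placeholder URL

  ## A handle that keeps cookies across requests, so the session cookie
  ## set by the disclaimer form survives into later calls.
  curl <- getCurlHandle(cookiefile = "")

  ## Describe the form(s) on the page and turn the first one into an R
  ## function whose arguments mirror the form's fields.
  forms <- getHTMLFormDescription(u, FALSE)
  fun   <- createFunction(forms[[1]])

  ## Submit the form through the shared handle rather than spelling out
  ## postForm(site, disclaimer_action = "I Agree") by hand.
  fun(.curl = curl)

  ## Later requests on the same handle carry the session cookie along.
  page <- getURL("http://www.example.com/data", curl = curl)  # placeholder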
> On 9/18/12 5:57 PM, CPV wrote:
Hi, I am starting to code in R, and one of the things I want to do is
scrape some data from the web.
The problem I am having is that I cannot get past the disclaimer
page (which produces a session cookie). I have been able to collect some
ideas and combine them in the code below, but I [...]
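The code CPV refers to is cut off by the archive, but the approach being
assembled is presumably the plain-RCurl version of the same idea: POST the
disclaimer form on a cookie-keeping handle, then reuse that handle. A
sketch with placeholder URLs and a field name modeled on the postForm()
call quoted above:

  library(RCurl)

  ## Cookie engine on, so the session cookie from the disclaimer POST
  ## is sent automatically on the next request.
  curl <- getCurlHandle(cookiefile = "", followlocation = TRUE)

  ## Agree to the disclaimer (field name and value are placeholders).
  postForm("http://www.example.com/disclaimer",
           disclaimer_action = "I Agree", curl = curl)

  ## Fetch the page behind the disclaimer with the same handle; the
  ## result can then be parsed and mined with getNodeSet() as above.
  html <- getURL("http://www.example.com/data", curl = curl)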