On Mon, 2002-02-18 at 14:19, daniel wrote:
> i'm not sure if this is off topic or not
> but i think that's my problem...
> here's what i want to do
> 
> 1. grab a page off the web
> 2. process it with a perl script
> 
> that's it
> i thought something like
>   wget www.site.com/index.html | perlscript.pl
> would work
> but no
> instead it just downloaded the index.html file and exited
> 
> this has got to be easy
> i just have no idea how to do it
> suggestions?

Sure is an easy one. The answer is to do it all in Perl; that way your script
will be portable, even to messydos.


    use strict;
    use warnings;
    use HTTP::Request;
    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;    # instantiate a user agent
    # note: LWP needs an absolute URL, scheme included
    my $request = HTTP::Request->new(GET => "http://www.site.com/index.html");
    my $response = $ua->request($request);

    if( $response->is_success ) {
        # do something useful, or...
        print $response->content;
    } else {
        # if the request fails, print the error as an HTML page
        print $response->error_as_HTML;
    }


Easy eh?

See the HTTP::Request and LWP::UserAgent man pages for details.
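For the record, the wget pipe in the original question only "failed" because
wget saves to a file by default, so nothing ever reached the script's stdin.
If you'd rather keep the shell approach, telling wget to write to standard
output fixes it (a sketch, assuming GNU wget and using the example URL and
script name from the question):

```shell
# -O - writes the fetched page to stdout instead of index.html,
# and -q silences the progress chatter, so the page actually
# flows through the pipe into the Perl script
wget -qO- http://www.site.com/index.html | ./perlscript.pl
```

The all-Perl version above is still more portable, since it doesn't depend
on wget being installed.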

hth
charles



_______________________________________________
Redhat-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/redhat-list
