This is what I mostly use:
wget -r -np -nH --convert-links http://www.domain.edu/page

-r recursively retrieves subdirectories and the pages they
reference
-np (--no-parent) keeps wget from ascending above the starting
directory, so the retrieval stays within that part of the site
-nH (--no-host-directories) skips creating a directory named
after the host
--convert-links rewrites the links so the page works off the
files on your local system
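
Applied to the site you mentioned, something along these lines
ought to stay within that one directory tree (the trailing slash
and the extra -p flag are my additions, so treat it as a sketch
rather than gospel):

wget -r -np -nH --convert-links -p http://www.backupcentral.com/amanda/

-p (--page-requisites) also pulls in the images and stylesheets
each page needs, so the local copy renders properly offline.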

-jackp


--- stan <[EMAIL PROTECTED]> wrote:
> I want to make a copy of a certain web site to place on my internal
> webserver. I'm trying to figure out the correct options to use with
> wget to do this. Everything I do seems to download way too much stuff.
> 
> Specifically, I want to make a copy of
> http://www.backupcentral.com/amanda. I only want the forward links
> from this page that refer to the book itself.
> 
> I've tried things like --mirror and --convert-links, but I wind up
> having wget chase links all over the web. How can I restrict it to
> just follow links on this site itself?
> 
> Thanks.
> 
> -- 
> "They that would give up essential liberty for temporary
> safety deserve
> neither liberty nor safety."
>                                               -- Benjamin Franklin
> 

