On 09 Dec 2002 10:01:06 -0700, Joe Giles wrote:
> I have been tasked with grabbing various internal websites that have
> basic text data on them and grabbing and storing that data in another
> text file without opening up the web page itself. Make sense?

Well, I don't quite understand what you mean by "opening up the web page". If you don't have access to the web server machine's file system, the only way to reach its pages is to load them through the web server.

> www.myinternalwebsite.com has a location called
> deployment_packages.htm. I want to grab the data from that page (Which
> is plain text) and dump it into another text document not located on
> the same server. I would like to do this from a cron job if possible.
>
> Is there a way to do this with Linux?

Yes, and links is only one of several tools that can do it:

    links -dump http://example.com/foo.html > foo.txt
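To run it from cron, as you asked, a crontab entry is enough. A minimal sketch, assuming the URL from your mail and a hypothetical output path (adjust both to taste); add it with "crontab -e":

    # minute hour day-of-month month day-of-week  command
    # fetch the page at the top of every hour and overwrite the local text file
    0 * * * * links -dump http://www.myinternalwebsite.com/deployment_packages.htm > /some/local/dir/deployment_packages.txt

If you want the raw page instead of a rendered text version, "wget -q -O output.txt <url>" works the same way in a crontab; links -dump has the advantage of stripping any HTML markup down to plain text.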