On December 3, 2005 10:09 am Robert Persson was like:
> On December 3, 2005 05:40 am Martins Steinbergs was like:
> > if there aren't any files or folders under ~/websites then it isn't a
> > problem with httrack. if mirroring goes wrong, then there at least
> > should be a project folder containing an hts-cache folder, hts-log.txt
> > and index.html files. sorry, not much [...]
Robert Persson wrote:
> The trouble is that I have a bookmark file with several hundred entries.
> wget is supposed to be fairly good at extracting urls from text files,
> but it couldn't handle this particular file.

My previous message assumes that your bookmark file is in reality an HTML
file.
--
Robert Persson wrote:
> The trouble is that I have a bookmark file with several hundred entries.
> wget is supposed to be fairly good at extracting urls from text files,
> but it couldn't handle this particular file.

Try this:

emerge HTML-Tree

then, as a normal user, run this script like so (where [...]
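The script itself is cut off in the archive. A minimal sketch of the sort of
HTML::TreeBuilder script that message was probably driving at (the filename
and the http-only filter are my assumptions, not the original poster's code):

#!/usr/bin/perl
# extract-urls.pl -- print every http(s) href found in an HTML bookmark
# file, one URL per line. A sketch only; the original script was truncated.
use strict;
use warnings;
use HTML::TreeBuilder;

my $file = shift or die "usage: $0 bookmarks.html\n";
my $tree = HTML::TreeBuilder->new_from_file($file);

# Netscape bookmark files are HTML, so each bookmark is an <a href=...>.
for my $a ($tree->look_down(_tag => 'a')) {
    my $href = $a->attr('href');
    print "$href\n" if defined $href && $href =~ m{^https?://};
}

$tree->delete;

The output can then be saved and handed to wget, e.g.

perl extract-urls.pl bookmarks.html > urls.txt && wget -i urls.txt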
On 12/3/05, Robert Persson <[EMAIL PROTECTED]> wrote:
>
> The trouble is that I have a bookmark file with several hundred entries.
> wget is supposed to be fairly good at extracting urls from text files,
> but it couldn't handle this particular file.
>

I don't know what the exact format of your [...]
On Saturday 03 December 2005 09:04, Robert Persson wrote:
> I wasn't running it as root. The strange thing is that httrack did start
> creating a directory structure in ~/websites consisting of a couple of
> dozen directories or so (e.g.
> ~/websites/politics/www.fromthewilderness.com/free/ww3/), [...]
On December 2, 2005 06:40 am Martins Steinbergs was like:
> if httrack is running as root all stuff goes to /root/websites/, explored
> there?

I wasn't running it as root. The strange thing is that httrack did start
creating a directory structure in ~/websites consisting of a couple of dozen
directories or so (e.g.
~/websites/politics/www.fromthewilderness.com/free/ww3/), [...]
On December 2, 2005 07:42 am Billy Holmes was like:
> Robert Persson wrote:
> > I have been trying all afternoon to make local copies of web pages from
> > a netscape bookmark file. I have been wrestling with httrack (through
>
> wget -r http://$site/
>
> have you tried that, yet?

The trouble is that I have a bookmark file with several hundred entries. wget
is supposed to be fairly good at extracting urls from text files, but it
couldn't handle this particular file.
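For a several-hundred-entry list, wget can take its URLs from a file instead
of the command line; a hedged sketch (urls.txt being a stand-in name for a
one-URL-per-line list extracted from the bookmarks):

wget -r -l 1 -i urls.txt

-i reads the URL list from the file, and -l 1 keeps the recursion shallow so
each bookmarked page is fetched without crawling whole sites.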
Robert Persson wrote:
> I have been trying all afternoon to make local copies of web pages from a
> netscape bookmark file. I have been wrestling with httrack (through

wget -r http://$site/

have you tried that, yet?
On Friday 02 December 2005 15:42, Robert Persson wrote:
> Permissions are fine and there is quite a bit of space on the disk. httrack
> creates directories in ~/websites, but no other files, despite the fact
> that it claims to be downloading bucketloads of them.

[...]
On December 2, 2005 01:37 am Martins Steinbergs was like:
> if there really are no files and dirs created in ~/websites folder, try to
> check write permissions or is there any space left.

Permissions are fine and there is quite a bit of space on the disk. httrack
creates directories in ~/websites, but no other files, despite the fact that
it claims to be downloading bucketloads of them.
On December 2, 2005 01:05 am Neil Bothwick was like:
> wget will accept most files containing URLs, it doesn't have to be a
> straight list. Try feeding it your bookmark file as is.

Tried that. It borked. :-(
--
Robert Persson

"Don't use nuclear weapons to troubleshoot faults."
(US Air Force Instruction)
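Neil's advice corresponds to wget's -i/--input-file option. By default wget
expects that file to be a plain one-URL-per-line list, so when it borks on a
bookmark file the likely missing piece is --force-html, which tells wget to
parse the input as HTML and follow its hrefs. A sketch, with bookmarks.html
as a stand-in filename:

wget -p --force-html -i bookmarks.html

-p (--page-requisites) additionally pulls in the images and stylesheets each
bookmarked page needs to display locally.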
On Friday 02 December 2005 07:25, Shawn Singh wrote:
> I guess I'm not exactly sure what you're trying to do, but when I want to
> get a local copy of a website I do this:
>
> nohup wget -m http://www.someUrL.org &
>
> Shawn
>
> On 12/2/05, Robert Persson <[EMAIL PROTECTED]> wrote:
> > I have been [...]
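For reference, the -m (--mirror) switch used above is wget shorthand for
-r -N -l inf --no-remove-listing: recursive retrieval with no depth limit,
timestamp checking, and FTP listings kept. Handy for whole sites, though for
a pile of individual bookmarks a depth-limited -r is usually closer to what
is wanted.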
On Thu, 1 Dec 2005 17:41:36 -0800, Robert Persson wrote:
> One option would be to feed wget a list of urls. The trouble is I don't
> know how to turn an html bookmark file into a simple list of urls. I
> imagine I could do it in sed if I spent enough time to learn sed, but
> my afternoon has gone [...]
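Since the question was how to turn an HTML bookmark file into a simple list
of urls, a sed one-liner along these lines does it without learning much sed
(a sketch, assuming the double-quoted HREF="..." attributes that Netscape
writes one bookmark per line):

sed -n 's/.*HREF="\([^"]*\)".*/\1/p' bookmarks.html > urls.txt

The resulting urls.txt can then be fed to wget with -i urls.txt.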
I have been trying all afternoon to make local copies of web pages from a
netscape bookmark file. I have been wrestling with httrack (through
khttrack), pavuk and wget, but none of them work. httrack and pavuk seem to
claim they can do the job, but they can't, or at least not in any way an
ordinary [...]