Dean Scothern [[EMAIL PROTECTED]] writes:
> I need to rsync lots and lots of files and dirs: 1.8M files in
> 30000 dirs. I've seen in the FAQ that trying to do this in one lump
> will use up a lot of RAM. Is there any progress on the mod to fix
> this? I'm doing find tricks at the moment, but the approach is
> kludgy and unsatisfactory (--delete springs to mind). I'm not a C
> programmer, sadly (Perl is more my bag), otherwise I'd offer to
> help. I'll gladly beta test, though (5M files, 50000 dirs, about
> 140G incremental every night). I'm not using the latest rsync, just
> the most up-to-date Debian (potato) version. So how about it (very
> pretty please ;)
It's not a pure rsync solution, but one possibility is just to run rsync
more than once to cover the entire tree. Depending on how it's
structured, just going down one or two directory levels may give you
more workable subsets of the entire filesystem.
Since you're accustomed to Perl, it would probably be easy to write a
small Perl wrapper that scans the tree and breaks it into subsets, no?
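For example, a minimal sketch of the split-by-top-level-directory idea
(a Perl version would be analogous; the paths here are made up, and the
rsync commands are only echoed so you can review the split before
running it):

```shell
#!/bin/sh
# Instead of one huge rsync over the whole tree, run one rsync per
# first-level subdirectory, so each invocation builds a much smaller
# file list in memory.
SRC=$(mktemp -d)            # stand-in for the real source tree
DEST="remote:/backup/tree"  # stand-in for the real destination

mkdir -p "$SRC/home" "$SRC/var" "$SRC/srv"   # demo subtrees

for d in "$SRC"/*/; do
    sub=$(basename "$d")
    # Echo rather than execute, so the split is easy to inspect;
    # drop the echo to actually transfer.  Note --delete still works,
    # since each subtree is synced as a complete unit.
    echo rsync -a --delete "$SRC/$sub/" "$DEST/$sub/"
done
```

One caveat with any split like this: files in the top-level directory
itself, and whole subtrees that are renamed, need separate handling.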
-- David
/-----------------------------------------------------------------------\
\ David Bolen \ E-mail: [EMAIL PROTECTED] /
| FitLinxx, Inc. \ Phone: (203) 708-5192 |
/ 860 Canal Street, Stamford, CT 06902 \ Fax: (203) 316-5150 \
\-----------------------------------------------------------------------/