On 10/02/08 04:28, James Youngman wrote:
> On Wed, Oct 1, 2008 at 12:15 PM, Mag Gam <[EMAIL PROTECTED]> wrote:
>> I was wondering if it's possible to run updatedb on a very large
>> filesystem (6 TB). Has anyone done this before? I plan on running this
>> on a weekly basis, but I was wondering if updatedb was faster than a
>> simple 'find'. Are there any optimizations in 'updatedb'?
>
> With findutils you can update several parts of the directory tree in
> parallel, or update various parts on a different time schedule.
>
> Here's an example with three directory trees searched in parallel, plus
> one searched remotely on another server, with the results then combined
> with a canned list of files from a part of the filesystem that never
> changes.
>
> find /usr -print0 > /var/tmp/usr.files0 &
> find /var -print0 > /var/tmp/var.files0 &
> find /home -print0 > /var/tmp/home.files0 &
> ssh nfs-server 'find /srv -print0' > /var/tmp/srv.files0 &
> wait
Since find is so disk-intensive, isn't this only of benefit if
/usr, /var and /home are on different devices?
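One quick way to check is to compare the device numbers of the trees; this is a minimal sketch assuming GNU coreutils stat (the paths are just examples). Trees that print the same device ID live on the same disk, so running their finds in parallel mostly adds seek contention rather than throughput.

```shell
# Print "device-id  path" for each tree (GNU stat; `-c %d` is the
# decimal device number of the filesystem holding the file).
stat -c '%d %n' /usr /var /home
```
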
> sort -f -z /var/tmp/archived-stuff.files.0 /var/tmp/usr.files0 \
>     /var/tmp/var.files0 /var/tmp/home.files0 /var/tmp/srv.files0 |
>     /usr/lib/locate/frcode -0 > /var/tmp/locatedb.new
> rm -f /var/tmp/usr.files0 /var/tmp/var.files0 /var/tmp/home.files0 \
>     /var/tmp/srv.files0
> cp /var/cache/locate/locatedb /var/cache/locate/locatedb.old
> mv /var/tmp/locatedb.new /var/cache/locate/locatedb
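One caveat about that last mv: a rename is only atomic when the source and destination are on the same filesystem, and /var/tmp is often a separate mount from /var/cache/locate, in which case mv degrades to a copy and readers can briefly see a partial database. A minimal sketch of a safer swap follows; it uses a scratch directory standing in for /var/cache/locate so it is self-contained, and the file contents are placeholders for the real frcode output.

```shell
set -e
# Scratch directory standing in for /var/cache/locate (illustrative).
cache_dir=$(mktemp -d)
printf 'old-db' > "$cache_dir/locatedb"

# Build the new database in a temp file on the SAME filesystem as the
# final location, so the closing mv is a single atomic rename and
# locate never reads a half-written database.
tmp_db=$(mktemp "$cache_dir/locatedb.XXXXXX")
printf 'new-db' > "$tmp_db"          # stand-in for the frcode output

cp -p "$cache_dir/locatedb" "$cache_dir/locatedb.old"
mv "$tmp_db" "$cache_dir/locatedb"   # atomic within one filesystem
```
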
--
Ron Johnson, Jr.
Jefferson LA USA
"Do not bite at the bait of pleasure till you know there is no
hook beneath it." -- Thomas Jefferson