Hello all,
I have a small Beowulf cluster of Scientific Linux 4.4 machines with
common NIS logins and NFS-shared home directories. In the short
term, I'd rather not buy a tape drive for backups; instead, I've got
a jury-rigged backup scheme. The node that serves the home
directories via NFS runs a nightly tar job (through cron):
root@server> tar cf home_backup.tar ./home
root@server> mv home_backup.tar /data/backups/
where /data/backups is a folder that's shared (via NFS) across the
cluster. The actual backup then occurs when the other machines in
the cluster (via cron) copy home_backup.tar to a private (root-access-
only) local directory:
root@client> cp /mnt/server-data/backups/home_backup.tar /private_data/
where "/mnt/server-data/backups/" is where the server's "/data/
backups/" is mounted, and where /private_data/ is a folder on
client's local disk.
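
For concreteness, the two crontab entries look roughly like this
(the times are illustrative, the client job is staggered so the tar
has time to finish, and I run the tar from / since the archive path
is ./home):

# server's /etc/crontab: nightly tar of the home directories
30 2 * * * root cd / && tar cf home_backup.tar ./home && mv home_backup.tar /data/backups/

# each client's /etc/crontab: pull the tarball to a private local folder
30 4 * * * root cp /mnt/server-data/backups/home_backup.tar /private_data/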
Here's the problem I'm seeing with this scheme. Users on my cluster
have quite a bit of stuff stored in their home directories, so
home_backup.tar is large (~4.2GB). When I try the cp command on a
client, only 142MB of the 4.2GB is copied over. This is repeatable:
it's not a random error, and it always stops at about 142MB. The cp
command doesn't fail; rather, it quits quietly. Why would only part
of the file be copied over? Is there a limit on the size of files
that can be transferred via NFS? There's certainly sufficient space
on disk for the backups (both the client's and the server's disks
are 300GB SATA drives, formatted ext3).
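
For anyone who wants to poke at this, here's the kind of sanity
check I can run (same paths as above; the echo just captures cp's
exit status):

root@server> ls -l /data/backups/home_backup.tar              # true size on the server (~4.2GB)
root@client> ls -l /mnt/server-data/backups/home_backup.tar   # size as seen over NFS
root@client> cp /mnt/server-data/backups/home_backup.tar /private_data/; echo "cp exit: $?"
root@client> ls -l /private_data/home_backup.tar              # comes out at ~142MB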
I'm using the standard NFS that ships with SL 4.4; the config is
basically default.
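
One thing I'd like to rule out: NFSv2 can't address files past 2GB
(32-bit offsets), so the protocol version the client mount actually
negotiated matters. It shows up in the mount options:

root@client> grep server-data /proc/mounts   # look for v2/vers=2 vs v3/vers=3 in the options
root@client> nfsstat -m                      # per-mount NFS options on the client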
regards,
Nathan Moore
- - - - - - - - - - - - - - - - - - - - - - -
Nathan Moore
Physics, Pasteur 152
Winona State University
[EMAIL PROTECTED]
AIM: nmoorewsu
- - - - - - - - - - - - - - - - - - - - - - -
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf