> From: rdiff-backup-users-bounces+rdiff-[email protected]
> [mailto:rdiff-backup-users-[email protected]] On Behalf Of Alvin Starr
>
> Clearly there are hundreds of better ways to back up a sparse file.
>
> The point is that I sort of expected rdiff-backup to be as smart as tar
> and rsync in that perspective.
I certainly haven't had any good experiences backing up (or even copying) sparse files with tar. Yes, I've done it, but it's not supported by default (you have to add the --sparse switch), and even with that switch I wouldn't call it a good experience.

No matter how you cut it, you have to read the entire sparse file (including the empty space); the question is whether sparseness is preserved on the destination. Unfortunately there is no flag or attribute you can check on a file to see whether it's sparse; your only choice is to read every file and optionally apply sparseness on the destination (see the sketch below). And since you have no good way to know whether the source is sparse, you just unconditionally make every file on the destination sparse.

For large sparse files, as suggested, it's much better to back up with a tool that recognizes the internal contents of the file: something that can read the structure and copy out only the useful parts.

Not to mention that if it's a database file, you also have to ensure data integrity. You don't want to be reading byte 178,343,543,344 with 877,344,563,233 still to go when some other process writes to the file, invalidating all the work you've done so far.

Or use compression. Cuz guess what, a long sequence of 0's is highly compressible. ;-)
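In case it helps to see it concretely, here's a minimal Python sketch of the write-side trick behind tar's and rsync's --sparse options: read everything from the source (holes read back as zeros anyway), but seek over zero-filled blocks on the destination instead of writing them. The file names and block size below are made up for the example, and whether a hole actually materializes depends on the destination file system.

    #!/usr/bin/env python3
    """Sketch: copy a file, turning zero-filled blocks into holes on the
    destination where the file system supports it."""

    import os

    BLOCK_SIZE = 65536  # assumption: any reasonable power-of-two chunk works

    def copy_sparse(src_path, dst_path):
        zero_block = b"\0" * BLOCK_SIZE
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while True:
                block = src.read(BLOCK_SIZE)
                if not block:
                    break
                if block == zero_block[:len(block)]:
                    # Don't write the zeros; seek past them so the
                    # destination can leave a hole instead.
                    dst.seek(len(block), os.SEEK_CUR)
                else:
                    dst.write(block)
            # If the file ends in a hole, the trailing seek alone leaves
            # the file short; truncate() to the current position fixes
            # the length.
            dst.truncate()

    copy_sparse("big_sparse.img", "copy.img")

Note that the source side still reads every block, zeros included, which is exactly the cost being complained about above; only the destination gets to skip work.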
