On Sat, Apr 11, 2009 at 01:47:08PM +0100, James Youngman wrote:
> Here's my current backup arrangement:
>
> Data is stored in filesystems on LVM volumes over RAID1. While RAID1
> presents some protection from disk failure, it gives no protection
> against data corruption due to flaky hardware or data loss caused by
> fire or theft.
>
> Therefore I have an offsite backup arrangement. This consists of two
> rsync backups. One backup goes to a local disk (different disk
> manufacturer, different disk controller) and the other rsync backup is
> to a disk at work. This works a bit, but the outgoing bandwidth on my
> cable connection is low (about 0.3 Mbps). If I make a large change to
> the machine (e.g. dist-upgrade), I physically swap the home and work
> backup disks (this is the main reason for keeping the local backup
> too). This at least allows me to place an upper limit on the amount
> of data I would lose in the case of (e.g.) a fire.
>
> However, there are two respects in which I think some improvement
> would be useful:
>
> (1) Quite a lot of the files on my system are files I never expect to
> change again. I plan to write a few scripts which will tell me if a
> file that hadn't been modified in, say, two years was in fact recently
> modified. This could give me early warning that the disk controller
> has gone berserk (again).
>
> (2) It would be useful to have a historic backup capability too (e.g.
> the way the filesystem looked yesterday, last week, last month and a
> year ago), at least for filesystems like /home.
>
> What are good solutions for doing (2)? (Please only recommend
> software you're using yourself :)
>
> Thanks,
> James.
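For point (1), the core idea can be sketched with standard tools. This is only an illustration, not James's actual scripts: the function names, the baseline file, and the 730-day cutoff are all assumptions, and md5sum is used just as a cheap change detector.

```shell
#!/bin/sh
# Sketch of (1): checksum files that have been stable for ~2 years,
# then re-verify them later so a silently modified "stable" file is
# caught early. Function names and defaults are illustrative.

# Record checksums of files whose mtime is older than ~2 years (730 days).
stable_init() {   # stable_init DIR BASELINE
    find "$1" -type f -mtime +730 -exec md5sum {} + > "$2"
}

# Re-verify the baseline; any mismatch is an early warning that the
# disk or controller may be corrupting data at rest.
stable_check() {  # stable_check BASELINE
    md5sum --quiet -c "$1"
}
```

Note that a flaky controller could in principle also restore a bogus mtime, so a periodic checksum comparison like this is more trustworthy than looking at timestamps alone.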
I use duplicity <http://duplicity.nongnu.org/>

---
Henri Salo
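For (2), a minimal duplicity invocation might look like the following. The host, paths, and schedule are placeholders, and this assumes a working GnuPG setup and an SSH-reachable target; check the duplicity manpage before relying on it.

```shell
# Incremental encrypted backup of /home over SSH; a full backup is
# forced whenever the last full one is older than a month, so restore
# chains stay short.
duplicity --full-if-older-than 1M /home \
    sftp://user@backuphost//srv/backups/home

# Restore /home as it looked seven days ago into /tmp/restore.
duplicity restore --time 7D \
    sftp://user@backuphost//srv/backups/home /tmp/restore
```

Because duplicity stores full plus incremental archives, it gives exactly the "yesterday / last week / last month" views asked for in (2), and the archives are encrypted, which matters for a disk kept offsite at work.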