On Sun, 14 Aug 2022 19:03:25 -0400, Rich Freeman wrote:
> On Sun, Aug 14, 2022 at 6:44 PM Dale <rdalek1...@gmail.com> wrote:
> >
> > Right now, I'm using rsync which doesn't compress files but does
> > just update things that have changed. I'd like to find some way,
> > software but maybe there is already a tool I'm unaware of, to
> > compress data and work a lot like rsync otherwise.
>
> So, how important is it that it work exactly like rsync?
>
> I use duplicity, in part because I've been using it forever. Restic
> seems to be a similar program most are using these days which I
> haven't looked at super-closely but I'd look at that first if
> starting out.
>
> Duplicity uses librsync, so it backs up exactly the same data as
> rsync would, except instead of replicating entire files, it creates
> streams of data more like something like tar. So if you back up a
> million small files you might get out 1-3 big files. It can compress
> and encrypt the data as you wish. The downside is that you don't end
> up with something that looks like your original files - you have to
> run the restore process to extract them all back out. It is
> extremely space-efficient though - if 1 byte changes in the middle
> of a 10GB file you'll end up just backing up maybe a kilobyte or so
> (whatever the block size is), which is just like rsync.
>
> Typically you rely on metadata to find files that change, which is
> fast, but I'm guessing you can tell these programs to do a deep scan
> which of course requires reading the entire contents, and that will
> discover anything that was modified without changing ctime/mtime.
>
> The output files can be split to any size, and the index info (the
> metadata) is separate from the raw data. If you're storing to
> offline/remote/cloud/whatever storage, typically you keep the
> metadata cached locally to speed retrieval and to figure out what
> files have changed for incrementals. However, if the local cache
> isn't there, then it will fetch just the indexes from wherever it is
> stored (they're small).
>
> It has support for many cloud services - I store mine to AWS S3.
>
> There are also some options that are a little closer to rsync, like
> rsnapshot and burp. Those don't store compressed (unless there is an
> option for that or something), but they do let you rotate through
> multiple backups and they'll set up hard links/etc so that they are
> de-duplicated. Of course hard links are at the file level, so if 1
> byte inside a file changes you'll end up with two full copies. It
> will still only transfer a single block, so the bandwidth
> requirements are similar to rsync.
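For anyone who hasn't used it, the duplicity workflow Rich describes
looks roughly like this. This is only a sketch: the source path,
target URL, GPG key ID, and retention window are placeholders, not
anything from his actual setup.

    # First run: a full backup, encrypted to a GPG key and split into
    # 200 MB volumes (all names here are made up).
    duplicity full --encrypt-key 0xDEADBEEF --volsize 200 \
        /home/dale file:///mnt/backup/dale

    # Later runs: incrementals that store only librsync block deltas,
    # not whole copies of changed files.
    duplicity incremental --encrypt-key 0xDEADBEEF \
        /home/dale file:///mnt/backup/dale

    # The archive volumes don't look like your original files, so
    # getting data back is a separate restore step.
    duplicity restore file:///mnt/backup/dale /tmp/restored

    # Age out backup chains older than six months instead of keeping
    # them forever.
    duplicity remove-older-than 6M --force file:///mnt/backup/dale

The file:// target can be swapped for one of duplicity's remote
backends (S3 among them) to store off-site the way Rich does.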
I have been using restic for a while, and although it does not do
compression, there are a couple of nice things it does: if a file
exists in more than one location, or if you rename a file, it's smart
enough not to back up any data at all, just the metadata. Also, you
never have to delete the whole backup and start over like you do with
duplicity; you can just delete snapshots older than a certain number
of days and you are good to go (a rough sketch of that is below). It's
written in Go, so building can be a pain, and I don't like programs
that download gobs of stuff from the internet to build, but it seems
to work quite well.
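To make that concrete, a typical restic session might look something
like the following; the repository path, source directory, and 30-day
window are only examples:

    # One-time repository setup (placeholder path).
    restic -r /mnt/backup/restic-repo init

    # Back up a directory; data is deduplicated by content, so a
    # renamed or duplicated file adds only metadata, no new data.
    restic -r /mnt/backup/restic-repo backup /home/covici

    # Drop snapshots older than 30 days and reclaim their space; no
    # need to start a fresh repository.
    restic -r /mnt/backup/restic-repo forget --keep-within 30d --prune

-- 
Your life is like a penny. You're going to lose it. The question is:
How do you spend it?

John Covici wb2una
cov...@ccs.covici.com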