Tanner Danzey <arkan...@gmail.com> 2011-11-01 18:50:
Generally, using git is a bad idea for backups (from what I've read).

git stores binary data inefficiently. If you are backing up things like
configuration files or web pages that change a lot, sure, but for
storing binary files with git I'd recommend against it: binaries vary
greatly from version to version (unlike text files), git's delta
compression does little for them, and you'd just accumulate tons of
useless blobs in the history. Programs like duplicity and rsync are
great for backups, though.
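
To make that concrete, here is a minimal sketch of the rsync approach;
the host and destination paths are made-up examples, not something from
this thread:

  # mirror the root filesystem to a backup host, skipping volatile
  # pseudo-filesystems; -aHAX preserves hard links, ACLs, and xattrs
  rsync -aHAX --delete \
      --exclude=/proc/ --exclude=/sys/ --exclude=/dev/ \
      --exclude=/run/ --exclude=/tmp/ \
      / backup-host:/backups/myserver/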

Agreed. There are lots of other spin-offs, each with their own pros and cons: rsnapshot, rdiff-backup, etc. I personally use some homegrown Perl, rsync, and ZFS snapshots (transparent compression, dedup, each snapshot looks like a full backup, etc.). I'm sure you could use something like btrfs in that scheme as well.
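
As an illustration of the ZFS side of that scheme, assuming a
placeholder pool/dataset named tank/backups:

  # transparent compression and dedup on the backup dataset
  zfs set compression=on tank/backups
  zfs set dedup=on tank/backups
  # take a cheap snapshot after each rsync run; each snapshot
  # presents the full tree as of that moment
  zfs snapshot tank/backups@$(date +%Y-%m-%d)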

However, using git, hg, svn, or whatever for storing your config-file repositories for something like cfengine or puppet is a good idea. That's a different issue from backups, though.
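
A minimal sketch of that idea, versioning just /etc rather than the
whole filesystem (tools like etckeeper automate roughly this):

  # track configuration only, not the whole system
  cd /etc
  git init
  git add .
  git commit -m 'baseline configuration'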

All in all, the drawbacks outweigh the benefits of using a code
management tool to back up entire systems.

On Tue, 2011-11-01 at 23:16 +0200, Andrey Utkin wrote:
Hi all! Long live the gentoo masters!
I'd like to hear from anybody who uses (or has tried) git on production
servers for saving possible restore points. Please share your
practices: commit patterns, .gitignore contents, etc. I started using
it for that a couple of days ago and have run into some issues.
I keep the whole root fs under git control.
The problematic part is a bunch of files that update frequently; I am
not familiar with them and I'm not sure the system will load without
them. Namely, these are files in /usr/lib64/portage/pym/
The wtmp and utmp files hurt too: the box likely won't boot without
them, but they shouldn't be under git control either, since they update
often. So does restoring from backup require not just the git repo but
also some tar of a base system?
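
A minimal .gitignore sketch for the volatile files named above; the
portage path comes from the question, the wtmp/utmp locations are the
conventional ones, and this is an illustrative starting point rather
than a vetted list:

  # login records that update constantly
  /var/log/wtmp
  /var/run/utmp
  # compiled python files, e.g. under /usr/lib64/portage/pym/
  *.pyc
  # pseudo-filesystems that should never be tracked
  /proc/
  /sys/
  /dev/
  /tmp/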
