Package: backupninja
Version: 0.9.6-4
Severity: normal

I've discussed this on IRC with micah, and he agrees there's a use case here.

I'm trying to build a setup which performs some non-trivial data preparation
before the actual .rdiff run. This includes snapshotting virtual servers and
filesystems, and making individual rdiff-backups of the host and of the
virtual servers (OpenVZ, VServer and KVM).
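
To give an idea, the preparation step looks roughly like the sketch
below: a plain .sh action that backupninja runs before the .rdiff one
because of its lower number. The volume names, the mountpoint and the
use of LVM snapshots are just examples from my setup, not anything
backupninja ships:

  # /etc/backup.d/10.sh -- example preparation action, runs before 90.rdiff
  # Snapshot the volume holding the guests and mount it read-only where
  # the later rdiff action expects to find the data.
  lvcreate --snapshot --size 2G --name backupsnap /dev/vg0/guests
  mkdir -p /mnt/backupsnap
  mount -o ro /dev/vg0/backupsnap /mnt/backupsnap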

However, using backupninja in its current state is dangerous, because I
can easily end up with a mangled rdiff-backup increment history:

16:16 < jordi> say your scripts to create a snapshot and so on fail; 
               backupninja would continue processing scripts and if you backup 
               the snapshot mountpoint, you'd end up with 0 bytes of data
16:46 < jordi> micah: I get a FAILED, but my rdiff-backup is fucked up by then 
               due to the last backup having erased everything
16:46 < micah> jordi: in most cases you want the subsequent actions to fire, 
               even if it means you send to your backup server 0 bytes (for 
               example maybe you also do a sql dump)
16:46 < micah> jordi: your rdiff-backup should never erase everything
16:47 < jordi> er, right, I mean it will just backup 0 bytes
16:47 < jordi> which is pretty bad for my backup history
16:47 < micah> it doesn't make your deltas big or anything
16:48 < micah> it means that you transport off site 0 bytes, which makes sense 
               because your previous actions failed to produce data that should 
               be sent off site
16:48 < jordi> if next day it doesn't fail, doesn't that mean I'll have a huge 
               increment after that?
16:48 < micah> rdiff-backup does fine in this case, and you get a FAILED 
               indication so you know that you have something to fix
16:49 < jordi> ie, day-before-failure had a big base, day-of-failure shrinks to 
               0, day-after wouldn't generate a huge delta between the fixed 
               backup and the one before?
16:49 < micah> i dont know about your source data, but if one day you have 
               10,000 bytes and then the next you have 1, then probably 
               rdiff-backup thinks you deleted everything
16:50 < jordi> right, what about if I re-add 10,000 bytes the next run?
16:50 < micah> i see what you mean
16:50 < micah> so you want rdiff-backup not to fire unless your source location 
               is populated with data
16:50 < jordi> yes
[...]
16:59 < micah> jordi: yeah, i could see that as being useful. or a site-wide 
               config that says "FAIL_ENTIRE_BACKUP_IF_ANY_ACTION_FAILS=no"
17:00 < jordi> micah: yeah, the wide config would be useful, I'd probably use 
               it right now
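
Just to make that concrete, here is how I imagine such a switch in
/etc/backupninja.conf -- purely a mock-up of the idea, nothing like it
exists in 0.9.6-4:

  # Proposed option (does not exist yet): when set to "yes", stop the
  # whole run as soon as any action fails, so later actions (like the
  # rdiff one) never fire against a half-prepared source.
  fail_entire_backup_if_any_action_fails = yes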

So, you could obviously do some hackery and only generate your .rdiff file
when checks confirm your data directory is sane, but this diverges from
what backupninja should be doing, i.e. making things easy.
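
For the record, the kind of hackery I mean would be something along
these lines (here sketched as a guarded .sh action calling rdiff-backup
directly, rather than generating the .rdiff config; host names and
paths are made up):

  # /etc/backup.d/90.sh -- example replacement for a 90.rdiff action
  SRC=/mnt/backupsnap
  DEST=backup@backuphost::/var/backups/myhost

  # Refuse to push an increment if the preparation step left the
  # snapshot mountpoint empty.
  if [ -z "$(ls -A "$SRC" 2>/dev/null)" ]; then
    echo "$SRC is empty, refusing to run rdiff-backup" >&2
    exit 1
  fi

  rdiff-backup "$SRC" "$DEST"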

Thanks,
Jordi

-- System Information:
Debian Release: squeeze/sid
  APT prefers unstable
  APT policy: (990, 'unstable'), (500, 'stable'), (1, 'experimental')
Architecture: i386 (i686)

Kernel: Linux 2.6.29-2-686 (SMP w/1 CPU core)
Locale: LANG=ca_ES.UTF-8@valencia, LC_CTYPE=ca_ES.UTF-8@valencia (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash


