Hi,

Christoph Anton Mitterer wrote:
> I'm looking for a backup solution with quite some specific needs,...

My scdbackup with xorriso as backend might nearly do what
you want:
  http://scdbackup.sourceforge.net/main_eng.html

It initially demands some configuration effort
  http://scdbackup.sourceforge.net/examples.html
  http://scdbackup.sourceforge.net/README

xorriso is available as a Debian package, or as a source tarball
  https://www.gnu.org/software/xorriso/xorriso-1.4.0.tar.gz
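E.g. on Debian (the package is simply named xorriso):

  # Install the package and check which version you got:
  apt-get install xorriso
  xorriso -version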


> - hard links must be retained
> - it must be possible to back up to split media (e.g. multiple CDs)

This is not easy to achieve.
Consider one hardlink sibling in the filesystem of one medium
and the other in the filesystem of the next one: the link
relation between them gets lost.
(A sketch for spotting such hardlink groups in advance follows
below.)
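As an illustration (my own sketch, not a feature of scdbackup;
the path is made up), GNU find can list multiply linked files
together with their inode numbers, so that siblings can be kept
on the same medium:

  # Equal inode numbers in the first column mark hardlink
  # siblings which should stay together on one medium.
  find /home/data -type f -links +1 -printf '%i %p\n' | sort -n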


> - file times, owners (ideally as IDs and names), permissions, XATTRS,
>  ACLs must all be retained

scdbackup records them as scripts of setfattr and setfacl
commands, which it stores as data files inside the backup.
xorriso records them too, but restoring them needs xorriso,
because Linux does not interpret AAIP. (Not astounding, since
AAIP is a libisofs extension to ISO 9660.)
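For illustration, a minimal sketch of pure xorriso usage,
assuming its -for_backup shortcut (which switches on recording
of ACLs, xattrs, MD5 sums, and hardlinks; paths are made up):

  # Pack a directory tree into an ISO image with full
  # attribute recording:
  xorriso -for_backup -outdev backup.iso -map /home/data /data

  # Restore it later, again by xorriso, so that the AAIP data
  # (ACLs, xattrs) gets applied to the extracted files:
  xorriso -for_backup -indev backup.iso -osirrox on \
          -extract /data /home/restored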

ISO 9660 has the advantage of being readable nearly everywhere,
as long as you restrict yourself to single-session images and
data files smaller than 4 GiB. (BSD and Solaris ended development of
their ISO 9660 readers long ago. Meanwhile one can smell their
fermentation state.)


> - I want always *full* files to be backed up
>   - a single file shouldn't be split over multiple backup media
>     (unless this isn't possible otherwise, because all targets are
>     smaller than the file size)

Putting together the pieces of large files is of course a pain
at restore time.
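If a single file is larger than any target medium, plain
split(1) and cat(1) can do the cutting and gluing. A minimal
sketch (sizes and names are made up):

  # Cut an oversized archive into pieces which fit on the media,
  # and record a checksum for verifying the reassembled result:
  split -b 4000m big.tar big.tar.part.
  md5sum big.tar > big.tar.md5

  # At restore time, concatenate the pieces and verify:
  cat big.tar.part.* > big.tar
  md5sum -c big.tar.md5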


> - ideally the program would offer two modes:
>   - either trying to keep "neighbouring" files (i.e. those that are
>     close to each other in the directory hierarchy) closely on the 
>     split target mediums

You mean those with neighboring names, I assume.
(More or less the same as alphabetic ordering by name.)


>  - or trying to be as space efficient as possible (i.e. place files so
>    that space is used most efficiently)

After a few years of cramming files as tightly as possible,
I switched to alphabetic ordering.


> - catalogues should be made, of both, all files and the files on a
>   certain medium, also as a help in the disaster case

Not to forget checksums.
In the most dangerous storage environments one should consider
making several identical copies of the media and having a means
to recognize good blocks.
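A minimal sketch of such a checksum catalogue (paths are made
up):

  # Record an MD5 for every file which goes onto the medium ...
  find /home/data -type f -exec md5sum {} + > catalogue.md5

  # ... and later verify a copy against the catalogue:
  md5sum -c catalogue.md5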


> - as I've said, incremental backups should be possible,... but that
>  should also work when I move files around

So timestamps alone are not enough.
scdbackup has two ways of doing incremental backups:
- If inode numbers are persistent and (device numbers are
  persistent or mount points always lead to the same
  filesystems), then the decision can be made from the outcome
  of stat(2) (see the sketch after this list).
- Much slower is deciding by MD5 checksums of the file content.
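Here is my own sketch of the stat(2) way (not what scdbackup
actually does; the state file names are made up):

  # Record "device:inode mtime" for each file. A file which was
  # merely moved keeps its line, so it is not backed up again.
  find /home/data -type f -printf '%D:%i %T@\n' | sort > state.new

  # Lines present only in the new state mark new or changed
  # content (mapping them back to paths needs a second find run):
  comm -13 state.old state.new
  mv state.new state.old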


> Well I guess it wouldn't be too difficult to script most of this,

scdbackup needed about seven years of trial-and-error.
It can be done faster if you have a clear plan.
Beware of being sucked into neighboring topics like DVD burning
or backup-grade ISO 9660 production.


Have a nice day :)

Thomas
