I'm running backups on 3 Linux systems; one of them is a Cobalt
Qube. All the backups are done using GNU tar. It works OK, but the
estimate phase of the backups is painfully slow. I think I'll turn off
the estimates and just run full dumps every day. The Qube is the slow
system; the problem seems to be in the Linux filesystem code that reads
directory entries. The machine is a news server and has a couple of
directories containing a very large number of files.
My question is this: why run a separate estimate at all? Why not just
monitor the last couple of backups and extrapolate?
i.e.
Day 1 incremental /home 100mb
Day 2 incremental /home 110mb
Day 3 incremental /home 120mb
Day 4 incremental /home ???mb
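Something along these lines is what I have in mind -- a minimal sketch
(my own, not anything the backup software actually does) that fits a
straight line through the recent sizes and projects the next one:

```python
def extrapolate_next(sizes):
    """Project the next backup size from a list of recent sizes (MB)
    using a simple least-squares linear fit over day number."""
    n = len(sizes)
    if n == 1:
        return float(sizes[0])
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(sizes) / n
    # Slope of the best-fit line through (day, size) points.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sizes)) \
            / sum((x - mean_x) ** 2 for x in xs)
    # Evaluate the fitted line at the next day; clamp at zero.
    return max(0.0, mean_y + slope * (n - mean_x))

history = [100, 110, 120]   # days 1-3 from the example above
print(extrapolate_next(history))  # projects 130.0 for day 4
```

A rolling window of the last few runs would keep the prediction from
being thrown off by an old full dump, and you'd still want a safety
margin on top of the projection when planning tape usage.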
--
Colin Smith