On 08/06/2011 08:35 AM, Claus-Justus Heine wrote:
> Hi there,
>
> I'm experiencing quite high memory usage during regressions; I have a backup
> server with only 2G of RAM. I'm doing daily backups. Sometimes a backup fails,
> and then, of course, rdiff-backup first recovers the most recent backup which
> did not fail. During this process, rdiff-backup blows up to approx. 3G of RAM.
> Then things start to slow down (swapping). It's quite a large backup set, about
> 400G, with a long history. It doesn't seem to be a memory leak, as the memory
> usage stays at 3G. It just seems a bit too much in principle.
Regression is concerned with only the two most recent sessions, so
the amount of history should be irrelevant. What is the total
number of files being backed up and the size of the uncompressed
mirror_metadata snapshot?
zcat file_statistics.{latest_timestamp}.data.gz | tr '\0' '\n' | wc -l
zcat mirror_metadata.{latest_timestamp}.snapshot.gz | wc -c
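(The first command counts the records in the latest file_statistics file, which should be roughly one per file in the last session; the second reports the uncompressed size of the latest mirror_metadata snapshot.)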
That would be more indicative of the amount of data that needs to be
kept in memory during the regression. FWIW, I'm seeing memory usage
of about 480MB during regression of a backup of about 250,000 files,
though the number of changed files needing to be regressed is quite
small (~1000).
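As a rough back-of-the-envelope check, assuming memory use scales more or less
linearly with file count (an assumption, not something rdiff-backup guarantees):
480MB for ~250,000 files is about 2KB per file, so a mirror in the range of 1.5
million files would land near the 3G the original poster reports.

If you want to measure it directly, the regression can be forced by hand and its
peak resident size captured; a minimal sketch, assuming GNU time is installed and
the repository root is /backup (the path is illustrative):

# force a regression check and report peak memory
# (this only regresses if the last session actually failed)
/usr/bin/time -v rdiff-backup --check-destination-dir /backup 2>&1 | grep 'Maximum resident set size'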
--
Bob Nichols "NOSPAM" is really part of my email address.
Do NOT delete it.