On 10/18/22 1:37 AM, Jonas Schöpf wrote:
> I deleted a lot of stuff on my machine (around 200 GB), and when I run a new
> backup I have to abort it after 9 hours because it does not finish.
>
> iostat -x -m 3 shows that %util stays between 70 and 95, so I assume I/O is
> again the bottleneck. But shouldn't it be easy for rdiff-backup to just
> delete the files from the backup? Is there a way to speed this up?

Deleting a file is not a simple operation. rdiff-backup always works backward 
from the current mirror, so before a file can disappear from the mirror it must 
store a snapshot of the file's last known state in the increments; otherwise a 
restore as of an earlier date would be impossible. Deleting ~200 GB from the 
source therefore means reading and re-saving roughly that much data into the 
increments. By default those snapshots are compressed, except for file names 
matching --no-compression-regexp (see Globals.py for the default expression), 
and that compression can take a lot of time.
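The idea can be sketched in a few lines of Python. This is only an illustration
of why "deleting" is expensive; the function name, file layout, and timestamp
format are mine, not rdiff-backup's actual code or on-disk format:

```python
import gzip
import os
import shutil
import time

def snapshot_before_delete(mirror_path: str, increments_dir: str) -> str:
    """Save a compressed snapshot of a file about to vanish from the
    mirror, so its last known state can still be restored later.
    Illustrative only -- rdiff-backup's real naming/format differ."""
    os.makedirs(increments_dir, exist_ok=True)
    stamp = time.strftime("%Y-%m-%dT%H-%M-%S")
    snap = os.path.join(
        increments_dir,
        f"{os.path.basename(mirror_path)}.{stamp}.snapshot.gz")
    # The whole file is read back and recompressed -- this is the
    # I/O- and CPU-heavy step when many large files are deleted at once.
    with open(mirror_path, "rb") as src, gzip.open(snap, "wb") as dst:
        shutil.copyfileobj(src, dst)
    os.remove(mirror_path)  # only now does the file leave the mirror
    return snap
```

So removing 200 GB from the source is really a 200 GB read-compress-write job 
on the backup side, which matches the high %util you are seeing.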

You might want to take a look at the "--no-compression" option in the manpage.

--
Bob Nichols     "NOSPAM" is really part of my email address.
                Do NOT delete it.

