Alessandro Baggi wrote:

> Borg seems very promising, but it performs only push requests at the
> moment and I need pull requests. It offers deduplication, encryption
> and much more.
>
> One word on deduplication: it is a great feature to save space, and
> with deduplication compression operations (which can take a lot of
> time) are avoided. But remember that with deduplication, for multiple
> backups only one version of the file is deduplicated. So if this file
> gets corrupted (for whatever reason), it will be compromised in all
> previously performed backup jobs, and the file is lost. For this
> reason I try to avoid deduplication on important backup datasets.
Not sure if that is true - for example, you make daily, weekly and
monthly backups (classical). Let's focus on the daily part. On day 3
the file is broken. You have to recover from day 2. The file is not
broken for day 2 - correct?!

> But remember that with deduplication, for multiple backups only one
> version of the file is deduplicated.

I do not know how you came to that conclusion. This is not how
deduplication works, at least not according to my understanding. The
documentation describes the backup and deduplication process such that
file chunks are read and compared; if a chunk is different, the new
chunk is backed up. Remember, this is done for each backup. If you want
to restore a previous backup, the file will obviously be reconstructed
from the previously stored/backed-up information.

regards
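P.S. A minimal Python sketch of that chunk/hash/compare idea, purely for
illustration (it is not Borg's actual chunker - Borg uses content-defined,
variable-size chunks; the fixed 4-byte chunks and SHA-256 hashes here are
just assumptions to keep the example small). Each backup only records
references to chunks by hash, a chunk is written only if it is not already
in the store, and restoring an older backup reassembles the file from the
chunks that were stored at that time.

    import hashlib

    CHUNK_SIZE = 4   # tiny fixed-size chunks, for demo only

    chunk_store = {}  # chunk hash -> chunk data (each unique chunk stored once)
    backups = {}      # backup name -> list of chunk hashes

    def backup(name, data):
        """Split data into chunks; store only chunks not already present."""
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            if h not in chunk_store:      # deduplication: identical chunks stored once
                chunk_store[h] = chunk
            hashes.append(h)
        backups[name] = hashes

    def restore(name):
        """Reassemble the file for a given backup from its chunk references."""
        return b"".join(chunk_store[h] for h in backups[name])

    backup("day1", b"AAAABBBBCCCC")
    backup("day2", b"AAAABBBBCCCC")   # unchanged file: no new chunks written
    backup("day3", b"AAAAXXXXCCCC")   # one chunk changed: only that chunk is added

    assert restore("day2") == b"AAAABBBBCCCC"   # day 2 still restores intact
    print(len(chunk_store), "unique chunks stored")   # 4, not 9

So a change (or breakage) in the file on day 3 only adds a new chunk; the
day-2 backup still points at the chunks that were stored earlier and
restores the intact file.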