> Like I say I like and use rsnapshot in some places, but speed and
> resource efficiency are not its winning points.
I have never used Rsnapshot, but I used Rsync backups for many years and
then moved to Bup. The time to perform backups has been *very*
substantially shortened by moving. […] the system is still there
regardless of the timing within the backup procedure.
> > - Walking the entire tree of the most recent backup once to cp -l it and
> > then;
>
> rsnapshot only renames directories when rotating backups then does rsync
> with hard links to the newest
Okay yes
…by the backup system
> - Walking the entire tree of the most recent backup once to cp -l it and
> then;
rsnapshot only renames directories when rotating backups then does rsync
with hard links to the newest
> This rsnapshot I have is really quite slow with only two 7200rpm HDDs.
> It s
On Tue 08 Oct 2024 at 06:37:43 (+0200), to...@tuxteam.de wrote:
> On Mon, Oct 07, 2024 at 08:44:44PM +0100, Jonathan Dowland wrote:
> > On Mon Oct 7, 2024 at 9:37 AM BST, Michel Verdier wrote:
> > > Do you mean inodes are expensive? Which filesystem did you use?
> >
> > It was 18 years ago so I can'
Hi,
On Mon, Oct 07, 2024 at 07:52:55PM -0600, Charles Curley wrote:
> I've used rsnapshot for several years now with no such issue. My
> rsnapshot repository resides on ext4, on its own LVM logical volume, on
> top of an encrypted RAID 5 array on four four terabyte spinning rust
> drives.
>
> /cr
any way, you'll get a new copy.
i.e. if you have a 1GiB log file /var/log/somelog and you append one
byte to it, rsync will take care of only transferring one byte, but both
the old 1GiB file and the new 1GiB-and-1-byte version will be stored in
their entirety in your backups.
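That storage cost can be reproduced with plain coreutils. Below is a sketch of the hard-link snapshot model rsnapshot uses (all paths and contents are made up for the demo, not taken from the thread):

```shell
#!/bin/sh
# Sketch: in a hard-link snapshot scheme, an unchanged file is shared
# between snapshots, but a file changed by even one byte is stored
# again in full.
set -e
work=$(mktemp -d)
mkdir "$work/snap.1"
printf 'one gigabyte, imagine\n' > "$work/snap.1/somelog"
printf 'static content\n'        > "$work/snap.1/unchanged"

# "Rotate": hard-link the whole previous snapshot (what cp -al or
# rsync --link-dest effectively does for unchanged files).
cp -al "$work/snap.1" "$work/snap.2"

# Appending to the log means replacing it with a complete new copy,
# which breaks the hard link to the old snapshot.
{ cat "$work/snap.1/somelog"; printf 'X'; } > "$work/snap.2/somelog.tmp"
mv "$work/snap.2/somelog.tmp" "$work/snap.2/somelog"

links_unchanged=$(stat -c %h "$work/snap.2/unchanged")  # 2: one inode, shared
links_changed=$(stat -c %h "$work/snap.2/somelog")      # 1: stored twice in full
echo "$links_unchanged $links_changed"
```

Chunk-based tools such as bup or borg avoid this by deduplicating file *content* rather than whole files.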
Other backup
On 2024-10-07, Jonathan Dowland wrote:
> It was 18 years ago so I can't remember that clearly, but I think it was
> a mixture of inodes expense and an enlarged amount of CPU time with the
> file churn (mails moved from new to cur, and later to a separate archive
> Maildir, that sort of thing). It
On 2024-10-07 21:06, Dan Ritter wrote:
Possibly of interest: Debian package rdfind:
Description: find duplicate files utility
rdfind is a program to find duplicate files and optionally list,
delete
them or replace them with symlinks or hard links. It is a command
line program written in C++.
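The relinking rdfind performs can be imitated in a few lines of shell. This is a toy sketch using only coreutils, not rdfind itself; hash collisions, permissions, and many edge cases are deliberately ignored:

```shell
#!/bin/sh
# Toy duplicate-relinker: hash every file, hard-link later duplicates
# to the first file seen with the same content.
set -e
d=$(mktemp -d)
printf 'same\n' > "$d/a"
printf 'same\n' > "$d/b"   # duplicate of a
printf 'diff\n' > "$d/c"

find "$d" -type f | sort | while read -r f; do
  h=$(md5sum < "$f" | cut -d' ' -f1)
  if [ -e "$d/.seen-$h" ]; then
    ln -f "$d/.seen-$h" "$f"        # duplicate: replace with a hard link
  else
    ln "$f" "$d/.seen-$h"           # first occurrence: remember its inode
  fi
done

inode_a=$(stat -c %i "$d/a")
inode_b=$(stat -c %i "$d/b")
inode_c=$(stat -c %i "$d/c")
echo "$inode_a $inode_b $inode_c"
```

rdfind itself is smarter: it compares sizes and the first and last bytes before checksumming, so most non-duplicates are rejected cheaply.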
On Mon, Oct 07, 2024 at 08:44:44PM +0100, Jonathan Dowland wrote:
> On Mon Oct 7, 2024 at 9:37 AM BST, Michel Verdier wrote:
> > Do you mean inodes are expensive? Which filesystem did you use?
>
> It was 18 years ago so I can't remember that clearly, but I think it was
> a mixture of inodes expense
On Mon, 07 Oct 2024 20:44:44 +0100
"Jonathan Dowland" wrote:
> It was 18 years ago so I can't remember that clearly, but I think it
> was a mixture of inodes expense and an enlarged amount of CPU time
> with the file churn (mails moved from new to cur, and later to a
> separate archive Maildir, t
On 10/7/24 16:06, Dan Ritter wrote:
e...@gmx.us wrote:
I use rdiff to do the backups on the "server" ... and ran into that
problem, so what I did was write a series of scripts that relinked
identical files.
Possibly of interest: Debian package rdfind:
Description: find dupli
e...@gmx.us wrote:
>
> I use rdiff to do the backups on the "server" (its job is serving video
> content to the TV box over NFS) and ran into that problem, so what I did was
> write a series of scripts that relinked identical files. It's not perfect,
> I suspect t
On Sun Oct 6, 2024 at 9:24 PM BST, eben wrote:
> I use rdiff to do the backups on the "server" (its job is serving video
> content to the TV box over NFS) and ran into that problem, so what I did was
> write a series of scripts that relinked identical files. It's not pe
On Mon Oct 7, 2024 at 9:37 AM BST, Michel Verdier wrote:
> Do you mean inodes are expensive? Which filesystem did you use?
It was 18 years ago so I can't remember that clearly, but I think it was
a mixture of inodes expense and an enlarged amount of CPU time with the
file churn (mails moved from new
On 2024-10-06, Jonathan Dowland wrote:
> At the time I was using rsnapshot, I was subscribed to some very high
> traffic mailing lists (such as LKML), and storing the mail in Maildir
> format (=1 file per email). rsnapshot's design of lots of hardlinks for
> files that are present in more than on
…mail moved around on my storage: that
resulted in the new locations being considered "new", and the next
backup increment being comparatively large.
I use rdiff to do the backups on the "server" (its job is serving video
content to the TV box over NFS) and ran into that problem, so wh
On Wed Oct 2, 2024 at 12:33 AM BST, Default User wrote:
> May I ask why you decided to switch from rsnapshot to rdiff-backup, and
> then to borg?
Sure!
At the time I was using rsnapshot, I was subscribed to some very high
traffic mailing lists (such as LKML), and storing the mail in Maildir
forma
On Sep 30, 2024, Default User wrote:
> (...)
> So, is there a consensus on which would be better:
> 1) continue to "mirror" drive A to drive B?
> or,
> 2) alternate backups daily between drives A and B?
Primarily, I do (1); though every so often I do a variation of (
On Mon, 2024-09-30 at 21:55 +0100, Jonathan Dowland wrote:
> On Mon Sep 30, 2024 at 5:39 PM BST, Default User wrote:
> > So, is there a consensus on which would be better:
> > 1) continue to "mirror" drive A to drive B?
> > or,
> > 2) alternate backups daily
> Also why I would not want all backup-storage devices connected
> simultaneously. All it takes is one piece of software going haywire
> and you may have a situation where both the original and all backups
> are corrupted simultaneously.
You can minimize this risk by having them bo
On 9/30/24 09:39, Default User wrote:
Hi!
On a thread at another mailing list, someone mentioned that they, each
day, alternate doing backups between two external usb drives. That got
me to thinking (which is always dangerous) . . .
I have a full backup on usb external drive A, "refr
On 30 Sep 2024 13:12 -0400, from hunguponcont...@gmail.com (Default User):
>> Having both drives connected and spinning simultaneously creates a
>> window of opportunity for some nasty ransomware (or a software bug,
>> mistake, power surge, whatever) to destroy both backups.
Al
On 30 Sep 2024 19:28 +0100, from debianu...@woodall.me.uk (Tim Woodall):
> On a slight tangent, how does rsnapshot deal with ext4 uninited extents?
However rsync does.
For the actual file copying, rsnapshot largely just delegates to rsync
with --link-dest. Looks like at least the Debian packaged
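The delegation can be pictured as roughly this shape of rsync call (paths are illustrative and the option list is abridged; rsnapshot builds the real command from its config file):

```shell
# daily.1 is the previous snapshot; unchanged files in daily.0 become
# hard links into it, and changed files are copied over in full.
rsync -a --delete --numeric-ids \
      --link-dest=/backup/daily.1/ \
      /home/ /backup/daily.0/
```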
On Mon Sep 30, 2024 at 5:39 PM BST, Default User wrote:
> So, is there a consensus on which would be better:
> 1) continue to "mirror" drive A to drive B?
> or,
> 2) alternate backups daily between drives A and B?
I'd go for (2), especially if you're contin
On Mon, 30 Sep 2024, Default User wrote:
Hi!
On a thread at another mailing list, someone mentioned that they, each
day, alternate doing backups between two external usb drives. That got
me to thinking (which is always dangerous) . . .
I have a full backup on usb external drive A, "refr
…opportunity for some nasty ransomware (or a software bug,
> mistake, power surge, whatever) to destroy both backups.
Good point.
> Of course it
> is safer to always have one copy offline.
>
True. But easier (and cheaper) said than done. I'm "working on it".
From the o
to destroy both backups. Of course it
is safer to always have one copy offline.
Hi!
On a thread at another mailing list, someone mentioned that they, each
day, alternate doing backups between two external usb drives. That got
me to thinking (which is always dangerous) . . .
I have a full backup on usb external drive A, "refreshed" daily using
rsnapshot. Then, ev
On Tue, Sep 03, 2024 at 14:08:37 +0200, Erwan David wrote:
> I backup them on other disks, but if my PC crashed, how can I use this
> to install a new PC, with same packages ?
If all you want is the list of packages, your best bet would be to ignore
*all* of those files, and create a new backup of:
In /var/backups I find
alternatives.tar.0
apt.extended_states.0
dpkg.arch.0
dpkg.diversions.0
dpkg.statoverride.0
dpkg.status.0
and older versions (up to .6.gz) of the same files.
I understand these are backups of different dpkg/apt states. But what I
cannot find is how to use those files.
I
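One common answer, for dpkg.status.0 at least: it is a plain dpkg status database, so the list of installed packages can be pulled out of it with awk. A hedged, self-contained sketch (the field names are standard dpkg ones; the reinstall command in the final comment is illustrative only):

```shell
#!/bin/sh
# Extract the names of installed packages from a saved dpkg status file.
set -e
d=$(mktemp -d)

# Stand-in for /var/backups/dpkg.status.0 so the sketch is self-contained.
cat > "$d/dpkg.status.0" <<'EOF'
Package: coreutils
Status: install ok installed

Package: oldcruft
Status: deinstall ok config-files
EOF

awk '/^Package:/ { p = $2 }
     /^Status: install ok installed$/ { print p }' \
    "$d/dpkg.status.0" > "$d/pkglist"

cat "$d/pkglist"
# On a rebuilt machine one could then feed the list to apt, e.g.
#   xargs apt-get install < pkglist      (illustrative)
```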
. It knows exactly every little
thing that changed to your files since last time you backed it up,
without having to scan everything. Even if you manually try to fake
the datestamps etc. Finding that information is more or less instant,
making backups easy.
the previous last disc to the then-current time.
I use my own software for making incremental multi-volume backups, based
on file timestamps (m and c), inode numbers, and content checksums.
http://scdbackup.webframe.org/main_eng.html
http://scdbackup.webframe.org/examples.html#incremental
The
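The timestamp side of that approach can be sketched with find(1). This is a minimal stand-in, not scdbackup's actual logic; real tools also compare ctime, inode numbers, and checksums to catch renames and metadata-only changes:

```shell
#!/bin/sh
# Select files modified since the last backup run, using a stamp file.
set -e
d=$(mktemp -d)
mkdir "$d/src"
touch -d '1 hour ago'  "$d/last-run"   # stamp written by the previous run
touch -d '2 hours ago' "$d/src/old"    # older than the stamp: skipped
printf 'new data' > "$d/src/new"       # newer than the stamp: backed up

changed=$(find "$d/src" -type f -newer "$d/last-run")
echo "$changed"
```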
On 1/22/24 20:30, Charles Curley wrote:
On Mon, 22 Jan 2024 18:27:51 -0800
David Christensen wrote:
debian-user:
I have a SOHO file server with ~1 TB of data. I would like to archive
the data by burning it to a series of optical discs organized by time
(e.g. mtime). I expect to periodically bu
On 1/22/24 19:44, gene heskett wrote:
On 1/22/24 21:28, David Christensen wrote:
debian-user:
I have a SOHO file server with ~1 TB of data. I would like to archive
the data by burning it to a series of optical discs organized by time
(e.g. mtime). I expect to periodically burn additional discs
On Mon, 22 Jan 2024 18:27:51 -0800
David Christensen wrote:
> debian-user:
>
> I have a SOHO file server with ~1 TB of data. I would like to archive
> the data by burning it to a series of optical discs organized by time
> (e.g. mtime). I expect to periodically burn additional discs in the
> futu
On 1/22/24 21:28, David Christensen wrote:
debian-user:
I have a SOHO file server with ~1 TB of data. I would like to archive the
data by burning it to a series of optical discs organized by time (e.g.
mtime). I expect to periodically burn additional discs in the future,
each covering a span o
debian-user:
I have a SOHO file server with ~1 TB of data. I would like to archive the
data by burning it to a series of optical discs organized by time (e.g.
mtime). I expect to periodically burn additional discs in the future,
each covering a span of time from the previous last disc to the
t
On Mon 18 Apr 2022 at 16:06:48 (-0400), Default User wrote:
> BTW, I think I have narrowed the previous restore problem down to what I
> believe is a "buggy" early UEFI implementation on my computer (circa 2014).
> Irrelevant now; I have re-installed with BIOS (not UEFI) booting and MBR
> (not GP
On Tue 19 Apr 2022 at 07:19:58 (+0200), DdB wrote:
> So i came up with the idea to create a sort of inventory using a sparse
> copy of empty files only (using mkdir, truncate + touch). The space
> requirements are affordable (like 2.3M for an inventory representing
> 3.5T of data). The effect bein
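The inventory idea above can be sketched in a few lines of shell — a toy version of the same mkdir/truncate/touch approach (it ignores filenames containing whitespace):

```shell
#!/bin/sh
# Build a zero-content "inventory" mirroring sizes and mtimes of a tree.
set -e
d=$(mktemp -d)
mkdir -p "$d/data/sub"
printf 'hello' > "$d/data/sub/f"       # 5 bytes of real content

src=$d/data inv=$d/inventory
mkdir -p "$inv"
find "$src" -mindepth 1 -type d -printf '%P\n' |
  while read -r p; do mkdir -p "$inv/$p"; done
find "$src" -type f -printf '%P\n' |
  while read -r p; do
    truncate -s "$(stat -c %s "$src/$p")" "$inv/$p"  # sparse placeholder
    touch -r "$src/$p" "$inv/$p"                     # copy the mtime
  done

size=$(stat -c %s "$inv/sub/f")
echo "$size"
```

The placeholders report the original sizes but, being sparse, allocate almost no blocks, which is what keeps the inventory to a few megabytes.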
Hello,
Am 11.04.2022 um 04:58 schrieb Default User:
> So . . . what IS the correct way to make "backups of backups"?
>
I don't know that for sure, but at first glance, I don't understand the
complexity of your setup either. It seems to be quite elaborate, which is
certain
On 11/4/22 10:58, Default User wrote:
So . . . what IS the correct way to make "backups of backups"?
Sorry to take so long to respond. I am traveling and have only short
periods that I can spend on non-pressing matters.
To answer your question: the method that gets you the
On 4/18/22 13:06, Default User wrote:
Finally, fun fact:
Many years ago, at a local Linux user group meeting, Sun Microsystems put
on a demonstration of their ZFS filesystem. To prove how robust it was,
they pulled the power cord out of the wall socket on a running desktop
computer. Then they pl
> >> #!/bin/sh
> >> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
> >> /media/default/MSD1/ /media/default/MSD2/
> >>
> >>
> >> Use a version control system for system administration. Create a
> >> project for every machine.
version control system for system administration. Create a
project for every machine. Check in system configuration files,
scripts, partition table backups, encryption header backups, RAID header
backups, etc.. Maintain a plain text log file with notes of what you
did (e.g. console sessions), when
> >>
> >> No problem, I say. I will just use Timeshift to restore from its backup
> of
> >> a few hours earlier.
> >>
> >> But that did not work, even after deleting the extra directory, and
> trying
> >> restores from multiple Timeshift backups.
…backup of
a few hours earlier.
But that did not work, even after deleting the extra directory, and trying
restores from multiple Timeshift backups.
Anyway, I never could fix the problem. But I did take it as an opportunity
to "start over". I put in a new(er) SSD, and did a fresh ins
…to use as a backup device, labeled
>> MSD1.
>> >>> - another identical usb hard drive, labeled MSD2, to use as a copy of
>> the
>> >>> backups on MSD1.
>> >>> - the computer and all storage devices are formatted ext4, not
>> encrypted.
, labeled MSD1.
- another identical usb hard drive, labeled MSD2, to use as a copy of the
backups on MSD1.
- the computer and all storage devices are formatted ext4, not encrypted.
- two old Clonezilla disk images from when I installed Debian 11 last year
(probably irrelevant).
- Timeshift to daily
as a backup device, labeled MSD1.
> > - another identical usb hard drive, labeled MSD2, to use as a copy of the
> > backups on MSD1.
> > - the computer and all storage devices are formatted ext4, not encrypted.
> > - two old Clonezilla disk images from when I installed Debian 11 last
On 4/10/22 19:58, Default User wrote:
Hello!
My setup:
- single home x86-64 computer running Debian 11 Stable, up to date.
- one 4-Tb external usb hard drive to use as a backup device, labeled MSD1.
- another identical usb hard drive, labeled MSD2, to use as a copy of the
backups on MSD1.
- the
On Sun, Apr 10, 2022 at 11:13 PM David wrote:
> On Mon, 11 Apr 2022 at 12:59, Default User
> wrote:
>
> > Then I try to use rsync to make an identical copy of backup device MSD1
> on an absolutely identical 4-Tb external usb hard drive,
> > labeled MSD2, using this command:
> >
> > sudo rsync -a
On Mon, 11 Apr 2022 at 12:59, Default User wrote:
> Then I try to use rsync to make an identical copy of backup device MSD1 on an
> absolutely identical 4-Tb external usb hard drive,
> labeled MSD2, using this command:
>
> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
> /media/defa
Hello!
My setup:
- single home x86-64 computer running Debian 11 Stable, up to date.
- one 4-Tb external usb hard drive to use as a backup device, labeled MSD1.
- another identical usb hard drive, labeled MSD2, to use as a copy of the
backups on MSD1.
- the computer and all storage devices are
mick crane wrote:
> On 2020-10-13 00:46, Dan Ritter wrote:
> > mick crane wrote:
> > >
>
> This looks like good advice, thanks Dan and all.
> One thing I wonder about if I reboot and change boot order to start windows
> is if I might create some confusion on the network as pfsense PC does DHCP
>
On 2020-10-13 00:46, Dan Ritter wrote:
mick crane wrote:
might I ask a favour for information on accepted wisdom for this stuff?
I being a home user have pfsense on old lenovo between ISP router and
switch
to PCs
another old buster lenovo doing email
another Buster PC I do bits of programmi
…a backup system that runs across sftp or
rsync-over-ssh, that would be much better. Or you can plug an
external USB disk into the Windows machine and ask it to store
the backups there directly.
-dsr-
I am a long-time user of LuckyBackup, and am very satisfied. While
experimenting with the Clear Linux OS system, I have been looking for a
backup solution, since LuckyBackup is not readily available there.
Clear OS provides KopiaUI ... reading the Kopia webpage and YouTube
tutorial, the KopiaUI app seems to be worthwhi
On Thu, Aug 20, 2020 at 11:33:34AM +1200, Ben Caradoc-Davies wrote:
On 20/08/2020 10:08, David Christensen wrote:
On 2020-08-13 01:31, David Christensen wrote:
Without knowing anything about your resources, needs,
expectations, "consistent backup plan", etc., and given the
choices ext2, ext3,
On 20/08/2020 10:08, David Christensen wrote:
On 2020-08-13 01:31, David Christensen wrote:
Without knowing anything about your resources, needs, expectations,
"consistent backup plan", etc., and given the choices ext2, ext3, or
ext4 for an external USB drive presumably to store backup
reposit
On 2020-08-13 01:31, David Christensen wrote:
On 8/12/20 5:14 PM, rhkra...@gmail.com wrote:
I'm getting closer to setting up a consistent backup plan, backing up
to an
external USB drive. I'm wondering about a reasonable filesystem to
use, I
think I want to stay in the ext2/3/4 family, and I'm
On Vi, 14 aug 20, 10:31:51, David Wright wrote:
>
> I'm dubious whether I shall ever start using these filesystems.
> I create multiple backups on ext4 filesystems on LUKS, and keep
> MD5 digests of their contents. Would that qualify as your
> "additional tools
…don't see
> > any
> > point (and don't want to learn) either of them at this point -- I don't see
> > much need for a backup filesystem.)
>
> As has been stated already, both btrfs and ZFS have built-in bitrot
> protections that are very useful for backups and archi
c) run the backup
(d) unmount
When you discover your media is corrupt/broken, you restart with a
new medium.
If you need any redundancy, you keep several backups in parallel
(which you keep physically separate, so your house burning down
doesn't catch all of them at once).
Adjust accordin
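The mount → back up → unmount cycle above amounts to a short script like this (the device label, mount point, and source path are made-up placeholders; a real version needs root and error handling around each step):

```shell
#!/bin/sh
# Mount the medium, run the backup, unmount so the copy stays offline.
set -e
mount /dev/disk/by-label/BACKUP_A /mnt/backup
rsync -a --delete /home/ /mnt/backup/home/
umount /mnt/backup
```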
On Thu, Aug 13, 2020 at 09:32:13PM +, ghe2001 wrote:
Two for sure and put them in a RAID1 -- formatted ext4. And watch that
mdstat.
And a third or fourth to see if you can get ZFS going.
For playing around with tech, sure: for part of a mundane, reliable
backup strategy for the OP, and as
(Some day I'll try ZFS or BTRFS for my "system" filesystems, but don't see
> any
> point (and don't want to learn) either of them at this point -- I don't see
> much need for a backup filesystem.)
As has been stated already, both btrfs and ZFS have built-in
On 2020-08-13 01:31, David Christensen wrote:
> Migrating to ZFS was non-trivial, and I am still wrestling with
> disaster preparedness.
I should have qualified that -- when I used ZFS only as a volume manager
and file system, it was not much harder than md and ext4. You could put
a GPT partiti
On 8/13/20 13:52, rhkra...@gmail.com wrote:
> On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
>> Debian ZFS root (and boot) is not *that* hard; see the instructions at
>>
> >> https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
>>
>>
On Thursday, August 13, 2020 04:09:46 PM David Christensen wrote:
> On 2020-08-13 12:52, rhkra...@gmail.com wrote:
> > On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
> >> I would recommend installing from buster-backports to get the current
> >> openzfs release which includes improvements
On Thursday, August 13, 2020 2:50 PM, Dan Ritter wrote:
> D. R. Evans wrote:
>
> > Greg Wooledge wrote on 8/13/20 2:29 PM:
> >
> > > The simplest answer would be to use ext4.
> >
> > I concur, given the OP's use c
D. R. Evans wrote:
> Greg Wooledge wrote on 8/13/20 2:29 PM:
>
> >
> > The simplest answer would be to use ext4.
> >
>
> I concur, given the OP's use case. And I speak as someone who raves about ZFS
> at every reasonable opportunity :-)
Also concur. But by all means buy a spare drive and expe
Greg Wooledge wrote on 8/13/20 2:29 PM:
>
> The simplest answer would be to use ext4.
>
I concur, given the OP's use case. And I speak as someone who raves about ZFS
at every reasonable opportunity :-)
Doc
--
Web: http://enginehousebooks.com/drevans
On Thu, Aug 13, 2020 at 01:09:46PM -0700, David Christensen wrote:
> On 2020-08-13 12:52, rhkra...@gmail.com wrote:
> > * Most of my backup will be done from a Wheezy system -- can I install
> > ZFS
> > on Wheezy?
>
> I do not see any ZFS packages for Wheezy:
>
> The simplest answer would be
On 2020-08-13 12:52, rhkra...@gmail.com wrote:
On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
Debian ZFS root (and boot) is not *that* hard; see the instructions at
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
They certainl
On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
> Debian ZFS root (and boot) is not *that* hard; see the instructions at
>
> https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
>
> They certainly are not harder than installing early
there is
> any
> good reason to use anything beyond ext2?
>
I've been using an external USB drive for backups for years (more specifically,
a regular HDD in a USB enclosure), it works reasonably well. I use ext4.
ext2 is more prone to lose stuff and become corrupted if your PC sh
ve figured out how to install Debian with ZFS on root;
> STFW for details.) There is a 'contrib' ZFS kernel package available
> that can be installed on a working Debian system. This makes it
> possible to use ZFS for most everything except boot and root. ZFS is
> mature and re
On Wed, Aug 12, 2020 at 09:15:21PM -0600, Charles Curley wrote:
> On Wed, 12 Aug 2020 20:14:03 -0400
> rhkra...@gmail.com wrote:
>
> > I'm getting closer to setting up a consistent backup plan, backing up
> > to an external USB drive. I'm wondering about a reasonable
> > filesystem to use, I thin
On Thu, Aug 13, 2020 at 12:55:35PM +1200, Ben Caradoc-Davies wrote:
> On 13/08/2020 12:14, rhkra...@gmail.com wrote:
> >I'm getting closer to setting up a consistent backup plan, backing up to an
> >external USB drive. I'm wondering about a reasonable filesystem to use, I
> >think I want to stay i
…mature and reliable. I use ZFS for FreeBSD system disks, file server
live data, backups, archives, and images. Migrating to ZFS was
non-trivial, and I am still wrestling with disaster preparedness.
David
…I'm wondering if there is any good reason to use anything beyond ext2?
I use my external USB drives for off-site backup, so I use ext4 on top
of an encrypted partition.
http://charlescurley.com/blog/index.html
Start with
http://charlescurley.com/blog/posts/2019/Nov/02/backups-on-linux/ and
wo
On 8/12/2020 7:14 PM, rhkra...@gmail.com wrote:
I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive. I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any
good reason to use
in Debian.
My backups are pigz-compressed tar archives, encrypted with gpg
symmetric encryption, with a "pigz -0" outer wrapper to add a 32-bit
checksum wrapper for convenient verification with "gzip -tv" or similar
without requiring decryption. Archives are written to both e
On 13/8/20 10:14 am, rhkra...@gmail.com wrote:
I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive. I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any
good reason to use a
I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive. I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any
good reason to use anything beyond ext2?
(Some day I'll try ZFS o
…things as simple as possible but not more
> simple”).
>
> I have been testing it with toy cases to have at least some experience
> with it before using it for my real backups.
>
> Using a Git checkout of the latest release I get this warning: “Using a
> pure-python msgpack! This
On Sun, 20 Aug 2017 20:04:57 -0500
Mario Castelán Castro wrote:
> On 2017-08-19 23:07 -0400 Celejar wrote:
> >There's Borg, which apparently has good deduplication. I've just
> >started using it, but it's a very sophisticated and quite popular piece
> >of software, judging by chatter in various
repository on some other storage medium besides the day to day operating
> cache, of the data you will need to recover and restore normal
> operations should your main drive become unusable with no signs of ill
> health until its falls over.
That's not the only job of backup
> > > Hello.
> > > >
> > > > Currently I use rsync to make the backups of my personal data,
> > > > including some manually selected important files of system
> > > > configuration. I keep old backups to be more safe from the
On Sun, 20 Aug 2017 02:05:46 -0400
Gene Heskett wrote:
> On Saturday 19 August 2017 23:07:01 Celejar wrote:
>
> > On Thu, 17 Aug 2017 11:47:34 -0500
> >
> > Mario Castelán Castro wrote:
> > > Hello.
> > >
> > > Currently I use rsync to make th
…with toy cases to have at least some experience
with it before using it for my real backups.
Using a Git checkout of the latest release I get this warning: “Using a
pure-python msgpack! This will result in lower performance.”. Yet I have
the Debian package “python3-msgpack“. Do you know what the pro
…the right tool for my case. I do not need any highly sophisticated
tools. As I noted in the first message, I only want to backup a personal
computer to an USB drive.
Since I must manually connect the USB drive to make the backups, there is
no point in automating it with cron. Network backups are irrelevant
in m
…a very well done collection of programs.
It very efficiently does incremental backups to several types of media
-- Gene goes to disk, I go to tape (takes forever, but there are
several little boxes containing backups that are nowhere near a
failure point).
It backs up in tar (or dump) files so you can res
On Saturday 19 August 2017 23:07:01 Celejar wrote:
> On Thu, 17 Aug 2017 11:47:34 -0500
>
> Mario Castelán Castro wrote:
> > Hello.
> >
> > Currently I use rsync to make the backups of my personal data,
> > including some manually selected important files of sys
On Thu, 17 Aug 2017 11:47:34 -0500
Mario Castelán Castro wrote:
> Hello.
>
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more safe from the scenario where
On 2017-08-18 23:53 +0100 Liam O'Toole wrote:
>I use duplicity for exactly this scenario. See the wiki page[1] to get
>started.
>
>1: https://wiki.debian.org/Duplicity
Judging from a quick glance at that project's homepage in GNU Savannah,
this seems indeed to be the right tool for the job, but I
On 2017-08-17, Mario Castelán Castro wrote:
> Hello.
>
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more safe from the scenario where I have deleted
> some
On Thu, Aug 17, 2017 at 07:33:53PM -0500, Mario Castelán Castro wrote:
> On 17/08/17 15:51, to...@tuxteam.de wrote:
[...]
> > [...] And yes, there's a wiki entry encouraging "in-line" quoting [1].
>
> Ah, I see. I rarely check the Debian Wiki becaus
On 17/08/17 15:51, to...@tuxteam.de wrote:
> On Thu, Aug 17, 2017 at 03:24:35PM -0500, Mario Castelán Castro wrote:
> [...]
>
> But in general, folks here tend to be tolerant. And yes, there's a
> wiki entry encouraging "in-line" quoting [1].
Ah, I see. I rarely check the Debian Wiki because it i
On 17/08/17 13:31, Nicolas George wrote:
> [[elided]]
>
> No, it is the other way around: we rsync the data to a directory stored
> on a btrfs filesystem, and then we make a snapshot of that directory.
> With btrfs's CoW, only the parts of the files that have changed use
> space.
Thanks for the c
On Thu, Aug 17, 2017 at 03:24:35PM -0500, Mario Castelán Castro wrote:
> On 17/08/17 13:31, Nicolas George wrote:
[...]
> > Please remember not to top-post.
>
> Bottom posting and top posting each have their own disadvantages.
The general conv