Take a look at "rsnapshot". It uses "rsync --link-dest" and/or "cp -al"
to do exactly what you like about "cp --backup=t". It maintains a
series of snapshots of the filesystem with separate copies of changed
files but only one copy of unchanged files.
rsnapshot overlays all that with a si
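The hard-link snapshot scheme described above can be sketched with plain rsync. This is a minimal illustration of the technique, not rsnapshot itself (which adds configuration, rotation and scheduling on top); SRC and DEST are example paths.

```shell
#!/bin/sh
# Minimal --link-dest snapshot rotation, in the style rsnapshot automates.
SRC="$HOME/docs"
DEST="/var/backups/snapshots"
STAMP=$(date +%Y-%m-%dT%H%M%S)

mkdir -p "$DEST"
# Newest previous snapshot, if any.
PREV=$(ls -1d "$DEST"/*/ 2>/dev/null | tail -n 1)

if [ -n "$PREV" ]; then
    # Unchanged files become hard links into the previous snapshot,
    # so each unchanged file occupies disk space only once.
    rsync -a --link-dest="$PREV" "$SRC"/ "$DEST/$STAMP"/
else
    rsync -a "$SRC"/ "$DEST/$STAMP"/
fi
```

Each directory under $DEST then looks like a full copy, but `du` shows that only changed files cost space.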
On Sat, Oct 13, 2007 at 11:11:35AM -0600, Paul E Condon wrote:
[snip: remote backups]
> It doesn't appear from the man page that rsync has the equivalent of
> cp --backup=t
> I use this and it is important to me. Nothing ever is deleted from my
> backup until I do a clean-up sweep on it (which I h
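For readers unfamiliar with the option being discussed: GNU cp's --backup=t numbers a displaced destination file instead of overwriting it. A quick demonstration in a scratch directory:

```shell
cd "$(mktemp -d)"
echo 'version 1' > notes.txt
echo 'version 2' > incoming.txt

# With --backup=t, the existing destination is first renamed to
# notes.txt.~1~, so the old contents are never lost.
cp --backup=t incoming.txt notes.txt

cat notes.txt       # version 2
cat notes.txt.~1~   # version 1
```

Repeating the copy produces notes.txt.~2~, and so on, which is why nothing disappears until you sweep the ~N~ files yourself.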
On Sat, Oct 13, 2007 at 02:59:32AM +, Sean Zimmermann wrote:
> I have one final question: some people have brought up the strength of
> programs like afio that compress files individually to protect against
> corruption. Most of the things I archive are large image or movie files
> (which ty
Paul E Condon <...@mesanetworks.net> writes:
>
> The difference is that afio compresses each input file individually, so
> if there is a read/write error, only one file is lost from the archive.
> (Actually, there are a lot more differences - to start with the options
> are totally different syntax an
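A sketch of the afio usage being described, assuming the afio package is installed. The flags are as I recall them from its manual (-o writes an archive from filenames on stdin, -i restores, -Z gzips each member individually), so verify against your local man page; all paths are examples.

```shell
# Archive a tree, compressing each file separately: a bad block in the
# archive then corrupts only the single member it falls in.
find data -print | afio -oZ /backup/data.afio

# Restore into the current directory.
afio -iZ /backup/data.afio
```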
On Oct 11, 2007, at 2:53 PM, Carl Johnson wrote:
> Are you sure that you are not talking about afio? I looked at the
> documentation for cpio, and there is no mention of compression (for
> etch).
You're probably right. I tend to conflate the two in my mind.
David Brodbeck <[EMAIL PROTECTED]> writes:
> On Oct 11, 2007, at 5:01 AM, Sean Zimmermann wrote:
> > If I ignored the indexing issue (since most of my work with tar is
> > large, non-incremental backups where I typically restore the entire
> > contents - it would be nice if there was index
On Thu, 11 Oct 2007 20:48:10 +0200
"Manon Metten" <[EMAIL PROTECTED]> wrote:
> Hi Sean,
>
> You might consider using Lha. It does the same as tar and bzip2 together
Tar itself integrates bzip2 via the 'j' switch.
> Manon.
Celejar
--
mailmin.sourceforge.net - remote access via secure (OpenPGP)
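The 'j' switch mentioned above routes the archive through bzip2, so creation, listing, and single-file extraction are each one command (directory and file names here are examples):

```shell
tar -cjf archive.tar.bz2 work/      # create a bzip2-compressed archive
tar -tjf archive.tar.bz2            # list its contents without extracting
tar -xjf archive.tar.bz2 work/abc   # extract just one member
```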
Hi Sean,
You might consider using Lha. It does the same as tar and bzip2 together
(although you can disable compression).
It has a simple syntax. You can also view the contents of the archive and
even extract one single file from it.
Example (suppose I have a 'work' dir with a.o. the file 'abc' i
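The example above is cut off; a session along the lines Manon describes might look like the following. This assumes an lha implementation that can create archives (the free lhasa reimplementation shipped by some distributions only extracts), and the flags should be checked against your local man page.

```shell
lha a work.lzh work/abc   # add: create the archive / append the file 'abc'
lha l work.lzh            # list the contents of the archive
lha e work.lzh work/abc   # extract that single file from it
```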
On Oct 11, 2007, at 5:01 AM, Sean Zimmermann wrote:
> If I ignored the indexing issue (since most of my work with tar is
> large, non-incremental backups where I typically restore the entire
> contents - it would be nice if there was indexing, but is not a huge
> problem), should I still use somethi
Douglas A. Tutty <...@porchlight.ca> writes:
> Tar archive isn't designed for this, since its designed for sequential
> devices. Have you considered using another archive format? Perhaps
> iso? You can split and join iso files, mount them with loop mount,
> compress, burn, whatever.
>
> Doug.
If
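Doug's iso suggestion, sketched end to end. genisoimage may be named mkisofs on older systems, loop-mounting needs root, and the paths and piece size are examples:

```shell
# Build an iso image of the tree, cut it into 100 MiB pieces for
# transport, rejoin them, and loop-mount the result read-only.
genisoimage -quiet -R -o backup.iso data/
split -b 100M backup.iso backup.iso.part.

cat backup.iso.part.* > rejoined.iso
sudo mount -o loop,ro rejoined.iso /mnt
```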
On Wed, Oct 10, 2007 at 09:43:33AM +, Sean Zimmermann wrote:
> Also, is there some way to index a large tar file, so if I want to extract a
> file at the end of a large archive, tar doesn't have to seek through the
> entire
> archive to get the file?
Tar archive isn't designed for this, s
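GNU tar cannot truly seek within a plain archive, but you can at least build a one-time index of member positions with --block-number (-R) and then grep that instead of re-listing the whole archive for every lookup. This is a workaround, not random access; paths are examples.

```shell
tar -cf big.tar data/               # create the archive
tar -tvRf big.tar > big.tar.index   # -R prefixes each entry with its block number
grep 'data/some/file' big.tar.index # look up a member without relisting
```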
On 10/10/07 04:43, Sean Zimmermann wrote:
> Hello.
>
> I frequently work with large tar archives that often need to be split into
> smaller pieces. Up until now, I've used tar -m to create multi-part archives.
-M
> I recently read that I can us
On 10/10/2007 11:43 AM, Sean Zimmermann wrote:
> Hello.
Hi Sean
> I frequently work with large tar archives that often need to be split into
> smaller pieces. Up until now, I've used tar -m to create multi-part archives.
> I recently read that I c
Hello.
I frequently work with large tar archives that often need to be split into
smaller pieces. Up until now, I've used tar -m to create multi-part archives. I
recently read that I can use split to do the same thing. Is there an advantage
or disadvantage to using split over tar -m?
Also, is th
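To make the two approaches concrete, here is a sketch of both (sizes and names are examples). split cuts a finished archive into byte-range pieces that must be rejoined with cat before use, while tar's multi-volume mode, which is capital -M (lowercase -m only suppresses modification-time restoration), writes real volumes but prompts interactively between them:

```shell
# Approach 1: one archive, cut after the fact.
tar -cf big.tar work/
split -b 1G big.tar big.tar.part.
cat big.tar.part.* | tar -tf -      # rejoin on the fly and list

# Approach 2: tar's own multi-volume mode (-L is the volume size in KiB).
tar -cMf vol.tar -L 1048576 work/
```

The split pieces are inert chunks (any one alone is useless), whereas each tar -M volume is a unit tar itself knows how to continue from.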