Quoting Theodore Tso ([EMAIL PROTECTED]):
> On Sat, Jul 15, 2006 at 03:01:55AM +0200, Zoran Dzelajlija wrote:
> > > Can you send me a raw e2image dump of the filesystem?
> > 
> > Will do when it finishes.  bzip2 of 200GB seems to be really slow, even if
> > it's mostly zeroes.  I suppose you wouldn't mind if I sent a tar.bz2 with
> > the image?
> 
> A tar.bz2 of what?

Of the image, which is currently dumped on another partition.

# e2image -r /dev/hdb1 hdb1.e2i
# ls -la hdb1.e2i
-rw-------  1 root  root  200047001601 Jul 15 03:09 hdb1.e2i
# tar cfS hdb1.e2i.tar hdb1.e2i    # the -S flag handles sparse files
# bzip2 hdb1.e2i.tar

The result is at

http://ext3:[EMAIL PROTECTED]/~jelly/linux/hdb1.e2i.tar.bz2
http://ext3:[EMAIL PROTECTED]/~jelly/linux/hdb1.e2i.tar.bz2.md5sum

>   It's important to create the e2image file using
> 
>             e2image -r /dev/hda1 - | bzip2 > hda1.e2i.bz2
> 
> because when I uncompress the file using:
> 
>       bunzip2 < hda1.e2i.bz2 | make-sparse >hda1.e2i
> 
> it creates the file as a sparse file, which takes less space and is
> faster to create.
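
The make-sparse tool in the quoted pipeline isn't a standard utility, so
here's a minimal sketch of the idea (the block size, function name, and
zero-block test are my assumptions, not the actual tool): copy stdin to the
output file, but seek instead of writing whenever a block is all zeroes,
which leaves holes on filesystems that support them.

```python
BLOCK = 4096  # assumed hole granularity; a real tool may choose differently

def make_sparse(src, out_path):
    """Copy the stream `src` to out_path, seeking over all-zero blocks.

    Hypothetical reimplementation of the make-sparse idea from the
    quoted pipeline, not the actual tool.
    """
    zero = b"\0" * BLOCK
    with open(out_path, "wb") as out:
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            if block == zero:
                out.seek(len(block), 1)  # skip ahead: leaves a hole
            else:
                out.write(block)
        out.truncate()  # fix the size in case the file ends in a hole
```

Reading the result back returns zeroes for the holes, so the content is
identical to a plain copy; only the allocated blocks differ.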

I was thinking that with

e2image -r /dev/hda1 - | bzip2

bzip2 has to read all 200GB on stdin, whereas if the image is dumped to a
file first, only the non-sparse data gets written.  I realize now that
compressing the data from a file instead of from a pipe is not any faster
on Linux, because there's no way for userspace to know where the holes
are, so my tar has to read the whole image anyway.  Oh well.  At your end,
however, bzip2 would still have to dump all 200GB to stdout, while using

tar xfSj theres-a-sparse-file-in-this-archive.tar.bz2

will just uncompress the archive (about 0.5GB), and tar will seek over the
holes when creating the file.
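
As an aside on both halves of that: a common heuristic for spotting sparse
files (and roughly what GNU tar's -S relies on) is comparing allocated
blocks with apparent size, and on extraction tar recreates holes by seeking
past them before writing.  A sketch of both tricks (not tar's actual code):

```python
import os

def looks_sparse(path):
    # st_blocks counts 512-byte units; fewer allocated bytes than the
    # apparent size suggests the file has holes.
    st = os.stat(path)
    return st.st_blocks * 512 < st.st_size

def write_with_hole(path, hole_len, tail):
    # Seeking past the hole before writing leaves it unallocated --
    # the same trick tar uses when extracting a sparse member.
    with open(path, "wb") as f:
        f.seek(hole_len)
        f.write(tail)
```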

Zoran


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]