Hi Bob,

On Mon, 17.09.2007 at 11:47:08 -0600, Bob Proulx <[EMAIL PROTECTED]> wrote:
> Toni Mueller wrote:
> > Michael Stone <[EMAIL PROTECTED]> wrote:
> > > Toni Mueller wrote:
> > > ># dd if=/dev/zero of=/srv/xen/disks/vm1.xm obs=1024K count=20000 
> > > >seek=10000 conv=notrunc
> > > >The effect, apart from growing the file to an unspecified size (it came
> > > >out at 31G instead of 20)
> > > 
> > > Well, you told it to write 20000 1M blocks after seeking 10000 1M 
> > > blocks into the file. (That comes to 30G.) 
> 
> I see the notrunc flag.  Because there is a seek this confuses things.
> I don't think notrunc is appropriate here.  It would make dd modify an
> existing file.  But the seek will affect this.

At least, extending an existing file is what I intended with that dd run.
I did this because I had no luck creating the file in one go, so I wanted
to build it up piecemeal.
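
For the record, what I was aiming for would, if I now understand the
units correctly, be something like this (untested sketch, assuming the
first 10000M of the file are already in place):

# dd if=/dev/zero of=/srv/xen/disks/vm1.xm bs=1024k seek=10000 count=10000 conv=notrunc

i.e. seek past the existing 10000 1M blocks and append another 10000.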

> Are you trying to modify an existing file?  This makes all results
> dependent upon the previous existence and contents of the output file.

Yes. The file was there and, AFAIR, was already around 12G, but I figured
I would continue writing from the 10G mark.

> The seek=10000 says to skip 10000 obs blocks in the output.
> 10000 * 1024 * 1024 = 10485760000 bytes.

Sounds OK to me.

> The obs was set to 1M but the ibs is left at the default 512 bytes.
> This means that dd will perform 2048 reads to buffer up 1M followed by
> 1 write to write the 1M to the output repeated 20000 times.

I also tried just setting bs, but to no avail. Since I was only reading
from /dev/zero anyway, I thought the block sizes would not matter much,
but in practice dd did not seem to do 2048 reads followed by a single 1M
write. Then again, as you point out, I probably just sent you the wrong
command out of the several variants I tried while I was still too
agitated.
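
(If I read the dd documentation correctly, setting bs=1024k on its own
makes both ibs and obs 1M, so in that variant count=20000 really should
have meant 20000 x 1M, i.e. roughly the 20G I wanted. I will double-check
which variant I actually ran.)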

> The count applies to the ibs and not the obs.  The default ibs is 512
> bytes.  This produces 20000 512-byte blocks for a total of 10240000
> bytes out.

Oh.
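So the run I pasted would only have written 20000 x 512 = 10,240,000
bytes (roughly 10M) past the seek point, nowhere near 20G. That explains
at least part of my confusion about the resulting sizes.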

> This is dependent upon the file not existing previously.  If a file
> previously exists and the file is larger than this result then the
> file will not be truncated because of the notrunc flag.  See this
> example:

I'm not so much concerned with trunc vs. notrunc as with the fact that I
had severe trouble creating the file at all. In my case, just having a
big file was important, not how big it was exactly.
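
(In hindsight, if all that matters is the file's size and it may start
out as all zeros, I suppose I could have skipped writing the zeros
altogether and created a sparse file, e.g.

# dd if=/dev/zero of=/srv/xen/disks/vm1.xm bs=1 count=0 seek=20G

which only sets the length to 20G without allocating the blocks. Whether
Xen is happy with a sparse backing file is something I would have to
check first, though.)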

> Please describe exactly what you are trying to accomplish and then an
> improved dd invocation can be crafted for you.

Originally, I just tried to create a 20G file in one go:

# dd if=/dev/zero of=/srv/xen/disks/vm1.xm bs=1024k count=20000&

I only started trying other variants of calling dd after that run crashed
at around 9-10G.
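
When I retry, I will probably write the file in smaller slices so that a
crash costs less, something along these lines (untested sketch):

# for i in $(seq 0 19); do dd if=/dev/zero of=/srv/xen/disks/vm1.xm bs=1024k count=1024 seek=$((i*1024)) conv=notrunc || break; done

i.e. twenty 1G pieces, each continuing where the previous one left off.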

> My tests using valgrind show dd consuming only 1,069,029 bytes.  I am
> not seeing a large memory growth from dd and cannot recreate that
> aspect of the problem.

I was watching dd with top until X was terminated, and top reported
some 77M right from the start.
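
Next time I will also capture the numbers outside of top, e.g. with

# ps -o pid,vsz,rss,args -C dd

so that I can tell virtual from resident size instead of relying on
whatever column I happened to glance at.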

> If a process was consuming a very large amount of memory then the
> linux kernel memory overcommit configuration and the linux kernel out
> of memory killer will come into play.  Search the web for linux kernel
> out of memory killer and memory overcommit and you will find much
> discussion about it.  Start here:

Thank you for the links. I had already figured that the kernel was trying
to save itself, but I have no experience with (and have not fiddled with)
any configuration in that area.

> personally too often.  I always disable linux kernel memory
> overcommit.

I am just using the stock Debian kernel as-is and have configured nothing
myself.
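
If I end up trying what you do, I gather it would be something along
these lines (untested on my side, and whether strict accounting suits
this box is something I would still have to find out):

# cat /proc/sys/vm/overcommit_memory
# sysctl -w vm.overcommit_memory=2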

> That portion does not sound like a bug in dd but rather a misfeature
> of the linux kernel itself.

OK. That still leaves me with the question of why dd ran amok (otherwise
the OOM killer would not have kicked in). I can rule out other sources of
the problem because this is a single-user machine, and I didn't dare
touch anything (after the first X crash, anyway).
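
Before trying again I will dig through the logs to see what the kernel
reported when it started killing things, along the lines of

# dmesg | grep -i oom
# grep -i "out of memory" /var/log/kern.log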



Best,
--Toni++



