Package: hurd
Version: N/A
Severity: normal

Hi,

again, I have no test case for this report, only an analysis of the code.
I am sure that *if* it can occur at all, it is quite rare, so it's not urgent.

In ext2fs/pager.c (diskfs_grow), it seems that a file could shrink by one
block. Assume new_end_block > end_block, and
dn->last_page_partially_writable is set, and old_page_end_block > end_block,
AND diskfs_catch_exception fails (this is a rather strong assumption; I
don't know under which circumstances it could happen).

              err = diskfs_catch_exception ();
              while (!err && end_block < writable_end)
                {
                  block_t disk_block;
                  err = ext2_getblk (node, end_block++, 1, &disk_block);
                }
              diskfs_end_catch_exception ();

              if (err)
                /* Reflect how much we allocated successfully.  */
                new_size = (end_block - 1) << log2_block_size;

The while loop is never entered, because ERR is already set.  This means that
end_block is still old_size >> log2_block_size.  So, because ERR is non-zero,
new_size is set to
   (end_block - 1) << log2_block_size
== ((old_size >> log2_block_size) - 1) << log2_block_size
== old_size - block_size
   (or something worse if old_size < block_size and the subtraction
   underflows).
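To make this concrete (assuming 4096-byte blocks, i.e. log2_block_size == 12):
with old_size == 8192 we have end_block == 2, so new_size becomes
(2 - 1) << 12 == 4096, and the file silently loses its last block.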

The "- 1", which is meant to take care of the additional end_block++ in the
body of the while loop, is harmful here.
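
A possible fix (just an untested sketch; the name start_block is of course
only a suggestion) would be to remember where end_block started and apply
the "- 1" correction only when the loop actually advanced it:

              block_t start_block = end_block;

              err = diskfs_catch_exception ();
              while (!err && end_block < writable_end)
                {
                  block_t disk_block;
                  err = ext2_getblk (node, end_block++, 1, &disk_block);
                }
              diskfs_end_catch_exception ();

              if (err)
                {
                  /* Reflect how much we allocated successfully.  Only back
                     up by one block if END_BLOCK was actually advanced past
                     a failing ext2_getblk; if diskfs_catch_exception itself
                     failed, END_BLOCK is unchanged and NEW_SIZE must not
                     drop below the old size.  */
                  if (end_block > start_block)
                    new_size = (end_block - 1) << log2_block_size;
                  else
                    new_size = end_block << log2_block_size;
                }

This keeps the existing behaviour when ext2_getblk fails, and simply leaves
the file at its old size when the exception handler setup itself fails.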

Thanks,
Marcus

-- System Information
Debian Release: woody
Kernel Version: Linux ulysses 2.2.12 #7 Mon Sep 27 01:09:52 CEST 1999 i586 unknown
