** Description changed:

[SRU Justification, Artful]

[Impact]

Booting the 4.13 Artful kernel with vagrant using VirtualBox trips the
warning:

[   61.010337] VFS: brelse: Trying to free free buffer
[   61.114875] ------------[ cut here ]------------
[   61.114886] WARNING: CPU: 0 PID: 683 at /build/linux-XO_uEE/linux-4.13.0/fs/buffer.c:1205 __brelse+0x21/0x30

and a failed resize of a partition. The root cause has been bisected
down to the following commit:

commit c20cfc27a47307e811346f85959cf3cc07ae42f9
Author: Christoph Hellwig <h...@lst.de>
Date:   Wed Apr 5 19:21:07 2017 +0200

    block: stop using blkdev_issue_write_same for zeroing

[Fix]

The following upstream commit directly fixes this issue:

commit d5ce4c31d6df518dd8f63bbae20d7423c5018a6c
Author: Ilya Dryomov <idryo...@gmail.com>
Date:   Mon Oct 16 15:59:10 2017 +0200

    block: cope with WRITE ZEROES failing in blkdev_issue_zeroout()

However, we also require a backport of the following upstream commit
so that the above commit applies cleanly:

commit 425a4dba7953e35ffd096771973add6d2f40d2ed
Author: Ilya Dryomov <idryo...@gmail.com>
Date:   Mon Oct 16 15:59:09 2017 +0200

    block: factor out __blkdev_issue_zero_pages()

[Testcase]

On Ubuntu Xenial:

1. sudo apt-get install virtualbox vagrant
2. edit /etc/group and add one's user name to the vboxusers group
3. log out and log back in
4. vagrant init ubuntu/artful64
5. vagrant up
6. vagrant ssh
7. dmesg | grep "VFS: brelse"

(These steps are consolidated into a script following this
justification.)

Without the fix, one will see the VFS brelse warning message and the /
partition will not have been resized.

With a fixed system there is no VFS brelse warning and / has been
resized as expected.

[Regression potential]

These patches touch the blk library, so they could potentially break
the block layer and corrupt data on disk. However, these are upstream
fixes that address the buggy commit
c20cfc27a47307e811346f85959cf3cc07ae42f9 and are known to resolve the
bug.
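For convenience, the [Testcase] steps above can be run end to end as a
script. This is a minimal sketch, assuming a Xenial host where the
current user has sudo; the working directory name is arbitrary:

#!/bin/sh -e
# Reproduce LP#1726818: bring up the ubuntu/artful64 box, then check
# for the brelse warning and whether / was grown.
sudo apt-get install -y virtualbox vagrant
sudo adduser "$USER" vboxusers   # group change takes effect on next login

mkdir -p lp1726818 && cd lp1726818
vagrant init ubuntu/artful64
vagrant up

# On an affected kernel the grep prints the warning and / is ~2.2G;
# on a fixed kernel the grep prints nothing and / is ~10G.
vagrant ssh -c 'dmesg | grep "VFS: brelse" || true; df -h /'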
-------------------------------

After building a new vagrant instance using the ubuntu/artful64 box
(v20171023.1.0), the size of the filesystem seems to be much too small.

Here's the output of `df -h` on the newly built instance:

vagrant@ubuntu-artful:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            991M     0  991M   0% /dev
tmpfs           200M  3.2M  197M   2% /run
/dev/sda1       2.2G  2.1G   85M  97% /
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
vagrant         210G  182G   28G  87% /vagrant
tmpfs           200M     0  200M   0% /run/user/1000

For comparison, here is the same from the latest zesty64 box:

ubuntu@ubuntu-zesty:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            992M     0  992M   0% /dev
tmpfs           200M  3.2M  197M   2% /run
/dev/sda1       9.7G  2.5G  7.3G  26% /
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
vagrant         210G  183G   28G  88% /vagrant
tmpfs           200M     0  200M   0% /run/user/1000

With artful64, the size of /dev/sda1 is reported as 2.2G, which results
in 97% disk usage immediately after building, even though the disk size
is 10G, as reported by fdisk:

vagrant@ubuntu-artful:~$ sudo fdisk -l
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4ad77c39

Device     Boot Start      End  Sectors Size Id Type
/dev/sda1  *     2048 20971486 20969439  10G 83 Linux

Disk /dev/sdb: 10 MiB, 10485760 bytes, 20480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Almost any additional installation results in a "No space left on
device" error.
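Note that fdisk shows the partition already spans the whole 10G disk;
it is the ext4 grow step that failed. On a kernel carrying the fix, the
online resize that cloud-init attempts at boot can also be completed by
hand inside the guest. A sketch, assuming the root filesystem is ext4
on /dev/sda1 as shown above:

# Inside the guest: grow the mounted ext4 root to fill its partition.
# This appears to be the step that trips the brelse warning on
# affected kernels.
sudo resize2fs /dev/sda1
df -h /    # should now report ~10G for /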
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1726818

Title:
  vagrant artful64 box filesystem too small

Status in cloud-images:
  Confirmed
Status in linux package in Ubuntu:
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1726818/+subscriptions

--
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp