Package: lvm2
Version: 2.02.95-4
Severity: normal
This is a bug in LVM2 running on a freshly installed Debian 6.0.6 system that was then immediately upgraded to testing. Creating a RAID5 logical volume succeeds, but extending it fails with an internal error:

root@lettuce:/# lvcreate --type raid5 -i3 -L 10GiB -n test_raid5 lettuce
  Using default stripesize 64.00 KiB
  Rounding size (160 extents) up to stripe boundary size (162 extents)
  Logical volume "test_raid5" created
root@lettuce:/# lvdisplay lettuce/test_raid5
  --- Logical volume ---
  LV Path                /dev/lettuce/test_raid5
  LV Name                test_raid5
  VG Name                lettuce
  LV UUID                LcZhEP-uX5I-vg9y-2hUN-TROs-gRZV-QM5Tm8
  LV Write Access        read/write
  LV Creation host, time lettuce, 2012-10-28 01:49:20 +1100
  LV Status              available
  # open                 0
  LV Size                10.12 GiB
  Current LE             162
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           254:33
root@lettuce:/# lvextend -L +10GiB lettuce/test_raid5
  Extending logical volume test_raid5 to 20.12 GiB
  Internal error: _alloc_init called for non-virtual segment with no disk space.
root@lettuce:/# lvm version
  LVM version:     2.02.95(2) (2012-03-06)
  Library version: 1.02.74 (2012-03-06)
  Driver version:  4.22.0

Relevant output from dmesg:

[ 1018.116621] device-mapper: raid: Superblocks created for new array
[ 1018.136202] md/raid:mdX: not clean -- starting background reconstruction
[ 1018.136232] md/raid:mdX: device dm-32 operational as raid disk 3
[ 1018.136240] md/raid:mdX: device dm-30 operational as raid disk 2
[ 1018.136246] md/raid:mdX: device dm-28 operational as raid disk 1
[ 1018.136251] md/raid:mdX: device dm-26 operational as raid disk 0
[ 1018.137297] md/raid:mdX: allocated 4280kB
[ 1018.137456] md/raid:mdX: raid level 5 active with 4 out of 4 devices, algorithm 2
[ 1018.137464] RAID conf printout:
[ 1018.137470]  --- level:5 rd:4 wd:4
[ 1018.137476]  disk 0, o:1, dev:dm-26
[ 1018.137481]  disk 1, o:1, dev:dm-28
[ 1018.137486]  disk 2, o:1, dev:dm-30
[ 1018.137491]  disk 3, o:1, dev:dm-32
[ 1018.137501] Choosing daemon_sleep default (5 sec)
[ 1018.137506] created bitmap (4 pages) for device mdX
[ 1018.176462] mdX: bitmap file is out of date, doing full recovery
[ 1018.184818] mdX: bitmap initialized from disk: read 1/1 pages, set 6912 of 6912 bits
[ 1018.194336] md: resync of RAID array mdX
[ 1018.194345] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 1018.194351] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 1018.194365] md: using 128k window, over a total of 3538944k.
[ 1088.249766] md: mdX: resync done.
[ 1088.286216] RAID conf printout:
[ 1088.286225]  --- level:5 rd:4 wd:4
[ 1088.286232]  disk 0, o:1, dev:dm-26
[ 1088.286238]  disk 1, o:1, dev:dm-28
[ 1088.286243]  disk 2, o:1, dev:dm-30
[ 1088.286247]  disk 3, o:1, dev:dm-32

Other information:

root@lettuce:/# vgdisplay lettuce
  --- Volume group ---
  VG Name               lettuce
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  57
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                11
  Open LV               4
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               10.92 TiB
  PE Size               64.00 MiB
  Total PE              178848
  Alloc PE / Size       44627 / 2.72 TiB
  Free  PE / Size       134221 / 8.19 TiB
  VG UUID               fXqFmJ-1LEq-uf3v-eySJ-vlG9-oC1d-I4Y0SI
root@lettuce:/# pvs
  PV         VG      Fmt  Attr PSize  PFree
  /dev/sda2  lettuce lvm2 a--  2.73t  2.73t
  /dev/sdb2  lettuce lvm2 a--  2.73t 34.12g
  /dev/sdc2  lettuce lvm2 a--  2.73t  2.71t
  /dev/sdd2  lettuce lvm2 a--  2.73t  2.73t

-- System Information:
Debian Release: 6.0.6
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable')
Architecture: i386 (i686)

Kernel: Linux 3.4.2-linode44 (SMP w/4 CPU cores)
Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968)
Shell: /bin/sh linked to /bin/dash
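
For reference, here is the reproduction distilled into a standalone script. This is a minimal sketch based on the transcript above; the volume group name "lettuce" and the sizes are taken from this report, and any VG with enough free extents on four PVs should behave the same:

  #!/bin/sh
  # Minimal reproducer (sketch): create, then extend, a RAID5 LV.
  VG=lettuce

  # 3 data stripes (-i3) plus parity = 4 devices; lvcreate rounds the
  # 160 requested extents up to 162 so they divide evenly across 3 stripes.
  lvcreate --type raid5 -i3 -L 10GiB -n test_raid5 "$VG"

  # Extending the volume then aborts with:
  #   Internal error: _alloc_init called for non-virtual segment with no disk space.
  lvextend -L +10GiB "$VG/test_raid5"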