On Tue, Jul 13, 2010 at 04:02:42PM +0200, you (Martin Matuska) sent the 
following to the -current list:
>  Dear community,
> 
> Feel free to test everything and don't forget to report any bugs found.

When I create a raidz pool of 3 equally sized hdd's (3x2Tb WD caviar green 
drives) the reported available space by zpool and zfs is VERY different (not 
just the known differences).

On a 9.0-CURRENT amd64 box:

# uname -a
FreeBSD trinity.lordsith.net 9.0-CURRENT FreeBSD 9.0-CURRENT #1: Tue Jul 13 
21:58:14 UTC 2010     r...@trinity.lordsith.net:/usr/obj/usr/src/sys/trinity  
amd64

# zpool create pool1 raidz ada2 ada3 ada4
# zpool list pool1
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool1  5.44T   147K  5.44T     0%  ONLINE  -

# ada drives dmesg output:
ada2 at ahcich4 bus 0 scbus5 target 0 lun 0
ada2: <WDC WD20EARS-00MVWB0 50.0AB50> ATA-8 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada3 at ahcich5 bus 0 scbus6 target 0 lun 0
ada3: <WDC WD20EARS-00MVWB0 50.0AB50> ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada4 at ahcich6 bus 0 scbus7 target 0 lun 0
ada4: <WDC WD20EADS-11R6B1 80.00A80> ATA-8 SATA 2.x device
ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada4: Command Queueing enabled
ada4: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)

zfs list however only shows:
# zfs list pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1  91.9K  3.56T  28.0K  /pool1

I just lost the space of an entire hdd!

To rule out a possible drive issue I created a raidz pool based on 3 65m files.

# dd if=/dev/zero of=/file1 bs=1m count=65 
# dd if=/dev/zero of=/file2 bs=1m count=65 
# dd if=/dev/zero of=/file3 bs=1m count=65 
# zpool create test raidz /file1 /file2 /file3
#
# zpool list test
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
test   181M   147K   181M     0%  ONLINE  -
# zfs list test
NAME   USED  AVAIL  REFER  MOUNTPOINT
test  91.9K  88.5M  28.0K  /test

When I create a non-redundant storage pool using the same 3 files or 3 drives 
the available space reported by zfs is what I'm expecting to see though so it 
looks like creating a raidz storage pool is showing very weird behavior.

This doesn't have as much to do with the ZFS v15 bits commited to -HEAD since I 
have the exact same behavior on a 8.0-RELEASE-p2 i386 box with ZFS v14.

A friend of mine is running osol build 117 but he created his raidz pool on an 
even older build though.
His raidz pool also uses 3 equally-sized drives (3x2Tb) and his raidz pool is 
showing:

% zfs list -r pool2
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
pool2                                          3.32T  2.06T  3.18T  
/export/pool2
% df -h pool2
Filesystem             size   used  avail capacity  Mounted on
pool2                  5.4T   3.2T   2.1T    61%    /export/pool2

To run further tests he also created a test raidz pool using 3 65m files:

% zfs list test2
NAME    USED  AVAIL  REFER  MOUNTPOINT
test2  73.5K   149M    21K  /test2

So on osol build 117 the available space is what I'm expecting to see whereas 
on FreeBSD 9.0-CURRENT amd64 and 8.0-RELEASE-p2 i386 

Is someone having the same issues?

Cheers,
marco

Attachment: pgpQwcquU4UgC.pgp
Description: PGP signature

Reply via email to