I've been testing the v28 patch code for a month now, and I have yet to run
into any real issues other than those mentioned below.

I'll detail some of the things I've tested; hopefully the stability of v28 on
FreeBSD will convince others to give it a try, so that the final release of v28
is as solid as possible.

I've been using FreeBSD 9.0-CURRENT as of Dec 12th, and 8.2-PRERELEASE as of Dec 16th.

What's worked well:

- I've made and destroyed small raidz pools (3-5 disks), large 26-disk 
raid-10s, and a large 20-disk raid-50.
- I've upgraded pools from v15 (ZFS filesystem version 4) with no issues on the 
different arrays noted above.
- I've confirmed that a v15 or v28 pool will import into Solaris 11 Express, 
and vice versa, with the exception of the multiple log/cache device issue noted 
below. 
- I've run many TB of data through the ZFS storage, ranging from benchmarks 
run on my VMs connected via NFS, to simple copies within the same pool, to 
copies from one pool to another. 
- I've tested pretty much every compression setting, changing them as I tweak 
my setup and try to find the best blend.
- I've added and removed many a log and cache device, some in failed states 
from hot-removals, and the pools have always stayed intact. (A rough sketch of 
the commands involved follows this list.)
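
For anyone who wants to repeat this kind of exercise, the sketch below shows 
the sort of commands involved. The pool name (tank), dataset name 
(tank/vmstore), and device names are placeholders rather than my actual layout:

    # create and later destroy a small raidz pool
    zpool create tank raidz da0 da1 da2 da3 da4
    zpool destroy tank

    # upgrade an existing v15 pool and its filesystems
    zpool upgrade tank
    zfs upgrade -a

    # move a pool between FreeBSD and Solaris 11 Express
    zpool export tank
    zpool import tank

    # try different compression settings on a dataset
    zfs set compression=lzjb tank/vmstore
    zfs set compression=gzip-9 tank/vmstore

    # add, then remove, log and cache devices
    zpool add tank log da5
    zpool add tank cache da6
    zpool remove tank da5 da6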


Issues:

- Import of pools with multiple cache or log devices. (May be a very minor 
point)

A v28 pool created in Solaris 11 Express with 2 or more log devices, or 2 or 
more cache devices, won't import in FreeBSD 9. This also applies to a pool that 
is created in FreeBSD, imported into Solaris to have the 2 log devices added 
there, then exported and imported back into FreeBSD. There are no errors; 
zpool import just hangs forever. If I reboot into Solaris, import the pool, 
remove the extra devices, and then reboot into FreeBSD, I can import the pool 
without issue. A single cache or log device imports just fine. 
Unfortunately I deleted my witness-enabled FreeBSD-9 drive, so I can't easily 
fire it back up to give more debug info. I'm hoping some kind soul will attempt 
this type of transaction and report more detail to the list.

Note - I just tried adding 2 cache devices to a raidz pool in FreeBSD, 
exporting it, and then importing it again, all without rebooting. That seems to 
work. BUT - as soon as you reboot FreeBSD with this pool still active, it hangs 
on boot. Booting into Solaris, removing the 2 cache devices, and then booting 
back into FreeBSD works. Something must be kept in memory between the export 
and the import that allows this to work. (A rough repro sketch follows.)
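
In case anyone wants to try to reproduce this, the sequence below is roughly 
what I'm describing. It's only a sketch; the pool and device names are 
placeholders, and the Solaris steps obviously have to be run from the Solaris 
side:

    # On Solaris 11 Express: create a pool (or import a FreeBSD-created one),
    # add two non-mirrored log devices, then hand it over
    zpool create tank raidz c7t2d0 c7t3d0 c7t4d0
    zpool add tank log c7t5d0 c7t6d0
    zpool export tank

    # On FreeBSD 9 + v28 patch: this is where it hangs, with no error output
    zpool import tank

    # Workaround: boot back into Solaris, strip the extra devices, re-export
    zpool import tank
    zpool remove tank c7t5d0 c7t6d0
    zpool export tank

    # FreeBSD can now import the pool without issue
    zpool import tank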



- Speed. (More of an issue, but what do we do?)

Wow, it's much slower than Solaris 11 Express for transactions. I do understand 
that Solaris will have a slight advantage over any port of ZFS. All of my speed 
tests were done with a kernel built without debugging options, and yes, these 
are -CURRENT and -PRERELEASE builds, but the speed difference is very large.

At first, I thought it might be more of an issue with the ix0/Intel X520-DA2 
10GbE driver I'm using, since the bulk of my tests are over NFS (I'm going to 
use this as a SAN via NFS, so I test in that environment). 

But - I did a raw cp of several TB from one pool to another. I executed the 
same command under FreeBSD as I did under Solaris 11 Express. Under FreeBSD, 
the copy took 36 hours. With a fresh destination pool with the same 
settings/compression/etc. under Solaris, the copy took 7.5 hours. 
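
For reference, the copy was nothing fancy; the command was along these lines, 
with the dataset paths being placeholders rather than my real layout:

    # same command on both operating systems, several TB of data
    time cp -Rp /tank1/vmimages /tank2/vmimages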

Here's a quick breakdown of the difference in speed I'm seeing between Solaris 
11 Express and FreeBSD. The test is Performance Test 6.1 on a Windows 2003 
server, connected via NFS to the FreeBSD or Solaris box.  More details are 
here: 
http://christopher-technicalmusings.blogspot.com/2011/01/solaris-11-express-faster-speed-for-san.html

Solaris 11 Express snv_151a

903 MB/s - Fileserver
466 MB/s - Webserver
53 MB/s - Workstation
201 MB/s - Database

FreeBSD 9.0-CURRENT @ Dec 12th 2010 w/ v28 patch, all debugging off

95 MB/s - Fileserver
60 MB/s - Webserver
30 MB/s - Workstation
32 MB/s - Database

Massive difference, as you can see. Same machine, different boot drives. That's 
a real 903 MB/s on the Solaris side as well - no cache devices or separate log 
device in place, just a basic 5-disk raidz pool. I've tried many a tweak to get 
these speeds up higher. The old v15 could hit the mid 400s on the Fileserver 
test with zil_disable on, but that's no longer an option for v28 pools. I 
should compile my various test results into a summary and make a separate blog 
entry for those who care, as I also fiddled with vfs.nfsrv.async with little 
luck. I took great care to make sure the ZFS details were the same across the 
tests. 
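
For anyone who wants to repeat the ZIL experiment on v28: as far as I can tell, 
the old vfs.zfs.zil_disable sysctl is gone and the closest equivalent is the 
per-dataset sync property, alongside the NFS server sysctl I mentioned. 
Something like this (the dataset name is a placeholder, and disabling sync is 
only sensible for benchmarking, never for data you care about):

    # v28-era stand-in for the old zil_disable tweak (benchmarking only)
    zfs set sync=disabled tank/vmstore
    zfs get sync tank/vmstore

    # the NFS server knob I fiddled with, with little luck
    sysctl vfs.nfsrv.async=1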

FreeBSD 9 is faster than 8.2 by a small amount. Between v28 pools and v15 
pools there is some speed degradation on both 8.2 and 9, but nothing as big as 
the difference between Solaris and FreeBSD.

I haven't benchmarked OpenSolaris or any Solaris release older than 11, so I'm 
not sure whether this is a recent speed boost from the Solaris camp or whether 
it has always been there.

As always, I'm delighted with the work that PJD and others have poured into 
ZFS on FreeBSD. I'm not knocking this implementation, just pointing out some 
things that may not be as well known. 

I am continuing to play with Solaris 11 Express, but I'm keeping my pools at 
v28 so I can boot into either OS to test. 
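
Since Solaris 11 Express can take pools past v28, keeping them dual-bootable 
means not letting zpool upgrade go all the way. As far as I know, the way to 
cap the version from the Solaris side is the -V flag (the pool name is a 
placeholder):

    # upgrade only as far as v28 so the pool stays importable on FreeBSD
    zpool upgrade -V 28 tank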

