james bardin wrote:
Hello,

> Has anyone here seen any numbers, or tested themselves, the rebuild

Hi James,

Yes, we test this (by purposely failing a drive on the storage we ship to our customers).

> times for large raid arrays (raid 6 specifically)?

Yes, RAID6 with up to 24 drives per RAID and drives up to 2TB. Given the uncorrectable bit error rate failure models for disks, RAID5 is *strongly* contra-indicated for more than a few TB of storage or more than a few drives (fewer than 5 drives, each under 1TB, is about the limit). The risk of a second failure, or of an unrecoverable read error, during the rebuild is simply unacceptably high, and in RAID5 that permanently takes out your data.
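
To put a rough number on that bit error rate argument, here is a back-of-envelope sketch only (the 1-in-10^14-bits URE spec and the drive counts below are illustrative assumptions, not measurements from our arrays) of the probability of hitting at least one unrecoverable read error while reading the surviving drives during a RAID5 rebuild:

    import math

    URE_RATE = 1.0 / 1e14   # unrecoverable read errors per bit read (assumed consumer-SATA spec)

    def p_ure_during_rebuild(drives_total, drive_tb):
        """Approximate P(>= 1 URE) while reading all surviving drives after one failure."""
        surviving = drives_total - 1                  # drives that must be read in full
        bits_read = surviving * drive_tb * 1e12 * 8   # TB -> bytes -> bits
        return 1.0 - math.exp(-URE_RATE * bits_read)  # Poisson approximation

    print(f" 5 x 1TB RAID5 rebuild: {p_ure_during_rebuild(5, 1.0):.0%} chance of a URE")
    print(f"12 x 2TB RAID5 rebuild: {p_ure_during_rebuild(12, 2.0):.0%} chance of a URE")

That works out to roughly a 1-in-4 chance of a rebuild-killing read error even on a small 5 x 1TB RAID5, and it gets much worse as the array grows, which is why the second parity drive in RAID6 matters.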

> I can't seem to find anything concrete to go by, and I haven't had to
> rebuild anything larger than a few TB. What happens when you lose a
> 2TB drive in a 20TB array, for instance? Do any hardware raid solutions
> help? I don't think ZFS is an option right now, so I'm looking at

We have customers with 32TB raw per RAID, and when a drive fails, it rebuilds. Rebuild time is a function of how fast the card is set up to do rebuilds; the better cards let you tune the "background" rebuild priority. At low rebuild speeds we have seen 24+ hours, and at high rebuild speeds 12-15 hours for the 32TB.
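
As a rough sanity check on those times (the sustained rebuild rates below are illustrative guesses, not settings from any particular controller), the rebuild only has to regenerate one replacement drive's worth of data, so the time scales roughly as drive capacity divided by the sustained rebuild rate:

    def rebuild_hours(drive_tb, rebuild_mb_per_s):
        """Hours to rewrite one replacement drive at a sustained rebuild rate."""
        return (drive_tb * 1e12) / (rebuild_mb_per_s * 1e6) / 3600.0

    for rate in (20, 50):
        print(f"2TB drive at {rate} MB/s: {rebuild_hours(2.0, rate):.0f} hours")

At 20 MB/s that is roughly 28 hours, and at 50 MB/s roughly 11 hours, which brackets the 12-24+ hour figures above. The array runs degraded the whole time, which is why that rebuild priority knob matters.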

ZFS is probably not what you want ... building a critical dependency upon a product with a somewhat uncertain future is not a great idea.

Joe


--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
