>
> Hi. To Nate and John Slivko, who both responded: thanks!
>
> I've used RAID before but I just didn't think about it this way.
> I guess I was thinking inside the RAID box since I'm used to
> RAID as a hardware only thing.  This is a very neat way to use
> software RAID.
>
>
> Are there any special implications or considerations when using a large
> (hundreds of GBs) database on a software RAID filesystem?


I would think so. The main one is that hardware RAID controllers are
designed to handle failed drives gracefully (the good SCSI ones, at
least; IDE RAID cards aren't really up to the task for the most part).
Standard SCSI controllers are not, so with software RAID a failed drive
could still cause a system failure (much more likely with IDE than
SCSI). For most people, RAID is about protecting the integrity of data
rather than ensuring maximum uptime. I purchased three 3ware-based
machines which run 200+ GB RAID arrays (using 80GB Maxtor IDE drives).
The systems run RAID 10, but even with RAID 10 a disk failure produces
a kernel panic. So the 3ware controller obviously doesn't mask the
drive failure from the OS, even though a single failure loses no data,
since everything is still available on the second disk of the mirror.
(RAID 10 for me = three 2-disk RAID 1 arrays striped together as
RAID 0, so you can lose up to 3 disks, one from each mirror, and not
lose any data, provided the controller does its job.)

100GB+ of databases makes me think the system(s) would be very
critical, so downtime may not be tolerated as well as it is on most of
my systems.

If protecting the integrity of the data is more important than
protecting uptime/availability, then software RAID is a good,
cost-effective alternative. But in a DB environment I think it
introduces another, rather complex layer to the integrity question:
you could still get data corruption in a crash, even if the actual
files are intact.

nate






