On Wed, Dec 18, 2002 at 02:52:28PM -0800, nate wrote:
> have you tested your raid1?  I have only had one failure with software
> RAID 1 on Linux out of about 10 arrays, and when it failed, the system
> went down with it.  IMO, the point of software RAID is to protect data,
> not to protect uptime.  Data can still be lost due to an unclean unmount
> and the like, but much of the data would be intact.

I'll disagree with your opinion.  I have been managing VMS systems for
about 20 years, the last 12 or so with software-based RAID.  I have
*never* lost a system when a shadowed disk failed, and I have never
lost data except when I've had multiple simultaneous failures (yes,
that has happened a few times).  A properly architected software
mirroring subsystem will not lose data, and it will be transparent to
*all* I/O, including swapping I/O.  With hardware and software RAID
combined (we use hardware RAID in each of two data centers, and then
software RAID across the data centers), I've never lost a single byte
of data.  To oversimplify: the I/O is not committed back to the host
application until the write is acknowledged on both sides.  If you
cache your writes, however, and trust your I/O subsystem to get it
right when it doesn't, well, you've shot yourself in the foot.
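
To make that commit rule concrete, here is a minimal sketch in C.  It
is my illustration only - not VMS host-based shadowing code and not the
Linux md driver - and the function name and the two file descriptors
(assumed already open on the two mirror members) are hypothetical:

    #include <sys/types.h>
    #include <unistd.h>

    /* Write one block to both mirror members; report success to the
     * caller only after both sides have acknowledged and the data is
     * on stable storage.  Sketch of the commit rule, nothing more. */
    int mirrored_write(int fd_a, int fd_b,
                       const void *buf, size_t len, off_t off)
    {
        if (pwrite(fd_a, buf, len, off) != (ssize_t)len) return -1;
        if (pwrite(fd_b, buf, len, off) != (ssize_t)len) return -1;

        /* No early acknowledgement: flush both copies first. */
        if (fsync(fd_a) != 0) return -1;
        if (fsync(fd_b) != 0) return -1;

        return 0;  /* only now is the write "committed" to the host */
    }

Acknowledge after only the first pwrite(), or let a write-back cache
return early, and a failure in that window can lose a write the
application already believes is committed - which is exactly the
foot-shooting above.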
 
> I have even had hardware RAID 10 (3ware 6800 series) take down a system
> when a disk (1 out of 6) died.  That happened every time a disk failed;
> the system would immediately kernel panic.

Then the RAID subsystem is broken.  With hardware RAID, the OS does not
have to know that there is any hardware protection underneath.  The only
thing my VMS systems see of it is the error-logging information the
subsystem sends back to the hosts.  My Unix systems on an EMC Symmetrix
do not even get that - they just see disks that are always available.

> I would not trust software RAID on a system as "vital" as yours seems
> to be.  Go with a hardware SCSI RAID controller with SCA hot-swap
> drives.  I have had more than 30 IDE disk failures in the past couple
> of years; my SCSI disk failures have been a fraction of that, 1 or 2
> drives.  I have had about twice as many IDE disks in use as SCSI.

I've had dozens of SCSI disks fail. It's the luck of the draw - the
media is the same these days.

-- 
Ed Wilts, Mounds View, MN, USA
mailto:[EMAIL PROTECTED]
Member #1, Red Hat Community Ambassador Program


