Ben Russo said:

> Really?  I haven't had the fun of playing with any IDE HW RAID devices..
> But do you have LOTS of experience with this?  Or is it just a few cases?
> I am curious because I am considering purchasing some Promise HW IDE RAID
> arrays that have SCSI host interfaces.  They just seem so damn cheap.


Well, I can share some of my experiences. I'm sure they are somewhat
biased, but I've done a lot of research on it...

About a year and a half ago I deployed 3 hardware RAID 5 systems. Each
had 4 x 75GB IBM 75GXP disks in hardware RAID 5 on a 3ware 6800 8-port
IDE controller, and 1 x 30GB IBM disk connected to an Intel IDE controller
(the motherboard was an Intel L440GX+). The systems were single-proc P3-800s
with 256MB of ECC RAM, 300-watt power supplies, and 4U extended-ATX chassis
with 3 large fans; I'm not sure of the exact size, but I would guess 5"x5".

Anyway, over the span of 6 months I would estimate we had about 10
disk failures on these systems alone. On one unlucky weekend we had
4 disks fail across 2 systems in 3 days; since RAID 5 only survives a
single disk failure, that wiped out the entire array on one of the
systems. On the other system, the 2nd disk that failed was a brand new
one that was FedExed overnight for Saturday delivery, packed in the
standard 2-3" of foam, and it was DOA (bad sectors on arrival; the
controller refused to use it).

We replaced the disk coolers, the power supplies, and the disks themselves,
and finally settled on 6 x 80GB Maxtor drives in RAID 10, with a 30GB
Maxtor drive as the boot drive for each system and 450-watt power supplies.
Failure rates went way down, but we still had I think 3 disk failures
in the past year (far better than what IBM gave us). A 4th disk is on
the verge of failure.
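
If it helps to put the two layouts side by side, here's a quick
back-of-the-envelope sketch (just the standard RAID capacity and
fault-tolerance math applied to the disk counts above; the function
names are mine, not anything from the 3ware tools):

# Back-of-the-envelope comparison of the old and new array layouts.
# RAID 5:  usable = (n - 1) * disk_size, survives any single disk failure.
# RAID 10: usable = (n / 2) * disk_size, survives one failure per mirror
#          pair (worst case: a 2nd failure in the same pair kills the array).

def raid5(n_disks, size_gb):
    return {"usable_gb": (n_disks - 1) * size_gb,
            "failures_survived_worst_case": 1}

def raid10(n_disks, size_gb):
    return {"usable_gb": (n_disks // 2) * size_gb,
            "failures_survived_worst_case": 1,
            "failures_survived_best_case": n_disks // 2}

print("old: 4 x 75GB RAID 5 :", raid5(4, 75))
print("new: 6 x 80GB RAID 10:", raid10(6, 80))

So we actually gained a little usable space (225GB -> 240GB), and as
long as two failures don't land in the same mirror pair, the RAID 10
setup can ride out more of them than the old RAID 5 could.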

For the longest time I placed the blame on these systems; I thought perhaps
it was the power supply, or the RAID card.. but as time went on, I had
disks (these were all IBMs; at the time I had everything IBM, since I had
such good experience in the past with their 10-36GB drives) fail in Sun
Ultra 10s, P3s, P4s, other dual-proc P3s, home systems, work systems..
co-workers' home systems.. It was a distinct pattern. And of course there's
a lawsuit against IBM over the drives (which I'm one of the main people
involved in).

Maxtor has been much better, though as above they are not perfect.
I bought a 100GB Maxtor drive around August 2001. The drive sat in
a very well-run setup: it was cool or cold to the touch at all times,
connected to a good battery backup, on a 300- to 450-watt power supply
(I upgraded halfway through). In April of this year the drive hiccuped
and I lost some data (what pissed me off the most was that all my MRTG
logs were gone). It continued working; I migrated again this August to
my new Western Digital 8MB-cache RAID 1 setup and retired the Maxtor
to inactive duty. Shortly after, it started spitting out errors (the
kernel would say the device is not ready even though it wasn't even
mounted). So I convinced Maxtor to RMA it and received the new drive,
but haven't had a chance to try it yet. I did have at least 3 of the
80GB Maxtor drives fail in my 3ware systems, and 1 more is on the
fritz; it has acted like it failed 2 or 3 times but has recovered
every time so far, and the last remaining IT guy at the company is
too busy to worry about it. The system it's on is a backup server,
so if it were to lose all its data it wouldn't be a big loss, just
a pain to rebuild everything.

I've also read that many IDE manufacturers have reduced their warranties
to 1 year, with the exception of a few models like my WD Special
Editions, which I hear have a better warranty.. But to be honest I have
lost more drives (most of which are IDE) in the past 2 years than I
have since I started using computers back around '91. I can probably
count the number of failed drives before that on one hand. It's quite
scary to me. I shouldn't have to resort to RAID to maintain data
integrity. A drive life of 2 or 3 years would be fine, but having
drives die before even 1 year is up is just plain horrible. And having
replacement drives die within months of replacing them is even worse.

Most of my failures occurred in a climate-controlled environment, with
good-quality 1.4kVA battery backup systems.. so it's not like I treated
them badly. They had the best care of all the systems on the network,
and they still had massive failures.

I literally spent several days' worth of time replacing disks, rebuilding
systems, and reinstalling them. On our backup servers, for example, it
could take up to 2 hours to replace a disk; they were really hard to work
with. Having so many IDE cables and power cables in such a small space,
and me having big hands, doesn't make for easy drive replacement :(

And I have heard stories from others as well, so I know it's not just
me. I remember last year on the phone with 3ware, they specifically
told me they had customers returning literally thousands of IBM 75GXP
drives due to failures. I am happy so far with my WD Special Edition
drives, but I only got them a few months ago; who knows what they will
be like in a year.

Meanwhile I have several SCSI drives that have been humming along for
years and years without the slightest problem. A couple of my SCSI
controllers are more than 6 years old :) and I still use them every day..

My experiences are probably biased, but I've had some really bad luck
with IDE drives.

IDE is still good for big storage at low cost.. but I would recommend
SCSI over IDE for reliability's sake. I haven't used many HP disks
myself (aside from the ones in my HP-UX machines, which are probably HP).
Most of the SCSI disks I deployed were IBM as well, with a few being
Quantum (now Maxtor, I think) and a few Seagates. But the vast majority
of the SCSI disks were IBM. Much of my home equipment is IBM disks too;
I haven't bought a new SCSI disk in ages, and most of what I bought was
used.. but still no troubles.

nate





