Jason Costomiris wrote:
> 
> On Sat, Mar 18, 2000 at 02:58:52PM -0500, Trevor Astrope wrote:
> : A document on the redhat.com site says raiding your swap partitions is a
> : bad idea, because you will take a big performance hit if memory needed
> : for the RAID (in case of failure) resides in swap. The machine I am
> : configuring is a dual 650 with 640MB of RAM, but I thought the machine
> : this is replacing (a P3-450 with 256MB of RAM) was enough, and it only
> : goes a few KB into swap.

   Going a few KB into swap is a good thing.  The kernel pushes out pages
that are rarely touched, freeing RAM for the buffer cache, and in some cases
it's faster to read from swap than directly from disk.  It's only bad when
you start going a few MB into swap and pages are actively thrashing in and
out.
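To tell that harmless residual swap usage apart from real memory pressure, you can watch the si/so columns in vmstat -- a quick sketch (standard procps tools, nothing box-specific):

```shell
# Show current memory and swap usage.  A few KB of "used" swap that never
# changes is harmless -- it's just idle pages pushed out to free RAM.
free

# Watch paging activity: the si (swap-in) and so (swap-out) columns.
# Sustained non-zero values there mean the box is actively paging, which
# is when swap use actually starts to hurt.
vmstat 1 5
```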

> :My question here is: if I just use 2 swap partitions that are not
> : raided and the memory needed for the raid lives in swap on the primary
> : drive that failed, are you not in the same position you were if you raided
> : your swap in the first place?
> 
> Never, ever, not in a million years should you mkswap on an md partition.
> The kernel's VM code "raid0's" the swap partitions internally, without the
> extra md code.
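Right -- concretely, the kernel stripes across swap areas on its own when they are given equal priority, so there's nothing for md to add. A sketch of what that looks like in /etc/fstab (device names are hypothetical, one swap partition per physical disk):

```
/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
```

With equal pri= values the VM round-robins swap pages across both disks, which is effectively RAID 0; with different priorities it fills the higher-priority area first.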
> 
> Besides, I'm now going to play the part of the "big dumb guy" :) who asks
> why on earth you're going to trust a critical system to software raid...
> One can only assume we're talking about a critical system here, after all,
> dual P-III/650's, 640MB RAM?  That's definitely a big boy.
> 
> I've been using refurb'd AMI Megaraid 466's, that I've been getting with
> 16MB of cache onboard for $180 (warranted for a year), and U2-LVD drives.
> The performance not only blows the doors off of software raid, it's more
> reliable.  Take, for example, a site run by a company we're incubating.
> They do MP3 streaming of independent music.  A Megaraid 466, 16MB cache
> and 6 IBM 36G U2W drives are on that puppy.  It's a RAID 5 w/a hot
> spare, so it's 144G.  I've got reiserfs running on that raid, and wow,
> is it ever FAST.  I'm a whole lot more comfortable with hardware raid.
> That machine can have two drives fail, and all I have to do is slam in
> new drives, and sit back while the raid rebuilds, automagically.  Hot-swap
> drives also mean it can happen with the system up and running.
> 
> It's nice that the kernel supports "poor man's raid", which I've used on my
> home PC before, but when you can find raid cards at a reasonable price that
> have good support under Linux, why bother?  I'd say *exactly* the same
> thing if we were talking about NT's software raid, or anyone else's for
> that matter..


  For RAID 0 or RAID 1, software RAID is often faster, as long as you
don't mind the CPU overhead.  So I'd argue that it's handy for levels 0
and 1.  RAID 5, on the other hand, is pretty much always faster with a
hardware RAID controller, since the parity calculation gets offloaded to
the card.
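For what it's worth, setting up one of those software RAID 0/1 sets is only a couple of commands with mdadm (the successor to the raidtools of that era); device names below are hypothetical:

```shell
# Create a two-disk RAID 1 mirror from two partitions (hypothetical devices).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Watch the initial sync (and any later rebuild) progress.
cat /proc/mdstat

# For RAID 0 striping instead, just change the level:
#   mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
```

If a mirror member dies, the same tool handles the swap: fail and remove the dead disk, add the replacement, and the kernel rebuilds in the background.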

-- 
You've got to be the dumbest newbie I've ever seen.
You've got white-out all over your screen.
(Weird Al Yankovic - It's All About the Pentiums)
Samuel J. Flory <[EMAIL PROTECTED]>

