On Sun, 19 Mar 2000, Samuel Flory wrote:
> Jason Costomiris wrote:
> >
> > On Sat, Mar 18, 2000 at 02:58:52PM -0500, Trevor Astrope wrote:
> > : My question here is: if I just use 2 swap partitions that are not
> > : raided and the memory needed for the raid lives in swap on the
> > : primary drive that failed, are you not in the same position you were
> > : if you raided your swap in the first place?
> >
> > Never, ever, not in a million years should you mkswap on an md partition.
> > The kernel's VM code "raid0's" the swap partitions internally, without the
> > extra md code.
Maybe I should explain myself... What I would like to do is use raid1 to
mirror the 2 drives I have. If one goes, I understand that I would still
be able to operate on the other drive until I could take the machine down
and replace the hosed drive...cold swap, if you will. Am I under a
mistaken impression? Is this just not possible? If I had 2 separate swap
partitions and a drive went, would that take the machine down? A machine
that is still up but slowed down seems preferable to me. Of course, I am
hoping to have a 2 node cluster set up, but one can never have too much
redundancy...
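Concretely, what I had in mind for the mirrored partitions was an
/etc/raidtab roughly like this (raidtools style -- the partition numbers
are just a guess at my own layout, not a recommendation):

    raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sda5
        raid-disk               0
        device                  /dev/sdb5
        raid-disk               1

then mkraid /dev/md0 and mke2fs it as usual. My understanding is that the
md driver keeps the array running in degraded mode if one member dies,
which is exactly the behaviour I'm after.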
> > It's nice that the kernel supports "poor man's raid", which I've used on my
> > home PC before, but when you can find raid cards at a reasonable price that
> > have good support under Linux, why bother? I'd say *exactly* the same
> > thing if we were talking about NT's software raid, or anyone else's for
> > that matter..
You answered your own question: poor man's raid. While the price of
the raid cards isn't so much of an issue, the price of the drives to do
hardware raid5 is. On the other hand, if hardware raid1 gives much more
reliability than software raid1, then I'd be all for that. But I didn't
think it did. With the machine I described, I could not get the budget for
hardware raid with hot swappable drives or I definitely would have. I had
to make my choices, and the machine runs a lot of database-intensive CGI
scripts, so I put most of my money into the CPUs and memory. I hoped I
could get a measure of reliability by having 2 10k rpm UW3 drives and
using poor man's software raid 1. :-)
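At least with the software mirror I can keep an eye on its health from
cron with nothing fancier than:

    # both members healthy shows up as [UU]; a dead disk shows [_U] or [U_]
    cat /proc/mdstat

so a failed drive should at least not go unnoticed.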
On another note, do you or anyone else have a good method for setting up
raid partitions? Using Disk Druid, when I create the Linux native boot
partitions, the /boot on /dev/sda1 goes fine, but when I go to create
/boot2 on /dev/sdb, it gets renamed to /dev/sdb5 as I add further
partitions to that drive. What I was thinking of doing is to install
regular Linux native partitions on /dev/sda, set up the partitions on
/dev/sdb manually, and then reinstall without repartitioning.
Has this situation been improved at all in the 6.2 beta?
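For that workaround, what I'd do on the second pass is roughly this (from
memory, so treat the exact keystrokes as approximate):

    # duplicate sda's layout onto sdb by hand after the first install
    fdisk /dev/sdb
    #   n -> create each partition to match the sizes on /dev/sda
    #   t -> set the type: 82 for swap, 83 for Linux native,
    #        fd for Linux raid autodetect on future md members
    #   w -> write the table and exit
    fdisk -l        # sanity check that both tables line up

and then rerun the install, telling it to use the existing partitions
instead of repartitioning.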
Thanks for your feedback.
Regards,
Trevor Astrope
[EMAIL PROTECTED]
--
To unsubscribe: mail [EMAIL PROTECTED] with "unsubscribe"
as the Subject.