Trevor Astrope wrote:
>
> On Sun, 19 Mar 2000, Samuel Flory wrote:
>
> > Jason Costomiris wrote:
> > >
> > > On Sat, Mar 18, 2000 at 02:58:52PM -0500, Trevor Astrope wrote:
> > > : My question here is: if I just use 2 swap partitions that are not
> > > : raided, and the memory needed for the raid lives in swap on the primary
> > > : drive that failed, are you not in the same position you'd be in if you had
> > > : raided your swap in the first place?
> > >
> > > Never, ever, not in a million years should you mkswap on an md partition.
> > > The kernel's VM code "raid0's" the swap partitions internally, without the
> > > extra md code.
>
> Maybe I should explain myself... What I would like to do is use raid1 to
> mirror the 2 drives I have. If one goes, I understand that I would still
> be able to operate on the other drive until I could take the machine down
> and replace the hosed drive...cold swap, if you will. Am I under a
> mistaken impression? Is this just not possible? If I had 2 separate swap
> partitions and a drive went, would that take the machine down? A machine
> that is still up but slowed down seems preferable to me. Of course, I am
> hoping to have a 2 node cluster set up, but one can never have too much
> redundancy...
>
Hmm, that's an interesting question. I'll have to try that on one of my
test machines. If you are actively using swap when a drive dies, that
could be bad. Personally I advocate not using swap at all and just
having enough memory. (This is always surprisingly unpopular.) If I
were you I'd put the swap partition on a RAID device and hope I never
had to swap. I'd limit your swap space to 128-256 megs, as you won't be
able to use more than 64 megs on a single logical device under normal
circumstances. (Your machine will slow to a crawl well before you hit
64 megs of swap usage.)
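
For what it's worth, the internal striping Jason mentioned doesn't need
md at all; you just give both swap areas the same priority. A rough
sketch, assuming swap lives on the second partition of each disk
(adjust the device names to your own layout):

    mkswap /dev/sda2
    mkswap /dev/sdb2

and then in /etc/fstab:

    /dev/sda2    swap    swap    defaults,pri=1    0 0
    /dev/sdb2    swap    swap    defaults,pri=1    0 0

With equal priorities the kernel allocates swap pages round-robin
across both devices, which is the "raid0 internally" effect. Note that
this buys you speed, not redundancy; if either disk dies while pages
are swapped out on it, you're still in trouble.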
> You answered your own question: poor man's raid. While the price of
> the raid cards isn't so much of an issue, the price of the drives to do
> hardware raid5 is. On the other hand, if hardware raid1 gives much more
> reliability than software raid1, then I'd be all for that. But I didn't
> think it did. With the machine I described, I could not get the budget for
> hardware raid with hot-swappable drives, or I definitely would have. I had
> to make my choices, and the machine runs a lot of database-intensive cgi
> scripts, so I put most of my money into cpus and memory. I hoped I
> could get a measure of reliability by having 2 10k rpm UW3 drives and
> using poor man's software raid 1. :-)
>
RAID 1 in software isn't that bad, but personally I'd use HW RAID, as
it simplifies things. (I nearly trashed a server running SW RAID when
I was attempting to "fix" it late one night.) In your case you will
lose some cpu cycles to SW RAID, but not many. In some cases SW RAID 0
and 1 will beat your $2000+ RAID controller. Of course, IO benchmarks
don't take into account the possibility of a heavy cpu load.
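
(If you do run SW RAID, it pays to check /proc/mdstat before touching
a degraded array. Assuming the raidtools package, swapping a dead
mirror half out and back in looks roughly like this; the device names
are only examples:

    cat /proc/mdstat                    # see which half of the mirror is dead
    raidhotremove /dev/md0 /dev/sdb2    # drop the failed partition
    raidhotadd /dev/md0 /dev/sdb2       # re-add it after replacing the drive

The resync then runs in the background, and you can watch its progress
in /proc/mdstat.)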
> On another note, do you or anyone else have a good method for setting up
> raid partitions? Using Disk Druid, when I create the linux native boot
> partitions, the /boot on /dev/sda1 goes fine, but when I go to create a
> /boot2 on /dev/sdb, it gets renamed to /dev/sdb5 when I add further
> partitions to that drive. What I was thinking of doing is installing
> regular linux native partitions on /dev/sda, then setting up the partitions
> manually on /dev/sdb, and then reinstalling without repartitioning.
>
> Has this situation been improved at all in 6.2beta?
>
In theory you can create SW RAID devices in the installer. I haven't
looked at it, as VA doesn't support SW RAID. (Too many customers didn't
properly understand it and lost everything.) I know a number of people
who have successfully used SW RAID, as well as a few who lost everything
through a simple mistake.
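
If you set it up by hand instead, a minimal /etc/raidtab for a two-disk
mirror looks something like this (the partition numbers are just an
example; use whatever matches your layout):

    raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sda2
        raid-disk               0
        device                  /dev/sdb2
        raid-disk               1

Run mkraid /dev/md0, let the initial sync finish, then mke2fs the md
device and mount it. Make sure both partitions are type fd (Linux raid
autodetect) so the kernel assembles the array itself at boot.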
--
You've got to be the dumbest newbie I've ever seen.
You've got white out all over your screen.
(Weird Al Yankovic - It's All About the Pentiums)
Samuel J. Flory <[EMAIL PROTECTED]>
--
To unsubscribe: mail [EMAIL PROTECTED] with "unsubscribe"
as the Subject.