I have a box that is soon due to be upgraded to etch. I suspect that
one of the disks is getting a little old (some odd logs, a couple of
kernel panics), so I wondered whether now is a good time to try out
RAID (since I'll be replacing disks anyway). I've used lvm2 quite a
bit, but never on top of RAID, so I'd like to test the combination.
Here's what I was thinking. First, I get hold of three 250G IDE
disks and install them in a chassis I have going spare at the
minute. Then I install etch using both RAID5 and lvm2, migrate any
data/config I want to keep from the primary box, and - when all is
OK - move the disks over.
I'm leaning towards doing the whole thing on RAID/lvm2 - root, boot,
the lot - since I'd like the machine to be able to boot whichever
disk dies.
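In case it helps, this is roughly the layout I'm picturing, written
out as the mdadm/lvm2 commands I'd run by hand (device names, volume
group name and sizes are just placeholders - I'd expect the etch
installer to do the equivalent):

    # one RAID5 array across a partition on each of the three disks
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/hda1 /dev/hdb1 /dev/hdc1
    # lvm2 on top of the array
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 8G -n root vg0
    lvcreate -L 1G -n swap vg0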
Questions:
1) Given that all the info on the disks should be self-consistent,
and given that no other disks are present in the system, should the
moving of the disks go OK? I'm a little unclear on whether software
RAID copes with this.
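My assumption is that the arrays are identified by the UUIDs in the
on-disk superblocks rather than by device names, so that after the
move something like this would find and assemble them again - please
correct me if that's wrong:

    # see which arrays mdadm can find on the attached disks
    mdadm --examine --scan
    # assemble everything it finds
    mdadm --assemble --scan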
2) I read in the wiki/BTS that grub doesn't like booting from a RAID
partition, but that lilo is OK. How does this work - say hda dies -
how does the boot loader get found?
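From what I gather, /boot would need to sit on a RAID1 array (each
half of a mirror is a readable filesystem on its own), and lilo has a
raid-extra-boot option for writing the boot sector to every disk in
the array. Something like this, if I've read the lilo.conf man page
right (device names are placeholders again):

    boot=/dev/md0             # md0 = the RAID1 array holding /boot
    raid-extra-boot=mbr-only  # boot sector on each member disk's MBR
    image=/vmlinuz
        label=linux
        root=/dev/vg0/root    # root LV from the volume group

Is that the right idea?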
3) This is a standard machine, which means 2x IDE bus. I see from
http://www.tldp.org/HOWTO/Software-RAID-HOWTO-4.html that it is
recommended to have only one IDE disk per bus. So should I go for 2
disks and fall back to RAID1?
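If so, I suppose the array creation just becomes a mirror, with one
disk as master on each bus to follow the HOWTO's advice:

    # two-disk mirror: hda = master on the first IDE bus,
    # hdc = master on the second
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/hda1 /dev/hdc1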
Any good pointers to articles about using RAID under Debian?
Thanks in advance.
Chris Searle
[EMAIL PROTECTED]