On Tue, 2 Sep 2008 15:12:18 -0400 "Tom Allison" <[EMAIL PROTECTED]> wrote:
> On 9/1/08, Claudius Hubig <[EMAIL PROTECTED]> wrote:
> > Tom Allison <[EMAIL PROTECTED]> wrote:
> > > From what I recall reading the logs at startup, if I put my boot
> > > system on a software RAID 1, it appears to boot from disk #1, then
> > > mount the RAID and finish from there.
> > >
> > > Am I correct so far?
> > >
> > > The ultimate question is this:
> > > If I have a disk failure on a boot/RAID1 system (/dev/hda), can I
> > > simply replace that dead disk with a new one (empty, formatted,
> > > partitioned, doesn't matter?) and it will magically boot from the
> > > available disk (/dev/hdb) and fix itself? Or is there more to this?
> >
> > I have something very similar to your setup and have to say - no, it
> > won't. You'll have to make your BIOS boot from the second disk (and
> > have to install grub in the MBR beforehand) or use a rescue disk to
> > boot the system. Then, adjust the partitions on the new drive and
> > add them to your raid.
> >
> > You can, however, configure your BIOS so that it tries to boot from
> > every available hard disk and switches to your second disk when the
> > first one fails. Nonetheless, this disk needs a valid MBR as well.
> >
> > Greetings,
> >
> > Claudius
>
> I'm going to sound dumb, but isn't that just marking it bootable and
> then running grub on the second disk to put the grub boot files in
> place on the second disk?

Well, RAID1 mirrors the drive, so /boot lives on both drives, but grub
legacy (grub2 is out, but not the default yet) doesn't understand
multi-disk devices (e.g. RAID), so it just reads the disk as if it were
a regular partition rather than RAID. Also, because it doesn't
understand RAID, you have to run GRUB manually for both drives in order
to update the MBR (the MBR is the master boot record; it sits in the
same sector as the partition table and is the code that loads grub to
do the booting after being invoked by the machine's BIOS).
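As an aside, you can see the state of each mirror half at any time from
/proc/mdstat. A quick sketch (assuming the boot mirror is /dev/md0 --
adjust the device name to your own setup):

```shell
# Show all md arrays; "[UU]" means both halves are up,
# "[_U]" means the first disk has dropped out of the mirror.
cat /proc/mdstat

# Per-device detail for one array (needs the mdadm package).
mdadm --detail /dev/md0
```

Worth checking before you reboot after any disk work, so you know the
resync has finished.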
To update the MBR, run the following in the grub shell:

    device (hd0) /dev/sda
    root (hd0,0)
    setup (hd0)

Repeat for the second disk (e.g. /dev/sdb).

(sda/sdb are SCSI disks; hda/hdb/hdc/hdd are IDE disks.)

When using IDE RAID you should have the two drives on different
controllers (i.e. different cables), which would usually give you
/dev/hda and /dev/hdc, assuming you configure both drives as master.
The reason for this is that a drive failure on an IDE controller
usually takes out both drives on that controller, so RAID doesn't save
you. If the drives are on separate controllers, then even if your
system dies because a drive went, the second drive is still valid. If
you have them on the same controller (cable), you can end up with
neither drive containing valid data, which means the RAID didn't help
you.

Regards,

Daniel

--
And that's my crabbing done for the day.  Got it out of the way early,
now I have the rest of the afternoon to sniff fragrant tea-roses or
strangle cute bunnies or something.   -- Michael Devore
GnuPG Key Fingerprint 86 F5 81 A5 D4 2E 1F 1C http://gnupg.org
The C Shore: http://www.wightman.ca/~cshore
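Putting the pieces together, here is a hedged sketch of the whole
recovery after swapping in a blank replacement for a failed /dev/hda.
The array name /dev/md0 and the partitions hda1/hdc1 are assumptions --
substitute whatever your setup actually uses:

```shell
# 1. Copy the partition table from the surviving disk to the new one.
sfdisk -d /dev/hdc | sfdisk /dev/hda

# 2. Re-add the new partition; the kernel then resyncs the mirror
#    (watch /proc/mdstat for progress).
mdadm /dev/md0 --add /dev/hda1

# 3. Reinstall grub legacy's boot code in the new disk's MBR, since
#    the MBR is outside the mirrored partition and is not resynced.
grub --batch <<EOF
device (hd0) /dev/hda
root (hd0,0)
setup (hd0)
quit
EOF
```

Step 3 is the part people forget: the mirror restores /boot, but only
grub itself can restore the boot sector.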