MikeB wrote:
>I think you'll find you need
>boot=/dev/md0
>
>And if you're using a recent version of lilo (21.7-3 for example) it will
>install the loader on both disks.
>Here's my lilo.conf:
>boot = /dev/md0
>lba32
>delay = 50      # optional, for systems that boot very quickly
>vga = normal    # force sane state
>root = current  # use "current" root
>image = /vmlinuz
>   label = linux
>   read-only
>image = /vmlinuz.old
>   label = linux.old
>   read-only    # Non-UMSDOS filesystems should be mounted read-only for checking

Thanks for everyone's feedback.  I made some progress by switching to 
a single lilo.conf file with boot set to /dev/md0, but I still 
can't get my system to boot when sdb is offline.

When I ran lilo, the output only mentioned writing to /dev/sdb. 
This leads me to believe that boot=/dev/md0 isn't writing the MBR to 
/dev/sda, a suspicion reinforced by the fact that the system won't 
boot when /dev/sdb is offline.  Is there something else I can do to 
force lilo to write the MBR to BOTH disks?
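
For what it's worth, one rough way to sanity-check which disks actually got a boot sector is to look for the ASCII "LILO" marker in the first 512 bytes.  This is just a heuristic sketch of my own, not anything from lilo's docs; run it as root, and the device names are illustrative:

```shell
# Rough check: LILO's first-stage boot sector carries the ASCII
# string "LILO", so grep the first 512 bytes of a device for it.
# Usage (as root): check_lilo /dev/sda
check_lilo() {
  if dd if="$1" bs=512 count=1 2>/dev/null | grep -q LILO; then
    echo "$1: LILO signature found"
  else
    echo "$1: no LILO signature"
  fi
}
```

If sda comes back with no signature after running lilo with boot=/dev/md0, that would at least confirm only sdb's MBR was written.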

Here's my current lilo.conf:
boot = /dev/md0
lba32
delay = 50      # optional, for systems that boot very quickly
vga = normal    # force sane state
root = current  # use "current" root
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
message=/boot/message
default=linux

image=/boot/vmlinuz-2.2.16-22enterprise
         label=linux
         initrd=/boot/initrd-2.2.16-22enterprise.img
         read-only
         root=/dev/md0


I tried changing boot to /dev/sda and running lilo again, and it 
wrote the MBR to sda, but the boot still stalls at "LI" when sdb is off.

I also upgraded to the latest lilo from the developer site (21.7-5) 
and re-ran lilo against my conf, but that didn't change the above 
results.

Finally, I tried Al's suggestion (below; I hope he doesn't mind me 
posting his personal reply), but that produced the same behavior I 
originally posted about: the system boots fine when both members are 
online, but gets stuck at "LI" when either one is offline.

Any other suggestions?

Thanks,

Dale

>       Dale,
>
>       I've messed with this quite a bit as well (Mandrake 8.0, 2.4.3 kernel,
>RAID1 using two or three disks), and I can not get LILO bootloaders to
>reliably boot off an md device if the first disk in the array is
>unavailable.  I'm sure there are people on this list that have it
>working, and I know the developers of this capability are here and can
>comment on this, but it has not worked reliably for me.
>
>       So...   I went for something more simple (IMHO) than asking LILO and md
>to take care of this for me: avoid booting off the md device by writing
>slightly different bootloaders out to the MBR on each sd device in the
>array, as such...
>
>- Make one copy of /etc/lilo.conf for each disk, i.e., lilo-sda.conf,
>   lilo-sdb.conf, etc.
>
>- Edit each copy as such:
>   - Make the "boot=" line match the device, i.e., boot=/dev/sda,
>     boot=/dev/sdb, etc.
>   - Keep a different map file for each bootloader by editing the
>     "map=" entry in each file, i.e., map=map-sda, map=map-sdb, etc.
>     Remember that map files are kept in /boot.
>
>- Write a boot loader to each disk, i.e., "lilo -C /etc/lilo-sda.conf",
>   "lilo -C /etc/lilo-sdb.conf", etc.
>
>         Now, the box can correctly boot off *any* disk in the array,
>without relying on the md driver, and then proceed to mount the raid
>partitions (possibly in degraded mode, if one or more disks are gone).
>
>         Caveat: I've only done all this with a raid-1 set, and have no
>first-hand
>knowledge for other raid configs.
>
>         Hope that helps!
>
>-Al
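
In case it saves someone some typing, Al's per-disk scheme above can be sketched as a small shell helper that stamps out one lilo.conf per member disk from a template.  The sed rewrites and file names are my own guess at how you'd automate his steps, not something he posted:

```shell
# Generate a per-disk lilo.conf from a template, per Al's scheme:
# point boot= at the physical disk and give each disk its own map file.
# Usage: mkconf /etc/lilo.conf sda > /etc/lilo-sda.conf
mkconf() {
  sed -e "s|^boot *=.*|boot=/dev/$2|" \
      -e "s|^map *=.*|map=/boot/map-$2|" \
      "$1"
}
```

Then, for each disk: "mkconf /etc/lilo.conf sda > /etc/lilo-sda.conf" followed by "lilo -C /etc/lilo-sda.conf", and likewise for sdb.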
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]