Package: installation-reports
Version: 2.23
Severity: normal
-- Package-specific info:
Boot method: netinst CD
Image version: etch RC1
Date: 2006-06-14
Base System Installation Checklist:
[O] = OK, [E] = Error (please elaborate below), [ ] = didn't try it
Initial boot: [O]
Detect network card: [O]
Configure network: [O]
Detect CD: [O]
Load installer modules: [O]
Detect hard drives: [O]
Partition hard drives: [E]
Install base system: [O]
Clock/timezone setup: [O]
User/password setup: [O]
Install tasks: [O]
Install boot loader: [O]
Overall install: [E]
Comments/Problems:
The install process itself went quite smoothly. However, what got
installed was broken.
I configured the machine as follows:
drive 1, partition 1: 200 MB partition
drive 2, partition 1: 200 MB partition
  combined into a RAID1 array, formatted ext3, and mounted as /boot.
drive 1, partition 2: rest of drive
drive 2, partition 2: rest of drive
  combined into a second RAID1 array and used as an LVM physical
  volume. An LVM volume group was created on that RAID1 array, and a
  logical volume was created in it for /.
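In mdadm/LVM terms, the layout is roughly equivalent to the following
(the device names /dev/sda and /dev/sdb, the volume group name vg0,
and the volume size are assumptions on my part; the installer did all
of this through partman):

    # Device names are assumed; the real disks may be hda/hdb etc.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/sda1 /dev/sdb1
    mkfs.ext3 /dev/md0             # small array, becomes /boot
    mdadm --create /dev/md1 --level=1 --raid-devices=2 \
        /dev/sda2 /dev/sdb2
    pvcreate /dev/md1              # big array as LVM physical volume
    vgcreate vg0 /dev/md1          # volume group on the array
    lvcreate -L 10G -n root vg0    # size arbitrary for this sketch
    mkfs.ext3 /dev/vg0/root        # filesystem for /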
The system comes up fine and correctly configured after it has been
booted. However, it doesn't shut down properly. Because / is on LVM,
the volume group refuses to deactivate, since / is still mounted
(even though it is read-only by that point). As a result, mdadm
can't stop the RAID device. When the system is booted back up, the
RAID partition is marked dirty (because the array was never cleanly
stopped before the reboot) and ends up rebuilding.
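Concretely, the failure at the end of shutdown looks like this (vg0
and md1 as in the sketch above; this is my reading of what the
shutdown scripts attempt, not a verbatim trace):

    # After / has been remounted read-only, shutdown tries roughly:
    vgchange -a n vg0        # fails: the root LV is still open
    mdadm --stop /dev/md1    # fails: device busy, the VG sits on it
    # md1 is never cleanly stopped, so the next boot sees a dirty
    # array and kicks off a resync.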
Potential solutions I can think of:
- Some kind of read-only arrangement for LVM and/or RAID.
- Some kind of chroot hack so that / can actually be unmounted (see
  the sketch after this list).
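For the chroot idea, I mean something along these lines (purely a
hypothetical sketch: vg0 and md1 are the names assumed above, and the
umount only succeeds if nothing else still holds the old root open):

    # Hypothetical final step of shutdown: pivot to a tmpfs so the
    # real / can be unmounted before LVM and md are taken down.
    mkdir /tmproot
    mount -t tmpfs none /tmproot
    cp -ax /bin /sbin /lib /tmproot/    # just enough userland
    mkdir /tmproot/oldroot
    cd /tmproot
    pivot_root . oldroot
    exec chroot . /bin/sh -c '
        umount /oldroot &&
        vgchange -a n vg0 &&
        mdadm --stop /dev/md1 &&
        halt -f'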
I ended up reinstalling with / not in LVM. However, I would rather
have everything managed by LVM. If grub supported it, I'd like to
have /boot in LVM as well and just have one big LVM volume.
If the problem is not going to be fixed, the installer should at
least warn you that putting / on LVM on RAID is not going to work as
well as you might hope.
Ideally the installer would also recognize that /boot has been put
onto both drives with RAID1, install grub on both drives, and create
a menu entry for each drive, with one configured as a fallback. This
would allow the system to boot even after either drive has failed. I
don't know how much of this belongs to the grub package and
update-grub and how much has to do with the installer.
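For what it's worth, the manual equivalent with grub-legacy would be
roughly the following (the hd0/hd1 mapping, device names, and the
kernel/initrd file names are placeholders for whatever is actually on
the system):

    # In the grub shell, write the boot sector to both disks; the
    # second disk is temporarily mapped as (hd0) so it can boot on
    # its own if the first disk is gone.
    grub> device (hd0) /dev/sda
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)

    # /boot/grub/menu.lst: boot from disk 1, fall back to disk 2.
    default  0
    fallback 1

    title  Debian GNU/Linux (disk 1)
    root   (hd0,0)
    kernel /vmlinuz-<version> root=/dev/mapper/vg0-root ro
    initrd /initrd.img-<version>

    title  Debian GNU/Linux (disk 2)
    root   (hd1,0)
    kernel /vmlinuz-<version> root=/dev/mapper/vg0-root ro
    initrd /initrd.img-<version>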
==============================================
Installer lsb-release:
==============================================
DISTRIB_ID=Debian
DISTRIB_DESCRIPTION="Debian GNU/Linux installer"
DISTRIB_RELEASE="3.1 (installer build 20061102)"
X_INSTALLATION_MEDIUM=cdrom
==============================================
--
Daniel Dent
If it ain't broke, tweak it!
--
OmegaSphere Inc. - Your IT Experts - http://www.omegasphere.net/
Web Hosting, SSL Certificates, Domain Names
--