OK, I have an answer to why this particular system ran into problems but
a different one didn't. I noticed it in the stuff that apport dumped to
this ticket.

Unbeknownst to me, these two disks were previously part of a 4-disk
RAID5 array built from /dev/sda - /dev/sdd, so /dev/sda and /dev/sdb
each still had an md superblock on the whole-disk device.
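For anyone who wants to check their own disks for the same thing,
something like the following should show any leftover md metadata (the
device names are obviously from my setup, adjust as needed):

  # Look for stale md superblocks on the whole-disk devices
  sudo mdadm --examine /dev/sda /dev/sdb

  # Or just scan everything mdadm can find
  sudo mdadm --examine --scan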

I partitioned both with a 20GB root partition up front so that the end
user can switch between successive Ubuntu releases, and the remaining
1.4-odd TB of each was partitioned as type "fd" to be RAIDed together
for data. The initial OS installation went just fine.
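For reference, the per-disk layout was roughly as sketched below; the
mdadm line is just an example invocation (RAID1 picked purely for
illustration), not the exact command that was run:

  # Per-disk partition layout (sketch):
  #   /dev/sdX1  ~20 GB          - root filesystem for one Ubuntu release
  #   /dev/sdX2  rest (~1.4 TB)  - type fd, member of the data array
  #
  # Data array assembled from the second partitions, something like:
  sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2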

However, it looks like when mdadm was installed, it saw the superblocks
on /dev/sda and /dev/sdb as well as the ones on /dev/sda2 and /dev/sdb2
and set things up accordingly. An initial ramdisk was created that knew
about both arrays, and the failure apparently happened when it tried to
start the broken RAID5 array.
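If anyone wants to check whether they are in the same situation, the
things to look at after installing mdadm are (these are the standard
Debian/Ubuntu paths, as far as I know):

  # Arrays the mdadm package picked up at install time
  cat /etc/mdadm/mdadm.conf

  # Arrays the kernel currently has assembled (or is failing to assemble)
  cat /proc/mdstat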

Blowing away the superblocks on /dev/sda and /dev/sdb solved this
particular problem - mdadm now installs without causing any further
problems.
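For the record, "blowing away" here means something along the lines of
the commands below. Double-check the device names before copying this;
it only touches the whole-disk devices, not sda2/sdb2, and the
update-initramfs step shouldn't be needed if you simply reinstall the
mdadm package afterwards:

  # Wipe the stale metadata from the whole-disk devices only
  sudo mdadm --zero-superblock /dev/sda /dev/sdb

  # Rebuild the initramfs so it stops trying to start the broken array
  sudo update-initramfs -u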

However, what I don't get is why a broken array (with no mount point
specified) would result in an unbootable system; the root filesystem in
this case is on a single drive's partition and is not part of any array.

As to whether this is the cause of anyone else's problem, I don't know.
I reckon it's pretty unlikely, actually: we have quite a few disks
lying around that came out of 4-disk rackmount servers pulled at one
point or another, and that situation is probably a lot less common in
someone's home setup.

-- 
Installing mdadm package breaks bootup.
https://bugs.launchpad.net/bugs/158918