Hi all, I have the same problem with a fresh installation of Ubuntu
12.04.3 server, but I found a workaround that seems to work, without
reinstalling.

Problem:
I first installed the OS on two disks configured as RAID1, with swap (md0) and /
(md1) partitions.
Then I added two more disks and, via Webmin, created a new RAID1 with a single
ext4 partition (md2) and had it mounted at boot under /mnt/raid (again via
Webmin).
Both md1 and md2 worked for a week; in the meantime I had very likely updated the
kernel image via aptitude (I now have 3.8.0-30-generic), but I had not rebooted
since. Today I needed to reboot and the machine didn't come up; it didn't even
show the boot messages.
At boot time, editing the boot options and removing the line gfx_mode
$linux_gfx_mode made it output the boot status messages, which showed it
eventually dropping to an initramfs console.
It did boot when I chose an older kernel from the boot menu (e.g.
3.8.0-29-generic), even though during the boot process it asked whether I wanted
to skip the RAID setup for md2. With the previous image it showed the boot
output.
However, it was not able to mount md2, and a new array named md127 appeared
instead.
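As a side note, to see what the kernel had actually auto-assembled, something
like the following should help (sdc1 is simply one of the member partitions in
my setup):

cat /proc/mdstat                 # lists the assembled arrays; here md127 showed up instead of md2
sudo mdadm --examine /dev/sdc1   # prints the RAID metadata stored on the member: array UUID, name, level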

Solution:
So after reading this report and other pages, here is the workaround I used:
1. booted with a previous kernel version and logged in
2. changed /etc/mdadm/mdadm.conf
From:
# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=5e60b492:1adc87ee:8e9a341d:df94a8da name=my-server:0
ARRAY /dev/md/1 metadata=1.2 UUID=799d1457:5131d974:1ca3384e:ce9e9e77 name=my-server:1
# This file was auto-generated on Wed, 11 Sep 2013 14:33:45 +0200
# by mkconf $Id$
DEVICE /dev/sdc1 /dev/sdd1
ARRAY /dev/md2 level=raid1 devices=/dev/sdc1,/dev/sdd1

To:
ARRAY /dev/md/0 metadata=1.2 UUID=5e60b492:1adc87ee:8e9a341d:df94a8da name=my-server:0
ARRAY /dev/md/1 metadata=1.2 UUID=799d1457:5131d974:1ca3384e:ce9e9e77 name=my-server:1
ARRAY /dev/md/2 metadata=1.2 UUID=563e2286:2f0115a0:bc92cb19:3383af19 name=my-server:2

That is, I changed the configuration of the third array so that it is identified
by its UUID rather than by device names.
To get the UUID of the member disks I used sudo blkid; sdc1 and sdd1 had the
following values:
/dev/sdc1: UUID="563e2286-2f01-15a0-bc92-cb193383af19" UUID_SUB="6ec201e5-eaa4-3309-054e-8ec96c26da49" LABEL="my-server:2" TYPE="linux_raid_member"
/dev/sdd1: UUID="563e2286-2f01-15a0-bc92-cb193383af19" UUID_SUB="722bad2f-37f4-3c8e-5619-ca413c9d0b82" LABEL="my-server:2" TYPE="linux_raid_member"

That is, for the UUID of the md2 array I used the UUID shared by sdc1 and sdd1,
and for the name their label my-server:2. Note that the UUID and label of sdc1
and sdd1 are identical; the only difference is the UUID_SUB. Also note that
mdadm.conf expects the same UUID written as colon-separated groups of eight hex
digits (563e2286:2f0115a0:bc92cb19:3383af19), whereas blkid prints it with
dashes.
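As a cross-check, mdadm can print the correct ARRAY lines itself, which avoids
converting the UUID format by hand; the output can then be pasted into
mdadm.conf (the device path it prints reflects whatever the array is currently
assembled as, md127 in my case, so that part may still need editing):

sudo mdadm --detail --scan
# one line per assembled array, roughly:
# ARRAY /dev/md127 metadata=1.2 name=my-server:2 UUID=563e2286:2f0115a0:bc92cb19:3383af19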

3. I changed /etc/fstab, removing the entry that referred to /dev/md2 and
replacing it with one using the UUID of the ext4 filesystem (not the RAID member
UUID above):
UUID=0ae06695-1a89-4c05-af34-68465224a71c       /mnt/raid       ext4    defaults        0       0
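For completeness, that filesystem UUID is the one blkid reports when pointed at
the assembled array itself, something like:

sudo blkid /dev/md2
# /dev/md2: UUID="0ae06695-1a89-4c05-af34-68465224a71c" TYPE="ext4"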

4. I issued a "sudo update-initramfs -u" so that the updated mdadm.conf gets
copied into the initramfs used during early boot.
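An optional sanity check I did not originally run, but which should confirm that
the new initramfs really contains the mdadm configuration:

lsinitramfs /boot/initrd.img-3.8.0-30-generic | grep mdadm
# expected to list etc/mdadm/mdadm.conf among other mdadm files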

5. rebooted.
It worked for me. I think there are some issues when the arrays are defined by
device names instead of their UUID values.
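After the reboot, a quick way to double-check the result (device and mount point
are the ones from my setup):

cat /proc/mdstat                           # md2 should be active as raid1, with no md127 left over
sudo mdadm --detail /dev/md2 | grep -E 'State|UUID'
df -h /mnt/raid                            # confirms the filesystem is mounted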

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/990913

Title:
  RAID goes into degrade mode on every boot 12.04 LTS server

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/990913/+subscriptions
