I have a similar problem on a live system.
I installed jessie and later set up RAID1, making sure to install GRUB on both
drives.
I can boot fine with both disks installed.
When I power down and remove a single drive, the boot fails after a few seconds.

By removing the quiet and splash parameters from the kernel command line I can
confirm the repeated "running /scripts/local-block" messages.
After the rootdelay timeout I get an (initramfs) prompt.
At this prompt I can run "mdadm --run /dev/md127" and then "exit", and the
system boots.
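
For reference, the whole recovery sequence at the prompt is just (md127 is
what the array happens to be called on my system; yours may differ):

  (initramfs) mdadm --run /dev/md127
  (initramfs) exit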

It would be better to print a message stating that the RAID is degraded and
allow booting it in that state; see the sketch below for a stopgap.
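
Until then, a possible workaround is a small initramfs-tools script that
force-starts any arrays left inactive. This is only a sketch: the file name
and the local-top placement are my own choice, the ordering relative to the
mdadm scripts may need adjusting, and it assumes mdadm is already copied into
the initramfs (it is on a jessie software-RAID install).

  #!/bin/sh
  # /etc/initramfs-tools/scripts/local-top/force-degraded-raid
  # (path and name are an assumption) -- force-start inactive md
  # arrays so a degraded RAID1 can still boot.
  PREREQ=""
  prereqs()
  {
      echo "$PREREQ"
  }
  case "$1" in
  prereqs)
      prereqs
      exit 0
      ;;
  esac

  # Try to start every md device node; arrays that are already
  # running just return an error, which we ignore.
  for md in /dev/md*; do
      [ -b "$md" ] || continue
      mdadm --run "$md" 2>/dev/null || true
  done

Make it executable and run "update-initramfs -u" so it lands in the initrd;
with that in place the box should come up degraded instead of dropping to the
(initramfs) prompt.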

On Wed, 19 Aug 2015 11:17:49 +0000 Martin Minne <martin.mi...@koezio.co> wrote:
> On Sun, 26 Apr 2015 22:27:10 +0200 Florian Attenberger 
> <florian.attenber...@kthread.org> wrote:
> > Hello,
> >
> > I can reproduce this reliably in a KVM VM.
> > This is only broken if you remove the disks while powered down.
> > Hot-remove, reboot, hot-add (and probably cold-add) works as it should.
> >
> > Cheers,
> >
> > Florian
> >
> 
> Hello, I have the same problem as described in the first post, with RAID10,
> and you are right: hot-remove/reboot/hot-add works as it should.
> 
> Did you find a solution to boot with a hard drive missing?
> 
> Thank you
