I have an identical setup here (root FS on top of LVM on top of LUKS on top of MD) and can confirm the bug. I'm running Debian Buster 10.2.
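For context, such a stack can be put together roughly like this (a minimal sketch; the partition names follow the mdstat output further down, but the sizes and the vg0/root names are illustrative, not my exact layout):

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  cryptsetup luksFormat /dev/md1
  cryptsetup open /dev/md1 md1_crypt
  pvcreate /dev/mapper/md1_crypt
  vgcreate vg0 /dev/mapper/md1_crypt
  lvcreate -L 3G -n root vg0
  mkfs.ext4 /dev/vg0/root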
One small comment: the MD array requires manual activation (with mdadm --run /dev/md0) for the system to boot. However, I found that this manual step is apparently only needed once, i.e. the first time the array is seen in a degraded state. Subsequent boots go fine, even if the MD array is still degraded.

Thanks Magnus for reporting the bug and for the patch. It also works on my system. ;) :)

I did some further tests on a virt-manager VM, plugging and unplugging disks. In my tests, I found the patch has a small side effect when the missing disk is plugged back in.

Without the patch, after adding the disk back to the VM, the array holding the root FS is automatically resynced (the one holding /boot is not):

root@debian10-mdadm:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda3[1] sdb3[2]
      3877888 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sdb2[2]
      248832 blocks super 1.2 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
root@debian10-mdadm:~# mdadm --manage /dev/md0 --re-add /dev/sda2
mdadm: re-added /dev/sda2

(md1 is the array for the root FS/LVM/LUKS; md0 is the array for /boot.)

With the patch, the behavior is a bit different: no array is automatically resynced, and you have to do it manually with mdadm:

root@debian10-mdadm:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb3[2]
      3877888 blocks super 1.2 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sdb2[2]
      248832 blocks super 1.2 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
root@debian10-mdadm:~# mdadm --manage /dev/md0 --re-add /dev/sda2
mdadm: re-added /dev/sda2
root@debian10-mdadm:~# mdadm --manage /dev/md1 --re-add /dev/sda3
mdadm: re-added /dev/sda3

(md1 is the array for the root FS/LVM/LUKS; md0 is the array for /boot.)

I'm not completely sure whether this side effect is relevant. A long time ago (>10 years), Linux MD RAID would, IIRC, always re-add missing disks to the array if they had previously been members of it; that was the expected behavior. Nowadays, IIRC, I've seen it sometimes happen and sometimes not (depending on the distro, initramfs, mdadm version, etc.). I don't know what the expected behavior is (if any) these days.

Regards,
Ethereal.