The 'fake' RAID controller has two disks attached. Each is configured as a separate single-disk RAID-0 stripe array, because the controller doesn't support JBOD pass-through.
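For anyone reproducing this, the sets the controller exposes can be inspected from a shell with dmraid; a quick sketch, using only standard dmraid flags (the pdc_* name in the transcript below is one such set):

  dmraid -r    # list disks carrying Promise (pdc) RAID metadata
  dmraid -s    # show the discovered RAID sets and their status
  dmraid -ay   # activate all sets as /dev/mapper/* nodes (roughly what
               # dmraid-activate does during early boot)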
This issue is caused by the LVM volumes on a fake-RAID device failing to become ready: the logical volume VG_CADDY/rootfs is never activated because the partition that backs the VG's physical volume, "/dev/mapper/pdc_ecjaiecgch4", does not get created:

(initramfs) cat /proc/version
Linux version 3.13.0-24-generic (buildd@panlong) (gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) ) #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014

(initramfs) cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.13.0-24-generic root=/dev/mapper/VG_CADDY-rootfs ro debug bootdegraded=true libata.force=noncq console=tty0 console=ttyS0,115200n8 netconsole=@10.254.1.3/eth0,@10.254.1.51/ --debug --verbose break=top nomdmonddf nomdmonisw

(initramfs) dmesg | grep -C2 sdb
[   97.610562] scsi 11:0:1:0: Direct-Access     ATA      ST380011A        3.06 PQ: 0 ANSI: 5
[   97.610762] sd 11:0:1:0: [sdb] 156299375 512-byte logical blocks: (80.0 GB/74.5 GiB)
[   97.610786] sd 11:0:1:0: Attached scsi generic sg2 type 0
[   97.610893] sd 11:0:1:0: [sdb] Write Protect is off
[   97.610895] sd 11:0:1:0: [sdb] Mode Sense: 00 3a 00 00
[   97.610942] sd 11:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   97.675153]  sdb: sdb1 sdb2 sdb3 sdb4
[   97.675669] sd 11:0:1:0: [sdb] Attached SCSI disk
--
<27>[  100.196769] systemd-udevd[176]: inotify_add_watch(7, /dev/sdb3, 10) failed: No such file or directory
<27>[  100.196845] systemd-udevd[168]: inotify_add_watch(7, /dev/sdb1, 10) failed: No such file or directory
<27>[  100.197798] systemd-udevd[177]: inotify_add_watch(7, /dev/sdb4, 10) failed: No such file or directory
<27>[  100.198351] systemd-udevd[175]: inotify_add_watch(7, /dev/sdb2, 10) failed: No such file or directory
<27>[  100.198375] systemd-udevd[179]: inotify_add_watch(7, /dev/sdc2, 10) failed: No such file or directory
<27>[  100.198911] systemd-udevd[180]: inotify_add_watch(7, /dev/sdc3, 10) failed: No such file or directory

(initramfs) dmesg | grep 'too small'
[  100.346794] device-mapper: table: 252:13: dm-0 too small for target: start=13574144, len=142725198, dev_size=156298401

# fake RAID raw device
(initramfs) kpartx -l /dev/sdb
sdb1 : 0 2014 /dev/sdb 34
sdb2 : 0 1024000 /dev/sdb 2048
sdb3 : 0 12023809 /dev/sdb 1288192
sdb4 : 0 142725198 /dev/sdb 13574144

# same device through the device-mapper viewpoint
(initramfs) kpartx -l /dev/mapper/pdc_ecjaiecgch
Alternate GPT is invalid, using primary GPT.
pdc_ecjaiecgch1 : 0 2014 /dev/mapper/pdc_ecjaiecgch 34
pdc_ecjaiecgch2 : 0 1024000 /dev/mapper/pdc_ecjaiecgch 2048
pdc_ecjaiecgch3 : 0 12023809 /dev/mapper/pdc_ecjaiecgch 1288192
pdc_ecjaiecgch4 : 0 142725198 /dev/mapper/pdc_ecjaiecgch 13574144

I've moved the VG to another disk on a plain PATA controller for now. I'll investigate more when I have time.
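If I'm reading the numbers right, the 'too small' error is exactly the mismatch visible above: the GPT (read identically from the raw disk and from the dmraid container) puts the fourth partition past the end of the container, because the Promise metadata reserves space at the end of the disk. Shell arithmetic on the figures above:

  # end of partition 4 according to the GPT: start + len
  echo $((13574144 + 142725198))   # 156299342; fits the 156299375-sector raw
                                   # disk, leaving exactly the 33 sectors a
                                   # backup GPT needs at the end
  # size of the dmraid container vs the raw disk
  # (the container size can be confirmed with
  #  blockdev --getsz /dev/mapper/pdc_ecjaiecgch, if blockdev is in the initramfs)
  echo $((156299375 - 156298401))  # 974 sectors, presumably the pdc metadata area
  # how far partition 4 overhangs the container
  echo $((156299342 - 156298401))  # 941 sectors, hence "dm-0 too small for target"

This would also explain the "Alternate GPT is invalid" warning on the container: the backup GPT sits at the end of the raw disk, beyond the end of the dm device.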
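For completeness, the move itself is plain LVM housekeeping once the machine is up under the working 3.11 kernel. A minimal sketch; /dev/sdd1 is a hypothetical partition on the replacement PATA disk, not a device from this machine:

  pvcreate /dev/sdd1              # prepare the replacement partition as a PV
  vgextend VG_CADDY /dev/sdd1     # add it to the volume group
  pvmove /dev/mapper/pdc_ecjaiecgch4 /dev/sdd1
                                  # migrate every extent off the fake-RAID PV
  vgreduce VG_CADDY /dev/mapper/pdc_ecjaiecgch4
                                  # drop the old PV from the VG once empty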
--
https://bugs.launchpad.net/bugs/1358491

Title:
  [Trusty] fails to boot with kernels later than v3.11:
  systemd-udevd[133]: conflicting device node

Status in “linux” package in Ubuntu:
  In Progress

Bug description:
  I have a lab server named 'caddy' that is used for data recovery and
  forensics of disk drives. It has hot-swap drive bays for various disk
  interface types; amongst others, a Promise FasTrak TX2000 IDE 'fake'
  RAID controller.

  The server was upgraded from Saucy to Trusty. After the upgrade it
  fails to boot with kernel version 3.13.0-24-generic during early udev,
  whilst still in the initrd. Errors of the form

  [    6.989549] systemd-udevd[137]: inotify_add_watch(7, /dev/sdi2, 10) failed: No such file or directory
  ...
  [    7.092733] systemd-udevd[133]: conflicting device node '/dev/mapper/pdc_ecjaiecgch1' found, link to '/dev/dm-2' will not be created

  are reported for some devices, usually the Promise 'fake' RAID
  devices. The system hangs at that point without ever dropping to a
  busybox shell.

  Starting with an earlier Saucy kernel, version 3.11.0-12-generic,
  allows the server to start successfully.

  After some research it appears that this may be due to an
  incompatibility between systemd-udevd and device-mapper and/or
  dmraid-activate. A similar Fedora bug report contains this comment by
  Kay Sievers (https://bugzilla.redhat.com/show_bug.cgi?id=867593#c11):

  "Device-mapper seems to mknod() things in /dev, which just can not
  work correctly today. There is nothing udev can fix here, it will
  never touch any device node, which should not exist in the first
  place, that is in the way."

  I've tried breaking into the initrd (see the sketch at the end of
  this mail), but unless that is done at 'top', udevd starts and the
  system hits this problem.

  Serial console logs of the failed and successful boot attempts are
  attached.
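For reference, the failing assembly can be stepped through by hand from the break=top shell, the same environment as the (initramfs) session above. A sketch, assuming dmraid, kpartx and lvm are all present in the initramfs, as they are on this system:

  dmraid -ay                             # assemble the Promise set: /dev/mapper/pdc_ecjaiecgch
  kpartx -av /dev/mapper/pdc_ecjaiecgch  # map its partitions; mapping the fourth
                                         # should reproduce the "dm-0 too small
                                         # for target" error shown earlier
  lvm vgchange -ay VG_CADDY              # would activate the root VG, if its PV
                                         # (pdc_ecjaiecgch4) had been created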