I'm able to reproduce the same behavior Serge sees when running in a
VM, and I now have a clear understanding of what's happening at the
top level.

The init-bottom script, as noted earlier, runs with set -e, and
udevadm control --exit exits non-zero if it hits its timeout.  These
two factors combined mean that if udevd gets stuck this way, the
mount -o move *is never called*, because the script terminates
immediately after the udevadm call.
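
For reference, the relevant part of the script looks roughly like
this (a sketch from memory of the initramfs-tools udev hook; the path
and exact contents are approximate):

    #!/bin/sh -e
    # /usr/share/initramfs-tools/scripts/init-bottom/udev (approximate)

    # Ask udevd to clean up and exit.  With a wedged worker, this call
    # blocks until its own timeout expires and then returns non-zero...
    udevadm control --exit

    # ...at which point "set -e" (the -e on the shebang line) aborts
    # the script, so /dev is never moved onto the real root:
    mount -n -o move /dev ${rootmnt}/dev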

This also entirely explains the behavior seen in bug #833783 (in fact,
I just reproduced that problem here): the difference between systems
that fail to find their local disks and systems that panic due to a
missing /dev/console comes down to which device nodes happen to be
prepopulated on the root filesystem.  I am therefore marking bug
#833783 as a duplicate of this one.
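
For anyone wanting to check which failure mode a given system will
hit, the static nodes can be inspected by mounting the root filesystem
somewhere devtmpfs isn't covering it (the device and mount point below
are just examples):

    # from a rescue shell or live environment:
    mount /dev/sda1 /mnt          # example root device
    ls -l /mnt/dev/console /mnt/dev/null /mnt/dev/sda*
    # no /dev/console on the root fs -> the panic variant;
    # console present but no disk nodes -> "can't find local disks"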

I am currently investigating both the most appropriate way to ensure
that udevadm control --exit doesn't have to hit its timeout (raising
its timeout slightly, so that udevd times out first and exits,
*should* be sufficient but doesn't seem to be), and why udevd is
getting stuck in seemingly ordinary boot scenarios (a pristine VM).
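
Independently of the root cause, an obvious defensive change (a
sketch only, not necessarily the fix that will land) is to keep the
timeout from killing the script, so the mount move always runs:

    # tolerate a udevadm timeout under "set -e"; note this only masks
    # the hang, it does not fix whatever leaves udevd waiting on the
    # already-dead worker
    udevadm control --exit || true
    mount -n -o move /dev ${rootmnt}/dev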

The originally reported problem, which involves missing firmware,
appears to have a separate cause in the kernel, since the firmware
load should not fail in the first place.  I understand the kernel
team is already looking into this.

** Summary changed:

- boot failures because 'udevadm exit' does not kill udevd worker threads
+ boot failures because 'udevadm exit' times out while udevd waits for an 
already-dead thread
