Let me try one last time to separate the issues.

** The UEFI issue (a side issue)

The installer works in two completely different ways depending on whether the system booted via UEFI or BIOS, but it does not show which mode it is installing in. The user therefore has little way, short of guesswork, of knowing how to partition the system correctly. Many systems can boot from a USB stick in either mode; if you don't choose explicitly, you get whatever mode the firmware picked. So:

(1) The installer *could* tell you which mode it is running in, but it doesn't. If you don't realise that you booted via UEFI and that the system will be configured for UEFI booting, and you decide to partition manually, then you don't realise that you need a UEFI boot partition (an EFI System Partition).

(2) The installer *could* warn you that the UEFI boot partition is missing when installing in UEFI mode, but it doesn't.

Those points have now been raised separately in issue #1609715. Their only relevance here is that they give a way to reproduce the main problem.
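Incidentally, there is a reliable way to tell which mode you actually booted in, even though the installer doesn't show it: check for the EFI interface under /sys. The partitioning commands below are only an illustrative sketch of adding the missing UEFI boot partition by hand; the disk name, partition number and size are assumptions to be adjusted for the real layout.

    # In a shell on the installer (e.g. Ctrl-Alt-F2) or on the installed system:
    [ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in legacy BIOS mode"

    # Illustrative only - create a ~512MB EFI System Partition on a GPT disk
    # (/dev/sda and partition number 1 are assumptions; adjust before running)
    parted /dev/sda mkpart ESP fat32 1MiB 513MiB
    parted /dev/sda set 1 boot on    # on GPT, the "boot" flag marks the EFI System Partition
    mkfs.vfat -F 32 /dev/sda1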
** Broken recovery mode (the main issue)

The point I tried to raise in this issue is the brokenness of recovery mode when you have a system with some sort of corruption. The UEFI missing-boot-partition problem is just one specific way to reproduce the brokenness in recovery mode. Reproducible cases are good; they allow things to be fixed. There are, however, many other ways the system could be broken in which recovery mode would not work.

With an older version of Ubuntu, I could simply log in, poke around, look at logs, find the problem and fix it. With Ubuntu 16.04, I have now experienced a situation where recovery mode is broken. I described what happens at the top of this issue: you can start a recovery shell, but 50% of your keystrokes are thrown away, and a few minutes later the recovery shell quits and recovery mode locks up. I suspect this is something to do with systemd sitting in the background, launching things when it thinks dependencies have been met and terminating things when it thinks that would be a good idea.

For recovery mode, I just want a shell. Let me do my job. Please spawn me a shell connected to the console, reliably. That's it. No shells vanishing and reappearing. No timeouts because filesystems haven't yet been mounted or because networking is not up. That's the whole point of recovery mode: to have sufficient access to be able to fix those things.

For now, the best workaround seems to be to boot from an Ubuntu 14.04 USB stick and then mount the system disk.
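In case it helps anyone else, this is roughly what that workaround looks like from the live session. It is only a sketch: the device names and the separate /var partition match my layout here and will differ on other systems.

    # From the 14.04 live environment, as root; assuming / is on /dev/sda1 and /var on /dev/sda2
    mount /dev/sda1 /mnt
    mount /dev/sda2 /mnt/var
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt /bin/bash
    # ...read /var/log, fix configuration, reinstall grub, etc., then:
    exit
    umount /mnt/dev /mnt/proc /mnt/sys /mnt/var /mnt

Another thing that may be worth trying (I have not verified that it avoids the keyboard fighting described in the report below) is adding systemd.unit=emergency.target, or the more drastic init=/bin/bash, to the kernel command line from the GRUB menu, to get a bare shell without the recovery menu at all.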
But it makes me sad that 16.04 has become less good in this respect than it was before; it seems to be a regression in how easy it is to recover a broken system. Of course, this only affects systems which require some sort of maintenance, but it's a fact of life that systems *do* get into states which require fixing.

That's it. If you have never had to use recovery mode, and hence don't care about it, then you are lucky.

--
https://bugs.launchpad.net/bugs/1609475

Title:
  recovery mode completely broken by systemd

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Installing Ubuntu 16.04.1 on an identical pair of Intel NUC5CPYH machines (with 8GB RAM and Crucial BX200 SSD). There is a problem running on this machine, but the problem report here is specifically about how systemd makes this impossible to debug.

  Symptoms:

  * Installation proceeds normally. I installed with 4 partitions: 10GB /, 20GB /var, 202GB unused, 8GB swap.

  * On reboot strange things happen. The system doesn't come up fully; sometimes it reports "NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [systemd-udevd:1148]"

  So I try to boot again, this time following "Advanced options for Ubuntu", "Ubuntu, with Linux 4.4.0-31-generic (recovery mode)". It appears to boot fine. From the Recovery Menu I select "root: Drop to root shell prompt", then "Press Enter for maintenance". All is good so far: I get a prompt.

  However, while I sit looking at this screen, after about two minutes a bunch of systemd messages scroll up. I captured them as best I could with a camera:

  [ OK ] Reached target Timers.
  [ OK ] Reached target Login Prompts.
  [ OK ] Started Stop ureadahead data collection 45s after completed startup
  [ OK ] Reached target System Time Synchronized.
  [ OK ] Reached target Sockets.
         Starting Create Volatile Files and Directories...
  [ OK ] Started Set console scheme.
  [ OK ] Started Tell Plymouth To Write Out Runtime Data.
  [FAILED] Failed to start Create Volatile Files and Directories.
           See 'systemctl status systemd-tmpfiles-setup.service' for details.
  [FAILED] Failed to start LSB: AppArmor initialization.
           See 'systemctl status apparmor.service' for details.
           Starting Raise network interfaces...
  [ OK ] Started Raise network interfaces.
  [ OK ] Reached target Network.
  [ OK ] Reached target Network is Online.
         Starting iSCSI initiator daemon (iscsid)...
  [ OK ] Started Set console font and keymap.
  [ OK ] Started iSCSI initiator daemon (iscsid).
         Starting Login to default iSCSI targets...
  [ OK ] Created slice system-getty.slice.
  [ OK ] Started Login to default iSCSI targets.
  [ OK ] Reached target Remote File Systems (Pre).
  [ OK ] Reached target Remote File Systems.

  At this point it hangs for a few more seconds. Then a few more lines flash up onto the screen - too fast to see, although I think one of them has the "ctrl-D for maintenance" message. Then I can see the Recovery Menu again, *but the keyboard apparently does not work*. That is, I cannot move the selection up or down: it appears completely dead at this point. Alt-F2 switches me to a screen which is completely black apart from a flashing cursor, and Alt-F1 puts me back to the frozen recovery menu.

  However, hitting Enter *does* give me a command-line prompt again! But then pressing up and down moves the selection in the recovery menu. It appears that the shell and the recovery menu are both fighting over the keyboard: pressing cursor-down repeatedly, about 50% of the presses cause the recovery menu to move.

  This is completely pants: if I boot into recovery mode, I *don't* want systemd nonsense, I want to see a sequential series of bootup steps; and when I get a shell, I want that shell to be mine on the console with no interference - and not taken away again.

  Lots of people say "systemd sucks", but I am submitting this in the hope that providing a *specific* way in which it sucks might help get it fixed. (I have had a number of other cases of system recovery being frustrated by systemd, but this time I thought I would at least document the specifics.)
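  (For reference, the two failures flagged above can be inspected with the usual systemd tools once a shell is usable; the unit names below are taken from the messages captured above.)

      systemctl --failed                                  # list units that failed during this boot
      systemctl status systemd-tmpfiles-setup.service apparmor.service
      journalctl -b -u systemd-tmpfiles-setup.service     # full journal for one unit, current boot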
  ---
  AlsaVersion: Advanced Linux Sound Architecture Driver Version k4.4.0-31-generic.
  AplayDevices: Error: [Errno 2] No such file or directory
  ApportVersion: 2.20.1-0ubuntu2.1
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/by-path', '/dev/snd/hwC0D2', '/dev/snd/hwC0D0', '/dev/snd/pcmC0D3p', '/dev/snd/pcmC0D1p', '/dev/snd/pcmC0D0c', '/dev/snd/pcmC0D0p', '/dev/snd/controlC0', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
  Card0.Amixer.info: Error: [Errno 2] No such file or directory
  Card0.Amixer.values: Error: [Errno 2] No such file or directory
  DistroRelease: Ubuntu 16.04
  HibernationDevice: RESUME=UUID=8c695f64-12a0-4748-a431-7ab97a1e9042
  InstallationDate: Installed on 2016-08-04 (33 days ago)
  InstallationMedia: Ubuntu-Server 16.04.1 LTS "Xenial Xerus" - Release amd64 (20160719)
  IwConfig: Error: [Errno 2] No such file or directory
  Lsusb:
   Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
   Bus 001 Device 003: ID 8087:0a2a Intel Corp.
   Bus 001 Device 002: ID 05e3:0610 Genesys Logic, Inc. 4-port hub
   Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  Package: linux (not installed)
  ProcEnviron:
   LANGUAGE=en_GB:en
   TERM=xterm-256color
   PATH=(custom, no user)
   LANG=en_GB.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 inteldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.4.0-31-generic.efi.signed root=UUID=a91f753b-69af-4125-a03d-0dcb63d55d38 ro net.ifnames=0
  ProcVersionSignature: Ubuntu 4.4.0-31.50-generic 4.4.13
  RelatedPackageVersions:
   linux-restricted-modules-4.4.0-31-generic N/A
   linux-backports-modules-4.4.0-31-generic  N/A
   linux-firmware                            1.157.2
  RfKill: Error: [Errno 2] No such file or directory
  Tags: xenial
  Uname: Linux 4.4.0-31-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups:
  _MarkForUpload: True
  dmi.bios.date: 05/03/2016
  dmi.bios.vendor: Intel Corp.
  dmi.bios.version: PYBSWCEL.86A.0054.2016.0503.1546
  dmi.board.name: NUC5CPYB
  dmi.board.vendor: Intel Corporation
  dmi.board.version: H61145-407
  dmi.chassis.type: 3
  dmi.modalias: dmi:bvnIntelCorp.:bvrPYBSWCEL.86A.0054.2016.0503.1546:bd05/03/2016:svn:pn:pvr:rvnIntelCorporation:rnNUC5CPYB:rvrH61145-407:cvn:ct3:cvr: