Sorry, I'm not sure I understand your comment. What I understood is that the systemd developers said: "It is not our responsibility if Ubuntu does not boot; this is not a systemd bug! In fact, the boot failure is caused by some quota-related units, shipped by Ubuntu, that are not part of systemd." Then I saw that the faulty systemd units are indeed shipped with the "quota" package, just as you say. Yet the upstream quota package does not contain these units: the quota-related units and scripts are a downstream Ubuntu (possibly inherited from Debian) addition — in fact, they live in the debian/ directory of the package.
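To confirm on an affected system that these units come from distro packaging rather than from systemd itself, one can ask dpkg which package owns the unit files. This is only a sketch: the unit names and paths below are assumptions, so list the actual quota-related units first and substitute the path systemd reports.

```shell
# No-op on systems without systemd, so the sketch is safe to paste.
if command -v systemctl >/dev/null 2>&1; then
    # List whatever quota-related units the system actually has
    systemctl list-unit-files | grep -i quota || true

    # Ask dpkg which package owns a unit file. The path below is an
    # assumption; use the one reported by:
    #   systemctl show -p FragmentPath <unit>
    dpkg -S /lib/systemd/system/quota.service 2>/dev/null || true
fi
```

If dpkg reports the file as belonging to the quota package (and the same file is absent from the upstream quota tarball), that supports the "downstream addition" reading above.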
This is why I was reporting that the systemd developers consider this a downstream bug (a bug in the Ubuntu distribution), not an upstream bug (a bug in systemd 231). I never implied that Ubuntu forgot to ship some units. I am only saying that the quota units shipped with Ubuntu evidently get something wrong, because when they are masked the system boots, and otherwise it doesn't. Possibly, the issue is only triggered by the combination of quotas + lvm2 + fake/soft RAID, which may explain why the mistake went unnoticed when it slipped in. I suppose this bug should be fixed by the maintainers of the quota package in Ubuntu, possibly with the help of the systemd maintainers.

** Package changed: systemd (Ubuntu) => quota (Ubuntu)

--
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1635023

Title:
  Regression: quotas prevent yakkety from booting (boots once every ~4 attempts)

Status in systemd:
  New
Status in quota package in Ubuntu:
  New

Bug description:
  After an upgrade from xenial to yakkety, the system does not boot
  reliably: only one boot in three or four succeeds. Quite often the
  system seems to hang during boot and then drops into an emergency
  prompt. Pressing Esc during boot to show what is going on reveals
  processes waiting for the disks to become ready.

  This is likely related to the specific system configuration: a RAID
  array of two disks managed by dmraid (an Intel fake RAID) with lvm2
  on top of it. Most data lives on its logical volumes, except for the
  boot partition, which is on a fast SSD, and the root filesystem,
  which is on lvm2 with a physical volume spanning the rest of the SSD.
  The very same configuration worked just fine on xenial.
  To make matters worse, it is impossible to boot in "recovery mode" to
  get a root prompt, because after you log in as root something suddenly
  times out and the screen gets messed up (not a graphics card issue,
  but pieces of messages popping up here and there, the recovery mode
  menu suddenly reappearing, etc.).

  Please give this bug appropriately high priority, because it prevents
  servers from coming up after power failures.

  ProblemType: Bug
  DistroRelease: Ubuntu 16.10
  Package: systemd 231-9git1
  ProcVersionSignature: Ubuntu 4.8.0-22.24-generic 4.8.0
  Uname: Linux 4.8.0-22-generic x86_64
  ApportVersion: 2.20.3-0ubuntu8
  Architecture: amd64
  Date: Wed Oct 19 21:34:16 2016
  EcryptfsInUse: Yes
  MachineType: Dell Inc. Precision WorkStation T5400
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.8.0-22-generic root=/dev/mapper/vg_ssd-root ro quiet splash mtrr_gran_size=2g mtrr_chunk_size=2g resume=/dev/sdc3
  SourcePackage: systemd
  UpgradeStatus: Upgraded to yakkety on 2016-10-18 (1 days ago)
  dmi.bios.date: 04/30/2012
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: A11
  dmi.board.name: 0RW203
  dmi.board.vendor: Dell Inc.
  dmi.chassis.type: 7
  dmi.chassis.vendor: Dell Inc.
  dmi.modalias: dmi:bvnDellInc.:bvrA11:bd04/30/2012:svnDellInc.:pnPrecisionWorkStationT5400:pvr:rvnDellInc.:rn0RW203:rvr:cvnDellInc.:ct7:cvr:
  dmi.product.name: Precision WorkStation T5400
  dmi.sys.vendor: Dell Inc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/systemd/+bug/1635023/+subscriptions

--
Mailing list: https://launchpad.net/~touch-packages
Post to     : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp
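For anyone hitting this bug, the masking workaround mentioned in the comment above can be sketched as follows. The unit names are assumptions (the quota package's units under their Debian/Ubuntu names); verify them first with `systemctl list-unit-files | grep -i quota` before masking anything.

```shell
# No-op on systems without systemd; requires root privileges otherwise.
if command -v systemctl >/dev/null 2>&1; then
    # Mask the suspect units so systemd cannot start them at boot.
    # quota.service / quotarpc.service are assumed names -- adjust
    # to whatever the listing above actually shows.
    sudo systemctl mask quota.service quotarpc.service

    # To revert once a fixed quota package is available:
    #   sudo systemctl unmask quota.service quotarpc.service
fi
```

Masking (rather than disabling) is deliberate: a masked unit cannot be pulled in as a dependency of another unit, which matters here since the units appear to be activated during early boot rather than explicitly enabled.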