On 29.06.2015 at 21:36, jon wrote:
> On Mon, 2015-06-29 at 20:50 +0200, Lennart Poettering wrote:
>> On Mon, 29.06.15 19:20, jon ([email protected]) wrote:
>>> Reversing the logic by adding a "mustexist" fstab option and keeping the default behaviour would fix it.
>>
>> At this time, systemd has been working this way for 5 years now. The behaviour it implements is also the right behaviour, I am sure, and the "nofail" switch predates systemd even.
>
> I disagree strongly. As I said, the "option" did not do anything... so the change only really happened when systemd coded it. Very few people are using systemd, so this change may be "stable old code" in your world; in my world it is "new" and its behaviour is "wrong"!
While I often disagree with systemd developers, *that* behaviour is simply correct.
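
For reference, "nofail" goes into the options field of /etc/fstab; the device and mount point below are made up:

  # non-critical data disk: boot must not fail if it is missing
  /dev/sdb1  /data  ext4  defaults,nofail  0  2

systemd still tries to mount it at boot, but a missing or broken disk no longer drops the system into emergency mode.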
>> Hence I am very sure the default behaviour should stay the way it is.
>
> Your default behaviour or mine! Many people I know who run Linux for real work have been using systemd for five minutes; most have yet to discover it at all!
Well, it takes much more than five minutes to get into a new core part of the system. That said: Fedora users have been using systemd since 2011, me included, in production.
> The first I knew about it was when Debian adopted it; I have been using systemd for a few hours only. It may be your five-year-old pet, but to me it is just a new set of problems to solve.
You can't blame others for that; systemd was available and widely known long before.
> I normally install machines with Debian stable; I am just discovering systemd for the first time.
And *that* is the problem, not a *minor* change. I would understand if you had to change /etc/fstab for 5000 machines, but even then: large setups are not maintained by logging in everywhere manually and making the same change by hand. Especially since you can make this change *before* the upgrade, because it doesn't harm sysvinit systems; they have no problem with "nofail".
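
Just as an illustration, with a made-up "/data" mount point and assuming its options field currently says "defaults" (you would push this out with whatever tool manages your machines):

  # append "nofail" to the options of the /data entry in /etc/fstab
  sed -i '\#[[:space:]]/data[[:space:]]#s/defaults/defaults,nofail/' /etc/fstab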
>>> Bringing up networking/sshd in parallel to the admin shell would also mitigate my issue...
>>
>> That's a distro decision really. Note though that many networking implementations as well as sshd are actually not ready to run in early boot, like the emergency mode is, i.e. they assume access to /var works, use PAM, and so on, which you had better avoid if you want to run in that boot phase.
>
> Hmmm... it used to be possible with telnetd, so I suspect it is still possible with sshd.
Not reliably; the emergency shell is there even when mounting the rootfs fails, and then you can't bring up most services.
But I agree that *trying to bring up network and sshd* would not be a bad idea; in case the problem is with an unimportant data disk, it may help.
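
An untested sketch of how one could pull sshd into emergency mode with a wants-symlink; the unit name and path differ per distro (on Debian it is ssh.service), and per Lennart's point above sshd may still fail that early in boot because /var or PAM are not usable:

  # make emergency.target want sshd as well
  mkdir -p /etc/systemd/system/emergency.target.wants
  ln -s /lib/systemd/system/ssh.service \
        /etc/systemd/system/emergency.target.wants/ssh.service

The network would also have to be brought up somehow; starting sshd alone is not enough.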
> This is the "problem" with systemd: by changing one small behaviour, it now requires many, many changes to get truly useful system behaviour back.
Honestly, it is not normal that mount points disappear like in your case, and even if they do: a machine that important usually has a *tested* setup and is reachable via KVM or something similar.
>> As you might know, my company cares about containers and big servers primarily, while I personally run things on a laptop and a smaller server on the Internet. Hence believe me that I usually care about laptop setups at least as much as about server setups.
>
> Nope, did not know that. Interesting.
You know Red Hat? Read the recent IT news about Red Hat and containers.
