On Wed, 25 Feb 2015 19:25:08 +0100, Marc Joliet <mar...@gmx.de> wrote:

> On Wed, 25 Feb 2015 07:01:59 -0500, Rich Freeman <ri...@gentoo.org> wrote:
> 
> > On Wed, Feb 25, 2015 at 2:50 AM, Marc Joliet <mar...@gmx.de> wrote:
> > > On Tue, 24 Feb 2015 16:44:59 -0500, Rich Freeman <ri...@gentoo.org> wrote:
> > >
> > >> > === Timers ===
> > >> >
> > >> > Can a systemd timer depend on a mount point such that it waits until
> > >> > the mount point exists before running?  Or will it fail after a
> > >> > timeout?  I want to research this myself, but haven't gotten around
> > >> > to it yet.
> > >>
> > >> So, timer units are units, and units can have dependencies, and mounts
> > >> can be dependencies since mounts are units.  However, if you set the
> > >> dependency on the timer itself, then the timer won't start running
> > >> until the mount exists.  You probably want the dependency to be on the
> > >> service started by the timer (so the timer is watching the clock, but
> > >> the service won't start without the mount).
> > >
> > > Wait, so the timer won't start watching the clock until its dependencies
> > > are met (i.e., the mount point appears)?  Is that what you mean?  Because
> > > that might be more in line with what I want (though I'm not sure yet).
> > 
> > If you set the dependency on the timer, then the timer doesn't start
> > watching the clock until they're met.  If you set the dependency on
> > the service started by the timer then it will watch the clock but not
> > launch the service if the dependency isn't met.  You can set the
> > dependency in either or both places.  The timer and the service are
> > both units.
> 
> OK, I think I got it.

I finally looked at this more closely yesterday.  No, dependencies don't do
what I want: if a dependency is not met, the unit goes into a failed state.
The problem is that my external drive doesn't show up properly until I unplug
it and plug it back in (strictly speaking, its device name shows up, but its
partitions don't), so a failed state won't work, since recovering from it
would require exactly the manual intervention I want to avoid.

> > >> If you set a
> > >> Requires=foo.mount and After=foo.mount, then the service shouldn't run
> > >> unless foo.mount is available.  I suspect systemd will attempt to
> > >> mount the filesystem when it runs the service, and you'll get units in
> > >> the failed state if that doesn't work.

Exactly, setting a dependency on a mount point makes systemd attempt to mount
the file system before starting the unit.  It's the fact that the unit goes
into a failed state if that attempt fails that's the problem.  Again: manual
intervention.
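For reference, the kind of setup being discussed looks roughly like this.
It's only a sketch: the names backup.service, backup.timer, and
mnt-backup.mount are made up for illustration (mnt-backup.mount would be the
unit systemd generates from a /mnt/backup fstab entry):

```ini
# backup.service -- hypothetical name, for illustration only
[Unit]
Description=Run backup script
# Pull in the mount and order this unit after it.  If the mount
# attempt fails, this service fails too (the problem described above).
Requires=mnt-backup.mount
After=mnt-backup.mount

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# backup.timer -- hypothetical name; started/enabled instead of the service
[Timer]
OnCalendar=daily
Persistent=true
```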

> > >> However, I haven't tested any of this.  I suspect it wouldn't take
> > >> much to work this out.  I have a mount dependency in one of my
> > >> services.  Just look at the mount units in /run/systemd/generator for
> > >> the name of the mount unit systemd is creating from fstab.
> > >
> > > Right, so IIUC, I would have a oneshot service that does the backup, and
> > > the timer runs that, and of course the timer can depend on the mount
> > > point.  And if the mount point doesn't exist, then the service started
> > > by the timer will fail.
> > >
> > > What I would prefer to have is a timer that only runs if *both* the time
> > > *and* mount conditions are met.  Skimming the man page, this does not
> > > seem possible.  I suppose it would be nice if timers learned "conditions"
> > > on which they should wait in addition to the time condition, but maybe
> > > that's outside the scope of systemd?
> > 
> > I think if you just set the dependency on the service you'll get the
> > behavior you desire.  Systemd will try to mount the backup filesystem,
> > and if that fails it won't run the backup.
> > 
> > You can set conditions on units as well, like only running if they're
> > on AC power or on amd64 or to run one unit the first time you start a
> > service and a different unit every other time.  Some of that was
> > designed to implement some of the stateless system features they're
> > adding to systemd.
> 
> Right, I'll have a look at them.

The problem with conditions (as they exist in systemd currently) is the same
as with dependencies: the unit does not wait until the condition is met, but
stops immediately (only without entering a failed state).  I mean, this is
what conditions in systemd are *supposed* to do, and they do their designated
job, but I would like the timer to wait until the condition is met and *then*
run the job.  I.e., I want a *delay*.
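To illustrate, a condition-based variant would look something like this
(again a sketch with hypothetical names; ConditionPathIsMountPoint= is the
relevant directive):

```ini
# backup.service (condition variant, hypothetical names)
[Unit]
Description=Run backup script
# If /mnt/backup is not a mount point when the timer fires, the
# service run is silently skipped -- it does NOT wait for the mount.
ConditionPathIsMountPoint=/mnt/backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```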

An example of where something like this exists is fcron's lavg* options, with
which a job is delayed until the system load average drops below the specified
value.  I want this, only for mount points.

Another possibility would be something akin to PartOf that would additionally
link two units at *startup*, i.e., the depending unit starts when its
dependency appears.  Then the timers would come and go along with the mount
point: at bootup, when the drive doesn't show up properly, the timers would
simply be absent, and once I get the drive going they would appear and elapse
appropriately.  However, I'm not sure whether that would be a good/robust
system design (e.g., would it mask error modes I care about?).

*sigh*

Maybe I'm over-thinking this.

Anyway, what I ended up doing is setting Restart=on-failure with appropriate
intervals, so that I get a five-minute window in which to unplug the drive
and plug it back in.  My backup script already returns appropriate error
codes, so this just worked.
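Something along these lines, that is -- the intervals and limits below are
illustrative values, not my exact configuration:

```ini
# backup.service (retry variant, illustrative values)
[Unit]
Description=Run backup script
Requires=mnt-backup.mount
After=mnt-backup.mount

[Service]
# Note: Restart= other than "no" is not allowed for Type=oneshot,
# so the default Type=simple is used here.
ExecStart=/usr/local/bin/backup.sh
# Retry on failure every 30 seconds...
Restart=on-failure
RestartSec=30
# ...but stop retrying after roughly five minutes
# (at most 10 start attempts within a 310-second window).
StartLimitBurst=10
StartLimitInterval=310
```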

I'm not entirely happy, but it's still better (in various ways) than what I had
with fcron, despite the greater verbosity of systemd timers compared to crontab
entries.

Greetings
-- 
Marc Joliet
--
"People who think they know everything really annoy those of us who know we
don't" - Bjarne Stroustrup
