NeilBrown writes:
On Wed, 13 Nov 2013 22:11:27 +0600 "Alexander E. Patrakov"
<[email protected]> wrote:
2013/11/13 NeilBrown <[email protected]>:
On Tue, 12 Nov 2013 19:01:49 +0400 Andrey Borzenkov <[email protected]>
wrote:
Something like:

[email protected]
[Timer]
OnActiveSec=5s

[email protected]
[Service]
Type=oneshot
ExecStart=/sbin/mdadm -IRs

udev rule:
... SYSTEMD_WANTS=mdadm-last-resort@$ENV{SOMETHING_UNIQUE}.timer
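For concreteness, a full rule along those lines might look roughly like this; the MD_DEVNAME property and the match keys are only stand-ins for SOMETHING_UNIQUE, not part of the proposal (and instance names containing "/" would still need escaping):

  # hypothetical /etc/udev/rules.d/64-md-last-resort.rules
  # When an md device appears, arm the per-array last-resort timer so that
  # "mdadm -IRs" gets a chance to force-run it once the delay expires.
  SUBSYSTEM=="block", KERNEL=="md*", ACTION=="add|change", ENV{MD_DEVNAME}=="?*", TAG+="systemd", ENV{SYSTEMD_WANTS}+="mdadm-last-resort@$env{MD_DEVNAME}.timer"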
Thanks. This certainly looks interesting and might be part of a solution.
However it gets the timeout test backwards.
I don't want to set the timeout when the array starts to appear. I want to
set the timeout when someone wants to use the array.
If no-one is waiting for the array device, then there is no point forcing it.
That's why I want to plug into the timeout that systemd already has.
Maybe that requirement isn't really necessary though. I'll experiment with
your approach.
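(For reference, the timeout I mean is the job timeout systemd already puts on the .device units it generates for /etc/fstab entries; per mount it can be tuned with something like the following, where the device path is just an illustration:)

  # /etc/fstab - illustrative entry only; the device path is made up
  /dev/md/data  /data  ext4  defaults,x-systemd.device-timeout=30s  0 2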
It is useless to even try to plug into the existing systemd timeout,
for a very simple reason: in setups where your RAID array is not
at the top of the storage device hierarchy, systemd does not know that
it wants your RAID array to appear.
So the statement "If no-one is waiting for the array device, then
there is no point forcing it" is false, because there is no way to
know that no-one is waiting.
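As a concrete illustration (all device names made up): with LVM on top of the array, /etc/fstab only names the logical volume, so the only .device unit systemd waits for is the LV, and nothing in its job queue ever refers to /dev/md0 itself:

  # /etc/fstab - illustrative; vg0 has its physical volume on /dev/md0
  /dev/vg0/root   /      ext4   defaults   0 1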
"useless" seems a bit harsh. "not-optimal" may be true.
If systemd was waiting for a device, then it is clear that something was
waiting for something. In this case it might be justified to activate as
much as possible in the hope that the important things will get activated.
This is what "mdadm -IRs" does. It activates all arrays that are still
inactive but have enough devices to become active (though degraded). It
isn't selective.
If those arrays are deep in some storage hierarchy, then udev will pull the
rest of it together and the root device will appear.
If systemd is not waiting for a device, then there is no justification for
prematurely starting degraded arrays.
Maybe I could get emergency.service to run "mdadm -IRs" and if that actually
started anything, then to somehow restart local-fs.target. Might that be
possible?
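Something like this drop-in is roughly what I have in mind (the file name is arbitrary, and whether starting local-fs.target again actually re-runs the failed mounts is exactly the question):

  # /etc/systemd/system/emergency.service.d/degraded-raid.conf  (hypothetical)
  [Service]
  # Before giving the user an emergency shell, force-run any arrays that
  # have enough members to start degraded, then re-try the mounts that
  # were waiting on them.  The "-" prefix makes failures non-fatal.
  ExecStartPre=-/sbin/mdadm -IRs
  ExecStartPre=-/usr/bin/systemctl start --no-block local-fs.target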
If this is the way forward, then we should obviously think about a
general mechanism that is useful not only for mdadm but also for other
layered storage implementations, such as dm-raid or multi-device btrfs,
and that still works when more than one of these technologies is stacked
on top of another. This by necessity leads to multiple emergency
missing-device handlers. A question then immediately appears: in which
order should the emergency handlers be tried, given that all that is
known at the time of the emergency is that some device listed in
/etc/fstab is missing? I suspect that the answer is "in arbitrary order"
or even "in parallel", but then there is a chance that one run of all of
them will not be enough.
This is not a criticism, just something to be fully thought out before
starting an implementation.
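To make the shape of the problem concrete, one very rough sketch (all unit names invented): a shared target that every storage layer's last-resort handler plugs into, so that systemd starts the handlers in parallel and the ordering question disappears:

  degraded-storage.target  (invented name)
  [Unit]
  Description=Last-resort activation of incomplete storage devices

  mdadm-degraded.service  (invented name; dm-raid or btrfs would ship similar units)
  [Unit]
  Description=Force-run md arrays that are still incomplete
  [Service]
  Type=oneshot
  ExecStart=/sbin/mdadm -IRs
  [Install]
  WantedBy=degraded-storage.target

The open part is still who pulls degraded-storage.target in, and how many times, when a device from /etc/fstab times out.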
--
Alexander E. Patrakov