Let's close this as our kernels pretty much all support ZFS and LXD is a
snap and therefore does not need additional userspace tools.
** Changed in: zfs-linux (Ubuntu)
Status: In Progress => Won't Fix
--
Marking the LXD side of this fixed as we're now shipping as a snap by
default and the snap contains zfs.
** Changed in: lxd (Ubuntu)
Status: Incomplete => Fix Released
--
** Changed in: zfs-linux (Ubuntu)
Status: Triaged => In Progress
** Changed in: zfs-linux (Ubuntu)
Assignee: (unassigned) => Colin Ian King (colin-king)
--
Colin: This is not what this issue is about.
This issue is about getting the ZFS tools installed by default in server
images, with the problem that doing so now would result in zfs-zed
running all the time for everyone, regardless of whether they use ZFS or
not.
What we want is:
- Don't load the zfs module unless it's actually needed
This was fixed in the following version:
zfs-linux (0.6.5.9-5ubuntu1) zesty; urgency=medium
* Resynchronize with Debian, remaining changes:
- Load zfs module unconditionally for zesty
-- Aron Xu Mon, 20 Mar 2017 11:24:41 +0800
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed
As a note here, previous versions of zfs-linux in Ubuntu loaded the zfs
kernel modules unconditionally for everyone who had the package
installed, and since zfs-linux/0.6.5.9-4 (synced from Debian) they are
only loaded when the system has at least one zpool configured
(ConditionPathExists=/etc/zfs/zpool.cache).
The pools are imported by either zfs-import-scan.service or zfs-import-
cache.service. (Which service runs depends on whether
/etc/zfs/zpool.cache exists.) They both call `zpool import -a` plus some
other arguments. In other words, `zpool import -a` is being run
unconditionally, whether pools exist or not.
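For reference, a quick way to inspect that gating on a given machine (this assumes a systemd-based install with zfsutils-linux present; the exact Condition and ExecStart lines vary by version):
  systemctl cat zfs-import-cache.service | grep -E 'Condition|ExecStart'
  systemctl cat zfs-import-scan.service | grep -E 'Condition|ExecStart'
  # The cache variant typically carries ConditionPathExists=/etc/zfs/zpool.cache,
  # the scan variant the negated condition, and both run a `zpool import -a`
  # style command as described above.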
> The udev event is going to fire before the pool is imported.
So how does a pool get imported? What triggers that, if it's not block
devices appearing? Whatever does that import, couldn't it start
zed.service then, instead of the udev rule?
--
@pitti, the ID_FS_TYPE is zfs_member, not zfs. The service is, as you
listed, zed.service. Your rule, modified accordingly, would be:
ACTION!="remove", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="zfs_member", ENV{SYSTEMD_WANTS}+="zed.service"
However, if zed.service is going to exit when there is no pool imported,
this won't work: the udev event is going to fire before the pool is
imported.
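For anyone who wants to experiment with the rule above anyway, a hedged sketch of dropping it in locally (the rules file name here is made up, not something the package ships):
  printf '%s\n' 'ACTION!="remove", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="zfs_member", ENV{SYSTEMD_WANTS}+="zed.service"' \
      | sudo tee /etc/udev/rules.d/90-zed-test.rules
  sudo udevadm control --reload
  sudo udevadm trigger --subsystem-match=block   # replay block uevents so the rule can match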
So the problem is that ZFS does its own handling of file-backed pools
without using a loop device, so there is no block uevent...
Triggering on a uevent would only work for pools that are using physical
devices, not for those backed by files, which is unfortunately a rather
common setup for LXD users who don't have a spare disk or partition to
dedicate to it.
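To make that concrete, a file-backed pool of the kind described can be created like this (names and size are illustrative); no loop device is involved, so there is no block uevent for a rule like the one above to match:
  truncate -s 10G /var/lib/lxd-pool.img        # sparse backing file, path is illustrative
  sudo zpool create lxdpool /var/lib/lxd-pool.img
  losetup -a | grep lxd-pool.img || echo "no loop device in use"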
> I believe the zed systemd unit should at the very least be modified
not to start inside containers
That can be done with ConditionVirtualization=!container
> (b) there were an /etc/default/zed which enabled one to disable zed
altogether.
Please don't do that. /etc/default files should never have an option
that disables a service outright.
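For reference, a minimal sketch of the ConditionVirtualization approach mentioned above, done as a local drop-in (the drop-in file name is made up; doing this in the packaged unit itself would be the real fix):
  sudo mkdir -p /etc/systemd/system/zed.service.d
  printf '[Unit]\nConditionVirtualization=!container\n' \
      | sudo tee /etc/systemd/system/zed.service.d/no-container.conf
  sudo systemctl daemon-reload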
Right, so unfortunately we can't base it on whether the zfs module is
loaded, as it will effectively always be loaded as soon as we pre-
install zfsutils-linux in our images.
Now what we could do I guess is:
- Don't start ANY of the 3 zfs systemd units in containers (that should be
pretty trivial)
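A hedged sketch of that, done by hand with masking inside the container (the three unit names are the ones mentioned elsewhere in this thread, which may or may not be the exact three meant here; the packaged fix would more likely use ConditionVirtualization as noted above):
  if systemd-detect-virt --container --quiet; then
      sudo systemctl mask zfs-import-cache.service zfs-import-scan.service zed.service
  fi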
The ZFS module is loaded automatically by zfs-import-scan/zfs-import-cache.
--
Right, the only issue is how to address the legitimate concerns of
people not using ZFS. Ideally, we'd be:
* detecting the need for it on boot and starting it if relevant (see the sketch below)
* also starting it whenever a zpool is created
This way, it's there whenever you need it, otherwise not. I'm pretty
sure the
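A hedged reading of the first bullet above, expressed as a plain runtime test rather than anything the packaging actually does (purely illustrative):
  # Start zed only if at least one pool is currently imported
  # (a crude stand-in for "detecting the need for it").
  if zpool list -H -o name 2>/dev/null | grep -q .; then
      sudo systemctl start zed.service
  fi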
It would be unfortunate not to have zed running once people start
creating pools, if only because zed provides quality feedback on faults;
and as rlaager pointed out, the fault management intelligence in zed
will only improve over time, so catching issues early with zed is part
of the story that makes ZFS compelling.
I think it would be acceptable, for now, if (a) the zed init script
avoided starting in containers, and (b) there were an /etc/default/zed
which enabled one to disable zed altogether. Would that cause a problem
for people creating their first pool, where their docs expect zed to be
running?
Mark
It is important that zed run in all cases where a pool (with real disks)
exists. This will only get more important over time, as the fault
management code is improved. (Intel is actively working on this.)
It seems reasonable to not start zed in a container, though.
For the second piece, only running zed when at least one pool exists
would address that concern.
So I've confirmed that the python3 side of this has been resolved.
zfsutils-linux as it is in yakkety has now been changed over to python3
by Colin.
Though, as discussed by e-mail when Dustin first brought this up, there
is a second issue I think we should have resolved before we start
installing it by default.
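As a quick external check of that dependency switch (package name as used in this thread; output naturally varies by release):
  apt-cache depends zfsutils-linux | grep -i python || echo "no python dependency listed"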
Gack. Is there a newer zfs-utils that is python3?
Mark
--
Marking as incomplete since we can't recommend zfsutils-linux so long as
it depends on python2.7.
** Changed in: lxd (Ubuntu)
Status: Triaged => Incomplete
** Changed in: lxd (Ubuntu)
Assignee: Stéphane Graber (stgraber) => (unassigned)