ZFS 0.7.9 was released in Cosmic (18.10). You could update to Cosmic.
Alternatively, on 18.04, you can install the HWE kernel package: linux-image-generic-hwe-18.04
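For reference, that's:
sudo apt install linux-image-generic-hwe-18.04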
--
Your upgrade is done, but for the record, installing the HWE kernel
doesn't remove the old kernel. So you still have the option to go back
to that in the GRUB menu.
Also, once you're sure the HWE kernel is working, you'll probably want
to remove the linux-image-generic package so you're not continuing to receive updates for both kernel series.
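Once you're satisfied with the HWE kernel, that would be something like:
sudo apt remove linux-image-generic
--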
What was the expected behavior from your perspective?
The ZFS utilities are useless without a ZFS kernel module. It seems to
me that this is working fine, and installing the ZFS utilities in this
environment doesn’t make sense.
--
I closed this as requested, but I'm actually going to reopen it to see
what people think about the following...
Is there a "default" kernel in Ubuntu? I think there is, probably linux-generic.
So perhaps this dependency should be changed:
OLD: zfs-modules | zfs-dkms
NEW: linux-generic | zfs-modules | zfs-dkms
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
--
What was the expected result? Are you expecting to be able to just
install ZFS in a container (but not use it)? Or are you expecting it to
actually work? The user space tools can’t do much of anything without
talking to the kernel.
--
The AES-GCM performance improvements patch has been merged to master. This also
included the changes to make encryption=on mean aes-256-gcm:
https://github.com/zfsonlinux/zfs/commit/31b160f0a6c673c8f926233af2ed6d5354808393
--
John Gray: Everything else aside, you should mirror your swap instead of
striping it (which I think is what you're doing). With your current
setup, if a disk dies, your system will crash.
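For example, a mirrored swap with mdadm looks roughly like this (partition names assumed; adjust for your disks):
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX2 /dev/sdY2
sudo mkswap /dev/md0
sudo swapon /dev/md0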
--
This is a tricky one because all of the dependencies make sense in
isolation. Even if we remove the dependency added by that upstream
OpenZFS commit, given that modern systems use zfs-mount-generator,
systemd-random-seed.service is going to Require= and After= var-lib.mount because of its RequiresMountsFor=/var/lib/systemd/random-seed.
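As a quick check, you can inspect the generated dependencies on an affected system with:
systemctl list-dependencies systemd-random-seed.service
--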
I didn't get a chance to test the patch. I'm running into unrelated
issues.
--
I think it used to be the case that zfsutils-linux depended on zfs-dkms
which was then provided by the kernel packages. That seems like a way to
solve this. Given that dkms is for dynamic kernel modules, it was always
a bit weird to see the kernel providing that. It should probably be that
zfsutils
Can you share a bit more details about how you have yours setup? What
does your partition table look like, what does the MD config look like,
what do you have in /etc/fstab for swap, etc.? I'm running into weird
issues with this configuration, separate from this bug.
@didrocks: I'll try to get thi
I have confirmed that the fix in -proposed fixes the issue for me.
--
brian-willoughby (and pranav.bhattarai):
The original report text confirms that "The exit code is 0, so update-grub does not fail as a result." That matches my understanding (as someone who has done a lot of ZFS installs maintaining the upstream Root-on-ZFS HOWTO) that this is purely cosmetic.
There is another AES-GCM performance acceleration commit for systems
without MOVBE.
--
seth-arnold, the ZFS default is acltype=off, which means that ACLs are
disabled. (I don't think the NFSv4 ACL support in ZFS is wired up on
Linux.) It's not clear to me why this is breaking with ACLs off.
--
I have submitted this upstream:
https://github.com/openzfs/zfs/pull/10388
--
The fix here seems fine, given that you're going for minimal impact in
an SRU. I agree that the character restrictions are such that the pool
names shouldn't actually need to be escaped. That's not to say that I
would remove the _proper_ quoting of variables that currently exists
upstream, as it's good practice regardless.
Can you provide the following details on your datasets' mountpoints:
zfs get mountpoint,canmount -t filesystem
--
You have two datasets with mountpoint=/ (and canmount=on), which is going
to cause problems like this.
vms/roots/mate-1804     mountpoint  /   local
vms/roots/mate-1804     canmount    on  default
vms/roots/xubuntu-1804  mountpoint  /   local
vms/roots/xubuntu-1804  canmount    on  default
As the error message indicates, /vms and /hp-data are not empty. ZFS, by
default, will not mount over non-empty directories.
There are many ways to fix this, but here's something that is probably
the safest:
Boot up in rescue mode. If it is imported, export the hp-data pool with
`zpool export hp-data`.
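A sketch of the rest, assuming both pools are exported at that point and you want to preserve the stray files rather than delete them:
zpool export hp-data
mv /vms /vms.bak && mkdir /vms
mv /hp-data /hp-data.bak && mkdir /hp-data
zpool import hp-data
--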
The size of the pool is not particularly relevant. It sounds like you
think I'm asking you to backup and restore your pool, which I definitely
am not. A pool "import" is somewhat like "mounting" a pool (though it's
not literally mounting, because mounting is something that happens with
filesystems).
That has the same error so you are using the same two pools. Please
follow the instructions I’ve given and fix this once so you are in a
fully working state. Once things are working, then you can retry
whatever upgrade steps you think break it.
--
Do NOT upgrade your bpool.
The dangerous warning is a known issue. There has been talk of an
upstream feature that would allow a nice fix for this, but nobody has
taken up implementing it yet. I wonder how hard it would be to
temporarily patch zpool status / zpool upgrade to not warn about /
upgrade the boot pool.
What is the installer doing for swap? The upstream HOWTO uses a zvol and
then this is necessary: “The RESUME=none is necessary to disable
resuming from hibernation. This does not work, as the zvol is not
present (because the pool has not yet been imported) at the time the
resume script runs. If it is not disabled, the boot process hangs waiting for the swap zvol to appear.”
I've commented upstream (with ZFS) that we should fake the pre-
allocation (i.e. return success from fallocate() when mode == 0) because
with ZFS it's worthless at best and counterproductive at worst:
https://github.com/zfsonlinux/zfs/issues/326#issuecomment-540162402
Replies (agreeing or disagreeing) there would be welcome.
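For context, this is the current behavior being described; on a ZFS dataset (mountpoint assumed here), plain preallocation just fails:
fallocate -l 1G /tank/testfile
fallocate: fallocate failed: Operation not supported
--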
The error is again related to something trying to mount at /. That means
you have something setup wrong. If it was setup properly, nothing should
be trying to _automatically_ (i.e. canmount=on) mount at /. (In a root-
on-ZFS setup, the root filesystem is canmount=noauto and mounted by the
initramfs.)
You had a setup with multiple root filesystems which each had
canmount=on and mountpoint=/. So they both tried to automatically mount
at /. (When booting in the root-on-ZFS config, one was already mounted
as your root filesystem.) ZFS, unlike other Linux filesystems, refuses
to mount over non-empty directories.
This is not a bug as far as I can see. This looks like the snapshot has
no unique data so its USED is 0. Note that REFER is non-zero.
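You can see that with, e.g.:
zfs list -t snapshot -o name,used,referenced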
** Changed in: zfs-linux (Ubuntu)
Status: New => Invalid
--
os-prober complaining about ZFS is a known issue. I don’t know if I
bothered to file a bug report, so this will probably be the report for
that.
Side question: where did you find an installer image with ZFS support? I
tried the daily yesterday but I had no ZFS option.
** Changed in: zfs-linux (Ubuntu)
The os-prober part is a duplicate of bug #1847632.
--
*** This bug is a duplicate of bug 1847628 ***
https://bugs.launchpad.net/bugs/1847628
** This bug has been marked a duplicate of bug 1847628
When using swap in ZFS, system stops when you start using swap
--
** Also affects: ubiquity (Ubuntu)
Importance: Undecided
Status: New
--
> "com.sun:auto-snapshot=false" do we need to add that or does our zfs
not support it?
You do not need that. That is used by some snapshot tools, but Ubuntu is
doing its own zsys thing.
--
This is probably an issue of incompatible pool features. Check what you
have active on the Ubuntu side:
zpool get all | grep feature | grep active
Then compare that to the chart here:
http://open-zfs.org/wiki/Feature_Flags
There is an as-yet-unimplemented proposal upstream to create a feature-compatibility mechanism for exactly this situation.
I'm not sure if userobj_accounting and/or project_quota have
implications for send stream compatibility, but my hunch is that they do
not. large_dnode is documented as being an issue, but since your
receiver supports that, that's not it.
I'm not sure what the issue is, nor what a good next step would be.
I received the email of your latest comment, but oddly I’m not seeing it
here.
Before you go to all the work to rebuild the system, I think you should
do some testing to determine exactly what thing is breaking the send
stream compatibility. From your comment about your laptop, it sounds
like you
Should it set KEYMAP=y too, like cryptsetup does?
I've created a PR upstream and done some light testing:
https://github.com/zfsonlinux/zfs/pull/9723
Are you able to confirm that this fixes the issue wherever you were
seeing it?
--
> I think "zfs mount -a" should NOT try to mount datasets with
> mountpoint "/"
There is no need for this to be (confusingly, IMHO) special-cased in
zfs mount.
You should set canmount=noauto on your root filesystems (the ones with
mountpoint=/). The initramfs handles mounting the selected root filesystem.
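For example (dataset name assumed):
zfs set canmount=noauto rpool/ROOT/ubuntu
--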
Which specific filesystems are failing to mount?
Typically, this situation occurs because something is misconfigured, so
the mount fails, so files end up inside what should otherwise be empty
mountpoint directories. Then, even once the original problem is fixed,
the non-empty directories prevent ZFS from mounting.
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
--
If the pool has an _active_ (and not "read-only compatible") feature
that GRUB does not understand, then GRUB will (correctly) refuse to load
the pool. Accordingly, you will be unable to boot.
Some features go active immediately, and others need you to enable some
filesystem-level feature or take some other action before they go active.
This is an interesting approach. I figured the installer should prompt
for encryption, and it probably still should, but if the performance
impact is minimal, this does have the nice property of allowing for
enabling encryption post-install.
It might be worthwhile (after merging the SIMD fixes) to
Here are some quick performance comparisons:
https://github.com/zfsonlinux/zfs/pull/9749#issuecomment-569132997
In summary, "the GCM run is approximately 1.15 times faster than the CCM
run. Please also note that this PR doesn't improve AES-CCM performance,
so if this gets merged, the speed difference will be even bigger."
I have come up with a potential security flaw with this design:
The user installs Ubuntu with this fixed passphrase. This is used to
derive the "user key", which is used to encrypt the "master key", which
is used to encrypt their data. The encrypted version of the master key
is obviously written to disk.
I put these questions to Tom Caputi, who wrote the ZFS encryption. The
quoted text below is what I asked him, and the unquoted text is his
response:
> 1. Does ZFS rewrite the wrapped/encrypted master key in place? If
> not, the old master key could be retrieved off disk, decrypted
> with the
Try adding "After=multipathd.service" to zfs-import-cache.service and
zfs-import-pool.service. If that fixes it, then we should probably add
that upstream.
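For testing, a systemd drop-in is the easiest way:
systemctl edit zfs-import-cache.service
# then add:
[Unit]
After=multipathd.service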
--
The last we heard on this, FreeBSD was apparently not receiving the send
stream, even though it supports large_dnode:
https://zfsonlinux.topicbox.com/groups/zfs-discuss/T187d60c7257e2eb6-M14bb2d52d4d5c230320a4f56/feature-incompatibility-between-ubuntu-19-10-and-freebsd-12-0
That's really bizarre.
So, one of two things is true:
A) ZFS on Linux is generating the stream incorrectly.
B) FreeBSD is receiving the stream incorrectly.
I don't have a good answer as to how we might differentiate those two.
Filing a bug report with FreeBSD might be a good next step. But like I
said, a compact reproducer would help a lot.
In terms of a compact reproducer, does this work:
# Create a temp pool with large_dnode enabled:
truncate -s 1G lp1854982.img
sudo zpool create -d -o feature@large_dnode=enabled lp1854982 $(pwd)/lp1854982.img
# Create a dataset with dnodesize=auto
sudo zfs create -o dnodesize=auto lp1854982/ldn
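The rest of the reproducer is cut off above; a plausible continuation would be to write some data, snapshot, and generate a send stream to test receiving on the FreeBSD side:
# Write data, snapshot, and generate a send stream:
sudo dd if=/dev/urandom of=/lp1854982/ldn/file bs=1M count=10
sudo zfs snapshot lp1854982/ldn@snap
sudo zfs send lp1854982/ldn@snap > lp1854982.stream
# Clean up when done:
sudo zpool destroy lp1854982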
** Changed in: zfs-linux (Ubuntu)
Status: New => Incomplete
--
The FreeBSD bug report:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=243730
Like I said, boiling this down to a test case would likely help a lot.
Refusing to do so and blaming the people giving you free software and
free support isn’t helpful.
** Bug watch added: bugs.freebsd.org/bugzilla/
There does seem to be a real bug here. The problem is that we don’t know
if it is on the ZoL side or the FreeBSD side. The immediate failure is
that “zfs recv” on the FreeBSD side is failing to receive the stream. So
that is the best place to start figuring out why. If it turns out that
ZoL is generating the stream incorrectly, the fix will need to happen there.
** Bug watch added: Github Issue Tracker for ZFS #9443
https://github.com/zfsonlinux/zfs/issues/9443
** Also affects: zfs via
https://github.com/zfsonlinux/zfs/issues/9443
Importance: Unknown
Status: Unknown
--
I've given this a lot of thought. For what it's worth, if it were my
decision, I would first put your time into making a small change to the
installer to get the "encryption on" case perfect, rather than the
proposal in this bug.
The installer currently has:
O Erase disk and install Ubuntu
Warning: This will delete all your programs, documents, photos, music, and any other files in all operating systems.
> It is not appropriate to require the user to type a password on every
> boot by default; this must be opt-in.
Agreed.
The installer should prompt (with a checkbox) for whether the user wants
encryption. It should default to off. If the user selects the checkbox,
prompt them for a passphrase. Se
We discussed this at the January 7th OpenZFS Leadership meeting. The
notes and video recording are now available.
The meeting notes are in the running document here (see page 2 right now, or
search for this Launchpad bug number):
https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoL
Your original scrub took just under 4.5 hours. Have you let the second
scrub run anywhere near that long? If not, start there.
The new scrub code uses a two-phase approach. First, it works through
metadata to determine which (on-disk) blocks to scrub. Second, it does
the actual scrub. This allows ZFS to issue the actual reads in a mostly
sequential order, which is much faster.
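You can see which phase it is in (pool name assumed):
zpool status tank
The scan: line reports separate "scanned" and "issued" numbers, corresponding to the two phases.
--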
This was added a LONG time ago. The interesting question here is: if you
previously deleted it, why did it come back? Had you deleted it though?
It sounds like you weren’t aware of this file.
You might want to edit it in place, even just to comment out the job.
That would force dpkg to give you a conffile prompt on a future upgrade.
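For example (this is the path the zfsutils-linux package uses for the scrub job, if I remember correctly):
sudoedit /etc/cron.d/zfsutils-linux
--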
zfs-linux (0.6.5.6-2) unstable; urgency=medium
...
* Scrub all healthy pools monthly from Richard Laager
So Debian stretch, but not Ubuntu 16.04.
Deleting the file should be safe, as dpkg should retain that. It sounds
like you never deleted it, as you didn’t have it before this upgrade. So
it didn’t really “come back”; it’s simply new to you.
@gustypants: Sorry, the other one is scan, not pool. Are you using a
multipath setup? Does the pool import fine if you do it manually once
booted?
--
I think there are multiple issues here. If it's just multipath, that
issue should be resolved by adding After=multipathd.service to zfs-
import-{cache,scan}.service.
For other issues, I wonder if this is cache file related. I'd suggest
checking that the cache file exists (I expect it would), and that it actually lists the pool in question.
This is a known issue which will hopefully be improved by 20.04 or so.
--
I’m not aware of anything new starting scrubs. Scrubs are throttled and
usually the complaint is that they are throttled too much, not too
little. Having two pools on the same disk is likely the issue. That
should be avoided, with the exception of a small boot pool on the same
disk as the root pool.
I haven't had a chance to write and test the zpool.cache copying. I keep
meaning to get to it every day, but keep pushing it back for lack of time.
The zfs-initramfs script in 16.04 (always) and in 18.04 (by default)
runs a plain `zpool import`.
ZoL 0.7.5 has a default search order for imports that pr
This is particularly annoying for me too.
All of my virtual machines use linux-image-generic because I need linux-image-extra to get the i6300esb watchdog driver for the KVM watchdog.
This change forces the amd64-microcode and intel-microcode packages to
be installed on all of my VMs.
--
@sdeziel, I agree 100%.
--
Is there something inherent in snaps that makes this easier or better
than debs? For example, do snaps support multiple installable versions
of the same package name?
If snaps aren’t inherently better, the same thing could be done with
debs using the usual convention for having multiple versions in the package name.
I don't have permissions to change this, but my recommendation would be
to set this as "Won't Fix". It's my understanding that zfs-auto-snapshot
is more-or-less unmaintained upstream. I know I've seen recommendations
to switch to something else (e.g. sanoid) on issues there.
--
Try adding initramfs as an option in /etc/crypttab. That's the approach
I use when putting the whole pool on a LUKS device, and is necessary due
to: https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906
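For example, a whole-pool-on-LUKS crypttab line might look like this (name and UUID assumed):
rpool_crypt UUID=<uuid-of-luks-partition> none luks,initramfs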
--
If the pool is on top of LUKS (a relatively common configuration when
ZFS and cryptsetup are both being used), then you'd need cryptsetup
first. My advice is that you should either stop encrypting swap or start
encrypting the whole pool. Hopefully in another (Ubuntu) release or two,
we'll have native ZFS encryption available.
I really don’t know what to suggest here. As you mentioned, this used to
work. If you are only using LUKS for swap, maybe you could just remove
it from crypttab and run the appropriate commands manually in rc.local
or a custom systemd unit.
--
This has presumably regressed in 17.04 as they replaced the initrd code
(somewhat by accident). I'm going to review all this stuff and get some
fixes in soon, so it should be re-fixed by the next LTS.
--
I suspect that wouldn't work, for one reason or another. The upstream
one has more features, probably.
I'd rather just keep this Debian-specific. The initramfs script is
likely to have distro specific code. I don't see the idea of one unified
script working out well.
--
Why is the second disk missing? If you accidentally added it and ended
up with a striped pool, as long as both disks are connected, you can
import the pool normally. Then use the new device_removal feature to
remove the new disk from the pool.
If you've done something crazy like pulled the disk and wiped it, though, that's a much harder problem.
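Assuming the pool imports normally, the removal itself is just (pool/device names assumed):
sudo zpool remove tank sdb
sudo zpool status tank
--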
device_removal only works if you can import the pool normally. That is
what you should have used after you accidentally added the second disk
as another top-level vdev. Whatever you have done in the interim,
though, has resulted in the second device showing as FAULTED. Unless you
can fix that, device_removal is not going to work.
That sounds like a missing dependency on python3-distutils.
But unless you're running a custom kernel, Ubuntu is shipping the ZFS module
now:
https://bugs.launchpad.net/ubuntu/+source/linux-raspi/+bug/1884110
--
See also this upstream PR: https://github.com/openzfs/zfs/pull/9414
and the one before it: https://github.com/openzfs/zfs/pull/8667
--
Here is a completely untested patch that takes a different approach to
the same issue. If this works, it seems more suitable for upstreaming,
as the existing list_zvols seems to be the place where properties are
checked. Can either of you test this? If this looks good, I'll submit it
upstream.
--
I've posted this upstream (as a draft PR, pending testing) at:
https://github.com/openzfs/zfs/pull/10662
--
Did you destroy and recreate the pool after disabling dedup? Otherwise
you still have the same dedup table and haven’t really accomplished
much.
--
You could shrink the DDT by making a copy of the files in place (with
dedup off) and deleting the old file. That only requires enough extra
space for a single file at a time. This assumes no snapshots.
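A sketch of the per-file rewrite (illustrative; verify the new copy before replacing the original):
cp -a bigfile bigfile.new && mv bigfile.new bigfile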
If you need to preserve snapshots, another option would be to send|recv
a dataset at a time. If you have enough free space for a copy of the largest dataset, that preserves snapshots too.
The limit in the code does seem to be 64 MiB. I'm not sure why this
isn't working. I am not even close to an expert on this part of OpenZFS,
so all I can suggest is to file a bug report upstream:
https://github.com/openzfs/zfs/issues/new
--
This has regressed in Zesty, because someone replaced the zfs-initramfs
script.
** Changed in: zfs-linux (Ubuntu)
Status: Fix Released => Confirmed
--
On 04/22/2017 12:36 PM, Sam Van den Eynde wrote:
> Experienced this with Ubuntu Zesty. Xenial seems to ship with a
> different zfs script for the initrd.
Who completely replaced the zfs-initramfs script?
Was there a particular reason for this massive change, and was it
discussed anywhere?
This c
** Tags added: regression-release
--
** Tags removed: patch
** Tags added: regression-release zesty
--
On 04/24/2017 12:02 AM, Petter Reinholdtsen wrote:
> [Richard Laager]
>> Who completely replaced the zfs-initramfs script?
>
> You can find out who committed what in the Debian package by looking in
> the package maintenance git repository available from
> http://anonscm.deb
Why do you have multiple pools on the same disks? That's very much not a
best practice, or even a typical ZFS installation.
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Incomplete
--
ZFS already limits the amount of IO that a scrub can do. Putting
multiple pools on the same disk defeats ZFS's IO scheduler.* Scrubs are
just one example of the performance problems that will cause. I don't
think we should complicate the scrub script to accommodate this
scenario.
My suggestion is to avoid putting multiple pools on the same disk in the first place.
I have a related question... as far as I'm aware, the ZoL
kernel<->userspace interface is still not versioned:
https://github.com/zfsonlinux/zfs/issues/1290
Effectively, this means that the version of zfsutils-linux must always
match the version of the kernel modules. What is the plan to handle this?
I need to do some testing, but we might want to consider using the cache
file. An approach (suggested to me by ryao, I think) was that we first
import the root pool read-only, copy the cache file out of it, export
the pool, and then import the pool read-write using the cache file.
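A rough sketch of that flow, as run from the initramfs (pool/dataset names assumed):
zpool import -N -o readonly=on rpool
mount -o ro,zfsutil -t zfs rpool/ROOT/ubuntu /mnt
cp /mnt/etc/zfs/zpool.cache /etc/zfs/zpool.cache
umount /mnt
zpool export rpool
zpool import -N -c /etc/zfs/zpool.cache rpool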
--
I fixed this upstream; the fix was released in 0.7.4. Bionic has 0.7.5.
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Fix Committed
--
** Changed in: zfs-linux (Ubuntu)
Status: Confirmed => Fix Released
--
16.04's HWE kernel updates will top out at the kernel version shipped in
18.04. I assume this is because, at that point, you can just upgrade to 18.04.
See:
https://wiki.ubuntu.com/Kernel/RollingLTSEnablementStack
as linked from:
https://wiki.ubuntu.com/Kernel/LTSEnablementStack
--
Native encryption was merged to master but has not been released in a
tagged version. There are actually a couple of issues that will result
in on-disk format changes. It should be the major feature for the 0.8.0
release.
** Changed in: zfs-linux (Ubuntu)
Status: New => Invalid
--
zfs-load-module.service seems to have a Requires on itself? That has to
be wrong.
Also, zfs-import-cache.service and zfs-import-scan.service need an
After=zfs-load-module.service. They're not getting one automatically
because of DefaultDependencies=no (which seems appropriate here, so
leave that alone).
I updated to the version from -proposed and rebooted. I verified that no
units failed on startup.
** Tags added: verification-done-artful
--
Public bug reported:
I just noticed on my test VM of artful that zfs-import-cache.service
does not have a ConditionPathExists=/etc/zfs/zpool.cache. Because of
that, it fails on startup, since the cache file does not exist.
This line is being deleted by
debian/patches/ubuntu-load-zfs-unconditionally.patch.
samvde, can you provide your `zfs list` output? The script seems
designed to only import filesystems *below* the filesystem that is the
root filesystem. In the typical case, the root filesystem is something
like rpool/ROOT/ubuntu. There typically shouldn't be children of
rpool/ROOT/ubuntu.
--