Re: [Kernel-packages] [Bug 1838276] Re: zfs-module dependency selects random kernel package to install
Would turning the hard dependency into a somewhat softer recommendation be a possible solution? Does zfsutils actually require the module to be installed, or would rw access to /dev/zfs suffice?

Regards,
Hajo Möller

Richard Laager wrote on Wed., 31 July 2019, 00:50:

> I closed this as requested, but I'm actually going to reopen it to see
> what people think about the following...
>
> Is there a "default" kernel in Ubuntu? I think there is, probably linux-
> generic.
>
> So perhaps this dependency should be changed:
> OLD: zfs-modules | zfs-dkms
> NEW: linux-generic | zfs-modules | zfs-dkms
>
> That way, if you have something satisfying the zfs-modules dependency,
> it is fine. If you don't, it will install the default kernel.
>
> On the other hand, if you don't already have the default kernel, you're
> clearly in some sort of special case, so I'm not sure what sane thing
> can be done. So that might argue against this.
>
> --
> You received this bug notification because you are subscribed to zfs-
> linux in Ubuntu.
> Matching subscriptions: zfs-linux
> https://bugs.launchpad.net/bugs/1838276
>
> Title:
>   zfs-module dependency selects random kernel package to install
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1838276/+subscriptions

--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1838276

Title:
  zfs-module dependency selects random kernel package to install

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  In MAAS (ephemeral environment) or LXD, where no kernel package is
  currently installed, installing the zfsutils-linux package will pull
  in a kernel package via the zfs-modules dependency.
  1) # lsb_release -rd
     Description:  Ubuntu Eoan Ermine (development branch)
     Release:      19.10

  2) n/a

  3) zfsutils-linux installed without pulling in a random kernel

  4) # apt install --dry-run zfsutils-linux
     Reading package lists... Done
     Building dependency tree
     Reading state information... Done
     The following additional packages will be installed:
       grub-common grub-gfxpayload-lists grub-pc grub-pc-bin grub2-common
       libzfs2linux libzpool2linux linux-image-unsigned-5.0.0-1010-oem-osp1
       linux-modules-5.0.0-1010-oem-osp1 os-prober zfs-zed
     Suggested packages:
       multiboot-doc grub-emu xorriso desktop-base fdutils
       linux-oem-osp1-tools linux-headers-5.0.0-1010-oem-osp1
       nfs-kernel-server samba-common-bin zfs-initramfs | zfs-dracut
     The following NEW packages will be installed:
       grub-common grub-gfxpayload-lists grub-pc grub-pc-bin grub2-common
       libzfs2linux libzpool2linux linux-image-unsigned-5.0.0-1010-oem-osp1
       linux-modules-5.0.0-1010-oem-osp1 os-prober zfs-zed zfsutils-linux
     0 upgraded, 12 newly installed, 0 to remove and 1 not upgraded.
     Inst grub-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
     Inst grub2-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
     Inst grub-pc-bin (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
     Inst grub-pc (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64]) []
     Inst grub-gfxpayload-lists (0.7 Ubuntu:19.10/eoan [amd64])
     Inst linux-modules-5.0.0-1010-oem-osp1 (5.0.0-1010.11 Ubuntu:19.10/eoan [amd64])
     Inst linux-image-unsigned-5.0.0-1010-oem-osp1 (5.0.0-1010.11 Ubuntu:19.10/eoan [amd64])
     Inst os-prober (1.74ubuntu2 Ubuntu:19.10/eoan [amd64])
     Inst libzfs2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
     Inst libzpool2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
     Inst zfsutils-linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
     Inst zfs-zed (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
     Conf grub-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
     Conf grub2-common (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
     Conf grub-pc-bin (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
     Conf grub-pc (2.04-1ubuntu2 Ubuntu:19.10/eoan [amd64])
     Conf grub-gfxpayload-lists (0.7 Ubuntu:19.10/eoan [amd64])
     Conf linux-modules-5.0.0-1010-oem-osp1 (5.0.0-1010.11 Ubuntu:19.10/eoan [amd64])
     Conf linux-image-unsigned-5.0.0-1010-oem-osp1 (5.0.0-1010.11 Ubuntu:19.10/eoan [amd64])
     Conf os-prober (1.74ubuntu2 Ubuntu:19.10/eoan [amd64])
     Conf libzfs2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
     Conf libzpool2linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
     Conf zfsutils-linux (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])
     Conf zfs-zed (0.8.1-1ubuntu7 Ubuntu:19.10/eoan [amd64])

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1838276/+subscriptions

--
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
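[Editor's note] The dependency reordering quoted in the first message of this thread would amount to changing the package relationship roughly as below. This is a sketch only: the real stanza in the zfsutils-linux debian/control carries more fields, and the exact relationship type may differ in the archive.

```
# Current relationship: if nothing provides zfs-modules, apt picks an
# arbitrary provider, e.g. the -oem kernel seen in the dry-run above.
Depends: zfs-modules | zfs-dkms

# Proposed reordering: if nothing already satisfies the alternative,
# apt installs the default kernel (linux-generic) instead.
Depends: linux-generic | zfs-modules | zfs-dkms
```

apt resolves an unsatisfied alternative list left to right, which is why putting linux-generic first makes it the fallback while leaving already-satisfied systems untouched.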
Re: [Kernel-packages] [Bug 1766308] Re: inexplicably large file reported by zfs filesystem
Zero-filled data is not compressed by the configured compression algorithm; it is filtered out earlier by zero-length encoding, so the zeros never reach lz4 and compressratio does not include them.

https://bugs.launchpad.net/bugs/1766308

Title:
  inexplicably large file reported by zfs filesystem

Status in zfs-linux package in Ubuntu:
  Triaged

Bug description:
  I have a zfs filesystem containing a single qemu kvm disk image. The
  vm has been working normally. The image file is only allocated 10G;
  however, today I became aware that the file, when examined from the
  ZFS host (hypervisor), reports an inexplicable, massive file size of
  around 12T. 12T is larger than the pool itself. Snapshots or other
  filesystem features should not be involved. I'm suspicious that the
  file(system?) has been corrupted.

  root@fusion:~# ls -l /data/store/vms/plexee/
  total 6164615
  -rw-r--r-- 1 root root 12201321037824 Apr 23 11:49 plexee-root
  ^^ !!
  root@fusion:~# qemu-img info /data/store/vms/plexee/plexee-root
  image: /data/store/vms/plexee//plexee-root
  file format: qcow2
  virtual size: 10G (10737418240 bytes)
  disk size: 5.9G
  cluster_size: 65536
  Format specific information:
      compat: 1.1
      lazy refcounts: false
      refcount bits: 16
      corrupt: false

  root@fusion:~# zfs list rpool/DATA/fusion/store/vms/plexee
  NAME                                 USED  AVAIL  REFER  MOUNTPOINT
  rpool/DATA/fusion/store/vms/plexee  5.88G   484G  5.88G  /data/store/vms/plexee

  root@fusion:~# zfs get all rpool/DATA/fusion/store/vms/plexee
  NAME                                PROPERTY         VALUE                   SOURCE
  rpool/DATA/fusion/store/vms/plexee  type             filesystem              -
  rpool/DATA/fusion/store/vms/plexee  creation         Mon Mar 26  9:50 2018   -
  rpool/DATA/fusion/store/vms/plexee  used             5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  available        484G                    -
  rpool/DATA/fusion/store/vms/plexee  referenced       5.88G                   -
  rpool/DATA/fusion/store/vms/plexee  compressratio    1.37x                   -
  rpool/DATA/fusion/store/vms/plexee  mounted          yes                     -
  rpool/DATA/fusion/store/vms/plexee  quota            none                    default
  rpool/DATA/fusion/store/vms/plexee  reservation      none                    default
  rpool/DATA/fusion/store/vms/plexee  recordsize       128K                    default
  rpool/DATA/fusion/store/vms/plexee  mountpoint       /data/store/vms/plexee  inherited from rpool/DATA/fusion
  rpool/DATA/fusion/store/vms/plexee  sharenfs         off                     default
  rpool/DATA/fusion/store/vms/plexee  checksum         on                      default
  rpool/DATA/fusion/store/vms/plexee  compression      lz4                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  atime            off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  devices          off                     inherited from rpool
  rpool/DATA/fusion/store/vms/plexee  exec             on                      default
  rpool/DATA/fusion/store/vms/plexee  setuid           on                      default
  rpool/DATA/fusion/store/vms/plexee  readonly         off                     default
  rpool/DATA/fusion/store/vms/plexee  zoned            off                     default
  rpool/DATA/fusion/store/vms/plexee  snapdir          hidden                  default
  rpool/DATA/fusion/store/vms/plexee  aclinherit       restricted              default
  rpool/DATA/fusion/store/vms/plexee  canmount         on                      default
  rpool/DATA/fusion/store/vms/plexee  xattr            on                      default
  rpool/DATA/fusion/store/vms/plexee  copies           1                       default
  rpool/DATA/fusion/store/vms/plexee  version          5                       -
  rpool/DATA/fusion/store/vms/plexee  utf8only         off                     -
  rpool/DATA/fusion/store/vms/plexee  normalization    none                    -
  rpool/DATA/fusion/store/vms/plexee  casesensitivity  sensitive               -
  rpool/DATA/fusion/store/vms/plexee  vscan            off                     default
  rpool/DATA/fusion/store/vms/plexee  nbmand           off                     default
  rpool/DATA/fusion/store/vms/plexee  sharesmb         off                     default
  rpool/DATA/fusion/store/vms/plexee  refquota         none                    default
  rpool/DATA/fusion/store/vms/plexee  refreservation   none                    default
  rpool/DATA/fusion/
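[Editor's note] The comment at the top of this message can be illustrated with a toy model. This is illustrative arithmetic, not ZFS source code: all-zero blocks are detected before the compressor runs and are stored as holes, so they contribute to neither the logical nor the physical side of compressratio. zlib stands in here for lz4.

```python
import zlib

RECORDSIZE = 128 * 1024  # default ZFS recordsize in bytes

def model_compressratio(blocks):
    """Toy model of compressratio: logical vs physical bytes, counting
    only blocks that actually reach the compression pipeline."""
    logical = physical = 0
    for block in blocks:
        if not any(block):      # all-zero block: stored as a hole,
            continue            # skipped before compression entirely
        logical += len(block)
        physical += len(zlib.compress(block))
    return logical / physical if physical else 1.0

# A sparse-image-like file: mostly zeros plus one block of real data.
zeros = bytes(RECORDSIZE)
data = (b"some moderately compressible payload " * 4000)[:RECORDSIZE]
mostly_zero_file = [zeros] * 100 + [data]

# The zeros vanish from the ratio; only the real block is measured.
print(round(model_compressratio(mostly_zero_file), 2))
```

Adding or removing zero blocks leaves the modelled ratio unchanged, which is exactly the behaviour the bug reporters below found surprising.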
[Kernel-packages] [Bug 1683487] Re: zfs-dkms 0.6.5.9-5ubuntu4: zfs kernel module failed to build. Upgrading from 16.10 to 17.04
*** This bug is a duplicate of bug 1683340 ***
    https://bugs.launchpad.net/bugs/1683340

** This bug has been marked a duplicate of bug 1683340
   zfs-dkms 0.6.5.9-5ubuntu4: zfs kernel module failed to build

https://bugs.launchpad.net/bugs/1683487

Title:
  zfs-dkms 0.6.5.9-5ubuntu4: zfs kernel module failed to build.
  Upgrading from 16.10 to 17.04

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Failure after upgrading from 16.10 to 17.04

  ProblemType: Package
  DistroRelease: Ubuntu 17.04
  Package: zfs-dkms 0.6.5.9-5ubuntu4
  ProcVersionSignature: Ubuntu 4.10.0-19.21-generic 4.10.8
  Uname: Linux 4.10.0-19-generic x86_64
  NonfreeKernelModules: openafs zfs zunicode zavl zcommon znvpair
  ApportVersion: 2.20.4-0ubuntu4
  Architecture: amd64
  DKMSBuildLog:
   DKMS make.log for zfs-0.6.5.9 for kernel 4.10.0-19-generic (x86_64)
   Mon Apr 17 11:33:32 CDT 2017
   make: *** No targets specified and no makefile found. Stop.
  DKMSKernelVersion: 4.10.0-19-generic
  Date: Mon Apr 17 11:33:34 2017
  InstallationDate: Installed on 2016-12-26 (111 days ago)
  InstallationMedia: Ubuntu 16.10 "Yakkety Yak" - Release amd64 (20161012.2)
  PackageArchitecture: all
  PackageVersion: 0.6.5.9-5ubuntu4
  RelatedPackageVersions:
   dpkg 1.18.10ubuntu2
   apt  1.4
  SourcePackage: zfs-linux
  Title: zfs-dkms 0.6.5.9-5ubuntu4: zfs kernel module failed to build
  UpgradeStatus: Upgraded to zesty on 2017-04-17 (0 days ago)
[Kernel-packages] [Bug 1597895] [NEW] i915 crashes, lightdm unresponsive and VT switching impossible
Public bug reported:

Some times after locking the screen and waking it up some time later via Fn (XF86WakeUp), lightdm accepts no input and Ctrl+Alt+Fx no longer works to switch to another VT. This is not always reproducible, but I think it's triggered by clicking on lightdm's login dialog before it's ready.

What usually happens when I wake up the screen:
- the screen shows the lightdm dialog without a blinking cursor for a second
- screen turns black (off?) for a second, as if its resolution changes
- screen comes back, shows lightdm dialog, cursor begins to blink

I will try to reproduce by clicking on the dialog before the screen goes black soon.

This may be related to https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/1568604 - which affects my installation, too.

Here's the relevant snippet from /var/log/syslog; I pressed the power button a few seconds after realizing the lock-up:

Jun 30 16:24:26 sbooblehat lightdm[4225]: ** (lightdm:4225): WARNING **: Error using VT_WAITACTIVE 7 on /dev/tty0: Interrupted system call
Jun 30 16:24:26 sbooblehat org.gtk.vfs.Daemon[16609]: A connection to the bus can't be made
Jun 30 16:24:28 sbooblehat acpid: client 16583[0:0] has disconnected
Jun 30 16:24:29 sbooblehat systemd[1]: Started Getty on tty6.
Jun 30 16:24:46 sbooblehat systemd[1]: Stopping User Manager for UID 108...
Jun 30 16:24:46 sbooblehat systemd[16595]: Reached target Shutdown.
Jun 30 16:24:46 sbooblehat systemd[16595]: Starting Exit the Session...
Jun 30 16:24:46 sbooblehat systemd[16595]: Stopped target Default.
Jun 30 16:24:46 sbooblehat systemd[16595]: Stopped target Basic System.
Jun 30 16:24:46 sbooblehat systemd[16595]: Stopped target Paths.
Jun 30 16:24:46 sbooblehat systemd[16595]: Stopped target Timers.
Jun 30 16:24:46 sbooblehat systemd[16595]: Stopped target Sockets.
Jun 30 16:24:46 sbooblehat systemd[16595]: Received SIGRTMIN+24 from PID 19198 (kill).
Jun 30 16:24:46 sbooblehat systemd[1]: Stopped User Manager for UID 108.
Jun 30 16:24:46 sbooblehat systemd[1]: Removed slice User Slice of lightdm.
Jun 30 16:25:01 sbooblehat CRON[19216]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Jun 30 16:29:41 sbooblehat acpid: client connected from 19395[0:0]
Jun 30 16:29:41 sbooblehat kernel: [23583.257132] [ cut here ]
Jun 30 16:29:41 sbooblehat kernel: [23583.257170] WARNING: CPU: 2 PID: 4240 at /build/linux-BvkamA/linux-4.4.0/drivers/gpu/drm/drm_irq.c:1326 drm_wait_one_vblank+0x1b5/0x1c0 [drm]()
Jun 30 16:29:41 sbooblehat kernel: [23583.257170] vblank wait timed out on crtc 0
Jun 30 16:29:41 sbooblehat kernel: [23583.257206] Modules linked in: rfcomm xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 xt_tcpudp ebtable_filter ebtables ip6table_filter ip6_tables pci_stub vboxpci(OE) vboxnetadp(OE) vboxnetflt(OE) vboxdrv(OE) ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype iptable_filter ip_tables xt_conntrack x_tables nf_nat nf_conntrack br_netfilter bridge stp llc aufs bnep qmi_wwan cdc_wdm usbnet mii nls_iso8859_1 arc4 uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_v4l2 videobuf2_core v4l2_common videodev media btusb btrtl btbcm btintel bluetooth qcserial usb_wwan usbserial intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul snd_hda_codec_hdmi snd_hda_codec_conexant ath9k snd_hda_codec_generic ath9k_common ath9k_hw snd_hda_intel snd_hda_codec input_leds snd_hda_core joydev snd_hwdep ath serio_raw mac80211 snd_pcm cfg80211 lpc_ich thinkpad_acpi nvram snd_seq_midi snd_seq_midi_event mei_me shpchp mei snd_rawmidi snd_seq snd_seq_device snd_timer snd soundcore mac_hid kvm_intel kvm irqbypass ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi parport_pc ppdev lp sunrpc parport autofs4 zfs(PO) zunicode(PO) zcommon(PO) znvpair(PO) spl(O) zavl(PO) btrfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear i915 aesni_intel aes_x86_64 glue_helper lrw gf128mul ablk_helper cryptd i2c_algo_bit drm_kms_helper syscopyarea psmouse e1000e sysfillrect sdhci_pci sysimgblt sdhci fb_sys_fops ahci drm libahci ptp pps_core wmi fjes video
Jun 30 16:29:41 sbooblehat kernel: [23583.257257] CPU: 2 PID: 4240 Comm: Xorg Tainted: P OE 4.4.0-28-generic #47-Ubuntu
Jun 30 16:29:41 sbooblehat kernel: [23583.257258] Hardware name: LENOVO 4173AM4/4173AM4, BIOS 8CET59WW (1.39 ) 04/29/2015
Jun 30 16:29:41 sbooblehat kernel: [23583.257262] 0286 36d2e1e7 8803d1c9f838 813eb1a3
Jun 30 16:29:41 sbooblehat kernel: [23583.257264] 8803d1c9f880 c007eae8 8803d1c9f870 81081102
Jun 30 16:29:41 sbooblehat kernel: [23583.257265] 880404860800 137a
Jun 30 16:29:41 sbooblehat kernel: [23583.257266] Ca
[Kernel-packages] [Bug 1597895] Re: i915 crashes, lightdm unresponsive and VT switching impossible
If I remember correctly the issue appeared shortly before 16.04 was released, but I may have overlooked it before. I will test the latest upstream kernel once I find time to create a fresh installation on some other media; this system boots off ZFS and thus doesn't work with 4.7 as of now.

https://bugs.launchpad.net/bugs/1597895

Title:
  i915 crashes, lightdm unresponsive and VT switching impossible

Status in linux package in Ubuntu:
  Incomplete
[Kernel-packages] [Bug 1636517] Re: zfs: importing zpool with vdev on zvol hangs kernel
I proposed a patch making use of autoconf upstream at https://github.com/zfsonlinux/zfs/pull/5336

https://bugs.launchpad.net/bugs/1636517

Title:
  zfs: importing zpool with vdev on zvol hangs kernel

Status in linux package in Ubuntu:
  Triaged
Status in zfs-linux package in Ubuntu:
  New

Bug description:
  If a zvol of an existing, already imported zpool is a vdev of another
  zpool, a call to "zpool import" will hang everything zfs related.

  The stack trace is as follows:
  [] taskq_wait+0x74/0xe0 [spl]
  [] taskq_destroy+0x4b/0x100 [spl]
  [] vdev_open_children+0x12d/0x180 [zfs]
  [] vdev_root_open+0x3c/0xc0 [zfs]
  [] vdev_open+0xf5/0x4d0 [zfs]
  [] spa_load+0x39e/0x1c60 [zfs]
  [] spa_tryimport+0xad/0x450 [zfs]
  [] zfs_ioc_pool_tryimport+0x64/0xa0 [zfs]
  [] zfsdev_ioctl+0x44b/0x4e0 [zfs]
  [] do_vfs_ioctl+0x29f/0x490
  [] SyS_ioctl+0x79/0x90
  [] entry_SYSCALL_64_fastpath+0x16/0x71
  [] 0x

  I traced this back to 193fb6a2c94fab8eb8ce70a5da4d21c7d4023bee (merged
  in 4.4.0-6.21), which added a second parameter to lookup_bdev without
  patching the zfs module (which needs to special-case the vdev-on-zvol
  case, and uses this exact method only in this special-casing code
  path).

  Attached you can find the output of "zfs send -R"-ing such a zvol
  ("brokenvol.raw"); running "zfs receive POOL/TARGET < FILE" followed
  by "zpool import" should reproduce the hang.
  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: linux-image-4.4.0-45-generic 4.4.0-45.66
  ProcVersionSignature: Ubuntu 4.4.0-45.66-generic 4.4.21
  Uname: Linux 4.4.0-45-generic x86_64
  NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
  AlsaDevices:
   total 0
   crw-rw 1 root audio 116,  1 Oct 25 15:46 seq
   crw-rw 1 root audio 116, 33 Oct 25 15:46 timer
  AplayDevices: Error: [Errno 2] No such file or directory: 'aplay'
  ApportVersion: 2.20.1-0ubuntu2.1
  Architecture: amd64
  ArecordDevices: Error: [Errno 2] No such file or directory: 'arecord'
  AudioDevicesInUse: Error: command ['fuser', '-v', '/dev/snd/seq', '/dev/snd/timer'] failed with exit code 1:
  Date: Tue Oct 25 15:49:51 2016
  HibernationDevice: RESUME=/dev/mapper/xenial--vg-swap_1
  InstallationDate: Installed on 2016-10-25 (0 days ago)
  InstallationMedia: Ubuntu-Server 16.04.1 LTS "Xenial Xerus" - Release amd64 (20160719)
  IwConfig: Error: [Errno 2] No such file or directory: 'iwconfig'
  Lsusb: Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
  MachineType: QEMU Standard PC (i440FX + PIIX, 1996)
  PciMultimedia:
  ProcFB: 0 qxldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.4.0-45-generic root=/dev/mapper/hostname--vg-root ro
  RelatedPackageVersions:
   linux-restricted-modules-4.4.0-45-generic N/A
   linux-backports-modules-4.4.0-45-generic  N/A
   linux-firmware                            1.157.4
  RfKill: Error: [Errno 2] No such file or directory: 'rfkill'
  SourcePackage: linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 04/01/2014
  dmi.bios.vendor: SeaBIOS
  dmi.bios.version: rel-1.9.3-0-ge2fc41e-prebuilt.qemu-project.org
  dmi.chassis.type: 1
  dmi.chassis.vendor: QEMU
  dmi.chassis.version: pc-i440fx-2.7
  dmi.modalias: dmi:bvnSeaBIOS:bvrrel-1.9.3-0-ge2fc41e-prebuilt.qemu-project.org:bd04/01/2014:svnQEMU:pnStandardPC(i440FX+PIIX,1996):pvrpc-i440fx-2.7:cvnQEMU:ct1:cvrpc-i440fx-2.7:
  dmi.product.name: Standard PC (i440FX + PIIX, 1996)
  dmi.product.version: pc-i440fx-2.7
  dmi.sys.vendor: QEMU
[Kernel-packages] [Bug 1636517] Re: zfs: importing zpool with vdev on zvol hangs kernel
Colin, thank you for the fix, I will switch to xenial-proposed now.

As a followup, upstream merged my PR, so 1014-kernel-lookup-bdev.patch may be removed once the next ZFS on Linux release gets synced to us, hopefully in time for zesty.

https://bugs.launchpad.net/bugs/1636517

Title:
  zfs: importing zpool with vdev on zvol hangs kernel

Status in linux package in Ubuntu: Triaged
Status in zfs-linux package in Ubuntu: Fix Released
Status in linux source package in Xenial: New
Status in zfs-linux source package in Xenial: Fix Committed
Status in linux source package in Yakkety: New
Status in zfs-linux source package in Yakkety: Fix Committed
Status in linux source package in Zesty: Triaged
Status in zfs-linux source package in Zesty: Fix Released

Bug description:
  [SRU Request][Xenial][Yakkety]

  If a zvol of an existing, already imported zpool is a vdev of another
  zpool, a call to "zpool import" will hang everything zfs related.

  The stack trace is as follows:
  [] taskq_wait+0x74/0xe0 [spl]
  [] taskq_destroy+0x4b/0x100 [spl]
  [] vdev_open_children+0x12d/0x180 [zfs]
  [] vdev_root_open+0x3c/0xc0 [zfs]
  [] vdev_open+0xf5/0x4d0 [zfs]
  [] spa_load+0x39e/0x1c60 [zfs]
  [] spa_tryimport+0xad/0x450 [zfs]
  [] zfs_ioc_pool_tryimport+0x64/0xa0 [zfs]
  [] zfsdev_ioctl+0x44b/0x4e0 [zfs]
  [] do_vfs_ioctl+0x29f/0x490
  [] SyS_ioctl+0x79/0x90
  [] entry_SYSCALL_64_fastpath+0x16/0x71
  [] 0x

  [Fix]
  zfsutils-linux:
  Zesty: https://launchpadlibrarian.net/290907232/zfs-linux_0.6.5.8-0ubuntu4_0.6.5.8-0ubuntu5.diff.gz
  Yakkety, likewise
  Xenial, likewise

  Sync'd fixes into kernel repos, patches in:
  http://kernel.ubuntu.com/~cking/zfs-lp-1636517

  [Regression Potential]
  Minimal. This just touched one line in the zfs module
  (module/zfs/zvol.c) and a shim wrapper in
  include/linux/blkdev_compat.h. Tested and passes with the ubuntu
  kernel team autotest client zfs regression tests.
Re: [Kernel-packages] [Bug 1639963] [NEW] zfs get compressratio returns bogus results
ZFS works as intended here. Zero-filled blocks are not counted into the compression ratio, as they get dismissed early in the compression pipeline and never hit the disks as used data:

# zfs create sbblht/test
# dd if=/dev/zero of=/test/a bs=16M count=8
8+0 records in
8+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.0718708 s, 1.9 GB/s
root@sbooblehat:~# du -shx /test/
1.0K    /test/
root@sbooblehat:~# du -shx --apparent-size /test/
129M    /test/
# zfs list -o used,logicalused,ratio sbblht/test
USED  LUSED  RATIO
 19K  9.50K  1.00x

https://bugs.launchpad.net/bugs/1639963

Title:
  zfs get compressratio returns bogus results

Status in zfs-linux package in Ubuntu:
  Invalid

Bug description:
  I'm running a just-installed-today copy of 16.04.1, installed from an
  ISO also just downloaded today from releases.ubuntu.com.

  root@xenial:~# lsb_release -rd
  Description:  Ubuntu 16.04.1 LTS
  Release:      16.04

  root@xenial:~# apt-cache policy zfsutils-linux
  zfsutils-linux:
    Installed: 0.6.5.6-0ubuntu14
    Candidate: 0.6.5.6-0ubuntu14
    Version table:
   *** 0.6.5.6-0ubuntu14 500
          500 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
          100 /var/lib/dpkg/status
       0.6.5.6-0ubuntu8 500
          500 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages

  I noticed that zfs get compressratio was not returning good results,
  and tested to ensure it was the compressratio reporting that was
  broken, not the actual compression. Demonstration:

  root@xenial:~# zfs get compress data/test
  NAME       PROPERTY     VALUE  SOURCE
  data/test  compression  gzip   local

  root@xenial:~# zfs list data/test
  NAME       USED   AVAIL  REFER  MOUNTPOINT
  data/test  2.64G  855G   2.64G  /data/test

  Note that data/test has inline gzip compression on, and shows 2.64G
  of data USED.
  root@xenial:~# du -hs --apparent-size /data/test
  9.2G  /data/test
  root@xenial:~# du -hs /data/test
  2.7G  /data/test

  Now note that du correctly shows us that we have 9.2G of data stored
  in data/test (about 7G of which is actually a dump from /dev/zero for
  max compressibility, the other bit being an Ubuntu VM image file). We
  should be seeing a compressratio of about 3.41x.

  root@xenial:~# zfs get compressratio data/test
  NAME       PROPERTY       VALUE  SOURCE
  data/test  compressratio  1.01x  -

  But we're seeing 1.01x. Obviously very much not correct.
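[Editor's note] The 3.41x figure the reporter expected comes from naive apparent-size arithmetic, which is exactly the arithmetic ZFS does not perform. A short illustration using the du numbers from the report:

```python
# Naive expectation: every apparent byte (zeros included) counts as
# compressible input. Figures taken from the du output in the report.
apparent_gib = 9.2   # du -hs --apparent-size /data/test
on_disk_gib = 2.7    # du -hs /data/test

naive_ratio = apparent_gib / on_disk_gib
print(f"naive expectation: {naive_ratio:.2f}x")  # ~3.41x, as in the report

# ZFS, however, turns the ~7 GiB of zeros into holes before compression,
# so neither side of its logical/physical ratio ever sees them. Only the
# remaining, barely compressible VM image is measured, hence ~1.01x.
```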