Hello Heitor, or anyone else affected,

Accepted zfs-linux into jammy-proposed. The package will build now and
be available at
https://launchpad.net/ubuntu/+source/zfs-linux/2.1.5-1ubuntu6~22.04.6
in a few hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed.  Your feedback will aid us in getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, what testing has been
performed on the package and change the tag from verification-needed-jammy
to verification-done-jammy. If it does not fix the bug for you, please
add a comment stating that, and change the tag to verification-failed-jammy.
In either case, without details of your testing we will not be able to
proceed.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance for helping!

N.B. The updated package will be released to -updates after the bug(s)
fixed by this package have been verified and the package has been in
-proposed for a minimum of 7 days.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/2115683

Title:
  ZFS hangs when writing to pools with high object count

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Jammy:
  Fix Committed
Status in zfs-linux source package in Noble:
  Fix Committed

Bug description:
  [Impact]
  ZFS pools become completely unresponsive, with in-flight I/O stalling
  and kernel spews similar to the one below:

      crash> bt -s 835
       PID: 835     TASK: ffff9ef78c6d2880 CPU: 1   COMMAND: "txg_quiesce"
       #0 [ffffaf7242e53ce8] __schedule+648 at ffffffffbcc01248
       #1 [ffffaf7242e53d90] schedule+46 at ffffffffbcc0165e
       #2 [ffffaf7242e53db0] cv_wait_common+258 at ffffffffc05224a2 [spl]
       #3 [ffffaf7242e53e18] __cv_wait+21 at ffffffffc0522505 [spl]
       #4 [ffffaf7242e53e28] txg_quiesce+384 at ffffffffc06f3f70 [zfs]
       #5 [ffffaf7242e53e78] txg_quiesce_thread+205 at ffffffffc06f40bd [zfs]
       #6 [ffffaf7242e53ec0] thread_generic_wrapper+100 at ffffffffc052d314 [spl]
       #7 [ffffaf7242e53ee8] kthread+214 at ffffffffbbb32ce6
       #8 [ffffaf7242e53f28] ret_from_fork+70 at ffffffffbba66b76
       #9 [ffffaf7242e53f50] ret_from_fork_asm+27 at ffffffffbba052ab

  This typically happens when creating new files on ZFS pools whose
  object numbers (objnums) have grown beyond 2^32. Due to a bug in the
  object allocation function dmu_object_alloc_impl(), values beyond the
  32-bit threshold get silently truncated, causing the function to keep
  trying to allocate space in chunks that are already full.
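
  For illustration only, the standalone sketch below (not the actual
  dmu_object_alloc_impl() call site; variable names are made up) shows
  how P2ALIGN drops the upper 32 bits of a 64-bit object number when the
  alignment operand is a 32-bit unsigned value, and how P2ALIGN_TYPED
  avoids that by casting both operands to an explicit type first. The
  macro definitions follow sys/sysmacros.h:

      #include <stdio.h>
      #include <stdint.h>
      #include <inttypes.h>

      /* Macro definitions as in sys/sysmacros.h */
      #define P2ALIGN(x, align)              ((x) & -(align))
      #define P2ALIGN_TYPED(x, align, type)  ((type)(x) & -(type)(align))

      int main(void)
      {
          /* Hypothetical object number past the 32-bit threshold. */
          uint64_t objnum = (1ULL << 32) + 12345;
          /* 32-bit unsigned alignment operand (e.g. dnodes per chunk). */
          uint32_t chunk = 128;

          /*
           * -(chunk) is evaluated in 32 bits, so the mask zero-extends
           * and clears the upper half of objnum: the result wraps back
           * below 2^32 (prints 12288).
           */
          printf("P2ALIGN:       %" PRIu64 "\n",
              (uint64_t)P2ALIGN(objnum, chunk));

          /*
           * Casting both operands to uint64_t keeps the high bits
           * (prints 4294979584).
           */
          printf("P2ALIGN_TYPED: %" PRIu64 "\n",
              P2ALIGN_TYPED(objnum, chunk, uint64_t));
          return 0;
      }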

  [Test Plan]
  We've been able to consistently reproduce this on ZFS pools with a very
  high object count. Using the attached zfs_write_unified.py script, we
  can cause a pool to hang due to this bug within a couple of days. Below
  is a high-level summary of the test procedure:

  1. Create a ZFS pool with total capacity above 2 TB (this is required
  so that we can hit the high objnum count):
  ubuntu@wringer-wooster:~$ zfs list
  NAME            USED  AVAIL  REFER  MOUNTPOINT
  pooltest       6.22T   660G    96K  /pooltest
  pooltest/data  6.21T   660G  6.21T  /pooltest/data

  2. Run zfs_write_unified.py against the test pool:
  ubuntu@wringer-wooster:~# python3 zfs_write_unified.py . $(nproc)

  3. Monitor pool throughput with `zpool iostat` or similar, until no
  new transactions get synced to disk (or until a kernel spew similar
  to the one above starts getting logged).

  Once the pool has enough objects, the problem manifests almost
  immediately. It's easy to verify the fix by running
  zfs_write_unified.py on an affected pool: with the fixed package,
  `zpool iostat` will keep reporting disk activity instead of stalling.

  [Where problems could occur]
  The fix is fairly straightforward: we're replacing the P2ALIGN macro
  with an equivalent that casts its operands to an explicit type, so
  64-bit values are not truncated (P2ALIGN_TYPED). This shouldn't affect
  existing pools, as this code path is only exercised when creating new
  objects (files, directories, snapshots, etc.).
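
  For reference, the shape of the change at the allocation site would
  look roughly like the following (illustrative only, not the literal
  upstream diff; the actual variable names in dmu_object_alloc_impl()
  may differ):

      /*
       * before: with a 32-bit unsigned alignment operand the mask
       * zero-extends, truncating object numbers past 2^32
       */
      object = P2ALIGN(object, dnodes_per_chunk);

      /*
       * after: both operands are forced to 64 bits before masking
       */
      object = P2ALIGN_TYPED(object, dnodes_per_chunk, uint64_t);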

  We should test the write path extensively after this change, to make
  sure there are no other hangs when using the new P2ALIGN_TYPED macro. Any
  potential regressions due to this will affect the object allocation
  path, so we should see similar kernel spews stating that `txg_sync` or
  `txg_quiesce` are hanging:

  [179404.940783] INFO: task txg_quiesce:2203494 blocked for more than 122 seconds.
  [179404.944987]       Tainted: P           OE      6.8.0-1020-aws #22~22.04.1-Ubuntu
  [179404.949205] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

  [Other info]
  This fix has been upstream since May 2024 and is included in ZFS
  releases starting with 2.2.5. As such, Jammy and Noble are affected,
  while releases starting with Oracular already carry the fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/2115683/+subscriptions


-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
