Hi,

I found a use-after-free in f2fs_compress_write_end_io() that is the
same class of bug as CVE-2026-23234 (UAF in f2fs_write_end_io()) but
in the compressed page write completion path. The CVE-2026-23234 fix
does not cover this function.

Bug description
---------------
In f2fs_compress_write_end_io() (fs/f2fs/compress.c:1478), the
completion callback accesses sbi after an operation that allows a
concurrent unmount to free it.

The unmount path in f2fs_put_super() (super.c:1985) calls:
    f2fs_wait_on_all_pages(sbi, F2FS_WB_CP_DATA);

This waits until get_pages(sbi, F2FS_WB_CP_DATA) returns 0, then
proceeds to tear down sbi (super.c:2028-2030, kfree at 5377/5458).

Compressed writeback bios are counted as F2FS_WB_CP_DATA, because
WB_DATA_TYPE() selects that type when f2fs_is_compressed_page()
returns true (compress.c:1483-1484). f2fs_wait_on_all_pages()
therefore waits for every compressed bio completion to call
dec_page_count() before unmount proceeds: in the normal path a bio
cannot complete after sbi is freed, because the wait loop creates
that ordering dependency.
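The wait semantics can be sketched in userspace (a minimal model, not
the kernel implementation; wait_on_all_pages_model() and the bare
counter are hypothetical stand-ins for f2fs_wait_on_all_pages() and
get_pages()):

```c
#include <stdatomic.h>

/* Stand-in for the sbi F2FS_WB_CP_DATA counter. */
static atomic_int wb_cp_data;

/* Analog of dec_page_count(sbi, F2FS_WB_CP_DATA). */
static void dec_page_count_model(void)
{
	atomic_fetch_sub(&wb_cp_data, 1);
}

/*
 * Analog of f2fs_wait_on_all_pages(): returns once the counter is 0.
 * Note exactly what this orders: every dec_page_count_model() call has
 * executed, but nothing stops a caller from running more code after
 * its own decrement.
 */
static void wait_on_all_pages_model(void)
{
	while (atomic_load(&wb_cp_data) != 0)
		;	/* the kernel sleeps on a waitqueue here */
}
```

The point of the model is the comment on wait_on_all_pages_model():
the return condition is purely counter-based, so it says nothing
about whether the decrementing callbacks have finished running.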

However, the ordering only guarantees that dec_page_count() has run.
It does NOT guarantee that the completion callback has finished all
subsequent sbi accesses. The race is:

    f2fs_compress_write_end_io():
      sbi = bio->bi_private               // line 1481
      dec_page_count(sbi, type)            // line 1492 — decrements counter
        → if this is the last bio, counter hits 0
        → f2fs_wait_on_all_pages() on unmount CPU returns
        → f2fs_put_super() continues:
            f2fs_destroy_page_array_cache(sbi)  // super.c:2030
            kfree(sbi)                          // super.c:5377 or 5458
      [callback still executing on this CPU:]
      for (i = 0; ...) {
          end_page_writeback(cic->rpages[i])    // line 1500
      }
      page_array_free(sbi, ...)                 // line 1503 — UAF

Once dec_page_count() brings the F2FS_WB_CP_DATA counter to zero,
f2fs_wait_on_all_pages returns on the unmount CPU, and f2fs_put_super
proceeds to free sbi. The completion callback on the bio CPU is still
between lines 1492 and 1503, and the subsequent page_array_free(sbi,
...) dereferences sbi->page_array_slab_size (comparison at
compress.c:43) and sbi->page_array_slab (passed to kmem_cache_free
at compress.c:44), both within the freed f2fs_sb_info structure.

Note: dec_page_count() at line 1492 runs for EVERY compressed folio
in the bio, but page_array_free() at line 1503 only runs when
atomic_dec_return(&cic->pending_pages) reaches zero. The race
specifically requires the LAST bio completion for a given cic —
earlier completions return at line 1495 and never reach
page_array_free.
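The interleaving above can be replayed deterministically in a
single-threaded userspace model (all names here are hypothetical; the
sbi_freed flag stands in for the kfree(sbi) on the unmount CPU):

```c
struct race_state {
	int wb_cp_data;		/* F2FS_WB_CP_DATA counter */
	int sbi_freed;		/* set once f2fs_put_super() would kfree(sbi) */
};

/*
 * Replay the problematic schedule: the bio CPU decrements the counter,
 * the unmount CPU observes 0 and frees sbi, then the bio CPU reaches
 * its page_array_free() equivalent. Returns 1 if that final access
 * would hit freed memory.
 */
static int replay_race(void)
{
	struct race_state s = { .wb_cp_data = 1, .sbi_freed = 0 };

	s.wb_cp_data--;			/* bio CPU: dec_page_count() */
	if (s.wb_cp_data == 0)		/* unmount CPU: wait returns ... */
		s.sbi_freed = 1;	/* ... and sbi is kfree()d */

	return s.sbi_freed;		/* bio CPU: page_array_free(sbi, ...) */
}
```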

Impact
------
f2fs with compression enabled (compress_algorithm=lz4/zstd) is the
default on Android devices (Pixel 6+, Samsung Galaxy S21+). Any
unprivileged app performing file I/O on a compressed f2fs directory
triggers compressed writeback. The UAF is on f2fs_sb_info which is a
large structure containing slab cache pointers, making heap-spray-based
exploitation feasible for privilege escalation.

Affected versions
-----------------
All kernels since f2fs compression writeback support was introduced
(v5.6+, commit 4c8ff709c6 "f2fs: support data compression") through
at least v6.19. The CVE-2026-23234 patch for f2fs_write_end_io() does
not fix this function.

Note: f2fs_write_end_io() in data.c:317 has the same structural problem
for non-compressed folios. CVE-2026-23234 addressed it there. Both
functions share the root cause: missing lifetime management on sbi
across async bio completion.

Fix
---
The root cause is that after dec_page_count() unblocks unmount, the
callback continues to access sbi via page_array_free(). The narrowest
correct fix is to ensure all sbi accesses in the callback complete
before the counter decrement that signals unmount.

Approach 1 (reorder): Move page_array_free() before dec_page_count().
This is not straightforward because dec_page_count is per-folio but
page_array_free only runs on the last pending page (after the
atomic_dec_return check at line 1494). The per-folio dec_page_count
at line 1492 can unblock unmount before the last-page path reaches
page_array_free at line 1503.

Approach 2 (cache sbi fields): Cache sbi->page_array_slab and
sbi->page_array_slab_size into local variables at function entry
(before dec_page_count), then use the cached values in the
page_array_free equivalent at line 1503. This avoids dereferencing
sbi after it may have been freed:

--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1478,6 +1478,8 @@ void f2fs_compress_write_end_io(struct bio *bio, struct folio *folio)
 {
 	struct page *page = &folio->page;
 	struct f2fs_sb_info *sbi = bio->bi_private;
+	struct kmem_cache *pa_slab = sbi->page_array_slab;
+	unsigned int pa_slab_size = sbi->page_array_slab_size;
 	struct compress_io_ctx *cic = folio->private;
 	enum count_type type = WB_DATA_TYPE(folio, f2fs_is_compressed_page(folio));
@@ -1497,7 +1499,12 @@ void f2fs_compress_write_end_io(struct bio *bio, struct folio *folio)
 		end_page_writeback(cic->rpages[i]);
 	}
 
-	page_array_free(sbi, cic->rpages, cic->nr_rpages);
+	/*
+	 * Use cached slab fields: after dec_page_count() above, unmount
+	 * may already have freed sbi. This is the compress-path analog
+	 * of CVE-2026-23234.
+	 */
+	if (likely(sizeof(struct page *) * cic->nr_rpages <= pa_slab_size))
+		kmem_cache_free(pa_slab, cic->rpages);
+	else
+		kfree(cic->rpages);
 	kmem_cache_free(cic_entry_slab, cic);
 }

This is safe because sbi is still valid at function entry (the
F2FS_WB_CP_DATA counter is nonzero, preventing unmount from
proceeding). The cached values are read before dec_page_count()
at line 1492, which is the operation that can unblock unmount.
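The same pattern, reduced to a userspace sketch (struct and function
names here are invented for illustration): every read of the shared
structure happens before the decrement that can unblock the waiter,
and only cached locals are used afterwards.

```c
#include <stdatomic.h>

struct sb_model {			/* stand-in for f2fs_sb_info */
	void *page_array_slab;		/* would be a struct kmem_cache * */
	unsigned int page_array_slab_size;
	atomic_int wb_cp_data;		/* F2FS_WB_CP_DATA counter */
};

/* Completion path with the Approach-2 ordering applied. */
static void completion(struct sb_model *sbi,
		       void **slab_out, unsigned int *size_out)
{
	/* Cache every field needed later, while sbi is still pinned. */
	void *slab = sbi->page_array_slab;
	unsigned int size = sbi->page_array_slab_size;

	/* After this decrement, a waiter may free sbi at any moment. */
	atomic_fetch_sub(&sbi->wb_cp_data, 1);

	/* Only locals from here on; sbi is never touched again. */
	*slab_out = slab;
	*size_out = size;
}
```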

Approach 3 (superblock reference): atomic_inc(&sbi->sb->s_active) at
bio allocation (data.c:470), deactivate_super(sbi->sb) at bio
completion (data.c, before bio_put). This is the most robust fix but
note: deactivate_super() can call down_write(&sb->s_umount) — a
sleeping lock — if it drops the last s_active reference. In practice
this is safe because the VFS mount holds an independent s_active
reference until unmount completes, ensuring the bio completion call
is never the last release. Nevertheless, if this approach is chosen,
a comment documenting this invariant should be added.
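The refcount discipline of Approach 3 reduces to the following sketch
(a userspace model; sb_alloc()/sb_get()/sb_put() are hypothetical
analogs of mount setup and of grabbing and dropping s_active). The
mount's own reference is what guarantees the completion-side put is
never the final one:

```c
#include <stdatomic.h>
#include <stdlib.h>

struct sb_model {
	atomic_int s_active;
};

static struct sb_model *sb_alloc(void)
{
	struct sb_model *sb = malloc(sizeof(*sb));

	atomic_store(&sb->s_active, 1);	/* the mount's own reference */
	return sb;
}

static void sb_get(struct sb_model *sb)	/* taken at bio submission */
{
	atomic_fetch_add(&sb->s_active, 1);
}

/* Returns 1 if this put dropped the last reference and freed sb. */
static int sb_put(struct sb_model *sb)
{
	if (atomic_fetch_sub(&sb->s_active, 1) == 1) {
		free(sb);
		return 1;
	}
	return 0;
}
```

Because unmount cannot run while the mount reference is held, the bio
completion's sb_put() always returns 0, which is the invariant the
comment in Approach 3 should document.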

Fixes: 4c8ff709c6 ("f2fs: support data compression")
Reported-by: George Saad <[email protected]>

Thanks,
George


_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel