On Sun, Mar 22, 2026 at 09:18:51AM -0700, Andrew Morton wrote:
> On Sun, 22 Mar 2026 14:10:31 +0800 Li Wang <[email protected]> wrote:
>
> > This patchset aims to fix various spurious failures and improve the overall
> > robustness of the cgroup zswap selftests.
>
> AI review has questions:
> https://sashiko.dev/#/patchset/[email protected]
> [Sashiko comments in patch 4/7]
> ...
> Could we update this loop, along with the identical loops in
> alloc_anon_noexit() and alloc_anon_50M_check_swap() shown below, to use
> sysconf(_SC_PAGESIZE) instead?
I found that Waiman submitted another patch that does the same thing as this
suggestion. I'd consider merging that one into my patch 4/7.
So, let me talk to Waiman first.
> [Sashiko comments in patch 5/7]
> ..
> if (zswpin < MB(24) / sysconf(_SC_PAGESIZE)) {
> Should these also be updated to use the new global pagesize variable for
> consistency? Subsequent patches in the series do not seem to correct this
> omission.
Good catch, those remaining spots should be corrected too.
> If control_allocation is NULL, wouldn't the loop immediately dereference it
> and cause an unhandled segmentation fault rather than a graceful test
> failure?
That's right, but that's better resolved in a separate series, not this one.
> However, there does not appear to be a corresponding munmap() call in the
> test's cleanup path. Although the OS reclaims this memory when the test
> process exits, should this explicit unmap be added for a balanced resource
> lifecycle within the test?
That's right, but that's better resolved in a separate series, not this one.
> [Sashiko comments in patch 6/7]
> ...
> If malloc returns a null pointer in a memory-constrained environment, the
> loop will unconditionally dereference it. Should there be a null check
> before the loop?
That's right, but that's better resolved in a separate series, not this one.
> The test data is generated by writing a single 'a' character per page, leaving
> the rest zero-filled:
> for (int i = 0; i < control_allocation_size; i += pagesize)
> control_allocation[i] = 'a';
> This makes the data highly compressible. Because memory.max is set to half of
> control_allocation_size, 512 pages are pushed into zswap.
> 512 pages of mostly zeros can compress down to roughly 11 to 15 kilobytes
> using compressors like zstd, which is well below the 65536 byte (64k)
> zswap.max limit on a 64k page system.
> Since the limit might not be reached, writeback might never trigger,
> causing the test to falsely fail. Should the test use incompressible data
> or a lower fixed limit?
If Sashiko suggests reducing compressibility, we'd need to fill a significant
fraction of each page with varied data, but that would work against the test:
zswap would reject poorly compressing pages and send them straight to swap,
and memory.stat:zswapped might never reach the threshold the test checks
with cg_read_key_long(..., "zswapped") < 1.
So, at most I'd keep the data highly compressible and just ensure non-zero,
unique-per-page markers.
i.e.
control_allocation[i] = (char)((i / pagesize) + 1);
--
Regards,
Li Wang