On Mon, Mar 23, 2026 at 05:12:27PM -0700, Yosry Ahmed wrote:
> On Sun, Mar 22, 2026 at 8:23 PM Li Wang <[email protected]> wrote:
> >
> > On Sun, Mar 22, 2026 at 09:18:51AM -0700, Andrew Morton wrote:
> > > On Sun, 22 Mar 2026 14:10:31 +0800 Li Wang <[email protected]> wrote:
> > >
> > > > This patchset aims to fix various spurious failures and improve the
> > > > overall robustness of the cgroup zswap selftests.
> > >
> > > AI review has questions:
> > >       
> > > https://sashiko.dev/#/patchset/[email protected]
> >
> > > [Sashiko comments in patch 4/7]
> > > ...
> > > Could we update this loop, along with the identical loops in
> > > alloc_anon_noexit() and alloc_anon_50M_check_swap() shown below, to use
> > > sysconf(_SC_PAGESIZE) instead?
> >
> > I found that Waiman submitted another patch that does the same thing as
> > this suggestion. I'd consider merging that one into my patch 4/7.
> >
> > So, let me talk to Waiman first.
> 
> Probably fits better in your patch.
> 
> > > The test data is generated by writing a single 'a' character per page,
> > > leaving the rest zero-filled:
> >
> > >       for (int i = 0; i < control_allocation_size; i += pagesize)
> > >               control_allocation[i] = 'a';
> >
> > > This makes the data highly compressible. Because memory.max is set to
> > > half of control_allocation_size, 512 pages are pushed into zswap.
> >
> > > 512 pages of mostly zeros can compress down to roughly 11 to 15 kilobytes
> > > using compressors like zstd, which is well below the 65536 byte (64k)
> > > zswap.max limit on a 64k page system.
> >
> > > Since the limit might not be reached, writeback might never trigger,
> > > causing the test to falsely fail. Should the test use incompressible data
> > > or a lower fixed limit?
> >
> > If Sashiko suggests reducing compressibility, we'd need to fill a
> > significant fraction of each page with varied data, but that would work
> > against the test:
> >
> > zswap would reject poorly compressing pages and send them straight to swap,
> > and memory.stat:zswapped might never reach the threshold the test checks
> > with cg_read_key_long(..., "zswapped") < 1.
> >
> > So, at most I'd keep the data highly compressible and just ensure non-zero,
> > unique-per-page markers.
> 
> Sashiko claims that 512 pages will end up consuming 11K to 15K in
> zswap with this setup, do you know what the actual number is?

I'm not completely sure. I'd guess each 64K page contains 1 byte of 'a' and
65535 bytes of zero. A single page like that compresses down to roughly
20–30 bytes (a short literal plus a very long zero run, plus frame/header
overhead). So the estimate is roughly 512 × 25 bytes ≈ 12.8 KB, which is
where the "11 to 15 kilobytes" ballpark comes from.

> Especially with different compressors? If it's close to 64K, this
> might be a problem.

Yes, good point. When I switch to the 'zstd' compressor, it doesn't work.


> Maybe we can fill half of each page with increasing values? It should
> still be compressible but not too compressible.

I tried; this method works with the LZO algorithm but not with zstd.
Anyway, I am still investigating.


-- 
Regards,
Li Wang

