On Tue, Mar 24, 2026 at 07:49:17PM -0700, Yosry Ahmed wrote:
> On Tue, Mar 24, 2026 at 7:26 PM Li Wang <[email protected]> wrote:
> >
> > On Tue, Mar 24, 2026 at 01:28:12PM -0700, Yosry Ahmed wrote:
> > > > > Sashiko claims that 512 pages will end up consuming 11K to 15K in
> > > > > zswap with this setup, do you know what the actual number is?
> > > >
> > > > Not very sure. I guess each 64K page contains 1 byte of 'a' and
> > > > 65535 bytes of zero. A single page like that compresses down to
> > > > roughly 20–30 bytes
> > > > (a short literal plus a very long zero run, plus frame/header overhead).
> > > > So the estimate is roughly 512 × 25 bytes ≈ 12.8 KB, which is where the
> > > > "11 to 15 kilobytes" ballpark comes from.
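
FWIW, that estimate can be sanity-checked with a small sketch. zlib is only a
stand-in here (lzo/zstd would need external bindings, and their exact output
sizes differ), but the order of magnitude is the same:

```python
# Compress one 64K page holding a single 'a' followed by zeroes,
# mimicking what the selftest writes per page. zlib stands in for
# lzo/zstd; absolute sizes differ per algorithm.
import zlib

page_size = 64 * 1024
page = b"a" + b"\x00" * (page_size - 1)

compressed = zlib.compress(page)
ratio = len(compressed) / page_size
print(f"{page_size} -> {len(compressed)} bytes (ratio {ratio:.5f})")
```

With zlib this lands well under 1% of the page, consistent with the
512 x ~25 bytes ~= 12.8 KB ballpark above.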
> > > >
> > > > > Especially with different compressors? If it's close to 64K, this
> > > > > might be a problem.
> > > >
> > > > Yes, good point. When I switch to the 'zstd' compressor, it doesn't work.
> > > >
> > > > > Maybe we can fill half of each page with increasing values? It should
> > > > > still be compressible but not too compressible.
> > > >
> > > > I tried; this method works with the lzo algorithm but not with zstd.
> > > > Anyway, I am still investigating.
> > >
> > > Do you mean the compressibility is still very high on zstd? I vaguely
> > > remember filling a page with repeating patterns (e.g. alphabet
> > > letters) seemed to produce a decent compression ratio, but I don't
> > > remember the specifics.
> > >
> > > I am pretty sure an LLM could figure out what values will work for
> > > different compression algorithms :)
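
On the pattern question, a quick sketch comparing fill patterns (zlib standing
in for lzo/zstd, so only the relative trend is meaningful) suggests that any
short repeating pattern, alphabet included, still compresses nearly as well as
pure zeroes; only genuinely random bytes resist compression:

```python
# Compare how different page-fill patterns compress. zlib is a stand-in
# for lzo/zstd: absolute ratios differ, but the ordering is similar.
import os
import string
import zlib

page_size = 64 * 1024
alphabet = string.ascii_lowercase.encode()

patterns = {
    "all zeroes":         b"\x00" * page_size,
    "repeating alphabet": (alphabet * (page_size // len(alphabet) + 1))[:page_size],
    "all random":         os.urandom(page_size),
}

ratios = {name: len(zlib.compress(page)) / page_size
          for name, page in patterns.items()}
for name, r in ratios.items():
    print(f"{name:18s} ratio {r:.3f}")
```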
> >
> > Well, I have tried many ways of dirtying each page of the total, but
> > none of them works with the zstd compressor.
> >
> > e.g.,
> >
> > --- a/tools/testing/selftests/cgroup/test_zswap.c
> > +++ b/tools/testing/selftests/cgroup/test_zswap.c
> > @@ -9,6 +9,7 @@
> > #include <string.h>
> > #include <sys/wait.h>
> > #include <sys/mman.h>
> > +#include <sys/random.h>
> >
> > #include "kselftest.h"
> > #include "cgroup_util.h"
> > @@ -473,8 +474,12 @@ static int test_no_invasive_cgroup_shrink(const char *root)
> > if (cg_enter_current(control_group))
> > goto out;
> > control_allocation = malloc(control_allocation_size);
> > - for (int i = 0; i < control_allocation_size; i += page_size)
> > - control_allocation[i] = (char)((i / page_size) + 1);
> > + unsigned int nr_pages = control_allocation_size/page_size;
> > + for (int i = 0; i < nr_pages; i++) {
> > + unsigned long off = (unsigned long)i * page_size;
> > + memset(&control_allocation[off], 0, page_size);
> > + getrandom(&control_allocation[off], nr_pages/2, 0);
>
> This should be page_size/2, right?
Ah, that's right.
> nr_pages is 1024 IIUC, so that's 512 bytes only. If the page size is
> 64K, we're leaving 63.5K (99% of the page) as zeroes.
nr_pages is 512, but you're right on the analysis.
> > + }
> > if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
> > goto out;
> >
> > Even when I tried filling all of the pages with random data, it still
> > doesn't work (zstd). It does work with the lzo compressor; I don't know
> > whether zstd needs any additional configuration or I missed something there.
> >
> > My current thought is to just satisfy the lzo (default) compressor in
> > this patch series, and leave zstd for follow-up work.
> >
> > What do you think? any better idea?
>
> Let's check if using page_size/2 fixes it first. If a page is 100%
> filled with random data it should be incompressible, so I would be
> surprised if 50% random data yields a very high compression ratio.
>
> It would also help if you check what the compression ratio actually is
> (i.e. compressed_size / uncompressed_size).
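
The difference between the buggy fill (nr_pages/2 = 512 random bytes) and the
intended page_size/2 is easy to see with a sketch (zlib as a stand-in for
lzo/zstd; absolute ratios will differ, but the picture is the same):

```python
# Fill the leading part of a zeroed 64K page with random bytes, as the
# getrandom() call in the selftest does, and measure the compression
# ratio. zlib stands in for lzo/zstd here.
import os
import zlib

page_size = 64 * 1024

def fill_ratio(random_bytes):
    page = os.urandom(random_bytes) + b"\x00" * (page_size - random_bytes)
    return len(zlib.compress(page)) / page_size

buggy = fill_ratio(512)                # nr_pages/2: 99% of the page is zeroes
intended = fill_ratio(page_size // 2)  # page_size/2: half the page is random
print(f"512 random bytes:   ratio {buggy:.3f}")
print(f"32768 random bytes: ratio {intended:.3f}")
```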
Randomly dirtying page_size/2 of each page always led to OOM, so I switched
to page_size/4. With that, both algorithms compress pages successfully, but
zstd doesn't update the 'zswpwb' stat, so the test fails.

Since zswap writeback is asynchronous, I additionally introduced a polling
loop that checks up to 500 times, but 'zswpwb' still returned zero.
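
The polling idea looks roughly like this; sketched in Python for brevity (the
actual selftest does the equivalent in C via cg_read_key_long() against
memory.stat, and the retry count/delay here are illustrative):

```python
# Sketch of the polling idea: zswap writeback is asynchronous, so instead
# of sampling the counter once, re-read it with a bounded retry loop.
# The stat-file format mirrors memory.stat ("key value" per line).
import time

def read_key_long(path, key):
    """Return the value of `key` from a flat 'key value' stat file, or -1."""
    with open(path) as f:
        for line in f:
            k, _, v = line.partition(" ")
            if k == key:
                return int(v)
    return -1

def poll_counter(path, key, retries=500, delay_s=0.01):
    """Return the first positive value of `key`, or 0 if it never moves."""
    for _ in range(retries):
        val = read_key_long(path, key)
        if val > 0:
            return val
        time.sleep(delay_s)
    return 0
```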
==== Test results ====
lzo:
# uncompressed: 51511296, compressed: 13353876, ratio: 0.26
# get_cg_wb_count(wb_group) is 206, get_cg_wb_count(control_group) is 0
ok 7 test_no_invasive_cgroup_shrink
zstd:
# uncompressed: 48037888, compressed: 12019013, ratio: 0.25
# get_cg_wb_count(wb_group) is 0, get_cg_wb_count(control_group) is 0
not ok 7 test_no_invasive_cgroup_shrink
==== debug code for the above output ====
long zswapped = cg_read_key_long(control_group, "memory.stat", "zswapped");
long zswap_compressed = cg_read_key_long(control_group, "memory.stat", "zswap");

ksft_print_msg("uncompressed: %ld, compressed: %ld, ratio: %.2f\n",
	       zswapped, zswap_compressed,
	       (double)zswap_compressed / zswapped);
ksft_print_msg("get_cg_wb_count(wb_group) is %zu, get_cg_wb_count(control_group) is %zu\n",
	       get_cg_wb_count(wb_group),
	       get_cg_wb_count(control_group));
In summary, the problem is that 'zswpwb' does not update when zswap runs
with the zstd compressor. I'll debug this issue separately on the kernel
side.
--
Regards,
Li Wang