Public bug reported:

This is a request for comment regarding adjusting zram-config to limit
memory consumption, rather than to limit amount of memory to be swapped.

Under the current script (in 16.10), about 1/2 of RAM can be swapped to
zram.  Depending on actual compression performance, the compressed data
may consume, for example, 1/6 of RAM (at 3:1 compression) or 1/4 of RAM
(at 2:1).

This modification instead limits how much RAM each zram device can
consume.  Instead of specifying 1/2 of RAM as the swap area, it
specifies 100%.  On a machine with 8GB of RAM, for example, it will
expose 8GB of swap, and it will limit zram to consuming 4GB of real RAM
to store the compressed data.
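A minimal sketch of that sizing rule (the helper names are mine, not the script's; it assumes the stock approach of one zram device per CPU core, with the totals split evenly across them):

```shell
# Hypothetical helpers sketching the proposed sizing rule:
# expose 100% of RAM as swap, cap compressed storage at 50% of RAM,
# split across one zram device per core.
per_device_disksize() {   # args: total_ram_kb ncores -> bytes
  echo $(( $1 * 1024 / $2 ))
}
per_device_mem_limit() {  # args: total_ram_kb ncores -> bytes
  echo $(( $1 * 1024 / 2 / $2 ))
}

# On an 8GB, 4-core machine each device offers 2GB of swap
# but may hold at most 1GB of compressed data:
per_device_disksize  8388608 4   # 2147483648
per_device_mem_limit 8388608 4   # 1073741824
```

The two values would go to each device's /sys/block/zramN/disksize and /sys/block/zramN/mem_limit attributes, respectively.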

Because zram generally achieves 3:1 to 4:1 compression, it would be more
realistic to create 1.5-2 times RAM's worth of zram swap space.  For
example, the 8GB system would have 12GB of swap space, but use at most
4GB of memory for it.  At 3:1 compression, that 4GB would hold all 12GB
of the zram swap space.

Essentially, the device's advertised size (disksize) limits how much
uncompressed data you can swap out, while mem_limit caps the amount of
real RAM the zram device can use, including both the compressed data and
the control structures.


Further Discussion:

Note that, in my experience, zram is fast.  I've run a 1GB server with
this script and gone 350MB into zram swap, with 40MB of available
memory, just by starting up a GitLab Docker container and logging in:

Tasks: 184 total,   1 running, 183 sleeping,   0 stopped,   0 zombie
%Cpu(s):  6.0 us,  2.6 sy,  0.0 ni, 90.7 id,  0.0 wa,  0.0 hi,  0.3 si,  0.3 st
KiB Mem :  1016220 total,    76588 free,   826252 used,   113380 buff/cache
KiB Swap:  1016216 total,   666920 free,   349296 used.    45324 avail Mem

This got as far as 700MB of swap used in just a few clicks through the
application.  It still returned commit diffs and behaved as if it were
running on ample RAM--I did this during a migration from a system with
32GB of RAM, over 10GB of which was disk cache.

As such, I see no problem essentially doubling the amount of reachable
RAM--and I do exactly that on 1GB and 2GB servers whose active working
sets are larger than physical RAM, with vm.swappiness set to 100.
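For reference, that swappiness setting can be made persistent with a sysctl drop-in (the file name here is illustrative):

```
# /etc/sysctl.d/60-zram-swappiness.conf (illustrative name)
# Swap to zram aggressively; apply with `sysctl --system`.
vm.swappiness = 100
```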

Note that exposing 5x RAM as swap, even if you are getting 5:1
compression ratios, would involve a lot of swapping and thus a lot of
LZO computation.  For this reason, going beyond a doubling might be
unwise as a generic default.  A doubling of RAM means using 50% of RAM
to store 1.5x RAM of swap--on a 1GB machine, 1.5GB of zram devices with
a 0.5GB mem_limit, assuming at least a 3:1 average compression ratio.
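The arithmetic behind that doubling claim can be checked directly, assuming a 1GB machine whose zram swap fills at exactly 3:1:

```shell
# Doubling-of-RAM arithmetic on a 1GB machine, assuming 3:1 compression.
ram_mb=1024
disksize_mb=$(( ram_mb * 3 / 2 ))   # 1536 MB of swap exposed (1.5x RAM)
mem_limit_mb=$(( ram_mb / 2 ))      # 512 MB of real RAM may hold it
ratio=3                             # assumed average compression
stored_mb=$(( mem_limit_mb * ratio ))                  # 1536: swap fills exactly
reachable_mb=$(( ram_mb - mem_limit_mb + stored_mb ))  # 2048: ~2x RAM reachable
echo "$reachable_mb MB reachable"   # prints "2048 MB reachable"
```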

The script I have provided allocates at most 50% of RAM to store at
most 100% of RAM as swap.  In practice it is likely to multiply the
usable working space by about 1.6.

I have provided the entire script, rather than a diff, as a diff would
be about the same size.

** Affects: zram-config (Ubuntu)
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1654777

Title:
  zram-config control by maximum amount of RAM usage

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zram-config/+bug/1654777/+subscriptions
