I've experimented with this on eoan and focal (ZFS 0.8.1 and ZFS 0.8.3)
on a 4GB VM image.  I set /etc/modprobe.d/zfs.conf as follows:

options zfs zfs_arc_max=134217728
# 128 MB
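
As a quick cross-check, that value is just 128MB expressed in bytes:

```shell
# 128 MB in bytes, matching the zfs_arc_max value above.
echo $((128 * 1024 * 1024))    # prints 134217728
```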

And rebooted.  I then exercised ZFS with various greps and git logs
on the linux git repository while running:

while true; do awk '$1 == "size" { print $3 / (1024 * 1024) }' /proc/spl/kstat/zfs/arcstats; sleep 1; done

The output shows that one does get ARC sizes greater than 128MB while
doing a lot of I/O, but it settles down to sub-128MB once the I/O
activity has finished:

2.93295
2.93295
2.93295
63.8354
131.395
127.909
148.942
128.498
138.224
145.271
133.244
134.122
138.132
144.814
129.4
134.788
135.666
140.428
143.811
146.734
132.323
136.709
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
143.225
130.353
130.34
130.34
130.34
138.63
140.16
139.826
139.826
139.826
139.921
139.921
133.32
122.35
115.929
111.941
110.478
111.219
111.219

Two observations:

1. The driver does some sanity checks: if zfs_arc_max is too small
(< 64MB) or too large (> physical memory size) then the setting is
ignored and the default is used.  One can double-check whether the
setting has been applied by looking at c_max, e.g.:

grep c_max /proc/spl/kstat/zfs/arcstats
c_max                           4    134217728

If c_max is the same as zfs_arc_max then the zfs_arc_max setting has
taken effect.  Can you sanity check that?
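
That comparison can be scripted; here is a minimal sketch (the helper
name arc_limit_applied is made up, and the 64MB lower bound is the
sanity-check limit mentioned above):

```shell
# Hypothetical helper: did the requested zfs_arc_max take effect?
#   $1 = requested zfs_arc_max in bytes
#   $2 = c_max value from arcstats
arc_limit_applied() {
    # below the 64MB floor the driver ignores the setting entirely
    [ "$1" -ge 67108864 ] && [ "$1" = "$2" ]
}

# On a live system the two inputs would come from:
#   requested: cat /sys/module/zfs/parameters/zfs_arc_max
#   c_max:     awk '$1 == "c_max" { print $3 }' /proc/spl/kstat/zfs/arcstats
# Example with the 128MB value used above:
if arc_limit_applied 134217728 134217728; then
    echo "applied"
else
    echo "ignored"
fi
```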

2. With a high amount of throughput the arcstats show that the "size"
stat can overshoot the zfs_arc_max setting, but once I/O has completed
it settles back down to the configured threshold.  This stat is not
updated directly and is only synced during an arc_kstat update, so it
is a snapshot that may be transient and not completely up to date.
Keeping the stats exactly up to date would create a performance
bottleneck, so they are aggregated - I believe these values are not
100% exact because there is a performance cost in gathering and
summing them.

Perhaps you could experiment with a low zfs_arc_max setting (e.g.
128MB, i.e. 134217728) and see if this works OK, then bump the value
up to see if it stops working; then we can work out why it's going
wrong at that point.
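
A sketch of that experiment, stepping the limit up at runtime through
the module parameter in /sys so each step doesn't need a reboot (the
step sizes are just examples; the tee line is commented out so the
sketch is safe to run anywhere):

```shell
# Step zfs_arc_max up from 128MB and note where the limit stops
# being honoured.  Uncomment the tee line on the test system
# (needs root).
for mb in 128 256 512 1024; do
    bytes=$((mb * 1024 * 1024))
    echo "testing zfs_arc_max=${bytes} (${mb}MB)"
    # echo "$bytes" | sudo tee /sys/module/zfs/parameters/zfs_arc_max
    # ...exercise some I/O, then check c_max as in observation 1...
done
```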

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1854480

Title:
  zfs_arc_max not working anymore in zfs 0.8.1

Status in zfs-linux package in Ubuntu:
  Incomplete

Bug description:
  In the past I could limit the size of the L1ARC by specifying
  "options zfs zfs_arc_max=3221225472" in /etc/modprobe.d/zfs.conf. I
  even tried writing to /sys/module/zfs/parameters/zfs_arc_max
  directly, but neither method limits the size of the L1ARC. It worked
  nicely in zfs 0.7.x.

  Nowadays I use an NVMe drive with 3400 MB/s read throughput and I
  see little difference between e.g. booting the system from NVMe and
  rebooting the system from the L1ARC. With a 96-99% hit rate for the
  L1ARC, I'd like to free up some memory, thus avoiding some delay in
  freeing memory from the L1ARC while loading a VM.

  -------------------------------------------------------------------------
  UPDATE:

  The system only limits the L1ARC if we write the zfs_arc_max value
  to /sys/module/zfs/parameters/zfs_arc_max directly after login.

  ------------------------------------------------------------------------

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Fri Nov 29 06:19:15 2019
  InstallationDate: Installed on 2019-11-25 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission denied: '/etc/sudoers.d/zfs']

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1854480/+subscriptions
