I would like to reopen this issue. I installed Ubuntu 20.04.1 LTS Server (no ZFS root, 16 GB RAM) and added ZFS volumes. I tried to limit the ARC to 128 MiB (just as a test) with
`options zfs zfs_arc_max=134217728` in `/etc/modprobe.d/zfs.conf`. I updated the initial ramdisk with `update-initramfs -u` and rebooted the machine. Afterwards I observed that the ARC still grows to ~6 GB, far above the 128 MiB limit.

```
al@nas:~$ cat /etc/modprobe.d/zfs.conf; cat /sys/module/zfs/parameters/zfs_arc_max; cat /proc/spl/kstat/zfs/arcstats
options zfs zfs_arc_max=134217728
134217728
12 1 0x01 98 26656 9388908203 858454750163
name type data
hits 4 2341389
misses 4 69038
demand_data_hits 4 409755
demand_data_misses 4 10337
demand_metadata_hits 4 1892418
demand_metadata_misses 4 11520
prefetch_data_hits 4 20609
prefetch_data_misses 4 36648
prefetch_metadata_hits 4 18607
prefetch_metadata_misses 4 10533
mru_hits 4 332923
mru_ghost_hits 4 0
mfu_hits 4 1981310
mfu_ghost_hits 4 0
deleted 4 21
mutex_miss 4 0
access_skip 4 0
evict_skip 4 99
evict_not_enough 4 0
evict_l2_cached 4 0
evict_l2_eligible 4 201728
evict_l2_ineligible 4 34816
evict_l2_skip 4 0
hash_elements 4 70022
hash_elements_max 4 70117
hash_collisions 4 2316
hash_chains 4 1152
hash_chain_max 4 2
p 4 4173713920
c 4 8316876800
c_min 4 519804800
c_max 4 8316876800
size 4 6496367160
compressed_size 4 5849741824
uncompressed_size 4 6240316928
overhead_size 4 416625152
hdr_size 4 23995776
data_size 4 5993091584
metadata_size 4 273275392
dbuf_size 4 44426616
dnode_size 4 120015552
bonus_size 4 41562240
anon_size 4 18524672
anon_evictable_data 4 0
anon_evictable_metadata 4 0
mru_size 4 4014986752
mru_evictable_data 4 3841052672
mru_evictable_metadata 4 24693760
mru_ghost_size 4 0
mru_ghost_evictable_data 4 0
mru_ghost_evictable_metadata 4 0
mfu_size 4 2232855552
mfu_evictable_data 4 1694498304
mfu_evictable_metadata 4 15691264
mfu_ghost_size 4 0
mfu_ghost_evictable_data 4 0
mfu_ghost_evictable_metadata 4 0
l2_hits 4 0
l2_misses 4 0
l2_feeds 4 0
l2_rw_clash 4 0
l2_read_bytes 4 0
l2_write_bytes 4 0
l2_writes_sent 4 0
l2_writes_done 4 0
l2_writes_error 4 0
l2_writes_lock_retry 4 0
l2_evict_lock_retry 4 0
l2_evict_reading 4 0
l2_evict_l1cached 4 0
l2_free_on_write 4 0
l2_abort_lowmem 4 0
l2_cksum_bad 4 0
l2_io_error 4 0
l2_size 4 0
l2_asize 4 0
l2_hdr_size 4 0
memory_throttle_count 4 0
memory_direct_count 4 0
memory_indirect_count 4 0
memory_all_bytes 4 16633753600
memory_free_bytes 4 8671588352
memory_available_bytes 3 8411688960
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 0
arc_meta_used 4 503275576
arc_meta_limit 4 6237657600
arc_dnode_limit 4 623765760
arc_meta_max 4 509194840
arc_meta_min 4 16777216
async_upgrade_sync 4 3864
demand_hit_predictive_prefetch 4 33669
demand_hit_prescient_prefetch 4 0
arc_need_free 4 0
arc_sys_free 4 259902400
arc_raw_size 4 0
```

Note that `/sys/module/zfs/parameters/zfs_arc_max` reports the configured 134217728, yet `c_max` in arcstats is still 8316876800 (the default of half the RAM) and `size` has already grown to roughly 6 GB.
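For anyone else debugging this, a quick sanity check is to confirm whether the zfs.conf snippet actually ended up in the initramfs and what the loaded module reports. A minimal sketch, assuming a stock Ubuntu install with initramfs-tools (`lsinitramfs`); paths and the grep pattern are illustrative, adjust as needed:

```
# Is the modprobe snippet inside the current initramfs?
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i 'modprobe.d/zfs'

# What does the loaded module report right now?
cat /sys/module/zfs/parameters/zfs_arc_max

# Compare the ARC target (c), limit (c_max) and current size against the configured value
grep -E '^(c|c_max|size) ' /proc/spl/kstat/zfs/arcstats
```

If the file is missing from the initramfs, rebuilding it for every installed kernel with `update-initramfs -u -k all` is worth trying.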
--
https://bugs.launchpad.net/bugs/1854480

Title:
  zfs_arc_max not working anymore in zfs 0.8.1

Status in zfs-linux package in Ubuntu:
  Fix Released

Bug description:
  In the past I could limit the size of the L1ARC by specifying "options zfs zfs_arc_max=3221225472" in /etc/modprobe.d/zfs.conf. I even tried writing to /sys/module/zfs/parameters/zfs_arc_max directly, but neither method limits the size of the L1ARC. It worked nicely in zfs 0.7.x. Nowadays I use an NVMe drive with 3400 MB/s read throughput, and I see little difference between, e.g., booting the system from the NVMe drive and rebooting it from the L1ARC.

  Having a 96-99% hit rate for the L1ARC, I would like to free up some memory and thus avoid the delay of freeing memory from the L1ARC while loading a VM.

  UPDATE: The system only limits the L1ARC if the zfs_arc_max value is written to /sys/module/zfs/parameters/zfs_arc_max directly after login.

  ProblemType: Bug
  DistroRelease: Ubuntu 19.10
  Package: zfsutils-linux 0.8.1-1ubuntu14.1
  ProcVersionSignature: Ubuntu 5.3.0-23.25-generic 5.3.7
  Uname: Linux 5.3.0-23-generic x86_64
  NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
  ApportVersion: 2.20.11-0ubuntu8.2
  Architecture: amd64
  CurrentDesktop: ubuntu:GNOME
  Date: Fri Nov 29 06:19:15 2019
  InstallationDate: Installed on 2019-11-25 (3 days ago)
  InstallationMedia: Ubuntu 19.10 "Eoan Ermine" - Release amd64 (20191017)
  SourcePackage: zfs-linux
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission denied: '/etc/sudoers.d/zfs']
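Until a fixed package lands, the workaround described in the UPDATE above (writing the value after boot) can be automated. This is only a rough sketch, assuming a systemd-based system; the unit name/path and the 128 MiB value are illustrative, not part of the original report:

```
# /etc/systemd/system/zfs-arc-max.service  (illustrative name and path)
[Unit]
Description=Apply zfs_arc_max after the zfs module is loaded
# zfs.target is provided by the zfs packages; adjust if your setup differs
After=zfs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 134217728 > /sys/module/zfs/parameters/zfs_arc_max'

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl daemon-reload && sudo systemctl enable --now zfs-arc-max.service`. Be aware that lowering zfs_arc_max at runtime may not shrink an already-grown ARC immediately; the cache is trimmed gradually as eviction proceeds.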