On 3/16/26 14:08, Alan Somers wrote:
> On Mon, Mar 16, 2026 at 2:41 PM Garrett Wollman <[email protected]> 
> wrote:
>>
>> . . .
>> -GAWollman
> 
> I once saw a similar bug.  In my case I had a process that mmap()ed
> some very large files on fusefs, consuming lots of inactive pages.
> When the system came under memory pressure, it asked the ARC to
> evict first, so the ARC would end up shrinking down to arc_min
> every time.  In my case, the solution was to set
> vfs.fusefs.data_cache_mode=0.  I suspect that similar bugs could
> occur with UFS or tmpfs if they have giant files that are mmap()ed.
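The fusefs workaround above is a one-line sysctl. As a sketch (on
FreeBSD, vfs.fusefs.data_cache_mode controls fusefs data caching;
0 disables it, which keeps mmap()ed fusefs pages from piling up as
inactive):

```shell
# Show the current fusefs data cache mode (on FreeBSD:
# 0 = no data caching, 1 = write-through, 2 = write-back).
sysctl vfs.fusefs.data_cache_mode

# Apply the workaround: disable fusefs data caching so mmap()ed
# fusefs files stop accumulating inactive pages.
sysctl vfs.fusefs.data_cache_mode=0
```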

ZFS is documented to have the property: "This approach provides
coherency between memory-mapped and IO access at the expense of wasted
memory due to having two copies of the file in memory and extra overhead
caused by the need to copy the contents between the two copies."
(Chapter 10, page 548, last bullet item, in the 2nd edition of The
Design and Implementation of the FreeBSD Operating System.)

> 
> A less effective workaround was to set vfs.zfs.arc.min to some
> reasonable value.  That can prevent ARC from shrinking too far.  You
> could try that.
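That arc.min tuning is also a sysctl. A minimal sketch (the 4 GiB
value below is only an example; pick a floor suited to your workload
and RAM):

```shell
# Example only: pin the ARC floor at 4 GiB (value is in bytes)
# so memory pressure cannot shrink the ARC below it.
sysctl vfs.zfs.arc.min=4294967296

# To persist across reboots, the same setting can go in
# /etc/sysctl.conf:
#   vfs.zfs.arc.min=4294967296
```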
> 
> Another thing you could try is to run "vmstat -o" when the system is
> in the problematic state.  That will show you which vm objects are
> using the most inactive pages.
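For the vmstat suggestion, a possible one-liner (this assumes the
INACT column is the third field of "vmstat -o" output, as on recent
FreeBSD; verify the column order on your system before trusting the
sort):

```shell
# Show the vm objects holding the most inactive pages: skip the
# header line, sort numerically on the INACT column, keep the top 10.
vmstat -o | tail -n +2 | sort -rnk 3 | head
```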
> 
> Hope this helps,
> -Alan


-- 
===
Mark Millard
marklmi at yahoo.com
