On Apr 22, 2025, at 21:59, Mark Millard <mark...@yahoo.com> wrote:

> Sulev-Madis Silber <freebsd-current-freebsd-org111_at_ketas.si.pri.ee> wrote 
> on
> Date: Wed, 23 Apr 2025 04:31:41 UTC :
> 
> https://forums.freebsd.org/threads/server-freezes-when-using-git-to-update-ports-tree.88651/
> 
> That, in turn, mentions:
> 
> the remote console shows an unresponsive, frozen OS, unable to interact with.
> 
> 
> If FreeBSD 13.4 can still swap out process kernel
> stacks, you may want the likes of the following in
> /etc/sysctl.conf:
> 
> #
> # Together this pair avoids swapping out process kernel stacks.
> # This prevents processes used for interacting with the system
> # from being hung up by such swap-outs.
> vm.swap_enabled=0
> vm.swap_idle_enabled=0
> 
> (I have no evidence that this is why you lost control,
> but it may be a possibility.)
> 
> (main [FreeBSD 15] no longer does such swapping out of any
> process kernel stacks, and the two settings have been removed.)

Are you using a file-system-based SWAP space, or a
partition- or slice-based SWAP space?

Quoting:

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=206048#c7

on why it should be Partition/Slice based:

QUOTE
On 2017-Feb-13, at 7:20 PM, Konstantin Belousov <kostikbel at gmail.com> wrote
on the freebsd-arm list:

. . .

swapfile write requires the write request to come through the filesystem
write path, which might require the filesystem to allocate more memory
and read some data. E.g. it is known that any ZFS write request
allocates memory, and that write request on large UFS file might require
allocating and reading an indirect block buffer to find the block number
of the written block, if the indirect block was not yet read.

As result, swapfile swapping is more prone to the trivial and unavoidable
deadlocks where the pagedaemon thread, which produces free memory, needs
more free memory to make a progress. Swap write on the raw partition over
simple partitioning scheme directly over HBA are usually safe, while e.g.
zfs over geli over umass is the worst construction.
END QUOTE
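One way to check which kind of backing your swap has is swapinfo(8):
a /dev/ path to a partition indicates partition/slice-based swap,
while a path into a mounted file system indicates a swap file. A
sketch (the device name below is only an example, not your layout):

```shell
# List active swap devices and their usage:
swapinfo -h

# A partition/slice-based swap entry in /etc/fstab looks like
# (example device name; adjust for your actual disk layout):
# /dev/ada0p3   none    swap    sw    0   0
```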

Note the references to ZFS and GELI. Your forum notes reference such.


A separate tunable, in case "was killed: failed to reclaim memory"
is involved but not reported/recorded; in /boot/loader.conf:

#
# Delay how long persistently low free RAM is tolerated before
# Out Of Memory killing of processes begins:
vm.pageout_oom_seq=120
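You can inspect the current value and look for evidence of earlier
OOM kills in the logs. A sketch (log paths may vary with your
syslog configuration; 120 is much larger than the default):

```shell
# Show the current setting:
sysctl vm.pageout_oom_seq

# Look for evidence of earlier OOM kills:
grep "failed to reclaim memory" /var/log/messages
dmesg | grep "failed to reclaim memory"
```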



Separate question: why did some forum top runs show
qemu-system-arm threads? Those could be significant
competition for RAM+SWAP.


===
Mark Millard
marklmi at yahoo.com

