On 5/23/25 15:00, Mark Millard wrote:
> Dennis Clarke <dclarke_at_blastwave.org> wrote on
> Date: Fri, 23 May 2025 17:45:17 UTC :
>
>> I have been watching qt6-webengine-6.8.3 fail over and over and over
>> for some days now and it takes with it a pile of other stuff.
>>
>> In the log I see this unscripted trash of a message :
>>
>> [00:05:03] FAILED: v8_context_snapshot.bin
>> [00:05:03] /usr/local/bin/python3.11 ../../../../../qtwebengine-everywhere-src-6.8.3/src/3rdparty/chromium/build/gn_run_binary.py ./v8_context_snapshot_generator --output_file=v8_context_snapshot.bin
>> [00:05:03]
>> [00:05:03]
>> [00:05:03] #
>> [00:05:03] # Fatal error in , line 0
>> [00:05:03] # Oilpan: Out of memory
>> [00:05:03] #
>> [00:05:03] #
>
> Way too little context, so all I can do is basically form
> questions at this point.


Sorry ... I just realized that other people replied to me OFF-LIST and
that is not helpful to others.

So the machine titan is fairly beefy :

titan#
titan# uname -apKU
FreeBSD titan 15.0-CURRENT FreeBSD 15.0-CURRENT #1 main-n277353-19419d36cf2a: Mon May 19 20:40:28 UTC 2025 root@titan:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 amd64 1500043 1500043
titan#
titan# sysctl hw.model
hw.model: Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
titan#
titan# sysctl hw.ncpu
hw.ncpu: 64
titan#
titan# sysctl hw.physmem
hw.physmem: 549598998528
titan#
titan# sysctl hw.freemem
sysctl: unknown oid 'hw.freemem'
titan#
titan# sysctl kstat.zfs.misc.arcstats.memory_free_bytes
kstat.zfs.misc.arcstats.memory_free_bytes: 404796436480
titan# sysctl vm.kmem_map_free
vm.kmem_map_free: 431405096960
titan#

Also plenty of storage, local NVMe devices, etc., and dual
NVIDIA GPUs that do nothing at all. For now.
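
For the record, hw.physmem is reported in bytes; a quick divide turns
it into GiB:

titan# echo "549598998528 / (1024^3)" | bc
511

So call it 512G of RAM, less what the firmware and kernel hold back.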

We ( myself and others ) have already found that the problem was
me. No big surprise.

USE_TMPFS=yes
TMPFS_LIMIT=32           # per-builder tmpfs cap, in GiB
MAX_MEMORY=32            # per-build memory cap, in GiB
# MAX_FILES=1024
MAX_EXECUTION_TIME=172800
PARALLEL_JOBS=64
PREPARE_PARALLEL_JOBS=64

That was the problem in the poudriere config.

I commented out the MAX_MEMORY and TMPFS_LIMIT and then watched
as www/qt6-webengine built just fine. Guess the jail needed more
than 32G, eh?
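
So the working stanza is the same block with those two lines
disabled; nothing else changed:

USE_TMPFS=yes
# TMPFS_LIMIT=32          # disabled: tmpfs may grow as needed
# MAX_MEMORY=32           # disabled: builds draw on full RAM+SWAP
# MAX_FILES=1024
MAX_EXECUTION_TIME=172800
PARALLEL_JOBS=64
PREPARE_PARALLEL_JOBS=64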

> I assume that you have not explicitly restricted the memory
> space for any processes, so that RAM+SWAP is fully available
> to everything. If not, you need to report on the details.


Yup .. I had restrictions in place. Those very very few packages
are hogs. Just massive running pigs for memory it seems.


> How much RAM? How much SWAP space? (So: how much RAM+SWAP?)
> (RAM+SWAP does not vary per process tree or per builder,
> presuming no deliberate restrictions have been placed.)

512G mem and 32G swap, which never gets touched.
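
Anyone can pull the same numbers straight from the stock tools:

titan# sysctl -n hw.physmem
549598998528
titan# swapinfo -h
( one line per swap device: size, used, available )

So RAM+SWAP is roughly 544G, with nothing deliberately restricted
any more.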


> Do you even have "whatever it seems to want" configured
> for the RAM+SWAP? (I'm guessing that you do not know that
> the "128G" figure is in fact involved.)


I commented out those restrictions. Makes me worry that some other
packages will come along and fail because they need 384G of mem or
something silly like that. I have been advised ( in the last hour )
that the chromium ports generate 40,000+ source files and such. That is
just abusive, but the way of the future I am sure.
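
With USE_TMPFS=yes still on and no limit, at least the tmpfs growth
during one of these monster builds is easy to watch from the host:

titan# df -h -t tmpfs
( one line per builder tmpfs; watch Used climb during the chromium
bits )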


> How many parallel builders are active in the bulk run
> as the bulk build approaches the failure?

I think 64 max.
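
PARALLEL_JOBS=64 is the ceiling. While a bulk is running, poudriere
itself reports what is actually going on:

titan# poudriere status

That summarizes any running bulk ( queue, built, failed, and so on );
adding -b should show per-builder detail.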


> How much RAM+SWAP in use by other builders or other things
> on the system as the system progresses to that failure (not
> after the failure)?


....

*sigh*

The problem was me.
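
For next time, a couple of stock tools left running in another
terminal would have shown the memory pressure building ( a sketch,
nothing poudriere-specific ):

titan# swapinfo -h ; sysctl vm.stats.vm.v_free_count
titan# top -b -o res 20

That last one is the top 20 processes by resident memory, in batch
mode, so it can be redirected to a log.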

> ZFS (and its ARC)? UFS? If ZFS: Any tuning?


No tuning. It just works(tm) and that is ZFS.
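
Untuned, the ARC on a box like this will grow as large as it likes,
though it gives memory back under pressure. The current size and the
cap are one sysctl away ( vfs.zfs.arc.max being the OpenZFS-era
spelling on recent FreeBSD ):

titan# sysctl kstat.zfs.misc.arcstats.size
titan# sysctl vfs.zfs.arc.max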


> Basically: all the significant sources of competing for
> RAM+SWAP?

> ....
> ===
> Mark Millard
> marklmi at yahoo.com




It feels like the correct approach is to just give everything to the
poudriere bulk run and then watch for flames.

No flames? No smoke? Great .. it is working.



--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
