On 19/02/2026 07:02, Samuel Thibault wrote:
> Hello,
> 
> Michael Kelly, on Thu, 19 Feb 2026 06:02:36 +0000, wrote:
> > I ran that sequence for around 30
> > successful builds until my virtual machine locked up with a panic:
> > 
> > panic ../vm/vm_page.c:1618: vm_page_alloc_pa: vm_page: privileged thread
> > unable to allocate page
> 
> That is essentially an out-of-memory situation. In principle the
> swapping code tries to stay on the safe side to make sure privileged
> threads can allocate, but "privileged threads" is probably too
> inclusive. AFAIK only the default pager and rumpdisk get privileged.
> Notably, vm_map_lock() also makes the calling thread privileged. Is the
> backtrace of this panic taken after a vm_map_lock()?
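
For context, here is my mental model of the check that fires. This is a
self-contained toy, not the actual gnumach code; the variable names and
numbers are invented:

/* Toy model: ordinary threads are made to wait for the pageout daemon
 * once the free-page count reaches the reserved threshold, privileged
 * threads may dig into the reserve, and the kernel panics if a
 * privileged thread finds even the reserve empty. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static unsigned long free_pages = 25600;           /* ~100 MiB of 4 KiB pages */
static const unsigned long reserved_pages = 25600; /* pause threshold */

static long alloc_page(bool privileged)
{
    if (!privileged && free_pages <= reserved_pages)
        return -1;                 /* caller must wait for pageout */

    if (free_pages == 0) {
        if (privileged) {
            fprintf(stderr, "panic: vm_page: privileged thread "
                            "unable to allocate page\n");
            abort();
        }
        return -1;
    }

    return (long)--free_pages;     /* hand out a (fake) page number */
}

int main(void)
{
    printf("normal: %ld\n", alloc_page(false)); /* -1: paused at threshold */
    while (free_pages > 0)
        alloc_page(true);          /* privileged threads drain the reserve */
    alloc_page(true);              /* reproduces the panic message */
    return 0;
}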

On a 2 GB i386 virtual machine there should be around 100 MB still free at the point where normal allocation is paused. That is a lot of memory for the privileged threads to burn through while satisfying the pageout, so I think something is going wrong. I'll be looking into it over the days ahead.
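
Back-of-envelope for that 100 MB figure (the 5% reserve ratio here is an
assumption of mine, not a number taken from gnumach):

#include <stdio.h>

int main(void)
{
    unsigned long ram_bytes  = 2048UL * 1024 * 1024;   /* 2 GiB of RAM */
    unsigned long page_bytes = 4096;                   /* i386 page size */
    unsigned long pages      = ram_bytes / page_bytes; /* 524288 pages */
    unsigned long reserve    = pages / 20;             /* assume ~5% held back */

    printf("%lu pages reserved = %lu MiB\n",
           reserve, reserve * page_bytes / (1024 * 1024));
    /* prints: 26214 pages reserved = 102 MiB */
    return 0;
}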

One instance that occurred whilst building haskell-hedgehog:

login: panic ../vm/vm_page.c:1618: vm_page_alloc_pa: vm_page: privileged thread unable to allocate page
Debugger invoked: panic
Kernel Breakpoint trap, eip 0xffffffff8100c956, code 0, cr2 ffffffffb2e19b00
Stopped       at  Debugger+0x15:      TODO
Debugger(...)+0x15
Panic(...)+0xb8
vm_page_alloc_pa(...)+0x2da
vm_page_convert(...)+0x57
vm_fault_page(...)+0x832
vm_fault(...)+0x5d0
vm_fault_wire(...)+0x75
vm_map_pageable_scan(...)+0x154
vm_map_protect(...)+0x1c5
vm_protect(...)+0x62
_Xvm_protect(...)+0x7e
ipc_kobject_server(...)+0xac
mach_msg_trap(...)+0x8a0
syscall64(...)+0xe3
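
In this trace the panic is driven from vm_protect: changing the protection
of a wired region makes the kernel rewire it with the new access
(vm_map_protect -> vm_map_pageable_scan -> vm_fault_wire), and that wiring
fault has to allocate pages on the spot rather than wait for the pageout
daemon. The user-side call shape is roughly the sketch below; whether the
region got wired through the task's map or some explicit wiring in the
real workload is a guess on my part:

#include <mach.h>
#include <stdio.h>

int main(void)
{
    vm_address_t addr = 0;
    kern_return_t kr;

    /* A fresh page; in a task whose map is fully wired (as for rumpdisk
     * or the default pager) this is already wired on allocation. */
    kr = vm_allocate(mach_task_self(), &addr, vm_page_size, TRUE);
    if (kr != KERN_SUCCESS)
        return 1;

    /* Changing the protection of a wired region drives the kernel
     * path seen in the trace above; under memory pressure the wiring
     * fault must allocate immediately, on the privileged path. */
    kr = vm_protect(mach_task_self(), addr, vm_page_size, FALSE,
                    VM_PROT_READ | VM_PROT_WRITE);
    printf("vm_protect: %d\n", kr);
    return 0;
}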

And another occurred whilst running stress-ng. The thread traced here is rumpdisk.server:

db>  trace /tu $task3.22
Debugger(c10f69a0,f6a20c38,1,c10009bf,3c7)+0x13
Panic(c10ee02a,652,c10d649c,c10dd9e4,f6a20ce8)+0x7a
vm_page_alloc_pa(0,3,3,c1010d08)+0x23f
vm_page_convert(f672fd60,d79240b0,0,c10421d5,0)+0x48
vm_fault_page(d79240b0,0,3,0,0,f672fddc,f672fde0,f672fde4,0,0,f672fddc,c1010cc0)+0xd29
vm_fault(f59ecec8,72f9000,3,1,0,0,f04cda58,0)+0x52a
vm_fault_wire(f59ecec8,f04cda58,80001513,f672fee0,f59ecec8)+0x66
vm_map_pageable_scan(c1047c8c,f59ecec8,f59eced0,f04cda58,f59ecec8)+0x107
vm_map_pageable(f59ecec8,72f9000,72fa000,3,0,0,1,f04cda68)+0x106
vm_map_enter(f59ecec8,f672ffb0,1000,0,1,0,0,0,3,7,1,30)+0x3dd
vm_allocate(f59ecec8,f672ffb0,1000,1,cbaace8,f672ffb0,4,c1032e41)+0x4b
syscall_vm_allocate(2,cbaace8,1000,1,f612d440)+0x35
>>>>> user space <<<<<
syscall_vm_allocate(0x82401bc)(0x81b01f3(2,cbaace8,1000,1,1000)
rumpdisk_device_read(0x8049f5c)(20069e50,7e,12,0,135b08)
_Xdevice_read(0x804cd6e)(cbaae30,cbace40,cbaad98,cbaef3c,cbace40)
machdev_demuxer(0x804a93b)(cbaae30,cbace40,cbaaddc,0,cbace40)
synchronized_demuxer(0x804dfbb)(cbaae30,cbace40,0,0,1712)
mach_msg_server_timeout(0x81b0818)(cbaeee8,2000,10,900,2710)
thread_function(0x804e0eb)(1,10000,0,0,0)
ports_manage_port_operations_multithread(0x804e329)(20002d90,8049c70,1d4c0,927c0,0)
rumpdisk_multithread_server(0x8049c5f)(0,837bd74,cbaefe8,819df45,0)
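
The user-space half of this trace is rumpdisk's device_read path grabbing
a buffer with vm_allocate. Because rumpdisk's map is wired, the kernel
wires each new entry on the spot (vm_map_enter -> vm_map_pageable ->
vm_fault_wire above) instead of letting it fault lazily. A sketch of that
call shape follows; the function name and buffer size are illustrative,
not rumpdisk's actual code:

#include <mach.h>

/* Illustrative stand-in for the allocation in rumpdisk's read path. */
static kern_return_t
alloc_reply_buffer(vm_address_t *buf, vm_size_t len)
{
    /* In a task whose map is fully wired, this single call already
     * demands kernel pages via the privileged allocation path. */
    return vm_allocate(mach_task_self(), buf, len, TRUE);
}

int main(void)
{
    vm_address_t buf;
    return alloc_reply_buffer(&buf, 64 * 1024) == KERN_SUCCESS ? 0 : 1;
}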

Mike.
