On Sun, Aug 21, 2016 at 8:17 PM, sven falempin <[email protected]> wrote:
>
> On Sun, Aug 21, 2016 at 4:57 PM, Stuart Henderson <[email protected]> wrote:
>
>> On 2016-08-20, sven falempin <[email protected]> wrote:
>> > On Sat, Aug 20, 2016 at 3:50 PM, Stuart Henderson <[email protected]> wrote:
>> >
>> >> This report is totally useless without a dmesg.
>> >> We don't know which version, which arch, and a bunch of other
>> >> things that would be included in it.
>> >
>> > Yes, I just left it in misc@, because I think the problem is actually
>> > not OpenBSD related.
>> > Unless work-binpatch59-amd64 is dirty.
>>
>> So 5.9 + patches. It's probably worth trying -current and seeing if it
>> behaves any better.
>>
> For those interested: this is related to the number of cores I give to
> the VM. The problem does not occur with a 1 socket, 4 cores config in
> qemu, but it does with 2 sockets, 4 cores, and also with 1 socket,
> 6 cores.
>
> This makes it very difficult to know where the problem is: qemu or
> OpenBSD?
>
> Moreover, the device is actually in use, and I can only trigger the
> problem under high load.
>
> Using systat I saw a very high softnet load and way too much forking,
> which I will work on reducing. But that's about it.
>
> load averages: 15.13, 15.59, 16.02                XXXXXXXXXXXXX 02:11:50
> 187 processes: 3 running, 180 idle, 4 on processor        up 1 day, 3:05
> CPU0 states:  0.0% user,  9.7% nice, 45.4% system, 26.3% interrupt, 18.7% idle
> CPU1 states:  0.0% user,  6.2% nice, 61.3% system,  6.6% interrupt, 25.9% idle
> CPU2 states:  0.0% user,  4.5% nice, 65.0% system,  1.0% interrupt, 29.5% idle
> CPU3 states:  0.0% user, 15.8% nice, 70.8% system,  1.9% interrupt, 11.4% idle
> Memory: Real: 617M/1633M act/tot  Free: 6299M  Cache: 714M  Swap: 0K/182M
>
> This is after reducing the load a bit.
>
> I will try -current if the problem persists, to get some maybe-useful
> backtraces.

The problem did occur again :'( -- I will try to update to a snapshot or
to -current, depending on the state of -current.
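For reference, the topology difference described above could be reproduced
with `-smp` invocations along these lines (a sketch; the reporter's full
command line appears later in the thread, and whether "2 socket 4 cores"
means 4 cores per socket or in total is my assumption -- in QEMU, `cores=`
is per socket):

```
# Reported stable: 1 socket, 4 cores
qemu-system-x86_64 -smp sockets=1,cores=4,threads=1 ...

# Reported to panic under load: 2 sockets (4 cores each, assumed)
qemu-system-x86_64 -smp sockets=2,cores=4,threads=1 ...

# Reported to panic under load: 1 socket, 6 cores
qemu-system-x86_64 -smp sockets=1,cores=6,threads=1 ...
```

The common factor in the failing configs appears to be more than 4 vCPUs
or more than one socket, which is consistent with the bug only showing up
under concurrent load.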
Looks like the bug correctly reported by Giovanni.

It was the middle of the night, and all I have is some screenshots, but I
transcribed the ?double free? here (I forgot cpu2 :S):

mach ddbcpu 0
Stopped at      Debugger+0x9:   leave
Debugger() at Debugger+0x9
x86_ipi_handler() at x86_ipi_handler+0x76
Xresume_lapic_ipi() at Xresume_lapic_ipi+0x1c
--- interrupt ---
__mp_lock()+0x42
virtio_pci_intr+0x4b
intr_handler+0x67
intr_ioapic_level22+0xcd
--- interrupt ---
__mp_lock()+0x42
syscall+0x2a5
--- syscall (number 198) ---
end of kernel
end trace frame 0x12e3378fb5700, count: 6
0x12e407b1f43a

mach ddbcpu 1
Stopped at      Debugger+0x9:   leave
Debugger() at Debugger+0x9
x86_ipi_handler() at x86_ipi_handler+0x76
Xresume_lapic_ipi() at Xresume_lapic_ipi+0x1c
--- interrupt ---
__mp_lock()+0x42
syscall+0x2a5
--- syscall (number 4) ---
end of kernel
end trace frame 0x112e8f5fc4f0, count: 10
0x112e19aaf79a

dev = 0x410, block = 8, fs = /var/www/json_data
panic: ffs_blkfree: freeing free block
Stopped at      Debugger+0x9

   TID     PID  UID  PRFLAGS     PFLAGS  CPU  COMMAND
 21187   21187    0      0x2          0    0  perl
*23127   27210    0        0  0x4000000    3  jsondb

Trace on the panicking CPU:

Debugger
panic
ffs_blkfree
ffs_indirtrunc
ffs_truncate
ufs_inactive
VOP_INACTIVE
vput
ufs_remove
VOP_REMOVE
dounlinkat
syscall
--- syscall (number 10) ---
end

The <<hardware>> is (dmesg in thread):

-smp sockets=1,cores=4
-drive file=/var/lib/images/100/vm-100-disk-2.qcow2,if=none,id=drive-virtio1,cache=writeback,format=qcow2,aio=native
-device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb
-drive file=/var/lib/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,cache=writeback,format=qcow2,aio=native
-device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa

QEMU emulator version 1.4.1

Even if something silly is happening inside qemu, it may help?

Cheers.

-- 
---------------------------------------------------------------------------------------------------------------------
() ascii ribbon campaign - against html e-mail
/\
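[For anyone trying to capture the same information: the per-CPU traces
transcribed above can be gathered from the OpenBSD kernel debugger with
ddb(4) commands along these lines (a sketch; prompts and output vary):]

```
ddb{0}> trace          # stack trace of the CPU that took the panic
ddb{0}> ps             # process table (source of the TID/PID listing above)
ddb{0}> mach ddbcpu 1  # switch the debugger to CPU 1
ddb{1}> trace
ddb{1}> mach ddbcpu 2  # the CPU that was forgotten in the transcript
ddb{2}> trace
```

With `ddb.console=1` set via sysctl(8), the traces can also be captured
over a serial console instead of being transcribed from screenshots.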

