** Description changed:

This problem is a combination of running certain versions of a 32-bit
Linux kernel dom0 on certain versions of a 64-bit Xen hypervisor,
together with certain memory clamping settings (dom0_mem=xM, without
setting the max limit).

Xen 4.4.2 + Linux 3.13.x
Xen 4.5.0 + Linux 3.19.x
Xen 4.6.0 + Linux 4.0.x
Xen 4.6.0 + Linux 4.1.x
-> all boot without messages

Xen 4.5.1 + Linux 4.2.x
Xen 4.6.0 + Linux 4.2.x
Xen 4.6.0 + Linux 4.3.x
* dom0_mem 512M, 4096M, or unlimited -> boot without messages
* dom0_mem between 1024M and 3072M (inclusive) -> bad page messages
  (but the boot finishes)

Xen 4.6.0 + Linux 4.4.x
Xen 4.6.0 + Linux 4.5.x
Xen 4.6.0 + Linux 4.6-rc6
The boot with 512M, 4096M, and unlimited looks good here as well,
though trying to start a domU without dom0_mem set caused a crash when
ballooning (I think this should be a separate bug). With dom0_mem set
anywhere between 1G and 3G these versions still produce the bad page
flags messages and additionally panic and reboot.

The bad page bug generally looks like this (the pfn numbers seem to be
towards the end of the allocated range):

[ 8.980150] BUG: Bad page state in process swapper/0 pfn:7fc22
[ 8.980238] page:f4566550 count:0 mapcount:0 mapping: (null) index:0x0
[ 8.980328] flags: 0x7000400(reserved)
[ 8.980486] page dumped because: PAGE_FLAGS_CHECK_AT_PREP flag set
[ 8.980575] bad because of flags:
[ 8.980688] flags: 0x400(reserved)
[ 8.980844] Modules linked in:
[ 8.980960] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G B 4.2.0-19-generic #23-Ubuntu
[ 8.981084] Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0 08/31/2012
[ 8.981177] c1a649a7 23e07668 00000000 e9cafce4 c175e501 f4566550 e9cafd08 c1166897
[ 8.981608] c19750a4 e9d183ec 0007fc22 007fffff c1975630 c1978e86 00000001 e9cafd74
[ 8.982074] c1169f83 00000002 00000141 0004a872 c1af3644 00000000 ee44bce4 ee44bce4
[ 8.982506] Call Trace:
[ 8.982582] [<c175e501>] dump_stack+0x41/0x52
[ 8.982666] [<c1166897>] bad_page+0xb7/0x110
[ 8.982749] [<c1169f83>] get_page_from_freelist+0x2d3/0x610
[ 8.982838] [<c116a4f3>] __alloc_pages_nodemask+0x153/0x910
[ 8.982926] [<c122ee62>] ? find_entry.isra.13+0x52/0x90
[ 8.983013] [<c11b0f75>] ? kmem_cache_alloc_trace+0x175/0x1e0
[ 8.983102] [<c10b1c96>] ? __raw_callee_save___pv_queued_spin_unlock+0x6/0x10
[ 8.983223] [<c11b0ddd>] ? __kmalloc+0x21d/0x240
[ 8.983308] [<c119cc2e>] __vmalloc_node_range+0x10e/0x210
[ 8.983433] [<c1148fa7>] ? bpf_prog_alloc+0x37/0xa0
[ 8.983518] [<c119cd96>] __vmalloc_node+0x66/0x70
[ 8.983604] [<c1148fa7>] ? bpf_prog_alloc+0x37/0xa0
[ 8.983689] [<c119cdd4>] __vmalloc+0x34/0x40
[ 8.983773] [<c1148fa7>] ? bpf_prog_alloc+0x37/0xa0
[ 8.983859] [<c1148fa7>] bpf_prog_alloc+0x37/0xa0
[ 8.983944] [<c167cc8c>] bpf_prog_create+0x2c/0x90
[ 8.984034] [<c1b6741e>] ? bsp_pm_check_init+0x11/0x11
[ 8.984121] [<c1b68401>] ptp_classifier_init+0x2b/0x44
[ 8.984207] [<c1b6749a>] sock_init+0x7c/0x83
[ 8.984291] [<c100211a>] do_one_initcall+0xaa/0x200
[ 8.984376] [<c1b6741e>] ? bsp_pm_check_init+0x11/0x11
[ 8.984463] [<c1b1654c>] ? repair_env_string+0x12/0x54
[ 8.984551] [<c1b16cf6>] ? kernel_init_freeable+0x126/0x1d9
[ 8.984726] [<c1755fb0>] kernel_init+0x10/0xe0
[ 8.984846] [<c10929b1>] ? schedule_tail+0x11/0x50
[ 8.984932] [<c1764141>] ret_from_kernel_thread+0x21/0x30
[ 8.985019] [<c1755fa0>] ? rest_init+0x70/0x70

- break-fix: 92923ca3aacef63 4b50bcc7eda4d3cc9e3f2a0aa60e590fedf728c5
+ break-fix: 92923ca3aacef63c92dc297a75ad0c6dfe4eab37
+ 4b50bcc7eda4d3cc9e3f2a0aa60e590fedf728c5
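For context, the dom0 memory clamping referred to in the description is
set on the Xen hypervisor command line. The sketch below shows what such
a setup typically looks like on an Ubuntu-style install; the file
location, the variable name and the 2048M value are illustrative
assumptions and not taken from the report itself.

    # /etc/default/grub (illustrative; assumes Ubuntu's grub-xen integration)
    #
    # Fixed dom0 allocation with no max limit -- the form of setting that,
    # per the report, triggers the bad page messages for 1024M-3072M:
    GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M"
    #
    # Clamping both the initial and the maximum allocation instead:
    # GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:2048M"

    # Regenerate the boot configuration and reboot into the Xen entry:
    sudo update-grub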
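The "bad because of flags: 0x400(reserved)" line means the page handed
out by the allocator still had its reserved flag set. Below is a rough
user-space model of that check, for illustration only: the bit
positions, the NR_PAGEFLAGS value and the struct layout are assumptions
loosely matching the 32-bit log above, not the kernel's actual
definitions. It also shows why the upper bits of 0x7000400 (likely the
zone/section encoding stored at the top of page->flags) are masked out
and only 0x400 is reported as bad.

    /*
     * Simplified user-space model of the "bad page" check seen in the log.
     * All constants here are illustrative assumptions.
     */
    #include <stdio.h>

    #define PG_reserved              (1UL << 10)  /* 0x400, as printed in the log */
    #define NR_PAGEFLAGS             21           /* assumed number of real flag bits */
    #define PAGE_FLAGS_CHECK_AT_PREP ((1UL << NR_PAGEFLAGS) - 1)

    struct page {
        unsigned long flags;  /* low bits: flags; high bits: zone/section encoding */
    };

    /* Mimics the check done when a page is handed out by the allocator. */
    static void check_new_page(const struct page *page, unsigned long pfn)
    {
        unsigned long bad = page->flags & PAGE_FLAGS_CHECK_AT_PREP;

        if (bad)
            printf("BUG: Bad page state pfn:%lx flags:%#lx\n"
                   "page dumped because: PAGE_FLAGS_CHECK_AT_PREP flag set\n"
                   "bad because of flags: %#lx\n",
                   pfn, page->flags, bad);
    }

    int main(void)
    {
        /* A page near the end of dom0's range left marked reserved. */
        struct page leftover = { .flags = 0x7000000UL | PG_reserved };

        check_new_page(&leftover, 0x7fc22);
        return 0;
    }

In the affected configurations, pages near the end of the clamped dom0
allocation apparently reach the free lists with the reserved flag still
set, which is what trips this check; the break-fix line above appears to
record the upstream commits tracked as introducing and fixing that
behaviour.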
--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1576564

Title:
  Xen 32bit dom0 on 64bit hypervisor: bad page flags

Status in linux package in Ubuntu:
  Confirmed
Status in xen package in Ubuntu:
  Invalid
Status in linux source package in Wily:
  Confirmed
Status in xen source package in Wily:
  Invalid
Status in linux source package in Xenial:
  Confirmed
Status in xen source package in Xenial:
  Invalid

Bug description:

This problem is a combination of running certain versions of a 32-bit
Linux kernel dom0 on certain versions of a 64-bit Xen hypervisor,
together with certain memory clamping settings (dom0_mem=xM, without
setting the max limit).

Xen 4.4.2 + Linux 3.13.x
Xen 4.5.0 + Linux 3.19.x
Xen 4.6.0 + Linux 4.0.x
Xen 4.6.0 + Linux 4.1.x
-> all boot without messages

Xen 4.5.1 + Linux 4.2.x
Xen 4.6.0 + Linux 4.2.x
Xen 4.6.0 + Linux 4.3.x
* dom0_mem 512M, 4096M, or unlimited -> boot without messages
* dom0_mem between 1024M and 3072M (inclusive) -> bad page messages
  (but the boot finishes)

Xen 4.6.0 + Linux 4.4.x
Xen 4.6.0 + Linux 4.5.x
Xen 4.6.0 + Linux 4.6-rc6
The boot with 512M, 4096M, and unlimited looks good here as well,
though trying to start a domU without dom0_mem set caused a crash when
ballooning (I think this should be a separate bug). With dom0_mem set
anywhere between 1G and 3G these versions still produce the bad page
flags messages and additionally panic and reboot.

The bad page bug generally looks like this (the pfn numbers seem to be
towards the end of the allocated range):

[ 8.980150] BUG: Bad page state in process swapper/0 pfn:7fc22
[ 8.980238] page:f4566550 count:0 mapcount:0 mapping: (null) index:0x0
[ 8.980328] flags: 0x7000400(reserved)
[ 8.980486] page dumped because: PAGE_FLAGS_CHECK_AT_PREP flag set
[ 8.980575] bad because of flags:
[ 8.980688] flags: 0x400(reserved)
[ 8.980844] Modules linked in:
[ 8.980960] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G B 4.2.0-19-generic #23-Ubuntu
[ 8.981084] Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0 08/31/2012
[ 8.981177] c1a649a7 23e07668 00000000 e9cafce4 c175e501 f4566550 e9cafd08 c1166897
[ 8.981608] c19750a4 e9d183ec 0007fc22 007fffff c1975630 c1978e86 00000001 e9cafd74
[ 8.982074] c1169f83 00000002 00000141 0004a872 c1af3644 00000000 ee44bce4 ee44bce4
[ 8.982506] Call Trace:
[ 8.982582] [<c175e501>] dump_stack+0x41/0x52
[ 8.982666] [<c1166897>] bad_page+0xb7/0x110
[ 8.982749] [<c1169f83>] get_page_from_freelist+0x2d3/0x610
[ 8.982838] [<c116a4f3>] __alloc_pages_nodemask+0x153/0x910
[ 8.982926] [<c122ee62>] ? find_entry.isra.13+0x52/0x90
[ 8.983013] [<c11b0f75>] ? kmem_cache_alloc_trace+0x175/0x1e0
[ 8.983102] [<c10b1c96>] ? __raw_callee_save___pv_queued_spin_unlock+0x6/0x10
[ 8.983223] [<c11b0ddd>] ? __kmalloc+0x21d/0x240
[ 8.983308] [<c119cc2e>] __vmalloc_node_range+0x10e/0x210
[ 8.983433] [<c1148fa7>] ? bpf_prog_alloc+0x37/0xa0
[ 8.983518] [<c119cd96>] __vmalloc_node+0x66/0x70
[ 8.983604] [<c1148fa7>] ? bpf_prog_alloc+0x37/0xa0
[ 8.983689] [<c119cdd4>] __vmalloc+0x34/0x40
[ 8.983773] [<c1148fa7>] ? bpf_prog_alloc+0x37/0xa0
[ 8.983859] [<c1148fa7>] bpf_prog_alloc+0x37/0xa0
[ 8.983944] [<c167cc8c>] bpf_prog_create+0x2c/0x90
[ 8.984034] [<c1b6741e>] ? bsp_pm_check_init+0x11/0x11
[ 8.984121] [<c1b68401>] ptp_classifier_init+0x2b/0x44
[ 8.984207] [<c1b6749a>] sock_init+0x7c/0x83
[ 8.984291] [<c100211a>] do_one_initcall+0xaa/0x200
[ 8.984376] [<c1b6741e>] ? bsp_pm_check_init+0x11/0x11
[ 8.984463] [<c1b1654c>] ? repair_env_string+0x12/0x54
[ 8.984551] [<c1b16cf6>] ? kernel_init_freeable+0x126/0x1d9
[ 8.984726] [<c1755fb0>] kernel_init+0x10/0xe0
[ 8.984846] [<c10929b1>] ? schedule_tail+0x11/0x50
[ 8.984932] [<c1764141>] ret_from_kernel_thread+0x21/0x30
[ 8.985019] [<c1755fa0>] ? rest_init+0x70/0x70

break-fix: 92923ca3aacef63c92dc297a75ad0c6dfe4eab37 4b50bcc7eda4d3cc9e3f2a0aa60e590fedf728c5

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1576564/+subscriptions

--
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp