I booted a c4.8xlarge AWS instance and got the same memory/NUMA layout as comment 16. To clarify, though: the /proc/iomem output isn't representative of the actual memory layout. The actual layout, per the e820 map, is:
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009e000-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000efffffff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000fc000000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000f0fffefff] usable

and the SRAT divides it into 2 nodes:

[929310.710905] SRAT: Node 0 PXM 0 [mem 0x00000000-0xefffffff]
[929310.710906] SRAT: Node 0 PXM 0 [mem 0x100000000-0x778efffff]
[929310.710907] SRAT: Node 1 PXM 1 [mem 0x778f00000-0xf0fffffff]

so the node ranges are set up as:

[929310.854161] On node 0 totalpages: 7769757
[929310.854162] DMA zone: 64 pages used for memmap
[929310.854162] DMA zone: 21 pages reserved
[929310.854163] DMA zone: 3997 pages, LIFO batch:0
[929310.854196] mminit::memmap_init Initialising map node 0 zone 0 pfns 1 -> 4096
[929310.854265] DMA32 zone: 15296 pages used for memmap
[929310.854266] DMA32 zone: 978944 pages, LIFO batch:31
[929310.854299] mminit::memmap_init Initialising map node 0 zone 1 pfns 4096 -> 1048576
[929310.869608] Normal zone: 106044 pages used for memmap
[929310.869611] Normal zone: 6786816 pages, LIFO batch:31
[929310.869647] mminit::memmap_init Initialising map node 0 zone 2 pfns 1048576 -> 7835392
[929310.975013] On node 1 totalpages: 7958783
[929310.975018] Normal zone: 124356 pages used for memmap
[929310.975019] Normal zone: 7958783 pages, LIFO batch:31
[929310.975055] mminit::memmap_init Initialising map node 1 zone 2 pfns 7835392 -> 15794175

Node 0's DMA and DMA32 zones are normal, ending at pfns 0x1000 and 0x100000, respectively. The Normal zone for node 0 ends at pfn 0x778f00, and the Normal zone for node 1 ends at pfn 0xf0ffff.
Since PAGE_SHIFT is 12 and pageblock_order (with this system config) is 21 - 12 = 9, a pageblock is 0x200 pages. Node 1's Normal zone ends 1 page short of a pageblock boundary (0xf0ffff is 1 below 0xf10000). Note that node 0's Normal zone end is not pageblock-aligned either: 0x778f00 sits 0x100 pages past the boundary at 0x778e00, so the pageblock containing the node 0/node 1 boundary spans two zones.

Preliminary note: the SRAT table seems to be incorrect; it spans node 1 all the way to 0xf0fffffff, but e820 memory, and the node 1 Normal zone, only reach 0xf0fffefff.

--
You received this bug notification because you are a member of Kernel Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1497428

Title:
  kernel BUG at /build/buildd/linux-3.13.0/mm/page_alloc.c:968

Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Trusty:
  In Progress

Bug description:
  The kernel triggers a BUG when it finds it is in move_freepages() but the start and end pfns for the move are in different zones.