Public bug reported:

kernel:
Linux t35lp11 5.4.0-25-generic #29-Ubuntu SMP Fri Apr 17 15:05:32 UTC 2020 s390x s390x s390x GNU/Linux

How to reproduce:
1. Disable hotpluggable memory
 # chmem -d 1G
2. Look at the kernel messages
 # dmesg -T

Then you should see the following:
...
[Mon Apr 20 10:28:30 2020] page:000003d083000000 refcount:1 mapcount:0 mapping:00000000eb7f3600 index:0x0 compound_mapcount: 0
[Mon Apr 20 10:28:30 2020] flags: 0x7fffe0000010200(slab|head)
[Mon Apr 20 10:28:30 2020] raw: 07fffe0000010200 0000000000000100 0000000000000122 00000000eb7f3600
[Mon Apr 20 10:28:30 2020] raw: 0000000000000000 0035006a00000000 ffffffff00000001 0000000000000000
[Mon Apr 20 10:28:30 2020] page dumped because: unmovable page
[Mon Apr 20 10:28:30 2020] page:000003d082564000 refcount:1 mapcount:0 mapping:00000000a88d3600 index:0x0 compound_mapcount: 0
[Mon Apr 20 10:28:30 2020] flags: 0x7fffe0000010200(slab|head)
[Mon Apr 20 10:28:30 2020] raw: 07fffe0000010200 0000000000000100 0000000000000122 00000000a88d3600
[Mon Apr 20 10:28:30 2020] raw: 0000000000000000 004e009c00000000 ffffffff00000001 0000000000000000
[Mon Apr 20 10:28:30 2020] page dumped because: unmovable page
[Mon Apr 20 10:28:30 2020] page:000003d081fb0000 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0
[Mon Apr 20 10:28:30 2020] flags: 0x3fffe0000000000()
[Mon Apr 20 10:28:30 2020] raw: 03fffe0000000000 0000000000000100 0000000000000122 0000000000000000
[Mon Apr 20 10:28:30 2020] raw: 0000000000000000 0000000000000000 ffffffff00000001 0000000000000000
[Mon Apr 20 10:28:30 2020] page dumped because: unmovable page
[Mon Apr 20 10:28:30 2020] page:000003d080000000 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0
[Mon Apr 20 10:28:30 2020] flags: 0x3fffe0000001000(reserved)
[Mon Apr 20 10:28:30 2020] raw: 03fffe0000001000 000003d080000008 000003d080000008 0000000000000000
[Mon Apr 20 10:28:30 2020] raw: 0000000000000000 0000000000000000 ffffffff00000001 0000000000000000
[Mon Apr 20 10:28:30 2020] page dumped because: unmovable page

...
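
For reference, chmem works through the memory hotplug sysfs interface, so a
roughly equivalent way to offline a single block by hand (block number 4 is
only an example here, taken from the lsmem output further below) is:

 # echo offline > /sys/devices/system/memory/memory4/state
 # cat /sys/devices/system/memory/memory4/state

Whether the requested amount of memory actually went offline can then be
cross-checked with lsmem (output below).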

------------------

Possibly helpful information:

------------------
/proc/zoneinfo:
Node 0, zone      DMA
  per-node stats
      nr_inactive_anon 37
      nr_active_anon 10899
      nr_inactive_file 60239
      nr_active_file 12110
      nr_unevictable 4237
      nr_slab_reclaimable 19489
      nr_slab_unreclaimable 21873
      nr_isolated_anon 0
      nr_isolated_file 0
      workingset_nodes 0
      workingset_refault 0
      workingset_activate 0
      workingset_restore 0
      workingset_nodereclaim 0
      nr_anon_pages 13413
      nr_mapped    19622
      nr_file_pages 74144
      nr_dirty     2
      nr_writeback 0
      nr_writeback_temp 0
      nr_shmem     79
      nr_shmem_hugepages 0
      nr_shmem_pmdmapped 0
      nr_file_hugepages 0
      nr_file_pmdmapped 0
      nr_anon_transparent_hugepages 0
      nr_unstable  0
      nr_vmscan_write 0
      nr_vmscan_immediate_reclaim 0
      nr_dirtied   18735
      nr_written   17734
      nr_kernel_misc_reclaimable 0
  pages free     519949
        min      1066
        low      1590
        high     2114
        spanned  524288
        present  524288
        managed  524265
        protection: (0, 1714, 1714)
      nr_free_pages 519949
      nr_zone_inactive_anon 0
      nr_zone_active_anon 0
      nr_zone_inactive_file 0
      nr_zone_active_file 0
      nr_zone_unevictable 0
      nr_zone_write_pending 0
      nr_mlock     0
      nr_page_table_pages 0
      nr_kernel_stack 0
      nr_bounce    0
      nr_zspages   0
      nr_free_cma  0
      numa_hit     5685
      numa_miss    0
      numa_foreign 0
      numa_interleave 78
      numa_local   5685
      numa_other   0
  pagesets
    cpu: 0
              count: 0
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 1
              count: 0
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 2
              count: 0
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 3
              count: 0
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 4
              count: 0
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 5
              count: 0
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 6
              count: 0
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 7
              count: 0
              high:  378
              batch: 63
  vm stats threshold: 40
  node_unreclaimable:  0
  start_pfn:           0
Node 0, zone   Normal
  pages free     124082
        min      892
        low      1331
        high     1770
        spanned  524288
        present  524288
        managed  439016
        protection: (0, 0, 0)
      nr_free_pages 124082
      nr_zone_inactive_anon 37
      nr_zone_active_anon 10899
      nr_zone_inactive_file 60239
      nr_zone_active_file 12110
      nr_zone_unevictable 4237
      nr_zone_write_pending 2
      nr_mlock     4237
      nr_page_table_pages 961
      nr_kernel_stack 8640
      nr_bounce    0
      nr_zspages   0
      nr_free_cma  0
      numa_hit     318257
      numa_miss    0
      numa_foreign 0
      numa_interleave 25536
      numa_local   318257
      numa_other   0
  pagesets
    cpu: 0
              count: 4
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 1
              count: 60
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 2
              count: 165
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 3
              count: 83
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 4
              count: 61
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 5
              count: 323
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 6
              count: 8
              high:  378
              batch: 63
  vm stats threshold: 40
    cpu: 7
              count: 0
              high:  378
              batch: 63
  vm stats threshold: 40
  node_unreclaimable:  0
  start_pfn:           524288
Node 0, zone  Movable
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 0)
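
For orientation, assuming the usual 4 KiB page size on s390x: the DMA and
Normal zones each span 524288 pages (2 GiB), which together matches the 4 GiB
reported as online by lsmem below, while the Movable zone is empty.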


------------------
# lsmem -a
RANGE                                 SIZE   STATE REMOVABLE BLOCK
0x0000000000000000-0x000000003fffffff   1G  online       yes     0
0x0000000040000000-0x000000007fffffff   1G  online       yes     1
0x0000000080000000-0x00000000bfffffff   1G  online       yes     2
0x00000000c0000000-0x00000000ffffffff   1G  online       yes     3
0x0000000100000000-0x000000013fffffff   1G offline               4
0x0000000140000000-0x000000017fffffff   1G offline               5
0x0000000180000000-0x00000001bfffffff   1G offline               6
0x00000001c0000000-0x00000001ffffffff   1G offline               7
0x0000000200000000-0x000000023fffffff   1G offline               8
0x0000000240000000-0x000000027fffffff   1G offline               9
0x0000000280000000-0x00000002bfffffff   1G offline              10
0x00000002c0000000-0x00000002ffffffff   1G offline              11
0x0000000300000000-0x000000033fffffff   1G offline              12
0x0000000340000000-0x000000037fffffff   1G offline              13
0x0000000380000000-0x00000003bfffffff   1G offline              14
0x00000003c0000000-0x00000003ffffffff   1G offline              15
0x0000000400000000-0x000000043fffffff   1G offline              16
0x0000000440000000-0x000000047fffffff   1G offline              17
0x0000000480000000-0x00000004bfffffff   1G offline              18
0x00000004c0000000-0x00000004ffffffff   1G offline              19
0x0000000500000000-0x000000053fffffff   1G offline              20
0x0000000540000000-0x000000057fffffff   1G offline              21
0x0000000580000000-0x00000005bfffffff   1G offline              22
0x00000005c0000000-0x00000005ffffffff   1G offline              23
0x0000000600000000-0x000000063fffffff   1G offline              24
0x0000000640000000-0x000000067fffffff   1G offline              25
0x0000000680000000-0x00000006bfffffff   1G offline              26
0x00000006c0000000-0x00000006ffffffff   1G offline              27
0x0000000700000000-0x000000073fffffff   1G offline              28
0x0000000740000000-0x000000077fffffff   1G offline              29
0x0000000780000000-0x00000007bfffffff   1G offline              30
0x00000007c0000000-0x00000007ffffffff   1G offline              31
0x0000000800000000-0x000000083fffffff   1G offline              32
0x0000000840000000-0x000000087fffffff   1G offline              33
0x0000000880000000-0x00000008bfffffff   1G offline              34
0x00000008c0000000-0x00000008ffffffff   1G offline              35

Memory block size:         1G
Total online memory:       4G
Total offline memory:     32G

Memory block size:         1G
Total online memory:      34G
Total offline memory:      2G
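
To bring the memory back online after testing, the offlined blocks from the
table above can be re-enabled again with chmem, either by size or (with a
chmem that supports --blocks) by block number, e.g.:

 # chmem -e 32G
or
 # chmem -e -b 4-35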

** Affects: ubuntu-z-systems
     Importance: High
     Assignee: Skipper Bug Screeners (skipper-screen-team)
         Status: New

** Affects: linux (Ubuntu)
     Importance: Undecided
     Assignee: Canonical Kernel Team (canonical-kernel-team)
         Status: New


** Tags: architecture-s39064 bugnameltc-185365 severity-high targetmilestone-inin2004

** Tags added: architecture-s39064 bugnameltc-185365 severity-high targetmilestone-inin2004

** Changed in: ubuntu
     Assignee: (unassigned) => Skipper Bug Screeners (skipper-screen-team)

** Package changed: ubuntu => linux (Ubuntu)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1873762

Title:
  [Ubuntu 20.04] memory hotplug triggers page migration warnings

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/1873762/+subscriptions

