the creation of a special type of
writable memory (shadow stack) that is only writable in limited, specific
ways. Previously, changes were proposed to core MM code to teach it to
decide when to create normally writable memory or the special shadow stack
writable memory, but David Hildenbrand suggested
On 01.03.23 08:03, Christophe Leroy wrote:
Le 27/02/2023 à 23:29, Rick Edgecombe a écrit :
The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which require some core mm changes to f
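For context on those "unusual properties": x86 encodes shadow-stack memory in the
page tables as Write=0 plus Dirty=1, a combination that ordinary read-only memory
never carries. The small user-space sketch below only illustrates that encoding,
using the usual x86 PTE bit positions; it is not one of the kernel's actual helpers.

#include <stdbool.h>
#include <stdint.h>

/* Illustration only: on x86, a shadow-stack PTE is Write=0 + Dirty=1
 * (RW is bit 1, Dirty is bit 6 of the PTE). */
#define X86_PTE_RW    (1ULL << 1)
#define X86_PTE_DIRTY (1ULL << 6)

static bool pte_looks_like_shadow_stack(uint64_t pte)
{
        return !(pte & X86_PTE_RW) && (pte & X86_PTE_DIRTY);
}

int main(void)
{
        uint64_t shstk_pte = X86_PTE_DIRTY;     /* Write=0, Dirty=1 */
        uint64_t ro_pte = 0;                    /* plain read-only  */

        return !(pte_looks_like_shadow_stack(shstk_pte) &&
                 !pte_looks_like_shadow_stack(ro_pte));
}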
On 28.02.23 16:50, Palmer Dabbelt wrote:
On Fri, 13 Jan 2023 09:10:19 PST (-0800), da...@redhat.com wrote:
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit
from the offset. This reduces the maximum swap space per file: on 32bit
to 16 GiB (was 32 GiB).
Seems fine to me, I doubt a
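As a quick cross-check of those numbers (the snippet does not spell out the riscv
bit counts, so the 23/22 offset bits and the 4 KiB page size below are simply the
combination that reproduces 32 GiB and 16 GiB):

#include <stdio.h>

int main(void)
{
        unsigned long long page_size = 4096;    /* 4 KiB pages, an assumption */
        int offset_bits_before = 23, offset_bits_after = 22;

        printf("before: %llu GiB\n", (page_size << offset_bits_before) >> 30); /* 32 */
        printf("after:  %llu GiB\n", (page_size << offset_bits_after) >> 30);  /* 16 */
        return 0;
}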
On 27.02.23 20:46, Geert Uytterhoeven wrote:
Hi David,
On Mon, Feb 27, 2023 at 6:01 PM David Hildenbrand wrote:
/*
* Externally used page protection values.
diff --git a/arch/microblaze/include/asm/pgtable.h
b/arch/microblaze/include/asm/pgtable.h
index 42f5988e998b..7e3de54bf426 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -131,10 +131,10 @@ exte
On 26.02.23 21:13, Geert Uytterhoeven wrote:
Hi David,
Hi Geert,
On Fri, Jan 13, 2023 at 6:16 PM David Hildenbrand wrote:
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit
from the type. Generic MM currently only uses 5 bits for the type
(MAX_SWAPFILES_SHIFT), so the stolen bit is effectively unused.
nux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@lists.infradead.org
Cc: xen-de...@lists.xenproject.org
Cc: linux-a...@vger.kernel.org
Cc: linux...@kvack.org
Tested-by: Pengfei Xu
Suggested-by: David Hildenbrand
Signed-off-by: Rick Edgecombe
---
Hi Non-x86 A
On 07.02.23 01:32, Mark Brown wrote:
On Fri, Jan 13, 2023 at 06:10:04PM +0100, David Hildenbrand wrote:
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit from the
offset. This reduces the maximum swap space per file to 64 GiB (was 128
GiB).
While at it drop the PTE_TYPE_
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
ntation of pfn_valid() and drop its per-architecture definitions.
Signed-off-by: Mike Rapoport (IBM)
Acked-by: Arnd Bergmann
Acked-by: Guo Ren # csky
Acked-by: Huacai Chen # LoongArch
Acked-by: Stafford Horne # OpenRISC
---
LGTM with the fixup
Reviewed-by: David Hildenbrand
h and
drop redundant definitions.
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: Geert Uytterhoeven
Acked-by: Geert Uytterhoeven
---
Reviewed-by: David Hildenbrand
Reviewed-by: David Hildenbrand
On 13.01.23 18:10, David Hildenbrand wrote:
We want to implement __HAVE_ARCH_PTE_SWP_EXCLUSIVE on all architectures.
Let's extend our sanity checks, especially testing that our PTE bit
does not affect:
* is_swap_pte() -> pte_present() and pte_none()
* the swap entry + type
* pte_swp_so
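In other words, the test asserts that flipping the exclusive marker on a swap PTE
changes nothing else. A rough sketch using the generic swap helpers (not the
literal mm/debug_vm_pgtable.c code) looks like this:

/* Sketch only; relies on the <linux/swapops.h> helpers named in the mail. */
static void __init pte_swp_exclusive_sanity_sketch(void)
{
        swp_entry_t entry = swp_entry(1, 100);          /* arbitrary type/offset */
        pte_t pte = swp_entry_to_pte(entry);

        WARN_ON(pte_swp_exclusive(pte));
        pte = pte_swp_mkexclusive(pte);
        WARN_ON(!pte_swp_exclusive(pte));

        /* the bit must not affect is_swap_pte() -> !pte_present(), !pte_none() */
        WARN_ON(!is_swap_pte(pte));
        WARN_ON(pte_present(pte));

        /* ...nor the encoded swap type + offset */
        WARN_ON(swp_type(pte_to_swp_entry(pte)) != swp_type(entry));
        WARN_ON(swp_offset(pte_to_swp_entry(pte)) != swp_offset(entry));

        pte = pte_swp_clear_exclusive(pte);
        WARN_ON(pte_swp_exclusive(pte));
}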
esent(), pte_none() and HW happy. For now, let's keep it simple
because there might be something non-obvious.
Cc: Guo Ren
Signed-off-by: David Hildenbrand
---
arch/csky/abiv1/inc/abi/pgtable-bits.h | 13 +
arch/csky/abiv2/inc/abi/pgtable-bits.h | 19 ---
arch/csky/i
cannot
be used, and reusing it avoids having to steal one bit from the swap
offset.
While at it, mask the type in __swp_entry().
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Signed-off-by: David Hildenbrand
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 38 +
While at it, mask the type in __swp_entry(); use some helper definitions
to make the macros easier to grasp.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Signed-off-by: David Hildenbrand
---
arch/x86/include/asm/pgtabl
__HAVE_ARCH_PTE_SWP_EXCLUSIVE is now supported by all architectures that
support swp PTEs, so let's drop it.
Signed-off-by: David Hildenbrand
---
arch/alpha/include/asm/pgtable.h | 1 -
arch/arc/include/asm/pgtable-bits-arcv2.h| 1 -
arch/arm/include/asm/pgta
k 1100 and 1110 now identify swap PTEs.
While at it, remove SWP_TYPE_BITS (not really helpful as it's not used in
the actual swap macros) and mask the type in __swp_entry().
Cc: Chris Zankel
Cc: Max Filippov
Signed-off-by: David Hildenbrand
---
arch/xtensa/include/asm
While at it, mask the type in __swp_entry().
Cc: Richard Weinberger
Cc: Anton Ivanov
Cc: Johannes Berg
Signed-off-by: David Hildenbrand
---
arch/um/include/asm/pgtable.h | 37 +--
1 file changed, 35 insertions(+), 2 deletions(-)
diff --git a/arch/um/include/asm/pgtable.h b/ar
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit
from the type. Generic MM currently only uses 5 bits for the type
(MAX_SWAPFILES_SHIFT), so the stolen bit was effectively unused.
While at it, mask the type in __swp_entry().
Cc: "David S. Miller"
Signed-off-by: Da
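To make the "stolen type bit" idea concrete, here is a small self-contained
user-space demo. The shifts and widths are invented for illustration and do not
match the real sparc layout; the point is that masking the type keeps it from
spilling into the reused bit:

#include <stdint.h>
#include <stdio.h>

#define SWP_TYPE_BITS    5                        /* MAX_SWAPFILES_SHIFT */
#define SWP_TYPE_MASK    ((1u << SWP_TYPE_BITS) - 1)
#define SWP_EXCLUSIVE    (1u << SWP_TYPE_BITS)    /* the stolen bit */
#define SWP_OFFSET_SHIFT 7                        /* made-up layout */

static uint32_t mk_swp_entry(uint32_t type, uint32_t offset)
{
        /* masking the type keeps it out of the stolen bit */
        return (type & SWP_TYPE_MASK) | (offset << SWP_OFFSET_SHIFT);
}

int main(void)
{
        uint32_t e = mk_swp_entry(3, 0x1234) | SWP_EXCLUSIVE;

        printf("type=%u offset=0x%x exclusive=%u\n",
               e & SWP_TYPE_MASK, e >> SWP_OFFSET_SHIFT, !!(e & SWP_EXCLUSIVE));
        return 0;
}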
archs. Note that the old documentation was
wrong: we use 20 bit for the offset and the reserved bits were 8 instead
of 7 bits in the ascii art.
Cc: "David S. Miller"
Signed-off-by: David Hildenbrand
---
arch/sparc/include/asm/pgtable_32.h | 27 ++-
arch/sp
While at it, mask the type in __swp_entry().
Cc: Yoshinori Sato
Cc: Rich Felker
Signed-off-by: David Hildenbrand
---
arch/sh/include/asm/pgtable_32.h | 54 +---
1 file changed, 42 insertions(+), 12 deletions(-)
diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/
While at it, mask the type in __swp_entry().
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Albert Ou
Signed-off-by: David Hildenbrand
---
arch/riscv/include/asm/pgtable-bits.h | 3 +++
arch/riscv/include/asm/pgtable.h | 29 ++-
2 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/arch/
Signed-off-by: David Hildenbrand
---
arch/powerpc/include/asm/nohash/32/pgtable.h | 22 +
arch/powerpc/include/asm/nohash/32/pte-40x.h | 6 ++---
arch/powerpc/include/asm/nohash/32/pte-44x.h | 18 --
arch/powerpc/include/asm/nohash/32/pte-85xx.h | 4 ++--
arch/powe
bit avoids having to steal one bit from the swap offset.
Cc: "James E.J. Bottomley"
Cc: Helge Deller
Signed-off-by: David Hildenbrand
---
arch/parisc/include/asm/pgtable.h | 41 ---
1 file changed, 38 insertions(+), 3 deletions(-)
diff --git a/arch/parisc/i
Signed-off-by: David Hildenbrand
---
arch/openrisc/include/asm/pgtable.h | 41 +
1 file changed, 36 insertions(+), 5 deletions(-)
diff --git a/arch/openrisc/include/asm/pgtable.h
b/arch/openrisc/include/asm/pgtable.h
index 6477c17b3062..903b32d662ab 100644
--- a/arch/ope
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by using the yet-unused bit
31.
Cc: Thomas Bogendoerfer
Signed-off-by: David Hildenbrand
---
arch/nios2/include/asm/pgtable-bits.h | 3 +++
arch/nios2/include/asm/pgtable.h | 22 +-
2 files changed, 24 insertions(
ts for the swap type and document the layout.
Bits 26--31 should get ignored by hardware completely, so they can be
used.
Cc: Dinh Nguyen
Signed-off-by: David Hildenbrand
---
arch/nios2/include/asm/pgtable.h | 18 ++
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/
confusing, document it a bit better.
While at it, mask the type in __swp_entry()/mk_swap_pte().
Cc: Thomas Bogendoerfer
Signed-off-by: David Hildenbrand
---
arch/mips/include/asm/pgtable-32.h | 88 ++
arch/mips/include/asm/pgtable-64.h | 23 ++--
arch/mips/include/asm
out a little bit harder to decipher.
While at it, drop the comment from paulus---copy-and-paste leftover
from powerpc where we actually have _PAGE_HASHPTE---and mask the type in
__swp_entry_to_pte() as well.
Cc: Michal Simek
Signed-off-by: David Hildenbrand
---
arch/m68k/include/asm/mcf_pgtable.h
mask the type in __swp_entry().
Cc: Geert Uytterhoeven
Cc: Greg Ungerer
Signed-off-by: David Hildenbrand
---
arch/m68k/include/asm/mcf_pgtable.h | 36 --
arch/m68k/include/asm/motorola_pgtable.h | 38 +--
arch/m68k/include/asm/sun3_pgtable.h
The definitions are not required, let's remove them.
Cc: Geert Uytterhoeven
Cc: Greg Ungerer
Signed-off-by: David Hildenbrand
---
arch/m68k/include/asm/pgtable_no.h | 6 --
1 file changed, 6 deletions(-)
diff --git a/arch/m68k/include/asm/pgtable_no.h
b/arch/m68k/includ
PMDs and could also be used
in swap PMD context later.
Cc: Huacai Chen
Cc: WANG Xuerui
Signed-off-by: David Hildenbrand
---
arch/loongarch/include/asm/pgtable-bits.h | 4 +++
arch/loongarch/include/asm/pgtable.h | 39 ---
2 files changed, 39 insertions(+), 4 dele
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit
from the type. Generic MM currently only uses 5 bits for the type
(MAX_SWAPFILES_SHIFT), so the stolen bit is effectively unused.
While at it, also mask the type in __swp_entry().
Signed-off-by: David Hildenbrand
---
arch
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit from the
offset. This reduces the maximum swap space per file to 16 GiB (was 32
GiB).
While at it, mask the type in __swp_entry().
Cc: Brian Cain
Signed-off-by: David Hildenbrand
---
arch/hexagon/include/asm/pgtable.h
with "Linux PTEs" not "hardware PTEs". Also, properly mask the type in
__swp_entry().
Cc: Russell King
Signed-off-by: David Hildenbrand
---
arch/arm/include/asm/pgtable-2level.h | 3 +++
arch/arm/include/asm/pgtable-3level.h | 3 +++
arch/arm/include/
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by using bit 5, which is yet
unused. The only important parts seems to be to not use _PAGE_PRESENT
(bit 9).
Cc: Vineet Gupta
Signed-off-by: David Hildenbrand
---
arch/arc/include/asm/pgtable-bits-arcv2.h | 27 ---
1 file ch
only 32bit swap entries. So the lower 32bit are zero in a swap PTE and
we could have taken a bit in there as well.
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Signed-off-by: David Hildenbrand
---
arch/alpha/include/asm/pgtable.h | 41
1 file changed, 37 insertions(+), 4 deletions(-)
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 9e45f6735d5d..970abf511b13 100644
when the swap PTE layout differs
heavily from ordinary PTEs. Let's properly construct a swap PTE from
swap type+offset.
Signed-off-by: David Hildenbrand
---
mm/debug_vm_pgtable.c | 23 ++-
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/mm/debug_vm_p
"powerpc/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE on 32bit book3s"
-> Fixup swap PTE description
David Hildenbrand (26):
mm/debug_vm_pgtable: more pte_swp_exclusive() sanity checks
alpha/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE
arc/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE
) didn't call it pte_mksoft_clean().
Grepping for "pte_swp.*soft_dirty" gives you the full picture.
Thanks!
David
Huacai
On Tue, Dec 6, 2022 at 10:48 PM David Hildenbrand wrote:
This is the follow-up on [1]:
[PATCH v2 0/8] mm: COW fixes part 3: reliable GUP R/
On 06.12.22 15:47, David Hildenbrand wrote:
This is the follow-up on [1]:
[PATCH v2 0/8] mm: COW fixes part 3: reliable GUP R/W FOLL_GET of
anonymous pages
After we implemented __HAVE_ARCH_PTE_SWP_EXCLUSIVE on most prominent
enterprise architectures, implement
On 08.12.22 09:52, David Hildenbrand wrote:
On 07.12.22 14:55, Christophe Leroy wrote:
Le 06/12/2022 à 15:47, David Hildenbrand a écrit :
We already implemented support for 64bit book3s in commit bff9beaa2e80
("powerpc/pgtable: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE for book3s&quo
On 07.12.22 14:55, Christophe Leroy wrote:
Le 06/12/2022 à 15:47, David Hildenbrand a écrit :
We already implemented support for 64bit book3s in commit bff9beaa2e80
("powerpc/pgtable: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE for book3s")
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE
.com/aarcange/kernel-testcases-for-v5.11/-/blob/main/page_count_do_wp_page-swap.c
[3]
https://gitlab.com/davidhildenbrand/scratchspace/-/blob/main/test_swp_exclusive.c
David Hildenbrand (26):
mm/debug_vm_pgtable: more pte_swp_exclusive() sanity checks
alpha/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE
Let's communicate driver-managed regions to memblock, to properly
teach kexec_file with CONFIG_ARCH_KEEP_MEMBLOCK to not place images on
these memory regions.
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git
to the system by a driver; memory might not actually be
physically hotunpluggable. kexec *must not* indicate this memory to
the second kernel and *must not* place kexec-images on this memory.
Signed-off-by: David Hildenbrand
---
include/linux/memblock.h | 16 +
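Pieced together from that description, the hotplug side presumably ends up doing
something along these lines when it registers the region with memblock. This is a
hedged reconstruction, not the actual add_memory_resource() hunk, and the error
label is hypothetical:

        if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
                enum memblock_flags flags = MEMBLOCK_NONE;

                if (res->flags & IORESOURCE_SYSRAM_DRIVER_MANAGED)
                        flags = MEMBLOCK_DRIVER_MANAGED;
                ret = memblock_add_node(start, size, nid, flags);
                if (ret)
                        goto error_mem_hotplug_end;     /* hypothetical label */
        }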
ly stumble over memblocks with wrong flags, which will be
important in a follow-up patch that introduces a new flag to properly
handle add_memory_driver_managed().
Acked-by: Geert Uytterhoeven
Acked-by: Heiko Carstens
Signed-off-by: David Hildenbrand
---
arch/arc/mm/init.c | 4 ++-
hotunplug, kexec has to
be re-armed to update the memory map for the second kernel and to place the
kexec-images somewhere else.
Signed-off-by: David Hildenbrand
---
include/linux/memblock.h | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/include/linux/memblock.h b/inclu
y ignoring the error.
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9fd0be32a281..917b3528636d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1384,
: linux-s...@vger.kernel.org
Cc: linux...@kvack.org
Cc: ke...@lists.infradead.org
David Hildenbrand (5):
mm/memory_hotplug: handle memblock_add_node() failures in
add_memory_resource()
memblock: improve MEMBLOCK_HOTPLUG documentation
memblock: allow to specify flags with memblock_add_node()
me
On 30.09.21 23:21, Mike Rapoport wrote:
On Wed, Sep 29, 2021 at 06:54:01PM +0200, David Hildenbrand wrote:
On 29.09.21 18:39, Mike Rapoport wrote:
Hi,
On Mon, Sep 27, 2021 at 05:05:17PM +0200, David Hildenbrand wrote:
Let's add a flag that corresponds to IORESOURCE_SYSRAM_DRIVER_MA
On 29.09.21 18:39, Mike Rapoport wrote:
Hi,
On Mon, Sep 27, 2021 at 05:05:17PM +0200, David Hildenbrand wrote:
Let's add a flag that corresponds to IORESOURCE_SYSRAM_DRIVER_MANAGED.
Similar to MEMBLOCK_HOTPLUG, most infrastructure has to treat such memory
like ordinary MEMBLOCK_NONE m
On 29.09.21 18:25, Mike Rapoport wrote:
On Mon, Sep 27, 2021 at 05:05:16PM +0200, David Hildenbrand wrote:
We want to specify flags when hotplugging memory. Let's prepare to pass
flags to memblock_add_node() by adjusting all existing users.
Note that when hotplugging memory the syst
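The mechanical part of that preparation is presumably a caller adjustment of this
shape (an illustrative diff, not a specific hunk from the series):

-       memblock_add_node(base, size, nid);
+       memblock_add_node(base, size, nid, MEMBLOCK_NONE);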
Intended subject was "[PATCH v1 0/4] mm/memory_hotplug: full support for
add_memory_driver_managed() with CONFIG_ARCH_KEEP_MEMBLOCK"
_MANAGED) next. This prepares architectures
that need CONFIG_ARCH_KEEP_MEMBLOCK, such as arm64, for virtio-mem
support.
Signed-off-by: David Hildenbrand
---
include/linux/memblock.h | 16 ++--
kernel/kexec_file.c | 5 +
mm/memblock.c| 4
3 files changed,
within one memblock call.
Signed-off-by: David Hildenbrand
---
arch/arc/mm/init.c | 4 ++--
arch/ia64/mm/contig.c| 2 +-
arch/ia64/mm/init.c | 2 +-
arch/m68k/mm/mcfmmu.c| 3 ++-
arch/m68k/mm/motorola.c | 6 --
arch/mips/loongso
s.infradead.org
David Hildenbrand (4):
mm/memory_hotplug: handle memblock_add_node() failures in
add_memory_resource()
memblock: allow to specify flags with memblock_add_node()
memblock: add MEMBLOCK_DRIVER_MANAGED to mimic
IORESOURCE_SYSRAM_DRIVER_MANAGED
mm/memory_ho
page)
{
Acked-by: David Hildenbrand
_pages(struct page *page, unsigned long pfn,
unsigned int order)
@@ -7276,7 +7276,7 @@ static void __ref alloc_node_mem_map(struct pglist_data
*pgdat)
pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n",
__func__, pgdat->nod
nown, the PFN can be used to index
-appropriate `node_mem_map` array to access the `struct page` and
-the offset of the `struct page` from the `node_mem_map` plus
-`node_start_pfn` is the PFN of that page.
-
SPARSEMEM
=====
Reviewed-by: David Hildenbrand
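Expressed as code, the per-node indexing that the removed paragraph describes is
roughly the following; node_page() is a made-up name for this sketch, and the
real historical definition lived behind __pfn_to_page():

/* Sketch: within a node, PFN <-> struct page was plain indexing into
 * node_mem_map, offset by node_start_pfn. */
#define node_page(nid, pfn) \
        (NODE_DATA(nid)->node_mem_map + ((pfn) - NODE_DATA(nid)->node_start_pfn))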
TA() gets
- * optimized to &contig_page_data at compile-time.
+ * For the case of non-NUMA systems the NODE_DATA() gets optimized to
+ * &contig_page_data at compile-time.
*/
static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
{
Reviewed-by: David Hildenbr
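For reference, the "optimized to &contig_page_data" remark refers to the !NUMA
definition, which (modulo the exact guards) looks like this:

#ifndef CONFIG_NUMA
extern struct pglist_data contig_page_data;
#define NODE_DATA(nid)          (&contig_page_data)
#endif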
h-order allocations like THP are likely to be
- * unsupported and the premature reclaim offsets the advantage of long-term
- * fragmentation avoidance.
- */
-int watermark_boost_factor __read_mostly;
-#else
int watermark_boost_factor __read_mostly = 15000;
-#endif
int watermark_scale_facto
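With the #ifdef gone, watermark_boost_factor defaults to 15000 unconditionally.
Assuming the documented sysctl semantics (the factor is expressed in fractions of
10000), that caps the boost at 1.5 times the high watermark:

/* Assumption: the factor is in fractions of 10000, per the sysctl docs. */
static unsigned long max_boost(unsigned long high_wmark)
{
        return high_wmark * 15000 / 10000;      /* 1.5 * high watermark */
}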
On 02.06.21 12:53, Mike Rapoport wrote:
From: Mike Rapoport
DISCONTIGMEM was replaced by FLATMEM with freeing of the unused memory map
in v5.11.
Remove the support for DISCONTIGMEM entirely.
Signed-off-by: Mike Rapoport
Acked-by: David Hildenbrand
node_set_online(1);
Reviewed-by: David Hildenbrand
memblock_reserve(virt_to_phys((void *)initrd_start),
-INITRD_SIZE);
- }
- }
-#endif /* CONFIG_BLK_DEV_INITRD */
-}
-
-void __init paging_init(void)
-{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
-
E_SHIFT
- 10),
- totalcma_pages << (PAGE_SHIFT - 10),
+ totalcma_pages << (PAGE_SHIFT - 10)
#ifdef CONFIG_HIGHMEM
- totalhigh_pages() << (PAGE_SHIFT - 10),
+ , totalhigh_pages() << (PAGE_SHIFT - 10)
#endif
- s
, page, zone))
- continue;
-
Right, pfn_to_online_page() -> pfn_valid() / pfn_valid_within() should
handle that.
Acked-by: David Hildenbrand
On 12.04.20 21:48, Mike Rapoport wrote:
> From: Baoquan He
>
> When called during boot the memmap_init_zone() function checks if each PFN
> is valid and actually belongs to the node being initialized using
> early_pfn_valid() and early_pfn_in_nid().
>
> Each such check may cost up to O(log(n)) w
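The pattern being criticized has roughly the per-PFN shape below (a sketch of the
pre-patch loop, not the literal memmap_init_zone() code); each iteration may pay
that O(log(n)) memblock lookup:

        for (pfn = start_pfn; pfn < end_pfn; pfn++) {
                if (!early_pfn_valid(pfn) || !early_pfn_in_nid(pfn, nid))
                        continue;
                __init_single_page(pfn_to_page(pfn), pfn, zone, nid);
        }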
On 15.10.19 13:50, David Hildenbrand wrote:
On 15.10.19 13:47, Michal Hocko wrote:
On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
[...]
-static bool pfn_range_valid_gigantic(struct zone *z,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long i
On 15.10.19 13:47, Michal Hocko wrote:
On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
[...]
-static bool pfn_range_valid_gigantic(struct zone *z,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long i, end_pfn = start_pfn + nr_pages
On 15.10.19 11:21, Anshuman Khandual wrote:
alloc_gigantic_page() implements an allocation method where it scans over
various zones looking for a large contiguous memory block which could not
have been allocated through the buddy allocator. A subsequent patch which
tests arch page table helpers n
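The scan alloc_gigantic_page() is described as doing has roughly the shape below.
This is a hedged sketch: range_looks_allocatable() stands in for the
pfn_range_valid_gigantic() helper shown being removed above, and the real function
handles locking, retries and zone/node masks that are omitted here:

        for_each_zone(zone) {
                unsigned long pfn = ALIGN(zone->zone_start_pfn, nr_pages);

                for (; pfn + nr_pages <= zone_end_pfn(zone); pfn += nr_pages) {
                        if (!range_looks_allocatable(zone, pfn, nr_pages))
                                continue;
                        if (!alloc_contig_range(pfn, pfn + nr_pages,
                                                MIGRATE_MOVABLE, GFP_KERNEL))
                                return pfn_to_page(pfn);
                }
        }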