On 10/15/19 2:48 PM, Linus Torvalds wrote:
> On Tue, Oct 15, 2019 at 12:19 PM Vineet Gupta
> wrote:
>> This is a non-functional change anyway since ARC has a software page walker
>> with 2 lookup levels (pgd -> pte)
>
> Could we encourage other architectures to do the same, and get rid of
> all uses of __ARCH_USE_5LEVEL_HACK?
On Tue, Oct 15, 2019 at 12:19 PM Vineet Gupta
wrote:
>
> This is a non-functional change anyway since ARC has a software page walker
> with 2 lookup levels (pgd -> pte)
Could we encourage other architectures to do the same, and get rid of
all uses of __ARCH_USE_5LEVEL_HACK?
Linus
On Tue, Oct 15, 2019 at 12:19 PM Vineet Gupta
wrote:
>
> This came up when removing __ARCH_HAS_5LEVEL_HACK for ARC, as the code bloat
> from this routine is not required in a 2-level paging setup
Similarly acked,
Linus
On Tue, Oct 15, 2019 at 12:19 PM Vineet Gupta
wrote:
>
> This came up when removing __ARCH_HAS_5LEVEL_HACK for ARC, as the code bloat
> from this routine is not required in a 2-level paging setup
Ack, looks good.
Linus
Hi,
This series came out of a seemingly naive excursion into understanding/removing
__ARCH_USE_5LEVEL_HACK from the ARC port. With the removal of 5LEVEL_HACK, some
extraneous code was being generated:
| bloat-o-meter2 vmlinux-A-baseline vmlinux-B-elide-ARCH_USE_5LEVEL_HACK
| add/remove: 0/0 grow/shrink: 3/0 up/down: 130/0 (130)
| function
This is a non-functional change anyway since ARC has a software page walker
with 2 lookup levels (pgd -> pte)
There is slight code bloat due to pulling in needless p*d_free_tlb()
macros, which need to be addressed separately.
... independent of __ARCH_HAS_4LEVEL_HACK
This came up when removing __ARCH_HAS_5LEVEL_HACK for ARC, as the code bloat
from this routine is not required in a 2-level paging setup
Note: The primary change is alternate defines for pud_free_tlb(), but
while here the empty stubs for __pud_free_tlb() were also removed.
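As a rough sketch of what such an alternate define can look like (simplified; the
real generic macro also updates mmu_gather bookkeeping, and keying the folded case
off __PAGETABLE_PUD_FOLDED here is an assumption, not the actual patch):

#ifndef __PAGETABLE_PUD_FOLDED
#define pud_free_tlb(tlb, pudp, address)		\
	do {						\
		__pud_free_tlb(tlb, pudp, address);	\
	} while (0)
#else
/* pud folded into pgd: no pud page to free, avoid dragging in the free path */
#define pud_free_tlb(tlb, pudp, address)	do { } while (0)
#endif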
... independent of __ARCH_HAS_5LEVEL_HACK
This came up when removing __ARCH_HAS_5LEVEL_HACK for ARC, as the code bloat
from this routine is not required in a 2-level paging setup
| bloat-o-meter2 vmlinux-C-elide-pud_free_tlb vmlinux-D-elide-p4d_free_tlb
| add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-104
This removes the code for 2-level paging as seen on ARC
| bloat-o-meter2 vmlinux-D-elide-p4d_free_tlb vmlinux-E-elide-p?d_clear_bad
| add/remove: 0/2 grow/shrink: 0/0 up/down: 0/-40 (-40)
| function old new delta
| pud_clear_bad
Note that pmd routine folding can be tricky, as even in a 2-level setup
(where pmd is folded) most pmd routines refer to upper levels.
This one, however, can surely be elided.
| bloat-o-meter2 vmlinux-E-elide-p?d_clear_bad vmlinux-F-elide-pmd_free_tlb
| add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-112
From: Thomas Gleixner
CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT.
Both PREEMPT and PREEMPT_RT require the same functionality which today
depends on CONFIG_PREEMPT.
Switch the entry code over to use CONFIG_PREEMPTION.
Cc: Vineet Gupta
Cc: linux-snps-arc@lists.infra
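The shape of the change, as a hedged C sketch rather than ARC's actual entry code
(the wrapper name below is hypothetical; preempt_schedule_irq() is the usual
return-from-interrupt preemption point):

#ifdef CONFIG_PREEMPTION		/* was: #ifdef CONFIG_PREEMPT */
/* hypothetical helper showing only the guard change */
static inline void irq_return_check_preempt(void)
{
	if (!preempt_count() && need_resched())
		preempt_schedule_irq();
}
#endif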
> [ 13.898549][T1] ------------[ cut here ]------------
> [ 13.898549][T1] kernel BUG at ./include/linux/mm.h:1107!
> [ 13.898549][T1] invalid opcode: [#1] SMP DEBUG_PAGEALLOC KASAN PTI
> [ 13.898549][T1] CPU: 0 PID: 1 Comm: swapper/0 Not tainted
> 5.4.0-rc3-next-20191
On Tue, 2019-10-15 at 14:51 +0530, Anshuman Khandual wrote:
> +static unsigned long __init get_random_vaddr(void)
> +{
> + unsigned long random_vaddr, random_pages, total_user_pages;
> +
> + total_user_pages = (TASK_SIZE - FIRST_USER_ADDRESS) / PAGE_SIZE;
> +
> + random_pages = get_rand
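The quoted helper is cut off at "get_rand"; one plausible completion, assuming it
only needs a random page-aligned address inside the user range and that
get_random_long() is the source of entropy:

static unsigned long __init get_random_vaddr(void)
{
	unsigned long random_vaddr, random_pages, total_user_pages;

	total_user_pages = (TASK_SIZE - FIRST_USER_ADDRESS) / PAGE_SIZE;

	/* pick a random page index and turn it back into an address */
	random_pages = get_random_long() % total_user_pages;
	random_vaddr = FIRST_USER_ADDRESS + random_pages * PAGE_SIZE;

	return random_vaddr;
}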
VM_BUG_ON_PAGE(PagePoisoned(p))
[ 13.898549][T1] ------------[ cut here ]------------
[ 13.898549][T1] kernel BUG at ./include/linux/mm.h:1107!
[ 13.898549][T1] invalid opcode: [#1] SMP DEBUG_PAGEALLOC KASAN PTI
[ 13.898549][T1] CPU: 0 PID: 1 C
x86 will crash during boot with linux-next due to this series (v5), with the
config below plus CONFIG_DEBUG_VM_PGTABLE=y. I am not sure if v6 would address
it.
https://raw.githubusercontent.com/cailca/linux-mm/master/x86.config
[ 33.862600][T1] page:ea000900 is uninitialized and
On 10/15/2019 05:16 PM, Michal Hocko wrote:
> On Tue 15-10-19 14:51:42, Anshuman Khandual wrote:
>> This adds tests which will validate architecture page table helpers and
>> other accessors in their compliance with expected generic MM semantics.
>> This will help various architectures in validating changes to existing
>> page table helpers or addition of new ones.
On Tue 15-10-19 14:09:56, Michal Hocko wrote:
> On Tue 15-10-19 13:50:02, David Hildenbrand wrote:
> > On 15.10.19 13:47, Michal Hocko wrote:
> > > On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
> > > [...]
> > > > > -static bool pfn_range_valid_gigantic(struct zone *z,
> > > > > -
On Tue 15-10-19 13:50:02, David Hildenbrand wrote:
> On 15.10.19 13:47, Michal Hocko wrote:
> > On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
> > [...]
> > > > -static bool pfn_range_valid_gigantic(struct zone *z,
> > > > - unsigned long start_pfn, unsigned long nr_pages)
On 15.10.19 13:50, David Hildenbrand wrote:
On 15.10.19 13:47, Michal Hocko wrote:
On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
[...]
-static bool pfn_range_valid_gigantic(struct zone *z,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long i,
On 15.10.19 13:47, Michal Hocko wrote:
On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
[...]
-static bool pfn_range_valid_gigantic(struct zone *z,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long i, end_pfn = start_pfn + nr_pages;
- struc
On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
[...]
> > -static bool pfn_range_valid_gigantic(struct zone *z,
> > - unsigned long start_pfn, unsigned long nr_pages)
> > -{
> > - unsigned long i, end_pfn = start_pfn + nr_pages;
> > - struct page *page;
> > -
> > - for (i
On Tue 15-10-19 14:51:42, Anshuman Khandual wrote:
> This adds tests which will validate architecture page table helpers and
> other accessors in their compliance with expected generic MM semantics.
> This will help various architectures in validating changes to existing
> page table helpers or addition of new ones.
On 15.10.19 11:21, Anshuman Khandual wrote:
alloc_gigantic_page() implements an allocation method where it scans over
various zones looking for a large contiguous memory block which could not
have been allocated through the buddy allocator. A subsequent patch which
tests arch page table helpers n
On 10/15/2019 04:15 PM, Michal Hocko wrote:
> On Tue 15-10-19 14:51:41, Anshuman Khandual wrote:
> [...]
>> +/**
>> + * alloc_gigantic_page_order() -- tries to allocate given order of pages
>> + * @order: allocation order (greater than MAX_ORDER)
>> + * @gfp_mask: GFP mask to use during c
On Tue 15-10-19 14:51:41, Anshuman Khandual wrote:
[...]
> +/**
> + * alloc_gigantic_page_order() -- tries to allocate given order of pages
> + * @order: allocation order (greater than MAX_ORDER)
> + * @gfp_mask: GFP mask to use during compaction
> + * @nid: allocation node
> + * @node
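A usage sketch for context (the helper name and parameter order are taken from the
kernel-doc quoted above; the caller below and its node/nodemask choice are
hypothetical):

static struct page * __init alloc_pud_sized_page(void)
{
	unsigned int order = get_order(PUD_SIZE);	/* order above MAX_ORDER */

	return alloc_gigantic_page_order(order, GFP_KERNEL, first_online_node,
					 &node_states[N_MEMORY]);
}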
Hi Daniel,
On 13.10.2019 21:16, Daniel Lezcano wrote:
> Hi Claudiu,
>
> sorry for the delay, I was OoO again.
No problem, thank you for your reply.
>
> On 03/10/2019 12:43, claudiu.bez...@microchip.com wrote:
>>
>>
>> On 02.10.2019 16:35, Claudiu Beznea wrote:
>>> Hi Daniel,
>>>
>>> Taking int
alloc_gigantic_page() implements an allocation method where it scans over
various zones looking for a large contiguous memory block which could not
have been allocated through the buddy allocator. A subsequent patch which
tests arch page table helpers needs such a method to allocate PUD_SIZE
sized
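A sketch of the scanning approach described above (not the exported HugeTLB
function itself; the function name is made up and the validity checks on each PFN
window are omitted):

static struct page *scan_zone_for_gigantic(struct zone *zone, unsigned int order,
					   gfp_t gfp_mask)
{
	unsigned long nr_pages = 1UL << order;
	unsigned long pfn = ALIGN(zone->zone_start_pfn, nr_pages);

	/* walk naturally aligned windows; buddy cannot serve order > MAX_ORDER */
	for (; pfn + nr_pages <= zone_end_pfn(zone); pfn += nr_pages) {
		if (!alloc_contig_range(pfn, pfn + nr_pages,
					MIGRATE_MOVABLE, gfp_mask))
			return pfn_to_page(pfn);
	}
	return NULL;
}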
This adds tests which will validate architecture page table helpers and
other accessors in their compliance with expected generic MM semantics.
This will help various architectures in validating changes to existing
page table helpers or addition of new ones.
Test page table and memory pages creati
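As an illustration of the kind of transformation check such a test performs (a
minimal sketch using the generic pte accessors, not the patch itself):

static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
{
	pte_t pte = pfn_pte(pfn, prot);

	/* setters and their inverses must round-trip */
	WARN_ON(!pte_same(pte, pte));
	WARN_ON(!pte_young(pte_mkyoung(pte)));
	WARN_ON(!pte_dirty(pte_mkdirty(pte)));
	WARN_ON(!pte_write(pte_mkwrite(pte)));
	WARN_ON(pte_young(pte_mkold(pte)));
	WARN_ON(pte_dirty(pte_mkclean(pte)));
	WARN_ON(pte_write(pte_wrprotect(pte)));
}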
This series adds test validation for architecture-exported page table
helpers. The patch in the series adds basic transformation tests at various
levels of the page table. Before that, it exports the gigantic page allocation
function from HugeTLB.
This test was originally suggested by Catalin during arm64