On 3/17/26 5:36 AM, Lance Yang wrote:
> 
> On Wed, Feb 25, 2026 at 08:26:50PM -0700, Nico Pache wrote:
>> From: Baolin Wang <[email protected]>
>>
>> If any (m)THP order is enabled, we should allow khugepaged to attempt
>> scanning and collapsing mTHPs. In order for khugepaged to operate when
>> only mTHP sizes are specified in sysfs, we must modify the predicate
>> function that determines whether it ought to run.
>>
>> This function is currently called hugepage_pmd_enabled(); this patch
>> renames it to hugepage_enabled() and updates the logic to check
>> whether any valid orders exist that would justify running khugepaged.
>>
>> We must also update collapse_allowable_orders() to check all orders if
>> the vma is anonymous and the collapse is khugepaged.
>>
>> After this patch khugepaged mTHP collapse is fully enabled.
>>
>> Signed-off-by: Baolin Wang <[email protected]>
>> Signed-off-by: Nico Pache <[email protected]>
>> ---
>> mm/khugepaged.c | 30 ++++++++++++++++++------------
>> 1 file changed, 18 insertions(+), 12 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 388d3f2537e2..e8bfcc1d0c9a 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -434,23 +434,23 @@ static inline int collapse_test_exit_or_disable(struct mm_struct *mm)
>>              mm_flags_test(MMF_DISABLE_THP_COMPLETELY, mm);
>> }
>>
>> -static bool hugepage_pmd_enabled(void)
>> +static bool hugepage_enabled(void)
>> {
>>      /*
>>       * We cover the anon, shmem and the file-backed case here; file-backed
>>       * hugepages, when configured in, are determined by the global control.
>> -     * Anon pmd-sized hugepages are determined by the pmd-size control.
>> +     * Anon hugepages are determined by its per-size mTHP control.
>>       * Shmem pmd-sized hugepages are also determined by its pmd-size control,
>>       * except when the global shmem_huge is set to SHMEM_HUGE_DENY.
>>       */
>>      if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>>          hugepage_global_enabled())
>>              return true;
>> -    if (test_bit(PMD_ORDER, &huge_anon_orders_always))
>> +    if (READ_ONCE(huge_anon_orders_always))
>>              return true;
>> -    if (test_bit(PMD_ORDER, &huge_anon_orders_madvise))
>> +    if (READ_ONCE(huge_anon_orders_madvise))
>>              return true;
>> -    if (test_bit(PMD_ORDER, &huge_anon_orders_inherit) &&
>> +    if (READ_ONCE(huge_anon_orders_inherit) &&
>>          hugepage_global_enabled())
>>              return true;
>>      if (IS_ENABLED(CONFIG_SHMEM) && shmem_hpage_pmd_enabled())
>> @@ -521,8 +521,14 @@ static unsigned int collapse_max_ptes_none(unsigned int order)
>> static unsigned long collapse_allowable_orders(struct vm_area_struct *vma,
>>                      vm_flags_t vm_flags, bool is_khugepaged)
>> {
>> +    unsigned long orders;
>>      enum tva_type tva_flags = is_khugepaged ? TVA_KHUGEPAGED : TVA_FORCED_COLLAPSE;
>> -    unsigned long orders = BIT(HPAGE_PMD_ORDER);
>> +
>> +    /* If khugepaged is scanning an anonymous vma, allow mTHP collapse */
>> +    if (is_khugepaged && vma_is_anonymous(vma))
>> +            orders = THP_ORDERS_ALL_ANON;
>> +    else
>> +            orders = BIT(HPAGE_PMD_ORDER);
>>
>>      return thp_vma_allowable_orders(vma, vm_flags, tva_flags, orders);
>> }
> 
> IIUC, an anonymous VMA can pass collapse_allowable_orders() even if it
> is smaller than 2MB ...
> 
> But collapse_scan_mm_slot() still scans only full PMD-sized windows:
> 
>               hstart = round_up(vma->vm_start, HPAGE_PMD_SIZE);
>               hend = round_down(vma->vm_end, HPAGE_PMD_SIZE);
>               if (khugepaged_scan.address > hend) {
>                       cc->progress++;
>                       continue;
>               }
> 
> and hugepage_vma_revalidate() still requires PMD suitability:
> 
>       /* Always check the PMD order to ensure its not shared by another VMA */
>       if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
>               return SCAN_ADDRESS_RANGE;
> 
> 
>> @@ -531,7 +537,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
>>                        vm_flags_t vm_flags)
>> {
>>      if (!mm_flags_test(MMF_VM_HUGEPAGE, vma->vm_mm) &&
>> -        hugepage_pmd_enabled()) {
>> +        hugepage_enabled()) {
>>              if (collapse_allowable_orders(vma, vm_flags, /*is_khugepaged=*/true))
>>                      __khugepaged_enter(vma->vm_mm);
> 
> I wonder if we should also require at least one PMD-sized scan window
> here? Not a big deal, just might be good to tighten the gate a bit :)

IIUC, you are worried that we might operate on VMAs smaller than a PMD?
thp_vma_allowable_orders() should guard against that via
thp_vma_suitable_order(). The revalidation also checks this in
hugepage_vma_revalidate(), which is why we must keep the suitable_order
check in revalidate() testing PMD_ORDER rather than the attempted
collapse order.

lmk if that clears things up!

Thanks
-- Nico

> 
> Apart from that, LGTM!
> Reviewed-by: Lance Yang <[email protected]>
> 