On Fri, Apr 17, 2026 at 10:44:18PM -0400, Zi Yan wrote:
>collapse_file() requires FSes to support large folios of at least
>PMD_ORDER, so replace the READ_ONLY_THP_FOR_FS check with that.
>MADV_COLLAPSE ignores the shmem huge config, so skip the check for shmem.
>
>While at it, replace VM_BUG_ON with VM_WARN_ON_ONCE.
>
>Add a helper function mapping_pmd_thp_support() for FSes supporting large
>folios of at least PMD_ORDER.
>
>Signed-off-by: Zi Yan <[email protected]>
>---
> include/linux/pagemap.h | 10 ++++++++++
> mm/khugepaged.c         |  5 +++--
> 2 files changed, 13 insertions(+), 2 deletions(-)
>
>diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>index ec442af3f886..c3cb1ec982cd 100644
>--- a/include/linux/pagemap.h
>+++ b/include/linux/pagemap.h
>@@ -524,6 +524,16 @@ static inline bool mapping_large_folio_support(const struct address_space *mapping)
>       return mapping_max_folio_order(mapping) > 0;
> }
> 
>+static inline bool mapping_pmd_thp_support(const struct address_space *mapping)
>+{
>+      /* AS_FOLIO_ORDER is only reasonable for pagecache folios */
>+      VM_WARN_ONCE((unsigned long)mapping & FOLIO_MAPPING_ANON,
>+                      "Anonymous mapping always supports PMD THP");

Nit: afraid not, at least not on architectures without PMD leaf
entries ...

Maybe better to say this helper is only meaningful for pagecache-backed
mappings. Anonymous mappings should not reach here.
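
Just a sketch of what I mean, wording entirely up to you (helper body
unchanged from your patch):

---8<---
+static inline bool mapping_pmd_thp_support(const struct address_space *mapping)
+{
+	/*
+	 * This helper is only meaningful for pagecache-backed mappings;
+	 * anonymous mappings should not reach here.
+	 */
+	VM_WARN_ONCE((unsigned long)mapping & FOLIO_MAPPING_ANON,
+			"mapping_pmd_thp_support() called on an anonymous mapping");
+
+	return mapping_max_folio_order(mapping) >= PMD_ORDER;
+}
--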

>+
>+      return mapping_max_folio_order(mapping) >= PMD_ORDER;
>+}
>+
>+
> /* Return the maximum folio size for this pagecache mapping, in bytes. */
> static inline size_t mapping_max_folio_size(const struct address_space *mapping)
> {
>diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>index b8452dbdb043..3eb5d982d3d3 100644
>--- a/mm/khugepaged.c
>+++ b/mm/khugepaged.c
>@@ -1892,8 +1892,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>       int nr_none = 0;
>       bool is_shmem = shmem_file(file);
> 
>-      VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
>-      VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
>+      /* MADV_COLLAPSE ignores shmem huge config, so do not check shmem */
>+      VM_WARN_ON_ONCE(!is_shmem && !mapping_pmd_thp_support(mapping));

With [1], can we drop !is_shmem here as well? shmem would then always
call mapping_set_large_folios(inode->i_mapping):

---8<---
diff --git a/mm/shmem.c b/mm/shmem.c
index 4ecefe02881d..dafbea53b22d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3087,10 +3087,7 @@ static struct inode *__shmem_get_inode(struct mnt_idmap 
*idmap,
        cache_no_acl(inode);
        if (sbinfo->noswap)
                mapping_set_unevictable(inode->i_mapping);
-
-       /* Don't consider 'deny' for emergencies and 'force' for testing */
-       if (sbinfo->huge)
-               mapping_set_large_folios(inode->i_mapping);
+       mapping_set_large_folios(inode->i_mapping);
 
        switch (mode & S_IFMT) {
        default:
--

But we can do that in a follow-up, once the revert lands :)

[1] https://lore.kernel.org/linux-mm/b2c7deee259a94b0d00a7c320d8d24d2c421f761.1776908112.git.baolin.w...@linux.alibaba.com/
