From: Mike Kravetz
Date: Tue, 13 Oct 2020 23:56:42 +0000 (-0700)
Subject: hugetlb: add lockdep check for i_mmap_rwsem held in huge_pmd_share
X-Git-Tag: v5.15~2704^2~30
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=0bf7b64e6e51eb69cf6fce7c9f7ff44840393e64;p=platform%2Fkernel%2Flinux-starfive.git

hugetlb: add lockdep check for i_mmap_rwsem held in huge_pmd_share

As a debugging aid, huge_pmd_share should make sure i_mmap_rwsem is held
if necessary. To clarify the 'if necessary', expand the comment block at
the beginning of huge_pmd_share.

No functional change. The added i_mmap_assert_locked() call is only
enabled if CONFIG_LOCKDEP.

Ideally, this should have been included with commit 34ae204f1851
("hugetlbfs: remove call to huge_pte_alloc without i_mmap_rwsem").

Signed-off-by: Mike Kravetz
Signed-off-by: Andrew Morton
Cc: Matthew Wilcox
Cc: Michal Hocko
Cc: "Kirill A . Shutemov"
Cc: Davidlohr Bueso
Link: https://lkml.kernel.org/r/20200911201248.88537-1-mike.kravetz@oracle.com
Signed-off-by: Linus Torvalds
---

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cc70e54..2fb9a4c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5337,10 +5337,16 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
  * !shared pmd case because we can allocate the pmd later as well, it makes the
  * code much cleaner.
  *
- * This routine must be called with i_mmap_rwsem held in at least read mode.
- * For hugetlbfs, this prevents removal of any page table entries associated
- * with the address space. This is important as we are setting up sharing
- * based on existing page table entries (mappings).
+ * This routine must be called with i_mmap_rwsem held in at least read mode if
+ * sharing is possible. For hugetlbfs, this prevents removal of any page
+ * table entries associated with the address space. This is important as we
+ * are setting up sharing based on existing page table entries (mappings).
+ *
+ * NOTE: This routine is only called from huge_pte_alloc. Some callers of
+ * huge_pte_alloc know that sharing is not possible and do not take
+ * i_mmap_rwsem as a performance optimization. This is handled by the
+ * if !vma_shareable check at the beginning of the routine. i_mmap_rwsem is
+ * only required for subsequent processing.
  */
 pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 {
@@ -5357,6 +5363,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 	if (!vma_shareable(vma, addr))
 		return (pte_t *)pmd_alloc(mm, pud, addr);
 
+	i_mmap_assert_locked(mapping);
 	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
 		if (svma == vma)
 			continue;
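
For context on the "only enabled if CONFIG_LOCKDEP" note above:
i_mmap_assert_locked() is a thin wrapper around lockdep_assert_held().
A rough sketch of the helper as it appears in include/linux/fs.h around
this kernel version (shown here for illustration only, not part of this
patch):

static inline void i_mmap_assert_locked(struct address_space *mapping)
{
	/* lockdep_assert_held() expands to effectively nothing when
	 * CONFIG_LOCKDEP is not set, so this adds no runtime cost on
	 * production builds.
	 */
	lockdep_assert_held(&mapping->i_mmap_rwsem);
}

With CONFIG_LOCKDEP enabled, reaching the sharing path of
huge_pmd_share() without i_mmap_rwsem held now produces a lockdep
warning instead of silently racing with page table teardown.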