mm: don't do validate_mm() unnecessarily and without mmap locking
author	Linus Torvalds <torvalds@linux-foundation.org>
	Tue, 4 Jul 2023 02:29:48 +0000 (19:29 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
	Tue, 4 Jul 2023 14:22:59 +0000 (07:22 -0700)
This is an addition to commit ae80b4041984 ("mm: validate the mm before
dropping the mmap lock"), because it turns out there were two problems,
but lockdep just stopped complaining after finding the first one.

The do_vmi_align_munmap() function now drops the mmap lock after doing
the validate_mm() call, but it turns out that one of its callers,
do_vma_munmap(), then immediately calls validate_mm() again.

That's both a bit silly, and now (again) happens without the mmap lock
held.

So just remove that validate_mm() call from the caller, but make sure to
not lose any coverage by doing that mm sanity checking in the error path
of do_vmi_align_munmap() too.

Reported-and-tested-by: kernel test robot <oliver.sang@intel.com>
Link: https://lore.kernel.org/lkml/ZKN6CdkKyxBShPHi@xsang-OptiPlex-9020/
Fixes: 408579cd627a ("mm: Update do_vmi_align_munmap() return semantics")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/mmap.c

index 547b405..204ddcd 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2571,6 +2571,7 @@ end_split_failed:
        __mt_destroy(&mt_detach);
 start_split_failed:
 map_count_exceeded:
+       validate_mm(mm);
        return error;
 }
 
@@ -3019,12 +3020,9 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
                bool unlock)
 {
        struct mm_struct *mm = vma->vm_mm;
-       int ret;
 
        arch_unmap(mm, start, end);
-       ret = do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
-       validate_mm(mm);
-       return ret;
+       return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
 }
 
 /*