mm: lock newly mapped VMA with corrected ordering
author	Hugh Dickins <hughd@google.com>
Sat, 8 Jul 2023 23:04:00 +0000 (16:04 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
Sat, 8 Jul 2023 23:44:11 +0000 (16:44 -0700)
Lockdep is certainly right to complain about

  (&vma->vm_lock->lock){++++}-{3:3}, at: vma_start_write+0x2d/0x3f
                 but task is already holding lock:
  (&mapping->i_mmap_rwsem){+.+.}-{3:3}, at: mmap_region+0x4dc/0x6db

Invert those to the usual ordering.

Fixes: 33313a747e81 ("mm: lock newly mapped VMA which can be modified after it becomes visible")
Cc: stable@vger.kernel.org
Signed-off-by: Hugh Dickins <hughd@google.com>
Tested-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/mmap.c

index 84c71431a5273830e40704dbc1a91ac79e21dda3..3eda23c9ebe7a60427443fec3c8da8f38c163832 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2809,11 +2809,11 @@ cannot_expand:
        if (vma_iter_prealloc(&vmi))
                goto close_and_free_vma;
 
+       /* Lock the VMA since it is modified after insertion into VMA tree */
+       vma_start_write(vma);
        if (vma->vm_file)
                i_mmap_lock_write(vma->vm_file->f_mapping);
 
-       /* Lock the VMA since it is modified after insertion into VMA tree */
-       vma_start_write(vma);
        vma_iter_store(&vmi, vma);
        mm->map_count++;
        if (vma->vm_file) {
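
For clarity, a minimal sketch of the two acquisition orders the message refers to, assuming the usual mm convention that the per-VMA write lock is taken before i_mmap_rwsem (simplified from the hunk above; error handling and the surrounding vma_iter bookkeeping are omitted):

	/*
	 * Before this patch (the inversion lockdep flagged):
	 * i_mmap_rwsem was taken first, then the per-VMA lock.
	 */
	if (vma->vm_file)
		i_mmap_lock_write(vma->vm_file->f_mapping);
	vma_start_write(vma);	/* nests inside i_mmap_rwsem */

	/*
	 * After this patch (the usual ordering):
	 * write-lock the VMA first, then take i_mmap_rwsem.
	 */
	vma_start_write(vma);
	if (vma->vm_file)
		i_mmap_lock_write(vma->vm_file->f_mapping);
	vma_iter_store(&vmi, vma);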