mm/mmap: Fix extra maple tree write
Author:     Liam R. Howlett <Liam.Howlett@oracle.com>
AuthorDate: Thu, 6 Jul 2023 18:51:35 +0000 (14:51 -0400)
Commit:     Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CommitDate: Wed, 19 Jul 2023 14:22:16 +0000 (16:22 +0200)
based on commit 0503ea8f5ba73eb3ab13a81c1eefbaf51405385a upstream.

This was inadvertently fixed during the removal of __vma_adjust().

When __vma_adjust() is adjusting next with a negative value (pushing
vma->vm_end lower), there would be two writes to the maple tree.  The
first write is unnecessary and uses all allocated nodes in the maple
state.  The second write is necessary, but it will need to allocate nodes
because the first write has already consumed the preallocated ones.  This
can be a problem because allocating may not be safe at that point, for
example in a low-memory situation.  Fix the issue by skipping the first
write and writing only the adjusted "next" VMA.
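
For illustration, here is a standalone sketch (userspace C, not kernel
code; the addresses and the tree_zero()/tree_store() helpers are
hypothetical stand-ins for the maple tree operations) of the
"shrink vma, grow next downward" case.  It shows why the first tree
write is redundant once the grown "next" VMA is stored over the same
range:

	/*
	 * Standalone illustration of the redundant maple tree write.
	 * tree_zero()/tree_store() only log and count simulated writes.
	 */
	#include <stdio.h>

	static int tree_writes;

	static void tree_zero(unsigned long start, unsigned long last)
	{
		printf("  zero  [%#lx, %#lx)\n", start, last);
		tree_writes++;	/* would consume the preallocated nodes */
	}

	static void tree_store(unsigned long start, unsigned long last)
	{
		printf("  store [%#lx, %#lx)\n", start, last);
		tree_writes++;	/* may now need a fresh allocation */
	}

	/*
	 * vma:  [vma_start, vma_end)  shrinks so that it ends at 'end'
	 * next: [vma_end, next_end)   grows downward to start at 'end'
	 * adjust_next = end - vma_end (negative)
	 */
	static void adjust(unsigned long vma_end, unsigned long next_end,
			   unsigned long end, long adjust_next, int fixed)
	{
		tree_writes = 0;

		/*
		 * First write: clear vma's abandoned tail [end, vma_end).
		 * With the fix this is skipped when next's new start lands
		 * exactly on 'end', because the store of next below rewrites
		 * that range anyway.
		 */
		if (!fixed || vma_end + adjust_next != end)
			tree_zero(end, vma_end);

		/* Second write: store the grown "next" over its new range. */
		tree_store(end, next_end);

		printf("  -> %d maple tree write(s)\n", tree_writes);
	}

	int main(void)
	{
		/*
		 * Hypothetical layout: vma [0x1000, 0x3000),
		 * next [0x3000, 0x4000), vma->vm_end pushed down to 0x2000.
		 */
		unsigned long vma_end = 0x3000, next_end = 0x4000, end = 0x2000;
		long adjust_next = (long)end - (long)vma_end;	/* -0x1000 */

		printf("before the fix:\n");
		adjust(vma_end, next_end, end, adjust_next, 0);

		printf("after the fix:\n");
		adjust(vma_end, next_end, end, adjust_next, 1);
		return 0;
	}

In the same spirit, the condition added in the hunk below skips the
zeroing write when next's new start (vma->vm_end + adjust_next) lands
exactly on 'end', leaving only the single, necessary store.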

Reported-by: John Hsu <John.Hsu@mediatek.com>
Link: https://lore.kernel.org/lkml/9cb8c599b1d7f9c1c300d1a334d5eb70ec4d7357.camel@mediatek.com/
Cc: stable@vger.kernel.org
Cc: linux-mm@kvack.org
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
diff --git a/mm/mmap.c b/mm/mmap.c
index 1597a96..41a240b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -767,7 +767,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
        }
        if (end != vma->vm_end) {
                if (vma->vm_end > end) {
-                       if (!insert || (insert->vm_start != end)) {
+                       if ((vma->vm_end + adjust_next != end) &&
+                           (!insert || (insert->vm_start != end))) {
                                vma_mas_szero(&mas, end, vma->vm_end);
                                mas_reset(&mas);
                                VM_WARN_ON(insert &&