mm/mmap: change vma iteration order in do_vmi_align_munmap()
author     Liam R. Howlett <Liam.Howlett@oracle.com>
           Mon, 24 Jul 2023 18:31:57 +0000 (14:31 -0400)
committer  Andrew Morton <akpm@linux-foundation.org>
           Fri, 18 Aug 2023 17:12:50 +0000 (10:12 -0700)
commit     6935e052557caaa8e1ee0a7d85faeb55853d2e0e
tree       69cd4f023ce1d6887b7cb9eca4976a338edd3d40
parent     fec29364348fec535c55708b1f4025b321aba572
mm/mmap: change vma iteration order in do_vmi_align_munmap()

By delaying the loading of the prev/next VMAs until after the NULL
entry has been written over the removed range, the probability of
prev/next already being in the CPU cache is significantly increased,
especially for larger munmap operations.  It also means that prev/next
will be loaded closer to when they are used.

This requires changing the loop type when gathering the VMAs that will
be freed.
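
A minimal sketch of the reordered flow, with locking, splitting,
accounting, and the detach bookkeeping elided; vma_find(), vma_prev(),
and vma_next() are real helpers, while vma_iter_clear_gfp() is named
from memory of this series and should be treated as illustrative:

    next = vma;
    do {
            /*
             * Detach next and queue it for freeing.  The real code
             * also splits the last VMA at end and counts pages here.
             */
            count++;
    } while ((next = vma_find(vmi, end)) != NULL);

    /* Write the NULL entry over the whole range first... */
    if (vma_iter_clear_gfp(vmi, start, end, GFP_KERNEL))
            return -ENOMEM;     /* the real code unwinds via a goto */

    /* ...then load prev/next, which are now likely cache-hot. */
    prev = vma_prev(vmi);
    next = vma_next(vmi);

The do/while shape matters because the first VMA has already been
looked up by the caller, so the loop body must run once before
vma_find() advances the iterator.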

Since prev will be set later in the function, it is better to reverse
the splitting direction of the start VMA (modify the new_below argument
to __split_vma).
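
As a sketch of that reversal, assuming new_below selects which piece
of the split receives the newly allocated VMA (error labels elided):

    if (start > vma->vm_start) {
            /*
             * new_below == 1: the new VMA covers [vm_start, start),
             * so vma itself keeps describing the region being
             * unmapped and nothing needs to be reloaded; prev is
             * picked up later, after the NULL write.
             */
            error = __split_vma(vmi, vma, start, 1);
            if (error)
                    goto start_split_failed;
    }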

Using vma_iter_prev_range() to walk back to the correct location in
the tree will, for the most part, mean walking within the CPU cache.
Usually, this is two steps versus a node reset and a full tree re-walk.
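
For contrast, a sketch of the two ways of landing back on prev after
the store; vma_iter_prev_range() and vma_iter_set() are named as in
mm/internal.h of this era and are illustrative only:

    /* Stepping back: usually one or two moves within hot nodes. */
    prev = vma_iter_prev_range(vmi);

    /* Resetting instead drops state and re-walks from the root: */
    vma_iter_set(vmi, start);
    prev = vma_prev(vmi);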

Link: https://lkml.kernel.org/r/20230724183157.3939892-16-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/mmap.c