From: Liam R. Howlett
Date: Tue, 6 Sep 2022 19:48:47 +0000 (+0000)
Subject: kernel/fork: use maple tree for dup_mmap() during forking
X-Git-Tag: v6.6.17~6355^2~381
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=c9dbe82cb99db5b6029c6bc43fcf7881d3f50268;p=platform%2Fkernel%2Flinux-rpi.git

kernel/fork: use maple tree for dup_mmap() during forking

The maple tree was already tracking VMAs in this function by an earlier
commit, but the rbtree iterator was still being used to iterate the
list.  Change the iterator to a maple tree native iterator and switch
to the maple tree advanced API to avoid multiple walks of the tree
during insert operations.  Unexport the now-unused vma_mas_store()
function.

For performance reasons, the maple tree nodes are bulk allocated.  The
node calculations are done internally to the tree: they use the VMA
count and assume the worst-case node requirements.  The VM_DONTCOPY
flag does not allow for the most efficient copy method of the tree
(VMAs with that flag set are skipped, so the new tree is not an exact
clone of the old one), and so a bulk loading algorithm is used instead.

Link: https://lkml.kernel.org/r/20220906194824.2110408-15-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
Tested-by: Yu Zhao
Cc: Catalin Marinas
Cc: David Hildenbrand
Cc: David Howells
Cc: Davidlohr Bueso
Cc: SeongJae Park
Cc: Sven Schnelle
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3701da1..646ea4d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2599,8 +2599,6 @@ extern bool arch_has_descending_max_zone_pfns(void);
 /* nommu.c */
 extern atomic_long_t mmap_pages_allocated;
 extern int nommu_shrink_inode_mappings(struct inode *, size_t, size_t);
-/* mmap.c */
-void vma_mas_store(struct vm_area_struct *vma, struct ma_state *mas);
 
 /* interval_tree.c */
 void vma_interval_tree_insert(struct vm_area_struct *node,
diff --git a/kernel/fork.c b/kernel/fork.c
index 2733642..16970c3 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -583,8 +583,9 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	struct vm_area_struct *mpnt, *tmp, *prev, **pprev;
 	struct rb_node **rb_link, *rb_parent;
 	int retval;
-	unsigned long charge;
+	unsigned long charge = 0;
 	LIST_HEAD(uf);
+	MA_STATE(old_mas, &oldmm->mm_mt, 0, 0);
 	MA_STATE(mas, &mm->mm_mt, 0, 0);
 
 	uprobe_start_dup_mmap();
@@ -620,7 +621,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		goto out;
 
 	prev = NULL;
-	for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
+
+	retval = mas_expected_entries(&mas, oldmm->map_count);
+	if (retval)
+		goto out;
+
+	mas_for_each(&old_mas, mpnt, ULONG_MAX) {
 		struct file *file;
 
 		if (mpnt->vm_flags & VM_DONTCOPY) {
@@ -703,6 +709,8 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		mas.index = tmp->vm_start;
 		mas.last = tmp->vm_end - 1;
 		mas_store(&mas, tmp);
+		if (mas_is_err(&mas))
+			goto fail_nomem_mas_store;
 
 		mm->map_count++;
 		if (!(tmp->vm_flags & VM_WIPEONFORK))
@@ -726,6 +734,9 @@ out:
 fail_uprobe_end:
 	uprobe_end_dup_mmap();
 	return retval;
+
+fail_nomem_mas_store:
+	unlink_anon_vmas(tmp);
 fail_nomem_anon_vma_fork:
 	mpol_put(vma_policy(tmp));
 fail_nomem_policy:
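
For reference, the bulk-store pattern this patch applies to dup_mmap()
looks roughly like the sketch below.  It is a minimal illustration, not
code from the patch: copy_tree(), dst, src and nr are hypothetical names
made up for the example, while mas_expected_entries(), mas_for_each(),
mas_store(), mas_is_err() and mas_destroy() are the real advanced-API
interfaces from <linux/maple_tree.h>.

/*
 * Sketch of the maple tree advanced (bulk) API pattern: preallocate
 * worst-case nodes once, then do a single walk of the source tree.
 * Callers must hold the appropriate tree locks, as dup_mmap() does via
 * the mmap locks.
 */
#include <linux/maple_tree.h>
#include <linux/errno.h>

static int copy_tree(struct maple_tree *dst, struct maple_tree *src,
		     unsigned long nr)
{
	MA_STATE(src_mas, src, 0, 0);
	MA_STATE(dst_mas, dst, 0, 0);
	void *entry;
	int ret;

	/* Bulk allocate nodes for nr entries, assuming the worst case. */
	ret = mas_expected_entries(&dst_mas, nr);
	if (ret)
		return ret;

	/* One walk of the source tree; no per-insert walk of the destination. */
	mas_for_each(&src_mas, entry, ULONG_MAX) {
		dst_mas.index = src_mas.index;
		dst_mas.last = src_mas.last;
		mas_store(&dst_mas, entry);
		if (mas_is_err(&dst_mas)) {
			ret = -ENOMEM;	/* bulk store ran out of nodes */
			break;
		}
	}

	/*
	 * Required after mas_expected_entries(): finish the bulk
	 * operation and free any unused preallocated nodes.
	 */
	mas_destroy(&dst_mas);
	return ret;
}

The point of the advanced API here is that mas_store() in bulk mode
never has to allocate or rebalance on the fly: node requirements are
computed once from the entry count, which is why the patch passes
oldmm->map_count to mas_expected_entries() before the copy loop.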