Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/vma: remove mmap() retry merge

We have now introduced a mechanism that obviates the need for a
reattempted merge via the mmap_prepare() file hook, so eliminate this
functionality altogether.

The retry merge logic has long been a source of complexity, requiring
careful manoeuvring of code to keep it functioning correctly.

It has also recently been involved in an issue surrounding maple tree
state, which again points to its problematic nature.

We make it much easier to reason about mmap() logic by eliminating this
and simply writing a VMA once. This also opens the doors to future
optimisation and improvement in the mmap() logic.

Any device or file system which encounters unwanted VMA fragmentation as a
result of this change (that is, one which has not implemented an
.mmap_prepare hook) can easily resolve the issue by implementing one.

Link: https://lkml.kernel.org/r/d5d8fc74f02b89d6bec5ae8bc0e36d7853b65cda.1746792520.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

authored by Lorenzo Stoakes, committed by Andrew Morton
3c06ee7c 439b3fb0

mm/vma.c (-14 lines)
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -24,7 +24,6 @@
 	void *vm_private_data;
 
 	unsigned long charged;
-	bool retry_merge;
 
 	struct vm_area_struct *prev;
 	struct vm_area_struct *next;
@@ -2416,8 +2417,6 @@
 		!(map->flags & VM_MAYWRITE) &&
 		(vma->vm_flags & VM_MAYWRITE));
 
-	/* If the flags change (and are mergeable), let's retry later. */
-	map->retry_merge = vma->vm_flags != map->flags && !(vma->vm_flags & VM_SPECIAL);
 	map->flags = vma->vm_flags;
 
 	return 0;
@@ -2618,17 +2621,6 @@
 
 	if (have_mmap_prepare)
 		set_vma_user_defined_fields(vma, &map);
-
-	/* If flags changed, we might be able to merge, so try again. */
-	if (map.retry_merge) {
-		struct vm_area_struct *merged;
-		VMG_MMAP_STATE(vmg, &map, vma);
-
-		vma_iter_config(map.vmi, map.addr, map.end);
-		merged = vma_merge_existing_range(&vmg);
-		if (merged)
-			vma = merged;
-	}
 
 	__mmap_complete(&map, vma);
 