Linux kernel mirror (for testing) — git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mmap locking API: convert mmap_sem comments

Convert comments that reference mmap_sem to reference mmap_lock instead.

[akpm@linux-foundation.org: fix up linux-next leftovers]
[akpm@linux-foundation.org: s/lockaphore/lock/, per Vlastimil]
[akpm@linux-foundation.org: more linux-next fixups, per Michel]

Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Liam Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ying Han <yinghan@google.com>
Link: http://lkml.kernel.org/r/20200520052908.204642-13-walken@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Michel Lespinasse, committed by Linus Torvalds
c1e8d7c6 3e4e28c5
+351 -352
Documentation/admin-guide/mm/numa_memory_policy.rst (+5 -5)
···
 2) for querying the policy, we do not need to take an extra reference on the
    target task's task policy nor vma policies because we always acquire the
-   task's mm's mmap_sem for read during the query.  The set_mempolicy() and
-   mbind() APIs [see below] always acquire the mmap_sem for write when
+   task's mm's mmap_lock for read during the query.  The set_mempolicy() and
+   mbind() APIs [see below] always acquire the mmap_lock for write when
    installing or replacing task or vma policies.  Thus, there is no possibility
    of a task or thread freeing a policy while another task or thread is
    querying it.

 3) Page allocation usage of task or vma policy occurs in the fault path where
-   we hold them mmap_sem for read.  Again, because replacing the task or vma
-   policy requires that the mmap_sem be held for write, the policy can't be
+   we hold them mmap_lock for read.  Again, because replacing the task or vma
+   policy requires that the mmap_lock be held for write, the policy can't be
    freed out from under us while we're using it for page allocation.

 4) Shared policies require special consideration.  One task can replace a
-   shared memory policy while another task, with a distinct mmap_sem, is
+   shared memory policy while another task, with a distinct mmap_lock, is
    querying or allocating a page based on the policy.  To resolve this
    potential race, the shared policy infrastructure adds an extra reference
    to the shared policy during lookup while holding a spin lock on the shared
Documentation/admin-guide/mm/userfaultfd.rst (+1 -1)
···
 The real advantage of userfaults if compared to regular virtual memory
 management of mremap/mprotect is that the userfaults in all their
 operations never involve heavyweight structures like vmas (in fact the
-``userfaultfd`` runtime load never takes the mmap_sem for writing).
+``userfaultfd`` runtime load never takes the mmap_lock for writing).

 Vmas are not suitable for page- (or hugepage) granular fault tracking
 when dealing with virtual address spaces that could span
Documentation/filesystems/locking.rst (+1 -1)
···
 locking rules:

 =============	========	===========================
-ops		mmap_sem	PageLocked(page)
+ops		mmap_lock	PageLocked(page)
 =============	========	===========================
 open:		yes
 close:		yes
Documentation/vm/transhuge.rst (+2 -2)
···
 To make pagetable walks huge pmd aware, all you need to do is to call
 pmd_trans_huge() on the pmd returned by pmd_offset.  You must hold the
-mmap_sem in read (or write) mode to be sure a huge pmd cannot be
+mmap_lock in read (or write) mode to be sure a huge pmd cannot be
 created from under you by khugepaged (khugepaged collapse_huge_page
-takes the mmap_sem in write mode in addition to the anon_vma lock).  If
+takes the mmap_lock in write mode in addition to the anon_vma lock).  If
 pmd_trans_huge returns false, you just fallback in the old code
 paths.  If instead pmd_trans_huge returns true, you have to take the
 page table lock (pmd_lock()) and re-run pmd_trans_huge.  Taking the
arch/arc/mm/fault.c (+1 -1)
···
 	}

 	/*
-	 * Fault retry nuances, mmap_sem already relinquished by core mm
+	 * Fault retry nuances, mmap_lock already relinquished by core mm
 	 */
 	if (unlikely((fault & VM_FAULT_RETRY) &&
 		     (flags & FAULT_FLAG_ALLOW_RETRY))) {
arch/arm/kernel/vdso.c (+1 -1)
···
 	return PTR_ERR_OR_ZERO(vma);
 }

-/* assumes mmap_sem is write-locked */
+/* assumes mmap_lock is write-locked */
 void arm_install_vdso(struct mm_struct *mm, unsigned long addr)
 {
 	struct vm_area_struct *vma;
arch/arm/mm/fault.c (+1 -1)
···
 	fault = __do_page_fault(mm, addr, fsr, flags, tsk);

 	/* If we need to retry but a fatal signal is pending, handle the
-	 * signal first. We do not need to release the mmap_sem because
+	 * signal first. We do not need to release the mmap_lock because
 	 * it would already be released in __lock_page_or_retry in
 	 * mm/filemap.c. */
 	if (fault_signal_pending(fault, regs)) {
arch/ia64/mm/fault.c (+1 -1)
···
 #ifdef CONFIG_VIRTUAL_MEM_MAP
 	/*
 	 * If fault is in region 5 and we are in the kernel, we may already
-	 * have the mmap_sem (pfn_valid macro is called during mmap). There
+	 * have the mmap_lock (pfn_valid macro is called during mmap). There
 	 * is no vma for region 5 addr's anyway, so skip getting the semaphore
 	 * and go directly to the exception handling code.
 	 */
arch/microblaze/mm/fault.c (+1 -1)
···
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in the
 	 * kernel and should generate an OOPS.  Unfortunately, in the case of an
-	 * erroneous fault occurring in a code path which already holds mmap_sem
+	 * erroneous fault occurring in a code path which already holds mmap_lock
 	 * we will deadlock attempting to validate the fault against the
 	 * address space.  Luckily the kernel only validly references user
 	 * space from well defined areas of code, which are listed in the
arch/nds32/mm/fault.c (+1 -1)
···
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
-	 * signal first. We do not need to release the mmap_sem because it
+	 * signal first. We do not need to release the mmap_lock because it
 	 * would already be released in __lock_page_or_retry in mm/filemap.c.
 	 */
 	if (fault_signal_pending(fault, regs)) {
arch/powerpc/include/asm/pkeys.h (+1 -1)
···
 /*
  * Returns a positive, 5-bit key on success, or -1 on failure.
- * Relies on the mmap_sem to protect against concurrency in mm_pkey_alloc() and
+ * Relies on the mmap_lock to protect against concurrency in mm_pkey_alloc() and
  * mm_pkey_free().
  */
 static inline int mm_pkey_alloc(struct mm_struct *mm)
arch/powerpc/kvm/book3s_hv_uvmem.c (+3 -3)
···
  * Locking order
  *
  * 1. kvm->srcu - Protects KVM memslots
- * 2. kvm->mm->mmap_sem - find_vma, migrate_vma_pages and helpers, ksm_madvise
+ * 2. kvm->mm->mmap_lock - find_vma, migrate_vma_pages and helpers, ksm_madvise
  * 3. kvm->arch.uvmem_lock - protects read/writes to uvmem slots thus acting
  *    as sync-points for page-in/out
  */
···
 	mig.dst = &dst_pfn;

 	/*
-	 * We come here with mmap_sem write lock held just for
-	 * ksm_madvise(), otherwise we only need read mmap_sem.
+	 * We come here with mmap_lock write lock held just for
+	 * ksm_madvise(), otherwise we only need read mmap_lock.
 	 * Hence downgrade to read lock once ksm_madvise() is done.
 	 */
 	ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
arch/powerpc/mm/book3s32/tlb.c (+1 -1)
···
 	/*
 	 * It is safe to go down the mm's list of vmas when called
-	 * from dup_mmap, holding mmap_sem. It would also be safe from
+	 * from dup_mmap, holding mmap_lock. It would also be safe from
 	 * unmap_region or exit_mmap, but not from vmtruncate on SMP -
 	 * but it seems dup_mmap is the only SMP case which gets here.
 	 */
arch/powerpc/mm/book3s64/hash_pgtable.c (+2 -2)
···
 	 * to hugepage, we first clear the pmd, then invalidate all
 	 * the PTE entries. The assumption here is that any low level
 	 * page fault will see a none pmd and take the slow path that
-	 * will wait on mmap_sem. But we could very well be in a
+	 * will wait on mmap_lock. But we could very well be in a
 	 * hash_page with local ptep pointer value. Such a hash page
 	 * can result in adding new HPTE entries for normal subpages.
 	 * That means we could be modifying the page content as we
···
 	 * Now invalidate the hpte entries in the range
 	 * covered by pmd. This make sure we take a
 	 * fault and will find the pmd as none, which will
-	 * result in a major fault which takes mmap_sem and
+	 * result in a major fault which takes mmap_lock and
 	 * hence wait for collapse to complete. Without this
 	 * the __collapse_huge_page_copy can result in copying
 	 * the old content.
arch/powerpc/mm/book3s64/subpage_prot.c (+1 -1)
···
 	if (!spt) {
 		/*
 		 * Allocate subpage prot table if not already done.
-		 * Do this with mmap_sem held
+		 * Do this with mmap_lock held
 		 */
 		spt = kzalloc(sizeof(struct subpage_prot_table), GFP_KERNEL);
 		if (!spt) {
arch/powerpc/mm/fault.c (+5 -5)
···
 	 * 2. T1   : set AMR to deny access to pkey=4, touches, page
 	 * 3. T1   : faults...
 	 * 4. T2: mprotect_key(foo, PAGE_SIZE, pkey=5);
-	 * 5. T1   : enters fault handler, takes mmap_sem, etc...
+	 * 5. T1   : enters fault handler, takes mmap_lock, etc...
 	 * 6. T1   : reaches here, sees vma_pkey(vma)=5, when we really
 	 *	     faulted on a pte with its pkey=4.
 	 */
···
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);

 	/*
-	 * We want to do this outside mmap_sem, because reading code around nip
+	 * We want to do this outside mmap_lock, because reading code around nip
 	 * can result in fault, which will cause a deadlock when called with
-	 * mmap_sem held
+	 * mmap_lock held
 	 */
 	if (is_user)
 		flags |= FAULT_FLAG_USER;
···
 	/* When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in the
 	 * kernel and should generate an OOPS.  Unfortunately, in the case of an
-	 * erroneous fault occurring in a code path which already holds mmap_sem
+	 * erroneous fault occurring in a code path which already holds mmap_lock
 	 * we will deadlock attempting to validate the fault against the
 	 * address space.  Luckily the kernel only validly references user
 	 * space from well defined areas of code, which are listed in the
···
 		return user_mode(regs) ? 0 : SIGBUS;

 	/*
-	 * Handle the retry right now, the mmap_sem has been released in that
+	 * Handle the retry right now, the mmap_lock has been released in that
 	 * case.
 	 */
 	if (unlikely(fault & VM_FAULT_RETRY)) {
arch/powerpc/mm/pgtable.c (+1 -1)
···
 	pmd = pmd_offset(pud, addr);
 	/*
 	 * khugepaged to collapse normal pages to hugepage, first set
-	 * pmd to none to force page fault/gup to take mmap_sem. After
+	 * pmd to none to force page fault/gup to take mmap_lock. After
 	 * pmd is set to none, we do a pte_clear which does this assertion
 	 * so if we find pmd none, return.
 	 */
arch/powerpc/platforms/cell/spufs/file.c (+3 -3)
···
 		return VM_FAULT_SIGBUS;

 	/*
-	 * Because we release the mmap_sem, the context may be destroyed while
+	 * Because we release the mmap_lock, the context may be destroyed while
 	 * we're in spu_wait. Grab an extra reference so it isn't destroyed
 	 * in the meantime.
 	 */
···
 	/*
 	 * We have to wait for context to be loaded before we have
 	 * pages to hand out to the user, but we don't want to wait
-	 * with the mmap_sem held.
-	 * It is possible to drop the mmap_sem here, but then we need
+	 * with the mmap_lock held.
+	 * It is possible to drop the mmap_lock here, but then we need
 	 * to return VM_FAULT_NOPAGE because the mappings may have
 	 * hanged.
 	 */
arch/riscv/mm/fault.c (+1 -1)
···
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
-	 * signal first. We do not need to release the mmap_sem because it
+	 * signal first. We do not need to release the mmap_lock because it
 	 * would already be released in __lock_page_or_retry in mm/filemap.c.
 	 */
 	if (fault_signal_pending(fault, regs))
arch/s390/kvm/priv.c (+1 -1)
···
 }

 /*
- * Must be called with relevant read locks held (kvm->mm->mmap_sem, kvm->srcu)
+ * Must be called with relevant read locks held (kvm->mm->mmap_lock, kvm->srcu)
  */
 static inline int __do_essa(struct kvm_vcpu *vcpu, const int orc)
 {
arch/s390/mm/fault.c (+1 -1)
···
 	if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
 	    (flags & FAULT_FLAG_RETRY_NOWAIT)) {
 		/* FAULT_FLAG_RETRY_NOWAIT has been set,
-		 * mmap_sem has not been released */
+		 * mmap_lock has not been released */
 		current->thread.gmap_pfault = 1;
 		fault = VM_FAULT_PFAULT;
 		goto out_up;
arch/s390/mm/gmap.c (+16 -16)
···
 EXPORT_SYMBOL_GPL(gmap_get_enabled);

 /*
- * gmap_alloc_table is assumed to be called with mmap_sem held
+ * gmap_alloc_table is assumed to be called with mmap_lock held
  */
 static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
			     unsigned long init, unsigned long gaddr)
···
  * Returns user space address which corresponds to the guest address or
  * -EFAULT if no such mapping exists.
  * This function does not establish potentially missing page table entries.
- * The mmap_sem of the mm that belongs to the address space must be held
+ * The mmap_lock of the mm that belongs to the address space must be held
  * when this function gets called.
  *
  * Note: Can also be called for shadow gmaps.
···
  * Returns 0 on success, -ENOMEM for out of memory conditions, and -EFAULT
  * if the vm address is already mapped to a different guest segment.
- * The mmap_sem of the mm that belongs to the address space must be held
+ * The mmap_lock of the mm that belongs to the address space must be held
  * when this function gets called.
  */
 int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr)
···
 		goto out_up;
 	}
 	/*
-	 * In the case that fixup_user_fault unlocked the mmap_sem during
+	 * In the case that fixup_user_fault unlocked the mmap_lock during
 	 * faultin redo __gmap_translate to not race with a map/unmap_segment.
 	 */
 	if (unlocked)
···
 EXPORT_SYMBOL_GPL(gmap_fault);

 /*
- * this function is assumed to be called with mmap_sem held
+ * this function is assumed to be called with mmap_lock held
  */
 void __gmap_zap(struct gmap *gmap, unsigned long gaddr)
 {
···
 	if (fixup_user_fault(current, mm, vmaddr, fault_flags, &unlocked))
 		return -EFAULT;
 	if (unlocked)
-		/* lost mmap_sem, caller has to retry __gmap_translate */
+		/* lost mmap_lock, caller has to retry __gmap_translate */
 		return 0;
 	/* Connect the page tables */
 	return __gmap_link(gmap, gaddr, vmaddr);
···
  * -EAGAIN if a fixup is needed
  * -EINVAL if unsupported notifier bits have been specified
  *
- * Expected to be called with sg->mm->mmap_sem in read and
+ * Expected to be called with sg->mm->mmap_lock in read and
  * guest_table_lock held.
  */
 static int gmap_protect_pmd(struct gmap *gmap, unsigned long gaddr,
···
  * Returns 0 if successfully protected, -ENOMEM if out of memory and
  * -EAGAIN if a fixup is needed.
  *
- * Expected to be called with sg->mm->mmap_sem in read
+ * Expected to be called with sg->mm->mmap_lock in read
  */
 static int gmap_protect_pte(struct gmap *gmap, unsigned long gaddr,
			     pmd_t *pmdp, int prot, unsigned long bits)
···
  * Returns 0 if successfully protected, -ENOMEM if out of memory and
  * -EFAULT if gaddr is invalid (or mapping for shadows is missing).
  *
- * Called with sg->mm->mmap_sem in read.
+ * Called with sg->mm->mmap_lock in read.
  */
 static int gmap_protect_range(struct gmap *gmap, unsigned long gaddr,
			       unsigned long len, int prot, unsigned long bits)
···
  * if reading using the virtual address failed. -EINVAL if called on a gmap
  * shadow.
  *
- * Called with gmap->mm->mmap_sem in read.
+ * Called with gmap->mm->mmap_lock in read.
  */
 int gmap_read_table(struct gmap *gmap, unsigned long gaddr, unsigned long *val)
 {
···
  * shadow table structure is incomplete, -ENOMEM if out of memory and
  * -EFAULT if an address in the parent gmap could not be resolved.
  *
- * Called with sg->mm->mmap_sem in read.
+ * Called with sg->mm->mmap_lock in read.
  */
 int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
		     int fake)
···
  * shadow table structure is incomplete, -ENOMEM if out of memory and
  * -EFAULT if an address in the parent gmap could not be resolved.
  *
- * Called with sg->mm->mmap_sem in read.
+ * Called with sg->mm->mmap_lock in read.
  */
 int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
		     int fake)
···
  * shadow table structure is incomplete, -ENOMEM if out of memory and
  * -EFAULT if an address in the parent gmap could not be resolved.
  *
- * Called with sg->mm->mmap_sem in read.
+ * Called with sg->mm->mmap_lock in read.
  */
 int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
		     int fake)
···
  * Returns 0 if the shadow page table was found and -EAGAIN if the page
  * table was not found.
  *
- * Called with sg->mm->mmap_sem in read.
+ * Called with sg->mm->mmap_lock in read.
  */
 int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
			    unsigned long *pgt, int *dat_protection,
···
  * shadow table structure is incomplete, -ENOMEM if out of memory,
  * -EFAULT if an address in the parent gmap could not be resolved and
  *
- * Called with gmap->mm->mmap_sem in read
+ * Called with gmap->mm->mmap_lock in read
  */
 int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
		     int fake)
···
  * shadow table structure is incomplete, -ENOMEM if out of memory and
  * -EFAULT if an address in the parent gmap could not be resolved.
  *
- * Called with sg->mm->mmap_sem in read.
+ * Called with sg->mm->mmap_lock in read.
  */
 int gmap_shadow_page(struct gmap *sg, unsigned long saddr, pte_t pte)
 {
arch/s390/mm/pgalloc.c (+1 -1)
···
 	spin_lock_bh(&mm->page_table_lock);

 	/*
-	 * This routine gets called with mmap_sem lock held and there is
+	 * This routine gets called with mmap_lock lock held and there is
 	 * no reason to optimize for the case of otherwise. However, if
 	 * that would ever change, the below check will let us know.
 	 */
arch/sh/mm/cache-sh4.c (+1 -1)
···
  * accessed with (hence cache set) is in accord with the physical
  * address (i.e. tag).  It's no different here.
  *
- * Caller takes mm->mmap_sem.
+ * Caller takes mm->mmap_lock.
  */
 static void sh4_flush_cache_mm(void *arg)
 {
arch/sh/mm/fault.c (+1 -1)
···
 		return 1;
 	}

-	/* Release mmap_sem first if necessary */
+	/* Release mmap_lock first if necessary */
 	if (!(fault & VM_FAULT_RETRY))
 		mmap_read_unlock(current->mm);
arch/sparc/mm/fault_64.c (+1 -1)
···
 	}

 	/*
-	 * We now make sure that mmap_sem is held in all paths that call
+	 * We now make sure that mmap_lock is held in all paths that call
 	 * this. Additionally, to prevent kswapd from ripping ptes from
 	 * under us, raise interrupts around the time that we look at the
 	 * pte, kswapd will have to wait to get his smp ipi response from
arch/um/kernel/skas/mmu.c (+1 -1)
···
 	mm->context.stub_pages[0] = virt_to_page(__syscall_stub_start);
 	mm->context.stub_pages[1] = virt_to_page(mm->context.id.stack);

-	/* dup_mmap already holds mmap_sem */
+	/* dup_mmap already holds mmap_lock */
 	err = install_special_mapping(mm, STUB_START, STUB_END - STUB_START,
				      VM_READ | VM_MAYREAD | VM_EXEC |
				      VM_MAYEXEC | VM_DONTCOPY | VM_PFNMAP,
arch/um/kernel/tlb.c (+1 -1)
···
 	if (ret) {
 		printk(KERN_ERR "fix_range_common: failed, killing current "
 		       "process: %d\n", task_tgid_vnr(current));
-		/* We are under mmap_sem, release it such that current can terminate */
+		/* We are under mmap_lock, release it such that current can terminate */
 		mmap_write_unlock(current->mm);
 		force_sig(SIGKILL);
 		do_signal(&current->thread.regs);
arch/unicore32/mm/fault.c (+1 -1)
···
 	fault = __do_pf(mm, addr, fsr, flags, tsk);

 	/* If we need to retry but a fatal signal is pending, handle the
-	 * signal first. We do not need to release the mmap_sem because
+	 * signal first. We do not need to release the mmap_lock because
 	 * it would already be released in __lock_page_or_retry in
 	 * mm/filemap.c. */
 	if (fault_signal_pending(fault, regs))
arch/x86/events/core.c (+1 -1)
···
 	 * userspace with CR4.PCE clear while another task is still
 	 * doing on_each_cpu_mask() to propagate CR4.PCE.
	 *
-	 * For now, this can't happen because all callers hold mmap_sem
+	 * For now, this can't happen because all callers hold mmap_lock
	 * for write. If this changes, we'll need a different solution.
	 */
	mmap_assert_write_locked(mm);
arch/x86/include/asm/mmu.h (+1 -1)
···
 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
	/*
	 * One bit per protection key says whether userspace can
-	 * use it or not.  protected by mmap_sem.
+	 * use it or not.  protected by mmap_lock.
	 */
	u16 pkey_allocation_map;
	s16 execute_only_pkey;
arch/x86/include/asm/pgtable-3level.h (+4 -4)
···
  * pte_offset_map_lock() on 32-bit PAE kernels was reading the pmd_t with
  * a "*pmdp" dereference done by GCC.  Problem is, in certain places
  * where pte_offset_map_lock() is called, concurrent page faults are
- * allowed, if the mmap_sem is hold for reading.  An example is mincore
+ * allowed, if the mmap_lock is hold for reading.  An example is mincore
  * vs page faults vs MADV_DONTNEED.  On the page fault side
  * pmd_populate() rightfully does a set_64bit(), but if we're reading the
  * pmd_t with a "*pmdp" on the mincore side, a SMP race can happen
  * because GCC will not read the 64-bit value of the pmd atomically.
  *
  * To fix this all places running pte_offset_map_lock() while holding the
- * mmap_sem in read mode, shall read the pmdp pointer using this
+ * mmap_lock in read mode, shall read the pmdp pointer using this
  * function to know if the pmd is null or not, and in turn to know if
  * they can run pte_offset_map_lock() or pmd_trans_huge() or other pmd
  * operations.
  *
- * Without THP if the mmap_sem is held for reading, the pmd can only
+ * Without THP if the mmap_lock is held for reading, the pmd can only
  * transition from null to not null while pmd_read_atomic() runs. So
  * we can always return atomic pmd values with this function.
  *
- * With THP if the mmap_sem is held for reading, the pmd can become
+ * With THP if the mmap_lock is held for reading, the pmd can become
  * trans_huge or none or point to a pte (and in turn become "stable")
  * at any time under pmd_read_atomic(). We could read it truly
  * atomically here with an atomic64_read() for the THP enabled case (and
arch/x86/kernel/cpu/resctrl/pseudo_lock.c (+3 -3)
···
	 * pseudo-locked region will still be here on return.
	 *
	 * The mutex has to be released temporarily to avoid a potential
-	 * deadlock with the mm->mmap_sem semaphore which is obtained in
-	 * the device_create() and debugfs_create_dir() callpath below
-	 * as well as before the mmap() callback is called.
+	 * deadlock with the mm->mmap_lock which is obtained in the
+	 * device_create() and debugfs_create_dir() callpath below as well as
+	 * before the mmap() callback is called.
	 */
	mutex_unlock(&rdtgroup_mutex);
arch/x86/kernel/cpu/resctrl/rdtgroup.c (+3 -3)
···
	 * during the debugfs directory creation also &sb->s_type->i_mutex_key
	 * (the lockdep class of inode->i_rwsem). Other filesystem
	 * interactions (eg. SyS_getdents) have the lock ordering:
-	 * &sb->s_type->i_mutex_key --> &mm->mmap_sem
-	 * During mmap(), called with &mm->mmap_sem, the rdtgroup_mutex
+	 * &sb->s_type->i_mutex_key --> &mm->mmap_lock
+	 * During mmap(), called with &mm->mmap_lock, the rdtgroup_mutex
	 * is taken, thus creating dependency:
-	 * &mm->mmap_sem --> rdtgroup_mutex for the latter that can cause
+	 * &mm->mmap_lock --> rdtgroup_mutex for the latter that can cause
	 * issues considering the other two lock dependencies.
	 * By creating the debugfs directory here we avoid a dependency
	 * that may cause deadlock (even though file operations cannot
arch/x86/kernel/ldt.c (+1 -1)
···
  *
  * Lock order:
  *	contex.ldt_usr_sem
- *	  mmap_sem
+ *	  mmap_lock
  *	    context.lock
  */
arch/x86/mm/fault.c (+6 -6)
···
	 * 2. T1   : set PKRU to deny access to pkey=4, touches page
	 * 3. T1   : faults...
	 * 4. T2: mprotect_key(foo, PAGE_SIZE, pkey=5);
-	 * 5. T1   : enters fault handler, takes mmap_sem, etc...
+	 * 5. T1   : enters fault handler, takes mmap_lock, etc...
	 * 6. T1   : reaches here, sees vma_pkey(vma)=5, when we really
	 *	     faulted on a pte with its pkey=4.
	 */
···
	 * Kernel-mode access to the user address space should only occur
	 * on well-defined single instructions listed in the exception
	 * tables.  But, an erroneous kernel fault occurring outside one of
-	 * those areas which also holds mmap_sem might deadlock attempting
+	 * those areas which also holds mmap_lock might deadlock attempting
	 * to validate the fault against the address space.
	 *
	 * Only do the expensive exception table search when we might be at
	 * risk of a deadlock.  This happens if we
-	 * 1. Failed to acquire mmap_sem, and
+	 * 1. Failed to acquire mmap_lock, and
	 * 2. The access did not originate in userspace.
	 */
	if (unlikely(!mmap_read_trylock(mm))) {
···
	 * If for any reason at all we couldn't handle the fault,
	 * make sure we exit gracefully rather than endlessly redo
	 * the fault.  Since we never set FAULT_FLAG_RETRY_NOWAIT, if
-	 * we get VM_FAULT_RETRY back, the mmap_sem has been unlocked.
+	 * we get VM_FAULT_RETRY back, the mmap_lock has been unlocked.
	 *
-	 * Note that handle_userfault() may also release and reacquire mmap_sem
+	 * Note that handle_userfault() may also release and reacquire mmap_lock
	 * (and not return with VM_FAULT_RETRY), when returning to userland to
	 * repeat the page fault later with a VM_FAULT_NOPAGE retval
	 * (potentially after handling any pending signal during the return to
···
	}

	/*
-	 * If we need to retry the mmap_sem has already been released,
+	 * If we need to retry the mmap_lock has already been released,
	 * and if there is a fatal signal pending there is no guarantee
	 * that we made any progress. Handle this case first.
	 */
drivers/char/mspec.c (+1 -1)
···
 * This structure is shared by all vma's that are split off from the
 * original vma when split_vma()'s are done.
 *
- * The refcnt is incremented atomically because mm->mmap_sem does not
+ * The refcnt is incremented atomically because mm->mmap_lock does not
 * protect in fork case where multiple tasks share the vma_data.
 */
struct vma_data {
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h (+1 -1)
···
 * disabled. The memory must be pinned and mapped to the hardware when
 * this is called in hqd_load functions, so it should never fault in
 * the first place. This resolves a circular lock dependency involving
- * four locks, including the DQM lock and mmap_sem.
+ * four locks, including the DQM lock and mmap_lock.
 */
#define read_user_wptr(mmptr, wptr, dst)				\
	({								\
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c (+1 -1)
···
			     CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_EN, 1);
	WREG32(mmCP_HQD_PQ_DOORBELL_CONTROL, data);

-	/* read_user_ptr may take the mm->mmap_sem.
+	/* read_user_ptr may take the mm->mmap_lock.
	 * release srbm_mutex to avoid circular dependency between
	 * srbm_mutex->mm_sem->reservation_ww_class_mutex->srbm_mutex.
	 */
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.c (+1 -1)
···
			     CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_EN, 1);
	WREG32(mmCP_HQD_PQ_DOORBELL_CONTROL, data);

-	/* read_user_ptr may take the mm->mmap_sem.
+	/* read_user_ptr may take the mm->mmap_lock.
	 * release srbm_mutex to avoid circular dependency between
	 * srbm_mutex->mm_sem->reservation_ww_class_mutex->srbm_mutex.
	 */
drivers/gpu/drm/i915/gem/i915_gem_userptr.c (+3 -3)
···
	mmap_write_lock(mm->mm);
	mutex_lock(&mm->i915->mm_lock);
	if (mm->mn == NULL && !err) {
-		/* Protected by mmap_sem (write-lock) */
+		/* Protected by mmap_lock (write-lock) */
		err = __mmu_notifier_register(&mn->mn, mm->mm);
		if (!err) {
			/* Protected by mm_lock */
···
	/* Spawn a worker so that we can acquire the
	 * user pages without holding our mutex. Access
-	 * to the user pages requires mmap_sem, and we have
-	 * a strict lock ordering of mmap_sem, struct_mutex -
+	 * to the user pages requires mmap_lock, and we have
+	 * a strict lock ordering of mmap_lock, struct_mutex -
	 * we already hold struct_mutex here and so cannot
	 * call gup without encountering a lock inversion.
	 *
+1 -1
drivers/gpu/drm/i915/i915_perf.c
··· 3676 3676 * buffered data written by the GPU besides periodic OA metrics. 3677 3677 * 3678 3678 * Note we copy the properties from userspace outside of the i915 perf 3679 - * mutex to avoid an awkward lockdep with mmap_sem. 3679 + * mutex to avoid an awkward lockdep with mmap_lock. 3680 3680 * 3681 3681 * Most of the implementation details are handled by 3682 3682 * i915_perf_open_ioctl_locked() after taking the &perf->lock
+3 -3
drivers/gpu/drm/ttm/ttm_bo_vm.c
··· 58 58 goto out_clear; 59 59 60 60 /* 61 - * If possible, avoid waiting for GPU with mmap_sem 61 + * If possible, avoid waiting for GPU with mmap_lock 62 62 * held. We only do this if the fault allows retry and this 63 63 * is the first attempt. 64 64 */ ··· 131 131 { 132 132 /* 133 133 * Work around locking order reversal in fault / nopfn 134 - * between mmap_sem and bo_reserve: Perform a trylock operation 134 + * between mmap_lock and bo_reserve: Perform a trylock operation 135 135 * for reserve, and if it fails, retry the fault after waiting 136 136 * for the buffer to become unreserved. 137 137 */ 138 138 if (unlikely(!dma_resv_trylock(bo->base.resv))) { 139 139 /* 140 140 * If the fault allows retry and this is the first 141 - * fault attempt, we try to release the mmap_sem 141 + * fault attempt, we try to release the mmap_lock 142 142 * before waiting 143 143 */ 144 144 if (fault_flag_allow_retry_first(vmf->flags)) {
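The trylock-then-retry pattern this hunk documents (never sleep on the buffer reservation while mmap_lock is held; drop the lock and refault instead) can be sketched in plain userspace C. This is an illustration of the invariant only, not TTM code — `fake_bo`, `fault_reserve` and the single-bit lock are made-up stand-ins:

```c
#include <stdbool.h>

/* Stand-in reservation lock: 0 = free, 1 = held. Illustrative only. */
typedef struct { int resv_held; } fake_bo;

enum fault_ret { FAULT_DONE, FAULT_RETRY };

static bool fake_resv_trylock(fake_bo *bo)
{
    if (bo->resv_held)
        return false;
    bo->resv_held = 1;
    return true;
}

static void fake_resv_unlock(fake_bo *bo)
{
    bo->resv_held = 0;
}

/* Mirrors the fault-path logic in the hunk: the fault handler already
 * holds mmap_lock, so it only *trylocks* the reservation. On contention
 * it returns FAULT_RETRY, telling the caller to release mmap_lock and
 * retry the fault after the buffer becomes unreserved, rather than
 * sleeping on the reservation with mmap_lock held. */
static enum fault_ret fault_reserve(fake_bo *bo)
{
    if (fake_resv_trylock(bo)) {
        fake_resv_unlock(bo);
        return FAULT_DONE;
    }
    /* Contended: drop mmap_lock in the caller and refault. */
    return FAULT_RETRY;
}
```

In the real driver the retry leg also checks `fault_flag_allow_retry_first()` and may fall back to a blocking reserve; that branch is omitted here.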
+1 -1
drivers/infiniband/core/uverbs_main.c
··· 835 835 return; 836 836 837 837 /* 838 - * The umap_lock is nested under mmap_sem since it used within 838 + * The umap_lock is nested under mmap_lock since it used within 839 839 * the vma_ops callbacks, so we have to clean the list one mm 840 840 * at a time to get the lock ordering right. Typically there 841 841 * will only be one mm, so no big deal.
+1 -1
drivers/infiniband/hw/hfi1/mmu_rb.c
··· 333 333 334 334 /* 335 335 * Work queue function to remove all nodes that have been queued up to 336 - * be removed. The key feature is that mm->mmap_sem is not being held 336 + * be removed. The key feature is that mm->mmap_lock is not being held 337 337 * and the remove callback can sleep while taking it, if needed. 338 338 */ 339 339 static void handle_remove(struct work_struct *work)
+1 -1
drivers/media/v4l2-core/videobuf-dma-sg.c
··· 533 533 } else { 534 534 /* NOTE: HACK: videobuf_iolock on V4L2_MEMORY_MMAP 535 535 buffers can only be called from videobuf_qbuf 536 - we take current->mm->mmap_sem there, to prevent 536 + we take current->mm->mmap_lock there, to prevent 537 537 locking inversion, so don't take it here */ 538 538 539 539 err = videobuf_dma_init_user_locked(&mem->dma,
+2 -3
drivers/misc/cxl/cxllib.c
··· 245 245 dar += page_size) { 246 246 if (dar < vma_start || dar >= vma_end) { 247 247 /* 248 - * We don't hold the mm->mmap_sem semaphore 249 - * while iterating, since the semaphore is 250 - * required by one of the lower-level page 248 + * We don't hold mm->mmap_lock while iterating, since 249 + * the lock is required by one of the lower-level page 251 250 * fault processing functions and it could 252 251 * create a deadlock. 253 252 *
+4 -4
drivers/misc/sgi-gru/grufault.c
··· 42 42 } 43 43 44 44 /* 45 - * Find the vma of a GRU segment. Caller must hold mmap_sem. 45 + * Find the vma of a GRU segment. Caller must hold mmap_lock. 46 46 */ 47 47 struct vm_area_struct *gru_find_vma(unsigned long vaddr) 48 48 { ··· 58 58 * Find and lock the gts that contains the specified user vaddr. 59 59 * 60 60 * Returns: 61 - * - *gts with the mmap_sem locked for read and the GTS locked. 61 + * - *gts with the mmap_lock locked for read and the GTS locked. 62 62 * - NULL if vaddr invalid OR is not a valid GSEG vaddr. 63 63 */ 64 64 ··· 198 198 * Only supports Intel large pages (2MB only) on x86_64. 199 199 * ZZZ - hugepage support is incomplete 200 200 * 201 - * NOTE: mmap_sem is already held on entry to this function. This 201 + * NOTE: mmap_lock is already held on entry to this function. This 202 202 * guarantees existence of the page tables. 203 203 */ 204 204 static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr, ··· 569 569 } 570 570 571 571 /* 572 - * This is running in interrupt context. Trylock the mmap_sem. 572 + * This is running in interrupt context. Trylock the mmap_lock. 573 573 * If it fails, retry the fault in user context. 574 574 */ 575 575 gts->ustats.fmm_tlbmiss++;
+1 -1
drivers/oprofile/buffer_sync.c
··· 486 486 487 487 /* Sync one of the CPU's buffers into the global event buffer. 488 488 * Here we need to go through each batch of samples punctuated 489 - * by context switch notes, taking the task's mmap_sem and doing 489 + * by context switch notes, taking the task's mmap_lock and doing 490 490 * lookup in task->mm->mmap to convert EIP into dcookie/offset 491 491 * value. 492 492 */
+2 -2
drivers/staging/android/ashmem.c
··· 555 555 556 556 /* 557 557 * Holding the ashmem_mutex while doing a copy_from_user might cause 558 - * an data abort which would try to access mmap_sem. If another 558 + * an data abort which would try to access mmap_lock. If another 559 559 * thread has invoked ashmem_mmap then it will be holding the 560 560 * semaphore and will be waiting for ashmem_mutex, there by leading to 561 561 * deadlock. We'll release the mutex and take the name to a local ··· 586 586 * Have a local variable to which we'll copy the content 587 587 * from asma with the lock held. Later we can copy this to the user 588 588 * space safely without holding any locks. So even if we proceed to 589 - * wait for mmap_sem, it won't lead to deadlock. 589 + * wait for mmap_lock, it won't lead to deadlock. 590 590 */ 591 591 char local_name[ASHMEM_NAME_LEN]; 592 592
+1 -1
drivers/staging/comedi/comedi_fops.c
··· 2325 2325 int retval = 0; 2326 2326 2327 2327 /* 2328 - * 'trylock' avoids circular dependency with current->mm->mmap_sem 2328 + * 'trylock' avoids circular dependency with current->mm->mmap_lock 2329 2329 * and down-reading &dev->attach_lock should normally succeed without 2330 2330 * contention unless the device is in the process of being attached 2331 2331 * or detached.
+1 -1
drivers/tty/vt/consolemap.c
··· 12 12 * Fix bug in inverse translation. Stanislav Voronyi <stas@cnti.uanet.kharkov.ua>, Dec 1998 13 13 * 14 14 * In order to prevent the following circular lock dependency: 15 - * &mm->mmap_sem --> cpu_hotplug.lock --> console_lock --> &mm->mmap_sem 15 + * &mm->mmap_lock --> cpu_hotplug.lock --> console_lock --> &mm->mmap_lock 16 16 * 17 17 * We cannot allow page fault to happen while holding the console_lock. 18 18 * Therefore, all the userspace copy operations have to be done outside
+7 -7
drivers/vfio/pci/vfio_pci.c
··· 1185 1185 1186 1186 /* 1187 1187 * We need to get memory_lock for each device, but devices 1188 - * can share mmap_sem, therefore we need to zap and hold 1188 + * can share mmap_lock, therefore we need to zap and hold 1189 1189 * the vma_lock for each device, and only then get each 1190 1190 * memory_lock. 1191 1191 */ ··· 1375 1375 1376 1376 /* 1377 1377 * Lock ordering: 1378 - * vma_lock is nested under mmap_sem for vm_ops callback paths. 1378 + * vma_lock is nested under mmap_lock for vm_ops callback paths. 1379 1379 * The memory_lock semaphore is used by both code paths calling 1380 1380 * into this function to zap vmas and the vm_ops.fault callback 1381 1381 * to protect the memory enable state of the device. 1382 1382 * 1383 - * When zapping vmas we need to maintain the mmap_sem => vma_lock 1383 + * When zapping vmas we need to maintain the mmap_lock => vma_lock 1384 1384 * ordering, which requires using vma_lock to walk vma_list to 1385 - * acquire an mm, then dropping vma_lock to get the mmap_sem and 1385 + * acquire an mm, then dropping vma_lock to get the mmap_lock and 1386 1386 * reacquiring vma_lock. This logic is derived from similar 1387 1387 * requirements in uverbs_user_mmap_disassociate(). 1388 1388 * 1389 - * mmap_sem must always be the top-level lock when it is taken. 1389 + * mmap_lock must always be the top-level lock when it is taken. 1390 1390 * Therefore we can only hold the memory_lock write lock when 1391 - * vma_list is empty, as we'd need to take mmap_sem to clear 1391 + * vma_list is empty, as we'd need to take mmap_lock to clear 1392 1392 * entries. vma_list can only be guaranteed empty when holding 1393 1393 * vma_lock, thus memory_lock is nested under vma_lock. 1394 1394 * 1395 1395 * This enables the vm_ops.fault callback to acquire vma_lock, 1396 1396 * followed by memory_lock read lock, while already holding 1397 - * mmap_sem without risk of deadlock. 1397 + * mmap_lock without risk of deadlock. 
1398 1398 */ 1399 1399 while (1) { 1400 1400 struct mm_struct *mm = NULL;
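The vfio comment above pins down a strict order: mmap_lock is always outermost, vma_lock nests under it, and memory_lock nests under vma_lock. A toy ordering checker makes the invariant concrete — this is not vfio code, just a sketch of the rule the comment states (names and the checker itself are invented for illustration):

```c
#include <stdbool.h>

/* Documented order, outermost to innermost:
 *   mmap_lock -> vma_lock -> memory_lock */
enum lock_id { MMAP_LOCK = 0, VMA_LOCK = 1, MEMORY_LOCK = 2 };

struct lock_state {
    int deepest;   /* deepest lock currently held; -1 when none held */
};

/* Returns true if acquiring @id now respects the documented order,
 * i.e. @id is strictly inner to everything already held. Taking an
 * outer lock while holding an inner one is the inversion that the
 * comment's "mmap_lock must always be the top-level lock" forbids. */
static bool lock_acquire_ok(struct lock_state *s, enum lock_id id)
{
    if ((int)id <= s->deepest)
        return false;
    s->deepest = (int)id;
    return true;
}

static void lock_release_all(struct lock_state *s)
{
    s->deepest = -1;
}
```

This is exactly why the zap path in the hunk drops vma_lock before taking mmap_lock and then reacquires vma_lock: acquiring mmap_lock while vma_lock is held would invert the order.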
+1 -1
drivers/xen/gntdev.c
··· 1014 1014 * to the PTE from going stale. 1015 1015 * 1016 1016 * Since this vma's mappings can't be touched without the 1017 - * mmap_sem, and we are holding it now, there is no need for 1017 + * mmap_lock, and we are holding it now, there is no need for 1018 1018 * the notifier_range locking pattern. 1019 1019 */ 1020 1020 mmu_interval_read_begin(&map->notifier);
+2 -2
fs/coredump.c
··· 393 393 * of ->siglock provides a memory barrier. 394 394 * 395 395 * do_exit: 396 - * The caller holds mm->mmap_sem. This means that the task which 396 + * The caller holds mm->mmap_lock. This means that the task which 397 397 * uses this mm can't pass exit_mm(), so it can't exit or clear 398 398 * its ->mm. 399 399 * ··· 401 401 * It does list_replace_rcu(&leader->tasks, &current->tasks), 402 402 * we must see either old or new leader, this does not matter. 403 403 * However, it can change p->sighand, so lock_task_sighand(p) 404 - * must be used. Since p->mm != NULL and we hold ->mmap_sem 404 + * must be used. Since p->mm != NULL and we hold ->mmap_lock 405 405 * it can't fail. 406 406 * 407 407 * Note also that "g" can be the old leader with ->mm == NULL
+1 -1
fs/exec.c
··· 1091 1091 /* 1092 1092 * Make sure that if there is a core dump in progress 1093 1093 * for the old mm, we get out and die instead of going 1094 - * through with the exec. We must hold mmap_sem around 1094 + * through with the exec. We must hold mmap_lock around 1095 1095 * checking core_state and changing tsk->mm. 1096 1096 */ 1097 1097 mmap_read_lock(old_mm);
+1 -1
fs/ext2/file.c
··· 79 79 /* 80 80 * The lock ordering for ext2 DAX fault paths is: 81 81 * 82 - * mmap_sem (MM) 82 + * mmap_lock (MM) 83 83 * sb_start_pagefault (vfs, freeze) 84 84 * ext2_inode_info->dax_sem 85 85 * address_space->i_mmap_rwsem or page_lock (mutually exclusive in DAX)
+3 -3
fs/ext4/super.c
··· 93 93 * i_mmap_rwsem (inode->i_mmap_rwsem)! 94 94 * 95 95 * page fault path: 96 - * mmap_sem -> sb_start_pagefault -> i_mmap_sem (r) -> transaction start -> 96 + * mmap_lock -> sb_start_pagefault -> i_mmap_sem (r) -> transaction start -> 97 97 * page lock -> i_data_sem (rw) 98 98 * 99 99 * buffered write path: 100 - * sb_start_write -> i_mutex -> mmap_sem 100 + * sb_start_write -> i_mutex -> mmap_lock 101 101 * sb_start_write -> i_mutex -> transaction start -> page lock -> 102 102 * i_data_sem (rw) 103 103 * ··· 107 107 * i_data_sem (rw) 108 108 * 109 109 * direct IO: 110 - * sb_start_write -> i_mutex -> mmap_sem 110 + * sb_start_write -> i_mutex -> mmap_lock 111 111 * sb_start_write -> i_mutex -> transaction start -> i_data_sem (rw) 112 112 * 113 113 * writepages:
+2 -2
fs/kernfs/file.c
··· 652 652 * The following is done to give a different lockdep key to 653 653 * @of->mutex for files which implement mmap. This is a rather 654 654 * crude way to avoid false positive lockdep warning around 655 - * mm->mmap_sem - mmap nests @of->mutex under mm->mmap_sem and 655 + * mm->mmap_lock - mmap nests @of->mutex under mm->mmap_lock and 656 656 * reading /sys/block/sda/trace/act_mask grabs sr_mutex, under 657 - * which mm->mmap_sem nests, while holding @of->mutex. As each 657 + * which mm->mmap_lock nests, while holding @of->mutex. As each 658 658 * open file has a separate mutex, it's okay as long as those don't 659 659 * happen on the same file. At this point, we can't easily give 660 660 * each file a separate locking class. Let's differentiate on
+3 -3
fs/proc/base.c
··· 2333 2333 /* 2334 2334 * We need two passes here: 2335 2335 * 2336 - * 1) Collect vmas of mapped files with mmap_sem taken 2337 - * 2) Release mmap_sem and instantiate entries 2336 + * 1) Collect vmas of mapped files with mmap_lock taken 2337 + * 2) Release mmap_lock and instantiate entries 2338 2338 * 2339 2339 * otherwise we get lockdep complained, since filldir() 2340 - * routine might require mmap_sem taken in might_fault(). 2340 + * routine might require mmap_lock taken in might_fault(). 2341 2341 */ 2342 2342 2343 2343 for (vma = mm->mmap, pos = 2; vma; vma = vma->vm_next) {
+3 -3
fs/proc/task_mmu.c
··· 593 593 if (pmd_trans_unstable(pmd)) 594 594 goto out; 595 595 /* 596 - * The mmap_sem held all the way back in m_start() is what 596 + * The mmap_lock held all the way back in m_start() is what 597 597 * keeps khugepaged out of here and from collapsing things 598 598 * in here. 599 599 */ ··· 752 752 } 753 753 } 754 754 #endif 755 - /* mmap_sem is held in m_start */ 755 + /* mmap_lock is held in m_start */ 756 756 walk_page_vma(vma, &smaps_walk_ops, mss); 757 757 } 758 758 ··· 1827 1827 if (is_vm_hugetlb_page(vma)) 1828 1828 seq_puts(m, " huge"); 1829 1829 1830 - /* mmap_sem is held by m_start */ 1830 + /* mmap_lock is held by m_start */ 1831 1831 walk_page_vma(vma, &show_numa_ops, md); 1832 1832 1833 1833 if (!md->pages)
+9 -9
fs/userfaultfd.c
··· 369 369 * FAULT_FLAG_KILLABLE are not straightforward. The "Caution" 370 370 * recommendation in __lock_page_or_retry is not an understatement. 371 371 * 372 - * If FAULT_FLAG_ALLOW_RETRY is set, the mmap_sem must be released 372 + * If FAULT_FLAG_ALLOW_RETRY is set, the mmap_lock must be released 373 373 * before returning VM_FAULT_RETRY only if FAULT_FLAG_RETRY_NOWAIT is 374 374 * not set. 375 375 * 376 376 * If FAULT_FLAG_ALLOW_RETRY is set but FAULT_FLAG_KILLABLE is not 377 377 * set, VM_FAULT_RETRY can still be returned if and only if there are 378 - * fatal_signal_pending()s, and the mmap_sem must be released before 378 + * fatal_signal_pending()s, and the mmap_lock must be released before 379 379 * returning it. 380 380 */ 381 381 vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason) ··· 396 396 * FOLL_DUMP case, anon memory also checks for FOLL_DUMP with 397 397 * the no_page_table() helper in follow_page_mask(), but the 398 398 * shmem_vm_ops->fault method is invoked even during 399 - * coredumping without mmap_sem and it ends up here. 399 + * coredumping without mmap_lock and it ends up here. 400 400 */ 401 401 if (current->flags & (PF_EXITING|PF_DUMPCORE)) 402 402 goto out; 403 403 404 404 /* 405 - * Coredumping runs without mmap_sem so we can only check that 406 - * the mmap_sem is held, if PF_DUMPCORE was not set. 405 + * Coredumping runs without mmap_lock so we can only check that 406 + * the mmap_lock is held, if PF_DUMPCORE was not set. 407 407 */ 408 408 mmap_assert_locked(mm); 409 409 ··· 422 422 /* 423 423 * If it's already released don't get it. This avoids to loop 424 424 * in __get_user_pages if userfaultfd_release waits on the 425 - * caller of handle_userfault to release the mmap_sem. 425 + * caller of handle_userfault to release the mmap_lock. 
426 426 */ 427 427 if (unlikely(READ_ONCE(ctx->released))) { 428 428 /* ··· 481 481 if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT) 482 482 goto out; 483 483 484 - /* take the reference before dropping the mmap_sem */ 484 + /* take the reference before dropping the mmap_lock */ 485 485 userfaultfd_ctx_get(ctx); 486 486 487 487 init_waitqueue_func_entry(&uwq.wq, userfaultfd_wake_function); ··· 890 890 * Flush page faults out of all CPUs. NOTE: all page faults 891 891 * must be retried without returning VM_FAULT_SIGBUS if 892 892 * userfaultfd_ctx_get() succeeds but vma->vma_userfault_ctx 893 - * changes while handle_userfault released the mmap_sem. So 893 + * changes while handle_userfault released the mmap_lock. So 894 894 * it's critical that released is set to true (above), before 895 - * taking the mmap_sem for writing. 895 + * taking the mmap_lock for writing. 896 896 */ 897 897 mmap_write_lock(mm); 898 898 still_valid = mmget_still_valid(mm);
+1 -1
fs/xfs/xfs_file.c
··· 1173 1173 * Locking for serialisation of IO during page faults. This results in a lock 1174 1174 * ordering of: 1175 1175 * 1176 - * mmap_sem (MM) 1176 + * mmap_lock (MM) 1177 1177 * sb_start_pagefault(vfs, freeze) 1178 1178 * i_mmaplock (XFS - truncate serialisation) 1179 1179 * page_lock (MM)
+7 -7
fs/xfs/xfs_inode.c
··· 145 145 * 146 146 * i_rwsem -> i_mmap_lock -> page_lock -> i_ilock 147 147 * 148 - * mmap_sem locking order: 148 + * mmap_lock locking order: 149 149 * 150 - * i_rwsem -> page lock -> mmap_sem 151 - * mmap_sem -> i_mmap_lock -> page_lock 150 + * i_rwsem -> page lock -> mmap_lock 151 + * mmap_lock -> i_mmap_lock -> page_lock 152 152 * 153 - * The difference in mmap_sem locking order mean that we cannot hold the 153 + * The difference in mmap_lock locking order mean that we cannot hold the 154 154 * i_mmap_lock over syscall based read(2)/write(2) based IO. These IO paths can 155 - * fault in pages during copy in/out (for buffered IO) or require the mmap_sem 155 + * fault in pages during copy in/out (for buffered IO) or require the mmap_lock 156 156 * in get_user_pages() to map the user pages into the kernel address space for 157 157 * direct IO. Similarly the i_rwsem cannot be taken inside a page fault because 158 - * page faults already hold the mmap_sem. 158 + * page faults already hold the mmap_lock. 159 159 * 160 160 * Hence to serialise fully against both syscall and mmap based IO, we need to 161 161 * take both the i_rwsem and the i_mmap_lock. These locks should *only* be both ··· 1630 1630 return 0; 1631 1631 /* 1632 1632 * If we can't get the iolock just skip truncating the blocks 1633 - * past EOF because we could deadlock with the mmap_sem 1633 + * past EOF because we could deadlock with the mmap_lock 1634 1634 * otherwise. We'll get another chance to drop them once the 1635 1635 * last reference to the inode is dropped, so we'll never leak 1636 1636 * blocks permanently.
+2 -2
fs/xfs/xfs_iops.c
··· 28 28 #include <linux/fiemap.h> 29 29 30 30 /* 31 - * Directories have different lock order w.r.t. mmap_sem compared to regular 31 + * Directories have different lock order w.r.t. mmap_lock compared to regular 32 32 * files. This is due to readdir potentially triggering page faults on a user 33 33 * buffer inside filldir(), and this happens with the ilock on the directory 34 34 * held. For regular files, the lock order is the other way around - the 35 - * mmap_sem is taken during the page fault, and then we lock the ilock to do 35 + * mmap_lock is taken during the page fault, and then we lock the ilock to do 36 36 * block mapping. Hence we need a different class for the directory ilock so 37 37 * that lockdep can tell them apart. 38 38 */
+2 -2
include/linux/fs.h
··· 1679 1679 * 1680 1680 * Since page fault freeze protection behaves as a lock, users have to preserve 1681 1681 * ordering of freeze protection and other filesystem locks. It is advised to 1682 - * put sb_start_pagefault() close to mmap_sem in lock ordering. Page fault 1682 + * put sb_start_pagefault() close to mmap_lock in lock ordering. Page fault 1683 1683 * handling code implies lock dependency: 1684 1684 * 1685 - * mmap_sem 1685 + * mmap_lock 1686 1686 * -> sb_start_pagefault 1687 1687 */ 1688 1688 static inline void sb_start_pagefault(struct super_block *sb)
+1 -1
include/linux/huge_mm.h
··· 248 248 return !pmd_none(pmd) && !pmd_present(pmd); 249 249 } 250 250 251 - /* mmap_sem must be held on entry */ 251 + /* mmap_lock must be held on entry */ 252 252 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd, 253 253 struct vm_area_struct *vma) 254 254 {
+1 -1
include/linux/mempolicy.h
··· 31 31 * Locking policy for interlave: 32 32 * In process context there is no locking because only the process accesses 33 33 * its own state. All vma manipulation is somewhat protected by a down_read on 34 - * mmap_sem. 34 + * mmap_lock. 35 35 * 36 36 * Freeing policy: 37 37 * Mempolicy objects are reference counted. A mempolicy will be freed when
+5 -5
include/linux/mm.h
··· 402 402 * @FAULT_FLAG_WRITE: Fault was a write fault. 403 403 * @FAULT_FLAG_MKWRITE: Fault was mkwrite of existing PTE. 404 404 * @FAULT_FLAG_ALLOW_RETRY: Allow to retry the fault if blocked. 405 - * @FAULT_FLAG_RETRY_NOWAIT: Don't drop mmap_sem and wait when retrying. 405 + * @FAULT_FLAG_RETRY_NOWAIT: Don't drop mmap_lock and wait when retrying. 406 406 * @FAULT_FLAG_KILLABLE: The fault task is in SIGKILL killable region. 407 407 * @FAULT_FLAG_TRIED: The fault has been tried once. 408 408 * @FAULT_FLAG_USER: The fault originated in userspace. ··· 452 452 * fault_flag_allow_retry_first - check ALLOW_RETRY the first time 453 453 * 454 454 * This is mostly used for places where we want to try to avoid taking 455 - * the mmap_sem for too long a time when waiting for another condition 455 + * the mmap_lock for too long a time when waiting for another condition 456 456 * to change, in which case we can try to be polite to release the 457 - * mmap_sem in the first round to avoid potential starvation of other 458 - * processes that would also want the mmap_sem. 457 + * mmap_lock in the first round to avoid potential starvation of other 458 + * processes that would also want the mmap_lock. 459 459 * 460 460 * Return: true if the page fault allows retry and this is the first 461 461 * attempt of the fault handling; false otherwise. ··· 582 582 * (vma,addr) marked as MPOL_SHARED. The shared policy infrastructure 583 583 * in mm/mempolicy.c will do this automatically. 584 584 * get_policy() must NOT add a ref if the policy at (vma,addr) is not 585 - * marked as MPOL_SHARED. vma policies are protected by the mmap_sem. 585 + * marked as MPOL_SHARED. vma policies are protected by the mmap_lock. 586 586 * If no [shared/vma] mempolicy exists at the addr, get_policy() op 587 587 * must return NULL--i.e., do not "fallback" to task or system default 588 588 * policy.
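The fault-flag semantics spelled out in this header (and in the userfaultfd comment earlier in the patch) reduce to one rule: when VM_FAULT_RETRY is returned, mmap_lock must be released first, unless FAULT_FLAG_RETRY_NOWAIT told the handler to keep it. A minimal sketch of that decision — the flag values below are illustrative placeholders, not the kernel's actual bit assignments:

```c
#include <stdbool.h>

/* Illustrative values only; the real flags live in <linux/mm.h>. */
#define FAULT_FLAG_ALLOW_RETRY  0x01u
#define FAULT_FLAG_RETRY_NOWAIT 0x02u

/* Per the comments in this patch: if ALLOW_RETRY is set, mmap_lock
 * must be released before returning VM_FAULT_RETRY -- except when
 * RETRY_NOWAIT is also set, in which case the caller keeps the lock
 * (and the fault handler must not sleep waiting for the page). */
static bool must_drop_mmap_lock_on_retry(unsigned int flags)
{
    return (flags & FAULT_FLAG_ALLOW_RETRY) &&
           !(flags & FAULT_FLAG_RETRY_NOWAIT);
}
```

`__lock_page_or_retry()` in mm/filemap.c (further down in this patch) is the canonical consumer of this contract: its return value tells the caller whether mmap_lock survived the call.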
+1 -1
include/linux/mm_types.h
··· 344 344 * can only be in the i_mmap tree. An anonymous MAP_PRIVATE, stack 345 345 * or brk vma (with NULL file) can only be in an anon_vma list. 346 346 */ 347 - struct list_head anon_vma_chain; /* Serialized by mmap_sem & 347 + struct list_head anon_vma_chain; /* Serialized by mmap_lock & 348 348 * page_table_lock */ 349 349 struct anon_vma *anon_vma; /* Serialized by page_table_lock */ 350 350
+4 -4
include/linux/mmu_notifier.h
··· 122 122 123 123 /* 124 124 * invalidate_range_start() and invalidate_range_end() must be 125 - * paired and are called only when the mmap_sem and/or the 125 + * paired and are called only when the mmap_lock and/or the 126 126 * locks protecting the reverse maps are held. If the subsystem 127 127 * can't guarantee that no additional references are taken to 128 128 * the pages in the range, it has to implement the ··· 213 213 }; 214 214 215 215 /* 216 - * The notifier chains are protected by mmap_sem and/or the reverse map 216 + * The notifier chains are protected by mmap_lock and/or the reverse map 217 217 * semaphores. Notifier chains are only changed when all reverse maps and 218 - * the mmap_sem locks are taken. 218 + * the mmap_lock locks are taken. 219 219 * 220 220 * Therefore notifier chains can only be traversed when either 221 221 * 222 - * 1. mmap_sem is held. 222 + * 1. mmap_lock is held. 223 223 * 2. One of the reverse map locks is held (i_mmap_rwsem or anon_vma->rwsem). 224 224 * 3. No other concurrent thread can access the list (release) 225 225 */
+1 -1
include/linux/pagemap.h
··· 538 538 * lock_page_or_retry - Lock the page, unless this would block and the 539 539 * caller indicated that it can handle a retry. 540 540 * 541 - * Return value and mmap_sem implications depend on flags; see 541 + * Return value and mmap_lock implications depend on flags; see 542 542 * __lock_page_or_retry(). 543 543 */ 544 544 static inline int lock_page_or_retry(struct page *page, struct mm_struct *mm,
+3 -3
include/linux/pgtable.h
··· 1134 1134 #endif 1135 1135 /* 1136 1136 * This function is meant to be used by sites walking pagetables with 1137 - * the mmap_sem hold in read mode to protect against MADV_DONTNEED and 1137 + * the mmap_lock held in read mode to protect against MADV_DONTNEED and 1138 1138 * transhuge page faults. MADV_DONTNEED can convert a transhuge pmd 1139 1139 * into a null pmd and the transhuge page fault can convert a null pmd 1140 1140 * into an hugepmd or into a regular pmd (if the hugepage allocation 1141 - * fails). While holding the mmap_sem in read mode the pmd becomes 1141 + * fails). While holding the mmap_lock in read mode the pmd becomes 1142 1142 * stable and stops changing under us only if it's not null and not a 1143 1143 * transhuge pmd. When those races occurs and this function makes a 1144 1144 * difference vs the standard pmd_none_or_clear_bad, the result is ··· 1148 1148 * 1149 1149 * For 32bit kernels with a 64bit large pmd_t this automatically takes 1150 1150 * care of reading the pmd atomically to avoid SMP race conditions 1151 - * against pmd_populate() when the mmap_sem is hold for reading by the 1151 + * against pmd_populate() when the mmap_lock is hold for reading by the 1152 1152 * caller (a special atomic read not done by "gcc" as in the generic 1153 1153 * version above, is also needed when THP is disabled because the page 1154 1154 * fault can populate the pmd from under us).
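The pgtable.h comment describes a subtle pattern: under mmap_lock held for read, a pmd can still race to none or to a transhuge pmd, so a walker must snapshot it once and classify the snapshot rather than re-reading. A toy analogue of that classification, with an invented pmd encoding (the real kernel layout and the 32-bit atomic-read concern are not modeled here):

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy pmd encoding, purely for illustration:
 * 0 = none; bit 0 = present; bit 1 = trans-huge. */
typedef uint64_t fake_pmd;
#define FPMD_PRESENT 0x1u
#define FPMD_HUGE    0x2u

/* Analogue of pmd_none_or_trans_huge_or_clear_bad(): read the pmd
 * exactly once (it may change concurrently under a read-mode
 * mmap_lock) and classify the snapshot. A walker must treat "none"
 * and "trans-huge" as unstable and skip PTE-level access; only a
 * present, non-huge pmd is stable under the read lock. */
static bool fake_pmd_unstable(const fake_pmd *pmdp)
{
    fake_pmd v = *(volatile const fake_pmd *)pmdp;  /* single read */
    if (v == 0)
        return true;            /* none: MADV_DONTNEED may have cleared it */
    if (v & FPMD_HUGE)
        return true;            /* transhuge: must not walk PTEs under it */
    return false;
}
```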
+1 -1
include/linux/rmap.h
··· 77 77 struct anon_vma_chain { 78 78 struct vm_area_struct *vma; 79 79 struct anon_vma *anon_vma; 80 - struct list_head same_vma; /* locked by mmap_sem & page_table_lock */ 80 + struct list_head same_vma; /* locked by mmap_lock & page_table_lock */ 81 81 struct rb_node rb; /* locked by anon_vma->rwsem */ 82 82 unsigned long rb_subtree_last; 83 83 #ifdef CONFIG_DEBUG_VM_RB
+5 -5
include/linux/sched/mm.h
··· 53 53 54 54 /* 55 55 * This has to be called after a get_task_mm()/mmget_not_zero() 56 - * followed by taking the mmap_sem for writing before modifying the 56 + * followed by taking the mmap_lock for writing before modifying the 57 57 * vmas or anything the coredump pretends not to change from under it. 58 58 * 59 59 * It also has to be called when mmgrab() is used in the context of ··· 61 61 * the context of the process to run down_write() on that pinned mm. 62 62 * 63 63 * NOTE: find_extend_vma() called from GUP context is the only place 64 - * that can modify the "mm" (notably the vm_start/end) under mmap_sem 64 + * that can modify the "mm" (notably the vm_start/end) under mmap_lock 65 65 * for reading and outside the context of the process, so it is also 66 - * the only case that holds the mmap_sem for reading that must call 67 - * this function. Generally if the mmap_sem is hold for reading 66 + * the only case that holds the mmap_lock for reading that must call 67 + * this function. Generally if the mmap_lock is hold for reading 68 68 * there's no need of this check after get_task_mm()/mmget_not_zero(). 69 69 * 70 70 * This function can be obsoleted and the check can be removed, after 71 - * the coredump code will hold the mmap_sem for writing before 71 + * the coredump code will hold the mmap_lock for writing before 72 72 * invoking the ->core_dump methods. 73 73 */ 74 74 static inline bool mmget_still_valid(struct mm_struct *mm)
+1 -1
kernel/acct.c
··· 40 40 * is one more bug... 10/11/98, AV. 41 41 * 42 42 * Oh, fsck... Oopsable SMP race in do_process_acct() - we must hold 43 - * ->mmap_sem to walk the vma list of current->mm. Nasty, since it leaks 43 + * ->mmap_lock to walk the vma list of current->mm. Nasty, since it leaks 44 44 * a struct file opened for write. Fixed. 2/6/2000, AV. 45 45 */ 46 46
+2 -2
kernel/cgroup/cpuset.c
··· 1655 1655 guarantee_online_mems(cs, &newmems); 1656 1656 1657 1657 /* 1658 - * The mpol_rebind_mm() call takes mmap_sem, which we couldn't 1658 + * The mpol_rebind_mm() call takes mmap_lock, which we couldn't 1659 1659 * take while holding tasklist_lock. Forks can happen - the 1660 1660 * mpol_dup() cpuset_being_rebound check will catch such forks, 1661 1661 * and rebind their vma mempolicies too. Because we still hold ··· 1760 1760 * 1761 1761 * Call with cpuset_mutex held. May take callback_lock during call. 1762 1762 * Will take tasklist_lock, scan tasklist for tasks in cpuset cs, 1763 - * lock each such tasks mm->mmap_sem, scan its vma's and rebind 1763 + * lock each such tasks mm->mmap_lock, scan its vma's and rebind 1764 1764 * their mempolicies to the cpusets new mems_allowed. 1765 1765 */ 1766 1766 static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
+3 -3
kernel/events/core.c
··· 1316 1316 * perf_event::child_mutex; 1317 1317 * perf_event_context::lock 1318 1318 * perf_event::mmap_mutex 1319 - * mmap_sem 1319 + * mmap_lock 1320 1320 * perf_addr_filters_head::lock 1321 1321 * 1322 1322 * cpu_hotplug_lock ··· 3080 3080 * pre-existing mappings, called once when new filters arrive via SET_FILTER 3081 3081 * ioctl; 3082 3082 * (2) perf_addr_filters_adjust(): adjusting filters' offsets based on newly 3083 - * registered mapping, called for every new mmap(), with mm::mmap_sem down 3083 + * registered mapping, called for every new mmap(), with mm::mmap_lock down 3084 3084 * for reading; 3085 3085 * (3) perf_event_addr_filters_exec(): clearing filters' offsets in the process 3086 3086 * of exec. ··· 9742 9742 /* 9743 9743 * Scan through mm's vmas and see if one of them matches the 9744 9744 * @filter; if so, adjust filter's address range. 9745 - * Called with mm::mmap_sem down for reading. 9745 + * Called with mm::mmap_lock down for reading. 9746 9746 */ 9747 9747 static void perf_addr_filter_apply(struct perf_addr_filter *filter, 9748 9748 struct mm_struct *mm,
+2 -2
kernel/events/uprobes.c
··· 457 457 * @vaddr: the virtual address to store the opcode. 458 458 * @opcode: opcode to be written at @vaddr. 459 459 * 460 - * Called with mm->mmap_sem held for write. 460 + * Called with mm->mmap_lock held for write. 461 461 * Return 0 (success) or a negative errno. 462 462 */ 463 463 int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, ··· 1349 1349 } 1350 1350 1351 1351 /* 1352 - * Called from mmap_region/vma_adjust with mm->mmap_sem acquired. 1352 + * Called from mmap_region/vma_adjust with mm->mmap_lock acquired. 1353 1353 * 1354 1354 * Currently we ignore all errors and always return 0, the callers 1355 1355 * can't handle the failure anyway.
+1 -1
kernel/exit.c
··· 440 440 sync_mm_rss(mm); 441 441 /* 442 442 * Serialize with any possible pending coredump. 443 - * We must hold mmap_sem around checking core_state 443 + * We must hold mmap_lock around checking core_state 444 444 * and clearing tsk->mm. The core-inducing thread 445 445 * will increment ->nr_threads for each thread in the 446 446 * group with ->mm != NULL.
+1 -1
kernel/relay.c
··· 91 91 * 92 92 * Returns 0 if ok, negative on error 93 93 * 94 - * Caller should already have grabbed mmap_sem. 94 + * Caller should already have grabbed mmap_lock. 95 95 */ 96 96 static int relay_mmap_buf(struct rchan_buf *buf, struct vm_area_struct *vma) 97 97 {
+2 -2
kernel/sys.c
··· 2007 2007 } 2008 2008 2009 2009 /* 2010 - * arg_lock protects concurent updates but we still need mmap_sem for 2010 + * arg_lock protects concurent updates but we still need mmap_lock for 2011 2011 * read to exclude races with sys_brk. 2012 2012 */ 2013 2013 mmap_read_lock(mm); ··· 2122 2122 2123 2123 /* 2124 2124 * arg_lock protects concurent updates of arg boundaries, we need 2125 - * mmap_sem for a) concurrent sys_brk, b) finding VMA for addr 2125 + * mmap_lock for a) concurrent sys_brk, b) finding VMA for addr 2126 2126 * validation. 2127 2127 */ 2128 2128 mmap_read_lock(mm);
+4 -4
lib/test_lockup.c
··· 103 103 104 104 static bool lock_mmap_sem; 105 105 module_param(lock_mmap_sem, bool, 0400); 106 - MODULE_PARM_DESC(lock_mmap_sem, "lock mm->mmap_sem: block procfs interfaces"); 106 + MODULE_PARM_DESC(lock_mmap_sem, "lock mm->mmap_lock: block procfs interfaces"); 107 107 108 108 static unsigned long lock_rwsem_ptr; 109 109 module_param_unsafe(lock_rwsem_ptr, ulong, 0400); ··· 191 191 192 192 if (lock_mmap_sem && master) { 193 193 if (verbose) 194 - pr_notice("lock mmap_sem pid=%d\n", main_task->pid); 194 + pr_notice("lock mmap_lock pid=%d\n", main_task->pid); 195 195 if (lock_read) 196 196 mmap_read_lock(main_task->mm); 197 197 else ··· 280 280 else 281 281 mmap_write_unlock(main_task->mm); 282 282 if (verbose) 283 - pr_notice("unlock mmap_sem pid=%d\n", main_task->pid); 283 + pr_notice("unlock mmap_lock pid=%d\n", main_task->pid); 284 284 } 285 285 286 286 if (lock_rwsem_ptr && master) { ··· 505 505 } 506 506 507 507 if (lock_mmap_sem && !main_task->mm) { 508 - pr_err("no mm to lock mmap_sem\n"); 508 + pr_err("no mm to lock mmap_lock\n"); 509 509 return -EINVAL; 510 510 } 511 511
+19 -19
mm/filemap.c
··· 76 76 * ->i_mutex 77 77 * ->i_mmap_rwsem (truncate->unmap_mapping_range) 78 78 * 79 - * ->mmap_sem 79 + * ->mmap_lock 80 80 * ->i_mmap_rwsem 81 81 * ->page_table_lock or pte_lock (various, mainly in memory.c) 82 82 * ->i_pages lock (arch-dependent flush_dcache_mmap_lock) 83 83 * 84 - * ->mmap_sem 84 + * ->mmap_lock 85 85 * ->lock_page (access_process_vm) 86 86 * 87 87 * ->i_mutex (generic_perform_write) 88 - * ->mmap_sem (fault_in_pages_readable->do_page_fault) 88 + * ->mmap_lock (fault_in_pages_readable->do_page_fault) 89 89 * 90 90 * bdi->wb.list_lock 91 91 * sb_lock (fs/fs-writeback.c) ··· 1371 1371 1372 1372 /* 1373 1373 * Return values: 1374 - * 1 - page is locked; mmap_sem is still held. 1374 + * 1 - page is locked; mmap_lock is still held. 1375 1375 * 0 - page is not locked. 1376 1376 * mmap_lock has been released (mmap_read_unlock(), unless flags had both 1377 1377 * FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in 1378 - * which case mmap_sem is still held. 1378 + * which case mmap_lock is still held. 1379 1379 * 1380 1380 * If neither ALLOW_RETRY nor KILLABLE are set, will always return 1 1381 - * with the page locked and the mmap_sem unperturbed. 1381 + * with the page locked and the mmap_lock unperturbed. 1382 1382 */ 1383 1383 int __lock_page_or_retry(struct page *page, struct mm_struct *mm, 1384 1384 unsigned int flags) 1385 1385 { 1386 1386 if (fault_flag_allow_retry_first(flags)) { 1387 1387 /* 1388 - * CAUTION! In this case, mmap_sem is not released 1388 + * CAUTION! In this case, mmap_lock is not released 1389 1389 * even though return 0. 1390 1390 */ 1391 1391 if (flags & FAULT_FLAG_RETRY_NOWAIT) ··· 2313 2313 #ifdef CONFIG_MMU 2314 2314 #define MMAP_LOTSAMISS (100) 2315 2315 /* 2316 - * lock_page_maybe_drop_mmap - lock the page, possibly dropping the mmap_sem 2316 + * lock_page_maybe_drop_mmap - lock the page, possibly dropping the mmap_lock 2317 2317 * @vmf - the vm_fault for this fault. 2318 2318 * @page - the page to lock. 
2319 2319 * @fpin - the pointer to the file we may pin (or is already pinned). 2320 2320 * 2321 - * This works similar to lock_page_or_retry in that it can drop the mmap_sem. 2321 + * This works similar to lock_page_or_retry in that it can drop the mmap_lock. 2322 2322 * It differs in that it actually returns the page locked if it returns 1 and 0 2323 - * if it couldn't lock the page. If we did have to drop the mmap_sem then fpin 2323 + * if it couldn't lock the page. If we did have to drop the mmap_lock then fpin 2324 2324 * will point to the pinned file and needs to be fput()'ed at a later point. 2325 2325 */ 2326 2326 static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page, ··· 2331 2331 2332 2332 /* 2333 2333 * NOTE! This will make us return with VM_FAULT_RETRY, but with 2334 - * the mmap_sem still held. That's how FAULT_FLAG_RETRY_NOWAIT 2334 + * the mmap_lock still held. That's how FAULT_FLAG_RETRY_NOWAIT 2335 2335 * is supposed to work. We have way too many special cases.. 2336 2336 */ 2337 2337 if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT) ··· 2341 2341 if (vmf->flags & FAULT_FLAG_KILLABLE) { 2342 2342 if (__lock_page_killable(page)) { 2343 2343 /* 2344 - * We didn't have the right flags to drop the mmap_sem, 2344 + * We didn't have the right flags to drop the mmap_lock, 2345 2345 * but all fault_handlers only check for fatal signals 2346 2346 * if we return VM_FAULT_RETRY, so we need to drop the 2347 - * mmap_sem here and return 0 if we don't have a fpin. 2347 + * mmap_lock here and return 0 if we don't have a fpin. 2348 2348 */ 2349 2349 if (*fpin == NULL) 2350 2350 mmap_read_unlock(vmf->vma->vm_mm); ··· 2409 2409 /* 2410 2410 * Asynchronous readahead happens when we find the page and PG_readahead, 2411 2411 * so we want to possibly extend the readahead further. We return the file that 2412 - * was pinned if we have to drop the mmap_sem in order to do IO. 2412 + * was pinned if we have to drop the mmap_lock in order to do IO. 
2413 2413 */ 2414 2414 static struct file *do_async_mmap_readahead(struct vm_fault *vmf, 2415 2415 struct page *page) ··· 2444 2444 * it in the page cache, and handles the special cases reasonably without 2445 2445 * having a lot of duplicated code. 2446 2446 * 2447 - * vma->vm_mm->mmap_sem must be held on entry. 2447 + * vma->vm_mm->mmap_lock must be held on entry. 2448 2448 * 2449 - * If our return value has VM_FAULT_RETRY set, it's because the mmap_sem 2449 + * If our return value has VM_FAULT_RETRY set, it's because the mmap_lock 2450 2450 * may be dropped before doing I/O or by lock_page_maybe_drop_mmap(). 2451 2451 * 2452 - * If our return value does not have VM_FAULT_RETRY set, the mmap_sem 2452 + * If our return value does not have VM_FAULT_RETRY set, the mmap_lock 2453 2453 * has not been released. 2454 2454 * 2455 2455 * We never return with VM_FAULT_RETRY and a bit from VM_FAULT_ERROR set. ··· 2519 2519 goto page_not_uptodate; 2520 2520 2521 2521 /* 2522 - * We've made it this far and we had to drop our mmap_sem, now is the 2522 + * We've made it this far and we had to drop our mmap_lock, now is the 2523 2523 * time to return to the upper layer and have it re-find the vma and 2524 2524 * redo the fault. 2525 2525 */ ··· 2569 2569 2570 2570 out_retry: 2571 2571 /* 2572 - * We dropped the mmap_sem, we need to return to the fault handler to 2572 + * We dropped the mmap_lock, we need to return to the fault handler to 2573 2573 * re-find the vma and come back and find our hopefully still populated 2574 2574 * page. 2575 2575 */
+1 -1
mm/frame_vector.c
··· 29 29 * different type underlying the specified range of virtual addresses. 30 30 * When the function isn't able to map a single page, it returns error. 31 31 * 32 - * This function takes care of grabbing mmap_sem as necessary. 32 + * This function takes care of grabbing mmap_lock as necessary. 33 33 */ 34 34 int get_vaddr_frames(unsigned long start, unsigned int nr_frames, 35 35 unsigned int gup_flags, struct frame_vector *vec)
+19 -19
mm/gup.c
··· 592 592 pmdval = READ_ONCE(*pmd); 593 593 /* 594 594 * MADV_DONTNEED may convert the pmd to null because 595 - * mmap_sem is held in read mode 595 + * mmap_lock is held in read mode 596 596 */ 597 597 if (pmd_none(pmdval)) 598 598 return no_page_table(vma, flags); ··· 855 855 } 856 856 857 857 /* 858 - * mmap_sem must be held on entry. If @locked != NULL and *@flags 859 - * does not include FOLL_NOWAIT, the mmap_sem may be released. If it 858 + * mmap_lock must be held on entry. If @locked != NULL and *@flags 859 + * does not include FOLL_NOWAIT, the mmap_lock may be released. If it 860 860 * is, *@locked will be set to 0 and -EBUSY returned. 861 861 */ 862 862 static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma, ··· 979 979 * only intends to ensure the pages are faulted in. 980 980 * @vmas: array of pointers to vmas corresponding to each page. 981 981 * Or NULL if the caller does not require them. 982 - * @locked: whether we're still with the mmap_sem held 982 + * @locked: whether we're still with the mmap_lock held 983 983 * 984 984 * Returns either number of pages pinned (which may be less than the 985 985 * number requested), or an error. Details about the return value: ··· 992 992 * 993 993 * The caller is responsible for releasing returned @pages, via put_page(). 994 994 * 995 - * @vmas are valid only as long as mmap_sem is held. 995 + * @vmas are valid only as long as mmap_lock is held. 996 996 * 997 - * Must be called with mmap_sem held. It may be released. See below. 997 + * Must be called with mmap_lock held. It may be released. See below. 998 998 * 999 999 * __get_user_pages walks a process's page tables and takes a reference to 1000 1000 * each struct page that each user address corresponds to at a given ··· 1015 1015 * appropriate) must be called after the page is finished with, and 1016 1016 * before put_page is called. 
1017 1017 * 1018 - * If @locked != NULL, *@locked will be set to 0 when mmap_sem is 1018 + * If @locked != NULL, *@locked will be set to 0 when mmap_lock is 1019 1019 * released by an up_read(). That can happen if @gup_flags does not 1020 1020 * have FOLL_NOWAIT. 1021 1021 * 1022 1022 * A caller using such a combination of @locked and @gup_flags 1023 - * must therefore hold the mmap_sem for reading only, and recognize 1023 + * must therefore hold the mmap_lock for reading only, and recognize 1024 1024 * when it's been released. Otherwise, it must be held for either 1025 1025 * reading or writing and will not be released. 1026 1026 * ··· 1083 1083 if (locked && *locked == 0) { 1084 1084 /* 1085 1085 * We've got a VM_FAULT_RETRY 1086 - * and we've lost mmap_sem. 1086 + * and we've lost mmap_lock. 1087 1087 * We must stop here. 1088 1088 */ 1089 1089 BUG_ON(gup_flags & FOLL_NOWAIT); ··· 1190 1190 * @mm: mm_struct of target mm 1191 1191 * @address: user address 1192 1192 * @fault_flags:flags to pass down to handle_mm_fault() 1193 - * @unlocked: did we unlock the mmap_sem while retrying, maybe NULL if caller 1193 + * @unlocked: did we unlock the mmap_lock while retrying, maybe NULL if caller 1194 1194 * does not allow retry. If NULL, the caller must guarantee 1195 1195 * that fault_flags does not contain FAULT_FLAG_ALLOW_RETRY. 1196 1196 * ··· 1211 1211 * such architectures, gup() will not be enough to make a subsequent access 1212 1212 * succeed. 1213 1213 * 1214 - * This function will not return with an unlocked mmap_sem. So it has not the 1215 - * same semantics wrt the @mm->mmap_sem as does filemap_fault(). 1214 + * This function will not return with an unlocked mmap_lock. So it has not the 1215 + * same semantics wrt the @mm->mmap_lock as does filemap_fault(). 
1216 1216 */ 1217 1217 int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, 1218 1218 unsigned long address, unsigned int fault_flags, ··· 1400 1400 * @vma: target vma 1401 1401 * @start: start address 1402 1402 * @end: end address 1403 - * @locked: whether the mmap_sem is still held 1403 + * @locked: whether the mmap_lock is still held 1404 1404 * 1405 1405 * This takes care of mlocking the pages too if VM_LOCKED is set. 1406 1406 * 1407 1407 * return 0 on success, negative error code on error. 1408 1408 * 1409 - * vma->vm_mm->mmap_sem must be held. 1409 + * vma->vm_mm->mmap_lock must be held. 1410 1410 * 1411 1411 * If @locked is NULL, it may be held for read or write and will 1412 1412 * be unperturbed. ··· 1458 1458 * 1459 1459 * This is used to implement mlock() and the MAP_POPULATE / MAP_LOCKED mmap 1460 1460 * flags. VMAs must be already marked with the desired vm_flags, and 1461 - * mmap_sem must not be held. 1461 + * mmap_lock must not be held. 1462 1462 */ 1463 1463 int __mm_populate(unsigned long start, unsigned long len, int ignore_errors) 1464 1464 { ··· 1525 1525 * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found - 1526 1526 * allowing a hole to be left in the corefile to save diskspace. 1527 1527 * 1528 - * Called without mmap_sem, but after all other threads have been killed. 1528 + * Called without mmap_lock, but after all other threads have been killed. 1529 1529 */ 1530 1530 #ifdef CONFIG_ELF_CORE 1531 1531 struct page *get_dump_page(unsigned long addr) ··· 1886 1886 * 1887 1887 * The caller is responsible for releasing returned @pages, via put_page(). 1888 1888 * 1889 - * @vmas are valid only as long as mmap_sem is held. 1889 + * @vmas are valid only as long as mmap_lock is held. 1890 1890 * 1891 - * Must be called with mmap_sem held for read or write. 1891 + * Must be called with mmap_lock held for read or write. 
1892 1892 * 1893 1893 * get_user_pages_remote walks a process's page tables and takes a reference 1894 1894 * to each struct page that each user address corresponds to at a given ··· 2873 2873 * @pages: array that receives pointers to the pages pinned. 2874 2874 * Should be at least nr_pages long. 2875 2875 * 2876 - * Attempt to pin user pages in memory without taking mm->mmap_sem. 2876 + * Attempt to pin user pages in memory without taking mm->mmap_lock. 2877 2877 * If not successful, it will fall back to taking the lock and 2878 2878 * calling get_user_pages(). 2879 2879 *
+2 -2
mm/huge_memory.c
··· 1746 1746 1747 1747 /* 1748 1748 * We don't have to worry about the ordering of src and dst 1749 - * ptlocks because exclusive mmap_sem prevents deadlock. 1749 * ptlocks because exclusive mmap_lock prevents deadlock. 1750 1750 */ 1751 1751 old_ptl = __pmd_trans_huge_lock(old_pmd, vma); 1752 1752 if (old_ptl) { ··· 2618 2618 2619 2619 if (PageAnon(head)) { 2620 2620 /* 2621 - * The caller does not necessarily hold an mmap_sem that would 2621 * The caller does not necessarily hold an mmap_lock that would 2622 2622 * prevent the anon_vma disappearing so we first take a 2623 2623 * reference to it and then lock the anon_vma for write. This 2624 2624 * is similar to page_lock_anon_vma_read except the write lock
+1 -1
mm/hugetlb.c
··· 4695 4695 (const void __user *) src_addr, 4696 4696 pages_per_huge_page(h), false); 4697 4697 4698 - /* fallback to copy_from_user outside mmap_sem */ 4698 + /* fallback to copy_from_user outside mmap_lock */ 4699 4699 if (unlikely(ret)) { 4700 4700 ret = -ENOENT; 4701 4701 *pagep = page;
+2 -2
mm/internal.h
··· 344 344 } 345 345 346 346 /* 347 - * must be called with vma's mmap_sem held for read or write, and page locked. 347 + * must be called with vma's mmap_lock held for read or write, and page locked. 348 348 */ 349 349 extern void mlock_vma_page(struct page *page); 350 350 extern unsigned int munlock_vma_page(struct page *page); ··· 413 413 414 414 /* 415 415 * FAULT_FLAG_RETRY_NOWAIT means we don't want to wait on page locks or 416 - * anything, so we only pin the file and drop the mmap_sem if only 416 + * anything, so we only pin the file and drop the mmap_lock if only 417 417 * FAULT_FLAG_ALLOW_RETRY is set, while this is the first attempt. 418 418 */ 419 419 if (fault_flag_allow_retry_first(flags) &&
+17 -17
mm/khugepaged.c
··· 534 534 * under mmap sem read mode). Stop here (after we 535 535 * return all pagetables will be destroyed) until 536 536 * khugepaged has finished working on the pagetables 537 - * under the mmap_sem. 537 * under the mmap_lock. 538 538 */ 539 539 mmap_write_lock(mm); 540 540 mmap_write_unlock(mm); ··· 933 933 #endif 934 934 935 935 /* 936 - * If mmap_sem temporarily dropped, revalidate vma 937 - * before taking mmap_sem. 936 * If mmap_lock temporarily dropped, revalidate vma 937 * before taking mmap_lock. 938 938 * Return 0 if it succeeds, otherwise return non-zero 939 939 * value (scan code). 940 940 */ ··· 966 966 * Only done if khugepaged_scan_pmd believes it is worthwhile. 967 967 * 968 968 * Called and returns without pte mapped or spinlocks held, 969 - * but with mmap_sem held to protect against vma changes. 969 * but with mmap_lock held to protect against vma changes. 970 970 */ 971 971 972 972 static bool __collapse_huge_page_swapin(struct mm_struct *mm, ··· 993 993 swapped_in++; 994 994 ret = do_swap_page(&vmf); 995 995 996 - /* do_swap_page returns VM_FAULT_RETRY with released mmap_sem */ 996 /* do_swap_page returns VM_FAULT_RETRY with released mmap_lock */ 997 997 if (ret & VM_FAULT_RETRY) { 998 998 mmap_read_lock(mm); 999 999 if (hugepage_vma_revalidate(mm, address, &vmf.vma)) { ··· 1047 1047 gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE; 1048 1048 1049 1049 /* 1050 - * Before allocating the hugepage, release the mmap_sem read lock. 1050 * Before allocating the hugepage, release the mmap_lock read lock. 1051 1051 * The allocation can take potentially a long time if it involves 1052 - * sync compaction, and we do not need to hold the mmap_sem during 1052 * sync compaction, and we do not need to hold the mmap_lock during 1053 1053 * that. We will recheck the vma after taking it again in write mode. 
1054 1054 */ 1055 1055 mmap_read_unlock(mm); ··· 1080 1080 } 1081 1081 1082 1082 /* 1083 - * __collapse_huge_page_swapin always returns with mmap_sem locked. 1084 - * If it fails, we release mmap_sem and jump out_nolock. 1083 * __collapse_huge_page_swapin always returns with mmap_lock locked. 1084 * If it fails, we release mmap_lock and jump out_nolock. 1085 1085 * Continuing to collapse causes inconsistency. 1086 1086 */ 1087 1087 if (unmapped && !__collapse_huge_page_swapin(mm, vma, address, ··· 1345 1345 pte_unmap_unlock(pte, ptl); 1346 1346 if (ret) { 1347 1347 node = khugepaged_find_target_node(); 1348 - /* collapse_huge_page will return with the mmap_sem released */ 1348 /* collapse_huge_page will return with the mmap_lock released */ 1349 1349 collapse_huge_page(mm, address, hpage, node, 1350 1350 referenced, unmapped); 1351 1351 } ··· 1547 1547 * later. 1548 1548 * 1549 1549 * Note that vma->anon_vma check is racy: it can be set up after 1550 - * the check but before we took mmap_sem by the fault path. 1550 * the check but before we took mmap_lock by the fault path. 1551 1551 * But page lock would prevent establishing any new ptes of the 1552 1552 * page, so we are safe. 1553 1553 * ··· 1567 1567 if (!pmd) 1568 1568 continue; 1569 1569 /* 1570 - * We need exclusive mmap_sem to retract page table. 1570 * We need exclusive mmap_lock to retract page table. 1571 1571 * 1572 1572 * We use trylock due to lock inversion: we need to acquire 1573 - * mmap_sem while holding page lock. Fault path does it in 1573 * mmap_lock while holding page lock. Fault path does it in 1574 1574 * reverse order. Trylock is a way to avoid deadlock. 
1575 1575 */ 1576 1576 if (mmap_write_trylock(vma->vm_mm)) { ··· 2058 2058 */ 2059 2059 vma = NULL; 2060 2060 if (unlikely(!mmap_read_trylock(mm))) 2061 - goto breakouterloop_mmap_sem; 2061 + goto breakouterloop_mmap_lock; 2062 2062 if (likely(!khugepaged_test_exit(mm))) 2063 2063 vma = find_vma(mm, khugepaged_scan.address); 2064 2064 ··· 2115 2115 khugepaged_scan.address += HPAGE_PMD_SIZE; 2116 2116 progress += HPAGE_PMD_NR; 2117 2117 if (ret) 2118 - /* we released mmap_sem so break loop */ 2119 - goto breakouterloop_mmap_sem; 2118 + /* we released mmap_lock so break loop */ 2119 + goto breakouterloop_mmap_lock; 2120 2120 if (progress >= pages) 2121 2121 goto breakouterloop; 2122 2122 } 2123 2123 } 2124 2124 breakouterloop: 2125 2125 mmap_read_unlock(mm); /* exit_mmap will destroy ptes after this */ 2126 - breakouterloop_mmap_sem: 2126 + breakouterloop_mmap_lock: 2127 2127 2128 2128 spin_lock(&khugepaged_mm_lock); 2129 2129 VM_BUG_ON(khugepaged_scan.mm_slot != mm_slot);
+6 -6
mm/ksm.c
··· 442 442 /* 443 443 * ksmd, and unmerge_and_remove_all_rmap_items(), must not touch an mm's 444 444 * page tables after it has passed through ksm_exit() - which, if necessary, 445 - * takes mmap_sem briefly to serialize against them. ksm_exit() does not set 445 + * takes mmap_lock briefly to serialize against them. ksm_exit() does not set 446 446 * a special flag: they can just back out as soon as mm_users goes to zero. 447 447 * ksm_test_exit() is used throughout to make this test for exit: in some 448 448 * places for correctness, in some places just to avoid unnecessary work. ··· 831 831 * Though it's very tempting to unmerge rmap_items from stable tree rather 832 832 * than check every pte of a given vma, the locking doesn't quite work for 833 833 * that - an rmap_item is assigned to the stable tree after inserting ksm 834 - * page and upping mmap_sem. Nor does it fit with the way we skip dup'ing 834 + * page and upping mmap_lock. Nor does it fit with the way we skip dup'ing 835 835 * rmap_items from parent to child at fork time (so as not to waste time 836 836 * if exit comes before the next scan reaches it). 837 837 * ··· 1292 1292 /* Unstable nid is in union with stable anon_vma: remove first */ 1293 1293 remove_rmap_item_from_tree(rmap_item); 1294 1294 1295 - /* Must get reference to anon_vma while still holding mmap_sem */ 1295 + /* Must get reference to anon_vma while still holding mmap_lock */ 1296 1296 rmap_item->anon_vma = vma->anon_vma; 1297 1297 get_anon_vma(vma->anon_vma); 1298 1298 out: ··· 2343 2343 struct mm_slot, mm_list); 2344 2344 if (ksm_scan.address == 0) { 2345 2345 /* 2346 - * We've completed a full scan of all vmas, holding mmap_sem 2346 + * We've completed a full scan of all vmas, holding mmap_lock 2347 2347 * throughout, and found no VM_MERGEABLE: so do the same as 2348 2348 * __ksm_exit does to remove this mm from all our lists now. 
2349 2349 * This applies either when cleaning up after __ksm_exit 2350 2350 * (but beware: we can reach here even before __ksm_exit), 2351 2351 * or when all VM_MERGEABLE areas have been unmapped (and 2352 - * mmap_sem then protects against race with MADV_MERGEABLE). 2352 + * mmap_lock then protects against race with MADV_MERGEABLE). 2353 2353 */ 2354 2354 hash_del(&slot->link); 2355 2355 list_del(&slot->mm_list); ··· 2536 2536 * This process is exiting: if it's straightforward (as is the 2537 2537 * case when ksmd was never running), free mm_slot immediately. 2538 2538 * But if it's at the cursor or has rmap_items linked to it, use 2539 - * mmap_sem to synchronize with any break_cows before pagetables 2539 + * mmap_lock to synchronize with any break_cows before pagetables 2540 2540 * are freed, and leave the mm_slot on the list for ksmd to free. 2541 2541 * Beware: ksm may already have noticed it exiting and freed the slot. 2542 2542 */
+2 -2
mm/maccess.c
··· 40 40 * happens, handle that and return -EFAULT. 41 41 * 42 42 * We ensure that the copy_from_user is executed in atomic context so that 43 - * do_page_fault() doesn't attempt to take mmap_sem. This makes 43 + * do_page_fault() doesn't attempt to take mmap_lock. This makes 44 44 * probe_kernel_read() suitable for use within regions where the caller 45 - * already holds mmap_sem, or other locks which nest inside mmap_sem. 45 + * already holds mmap_lock, or other locks which nest inside mmap_lock. 46 46 * 47 47 * probe_kernel_read_strict() is the same as probe_kernel_read() except for 48 48 * the case where architectures have non-overlapping user and kernel address
+10 -10
mm/madvise.c
··· 40 40 41 41 /* 42 42 * Any behaviour which results in changes to the vma->vm_flags needs to 43 - * take mmap_sem for writing. Others, which simply traverse vmas, need 43 * take mmap_lock for writing. Others, which simply traverse vmas, need 44 44 * to only take it for reading. 45 45 */ 46 46 static int madvise_need_mmap_write(int behavior) ··· 165 165 166 166 success: 167 167 /* 168 - * vm_flags is protected by the mmap_sem held in write mode. 168 * vm_flags is protected by the mmap_lock held in write mode. 169 169 */ 170 170 vma->vm_flags = new_flags; 171 171 ··· 285 285 * Filesystem's fadvise may need to take various locks. We need to 286 286 * explicitly grab a reference because the vma (and hence the 287 287 * vma's reference to the file) can go away as soon as we drop 288 - * mmap_sem. 288 * mmap_lock. 289 289 */ 290 - *prev = NULL; /* tell sys_madvise we drop mmap_sem */ 290 *prev = NULL; /* tell sys_madvise we drop mmap_lock */ 291 291 get_file(file); 292 292 mmap_read_unlock(current->mm); 293 293 offset = (loff_t)(start - vma->vm_start) ··· 768 768 return -EINVAL; 769 769 770 770 if (!userfaultfd_remove(vma, start, end)) { 771 - *prev = NULL; /* mmap_sem has been dropped, prev is stale */ 771 *prev = NULL; /* mmap_lock has been dropped, prev is stale */ 772 772 773 773 mmap_read_lock(current->mm); 774 774 vma = find_vma(current->mm, start); ··· 791 791 if (end > vma->vm_end) { 792 792 /* 793 793 * Don't fail if end > vma->vm_end. If the old 794 - * vma was split while the mmap_sem was 794 * vma was split while the mmap_lock was 795 795 * released the effect of the concurrent 796 796 * operation may not cause madvise() to 797 797 * have an undefined result. 
There may be an ··· 826 826 int error; 827 827 struct file *f; 828 828 829 - *prev = NULL; /* tell sys_madvise we drop mmap_sem */ 829 + *prev = NULL; /* tell sys_madvise we drop mmap_lock */ 830 830 831 831 if (vma->vm_flags & VM_LOCKED) 832 832 return -EINVAL; ··· 847 847 * Filesystem's fallocate may need to take i_mutex. We need to 848 848 * explicitly grab a reference because the vma (and hence the 849 849 * vma's reference to the file) can go away as soon as we drop 850 - * mmap_sem. 850 + * mmap_lock. 851 851 */ 852 852 get_file(f); 853 853 if (userfaultfd_remove(vma, start, end)) { 854 - /* mmap_sem was not released by userfaultfd_remove() */ 854 + /* mmap_lock was not released by userfaultfd_remove() */ 855 855 mmap_read_unlock(current->mm); 856 856 } 857 857 error = vfs_fallocate(f, ··· 1153 1153 goto out; 1154 1154 if (prev) 1155 1155 vma = prev->vm_next; 1156 - else /* madvise_remove dropped mmap_sem */ 1156 + else /* madvise_remove dropped mmap_lock */ 1157 1157 vma = find_vma(current->mm, start); 1158 1158 } 1159 1159 out:
+1 -1
mm/memcontrol.c
··· 5901 5901 retry: 5902 5902 if (unlikely(!mmap_read_trylock(mc.mm))) { 5903 5903 /* 5904 - * Someone who is holding the mmap_sem might be waiting in 5904 * Someone who is holding the mmap_lock might be waiting in 5905 5905 * waitq. So we cancel all extra charges, wake up all waiters, 5906 5906 * and retry. Because we cancel precharges, we might not be able 5907 5907 * to move enough charges, but moving charge is a best-effort
+20 -20
mm/memory.c
··· 1185 1185 * Here there can be other concurrent MADV_DONTNEED or 1186 1186 * trans huge page faults running, and if the pmd is 1187 1187 * none or trans huge it can change under us. This is 1188 - * because MADV_DONTNEED holds the mmap_sem in read 1188 + * because MADV_DONTNEED holds the mmap_lock in read 1189 1189 * mode. 1190 1190 */ 1191 1191 if (pmd_none_or_trans_huge_or_clear_bad(pmd)) ··· 1636 1636 * The page does not need to be reserved. 1637 1637 * 1638 1638 * Usually this function is called from f_op->mmap() handler 1639 - * under mm->mmap_sem write-lock, so it can change vma->vm_flags. 1639 + * under mm->mmap_lock write-lock, so it can change vma->vm_flags. 1640 1640 * Caller must set VM_MIXEDMAP on vma if it wants to call this 1641 1641 * function from other places, for example from page-fault handler. 1642 1642 * ··· 2573 2573 * mapping may be NULL here because some device drivers do not 2574 2574 * set page.mapping but still dirty their pages 2575 2575 * 2576 - * Drop the mmap_sem before waiting on IO, if we can. The file 2576 + * Drop the mmap_lock before waiting on IO, if we can. The file 2577 2577 * is pinning the mapping, as per above. 2578 2578 */ 2579 2579 if ((dirtied || page_mkwrite) && mapping) { ··· 2623 2623 /* 2624 2624 * Handle the case of a page which we actually need to copy to a new page. 2625 2625 * 2626 - * Called with mmap_sem locked and the old page referenced, but 2626 + * Called with mmap_lock locked and the old page referenced, but 2627 2627 * without the ptl held. 2628 2628 * 2629 2629 * High level logic flow: ··· 2887 2887 * change only once the write actually happens. This avoids a few races, 2888 2888 * and potentially makes it more efficient. 2889 2889 * 2890 - * We enter with non-exclusive mmap_sem (to exclude vma changes, 2890 + * We enter with non-exclusive mmap_lock (to exclude vma changes, 2891 2891 * but allow concurrent faults), with pte both mapped and locked. 
2892 - * We return with mmap_sem still held, but pte unmapped and unlocked. 2892 + * We return with mmap_lock still held, but pte unmapped and unlocked. 2893 2893 */ 2894 2894 static vm_fault_t do_wp_page(struct vm_fault *vmf) 2895 2895 __releases(vmf->ptl) ··· 3078 3078 EXPORT_SYMBOL(unmap_mapping_range); 3079 3079 3080 3080 /* 3081 - * We enter with non-exclusive mmap_sem (to exclude vma changes, 3081 + * We enter with non-exclusive mmap_lock (to exclude vma changes, 3082 3082 * but allow concurrent faults), and pte mapped but not yet locked. 3083 3083 * We return with pte unmapped and unlocked. 3084 3084 * 3085 - * We return with the mmap_sem locked or unlocked in the same cases 3085 + * We return with the mmap_lock locked or unlocked in the same cases 3086 3086 * as does filemap_fault(). 3087 3087 */ 3088 3088 vm_fault_t do_swap_page(struct vm_fault *vmf) ··· 3303 3303 } 3304 3304 3305 3305 /* 3306 - * We enter with non-exclusive mmap_sem (to exclude vma changes, 3306 + * We enter with non-exclusive mmap_lock (to exclude vma changes, 3307 3307 * but allow concurrent faults), and pte mapped but not yet locked. 3308 - * We return with mmap_sem still held, but pte unmapped and unlocked. 3308 + * We return with mmap_lock still held, but pte unmapped and unlocked. 3309 3309 */ 3310 3310 static vm_fault_t do_anonymous_page(struct vm_fault *vmf) 3311 3311 { ··· 3419 3419 } 3420 3420 3421 3421 /* 3422 - * The mmap_sem must have been held on entry, and may have been 3422 + * The mmap_lock must have been held on entry, and may have been 3423 3423 * released depending on flags and vma->vm_ops->fault() return value. 3424 3424 * See filemap_fault() and __lock_page_retry(). 3425 3425 */ ··· 3928 3928 } 3929 3929 3930 3930 /* 3931 - * We enter with non-exclusive mmap_sem (to exclude vma changes, 3931 + * We enter with non-exclusive mmap_lock (to exclude vma changes, 3932 3932 * but allow concurrent faults). 
3933 - * The mmap_sem may have been released depending on flags and our 3933 + * The mmap_lock may have been released depending on flags and our 3934 3934 * return value. See filemap_fault() and __lock_page_or_retry(). 3935 - * If mmap_sem is released, vma may become invalid (for example 3935 + * If mmap_lock is released, vma may become invalid (for example 3936 3936 * by other thread calling munmap()). 3937 3937 */ 3938 3938 static vm_fault_t do_fault(struct vm_fault *vmf) ··· 4161 4161 * with external mmu caches can use to update those (ie the Sparc or 4162 4162 * PowerPC hashed page tables that act as extended TLBs). 4163 4163 * 4164 - * We enter with non-exclusive mmap_sem (to exclude vma changes, but allow 4164 + * We enter with non-exclusive mmap_lock (to exclude vma changes, but allow 4165 4165 * concurrent faults). 4166 4166 * 4167 - * The mmap_sem may have been released depending on flags and our return value. 4167 + * The mmap_lock may have been released depending on flags and our return value. 4168 4168 * See filemap_fault() and __lock_page_or_retry(). 4169 4169 */ 4170 4170 static vm_fault_t handle_pte_fault(struct vm_fault *vmf) ··· 4186 4186 /* 4187 4187 * A regular pmd is established and it can't morph into a huge 4188 4188 * pmd from under us anymore at this point because we hold the 4189 - * mmap_sem read mode and khugepaged takes it in write mode. 4189 + * mmap_lock read mode and khugepaged takes it in write mode. 4190 4190 * So now it's safe to run pte_offset_map(). 4191 4191 */ 4192 4192 vmf->pte = pte_offset_map(vmf->pmd, vmf->address); ··· 4254 4254 /* 4255 4255 * By the time we get here, we already hold the mm semaphore 4256 4256 * 4257 - * The mmap_sem may have been released depending on flags and our 4257 + * The mmap_lock may have been released depending on flags and our 4258 4258 * return value. See filemap_fault() and __lock_page_or_retry(). 
4259 4259 */ 4260 4260 static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, ··· 4349 4349 /* 4350 4350 * By the time we get here, we already hold the mm semaphore 4351 4351 * 4352 - * The mmap_sem may have been released depending on flags and our 4352 + * The mmap_lock may have been released depending on flags and our 4353 4353 * return value. See filemap_fault() and __lock_page_or_retry(). 4354 4354 */ 4355 4355 vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, ··· 4793 4793 { 4794 4794 /* 4795 4795 * Some code (nfs/sunrpc) uses socket ops on kernel memory while 4796 - * holding the mmap_sem, this is safe because kernel memory doesn't 4796 + * holding the mmap_lock, this is safe because kernel memory doesn't 4797 4797 * get paged out, therefore we'll never actually fault, and the 4798 4798 * below annotations will generate false positives. 4799 4799 */
+6 -6
mm/mempolicy.c
··· 224 224 * handle an empty nodemask with MPOL_PREFERRED here. 225 225 * 226 226 * Must be called holding task's alloc_lock to protect task's mems_allowed 227 - * and mempolicy. May also be called holding the mmap_semaphore for write. 227 + * and mempolicy. May also be called holding the mmap_lock for write. 228 228 */ 229 229 static int mpol_set_nodemask(struct mempolicy *pol, 230 230 const nodemask_t *nodes, struct nodemask_scratch *nsc) ··· 368 368 /* 369 369 * mpol_rebind_policy - Migrate a policy to a different set of nodes 370 370 * 371 - * Per-vma policies are protected by mmap_sem. Allocations using per-task 371 + * Per-vma policies are protected by mmap_lock. Allocations using per-task 372 372 * policies are protected by task->mems_allowed_seq to prevent a premature 373 373 * OOM/allocation failure due to parallel nodemask modification. 374 374 */ ··· 398 398 /* 399 399 * Rebind each vma in mm to new nodemask. 400 400 * 401 - * Call holding a reference to mm. Takes mm->mmap_sem during call. 401 + * Call holding a reference to mm. Takes mm->mmap_lock during call. 402 402 */ 403 403 404 404 void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new) ··· 764 764 765 765 /* 766 766 * Apply policy to a single VMA 767 - * This must be called with the mmap_sem held for writing. 767 + * This must be called with the mmap_lock held for writing. 768 768 */ 769 769 static int vma_replace_policy(struct vm_area_struct *vma, 770 770 struct mempolicy *pol) ··· 789 789 } 790 790 791 791 old = vma->vm_policy; 792 - vma->vm_policy = new; /* protected by mmap_sem */ 792 + vma->vm_policy = new; /* protected by mmap_lock */ 793 793 mpol_put(old); 794 794 795 795 return 0; ··· 985 985 if (flags & MPOL_F_ADDR) { 986 986 /* 987 987 * Take a refcount on the mpol, lookup_node() 988 - * will drop the mmap_sem, so after calling 988 + * will drop the mmap_lock, so after calling 989 989 * lookup_node() only "pol" remains valid, "vma" 990 990 * is stale. 991 991 */
+2 -2
mm/migrate.c
··· 2120 2120 * pmd before doing set_pmd_at(), nor to flush the TLB after 2121 2121 * set_pmd_at(). Clearing the pmd here would introduce a race 2122 2122 * condition against MADV_DONTNEED, because MADV_DONTNEED only holds the 2123 - * mmap_sem for reading. If the pmd is set to NULL at any given time, 2123 + * mmap_lock for reading. If the pmd is set to NULL at any given time, 2124 2124 * MADV_DONTNEED won't wait on the pmd lock and it'll skip clearing this 2125 2125 * pmd. 2126 2126 */ ··· 2675 2675 * have the MIGRATE_PFN_MIGRATE flag set for their src array entry. 2676 2676 * 2677 2677 * It is safe to update device page table after migrate_vma_pages() because 2678 - * both destination and source page are still locked, and the mmap_sem is held 2678 + * both destination and source page are still locked, and the mmap_lock is held 2679 2679 * in read mode (hence no one can unmap the range being migrated). 2680 2680 * 2681 2681 * Once the caller is done cleaning up things and updating its page table (if it
+3 -3
mm/mlock.c
··· 49 49 * When lazy mlocking via vmscan, it is important to ensure that the 50 50 * vma's VM_LOCKED status is not concurrently being modified, otherwise we 51 51 * may have mlocked a page that is being munlocked. So lazy mlock must take 52 - * the mmap_sem for read, and verify that the vma really is locked 52 + * the mmap_lock for read, and verify that the vma really is locked 53 53 * (see mm/rmap.c). 54 54 */ 55 55 ··· 381 381 /* 382 382 * Initialize pte walk starting at the already pinned page where we 383 383 * are sure that there is a pte, as it was pinned under the same 384 - * mmap_sem write op. 384 + * mmap_lock write op. 385 385 */ 386 386 pte = get_locked_pte(vma->vm_mm, start, &ptl); 387 387 /* Make sure we do not cross the page table boundary */ ··· 565 565 mm->locked_vm += nr_pages; 566 566 567 567 /* 568 - * vm_flags is protected by the mmap_sem held in write mode. 568 + * vm_flags is protected by the mmap_lock held in write mode. 569 569 * It's okay if try_to_unmap_one unmaps a page just after we 570 570 * set VM_LOCKED, populate_vma_page_range will bring it back. 571 571 */
+18 -18
mm/mmap.c
··· 132 132 vm_flags &= ~VM_SHARED; 133 133 vm_page_prot = vm_pgprot_modify(vm_page_prot, vm_flags); 134 134 } 135 - /* remove_protection_ptes reads vma->vm_page_prot without mmap_sem */ 135 + /* remove_protection_ptes reads vma->vm_page_prot without mmap_lock */ 136 136 WRITE_ONCE(vma->vm_page_prot, vm_page_prot); 137 137 } 138 138 ··· 238 238 239 239 /* 240 240 * Always allow shrinking brk. 241 - * __do_munmap() may downgrade mmap_sem to read. 241 + * __do_munmap() may downgrade mmap_lock to read. 242 242 */ 243 243 if (brk <= mm->brk) { 244 244 int ret; 245 245 246 246 /* 247 - * mm->brk must be protected by write mmap_sem so update it 248 - * before downgrading mmap_sem. When __do_munmap() fails, 247 + * mm->brk must be protected by write mmap_lock so update it 248 + * before downgrading mmap_lock. When __do_munmap() fails, 249 249 * mm->brk will be restored from origbrk. 250 250 */ 251 251 mm->brk = brk; ··· 505 505 * After the update, the vma will be reinserted using 506 506 * anon_vma_interval_tree_post_update_vma(). 507 507 * 508 - * The entire update must be protected by exclusive mmap_sem and by 508 + * The entire update must be protected by exclusive mmap_lock and by 509 509 * the root anon_vma's mutex. 510 510 */ 511 511 static inline void ··· 2371 2371 2372 2372 /* 2373 2373 * vma->vm_start/vm_end cannot change under us because the caller 2374 - * is required to hold the mmap_sem in read mode. We need the 2374 + * is required to hold the mmap_lock in read mode. We need the 2375 2375 * anon_vma lock to serialize against concurrent expand_stacks. 2376 2376 */ 2377 2377 anon_vma_lock_write(vma->anon_vma); ··· 2389 2389 if (!error) { 2390 2390 /* 2391 2391 * vma_gap_update() doesn't support concurrent 2392 - * updates, but we only hold a shared mmap_sem 2392 + * updates, but we only hold a shared mmap_lock 2393 2393 * lock here, so we need to protect against 2394 2394 * concurrent vma expansions. 
2395 2395 * anon_vma_lock_write() doesn't help here, as ··· 2451 2451 2452 2452 /* 2453 2453 * vma->vm_start/vm_end cannot change under us because the caller 2454 - * is required to hold the mmap_sem in read mode. We need the 2454 + * is required to hold the mmap_lock in read mode. We need the 2455 2455 * anon_vma lock to serialize against concurrent expand_stacks. 2456 2456 */ 2457 2457 anon_vma_lock_write(vma->anon_vma); ··· 2469 2469 if (!error) { 2470 2470 /* 2471 2471 * vma_gap_update() doesn't support concurrent 2472 - * updates, but we only hold a shared mmap_sem 2472 + * updates, but we only hold a shared mmap_lock 2473 2473 * lock here, so we need to protect against 2474 2474 * concurrent vma expansions. 2475 2475 * anon_vma_lock_write() doesn't help here, as ··· 2855 2855 2856 2856 ret = __do_munmap(mm, start, len, &uf, downgrade); 2857 2857 /* 2858 - * Returning 1 indicates mmap_sem is downgraded. 2858 + * Returning 1 indicates mmap_lock is downgraded. 2859 2859 * But 1 is not legal return value of vm_munmap() and munmap(), reset 2860 2860 * it to 0 before return. 2861 2861 */ ··· 3107 3107 /* 3108 3108 * Manually reap the mm to free as much memory as possible. 3109 3109 * Then, as the oom reaper does, set MMF_OOM_SKIP to disregard 3110 - * this mm from further consideration. Taking mm->mmap_sem for 3110 + * this mm from further consideration. Taking mm->mmap_lock for 3111 3111 * write after setting MMF_OOM_SKIP will guarantee that the oom 3112 - * reaper will not run on this mm again after mmap_sem is 3112 + * reaper will not run on this mm again after mmap_lock is 3113 3113 * dropped. 3114 3114 * 3115 - * Nothing can be holding mm->mmap_sem here and the above call 3115 + * Nothing can be holding mm->mmap_lock here and the above call 3116 3116 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in 3117 3117 * __oom_reap_task_mm() will not block. 
3118 3118 * ··· 3437 3437 } 3438 3438 3439 3439 /* 3440 - * Called with mm->mmap_sem held for writing. 3440 + * Called with mm->mmap_lock held for writing. 3441 3441 * Insert a new vma covering the given region, with the given flags. 3442 3442 * Its pages are supplied by the given array of struct page *. 3443 3443 * The array can be shorter than len >> PAGE_SHIFT if it's null-terminated. ··· 3513 3513 * operations that could ever happen on a certain mm. This includes 3514 3514 * vmtruncate, try_to_unmap, and all page faults. 3515 3515 * 3516 - * The caller must take the mmap_sem in write mode before calling 3516 + * The caller must take the mmap_lock in write mode before calling 3517 3517 * mm_take_all_locks(). The caller isn't allowed to release the 3518 - * mmap_sem until mm_drop_all_locks() returns. 3518 + * mmap_lock until mm_drop_all_locks() returns. 3519 3519 * 3520 - * mmap_sem in write mode is required in order to block all operations 3520 + * mmap_lock in write mode is required in order to block all operations 3521 3521 * that could modify pagetables and free pages without need of 3522 3522 * altering the vma layout. It's also needed in write mode to avoid new 3523 3523 * anon_vmas to be associated with existing vmas. ··· 3622 3622 } 3623 3623 3624 3624 /* 3625 - * The mmap_sem cannot be released by the caller until 3625 + * The mmap_lock cannot be released by the caller until 3626 3626 * mm_drop_all_locks() returns. 3627 3627 */ 3628 3628 void mm_drop_all_locks(struct mm_struct *mm)
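The brk()/__do_munmap() comments above describe a write-then-downgrade protocol: the unmap work runs under the write lock, __do_munmap() returns 1 to say it downgraded to read, and the caller folds that back to 0 because 1 is not a legal munmap() return value. mmap_write_lock(), mmap_write_downgrade() and mmap_read_unlock() are the real wrapper names introduced by this series; the single-threaded state machine below is only a model of the protocol, not the kernel implementation.

```c
#include <assert.h>

/* Toy lock state standing in for mm->mmap_lock. */
enum lock_state { UNLOCKED, READ_LOCKED, WRITE_LOCKED };
static enum lock_state state = UNLOCKED;

static void mmap_write_lock(void)      { assert(state == UNLOCKED); state = WRITE_LOCKED; }
static void mmap_write_downgrade(void) { assert(state == WRITE_LOCKED); state = READ_LOCKED; }
static void mmap_read_unlock(void)     { assert(state == READ_LOCKED); state = UNLOCKED; }
static void mmap_write_unlock(void)    { assert(state == WRITE_LOCKED); state = UNLOCKED; }

/* Modeled on __do_munmap(): returns 1 when it downgraded to read, so
 * the caller knows which unlock to issue. */
static int do_munmap_model(int downgrade)
{
	/* ... unmap work that needs the write lock happens here ... */
	if (downgrade) {
		mmap_write_downgrade();
		return 1;
	}
	return 0;
}

/* Modeled on vm_munmap()/brk(): drop the lock in whichever mode it
 * ended up in, and never leak the internal "1 = downgraded" value. */
static int vm_munmap_model(int downgrade)
{
	int ret;

	mmap_write_lock();
	ret = do_munmap_model(downgrade);
	if (ret == 1) {
		mmap_read_unlock();
		ret = 0;
	} else {
		mmap_write_unlock();
	}
	return ret;
}
```

The payoff of the downgrade is that page faults (readers) can proceed during the long tail of the unmap, while the vma layout was still changed under full exclusion.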
+1 -1
mm/mmu_gather.c
··· 301 301 { 302 302 /* 303 303 * If there are parallel threads are doing PTE changes on same range 304 - * under non-exclusive lock (e.g., mmap_sem read-side) but defer TLB 304 + * under non-exclusive lock (e.g., mmap_lock read-side) but defer TLB 305 305 * flush by batching, one thread may end up seeing inconsistent PTEs 306 306 * and result in having stale TLB entries. So flush TLB forcefully 307 307 * if we detect parallel PTE batching threads.
+5 -5
mm/mmu_notifier.c
··· 599 599 } 600 600 601 601 /* 602 - * Same as mmu_notifier_register but here the caller must hold the mmap_sem in 602 + * Same as mmu_notifier_register but here the caller must hold the mmap_lock in 603 603 * write mode. A NULL mn signals the notifier is being registered for itree 604 604 * mode. 605 605 */ ··· 623 623 /* 624 624 * kmalloc cannot be called under mm_take_all_locks(), but we 625 625 * know that mm->notifier_subscriptions can't change while we 626 - * hold the write side of the mmap_sem. 626 + * hold the write side of the mmap_lock. 627 627 */ 628 628 subscriptions = kzalloc( 629 629 sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL); ··· 655 655 * readers. acquire can only be used while holding the mmgrab or 656 656 * mmget, and is safe because once created the 657 657 * mmu_notifier_subscriptions is not freed until the mm is destroyed. 658 - * As above, users holding the mmap_sem or one of the 658 + * As above, users holding the mmap_lock or one of the 659 659 * mm_take_all_locks() do not need to use acquire semantics. 660 660 */ 661 661 if (subscriptions) ··· 689 689 * @mn: The notifier to attach 690 690 * @mm: The mm to attach the notifier to 691 691 * 692 - * Must not hold mmap_sem nor any other VM related lock when calling 692 + * Must not hold mmap_lock nor any other VM related lock when calling 693 693 * this registration function. Must also ensure mm_users can't go down 694 694 * to zero while this runs to avoid races with mmu_notifier_release, 695 695 * so mm has to be current->mm or the mm should be pinned safely such ··· 750 750 * are the same. 751 751 * 752 752 * Each call to mmu_notifier_get() must be paired with a call to 753 - * mmu_notifier_put(). The caller must hold the write side of mm->mmap_sem. 753 + * mmu_notifier_put(). The caller must hold the write side of mm->mmap_lock. 
754 754 * 755 755 * While the caller has a mmu_notifier get the mm pointer will remain valid, 756 756 * and can be converted to an active mm pointer via mmget_not_zero().
+4 -4
mm/mprotect.c
··· 49 49 bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE; 50 50 51 51 /* 52 - * Can be called with only the mmap_sem for reading by 52 + * Can be called with only the mmap_lock for reading by 53 53 * prot_numa so we must check the pmd isn't constantly 54 54 * changing from under us from pmd_none to pmd_trans_huge 55 55 * and/or the other way around. ··· 59 59 60 60 /* 61 61 * The pmd points to a regular pte so the pmd can't change 62 - * from under us even if the mmap_sem is only held for 62 + * from under us even if the mmap_lock is only held for 63 63 * reading. 64 64 */ 65 65 pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl); ··· 228 228 next = pmd_addr_end(addr, end); 229 229 230 230 /* 231 - * Automatic NUMA balancing walks the tables with mmap_sem 231 + * Automatic NUMA balancing walks the tables with mmap_lock 232 232 * held for read. It's possible for a parallel update to occur 233 233 * between pmd_trans_huge() and a pmd_none_or_clear_bad() 234 234 * check leading to a false positive and clearing. ··· 477 477 478 478 success: 479 479 /* 480 - * vm_flags and vm_page_prot are protected by the mmap_sem 480 + * vm_flags and vm_page_prot are protected by the mmap_lock 481 481 * held in write mode. 482 482 */ 483 483 vma->vm_flags = newflags;
+4 -4
mm/mremap.c
··· 146 146 147 147 /* 148 148 * We don't have to worry about the ordering of src and dst 149 - * pte locks because exclusive mmap_sem prevents deadlock. 149 + * pte locks because exclusive mmap_lock prevents deadlock. 150 150 */ 151 151 old_pte = pte_offset_map_lock(mm, old_pmd, old_addr, &old_ptl); 152 152 new_pte = pte_offset_map(new_pmd, new_addr); ··· 213 213 214 214 /* 215 215 * We don't have to worry about the ordering of src and dst 216 - * ptlocks because exclusive mmap_sem prevents deadlock. 216 + * ptlocks because exclusive mmap_lock prevents deadlock. 217 217 */ 218 218 old_ptl = pmd_lock(vma->vm_mm, old_pmd); 219 219 new_ptl = pmd_lockptr(mm, new_pmd); ··· 710 710 * Always allow a shrinking remap: that just unmaps 711 711 * the unnecessary pages.. 712 712 * __do_munmap does all the needed commit accounting, and 713 - * downgrades mmap_sem to read if so directed. 713 + * downgrades mmap_lock to read if so directed. 714 714 */ 715 715 if (old_len >= new_len) { 716 716 int retval; ··· 720 720 if (retval < 0 && old_len != new_len) { 721 721 ret = retval; 722 722 goto out; 723 - /* Returning 1 indicates mmap_sem is downgraded to read. */ 723 + /* Returning 1 indicates mmap_lock is downgraded to read. */ 724 724 } else if (retval == 1) 725 725 downgraded = true; 726 726 ret = addr;
+3 -3
mm/nommu.c
··· 582 582 * add a VMA into a process's mm_struct in the appropriate place in the list 583 583 * and tree and add to the address space's page tree also if not an anonymous 584 584 * page 585 - * - should be called with mm->mmap_sem held writelocked 585 + * - should be called with mm->mmap_lock held writelocked 586 586 */ 587 587 static void add_vma_to_mm(struct mm_struct *mm, struct vm_area_struct *vma) 588 588 { ··· 696 696 697 697 /* 698 698 * look up the first VMA in which addr resides, NULL if none 699 - * - should be called with mm->mmap_sem at least held readlocked 699 + * - should be called with mm->mmap_lock at least held readlocked 700 700 */ 701 701 struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr) 702 702 { ··· 742 742 743 743 /* 744 744 * look up the first VMA that exactly matches addr 745 - * - should be called with mm->mmap_sem at least held readlocked 745 + * - should be called with mm->mmap_lock at least held readlocked 746 746 */ 747 747 static struct vm_area_struct *find_vma_exact(struct mm_struct *mm, 748 748 unsigned long addr,
+1 -1
mm/oom_kill.c
··· 898 898 /* 899 899 * Kill all user processes sharing victim->mm in other thread groups, if 900 900 * any. They don't get access to memory reserves, though, to avoid 901 - * depletion of all memory. This prevents mm->mmap_sem livelock when an 901 + * depletion of all memory. This prevents mm->mmap_lock livelock when an 902 902 * oom killed thread cannot exit because it requires the semaphore and 903 903 * it's contended by another thread trying to allocate memory itself. 904 904 * That thread will now get access to memory reserves since it has a
+3 -3
mm/pagewalk.c
··· 373 373 * caller-specific data to callbacks, @private should be helpful. 374 374 * 375 375 * Locking: 376 - * Callers of walk_page_range() and walk_page_vma() should hold @mm->mmap_sem, 376 + * Callers of walk_page_range() and walk_page_vma() should hold @mm->mmap_lock, 377 377 * because these functions traverse the vma list and/or access the vma's data. 378 378 */ 379 379 int walk_page_range(struct mm_struct *mm, unsigned long start, ··· 498 498 * Also see walk_page_range() for additional information. 499 499 * 500 500 * Locking: 501 - * This function can't require that the struct mm_struct::mmap_sem is held, 501 + * This function can't require that the struct mm_struct::mmap_lock is held, 502 502 * since @mapping may be mapped by multiple processes. Instead 503 503 * @mapping->i_mmap_rwsem must be held. This might have implications in the 504 504 * callbacks, and it's up to the caller to ensure that the 505 - * struct mm_struct::mmap_sem is not needed. 505 + * struct mm_struct::mmap_lock is not needed. 506 506 * 507 507 * Also this means that a caller can't rely on the struct 508 508 * vm_area_struct::vm_flags to be constant across a call,
+6 -6
mm/rmap.c
··· 21 21 * Lock ordering in mm: 22 22 * 23 23 * inode->i_mutex (while writing or truncating, not reading or faulting) 24 - * mm->mmap_sem 24 + * mm->mmap_lock 25 25 * page->flags PG_locked (lock_page) * (see hugetlbfs below) 26 26 * hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share) 27 27 * mapping->i_mmap_rwsem ··· 177 177 * to do any locking for the common case of already having 178 178 * an anon_vma. 179 179 * 180 - * This must be called with the mmap_sem held for reading. 180 + * This must be called with the mmap_lock held for reading. 181 181 */ 182 182 int __anon_vma_prepare(struct vm_area_struct *vma) 183 183 { ··· 1444 1444 if (!PageTransCompound(page)) { 1445 1445 /* 1446 1446 * Holding pte lock, we do *not* need 1447 - * mmap_sem here 1447 + * mmap_lock here 1448 1448 */ 1449 1449 mlock_vma_page(page); 1450 1450 } ··· 1817 1817 /* 1818 1818 * Note: remove_migration_ptes() cannot use page_lock_anon_vma_read() 1819 1819 * because that depends on page_mapped(); but not all its usages 1820 - * are holding mmap_sem. Users without mmap_sem are required to 1820 + * are holding mmap_lock. Users without mmap_lock are required to 1821 1821 * take a reference count to prevent the anon_vma disappearing 1822 1822 */ 1823 1823 anon_vma = page_anon_vma(page); ··· 1837 1837 * Find all the mappings of a page using the mapping pointer and the vma chains 1838 1838 * contained in the anon_vma struct it points to. 1839 1839 * 1840 - * When called from try_to_munlock(), the mmap_sem of the mm containing the vma 1840 + * When called from try_to_munlock(), the mmap_lock of the mm containing the vma 1841 1841 * where the page was found will be held for write. So, we won't recheck 1842 1842 * vm_flags for that VMA. That should be OK, because that vma shouldn't be 1843 1843 * LOCKED. ··· 1889 1889 * Find all the mappings of a page using the mapping pointer and the vma chains 1890 1890 * contained in the address_space struct it points to. 
1891 1891 * 1892 - * When called from try_to_munlock(), the mmap_sem of the mm containing the vma 1892 + * When called from try_to_munlock(), the mmap_lock of the mm containing the vma 1893 1893 * where the page was found will be held for write. So, we won't recheck 1894 1894 * vm_flags for that VMA. That should be OK, because that vma shouldn't be 1895 1895 * LOCKED.
+2 -2
mm/shmem.c
··· 2319 2319 PAGE_SIZE); 2320 2320 kunmap_atomic(page_kaddr); 2321 2321 2322 - /* fallback to copy_from_user outside mmap_sem */ 2322 + /* fallback to copy_from_user outside mmap_lock */ 2323 2323 if (unlikely(ret)) { 2324 2324 *pagep = page; 2325 2325 shmem_inode_unacct_blocks(inode, 1); ··· 4136 4136 loff_t size = vma->vm_end - vma->vm_start; 4137 4137 4138 4138 /* 4139 - * Cloning a new file under mmap_sem leads to a lock ordering conflict 4139 + * Cloning a new file under mmap_lock leads to a lock ordering conflict 4140 4140 * between XFS directory reading and selinux: since this file is only 4141 4141 * accessible to the user through its mapping, use S_PRIVATE flag to 4142 4142 * bypass file security, in the same way as shmem_kernel_file_setup().
+2 -2
mm/swap_state.c
··· 552 552 * This has been extended to use the NUMA policies from the mm triggering 553 553 * the readahead. 554 554 * 555 - * Caller must hold read mmap_sem if vmf->vma is not NULL. 555 + * Caller must hold read mmap_lock if vmf->vma is not NULL. 556 556 */ 557 557 struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask, 558 558 struct vm_fault *vmf) ··· 734 734 * Primitive swap readahead code. We simply read in a few pages whose 735 735 * virtual addresses are around the fault address in the same vma. 736 736 * 737 - * Caller must hold read mmap_sem if vmf->vma is not NULL. 737 + * Caller must hold read mmap_lock if vmf->vma is not NULL. 738 738 * 739 739 */ 740 740 static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
+4 -4
mm/userfaultfd.c
··· 76 76 PAGE_SIZE); 77 77 kunmap_atomic(page_kaddr); 78 78 79 - /* fallback to copy_from_user outside mmap_sem */ 79 + /* fallback to copy_from_user outside mmap_lock */ 80 80 if (unlikely(ret)) { 81 81 ret = -ENOENT; 82 82 *pagep = page; ··· 200 200 #ifdef CONFIG_HUGETLB_PAGE 201 201 /* 202 202 * __mcopy_atomic processing for HUGETLB vmas. Note that this routine is 203 - * called with mmap_sem held, it will release mmap_sem before returning. 203 + * called with mmap_lock held, it will release mmap_lock before returning. 204 204 */ 205 205 static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm, 206 206 struct vm_area_struct *dst_vma, ··· 247 247 248 248 retry: 249 249 /* 250 - * On routine entry dst_vma is set. If we had to drop mmap_sem and 250 + * On routine entry dst_vma is set. If we had to drop mmap_lock and 251 251 * retry, dst_vma will be set to NULL and we must lookup again. 252 252 */ 253 253 if (!dst_vma) { ··· 357 357 * private and shared mappings. See the routine 358 358 * restore_reserve_on_error for details. Unfortunately, we 359 359 * can not call restore_reserve_on_error now as it would 360 - * require holding mmap_sem. 360 + * require holding mmap_lock. 361 361 * 362 362 * If a reservation for the page existed in the reservation 363 363 * map of a private mapping, the map was modified to indicate
+1 -1
mm/util.c
··· 425 425 * @bypass_rlim: %true if checking RLIMIT_MEMLOCK should be skipped 426 426 * 427 427 * Assumes @task and @mm are valid (i.e. at least one reference on each), and 428 - * that mmap_sem is held as writer. 428 + * that mmap_lock is held as writer. 429 429 * 430 430 * Return: 431 431 * * 0 on success
+1 -1
security/keys/keyctl.c
··· 875 875 * 876 876 * Allocating a temporary buffer to hold the keys before 877 877 * transferring them to user buffer to avoid potential 878 - * deadlock involving page fault and mmap_sem. 878 + * deadlock involving page fault and mmap_lock. 879 879 * 880 880 * key_data_len = (buflen <= PAGE_SIZE) 881 881 * ? buflen : actual length of key data
+1 -1
sound/core/oss/pcm_oss.c
··· 2876 2876 2877 2877 if (runtime->oss.params) { 2878 2878 /* use mutex_trylock() for params_lock for avoiding a deadlock 2879 - * between mmap_sem and params_lock taken by 2879 + * between mmap_lock and params_lock taken by 2880 2880 * copy_from/to_user() in snd_pcm_oss_write/read() 2881 2881 */ 2882 2882 err = snd_pcm_oss_change_params(substream, true);
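The snd_pcm_oss comment above is guarding against a classic ABBA deadlock: one path takes mmap_lock and then needs params_lock (because copy_from_user() can fault), while this path holds params_lock's would-be partner order in reverse. Using trylock on the inner lock breaks the cycle by failing instead of blocking. A small userspace sketch of that pattern, with illustrative names (not the ALSA code itself):

```c
#include <pthread.h>

static pthread_mutex_t params_lock = PTHREAD_MUTEX_INITIALIZER;

/* Modeled on snd_pcm_oss_change_params(): rather than blocking on
 * params_lock (and possibly deadlocking against a holder that is
 * mid-fault with mmap_lock held), give up and let the caller retry. */
static int change_params_model(void)
{
	if (pthread_mutex_trylock(&params_lock) != 0)
		return -1;	/* would block: bail out instead */
	/* ... hardware parameters would be updated here ... */
	pthread_mutex_unlock(&params_lock);
	return 0;
}
```

The cost of trylock is a spurious failure under contention, which is why it is reserved for paths where the caller can retry; the alternative, a strict global lock order, was not available here because the fault path's order is fixed by the mm layer.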