
binder: use per-vma lock in page reclaiming

Use per-vma locking in the shrinker's callback when reclaiming pages,
similar to the page installation logic. This minimizes contention with
unrelated vmas, improving performance. The mmap_lock is still acquired
if the per-vma lock cannot be obtained.

Cc: Suren Baghdasaryan <surenb@google.com>
Suggested-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20241210143114.661252-10-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Authored by Carlos Llamas, committed by Greg Kroah-Hartman
95bc2d4a 978ce3ed

+22 -7
drivers/android/binder_alloc.c
@@ -1143,19 +1143,28 @@
 	struct vm_area_struct *vma;
 	struct page *page_to_free;
 	unsigned long page_addr;
+	int mm_locked = 0;
 	size_t index;
 
 	if (!mmget_not_zero(mm))
 		goto err_mmget;
-	if (!mmap_read_trylock(mm))
-		goto err_mmap_read_lock_failed;
-	if (!mutex_trylock(&alloc->mutex))
-		goto err_get_alloc_mutex_failed;
 
 	index = mdata->page_index;
 	page_addr = alloc->vm_start + index * PAGE_SIZE;
 
-	vma = vma_lookup(mm, page_addr);
+	/* attempt per-vma lock first */
+	vma = lock_vma_under_rcu(mm, page_addr);
+	if (!vma) {
+		/* fall back to mmap_lock */
+		if (!mmap_read_trylock(mm))
+			goto err_mmap_read_lock_failed;
+		mm_locked = 1;
+		vma = vma_lookup(mm, page_addr);
+	}
+
+	if (!mutex_trylock(&alloc->mutex))
+		goto err_get_alloc_mutex_failed;
+
 	/*
 	 * Since a binder_alloc can only be mapped once, we ensure
 	 * the vma corresponds to this mapping by checking whether
@@ ... @@
 	}
 
 	mutex_unlock(&alloc->mutex);
-	mmap_read_unlock(mm);
+	if (mm_locked)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 	mmput_async(mm);
 	binder_free_page(page_to_free);
 
@@ ... @@
 err_invalid_vma:
 	mutex_unlock(&alloc->mutex);
 err_get_alloc_mutex_failed:
-	mmap_read_unlock(mm);
+	if (mm_locked)
+		mmap_read_unlock(mm);
+	else
+		vma_end_read(vma);
 err_mmap_read_lock_failed:
 	mmput_async(mm);
 err_mmget: