binder_alloc: add missing mmap_lock calls when using the VMA

Take the mmap_read_lock() when using the VMA in binder_alloc_print_pages()
and when checking for a VMA in binder_alloc_new_buf_locked().

It is worth noting that binder_alloc_new_buf_locked() drops the mmap read
lock after it verifies a VMA exists, but the lock may be taken again
deeper in the call stack, if necessary.

Link: https://lkml.kernel.org/r/20220810160209.1630707-1-Liam.Howlett@oracle.com
Fixes: a43cfc87caaf ("android: binder: stop saving a pointer to the VMA")
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reported-by: Ondrej Mosnacek <omosnace@redhat.com>
Reported-by: <syzbot+a7b60a176ec13cafb793@syzkaller.appspotmail.com>
Acked-by: Carlos Llamas <cmllamas@google.com>
Tested-by: Ondrej Mosnacek <omosnace@redhat.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Christian Brauner (Microsoft) <brauner@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Martijn Coenen <maco@android.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Todd Kjos <tkjos@android.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: "Arve Hjønnevåg" <arve@android.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Liam Howlett and committed by Andrew Morton (44e602b4 fcab34b4)

+21 -10
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -402,12 +402,15 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 	size_t size, data_offsets_size;
 	int ret;
 
+	mmap_read_lock(alloc->vma_vm_mm);
 	if (!binder_alloc_get_vma(alloc)) {
+		mmap_read_unlock(alloc->vma_vm_mm);
 		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
 				   "%d: binder_alloc_buf, no vma\n",
 				   alloc->pid);
 		return ERR_PTR(-ESRCH);
 	}
+	mmap_read_unlock(alloc->vma_vm_mm);
 
 	data_offsets_size = ALIGN(data_size, sizeof(void *)) +
 		ALIGN(offsets_size, sizeof(void *));
@@ -932,17 +935,25 @@ void binder_alloc_print_pages(struct seq_file *m,
 	 * Make sure the binder_alloc is fully initialized, otherwise we might
 	 * read inconsistent state.
 	 */
-	if (binder_alloc_get_vma(alloc) != NULL) {
-		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-			page = &alloc->pages[i];
-			if (!page->page_ptr)
-				free++;
-			else if (list_empty(&page->lru))
-				active++;
-			else
-				lru++;
-		}
+
+	mmap_read_lock(alloc->vma_vm_mm);
+	if (binder_alloc_get_vma(alloc) == NULL) {
+		mmap_read_unlock(alloc->vma_vm_mm);
+		goto uninitialized;
 	}
+
+	mmap_read_unlock(alloc->vma_vm_mm);
+	for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+		page = &alloc->pages[i];
+		if (!page->page_ptr)
+			free++;
+		else if (list_empty(&page->lru))
+			active++;
+		else
+			lru++;
+	}
+
+uninitialized:
 	mutex_unlock(&alloc->mutex);
 	seq_printf(m, "  pages: %d:%d:%d\n", active, lru, free);
 	seq_printf(m, "  pages high watermark: %zu\n", alloc->pages_high);