Repository: Linux kernel mirror (for testing), git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

usb: mon: Fix a deadlock in usbmon between mmap and read

The problem arises because our read() function grabs the fetch lock on the
circular buffer, finds something of interest, then invokes copy_to_user()
straight from the buffer, which in turn takes mm->mmap_sem. At the same
time, the callback mon_bin_vma_fault() is invoked with mm->mmap_sem already
held. It attempts to take the fetch lock and deadlocks.

This patch does away with protecting the page list with any semaphores,
and instead relies on the kernel not closing the device while an mmap is
active in a process.

In addition, we prohibit re-sizing of the buffer while mmap is active.
This way, when the (now unlocked) fault is processed, it works with the
page that is intended to be mapped in, and not some other random page.
Note that this may have an ABI impact, but hopefully no legitimate
program is this wrong.

Signed-off-by: Pete Zaitcev <zaitcev@redhat.com>
Reported-by: syzbot+56f9673bb4cdcbeb0e92@syzkaller.appspotmail.com
Reviewed-by: Alan Stern <stern@rowland.harvard.edu>
Fixes: 46eb14a6e158 ("USB: fix usbmon BUG trigger")
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20191204203941.3503452b@suzdal.zaitcev.lan
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Authored by Pete Zaitcev, committed by Greg Kroah-Hartman
19e6317d 59120962

+21 -11
drivers/usb/mon/mon_bin.c
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1040,11 +1040,17 @@
 		mutex_lock(&rp->fetch_lock);
 		spin_lock_irqsave(&rp->b_lock, flags);
-		mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
-		kfree(rp->b_vec);
-		rp->b_vec = vec;
-		rp->b_size = size;
-		rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
-		rp->cnt_lost = 0;
+		if (rp->mmap_active) {
+			mon_free_buff(vec, size/CHUNK_SIZE);
+			kfree(vec);
+			ret = -EBUSY;
+		} else {
+			mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE);
+			kfree(rp->b_vec);
+			rp->b_vec = vec;
+			rp->b_size = size;
+			rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0;
+			rp->cnt_lost = 0;
+		}
 		spin_unlock_irqrestore(&rp->b_lock, flags);
 		mutex_unlock(&rp->fetch_lock);
 	}
@@ -1216,13 +1222,21 @@
 static void mon_bin_vma_open(struct vm_area_struct *vma)
 {
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active++;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 static void mon_bin_vma_close(struct vm_area_struct *vma)
 {
+	unsigned long flags;
+
 	struct mon_reader_bin *rp = vma->vm_private_data;
+	spin_lock_irqsave(&rp->b_lock, flags);
 	rp->mmap_active--;
+	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
 /*
@@ -1234,16 +1248,12 @@
 	unsigned long offset, chunk_idx;
 	struct page *pageptr;
 
-	mutex_lock(&rp->fetch_lock);
 	offset = vmf->pgoff << PAGE_SHIFT;
-	if (offset >= rp->b_size) {
-		mutex_unlock(&rp->fetch_lock);
+	if (offset >= rp->b_size)
 		return VM_FAULT_SIGBUS;
-	}
 	chunk_idx = offset / CHUNK_SIZE;
 	pageptr = rp->b_vec[chunk_idx].pg;
 	get_page(pageptr);
-	mutex_unlock(&rp->fetch_lock);
 	vmf->page = pageptr;
 	return 0;
 }