Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86, kmmio/mmiotrace: Fix double free of kmmio_fault_pages

After every iounmap mmiotrace has to free kmmio_fault_pages, but
it can't do it directly, so it defers freeing by RCU.

It usually works, but when mmiotraced code calls ioremap-iounmap
multiple times without sleeping in between (so RCU won't kick in
and start freeing), ioremap can return the same virtual address,
so at every iounmap mmiotrace will schedule the same pages for
release. Obviously it will explode on the second free.

Fix it by marking kmmio_fault_pages which are already scheduled
for release and not adding them a second time.

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Tested-by: Marcin Kocielnicki <koriakin@0x04.net>
Tested-by: Shinpei KATO <shinpei@il.is.s.u-tokyo.ac.jp>
Acked-by: Pekka Paalanen <pq@iki.fi>
Cc: Stuart Bennett <stuart@freedesktop.org>
Cc: Marcin Kocielnicki <koriakin@0x04.net>
Cc: nouveau@lists.freedesktop.org
Cc: <stable@kernel.org>
LKML-Reference: <20100613215654.GA3829@joi.lan>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Authored by Marcin Slusarz, committed by Ingo Molnar
8b8f79b9 7e27d6e7

+35 -3
+13 -3
arch/x86/mm/kmmio.c
@@ -45,6 +45,8 @@
 	 * Protected by kmmio_lock, when linked into kmmio_page_table.
 	 */
 	int count;
+
+	bool scheduled_for_release;
 };
 
 struct kmmio_delayed_release {
@@ -398,7 +400,10 @@
 	BUG_ON(f->count < 0);
 	if (!f->count) {
 		disarm_kmmio_fault_page(f);
-		f->release_next = *release_list;
-		*release_list = f;
+		if (!f->scheduled_for_release) {
+			f->release_next = *release_list;
+			*release_list = f;
+			f->scheduled_for_release = true;
+		}
 	}
 }
@@ -471,7 +476,9 @@
 			prevp = &f->release_next;
 		} else {
 			*prevp = f->release_next;
+			f->release_next = NULL;
+			f->scheduled_for_release = false;
 		}
-		f = f->release_next;
+		f = *prevp;
 	}
 	spin_unlock_irqrestore(&kmmio_lock, flags);
@@ -509,6 +516,9 @@
 	list_del_rcu(&p->list);
 	kmmio_count--;
 	spin_unlock_irqrestore(&kmmio_lock, flags);
+
+	if (!release_list)
+		return;
 
 	drelease = kmalloc(sizeof(*drelease), GFP_ATOMIC);
 	if (!drelease) {
+22
arch/x86/mm/testmmiotrace.c
@@ -90,6 +90,27 @@
 	iounmap(p);
 }
 
+/*
+ * Tests how mmiotrace behaves in face of multiple ioremap / iounmaps in
+ * a short time. We had a bug in deferred freeing procedure which tried
+ * to free this region multiple times (ioremap can reuse the same address
+ * for many mappings).
+ */
+static void do_test_bulk_ioremapping(void)
+{
+	void __iomem *p;
+	int i;
+
+	for (i = 0; i < 10; ++i) {
+		p = ioremap_nocache(mmio_address, PAGE_SIZE);
+		if (p)
+			iounmap(p);
+	}
+
+	/* Force freeing. If it will crash we will know why. */
+	synchronize_rcu();
+}
+
 static int __init init(void)
 {
 	unsigned long size = (read_far) ? (8 << 20) : (16 << 10);
@@ -104,6 +125,7 @@
 		   "and writing 16 kB of rubbish in there.\n",
 		   size >> 10, mmio_address);
 	do_test(size);
+	do_test_bulk_ioremapping();
 	pr_info("All done.\n");
 	return 0;
 }