mm: vmalloc fix lazy unmapping cache aliasing

Jim Radford has reported that the vmap subsystem rewrite was sometimes
causing his VIVT ARM system to behave strangely (it appeared to go into
infinite loops trying to fault pages into userspace).

We determined that the problem was most likely due to a cache aliasing
issue. flush_cache_vunmap was only being called at the moment the page
tables were taken down, but with lazy unmapping that can happen after the
page has already been freed and reallocated for something else. The
dangling alias may still have dirty data attached to it.
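
Condensed, the hazardous ordering on a VIVT cache looked roughly like this
(the call chain is simplified and the exact purge path is illustrative, not
a literal trace):

    vunmap(addr)                      caller is finished with the mapping;
      free_unmap_vmap_area(va)        no cache flush, the area is only
                                      queued on the lazy-free list

    ... the backing pages are freed and may be reallocated, e.g.
        faulted into a user process ...

    lazy purge (much later)
      vunmap_page_range(start, end)
        flush_cache_vunmap(start, end)   too late: dirty lines still in the
                                         stale kernel alias can be written
                                         back over the page's new contents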

The fix for this problem is to do the cache flushing when the caller calls
vunmap -- it would be a bug for the caller to write anything more to the
mapping after that point.
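
With this change the ordering becomes (again condensed; the names are those
used in the patch below):

    vunmap(addr)
      free_unmap_vmap_area(va)
        flush_cache_vunmap(va->va_start, va->va_end)   flush while the caller
                                                       still owns the mapping
        free_unmap_vmap_area_noflush(va)               only queue for lazy unmap

    lazy purge (later)                                 tears down the page tables
                                                       and flushes the TLB; the
                                                       data cache no longer
                                                       matters at this point

The per-cpu vmap block path is handled the same way: vb_free() flushes the
range it is releasing, so free_vmap_block() can use the _noflush variant,
every fragment of the block having already been flushed as it was freed.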

That appeared to solve Jim's problems.

Reported-by: Jim Radford <radford@blackbean.org>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


+16 -4
mm/vmalloc.c
···
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset_k(addr);
-	flush_cache_vunmap(addr, end);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
···
 }
 
 /*
- * Free and unmap a vmap area
+ * Free and unmap a vmap area, caller ensuring flush_cache_vunmap had been
+ * called for the correct range previously.
  */
-static void free_unmap_vmap_area(struct vmap_area *va)
+static void free_unmap_vmap_area_noflush(struct vmap_area *va)
 {
 	va->flags |= VM_LAZY_FREE;
 	atomic_add((va->va_end - va->va_start) >> PAGE_SHIFT, &vmap_lazy_nr);
 	if (unlikely(atomic_read(&vmap_lazy_nr) > lazy_max_pages()))
 		try_purge_vmap_area_lazy();
+}
+
+/*
+ * Free and unmap a vmap area
+ */
+static void free_unmap_vmap_area(struct vmap_area *va)
+{
+	flush_cache_vunmap(va->va_start, va->va_end);
+	free_unmap_vmap_area_noflush(va);
 }
 
 static struct vmap_area *find_vmap_area(unsigned long addr)
···
 	spin_unlock(&vmap_block_tree_lock);
 	BUG_ON(tmp != vb);
 
-	free_unmap_vmap_area(vb->va);
+	free_unmap_vmap_area_noflush(vb->va);
 	call_rcu(&vb->rcu_head, rcu_free_vb);
 }
 
···
 
 	BUG_ON(size & ~PAGE_MASK);
 	BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
+
+	flush_cache_vunmap((unsigned long)addr, (unsigned long)addr + size);
+
 	order = get_order(size);
 
 	offset = (unsigned long)addr & (VMAP_BLOCK_SIZE - 1);