mm: vmalloc fix lazy unmapping cache aliasing

Jim Radford reported that the vmap subsystem rewrite sometimes caused his
VIVT ARM system to misbehave (it appeared to go into infinite loops trying
to fault pages into userspace).

We determined that the problem was most likely a cache aliasing issue.
flush_cache_vunmap was only being called at the moment the page tables were
to be taken down; with lazy unmapping, however, that can happen after the
page has already been freed and reallocated for something else. The
dangling alias may still have dirty data attached to it, which then gets
written back over the page's new contents.

The fix for this problem is to do the cache flushing when the caller has
called vunmap -- it would be a bug for them to write anything else to the
mapping at that point.

That appeared to solve Jim's problems.

Reported-by: Jim Radford <radford@blackbean.org>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Nick Piggin; committed by Linus Torvalds (b29acbdc 8650e51a)

+16 -4
mm/vmalloc.c
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -77,7 +77,6 @@
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset_k(addr);
-	flush_cache_vunmap(addr, end);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
@@ -542,14 +543,24 @@
 }
 
 /*
- * Free and unmap a vmap area
+ * Free and unmap a vmap area, caller ensuring flush_cache_vunmap had been
+ * called for the correct range previously.
  */
-static void free_unmap_vmap_area(struct vmap_area *va)
+static void free_unmap_vmap_area_noflush(struct vmap_area *va)
 {
 	va->flags |= VM_LAZY_FREE;
 	atomic_add((va->va_end - va->va_start) >> PAGE_SHIFT, &vmap_lazy_nr);
 	if (unlikely(atomic_read(&vmap_lazy_nr) > lazy_max_pages()))
 		try_purge_vmap_area_lazy();
+}
+
+/*
+ * Free and unmap a vmap area
+ */
+static void free_unmap_vmap_area(struct vmap_area *va)
+{
+	flush_cache_vunmap(va->va_start, va->va_end);
+	free_unmap_vmap_area_noflush(va);
 }
 
 static struct vmap_area *find_vmap_area(unsigned long addr)
@@ -743,7 +734,7 @@
 	spin_unlock(&vmap_block_tree_lock);
 	BUG_ON(tmp != vb);
 
-	free_unmap_vmap_area(vb->va);
+	free_unmap_vmap_area_noflush(vb->va);
 	call_rcu(&vb->rcu_head, rcu_free_vb);
 }
@@ -805,6 +796,9 @@
 
 	BUG_ON(size & ~PAGE_MASK);
 	BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
+
+	flush_cache_vunmap((unsigned long)addr, (unsigned long)addr + size);
+
 	order = get_order(size);
 
 	offset = (unsigned long)addr & (VMAP_BLOCK_SIZE - 1);