[PATCH] adapt page_lock_anon_vma() to PREEMPT_RCU

page_lock_anon_vma() uses spin_lock() to block RCU. This doesn't work with
PREEMPT_RCU; we have to take rcu_read_lock() explicitly. Otherwise, it is
theoretically possible that slab returns the anon_vma's memory to the system
before we do spin_unlock(&anon_vma->lock).
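
To see the hazard, an illustrative sketch (not part of the patch): under
classic RCU any preemption-disabled region is a read-side critical section,
so holding a spinlock was enough to hold off the grace period; PREEMPT_RCU
drops that implication, so the old pattern becomes:

	rcu_read_lock();
	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
	spin_lock(&anon_vma->lock);	/* classic RCU: spinlock implies
					 * preempt_disable(), which blocks
					 * the grace period */
	rcu_read_unlock();		/* PREEMPT_RCU: the grace period may
					 * complete right here ... */
	/* ... */
	spin_unlock(&anon_vma->lock);	/* ... so this may touch memory
					 * slab already gave back */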

[ Hugh points out that this only matters for PREEMPT_RCU, which isn't merged
yet, and may never be. Regardless, this patch is conceptually the
right thing to do, even if it doesn't matter at this point. - Linus ]

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Lameter <clameter@engr.sgi.com>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Oleg Nesterov and committed by Linus Torvalds (34bbd704, 48dba8ab)

+13 -4
mm/rmap.c
···
  */
 static struct anon_vma *page_lock_anon_vma(struct page *page)
 {
-	struct anon_vma *anon_vma = NULL;
+	struct anon_vma *anon_vma;
 	unsigned long anon_mapping;

 	rcu_read_lock();
···

 	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
 	spin_lock(&anon_vma->lock);
+	return anon_vma;
 out:
 	rcu_read_unlock();
-	return anon_vma;
+	return NULL;
+}
+
+static void page_unlock_anon_vma(struct anon_vma *anon_vma)
+{
+	spin_unlock(&anon_vma->lock);
+	rcu_read_unlock();
 }

 /*
···
 		if (!mapcount)
 			break;
 	}
-	spin_unlock(&anon_vma->lock);
+
+	page_unlock_anon_vma(anon_vma);
 	return referenced;
 }

···
 		if (ret == SWAP_FAIL || !page_mapped(page))
 			break;
 	}
-	spin_unlock(&anon_vma->lock);
+
+	page_unlock_anon_vma(anon_vma);
 	return ret;
 }

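
For context, a minimal caller-side sketch (hypothetical function name, shape
taken from the hunks above) of how the new helper pairs with
page_lock_anon_vma():

	static int page_referenced_anon_sketch(struct page *page)
	{
		int referenced = 0;
		struct anon_vma *anon_vma;

		/* Takes rcu_read_lock() and, on success, anon_vma->lock. */
		anon_vma = page_lock_anon_vma(page);
		if (!anon_vma)
			return referenced;

		/* ... walk the vmas on the anon_vma's list under the lock ... */

		/* Drops anon_vma->lock, then rcu_read_unlock(). */
		page_unlock_anon_vma(anon_vma);
		return referenced;
	}

The unlock order in page_unlock_anon_vma() matters: anon_vma->lock must be
released while the RCU read section still keeps slab from returning the
anon_vma's memory to the system.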