[PATCH] adapt page_lock_anon_vma() to PREEMPT_RCU

page_lock_anon_vma() uses spin_lock() to block RCU. This doesn't work with
PREEMPT_RCU; we have to do rcu_read_lock() explicitly. Otherwise, it is
theoretically possible that slab returns anon_vma's memory to the system
before we do spin_unlock(&anon_vma->lock).

[ Hugh points out that this only matters for PREEMPT_RCU, which isn't merged
yet, and may never be. Regardless, this patch is conceptually the
right thing to do, even if it doesn't matter at this point. - Linus ]

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Lameter <clameter@engr.sgi.com>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Oleg Nesterov, committed by Linus Torvalds (34bbd704, 48dba8ab)

 mm/rmap.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -183,7 +183,7 @@
  */
 static struct anon_vma *page_lock_anon_vma(struct page *page)
 {
-	struct anon_vma *anon_vma = NULL;
+	struct anon_vma *anon_vma;
 	unsigned long anon_mapping;
 
 	rcu_read_lock();
@@ -195,9 +195,16 @@
 
 	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
 	spin_lock(&anon_vma->lock);
+	return anon_vma;
 out:
 	rcu_read_unlock();
-	return anon_vma;
+	return NULL;
+}
+
+static void page_unlock_anon_vma(struct anon_vma *anon_vma)
+{
+	spin_unlock(&anon_vma->lock);
+	rcu_read_unlock();
 }
 
 /*
@@ -333,7 +340,8 @@
 		if (!mapcount)
 			break;
 	}
-	spin_unlock(&anon_vma->lock);
+
+	page_unlock_anon_vma(anon_vma);
 	return referenced;
 }
 
@@ -802,7 +810,8 @@
 		if (ret == SWAP_FAIL || !page_mapped(page))
 			break;
 	}
-	spin_unlock(&anon_vma->lock);
+
+	page_unlock_anon_vma(anon_vma);
 	return ret;
 }
 