
mm: make compound_head() robust

Hugh has pointed out that a compound_head() call can be unsafe in some
contexts. Here's one example:

CPU0					CPU1

isolate_migratepages_block()
  page_count()
    compound_head()
      !!PageTail() == true
					put_page()
					  tail->first_page = NULL
      head = tail->first_page
					alloc_pages(__GFP_COMP)
					  prep_compound_page()
					    tail->first_page = head
					    __SetPageTail(p);
      !!PageTail() == true
    <head == NULL dereferencing>

The race is purely theoretical. I don't think it's possible to trigger it
in practice. But who knows.

We can fix the race by changing how we encode PageTail() and
compound_head() within struct page, so that both can be updated in one
shot.

The patch introduces page->compound_head in the third double word block,
in front of compound_dtor and compound_order. Bit 0 encodes PageTail()
and, if bit 0 is set, the remaining bits are a pointer to the head page.
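The encoding can be sketched in userspace C. This is an illustrative model, not the kernel code: mock_page is a stand-in for struct page, and the helpers mirror the shape of set_compound_head()/compound_head() from the patch (minus READ_ONCE/WRITE_ONCE, which only matter under concurrency). Because mock_page is word-aligned, a head-page pointer always has bit 0 clear, so pointer + 1 is unambiguous:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct page; only the overlaid word is modelled. */
struct mock_page {
	unsigned long compound_head;	/* bit 0 set => tail page */
};

/* Mark 'tail' as a tail page pointing at 'head': one store sets both
 * the PageTail() flag (bit 0) and the head pointer (remaining bits). */
static void mock_set_compound_head(struct mock_page *tail,
				   struct mock_page *head)
{
	tail->compound_head = (unsigned long)head + 1;
}

static void mock_clear_compound_head(struct mock_page *tail)
{
	tail->compound_head = 0;
}

static int mock_PageTail(const struct mock_page *page)
{
	return page->compound_head & 1;
}

/* Decode: strip the tag bit to recover the head-page pointer. */
static struct mock_page *mock_compound_head(struct mock_page *page)
{
	unsigned long head = page->compound_head;

	if (head & 1)
		return (struct mock_page *)(head - 1);
	return page;
}
```

Because flag and pointer live in the same word, a reader sees either "tail, with valid head pointer" or "not tail"; the old two-field window in the race above cannot occur.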

The patch moves page->pmd_huge_pte out of the first word, just in case an
architecture defines pgtable_t as something that can have bit 0 set.

hugetlb_cgroup uses page->lru.next in the second tail page to store a
pointer to struct hugetlb_cgroup. The patch switches it to use
page->private in the second tail page instead. The space is free since
->first_page is removed from the union.
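A userspace sketch of that switch (mock_page and mock_hugetlb_cgroup are illustrative stand-ins, not the kernel definitions; the real accessors also check HUGETLB_CGROUP_MIN_ORDER first):

```c
#include <assert.h>
#include <stddef.h>

struct mock_hugetlb_cgroup { int counter; };

/* Stand-in for struct page; only the ->private word is modelled. */
struct mock_page { unsigned long private; };

/* mirrors: page[2].private = (unsigned long)h_cg; */
static void mock_set_hugetlb_cgroup(struct mock_page *page,
				    struct mock_hugetlb_cgroup *h_cg)
{
	page[2].private = (unsigned long)h_cg;
}

/* mirrors: return (struct hugetlb_cgroup *)page[2].private; */
static struct mock_hugetlb_cgroup *
mock_hugetlb_cgroup_from_page(struct mock_page *page)
{
	return (struct mock_hugetlb_cgroup *)page[2].private;
}
```

Using page->private rather than page->lru.next matters here because ->lru.next now shares storage with ->compound_head in the union.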

The patch also opens the possibility of removing the
HUGETLB_CGROUP_MIN_ORDER limitation, since there is now space in the
first tail page to store the struct hugetlb_cgroup pointer. But that is
out of scope for this patch.

That means page->compound_head shares storage space with:

- page->lru.next;
- page->next;
- page->rcu_head.next;

That's too long a list to be absolutely sure, but it looks like nobody
uses bit 0 of the word.

page->rcu_head.next is guaranteed[1] to have bit 0 clear as long as we
use call_rcu(), call_rcu_bh(), call_rcu_sched(), or call_srcu(). But a
future call_rcu_lazy() is not allowed, as it makes use of the bit and we
could get a false-positive PageTail().
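The overlay is safe only because every other user of that word keeps bit 0 clear; for the list/rcu 'next' pointers this holds because struct page is word-aligned, so any pointer to one is an even address. A minimal userspace check of that reasoning (mock_page is an illustrative stand-in):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for struct page; only the overlaid word is modelled.
 * Alignment of the struct itself guarantees even addresses. */
struct mock_page { unsigned long compound_head; };

/* What PageTail() boils down to in the new scheme: test bit 0 of the
 * word, whatever it currently holds (head pointer, lru.next, ...). */
static int mock_looks_like_tail(const struct mock_page *page)
{
	return page->compound_head & 1;
}
```

A pointer stored in the overlaid word (as rcu_head.next would be) never makes the page look like a tail, precisely because the pointee is at least word-aligned.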

[1] http://lkml.kernel.org/g/20150827163634.GD4029@linux.vnet.ibm.com

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Kirill A. Shutemov, committed by Linus Torvalds
1d798ca3 f1e61557

89 additions, 182 deletions
Documentation/vm/split_page_table_lock | +2 -2

···
 which must be called on PTE table allocation / freeing.
 
 Make sure the architecture doesn't use slab allocator for page table
-allocation: slab uses page->slab_cache and page->first_page for its pages.
-These fields share storage with page->ptl.
+allocation: slab uses page->slab_cache for its pages.
+This field shares storage with page->ptl.
 
 PMD split lock only makes sense if you have more than two page table
 levels.
arch/xtensa/configs/iss_defconfig | -1

···
 # CONFIG_SPARSEMEM_MANUAL is not set
 CONFIG_FLATMEM=y
 CONFIG_FLAT_NODE_MEM_MAP=y
-CONFIG_PAGEFLAGS_EXTENDED=y
 CONFIG_SPLIT_PTLOCK_CPUS=4
 # CONFIG_PHYS_ADDR_T_64BIT is not set
 CONFIG_ZONE_DMA_FLAG=1
include/linux/hugetlb_cgroup.h | +2 -2

···
 
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
-	return (struct hugetlb_cgroup *)page[2].lru.next;
+	return (struct hugetlb_cgroup *)page[2].private;
 }
 
 static inline
···
 
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
-	page[2].lru.next = (void *)h_cg;
+	page[2].private = (unsigned long)h_cg;
 	return 0;
 }
 
include/linux/mm.h | +3 -50

···
 #endif
 }
 
-static inline struct page *compound_head_by_tail(struct page *tail)
-{
-	struct page *head = tail->first_page;
-
-	/*
-	 * page->first_page may be a dangling pointer to an old
-	 * compound page, so recheck that it is still a tail
-	 * page before returning.
-	 */
-	smp_rmb();
-	if (likely(PageTail(tail)))
-		return head;
-	return tail;
-}
-
-/*
- * Since either compound page could be dismantled asynchronously in THP
- * or we access asynchronously arbitrary positioned struct page, there
- * would be tail flag race. To handle this race, we should call
- * smp_rmb() before checking tail flag. compound_head_by_tail() did it.
- */
-static inline struct page *compound_head(struct page *page)
-{
-	if (unlikely(PageTail(page)))
-		return compound_head_by_tail(page);
-	return page;
-}
-
-/*
- * If we access compound page synchronously such as access to
- * allocated page, there is no need to handle tail flag race, so we can
- * check tail flag directly without any synchronization primitive.
- */
-static inline struct page *compound_head_fast(struct page *page)
-{
-	if (unlikely(PageTail(page)))
-		return page->first_page;
-	return page;
-}
-
 /*
  * The atomic page->_mapcount, starts from -1: so that transitions
  * both from it and to it can be tracked, using atomic_inc_and_test
···
 	VM_BUG_ON_PAGE(!PageTail(page), page);
 	VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
 	VM_BUG_ON_PAGE(atomic_read(&page->_count) != 0, page);
-	if (compound_tail_refcounted(page->first_page))
+	if (compound_tail_refcounted(compound_head(page)))
 		atomic_inc(&page->_mapcount);
 }
 
···
 {
 	struct page *page = virt_to_page(x);
 
-	/*
-	 * We don't need to worry about synchronization of tail flag
-	 * when we call virt_to_head_page() since it is only called for
-	 * already allocated page and this page won't be freed until
-	 * this virt_to_head_page() is finished. So use _fast variant.
-	 */
-	return compound_head_fast(page);
+	return compound_head(page);
 }
 
 /*
···
 	 * with 0. Make sure nobody took it in use in between.
 	 *
 	 * It can happen if arch try to use slab for page table allocation:
-	 * slab code uses page->slab_cache and page->first_page (for tail
-	 * pages), which share storage with page->ptl.
+	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
 	if (!ptlock_alloc(page))
include/linux/mm_types.h | +18 -4

···
 		};
 	};
 
-	/* Third double word block */
+	/*
+	 * Third double word block
+	 *
+	 * WARNING: bit 0 of the first word encode PageTail(). That means
+	 * the rest users of the storage space MUST NOT use the bit to
+	 * avoid collision and false-positive PageTail().
+	 */
 	union {
 		struct list_head lru;	/* Pageout list, eg. active_list
 					 * protected by zone->lru_lock !
···
 		struct rcu_head rcu_head;	/* Used by SLAB
 						 * when destroying via RCU
 						 */
-		/* First tail page of compound page */
+		/* Tail pages of compound page */
 		struct {
+			unsigned long compound_head; /* If bit zero is set */
+
+			/* First tail page only */
 			unsigned short int compound_dtor;
 			unsigned short int compound_order;
 		};
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
-		pgtable_t pmd_huge_pte; /* protected by page->ptl */
+		struct {
+			unsigned long __pad;	/* do not overlay pmd_huge_pte
+						 * with compound_head to avoid
+						 * possible bit 0 collision.
+						 */
+			pgtable_t pmd_huge_pte; /* protected by page->ptl */
+		};
 #endif
 	};
···
 #endif
 #endif
 		struct kmem_cache *slab_cache;	/* SL[AU]B: Pointer to slab */
-		struct page *first_page;	/* Compound tail pages */
 	};
 
 #ifdef CONFIG_MEMCG
include/linux/page-flags.h | +25 -69

···
 	PG_private,		/* If pagecache, has fs-private data */
 	PG_private_2,		/* If pagecache, has fs aux data */
 	PG_writeback,		/* Page is under writeback */
-#ifdef CONFIG_PAGEFLAGS_EXTENDED
 	PG_head,		/* A head page */
-	PG_tail,		/* A tail page */
-#else
-	PG_compound,		/* A compound page */
-#endif
 	PG_swapcache,		/* Swap page: swp_entry_t in private */
 	PG_mappedtodisk,	/* Has blocks allocated on-disk */
 	PG_reclaim,		/* To be reclaimed asap */
···
 	test_set_page_writeback_keepwrite(page);
 }
 
-#ifdef CONFIG_PAGEFLAGS_EXTENDED
-/*
- * System with lots of page flags available. This allows separate
- * flags for PageHead() and PageTail() checks of compound pages so that bit
- * tests can be used in performance sensitive paths. PageCompound is
- * generally not used in hot code paths except arch/powerpc/mm/init_64.c
- * and arch/powerpc/kvm/book3s_64_vio_hv.c which use it to detect huge pages
- * and avoid handling those in real mode.
- */
 __PAGEFLAG(Head, head) CLEARPAGEFLAG(Head, head)
-__PAGEFLAG(Tail, tail)
+
+static inline int PageTail(struct page *page)
+{
+	return READ_ONCE(page->compound_head) & 1;
+}
+
+static inline void set_compound_head(struct page *page, struct page *head)
+{
+	WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
+}
+
+static inline void clear_compound_head(struct page *page)
+{
+	WRITE_ONCE(page->compound_head, 0);
+}
+
+static inline struct page *compound_head(struct page *page)
+{
+	unsigned long head = READ_ONCE(page->compound_head);
+
+	if (unlikely(head & 1))
+		return (struct page *) (head - 1);
+	return page;
+}
 
 static inline int PageCompound(struct page *page)
 {
-	return page->flags & ((1L << PG_head) | (1L << PG_tail));
+	return PageHead(page) || PageTail(page);
 
 }
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
···
 #endif
 
 #define PG_head_mask ((1L << PG_head))
-
-#else
-/*
- * Reduce page flag use as much as possible by overlapping
- * compound page flags with the flags used for page cache pages. Possible
- * because PageCompound is always set for compound pages and not for
- * pages on the LRU and/or pagecache.
- */
-TESTPAGEFLAG(Compound, compound)
-__SETPAGEFLAG(Head, compound) __CLEARPAGEFLAG(Head, compound)
-
-/*
- * PG_reclaim is used in combination with PG_compound to mark the
- * head and tail of a compound page. This saves one page flag
- * but makes it impossible to use compound pages for the page cache.
- * The PG_reclaim bit would have to be used for reclaim or readahead
- * if compound pages enter the page cache.
- *
- * PG_compound & PG_reclaim	=> Tail page
- * PG_compound & ~PG_reclaim	=> Head page
- */
-#define PG_head_mask ((1L << PG_compound))
-#define PG_head_tail_mask ((1L << PG_compound) | (1L << PG_reclaim))
-
-static inline int PageHead(struct page *page)
-{
-	return ((page->flags & PG_head_tail_mask) == PG_head_mask);
-}
-
-static inline int PageTail(struct page *page)
-{
-	return ((page->flags & PG_head_tail_mask) == PG_head_tail_mask);
-}
-
-static inline void __SetPageTail(struct page *page)
-{
-	page->flags |= PG_head_tail_mask;
-}
-
-static inline void __ClearPageTail(struct page *page)
-{
-	page->flags &= ~PG_head_tail_mask;
-}
-
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-static inline void ClearPageCompound(struct page *page)
-{
-	BUG_ON((page->flags & PG_head_tail_mask) != (1 << PG_compound));
-	clear_bit(PG_compound, &page->flags);
-}
-#endif
-
-#endif /* !PAGEFLAGS_EXTENDED */
 
 #ifdef CONFIG_HUGETLB_PAGE
 int PageHuge(struct page *page);
mm/Kconfig | -12

···
 	depends on MEMORY_HOTPLUG && ARCH_ENABLE_MEMORY_HOTREMOVE
 	depends on MIGRATION
 
-#
-# If we have space for more page flags then we can enable additional
-# optimizations and functionality.
-#
-# Regular Sparsemem takes page flag bits for the sectionid if it does not
-# use a virtual memmap. Disable extended page flags for 32 bit platforms
-# that require the use of a sectionid in the page flags.
-#
-config PAGEFLAGS_EXTENDED
-	def_bool y
-	depends on 64BIT || SPARSEMEM_VMEMMAP || !SPARSEMEM
-
 # Heavily threaded applications may benefit from splitting the mm-wide
 # page_table_lock, so that faults on different parts of the user address
 # space can be handled with less contention: split it at this NR_CPUS.
mm/debug.c | -5

···
 	{1UL << PG_private,		"private"	},
 	{1UL << PG_private_2,		"private_2"	},
 	{1UL << PG_writeback,		"writeback"	},
-#ifdef CONFIG_PAGEFLAGS_EXTENDED
 	{1UL << PG_head,		"head"		},
-	{1UL << PG_tail,		"tail"		},
-#else
-	{1UL << PG_compound,		"compound"	},
-#endif
 	{1UL << PG_swapcache,		"swapcache"	},
 	{1UL << PG_mappedtodisk,	"mappedtodisk"	},
 	{1UL << PG_reclaim,		"reclaim"	},
mm/huge_memory.c | +1 -2

···
 					 (1L << PG_unevictable)));
 	page_tail->flags |= (1L << PG_dirty);
 
-	/* clear PageTail before overwriting first_page */
-	smp_wmb();
+	clear_compound_head(page_tail);
 
 	if (page_is_young(page))
 		set_page_young(page_tail);
mm/hugetlb.c | +2 -6

···
 	struct page *p = page + 1;
 
 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
-		__ClearPageTail(p);
+		clear_compound_head(p);
 		set_page_refcounted(p);
-		p->first_page = NULL;
 	}
 
 	set_compound_order(page, 0);
···
 		 */
 		__ClearPageReserved(p);
 		set_page_count(p, 0);
-		p->first_page = page;
-		/* Make sure p->first_page is always valid for PageTail() */
-		smp_wmb();
-		__SetPageTail(p);
+		set_compound_head(p, page);
 	}
 }
 
mm/hugetlb_cgroup.c | +1 -1

···
 	/*
 	 * Add cgroup control files only if the huge page consists
 	 * of more than two normal pages. This is because we use
-	 * page[2].lru.next for storing cgroup details.
+	 * page[2].private for storing cgroup details.
 	 */
 	if (huge_page_order(h) >= HUGETLB_CGROUP_MIN_ORDER)
 		__hugetlb_cgroup_file_init(hstate_index(h));
mm/internal.h | +2 -2

···
 	 * speculative page access (like in
 	 * page_cache_get_speculative()) on tail pages.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&page->first_page->_count) <= 0, page);
+	VM_BUG_ON_PAGE(atomic_read(&compound_head(page)->_count) <= 0, page);
 	if (get_page_head)
-		atomic_inc(&page->first_page->_count);
+		atomic_inc(&compound_head(page)->_count);
 	get_huge_page_tail(page);
 }
 
mm/memory-failure.c | -7

···
 #define lru		(1UL << PG_lru)
 #define swapbacked	(1UL << PG_swapbacked)
 #define head		(1UL << PG_head)
-#define tail		(1UL << PG_tail)
-#define compound	(1UL << PG_compound)
 #define slab		(1UL << PG_slab)
 #define reserved	(1UL << PG_reserved)
···
 	 */
 	{ slab,		slab,		MF_MSG_SLAB,	me_kernel },
 
-#ifdef CONFIG_PAGEFLAGS_EXTENDED
 	{ head,		head,		MF_MSG_HUGE,		me_huge_page },
-	{ tail,		tail,		MF_MSG_HUGE,		me_huge_page },
-#else
-	{ compound,	compound,	MF_MSG_HUGE,		me_huge_page },
-#endif
 
 	{ sc|dirty,	sc|dirty,	MF_MSG_DIRTY_SWAPCACHE,	me_swapcache_dirty },
 	{ sc|dirty,	sc,		MF_MSG_CLEAN_SWAPCACHE,	me_swapcache_clean },
mm/page_alloc.c | +31 -17

···
 /*
  * Higher-order pages are called "compound pages".  They are structured thusly:
  *
- * The first PAGE_SIZE page is called the "head page".
+ * The first PAGE_SIZE page is called the "head page" and have PG_head set.
  *
- * The remaining PAGE_SIZE pages are called "tail pages".
+ * The remaining PAGE_SIZE pages are called "tail pages". PageTail() is encoded
+ * in bit 0 of page->compound_head. The rest of bits is pointer to head page.
  *
- * All pages have PG_compound set.  All tail pages have their ->first_page
- * pointing at the head page.
+ * The first tail page's ->compound_dtor holds the offset in array of compound
+ * page destructors. See compound_page_dtors.
  *
- * The first tail page's ->lru.next holds the address of the compound page's
- * put_page() function.  Its ->lru.prev holds the order of allocation.
+ * The first tail page's ->compound_order holds the order of allocation.
  * This usage means that zero-order pages may not be compound.
  */
···
 	for (i = 1; i < nr_pages; i++) {
 		struct page *p = page + i;
 		set_page_count(p, 0);
-		p->first_page = page;
-		/* Make sure p->first_page is always valid for PageTail() */
-		smp_wmb();
-		__SetPageTail(p);
+		set_compound_head(p, page);
 	}
 }
···
 
 static int free_tail_pages_check(struct page *head_page, struct page *page)
 {
-	if (!IS_ENABLED(CONFIG_DEBUG_VM))
-		return 0;
+	int ret = 1;
+
+	/*
+	 * We rely page->lru.next never has bit 0 set, unless the page
+	 * is PageTail(). Let's make sure that's true even for poisoned ->lru.
+	 */
+	BUILD_BUG_ON((unsigned long)LIST_POISON1 & 1);
+
+	if (!IS_ENABLED(CONFIG_DEBUG_VM)) {
+		ret = 0;
+		goto out;
+	}
 	if (unlikely(!PageTail(page))) {
 		bad_page(page, "PageTail not set", 0);
-		return 1;
+		goto out;
 	}
-	if (unlikely(page->first_page != head_page)) {
-		bad_page(page, "first_page not consistent", 0);
-		return 1;
+	if (unlikely(compound_head(page) != head_page)) {
+		bad_page(page, "compound_head not consistent", 0);
+		goto out;
 	}
-	return 0;
+	ret = 0;
+out:
+	clear_compound_head(page);
+	return ret;
 }
 
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
···
 		struct page *page = pfn_to_page(start_pfn);
 
 		init_reserved_page(start_pfn);
+
+		/* Avoid false-positive PageTail() */
+		INIT_LIST_HEAD(&page->lru);
+
 		SetPageReserved(page);
 	}
 }
mm/swap.c | +2 -2

···
 		__put_single_page(page);
 		return;
 	}
-	VM_BUG_ON_PAGE(page_head != page->first_page, page);
+	VM_BUG_ON_PAGE(page_head != compound_head(page), page);
 	/*
 	 * We can release the refcount taken by
 	 * get_page_unless_zero() now that
···
 	 * Case 3 is possible, as we may race with
 	 * __split_huge_page_refcount tearing down a THP page.
 	 */
-	page_head = compound_head_by_tail(page);
+	page_head = compound_head(page);
 	if (!__compound_tail_refcounted(page_head))
 		put_unrefcounted_compound_page(page_head, page);
 	else