Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

HWPOISON, hugetlb: detect hwpoison in hugetlb code

This patch blocks access to a hwpoisoned hugepage and skips unmapping
it again (the memory error handler has already unmapped it).

Dependency:
"HWPOISON, hugetlb: enable error handling path for hugepage"

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Fengguang Wu <fengguang.wu@intel.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andi Kleen <ak@linux.intel.com>

Authored by Naoya Horiguchi, committed by Andi Kleen
fd6a03ed 93f70f90

mm/hugetlb.c (+40)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -19,6 +19,8 @@
 #include <linux/sysfs.h>
 #include <linux/slab.h>
 #include <linux/rmap.h>
+#include <linux/swap.h>
+#include <linux/swapops.h>
 
 #include <asm/page.h>
 #include <asm/pgtable.h>
@@ -2151,6 +2149,19 @@
 	return -ENOMEM;
 }
 
+static int is_hugetlb_entry_hwpoisoned(pte_t pte)
+{
+	swp_entry_t swp;
+
+	if (huge_pte_none(pte) || pte_present(pte))
+		return 0;
+	swp = pte_to_swp_entry(pte);
+	if (non_swap_entry(swp) && is_hwpoison_entry(swp)) {
+		return 1;
+	} else
+		return 0;
+}
+
 void __unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			    unsigned long end, struct page *ref_page)
 {
@@ -2220,6 +2205,12 @@
 
 		pte = huge_ptep_get_and_clear(mm, address, ptep);
 		if (huge_pte_none(pte))
+			continue;
+
+		/*
+		 * HWPoisoned hugepage is already unmapped and dropped reference
+		 */
+		if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
 			continue;
 
 		page = pte_page(pte);
@@ -2512,6 +2491,18 @@
 	}
 
 	/*
+	 * Since memory error handler replaces pte into hwpoison swap entry
+	 * at the time of error handling, a process which reserved but not have
+	 * the mapping to the error hugepage does not have hwpoison swap entry.
+	 * So we need to block accesses from such a process by checking
+	 * PG_hwpoison bit here.
+	 */
+	if (unlikely(PageHWPoison(page))) {
+		ret = VM_FAULT_HWPOISON;
+		goto backout_unlocked;
+	}
+
+	/*
 	 * If we are going to COW a private mapping later, we examine the
 	 * pending reservations for this page now. This will ensure that
 	 * any allocations necessary to record that reservation occur outside
@@ -2576,6 +2543,13 @@
 	struct page *pagecache_page = NULL;
 	static DEFINE_MUTEX(hugetlb_instantiation_mutex);
 	struct hstate *h = hstate_vma(vma);
+
+	ptep = huge_pte_offset(mm, address);
+	if (ptep) {
+		entry = huge_ptep_get(ptep);
+		if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
+			return VM_FAULT_HWPOISON;
+	}
 
 	ptep = huge_pte_alloc(mm, address, huge_page_size(h));
 	if (!ptep)