Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

tile: fix some issues in hugepage support

First, in huge_pte_offset(), we were erroneously checking
pgd_present(), which is always true, rather than pud_present(),
which is what actually tells us whether a top-level (L0) PTE is
there. With this fixed, we look up huge page entries only when
the Present bit is actually set in the entry.

Second, use the standard pte_alloc_map() instead of the hand-rolled
pte_alloc_hugetlb() routine, which existed mostly to avoid worrying
about CONFIG_HIGHPTE. Since we no longer plan to support HIGHPTE,
the separate routine was just unnecessary code duplication.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>

+3 -35
arch/tile/mm/hugetlbpage.c
--- a/arch/tile/mm/hugetlbpage.c
+++ b/arch/tile/mm/hugetlbpage.c
@@ -49,38 +49,6 @@
 #endif
 };
 
-/*
- * This routine is a hybrid of pte_alloc_map() and pte_alloc_kernel().
- * It assumes that L2 PTEs are never in HIGHMEM (we don't support that).
- * It locks the user pagetable, and bumps up the mm->nr_ptes field,
- * but otherwise allocate the page table using the kernel versions.
- */
-static pte_t *pte_alloc_hugetlb(struct mm_struct *mm, pmd_t *pmd,
-				unsigned long address)
-{
-	pte_t *new;
-
-	if (pmd_none(*pmd)) {
-		new = pte_alloc_one_kernel(mm, address);
-		if (!new)
-			return NULL;
-
-		smp_wmb(); /* See comment in __pte_alloc */
-
-		spin_lock(&mm->page_table_lock);
-		if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
-			mm->nr_ptes++;
-			pmd_populate_kernel(mm, pmd, new);
-			new = NULL;
-		} else
-			VM_BUG_ON(pmd_trans_splitting(*pmd));
-		spin_unlock(&mm->page_table_lock);
-		if (new)
-			pte_free_kernel(mm, new);
-	}
-
-	return pte_offset_kernel(pmd, address);
-}
 #endif
 
 pte_t *huge_pte_alloc(struct mm_struct *mm,
@@ -77,7 +109,7 @@
 	else {
 		if (sz != PAGE_SIZE << huge_shift[HUGE_SHIFT_PAGE])
 			panic("Unexpected page size %#lx\n", sz);
-		return pte_alloc_hugetlb(mm, pmd, addr);
+		return pte_alloc_map(mm, NULL, pmd, addr);
 	}
 }
 #else
@@ -112,14 +144,14 @@
 
 	/* Get the top-level page table entry. */
 	pgd = (pgd_t *)get_pte((pte_t *)mm->pgd, pgd_index(addr), 0);
-	if (!pgd_present(*pgd))
-		return NULL;
 
 	/* We don't have four levels. */
 	pud = pud_offset(pgd, addr);
 #ifndef __PAGETABLE_PUD_FOLDED
 # error support fourth page table level
 #endif
+	if (!pud_present(*pud))
+		return NULL;
 
 	/* Check for an L0 huge PTE, if we have three levels. */
 #ifndef __PAGETABLE_PMD_FOLDED