Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86, mm: unify exit paths in gup_pte_range()

All exit paths from gup_pte_range() require pte_unmap() of the original
pte page before returning. Refactor the code to have a single exit
point to do the unmap.

This mirrors the flow of the generic gup_pte_range() in mm/gup.c.

Link: http://lkml.kernel.org/r/148804251828.36605.14910389618497006945.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Dan Williams, committed by Linus Torvalds
b2e593e2 ef947b25
+20 -19
arch/x86/mm/gup.c
@@ -106,36 +106,35 @@
 		unsigned long end, int write, struct page **pages, int *nr)
 {
 	struct dev_pagemap *pgmap = NULL;
-	int nr_start = *nr;
-	pte_t *ptep;
+	int nr_start = *nr, ret = 0;
+	pte_t *ptep, *ptem;
 
-	ptep = pte_offset_map(&pmd, addr);
+	/*
+	 * Keep the original mapped PTE value (ptem) around since we
+	 * might increment ptep off the end of the page when finishing
+	 * our loop iteration.
+	 */
+	ptem = ptep = pte_offset_map(&pmd, addr);
 	do {
 		pte_t pte = gup_get_pte(ptep);
 		struct page *page;
 
 		/* Similar to the PMD case, NUMA hinting must take slow path */
-		if (pte_protnone(pte)) {
-			pte_unmap(ptep);
-			return 0;
-		}
+		if (pte_protnone(pte))
+			break;
 
-		if (!pte_allows_gup(pte_val(pte), write)) {
-			pte_unmap(ptep);
-			return 0;
-		}
+		if (!pte_allows_gup(pte_val(pte), write))
+			break;
 
 		if (pte_devmap(pte)) {
 			pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
 			if (unlikely(!pgmap)) {
 				undo_dev_pagemap(nr, nr_start, pages);
-				pte_unmap(ptep);
-				return 0;
+				break;
 			}
-		} else if (pte_special(pte)) {
-			pte_unmap(ptep);
-			return 0;
-		}
+		} else if (pte_special(pte))
+			break;
+
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 		get_page(page);
@@ -144,9 +145,11 @@
 		(*nr)++;
 
 	} while (ptep++, addr += PAGE_SIZE, addr != end);
-	pte_unmap(ptep - 1);
+	if (addr == end)
+		ret = 1;
+	pte_unmap(ptem);
 
-	return 1;
+	return ret;
 }
 
 static inline void get_head_page_multiple(struct page *page, int nr)