Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: consolidate pte_index() and pte_offset_*() definitions

All architectures define pte_index() as

(address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)

and all architectures define pte_offset_kernel() as an entry in the array
of PTEs indexed by the pte_index().

For most architectures, the pte_offset_kernel() implementation relies on
the availability of pmd_page_vaddr(), which converts a PMD entry value to
the virtual address of the page containing the PTEs array.

Let's move x86 definitions of the PTE accessors to the generic place in
<linux/pgtable.h> and then simply drop the respective definitions from the
other architectures.

The architectures that didn't provide pmd_page_vaddr() are updated to have
that defined.

The generic implementation of pte_offset_kernel() can be overridden by an
architecture and alpha makes use of this because it has special ordering
requirements for its version of pte_offset_kernel().

[rppt@linux.ibm.com: v2]
Link: http://lkml.kernel.org/r/20200514170327.31389-11-rppt@kernel.org
[rppt@linux.ibm.com: update]
Link: http://lkml.kernel.org/r/20200514170327.31389-12-rppt@kernel.org
[rppt@linux.ibm.com: update]
Link: http://lkml.kernel.org/r/20200514170327.31389-13-rppt@kernel.org
[akpm@linux-foundation.org: fix x86 warning]
[sfr@canb.auug.org.au: fix powerpc build]
Link: http://lkml.kernel.org/r/20200607153443.GB738695@linux.ibm.com

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-10-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Mike Rapoport, committed by Linus Torvalds
commit 974b9b2c (parent e05c7b1f)

+228 -860
+2 -12
arch/alpha/include/asm/pgtable.h
···
  extern inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) |= __DIRTY_BITS; return pte; }
  extern inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= __ACCESS_BITS; return pte; }

- #define PAGE_DIR_OFFSET(tsk,address) pgd_offset((tsk),(address))
-
- /* to find an entry in a kernel page-table-directory */
- #define pgd_offset_k(address) pgd_offset(&init_mm, (address))
-
- /* to find an entry in a page-table-directory. */
- #define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
- #define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
-
  /*
   * The smp_read_barrier_depends() in the following functions are required to
   * order the load of *dir (the pointer in the top level page table) with any
···
  	smp_read_barrier_depends(); /* see above */
  	return ret;
  }
+ #define pmd_offset pmd_offset

  /* Find an entry in the third-level page table.. */
  extern inline pte_t * pte_offset_kernel(pmd_t * dir, unsigned long address)
···
  	smp_read_barrier_depends(); /* see above */
  	return ret;
  }
-
- #define pte_offset_map(dir,addr) pte_offset_kernel((dir),(addr))
- #define pte_unmap(pte) do { } while (0)
+ #define pte_offset_kernel pte_offset_kernel

  extern pgd_t swapper_pg_dir[1024];

-22
arch/arc/include/asm/pgtable.h
···
  extern char empty_zero_page[PAGE_SIZE];
  #define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))

- #define pte_unmap(pte) do { } while (0)
- #define pte_unmap_nested(pte) do { } while (0)
-
  #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
  #define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
···

  /* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/
  #define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT)
- #define __pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
-
- /*
-  * pte_offset gets a @ptr to PMD entry (PGD in our 2-tier paging system)
-  * and returns ptr to PTE entry corresponding to @addr
-  */
- #define pte_offset(dir, addr) ((pte_t *)(pmd_page_vaddr(*dir)) +\
-                                __pte_index(addr))
-
- /* No mapping of Page Tables in high mem etc, so following same as above */
- #define pte_offset_kernel(dir, addr) pte_offset(dir, addr)
- #define pte_offset_map(dir, addr) pte_offset(dir, addr)

  /* Zoo of pte_xxx function */
  #define pte_read(pte) (pte_val(pte) & _PAGE_READ)
···
  {
  	set_pte(ptep, pteval);
  }
-
- /*
-  * All kernel related VM pages are in init's mm.
-  */
- #define pgd_offset_k(address) pgd_offset(&init_mm, address)
- #define pgd_index(addr) ((addr) >> PGDIR_SHIFT)
- #define pgd_offset(mm, addr) (((mm)->pgd)+pgd_index(addr))

  /*
   * Macro to quickly access the PGD entry, utlising the fact that some
+1
arch/arm/include/asm/pgtable-2level.h
···
  {
  	return (pmd_t *)pud;
  }
+ #define pmd_offset pmd_offset

  #define pmd_large(pmd) (pmd_val(pmd) & 2)
  #define pmd_leaf(pmd) (pmd_val(pmd) & 2)
-7
arch/arm/include/asm/pgtable-3level.h
···
  	return __va(pud_val(pud) & PHYS_MASK & (s32)PAGE_MASK);
  }

- /* Find an entry in the second-level page table.. */
- #define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
- static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
- {
- 	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
- }
-
  #define pmd_bad(pmd) (!(pmd_val(pmd) & 2))

  #define copy_pmd(pmdpd,pmdps) \
-1
arch/arm/include/asm/pgtable-nommu.h
···
  #define pgd_bad(pgd) (0)
  #define pgd_clear(pgdp)
  #define kern_addr_valid(addr) (1)
- #define pmd_offset(a, b) ((void *)0)
  /* FIXME */
  /*
   * PMD_SHIFT determines the size of the area a second-level page table can map
-23
arch/arm/include/asm/pgtable.h
···

  extern pgd_t swapper_pg_dir[PTRS_PER_PGD];

- /* to find an entry in a page-table-directory */
- #define pgd_index(addr) ((addr) >> PGDIR_SHIFT)
-
- #define pgd_offset(mm, addr) ((mm)->pgd + pgd_index(addr))
-
- /* to find an entry in a kernel page-table-directory */
- #define pgd_offset_k(addr) pgd_offset(&init_mm, addr)
-
  #define pmd_none(pmd) (!pmd_val(pmd))

  static inline pte_t *pmd_page_vaddr(pmd_t pmd)
···
  }

  #define pmd_page(pmd) pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
-
- #ifndef CONFIG_HIGHPTE
- #define __pte_map(pmd) pmd_page_vaddr(*(pmd))
- #define __pte_unmap(pte) do { } while (0)
- #else
- #define __pte_map(pmd) (pte_t *)kmap_atomic(pmd_page(*(pmd)))
- #define __pte_unmap(pte) kunmap_atomic(pte)
- #endif
-
- #define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
-
- #define pte_offset_kernel(pmd,addr) (pmd_page_vaddr(*(pmd)) + pte_index(addr))
-
- #define pte_offset_map(pmd,addr) (__pte_map(pmd) + pte_index(addr))
- #define pte_unmap(pte) __pte_unmap(pte)

  #define pte_pfn(pte) ((pte_val(pte) & PHYS_MASK) >> PAGE_SHIFT)
  #define pfn_pte(pfn,prot) __pte(__pfn_to_phys(pfn) | pgprot_val(prot))
+14 -22
arch/arm64/include/asm/pgtable.h
···
  	return __pmd_to_phys(pmd);
  }

- static inline void pte_unmap(pte_t *pte) { }
+ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ {
+ 	return (unsigned long)__va(pmd_page_paddr(pmd));
+ }

  /* Find an entry in the third-level page table. */
- #define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
-
  #define pte_offset_phys(dir,addr) (pmd_page_paddr(READ_ONCE(*(dir))) + pte_index(addr) * sizeof(pte_t))
- #define pte_offset_kernel(dir,addr) ((pte_t *)__va(pte_offset_phys((dir), (addr))))
-
- #define pte_offset_map(dir,addr) pte_offset_kernel((dir), (addr))

  #define pte_set_fixmap(addr) ((pte_t *)set_fixmap_offset(FIX_PTE, addr))
  #define pte_set_fixmap_offset(pmd, addr) pte_set_fixmap(pte_offset_phys(pmd, addr))
···
  	return __pud_to_phys(pud);
  }

- /* Find an entry in the second-level page table. */
- #define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
+ static inline unsigned long pud_page_vaddr(pud_t pud)
+ {
+ 	return (unsigned long)__va(pud_page_paddr(pud));
+ }

+ /* Find an entry in the second-level page table. */
  #define pmd_offset_phys(dir, addr) (pud_page_paddr(READ_ONCE(*(dir))) + pmd_index(addr) * sizeof(pmd_t))
- #define pmd_offset(dir, addr) ((pmd_t *)__va(pmd_offset_phys((dir), (addr))))

  #define pmd_set_fixmap(addr) ((pmd_t *)set_fixmap_offset(FIX_PMD, addr))
  #define pmd_set_fixmap_offset(pud, addr) pmd_set_fixmap(pmd_offset_phys(pud, addr))
···
  	return __p4d_to_phys(p4d);
  }

- /* Find an entry in the frst-level page table. */
- #define pud_index(addr) (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
+ static inline unsigned long p4d_page_vaddr(p4d_t p4d)
+ {
+ 	return (unsigned long)__va(p4d_page_paddr(p4d));
+ }

+ /* Find an entry in the frst-level page table. */
  #define pud_offset_phys(dir, addr) (p4d_page_paddr(READ_ONCE(*(dir))) + pud_index(addr) * sizeof(pud_t))
- #define pud_offset(dir, addr) ((pud_t *)__va(pud_offset_phys((dir), (addr))))

  #define pud_set_fixmap(addr) ((pud_t *)set_fixmap_offset(FIX_PUD, addr))
  #define pud_set_fixmap_offset(p4d, addr) pud_set_fixmap(pud_offset_phys(p4d, addr))
···
  #endif /* CONFIG_PGTABLE_LEVELS > 3 */

  #define pgd_ERROR(pgd) __pgd_error(__FILE__, __LINE__, pgd_val(pgd))
-
- /* to find an entry in a page-table-directory */
- #define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
-
- #define pgd_offset_raw(pgd, addr) ((pgd) + pgd_index(addr))
-
- #define pgd_offset(mm, addr) (pgd_offset_raw((mm)->pgd, (addr)))
-
- /* to find an entry in a kernel page-table-directory */
- #define pgd_offset_k(addr) pgd_offset(&init_mm, addr)

  #define pgd_set_fixmap(addr) ((pgd_t *)set_fixmap_offset(FIX_PGD, addr))
  #define pgd_clear_fixmap() clear_fixmap(FIX_PGD)
+2 -2
arch/arm64/kernel/hibernate.c
···
  	pmd_t *pmdp;
  	pte_t *ptep;

- 	pgdp = pgd_offset_raw(trans_pgd, dst_addr);
+ 	pgdp = pgd_offset_pgd(trans_pgd, dst_addr);
  	if (pgd_none(READ_ONCE(*pgdp))) {
  		pudp = (void *)get_safe_page(GFP_ATOMIC);
  		if (!pudp)
···
  	unsigned long addr = start;
  	pgd_t *src_pgdp = pgd_offset_k(start);

- 	dst_pgdp = pgd_offset_raw(dst_pgdp, start);
+ 	dst_pgdp = pgd_offset_pgd(dst_pgdp, start);
  	do {
  		next = pgd_addr_end(addr, end);
  		if (pgd_none(READ_ONCE(*src_pgdp)))
+1 -1
arch/arm64/mm/kasan_init.c
···

  	pgdp = pgd_offset_k(KASAN_SHADOW_START);
  	pgdp_end = pgd_offset_k(KASAN_SHADOW_END);
- 	pgdp_new = pgd_offset_raw(pgdir, KASAN_SHADOW_START);
+ 	pgdp_new = pgd_offset_pgd(pgdir, KASAN_SHADOW_START);
  	do {
  		set_pgd(pgdp_new, READ_ONCE(*pgdp));
  	} while (pgdp++, pgdp_new++, pgdp != pgdp_end);
+4 -4
arch/arm64/mm/mmu.c
···
  		  int flags)
  {
  	unsigned long addr, end, next;
- 	pgd_t *pgdp = pgd_offset_raw(pgdir, virt);
+ 	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);

  	/*
  	 * If the virtual and physical address don't have the same offset
···
  			   &vmlinux_initdata, 0, VM_NO_GUARD);
  	map_kernel_segment(pgdp, _data, _end, PAGE_KERNEL, &vmlinux_data, 0, 0);

- 	if (!READ_ONCE(pgd_val(*pgd_offset_raw(pgdp, FIXADDR_START)))) {
+ 	if (!READ_ONCE(pgd_val(*pgd_offset_pgd(pgdp, FIXADDR_START)))) {
  		/*
  		 * The fixmap falls in a separate pgd to the kernel, and doesn't
  		 * live in the carveout for the swapper_pg_dir. We can simply
  		 * re-use the existing dir for the fixmap.
  		 */
- 		set_pgd(pgd_offset_raw(pgdp, FIXADDR_START),
+ 		set_pgd(pgd_offset_pgd(pgdp, FIXADDR_START),
  			READ_ONCE(*pgd_offset_k(FIXADDR_START)));
  	} else if (CONFIG_PGTABLE_LEVELS > 3) {
  		pgd_t *bm_pgdp;
···
  		 * entry instead.
  		 */
  		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
- 		bm_pgdp = pgd_offset_raw(pgdp, FIXADDR_START);
+ 		bm_pgdp = pgd_offset_pgd(pgdp, FIXADDR_START);
  		bm_p4dp = p4d_offset(bm_pgdp, FIXADDR_START);
  		bm_pudp = pud_set_fixmap_offset(bm_p4dp, FIXADDR_START);
  		pud_populate(&init_mm, bm_pudp, lm_alias(bm_pmd));
-1
arch/c6x/include/asm/pgtable.h
···
  #define pgd_clear(pgdp)
  #define kern_addr_valid(addr) (1)

- #define pmd_offset(a, b) ((void *)0)
  #define pmd_none(x) (!pmd_val(x))
  #define pmd_present(x) (pmd_val(x))
  #define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
-30
arch/csky/include/asm/pgtable.h
···
  #define pgd_ERROR(e) \
  	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))

- /* Find an entry in the third-level page table.. */
- #define __pte_offset_t(address) \
- 	(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
- #define pte_offset_kernel(dir, address) \
- 	(pmd_page_vaddr(*(dir)) + __pte_offset_t(address))
- #define pte_offset_map(dir, address) \
- 	((pte_t *)page_address(pmd_page(*(dir))) + __pte_offset_t(address))
  #define pmd_page(pmd) (pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT))
  #define pte_clear(mm, addr, ptep) set_pte((ptep), \
  	(((unsigned int) addr & PAGE_OFFSET) ? __pte(_PAGE_GLOBAL) : __pte(0)))
···

  #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_MODIFIED | \
  	_CACHE_MASK)
-
- #define pte_unmap(pte) ((void)(pte))

  #define __swp_type(x) (((x).val >> 4) & 0xff)
  #define __swp_offset(x) ((x).val >> 12)
···
  	return pte;
  }

- #define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
- #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
-
- /* to find an entry in a kernel page-table-directory */
- #define pgd_offset_k(address) pgd_offset(&init_mm, address)
-
- #define pgd_index(address) ((address) >> PGDIR_SHIFT)
-
  #define __HAVE_PHYS_MEM_ACCESS_PROT
  struct file;
  extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
···
  {
  	return __pte((pte_val(pte) & _PAGE_CHG_MASK) |
  		     (pgprot_val(newprot)));
- }
-
- /* to find an entry in a page-table-directory */
- static inline pgd_t *pgd_offset(struct mm_struct *mm, unsigned long address)
- {
- 	return mm->pgd + pgd_index(address);
- }
-
- /* Find an entry in the third-level page table.. */
- static inline pte_t *pte_offset(pmd_t *dir, unsigned long address)
- {
- 	return (pte_t *) (pmd_page_vaddr(*dir)) +
- 		((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1));
  }

  extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+4 -48
arch/hexagon/include/asm/pgtable.h
···
  	pte_val(*ptep) = _NULL_PTE;
  }

- #ifdef NEED_PMD_INDEX_DESPITE_BEING_2_LEVEL
- /**
-  * pmd_index - returns the index of the entry in the PMD page
-  * which would control the given virtual address
-  */
- #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
-
- #endif
-
- /**
-  * pgd_index - returns the index of the entry in the PGD page
-  * which would control the given virtual address
-  *
-  * This returns the *index* for the address in the pgd_t
-  */
- #define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
-
- /*
-  * pgd_offset - find an offset in a page-table-directory
-  */
- #define pgd_offset(mm, addr) ((mm)->pgd + pgd_index(addr))
-
- /*
-  * pgd_offset_k - get kernel (init_mm) pgd entry pointer for addr
-  */
- #define pgd_offset_k(address) pgd_offset(&init_mm, address)
-
  /**
   * pmd_none - check if pmd_entry is mapped
   * @pmd_entry: pmd entry
···
   */
  #define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte)

- /*
-  * May need to invoke the virtual machine as well...
-  */
- #define pte_unmap(pte) do { } while (0)
- #define pte_unmap_nested(pte) do { } while (0)
-
- /*
-  * pte_offset_map - returns the linear address of the page table entry
-  * corresponding to an address
-  */
- #define pte_offset_map(dir, address) \
- 	((pte_t *)page_address(pmd_page(*(dir))) + __pte_offset(address))
-
- #define pte_offset_map_nested(pmd, addr) pte_offset_map(pmd, addr)
-
- /* pte_offset_kernel - kernel version of pte_offset */
- #define pte_offset_kernel(dir, address) \
- 	((pte_t *) (unsigned long) __va(pmd_val(*dir) & PAGE_MASK) \
- 	 + __pte_offset(address))
+ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ {
+ 	return (unsigned long)__va(pmd_val(pmd) & PAGE_MASK);
+ }

  /* ZERO_PAGE - returns the globally shared zero page */
  #define ZERO_PAGE(vaddr) (virt_to_page(&empty_zero_page))
-
- #define __pte_offset(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))

  /*
   * Swap/file PTE definitions. If _PAGE_PRESENT is zero, the rest of the PTE is
+1 -32
arch/ia64/include/asm/pgtable.h
···

  	return (region << (PAGE_SHIFT - 6)) | l1index;
  }
-
- /* The offset in the 1-level directory is given by the 3 region bits
-    (61..63) and the level-1 bits. */
- static inline pgd_t*
- pgd_offset (const struct mm_struct *mm, unsigned long address)
- {
- 	return mm->pgd + pgd_index(address);
- }
-
- /* In the kernel's mapped region we completely ignore the region number
-    (since we know it's in region number 5). */
- #define pgd_offset_k(addr) \
- 	(init_mm.pgd + (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)))
+ #define pgd_index pgd_index

  /* Look up a pgd entry in the gate area. On IA-64, the gate-area
     resides in the kernel-mapped segment, hence we use pgd_offset_k()
     here. */
  #define pgd_offset_gate(mm, addr) pgd_offset_k(addr)
-
- #if CONFIG_PGTABLE_LEVELS == 4
- /* Find an entry in the second-level page table.. */
- #define pud_offset(dir,addr) \
- 	((pud_t *) p4d_page_vaddr(*(dir)) + (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)))
- #endif
-
- /* Find an entry in the third-level page table.. */
- #define pmd_offset(dir,addr) \
- 	((pmd_t *) pud_page_vaddr(*(dir)) + (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)))
-
- /*
-  * Find an entry in the third-level page table. This looks more complicated than it
-  * should be because some platforms place page tables in high memory.
-  */
- #define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
- #define pte_offset_kernel(dir,addr) ((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(addr))
- #define pte_offset_map(dir,addr) pte_offset_kernel(dir, addr)
- #define pte_unmap(pte) do { } while (0)

  /* atomic versions of the some PTE manipulations: */
+1 -22
arch/m68k/include/asm/mcf_pgtable.h
···
  }

  #define __pte_page(pte) ((unsigned long) (pte_val(pte) & PAGE_MASK))
- #define __pmd_page(pmd) ((unsigned long) (pmd_val(pmd)))
+ #define pmd_page_vaddr(pmd) ((unsigned long) (pmd_val(pmd)))

  static inline int pte_none(pte_t pte)
  {
···
  extern pgd_t kernel_pg_dir[PTRS_PER_PGD];

  /*
- * Find an entry in a pagetable directory.
- */
- #define pgd_index(address) ((address) >> PGDIR_SHIFT)
- #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
-
- /*
- * Find an entry in a kernel pagetable directory.
- */
- #define pgd_offset_k(address) pgd_offset(&init_mm, address)
-
- /*
- * Find an entry in the third-level pagetable.
- */
- #define __pte_offset(address) ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
- #define pte_offset_kernel(dir, address) \
- 	((pte_t *) __pmd_page(*(dir)) + __pte_offset(address))
-
- /*
  * Encode and de-code a swap entry (must be !pte_none(e) && !pte_present(e))
  */
  #define __swp_type(x) ((x).val & 0xFF)
···

  #define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT))

- #define pte_offset_map(pmdp, addr) ((pte_t *)__pmd_page(*pmdp) + \
- 	__pte_offset(addr))
- #define pte_unmap(pte) ((void) 0)
  #define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
  #define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT)
+1 -1
arch/m68k/include/asm/motorola_pgalloc.h
···
  {
  	pmd_set(pmd, page);
  }
- #define pmd_pgtable(pmd) ((pgtable_t)__pmd_page(pmd))
+ #define pmd_pgtable(pmd) ((pgtable_t)pmd_page_vaddr(pmd))

  static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
  {
+1 -33
arch/m68k/include/asm/motorola_pgtable.h
···
  }

  #define __pte_page(pte) ((unsigned long)__va(pte_val(pte) & PAGE_MASK))
- #define __pmd_page(pmd) ((unsigned long)__va(pmd_val(pmd) & _TABLE_MASK))
+ #define pmd_page_vaddr(pmd) ((unsigned long)__va(pmd_val(pmd) & _TABLE_MASK))
  #define pud_page_vaddr(pud) ((unsigned long)__va(pud_val(pud) & _TABLE_MASK))


···
  	return pte;
  }

- #define PAGE_DIR_OFFSET(tsk,address) pgd_offset((tsk),(address))
-
- #define pgd_index(address) ((address) >> PGDIR_SHIFT)
-
- /* to find an entry in a page-table-directory */
- static inline pgd_t *pgd_offset(const struct mm_struct *mm,
- 				unsigned long address)
- {
- 	return mm->pgd + pgd_index(address);
- }
-
  #define swapper_pg_dir kernel_pg_dir
  extern pgd_t kernel_pg_dir[128];
-
- static inline pgd_t *pgd_offset_k(unsigned long address)
- {
- 	return kernel_pg_dir + (address >> PGDIR_SHIFT);
- }
-
-
- /* Find an entry in the second-level page table.. */
- static inline pmd_t *pmd_offset(pud_t *dir, unsigned long address)
- {
- 	return (pmd_t *)pud_page_vaddr(*dir) + ((address >> PMD_SHIFT) & (PTRS_PER_PMD-1));
- }
-
- /* Find an entry in the third-level page table.. */
- static inline pte_t *pte_offset_kernel(pmd_t *pmdp, unsigned long address)
- {
- 	return (pte_t *)__pmd_page(*pmdp) + ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1));
- }
-
- #define pte_offset_map(pmdp,address) ((pte_t *)__pmd_page(*pmdp) + (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)))
- #define pte_unmap(pte) ((void)0)

  /* Encode and de-code a swap entry (must be !pte_none(e) && !pte_present(e)) */
  #define __swp_type(x) (((x).val >> 4) & 0xff)
+6 -18
arch/m68k/include/asm/sun3_pgtable.h
···

  #define __pte_page(pte) \
  ((unsigned long) __va ((pte_val (pte) & SUN3_PAGE_PGNUM_MASK) << PAGE_SHIFT))
- #define __pmd_page(pmd) \
- ((unsigned long) __va (pmd_val (pmd) & PAGE_MASK))
+
+ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ {
+ 	return (unsigned long)__va(pmd_val(pmd) & PAGE_MASK);
+ }

  static inline int pte_none (pte_t pte) { return !pte_val (pte); }
  static inline int pte_present (pte_t pte) { return pte_val (pte) & SUN3_PAGE_VALID; }
···
  ({ pte_t __pte; pte_val(__pte) = pfn | pgprot_val(pgprot); __pte; })

  #define pte_page(pte) virt_to_page(__pte_page(pte))
- #define pmd_page(pmd) virt_to_page(__pmd_page(pmd))
+ #define pmd_page(pmd) virt_to_page(pmd_page_vaddr(pmd))


  static inline int pmd_none2 (pmd_t *pmd) { return !pmd_val (*pmd); }
···

  extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
  extern pgd_t kernel_pg_dir[PTRS_PER_PGD];
-
- /* Find an entry in a pagetable directory. */
- #define pgd_index(address) ((address) >> PGDIR_SHIFT)
-
- #define pgd_offset(mm, address) \
- ((mm)->pgd + pgd_index(address))
-
- /* Find an entry in a kernel pagetable directory. */
- #define pgd_offset_k(address) pgd_offset(&init_mm, address)
-
- /* Find an entry in the third-level pagetable. */
- #define pte_index(address) ((address >> PAGE_SHIFT) & (PTRS_PER_PTE-1))
- #define pte_offset_kernel(pmd, address) ((pte_t *) __pmd_page(*pmd) + pte_index(address))
- #define pte_offset_map(pmd, address) ((pte_t *)page_address(pmd_page(*pmd)) + pte_index(address))
- #define pte_unmap(pte) do { } while (0)

  /* Macros to (de)construct the fake PTEs representing swap pages. */
  #define __swp_type(x) ((x).val & 0x7F)
+1 -1
arch/m68k/mm/init.c
···
  		if (!pmd_present(*pmd))
  			continue;

- 		pte_dir = (pte_t *)__pmd_page(*pmd);
+ 		pte_dir = (pte_t *)pmd_page_vaddr(*pmd);
  		init_pointer_table(pte_dir, TABLE_PTE);
  	}
  }
+4 -17
arch/microblaze/include/asm/pgtable.h
···
  #define pgd_bad(pgd) (0)
  #define pgd_clear(pgdp)
  #define kern_addr_valid(addr) (1)
- #define pmd_offset(a, b) ((void *) 0)

  #define PAGE_NONE __pgprot(0) /* these mean nothing to non MMU */
  #define PAGE_SHARED __pgprot(0) /* these mean nothing to non MMU */
···
  /* Convert pmd entry to page */
  /* our pmd entry is an effective address of pte table*/
  /* returns effective address of the pmd entry*/
- #define pmd_page_kernel(pmd) ((unsigned long) (pmd_val(pmd) & PAGE_MASK))
+ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ {
+ 	return ((unsigned long) (pmd_val(pmd) & PAGE_MASK));
+ }

  /* returns struct *page of the pmd entry*/
  #define pmd_page(pmd) (pfn_to_page(__pa(pmd_val(pmd)) >> PAGE_SHIFT))

- /* to find an entry in a kernel page-table-directory */
- #define pgd_offset_k(address) pgd_offset(&init_mm, address)
-
- /* to find an entry in a page-table-directory */
- #define pgd_index(address) ((address) >> PGDIR_SHIFT)
- #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
-
  /* Find an entry in the third-level page table.. */
- #define pte_index(address) \
- 	(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
- #define pte_offset_kernel(dir, addr) \
- 	((pte_t *) pmd_page_kernel(*(dir)) + pte_index(addr))
- #define pte_offset_map(dir, addr) \
- 	((pte_t *) kmap_atomic(pmd_page(*(dir))) + pte_index(addr))
-
- #define pte_unmap(pte) kunmap_atomic(pte)

  extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
-22
arch/mips/include/asm/pgtable-32.h
···

  #define pte_page(x) pfn_to_page(pte_pfn(x))

- /* to find an entry in a kernel page-table-directory */
- #define pgd_offset_k(address) pgd_offset(&init_mm, address)
-
- #define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
- #define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
- #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
-
- /* to find an entry in a page-table-directory */
- #define pgd_offset(mm, addr) ((mm)->pgd + pgd_index(addr))
-
- /* Find an entry in the third-level page table.. */
- #define __pte_offset(address) \
- 	(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
- #define pte_offset(dir, address) \
- 	((pte_t *) pmd_page_vaddr(*(dir)) + __pte_offset(address))
- #define pte_offset_kernel(dir, address) \
- 	((pte_t *) pmd_page_vaddr(*(dir)) + __pte_offset(address))
-
- #define pte_offset_map(dir, address) \
- 	((pte_t *)page_address(pmd_page(*(dir))) + __pte_offset(address))
- #define pte_unmap(pte) ((void)(pte))
-
  #if defined(CONFIG_CPU_R3K_TLB)

  /* Swap entries must have VALID bit cleared. */
-32
arch/mips/include/asm/pgtable-64.h
···

  extern pte_t invalid_pte_table[PTRS_PER_PTE];

- #define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
-
  #ifndef __PAGETABLE_PUD_FOLDED
  /*
   * For 4-level pagetables we defines these ourselves, for 3-level the
···
  #define p4d_page(p4d) (pfn_to_page(p4d_phys(p4d) >> PAGE_SHIFT))

  #define p4d_index(address) (((address) >> P4D_SHIFT) & (PTRS_PER_P4D - 1))
-
- static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
- {
- 	return (pud_t *)p4d_page_vaddr(*p4d) + pud_index(address);
- }

  static inline void set_p4d(p4d_t *p4d, p4d_t p4dval)
  {
···
  #define pfn_pmd(pfn, prot) __pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
  #endif

- /* to find an entry in a kernel page-table-directory */
- #define pgd_offset_k(address) pgd_offset(&init_mm, address)
-
- #define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
- #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
-
- /* to find an entry in a page-table-directory */
- #define pgd_offset(mm, addr) ((mm)->pgd + pgd_index(addr))
-
  #ifndef __PAGETABLE_PMD_FOLDED
  static inline unsigned long pud_page_vaddr(pud_t pud)
  {
···
  #define pud_phys(pud) virt_to_phys((void *)pud_val(pud))
  #define pud_page(pud) (pfn_to_page(pud_phys(pud) >> PAGE_SHIFT))

- /* Find an entry in the second-level page table.. */
- static inline pmd_t *pmd_offset(pud_t * pud, unsigned long address)
- {
- 	return (pmd_t *) pud_page_vaddr(*pud) + pmd_index(address);
- }
  #endif
-
- /* Find an entry in the third-level page table.. */
- #define __pte_offset(address) \
- 	(((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
- #define pte_offset(dir, address) \
- 	((pte_t *) pmd_page_vaddr(*(dir)) + __pte_offset(address))
- #define pte_offset_kernel(dir, address) \
- 	((pte_t *) pmd_page_vaddr(*(dir)) + __pte_offset(address))
- #define pte_offset_map(dir, address) \
- 	((pte_t *)page_address(pmd_page(*(dir))) + __pte_offset(address))
- #define pte_unmap(pte) ((void)(pte))

  /*
   * Initialize a new pgd / pmd table with invalid pointers.
+10 -10
arch/mips/kvm/mmu.c
··· 168 168 clear_page(new_pte); 169 169 pmd_populate_kernel(NULL, pmd, new_pte); 170 170 } 171 - return pte_offset(pmd, addr); 171 + return pte_offset_kernel(pmd, addr); 172 172 } 173 173 174 174 /* Caller must hold kvm->mm_lock */ ··· 187 187 static bool kvm_mips_flush_gpa_pte(pte_t *pte, unsigned long start_gpa, 188 188 unsigned long end_gpa) 189 189 { 190 - int i_min = __pte_offset(start_gpa); 191 - int i_max = __pte_offset(end_gpa); 190 + int i_min = pte_index(start_gpa); 191 + int i_max = pte_index(end_gpa); 192 192 bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PTE - 1); 193 193 int i; 194 194 ··· 215 215 if (!pmd_present(pmd[i])) 216 216 continue; 217 217 218 - pte = pte_offset(pmd + i, 0); 218 + pte = pte_offset_kernel(pmd + i, 0); 219 219 if (i == i_max) 220 220 end = end_gpa; 221 221 ··· 312 312 unsigned long end) \ 313 313 { \ 314 314 int ret = 0; \ 315 - int i_min = __pte_offset(start); \ 316 - int i_max = __pte_offset(end); \ 315 + int i_min = pte_index(start); \ 316 + int i_max = pte_index(end); \ 317 317 int i; \ 318 318 pte_t old, new; \ 319 319 \ ··· 346 346 if (!pmd_present(pmd[i])) \ 347 347 continue; \ 348 348 \ 349 - pte = pte_offset(pmd + i, 0); \ 349 + pte = pte_offset_kernel(pmd + i, 0); \ 350 350 if (i == i_max) \ 351 351 cur_end = end; \ 352 352 \ ··· 842 842 static bool kvm_mips_flush_gva_pte(pte_t *pte, unsigned long start_gva, 843 843 unsigned long end_gva) 844 844 { 845 - int i_min = __pte_offset(start_gva); 846 - int i_max = __pte_offset(end_gva); 845 + int i_min = pte_index(start_gva); 846 + int i_max = pte_index(end_gva); 847 847 bool safe_to_remove = (i_min == 0 && i_max == PTRS_PER_PTE - 1); 848 848 int i; 849 849 ··· 877 877 if (!pmd_present(pmd[i])) 878 878 continue; 879 879 880 - pte = pte_offset(pmd + i, 0); 880 + pte = pte_offset_kernel(pmd + i, 0); 881 881 if (i == i_max) 882 882 end = end_gva; 883 883
+1 -1
arch/mips/kvm/trap_emul.c
··· 594 594 pmd_va = pud_va | (k << PMD_SHIFT); 595 595 if (pmd_va >= end) 596 596 break; 597 - pte = pte_offset(pmd + k, 0); 597 + pte = pte_offset_kernel(pmd + k, 0); 598 598 pte_free_kernel(NULL, pte); 599 599 } 600 600 pmd_free(NULL, pmd);
+4 -14
arch/nds32/include/asm/pgtable.h
··· 186 186 #define pte_clear(mm,addr,ptep) set_pte_at((mm),(addr),(ptep), __pte(0)) 187 187 #define pte_page(pte) (pfn_to_page(pte_pfn(pte))) 188 188 189 - #define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) 190 - #define pte_offset_kernel(dir, address) ((pte_t *)pmd_page_kernel(*(dir)) + pte_index(address)) 191 - #define pte_offset_map(dir, address) ((pte_t *)page_address(pmd_page(*(dir))) + pte_index(address)) 192 - #define pte_offset_map_nested(dir, address) pte_offset_map(dir, address) 193 - #define pmd_page_kernel(pmd) ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK)) 194 - 195 - #define pte_unmap(pte) do { } while (0) 196 - #define pte_unmap_nested(pte) do { } while (0) 189 + static unsigned long pmd_page_vaddr(pmd_t pmd) 190 + { 191 + return ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK)); 192 + } 197 193 198 194 #define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) 199 195 /* ··· 339 343 * PPN = (phys & 0xfffff000); 340 344 * 341 345 */ 342 - 343 - /* to find an entry in a page-table-directory */ 344 - #define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)) 345 - #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address)) 346 - /* to find an entry in a kernel page-table-directory */ 347 - #define pgd_offset_k(addr) pgd_offset(&init_mm, addr) 348 346 349 347 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) 350 348 {
+4 -18
arch/nios2/include/asm/pgtable.h
··· 102 102 *pmdptr = pmdval; 103 103 } 104 104 105 - /* to find an entry in a page-table-directory */ 106 - #define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)) 107 - #define pgd_offset(mm, addr) ((mm)->pgd + pgd_index(addr)) 108 - 109 105 static inline int pte_write(pte_t pte) \ 110 106 { return pte_val(pte) & _PAGE_WRITE; } 111 107 static inline int pte_dirty(pte_t pte) \ ··· 232 236 */ 233 237 #define mk_pte(page, prot) (pfn_pte(page_to_pfn(page), prot)) 234 238 235 - #define pte_unmap(pte) do { } while (0) 236 - 237 239 /* 238 240 * Conversion functions: convert a page and protection to a page entry, 239 241 * and a page entry and page directory to the page they refer to. 240 242 */ 241 243 #define pmd_phys(pmd) virt_to_phys((void *)pmd_val(pmd)) 242 244 #define pmd_page(pmd) (pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT)) 243 - #define pmd_page_vaddr(pmd) pmd_val(pmd) 244 245 245 - #define pte_offset_map(dir, addr) \ 246 - ((pte_t *) page_address(pmd_page(*dir)) + \ 247 - (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))) 248 - 249 - /* to find an entry in a kernel page-table-directory */ 250 - #define pgd_offset_k(addr) pgd_offset(&init_mm, addr) 251 - 252 - /* Get the address to the PTE for a vaddr in specific directory */ 253 - #define pte_offset_kernel(dir, addr) \ 254 - ((pte_t *) pmd_page_vaddr(*(dir)) + \ 255 - (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))) 246 + static inline unsigned long pmd_page_vaddr(pmd_t pmd) 247 + { 248 + return pmd_val(pmd); 249 + } 256 250 257 251 #define pte_ERROR(e) \ 258 252 pr_err("%s:%d: bad pte %08lx.\n", \
+4 -27
arch/openrisc/include/asm/pgtable.h
··· 363 363 } 364 364 365 365 #define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT)) 366 - #define pmd_page_kernel(pmd) ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK)) 367 366 368 - /* to find an entry in a page-table-directory. */ 369 - #define pgd_index(address) ((address >> PGDIR_SHIFT) & (PTRS_PER_PGD-1)) 370 - 371 - #define __pgd_offset(address) pgd_index(address) 372 - 373 - #define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address)) 374 - 375 - /* to find an entry in a kernel page-table-directory */ 376 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 367 + static inline unsigned long pmd_page_vaddr(pmd_t pmd) 368 + { 369 + return ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK)); 370 + } 377 371 378 372 #define __pmd_offset(address) \ 379 373 (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) 380 374 381 - /* 382 - * the pte page can be thought of an array like this: pte_t[PTRS_PER_PTE] 383 - * 384 - * this macro returns the index of the entry in the pte page which would 385 - * control the given virtual address 386 - */ 387 - #define __pte_offset(address) \ 388 - (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) 389 - #define pte_offset_kernel(dir, address) \ 390 - ((pte_t *) pmd_page_kernel(*(dir)) + __pte_offset(address)) 391 - #define pte_offset_map(dir, address) \ 392 - ((pte_t *)page_address(pmd_page(*(dir))) + __pte_offset(address)) 393 - #define pte_offset_map_nested(dir, address) \ 394 - pte_offset_map(dir, address) 395 - 396 - #define pte_unmap(pte) do { } while (0) 397 - #define pte_unmap_nested(pte) do { } while (0) 398 375 #define pte_pfn(x) ((unsigned long)(((x).pte)) >> PAGE_SHIFT) 399 376 #define pfn_pte(pfn, prot) __pte((((pfn) << PAGE_SHIFT)) | pgprot_val(prot)) 400 377
+4 -28
arch/parisc/include/asm/pgtable.h
··· 427 427 428 428 #define pte_page(pte) (pfn_to_page(pte_pfn(pte))) 429 429 430 - #define pmd_page_vaddr(pmd) ((unsigned long) __va(pmd_address(pmd))) 430 + static inline unsigned long pmd_page_vaddr(pmd_t pmd) 431 + { 432 + return ((unsigned long) __va(pmd_address(pmd))); 433 + } 431 434 432 435 #define __pmd_page(pmd) ((unsigned long) __va(pmd_address(pmd))) 433 436 #define pmd_page(pmd) virt_to_page((void *)__pmd_page(pmd)) 434 437 435 - #define pgd_index(address) ((address) >> PGDIR_SHIFT) 436 - 437 - /* to find an entry in a page-table-directory */ 438 - #define pgd_offset(mm, address) \ 439 - ((mm)->pgd + ((address) >> PGDIR_SHIFT)) 440 - 441 - /* to find an entry in a kernel page-table-directory */ 442 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 443 - 444 438 /* Find an entry in the second-level page table.. */ 445 - 446 - #if CONFIG_PGTABLE_LEVELS == 3 447 - #define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)) 448 - #define pmd_offset(dir,address) \ 449 - ((pmd_t *) pud_page_vaddr(*(dir)) + pmd_index(address)) 450 - #else 451 - #define pmd_offset(dir,addr) ((pmd_t *) dir) 452 - #endif 453 - 454 - /* Find an entry in the third-level page table.. */ 455 - #define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE-1)) 456 - #define pte_offset_kernel(pmd, address) \ 457 - ((pte_t *) pmd_page_vaddr(*(pmd)) + pte_index(address)) 458 - #define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address) 459 - #define pte_unmap(pte) do { } while (0) 460 - 461 - #define pte_unmap(pte) do { } while (0) 462 - #define pte_unmap_nested(pte) do { } while (0) 463 439 464 440 extern void paging_init (void); 465 441
+1 -1
arch/parisc/kernel/pci-dma.c
··· 201 201 pgd_clear(dir); 202 202 return; 203 203 } 204 - pmd = pmd_offset(dir, vaddr); 204 + pmd = pmd_offset(pud_offset(p4d_offset(dir, vaddr), vaddr), vaddr); 205 205 vaddr &= ~PGDIR_MASK; 206 206 end = vaddr + size; 207 207 if (end > PGDIR_SIZE)
+3 -17
arch/powerpc/include/asm/book3s/32/pgtable.h
··· 112 112 #define PMD_TABLE_SIZE 0 113 113 #define PUD_TABLE_SIZE 0 114 114 #define PGD_TABLE_SIZE (sizeof(pgd_t) << PGD_INDEX_SIZE) 115 + 116 + /* Bits to mask out from a PMD to get to the PTE page */ 117 + #define PMD_MASKED_BITS (PTE_TABLE_SIZE - 1) 115 118 #endif /* __ASSEMBLY__ */ 116 119 117 120 #define PTRS_PER_PTE (1 << PTE_INDEX_SIZE) ··· 335 332 #define __HAVE_ARCH_PTE_SAME 336 333 #define pte_same(A,B) (((pte_val(A) ^ pte_val(B)) & ~_PAGE_HASHPTE) == 0) 337 334 338 - #define pmd_page_vaddr(pmd) \ 339 - ((unsigned long)__va(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1))) 340 335 #define pmd_page(pmd) \ 341 336 pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT) 342 - 343 - /* to find an entry in a kernel page-table-directory */ 344 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 345 - 346 - /* to find an entry in a page-table-directory */ 347 - #define pgd_index(address) ((address) >> PGDIR_SHIFT) 348 - #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address)) 349 - 350 - /* Find an entry in the third-level page table.. */ 351 - #define pte_index(address) \ 352 - (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) 353 - #define pte_offset_kernel(dir, addr) \ 354 - ((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(addr)) 355 - #define pte_offset_map(dir, addr) pte_offset_kernel((dir), (addr)) 356 - static inline void pte_unmap(pte_t *pte) { } 357 337 358 338 /* 359 339 * Encode and decode a swap entry.
-43
arch/powerpc/include/asm/book3s/64/pgtable.h
··· 1005 1005 /* Pointers in the page table tree are physical addresses */ 1006 1006 #define __pgtable_ptr_val(ptr) __pa(ptr) 1007 1007 1008 - #define pmd_page_vaddr(pmd) __va(pmd_val(pmd) & ~PMD_MASKED_BITS) 1009 1008 #define pud_page_vaddr(pud) __va(pud_val(pud) & ~PUD_MASKED_BITS) 1010 1009 #define p4d_page_vaddr(p4d) __va(p4d_val(p4d) & ~P4D_MASKED_BITS) 1011 - 1012 - static inline unsigned long pgd_index(unsigned long address) 1013 - { 1014 - return (address >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1); 1015 - } 1016 - 1017 - static inline unsigned long pud_index(unsigned long address) 1018 - { 1019 - return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1); 1020 - } 1021 - 1022 - static inline unsigned long pmd_index(unsigned long address) 1023 - { 1024 - return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1); 1025 - } 1026 - 1027 - static inline unsigned long pte_index(unsigned long address) 1028 - { 1029 - return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1); 1030 - } 1031 - 1032 - /* 1033 - * Find an entry in a page-table-directory. We combine the address region 1034 - * (the high order N bits) and the pgd portion of the address. 1035 - */ 1036 - 1037 - #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address)) 1038 - 1039 - #define pud_offset(p4dp, addr) \ 1040 - (((pud_t *) p4d_page_vaddr(*(p4dp))) + pud_index(addr)) 1041 - #define pmd_offset(pudp,addr) \ 1042 - (((pmd_t *) pud_page_vaddr(*(pudp))) + pmd_index(addr)) 1043 - #define pte_offset_kernel(dir,addr) \ 1044 - (((pte_t *) pmd_page_vaddr(*(dir))) + pte_index(addr)) 1045 - 1046 - #define pte_offset_map(dir,addr) pte_offset_kernel((dir), (addr)) 1047 - 1048 - static inline void pte_unmap(pte_t *pte) { } 1049 - 1050 - /* to find an entry in a kernel page-table-directory */ 1051 - /* This now only contains the vmalloc pages */ 1052 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 1053 1010 1054 1011 #define pte_ERROR(e) \ 1055 1012 pr_err("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
+2 -15
arch/powerpc/include/asm/nohash/32/pgtable.h
··· 28 28 #define PMD_TABLE_SIZE 0 29 29 #define PUD_TABLE_SIZE 0 30 30 #define PGD_TABLE_SIZE (sizeof(pgd_t) << PGD_INDEX_SIZE) 31 + 32 + #define PMD_MASKED_BITS (PTE_TABLE_SIZE - 1) 31 33 #endif /* __ASSEMBLY__ */ 32 34 33 35 #define PTRS_PER_PTE (1 << PTE_INDEX_SIZE) ··· 205 203 *pmdp = __pmd(0); 206 204 } 207 205 208 - 209 - /* to find an entry in a kernel page-table-directory */ 210 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 211 - 212 206 /* to find an entry in a page-table-directory */ 213 207 #define pgd_index(address) ((address) >> PGDIR_SHIFT) 214 208 #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address)) ··· 328 330 * of the pte page. -- paulus 329 331 */ 330 332 #ifndef CONFIG_BOOKE 331 - #define pmd_page_vaddr(pmd) \ 332 - ((unsigned long)__va(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1))) 333 333 #define pmd_page(pmd) \ 334 334 pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT) 335 335 #else ··· 336 340 #define pmd_page(pmd) \ 337 341 pfn_to_page((__pa(pmd_val(pmd)) >> PAGE_SHIFT)) 338 342 #endif 339 - 340 - /* Find an entry in the third-level page table.. */ 341 - #define pte_index(address) \ 342 - (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) 343 - #define pte_offset_kernel(dir, addr) \ 344 - (pmd_bad(*(dir)) ? NULL : (pte_t *)pmd_page_vaddr(*(dir)) + \ 345 - pte_index(addr)) 346 - #define pte_offset_map(dir, addr) pte_offset_kernel((dir), (addr)) 347 - static inline void pte_unmap(pte_t *pte) { } 348 343 349 344 /* 350 345 * Encode and decode a swap entry.
-4
arch/powerpc/include/asm/nohash/64/pgtable-4k.h
··· 78 78 79 79 #endif /* !__ASSEMBLY__ */ 80 80 81 - #define pud_offset(p4dp, addr) \ 82 - (((pud_t *) p4d_page_vaddr(*(p4dp))) + \ 83 - (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))) 84 - 85 81 #define pud_ERROR(e) \ 86 82 pr_err("%s:%d: bad pud %08lx.\n", __FILE__, __LINE__, pud_val(e)) 87 83
-22
arch/powerpc/include/asm/nohash/64/pgtable.h
··· 182 182 *p4dp = __p4d(val); 183 183 } 184 184 185 - /* 186 - * Find an entry in a page-table-directory. We combine the address region 187 - * (the high order N bits) and the pgd portion of the address. 188 - */ 189 - #define pgd_index(address) (((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1)) 190 - 191 - #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address)) 192 - 193 - #define pmd_offset(pudp,addr) \ 194 - (((pmd_t *) pud_page_vaddr(*(pudp))) + (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))) 195 - 196 - #define pte_offset_kernel(dir,addr) \ 197 - (((pte_t *) pmd_page_vaddr(*(dir))) + (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))) 198 - 199 - #define pte_offset_map(dir,addr) pte_offset_kernel((dir), (addr)) 200 - 201 - static inline void pte_unmap(pte_t *pte) { } 202 - 203 - /* to find an entry in a kernel page-table-directory */ 204 - /* This now only contains the vmalloc pages */ 205 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 206 - 207 185 /* Atomic PTE updates */ 208 186 static inline unsigned long pte_update(struct mm_struct *mm, 209 187 unsigned long addr,
+7
arch/powerpc/include/asm/pgtable.h
··· 57 57 return __pgprot(pte_flags); 58 58 } 59 59 60 + #ifndef pmd_page_vaddr 61 + static inline unsigned long pmd_page_vaddr(pmd_t pmd) 62 + { 63 + return ((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS)); 64 + } 65 + #define pmd_page_vaddr pmd_page_vaddr 66 + #endif 60 67 /* 61 68 * ZERO_PAGE is a global shared page that is always zero: used 62 69 * for zero-mapped memory areas etc..
-7
arch/riscv/include/asm/pgtable-64.h
··· 70 70 return pfn_to_page(pud_val(pud) >> _PAGE_PFN_SHIFT); 71 71 } 72 72 73 - #define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1)) 74 - 75 - static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr) 76 - { 77 - return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr); 78 - } 79 - 80 73 static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot) 81 74 { 82 75 return __pmd((pfn << _PAGE_PFN_SHIFT) | pgprot_val(prot));
-20
arch/riscv/include/asm/pgtable.h
··· 173 173 return pgd_val(pgd) >> _PAGE_PFN_SHIFT; 174 174 } 175 175 176 - #define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)) 177 - 178 - /* Locate an entry in the page global directory */ 179 - static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr) 180 - { 181 - return mm->pgd + pgd_index(addr); 182 - } 183 - /* Locate an entry in the kernel page global directory */ 184 - #define pgd_offset_k(addr) pgd_offset(&init_mm, (addr)) 185 - 186 176 static inline struct page *pmd_page(pmd_t pmd) 187 177 { 188 178 return pfn_to_page(pmd_val(pmd) >> _PAGE_PFN_SHIFT); ··· 198 208 } 199 209 200 210 #define mk_pte(page, prot) pfn_pte(page_to_pfn(page), prot) 201 - 202 - #define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) 203 - 204 - static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long addr) 205 - { 206 - return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(addr); 207 - } 208 - 209 - #define pte_offset_map(dir, addr) pte_offset_kernel((dir), (addr)) 210 - #define pte_unmap(pte) ((void)(pte)) 211 211 212 212 static inline int pte_present(pte_t pte) 213 213 {
+15 -15
arch/riscv/mm/init.c
··· 235 235 uintptr_t va, phys_addr_t pa, 236 236 phys_addr_t sz, pgprot_t prot) 237 237 { 238 - uintptr_t pte_index = pte_index(va); 238 + uintptr_t pte_idx = pte_index(va); 239 239 240 240 BUG_ON(sz != PAGE_SIZE); 241 241 242 - if (pte_none(ptep[pte_index])) 243 - ptep[pte_index] = pfn_pte(PFN_DOWN(pa), prot); 242 + if (pte_none(ptep[pte_idx])) 243 + ptep[pte_idx] = pfn_pte(PFN_DOWN(pa), prot); 244 244 245 245 246 246 #ifndef __PAGETABLE_PMD_FOLDED ··· 283 283 { 284 284 pte_t *ptep; 285 285 phys_addr_t pte_phys; 286 - uintptr_t pmd_index = pmd_index(va); 286 + uintptr_t pmd_idx = pmd_index(va); 287 287 288 288 if (sz == PMD_SIZE) { 289 - if (pmd_none(pmdp[pmd_index])) 290 - pmdp[pmd_index] = pfn_pmd(PFN_DOWN(pa), prot); 289 + if (pmd_none(pmdp[pmd_idx])) 290 + pmdp[pmd_idx] = pfn_pmd(PFN_DOWN(pa), prot); 291 291 return; 292 292 } 293 293 294 - if (pmd_none(pmdp[pmd_index])) { 294 + if (pmd_none(pmdp[pmd_idx])) { 295 295 pte_phys = alloc_pte(va); 296 - pmdp[pmd_index] = pfn_pmd(PFN_DOWN(pte_phys), PAGE_TABLE); 296 + pmdp[pmd_idx] = pfn_pmd(PFN_DOWN(pte_phys), PAGE_TABLE); 297 297 ptep = get_pte_virt(pte_phys); 298 298 memset(ptep, 0, PAGE_SIZE); 299 299 } else { 300 - pte_phys = PFN_PHYS(_pmd_pfn(pmdp[pmd_index])); 300 + pte_phys = PFN_PHYS(_pmd_pfn(pmdp[pmd_idx])); 301 301 ptep = get_pte_virt(pte_phys); 302 302 } 303 303 ··· 325 325 { 326 326 pgd_next_t *nextp; 327 327 phys_addr_t next_phys; 328 - uintptr_t pgd_index = pgd_index(va); 328 + uintptr_t pgd_idx = pgd_index(va); 329 329 330 330 if (sz == PGDIR_SIZE) { 331 - if (pgd_val(pgdp[pgd_index]) == 0) 332 - pgdp[pgd_index] = pfn_pgd(PFN_DOWN(pa), prot); 331 + if (pgd_val(pgdp[pgd_idx]) == 0) 332 + pgdp[pgd_idx] = pfn_pgd(PFN_DOWN(pa), prot); 333 333 return; 334 334 } 335 335 336 - if (pgd_val(pgdp[pgd_index]) == 0) { 336 + if (pgd_val(pgdp[pgd_idx]) == 0) { 337 337 next_phys = alloc_pgd_next(va); 338 - pgdp[pgd_index] = pfn_pgd(PFN_DOWN(next_phys), PAGE_TABLE); 338 + pgdp[pgd_idx] = pfn_pgd(PFN_DOWN(next_phys), PAGE_TABLE); 339 339 nextp = get_pgd_next_virt(next_phys); 340 340 memset(nextp, 0, PAGE_SIZE); 341 341 } else { 342 - next_phys = PFN_PHYS(_pgd_pfn(pgdp[pgd_index])); 342 + next_phys = PFN_PHYS(_pgd_pfn(pgdp[pgd_idx])); 343 343 nextp = get_pgd_next_virt(next_phys); 344 344 } 345 345
+4 -9
arch/s390/include/asm/pgtable.h
··· 1229 1229 #define p4d_index(address) (((address) >> P4D_SHIFT) & (PTRS_PER_P4D-1)) 1230 1230 #define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1)) 1231 1231 #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) 1232 - #define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE-1)) 1233 1232 1234 1233 #define pmd_deref(pmd) (pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN) 1235 1234 #define pud_deref(pud) (pud_val(pud) & _REGION_ENTRY_ORIGIN) ··· 1259 1260 } 1260 1261 1261 1262 #define pgd_offset(mm, address) pgd_offset_raw(READ_ONCE((mm)->pgd), address) 1262 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 1263 1263 1264 1264 static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address) 1265 1265 { ··· 1273 1275 return (pud_t *) p4d_deref(*p4d) + pud_index(address); 1274 1276 return (pud_t *) p4d; 1275 1277 } 1278 + #define pud_offset pud_offset 1276 1279 1277 1280 static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address) 1278 1281 { ··· 1281 1282 return (pmd_t *) pud_deref(*pud) + pmd_index(address); 1282 1283 return (pmd_t *) pud; 1283 1284 } 1285 + #define pmd_offset pmd_offset 1284 1286 1285 - static inline pte_t *pte_offset(pmd_t *pmd, unsigned long address) 1287 + static inline unsigned long pmd_page_vaddr(pmd_t pmd) 1286 1288 { 1287 - return (pte_t *) pmd_deref(*pmd) + pte_index(address); 1289 + return (unsigned long) pmd_deref(pmd); 1288 1290 } 1289 - 1290 - #define pte_offset_kernel(pmd, address) pte_offset(pmd, address) 1291 - #define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address) 1292 - 1293 - static inline void pte_unmap(pte_t *pte) { } 1294 1291 1295 1292 static inline bool gup_fast_permitted(unsigned long start, unsigned long end) 1296 1293 {
+1 -1
arch/s390/mm/pageattr.c
··· 85 85 { 86 86 pte_t *ptep, new; 87 87 88 - ptep = pte_offset(pmdp, addr); 88 + ptep = pte_offset_kernel(pmdp, addr); 89 89 do { 90 90 new = *ptep; 91 91 if (pte_none(new))
-7
arch/sh/include/asm/pgtable-3level.h
··· 39 39 40 40 /* only used by the stubbed out hugetlb gup code, should never be called */ 41 41 #define pud_page(pud) NULL 42 - 43 - #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) 44 - static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address) 45 - { 46 - return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address); 47 - } 48 - 49 42 #define pud_none(x) (!pud_val(x)) 50 43 #define pud_present(x) (pud_val(x)) 51 44 #define pud_clear(xp) do { set_pud(xp, __pud(0)); } while (0)
+5 -20
arch/sh/include/asm/pgtable_32.h
··· 401 401 return pte; 402 402 } 403 403 404 - #define pmd_page_vaddr(pmd) ((unsigned long)pmd_val(pmd)) 404 + static inline unsigned long pmd_page_vaddr(pmd_t pmd) 405 + { 406 + return (unsigned long)pmd_val(pmd); 407 + } 408 + 405 409 #define pmd_page(pmd) (virt_to_page(pmd_val(pmd))) 406 - 407 - /* to find an entry in a page-table-directory. */ 408 - #define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1)) 409 - #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address)) 410 - 411 - /* to find an entry in a kernel page-table-directory */ 412 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 413 - 414 - #define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1)) 415 - #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) 416 - 417 - /* Find an entry in the third-level page table.. */ 418 - #define pte_index(address) ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) 419 - #define __pte_offset(address) pte_index(address) 420 - 421 - #define pte_offset_kernel(dir, address) \ 422 - ((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(address)) 423 - #define pte_offset_map(dir, address) pte_offset_kernel(dir, address) 424 - #define pte_unmap(pte) do { } while (0) 425 410 426 411 #ifdef CONFIG_X2TLB 427 412 #define pte_ERROR(e) \
+1 -1
arch/sparc/include/asm/pgalloc_64.h
··· 67 67 68 68 #define pmd_populate_kernel(MM, PMD, PTE) pmd_set(MM, PMD, PTE) 69 69 #define pmd_populate(MM, PMD, PTE) pmd_set(MM, PMD, PTE) 70 - #define pmd_pgtable(PMD) ((pte_t *)__pmd_page(PMD)) 70 + #define pmd_pgtable(PMD) ((pte_t *)pmd_page_vaddr(PMD)) 71 71 72 72 void pgtable_free(void *table, bool is_page); 73 73
+7 -25
arch/sparc/include/asm/pgtable_32.h
··· 146 146 return (unsigned long)__nocache_va(v << 4); 147 147 } 148 148 149 + static inline unsigned long pmd_page_vaddr(pmd_t pmd) 150 + { 151 + unsigned long v = pmd_val(pmd) & SRMMU_PTD_PMASK; 152 + return (unsigned long)__nocache_va(v << 4); 153 + } 154 + 149 155 static inline unsigned long pud_page_vaddr(pud_t pud) 150 156 { 151 157 if (srmmu_device_memory(pud_val(pud))) { ··· 321 315 pgprot_val(newprot)); 322 316 } 323 317 324 - #define pgd_index(address) ((address) >> PGDIR_SHIFT) 325 - 326 - /* to find an entry in a page-table-directory */ 327 - #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address)) 328 - 329 - /* to find an entry in a kernel page-table-directory */ 330 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 331 - 332 - /* Find an entry in the second-level page table.. */ 333 - static inline pmd_t *pmd_offset(pud_t * dir, unsigned long address) 334 - { 335 - return (pmd_t *) pud_page_vaddr(*dir) + 336 - ((address >> PMD_SHIFT) & (PTRS_PER_PMD - 1)); 337 - } 338 - 339 - /* Find an entry in the third-level page table.. */ 340 - pte_t *pte_offset_kernel(pmd_t * dir, unsigned long address); 341 - 342 - /* 343 - * This shortcut works on sun4m (and sun4d) because the nocache area is static. 344 - */ 345 - #define pte_offset_map(d, a) pte_offset_kernel(d,a) 346 - #define pte_unmap(pte) do{}while(0) 347 - 348 318 struct seq_file; 349 319 void mmu_info(struct seq_file *m); 350 320 ··· 409 427 410 428 return remap_pfn_range(vma, from, phys_base >> PAGE_SHIFT, size, prot); 411 429 } 412 - #define io_remap_pfn_range io_remap_pfn_range 430 + #define io_remap_pfn_range io_remap_pfn_range 413 431 414 432 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS 415 433 #define ptep_set_access_flags(__vma, __address, __ptep, __entry, __dirty) \
+3 -28
arch/sparc/include/asm/pgtable_64.h
··· 835 835 836 836 #define pud_set(pudp, pmdp) \ 837 837 (pud_val(*(pudp)) = (__pa((unsigned long) (pmdp)))) 838 - static inline unsigned long __pmd_page(pmd_t pmd) 838 + static inline unsigned long pmd_page_vaddr(pmd_t pmd) 839 839 { 840 840 pte_t pte = __pte(pmd_val(pmd)); 841 841 unsigned long pfn; ··· 855 855 return ((unsigned long) __va(pfn << PAGE_SHIFT)); 856 856 } 857 857 858 - #define pmd_page(pmd) virt_to_page((void *)__pmd_page(pmd)) 858 + #define pmd_page(pmd) virt_to_page((void *)pmd_page_vaddr(pmd)) 859 859 #define pud_page(pud) virt_to_page((void *)pud_page_vaddr(pud)) 860 860 #define pmd_clear(pmdp) (pmd_val(*(pmdp)) = 0UL) 861 861 #define pud_present(pud) (pud_val(pud) != 0U) ··· 888 888 889 889 #define p4d_set(p4dp, pudp) \ 890 890 (p4d_val(*(p4dp)) = (__pa((unsigned long) (pudp)))) 891 - 892 - /* to find an entry in a page-table-directory. */ 893 - #define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)) 894 - #define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address)) 895 - 896 - /* to find an entry in a kernel page-table-directory */ 897 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 898 - 899 - /* Find an entry in the third-level page table.. */ 900 - #define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)) 901 - #define pud_offset(p4dp, address) \ 902 - ((pud_t *) p4d_page_vaddr(*(p4dp)) + pud_index(address)) 903 - 904 - /* Find an entry in the second-level page table.. */ 905 - #define pmd_offset(pudp, address) \ 906 - ((pmd_t *) pud_page_vaddr(*(pudp)) + \ 907 - (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))) 908 - 909 - /* Find an entry in the third-level page table.. */ 910 - #define pte_index(address) \ 911 - ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) 912 - #define pte_offset_kernel(dir, address) \ 913 - ((pte_t *) __pmd_page(*(dir)) + pte_index(address)) 914 - #define pte_offset_map(dir, address) pte_offset_kernel((dir), (address)) 915 - #define pte_unmap(pte) do { } while (0) 916 891 917 892 /* We cannot include <linux/mm_types.h> at this point yet: */ 918 893 extern struct mm_struct init_mm; ··· 1053 1078 1054 1079 return remap_pfn_range(vma, from, phys_base >> PAGE_SHIFT, size, prot); 1055 1080 } 1056 - #define io_remap_pfn_range io_remap_pfn_range 1081 + #define io_remap_pfn_range io_remap_pfn_range 1057 1082 1058 1083 static inline unsigned long __untagged_addr(unsigned long start) 1059 1084 {
-4
arch/um/include/asm/pgtable-3level.h
··· 89 89 #define pud_page(pud) phys_to_page(pud_val(pud) & PAGE_MASK) 90 90 #define pud_page_vaddr(pud) ((unsigned long) __va(pud_val(pud) & PAGE_MASK)) 91 91 92 - /* Find an entry in the second-level page table.. */ 93 - #define pmd_offset(pud, address) ((pmd_t *) pud_page_vaddr(*(pud)) + \ 94 - pmd_index(address)) 95 - 96 92 static inline unsigned long pte_pfn(pte_t pte) 97 93 { 98 94 return phys_to_pfn(pte_val(pte));
+15 -52
arch/um/include/asm/pgtable.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 2 + /* 3 3 * Copyright (C) 2000 - 2007 Jeff Dike (jdike@{addtoit,linux.intel}.com) 4 4 * Copyright 2003 PathScale, Inc. 5 5 * Derived from include/asm-i386/pgtable.h ··· 131 131 * Undefined behaviour if not.. 132 132 */ 133 133 static inline int pte_read(pte_t pte) 134 - { 134 + { 135 135 return((pte_get_bits(pte, _PAGE_USER)) && 136 136 !(pte_get_bits(pte, _PAGE_PROTNONE))); 137 137 } ··· 163 163 } 164 164 165 165 static inline int pte_newprot(pte_t pte) 166 - { 166 + { 167 167 return(pte_present(pte) && (pte_get_bits(pte, _PAGE_NEWPROT))); 168 168 } 169 169 ··· 185 185 return(pte); 186 186 } 187 187 188 - static inline pte_t pte_mkold(pte_t pte) 189 - { 188 + static inline pte_t pte_mkold(pte_t pte) 189 + { 190 190 pte_clear_bits(pte, _PAGE_ACCESSED); 191 191 return(pte); 192 192 } 193 193 194 194 static inline pte_t pte_wrprotect(pte_t pte) 195 - { 195 + { 196 196 if (likely(pte_get_bits(pte, _PAGE_RW))) 197 197 pte_clear_bits(pte, _PAGE_RW); 198 198 else 199 199 return pte; 200 - return(pte_mknewprot(pte)); 200 + return(pte_mknewprot(pte)); 201 201 } 202 202 203 203 static inline pte_t pte_mkread(pte_t pte) 204 - { 204 + { 205 205 if (unlikely(pte_get_bits(pte, _PAGE_USER))) 206 206 return pte; 207 207 pte_set_bits(pte, _PAGE_USER); 208 - return(pte_mknewprot(pte)); 208 + return(pte_mknewprot(pte)); 209 209 } 210 210 211 211 static inline pte_t pte_mkdirty(pte_t pte) 212 - { 212 + { 213 213 pte_set_bits(pte, _PAGE_DIRTY); 214 214 return(pte); 215 215 } ··· 220 220 return(pte); 221 221 } 222 222 223 - static inline pte_t pte_mkwrite(pte_t pte) 223 + static inline pte_t pte_mkwrite(pte_t pte) 224 224 { 225 225 if (unlikely(pte_get_bits(pte, _PAGE_RW))) 226 226 return pte; 227 227 pte_set_bits(pte, _PAGE_RW); 228 - return(pte_mknewprot(pte)); 228 + return(pte_mknewprot(pte)); 229 229 } 230 230 231 - static inline pte_t pte_mkuptodate(pte_t pte) 231 + static inline pte_t pte_mkuptodate(pte_t pte) 232 232 { 233 233 pte_clear_bits(pte, _PAGE_NEWPAGE); 234 234 if(pte_present(pte)) 235 235 pte_clear_bits(pte, _PAGE_NEWPROT); 236 - return(pte); 236 + return(pte); 237 237 } 238 238 239 239 static inline pte_t pte_mknewpage(pte_t pte) ··· 288 288 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) 289 289 { 290 290 pte_set_val(pte, (pte_val(pte) & _PAGE_CHG_MASK), newprot); 291 - return pte; 291 + return pte; 292 292 } 293 - 294 - /* 295 - * the pgd page can be thought of an array like this: pgd_t[PTRS_PER_PGD] 296 - * 297 - * this macro returns the index of the entry in the pgd page which would 298 - * control the given virtual address 299 - */ 300 - #define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1)) 301 - 302 - /* 303 - * pgd_offset() returns a (pgd_t *) 304 - * pgd_index() is used get the offset into the pgd page's array of pgd_t's; 305 - */ 306 - #define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address)) 307 - 308 - /* 309 - * a shortcut which implies the use of the kernel's pgd, instead 310 - * of a process's 311 - */ 312 - #define pgd_offset_k(address) pgd_offset(&init_mm, address) 313 293 314 294 /* 315 295 * the pmd page can be thought of an array like this: pmd_t[PTRS_PER_PMD] ··· 298 318 * control the given virtual address 299 319 */ 300 320 #define pmd_page_vaddr(pmd) ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK)) 301 - #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1)) 302 - 303 - #define pmd_page_vaddr(pmd) \ 304 - ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK)) 305 - 306 - /* 307 - * the pte page can be thought of an array like this: pte_t[PTRS_PER_PTE] 308 - * 309 - * this macro returns the index of the entry in the pte page which would 310 - * control the given virtual address 311 - */ 312 - #define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) 313 - #define pte_offset_kernel(dir, address) \ 314 - ((pte_t *) pmd_page_vaddr(*(dir)) + pte_index(address)) 315 - #define pte_offset_map(dir, address) \ 316 - ((pte_t *)page_address(pmd_page(*(dir))) + pte_index(address)) 317 - #define pte_unmap(pte) do { } while (0) 318 321 319 322 struct mm_struct; 320 323 extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr);
-17
arch/unicore32/include/asm/pgtable.h
···
153 153 #define pte_none(pte) (!pte_val(pte))
154 154 #define pte_clear(mm, addr, ptep) set_pte(ptep, __pte(0))
155 155 #define pte_page(pte) (pfn_to_page(pte_pfn(pte)))
156 - #define pte_offset_kernel(dir, addr) (pmd_page_vaddr(*(dir)) \
157 - + __pte_index(addr))
158 -
159 - #define pte_offset_map(dir, addr) (pmd_page_vaddr(*(dir)) \
160 - + __pte_index(addr))
161 - #define pte_unmap(pte) do { } while (0)
162 156
163 157 #define set_pte(ptep, pte) cpu_set_pte(ptep, pte)
···
214 220 * and a page entry and page directory to the page they refer to.
215 221 */
216 222 #define mk_pte(page, prot) pfn_pte(page_to_pfn(page), prot)
217 -
218 - /* to find an entry in a page-table-directory */
219 - #define pgd_index(addr) ((addr) >> PGDIR_SHIFT)
220 -
221 - #define pgd_offset(mm, addr) ((mm)->pgd+pgd_index(addr))
222 -
223 - /* to find an entry in a kernel page-table-directory */
224 - #define pgd_offset_k(addr) pgd_offset(&init_mm, addr)
225 -
226 - /* Find an entry in the third-level page table.. */
227 - #define __pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
228 223
229 224 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
230 225 {
-71
arch/x86/include/asm/pgtable.h
···
837 837 #define pmd_page(pmd) pfn_to_page(pmd_pfn(pmd))
838 838
839 839 /*
840 - * the pmd page can be thought of an array like this: pmd_t[PTRS_PER_PMD]
841 - *
842 - * this macro returns the index of the entry in the pmd page which would
843 - * control the given virtual address
844 - */
845 - static inline unsigned long pmd_index(unsigned long address)
846 - {
847 - return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
848 - }
849 -
850 - /*
851 840 * Conversion functions: convert a page and protection to a page entry,
852 841 * and a page entry and page directory to the page they refer to.
853 842 *
···
844 855 * to linux/mm.h:page_to_nid())
845 856 */
846 857 #define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
847 -
848 - /*
849 - * the pte page can be thought of an array like this: pte_t[PTRS_PER_PTE]
850 - *
851 - * this function returns the index of the entry in the pte page which would
852 - * control the given virtual address
853 - *
854 - * Also define macro so we can test if pte_index is defined for arch.
855 - */
856 - #define pte_index pte_index
857 - static inline unsigned long pte_index(unsigned long address)
858 - {
859 - return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
860 - }
861 -
862 - static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
863 - {
864 - return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
865 - }
866 858
867 859 static inline int pmd_bad(pmd_t pmd)
868 860 {
···
877 907 */
878 908 #define pud_page(pud) pfn_to_page(pud_pfn(pud))
879 909
880 - /* Find an entry in the second-level page table.. */
881 - static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
882 - {
883 - return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
884 - }
885 -
886 910 #define pud_leaf pud_large
887 911 static inline int pud_large(pud_t pud)
888 912 {
···
895 931 return 0;
896 932 }
897 933 #endif /* CONFIG_PGTABLE_LEVELS > 2 */
898 -
899 - static inline unsigned long pud_index(unsigned long address)
900 - {
901 - return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
902 - }
903 934
904 935 #if CONFIG_PGTABLE_LEVELS > 3
905 936 static inline int p4d_none(p4d_t p4d)
···
917 958 * linux/mmzone.h's __section_mem_map_addr() definition:
918 959 */
919 960 #define p4d_page(p4d) pfn_to_page(p4d_pfn(p4d))
920 -
921 - /* Find an entry in the third-level page table.. */
922 - static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
923 - {
924 - return (pud_t *)p4d_page_vaddr(*p4d) + pud_index(address);
925 - }
926 961
927 962 static inline int p4d_bad(p4d_t p4d)
928 963 {
···
989 1036 #endif /* CONFIG_PGTABLE_LEVELS > 4 */
990 1037
991 1038 #endif /* __ASSEMBLY__ */
992 -
993 - /*
994 - * the pgd page can be thought of an array like this: pgd_t[PTRS_PER_PGD]
995 - *
996 - * this macro returns the index of the entry in the pgd page which would
997 - * control the given virtual address
998 - */
999 - #define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
1000 -
1001 - /*
1002 - * pgd_offset() returns a (pgd_t *)
1003 - * pgd_index() is used get the offset into the pgd page's array of pgd_t's;
1004 - */
1005 - #define pgd_offset_pgd(pgd, address) (pgd + pgd_index((address)))
1006 - /*
1007 - * a shortcut to get a pgd_t in a given mm
1008 - */
1009 - #define pgd_offset(mm, address) pgd_offset_pgd((mm)->pgd, (address))
1010 - /*
1011 - * a shortcut which implies the use of the kernel's pgd, instead
1012 - * of a process's
1013 - */
1014 - #define pgd_offset_k(address) pgd_offset(&init_mm, (address))
1015 -
1016 1039
1017 1040 #define KERNEL_PGD_BOUNDARY pgd_index(PAGE_OFFSET)
1018 1041 #define KERNEL_PGD_PTRS (PTRS_PER_PGD - KERNEL_PGD_BOUNDARY)
-11
arch/x86/include/asm/pgtable_32.h
···
45 45 # include <asm/pgtable-2level.h>
46 46 #endif
47 47
48 - #if defined(CONFIG_HIGHPTE)
49 - #define pte_offset_map(dir, address) \
50 - ((pte_t *)kmap_atomic(pmd_page(*(dir))) + \
51 - pte_index((address)))
52 - #define pte_unmap(pte) kunmap_atomic((pte))
53 - #else
54 - #define pte_offset_map(dir, address) \
55 - ((pte_t *)page_address(pmd_page(*(dir))) + pte_index((address)))
56 - #define pte_unmap(pte) do { } while (0)
57 - #endif
58 -
59 48 /* Clear a kernel PTE and flush it from the TLB */
60 49 #define kpte_clear_flush(ptep, vaddr) \
61 50 do { \
-4
arch/x86/include/asm/pgtable_64.h
···
186 186
187 187 /* PTE - Level 1 access. */
188 188
189 - /* x86-64 always has all page tables mapped. */
190 - #define pte_offset_map(dir, address) pte_offset_kernel((dir), (address))
191 - #define pte_unmap(pte) ((void)(pte))/* NOP */
192 -
193 189 /*
194 190 * Encode and de-code a swap entry
195 191 *
+1 -17
arch/xtensa/include/asm/pgtable.h
···
267 267 static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
268 268 static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
269 269
270 - static inline pte_t pte_wrprotect(pte_t pte)
270 + static inline pte_t pte_wrprotect(pte_t pte)
271 271 { pte_val(pte) &= ~(_PAGE_WRITABLE | _PAGE_HW_WRITE); return pte; }
272 272 static inline pte_t pte_mkclean(pte_t pte)
273 273 { pte_val(pte) &= ~(_PAGE_DIRTY | _PAGE_HW_WRITE); return pte; }
···
358 358 pte_t pte = *ptep;
359 359 update_pte(ptep, pte_wrprotect(pte));
360 360 }
361 -
362 - /* to find an entry in a kernel page-table-directory */
363 - #define pgd_offset_k(address) pgd_offset(&init_mm, address)
364 -
365 - /* to find an entry in a page-table-directory */
366 - #define pgd_offset(mm,address) ((mm)->pgd + pgd_index(address))
367 -
368 - #define pgd_index(address) ((address) >> PGDIR_SHIFT)
369 -
370 - /* Find an entry in the third-level page table.. */
371 - #define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
372 - #define pte_offset_kernel(dir,addr) \
373 - ((pte_t*) pmd_page_vaddr(*(dir)) + pte_index(addr))
374 - #define pte_offset_map(dir,addr) pte_offset_kernel((dir),(addr))
375 - #define pte_unmap(pte) do { } while (0)
376 -
377 361
378 362 /*
379 363 * Encode and decode a swap and file entry.
+1
include/asm-generic/pgtable-nopmd.h
···
45 45 {
46 46 return (pmd_t *)pud;
47 47 }
48 + #define pmd_offset pmd_offset
48 49
49 50 #define pmd_val(x) (pud_val((x).pud))
50 51 #define __pmd(x) ((pmd_t) { __pud(x) } )
+1
include/asm-generic/pgtable-nopud.h
···
43 43 {
44 44 return (pud_t *)p4d;
45 45 }
46 + #define pud_offset pud_offset
46 47
47 48 #define pud_val(x) (p4d_val((x).p4d))
48 49 #define __pud(x) ((pud_t) { __p4d(x) })
+91
include/linux/pgtable.h
···
29 29 #endif
30 30
31 31 /*
32 + * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
33 + *
34 + * The pXx_index() functions return the index of the entry in the page
35 + * table page which would control the given virtual address
36 + *
37 + * As these functions may be used by the same code for different levels of
38 + * the page table folding, they are always available, regardless of
39 + * CONFIG_PGTABLE_LEVELS value. For the folded levels they simply return 0
40 + * because in such cases PTRS_PER_PxD equals 1.
41 + */
42 +
43 + static inline unsigned long pte_index(unsigned long address)
44 + {
45 + return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
46 + }
47 +
48 + #ifndef pmd_index
49 + static inline unsigned long pmd_index(unsigned long address)
50 + {
51 + return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
52 + }
53 + #define pmd_index pmd_index
54 + #endif
55 +
56 + #ifndef pud_index
57 + static inline unsigned long pud_index(unsigned long address)
58 + {
59 + return (address >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
60 + }
61 + #define pud_index pud_index
62 + #endif
63 +
64 + #ifndef pgd_index
65 + /* Must be a compile-time constant, so implement it as a macro */
66 + #define pgd_index(a) (((a) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
67 + #endif
68 +
69 + #ifndef pte_offset_kernel
70 + static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
71 + {
72 + return (pte_t *)pmd_page_vaddr(*pmd) + pte_index(address);
73 + }
74 + #define pte_offset_kernel pte_offset_kernel
75 + #endif
76 +
77 + #if defined(CONFIG_HIGHPTE)
78 + #define pte_offset_map(dir, address) \
79 + ((pte_t *)kmap_atomic(pmd_page(*(dir))) + \
80 + pte_index((address)))
81 + #define pte_unmap(pte) kunmap_atomic((pte))
82 + #else
83 + #define pte_offset_map(dir, address) pte_offset_kernel((dir), (address))
84 + #define pte_unmap(pte) ((void)(pte)) /* NOP */
85 + #endif
86 +
87 + /* Find an entry in the second-level page table.. */
88 + #ifndef pmd_offset
89 + static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
90 + {
91 + return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
92 + }
93 + #define pmd_offset pmd_offset
94 + #endif
95 +
96 + #ifndef pud_offset
97 + static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
98 + {
99 + return (pud_t *)p4d_page_vaddr(*p4d) + pud_index(address);
100 + }
101 + #define pud_offset pud_offset
102 + #endif
103 +
104 + static inline pgd_t *pgd_offset_pgd(pgd_t *pgd, unsigned long address)
105 + {
106 + return (pgd + pgd_index(address));
107 + };
108 +
109 + /*
110 + * a shortcut to get a pgd_t in a given mm
111 + */
112 + #ifndef pgd_offset
113 + #define pgd_offset(mm, address) pgd_offset_pgd((mm)->pgd, (address))
114 + #endif
115 +
116 + /*
117 + * a shortcut which implies the use of the kernel's pgd, instead
118 + * of a process's
119 + */
120 + #define pgd_offset_k(address) pgd_offset(&init_mm, (address))
121 +
122 + /*
32 123 * In many cases it is known that a virtual address is mapped at PMD or PTE
33 124 * level, so instead of traversing all the page table levels, we can get a
34 125 * pointer to the PMD entry in user or kernel page table or translate a virtual