
arm64: MMU definitions

The virtual memory layout is described in
Documentation/arm64/memory.txt. This patch adds the MMU definitions for
the 4KB and 64KB translation table configurations. SECTION_SIZE is
2MB with the 4KB page configuration and 512MB with the 64KB one.

PHYS_OFFSET is calculated at run-time and stored in a variable (no
run-time code patching at this stage).

In the current implementation, both user and kernel address spaces are
512G (39-bit) each with a maximum of 256G for the RAM linear mapping.
Linux uses 3 levels of translation tables with the 4K page configuration
and 2 levels with the 64K configuration. Extending the memory space
beyond 39-bit with the 4K pages or 42-bit with 64K pages requires an
additional level of translation tables.
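As a quick illustration of the 4KB/39-bit split described above (not part of the patch — a plain user-space C sketch), each of the three levels consumes 9 bits of the virtual address, mirroring the pgd/pmd/pte index macros introduced later in pgtable.h:

```c
/* User-space sketch: decompose a 39-bit VA into 4KB-granule table indices.
 * 9 bits per level, 12 bits of in-page offset. Not kernel code. */
#include <assert.h>
#include <stdint.h>

static unsigned long pte_index(uint64_t va) { return (va >> 12) & 0x1ff; } /* bits 20:12 */
static unsigned long pmd_index(uint64_t va) { return (va >> 21) & 0x1ff; } /* bits 29:21 */
static unsigned long pgd_index(uint64_t va) { return (va >> 30) & 0x1ff; } /* bits 38:30 */
```

With 9 + 9 + 9 + 12 = 39 bits, a single PGD of 512 entries covers the full 512GB space, which is why a fourth level would be needed to go beyond 39 bits.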

The SPARSEMEM configuration is global to all AArch64 platforms and
allows for 1GB sections with SPARSEMEM_VMEMMAP enabled by default.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>

10 files changed, 912 insertions(+)
Documentation/arm64/memory.txt | 73 +
Memory Layout on AArch64 Linux
==============================

Author: Catalin Marinas <catalin.marinas@arm.com>
Date  : 20 February 2012

This document describes the virtual memory layout used by the AArch64
Linux kernel. The architecture allows up to 4 levels of translation
tables with a 4KB page size and up to 3 levels with a 64KB page size.

AArch64 Linux uses 3 levels of translation tables with the 4KB page
configuration, allowing 39-bit (512GB) virtual addresses for both user
and kernel. With 64KB pages, only 2 levels of translation tables are
used but the memory layout is the same.

User addresses have bits 63:39 set to 0 while the kernel addresses have
the same bits set to 1. TTBRx selection is given by bit 63 of the
virtual address. The swapper_pg_dir contains only kernel (global)
mappings while the user pgd contains only user (non-global) mappings.
The swapper_pg_dir address is written to TTBR1 and never written to
TTBR0.


AArch64 Linux memory layout:

Start			End			Size		Use
-----------------------------------------------------------------------
0000000000000000	0000007fffffffff	 512GB		user

ffffff8000000000	ffffffbbfffcffff	~240GB		vmalloc

ffffffbbfffd0000	ffffffbbfffdffff	  64KB		[guard page]

ffffffbbfffe0000	ffffffbbfffeffff	  64KB		PCI I/O space

ffffffbbffff0000	ffffffbbffffffff	  64KB		[guard page]

ffffffbc00000000	ffffffbdffffffff	   8GB		vmemmap

ffffffbe00000000	ffffffbffbffffff	  ~8GB		[guard, future vmemmap]

ffffffbffc000000	ffffffbfffffffff	  64MB		modules

ffffffc000000000	ffffffffffffffff	 256GB		memory


Translation table lookup with 4KB pages:

+--------+--------+--------+--------+--------+--------+--------+--------+
|63    56|55    48|47    40|39    32|31    24|23    16|15     8|7      0|
+--------+--------+--------+--------+--------+--------+--------+--------+
 |                 |         |         |         |         |
 |                 |         |         |         |         v
 |                 |         |         |         |   [11:0]  in-page offset
 |                 |         |         |         +-> [20:12] L3 index
 |                 |         |         +-----------> [29:21] L2 index
 |                 |         +---------------------> [38:30] L1 index
 |                 +-------------------------------> [47:39] L0 index (not used)
 +-------------------------------------------------> [63] TTBR0/1


Translation table lookup with 64KB pages:

+--------+--------+--------+--------+--------+--------+--------+--------+
|63    56|55    48|47    40|39    32|31    24|23    16|15     8|7      0|
+--------+--------+--------+--------+--------+--------+--------+--------+
 |                 |    |               |              |
 |                 |    |               |              v
 |                 |    |               |            [15:0]  in-page offset
 |                 |    |               +----------> [28:16] L3 index
 |                 |    +--------------------------> [41:29] L2 index (only 38:29 used)
 |                 +-------------------------------> [47:42] L1 index (not used)
 +-------------------------------------------------> [63] TTBR0/1
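The sizes in the layout table follow directly from the start/end addresses; a minimal user-space C check (not part of the patch, using addresses from the table above) verifies a few of them:

```c
/* User-space sketch: verify region sizes quoted in the AArch64 layout
 * table. End addresses in the table are inclusive. Not kernel code. */
#include <assert.h>
#include <stdint.h>

static uint64_t region_size(uint64_t start, uint64_t end_inclusive)
{
	return end_inclusive - start + 1;
}
```

For example, the modules window 0xffffffbffc000000..0xffffffbfffffffff works out to exactly 64MB, and the linear RAM mapping at the top of the address space to 256GB.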
arch/arm64/include/asm/memory.h | 144 +
/*
 * Based on arch/arm/include/asm/memory.h
 *
 * Copyright (C) 2000-2002 Russell King
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 *
 * Note: this file should not be included by non-asm/.h files
 */
#ifndef __ASM_MEMORY_H
#define __ASM_MEMORY_H

#include <linux/compiler.h>
#include <linux/const.h>
#include <linux/types.h>
#include <asm/sizes.h>

/*
 * Allow for constants defined here to be used from assembly code
 * by prepending the UL suffix only with actual C code compilation.
 */
#define UL(x) _AC(x, UL)

/*
 * PAGE_OFFSET - the virtual address of the start of the kernel image.
 * VA_BITS - the maximum number of bits for virtual addresses.
 * TASK_SIZE - the maximum size of a user space task.
 * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
 * The module space lives between the addresses given by TASK_SIZE
 * and PAGE_OFFSET - it must be within 128MB of the kernel text.
 */
#define PAGE_OFFSET		UL(0xffffffc000000000)
#define MODULES_END		(PAGE_OFFSET)
#define MODULES_VADDR		(MODULES_END - SZ_64M)
#define VA_BITS			(39)
#define TASK_SIZE_64		(UL(1) << VA_BITS)

#ifdef CONFIG_COMPAT
#define TASK_SIZE_32		UL(0x100000000)
#define TASK_SIZE		(test_thread_flag(TIF_32BIT) ? \
				TASK_SIZE_32 : TASK_SIZE_64)
#else
#define TASK_SIZE		TASK_SIZE_64
#endif /* CONFIG_COMPAT */

#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(TASK_SIZE / 4))

#if TASK_SIZE_64 > MODULES_VADDR
#error Top of 64-bit user space clashes with start of module space
#endif

/*
 * Physical vs virtual RAM address space conversion.  These are
 * private definitions which should NOT be used outside memory.h
 * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
 */
#define __virt_to_phys(x)	(((phys_addr_t)(x) - PAGE_OFFSET + PHYS_OFFSET))
#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))

/*
 * Convert a physical address to a Page Frame Number and back
 */
#define __phys_to_pfn(paddr)	((unsigned long)((paddr) >> PAGE_SHIFT))
#define __pfn_to_phys(pfn)	((phys_addr_t)(pfn) << PAGE_SHIFT)

/*
 * Convert a page to/from a physical address
 */
#define page_to_phys(page)	(__pfn_to_phys(page_to_pfn(page)))
#define phys_to_page(phys)	(pfn_to_page(__phys_to_pfn(phys)))

/*
 * Memory types available.
 */
#define MT_DEVICE_nGnRnE	0
#define MT_DEVICE_nGnRE		1
#define MT_DEVICE_GRE		2
#define MT_NORMAL_NC		3
#define MT_NORMAL		4

#ifndef __ASSEMBLY__

extern phys_addr_t		memstart_addr;
/* PHYS_OFFSET - the physical address of the start of memory. */
#define PHYS_OFFSET		({ memstart_addr; })

/*
 * PFNs are used to describe any physical page; this means
 * PFN 0 == physical address 0.
 *
 * This is the PFN of the first RAM page in the kernel
 * direct-mapped view.  We assume this is the first page
 * of RAM in the mem_map as well.
 */
#define PHYS_PFN_OFFSET		(PHYS_OFFSET >> PAGE_SHIFT)

/*
 * Note: Drivers should NOT use these.  They are the wrong
 * translation for translating DMA addresses.  Use the driver
 * DMA support - see dma-mapping.h.
 */
static inline phys_addr_t virt_to_phys(const volatile void *x)
{
	return __virt_to_phys((unsigned long)(x));
}

static inline void *phys_to_virt(phys_addr_t x)
{
	return (void *)(__phys_to_virt(x));
}

/*
 * Drivers should NOT use these either.
 */
#define __pa(x)			__virt_to_phys((unsigned long)(x))
#define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
#define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)

/*
 *  virt_to_page(k)	convert a _valid_ virtual address to struct page *
 *  virt_addr_valid(k)	indicates whether a virtual address is valid
 */
#define ARCH_PFN_OFFSET		PHYS_PFN_OFFSET

#define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define virt_addr_valid(kaddr)	(((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
				 ((void *)(kaddr) < (void *)high_memory))

#endif

#include <asm-generic/memory_model.h>

#endif
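The linear-map conversion in memory.h is a fixed-offset translation. A user-space sketch (not kernel code; the 0x80000000 RAM base here is an assumed example value, since PHYS_OFFSET is only known at boot) shows the arithmetic:

```c
/* User-space sketch of the __virt_to_phys/__phys_to_virt arithmetic.
 * PHYS_OFFSET is assumed to be 0x80000000 purely for illustration. */
#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET 0xffffffc000000000UL
#define PHYS_OFFSET 0x0000000080000000UL	/* assumed RAM base */

static uint64_t virt_to_phys(uint64_t va) { return va - PAGE_OFFSET + PHYS_OFFSET; }
static uint64_t phys_to_virt(uint64_t pa) { return pa - PHYS_OFFSET + PAGE_OFFSET; }
```

The two functions are exact inverses, and the start of the linear map (PAGE_OFFSET) always translates to the start of RAM (PHYS_OFFSET).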
arch/arm64/include/asm/mmu.h | 30 +
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_MMU_H
#define __ASM_MMU_H

typedef struct {
	unsigned int id;
	raw_spinlock_t id_lock;
	void *vdso;
} mm_context_t;

#define ASID(mm)	((mm)->context.id & 0xffff)

extern void paging_init(void);
extern void setup_mm_for_reboot(void);

#endif
arch/arm64/include/asm/pgtable-2level-hwdef.h | 43 +
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PGTABLE_2LEVEL_HWDEF_H
#define __ASM_PGTABLE_2LEVEL_HWDEF_H

/*
 * With LPAE and 64KB pages, there are 2 levels of page tables. Each level has
 * 8192 entries of 8 bytes each, occupying a 64KB page. Levels 0 and 1 are not
 * used. The 2nd level table (PGD for Linux) can cover a range of 4TB, each
 * entry representing 512MB. The user and kernel address spaces are limited to
 * 512GB and therefore we only use 1024 entries in the PGD.
 */
#define PTRS_PER_PTE		8192
#define PTRS_PER_PGD		1024

/*
 * PGDIR_SHIFT determines the size a top-level page table entry can map.
 */
#define PGDIR_SHIFT		29
#define PGDIR_SIZE		(_AC(1, UL) << PGDIR_SHIFT)
#define PGDIR_MASK		(~(PGDIR_SIZE-1))

/*
 * section address mask and size definitions.
 */
#define SECTION_SHIFT		29
#define SECTION_SIZE		(_AC(1, UL) << SECTION_SHIFT)
#define SECTION_MASK		(~(SECTION_SIZE-1))

#endif
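The numbers in the comment above can be checked directly (a user-space C sketch, not part of the patch): a PGDIR_SHIFT of 29 gives 512MB per PGD entry, and 1024 such entries cover exactly the 512GB (2^39) address space.

```c
/* User-space sketch: sanity-check the 64KB-page PGD geometry. */
#include <assert.h>
#include <stdint.h>

#define PGDIR_SHIFT	29
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)	/* per-entry coverage */
#define PTRS_PER_PGD	1024			/* entries actually used */
```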
arch/arm64/include/asm/pgtable-2level-types.h | 60 +
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PGTABLE_2LEVEL_TYPES_H
#define __ASM_PGTABLE_2LEVEL_TYPES_H

typedef u64 pteval_t;
typedef u64 pgdval_t;
typedef pgdval_t pmdval_t;

#undef STRICT_MM_TYPECHECKS

#ifdef STRICT_MM_TYPECHECKS

/*
 * These are used to make use of C type-checking..
 */
typedef struct { pteval_t pte; } pte_t;
typedef struct { pgdval_t pgd; } pgd_t;
typedef struct { pteval_t pgprot; } pgprot_t;

#define pte_val(x)	((x).pte)
#define pgd_val(x)	((x).pgd)
#define pgprot_val(x)	((x).pgprot)

#define __pte(x)	((pte_t) { (x) } )
#define __pgd(x)	((pgd_t) { (x) } )
#define __pgprot(x)	((pgprot_t) { (x) } )

#else	/* !STRICT_MM_TYPECHECKS */

typedef pteval_t pte_t;
typedef pgdval_t pgd_t;
typedef pteval_t pgprot_t;

#define pte_val(x)	(x)
#define pgd_val(x)	(x)
#define pgprot_val(x)	(x)

#define __pte(x)	(x)
#define __pgd(x)	(x)
#define __pgprot(x)	(x)

#endif /* STRICT_MM_TYPECHECKS */

#include <asm-generic/pgtable-nopmd.h>

#endif /* __ASM_PGTABLE_2LEVEL_TYPES_H */
arch/arm64/include/asm/pgtable-3level-hwdef.h | 50 +
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PGTABLE_3LEVEL_HWDEF_H
#define __ASM_PGTABLE_3LEVEL_HWDEF_H

/*
 * With LPAE and 4KB pages, there are 3 levels of page tables. Each level has
 * 512 entries of 8 bytes each, occupying a 4K page. The first level table
 * covers a range of 512GB, each entry representing 1GB. The user and kernel
 * address spaces are limited to 512GB each.
 */
#define PTRS_PER_PTE		512
#define PTRS_PER_PMD		512
#define PTRS_PER_PGD		512

/*
 * PGDIR_SHIFT determines the size a top-level page table entry can map.
 */
#define PGDIR_SHIFT		30
#define PGDIR_SIZE		(_AC(1, UL) << PGDIR_SHIFT)
#define PGDIR_MASK		(~(PGDIR_SIZE-1))

/*
 * PMD_SHIFT determines the size a middle-level page table entry can map.
 */
#define PMD_SHIFT		21
#define PMD_SIZE		(_AC(1, UL) << PMD_SHIFT)
#define PMD_MASK		(~(PMD_SIZE-1))

/*
 * section address mask and size definitions.
 */
#define SECTION_SHIFT		21
#define SECTION_SIZE		(_AC(1, UL) << SECTION_SHIFT)
#define SECTION_MASK		(~(SECTION_SIZE-1))

#endif
arch/arm64/include/asm/pgtable-3level-types.h | 66 +
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PGTABLE_3LEVEL_TYPES_H
#define __ASM_PGTABLE_3LEVEL_TYPES_H

typedef u64 pteval_t;
typedef u64 pmdval_t;
typedef u64 pgdval_t;

#undef STRICT_MM_TYPECHECKS

#ifdef STRICT_MM_TYPECHECKS

/*
 * These are used to make use of C type-checking..
 */
typedef struct { pteval_t pte; } pte_t;
typedef struct { pmdval_t pmd; } pmd_t;
typedef struct { pgdval_t pgd; } pgd_t;
typedef struct { pteval_t pgprot; } pgprot_t;

#define pte_val(x)	((x).pte)
#define pmd_val(x)	((x).pmd)
#define pgd_val(x)	((x).pgd)
#define pgprot_val(x)	((x).pgprot)

#define __pte(x)	((pte_t) { (x) } )
#define __pmd(x)	((pmd_t) { (x) } )
#define __pgd(x)	((pgd_t) { (x) } )
#define __pgprot(x)	((pgprot_t) { (x) } )

#else	/* !STRICT_MM_TYPECHECKS */

typedef pteval_t pte_t;
typedef pmdval_t pmd_t;
typedef pgdval_t pgd_t;
typedef pteval_t pgprot_t;

#define pte_val(x)	(x)
#define pmd_val(x)	(x)
#define pgd_val(x)	(x)
#define pgprot_val(x)	(x)

#define __pte(x)	(x)
#define __pmd(x)	(x)
#define __pgd(x)	(x)
#define __pgprot(x)	(x)

#endif /* STRICT_MM_TYPECHECKS */

#include <asm-generic/pgtable-nopud.h>

#endif /* __ASM_PGTABLE_3LEVEL_TYPES_H */
arch/arm64/include/asm/pgtable-hwdef.h | 94 +
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PGTABLE_HWDEF_H
#define __ASM_PGTABLE_HWDEF_H

#ifdef CONFIG_ARM64_64K_PAGES
#include <asm/pgtable-2level-hwdef.h>
#else
#include <asm/pgtable-3level-hwdef.h>
#endif

/*
 * Hardware page table definitions.
 *
 * Level 2 descriptor (PMD).
 */
#define PMD_TYPE_MASK		(_AT(pmdval_t, 3) << 0)
#define PMD_TYPE_FAULT		(_AT(pmdval_t, 0) << 0)
#define PMD_TYPE_TABLE		(_AT(pmdval_t, 3) << 0)
#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)

/*
 * Section
 */
#define PMD_SECT_S		(_AT(pmdval_t, 3) << 8)
#define PMD_SECT_AF		(_AT(pmdval_t, 1) << 10)
#define PMD_SECT_NG		(_AT(pmdval_t, 1) << 11)
#define PMD_SECT_XN		(_AT(pmdval_t, 1) << 54)

/*
 * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
 */
#define PMD_ATTRINDX(t)		(_AT(pmdval_t, (t)) << 2)
#define PMD_ATTRINDX_MASK	(_AT(pmdval_t, 7) << 2)

/*
 * Level 3 descriptor (PTE).
 */
#define PTE_TYPE_MASK		(_AT(pteval_t, 3) << 0)
#define PTE_TYPE_FAULT		(_AT(pteval_t, 0) << 0)
#define PTE_TYPE_PAGE		(_AT(pteval_t, 3) << 0)
#define PTE_USER		(_AT(pteval_t, 1) << 6)		/* AP[1] */
#define PTE_RDONLY		(_AT(pteval_t, 1) << 7)		/* AP[2] */
#define PTE_SHARED		(_AT(pteval_t, 3) << 8)		/* SH[1:0], inner shareable */
#define PTE_AF			(_AT(pteval_t, 1) << 10)	/* Access Flag */
#define PTE_NG			(_AT(pteval_t, 1) << 11)	/* nG */
#define PTE_XN			(_AT(pteval_t, 1) << 54)	/* XN */

/*
 * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
 */
#define PTE_ATTRINDX(t)		(_AT(pteval_t, (t)) << 2)
#define PTE_ATTRINDX_MASK	(_AT(pteval_t, 7) << 2)

/*
 * 40-bit physical address supported.
 */
#define PHYS_MASK_SHIFT		(40)
#define PHYS_MASK		((UL(1) << PHYS_MASK_SHIFT) - 1)

/*
 * TCR flags.
 */
#define TCR_TxSZ(x)		(((UL(64) - (x)) << 16) | ((UL(64) - (x)) << 0))
#define TCR_IRGN_NC		((UL(0) << 8) | (UL(0) << 24))
#define TCR_IRGN_WBWA		((UL(1) << 8) | (UL(1) << 24))
#define TCR_IRGN_WT		((UL(2) << 8) | (UL(2) << 24))
#define TCR_IRGN_WBnWA		((UL(3) << 8) | (UL(3) << 24))
#define TCR_IRGN_MASK		((UL(3) << 8) | (UL(3) << 24))
#define TCR_ORGN_NC		((UL(0) << 10) | (UL(0) << 26))
#define TCR_ORGN_WBWA		((UL(1) << 10) | (UL(1) << 26))
#define TCR_ORGN_WT		((UL(2) << 10) | (UL(2) << 26))
#define TCR_ORGN_WBnWA		((UL(3) << 10) | (UL(3) << 26))
#define TCR_ORGN_MASK		((UL(3) << 10) | (UL(3) << 26))
#define TCR_SHARED		((UL(3) << 12) | (UL(3) << 28))
#define TCR_TG0_64K		(UL(1) << 14)
#define TCR_TG1_64K		(UL(1) << 30)
#define TCR_IPS_40BIT		(UL(2) << 32)
#define TCR_ASID16		(UL(1) << 36)

#endif
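TCR_TxSZ encodes the region size as 64 minus the number of VA bits, written into both the T0SZ (bits 5:0) and T1SZ (bits 21:16) fields. A user-space C check (not part of the patch) shows that the 39-bit configuration yields T0SZ = T1SZ = 25:

```c
/* User-space sketch: the TCR_TxSZ encoding from pgtable-hwdef.h.
 * TxSZ = 64 - VA_BITS, placed in bits 5:0 (T0SZ) and 21:16 (T1SZ). */
#include <assert.h>
#include <stdint.h>

#define TCR_TxSZ(x)	(((64UL - (x)) << 16) | ((64UL - (x)) << 0))
```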
arch/arm64/include/asm/pgtable.h | 328 +
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_PGTABLE_H
#define __ASM_PGTABLE_H

#include <asm/proc-fns.h>

#include <asm/memory.h>
#include <asm/pgtable-hwdef.h>

/*
 * Software defined PTE bits definition.
 */
#define PTE_VALID		(_AT(pteval_t, 1) << 0)	/* pte_present() check */
#define PTE_FILE		(_AT(pteval_t, 1) << 2)	/* only when !pte_present() */
#define PTE_DIRTY		(_AT(pteval_t, 1) << 55)
#define PTE_SPECIAL		(_AT(pteval_t, 1) << 56)

/*
 * VMALLOC and SPARSEMEM_VMEMMAP ranges.
 */
#define VMALLOC_START		UL(0xffffff8000000000)
#define VMALLOC_END		(PAGE_OFFSET - UL(0x400000000) - SZ_64K)

#define vmemmap			((struct page *)(VMALLOC_END + SZ_64K))

#define FIRST_USER_ADDRESS	0

#ifndef __ASSEMBLY__
extern void __pte_error(const char *file, int line, unsigned long val);
extern void __pmd_error(const char *file, int line, unsigned long val);
extern void __pgd_error(const char *file, int line, unsigned long val);

#define pte_ERROR(pte)		__pte_error(__FILE__, __LINE__, pte_val(pte))
#ifndef CONFIG_ARM64_64K_PAGES
#define pmd_ERROR(pmd)		__pmd_error(__FILE__, __LINE__, pmd_val(pmd))
#endif
#define pgd_ERROR(pgd)		__pgd_error(__FILE__, __LINE__, pgd_val(pgd))

/*
 * The pgprot_* and protection_map entries will be fixed up at runtime to
 * include the cachable and bufferable bits based on memory policy, as well as
 * any architecture dependent bits like global/ASID and SMP shared mapping
 * bits.
 */
#define _PAGE_DEFAULT		PTE_TYPE_PAGE | PTE_AF

extern pgprot_t pgprot_default;

#define _MOD_PROT(p, b)		__pgprot(pgprot_val(p) | (b))

#define PAGE_NONE		_MOD_PROT(pgprot_default, PTE_NG | PTE_XN | PTE_RDONLY)
#define PAGE_SHARED		_MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_XN)
#define PAGE_SHARED_EXEC	_MOD_PROT(pgprot_default, PTE_USER | PTE_NG)
#define PAGE_COPY		_MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY)
#define PAGE_COPY_EXEC		_MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_RDONLY)
#define PAGE_READONLY		_MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY)
#define PAGE_READONLY_EXEC	_MOD_PROT(pgprot_default, PTE_USER | PTE_NG | PTE_RDONLY)
#define PAGE_KERNEL		_MOD_PROT(pgprot_default, PTE_XN | PTE_DIRTY)
#define PAGE_KERNEL_EXEC	_MOD_PROT(pgprot_default, PTE_DIRTY)

#define __PAGE_NONE		__pgprot(_PAGE_DEFAULT | PTE_NG | PTE_XN | PTE_RDONLY)
#define __PAGE_SHARED		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_XN)
#define __PAGE_SHARED_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG)
#define __PAGE_COPY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY)
#define __PAGE_COPY_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_RDONLY)
#define __PAGE_READONLY		__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_XN | PTE_RDONLY)
#define __PAGE_READONLY_EXEC	__pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_RDONLY)

#endif /* __ASSEMBLY__ */

#define __P000	__PAGE_NONE
#define __P001	__PAGE_READONLY
#define __P010	__PAGE_COPY
#define __P011	__PAGE_COPY
#define __P100	__PAGE_READONLY_EXEC
#define __P101	__PAGE_READONLY_EXEC
#define __P110	__PAGE_COPY_EXEC
#define __P111	__PAGE_COPY_EXEC

#define __S000	__PAGE_NONE
#define __S001	__PAGE_READONLY
#define __S010	__PAGE_SHARED
#define __S011	__PAGE_SHARED
#define __S100	__PAGE_READONLY_EXEC
#define __S101	__PAGE_READONLY_EXEC
#define __S110	__PAGE_SHARED_EXEC
#define __S111	__PAGE_SHARED_EXEC

#ifndef __ASSEMBLY__
/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..
 */
extern struct page *empty_zero_page;
#define ZERO_PAGE(vaddr)	(empty_zero_page)

#define pte_pfn(pte)		((pte_val(pte) & PHYS_MASK) >> PAGE_SHIFT)

#define pfn_pte(pfn,prot)	(__pte(((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)))

#define pte_none(pte)		(!pte_val(pte))
#define pte_clear(mm,addr,ptep)	set_pte(ptep, __pte(0))
#define pte_page(pte)		(pfn_to_page(pte_pfn(pte)))
#define pte_offset_kernel(dir,addr)	(pmd_page_vaddr(*(dir)) + __pte_index(addr))

#define pte_offset_map(dir,addr)	pte_offset_kernel((dir), (addr))
#define pte_offset_map_nested(dir,addr)	pte_offset_kernel((dir), (addr))
#define pte_unmap(pte)			do { } while (0)
#define pte_unmap_nested(pte)		do { } while (0)

/*
 * The following only work if pte_present(). Undefined behaviour otherwise.
 */
#define pte_present(pte)	(pte_val(pte) & PTE_VALID)
#define pte_dirty(pte)		(pte_val(pte) & PTE_DIRTY)
#define pte_young(pte)		(pte_val(pte) & PTE_AF)
#define pte_special(pte)	(pte_val(pte) & PTE_SPECIAL)
#define pte_write(pte)		(!(pte_val(pte) & PTE_RDONLY))
#define pte_exec(pte)		(!(pte_val(pte) & PTE_XN))

#define pte_present_exec_user(pte) \
	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_XN)) == \
	 (PTE_VALID | PTE_USER))

#define PTE_BIT_FUNC(fn,op) \
static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; }

PTE_BIT_FUNC(wrprotect, |= PTE_RDONLY);
PTE_BIT_FUNC(mkwrite,   &= ~PTE_RDONLY);
PTE_BIT_FUNC(mkclean,   &= ~PTE_DIRTY);
PTE_BIT_FUNC(mkdirty,   |= PTE_DIRTY);
PTE_BIT_FUNC(mkold,     &= ~PTE_AF);
PTE_BIT_FUNC(mkyoung,   |= PTE_AF);
PTE_BIT_FUNC(mkspecial, |= PTE_SPECIAL);

static inline void set_pte(pte_t *ptep, pte_t pte)
{
	*ptep = pte;
}

extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);

static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
			      pte_t *ptep, pte_t pte)
{
	if (pte_present_exec_user(pte))
		__sync_icache_dcache(pte, addr);
	set_pte(ptep, pte);
}

/*
 * Huge pte definitions.
 */
#define pte_huge(pte)		((pte_val(pte) & PTE_TYPE_MASK) == PTE_TYPE_HUGEPAGE)
#define pte_mkhuge(pte)		(__pte((pte_val(pte) & ~PTE_TYPE_MASK) | PTE_TYPE_HUGEPAGE))

#define __pgprot_modify(prot,mask,bits) \
	__pgprot((pgprot_val(prot) & ~(mask)) | (bits))

#define __HAVE_ARCH_PTE_SPECIAL

/*
 * Mark the prot value as uncacheable and unbufferable.
 */
#define pgprot_noncached(prot) \
	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRnE))
#define pgprot_writecombine(prot) \
	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_GRE))
#define pgprot_dmacoherent(prot) \
	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC))
#define __HAVE_PHYS_MEM_ACCESS_PROT
struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
				     unsigned long size, pgprot_t vma_prot);

#define pmd_none(pmd)		(!pmd_val(pmd))
#define pmd_present(pmd)	(pmd_val(pmd))

#define pmd_bad(pmd)		(!(pmd_val(pmd) & 2))

static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
{
	*pmdp = pmd;
	dsb();
}

static inline void pmd_clear(pmd_t *pmdp)
{
	set_pmd(pmdp, __pmd(0));
}

static inline pte_t *pmd_page_vaddr(pmd_t pmd)
{
	return __va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
}

#define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))

/*
 * Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
 */
#define mk_pte(page,prot)	pfn_pte(page_to_pfn(page),prot)

#ifndef CONFIG_ARM64_64K_PAGES

#define pud_none(pud)		(!pud_val(pud))
#define pud_bad(pud)		(!(pud_val(pud) & 2))
#define pud_present(pud)	(pud_val(pud))

static inline void set_pud(pud_t *pudp, pud_t pud)
{
	*pudp = pud;
	dsb();
}

static inline void pud_clear(pud_t *pudp)
{
	set_pud(pudp, __pud(0));
}

static inline pmd_t *pud_page_vaddr(pud_t pud)
{
	return __va(pud_val(pud) & PHYS_MASK & (s32)PAGE_MASK);
}

#endif	/* CONFIG_ARM64_64K_PAGES */

/* to find an entry in a page-table-directory */
#define pgd_index(addr)		(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))

#define pgd_offset(mm, addr)	((mm)->pgd+pgd_index(addr))

/* to find an entry in a kernel page-table-directory */
#define pgd_offset_k(addr)	pgd_offset(&init_mm, addr)

/* Find an entry in the second-level page table.. */
#ifndef CONFIG_ARM64_64K_PAGES
#define pmd_index(addr)		(((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
{
	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
}
#endif

/* Find an entry in the third-level page table.. */
#define __pte_index(addr)	(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))

static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
	const pteval_t mask = PTE_USER | PTE_XN | PTE_RDONLY;
	pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
	return pte;
}

extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
extern pgd_t idmap_pg_dir[PTRS_PER_PGD];

#define SWAPPER_DIR_SIZE	(3 * PAGE_SIZE)
#define IDMAP_DIR_SIZE		(2 * PAGE_SIZE)

/*
 * Encode and decode a swap entry:
 *	bits 0-1:	present (must be zero)
 *	bit  2:		PTE_FILE
 *	bits 3-8:	swap type
 *	bits 9-63:	swap offset
 */
#define __SWP_TYPE_SHIFT	3
#define __SWP_TYPE_BITS		6
#define __SWP_TYPE_MASK		((1 << __SWP_TYPE_BITS) - 1)
#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)

#define __swp_type(x)		(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
#define __swp_offset(x)		((x).val >> __SWP_OFFSET_SHIFT)
#define __swp_entry(type,offset) ((swp_entry_t) { ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })

#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(swp)	((pte_t) { (swp).val })

/*
 * Ensure that there are not more swap files than can be encoded in the
 * kernel PTEs.
 */
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > __SWP_TYPE_BITS)

/*
 * Encode and decode a file entry:
 *	bits 0-1:	present (must be zero)
 *	bit  2:		PTE_FILE
 *	bits 3-63:	file offset / PAGE_SIZE
 */
#define pte_file(pte)		(pte_val(pte) & PTE_FILE)
#define pte_to_pgoff(x)		(pte_val(x) >> 3)
#define pgoff_to_pte(x)		__pte(((x) << 3) | PTE_FILE)

#define PTE_FILE_MAX_BITS	61

extern int kern_addr_valid(unsigned long addr);

#include <asm-generic/pgtable.h>

/*
 * remap a physical page `pfn' of size `size' with page protection `prot'
 * into virtual address `from'
 */
#define io_remap_pfn_range(vma,from,pfn,size,prot) \
		remap_pfn_range(vma, from, pfn, size, prot)

#define pgtable_cache_init() do { } while (0)

#endif /* !__ASSEMBLY__ */

#endif /* __ASM_PGTABLE_H */
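The swap-entry layout in pgtable.h packs a 6-bit type at bits 8:3 and the offset from bit 9 upwards, leaving bits 1:0 clear so the entry never looks present. A user-space C sketch (not part of the patch) exercises the packing:

```c
/* User-space sketch of the pgtable.h swap-entry encoding:
 * bits 1:0 zero (not present), bit 2 PTE_FILE, bits 8:3 type,
 * bits 63:9 offset. Not kernel code. */
#include <assert.h>
#include <stdint.h>

#define __SWP_TYPE_SHIFT	3
#define __SWP_TYPE_BITS		6
#define __SWP_TYPE_MASK		((1UL << __SWP_TYPE_BITS) - 1)
#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)

static uint64_t swp_entry(unsigned type, uint64_t off)
{
	return ((uint64_t)type << __SWP_TYPE_SHIFT) | (off << __SWP_OFFSET_SHIFT);
}
static unsigned swp_type(uint64_t e)  { return (e >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK; }
static uint64_t swp_offset(uint64_t e) { return e >> __SWP_OFFSET_SHIFT; }
```

Encoding and decoding round-trip, and the low two "present" bits stay zero for any type/offset, which is what lets pte_present() distinguish swap entries from live mappings.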
arch/arm64/include/asm/sparsemem.h | 24 +
/*
 * Copyright (C) 2012 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef __ASM_SPARSEMEM_H
#define __ASM_SPARSEMEM_H

#ifdef CONFIG_SPARSEMEM
#define MAX_PHYSMEM_BITS	40
#define SECTION_SIZE_BITS	30
#endif

#endif