Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:
"Once again, Catalin's off on holiday and I'm looking after the arm64
tree. Please can you pull the following arm64 updates for 3.17?

Note that this branch also includes the new GICv3 driver (merged via a
stable tag from Jason's irqchip tree), since there is a fix for older
binutils on top.

Changes include:
- context tracking support (NO_HZ_FULL) which narrowly missed 3.16
- vDSO layout rework following Andy's work on x86
- TEXT_OFFSET fuzzing for bootloader testing
- /proc/cpuinfo tidy-up
- preliminary work to support 48-bit virtual addresses, but this is
currently disabled until KVM has been ported to use it (the patches
do, however, bring some nice clean-up)
- boot-time CPU sanity checks (especially useful on heterogeneous
systems)
- support for syscall auditing
- support for CC_STACKPROTECTOR
- defconfig updates"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (55 commits)
arm64: add newline to I-cache policy string
Revert "arm64: dmi: Add SMBIOS/DMI support"
arm64: fpsimd: fix a typo in fpsimd_save_partial_state ENDPROC
arm64: don't call break hooks for BRK exceptions from EL0
arm64: defconfig: enable devtmpfs mount option
arm64: vdso: fix build error when switching from LE to BE
arm64: defconfig: add virtio support for running as a kvm guest
arm64: gicv3: Allow GICv3 compilation with older binutils
arm64: fix soft lockup due to large tlb flush range
arm64/crypto: fix makefile rule for aes-glue-%.o
arm64: Do not invoke audit_syscall_* functions if !CONFIG_AUDIT_SYSCALL
arm64: Fix barriers used for page table modifications
arm64: Add support for 48-bit VA space with 64KB page configuration
arm64: asm/pgtable.h pmd/pud definitions clean-up
arm64: Determine the vmalloc/vmemmap space at build time based on VA_BITS
arm64: Clean up the initial page table creation in head.S
arm64: Remove asm/pgtable-*level-types.h files
arm64: Remove asm/pgtable-*level-hwdef.h files
arm64: Convert bool ARM64_x_LEVELS to int ARM64_PGTABLE_LEVELS
arm64: mm: Implement 4 levels of translation tables
...

+3110 -1002
+35 -8
Documentation/arm64/booting.txt
··· 72 72 73 73 u32 code0; /* Executable code */ 74 74 u32 code1; /* Executable code */ 75 - u64 text_offset; /* Image load offset */ 76 - u64 res0 = 0; /* reserved */ 77 - u64 res1 = 0; /* reserved */ 75 + u64 text_offset; /* Image load offset, little endian */ 76 + u64 image_size; /* Effective Image size, little endian */ 77 + u64 flags; /* kernel flags, little endian */ 78 78 u64 res2 = 0; /* reserved */ 79 79 u64 res3 = 0; /* reserved */ 80 80 u64 res4 = 0; /* reserved */ 81 81 u32 magic = 0x644d5241; /* Magic number, little endian, "ARM\x64" */ 82 - u32 res5 = 0; /* reserved */ 82 + u32 res5; /* reserved (used for PE COFF offset) */ 83 83 84 84 85 85 Header notes: 86 86 87 + - As of v3.17, all fields are little endian unless stated otherwise. 88 + 87 89 - code0/code1 are responsible for branching to stext. 90 + 88 91 - when booting through EFI, code0/code1 are initially skipped. 89 92 res5 is an offset to the PE header and the PE header has the EFI 90 - entry point (efi_stub_entry). When the stub has done its work, it 93 + entry point (efi_stub_entry). When the stub has done its work, it 91 94 jumps to code0 to resume the normal boot process. 92 95 93 - The image must be placed at the specified offset (currently 0x80000) 94 - from the start of the system RAM and called there. The start of the 95 - system RAM must be aligned to 2MB. 96 + - Prior to v3.17, the endianness of text_offset was not specified. In 97 + these cases image_size is zero and text_offset is 0x80000 in the 98 + endianness of the kernel. Where image_size is non-zero image_size is 99 + little-endian and must be respected. Where image_size is zero, 100 + text_offset can be assumed to be 0x80000. 101 + 102 + - The flags field (introduced in v3.17) is a little-endian 64-bit field 103 + composed as follows: 104 + Bit 0: Kernel endianness. 1 if BE, 0 if LE. 105 + Bits 1-63: Reserved. 
106 + 107 + - When image_size is zero, a bootloader should attempt to keep as much 108 + memory as possible free for use by the kernel immediately after the 109 + end of the kernel image. The amount of space required will vary 110 + depending on selected features, and is effectively unbound. 111 + 112 + The Image must be placed text_offset bytes from a 2MB aligned base 113 + address near the start of usable system RAM and called there. Memory 114 + below that base address is currently unusable by Linux, and therefore it 115 + is strongly recommended that this location is the start of system RAM. 116 + At least image_size bytes from the start of the image must be free for 117 + use by the kernel. 118 + 119 + Any memory described to the kernel (even that below the 2MB aligned base 120 + address) which is not marked as reserved from the kernel e.g. with a 121 + memreserve region in the device tree) will be considered as available to 122 + the kernel. 96 123 97 124 Before jumping into the kernel, the following conditions must be met: 98 125
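The header layout and the image_size fallback described in the booting.txt hunk above can be sketched as a C struct plus two helpers. This is illustrative only: the field names and offsets follow the documentation excerpt, but the struct and helper functions are not kernel code.

```c
#include <stdint.h>

/* arm64 Image header as described above (v3.17 layout). */
struct arm64_image_header {
	uint32_t code0;		/* executable code */
	uint32_t code1;		/* executable code */
	uint64_t text_offset;	/* image load offset, little endian */
	uint64_t image_size;	/* effective Image size, little endian */
	uint64_t flags;		/* kernel flags, little endian */
	uint64_t res2;		/* reserved */
	uint64_t res3;		/* reserved */
	uint64_t res4;		/* reserved */
	uint32_t magic;		/* "ARM\x64", little endian */
	uint32_t res5;		/* reserved (used for PE COFF offset) */
};

#define ARM64_IMAGE_MAGIC	0x644d5241u

/* Bit 0 of flags: 1 if the kernel is big-endian, 0 if little-endian. */
static inline int arm64_image_is_be(uint64_t flags)
{
	return (int)(flags & 1);
}

/*
 * Pre-v3.17 images have image_size == 0; per the note above, a
 * bootloader must then assume text_offset is 0x80000.
 */
static inline uint64_t arm64_effective_text_offset(const struct arm64_image_header *h)
{
	return h->image_size ? h->text_offset : 0x80000;
}
```

A bootloader following the rules above would read the header, verify `magic`, and place the Image at `arm64_effective_text_offset()` bytes from a 2MB-aligned base.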
+29 -46
Documentation/arm64/memory.txt
··· 2 2 ============================== 3 3 4 4 Author: Catalin Marinas <catalin.marinas@arm.com> 5 - Date : 20 February 2012 6 5 7 6 This document describes the virtual memory layout used by the AArch64 8 7 Linux kernel. The architecture allows up to 4 levels of translation 9 8 tables with a 4KB page size and up to 3 levels with a 64KB page size. 10 9 11 - AArch64 Linux uses 3 levels of translation tables with the 4KB page 12 - configuration, allowing 39-bit (512GB) virtual addresses for both user 13 - and kernel. With 64KB pages, only 2 levels of translation tables are 14 - used but the memory layout is the same. 10 + AArch64 Linux uses either 3 levels or 4 levels of translation tables 11 + with the 4KB page configuration, allowing 39-bit (512GB) or 48-bit 12 + (256TB) virtual addresses, respectively, for both user and kernel. With 13 + 64KB pages, only 2 levels of translation tables, allowing 42-bit (4TB) 14 + virtual address, are used but the memory layout is the same. 15 15 16 - User addresses have bits 63:39 set to 0 while the kernel addresses have 16 + User addresses have bits 63:48 set to 0 while the kernel addresses have 17 17 the same bits set to 1. TTBRx selection is given by bit 63 of the 18 18 virtual address. The swapper_pg_dir contains only kernel (global) 19 19 mappings while the user pgd contains only user (non-global) mappings. ··· 21 21 TTBR0. 
22 22 23 23 24 - AArch64 Linux memory layout with 4KB pages: 24 + AArch64 Linux memory layout with 4KB pages + 3 levels: 25 25 26 26 Start End Size Use 27 27 ----------------------------------------------------------------------- 28 28 0000000000000000 0000007fffffffff 512GB user 29 - 30 - ffffff8000000000 ffffffbbfffeffff ~240GB vmalloc 31 - 32 - ffffffbbffff0000 ffffffbbffffffff 64KB [guard page] 33 - 34 - ffffffbc00000000 ffffffbdffffffff 8GB vmemmap 35 - 36 - ffffffbe00000000 ffffffbffbbfffff ~8GB [guard, future vmmemap] 37 - 38 - ffffffbffa000000 ffffffbffaffffff 16MB PCI I/O space 39 - 40 - ffffffbffb000000 ffffffbffbbfffff 12MB [guard] 41 - 42 - ffffffbffbc00000 ffffffbffbdfffff 2MB fixed mappings 43 - 44 - ffffffbffbe00000 ffffffbffbffffff 2MB [guard] 45 - 46 - ffffffbffc000000 ffffffbfffffffff 64MB modules 47 - 48 - ffffffc000000000 ffffffffffffffff 256GB kernel logical memory map 29 + ffffff8000000000 ffffffffffffffff 512GB kernel 49 30 50 31 51 - AArch64 Linux memory layout with 64KB pages: 32 + AArch64 Linux memory layout with 4KB pages + 4 levels: 33 + 34 + Start End Size Use 35 + ----------------------------------------------------------------------- 36 + 0000000000000000 0000ffffffffffff 256TB user 37 + ffff000000000000 ffffffffffffffff 256TB kernel 38 + 39 + 40 + AArch64 Linux memory layout with 64KB pages + 2 levels: 52 41 53 42 Start End Size Use 54 43 ----------------------------------------------------------------------- 55 44 0000000000000000 000003ffffffffff 4TB user 45 + fffffc0000000000 ffffffffffffffff 4TB kernel 56 46 57 - fffffc0000000000 fffffdfbfffeffff ~2TB vmalloc 58 47 59 - fffffdfbffff0000 fffffdfbffffffff 64KB [guard page] 48 + AArch64 Linux memory layout with 64KB pages + 3 levels: 60 49 61 - fffffdfc00000000 fffffdfdffffffff 8GB vmemmap 50 + Start End Size Use 51 + ----------------------------------------------------------------------- 52 + 0000000000000000 0000ffffffffffff 256TB user 53 + ffff000000000000 ffffffffffffffff 256TB 
kernel 62 54 63 - fffffdfe00000000 fffffdfffbbfffff ~8GB [guard, future vmmemap] 64 55 65 - fffffdfffa000000 fffffdfffaffffff 16MB PCI I/O space 66 - 67 - fffffdfffb000000 fffffdfffbbfffff 12MB [guard] 68 - 69 - fffffdfffbc00000 fffffdfffbdfffff 2MB fixed mappings 70 - 71 - fffffdfffbe00000 fffffdfffbffffff 2MB [guard] 72 - 73 - fffffdfffc000000 fffffdffffffffff 64MB modules 74 - 75 - fffffe0000000000 ffffffffffffffff 2TB kernel logical memory map 56 + For details of the virtual kernel memory layout please see the kernel 57 + booting log. 76 58 77 59 78 60 Translation table lookup with 4KB pages: ··· 68 86 | | | | +-> [20:12] L3 index 69 87 | | | +-----------> [29:21] L2 index 70 88 | | +---------------------> [38:30] L1 index 71 - | +-------------------------------> [47:39] L0 index (not used) 89 + | +-------------------------------> [47:39] L0 index 72 90 +-------------------------------------------------> [63] TTBR0/1 73 91 74 92 ··· 81 99 | | | | v 82 100 | | | | [15:0] in-page offset 83 101 | | | +----------> [28:16] L3 index 84 - | | +--------------------------> [41:29] L2 index (only 38:29 used) 85 - | +-------------------------------> [47:42] L1 index (not used) 102 + | | +--------------------------> [41:29] L2 index 103 + | +-------------------------------> [47:42] L1 index 86 104 +-------------------------------------------------> [63] TTBR0/1 105 + 87 106 88 107 When using KVM, the hypervisor maps kernel pages in EL2, at a fixed 89 108 offset from the kernel VA (top 24bits of the kernel VA set to zero):
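The 4KB-page translation table lookup diagram above resolves 9 bits of virtual address per level. A small sketch of that index extraction (assumed helper names; not kernel code):

```c
#include <stdint.h>

/*
 * Index extraction matching the 4KB-page lookup diagram above. Each
 * table level holds 512 eight-byte descriptors, so resolves 9 bits;
 * the L0 index only participates in the 4-level (48-bit VA) layout.
 */
#define PAGE_SHIFT_4K	12
#define LEVEL_BITS	(PAGE_SHIFT_4K - 3)	/* 9 bits per level */

static inline unsigned int tbl_index(uint64_t va, int level)
{
	/* level 3 is the leaf; the shift grows by 9 bits per level above it */
	unsigned int shift = PAGE_SHIFT_4K + (3 - level) * LEVEL_BITS;
	return (unsigned int)((va >> shift) & ((1u << LEVEL_BITS) - 1));
}

/* Bit 63 selects TTBR1 (kernel mappings) vs TTBR0 (user mappings). */
static inline int uses_ttbr1(uint64_t va)
{
	return (int)(va >> 63);
}
```

For example, `tbl_index(va, 0)` yields the `[47:39]` L0 index and `tbl_index(va, 3)` the `[20:12]` L3 index from the diagram.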
+55 -2
arch/arm64/Kconfig
··· 1 1 config ARM64 2 2 def_bool y 3 3 select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE 4 - select ARCH_HAS_OPP 5 4 select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST 6 5 select ARCH_USE_CMPXCHG_LOCKREF 7 6 select ARCH_SUPPORTS_ATOMIC_RMW ··· 10 11 select ARM_AMBA 11 12 select ARM_ARCH_TIMER 12 13 select ARM_GIC 14 + select AUDIT_ARCH_COMPAT_GENERIC 15 + select ARM_GIC_V3 13 16 select BUILDTIME_EXTABLE_SORT 14 17 select CLONE_BACKWARDS 15 18 select COMMON_CLK ··· 30 29 select GENERIC_STRNLEN_USER 31 30 select GENERIC_TIME_VSYSCALL 32 31 select HARDIRQS_SW_RESEND 32 + select HAVE_ARCH_AUDITSYSCALL 33 33 select HAVE_ARCH_JUMP_LABEL 34 34 select HAVE_ARCH_KGDB 35 35 select HAVE_ARCH_TRACEHOOK 36 36 select HAVE_C_RECORDMCOUNT 37 + select HAVE_CC_STACKPROTECTOR 37 38 select HAVE_DEBUG_BUGVERBOSE 38 39 select HAVE_DEBUG_KMEMLEAK 39 40 select HAVE_DMA_API_DEBUG ··· 66 63 select RTC_LIB 67 64 select SPARSE_IRQ 68 65 select SYSCTL_EXCEPTION_TRACE 66 + select HAVE_CONTEXT_TRACKING 69 67 help 70 68 ARM 64-bit (AArch64) Linux support. 71 69 ··· 159 155 160 156 menu "Kernel Features" 161 157 158 + choice 159 + prompt "Page size" 160 + default ARM64_4K_PAGES 161 + help 162 + Page size (translation granule) configuration. 163 + 164 + config ARM64_4K_PAGES 165 + bool "4KB" 166 + help 167 + This feature enables 4KB pages support. 168 + 162 169 config ARM64_64K_PAGES 163 - bool "Enable 64KB pages support" 170 + bool "64KB" 164 171 help 165 172 This feature enables 64KB pages support (4KB by default) 166 173 allowing only two levels of page tables and faster TLB 167 174 look-up. AArch32 emulation is not available when this feature 168 175 is enabled. 176 + 177 + endchoice 178 + 179 + choice 180 + prompt "Virtual address space size" 181 + default ARM64_VA_BITS_39 if ARM64_4K_PAGES 182 + default ARM64_VA_BITS_42 if ARM64_64K_PAGES 183 + help 184 + Allows choosing one of multiple possible virtual address 185 + space sizes. 
The level of translation table is determined by 186 + a combination of page size and virtual address space size. 187 + 188 + config ARM64_VA_BITS_39 189 + bool "39-bit" 190 + depends on ARM64_4K_PAGES 191 + 192 + config ARM64_VA_BITS_42 193 + bool "42-bit" 194 + depends on ARM64_64K_PAGES 195 + 196 + config ARM64_VA_BITS_48 197 + bool "48-bit" 198 + depends on BROKEN 199 + 200 + endchoice 201 + 202 + config ARM64_VA_BITS 203 + int 204 + default 39 if ARM64_VA_BITS_39 205 + default 42 if ARM64_VA_BITS_42 206 + default 48 if ARM64_VA_BITS_48 207 + 208 + config ARM64_PGTABLE_LEVELS 209 + int 210 + default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42 211 + default 3 if ARM64_64K_PAGES && ARM64_VA_BITS_48 212 + default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39 213 + default 4 if ARM64_4K_PAGES && ARM64_VA_BITS_48 169 214 170 215 config CPU_BIG_ENDIAN 171 216 bool "Build big-endian kernel"
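The ARM64_PGTABLE_LEVELS defaults in the Kconfig hunk above all follow from one relationship: each translation level resolves (page_shift - 3) bits (8-byte descriptors per page), and the levels together with the in-page offset must cover the VA space. A sketch of that arithmetic (illustrative helper, not kernel code):

```c
/*
 * Number of translation table levels needed for a given page size and
 * virtual address width: ceiling division of the address bits left
 * after the page offset by the bits resolved per level.
 */
static inline int pgtable_levels(int page_shift, int va_bits)
{
	int bits_per_level = page_shift - 3;	/* 8-byte descriptors */

	return (va_bits - page_shift + bits_per_level - 1) / bits_per_level;
}
```

This reproduces the Kconfig table: 4KB/39-bit gives 3 levels, 4KB/48-bit gives 4, 64KB/42-bit gives 2, and 64KB/48-bit gives 3.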
+15
arch/arm64/Kconfig.debug
··· 28 28 instructions during context switch. Say Y here only if you are 29 29 planning to use hardware trace tools with this kernel. 30 30 31 + config ARM64_RANDOMIZE_TEXT_OFFSET 32 + bool "Randomize TEXT_OFFSET at build time" 33 + help 34 + Say Y here if you want the image load offset (AKA TEXT_OFFSET) 35 + of the kernel to be randomized at build-time. When selected, 36 + this option will cause TEXT_OFFSET to be randomized upon any 37 + build of the kernel, and the offset will be reflected in the 38 + text_offset field of the resulting Image. This can be used to 39 + fuzz-test bootloaders which respect text_offset. 40 + 41 + This option is intended for bootloader and/or kernel testing 42 + only. Bootloaders must make no assumptions regarding the value 43 + of TEXT_OFFSET and platforms must not require a specific 44 + value. 45 + 31 46 endmenu
+4
arch/arm64/Makefile
··· 38 38 head-y := arch/arm64/kernel/head.o 39 39 40 40 # The byte offset of the kernel image in RAM from the start of RAM. 41 + ifeq ($(CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET), y) 42 + TEXT_OFFSET := $(shell awk 'BEGIN {srand(); printf "0x%04x0\n", int(65535 * rand())}') 43 + else 41 44 TEXT_OFFSET := 0x00080000 45 + endif 42 46 43 47 export TEXT_OFFSET GZFLAGS 44 48
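The awk one-liner in the Makefile hunk above prints a random 4-hex-digit value followed by a literal `0` nibble, so the fuzzed TEXT_OFFSET is always a 16-byte-aligned value below 1MB. A C equivalent of that generation (illustrative only, not the build-system code):

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Mimics: awk 'BEGIN {srand(); printf "0x%04x0\n", int(65535 * rand())}'
 * The random 16-bit value lands in the high nibbles and a zero nibble
 * is appended, giving a 16-byte-aligned offset in [0x0, 0xFFFE0].
 */
static inline uint32_t random_text_offset(void)
{
	uint32_t v = (uint32_t)(rand() % 65535);	/* 0 .. 0xFFFE */

	return v << 4;					/* append the '0' nibble */
}
```

As the Kconfig.debug help text says, the randomized value ends up in the `text_offset` header field, so bootloaders that honour the header keep working while any hardcoded 0x80000 assumption breaks.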
+7
arch/arm64/configs/defconfig
··· 52 52 # CONFIG_INET_LRO is not set 53 53 # CONFIG_IPV6 is not set 54 54 # CONFIG_WIRELESS is not set 55 + CONFIG_NET_9P=y 56 + CONFIG_NET_9P_VIRTIO=y 55 57 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 56 58 CONFIG_DEVTMPFS=y 59 + CONFIG_DEVTMPFS_MOUNT=y 57 60 CONFIG_DMA_CMA=y 58 61 CONFIG_BLK_DEV_LOOP=y 59 62 CONFIG_VIRTIO_BLK=y ··· 68 65 CONFIG_PATA_OF_PLATFORM=y 69 66 CONFIG_NETDEVICES=y 70 67 CONFIG_TUN=y 68 + CONFIG_VIRTIO_NET=y 71 69 CONFIG_SMC91X=y 72 70 CONFIG_SMSC911X=y 73 71 # CONFIG_WLAN is not set ··· 80 76 CONFIG_SERIAL_AMBA_PL011=y 81 77 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y 82 78 CONFIG_SERIAL_OF_PLATFORM=y 79 + CONFIG_VIRTIO_CONSOLE=y 83 80 # CONFIG_HW_RANDOM is not set 84 81 # CONFIG_HWMON is not set 85 82 CONFIG_REGULATOR=y ··· 95 90 CONFIG_USB_STORAGE=y 96 91 CONFIG_MMC=y 97 92 CONFIG_MMC_ARMMMCI=y 93 + CONFIG_VIRTIO_BALLOON=y 98 94 CONFIG_VIRTIO_MMIO=y 99 95 # CONFIG_IOMMU_SUPPORT is not set 100 96 CONFIG_EXT2_FS=y ··· 113 107 # CONFIG_MISC_FILESYSTEMS is not set 114 108 CONFIG_NFS_FS=y 115 109 CONFIG_ROOT_NFS=y 110 + CONFIG_9P_FS=y 116 111 CONFIG_NLS_CODEPAGE_437=y 117 112 CONFIG_NLS_ISO8859_1=y 118 113 CONFIG_VIRTUALIZATION=y
+1 -1
arch/arm64/crypto/Makefile
··· 35 35 CFLAGS_aes-glue-ce.o := -DUSE_V8_CRYPTO_EXTENSIONS 36 36 37 37 $(obj)/aes-glue-%.o: $(src)/aes-glue.c FORCE 38 - $(call if_changed_dep,cc_o_c) 38 + $(call if_changed_rule,cc_o_c)
+1 -10
arch/arm64/include/asm/cacheflush.h
··· 138 138 #define flush_icache_page(vma,page) do { } while (0) 139 139 140 140 /* 141 - * flush_cache_vmap() is used when creating mappings (eg, via vmap, 142 - * vmalloc, ioremap etc) in kernel space for pages. On non-VIPT 143 - * caches, since the direct-mappings of these pages may contain cached 144 - * data, we need to do a full cache flush to ensure that writebacks 145 - * don't corrupt data placed into these pages via the new mappings. 141 + * Not required on AArch64 (PIPT or VIPT non-aliasing D-cache). 146 142 */ 147 143 static inline void flush_cache_vmap(unsigned long start, unsigned long end) 148 144 { 149 - /* 150 - * set_pte_at() called from vmap_pte_range() does not 151 - * have a DSB after cleaning the cache line. 152 - */ 153 - dsb(ish); 154 145 } 155 146 156 147 static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
+10 -6
arch/arm64/include/asm/cachetype.h
··· 30 30 31 31 #ifndef __ASSEMBLY__ 32 32 33 - static inline u32 icache_policy(void) 34 - { 35 - return (read_cpuid_cachetype() >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK; 36 - } 33 + #include <linux/bitops.h> 34 + 35 + #define CTR_L1IP(ctr) (((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) 36 + 37 + #define ICACHEF_ALIASING BIT(0) 38 + #define ICACHEF_AIVIVT BIT(1) 39 + 40 + extern unsigned long __icache_flags; 37 41 38 42 /* 39 43 * Whilst the D-side always behaves as PIPT on AArch64, aliasing is ··· 45 41 */ 46 42 static inline int icache_is_aliasing(void) 47 43 { 48 - return icache_policy() != ICACHE_POLICY_PIPT; 44 + return test_bit(ICACHEF_ALIASING, &__icache_flags); 49 45 } 50 46 51 47 static inline int icache_is_aivivt(void) 52 48 { 53 - return icache_policy() == ICACHE_POLICY_AIVIVT; 49 + return test_bit(ICACHEF_AIVIVT, &__icache_flags); 54 50 } 55 51 56 52 static inline u32 cache_type_cwg(void)
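The cachetype.h change above stops re-reading CTR_EL0 on every query: the boot-time CPU code records the I-cache policy once into `__icache_flags`, and the inline helpers just test cached bits. A minimal model (the `record_icache_policy()` setter is hypothetical, standing in for `cpuinfo_store_cpu()`; the L1Ip encoding values are assumed from CTR_EL0, where AIVIVT is 1 and PIPT is 3):

```c
#define ICACHEF_ALIASING	(1ul << 0)
#define ICACHEF_AIVIVT		(1ul << 1)

#define ICACHE_POLICY_AIVIVT	1
#define ICACHE_POLICY_PIPT	3

static unsigned long icache_flags;	/* models __icache_flags */

/* hypothetical one-time setup, analogous to cpuinfo_store_cpu() */
static void record_icache_policy(unsigned int l1ip)
{
	if (l1ip != ICACHE_POLICY_PIPT)
		icache_flags |= ICACHEF_ALIASING;
	if (l1ip == ICACHE_POLICY_AIVIVT)
		icache_flags |= ICACHEF_AIVIVT;
}

static int icache_is_aliasing(void)
{
	return !!(icache_flags & ICACHEF_ALIASING);
}

static int icache_is_aivivt(void)
{
	return !!(icache_flags & ICACHEF_AIVIVT);
}
```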
+59
arch/arm64/include/asm/cpu.h
··· 1 + /* 2 + * Copyright (C) 2014 ARM Ltd. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + * 13 + * You should have received a copy of the GNU General Public License 14 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 + */ 16 + #ifndef __ASM_CPU_H 17 + #define __ASM_CPU_H 18 + 19 + #include <linux/cpu.h> 20 + #include <linux/init.h> 21 + #include <linux/percpu.h> 22 + 23 + /* 24 + * Records attributes of an individual CPU. 25 + */ 26 + struct cpuinfo_arm64 { 27 + struct cpu cpu; 28 + u32 reg_ctr; 29 + u32 reg_cntfrq; 30 + u32 reg_dczid; 31 + u32 reg_midr; 32 + 33 + u64 reg_id_aa64isar0; 34 + u64 reg_id_aa64isar1; 35 + u64 reg_id_aa64mmfr0; 36 + u64 reg_id_aa64mmfr1; 37 + u64 reg_id_aa64pfr0; 38 + u64 reg_id_aa64pfr1; 39 + 40 + u32 reg_id_isar0; 41 + u32 reg_id_isar1; 42 + u32 reg_id_isar2; 43 + u32 reg_id_isar3; 44 + u32 reg_id_isar4; 45 + u32 reg_id_isar5; 46 + u32 reg_id_mmfr0; 47 + u32 reg_id_mmfr1; 48 + u32 reg_id_mmfr2; 49 + u32 reg_id_mmfr3; 50 + u32 reg_id_pfr0; 51 + u32 reg_id_pfr1; 52 + }; 53 + 54 + DECLARE_PER_CPU(struct cpuinfo_arm64, cpu_data); 55 + 56 + void cpuinfo_store_cpu(void); 57 + void __init cpuinfo_store_boot_cpu(void); 58 + 59 + #endif /* __ASM_CPU_H */
+28 -7
arch/arm64/include/asm/cputype.h
··· 18 18 19 19 #define INVALID_HWID ULONG_MAX 20 20 21 + #define MPIDR_UP_BITMASK (0x1 << 30) 22 + #define MPIDR_MT_BITMASK (0x1 << 24) 21 23 #define MPIDR_HWID_BITMASK 0xff00ffffff 22 24 23 25 #define MPIDR_LEVEL_BITS_SHIFT 3 ··· 38 36 __val; \ 39 37 }) 40 38 39 + #define MIDR_REVISION_MASK 0xf 40 + #define MIDR_REVISION(midr) ((midr) & MIDR_REVISION_MASK) 41 + #define MIDR_PARTNUM_SHIFT 4 42 + #define MIDR_PARTNUM_MASK (0xfff << MIDR_PARTNUM_SHIFT) 43 + #define MIDR_PARTNUM(midr) \ 44 + (((midr) & MIDR_PARTNUM_MASK) >> MIDR_PARTNUM_SHIFT) 45 + #define MIDR_ARCHITECTURE_SHIFT 16 46 + #define MIDR_ARCHITECTURE_MASK (0xf << MIDR_ARCHITECTURE_SHIFT) 47 + #define MIDR_ARCHITECTURE(midr) \ 48 + (((midr) & MIDR_ARCHITECTURE_MASK) >> MIDR_ARCHITECTURE_SHIFT) 49 + #define MIDR_VARIANT_SHIFT 20 50 + #define MIDR_VARIANT_MASK (0xf << MIDR_VARIANT_SHIFT) 51 + #define MIDR_VARIANT(midr) \ 52 + (((midr) & MIDR_VARIANT_MASK) >> MIDR_VARIANT_SHIFT) 53 + #define MIDR_IMPLEMENTOR_SHIFT 24 54 + #define MIDR_IMPLEMENTOR_MASK (0xff << MIDR_IMPLEMENTOR_SHIFT) 55 + #define MIDR_IMPLEMENTOR(midr) \ 56 + (((midr) & MIDR_IMPLEMENTOR_MASK) >> MIDR_IMPLEMENTOR_SHIFT) 57 + 41 58 #define ARM_CPU_IMP_ARM 0x41 42 59 #define ARM_CPU_IMP_APM 0x50 43 60 44 - #define ARM_CPU_PART_AEM_V8 0xD0F0 45 - #define ARM_CPU_PART_FOUNDATION 0xD000 46 - #define ARM_CPU_PART_CORTEX_A53 0xD030 47 - #define ARM_CPU_PART_CORTEX_A57 0xD070 61 + #define ARM_CPU_PART_AEM_V8 0xD0F 62 + #define ARM_CPU_PART_FOUNDATION 0xD00 63 + #define ARM_CPU_PART_CORTEX_A57 0xD07 64 + #define ARM_CPU_PART_CORTEX_A53 0xD03 48 65 49 - #define APM_CPU_PART_POTENZA 0x0000 66 + #define APM_CPU_PART_POTENZA 0x000 50 67 51 68 #ifndef __ASSEMBLY__ 52 69 ··· 86 65 87 66 static inline unsigned int __attribute_const__ read_cpuid_implementor(void) 88 67 { 89 - return (read_cpuid_id() & 0xFF000000) >> 24; 68 + return MIDR_IMPLEMENTOR(read_cpuid_id()); 90 69 } 91 70 92 71 static inline unsigned int __attribute_const__ 
read_cpuid_part_number(void) 93 72 { 94 - return (read_cpuid_id() & 0xFFF0); 73 + return MIDR_PARTNUM(read_cpuid_id()); 95 74 } 96 75 97 76 static inline u32 __attribute_const__ read_cpuid_cachetype(void)
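The new MIDR_* accessors in the cputype.h hunk above can be exercised on a sample MIDR_EL1 value; 0x411fd070 is used here as an illustrative Cortex-A57 r1p0 ID (implementer 0x41 = ARM, part 0xD07). The macros below restate the header's field extraction in simplified form:

```c
#include <stdint.h>

/* MIDR_EL1 field extraction, as added in asm/cputype.h above. */
#define MIDR_REVISION(midr)	((midr) & 0xf)
#define MIDR_PARTNUM(midr)	(((midr) >> 4) & 0xfff)
#define MIDR_ARCHITECTURE(midr)	(((midr) >> 16) & 0xf)
#define MIDR_VARIANT(midr)	(((midr) >> 20) & 0xf)
#define MIDR_IMPLEMENTOR(midr)	(((midr) >> 24) & 0xff)

#define ARM_CPU_IMP_ARM		0x41
#define ARM_CPU_PART_CORTEX_A57	0xD07
```

Note the part-number constants in the diff shrink from four hex digits to three (e.g. 0xD070 to 0xD07) because the new `MIDR_PARTNUM()` shifts the field down before masking, rather than masking it in place.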
+15 -2
arch/arm64/include/asm/fpsimdmacros.h
··· 40 40 str w\tmpnr, [\state, #16 * 2 + 4] 41 41 .endm 42 42 43 + .macro fpsimd_restore_fpcr state, tmp 44 + /* 45 + * Writes to fpcr may be self-synchronising, so avoid restoring 46 + * the register if it hasn't changed. 47 + */ 48 + mrs \tmp, fpcr 49 + cmp \tmp, \state 50 + b.eq 9999f 51 + msr fpcr, \state 52 + 9999: 53 + .endm 54 + 55 + /* Clobbers \state */ 43 56 .macro fpsimd_restore state, tmpnr 44 57 ldp q0, q1, [\state, #16 * 0] 45 58 ldp q2, q3, [\state, #16 * 2] ··· 73 60 ldr w\tmpnr, [\state, #16 * 2] 74 61 msr fpsr, x\tmpnr 75 62 ldr w\tmpnr, [\state, #16 * 2 + 4] 76 - msr fpcr, x\tmpnr 63 + fpsimd_restore_fpcr x\tmpnr, \state 77 64 .endm 78 65 79 66 .altmacro ··· 97 84 .macro fpsimd_restore_partial state, tmpnr1, tmpnr2 98 85 ldp w\tmpnr1, w\tmpnr2, [\state] 99 86 msr fpsr, x\tmpnr1 100 - msr fpcr, x\tmpnr2 87 + fpsimd_restore_fpcr x\tmpnr2, x\tmpnr1 101 88 adr x\tmpnr1, 0f 102 89 ldr w\tmpnr2, [\state, #8] 103 90 add \state, \state, x\tmpnr2, lsl #4
+1 -5
arch/arm64/include/asm/memory.h
··· 41 41 * The module space lives between the addresses given by TASK_SIZE 42 42 * and PAGE_OFFSET - it must be within 128MB of the kernel text. 43 43 */ 44 - #ifdef CONFIG_ARM64_64K_PAGES 45 - #define VA_BITS (42) 46 - #else 47 - #define VA_BITS (39) 48 - #endif 44 + #define VA_BITS (CONFIG_ARM64_VA_BITS) 49 45 #define PAGE_OFFSET (UL(0xffffffffffffffff) << (VA_BITS - 1)) 50 46 #define MODULES_END (PAGE_OFFSET) 51 47 #define MODULES_VADDR (MODULES_END - SZ_64M)
+17 -5
arch/arm64/include/asm/page.h
··· 31 31 /* We do define AT_SYSINFO_EHDR but don't use the gate mechanism */ 32 32 #define __HAVE_ARCH_GATE_AREA 1 33 33 34 + /* 35 + * The idmap and swapper page tables need some space reserved in the kernel 36 + * image. Both require pgd, pud (4 levels only) and pmd tables to (section) 37 + * map the kernel. With the 64K page configuration, swapper and idmap need to 38 + * map to pte level. The swapper also maps the FDT (see __create_page_tables 39 + * for more information). 40 + */ 41 + #ifdef CONFIG_ARM64_64K_PAGES 42 + #define SWAPPER_PGTABLE_LEVELS (CONFIG_ARM64_PGTABLE_LEVELS) 43 + #else 44 + #define SWAPPER_PGTABLE_LEVELS (CONFIG_ARM64_PGTABLE_LEVELS - 1) 45 + #endif 46 + 47 + #define SWAPPER_DIR_SIZE (SWAPPER_PGTABLE_LEVELS * PAGE_SIZE) 48 + #define IDMAP_DIR_SIZE (SWAPPER_DIR_SIZE) 49 + 34 50 #ifndef __ASSEMBLY__ 35 51 36 - #ifdef CONFIG_ARM64_64K_PAGES 37 - #include <asm/pgtable-2level-types.h> 38 - #else 39 - #include <asm/pgtable-3level-types.h> 40 - #endif 52 + #include <asm/pgtable-types.h> 41 53 42 54 extern void __cpu_clear_user_page(void *p, unsigned long user); 43 55 extern void __cpu_copy_user_page(void *to, const void *from,
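The SWAPPER_PGTABLE_LEVELS arithmetic in the page.h hunk above reserves one page per table level the early page tables need: with 4KB pages the kernel is section-mapped at the PMD, so one fewer level than the full walk; with 64KB pages the mapping goes down to PTE level. A sketch of that sizing (illustrative helper, not kernel code):

```c
/*
 * Bytes reserved in the kernel image for the swapper (or idmap) page
 * tables: one page per level actually walked by the early mapping.
 */
static inline unsigned long swapper_dir_size(int page_shift,
					     int pgtable_levels, int is_64k)
{
	unsigned long page_size = 1ul << page_shift;
	/* 64K configs map to PTE level; 4K configs section-map at the PMD */
	int swapper_levels = is_64k ? pgtable_levels : pgtable_levels - 1;

	return (unsigned long)swapper_levels * page_size;
}
```

For the default 4KB/3-level configuration this gives two 4KB pages (8KB) for swapper_pg_dir.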
+22 -2
arch/arm64/include/asm/pgalloc.h
··· 26 26 27 27 #define check_pgt_cache() do { } while (0) 28 28 29 - #ifndef CONFIG_ARM64_64K_PAGES 29 + #if CONFIG_ARM64_PGTABLE_LEVELS > 2 30 30 31 31 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr) 32 32 { ··· 44 44 set_pud(pud, __pud(__pa(pmd) | PMD_TYPE_TABLE)); 45 45 } 46 46 47 - #endif /* CONFIG_ARM64_64K_PAGES */ 47 + #endif /* CONFIG_ARM64_PGTABLE_LEVELS > 2 */ 48 + 49 + #if CONFIG_ARM64_PGTABLE_LEVELS > 3 50 + 51 + static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr) 52 + { 53 + return (pud_t *)get_zeroed_page(GFP_KERNEL | __GFP_REPEAT); 54 + } 55 + 56 + static inline void pud_free(struct mm_struct *mm, pud_t *pud) 57 + { 58 + BUG_ON((unsigned long)pud & (PAGE_SIZE-1)); 59 + free_page((unsigned long)pud); 60 + } 61 + 62 + static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud) 63 + { 64 + set_pgd(pgd, __pgd(__pa(pud) | PUD_TYPE_TABLE)); 65 + } 66 + 67 + #endif /* CONFIG_ARM64_PGTABLE_LEVELS > 3 */ 48 68 49 69 extern pgd_t *pgd_alloc(struct mm_struct *mm); 50 70 extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
-43
arch/arm64/include/asm/pgtable-2level-hwdef.h
··· 1 - /* 2 - * Copyright (C) 2012 ARM Ltd. 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 as 6 - * published by the Free Software Foundation. 7 - * 8 - * This program is distributed in the hope that it will be useful, 9 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 - * GNU General Public License for more details. 12 - * 13 - * You should have received a copy of the GNU General Public License 14 - * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 - */ 16 - #ifndef __ASM_PGTABLE_2LEVEL_HWDEF_H 17 - #define __ASM_PGTABLE_2LEVEL_HWDEF_H 18 - 19 - /* 20 - * With LPAE and 64KB pages, there are 2 levels of page tables. Each level has 21 - * 8192 entries of 8 bytes each, occupying a 64KB page. Levels 0 and 1 are not 22 - * used. The 2nd level table (PGD for Linux) can cover a range of 4TB, each 23 - * entry representing 512MB. The user and kernel address spaces are limited to 24 - * 4TB in the 64KB page configuration. 25 - */ 26 - #define PTRS_PER_PTE 8192 27 - #define PTRS_PER_PGD 8192 28 - 29 - /* 30 - * PGDIR_SHIFT determines the size a top-level page table entry can map. 31 - */ 32 - #define PGDIR_SHIFT 29 33 - #define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT) 34 - #define PGDIR_MASK (~(PGDIR_SIZE-1)) 35 - 36 - /* 37 - * section address mask and size definitions. 38 - */ 39 - #define SECTION_SHIFT 29 40 - #define SECTION_SIZE (_AC(1, UL) << SECTION_SHIFT) 41 - #define SECTION_MASK (~(SECTION_SIZE-1)) 42 - 43 - #endif
-62
arch/arm64/include/asm/pgtable-2level-types.h
··· 1 - /* 2 - * Copyright (C) 2012 ARM Ltd. 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 as 6 - * published by the Free Software Foundation. 7 - * 8 - * This program is distributed in the hope that it will be useful, 9 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 - * GNU General Public License for more details. 12 - * 13 - * You should have received a copy of the GNU General Public License 14 - * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 - */ 16 - #ifndef __ASM_PGTABLE_2LEVEL_TYPES_H 17 - #define __ASM_PGTABLE_2LEVEL_TYPES_H 18 - 19 - #include <asm/types.h> 20 - 21 - typedef u64 pteval_t; 22 - typedef u64 pgdval_t; 23 - typedef pgdval_t pmdval_t; 24 - 25 - #undef STRICT_MM_TYPECHECKS 26 - 27 - #ifdef STRICT_MM_TYPECHECKS 28 - 29 - /* 30 - * These are used to make use of C type-checking.. 31 - */ 32 - typedef struct { pteval_t pte; } pte_t; 33 - typedef struct { pgdval_t pgd; } pgd_t; 34 - typedef struct { pteval_t pgprot; } pgprot_t; 35 - 36 - #define pte_val(x) ((x).pte) 37 - #define pgd_val(x) ((x).pgd) 38 - #define pgprot_val(x) ((x).pgprot) 39 - 40 - #define __pte(x) ((pte_t) { (x) } ) 41 - #define __pgd(x) ((pgd_t) { (x) } ) 42 - #define __pgprot(x) ((pgprot_t) { (x) } ) 43 - 44 - #else /* !STRICT_MM_TYPECHECKS */ 45 - 46 - typedef pteval_t pte_t; 47 - typedef pgdval_t pgd_t; 48 - typedef pteval_t pgprot_t; 49 - 50 - #define pte_val(x) (x) 51 - #define pgd_val(x) (x) 52 - #define pgprot_val(x) (x) 53 - 54 - #define __pte(x) (x) 55 - #define __pgd(x) (x) 56 - #define __pgprot(x) (x) 57 - 58 - #endif /* STRICT_MM_TYPECHECKS */ 59 - 60 - #include <asm-generic/pgtable-nopmd.h> 61 - 62 - #endif /* __ASM_PGTABLE_2LEVEL_TYPES_H */
-50
arch/arm64/include/asm/pgtable-3level-hwdef.h
··· 1 - /* 2 - * Copyright (C) 2012 ARM Ltd. 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 as 6 - * published by the Free Software Foundation. 7 - * 8 - * This program is distributed in the hope that it will be useful, 9 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 - * GNU General Public License for more details. 12 - * 13 - * You should have received a copy of the GNU General Public License 14 - * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 - */ 16 - #ifndef __ASM_PGTABLE_3LEVEL_HWDEF_H 17 - #define __ASM_PGTABLE_3LEVEL_HWDEF_H 18 - 19 - /* 20 - * With LPAE and 4KB pages, there are 3 levels of page tables. Each level has 21 - * 512 entries of 8 bytes each, occupying a 4K page. The first level table 22 - * covers a range of 512GB, each entry representing 1GB. The user and kernel 23 - * address spaces are limited to 512GB each. 24 - */ 25 - #define PTRS_PER_PTE 512 26 - #define PTRS_PER_PMD 512 27 - #define PTRS_PER_PGD 512 28 - 29 - /* 30 - * PGDIR_SHIFT determines the size a top-level page table entry can map. 31 - */ 32 - #define PGDIR_SHIFT 30 33 - #define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT) 34 - #define PGDIR_MASK (~(PGDIR_SIZE-1)) 35 - 36 - /* 37 - * PMD_SHIFT determines the size a middle-level page table entry can map. 38 - */ 39 - #define PMD_SHIFT 21 40 - #define PMD_SIZE (_AC(1, UL) << PMD_SHIFT) 41 - #define PMD_MASK (~(PMD_SIZE-1)) 42 - 43 - /* 44 - * section address mask and size definitions. 45 - */ 46 - #define SECTION_SHIFT 21 47 - #define SECTION_SIZE (_AC(1, UL) << SECTION_SHIFT) 48 - #define SECTION_MASK (~(SECTION_SIZE-1)) 49 - 50 - #endif
-68
arch/arm64/include/asm/pgtable-3level-types.h
···
1 - /*
2 - * Copyright (C) 2012 ARM Ltd.
3 - *
4 - * This program is free software; you can redistribute it and/or modify
5 - * it under the terms of the GNU General Public License version 2 as
6 - * published by the Free Software Foundation.
7 - *
8 - * This program is distributed in the hope that it will be useful,
9 - * but WITHOUT ANY WARRANTY; without even the implied warranty of
10 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11 - * GNU General Public License for more details.
12 - *
13 - * You should have received a copy of the GNU General Public License
14 - * along with this program. If not, see <http://www.gnu.org/licenses/>.
15 - */
16 - #ifndef __ASM_PGTABLE_3LEVEL_TYPES_H
17 - #define __ASM_PGTABLE_3LEVEL_TYPES_H
18 -
19 - #include <asm/types.h>
20 -
21 - typedef u64 pteval_t;
22 - typedef u64 pmdval_t;
23 - typedef u64 pgdval_t;
24 -
25 - #undef STRICT_MM_TYPECHECKS
26 -
27 - #ifdef STRICT_MM_TYPECHECKS
28 -
29 - /*
30 - * These are used to make use of C type-checking..
31 - */
32 - typedef struct { pteval_t pte; } pte_t;
33 - typedef struct { pmdval_t pmd; } pmd_t;
34 - typedef struct { pgdval_t pgd; } pgd_t;
35 - typedef struct { pteval_t pgprot; } pgprot_t;
36 -
37 - #define pte_val(x) ((x).pte)
38 - #define pmd_val(x) ((x).pmd)
39 - #define pgd_val(x) ((x).pgd)
40 - #define pgprot_val(x) ((x).pgprot)
41 -
42 - #define __pte(x) ((pte_t) { (x) } )
43 - #define __pmd(x) ((pmd_t) { (x) } )
44 - #define __pgd(x) ((pgd_t) { (x) } )
45 - #define __pgprot(x) ((pgprot_t) { (x) } )
46 -
47 - #else /* !STRICT_MM_TYPECHECKS */
48 -
49 - typedef pteval_t pte_t;
50 - typedef pmdval_t pmd_t;
51 - typedef pgdval_t pgd_t;
52 - typedef pteval_t pgprot_t;
53 -
54 - #define pte_val(x) (x)
55 - #define pmd_val(x) (x)
56 - #define pgd_val(x) (x)
57 - #define pgprot_val(x) (x)
58 -
59 - #define __pte(x) (x)
60 - #define __pmd(x) (x)
61 - #define __pgd(x) (x)
62 - #define __pgprot(x) (x)
63 -
64 - #endif /* STRICT_MM_TYPECHECKS */
65 -
66 - #include <asm-generic/pgtable-nopud.h>
67 -
68 - #endif /* __ASM_PGTABLE_3LEVEL_TYPES_H */
+37 -5
arch/arm64/include/asm/pgtable-hwdef.h
···
16 16 #ifndef __ASM_PGTABLE_HWDEF_H
17 17 #define __ASM_PGTABLE_HWDEF_H
18 18
19 - #ifdef CONFIG_ARM64_64K_PAGES
20 - #include <asm/pgtable-2level-hwdef.h>
21 - #else
22 - #include <asm/pgtable-3level-hwdef.h>
19 + #define PTRS_PER_PTE (1 << (PAGE_SHIFT - 3))
20 +
21 + /*
22 + * PMD_SHIFT determines the size a level 2 page table entry can map.
23 + */
24 + #if CONFIG_ARM64_PGTABLE_LEVELS > 2
25 + #define PMD_SHIFT ((PAGE_SHIFT - 3) * 2 + 3)
26 + #define PMD_SIZE (_AC(1, UL) << PMD_SHIFT)
27 + #define PMD_MASK (~(PMD_SIZE-1))
28 + #define PTRS_PER_PMD PTRS_PER_PTE
23 29 #endif
30 +
31 + /*
32 + * PUD_SHIFT determines the size a level 1 page table entry can map.
33 + */
34 + #if CONFIG_ARM64_PGTABLE_LEVELS > 3
35 + #define PUD_SHIFT ((PAGE_SHIFT - 3) * 3 + 3)
36 + #define PUD_SIZE (_AC(1, UL) << PUD_SHIFT)
37 + #define PUD_MASK (~(PUD_SIZE-1))
38 + #define PTRS_PER_PUD PTRS_PER_PTE
39 + #endif
40 +
41 + /*
42 + * PGDIR_SHIFT determines the size a top-level page table entry can map
43 + * (depending on the configuration, this level can be 0, 1 or 2).
44 + */
45 + #define PGDIR_SHIFT ((PAGE_SHIFT - 3) * CONFIG_ARM64_PGTABLE_LEVELS + 3)
46 + #define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT)
47 + #define PGDIR_MASK (~(PGDIR_SIZE-1))
48 + #define PTRS_PER_PGD (1 << (VA_BITS - PGDIR_SHIFT))
49 +
50 + /*
51 + * Section address mask and size definitions.
52 + */
53 + #define SECTION_SHIFT PMD_SHIFT
54 + #define SECTION_SIZE (_AC(1, UL) << SECTION_SHIFT)
55 + #define SECTION_MASK (~(SECTION_SIZE-1))
24 56
25 57 /*
26 58 * Hardware page table definitions.
27 59 *
28 60 * Level 1 descriptor (PUD).
29 61 */
30 -
62 + #define PUD_TYPE_TABLE (_AT(pudval_t, 3) << 0)
31 63 #define PUD_TABLE_BIT (_AT(pgdval_t, 1) << 1)
32 64 #define PUD_TYPE_MASK (_AT(pgdval_t, 3) << 0)
33 65 #define PUD_TYPE_SECT (_AT(pgdval_t, 1) << 0)
+95
arch/arm64/include/asm/pgtable-types.h
···
1 + /*
2 + * Page table types definitions.
3 + *
4 + * Copyright (C) 2014 ARM Ltd.
5 + * Author: Catalin Marinas <catalin.marinas@arm.com>
6 + *
7 + * This program is free software: you can redistribute it and/or modify
8 + * it under the terms of the GNU General Public License version 2 as
9 + * published by the Free Software Foundation.
10 + *
11 + * This program is distributed in the hope that it will be useful,
12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 + * GNU General Public License for more details.
15 + *
16 + * You should have received a copy of the GNU General Public License
17 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
18 + */
19 +
20 + #ifndef __ASM_PGTABLE_TYPES_H
21 + #define __ASM_PGTABLE_TYPES_H
22 +
23 + #include <asm/types.h>
24 +
25 + typedef u64 pteval_t;
26 + typedef u64 pmdval_t;
27 + typedef u64 pudval_t;
28 + typedef u64 pgdval_t;
29 +
30 + #undef STRICT_MM_TYPECHECKS
31 +
32 + #ifdef STRICT_MM_TYPECHECKS
33 +
34 + /*
35 + * These are used to make use of C type-checking..
36 + */
37 + typedef struct { pteval_t pte; } pte_t;
38 + #define pte_val(x) ((x).pte)
39 + #define __pte(x) ((pte_t) { (x) } )
40 +
41 + #if CONFIG_ARM64_PGTABLE_LEVELS > 2
42 + typedef struct { pmdval_t pmd; } pmd_t;
43 + #define pmd_val(x) ((x).pmd)
44 + #define __pmd(x) ((pmd_t) { (x) } )
45 + #endif
46 +
47 + #if CONFIG_ARM64_PGTABLE_LEVELS > 3
48 + typedef struct { pudval_t pud; } pud_t;
49 + #define pud_val(x) ((x).pud)
50 + #define __pud(x) ((pud_t) { (x) } )
51 + #endif
52 +
53 + typedef struct { pgdval_t pgd; } pgd_t;
54 + #define pgd_val(x) ((x).pgd)
55 + #define __pgd(x) ((pgd_t) { (x) } )
56 +
57 + typedef struct { pteval_t pgprot; } pgprot_t;
58 + #define pgprot_val(x) ((x).pgprot)
59 + #define __pgprot(x) ((pgprot_t) { (x) } )
60 +
61 + #else /* !STRICT_MM_TYPECHECKS */
62 +
63 + typedef pteval_t pte_t;
64 + #define pte_val(x) (x)
65 + #define __pte(x) (x)
66 +
67 + #if CONFIG_ARM64_PGTABLE_LEVELS > 2
68 + typedef pmdval_t pmd_t;
69 + #define pmd_val(x) (x)
70 + #define __pmd(x) (x)
71 + #endif
72 +
73 + #if CONFIG_ARM64_PGTABLE_LEVELS > 3
74 + typedef pudval_t pud_t;
75 + #define pud_val(x) (x)
76 + #define __pud(x) (x)
77 + #endif
78 +
79 + typedef pgdval_t pgd_t;
80 + #define pgd_val(x) (x)
81 + #define __pgd(x) (x)
82 +
83 + typedef pteval_t pgprot_t;
84 + #define pgprot_val(x) (x)
85 + #define __pgprot(x) (x)
86 +
87 + #endif /* STRICT_MM_TYPECHECKS */
88 +
89 + #if CONFIG_ARM64_PGTABLE_LEVELS == 2
90 + #include <asm-generic/pgtable-nopmd.h>
91 + #elif CONFIG_ARM64_PGTABLE_LEVELS == 3
92 + #include <asm-generic/pgtable-nopud.h>
93 + #endif
94 +
95 + #endif /* __ASM_PGTABLE_TYPES_H */
+76 -24
arch/arm64/include/asm/pgtable.h
···
33 33
34 34 /*
35 35 * VMALLOC and SPARSEMEM_VMEMMAP ranges.
36 + *
37 + * VMEMMAP_SIZE: allows the whole VA space to be covered by a struct page array
38 + * (rounded up to PUD_SIZE).
39 + * VMALLOC_START: beginning of the kernel VA space
40 + * VMALLOC_END: extends to the available space below vmemmap, PCI I/O space,
41 + * fixed mappings and modules
36 42 */
43 + #define VMEMMAP_SIZE ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
37 44 #define VMALLOC_START (UL(0xffffffffffffffff) << VA_BITS)
38 - #define VMALLOC_END (PAGE_OFFSET - UL(0x400000000) - SZ_64K)
45 + #define VMALLOC_END (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
39 46
40 47 #define vmemmap ((struct page *)(VMALLOC_END + SZ_64K))
41 48
···
51 44 #ifndef __ASSEMBLY__
52 45 extern void __pte_error(const char *file, int line, unsigned long val);
53 46 extern void __pmd_error(const char *file, int line, unsigned long val);
47 + extern void __pud_error(const char *file, int line, unsigned long val);
54 48 extern void __pgd_error(const char *file, int line, unsigned long val);
55 -
56 - #define pte_ERROR(pte) __pte_error(__FILE__, __LINE__, pte_val(pte))
57 - #ifndef CONFIG_ARM64_64K_PAGES
58 - #define pmd_ERROR(pmd) __pmd_error(__FILE__, __LINE__, pmd_val(pmd))
59 - #endif
60 - #define pgd_ERROR(pgd) __pgd_error(__FILE__, __LINE__, pgd_val(pgd))
61 49
62 50 #ifdef CONFIG_SMP
63 51 #define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
···
114 112 extern struct page *empty_zero_page;
115 113 #define ZERO_PAGE(vaddr) (empty_zero_page)
116 114
115 + #define pte_ERROR(pte) __pte_error(__FILE__, __LINE__, pte_val(pte))
116 +
117 117 #define pte_pfn(pte) ((pte_val(pte) & PHYS_MASK) >> PAGE_SHIFT)
118 118
119 119 #define pfn_pte(pfn,prot) (__pte(((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)))
···
123 119 #define pte_none(pte) (!pte_val(pte))
124 120 #define pte_clear(mm,addr,ptep) set_pte(ptep, __pte(0))
125 121 #define pte_page(pte) (pfn_to_page(pte_pfn(pte)))
122 +
123 + /* Find an entry in the third-level page table. */
124 + #define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
125 +
126 126 #define pte_offset_kernel(dir,addr) (pmd_page_vaddr(*(dir)) + pte_index(addr))
127 127
128 128 #define pte_offset_map(dir,addr) pte_offset_kernel((dir), (addr))
···
146 138
147 139 #define pte_valid_user(pte) \
148 140 ((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
141 + #define pte_valid_not_user(pte) \
142 + ((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
149 143
150 144 static inline pte_t pte_wrprotect(pte_t pte)
151 145 {
···
194 184 static inline void set_pte(pte_t *ptep, pte_t pte)
195 185 {
196 186 *ptep = pte;
187 +
188 + /*
189 + * Only if the new pte is valid and kernel, otherwise TLB maintenance
190 + * or update_mmu_cache() have the necessary barriers.
191 + */
192 + if (pte_valid_not_user(pte)) {
193 + dsb(ishst);
194 + isb();
195 + }
197 196 }
198 197
199 198 extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);
···
322 303 {
323 304 *pmdp = pmd;
324 305 dsb(ishst);
306 + isb();
325 307 }
326 308
327 309 static inline void pmd_clear(pmd_t *pmdp)
···
343 323 */
344 324 #define mk_pte(page,prot) pfn_pte(page_to_pfn(page),prot)
345 325
346 - #ifndef CONFIG_ARM64_64K_PAGES
326 + #if CONFIG_ARM64_PGTABLE_LEVELS > 2
327 +
328 + #define pmd_ERROR(pmd) __pmd_error(__FILE__, __LINE__, pmd_val(pmd))
347 329
348 330 #define pud_none(pud) (!pud_val(pud))
349 331 #define pud_bad(pud) (!(pud_val(pud) & 2))
···
355 333 {
356 334 *pudp = pud;
357 335 dsb(ishst);
336 + isb();
358 337 }
359 338
360 339 static inline void pud_clear(pud_t *pudp)
···
368 345 return __va(pud_val(pud) & PHYS_MASK & (s32)PAGE_MASK);
369 346 }
370 347
371 - #endif /* CONFIG_ARM64_64K_PAGES */
348 + /* Find an entry in the second-level page table. */
349 + #define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
350 +
351 + static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
352 + {
353 + return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
354 + }
355 +
356 + #endif /* CONFIG_ARM64_PGTABLE_LEVELS > 2 */
357 +
358 + #if CONFIG_ARM64_PGTABLE_LEVELS > 3
359 +
360 + #define pud_ERROR(pud) __pud_error(__FILE__, __LINE__, pud_val(pud))
361 +
362 + #define pgd_none(pgd) (!pgd_val(pgd))
363 + #define pgd_bad(pgd) (!(pgd_val(pgd) & 2))
364 + #define pgd_present(pgd) (pgd_val(pgd))
365 +
366 + static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
367 + {
368 + *pgdp = pgd;
369 + dsb(ishst);
370 + }
371 +
372 + static inline void pgd_clear(pgd_t *pgdp)
373 + {
374 + set_pgd(pgdp, __pgd(0));
375 + }
376 +
377 + static inline pud_t *pgd_page_vaddr(pgd_t pgd)
378 + {
379 + return __va(pgd_val(pgd) & PHYS_MASK & (s32)PAGE_MASK);
380 + }
381 +
382 + /* Find an entry in the first-level page table. */
383 + #define pud_index(addr) (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
384 +
385 + static inline pud_t *pud_offset(pgd_t *pgd, unsigned long addr)
386 + {
387 + return (pud_t *)pgd_page_vaddr(*pgd) + pud_index(addr);
388 + }
389 +
390 + #endif /* CONFIG_ARM64_PGTABLE_LEVELS > 3 */
391 +
392 + #define pgd_ERROR(pgd) __pgd_error(__FILE__, __LINE__, pgd_val(pgd))
372 393
373 394 /* to find an entry in a page-table-directory */
374 395 #define pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
···
421 354
422 355 /* to find an entry in a kernel page-table-directory */
423 356 #define pgd_offset_k(addr) pgd_offset(&init_mm, addr)
424 -
425 - /* Find an entry in the second-level page table.. */
426 - #ifndef CONFIG_ARM64_64K_PAGES
427 - #define pmd_index(addr) (((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
428 - static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
429 - {
430 - return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
431 - }
432 - #endif
433 -
434 - /* Find an entry in the third-level page table.. */
435 - #define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
436 357
437 358 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
438 359 {
···
437 382
438 383 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
439 384 extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
440 -
441 - #define SWAPPER_DIR_SIZE (3 * PAGE_SIZE)
442 - #define IDMAP_DIR_SIZE (2 * PAGE_SIZE)
443 385
444 386 /*
445 387 * Encode and decode a swap entry:
+2 -2
arch/arm64/include/asm/processor.h
···
137 137 #define task_pt_regs(p) \
138 138 ((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1)
139 139
140 - #define KSTK_EIP(tsk) task_pt_regs(tsk)->pc
141 - #define KSTK_ESP(tsk) task_pt_regs(tsk)->sp
140 + #define KSTK_EIP(tsk) ((unsigned long)task_pt_regs(tsk)->pc)
141 + #define KSTK_ESP(tsk) ((unsigned long)task_pt_regs(tsk)->sp)
142 142
143 143 /*
144 144 * Prefetching support
+38
arch/arm64/include/asm/stackprotector.h
···
1 + /*
2 + * GCC stack protector support.
3 + *
4 + * Stack protector works by putting predefined pattern at the start of
5 + * the stack frame and verifying that it hasn't been overwritten when
6 + * returning from the function. The pattern is called stack canary
7 + * and gcc expects it to be defined by a global variable called
8 + * "__stack_chk_guard" on ARM. This unfortunately means that on SMP
9 + * we cannot have a different canary value per task.
10 + */
11 +
12 + #ifndef __ASM_STACKPROTECTOR_H
13 + #define __ASM_STACKPROTECTOR_H
14 +
15 + #include <linux/random.h>
16 + #include <linux/version.h>
17 +
18 + extern unsigned long __stack_chk_guard;
19 +
20 + /*
21 + * Initialize the stackprotector canary value.
22 + *
23 + * NOTE: this must only be called from functions that never return,
24 + * and it must always be inlined.
25 + */
26 + static __always_inline void boot_init_stack_canary(void)
27 + {
28 + unsigned long canary;
29 +
30 + /* Try to get a semi random initial value. */
31 + get_random_bytes(&canary, sizeof(canary));
32 + canary ^= LINUX_VERSION_CODE;
33 +
34 + current->stack_canary = canary;
35 + __stack_chk_guard = current->stack_canary;
36 + }
37 +
38 + #endif /* _ASM_STACKPROTECTOR_H */
+14
arch/arm64/include/asm/syscall.h
···
16 16 #ifndef __ASM_SYSCALL_H
17 17 #define __ASM_SYSCALL_H
18 18
19 + #include <uapi/linux/audit.h>
20 + #include <linux/compat.h>
19 21 #include <linux/err.h>
20 22
21 23 extern const void *sys_call_table[];
···
105 103 }
106 104
107 105 memcpy(&regs->regs[i], args, n * sizeof(args[0]));
106 + }
107 +
108 + /*
109 + * We don't care about endianness (__AUDIT_ARCH_LE bit) here because
110 + * AArch64 has the same system calls both on little- and big- endian.
111 + */
112 + static inline int syscall_get_arch(void)
113 + {
114 + if (is_compat_task())
115 + return AUDIT_ARCH_ARM;
116 +
117 + return AUDIT_ARCH_AARCH64;
108 118 }
109 119
110 120 #endif /* __ASM_SYSCALL_H */
+60
arch/arm64/include/asm/sysreg.h
···
1 + /*
2 + * Macros for accessing system registers with older binutils.
3 + *
4 + * Copyright (C) 2014 ARM Ltd.
5 + * Author: Catalin Marinas <catalin.marinas@arm.com>
6 + *
7 + * This program is free software: you can redistribute it and/or modify
8 + * it under the terms of the GNU General Public License version 2 as
9 + * published by the Free Software Foundation.
10 + *
11 + * This program is distributed in the hope that it will be useful,
12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 + * GNU General Public License for more details.
15 + *
16 + * You should have received a copy of the GNU General Public License
17 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
18 + */
19 +
20 + #ifndef __ASM_SYSREG_H
21 + #define __ASM_SYSREG_H
22 +
23 + #define sys_reg(op0, op1, crn, crm, op2) \
24 + ((((op0)-2)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))
25 +
26 + #ifdef __ASSEMBLY__
27 +
28 + .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
29 + .equ __reg_num_x\num, \num
30 + .endr
31 + .equ __reg_num_xzr, 31
32 +
33 + .macro mrs_s, rt, sreg
34 + .inst 0xd5300000|(\sreg)|(__reg_num_\rt)
35 + .endm
36 +
37 + .macro msr_s, sreg, rt
38 + .inst 0xd5100000|(\sreg)|(__reg_num_\rt)
39 + .endm
40 +
41 + #else
42 +
43 + asm(
44 + " .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n"
45 + " .equ __reg_num_x\\num, \\num\n"
46 + " .endr\n"
47 + " .equ __reg_num_xzr, 31\n"
48 + "\n"
49 + " .macro mrs_s, rt, sreg\n"
50 + " .inst 0xd5300000|(\\sreg)|(__reg_num_\\rt)\n"
51 + " .endm\n"
52 + "\n"
53 + " .macro msr_s, sreg, rt\n"
54 + " .inst 0xd5100000|(\\sreg)|(__reg_num_\\rt)\n"
55 + " .endm\n"
56 + ");
57 +
58 + #endif
59 +
60 + #endif /* __ASM_SYSREG_H */
+4 -1
arch/arm64/include/asm/thread_info.h
···
103 103 #define TIF_NEED_RESCHED 1
104 104 #define TIF_NOTIFY_RESUME 2 /* callback before returning to user */
105 105 #define TIF_FOREIGN_FPSTATE 3 /* CPU's FP state is not current's */
106 + #define TIF_NOHZ 7
106 107 #define TIF_SYSCALL_TRACE 8
107 108 #define TIF_SYSCALL_AUDIT 9
108 109 #define TIF_SYSCALL_TRACEPOINT 10
···
119 118 #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
120 119 #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
121 120 #define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE)
121 + #define _TIF_NOHZ (1 << TIF_NOHZ)
122 122 #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
123 123 #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
124 124 #define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
···
130 128 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
131 129
132 130 #define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
133 - _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP)
131 + _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
132 + _TIF_NOHZ)
134 133
135 134 #endif /* __KERNEL__ */
136 135 #endif /* __ASM_THREAD_INFO_H */
+10 -1
arch/arm64/include/asm/tlb.h
···
91 91 tlb_remove_page(tlb, pte);
92 92 }
93 93
94 - #ifndef CONFIG_ARM64_64K_PAGES
94 + #if CONFIG_ARM64_PGTABLE_LEVELS > 2
95 95 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
96 96 unsigned long addr)
97 97 {
98 98 tlb_add_flush(tlb, addr);
99 99 tlb_remove_page(tlb, virt_to_page(pmdp));
100 + }
101 + #endif
102 +
103 + #if CONFIG_ARM64_PGTABLE_LEVELS > 3
104 + static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
105 + unsigned long addr)
106 + {
107 + tlb_add_flush(tlb, addr);
108 + tlb_remove_page(tlb, virt_to_page(pudp));
100 109 }
101 110 #endif
102 111
+29 -5
arch/arm64/include/asm/tlbflush.h
···
98 98 dsb(ish);
99 99 }
100 100
101 - static inline void flush_tlb_range(struct vm_area_struct *vma,
102 - unsigned long start, unsigned long end)
101 + static inline void __flush_tlb_range(struct vm_area_struct *vma,
102 + unsigned long start, unsigned long end)
103 103 {
104 104 unsigned long asid = (unsigned long)ASID(vma->vm_mm) << 48;
105 105 unsigned long addr;
···
112 112 dsb(ish);
113 113 }
114 114
115 - static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
115 + static inline void __flush_tlb_kernel_range(unsigned long start, unsigned long end)
116 116 {
117 117 unsigned long addr;
118 118 start >>= 12;
···
122 122 for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
123 123 asm("tlbi vaae1is, %0" : : "r"(addr));
124 124 dsb(ish);
125 + isb();
126 + }
127 +
128 + /*
129 + * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
130 + * necessarily a performance improvement.
131 + */
132 + #define MAX_TLB_RANGE (1024UL << PAGE_SHIFT)
133 +
134 + static inline void flush_tlb_range(struct vm_area_struct *vma,
135 + unsigned long start, unsigned long end)
136 + {
137 + if ((end - start) <= MAX_TLB_RANGE)
138 + __flush_tlb_range(vma, start, end);
139 + else
140 + flush_tlb_mm(vma->vm_mm);
141 + }
142 +
143 + static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
144 + {
145 + if ((end - start) <= MAX_TLB_RANGE)
146 + __flush_tlb_kernel_range(start, end);
147 + else
148 + flush_tlb_all();
125 149 }
126 150
127 151 /*
···
155 131 unsigned long addr, pte_t *ptep)
156 132 {
157 133 /*
158 - * set_pte() does not have a DSB, so make sure that the page table
159 - * write is visible.
134 + * set_pte() does not have a DSB for user mappings, so make sure that
135 + * the page table write is visible.
160 136 */
161 137 dsb(ishst);
162 138 }
+17
arch/arm64/include/asm/unistd.h
···
26 26 #define __ARCH_WANT_COMPAT_SYS_SENDFILE
27 27 #define __ARCH_WANT_SYS_FORK
28 28 #define __ARCH_WANT_SYS_VFORK
29 +
30 + /*
31 + * Compat syscall numbers used by the AArch64 kernel.
32 + */
33 + #define __NR_compat_restart_syscall 0
34 + #define __NR_compat_sigreturn 119
35 + #define __NR_compat_rt_sigreturn 173
36 +
37 + /*
38 + * The following SVCs are ARM private.
39 + */
40 + #define __ARM_NR_COMPAT_BASE 0x0f0000
41 + #define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE+2)
42 + #define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE+5)
43 +
44 + #define __NR_compat_syscalls 383
29 45 #endif
46 +
30 47 #define __ARCH_WANT_SYS_CLONE
31 48 #include <uapi/asm/unistd.h>
32 49
+766 -400
arch/arm64/include/asm/unistd32.h
··· 21 21 #define __SYSCALL(x, y) 22 22 #endif 23 23 24 - __SYSCALL(0, sys_restart_syscall) 25 - __SYSCALL(1, sys_exit) 26 - __SYSCALL(2, sys_fork) 27 - __SYSCALL(3, sys_read) 28 - __SYSCALL(4, sys_write) 29 - __SYSCALL(5, compat_sys_open) 30 - __SYSCALL(6, sys_close) 31 - __SYSCALL(7, sys_ni_syscall) /* 7 was sys_waitpid */ 32 - __SYSCALL(8, sys_creat) 33 - __SYSCALL(9, sys_link) 34 - __SYSCALL(10, sys_unlink) 35 - __SYSCALL(11, compat_sys_execve) 36 - __SYSCALL(12, sys_chdir) 37 - __SYSCALL(13, sys_ni_syscall) /* 13 was sys_time */ 38 - __SYSCALL(14, sys_mknod) 39 - __SYSCALL(15, sys_chmod) 40 - __SYSCALL(16, sys_lchown16) 41 - __SYSCALL(17, sys_ni_syscall) /* 17 was sys_break */ 42 - __SYSCALL(18, sys_ni_syscall) /* 18 was sys_stat */ 43 - __SYSCALL(19, compat_sys_lseek) 44 - __SYSCALL(20, sys_getpid) 45 - __SYSCALL(21, compat_sys_mount) 46 - __SYSCALL(22, sys_ni_syscall) /* 22 was sys_umount */ 47 - __SYSCALL(23, sys_setuid16) 48 - __SYSCALL(24, sys_getuid16) 49 - __SYSCALL(25, sys_ni_syscall) /* 25 was sys_stime */ 50 - __SYSCALL(26, compat_sys_ptrace) 51 - __SYSCALL(27, sys_ni_syscall) /* 27 was sys_alarm */ 52 - __SYSCALL(28, sys_ni_syscall) /* 28 was sys_fstat */ 53 - __SYSCALL(29, sys_pause) 54 - __SYSCALL(30, sys_ni_syscall) /* 30 was sys_utime */ 55 - __SYSCALL(31, sys_ni_syscall) /* 31 was sys_stty */ 56 - __SYSCALL(32, sys_ni_syscall) /* 32 was sys_gtty */ 57 - __SYSCALL(33, sys_access) 58 - __SYSCALL(34, sys_nice) 59 - __SYSCALL(35, sys_ni_syscall) /* 35 was sys_ftime */ 60 - __SYSCALL(36, sys_sync) 61 - __SYSCALL(37, sys_kill) 62 - __SYSCALL(38, sys_rename) 63 - __SYSCALL(39, sys_mkdir) 64 - __SYSCALL(40, sys_rmdir) 65 - __SYSCALL(41, sys_dup) 66 - __SYSCALL(42, sys_pipe) 67 - __SYSCALL(43, compat_sys_times) 68 - __SYSCALL(44, sys_ni_syscall) /* 44 was sys_prof */ 69 - __SYSCALL(45, sys_brk) 70 - __SYSCALL(46, sys_setgid16) 71 - __SYSCALL(47, sys_getgid16) 72 - __SYSCALL(48, sys_ni_syscall) /* 48 was sys_signal */ 73 - __SYSCALL(49, sys_geteuid16) 74 
- __SYSCALL(50, sys_getegid16) 75 - __SYSCALL(51, sys_acct) 76 - __SYSCALL(52, sys_umount) 77 - __SYSCALL(53, sys_ni_syscall) /* 53 was sys_lock */ 78 - __SYSCALL(54, compat_sys_ioctl) 79 - __SYSCALL(55, compat_sys_fcntl) 80 - __SYSCALL(56, sys_ni_syscall) /* 56 was sys_mpx */ 81 - __SYSCALL(57, sys_setpgid) 82 - __SYSCALL(58, sys_ni_syscall) /* 58 was sys_ulimit */ 83 - __SYSCALL(59, sys_ni_syscall) /* 59 was sys_olduname */ 84 - __SYSCALL(60, sys_umask) 85 - __SYSCALL(61, sys_chroot) 86 - __SYSCALL(62, compat_sys_ustat) 87 - __SYSCALL(63, sys_dup2) 88 - __SYSCALL(64, sys_getppid) 89 - __SYSCALL(65, sys_getpgrp) 90 - __SYSCALL(66, sys_setsid) 91 - __SYSCALL(67, compat_sys_sigaction) 92 - __SYSCALL(68, sys_ni_syscall) /* 68 was sys_sgetmask */ 93 - __SYSCALL(69, sys_ni_syscall) /* 69 was sys_ssetmask */ 94 - __SYSCALL(70, sys_setreuid16) 95 - __SYSCALL(71, sys_setregid16) 96 - __SYSCALL(72, sys_sigsuspend) 97 - __SYSCALL(73, compat_sys_sigpending) 98 - __SYSCALL(74, sys_sethostname) 99 - __SYSCALL(75, compat_sys_setrlimit) 100 - __SYSCALL(76, sys_ni_syscall) /* 76 was compat_sys_getrlimit */ 101 - __SYSCALL(77, compat_sys_getrusage) 102 - __SYSCALL(78, compat_sys_gettimeofday) 103 - __SYSCALL(79, compat_sys_settimeofday) 104 - __SYSCALL(80, sys_getgroups16) 105 - __SYSCALL(81, sys_setgroups16) 106 - __SYSCALL(82, sys_ni_syscall) /* 82 was compat_sys_select */ 107 - __SYSCALL(83, sys_symlink) 108 - __SYSCALL(84, sys_ni_syscall) /* 84 was sys_lstat */ 109 - __SYSCALL(85, sys_readlink) 110 - __SYSCALL(86, sys_uselib) 111 - __SYSCALL(87, sys_swapon) 112 - __SYSCALL(88, sys_reboot) 113 - __SYSCALL(89, sys_ni_syscall) /* 89 was sys_readdir */ 114 - __SYSCALL(90, sys_ni_syscall) /* 90 was sys_mmap */ 115 - __SYSCALL(91, sys_munmap) 116 - __SYSCALL(92, compat_sys_truncate) 117 - __SYSCALL(93, compat_sys_ftruncate) 118 - __SYSCALL(94, sys_fchmod) 119 - __SYSCALL(95, sys_fchown16) 120 - __SYSCALL(96, sys_getpriority) 121 - __SYSCALL(97, sys_setpriority) 122 - __SYSCALL(98, 
sys_ni_syscall) /* 98 was sys_profil */ 123 - __SYSCALL(99, compat_sys_statfs) 124 - __SYSCALL(100, compat_sys_fstatfs) 125 - __SYSCALL(101, sys_ni_syscall) /* 101 was sys_ioperm */ 126 - __SYSCALL(102, sys_ni_syscall) /* 102 was sys_socketcall */ 127 - __SYSCALL(103, sys_syslog) 128 - __SYSCALL(104, compat_sys_setitimer) 129 - __SYSCALL(105, compat_sys_getitimer) 130 - __SYSCALL(106, compat_sys_newstat) 131 - __SYSCALL(107, compat_sys_newlstat) 132 - __SYSCALL(108, compat_sys_newfstat) 133 - __SYSCALL(109, sys_ni_syscall) /* 109 was sys_uname */ 134 - __SYSCALL(110, sys_ni_syscall) /* 110 was sys_iopl */ 135 - __SYSCALL(111, sys_vhangup) 136 - __SYSCALL(112, sys_ni_syscall) /* 112 was sys_idle */ 137 - __SYSCALL(113, sys_ni_syscall) /* 113 was sys_syscall */ 138 - __SYSCALL(114, compat_sys_wait4) 139 - __SYSCALL(115, sys_swapoff) 140 - __SYSCALL(116, compat_sys_sysinfo) 141 - __SYSCALL(117, sys_ni_syscall) /* 117 was sys_ipc */ 142 - __SYSCALL(118, sys_fsync) 143 - __SYSCALL(119, compat_sys_sigreturn_wrapper) 144 - __SYSCALL(120, sys_clone) 145 - __SYSCALL(121, sys_setdomainname) 146 - __SYSCALL(122, sys_newuname) 147 - __SYSCALL(123, sys_ni_syscall) /* 123 was sys_modify_ldt */ 148 - __SYSCALL(124, compat_sys_adjtimex) 149 - __SYSCALL(125, sys_mprotect) 150 - __SYSCALL(126, compat_sys_sigprocmask) 151 - __SYSCALL(127, sys_ni_syscall) /* 127 was sys_create_module */ 152 - __SYSCALL(128, sys_init_module) 153 - __SYSCALL(129, sys_delete_module) 154 - __SYSCALL(130, sys_ni_syscall) /* 130 was sys_get_kernel_syms */ 155 - __SYSCALL(131, sys_quotactl) 156 - __SYSCALL(132, sys_getpgid) 157 - __SYSCALL(133, sys_fchdir) 158 - __SYSCALL(134, sys_bdflush) 159 - __SYSCALL(135, sys_sysfs) 160 - __SYSCALL(136, sys_personality) 161 - __SYSCALL(137, sys_ni_syscall) /* 137 was sys_afs_syscall */ 162 - __SYSCALL(138, sys_setfsuid16) 163 - __SYSCALL(139, sys_setfsgid16) 164 - __SYSCALL(140, sys_llseek) 165 - __SYSCALL(141, compat_sys_getdents) 166 - __SYSCALL(142, 
compat_sys_select) 167 - __SYSCALL(143, sys_flock) 168 - __SYSCALL(144, sys_msync) 169 - __SYSCALL(145, compat_sys_readv) 170 - __SYSCALL(146, compat_sys_writev) 171 - __SYSCALL(147, sys_getsid) 172 - __SYSCALL(148, sys_fdatasync) 173 - __SYSCALL(149, compat_sys_sysctl) 174 - __SYSCALL(150, sys_mlock) 175 - __SYSCALL(151, sys_munlock) 176 - __SYSCALL(152, sys_mlockall) 177 - __SYSCALL(153, sys_munlockall) 178 - __SYSCALL(154, sys_sched_setparam) 179 - __SYSCALL(155, sys_sched_getparam) 180 - __SYSCALL(156, sys_sched_setscheduler) 181 - __SYSCALL(157, sys_sched_getscheduler) 182 - __SYSCALL(158, sys_sched_yield) 183 - __SYSCALL(159, sys_sched_get_priority_max) 184 - __SYSCALL(160, sys_sched_get_priority_min) 185 - __SYSCALL(161, compat_sys_sched_rr_get_interval) 186 - __SYSCALL(162, compat_sys_nanosleep) 187 - __SYSCALL(163, sys_mremap) 188 - __SYSCALL(164, sys_setresuid16) 189 - __SYSCALL(165, sys_getresuid16) 190 - __SYSCALL(166, sys_ni_syscall) /* 166 was sys_vm86 */ 191 - __SYSCALL(167, sys_ni_syscall) /* 167 was sys_query_module */ 192 - __SYSCALL(168, sys_poll) 193 - __SYSCALL(169, sys_ni_syscall) 194 - __SYSCALL(170, sys_setresgid16) 195 - __SYSCALL(171, sys_getresgid16) 196 - __SYSCALL(172, sys_prctl) 197 - __SYSCALL(173, compat_sys_rt_sigreturn_wrapper) 198 - __SYSCALL(174, compat_sys_rt_sigaction) 199 - __SYSCALL(175, compat_sys_rt_sigprocmask) 200 - __SYSCALL(176, compat_sys_rt_sigpending) 201 - __SYSCALL(177, compat_sys_rt_sigtimedwait) 202 - __SYSCALL(178, compat_sys_rt_sigqueueinfo) 203 - __SYSCALL(179, compat_sys_rt_sigsuspend) 204 - __SYSCALL(180, compat_sys_pread64_wrapper) 205 - __SYSCALL(181, compat_sys_pwrite64_wrapper) 206 - __SYSCALL(182, sys_chown16) 207 - __SYSCALL(183, sys_getcwd) 208 - __SYSCALL(184, sys_capget) 209 - __SYSCALL(185, sys_capset) 210 - __SYSCALL(186, compat_sys_sigaltstack) 211 - __SYSCALL(187, compat_sys_sendfile) 212 - __SYSCALL(188, sys_ni_syscall) /* 188 reserved */ 213 - __SYSCALL(189, sys_ni_syscall) /* 189 reserved */ 
-__SYSCALL(190, sys_vfork)
-__SYSCALL(191, compat_sys_getrlimit) /* SuS compliant getrlimit */
-__SYSCALL(192, sys_mmap_pgoff)
-__SYSCALL(193, compat_sys_truncate64_wrapper)
-__SYSCALL(194, compat_sys_ftruncate64_wrapper)
-__SYSCALL(195, sys_stat64)
-__SYSCALL(196, sys_lstat64)
-__SYSCALL(197, sys_fstat64)
-__SYSCALL(198, sys_lchown)
-__SYSCALL(199, sys_getuid)
-__SYSCALL(200, sys_getgid)
-__SYSCALL(201, sys_geteuid)
-__SYSCALL(202, sys_getegid)
-__SYSCALL(203, sys_setreuid)
-__SYSCALL(204, sys_setregid)
-__SYSCALL(205, sys_getgroups)
-__SYSCALL(206, sys_setgroups)
-__SYSCALL(207, sys_fchown)
-__SYSCALL(208, sys_setresuid)
-__SYSCALL(209, sys_getresuid)
-__SYSCALL(210, sys_setresgid)
-__SYSCALL(211, sys_getresgid)
-__SYSCALL(212, sys_chown)
-__SYSCALL(213, sys_setuid)
-__SYSCALL(214, sys_setgid)
-__SYSCALL(215, sys_setfsuid)
-__SYSCALL(216, sys_setfsgid)
-__SYSCALL(217, compat_sys_getdents64)
-__SYSCALL(218, sys_pivot_root)
-__SYSCALL(219, sys_mincore)
-__SYSCALL(220, sys_madvise)
-__SYSCALL(221, compat_sys_fcntl64)
-__SYSCALL(222, sys_ni_syscall) /* 222 for tux */
-__SYSCALL(223, sys_ni_syscall) /* 223 is unused */
-__SYSCALL(224, sys_gettid)
-__SYSCALL(225, compat_sys_readahead_wrapper)
-__SYSCALL(226, sys_setxattr)
-__SYSCALL(227, sys_lsetxattr)
-__SYSCALL(228, sys_fsetxattr)
-__SYSCALL(229, sys_getxattr)
-__SYSCALL(230, sys_lgetxattr)
-__SYSCALL(231, sys_fgetxattr)
-__SYSCALL(232, sys_listxattr)
-__SYSCALL(233, sys_llistxattr)
-__SYSCALL(234, sys_flistxattr)
-__SYSCALL(235, sys_removexattr)
-__SYSCALL(236, sys_lremovexattr)
-__SYSCALL(237, sys_fremovexattr)
-__SYSCALL(238, sys_tkill)
-__SYSCALL(239, sys_sendfile64)
-__SYSCALL(240, compat_sys_futex)
-__SYSCALL(241, compat_sys_sched_setaffinity)
-__SYSCALL(242, compat_sys_sched_getaffinity)
-__SYSCALL(243, compat_sys_io_setup)
-__SYSCALL(244, sys_io_destroy)
-__SYSCALL(245, compat_sys_io_getevents)
-__SYSCALL(246, compat_sys_io_submit)
-__SYSCALL(247, sys_io_cancel)
-__SYSCALL(248, sys_exit_group)
-__SYSCALL(249, compat_sys_lookup_dcookie)
-__SYSCALL(250, sys_epoll_create)
-__SYSCALL(251, sys_epoll_ctl)
-__SYSCALL(252, sys_epoll_wait)
-__SYSCALL(253, sys_remap_file_pages)
-__SYSCALL(254, sys_ni_syscall) /* 254 for set_thread_area */
-__SYSCALL(255, sys_ni_syscall) /* 255 for get_thread_area */
-__SYSCALL(256, sys_set_tid_address)
-__SYSCALL(257, compat_sys_timer_create)
-__SYSCALL(258, compat_sys_timer_settime)
-__SYSCALL(259, compat_sys_timer_gettime)
-__SYSCALL(260, sys_timer_getoverrun)
-__SYSCALL(261, sys_timer_delete)
-__SYSCALL(262, compat_sys_clock_settime)
-__SYSCALL(263, compat_sys_clock_gettime)
-__SYSCALL(264, compat_sys_clock_getres)
-__SYSCALL(265, compat_sys_clock_nanosleep)
-__SYSCALL(266, compat_sys_statfs64_wrapper)
-__SYSCALL(267, compat_sys_fstatfs64_wrapper)
-__SYSCALL(268, sys_tgkill)
-__SYSCALL(269, compat_sys_utimes)
-__SYSCALL(270, compat_sys_fadvise64_64_wrapper)
-__SYSCALL(271, sys_pciconfig_iobase)
-__SYSCALL(272, sys_pciconfig_read)
-__SYSCALL(273, sys_pciconfig_write)
-__SYSCALL(274, compat_sys_mq_open)
-__SYSCALL(275, sys_mq_unlink)
-__SYSCALL(276, compat_sys_mq_timedsend)
-__SYSCALL(277, compat_sys_mq_timedreceive)
-__SYSCALL(278, compat_sys_mq_notify)
-__SYSCALL(279, compat_sys_mq_getsetattr)
-__SYSCALL(280, compat_sys_waitid)
-__SYSCALL(281, sys_socket)
-__SYSCALL(282, sys_bind)
-__SYSCALL(283, sys_connect)
-__SYSCALL(284, sys_listen)
-__SYSCALL(285, sys_accept)
-__SYSCALL(286, sys_getsockname)
-__SYSCALL(287, sys_getpeername)
-__SYSCALL(288, sys_socketpair)
-__SYSCALL(289, sys_send)
-__SYSCALL(290, sys_sendto)
-__SYSCALL(291, compat_sys_recv)
-__SYSCALL(292, compat_sys_recvfrom)
-__SYSCALL(293, sys_shutdown)
-__SYSCALL(294, compat_sys_setsockopt)
-__SYSCALL(295, compat_sys_getsockopt)
-__SYSCALL(296, compat_sys_sendmsg)
-__SYSCALL(297, compat_sys_recvmsg)
-__SYSCALL(298, sys_semop)
-__SYSCALL(299, sys_semget)
-__SYSCALL(300, compat_sys_semctl)
-__SYSCALL(301, compat_sys_msgsnd)
-__SYSCALL(302, compat_sys_msgrcv)
-__SYSCALL(303, sys_msgget)
-__SYSCALL(304, compat_sys_msgctl)
-__SYSCALL(305, compat_sys_shmat)
-__SYSCALL(306, sys_shmdt)
-__SYSCALL(307, sys_shmget)
-__SYSCALL(308, compat_sys_shmctl)
-__SYSCALL(309, sys_add_key)
-__SYSCALL(310, sys_request_key)
-__SYSCALL(311, compat_sys_keyctl)
-__SYSCALL(312, compat_sys_semtimedop)
-__SYSCALL(313, sys_ni_syscall)
-__SYSCALL(314, sys_ioprio_set)
-__SYSCALL(315, sys_ioprio_get)
-__SYSCALL(316, sys_inotify_init)
-__SYSCALL(317, sys_inotify_add_watch)
-__SYSCALL(318, sys_inotify_rm_watch)
-__SYSCALL(319, compat_sys_mbind)
-__SYSCALL(320, compat_sys_get_mempolicy)
-__SYSCALL(321, compat_sys_set_mempolicy)
-__SYSCALL(322, compat_sys_openat)
-__SYSCALL(323, sys_mkdirat)
-__SYSCALL(324, sys_mknodat)
-__SYSCALL(325, sys_fchownat)
-__SYSCALL(326, compat_sys_futimesat)
-__SYSCALL(327, sys_fstatat64)
-__SYSCALL(328, sys_unlinkat)
-__SYSCALL(329, sys_renameat)
-__SYSCALL(330, sys_linkat)
-__SYSCALL(331, sys_symlinkat)
-__SYSCALL(332, sys_readlinkat)
-__SYSCALL(333, sys_fchmodat)
-__SYSCALL(334, sys_faccessat)
-__SYSCALL(335, compat_sys_pselect6)
-__SYSCALL(336, compat_sys_ppoll)
-__SYSCALL(337, sys_unshare)
-__SYSCALL(338, compat_sys_set_robust_list)
-__SYSCALL(339, compat_sys_get_robust_list)
-__SYSCALL(340, sys_splice)
-__SYSCALL(341, compat_sys_sync_file_range2_wrapper)
-__SYSCALL(342, sys_tee)
-__SYSCALL(343, compat_sys_vmsplice)
-__SYSCALL(344, compat_sys_move_pages)
-__SYSCALL(345, sys_getcpu)
-__SYSCALL(346, compat_sys_epoll_pwait)
-__SYSCALL(347, compat_sys_kexec_load)
-__SYSCALL(348, compat_sys_utimensat)
-__SYSCALL(349, compat_sys_signalfd)
-__SYSCALL(350, sys_timerfd_create)
-__SYSCALL(351, sys_eventfd)
-__SYSCALL(352, compat_sys_fallocate_wrapper)
-__SYSCALL(353, compat_sys_timerfd_settime)
-__SYSCALL(354, compat_sys_timerfd_gettime)
-__SYSCALL(355, compat_sys_signalfd4)
-__SYSCALL(356, sys_eventfd2)
-__SYSCALL(357, sys_epoll_create1)
-__SYSCALL(358, sys_dup3)
-__SYSCALL(359, sys_pipe2)
-__SYSCALL(360, sys_inotify_init1)
-__SYSCALL(361, compat_sys_preadv)
-__SYSCALL(362, compat_sys_pwritev)
-__SYSCALL(363, compat_sys_rt_tgsigqueueinfo)
-__SYSCALL(364, sys_perf_event_open)
-__SYSCALL(365, compat_sys_recvmmsg)
-__SYSCALL(366, sys_accept4)
-__SYSCALL(367, sys_fanotify_init)
-__SYSCALL(368, compat_sys_fanotify_mark)
-__SYSCALL(369, sys_prlimit64)
-__SYSCALL(370, sys_name_to_handle_at)
-__SYSCALL(371, compat_sys_open_by_handle_at)
-__SYSCALL(372, compat_sys_clock_adjtime)
-__SYSCALL(373, sys_syncfs)
-__SYSCALL(374, compat_sys_sendmmsg)
-__SYSCALL(375, sys_setns)
-__SYSCALL(376, compat_sys_process_vm_readv)
-__SYSCALL(377, compat_sys_process_vm_writev)
-__SYSCALL(378, sys_kcmp)
-__SYSCALL(379, sys_finit_module)
-__SYSCALL(380, sys_sched_setattr)
-__SYSCALL(381, sys_sched_getattr)
-__SYSCALL(382, sys_renameat2)
-
-#define __NR_compat_syscalls 383
-
-/*
- * Compat syscall numbers used by the AArch64 kernel.
- */
-#define __NR_compat_restart_syscall 0
-#define __NR_compat_sigreturn 119
-#define __NR_compat_rt_sigreturn 173
-
-
-/*
- * The following SVCs are ARM private.
- */
-#define __ARM_NR_COMPAT_BASE 0x0f0000
-#define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE+2)
-#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE+5)
+#define __NR_restart_syscall 0
+__SYSCALL(__NR_restart_syscall, sys_restart_syscall)
+#define __NR_exit 1
+__SYSCALL(__NR_exit, sys_exit)
+#define __NR_fork 2
+__SYSCALL(__NR_fork, sys_fork)
+#define __NR_read 3
+__SYSCALL(__NR_read, sys_read)
+#define __NR_write 4
+__SYSCALL(__NR_write, sys_write)
+#define __NR_open 5
+__SYSCALL(__NR_open, compat_sys_open)
+#define __NR_close 6
+__SYSCALL(__NR_close, sys_close)
+/* 7 was sys_waitpid */
+__SYSCALL(7, sys_ni_syscall)
+#define __NR_creat 8
+__SYSCALL(__NR_creat, sys_creat)
+#define __NR_link 9
+__SYSCALL(__NR_link, sys_link)
+#define __NR_unlink 10
+__SYSCALL(__NR_unlink, sys_unlink)
+#define __NR_execve 11
+__SYSCALL(__NR_execve, compat_sys_execve)
+#define __NR_chdir 12
+__SYSCALL(__NR_chdir, sys_chdir)
+/* 13 was sys_time */
+__SYSCALL(13, sys_ni_syscall)
+#define __NR_mknod 14
+__SYSCALL(__NR_mknod, sys_mknod)
+#define __NR_chmod 15
+__SYSCALL(__NR_chmod, sys_chmod)
+#define __NR_lchown 16
+__SYSCALL(__NR_lchown, sys_lchown16)
+/* 17 was sys_break */
+__SYSCALL(17, sys_ni_syscall)
+/* 18 was sys_stat */
+__SYSCALL(18, sys_ni_syscall)
+#define __NR_lseek 19
+__SYSCALL(__NR_lseek, compat_sys_lseek)
+#define __NR_getpid 20
+__SYSCALL(__NR_getpid, sys_getpid)
+#define __NR_mount 21
+__SYSCALL(__NR_mount, compat_sys_mount)
+/* 22 was sys_umount */
+__SYSCALL(22, sys_ni_syscall)
+#define __NR_setuid 23
+__SYSCALL(__NR_setuid, sys_setuid16)
+#define __NR_getuid 24
+__SYSCALL(__NR_getuid, sys_getuid16)
+/* 25 was sys_stime */
+__SYSCALL(25, sys_ni_syscall)
+#define __NR_ptrace 26
+__SYSCALL(__NR_ptrace, compat_sys_ptrace)
+/* 27 was sys_alarm */
+__SYSCALL(27, sys_ni_syscall)
+/* 28 was sys_fstat */
+__SYSCALL(28, sys_ni_syscall)
+#define __NR_pause 29
+__SYSCALL(__NR_pause, sys_pause)
+/* 30 was sys_utime */
+__SYSCALL(30, sys_ni_syscall)
+/* 31 was sys_stty */
+__SYSCALL(31, sys_ni_syscall)
+/* 32 was sys_gtty */
+__SYSCALL(32, sys_ni_syscall)
+#define __NR_access 33
+__SYSCALL(__NR_access, sys_access)
+#define __NR_nice 34
+__SYSCALL(__NR_nice, sys_nice)
+/* 35 was sys_ftime */
+__SYSCALL(35, sys_ni_syscall)
+#define __NR_sync 36
+__SYSCALL(__NR_sync, sys_sync)
+#define __NR_kill 37
+__SYSCALL(__NR_kill, sys_kill)
+#define __NR_rename 38
+__SYSCALL(__NR_rename, sys_rename)
+#define __NR_mkdir 39
+__SYSCALL(__NR_mkdir, sys_mkdir)
+#define __NR_rmdir 40
+__SYSCALL(__NR_rmdir, sys_rmdir)
+#define __NR_dup 41
+__SYSCALL(__NR_dup, sys_dup)
+#define __NR_pipe 42
+__SYSCALL(__NR_pipe, sys_pipe)
+#define __NR_times 43
+__SYSCALL(__NR_times, compat_sys_times)
+/* 44 was sys_prof */
+__SYSCALL(44, sys_ni_syscall)
+#define __NR_brk 45
+__SYSCALL(__NR_brk, sys_brk)
+#define __NR_setgid 46
+__SYSCALL(__NR_setgid, sys_setgid16)
+#define __NR_getgid 47
+__SYSCALL(__NR_getgid, sys_getgid16)
+/* 48 was sys_signal */
+__SYSCALL(48, sys_ni_syscall)
+#define __NR_geteuid 49
+__SYSCALL(__NR_geteuid, sys_geteuid16)
+#define __NR_getegid 50
+__SYSCALL(__NR_getegid, sys_getegid16)
+#define __NR_acct 51
+__SYSCALL(__NR_acct, sys_acct)
+#define __NR_umount2 52
+__SYSCALL(__NR_umount2, sys_umount)
+/* 53 was sys_lock */
+__SYSCALL(53, sys_ni_syscall)
+#define __NR_ioctl 54
+__SYSCALL(__NR_ioctl, compat_sys_ioctl)
+#define __NR_fcntl 55
+__SYSCALL(__NR_fcntl, compat_sys_fcntl)
+/* 56 was sys_mpx */
+__SYSCALL(56, sys_ni_syscall)
+#define __NR_setpgid 57
+__SYSCALL(__NR_setpgid, sys_setpgid)
+/* 58 was sys_ulimit */
+__SYSCALL(58, sys_ni_syscall)
+/* 59 was sys_olduname */
+__SYSCALL(59, sys_ni_syscall)
+#define __NR_umask 60
+__SYSCALL(__NR_umask, sys_umask)
+#define __NR_chroot 61
+__SYSCALL(__NR_chroot, sys_chroot)
+#define __NR_ustat 62
+__SYSCALL(__NR_ustat, compat_sys_ustat)
+#define __NR_dup2 63
+__SYSCALL(__NR_dup2, sys_dup2)
+#define __NR_getppid 64
+__SYSCALL(__NR_getppid, sys_getppid)
+#define __NR_getpgrp 65
+__SYSCALL(__NR_getpgrp, sys_getpgrp)
+#define __NR_setsid 66
+__SYSCALL(__NR_setsid, sys_setsid)
+#define __NR_sigaction 67
+__SYSCALL(__NR_sigaction, compat_sys_sigaction)
+/* 68 was sys_sgetmask */
+__SYSCALL(68, sys_ni_syscall)
+/* 69 was sys_ssetmask */
+__SYSCALL(69, sys_ni_syscall)
+#define __NR_setreuid 70
+__SYSCALL(__NR_setreuid, sys_setreuid16)
+#define __NR_setregid 71
+__SYSCALL(__NR_setregid, sys_setregid16)
+#define __NR_sigsuspend 72
+__SYSCALL(__NR_sigsuspend, sys_sigsuspend)
+#define __NR_sigpending 73
+__SYSCALL(__NR_sigpending, compat_sys_sigpending)
+#define __NR_sethostname 74
+__SYSCALL(__NR_sethostname, sys_sethostname)
+#define __NR_setrlimit 75
+__SYSCALL(__NR_setrlimit, compat_sys_setrlimit)
+/* 76 was compat_sys_getrlimit */
+__SYSCALL(76, sys_ni_syscall)
+#define __NR_getrusage 77
+__SYSCALL(__NR_getrusage, compat_sys_getrusage)
+#define __NR_gettimeofday 78
+__SYSCALL(__NR_gettimeofday, compat_sys_gettimeofday)
+#define __NR_settimeofday 79
+__SYSCALL(__NR_settimeofday, compat_sys_settimeofday)
+#define __NR_getgroups 80
+__SYSCALL(__NR_getgroups, sys_getgroups16)
+#define __NR_setgroups 81
+__SYSCALL(__NR_setgroups, sys_setgroups16)
+/* 82 was compat_sys_select */
+__SYSCALL(82, sys_ni_syscall)
+#define __NR_symlink 83
+__SYSCALL(__NR_symlink, sys_symlink)
+/* 84 was sys_lstat */
+__SYSCALL(84, sys_ni_syscall)
+#define __NR_readlink 85
+__SYSCALL(__NR_readlink, sys_readlink)
+#define __NR_uselib 86
+__SYSCALL(__NR_uselib, sys_uselib)
+#define __NR_swapon 87
+__SYSCALL(__NR_swapon, sys_swapon)
+#define __NR_reboot 88
+__SYSCALL(__NR_reboot, sys_reboot)
+/* 89 was sys_readdir */
+__SYSCALL(89, sys_ni_syscall)
+/* 90 was sys_mmap */
+__SYSCALL(90, sys_ni_syscall)
+#define __NR_munmap 91
+__SYSCALL(__NR_munmap, sys_munmap)
+#define __NR_truncate 92
+__SYSCALL(__NR_truncate, compat_sys_truncate)
+#define __NR_ftruncate 93
+__SYSCALL(__NR_ftruncate, compat_sys_ftruncate)
+#define __NR_fchmod 94
+__SYSCALL(__NR_fchmod, sys_fchmod)
+#define __NR_fchown 95
+__SYSCALL(__NR_fchown, sys_fchown16)
+#define __NR_getpriority 96
+__SYSCALL(__NR_getpriority, sys_getpriority)
+#define __NR_setpriority 97
+__SYSCALL(__NR_setpriority, sys_setpriority)
+/* 98 was sys_profil */
+__SYSCALL(98, sys_ni_syscall)
+#define __NR_statfs 99
+__SYSCALL(__NR_statfs, compat_sys_statfs)
+#define __NR_fstatfs 100
+__SYSCALL(__NR_fstatfs, compat_sys_fstatfs)
+/* 101 was sys_ioperm */
+__SYSCALL(101, sys_ni_syscall)
+/* 102 was sys_socketcall */
+__SYSCALL(102, sys_ni_syscall)
+#define __NR_syslog 103
+__SYSCALL(__NR_syslog, sys_syslog)
+#define __NR_setitimer 104
+__SYSCALL(__NR_setitimer, compat_sys_setitimer)
+#define __NR_getitimer 105
+__SYSCALL(__NR_getitimer, compat_sys_getitimer)
+#define __NR_stat 106
+__SYSCALL(__NR_stat, compat_sys_newstat)
+#define __NR_lstat 107
+__SYSCALL(__NR_lstat, compat_sys_newlstat)
+#define __NR_fstat 108
+__SYSCALL(__NR_fstat, compat_sys_newfstat)
+/* 109 was sys_uname */
+__SYSCALL(109, sys_ni_syscall)
+/* 110 was sys_iopl */
+__SYSCALL(110, sys_ni_syscall)
+#define __NR_vhangup 111
+__SYSCALL(__NR_vhangup, sys_vhangup)
+/* 112 was sys_idle */
+__SYSCALL(112, sys_ni_syscall)
+/* 113 was sys_syscall */
+__SYSCALL(113, sys_ni_syscall)
+#define __NR_wait4 114
+__SYSCALL(__NR_wait4, compat_sys_wait4)
+#define __NR_swapoff 115
+__SYSCALL(__NR_swapoff, sys_swapoff)
+#define __NR_sysinfo 116
+__SYSCALL(__NR_sysinfo, compat_sys_sysinfo)
+/* 117 was sys_ipc */
+__SYSCALL(117, sys_ni_syscall)
+#define __NR_fsync 118
+__SYSCALL(__NR_fsync, sys_fsync)
+#define __NR_sigreturn 119
+__SYSCALL(__NR_sigreturn, compat_sys_sigreturn_wrapper)
+#define __NR_clone 120
+__SYSCALL(__NR_clone, sys_clone)
+#define __NR_setdomainname 121
+__SYSCALL(__NR_setdomainname, sys_setdomainname)
+#define __NR_uname 122
+__SYSCALL(__NR_uname, sys_newuname)
+/* 123 was sys_modify_ldt */
+__SYSCALL(123, sys_ni_syscall)
+#define __NR_adjtimex 124
+__SYSCALL(__NR_adjtimex, compat_sys_adjtimex)
+#define __NR_mprotect 125
+__SYSCALL(__NR_mprotect, sys_mprotect)
+#define __NR_sigprocmask 126
+__SYSCALL(__NR_sigprocmask, compat_sys_sigprocmask)
+/* 127 was sys_create_module */
+__SYSCALL(127, sys_ni_syscall)
+#define __NR_init_module 128
+__SYSCALL(__NR_init_module, sys_init_module)
+#define __NR_delete_module 129
+__SYSCALL(__NR_delete_module, sys_delete_module)
+/* 130 was sys_get_kernel_syms */
+__SYSCALL(130, sys_ni_syscall)
+#define __NR_quotactl 131
+__SYSCALL(__NR_quotactl, sys_quotactl)
+#define __NR_getpgid 132
+__SYSCALL(__NR_getpgid, sys_getpgid)
+#define __NR_fchdir 133
+__SYSCALL(__NR_fchdir, sys_fchdir)
+#define __NR_bdflush 134
+__SYSCALL(__NR_bdflush, sys_bdflush)
+#define __NR_sysfs 135
+__SYSCALL(__NR_sysfs, sys_sysfs)
+#define __NR_personality 136
+__SYSCALL(__NR_personality, sys_personality)
+/* 137 was sys_afs_syscall */
+__SYSCALL(137, sys_ni_syscall)
+#define __NR_setfsuid 138
+__SYSCALL(__NR_setfsuid, sys_setfsuid16)
+#define __NR_setfsgid 139
+__SYSCALL(__NR_setfsgid, sys_setfsgid16)
+#define __NR__llseek 140
+__SYSCALL(__NR__llseek, sys_llseek)
+#define __NR_getdents 141
+__SYSCALL(__NR_getdents, compat_sys_getdents)
+#define __NR__newselect 142
+__SYSCALL(__NR__newselect, compat_sys_select)
+#define __NR_flock 143
+__SYSCALL(__NR_flock, sys_flock)
+#define __NR_msync 144
+__SYSCALL(__NR_msync, sys_msync)
+#define __NR_readv 145
+__SYSCALL(__NR_readv, compat_sys_readv)
+#define __NR_writev 146
+__SYSCALL(__NR_writev, compat_sys_writev)
+#define __NR_getsid 147
+__SYSCALL(__NR_getsid, sys_getsid)
+#define __NR_fdatasync 148
+__SYSCALL(__NR_fdatasync, sys_fdatasync)
+#define __NR__sysctl 149
+__SYSCALL(__NR__sysctl, compat_sys_sysctl)
+#define __NR_mlock 150
+__SYSCALL(__NR_mlock, sys_mlock)
+#define __NR_munlock 151
+__SYSCALL(__NR_munlock, sys_munlock)
+#define __NR_mlockall 152
+__SYSCALL(__NR_mlockall, sys_mlockall)
+#define __NR_munlockall 153
+__SYSCALL(__NR_munlockall, sys_munlockall)
+#define __NR_sched_setparam 154
+__SYSCALL(__NR_sched_setparam, sys_sched_setparam)
+#define __NR_sched_getparam 155
+__SYSCALL(__NR_sched_getparam, sys_sched_getparam)
+#define __NR_sched_setscheduler 156
+__SYSCALL(__NR_sched_setscheduler, sys_sched_setscheduler)
+#define __NR_sched_getscheduler 157
+__SYSCALL(__NR_sched_getscheduler, sys_sched_getscheduler)
+#define __NR_sched_yield 158
+__SYSCALL(__NR_sched_yield, sys_sched_yield)
+#define __NR_sched_get_priority_max 159
+__SYSCALL(__NR_sched_get_priority_max, sys_sched_get_priority_max)
+#define __NR_sched_get_priority_min 160
+__SYSCALL(__NR_sched_get_priority_min, sys_sched_get_priority_min)
+#define __NR_sched_rr_get_interval 161
+__SYSCALL(__NR_sched_rr_get_interval, compat_sys_sched_rr_get_interval)
+#define __NR_nanosleep 162
+__SYSCALL(__NR_nanosleep, compat_sys_nanosleep)
+#define __NR_mremap 163
+__SYSCALL(__NR_mremap, sys_mremap)
+#define __NR_setresuid 164
+__SYSCALL(__NR_setresuid, sys_setresuid16)
+#define __NR_getresuid 165
+__SYSCALL(__NR_getresuid, sys_getresuid16)
+/* 166 was sys_vm86 */
+__SYSCALL(166, sys_ni_syscall)
+/* 167 was sys_query_module */
+__SYSCALL(167, sys_ni_syscall)
+#define __NR_poll 168
+__SYSCALL(__NR_poll, sys_poll)
+#define __NR_nfsservctl 169
+__SYSCALL(__NR_nfsservctl, sys_ni_syscall)
+#define __NR_setresgid 170
+__SYSCALL(__NR_setresgid, sys_setresgid16)
+#define __NR_getresgid 171
+__SYSCALL(__NR_getresgid, sys_getresgid16)
+#define __NR_prctl 172
+__SYSCALL(__NR_prctl, sys_prctl)
+#define __NR_rt_sigreturn 173
+__SYSCALL(__NR_rt_sigreturn, compat_sys_rt_sigreturn_wrapper)
+#define __NR_rt_sigaction 174
+__SYSCALL(__NR_rt_sigaction, compat_sys_rt_sigaction)
+#define __NR_rt_sigprocmask 175
+__SYSCALL(__NR_rt_sigprocmask, compat_sys_rt_sigprocmask)
+#define __NR_rt_sigpending 176
+__SYSCALL(__NR_rt_sigpending, compat_sys_rt_sigpending)
+#define __NR_rt_sigtimedwait 177
+__SYSCALL(__NR_rt_sigtimedwait, compat_sys_rt_sigtimedwait)
+#define __NR_rt_sigqueueinfo 178
+__SYSCALL(__NR_rt_sigqueueinfo, compat_sys_rt_sigqueueinfo)
+#define __NR_rt_sigsuspend 179
+__SYSCALL(__NR_rt_sigsuspend, compat_sys_rt_sigsuspend)
+#define __NR_pread64 180
+__SYSCALL(__NR_pread64, compat_sys_pread64_wrapper)
+#define __NR_pwrite64 181
+__SYSCALL(__NR_pwrite64, compat_sys_pwrite64_wrapper)
+#define __NR_chown 182
+__SYSCALL(__NR_chown, sys_chown16)
+#define __NR_getcwd 183
+__SYSCALL(__NR_getcwd, sys_getcwd)
+#define __NR_capget 184
+__SYSCALL(__NR_capget, sys_capget)
+#define __NR_capset 185
+__SYSCALL(__NR_capset, sys_capset)
+#define __NR_sigaltstack 186
+__SYSCALL(__NR_sigaltstack, compat_sys_sigaltstack)
+#define __NR_sendfile 187
+__SYSCALL(__NR_sendfile, compat_sys_sendfile)
+/* 188 reserved */
+__SYSCALL(188, sys_ni_syscall)
+/* 189 reserved */
+__SYSCALL(189, sys_ni_syscall)
+#define __NR_vfork 190
+__SYSCALL(__NR_vfork, sys_vfork)
+#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
+__SYSCALL(__NR_ugetrlimit, compat_sys_getrlimit) /* SuS compliant getrlimit */
+#define __NR_mmap2 192
+__SYSCALL(__NR_mmap2, sys_mmap_pgoff)
+#define __NR_truncate64 193
+__SYSCALL(__NR_truncate64, compat_sys_truncate64_wrapper)
+#define __NR_ftruncate64 194
+__SYSCALL(__NR_ftruncate64, compat_sys_ftruncate64_wrapper)
+#define __NR_stat64 195
+__SYSCALL(__NR_stat64, sys_stat64)
+#define __NR_lstat64 196
+__SYSCALL(__NR_lstat64, sys_lstat64)
+#define __NR_fstat64 197
+__SYSCALL(__NR_fstat64, sys_fstat64)
+#define __NR_lchown32 198
+__SYSCALL(__NR_lchown32, sys_lchown)
+#define __NR_getuid32 199
+__SYSCALL(__NR_getuid32, sys_getuid)
+#define __NR_getgid32 200
+__SYSCALL(__NR_getgid32, sys_getgid)
+#define __NR_geteuid32 201
+__SYSCALL(__NR_geteuid32, sys_geteuid)
+#define __NR_getegid32 202
+__SYSCALL(__NR_getegid32, sys_getegid)
+#define __NR_setreuid32 203
+__SYSCALL(__NR_setreuid32, sys_setreuid)
+#define __NR_setregid32 204
+__SYSCALL(__NR_setregid32, sys_setregid)
+#define __NR_getgroups32 205
+__SYSCALL(__NR_getgroups32, sys_getgroups)
+#define __NR_setgroups32 206
+__SYSCALL(__NR_setgroups32, sys_setgroups)
+#define __NR_fchown32 207
+__SYSCALL(__NR_fchown32, sys_fchown)
+#define __NR_setresuid32 208
+__SYSCALL(__NR_setresuid32, sys_setresuid)
+#define __NR_getresuid32 209
+__SYSCALL(__NR_getresuid32, sys_getresuid)
+#define __NR_setresgid32 210
+__SYSCALL(__NR_setresgid32, sys_setresgid)
+#define __NR_getresgid32 211
+__SYSCALL(__NR_getresgid32, sys_getresgid)
+#define __NR_chown32 212
+__SYSCALL(__NR_chown32, sys_chown)
+#define __NR_setuid32 213
+__SYSCALL(__NR_setuid32, sys_setuid)
+#define __NR_setgid32 214
+__SYSCALL(__NR_setgid32, sys_setgid)
+#define __NR_setfsuid32 215
+__SYSCALL(__NR_setfsuid32, sys_setfsuid)
+#define __NR_setfsgid32 216
+__SYSCALL(__NR_setfsgid32, sys_setfsgid)
+#define __NR_getdents64 217
+__SYSCALL(__NR_getdents64, compat_sys_getdents64)
+#define __NR_pivot_root 218
+__SYSCALL(__NR_pivot_root, sys_pivot_root)
+#define __NR_mincore 219
+__SYSCALL(__NR_mincore, sys_mincore)
+#define __NR_madvise 220
+__SYSCALL(__NR_madvise, sys_madvise)
+#define __NR_fcntl64 221
+__SYSCALL(__NR_fcntl64, compat_sys_fcntl64)
+/* 222 for tux */
+__SYSCALL(222, sys_ni_syscall)
+/* 223 is unused */
+__SYSCALL(223, sys_ni_syscall)
+#define __NR_gettid 224
+__SYSCALL(__NR_gettid, sys_gettid)
+#define __NR_readahead 225
+__SYSCALL(__NR_readahead, compat_sys_readahead_wrapper)
+#define __NR_setxattr 226
+__SYSCALL(__NR_setxattr, sys_setxattr)
+#define __NR_lsetxattr 227
+__SYSCALL(__NR_lsetxattr, sys_lsetxattr)
+#define __NR_fsetxattr 228
+__SYSCALL(__NR_fsetxattr, sys_fsetxattr)
+#define __NR_getxattr 229
+__SYSCALL(__NR_getxattr, sys_getxattr)
+#define __NR_lgetxattr 230
+__SYSCALL(__NR_lgetxattr, sys_lgetxattr)
+#define __NR_fgetxattr 231
+__SYSCALL(__NR_fgetxattr, sys_fgetxattr)
+#define __NR_listxattr 232
+__SYSCALL(__NR_listxattr, sys_listxattr)
+#define __NR_llistxattr 233
+__SYSCALL(__NR_llistxattr, sys_llistxattr)
+#define __NR_flistxattr 234
+__SYSCALL(__NR_flistxattr, sys_flistxattr)
+#define __NR_removexattr 235
+__SYSCALL(__NR_removexattr, sys_removexattr)
+#define __NR_lremovexattr 236
+__SYSCALL(__NR_lremovexattr, sys_lremovexattr)
+#define __NR_fremovexattr 237
+__SYSCALL(__NR_fremovexattr, sys_fremovexattr)
+#define __NR_tkill 238
+__SYSCALL(__NR_tkill, sys_tkill)
+#define __NR_sendfile64 239
+__SYSCALL(__NR_sendfile64, sys_sendfile64)
+#define __NR_futex 240
+__SYSCALL(__NR_futex, compat_sys_futex)
+#define __NR_sched_setaffinity 241
+__SYSCALL(__NR_sched_setaffinity, compat_sys_sched_setaffinity)
+#define __NR_sched_getaffinity 242
+__SYSCALL(__NR_sched_getaffinity, compat_sys_sched_getaffinity)
+#define __NR_io_setup 243
+__SYSCALL(__NR_io_setup, compat_sys_io_setup)
+#define __NR_io_destroy 244
+__SYSCALL(__NR_io_destroy, sys_io_destroy)
+#define __NR_io_getevents 245
+__SYSCALL(__NR_io_getevents, compat_sys_io_getevents)
+#define __NR_io_submit 246
+__SYSCALL(__NR_io_submit, compat_sys_io_submit)
+#define __NR_io_cancel 247
+__SYSCALL(__NR_io_cancel, sys_io_cancel)
+#define __NR_exit_group 248
+__SYSCALL(__NR_exit_group, sys_exit_group)
+#define __NR_lookup_dcookie 249
+__SYSCALL(__NR_lookup_dcookie, compat_sys_lookup_dcookie)
+#define __NR_epoll_create 250
+__SYSCALL(__NR_epoll_create, sys_epoll_create)
+#define __NR_epoll_ctl 251
+__SYSCALL(__NR_epoll_ctl, sys_epoll_ctl)
+#define __NR_epoll_wait 252
+__SYSCALL(__NR_epoll_wait, sys_epoll_wait)
+#define __NR_remap_file_pages 253
+__SYSCALL(__NR_remap_file_pages, sys_remap_file_pages)
+/* 254 for set_thread_area */
+__SYSCALL(254, sys_ni_syscall)
+/* 255 for get_thread_area */
+__SYSCALL(255, sys_ni_syscall)
+#define __NR_set_tid_address 256
+__SYSCALL(__NR_set_tid_address, sys_set_tid_address)
+#define __NR_timer_create 257
+__SYSCALL(__NR_timer_create, compat_sys_timer_create)
+#define __NR_timer_settime 258
+__SYSCALL(__NR_timer_settime, compat_sys_timer_settime)
+#define __NR_timer_gettime 259
+__SYSCALL(__NR_timer_gettime, compat_sys_timer_gettime)
+#define __NR_timer_getoverrun 260
+__SYSCALL(__NR_timer_getoverrun, sys_timer_getoverrun)
+#define __NR_timer_delete 261
+__SYSCALL(__NR_timer_delete, sys_timer_delete)
+#define __NR_clock_settime 262
+__SYSCALL(__NR_clock_settime, compat_sys_clock_settime)
+#define __NR_clock_gettime 263
+__SYSCALL(__NR_clock_gettime, compat_sys_clock_gettime)
+#define __NR_clock_getres 264
+__SYSCALL(__NR_clock_getres, compat_sys_clock_getres)
+#define __NR_clock_nanosleep 265
+__SYSCALL(__NR_clock_nanosleep, compat_sys_clock_nanosleep)
+#define __NR_statfs64 266
+__SYSCALL(__NR_statfs64, compat_sys_statfs64_wrapper)
+#define __NR_fstatfs64 267
+__SYSCALL(__NR_fstatfs64, compat_sys_fstatfs64_wrapper)
+#define __NR_tgkill 268
+__SYSCALL(__NR_tgkill, sys_tgkill)
+#define __NR_utimes 269
+__SYSCALL(__NR_utimes, compat_sys_utimes)
+#define __NR_arm_fadvise64_64 270
+__SYSCALL(__NR_arm_fadvise64_64, compat_sys_fadvise64_64_wrapper)
+#define __NR_pciconfig_iobase 271
+__SYSCALL(__NR_pciconfig_iobase, sys_pciconfig_iobase)
+#define __NR_pciconfig_read 272
+__SYSCALL(__NR_pciconfig_read, sys_pciconfig_read)
+#define __NR_pciconfig_write 273
+__SYSCALL(__NR_pciconfig_write, sys_pciconfig_write)
+#define __NR_mq_open 274
+__SYSCALL(__NR_mq_open, compat_sys_mq_open)
+#define __NR_mq_unlink 275
+__SYSCALL(__NR_mq_unlink, sys_mq_unlink)
+#define __NR_mq_timedsend 276
+__SYSCALL(__NR_mq_timedsend, compat_sys_mq_timedsend)
+#define __NR_mq_timedreceive 277
+__SYSCALL(__NR_mq_timedreceive, compat_sys_mq_timedreceive)
+#define __NR_mq_notify 278
+__SYSCALL(__NR_mq_notify, compat_sys_mq_notify)
+#define __NR_mq_getsetattr 279
+__SYSCALL(__NR_mq_getsetattr, compat_sys_mq_getsetattr)
+#define __NR_waitid 280
+__SYSCALL(__NR_waitid, compat_sys_waitid)
+#define __NR_socket 281
+__SYSCALL(__NR_socket, sys_socket)
+#define __NR_bind 282
+__SYSCALL(__NR_bind, sys_bind)
+#define __NR_connect 283
+__SYSCALL(__NR_connect, sys_connect)
+#define __NR_listen 284
+__SYSCALL(__NR_listen, sys_listen)
+#define __NR_accept 285
+__SYSCALL(__NR_accept, sys_accept)
+#define __NR_getsockname 286
+__SYSCALL(__NR_getsockname, sys_getsockname)
+#define __NR_getpeername 287
+__SYSCALL(__NR_getpeername, sys_getpeername)
+#define __NR_socketpair 288
+__SYSCALL(__NR_socketpair, sys_socketpair)
+#define __NR_send 289
+__SYSCALL(__NR_send, sys_send)
+#define __NR_sendto 290
+__SYSCALL(__NR_sendto, sys_sendto)
+#define __NR_recv 291
+__SYSCALL(__NR_recv, compat_sys_recv)
+#define __NR_recvfrom 292
+__SYSCALL(__NR_recvfrom, compat_sys_recvfrom)
+#define __NR_shutdown 293
+__SYSCALL(__NR_shutdown, sys_shutdown)
+#define __NR_setsockopt 294
+__SYSCALL(__NR_setsockopt, compat_sys_setsockopt)
+#define __NR_getsockopt 295
+__SYSCALL(__NR_getsockopt, compat_sys_getsockopt)
+#define __NR_sendmsg 296
+__SYSCALL(__NR_sendmsg, compat_sys_sendmsg)
+#define __NR_recvmsg 297
+__SYSCALL(__NR_recvmsg, compat_sys_recvmsg)
+#define __NR_semop 298
+__SYSCALL(__NR_semop, sys_semop)
+#define __NR_semget 299
+__SYSCALL(__NR_semget, sys_semget)
+#define __NR_semctl 300
+__SYSCALL(__NR_semctl, compat_sys_semctl)
+#define __NR_msgsnd 301
+__SYSCALL(__NR_msgsnd, compat_sys_msgsnd)
+#define __NR_msgrcv 302
+__SYSCALL(__NR_msgrcv, compat_sys_msgrcv)
+#define __NR_msgget 303
+__SYSCALL(__NR_msgget, sys_msgget)
+#define __NR_msgctl 304
+__SYSCALL(__NR_msgctl, compat_sys_msgctl)
+#define __NR_shmat 305
+__SYSCALL(__NR_shmat, compat_sys_shmat)
+#define __NR_shmdt 306
+__SYSCALL(__NR_shmdt, sys_shmdt)
+#define __NR_shmget 307
+__SYSCALL(__NR_shmget, sys_shmget)
+#define __NR_shmctl 308
+__SYSCALL(__NR_shmctl, compat_sys_shmctl)
+#define __NR_add_key 309
+__SYSCALL(__NR_add_key, sys_add_key)
+#define __NR_request_key 310
+__SYSCALL(__NR_request_key, sys_request_key)
+#define __NR_keyctl 311
+__SYSCALL(__NR_keyctl, compat_sys_keyctl)
+#define __NR_semtimedop 312
+__SYSCALL(__NR_semtimedop, compat_sys_semtimedop)
+#define __NR_vserver 313
+__SYSCALL(__NR_vserver, sys_ni_syscall)
+#define __NR_ioprio_set 314
+__SYSCALL(__NR_ioprio_set, sys_ioprio_set)
+#define __NR_ioprio_get 315
+__SYSCALL(__NR_ioprio_get, sys_ioprio_get)
+#define __NR_inotify_init 316
+__SYSCALL(__NR_inotify_init, sys_inotify_init)
+#define __NR_inotify_add_watch 317
+__SYSCALL(__NR_inotify_add_watch, sys_inotify_add_watch)
+#define __NR_inotify_rm_watch 318
+__SYSCALL(__NR_inotify_rm_watch, sys_inotify_rm_watch)
+#define __NR_mbind 319
+__SYSCALL(__NR_mbind, compat_sys_mbind)
+#define __NR_get_mempolicy 320
+__SYSCALL(__NR_get_mempolicy, compat_sys_get_mempolicy)
+#define __NR_set_mempolicy 321
+__SYSCALL(__NR_set_mempolicy, compat_sys_set_mempolicy)
+#define __NR_openat 322
+__SYSCALL(__NR_openat, compat_sys_openat)
+#define __NR_mkdirat 323
+__SYSCALL(__NR_mkdirat, sys_mkdirat)
+#define __NR_mknodat 324
+__SYSCALL(__NR_mknodat, sys_mknodat)
+#define __NR_fchownat 325
+__SYSCALL(__NR_fchownat, sys_fchownat)
+#define __NR_futimesat 326
+__SYSCALL(__NR_futimesat, compat_sys_futimesat)
+#define __NR_fstatat64 327
+__SYSCALL(__NR_fstatat64, sys_fstatat64)
+#define __NR_unlinkat 328
+__SYSCALL(__NR_unlinkat, sys_unlinkat)
+#define __NR_renameat 329
+__SYSCALL(__NR_renameat, sys_renameat)
+#define __NR_linkat 330
+__SYSCALL(__NR_linkat, sys_linkat)
+#define __NR_symlinkat 331
+__SYSCALL(__NR_symlinkat, sys_symlinkat)
+#define __NR_readlinkat 332
+__SYSCALL(__NR_readlinkat, sys_readlinkat)
+#define __NR_fchmodat 333
+__SYSCALL(__NR_fchmodat, sys_fchmodat)
+#define __NR_faccessat 334
+__SYSCALL(__NR_faccessat, sys_faccessat)
+#define __NR_pselect6 335
+__SYSCALL(__NR_pselect6, compat_sys_pselect6)
+#define __NR_ppoll 336
+__SYSCALL(__NR_ppoll, compat_sys_ppoll)
+#define __NR_unshare 337
+__SYSCALL(__NR_unshare, sys_unshare)
+#define __NR_set_robust_list 338
+__SYSCALL(__NR_set_robust_list, compat_sys_set_robust_list)
+#define __NR_get_robust_list 339
+__SYSCALL(__NR_get_robust_list, compat_sys_get_robust_list)
+#define __NR_splice 340
+__SYSCALL(__NR_splice, sys_splice)
+#define __NR_sync_file_range2 341
+__SYSCALL(__NR_sync_file_range2, compat_sys_sync_file_range2_wrapper)
+#define __NR_tee 342
+__SYSCALL(__NR_tee, sys_tee)
+#define __NR_vmsplice 343
+__SYSCALL(__NR_vmsplice, compat_sys_vmsplice)
+#define __NR_move_pages 344
+__SYSCALL(__NR_move_pages, compat_sys_move_pages)
+#define __NR_getcpu 345
+__SYSCALL(__NR_getcpu, sys_getcpu)
+#define __NR_epoll_pwait 346
+__SYSCALL(__NR_epoll_pwait, compat_sys_epoll_pwait)
+#define __NR_kexec_load 347
+__SYSCALL(__NR_kexec_load, compat_sys_kexec_load)
+#define __NR_utimensat 348
+__SYSCALL(__NR_utimensat, compat_sys_utimensat)
+#define __NR_signalfd 349
+__SYSCALL(__NR_signalfd, compat_sys_signalfd)
+#define __NR_timerfd_create 350
+__SYSCALL(__NR_timerfd_create, sys_timerfd_create)
+#define __NR_eventfd 351
+__SYSCALL(__NR_eventfd, sys_eventfd)
+#define __NR_fallocate 352
+__SYSCALL(__NR_fallocate, compat_sys_fallocate_wrapper)
+#define __NR_timerfd_settime 353
+__SYSCALL(__NR_timerfd_settime, compat_sys_timerfd_settime)
+#define __NR_timerfd_gettime 354
+__SYSCALL(__NR_timerfd_gettime, compat_sys_timerfd_gettime)
+#define __NR_signalfd4 355
+__SYSCALL(__NR_signalfd4, compat_sys_signalfd4)
+#define __NR_eventfd2 356
+__SYSCALL(__NR_eventfd2, sys_eventfd2)
+#define __NR_epoll_create1 357
+__SYSCALL(__NR_epoll_create1, sys_epoll_create1)
+#define __NR_dup3 358
+__SYSCALL(__NR_dup3, sys_dup3)
+#define __NR_pipe2 359
+__SYSCALL(__NR_pipe2, sys_pipe2)
+#define __NR_inotify_init1 360
+__SYSCALL(__NR_inotify_init1, sys_inotify_init1)
+#define __NR_preadv 361
+__SYSCALL(__NR_preadv, compat_sys_preadv)
+#define __NR_pwritev 362
+__SYSCALL(__NR_pwritev, compat_sys_pwritev)
+#define __NR_rt_tgsigqueueinfo 363
+__SYSCALL(__NR_rt_tgsigqueueinfo, compat_sys_rt_tgsigqueueinfo)
+#define __NR_perf_event_open 364
+__SYSCALL(__NR_perf_event_open, sys_perf_event_open)
+#define __NR_recvmmsg 365
+__SYSCALL(__NR_recvmmsg, compat_sys_recvmmsg)
+#define __NR_accept4 366
+__SYSCALL(__NR_accept4, sys_accept4)
+#define __NR_fanotify_init 367
+__SYSCALL(__NR_fanotify_init, sys_fanotify_init)
+#define __NR_fanotify_mark 368
+__SYSCALL(__NR_fanotify_mark, compat_sys_fanotify_mark)
+#define __NR_prlimit64 369
+__SYSCALL(__NR_prlimit64, sys_prlimit64)
+#define __NR_name_to_handle_at 370
+__SYSCALL(__NR_name_to_handle_at, sys_name_to_handle_at)
+#define __NR_open_by_handle_at 371
+__SYSCALL(__NR_open_by_handle_at, compat_sys_open_by_handle_at)
+#define __NR_clock_adjtime 372
+__SYSCALL(__NR_clock_adjtime, compat_sys_clock_adjtime)
+#define __NR_syncfs 373
+__SYSCALL(__NR_syncfs, sys_syncfs)
+#define __NR_sendmmsg 374
+__SYSCALL(__NR_sendmmsg, compat_sys_sendmmsg)
+#define __NR_setns 375
+__SYSCALL(__NR_setns, sys_setns)
+#define __NR_process_vm_readv 376
+__SYSCALL(__NR_process_vm_readv, compat_sys_process_vm_readv)
+#define __NR_process_vm_writev 377
+__SYSCALL(__NR_process_vm_writev, compat_sys_process_vm_writev)
+#define __NR_kcmp 378
+__SYSCALL(__NR_kcmp, sys_kcmp)
+#define __NR_finit_module 379
+__SYSCALL(__NR_finit_module, sys_finit_module)
+#define __NR_sched_setattr 380
+__SYSCALL(__NR_sched_setattr, sys_sched_setattr)
+#define __NR_sched_getattr 381
+__SYSCALL(__NR_sched_getattr, sys_sched_getattr)
+#define __NR_renameat2 382
+__SYSCALL(__NR_renameat2, sys_renameat2)
+2 -1
arch/arm64/kernel/Makefile
··· 15 15 arm64-obj-y := cputable.o debug-monitors.o entry.o irq.o fpsimd.o \ 16 16 entry-fpsimd.o process.o ptrace.o setup.o signal.o \ 17 17 sys.o stacktrace.o time.o traps.o io.o vdso.o \ 18 - hyp-stub.o psci.o cpu_ops.o insn.o return_address.o 18 + hyp-stub.o psci.o cpu_ops.o insn.o return_address.o \ 19 + cpuinfo.o 19 20 20 21 arm64-obj-$(CONFIG_COMPAT) += sys32.o kuser32.o signal32.o \ 21 22 sys_compat.o
+1 -1
arch/arm64/kernel/cpu_ops.c
··· 30 30 static const struct cpu_operations *supported_cpu_ops[] __initconst = { 31 31 #ifdef CONFIG_SMP 32 32 &smp_spin_table_ops, 33 - &cpu_psci_ops, 34 33 #endif 34 + &cpu_psci_ops, 35 35 NULL, 36 36 }; 37 37
+192
arch/arm64/kernel/cpuinfo.c
··· 1 + /* 2 + * Record and handle CPU attributes. 3 + * 4 + * Copyright (C) 2014 ARM Ltd. 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + #include <asm/arch_timer.h> 18 + #include <asm/cachetype.h> 19 + #include <asm/cpu.h> 20 + #include <asm/cputype.h> 21 + 22 + #include <linux/bitops.h> 23 + #include <linux/init.h> 24 + #include <linux/kernel.h> 25 + #include <linux/printk.h> 26 + #include <linux/smp.h> 27 + 28 + /* 29 + * In case the boot CPU is hotpluggable, we record its initial state and 30 + * current state separately. Certain system registers may contain different 31 + * values depending on configuration at or after reset. 
32 + */ 33 + DEFINE_PER_CPU(struct cpuinfo_arm64, cpu_data); 34 + static struct cpuinfo_arm64 boot_cpu_data; 35 + 36 + static char *icache_policy_str[] = { 37 + [ICACHE_POLICY_RESERVED] = "RESERVED/UNKNOWN", 38 + [ICACHE_POLICY_AIVIVT] = "AIVIVT", 39 + [ICACHE_POLICY_VIPT] = "VIPT", 40 + [ICACHE_POLICY_PIPT] = "PIPT", 41 + }; 42 + 43 + unsigned long __icache_flags; 44 + 45 + static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info) 46 + { 47 + unsigned int cpu = smp_processor_id(); 48 + u32 l1ip = CTR_L1IP(info->reg_ctr); 49 + 50 + if (l1ip != ICACHE_POLICY_PIPT) 51 + set_bit(ICACHEF_ALIASING, &__icache_flags); 52 + if (l1ip == ICACHE_POLICY_AIVIVT) 53 + set_bit(ICACHEF_AIVIVT, &__icache_flags); 54 + 55 + pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu); 56 + } 57 + 58 + static int check_reg_mask(char *name, u64 mask, u64 boot, u64 cur, int cpu) 59 + { 60 + if ((boot & mask) == (cur & mask)) 61 + return 0; 62 + 63 + pr_warn("SANITY CHECK: Unexpected variation in %s. Boot CPU: %#016lx, CPU%d: %#016lx\n", 64 + name, (unsigned long)boot, cpu, (unsigned long)cur); 65 + 66 + return 1; 67 + } 68 + 69 + #define CHECK_MASK(field, mask, boot, cur, cpu) \ 70 + check_reg_mask(#field, mask, (boot)->reg_ ## field, (cur)->reg_ ## field, cpu) 71 + 72 + #define CHECK(field, boot, cur, cpu) \ 73 + CHECK_MASK(field, ~0ULL, boot, cur, cpu) 74 + 75 + /* 76 + * Verify that CPUs don't have unexpected differences that will cause problems. 77 + */ 78 + static void cpuinfo_sanity_check(struct cpuinfo_arm64 *cur) 79 + { 80 + unsigned int cpu = smp_processor_id(); 81 + struct cpuinfo_arm64 *boot = &boot_cpu_data; 82 + unsigned int diff = 0; 83 + 84 + /* 85 + * The kernel can handle differing I-cache policies, but otherwise 86 + * caches should look identical. Userspace JITs will make use of 87 + * *minLine. 88 + */ 89 + diff |= CHECK_MASK(ctr, 0xffff3fff, boot, cur, cpu); 90 + 91 + /* 92 + * Userspace may perform DC ZVA instructions.
Mismatched block sizes 93 + * could result in too much or too little memory being zeroed if a 94 + * process is preempted and migrated between CPUs. 95 + */ 96 + diff |= CHECK(dczid, boot, cur, cpu); 97 + 98 + /* If different, timekeeping will be broken (especially with KVM) */ 99 + diff |= CHECK(cntfrq, boot, cur, cpu); 100 + 101 + /* 102 + * Even in big.LITTLE, processors should be identical instruction-set 103 + * wise. 104 + */ 105 + diff |= CHECK(id_aa64isar0, boot, cur, cpu); 106 + diff |= CHECK(id_aa64isar1, boot, cur, cpu); 107 + 108 + /* 109 + * Differing PARange support is fine as long as all peripherals and 110 + * memory are mapped within the minimum PARange of all CPUs. 111 + * Linux should not care about secure memory. 112 + * ID_AA64MMFR1 is currently RES0. 113 + */ 114 + diff |= CHECK_MASK(id_aa64mmfr0, 0xffffffffffff0ff0, boot, cur, cpu); 115 + diff |= CHECK(id_aa64mmfr1, boot, cur, cpu); 116 + 117 + /* 118 + * EL3 is not our concern. 119 + * ID_AA64PFR1 is currently RES0. 120 + */ 121 + diff |= CHECK_MASK(id_aa64pfr0, 0xffffffffffff0fff, boot, cur, cpu); 122 + diff |= CHECK(id_aa64pfr1, boot, cur, cpu); 123 + 124 + /* 125 + * If we have AArch32, we care about 32-bit features for compat. These 126 + * registers should be RES0 otherwise. 127 + */ 128 + diff |= CHECK(id_isar0, boot, cur, cpu); 129 + diff |= CHECK(id_isar1, boot, cur, cpu); 130 + diff |= CHECK(id_isar2, boot, cur, cpu); 131 + diff |= CHECK(id_isar3, boot, cur, cpu); 132 + diff |= CHECK(id_isar4, boot, cur, cpu); 133 + diff |= CHECK(id_isar5, boot, cur, cpu); 134 + diff |= CHECK(id_mmfr0, boot, cur, cpu); 135 + diff |= CHECK(id_mmfr1, boot, cur, cpu); 136 + diff |= CHECK(id_mmfr2, boot, cur, cpu); 137 + diff |= CHECK(id_mmfr3, boot, cur, cpu); 138 + diff |= CHECK(id_pfr0, boot, cur, cpu); 139 + diff |= CHECK(id_pfr1, boot, cur, cpu); 140 + 141 + /* 142 + * Mismatched CPU features are a recipe for disaster. Don't even 143 + * pretend to support them. 
144 + */ 145 + WARN_TAINT_ONCE(diff, TAINT_CPU_OUT_OF_SPEC, 146 + "Unsupported CPU feature variation."); 147 + } 148 + 149 + static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) 150 + { 151 + info->reg_cntfrq = arch_timer_get_cntfrq(); 152 + info->reg_ctr = read_cpuid_cachetype(); 153 + info->reg_dczid = read_cpuid(DCZID_EL0); 154 + info->reg_midr = read_cpuid_id(); 155 + 156 + info->reg_id_aa64isar0 = read_cpuid(ID_AA64ISAR0_EL1); 157 + info->reg_id_aa64isar1 = read_cpuid(ID_AA64ISAR1_EL1); 158 + info->reg_id_aa64mmfr0 = read_cpuid(ID_AA64MMFR0_EL1); 159 + info->reg_id_aa64mmfr1 = read_cpuid(ID_AA64MMFR1_EL1); 160 + info->reg_id_aa64pfr0 = read_cpuid(ID_AA64PFR0_EL1); 161 + info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1); 162 + 163 + info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1); 164 + info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1); 165 + info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1); 166 + info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1); 167 + info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1); 168 + info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1); 169 + info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1); 170 + info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1); 171 + info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1); 172 + info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1); 173 + info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1); 174 + info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1); 175 + 176 + cpuinfo_detect_icache_policy(info); 177 + } 178 + 179 + void cpuinfo_store_cpu(void) 180 + { 181 + struct cpuinfo_arm64 *info = this_cpu_ptr(&cpu_data); 182 + __cpuinfo_store_cpu(info); 183 + cpuinfo_sanity_check(info); 184 + } 185 + 186 + void __init cpuinfo_store_boot_cpu(void) 187 + { 188 + struct cpuinfo_arm64 *info = &per_cpu(cpu_data, 0); 189 + __cpuinfo_store_cpu(info); 190 + 191 + boot_cpu_data = *info; 192 + }
+11 -11
arch/arm64/kernel/debug-monitors.c
··· 315 315 { 316 316 siginfo_t info; 317 317 318 - if (call_break_hook(regs, esr) == DBG_HOOK_HANDLED) 319 - return 0; 318 + if (user_mode(regs)) { 319 + info = (siginfo_t) { 320 + .si_signo = SIGTRAP, 321 + .si_errno = 0, 322 + .si_code = TRAP_BRKPT, 323 + .si_addr = (void __user *)instruction_pointer(regs), 324 + }; 320 325 321 - if (!user_mode(regs)) 326 + force_sig_info(SIGTRAP, &info, current); 327 + } else if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED) { 328 + pr_warning("Unexpected kernel BRK exception at EL1\n"); 322 329 return -EFAULT; 330 + } 323 331 324 - info = (siginfo_t) { 325 - .si_signo = SIGTRAP, 326 - .si_errno = 0, 327 - .si_code = TRAP_BRKPT, 328 - .si_addr = (void __user *)instruction_pointer(regs), 329 - }; 330 - 331 - force_sig_info(SIGTRAP, &info, current); 332 332 return 0; 333 333 } 334 334
+1 -1
arch/arm64/kernel/entry-fpsimd.S
··· 52 52 ENTRY(fpsimd_save_partial_state) 53 53 fpsimd_save_partial x0, 1, 8, 9 54 54 ret 55 - ENDPROC(fpsimd_load_partial_state) 55 + ENDPROC(fpsimd_save_partial_state) 56 56 57 57 /* 58 58 * Load the bottom n FP registers.
+49 -7
arch/arm64/kernel/entry.S
··· 27 27 #include <asm/esr.h> 28 28 #include <asm/thread_info.h> 29 29 #include <asm/unistd.h> 30 - #include <asm/unistd32.h> 30 + 31 + /* 32 + * Context tracking subsystem. Used to instrument transitions 33 + * between user and kernel mode. 34 + */ 35 + .macro ct_user_exit, syscall = 0 36 + #ifdef CONFIG_CONTEXT_TRACKING 37 + bl context_tracking_user_exit 38 + .if \syscall == 1 39 + /* 40 + * Save/restore needed during syscalls. Restore syscall arguments from 41 + * the values already saved on stack during kernel_entry. 42 + */ 43 + ldp x0, x1, [sp] 44 + ldp x2, x3, [sp, #S_X2] 45 + ldp x4, x5, [sp, #S_X4] 46 + ldp x6, x7, [sp, #S_X6] 47 + .endif 48 + #endif 49 + .endm 50 + 51 + .macro ct_user_enter 52 + #ifdef CONFIG_CONTEXT_TRACKING 53 + bl context_tracking_user_enter 54 + #endif 55 + .endm 31 56 32 57 /* 33 58 * Bad Abort numbers ··· 116 91 .macro kernel_exit, el, ret = 0 117 92 ldp x21, x22, [sp, #S_PC] // load ELR, SPSR 118 93 .if \el == 0 94 + ct_user_enter 119 95 ldr x23, [sp, #S_SP] // load return stack pointer 120 96 .endif 121 97 .if \ret ··· 379 353 lsr x24, x25, #ESR_EL1_EC_SHIFT // exception class 380 354 cmp x24, #ESR_EL1_EC_SVC64 // SVC in 64-bit state 381 355 b.eq el0_svc 382 - adr lr, ret_to_user 383 356 cmp x24, #ESR_EL1_EC_DABT_EL0 // data abort in EL0 384 357 b.eq el0_da 385 358 cmp x24, #ESR_EL1_EC_IABT_EL0 // instruction abort in EL0 ··· 407 382 lsr x24, x25, #ESR_EL1_EC_SHIFT // exception class 408 383 cmp x24, #ESR_EL1_EC_SVC32 // SVC in 32-bit state 409 384 b.eq el0_svc_compat 410 - adr lr, ret_to_user 411 385 cmp x24, #ESR_EL1_EC_DABT_EL0 // data abort in EL0 412 386 b.eq el0_da 413 387 cmp x24, #ESR_EL1_EC_IABT_EL0 // instruction abort in EL0 ··· 449 425 /* 450 426 * Data abort handling 451 427 */ 452 - mrs x0, far_el1 453 - bic x0, x0, #(0xff << 56) 428 + mrs x26, far_el1 454 429 // enable interrupts before calling the main handler 455 430 enable_dbg_and_irq 431 + ct_user_exit 432 + bic x0, x26, #(0xff << 56) 456 433 mov x1, x25 457 
434 mov x2, sp 435 + adr lr, ret_to_user 458 436 b do_mem_abort 459 437 el0_ia: 460 438 /* 461 439 * Instruction abort handling 462 440 */ 463 - mrs x0, far_el1 441 + mrs x26, far_el1 464 442 // enable interrupts before calling the main handler 465 443 enable_dbg_and_irq 444 + ct_user_exit 445 + mov x0, x26 466 446 orr x1, x25, #1 << 24 // use reserved ISS bit for instruction aborts 467 447 mov x2, sp 448 + adr lr, ret_to_user 468 449 b do_mem_abort 469 450 el0_fpsimd_acc: 470 451 /* 471 452 * Floating Point or Advanced SIMD access 472 453 */ 473 454 enable_dbg 455 + ct_user_exit 474 456 mov x0, x25 475 457 mov x1, sp 458 + adr lr, ret_to_user 476 459 b do_fpsimd_acc 477 460 el0_fpsimd_exc: 478 461 /* 479 462 * Floating Point or Advanced SIMD exception 480 463 */ 481 464 enable_dbg 465 + ct_user_exit 482 466 mov x0, x25 483 467 mov x1, sp 468 + adr lr, ret_to_user 484 469 b do_fpsimd_exc 485 470 el0_sp_pc: 486 471 /* 487 472 * Stack or PC alignment exception handling 488 473 */ 489 - mrs x0, far_el1 474 + mrs x26, far_el1 490 475 // enable interrupts before calling the main handler 491 476 enable_dbg_and_irq 477 + mov x0, x26 492 478 mov x1, x25 493 479 mov x2, sp 480 + adr lr, ret_to_user 494 481 b do_sp_pc_abort 495 482 el0_undef: 496 483 /* ··· 509 474 */ 510 475 // enable interrupts before calling the main handler 511 476 enable_dbg_and_irq 477 + ct_user_exit 512 478 mov x0, sp 479 + adr lr, ret_to_user 513 480 b do_undefinstr 514 481 el0_dbg: 515 482 /* ··· 523 486 mov x2, sp 524 487 bl do_debug_exception 525 488 enable_dbg 489 + ct_user_exit 526 490 b ret_to_user 527 491 el0_inv: 528 492 enable_dbg 493 + ct_user_exit 529 494 mov x0, sp 530 495 mov x1, #BAD_SYNC 531 496 mrs x2, esr_el1 497 + adr lr, ret_to_user 532 498 b bad_mode 533 499 ENDPROC(el0_sync) 534 500 ··· 544 504 bl trace_hardirqs_off 545 505 #endif 546 506 507 + ct_user_exit 547 508 irq_handler 548 509 549 510 #ifdef CONFIG_TRACE_IRQFLAGS ··· 649 608 el0_svc_naked: // compat entry point 650 609 
stp x0, scno, [sp, #S_ORIG_X0] // save the original x0 and syscall number 651 610 enable_dbg_and_irq 611 + ct_user_exit 1 652 612 653 613 ldr x16, [tsk, #TI_FLAGS] // check for syscall hooks 654 614 tst x16, #_TIF_SYSCALL_WORK
+76 -45
arch/arm64/kernel/head.S
··· 22 22 23 23 #include <linux/linkage.h> 24 24 #include <linux/init.h> 25 + #include <linux/irqchip/arm-gic-v3.h> 25 26 26 27 #include <asm/assembler.h> 27 28 #include <asm/ptrace.h> ··· 36 35 #include <asm/page.h> 37 36 #include <asm/virt.h> 38 37 39 - /* 40 - * swapper_pg_dir is the virtual address of the initial page table. We place 41 - * the page tables 3 * PAGE_SIZE below KERNEL_RAM_VADDR. The idmap_pg_dir has 42 - * 2 pages and is placed below swapper_pg_dir. 43 - */ 44 38 #define KERNEL_RAM_VADDR (PAGE_OFFSET + TEXT_OFFSET) 45 39 46 - #if (KERNEL_RAM_VADDR & 0xfffff) != 0x80000 47 - #error KERNEL_RAM_VADDR must start at 0xXXX80000 40 + #if (TEXT_OFFSET & 0xf) != 0 41 + #error TEXT_OFFSET must be at least 16B aligned 42 + #elif (PAGE_OFFSET & 0xfffff) != 0 43 + #error PAGE_OFFSET must be at least 2MB aligned 44 + #elif TEXT_OFFSET > 0xfffff 45 + #error TEXT_OFFSET must be less than 2MB 48 46 #endif 49 47 50 - #define SWAPPER_DIR_SIZE (3 * PAGE_SIZE) 51 - #define IDMAP_DIR_SIZE (2 * PAGE_SIZE) 52 - 53 - .globl swapper_pg_dir 54 - .equ swapper_pg_dir, KERNEL_RAM_VADDR - SWAPPER_DIR_SIZE 55 - 56 - .globl idmap_pg_dir 57 - .equ idmap_pg_dir, swapper_pg_dir - IDMAP_DIR_SIZE 58 - 59 - .macro pgtbl, ttb0, ttb1, phys 60 - add \ttb1, \phys, #TEXT_OFFSET - SWAPPER_DIR_SIZE 61 - sub \ttb0, \ttb1, #IDMAP_DIR_SIZE 48 + .macro pgtbl, ttb0, ttb1, virt_to_phys 49 + ldr \ttb1, =swapper_pg_dir 50 + ldr \ttb0, =idmap_pg_dir 51 + add \ttb1, \ttb1, \virt_to_phys 52 + add \ttb0, \ttb0, \virt_to_phys 62 53 .endm 63 54 64 55 #ifdef CONFIG_ARM64_64K_PAGES 65 56 #define BLOCK_SHIFT PAGE_SHIFT 66 57 #define BLOCK_SIZE PAGE_SIZE 58 + #define TABLE_SHIFT PMD_SHIFT 67 59 #else 68 60 #define BLOCK_SHIFT SECTION_SHIFT 69 61 #define BLOCK_SIZE SECTION_SIZE 62 + #define TABLE_SHIFT PUD_SHIFT 70 63 #endif 71 64 72 65 #define KERNEL_START KERNEL_RAM_VADDR ··· 115 120 b stext // branch to kernel start, magic 116 121 .long 0 // reserved 117 122 #endif 118 - .quad TEXT_OFFSET // Image load 
offset from start of RAM 119 - .quad 0 // reserved 120 - .quad 0 // reserved 123 + .quad _kernel_offset_le // Image load offset from start of RAM, little-endian 124 + .quad _kernel_size_le // Effective size of kernel image, little-endian 125 + .quad _kernel_flags_le // Informative flags, little-endian 121 126 .quad 0 // reserved 122 127 .quad 0 // reserved 123 128 .quad 0 // reserved ··· 290 295 msr cnthctl_el2, x0 291 296 msr cntvoff_el2, xzr // Clear virtual offset 292 297 298 + #ifdef CONFIG_ARM_GIC_V3 299 + /* GICv3 system register access */ 300 + mrs x0, id_aa64pfr0_el1 301 + ubfx x0, x0, #24, #4 302 + cmp x0, #1 303 + b.ne 3f 304 + 305 + mrs_s x0, ICC_SRE_EL2 306 + orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1 307 + orr x0, x0, #ICC_SRE_EL2_ENABLE // Set ICC_SRE_EL2.Enable==1 308 + msr_s ICC_SRE_EL2, x0 309 + isb // Make sure SRE is now set 310 + msr_s ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults 311 + 312 + 3: 313 + #endif 314 + 293 315 /* Populate ID registers. */ 294 316 mrs x0, midr_el1 295 317 mrs x1, mpidr_el1 ··· 425 413 mov x23, x0 // x23=current cpu_table 426 414 cbz x23, __error_p // invalid processor (x23=0)? 427 415 428 - pgtbl x25, x26, x24 // x25=TTBR0, x26=TTBR1 416 + pgtbl x25, x26, x28 // x25=TTBR0, x26=TTBR1 429 417 ldr x12, [x23, #CPU_INFO_SETUP] 430 418 add x12, x12, x28 // __virt_to_phys 431 419 blr x12 // initialise processor ··· 467 455 * x27 = *virtual* address to jump to upon completion 468 456 * 469 457 * other registers depend on the function called upon completion 458 + * 459 + * We align the entire function to the smallest power of two larger than it to 460 + * ensure it fits within a single block map entry. Otherwise were PHYS_OFFSET 461 + * close to the end of a 512MB or 1GB block we might require an additional 462 + * table to map the entire function. 
470 463 */ 471 - .align 6 464 + .align 4 472 465 __turn_mmu_on: 473 466 msr sctlr_el1, x0 474 467 isb ··· 496 479 .quad PAGE_OFFSET 497 480 498 481 /* 499 - * Macro to populate the PGD for the corresponding block entry in the next 500 - * level (tbl) for the given virtual address. 482 + * Macro to create a table entry to the next page. 501 483 * 502 - * Preserves: pgd, tbl, virt 484 + * tbl: page table address 485 + * virt: virtual address 486 + * shift: #imm page table shift 487 + * ptrs: #imm pointers per table page 488 + * 489 + * Preserves: virt 490 + * Corrupts: tmp1, tmp2 491 + * Returns: tbl -> next level table page address 492 + */ 493 + .macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2 494 + lsr \tmp1, \virt, #\shift 495 + and \tmp1, \tmp1, #\ptrs - 1 // table index 496 + add \tmp2, \tbl, #PAGE_SIZE 497 + orr \tmp2, \tmp2, #PMD_TYPE_TABLE // address of next table and entry type 498 + str \tmp2, [\tbl, \tmp1, lsl #3] 499 + add \tbl, \tbl, #PAGE_SIZE // next level table page 500 + .endm 501 + 502 + /* 503 + * Macro to populate the PGD (and possibily PUD) for the corresponding 504 + * block entry in the next level (tbl) for the given virtual address. 
505 + * 506 + * Preserves: tbl, next, virt 503 507 * Corrupts: tmp1, tmp2 504 508 */ 505 - .macro create_pgd_entry, pgd, tbl, virt, tmp1, tmp2 506 - lsr \tmp1, \virt, #PGDIR_SHIFT 507 - and \tmp1, \tmp1, #PTRS_PER_PGD - 1 // PGD index 508 - orr \tmp2, \tbl, #3 // PGD entry table type 509 - str \tmp2, [\pgd, \tmp1, lsl #3] 509 + .macro create_pgd_entry, tbl, virt, tmp1, tmp2 510 + create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2 511 + #if SWAPPER_PGTABLE_LEVELS == 3 512 + create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2 513 + #endif 510 514 .endm 511 515 512 516 /* ··· 560 522 * - pgd entry for fixed mappings (TTBR1) 561 523 */ 562 524 __create_page_tables: 563 - pgtbl x25, x26, x24 // idmap_pg_dir and swapper_pg_dir addresses 525 + pgtbl x25, x26, x28 // idmap_pg_dir and swapper_pg_dir addresses 564 526 mov x27, lr 565 527 566 528 /* ··· 588 550 /* 589 551 * Create the identity mapping. 590 552 */ 591 - add x0, x25, #PAGE_SIZE // section table address 553 + mov x0, x25 // idmap_pg_dir 592 554 ldr x3, =KERNEL_START 593 555 add x3, x3, x28 // __pa(KERNEL_START) 594 - create_pgd_entry x25, x0, x3, x5, x6 556 + create_pgd_entry x0, x3, x5, x6 595 557 ldr x6, =KERNEL_END 596 558 mov x5, x3 // __pa(KERNEL_START) 597 559 add x6, x6, x28 // __pa(KERNEL_END) ··· 600 562 /* 601 563 * Map the kernel image (starting with PHYS_OFFSET). 602 564 */ 603 - add x0, x26, #PAGE_SIZE // section table address 565 + mov x0, x26 // swapper_pg_dir 604 566 mov x5, #PAGE_OFFSET 605 - create_pgd_entry x26, x0, x5, x3, x6 567 + create_pgd_entry x0, x5, x3, x6 606 568 ldr x6, =KERNEL_END 607 569 mov x3, x24 // phys offset 608 570 create_block_map x0, x7, x3, x5, x6 ··· 624 586 create_block_map x0, x7, x3, x5, x6 625 587 1: 626 588 /* 627 - * Create the pgd entry for the fixed mappings. 
628 - */ 629 - ldr x5, =FIXADDR_TOP // Fixed mapping virtual address 630 - add x0, x26, #2 * PAGE_SIZE // section table address 631 - create_pgd_entry x26, x0, x5, x6, x7 632 - 633 - /* 634 589 * Since the page tables have been populated with non-cacheable 635 590 * accesses (MMU disabled), invalidate the idmap and swapper page 636 591 * tables again to remove any speculatively loaded cache lines. ··· 642 611 __switch_data: 643 612 .quad __mmap_switched 644 613 .quad __bss_start // x6 645 - .quad _end // x7 614 + .quad __bss_stop // x7 646 615 .quad processor_id // x4 647 616 .quad __fdt_pointer // x5 648 617 .quad memstart_addr // x6
+1
arch/arm64/kernel/hyp-stub.S
··· 19 19 20 20 #include <linux/init.h> 21 21 #include <linux/linkage.h> 22 + #include <linux/irqchip/arm-gic-v3.h> 22 23 23 24 #include <asm/assembler.h> 24 25 #include <asm/ptrace.h>
+62
arch/arm64/kernel/image.h
··· 1 + /* 2 + * Linker script macros to generate Image header fields. 3 + * 4 + * Copyright (C) 2014 ARM Ltd. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + #ifndef __ASM_IMAGE_H 19 + #define __ASM_IMAGE_H 20 + 21 + #ifndef LINKER_SCRIPT 22 + #error This file should only be included in vmlinux.lds.S 23 + #endif 24 + 25 + /* 26 + * There aren't any ELF relocations we can use to endian-swap values known only 27 + * at link time (e.g. the subtraction of two symbol addresses), so we must get 28 + * the linker to endian-swap certain values before emitting them. 
29 + */ 30 + #ifdef CONFIG_CPU_BIG_ENDIAN 31 + #define DATA_LE64(data) \ 32 + ((((data) & 0x00000000000000ff) << 56) | \ 33 + (((data) & 0x000000000000ff00) << 40) | \ 34 + (((data) & 0x0000000000ff0000) << 24) | \ 35 + (((data) & 0x00000000ff000000) << 8) | \ 36 + (((data) & 0x000000ff00000000) >> 8) | \ 37 + (((data) & 0x0000ff0000000000) >> 24) | \ 38 + (((data) & 0x00ff000000000000) >> 40) | \ 39 + (((data) & 0xff00000000000000) >> 56)) 40 + #else 41 + #define DATA_LE64(data) ((data) & 0xffffffffffffffff) 42 + #endif 43 + 44 + #ifdef CONFIG_CPU_BIG_ENDIAN 45 + #define __HEAD_FLAG_BE 1 46 + #else 47 + #define __HEAD_FLAG_BE 0 48 + #endif 49 + 50 + #define __HEAD_FLAGS (__HEAD_FLAG_BE << 0) 51 + 52 + /* 53 + * These will output as part of the Image header, which should be little-endian 54 + * regardless of the endianness of the kernel. While constant values could be 55 + * endian swapped in head.S, all are done here for consistency. 56 + */ 57 + #define HEAD_SYMBOLS \ 58 + _kernel_size_le = DATA_LE64(_end - _text); \ 59 + _kernel_offset_le = DATA_LE64(TEXT_OFFSET); \ 60 + _kernel_flags_le = DATA_LE64(__HEAD_FLAGS); 61 + 62 + #endif /* __ASM_IMAGE_H */
+1 -1
arch/arm64/kernel/kuser32.S
··· 28 28 * See Documentation/arm/kernel_user_helpers.txt for formal definitions. 29 29 */ 30 30 31 - #include <asm/unistd32.h> 31 + #include <asm/unistd.h> 32 32 33 33 .align 5 34 34 .globl __kuser_helper_start
+6
arch/arm64/kernel/process.c
··· 51 51 #include <asm/processor.h> 52 52 #include <asm/stacktrace.h> 53 53 54 + #ifdef CONFIG_CC_STACKPROTECTOR 55 + #include <linux/stackprotector.h> 56 + unsigned long __stack_chk_guard __read_mostly; 57 + EXPORT_SYMBOL(__stack_chk_guard); 58 + #endif 59 + 54 60 static void setup_restart(void) 55 61 { 56 62 /*
+5 -3
arch/arm64/kernel/psci.c
··· 235 235 * PSCI Function IDs for v0.2+ are well defined so use 236 236 * standard values. 237 237 */ 238 - static int psci_0_2_init(struct device_node *np) 238 + static int __init psci_0_2_init(struct device_node *np) 239 239 { 240 240 int err, ver; 241 241 ··· 296 296 /* 297 297 * PSCI < v0.2 get PSCI Function IDs via DT. 298 298 */ 299 - static int psci_0_1_init(struct device_node *np) 299 + static int __init psci_0_1_init(struct device_node *np) 300 300 { 301 301 u32 id; 302 302 int err; ··· 434 434 return 0; 435 435 } 436 436 #endif 437 + #endif 437 438 438 439 const struct cpu_operations cpu_psci_ops = { 439 440 .name = "psci", 441 + #ifdef CONFIG_SMP 440 442 .cpu_init = cpu_psci_cpu_init, 441 443 .cpu_prepare = cpu_psci_cpu_prepare, 442 444 .cpu_boot = cpu_psci_cpu_boot, ··· 447 445 .cpu_die = cpu_psci_cpu_die, 448 446 .cpu_kill = cpu_psci_cpu_kill, 449 447 #endif 448 + #endif 450 449 }; 451 450 452 - #endif
+11
arch/arm64/kernel/ptrace.c
··· 19 19 * along with this program. If not, see <http://www.gnu.org/licenses/>. 20 20 */ 21 21 22 + #include <linux/audit.h> 22 23 #include <linux/compat.h> 23 24 #include <linux/kernel.h> 24 25 #include <linux/sched.h> ··· 40 39 #include <asm/compat.h> 41 40 #include <asm/debug-monitors.h> 42 41 #include <asm/pgtable.h> 42 + #include <asm/syscall.h> 43 43 #include <asm/traps.h> 44 44 #include <asm/system_misc.h> 45 45 ··· 1115 1113 if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) 1116 1114 trace_sys_enter(regs, regs->syscallno); 1117 1115 1116 + #ifdef CONFIG_AUDITSYSCALL 1117 + audit_syscall_entry(syscall_get_arch(), regs->syscallno, 1118 + regs->orig_x0, regs->regs[1], regs->regs[2], regs->regs[3]); 1119 + #endif 1120 + 1118 1121 return regs->syscallno; 1119 1122 } 1120 1123 1121 1124 asmlinkage void syscall_trace_exit(struct pt_regs *regs) 1122 1125 { 1126 + #ifdef CONFIG_AUDITSYSCALL 1127 + audit_syscall_exit(regs); 1128 + #endif 1129 + 1123 1130 if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) 1124 1131 trace_sys_exit(regs, regs_return_value(regs)); 1125 1132
+22 -25
arch/arm64/kernel/setup.c
··· 45 45 #include <linux/efi.h> 46 46 47 47 #include <asm/fixmap.h> 48 + #include <asm/cpu.h> 48 49 #include <asm/cputype.h> 49 50 #include <asm/elf.h> 50 51 #include <asm/cputable.h> ··· 78 77 #endif 79 78 80 79 static const char *cpu_name; 81 - static const char *machine_name; 82 80 phys_addr_t __fdt_pointer __initdata; 83 81 84 82 /* ··· 219 219 sprintf(init_utsname()->machine, ELF_PLATFORM); 220 220 elf_hwcap = 0; 221 221 222 + cpuinfo_store_boot_cpu(); 223 + 222 224 /* 223 225 * Check for sane CTR_EL0.CWG value. 224 226 */ ··· 309 307 while (true) 310 308 cpu_relax(); 311 309 } 312 - 313 - machine_name = of_flat_dt_get_machine_name(); 314 310 } 315 311 316 312 /* ··· 417 417 } 418 418 arch_initcall_sync(arm64_device_init); 419 419 420 - static DEFINE_PER_CPU(struct cpu, cpu_data); 421 - 422 420 static int __init topology_init(void) 423 421 { 424 422 int i; 425 423 426 424 for_each_possible_cpu(i) { 427 - struct cpu *cpu = &per_cpu(cpu_data, i); 425 + struct cpu *cpu = &per_cpu(cpu_data.cpu, i); 428 426 cpu->hotpluggable = 1; 429 427 register_cpu(cpu, i); 430 428 } ··· 447 449 { 448 450 int i; 449 451 450 - seq_printf(m, "Processor\t: %s rev %d (%s)\n", 451 - cpu_name, read_cpuid_id() & 15, ELF_PLATFORM); 452 + /* 453 + * Dump out the common processor features in a single line. Userspace 454 + * should read the hwcaps with getauxval(AT_HWCAP) rather than 455 + * attempting to parse this. 
456 + */ 457 + seq_puts(m, "features\t:"); 458 + for (i = 0; hwcap_str[i]; i++) 459 + if (elf_hwcap & (1 << i)) 460 + seq_printf(m, " %s", hwcap_str[i]); 461 + seq_puts(m, "\n\n"); 452 462 453 463 for_each_online_cpu(i) { 464 + struct cpuinfo_arm64 *cpuinfo = &per_cpu(cpu_data, i); 465 + u32 midr = cpuinfo->reg_midr; 466 + 454 467 /* 455 468 * glibc reads /proc/cpuinfo to determine the number of 456 469 * online processors, looking for lines beginning with ··· 470 461 #ifdef CONFIG_SMP 471 462 seq_printf(m, "processor\t: %d\n", i); 472 463 #endif 464 + seq_printf(m, "implementer\t: 0x%02x\n", 465 + MIDR_IMPLEMENTOR(midr)); 466 + seq_printf(m, "variant\t\t: 0x%x\n", MIDR_VARIANT(midr)); 467 + seq_printf(m, "partnum\t\t: 0x%03x\n", MIDR_PARTNUM(midr)); 468 + seq_printf(m, "revision\t: 0x%x\n\n", MIDR_REVISION(midr)); 473 469 } 474 - 475 - /* dump out the processor features */ 476 - seq_puts(m, "Features\t: "); 477 - 478 - for (i = 0; hwcap_str[i]; i++) 479 - if (elf_hwcap & (1 << i)) 480 - seq_printf(m, "%s ", hwcap_str[i]); 481 - 482 - seq_printf(m, "\nCPU implementer\t: 0x%02x\n", read_cpuid_id() >> 24); 483 - seq_printf(m, "CPU architecture: AArch64\n"); 484 - seq_printf(m, "CPU variant\t: 0x%x\n", (read_cpuid_id() >> 20) & 15); 485 - seq_printf(m, "CPU part\t: 0x%03x\n", (read_cpuid_id() >> 4) & 0xfff); 486 - seq_printf(m, "CPU revision\t: %d\n", read_cpuid_id() & 15); 487 - 488 - seq_puts(m, "\n"); 489 - 490 - seq_printf(m, "Hardware\t: %s\n", machine_name); 491 470 492 471 return 0; 493 472 }
+1 -1
arch/arm64/kernel/signal32.c
··· 27 27 #include <asm/fpsimd.h> 28 28 #include <asm/signal32.h> 29 29 #include <asm/uaccess.h> 30 - #include <asm/unistd32.h> 30 + #include <asm/unistd.h> 31 31 32 32 struct compat_sigcontext { 33 33 /* We always set these two fields to 0 */
+6
arch/arm64/kernel/smp.c
··· 39 39 40 40 #include <asm/atomic.h> 41 41 #include <asm/cacheflush.h> 42 + #include <asm/cpu.h> 42 43 #include <asm/cputype.h> 43 44 #include <asm/cpu_ops.h> 44 45 #include <asm/mmu_context.h> ··· 154 153 155 154 if (cpu_ops[cpu]->cpu_postboot) 156 155 cpu_ops[cpu]->cpu_postboot(); 156 + 157 + /* 158 + * Log the CPU info before it is marked online and might get read. 159 + */ 160 + cpuinfo_store_cpu(); 157 161 158 162 /* 159 163 * Enable GIC and timers.
+1 -1
arch/arm64/kernel/suspend.c
··· 119 119 extern struct sleep_save_sp sleep_save_sp; 120 120 extern phys_addr_t sleep_idmap_phys; 121 121 122 - static int cpu_suspend_init(void) 122 + static int __init cpu_suspend_init(void) 123 123 { 124 124 void *ctx_ptr; 125 125
+1 -1
arch/arm64/kernel/sys_compat.c
··· 26 26 #include <linux/uaccess.h> 27 27 28 28 #include <asm/cacheflush.h> 29 - #include <asm/unistd32.h> 29 + #include <asm/unistd.h> 30 30 31 31 static inline void 32 32 do_compat_cache_op(unsigned long start, unsigned long end, int flags)
+33 -14
arch/arm64/kernel/topology.c
··· 20 20 #include <linux/of.h> 21 21 #include <linux/sched.h> 22 22 23 + #include <asm/cputype.h> 23 24 #include <asm/topology.h> 24 25 25 26 static int __init get_cpu_for_node(struct device_node *node) ··· 189 188 * Check that all cores are in the topology; the SMP code will 190 189 * only mark cores described in the DT as possible. 191 190 */ 192 - for_each_possible_cpu(cpu) { 193 - if (cpu_topology[cpu].cluster_id == -1) { 194 - pr_err("CPU%d: No topology information specified\n", 195 - cpu); 191 + for_each_possible_cpu(cpu) 192 + if (cpu_topology[cpu].cluster_id == -1) 196 193 ret = -EINVAL; 197 - } 198 - } 199 194 200 195 out_map: 201 196 of_node_put(map); ··· 216 219 struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid]; 217 220 int cpu; 218 221 219 - if (cpuid_topo->cluster_id == -1) { 220 - /* 221 - * DT does not contain topology information for this cpu. 222 - */ 223 - pr_debug("CPU%u: No topology information configured\n", cpuid); 224 - return; 225 - } 226 - 227 222 /* update core and thread sibling masks */ 228 223 for_each_possible_cpu(cpu) { 229 224 cpu_topo = &cpu_topology[cpu]; ··· 238 249 239 250 void store_cpu_topology(unsigned int cpuid) 240 251 { 252 + struct cpu_topology *cpuid_topo = &cpu_topology[cpuid]; 253 + u64 mpidr; 254 + 255 + if (cpuid_topo->cluster_id != -1) 256 + goto topology_populated; 257 + 258 + mpidr = read_cpuid_mpidr(); 259 + 260 + /* Uniprocessor systems can rely on default topology values */ 261 + if (mpidr & MPIDR_UP_BITMASK) 262 + return; 263 + 264 + /* Create cpu topology mapping based on MPIDR. */ 265 + if (mpidr & MPIDR_MT_BITMASK) { 266 + /* Multiprocessor system : Multi-threads per core */ 267 + cpuid_topo->thread_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); 268 + cpuid_topo->core_id = MPIDR_AFFINITY_LEVEL(mpidr, 1); 269 + cpuid_topo->cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 2); 270 + } else { 271 + /* Multiprocessor system : Single-thread per core */ 272 + cpuid_topo->thread_id = -1; 273 + cpuid_topo->core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); 274 + cpuid_topo->cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 1); 275 + } 276 + 277 + pr_debug("CPU%u: cluster %d core %d thread %d mpidr %#016llx\n", 278 + cpuid, cpuid_topo->cluster_id, cpuid_topo->core_id, 279 + cpuid_topo->thread_id, mpidr); 280 + 281 + topology_populated: 241 282 update_siblings_masks(cpuid); 242 283
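The reworked store_cpu_topology() above derives cluster/core/thread IDs straight from the MPIDR_EL1 affinity fields when DT gives no topology. A minimal user-space sketch of that decoding; the `mpidr_aff()` helper and bit constants here are illustrative stand-ins, not the kernel's MPIDR_AFFINITY_LEVEL/MPIDR_MT_BITMASK macros:

```c
#include <assert.h>
#include <stdint.h>

/* ARMv8 MPIDR_EL1 affinity fields: Aff0 = bits 7:0, Aff1 = 15:8,
 * Aff2 = 23:16, Aff3 = 39:32. Bit 24 is MT, bit 30 is U (uniprocessor). */
#define TOY_MPIDR_MT_BIT (1ULL << 24)
#define TOY_MPIDR_UP_BIT (1ULL << 30)

static unsigned int mpidr_aff(uint64_t mpidr, int level)
{
	static const int shift[4] = { 0, 8, 16, 32 };

	return (mpidr >> shift[level]) & 0xff;
}

struct toy_topo { int thread_id, core_id, cluster_id; };

static struct toy_topo decode_mpidr(uint64_t mpidr)
{
	struct toy_topo t;

	if (mpidr & TOY_MPIDR_MT_BIT) {
		/* Multi-threaded cores: Aff0 numbers the thread. */
		t.thread_id  = mpidr_aff(mpidr, 0);
		t.core_id    = mpidr_aff(mpidr, 1);
		t.cluster_id = mpidr_aff(mpidr, 2);
	} else {
		/* Single-threaded cores: Aff0 numbers the core. */
		t.thread_id  = -1;
		t.core_id    = mpidr_aff(mpidr, 0);
		t.cluster_id = mpidr_aff(mpidr, 1);
	}
	return t;
}
```

For example, an MPIDR of 0x102 with MT clear decodes to cluster 1, core 2, no thread level, matching the else-branch in the hunk.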
+9 -4
arch/arm64/kernel/traps.c
··· 156 156 frame.pc = thread_saved_pc(tsk); 157 157 } 158 158 159 - printk("Call trace:\n"); 159 + pr_emerg("Call trace:\n"); 160 160 while (1) { 161 161 unsigned long where = frame.pc; 162 162 int ret; ··· 331 331 332 332 void __pte_error(const char *file, int line, unsigned long val) 333 333 { 334 - printk("%s:%d: bad pte %016lx.\n", file, line, val); 334 + pr_crit("%s:%d: bad pte %016lx.\n", file, line, val); 335 335 } 336 336 337 337 void __pmd_error(const char *file, int line, unsigned long val) 338 338 { 339 - printk("%s:%d: bad pmd %016lx.\n", file, line, val); 339 + pr_crit("%s:%d: bad pmd %016lx.\n", file, line, val); 340 + } 341 + 342 + void __pud_error(const char *file, int line, unsigned long val) 343 + { 344 + pr_crit("%s:%d: bad pud %016lx.\n", file, line, val); 340 345 } 341 346 342 347 void __pgd_error(const char *file, int line, unsigned long val) 343 348 { 344 - printk("%s:%d: bad pgd %016lx.\n", file, line, val); 349 + pr_crit("%s:%d: bad pgd %016lx.\n", file, line, val); 345 350 } 346 351 347 352 void __init trap_init(void)
+51 -43
arch/arm64/kernel/vdso.c
··· 88 88 { 89 89 struct mm_struct *mm = current->mm; 90 90 unsigned long addr = AARCH32_VECTORS_BASE; 91 - int ret; 91 + static struct vm_special_mapping spec = { 92 + .name = "[vectors]", 93 + .pages = vectors_page, 94 + 95 + }; 96 + void *ret; 92 97 93 98 down_write(&mm->mmap_sem); 94 99 current->mm->context.vdso = (void *)addr; 95 100 96 101 /* Map vectors page at the high address. */ 97 - ret = install_special_mapping(mm, addr, PAGE_SIZE, 98 - VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC, 99 - vectors_page); 102 + ret = _install_special_mapping(mm, addr, PAGE_SIZE, 103 + VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC, 104 + &spec); 100 105 101 106 up_write(&mm->mmap_sem); 102 107 103 - return ret; 108 + return PTR_ERR_OR_ZERO(ret); 104 109 } 105 110 #endif /* CONFIG_COMPAT */ 111 + 112 + static struct vm_special_mapping vdso_spec[2]; 106 113 107 114 static int __init vdso_init(void) 108 115 { ··· 121 114 } 122 115 123 116 vdso_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT; 124 - pr_info("vdso: %ld pages (%ld code, %ld data) at base %p\n", 125 - vdso_pages + 1, vdso_pages, 1L, &vdso_start); 117 + pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n", 118 + vdso_pages + 1, vdso_pages, &vdso_start, 1L, vdso_data); 126 119 127 120 /* Allocate the vDSO pagelist, plus a page for the data. */ 128 121 vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *), ··· 130 123 if (vdso_pagelist == NULL) 131 124 return -ENOMEM; 132 125 126 + /* Grab the vDSO data page. */ 127 + vdso_pagelist[0] = virt_to_page(vdso_data); 128 + 133 129 /* Grab the vDSO code pages. */ 134 130 for (i = 0; i < vdso_pages; i++) 135 - vdso_pagelist[i] = virt_to_page(&vdso_start + i * PAGE_SIZE); 131 + vdso_pagelist[i + 1] = virt_to_page(&vdso_start + i * PAGE_SIZE); 136 132 137 - /* Grab the vDSO data page. 
*/ 138 - vdso_pagelist[i] = virt_to_page(vdso_data); 133 + /* Populate the special mapping structures */ 134 + vdso_spec[0] = (struct vm_special_mapping) { 135 + .name = "[vvar]", 136 + .pages = vdso_pagelist, 137 + }; 138 + 139 + vdso_spec[1] = (struct vm_special_mapping) { 140 + .name = "[vdso]", 141 + .pages = &vdso_pagelist[1], 142 + }; 139 143 140 144 return 0; 141 145 } ··· 156 138 int uses_interp) 157 139 { 158 140 struct mm_struct *mm = current->mm; 159 - unsigned long vdso_base, vdso_mapping_len; 160 - int ret; 141 + unsigned long vdso_base, vdso_text_len, vdso_mapping_len; 142 + void *ret; 161 143 144 + vdso_text_len = vdso_pages << PAGE_SHIFT; 162 145 /* Be sure to map the data page */ 163 - vdso_mapping_len = (vdso_pages + 1) << PAGE_SHIFT; 146 + vdso_mapping_len = vdso_text_len + PAGE_SIZE; 164 147 165 148 down_write(&mm->mmap_sem); 166 149 vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0); 167 150 if (IS_ERR_VALUE(vdso_base)) { 168 - ret = vdso_base; 151 + ret = ERR_PTR(vdso_base); 169 152 goto up_fail; 170 153 } 171 - mm->context.vdso = (void *)vdso_base; 154 + ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE, 155 + VM_READ|VM_MAYREAD, 156 + &vdso_spec[0]); 157 + if (IS_ERR(ret)) 158 + goto up_fail; 172 159 173 - ret = install_special_mapping(mm, vdso_base, vdso_mapping_len, 174 - VM_READ|VM_EXEC| 175 - VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC, 176 - vdso_pagelist); 177 - if (ret) { 178 - mm->context.vdso = NULL; 160 + vdso_base += PAGE_SIZE; 161 + mm->context.vdso = (void *)vdso_base; 162 + ret = _install_special_mapping(mm, vdso_base, vdso_text_len, 163 + VM_READ|VM_EXEC| 164 + VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC, 165 + &vdso_spec[1]); 166 + if (IS_ERR(ret)) 179 167 goto up_fail; 180 - } 168 + 169 + 170 + up_write(&mm->mmap_sem); 171 + return 0; 181 172 182 173 up_fail: 174 + mm->context.vdso = NULL; 183 175 up_write(&mm->mmap_sem); 184 - 185 - return ret; 186 - } 187 - 188 - const char *arch_vma_name(struct vm_area_struct *vma) 189 - { 
190 - /* 191 - * We can re-use the vdso pointer in mm_context_t for identifying 192 - * the vectors page for compat applications. The vDSO will always 193 - * sit above TASK_UNMAPPED_BASE and so we don't need to worry about 194 - * it conflicting with the vectors base. 195 - */ 196 - if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso) { 197 - #ifdef CONFIG_COMPAT 198 - if (vma->vm_start == AARCH32_VECTORS_BASE) 199 - return "[vectors]"; 200 - #endif 201 - return "[vdso]"; 202 - } 203 - 204 - return NULL; 176 + return PTR_ERR(ret); 205 177 } 206 178 207 179 /*
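The vdso.c rework switches from an `int` return to `_install_special_mapping()`'s pointer return, converted with PTR_ERR_OR_ZERO(). The kernel's convention packs a small negative errno into the pointer value itself; a freestanding sketch of that convention (these helpers only mirror the spirit of include/linux/err.h and are written here for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_MAX_ERRNO 4095

/* An "error pointer" is an errno stored in the top TOY_MAX_ERRNO
 * values of the address space: (void *)-12 stands for -ENOMEM, etc. */
static void *ERR_PTR(long error)
{
	return (void *)(intptr_t)error;
}

static long PTR_ERR(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-TOY_MAX_ERRNO;
}

static long PTR_ERR_OR_ZERO(const void *ptr)
{
	return IS_ERR(ptr) ? PTR_ERR(ptr) : 0;
}
```

This is why aarch32_setup_vectors_page() can end with a single `return PTR_ERR_OR_ZERO(ret);` whether the mapping call handed back a real VMA pointer or an encoded errno.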
+3 -3
arch/arm64/kernel/vdso/Makefile
··· 43 43 $(call if_changed,vdsosym) 44 44 45 45 # Assembly rules for the .S files 46 - $(obj-vdso): %.o: %.S 46 + $(obj-vdso): %.o: %.S FORCE 47 47 $(call if_changed_dep,vdsoas) 48 48 49 49 # Actual build commands 50 - quiet_cmd_vdsold = VDSOL $@ 50 + quiet_cmd_vdsold = VDSOL $@ 51 51 cmd_vdsold = $(CC) $(c_flags) -Wl,-n -Wl,-T $^ -o $@ 52 - quiet_cmd_vdsoas = VDSOA $@ 52 + quiet_cmd_vdsoas = VDSOA $@ 53 53 cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $< 54 54 55 55 # Install commands for the unstripped file
+1 -3
arch/arm64/kernel/vdso/vdso.lds.S
··· 28 28 29 29 SECTIONS 30 30 { 31 + PROVIDE(_vdso_data = . - PAGE_SIZE); 31 32 . = VDSO_LBASE + SIZEOF_HEADERS; 32 33 33 34 .hash : { *(.hash) } :text ··· 57 56 58 57 _end = .; 59 58 PROVIDE(end = .); 60 - 61 - . = ALIGN(PAGE_SIZE); 62 - PROVIDE(_vdso_data = .); 63 59 64 60 /DISCARD/ : { 65 61 *(.note.GNU-stack)
+16
arch/arm64/kernel/vmlinux.lds.S
··· 9 9 #include <asm/memory.h> 10 10 #include <asm/page.h> 11 11 12 + #include "image.h" 13 + 12 14 #define ARM_EXIT_KEEP(x) 13 15 #define ARM_EXIT_DISCARD(x) x 14 16 ··· 106 104 _edata = .; 107 105 108 106 BSS_SECTION(0, 0, 0) 107 + 108 + . = ALIGN(PAGE_SIZE); 109 + idmap_pg_dir = .; 110 + . += IDMAP_DIR_SIZE; 111 + swapper_pg_dir = .; 112 + . += SWAPPER_DIR_SIZE; 113 + 109 114 _end = .; 110 115 111 116 STABS_DEBUG 117 + 118 + HEAD_SYMBOLS 112 119 } 113 120 114 121 /* ··· 125 114 */ 126 115 ASSERT(((__hyp_idmap_text_start + PAGE_SIZE) > __hyp_idmap_text_end), 127 116 "HYP init code too big") 117 + 118 + /* 119 + * If padding is applied before .head.text, virt<->phys conversions will fail. 120 + */ 121 + ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
+1
arch/arm64/mm/fault.c
··· 62 62 break; 63 63 64 64 pud = pud_offset(pgd, addr); 65 + printk(", *pud=%016llx", pud_val(*pud)); 65 66 if (pud_none(*pud) || pud_bad(*pud)) 66 67 break; 67 68
+19 -15
arch/arm64/mm/init.c
··· 33 33 #include <linux/dma-mapping.h> 34 34 #include <linux/dma-contiguous.h> 35 35 36 + #include <asm/fixmap.h> 36 37 #include <asm/sections.h> 37 38 #include <asm/setup.h> 38 39 #include <asm/sizes.h> ··· 138 137 { 139 138 phys_addr_t dma_phys_limit = 0; 140 139 141 - /* Register the kernel text, kernel data and initrd with memblock */ 140 + /* 141 + * Register the kernel text, kernel data, initrd, and initial 142 + * pagetables with memblock. 143 + */ 142 144 memblock_reserve(__pa(_text), _end - _text); 143 145 #ifdef CONFIG_BLK_DEV_INITRD 144 146 if (initrd_start) 145 147 memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start); 146 148 #endif 147 - 148 - /* 149 - * Reserve the page tables. These are already in use, 150 - * and can only be in node 0. 151 - */ 152 - memblock_reserve(__pa(swapper_pg_dir), SWAPPER_DIR_SIZE); 153 - memblock_reserve(__pa(idmap_pg_dir), IDMAP_DIR_SIZE); 154 149 155 150 early_init_fdt_scan_reserved_mem(); 156 151 ··· 266 269 267 270 #define MLK(b, t) b, t, ((t) - (b)) >> 10 268 271 #define MLM(b, t) b, t, ((t) - (b)) >> 20 272 + #define MLG(b, t) b, t, ((t) - (b)) >> 30 269 273 #define MLK_ROUNDUP(b, t) b, t, DIV_ROUND_UP(((t) - (b)), SZ_1K) 270 274 271 275 pr_notice("Virtual kernel memory layout:\n" 272 - " vmalloc : 0x%16lx - 0x%16lx (%6ld MB)\n" 276 + " vmalloc : 0x%16lx - 0x%16lx (%6ld GB)\n" 273 277 #ifdef CONFIG_SPARSEMEM_VMEMMAP 274 - " vmemmap : 0x%16lx - 0x%16lx (%6ld MB)\n" 278 + " vmemmap : 0x%16lx - 0x%16lx (%6ld GB maximum)\n" 279 + " 0x%16lx - 0x%16lx (%6ld MB actual)\n" 275 280 #endif 281 + " PCI I/O : 0x%16lx - 0x%16lx (%6ld MB)\n" 282 + " fixed : 0x%16lx - 0x%16lx (%6ld KB)\n" 276 283 " modules : 0x%16lx - 0x%16lx (%6ld MB)\n" 277 284 " memory : 0x%16lx - 0x%16lx (%6ld MB)\n" 278 - " .init : 0x%p" " - 0x%p" " (%6ld kB)\n" 279 - " .text : 0x%p" " - 0x%p" " (%6ld kB)\n" 280 - " .data : 0x%p" " - 0x%p" " (%6ld kB)\n", 281 - MLM(VMALLOC_START, VMALLOC_END), 285 + " .init : 0x%p" " - 0x%p" " (%6ld KB)\n" 286 + " .text : 0x%p" " - 0x%p" " (%6ld KB)\n" 287 + " .data : 0x%p" " - 0x%p" " (%6ld KB)\n", 288 + MLG(VMALLOC_START, VMALLOC_END), 282 289 #ifdef CONFIG_SPARSEMEM_VMEMMAP 290 + MLG((unsigned long)vmemmap, 291 + (unsigned long)vmemmap + VMEMMAP_SIZE), 283 292 MLM((unsigned long)virt_to_page(PAGE_OFFSET), 284 293 (unsigned long)virt_to_page(high_memory)), 285 294 #endif 295 + MLM((unsigned long)PCI_IOBASE, (unsigned long)PCI_IOBASE + SZ_16M), 296 + MLK(FIXADDR_START, FIXADDR_TOP), 286 297 MLM(MODULES_VADDR, MODULES_END), 287 298 MLM(PAGE_OFFSET, (unsigned long)high_memory), 288 - 289 299 MLK_ROUNDUP(__init_begin, __init_end), 290 300 MLK_ROUNDUP(_text, _etext), 291 301 MLK_ROUNDUP(_sdata, _edata));
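The new MLG macro above extends the memory-layout banner from MB to GB scaling, which the larger 48-bit-VA vmalloc region needs. The arithmetic is just power-of-two shifts; a simplified sketch (the kernel's macros also pass `b` and `t` through for the format string, trimmed here to just the size calculation):

```c
#include <assert.h>

/* Size of a [base, top) range in KB, MB, and GB respectively,
 * mirroring the shifts in the MLK/MLM/MLG macros of the hunk. */
#define TOY_MLK(b, t) (((t) - (b)) >> 10)
#define TOY_MLM(b, t) (((t) - (b)) >> 20)
#define TOY_MLG(b, t) (((t) - (b)) >> 30)
```

So a 2^38-byte vmalloc window prints as 256 GB rather than a six-digit MB figure.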
+22 -8
arch/arm64/mm/ioremap.c
··· 103 103 } 104 104 EXPORT_SYMBOL(ioremap_cache); 105 105 106 - #ifndef CONFIG_ARM64_64K_PAGES 107 106 static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss; 107 + #if CONFIG_ARM64_PGTABLE_LEVELS > 2 108 + static pte_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss; 109 + #endif 110 + #if CONFIG_ARM64_PGTABLE_LEVELS > 3 111 + static pte_t bm_pud[PTRS_PER_PUD] __page_aligned_bss; 108 112 #endif 109 113 110 - static inline pmd_t * __init early_ioremap_pmd(unsigned long addr) 114 + static inline pud_t * __init early_ioremap_pud(unsigned long addr) 111 115 { 112 116 pgd_t *pgd; 113 - pud_t *pud; 114 117 115 118 pgd = pgd_offset_k(addr); 116 119 BUG_ON(pgd_none(*pgd) || pgd_bad(*pgd)); 117 120 118 - pud = pud_offset(pgd, addr); 121 + return pud_offset(pgd, addr); 122 + } 123 + 124 + static inline pmd_t * __init early_ioremap_pmd(unsigned long addr) 125 + { 126 + pud_t *pud = early_ioremap_pud(addr); 127 + 119 128 BUG_ON(pud_none(*pud) || pud_bad(*pud)); 120 129 121 130 return pmd_offset(pud, addr); ··· 141 132 142 133 void __init early_ioremap_init(void) 143 134 { 135 + pgd_t *pgd; 136 + pud_t *pud; 144 137 pmd_t *pmd; 138 + unsigned long addr = fix_to_virt(FIX_BTMAP_BEGIN); 145 139 146 - pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)); 147 - #ifndef CONFIG_ARM64_64K_PAGES 148 - /* need to populate pmd for 4k pagesize only */ 140 + pgd = pgd_offset_k(addr); 141 + pgd_populate(&init_mm, pgd, bm_pud); 142 + pud = pud_offset(pgd, addr); 143 + pud_populate(&init_mm, pud, bm_pmd); 144 + pmd = pmd_offset(pud, addr); 149 145 pmd_populate_kernel(&init_mm, pmd, bm_pte); 150 - #endif 146 + 151 147 /* 152 148 * The boot-ioremap range spans multiple pmds, for which 153 149 * we are not prepared:
+11 -3
arch/arm64/mm/mmu.c
··· 32 32 #include <asm/setup.h> 33 33 #include <asm/sizes.h> 34 34 #include <asm/tlb.h> 35 + #include <asm/memblock.h> 35 36 #include <asm/mmu_context.h> 36 37 37 38 #include "mm.h" ··· 205 204 unsigned long end, unsigned long phys, 206 205 int map_io) 207 206 { 208 - pud_t *pud = pud_offset(pgd, addr); 207 + pud_t *pud; 209 208 unsigned long next; 210 209 210 + if (pgd_none(*pgd)) { 211 + pud = early_alloc(PTRS_PER_PUD * sizeof(pud_t)); 212 + pgd_populate(&init_mm, pgd, pud); 213 + } 214 + BUG_ON(pgd_bad(*pgd)); 215 + 216 + pud = pud_offset(pgd, addr); 211 217 do { 212 218 next = pud_addr_end(addr, end); 213 219 ··· 298 290 * memory addressable from the initial direct kernel mapping. 299 291 * 300 292 * The initial direct kernel mapping, located at swapper_pg_dir, 301 - * gives us PGDIR_SIZE memory starting from PHYS_OFFSET (which must be 293 + * gives us PUD_SIZE memory starting from PHYS_OFFSET (which must be 302 294 * aligned to 2MB as per Documentation/arm64/booting.txt). 303 295 */ 304 - limit = PHYS_OFFSET + PGDIR_SIZE; 296 + limit = PHYS_OFFSET + PUD_SIZE; 305 297 memblock_set_current_limit(limit); 306 298 307 299 /* map all the memory banks */
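With more than three page-table levels, alloc_init_pud() above can no longer assume the pud table exists: on an empty pgd entry it allocates one and populates the pgd before descending. A toy two-level sketch of that demand-population pattern (the `toy_*` names and calloc-backed tables are illustrative only, not the kernel's early_alloc/pgd_populate):

```c
#include <assert.h>
#include <stdlib.h>

#define TOY_ENTRIES 4

/* Toy "pgd": an array of pointers to lower-level tables, each
 * allocated only when a walk first touches it, mirroring the
 * pgd_none() -> early_alloc() -> pgd_populate() sequence. */
typedef struct { long *slots[TOY_ENTRIES]; } toy_pgd;

static long *toy_pud_offset(toy_pgd *pgd, int idx)
{
	if (!pgd->slots[idx])	/* pgd_none(): no table yet */
		pgd->slots[idx] = calloc(TOY_ENTRIES, sizeof(long));
	return pgd->slots[idx];	/* table for this pgd entry */
}

static int toy_demo(void)
{
	toy_pgd g = { { 0 } };
	long *first = toy_pud_offset(&g, 1);

	/* First walk allocates; a second walk reuses the same table. */
	return first != NULL && toy_pud_offset(&g, 1) == first;
}
```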
+5
drivers/irqchip/Kconfig
··· 10 10 config GIC_NON_BANKED 11 11 bool 12 12 13 + config ARM_GIC_V3 14 + bool 15 + select IRQ_DOMAIN 16 + select MULTI_IRQ_HANDLER 17 + 13 18 config ARM_NVIC 14 19 bool 15 20 select IRQ_DOMAIN
+2 -1
drivers/irqchip/Makefile
··· 15 15 obj-$(CONFIG_ARCH_SUNXI) += irq-sun4i.o 16 16 obj-$(CONFIG_ARCH_SUNXI) += irq-sunxi-nmi.o 17 17 obj-$(CONFIG_ARCH_SPEAR3XX) += spear-shirq.o 18 - obj-$(CONFIG_ARM_GIC) += irq-gic.o 18 + obj-$(CONFIG_ARM_GIC) += irq-gic.o irq-gic-common.o 19 + obj-$(CONFIG_ARM_GIC_V3) += irq-gic-v3.o irq-gic-common.o 19 20 obj-$(CONFIG_ARM_NVIC) += irq-nvic.o 20 21 obj-$(CONFIG_ARM_VIC) += irq-vic.o 21 22 obj-$(CONFIG_IMGPDC_IRQ) += irq-imgpdc.o
+115
drivers/irqchip/irq-gic-common.c
··· 1 + /* 2 + * Copyright (C) 2002 ARM Limited, All Rights Reserved. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + * 13 + * You should have received a copy of the GNU General Public License 14 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 + */ 16 + 17 + #include <linux/interrupt.h> 18 + #include <linux/io.h> 19 + #include <linux/irq.h> 20 + #include <linux/irqchip/arm-gic.h> 21 + 22 + #include "irq-gic-common.h" 23 + 24 + void gic_configure_irq(unsigned int irq, unsigned int type, 25 + void __iomem *base, void (*sync_access)(void)) 26 + { 27 + u32 enablemask = 1 << (irq % 32); 28 + u32 enableoff = (irq / 32) * 4; 29 + u32 confmask = 0x2 << ((irq % 16) * 2); 30 + u32 confoff = (irq / 16) * 4; 31 + bool enabled = false; 32 + u32 val; 33 + 34 + /* 35 + * Read current configuration register, and insert the config 36 + * for "irq", depending on "type". 37 + */ 38 + val = readl_relaxed(base + GIC_DIST_CONFIG + confoff); 39 + if (type == IRQ_TYPE_LEVEL_HIGH) 40 + val &= ~confmask; 41 + else if (type == IRQ_TYPE_EDGE_RISING) 42 + val |= confmask; 43 + 44 + /* 45 + * As recommended by the spec, disable the interrupt before changing 46 + * the configuration 47 + */ 48 + if (readl_relaxed(base + GIC_DIST_ENABLE_SET + enableoff) & enablemask) { 49 + writel_relaxed(enablemask, base + GIC_DIST_ENABLE_CLEAR + enableoff); 50 + if (sync_access) 51 + sync_access(); 52 + enabled = true; 53 + } 54 + 55 + /* 56 + * Write back the new configuration, and possibly re-enable 57 + * the interrupt. 
58 + */ 59 + writel_relaxed(val, base + GIC_DIST_CONFIG + confoff); 60 + 61 + if (enabled) 62 + writel_relaxed(enablemask, base + GIC_DIST_ENABLE_SET + enableoff); 63 + 64 + if (sync_access) 65 + sync_access(); 66 + } 67 + 68 + void __init gic_dist_config(void __iomem *base, int gic_irqs, 69 + void (*sync_access)(void)) 70 + { 71 + unsigned int i; 72 + 73 + /* 74 + * Set all global interrupts to be level triggered, active low. 75 + */ 76 + for (i = 32; i < gic_irqs; i += 16) 77 + writel_relaxed(0, base + GIC_DIST_CONFIG + i / 4); 78 + 79 + /* 80 + * Set priority on all global interrupts. 81 + */ 82 + for (i = 32; i < gic_irqs; i += 4) 83 + writel_relaxed(0xa0a0a0a0, base + GIC_DIST_PRI + i); 84 + 85 + /* 86 + * Disable all interrupts. Leave the PPI and SGIs alone 87 + * as they are enabled by redistributor registers. 88 + */ 89 + for (i = 32; i < gic_irqs; i += 32) 90 + writel_relaxed(0xffffffff, base + GIC_DIST_ENABLE_CLEAR + i / 8); 91 + 92 + if (sync_access) 93 + sync_access(); 94 + } 95 + 96 + void gic_cpu_config(void __iomem *base, void (*sync_access)(void)) 97 + { 98 + int i; 99 + 100 + /* 101 + * Deal with the banked PPI and SGI interrupts - disable all 102 + * PPI interrupts, ensure all SGI interrupts are enabled. 103 + */ 104 + writel_relaxed(0xffff0000, base + GIC_DIST_ENABLE_CLEAR); 105 + writel_relaxed(0x0000ffff, base + GIC_DIST_ENABLE_SET); 106 + 107 + /* 108 + * Set priority on PPI and SGI interrupts 109 + */ 110 + for (i = 0; i < 32; i += 4) 111 + writel_relaxed(0xa0a0a0a0, base + GIC_DIST_PRI + i * 4 / 4); 112 + 113 + if (sync_access) 114 + sync_access(); 115 + }
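gic_configure_irq() in the new common file locates an interrupt's registers purely by arithmetic: each GICD_ICFGR word holds 16 two-bit trigger fields, each enable register word holds 32 one-bit fields. A sketch of those calculations, lifted from the expressions in the hunk (the struct wrapper is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Register coordinates for one IRQ, per gic_configure_irq():
 * enable bit in a 32-bit ISENABLER word, 2-bit trigger field
 * in a 16-field ICFGR word; offsets are in bytes. */
struct gic_regs {
	uint32_t enablemask, enableoff;
	uint32_t confmask, confoff;
};

static struct gic_regs gic_locate(unsigned int irq)
{
	struct gic_regs r = {
		.enablemask = 1u << (irq % 32),
		.enableoff  = (irq / 32) * 4,
		.confmask   = 0x2u << ((irq % 16) * 2),
		.confoff    = (irq / 16) * 4,
	};
	return r;
}
```

For SPI 35, say, the enable bit is bit 3 of the second ISENABLER word (offset 4) and the trigger field sits at bits 7:6 of the third ICFGR word (offset 8), which is exactly what the masks below write through.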
+29
drivers/irqchip/irq-gic-common.h
··· 1 + /* 2 + * Copyright (C) 2002 ARM Limited, All Rights Reserved. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + * 13 + * You should have received a copy of the GNU General Public License 14 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 + */ 16 + 17 + #ifndef _IRQ_GIC_COMMON_H 18 + #define _IRQ_GIC_COMMON_H 19 + 20 + #include <linux/of.h> 21 + #include <linux/irqdomain.h> 22 + 23 + void gic_configure_irq(unsigned int irq, unsigned int type, 24 + void __iomem *base, void (*sync_access)(void)); 25 + void gic_dist_config(void __iomem *base, int gic_irqs, 26 + void (*sync_access)(void)); 27 + void gic_cpu_config(void __iomem *base, void (*sync_access)(void)); 28 + 29 + #endif /* _IRQ_GIC_COMMON_H */
+692
drivers/irqchip/irq-gic-v3.c
··· 1 + /* 2 + * Copyright (C) 2013, 2014 ARM Limited, All Rights Reserved. 3 + * Author: Marc Zyngier <marc.zyngier@arm.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + 18 + #include <linux/cpu.h> 19 + #include <linux/delay.h> 20 + #include <linux/interrupt.h> 21 + #include <linux/of.h> 22 + #include <linux/of_address.h> 23 + #include <linux/of_irq.h> 24 + #include <linux/percpu.h> 25 + #include <linux/slab.h> 26 + 27 + #include <linux/irqchip/arm-gic-v3.h> 28 + 29 + #include <asm/cputype.h> 30 + #include <asm/exception.h> 31 + #include <asm/smp_plat.h> 32 + 33 + #include "irq-gic-common.h" 34 + #include "irqchip.h" 35 + 36 + struct gic_chip_data { 37 + void __iomem *dist_base; 38 + void __iomem **redist_base; 39 + void __percpu __iomem **rdist; 40 + struct irq_domain *domain; 41 + u64 redist_stride; 42 + u32 redist_regions; 43 + unsigned int irq_nr; 44 + }; 45 + 46 + static struct gic_chip_data gic_data __read_mostly; 47 + 48 + #define gic_data_rdist() (this_cpu_ptr(gic_data.rdist)) 49 + #define gic_data_rdist_rd_base() (*gic_data_rdist()) 50 + #define gic_data_rdist_sgi_base() (gic_data_rdist_rd_base() + SZ_64K) 51 + 52 + /* Our default, arbitrary priority value. Linux only uses one anyway. 
*/ 53 + #define DEFAULT_PMR_VALUE 0xf0 54 + 55 + static inline unsigned int gic_irq(struct irq_data *d) 56 + { 57 + return d->hwirq; 58 + } 59 + 60 + static inline int gic_irq_in_rdist(struct irq_data *d) 61 + { 62 + return gic_irq(d) < 32; 63 + } 64 + 65 + static inline void __iomem *gic_dist_base(struct irq_data *d) 66 + { 67 + if (gic_irq_in_rdist(d)) /* SGI+PPI -> SGI_base for this CPU */ 68 + return gic_data_rdist_sgi_base(); 69 + 70 + if (d->hwirq <= 1023) /* SPI -> dist_base */ 71 + return gic_data.dist_base; 72 + 73 + if (d->hwirq >= 8192) 74 + BUG(); /* LPI Detected!!! */ 75 + 76 + return NULL; 77 + } 78 + 79 + static void gic_do_wait_for_rwp(void __iomem *base) 80 + { 81 + u32 count = 1000000; /* 1s! */ 82 + 83 + while (readl_relaxed(base + GICD_CTLR) & GICD_CTLR_RWP) { 84 + count--; 85 + if (!count) { 86 + pr_err_ratelimited("RWP timeout, gone fishing\n"); 87 + return; 88 + } 89 + cpu_relax(); 90 + udelay(1); 91 + }; 92 + } 93 + 94 + /* Wait for completion of a distributor change */ 95 + static void gic_dist_wait_for_rwp(void) 96 + { 97 + gic_do_wait_for_rwp(gic_data.dist_base); 98 + } 99 + 100 + /* Wait for completion of a redistributor change */ 101 + static void gic_redist_wait_for_rwp(void) 102 + { 103 + gic_do_wait_for_rwp(gic_data_rdist_rd_base()); 104 + } 105 + 106 + /* Low level accessors */ 107 + static u64 gic_read_iar(void) 108 + { 109 + u64 irqstat; 110 + 111 + asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat)); 112 + return irqstat; 113 + } 114 + 115 + static void gic_write_pmr(u64 val) 116 + { 117 + asm volatile("msr_s " __stringify(ICC_PMR_EL1) ", %0" : : "r" (val)); 118 + } 119 + 120 + static void gic_write_ctlr(u64 val) 121 + { 122 + asm volatile("msr_s " __stringify(ICC_CTLR_EL1) ", %0" : : "r" (val)); 123 + isb(); 124 + } 125 + 126 + static void gic_write_grpen1(u64 val) 127 + { 128 + asm volatile("msr_s " __stringify(ICC_GRPEN1_EL1) ", %0" : : "r" (val)); 129 + isb(); 130 + } 131 + 132 + static void 
gic_write_sgi1r(u64 val) 133 + { 134 + asm volatile("msr_s " __stringify(ICC_SGI1R_EL1) ", %0" : : "r" (val)); 135 + } 136 + 137 + static void gic_enable_sre(void) 138 + { 139 + u64 val; 140 + 141 + asm volatile("mrs_s %0, " __stringify(ICC_SRE_EL1) : "=r" (val)); 142 + val |= ICC_SRE_EL1_SRE; 143 + asm volatile("msr_s " __stringify(ICC_SRE_EL1) ", %0" : : "r" (val)); 144 + isb(); 145 + 146 + /* 147 + * Need to check that the SRE bit has actually been set. If 148 + * not, it means that SRE is disabled at EL2. We're going to 149 + * die painfully, and there is nothing we can do about it. 150 + * 151 + * Kindly inform the luser. 152 + */ 153 + asm volatile("mrs_s %0, " __stringify(ICC_SRE_EL1) : "=r" (val)); 154 + if (!(val & ICC_SRE_EL1_SRE)) 155 + pr_err("GIC: unable to set SRE (disabled at EL2), panic ahead\n"); 156 + } 157 + 158 + static void gic_enable_redist(void) 159 + { 160 + void __iomem *rbase; 161 + u32 count = 1000000; /* 1s! */ 162 + u32 val; 163 + 164 + rbase = gic_data_rdist_rd_base(); 165 + 166 + /* Wake up this CPU redistributor */ 167 + val = readl_relaxed(rbase + GICR_WAKER); 168 + val &= ~GICR_WAKER_ProcessorSleep; 169 + writel_relaxed(val, rbase + GICR_WAKER); 170 + 171 + while (readl_relaxed(rbase + GICR_WAKER) & GICR_WAKER_ChildrenAsleep) { 172 + count--; 173 + if (!count) { 174 + pr_err_ratelimited("redist didn't wake up...\n"); 175 + return; 176 + } 177 + cpu_relax(); 178 + udelay(1); 179 + }; 180 + } 181 + 182 + /* 183 + * Routines to disable, enable, EOI and route interrupts 184 + */ 185 + static void gic_poke_irq(struct irq_data *d, u32 offset) 186 + { 187 + u32 mask = 1 << (gic_irq(d) % 32); 188 + void (*rwp_wait)(void); 189 + void __iomem *base; 190 + 191 + if (gic_irq_in_rdist(d)) { 192 + base = gic_data_rdist_sgi_base(); 193 + rwp_wait = gic_redist_wait_for_rwp; 194 + } else { 195 + base = gic_data.dist_base; 196 + rwp_wait = gic_dist_wait_for_rwp; 197 + } 198 + 199 + writel_relaxed(mask, base + offset + (gic_irq(d) / 32) * 4); 200 + 
rwp_wait(); 201 + } 202 + 203 + static int gic_peek_irq(struct irq_data *d, u32 offset) 204 + { 205 + u32 mask = 1 << (gic_irq(d) % 32); 206 + void __iomem *base; 207 + 208 + if (gic_irq_in_rdist(d)) 209 + base = gic_data_rdist_sgi_base(); 210 + else 211 + base = gic_data.dist_base; 212 + 213 + return !!(readl_relaxed(base + offset + (gic_irq(d) / 32) * 4) & mask); 214 + } 215 + 216 + static void gic_mask_irq(struct irq_data *d) 217 + { 218 + gic_poke_irq(d, GICD_ICENABLER); 219 + } 220 + 221 + static void gic_unmask_irq(struct irq_data *d) 222 + { 223 + gic_poke_irq(d, GICD_ISENABLER); 224 + } 225 + 226 + static void gic_eoi_irq(struct irq_data *d) 227 + { 228 + gic_write_eoir(gic_irq(d)); 229 + } 230 + 231 + static int gic_set_type(struct irq_data *d, unsigned int type) 232 + { 233 + unsigned int irq = gic_irq(d); 234 + void (*rwp_wait)(void); 235 + void __iomem *base; 236 + 237 + /* Interrupt configuration for SGIs can't be changed */ 238 + if (irq < 16) 239 + return -EINVAL; 240 + 241 + if (type != IRQ_TYPE_LEVEL_HIGH && type != IRQ_TYPE_EDGE_RISING) 242 + return -EINVAL; 243 + 244 + if (gic_irq_in_rdist(d)) { 245 + base = gic_data_rdist_sgi_base(); 246 + rwp_wait = gic_redist_wait_for_rwp; 247 + } else { 248 + base = gic_data.dist_base; 249 + rwp_wait = gic_dist_wait_for_rwp; 250 + } 251 + 252 + gic_configure_irq(irq, type, base, rwp_wait); 253 + 254 + return 0; 255 + } 256 + 257 + static u64 gic_mpidr_to_affinity(u64 mpidr) 258 + { 259 + u64 aff; 260 + 261 + aff = (MPIDR_AFFINITY_LEVEL(mpidr, 3) << 32 | 262 + MPIDR_AFFINITY_LEVEL(mpidr, 2) << 16 | 263 + MPIDR_AFFINITY_LEVEL(mpidr, 1) << 8 | 264 + MPIDR_AFFINITY_LEVEL(mpidr, 0)); 265 + 266 + return aff; 267 + } 268 + 269 + static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs) 270 + { 271 + u64 irqnr; 272 + 273 + do { 274 + irqnr = gic_read_iar(); 275 + 276 + if (likely(irqnr > 15 && irqnr < 1020)) { 277 + u64 irq = irq_find_mapping(gic_data.domain, irqnr); 278 + if (likely(irq)) { 
279 + handle_IRQ(irq, regs); 280 + continue; 281 + } 282 + 283 + WARN_ONCE(true, "Unexpected SPI received!\n"); 284 + gic_write_eoir(irqnr); 285 + } 286 + if (irqnr < 16) { 287 + gic_write_eoir(irqnr); 288 + #ifdef CONFIG_SMP 289 + handle_IPI(irqnr, regs); 290 + #else 291 + WARN_ONCE(true, "Unexpected SGI received!\n"); 292 + #endif 293 + continue; 294 + } 295 + } while (irqnr != ICC_IAR1_EL1_SPURIOUS); 296 + } 297 + 298 + static void __init gic_dist_init(void) 299 + { 300 + unsigned int i; 301 + u64 affinity; 302 + void __iomem *base = gic_data.dist_base; 303 + 304 + /* Disable the distributor */ 305 + writel_relaxed(0, base + GICD_CTLR); 306 + gic_dist_wait_for_rwp(); 307 + 308 + gic_dist_config(base, gic_data.irq_nr, gic_dist_wait_for_rwp); 309 + 310 + /* Enable distributor with ARE, Group1 */ 311 + writel_relaxed(GICD_CTLR_ARE_NS | GICD_CTLR_ENABLE_G1A | GICD_CTLR_ENABLE_G1, 312 + base + GICD_CTLR); 313 + 314 + /* 315 + * Set all global interrupts to the boot CPU only. ARE must be 316 + * enabled. 317 + */ 318 + affinity = gic_mpidr_to_affinity(cpu_logical_map(smp_processor_id())); 319 + for (i = 32; i < gic_data.irq_nr; i++) 320 + writeq_relaxed(affinity, base + GICD_IROUTER + i * 8); 321 + } 322 + 323 + static int gic_populate_rdist(void) 324 + { 325 + u64 mpidr = cpu_logical_map(smp_processor_id()); 326 + u64 typer; 327 + u32 aff; 328 + int i; 329 + 330 + /* 331 + * Convert affinity to a 32bit value that can be matched to 332 + * GICR_TYPER bits [63:32]. 333 + */ 334 + aff = (MPIDR_AFFINITY_LEVEL(mpidr, 3) << 24 | 335 + MPIDR_AFFINITY_LEVEL(mpidr, 2) << 16 | 336 + MPIDR_AFFINITY_LEVEL(mpidr, 1) << 8 | 337 + MPIDR_AFFINITY_LEVEL(mpidr, 0)); 338 + 339 + for (i = 0; i < gic_data.redist_regions; i++) { 340 + void __iomem *ptr = gic_data.redist_base[i]; 341 + u32 reg; 342 + 343 + reg = readl_relaxed(ptr + GICR_PIDR2) & GIC_PIDR2_ARCH_MASK; 344 + if (reg != GIC_PIDR2_ARCH_GICv3 && 345 + reg != GIC_PIDR2_ARCH_GICv4) { /* We're in trouble... 
+ */
+            pr_warn("No redistributor present @%p\n", ptr);
+            break;
+        }
+
+        do {
+            typer = readq_relaxed(ptr + GICR_TYPER);
+            if ((typer >> 32) == aff) {
+                gic_data_rdist_rd_base() = ptr;
+                pr_info("CPU%d: found redistributor %llx @%p\n",
+                        smp_processor_id(),
+                        (unsigned long long)mpidr, ptr);
+                return 0;
+            }
+
+            if (gic_data.redist_stride) {
+                ptr += gic_data.redist_stride;
+            } else {
+                ptr += SZ_64K * 2; /* Skip RD_base + SGI_base */
+                if (typer & GICR_TYPER_VLPIS)
+                    ptr += SZ_64K * 2; /* Skip VLPI_base + reserved page */
+            }
+        } while (!(typer & GICR_TYPER_LAST));
+    }
+
+    /* We couldn't even deal with ourselves... */
+    WARN(true, "CPU%d: mpidr %llx has no re-distributor!\n",
+         smp_processor_id(), (unsigned long long)mpidr);
+    return -ENODEV;
+}
+
+static void gic_cpu_init(void)
+{
+    void __iomem *rbase;
+
+    /* Register ourselves with the rest of the world */
+    if (gic_populate_rdist())
+        return;
+
+    gic_enable_redist();
+
+    rbase = gic_data_rdist_sgi_base();
+
+    gic_cpu_config(rbase, gic_redist_wait_for_rwp);
+
+    /* Enable system registers */
+    gic_enable_sre();
+
+    /* Set priority mask register */
+    gic_write_pmr(DEFAULT_PMR_VALUE);
+
+    /* EOI deactivates interrupt too (mode 0) */
+    gic_write_ctlr(ICC_CTLR_EL1_EOImode_drop_dir);
+
+    /* ... and let's hit the road... */
+    gic_write_grpen1(1);
+}
+
+#ifdef CONFIG_SMP
+static int gic_secondary_init(struct notifier_block *nfb,
+                              unsigned long action, void *hcpu)
+{
+    if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
+        gic_cpu_init();
+    return NOTIFY_OK;
+}
+
+/*
+ * Notifier for enabling the GIC CPU interface. Set an arbitrarily high
+ * priority because the GIC needs to be up before the ARM generic timers.
+ */
+static struct notifier_block gic_cpu_notifier = {
+    .notifier_call = gic_secondary_init,
+    .priority = 100,
+};
+
+static u16 gic_compute_target_list(int *base_cpu, const struct cpumask *mask,
+                                   u64 cluster_id)
+{
+    int cpu = *base_cpu;
+    u64 mpidr = cpu_logical_map(cpu);
+    u16 tlist = 0;
+
+    while (cpu < nr_cpu_ids) {
+        /*
+         * If we ever get a cluster of more than 16 CPUs, just
+         * scream and skip that CPU.
+         */
+        if (WARN_ON((mpidr & 0xff) >= 16))
+            goto out;
+
+        tlist |= 1 << (mpidr & 0xf);
+
+        cpu = cpumask_next(cpu, mask);
+        if (cpu == nr_cpu_ids)
+            goto out;
+
+        mpidr = cpu_logical_map(cpu);
+
+        if (cluster_id != (mpidr & ~0xffUL)) {
+            cpu--;
+            goto out;
+        }
+    }
+out:
+    *base_cpu = cpu;
+    return tlist;
+}
+
+static void gic_send_sgi(u64 cluster_id, u16 tlist, unsigned int irq)
+{
+    u64 val;
+
+    val = (MPIDR_AFFINITY_LEVEL(cluster_id, 3) << 48 |
+           MPIDR_AFFINITY_LEVEL(cluster_id, 2) << 32 |
+           irq << 24                                 |
+           MPIDR_AFFINITY_LEVEL(cluster_id, 1) << 16 |
+           tlist);
+
+    pr_debug("CPU%d: ICC_SGI1R_EL1 %llx\n", smp_processor_id(), val);
+    gic_write_sgi1r(val);
+}
+
+static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
+{
+    int cpu;
+
+    if (WARN_ON(irq >= 16))
+        return;
+
+    /*
+     * Ensure that stores to Normal memory are visible to the
+     * other CPUs before issuing the IPI.
+     */
+    smp_wmb();
+
+    for_each_cpu_mask(cpu, *mask) {
+        u64 cluster_id = cpu_logical_map(cpu) & ~0xffUL;
+        u16 tlist;
+
+        tlist = gic_compute_target_list(&cpu, mask, cluster_id);
+        gic_send_sgi(cluster_id, tlist, irq);
+    }
+
+    /* Force the above writes to ICC_SGI1R_EL1 to be executed */
+    isb();
+}
+
+static void gic_smp_init(void)
+{
+    set_smp_cross_call(gic_raise_softirq);
+    register_cpu_notifier(&gic_cpu_notifier);
+}
+
+static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
+                            bool force)
+{
+    unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask);
+    void __iomem *reg;
+    int enabled;
+    u64 val;
+
+    if (gic_irq_in_rdist(d))
+        return -EINVAL;
+
+    /* If interrupt was enabled, disable it first */
+    enabled = gic_peek_irq(d, GICD_ISENABLER);
+    if (enabled)
+        gic_mask_irq(d);
+
+    reg = gic_dist_base(d) + GICD_IROUTER + (gic_irq(d) * 8);
+    val = gic_mpidr_to_affinity(cpu_logical_map(cpu));
+
+    writeq_relaxed(val, reg);
+
+    /*
+     * If the interrupt was enabled, enable it again. Otherwise,
+     * just wait for the distributor to have digested our changes.
+     */
+    if (enabled)
+        gic_unmask_irq(d);
+    else
+        gic_dist_wait_for_rwp();
+
+    return IRQ_SET_MASK_OK;
+}
+#else
+#define gic_set_affinity    NULL
+#define gic_smp_init()      do { } while(0)
+#endif
+
+static struct irq_chip gic_chip = {
+    .name             = "GICv3",
+    .irq_mask         = gic_mask_irq,
+    .irq_unmask       = gic_unmask_irq,
+    .irq_eoi          = gic_eoi_irq,
+    .irq_set_type     = gic_set_type,
+    .irq_set_affinity = gic_set_affinity,
+};
+
+static int gic_irq_domain_map(struct irq_domain *d, unsigned int irq,
+                              irq_hw_number_t hw)
+{
+    /* SGIs are private to the core kernel */
+    if (hw < 16)
+        return -EPERM;
+    /* PPIs */
+    if (hw < 32) {
+        irq_set_percpu_devid(irq);
+        irq_set_chip_and_handler(irq, &gic_chip,
+                                 handle_percpu_devid_irq);
+        set_irq_flags(irq, IRQF_VALID | IRQF_NOAUTOEN);
+    }
+    /* SPIs */
+    if (hw >= 32 && hw < gic_data.irq_nr) {
+        irq_set_chip_and_handler(irq, &gic_chip,
+                                 handle_fasteoi_irq);
+        set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
+    }
+    irq_set_chip_data(irq, d->host_data);
+    return 0;
+}
+
+static int gic_irq_domain_xlate(struct irq_domain *d,
+                                struct device_node *controller,
+                                const u32 *intspec, unsigned int intsize,
+                                unsigned long *out_hwirq, unsigned int *out_type)
+{
+    if (d->of_node != controller)
+        return -EINVAL;
+    if (intsize < 3)
+        return -EINVAL;
+
+    switch(intspec[0]) {
+    case 0:         /* SPI */
+        *out_hwirq = intspec[1] + 32;
+        break;
+    case 1:         /* PPI */
+        *out_hwirq = intspec[1] + 16;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    *out_type = intspec[2] & IRQ_TYPE_SENSE_MASK;
+    return 0;
+}
+
+static const struct irq_domain_ops gic_irq_domain_ops = {
+    .map = gic_irq_domain_map,
+    .xlate = gic_irq_domain_xlate,
+};
+
+static int __init gic_of_init(struct device_node *node, struct device_node *parent)
+{
+    void __iomem *dist_base;
+    void __iomem **redist_base;
+    u64 redist_stride;
+    u32 redist_regions;
+    u32 reg;
+    int gic_irqs;
+    int err;
+    int i;
+
+    dist_base = of_iomap(node, 0);
+    if (!dist_base) {
+        pr_err("%s: unable to map gic dist registers\n",
+               node->full_name);
+        return -ENXIO;
+    }
+
+    reg = readl_relaxed(dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
+    if (reg != GIC_PIDR2_ARCH_GICv3 && reg != GIC_PIDR2_ARCH_GICv4) {
+        pr_err("%s: no distributor detected, giving up\n",
+               node->full_name);
+        err = -ENODEV;
+        goto out_unmap_dist;
+    }
+
+    if (of_property_read_u32(node, "#redistributor-regions", &redist_regions))
+        redist_regions = 1;
+
+    redist_base = kzalloc(sizeof(*redist_base) * redist_regions, GFP_KERNEL);
+    if (!redist_base) {
+        err = -ENOMEM;
+        goto out_unmap_dist;
+    }
+
+    for (i = 0; i < redist_regions; i++) {
+        redist_base[i] = of_iomap(node, 1 + i);
+        if (!redist_base[i]) {
+            pr_err("%s: couldn't map region %d\n",
+                   node->full_name, i);
+            err = -ENODEV;
+            goto out_unmap_rdist;
+        }
+    }
+
+    if (of_property_read_u64(node, "redistributor-stride", &redist_stride))
+        redist_stride = 0;
+
+    gic_data.dist_base = dist_base;
+    gic_data.redist_base = redist_base;
+    gic_data.redist_regions = redist_regions;
+    gic_data.redist_stride = redist_stride;
+
+    /*
+     * Find out how many interrupts are supported.
+     * The GIC only supports up to 1020 interrupt sources (SGI+PPI+SPI)
+     */
+    gic_irqs = readl_relaxed(gic_data.dist_base + GICD_TYPER) & 0x1f;
+    gic_irqs = (gic_irqs + 1) * 32;
+    if (gic_irqs > 1020)
+        gic_irqs = 1020;
+    gic_data.irq_nr = gic_irqs;
+
+    gic_data.domain = irq_domain_add_tree(node, &gic_irq_domain_ops,
+                                          &gic_data);
+    gic_data.rdist = alloc_percpu(typeof(*gic_data.rdist));
+
+    if (WARN_ON(!gic_data.domain) || WARN_ON(!gic_data.rdist)) {
+        err = -ENOMEM;
+        goto out_free;
+    }
+
+    set_handle_irq(gic_handle_irq);
+
+    gic_smp_init();
+    gic_dist_init();
+    gic_cpu_init();
+
+    return 0;
+
+out_free:
+    if (gic_data.domain)
+        irq_domain_remove(gic_data.domain);
+    free_percpu(gic_data.rdist);
+out_unmap_rdist:
+    for (i = 0; i < redist_regions; i++)
+        if (redist_base[i])
+            iounmap(redist_base[i]);
+    kfree(redist_base);
+out_unmap_dist:
+    iounmap(dist_base);
+    return err;
+}
+
+IRQCHIP_DECLARE(gic_v3, "arm,gic-v3", gic_of_init);
+4 -55
drivers/irqchip/irq-gic.c
···
 #include <asm/exception.h>
 #include <asm/smp_plat.h>
 
+#include "irq-gic-common.h"
 #include "irqchip.h"
 
 union gic_base {
···
 {
     void __iomem *base = gic_dist_base(d);
     unsigned int gicirq = gic_irq(d);
-    u32 enablemask = 1 << (gicirq % 32);
-    u32 enableoff = (gicirq / 32) * 4;
-    u32 confmask = 0x2 << ((gicirq % 16) * 2);
-    u32 confoff = (gicirq / 16) * 4;
-    bool enabled = false;
-    u32 val;
 
     /* Interrupt configuration for SGIs can't be changed */
     if (gicirq < 16)
···
     if (gic_arch_extn.irq_set_type)
         gic_arch_extn.irq_set_type(d, type);
 
-    val = readl_relaxed(base + GIC_DIST_CONFIG + confoff);
-    if (type == IRQ_TYPE_LEVEL_HIGH)
-        val &= ~confmask;
-    else if (type == IRQ_TYPE_EDGE_RISING)
-        val |= confmask;
-
-    /*
-     * As recommended by the spec, disable the interrupt before changing
-     * the configuration
-     */
-    if (readl_relaxed(base + GIC_DIST_ENABLE_SET + enableoff) & enablemask) {
-        writel_relaxed(enablemask, base + GIC_DIST_ENABLE_CLEAR + enableoff);
-        enabled = true;
-    }
-
-    writel_relaxed(val, base + GIC_DIST_CONFIG + confoff);
-
-    if (enabled)
-        writel_relaxed(enablemask, base + GIC_DIST_ENABLE_SET + enableoff);
+    gic_configure_irq(gicirq, type, base, NULL);
 
     raw_spin_unlock(&irq_controller_lock);
···
     writel_relaxed(0, base + GIC_DIST_CTRL);
 
     /*
-     * Set all global interrupts to be level triggered, active low.
-     */
-    for (i = 32; i < gic_irqs; i += 16)
-        writel_relaxed(0, base + GIC_DIST_CONFIG + i * 4 / 16);
-
-    /*
      * Set all global interrupts to this CPU only.
      */
     cpumask = gic_get_cpumask(gic);
···
     for (i = 32; i < gic_irqs; i += 4)
         writel_relaxed(cpumask, base + GIC_DIST_TARGET + i * 4 / 4);
 
-    /*
-     * Set priority on all global interrupts.
-     */
-    for (i = 32; i < gic_irqs; i += 4)
-        writel_relaxed(0xa0a0a0a0, base + GIC_DIST_PRI + i * 4 / 4);
-
-    /*
-     * Disable all interrupts.  Leave the PPI and SGIs alone
-     * as these enables are banked registers.
-     */
-    for (i = 32; i < gic_irqs; i += 32)
-        writel_relaxed(0xffffffff, base + GIC_DIST_ENABLE_CLEAR + i * 4 / 32);
+    gic_dist_config(base, gic_irqs, NULL);
 
     writel_relaxed(1, base + GIC_DIST_CTRL);
 }
···
         if (i != cpu)
             gic_cpu_map[i] &= ~cpu_mask;
 
-    /*
-     * Deal with the banked PPI and SGI interrupts - disable all
-     * PPI interrupts, ensure all SGI interrupts are enabled.
-     */
-    writel_relaxed(0xffff0000, dist_base + GIC_DIST_ENABLE_CLEAR);
-    writel_relaxed(0x0000ffff, dist_base + GIC_DIST_ENABLE_SET);
-
-    /*
-     * Set priority on PPI and SGI interrupts
-     */
-    for (i = 0; i < 32; i += 4)
-        writel_relaxed(0xa0a0a0a0, dist_base + GIC_DIST_PRI + i * 4 / 4);
+    gic_cpu_config(dist_base, NULL);
 
     writel_relaxed(0xf0, base + GIC_CPU_PRIMASK);
     writel_relaxed(1, base + GIC_CPU_CTRL);
+200
include/linux/irqchip/arm-gic-v3.h
···
+/*
+ * Copyright (C) 2013, 2014 ARM Limited, All Rights Reserved.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __LINUX_IRQCHIP_ARM_GIC_V3_H
+#define __LINUX_IRQCHIP_ARM_GIC_V3_H
+
+#include <asm/sysreg.h>
+
+/*
+ * Distributor registers. We assume we're running non-secure, with ARE
+ * being set. Secure-only and non-ARE registers are not described.
+ */
+#define GICD_CTLR                   0x0000
+#define GICD_TYPER                  0x0004
+#define GICD_IIDR                   0x0008
+#define GICD_STATUSR                0x0010
+#define GICD_SETSPI_NSR             0x0040
+#define GICD_CLRSPI_NSR             0x0048
+#define GICD_SETSPI_SR              0x0050
+#define GICD_CLRSPI_SR              0x0058
+#define GICD_SEIR                   0x0068
+#define GICD_ISENABLER              0x0100
+#define GICD_ICENABLER              0x0180
+#define GICD_ISPENDR                0x0200
+#define GICD_ICPENDR                0x0280
+#define GICD_ISACTIVER              0x0300
+#define GICD_ICACTIVER              0x0380
+#define GICD_IPRIORITYR             0x0400
+#define GICD_ICFGR                  0x0C00
+#define GICD_IROUTER                0x6000
+#define GICD_PIDR2                  0xFFE8
+
+#define GICD_CTLR_RWP               (1U << 31)
+#define GICD_CTLR_ARE_NS            (1U << 4)
+#define GICD_CTLR_ENABLE_G1A        (1U << 1)
+#define GICD_CTLR_ENABLE_G1         (1U << 0)
+
+#define GICD_IROUTER_SPI_MODE_ONE   (0U << 31)
+#define GICD_IROUTER_SPI_MODE_ANY   (1U << 31)
+
+#define GIC_PIDR2_ARCH_MASK         0xf0
+#define GIC_PIDR2_ARCH_GICv3        0x30
+#define GIC_PIDR2_ARCH_GICv4        0x40
+
+/*
+ * Re-Distributor registers, offsets from RD_base
+ */
+#define GICR_CTLR                   GICD_CTLR
+#define GICR_IIDR                   0x0004
+#define GICR_TYPER                  0x0008
+#define GICR_STATUSR                GICD_STATUSR
+#define GICR_WAKER                  0x0014
+#define GICR_SETLPIR                0x0040
+#define GICR_CLRLPIR                0x0048
+#define GICR_SEIR                   GICD_SEIR
+#define GICR_PROPBASER              0x0070
+#define GICR_PENDBASER              0x0078
+#define GICR_INVLPIR                0x00A0
+#define GICR_INVALLR                0x00B0
+#define GICR_SYNCR                  0x00C0
+#define GICR_MOVLPIR                0x0100
+#define GICR_MOVALLR                0x0110
+#define GICR_PIDR2                  GICD_PIDR2
+
+#define GICR_WAKER_ProcessorSleep   (1U << 1)
+#define GICR_WAKER_ChildrenAsleep   (1U << 2)
+
+/*
+ * Re-Distributor registers, offsets from SGI_base
+ */
+#define GICR_ISENABLER0             GICD_ISENABLER
+#define GICR_ICENABLER0             GICD_ICENABLER
+#define GICR_ISPENDR0               GICD_ISPENDR
+#define GICR_ICPENDR0               GICD_ICPENDR
+#define GICR_ISACTIVER0             GICD_ISACTIVER
+#define GICR_ICACTIVER0             GICD_ICACTIVER
+#define GICR_IPRIORITYR0            GICD_IPRIORITYR
+#define GICR_ICFGR0                 GICD_ICFGR
+
+#define GICR_TYPER_VLPIS            (1U << 1)
+#define GICR_TYPER_LAST             (1U << 4)
+
+/*
+ * CPU interface registers
+ */
+#define ICC_CTLR_EL1_EOImode_drop_dir   (0U << 1)
+#define ICC_CTLR_EL1_EOImode_drop       (1U << 1)
+#define ICC_SRE_EL1_SRE                 (1U << 0)
+
+/*
+ * Hypervisor interface registers (SRE only)
+ */
+#define ICH_LR_VIRTUAL_ID_MASK      ((1UL << 32) - 1)
+
+#define ICH_LR_EOI                  (1UL << 41)
+#define ICH_LR_GROUP                (1UL << 60)
+#define ICH_LR_STATE                (3UL << 62)
+#define ICH_LR_PENDING_BIT          (1UL << 62)
+#define ICH_LR_ACTIVE_BIT           (1UL << 63)
+
+#define ICH_MISR_EOI                (1 << 0)
+#define ICH_MISR_U                  (1 << 1)
+
+#define ICH_HCR_EN                  (1 << 0)
+#define ICH_HCR_UIE                 (1 << 1)
+
+#define ICH_VMCR_CTLR_SHIFT         0
+#define ICH_VMCR_CTLR_MASK          (0x21f << ICH_VMCR_CTLR_SHIFT)
+#define ICH_VMCR_BPR1_SHIFT         18
+#define ICH_VMCR_BPR1_MASK          (7 << ICH_VMCR_BPR1_SHIFT)
+#define ICH_VMCR_BPR0_SHIFT         21
+#define ICH_VMCR_BPR0_MASK          (7 << ICH_VMCR_BPR0_SHIFT)
+#define ICH_VMCR_PMR_SHIFT          24
+#define ICH_VMCR_PMR_MASK           (0xffUL << ICH_VMCR_PMR_SHIFT)
+
+#define ICC_EOIR1_EL1               sys_reg(3, 0, 12, 12, 1)
+#define ICC_IAR1_EL1                sys_reg(3, 0, 12, 12, 0)
+#define ICC_SGI1R_EL1               sys_reg(3, 0, 12, 11, 5)
+#define ICC_PMR_EL1                 sys_reg(3, 0, 4, 6, 0)
+#define ICC_CTLR_EL1                sys_reg(3, 0, 12, 12, 4)
+#define ICC_SRE_EL1                 sys_reg(3, 0, 12, 12, 5)
+#define ICC_GRPEN1_EL1              sys_reg(3, 0, 12, 12, 7)
+
+#define ICC_IAR1_EL1_SPURIOUS       0x3ff
+
+#define ICC_SRE_EL2                 sys_reg(3, 4, 12, 9, 5)
+
+#define ICC_SRE_EL2_SRE             (1 << 0)
+#define ICC_SRE_EL2_ENABLE          (1 << 3)
+
+/*
+ * System register definitions
+ */
+#define ICH_VSEIR_EL2               sys_reg(3, 4, 12, 9, 4)
+#define ICH_HCR_EL2                 sys_reg(3, 4, 12, 11, 0)
+#define ICH_VTR_EL2                 sys_reg(3, 4, 12, 11, 1)
+#define ICH_MISR_EL2                sys_reg(3, 4, 12, 11, 2)
+#define ICH_EISR_EL2                sys_reg(3, 4, 12, 11, 3)
+#define ICH_ELSR_EL2                sys_reg(3, 4, 12, 11, 5)
+#define ICH_VMCR_EL2                sys_reg(3, 4, 12, 11, 7)
+
+#define __LR0_EL2(x)                sys_reg(3, 4, 12, 12, x)
+#define __LR8_EL2(x)                sys_reg(3, 4, 12, 13, x)
+
+#define ICH_LR0_EL2                 __LR0_EL2(0)
+#define ICH_LR1_EL2                 __LR0_EL2(1)
+#define ICH_LR2_EL2                 __LR0_EL2(2)
+#define ICH_LR3_EL2                 __LR0_EL2(3)
+#define ICH_LR4_EL2                 __LR0_EL2(4)
+#define ICH_LR5_EL2                 __LR0_EL2(5)
+#define ICH_LR6_EL2                 __LR0_EL2(6)
+#define ICH_LR7_EL2                 __LR0_EL2(7)
+#define ICH_LR8_EL2                 __LR8_EL2(0)
+#define ICH_LR9_EL2                 __LR8_EL2(1)
+#define ICH_LR10_EL2                __LR8_EL2(2)
+#define ICH_LR11_EL2                __LR8_EL2(3)
+#define ICH_LR12_EL2                __LR8_EL2(4)
+#define ICH_LR13_EL2                __LR8_EL2(5)
+#define ICH_LR14_EL2                __LR8_EL2(6)
+#define ICH_LR15_EL2                __LR8_EL2(7)
+
+#define __AP0Rx_EL2(x)              sys_reg(3, 4, 12, 8, x)
+#define ICH_AP0R0_EL2               __AP0Rx_EL2(0)
+#define ICH_AP0R1_EL2               __AP0Rx_EL2(1)
+#define ICH_AP0R2_EL2               __AP0Rx_EL2(2)
+#define ICH_AP0R3_EL2               __AP0Rx_EL2(3)
+
+#define __AP1Rx_EL2(x)              sys_reg(3, 4, 12, 9, x)
+#define ICH_AP1R0_EL2               __AP1Rx_EL2(0)
+#define ICH_AP1R1_EL2               __AP1Rx_EL2(1)
+#define ICH_AP1R2_EL2               __AP1Rx_EL2(2)
+#define ICH_AP1R3_EL2               __AP1Rx_EL2(3)
+
+#ifndef __ASSEMBLY__
+
+#include <linux/stringify.h>
+
+static inline void gic_write_eoir(u64 irq)
+{
+    asm volatile("msr_s " __stringify(ICC_EOIR1_EL1) ", %0" : : "r" (irq));
+    isb();
+}
+
+#endif
+
+#endif
+1
include/uapi/linux/audit.h
···
 #define __AUDIT_ARCH_64BIT  0x80000000
 #define __AUDIT_ARCH_LE     0x40000000
 
+#define AUDIT_ARCH_AARCH64  (EM_AARCH64|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)
 #define AUDIT_ARCH_ALPHA    (EM_ALPHA|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)
 #define AUDIT_ARCH_ARM      (EM_ARM|__AUDIT_ARCH_LE)
 #define AUDIT_ARCH_ARMEB    (EM_ARM)