Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-next/bti' into for-next/core

Support for Branch Target Identification (BTI) in user and kernel
(Mark Brown and others)
* for-next/bti: (39 commits)
arm64: vdso: Fix CFI directives in sigreturn trampoline
arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction
arm64: bti: Fix support for userspace only BTI
arm64: kconfig: Update and comment GCC version check for kernel BTI
arm64: vdso: Map the vDSO text with guarded pages when built for BTI
arm64: vdso: Force the vDSO to be linked as BTI when built for BTI
arm64: vdso: Annotate for BTI
arm64: asm: Provide a mechanism for generating ELF note for BTI
arm64: bti: Provide Kconfig for kernel mode BTI
arm64: mm: Mark executable text as guarded pages
arm64: bpf: Annotate JITed code for BTI
arm64: Set GP bit in kernel page tables to enable BTI for the kernel
arm64: asm: Override SYM_FUNC_START when building the kernel with BTI
arm64: bti: Support building kernel C code using BTI
arm64: Document why we enable PAC support for leaf functions
arm64: insn: Report PAC and BTI instructions as skippable
arm64: insn: Don't assume unrecognized HINTs are skippable
arm64: insn: Provide a better name for aarch64_insn_is_nop()
arm64: insn: Add constants for new HINT instruction decode
arm64: Disable old style assembly annotations
...

+963 -219
+2
Documentation/arm64/cpu-feature-registers.rst
···
  +------------------------------+---------+---------+
  | SSBS                         | [7-4]   |    y    |
  +------------------------------+---------+---------+
 +| BT                           | [3-0]   |    y    |
 ++------------------------------+---------+---------+


  4) MIDR_EL1 - Main ID Register
+5
Documentation/arm64/elf_hwcaps.rst
···

     Functionality implied by ID_AA64ISAR0_EL1.RNDR == 0b0001.

 +HWCAP2_BTI
 +
 +    Functionality implied by ID_AA64PFR1_EL1.BT == 0b0001.
 +
 +
  4. Unused AT_HWCAP bits
  -----------------------

+1
Documentation/filesystems/proc.rst
···
     hg    huge page advise flag
     nh    no huge page advise flag
     mg    mergable advise flag
 +   bt    arm64 BTI guarded page
     ==    =======================================

  Note that there is no guarantee that every flag and associated mnemonic will
+46
arch/arm64/Kconfig
···
 	select ACPI_MCFG if (ACPI && PCI)
 	select ACPI_SPCR_TABLE if ACPI
 	select ACPI_PPTT if ACPI
+	select ARCH_BINFMT_ELF_STATE
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_DMA_PREP_COHERENT
···
 	select ARCH_HAS_SYSCALL_WRAPPER
 	select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+	select ARCH_HAVE_ELF_PROT
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_INLINE_READ_LOCK if !PREEMPTION
 	select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
···
 	select ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE if !PREEMPTION
 	select ARCH_KEEP_MEMBLOCK
 	select ARCH_USE_CMPXCHG_LOCKREF
+	select ARCH_USE_GNU_PROPERTY
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
+	select ARCH_USE_SYM_ANNOTATIONS
 	select ARCH_SUPPORTS_MEMORY_FAILURE
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && (GCC_VERSION >= 50000 || CC_IS_CLANG)
···
 endmenu

 menu "ARMv8.5 architectural features"
+
+config ARM64_BTI
+	bool "Branch Target Identification support"
+	default y
+	help
+	  Branch Target Identification (part of the ARMv8.5 Extensions)
+	  provides a mechanism to limit the set of locations to which computed
+	  branch instructions such as BR or BLR can jump.
+
+	  To make use of BTI on CPUs that support it, say Y.
+
+	  BTI is intended to provide complementary protection to other control
+	  flow integrity protection mechanisms, such as the Pointer
+	  authentication mechanism provided as part of the ARMv8.3 Extensions.
+	  For this reason, it does not make sense to enable this option without
+	  also enabling support for pointer authentication.  Thus, when
+	  enabling this option you should also select ARM64_PTR_AUTH=y.
+
+	  Userspace binaries must also be specifically compiled to make use of
+	  this mechanism.  If you say N here or the hardware does not support
+	  BTI, such binaries can still run, but you get no additional
+	  enforcement of branch destinations.
+
+config ARM64_BTI_KERNEL
+	bool "Use Branch Target Identification for kernel"
+	default y
+	depends on ARM64_BTI
+	depends on ARM64_PTR_AUTH
+	depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
+	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
+	depends on !CC_IS_GCC || GCC_VERSION >= 100100
+	depends on !(CC_IS_CLANG && GCOV_KERNEL)
+	depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
+	help
+	  Build the kernel with Branch Target Identification annotations
+	  and enable enforcement of this for kernel code.  When this option
+	  is enabled and the system supports BTI all kernel code including
+	  modular code must have BTI enabled.
+
+config CC_HAS_BRANCH_PROT_PAC_RET_BTI
+	# GCC 9 or later, clang 8 or later
+	def_bool $(cc-option,-mbranch-protection=pac-ret+leaf+bti)

 config ARM64_E0PD
 	bool "Enable support for E0PD"
+7
arch/arm64/Makefile
···
 ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
 branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
+# We enable additional protection for leaf functions as there is some
+# narrow potential for ROP protection benefits and no substantial
+# performance impact has been observed.
+ifeq ($(CONFIG_ARM64_BTI_KERNEL),y)
+branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI) := -mbranch-protection=pac-ret+leaf+bti
+else
 branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
+endif
 # -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the
 # compiler to generate them and consequently to break the single image contract
 # we pass it only to the assembler. This option is utilized only in case of non
+50
arch/arm64/include/asm/assembler.h
···
 .Lyield_out_\@ :
 	.endm

+/*
+ * This macro emits a program property note section identifying
+ * architecture features which require special handling, mainly for
+ * use in assembly files included in the VDSO.
+ */
+
+#define NT_GNU_PROPERTY_TYPE_0  5
+#define GNU_PROPERTY_AARCH64_FEATURE_1_AND      0xc0000000
+
+#define GNU_PROPERTY_AARCH64_FEATURE_1_BTI      (1U << 0)
+#define GNU_PROPERTY_AARCH64_FEATURE_1_PAC      (1U << 1)
+
+#ifdef CONFIG_ARM64_BTI_KERNEL
+#define GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT          \
+		((GNU_PROPERTY_AARCH64_FEATURE_1_BTI |  \
+		  GNU_PROPERTY_AARCH64_FEATURE_1_PAC))
+#endif
+
+#ifdef GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT
+.macro emit_aarch64_feature_1_and, feat=GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT
+	.pushsection .note.gnu.property, "a"
+	.align  3
+	.long   2f - 1f
+	.long   6f - 3f
+	.long   NT_GNU_PROPERTY_TYPE_0
+1:      .string "GNU"
+2:
+	.align  3
+3:      .long   GNU_PROPERTY_AARCH64_FEATURE_1_AND
+	.long   5f - 4f
+4:
+	/*
+	 * This is described with an array of char in the Linux API
+	 * spec but the text and all other usage (including binutils,
+	 * clang and GCC) treat this as a 32 bit value so no swizzling
+	 * is required for big endian.
+	 */
+	.long   \feat
+5:
+	.align  3
+6:
+	.popsection
+.endm
+
+#else
+.macro emit_aarch64_feature_1_and, feat=0
+.endm
+
+#endif /* GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT */
+
 #endif /* __ASM_ASSEMBLER_H */
+2 -1
arch/arm64/include/asm/cpucaps.h
···
 #define ARM64_HAS_ADDRESS_AUTH			52
 #define ARM64_HAS_GENERIC_AUTH			53
 #define ARM64_HAS_32BIT_EL1			54
+#define ARM64_BTI				55

-#define ARM64_NCAPS				55
+#define ARM64_NCAPS				56

 #endif /* __ASM_CPUCAPS_H */
+5
arch/arm64/include/asm/cpufeature.h
···
 	       system_uses_irq_prio_masking();
 }

+static inline bool system_supports_bti(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_BTI) && cpus_have_const_cap(ARM64_BTI);
+}
+
 #define ARM64_BP_HARDEN_UNKNOWN		-1
 #define ARM64_BP_HARDEN_WA_NEEDED	0
 #define ARM64_BP_HARDEN_NOT_REQUIRED	1
+50
arch/arm64/include/asm/elf.h
···

 #ifndef __ASSEMBLY__

+#include <uapi/linux/elf.h>
 #include <linux/bug.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/types.h>
 #include <asm/processor.h> /* for signal_minsigstksz, used by ARCH_DLINFO */

 typedef unsigned long elf_greg_t;
···
 	aarch32_setup_additional_pages

 #endif /* CONFIG_COMPAT */
+
+struct arch_elf_state {
+	int flags;
+};
+
+#define ARM64_ELF_BTI		(1 << 0)
+
+#define INIT_ARCH_ELF_STATE {	\
+	.flags = 0,		\
+}
+
+static inline int arch_parse_elf_property(u32 type, const void *data,
+					  size_t datasz, bool compat,
+					  struct arch_elf_state *arch)
+{
+	/* No known properties for AArch32 yet */
+	if (IS_ENABLED(CONFIG_COMPAT) && compat)
+		return 0;
+
+	if (type == GNU_PROPERTY_AARCH64_FEATURE_1_AND) {
+		const u32 *p = data;
+
+		if (datasz != sizeof(*p))
+			return -ENOEXEC;
+
+		if (system_supports_bti() &&
+		    (*p & GNU_PROPERTY_AARCH64_FEATURE_1_BTI))
+			arch->flags |= ARM64_ELF_BTI;
+	}
+
+	return 0;
+}
+
+static inline int arch_elf_pt_proc(void *ehdr, void *phdr,
+				   struct file *f, bool is_interp,
+				   struct arch_elf_state *state)
+{
+	return 0;
+}
+
+static inline int arch_check_elf(void *ehdr, bool has_interp,
+				 void *interp_ehdr,
+				 struct arch_elf_state *state)
+{
+	return 0;
+}

 #endif /* !__ASSEMBLY__ */

+1 -1
arch/arm64/include/asm/esr.h
···
 #define ESR_ELx_EC_PAC		(0x09)	/* EL2 and above */
 /* Unallocated EC: 0x0A - 0x0B */
 #define ESR_ELx_EC_CP14_64	(0x0C)
-/* Unallocated EC: 0x0d */
+#define ESR_ELx_EC_BTI		(0x0D)
 #define ESR_ELx_EC_ILL		(0x0E)
 /* Unallocated EC: 0x0F - 0x10 */
 #define ESR_ELx_EC_SVC32	(0x11)
+1
arch/arm64/include/asm/exception.h
···
 asmlinkage void enter_from_user_mode(void);
 void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
 void do_undefinstr(struct pt_regs *regs);
+void do_bti(struct pt_regs *regs);
 asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr);
 void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
			struct pt_regs *regs);
+1
arch/arm64/include/asm/hwcap.h
···
 #define KERNEL_HWCAP_BF16		__khwcap2_feature(BF16)
 #define KERNEL_HWCAP_DGH		__khwcap2_feature(DGH)
 #define KERNEL_HWCAP_RNG		__khwcap2_feature(RNG)
+#define KERNEL_HWCAP_BTI		__khwcap2_feature(BTI)

 /*
  * This yields a mask that user programs can use to figure out what
+27 -3
arch/arm64/include/asm/insn.h
···
				  * system instructions */
 };

-enum aarch64_insn_hint_op {
+enum aarch64_insn_hint_cr_op {
 	AARCH64_INSN_HINT_NOP	= 0x0 << 5,
 	AARCH64_INSN_HINT_YIELD	= 0x1 << 5,
 	AARCH64_INSN_HINT_WFE	= 0x2 << 5,
 	AARCH64_INSN_HINT_WFI	= 0x3 << 5,
 	AARCH64_INSN_HINT_SEV	= 0x4 << 5,
 	AARCH64_INSN_HINT_SEVL	= 0x5 << 5,
+
+	AARCH64_INSN_HINT_XPACLRI    = 0x07 << 5,
+	AARCH64_INSN_HINT_PACIA_1716 = 0x08 << 5,
+	AARCH64_INSN_HINT_PACIB_1716 = 0x0A << 5,
+	AARCH64_INSN_HINT_AUTIA_1716 = 0x0C << 5,
+	AARCH64_INSN_HINT_AUTIB_1716 = 0x0E << 5,
+	AARCH64_INSN_HINT_PACIAZ     = 0x18 << 5,
+	AARCH64_INSN_HINT_PACIASP    = 0x19 << 5,
+	AARCH64_INSN_HINT_PACIBZ     = 0x1A << 5,
+	AARCH64_INSN_HINT_PACIBSP    = 0x1B << 5,
+	AARCH64_INSN_HINT_AUTIAZ     = 0x1C << 5,
+	AARCH64_INSN_HINT_AUTIASP    = 0x1D << 5,
+	AARCH64_INSN_HINT_AUTIBZ     = 0x1E << 5,
+	AARCH64_INSN_HINT_AUTIBSP    = 0x1F << 5,
+
+	AARCH64_INSN_HINT_ESB  = 0x10 << 5,
+	AARCH64_INSN_HINT_PSB  = 0x11 << 5,
+	AARCH64_INSN_HINT_TSB  = 0x12 << 5,
+	AARCH64_INSN_HINT_CSDB = 0x14 << 5,
+
+	AARCH64_INSN_HINT_BTI   = 0x20 << 5,
+	AARCH64_INSN_HINT_BTIC  = 0x22 << 5,
+	AARCH64_INSN_HINT_BTIJ  = 0x24 << 5,
+	AARCH64_INSN_HINT_BTIJC = 0x26 << 5,
 };

 enum aarch64_insn_imm_type {
···
 #undef __AARCH64_INSN_FUNCS

-bool aarch64_insn_is_nop(u32 insn);
+bool aarch64_insn_is_steppable_hint(u32 insn);
 bool aarch64_insn_is_branch_imm(u32 insn);

 static inline bool aarch64_insn_is_adr_adrp(u32 insn)
···
				 enum aarch64_insn_branch_type type);
 u32 aarch64_insn_gen_cond_branch_imm(unsigned long pc, unsigned long addr,
				     enum aarch64_insn_condition cond);
-u32 aarch64_insn_gen_hint(enum aarch64_insn_hint_op op);
+u32 aarch64_insn_gen_hint(enum aarch64_insn_hint_cr_op op);
 u32 aarch64_insn_gen_nop(void);
 u32 aarch64_insn_gen_branch_reg(enum aarch64_insn_register reg,
				enum aarch64_insn_branch_type type);
+4 -2
arch/arm64/include/asm/kvm_emulate.h
···
 static __always_inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
 {
-	if (vcpu_mode_is_32bit(vcpu))
+	if (vcpu_mode_is_32bit(vcpu)) {
 		kvm_skip_instr32(vcpu, is_wide_instr);
-	else
+	} else {
 		*vcpu_pc(vcpu) += 4;
+		*vcpu_cpsr(vcpu) &= ~PSR_BTYPE_MASK;
+	}

 	/* advance the singlestep state machine */
 	*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
+46
arch/arm64/include/asm/linkage.h
···
 #define __ALIGN		.align 2
 #define __ALIGN_STR	".align 2"

+#if defined(CONFIG_ARM64_BTI_KERNEL) && defined(__aarch64__)
+
+/*
+ * Since current versions of gas reject the BTI instruction unless we
+ * set the architecture version to v8.5 we use the hint instruction
+ * instead.
+ */
+#define BTI_C hint 34 ;
+#define BTI_J hint 36 ;
+
+/*
+ * When using in-kernel BTI we need to ensure that PCS-conformant assembly
+ * functions have suitable annotations.  Override SYM_FUNC_START to insert
+ * a BTI landing pad at the start of everything.
+ */
+#define SYM_FUNC_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
+	BTI_C
+
+#define SYM_FUNC_START_NOALIGN(name)			\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)	\
+	BTI_C
+
+#define SYM_FUNC_START_LOCAL(name)			\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)	\
+	BTI_C
+
+#define SYM_FUNC_START_LOCAL_NOALIGN(name)		\
+	SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)	\
+	BTI_C
+
+#define SYM_FUNC_START_WEAK(name)			\
+	SYM_START(name, SYM_L_WEAK, SYM_A_ALIGN)	\
+	BTI_C
+
+#define SYM_FUNC_START_WEAK_NOALIGN(name)		\
+	SYM_START(name, SYM_L_WEAK, SYM_A_NONE)		\
+	BTI_C
+
+#define SYM_INNER_LABEL(name, linkage)			\
+	.type name SYM_T_NONE ASM_NL			\
+	SYM_ENTRY(name, linkage, SYM_A_NONE)		\
+	BTI_J
+
+#endif
+
 /*
  * Annotate a function as position independent, i.e., safe to be called before
  * the kernel virtual mapping is activated.
+37
arch/arm64/include/asm/mman.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_MMAN_H__
+#define __ASM_MMAN_H__
+
+#include <linux/compiler.h>
+#include <linux/types.h>
+#include <uapi/asm/mman.h>
+
+static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
+	unsigned long pkey __always_unused)
+{
+	if (system_supports_bti() && (prot & PROT_BTI))
+		return VM_ARM64_BTI;
+
+	return 0;
+}
+#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
+
+static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
+{
+	return (vm_flags & VM_ARM64_BTI) ? __pgprot(PTE_GP) : __pgprot(0);
+}
+#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
+
+static inline bool arch_validate_prot(unsigned long prot,
+	unsigned long addr __always_unused)
+{
+	unsigned long supported = PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM;
+
+	if (system_supports_bti())
+		supported |= PROT_BTI;
+
+	return (prot & ~supported) == 0;
+}
+#define arch_validate_prot(prot, addr) arch_validate_prot(prot, addr)
+
+#endif /* ! __ASM_MMAN_H__ */
+1
arch/arm64/include/asm/pgtable-hwdef.h
···
 #define PTE_SHARED		(_AT(pteval_t, 3) << 8)		/* SH[1:0], inner shareable */
 #define PTE_AF			(_AT(pteval_t, 1) << 10)	/* Access Flag */
 #define PTE_NG			(_AT(pteval_t, 1) << 11)	/* nG */
+#define PTE_GP			(_AT(pteval_t, 1) << 50)	/* BTI guarded */
 #define PTE_DBM			(_AT(pteval_t, 1) << 51)	/* Dirty Bit Management */
 #define PTE_CONT		(_AT(pteval_t, 1) << 52)	/* Contiguous range */
 #define PTE_PXN			(_AT(pteval_t, 1) << 53)	/* Privileged XN */
+11
arch/arm64/include/asm/pgtable-prot.h
···

 #ifndef __ASSEMBLY__

+#include <asm/cpufeature.h>
 #include <asm/pgtable-types.h>

 extern bool arm64_use_ng_mappings;
···
 #define PTE_MAYBE_NG		(arm64_use_ng_mappings ? PTE_NG : 0)
 #define PMD_MAYBE_NG		(arm64_use_ng_mappings ? PMD_SECT_NG : 0)
+
+/*
+ * If we have userspace only BTI we don't want to mark kernel pages
+ * guarded even if the system does support BTI.
+ */
+#ifdef CONFIG_ARM64_BTI_KERNEL
+#define PTE_MAYBE_GP		(system_supports_bti() ? PTE_GP : 0)
+#else
+#define PTE_MAYBE_GP		0
+#endif

 #define PROT_DEFAULT		(_PROT_DEFAULT | PTE_MAYBE_NG)
 #define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
+1 -1
arch/arm64/include/asm/pgtable.h
···
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 	const pteval_t mask = PTE_USER | PTE_PXN | PTE_UXN | PTE_RDONLY |
-			      PTE_PROT_NONE | PTE_VALID | PTE_WRITE;
+			      PTE_PROT_NONE | PTE_VALID | PTE_WRITE | PTE_GP;
 	/* preserve the hardware dirty information */
 	if (pte_hw_dirty(pte))
 		pte = pte_mkdirty(pte);
+1
arch/arm64/include/asm/ptrace.h
···
 #define GIC_PRIO_PSR_I_SET		(1 << 4)

 /* Additional SPSR bits not exposed in the UABI */
+
 #define PSR_IL_BIT		(1 << 20)

 /* AArch32-specific ptrace requests */
+3
arch/arm64/include/asm/sysreg.h
···
 #endif

 /* SCTLR_EL1 specific flags. */
+#define SCTLR_EL1_BT1		(BIT(36))
+#define SCTLR_EL1_BT0		(BIT(35))
 #define SCTLR_EL1_UCI		(BIT(26))
 #define SCTLR_EL1_E0E		(BIT(24))
 #define SCTLR_EL1_SPAN		(BIT(23))
···
 #define ID_AA64PFR1_SSBS_PSTATE_NI	0
 #define ID_AA64PFR1_SSBS_PSTATE_ONLY	1
 #define ID_AA64PFR1_SSBS_PSTATE_INSNS	2
+#define ID_AA64PFR1_BT_BTI		0x1

 /* id_aa64zfr0 */
 #define ID_AA64ZFR0_F64MM_SHIFT		56
+1
arch/arm64/include/uapi/asm/hwcap.h
···
 #define HWCAP2_BF16		(1 << 14)
 #define HWCAP2_DGH		(1 << 15)
 #define HWCAP2_RNG		(1 << 16)
+#define HWCAP2_BTI		(1 << 17)

 #endif /* _UAPI__ASM_HWCAP_H */
+9
arch/arm64/include/uapi/asm/mman.h
···
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI__ASM_MMAN_H
+#define _UAPI__ASM_MMAN_H
+
+#include <asm-generic/mman.h>
+
+#define PROT_BTI	0x10		/* BTI guarded page */
+
+#endif /* ! _UAPI__ASM_MMAN_H */
+9
arch/arm64/include/uapi/asm/ptrace.h
···
 #define PSR_I_BIT	0x00000080
 #define PSR_A_BIT	0x00000100
 #define PSR_D_BIT	0x00000200
+#define PSR_BTYPE_MASK	0x00000c00
 #define PSR_SSBS_BIT	0x00001000
 #define PSR_PAN_BIT	0x00400000
 #define PSR_UAO_BIT	0x00800000
···
 #define PSR_Z_BIT	0x40000000
 #define PSR_N_BIT	0x80000000

+#define PSR_BTYPE_SHIFT		10
+
 /*
  * Groups of PSR bits
  */
···
 #define PSR_s		0x00ff0000	/* Status */
 #define PSR_x		0x0000ff00	/* Extension */
 #define PSR_c		0x000000ff	/* Control */
+
+/* Convenience names for the values of PSTATE.BTYPE */
+#define PSR_BTYPE_NONE	(0b00 << PSR_BTYPE_SHIFT)
+#define PSR_BTYPE_JC	(0b01 << PSR_BTYPE_SHIFT)
+#define PSR_BTYPE_C	(0b10 << PSR_BTYPE_SHIFT)
+#define PSR_BTYPE_J	(0b11 << PSR_BTYPE_SHIFT)

 /* syscall emulation path in ptrace */
 #define PTRACE_SYSEMU		31
+2 -2
arch/arm64/kernel/cpu-reset.S
···
  * branch to what would be the reset vector. It must be executed with the
  * flat identity mapping.
  */
-ENTRY(__cpu_soft_restart)
+SYM_CODE_START(__cpu_soft_restart)
 	/* Clear sctlr_el1 flags. */
 	mrs	x12, sctlr_el1
 	mov_q	x13, SCTLR_ELx_FLAGS
···
 	mov	x1, x3				// arg1
 	mov	x2, x4				// arg2
 	br	x8
-ENDPROC(__cpu_soft_restart)
+SYM_CODE_END(__cpu_soft_restart)

 .popsection
+37
arch/arm64/kernel/cpufeature.c
···
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_RASFRAC_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_BT_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
···
 }
 #endif

+#ifdef CONFIG_ARM64_BTI
+static void bti_enable(const struct arm64_cpu_capabilities *__unused)
+{
+	/*
+	 * Use of X16/X17 for tail-calls and trampolines that jump to
+	 * function entry points using BR is a requirement for
+	 * marking binaries with GNU_PROPERTY_AARCH64_FEATURE_1_BTI.
+	 * So, be strict and forbid other BRs using other registers to
+	 * jump onto a PACIxSP instruction:
+	 */
+	sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_BT0 | SCTLR_EL1_BT1);
+	isb();
+}
+#endif /* CONFIG_ARM64_BTI */
+
 /* Internal helper functions to match cpu capability type */
 static bool
 cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
···
 		.min_field_value = 1,
 	},
 #endif
+#ifdef CONFIG_ARM64_BTI
+	{
+		.desc = "Branch Target Identification",
+		.capability = ARM64_BTI,
+#ifdef CONFIG_ARM64_BTI_KERNEL
+		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
+#else
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+#endif
+		.matches = has_cpuid_feature,
+		.cpu_enable = bti_enable,
+		.sys_reg = SYS_ID_AA64PFR1_EL1,
+		.field_pos = ID_AA64PFR1_BT_SHIFT,
+		.min_field_value = ID_AA64PFR1_BT_BTI,
+		.sign = FTR_UNSIGNED,
+	},
+#endif
 	{},
 };
···
 	HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_F64MM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_F64MM, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM),
 #endif
 	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
+#ifdef CONFIG_ARM64_BTI
+	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_BT_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_BT_BTI, CAP_HWCAP, KERNEL_HWCAP_BTI),
+#endif
 #ifdef CONFIG_ARM64_PTR_AUTH
 	HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, KERNEL_HWCAP_PACA),
 	HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, KERNEL_HWCAP_PACG),
+1
arch/arm64/kernel/cpuinfo.c
···
 	"bf16",
 	"dgh",
 	"rng",
+	"bti",
 	NULL
 };
+2 -2
arch/arm64/kernel/efi-rt-wrapper.S
···

 #include <linux/linkage.h>

-ENTRY(__efi_rt_asm_wrapper)
+SYM_FUNC_START(__efi_rt_asm_wrapper)
 	stp	x29, x30, [sp, #-32]!
 	mov	x29, sp
···
 	b.ne	0f
 	ret
 0:	b	efi_handle_corrupted_x18	// tail call
-ENDPROC(__efi_rt_asm_wrapper)
+SYM_FUNC_END(__efi_rt_asm_wrapper)
+11
arch/arm64/kernel/entry-common.c
···
 }
 NOKPROBE_SYMBOL(el0_undef);

+static void notrace el0_bti(struct pt_regs *regs)
+{
+	user_exit_irqoff();
+	local_daif_restore(DAIF_PROCCTX);
+	do_bti(regs);
+}
+NOKPROBE_SYMBOL(el0_bti);
+
 static void notrace el0_inv(struct pt_regs *regs, unsigned long esr)
 {
 	user_exit_irqoff();
···
 		break;
 	case ESR_ELx_EC_UNKNOWN:
 		el0_undef(regs);
+		break;
+	case ESR_ELx_EC_BTI:
+		el0_bti(regs);
 		break;
 	case ESR_ELx_EC_BREAKPT_LOW:
 	case ESR_ELx_EC_SOFTSTP_LOW:
+10 -10
arch/arm64/kernel/entry-fpsimd.S
···
  *
  * x0 - pointer to struct fpsimd_state
  */
-ENTRY(fpsimd_save_state)
+SYM_FUNC_START(fpsimd_save_state)
 	fpsimd_save x0, 8
 	ret
-ENDPROC(fpsimd_save_state)
+SYM_FUNC_END(fpsimd_save_state)

 /*
  * Load the FP registers.
  *
  * x0 - pointer to struct fpsimd_state
  */
-ENTRY(fpsimd_load_state)
+SYM_FUNC_START(fpsimd_load_state)
 	fpsimd_restore x0, 8
 	ret
-ENDPROC(fpsimd_load_state)
+SYM_FUNC_END(fpsimd_load_state)

 #ifdef CONFIG_ARM64_SVE
-ENTRY(sve_save_state)
+SYM_FUNC_START(sve_save_state)
 	sve_save 0, x1, 2
 	ret
-ENDPROC(sve_save_state)
+SYM_FUNC_END(sve_save_state)

-ENTRY(sve_load_state)
+SYM_FUNC_START(sve_load_state)
 	sve_load 0, x1, x2, 3, x4
 	ret
-ENDPROC(sve_load_state)
+SYM_FUNC_END(sve_load_state)

-ENTRY(sve_get_vl)
+SYM_FUNC_START(sve_get_vl)
 	_sve_rdvl 0, 1
 	ret
-ENDPROC(sve_get_vl)
+SYM_FUNC_END(sve_get_vl)
 #endif /* CONFIG_ARM64_SVE */
+14 -13
arch/arm64/kernel/entry.S
···
 SYM_CODE_END(el0_error)

 /*
- * Ok, we need to do extra processing, enter the slow path.
- */
-work_pending:
-	mov	x0, sp				// 'regs'
-	bl	do_notify_resume
-#ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_on		// enabled while in userspace
-#endif
-	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for single-step
-	b	finish_ret_to_user
-/*
  * "slow" syscall return path.
  */
-ret_to_user:
+SYM_CODE_START_LOCAL(ret_to_user)
 	disable_daif
 	gic_prio_kentry_setup tmp=x3
 	ldr	x1, [tsk, #TSK_TI_FLAGS]
···
 	bl	stackleak_erase
 #endif
 	kernel_exit 0
-ENDPROC(ret_to_user)
+
+/*
+ * Ok, we need to do extra processing, enter the slow path.
+ */
+work_pending:
+	mov	x0, sp				// 'regs'
+	bl	do_notify_resume
+#ifdef CONFIG_TRACE_IRQFLAGS
+	bl	trace_hardirqs_on		// enabled while in userspace
+#endif
+	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for single-step
+	b	finish_ret_to_user
+SYM_CODE_END(ret_to_user)

 .popsection				// .entry.text
+8 -8
arch/arm64/kernel/hibernate-asm.S
···
  * x5: physical address of a zero page that remains zero after resume
  */
 .pushsection	".hibernate_exit.text", "ax"
-ENTRY(swsusp_arch_suspend_exit)
+SYM_CODE_START(swsusp_arch_suspend_exit)
 	/*
 	 * We execute from ttbr0, change ttbr1 to our copied linear map tables
 	 * with a break-before-make via the zero page
···
 	cbz	x24, 3f		/* Do we need to re-initialise EL2? */
 	hvc	#0
 3:	ret
-ENDPROC(swsusp_arch_suspend_exit)
+SYM_CODE_END(swsusp_arch_suspend_exit)

 /*
  * Restore the hyp stub.
···
  *
  * x24: The physical address of __hyp_stub_vectors
  */
-el1_sync:
+SYM_CODE_START_LOCAL(el1_sync)
 	msr	vbar_el2, x24
 	eret
-ENDPROC(el1_sync)
+SYM_CODE_END(el1_sync)

 .macro invalid_vector	label
-\label:
+SYM_CODE_START_LOCAL(\label)
 	b \label
-ENDPROC(\label)
+SYM_CODE_END(\label)
 .endm

 	invalid_vector	el2_sync_invalid
···

 /* el2 vectors - switch el2 here while we restore the memory image. */
 	.align 11
-ENTRY(hibernate_el2_vectors)
+SYM_CODE_START(hibernate_el2_vectors)
 	ventry	el2_sync_invalid		// Synchronous EL2t
 	ventry	el2_irq_invalid			// IRQ EL2t
 	ventry	el2_fiq_invalid			// FIQ EL2t
···
 	ventry	el1_irq_invalid			// IRQ 32-bit EL1
 	ventry	el1_fiq_invalid			// FIQ 32-bit EL1
 	ventry	el1_error_invalid		// Error 32-bit EL1
-END(hibernate_el2_vectors)
+SYM_CODE_END(hibernate_el2_vectors)

 .popsection
+10 -10
arch/arm64/kernel/hyp-stub.S
···

 	.align 11

-ENTRY(__hyp_stub_vectors)
+SYM_CODE_START(__hyp_stub_vectors)
 	ventry	el2_sync_invalid		// Synchronous EL2t
 	ventry	el2_irq_invalid			// IRQ EL2t
 	ventry	el2_fiq_invalid			// FIQ EL2t
···
 	ventry	el1_irq_invalid			// IRQ 32-bit EL1
 	ventry	el1_fiq_invalid			// FIQ 32-bit EL1
 	ventry	el1_error_invalid		// Error 32-bit EL1
-ENDPROC(__hyp_stub_vectors)
+SYM_CODE_END(__hyp_stub_vectors)

 	.align 11

-el1_sync:
+SYM_CODE_START_LOCAL(el1_sync)
 	cmp	x0, #HVC_SET_VECTORS
 	b.ne	2f
 	msr	vbar_el2, x1
···

 9:	mov	x0, xzr
 	eret
-ENDPROC(el1_sync)
+SYM_CODE_END(el1_sync)

 .macro invalid_vector	label
-\label:
+SYM_CODE_START_LOCAL(\label)
 	b \label
-ENDPROC(\label)
+SYM_CODE_END(\label)
 .endm

 	invalid_vector	el2_sync_invalid
···
  * initialisation entry point.
  */

-ENTRY(__hyp_set_vectors)
+SYM_FUNC_START(__hyp_set_vectors)
 	mov	x1, x0
 	mov	x0, #HVC_SET_VECTORS
 	hvc	#0
 	ret
-ENDPROC(__hyp_set_vectors)
+SYM_FUNC_END(__hyp_set_vectors)

-ENTRY(__hyp_reset_vectors)
+SYM_FUNC_START(__hyp_reset_vectors)
 	mov	x0, #HVC_RESET_VECTORS
 	hvc	#0
 	ret
-ENDPROC(__hyp_reset_vectors)
+SYM_FUNC_END(__hyp_reset_vectors)
+22 -10
arch/arm64/kernel/insn.c
···
 	return aarch64_insn_encoding_class[(insn >> 25) & 0xf];
 }

-/* NOP is an alias of HINT */
-bool __kprobes aarch64_insn_is_nop(u32 insn)
+bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
 {
 	if (!aarch64_insn_is_hint(insn))
 		return false;

 	switch (insn & 0xFE0) {
-	case AARCH64_INSN_HINT_YIELD:
-	case AARCH64_INSN_HINT_WFE:
-	case AARCH64_INSN_HINT_WFI:
-	case AARCH64_INSN_HINT_SEV:
-	case AARCH64_INSN_HINT_SEVL:
-		return false;
-	default:
+	case AARCH64_INSN_HINT_XPACLRI:
+	case AARCH64_INSN_HINT_PACIA_1716:
+	case AARCH64_INSN_HINT_PACIB_1716:
+	case AARCH64_INSN_HINT_AUTIA_1716:
+	case AARCH64_INSN_HINT_AUTIB_1716:
+	case AARCH64_INSN_HINT_PACIAZ:
+	case AARCH64_INSN_HINT_PACIASP:
+	case AARCH64_INSN_HINT_PACIBZ:
+	case AARCH64_INSN_HINT_PACIBSP:
+	case AARCH64_INSN_HINT_AUTIAZ:
+	case AARCH64_INSN_HINT_AUTIASP:
+	case AARCH64_INSN_HINT_AUTIBZ:
+	case AARCH64_INSN_HINT_AUTIBSP:
+	case AARCH64_INSN_HINT_BTI:
+	case AARCH64_INSN_HINT_BTIC:
+	case AARCH64_INSN_HINT_BTIJ:
+	case AARCH64_INSN_HINT_BTIJC:
+	case AARCH64_INSN_HINT_NOP:
 		return true;
+	default:
+		return false;
 	}
 }
···
 					     offset >> 2);
 }

-u32 __kprobes aarch64_insn_gen_hint(enum aarch64_insn_hint_op op)
+u32 __kprobes aarch64_insn_gen_hint(enum aarch64_insn_hint_cr_op op)
 {
 	return aarch64_insn_get_hint_value() | op;
 }
+1 -1
arch/arm64/kernel/probes/decode-insn.c
··· 46 46 * except for the NOP case. 47 47 */ 48 48 if (aarch64_insn_is_hint(insn)) 49 - return aarch64_insn_is_nop(insn); 49 + return aarch64_insn_is_steppable_hint(insn); 50 50 51 51 return true; 52 52 }
+2 -2
arch/arm64/kernel/probes/kprobes_trampoline.S
··· 61 61 ldp x28, x29, [sp, #S_X28] 62 62 .endm 63 63 64 - ENTRY(kretprobe_trampoline) 64 + SYM_CODE_START(kretprobe_trampoline) 65 65 sub sp, sp, #S_FRAME_SIZE 66 66 67 67 save_all_base_regs ··· 79 79 add sp, sp, #S_FRAME_SIZE 80 80 ret 81 81 82 - ENDPROC(kretprobe_trampoline) 82 + SYM_CODE_END(kretprobe_trampoline)
+39 -2
arch/arm64/kernel/process.c
··· 11 11 12 12 #include <linux/compat.h> 13 13 #include <linux/efi.h> 14 + #include <linux/elf.h> 14 15 #include <linux/export.h> 15 16 #include <linux/sched.h> 16 17 #include <linux/sched/debug.h> ··· 19 18 #include <linux/sched/task_stack.h> 20 19 #include <linux/kernel.h> 21 20 #include <linux/lockdep.h> 21 + #include <linux/mman.h> 22 22 #include <linux/mm.h> 23 23 #include <linux/stddef.h> 24 24 #include <linux/sysctl.h> ··· 211 209 while (1); 212 210 } 213 211 212 + #define bstr(suffix, str) [PSR_BTYPE_ ## suffix >> PSR_BTYPE_SHIFT] = str 213 + static const char *const btypes[] = { 214 + bstr(NONE, "--"), 215 + bstr( JC, "jc"), 216 + bstr( C, "-c"), 217 + bstr( J , "j-") 218 + }; 219 + #undef bstr 220 + 214 221 static void print_pstate(struct pt_regs *regs) 215 222 { 216 223 u64 pstate = regs->pstate; ··· 238 227 pstate & PSR_AA32_I_BIT ? 'I' : 'i', 239 228 pstate & PSR_AA32_F_BIT ? 'F' : 'f'); 240 229 } else { 241 - printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO)\n", 230 + const char *btype_str = btypes[(pstate & PSR_BTYPE_MASK) >> 231 + PSR_BTYPE_SHIFT]; 232 + 233 + printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO BTYPE=%s)\n", 242 234 pstate, 243 235 pstate & PSR_N_BIT ? 'N' : 'n', 244 236 pstate & PSR_Z_BIT ? 'Z' : 'z', ··· 252 238 pstate & PSR_I_BIT ? 'I' : 'i', 253 239 pstate & PSR_F_BIT ? 'F' : 'f', 254 240 pstate & PSR_PAN_BIT ? '+' : '-', 255 - pstate & PSR_UAO_BIT ? '+' : '-'); 241 + pstate & PSR_UAO_BIT ? '+' : '-', 242 + btype_str); 256 243 } 257 244 } 258 245 ··· 670 655 if (system_capabilities_finalized()) 671 656 preempt_schedule_irq(); 672 657 } 658 + 659 + #ifdef CONFIG_BINFMT_ELF 660 + int arch_elf_adjust_prot(int prot, const struct arch_elf_state *state, 661 + bool has_interp, bool is_interp) 662 + { 663 + /* 664 + * For dynamically linked executables the interpreter is 665 + * responsible for setting PROT_BTI on everything except 666 + * itself. 
667 + */ 668 + if (is_interp != has_interp) 669 + return prot; 670 + 671 + if (!(state->flags & ARM64_ELF_BTI)) 672 + return prot; 673 + 674 + if (prot & PROT_EXEC) 675 + prot |= PROT_BTI; 676 + 677 + return prot; 678 + } 679 + #endif
+1 -1
arch/arm64/kernel/ptrace.c
··· 1874 1874 */ 1875 1875 #define SPSR_EL1_AARCH64_RES0_BITS \ 1876 1876 (GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \ 1877 - GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5)) 1877 + GENMASK_ULL(20, 13) | GENMASK_ULL(5, 5)) 1878 1878 #define SPSR_EL1_AARCH32_RES0_BITS \ 1879 1879 (GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20)) 1880 1880
+22 -22
arch/arm64/kernel/reloc_test_syms.S
··· 5 5 6 6 #include <linux/linkage.h> 7 7 8 - ENTRY(absolute_data64) 8 + SYM_FUNC_START(absolute_data64) 9 9 ldr x0, 0f 10 10 ret 11 11 0: .quad sym64_abs 12 - ENDPROC(absolute_data64) 12 + SYM_FUNC_END(absolute_data64) 13 13 14 - ENTRY(absolute_data32) 14 + SYM_FUNC_START(absolute_data32) 15 15 ldr w0, 0f 16 16 ret 17 17 0: .long sym32_abs 18 - ENDPROC(absolute_data32) 18 + SYM_FUNC_END(absolute_data32) 19 19 20 - ENTRY(absolute_data16) 20 + SYM_FUNC_START(absolute_data16) 21 21 adr x0, 0f 22 22 ldrh w0, [x0] 23 23 ret 24 24 0: .short sym16_abs, 0 25 - ENDPROC(absolute_data16) 25 + SYM_FUNC_END(absolute_data16) 26 26 27 - ENTRY(signed_movw) 27 + SYM_FUNC_START(signed_movw) 28 28 movz x0, #:abs_g2_s:sym64_abs 29 29 movk x0, #:abs_g1_nc:sym64_abs 30 30 movk x0, #:abs_g0_nc:sym64_abs 31 31 ret 32 - ENDPROC(signed_movw) 32 + SYM_FUNC_END(signed_movw) 33 33 34 - ENTRY(unsigned_movw) 34 + SYM_FUNC_START(unsigned_movw) 35 35 movz x0, #:abs_g3:sym64_abs 36 36 movk x0, #:abs_g2_nc:sym64_abs 37 37 movk x0, #:abs_g1_nc:sym64_abs 38 38 movk x0, #:abs_g0_nc:sym64_abs 39 39 ret 40 - ENDPROC(unsigned_movw) 40 + SYM_FUNC_END(unsigned_movw) 41 41 42 42 .align 12 43 43 .space 0xff8 44 - ENTRY(relative_adrp) 44 + SYM_FUNC_START(relative_adrp) 45 45 adrp x0, sym64_rel 46 46 add x0, x0, #:lo12:sym64_rel 47 47 ret 48 - ENDPROC(relative_adrp) 48 + SYM_FUNC_END(relative_adrp) 49 49 50 50 .align 12 51 51 .space 0xffc 52 - ENTRY(relative_adrp_far) 52 + SYM_FUNC_START(relative_adrp_far) 53 53 adrp x0, memstart_addr 54 54 add x0, x0, #:lo12:memstart_addr 55 55 ret 56 - ENDPROC(relative_adrp_far) 56 + SYM_FUNC_END(relative_adrp_far) 57 57 58 - ENTRY(relative_adr) 58 + SYM_FUNC_START(relative_adr) 59 59 adr x0, sym64_rel 60 60 ret 61 - ENDPROC(relative_adr) 61 + SYM_FUNC_END(relative_adr) 62 62 63 - ENTRY(relative_data64) 63 + SYM_FUNC_START(relative_data64) 64 64 adr x1, 0f 65 65 ldr x0, [x1] 66 66 add x0, x0, x1 67 67 ret 68 68 0: .quad sym64_rel - . 
69 - ENDPROC(relative_data64) 69 + SYM_FUNC_END(relative_data64) 70 70 71 - ENTRY(relative_data32) 71 + SYM_FUNC_START(relative_data32) 72 72 adr x1, 0f 73 73 ldr w0, [x1] 74 74 add x0, x0, x1 75 75 ret 76 76 0: .long sym64_rel - . 77 - ENDPROC(relative_data32) 77 + SYM_FUNC_END(relative_data32) 78 78 79 - ENTRY(relative_data16) 79 + SYM_FUNC_START(relative_data16) 80 80 adr x1, 0f 81 81 ldrsh w0, [x1] 82 82 add x0, x0, x1 83 83 ret 84 84 0: .short sym64_rel - ., 0 85 - ENDPROC(relative_data16) 85 + SYM_FUNC_END(relative_data16)
+2 -2
arch/arm64/kernel/relocate_kernel.S
··· 26 26 * control_code_page, a special page which has been set up to be preserved 27 27 * during the copy operation. 28 28 */ 29 - ENTRY(arm64_relocate_new_kernel) 29 + SYM_CODE_START(arm64_relocate_new_kernel) 30 30 31 31 /* Setup the list loop variables. */ 32 32 mov x18, x2 /* x18 = dtb address */ ··· 111 111 mov x3, xzr 112 112 br x17 113 113 114 - ENDPROC(arm64_relocate_new_kernel) 114 + SYM_CODE_END(arm64_relocate_new_kernel) 115 115 116 116 .align 3 /* To keep the 64-bit values below naturally aligned. */ 117 117
+16
arch/arm64/kernel/signal.c
··· 732 732 regs->regs[29] = (unsigned long)&user->next_frame->fp; 733 733 regs->pc = (unsigned long)ka->sa.sa_handler; 734 734 735 + /* 736 + * Signal delivery is a (wacky) indirect function call in 737 + * userspace, so simulate the same setting of BTYPE as a BLR 738 + * <register containing the signal handler entry point>. 739 + * Signal delivery to a location in a PROT_BTI guarded page 740 + * that is not a function entry point will now trigger a 741 + * SIGILL in userspace. 742 + * 743 + * If the signal handler entry point is not in a PROT_BTI 744 + * guarded page, this is harmless. 745 + */ 746 + if (system_supports_bti()) { 747 + regs->pstate &= ~PSR_BTYPE_MASK; 748 + regs->pstate |= PSR_BTYPE_C; 749 + } 750 + 735 751 if (ka->sa.sa_flags & SA_RESTORER) 736 752 sigtramp = ka->sa.sa_restorer; 737 753 else
+6 -6
arch/arm64/kernel/sleep.S
··· 62 62 * 63 63 * x0 = struct sleep_stack_data area 64 64 */ 65 - ENTRY(__cpu_suspend_enter) 65 + SYM_FUNC_START(__cpu_suspend_enter) 66 66 stp x29, lr, [x0, #SLEEP_STACK_DATA_CALLEE_REGS] 67 67 stp x19, x20, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+16] 68 68 stp x21, x22, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+32] ··· 95 95 ldp x29, lr, [sp], #16 96 96 mov x0, #1 97 97 ret 98 - ENDPROC(__cpu_suspend_enter) 98 + SYM_FUNC_END(__cpu_suspend_enter) 99 99 100 100 .pushsection ".idmap.text", "awx" 101 - ENTRY(cpu_resume) 101 + SYM_CODE_START(cpu_resume) 102 102 bl el2_setup // if in EL2 drop to EL1 cleanly 103 103 bl __cpu_setup 104 104 /* enable the MMU early - so we can access sleep_save_stash by va */ ··· 106 106 bl __enable_mmu 107 107 ldr x8, =_cpu_resume 108 108 br x8 109 - ENDPROC(cpu_resume) 109 + SYM_CODE_END(cpu_resume) 110 110 .ltorg 111 111 .popsection 112 112 113 - ENTRY(_cpu_resume) 113 + SYM_FUNC_START(_cpu_resume) 114 114 mrs x1, mpidr_el1 115 115 adr_l x8, mpidr_hash // x8 = struct mpidr_hash virt address 116 116 ··· 146 146 ldp x29, lr, [x29] 147 147 mov x0, #0 148 148 ret 149 - ENDPROC(_cpu_resume) 149 + SYM_FUNC_END(_cpu_resume)
+4 -4
arch/arm64/kernel/smccc-call.S
··· 30 30 * unsigned long a6, unsigned long a7, struct arm_smccc_res *res, 31 31 * struct arm_smccc_quirk *quirk) 32 32 */ 33 - ENTRY(__arm_smccc_smc) 33 + SYM_FUNC_START(__arm_smccc_smc) 34 34 SMCCC smc 35 - ENDPROC(__arm_smccc_smc) 35 + SYM_FUNC_END(__arm_smccc_smc) 36 36 EXPORT_SYMBOL(__arm_smccc_smc) 37 37 38 38 /* ··· 41 41 * unsigned long a6, unsigned long a7, struct arm_smccc_res *res, 42 42 * struct arm_smccc_quirk *quirk) 43 43 */ 44 - ENTRY(__arm_smccc_hvc) 44 + SYM_FUNC_START(__arm_smccc_hvc) 45 45 SMCCC hvc 46 - ENDPROC(__arm_smccc_hvc) 46 + SYM_FUNC_END(__arm_smccc_hvc) 47 47 EXPORT_SYMBOL(__arm_smccc_hvc)
+18
arch/arm64/kernel/syscall.c
··· 98 98 regs->orig_x0 = regs->regs[0]; 99 99 regs->syscallno = scno; 100 100 101 + /* 102 + * BTI note: 103 + * The architecture does not guarantee that SPSR.BTYPE is zero 104 + * on taking an SVC, so we could return to userspace with a 105 + * non-zero BTYPE after the syscall. 106 + * 107 + * This shouldn't matter except when userspace is explicitly 108 + * doing something stupid, such as setting PROT_BTI on a page 109 + * that lacks conforming BTI/PACIxSP instructions, falling 110 + * through from one executable page to another with differing 111 + * PROT_BTI, or messing with BTYPE via ptrace: in such cases, 112 + * userspace should not be surprised if a SIGILL occurs on 113 + * syscall return. 114 + * 115 + * So, don't touch regs->pstate & PSR_BTYPE_MASK here. 116 + * (Similarly for HVC and SMC elsewhere.) 117 + */ 118 + 101 119 cortex_a76_erratum_1463225_svc_handler(); 102 120 local_daif_restore(DAIF_PROCCTX); 103 121 user_exit();
+71 -60
arch/arm64/kernel/traps.c
··· 272 272 } 273 273 } 274 274 275 + #ifdef CONFIG_COMPAT 276 + #define PSTATE_IT_1_0_SHIFT 25 277 + #define PSTATE_IT_1_0_MASK (0x3 << PSTATE_IT_1_0_SHIFT) 278 + #define PSTATE_IT_7_2_SHIFT 10 279 + #define PSTATE_IT_7_2_MASK (0x3f << PSTATE_IT_7_2_SHIFT) 280 + 281 + static u32 compat_get_it_state(struct pt_regs *regs) 282 + { 283 + u32 it, pstate = regs->pstate; 284 + 285 + it = (pstate & PSTATE_IT_1_0_MASK) >> PSTATE_IT_1_0_SHIFT; 286 + it |= ((pstate & PSTATE_IT_7_2_MASK) >> PSTATE_IT_7_2_SHIFT) << 2; 287 + 288 + return it; 289 + } 290 + 291 + static void compat_set_it_state(struct pt_regs *regs, u32 it) 292 + { 293 + u32 pstate_it; 294 + 295 + pstate_it = (it << PSTATE_IT_1_0_SHIFT) & PSTATE_IT_1_0_MASK; 296 + pstate_it |= ((it >> 2) << PSTATE_IT_7_2_SHIFT) & PSTATE_IT_7_2_MASK; 297 + 298 + regs->pstate &= ~PSR_AA32_IT_MASK; 299 + regs->pstate |= pstate_it; 300 + } 301 + 302 + static void advance_itstate(struct pt_regs *regs) 303 + { 304 + u32 it; 305 + 306 + /* ARM mode */ 307 + if (!(regs->pstate & PSR_AA32_T_BIT) || 308 + !(regs->pstate & PSR_AA32_IT_MASK)) 309 + return; 310 + 311 + it = compat_get_it_state(regs); 312 + 313 + /* 314 + * If this is the last instruction of the block, wipe the IT 315 + * state. Otherwise advance it. 
316 + */ 317 + if (!(it & 7)) 318 + it = 0; 319 + else 320 + it = (it & 0xe0) | ((it << 1) & 0x1f); 321 + 322 + compat_set_it_state(regs, it); 323 + } 324 + #else 325 + static void advance_itstate(struct pt_regs *regs) 326 + { 327 + } 328 + #endif 329 + 275 330 void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) 276 331 { 277 332 regs->pc += size; ··· 337 282 */ 338 283 if (user_mode(regs)) 339 284 user_fastforward_single_step(current); 285 + 286 + if (compat_user_mode(regs)) 287 + advance_itstate(regs); 288 + else 289 + regs->pstate &= ~PSR_BTYPE_MASK; 340 290 } 341 291 342 292 static LIST_HEAD(undef_hook); ··· 470 410 force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc); 471 411 } 472 412 NOKPROBE_SYMBOL(do_undefinstr); 413 + 414 + void do_bti(struct pt_regs *regs) 415 + { 416 + BUG_ON(!user_mode(regs)); 417 + force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc); 418 + } 419 + NOKPROBE_SYMBOL(do_bti); 473 420 474 421 #define __user_cache_maint(insn, address, res) \ 475 422 if (address >= user_addr_max()) { \ ··· 633 566 {}, 634 567 }; 635 568 636 - 637 569 #ifdef CONFIG_COMPAT 638 - #define PSTATE_IT_1_0_SHIFT 25 639 - #define PSTATE_IT_1_0_MASK (0x3 << PSTATE_IT_1_0_SHIFT) 640 - #define PSTATE_IT_7_2_SHIFT 10 641 - #define PSTATE_IT_7_2_MASK (0x3f << PSTATE_IT_7_2_SHIFT) 642 - 643 - static u32 compat_get_it_state(struct pt_regs *regs) 644 - { 645 - u32 it, pstate = regs->pstate; 646 - 647 - it = (pstate & PSTATE_IT_1_0_MASK) >> PSTATE_IT_1_0_SHIFT; 648 - it |= ((pstate & PSTATE_IT_7_2_MASK) >> PSTATE_IT_7_2_SHIFT) << 2; 649 - 650 - return it; 651 - } 652 - 653 - static void compat_set_it_state(struct pt_regs *regs, u32 it) 654 - { 655 - u32 pstate_it; 656 - 657 - pstate_it = (it << PSTATE_IT_1_0_SHIFT) & PSTATE_IT_1_0_MASK; 658 - pstate_it |= ((it >> 2) << PSTATE_IT_7_2_SHIFT) & PSTATE_IT_7_2_MASK; 659 - 660 - regs->pstate &= ~PSR_AA32_IT_MASK; 661 - regs->pstate |= pstate_it; 662 - } 663 - 664 570 static bool cp15_cond_valid(unsigned int 
esr, struct pt_regs *regs) 665 571 { 666 572 int cond; ··· 654 614 return aarch32_opcode_cond_checks[cond](regs->pstate); 655 615 } 656 616 657 - static void advance_itstate(struct pt_regs *regs) 658 - { 659 - u32 it; 660 - 661 - /* ARM mode */ 662 - if (!(regs->pstate & PSR_AA32_T_BIT) || 663 - !(regs->pstate & PSR_AA32_IT_MASK)) 664 - return; 665 - 666 - it = compat_get_it_state(regs); 667 - 668 - /* 669 - * If this is the last instruction of the block, wipe the IT 670 - * state. Otherwise advance it. 671 - */ 672 - if (!(it & 7)) 673 - it = 0; 674 - else 675 - it = (it & 0xe0) | ((it << 1) & 0x1f); 676 - 677 - compat_set_it_state(regs, it); 678 - } 679 - 680 - static void arm64_compat_skip_faulting_instruction(struct pt_regs *regs, 681 - unsigned int sz) 682 - { 683 - advance_itstate(regs); 684 - arm64_skip_faulting_instruction(regs, sz); 685 - } 686 - 687 617 static void compat_cntfrq_read_handler(unsigned int esr, struct pt_regs *regs) 688 618 { 689 619 int reg = (esr & ESR_ELx_CP15_32_ISS_RT_MASK) >> ESR_ELx_CP15_32_ISS_RT_SHIFT; 690 620 691 621 pt_regs_write_reg(regs, reg, arch_timer_get_rate()); 692 - arm64_compat_skip_faulting_instruction(regs, 4); 622 + arm64_skip_faulting_instruction(regs, 4); 693 623 } 694 624 695 625 static const struct sys64_hook cp15_32_hooks[] = { ··· 679 669 680 670 pt_regs_write_reg(regs, rt, lower_32_bits(val)); 681 671 pt_regs_write_reg(regs, rt2, upper_32_bits(val)); 682 - arm64_compat_skip_faulting_instruction(regs, 4); 672 + arm64_skip_faulting_instruction(regs, 4); 683 673 } 684 674 685 675 static const struct sys64_hook cp15_64_hooks[] = { ··· 700 690 * There is no T16 variant of a CP access, so we 701 691 * always advance PC by 4 bytes. 
702 692 */ 703 - arm64_compat_skip_faulting_instruction(regs, 4); 693 + arm64_skip_faulting_instruction(regs, 4); 704 694 return; 705 695 } 706 696 ··· 763 753 [ESR_ELx_EC_CP10_ID] = "CP10 MRC/VMRS", 764 754 [ESR_ELx_EC_PAC] = "PAC", 765 755 [ESR_ELx_EC_CP14_64] = "CP14 MCRR/MRRC", 756 + [ESR_ELx_EC_BTI] = "BTI", 766 757 [ESR_ELx_EC_ILL] = "PSTATE.IL", 767 758 [ESR_ELx_EC_SVC32] = "SVC (AArch32)", 768 759 [ESR_ELx_EC_HVC32] = "HVC (AArch32)",
+5 -1
arch/arm64/kernel/vdso.c
··· 136 136 int uses_interp) 137 137 { 138 138 unsigned long vdso_base, vdso_text_len, vdso_mapping_len; 139 + unsigned long gp_flags = 0; 139 140 void *ret; 140 141 141 142 vdso_text_len = vdso_info[abi].vdso_pages << PAGE_SHIFT; ··· 155 154 if (IS_ERR(ret)) 156 155 goto up_fail; 157 156 157 + if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) && system_supports_bti()) 158 + gp_flags = VM_ARM64_BTI; 159 + 158 160 vdso_base += PAGE_SIZE; 159 161 mm->context.vdso = (void *)vdso_base; 160 162 ret = _install_special_mapping(mm, vdso_base, vdso_text_len, 161 - VM_READ|VM_EXEC| 163 + VM_READ|VM_EXEC|gp_flags| 162 164 VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC, 163 165 vdso_info[abi].cm); 164 166 if (IS_ERR(ret))
+3 -1
arch/arm64/kernel/vdso/Makefile
··· 17 17 targets := $(obj-vdso) vdso.so vdso.so.dbg 18 18 obj-vdso := $(addprefix $(obj)/, $(obj-vdso)) 19 19 20 + btildflags-$(CONFIG_ARM64_BTI_KERNEL) += -z force-bti 21 + 20 22 # -Bsymbolic has been added for consistency with arm, the compat vDSO and 21 23 # potential future proofing if we end up with internal calls to the exported 22 24 # routines, as x86 does (see 6f121e548f83 ("x86, vdso: Reimplement vdso.so 23 25 # preparation in build-time C")). 24 26 ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \ 25 - -Bsymbolic --eh-frame-hdr --build-id -n -T 27 + -Bsymbolic --eh-frame-hdr --build-id -n $(btildflags-y) -T 26 28 27 29 ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18 28 30 ccflags-y += -DDISABLE_BRANCH_PROFILING
+3
arch/arm64/kernel/vdso/note.S
··· 12 12 #include <linux/version.h> 13 13 #include <linux/elfnote.h> 14 14 #include <linux/build-salt.h> 15 + #include <asm/assembler.h> 15 16 16 17 ELFNOTE_START(Linux, 0, "a") 17 18 .long LINUX_VERSION_CODE 18 19 ELFNOTE_END 19 20 20 21 BUILD_SALT 22 + 23 + emit_aarch64_feature_1_and
+47 -7
arch/arm64/kernel/vdso/sigreturn.S
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 3 * Sigreturn trampoline for returning from a signal when the SA_RESTORER 4 - * flag is not set. 4 + * flag is not set. It serves primarily as a hall of shame for crappy 5 + * unwinders and features an exciting but mysterious NOP instruction. 6 + * 7 + * It's also fragile as hell, so please think twice before changing anything 8 + * in here. 5 9 * 6 10 * Copyright (C) 2012 ARM Limited 7 11 * ··· 13 9 */ 14 10 15 11 #include <linux/linkage.h> 12 + #include <asm/assembler.h> 16 13 #include <asm/unistd.h> 17 14 18 15 .text 19 16 20 - nop 21 - SYM_FUNC_START(__kernel_rt_sigreturn) 17 + /* Ensure that the mysterious NOP can be associated with a function. */ 22 18 .cfi_startproc 19 + 20 + /* 21 + * .cfi_signal_frame causes the corresponding Frame Description Entry in the 22 + * .eh_frame section to be annotated as a signal frame. This allows DWARF 23 + * unwinders (e.g. libstdc++) to implement _Unwind_GetIPInfo(), which permits 24 + * unwinding out of the signal trampoline without the need for the mysterious 25 + * NOP. 26 + */ 23 27 .cfi_signal_frame 24 - .cfi_def_cfa x29, 0 25 - .cfi_offset x29, 0 * 8 26 - .cfi_offset x30, 1 * 8 28 + 29 + /* 30 + * Tell the unwinder where to locate the frame record linking back to the 31 + * interrupted context. We don't provide unwind info for registers other 32 + * than the frame pointer and the link register here; in practice, this 33 + * is sufficient for unwinding in C/C++ based runtimes and the values in 34 + * the sigcontext may have been modified by this point anyway. Debuggers 35 + * already have baked-in strategies for attempting to unwind out of signals. 36 + */ 37 + .cfi_def_cfa x29, 0 38 + .cfi_offset x29, 0 * 8 39 + .cfi_offset x30, 1 * 8 40 + 41 + /* 42 + * This mysterious NOP is required for some unwinders (e.g. libc++) that 43 + * unconditionally subtract one from the result of _Unwind_GetIP() in order to 44 + * identify the calling function. 
45 + * Hack borrowed from arch/powerpc/kernel/vdso64/sigtramp.S. 46 + */ 47 + nop // Mysterious NOP 48 + 49 + /* 50 + * GDB relies on being able to identify the sigreturn instruction sequence to 51 + * unwind from signal handlers. We cannot, therefore, use SYM_FUNC_START() 52 + * here, as it will emit a BTI C instruction and break the unwinder. Thankfully, 53 + * this function is only ever called from a RET and so omitting the landing pad 54 + * is perfectly fine. 55 + */ 56 + SYM_CODE_START(__kernel_rt_sigreturn) 27 57 mov x8, #__NR_rt_sigreturn 28 58 svc #0 29 59 .cfi_endproc 30 - SYM_FUNC_END(__kernel_rt_sigreturn) 60 + SYM_CODE_END(__kernel_rt_sigreturn) 61 + 62 + emit_aarch64_feature_1_and
+3
arch/arm64/kernel/vdso/vdso.S
··· 8 8 #include <linux/init.h> 9 9 #include <linux/linkage.h> 10 10 #include <linux/const.h> 11 + #include <asm/assembler.h> 11 12 #include <asm/page.h> 12 13 13 14 .globl vdso_start, vdso_end ··· 20 19 vdso_end: 21 20 22 21 .previous 22 + 23 + emit_aarch64_feature_1_and
+11 -8
arch/arm64/kernel/vdso32/sigreturn.S
··· 3 3 * This file provides both A32 and T32 versions, in accordance with the 4 4 * arm sigreturn code. 5 5 * 6 + * Please read the comments in arch/arm64/kernel/vdso/sigreturn.S to 7 + * understand some of the craziness in here. 8 + * 6 9 * Copyright (C) 2018 ARM Limited 7 10 */ 8 11 ··· 20 17 .save {r0-r15} 21 18 .pad #COMPAT_SIGFRAME_REGS_OFFSET 22 19 nop 23 - SYM_FUNC_START(__kernel_sigreturn_arm) 20 + SYM_CODE_START(__kernel_sigreturn_arm) 24 21 mov r7, #__NR_compat_sigreturn 25 22 svc #0 26 23 .fnend 27 - SYM_FUNC_END(__kernel_sigreturn_arm) 24 + SYM_CODE_END(__kernel_sigreturn_arm) 28 25 29 26 .fnstart 30 27 .save {r0-r15} 31 28 .pad #COMPAT_RT_SIGFRAME_REGS_OFFSET 32 29 nop 33 - SYM_FUNC_START(__kernel_rt_sigreturn_arm) 30 + SYM_CODE_START(__kernel_rt_sigreturn_arm) 34 31 mov r7, #__NR_compat_rt_sigreturn 35 32 svc #0 36 33 .fnend 37 - SYM_FUNC_END(__kernel_rt_sigreturn_arm) 34 + SYM_CODE_END(__kernel_rt_sigreturn_arm) 38 35 39 36 .thumb 40 37 .fnstart 41 38 .save {r0-r15} 42 39 .pad #COMPAT_SIGFRAME_REGS_OFFSET 43 40 nop 44 - SYM_FUNC_START(__kernel_sigreturn_thumb) 41 + SYM_CODE_START(__kernel_sigreturn_thumb) 45 42 mov r7, #__NR_compat_sigreturn 46 43 svc #0 47 44 .fnend 48 - SYM_FUNC_END(__kernel_sigreturn_thumb) 45 + SYM_CODE_END(__kernel_sigreturn_thumb) 49 46 50 47 .fnstart 51 48 .save {r0-r15} 52 49 .pad #COMPAT_RT_SIGFRAME_REGS_OFFSET 53 50 nop 54 - SYM_FUNC_START(__kernel_rt_sigreturn_thumb) 51 + SYM_CODE_START(__kernel_rt_sigreturn_thumb) 55 52 mov r7, #__NR_compat_rt_sigreturn 56 53 svc #0 57 54 .fnend 58 - SYM_FUNC_END(__kernel_rt_sigreturn_thumb) 55 + SYM_CODE_END(__kernel_rt_sigreturn_thumb)
+5
arch/arm64/mm/dump.c
··· 146 146 .set = "UXN", 147 147 .clear = " ", 148 148 }, { 149 + .mask = PTE_GP, 150 + .val = PTE_GP, 151 + .set = "GP", 152 + .clear = " ", 153 + }, { 149 154 .mask = PTE_ATTRINDX_MASK, 150 155 .val = PTE_ATTRINDX(MT_DEVICE_nGnRnE), 151 156 .set = "DEVICE/nGnRnE",
+24
arch/arm64/mm/mmu.c
··· 610 610 #endif 611 611 612 612 /* 613 + * Open coded check for BTI, only for use to determine configuration 614 + * for early mappings for before the cpufeature code has run. 615 + */ 616 + static bool arm64_early_this_cpu_has_bti(void) 617 + { 618 + u64 pfr1; 619 + 620 + if (!IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) 621 + return false; 622 + 623 + pfr1 = read_sysreg_s(SYS_ID_AA64PFR1_EL1); 624 + return cpuid_feature_extract_unsigned_field(pfr1, 625 + ID_AA64PFR1_BT_SHIFT); 626 + } 627 + 628 + /* 613 629 * Create fine-grained mappings for the kernel. 614 630 */ 615 631 static void __init map_kernel(pgd_t *pgdp) ··· 639 623 * explicitly requested with rodata=off. 640 624 */ 641 625 pgprot_t text_prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC; 626 + 627 + /* 628 + * If we have a CPU that supports BTI and a kernel built for 629 + * BTI then mark the kernel executable text as guarded pages 630 + * now so we don't have to rewrite the page tables later. 631 + */ 632 + if (arm64_early_this_cpu_has_bti()) 633 + text_prot = __pgprot_modify(text_prot, PTE_GP, PTE_GP); 642 634 643 635 /* 644 636 * Only rodata will be remapped with different permissions later on,
+2 -2
arch/arm64/mm/pageattr.c
··· 126 126 { 127 127 return change_memory_common(addr, numpages, 128 128 __pgprot(PTE_PXN), 129 - __pgprot(0)); 129 + __pgprot(PTE_MAYBE_GP)); 130 130 } 131 131 132 132 int set_memory_x(unsigned long addr, int numpages) 133 133 { 134 134 return change_memory_common(addr, numpages, 135 - __pgprot(0), 135 + __pgprot(PTE_MAYBE_GP), 136 136 __pgprot(PTE_PXN)); 137 137 } 138 138
+8
arch/arm64/net/bpf_jit.h
··· 211 211 /* Rn & imm; set condition flags */ 212 212 #define A64_TST_I(sf, Rn, imm) A64_ANDS_I(sf, A64_ZR, Rn, imm) 213 213 214 + /* HINTs */ 215 + #define A64_HINT(x) aarch64_insn_gen_hint(x) 216 + 217 + /* BTI */ 218 + #define A64_BTI_C A64_HINT(AARCH64_INSN_HINT_BTIC) 219 + #define A64_BTI_J A64_HINT(AARCH64_INSN_HINT_BTIJ) 220 + #define A64_BTI_JC A64_HINT(AARCH64_INSN_HINT_BTIJC) 221 + 214 222 #endif /* _BPF_JIT_H */
+12
arch/arm64/net/bpf_jit_comp.c
··· 177 177 #define STACK_ALIGN(sz) (((sz) + 15) & ~15) 178 178 179 179 /* Tail call offset to jump into */ 180 + #if IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) 181 + #define PROLOGUE_OFFSET 8 182 + #else 180 183 #define PROLOGUE_OFFSET 7 184 + #endif 181 185 182 186 static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf) 183 187 { ··· 218 214 * 219 215 */ 220 216 217 + /* BTI landing pad */ 218 + if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) 219 + emit(A64_BTI_C, ctx); 220 + 221 221 /* Save FP and LR registers to stay align with ARM64 AAPCS */ 222 222 emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx); 223 223 emit(A64_MOV(1, A64_FP, A64_SP), ctx); ··· 244 236 cur_offset, PROLOGUE_OFFSET); 245 237 return -1; 246 238 } 239 + 240 + /* BTI landing pad for the tail call, done with a BR */ 241 + if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) 242 + emit(A64_BTI_J, ctx); 247 243 } 248 244 249 245 ctx->stack_size = STACK_ALIGN(prog->aux->stack_depth);
+1
arch/x86/Kconfig
··· 91 91 select ARCH_USE_BUILTIN_BSWAP 92 92 select ARCH_USE_QUEUED_RWLOCKS 93 93 select ARCH_USE_QUEUED_SPINLOCKS 94 + select ARCH_USE_SYM_ANNOTATIONS 94 95 select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH 95 96 select ARCH_WANT_DEFAULT_BPF_JIT if X86_64 96 97 select ARCH_WANTS_DYNAMIC_TASK_STRUCT
-9
arch/x86/Kconfig.debug
··· 99 99 100 100 If in doubt, say "Y". 101 101 102 - config DOUBLEFAULT 103 - default y 104 - bool "Enable doublefault exception handler" if EXPERT && X86_32 105 - ---help--- 106 - This option allows trapping of rare doublefault exceptions that 107 - would otherwise cause a system to silently reboot. Disabling this 108 - option saves about 4k and might cause you much additional grey 109 - hair. 110 - 111 102 config DEBUG_TLBFLUSH 112 103 bool "Set upper limit of TLB entries to flush one-by-one" 113 104 depends on DEBUG_KERNEL
-2
arch/x86/entry/entry_32.S
··· 1536 1536 jmp common_exception 1537 1537 SYM_CODE_END(debug) 1538 1538 1539 - #ifdef CONFIG_DOUBLEFAULT 1540 1539 SYM_CODE_START(double_fault) 1541 1540 1: 1542 1541 /* ··· 1575 1576 hlt 1576 1577 jmp 1b 1577 1578 SYM_CODE_END(double_fault) 1578 - #endif 1579 1579 1580 1580 /* 1581 1581 * NMI is doubly nasty. It can happen on the first instruction of
+1 -1
arch/x86/include/asm/doublefault.h
··· 2 2 #ifndef _ASM_X86_DOUBLEFAULT_H 3 3 #define _ASM_X86_DOUBLEFAULT_H 4 4 5 - #if defined(CONFIG_X86_32) && defined(CONFIG_DOUBLEFAULT) 5 + #ifdef CONFIG_X86_32 6 6 extern void doublefault_init_cpu_tss(void); 7 7 #else 8 8 static inline void doublefault_init_cpu_tss(void)
-2
arch/x86/include/asm/traps.h
··· 69 69 dotraplinkage void do_bounds(struct pt_regs *regs, long error_code); 70 70 dotraplinkage void do_invalid_op(struct pt_regs *regs, long error_code); 71 71 dotraplinkage void do_device_not_available(struct pt_regs *regs, long error_code); 72 - #if defined(CONFIG_X86_64) || defined(CONFIG_DOUBLEFAULT) 73 72 dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code, unsigned long cr2); 74 - #endif 75 73 dotraplinkage void do_coprocessor_segment_overrun(struct pt_regs *regs, long error_code); 76 74 dotraplinkage void do_invalid_TSS(struct pt_regs *regs, long error_code); 77 75 dotraplinkage void do_segment_not_present(struct pt_regs *regs, long error_code);
+1 -3
arch/x86/kernel/Makefile
··· 102 102 obj-$(CONFIG_CRASH_DUMP) += crash_dump_$(BITS).o 103 103 obj-y += kprobes/ 104 104 obj-$(CONFIG_MODULES) += module.o 105 - ifeq ($(CONFIG_X86_32),y) 106 - obj-$(CONFIG_DOUBLEFAULT) += doublefault_32.o 107 - endif 105 + obj-$(CONFIG_X86_32) += doublefault_32.o 108 106 obj-$(CONFIG_KGDB) += kgdb.o 109 107 obj-$(CONFIG_VM86) += vm86_32.o 110 108 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
-4
arch/x86/kernel/dumpstack_32.c
··· 87 87 88 88 static bool in_doublefault_stack(unsigned long *stack, struct stack_info *info) 89 89 { 90 - #ifdef CONFIG_DOUBLEFAULT 91 90 struct cpu_entry_area *cea = get_cpu_entry_area(raw_smp_processor_id()); 92 91 struct doublefault_stack *ss = &cea->doublefault_stack; 93 92 ··· 102 103 info->next_sp = (unsigned long *)this_cpu_read(cpu_tss_rw.x86_tss.sp); 103 104 104 105 return true; 105 - #else 106 - return false; 107 - #endif 108 106 } 109 107 110 108
-2
arch/x86/kernel/traps.c
··· 326 326 } 327 327 #endif 328 328 329 - #if defined(CONFIG_X86_64) || defined(CONFIG_DOUBLEFAULT) 330 329 /* 331 330 * Runs on an IST stack for x86_64 and on a special task stack for x86_32. 332 331 * ··· 449 450 die("double fault", regs, error_code); 450 451 panic("Machine halted."); 451 452 } 452 - #endif 453 453 454 454 dotraplinkage void do_bounds(struct pt_regs *regs, long error_code) 455 455 {
+1 -3
arch/x86/mm/cpu_entry_area.c
··· 17 17 DEFINE_PER_CPU(struct cea_exception_stacks*, cea_exception_stacks); 18 18 #endif 19 19 20 - #if defined(CONFIG_X86_32) && defined(CONFIG_DOUBLEFAULT) 20 + #ifdef CONFIG_X86_32 21 21 DECLARE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack); 22 22 #endif 23 23 ··· 114 114 #else 115 115 static inline void percpu_setup_exception_stacks(unsigned int cpu) 116 116 { 117 - #ifdef CONFIG_DOUBLEFAULT 118 117 struct cpu_entry_area *cea = get_cpu_entry_area(cpu); 119 118 120 119 cea_map_percpu_pages(&cea->doublefault_stack, 121 120 &per_cpu(doublefault_stack, cpu), 1, PAGE_KERNEL); 122 - #endif 123 121 } 124 122 #endif 125 123
+6
fs/Kconfig.binfmt
··· 36 36 config ARCH_BINFMT_ELF_STATE 37 37 bool 38 38 39 + config ARCH_HAVE_ELF_PROT 40 + bool 41 + 42 + config ARCH_USE_GNU_PROPERTY 43 + bool 44 + 39 45 config BINFMT_ELF_FDPIC 40 46 bool "Kernel support for FDPIC ELF binaries" 41 47 default y if !BINFMT_ELF
+139 -6
fs/binfmt_elf.c
··· 40 40 #include <linux/sched/coredump.h> 41 41 #include <linux/sched/task_stack.h> 42 42 #include <linux/sched/cputime.h> 43 + #include <linux/sizes.h> 44 + #include <linux/types.h> 43 45 #include <linux/cred.h> 44 46 #include <linux/dax.h> 45 47 #include <linux/uaccess.h> 46 48 #include <asm/param.h> 47 49 #include <asm/page.h> 50 + 51 + #ifndef ELF_COMPAT 52 + #define ELF_COMPAT 0 53 + #endif 48 54 49 55 #ifndef user_long_t 50 56 #define user_long_t long ··· 545 539 546 540 #endif /* !CONFIG_ARCH_BINFMT_ELF_STATE */ 547 541 548 - static inline int make_prot(u32 p_flags) 542 + static inline int make_prot(u32 p_flags, struct arch_elf_state *arch_state, 543 + bool has_interp, bool is_interp) 549 544 { 550 545 int prot = 0; 551 546 ··· 556 549 prot |= PROT_WRITE; 557 550 if (p_flags & PF_X) 558 551 prot |= PROT_EXEC; 559 - return prot; 552 + 553 + return arch_elf_adjust_prot(prot, arch_state, has_interp, is_interp); 560 554 } 561 555 562 556 /* This is much more generalized than the library routine read function, ··· 567 559 568 560 static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex, 569 561 struct file *interpreter, 570 - unsigned long no_base, struct elf_phdr *interp_elf_phdata) 562 + unsigned long no_base, struct elf_phdr *interp_elf_phdata, 563 + struct arch_elf_state *arch_state) 571 564 { 572 565 struct elf_phdr *eppnt; 573 566 unsigned long load_addr = 0; ··· 600 591 for (i = 0; i < interp_elf_ex->e_phnum; i++, eppnt++) { 601 592 if (eppnt->p_type == PT_LOAD) { 602 593 int elf_type = MAP_PRIVATE | MAP_DENYWRITE; 603 - int elf_prot = make_prot(eppnt->p_flags); 594 + int elf_prot = make_prot(eppnt->p_flags, arch_state, 595 + true, true); 604 596 unsigned long vaddr = 0; 605 597 unsigned long k, map_addr; 606 598 ··· 692 682 * libraries. There is no binary dependent code anywhere else. 
693 683 */ 694 684 685 + static int parse_elf_property(const char *data, size_t *off, size_t datasz, 686 + struct arch_elf_state *arch, 687 + bool have_prev_type, u32 *prev_type) 688 + { 689 + size_t o, step; 690 + const struct gnu_property *pr; 691 + int ret; 692 + 693 + if (*off == datasz) 694 + return -ENOENT; 695 + 696 + if (WARN_ON_ONCE(*off > datasz || *off % ELF_GNU_PROPERTY_ALIGN)) 697 + return -EIO; 698 + o = *off; 699 + datasz -= *off; 700 + 701 + if (datasz < sizeof(*pr)) 702 + return -ENOEXEC; 703 + pr = (const struct gnu_property *)(data + o); 704 + o += sizeof(*pr); 705 + datasz -= sizeof(*pr); 706 + 707 + if (pr->pr_datasz > datasz) 708 + return -ENOEXEC; 709 + 710 + WARN_ON_ONCE(o % ELF_GNU_PROPERTY_ALIGN); 711 + step = round_up(pr->pr_datasz, ELF_GNU_PROPERTY_ALIGN); 712 + if (step > datasz) 713 + return -ENOEXEC; 714 + 715 + /* Properties are supposed to be unique and sorted on pr_type: */ 716 + if (have_prev_type && pr->pr_type <= *prev_type) 717 + return -ENOEXEC; 718 + *prev_type = pr->pr_type; 719 + 720 + ret = arch_parse_elf_property(pr->pr_type, data + o, 721 + pr->pr_datasz, ELF_COMPAT, arch); 722 + if (ret) 723 + return ret; 724 + 725 + *off = o + step; 726 + return 0; 727 + } 728 + 729 + #define NOTE_DATA_SZ SZ_1K 730 + #define GNU_PROPERTY_TYPE_0_NAME "GNU" 731 + #define NOTE_NAME_SZ (sizeof(GNU_PROPERTY_TYPE_0_NAME)) 732 + 733 + static int parse_elf_properties(struct file *f, const struct elf_phdr *phdr, 734 + struct arch_elf_state *arch) 735 + { 736 + union { 737 + struct elf_note nhdr; 738 + char data[NOTE_DATA_SZ]; 739 + } note; 740 + loff_t pos; 741 + ssize_t n; 742 + size_t off, datasz; 743 + int ret; 744 + bool have_prev_type; 745 + u32 prev_type; 746 + 747 + if (!IS_ENABLED(CONFIG_ARCH_USE_GNU_PROPERTY) || !phdr) 748 + return 0; 749 + 750 + /* load_elf_binary() shouldn't call us unless this is true... 
*/ 751 + if (WARN_ON_ONCE(phdr->p_type != PT_GNU_PROPERTY)) 752 + return -ENOEXEC; 753 + 754 + /* If the properties are crazy large, that's too bad (for now): */ 755 + if (phdr->p_filesz > sizeof(note)) 756 + return -ENOEXEC; 757 + 758 + pos = phdr->p_offset; 759 + n = kernel_read(f, &note, phdr->p_filesz, &pos); 760 + 761 + BUILD_BUG_ON(sizeof(note) < sizeof(note.nhdr) + NOTE_NAME_SZ); 762 + if (n < 0 || n < sizeof(note.nhdr) + NOTE_NAME_SZ) 763 + return -EIO; 764 + 765 + if (note.nhdr.n_type != NT_GNU_PROPERTY_TYPE_0 || 766 + note.nhdr.n_namesz != NOTE_NAME_SZ || 767 + strncmp(note.data + sizeof(note.nhdr), 768 + GNU_PROPERTY_TYPE_0_NAME, n - sizeof(note.nhdr))) 769 + return -ENOEXEC; 770 + 771 + off = round_up(sizeof(note.nhdr) + NOTE_NAME_SZ, 772 + ELF_GNU_PROPERTY_ALIGN); 773 + if (off > n) 774 + return -ENOEXEC; 775 + 776 + if (note.nhdr.n_descsz > n - off) 777 + return -ENOEXEC; 778 + datasz = off + note.nhdr.n_descsz; 779 + 780 + have_prev_type = false; 781 + do { 782 + ret = parse_elf_property(note.data, &off, datasz, arch, 783 + have_prev_type, &prev_type); 784 + have_prev_type = true; 785 + } while (!ret); 786 + 787 + return ret == -ENOENT ? 
0 : ret; 788 + } 789 + 695 790 static int load_elf_binary(struct linux_binprm *bprm) 696 791 { 697 792 struct file *interpreter = NULL; /* to shut gcc up */ ··· 804 689 int load_addr_set = 0; 805 690 unsigned long error; 806 691 struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL; 692 + struct elf_phdr *elf_property_phdata = NULL; 807 693 unsigned long elf_bss, elf_brk; 808 694 int bss_prot = 0; 809 695 int retval, i; ··· 841 725 elf_ppnt = elf_phdata; 842 726 for (i = 0; i < elf_ex->e_phnum; i++, elf_ppnt++) { 843 727 char *elf_interpreter; 728 + 729 + if (elf_ppnt->p_type == PT_GNU_PROPERTY) { 730 + elf_property_phdata = elf_ppnt; 731 + continue; 732 + } 844 733 845 734 if (elf_ppnt->p_type != PT_INTERP) 846 735 continue; ··· 940 819 goto out_free_dentry; 941 820 942 821 /* Pass PT_LOPROC..PT_HIPROC headers to arch code */ 822 + elf_property_phdata = NULL; 943 823 elf_ppnt = interp_elf_phdata; 944 824 for (i = 0; i < interp_elf_ex->e_phnum; i++, elf_ppnt++) 945 825 switch (elf_ppnt->p_type) { 826 + case PT_GNU_PROPERTY: 827 + elf_property_phdata = elf_ppnt; 828 + break; 829 + 946 830 case PT_LOPROC ... 
PT_HIPROC: 947 831 retval = arch_elf_pt_proc(interp_elf_ex, 948 832 elf_ppnt, interpreter, ··· 957 831 break; 958 832 } 959 833 } 834 + 835 + retval = parse_elf_properties(interpreter ?: bprm->file, 836 + elf_property_phdata, &arch_state); 837 + if (retval) 838 + goto out_free_dentry; 960 839 961 840 /* 962 841 * Allow arch code to reject the ELF at this point, whilst it's ··· 1044 913 } 1045 914 } 1046 915 1047 - elf_prot = make_prot(elf_ppnt->p_flags); 916 + elf_prot = make_prot(elf_ppnt->p_flags, &arch_state, 917 + !!interpreter, false); 1048 918 1049 919 elf_flags = MAP_PRIVATE | MAP_DENYWRITE | MAP_EXECUTABLE; 1050 920 ··· 1188 1056 if (interpreter) { 1189 1057 elf_entry = load_elf_interp(interp_elf_ex, 1190 1058 interpreter, 1191 - load_bias, interp_elf_phdata); 1059 + load_bias, interp_elf_phdata, 1060 + &arch_state); 1192 1061 if (!IS_ERR((void *)elf_entry)) { 1193 1062 /* 1194 1063 * load_elf_interp() returns relocation
+4
fs/compat_binfmt_elf.c
··· 17 17 #include <linux/elfcore-compat.h> 18 18 #include <linux/time.h> 19 19 20 + #define ELF_COMPAT 1 21 + 20 22 /* 21 23 * Rename the basic ELF layout types to refer to the 32-bit class of files. 22 24 */ ··· 30 28 #undef elf_shdr 31 29 #undef elf_note 32 30 #undef elf_addr_t 31 + #undef ELF_GNU_PROPERTY_ALIGN 33 32 #define elfhdr elf32_hdr 34 33 #define elf_phdr elf32_phdr 35 34 #define elf_shdr elf32_shdr 36 35 #define elf_note elf32_note 37 36 #define elf_addr_t Elf32_Addr 37 + #define ELF_GNU_PROPERTY_ALIGN ELF32_GNU_PROPERTY_ALIGN 38 38 39 39 /* 40 40 * Some data types as stored in coredump.
+3
fs/proc/task_mmu.c
··· 638 638 [ilog2(VM_ARCH_1)] = "ar", 639 639 [ilog2(VM_WIPEONFORK)] = "wf", 640 640 [ilog2(VM_DONTDUMP)] = "dd", 641 + #ifdef CONFIG_ARM64_BTI 642 + [ilog2(VM_ARM64_BTI)] = "bt", 643 + #endif 641 644 #ifdef CONFIG_MEM_SOFT_DIRTY 642 645 [ilog2(VM_SOFTDIRTY)] = "sd", 643 646 #endif
+43
include/linux/elf.h
··· 2 2 #ifndef _LINUX_ELF_H 3 3 #define _LINUX_ELF_H 4 4 5 + #include <linux/types.h> 5 6 #include <asm/elf.h> 6 7 #include <uapi/linux/elf.h> 7 8 ··· 22 21 SET_PERSONALITY(ex) 23 22 #endif 24 23 24 + #define ELF32_GNU_PROPERTY_ALIGN 4 25 + #define ELF64_GNU_PROPERTY_ALIGN 8 26 + 25 27 #if ELF_CLASS == ELFCLASS32 26 28 27 29 extern Elf32_Dyn _DYNAMIC []; ··· 35 31 #define elf_addr_t Elf32_Off 36 32 #define Elf_Half Elf32_Half 37 33 #define Elf_Word Elf32_Word 34 + #define ELF_GNU_PROPERTY_ALIGN ELF32_GNU_PROPERTY_ALIGN 38 35 39 36 #else 40 37 ··· 47 42 #define elf_addr_t Elf64_Off 48 43 #define Elf_Half Elf64_Half 49 44 #define Elf_Word Elf64_Word 45 + #define ELF_GNU_PROPERTY_ALIGN ELF64_GNU_PROPERTY_ALIGN 50 46 51 47 #endif 52 48 ··· 62 56 extern int elf_coredump_extra_notes_size(void); 63 57 extern int elf_coredump_extra_notes_write(struct coredump_params *cprm); 64 58 #endif 59 + 60 + /* 61 + * NT_GNU_PROPERTY_TYPE_0 header: 62 + * Keep this internal until/unless there is an agreed UAPI definition. 63 + * pr_type values (GNU_PROPERTY_*) are public and defined in the UAPI header. 64 + */ 65 + struct gnu_property { 66 + u32 pr_type; 67 + u32 pr_datasz; 68 + }; 69 + 70 + struct arch_elf_state; 71 + 72 + #ifndef CONFIG_ARCH_USE_GNU_PROPERTY 73 + static inline int arch_parse_elf_property(u32 type, const void *data, 74 + size_t datasz, bool compat, 75 + struct arch_elf_state *arch) 76 + { 77 + return 0; 78 + } 79 + #else 80 + extern int arch_parse_elf_property(u32 type, const void *data, size_t datasz, 81 + bool compat, struct arch_elf_state *arch); 82 + #endif 83 + 84 + #ifdef CONFIG_ARCH_HAVE_ELF_PROT 85 + int arch_elf_adjust_prot(int prot, const struct arch_elf_state *state, 86 + bool has_interp, bool is_interp); 87 + #else 88 + static inline int arch_elf_adjust_prot(int prot, 89 + const struct arch_elf_state *state, 90 + bool has_interp, bool is_interp) 91 + { 92 + return prot; 93 + } 94 + #endif 95 + 65 96 #endif /* _LINUX_ELF_H */
+4 -4
include/linux/linkage.h
··· 105 105 106 106 /* === DEPRECATED annotations === */ 107 107 108 - #ifndef CONFIG_X86 108 + #ifndef CONFIG_ARCH_USE_SYM_ANNOTATIONS 109 109 #ifndef GLOBAL 110 110 /* deprecated, use SYM_DATA*, SYM_ENTRY, or similar */ 111 111 #define GLOBAL(name) \ ··· 118 118 #define ENTRY(name) \ 119 119 SYM_FUNC_START(name) 120 120 #endif 121 - #endif /* CONFIG_X86 */ 121 + #endif /* CONFIG_ARCH_USE_SYM_ANNOTATIONS */ 122 122 #endif /* LINKER_SCRIPT */ 123 123 124 - #ifndef CONFIG_X86 124 + #ifndef CONFIG_ARCH_USE_SYM_ANNOTATIONS 125 125 #ifndef WEAK 126 126 /* deprecated, use SYM_FUNC_START_WEAK* */ 127 127 #define WEAK(name) \ ··· 143 143 #define ENDPROC(name) \ 144 144 SYM_FUNC_END(name) 145 145 #endif 146 - #endif /* CONFIG_X86 */ 146 + #endif /* CONFIG_ARCH_USE_SYM_ANNOTATIONS */ 147 147 148 148 /* === generic annotations === */ 149 149
+3
include/linux/mm.h
··· 325 325 #elif defined(CONFIG_SPARC64) 326 326 # define VM_SPARC_ADI VM_ARCH_1 /* Uses ADI tag for access control */ 327 327 # define VM_ARCH_CLEAR VM_SPARC_ADI 328 + #elif defined(CONFIG_ARM64) 329 + # define VM_ARM64_BTI VM_ARCH_1 /* BTI guarded page, a.k.a. GP bit */ 330 + # define VM_ARCH_CLEAR VM_ARM64_BTI 328 331 #elif !defined(CONFIG_MMU) 329 332 # define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */ 330 333 #endif
+11
include/uapi/linux/elf.h
··· 36 36 #define PT_LOPROC 0x70000000 37 37 #define PT_HIPROC 0x7fffffff 38 38 #define PT_GNU_EH_FRAME 0x6474e550 39 + #define PT_GNU_PROPERTY 0x6474e553 39 40 40 41 #define PT_GNU_STACK (PT_LOOS + 0x474e551) 41 42 ··· 368 367 * Notes used in ET_CORE. Architectures export some of the arch register sets 369 368 * using the corresponding note types via the PTRACE_GETREGSET and 370 369 * PTRACE_SETREGSET requests. 370 + * The note name for all these is "LINUX". 371 371 */ 372 372 #define NT_PRSTATUS 1 373 373 #define NT_PRFPREG 2 ··· 431 429 #define NT_MIPS_FP_MODE 0x801 /* MIPS floating-point mode */ 432 430 #define NT_MIPS_MSA 0x802 /* MIPS SIMD registers */ 433 431 432 + /* Note types with note name "GNU" */ 433 + #define NT_GNU_PROPERTY_TYPE_0 5 434 + 434 435 /* Note header in a PT_NOTE section */ 435 436 typedef struct elf32_note { 436 437 Elf32_Word n_namesz; /* Name size */ ··· 447 442 Elf64_Word n_descsz; /* Content size */ 448 443 Elf64_Word n_type; /* Content type */ 449 444 } Elf64_Nhdr; 445 + 446 + /* .note.gnu.property types for EM_AARCH64: */ 447 + #define GNU_PROPERTY_AARCH64_FEATURE_1_AND 0xc0000000 448 + 449 + /* Bits for GNU_PROPERTY_AARCH64_FEATURE_1_BTI */ 450 + #define GNU_PROPERTY_AARCH64_FEATURE_1_BTI (1U << 0) 450 451 451 452 #endif /* _UAPI_LINUX_ELF_H */
+3
lib/Kconfig
··· 80 80 config ARCH_HAS_FAST_MULTIPLIER 81 81 bool 82 82 83 + config ARCH_USE_SYM_ANNOTATIONS 84 + bool 85 + 83 86 config INDIRECT_PIO 84 87 bool "Access I/O in non-MMIO mode" 85 88 depends on ARM64
-1
tools/testing/selftests/wireguard/qemu/debug.config
··· 58 58 CONFIG_USER_STACKTRACE_SUPPORT=y 59 59 CONFIG_DEBUG_SG=y 60 60 CONFIG_DEBUG_NOTIFIERS=y 61 - CONFIG_DOUBLEFAULT=y 62 61 CONFIG_X86_DEBUG_FPU=y 63 62 CONFIG_DEBUG_SECTION_MISMATCH=y 64 63 CONFIG_DEBUG_PAGEALLOC=y