Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'riscv-for-linus-v7.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux

Pull RISC-V updates from Paul Walmsley:
"Before v7.0 is released, fix a few issues with the CFI patchset,
merged earlier in v7.0-rc, that primarily affect interfaces to
non-kernel code:

- Improve the prctl() interface for per-task indirect branch landing
pad control to expand abbreviations and to resemble the speculation
control prctl() interface

- Expand the "LP" and "SS" abbreviations in the ptrace uapi header
file to "branch landing pad" and "shadow stack", to improve
readability

- Fix a typo in a CFI-related macro name in the ptrace uapi header
file

- Ensure that the indirect branch tracking state and shadow stack
state are unlocked immediately after an exec() on the new task so
that libc can subsequently control them

- While working in this area, clean up the kernel-internal,
cross-architecture prctl() function names by expanding the
abbreviations mentioned above"

* tag 'riscv-for-linus-v7.0-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
prctl: cfi: change the branch landing pad prctl()s to be more descriptive
riscv: ptrace: cfi: expand "SS" references to "shadow stack" in uapi headers
prctl: rename branch landing pad implementation functions to be more explicit
riscv: ptrace: expand "LP" references to "branch landing pads" in uapi headers
riscv: cfi: clear CFI lock status in start_thread()
riscv: ptrace: cfi: fix "PRACE" typo in uapi header

+149 -139
+36 -21
Documentation/arch/riscv/zicfilp.rst
···
76 76 4. prctl() enabling
77 77 --------------------
78 78
79     - :c:macro:`PR_SET_INDIR_BR_LP_STATUS` / :c:macro:`PR_GET_INDIR_BR_LP_STATUS` /
80     - :c:macro:`PR_LOCK_INDIR_BR_LP_STATUS` are three prctls added to manage indirect
81     - branch tracking. These prctls are architecture-agnostic and return -EINVAL if
82     - the underlying functionality is not supported.
    79 + Per-task indirect branch tracking state can be monitored and
    80 + controlled via the :c:macro:`PR_GET_CFI` and :c:macro:`PR_SET_CFI`
    81 + ``prctl()`` arguments (respectively), by supplying
    82 + :c:macro:`PR_CFI_BRANCH_LANDING_PADS` as the second argument. These
    83 + are architecture-agnostic, and will return -EINVAL if the underlying
    84 + functionality is not supported.
83 85
84     - * prctl(PR_SET_INDIR_BR_LP_STATUS, unsigned long arg)
    86 + * prctl(:c:macro:`PR_SET_CFI`, :c:macro:`PR_CFI_BRANCH_LANDING_PADS`, unsigned long arg)
85 87
86     -   If arg1 is :c:macro:`PR_INDIR_BR_LP_ENABLE` and if CPU supports
87     -   ``zicfilp`` then the kernel will enable indirect branch tracking for the
88     -   task. The dynamic loader can issue this :c:macro:`prctl` once it has
    88 +   arg is a bitmask.
    89 +
    90 +   If :c:macro:`PR_CFI_ENABLE` is set in arg, and the CPU supports
    91 +   ``zicfilp``, then the kernel will enable indirect branch tracking for
    92 +   the task. The dynamic loader can issue this ``prctl()`` once it has
89 93    determined that all the objects loaded in the address space support
90     -   indirect branch tracking. Additionally, if there is a `dlopen` to an
91     -   object which wasn't compiled with ``zicfilp``, the dynamic loader can
92     -   issue this prctl with arg1 set to 0 (i.e. :c:macro:`PR_INDIR_BR_LP_ENABLE`
93     -   cleared).
    94 +   indirect branch tracking.
94 95
95     - * prctl(PR_GET_INDIR_BR_LP_STATUS, unsigned long * arg)
    96 +   Indirect branch tracking state can also be locked once enabled. This
    97 +   prevents the task from subsequently disabling it. This is done by
    98 +   setting the bit :c:macro:`PR_CFI_LOCK` in arg. Either indirect branch
    99 +   tracking must already be enabled for the task, or the bit
   100 +   :c:macro:`PR_CFI_ENABLE` must also be set in arg. This is intended
   101 +   for environments with a strict security posture that do not wish to
   102 +   load objects without ``zicfilp`` support.
96 103
97     -   Returns the current status of indirect branch tracking. If enabled
98     -   it'll return :c:macro:`PR_INDIR_BR_LP_ENABLE`
   104 +   Indirect branch tracking can also be disabled for the task, assuming
   105 +   that it has not previously been enabled and locked. If there is a
   106 +   ``dlopen()`` to an object which wasn't compiled with ``zicfilp``, the
   107 +   dynamic loader can issue this ``prctl()`` with arg set to
   108 +   :c:macro:`PR_CFI_DISABLE`. Disabling indirect branch tracking for the
   109 +   task is not possible if it has previously been enabled and locked.
99 110
100    - * prctl(PR_LOCK_INDIR_BR_LP_STATUS, unsigned long arg)
101 111
102    -   Locks the current status of indirect branch tracking on the task. User
103    -   space may want to run with a strict security posture and wouldn't want
104    -   loading of objects without ``zicfilp`` support in them, to disallow
105    -   disabling of indirect branch tracking. In this case, user space can
106    -   use this prctl to lock the current settings.
   112 + * prctl(:c:macro:`PR_GET_CFI`, :c:macro:`PR_CFI_BRANCH_LANDING_PADS`, unsigned long * arg)
   113 +
   114 +   Returns the current status of indirect branch tracking as a bitmask
   115 +   stored in the memory location pointed to by arg. The bitmask will
   116 +   have the :c:macro:`PR_CFI_ENABLE` bit set if indirect branch tracking
   117 +   is currently enabled for the task, and if it is locked, will
   118 +   additionally have the :c:macro:`PR_CFI_LOCK` bit set. If indirect
   119 +   branch tracking is currently disabled for the task, the
   120 +   :c:macro:`PR_CFI_DISABLE` bit will be set.
   121 +
107 122
108 123 5. violations related to indirect branch tracking
109 124 --------------------------------------------------
+4 -4
arch/riscv/include/asm/usercfi.h
···
39 39 bool is_shstk_enabled(struct task_struct *task);
40 40 bool is_shstk_locked(struct task_struct *task);
41 41 bool is_shstk_allocated(struct task_struct *task);
42    - void set_shstk_lock(struct task_struct *task);
   42 + void set_shstk_lock(struct task_struct *task, bool lock);
43 43 void set_shstk_status(struct task_struct *task, bool enable);
44 44 unsigned long get_active_shstk(struct task_struct *task);
45 45 int restore_user_shstk(struct task_struct *tsk, unsigned long shstk_ptr);
···
47 47 bool is_indir_lp_enabled(struct task_struct *task);
48 48 bool is_indir_lp_locked(struct task_struct *task);
49 49 void set_indir_lp_status(struct task_struct *task, bool enable);
50    - void set_indir_lp_lock(struct task_struct *task);
   50 + void set_indir_lp_lock(struct task_struct *task, bool lock);
51 51
52 52 #define PR_SHADOW_STACK_SUPPORTED_STATUS_MASK (PR_SHADOW_STACK_ENABLE)
53 53
···
69 69
70 70 #define is_shstk_allocated(task) false
71 71
72    - #define set_shstk_lock(task) do {} while (0)
   72 + #define set_shstk_lock(task, lock) do {} while (0)
73 73
74 74 #define set_shstk_status(task, enable) do {} while (0)
75 75
···
79 79
80 80 #define set_indir_lp_status(task, enable) do {} while (0)
81 81
82    - #define set_indir_lp_lock(task) do {} while (0)
   82 + #define set_indir_lp_lock(task, lock) do {} while (0)
83 83
84 84 #define restore_user_shstk(tsk, shstk_ptr) -EINVAL
85 85
+20 -18
arch/riscv/include/uapi/asm/ptrace.h
···
132 132 	unsigned long ss_ptr;	/* shadow stack pointer */
133 133 };
134 134
135    - #define PTRACE_CFI_LP_EN_BIT 0
136    - #define PTRACE_CFI_LP_LOCK_BIT 1
137    - #define PTRACE_CFI_ELP_BIT 2
138    - #define PTRACE_CFI_SS_EN_BIT 3
139    - #define PTRACE_CFI_SS_LOCK_BIT 4
140    - #define PTRACE_CFI_SS_PTR_BIT 5
   135 + #define PTRACE_CFI_BRANCH_LANDING_PAD_EN_BIT 0
   136 + #define PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_BIT 1
   137 + #define PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_BIT 2
   138 + #define PTRACE_CFI_SHADOW_STACK_EN_BIT 3
   139 + #define PTRACE_CFI_SHADOW_STACK_LOCK_BIT 4
   140 + #define PTRACE_CFI_SHADOW_STACK_PTR_BIT 5
141 141
142    - #define PTRACE_CFI_LP_EN_STATE _BITUL(PTRACE_CFI_LP_EN_BIT)
143    - #define PTRACE_CFI_LP_LOCK_STATE _BITUL(PTRACE_CFI_LP_LOCK_BIT)
144    - #define PTRACE_CFI_ELP_STATE _BITUL(PTRACE_CFI_ELP_BIT)
145    - #define PTRACE_CFI_SS_EN_STATE _BITUL(PTRACE_CFI_SS_EN_BIT)
146    - #define PTRACE_CFI_SS_LOCK_STATE _BITUL(PTRACE_CFI_SS_LOCK_BIT)
147    - #define PTRACE_CFI_SS_PTR_STATE _BITUL(PTRACE_CFI_SS_PTR_BIT)
   142 + #define PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE _BITUL(PTRACE_CFI_BRANCH_LANDING_PAD_EN_BIT)
   143 + #define PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_STATE \
   144 + 	_BITUL(PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_BIT)
   145 + #define PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE \
   146 + 	_BITUL(PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_BIT)
   147 + #define PTRACE_CFI_SHADOW_STACK_EN_STATE _BITUL(PTRACE_CFI_SHADOW_STACK_EN_BIT)
   148 + #define PTRACE_CFI_SHADOW_STACK_LOCK_STATE _BITUL(PTRACE_CFI_SHADOW_STACK_LOCK_BIT)
   149 + #define PTRACE_CFI_SHADOW_STACK_PTR_STATE _BITUL(PTRACE_CFI_SHADOW_STACK_PTR_BIT)
148 150
149    - #define PRACE_CFI_STATE_INVALID_MASK ~(PTRACE_CFI_LP_EN_STATE | \
150    - 					PTRACE_CFI_LP_LOCK_STATE | \
151    - 					PTRACE_CFI_ELP_STATE | \
152    - 					PTRACE_CFI_SS_EN_STATE | \
153    - 					PTRACE_CFI_SS_LOCK_STATE | \
154    - 					PTRACE_CFI_SS_PTR_STATE)
   151 + #define PTRACE_CFI_STATE_INVALID_MASK ~(PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE | \
   152 + 					PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_STATE | \
   153 + 					PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE | \
   154 + 					PTRACE_CFI_SHADOW_STACK_EN_STATE | \
   155 + 					PTRACE_CFI_SHADOW_STACK_LOCK_STATE | \
   156 + 					PTRACE_CFI_SHADOW_STACK_PTR_STATE)
155 157
156 158 struct __cfi_status {
157 159 	__u64 cfi_state;
+2
arch/riscv/kernel/process.c
···
160 160 	 * clear shadow stack state on exec.
161 161 	 * libc will set it later via prctl.
162 162 	 */
   163 + 	set_shstk_lock(current, false);
163 164 	set_shstk_status(current, false);
164 165 	set_shstk_base(current, 0, 0);
165 166 	set_active_shstk(current, 0);
···
168 167 	 * disable indirect branch tracking on exec.
169 168 	 * libc will enable it later via prctl.
170 169 	 */
   170 + 	set_indir_lp_lock(current, false);
171 171 	set_indir_lp_status(current, false);
172 172
173 173 #ifdef CONFIG_64BIT
+11 -11
arch/riscv/kernel/ptrace.c
···
303 303 	regs = task_pt_regs(target);
304 304
305 305 	if (is_indir_lp_enabled(target)) {
306    - 		user_cfi.cfi_status.cfi_state |= PTRACE_CFI_LP_EN_STATE;
   306 + 		user_cfi.cfi_status.cfi_state |= PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE;
307 307 		user_cfi.cfi_status.cfi_state |= is_indir_lp_locked(target) ?
308    - 			PTRACE_CFI_LP_LOCK_STATE : 0;
   308 + 			PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_STATE : 0;
309 309 		user_cfi.cfi_status.cfi_state |= (regs->status & SR_ELP) ?
310    - 			PTRACE_CFI_ELP_STATE : 0;
   310 + 			PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE : 0;
311 311 	}
312 312
313 313 	if (is_shstk_enabled(target)) {
314    - 		user_cfi.cfi_status.cfi_state |= (PTRACE_CFI_SS_EN_STATE |
315    - 						  PTRACE_CFI_SS_PTR_STATE);
   314 + 		user_cfi.cfi_status.cfi_state |= (PTRACE_CFI_SHADOW_STACK_EN_STATE |
   315 + 						  PTRACE_CFI_SHADOW_STACK_PTR_STATE);
316 316 		user_cfi.cfi_status.cfi_state |= is_shstk_locked(target) ?
317    - 			PTRACE_CFI_SS_LOCK_STATE : 0;
   317 + 			PTRACE_CFI_SHADOW_STACK_LOCK_STATE : 0;
318 318 		user_cfi.shstk_ptr = get_active_shstk(target);
319 319 	}
320 320
···
349 349 	 * rsvd field should be set to zero so that if those fields are needed in future
350 350 	 */
351 351 	if ((user_cfi.cfi_status.cfi_state &
352    - 	     (PTRACE_CFI_LP_EN_STATE | PTRACE_CFI_LP_LOCK_STATE |
353    - 	      PTRACE_CFI_SS_EN_STATE | PTRACE_CFI_SS_LOCK_STATE)) ||
354    - 	    (user_cfi.cfi_status.cfi_state & PRACE_CFI_STATE_INVALID_MASK))
   352 + 	     (PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE | PTRACE_CFI_BRANCH_LANDING_PAD_LOCK_STATE |
   353 + 	      PTRACE_CFI_SHADOW_STACK_EN_STATE | PTRACE_CFI_SHADOW_STACK_LOCK_STATE)) ||
   354 + 	    (user_cfi.cfi_status.cfi_state & PTRACE_CFI_STATE_INVALID_MASK))
355 355 		return -EINVAL;
356 356
357 357 	/* If lpad is enabled on target and ptrace requests to set / clear elp, do that */
358 358 	if (is_indir_lp_enabled(target)) {
359 359 		if (user_cfi.cfi_status.cfi_state &
360    - 		    PTRACE_CFI_ELP_STATE) /* set elp state */
   360 + 		    PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE) /* set elp state */
361 361 			regs->status |= SR_ELP;
362 362 		else
363 363 			regs->status &= ~SR_ELP; /* clear elp state */
···
365 365
366 366 	/* If shadow stack enabled on target, set new shadow stack pointer */
367 367 	if (is_shstk_enabled(target) &&
368    - 	    (user_cfi.cfi_status.cfi_state & PTRACE_CFI_SS_PTR_STATE))
   368 + 	    (user_cfi.cfi_status.cfi_state & PTRACE_CFI_SHADOW_STACK_PTR_STATE))
369 369 		set_active_shstk(target, user_cfi.shstk_ptr);
370 370
371 371 	return 0;
+19 -20
arch/riscv/kernel/usercfi.c
···
74 74 	csr_write(CSR_ENVCFG, task->thread.envcfg);
75 75 }
76 76
77    - void set_shstk_lock(struct task_struct *task)
   77 + void set_shstk_lock(struct task_struct *task, bool lock)
78 78 {
79    - 	task->thread_info.user_cfi_state.ubcfi_locked = 1;
   79 + 	task->thread_info.user_cfi_state.ubcfi_locked = lock;
80 80 }
81 81
82 82 bool is_indir_lp_enabled(struct task_struct *task)
···
104 104 	csr_write(CSR_ENVCFG, task->thread.envcfg);
105 105 }
106 106
107    - void set_indir_lp_lock(struct task_struct *task)
   107 + void set_indir_lp_lock(struct task_struct *task, bool lock)
108 108 {
109    - 	task->thread_info.user_cfi_state.ufcfi_locked = 1;
   109 + 	task->thread_info.user_cfi_state.ufcfi_locked = lock;
110 110 }
111 111 /*
112 112  * If size is 0, then to be compatible with regular stack we want it to be as big as
···
452 452 	    !is_shstk_enabled(task) || arg != 0)
453 453 		return -EINVAL;
454 454
455    - 	set_shstk_lock(task);
   455 + 	set_shstk_lock(task, true);
456 456
457 457 	return 0;
458 458 }
459 459
460    - int arch_get_indir_br_lp_status(struct task_struct *t, unsigned long __user *status)
   460 + int arch_prctl_get_branch_landing_pad_state(struct task_struct *t,
   461 + 					    unsigned long __user *state)
461 462 {
462 463 	unsigned long fcfi_status = 0;
463 464
464 465 	if (!is_user_lpad_enabled())
465 466 		return -EINVAL;
466 467
467    - 	/* indirect branch tracking is enabled on the task or not */
468    - 	fcfi_status |= (is_indir_lp_enabled(t) ? PR_INDIR_BR_LP_ENABLE : 0);
   468 + 	fcfi_status = (is_indir_lp_enabled(t) ? PR_CFI_ENABLE : PR_CFI_DISABLE);
   469 + 	fcfi_status |= (is_indir_lp_locked(t) ? PR_CFI_LOCK : 0);
469 470
470    - 	return copy_to_user(status, &fcfi_status, sizeof(fcfi_status)) ? -EFAULT : 0;
   471 + 	return copy_to_user(state, &fcfi_status, sizeof(fcfi_status)) ? -EFAULT : 0;
471 472 }
472 473
473    - int arch_set_indir_br_lp_status(struct task_struct *t, unsigned long status)
   474 + int arch_prctl_set_branch_landing_pad_state(struct task_struct *t, unsigned long state)
474 475 {
475    - 	bool enable_indir_lp = false;
476    -
477 476 	if (!is_user_lpad_enabled())
478 477 		return -EINVAL;
479 478
···
480 481 	if (is_indir_lp_locked(t))
481 482 		return -EINVAL;
482 483
483    - 	/* Reject unknown flags */
484    - 	if (status & ~PR_INDIR_BR_LP_ENABLE)
   484 + 	if (!(state & (PR_CFI_ENABLE | PR_CFI_DISABLE)))
485 485 		return -EINVAL;
486 486
487    - 	enable_indir_lp = (status & PR_INDIR_BR_LP_ENABLE);
488    - 	set_indir_lp_status(t, enable_indir_lp);
   487 + 	if (state & PR_CFI_ENABLE && state & PR_CFI_DISABLE)
   488 + 		return -EINVAL;
   489 +
   490 + 	set_indir_lp_status(t, !!(state & PR_CFI_ENABLE));
489 491
490 492 	return 0;
491 493 }
492 494
493    - int arch_lock_indir_br_lp_status(struct task_struct *task,
494    - 				 unsigned long arg)
   495 + int arch_prctl_lock_branch_landing_pad_state(struct task_struct *task)
495 496 {
496 497 	/*
497 498 	 * If indirect branch tracking is not supported or not enabled on task,
498 499 	 * nothing to lock here
499 500 	 */
500 501 	if (!is_user_lpad_enabled() ||
501    - 	    !is_indir_lp_enabled(task) || arg != 0)
   502 + 	    !is_indir_lp_enabled(task))
502 503 		return -EINVAL;
503 504
504    - 	set_indir_lp_lock(task);
   505 + 	set_indir_lp_lock(task, true);
505 506
506 507 	return 0;
507 508 }
+3 -3
include/linux/cpu.h
···
229 229 #define smt_mitigations SMT_MITIGATIONS_OFF
230 230 #endif
231 231
232    - int arch_get_indir_br_lp_status(struct task_struct *t, unsigned long __user *status);
233    - int arch_set_indir_br_lp_status(struct task_struct *t, unsigned long status);
234    - int arch_lock_indir_br_lp_status(struct task_struct *t, unsigned long status);
   232 + int arch_prctl_get_branch_landing_pad_state(struct task_struct *t, unsigned long __user *state);
   233 + int arch_prctl_set_branch_landing_pad_state(struct task_struct *t, unsigned long state);
   234 + int arch_prctl_lock_branch_landing_pad_state(struct task_struct *t);
235 235
236 236 #endif /* _LINUX_CPU_H_ */
+15 -22
include/uapi/linux/prctl.h
···
397 397 # define PR_RSEQ_SLICE_EXT_ENABLE 0x01
398 398
399 399 /*
400    -  * Get the current indirect branch tracking configuration for the current
401    -  * thread, this will be the value configured via PR_SET_INDIR_BR_LP_STATUS.
   400 +  * Get or set the control flow integrity (CFI) configuration for the
   401 +  * current thread.
   402 +  *
   403 +  * Some per-thread control flow integrity settings are not yet
   404 +  * controlled through this prctl(); see for example
   405 +  * PR_{GET,SET,LOCK}_SHADOW_STACK_STATUS
402 406  */
403    - #define PR_GET_INDIR_BR_LP_STATUS 80
404    -
   407 + #define PR_GET_CFI 80
   408 + #define PR_SET_CFI 81
405 409 /*
406    - * Set the indirect branch tracking configuration. PR_INDIR_BR_LP_ENABLE will
407    - * enable cpu feature for user thread, to track all indirect branches and ensure
408    - * they land on arch defined landing pad instruction.
409    - * x86 - If enabled, an indirect branch must land on an ENDBRANCH instruction.
410    - * arch64 - If enabled, an indirect branch must land on a BTI instruction.
411    - * riscv - If enabled, an indirect branch must land on an lpad instruction.
412    - * PR_INDIR_BR_LP_DISABLE will disable feature for user thread and indirect
413    - * branches will no more be tracked by cpu to land on arch defined landing pad
414    - * instruction.
   410 + * Forward-edge CFI variants (excluding ARM64 BTI, which has its own
   411 + * prctl()s).
415 412  */
416    - #define PR_SET_INDIR_BR_LP_STATUS 81
417    - # define PR_INDIR_BR_LP_ENABLE (1UL << 0)
418    -
419    - /*
420    - * Prevent further changes to the specified indirect branch tracking
421    - * configuration. All bits may be locked via this call, including
422    - * undefined bits.
423    - */
424    - #define PR_LOCK_INDIR_BR_LP_STATUS 82
   413 + #define PR_CFI_BRANCH_LANDING_PADS 0
   414 + /* Return and control values for PR_{GET,SET}_CFI */
   415 + # define PR_CFI_ENABLE _BITUL(0)
   416 + # define PR_CFI_DISABLE _BITUL(1)
   417 + # define PR_CFI_LOCK _BITUL(2)
425 418
426 419 #endif /* _LINUX_PRCTL_H */
+17 -13
kernel/sys.c
···
2388 2388 	return -EINVAL;
2389 2389 }
2390 2390
2391      - int __weak arch_get_indir_br_lp_status(struct task_struct *t, unsigned long __user *status)
     2391 + int __weak arch_prctl_get_branch_landing_pad_state(struct task_struct *t,
     2392 + 						   unsigned long __user *state)
2392 2393 {
2393 2394 	return -EINVAL;
2394 2395 }
2395 2396
2396      - int __weak arch_set_indir_br_lp_status(struct task_struct *t, unsigned long status)
     2397 + int __weak arch_prctl_set_branch_landing_pad_state(struct task_struct *t, unsigned long state)
2397 2398 {
2398 2399 	return -EINVAL;
2399 2400 }
2400 2401
2401      - int __weak arch_lock_indir_br_lp_status(struct task_struct *t, unsigned long status)
     2402 + int __weak arch_prctl_lock_branch_landing_pad_state(struct task_struct *t)
2402 2403 {
2403 2404 	return -EINVAL;
2404 2405 }
···
2889 2888 		return -EINVAL;
2890 2889 	error = rseq_slice_extension_prctl(arg2, arg3);
2891 2890 	break;
2892      - case PR_GET_INDIR_BR_LP_STATUS:
2893      - 	if (arg3 || arg4 || arg5)
     2891 + case PR_GET_CFI:
     2892 + 	if (arg2 != PR_CFI_BRANCH_LANDING_PADS)
2894 2893 		return -EINVAL;
2895      - 	error = arch_get_indir_br_lp_status(me, (unsigned long __user *)arg2);
     2894 + 	if (arg4 || arg5)
     2895 + 		return -EINVAL;
     2896 + 	error = arch_prctl_get_branch_landing_pad_state(me, (unsigned long __user *)arg3);
2896 2897 	break;
2897      - case PR_SET_INDIR_BR_LP_STATUS:
2898      - 	if (arg3 || arg4 || arg5)
     2898 + case PR_SET_CFI:
     2899 + 	if (arg2 != PR_CFI_BRANCH_LANDING_PADS)
2899 2900 		return -EINVAL;
2900      - 	error = arch_set_indir_br_lp_status(me, arg2);
2901      - 	break;
2902      - case PR_LOCK_INDIR_BR_LP_STATUS:
2903      - 	if (arg3 || arg4 || arg5)
     2901 + 	if (arg4 || arg5)
2904 2902 		return -EINVAL;
2905      - 	error = arch_lock_indir_br_lp_status(me, arg2);
     2903 + 	error = arch_prctl_set_branch_landing_pad_state(me, arg3);
     2904 + 	if (error)
     2905 + 		break;
     2906 + 	if (arg3 & PR_CFI_LOCK && !(arg3 & PR_CFI_DISABLE))
     2907 + 		error = arch_prctl_lock_branch_landing_pad_state(me);
2906 2908 	break;
2907 2909 	default:
2908 2910 		trace_task_prctl_unknown(option, arg2, arg3, arg4, arg5);
+15 -21
tools/perf/trace/beauty/include/uapi/linux/prctl.h
···
397 397 # define PR_RSEQ_SLICE_EXT_ENABLE 0x01
398 398
399 399 /*
400    -  * Get the current indirect branch tracking configuration for the current
401    -  * thread, this will be the value configured via PR_SET_INDIR_BR_LP_STATUS.
   400 +  * Get or set the control flow integrity (CFI) configuration for the
   401 +  * current thread.
   402 +  *
   403 +  * Some per-thread control flow integrity settings are not yet
   404 +  * controlled through this prctl(); see for example
   405 +  * PR_{GET,SET,LOCK}_SHADOW_STACK_STATUS
402 406  */
403    - #define PR_GET_INDIR_BR_LP_STATUS 80
404    -
   407 + #define PR_GET_CFI 80
   408 + #define PR_SET_CFI 81
405 409 /*
406    - * Set the indirect branch tracking configuration. PR_INDIR_BR_LP_ENABLE will
407    - * enable cpu feature for user thread, to track all indirect branches and ensure
408    - * they land on arch defined landing pad instruction.
409    - * x86 - If enabled, an indirect branch must land on an ENDBRANCH instruction.
410    - * arch64 - If enabled, an indirect branch must land on a BTI instruction.
411    - * riscv - If enabled, an indirect branch must land on an lpad instruction.
412    - * PR_INDIR_BR_LP_DISABLE will disable feature for user thread and indirect
413    - * branches will no more be tracked by cpu to land on arch defined landing pad
414    - * instruction.
   410 + * Forward-edge CFI variants (excluding ARM64 BTI, which has its own
   411 + * prctl()s).
415 412  */
416    - #define PR_SET_INDIR_BR_LP_STATUS 81
417    - # define PR_INDIR_BR_LP_ENABLE (1UL << 0)
   413 + #define PR_CFI_BRANCH_LANDING_PADS 0
   414 + /* Return and control values for PR_{GET,SET}_CFI */
   415 + # define PR_CFI_ENABLE _BITUL(0)
   416 + # define PR_CFI_DISABLE _BITUL(1)
   417 + # define PR_CFI_LOCK _BITUL(2)
418 418
419    - /*
420    - * Prevent further changes to the specified indirect branch tracking
421    - * configuration. All bits may be locked via this call, including
422    - * undefined bits.
423    - */
424    - #define PR_LOCK_INDIR_BR_LP_STATUS 82
425 419
426 420 #endif /* _LINUX_PRCTL_H */
+7 -6
tools/testing/selftests/riscv/cfi/cfitests.c
···
94 94 	}
95 95
96 96 	switch (ptrace_test_num) {
97    - #define CFI_ENABLE_MASK (PTRACE_CFI_LP_EN_STATE | \
98    - 			 PTRACE_CFI_SS_EN_STATE | \
99    - 			 PTRACE_CFI_SS_PTR_STATE)
   97 + #define CFI_ENABLE_MASK (PTRACE_CFI_BRANCH_LANDING_PAD_EN_STATE | \
   98 + 			 PTRACE_CFI_SHADOW_STACK_EN_STATE | \
   99 + 			 PTRACE_CFI_SHADOW_STACK_PTR_STATE)
100 100 	case 0:
101 101 		if ((cfi_reg.cfi_status.cfi_state & CFI_ENABLE_MASK) != CFI_ENABLE_MASK)
102 102 			ksft_exit_fail_msg("%s: ptrace_getregset failed, %llu\n", __func__,
···
106 106 				__func__);
107 107 		break;
108 108 	case 1:
109    - 		if (!(cfi_reg.cfi_status.cfi_state & PTRACE_CFI_ELP_STATE))
   109 + 		if (!(cfi_reg.cfi_status.cfi_state &
   110 + 		      PTRACE_CFI_BRANCH_EXPECTED_LANDING_PAD_STATE))
110 111 			ksft_exit_fail_msg("%s: elp must have been set\n", __func__);
111 112 		/* clear elp state. not interested in anything else */
112 113 		cfi_reg.cfi_status.cfi_state = 0;
···
146 145 	 * pads for user mode except lighting up a bit in senvcfg via a prctl.
147 146 	 * Enable landing pad support throughout the execution of the test binary.
148 147 	 */
149    - 	ret = my_syscall5(__NR_prctl, PR_GET_INDIR_BR_LP_STATUS, &lpad_status, 0, 0, 0);
   148 + 	ret = my_syscall5(__NR_prctl, PR_GET_CFI, PR_CFI_BRANCH_LANDING_PADS, &lpad_status, 0, 0);
150 149 	if (ret)
151 150 		ksft_exit_fail_msg("Get landing pad status failed with %d\n", ret);
152 151
153    - 	if (!(lpad_status & PR_INDIR_BR_LP_ENABLE))
   152 + 	if (!(lpad_status & PR_CFI_ENABLE))
154 153 		ksft_exit_fail_msg("Landing pad is not enabled, should be enabled via glibc\n");
155 154
156 155 	ret = my_syscall5(__NR_prctl, PR_GET_SHADOW_STACK_STATUS, &ss_status, 0, 0, 0);