
bpf: Fall back to nospec for Spectre v1

This implements the core of the series and causes the verifier to fall
back to mitigating Spectre v1 using speculation barriers. The approach
was presented at LPC'24 [1] and RAID'24 [2].

If we find any forbidden behavior on a speculative path, we insert a
nospec (e.g., an lfence speculation barrier on x86) before the
instruction and stop verifying the path. Conversely, while verifying a
speculative path, we can stop verification of that path as soon as we
encounter a nospec instruction.
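A minimal sketch of this fallback logic (a hypothetical, heavily
simplified model, not the actual verifier code; the real implementation
lives in do_check() and uses insn_aux_data->nospec, names here like
verify_path() and unsafe_spec are illustrative only):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define MAX_INSNS 16

struct insn_aux {
	bool nospec; /* patch a speculation barrier before this insn */
};

/* unsafe_spec[pc] marks insns that would error when reached
 * speculatively. Returns 0 if the path verifies, a negative errno if
 * the program must be rejected.
 */
static int verify_path(bool speculative, const bool *unsafe_spec,
		       struct insn_aux *aux, int n_insns)
{
	for (int pc = 0; pc < n_insns; pc++) {
		/* Stop verifying a speculative path at a nospec: the
		 * barrier prevents all later insns from executing
		 * speculatively anyway.
		 */
		if (speculative && aux[pc].nospec)
			return 0;
		if (unsafe_spec[pc]) {
			if (!speculative)
				return -EACCES; /* architecturally unsafe */
			/* Recoverable on a speculative path: insert a
			 * nospec before pc and stop exploring.
			 */
			aux[pc].nospec = true;
			return 0;
		}
	}
	return 0;
}
```

Under this model, a speculative path that reaches an unsafe insn is
healed by marking the insn for nospec patching, while the same insn on
an architectural path still rejects the program.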

A minimal example program would look as follows:

A = true
B = true
if A goto e
f()
if B goto e
unsafe()
e: exit

There are the following speculative and non-speculative paths
(`cur->speculative` and `speculative` referring to the value of the
push_stack() parameters):

- A = true
- B = true
- if A goto e
- A && !cur->speculative && !speculative
- exit
- !A && !cur->speculative && speculative
- f()
- if B goto e
- B && cur->speculative && !speculative
- exit
- !B && cur->speculative && speculative
- unsafe()
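The enumeration above can be reproduced with a tiny branch-exploration
sketch (a hypothetical model of push_stack()'s speculative flag, not
verifier code; explore() and the instruction names are made up for
illustration). Because A and B are known-true, each branch's taken
successor is followed with the current speculative flag and the
fall-through is pushed with speculative = true; unsafe() is only ever
reached speculatively:

```c
#include <assert.h>
#include <stdbool.h>

/* Program from the example: 0: A=true, 1: B=true, 2: if A goto 6,
 * 3: f(), 4: if B goto 6, 5: unsafe(), 6: exit.
 */
enum { I_SET_A, I_SET_B, I_IF_A, I_CALL_F, I_IF_B, I_UNSAFE, I_EXIT };

static int paths, unsafe_arch, unsafe_spec;

static void explore(int pc, bool speculative)
{
	for (;;) {
		switch (pc) {
		case I_IF_A:
		case I_IF_B:
			/* fall-through successor: speculative path */
			explore(pc + 1, true);
			pc = I_EXIT; /* taken successor, same spec flag */
			continue;
		case I_UNSAFE:
			if (speculative)
				unsafe_spec++;
			else
				unsafe_arch++;
			pc++;
			continue;
		case I_EXIT:
			paths++;
			return;
		default:
			pc++; /* A=true, B=true, f(): fall through */
		}
	}
}
```

Running explore(I_SET_A, false) terminates three paths in total (one
architectural, two speculative), with unsafe() reached exactly once and
only speculatively.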

If f() contains any unsafe behavior under Spectre v1 and the unsafe
behavior matches `state->speculative &&
error_recoverable_with_nospec(err)`, do_check() will now add a nospec
before f() instead of rejecting the program:

A = true
B = true
if A goto e
nospec
f()
if B goto e
unsafe()
e: exit

The algorithm also takes advantage of nospec instructions inserted for
other reasons (e.g., Spectre v4). Taking the program above as an
example, speculative path exploration can stop before f() if a nospec
was inserted there for Spectre v4 sanitization.

In this example, all instructions after the nospec are dead code (and
with the nospec they are also dead code speculatively).

For this, the algorithm relies on the fact that speculation barriers
generally prevent all later instructions from executing if the
speculation was not correct:

* On Intel x86_64, lfence acts as a full speculation barrier, not only
as a load fence [3]:

An LFENCE instruction or a serializing instruction will ensure that
no later instructions execute, even speculatively, until all prior
instructions complete locally. [...] Inserting an LFENCE instruction
after a bounds check prevents later operations from executing before
the bound check completes.

This was experimentally confirmed in [4].

* On AMD x86_64, lfence is dispatch-serializing [5] (this requires MSR
C001_1029[1] to be set if the MSR is supported, which happens in
init_amd()). AMD further specifies that "A dispatch serializing
instruction forces the processor to retire the serializing instruction
and all previous instructions before the next instruction is executed"
[8]. As dispatch is not specific to memory loads or branches, lfence
therefore affects all instructions. Also, if retiring a branch means its
PC change becomes architectural (as it should), this means any "wrong"
speculation is aborted as required for this series.

* ARM's SB speculation barrier instruction also affects "any instruction
that appears later in the program order than the barrier" [6].

* PowerPC's barrier also affects all subsequent instructions [7]:

[...] executing an ori R31,R31,0 instruction ensures that all
instructions preceding the ori R31,R31,0 instruction have completed
before the ori R31,R31,0 instruction completes, and that no
subsequent instructions are initiated, even out-of-order, until
after the ori R31,R31,0 instruction completes. The ori R31,R31,0
instruction may complete before storage accesses associated with
instructions preceding the ori R31,R31,0 instruction have been
performed

Regarding the example, this implies that `if B goto e` will not execute
before `if A goto e` completes. Once `if A goto e` completes, the CPU
should find that the speculation was wrong and continue with `exit`.

If there is any other path that leads to `if B goto e` (and therefore
`unsafe()`) without going through `if A goto e`, then a nospec will
still be needed there. However, this patch assumes this other path will
be explored separately and therefore be discovered by the verifier even
if the exploration discussed here stops at the nospec.

This patch furthermore has the unfortunate consequence that Spectre v1
mitigations now only support architectures which implement BPF_NOSPEC.
Before this commit, Spectre v1 mitigations prevented exploits by
rejecting the programs on all architectures. Because some JITs do not
implement BPF_NOSPEC, this patch therefore may regress unpriv BPF's
security to a limited extent:

* The regression is limited to systems that are vulnerable to Spectre
v1, have unprivileged BPF enabled, and do NOT emit insns for
BPF_NOSPEC. The latter is not the case for x86 64- and 32-bit, arm64,
and powerpc 64-bit; they are therefore not affected by the regression.
According to commit a6f6a95f2580 ("LoongArch, bpf: Fix jit to skip
speculation barrier opcode"), LoongArch is not vulnerable to Spectre
v1 and therefore also not affected by the regression.

* To the best of my knowledge this regression may therefore only affect
MIPS. This is deemed acceptable because unpriv BPF is still disabled
there by default. As stated in a previous commit, BPF_NOSPEC could be
implemented for MIPS based on GCC's speculation_barrier
implementation.

* It is unclear which other architectures (besides x86 64- and 32-bit,
ARM64, PowerPC 64-bit, LoongArch, and MIPS) supported by the kernel
are vulnerable to Spectre v1. Also, it is not clear if barriers are
available on these architectures. Implementing BPF_NOSPEC on these
architectures therefore is non-trivial. Searching GCC and the kernel
for speculation barrier implementations for these architectures
yielded no result.

* If any of those regressed systems is also vulnerable to Spectre v4,
the system was already vulnerable to Spectre v4 attacks based on
unpriv BPF before this patch and the impact is therefore further
limited.

As an alternative to regressing security, one could still reject
programs if the architecture does not emit BPF_NOSPEC (e.g., by removing
the empty BPF_NOSPEC case from all JITs except for LoongArch, where it
appears justified). However, this would cause rejections on these archs
that are likely unfounded in the vast majority of cases.

Some tests now succeed where we previously had a false positive (i.e., a
rejection). Change them to reflect where the nospec should be inserted
(using __xlated_unpriv) and modify the error message if the nospec
mitigates a problem that previously shadowed another problem (in that
case __xlated_unpriv does not work, therefore just add a comment).

To avoid duplicating the architecture ifdef whenever we check for nospec
insns using __xlated_unpriv, define SPEC_V1 once. This also improves
readability. PowerPC could probably also be added here; omit it for now
because the BPF CI currently does not include a test for it.

Limit error_recoverable_with_nospec() to EPERM, EACCES, and EINVAL
(rather than everything except EFAULT and ENOMEM), as this already has
the desired effect for most real-world programs. I briefly went through
all the occurrences of EPERM, EINVAL, and EACCES in verifier.c to
validate that catching them like this makes sense.
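The helper this patch adds to kernel/bpf/verifier.c accordingly boils
down to (self-contained excerpt; the comment paraphrases the one in the
patch):

```c
#include <errno.h>
#include <stdbool.h>

/* Only non-fatal errors that are allowed to occur during speculative
 * verification are recoverable by inserting a nospec; something like
 * ENOMEM would likely just re-occur on the next architectural path.
 */
static bool error_recoverable_with_nospec(int err)
{
	return err == -EPERM || err == -EACCES || err == -EINVAL;
}
```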

Thanks to Dustin for their help in checking the vendor documentation.

[1] https://lpc.events/event/18/contributions/1954/ ("Mitigating
Spectre-PHT using Speculation Barriers in Linux eBPF")
[2] https://arxiv.org/pdf/2405.00078 ("VeriFence: Lightweight and
Precise Spectre Defenses for Untrusted Linux Kernel Extensions")
[3] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/runtime-speculative-side-channel-mitigations.html
("Managed Runtime Speculative Execution Side Channel Mitigations")
[4] https://dl.acm.org/doi/pdf/10.1145/3359789.3359837 ("Speculator: a
tool to analyze speculative execution attacks and mitigations" -
Section 4.6 "Stopping Speculative Execution")
[5] https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/programmer-references/software-techniques-for-managing-speculation.pdf
("White Paper - SOFTWARE TECHNIQUES FOR MANAGING SPECULATION ON AMD
PROCESSORS - REVISION 5.09.23")
[6] https://developer.arm.com/documentation/ddi0597/2020-12/Base-Instructions/SB--Speculation-Barrier-
("SB - Speculation Barrier - Arm Armv8-A A32/T32 Instruction Set
Architecture (2020-12)")
[7] https://wiki.raptorcs.com/w/images/5/5f/OPF_PowerISA_v3.1C.pdf
("Power ISA™ - Version 3.1C - May 26, 2024 - Section 9.2.1 of Book
III")
[8] https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/programmer-references/40332.pdf
("AMD64 Architecture Programmer’s Manual Volumes 1–5 - Revision 4.08
- April 2024 - 7.6.4 Serializing Instructions")

Signed-off-by: Luis Gerhorst <luis.gerhorst@fau.de>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Henriette Herzog <henriette.herzog@rub.de>
Cc: Dustin Nguyen <nguyen@cs.fau.de>
Cc: Maximilian Ott <ott@cs.fau.de>
Cc: Milan Stephan <milan.stephan@fau.de>
Link: https://lore.kernel.org/r/20250603212428.338473-1-luis.gerhorst@fau.de
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Authored by Luis Gerhorst, committed by Alexei Starovoitov
d6f1c85f 9124a450

Diffstat: +184 -54
include/linux/bpf_verifier.h (+1)

···
 	u64 map_key_state; /* constant (32 bit) key tracking for maps */
 	int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
 	u32 seen; /* this insn was processed by the verifier at env->pass_cnt */
+	bool nospec; /* do not execute this instruction speculatively */
 	bool nospec_result; /* result is unsafe under speculation, nospec must follow */
 	bool zext_dst; /* this insn zero extends dst reg */
 	bool needs_zext; /* alu op needs to clear upper bits */
kernel/bpf/verifier.c (+74 -4)

···
 	return 0;
 }

+static bool error_recoverable_with_nospec(int err)
+{
+	/* Should only return true for non-fatal errors that are allowed to
+	 * occur during speculative verification. For these we can insert a
+	 * nospec and the program might still be accepted. Do not include
+	 * something like ENOMEM because it is likely to re-occur for the next
+	 * architectural path once it has been recovered-from in all speculative
+	 * paths.
+	 */
+	return err == -EPERM || err == -EACCES || err == -EINVAL;
+}
+
 static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
 					     int insn_idx, int prev_insn_idx,
 					     bool speculative)
···
 	return -ENOTSUPP;
 }

-static struct bpf_insn_aux_data *cur_aux(struct bpf_verifier_env *env)
+static struct bpf_insn_aux_data *cur_aux(const struct bpf_verifier_env *env)
 {
 	return &env->insn_aux_data[env->insn_idx];
 }
···
 static bool can_skip_alu_sanitation(const struct bpf_verifier_env *env,
 				    const struct bpf_insn *insn)
 {
-	return env->bypass_spec_v1 || BPF_SRC(insn->code) == BPF_K;
+	return env->bypass_spec_v1 ||
+	       BPF_SRC(insn->code) == BPF_K ||
+	       cur_aux(env)->nospec;
 }

 static int update_alu_sanitation_state(struct bpf_insn_aux_data *aux,
···
 		sanitize_mark_insn_seen(env);
 		prev_insn_idx = env->insn_idx;

+		/* Reduce verification complexity by stopping speculative path
+		 * verification when a nospec is encountered.
+		 */
+		if (state->speculative && cur_aux(env)->nospec)
+			goto process_bpf_exit;
+
 		err = do_check_insn(env, &do_print_state);
-		if (err < 0) {
+		if (state->speculative && error_recoverable_with_nospec(err)) {
+			/* Prevent this speculative path from ever reaching the
+			 * insn that would have been unsafe to execute.
+			 */
+			cur_aux(env)->nospec = true;
+			/* If it was an ADD/SUB insn, potentially remove any
+			 * markings for alu sanitization.
+			 */
+			cur_aux(env)->alu_state = 0;
+			goto process_bpf_exit;
+		} else if (err < 0) {
 			return err;
 		} else if (err == PROCESS_BPF_EXIT) {
+			goto process_bpf_exit;
+		}
+		WARN_ON_ONCE(err);
+
+		if (state->speculative && cur_aux(env)->nospec_result) {
+			/* If we are on a path that performed a jump-op, this
+			 * may skip a nospec patched-in after the jump. This can
+			 * currently never happen because nospec_result is only
+			 * used for the write-ops
+			 * `*(size*)(dst_reg+off)=src_reg|imm32` which must
+			 * never skip the following insn. Still, add a warning
+			 * to document this in case nospec_result is used
+			 * elsewhere in the future.
+			 */
+			WARN_ON_ONCE(env->insn_idx != prev_insn_idx + 1);
 process_bpf_exit:
 			mark_verifier_state_scratched(env);
 			update_branch_counts(env, env->cur_state);
···
 				continue;
 			}
 		}
-		WARN_ON_ONCE(err);
 	}

 	return 0;
···
 		bpf_convert_ctx_access_t convert_ctx_access;
 		u8 mode;

+		if (env->insn_aux_data[i + delta].nospec) {
+			WARN_ON_ONCE(env->insn_aux_data[i + delta].alu_state);
+			struct bpf_insn patch[] = {
+				BPF_ST_NOSPEC(),
+				*insn,
+			};
+
+			cnt = ARRAY_SIZE(patch);
+			new_prog = bpf_patch_insn_data(env, i + delta, patch, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			/* This can not be easily merged with the
+			 * nospec_result-case, because an insn may require a
+			 * nospec before and after itself. Therefore also do not
+			 * 'continue' here but potentially apply further
+			 * patching to insn. *insn should equal patch[1] now.
+			 */
+		}
+
 		if (insn->code == (BPF_LDX | BPF_MEM | BPF_B) ||
 		    insn->code == (BPF_LDX | BPF_MEM | BPF_H) ||
 		    insn->code == (BPF_LDX | BPF_MEM | BPF_W) ||
···
 		if (type == BPF_WRITE &&
 		    env->insn_aux_data[i + delta].nospec_result) {
+			/* nospec_result is only used to mitigate Spectre v4 and
+			 * to limit verification-time for Spectre v1.
+			 */
 			struct bpf_insn patch[] = {
 				*insn,
 				BPF_ST_NOSPEC(),
tools/testing/selftests/bpf/progs/bpf_misc.h (+4)

···
 #define CAN_USE_LOAD_ACQ_STORE_REL
 #endif

+#if defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)
+#define SPEC_V1
+#endif
+
 #endif
tools/testing/selftests/bpf/progs/verifier_and.c (+7 -1)

···
 SEC("socket")
 __description("check known subreg with unknown reg")
-__success __failure_unpriv __msg_unpriv("R1 !read_ok")
+__success __success_unpriv
 __retval(0)
+#ifdef SPEC_V1
+__xlated_unpriv("if w0 < 0x1 goto pc+2")
+__xlated_unpriv("nospec") /* inserted to prevent `R1 !read_ok'` */
+__xlated_unpriv("goto pc-1") /* `r1 = *(u32*)(r1 + 512)`, sanitized dead code */
+__xlated_unpriv("r0 = 0")
+#endif
 __naked void known_subreg_with_unknown_reg(void)
 {
 	asm volatile ("					\
tools/testing/selftests/bpf/progs/verifier_bounds.c (+49 -12)

···
 SEC("socket")
 __description("bounds check mixed 32bit and 64bit arithmetic. test1")
-__success __failure_unpriv __msg_unpriv("R0 invalid mem access 'scalar'")
+__success __success_unpriv
 __retval(0)
+#ifdef SPEC_V1
+__xlated_unpriv("goto pc+2")
+__xlated_unpriv("nospec") /* inserted to prevent `R0 invalid mem access 'scalar'` */
+__xlated_unpriv("goto pc-1") /* sanitized dead code */
+__xlated_unpriv("exit")
+#endif
 __naked void _32bit_and_64bit_arithmetic_test1(void)
 {
 	asm volatile ("					\
···
 SEC("socket")
 __description("bounds check mixed 32bit and 64bit arithmetic. test2")
-__success __failure_unpriv __msg_unpriv("R0 invalid mem access 'scalar'")
+__success __success_unpriv
 __retval(0)
+#ifdef SPEC_V1
+__xlated_unpriv("goto pc+2")
+__xlated_unpriv("nospec") /* inserted to prevent `R0 invalid mem access 'scalar'` */
+__xlated_unpriv("goto pc-1") /* sanitized dead code */
+__xlated_unpriv("exit")
+#endif
 __naked void _32bit_and_64bit_arithmetic_test2(void)
 {
 	asm volatile ("					\
···
 SEC("socket")
 __description("bounds check for reg = 0, reg xor 1")
-__success __failure_unpriv
-__msg_unpriv("R0 min value is outside of the allowed memory range")
+__success __success_unpriv
 __retval(0)
+#ifdef SPEC_V1
+__xlated_unpriv("if r1 != 0x0 goto pc+2")
+__xlated_unpriv("nospec") /* inserted to prevent `R0 min value is outside of the allowed memory range` */
+__xlated_unpriv("goto pc-1") /* sanitized dead code */
+__xlated_unpriv("r0 = 0")
+#endif
 __naked void reg_0_reg_xor_1(void)
 {
 	asm volatile ("					\
···
 SEC("socket")
 __description("bounds check for reg32 = 0, reg32 xor 1")
-__success __failure_unpriv
-__msg_unpriv("R0 min value is outside of the allowed memory range")
+__success __success_unpriv
 __retval(0)
+#ifdef SPEC_V1
+__xlated_unpriv("if w1 != 0x0 goto pc+2")
+__xlated_unpriv("nospec") /* inserted to prevent `R0 min value is outside of the allowed memory range` */
+__xlated_unpriv("goto pc-1") /* sanitized dead code */
+__xlated_unpriv("r0 = 0")
+#endif
 __naked void reg32_0_reg32_xor_1(void)
 {
 	asm volatile ("					\
···
 SEC("socket")
 __description("bounds check for reg = 2, reg xor 3")
-__success __failure_unpriv
-__msg_unpriv("R0 min value is outside of the allowed memory range")
+__success __success_unpriv
 __retval(0)
+#ifdef SPEC_V1
+__xlated_unpriv("if r1 > 0x0 goto pc+2")
+__xlated_unpriv("nospec") /* inserted to prevent `R0 min value is outside of the allowed memory range` */
+__xlated_unpriv("goto pc-1") /* sanitized dead code */
+__xlated_unpriv("r0 = 0")
+#endif
 __naked void reg_2_reg_xor_3(void)
 {
 	asm volatile ("					\
···
 SEC("socket")
 __description("bounds check for reg > 0, reg xor 3")
-__success __failure_unpriv
-__msg_unpriv("R0 min value is outside of the allowed memory range")
+__success __success_unpriv
 __retval(0)
+#ifdef SPEC_V1
+__xlated_unpriv("if r1 >= 0x0 goto pc+2")
+__xlated_unpriv("nospec") /* inserted to prevent `R0 min value is outside of the allowed memory range` */
+__xlated_unpriv("goto pc-1") /* sanitized dead code */
+__xlated_unpriv("r0 = 0")
+#endif
 __naked void reg_0_reg_xor_3(void)
 {
 	asm volatile ("					\
···
 SEC("socket")
 __description("bounds check for reg32 > 0, reg32 xor 3")
-__success __failure_unpriv
-__msg_unpriv("R0 min value is outside of the allowed memory range")
+__success __success_unpriv
 __retval(0)
+#ifdef SPEC_V1
+__xlated_unpriv("if w1 >= 0x0 goto pc+2")
+__xlated_unpriv("nospec") /* inserted to prevent `R0 min value is outside of the allowed memory range` */
+__xlated_unpriv("goto pc-1") /* sanitized dead code */
+__xlated_unpriv("r0 = 0")
+#endif
 __naked void reg32_0_reg32_xor_3(void)
 {
 	asm volatile ("					\
tools/testing/selftests/bpf/progs/verifier_movsx.c (+14 -2)

···
 SEC("socket")
 __description("MOV32SX, S8, var_off not u32_max, positive after s8 extension")
 __success __retval(0)
-__failure_unpriv __msg_unpriv("frame pointer is read only")
+__success_unpriv
+#ifdef SPEC_V1
+__xlated_unpriv("w0 = 0")
+__xlated_unpriv("exit")
+__xlated_unpriv("nospec") /* inserted to prevent `frame pointer is read only` */
+__xlated_unpriv("goto pc-1")
+#endif
 __naked void mov64sx_s32_varoff_2(void)
 {
 	asm volatile ("					\
···
 SEC("socket")
 __description("MOV32SX, S8, var_off not u32_max, negative after s8 extension")
 __success __retval(0)
-__failure_unpriv __msg_unpriv("frame pointer is read only")
+__success_unpriv
+#ifdef SPEC_V1
+__xlated_unpriv("w0 = 0")
+__xlated_unpriv("exit")
+__xlated_unpriv("nospec") /* inserted to prevent `frame pointer is read only` */
+__xlated_unpriv("goto pc-1")
+#endif
 __naked void mov64sx_s32_varoff_3(void)
 {
 	asm volatile ("					\
tools/testing/selftests/bpf/progs/verifier_unpriv.c (+7 -1)

···
 SEC("socket")
 __description("alu32: mov u32 const")
-__success __failure_unpriv __msg_unpriv("R7 invalid mem access 'scalar'")
+__success __success_unpriv
 __retval(0)
+#ifdef SPEC_V1
+__xlated_unpriv("if r0 == 0x0 goto pc+2")
+__xlated_unpriv("nospec") /* inserted to prevent `R7 invalid mem access 'scalar'` */
+__xlated_unpriv("goto pc-1") /* sanitized dead code */
+__xlated_unpriv("exit")
+#endif
 __naked void alu32_mov_u32_const(void)
 {
 	asm volatile ("					\
tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c (+12 -4)

···
 SEC("socket")
 __description("map access: mixing value pointer and scalar, 1")
-__success __failure_unpriv __msg_unpriv("R2 pointer comparison prohibited")
+__success __failure_unpriv
+__msg_unpriv("R2 tried to add from different maps, paths or scalars, pointer arithmetic with it prohibited for !root")
 __retval(0)
 __naked void value_pointer_and_scalar_1(void)
 {
···
 l3_%=:	/* branch B */				\
 	r0 = 0x13371337;			\
 	/* verifier follows fall-through */	\
+	/* unpriv: nospec (inserted to prevent `R2 pointer comparison prohibited`) */\
 	if r2 != 0x100000 goto l4_%=;		\
 	r0 = 0;					\
 	exit;					\
···
 SEC("socket")
 __description("map access: mixing value pointer and scalar, 2")
-__success __failure_unpriv __msg_unpriv("R0 invalid mem access 'scalar'")
+__success __failure_unpriv
+__msg_unpriv("R2 tried to add from different maps, paths or scalars, pointer arithmetic with it prohibited for !root")
 __retval(0)
 __naked void value_pointer_and_scalar_2(void)
 {
···
 	/* prevent dead code sanitization, rejected	\
 	 * via branch B however				\
 	 */						\
+	/* unpriv: nospec (inserted to prevent `R0 invalid mem access 'scalar'`) */\
 	r0 = *(u8*)(r0 + 0);				\
 	r0 = 0;						\
 	exit;						\
···
 SEC("socket")
 __description("map access: value_ptr -= unknown scalar, 2")
-__success __failure_unpriv
-__msg_unpriv("R0 pointer arithmetic of map value goes out of range")
+__success __success_unpriv
 __retval(1)
+#ifdef SPEC_V1
+__xlated_unpriv("r1 &= 7")
+__xlated_unpriv("nospec") /* inserted to prevent `R0 pointer arithmetic of map value goes out of range` */
+__xlated_unpriv("r0 -= r1")
+#endif
 __naked void value_ptr_unknown_scalar_2_2(void)
 {
 	asm volatile ("					\
tools/testing/selftests/bpf/verifier/dead_code.c (+1 -2)

···
 	"dead code: start",
 	.insns = {
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	/* unpriv: nospec (inserted to prevent "R9 !read_ok") */
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 7),
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, -4),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R9 !read_ok",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 7,
 },
tools/testing/selftests/bpf/verifier/jmp32.c (+11 -22)

···
 	BPF_JMP32_IMM(BPF_JSET, BPF_REG_7, 0x10, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP32_IMM(BPF_JGE, BPF_REG_7, 0x10, 1),
+	/* unpriv: nospec (inserted to prevent "R9 !read_ok") */
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R9 !read_ok",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 },
 {
···
 	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_7, 0x10, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP32_IMM(BPF_JSGE, BPF_REG_7, 0xf, 1),
+	/* unpriv: nospec (inserted to prevent "R9 !read_ok") */
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R9 !read_ok",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 },
 {
···
 	BPF_JMP32_IMM(BPF_JNE, BPF_REG_7, 0x10, 1),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0x10, 1),
 	BPF_EXIT_INSN(),
+	/* unpriv: nospec (inserted to prevent "R9 !read_ok") */
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R9 !read_ok",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 },
 {
···
 	BPF_JMP32_REG(BPF_JGE, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP32_IMM(BPF_JGE, BPF_REG_7, 0x7ffffff0, 1),
+	/* unpriv: nospec (inserted to prevent "R0 invalid mem access 'scalar'") */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 2,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
···
 	BPF_JMP32_REG(BPF_JGT, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 0x7ffffff0, 1),
+	/* unpriv: nospec (inserted to prevent "R0 invalid mem access 'scalar'") */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 2,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
···
 	BPF_JMP32_REG(BPF_JLE, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP32_IMM(BPF_JLE, BPF_REG_7, 0x7ffffff0, 1),
+	/* unpriv: nospec (inserted to prevent "R0 invalid mem access 'scalar'") */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 2,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
···
 	BPF_JMP32_REG(BPF_JLT, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JSLT, BPF_REG_7, 0x7ffffff0, 1),
+	/* unpriv: nospec (inserted to prevent "R0 invalid mem access 'scalar'") */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 2,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
···
 	BPF_JMP32_REG(BPF_JSGE, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0x7ffffff0, 1),
+	/* unpriv: nospec (inserted to prevent "R0 invalid mem access 'scalar'") */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 2,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
···
 	BPF_JMP32_REG(BPF_JSGT, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, -2, 1),
+	/* unpriv: nospec (inserted to prevent "R0 invalid mem access 'scalar'") */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 2,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
···
 	BPF_JMP32_REG(BPF_JSLE, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JSLE, BPF_REG_7, 0x7ffffff0, 1),
+	/* unpriv: nospec (inserted to prevent "R0 invalid mem access 'scalar'") */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 2,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
···
 	BPF_JMP32_REG(BPF_JSLT, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP32_IMM(BPF_JSLT, BPF_REG_7, -1, 1),
+	/* unpriv: nospec (inserted to prevent "R0 invalid mem access 'scalar'") */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
-	.errstr_unpriv = "R0 invalid mem access 'scalar'",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 2,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
tools/testing/selftests/bpf/verifier/jset.c (+4 -6)

···
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
+	/* unpriv: nospec (inserted to prevent "R9 !read_ok") */
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
-	.errstr_unpriv = "R9 !read_ok",
-	.result_unpriv = REJECT,
 	.retval = 1,
 	.result = ACCEPT,
 },
···
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_IMM(BPF_OR, BPF_REG_0, 2),
 	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 3, 1),
+	/* unpriv: nospec (inserted to prevent "R9 !read_ok") */
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
-	.errstr_unpriv = "R9 !read_ok",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 },
 {
···
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xff),
 	BPF_JMP_IMM(BPF_JSET, BPF_REG_1, 0xf0, 3),
 	BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 0x10, 1),
+	/* unpriv: nospec (inserted to prevent "R9 !read_ok") */
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JSET, BPF_REG_1, 0x10, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0x10, 1),
+	/* unpriv: nospec (inserted to prevent "R9 !read_ok") */
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
-	.errstr_unpriv = "R9 !read_ok",
-	.result_unpriv = REJECT,
 	.result = ACCEPT,
 },