Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'locking-kcsan-2020-06-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull the Kernel Concurrency Sanitizer from Thomas Gleixner:
"The Kernel Concurrency Sanitizer (KCSAN) is a dynamic race detector,
which relies on compile-time instrumentation, and uses a
watchpoint-based sampling approach to detect races.

The feature was under development for quite some time and has already
found legitimate bugs.

Unfortunately it comes with a limitation, which was only understood
late in the development cycle:

It requires an up-to-date Clang-11 compiler

CLANG-11 is not yet released (scheduled for June), but it's the only
compiler today which handles the kernel requirements and especially
the annotations of functions to exclude them from KCSAN
instrumentation correctly.

These annotations really need to work so that low level entry code and
especially int3 text poke handling can be completely isolated.

A detailed discussion of the requirements and compiler issues can be
found here:

https://lore.kernel.org/lkml/CANpmjNMTsY_8241bS7=XAfqvZHFLrVEkv_uM4aDUWE_kh3Rvbw@mail.gmail.com/

We came to the conclusion that trying to work around compiler
limitations and bugs again would end up in a major trainwreck, so
requiring a working compiler seemed to be the best choice.

For Continuous Integration purposes the compiler restriction is
manageable, and that's where most xxSAN reports come from.

For a change this limitation might make GCC people actually look at
their bugs. Some issues with KASAN in GCC are 7 years old, and one was
'fixed' 3 years ago with a half-baked solution which 'solved' the
reported issue but not the underlying problem.

The KCSAN developers are also pondering the use of a GCC plugin to
become compiler-independent, but that's not something which will show
up in a few days.

Blocking KCSAN until widespread compiler support is available is not
really a good alternative, because the continuous growth of lockless
optimizations in the kernel demands proper tooling support"

* tag 'locking-kcsan-2020-06-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (76 commits)
compiler_types.h, kasan: Use __SANITIZE_ADDRESS__ instead of CONFIG_KASAN to decide inlining
compiler.h: Move function attributes to compiler_types.h
compiler.h: Avoid nested statement expression in data_race()
compiler.h: Remove data_race() and unnecessary checks from {READ,WRITE}_ONCE()
kcsan: Update Documentation to change supported compilers
kcsan: Remove 'noinline' from __no_kcsan_or_inline
kcsan: Pass option tsan-instrument-read-before-write to Clang
kcsan: Support distinguishing volatile accesses
kcsan: Restrict supported compilers
kcsan: Avoid inserting __tsan_func_entry/exit if possible
ubsan, kcsan: Don't combine sanitizer with kcov on clang
objtool, kcsan: Add kcsan_disable_current() and kcsan_enable_current_nowarn()
kcsan: Add __kcsan_{enable,disable}_current() variants
checkpatch: Warn about data_race() without comment
kcsan: Use GFP_ATOMIC under spin lock
Improve KCSAN documentation a bit
kcsan: Make reporting aware of KCSAN tests
kcsan: Fix function matching in report
kcsan: Change data_race() to no longer require marking racing accesses
kcsan: Move kcsan_{disable,enable}_current() to kcsan-checks.h
...

Diffstat: +4244 -592
Documentation/dev-tools/index.rst | +1

 ···
    kasan
    ubsan
    kmemleak
+   kcsan
    gdb-kernel-debugging
    kgdb
    kselftest
Documentation/dev-tools/kcsan.rst | +321 (new file)

The Kernel Concurrency Sanitizer (KCSAN)
========================================

The Kernel Concurrency Sanitizer (KCSAN) is a dynamic race detector, which
relies on compile-time instrumentation, and uses a watchpoint-based sampling
approach to detect races. KCSAN's primary purpose is to detect `data races`_.

Usage
-----

KCSAN requires Clang version 11 or later.

To enable KCSAN configure the kernel with::

    CONFIG_KCSAN = y

KCSAN provides several other configuration options to customize behaviour (see
the respective help text in ``lib/Kconfig.kcsan`` for more info).

Error reports
~~~~~~~~~~~~~

A typical data race report looks like this::

    ==================================================================
    BUG: KCSAN: data-race in generic_permission / kernfs_refresh_inode

    write to 0xffff8fee4c40700c of 4 bytes by task 175 on cpu 4:
     kernfs_refresh_inode+0x70/0x170
     kernfs_iop_permission+0x4f/0x90
     inode_permission+0x190/0x200
     link_path_walk.part.0+0x503/0x8e0
     path_lookupat.isra.0+0x69/0x4d0
     filename_lookup+0x136/0x280
     user_path_at_empty+0x47/0x60
     vfs_statx+0x9b/0x130
     __do_sys_newlstat+0x50/0xb0
     __x64_sys_newlstat+0x37/0x50
     do_syscall_64+0x85/0x260
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

    read to 0xffff8fee4c40700c of 4 bytes by task 166 on cpu 6:
     generic_permission+0x5b/0x2a0
     kernfs_iop_permission+0x66/0x90
     inode_permission+0x190/0x200
     link_path_walk.part.0+0x503/0x8e0
     path_lookupat.isra.0+0x69/0x4d0
     filename_lookup+0x136/0x280
     user_path_at_empty+0x47/0x60
     do_faccessat+0x11a/0x390
     __x64_sys_access+0x3c/0x50
     do_syscall_64+0x85/0x260
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

    Reported by Kernel Concurrency Sanitizer on:
    CPU: 6 PID: 166 Comm: systemd-journal Not tainted 5.3.0-rc7+ #1
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
    ==================================================================

The header of the report provides a short summary of the functions involved in
the race. It is followed by the access types and stack traces of the 2 threads
involved in the data race.

The other less common type of data race report looks like this::

    ==================================================================
    BUG: KCSAN: data-race in e1000_clean_rx_irq+0x551/0xb10

    race at unknown origin, with read to 0xffff933db8a2ae6c of 1 bytes by interrupt on cpu 0:
     e1000_clean_rx_irq+0x551/0xb10
     e1000_clean+0x533/0xda0
     net_rx_action+0x329/0x900
     __do_softirq+0xdb/0x2db
     irq_exit+0x9b/0xa0
     do_IRQ+0x9c/0xf0
     ret_from_intr+0x0/0x18
     default_idle+0x3f/0x220
     arch_cpu_idle+0x21/0x30
     do_idle+0x1df/0x230
     cpu_startup_entry+0x14/0x20
     rest_init+0xc5/0xcb
     arch_call_rest_init+0x13/0x2b
     start_kernel+0x6db/0x700

    Reported by Kernel Concurrency Sanitizer on:
    CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.3.0-rc7+ #2
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
    ==================================================================

This report is generated where it was not possible to determine the other
racing thread, but a race was inferred due to the data value of the watched
memory location having changed. These can occur either due to missing
instrumentation or e.g. DMA accesses. These reports will only be generated if
``CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN=y`` (selected by default).

Selective analysis
~~~~~~~~~~~~~~~~~~

It may be desirable to disable data race detection for specific accesses,
functions, compilation units, or entire subsystems. For static blacklisting,
the below options are available:

* KCSAN understands the ``data_race(expr)`` annotation, which tells KCSAN that
  any data races due to accesses in ``expr`` should be ignored and resulting
  behaviour when encountering a data race is deemed safe.

* Disabling data race detection for entire functions can be accomplished by
  using the function attribute ``__no_kcsan``::

    __no_kcsan
    void foo(void) {
        ...

  To dynamically limit for which functions to generate reports, see the
  `DebugFS interface`_ blacklist/whitelist feature.

  For ``__always_inline`` functions, replace ``__always_inline`` with
  ``__no_kcsan_or_inline`` (which implies ``__always_inline``)::

    static __no_kcsan_or_inline void foo(void) {
        ...

* To disable data race detection for a particular compilation unit, add to the
  ``Makefile``::

    KCSAN_SANITIZE_file.o := n

* To disable data race detection for all compilation units listed in a
  ``Makefile``, add to the respective ``Makefile``::

    KCSAN_SANITIZE := n

Furthermore, it is possible to tell KCSAN to show or hide entire classes of
data races, depending on preferences. These can be changed via the following
Kconfig options:

* ``CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY``: If enabled and a conflicting write
  is observed via a watchpoint, but the data value of the memory location was
  observed to remain unchanged, do not report the data race.

* ``CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC``: Assume that plain aligned writes
  up to word size are atomic by default. Assumes that such writes are not
  subject to unsafe compiler optimizations resulting in data races. The option
  causes KCSAN to not report data races due to conflicts where the only plain
  accesses are aligned writes up to word size.

DebugFS interface
~~~~~~~~~~~~~~~~~

The file ``/sys/kernel/debug/kcsan`` provides the following interface:

* Reading ``/sys/kernel/debug/kcsan`` returns various runtime statistics.

* Writing ``on`` or ``off`` to ``/sys/kernel/debug/kcsan`` allows turning KCSAN
  on or off, respectively.

* Writing ``!some_func_name`` to ``/sys/kernel/debug/kcsan`` adds
  ``some_func_name`` to the report filter list, which (by default) blacklists
  reporting data races where either one of the top stackframes are a function
  in the list.

* Writing either ``blacklist`` or ``whitelist`` to ``/sys/kernel/debug/kcsan``
  changes the report filtering behaviour. For example, the blacklist feature
  can be used to silence frequently occurring data races; the whitelist feature
  can help with reproduction and testing of fixes.

Tuning performance
~~~~~~~~~~~~~~~~~~

Core parameters that affect KCSAN's overall performance and bug detection
ability are exposed as kernel command-line arguments whose defaults can also be
changed via the corresponding Kconfig options.

* ``kcsan.skip_watch`` (``CONFIG_KCSAN_SKIP_WATCH``): Number of per-CPU memory
  operations to skip, before another watchpoint is set up. Setting up
  watchpoints more frequently will result in the likelihood of races to be
  observed to increase. This parameter has the most significant impact on
  overall system performance and race detection ability.

* ``kcsan.udelay_task`` (``CONFIG_KCSAN_UDELAY_TASK``): For tasks, the
  microsecond delay to stall execution after a watchpoint has been set up.
  Larger values result in the window in which we may observe a race to
  increase.

* ``kcsan.udelay_interrupt`` (``CONFIG_KCSAN_UDELAY_INTERRUPT``): For
  interrupts, the microsecond delay to stall execution after a watchpoint has
  been set up. Interrupts have tighter latency requirements, and their delay
  should generally be smaller than the one chosen for tasks.

They may be tweaked at runtime via ``/sys/module/kcsan/parameters/``.

Data Races
----------

In an execution, two memory accesses form a *data race* if they *conflict*,
they happen concurrently in different threads, and at least one of them is a
*plain access*; they *conflict* if both access the same memory location, and at
least one is a write. For a more thorough discussion and definition, see `"Plain
Accesses and Data Races" in the LKMM`_.

.. _"Plain Accesses and Data Races" in the LKMM: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/memory-model/Documentation/explanation.txt#n1922

Relationship with the Linux-Kernel Memory Consistency Model (LKMM)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The LKMM defines the propagation and ordering rules of various memory
operations, which gives developers the ability to reason about concurrent code.
Ultimately this allows to determine the possible executions of concurrent code,
and if that code is free from data races.

KCSAN is aware of *marked atomic operations* (``READ_ONCE``, ``WRITE_ONCE``,
``atomic_*``, etc.), but is oblivious of any ordering guarantees and simply
assumes that memory barriers are placed correctly. In other words, KCSAN
assumes that as long as a plain access is not observed to race with another
conflicting access, memory operations are correctly ordered.

This means that KCSAN will not report *potential* data races due to missing
memory ordering. Developers should therefore carefully consider the required
memory ordering requirements that remain unchecked. If, however, missing
memory ordering (that is observable with a particular compiler and
architecture) leads to an observable data race (e.g. entering a critical
section erroneously), KCSAN would report the resulting data race.

Race Detection Beyond Data Races
--------------------------------

For code with complex concurrency design, race-condition bugs may not always
manifest as data races. Race conditions occur if concurrently executing
operations result in unexpected system behaviour. On the other hand, data races
are defined at the C-language level. The following macros can be used to check
properties of concurrent code where bugs would not manifest as data races.

.. kernel-doc:: include/linux/kcsan-checks.h
    :functions: ASSERT_EXCLUSIVE_WRITER ASSERT_EXCLUSIVE_WRITER_SCOPED
                ASSERT_EXCLUSIVE_ACCESS ASSERT_EXCLUSIVE_ACCESS_SCOPED
                ASSERT_EXCLUSIVE_BITS

Implementation Details
----------------------

KCSAN relies on observing that two accesses happen concurrently. Crucially, we
want to (a) increase the chances of observing races (especially for races that
manifest rarely), and (b) be able to actually observe them. We can accomplish
(a) by injecting various delays, and (b) by using address watchpoints (or
breakpoints).

If we deliberately stall a memory access, while we have a watchpoint for its
address set up, and then observe the watchpoint to fire, two accesses to the
same address just raced. Using hardware watchpoints, this is the approach taken
in `DataCollider
<http://usenix.org/legacy/events/osdi10/tech/full_papers/Erickson.pdf>`_.
Unlike DataCollider, KCSAN does not use hardware watchpoints, but instead
relies on compiler instrumentation and "soft watchpoints".

In KCSAN, watchpoints are implemented using an efficient encoding that stores
access type, size, and address in a long; the benefits of using "soft
watchpoints" are portability and greater flexibility. KCSAN then relies on the
compiler instrumenting plain accesses. For each instrumented plain access:

1. Check if a matching watchpoint exists; if yes, and at least one access is a
   write, then we encountered a racing access.

2. Periodically, if no matching watchpoint exists, set up a watchpoint and
   stall for a small randomized delay.

3. Also check the data value before the delay, and re-check the data value
   after delay; if the values mismatch, we infer a race of unknown origin.

To detect data races between plain and marked accesses, KCSAN also annotates
marked accesses, but only to check if a watchpoint exists; i.e. KCSAN never
sets up a watchpoint on marked accesses. By never setting up watchpoints for
marked operations, if all accesses to a variable that is accessed concurrently
are properly marked, KCSAN will never trigger a watchpoint and therefore never
report the accesses.

Key Properties
~~~~~~~~~~~~~~

1. **Memory Overhead:** The overall memory overhead is only a few MiB
   depending on configuration. The current implementation uses a small array of
   longs to encode watchpoint information, which is negligible.

2. **Performance Overhead:** KCSAN's runtime aims to be minimal, using an
   efficient watchpoint encoding that does not require acquiring any shared
   locks in the fast-path. For kernel boot on a system with 8 CPUs:

   - 5.0x slow-down with the default KCSAN config;
   - 2.8x slow-down from runtime fast-path overhead only (set very large
     ``KCSAN_SKIP_WATCH`` and unset ``KCSAN_SKIP_WATCH_RANDOMIZE``).

3. **Annotation Overheads:** Minimal annotations are required outside the KCSAN
   runtime. As a result, maintenance overheads are minimal as the kernel
   evolves.

4. **Detects Racy Writes from Devices:** Due to checking data values upon
   setting up watchpoints, racy writes from devices can also be detected.

5. **Memory Ordering:** KCSAN is *not* explicitly aware of the LKMM's ordering
   rules; this may result in missed data races (false negatives).

6. **Analysis Accuracy:** For observed executions, due to using a sampling
   strategy, the analysis is *unsound* (false negatives possible), but aims to
   be complete (no false positives).

Alternatives Considered
-----------------------

An alternative data race detection approach for the kernel can be found in the
`Kernel Thread Sanitizer (KTSAN) <https://github.com/google/ktsan/wiki>`_.
KTSAN is a happens-before data race detector, which explicitly establishes the
happens-before order between memory operations, which can then be used to
determine data races as defined in `Data Races`_.

To build a correct happens-before relation, KTSAN must be aware of all ordering
rules of the LKMM and synchronization primitives. Unfortunately, any omission
leads to large numbers of false positives, which is especially detrimental in
the context of the kernel which includes numerous custom synchronization
mechanisms. To track the happens-before relation, KTSAN's implementation
requires metadata for each memory location (shadow memory), which for each page
corresponds to 4 pages of shadow memory, and can translate into overhead of
tens of GiB on a large system.
MAINTAINERS | +11

 ···
 F:	scripts/Kconfig.include
 F:	scripts/kconfig/
 
+KCSAN
+M:	Marco Elver <elver@google.com>
+R:	Dmitry Vyukov <dvyukov@google.com>
+L:	kasan-dev@googlegroups.com
+S:	Maintained
+F:	Documentation/dev-tools/kcsan.rst
+F:	include/linux/kcsan*.h
+F:	kernel/kcsan/
+F:	lib/Kconfig.kcsan
+F:	scripts/Makefile.kcsan
+
 KDUMP
 M:	Dave Young <dyoung@redhat.com>
 M:	Baoquan He <bhe@redhat.com>
Makefile | +2 -1

 ···
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
 export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
-export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN
+export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN CFLAGS_KCSAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
 ···
 include scripts/Makefile.kasan
 include scripts/Makefile.extrawarn
 include scripts/Makefile.ubsan
+include scripts/Makefile.kcsan
 
 # Add user supplied CPPFLAGS, AFLAGS and CFLAGS as the last assignments
 KBUILD_CPPFLAGS += $(KCPPFLAGS)
arch/x86/Kconfig | +1

 ···
 	select THREAD_INFO_IN_TASK
 	select USER_STACKTRACE_SUPPORT
 	select VIRT_TO_BUS
+	select HAVE_ARCH_KCSAN			if X86_64
 	select X86_FEATURE_NAMES		if PROC_FS
 	select PROC_PID_ARCH_STATUS		if PROC_FS
 	imply IMA_SECURE_AND_OR_TRUSTED_BOOT	if EFI
arch/x86/boot/Makefile | +2

 ···
 # Changed by many, many contributors over the years.
 #
 
+# Sanitizer runtimes are unavailable and cannot be linked for early boot code.
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
 
 # Kernel does not boot with kcov instrumentation here.
arch/x86/boot/compressed/Makefile | +2

 ···
 # (see scripts/Makefile.lib size_append)
 # compressed vmlinux.bin.all + u32 size of vmlinux.bin.all
 
+# Sanitizer runtimes are unavailable and cannot be linked for early boot code.
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
 
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
arch/x86/entry/vdso/Makefile | +6

 ···
 include $(srctree)/lib/vdso/Makefile
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
+
+# Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE			:= n
 UBSAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
 
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
 ···
 
 # files to link into kernel
 obj-y				+= vma.o
+KASAN_SANITIZE_vma.o		:= y
+UBSAN_SANITIZE_vma.o		:= y
+KCSAN_SANITIZE_vma.o		:= y
 OBJECT_FILES_NON_STANDARD_vma.o	:= n
 
 # vDSO images to build
arch/x86/include/asm/bitops.h | +5 -1

 ···
 	return GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(btc), *addr, c, "Ir", nr);
 }
 
-static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
+static __no_kcsan_or_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
 {
+	/*
+	 * Because this is a plain access, we need to disable KCSAN here to
+	 * avoid double instrumentation via instrumented bitops.
+	 */
 	return ((1UL << (nr & (BITS_PER_LONG-1))) &
 		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
 }
arch/x86/kernel/Makefile | +4

 ···
 KASAN_SANITIZE_stacktrace.o			:= n
 KASAN_SANITIZE_paravirt.o			:= n
 
+# With some compiler versions the generated code results in boot hangs, caused
+# by several compilation units. To be safe, disable all instrumentation.
+KCSAN_SANITIZE := n
+
 OBJECT_FILES_NON_STANDARD_test_nx.o		:= y
 OBJECT_FILES_NON_STANDARD_paravirt_patch.o	:= y
arch/x86/kernel/cpu/Makefile | +3

 ···
 KCOV_INSTRUMENT_common.o := n
 KCOV_INSTRUMENT_perf_event.o := n
 
+# As above, instrumenting secondary CPU boot code causes boot hangs.
+KCSAN_SANITIZE_common.o := n
+
 # Make sure load_percpu_segment has no stackprotector
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_common.o		:= $(nostackp)
arch/x86/kernel/e820.c | +9 -1

 ···
 	while (pa_data) {
 		data = early_memremap(pa_data, sizeof(*data));
 		e820__range_update(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
-		e820__range_update_kexec(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+
+		/*
+		 * SETUP_EFI is supplied by kexec and does not need to be
+		 * reserved.
+		 */
+		if (data->type != SETUP_EFI)
+			e820__range_update_kexec(pa_data,
+						 sizeof(*data) + data->len,
+						 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
 
 		if (data->type == SETUP_INDIRECT &&
 		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
arch/x86/lib/Makefile | +9

 ···
 # Produces uninteresting flaky coverage.
 KCOV_INSTRUMENT_delay.o	:= n
 
+# KCSAN uses udelay for introducing watchpoint delay; avoid recursion.
+KCSAN_SANITIZE_delay.o := n
+ifdef CONFIG_KCSAN
+# In case KCSAN+lockdep+ftrace are enabled, disable ftrace for delay.o to avoid
+# lockdep -> [other libs] -> KCSAN -> udelay -> ftrace -> lockdep recursion.
+CFLAGS_REMOVE_delay.o = $(CC_FLAGS_FTRACE)
+endif
+
 # Early boot use of cmdline; don't instrument it
 ifdef CONFIG_AMD_MEM_ENCRYPT
 KCOV_INSTRUMENT_cmdline.o := n
 KASAN_SANITIZE_cmdline.o  := n
+KCSAN_SANITIZE_cmdline.o  := n
 
 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_cmdline.o = -pg
arch/x86/mm/Makefile | +4

 ···
 KASAN_SANITIZE_mem_encrypt.o		:= n
 KASAN_SANITIZE_mem_encrypt_identity.o	:= n
 
+# Disable KCSAN entirely, because otherwise we get warnings that some functions
+# reference __initdata sections.
+KCSAN_SANITIZE := n
+
 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_mem_encrypt.o		= -pg
 CFLAGS_REMOVE_mem_encrypt_identity.o	= -pg
arch/x86/purgatory/.gitignore | +1 (new file)

 ···
+purgatory.chk
arch/x86/purgatory/Makefile | +15 -4

 ···
 
 CFLAGS_sha256.o := -D__DISABLE_EXPORTS
 
-LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefaultlib
-targets += purgatory.ro
+# When linking purgatory.ro with -r unresolved symbols are not checked,
+# also link a purgatory.chk binary without -r to check for unresolved symbols.
+PURGATORY_LDFLAGS := -e purgatory_start -nostdlib -z nodefaultlib
+LDFLAGS_purgatory.ro := -r $(PURGATORY_LDFLAGS)
+LDFLAGS_purgatory.chk := $(PURGATORY_LDFLAGS)
+targets += purgatory.ro purgatory.chk
 
+# Sanitizer, etc. runtimes are unavailable and cannot be linked here.
+GCOV_PROFILE	:= n
 KASAN_SANITIZE	:= n
+UBSAN_SANITIZE	:= n
+KCSAN_SANITIZE	:= n
 KCOV_INSTRUMENT := n
 
 # These are adjustments to the compiler flags used for objects that
 ···
 
 PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
 PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss
-PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN)
+PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN) -DDISABLE_BRANCH_PROFILING
 
 # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That
 # in turn leaves some undefined symbols like __fentry__ in purgatory and not
 ···
 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
 		$(call if_changed,ld)
 
+$(obj)/purgatory.chk: $(obj)/purgatory.ro FORCE
+		$(call if_changed,ld)
+
 targets += kexec-purgatory.c
 
 quiet_cmd_bin2c = BIN2C $@
       cmd_bin2c = $(objtree)/scripts/bin2c kexec_purgatory < $< > $@
 
-$(obj)/kexec-purgatory.c: $(obj)/purgatory.ro FORCE
+$(obj)/kexec-purgatory.c: $(obj)/purgatory.ro $(obj)/purgatory.chk FORCE
 	$(call if_changed,bin2c)
 
 obj-$(CONFIG_KEXEC_FILE)	+= kexec-purgatory.o
arch/x86/realmode/Makefile | +3

 ···
 # for more details.
 #
 #
+
+# Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
 
 subdir- := rm
arch/x86/realmode/rm/Makefile | +3

 ···
 # for more details.
 #
 #
+
+# Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
 
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
drivers/firmware/efi/libstub/Makefile | +2

 ···
 KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
 
 GCOV_PROFILE			:= n
+# Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
 UBSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y
include/asm-generic/atomic-instrumented.h | +356 -355

 ···
 #define _ASM_GENERIC_ATOMIC_INSTRUMENTED_H
 
 #include <linux/build_bug.h>
-#include <linux/kasan-checks.h>
+#include <linux/compiler.h>
+#include <linux/instrumented.h>
 
-static inline int
+static __always_inline int
 atomic_read(const atomic_t *v)
 {
-	kasan_check_read(v, sizeof(*v));
+	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic_read(v);
 }
 #define atomic_read atomic_read
 
 #if defined(arch_atomic_read_acquire)
-static inline int
+static __always_inline int
 atomic_read_acquire(const atomic_t *v)
 {
-	kasan_check_read(v, sizeof(*v));
+	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic_read_acquire(v);
 }
 #define atomic_read_acquire atomic_read_acquire
 #endif
 
-static inline void
+static __always_inline void
 atomic_set(atomic_t *v, int i)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_set(v, i);
 }
 #define atomic_set atomic_set
 
 #if defined(arch_atomic_set_release)
-static inline void
+static __always_inline void
 atomic_set_release(atomic_t *v, int i)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_set_release(v, i);
 }
 #define atomic_set_release atomic_set_release
 #endif
 
-static inline void
+static __always_inline void
 atomic_add(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_add(i, v);
 }
 #define atomic_add atomic_add
 
 #if !defined(arch_atomic_add_return_relaxed) || defined(arch_atomic_add_return)
-static inline int
+static __always_inline int
 atomic_add_return(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_return(i, v);
 }
 #define atomic_add_return atomic_add_return
 #endif
 
 #if defined(arch_atomic_add_return_acquire)
-static inline int
+static __always_inline int
 atomic_add_return_acquire(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_return_acquire(i, v);
 }
 #define atomic_add_return_acquire atomic_add_return_acquire
 #endif
 
 #if defined(arch_atomic_add_return_release)
-static inline int
+static __always_inline int
 atomic_add_return_release(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_return_release(i, v);
 }
 #define atomic_add_return_release atomic_add_return_release
 #endif
 
 #if defined(arch_atomic_add_return_relaxed)
-static inline int
+static __always_inline int
 atomic_add_return_relaxed(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_return_relaxed(i, v);
 }
 #define atomic_add_return_relaxed atomic_add_return_relaxed
 #endif
 
 #if !defined(arch_atomic_fetch_add_relaxed) || defined(arch_atomic_fetch_add)
-static inline int
+static __always_inline int
 atomic_fetch_add(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_add(i, v);
 }
 #define atomic_fetch_add atomic_fetch_add
 #endif
 
 #if defined(arch_atomic_fetch_add_acquire)
-static inline int
+static __always_inline int
 atomic_fetch_add_acquire(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_acquire(i, v);
 }
 #define atomic_fetch_add_acquire atomic_fetch_add_acquire
 #endif
 
 #if defined(arch_atomic_fetch_add_release)
-static inline int
+static __always_inline int
 atomic_fetch_add_release(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_release(i, v);
 }
 #define atomic_fetch_add_release atomic_fetch_add_release
 #endif
 
 #if defined(arch_atomic_fetch_add_relaxed)
-static inline int
+static __always_inline int
 atomic_fetch_add_relaxed(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_relaxed(i, v);
 }
 #define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
 #endif
 
-static inline void
+static __always_inline void
 atomic_sub(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_sub(i, v);
 }
 #define atomic_sub atomic_sub
 
 #if !defined(arch_atomic_sub_return_relaxed) || defined(arch_atomic_sub_return)
-static inline int
+static __always_inline int
 atomic_sub_return(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_sub_return(i, v);
 }
 #define atomic_sub_return atomic_sub_return
 #endif
 
 #if defined(arch_atomic_sub_return_acquire)
-static inline int
+static __always_inline int
 atomic_sub_return_acquire(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_sub_return_acquire(i, v);
 }
 #define atomic_sub_return_acquire atomic_sub_return_acquire
 #endif
 
 #if defined(arch_atomic_sub_return_release)
-static inline int
+static __always_inline int
 atomic_sub_return_release(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_sub_return_release(i, v);
 }
 #define atomic_sub_return_release atomic_sub_return_release
 #endif
 
 #if defined(arch_atomic_sub_return_relaxed)
-static inline int
+static __always_inline int
 atomic_sub_return_relaxed(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_sub_return_relaxed(i, v);
 }
 #define atomic_sub_return_relaxed atomic_sub_return_relaxed
 #endif
 
 #if !defined(arch_atomic_fetch_sub_relaxed) || defined(arch_atomic_fetch_sub)
-static inline int
+static __always_inline int
 atomic_fetch_sub(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub(i, v);
 }
 #define atomic_fetch_sub atomic_fetch_sub
 #endif
 
 #if defined(arch_atomic_fetch_sub_acquire)
-static inline int
+static __always_inline int
 atomic_fetch_sub_acquire(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub_acquire(i, v);
 }
 #define atomic_fetch_sub_acquire atomic_fetch_sub_acquire
 #endif
 
 #if defined(arch_atomic_fetch_sub_release)
-static inline int
+static __always_inline int
 atomic_fetch_sub_release(int i, atomic_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return
arch_atomic_fetch_sub_release(i, v); 221 222 } 222 223 #define atomic_fetch_sub_release atomic_fetch_sub_release 223 224 #endif 224 225 225 226 #if defined(arch_atomic_fetch_sub_relaxed) 226 - static inline int 227 + static __always_inline int 227 228 atomic_fetch_sub_relaxed(int i, atomic_t *v) 228 229 { 229 - kasan_check_write(v, sizeof(*v)); 230 + instrument_atomic_write(v, sizeof(*v)); 230 231 return arch_atomic_fetch_sub_relaxed(i, v); 231 232 } 232 233 #define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed 233 234 #endif 234 235 235 236 #if defined(arch_atomic_inc) 236 - static inline void 237 + static __always_inline void 237 238 atomic_inc(atomic_t *v) 238 239 { 239 - kasan_check_write(v, sizeof(*v)); 240 + instrument_atomic_write(v, sizeof(*v)); 240 241 arch_atomic_inc(v); 241 242 } 242 243 #define atomic_inc atomic_inc 243 244 #endif 244 245 245 246 #if defined(arch_atomic_inc_return) 246 - static inline int 247 + static __always_inline int 247 248 atomic_inc_return(atomic_t *v) 248 249 { 249 - kasan_check_write(v, sizeof(*v)); 250 + instrument_atomic_write(v, sizeof(*v)); 250 251 return arch_atomic_inc_return(v); 251 252 } 252 253 #define atomic_inc_return atomic_inc_return 253 254 #endif 254 255 255 256 #if defined(arch_atomic_inc_return_acquire) 256 - static inline int 257 + static __always_inline int 257 258 atomic_inc_return_acquire(atomic_t *v) 258 259 { 259 - kasan_check_write(v, sizeof(*v)); 260 + instrument_atomic_write(v, sizeof(*v)); 260 261 return arch_atomic_inc_return_acquire(v); 261 262 } 262 263 #define atomic_inc_return_acquire atomic_inc_return_acquire 263 264 #endif 264 265 265 266 #if defined(arch_atomic_inc_return_release) 266 - static inline int 267 + static __always_inline int 267 268 atomic_inc_return_release(atomic_t *v) 268 269 { 269 - kasan_check_write(v, sizeof(*v)); 270 + instrument_atomic_write(v, sizeof(*v)); 270 271 return arch_atomic_inc_return_release(v); 271 272 } 272 273 #define atomic_inc_return_release 
atomic_inc_return_release 273 274 #endif 274 275 275 276 #if defined(arch_atomic_inc_return_relaxed) 276 - static inline int 277 + static __always_inline int 277 278 atomic_inc_return_relaxed(atomic_t *v) 278 279 { 279 - kasan_check_write(v, sizeof(*v)); 280 + instrument_atomic_write(v, sizeof(*v)); 280 281 return arch_atomic_inc_return_relaxed(v); 281 282 } 282 283 #define atomic_inc_return_relaxed atomic_inc_return_relaxed 283 284 #endif 284 285 285 286 #if defined(arch_atomic_fetch_inc) 286 - static inline int 287 + static __always_inline int 287 288 atomic_fetch_inc(atomic_t *v) 288 289 { 289 - kasan_check_write(v, sizeof(*v)); 290 + instrument_atomic_write(v, sizeof(*v)); 290 291 return arch_atomic_fetch_inc(v); 291 292 } 292 293 #define atomic_fetch_inc atomic_fetch_inc 293 294 #endif 294 295 295 296 #if defined(arch_atomic_fetch_inc_acquire) 296 - static inline int 297 + static __always_inline int 297 298 atomic_fetch_inc_acquire(atomic_t *v) 298 299 { 299 - kasan_check_write(v, sizeof(*v)); 300 + instrument_atomic_write(v, sizeof(*v)); 300 301 return arch_atomic_fetch_inc_acquire(v); 301 302 } 302 303 #define atomic_fetch_inc_acquire atomic_fetch_inc_acquire 303 304 #endif 304 305 305 306 #if defined(arch_atomic_fetch_inc_release) 306 - static inline int 307 + static __always_inline int 307 308 atomic_fetch_inc_release(atomic_t *v) 308 309 { 309 - kasan_check_write(v, sizeof(*v)); 310 + instrument_atomic_write(v, sizeof(*v)); 310 311 return arch_atomic_fetch_inc_release(v); 311 312 } 312 313 #define atomic_fetch_inc_release atomic_fetch_inc_release 313 314 #endif 314 315 315 316 #if defined(arch_atomic_fetch_inc_relaxed) 316 - static inline int 317 + static __always_inline int 317 318 atomic_fetch_inc_relaxed(atomic_t *v) 318 319 { 319 - kasan_check_write(v, sizeof(*v)); 320 + instrument_atomic_write(v, sizeof(*v)); 320 321 return arch_atomic_fetch_inc_relaxed(v); 321 322 } 322 323 #define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed 323 324 #endif 324 
325 325 326 #if defined(arch_atomic_dec) 326 - static inline void 327 + static __always_inline void 327 328 atomic_dec(atomic_t *v) 328 329 { 329 - kasan_check_write(v, sizeof(*v)); 330 + instrument_atomic_write(v, sizeof(*v)); 330 331 arch_atomic_dec(v); 331 332 } 332 333 #define atomic_dec atomic_dec 333 334 #endif 334 335 335 336 #if defined(arch_atomic_dec_return) 336 - static inline int 337 + static __always_inline int 337 338 atomic_dec_return(atomic_t *v) 338 339 { 339 - kasan_check_write(v, sizeof(*v)); 340 + instrument_atomic_write(v, sizeof(*v)); 340 341 return arch_atomic_dec_return(v); 341 342 } 342 343 #define atomic_dec_return atomic_dec_return 343 344 #endif 344 345 345 346 #if defined(arch_atomic_dec_return_acquire) 346 - static inline int 347 + static __always_inline int 347 348 atomic_dec_return_acquire(atomic_t *v) 348 349 { 349 - kasan_check_write(v, sizeof(*v)); 350 + instrument_atomic_write(v, sizeof(*v)); 350 351 return arch_atomic_dec_return_acquire(v); 351 352 } 352 353 #define atomic_dec_return_acquire atomic_dec_return_acquire 353 354 #endif 354 355 355 356 #if defined(arch_atomic_dec_return_release) 356 - static inline int 357 + static __always_inline int 357 358 atomic_dec_return_release(atomic_t *v) 358 359 { 359 - kasan_check_write(v, sizeof(*v)); 360 + instrument_atomic_write(v, sizeof(*v)); 360 361 return arch_atomic_dec_return_release(v); 361 362 } 362 363 #define atomic_dec_return_release atomic_dec_return_release 363 364 #endif 364 365 365 366 #if defined(arch_atomic_dec_return_relaxed) 366 - static inline int 367 + static __always_inline int 367 368 atomic_dec_return_relaxed(atomic_t *v) 368 369 { 369 - kasan_check_write(v, sizeof(*v)); 370 + instrument_atomic_write(v, sizeof(*v)); 370 371 return arch_atomic_dec_return_relaxed(v); 371 372 } 372 373 #define atomic_dec_return_relaxed atomic_dec_return_relaxed 373 374 #endif 374 375 375 376 #if defined(arch_atomic_fetch_dec) 376 - static inline int 377 + static __always_inline int 
377 378 atomic_fetch_dec(atomic_t *v) 378 379 { 379 - kasan_check_write(v, sizeof(*v)); 380 + instrument_atomic_write(v, sizeof(*v)); 380 381 return arch_atomic_fetch_dec(v); 381 382 } 382 383 #define atomic_fetch_dec atomic_fetch_dec 383 384 #endif 384 385 385 386 #if defined(arch_atomic_fetch_dec_acquire) 386 - static inline int 387 + static __always_inline int 387 388 atomic_fetch_dec_acquire(atomic_t *v) 388 389 { 389 - kasan_check_write(v, sizeof(*v)); 390 + instrument_atomic_write(v, sizeof(*v)); 390 391 return arch_atomic_fetch_dec_acquire(v); 391 392 } 392 393 #define atomic_fetch_dec_acquire atomic_fetch_dec_acquire 393 394 #endif 394 395 395 396 #if defined(arch_atomic_fetch_dec_release) 396 - static inline int 397 + static __always_inline int 397 398 atomic_fetch_dec_release(atomic_t *v) 398 399 { 399 - kasan_check_write(v, sizeof(*v)); 400 + instrument_atomic_write(v, sizeof(*v)); 400 401 return arch_atomic_fetch_dec_release(v); 401 402 } 402 403 #define atomic_fetch_dec_release atomic_fetch_dec_release 403 404 #endif 404 405 405 406 #if defined(arch_atomic_fetch_dec_relaxed) 406 - static inline int 407 + static __always_inline int 407 408 atomic_fetch_dec_relaxed(atomic_t *v) 408 409 { 409 - kasan_check_write(v, sizeof(*v)); 410 + instrument_atomic_write(v, sizeof(*v)); 410 411 return arch_atomic_fetch_dec_relaxed(v); 411 412 } 412 413 #define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed 413 414 #endif 414 415 415 - static inline void 416 + static __always_inline void 416 417 atomic_and(int i, atomic_t *v) 417 418 { 418 - kasan_check_write(v, sizeof(*v)); 419 + instrument_atomic_write(v, sizeof(*v)); 419 420 arch_atomic_and(i, v); 420 421 } 421 422 #define atomic_and atomic_and 422 423 423 424 #if !defined(arch_atomic_fetch_and_relaxed) || defined(arch_atomic_fetch_and) 424 - static inline int 425 + static __always_inline int 425 426 atomic_fetch_and(int i, atomic_t *v) 426 427 { 427 - kasan_check_write(v, sizeof(*v)); 428 + 
instrument_atomic_write(v, sizeof(*v)); 428 429 return arch_atomic_fetch_and(i, v); 429 430 } 430 431 #define atomic_fetch_and atomic_fetch_and 431 432 #endif 432 433 433 434 #if defined(arch_atomic_fetch_and_acquire) 434 - static inline int 435 + static __always_inline int 435 436 atomic_fetch_and_acquire(int i, atomic_t *v) 436 437 { 437 - kasan_check_write(v, sizeof(*v)); 438 + instrument_atomic_write(v, sizeof(*v)); 438 439 return arch_atomic_fetch_and_acquire(i, v); 439 440 } 440 441 #define atomic_fetch_and_acquire atomic_fetch_and_acquire 441 442 #endif 442 443 443 444 #if defined(arch_atomic_fetch_and_release) 444 - static inline int 445 + static __always_inline int 445 446 atomic_fetch_and_release(int i, atomic_t *v) 446 447 { 447 - kasan_check_write(v, sizeof(*v)); 448 + instrument_atomic_write(v, sizeof(*v)); 448 449 return arch_atomic_fetch_and_release(i, v); 449 450 } 450 451 #define atomic_fetch_and_release atomic_fetch_and_release 451 452 #endif 452 453 453 454 #if defined(arch_atomic_fetch_and_relaxed) 454 - static inline int 455 + static __always_inline int 455 456 atomic_fetch_and_relaxed(int i, atomic_t *v) 456 457 { 457 - kasan_check_write(v, sizeof(*v)); 458 + instrument_atomic_write(v, sizeof(*v)); 458 459 return arch_atomic_fetch_and_relaxed(i, v); 459 460 } 460 461 #define atomic_fetch_and_relaxed atomic_fetch_and_relaxed 461 462 #endif 462 463 463 464 #if defined(arch_atomic_andnot) 464 - static inline void 465 + static __always_inline void 465 466 atomic_andnot(int i, atomic_t *v) 466 467 { 467 - kasan_check_write(v, sizeof(*v)); 468 + instrument_atomic_write(v, sizeof(*v)); 468 469 arch_atomic_andnot(i, v); 469 470 } 470 471 #define atomic_andnot atomic_andnot 471 472 #endif 472 473 473 474 #if defined(arch_atomic_fetch_andnot) 474 - static inline int 475 + static __always_inline int 475 476 atomic_fetch_andnot(int i, atomic_t *v) 476 477 { 477 - kasan_check_write(v, sizeof(*v)); 478 + instrument_atomic_write(v, sizeof(*v)); 478 479 
return arch_atomic_fetch_andnot(i, v); 479 480 } 480 481 #define atomic_fetch_andnot atomic_fetch_andnot 481 482 #endif 482 483 483 484 #if defined(arch_atomic_fetch_andnot_acquire) 484 - static inline int 485 + static __always_inline int 485 486 atomic_fetch_andnot_acquire(int i, atomic_t *v) 486 487 { 487 - kasan_check_write(v, sizeof(*v)); 488 + instrument_atomic_write(v, sizeof(*v)); 488 489 return arch_atomic_fetch_andnot_acquire(i, v); 489 490 } 490 491 #define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire 491 492 #endif 492 493 493 494 #if defined(arch_atomic_fetch_andnot_release) 494 - static inline int 495 + static __always_inline int 495 496 atomic_fetch_andnot_release(int i, atomic_t *v) 496 497 { 497 - kasan_check_write(v, sizeof(*v)); 498 + instrument_atomic_write(v, sizeof(*v)); 498 499 return arch_atomic_fetch_andnot_release(i, v); 499 500 } 500 501 #define atomic_fetch_andnot_release atomic_fetch_andnot_release 501 502 #endif 502 503 503 504 #if defined(arch_atomic_fetch_andnot_relaxed) 504 - static inline int 505 + static __always_inline int 505 506 atomic_fetch_andnot_relaxed(int i, atomic_t *v) 506 507 { 507 - kasan_check_write(v, sizeof(*v)); 508 + instrument_atomic_write(v, sizeof(*v)); 508 509 return arch_atomic_fetch_andnot_relaxed(i, v); 509 510 } 510 511 #define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed 511 512 #endif 512 513 513 - static inline void 514 + static __always_inline void 514 515 atomic_or(int i, atomic_t *v) 515 516 { 516 - kasan_check_write(v, sizeof(*v)); 517 + instrument_atomic_write(v, sizeof(*v)); 517 518 arch_atomic_or(i, v); 518 519 } 519 520 #define atomic_or atomic_or 520 521 521 522 #if !defined(arch_atomic_fetch_or_relaxed) || defined(arch_atomic_fetch_or) 522 - static inline int 523 + static __always_inline int 523 524 atomic_fetch_or(int i, atomic_t *v) 524 525 { 525 - kasan_check_write(v, sizeof(*v)); 526 + instrument_atomic_write(v, sizeof(*v)); 526 527 return arch_atomic_fetch_or(i, v); 
527 528 } 528 529 #define atomic_fetch_or atomic_fetch_or 529 530 #endif 530 531 531 532 #if defined(arch_atomic_fetch_or_acquire) 532 - static inline int 533 + static __always_inline int 533 534 atomic_fetch_or_acquire(int i, atomic_t *v) 534 535 { 535 - kasan_check_write(v, sizeof(*v)); 536 + instrument_atomic_write(v, sizeof(*v)); 536 537 return arch_atomic_fetch_or_acquire(i, v); 537 538 } 538 539 #define atomic_fetch_or_acquire atomic_fetch_or_acquire 539 540 #endif 540 541 541 542 #if defined(arch_atomic_fetch_or_release) 542 - static inline int 543 + static __always_inline int 543 544 atomic_fetch_or_release(int i, atomic_t *v) 544 545 { 545 - kasan_check_write(v, sizeof(*v)); 546 + instrument_atomic_write(v, sizeof(*v)); 546 547 return arch_atomic_fetch_or_release(i, v); 547 548 } 548 549 #define atomic_fetch_or_release atomic_fetch_or_release 549 550 #endif 550 551 551 552 #if defined(arch_atomic_fetch_or_relaxed) 552 - static inline int 553 + static __always_inline int 553 554 atomic_fetch_or_relaxed(int i, atomic_t *v) 554 555 { 555 - kasan_check_write(v, sizeof(*v)); 556 + instrument_atomic_write(v, sizeof(*v)); 556 557 return arch_atomic_fetch_or_relaxed(i, v); 557 558 } 558 559 #define atomic_fetch_or_relaxed atomic_fetch_or_relaxed 559 560 #endif 560 561 561 - static inline void 562 + static __always_inline void 562 563 atomic_xor(int i, atomic_t *v) 563 564 { 564 - kasan_check_write(v, sizeof(*v)); 565 + instrument_atomic_write(v, sizeof(*v)); 565 566 arch_atomic_xor(i, v); 566 567 } 567 568 #define atomic_xor atomic_xor 568 569 569 570 #if !defined(arch_atomic_fetch_xor_relaxed) || defined(arch_atomic_fetch_xor) 570 - static inline int 571 + static __always_inline int 571 572 atomic_fetch_xor(int i, atomic_t *v) 572 573 { 573 - kasan_check_write(v, sizeof(*v)); 574 + instrument_atomic_write(v, sizeof(*v)); 574 575 return arch_atomic_fetch_xor(i, v); 575 576 } 576 577 #define atomic_fetch_xor atomic_fetch_xor 577 578 #endif 578 579 579 580 #if 
defined(arch_atomic_fetch_xor_acquire) 580 - static inline int 581 + static __always_inline int 581 582 atomic_fetch_xor_acquire(int i, atomic_t *v) 582 583 { 583 - kasan_check_write(v, sizeof(*v)); 584 + instrument_atomic_write(v, sizeof(*v)); 584 585 return arch_atomic_fetch_xor_acquire(i, v); 585 586 } 586 587 #define atomic_fetch_xor_acquire atomic_fetch_xor_acquire 587 588 #endif 588 589 589 590 #if defined(arch_atomic_fetch_xor_release) 590 - static inline int 591 + static __always_inline int 591 592 atomic_fetch_xor_release(int i, atomic_t *v) 592 593 { 593 - kasan_check_write(v, sizeof(*v)); 594 + instrument_atomic_write(v, sizeof(*v)); 594 595 return arch_atomic_fetch_xor_release(i, v); 595 596 } 596 597 #define atomic_fetch_xor_release atomic_fetch_xor_release 597 598 #endif 598 599 599 600 #if defined(arch_atomic_fetch_xor_relaxed) 600 - static inline int 601 + static __always_inline int 601 602 atomic_fetch_xor_relaxed(int i, atomic_t *v) 602 603 { 603 - kasan_check_write(v, sizeof(*v)); 604 + instrument_atomic_write(v, sizeof(*v)); 604 605 return arch_atomic_fetch_xor_relaxed(i, v); 605 606 } 606 607 #define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed 607 608 #endif 608 609 609 610 #if !defined(arch_atomic_xchg_relaxed) || defined(arch_atomic_xchg) 610 - static inline int 611 + static __always_inline int 611 612 atomic_xchg(atomic_t *v, int i) 612 613 { 613 - kasan_check_write(v, sizeof(*v)); 614 + instrument_atomic_write(v, sizeof(*v)); 614 615 return arch_atomic_xchg(v, i); 615 616 } 616 617 #define atomic_xchg atomic_xchg 617 618 #endif 618 619 619 620 #if defined(arch_atomic_xchg_acquire) 620 - static inline int 621 + static __always_inline int 621 622 atomic_xchg_acquire(atomic_t *v, int i) 622 623 { 623 - kasan_check_write(v, sizeof(*v)); 624 + instrument_atomic_write(v, sizeof(*v)); 624 625 return arch_atomic_xchg_acquire(v, i); 625 626 } 626 627 #define atomic_xchg_acquire atomic_xchg_acquire 627 628 #endif 628 629 629 630 #if 
defined(arch_atomic_xchg_release) 630 - static inline int 631 + static __always_inline int 631 632 atomic_xchg_release(atomic_t *v, int i) 632 633 { 633 - kasan_check_write(v, sizeof(*v)); 634 + instrument_atomic_write(v, sizeof(*v)); 634 635 return arch_atomic_xchg_release(v, i); 635 636 } 636 637 #define atomic_xchg_release atomic_xchg_release 637 638 #endif 638 639 639 640 #if defined(arch_atomic_xchg_relaxed) 640 - static inline int 641 + static __always_inline int 641 642 atomic_xchg_relaxed(atomic_t *v, int i) 642 643 { 643 - kasan_check_write(v, sizeof(*v)); 644 + instrument_atomic_write(v, sizeof(*v)); 644 645 return arch_atomic_xchg_relaxed(v, i); 645 646 } 646 647 #define atomic_xchg_relaxed atomic_xchg_relaxed 647 648 #endif 648 649 649 650 #if !defined(arch_atomic_cmpxchg_relaxed) || defined(arch_atomic_cmpxchg) 650 - static inline int 651 + static __always_inline int 651 652 atomic_cmpxchg(atomic_t *v, int old, int new) 652 653 { 653 - kasan_check_write(v, sizeof(*v)); 654 + instrument_atomic_write(v, sizeof(*v)); 654 655 return arch_atomic_cmpxchg(v, old, new); 655 656 } 656 657 #define atomic_cmpxchg atomic_cmpxchg 657 658 #endif 658 659 659 660 #if defined(arch_atomic_cmpxchg_acquire) 660 - static inline int 661 + static __always_inline int 661 662 atomic_cmpxchg_acquire(atomic_t *v, int old, int new) 662 663 { 663 - kasan_check_write(v, sizeof(*v)); 664 + instrument_atomic_write(v, sizeof(*v)); 664 665 return arch_atomic_cmpxchg_acquire(v, old, new); 665 666 } 666 667 #define atomic_cmpxchg_acquire atomic_cmpxchg_acquire 667 668 #endif 668 669 669 670 #if defined(arch_atomic_cmpxchg_release) 670 - static inline int 671 + static __always_inline int 671 672 atomic_cmpxchg_release(atomic_t *v, int old, int new) 672 673 { 673 - kasan_check_write(v, sizeof(*v)); 674 + instrument_atomic_write(v, sizeof(*v)); 674 675 return arch_atomic_cmpxchg_release(v, old, new); 675 676 } 676 677 #define atomic_cmpxchg_release atomic_cmpxchg_release 677 678 #endif 678 
679 679 680 #if defined(arch_atomic_cmpxchg_relaxed) 680 - static inline int 681 + static __always_inline int 681 682 atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) 682 683 { 683 - kasan_check_write(v, sizeof(*v)); 684 + instrument_atomic_write(v, sizeof(*v)); 684 685 return arch_atomic_cmpxchg_relaxed(v, old, new); 685 686 } 686 687 #define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed 687 688 #endif 688 689 689 690 #if defined(arch_atomic_try_cmpxchg) 690 - static inline bool 691 + static __always_inline bool 691 692 atomic_try_cmpxchg(atomic_t *v, int *old, int new) 692 693 { 693 - kasan_check_write(v, sizeof(*v)); 694 - kasan_check_write(old, sizeof(*old)); 694 + instrument_atomic_write(v, sizeof(*v)); 695 + instrument_atomic_write(old, sizeof(*old)); 695 696 return arch_atomic_try_cmpxchg(v, old, new); 696 697 } 697 698 #define atomic_try_cmpxchg atomic_try_cmpxchg 698 699 #endif 699 700 700 701 #if defined(arch_atomic_try_cmpxchg_acquire) 701 - static inline bool 702 + static __always_inline bool 702 703 atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) 703 704 { 704 - kasan_check_write(v, sizeof(*v)); 705 - kasan_check_write(old, sizeof(*old)); 705 + instrument_atomic_write(v, sizeof(*v)); 706 + instrument_atomic_write(old, sizeof(*old)); 706 707 return arch_atomic_try_cmpxchg_acquire(v, old, new); 707 708 } 708 709 #define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire 709 710 #endif 710 711 711 712 #if defined(arch_atomic_try_cmpxchg_release) 712 - static inline bool 713 + static __always_inline bool 713 714 atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) 714 715 { 715 - kasan_check_write(v, sizeof(*v)); 716 - kasan_check_write(old, sizeof(*old)); 716 + instrument_atomic_write(v, sizeof(*v)); 717 + instrument_atomic_write(old, sizeof(*old)); 717 718 return arch_atomic_try_cmpxchg_release(v, old, new); 718 719 } 719 720 #define atomic_try_cmpxchg_release atomic_try_cmpxchg_release 720 721 #endif 721 722 722 723 #if 
defined(arch_atomic_try_cmpxchg_relaxed) 723 - static inline bool 724 + static __always_inline bool 724 725 atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) 725 726 { 726 - kasan_check_write(v, sizeof(*v)); 727 - kasan_check_write(old, sizeof(*old)); 727 + instrument_atomic_write(v, sizeof(*v)); 728 + instrument_atomic_write(old, sizeof(*old)); 728 729 return arch_atomic_try_cmpxchg_relaxed(v, old, new); 729 730 } 730 731 #define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed 731 732 #endif 732 733 733 734 #if defined(arch_atomic_sub_and_test) 734 - static inline bool 735 + static __always_inline bool 735 736 atomic_sub_and_test(int i, atomic_t *v) 736 737 { 737 - kasan_check_write(v, sizeof(*v)); 738 + instrument_atomic_write(v, sizeof(*v)); 738 739 return arch_atomic_sub_and_test(i, v); 739 740 } 740 741 #define atomic_sub_and_test atomic_sub_and_test 741 742 #endif 742 743 743 744 #if defined(arch_atomic_dec_and_test) 744 - static inline bool 745 + static __always_inline bool 745 746 atomic_dec_and_test(atomic_t *v) 746 747 { 747 - kasan_check_write(v, sizeof(*v)); 748 + instrument_atomic_write(v, sizeof(*v)); 748 749 return arch_atomic_dec_and_test(v); 749 750 } 750 751 #define atomic_dec_and_test atomic_dec_and_test 751 752 #endif 752 753 753 754 #if defined(arch_atomic_inc_and_test) 754 - static inline bool 755 + static __always_inline bool 755 756 atomic_inc_and_test(atomic_t *v) 756 757 { 757 - kasan_check_write(v, sizeof(*v)); 758 + instrument_atomic_write(v, sizeof(*v)); 758 759 return arch_atomic_inc_and_test(v); 759 760 } 760 761 #define atomic_inc_and_test atomic_inc_and_test 761 762 #endif 762 763 763 764 #if defined(arch_atomic_add_negative) 764 - static inline bool 765 + static __always_inline bool 765 766 atomic_add_negative(int i, atomic_t *v) 766 767 { 767 - kasan_check_write(v, sizeof(*v)); 768 + instrument_atomic_write(v, sizeof(*v)); 768 769 return arch_atomic_add_negative(i, v); 769 770 } 770 771 #define atomic_add_negative 
atomic_add_negative 771 772 #endif 772 773 773 774 #if defined(arch_atomic_fetch_add_unless) 774 - static inline int 775 + static __always_inline int 775 776 atomic_fetch_add_unless(atomic_t *v, int a, int u) 776 777 { 777 - kasan_check_write(v, sizeof(*v)); 778 + instrument_atomic_write(v, sizeof(*v)); 778 779 return arch_atomic_fetch_add_unless(v, a, u); 779 780 } 780 781 #define atomic_fetch_add_unless atomic_fetch_add_unless 781 782 #endif 782 783 783 784 #if defined(arch_atomic_add_unless) 784 - static inline bool 785 + static __always_inline bool 785 786 atomic_add_unless(atomic_t *v, int a, int u) 786 787 { 787 - kasan_check_write(v, sizeof(*v)); 788 + instrument_atomic_write(v, sizeof(*v)); 788 789 return arch_atomic_add_unless(v, a, u); 789 790 } 790 791 #define atomic_add_unless atomic_add_unless 791 792 #endif 792 793 793 794 #if defined(arch_atomic_inc_not_zero) 794 - static inline bool 795 + static __always_inline bool 795 796 atomic_inc_not_zero(atomic_t *v) 796 797 { 797 - kasan_check_write(v, sizeof(*v)); 798 + instrument_atomic_write(v, sizeof(*v)); 798 799 return arch_atomic_inc_not_zero(v); 799 800 } 800 801 #define atomic_inc_not_zero atomic_inc_not_zero 801 802 #endif 802 803 803 804 #if defined(arch_atomic_inc_unless_negative) 804 - static inline bool 805 + static __always_inline bool 805 806 atomic_inc_unless_negative(atomic_t *v) 806 807 { 807 - kasan_check_write(v, sizeof(*v)); 808 + instrument_atomic_write(v, sizeof(*v)); 808 809 return arch_atomic_inc_unless_negative(v); 809 810 } 810 811 #define atomic_inc_unless_negative atomic_inc_unless_negative 811 812 #endif 812 813 813 814 #if defined(arch_atomic_dec_unless_positive) 814 - static inline bool 815 + static __always_inline bool 815 816 atomic_dec_unless_positive(atomic_t *v) 816 817 { 817 - kasan_check_write(v, sizeof(*v)); 818 + instrument_atomic_write(v, sizeof(*v)); 818 819 return arch_atomic_dec_unless_positive(v); 819 820 } 820 821 #define atomic_dec_unless_positive 
atomic_dec_unless_positive 821 822 #endif 822 823 823 824 #if defined(arch_atomic_dec_if_positive) 824 - static inline int 825 + static __always_inline int 825 826 atomic_dec_if_positive(atomic_t *v) 826 827 { 827 - kasan_check_write(v, sizeof(*v)); 828 + instrument_atomic_write(v, sizeof(*v)); 828 829 return arch_atomic_dec_if_positive(v); 829 830 } 830 831 #define atomic_dec_if_positive atomic_dec_if_positive 831 832 #endif 832 833 833 - static inline s64 834 + static __always_inline s64 834 835 atomic64_read(const atomic64_t *v) 835 836 { 836 - kasan_check_read(v, sizeof(*v)); 837 + instrument_atomic_read(v, sizeof(*v)); 837 838 return arch_atomic64_read(v); 838 839 } 839 840 #define atomic64_read atomic64_read 840 841 841 842 #if defined(arch_atomic64_read_acquire) 842 - static inline s64 843 + static __always_inline s64 843 844 atomic64_read_acquire(const atomic64_t *v) 844 845 { 845 - kasan_check_read(v, sizeof(*v)); 846 + instrument_atomic_read(v, sizeof(*v)); 846 847 return arch_atomic64_read_acquire(v); 847 848 } 848 849 #define atomic64_read_acquire atomic64_read_acquire 849 850 #endif 850 851 851 - static inline void 852 + static __always_inline void 852 853 atomic64_set(atomic64_t *v, s64 i) 853 854 { 854 - kasan_check_write(v, sizeof(*v)); 855 + instrument_atomic_write(v, sizeof(*v)); 855 856 arch_atomic64_set(v, i); 856 857 } 857 858 #define atomic64_set atomic64_set 858 859 859 860 #if defined(arch_atomic64_set_release) 860 - static inline void 861 + static __always_inline void 861 862 atomic64_set_release(atomic64_t *v, s64 i) 862 863 { 863 - kasan_check_write(v, sizeof(*v)); 864 + instrument_atomic_write(v, sizeof(*v)); 864 865 arch_atomic64_set_release(v, i); 865 866 } 866 867 #define atomic64_set_release atomic64_set_release 867 868 #endif 868 869 869 - static inline void 870 + static __always_inline void 870 871 atomic64_add(s64 i, atomic64_t *v) 871 872 { 872 - kasan_check_write(v, sizeof(*v)); 873 + instrument_atomic_write(v, sizeof(*v)); 873 
(include/asm-generic/atomic-instrumented.h, continued)

 	arch_atomic64_add(i, v);
 }
 #define atomic64_add atomic64_add

 #if !defined(arch_atomic64_add_return_relaxed) || defined(arch_atomic64_add_return)
-static inline s64
+static __always_inline s64
 atomic64_add_return(s64 i, atomic64_t *v)
 {
-	kasan_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_add_return(i, v);
 }
 #define atomic64_add_return atomic64_add_return
 #endif

The identical two-line substitution (static inline becomes static __always_inline, and kasan_check_write(v, sizeof(*v)) becomes instrument_atomic_write(v, sizeof(*v))) is applied to every remaining atomic64_*() wrapper in the file, each in its _acquire, _release and _relaxed variant where one exists:

 atomic64_add_return, atomic64_fetch_add, atomic64_sub, atomic64_sub_return,
 atomic64_fetch_sub, atomic64_inc, atomic64_inc_return, atomic64_fetch_inc,
 atomic64_dec, atomic64_dec_return, atomic64_fetch_dec, atomic64_and,
 atomic64_fetch_and, atomic64_andnot, atomic64_fetch_andnot, atomic64_or,
 atomic64_fetch_or, atomic64_xor, atomic64_fetch_xor, atomic64_xchg,
 atomic64_cmpxchg, atomic64_sub_and_test, atomic64_dec_and_test,
 atomic64_inc_and_test, atomic64_add_negative, atomic64_fetch_add_unless,
 atomic64_add_unless, atomic64_inc_not_zero, atomic64_inc_unless_negative,
 atomic64_dec_unless_positive, atomic64_dec_if_positive

The atomic64_try_cmpxchg() family instruments both of its pointer arguments:

 #if defined(arch_atomic64_try_cmpxchg)
-static inline bool
+static __always_inline bool
 atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
-	kasan_check_write(v, sizeof(*v));
-	kasan_check_write(old, sizeof(*old));
+	instrument_atomic_write(v, sizeof(*v));
+	instrument_atomic_write(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg(v, old, new);
 }
 #define atomic64_try_cmpxchg atomic64_try_cmpxchg
 #endif

The xchg()/cmpxchg() macro wrappers at the end of the file receive the same substitution:

 #define xchg(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	kasan_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));	\
 	arch_xchg(__ai_ptr, __VA_ARGS__);				\
 })

as do xchg_acquire, xchg_release, xchg_relaxed, cmpxchg (plus _acquire/_release/_relaxed), cmpxchg64 (plus _acquire/_release/_relaxed), cmpxchg_local, cmpxchg64_local and sync_cmpxchg. The cmpxchg_double and cmpxchg_double_local macros differ only in instrumenting 2 * sizeof(*__ai_ptr). The generated-file checksum at the bottom is regenerated accordingly:

 #endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */
-// b29b625d5de9280f680e42c7be859b55b15e5f6a
+// 89bf97f3a7509b740845e51ddf31055b48a81f40
include/asm-generic/atomic-long.h  (+166 −165)
`#include <linux/compiler.h>` is added, and every atomic_long_*() wrapper, from atomic_long_read() through atomic_long_dec_if_positive() in both the CONFIG_64BIT and 32-bit branches, switches from `static inline` to `static __always_inline`:

```diff
 #ifndef _ASM_GENERIC_ATOMIC_LONG_H
 #define _ASM_GENERIC_ATOMIC_LONG_H

+#include <linux/compiler.h>
 #include <asm/types.h>
```

```diff
-static inline long
+static __always_inline long
 atomic_long_read(const atomic_long_t *v)
 {
 	return atomic64_read(v);
 }
```

The identical `static inline` → `static __always_inline` change repeats for every remaining wrapper (add, sub, inc, dec, and/andnot/or/xor, xchg, cmpxchg and try_cmpxchg with their _acquire/_release/_relaxed variants, plus the *_and_test, *_unless and dec_if_positive helpers), and the generated-file checksum changes from `77558968132ce4f911ad53f6f52ce423006f6268` to `a624200981f552b2c6be4f32fe44da8289f30d87`.
include/asm-generic/bitops/instrumented-atomic.h (+7 -7)
···
 #ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 
 /**
  * set_bit - Atomically set a bit in memory
···
  */
 static inline void set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_set_bit(nr, addr);
 }
 
···
  */
 static inline void clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_clear_bit(nr, addr);
 }
 
···
  */
 static inline void change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_change_bit(nr, addr);
 }
 
···
  */
 static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_set_bit(nr, addr);
 }
 
···
  */
 static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_clear_bit(nr, addr);
 }
 
···
  */
 static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_change_bit(nr, addr);
 }
include/asm-generic/bitops/instrumented-lock.h (+5 -5)
···
 #ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 
 /**
  * clear_bit_unlock - Clear a bit in memory, for unlock
···
  */
 static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_clear_bit_unlock(nr, addr);
 }
 
···
  */
 static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___clear_bit_unlock(nr, addr);
 }
 
···
  */
 static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_set_bit_lock(nr, addr);
 }
 
···
 static inline bool
 clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_clear_bit_unlock_is_negative_byte(nr, addr);
 }
 /* Let everybody know we have it. */
include/asm-generic/bitops/instrumented-non-atomic.h (+8 -8)
···
 #ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 
 /**
  * __set_bit - Set a bit in memory
···
  */
 static inline void __set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___set_bit(nr, addr);
 }
 
···
  */
 static inline void __clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___clear_bit(nr, addr);
 }
 
···
  */
 static inline void __change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___change_bit(nr, addr);
 }
 
···
  */
 static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch___test_and_set_bit(nr, addr);
 }
 
···
  */
 static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch___test_and_clear_bit(nr, addr);
 }
 
···
  */
 static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch___test_and_change_bit(nr, addr);
 }
 
···
  */
 static inline bool test_bit(long nr, const volatile unsigned long *addr)
 {
-	kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_read(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_bit(nr, addr);
 }
include/linux/compiler-clang.h (+10 -1)
···
 #define KASAN_ABI_VERSION 5
 
 #if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
-/* emulate gcc's __SANITIZE_ADDRESS__ flag */
+/* Emulate GCC's __SANITIZE_ADDRESS__ flag */
 #define __SANITIZE_ADDRESS__
 #define __no_sanitize_address \
 	__attribute__((no_sanitize("address", "hwaddress")))
 #else
 #define __no_sanitize_address
+#endif
+
+#if __has_feature(thread_sanitizer)
+/* Emulate GCC's __SANITIZE_THREAD__ flag */
+#define __SANITIZE_THREAD__
+#define __no_sanitize_thread \
+	__attribute__((no_sanitize("thread")))
+#else
+#define __no_sanitize_thread
 #endif
 
 /*
include/linux/compiler-gcc.h (+6)
···
 #define __no_sanitize_address
 #endif
 
+#if defined(__SANITIZE_THREAD__) && __has_attribute(__no_sanitize_thread__)
+#define __no_sanitize_thread __attribute__((no_sanitize_thread))
+#else
+#define __no_sanitize_thread
+#endif
+
 #if GCC_VERSION >= 50100
 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
 #endif
include/linux/compiler.h (+31 -22)
···
  */
 #include <asm/barrier.h>
 #include <linux/kasan-checks.h>
+#include <linux/kcsan-checks.h>
+
+/**
+ * data_race - mark an expression as containing intentional data races
+ *
+ * This data_race() macro is useful for situations in which data races
+ * should be forgiven. One example is diagnostic code that accesses
+ * shared variables but is not a part of the core synchronization design.
+ *
+ * This macro *does not* affect normal code generation, but is a hint
+ * to tooling that data races here are to be ignored.
+ */
+#define data_race(expr)						\
+({								\
+	__unqual_scalar_typeof(({ expr; })) __v = ({		\
+		__kcsan_disable_current();			\
+		expr;						\
+	});							\
+	__kcsan_enable_current();				\
+	__v;							\
+})
 
 /*
  * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
···
 	__READ_ONCE_SCALAR(x);					\
 })
 
-#define __WRITE_ONCE(x, val)				\
-do {							\
-	*(volatile typeof(x) *)&(x) = (val);		\
+#define __WRITE_ONCE(x, val)					\
+do {								\
+	*(volatile typeof(x) *)&(x) = (val);			\
 } while (0)
 
-#define WRITE_ONCE(x, val)				\
-do {							\
-	compiletime_assert_rwonce_type(x);		\
-	__WRITE_ONCE(x, val);				\
+#define WRITE_ONCE(x, val)					\
+do {								\
+	compiletime_assert_rwonce_type(x);			\
+	__WRITE_ONCE(x, val);					\
 } while (0)
 
-#ifdef CONFIG_KASAN
-/*
- * We can't declare function 'inline' because __no_sanitize_address conflicts
- * with inlining. Attempt to inline it may cause a build failure.
- * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368
- * '__maybe_unused' allows us to avoid defined-but-not-used warnings.
- */
-# define __no_kasan_or_inline __no_sanitize_address notrace __maybe_unused
-#else
-# define __no_kasan_or_inline __always_inline
-#endif
-
-static __no_kasan_or_inline
+static __no_sanitize_or_inline
 unsigned long __read_once_word_nocheck(const void *addr)
 {
 	return __READ_ONCE(*(unsigned long *)addr);
···
 
 /*
  * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
- * word from memory atomically but without telling KASAN. This is usually
- * used by unwinding code when walking the stack of a running process.
+ * word from memory atomically but without telling KASAN/KCSAN. This is
+ * usually used by unwinding code when walking the stack of a running process.
  */
 #define READ_ONCE_NOCHECK(x)					\
 ({								\
include/linux/compiler_types.h (+32)
···
  */
 #define noinline_for_stack noinline
 
+/*
+ * Sanitizer helper attributes: Because using __always_inline and
+ * __no_sanitize_* conflict, provide helper attributes that will either expand
+ * to __no_sanitize_* in compilation units where instrumentation is enabled
+ * (__SANITIZE_*__), or __always_inline in compilation units without
+ * instrumentation (__SANITIZE_*__ undefined).
+ */
+#ifdef __SANITIZE_ADDRESS__
+/*
+ * We can't declare function 'inline' because __no_sanitize_address conflicts
+ * with inlining. Attempt to inline it may cause a build failure.
+ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368
+ * '__maybe_unused' allows us to avoid defined-but-not-used warnings.
+ */
+# define __no_kasan_or_inline __no_sanitize_address notrace __maybe_unused
+# define __no_sanitize_or_inline __no_kasan_or_inline
+#else
+# define __no_kasan_or_inline __always_inline
+#endif
+
+#define __no_kcsan __no_sanitize_thread
+#ifdef __SANITIZE_THREAD__
+# define __no_kcsan_or_inline __no_kcsan notrace __maybe_unused
+# define __no_sanitize_or_inline __no_kcsan_or_inline
+#else
+# define __no_kcsan_or_inline __always_inline
+#endif
+
+#ifndef __no_sanitize_or_inline
+#define __no_sanitize_or_inline __always_inline
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* __ASSEMBLY__ */
include/linux/instrumented.h (+109)
···
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * This header provides generic wrappers for memory access instrumentation that
+ * the compiler cannot emit for: KASAN, KCSAN.
+ */
+#ifndef _LINUX_INSTRUMENTED_H
+#define _LINUX_INSTRUMENTED_H
+
+#include <linux/compiler.h>
+#include <linux/kasan-checks.h>
+#include <linux/kcsan-checks.h>
+#include <linux/types.h>
+
+/**
+ * instrument_read - instrument regular read access
+ *
+ * Instrument a regular read access. The instrumentation should be inserted
+ * before the actual read happens.
+ *
+ * @ptr address of access
+ * @size size of access
+ */
+static __always_inline void instrument_read(const volatile void *v, size_t size)
+{
+	kasan_check_read(v, size);
+	kcsan_check_read(v, size);
+}
+
+/**
+ * instrument_write - instrument regular write access
+ *
+ * Instrument a regular write access. The instrumentation should be inserted
+ * before the actual write happens.
+ *
+ * @ptr address of access
+ * @size size of access
+ */
+static __always_inline void instrument_write(const volatile void *v, size_t size)
+{
+	kasan_check_write(v, size);
+	kcsan_check_write(v, size);
+}
+
+/**
+ * instrument_atomic_read - instrument atomic read access
+ *
+ * Instrument an atomic read access. The instrumentation should be inserted
+ * before the actual read happens.
+ *
+ * @ptr address of access
+ * @size size of access
+ */
+static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
+{
+	kasan_check_read(v, size);
+	kcsan_check_atomic_read(v, size);
+}
+
+/**
+ * instrument_atomic_write - instrument atomic write access
+ *
+ * Instrument an atomic write access. The instrumentation should be inserted
+ * before the actual write happens.
+ *
+ * @ptr address of access
+ * @size size of access
+ */
+static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
+{
+	kasan_check_write(v, size);
+	kcsan_check_atomic_write(v, size);
+}
+
+/**
+ * instrument_copy_to_user - instrument reads of copy_to_user
+ *
+ * Instrument reads from kernel memory, that are due to copy_to_user (and
+ * variants). The instrumentation must be inserted before the accesses.
+ *
+ * @to destination address
+ * @from source address
+ * @n number of bytes to copy
+ */
+static __always_inline void
+instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	kasan_check_read(from, n);
+	kcsan_check_read(from, n);
+}
+
+/**
+ * instrument_copy_from_user - instrument writes of copy_from_user
+ *
+ * Instrument writes to kernel memory, that are due to copy_from_user (and
+ * variants). The instrumentation should be inserted before the accesses.
+ *
+ * @to destination address
+ * @from source address
+ * @n number of bytes to copy
+ */
+static __always_inline void
+instrument_copy_from_user(const void *to, const void __user *from, unsigned long n)
+{
+	kasan_check_write(to, n);
+	kcsan_check_write(to, n);
+}
+
+#endif /* _LINUX_INSTRUMENTED_H */
include/linux/kcsan-checks.h (+430)
···
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_KCSAN_CHECKS_H
+#define _LINUX_KCSAN_CHECKS_H
+
+/* Note: Only include what is already included by compiler.h. */
+#include <linux/compiler_attributes.h>
+#include <linux/types.h>
+
+/*
+ * ACCESS TYPE MODIFIERS
+ *
+ *   <none>: normal read access;
+ *   WRITE : write access;
+ *   ATOMIC: access is atomic;
+ *   ASSERT: access is not a regular access, but an assertion;
+ *   SCOPED: access is a scoped access;
+ */
+#define KCSAN_ACCESS_WRITE  0x1
+#define KCSAN_ACCESS_ATOMIC 0x2
+#define KCSAN_ACCESS_ASSERT 0x4
+#define KCSAN_ACCESS_SCOPED 0x8
+
+/*
+ * __kcsan_*: Always calls into the runtime when KCSAN is enabled. This may be
+ * used even in compilation units that selectively disable KCSAN, but must use
+ * KCSAN to validate access to an address. Never use these in header files!
+ */
+#ifdef CONFIG_KCSAN
+/**
+ * __kcsan_check_access - check generic access for races
+ *
+ * @ptr: address of access
+ * @size: size of access
+ * @type: access type modifier
+ */
+void __kcsan_check_access(const volatile void *ptr, size_t size, int type);
+
+/**
+ * kcsan_disable_current - disable KCSAN for the current context
+ *
+ * Supports nesting.
+ */
+void kcsan_disable_current(void);
+
+/**
+ * kcsan_enable_current - re-enable KCSAN for the current context
+ *
+ * Supports nesting.
+ */
+void kcsan_enable_current(void);
+void kcsan_enable_current_nowarn(void); /* Safe in uaccess regions. */
+
+/**
+ * kcsan_nestable_atomic_begin - begin nestable atomic region
+ *
+ * Accesses within the atomic region may appear to race with other accesses but
+ * should be considered atomic.
+ */
+void kcsan_nestable_atomic_begin(void);
+
+/**
+ * kcsan_nestable_atomic_end - end nestable atomic region
+ */
+void kcsan_nestable_atomic_end(void);
+
+/**
+ * kcsan_flat_atomic_begin - begin flat atomic region
+ *
+ * Accesses within the atomic region may appear to race with other accesses but
+ * should be considered atomic.
+ */
+void kcsan_flat_atomic_begin(void);
+
+/**
+ * kcsan_flat_atomic_end - end flat atomic region
+ */
+void kcsan_flat_atomic_end(void);
+
+/**
+ * kcsan_atomic_next - consider following accesses as atomic
+ *
+ * Force treating the next n memory accesses for the current context as atomic
+ * operations.
+ *
+ * @n: number of following memory accesses to treat as atomic.
+ */
+void kcsan_atomic_next(int n);
+
+/**
+ * kcsan_set_access_mask - set access mask
+ *
+ * Set the access mask for all accesses for the current context if non-zero.
+ * Only value changes to bits set in the mask will be reported.
+ *
+ * @mask: bitmask
+ */
+void kcsan_set_access_mask(unsigned long mask);
+
+/* Scoped access information. */
+struct kcsan_scoped_access {
+	struct list_head list;
+	const volatile void *ptr;
+	size_t size;
+	int type;
+};
+/*
+ * Automatically call kcsan_end_scoped_access() when kcsan_scoped_access goes
+ * out of scope; relies on attribute "cleanup", which is supported by all
+ * compilers that support KCSAN.
+ */
+#define __kcsan_cleanup_scoped						\
+	__maybe_unused __attribute__((__cleanup__(kcsan_end_scoped_access)))
+
+/**
+ * kcsan_begin_scoped_access - begin scoped access
+ *
+ * Begin scoped access and initialize @sa, which will cause KCSAN to
+ * continuously check the memory range in the current thread until
+ * kcsan_end_scoped_access() is called for @sa.
+ *
+ * Scoped accesses are implemented by appending @sa to an internal list for the
+ * current execution context, and then checked on every call into the KCSAN
+ * runtime.
+ *
+ * @ptr: address of access
+ * @size: size of access
+ * @type: access type modifier
+ * @sa: struct kcsan_scoped_access to use for the scope of the access
+ */
+struct kcsan_scoped_access *
+kcsan_begin_scoped_access(const volatile void *ptr, size_t size, int type,
+			  struct kcsan_scoped_access *sa);
+
+/**
+ * kcsan_end_scoped_access - end scoped access
+ *
+ * End a scoped access, which will stop KCSAN checking the memory range.
+ * Requires that kcsan_begin_scoped_access() was previously called once for @sa.
+ *
+ * @sa: a previously initialized struct kcsan_scoped_access
+ */
+void kcsan_end_scoped_access(struct kcsan_scoped_access *sa);
+
+
+#else /* CONFIG_KCSAN */
+
+static inline void __kcsan_check_access(const volatile void *ptr, size_t size,
+					int type) { }
+
+static inline void kcsan_disable_current(void)		{ }
+static inline void kcsan_enable_current(void)		{ }
+static inline void kcsan_enable_current_nowarn(void)	{ }
+static inline void kcsan_nestable_atomic_begin(void)	{ }
+static inline void kcsan_nestable_atomic_end(void)	{ }
+static inline void kcsan_flat_atomic_begin(void)	{ }
+static inline void kcsan_flat_atomic_end(void)		{ }
+static inline void kcsan_atomic_next(int n)		{ }
+static inline void kcsan_set_access_mask(unsigned long mask) { }
+
+struct kcsan_scoped_access { };
+#define __kcsan_cleanup_scoped __maybe_unused
+static inline struct kcsan_scoped_access *
+kcsan_begin_scoped_access(const volatile void *ptr, size_t size, int type,
+			  struct kcsan_scoped_access *sa) { return sa; }
+static inline void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) { }
+
+#endif /* CONFIG_KCSAN */
+
+#ifdef __SANITIZE_THREAD__
+/*
+ * Only calls into the runtime when the particular compilation unit has KCSAN
+ * instrumentation enabled. May be used in header files.
+ */
+#define kcsan_check_access __kcsan_check_access
+
+/*
+ * Only use these to disable KCSAN for accesses in the current compilation unit;
+ * calls into libraries may still perform KCSAN checks.
+ */
+#define __kcsan_disable_current kcsan_disable_current
+#define __kcsan_enable_current kcsan_enable_current_nowarn
+#else
+static inline void kcsan_check_access(const volatile void *ptr, size_t size,
+				      int type) { }
+static inline void __kcsan_enable_current(void)  { }
+static inline void __kcsan_disable_current(void) { }
+#endif
+
+/**
+ * __kcsan_check_read - check regular read access for races
+ *
+ * @ptr: address of access
+ * @size: size of access
+ */
+#define __kcsan_check_read(ptr, size) __kcsan_check_access(ptr, size, 0)
+
+/**
+ * __kcsan_check_write - check regular write access for races
+ *
+ * @ptr: address of access
+ * @size: size of access
+ */
+#define __kcsan_check_write(ptr, size)					\
+	__kcsan_check_access(ptr, size, KCSAN_ACCESS_WRITE)
+
+/**
+ * kcsan_check_read - check regular read access for races
+ *
+ * @ptr: address of access
+ * @size: size of access
+ */
+#define kcsan_check_read(ptr, size) kcsan_check_access(ptr, size, 0)
+
+/**
+ * kcsan_check_write - check regular write access for races
+ *
+ * @ptr: address of access
+ * @size: size of access
+ */
+#define kcsan_check_write(ptr, size)					\
+	kcsan_check_access(ptr, size, KCSAN_ACCESS_WRITE)
+
+/*
+ * Check for atomic accesses: if atomic accesses are not ignored, this simply
+ * aliases to kcsan_check_access(), otherwise becomes a no-op.
+ */
+#ifdef CONFIG_KCSAN_IGNORE_ATOMICS
+#define kcsan_check_atomic_read(...)	do { } while (0)
+#define kcsan_check_atomic_write(...)	do { } while (0)
+#else
+#define kcsan_check_atomic_read(ptr, size)				\
+	kcsan_check_access(ptr, size, KCSAN_ACCESS_ATOMIC)
+#define kcsan_check_atomic_write(ptr, size)				\
+	kcsan_check_access(ptr, size, KCSAN_ACCESS_ATOMIC | KCSAN_ACCESS_WRITE)
+#endif
+
+/**
+ * ASSERT_EXCLUSIVE_WRITER - assert no concurrent writes to @var
+ *
+ * Assert that there are no concurrent writes to @var; other readers are
+ * allowed. This assertion can be used to specify properties of concurrent code,
+ * where violation cannot be detected as a normal data race.
+ *
+ * For example, if we only have a single writer, but multiple concurrent
+ * readers, to avoid data races, all these accesses must be marked; even
+ * concurrent marked writes racing with the single writer are bugs.
+ * Unfortunately, due to being marked, they are no longer data races. For cases
+ * like these, we can use the macro as follows:
+ *
+ * .. code-block:: c
+ *
+ *	void writer(void) {
+ *		spin_lock(&update_foo_lock);
+ *		ASSERT_EXCLUSIVE_WRITER(shared_foo);
+ *		WRITE_ONCE(shared_foo, ...);
+ *		spin_unlock(&update_foo_lock);
+ *	}
+ *	void reader(void) {
+ *		// update_foo_lock does not need to be held!
+ *		... = READ_ONCE(shared_foo);
+ *	}
+ *
+ * Note: ASSERT_EXCLUSIVE_WRITER_SCOPED(), if applicable, performs more thorough
+ * checking if a clear scope where no concurrent writes are expected exists.
+ *
+ * @var: variable to assert on
+ */
+#define ASSERT_EXCLUSIVE_WRITER(var)					\
+	__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_ASSERT)
+
+/*
+ * Helper macros for the implementation of ASSERT_EXCLUSIVE_*_SCOPED(). @id is
+ * expected to be unique for the scope in which instances of kcsan_scoped_access
+ * are declared.
+ */
+#define __kcsan_scoped_name(c, suffix) __kcsan_scoped_##c##suffix
+#define __ASSERT_EXCLUSIVE_SCOPED(var, type, id)			\
+	struct kcsan_scoped_access __kcsan_scoped_name(id, _)		\
+		__kcsan_cleanup_scoped;					\
+	struct kcsan_scoped_access *__kcsan_scoped_name(id, _dummy_p)	\
+		__maybe_unused = kcsan_begin_scoped_access(		\
+			&(var), sizeof(var), KCSAN_ACCESS_SCOPED | (type), \
+			&__kcsan_scoped_name(id, _))
+
+/**
+ * ASSERT_EXCLUSIVE_WRITER_SCOPED - assert no concurrent writes to @var in scope
+ *
+ * Scoped variant of ASSERT_EXCLUSIVE_WRITER().
+ *
+ * Assert that there are no concurrent writes to @var for the duration of the
+ * scope in which it is introduced. This provides a better way to fully cover
+ * the enclosing scope, compared to multiple ASSERT_EXCLUSIVE_WRITER(), and
+ * increases the likelihood for KCSAN to detect racing accesses.
+ *
+ * For example, it allows finding race-condition bugs that only occur due to
+ * state changes within the scope itself:
+ *
+ * .. code-block:: c
+ *
+ *	void writer(void) {
+ *		spin_lock(&update_foo_lock);
+ *		{
+ *			ASSERT_EXCLUSIVE_WRITER_SCOPED(shared_foo);
+ *			WRITE_ONCE(shared_foo, 42);
+ *			...
+ *			// shared_foo should still be 42 here!
+ *		}
+ *		spin_unlock(&update_foo_lock);
+ *	}
+ *	void buggy(void) {
+ *		if (READ_ONCE(shared_foo) == 42)
+ *			WRITE_ONCE(shared_foo, 1); // bug!
+ *	}
+ *
+ * @var: variable to assert on
+ */
+#define ASSERT_EXCLUSIVE_WRITER_SCOPED(var)				\
+	__ASSERT_EXCLUSIVE_SCOPED(var, KCSAN_ACCESS_ASSERT, __COUNTER__)
+
+/**
+ * ASSERT_EXCLUSIVE_ACCESS - assert no concurrent accesses to @var
+ *
+ * Assert that there are no concurrent accesses to @var (no readers nor
+ * writers). This assertion can be used to specify properties of concurrent
+ * code, where violation cannot be detected as a normal data race.
+ *
+ * For example, where exclusive access is expected after determining no other
+ * users of an object are left, but the object is not actually freed. We can
+ * check that this property actually holds as follows:
+ *
+ * .. code-block:: c
+ *
+ *	if (refcount_dec_and_test(&obj->refcnt)) {
+ *		ASSERT_EXCLUSIVE_ACCESS(*obj);
+ *		do_some_cleanup(obj);
+ *		release_for_reuse(obj);
+ *	}
+ *
+ * Note: ASSERT_EXCLUSIVE_ACCESS_SCOPED(), if applicable, performs more thorough
+ * checking if a clear scope where no concurrent accesses are expected exists.
+ *
+ * Note: For cases where the object is freed, `KASAN <kasan.html>`_ is a better
+ * fit to detect use-after-free bugs.
+ *
+ * @var: variable to assert on
+ */
+#define ASSERT_EXCLUSIVE_ACCESS(var)					\
+	__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT)
+
+/**
+ * ASSERT_EXCLUSIVE_ACCESS_SCOPED - assert no concurrent accesses to @var in scope
+ *
+ * Scoped variant of ASSERT_EXCLUSIVE_ACCESS().
+ *
+ * Assert that there are no concurrent accesses to @var (no readers nor writers)
+ * for the entire duration of the scope in which it is introduced. This provides
+ * a better way to fully cover the enclosing scope, compared to multiple
+ * ASSERT_EXCLUSIVE_ACCESS(), and increases the likelihood for KCSAN to detect
+ * racing accesses.
+ *
+ * @var: variable to assert on
+ */
+#define ASSERT_EXCLUSIVE_ACCESS_SCOPED(var)				\
+	__ASSERT_EXCLUSIVE_SCOPED(var, KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT, __COUNTER__)
+
+/**
+ * ASSERT_EXCLUSIVE_BITS - assert no concurrent writes to subset of bits in @var
+ *
+ * Bit-granular variant of ASSERT_EXCLUSIVE_WRITER().
+ *
+ * Assert that there are no concurrent writes to a subset of bits in @var;
+ * concurrent readers are permitted. This assertion captures more detailed
+ * bit-level properties, compared to the other (word granularity) assertions.
+ * Only the bits set in @mask are checked for concurrent modifications, while
+ * ignoring the remaining bits, i.e. concurrent writes (or reads) to ~mask bits
+ * are ignored.
+ *
+ * Use this for variables, where some bits must not be modified concurrently,
+ * yet other bits are expected to be modified concurrently.
+ *
+ * For example, variables where, after initialization, some bits are read-only,
+ * but other bits may still be modified concurrently. A reader may wish to
+ * assert that this is true as follows:
+ *
+ * .. code-block:: c
+ *
+ *	ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
+ *	foo = (READ_ONCE(flags) & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
+ *
+ * Note: The access that immediately follows ASSERT_EXCLUSIVE_BITS() is assumed
+ * to access the masked bits only, and KCSAN optimistically assumes it is
+ * therefore safe, even in the presence of data races, and marking it with
+ * READ_ONCE() is optional from KCSAN's point-of-view. We caution, however, that
+ * it may still be advisable to do so, since we cannot reason about all compiler
+ * optimizations when it comes to bit manipulations (on the reader and writer
+ * side). If you are sure nothing can go wrong, we can write the above simply
+ * as:
+ *
+ * .. code-block:: c
+ *
+ *	ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
+ *	foo = (flags & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
+ *
+ * Another example, where this may be used, is when certain bits of @var may
+ * only be modified when holding the appropriate lock, but other bits may still
+ * be modified concurrently. Writers, where other bits may change concurrently,
+ * could use the assertion as follows:
+ *
+ * .. code-block:: c
+ *
+ *	spin_lock(&foo_lock);
+ *	ASSERT_EXCLUSIVE_BITS(flags, FOO_MASK);
+ *	old_flags = flags;
+ *	new_flags = (old_flags & ~FOO_MASK) | (new_foo << FOO_SHIFT);
+ *	if (cmpxchg(&flags, old_flags, new_flags) != old_flags) { ... }
+ *	spin_unlock(&foo_lock);
+ *
+ * @var: variable to assert on
+ * @mask: only check for modifications to bits set in @mask
+ */
+#define ASSERT_EXCLUSIVE_BITS(var, mask)				\
+	do {								\
+		kcsan_set_access_mask(mask);				\
+		__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_ASSERT);\
+		kcsan_set_access_mask(0);				\
+		kcsan_atomic_next(1);					\
+	} while (0)
+
+#endif /* _LINUX_KCSAN_CHECKS_H */
include/linux/kcsan.h (+59)
···
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_KCSAN_H
+#define _LINUX_KCSAN_H
+
+#include <linux/kcsan-checks.h>
+#include <linux/types.h>
+
+#ifdef CONFIG_KCSAN
+
+/*
+ * Context for each thread of execution: for tasks, this is stored in
+ * task_struct, and interrupts access internal per-CPU storage.
+ */
+struct kcsan_ctx {
+	int disable_count; /* disable counter */
+	int atomic_next; /* number of following atomic ops */
+
+	/*
+	 * We distinguish between: (a) nestable atomic regions that may contain
+	 * other nestable regions; and (b) flat atomic regions that do not keep
+	 * track of nesting. Both (a) and (b) are entirely independent of each
+	 * other, and a flat region may be started in a nestable region or
+	 * vice-versa.
+	 *
+	 * This is required because, for example, in the annotations for
+	 * seqlocks, we declare seqlock writer critical sections as (a) nestable
+	 * atomic regions, but reader critical sections as (b) flat atomic
+	 * regions, but have encountered cases where seqlock reader critical
+	 * sections are contained within writer critical sections (the opposite
+	 * may be possible, too).
+	 *
+	 * To support these cases, we independently track the depth of nesting
+	 * for (a), and whether the leaf level is flat for (b).
+	 */
+	int atomic_nest_count;
+	bool in_flat_atomic;
+
+	/*
+	 * Access mask for all accesses if non-zero.
+	 */
+	unsigned long access_mask;
+
+	/* List of scoped accesses. */
+	struct list_head scoped_accesses;
+};
+
+/**
+ * kcsan_init - initialize KCSAN runtime
+ */
+void kcsan_init(void);
+
+#else /* CONFIG_KCSAN */
+
+static inline void kcsan_init(void) { }
+
+#endif /* CONFIG_KCSAN */
+
+#endif /* _LINUX_KCSAN_H */
+4
include/linux/sched.h
··· 31 31 #include <linux/task_io_accounting.h> 32 32 #include <linux/posix-timers.h> 33 33 #include <linux/rseq.h> 34 + #include <linux/kcsan.h> 34 35 35 36 /* task_struct member predeclarations (sorted alphabetically): */ 36 37 struct audit_context; ··· 1197 1196 1198 1197 #ifdef CONFIG_KASAN 1199 1198 unsigned int kasan_depth; 1199 + #endif 1200 + #ifdef CONFIG_KCSAN 1201 + struct kcsan_ctx kcsan_ctx; 1200 1202 #endif 1201 1203 1202 1204 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+47 -4
include/linux/seqlock.h
··· 37 37 #include <linux/preempt.h> 38 38 #include <linux/lockdep.h> 39 39 #include <linux/compiler.h> 40 + #include <linux/kcsan-checks.h> 40 41 #include <asm/processor.h> 42 + 43 + /* 44 + * The seqlock interface does not prescribe a precise sequence of read 45 + * begin/retry/end. For readers, typically there is a call to 46 + * read_seqcount_begin() and read_seqcount_retry(), however, there are more 47 + * esoteric cases which do not follow this pattern. 48 + * 49 + * As a consequence, we take the following best-effort approach for raw usage 50 + * via seqcount_t under KCSAN: upon beginning a seq-reader critical section, 51 + * pessimistically mark the next KCSAN_SEQLOCK_REGION_MAX memory accesses as 52 + * atomics; if there is a matching read_seqcount_retry() call, no following 53 + * memory operations are considered atomic. Usage of seqlocks via seqlock_t 54 + * interface is not affected. 55 + */ 56 + #define KCSAN_SEQLOCK_REGION_MAX 1000 41 57 42 58 /* 43 59 * Version using sequence counter only. 
··· 131 115 cpu_relax(); 132 116 goto repeat; 133 117 } 118 + kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); 134 119 return ret; 135 120 } 136 121 ··· 148 131 { 149 132 unsigned ret = READ_ONCE(s->sequence); 150 133 smp_rmb(); 134 + kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); 151 135 return ret; 152 136 } 153 137 ··· 201 183 { 202 184 unsigned ret = READ_ONCE(s->sequence); 203 185 smp_rmb(); 186 + kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); 204 187 return ret & ~1; 205 188 } 206 189 ··· 221 202 */ 222 203 static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start) 223 204 { 224 - return unlikely(s->sequence != start); 205 + kcsan_atomic_next(0); 206 + return unlikely(READ_ONCE(s->sequence) != start); 225 207 } 226 208 227 209 /** ··· 245 225 246 226 static inline void raw_write_seqcount_begin(seqcount_t *s) 247 227 { 228 + kcsan_nestable_atomic_begin(); 248 229 s->sequence++; 249 230 smp_wmb(); 250 231 } ··· 254 233 { 255 234 smp_wmb(); 256 235 s->sequence++; 236 + kcsan_nestable_atomic_end(); 257 237 } 258 238 259 239 /** ··· 264 242 * This can be used to provide an ordering guarantee instead of the 265 243 * usual consistency guarantee. It is one wmb cheaper, because we can 266 244 * collapse the two back-to-back wmb()s. 245 + * 246 + * Note that writes surrounding the barrier should be declared atomic (e.g. 247 + * via WRITE_ONCE): a) to ensure the writes become visible to other threads 248 + * atomically, avoiding compiler optimizations; b) to document which writes are 249 + * meant to propagate to the reader critical section. This is necessary because 250 + * neither writes before and after the barrier are enclosed in a seq-writer 251 + * critical section that would ensure readers are aware of ongoing writes. 
267 252 * 268 253 * seqcount_t seq; 269 254 * bool X = true, Y = false; ··· 291 262 * 292 263 * void write(void) 293 264 * { 294 - * Y = true; 265 + * WRITE_ONCE(Y, true); 295 266 * 296 267 * raw_write_seqcount_barrier(seq); 297 268 * 298 - * X = false; 269 + * WRITE_ONCE(X, false); 299 270 * } 300 271 */ 301 272 static inline void raw_write_seqcount_barrier(seqcount_t *s) 302 273 { 274 + kcsan_nestable_atomic_begin(); 303 275 s->sequence++; 304 276 smp_wmb(); 305 277 s->sequence++; 278 + kcsan_nestable_atomic_end(); 306 279 } 307 280 308 281 static inline int raw_read_seqcount_latch(seqcount_t *s) ··· 429 398 static inline void write_seqcount_invalidate(seqcount_t *s) 430 399 { 431 400 smp_wmb(); 401 + kcsan_nestable_atomic_begin(); 432 402 s->sequence+=2; 403 + kcsan_nestable_atomic_end(); 433 404 } 434 405 435 406 typedef struct { ··· 463 430 */ 464 431 static inline unsigned read_seqbegin(const seqlock_t *sl) 465 432 { 466 - return read_seqcount_begin(&sl->seqcount); 433 + unsigned ret = read_seqcount_begin(&sl->seqcount); 434 + 435 + kcsan_atomic_next(0); /* non-raw usage, assume closing read_seqretry() */ 436 + kcsan_flat_atomic_begin(); 437 + return ret; 467 438 } 468 439 469 440 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) 470 441 { 442 + /* 443 + * Assume not nested: read_seqretry() may be called multiple times when 444 + * completing read critical section. 445 + */ 446 + kcsan_flat_atomic_end(); 447 + 471 448 return read_seqcount_retry(&sl->seqcount, start); 472 449 } 473 450
+7 -7
include/linux/uaccess.h
··· 2 2 #ifndef __LINUX_UACCESS_H__ 3 3 #define __LINUX_UACCESS_H__ 4 4 5 + #include <linux/instrumented.h> 5 6 #include <linux/sched.h> 6 7 #include <linux/thread_info.h> 7 - #include <linux/kasan-checks.h> 8 8 9 9 #define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS) 10 10 ··· 58 58 static __always_inline __must_check unsigned long 59 59 __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n) 60 60 { 61 - kasan_check_write(to, n); 61 + instrument_copy_from_user(to, from, n); 62 62 check_object_size(to, n, false); 63 63 return raw_copy_from_user(to, from, n); 64 64 } ··· 67 67 __copy_from_user(void *to, const void __user *from, unsigned long n) 68 68 { 69 69 might_fault(); 70 - kasan_check_write(to, n); 70 + instrument_copy_from_user(to, from, n); 71 71 check_object_size(to, n, false); 72 72 return raw_copy_from_user(to, from, n); 73 73 } ··· 88 88 static __always_inline __must_check unsigned long 89 89 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n) 90 90 { 91 - kasan_check_read(from, n); 91 + instrument_copy_to_user(to, from, n); 92 92 check_object_size(from, n, true); 93 93 return raw_copy_to_user(to, from, n); 94 94 } ··· 97 97 __copy_to_user(void __user *to, const void *from, unsigned long n) 98 98 { 99 99 might_fault(); 100 - kasan_check_read(from, n); 100 + instrument_copy_to_user(to, from, n); 101 101 check_object_size(from, n, true); 102 102 return raw_copy_to_user(to, from, n); 103 103 } ··· 109 109 unsigned long res = n; 110 110 might_fault(); 111 111 if (likely(access_ok(from, n))) { 112 - kasan_check_write(to, n); 112 + instrument_copy_from_user(to, from, n); 113 113 res = raw_copy_from_user(to, from, n); 114 114 } 115 115 if (unlikely(res)) ··· 127 127 { 128 128 might_fault(); 129 129 if (access_ok(to, n)) { 130 - kasan_check_read(from, n); 130 + instrument_copy_to_user(to, from, n); 131 131 n = raw_copy_to_user(to, from, n); 132 132 } 133 133 return n;
+10
init/init_task.c
··· 174 174 #ifdef CONFIG_KASAN 175 175 .kasan_depth = 1, 176 176 #endif 177 + #ifdef CONFIG_KCSAN 178 + .kcsan_ctx = { 179 + .disable_count = 0, 180 + .atomic_next = 0, 181 + .atomic_nest_count = 0, 182 + .in_flat_atomic = false, 183 + .access_mask = 0, 184 + .scoped_accesses = {LIST_POISON1, NULL}, 185 + }, 186 + #endif 177 187 #ifdef CONFIG_TRACE_IRQFLAGS 178 188 .softirqs_enabled = 1, 179 189 #endif
+2
init/main.c
··· 95 95 #include <linux/rodata_test.h> 96 96 #include <linux/jump_label.h> 97 97 #include <linux/mem_encrypt.h> 98 + #include <linux/kcsan.h> 98 99 99 100 #include <asm/io.h> 100 101 #include <asm/bugs.h> ··· 1037 1036 acpi_subsystem_init(); 1038 1037 arch_post_acpi_subsys_init(); 1039 1038 sfi_init_late(); 1039 + kcsan_init(); 1040 1040 1041 1041 /* Do the rest non-__init'ed, we're now alive */ 1042 1042 arch_call_rest_init();
+6
kernel/Makefile
··· 23 23 # Prevents flicker of uninteresting __do_softirq()/__local_bh_disable_ip() 24 24 # in coverage traces. 25 25 KCOV_INSTRUMENT_softirq.o := n 26 + # Avoid KCSAN instrumentation in softirq ("No shared variables, all the data 27 + # are CPU local" => assume no data races), to reduce overhead in interrupts. 28 + KCSAN_SANITIZE_softirq.o = n 26 29 # These are called from save_stack_trace() on slub debug path, 27 30 # and produce insane amounts of uninteresting coverage. 28 31 KCOV_INSTRUMENT_module.o := n ··· 34 31 # Don't self-instrument. 35 32 KCOV_INSTRUMENT_kcov.o := n 36 33 KASAN_SANITIZE_kcov.o := n 34 + KCSAN_SANITIZE_kcov.o := n 37 35 CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 38 36 39 37 # cond_syscall is currently not LTO compatible ··· 107 103 obj-$(CONFIG_IRQ_WORK) += irq_work.o 108 104 obj-$(CONFIG_CPU_PM) += cpu_pm.o 109 105 obj-$(CONFIG_BPF) += bpf/ 106 + obj-$(CONFIG_KCSAN) += kcsan/ 110 107 obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o 111 108 112 109 obj-$(CONFIG_PERF_EVENTS) += events/ ··· 126 121 127 122 obj-$(CONFIG_GCC_PLUGIN_STACKLEAK) += stackleak.o 128 123 KASAN_SANITIZE_stackleak.o := n 124 + KCSAN_SANITIZE_stackleak.o := n 129 125 KCOV_INSTRUMENT_stackleak.o := n 130 126 131 127 $(obj)/configs.o: $(obj)/config_data.gz
+14
kernel/kcsan/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + KCSAN_SANITIZE := n 3 + KCOV_INSTRUMENT := n 4 + UBSAN_SANITIZE := n 5 + 6 + CFLAGS_REMOVE_core.o = $(CC_FLAGS_FTRACE) 7 + CFLAGS_REMOVE_debugfs.o = $(CC_FLAGS_FTRACE) 8 + CFLAGS_REMOVE_report.o = $(CC_FLAGS_FTRACE) 9 + 10 + CFLAGS_core.o := $(call cc-option,-fno-conserve-stack,) \ 11 + $(call cc-option,-fno-stack-protector,) 12 + 13 + obj-y := core.o debugfs.o report.o 14 + obj-$(CONFIG_KCSAN_SELFTEST) += test.o
+20
kernel/kcsan/atomic.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef _KERNEL_KCSAN_ATOMIC_H 4 + #define _KERNEL_KCSAN_ATOMIC_H 5 + 6 + #include <linux/jiffies.h> 7 + #include <linux/sched.h> 8 + 9 + /* 10 + * Special rules for certain memory where concurrent conflicting accesses are 11 + * common but which, by current convention, are not marked; returns true if 12 + * access to @ptr should be considered atomic. Called from slow-path. 13 + */ 14 + static bool kcsan_is_atomic_special(const volatile void *ptr) 15 + { 16 + /* volatile globals that have been observed in data races. */ 17 + return ptr == &jiffies || ptr == &current->state; 18 + } 19 + 20 + #endif /* _KERNEL_KCSAN_ATOMIC_H */
+850
kernel/kcsan/core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/atomic.h> 4 + #include <linux/bug.h> 5 + #include <linux/delay.h> 6 + #include <linux/export.h> 7 + #include <linux/init.h> 8 + #include <linux/kernel.h> 9 + #include <linux/list.h> 10 + #include <linux/moduleparam.h> 11 + #include <linux/percpu.h> 12 + #include <linux/preempt.h> 13 + #include <linux/random.h> 14 + #include <linux/sched.h> 15 + #include <linux/uaccess.h> 16 + 17 + #include "atomic.h" 18 + #include "encoding.h" 19 + #include "kcsan.h" 20 + 21 + static bool kcsan_early_enable = IS_ENABLED(CONFIG_KCSAN_EARLY_ENABLE); 22 + unsigned int kcsan_udelay_task = CONFIG_KCSAN_UDELAY_TASK; 23 + unsigned int kcsan_udelay_interrupt = CONFIG_KCSAN_UDELAY_INTERRUPT; 24 + static long kcsan_skip_watch = CONFIG_KCSAN_SKIP_WATCH; 25 + static bool kcsan_interrupt_watcher = IS_ENABLED(CONFIG_KCSAN_INTERRUPT_WATCHER); 26 + 27 + #ifdef MODULE_PARAM_PREFIX 28 + #undef MODULE_PARAM_PREFIX 29 + #endif 30 + #define MODULE_PARAM_PREFIX "kcsan." 31 + module_param_named(early_enable, kcsan_early_enable, bool, 0); 32 + module_param_named(udelay_task, kcsan_udelay_task, uint, 0644); 33 + module_param_named(udelay_interrupt, kcsan_udelay_interrupt, uint, 0644); 34 + module_param_named(skip_watch, kcsan_skip_watch, long, 0644); 35 + module_param_named(interrupt_watcher, kcsan_interrupt_watcher, bool, 0444); 36 + 37 + bool kcsan_enabled; 38 + 39 + /* Per-CPU kcsan_ctx for interrupts */ 40 + static DEFINE_PER_CPU(struct kcsan_ctx, kcsan_cpu_ctx) = { 41 + .disable_count = 0, 42 + .atomic_next = 0, 43 + .atomic_nest_count = 0, 44 + .in_flat_atomic = false, 45 + .access_mask = 0, 46 + .scoped_accesses = {LIST_POISON1, NULL}, 47 + }; 48 + 49 + /* 50 + * Helper macros to index into adjacent slots, starting from address slot 51 + * itself, followed by the right and left slots. 52 + * 53 + * The purpose is 2-fold: 54 + * 55 + * 1. 
if during insertion the address slot is already occupied, check if 56 + * any adjacent slots are free; 57 + * 2. accesses that straddle a slot boundary due to size that exceeds a 58 + * slot's range may check adjacent slots if any watchpoint matches. 59 + * 60 + * Note that accesses with very large size may still miss a watchpoint; however, 61 + * given this should be rare, this is a reasonable trade-off to make, since this 62 + * will avoid: 63 + * 64 + * 1. excessive contention between watchpoint checks and setup; 65 + * 2. larger number of simultaneous watchpoints without sacrificing 66 + * performance. 67 + * 68 + * Example: SLOT_IDX values for KCSAN_CHECK_ADJACENT=1, where i is [0, 1, 2]: 69 + * 70 + * slot=0: [ 1, 2, 0] 71 + * slot=9: [10, 11, 9] 72 + * slot=63: [64, 65, 63] 73 + */ 74 + #define SLOT_IDX(slot, i) (slot + ((i + KCSAN_CHECK_ADJACENT) % NUM_SLOTS)) 75 + 76 + /* 77 + * SLOT_IDX_FAST is used in the fast-path. Not first checking the address's primary 78 + * slot (middle) is fine if we assume that races occur rarely. The set of 79 + * indices {SLOT_IDX(slot, i) | i in [0, NUM_SLOTS)} is equivalent to 80 + * {SLOT_IDX_FAST(slot, i) | i in [0, NUM_SLOTS)}. 81 + */ 82 + #define SLOT_IDX_FAST(slot, i) (slot + i) 83 + 84 + /* 85 + * Watchpoints, with each entry encoded as defined in encoding.h: in order to be 86 + * able to safely update and access a watchpoint without introducing locking 87 + * overhead, we encode each watchpoint as a single atomic long. The initial 88 + * zero-initialized state matches INVALID_WATCHPOINT. 89 + * 90 + * Add NUM_SLOTS-1 entries to account for overflow; this helps avoid having to 91 + * use more complicated SLOT_IDX_FAST calculation with modulo in the fast-path. 92 + */ 93 + static atomic_long_t watchpoints[CONFIG_KCSAN_NUM_WATCHPOINTS + NUM_SLOTS-1]; 94 + 95 + /* 96 + * Instructions to skip watching counter, used in should_watch(). We use a 97 + * per-CPU counter to avoid excessive contention. 
98 + */ 99 + static DEFINE_PER_CPU(long, kcsan_skip); 100 + 101 + static __always_inline atomic_long_t *find_watchpoint(unsigned long addr, 102 + size_t size, 103 + bool expect_write, 104 + long *encoded_watchpoint) 105 + { 106 + const int slot = watchpoint_slot(addr); 107 + const unsigned long addr_masked = addr & WATCHPOINT_ADDR_MASK; 108 + atomic_long_t *watchpoint; 109 + unsigned long wp_addr_masked; 110 + size_t wp_size; 111 + bool is_write; 112 + int i; 113 + 114 + BUILD_BUG_ON(CONFIG_KCSAN_NUM_WATCHPOINTS < NUM_SLOTS); 115 + 116 + for (i = 0; i < NUM_SLOTS; ++i) { 117 + watchpoint = &watchpoints[SLOT_IDX_FAST(slot, i)]; 118 + *encoded_watchpoint = atomic_long_read(watchpoint); 119 + if (!decode_watchpoint(*encoded_watchpoint, &wp_addr_masked, 120 + &wp_size, &is_write)) 121 + continue; 122 + 123 + if (expect_write && !is_write) 124 + continue; 125 + 126 + /* Check if the watchpoint matches the access. */ 127 + if (matching_access(wp_addr_masked, wp_size, addr_masked, size)) 128 + return watchpoint; 129 + } 130 + 131 + return NULL; 132 + } 133 + 134 + static inline atomic_long_t * 135 + insert_watchpoint(unsigned long addr, size_t size, bool is_write) 136 + { 137 + const int slot = watchpoint_slot(addr); 138 + const long encoded_watchpoint = encode_watchpoint(addr, size, is_write); 139 + atomic_long_t *watchpoint; 140 + int i; 141 + 142 + /* Check slot index logic, ensuring we stay within array bounds. */ 143 + BUILD_BUG_ON(SLOT_IDX(0, 0) != KCSAN_CHECK_ADJACENT); 144 + BUILD_BUG_ON(SLOT_IDX(0, KCSAN_CHECK_ADJACENT+1) != 0); 145 + BUILD_BUG_ON(SLOT_IDX(CONFIG_KCSAN_NUM_WATCHPOINTS-1, KCSAN_CHECK_ADJACENT) != ARRAY_SIZE(watchpoints)-1); 146 + BUILD_BUG_ON(SLOT_IDX(CONFIG_KCSAN_NUM_WATCHPOINTS-1, KCSAN_CHECK_ADJACENT+1) != ARRAY_SIZE(watchpoints) - NUM_SLOTS); 147 + 148 + for (i = 0; i < NUM_SLOTS; ++i) { 149 + long expect_val = INVALID_WATCHPOINT; 150 + 151 + /* Try to acquire this slot. 
*/ 152 + watchpoint = &watchpoints[SLOT_IDX(slot, i)]; 153 + if (atomic_long_try_cmpxchg_relaxed(watchpoint, &expect_val, encoded_watchpoint)) 154 + return watchpoint; 155 + } 156 + 157 + return NULL; 158 + } 159 + 160 + /* 161 + * Return true if watchpoint was successfully consumed, false otherwise. 162 + * 163 + * This may return false if: 164 + * 165 + * 1. another thread already consumed the watchpoint; 166 + * 2. the thread that set up the watchpoint already removed it; 167 + * 3. the watchpoint was removed and then re-used. 168 + */ 169 + static __always_inline bool 170 + try_consume_watchpoint(atomic_long_t *watchpoint, long encoded_watchpoint) 171 + { 172 + return atomic_long_try_cmpxchg_relaxed(watchpoint, &encoded_watchpoint, CONSUMED_WATCHPOINT); 173 + } 174 + 175 + /* Return true if watchpoint was not touched, false if already consumed. */ 176 + static inline bool consume_watchpoint(atomic_long_t *watchpoint) 177 + { 178 + return atomic_long_xchg_relaxed(watchpoint, CONSUMED_WATCHPOINT) != CONSUMED_WATCHPOINT; 179 + } 180 + 181 + /* Remove the watchpoint -- its slot may be reused after. */ 182 + static inline void remove_watchpoint(atomic_long_t *watchpoint) 183 + { 184 + atomic_long_set(watchpoint, INVALID_WATCHPOINT); 185 + } 186 + 187 + static __always_inline struct kcsan_ctx *get_ctx(void) 188 + { 189 + /* 190 + * In interrupts, use raw_cpu_ptr to avoid unnecessary checks, that would 191 + * also result in calls that generate warnings in uaccess regions. 192 + */ 193 + return in_task() ? &current->kcsan_ctx : raw_cpu_ptr(&kcsan_cpu_ctx); 194 + } 195 + 196 + /* Check scoped accesses; never inline because this is a slow-path! */ 197 + static noinline void kcsan_check_scoped_accesses(void) 198 + { 199 + struct kcsan_ctx *ctx = get_ctx(); 200 + struct list_head *prev_save = ctx->scoped_accesses.prev; 201 + struct kcsan_scoped_access *scoped_access; 202 + 203 + ctx->scoped_accesses.prev = NULL; /* Avoid recursion. 
*/ 204 + list_for_each_entry(scoped_access, &ctx->scoped_accesses, list) 205 + __kcsan_check_access(scoped_access->ptr, scoped_access->size, scoped_access->type); 206 + ctx->scoped_accesses.prev = prev_save; 207 + } 208 + 209 + /* Rules for generic atomic accesses. Called from fast-path. */ 210 + static __always_inline bool 211 + is_atomic(const volatile void *ptr, size_t size, int type, struct kcsan_ctx *ctx) 212 + { 213 + if (type & KCSAN_ACCESS_ATOMIC) 214 + return true; 215 + 216 + /* 217 + * Unless explicitly declared atomic, never consider an assertion access 218 + * as atomic. This allows using them also in atomic regions, such as 219 + * seqlocks, without implicitly changing their semantics. 220 + */ 221 + if (type & KCSAN_ACCESS_ASSERT) 222 + return false; 223 + 224 + if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC) && 225 + (type & KCSAN_ACCESS_WRITE) && size <= sizeof(long) && 226 + IS_ALIGNED((unsigned long)ptr, size)) 227 + return true; /* Assume aligned writes up to word size are atomic. */ 228 + 229 + if (ctx->atomic_next > 0) { 230 + /* 231 + * Because we do not have separate contexts for nested 232 + * interrupts, in case atomic_next is set, we simply assume that 233 + * the outer interrupt set atomic_next. In the worst case, we 234 + * will conservatively consider operations as atomic. This is a 235 + * reasonable trade-off to make, since this case should be 236 + * extremely rare; however, even if extremely rare, it could 237 + * lead to false positives otherwise. 238 + */ 239 + if ((hardirq_count() >> HARDIRQ_SHIFT) < 2) 240 + --ctx->atomic_next; /* in task, or outer interrupt */ 241 + return true; 242 + } 243 + 244 + return ctx->atomic_nest_count > 0 || ctx->in_flat_atomic; 245 + } 246 + 247 + static __always_inline bool 248 + should_watch(const volatile void *ptr, size_t size, int type, struct kcsan_ctx *ctx) 249 + { 250 + /* 251 + * Never set up watchpoints when memory operations are atomic. 
252 + * 253 + * Need to check this first, before kcsan_skip check below: (1) atomics 254 + * should not count towards skipped instructions, and (2) to actually 255 + * decrement kcsan_atomic_next for consecutive instruction stream. 256 + */ 257 + if (is_atomic(ptr, size, type, ctx)) 258 + return false; 259 + 260 + if (this_cpu_dec_return(kcsan_skip) >= 0) 261 + return false; 262 + 263 + /* 264 + * NOTE: If we get here, kcsan_skip must always be reset in slow path 265 + * via reset_kcsan_skip() to avoid underflow. 266 + */ 267 + 268 + /* this operation should be watched */ 269 + return true; 270 + } 271 + 272 + static inline void reset_kcsan_skip(void) 273 + { 274 + long skip_count = kcsan_skip_watch - 275 + (IS_ENABLED(CONFIG_KCSAN_SKIP_WATCH_RANDOMIZE) ? 276 + prandom_u32_max(kcsan_skip_watch) : 277 + 0); 278 + this_cpu_write(kcsan_skip, skip_count); 279 + } 280 + 281 + static __always_inline bool kcsan_is_enabled(void) 282 + { 283 + return READ_ONCE(kcsan_enabled) && get_ctx()->disable_count == 0; 284 + } 285 + 286 + static inline unsigned int get_delay(void) 287 + { 288 + unsigned int delay = in_task() ? kcsan_udelay_task : kcsan_udelay_interrupt; 289 + return delay - (IS_ENABLED(CONFIG_KCSAN_DELAY_RANDOMIZE) ? 290 + prandom_u32_max(delay) : 291 + 0); 292 + } 293 + 294 + /* 295 + * Pull everything together: check_access() below contains the performance 296 + * critical operations; the fast-path (including check_access) functions should 297 + * all be inlinable by the instrumentation functions. 298 + * 299 + * The slow-path (kcsan_found_watchpoint, kcsan_setup_watchpoint) are 300 + * non-inlinable -- note that, we prefix these with "kcsan_" to ensure they can 301 + * be filtered from the stacktrace, as well as give them unique names for the 302 + * UACCESS whitelist of objtool. Each function uses user_access_save/restore(), 303 + * since they do not access any user memory, but instrumentation is still 304 + * emitted in UACCESS regions. 
305 + */ 306 + 307 + static noinline void kcsan_found_watchpoint(const volatile void *ptr, 308 + size_t size, 309 + int type, 310 + atomic_long_t *watchpoint, 311 + long encoded_watchpoint) 312 + { 313 + unsigned long flags; 314 + bool consumed; 315 + 316 + if (!kcsan_is_enabled()) 317 + return; 318 + 319 + /* 320 + * The access_mask check relies on value-change comparison. To avoid 321 + * reporting a race where e.g. the writer set up the watchpoint, but the 322 + * reader has access_mask!=0, we have to ignore the found watchpoint. 323 + */ 324 + if (get_ctx()->access_mask != 0) 325 + return; 326 + 327 + /* 328 + * Consume the watchpoint as soon as possible, to minimize the chances 329 + * of !consumed. Consuming the watchpoint must always be guarded by the 330 + * kcsan_is_enabled() check, as otherwise we might erroneously 331 + * trigger reports when disabled. 332 + */ 333 + consumed = try_consume_watchpoint(watchpoint, encoded_watchpoint); 334 + 335 + /* keep this after try_consume_watchpoint */ 336 + flags = user_access_save(); 337 + 338 + if (consumed) { 339 + kcsan_report(ptr, size, type, KCSAN_VALUE_CHANGE_MAYBE, 340 + KCSAN_REPORT_CONSUMED_WATCHPOINT, 341 + watchpoint - watchpoints); 342 + } else { 343 + /* 344 + * The other thread may not print any diagnostics, as it has 345 + * already removed the watchpoint, or another thread consumed 346 + * the watchpoint before this thread. 
347 + */ 348 + kcsan_counter_inc(KCSAN_COUNTER_REPORT_RACES); 349 + } 350 + 351 + if ((type & KCSAN_ACCESS_ASSERT) != 0) 352 + kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES); 353 + else 354 + kcsan_counter_inc(KCSAN_COUNTER_DATA_RACES); 355 + 356 + user_access_restore(flags); 357 + } 358 + 359 + static noinline void 360 + kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type) 361 + { 362 + const bool is_write = (type & KCSAN_ACCESS_WRITE) != 0; 363 + const bool is_assert = (type & KCSAN_ACCESS_ASSERT) != 0; 364 + atomic_long_t *watchpoint; 365 + union { 366 + u8 _1; 367 + u16 _2; 368 + u32 _4; 369 + u64 _8; 370 + } expect_value; 371 + unsigned long access_mask; 372 + enum kcsan_value_change value_change = KCSAN_VALUE_CHANGE_MAYBE; 373 + unsigned long ua_flags = user_access_save(); 374 + unsigned long irq_flags = 0; 375 + 376 + /* 377 + * Always reset kcsan_skip counter in slow-path to avoid underflow; see 378 + * should_watch(). 379 + */ 380 + reset_kcsan_skip(); 381 + 382 + if (!kcsan_is_enabled()) 383 + goto out; 384 + 385 + /* 386 + * Special atomic rules: unlikely to be true, so we check them here in 387 + * the slow-path, and not in the fast-path in is_atomic(). Call after 388 + * kcsan_is_enabled(), as we may access memory that is not yet 389 + * initialized during early boot. 390 + */ 391 + if (!is_assert && kcsan_is_atomic_special(ptr)) 392 + goto out; 393 + 394 + if (!check_encodable((unsigned long)ptr, size)) { 395 + kcsan_counter_inc(KCSAN_COUNTER_UNENCODABLE_ACCESSES); 396 + goto out; 397 + } 398 + 399 + if (!kcsan_interrupt_watcher) 400 + /* Use raw to avoid lockdep recursion via IRQ flags tracing. 
*/ 401 + raw_local_irq_save(irq_flags); 402 + 403 + watchpoint = insert_watchpoint((unsigned long)ptr, size, is_write); 404 + if (watchpoint == NULL) { 405 + /* 406 + * Out of capacity: the size of 'watchpoints', and the frequency 407 + * with which should_watch() returns true should be tweaked so 408 + * that this case happens very rarely. 409 + */ 410 + kcsan_counter_inc(KCSAN_COUNTER_NO_CAPACITY); 411 + goto out_unlock; 412 + } 413 + 414 + kcsan_counter_inc(KCSAN_COUNTER_SETUP_WATCHPOINTS); 415 + kcsan_counter_inc(KCSAN_COUNTER_USED_WATCHPOINTS); 416 + 417 + /* 418 + * Read the current value, to later check and infer a race if the data 419 + * was modified via a non-instrumented access, e.g. from a device. 420 + */ 421 + expect_value._8 = 0; 422 + switch (size) { 423 + case 1: 424 + expect_value._1 = READ_ONCE(*(const u8 *)ptr); 425 + break; 426 + case 2: 427 + expect_value._2 = READ_ONCE(*(const u16 *)ptr); 428 + break; 429 + case 4: 430 + expect_value._4 = READ_ONCE(*(const u32 *)ptr); 431 + break; 432 + case 8: 433 + expect_value._8 = READ_ONCE(*(const u64 *)ptr); 434 + break; 435 + default: 436 + break; /* ignore; we do not diff the values */ 437 + } 438 + 439 + if (IS_ENABLED(CONFIG_KCSAN_DEBUG)) { 440 + kcsan_disable_current(); 441 + pr_err("KCSAN: watching %s, size: %zu, addr: %px [slot: %d, encoded: %lx]\n", 442 + is_write ? "write" : "read", size, ptr, 443 + watchpoint_slot((unsigned long)ptr), 444 + encode_watchpoint((unsigned long)ptr, size, is_write)); 445 + kcsan_enable_current(); 446 + } 447 + 448 + /* 449 + * Delay this thread, to increase probability of observing a racy 450 + * conflicting access. 451 + */ 452 + udelay(get_delay()); 453 + 454 + /* 455 + * Re-read value, and check if it is as expected; if not, we infer a 456 + * racy access. 
457 + */ 458 + access_mask = get_ctx()->access_mask; 459 + switch (size) { 460 + case 1: 461 + expect_value._1 ^= READ_ONCE(*(const u8 *)ptr); 462 + if (access_mask) 463 + expect_value._1 &= (u8)access_mask; 464 + break; 465 + case 2: 466 + expect_value._2 ^= READ_ONCE(*(const u16 *)ptr); 467 + if (access_mask) 468 + expect_value._2 &= (u16)access_mask; 469 + break; 470 + case 4: 471 + expect_value._4 ^= READ_ONCE(*(const u32 *)ptr); 472 + if (access_mask) 473 + expect_value._4 &= (u32)access_mask; 474 + break; 475 + case 8: 476 + expect_value._8 ^= READ_ONCE(*(const u64 *)ptr); 477 + if (access_mask) 478 + expect_value._8 &= (u64)access_mask; 479 + break; 480 + default: 481 + break; /* ignore; we do not diff the values */ 482 + } 483 + 484 + /* Were we able to observe a value-change? */ 485 + if (expect_value._8 != 0) 486 + value_change = KCSAN_VALUE_CHANGE_TRUE; 487 + 488 + /* Check if this access raced with another. */ 489 + if (!consume_watchpoint(watchpoint)) { 490 + /* 491 + * Depending on the access type, map a value_change of MAYBE to 492 + * TRUE (always report) or FALSE (never report). 493 + */ 494 + if (value_change == KCSAN_VALUE_CHANGE_MAYBE) { 495 + if (access_mask != 0) { 496 + /* 497 + * For access with access_mask, we require a 498 + * value-change, as it is likely that races on 499 + * ~access_mask bits are expected. 500 + */ 501 + value_change = KCSAN_VALUE_CHANGE_FALSE; 502 + } else if (size > 8 || is_assert) { 503 + /* Always assume a value-change. */ 504 + value_change = KCSAN_VALUE_CHANGE_TRUE; 505 + } 506 + } 507 + 508 + /* 509 + * No need to increment 'data_races' counter, as the racing 510 + * thread already did. 511 + * 512 + * Count 'assert_failures' for each failed ASSERT access, 513 + * therefore both this thread and the racing thread may 514 + * increment this counter. 
515 + */ 516 + if (is_assert && value_change == KCSAN_VALUE_CHANGE_TRUE) 517 + kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES); 518 + 519 + kcsan_report(ptr, size, type, value_change, KCSAN_REPORT_RACE_SIGNAL, 520 + watchpoint - watchpoints); 521 + } else if (value_change == KCSAN_VALUE_CHANGE_TRUE) { 522 + /* Inferring a race, since the value should not have changed. */ 523 + 524 + kcsan_counter_inc(KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN); 525 + if (is_assert) 526 + kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES); 527 + 528 + if (IS_ENABLED(CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN) || is_assert) 529 + kcsan_report(ptr, size, type, KCSAN_VALUE_CHANGE_TRUE, 530 + KCSAN_REPORT_RACE_UNKNOWN_ORIGIN, 531 + watchpoint - watchpoints); 532 + } 533 + 534 + /* 535 + * Remove watchpoint; must be after reporting, since the slot may be 536 + * reused after this point. 537 + */ 538 + remove_watchpoint(watchpoint); 539 + kcsan_counter_dec(KCSAN_COUNTER_USED_WATCHPOINTS); 540 + out_unlock: 541 + if (!kcsan_interrupt_watcher) 542 + raw_local_irq_restore(irq_flags); 543 + out: 544 + user_access_restore(ua_flags); 545 + } 546 + 547 + static __always_inline void check_access(const volatile void *ptr, size_t size, 548 + int type) 549 + { 550 + const bool is_write = (type & KCSAN_ACCESS_WRITE) != 0; 551 + atomic_long_t *watchpoint; 552 + long encoded_watchpoint; 553 + 554 + /* 555 + * Do nothing for 0 sized check; this comparison will be optimized out 556 + * for constant sized instrumentation (__tsan_{read,write}N). 557 + */ 558 + if (unlikely(size == 0)) 559 + return; 560 + 561 + /* 562 + * Avoid user_access_save in fast-path: find_watchpoint is safe without 563 + * user_access_save, as the address that ptr points to is only used to 564 + * check if a watchpoint exists; ptr is never dereferenced. 
565 + */ 566 + watchpoint = find_watchpoint((unsigned long)ptr, size, !is_write, 567 + &encoded_watchpoint); 568 + /* 569 + * It is safe to check kcsan_is_enabled() after find_watchpoint in the 570 + * slow-path, as long as no state changes that cause a race to be 571 + * detected and reported have occurred until kcsan_is_enabled() is 572 + * checked. 573 + */ 574 + 575 + if (unlikely(watchpoint != NULL)) 576 + kcsan_found_watchpoint(ptr, size, type, watchpoint, 577 + encoded_watchpoint); 578 + else { 579 + struct kcsan_ctx *ctx = get_ctx(); /* Call only once in fast-path. */ 580 + 581 + if (unlikely(should_watch(ptr, size, type, ctx))) 582 + kcsan_setup_watchpoint(ptr, size, type); 583 + else if (unlikely(ctx->scoped_accesses.prev)) 584 + kcsan_check_scoped_accesses(); 585 + } 586 + } 587 + 588 + /* === Public interface ===================================================== */ 589 + 590 + void __init kcsan_init(void) 591 + { 592 + BUG_ON(!in_task()); 593 + 594 + kcsan_debugfs_init(); 595 + 596 + /* 597 + * We are in the init task, and no other tasks should be running; 598 + * WRITE_ONCE without memory barrier is sufficient. 599 + */ 600 + if (kcsan_early_enable) 601 + WRITE_ONCE(kcsan_enabled, true); 602 + } 603 + 604 + /* === Exported interface =================================================== */ 605 + 606 + void kcsan_disable_current(void) 607 + { 608 + ++get_ctx()->disable_count; 609 + } 610 + EXPORT_SYMBOL(kcsan_disable_current); 611 + 612 + void kcsan_enable_current(void) 613 + { 614 + if (get_ctx()->disable_count-- == 0) { 615 + /* 616 + * Warn if kcsan_enable_current() calls are unbalanced with 617 + * kcsan_disable_current() calls, which causes disable_count to 618 + * become negative and should not happen. 
619 + */ 620 + kcsan_disable_current(); /* restore to 0, KCSAN still enabled */ 621 + kcsan_disable_current(); /* disable to generate warning */ 622 + WARN(1, "Unbalanced %s()", __func__); 623 + kcsan_enable_current(); 624 + } 625 + } 626 + EXPORT_SYMBOL(kcsan_enable_current); 627 + 628 + void kcsan_enable_current_nowarn(void) 629 + { 630 + if (get_ctx()->disable_count-- == 0) 631 + kcsan_disable_current(); 632 + } 633 + EXPORT_SYMBOL(kcsan_enable_current_nowarn); 634 + 635 + void kcsan_nestable_atomic_begin(void) 636 + { 637 + /* 638 + * Do *not* check and warn if we are in a flat atomic region: nestable 639 + * and flat atomic regions are independent from each other. 640 + * See include/linux/kcsan.h: struct kcsan_ctx comments for more 641 + * details. 642 + */ 643 + 644 + ++get_ctx()->atomic_nest_count; 645 + } 646 + EXPORT_SYMBOL(kcsan_nestable_atomic_begin); 647 + 648 + void kcsan_nestable_atomic_end(void) 649 + { 650 + if (get_ctx()->atomic_nest_count-- == 0) { 651 + /* 652 + * Warn if kcsan_nestable_atomic_end() calls are unbalanced with 653 + * kcsan_nestable_atomic_begin() calls, which causes 654 + * atomic_nest_count to become negative and should not happen.
655 + */ 656 + kcsan_nestable_atomic_begin(); /* restore to 0 */ 657 + kcsan_disable_current(); /* disable to generate warning */ 658 + WARN(1, "Unbalanced %s()", __func__); 659 + kcsan_enable_current(); 660 + } 661 + } 662 + EXPORT_SYMBOL(kcsan_nestable_atomic_end); 663 + 664 + void kcsan_flat_atomic_begin(void) 665 + { 666 + get_ctx()->in_flat_atomic = true; 667 + } 668 + EXPORT_SYMBOL(kcsan_flat_atomic_begin); 669 + 670 + void kcsan_flat_atomic_end(void) 671 + { 672 + get_ctx()->in_flat_atomic = false; 673 + } 674 + EXPORT_SYMBOL(kcsan_flat_atomic_end); 675 + 676 + void kcsan_atomic_next(int n) 677 + { 678 + get_ctx()->atomic_next = n; 679 + } 680 + EXPORT_SYMBOL(kcsan_atomic_next); 681 + 682 + void kcsan_set_access_mask(unsigned long mask) 683 + { 684 + get_ctx()->access_mask = mask; 685 + } 686 + EXPORT_SYMBOL(kcsan_set_access_mask); 687 + 688 + struct kcsan_scoped_access * 689 + kcsan_begin_scoped_access(const volatile void *ptr, size_t size, int type, 690 + struct kcsan_scoped_access *sa) 691 + { 692 + struct kcsan_ctx *ctx = get_ctx(); 693 + 694 + __kcsan_check_access(ptr, size, type); 695 + 696 + ctx->disable_count++; /* Disable KCSAN, in case list debugging is on. */ 697 + 698 + INIT_LIST_HEAD(&sa->list); 699 + sa->ptr = ptr; 700 + sa->size = size; 701 + sa->type = type; 702 + 703 + if (!ctx->scoped_accesses.prev) /* Lazy initialize list head. */ 704 + INIT_LIST_HEAD(&ctx->scoped_accesses); 705 + list_add(&sa->list, &ctx->scoped_accesses); 706 + 707 + ctx->disable_count--; 708 + return sa; 709 + } 710 + EXPORT_SYMBOL(kcsan_begin_scoped_access); 711 + 712 + void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) 713 + { 714 + struct kcsan_ctx *ctx = get_ctx(); 715 + 716 + if (WARN(!ctx->scoped_accesses.prev, "Unbalanced %s()?", __func__)) 717 + return; 718 + 719 + ctx->disable_count++; /* Disable KCSAN, in case list debugging is on. 
*/ 720 + 721 + list_del(&sa->list); 722 + if (list_empty(&ctx->scoped_accesses)) 723 + /* 724 + * Ensure we do not enter kcsan_check_scoped_accesses() 725 + * slow-path if unnecessary, and avoid requiring list_empty() 726 + * in the fast-path (to avoid a READ_ONCE() and potential 727 + * uaccess warning). 728 + */ 729 + ctx->scoped_accesses.prev = NULL; 730 + 731 + ctx->disable_count--; 732 + 733 + __kcsan_check_access(sa->ptr, sa->size, sa->type); 734 + } 735 + EXPORT_SYMBOL(kcsan_end_scoped_access); 736 + 737 + void __kcsan_check_access(const volatile void *ptr, size_t size, int type) 738 + { 739 + check_access(ptr, size, type); 740 + } 741 + EXPORT_SYMBOL(__kcsan_check_access); 742 + 743 + /* 744 + * KCSAN uses the same instrumentation that is emitted by supported compilers 745 + * for ThreadSanitizer (TSAN). 746 + * 747 + * When enabled, the compiler emits instrumentation calls (the functions 748 + * prefixed with "__tsan" below) for all loads and stores that it generated; 749 + * inline asm is not instrumented. 750 + * 751 + * Note that not all supported compiler versions distinguish aligned/unaligned 752 + * accesses, but e.g. recent versions of Clang do. We simply alias the unaligned 753 + * version to the generic version, which can handle both.
754 + */ 755 + 756 + #define DEFINE_TSAN_READ_WRITE(size) \ 757 + void __tsan_read##size(void *ptr) \ 758 + { \ 759 + check_access(ptr, size, 0); \ 760 + } \ 761 + EXPORT_SYMBOL(__tsan_read##size); \ 762 + void __tsan_unaligned_read##size(void *ptr) \ 763 + __alias(__tsan_read##size); \ 764 + EXPORT_SYMBOL(__tsan_unaligned_read##size); \ 765 + void __tsan_write##size(void *ptr) \ 766 + { \ 767 + check_access(ptr, size, KCSAN_ACCESS_WRITE); \ 768 + } \ 769 + EXPORT_SYMBOL(__tsan_write##size); \ 770 + void __tsan_unaligned_write##size(void *ptr) \ 771 + __alias(__tsan_write##size); \ 772 + EXPORT_SYMBOL(__tsan_unaligned_write##size) 773 + 774 + DEFINE_TSAN_READ_WRITE(1); 775 + DEFINE_TSAN_READ_WRITE(2); 776 + DEFINE_TSAN_READ_WRITE(4); 777 + DEFINE_TSAN_READ_WRITE(8); 778 + DEFINE_TSAN_READ_WRITE(16); 779 + 780 + void __tsan_read_range(void *ptr, size_t size) 781 + { 782 + check_access(ptr, size, 0); 783 + } 784 + EXPORT_SYMBOL(__tsan_read_range); 785 + 786 + void __tsan_write_range(void *ptr, size_t size) 787 + { 788 + check_access(ptr, size, KCSAN_ACCESS_WRITE); 789 + } 790 + EXPORT_SYMBOL(__tsan_write_range); 791 + 792 + /* 793 + * Use of explicit volatile is generally disallowed [1]; however, volatile is 794 + * still used in various concurrent contexts, whether in low-level 795 + * synchronization primitives or for legacy reasons. 796 + * [1] https://lwn.net/Articles/233479/ 797 + * 798 + * We only consider volatile accesses atomic if they are aligned and would pass 799 + * the size-check of compiletime_assert_rwonce_type(). 800 + */ 801 + #define DEFINE_TSAN_VOLATILE_READ_WRITE(size) \ 802 + void __tsan_volatile_read##size(void *ptr) \ 803 + { \ 804 + const bool is_atomic = size <= sizeof(long long) && \ 805 + IS_ALIGNED((unsigned long)ptr, size); \ 806 + if (IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS) && is_atomic) \ 807 + return; \ 808 + check_access(ptr, size, is_atomic ?
KCSAN_ACCESS_ATOMIC : 0); \ 809 + } \ 810 + EXPORT_SYMBOL(__tsan_volatile_read##size); \ 811 + void __tsan_unaligned_volatile_read##size(void *ptr) \ 812 + __alias(__tsan_volatile_read##size); \ 813 + EXPORT_SYMBOL(__tsan_unaligned_volatile_read##size); \ 814 + void __tsan_volatile_write##size(void *ptr) \ 815 + { \ 816 + const bool is_atomic = size <= sizeof(long long) && \ 817 + IS_ALIGNED((unsigned long)ptr, size); \ 818 + if (IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS) && is_atomic) \ 819 + return; \ 820 + check_access(ptr, size, \ 821 + KCSAN_ACCESS_WRITE | \ 822 + (is_atomic ? KCSAN_ACCESS_ATOMIC : 0)); \ 823 + } \ 824 + EXPORT_SYMBOL(__tsan_volatile_write##size); \ 825 + void __tsan_unaligned_volatile_write##size(void *ptr) \ 826 + __alias(__tsan_volatile_write##size); \ 827 + EXPORT_SYMBOL(__tsan_unaligned_volatile_write##size) 828 + 829 + DEFINE_TSAN_VOLATILE_READ_WRITE(1); 830 + DEFINE_TSAN_VOLATILE_READ_WRITE(2); 831 + DEFINE_TSAN_VOLATILE_READ_WRITE(4); 832 + DEFINE_TSAN_VOLATILE_READ_WRITE(8); 833 + DEFINE_TSAN_VOLATILE_READ_WRITE(16); 834 + 835 + /* 836 + * The below are not required by KCSAN, but can still be emitted by the 837 + * compiler. 838 + */ 839 + void __tsan_func_entry(void *call_pc) 840 + { 841 + } 842 + EXPORT_SYMBOL(__tsan_func_entry); 843 + void __tsan_func_exit(void) 844 + { 845 + } 846 + EXPORT_SYMBOL(__tsan_func_exit); 847 + void __tsan_init(void) 848 + { 849 + } 850 + EXPORT_SYMBOL(__tsan_init);
+349
kernel/kcsan/debugfs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/atomic.h> 4 + #include <linux/bsearch.h> 5 + #include <linux/bug.h> 6 + #include <linux/debugfs.h> 7 + #include <linux/init.h> 8 + #include <linux/kallsyms.h> 9 + #include <linux/sched.h> 10 + #include <linux/seq_file.h> 11 + #include <linux/slab.h> 12 + #include <linux/sort.h> 13 + #include <linux/string.h> 14 + #include <linux/uaccess.h> 15 + 16 + #include "kcsan.h" 17 + 18 + /* 19 + * Statistics counters. 20 + */ 21 + static atomic_long_t counters[KCSAN_COUNTER_COUNT]; 22 + 23 + /* 24 + * Addresses for filtering functions from reporting. This list can be used as a 25 + * whitelist or blacklist. 26 + */ 27 + static struct { 28 + unsigned long *addrs; /* array of addresses */ 29 + size_t size; /* current size */ 30 + int used; /* number of elements used */ 31 + bool sorted; /* if elements are sorted */ 32 + bool whitelist; /* if list is a blacklist or whitelist */ 33 + } report_filterlist = { 34 + .addrs = NULL, 35 + .size = 8, /* small initial size */ 36 + .used = 0, 37 + .sorted = false, 38 + .whitelist = false, /* default is blacklist */ 39 + }; 40 + static DEFINE_SPINLOCK(report_filterlist_lock); 41 + 42 + static const char *counter_to_name(enum kcsan_counter_id id) 43 + { 44 + switch (id) { 45 + case KCSAN_COUNTER_USED_WATCHPOINTS: return "used_watchpoints"; 46 + case KCSAN_COUNTER_SETUP_WATCHPOINTS: return "setup_watchpoints"; 47 + case KCSAN_COUNTER_DATA_RACES: return "data_races"; 48 + case KCSAN_COUNTER_ASSERT_FAILURES: return "assert_failures"; 49 + case KCSAN_COUNTER_NO_CAPACITY: return "no_capacity"; 50 + case KCSAN_COUNTER_REPORT_RACES: return "report_races"; 51 + case KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN: return "races_unknown_origin"; 52 + case KCSAN_COUNTER_UNENCODABLE_ACCESSES: return "unencodable_accesses"; 53 + case KCSAN_COUNTER_ENCODING_FALSE_POSITIVES: return "encoding_false_positives"; 54 + case KCSAN_COUNTER_COUNT: 55 + BUG(); 56 + } 57 + return NULL; 58 + } 59 + 60 + void 
kcsan_counter_inc(enum kcsan_counter_id id) 61 + { 62 + atomic_long_inc(&counters[id]); 63 + } 64 + 65 + void kcsan_counter_dec(enum kcsan_counter_id id) 66 + { 67 + atomic_long_dec(&counters[id]); 68 + } 69 + 70 + /* 71 + * The microbenchmark allows benchmarking KCSAN core runtime only. To run 72 + * multiple threads, pipe 'microbench=<iters>' from multiple tasks into the 73 + * debugfs file. This will not generate any conflicts, and tests fast-path only. 74 + */ 75 + static noinline void microbenchmark(unsigned long iters) 76 + { 77 + const struct kcsan_ctx ctx_save = current->kcsan_ctx; 78 + const bool was_enabled = READ_ONCE(kcsan_enabled); 79 + cycles_t cycles; 80 + 81 + /* We may have been called from an atomic region; reset context. */ 82 + memset(&current->kcsan_ctx, 0, sizeof(current->kcsan_ctx)); 83 + /* 84 + * Disable to benchmark fast-path for all accesses, and (expected 85 + * negligible) call into slow-path, but never set up watchpoints. 86 + */ 87 + WRITE_ONCE(kcsan_enabled, false); 88 + 89 + pr_info("KCSAN: %s begin | iters: %lu\n", __func__, iters); 90 + 91 + cycles = get_cycles(); 92 + while (iters--) { 93 + unsigned long addr = iters & ((PAGE_SIZE << 8) - 1); 94 + int type = !(iters & 0x7f) ? KCSAN_ACCESS_ATOMIC : 95 + (!(iters & 0xf) ? KCSAN_ACCESS_WRITE : 0); 96 + __kcsan_check_access((void *)addr, sizeof(long), type); 97 + } 98 + cycles = get_cycles() - cycles; 99 + 100 + pr_info("KCSAN: %s end | cycles: %llu\n", __func__, cycles); 101 + 102 + WRITE_ONCE(kcsan_enabled, was_enabled); 103 + /* restore context */ 104 + current->kcsan_ctx = ctx_save; 105 + } 106 + 107 + /* 108 + * Simple test to create conflicting accesses. Write 'test=<iters>' to KCSAN's 109 + * debugfs file from multiple tasks to generate real conflicts and show reports. 
110 + */ 111 + static long test_dummy; 112 + static long test_flags; 113 + static long test_scoped; 114 + static noinline void test_thread(unsigned long iters) 115 + { 116 + const long CHANGE_BITS = 0xff00ff00ff00ff00L; 117 + const struct kcsan_ctx ctx_save = current->kcsan_ctx; 118 + cycles_t cycles; 119 + 120 + /* We may have been called from an atomic region; reset context. */ 121 + memset(&current->kcsan_ctx, 0, sizeof(current->kcsan_ctx)); 122 + 123 + pr_info("KCSAN: %s begin | iters: %lu\n", __func__, iters); 124 + pr_info("test_dummy@%px, test_flags@%px, test_scoped@%px,\n", 125 + &test_dummy, &test_flags, &test_scoped); 126 + 127 + cycles = get_cycles(); 128 + while (iters--) { 129 + /* These all should generate reports. */ 130 + __kcsan_check_read(&test_dummy, sizeof(test_dummy)); 131 + ASSERT_EXCLUSIVE_WRITER(test_dummy); 132 + ASSERT_EXCLUSIVE_ACCESS(test_dummy); 133 + 134 + ASSERT_EXCLUSIVE_BITS(test_flags, ~CHANGE_BITS); /* no report */ 135 + __kcsan_check_read(&test_flags, sizeof(test_flags)); /* no report */ 136 + 137 + ASSERT_EXCLUSIVE_BITS(test_flags, CHANGE_BITS); /* report */ 138 + __kcsan_check_read(&test_flags, sizeof(test_flags)); /* no report */ 139 + 140 + /* not actually instrumented */ 141 + WRITE_ONCE(test_dummy, iters); /* to observe value-change */ 142 + __kcsan_check_write(&test_dummy, sizeof(test_dummy)); 143 + 144 + test_flags ^= CHANGE_BITS; /* generate value-change */ 145 + __kcsan_check_write(&test_flags, sizeof(test_flags)); 146 + 147 + BUG_ON(current->kcsan_ctx.scoped_accesses.prev); 148 + { 149 + /* Should generate reports anywhere in this block. */ 150 + ASSERT_EXCLUSIVE_WRITER_SCOPED(test_scoped); 151 + ASSERT_EXCLUSIVE_ACCESS_SCOPED(test_scoped); 152 + BUG_ON(!current->kcsan_ctx.scoped_accesses.prev); 153 + /* Unrelated accesses. 
*/ 154 + __kcsan_check_access(&cycles, sizeof(cycles), 0); 155 + __kcsan_check_access(&cycles, sizeof(cycles), KCSAN_ACCESS_ATOMIC); 156 + } 157 + BUG_ON(current->kcsan_ctx.scoped_accesses.prev); 158 + } 159 + cycles = get_cycles() - cycles; 160 + 161 + pr_info("KCSAN: %s end | cycles: %llu\n", __func__, cycles); 162 + 163 + /* restore context */ 164 + current->kcsan_ctx = ctx_save; 165 + } 166 + 167 + static int cmp_filterlist_addrs(const void *rhs, const void *lhs) 168 + { 169 + const unsigned long a = *(const unsigned long *)rhs; 170 + const unsigned long b = *(const unsigned long *)lhs; 171 + 172 + return a < b ? -1 : a == b ? 0 : 1; 173 + } 174 + 175 + bool kcsan_skip_report_debugfs(unsigned long func_addr) 176 + { 177 + unsigned long symbolsize, offset; 178 + unsigned long flags; 179 + bool ret = false; 180 + 181 + if (!kallsyms_lookup_size_offset(func_addr, &symbolsize, &offset)) 182 + return false; 183 + func_addr -= offset; /* Get function start */ 184 + 185 + spin_lock_irqsave(&report_filterlist_lock, flags); 186 + if (report_filterlist.used == 0) 187 + goto out; 188 + 189 + /* Sort array if it is unsorted, and then do a binary search. 
*/ 190 + if (!report_filterlist.sorted) { 191 + sort(report_filterlist.addrs, report_filterlist.used, 192 + sizeof(unsigned long), cmp_filterlist_addrs, NULL); 193 + report_filterlist.sorted = true; 194 + } 195 + ret = !!bsearch(&func_addr, report_filterlist.addrs, 196 + report_filterlist.used, sizeof(unsigned long), 197 + cmp_filterlist_addrs); 198 + if (report_filterlist.whitelist) 199 + ret = !ret; 200 + 201 + out: 202 + spin_unlock_irqrestore(&report_filterlist_lock, flags); 203 + return ret; 204 + } 205 + 206 + static void set_report_filterlist_whitelist(bool whitelist) 207 + { 208 + unsigned long flags; 209 + 210 + spin_lock_irqsave(&report_filterlist_lock, flags); 211 + report_filterlist.whitelist = whitelist; 212 + spin_unlock_irqrestore(&report_filterlist_lock, flags); 213 + } 214 + 215 + /* Returns 0 on success, error-code otherwise. */ 216 + static ssize_t insert_report_filterlist(const char *func) 217 + { 218 + unsigned long flags; 219 + unsigned long addr = kallsyms_lookup_name(func); 220 + ssize_t ret = 0; 221 + 222 + if (!addr) { 223 + pr_err("KCSAN: could not find function: '%s'\n", func); 224 + return -ENOENT; 225 + } 226 + 227 + spin_lock_irqsave(&report_filterlist_lock, flags); 228 + 229 + if (report_filterlist.addrs == NULL) { 230 + /* initial allocation */ 231 + report_filterlist.addrs = 232 + kmalloc_array(report_filterlist.size, 233 + sizeof(unsigned long), GFP_ATOMIC); 234 + if (report_filterlist.addrs == NULL) { 235 + ret = -ENOMEM; 236 + goto out; 237 + } 238 + } else if (report_filterlist.used == report_filterlist.size) { 239 + /* resize filterlist */ 240 + size_t new_size = report_filterlist.size * 2; 241 + unsigned long *new_addrs = 242 + krealloc(report_filterlist.addrs, 243 + new_size * sizeof(unsigned long), GFP_ATOMIC); 244 + 245 + if (new_addrs == NULL) { 246 + /* leave filterlist itself untouched */ 247 + ret = -ENOMEM; 248 + goto out; 249 + } 250 + 251 + report_filterlist.size = new_size; 252 + report_filterlist.addrs = 
new_addrs; 253 + } 254 + 255 + /* Note: deduplicating should be done in userspace. */ 256 + report_filterlist.addrs[report_filterlist.used++] = 257 + kallsyms_lookup_name(func); 258 + report_filterlist.sorted = false; 259 + 260 + out: 261 + spin_unlock_irqrestore(&report_filterlist_lock, flags); 262 + 263 + return ret; 264 + } 265 + 266 + static int show_info(struct seq_file *file, void *v) 267 + { 268 + int i; 269 + unsigned long flags; 270 + 271 + /* show stats */ 272 + seq_printf(file, "enabled: %i\n", READ_ONCE(kcsan_enabled)); 273 + for (i = 0; i < KCSAN_COUNTER_COUNT; ++i) 274 + seq_printf(file, "%s: %ld\n", counter_to_name(i), 275 + atomic_long_read(&counters[i])); 276 + 277 + /* show filter functions, and filter type */ 278 + spin_lock_irqsave(&report_filterlist_lock, flags); 279 + seq_printf(file, "\n%s functions: %s\n", 280 + report_filterlist.whitelist ? "whitelisted" : "blacklisted", 281 + report_filterlist.used == 0 ? "none" : ""); 282 + for (i = 0; i < report_filterlist.used; ++i) 283 + seq_printf(file, " %ps\n", (void *)report_filterlist.addrs[i]); 284 + spin_unlock_irqrestore(&report_filterlist_lock, flags); 285 + 286 + return 0; 287 + } 288 + 289 + static int debugfs_open(struct inode *inode, struct file *file) 290 + { 291 + return single_open(file, show_info, NULL); 292 + } 293 + 294 + static ssize_t 295 + debugfs_write(struct file *file, const char __user *buf, size_t count, loff_t *off) 296 + { 297 + char kbuf[KSYM_NAME_LEN]; 298 + char *arg; 299 + int read_len = count < (sizeof(kbuf) - 1) ? 
count : (sizeof(kbuf) - 1); 300 + 301 + if (copy_from_user(kbuf, buf, read_len)) 302 + return -EFAULT; 303 + kbuf[read_len] = '\0'; 304 + arg = strstrip(kbuf); 305 + 306 + if (!strcmp(arg, "on")) { 307 + WRITE_ONCE(kcsan_enabled, true); 308 + } else if (!strcmp(arg, "off")) { 309 + WRITE_ONCE(kcsan_enabled, false); 310 + } else if (!strncmp(arg, "microbench=", sizeof("microbench=") - 1)) { 311 + unsigned long iters; 312 + 313 + if (kstrtoul(&arg[sizeof("microbench=") - 1], 0, &iters)) 314 + return -EINVAL; 315 + microbenchmark(iters); 316 + } else if (!strncmp(arg, "test=", sizeof("test=") - 1)) { 317 + unsigned long iters; 318 + 319 + if (kstrtoul(&arg[sizeof("test=") - 1], 0, &iters)) 320 + return -EINVAL; 321 + test_thread(iters); 322 + } else if (!strcmp(arg, "whitelist")) { 323 + set_report_filterlist_whitelist(true); 324 + } else if (!strcmp(arg, "blacklist")) { 325 + set_report_filterlist_whitelist(false); 326 + } else if (arg[0] == '!') { 327 + ssize_t ret = insert_report_filterlist(&arg[1]); 328 + 329 + if (ret < 0) 330 + return ret; 331 + } else { 332 + return -EINVAL; 333 + } 334 + 335 + return count; 336 + } 337 + 338 + static const struct file_operations debugfs_ops = 339 + { 340 + .read = seq_read, 341 + .open = debugfs_open, 342 + .write = debugfs_write, 343 + .release = single_release 344 + }; 345 + 346 + void __init kcsan_debugfs_init(void) 347 + { 348 + debugfs_create_file("kcsan", 0644, NULL, NULL, &debugfs_ops); 349 + }
+95
kernel/kcsan/encoding.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef _KERNEL_KCSAN_ENCODING_H 4 + #define _KERNEL_KCSAN_ENCODING_H 5 + 6 + #include <linux/bits.h> 7 + #include <linux/log2.h> 8 + #include <linux/mm.h> 9 + 10 + #include "kcsan.h" 11 + 12 + #define SLOT_RANGE PAGE_SIZE 13 + 14 + #define INVALID_WATCHPOINT 0 15 + #define CONSUMED_WATCHPOINT 1 16 + 17 + /* 18 + * The maximum useful size of accesses for which we set up watchpoints is the 19 + * max range of slots we check on an access. 20 + */ 21 + #define MAX_ENCODABLE_SIZE (SLOT_RANGE * (1 + KCSAN_CHECK_ADJACENT)) 22 + 23 + /* 24 + * Number of bits we use to store size info. 25 + */ 26 + #define WATCHPOINT_SIZE_BITS bits_per(MAX_ENCODABLE_SIZE) 27 + /* 28 + * This encoding for addresses discards the upper (1 for is-write + SIZE_BITS) 29 + * bits; however, most 64-bit architectures do not use the full 64-bit address 30 + * space. Also, in order for a false positive to be observable 2 things need to happen: 31 + * 32 + * 1. different addresses but with the same encoded address race; 33 + * 2. and both map onto the same watchpoint slots; 34 + * 35 + * Both these are assumed to be very unlikely. However, in case it still 36 + * happens, the report logic will filter out the false positive (see report.c). 37 + */ 38 + #define WATCHPOINT_ADDR_BITS (BITS_PER_LONG-1 - WATCHPOINT_SIZE_BITS) 39 + 40 + /* 41 + * Masks to set/retrieve the encoded data. 42 + */ 43 + #define WATCHPOINT_WRITE_MASK BIT(BITS_PER_LONG-1) 44 + #define WATCHPOINT_SIZE_MASK \ 45 + GENMASK(BITS_PER_LONG-2, BITS_PER_LONG-2 - WATCHPOINT_SIZE_BITS) 46 + #define WATCHPOINT_ADDR_MASK \ 47 + GENMASK(BITS_PER_LONG-3 - WATCHPOINT_SIZE_BITS, 0) 48 + 49 + static inline bool check_encodable(unsigned long addr, size_t size) 50 + { 51 + return size <= MAX_ENCODABLE_SIZE; 52 + } 53 + 54 + static inline long 55 + encode_watchpoint(unsigned long addr, size_t size, bool is_write) 56 + { 57 + return (long)((is_write ?
WATCHPOINT_WRITE_MASK : 0) | 58 + (size << WATCHPOINT_ADDR_BITS) | 59 + (addr & WATCHPOINT_ADDR_MASK)); 60 + } 61 + 62 + static __always_inline bool decode_watchpoint(long watchpoint, 63 + unsigned long *addr_masked, 64 + size_t *size, 65 + bool *is_write) 66 + { 67 + if (watchpoint == INVALID_WATCHPOINT || 68 + watchpoint == CONSUMED_WATCHPOINT) 69 + return false; 70 + 71 + *addr_masked = (unsigned long)watchpoint & WATCHPOINT_ADDR_MASK; 72 + *size = ((unsigned long)watchpoint & WATCHPOINT_SIZE_MASK) >> WATCHPOINT_ADDR_BITS; 73 + *is_write = !!((unsigned long)watchpoint & WATCHPOINT_WRITE_MASK); 74 + 75 + return true; 76 + } 77 + 78 + /* 79 + * Return watchpoint slot for an address. 80 + */ 81 + static __always_inline int watchpoint_slot(unsigned long addr) 82 + { 83 + return (addr / PAGE_SIZE) % CONFIG_KCSAN_NUM_WATCHPOINTS; 84 + } 85 + 86 + static __always_inline bool matching_access(unsigned long addr1, size_t size1, 87 + unsigned long addr2, size_t size2) 88 + { 89 + unsigned long end_range1 = addr1 + size1 - 1; 90 + unsigned long end_range2 = addr2 + size2 - 1; 91 + 92 + return addr1 <= end_range2 && addr2 <= end_range1; 93 + } 94 + 95 + #endif /* _KERNEL_KCSAN_ENCODING_H */
+142
kernel/kcsan/kcsan.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + /* 4 + * The Kernel Concurrency Sanitizer (KCSAN) infrastructure. For more info please 5 + * see Documentation/dev-tools/kcsan.rst. 6 + */ 7 + 8 + #ifndef _KERNEL_KCSAN_KCSAN_H 9 + #define _KERNEL_KCSAN_KCSAN_H 10 + 11 + #include <linux/kcsan.h> 12 + 13 + /* The number of adjacent watchpoints to check. */ 14 + #define KCSAN_CHECK_ADJACENT 1 15 + #define NUM_SLOTS (1 + 2*KCSAN_CHECK_ADJACENT) 16 + 17 + extern unsigned int kcsan_udelay_task; 18 + extern unsigned int kcsan_udelay_interrupt; 19 + 20 + /* 21 + * Globally enable and disable KCSAN. 22 + */ 23 + extern bool kcsan_enabled; 24 + 25 + /* 26 + * Initialize debugfs file. 27 + */ 28 + void kcsan_debugfs_init(void); 29 + 30 + enum kcsan_counter_id { 31 + /* 32 + * Number of watchpoints currently in use. 33 + */ 34 + KCSAN_COUNTER_USED_WATCHPOINTS, 35 + 36 + /* 37 + * Total number of watchpoints set up. 38 + */ 39 + KCSAN_COUNTER_SETUP_WATCHPOINTS, 40 + 41 + /* 42 + * Total number of data races. 43 + */ 44 + KCSAN_COUNTER_DATA_RACES, 45 + 46 + /* 47 + * Total number of ASSERT failures due to races. If the observed race is 48 + * due to two conflicting ASSERT type accesses, then both will be 49 + * counted. 50 + */ 51 + KCSAN_COUNTER_ASSERT_FAILURES, 52 + 53 + /* 54 + * Number of times no watchpoints were available. 55 + */ 56 + KCSAN_COUNTER_NO_CAPACITY, 57 + 58 + /* 59 + * A thread checking a watchpoint raced with another checking thread; 60 + * only one will be reported. 61 + */ 62 + KCSAN_COUNTER_REPORT_RACES, 63 + 64 + /* 65 + * Observed data value change, but writer thread unknown. 66 + */ 67 + KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN, 68 + 69 + /* 70 + * The access cannot be encoded to a valid watchpoint. 71 + */ 72 + KCSAN_COUNTER_UNENCODABLE_ACCESSES, 73 + 74 + /* 75 + * Watchpoint encoding caused a watchpoint to fire on mismatching 76 + * accesses. 
77 + */ 78 + KCSAN_COUNTER_ENCODING_FALSE_POSITIVES, 79 + 80 + KCSAN_COUNTER_COUNT, /* number of counters */ 81 + }; 82 + 83 + /* 84 + * Increment/decrement counter with given id; avoid calling these in fast-path. 85 + */ 86 + extern void kcsan_counter_inc(enum kcsan_counter_id id); 87 + extern void kcsan_counter_dec(enum kcsan_counter_id id); 88 + 89 + /* 90 + * Returns true if data races in the function symbol that maps to func_addr 91 + * (offsets are ignored) should *not* be reported. 92 + */ 93 + extern bool kcsan_skip_report_debugfs(unsigned long func_addr); 94 + 95 + /* 96 + * Value-change states. 97 + */ 98 + enum kcsan_value_change { 99 + /* 100 + * Did not observe a value-change, however, it is valid to report the 101 + * race, depending on preferences. 102 + */ 103 + KCSAN_VALUE_CHANGE_MAYBE, 104 + 105 + /* 106 + * Did not observe a value-change, and it is invalid to report the race. 107 + */ 108 + KCSAN_VALUE_CHANGE_FALSE, 109 + 110 + /* 111 + * The value was observed to change, and the race should be reported. 112 + */ 113 + KCSAN_VALUE_CHANGE_TRUE, 114 + }; 115 + 116 + enum kcsan_report_type { 117 + /* 118 + * The thread that set up the watchpoint and briefly stalled was 119 + * signalled that another thread triggered the watchpoint. 120 + */ 121 + KCSAN_REPORT_RACE_SIGNAL, 122 + 123 + /* 124 + * A thread found and consumed a matching watchpoint. 125 + */ 126 + KCSAN_REPORT_CONSUMED_WATCHPOINT, 127 + 128 + /* 129 + * No other thread was observed to race with the access, but the data 130 + * value before and after the stall differs. 131 + */ 132 + KCSAN_REPORT_RACE_UNKNOWN_ORIGIN, 133 + }; 134 + 135 + /* 136 + * Print a race report from thread that encountered the race. 137 + */ 138 + extern void kcsan_report(const volatile void *ptr, size_t size, int access_type, 139 + enum kcsan_value_change value_change, 140 + enum kcsan_report_type type, int watchpoint_idx); 141 + 142 + #endif /* _KERNEL_KCSAN_KCSAN_H */
+634
kernel/kcsan/report.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/debug_locks.h> 4 + #include <linux/delay.h> 5 + #include <linux/jiffies.h> 6 + #include <linux/kernel.h> 7 + #include <linux/lockdep.h> 8 + #include <linux/preempt.h> 9 + #include <linux/printk.h> 10 + #include <linux/sched.h> 11 + #include <linux/spinlock.h> 12 + #include <linux/stacktrace.h> 13 + 14 + #include "kcsan.h" 15 + #include "encoding.h" 16 + 17 + /* 18 + * Max. number of stack entries to show in the report. 19 + */ 20 + #define NUM_STACK_ENTRIES 64 21 + 22 + /* Common access info. */ 23 + struct access_info { 24 + const volatile void *ptr; 25 + size_t size; 26 + int access_type; 27 + int task_pid; 28 + int cpu_id; 29 + }; 30 + 31 + /* 32 + * Other thread info: communicated from other racing thread to thread that set 33 + * up the watchpoint, which then prints the complete report atomically. 34 + */ 35 + struct other_info { 36 + struct access_info ai; 37 + unsigned long stack_entries[NUM_STACK_ENTRIES]; 38 + int num_stack_entries; 39 + 40 + /* 41 + * Optionally pass @current. Typically we do not need to pass @current 42 + * via @other_info since just @task_pid is sufficient. Passing @current 43 + * has additional overhead. 44 + * 45 + * To safely pass @current, we must either use get_task_struct/ 46 + * put_task_struct, or stall the thread that populated @other_info. 47 + * 48 + * We cannot rely on get_task_struct/put_task_struct in case 49 + * release_report() races with a task being released, and would have to 50 + * free it in release_report(). This may result in deadlock if we want 51 + * to use KCSAN on the allocators. 52 + * 53 + * Since we also want to reliably print held locks for 54 + * CONFIG_KCSAN_VERBOSE, the current implementation stalls the thread 55 + * that populated @other_info until it has been consumed. 
+ */
+	struct task_struct *task;
+};
+
+/*
+ * To never block any producers of struct other_info, we need as many elements
+ * as we have watchpoints (upper bound on concurrent races to report).
+ */
+static struct other_info other_infos[CONFIG_KCSAN_NUM_WATCHPOINTS + NUM_SLOTS-1];
+
+/*
+ * Information about reported races; used to rate limit reporting.
+ */
+struct report_time {
+	/*
+	 * The last time the race was reported.
+	 */
+	unsigned long time;
+
+	/*
+	 * The frames of the 2 threads; if only 1 thread is known, one frame
+	 * will be 0.
+	 */
+	unsigned long frame1;
+	unsigned long frame2;
+};
+
+/*
+ * Since we also want to be able to debug allocators with KCSAN, to avoid
+ * deadlock, report_times cannot be dynamically resized with krealloc in
+ * rate_limit_report.
+ *
+ * Therefore, we use a fixed-size array, which at most will occupy a page. This
+ * still adequately rate limits reports, assuming that a) number of unique data
+ * races is not excessive, and b) occurrence of unique races within the
+ * same time window is limited.
+ */
+#define REPORT_TIMES_MAX	(PAGE_SIZE / sizeof(struct report_time))
+#define REPORT_TIMES_SIZE						\
+	(CONFIG_KCSAN_REPORT_ONCE_IN_MS > REPORT_TIMES_MAX ?		\
+		 REPORT_TIMES_MAX :					\
+		 CONFIG_KCSAN_REPORT_ONCE_IN_MS)
+static struct report_time report_times[REPORT_TIMES_SIZE];
+
+/*
+ * Spinlock serializing report generation, and access to @other_infos. Although
+ * it could make sense to have a finer-grained locking story for @other_infos,
+ * report generation needs to be serialized either way, so not much is gained.
+ */
+static DEFINE_RAW_SPINLOCK(report_lock);
+
+/*
+ * Checks if the race identified by thread frames frame1 and frame2 has
+ * been reported since (now - KCSAN_REPORT_ONCE_IN_MS).
+ */
+static bool rate_limit_report(unsigned long frame1, unsigned long frame2)
+{
+	struct report_time *use_entry = &report_times[0];
+	unsigned long invalid_before;
+	int i;
+
+	BUILD_BUG_ON(CONFIG_KCSAN_REPORT_ONCE_IN_MS != 0 && REPORT_TIMES_SIZE == 0);
+
+	if (CONFIG_KCSAN_REPORT_ONCE_IN_MS == 0)
+		return false;
+
+	invalid_before = jiffies - msecs_to_jiffies(CONFIG_KCSAN_REPORT_ONCE_IN_MS);
+
+	/* Check if a matching race report exists. */
+	for (i = 0; i < REPORT_TIMES_SIZE; ++i) {
+		struct report_time *rt = &report_times[i];
+
+		/*
+		 * Must always select an entry for use to store info as we
+		 * cannot resize report_times; at the end of the scan, use_entry
+		 * will be the oldest entry, which ideally also happened before
+		 * KCSAN_REPORT_ONCE_IN_MS ago.
+		 */
+		if (time_before(rt->time, use_entry->time))
+			use_entry = rt;
+
+		/*
+		 * Initially, no need to check any further as this entry as well
+		 * as following entries have never been used.
+		 */
+		if (rt->time == 0)
+			break;
+
+		/* Check if entry expired. */
+		if (time_before(rt->time, invalid_before))
+			continue; /* before KCSAN_REPORT_ONCE_IN_MS ago */
+
+		/* Reported recently, check if race matches. */
+		if ((rt->frame1 == frame1 && rt->frame2 == frame2) ||
+		    (rt->frame1 == frame2 && rt->frame2 == frame1))
+			return true;
+	}
+
+	use_entry->time = jiffies;
+	use_entry->frame1 = frame1;
+	use_entry->frame2 = frame2;
+	return false;
+}
+
+/*
+ * Special rules to skip reporting.
+ */
+static bool
+skip_report(enum kcsan_value_change value_change, unsigned long top_frame)
+{
+	/* Should never get here if value_change==FALSE. */
+	WARN_ON_ONCE(value_change == KCSAN_VALUE_CHANGE_FALSE);
+
+	/*
+	 * The first call to skip_report always has value_change==TRUE, since we
+	 * cannot know the value written of an instrumented access. For the 2nd
+	 * call there are 6 cases with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY:
+	 *
+	 *   1. read watchpoint, conflicting write (value_change==TRUE): report;
+	 *   2. read watchpoint, conflicting write (value_change==MAYBE): skip;
+	 *   3. write watchpoint, conflicting write (value_change==TRUE): report;
+	 *   4. write watchpoint, conflicting write (value_change==MAYBE): skip;
+	 *   5. write watchpoint, conflicting read (value_change==MAYBE): skip;
+	 *   6. write watchpoint, conflicting read (value_change==TRUE): report;
+	 *
+	 * Cases 1-4 are intuitive and expected; case 5 ensures we do not report
+	 * data races where the write may have rewritten the same value; case 6
+	 * is possible either if the size is larger than what we check value
+	 * changes for or the access type is KCSAN_ACCESS_ASSERT.
+	 */
+	if (IS_ENABLED(CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY) &&
+	    value_change == KCSAN_VALUE_CHANGE_MAYBE) {
+		/*
+		 * The access is a write, but the data value did not change.
+		 *
+		 * We opt out of this filter for certain functions at request of
+		 * maintainers.
+		 */
+		char buf[64];
+		int len = scnprintf(buf, sizeof(buf), "%ps", (void *)top_frame);
+
+		if (!strnstr(buf, "rcu_", len) &&
+		    !strnstr(buf, "_rcu", len) &&
+		    !strnstr(buf, "_srcu", len))
+			return true;
+	}
+
+	return kcsan_skip_report_debugfs(top_frame);
+}
+
+static const char *get_access_type(int type)
+{
+	if (type & KCSAN_ACCESS_ASSERT) {
+		if (type & KCSAN_ACCESS_SCOPED) {
+			if (type & KCSAN_ACCESS_WRITE)
+				return "assert no accesses (scoped)";
+			else
+				return "assert no writes (scoped)";
+		} else {
+			if (type & KCSAN_ACCESS_WRITE)
+				return "assert no accesses";
+			else
+				return "assert no writes";
+		}
+	}
+
+	switch (type) {
+	case 0:
+		return "read";
+	case KCSAN_ACCESS_ATOMIC:
+		return "read (marked)";
+	case KCSAN_ACCESS_WRITE:
+		return "write";
+	case KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC:
+		return "write (marked)";
+	case KCSAN_ACCESS_SCOPED:
+		return "read (scoped)";
+	case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_ATOMIC:
+		return "read (marked, scoped)";
+	case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE:
+		return "write (scoped)";
+	case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC:
+		return "write (marked, scoped)";
+	default:
+		BUG();
+	}
+}
+
+static const char *get_bug_type(int type)
+{
+	return (type & KCSAN_ACCESS_ASSERT) != 0 ? "assert: race" : "data-race";
+}
+
+/* Return thread description: in task or interrupt. */
+static const char *get_thread_desc(int task_id)
+{
+	if (task_id != -1) {
+		static char buf[32]; /* safe: protected by report_lock */
+
+		snprintf(buf, sizeof(buf), "task %i", task_id);
+		return buf;
+	}
+	return "interrupt";
+}
+
+/* Helper to skip KCSAN-related functions in stack-trace. */
+static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries)
+{
+	char buf[64];
+	char *cur;
+	int len, skip;
+
+	for (skip = 0; skip < num_entries; ++skip) {
+		len = scnprintf(buf, sizeof(buf), "%ps", (void *)stack_entries[skip]);
+
+		/* Never show tsan_* or {read,write}_once_size. */
+		if (strnstr(buf, "tsan_", len) ||
+		    strnstr(buf, "_once_size", len))
+			continue;
+
+		cur = strnstr(buf, "kcsan_", len);
+		if (cur) {
+			cur += sizeof("kcsan_") - 1;
+			if (strncmp(cur, "test", sizeof("test") - 1))
+				continue; /* KCSAN runtime function. */
+			/* KCSAN related test. */
+		}
+
+		/*
+		 * No match for runtime functions -- @skip entries to skip to
+		 * get to first frame of interest.
+		 */
+		break;
+	}
+
+	return skip;
+}
+
+/* Compares symbolized strings of addr1 and addr2. */
+static int sym_strcmp(void *addr1, void *addr2)
+{
+	char buf1[64];
+	char buf2[64];
+
+	snprintf(buf1, sizeof(buf1), "%pS", addr1);
+	snprintf(buf2, sizeof(buf2), "%pS", addr2);
+
+	return strncmp(buf1, buf2, sizeof(buf1));
+}
+
+static void print_verbose_info(struct task_struct *task)
+{
+	if (!task)
+		return;
+
+	pr_err("\n");
+	debug_show_held_locks(task);
+	print_irqtrace_events(task);
+}
+
+/*
+ * Returns true if a report was generated, false otherwise.
+ */
+static bool print_report(enum kcsan_value_change value_change,
+			 enum kcsan_report_type type,
+			 const struct access_info *ai,
+			 const struct other_info *other_info)
+{
+	unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 };
+	int num_stack_entries = stack_trace_save(stack_entries, NUM_STACK_ENTRIES, 1);
+	int skipnr = get_stack_skipnr(stack_entries, num_stack_entries);
+	unsigned long this_frame = stack_entries[skipnr];
+	unsigned long other_frame = 0;
+	int other_skipnr = 0; /* silence uninit warnings */
+
+	/*
+	 * Must check report filter rules before starting to print.
+	 */
+	if (skip_report(KCSAN_VALUE_CHANGE_TRUE, stack_entries[skipnr]))
+		return false;
+
+	if (type == KCSAN_REPORT_RACE_SIGNAL) {
+		other_skipnr = get_stack_skipnr(other_info->stack_entries,
+						other_info->num_stack_entries);
+		other_frame = other_info->stack_entries[other_skipnr];
+
+		/* @value_change is only known for the other thread */
+		if (skip_report(value_change, other_frame))
+			return false;
+	}
+
+	if (rate_limit_report(this_frame, other_frame))
+		return false;
+
+	/* Print report header. */
+	pr_err("==================================================================\n");
+	switch (type) {
+	case KCSAN_REPORT_RACE_SIGNAL: {
+		int cmp;
+
+		/*
+		 * Order functions lexicographically for consistent bug titles.
+		 * Do not print offset of functions to keep title short.
+		 */
+		cmp = sym_strcmp((void *)other_frame, (void *)this_frame);
+		pr_err("BUG: KCSAN: %s in %ps / %ps\n",
+		       get_bug_type(ai->access_type | other_info->ai.access_type),
+		       (void *)(cmp < 0 ? other_frame : this_frame),
+		       (void *)(cmp < 0 ? this_frame : other_frame));
+	} break;
+
+	case KCSAN_REPORT_RACE_UNKNOWN_ORIGIN:
+		pr_err("BUG: KCSAN: %s in %pS\n", get_bug_type(ai->access_type),
+		       (void *)this_frame);
+		break;
+
+	default:
+		BUG();
+	}
+
+	pr_err("\n");
+
+	/* Print information about the racing accesses. */
+	switch (type) {
+	case KCSAN_REPORT_RACE_SIGNAL:
+		pr_err("%s to 0x%px of %zu bytes by %s on cpu %i:\n",
+		       get_access_type(other_info->ai.access_type), other_info->ai.ptr,
+		       other_info->ai.size, get_thread_desc(other_info->ai.task_pid),
+		       other_info->ai.cpu_id);
+
+		/* Print the other thread's stack trace. */
+		stack_trace_print(other_info->stack_entries + other_skipnr,
+				  other_info->num_stack_entries - other_skipnr,
+				  0);
+
+		if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
+			print_verbose_info(other_info->task);
+
+		pr_err("\n");
+		pr_err("%s to 0x%px of %zu bytes by %s on cpu %i:\n",
+		       get_access_type(ai->access_type), ai->ptr, ai->size,
+		       get_thread_desc(ai->task_pid), ai->cpu_id);
+		break;
+
+	case KCSAN_REPORT_RACE_UNKNOWN_ORIGIN:
+		pr_err("race at unknown origin, with %s to 0x%px of %zu bytes by %s on cpu %i:\n",
+		       get_access_type(ai->access_type), ai->ptr, ai->size,
+		       get_thread_desc(ai->task_pid), ai->cpu_id);
+		break;
+
+	default:
+		BUG();
+	}
+	/* Print stack trace of this thread. */
+	stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr,
+			  0);
+
+	if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
+		print_verbose_info(current);
+
+	/* Print report footer. */
+	pr_err("\n");
+	pr_err("Reported by Kernel Concurrency Sanitizer on:\n");
+	dump_stack_print_info(KERN_DEFAULT);
+	pr_err("==================================================================\n");
+
+	return true;
+}
+
+static void release_report(unsigned long *flags, struct other_info *other_info)
+{
+	if (other_info)
+		/*
+		 * Use size to denote valid/invalid, since KCSAN entirely
+		 * ignores 0-sized accesses.
+		 */
+		other_info->ai.size = 0;
+
+	raw_spin_unlock_irqrestore(&report_lock, *flags);
+}
+
+/*
+ * Sets @other_info->task and awaits consumption of @other_info.
+ *
+ * Precondition: report_lock is held.
+ * Postcondition: report_lock is held.
+ */
+static void set_other_info_task_blocking(unsigned long *flags,
+					 const struct access_info *ai,
+					 struct other_info *other_info)
+{
+	/*
+	 * We may be instrumenting a code-path where current->state is already
+	 * something other than TASK_RUNNING.
+	 */
+	const bool is_running = current->state == TASK_RUNNING;
+	/*
+	 * To avoid deadlock in case we are in an interrupt here and this is a
+	 * race with a task on the same CPU (KCSAN_INTERRUPT_WATCHER), provide a
+	 * timeout to ensure this works in all contexts.
+	 *
+	 * Await approximately the worst case delay of the reporting thread (if
+	 * we are not interrupted).
+	 */
+	int timeout = max(kcsan_udelay_task, kcsan_udelay_interrupt);
+
+	other_info->task = current;
+	do {
+		if (is_running) {
+			/*
+			 * Let lockdep know the real task is sleeping, to print
+			 * the held locks (recall we turned lockdep off, so
+			 * locking/unlocking @report_lock won't be recorded).
+			 */
+			set_current_state(TASK_UNINTERRUPTIBLE);
+		}
+		raw_spin_unlock_irqrestore(&report_lock, *flags);
+		/*
+		 * We cannot call schedule() since we also cannot reliably
+		 * determine if sleeping here is permitted -- see in_atomic().
+		 */
+
+		udelay(1);
+		raw_spin_lock_irqsave(&report_lock, *flags);
+		if (timeout-- < 0) {
+			/*
+			 * Abort. Reset @other_info->task to NULL, since it
+			 * appears the other thread is still going to consume
+			 * it. It will result in no verbose info printed for
+			 * this task.
+			 */
+			other_info->task = NULL;
+			break;
+		}
+		/*
+		 * If invalid, or if @ptr or @current no longer matches, then
+		 * @other_info has been consumed and we may continue. If not,
+		 * retry.
+		 */
+	} while (other_info->ai.size && other_info->ai.ptr == ai->ptr &&
+		 other_info->task == current);
+	if (is_running)
+		set_current_state(TASK_RUNNING);
+}
+
+/* Populate @other_info; requires that the provided @other_info is not in use. */
+static void prepare_report_producer(unsigned long *flags,
+				    const struct access_info *ai,
+				    struct other_info *other_info)
+{
+	raw_spin_lock_irqsave(&report_lock, *flags);
+
+	/*
+	 * The same @other_infos entry cannot be used concurrently, because
+	 * there is a one-to-one mapping to watchpoint slots (@watchpoints in
+	 * core.c), and a watchpoint is only released for reuse after reporting
+	 * is done by the consumer of @other_info. Therefore, it is impossible
+	 * for another concurrent prepare_report_producer() to set the same
+	 * @other_info, and we are guaranteed exclusivity for the @other_infos
+	 * entry pointed to by @other_info.
+	 *
+	 * To check this property holds, size should never be non-zero here,
+	 * because every consumer of struct other_info resets size to 0 in
+	 * release_report().
+	 */
+	WARN_ON(other_info->ai.size);
+
+	other_info->ai = *ai;
+	other_info->num_stack_entries = stack_trace_save(other_info->stack_entries, NUM_STACK_ENTRIES, 2);
+
+	if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
+		set_other_info_task_blocking(flags, ai, other_info);
+
+	raw_spin_unlock_irqrestore(&report_lock, *flags);
+}
+
+/* Awaits producer to fill @other_info and then returns. */
+static bool prepare_report_consumer(unsigned long *flags,
+				    const struct access_info *ai,
+				    struct other_info *other_info)
+{
+	raw_spin_lock_irqsave(&report_lock, *flags);
+	while (!other_info->ai.size) { /* Await valid @other_info. */
+		raw_spin_unlock_irqrestore(&report_lock, *flags);
+		cpu_relax();
+		raw_spin_lock_irqsave(&report_lock, *flags);
+	}
+
+	/* Should always have a matching access based on watchpoint encoding. */
+	if (WARN_ON(!matching_access((unsigned long)other_info->ai.ptr & WATCHPOINT_ADDR_MASK, other_info->ai.size,
+				     (unsigned long)ai->ptr & WATCHPOINT_ADDR_MASK, ai->size)))
+		goto discard;
+
+	if (!matching_access((unsigned long)other_info->ai.ptr, other_info->ai.size,
+			     (unsigned long)ai->ptr, ai->size)) {
+		/*
+		 * If the actual accesses do not match, this was a false
+		 * positive due to watchpoint encoding.
+		 */
+		kcsan_counter_inc(KCSAN_COUNTER_ENCODING_FALSE_POSITIVES);
+		goto discard;
+	}
+
+	return true;
+
+discard:
+	release_report(flags, other_info);
+	return false;
+}
+
+/*
+ * Depending on the report type either sets @other_info and returns false, or
+ * awaits @other_info and returns true. If @other_info is not required for the
+ * report type, simply acquires @report_lock and returns true.
+ */
+static noinline bool prepare_report(unsigned long *flags,
+				    enum kcsan_report_type type,
+				    const struct access_info *ai,
+				    struct other_info *other_info)
+{
+	switch (type) {
+	case KCSAN_REPORT_CONSUMED_WATCHPOINT:
+		prepare_report_producer(flags, ai, other_info);
+		return false;
+	case KCSAN_REPORT_RACE_SIGNAL:
+		return prepare_report_consumer(flags, ai, other_info);
+	default:
+		/* @other_info not required; just acquire @report_lock. */
+		raw_spin_lock_irqsave(&report_lock, *flags);
+		return true;
+	}
+}
+
+void kcsan_report(const volatile void *ptr, size_t size, int access_type,
+		  enum kcsan_value_change value_change,
+		  enum kcsan_report_type type, int watchpoint_idx)
+{
+	unsigned long flags = 0;
+	const struct access_info ai = {
+		.ptr		= ptr,
+		.size		= size,
+		.access_type	= access_type,
+		.task_pid	= in_task() ? task_pid_nr(current) : -1,
+		.cpu_id		= raw_smp_processor_id()
+	};
+	struct other_info *other_info = type == KCSAN_REPORT_RACE_UNKNOWN_ORIGIN
+					? NULL : &other_infos[watchpoint_idx];
+
+	kcsan_disable_current();
+	if (WARN_ON(watchpoint_idx < 0 || watchpoint_idx >= ARRAY_SIZE(other_infos)))
+		goto out;
+
+	/*
+	 * With TRACE_IRQFLAGS, lockdep's IRQ trace state becomes corrupted if
+	 * we do not turn off lockdep here; this could happen due to recursion
+	 * into lockdep via KCSAN if we detect a race in utilities used by
+	 * lockdep.
+	 */
+	lockdep_off();
+
+	if (prepare_report(&flags, type, &ai, other_info)) {
+		/*
+		 * Never report if value_change is FALSE, only if it is
+		 * either TRUE or MAYBE. In case of MAYBE, further filtering may
+		 * be done once we know the full stack trace in print_report().
+		 */
+		bool reported = value_change != KCSAN_VALUE_CHANGE_FALSE &&
+				print_report(value_change, type, &ai, other_info);
+
+		if (reported && panic_on_warn)
+			panic("panic_on_warn set ...\n");
+
+		release_report(&flags, other_info);
+	}
+
+	lockdep_on();
+out:
+	kcsan_enable_current();
+}
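The fixed-size rate-limiting table in rate_limit_report() above can be exercised outside the kernel. The sketch below is a user-space simplification: it replaces jiffies with a plain monotonic tick passed by the caller, omits the wraparound-safe time_before() comparison, and uses made-up WINDOW/TABLE_SIZE values; only the oldest-entry recycling and unordered frame-pair matching mirror the kernel logic.

```c
/*
 * User-space sketch of KCSAN's report rate limiting: a race (identified by an
 * unordered pair of frames) is reported at most once per WINDOW ticks. When no
 * slot matches, the oldest slot is recycled, so the table never grows.
 */
#include <assert.h>
#include <stdbool.h>

#define WINDOW 3000	/* illustrative stand-in for KCSAN_REPORT_ONCE_IN_MS */
#define TABLE_SIZE 8

struct report_time {
	unsigned long time;	/* 0 == slot never used */
	unsigned long frame1, frame2;
};

static struct report_time report_times[TABLE_SIZE];

/* Returns true if this frame pair was already reported within WINDOW. */
static bool rate_limit_report(unsigned long now,
			      unsigned long frame1, unsigned long frame2)
{
	struct report_time *use_entry = &report_times[0];
	unsigned long invalid_before = now > WINDOW ? now - WINDOW : 0;
	int i;

	for (i = 0; i < TABLE_SIZE; ++i) {
		struct report_time *rt = &report_times[i];

		/* Track the oldest entry; it is recycled if nothing matches. */
		if (rt->time < use_entry->time)
			use_entry = rt;

		if (rt->time == 0)
			break;		/* rest of the table is unused */
		if (rt->time < invalid_before)
			continue;	/* entry expired */

		/* Recent entry: does the unordered frame pair match? */
		if ((rt->frame1 == frame1 && rt->frame2 == frame2) ||
		    (rt->frame1 == frame2 && rt->frame2 == frame1))
			return true;
	}

	use_entry->time = now;
	use_entry->frame1 = frame1;
	use_entry->frame2 = frame2;
	return false;
}
```

A first sighting stores the pair and returns false (report); repeats within the window return true (suppress) regardless of frame order; once the window expires, the race is reported again.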
+131
kernel/kcsan/test.c
···
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/random.h>
+#include <linux/types.h>
+
+#include "encoding.h"
+
+#define ITERS_PER_TEST 2000
+
+/* Test requirements. */
+static bool test_requires(void)
+{
+	/* random should be initialized for the below tests */
+	return prandom_u32() + prandom_u32() != 0;
+}
+
+/*
+ * Test watchpoint encode and decode: check that encoding some access's info,
+ * and then subsequent decode preserves the access's info.
+ */
+static bool test_encode_decode(void)
+{
+	int i;
+
+	for (i = 0; i < ITERS_PER_TEST; ++i) {
+		size_t size = prandom_u32_max(MAX_ENCODABLE_SIZE) + 1;
+		bool is_write = !!prandom_u32_max(2);
+		unsigned long addr;
+
+		prandom_bytes(&addr, sizeof(addr));
+		if (WARN_ON(!check_encodable(addr, size)))
+			return false;
+
+		/* Encode and decode */
+		{
+			const long encoded_watchpoint =
+				encode_watchpoint(addr, size, is_write);
+			unsigned long verif_masked_addr;
+			size_t verif_size;
+			bool verif_is_write;
+
+			/* Check special watchpoints */
+			if (WARN_ON(decode_watchpoint(
+				    INVALID_WATCHPOINT, &verif_masked_addr,
+				    &verif_size, &verif_is_write)))
+				return false;
+			if (WARN_ON(decode_watchpoint(
+				    CONSUMED_WATCHPOINT, &verif_masked_addr,
+				    &verif_size, &verif_is_write)))
+				return false;
+
+			/* Check decoding watchpoint returns same data */
+			if (WARN_ON(!decode_watchpoint(
+				    encoded_watchpoint, &verif_masked_addr,
+				    &verif_size, &verif_is_write)))
+				return false;
+			if (WARN_ON(verif_masked_addr !=
+				    (addr & WATCHPOINT_ADDR_MASK)))
+				goto fail;
+			if (WARN_ON(verif_size != size))
+				goto fail;
+			if (WARN_ON(is_write != verif_is_write))
+				goto fail;
+
+			continue;
+fail:
+			pr_err("%s fail: %s %zu bytes @ %lx -> encoded: %lx -> %s %zu bytes @ %lx\n",
+			       __func__, is_write ? "write" : "read", size,
+			       addr, encoded_watchpoint,
+			       verif_is_write ? "write" : "read", verif_size,
+			       verif_masked_addr);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/* Test access matching function. */
+static bool test_matching_access(void)
+{
+	if (WARN_ON(!matching_access(10, 1, 10, 1)))
+		return false;
+	if (WARN_ON(!matching_access(10, 2, 11, 1)))
+		return false;
+	if (WARN_ON(!matching_access(10, 1, 9, 2)))
+		return false;
+	if (WARN_ON(matching_access(10, 1, 11, 1)))
+		return false;
+	if (WARN_ON(matching_access(9, 1, 10, 1)))
+		return false;
+
+	/*
+	 * An access of size 0 could match another access, as demonstrated here.
+	 * Rather than add more comparisons to 'matching_access()', which would
+	 * end up in the fast-path for *all* checks, check_access() simply
+	 * returns for all accesses of size 0.
+	 */
+	if (WARN_ON(!matching_access(8, 8, 12, 0)))
+		return false;
+
+	return true;
+}
+
+static int __init kcsan_selftest(void)
+{
+	int passed = 0;
+	int total = 0;
+
+#define RUN_TEST(do_test)                                                      \
+	do {                                                                   \
+		++total;                                                       \
+		if (do_test())                                                 \
+			++passed;                                              \
+		else                                                           \
+			pr_err("KCSAN selftest: " #do_test " failed");         \
+	} while (0)
+
+	RUN_TEST(test_requires);
+	RUN_TEST(test_encode_decode);
+	RUN_TEST(test_matching_access);
+
+	pr_info("KCSAN selftest: %d/%d tests passed\n", passed, total);
+	if (passed != total)
+		panic("KCSAN selftests failed");
+	return 0;
+}
+postcore_initcall(kcsan_selftest);
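The encode/decode round trip that test_encode_decode() verifies can be illustrated stand-alone. The bit layout below (write flag in bit 0, a 3-bit size field above it, the masked address in the remaining bits) is a made-up simplification for demonstration only; the real layout lives in kernel/kcsan/encoding.h and differs. What it shares with the kernel test is the invariant: decode(encode(addr, size, is_write)) must return the masked address, the same size, and the same access type.

```c
/*
 * Illustrative watchpoint encoding: the low address bits are deliberately
 * sacrificed to store the size and write flag, which is why decoding can only
 * recover the *masked* address (compare WATCHPOINT_ADDR_MASK in the kernel).
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define WP_SIZE_BITS	3
#define WP_MAX_SIZE	(1ul << WP_SIZE_BITS)		/* sizes 1..8 */
#define WP_ADDR_MASK	(~0ul << (WP_SIZE_BITS + 1))	/* bits holding the address */

static unsigned long encode_watchpoint(unsigned long addr, size_t size,
				       bool is_write)
{
	/* size-1 fits in WP_SIZE_BITS; bit 0 carries the write flag. */
	return (addr & WP_ADDR_MASK) | ((unsigned long)(size - 1) << 1) | is_write;
}

static void decode_watchpoint(unsigned long wp, unsigned long *masked_addr,
			      size_t *size, bool *is_write)
{
	*masked_addr = wp & WP_ADDR_MASK;
	*size = ((wp >> 1) & (WP_MAX_SIZE - 1)) + 1;
	*is_write = wp & 1;
}
```

Because the low address bits are dropped, two different addresses in the same slot-sized window encode identically; this is exactly the "false positive due to watchpoint encoding" case that prepare_report_consumer() re-checks with the full, unmasked addresses.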
+3
kernel/locking/Makefile
···
 
 obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o
 
+# Avoid recursion lockdep -> KCSAN -> ... -> lockdep.
+KCSAN_SANITIZE_lockdep.o := n
+
 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_lockdep.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_lockdep_proc.o = $(CC_FLAGS_FTRACE)
+6
kernel/sched/Makefile
···
 # that is not a function of syscall inputs. E.g. involuntary context switches.
 KCOV_INSTRUMENT := n
 
+# There are numerous data races here, however, most of them are due to plain accesses.
+# This would make it even harder for syzbot to find reproducers, because these
+# bugs trigger without specific input. Disable by default, but should re-enable
+# eventually.
+KCSAN_SANITIZE := n
+
 ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
 # According to Alan Modra <alan@linuxcare.com.au>, the -fno-omit-frame-pointer is
 # needed for x86 only. Why this used to be enabled for all architectures is beyond
+3
kernel/trace/Makefile
···
 ORIG_CFLAGS := $(KBUILD_CFLAGS)
 KBUILD_CFLAGS = $(subst $(CC_FLAGS_FTRACE),,$(ORIG_CFLAGS))
 
+# Avoid recursion due to instrumentation.
+KCSAN_SANITIZE := n
+
 ifdef CONFIG_FTRACE_SELFTEST
 # selftest needs instrumentation
 CFLAGS_trace_selftest_dynamic.o = $(CC_FLAGS_FTRACE)
+2
lib/Kconfig.debug
···
 
 source "samples/Kconfig"
 
+source "lib/Kconfig.kcsan"
+
 config ARCH_HAS_DEVMEM_IS_ALLOWED
 	bool
 
+199
lib/Kconfig.kcsan
···
+# SPDX-License-Identifier: GPL-2.0-only
+
+config HAVE_ARCH_KCSAN
+	bool
+
+config HAVE_KCSAN_COMPILER
+	def_bool CC_IS_CLANG && $(cc-option,-fsanitize=thread -mllvm -tsan-distinguish-volatile=1)
+	help
+	  For the list of compilers that support KCSAN, please see
+	  <file:Documentation/dev-tools/kcsan.rst>.
+
+config KCSAN_KCOV_BROKEN
+	def_bool KCOV && CC_HAS_SANCOV_TRACE_PC
+	depends on CC_IS_CLANG
+	depends on !$(cc-option,-Werror=unused-command-line-argument -fsanitize=thread -fsanitize-coverage=trace-pc)
+	help
+	  Some versions of clang support either KCSAN or KCOV but not the
+	  combination of the two.
+	  See https://bugs.llvm.org/show_bug.cgi?id=45831 for the status
+	  in newer releases.
+
+menuconfig KCSAN
+	bool "KCSAN: dynamic data race detector"
+	depends on HAVE_ARCH_KCSAN && HAVE_KCSAN_COMPILER
+	depends on DEBUG_KERNEL && !KASAN
+	depends on !KCSAN_KCOV_BROKEN
+	select STACKTRACE
+	help
+	  The Kernel Concurrency Sanitizer (KCSAN) is a dynamic
+	  data-race detector that relies on compile-time instrumentation.
+	  KCSAN uses a watchpoint-based sampling approach to detect races.
+
+	  While KCSAN's primary purpose is to detect data races, it
+	  also provides assertions to check data access constraints.
+	  These assertions can expose bugs that do not manifest as
+	  data races.
+
+	  See <file:Documentation/dev-tools/kcsan.rst> for more details.
+
+if KCSAN
+
+config KCSAN_VERBOSE
+	bool "Show verbose reports with more information about system state"
+	depends on PROVE_LOCKING
+	help
+	  If enabled, reports show more information about the system state that
+	  may help better analyze and debug races. This includes held locks and
+	  IRQ trace events.
+
+	  While this option should generally be benign, we call into more
+	  external functions on report generation; if a race report is
+	  generated from any one of them, system stability may suffer due to
+	  deadlocks or recursion. If in doubt, say N.
+
+config KCSAN_DEBUG
+	bool "Debugging of KCSAN internals"
+
+config KCSAN_SELFTEST
+	bool "Perform short selftests on boot"
+	default y
+	help
+	  Run KCSAN selftests on boot. On test failure, causes the kernel to panic.
+
+config KCSAN_EARLY_ENABLE
+	bool "Early enable during boot"
+	default y
+	help
+	  If KCSAN should be enabled globally as soon as possible. KCSAN can
+	  later be enabled/disabled via debugfs.
+
+config KCSAN_NUM_WATCHPOINTS
+	int "Number of available watchpoints"
+	default 64
+	help
+	  Total number of available watchpoints. An address range maps into a
+	  specific watchpoint slot as specified in kernel/kcsan/encoding.h.
+	  Although a larger number of watchpoints may not be usable due to the
+	  limited number of CPUs, a larger value helps to improve performance
+	  by reducing cache-line contention. The chosen default is a
+	  conservative value; we should almost never observe "no_capacity"
+	  events (see /sys/kernel/debug/kcsan).
+
+config KCSAN_UDELAY_TASK
+	int "Delay in microseconds (for tasks)"
+	default 80
+	help
+	  For tasks, the microsecond delay after setting up a watchpoint.
+
+config KCSAN_UDELAY_INTERRUPT
+	int "Delay in microseconds (for interrupts)"
+	default 20
+	help
+	  For interrupts, the microsecond delay after setting up a watchpoint.
+	  Interrupts have tighter latency requirements, and their delay should
+	  be lower than for tasks.
+
+config KCSAN_DELAY_RANDOMIZE
+	bool "Randomize above delays"
+	default y
+	help
+	  If delays should be randomized, where the maximum is KCSAN_UDELAY_*.
+	  If false, the chosen delays are always the KCSAN_UDELAY_* values
+	  as defined above.
+
+config KCSAN_SKIP_WATCH
+	int "Skip instructions before setting up watchpoint"
+	default 4000
+	help
+	  The number of per-CPU memory operations to skip, before another
+	  watchpoint is set up, i.e. one in KCSAN_SKIP_WATCH per-CPU
+	  memory operations is used to set up a watchpoint. A smaller value
+	  results in more aggressive race detection, whereas a larger value
+	  improves system performance at the cost of missing some races.
+
+config KCSAN_SKIP_WATCH_RANDOMIZE
+	bool "Randomize watchpoint instruction skip count"
+	default y
+	help
+	  If the instruction skip count should be randomized, where the
+	  maximum is KCSAN_SKIP_WATCH. If false, the chosen value is always
+	  KCSAN_SKIP_WATCH.
+
+config KCSAN_INTERRUPT_WATCHER
+	bool "Interruptible watchers"
+	help
+	  If enabled, a task that set up a watchpoint may be interrupted while
+	  delayed. This option will allow KCSAN to detect races between
+	  interrupted tasks and other threads of execution on the same CPU.
+
+	  Currently disabled by default, because not all safe per-CPU access
+	  primitives and patterns may be accounted for, and therefore could
+	  result in false positives.
+
+config KCSAN_REPORT_ONCE_IN_MS
+	int "Duration in milliseconds, in which any given race is only reported once"
+	default 3000
+	help
+	  Any given race is only reported once in the defined time window.
+	  Different races may still generate reports within a duration that is
+	  smaller than the duration defined here. This allows rate limiting
+	  reporting to avoid flooding the console with reports. Setting this
+	  to 0 disables rate limiting.
+
+# The main purpose of the below options is to control reported data races (e.g.
+# in fuzzer configs); they are not expected to be switched frequently by other
+# users. We could turn some of them into boot parameters, but given they should
+# not be switched normally, let's keep them here to simplify configuration.
+#
+# The defaults below are chosen to be very conservative, and may miss certain
+# bugs.
+
+config KCSAN_REPORT_RACE_UNKNOWN_ORIGIN
+	bool "Report races of unknown origin"
+	default y
+	help
+	  If KCSAN should report races where only one access is known, and the
+	  conflicting access is of unknown origin. This type of race is
+	  reported if it was only possible to infer a race due to a data value
+	  change while an access is being delayed on a watchpoint.
+
+config KCSAN_REPORT_VALUE_CHANGE_ONLY
+	bool "Only report races where watcher observed a data value change"
+	default y
+	help
+	  If enabled and a conflicting write is observed via a watchpoint, but
+	  the data value of the memory location was observed to remain
+	  unchanged, do not report the data race.
+
+config KCSAN_ASSUME_PLAIN_WRITES_ATOMIC
+	bool "Assume that plain aligned writes up to word size are atomic"
+	default y
+	help
+	  Assume that plain aligned writes up to word size are atomic by
+	  default, and also not subject to other unsafe compiler optimizations
+	  resulting in data races. This will cause KCSAN to not report data
+	  races due to conflicts where the only plain accesses are aligned
+	  writes up to word size: conflicts between marked reads and plain
+	  aligned writes up to word size will not be reported as data races;
+	  notice that data races between two conflicting plain aligned writes
+	  will also not be reported.
+
+config KCSAN_IGNORE_ATOMICS
+	bool "Do not instrument marked atomic accesses"
+	help
+	  Never instrument marked atomic accesses. This option can be used for
+	  additional filtering. Conflicting marked atomic reads and plain
+	  writes will never be reported as a data race; however, this will
+	  cause plain reads and marked writes to result in "unknown origin"
+	  reports. If combined with CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN=n,
+	  data races where at least one access is marked atomic will never be
+	  reported.
+
+	  Similar to KCSAN_ASSUME_PLAIN_WRITES_ATOMIC, but including unaligned
+	  accesses: conflicting marked atomic reads and plain writes will not
+	  be reported as data races; however, unlike that option, data races
+	  due to two conflicting plain writes will be reported (aligned and
+	  unaligned, if CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n).
+
+endif # KCSAN
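The effect of KCSAN_REPORT_VALUE_CHANGE_ONLY above (cases 1-6 in skip_report() in kernel/kcsan/report.c) can be condensed into a small decision function. This is an illustrative stand-alone sketch, not the kernel's representation: the enum and function names are invented, and it omits the per-function opt-outs (rcu_/_rcu/_srcu symbols) that skip_report() applies on top.

```c
/*
 * Decision-table sketch of value-change-only filtering: value_change==FALSE is
 * never reported at all; MAYBE (the write may have rewritten the same value)
 * is filtered for plain accesses, but ASSERT-type accesses always report
 * because they express a constraint rather than a data value.
 */
#include <assert.h>
#include <stdbool.h>

enum value_change { VC_MAYBE, VC_TRUE };	/* VC_FALSE handled before this point */

/* Returns true if the race should be reported under value-change-only filtering. */
static bool should_report(enum value_change vc, bool is_assert)
{
	if (is_assert)
		return true;	/* assertion violated regardless of value */
	return vc == VC_TRUE;	/* plain access: only report observed changes */
}
```

This matches the config's help text: with the option enabled, a conflicting write whose data value was observed to remain unchanged is not reported.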
+11
lib/Kconfig.ubsan
···
 	  the system. For some system builders this is an acceptable
 	  trade-off.
 
+config UBSAN_KCOV_BROKEN
+	def_bool KCOV && CC_HAS_SANCOV_TRACE_PC
+	depends on CC_IS_CLANG
+	depends on !$(cc-option,-Werror=unused-command-line-argument -fsanitize=bounds -fsanitize-coverage=trace-pc)
+	help
+	  Some versions of clang support either UBSAN or KCOV but not the
+	  combination of the two.
+	  See https://bugs.llvm.org/show_bug.cgi?id=45831 for the status
+	  in newer releases.
+
 config UBSAN_BOUNDS
 	bool "Perform array index bounds checking"
 	default UBSAN
+	depends on !UBSAN_KCOV_BROKEN
 	help
 	  This option enables detection of directly indexed out of bounds
 	  array accesses, where the array size is known at compile time.
+4
lib/Makefile
···
 CFLAGS_string.o := $(call cc-option, -fno-stack-protector)
 endif
 
+# Used by KCSAN while enabled, avoid recursion.
+KCSAN_SANITIZE_random32.o := n
+
 lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 rbtree.o radix-tree.o timerqueue.o xarray.o \
 	 idr.o extable.o sha1.o irq_regs.o argv_split.o \
···
 UBSAN_SANITIZE_ubsan.o := n
 KASAN_SANITIZE_ubsan.o := n
+KCSAN_SANITIZE_ubsan.o := n
 CFLAGS_ubsan.o := $(call cc-option, -fno-stack-protector) $(DISABLE_STACKLEAK_PLUGIN)
 
 obj-$(CONFIG_SBITMAP) += sbitmap.o
+4 -3
lib/iov_iter.c
···
 #include <linux/splice.h>
 #include <net/checksum.h>
 #include <linux/scatterlist.h>
+#include <linux/instrumented.h>
 
 #define PIPE_PARANOIA /* for now */
 
···
 static int copyout(void __user *to, const void *from, size_t n)
 {
 	if (access_ok(to, n)) {
-		kasan_check_read(from, n);
+		instrument_copy_to_user(to, from, n);
 		n = raw_copy_to_user(to, from, n);
 	}
 	return n;
···
 static int copyin(void *to, const void __user *from, size_t n)
 {
 	if (access_ok(from, n)) {
-		kasan_check_write(to, n);
+		instrument_copy_from_user(to, from, n);
 		n = raw_copy_from_user(to, from, n);
 	}
 	return n;
···
 static int copyout_mcsafe(void __user *to, const void *from, size_t n)
 {
 	if (access_ok(to, n)) {
-		kasan_check_read(from, n);
+		instrument_copy_to_user(to, from, n);
 		n = copy_to_user_mcsafe((__force void *) to, from, n);
 	}
 	return n;
+4 -3
lib/usercopy.c
···
 // SPDX-License-Identifier: GPL-2.0
-#include <linux/uaccess.h>
 #include <linux/bitops.h>
+#include <linux/instrumented.h>
+#include <linux/uaccess.h>
 
 /* out-of-line parts */
 
···
 	unsigned long res = n;
 	might_fault();
 	if (likely(access_ok(from, n))) {
-		kasan_check_write(to, n);
+		instrument_copy_from_user(to, from, n);
 		res = raw_copy_from_user(to, from, n);
 	}
 	if (unlikely(res))
···
 {
 	might_fault();
 	if (likely(access_ok(to, n))) {
-		kasan_check_read(from, n);
+		instrument_copy_to_user(to, from, n);
 		n = raw_copy_to_user(to, from, n);
 	}
 	return n;
+8
mm/Makefile
···
 KASAN_SANITIZE_slub.o := n
 KCSAN_SANITIZE_kmemleak.o := n
 
+# These produce frequent data race reports: most of them are due to races on
+# the same word but accesses to different bits of that word. Re-enable KCSAN
+# for these when we have more consensus on what to do about them.
+KCSAN_SANITIZE_slab_common.o := n
+KCSAN_SANITIZE_slab.o := n
+KCSAN_SANITIZE_slub.o := n
+KCSAN_SANITIZE_page_alloc.o := n
+
 # These files are disabled because they produce non-interesting and/or
 # flaky coverage that is not a function of syscall inputs. E.g. slab is out of
 # free pages, or a task is migrated between nodes.
+19
scripts/Makefile.kcsan
+# SPDX-License-Identifier: GPL-2.0
+ifdef CONFIG_KCSAN
+
+# GCC and Clang accept backend options differently. Do not wrap in cc-option,
+# because Clang accepts "--param" even if it is unused.
+ifdef CONFIG_CC_IS_CLANG
+cc-param = -mllvm -$(1)
+else
+cc-param = --param -$(1)
+endif
+
+# Keep most options here optional, to allow enabling more compilers if absence
+# of some options does not break KCSAN nor causes false positive reports.
+CFLAGS_KCSAN := -fsanitize=thread \
+	$(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0) -fno-optimize-sibling-calls) \
+	$(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1)) \
+	$(call cc-param,tsan-distinguish-volatile=1)
+
+endif # CONFIG_KCSAN
+10
scripts/Makefile.lib
···
 			$(CFLAGS_KCOV))
 endif
 
+#
+# Enable KCSAN flags except some files or directories we don't want to check
+# (depends on variables KCSAN_SANITIZE_obj.o, KCSAN_SANITIZE)
+#
+ifeq ($(CONFIG_KCSAN),y)
+_c_flags += $(if $(patsubst n%,, \
+	$(KCSAN_SANITIZE_$(basetarget).o)$(KCSAN_SANITIZE)y), \
+	$(CFLAGS_KCSAN))
+endif
+
 # $(srctree)/$(src) for including checkin headers from generated source files
 # $(objtree)/$(obj) for including generated headers from checkin source files
 ifeq ($(KBUILD_EXTMOD),)
+5 -4
scripts/atomic/gen-atomic-instrumented.sh
···
 	# We don't write to constant parameters
 	[ ${type#c} != ${type} ] && rw="read"
 
-	printf "\tkasan_check_${rw}(${name}, sizeof(*${name}));\n"
+	printf "\tinstrument_atomic_${rw}(${name}, sizeof(*${name}));\n"
 }
 
 #gen_param_check(arg...)
···
 	[ ! -z "${guard}" ] && printf "#if ${guard}\n"
 
 	cat <<EOF
-static inline ${ret}
+static __always_inline ${ret}
 ${atomicname}(${params})
 {
 ${checks}
···
 #define ${xchg}(ptr, ...) \\
 ({ \\
 	typeof(ptr) __ai_ptr = (ptr); \\
-	kasan_check_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
+	instrument_atomic_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
 	arch_${xchg}(__ai_ptr, __VA_ARGS__); \\
 })
 EOF
···
 #define _ASM_GENERIC_ATOMIC_INSTRUMENTED_H
 
 #include <linux/build_bug.h>
-#include <linux/kasan-checks.h>
+#include <linux/compiler.h>
+#include <linux/instrumented.h>
 
 EOF
+2 -1
scripts/atomic/gen-atomic-long.sh
···
 	local retstmt="$(gen_ret_stmt "${meta}")"
 
 	cat <<EOF
-static inline ${ret}
+static __always_inline ${ret}
 atomic_long_${name}(${params})
 {
 	${retstmt}${atomic}_${name}(${argscast});
···
 #ifndef _ASM_GENERIC_ATOMIC_LONG_H
 #define _ASM_GENERIC_ATOMIC_LONG_H
 
+#include <linux/compiler.h>
 #include <asm/types.h>
 
 #ifdef CONFIG_64BIT
+8
scripts/checkpatch.pl
···
 		}
 	}
 
+# check for data_race without a comment.
+	if ($line =~ /\bdata_race\s*\(/) {
+		if (!ctx_has_comment($first_line, $linenr)) {
+			WARN("DATA_RACE",
+			     "data_race without comment\n" . $herecurr);
+		}
+	}
+
 # check for smp_read_barrier_depends and read_barrier_depends
 	if (!$file && $line =~ /\b(smp_|)read_barrier_depends\s*\(/) {
 		WARN("READ_BARRIER_DEPENDS",
+22
tools/objtool/check.c
··· 505 505 "__asan_report_store4_noabort", 506 506 "__asan_report_store8_noabort", 507 507 "__asan_report_store16_noabort", 508 + /* KCSAN */ 509 + "__kcsan_check_access", 510 + "kcsan_found_watchpoint", 511 + "kcsan_setup_watchpoint", 512 + "kcsan_check_scoped_accesses", 513 + "kcsan_disable_current", 514 + "kcsan_enable_current_nowarn", 515 + /* KCSAN/TSAN */ 516 + "__tsan_func_entry", 517 + "__tsan_func_exit", 518 + "__tsan_read_range", 519 + "__tsan_write_range", 520 + "__tsan_read1", 521 + "__tsan_read2", 522 + "__tsan_read4", 523 + "__tsan_read8", 524 + "__tsan_read16", 525 + "__tsan_write1", 526 + "__tsan_write2", 527 + "__tsan_write4", 528 + "__tsan_write8", 529 + "__tsan_write16", 508 530 /* KCOV */ 509 531 "write_comp_data", 510 532 "check_kcov_mode",