Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

kcsan: Avoid checking scoped accesses from nested contexts

Avoid checking scoped accesses from nested contexts (such as nested
interrupts or in scheduler code) which share the same kcsan_ctx.

This is to avoid detecting false positive races of accesses in the same
thread with currently scoped accesses: consider setting up a watchpoint
for a non-scoped (normal) access that also "conflicts" with a current
scoped access. In a nested interrupt (or in the scheduler), which shares
the same kcsan_ctx, we cannot check scoped accesses set up in the parent
context -- simply ignore them in this case.

With the introduction of kcsan_ctx::disable_scoped, we can also clean up
kcsan_check_scoped_accesses()'s recursion guard, and do not need to
modify the list's prev pointer.

Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

Authored by Marco Elver and committed by Paul E. McKenney
9756f64c 71f8de70

+16 -3

include/linux/kcsan.h (+1)

···
  */
 struct kcsan_ctx {
 	int disable_count;		/* disable counter */
+	int disable_scoped;		/* disable scoped access counter */
 	int atomic_next;		/* number of following atomic ops */
 
 	/*
···
kernel/kcsan/core.c (+15 -3)

···
 static noinline void kcsan_check_scoped_accesses(void)
 {
 	struct kcsan_ctx *ctx = get_ctx();
-	struct list_head *prev_save = ctx->scoped_accesses.prev;
 	struct kcsan_scoped_access *scoped_access;
 
-	ctx->scoped_accesses.prev = NULL;  /* Avoid recursion. */
+	if (ctx->disable_scoped)
+		return;
+
+	ctx->disable_scoped++;
 	list_for_each_entry(scoped_access, &ctx->scoped_accesses, list) {
 		check_access(scoped_access->ptr, scoped_access->size,
 			     scoped_access->type, scoped_access->ip);
 	}
-	ctx->scoped_accesses.prev = prev_save;
+	ctx->disable_scoped--;
 }
 
 /* Rules for generic atomic accesses. Called from fast-path. */
···
 	}
 
 	/*
+	 * Avoid races of scoped accesses from nested interrupts (or scheduler).
+	 * Assume setting up a watchpoint for a non-scoped (normal) access that
+	 * also conflicts with a current scoped access. In a nested interrupt,
+	 * which shares the context, it would check a conflicting scoped access.
+	 * To avoid, disable scoped access checking.
+	 */
+	ctx->disable_scoped++;
+
+	/*
 	 * Save and restore the IRQ state trace touched by KCSAN, since KCSAN's
 	 * runtime is entered for every memory access, and potentially useful
 	 * information is lost if dirtied by KCSAN.
···
 	if (!kcsan_interrupt_watcher)
 		local_irq_restore(irq_flags);
 	kcsan_restore_irqtrace(current);
+	ctx->disable_scoped--;
 out:
 	user_access_restore(ua_flags);
 }
··· 204 204 static noinline void kcsan_check_scoped_accesses(void) 205 205 { 206 206 struct kcsan_ctx *ctx = get_ctx(); 207 - struct list_head *prev_save = ctx->scoped_accesses.prev; 208 207 struct kcsan_scoped_access *scoped_access; 209 208 210 - ctx->scoped_accesses.prev = NULL; /* Avoid recursion. */ 209 + if (ctx->disable_scoped) 210 + return; 211 + 212 + ctx->disable_scoped++; 211 213 list_for_each_entry(scoped_access, &ctx->scoped_accesses, list) { 212 214 check_access(scoped_access->ptr, scoped_access->size, 213 215 scoped_access->type, scoped_access->ip); 214 216 } 215 - ctx->scoped_accesses.prev = prev_save; 217 + ctx->disable_scoped--; 216 218 } 217 219 218 220 /* Rules for generic atomic accesses. Called from fast-path. */ ··· 468 466 } 469 467 470 468 /* 469 + * Avoid races of scoped accesses from nested interrupts (or scheduler). 470 + * Assume setting up a watchpoint for a non-scoped (normal) access that 471 + * also conflicts with a current scoped access. In a nested interrupt, 472 + * which shares the context, it would check a conflicting scoped access. 473 + * To avoid, disable scoped access checking. 474 + */ 475 + ctx->disable_scoped++; 476 + 477 + /* 471 478 * Save and restore the IRQ state trace touched by KCSAN, since KCSAN's 472 479 * runtime is entered for every memory access, and potentially useful 473 480 * information is lost if dirtied by KCSAN. ··· 589 578 if (!kcsan_interrupt_watcher) 590 579 local_irq_restore(irq_flags); 591 580 kcsan_restore_irqtrace(current); 581 + ctx->disable_scoped--; 592 582 out: 593 583 user_access_restore(ua_flags); 594 584 }