Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


mm/kfence: select random number before taking raw lock

The RNG uses vanilla spinlocks, not raw spinlocks, so kfence should pick
its random numbers before taking its raw spinlocks. This also has the
nice effect of doing less work inside the lock. It should fix a splat
that Geert saw with CONFIG_PROVE_RAW_LOCK_NESTING:

dump_backtrace.part.0+0x98/0xc0
show_stack+0x14/0x28
dump_stack_lvl+0xac/0xec
dump_stack+0x14/0x2c
__lock_acquire+0x388/0x10a0
lock_acquire+0x190/0x2c0
_raw_spin_lock_irqsave+0x6c/0x94
crng_make_state+0x148/0x1e4
_get_random_bytes.part.0+0x4c/0xe8
get_random_u32+0x4c/0x140
__kfence_alloc+0x460/0x5c4
kmem_cache_alloc_trace+0x194/0x1dc
__kthread_create_on_node+0x5c/0x1a8
kthread_create_on_node+0x58/0x7c
printk_start_kthread.part.0+0x34/0xa8
printk_activate_kthreads+0x4c/0x54
do_one_initcall+0xec/0x278
kernel_init_freeable+0x11c/0x214
kernel_init+0x24/0x124
ret_from_fork+0x10/0x20

Link: https://lkml.kernel.org/r/20220609123319.17576-1-Jason@zx2c4.com
Fixes: d4150779e60f ("random32: use real rng for non-deterministic randomness")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Jason A. Donenfeld, committed by akpm (327b18b7 8a6f62a2)

+5 -2
mm/kfence/core.c
@@ -360,6 +360,9 @@
 	unsigned long flags;
 	struct slab *slab;
 	void *addr;
+	const bool random_right_allocate = prandom_u32_max(2);
+	const bool random_fault = CONFIG_KFENCE_STRESS_TEST_FAULTS &&
+				  !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS);
 
 	/* Try to obtain a free object. */
 	raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
@@ -404,7 +407,7 @@
 	 * is that the out-of-bounds accesses detected are deterministic for
 	 * such allocations.
 	 */
-	if (prandom_u32_max(2)) {
+	if (random_right_allocate) {
 		/* Allocate on the "right" side, re-calculate address. */
 		meta->addr += PAGE_SIZE - size;
 		meta->addr = ALIGN_DOWN(meta->addr, cache->align);
@@ -444,7 +447,7 @@
 	if (cache->ctor)
 		cache->ctor(addr);
 
-	if (CONFIG_KFENCE_STRESS_TEST_FAULTS && !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS))
+	if (random_fault)
 		kfence_protect(meta->addr);	/* Random "faults" by protecting the object. */
 
 	atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCATED]);