Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ARC: atomics: Add compiler barrier to atomic operations...

... to avoid unwanted gcc optimizations

SMP kernels fail to boot with commit 596ff4a09b89
("cpumask: re-introduce constant-sized cpumask optimizations").

|
| percpu: BUG: failure at mm/percpu.c:2981/pcpu_build_alloc_info()!
|

The inline asm in the atomic ops lacks a memory clobber, so the write
performed by the SCOND instruction is invisible to the compiler. As a
result, gcc mis-optimizes the nested loop that iterates over the cpumask
in pcpu_build_alloc_info().

Fix this by adding a compiler barrier (a "memory" clobber in the inline asm).

Apparently the atomic ops used to get a memory clobber implicitly via the
surrounding smp_mb(). However, commit b64be6836993c431e
("ARC: atomics: implement relaxed variants") removed the smp_mb() for the
relaxed variants but failed to add the explicit compiler barrier.

Link: https://github.com/foss-for-synopsys-dwc-arc-processors/linux/issues/135
Cc: <stable@vger.kernel.org> # v6.3+
Fixes: b64be6836993c43 ("ARC: atomics: implement relaxed variants")
Signed-off-by: Pavel Kozlov <pavel.kozlov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
[vgupta: tweaked the changelog and added Fixes tag]

+3 -3
arch/arc/include/asm/atomic-llsc.h
@@ -18,7 +18,7 @@
 	: [val]	"=&r"	(val)	/* Early clobber to prevent reg reuse */	\
 	: [ctr]	"r"	(&v->counter),	/* Not "m": llock only supports reg direct addr mode */	\
 	  [i]	"ir"	(i)	\
-	: "cc");	\
+	: "cc", "memory");	\
 }	\
 
 #define ATOMIC_OP_RETURN(op, asm_op)	\
@@ -34,7 +34,7 @@
 	: [val]	"=&r"	(val)	\
 	: [ctr]	"r"	(&v->counter),	\
 	  [i]	"ir"	(i)	\
-	: "cc");	\
+	: "cc", "memory");	\
 	\
 	return val;	\
 }
@@ -56,7 +56,7 @@
 	  [orig]	"=&r"	(orig)	\
 	: [ctr]	"r"	(&v->counter),	\
 	  [i]	"ir"	(i)	\
-	: "cc");	\
+	: "cc", "memory");	\
 	\
 	return orig;	\
 }
+3 -3
arch/arc/include/asm/atomic64-arcv2.h
@@ -60,7 +60,7 @@
 	"	bnz     1b	\n"	\
 	: "=&r"(val)	\
 	: "r"(&v->counter), "ir"(a)	\
-	: "cc");	\
+	: "cc", "memory");	\
 }	\
 
 #define ATOMIC64_OP_RETURN(op, op1, op2)	\
@@ -77,7 +77,7 @@
 	"	bnz     1b	\n"	\
 	: [val] "=&r"(val)	\
 	: "r"(&v->counter), "ir"(a)	\
-	: "cc");	/* memory clobber comes from smp_mb() */	\
+	: "cc", "memory");	\
 	\
 	return val;	\
 }
@@ -99,7 +99,7 @@
 	"	bnz     1b	\n"	\
 	: "=&r"(orig), "=&r"(val)	\
 	: "r"(&v->counter), "ir"(a)	\
-	: "cc");	/* memory clobber comes from smp_mb() */	\
+	: "cc", "memory");	\
 	\
 	return orig;	\
 }