Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

[S390] mutex: Introduce arch_mutex_cpu_relax()

The spinning mutex implementation uses cpu_relax() in busy loops as a
compiler barrier. Depending on the architecture, cpu_relax() may do more
than needed in these specific mutex spin loops. On System z we also give
up the time slice of the virtual cpu in cpu_relax(), which prevents
effective spinning on the mutex.

This patch replaces cpu_relax() in the spinning mutex code with
arch_mutex_cpu_relax(), which can be defined by each architecture that
selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so
for now this patch should not affect architectures other than System z.

Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290437256.7455.4.camel@thinkpad>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Authored by Gerald Schaefer, committed by Martin Schwidefsky
34b133f8 c0301754

+13 -2
+3
arch/Kconfig
@@ -175,4 +175,7 @@
 config HAVE_ARCH_JUMP_LABEL
 	bool
 
+config HAVE_ARCH_MUTEX_CPU_RELAX
+	bool
+
 source "kernel/gcov/Kconfig"
+1
arch/s390/Kconfig
@@ -87,6 +87,7 @@
 	select HAVE_KERNEL_LZMA
 	select HAVE_KERNEL_LZO
 	select HAVE_GET_USER_PAGES_FAST
+	select HAVE_ARCH_MUTEX_CPU_RELAX
 	select ARCH_INLINE_SPIN_TRYLOCK
 	select ARCH_INLINE_SPIN_TRYLOCK_BH
 	select ARCH_INLINE_SPIN_LOCK
+2
arch/s390/include/asm/mutex.h
@@ -7,3 +7,5 @@
  */
 
 #include <asm-generic/mutex-dec.h>
+
+#define arch_mutex_cpu_relax()	barrier()
+4
include/linux/mutex.h
@@ -160,4 +160,8 @@
 extern void mutex_unlock(struct mutex *lock);
 extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
 
+#ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
+#define arch_mutex_cpu_relax()	cpu_relax()
+#endif
+
 #endif
+1 -1
kernel/mutex.c
@@ -199,7 +199,7 @@
 		 * memory barriers as we'll eventually observe the right
 		 * values at the cost of a few extra spins.
 		 */
-		cpu_relax();
+		arch_mutex_cpu_relax();
 	}
 #endif
 	spin_lock_mutex(&lock->wait_lock, flags);
+2 -1
kernel/sched.c
@@ -75,6 +75,7 @@
 
 #include <asm/tlb.h>
 #include <asm/irq_regs.h>
+#include <asm/mutex.h>
 
 #include "sched_cpupri.h"
 #include "workqueue_sched.h"
@@ -4215,7 +4214,7 @@
 		if (task_thread_info(rq->curr) != owner || need_resched())
 			return 0;
 
-		cpu_relax();
+		arch_mutex_cpu_relax();
 	}
 
 	return 1;