Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

s390/spinlock: remove unneeded serializations at unlock

The kernel locks have acquire/release semantics: no operation done
after taking the lock can be "moved" before the lock, and no operation
done before the unlock can be moved after the unlock. It is, however,
perfectly fine for memory accesses that happen, code-wise, after the
unlock to be performed within the critical section.
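As a C-style illustration (a fragment only, with hypothetical names; not the kernel code), the allowed movement looks like this:

```
spin_lock(&l);       /* ACQUIRE: "inside" cannot move above this      */
inside = compute();  /* must stay between the lock and the unlock     */
spin_unlock(&l);     /* RELEASE: "inside" cannot move below this      */
later = 1;           /* may still be observed inside the critical
                      * section - this is what makes the extra
                      * serialization at unlock unnecessary           */
```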
On s390x, reads are in-order with other reads (PoP section
"Storage-Operand Fetch References") and writes are in-order with
other writes (PoP section "Storage-Operand Store References"). Writes
are also in-order with reads to the same memory location (PoP section
"Storage-Operand Store References"). To other CPUs (and the channel
subsystem), reads additionally appear to be performed prior to reads or
writes that happen after them in the conceptual sequence (PoP section
"Relation between Operand Accesses").
So at least as observed by other CPUs and the channel subsystem, reads
inside the critical sections will not happen after unlock (and writes
are in-order anyway). That's exactly what we need for "RELEASE
operations" (memory-barriers.txt): "It guarantees that all memory
operations before the RELEASE operation will appear to happen before the
RELEASE operation with respect to the other components of the system."

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
[cross-reading and lot of improvements for the patch description]
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

authored by Christian Borntraeger, committed by Martin Schwidefsky
fdbbe8e7 8139b89d

arch/s390/include/asm/spinlock.h | 3 ---
1 file changed, 3 deletions(-)
--- a/arch/s390/include/asm/spinlock.h
+++ b/arch/s390/include/asm/spinlock.h
@@ -87,7 +87,6 @@
 {
 	typecheck(unsigned int, lp->lock);
 	asm volatile(
-		__ASM_BARRIER
 		"st	%1,%0\n"
 		: "+Q" (lp->lock)
 		: "d" (0)
@@ -169,7 +168,6 @@
 	typecheck(unsigned int *, ptr);		\
 	asm volatile(				\
-		"bcr	14,0\n"			\
 		op_string "	%0,%2,%1\n"	\
 		: "=d" (old_val), "+Q" (*ptr)	\
 		: "d" (op_val)			\
@@ -243,7 +241,6 @@
 	rw->owner = 0;
 	asm volatile(
-		__ASM_BARRIER
 		"st	%1,%0\n"
 		: "+Q" (rw->lock)
 		: "d" (0)