Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

tools/memory-model: litmus: Add two tests for unlock(A)+lock(B) ordering

The memory model has been updated to provide a stronger ordering
guarantee for unlock(A)+lock(B) on the same CPU/thread. Therefore add
two litmus tests describing this new guarantee. These tests are simple
yet clearly show how the new guarantee can be used, and they also serve
as self-tests for the corresponding change to the model.

Co-developed-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

authored by Boqun Feng and committed by Paul E. McKenney
c438b7d8 b47c05ec

+76
+35
tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus
C LB+unlocklockonceonce+poacquireonce

(*
 * Result: Never
 *
 * If two locked critical sections execute on the same CPU, all accesses
 * in the first must execute before any accesses in the second, even if the
 * critical sections are protected by different locks.  Note: Even when a
 * write executes before a read, their memory effects can be reordered from
 * the viewpoint of another CPU (the kind of reordering allowed by TSO).
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	int r1;

	spin_lock(s);
	r1 = READ_ONCE(*x);
	spin_unlock(s);
	spin_lock(t);
	WRITE_ONCE(*y, 1);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r2;

	r2 = smp_load_acquire(y);
	WRITE_ONCE(*x, 1);
}

exists (0:r1=1 /\ 1:r2=1)
+33
tools/memory-model/litmus-tests/MP+unlocklockonceonce+fencermbonceonce.litmus
C MP+unlocklockonceonce+fencermbonceonce

(*
 * Result: Never
 *
 * If two locked critical sections execute on the same CPU, stores in the
 * first must propagate to each CPU before stores in the second do, even if
 * the critical sections are protected by different locks.
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(t);
	WRITE_ONCE(*y, 1);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*y);
	smp_rmb();
	r2 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 1:r2=0)
+8
tools/memory-model/litmus-tests/README
···
	As above, but with store-release replaced with WRITE_ONCE()
	and load-acquire replaced with READ_ONCE().

+ LB+unlocklockonceonce+poacquireonce.litmus
+	Does an unlock+lock pair provide an ordering guarantee between
+	a load and a store?
+
  MP+onceassign+derefonce.litmus
	As below, but with rcu_assign_pointer() and an rcu_dereference().
···
  MP+porevlocks.litmus
	As below, but with the first access of the writer process
	and the second access of reader process protected by a lock.
+
+ MP+unlocklockonceonce+fencermbonceonce.litmus
+	Does an unlock+lock pair provide an ordering guarantee between
+	a store and another store?

  MP+fencewmbonceonce+fencermbonceonce.litmus
	Does a smp_wmb() (between the stores) and an smp_rmb() (between