
[PATCH] spinlock consolidation

This patch (written by me, with many suggestions from Arjan van de Ven)
does a major cleanup of the spinlock code. It does the following things:

- consolidates and enhances the spinlock/rwlock debugging code

- simplifies the asm/spinlock.h files

- encapsulates the raw spinlock type and moves generic spinlock
features (such as ->break_lock) into the generic code.

- cleans up the spinlock code hierarchy to get rid of the spaghetti.
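
The encapsulation described in the third bullet can be sketched roughly as follows. This is a simplified illustration, not the kernel's actual definitions: C11 atomics stand in for the per-architecture assembly, and only the ->break_lock field is shown on the generic side.

```c
#include <stdatomic.h>

/* The arch-level "raw" type holds only the lock word; the generic
 * spinlock_t wraps it and carries generic features such as break_lock.
 * The atomic_flag emulation below is a stand-in for per-arch assembly. */
typedef struct {
	atomic_flag slock;		/* arch part, asm/spinlock_types.h */
} raw_spinlock_t;

typedef struct {
	raw_spinlock_t raw_lock;	/* arch part, wrapped */
	unsigned int break_lock;	/* generic part, linux/spinlock_types.h */
} spinlock_t;

#define __RAW_SPIN_LOCK_UNLOCKED	{ ATOMIC_FLAG_INIT }
#define SPIN_LOCK_UNLOCKED		{ __RAW_SPIN_LOCK_UNLOCKED, 0 }

static void __raw_spin_lock(raw_spinlock_t *l)
{
	while (atomic_flag_test_and_set_explicit(&l->slock,
						 memory_order_acquire))
		;			/* spin */
}

static void __raw_spin_unlock(raw_spinlock_t *l)
{
	atomic_flag_clear_explicit(&l->slock, memory_order_release);
}

/* The generic layer only touches the raw part through the __raw_*() ops */
static void spin_lock(spinlock_t *l)   { __raw_spin_lock(&l->raw_lock); }
static void spin_unlock(spinlock_t *l) { __raw_spin_unlock(&l->raw_lock); }
```

The point of the split is that generic features can now be added to spinlock_t without touching any asm/ header.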

Most notably, there's now only a single variant of the debugging code,
located in lib/spinlock_debug.c. (Previously we had one SMP debugging
variant per architecture, plus a separate generic one for UP builds.)

Also, I've enhanced the rwlock debugging facility: it now tracks
write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too.
All locks have lockup detection now, which works for both soft and hard
spin/rwlock lockups.
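
The shape of this lockup detection and owner tracking can be sketched as below. The loop bound, report text, and C11-atomic emulation are illustrative stand-ins, not the code in lib/spinlock_debug.c.

```c
#include <stdatomic.h>
#include <stdio.h>

/* Toy debug spinlock: bounded spinning with a lockup report, plus
 * owner/CPU tracking on acquisition. */
typedef struct {
	atomic_flag slock;
	int owner_cpu;			/* -1 while unlocked */
} dbg_spinlock_t;

#define DBG_SPIN_UNLOCKED	{ ATOMIC_FLAG_INIT, -1 }

static int dbg_trylock(dbg_spinlock_t *l)
{
	return !atomic_flag_test_and_set_explicit(&l->slock,
						  memory_order_acquire);
}

/* Returns the number of lockup reports emitted before the lock was
 * taken (or before this sketch gives up). */
static int dbg_spin_lock(dbg_spinlock_t *l, int this_cpu)
{
	int reports = 0;

	for (;;) {
		for (long i = 0; i < 1000000; i++)	/* bounded inner spin */
			if (dbg_trylock(l)) {
				l->owner_cpu = this_cpu; /* owner tracking */
				return reports;
			}
		fprintf(stderr, "BUG: spinlock lockup, held by CPU#%d\n",
			l->owner_cpu);
		if (++reports > 3)	/* the real code keeps spinning */
			return reports;
	}
}

static void dbg_spin_unlock(dbg_spinlock_t *l)
{
	l->owner_cpu = -1;
	atomic_flag_clear_explicit(&l->slock, memory_order_release);
}
```

Because the spin is bounded, a lock that never becomes free produces a report instead of a silent hang, and the recorded owner CPU makes the report actionable.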

The arch-level include files now only contain the minimally necessary
subset of the spinlock code - all the rest that can be generalized now
lives in the generic headers:

include/asm-i386/spinlock_types.h | 16
include/asm-x86_64/spinlock_types.h | 16

I have also split up the various spinlock variants into separate files,
making it easier to see which does what. The new layout is:

   SMP                          |  UP
   ----------------------------+-----------------------------------
   asm/spinlock_types_smp.h    |  linux/spinlock_types_up.h
   linux/spinlock_types.h      |  linux/spinlock_types.h
   asm/spinlock_smp.h          |  linux/spinlock_up.h
   linux/spinlock_api_smp.h    |  linux/spinlock_api_up.h
   linux/spinlock.h            |  linux/spinlock.h

/*
* here's the role of the various spinlock/rwlock related include files:
*
* on SMP builds:
*
* asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
* initializers
*
* linux/spinlock_types.h:
* defines the generic type and initializers
*
* asm/spinlock.h: contains the __raw_spin_*()/etc. lowlevel
* implementations, mostly inline assembly code
*
* (also included on UP-debug builds:)
*
* linux/spinlock_api_smp.h:
* contains the prototypes for the _spin_*() APIs.
*
* linux/spinlock.h: builds the final spin_*() APIs.
*
* on UP builds:
*
* linux/spinlock_types_up.h:
* contains the generic, simplified UP spinlock type.
* (which is an empty structure on non-debug builds)
*
* linux/spinlock_types.h:
* defines the generic type and initializers
*
* linux/spinlock_up.h:
* contains the __raw_spin_*()/etc. version of UP
* builds. (which are NOPs on non-debug, non-preempt
* builds)
*
* (included on UP-non-debug builds:)
*
* linux/spinlock_api_up.h:
* builds the _spin_*() APIs.
*
* linux/spinlock.h: builds the final spin_*() APIs.
*/
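
The UP side of the hierarchy above can be sketched in a few lines. The preempt counter and helper names below are hypothetical stand-ins for the kernel internals: the point is only that on a non-debug, non-preempt UP build the __raw_* ops compile away and the _spin_*() layer reduces to preemption bookkeeping.

```c
/* Stand-in for the per-task preemption counter */
static int preempt_count;

typedef struct { int unused; } spinlock_t;	/* empty in the kernel on UP */

static void preempt_disable(void) { ++preempt_count; }
static void preempt_enable(void)  { --preempt_count; }

/* linux/spinlock_up.h: the raw ops are NOPs on UP */
static void __raw_spin_lock(spinlock_t *l)   { (void)l; }
static void __raw_spin_unlock(spinlock_t *l) { (void)l; }

/* linux/spinlock_api_up.h: the _spin_*() layer adds preemption control */
static void _spin_lock(spinlock_t *l)
{
	preempt_disable();
	__raw_spin_lock(l);
}

static void _spin_unlock(spinlock_t *l)
{
	__raw_spin_unlock(l);
	preempt_enable();
}

/* linux/spinlock.h: the final spin_*() API maps onto the layer below */
#define spin_lock(l)	_spin_lock(l)
#define spin_unlock(l)	_spin_unlock(l)
```

On SMP the same top two layers are reused; only the __raw_* layer (and the types underneath it) swaps to the arch implementation.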

All SMP and UP architectures are converted by this patch.

arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
cross-compilers. m32r, mips, sh and sparc have not been tested yet, but
should be mostly fine.

From: Grant Grundler <grundler@parisc-linux.org>

Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
Builds 32-bit SMP kernel (not booted or tested). I did not try to build
non-SMP kernels. That should be trivial to fix up later if necessary.

I converted the bitops atomic_hash locks to raw_spinlock_t. Doing so avoids
some ugly nesting of linux/*.h and asm/*.h files. Those particular locks
are well tested and contained entirely inside arch-specific code. I do NOT
expect any new issues to arise with them.

If someone does ever need to use debug/metrics with them, then they will
need to unravel this hairball between spinlocks, atomic ops, and bitops
that exists only because parisc has exactly one atomic instruction: LDCW
(load and clear word).
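
The hashed-lock scheme behind that hairball can be sketched as below. The hash function, array size, and C11-atomic lock emulation are illustrative choices, not the parisc code: with LDCW as the only atomic primitive, atomic_t (and bitops) are emulated by hashing the target address into a small array of raw spinlocks.

```c
#include <stdatomic.h>
#include <stdint.h>

#define ATOMIC_HASH_SIZE 4

typedef struct { atomic_flag slock; } raw_spinlock_t;

static raw_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] = {
	{ ATOMIC_FLAG_INIT }, { ATOMIC_FLAG_INIT },
	{ ATOMIC_FLAG_INIT }, { ATOMIC_FLAG_INIT },
};

/* Pick a lock based on the address of the datum, as the real
 * __atomic_hash[] indexing does. */
static raw_spinlock_t *ATOMIC_HASH(const void *addr)
{
	return &__atomic_hash[((uintptr_t)addr >> 4) % ATOMIC_HASH_SIZE];
}

typedef struct { int counter; } atomic_t;

static int atomic_add_return(int i, atomic_t *v)
{
	raw_spinlock_t *l = ATOMIC_HASH(v);
	int ret;

	while (atomic_flag_test_and_set_explicit(&l->slock,
						 memory_order_acquire))
		;				/* LDCW spin on real parisc */
	ret = (v->counter += i);		/* plain RMW under the lock */
	atomic_flag_clear_explicit(&l->slock, memory_order_release);
	return ret;
}
```

This also shows why the locks could become raw_spinlock_t so easily: they are a pure mutual-exclusion backend for the atomic emulation and never need the generic spinlock features.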

From: "Luck, Tony" <tony.luck@intel.com>

ia64 fix

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
Cc: Matthew Wilcox <willy@debian.org>
Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Authored by Ingo Molnar, committed by Linus Torvalds
commit fb1c8f93, parent 4327edf6

+1617 -2897
arch/alpha/kernel/alpha_ksyms.c  (-9)
···
 EXPORT_SYMBOL(smp_call_function);
 EXPORT_SYMBOL(smp_call_function_on_cpu);
 EXPORT_SYMBOL(_atomic_dec_and_lock);
-#ifdef CONFIG_DEBUG_SPINLOCK
-EXPORT_SYMBOL(_raw_spin_unlock);
-EXPORT_SYMBOL(debug_spin_lock);
-EXPORT_SYMBOL(debug_spin_trylock);
-#endif
-#ifdef CONFIG_DEBUG_RWLOCK
-EXPORT_SYMBOL(_raw_write_lock);
-EXPORT_SYMBOL(_raw_read_lock);
-#endif
 EXPORT_SYMBOL(cpu_present_mask);
 #endif /* CONFIG_SMP */
arch/alpha/kernel/smp.c  (-172)
[Deletes the CONFIG_DEBUG_SPINLOCK implementations of _raw_spin_unlock(),
debug_spin_lock() and debug_spin_trylock(), and the CONFIG_DEBUG_RWLOCK
implementations of _raw_write_lock() and _raw_read_lock(), all written in
Alpha inline assembly; superseded by the generic lib/spinlock_debug.c.]
arch/ia64/kernel/mca.c  (+1 -10)
···
 	unw_init_from_interruption(&info, current, pt, sw);
 	ia64_do_show_stack(&info, NULL);
 
-#ifdef CONFIG_SMP
-	/* read_trylock() would be handy... */
-	if (!tasklist_lock.write_lock)
-		read_lock(&tasklist_lock);
-#endif
-	{
+	if (read_trylock(&tasklist_lock)) {
 		struct task_struct *g, *t;
 		do_each_thread (g, t) {
 			if (t == current)
···
 			show_stack(t, NULL);
 		} while_each_thread (g, t);
 	}
-#ifdef CONFIG_SMP
-	if (!tasklist_lock.write_lock)
-		read_unlock(&tasklist_lock);
-#endif
 
 	printk("\nINIT dump complete.  Please reboot now.\n");
 	while (1);			/* hang city if no debugger */
arch/m32r/kernel/smp.c  (+13 -37)
···
 	int try)
 {
 	spinlock_t *ipilock;
-	unsigned long flags = 0;
 	volatile unsigned long *ipicr_addr;
 	unsigned long ipicr_val;
 	unsigned long my_physid_mask;
···
 	 * write IPICRi (send IPIi)
 	 * unlock ipi_lock[i]
 	 */
+	spin_lock(ipilock);
 	__asm__ __volatile__ (
-		";; LOCK ipi_lock[i]		\n\t"
-		".fillinsn			\n"
-		"1:				\n\t"
-		"mvfc	%1, psw			\n\t"
-		"clrpsw	#0x40 -> nop		\n\t"
-		DCACHE_CLEAR("r4", "r5", "%2")
-		"lock	r4, @%2			\n\t"
-		"addi	r4, #-1			\n\t"
-		"unlock	r4, @%2			\n\t"
-		"mvtc	%1, psw			\n\t"
-		"bnez	r4, 2f			\n\t"
-		LOCK_SECTION_START(".balign 4 \n\t")
-		".fillinsn			\n"
-		"2:				\n\t"
-		"ld	r4, @%2			\n\t"
-		"blez	r4, 2b			\n\t"
-		"bra	1b			\n\t"
-		LOCK_SECTION_END
 		";; CHECK IPICRi == 0		\n\t"
 		".fillinsn			\n"
-		"3:				\n\t"
-		"ld	%0, @%3			\n\t"
-		"and	%0, %6			\n\t"
-		"beqz	%0, 4f			\n\t"
-		"bnez	%5, 5f			\n\t"
-		"bra	3b			\n\t"
+		"1:				\n\t"
+		"ld	%0, @%1			\n\t"
+		"and	%0, %4			\n\t"
+		"beqz	%0, 2f			\n\t"
+		"bnez	%3, 3f			\n\t"
+		"bra	1b			\n\t"
 		";; WRITE IPICRi (send IPIi)	\n\t"
 		".fillinsn			\n"
-		"4:				\n\t"
-		"st	%4, @%3			\n\t"
-		";; UNLOCK ipi_lock[i]		\n\t"
+		"2:				\n\t"
+		"st	%2, @%1			\n\t"
 		".fillinsn			\n"
-		"5:				\n\t"
-		"ldi	r4, #1			\n\t"
-		"st	r4, @%2			\n\t"
+		"3:				\n\t"
 		: "=&r"(ipicr_val)
-		: "r"(flags), "r"(&ipilock->slock), "r"(ipicr_addr),
-		  "r"(mask), "r"(try), "r"(my_physid_mask)
-		: "memory", "r4"
-#ifdef CONFIG_CHIP_M32700_TS1
-		, "r5"
-#endif	/* CONFIG_CHIP_M32700_TS1 */
+		: "r"(ipicr_addr), "r"(mask), "r"(try), "r"(my_physid_mask)
+		: "memory"
 	);
+	spin_unlock(ipilock);
 
 	return ipicr_val;
 }
arch/mips/lib/dec_and_lock.c  (-8)
···
  * has a cmpxchg, and where atomic->value is an int holding
  * the value of the atomic (i.e. the high bits aren't used
  * for a lock or anything like that).
- *
- * N.B. ATOMIC_DEC_AND_LOCK gets defined in include/linux/spinlock.h
- * if spinlocks are empty and thus atomic_dec_and_lock is defined
- * to be atomic_dec_and_test - in that case we don't need it
- * defined here as well.
  */
-
-#ifndef ATOMIC_DEC_AND_LOCK
 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
 	int counter;
···
 }
 
 EXPORT_SYMBOL(_atomic_dec_and_lock);
-#endif /* ATOMIC_DEC_AND_LOCK */
arch/parisc/lib/Makefile  (-2)
···
 lib-y	:= lusercopy.o bitops.o checksum.o io.o memset.o fixup.o memcpy.o
 
 obj-y	:= iomap.o
-
-lib-$(CONFIG_SMP) += debuglocks.o
arch/parisc/lib/bitops.c  (+2 -2)
···
 #include <asm/atomic.h>
 
 #ifdef CONFIG_SMP
-spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned = {
-	[0 ... (ATOMIC_HASH_SIZE-1)]  = SPIN_LOCK_UNLOCKED
+raw_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned = {
+	[0 ... (ATOMIC_HASH_SIZE-1)]  = __RAW_SPIN_LOCK_UNLOCKED
 };
 #endif
arch/parisc/lib/debuglocks.c  (-277, file deleted)
[Deletes the parisc debugging lock primitives (_dbg_spin_lock(),
_dbg_spin_unlock(), _dbg_spin_trylock(), _dbg_write_lock(),
_dbg_write_trylock(), _dbg_read_lock()), replaced by the generic
debugging code in lib/spinlock_debug.c.]
arch/ppc/lib/Makefile  (-1)
···
 obj-y	:= checksum.o string.o strcase.o dec_and_lock.o div64.o
 
-obj-$(CONFIG_SMP)	+= locks.o
 obj-$(CONFIG_8xx)	+= rheap.o
 obj-$(CONFIG_CPM2)	+= rheap.o
arch/ppc/lib/dec_and_lock.c  (-8)
···
  * has a cmpxchg, and where atomic->value is an int holding
  * the value of the atomic (i.e. the high bits aren't used
  * for a lock or anything like that).
- *
- * N.B. ATOMIC_DEC_AND_LOCK gets defined in include/linux/spinlock.h
- * if spinlocks are empty and thus atomic_dec_and_lock is defined
- * to be atomic_dec_and_test - in that case we don't need it
- * defined here as well.
  */
-
-#ifndef ATOMIC_DEC_AND_LOCK
 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
 	int counter;
···
 }
 
 EXPORT_SYMBOL(_atomic_dec_and_lock);
-#endif /* ATOMIC_DEC_AND_LOCK */
arch/ppc64/lib/dec_and_lock.c  (-8)
···
  * has a cmpxchg, and where atomic->value is an int holding
  * the value of the atomic (i.e. the high bits aren't used
  * for a lock or anything like that).
- *
- * N.B. ATOMIC_DEC_AND_LOCK gets defined in include/linux/spinlock.h
- * if spinlocks are empty and thus atomic_dec_and_lock is defined
- * to be atomic_dec_and_test - in that case we don't need it
- * defined here as well.
  */
-
-#ifndef ATOMIC_DEC_AND_LOCK
 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
 	int counter;
···
 }
 
 EXPORT_SYMBOL(_atomic_dec_and_lock);
-#endif /* ATOMIC_DEC_AND_LOCK */
arch/ppc64/lib/locks.c  (+7 -7)
···
 /* waiting for a spinlock... */
 #if defined(CONFIG_PPC_SPLPAR) || defined(CONFIG_PPC_ISERIES)
 
-void __spin_yield(spinlock_t *lock)
+void __spin_yield(raw_spinlock_t *lock)
 {
 	unsigned int lock_value, holder_cpu, yield_count;
 	struct paca_struct *holder_paca;
 
-	lock_value = lock->lock;
+	lock_value = lock->slock;
 	if (lock_value == 0)
 		return;
 	holder_cpu = lock_value & 0xffff;
···
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
 	rmb();
-	if (lock->lock != lock_value)
+	if (lock->slock != lock_value)
 		return;		/* something has changed */
 #ifdef CONFIG_PPC_ISERIES
 	HvCall2(HvCallBaseYieldProcessor, HvCall_YieldToProc,
···
  * This turns out to be the same for read and write locks, since
  * we only know the holder if it is write-locked.
  */
-void __rw_yield(rwlock_t *rw)
+void __rw_yield(raw_rwlock_t *rw)
 {
 	int lock_value;
 	unsigned int holder_cpu, yield_count;
···
 }
 #endif
 
-void spin_unlock_wait(spinlock_t *lock)
+void __raw_spin_unlock_wait(raw_spinlock_t *lock)
 {
-	while (lock->lock) {
+	while (lock->slock) {
 		HMT_low();
 		if (SHARED_PROCESSOR)
 			__spin_yield(lock);
···
 	HMT_medium();
 }
 
-EXPORT_SYMBOL(spin_unlock_wait);
+EXPORT_SYMBOL(__raw_spin_unlock_wait);
arch/s390/lib/spinlock.c  (+6 -6)
···
 }
 
 void
-_raw_spin_lock_wait(spinlock_t *lp, unsigned int pc)
+_raw_spin_lock_wait(raw_spinlock_t *lp, unsigned int pc)
 {
 	int count = spin_retry;
···
 EXPORT_SYMBOL(_raw_spin_lock_wait);
 
 int
-_raw_spin_trylock_retry(spinlock_t *lp, unsigned int pc)
+_raw_spin_trylock_retry(raw_spinlock_t *lp, unsigned int pc)
 {
 	int count = spin_retry;
···
 EXPORT_SYMBOL(_raw_spin_trylock_retry);
 
 void
-_raw_read_lock_wait(rwlock_t *rw)
+_raw_read_lock_wait(raw_rwlock_t *rw)
 {
 	unsigned int old;
 	int count = spin_retry;
···
 EXPORT_SYMBOL(_raw_read_lock_wait);
 
 int
-_raw_read_trylock_retry(rwlock_t *rw)
+_raw_read_trylock_retry(raw_rwlock_t *rw)
 {
 	unsigned int old;
 	int count = spin_retry;
···
 EXPORT_SYMBOL(_raw_read_trylock_retry);
 
 void
-_raw_write_lock_wait(rwlock_t *rw)
+_raw_write_lock_wait(raw_rwlock_t *rw)
 {
 	int count = spin_retry;
···
 EXPORT_SYMBOL(_raw_write_lock_wait);
 
 int
-_raw_write_trylock_retry(rwlock_t *rw)
+_raw_write_trylock_retry(raw_rwlock_t *rw)
 {
 	int count = spin_retry;
arch/sparc/kernel/sparc_ksyms.c  (-10)
···
 /* used by various drivers */
 EXPORT_SYMBOL(sparc_cpu_model);
 EXPORT_SYMBOL(kernel_thread);
-#ifdef CONFIG_DEBUG_SPINLOCK
 #ifdef CONFIG_SMP
-EXPORT_SYMBOL(_do_spin_lock);
-EXPORT_SYMBOL(_do_spin_unlock);
-EXPORT_SYMBOL(_spin_trylock);
-EXPORT_SYMBOL(_do_read_lock);
-EXPORT_SYMBOL(_do_read_unlock);
-EXPORT_SYMBOL(_do_write_lock);
-EXPORT_SYMBOL(_do_write_unlock);
-#endif
-#else
 // XXX find what uses (or used) these.
 EXPORT_SYMBOL(___rw_read_enter);
 EXPORT_SYMBOL(___rw_read_exit);
arch/sparc/lib/Makefile  (-2)
···
 	 strncpy_from_user.o divdi3.o udivdi3.o strlen_user.o \
 	 copy_user.o locks.o atomic.o atomic32.o bitops.o \
 	 lshrdi3.o ashldi3.o rwsem.o muldi3.o bitext.o
-
-lib-$(CONFIG_DEBUG_SPINLOCK) += debuglocks.o
arch/sparc/lib/debuglocks.c  (-202, file deleted)
[Deletes the sparc debugging lock primitives (_do_spin_lock(),
_spin_trylock(), _do_spin_unlock(), _do_read_lock(), _do_read_unlock(),
_do_write_lock(), _do_write_unlock()) and their show()/show_read()/
show_write() report helpers.]
-5
arch/sparc64/kernel/process.c
···
  	struct thread_info *t = p->thread_info;
  	char *child_trap_frame;
 
- #ifdef CONFIG_DEBUG_SPINLOCK
- 	p->thread.smp_lock_count = 0;
- 	p->thread.smp_lock_pc = 0;
- #endif
-
  	/* Calculate offset to stack_frame & pt_regs */
  	child_trap_frame = ((char *)t) + (THREAD_SIZE - (TRACEREG_SZ+STACKFRAME_SZ));
  	memcpy(child_trap_frame, (((struct sparc_stackf *)regs)-1), (TRACEREG_SZ+STACKFRAME_SZ));
-5
arch/sparc64/kernel/sparc64_ksyms.c
···
 
  /* used by various drivers */
  #ifdef CONFIG_SMP
- #ifndef CONFIG_DEBUG_SPINLOCK
  /* Out of line rw-locking implementation. */
  EXPORT_SYMBOL(__read_lock);
  EXPORT_SYMBOL(__read_unlock);
  EXPORT_SYMBOL(__write_lock);
  EXPORT_SYMBOL(__write_unlock);
  EXPORT_SYMBOL(__write_trylock);
- /* Out of line spin-locking implementation. */
- EXPORT_SYMBOL(_raw_spin_lock);
- EXPORT_SYMBOL(_raw_spin_lock_flags);
- #endif
 
  /* Hard IRQ locking */
  EXPORT_SYMBOL(synchronize_irq);
-1
arch/sparc64/lib/Makefile
···
  	 copy_in_user.o user_fixup.o memmove.o \
  	 mcount.o ipcsum.o rwsem.o xor.o find_bit.o delay.o
 
- lib-$(CONFIG_DEBUG_SPINLOCK) += debuglocks.o
  lib-$(CONFIG_HAVE_DEC_LOCK) += dec_and_lock.o
 
  obj-y += iomap.o
-366
arch/sparc64/lib/debuglocks.c
··· 1 - /* $Id: debuglocks.c,v 1.9 2001/11/17 00:10:48 davem Exp $ 2 - * debuglocks.c: Debugging versions of SMP locking primitives. 3 - * 4 - * Copyright (C) 1998 David S. Miller (davem@redhat.com) 5 - */ 6 - 7 - #include <linux/config.h> 8 - #include <linux/kernel.h> 9 - #include <linux/sched.h> 10 - #include <linux/spinlock.h> 11 - #include <asm/system.h> 12 - 13 - #ifdef CONFIG_SMP 14 - 15 - static inline void show (char *str, spinlock_t *lock, unsigned long caller) 16 - { 17 - int cpu = smp_processor_id(); 18 - 19 - printk("%s(%p) CPU#%d stuck at %08x, owner PC(%08x):CPU(%x)\n", 20 - str, lock, cpu, (unsigned int) caller, 21 - lock->owner_pc, lock->owner_cpu); 22 - } 23 - 24 - static inline void show_read (char *str, rwlock_t *lock, unsigned long caller) 25 - { 26 - int cpu = smp_processor_id(); 27 - 28 - printk("%s(%p) CPU#%d stuck at %08x, writer PC(%08x):CPU(%x)\n", 29 - str, lock, cpu, (unsigned int) caller, 30 - lock->writer_pc, lock->writer_cpu); 31 - } 32 - 33 - static inline void show_write (char *str, rwlock_t *lock, unsigned long caller) 34 - { 35 - int cpu = smp_processor_id(); 36 - int i; 37 - 38 - printk("%s(%p) CPU#%d stuck at %08x\n", 39 - str, lock, cpu, (unsigned int) caller); 40 - printk("Writer: PC(%08x):CPU(%x)\n", 41 - lock->writer_pc, lock->writer_cpu); 42 - printk("Readers:"); 43 - for (i = 0; i < NR_CPUS; i++) 44 - if (lock->reader_pc[i]) 45 - printk(" %d[%08x]", i, lock->reader_pc[i]); 46 - printk("\n"); 47 - } 48 - 49 - #undef INIT_STUCK 50 - #define INIT_STUCK 100000000 51 - 52 - void _do_spin_lock(spinlock_t *lock, char *str, unsigned long caller) 53 - { 54 - unsigned long val; 55 - int stuck = INIT_STUCK; 56 - int cpu = get_cpu(); 57 - int shown = 0; 58 - 59 - again: 60 - __asm__ __volatile__("ldstub [%1], %0" 61 - : "=r" (val) 62 - : "r" (&(lock->lock)) 63 - : "memory"); 64 - membar_storeload_storestore(); 65 - if (val) { 66 - while (lock->lock) { 67 - if (!--stuck) { 68 - if (shown++ <= 2) 69 - show(str, lock, caller); 70 - stuck 
= INIT_STUCK; 71 - } 72 - rmb(); 73 - } 74 - goto again; 75 - } 76 - lock->owner_pc = ((unsigned int)caller); 77 - lock->owner_cpu = cpu; 78 - current->thread.smp_lock_count++; 79 - current->thread.smp_lock_pc = ((unsigned int)caller); 80 - 81 - put_cpu(); 82 - } 83 - 84 - int _do_spin_trylock(spinlock_t *lock, unsigned long caller) 85 - { 86 - unsigned long val; 87 - int cpu = get_cpu(); 88 - 89 - __asm__ __volatile__("ldstub [%1], %0" 90 - : "=r" (val) 91 - : "r" (&(lock->lock)) 92 - : "memory"); 93 - membar_storeload_storestore(); 94 - if (!val) { 95 - lock->owner_pc = ((unsigned int)caller); 96 - lock->owner_cpu = cpu; 97 - current->thread.smp_lock_count++; 98 - current->thread.smp_lock_pc = ((unsigned int)caller); 99 - } 100 - 101 - put_cpu(); 102 - 103 - return val == 0; 104 - } 105 - 106 - void _do_spin_unlock(spinlock_t *lock) 107 - { 108 - lock->owner_pc = 0; 109 - lock->owner_cpu = NO_PROC_ID; 110 - membar_storestore_loadstore(); 111 - lock->lock = 0; 112 - current->thread.smp_lock_count--; 113 - } 114 - 115 - /* Keep INIT_STUCK the same... */ 116 - 117 - void _do_read_lock(rwlock_t *rw, char *str, unsigned long caller) 118 - { 119 - unsigned long val; 120 - int stuck = INIT_STUCK; 121 - int cpu = get_cpu(); 122 - int shown = 0; 123 - 124 - wlock_again: 125 - /* Wait for any writer to go away. */ 126 - while (((long)(rw->lock)) < 0) { 127 - if (!--stuck) { 128 - if (shown++ <= 2) 129 - show_read(str, rw, caller); 130 - stuck = INIT_STUCK; 131 - } 132 - rmb(); 133 - } 134 - /* Try once to increment the counter. 
*/ 135 - __asm__ __volatile__( 136 - " ldx [%0], %%g1\n" 137 - " brlz,a,pn %%g1, 2f\n" 138 - " mov 1, %0\n" 139 - " add %%g1, 1, %%g7\n" 140 - " casx [%0], %%g1, %%g7\n" 141 - " sub %%g1, %%g7, %0\n" 142 - "2:" : "=r" (val) 143 - : "0" (&(rw->lock)) 144 - : "g1", "g7", "memory"); 145 - membar_storeload_storestore(); 146 - if (val) 147 - goto wlock_again; 148 - rw->reader_pc[cpu] = ((unsigned int)caller); 149 - current->thread.smp_lock_count++; 150 - current->thread.smp_lock_pc = ((unsigned int)caller); 151 - 152 - put_cpu(); 153 - } 154 - 155 - void _do_read_unlock(rwlock_t *rw, char *str, unsigned long caller) 156 - { 157 - unsigned long val; 158 - int stuck = INIT_STUCK; 159 - int cpu = get_cpu(); 160 - int shown = 0; 161 - 162 - /* Drop our identity _first_. */ 163 - rw->reader_pc[cpu] = 0; 164 - current->thread.smp_lock_count--; 165 - runlock_again: 166 - /* Spin trying to decrement the counter using casx. */ 167 - __asm__ __volatile__( 168 - " membar #StoreLoad | #LoadLoad\n" 169 - " ldx [%0], %%g1\n" 170 - " sub %%g1, 1, %%g7\n" 171 - " casx [%0], %%g1, %%g7\n" 172 - " membar #StoreLoad | #StoreStore\n" 173 - " sub %%g1, %%g7, %0\n" 174 - : "=r" (val) 175 - : "0" (&(rw->lock)) 176 - : "g1", "g7", "memory"); 177 - if (val) { 178 - if (!--stuck) { 179 - if (shown++ <= 2) 180 - show_read(str, rw, caller); 181 - stuck = INIT_STUCK; 182 - } 183 - goto runlock_again; 184 - } 185 - 186 - put_cpu(); 187 - } 188 - 189 - void _do_write_lock(rwlock_t *rw, char *str, unsigned long caller) 190 - { 191 - unsigned long val; 192 - int stuck = INIT_STUCK; 193 - int cpu = get_cpu(); 194 - int shown = 0; 195 - 196 - wlock_again: 197 - /* Spin while there is another writer. */ 198 - while (((long)rw->lock) < 0) { 199 - if (!--stuck) { 200 - if (shown++ <= 2) 201 - show_write(str, rw, caller); 202 - stuck = INIT_STUCK; 203 - } 204 - rmb(); 205 - } 206 - 207 - /* Try to acuire the write bit. 
*/ 208 - __asm__ __volatile__( 209 - " mov 1, %%g3\n" 210 - " sllx %%g3, 63, %%g3\n" 211 - " ldx [%0], %%g1\n" 212 - " brlz,pn %%g1, 1f\n" 213 - " or %%g1, %%g3, %%g7\n" 214 - " casx [%0], %%g1, %%g7\n" 215 - " membar #StoreLoad | #StoreStore\n" 216 - " ba,pt %%xcc, 2f\n" 217 - " sub %%g1, %%g7, %0\n" 218 - "1: mov 1, %0\n" 219 - "2:" : "=r" (val) 220 - : "0" (&(rw->lock)) 221 - : "g3", "g1", "g7", "memory"); 222 - if (val) { 223 - /* We couldn't get the write bit. */ 224 - if (!--stuck) { 225 - if (shown++ <= 2) 226 - show_write(str, rw, caller); 227 - stuck = INIT_STUCK; 228 - } 229 - goto wlock_again; 230 - } 231 - if ((rw->lock & ((1UL<<63)-1UL)) != 0UL) { 232 - /* Readers still around, drop the write 233 - * lock, spin, and try again. 234 - */ 235 - if (!--stuck) { 236 - if (shown++ <= 2) 237 - show_write(str, rw, caller); 238 - stuck = INIT_STUCK; 239 - } 240 - __asm__ __volatile__( 241 - " mov 1, %%g3\n" 242 - " sllx %%g3, 63, %%g3\n" 243 - "1: ldx [%0], %%g1\n" 244 - " andn %%g1, %%g3, %%g7\n" 245 - " casx [%0], %%g1, %%g7\n" 246 - " cmp %%g1, %%g7\n" 247 - " membar #StoreLoad | #StoreStore\n" 248 - " bne,pn %%xcc, 1b\n" 249 - " nop" 250 - : /* no outputs */ 251 - : "r" (&(rw->lock)) 252 - : "g3", "g1", "g7", "cc", "memory"); 253 - while(rw->lock != 0) { 254 - if (!--stuck) { 255 - if (shown++ <= 2) 256 - show_write(str, rw, caller); 257 - stuck = INIT_STUCK; 258 - } 259 - rmb(); 260 - } 261 - goto wlock_again; 262 - } 263 - 264 - /* We have it, say who we are. 
*/ 265 - rw->writer_pc = ((unsigned int)caller); 266 - rw->writer_cpu = cpu; 267 - current->thread.smp_lock_count++; 268 - current->thread.smp_lock_pc = ((unsigned int)caller); 269 - 270 - put_cpu(); 271 - } 272 - 273 - void _do_write_unlock(rwlock_t *rw, unsigned long caller) 274 - { 275 - unsigned long val; 276 - int stuck = INIT_STUCK; 277 - int shown = 0; 278 - 279 - /* Drop our identity _first_ */ 280 - rw->writer_pc = 0; 281 - rw->writer_cpu = NO_PROC_ID; 282 - current->thread.smp_lock_count--; 283 - wlock_again: 284 - __asm__ __volatile__( 285 - " membar #StoreLoad | #LoadLoad\n" 286 - " mov 1, %%g3\n" 287 - " sllx %%g3, 63, %%g3\n" 288 - " ldx [%0], %%g1\n" 289 - " andn %%g1, %%g3, %%g7\n" 290 - " casx [%0], %%g1, %%g7\n" 291 - " membar #StoreLoad | #StoreStore\n" 292 - " sub %%g1, %%g7, %0\n" 293 - : "=r" (val) 294 - : "0" (&(rw->lock)) 295 - : "g3", "g1", "g7", "memory"); 296 - if (val) { 297 - if (!--stuck) { 298 - if (shown++ <= 2) 299 - show_write("write_unlock", rw, caller); 300 - stuck = INIT_STUCK; 301 - } 302 - goto wlock_again; 303 - } 304 - } 305 - 306 - int _do_write_trylock(rwlock_t *rw, char *str, unsigned long caller) 307 - { 308 - unsigned long val; 309 - int cpu = get_cpu(); 310 - 311 - /* Try to acuire the write bit. */ 312 - __asm__ __volatile__( 313 - " mov 1, %%g3\n" 314 - " sllx %%g3, 63, %%g3\n" 315 - " ldx [%0], %%g1\n" 316 - " brlz,pn %%g1, 1f\n" 317 - " or %%g1, %%g3, %%g7\n" 318 - " casx [%0], %%g1, %%g7\n" 319 - " membar #StoreLoad | #StoreStore\n" 320 - " ba,pt %%xcc, 2f\n" 321 - " sub %%g1, %%g7, %0\n" 322 - "1: mov 1, %0\n" 323 - "2:" : "=r" (val) 324 - : "0" (&(rw->lock)) 325 - : "g3", "g1", "g7", "memory"); 326 - 327 - if (val) { 328 - put_cpu(); 329 - return 0; 330 - } 331 - 332 - if ((rw->lock & ((1UL<<63)-1UL)) != 0UL) { 333 - /* Readers still around, drop the write 334 - * lock, return failure. 
335 - */ 336 - __asm__ __volatile__( 337 - " mov 1, %%g3\n" 338 - " sllx %%g3, 63, %%g3\n" 339 - "1: ldx [%0], %%g1\n" 340 - " andn %%g1, %%g3, %%g7\n" 341 - " casx [%0], %%g1, %%g7\n" 342 - " cmp %%g1, %%g7\n" 343 - " membar #StoreLoad | #StoreStore\n" 344 - " bne,pn %%xcc, 1b\n" 345 - " nop" 346 - : /* no outputs */ 347 - : "r" (&(rw->lock)) 348 - : "g3", "g1", "g7", "cc", "memory"); 349 - 350 - put_cpu(); 351 - 352 - return 0; 353 - } 354 - 355 - /* We have it, say who we are. */ 356 - rw->writer_pc = ((unsigned int)caller); 357 - rw->writer_cpu = cpu; 358 - current->thread.smp_lock_count++; 359 - current->thread.smp_lock_pc = ((unsigned int)caller); 360 - 361 - put_cpu(); 362 - 363 - return 1; 364 - } 365 - 366 - #endif /* CONFIG_SMP */
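The debuglocks.c code removed above encodes the sparc64 rwlock in a single 64-bit word: bit 63 is the writer bit (tested by `brlz`), and the low 63 bits count readers (bumped by the `casx` loop; a writer that wins the bit but still sees readers backs off and retries). As a reading aid, here is a portable C11 model of that protocol; all names are mine, not kernel API, and this deliberately omits the debug bookkeeping and lockup reporting.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define WRITER_BIT (1ULL << 63)        /* bit 63, as tested by brlz */

typedef struct { _Atomic uint64_t word; } model_rwlock_t;

static bool model_read_trylock(model_rwlock_t *rw)
{
	uint64_t old = atomic_load(&rw->word);
	if (old & WRITER_BIT)          /* writer present: fail */
		return false;
	/* casx: install old+1 only if nobody raced us */
	return atomic_compare_exchange_strong(&rw->word, &old, old + 1);
}

static void model_read_unlock(model_rwlock_t *rw)
{
	atomic_fetch_sub(&rw->word, 1);
}

static bool model_write_trylock(model_rwlock_t *rw)
{
	uint64_t old = atomic_load(&rw->word);
	if (old & WRITER_BIT)
		return false;
	if (!atomic_compare_exchange_strong(&rw->word, &old,
					    old | WRITER_BIT))
		return false;
	if (old != 0) {                /* readers still around: drop the
					* write bit and report failure */
		atomic_fetch_and(&rw->word, ~WRITER_BIT);
		return false;
	}
	return true;
}

static void model_write_unlock(model_rwlock_t *rw)
{
	atomic_fetch_and(&rw->word, ~WRITER_BIT);
}
```

The blocking `_do_write_lock()` above is this trylock wrapped in a spin loop, plus the stuck-counter diagnostics that the new generic lib/spinlock_debug.c now provides for every architecture.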
+1
fs/buffer.c
··· 40 40 #include <linux/cpu.h> 41 41 #include <linux/bitops.h> 42 42 #include <linux/mpage.h> 43 + #include <linux/bit_spinlock.h> 43 44 44 45 static int fsync_buffers_list(spinlock_t *lock, struct list_head *list); 45 46 static void invalidate_bh_lrus(void);
+39 -81
include/asm-alpha/spinlock.h
··· 6 6 #include <linux/kernel.h> 7 7 #include <asm/current.h> 8 8 9 - 10 9 /* 11 10 * Simple spin lock operations. There are two variants, one clears IRQ's 12 11 * on the local processor, one does not. ··· 13 14 * We make no fairness assumptions. They have a cost. 14 15 */ 15 16 16 - typedef struct { 17 - volatile unsigned int lock; 18 - #ifdef CONFIG_DEBUG_SPINLOCK 19 - int on_cpu; 20 - int line_no; 21 - void *previous; 22 - struct task_struct * task; 23 - const char *base_file; 24 - #endif 25 - } spinlock_t; 17 + #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock) 18 + #define __raw_spin_is_locked(x) ((x)->lock != 0) 19 + #define __raw_spin_unlock_wait(x) \ 20 + do { cpu_relax(); } while ((x)->lock) 26 21 27 - #ifdef CONFIG_DEBUG_SPINLOCK 28 - #define SPIN_LOCK_UNLOCKED (spinlock_t){ 0, -1, 0, NULL, NULL, NULL } 29 - #else 30 - #define SPIN_LOCK_UNLOCKED (spinlock_t){ 0 } 31 - #endif 32 - 33 - #define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while(0) 34 - #define spin_is_locked(x) ((x)->lock != 0) 35 - #define spin_unlock_wait(x) do { barrier(); } while ((x)->lock) 36 - 37 - #ifdef CONFIG_DEBUG_SPINLOCK 38 - extern void _raw_spin_unlock(spinlock_t * lock); 39 - extern void debug_spin_lock(spinlock_t * lock, const char *, int); 40 - extern int debug_spin_trylock(spinlock_t * lock, const char *, int); 41 - #define _raw_spin_lock(LOCK) \ 42 - debug_spin_lock(LOCK, __BASE_FILE__, __LINE__) 43 - #define _raw_spin_trylock(LOCK) \ 44 - debug_spin_trylock(LOCK, __BASE_FILE__, __LINE__) 45 - #else 46 - static inline void _raw_spin_unlock(spinlock_t * lock) 22 + static inline void __raw_spin_unlock(raw_spinlock_t * lock) 47 23 { 48 24 mb(); 49 25 lock->lock = 0; 50 26 } 51 27 52 - static inline void _raw_spin_lock(spinlock_t * lock) 28 + static inline void __raw_spin_lock(raw_spinlock_t * lock) 53 29 { 54 30 long tmp; 55 31 ··· 44 70 : "m"(lock->lock) : "memory"); 45 71 } 46 72 47 - static inline int _raw_spin_trylock(spinlock_t *lock) 73 + static 
inline int __raw_spin_trylock(raw_spinlock_t *lock) 48 74 { 49 75 return !test_and_set_bit(0, &lock->lock); 50 76 } 51 - #endif /* CONFIG_DEBUG_SPINLOCK */ 52 - 53 - #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 54 77 55 78 /***********************************************************/ 56 79 57 - typedef struct { 58 - volatile unsigned int lock; 59 - } rwlock_t; 60 - 61 - #define RW_LOCK_UNLOCKED (rwlock_t){ 0 } 62 - 63 - #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) 64 - 65 - static inline int read_can_lock(rwlock_t *lock) 80 + static inline int __raw_read_can_lock(raw_rwlock_t *lock) 66 81 { 67 82 return (lock->lock & 1) == 0; 68 83 } 69 84 70 - static inline int write_can_lock(rwlock_t *lock) 85 + static inline int __raw_write_can_lock(raw_rwlock_t *lock) 71 86 { 72 87 return lock->lock == 0; 73 88 } 74 89 75 - #ifdef CONFIG_DEBUG_RWLOCK 76 - extern void _raw_write_lock(rwlock_t * lock); 77 - extern void _raw_read_lock(rwlock_t * lock); 78 - #else 79 - static inline void _raw_write_lock(rwlock_t * lock) 90 + static inline void __raw_read_lock(raw_rwlock_t *lock) 91 + { 92 + long regx; 93 + 94 + __asm__ __volatile__( 95 + "1: ldl_l %1,%0\n" 96 + " blbs %1,6f\n" 97 + " subl %1,2,%1\n" 98 + " stl_c %1,%0\n" 99 + " beq %1,6f\n" 100 + " mb\n" 101 + ".subsection 2\n" 102 + "6: ldl %1,%0\n" 103 + " blbs %1,6b\n" 104 + " br 1b\n" 105 + ".previous" 106 + : "=m" (*lock), "=&r" (regx) 107 + : "m" (*lock) : "memory"); 108 + } 109 + 110 + static inline void __raw_write_lock(raw_rwlock_t *lock) 80 111 { 81 112 long regx; 82 113 ··· 101 122 : "m" (*lock) : "memory"); 102 123 } 103 124 104 - static inline void _raw_read_lock(rwlock_t * lock) 105 - { 106 - long regx; 107 - 108 - __asm__ __volatile__( 109 - "1: ldl_l %1,%0\n" 110 - " blbs %1,6f\n" 111 - " subl %1,2,%1\n" 112 - " stl_c %1,%0\n" 113 - " beq %1,6f\n" 114 - " mb\n" 115 - ".subsection 2\n" 116 - "6: ldl %1,%0\n" 117 - " blbs %1,6b\n" 118 - " br 1b\n" 119 - ".previous" 120 - : "=m" 
(*lock), "=&r" (regx) 121 - : "m" (*lock) : "memory"); 122 - } 123 - #endif /* CONFIG_DEBUG_RWLOCK */ 124 - 125 - static inline int _raw_read_trylock(rwlock_t * lock) 125 + static inline int __raw_read_trylock(raw_rwlock_t * lock) 126 126 { 127 127 long regx; 128 128 int success; ··· 123 165 return success; 124 166 } 125 167 126 - static inline int _raw_write_trylock(rwlock_t * lock) 168 + static inline int __raw_write_trylock(raw_rwlock_t * lock) 127 169 { 128 170 long regx; 129 171 int success; ··· 145 187 return success; 146 188 } 147 189 148 - static inline void _raw_write_unlock(rwlock_t * lock) 149 - { 150 - mb(); 151 - lock->lock = 0; 152 - } 153 - 154 - static inline void _raw_read_unlock(rwlock_t * lock) 190 + static inline void __raw_read_unlock(raw_rwlock_t * lock) 155 191 { 156 192 long regx; 157 193 __asm__ __volatile__( ··· 159 207 ".previous" 160 208 : "=m" (*lock), "=&r" (regx) 161 209 : "m" (*lock) : "memory"); 210 + } 211 + 212 + static inline void __raw_write_unlock(raw_rwlock_t * lock) 213 + { 214 + mb(); 215 + lock->lock = 0; 162 216 } 163 217 164 218 #endif /* _ALPHA_SPINLOCK_H */
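The Alpha rwlock word above uses bit 0 as the writer bit (`blbs` branches on it), while each reader subtracts 2 (`subl %1,2`), so the word is exactly 0 when the lock is free and an even value while only readers hold it. A C11 sketch of that encoding, with the `ldl_l`/`stl_c` pair collapsed into a compare-and-swap; the names are illustrative, not kernel API.

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { _Atomic int lock; } model_rwlock_t;

static bool model_read_can_lock(model_rwlock_t *l)
{
	return (atomic_load(&l->lock) & 1) == 0;   /* no writer bit */
}

static bool model_write_can_lock(model_rwlock_t *l)
{
	return atomic_load(&l->lock) == 0;         /* no writer, no readers */
}

static bool model_read_trylock(model_rwlock_t *l)
{
	int old = atomic_load(&l->lock);
	if (old & 1)                               /* blbs: writer holds it */
		return false;
	/* stl_c: succeeds only if nobody stored to the word since ldl_l */
	return atomic_compare_exchange_strong(&l->lock, &old, old - 2);
}

static void model_read_unlock(model_rwlock_t *l)
{
	atomic_fetch_add(&l->lock, 2);
}

static bool model_write_trylock(model_rwlock_t *l)
{
	int old = 0;                               /* only a 0 word is lockable */
	return atomic_compare_exchange_strong(&l->lock, &old, 1);
}

static void model_write_unlock(model_rwlock_t *l)
{
	atomic_store(&l->lock, 0);
}
```

Because readers move the word in steps of 2, bit 0 stays clear no matter how many readers hold the lock, which is what lets `__raw_read_can_lock()` test a single bit.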
+20
include/asm-alpha/spinlock_types.h
··· 1 + #ifndef _ALPHA_SPINLOCK_TYPES_H 2 + #define _ALPHA_SPINLOCK_TYPES_H 3 + 4 + #ifndef __LINUX_SPINLOCK_TYPES_H 5 + # error "please don't include this file directly" 6 + #endif 7 + 8 + typedef struct { 9 + volatile unsigned int lock; 10 + } raw_spinlock_t; 11 + 12 + #define __RAW_SPIN_LOCK_UNLOCKED { 0 } 13 + 14 + typedef struct { 15 + volatile unsigned int lock; 16 + } raw_rwlock_t; 17 + 18 + #define __RAW_RW_LOCK_UNLOCKED { 0 } 19 + 20 + #endif
+17 -33
include/asm-arm/spinlock.h
··· 16 16 * Unlocked value: 0 17 17 * Locked value: 1 18 18 */ 19 - typedef struct { 20 - volatile unsigned int lock; 21 - #ifdef CONFIG_PREEMPT 22 - unsigned int break_lock; 23 - #endif 24 - } spinlock_t; 25 19 26 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 } 20 + #define __raw_spin_is_locked(x) ((x)->lock != 0) 21 + #define __raw_spin_unlock_wait(lock) \ 22 + do { while (__raw_spin_is_locked(lock)) cpu_relax(); } while (0) 27 23 28 - #define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while (0) 29 - #define spin_is_locked(x) ((x)->lock != 0) 30 - #define spin_unlock_wait(x) do { barrier(); } while (spin_is_locked(x)) 31 - #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 24 + #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock) 32 25 33 - static inline void _raw_spin_lock(spinlock_t *lock) 26 + static inline void __raw_spin_lock(raw_spinlock_t *lock) 34 27 { 35 28 unsigned long tmp; 36 29 ··· 40 47 smp_mb(); 41 48 } 42 49 43 - static inline int _raw_spin_trylock(spinlock_t *lock) 50 + static inline int __raw_spin_trylock(raw_spinlock_t *lock) 44 51 { 45 52 unsigned long tmp; 46 53 ··· 60 67 } 61 68 } 62 69 63 - static inline void _raw_spin_unlock(spinlock_t *lock) 70 + static inline void __raw_spin_unlock(raw_spinlock_t *lock) 64 71 { 65 72 smp_mb(); 66 73 ··· 73 80 74 81 /* 75 82 * RWLOCKS 76 - */ 77 - typedef struct { 78 - volatile unsigned int lock; 79 - #ifdef CONFIG_PREEMPT 80 - unsigned int break_lock; 81 - #endif 82 - } rwlock_t; 83 - 84 - #define RW_LOCK_UNLOCKED (rwlock_t) { 0 } 85 - #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while (0) 86 - #define rwlock_is_locked(x) (*((volatile unsigned int *)(x)) != 0) 87 - 88 - /* 83 + * 84 + * 89 85 * Write locks are easy - we just set bit 31. When unlocking, we can 90 86 * just write zero since the lock is exclusively held. 
91 87 */ 92 - static inline void _raw_write_lock(rwlock_t *rw) 88 + #define rwlock_is_locked(x) (*((volatile unsigned int *)(x)) != 0) 89 + 90 + static inline void __raw_write_lock(raw_rwlock_t *rw) 93 91 { 94 92 unsigned long tmp; 95 93 ··· 97 113 smp_mb(); 98 114 } 99 115 100 - static inline int _raw_write_trylock(rwlock_t *rw) 116 + static inline int __raw_write_trylock(raw_rwlock_t *rw) 101 117 { 102 118 unsigned long tmp; 103 119 ··· 117 133 } 118 134 } 119 135 120 - static inline void _raw_write_unlock(rwlock_t *rw) 136 + static inline void __raw_write_unlock(raw_rwlock_t *rw) 121 137 { 122 138 smp_mb(); 123 139 ··· 140 156 * currently active. However, we know we won't have any write 141 157 * locks. 142 158 */ 143 - static inline void _raw_read_lock(rwlock_t *rw) 159 + static inline void __raw_read_lock(raw_rwlock_t *rw) 144 160 { 145 161 unsigned long tmp, tmp2; 146 162 ··· 157 173 smp_mb(); 158 174 } 159 175 160 - static inline void _raw_read_unlock(rwlock_t *rw) 176 + static inline void __raw_read_unlock(raw_rwlock_t *rw) 161 177 { 162 178 unsigned long tmp, tmp2; 163 179 ··· 174 190 : "cc"); 175 191 } 176 192 177 - #define _raw_read_trylock(lock) generic_raw_read_trylock(lock) 193 + #define __raw_read_trylock(lock) generic__raw_read_trylock(lock) 178 194 179 195 #endif /* __ASM_SPINLOCK_H */
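The ARM `__raw_spin_lock()` above is an `ldrex`/`strex` loop that swaps 1 into the word and retries until it observed 0. A C11 sketch of that test-and-set protocol, with the exclusive-load/store pair collapsed into an atomic exchange; illustrative names, not the kernel implementation.

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { _Atomic unsigned int lock; } model_spinlock_t;

static bool model_spin_trylock(model_spinlock_t *l)
{
	/* ldrex/strex pair behaves like an atomic exchange:
	 * seeing the old value 0 means we now own the lock */
	return atomic_exchange_explicit(&l->lock, 1,
					memory_order_acquire) == 0;
}

static void model_spin_lock(model_spinlock_t *l)
{
	while (!model_spin_trylock(l))
		;                      /* the wfe()/cpu_relax() spot */
}

static void model_spin_unlock(model_spinlock_t *l)
{
	/* smp_mb(); then a plain store of 0 releases the lock */
	atomic_store_explicit(&l->lock, 0, memory_order_release);
}
```

The rwlock in the same file follows the pattern described in its comment: writers set bit 31 and unlock by storing zero, readers increment the low bits and refuse to proceed while bit 31 is set.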
+20
include/asm-arm/spinlock_types.h
··· 1 + #ifndef __ASM_SPINLOCK_TYPES_H 2 + #define __ASM_SPINLOCK_TYPES_H 3 + 4 + #ifndef __LINUX_SPINLOCK_TYPES_H 5 + # error "please don't include this file directly" 6 + #endif 7 + 8 + typedef struct { 9 + volatile unsigned int lock; 10 + } raw_spinlock_t; 11 + 12 + #define __RAW_SPIN_LOCK_UNLOCKED { 0 } 13 + 14 + typedef struct { 15 + volatile unsigned int lock; 16 + } raw_rwlock_t; 17 + 18 + #define __RAW_RW_LOCK_UNLOCKED { 0 } 19 + 20 + #endif
+78 -138
include/asm-i386/spinlock.h
··· 7 7 #include <linux/config.h> 8 8 #include <linux/compiler.h> 9 9 10 - asmlinkage int printk(const char * fmt, ...) 11 - __attribute__ ((format (printf, 1, 2))); 12 - 13 10 /* 14 11 * Your basic SMP spinlocks, allowing only a single CPU anywhere 15 - */ 16 - 17 - typedef struct { 18 - volatile unsigned int slock; 19 - #ifdef CONFIG_DEBUG_SPINLOCK 20 - unsigned magic; 21 - #endif 22 - #ifdef CONFIG_PREEMPT 23 - unsigned int break_lock; 24 - #endif 25 - } spinlock_t; 26 - 27 - #define SPINLOCK_MAGIC 0xdead4ead 28 - 29 - #ifdef CONFIG_DEBUG_SPINLOCK 30 - #define SPINLOCK_MAGIC_INIT , SPINLOCK_MAGIC 31 - #else 32 - #define SPINLOCK_MAGIC_INIT /* */ 33 - #endif 34 - 35 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { 1 SPINLOCK_MAGIC_INIT } 36 - 37 - #define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while(0) 38 - 39 - /* 12 + * 40 13 * Simple spin lock operations. There are two variants, one clears IRQ's 41 14 * on the local processor, one does not. 42 15 * 43 16 * We make no fairness assumptions. They have a cost. 17 + * 18 + * (the type definitions are in asm/spinlock_types.h) 44 19 */ 45 20 46 - #define spin_is_locked(x) (*(volatile signed char *)(&(x)->slock) <= 0) 47 - #define spin_unlock_wait(x) do { barrier(); } while(spin_is_locked(x)) 21 + #define __raw_spin_is_locked(x) \ 22 + (*(volatile signed char *)(&(x)->slock) <= 0) 48 23 49 - #define spin_lock_string \ 24 + #define __raw_spin_lock_string \ 50 25 "\n1:\t" \ 51 26 "lock ; decb %0\n\t" \ 52 27 "jns 3f\n" \ ··· 32 57 "jmp 1b\n" \ 33 58 "3:\n\t" 34 59 35 - #define spin_lock_string_flags \ 60 + #define __raw_spin_lock_string_flags \ 36 61 "\n1:\t" \ 37 62 "lock ; decb %0\n\t" \ 38 63 "jns 4f\n\t" \ ··· 48 73 "jmp 1b\n" \ 49 74 "4:\n\t" 50 75 51 - /* 52 - * This works. Despite all the confusion. 
53 - * (except on PPro SMP or if we are using OOSTORE) 54 - * (PPro errata 66, 92) 55 - */ 56 - 57 - #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE) 58 - 59 - #define spin_unlock_string \ 60 - "movb $1,%0" \ 61 - :"=m" (lock->slock) : : "memory" 62 - 63 - 64 - static inline void _raw_spin_unlock(spinlock_t *lock) 76 + static inline void __raw_spin_lock(raw_spinlock_t *lock) 65 77 { 66 - #ifdef CONFIG_DEBUG_SPINLOCK 67 - BUG_ON(lock->magic != SPINLOCK_MAGIC); 68 - BUG_ON(!spin_is_locked(lock)); 69 - #endif 70 78 __asm__ __volatile__( 71 - spin_unlock_string 72 - ); 79 + __raw_spin_lock_string 80 + :"=m" (lock->slock) : : "memory"); 73 81 } 74 82 75 - #else 76 - 77 - #define spin_unlock_string \ 78 - "xchgb %b0, %1" \ 79 - :"=q" (oldval), "=m" (lock->slock) \ 80 - :"0" (oldval) : "memory" 81 - 82 - static inline void _raw_spin_unlock(spinlock_t *lock) 83 + static inline void __raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long flags) 83 84 { 84 - char oldval = 1; 85 - #ifdef CONFIG_DEBUG_SPINLOCK 86 - BUG_ON(lock->magic != SPINLOCK_MAGIC); 87 - BUG_ON(!spin_is_locked(lock)); 88 - #endif 89 85 __asm__ __volatile__( 90 - spin_unlock_string 91 - ); 86 + __raw_spin_lock_string_flags 87 + :"=m" (lock->slock) : "r" (flags) : "memory"); 92 88 } 93 89 94 - #endif 95 - 96 - static inline int _raw_spin_trylock(spinlock_t *lock) 90 + static inline int __raw_spin_trylock(raw_spinlock_t *lock) 97 91 { 98 92 char oldval; 99 93 __asm__ __volatile__( ··· 72 128 return oldval > 0; 73 129 } 74 130 75 - static inline void _raw_spin_lock(spinlock_t *lock) 131 + /* 132 + * __raw_spin_unlock based on writing $1 to the low byte. 133 + * This method works. Despite all the confusion. 
134 + * (except on PPro SMP or if we are using OOSTORE, so we use xchgb there) 135 + * (PPro errata 66, 92) 136 + */ 137 + 138 + #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE) 139 + 140 + #define __raw_spin_unlock_string \ 141 + "movb $1,%0" \ 142 + :"=m" (lock->slock) : : "memory" 143 + 144 + 145 + static inline void __raw_spin_unlock(raw_spinlock_t *lock) 76 146 { 77 - #ifdef CONFIG_DEBUG_SPINLOCK 78 - if (unlikely(lock->magic != SPINLOCK_MAGIC)) { 79 - printk("eip: %p\n", __builtin_return_address(0)); 80 - BUG(); 81 - } 82 - #endif 83 147 __asm__ __volatile__( 84 - spin_lock_string 85 - :"=m" (lock->slock) : : "memory"); 148 + __raw_spin_unlock_string 149 + ); 86 150 } 87 151 88 - static inline void _raw_spin_lock_flags (spinlock_t *lock, unsigned long flags) 152 + #else 153 + 154 + #define __raw_spin_unlock_string \ 155 + "xchgb %b0, %1" \ 156 + :"=q" (oldval), "=m" (lock->slock) \ 157 + :"0" (oldval) : "memory" 158 + 159 + static inline void __raw_spin_unlock(raw_spinlock_t *lock) 89 160 { 90 - #ifdef CONFIG_DEBUG_SPINLOCK 91 - if (unlikely(lock->magic != SPINLOCK_MAGIC)) { 92 - printk("eip: %p\n", __builtin_return_address(0)); 93 - BUG(); 94 - } 95 - #endif 161 + char oldval = 1; 162 + 96 163 __asm__ __volatile__( 97 - spin_lock_string_flags 98 - :"=m" (lock->slock) : "r" (flags) : "memory"); 164 + __raw_spin_unlock_string 165 + ); 99 166 } 167 + 168 + #endif 169 + 170 + #define __raw_spin_unlock_wait(lock) \ 171 + do { while (__raw_spin_is_locked(lock)) cpu_relax(); } while (0) 100 172 101 173 /* 102 174 * Read-write spinlocks, allowing multiple readers ··· 123 163 * can "mix" irq-safe locks - any writer needs to get a 124 164 * irq-safe write-lock, but readers can get non-irqsafe 125 165 * read-locks. 
126 - */ 127 - typedef struct { 128 - volatile unsigned int lock; 129 - #ifdef CONFIG_DEBUG_SPINLOCK 130 - unsigned magic; 131 - #endif 132 - #ifdef CONFIG_PREEMPT 133 - unsigned int break_lock; 134 - #endif 135 - } rwlock_t; 136 - 137 - #define RWLOCK_MAGIC 0xdeaf1eed 138 - 139 - #ifdef CONFIG_DEBUG_SPINLOCK 140 - #define RWLOCK_MAGIC_INIT , RWLOCK_MAGIC 141 - #else 142 - #define RWLOCK_MAGIC_INIT /* */ 143 - #endif 144 - 145 - #define RW_LOCK_UNLOCKED (rwlock_t) { RW_LOCK_BIAS RWLOCK_MAGIC_INIT } 146 - 147 - #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) 148 - 149 - /** 150 - * read_can_lock - would read_trylock() succeed? 151 - * @lock: the rwlock in question. 152 - */ 153 - #define read_can_lock(x) ((int)(x)->lock > 0) 154 - 155 - /** 156 - * write_can_lock - would write_trylock() succeed? 157 - * @lock: the rwlock in question. 158 - */ 159 - #define write_can_lock(x) ((x)->lock == RW_LOCK_BIAS) 160 - 161 - /* 166 + * 162 167 * On x86, we implement read-write locks as a 32-bit counter 163 168 * with the high bit (sign) being the "contended" bit. 164 169 * ··· 131 206 * 132 207 * Changed to use the same technique as rw semaphores. See 133 208 * semaphore.h for details. -ben 209 + * 210 + * the helpers are in arch/i386/kernel/semaphore.c 134 211 */ 135 - /* the spinlock helpers are in arch/i386/kernel/semaphore.c */ 136 212 137 - static inline void _raw_read_lock(rwlock_t *rw) 213 + /** 214 + * read_can_lock - would read_trylock() succeed? 215 + * @lock: the rwlock in question. 216 + */ 217 + #define __raw_read_can_lock(x) ((int)(x)->lock > 0) 218 + 219 + /** 220 + * write_can_lock - would write_trylock() succeed? 221 + * @lock: the rwlock in question. 
222 + */ 223 + #define __raw_write_can_lock(x) ((x)->lock == RW_LOCK_BIAS) 224 + 225 + static inline void __raw_read_lock(raw_rwlock_t *rw) 138 226 { 139 - #ifdef CONFIG_DEBUG_SPINLOCK 140 - BUG_ON(rw->magic != RWLOCK_MAGIC); 141 - #endif 142 227 __build_read_lock(rw, "__read_lock_failed"); 143 228 } 144 229 145 - static inline void _raw_write_lock(rwlock_t *rw) 230 + static inline void __raw_write_lock(raw_rwlock_t *rw) 146 231 { 147 - #ifdef CONFIG_DEBUG_SPINLOCK 148 - BUG_ON(rw->magic != RWLOCK_MAGIC); 149 - #endif 150 232 __build_write_lock(rw, "__write_lock_failed"); 151 233 } 152 234 153 - #define _raw_read_unlock(rw) asm volatile("lock ; incl %0" :"=m" ((rw)->lock) : : "memory") 154 - #define _raw_write_unlock(rw) asm volatile("lock ; addl $" RW_LOCK_BIAS_STR ",%0":"=m" ((rw)->lock) : : "memory") 155 - 156 - static inline int _raw_read_trylock(rwlock_t *lock) 235 + static inline int __raw_read_trylock(raw_rwlock_t *lock) 157 236 { 158 237 atomic_t *count = (atomic_t *)lock; 159 238 atomic_dec(count); ··· 167 238 return 0; 168 239 } 169 240 170 - static inline int _raw_write_trylock(rwlock_t *lock) 241 + static inline int __raw_write_trylock(raw_rwlock_t *lock) 171 242 { 172 243 atomic_t *count = (atomic_t *)lock; 173 244 if (atomic_sub_and_test(RW_LOCK_BIAS, count)) 174 245 return 1; 175 246 atomic_add(RW_LOCK_BIAS, count); 176 247 return 0; 248 + } 249 + 250 + static inline void __raw_read_unlock(raw_rwlock_t *rw) 251 + { 252 + asm volatile("lock ; incl %0" :"=m" (rw->lock) : : "memory"); 253 + } 254 + 255 + static inline void __raw_write_unlock(raw_rwlock_t *rw) 256 + { 257 + asm volatile("lock ; addl $" RW_LOCK_BIAS_STR ", %0" 258 + : "=m" (rw->lock) : : "memory"); 177 259 } 178 260 179 261 #endif /* __ASM_SPINLOCK_H */
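The i386 rwlock above is a single 32-bit counter initialized to RW_LOCK_BIAS: each reader subtracts 1, a writer subtracts the whole bias, and a result that went negative (readers) or non-zero (writers) means "contended, undo and fall back to the out-of-line spin". A C11 sketch of the bias scheme, using the 0x01000000 value from asm-i386/rwlock.h; model names are mine, not kernel API.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define RW_LOCK_BIAS 0x01000000

typedef struct { _Atomic int lock; } model_rwlock_t;  /* init to RW_LOCK_BIAS */

static bool model_read_trylock(model_rwlock_t *rw)
{
	/* atomic_dec(); a non-negative result means no writer held the bias */
	if (atomic_fetch_sub(&rw->lock, 1) - 1 >= 0)
		return true;
	atomic_fetch_add(&rw->lock, 1);              /* undo the decrement */
	return false;
}

static void model_read_unlock(model_rwlock_t *rw)
{
	atomic_fetch_add(&rw->lock, 1);              /* lock ; incl */
}

static bool model_write_trylock(model_rwlock_t *rw)
{
	/* subtracting the full bias yields 0 only if no readers or writer */
	if (atomic_fetch_sub(&rw->lock, RW_LOCK_BIAS) - RW_LOCK_BIAS == 0)
		return true;
	atomic_fetch_add(&rw->lock, RW_LOCK_BIAS);   /* undo */
	return false;
}

static void model_write_unlock(model_rwlock_t *rw)
{
	atomic_fetch_add(&rw->lock, RW_LOCK_BIAS);   /* lock ; addl $BIAS */
}
```

This is why `__raw_read_can_lock()` tests `(int)(x)->lock > 0` and `__raw_write_can_lock()` tests for the full bias: the sign bit doubles as the "contended" bit, exactly as in the rw-semaphore technique the comment credits.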
+20
include/asm-i386/spinlock_types.h
···
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+ #ifndef __LINUX_SPINLOCK_TYPES_H
+ # error "please don't include this file directly"
+ #endif
+
+ typedef struct {
+ 	volatile unsigned int slock;
+ } raw_spinlock_t;
+
+ #define __RAW_SPIN_LOCK_UNLOCKED	{ 1 }
+
+ typedef struct {
+ 	volatile unsigned int lock;
+ } raw_rwlock_t;
+
+ #define __RAW_RW_LOCK_UNLOCKED		{ RW_LOCK_BIAS }
+
+ #endif
+26 -43
include/asm-ia64/spinlock.h
··· 17 17 #include <asm/intrinsics.h> 18 18 #include <asm/system.h> 19 19 20 - typedef struct { 21 - volatile unsigned int lock; 22 - #ifdef CONFIG_PREEMPT 23 - unsigned int break_lock; 24 - #endif 25 - } spinlock_t; 26 - 27 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 } 28 - #define spin_lock_init(x) ((x)->lock = 0) 20 + #define __raw_spin_lock_init(x) ((x)->lock = 0) 29 21 30 22 #ifdef ASM_SUPPORTED 31 23 /* 32 24 * Try to get the lock. If we fail to get the lock, make a non-standard call to 33 25 * ia64_spinlock_contention(). We do not use a normal call because that would force all 34 - * callers of spin_lock() to be non-leaf routines. Instead, ia64_spinlock_contention() is 35 - * carefully coded to touch only those registers that spin_lock() marks "clobbered". 26 + * callers of __raw_spin_lock() to be non-leaf routines. Instead, ia64_spinlock_contention() is 27 + * carefully coded to touch only those registers that __raw_spin_lock() marks "clobbered". 36 28 */ 37 29 38 30 #define IA64_SPINLOCK_CLOBBERS "ar.ccv", "ar.pfs", "p14", "p15", "r27", "r28", "r29", "r30", "b6", "memory" 39 31 40 32 static inline void 41 - _raw_spin_lock_flags (spinlock_t *lock, unsigned long flags) 33 + __raw_spin_lock_flags (raw_spinlock_t *lock, unsigned long flags) 42 34 { 43 35 register volatile unsigned int *ptr asm ("r31") = &lock->lock; 44 36 ··· 86 94 #endif 87 95 } 88 96 89 - #define _raw_spin_lock(lock) _raw_spin_lock_flags(lock, 0) 97 + #define __raw_spin_lock(lock) __raw_spin_lock_flags(lock, 0) 90 98 91 99 /* Unlock by doing an ordered store and releasing the cacheline with nta */ 92 - static inline void _raw_spin_unlock(spinlock_t *x) { 100 + static inline void __raw_spin_unlock(raw_spinlock_t *x) { 93 101 barrier(); 94 102 asm volatile ("st4.rel.nta [%0] = r0\n\t" :: "r"(x)); 95 103 } 96 104 97 105 #else /* !ASM_SUPPORTED */ 98 - #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 99 - # define _raw_spin_lock(x) \ 106 + #define __raw_spin_lock_flags(lock, 
flags) __raw_spin_lock(lock) 107 + # define __raw_spin_lock(x) \ 100 108 do { \ 101 109 __u32 *ia64_spinlock_ptr = (__u32 *) (x); \ 102 110 __u64 ia64_spinlock_val; \ ··· 109 117 } while (ia64_spinlock_val); \ 110 118 } \ 111 119 } while (0) 112 - #define _raw_spin_unlock(x) do { barrier(); ((spinlock_t *) x)->lock = 0; } while (0) 120 + #define __raw_spin_unlock(x) do { barrier(); ((raw_spinlock_t *) x)->lock = 0; } while (0) 113 121 #endif /* !ASM_SUPPORTED */ 114 122 115 - #define spin_is_locked(x) ((x)->lock != 0) 116 - #define _raw_spin_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) == 0) 117 - #define spin_unlock_wait(x) do { barrier(); } while ((x)->lock) 123 + #define __raw_spin_is_locked(x) ((x)->lock != 0) 124 + #define __raw_spin_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) == 0) 125 + #define __raw_spin_unlock_wait(lock) \ 126 + do { while (__raw_spin_is_locked(lock)) cpu_relax(); } while (0) 118 127 119 - typedef struct { 120 - volatile unsigned int read_counter : 24; 121 - volatile unsigned int write_lock : 8; 122 - #ifdef CONFIG_PREEMPT 123 - unsigned int break_lock; 124 - #endif 125 - } rwlock_t; 126 - #define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0 } 128 + #define __raw_read_can_lock(rw) (*(volatile int *)(rw) >= 0) 129 + #define __raw_write_can_lock(rw) (*(volatile int *)(rw) == 0) 127 130 128 - #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) 129 - #define read_can_lock(rw) (*(volatile int *)(rw) >= 0) 130 - #define write_can_lock(rw) (*(volatile int *)(rw) == 0) 131 - 132 - #define _raw_read_lock(rw) \ 131 + #define __raw_read_lock(rw) \ 133 132 do { \ 134 - rwlock_t *__read_lock_ptr = (rw); \ 133 + raw_rwlock_t *__read_lock_ptr = (rw); \ 135 134 \ 136 135 while (unlikely(ia64_fetchadd(1, (int *) __read_lock_ptr, acq) < 0)) { \ 137 136 ia64_fetchadd(-1, (int *) __read_lock_ptr, rel); \ ··· 131 148 } \ 132 149 } while (0) 133 150 134 - #define _raw_read_unlock(rw) \ 151 + #define __raw_read_unlock(rw) \ 135 152 do { \ 136 - rwlock_t 
*__read_lock_ptr = (rw); \ 153 + raw_rwlock_t *__read_lock_ptr = (rw); \ 137 154 ia64_fetchadd(-1, (int *) __read_lock_ptr, rel); \ 138 155 } while (0) 139 156 140 157 #ifdef ASM_SUPPORTED 141 - #define _raw_write_lock(rw) \ 158 + #define __raw_write_lock(rw) \ 142 159 do { \ 143 160 __asm__ __volatile__ ( \ 144 161 "mov ar.ccv = r0\n" \ ··· 153 170 :: "r"(rw) : "ar.ccv", "p7", "r2", "r29", "memory"); \ 154 171 } while(0) 155 172 156 - #define _raw_write_trylock(rw) \ 173 + #define __raw_write_trylock(rw) \ 157 174 ({ \ 158 175 register long result; \ 159 176 \ ··· 165 182 (result == 0); \ 166 183 }) 167 184 168 - static inline void _raw_write_unlock(rwlock_t *x) 185 + static inline void __raw_write_unlock(raw_rwlock_t *x) 169 186 { 170 187 u8 *y = (u8 *)x; 171 188 barrier(); ··· 174 191 175 192 #else /* !ASM_SUPPORTED */ 176 193 177 - #define _raw_write_lock(l) \ 194 + #define __raw_write_lock(l) \ 178 195 ({ \ 179 196 __u64 ia64_val, ia64_set_val = ia64_dep_mi(-1, 0, 31, 1); \ 180 197 __u32 *ia64_write_lock_ptr = (__u32 *) (l); \ ··· 185 202 } while (ia64_val); \ 186 203 }) 187 204 188 - #define _raw_write_trylock(rw) \ 205 + #define __raw_write_trylock(rw) \ 189 206 ({ \ 190 207 __u64 ia64_val; \ 191 208 __u64 ia64_set_val = ia64_dep_mi(-1, 0, 31,1); \ ··· 193 210 (ia64_val == 0); \ 194 211 }) 195 212 196 - static inline void _raw_write_unlock(rwlock_t *x) 213 + static inline void __raw_write_unlock(raw_rwlock_t *x) 197 214 { 198 215 barrier(); 199 216 x->write_lock = 0; ··· 201 218 202 219 #endif /* !ASM_SUPPORTED */ 203 220 204 - #define _raw_read_trylock(lock) generic_raw_read_trylock(lock) 221 + #define __raw_read_trylock(lock) generic__raw_read_trylock(lock) 205 222 206 223 #endif /* _ASM_IA64_SPINLOCK_H */
include/asm-ia64/spinlock_types.h | +21
··· 1 + #ifndef _ASM_IA64_SPINLOCK_TYPES_H 2 + #define _ASM_IA64_SPINLOCK_TYPES_H 3 + 4 + #ifndef __LINUX_SPINLOCK_TYPES_H 5 + # error "please don't include this file directly" 6 + #endif 7 + 8 + typedef struct { 9 + volatile unsigned int lock; 10 + } raw_spinlock_t; 11 + 12 + #define __RAW_SPIN_LOCK_UNLOCKED { 0 } 13 + 14 + typedef struct { 15 + volatile unsigned int read_counter : 31; 16 + volatile unsigned int write_lock : 1; 17 + } raw_rwlock_t; 18 + 19 + #define __RAW_RW_LOCK_UNLOCKED { 0, 0 } 20 + 21 + #endif
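The ia64 `__raw_read_lock()` above leans on fetchadd: a reader optimistically bumps the counter with acquire semantics and, if the sign shows a writer, undoes the increment and spins until the writer releases. A rough portable sketch of that shape in C11 atomics — illustrative only; the `sketch_*` names and the plain-int layout are assumptions, not the ia64 code:

```c
#include <stdatomic.h>

/* Sign bit stands in for the ia64 write_lock field; low bits count readers. */
typedef struct { atomic_int lock; } sketch_rwlock_t;

static void sketch_read_lock(sketch_rwlock_t *rw)
{
	/* fetchadd with acquire: optimistically become a reader */
	while (atomic_fetch_add_explicit(&rw->lock, 1,
					 memory_order_acquire) < 0) {
		/* a writer holds the sign bit: undo, then wait for release */
		atomic_fetch_sub_explicit(&rw->lock, 1, memory_order_release);
		while (atomic_load_explicit(&rw->lock,
					    memory_order_relaxed) < 0)
			;	/* cpu_relax() in the kernel */
	}
}

static void sketch_read_unlock(sketch_rwlock_t *rw)
{
	atomic_fetch_sub_explicit(&rw->lock, 1, memory_order_release);
}
```

Note the undo step: unlike a compare-exchange loop, a fetchadd reader has already published its increment before it can check for a writer, so the contended path must subtract it back out.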
include/asm-m32r/spinlock.h | +33 -100
··· 14 14 #include <asm/atomic.h> 15 15 #include <asm/page.h> 16 16 17 - extern int printk(const char * fmt, ...) 18 - __attribute__ ((format (printf, 1, 2))); 19 - 20 - #define RW_LOCK_BIAS 0x01000000 21 - #define RW_LOCK_BIAS_STR "0x01000000" 22 - 23 17 /* 24 18 * Your basic SMP spinlocks, allowing only a single CPU anywhere 25 - */ 26 - 27 - typedef struct { 28 - volatile int slock; 29 - #ifdef CONFIG_DEBUG_SPINLOCK 30 - unsigned magic; 31 - #endif 32 - #ifdef CONFIG_PREEMPT 33 - unsigned int break_lock; 34 - #endif 35 - } spinlock_t; 36 - 37 - #define SPINLOCK_MAGIC 0xdead4ead 38 - 39 - #ifdef CONFIG_DEBUG_SPINLOCK 40 - #define SPINLOCK_MAGIC_INIT , SPINLOCK_MAGIC 41 - #else 42 - #define SPINLOCK_MAGIC_INIT /* */ 43 - #endif 44 - 45 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { 1 SPINLOCK_MAGIC_INIT } 46 - 47 - #define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while(0) 48 - 49 - /* 19 + * 20 + * (the type definitions are in asm/spinlock_types.h) 21 + * 50 22 * Simple spin lock operations. There are two variants, one clears IRQ's 51 23 * on the local processor, one does not. 52 24 * 53 25 * We make no fairness assumptions. They have a cost. 54 26 */ 55 27 56 - #define spin_is_locked(x) (*(volatile int *)(&(x)->slock) <= 0) 57 - #define spin_unlock_wait(x) do { barrier(); } while(spin_is_locked(x)) 58 - #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 28 + #define __raw_spin_is_locked(x) (*(volatile int *)(&(x)->slock) <= 0) 29 + #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock) 30 + #define __raw_spin_unlock_wait(x) \ 31 + do { cpu_relax(); } while (__raw_spin_is_locked(x)) 59 32 60 33 /** 61 - * _raw_spin_trylock - Try spin lock and return a result 34 + * __raw_spin_trylock - Try spin lock and return a result 62 35 * @lock: Pointer to the lock variable 63 36 * 64 - * _raw_spin_trylock() tries to get the lock and returns a result. 37 + * __raw_spin_trylock() tries to get the lock and returns a result. 
65 38 * On the m32r, the result value is 1 (= Success) or 0 (= Failure). 66 39 */ 67 - static inline int _raw_spin_trylock(spinlock_t *lock) 40 + static inline int __raw_spin_trylock(raw_spinlock_t *lock) 68 41 { 69 42 int oldval; 70 43 unsigned long tmp1, tmp2; ··· 51 78 * } 52 79 */ 53 80 __asm__ __volatile__ ( 54 - "# spin_trylock \n\t" 81 + "# __raw_spin_trylock \n\t" 55 82 "ldi %1, #0; \n\t" 56 83 "mvfc %2, psw; \n\t" 57 84 "clrpsw #0x40 -> nop; \n\t" ··· 70 97 return (oldval > 0); 71 98 } 72 99 73 - static inline void _raw_spin_lock(spinlock_t *lock) 100 + static inline void __raw_spin_lock(raw_spinlock_t *lock) 74 101 { 75 102 unsigned long tmp0, tmp1; 76 103 77 - #ifdef CONFIG_DEBUG_SPINLOCK 78 - if (unlikely(lock->magic != SPINLOCK_MAGIC)) { 79 - printk("pc: %p\n", __builtin_return_address(0)); 80 - BUG(); 81 - } 82 - #endif 83 104 /* 84 105 * lock->slock : =1 : unlock 85 106 * : <=0 : lock ··· 85 118 * } 86 119 */ 87 120 __asm__ __volatile__ ( 88 - "# spin_lock \n\t" 121 + "# __raw_spin_lock \n\t" 89 122 ".fillinsn \n" 90 123 "1: \n\t" 91 124 "mvfc %1, psw; \n\t" ··· 112 145 ); 113 146 } 114 147 115 - static inline void _raw_spin_unlock(spinlock_t *lock) 148 + static inline void __raw_spin_unlock(raw_spinlock_t *lock) 116 149 { 117 - #ifdef CONFIG_DEBUG_SPINLOCK 118 - BUG_ON(lock->magic != SPINLOCK_MAGIC); 119 - BUG_ON(!spin_is_locked(lock)); 120 - #endif 121 150 mb(); 122 151 lock->slock = 1; 123 152 } ··· 127 164 * can "mix" irq-safe locks - any writer needs to get a 128 165 * irq-safe write-lock, but readers can get non-irqsafe 129 166 * read-locks. 
130 - */ 131 - typedef struct { 132 - volatile int lock; 133 - #ifdef CONFIG_DEBUG_SPINLOCK 134 - unsigned magic; 135 - #endif 136 - #ifdef CONFIG_PREEMPT 137 - unsigned int break_lock; 138 - #endif 139 - } rwlock_t; 140 - 141 - #define RWLOCK_MAGIC 0xdeaf1eed 142 - 143 - #ifdef CONFIG_DEBUG_SPINLOCK 144 - #define RWLOCK_MAGIC_INIT , RWLOCK_MAGIC 145 - #else 146 - #define RWLOCK_MAGIC_INIT /* */ 147 - #endif 148 - 149 - #define RW_LOCK_UNLOCKED (rwlock_t) { RW_LOCK_BIAS RWLOCK_MAGIC_INIT } 150 - 151 - #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) 152 - 153 - /** 154 - * read_can_lock - would read_trylock() succeed? 155 - * @lock: the rwlock in question. 156 - */ 157 - #define read_can_lock(x) ((int)(x)->lock > 0) 158 - 159 - /** 160 - * write_can_lock - would write_trylock() succeed? 161 - * @lock: the rwlock in question. 162 - */ 163 - #define write_can_lock(x) ((x)->lock == RW_LOCK_BIAS) 164 - 165 - /* 167 + * 166 168 * On x86, we implement read-write locks as a 32-bit counter 167 169 * with the high bit (sign) being the "contended" bit. 168 170 * ··· 136 208 * Changed to use the same technique as rw semaphores. See 137 209 * semaphore.h for details. -ben 138 210 */ 139 - /* the spinlock helpers are in arch/i386/kernel/semaphore.c */ 140 211 141 - static inline void _raw_read_lock(rwlock_t *rw) 212 + /** 213 + * read_can_lock - would read_trylock() succeed? 214 + * @lock: the rwlock in question. 215 + */ 216 + #define __raw_read_can_lock(x) ((int)(x)->lock > 0) 217 + 218 + /** 219 + * write_can_lock - would write_trylock() succeed? 220 + * @lock: the rwlock in question. 
221 + */ 222 + #define __raw_write_can_lock(x) ((x)->lock == RW_LOCK_BIAS) 223 + 224 + static inline void __raw_read_lock(raw_rwlock_t *rw) 142 225 { 143 226 unsigned long tmp0, tmp1; 144 227 145 - #ifdef CONFIG_DEBUG_SPINLOCK 146 - BUG_ON(rw->magic != RWLOCK_MAGIC); 147 - #endif 148 228 /* 149 229 * rw->lock : >0 : unlock 150 230 * : <=0 : lock ··· 200 264 ); 201 265 } 202 266 203 - static inline void _raw_write_lock(rwlock_t *rw) 267 + static inline void __raw_write_lock(raw_rwlock_t *rw) 204 268 { 205 269 unsigned long tmp0, tmp1, tmp2; 206 270 207 - #ifdef CONFIG_DEBUG_SPINLOCK 208 - BUG_ON(rw->magic != RWLOCK_MAGIC); 209 - #endif 210 271 /* 211 272 * rw->lock : =RW_LOCK_BIAS_STR : unlock 212 273 * : !=RW_LOCK_BIAS_STR : lock ··· 253 320 ); 254 321 } 255 322 256 - static inline void _raw_read_unlock(rwlock_t *rw) 323 + static inline void __raw_read_unlock(raw_rwlock_t *rw) 257 324 { 258 325 unsigned long tmp0, tmp1; 259 326 ··· 275 342 ); 276 343 } 277 344 278 - static inline void _raw_write_unlock(rwlock_t *rw) 345 + static inline void __raw_write_unlock(raw_rwlock_t *rw) 279 346 { 280 347 unsigned long tmp0, tmp1, tmp2; 281 348 ··· 299 366 ); 300 367 } 301 368 302 - #define _raw_read_trylock(lock) generic_raw_read_trylock(lock) 369 + #define __raw_read_trylock(lock) generic__raw_read_trylock(lock) 303 370 304 - static inline int _raw_write_trylock(rwlock_t *lock) 371 + static inline int __raw_write_trylock(raw_rwlock_t *lock) 305 372 { 306 373 atomic_t *count = (atomic_t *)lock; 307 374 if (atomic_sub_and_test(RW_LOCK_BIAS, count))
include/asm-m32r/spinlock_types.h | +23
··· 1 + #ifndef _ASM_M32R_SPINLOCK_TYPES_H 2 + #define _ASM_M32R_SPINLOCK_TYPES_H 3 + 4 + #ifndef __LINUX_SPINLOCK_TYPES_H 5 + # error "please don't include this file directly" 6 + #endif 7 + 8 + typedef struct { 9 + volatile int slock; 10 + } raw_spinlock_t; 11 + 12 + #define __RAW_SPIN_LOCK_UNLOCKED { 1 } 13 + 14 + typedef struct { 15 + volatile int lock; 16 + } raw_rwlock_t; 17 + 18 + #define RW_LOCK_BIAS 0x01000000 19 + #define RW_LOCK_BIAS_STR "0x01000000" 20 + 21 + #define __RAW_RW_LOCK_UNLOCKED { RW_LOCK_BIAS } 22 + 23 + #endif
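The comment block above describes the x86-style biased counter that m32r reuses: the word starts at RW_LOCK_BIAS, each reader subtracts 1, and a writer subtracts the whole bias, succeeding only if the result is exactly zero. A minimal sketch of the two trylock paths with C11 atomics (the `sketch_*` names are illustrative; this is not the m32r asm):

```c
#include <stdatomic.h>

#define SKETCH_RW_BIAS 0x01000000	/* same value as RW_LOCK_BIAS */

static int sketch_read_trylock(atomic_int *count)
{
	if (atomic_fetch_sub(count, 1) > 0)	/* old > 0: no writer held it */
		return 1;
	atomic_fetch_add(count, 1);		/* contended: restore */
	return 0;
}

static int sketch_write_trylock(atomic_int *count)
{
	/* only an uncontended lock still holds the full bias */
	if (atomic_fetch_sub(count, SKETCH_RW_BIAS) == SKETCH_RW_BIAS)
		return 1;
	atomic_fetch_add(count, SKETCH_RW_BIAS);
	return 0;
}
```

The bias trick lets one signed word answer both questions: any reader makes the value less than the bias (writer fails), and a writer drives it to zero or below (readers fail on the sign/zero check).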
include/asm-mips/spinlock.h | +27 -48
··· 16 16 * Your basic SMP spinlocks, allowing only a single CPU anywhere 17 17 */ 18 18 19 - typedef struct { 20 - volatile unsigned int lock; 21 - #ifdef CONFIG_PREEMPT 22 - unsigned int break_lock; 23 - #endif 24 - } spinlock_t; 25 - 26 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 } 27 - 28 - #define spin_lock_init(x) do { (x)->lock = 0; } while(0) 29 - 30 - #define spin_is_locked(x) ((x)->lock != 0) 31 - #define spin_unlock_wait(x) do { barrier(); } while ((x)->lock) 32 - #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 19 + #define __raw_spin_is_locked(x) ((x)->lock != 0) 20 + #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock) 21 + #define __raw_spin_unlock_wait(x) \ 22 + do { cpu_relax(); } while ((x)->lock) 33 23 34 24 /* 35 25 * Simple spin lock operations. There are two variants, one clears IRQ's ··· 28 38 * We make no fairness assumptions. They have a cost. 29 39 */ 30 40 31 - static inline void _raw_spin_lock(spinlock_t *lock) 41 + static inline void __raw_spin_lock(raw_spinlock_t *lock) 32 42 { 33 43 unsigned int tmp; 34 44 35 45 if (R10000_LLSC_WAR) { 36 46 __asm__ __volatile__( 37 - " .set noreorder # _raw_spin_lock \n" 47 + " .set noreorder # __raw_spin_lock \n" 38 48 "1: ll %1, %2 \n" 39 49 " bnez %1, 1b \n" 40 50 " li %1, 1 \n" ··· 48 58 : "memory"); 49 59 } else { 50 60 __asm__ __volatile__( 51 - " .set noreorder # _raw_spin_lock \n" 61 + " .set noreorder # __raw_spin_lock \n" 52 62 "1: ll %1, %2 \n" 53 63 " bnez %1, 1b \n" 54 64 " li %1, 1 \n" ··· 62 72 } 63 73 } 64 74 65 - static inline void _raw_spin_unlock(spinlock_t *lock) 75 + static inline void __raw_spin_unlock(raw_spinlock_t *lock) 66 76 { 67 77 __asm__ __volatile__( 68 - " .set noreorder # _raw_spin_unlock \n" 78 + " .set noreorder # __raw_spin_unlock \n" 69 79 " sync \n" 70 80 " sw $0, %0 \n" 71 81 " .set\treorder \n" ··· 74 84 : "memory"); 75 85 } 76 86 77 - static inline unsigned int _raw_spin_trylock(spinlock_t *lock) 87 + static inline unsigned int 
__raw_spin_trylock(raw_spinlock_t *lock) 78 88 { 79 89 unsigned int temp, res; 80 90 81 91 if (R10000_LLSC_WAR) { 82 92 __asm__ __volatile__( 83 - " .set noreorder # _raw_spin_trylock \n" 93 + " .set noreorder # __raw_spin_trylock \n" 84 94 "1: ll %0, %3 \n" 85 95 " ori %2, %0, 1 \n" 86 96 " sc %2, %1 \n" ··· 94 104 : "memory"); 95 105 } else { 96 106 __asm__ __volatile__( 97 - " .set noreorder # _raw_spin_trylock \n" 107 + " .set noreorder # __raw_spin_trylock \n" 98 108 "1: ll %0, %3 \n" 99 109 " ori %2, %0, 1 \n" 100 110 " sc %2, %1 \n" ··· 119 129 * read-locks. 120 130 */ 121 131 122 - typedef struct { 123 - volatile unsigned int lock; 124 - #ifdef CONFIG_PREEMPT 125 - unsigned int break_lock; 126 - #endif 127 - } rwlock_t; 128 - 129 - #define RW_LOCK_UNLOCKED (rwlock_t) { 0 } 130 - 131 - #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) 132 - 133 - static inline void _raw_read_lock(rwlock_t *rw) 132 + static inline void __raw_read_lock(raw_rwlock_t *rw) 134 133 { 135 134 unsigned int tmp; 136 135 137 136 if (R10000_LLSC_WAR) { 138 137 __asm__ __volatile__( 139 - " .set noreorder # _raw_read_lock \n" 138 + " .set noreorder # __raw_read_lock \n" 140 139 "1: ll %1, %2 \n" 141 140 " bltz %1, 1b \n" 142 141 " addu %1, 1 \n" ··· 139 160 : "memory"); 140 161 } else { 141 162 __asm__ __volatile__( 142 - " .set noreorder # _raw_read_lock \n" 163 + " .set noreorder # __raw_read_lock \n" 143 164 "1: ll %1, %2 \n" 144 165 " bltz %1, 1b \n" 145 166 " addu %1, 1 \n" ··· 156 177 /* Note the use of sub, not subu which will make the kernel die with an 157 178 overflow exception if we ever try to unlock an rwlock that is already 158 179 unlocked or is being held by a writer. 
*/ 159 - static inline void _raw_read_unlock(rwlock_t *rw) 180 + static inline void __raw_read_unlock(raw_rwlock_t *rw) 160 181 { 161 182 unsigned int tmp; 162 183 163 184 if (R10000_LLSC_WAR) { 164 185 __asm__ __volatile__( 165 - "1: ll %1, %2 # _raw_read_unlock \n" 186 + "1: ll %1, %2 # __raw_read_unlock \n" 166 187 " sub %1, 1 \n" 167 188 " sc %1, %0 \n" 168 189 " beqzl %1, 1b \n" ··· 172 193 : "memory"); 173 194 } else { 174 195 __asm__ __volatile__( 175 - " .set noreorder # _raw_read_unlock \n" 196 + " .set noreorder # __raw_read_unlock \n" 176 197 "1: ll %1, %2 \n" 177 198 " sub %1, 1 \n" 178 199 " sc %1, %0 \n" ··· 185 206 } 186 207 } 187 208 188 - static inline void _raw_write_lock(rwlock_t *rw) 209 + static inline void __raw_write_lock(raw_rwlock_t *rw) 189 210 { 190 211 unsigned int tmp; 191 212 192 213 if (R10000_LLSC_WAR) { 193 214 __asm__ __volatile__( 194 - " .set noreorder # _raw_write_lock \n" 215 + " .set noreorder # __raw_write_lock \n" 195 216 "1: ll %1, %2 \n" 196 217 " bnez %1, 1b \n" 197 218 " lui %1, 0x8000 \n" ··· 205 226 : "memory"); 206 227 } else { 207 228 __asm__ __volatile__( 208 - " .set noreorder # _raw_write_lock \n" 229 + " .set noreorder # __raw_write_lock \n" 209 230 "1: ll %1, %2 \n" 210 231 " bnez %1, 1b \n" 211 232 " lui %1, 0x8000 \n" ··· 220 241 } 221 242 } 222 243 223 - static inline void _raw_write_unlock(rwlock_t *rw) 244 + static inline void __raw_write_unlock(raw_rwlock_t *rw) 224 245 { 225 246 __asm__ __volatile__( 226 - " sync # _raw_write_unlock \n" 247 + " sync # __raw_write_unlock \n" 227 248 " sw $0, %0 \n" 228 249 : "=m" (rw->lock) 229 250 : "m" (rw->lock) 230 251 : "memory"); 231 252 } 232 253 233 - #define _raw_read_trylock(lock) generic_raw_read_trylock(lock) 254 + #define __raw_read_trylock(lock) generic__raw_read_trylock(lock) 234 255 235 - static inline int _raw_write_trylock(rwlock_t *rw) 256 + static inline int __raw_write_trylock(raw_rwlock_t *rw) 236 257 { 237 258 unsigned int tmp; 238 259 int ret; 239 
260 240 261 if (R10000_LLSC_WAR) { 241 262 __asm__ __volatile__( 242 - " .set noreorder # _raw_write_trylock \n" 263 + " .set noreorder # __raw_write_trylock \n" 243 264 " li %2, 0 \n" 244 265 "1: ll %1, %3 \n" 245 266 " bnez %1, 2f \n" ··· 256 277 : "memory"); 257 278 } else { 258 279 __asm__ __volatile__( 259 - " .set noreorder # _raw_write_trylock \n" 280 + " .set noreorder # __raw_write_trylock \n" 260 281 " li %2, 0 \n" 261 282 "1: ll %1, %3 \n" 262 283 " bnez %1, 2f \n"
include/asm-mips/spinlock_types.h | +20
··· 1 + #ifndef _ASM_SPINLOCK_TYPES_H 2 + #define _ASM_SPINLOCK_TYPES_H 3 + 4 + #ifndef __LINUX_SPINLOCK_TYPES_H 5 + # error "please don't include this file directly" 6 + #endif 7 + 8 + typedef struct { 9 + volatile unsigned int lock; 10 + } raw_spinlock_t; 11 + 12 + #define __RAW_SPIN_LOCK_UNLOCKED { 0 } 13 + 14 + typedef struct { 15 + volatile unsigned int lock; 16 + } raw_rwlock_t; 17 + 18 + #define __RAW_RW_LOCK_UNLOCKED { 0 } 19 + 20 + #endif
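The MIPS ll/sc loops above implement a plain test-and-set lock: `ll` reserves the word, `sc` fails if anyone raced in between, and the R10000_LLSC_WAR variant only changes the branch scheduling. In portable C the whole sequence collapses to an atomic exchange (illustrative sketch; the `sketch_*` names are assumptions):

```c
#include <stdatomic.h>

static unsigned int sketch_spin_trylock(atomic_int *lock)
{
	/* ll/ori/sc: atomically set the word, succeed if it was 0 */
	return atomic_exchange_explicit(lock, 1, memory_order_acquire) == 0;
}

static void sketch_spin_lock(atomic_int *lock)
{
	while (!sketch_spin_trylock(lock))
		while (atomic_load_explicit(lock, memory_order_relaxed))
			;	/* spin read-only until the word clears */
}

static void sketch_spin_unlock(atomic_int *lock)
{
	/* the "sync; sw $0" pair becomes a release store */
	atomic_store_explicit(lock, 0, memory_order_release);
}
```

The read-only inner spin matters on real hardware: it keeps the contended line shared instead of bouncing it with failed exchanges.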
include/asm-parisc/atomic.h | +6 -6
··· 24 24 # define ATOMIC_HASH_SIZE 4 25 25 # define ATOMIC_HASH(a) (&(__atomic_hash[ (((unsigned long) a)/L1_CACHE_BYTES) & (ATOMIC_HASH_SIZE-1) ])) 26 26 27 - extern spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned; 27 + extern raw_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned; 28 28 29 - /* Can't use _raw_spin_lock_irq because of #include problems, so 29 + /* Can't use raw_spin_lock_irq because of #include problems, so 30 30 * this is the substitute */ 31 31 #define _atomic_spin_lock_irqsave(l,f) do { \ 32 - spinlock_t *s = ATOMIC_HASH(l); \ 32 + raw_spinlock_t *s = ATOMIC_HASH(l); \ 33 33 local_irq_save(f); \ 34 - _raw_spin_lock(s); \ 34 + __raw_spin_lock(s); \ 35 35 } while(0) 36 36 37 37 #define _atomic_spin_unlock_irqrestore(l,f) do { \ 38 - spinlock_t *s = ATOMIC_HASH(l); \ 39 - _raw_spin_unlock(s); \ 38 + raw_spinlock_t *s = ATOMIC_HASH(l); \ 39 + __raw_spin_unlock(s); \ 40 40 local_irq_restore(f); \ 41 41 } while(0) 42 42
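ATOMIC_HASH() exists because parisc's only atomic primitive is ldcw: every atomic_t operation takes whichever spinlock its address hashes to, indexed by cache line so nearby atomics share a lock predictably. The shape of that scheme in portable C, with an `atomic_flag` standing in for the raw spinlock (the `sketch_*` names and the 32-byte line size are assumptions):

```c
#include <stdatomic.h>
#include <stdint.h>

#define SKETCH_HASH_SIZE	4
#define SKETCH_L1_CACHE_BYTES	32	/* assumed line size, illustrative */

static atomic_flag sketch_hash[SKETCH_HASH_SIZE];	/* zeroed = unlocked */

/* Same shape as ATOMIC_HASH(): index by cache line number. */
static atomic_flag *sketch_atomic_hash(const void *addr)
{
	uintptr_t line = (uintptr_t)addr / SKETCH_L1_CACHE_BYTES;
	return &sketch_hash[line & (SKETCH_HASH_SIZE - 1)];
}

static int sketch_atomic_add_return(int i, int *v)
{
	atomic_flag *l = sketch_atomic_hash(v);
	int ret;

	while (atomic_flag_test_and_set_explicit(l, memory_order_acquire))
		;	/* the kernel version also disables local IRQs here */
	ret = (*v += i);
	atomic_flag_clear_explicit(l, memory_order_release);
	return ret;
}
```

Hashing by address rather than using one global lock keeps unrelated atomics from serializing on each other while still needing only a tiny, fixed lock array.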
include/asm-parisc/bitops.h | +1 -1
··· 2 2 #define _PARISC_BITOPS_H 3 3 4 4 #include <linux/compiler.h> 5 - #include <asm/system.h> 5 + #include <asm/spinlock.h> 6 6 #include <asm/byteorder.h> 7 7 #include <asm/atomic.h> 8 8
include/asm-parisc/cacheflush.h | +1
··· 3 3 4 4 #include <linux/config.h> 5 5 #include <linux/mm.h> 6 + #include <asm/cache.h> /* for flush_user_dcache_range_asm() proto */ 6 7 7 8 /* The usual comment is "Caches aren't brain-dead on the <architecture>". 8 9 * Unfortunately, that doesn't apply to PA-RISC. */
include/asm-parisc/processor.h | +1
··· 11 11 #ifndef __ASSEMBLY__ 12 12 #include <linux/config.h> 13 13 #include <linux/threads.h> 14 + #include <linux/spinlock_types.h> 14 15 15 16 #include <asm/hardware.h> 16 17 #include <asm/page.h>
include/asm-parisc/spinlock.h | +28 -135
··· 2 2 #define __ASM_SPINLOCK_H 3 3 4 4 #include <asm/system.h> 5 + #include <asm/processor.h> 6 + #include <asm/spinlock_types.h> 5 7 6 8 /* Note that PA-RISC has to use `1' to mean unlocked and `0' to mean locked 7 9 * since it only has load-and-zero. Moreover, at least on some PA processors, 8 10 * the semaphore address has to be 16-byte aligned. 9 11 */ 10 12 11 - #ifndef CONFIG_DEBUG_SPINLOCK 12 - 13 - #define __SPIN_LOCK_UNLOCKED { { 1, 1, 1, 1 } } 14 - #undef SPIN_LOCK_UNLOCKED 15 - #define SPIN_LOCK_UNLOCKED (spinlock_t) __SPIN_LOCK_UNLOCKED 16 - 17 - #define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while(0) 18 - 19 - static inline int spin_is_locked(spinlock_t *x) 13 + static inline int __raw_spin_is_locked(raw_spinlock_t *x) 20 14 { 21 15 volatile unsigned int *a = __ldcw_align(x); 22 16 return *a == 0; 23 17 } 24 18 25 - #define spin_unlock_wait(x) do { barrier(); } while(spin_is_locked(x)) 26 - #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 19 + #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock) 20 + #define __raw_spin_unlock_wait(x) \ 21 + do { cpu_relax(); } while (__raw_spin_is_locked(x)) 27 22 28 - static inline void _raw_spin_lock(spinlock_t *x) 23 + static inline void __raw_spin_lock(raw_spinlock_t *x) 29 24 { 30 25 volatile unsigned int *a; 31 26 ··· 31 36 mb(); 32 37 } 33 38 34 - static inline void _raw_spin_unlock(spinlock_t *x) 39 + static inline void __raw_spin_unlock(raw_spinlock_t *x) 35 40 { 36 41 volatile unsigned int *a; 37 42 mb(); ··· 40 45 mb(); 41 46 } 42 47 43 - static inline int _raw_spin_trylock(spinlock_t *x) 48 + static inline int __raw_spin_trylock(raw_spinlock_t *x) 44 49 { 45 50 volatile unsigned int *a; 46 51 int ret; ··· 52 57 53 58 return ret; 54 59 } 55 - 56 - #define spin_lock_own(LOCK, LOCATION) ((void)0) 57 - 58 - #else /* !(CONFIG_DEBUG_SPINLOCK) */ 59 - 60 - #define SPINLOCK_MAGIC 0x1D244B3C 61 - 62 - #define __SPIN_LOCK_UNLOCKED { { 1, 1, 1, 1 }, SPINLOCK_MAGIC, 10, __FILE__ 
, NULL, 0, -1, NULL, NULL } 63 - #undef SPIN_LOCK_UNLOCKED 64 - #define SPIN_LOCK_UNLOCKED (spinlock_t) __SPIN_LOCK_UNLOCKED 65 - 66 - #define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while(0) 67 - 68 - #define CHECK_LOCK(x) \ 69 - do { \ 70 - if (unlikely((x)->magic != SPINLOCK_MAGIC)) { \ 71 - printk(KERN_ERR "%s:%d: spin_is_locked" \ 72 - " on uninitialized spinlock %p.\n", \ 73 - __FILE__, __LINE__, (x)); \ 74 - } \ 75 - } while(0) 76 - 77 - #define spin_is_locked(x) \ 78 - ({ \ 79 - CHECK_LOCK(x); \ 80 - volatile unsigned int *a = __ldcw_align(x); \ 81 - if (unlikely((*a == 0) && (x)->babble)) { \ 82 - (x)->babble--; \ 83 - printk("KERN_WARNING \ 84 - %s:%d: spin_is_locked(%s/%p) already" \ 85 - " locked by %s:%d in %s at %p(%d)\n", \ 86 - __FILE__,__LINE__, (x)->module, (x), \ 87 - (x)->bfile, (x)->bline, (x)->task->comm,\ 88 - (x)->previous, (x)->oncpu); \ 89 - } \ 90 - *a == 0; \ 91 - }) 92 - 93 - #define spin_unlock_wait(x) \ 94 - do { \ 95 - CHECK_LOCK(x); \ 96 - volatile unsigned int *a = __ldcw_align(x); \ 97 - if (unlikely((*a == 0) && (x)->babble)) { \ 98 - (x)->babble--; \ 99 - printk("KERN_WARNING \ 100 - %s:%d: spin_unlock_wait(%s/%p)" \ 101 - " owned by %s:%d in %s at %p(%d)\n", \ 102 - __FILE__,__LINE__, (x)->module, (x), \ 103 - (x)->bfile, (x)->bline, (x)->task->comm,\ 104 - (x)->previous, (x)->oncpu); \ 105 - } \ 106 - barrier(); \ 107 - } while (*((volatile unsigned char *)(__ldcw_align(x))) == 0) 108 - 109 - extern void _dbg_spin_lock(spinlock_t *lock, const char *base_file, int line_no); 110 - extern void _dbg_spin_unlock(spinlock_t *lock, const char *, int); 111 - extern int _dbg_spin_trylock(spinlock_t * lock, const char *, int); 112 - 113 - #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 114 - 115 - #define _raw_spin_unlock(lock) _dbg_spin_unlock(lock, __FILE__, __LINE__) 116 - #define _raw_spin_lock(lock) _dbg_spin_lock(lock, __FILE__, __LINE__) 117 - #define _raw_spin_trylock(lock) _dbg_spin_trylock(lock, 
__FILE__, __LINE__) 118 - 119 - /* just in case we need it */ 120 - #define spin_lock_own(LOCK, LOCATION) \ 121 - do { \ 122 - volatile unsigned int *a = __ldcw_align(LOCK); \ 123 - if (!((*a == 0) && ((LOCK)->oncpu == smp_processor_id()))) \ 124 - printk("KERN_WARNING \ 125 - %s: called on %d from %p but lock %s on %d\n", \ 126 - LOCATION, smp_processor_id(), \ 127 - __builtin_return_address(0), \ 128 - (*a == 0) ? "taken" : "freed", (LOCK)->on_cpu); \ 129 - } while (0) 130 - 131 - #endif /* !(CONFIG_DEBUG_SPINLOCK) */ 132 60 133 61 /* 134 62 * Read-write spinlocks, allowing multiple readers 135 63 * but only one writer. 136 64 */ 137 - typedef struct { 138 - spinlock_t lock; 139 - volatile int counter; 140 - #ifdef CONFIG_PREEMPT 141 - unsigned int break_lock; 142 - #endif 143 - } rwlock_t; 144 65 145 - #define RW_LOCK_UNLOCKED (rwlock_t) { __SPIN_LOCK_UNLOCKED, 0 } 146 - 147 - #define rwlock_init(lp) do { *(lp) = RW_LOCK_UNLOCKED; } while (0) 148 - 149 - #define _raw_read_trylock(lock) generic_raw_read_trylock(lock) 66 + #define __raw_read_trylock(lock) generic__raw_read_trylock(lock) 150 67 151 68 /* read_lock, read_unlock are pretty straightforward. Of course it somehow 152 69 * sucks we end up saving/restoring flags twice for read_lock_irqsave aso. 
*/ 153 70 154 - #ifdef CONFIG_DEBUG_RWLOCK 155 - extern void _dbg_read_lock(rwlock_t * rw, const char *bfile, int bline); 156 - #define _raw_read_lock(rw) _dbg_read_lock(rw, __FILE__, __LINE__) 157 - #else 158 - static __inline__ void _raw_read_lock(rwlock_t *rw) 71 + static __inline__ void __raw_read_lock(raw_rwlock_t *rw) 159 72 { 160 73 unsigned long flags; 161 74 local_irq_save(flags); 162 - _raw_spin_lock(&rw->lock); 75 + __raw_spin_lock(&rw->lock); 163 76 164 77 rw->counter++; 165 78 166 - _raw_spin_unlock(&rw->lock); 79 + __raw_spin_unlock(&rw->lock); 167 80 local_irq_restore(flags); 168 81 } 169 - #endif /* CONFIG_DEBUG_RWLOCK */ 170 82 171 - static __inline__ void _raw_read_unlock(rwlock_t *rw) 83 + static __inline__ void __raw_read_unlock(raw_rwlock_t *rw) 172 84 { 173 85 unsigned long flags; 174 86 local_irq_save(flags); 175 - _raw_spin_lock(&rw->lock); 87 + __raw_spin_lock(&rw->lock); 176 88 177 89 rw->counter--; 178 90 179 - _raw_spin_unlock(&rw->lock); 91 + __raw_spin_unlock(&rw->lock); 180 92 local_irq_restore(flags); 181 93 } 182 94 ··· 96 194 * writers) in interrupt handlers someone fucked up and we'd dead-lock 97 195 * sooner or later anyway. prumpf */ 98 196 99 - #ifdef CONFIG_DEBUG_RWLOCK 100 - extern void _dbg_write_lock(rwlock_t * rw, const char *bfile, int bline); 101 - #define _raw_write_lock(rw) _dbg_write_lock(rw, __FILE__, __LINE__) 102 - #else 103 - static __inline__ void _raw_write_lock(rwlock_t *rw) 197 + static __inline__ void __raw_write_lock(raw_rwlock_t *rw) 104 198 { 105 199 retry: 106 - _raw_spin_lock(&rw->lock); 200 + __raw_spin_lock(&rw->lock); 107 201 108 202 if(rw->counter != 0) { 109 203 /* this basically never happens */ 110 - _raw_spin_unlock(&rw->lock); 204 + __raw_spin_unlock(&rw->lock); 111 205 112 - while(rw->counter != 0); 206 + while (rw->counter != 0) 207 + cpu_relax(); 113 208 114 209 goto retry; 115 210 } ··· 114 215 /* got it. 
now leave without unlocking */ 115 216 rw->counter = -1; /* remember we are locked */ 116 217 } 117 - #endif /* CONFIG_DEBUG_RWLOCK */ 118 218 119 219 /* write_unlock is absolutely trivial - we don't have to wait for anything */ 120 220 121 - static __inline__ void _raw_write_unlock(rwlock_t *rw) 221 + static __inline__ void __raw_write_unlock(raw_rwlock_t *rw) 122 222 { 123 223 rw->counter = 0; 124 - _raw_spin_unlock(&rw->lock); 224 + __raw_spin_unlock(&rw->lock); 125 225 } 126 226 127 - #ifdef CONFIG_DEBUG_RWLOCK 128 - extern int _dbg_write_trylock(rwlock_t * rw, const char *bfile, int bline); 129 - #define _raw_write_trylock(rw) _dbg_write_trylock(rw, __FILE__, __LINE__) 130 - #else 131 - static __inline__ int _raw_write_trylock(rwlock_t *rw) 227 + static __inline__ int __raw_write_trylock(raw_rwlock_t *rw) 132 228 { 133 - _raw_spin_lock(&rw->lock); 229 + __raw_spin_lock(&rw->lock); 134 230 if (rw->counter != 0) { 135 231 /* this basically never happens */ 136 - _raw_spin_unlock(&rw->lock); 232 + __raw_spin_unlock(&rw->lock); 137 233 138 234 return 0; 139 235 } ··· 137 243 rw->counter = -1; /* remember we are locked */ 138 244 return 1; 139 245 } 140 - #endif /* CONFIG_DEBUG_RWLOCK */ 141 246 142 - static __inline__ int is_read_locked(rwlock_t *rw) 247 + static __inline__ int __raw_is_read_locked(raw_rwlock_t *rw) 143 248 { 144 249 return rw->counter > 0; 145 250 } 146 251 147 - static __inline__ int is_write_locked(rwlock_t *rw) 252 + static __inline__ int __raw_is_write_locked(raw_rwlock_t *rw) 148 253 { 149 254 return rw->counter < 0; 150 255 }
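The parisc rwlock above is just a raw spinlock protecting a counter: readers briefly take the spinlock to bump the count, while a successful writer keeps the spinlock held and parks -1 in the counter. The same control flow, minus the IRQ handling, in portable C (illustrative; an `atomic_flag` replaces the ldcw lock and the `sketch_*` names are assumptions):

```c
#include <stdatomic.h>

/* counter > 0: readers; counter == -1: writer; counter == 0: free */
typedef struct {
	atomic_flag lock;	/* stands in for the ldcw raw spinlock */
	volatile int counter;
} sketch_rwlock_t;

static void sketch_spin_lock(atomic_flag *l)
{
	while (atomic_flag_test_and_set(l))
		;	/* cpu_relax() in the kernel */
}

static void sketch_spin_unlock(atomic_flag *l)
{
	atomic_flag_clear(l);
}

static void sketch_read_lock(sketch_rwlock_t *rw)
{
	sketch_spin_lock(&rw->lock);
	rw->counter++;
	sketch_spin_unlock(&rw->lock);
}

static void sketch_read_unlock(sketch_rwlock_t *rw)
{
	sketch_spin_lock(&rw->lock);
	rw->counter--;
	sketch_spin_unlock(&rw->lock);
}

static int sketch_write_trylock(sketch_rwlock_t *rw)
{
	sketch_spin_lock(&rw->lock);
	if (rw->counter != 0) {
		sketch_spin_unlock(&rw->lock);
		return 0;
	}
	rw->counter = -1;	/* keep the spinlock held, as parisc does */
	return 1;
}

static void sketch_write_unlock(sketch_rwlock_t *rw)
{
	rw->counter = 0;
	sketch_spin_unlock(&rw->lock);
}
```

Holding the inner spinlock for the whole write-side critical section is what makes write_unlock trivial: no reader can even reach the counter until the writer drops the spinlock.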
include/asm-parisc/spinlock_types.h | +21
··· 1 + #ifndef __ASM_SPINLOCK_TYPES_H 2 + #define __ASM_SPINLOCK_TYPES_H 3 + 4 + #ifndef __LINUX_SPINLOCK_TYPES_H 5 + # error "please don't include this file directly" 6 + #endif 7 + 8 + typedef struct { 9 + volatile unsigned int lock[4]; 10 + } raw_spinlock_t; 11 + 12 + #define __RAW_SPIN_LOCK_UNLOCKED { { 1, 1, 1, 1 } } 13 + 14 + typedef struct { 15 + raw_spinlock_t lock; 16 + volatile int counter; 17 + } raw_rwlock_t; 18 + 19 + #define __RAW_RW_LOCK_UNLOCKED { __RAW_SPIN_LOCK_UNLOCKED, 0 } 20 + 21 + #endif
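The lock[4] array in the parisc raw_spinlock_t exists only so that a 16-byte-aligned word can always be found inside it, since ldcw requires that alignment on at least some PA processors; finding it is what the kernel's __ldcw_align() helper does. The alignment arithmetic, sketched (the `sketch_*` names are illustrative, not the kernel helper):

```c
#include <stdint.h>

typedef struct { volatile unsigned int lock[4]; } sketch_raw_spinlock_t;

/* Round the array's address up to the next 16-byte boundary; the result
 * always stays inside lock[4] because the array itself spans 16 bytes. */
static volatile unsigned int *sketch_ldcw_align(sketch_raw_spinlock_t *x)
{
	uintptr_t a = (uintptr_t)&x->lock[0];

	return (volatile unsigned int *)((a + 15) & ~(uintptr_t)15);
}
```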
include/asm-parisc/system.h | +1 -23
··· 160 160 }) 161 161 162 162 #ifdef CONFIG_SMP 163 - /* 164 - * Your basic SMP spinlocks, allowing only a single CPU anywhere 165 - */ 166 - 167 - typedef struct { 168 - volatile unsigned int lock[4]; 169 - #ifdef CONFIG_DEBUG_SPINLOCK 170 - unsigned long magic; 171 - volatile unsigned int babble; 172 - const char *module; 173 - char *bfile; 174 - int bline; 175 - int oncpu; 176 - void *previous; 177 - struct task_struct * task; 178 - #endif 179 - #ifdef CONFIG_PREEMPT 180 - unsigned int break_lock; 181 - #endif 182 - } spinlock_t; 183 - 184 - #define __lock_aligned __attribute__((__section__(".data.lock_aligned"))) 185 - 163 + # define __lock_aligned __attribute__((__section__(".data.lock_aligned"))) 186 164 #endif 187 165 188 166 #define KERNEL_START (0x10100000 - 0x1000)
include/asm-ppc/spinlock.h | +19 -70
··· 5 5 6 6 /* 7 7 * Simple spin lock operations. 8 + * 9 + * (the type definitions are in asm/spinlock_types.h) 8 10 */ 9 11 10 - typedef struct { 11 - volatile unsigned long lock; 12 - #ifdef CONFIG_DEBUG_SPINLOCK 13 - volatile unsigned long owner_pc; 14 - volatile unsigned long owner_cpu; 15 - #endif 16 - #ifdef CONFIG_PREEMPT 17 - unsigned int break_lock; 18 - #endif 19 - } spinlock_t; 12 + #define __raw_spin_is_locked(x) ((x)->lock != 0) 13 + #define __raw_spin_unlock_wait(lock) \ 14 + do { while (__raw_spin_is_locked(lock)) cpu_relax(); } while (0) 15 + #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock) 20 16 21 - #ifdef __KERNEL__ 22 - #ifdef CONFIG_DEBUG_SPINLOCK 23 - #define SPINLOCK_DEBUG_INIT , 0, 0 24 - #else 25 - #define SPINLOCK_DEBUG_INIT /* */ 26 - #endif 27 - 28 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 SPINLOCK_DEBUG_INIT } 29 - 30 - #define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while(0) 31 - #define spin_is_locked(x) ((x)->lock != 0) 32 - #define spin_unlock_wait(x) do { barrier(); } while(spin_is_locked(x)) 33 - #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 34 - 35 - #ifndef CONFIG_DEBUG_SPINLOCK 36 - 37 - static inline void _raw_spin_lock(spinlock_t *lock) 17 + static inline void __raw_spin_lock(raw_spinlock_t *lock) 38 18 { 39 19 unsigned long tmp; 40 20 41 21 __asm__ __volatile__( 42 - "b 1f # spin_lock\n\ 22 + "b 1f # __raw_spin_lock\n\ 43 23 2: lwzx %0,0,%1\n\ 44 24 cmpwi 0,%0,0\n\ 45 25 bne+ 2b\n\ ··· 35 55 : "cr0", "memory"); 36 56 } 37 57 38 - static inline void _raw_spin_unlock(spinlock_t *lock) 58 + static inline void __raw_spin_unlock(raw_spinlock_t *lock) 39 59 { 40 - __asm__ __volatile__("eieio # spin_unlock": : :"memory"); 60 + __asm__ __volatile__("eieio # __raw_spin_unlock": : :"memory"); 41 61 lock->lock = 0; 42 62 } 43 63 44 - #define _raw_spin_trylock(l) (!test_and_set_bit(0,&(l)->lock)) 45 - 46 - #else 47 - 48 - extern void _raw_spin_lock(spinlock_t *lock); 49 - extern void
_raw_spin_unlock(spinlock_t *lock); 50 - extern int _raw_spin_trylock(spinlock_t *lock); 51 - 52 - #endif 64 + #define __raw_spin_trylock(l) (!test_and_set_bit(0,&(l)->lock)) 53 65 54 66 /* 55 67 * Read-write spinlocks, allowing multiple readers ··· 53 81 * irq-safe write-lock, but readers can get non-irqsafe 54 82 * read-locks. 55 83 */ 56 - typedef struct { 57 - volatile signed int lock; 58 - #ifdef CONFIG_PREEMPT 59 - unsigned int break_lock; 60 - #endif 61 - } rwlock_t; 62 84 63 - #define RW_LOCK_UNLOCKED (rwlock_t) { 0 } 64 - #define rwlock_init(lp) do { *(lp) = RW_LOCK_UNLOCKED; } while(0) 85 + #define __raw_read_can_lock(rw) ((rw)->lock >= 0) 86 + #define __raw_write_can_lock(rw) (!(rw)->lock) 65 87 66 - #define read_can_lock(rw) ((rw)->lock >= 0) 67 - #define write_can_lock(rw) (!(rw)->lock) 68 - 69 - #ifndef CONFIG_DEBUG_SPINLOCK 70 - 71 - static __inline__ int _raw_read_trylock(rwlock_t *rw) 88 + static __inline__ int __raw_read_trylock(raw_rwlock_t *rw) 72 89 { 73 90 signed int tmp; 74 91 ··· 77 116 return tmp > 0; 78 117 } 79 118 80 - static __inline__ void _raw_read_lock(rwlock_t *rw) 119 + static __inline__ void __raw_read_lock(raw_rwlock_t *rw) 81 120 { 82 121 signed int tmp; 83 122 ··· 98 137 : "cr0", "memory"); 99 138 } 100 139 101 - static __inline__ void _raw_read_unlock(rwlock_t *rw) 140 + static __inline__ void __raw_read_unlock(raw_rwlock_t *rw) 102 141 { 103 142 signed int tmp; 104 143 ··· 114 153 : "cr0", "memory"); 115 154 } 116 155 117 - static __inline__ int _raw_write_trylock(rwlock_t *rw) 156 + static __inline__ int __raw_write_trylock(raw_rwlock_t *rw) 118 157 { 119 158 signed int tmp; 120 159 ··· 134 173 return tmp == 0; 135 174 } 136 175 137 - static __inline__ void _raw_write_lock(rwlock_t *rw) 176 + static __inline__ void __raw_write_lock(raw_rwlock_t *rw) 138 177 { 139 178 signed int tmp; 140 179 ··· 155 194 : "cr0", "memory"); 156 195 } 157 196 158 - static __inline__ void _raw_write_unlock(rwlock_t *rw) 197 + static __inline__ 
void __raw_write_unlock(raw_rwlock_t *rw) 159 198 { 160 199 __asm__ __volatile__("eieio # write_unlock": : :"memory"); 161 200 rw->lock = 0; 162 201 } 163 202 164 - #else 165 - 166 - extern void _raw_read_lock(rwlock_t *rw); 167 - extern void _raw_read_unlock(rwlock_t *rw); 168 - extern void _raw_write_lock(rwlock_t *rw); 169 - extern void _raw_write_unlock(rwlock_t *rw); 170 - extern int _raw_read_trylock(rwlock_t *rw); 171 - extern int _raw_write_trylock(rwlock_t *rw); 172 - 173 - #endif 174 - 175 203 #endif /* __ASM_SPINLOCK_H */ 176 - #endif /* __KERNEL__ */
include/asm-ppc/spinlock_types.h | +20
+#ifndef __ASM_SPINLOCK_TYPES_H
+#define __ASM_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned long lock;
+} raw_spinlock_t;
+
+#define __RAW_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile signed int lock;
+} raw_rwlock_t;
+
+#define __RAW_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
include/asm-ppc64/spinlock.h | +89 -108
···
  * modify it under the terms of the GNU General Public License
  * as published by the Free Software Foundation; either version
  * 2 of the License, or (at your option) any later version.
+ *
+ * (the type definitions are in asm/spinlock_types.h)
  */
 #include <linux/config.h>
 #include <asm/paca.h>
 #include <asm/hvcall.h>
 #include <asm/iSeries/HvCall.h>
 
-typedef struct {
-	volatile unsigned int lock;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} spinlock_t;
+#define __raw_spin_is_locked(x)		((x)->slock != 0)
 
-typedef struct {
-	volatile signed int lock;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} rwlock_t;
-
-#ifdef __KERNEL__
-#define SPIN_LOCK_UNLOCKED	(spinlock_t) { 0 }
-
-#define spin_is_locked(x)	((x)->lock != 0)
-#define spin_lock_init(x)	do { *(x) = SPIN_LOCK_UNLOCKED; } while(0)
-
-static __inline__ void _raw_spin_unlock(spinlock_t *lock)
+/*
+ * This returns the old value in the lock, so we succeeded
+ * in getting the lock if the return value is 0.
+ */
+static __inline__ unsigned long __spin_trylock(raw_spinlock_t *lock)
 {
-	__asm__ __volatile__("lwsync	# spin_unlock": : :"memory");
-	lock->lock = 0;
+	unsigned long tmp, tmp2;
+
+	__asm__ __volatile__(
+"	lwz		%1,%3(13)	# __spin_trylock\n\
+1:	lwarx		%0,0,%2\n\
+	cmpwi		0,%0,0\n\
+	bne-		2f\n\
+	stwcx.		%1,0,%2\n\
+	bne-		1b\n\
+	isync\n\
+2:"	: "=&r" (tmp), "=&r" (tmp2)
+	: "r" (&lock->slock), "i" (offsetof(struct paca_struct, lock_token))
+	: "cr0", "memory");
+
+	return tmp;
+}
+
+static int __inline__ __raw_spin_trylock(raw_spinlock_t *lock)
+{
+	return __spin_trylock(lock) == 0;
 }
 
 /*
···
 #if defined(CONFIG_PPC_SPLPAR) || defined(CONFIG_PPC_ISERIES)
 /* We only yield to the hypervisor if we are in shared processor mode */
 #define SHARED_PROCESSOR (get_paca()->lppaca.shared_proc)
-extern void __spin_yield(spinlock_t *lock);
-extern void __rw_yield(rwlock_t *lock);
+extern void __spin_yield(raw_spinlock_t *lock);
+extern void __rw_yield(raw_rwlock_t *lock);
 #else /* SPLPAR || ISERIES */
 #define __spin_yield(x)	barrier()
 #define __rw_yield(x)	barrier()
 #define SHARED_PROCESSOR	0
 #endif
-extern void spin_unlock_wait(spinlock_t *lock);
 
-/*
- * This returns the old value in the lock, so we succeeded
- * in getting the lock if the return value is 0.
- */
-static __inline__ unsigned long __spin_trylock(spinlock_t *lock)
-{
-	unsigned long tmp, tmp2;
-
-	__asm__ __volatile__(
-"	lwz		%1,%3(13)	# __spin_trylock\n\
-1:	lwarx		%0,0,%2\n\
-	cmpwi		0,%0,0\n\
-	bne-		2f\n\
-	stwcx.		%1,0,%2\n\
-	bne-		1b\n\
-	isync\n\
-2:"	: "=&r" (tmp), "=&r" (tmp2)
-	: "r" (&lock->lock), "i" (offsetof(struct paca_struct, lock_token))
-	: "cr0", "memory");
-
-	return tmp;
-}
-
-static int __inline__ _raw_spin_trylock(spinlock_t *lock)
-{
-	return __spin_trylock(lock) == 0;
-}
-
-static void __inline__ _raw_spin_lock(spinlock_t *lock)
+static void __inline__ __raw_spin_lock(raw_spinlock_t *lock)
 {
 	while (1) {
 		if (likely(__spin_trylock(lock) == 0))
···
 			HMT_low();
 			if (SHARED_PROCESSOR)
 				__spin_yield(lock);
-		} while (unlikely(lock->lock != 0));
+		} while (unlikely(lock->slock != 0));
 		HMT_medium();
 	}
 }
 
-static void __inline__ _raw_spin_lock_flags(spinlock_t *lock, unsigned long flags)
+static void __inline__ __raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long flags)
 {
 	unsigned long flags_dis;
 
···
 			HMT_low();
 			if (SHARED_PROCESSOR)
 				__spin_yield(lock);
-		} while (unlikely(lock->lock != 0));
+		} while (unlikely(lock->slock != 0));
 		HMT_medium();
 		local_irq_restore(flags_dis);
 	}
 }
+
+static __inline__ void __raw_spin_unlock(raw_spinlock_t *lock)
+{
+	__asm__ __volatile__("lwsync	# __raw_spin_unlock": : :"memory");
+	lock->slock = 0;
+}
+
+extern void __raw_spin_unlock_wait(raw_spinlock_t *lock);
 
 /*
  * Read-write spinlocks, allowing multiple readers
···
  * irq-safe write-lock, but readers can get non-irqsafe
  * read-locks.
  */
-#define RW_LOCK_UNLOCKED (rwlock_t) { 0 }
 
-#define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0)
-
-#define read_can_lock(rw)	((rw)->lock >= 0)
-#define write_can_lock(rw)	(!(rw)->lock)
-
-static __inline__ void _raw_write_unlock(rwlock_t *rw)
-{
-	__asm__ __volatile__("lwsync	# write_unlock": : :"memory");
-	rw->lock = 0;
-}
+#define __raw_read_can_lock(rw)		((rw)->lock >= 0)
+#define __raw_write_can_lock(rw)	(!(rw)->lock)
 
 /*
  * This returns the old value in the lock + 1,
  * so we got a read lock if the return value is > 0.
  */
-static long __inline__ __read_trylock(rwlock_t *rw)
+static long __inline__ __read_trylock(raw_rwlock_t *rw)
 {
 	long tmp;
 
···
 	return tmp;
 }
 
-static int __inline__ _raw_read_trylock(rwlock_t *rw)
-{
-	return __read_trylock(rw) > 0;
-}
-
-static void __inline__ _raw_read_lock(rwlock_t *rw)
-{
-	while (1) {
-		if (likely(__read_trylock(rw) > 0))
-			break;
-		do {
-			HMT_low();
-			if (SHARED_PROCESSOR)
-				__rw_yield(rw);
-		} while (unlikely(rw->lock < 0));
-		HMT_medium();
-	}
-}
-
-static void __inline__ _raw_read_unlock(rwlock_t *rw)
-{
-	long tmp;
-
-	__asm__ __volatile__(
-	"eieio			# read_unlock\n\
-1:	lwarx		%0,0,%1\n\
-	addic		%0,%0,-1\n\
-	stwcx.		%0,0,%1\n\
-	bne-		1b"
-	: "=&r"(tmp)
-	: "r"(&rw->lock)
-	: "cr0", "memory");
-}
-
 /*
  * This returns the old value in the lock,
  * so we got the write lock if the return value is 0.
  */
-static __inline__ long __write_trylock(rwlock_t *rw)
+static __inline__ long __write_trylock(raw_rwlock_t *rw)
 {
 	long tmp, tmp2;
 
···
 	return tmp;
 }
 
-static int __inline__ _raw_write_trylock(rwlock_t *rw)
+static void __inline__ __raw_read_lock(raw_rwlock_t *rw)
 {
-	return __write_trylock(rw) == 0;
+	while (1) {
+		if (likely(__read_trylock(rw) > 0))
+			break;
+		do {
+			HMT_low();
+			if (SHARED_PROCESSOR)
+				__rw_yield(rw);
+		} while (unlikely(rw->lock < 0));
+		HMT_medium();
+	}
 }
 
-static void __inline__ _raw_write_lock(rwlock_t *rw)
+static void __inline__ __raw_write_lock(raw_rwlock_t *rw)
 {
 	while (1) {
 		if (likely(__write_trylock(rw) == 0))
···
 	}
 }
 
-#endif /* __KERNEL__ */
+static int __inline__ __raw_read_trylock(raw_rwlock_t *rw)
+{
+	return __read_trylock(rw) > 0;
+}
+
+static int __inline__ __raw_write_trylock(raw_rwlock_t *rw)
+{
+	return __write_trylock(rw) == 0;
+}
+
+static void __inline__ __raw_read_unlock(raw_rwlock_t *rw)
+{
+	long tmp;
+
+	__asm__ __volatile__(
+	"eieio			# read_unlock\n\
+1:	lwarx		%0,0,%1\n\
+	addic		%0,%0,-1\n\
+	stwcx.		%0,0,%1\n\
+	bne-		1b"
+	: "=&r"(tmp)
+	: "r"(&rw->lock)
+	: "cr0", "memory");
+}
+
+static __inline__ void __raw_write_unlock(raw_rwlock_t *rw)
+{
+	__asm__ __volatile__("lwsync	# write_unlock": : :"memory");
+	rw->lock = 0;
+}
+
 #endif /* __ASM_SPINLOCK_H */
include/asm-ppc64/spinlock_types.h | +20
+#ifndef __ASM_SPINLOCK_TYPES_H
+#define __ASM_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned int slock;
+} raw_spinlock_t;
+
+#define __RAW_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile signed int lock;
+} raw_rwlock_t;
+
+#define __RAW_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
include/asm-s390/spinlock.h | +23 -40
···
  * on the local processor, one does not.
  *
  * We make no fairness assumptions. They have a cost.
+ *
+ * (the type definitions are in asm/spinlock_types.h)
  */
 
-typedef struct {
-	volatile unsigned int lock;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} __attribute__ ((aligned (4))) spinlock_t;
+#define __raw_spin_is_locked(x) ((x)->lock != 0)
+#define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
+#define __raw_spin_unlock_wait(lock) \
+	do { while (__raw_spin_is_locked(lock)) cpu_relax(); } while (0)
 
-#define SPIN_LOCK_UNLOCKED	(spinlock_t) { 0 }
-#define spin_lock_init(lp)	do { (lp)->lock = 0; } while(0)
-#define spin_unlock_wait(lp)	do { barrier(); } while(((volatile spinlock_t *)(lp))->lock)
-#define spin_is_locked(x)	((x)->lock != 0)
-#define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
+extern void _raw_spin_lock_wait(raw_spinlock_t *lp, unsigned int pc);
+extern int _raw_spin_trylock_retry(raw_spinlock_t *lp, unsigned int pc);
 
-extern void _raw_spin_lock_wait(spinlock_t *lp, unsigned int pc);
-extern int _raw_spin_trylock_retry(spinlock_t *lp, unsigned int pc);
-
-static inline void _raw_spin_lock(spinlock_t *lp)
+static inline void __raw_spin_lock(raw_spinlock_t *lp)
 {
 	unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
 
···
 		_raw_spin_lock_wait(lp, pc);
 }
 
-static inline int _raw_spin_trylock(spinlock_t *lp)
+static inline int __raw_spin_trylock(raw_spinlock_t *lp)
 {
 	unsigned long pc = 1 | (unsigned long) __builtin_return_address(0);
 
···
 	return _raw_spin_trylock_retry(lp, pc);
 }
 
-static inline void _raw_spin_unlock(spinlock_t *lp)
+static inline void __raw_spin_unlock(raw_spinlock_t *lp)
 {
 	_raw_compare_and_swap(&lp->lock, lp->lock, 0);
 }
···
  * irq-safe write-lock, but readers can get non-irqsafe
  * read-locks.
  */
-typedef struct {
-	volatile unsigned int lock;
-	volatile unsigned long owner_pc;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} rwlock_t;
-
-#define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0 }
-
-#define rwlock_init(x)	do { *(x) = RW_LOCK_UNLOCKED; } while(0)
 
 /**
  * read_can_lock - would read_trylock() succeed?
  * @lock: the rwlock in question.
  */
-#define read_can_lock(x) ((int)(x)->lock >= 0)
+#define __raw_read_can_lock(x) ((int)(x)->lock >= 0)
 
 /**
  * write_can_lock - would write_trylock() succeed?
  * @lock: the rwlock in question.
  */
-#define write_can_lock(x) ((x)->lock == 0)
+#define __raw_write_can_lock(x) ((x)->lock == 0)
 
-extern void _raw_read_lock_wait(rwlock_t *lp);
-extern int _raw_read_trylock_retry(rwlock_t *lp);
-extern void _raw_write_lock_wait(rwlock_t *lp);
-extern int _raw_write_trylock_retry(rwlock_t *lp);
+extern void _raw_read_lock_wait(raw_rwlock_t *lp);
+extern int _raw_read_trylock_retry(raw_rwlock_t *lp);
+extern void _raw_write_lock_wait(raw_rwlock_t *lp);
+extern int _raw_write_trylock_retry(raw_rwlock_t *lp);
 
-static inline void _raw_read_lock(rwlock_t *rw)
+static inline void __raw_read_lock(raw_rwlock_t *rw)
 {
 	unsigned int old;
 	old = rw->lock & 0x7fffffffU;
···
 		_raw_read_lock_wait(rw);
 }
 
-static inline void _raw_read_unlock(rwlock_t *rw)
+static inline void __raw_read_unlock(raw_rwlock_t *rw)
 {
 	unsigned int old, cmp;
 
···
 	} while (cmp != old);
 }
 
-static inline void _raw_write_lock(rwlock_t *rw)
+static inline void __raw_write_lock(raw_rwlock_t *rw)
 {
 	if (unlikely(_raw_compare_and_swap(&rw->lock, 0, 0x80000000) != 0))
 		_raw_write_lock_wait(rw);
 }
 
-static inline void _raw_write_unlock(rwlock_t *rw)
+static inline void __raw_write_unlock(raw_rwlock_t *rw)
 {
 	_raw_compare_and_swap(&rw->lock, 0x80000000, 0);
 }
 
-static inline int _raw_read_trylock(rwlock_t *rw)
+static inline int __raw_read_trylock(raw_rwlock_t *rw)
 {
 	unsigned int old;
 	old = rw->lock & 0x7fffffffU;
···
 	return _raw_read_trylock_retry(rw);
 }
 
-static inline int _raw_write_trylock(rwlock_t *rw)
+static inline int __raw_write_trylock(raw_rwlock_t *rw)
 {
 	if (likely(_raw_compare_and_swap(&rw->lock, 0, 0x80000000) == 0))
 		return 1;
include/asm-s390/spinlock_types.h | +21
+#ifndef __ASM_SPINLOCK_TYPES_H
+#define __ASM_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned int lock;
+} __attribute__ ((aligned (4))) raw_spinlock_t;
+
+#define __RAW_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile unsigned int lock;
+	volatile unsigned int owner_pc;
+} raw_rwlock_t;
+
+#define __RAW_RW_LOCK_UNLOCKED		{ 0, 0 }
+
+#endif
include/asm-sh/spinlock.h | +19 -40
···
 /*
  * Your basic SMP spinlocks, allowing only a single CPU anywhere
  */
-typedef struct {
-	volatile unsigned long lock;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} spinlock_t;
 
-#define SPIN_LOCK_UNLOCKED	(spinlock_t) { 0 }
-
-#define spin_lock_init(x)	do { *(x) = SPIN_LOCK_UNLOCKED; } while(0)
-
-#define spin_is_locked(x)	((x)->lock != 0)
-#define spin_unlock_wait(x)	do { barrier(); } while (spin_is_locked(x))
-#define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
+#define __raw_spin_is_locked(x)	((x)->lock != 0)
+#define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
+#define __raw_spin_unlock_wait(x) \
+	do { cpu_relax(); } while (__raw_spin_is_locked(x))
 
 /*
  * Simple spin lock operations.  There are two variants, one clears IRQ's
···
  *
  * We make no fairness assumptions. They have a cost.
  */
-static inline void _raw_spin_lock(spinlock_t *lock)
+static inline void __raw_spin_lock(raw_spinlock_t *lock)
 {
 	__asm__ __volatile__ (
 		"1:\n\t"
···
 	);
 }
 
-static inline void _raw_spin_unlock(spinlock_t *lock)
+static inline void __raw_spin_unlock(raw_spinlock_t *lock)
 {
 	assert_spin_locked(lock);
 
 	lock->lock = 0;
 }
 
-#define _raw_spin_trylock(x) (!test_and_set_bit(0, &(x)->lock))
+#define __raw_spin_trylock(x) (!test_and_set_bit(0, &(x)->lock))
 
 /*
  * Read-write spinlocks, allowing multiple readers but only one writer.
···
  * needs to get a irq-safe write-lock, but readers can get non-irqsafe
  * read-locks.
  */
-typedef struct {
-	spinlock_t lock;
-	atomic_t counter;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} rwlock_t;
 
-#define RW_LOCK_BIAS		0x01000000
-#define RW_LOCK_UNLOCKED	(rwlock_t) { { 0 }, { RW_LOCK_BIAS } }
-#define rwlock_init(x)		do { *(x) = RW_LOCK_UNLOCKED; } while (0)
-
-static inline void _raw_read_lock(rwlock_t *rw)
+static inline void __raw_read_lock(raw_rwlock_t *rw)
 {
-	_raw_spin_lock(&rw->lock);
+	__raw_spin_lock(&rw->lock);
 
 	atomic_inc(&rw->counter);
 
-	_raw_spin_unlock(&rw->lock);
+	__raw_spin_unlock(&rw->lock);
 }
 
-static inline void _raw_read_unlock(rwlock_t *rw)
+static inline void __raw_read_unlock(raw_rwlock_t *rw)
 {
-	_raw_spin_lock(&rw->lock);
+	__raw_spin_lock(&rw->lock);
 
 	atomic_dec(&rw->counter);
 
-	_raw_spin_unlock(&rw->lock);
+	__raw_spin_unlock(&rw->lock);
 }
 
-static inline void _raw_write_lock(rwlock_t *rw)
+static inline void __raw_write_lock(raw_rwlock_t *rw)
 {
-	_raw_spin_lock(&rw->lock);
+	__raw_spin_lock(&rw->lock);
 	atomic_set(&rw->counter, -1);
 }
 
-static inline void _raw_write_unlock(rwlock_t *rw)
+static inline void __raw_write_unlock(raw_rwlock_t *rw)
 {
 	atomic_set(&rw->counter, 0);
-	_raw_spin_unlock(&rw->lock);
+	__raw_spin_unlock(&rw->lock);
 }
 
-#define _raw_read_trylock(lock) generic_raw_read_trylock(lock)
+#define __raw_read_trylock(lock) generic__raw_read_trylock(lock)
 
-static inline int _raw_write_trylock(rwlock_t *rw)
+static inline int __raw_write_trylock(raw_rwlock_t *rw)
 {
 	if (atomic_sub_and_test(RW_LOCK_BIAS, &rw->counter))
 		return 1;
···
 }
 
 #endif /* __ASM_SH_SPINLOCK_H */
-
include/asm-sh/spinlock_types.h | +22
+#ifndef __ASM_SH_SPINLOCK_TYPES_H
+#define __ASM_SH_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned long lock;
+} raw_spinlock_t;
+
+#define __SPIN_LOCK_UNLOCKED		{ 0 }
+
+typedef struct {
+	raw_spinlock_t lock;
+	atomic_t counter;
+} raw_rwlock_t;
+
+#define RW_LOCK_BIAS			0x01000000
+#define __RAW_RW_LOCK_UNLOCKED		{ { 0 }, { RW_LOCK_BIAS } }
+
+#endif
include/asm-sparc/spinlock.h | +21 -119
···
 
 #include <asm/psr.h>
 
-#ifdef CONFIG_DEBUG_SPINLOCK
-struct _spinlock_debug {
-	unsigned char lock;
-	unsigned long owner_pc;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-};
-typedef struct _spinlock_debug spinlock_t;
+#define __raw_spin_is_locked(lock) (*((volatile unsigned char *)(lock)) != 0)
 
-#define SPIN_LOCK_UNLOCKED	(spinlock_t) { 0, 0 }
-#define spin_lock_init(lp)	do { *(lp)= SPIN_LOCK_UNLOCKED; } while(0)
-#define spin_is_locked(lp)	(*((volatile unsigned char *)(&((lp)->lock))) != 0)
-#define spin_unlock_wait(lp)	do { barrier(); } while(*(volatile unsigned char *)(&(lp)->lock))
+#define __raw_spin_unlock_wait(lock) \
+	do { while (__raw_spin_is_locked(lock)) cpu_relax(); } while (0)
 
-extern void _do_spin_lock(spinlock_t *lock, char *str);
-extern int _spin_trylock(spinlock_t *lock);
-extern void _do_spin_unlock(spinlock_t *lock);
-
-#define _raw_spin_trylock(lp)	_spin_trylock(lp)
-#define _raw_spin_lock(lock)	_do_spin_lock(lock, "spin_lock")
-#define _raw_spin_unlock(lock)	_do_spin_unlock(lock)
-
-struct _rwlock_debug {
-	volatile unsigned int lock;
-	unsigned long owner_pc;
-	unsigned long reader_pc[NR_CPUS];
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-};
-typedef struct _rwlock_debug rwlock_t;
-
-#define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0, {0} }
-
-#define rwlock_init(lp) do { *(lp)= RW_LOCK_UNLOCKED; } while(0)
-
-extern void _do_read_lock(rwlock_t *rw, char *str);
-extern void _do_read_unlock(rwlock_t *rw, char *str);
-extern void _do_write_lock(rwlock_t *rw, char *str);
-extern void _do_write_unlock(rwlock_t *rw);
-
-#define _raw_read_lock(lock)	\
-do {	unsigned long flags;	\
-	local_irq_save(flags);	\
-	_do_read_lock(lock, "read_lock");	\
-	local_irq_restore(flags);	\
-} while(0)
-
-#define _raw_read_unlock(lock)	\
-do {	unsigned long flags;	\
-	local_irq_save(flags);	\
-	_do_read_unlock(lock, "read_unlock");	\
-	local_irq_restore(flags);	\
-} while(0)
-
-#define _raw_write_lock(lock)	\
-do {	unsigned long flags;	\
-	local_irq_save(flags);	\
-	_do_write_lock(lock, "write_lock");	\
-	local_irq_restore(flags);	\
-} while(0)
-
-#define _raw_write_unlock(lock)	\
-do {	unsigned long flags;	\
-	local_irq_save(flags);	\
-	_do_write_unlock(lock);	\
-	local_irq_restore(flags);	\
-} while(0)
-
-#else /* !CONFIG_DEBUG_SPINLOCK */
-
-typedef struct {
-	unsigned char lock;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} spinlock_t;
-
-#define SPIN_LOCK_UNLOCKED	(spinlock_t) { 0 }
-
-#define spin_lock_init(lock)	(*((unsigned char *)(lock)) = 0)
-#define spin_is_locked(lock)	(*((volatile unsigned char *)(lock)) != 0)
-
-#define spin_unlock_wait(lock) \
-do { \
-	barrier(); \
-} while(*((volatile unsigned char *)lock))
-
-extern __inline__ void _raw_spin_lock(spinlock_t *lock)
+extern __inline__ void __raw_spin_lock(raw_spinlock_t *lock)
 {
 	__asm__ __volatile__(
 	"\n1:\n\t"
···
 	: "g2", "memory", "cc");
 }
 
-extern __inline__ int _raw_spin_trylock(spinlock_t *lock)
+extern __inline__ int __raw_spin_trylock(raw_spinlock_t *lock)
 {
 	unsigned int result;
 	__asm__ __volatile__("ldstub [%1], %0"
···
 	return (result == 0);
 }
 
-extern __inline__ void _raw_spin_unlock(spinlock_t *lock)
+extern __inline__ void __raw_spin_unlock(raw_spinlock_t *lock)
 {
 	__asm__ __volatile__("stb %%g0, [%0]" : : "r" (lock) : "memory");
 }
···
  *
  * XXX This might create some problems with my dual spinlock
  * XXX scheme, deadlocks etc. -DaveM
- */
-typedef struct {
-	volatile unsigned int lock;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} rwlock_t;
-
-#define RW_LOCK_UNLOCKED (rwlock_t) { 0 }
-
-#define rwlock_init(lp) do { *(lp)= RW_LOCK_UNLOCKED; } while(0)
-
-
-/* Sort of like atomic_t's on Sparc, but even more clever.
+ *
+ * Sort of like atomic_t's on Sparc, but even more clever.
  *
  * ------------------------------------
- * | 24-bit counter           | wlock |  rwlock_t
+ * | 24-bit counter           | wlock |  raw_rwlock_t
  * ------------------------------------
  *  31                         8  7    0
  *
···
  *
  * Unfortunately this scheme limits us to ~16,000,000 cpus.
  */
-extern __inline__ void _read_lock(rwlock_t *rw)
+extern __inline__ void __read_lock(raw_rwlock_t *rw)
 {
-	register rwlock_t *lp asm("g1");
+	register raw_rwlock_t *lp asm("g1");
 	lp = rw;
 	__asm__ __volatile__(
 	"mov	%%o7, %%g4\n\t"
···
 	: "g2", "g4", "memory", "cc");
 }
 
-#define _raw_read_lock(lock) \
+#define __raw_read_lock(lock) \
 do {	unsigned long flags; \
 	local_irq_save(flags); \
-	_read_lock(lock); \
+	__read_lock(lock); \
 	local_irq_restore(flags); \
 } while(0)
 
-extern __inline__ void _read_unlock(rwlock_t *rw)
+extern __inline__ void __read_unlock(raw_rwlock_t *rw)
 {
-	register rwlock_t *lp asm("g1");
+	register raw_rwlock_t *lp asm("g1");
 	lp = rw;
 	__asm__ __volatile__(
 	"mov	%%o7, %%g4\n\t"
···
 	: "g2", "g4", "memory", "cc");
 }
 
-#define _raw_read_unlock(lock) \
+#define __raw_read_unlock(lock) \
 do {	unsigned long flags; \
 	local_irq_save(flags); \
-	_read_unlock(lock); \
+	__read_unlock(lock); \
 	local_irq_restore(flags); \
 } while(0)
 
-extern __inline__ void _raw_write_lock(rwlock_t *rw)
+extern __inline__ void __raw_write_lock(raw_rwlock_t *rw)
 {
-	register rwlock_t *lp asm("g1");
+	register raw_rwlock_t *lp asm("g1");
 	lp = rw;
 	__asm__ __volatile__(
 	"mov	%%o7, %%g4\n\t"
···
 	: "g2", "g4", "memory", "cc");
 }
 
-#define _raw_write_unlock(rw)	do { (rw)->lock = 0; } while(0)
+#define __raw_write_unlock(rw)	do { (rw)->lock = 0; } while(0)
 
-#endif /* CONFIG_DEBUG_SPINLOCK */
-
-#define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
+#define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
 
 #endif /* !(__ASSEMBLY__) */
include/asm-sparc/spinlock_types.h | +20
+#ifndef __SPARC_SPINLOCK_TYPES_H
+#define __SPARC_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	unsigned char lock;
+} raw_spinlock_t;
+
+#define __RAW_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile unsigned int lock;
+} raw_rwlock_t;
+
+#define __RAW_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
include/asm-sparc64/spinlock.h | +21 -137
···
  * must be pre-V9 branches.
  */
 
-#ifndef CONFIG_DEBUG_SPINLOCK
+#define __raw_spin_is_locked(lp)	((lp)->lock != 0)
 
-typedef struct {
-	volatile unsigned char lock;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} spinlock_t;
-#define SPIN_LOCK_UNLOCKED	(spinlock_t) {0,}
+#define __raw_spin_unlock_wait(lp)	\
+	do {	rmb();			\
+	} while((lp)->lock)
 
-#define spin_lock_init(lp)	do { *(lp)= SPIN_LOCK_UNLOCKED; } while(0)
-#define spin_is_locked(lp)	((lp)->lock != 0)
-
-#define spin_unlock_wait(lp)	\
-do {	rmb();			\
-} while((lp)->lock)
-
-static inline void _raw_spin_lock(spinlock_t *lock)
+static inline void __raw_spin_lock(raw_spinlock_t *lock)
 {
 	unsigned long tmp;
 
···
 	: "memory");
 }
 
-static inline int _raw_spin_trylock(spinlock_t *lock)
+static inline int __raw_spin_trylock(raw_spinlock_t *lock)
 {
 	unsigned long result;
 
···
 	return (result == 0UL);
 }
 
-static inline void _raw_spin_unlock(spinlock_t *lock)
+static inline void __raw_spin_unlock(raw_spinlock_t *lock)
 {
 	__asm__ __volatile__(
 "	membar		#StoreStore | #LoadStore\n"
···
 	: "memory");
 }
 
-static inline void _raw_spin_lock_flags(spinlock_t *lock, unsigned long flags)
+static inline void __raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long flags)
 {
 	unsigned long tmp1, tmp2;
 
···
 	: "memory");
 }
 
-#else /* !(CONFIG_DEBUG_SPINLOCK) */
-
-typedef struct {
-	volatile unsigned char lock;
-	unsigned int owner_pc, owner_cpu;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} spinlock_t;
-#define SPIN_LOCK_UNLOCKED (spinlock_t) { 0, 0, 0xff }
-#define spin_lock_init(lp)	do { *(lp)= SPIN_LOCK_UNLOCKED; } while(0)
-#define spin_is_locked(__lock)	((__lock)->lock != 0)
-#define spin_unlock_wait(__lock)	\
-do { \
-	rmb(); \
-} while((__lock)->lock)
-
-extern void _do_spin_lock(spinlock_t *lock, char *str, unsigned long caller);
-extern void _do_spin_unlock(spinlock_t *lock);
-extern int _do_spin_trylock(spinlock_t *lock, unsigned long caller);
-
-#define _raw_spin_trylock(lp) \
-	_do_spin_trylock(lp, (unsigned long) __builtin_return_address(0))
-#define _raw_spin_lock(lock) \
-	_do_spin_lock(lock, "spin_lock", \
-		      (unsigned long) __builtin_return_address(0))
-#define _raw_spin_unlock(lock)	_do_spin_unlock(lock)
-#define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
-
-#endif /* CONFIG_DEBUG_SPINLOCK */
-
 /* Multi-reader locks, these are much saner than the 32-bit Sparc ones... */
 
-#ifndef CONFIG_DEBUG_SPINLOCK
-
-typedef struct {
-	volatile unsigned int lock;
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} rwlock_t;
-#define RW_LOCK_UNLOCKED	(rwlock_t) {0,}
-#define rwlock_init(lp) do { *(lp) = RW_LOCK_UNLOCKED; } while(0)
-
-static void inline __read_lock(rwlock_t *lock)
+static void inline __read_lock(raw_rwlock_t *lock)
 {
 	unsigned long tmp1, tmp2;
 
···
 	: "memory");
 }
 
-static void inline __read_unlock(rwlock_t *lock)
+static void inline __read_unlock(raw_rwlock_t *lock)
 {
 	unsigned long tmp1, tmp2;
 
···
 	: "memory");
 }
 
-static void inline __write_lock(rwlock_t *lock)
+static void inline __write_lock(raw_rwlock_t *lock)
 {
 	unsigned long mask, tmp1, tmp2;
 
···
 	: "memory");
 }
 
-static void inline __write_unlock(rwlock_t *lock)
+static void inline __write_unlock(raw_rwlock_t *lock)
 {
 	__asm__ __volatile__(
 "	membar		#LoadStore | #StoreStore\n"
···
 	: "memory");
 }
 
-static int inline __write_trylock(rwlock_t *lock)
+static int inline __write_trylock(raw_rwlock_t *lock)
 {
 	unsigned long mask, tmp1, tmp2, result;
 
···
 	return result;
 }
 
-#define _raw_read_lock(p)	__read_lock(p)
-#define _raw_read_unlock(p)	__read_unlock(p)
-#define _raw_write_lock(p)	__write_lock(p)
-#define _raw_write_unlock(p)	__write_unlock(p)
-#define _raw_write_trylock(p)	__write_trylock(p)
+#define __raw_read_lock(p)	__read_lock(p)
+#define __raw_read_unlock(p)	__read_unlock(p)
+#define __raw_write_lock(p)	__write_lock(p)
+#define __raw_write_unlock(p)	__write_unlock(p)
+#define __raw_write_trylock(p)	__write_trylock(p)
 
-#else /* !(CONFIG_DEBUG_SPINLOCK) */
-
-typedef struct {
-	volatile unsigned long lock;
-	unsigned int writer_pc, writer_cpu;
-	unsigned int reader_pc[NR_CPUS];
-#ifdef CONFIG_PREEMPT
-	unsigned int break_lock;
-#endif
-} rwlock_t;
-#define RW_LOCK_UNLOCKED	(rwlock_t) { 0, 0, 0xff, { } }
-#define rwlock_init(lp) do { *(lp) = RW_LOCK_UNLOCKED; } while(0)
-
-extern void _do_read_lock(rwlock_t *rw, char *str, unsigned long caller);
-extern void _do_read_unlock(rwlock_t *rw, char *str, unsigned long caller);
-extern void _do_write_lock(rwlock_t *rw, char *str, unsigned long caller);
-extern void _do_write_unlock(rwlock_t *rw, unsigned long caller);
-extern int _do_write_trylock(rwlock_t *rw, char *str, unsigned long caller);
-
-#define _raw_read_lock(lock) \
-do {	unsigned long flags; \
-	local_irq_save(flags); \
-	_do_read_lock(lock, "read_lock", \
-		      (unsigned long) __builtin_return_address(0)); \
-	local_irq_restore(flags); \
-} while(0)
-
-#define _raw_read_unlock(lock) \
-do {	unsigned long flags; \
-	local_irq_save(flags); \
-	_do_read_unlock(lock, "read_unlock", \
-		      (unsigned long) __builtin_return_address(0)); \
-	local_irq_restore(flags); \
-} while(0)
-
-#define _raw_write_lock(lock) \
-do {	unsigned long flags; \
-	local_irq_save(flags); \
-	_do_write_lock(lock, "write_lock", \
-		      (unsigned long) __builtin_return_address(0)); \
-	local_irq_restore(flags); \
-} while(0)
-
-#define _raw_write_unlock(lock) \
-do {	unsigned long flags; \
-	local_irq_save(flags); \
-	_do_write_unlock(lock, \
-		      (unsigned long) __builtin_return_address(0)); \
-	local_irq_restore(flags); \
-} while(0)
-
-#define _raw_write_trylock(lock) \
-({	unsigned long flags; \
-	int val; \
-	local_irq_save(flags); \
-	val = _do_write_trylock(lock, "write_trylock", \
-				(unsigned long) __builtin_return_address(0)); \
-	local_irq_restore(flags); \
-	val; \
-})
-
-#endif /* CONFIG_DEBUG_SPINLOCK */
-
-#define _raw_read_trylock(lock) generic_raw_read_trylock(lock)
-#define read_can_lock(rw)	(!((rw)->lock & 0x80000000UL))
-#define write_can_lock(rw)	(!(rw)->lock)
+#define __raw_read_trylock(lock) generic__raw_read_trylock(lock)
+#define __raw_read_can_lock(rw)	(!((rw)->lock & 0x80000000UL))
+#define __raw_write_can_lock(rw)	(!(rw)->lock)
 
 #endif /* !(__ASSEMBLY__) */
include/asm-sparc64/spinlock_types.h (+20)
···
 1 + #ifndef __SPARC64_SPINLOCK_TYPES_H
 2 + #define __SPARC64_SPINLOCK_TYPES_H
 3 +
 4 + #ifndef __LINUX_SPINLOCK_TYPES_H
 5 + # error "please don't include this file directly"
 6 + #endif
 7 +
 8 + typedef struct {
 9 + 	volatile unsigned char lock;
10 + } raw_spinlock_t;
11 +
12 + #define __RAW_SPIN_LOCK_UNLOCKED	{ 0 }
13 +
14 + typedef struct {
15 + 	volatile unsigned int lock;
16 + } raw_rwlock_t;
17 +
18 + #define __RAW_RW_LOCK_UNLOCKED		{ 0 }
19 +
20 + #endif
include/asm-x86_64/spinlock.h (+42 -122)
··· 6 6 #include <asm/page.h> 7 7 #include <linux/config.h> 8 8 9 - extern int printk(const char * fmt, ...) 10 - __attribute__ ((format (printf, 1, 2))); 11 - 12 9 /* 13 10 * Your basic SMP spinlocks, allowing only a single CPU anywhere 14 - */ 15 - 16 - typedef struct { 17 - volatile unsigned int lock; 18 - #ifdef CONFIG_DEBUG_SPINLOCK 19 - unsigned magic; 20 - #endif 21 - #ifdef CONFIG_PREEMPT 22 - unsigned int break_lock; 23 - #endif 24 - } spinlock_t; 25 - 26 - #define SPINLOCK_MAGIC 0xdead4ead 27 - 28 - #ifdef CONFIG_DEBUG_SPINLOCK 29 - #define SPINLOCK_MAGIC_INIT , SPINLOCK_MAGIC 30 - #else 31 - #define SPINLOCK_MAGIC_INIT /* */ 32 - #endif 33 - 34 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { 1 SPINLOCK_MAGIC_INIT } 35 - 36 - #define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while(0) 37 - 38 - /* 11 + * 39 12 * Simple spin lock operations. There are two variants, one clears IRQ's 40 13 * on the local processor, one does not. 41 14 * 42 15 * We make no fairness assumptions. They have a cost. 16 + * 17 + * (the type definitions are in asm/spinlock_types.h) 43 18 */ 44 19 45 - #define spin_is_locked(x) (*(volatile signed char *)(&(x)->lock) <= 0) 46 - #define spin_unlock_wait(x) do { barrier(); } while(spin_is_locked(x)) 47 - #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 20 + #define __raw_spin_is_locked(x) \ 21 + (*(volatile signed char *)(&(x)->slock) <= 0) 48 22 49 - #define spin_lock_string \ 23 + #define __raw_spin_lock_string \ 50 24 "\n1:\t" \ 51 25 "lock ; decb %0\n\t" \ 52 26 "js 2f\n" \ ··· 32 58 "jmp 1b\n" \ 33 59 LOCK_SECTION_END 34 60 35 - /* 36 - * This works. Despite all the confusion. 
37 - * (except on PPro SMP or if we are using OOSTORE) 38 - * (PPro errata 66, 92) 39 - */ 40 - 41 - #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE) 42 - 43 - #define spin_unlock_string \ 61 + #define __raw_spin_unlock_string \ 44 62 "movb $1,%0" \ 45 - :"=m" (lock->lock) : : "memory" 63 + :"=m" (lock->slock) : : "memory" 46 64 47 - 48 - static inline void _raw_spin_unlock(spinlock_t *lock) 65 + static inline void __raw_spin_lock(raw_spinlock_t *lock) 49 66 { 50 - #ifdef CONFIG_DEBUG_SPINLOCK 51 - BUG_ON(lock->magic != SPINLOCK_MAGIC); 52 - assert_spin_locked(lock); 53 - #endif 54 67 __asm__ __volatile__( 55 - spin_unlock_string 56 - ); 68 + __raw_spin_lock_string 69 + :"=m" (lock->slock) : : "memory"); 57 70 } 58 71 59 - #else 72 + #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock) 60 73 61 - #define spin_unlock_string \ 62 - "xchgb %b0, %1" \ 63 - :"=q" (oldval), "=m" (lock->lock) \ 64 - :"0" (oldval) : "memory" 65 - 66 - static inline void _raw_spin_unlock(spinlock_t *lock) 67 - { 68 - char oldval = 1; 69 - #ifdef CONFIG_DEBUG_SPINLOCK 70 - BUG_ON(lock->magic != SPINLOCK_MAGIC); 71 - assert_spin_locked(lock); 72 - #endif 73 - __asm__ __volatile__( 74 - spin_unlock_string 75 - ); 76 - } 77 - 78 - #endif 79 - 80 - static inline int _raw_spin_trylock(spinlock_t *lock) 74 + static inline int __raw_spin_trylock(raw_spinlock_t *lock) 81 75 { 82 76 char oldval; 77 + 83 78 __asm__ __volatile__( 84 79 "xchgb %b0,%1" 85 - :"=q" (oldval), "=m" (lock->lock) 80 + :"=q" (oldval), "=m" (lock->slock) 86 81 :"0" (0) : "memory"); 82 + 87 83 return oldval > 0; 88 84 } 89 85 90 - static inline void _raw_spin_lock(spinlock_t *lock) 86 + static inline void __raw_spin_unlock(raw_spinlock_t *lock) 91 87 { 92 - #ifdef CONFIG_DEBUG_SPINLOCK 93 - if (lock->magic != SPINLOCK_MAGIC) { 94 - printk("eip: %p\n", __builtin_return_address(0)); 95 - BUG(); 96 - } 97 - #endif 98 88 __asm__ __volatile__( 99 - spin_lock_string 100 - :"=m" (lock->lock) : : "memory"); 
89 + __raw_spin_unlock_string 90 + ); 101 91 } 102 92 93 + #define __raw_spin_unlock_wait(lock) \ 94 + do { while (__raw_spin_is_locked(lock)) cpu_relax(); } while (0) 103 95 104 96 /* 105 97 * Read-write spinlocks, allowing multiple readers ··· 76 136 * can "mix" irq-safe locks - any writer needs to get a 77 137 * irq-safe write-lock, but readers can get non-irqsafe 78 138 * read-locks. 79 - */ 80 - typedef struct { 81 - volatile unsigned int lock; 82 - #ifdef CONFIG_DEBUG_SPINLOCK 83 - unsigned magic; 84 - #endif 85 - #ifdef CONFIG_PREEMPT 86 - unsigned int break_lock; 87 - #endif 88 - } rwlock_t; 89 - 90 - #define RWLOCK_MAGIC 0xdeaf1eed 91 - 92 - #ifdef CONFIG_DEBUG_SPINLOCK 93 - #define RWLOCK_MAGIC_INIT , RWLOCK_MAGIC 94 - #else 95 - #define RWLOCK_MAGIC_INIT /* */ 96 - #endif 97 - 98 - #define RW_LOCK_UNLOCKED (rwlock_t) { RW_LOCK_BIAS RWLOCK_MAGIC_INIT } 99 - 100 - #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0) 101 - 102 - #define read_can_lock(x) ((int)(x)->lock > 0) 103 - #define write_can_lock(x) ((x)->lock == RW_LOCK_BIAS) 104 - 105 - /* 139 + * 106 140 * On x86, we implement read-write locks as a 32-bit counter 107 141 * with the high bit (sign) being the "contended" bit. 108 142 * ··· 84 170 * 85 171 * Changed to use the same technique as rw semaphores. See 86 172 * semaphore.h for details. 
-ben 173 + * 174 + * the helpers are in arch/i386/kernel/semaphore.c 87 175 */ 88 - /* the spinlock helpers are in arch/i386/kernel/semaphore.c */ 89 176 90 - static inline void _raw_read_lock(rwlock_t *rw) 177 + #define __raw_read_can_lock(x) ((int)(x)->lock > 0) 178 + #define __raw_write_can_lock(x) ((x)->lock == RW_LOCK_BIAS) 179 + 180 + static inline void __raw_read_lock(raw_rwlock_t *rw) 91 181 { 92 - #ifdef CONFIG_DEBUG_SPINLOCK 93 - BUG_ON(rw->magic != RWLOCK_MAGIC); 94 - #endif 95 182 __build_read_lock(rw, "__read_lock_failed"); 96 183 } 97 184 98 - static inline void _raw_write_lock(rwlock_t *rw) 185 + static inline void __raw_write_lock(raw_rwlock_t *rw) 99 186 { 100 - #ifdef CONFIG_DEBUG_SPINLOCK 101 - BUG_ON(rw->magic != RWLOCK_MAGIC); 102 - #endif 103 187 __build_write_lock(rw, "__write_lock_failed"); 104 188 } 105 189 106 - #define _raw_read_unlock(rw) asm volatile("lock ; incl %0" :"=m" ((rw)->lock) : : "memory") 107 - #define _raw_write_unlock(rw) asm volatile("lock ; addl $" RW_LOCK_BIAS_STR ",%0":"=m" ((rw)->lock) : : "memory") 108 - 109 - static inline int _raw_read_trylock(rwlock_t *lock) 190 + static inline int __raw_read_trylock(raw_rwlock_t *lock) 110 191 { 111 192 atomic_t *count = (atomic_t *)lock; 112 193 atomic_dec(count); ··· 111 202 return 0; 112 203 } 113 204 114 - static inline int _raw_write_trylock(rwlock_t *lock) 205 + static inline int __raw_write_trylock(raw_rwlock_t *lock) 115 206 { 116 207 atomic_t *count = (atomic_t *)lock; 117 208 if (atomic_sub_and_test(RW_LOCK_BIAS, count)) 118 209 return 1; 119 210 atomic_add(RW_LOCK_BIAS, count); 120 211 return 0; 212 + } 213 + 214 + static inline void __raw_read_unlock(raw_rwlock_t *rw) 215 + { 216 + asm volatile("lock ; incl %0" :"=m" (rw->lock) : : "memory"); 217 + } 218 + 219 + static inline void __raw_write_unlock(raw_rwlock_t *rw) 220 + { 221 + asm volatile("lock ; addl $" RW_LOCK_BIAS_STR ",%0" 222 + : "=m" (rw->lock) : : "memory"); 121 223 } 122 224 123 225 #endif /* 
__ASM_SPINLOCK_H */
include/asm-x86_64/spinlock_types.h (+20)
···
 1 + #ifndef __ASM_SPINLOCK_TYPES_H
 2 + #define __ASM_SPINLOCK_TYPES_H
 3 +
 4 + #ifndef __LINUX_SPINLOCK_TYPES_H
 5 + # error "please don't include this file directly"
 6 + #endif
 7 +
 8 + typedef struct {
 9 + 	volatile unsigned int slock;
10 + } raw_spinlock_t;
11 +
12 + #define __RAW_SPIN_LOCK_UNLOCKED	{ 1 }
13 +
14 + typedef struct {
15 + 	volatile unsigned int lock;
16 + } raw_rwlock_t;
17 +
18 + #define __RAW_RW_LOCK_UNLOCKED		{ RW_LOCK_BIAS }
19 +
20 + #endif
include/linux/bit_spinlock.h (+77)
···
 1 + #ifndef __LINUX_BIT_SPINLOCK_H
 2 + #define __LINUX_BIT_SPINLOCK_H
 3 +
 4 + /*
 5 +  * bit-based spin_lock()
 6 +  *
 7 +  * Don't use this unless you really need to: spin_lock() and spin_unlock()
 8 +  * are significantly faster.
 9 +  */
10 + static inline void bit_spin_lock(int bitnum, unsigned long *addr)
11 + {
12 + 	/*
13 + 	 * Assuming the lock is uncontended, this never enters
14 + 	 * the body of the outer loop. If it is contended, then
15 + 	 * within the inner loop a non-atomic test is used to
16 + 	 * busywait with less bus contention for a good time to
17 + 	 * attempt to acquire the lock bit.
18 + 	 */
19 + 	preempt_disable();
20 + #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
21 + 	while (test_and_set_bit(bitnum, addr)) {
22 + 		while (test_bit(bitnum, addr)) {
23 + 			preempt_enable();
24 + 			cpu_relax();
25 + 			preempt_disable();
26 + 		}
27 + 	}
28 + #endif
29 + 	__acquire(bitlock);
30 + }
31 +
32 + /*
33 +  * Return true if it was acquired
34 +  */
35 + static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
36 + {
37 + 	preempt_disable();
38 + #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
39 + 	if (test_and_set_bit(bitnum, addr)) {
40 + 		preempt_enable();
41 + 		return 0;
42 + 	}
43 + #endif
44 + 	__acquire(bitlock);
45 + 	return 1;
46 + }
47 +
48 + /*
49 +  * bit-based spin_unlock()
50 +  */
51 + static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
52 + {
53 + #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
54 + 	BUG_ON(!test_bit(bitnum, addr));
55 + 	smp_mb__before_clear_bit();
56 + 	clear_bit(bitnum, addr);
57 + #endif
58 + 	preempt_enable();
59 + 	__release(bitlock);
60 + }
61 +
62 + /*
63 +  * Return true if the lock is held.
64 +  */
65 + static inline int bit_spin_is_locked(int bitnum, unsigned long *addr)
66 + {
67 + #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
68 + 	return test_bit(bitnum, addr);
69 + #elif defined CONFIG_PREEMPT
70 + 	return preempt_count();
71 + #else
72 + 	return 1;
73 + #endif
74 + }
75 +
76 + #endif /* __LINUX_BIT_SPINLOCK_H */
77 +
include/linux/jbd.h (+1)
···
28 28 #include <linux/buffer_head.h>
29 29 #include <linux/journal-head.h>
30 30 #include <linux/stddef.h>
   31 + #include <linux/bit_spinlock.h>
31 32 #include <asm/semaphore.h>
32 33 #endif
33 34
include/linux/spinlock.h (+127 -504)
··· 2 2 #define __LINUX_SPINLOCK_H 3 3 4 4 /* 5 - * include/linux/spinlock.h - generic locking declarations 5 + * include/linux/spinlock.h - generic spinlock/rwlock declarations 6 + * 7 + * here's the role of the various spinlock/rwlock related include files: 8 + * 9 + * on SMP builds: 10 + * 11 + * asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the 12 + * initializers 13 + * 14 + * linux/spinlock_types.h: 15 + * defines the generic type and initializers 16 + * 17 + * asm/spinlock.h: contains the __raw_spin_*()/etc. lowlevel 18 + * implementations, mostly inline assembly code 19 + * 20 + * (also included on UP-debug builds:) 21 + * 22 + * linux/spinlock_api_smp.h: 23 + * contains the prototypes for the _spin_*() APIs. 24 + * 25 + * linux/spinlock.h: builds the final spin_*() APIs. 26 + * 27 + * on UP builds: 28 + * 29 + * linux/spinlock_type_up.h: 30 + * contains the generic, simplified UP spinlock type. 31 + * (which is an empty structure on non-debug builds) 32 + * 33 + * linux/spinlock_types.h: 34 + * defines the generic type and initializers 35 + * 36 + * linux/spinlock_up.h: 37 + * contains the __raw_spin_*()/etc. version of UP 38 + * builds. (which are NOPs on non-debug, non-preempt 39 + * builds) 40 + * 41 + * (included on UP-non-debug builds:) 42 + * 43 + * linux/spinlock_api_up.h: 44 + * builds the _spin_*() APIs. 45 + * 46 + * linux/spinlock.h: builds the final spin_*() APIs. 
6 47 */ 7 48 8 49 #include <linux/config.h> ··· 54 13 #include <linux/kernel.h> 55 14 #include <linux/stringify.h> 56 15 57 - #include <asm/processor.h> /* for cpu relax */ 58 16 #include <asm/system.h> 59 17 60 18 /* ··· 75 35 #define __lockfunc fastcall __attribute__((section(".spinlock.text"))) 76 36 77 37 /* 78 - * If CONFIG_SMP is set, pull in the _raw_* definitions 38 + * Pull the raw_spinlock_t and raw_rwlock_t definitions: 79 39 */ 80 - #ifdef CONFIG_SMP 40 + #include <linux/spinlock_types.h> 81 41 82 - #define assert_spin_locked(x) BUG_ON(!spin_is_locked(x)) 83 - #include <asm/spinlock.h> 42 + extern int __lockfunc generic__raw_read_trylock(raw_rwlock_t *lock); 84 43 85 - int __lockfunc _spin_trylock(spinlock_t *lock); 86 - int __lockfunc _read_trylock(rwlock_t *lock); 87 - int __lockfunc _write_trylock(rwlock_t *lock); 88 - 89 - void __lockfunc _spin_lock(spinlock_t *lock) __acquires(spinlock_t); 90 - void __lockfunc _read_lock(rwlock_t *lock) __acquires(rwlock_t); 91 - void __lockfunc _write_lock(rwlock_t *lock) __acquires(rwlock_t); 92 - 93 - void __lockfunc _spin_unlock(spinlock_t *lock) __releases(spinlock_t); 94 - void __lockfunc _read_unlock(rwlock_t *lock) __releases(rwlock_t); 95 - void __lockfunc _write_unlock(rwlock_t *lock) __releases(rwlock_t); 96 - 97 - unsigned long __lockfunc _spin_lock_irqsave(spinlock_t *lock) __acquires(spinlock_t); 98 - unsigned long __lockfunc _read_lock_irqsave(rwlock_t *lock) __acquires(rwlock_t); 99 - unsigned long __lockfunc _write_lock_irqsave(rwlock_t *lock) __acquires(rwlock_t); 100 - 101 - void __lockfunc _spin_lock_irq(spinlock_t *lock) __acquires(spinlock_t); 102 - void __lockfunc _spin_lock_bh(spinlock_t *lock) __acquires(spinlock_t); 103 - void __lockfunc _read_lock_irq(rwlock_t *lock) __acquires(rwlock_t); 104 - void __lockfunc _read_lock_bh(rwlock_t *lock) __acquires(rwlock_t); 105 - void __lockfunc _write_lock_irq(rwlock_t *lock) __acquires(rwlock_t); 106 - void __lockfunc _write_lock_bh(rwlock_t *lock) 
__acquires(rwlock_t); 107 - 108 - void __lockfunc _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags) __releases(spinlock_t); 109 - void __lockfunc _spin_unlock_irq(spinlock_t *lock) __releases(spinlock_t); 110 - void __lockfunc _spin_unlock_bh(spinlock_t *lock) __releases(spinlock_t); 111 - void __lockfunc _read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) __releases(rwlock_t); 112 - void __lockfunc _read_unlock_irq(rwlock_t *lock) __releases(rwlock_t); 113 - void __lockfunc _read_unlock_bh(rwlock_t *lock) __releases(rwlock_t); 114 - void __lockfunc _write_unlock_irqrestore(rwlock_t *lock, unsigned long flags) __releases(rwlock_t); 115 - void __lockfunc _write_unlock_irq(rwlock_t *lock) __releases(rwlock_t); 116 - void __lockfunc _write_unlock_bh(rwlock_t *lock) __releases(rwlock_t); 117 - 118 - int __lockfunc _spin_trylock_bh(spinlock_t *lock); 119 - int __lockfunc generic_raw_read_trylock(rwlock_t *lock); 120 - int in_lock_functions(unsigned long addr); 121 - 44 + /* 45 + * Pull the __raw*() functions/declarations (UP-nondebug doesnt need them): 46 + */ 47 + #if defined(CONFIG_SMP) 48 + # include <asm/spinlock.h> 122 49 #else 50 + # include <linux/spinlock_up.h> 51 + #endif 123 52 124 - #define in_lock_functions(ADDR) 0 53 + #define spin_lock_init(lock) do { *(lock) = SPIN_LOCK_UNLOCKED; } while (0) 54 + #define rwlock_init(lock) do { *(lock) = RW_LOCK_UNLOCKED; } while (0) 125 55 126 - #if !defined(CONFIG_PREEMPT) && !defined(CONFIG_DEBUG_SPINLOCK) 127 - # define _atomic_dec_and_lock(atomic,lock) atomic_dec_and_test(atomic) 128 - # define ATOMIC_DEC_AND_LOCK 56 + #define spin_is_locked(lock) __raw_spin_is_locked(&(lock)->raw_lock) 57 + 58 + /** 59 + * spin_unlock_wait - wait until the spinlock gets unlocked 60 + * @lock: the spinlock in question. 
61 + */ 62 + #define spin_unlock_wait(lock) __raw_spin_unlock_wait(&(lock)->raw_lock) 63 + 64 + /* 65 + * Pull the _spin_*()/_read_*()/_write_*() functions/declarations: 66 + */ 67 + #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) 68 + # include <linux/spinlock_api_smp.h> 69 + #else 70 + # include <linux/spinlock_api_up.h> 129 71 #endif 130 72 131 73 #ifdef CONFIG_DEBUG_SPINLOCK 132 - 133 - #define SPINLOCK_MAGIC 0x1D244B3C 134 - typedef struct { 135 - unsigned long magic; 136 - volatile unsigned long lock; 137 - volatile unsigned int babble; 138 - const char *module; 139 - char *owner; 140 - int oline; 141 - } spinlock_t; 142 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { SPINLOCK_MAGIC, 0, 10, __FILE__ , NULL, 0} 74 + extern void _raw_spin_lock(spinlock_t *lock); 75 + #define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock) 76 + extern int _raw_spin_trylock(spinlock_t *lock); 77 + extern void _raw_spin_unlock(spinlock_t *lock); 143 78 144 - #define spin_lock_init(x) \ 145 - do { \ 146 - (x)->magic = SPINLOCK_MAGIC; \ 147 - (x)->lock = 0; \ 148 - (x)->babble = 5; \ 149 - (x)->module = __FILE__; \ 150 - (x)->owner = NULL; \ 151 - (x)->oline = 0; \ 152 - } while (0) 153 - 154 - #define CHECK_LOCK(x) \ 155 - do { \ 156 - if ((x)->magic != SPINLOCK_MAGIC) { \ 157 - printk(KERN_ERR "%s:%d: spin_is_locked on uninitialized spinlock %p.\n", \ 158 - __FILE__, __LINE__, (x)); \ 159 - } \ 160 - } while(0) 161 - 162 - #define _raw_spin_lock(x) \ 163 - do { \ 164 - CHECK_LOCK(x); \ 165 - if ((x)->lock&&(x)->babble) { \ 166 - (x)->babble--; \ 167 - printk("%s:%d: spin_lock(%s:%p) already locked by %s/%d\n", \ 168 - __FILE__,__LINE__, (x)->module, \ 169 - (x), (x)->owner, (x)->oline); \ 170 - } \ 171 - (x)->lock = 1; \ 172 - (x)->owner = __FILE__; \ 173 - (x)->oline = __LINE__; \ 174 - } while (0) 175 - 176 - /* without debugging, spin_is_locked on UP always says 177 - * FALSE. --> printk if already locked. 
*/ 178 - #define spin_is_locked(x) \ 179 - ({ \ 180 - CHECK_LOCK(x); \ 181 - if ((x)->lock&&(x)->babble) { \ 182 - (x)->babble--; \ 183 - printk("%s:%d: spin_is_locked(%s:%p) already locked by %s/%d\n", \ 184 - __FILE__,__LINE__, (x)->module, \ 185 - (x), (x)->owner, (x)->oline); \ 186 - } \ 187 - 0; \ 188 - }) 189 - 190 - /* with debugging, assert_spin_locked() on UP does check 191 - * the lock value properly */ 192 - #define assert_spin_locked(x) \ 193 - ({ \ 194 - CHECK_LOCK(x); \ 195 - BUG_ON(!(x)->lock); \ 196 - }) 197 - 198 - /* without debugging, spin_trylock on UP always says 199 - * TRUE. --> printk if already locked. */ 200 - #define _raw_spin_trylock(x) \ 201 - ({ \ 202 - CHECK_LOCK(x); \ 203 - if ((x)->lock&&(x)->babble) { \ 204 - (x)->babble--; \ 205 - printk("%s:%d: spin_trylock(%s:%p) already locked by %s/%d\n", \ 206 - __FILE__,__LINE__, (x)->module, \ 207 - (x), (x)->owner, (x)->oline); \ 208 - } \ 209 - (x)->lock = 1; \ 210 - (x)->owner = __FILE__; \ 211 - (x)->oline = __LINE__; \ 212 - 1; \ 213 - }) 214 - 215 - #define spin_unlock_wait(x) \ 216 - do { \ 217 - CHECK_LOCK(x); \ 218 - if ((x)->lock&&(x)->babble) { \ 219 - (x)->babble--; \ 220 - printk("%s:%d: spin_unlock_wait(%s:%p) owned by %s/%d\n", \ 221 - __FILE__,__LINE__, (x)->module, (x), \ 222 - (x)->owner, (x)->oline); \ 223 - }\ 224 - } while (0) 225 - 226 - #define _raw_spin_unlock(x) \ 227 - do { \ 228 - CHECK_LOCK(x); \ 229 - if (!(x)->lock&&(x)->babble) { \ 230 - (x)->babble--; \ 231 - printk("%s:%d: spin_unlock(%s:%p) not locked\n", \ 232 - __FILE__,__LINE__, (x)->module, (x));\ 233 - } \ 234 - (x)->lock = 0; \ 235 - } while (0) 79 + extern void _raw_read_lock(rwlock_t *lock); 80 + extern int _raw_read_trylock(rwlock_t *lock); 81 + extern void _raw_read_unlock(rwlock_t *lock); 82 + extern void _raw_write_lock(rwlock_t *lock); 83 + extern int _raw_write_trylock(rwlock_t *lock); 84 + extern void _raw_write_unlock(rwlock_t *lock); 236 85 #else 237 - /* 238 - * gcc versions before ~2.95 
have a nasty bug with empty initializers. 239 - */ 240 - #if (__GNUC__ > 2) 241 - typedef struct { } spinlock_t; 242 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { } 243 - #else 244 - typedef struct { int gcc_is_buggy; } spinlock_t; 245 - #define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 } 86 + # define _raw_spin_unlock(lock) __raw_spin_unlock(&(lock)->raw_lock) 87 + # define _raw_spin_trylock(lock) __raw_spin_trylock(&(lock)->raw_lock) 88 + # define _raw_spin_lock(lock) __raw_spin_lock(&(lock)->raw_lock) 89 + # define _raw_spin_lock_flags(lock, flags) \ 90 + __raw_spin_lock_flags(&(lock)->raw_lock, *(flags)) 91 + # define _raw_read_lock(rwlock) __raw_read_lock(&(rwlock)->raw_lock) 92 + # define _raw_write_lock(rwlock) __raw_write_lock(&(rwlock)->raw_lock) 93 + # define _raw_read_unlock(rwlock) __raw_read_unlock(&(rwlock)->raw_lock) 94 + # define _raw_write_unlock(rwlock) __raw_write_unlock(&(rwlock)->raw_lock) 95 + # define _raw_read_trylock(rwlock) __raw_read_trylock(&(rwlock)->raw_lock) 96 + # define _raw_write_trylock(rwlock) __raw_write_trylock(&(rwlock)->raw_lock) 246 97 #endif 247 98 248 - /* 249 - * If CONFIG_SMP is unset, declare the _raw_* definitions as nops 250 - */ 251 - #define spin_lock_init(lock) do { (void)(lock); } while(0) 252 - #define _raw_spin_lock(lock) do { (void)(lock); } while(0) 253 - #define spin_is_locked(lock) ((void)(lock), 0) 254 - #define assert_spin_locked(lock) do { (void)(lock); } while(0) 255 - #define _raw_spin_trylock(lock) (((void)(lock), 1)) 256 - #define spin_unlock_wait(lock) (void)(lock) 257 - #define _raw_spin_unlock(lock) do { (void)(lock); } while(0) 258 - #endif /* CONFIG_DEBUG_SPINLOCK */ 259 - 260 - /* RW spinlocks: No debug version */ 261 - 262 - #if (__GNUC__ > 2) 263 - typedef struct { } rwlock_t; 264 - #define RW_LOCK_UNLOCKED (rwlock_t) { } 265 - #else 266 - typedef struct { int gcc_is_buggy; } rwlock_t; 267 - #define RW_LOCK_UNLOCKED (rwlock_t) { 0 } 268 - #endif 269 - 270 - #define rwlock_init(lock) do { (void)(lock); 
} while(0) 271 - #define _raw_read_lock(lock) do { (void)(lock); } while(0) 272 - #define _raw_read_unlock(lock) do { (void)(lock); } while(0) 273 - #define _raw_write_lock(lock) do { (void)(lock); } while(0) 274 - #define _raw_write_unlock(lock) do { (void)(lock); } while(0) 275 - #define read_can_lock(lock) (((void)(lock), 1)) 276 - #define write_can_lock(lock) (((void)(lock), 1)) 277 - #define _raw_read_trylock(lock) ({ (void)(lock); (1); }) 278 - #define _raw_write_trylock(lock) ({ (void)(lock); (1); }) 279 - 280 - #define _spin_trylock(lock) ({preempt_disable(); _raw_spin_trylock(lock) ? \ 281 - 1 : ({preempt_enable(); 0;});}) 282 - 283 - #define _read_trylock(lock) ({preempt_disable();_raw_read_trylock(lock) ? \ 284 - 1 : ({preempt_enable(); 0;});}) 285 - 286 - #define _write_trylock(lock) ({preempt_disable(); _raw_write_trylock(lock) ? \ 287 - 1 : ({preempt_enable(); 0;});}) 288 - 289 - #define _spin_trylock_bh(lock) ({preempt_disable(); local_bh_disable(); \ 290 - _raw_spin_trylock(lock) ? 
\ 291 - 1 : ({preempt_enable_no_resched(); local_bh_enable(); 0;});}) 292 - 293 - #define _spin_lock(lock) \ 294 - do { \ 295 - preempt_disable(); \ 296 - _raw_spin_lock(lock); \ 297 - __acquire(lock); \ 298 - } while(0) 299 - 300 - #define _write_lock(lock) \ 301 - do { \ 302 - preempt_disable(); \ 303 - _raw_write_lock(lock); \ 304 - __acquire(lock); \ 305 - } while(0) 306 - 307 - #define _read_lock(lock) \ 308 - do { \ 309 - preempt_disable(); \ 310 - _raw_read_lock(lock); \ 311 - __acquire(lock); \ 312 - } while(0) 313 - 314 - #define _spin_unlock(lock) \ 315 - do { \ 316 - _raw_spin_unlock(lock); \ 317 - preempt_enable(); \ 318 - __release(lock); \ 319 - } while (0) 320 - 321 - #define _write_unlock(lock) \ 322 - do { \ 323 - _raw_write_unlock(lock); \ 324 - preempt_enable(); \ 325 - __release(lock); \ 326 - } while(0) 327 - 328 - #define _read_unlock(lock) \ 329 - do { \ 330 - _raw_read_unlock(lock); \ 331 - preempt_enable(); \ 332 - __release(lock); \ 333 - } while(0) 334 - 335 - #define _spin_lock_irqsave(lock, flags) \ 336 - do { \ 337 - local_irq_save(flags); \ 338 - preempt_disable(); \ 339 - _raw_spin_lock(lock); \ 340 - __acquire(lock); \ 341 - } while (0) 342 - 343 - #define _spin_lock_irq(lock) \ 344 - do { \ 345 - local_irq_disable(); \ 346 - preempt_disable(); \ 347 - _raw_spin_lock(lock); \ 348 - __acquire(lock); \ 349 - } while (0) 350 - 351 - #define _spin_lock_bh(lock) \ 352 - do { \ 353 - local_bh_disable(); \ 354 - preempt_disable(); \ 355 - _raw_spin_lock(lock); \ 356 - __acquire(lock); \ 357 - } while (0) 358 - 359 - #define _read_lock_irqsave(lock, flags) \ 360 - do { \ 361 - local_irq_save(flags); \ 362 - preempt_disable(); \ 363 - _raw_read_lock(lock); \ 364 - __acquire(lock); \ 365 - } while (0) 366 - 367 - #define _read_lock_irq(lock) \ 368 - do { \ 369 - local_irq_disable(); \ 370 - preempt_disable(); \ 371 - _raw_read_lock(lock); \ 372 - __acquire(lock); \ 373 - } while (0) 374 - 375 - #define _read_lock_bh(lock) \ 376 - do { \ 377 - 
local_bh_disable(); \ 378 - preempt_disable(); \ 379 - _raw_read_lock(lock); \ 380 - __acquire(lock); \ 381 - } while (0) 382 - 383 - #define _write_lock_irqsave(lock, flags) \ 384 - do { \ 385 - local_irq_save(flags); \ 386 - preempt_disable(); \ 387 - _raw_write_lock(lock); \ 388 - __acquire(lock); \ 389 - } while (0) 390 - 391 - #define _write_lock_irq(lock) \ 392 - do { \ 393 - local_irq_disable(); \ 394 - preempt_disable(); \ 395 - _raw_write_lock(lock); \ 396 - __acquire(lock); \ 397 - } while (0) 398 - 399 - #define _write_lock_bh(lock) \ 400 - do { \ 401 - local_bh_disable(); \ 402 - preempt_disable(); \ 403 - _raw_write_lock(lock); \ 404 - __acquire(lock); \ 405 - } while (0) 406 - 407 - #define _spin_unlock_irqrestore(lock, flags) \ 408 - do { \ 409 - _raw_spin_unlock(lock); \ 410 - local_irq_restore(flags); \ 411 - preempt_enable(); \ 412 - __release(lock); \ 413 - } while (0) 414 - 415 - #define _spin_unlock_irq(lock) \ 416 - do { \ 417 - _raw_spin_unlock(lock); \ 418 - local_irq_enable(); \ 419 - preempt_enable(); \ 420 - __release(lock); \ 421 - } while (0) 422 - 423 - #define _spin_unlock_bh(lock) \ 424 - do { \ 425 - _raw_spin_unlock(lock); \ 426 - preempt_enable_no_resched(); \ 427 - local_bh_enable(); \ 428 - __release(lock); \ 429 - } while (0) 430 - 431 - #define _write_unlock_bh(lock) \ 432 - do { \ 433 - _raw_write_unlock(lock); \ 434 - preempt_enable_no_resched(); \ 435 - local_bh_enable(); \ 436 - __release(lock); \ 437 - } while (0) 438 - 439 - #define _read_unlock_irqrestore(lock, flags) \ 440 - do { \ 441 - _raw_read_unlock(lock); \ 442 - local_irq_restore(flags); \ 443 - preempt_enable(); \ 444 - __release(lock); \ 445 - } while (0) 446 - 447 - #define _write_unlock_irqrestore(lock, flags) \ 448 - do { \ 449 - _raw_write_unlock(lock); \ 450 - local_irq_restore(flags); \ 451 - preempt_enable(); \ 452 - __release(lock); \ 453 - } while (0) 454 - 455 - #define _read_unlock_irq(lock) \ 456 - do { \ 457 - _raw_read_unlock(lock); \ 458 - 
local_irq_enable(); \ 459 - preempt_enable(); \ 460 - __release(lock); \ 461 - } while (0) 462 - 463 - #define _read_unlock_bh(lock) \ 464 - do { \ 465 - _raw_read_unlock(lock); \ 466 - preempt_enable_no_resched(); \ 467 - local_bh_enable(); \ 468 - __release(lock); \ 469 - } while (0) 470 - 471 - #define _write_unlock_irq(lock) \ 472 - do { \ 473 - _raw_write_unlock(lock); \ 474 - local_irq_enable(); \ 475 - preempt_enable(); \ 476 - __release(lock); \ 477 - } while (0) 478 - 479 - #endif /* !SMP */ 99 + #define read_can_lock(rwlock) __raw_read_can_lock(&(rwlock)->raw_lock) 100 + #define write_can_lock(rwlock) __raw_write_can_lock(&(rwlock)->raw_lock) 480 101 481 102 /* 482 103 * Define the various spin_lock and rw_lock methods. Note we define these 483 104 * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various 484 105 * methods are defined as nops in the case they are not required. 485 106 */ 486 - #define spin_trylock(lock) __cond_lock(_spin_trylock(lock)) 487 - #define read_trylock(lock) __cond_lock(_read_trylock(lock)) 488 - #define write_trylock(lock) __cond_lock(_write_trylock(lock)) 107 + #define spin_trylock(lock) __cond_lock(_spin_trylock(lock)) 108 + #define read_trylock(lock) __cond_lock(_read_trylock(lock)) 109 + #define write_trylock(lock) __cond_lock(_write_trylock(lock)) 489 110 490 - #define spin_lock(lock) _spin_lock(lock) 491 - #define write_lock(lock) _write_lock(lock) 492 - #define read_lock(lock) _read_lock(lock) 111 + #define spin_lock(lock) _spin_lock(lock) 112 + #define write_lock(lock) _write_lock(lock) 113 + #define read_lock(lock) _read_lock(lock) 493 114 494 - #ifdef CONFIG_SMP 115 + #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) 495 116 #define spin_lock_irqsave(lock, flags) flags = _spin_lock_irqsave(lock) 496 117 #define read_lock_irqsave(lock, flags) flags = _read_lock_irqsave(lock) 497 118 #define write_lock_irqsave(lock, flags) flags = _write_lock_irqsave(lock) ··· 171 470 #define write_lock_irq(lock) 
include/linux/spinlock.h (continued):

 #define write_lock_irq(lock)		_write_lock_irq(lock)
 #define write_lock_bh(lock)		_write_lock_bh(lock)

-#define spin_unlock(lock)	_spin_unlock(lock)
-#define write_unlock(lock)	_write_unlock(lock)
-#define read_unlock(lock)	_read_unlock(lock)
+#define spin_unlock(lock)		_spin_unlock(lock)
+#define write_unlock(lock)		_write_unlock(lock)
+#define read_unlock(lock)		_read_unlock(lock)

-#define spin_unlock_irqrestore(lock, flags)	_spin_unlock_irqrestore(lock, flags)
+#define spin_unlock_irqrestore(lock, flags) \
+					_spin_unlock_irqrestore(lock, flags)
 #define spin_unlock_irq(lock)		_spin_unlock_irq(lock)
 #define spin_unlock_bh(lock)		_spin_unlock_bh(lock)

-#define read_unlock_irqrestore(lock, flags)	_read_unlock_irqrestore(lock, flags)
-#define read_unlock_irq(lock)	_read_unlock_irq(lock)
-#define read_unlock_bh(lock)	_read_unlock_bh(lock)
+#define read_unlock_irqrestore(lock, flags) \
+					_read_unlock_irqrestore(lock, flags)
+#define read_unlock_irq(lock)		_read_unlock_irq(lock)
+#define read_unlock_bh(lock)		_read_unlock_bh(lock)

-#define write_unlock_irqrestore(lock, flags)	_write_unlock_irqrestore(lock, flags)
-#define write_unlock_irq(lock)	_write_unlock_irq(lock)
-#define write_unlock_bh(lock)	_write_unlock_bh(lock)
+#define write_unlock_irqrestore(lock, flags) \
+					_write_unlock_irqrestore(lock, flags)
+#define write_unlock_irq(lock)		_write_unlock_irq(lock)
+#define write_unlock_bh(lock)		_write_unlock_bh(lock)

-#define spin_trylock_bh(lock)	__cond_lock(_spin_trylock_bh(lock))
+#define spin_trylock_bh(lock)		__cond_lock(_spin_trylock_bh(lock))

 #define spin_trylock_irq(lock) \
 ({ \
 	local_irq_disable(); \
 	_spin_trylock(lock) ? \
-	1 : ({local_irq_enable(); 0; }); \
+	1 : ({ local_irq_enable(); 0; }); \
 })

 #define spin_trylock_irqsave(lock, flags) \
 ({ \
 	local_irq_save(flags); \
 	_spin_trylock(lock) ? \
-	1 : ({local_irq_restore(flags); 0;}); \
+	1 : ({ local_irq_restore(flags); 0; }); \
 })

-#ifdef CONFIG_LOCKMETER
-extern void _metered_spin_lock   (spinlock_t *lock);
-extern void _metered_spin_unlock (spinlock_t *lock);
-extern int  _metered_spin_trylock(spinlock_t *lock);
-extern void _metered_read_lock    (rwlock_t *lock);
-extern void _metered_read_unlock  (rwlock_t *lock);
-extern void _metered_write_lock   (rwlock_t *lock);
-extern void _metered_write_unlock (rwlock_t *lock);
-extern int  _metered_read_trylock (rwlock_t *lock);
-extern int  _metered_write_trylock(rwlock_t *lock);
-#endif
-
-/* "lock on reference count zero" */
-#ifndef ATOMIC_DEC_AND_LOCK
+/*
+ * Pull the atomic_t declaration:
+ * (asm-mips/atomic.h needs above definitions)
+ */
 #include <asm/atomic.h>
+/**
+ * atomic_dec_and_lock - lock on reaching reference count zero
+ * @atomic: the atomic counter
+ * @lock: the spinlock in question
+ */
 extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
-#endif
-
-#define atomic_dec_and_lock(atomic,lock) __cond_lock(_atomic_dec_and_lock(atomic,lock))
-
-/*
- * bit-based spin_lock()
- *
- * Don't use this unless you really need to: spin_lock() and spin_unlock()
- * are significantly faster.
- */
-static inline void bit_spin_lock(int bitnum, unsigned long *addr)
-{
-	/*
-	 * Assuming the lock is uncontended, this never enters
-	 * the body of the outer loop.  If it is contended, then
-	 * within the inner loop a non-atomic test is used to
-	 * busywait with less bus contention for a good time to
-	 * attempt to acquire the lock bit.
-	 */
-	preempt_disable();
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-	while (test_and_set_bit(bitnum, addr)) {
-		while (test_bit(bitnum, addr)) {
-			preempt_enable();
-			cpu_relax();
-			preempt_disable();
-		}
-	}
-#endif
-	__acquire(bitlock);
-}
-
-/*
- * Return true if it was acquired
- */
-static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
-{
-	preempt_disable();
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-	if (test_and_set_bit(bitnum, addr)) {
-		preempt_enable();
-		return 0;
-	}
-#endif
-	__acquire(bitlock);
-	return 1;
-}
-
-/*
- * bit-based spin_unlock()
- */
-static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
-{
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-	BUG_ON(!test_bit(bitnum, addr));
-	smp_mb__before_clear_bit();
-	clear_bit(bitnum, addr);
-#endif
-	preempt_enable();
-	__release(bitlock);
-}
-
-/*
- * Return true if the lock is held.
- */
-static inline int bit_spin_is_locked(int bitnum, unsigned long *addr)
-{
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-	return test_bit(bitnum, addr);
-#elif defined CONFIG_PREEMPT
-	return preempt_count();
-#else
-	return 1;
-#endif
-}
-
-#define DEFINE_SPINLOCK(x)	spinlock_t x = SPIN_LOCK_UNLOCKED
-#define DEFINE_RWLOCK(x)	rwlock_t x = RW_LOCK_UNLOCKED
+#define atomic_dec_and_lock(atomic, lock) \
+		__cond_lock(_atomic_dec_and_lock(atomic, lock))

 /**
  * spin_can_lock - would spin_trylock() succeed?
  * @lock: the spinlock in question.
  */
-#define spin_can_lock(lock)	(!spin_is_locked(lock))
+#define spin_can_lock(lock)		(!spin_is_locked(lock))

 #endif /* __LINUX_SPINLOCK_H */
include/linux/spinlock_api_smp.h (new file, +57):

+#ifndef __LINUX_SPINLOCK_API_SMP_H
+#define __LINUX_SPINLOCK_API_SMP_H
+
+#ifndef __LINUX_SPINLOCK_H
+# error "please don't include this file directly"
+#endif
+
+/*
+ * include/linux/spinlock_api_smp.h
+ *
+ * spinlock API declarations on SMP (and debug)
+ * (implemented in kernel/spinlock.c)
+ *
+ * portions Copyright 2005, Red Hat, Inc., Ingo Molnar
+ * Released under the General Public License (GPL).
+ */
+
+int in_lock_functions(unsigned long addr);
+
+#define assert_spin_locked(x)	BUG_ON(!spin_is_locked(x))
+
+void __lockfunc _spin_lock(spinlock_t *lock)		__acquires(spinlock_t);
+void __lockfunc _read_lock(rwlock_t *lock)		__acquires(rwlock_t);
+void __lockfunc _write_lock(rwlock_t *lock)		__acquires(rwlock_t);
+void __lockfunc _spin_lock_bh(spinlock_t *lock)		__acquires(spinlock_t);
+void __lockfunc _read_lock_bh(rwlock_t *lock)		__acquires(rwlock_t);
+void __lockfunc _write_lock_bh(rwlock_t *lock)		__acquires(rwlock_t);
+void __lockfunc _spin_lock_irq(spinlock_t *lock)	__acquires(spinlock_t);
+void __lockfunc _read_lock_irq(rwlock_t *lock)		__acquires(rwlock_t);
+void __lockfunc _write_lock_irq(rwlock_t *lock)		__acquires(rwlock_t);
+unsigned long __lockfunc _spin_lock_irqsave(spinlock_t *lock)
+							__acquires(spinlock_t);
+unsigned long __lockfunc _read_lock_irqsave(rwlock_t *lock)
+							__acquires(rwlock_t);
+unsigned long __lockfunc _write_lock_irqsave(rwlock_t *lock)
+							__acquires(rwlock_t);
+int __lockfunc _spin_trylock(spinlock_t *lock);
+int __lockfunc _read_trylock(rwlock_t *lock);
+int __lockfunc _write_trylock(rwlock_t *lock);
+int __lockfunc _spin_trylock_bh(spinlock_t *lock);
+void __lockfunc _spin_unlock(spinlock_t *lock)		__releases(spinlock_t);
+void __lockfunc _read_unlock(rwlock_t *lock)		__releases(rwlock_t);
+void __lockfunc _write_unlock(rwlock_t *lock)		__releases(rwlock_t);
+void __lockfunc _spin_unlock_bh(spinlock_t *lock)	__releases(spinlock_t);
+void __lockfunc _read_unlock_bh(rwlock_t *lock)		__releases(rwlock_t);
+void __lockfunc _write_unlock_bh(rwlock_t *lock)	__releases(rwlock_t);
+void __lockfunc _spin_unlock_irq(spinlock_t *lock)	__releases(spinlock_t);
+void __lockfunc _read_unlock_irq(rwlock_t *lock)	__releases(rwlock_t);
+void __lockfunc _write_unlock_irq(rwlock_t *lock)	__releases(rwlock_t);
+void __lockfunc _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
+							__releases(spinlock_t);
+void __lockfunc _read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
+							__releases(rwlock_t);
+void __lockfunc _write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
+							__releases(rwlock_t);
+
+#endif /* __LINUX_SPINLOCK_API_SMP_H */
include/linux/spinlock_api_up.h (new file, +80):

+#ifndef __LINUX_SPINLOCK_API_UP_H
+#define __LINUX_SPINLOCK_API_UP_H
+
+#ifndef __LINUX_SPINLOCK_H
+# error "please don't include this file directly"
+#endif
+
+/*
+ * include/linux/spinlock_api_up.h
+ *
+ * spinlock API implementation on UP-nondebug (inlined implementation)
+ *
+ * portions Copyright 2005, Red Hat, Inc., Ingo Molnar
+ * Released under the General Public License (GPL).
+ */
+
+#define in_lock_functions(ADDR)		0
+
+#define assert_spin_locked(lock)	do { (void)(lock); } while (0)
+
+/*
+ * In the UP-nondebug case there's no real locking going on, so the
+ * only thing we have to do is to keep the preempt counts and irq
+ * flags straight, to suppress compiler warnings of unused lock
+ * variables, and to add the proper checker annotations:
+ */
+#define __LOCK(lock) \
+  do { preempt_disable(); __acquire(lock); (void)(lock); } while (0)
+
+#define __LOCK_BH(lock) \
+  do { local_bh_disable(); __LOCK(lock); } while (0)
+
+#define __LOCK_IRQ(lock) \
+  do { local_irq_disable(); __LOCK(lock); } while (0)
+
+#define __LOCK_IRQSAVE(lock, flags) \
+  do { local_irq_save(flags); __LOCK(lock); } while (0)
+
+#define __UNLOCK(lock) \
+  do { preempt_enable(); __release(lock); (void)(lock); } while (0)
+
+#define __UNLOCK_BH(lock) \
+  do { preempt_enable_no_resched(); local_bh_enable(); __release(lock); (void)(lock); } while (0)
+
+#define __UNLOCK_IRQ(lock) \
+  do { local_irq_enable(); __UNLOCK(lock); } while (0)
+
+#define __UNLOCK_IRQRESTORE(lock, flags) \
+  do { local_irq_restore(flags); __UNLOCK(lock); } while (0)
+
+#define _spin_lock(lock)			__LOCK(lock)
+#define _read_lock(lock)			__LOCK(lock)
+#define _write_lock(lock)			__LOCK(lock)
+#define _spin_lock_bh(lock)			__LOCK_BH(lock)
+#define _read_lock_bh(lock)			__LOCK_BH(lock)
+#define _write_lock_bh(lock)			__LOCK_BH(lock)
+#define _spin_lock_irq(lock)			__LOCK_IRQ(lock)
+#define _read_lock_irq(lock)			__LOCK_IRQ(lock)
+#define _write_lock_irq(lock)			__LOCK_IRQ(lock)
+#define _spin_lock_irqsave(lock, flags)		__LOCK_IRQSAVE(lock, flags)
+#define _read_lock_irqsave(lock, flags)		__LOCK_IRQSAVE(lock, flags)
+#define _write_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
+#define _spin_trylock(lock)			({ __LOCK(lock); 1; })
+#define _read_trylock(lock)			({ __LOCK(lock); 1; })
+#define _write_trylock(lock)			({ __LOCK(lock); 1; })
+#define _spin_trylock_bh(lock)			({ __LOCK_BH(lock); 1; })
+#define _spin_unlock(lock)			__UNLOCK(lock)
+#define _read_unlock(lock)			__UNLOCK(lock)
+#define _write_unlock(lock)			__UNLOCK(lock)
+#define _spin_unlock_bh(lock)			__UNLOCK_BH(lock)
+#define _write_unlock_bh(lock)			__UNLOCK_BH(lock)
+#define _read_unlock_bh(lock)			__UNLOCK_BH(lock)
+#define _spin_unlock_irq(lock)			__UNLOCK_IRQ(lock)
+#define _read_unlock_irq(lock)			__UNLOCK_IRQ(lock)
+#define _write_unlock_irq(lock)			__UNLOCK_IRQ(lock)
+#define _spin_unlock_irqrestore(lock, flags)	__UNLOCK_IRQRESTORE(lock, flags)
+#define _read_unlock_irqrestore(lock, flags)	__UNLOCK_IRQRESTORE(lock, flags)
+#define _write_unlock_irqrestore(lock, flags)	__UNLOCK_IRQRESTORE(lock, flags)
+
+#endif /* __LINUX_SPINLOCK_API_UP_H */
include/linux/spinlock_types.h (new file, +67):

+#ifndef __LINUX_SPINLOCK_TYPES_H
+#define __LINUX_SPINLOCK_TYPES_H
+
+/*
+ * include/linux/spinlock_types.h - generic spinlock type definitions
+ *                                  and initializers
+ *
+ * portions Copyright 2005, Red Hat, Inc., Ingo Molnar
+ * Released under the General Public License (GPL).
+ */
+
+#if defined(CONFIG_SMP)
+# include <asm/spinlock_types.h>
+#else
+# include <linux/spinlock_types_up.h>
+#endif
+
+typedef struct {
+	raw_spinlock_t raw_lock;
+#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
+	unsigned int break_lock;
+#endif
+#ifdef CONFIG_DEBUG_SPINLOCK
+	unsigned int magic, owner_cpu;
+	void *owner;
+#endif
+} spinlock_t;
+
+#define SPINLOCK_MAGIC		0xdead4ead
+
+typedef struct {
+	raw_rwlock_t raw_lock;
+#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
+	unsigned int break_lock;
+#endif
+#ifdef CONFIG_DEBUG_SPINLOCK
+	unsigned int magic, owner_cpu;
+	void *owner;
+#endif
+} rwlock_t;
+
+#define RWLOCK_MAGIC		0xdeaf1eed
+
+#define SPINLOCK_OWNER_INIT	((void *)-1L)
+
+#ifdef CONFIG_DEBUG_SPINLOCK
+# define SPIN_LOCK_UNLOCKED						\
+	(spinlock_t)	{	.raw_lock = __RAW_SPIN_LOCK_UNLOCKED,	\
+				.magic = SPINLOCK_MAGIC,		\
+				.owner = SPINLOCK_OWNER_INIT,		\
+				.owner_cpu = -1 }
+#define RW_LOCK_UNLOCKED						\
+	(rwlock_t)	{	.raw_lock = __RAW_RW_LOCK_UNLOCKED,	\
+				.magic = RWLOCK_MAGIC,			\
+				.owner = SPINLOCK_OWNER_INIT,		\
+				.owner_cpu = -1 }
+#else
+# define SPIN_LOCK_UNLOCKED \
+	(spinlock_t)	{	.raw_lock = __RAW_SPIN_LOCK_UNLOCKED }
+#define RW_LOCK_UNLOCKED \
+	(rwlock_t)	{	.raw_lock = __RAW_RW_LOCK_UNLOCKED }
+#endif
+
+#define DEFINE_SPINLOCK(x)	spinlock_t x = SPIN_LOCK_UNLOCKED
+#define DEFINE_RWLOCK(x)	rwlock_t x = RW_LOCK_UNLOCKED
+
+#endif /* __LINUX_SPINLOCK_TYPES_H */
include/linux/spinlock_types_up.h (new file, +51):

+#ifndef __LINUX_SPINLOCK_TYPES_UP_H
+#define __LINUX_SPINLOCK_TYPES_UP_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+/*
+ * include/linux/spinlock_types_up.h - spinlock type definitions for UP
+ *
+ * portions Copyright 2005, Red Hat, Inc., Ingo Molnar
+ * Released under the General Public License (GPL).
+ */
+
+#ifdef CONFIG_DEBUG_SPINLOCK
+
+typedef struct {
+	volatile unsigned int slock;
+} raw_spinlock_t;
+
+#define __RAW_SPIN_LOCK_UNLOCKED { 1 }
+
+#else
+
+/*
+ * All gcc 2.95 versions and early versions of 2.96 have a nasty bug
+ * with empty initializers.
+ */
+#if (__GNUC__ > 2)
+typedef struct { } raw_spinlock_t;
+
+#define __RAW_SPIN_LOCK_UNLOCKED { }
+#else
+typedef struct { int gcc_is_buggy; } raw_spinlock_t;
+#define __RAW_SPIN_LOCK_UNLOCKED (raw_spinlock_t) { 0 }
+#endif
+
+#endif
+
+#if (__GNUC__ > 2)
+typedef struct {
+	/* no debug version on UP */
+} raw_rwlock_t;
+
+#define __RAW_RW_LOCK_UNLOCKED { }
+#else
+typedef struct { int gcc_is_buggy; } raw_rwlock_t;
+#define __RAW_RW_LOCK_UNLOCKED (raw_rwlock_t) { 0 }
+#endif
+
+#endif /* __LINUX_SPINLOCK_TYPES_UP_H */
include/linux/spinlock_up.h (new file, +74):

+#ifndef __LINUX_SPINLOCK_UP_H
+#define __LINUX_SPINLOCK_UP_H
+
+#ifndef __LINUX_SPINLOCK_H
+# error "please don't include this file directly"
+#endif
+
+/*
+ * include/linux/spinlock_up.h - UP-debug version of spinlocks.
+ *
+ * portions Copyright 2005, Red Hat, Inc., Ingo Molnar
+ * Released under the General Public License (GPL).
+ *
+ * In the debug case, 1 means unlocked, 0 means locked. (the values
+ * are inverted, to catch initialization bugs)
+ *
+ * No atomicity anywhere, we are on UP.
+ */
+
+#ifdef CONFIG_DEBUG_SPINLOCK
+
+#define __raw_spin_is_locked(x)		((x)->slock == 0)
+
+static inline void __raw_spin_lock(raw_spinlock_t *lock)
+{
+	lock->slock = 0;
+}
+
+static inline void
+__raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long flags)
+{
+	local_irq_save(flags);
+	lock->slock = 0;
+}
+
+static inline int __raw_spin_trylock(raw_spinlock_t *lock)
+{
+	char oldval = lock->slock;
+
+	lock->slock = 0;
+
+	return oldval > 0;
+}
+
+static inline void __raw_spin_unlock(raw_spinlock_t *lock)
+{
+	lock->slock = 1;
+}
+
+/*
+ * Read-write spinlocks. No debug version.
+ */
+#define __raw_read_lock(lock)		do { (void)(lock); } while (0)
+#define __raw_write_lock(lock)		do { (void)(lock); } while (0)
+#define __raw_read_trylock(lock)	({ (void)(lock); 1; })
+#define __raw_write_trylock(lock)	({ (void)(lock); 1; })
+#define __raw_read_unlock(lock)		do { (void)(lock); } while (0)
+#define __raw_write_unlock(lock)	do { (void)(lock); } while (0)
+
+#else /* DEBUG_SPINLOCK */
+#define __raw_spin_is_locked(lock)	((void)(lock), 0)
+/* for sched.c and kernel_lock.c: */
+# define __raw_spin_lock(lock)		do { (void)(lock); } while (0)
+# define __raw_spin_unlock(lock)	do { (void)(lock); } while (0)
+# define __raw_spin_trylock(lock)	({ (void)(lock); 1; })
+#endif /* DEBUG_SPINLOCK */
+
+#define __raw_read_can_lock(lock)	(((void)(lock), 1))
+#define __raw_write_can_lock(lock)	(((void)(lock), 1))
+
+#define __raw_spin_unlock_wait(lock) \
+		do { cpu_relax(); } while (__raw_spin_is_locked(lock))
+
+#endif /* __LINUX_SPINLOCK_UP_H */
kernel/Makefile (+1):

 obj-$(CONFIG_FUTEX) += futex.o
 obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
 obj-$(CONFIG_SMP) += cpu.o spinlock.o
+obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
 obj-$(CONFIG_UID16) += uid16.o
 obj-$(CONFIG_MODULES) += module.o
 obj-$(CONFIG_KALLSYMS) += kallsyms.o
kernel/sched.c (+4):

 	 * Manfred Spraul <manfred@colorfullife.com>
 	 */
 	prev_task_flags = prev->flags;
+#ifdef CONFIG_DEBUG_SPINLOCK
+	/* this is a valid case when another task releases the spinlock */
+	rq->lock.owner = current;
+#endif
 	finish_arch_switch(prev);
 	finish_lock_switch(rq, prev);
 	if (mm)
kernel/spinlock.c (+9 -6):

  *
  * Author: Zwane Mwaikambo <zwane@fsmlabs.com>
  *
- * Copyright (2004) Ingo Molnar
+ * Copyright (2004, 2005) Ingo Molnar
+ *
+ * This file contains the spinlock/rwlock implementations for the
+ * SMP and the DEBUG_SPINLOCK cases. (UP-nondebug inlines them)
  */

 #include <linux/config.h>
···
  * Generic declaration of the raw read_trylock() function,
  * architectures are supposed to optimize this:
  */
-int __lockfunc generic_raw_read_trylock(rwlock_t *lock)
+int __lockfunc generic__raw_read_trylock(raw_rwlock_t *lock)
 {
-	_raw_read_lock(lock);
+	__raw_read_lock(lock);
 	return 1;
 }
-EXPORT_SYMBOL(generic_raw_read_trylock);
+EXPORT_SYMBOL(generic__raw_read_trylock);

 int __lockfunc _spin_trylock(spinlock_t *lock)
 {
···
 }
 EXPORT_SYMBOL(_write_trylock);

-#ifndef CONFIG_PREEMPT
+#if !defined(CONFIG_PREEMPT) || !defined(CONFIG_SMP)

 void __lockfunc _read_lock(rwlock_t *lock)
 {
···
 	local_irq_save(flags);
 	preempt_disable();
-	_raw_spin_lock_flags(lock, flags);
+	_raw_spin_lock_flags(lock, &flags);
 	return flags;
 }
 EXPORT_SYMBOL(_spin_lock_irqsave);
lib/Makefile (+1):

 CFLAGS_kobject_uevent.o += -DDEBUG
 endif

+obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock_debug.o
 lib-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
 lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
 lib-$(CONFIG_SEMAPHORE_SLEEPERS) += semaphore-sleepers.o
lib/dec_and_lock.c (-3):

  * this is trivially done efficiently using a load-locked
  * store-conditional approach, for example.
  */
-
-#ifndef ATOMIC_DEC_AND_LOCK
 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
 	spin_lock(lock);
···
 }

 EXPORT_SYMBOL(_atomic_dec_and_lock);
-#endif
lib/kernel_lock.c (+1 -2):

 static inline void __unlock_kernel(void)
 {
-	_raw_spin_unlock(&kernel_flag);
-	preempt_enable();
+	spin_unlock(&kernel_flag);
 }

 /*
lib/spinlock_debug.c (new file, +257):

+/*
+ * Copyright 2005, Red Hat, Inc., Ingo Molnar
+ * Released under the General Public License (GPL).
+ *
+ * This file contains the spinlock/rwlock implementations for
+ * DEBUG_SPINLOCK.
+ */
+
+#include <linux/config.h>
+#include <linux/spinlock.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+
+static void spin_bug(spinlock_t *lock, const char *msg)
+{
+	static long print_once = 1;
+	struct task_struct *owner = NULL;
+
+	if (xchg(&print_once, 0)) {
+		if (lock->owner && lock->owner != SPINLOCK_OWNER_INIT)
+			owner = lock->owner;
+		printk("BUG: spinlock %s on CPU#%d, %s/%d\n",
+			msg, smp_processor_id(), current->comm, current->pid);
+		printk(" lock: %p, .magic: %08x, .owner: %s/%d, .owner_cpu: %d\n",
+			lock, lock->magic,
+			owner ? owner->comm : "<none>",
+			owner ? owner->pid : -1,
+			lock->owner_cpu);
+		dump_stack();
+#ifdef CONFIG_SMP
+		/*
+		 * We cannot continue on SMP:
+		 */
+//		panic("bad locking");
+#endif
+	}
+}
+
+#define SPIN_BUG_ON(cond, lock, msg) if (unlikely(cond)) spin_bug(lock, msg)
+
+static inline void debug_spin_lock_before(spinlock_t *lock)
+{
+	SPIN_BUG_ON(lock->magic != SPINLOCK_MAGIC, lock, "bad magic");
+	SPIN_BUG_ON(lock->owner == current, lock, "recursion");
+	SPIN_BUG_ON(lock->owner_cpu == raw_smp_processor_id(),
+							lock, "cpu recursion");
+}
+
+static inline void debug_spin_lock_after(spinlock_t *lock)
+{
+	lock->owner_cpu = raw_smp_processor_id();
+	lock->owner = current;
+}
+
+static inline void debug_spin_unlock(spinlock_t *lock)
+{
+	SPIN_BUG_ON(lock->magic != SPINLOCK_MAGIC, lock, "bad magic");
+	SPIN_BUG_ON(!spin_is_locked(lock), lock, "already unlocked");
+	SPIN_BUG_ON(lock->owner != current, lock, "wrong owner");
+	SPIN_BUG_ON(lock->owner_cpu != raw_smp_processor_id(),
+							lock, "wrong CPU");
+	lock->owner = SPINLOCK_OWNER_INIT;
+	lock->owner_cpu = -1;
+}
+
+static void __spin_lock_debug(spinlock_t *lock)
+{
+	int print_once = 1;
+	u64 i;
+
+	for (;;) {
+		for (i = 0; i < loops_per_jiffy * HZ; i++) {
+			cpu_relax();
+			if (__raw_spin_trylock(&lock->raw_lock))
+				return;
+		}
+		/* lockup suspected: */
+		if (print_once) {
+			print_once = 0;
+			printk("BUG: spinlock lockup on CPU#%d, %s/%d, %p\n",
+				smp_processor_id(), current->comm, current->pid,
+					lock);
+			dump_stack();
+		}
+	}
+}
+
+void _raw_spin_lock(spinlock_t *lock)
+{
+	debug_spin_lock_before(lock);
+	if (unlikely(!__raw_spin_trylock(&lock->raw_lock)))
+		__spin_lock_debug(lock);
+	debug_spin_lock_after(lock);
+}
+
+int _raw_spin_trylock(spinlock_t *lock)
+{
+	int ret = __raw_spin_trylock(&lock->raw_lock);
+
+	if (ret)
+		debug_spin_lock_after(lock);
+#ifndef CONFIG_SMP
+	/*
+	 * Must not happen on UP:
+	 */
+	SPIN_BUG_ON(!ret, lock, "trylock failure on UP");
+#endif
+	return ret;
+}
+
+void _raw_spin_unlock(spinlock_t *lock)
+{
+	debug_spin_unlock(lock);
+	__raw_spin_unlock(&lock->raw_lock);
+}
+
+static void rwlock_bug(rwlock_t *lock, const char *msg)
+{
+	static long print_once = 1;
+
+	if (xchg(&print_once, 0)) {
+		printk("BUG: rwlock %s on CPU#%d, %s/%d, %p\n", msg,
+			smp_processor_id(), current->comm, current->pid, lock);
+		dump_stack();
+#ifdef CONFIG_SMP
+		/*
+		 * We cannot continue on SMP:
+		 */
+		panic("bad locking");
+#endif
+	}
+}
+
+#define RWLOCK_BUG_ON(cond, lock, msg) if (unlikely(cond)) rwlock_bug(lock, msg)
+
+static void __read_lock_debug(rwlock_t *lock)
+{
+	int print_once = 1;
+	u64 i;
+
+	for (;;) {
+		for (i = 0; i < loops_per_jiffy * HZ; i++) {
+			cpu_relax();
+			if (__raw_read_trylock(&lock->raw_lock))
+				return;
+		}
+		/* lockup suspected: */
+		if (print_once) {
+			print_once = 0;
+			printk("BUG: read-lock lockup on CPU#%d, %s/%d, %p\n",
+				smp_processor_id(), current->comm, current->pid,
+					lock);
+			dump_stack();
+		}
+	}
+}
+
+void _raw_read_lock(rwlock_t *lock)
+{
+	RWLOCK_BUG_ON(lock->magic != RWLOCK_MAGIC, lock, "bad magic");
+	if (unlikely(!__raw_read_trylock(&lock->raw_lock)))
+		__read_lock_debug(lock);
+}
+
+int _raw_read_trylock(rwlock_t *lock)
+{
+	int ret = __raw_read_trylock(&lock->raw_lock);
+
+#ifndef CONFIG_SMP
+	/*
+	 * Must not happen on UP:
+	 */
+	RWLOCK_BUG_ON(!ret, lock, "trylock failure on UP");
+#endif
+	return ret;
+}
+
+void _raw_read_unlock(rwlock_t *lock)
+{
+	RWLOCK_BUG_ON(lock->magic != RWLOCK_MAGIC, lock, "bad magic");
+	__raw_read_unlock(&lock->raw_lock);
+}
+
+static inline void debug_write_lock_before(rwlock_t *lock)
+{
+	RWLOCK_BUG_ON(lock->magic != RWLOCK_MAGIC, lock, "bad magic");
+	RWLOCK_BUG_ON(lock->owner == current, lock, "recursion");
+	RWLOCK_BUG_ON(lock->owner_cpu == raw_smp_processor_id(),
+							lock, "cpu recursion");
+}
+
+static inline void debug_write_lock_after(rwlock_t *lock)
+{
+	lock->owner_cpu = raw_smp_processor_id();
+	lock->owner = current;
+}
+
+static inline void debug_write_unlock(rwlock_t *lock)
+{
+	RWLOCK_BUG_ON(lock->magic != RWLOCK_MAGIC, lock, "bad magic");
+	RWLOCK_BUG_ON(lock->owner != current, lock, "wrong owner");
+	RWLOCK_BUG_ON(lock->owner_cpu != raw_smp_processor_id(),
+							lock, "wrong CPU");
+	lock->owner = SPINLOCK_OWNER_INIT;
+	lock->owner_cpu = -1;
+}
+
+static void __write_lock_debug(rwlock_t *lock)
+{
+	int print_once = 1;
+	u64 i;
+
+	for (;;) {
+		for (i = 0; i < loops_per_jiffy * HZ; i++) {
+			cpu_relax();
+			if (__raw_write_trylock(&lock->raw_lock))
+				return;
+		}
+		/* lockup suspected: */
+		if (print_once) {
+			print_once = 0;
+			printk("BUG: write-lock lockup on CPU#%d, %s/%d, %p\n",
+				smp_processor_id(), current->comm, current->pid,
+					lock);
+			dump_stack();
+		}
+	}
+}
+
+void _raw_write_lock(rwlock_t *lock)
+{
+	debug_write_lock_before(lock);
+	if (unlikely(!__raw_write_trylock(&lock->raw_lock)))
+		__write_lock_debug(lock);
+	debug_write_lock_after(lock);
+}
+
+int _raw_write_trylock(rwlock_t *lock)
+{
+	int ret = __raw_write_trylock(&lock->raw_lock);
+
+	if (ret)
+		debug_write_lock_after(lock);
+#ifndef CONFIG_SMP
+	/*
+	 * Must not happen on UP:
+	 */
+	RWLOCK_BUG_ON(!ret, lock, "trylock failure on UP");
+#endif
+	return ret;
+}
+
+void _raw_write_unlock(rwlock_t *lock)
+{
+	debug_write_unlock(lock);
+	__raw_write_unlock(&lock->raw_lock);
+}