Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mutex: Add support for wound/wait style locks

Wound/wait mutexes are used when multiple lock acquisitions of a
similar type can be done in an arbitrary order. The deadlock
handling used here is called wait/wound in the RDBMS literature:
the older task waits until it can acquire the contended lock. The
younger task needs to back off and drop all the locks it is
currently holding, i.e. the younger task is wounded.

For full documentation please read Documentation/ww-mutex-design.txt.
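As a rough userspace illustration of the arbitration rule described above (not kernel API — `take_ticket` and `must_back_off` are hypothetical names), tasks draw a ticket from a monotonic counter, and on contention the holder of the larger (younger) ticket backs off. The unsigned subtraction keeps the "younger" test correct across counter wraparound:

```c
#include <assert.h>
#include <limits.h>

/* Hypothetical model, not the kernel interface. */
static unsigned long ticket_counter;

/* Each task draws one ticket when it starts acquiring locks. */
static unsigned long take_ticket(void)
{
	return ++ticket_counter;
}

/*
 * Nonzero if the acquirer is younger than the current holder and
 * therefore must drop its locks (it is "wounded"). The unsigned
 * subtraction makes this comparison safe across wraparound.
 */
static int must_back_off(unsigned long acquirer, unsigned long holder)
{
	return acquirer != holder && acquirer - holder <= LONG_MAX;
}
```

The older ticket always wins, so one task in any cycle is guaranteed to make forward progress.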

References: https://lwn.net/Articles/548909/
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Rob Clark <robdclark@gmail.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: rostedt@goodmis.org
Cc: daniel@ffwll.ch
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/51C8038C.9000106@canonical.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Maarten Lankhorst, committed by Ingo Molnar
040a0a37 a41b56ef

+1004 -18
+344
Documentation/ww-mutex-design.txt
···
Wait/Wound Deadlock-Proof Mutex Design
======================================

Please read mutex-design.txt first, as it applies to wait/wound mutexes too.

Motivation for WW-Mutexes
-------------------------

GPUs perform operations that commonly involve many buffers. Those buffers
can be shared across contexts/processes, exist in different memory
domains (for example VRAM vs system memory), and so on. And with
PRIME / dmabuf, they can even be shared across devices. So there are
a handful of situations where the driver needs to wait for buffers to
become ready. If you think about this in terms of waiting on a buffer
mutex for it to become available, this presents a problem because
there is no way to guarantee that buffers appear in an execbuf/batch in
the same order in all contexts. That is directly under the control of
userspace, and a result of the sequence of GL calls that an application
makes, which results in the potential for deadlock. The problem gets
more complex when you consider that the kernel may need to migrate the
buffer(s) into VRAM before the GPU operates on the buffer(s), which
may in turn require evicting some other buffers (and you don't want to
evict other buffers which are already queued up to the GPU), but for a
simplified understanding of the problem you can ignore this.

The algorithm that the TTM graphics subsystem came up with for dealing with
this problem is quite simple. For each group of buffers (execbuf) that need
to be locked, the caller is assigned a unique reservation id/ticket from a
global counter. In case of deadlock while locking all the buffers
associated with an execbuf, the one with the lowest reservation ticket (i.e.
the oldest task) wins, and the one with the higher reservation id (i.e. the
younger task) unlocks all of the buffers that it has already locked, and then
tries again.

In the RDBMS literature this deadlock handling approach is called wait/wound:
The older task waits until it can acquire the contended lock. The younger task
needs to back off and drop all the locks it is currently holding, i.e. the
younger task is wounded.

Concepts
--------

Compared to normal mutexes two additional concepts/objects show up in the lock
interface for w/w mutexes:

Acquire context: To ensure eventual forward progress it is important that a
task trying to acquire locks doesn't grab a new reservation id, but keeps the
one it acquired when starting the lock acquisition. This ticket is stored in
the acquire context. Furthermore the acquire context keeps track of debugging
state to catch w/w mutex interface abuse.

W/w class: In contrast to normal mutexes the lock class needs to be explicit
for w/w mutexes, since it is required to initialize the acquire context.

Furthermore there are three different classes of w/w lock acquire functions:

* Normal lock acquisition with a context, using ww_mutex_lock.

* Slowpath lock acquisition on the contending lock, used by the wounded task
  after having dropped all already acquired locks. These functions have the
  _slow postfix.

  From a simple semantics point-of-view the _slow functions are not strictly
  required, since simply calling the normal ww_mutex_lock functions on the
  contending lock (after having dropped all other already acquired locks) will
  work correctly. After all, if no other ww mutex has been acquired yet there's
  no deadlock potential and hence the ww_mutex_lock call will block and not
  prematurely return -EDEADLK.
  The advantage of the _slow functions is in interface safety:
  - ww_mutex_lock has a __must_check int return type, whereas
    ww_mutex_lock_slow has a void return type. Note that since ww mutex code
    needs loops/retries anyway the __must_check doesn't result in spurious
    warnings, even though the very first lock operation can never fail.
  - When full debugging is enabled ww_mutex_lock_slow checks that all acquired
    ww mutexes have been released (preventing deadlocks) and makes sure that
    we block on the contending lock (preventing spinning through the -EDEADLK
    slowpath until the contended lock can be acquired).

* Functions to only acquire a single w/w mutex, which results in the exact
  same semantics as a normal mutex. This is done by calling ww_mutex_lock with
  a NULL context.

  Again this is not strictly required. But often you only want to acquire a
  single lock, in which case it's pointless to set up an acquire context (and
  so better to avoid grabbing a deadlock avoidance ticket).

Of course, all the usual variants for handling wake-ups due to signals are
also provided.

Usage
-----

There are three different ways to acquire locks within the same w/w class.
Common definitions for methods #1 and #2:

static DEFINE_WW_CLASS(ww_class);

struct obj {
	struct ww_mutex lock;
	/* obj data */
};

struct obj_entry {
	struct list_head head;
	struct obj *obj;
};

Method 1, using a list in execbuf->buffers that's not allowed to be reordered.
This is useful if a list of required objects is already tracked somewhere.
Furthermore the lock helper can propagate the -EALREADY return code back to
the caller as a signal that an object is twice on the list.
This is useful if
the list is constructed from userspace input and the ABI requires userspace to
not have duplicate entries (e.g. for a gpu commandbuffer submission ioctl).

int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj *res_obj = NULL;
	struct obj_entry *contended_entry = NULL;
	struct obj_entry *entry;
	int ret;

	ww_acquire_init(ctx, &ww_class);

retry:
	list_for_each_entry (entry, list, head) {
		if (entry->obj == res_obj) {
			res_obj = NULL;
			continue;
		}
		ret = ww_mutex_lock(&entry->obj->lock, ctx);
		if (ret < 0) {
			contended_entry = entry;
			goto err;
		}
	}

	ww_acquire_done(ctx);
	return 0;

err:
	list_for_each_entry_continue_reverse (entry, list, head)
		ww_mutex_unlock(&entry->obj->lock);

	if (res_obj)
		ww_mutex_unlock(&res_obj->lock);

	if (ret == -EDEADLK) {
		/* we lost out in a seqno race, lock and retry.. */
		ww_mutex_lock_slow(&contended_entry->obj->lock, ctx);
		res_obj = contended_entry->obj;
		goto retry;
	}
	ww_acquire_fini(ctx);

	return ret;
}

Method 2, using a list in execbuf->buffers that can be reordered. Same
semantics of duplicate entry detection using -EALREADY as method 1 above. But
the list-reordering allows for a bit more idiomatic code.
int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj_entry *entry, *entry2;
	int ret;

	ww_acquire_init(ctx, &ww_class);

	list_for_each_entry (entry, list, head) {
		ret = ww_mutex_lock(&entry->obj->lock, ctx);
		if (ret < 0) {
			entry2 = entry;

			list_for_each_entry_continue_reverse (entry2, list, head)
				ww_mutex_unlock(&entry2->obj->lock);

			if (ret != -EDEADLK) {
				ww_acquire_fini(ctx);
				return ret;
			}

			/* we lost out in a seqno race, lock and retry.. */
			ww_mutex_lock_slow(&entry->obj->lock, ctx);

			/*
			 * Move entry to the head of the list. This points
			 * entry->head.next at the first unlocked entry,
			 * restarting the for loop.
			 */
			list_del(&entry->head);
			list_add(&entry->head, list);
		}
	}

	ww_acquire_done(ctx);
	return 0;
}

Unlocking works the same way for both methods #1 and #2:

void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj_entry *entry;

	list_for_each_entry (entry, list, head)
		ww_mutex_unlock(&entry->obj->lock);

	ww_acquire_fini(ctx);
}

Method 3 is useful if the list of objects is constructed ad-hoc and not
upfront, e.g. when adjusting edges in a graph where each node has its own
ww_mutex lock, and edges can only be changed when holding the locks of all
involved nodes. w/w mutexes are a natural fit for such a case for two reasons:
- They can handle lock acquisition in any order, which allows us to start
  walking a graph from a starting point and then iteratively discover new
  edges and lock down the nodes those edges connect to.
- Due to the -EALREADY return code signalling that a given object is already
  held there's no need for additional book-keeping to break cycles in the
  graph or keep track of which locks are already held (when using more than
  one node as a starting point).

Note that this approach differs in two important ways from the above methods:
- Since the list of objects is dynamically constructed (and might very well be
  different when retrying due to hitting the -EDEADLK wound condition) there's
  no need to keep any object on a persistent list when it's not locked. We can
  therefore move the list_head into the object itself.
- On the other hand the dynamic object list construction also means that the
  -EALREADY return code can't be propagated.

Note also that methods #1 and #2 can be combined with method #3, e.g. to first
lock a list of starting nodes (passed in from userspace) using one of the
above methods, and then lock any additional objects affected by the operations
using method #3 below. The backoff/retry procedure will be a bit more
involved, since when the dynamic locking step hits -EDEADLK we also need to
unlock all the objects acquired with the fixed list. But the w/w mutex debug
checks will catch any interface misuse for these cases.

Also, method 3 can't fail the lock acquisition step since it doesn't return
-EALREADY. Of course this would be different when using the _interruptible
variants, but that's outside of the scope of these examples here.
struct obj {
	struct ww_mutex ww_mutex;
	struct list_head locked_list;
};

static DEFINE_WW_CLASS(ww_class);

void __unlock_objs(struct list_head *list)
{
	struct obj *entry, *temp;

	list_for_each_entry_safe (entry, temp, list, locked_list) {
		/* need to do that before unlocking, since only the current
		 * lock holder is allowed to use the object */
		list_del(&entry->locked_list);
		ww_mutex_unlock(&entry->ww_mutex);
	}
}

void lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj *obj;
	int ret;

	ww_acquire_init(ctx, &ww_class);

retry:
	/* re-init loop start state */
	loop {
		/* magic code which walks over a graph and decides which
		 * objects to lock */

		ret = ww_mutex_lock(&obj->ww_mutex, ctx);
		if (ret == -EALREADY) {
			/* we have that one already, get to the next object */
			continue;
		}
		if (ret == -EDEADLK) {
			__unlock_objs(list);

			ww_mutex_lock_slow(&obj->ww_mutex, ctx);
			list_add(&obj->locked_list, list);
			goto retry;
		}

		/* locked a new object, add it to the list */
		list_add_tail(&obj->locked_list, list);
	}

	ww_acquire_done(ctx);
}

void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	__unlock_objs(list);
	ww_acquire_fini(ctx);
}

Method 4: Only lock one single object. In that case deadlock detection and
prevention is obviously overkill, since with grabbing just one lock you can't
produce a deadlock within just one class. To simplify this case the w/w mutex
api can be used with a NULL context.
Implementation Details
----------------------

Design:
ww_mutex currently encapsulates a struct mutex, this means no extra overhead
for normal mutex locks, which are far more common. As such there is only a
small increase in code size if wait/wound mutexes are not used.

In general, not much contention is expected. The locks are typically used to
serialize access to resources for devices. The only way to make wakeups
smarter would be at the cost of adding a field to struct mutex_waiter. This
would add overhead to all cases where normal mutexes are used, and
ww_mutexes are generally less performance sensitive.

Lockdep:
Special care has been taken to warn for as many cases of api abuse
as possible. Some common api abuses will be caught with
CONFIG_DEBUG_MUTEXES, but CONFIG_PROVE_LOCKING is recommended.

Some of the errors which will be warned about:
 - Forgetting to call ww_acquire_fini or ww_acquire_init.
 - Attempting to lock more mutexes after ww_acquire_done.
 - Attempting to lock the wrong mutex after -EDEADLK and
   unlocking all mutexes.
 - Attempting to lock the right mutex after -EDEADLK,
   before unlocking all mutexes.
 - Calling ww_mutex_lock_slow before -EDEADLK was returned.
 - Unlocking mutexes with the wrong unlock function.
 - Calling one of the ww_acquire_* twice on the same context.
 - Using a different ww_class for the mutex than for the ww_acquire_ctx.
 - Normal lockdep errors that can result in deadlocks.

Some of the lockdep errors that can result in deadlocks:
 - Calling ww_acquire_init to initialize a second ww_acquire_ctx before
   having called ww_acquire_fini on the first.
 - 'normal' deadlocks that can occur.

FIXME: Update this section once we have the TASK_DEADLOCK task state flag
magic implemented.
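The back-off step shared by all the methods above — release everything acquired so far, in reverse order, when a lock attempt fails — can be sketched in plain userspace C. This is a hypothetical model, not the kernel API: `lock_objs_model` and the injected failure stand in for ww_mutex_lock returning -EDEADLK.

```c
#include <assert.h>

#define NOBJS 4

struct model_obj { int locked; };

static struct model_obj objs[NOBJS];
static int unlock_order[NOBJS], nunlocks;

/* fail models ww_mutex_lock returning -EDEADLK for this object */
static int lock_obj(struct model_obj *o, int fail)
{
	if (fail)
		return -1;
	o->locked = 1;
	return 0;
}

static void unlock_obj(int idx)
{
	objs[idx].locked = 0;
	unlock_order[nunlocks++] = idx;	/* record release order */
}

/*
 * Lock objs[0..n-1]; inject a failure at index fail_at (-1 for none).
 * On failure, release everything acquired so far in reverse order,
 * mirroring the list_for_each_entry_continue_reverse cleanup above.
 */
static int lock_objs_model(int n, int fail_at)
{
	int i, ret = 0;

	nunlocks = 0;
	for (i = 0; i < n; i++) {
		ret = lock_obj(&objs[i], i == fail_at);
		if (ret)
			break;
	}
	if (ret)
		while (--i >= 0)
			unlock_obj(i);
	return ret;
}
```

In the real code the caller would then slow-lock the contended mutex and retry; here only the reverse-order cleanup invariant is modeled.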
+1
include/linux/mutex-debug.h
···
 #include <linux/linkage.h>
 #include <linux/lockdep.h>
+#include <linux/debug_locks.h>

 /*
  * Mutexes - debugging helpers:
+354 -1
include/linux/mutex.h
···
 #ifndef __LINUX_MUTEX_H
 #define __LINUX_MUTEX_H

+#include <asm/current.h>
 #include <linux/list.h>
 #include <linux/spinlock_types.h>
 #include <linux/linkage.h>
···
 #endif
 };

+struct ww_class {
+	atomic_long_t stamp;
+	struct lock_class_key acquire_key;
+	struct lock_class_key mutex_key;
+	const char *acquire_name;
+	const char *mutex_name;
+};
+
+struct ww_acquire_ctx {
+	struct task_struct *task;
+	unsigned long stamp;
+	unsigned acquired;
+#ifdef CONFIG_DEBUG_MUTEXES
+	unsigned done_acquire;
+	struct ww_class *ww_class;
+	struct ww_mutex *contending_lock;
+#endif
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map dep_map;
+#endif
+};
+
+struct ww_mutex {
+	struct mutex base;
+	struct ww_acquire_ctx *ctx;
+#ifdef CONFIG_DEBUG_MUTEXES
+	struct ww_class *ww_class;
+#endif
+};
+
 #ifdef CONFIG_DEBUG_MUTEXES
 # include <linux/mutex-debug.h>
 #else
···
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 # define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
 		, .dep_map = { .name = #lockname }
+# define __WW_CLASS_MUTEX_INITIALIZER(lockname, ww_class) \
+		, .ww_class = &ww_class
 #else
 # define __DEP_MAP_MUTEX_INITIALIZER(lockname)
+# define __WW_CLASS_MUTEX_INITIALIZER(lockname, ww_class)
 #endif

 #define __MUTEX_INITIALIZER(lockname) \
···
 		__DEBUG_MUTEX_INITIALIZER(lockname) \
 		__DEP_MAP_MUTEX_INITIALIZER(lockname) }

+#define __WW_CLASS_INITIALIZER(ww_class) \
+		{ .stamp = ATOMIC_LONG_INIT(0) \
+		, .acquire_name = #ww_class "_acquire" \
+		, .mutex_name = #ww_class "_mutex" }
+
+#define __WW_MUTEX_INITIALIZER(lockname, class) \
+		{ .base = __MUTEX_INITIALIZER(lockname.base) \
+		__WW_CLASS_MUTEX_INITIALIZER(lockname, class) }
+
 #define DEFINE_MUTEX(mutexname) \
 	struct mutex mutexname = __MUTEX_INITIALIZER(mutexname)

+#define DEFINE_WW_CLASS(classname) \
+	struct ww_class classname = __WW_CLASS_INITIALIZER(classname)
+
+#define DEFINE_WW_MUTEX(mutexname, ww_class) \
+	struct ww_mutex mutexname = __WW_MUTEX_INITIALIZER(mutexname, ww_class)
+
 extern void __mutex_init(struct mutex *lock, const char *name,
 			 struct lock_class_key *key);
+
+/**
+ * ww_mutex_init - initialize the w/w mutex
+ * @lock: the mutex to be initialized
+ * @ww_class: the w/w class the mutex should belong to
+ *
+ * Initialize the w/w mutex to unlocked state and associate it with the given
+ * class.
+ *
+ * It is not allowed to initialize an already locked mutex.
+ */
+static inline void ww_mutex_init(struct ww_mutex *lock,
+				 struct ww_class *ww_class)
+{
+	__mutex_init(&lock->base, ww_class->mutex_name, &ww_class->mutex_key);
+	lock->ctx = NULL;
+#ifdef CONFIG_DEBUG_MUTEXES
+	lock->ww_class = ww_class;
+#endif
+}

 /**
  * mutex_is_locked - is the mutex locked
···
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
 extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
+
 extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock,
 					unsigned int subclass);
 extern int __must_check mutex_lock_killable_nested(struct mutex *lock,
···
 #define mutex_lock_nest_lock(lock, nest_lock)				\
 do {									\
-	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);	\
+	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);		\
 	_mutex_lock_nest_lock(lock, &(nest_lock)->dep_map);		\
 } while (0)
···
 */
 extern int mutex_trylock(struct mutex *lock);
 extern void mutex_unlock(struct mutex *lock);
+
+/**
+ * ww_acquire_init - initialize a w/w acquire context
+ * @ctx: w/w acquire context to initialize
+ * @ww_class: w/w class of the context
+ *
+ * Initializes a context to acquire multiple mutexes of the given w/w class.
+ *
+ * Context-based w/w mutex acquiring can be done in any order whatsoever within
+ * a given lock class. Deadlocks will be detected and handled with the
+ * wait/wound logic.
+ *
+ * Mixing of context-based w/w mutex acquiring and single w/w mutex locking can
+ * result in undetected deadlocks and is hence forbidden. Mixing different
+ * contexts for the same w/w class when acquiring mutexes can also result in
+ * undetected deadlocks, and is hence also forbidden. Both types of abuse will
+ * be caught by enabling CONFIG_PROVE_LOCKING.
+ *
+ * Nesting of acquire contexts for _different_ w/w classes is possible, subject
+ * to the usual locking rules between different lock classes.
+ *
+ * An acquire context must be released with ww_acquire_fini by the same task
+ * before the memory is freed. It is recommended to allocate the context itself
+ * on the stack.
+ */
+static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
+				   struct ww_class *ww_class)
+{
+	ctx->task = current;
+	ctx->stamp = atomic_long_inc_return(&ww_class->stamp);
+	ctx->acquired = 0;
+#ifdef CONFIG_DEBUG_MUTEXES
+	ctx->ww_class = ww_class;
+	ctx->done_acquire = 0;
+	ctx->contending_lock = NULL;
+#endif
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	debug_check_no_locks_freed((void *)ctx, sizeof(*ctx));
+	lockdep_init_map(&ctx->dep_map, ww_class->acquire_name,
+			 &ww_class->acquire_key, 0);
+	mutex_acquire(&ctx->dep_map, 0, 0, _RET_IP_);
+#endif
+}
+
+/**
+ * ww_acquire_done - marks the end of the acquire phase
+ * @ctx: the acquire context
+ *
+ * Marks the end of the acquire phase, any further w/w mutex lock calls using
+ * this context are forbidden.
+ *
+ * Calling this function is optional, it is just useful to document w/w mutex
+ * code and clearly separate the acquire phase from actually using the locked
+ * data structures.
+ */
+static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
+{
+#ifdef CONFIG_DEBUG_MUTEXES
+	lockdep_assert_held(ctx);
+
+	DEBUG_LOCKS_WARN_ON(ctx->done_acquire);
+	ctx->done_acquire = 1;
+#endif
+}
+
+/**
+ * ww_acquire_fini - releases a w/w acquire context
+ * @ctx: the acquire context to free
+ *
+ * Releases a w/w acquire context. This must be called _after_ all acquired w/w
+ * mutexes have been released with ww_mutex_unlock.
+ */
+static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
+{
+#ifdef CONFIG_DEBUG_MUTEXES
+	mutex_release(&ctx->dep_map, 0, _THIS_IP_);
+
+	DEBUG_LOCKS_WARN_ON(ctx->acquired);
+	if (!config_enabled(CONFIG_PROVE_LOCKING))
+		/*
+		 * lockdep will normally handle this,
+		 * but fail without anyway
+		 */
+		ctx->done_acquire = 1;
+
+	if (!config_enabled(CONFIG_DEBUG_LOCK_ALLOC))
+		/* ensure ww_acquire_fini will still fail if called twice */
+		ctx->acquired = ~0U;
+#endif
+}
+
+extern int __must_check __ww_mutex_lock(struct ww_mutex *lock,
+					struct ww_acquire_ctx *ctx);
+extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
+						      struct ww_acquire_ctx *ctx);
+
+/**
+ * ww_mutex_lock - acquire the w/w mutex
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context, or NULL to acquire only a single lock.
+ *
+ * Lock the w/w mutex exclusively for this task.
+ *
+ * Deadlocks within a given w/w class of locks are detected and handled with the
+ * wait/wound algorithm. If the lock isn't immediately available this function
+ * will either sleep until it is (wait case), or it selects the current context
+ * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * same lock with the same context twice is also detected and signalled by
+ * returning -EALREADY. Returns 0 if the mutex was successfully acquired.
+ *
+ * In the wound case the caller must release all currently held w/w mutexes for
+ * the given context and then wait for this contending lock to be available by
+ * calling ww_mutex_lock_slow. Alternatively callers can opt to not acquire this
+ * lock and proceed with trying to acquire further w/w mutexes (e.g. when
+ * scanning through lru lists trying to free resources).
+ *
+ * The mutex must later on be released by the same task that
+ * acquired it. The task may not exit without first unlocking the mutex. Also,
+ * kernel memory where the mutex resides must not be freed with the mutex still
+ * locked. The mutex must first be initialized (or statically defined) before it
+ * can be locked. memset()-ing the mutex to 0 is not allowed. The mutex must be
+ * of the same w/w lock class as was used to initialize the acquire context.
+ *
+ * A mutex acquired with this function must be released with ww_mutex_unlock.
+ */
+static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	if (ctx)
+		return __ww_mutex_lock(lock, ctx);
+	else {
+		mutex_lock(&lock->base);
+		return 0;
+	}
+}
+
+/**
+ * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context
+ *
+ * Lock the w/w mutex exclusively for this task.
+ *
+ * Deadlocks within a given w/w class of locks are detected and handled with the
+ * wait/wound algorithm. If the lock isn't immediately available this function
+ * will either sleep until it is (wait case), or it selects the current context
+ * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * same lock with the same context twice is also detected and signalled by
+ * returning -EALREADY. Returns 0 if the mutex was successfully acquired. If a
+ * signal arrives while waiting for the lock then this function returns -EINTR.
+ *
+ * In the wound case the caller must release all currently held w/w mutexes for
+ * the given context and then wait for this contending lock to be available by
+ * calling ww_mutex_lock_slow_interruptible. Alternatively callers can opt to
+ * not acquire this lock and proceed with trying to acquire further w/w mutexes
+ * (e.g. when scanning through lru lists trying to free resources).
+ *
+ * The mutex must later on be released by the same task that
+ * acquired it. The task may not exit without first unlocking the mutex. Also,
+ * kernel memory where the mutex resides must not be freed with the mutex still
+ * locked. The mutex must first be initialized (or statically defined) before it
+ * can be locked. memset()-ing the mutex to 0 is not allowed. The mutex must be
+ * of the same w/w lock class as was used to initialize the acquire context.
+ *
+ * A mutex acquired with this function must be released with ww_mutex_unlock.
+ */
+static inline int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
+							   struct ww_acquire_ctx *ctx)
+{
+	if (ctx)
+		return __ww_mutex_lock_interruptible(lock, ctx);
+	else
+		return mutex_lock_interruptible(&lock->base);
+}
+
+/**
+ * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context
+ *
+ * Acquires a w/w mutex with the given context after a wound case. This function
+ * will sleep until the lock becomes available.
+ *
+ * The caller must have released all w/w mutexes already acquired with the
+ * context and then call this function on the contended lock.
+ *
+ * Afterwards the caller may continue to (re)acquire the other w/w mutexes it
+ * needs with ww_mutex_lock. Note that the -EALREADY return code from
+ * ww_mutex_lock can be used to avoid locking this contended mutex twice.
+ *
+ * It is forbidden to call this function with any other w/w mutexes associated
+ * with the context held. It is forbidden to call this on anything else than the
+ * contending mutex.
+ *
+ * Note that the slowpath lock acquiring can also be done by calling
+ * ww_mutex_lock directly. This function here is simply to help w/w mutex
+ * locking code readability by clearly denoting the slowpath.
+ */
+static inline void
+ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+#ifdef CONFIG_DEBUG_MUTEXES
+	DEBUG_LOCKS_WARN_ON(!ctx->contending_lock);
+#endif
+	ret = ww_mutex_lock(lock, ctx);
+	(void)ret;
+}
+
+/**
+ * ww_mutex_lock_slow_interruptible - slowpath acquiring of the w/w mutex,
+ *                                    interruptible
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context
+ *
+ * Acquires a w/w mutex with the given context after a wound case. This function
+ * will sleep until the lock becomes available and returns 0 when the lock has
+ * been acquired. If a signal arrives while waiting for the lock then this
+ * function returns -EINTR.
+ *
+ * The caller must have released all w/w mutexes already acquired with the
+ * context and then call this function on the contended lock.
+ *
+ * Afterwards the caller may continue to (re)acquire the other w/w mutexes it
+ * needs with ww_mutex_lock. Note that the -EALREADY return code from
+ * ww_mutex_lock can be used to avoid locking this contended mutex twice.
+ *
+ * It is forbidden to call this function with any other w/w mutexes associated
+ * with the given context held. It is forbidden to call this on anything else
+ * than the contending mutex.
+ *
+ * Note that the slowpath lock acquiring can also be done by calling
+ * ww_mutex_lock_interruptible directly. This function here is simply to help
+ * w/w mutex locking code readability by clearly denoting the slowpath.
+ */
+static inline int __must_check
+ww_mutex_lock_slow_interruptible(struct ww_mutex *lock,
+				 struct ww_acquire_ctx *ctx)
+{
+#ifdef CONFIG_DEBUG_MUTEXES
+	DEBUG_LOCKS_WARN_ON(!ctx->contending_lock);
+#endif
+	return ww_mutex_lock_interruptible(lock, ctx);
+}
+
+extern void ww_mutex_unlock(struct ww_mutex *lock);
+
+/**
+ * ww_mutex_trylock - tries to acquire the w/w mutex without acquire context
+ * @lock: mutex to lock
+ *
+ * Trylocks a mutex without acquire context, so no deadlock detection is
+ * possible. Returns 1 if the mutex has been acquired successfully, 0 otherwise.
+ */
+static inline int __must_check ww_mutex_trylock(struct ww_mutex *lock)
+{
+	return mutex_trylock(&lock->base);
+}
+
+/**
+ * ww_mutex_destroy - mark a w/w mutex unusable
+ * @lock: the mutex to be destroyed
+ *
+ * This function marks the mutex uninitialized, and any subsequent
+ * use of the mutex is forbidden. The mutex must not be locked when
+ * this function is called.
+ */
+static inline void ww_mutex_destroy(struct ww_mutex *lock)
+{
+	mutex_destroy(&lock->base);
+}
+
+/**
+ * ww_mutex_is_locked - is the w/w mutex locked
+ * @lock: the mutex to be queried
+ *
+ * Returns 1 if the mutex is locked, 0 if unlocked.
+ */
+static inline bool ww_mutex_is_locked(struct ww_mutex *lock)
+{
+	return mutex_is_locked(&lock->base);
+}
+
 extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);

 #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
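The debug fields added to the acquire context (`acquired`, `done_acquire`) enforce a small state machine: no locking after ww_acquire_done, no unbalanced unlocks, and fini only once everything is released. A userspace sketch of those invariants, with made-up `model_*` names rather than the kernel API:

```c
#include <assert.h>

struct model_ctx {
	unsigned acquired;	/* like ww_acquire_ctx.acquired */
	int done_acquire;	/* like the CONFIG_DEBUG_MUTEXES field */
};

static void model_acquire_init(struct model_ctx *ctx)
{
	ctx->acquired = 0;
	ctx->done_acquire = 0;
}

static int model_lock(struct model_ctx *ctx)
{
	if (ctx->done_acquire)
		return -1;	/* locking after "done": API abuse */
	ctx->acquired++;
	return 0;
}

static int model_unlock(struct model_ctx *ctx)
{
	if (!ctx->acquired)
		return -1;	/* unbalanced unlock */
	ctx->acquired--;
	return 0;
}

static void model_acquire_done(struct model_ctx *ctx)
{
	ctx->done_acquire = 1;
}

/* analogue of the DEBUG_LOCKS_WARN_ON(ctx->acquired) check in fini */
static int model_acquire_fini(struct model_ctx *ctx)
{
	return ctx->acquired == 0 ? 0 : -1;
}
```

In the kernel these violations trigger DEBUG_LOCKS_WARN_ON splats rather than error returns; the model just makes the ordering rules explicit.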
+303 -17
kernel/mutex.c
··· 254 254 255 255 EXPORT_SYMBOL(mutex_unlock); 256 256 257 + /** 258 + * ww_mutex_unlock - release the w/w mutex 259 + * @lock: the mutex to be released 260 + * 261 + * Unlock a mutex that has been locked by this task previously with any of the 262 + * ww_mutex_lock* functions (with or without an acquire context). It is 263 + * forbidden to release the locks after releasing the acquire context. 264 + * 265 + * This function must not be used in interrupt context. Unlocking 266 + * of a unlocked mutex is not allowed. 267 + */ 268 + void __sched ww_mutex_unlock(struct ww_mutex *lock) 269 + { 270 + /* 271 + * The unlocking fastpath is the 0->1 transition from 'locked' 272 + * into 'unlocked' state: 273 + */ 274 + if (lock->ctx) { 275 + #ifdef CONFIG_DEBUG_MUTEXES 276 + DEBUG_LOCKS_WARN_ON(!lock->ctx->acquired); 277 + #endif 278 + if (lock->ctx->acquired > 0) 279 + lock->ctx->acquired--; 280 + lock->ctx = NULL; 281 + } 282 + 283 + #ifndef CONFIG_DEBUG_MUTEXES 284 + /* 285 + * When debugging is enabled we must not clear the owner before time, 286 + * the slow path will always be taken, and that clears the owner field 287 + * after verifying that it was indeed current. 
288 + */
289 + mutex_clear_owner(&lock->base);
290 + #endif
291 + __mutex_fastpath_unlock(&lock->base.count, __mutex_unlock_slowpath);
292 + }
293 + EXPORT_SYMBOL(ww_mutex_unlock);
294 +
295 + static inline int __sched
296 + __mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
297 + {
298 + struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
299 + struct ww_acquire_ctx *hold_ctx = ACCESS_ONCE(ww->ctx);
300 +
301 + if (!hold_ctx)
302 + return 0;
303 +
304 + if (unlikely(ctx == hold_ctx))
305 + return -EALREADY;
306 +
307 + if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
308 + (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
309 + #ifdef CONFIG_DEBUG_MUTEXES
310 + DEBUG_LOCKS_WARN_ON(ctx->contending_lock);
311 + ctx->contending_lock = ww;
312 + #endif
313 + return -EDEADLK;
314 + }
315 +
316 + return 0;
317 + }
318 +
319 + static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
320 + struct ww_acquire_ctx *ww_ctx)
321 + {
322 + #ifdef CONFIG_DEBUG_MUTEXES
323 + /*
324 + * If this WARN_ON triggers, you used ww_mutex_lock to acquire,
325 + * but released with a normal mutex_unlock in this call.
326 + *
327 + * This should never happen, always use ww_mutex_unlock.
328 + */
329 + DEBUG_LOCKS_WARN_ON(ww->ctx);
330 +
331 + /*
332 + * Not quite done after calling ww_acquire_done() ?
333 + */
334 + DEBUG_LOCKS_WARN_ON(ww_ctx->done_acquire);
335 +
336 + if (ww_ctx->contending_lock) {
337 + /*
338 + * After -EDEADLK you tried to
339 + * acquire a different ww_mutex? Bad!
340 + */
341 + DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock != ww);
342 +
343 + /*
344 + * You called ww_mutex_lock after receiving -EDEADLK,
345 + * but 'forgot' to unlock everything else first?
346 + */
347 + DEBUG_LOCKS_WARN_ON(ww_ctx->acquired > 0);
348 + ww_ctx->contending_lock = NULL;
349 + }
350 +
351 + /*
352 + * Naughty, using a different class will lead to undefined behavior!
353 + */
354 + DEBUG_LOCKS_WARN_ON(ww_ctx->ww_class != ww->ww_class);
355 + #endif
356 + ww_ctx->acquired++;
357 + }
358 +
359 + /*
360 + * after acquiring lock with fastpath or when we lost out in contested
361 + * slowpath, set ctx and wake up any waiters so they can recheck.
362 + *
363 + * This function is never called when CONFIG_DEBUG_LOCK_ALLOC is set,
364 + * as the fastpath and opportunistic spinning are disabled in that case.
365 + */
366 + static __always_inline void
367 + ww_mutex_set_context_fastpath(struct ww_mutex *lock,
368 + struct ww_acquire_ctx *ctx)
369 + {
370 + unsigned long flags;
371 + struct mutex_waiter *cur;
372 +
373 + ww_mutex_lock_acquired(lock, ctx);
374 +
375 + lock->ctx = ctx;
376 +
377 + /*
378 + * The lock->ctx update should be visible on all cores before
379 + * the atomic read is done, otherwise contended waiters might be
380 + * missed. The contended waiters will either see ww_ctx == NULL
381 + * and keep spinning, or it will acquire wait_lock, add itself
382 + * to waiter list and sleep.
383 + */
384 + smp_mb(); /* ^^^ */
385 +
386 + /*
387 + * Check if lock is contended, if not there is nobody to wake up
388 + */
389 + if (likely(atomic_read(&lock->base.count) == 0))
390 + return;
391 +
392 + /*
393 + * Uh oh, we raced in fastpath, wake up everyone in this case,
394 + * so they can see the new lock->ctx.
395 + */
396 + spin_lock_mutex(&lock->base.wait_lock, flags);
397 + list_for_each_entry(cur, &lock->base.wait_list, list) {
398 + debug_mutex_wake_waiter(&lock->base, cur);
399 + wake_up_process(cur->task);
400 + }
401 + spin_unlock_mutex(&lock->base.wait_lock, flags);
402 + }
403 +
257 404 /*
258 405 * Lock a mutex (possibly interruptible), slowpath:
259 406 */
260 - static inline int __sched
407 + static __always_inline int __sched
261 408 __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
262 - struct lockdep_map *nest_lock, unsigned long ip)
409 + struct lockdep_map *nest_lock, unsigned long ip,
410 + struct ww_acquire_ctx *ww_ctx)
263 411 {
264 412 struct task_struct *task = current;
265 413 struct mutex_waiter waiter;
266 414 unsigned long flags;
415 + int ret;
267 416
268 417 preempt_disable();
269 418 mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
···
447 298 struct task_struct *owner;
448 299 struct mspin_node node;
449 300
301 + if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
302 + struct ww_mutex *ww;
303 +
304 + ww = container_of(lock, struct ww_mutex, base);
305 + /*
306 + * If ww->ctx is set the contents are undefined, only
307 + * by acquiring wait_lock there is a guarantee that
308 + * they are not invalid when reading.
309 + *
310 + * As such, when deadlock detection needs to be
311 + * performed the optimistic spinning cannot be done.
312 + */
313 + if (ACCESS_ONCE(ww->ctx))
314 + break;
315 + }
316 +
450 317 /*
451 318 * If there's an owner, wait for it to either
452 319 * release the lock or go to sleep.
···
477 312 if ((atomic_read(&lock->count) == 1) &&
478 313 (atomic_cmpxchg(&lock->count, 1, 0) == 1)) {
479 314 lock_acquired(&lock->dep_map, ip);
315 + if (!__builtin_constant_p(ww_ctx == NULL)) {
316 + struct ww_mutex *ww;
317 + ww = container_of(lock, struct ww_mutex, base);
318 +
319 + ww_mutex_set_context_fastpath(ww, ww_ctx);
320 + }
321 +
480 322 mutex_set_owner(lock);
481 323 mspin_unlock(MLOCK(lock), &node);
482 324 preempt_enable();
···
543 371 * TASK_UNINTERRUPTIBLE case.)
544 372 */
545 373 if (unlikely(signal_pending_state(state, task))) {
546 - mutex_remove_waiter(lock, &waiter,
547 - task_thread_info(task));
548 - mutex_release(&lock->dep_map, 1, ip);
549 - spin_unlock_mutex(&lock->wait_lock, flags);
550 -
551 - debug_mutex_free_waiter(&waiter);
552 - preempt_enable();
553 - return -EINTR;
374 + ret = -EINTR;
375 + goto err;
554 376 }
377 +
378 + if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
379 + ret = __mutex_lock_check_stamp(lock, ww_ctx);
380 + if (ret)
381 + goto err;
382 + }
383 +
555 384 __set_task_state(task, state);
556 385
557 386 /* didn't get the lock, go to sleep: */
···
567 394 mutex_remove_waiter(lock, &waiter, current_thread_info());
568 395 mutex_set_owner(lock);
569 396
397 + if (!__builtin_constant_p(ww_ctx == NULL)) {
398 + struct ww_mutex *ww = container_of(lock,
399 + struct ww_mutex,
400 + base);
401 + struct mutex_waiter *cur;
402 +
403 + /*
404 + * This branch gets optimized out for the common case,
405 + * and is only important for ww_mutex_lock.
406 + */
407 +
408 + ww_mutex_lock_acquired(ww, ww_ctx);
409 + ww->ctx = ww_ctx;
410 +
411 + /*
412 + * Give any possible sleeping processes the chance to wake up,
413 + * so they can recheck if they have to back off.
414 + */
415 + list_for_each_entry(cur, &lock->wait_list, list) {
416 + debug_mutex_wake_waiter(lock, cur);
417 + wake_up_process(cur->task);
418 + }
419 + }
420 +
570 421 /* set it to 0 if there are no waiters left: */
571 422 if (likely(list_empty(&lock->wait_list)))
572 423 atomic_set(&lock->count, 0);
···
601 404 preempt_enable();
602 405
603 406 return 0;
407 +
408 + err:
409 + mutex_remove_waiter(lock, &waiter, task_thread_info(task));
410 + spin_unlock_mutex(&lock->wait_lock, flags);
411 + debug_mutex_free_waiter(&waiter);
412 + mutex_release(&lock->dep_map, 1, ip);
413 + preempt_enable();
414 + return ret;
604 415 }
605 416
606 417 #ifdef CONFIG_DEBUG_LOCK_ALLOC
···
616 411 mutex_lock_nested(struct mutex *lock, unsigned int subclass)
617 412 {
618 413 might_sleep();
619 - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RET_IP_);
414 + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
415 + subclass, NULL, _RET_IP_, NULL);
620 416 }
621 417
622 418 EXPORT_SYMBOL_GPL(mutex_lock_nested);
···
626 420 _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
627 421 {
628 422 might_sleep();
629 - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_);
423 + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
424 + 0, nest, _RET_IP_, NULL);
630 425 }
631 426
632 427 EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock);
···
636 429 mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass)
637 430 {
638 431 might_sleep();
639 - return __mutex_lock_common(lock, TASK_KILLABLE, subclass, NULL, _RET_IP_);
432 + return __mutex_lock_common(lock, TASK_KILLABLE,
433 + subclass, NULL, _RET_IP_, NULL);
640 434 }
641 435 EXPORT_SYMBOL_GPL(mutex_lock_killable_nested);
642 436
···
646 438 {
647 439 might_sleep();
648 440 return __mutex_lock_common(lock, TASK_INTERRUPTIBLE,
649 - subclass, NULL, _RET_IP_);
441 + subclass, NULL, _RET_IP_, NULL);
650 442 }
651 443
652 444 EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested);
445 +
446 +
447 + int __sched
448 + __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
449 + {
450 + might_sleep();
451 + return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE,
452 + 0, &ctx->dep_map, _RET_IP_, ctx);
453 + }
454 + EXPORT_SYMBOL_GPL(__ww_mutex_lock);
455 +
456 + int __sched
457 + __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
458 + {
459 + might_sleep();
460 + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE,
461 + 0, &ctx->dep_map, _RET_IP_, ctx);
462 + }
463 + EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
464 +
653 465 #endif
654 466
655 467 /*
···
772 544 {
773 545 struct mutex *lock = container_of(lock_count, struct mutex, count);
774 546
775 - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_);
547 + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0,
548 + NULL, _RET_IP_, NULL);
776 549 }
777 550
778 551 static noinline int __sched
779 552 __mutex_lock_killable_slowpath(struct mutex *lock)
780 553 {
781 - return __mutex_lock_common(lock, TASK_KILLABLE, 0, NULL, _RET_IP_);
554 + return __mutex_lock_common(lock, TASK_KILLABLE, 0,
555 + NULL, _RET_IP_, NULL);
782 556 }
783 557
784 558 static noinline int __sched
785 559 __mutex_lock_interruptible_slowpath(struct mutex *lock)
786 560 {
787 - return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_);
561 + return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0,
562 + NULL, _RET_IP_, NULL);
788 563 }
564 +
565 + static noinline int __sched
566 + __ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
567 + {
568 + return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0,
569 + NULL, _RET_IP_, ctx);
570 + }
571 +
572 + static noinline int __sched
573 + __ww_mutex_lock_interruptible_slowpath(struct ww_mutex *lock,
574 + struct ww_acquire_ctx *ctx)
575 + {
576 + return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0,
577 + NULL, _RET_IP_, ctx);
578 + }
579 +
789 580 #endif
790 581
791 582 /*
···
859 612 return ret;
860 613 }
861 614 EXPORT_SYMBOL(mutex_trylock);
615 +
616 + #ifndef CONFIG_DEBUG_LOCK_ALLOC
617 + int __sched
618 + __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
619 + {
620 + int ret;
621 +
622 + might_sleep();
623 +
624 + ret = __mutex_fastpath_lock_retval(&lock->base.count);
625 +
626 + if (likely(!ret)) {
627 + ww_mutex_set_context_fastpath(lock, ctx);
628 + mutex_set_owner(&lock->base);
629 + } else
630 + ret = __ww_mutex_lock_slowpath(lock, ctx);
631 + return ret;
632 + }
633 + EXPORT_SYMBOL(__ww_mutex_lock);
634 +
635 + int __sched
636 + __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
637 + {
638 + int ret;
639 +
640 + might_sleep();
641 +
642 + ret = __mutex_fastpath_lock_retval(&lock->base.count);
643 +
644 + if (likely(!ret)) {
645 + ww_mutex_set_context_fastpath(lock, ctx);
646 + mutex_set_owner(&lock->base);
647 + } else
648 + ret = __ww_mutex_lock_interruptible_slowpath(lock, ctx);
649 + return ret;
650 + }
651 + EXPORT_SYMBOL(__ww_mutex_lock_interruptible);
652 +
653 + #endif
862 654
863 655 /**
864 656 * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0
+2
lib/debug_locks.c
···
30 30 * a locking bug is detected.
31 31 */
32 32 int debug_locks_silent;
33 + EXPORT_SYMBOL_GPL(debug_locks_silent);
33 34
34 35 /*
35 36 * Generic 'turn off all lock debugging' function:
···
45 44 }
46 45 return 0;
47 46 }
47 + EXPORT_SYMBOL_GPL(debug_locks_off);