Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'core/mutexes' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into drm-next

Merge in the tip core/mutexes branch for future GPU driver use.

Ingo will send this branch to Linus prior to drm-next.

* 'core/mutexes' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
locking-selftests: Handle unexpected failures more strictly
mutex: Add more w/w tests to test EDEADLK path handling
mutex: Add more tests to lib/locking-selftest.c
mutex: Add w/w tests to lib/locking-selftest.c
mutex: Add w/w mutex slowpath debugging
mutex: Add support for wound/wait style locks
arch: Make __mutex_fastpath_lock_retval return whether fastpath succeeded or not
powerpc/pci: Fix boot panic on mpc83xx (regression)
s390/ipl: Fix FCP WWPN and LUN format strings for read
fs: fix new splice.c kernel-doc warning
spi/pxa2xx: fix memory corruption due to wrong size used in devm_kzalloc()
s390/mem_detect: fix memory hole handling
s390/dma: support debug_dma_mapping_error
s390/dma: fix mapping_error detection
s390/irq: Only define synchronize_irq() on SMP
Input: xpad - fix for "Mad Catz Street Fighter IV FightPad" controllers
Input: wacom - add a new stylus (0x100802) for Intuos5 and Cintiqs
spi/pxa2xx: use GFP_ATOMIC in sg table allocation
fuse: hold i_mutex in fuse_file_fallocate()
Input: add missing dependencies on CONFIG_HAS_IOMEM
...

+1863 -131
+344
Documentation/ww-mutex-design.txt
Wait/Wound Deadlock-Proof Mutex Design
======================================

Please read mutex-design.txt first, as it applies to wait/wound mutexes too.

Motivation for WW-Mutexes
-------------------------

GPUs do operations that commonly involve many buffers. Those buffers
can be shared across contexts/processes, exist in different memory
domains (for example VRAM vs system memory), and so on. And with
PRIME / dmabuf, they can even be shared across devices. So there are
a handful of situations where the driver needs to wait for buffers to
become ready. If you think about this in terms of waiting on a buffer
mutex for it to become available, this presents a problem because
there is no way to guarantee that buffers appear in an execbuf/batch in
the same order in all contexts. That is directly under control of
userspace, and a result of the sequence of GL calls that an application
makes. Which results in the potential for deadlock. The problem gets
more complex when you consider that the kernel may need to migrate the
buffer(s) into VRAM before the GPU operates on the buffer(s), which
may in turn require evicting some other buffers (and you don't want to
evict other buffers which are already queued up to the GPU), but for a
simplified understanding of the problem you can ignore this.

The algorithm that the TTM graphics subsystem came up with for dealing with
this problem is quite simple. For each group of buffers (execbuf) that need
to be locked, the caller would be assigned a unique reservation id/ticket,
from a global counter. In case of deadlock while locking all the buffers
associated with an execbuf, the one with the lowest reservation ticket (i.e.
the oldest task) wins, and the one with the higher reservation id (i.e. the
younger task) unlocks all of the buffers that it has already locked, and then
tries again.

In the RDBMS literature this deadlock handling approach is called wait/wound:
The older task waits until it can acquire the contended lock. The younger task
needs to back off and drop all the locks it is currently holding, i.e. the
younger task is wounded.

Concepts
--------

Compared to normal mutexes two additional concepts/objects show up in the lock
interface for w/w mutexes:

Acquire context: To ensure eventual forward progress it is important that a task
trying to acquire locks doesn't grab a new reservation id, but keeps the one it
acquired when starting the lock acquisition. This ticket is stored in the
acquire context. Furthermore the acquire context keeps track of debugging state
to catch w/w mutex interface abuse.

W/w class: In contrast to normal mutexes the lock class needs to be explicit for
w/w mutexes, since it is required to initialize the acquire context.

Furthermore there are three different classes of w/w lock acquire functions:

* Normal lock acquisition with a context, using ww_mutex_lock.

* Slowpath lock acquisition on the contending lock, used by the wounded task
  after having dropped all already acquired locks. These functions have the
  _slow postfix.

  From a simple semantics point-of-view the _slow functions are not strictly
  required, since simply calling the normal ww_mutex_lock functions on the
  contending lock (after having dropped all other already acquired locks) will
  work correctly. After all if no other ww mutex has been acquired yet there's
  no deadlock potential and hence the ww_mutex_lock call will block and not
  prematurely return -EDEADLK. The advantage of the _slow functions is in
  interface safety:
  - ww_mutex_lock has a __must_check int return type, whereas ww_mutex_lock_slow
    has a void return type. Note that since ww mutex code needs loops/retries
    anyway the __must_check doesn't result in spurious warnings, even though the
    very first lock operation can never fail.
  - When full debugging is enabled ww_mutex_lock_slow checks that all acquired
    ww mutexes have been released (preventing deadlocks) and makes sure that we
    block on the contending lock (preventing spinning through the -EDEADLK
    slowpath until the contended lock can be acquired).

* Functions to only acquire a single w/w mutex, which results in the exact same
  semantics as a normal mutex. This is done by calling ww_mutex_lock with a NULL
  context.

  Again this is not strictly required. But often you only want to acquire a
  single lock in which case it's pointless to set up an acquire context (and so
  better to avoid grabbing a deadlock avoidance ticket).

Of course, all the usual variants for handling wake-ups due to signals are also
provided.

Usage
-----

There are three different ways to acquire locks within the same w/w class.
Common definitions for methods #1 and #2:

static DEFINE_WW_CLASS(ww_class);

struct obj {
	struct ww_mutex lock;
	/* obj data */
};

struct obj_entry {
	struct list_head head;
	struct obj *obj;
};

Method 1, using a list in execbuf->buffers that's not allowed to be reordered.
This is useful if a list of required objects is already tracked somewhere.
Furthermore the lock helper can propagate the -EALREADY return code back to
the caller as a signal that an object is twice on the list. This is useful if
the list is constructed from userspace input and the ABI requires userspace to
not have duplicate entries (e.g. for a gpu commandbuffer submission ioctl).

int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj *res_obj = NULL;
	struct obj_entry *contended_entry = NULL;
	struct obj_entry *entry;
	int ret;

	ww_acquire_init(ctx, &ww_class);

retry:
	list_for_each_entry (entry, list, head) {
		if (entry->obj == res_obj) {
			res_obj = NULL;
			continue;
		}
		ret = ww_mutex_lock(&entry->obj->lock, ctx);
		if (ret < 0) {
			contended_entry = entry;
			goto err;
		}
	}

	ww_acquire_done(ctx);
	return 0;

err:
	list_for_each_entry_continue_reverse (entry, list, head)
		ww_mutex_unlock(&entry->obj->lock);

	if (res_obj)
		ww_mutex_unlock(&res_obj->lock);

	if (ret == -EDEADLK) {
		/* we lost out in a seqno race, lock and retry.. */
		ww_mutex_lock_slow(&contended_entry->obj->lock, ctx);
		res_obj = contended_entry->obj;
		goto retry;
	}
	ww_acquire_fini(ctx);

	return ret;
}

Method 2, using a list in execbuf->buffers that can be reordered. Same semantics
of duplicate entry detection using -EALREADY as method 1 above. But the
list-reordering allows for a bit more idiomatic code.

int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj_entry *entry, *entry2;
	int ret;

	ww_acquire_init(ctx, &ww_class);

	list_for_each_entry (entry, list, head) {
		ret = ww_mutex_lock(&entry->obj->lock, ctx);
		if (ret < 0) {
			entry2 = entry;

			list_for_each_entry_continue_reverse (entry2, list, head)
				ww_mutex_unlock(&entry2->obj->lock);

			if (ret != -EDEADLK) {
				ww_acquire_fini(ctx);
				return ret;
			}

			/* we lost out in a seqno race, lock and retry.. */
			ww_mutex_lock_slow(&entry->obj->lock, ctx);

			/*
			 * Move buf to head of the list, this will point
			 * buf->next to the first unlocked entry,
			 * restarting the for loop.
			 */
			list_del(&entry->head);
			list_add(&entry->head, list);
		}
	}

	ww_acquire_done(ctx);
	return 0;
}

Unlocking works the same way for both methods #1 and #2:

void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj_entry *entry;

	list_for_each_entry (entry, list, head)
		ww_mutex_unlock(&entry->obj->lock);

	ww_acquire_fini(ctx);
}

Method 3 is useful if the list of objects is constructed ad-hoc and not upfront,
e.g. when adjusting edges in a graph where each node has its own ww_mutex lock,
and edges can only be changed when holding the locks of all involved nodes. w/w
mutexes are a natural fit for such a case for two reasons:
- They can handle lock-acquisition in any order which allows us to start walking
  a graph from a starting point and then iteratively discovering new edges and
  locking down the nodes those edges connect to.
- Due to the -EALREADY return code signalling that a given object is already
  held there's no need for additional book-keeping to break cycles in the graph
  or keep track of which locks are already held (when using more than one node
  as a starting point).

Note that this approach differs in two important ways from the above methods:
- Since the list of objects is dynamically constructed (and might very well be
  different when retrying due to hitting the -EDEADLK wound condition) there's
  no need to keep any object on a persistent list when it's not locked. We can
  therefore move the list_head into the object itself.
- On the other hand the dynamic object list construction also means that the
  -EALREADY return code can't be propagated.

Note also that methods #1 and #2 can be combined with method #3, e.g. to first
lock a list of starting nodes (passed in from userspace) using one of the above
methods. And then lock any additional objects affected by the operations using
method #3 below. The backoff/retry procedure will be a bit more involved, since
when the dynamic locking step hits -EDEADLK we also need to unlock all the
objects acquired with the fixed list. But the w/w mutex debug checks will catch
any interface misuse for these cases.

Also, method 3 can't fail the lock acquisition step since it doesn't return
-EALREADY. Of course this would be different when using the _interruptible
variants, but that's outside of the scope of these examples here.

struct obj {
	struct ww_mutex ww_mutex;
	struct list_head locked_list;
};

static DEFINE_WW_CLASS(ww_class);

void __unlock_objs(struct list_head *list)
{
	struct obj *entry, *temp;

	list_for_each_entry_safe (entry, temp, list, locked_list) {
		/* need to do that before unlocking, since only the current
		 * lock holder is allowed to use the object */
		list_del(&entry->locked_list);
		ww_mutex_unlock(&entry->ww_mutex);
	}
}

void lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	struct obj *obj;
	int ret;

	ww_acquire_init(ctx, &ww_class);

retry:
	/* re-init loop start state */
	loop {
		/* magic code which walks over a graph and decides which objects
		 * to lock */

		ret = ww_mutex_lock(&obj->ww_mutex, ctx);
		if (ret == -EALREADY) {
			/* we have that one already, get to the next object */
			continue;
		}
		if (ret == -EDEADLK) {
			__unlock_objs(list);

			ww_mutex_lock_slow(&obj->ww_mutex, ctx);
			list_add(&obj->locked_list, list);
			goto retry;
		}

		/* locked a new object, add it to the list */
		list_add_tail(&obj->locked_list, list);
	}

	ww_acquire_done(ctx);
	return;
}

void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
{
	__unlock_objs(list);
	ww_acquire_fini(ctx);
}

Method 4: Only lock one single object. In that case deadlock detection and
prevention is obviously overkill, since with grabbing just one lock you can't
produce a deadlock within just one class. To simplify this case the w/w mutex
api can be used with a NULL context.
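Method 4 has no listing in the text above; a minimal sketch of the call pattern
follows (the struct and function names other than the ww_mutex API itself are
illustrative, not part of this patch):

```c
static DEFINE_WW_CLASS(ww_class);

struct obj {
	struct ww_mutex lock;
	/* obj data */
};

/* Single-lock case: passing a NULL acquire context gives plain mutex
 * semantics, with no reservation ticket and no -EDEADLK handling needed. */
int frob_obj(struct obj *obj)
{
	int ret;

	ret = ww_mutex_lock(&obj->lock, NULL);
	if (ret)
		return ret;

	/* ... operate on obj ... */

	ww_mutex_unlock(&obj->lock);
	return 0;
}
```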
Implementation Details
----------------------

Design:
ww_mutex currently encapsulates a struct mutex, this means no extra overhead for
normal mutex locks, which are far more common. As such there is only a small
increase in code size if wait/wound mutexes are not used.

In general, not much contention is expected. The locks are typically used to
serialize access to resources for devices. The only way to make wakeups
smarter would be at the cost of adding a field to struct mutex_waiter. This
would add overhead to all cases where normal mutexes are used, and
ww_mutexes are generally less performance sensitive.

Lockdep:
Special care has been taken to warn for as many cases of api abuse
as possible. Some common api abuses will be caught with
CONFIG_DEBUG_MUTEXES, but CONFIG_PROVE_LOCKING is recommended.

Some of the errors which will be warned about:
 - Forgetting to call ww_acquire_fini or ww_acquire_init.
 - Attempting to lock more mutexes after ww_acquire_done.
 - Attempting to lock the wrong mutex after -EDEADLK and
   unlocking all mutexes.
 - Attempting to lock the right mutex after -EDEADLK,
   before unlocking all mutexes.
 - Calling ww_mutex_lock_slow before -EDEADLK was returned.
 - Unlocking mutexes with the wrong unlock function.
 - Calling one of the ww_acquire_* twice on the same context.
 - Using a different ww_class for the mutex than for the ww_acquire_ctx.
 - Normal lockdep errors that can result in deadlocks.

Some of the lockdep errors that can result in deadlocks:
 - Calling ww_acquire_init to initialize a second ww_acquire_ctx before
   having called ww_acquire_fini on the first.
 - 'normal' deadlocks that can occur.

FIXME: Update this section once we have the TASK_DEADLOCK task state flag magic
implemented.
+4 -6
arch/ia64/include/asm/mutex.h
···
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(ia64_fetchadd4_acq(count, -1) != 1))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }
 
+4 -6
arch/powerpc/include/asm/mutex.h
···
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(__mutex_dec_return_lock(count) < 0))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }
 
+9 -15
arch/powerpc/sysdev/fsl_pci.c
···
 	return indirect_read_config(bus, devfn, offset, len, val);
 }
 
-static struct pci_ops fsl_indirect_pci_ops =
+#if defined(CONFIG_FSL_SOC_BOOKE) || defined(CONFIG_PPC_86xx)
+
+static struct pci_ops fsl_indirect_pcie_ops =
 {
 	.read = fsl_indirect_read_config,
 	.write = indirect_write_config,
 };
-
-static void __init fsl_setup_indirect_pci(struct pci_controller* hose,
-					  resource_size_t cfg_addr,
-					  resource_size_t cfg_data, u32 flags)
-{
-	setup_indirect_pci(hose, cfg_addr, cfg_data, flags);
-	hose->ops = &fsl_indirect_pci_ops;
-}
-
-#if defined(CONFIG_FSL_SOC_BOOKE) || defined(CONFIG_PPC_86xx)
 
 #define MAX_PHYS_ADDR_BITS	40
 static u64 pci64_dma_offset = 1ull << MAX_PHYS_ADDR_BITS;
···
 	if (!hose->private_data)
 		goto no_bridge;
 
-	fsl_setup_indirect_pci(hose, rsrc.start, rsrc.start + 0x4,
-			       PPC_INDIRECT_TYPE_BIG_ENDIAN);
+	setup_indirect_pci(hose, rsrc.start, rsrc.start + 0x4,
+			   PPC_INDIRECT_TYPE_BIG_ENDIAN);
 
 	if (in_be32(&pci->block_rev1) < PCIE_IP_REV_3_0)
 		hose->indirect_type |= PPC_INDIRECT_TYPE_FSL_CFG_REG_LINK;
 
 	if (early_find_capability(hose, 0, 0, PCI_CAP_ID_EXP)) {
+		/* use fsl_indirect_read_config for PCIe */
+		hose->ops = &fsl_indirect_pcie_ops;
 		/* For PCIE read HEADER_TYPE to identify controler mode */
 		early_read_config_byte(hose, 0, 0, PCI_HEADER_TYPE, &hdr_type);
 		if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE)
···
 		if (ret)
 			goto err0;
 	} else {
-		fsl_setup_indirect_pci(hose, rsrc_cfg.start,
-				       rsrc_cfg.start + 4, 0);
+		setup_indirect_pci(hose, rsrc_cfg.start,
+				   rsrc_cfg.start + 4, 0);
 	}
 
 	printk(KERN_INFO "Found FSL PCI host bridge at 0x%016llx. "
+2 -1
arch/s390/include/asm/dma-mapping.h
···
 {
 	struct dma_map_ops *dma_ops = get_dma_ops(dev);
 
+	debug_dma_mapping_error(dev, dma_addr);
 	if (dma_ops->mapping_error)
 		return dma_ops->mapping_error(dev, dma_addr);
-	return (dma_addr == 0UL);
+	return (dma_addr == DMA_ERROR_CODE);
 }
 
 static inline void *dma_alloc_coherent(struct device *dev, size_t size,
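For context: the debug_dma_mapping_error() call added here feeds the
CONFIG_DMA_API_DEBUG infrastructure, which warns when a driver never checks a
mapping it created. The driver pattern it observes looks like this (an
illustrative fragment, not taken from this patch):

```c
dma_addr_t addr;

addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, addr))	/* also marks the mapping as checked */
	return -EIO;
```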
+4 -4
arch/s390/kernel/ipl.c
···
 	.write = reipl_fcp_scpdata_write,
 };
 
-DEFINE_IPL_ATTR_RW(reipl_fcp, wwpn, "0x%016llx\n", "%016llx\n",
+DEFINE_IPL_ATTR_RW(reipl_fcp, wwpn, "0x%016llx\n", "%llx\n",
 		   reipl_block_fcp->ipl_info.fcp.wwpn);
-DEFINE_IPL_ATTR_RW(reipl_fcp, lun, "0x%016llx\n", "%016llx\n",
+DEFINE_IPL_ATTR_RW(reipl_fcp, lun, "0x%016llx\n", "%llx\n",
 		   reipl_block_fcp->ipl_info.fcp.lun);
 DEFINE_IPL_ATTR_RW(reipl_fcp, bootprog, "%lld\n", "%lld\n",
 		   reipl_block_fcp->ipl_info.fcp.bootprog);
···
 
 /* FCP dump device attributes */
 
-DEFINE_IPL_ATTR_RW(dump_fcp, wwpn, "0x%016llx\n", "%016llx\n",
+DEFINE_IPL_ATTR_RW(dump_fcp, wwpn, "0x%016llx\n", "%llx\n",
 		   dump_block_fcp->ipl_info.fcp.wwpn);
-DEFINE_IPL_ATTR_RW(dump_fcp, lun, "0x%016llx\n", "%016llx\n",
+DEFINE_IPL_ATTR_RW(dump_fcp, lun, "0x%016llx\n", "%llx\n",
 		   dump_block_fcp->ipl_info.fcp.lun);
 DEFINE_IPL_ATTR_RW(dump_fcp, bootprog, "%lld\n", "%lld\n",
 		   dump_block_fcp->ipl_info.fcp.bootprog);
+2
arch/s390/kernel/irq.c
···
 }
 EXPORT_SYMBOL(measurement_alert_subclass_unregister);
 
+#ifdef CONFIG_SMP
 void synchronize_irq(unsigned int irq)
 {
 	/*
···
 	 */
 }
 EXPORT_SYMBOL_GPL(synchronize_irq);
+#endif
 
 #ifndef CONFIG_PCI
+2 -1
arch/s390/mm/mem_detect.c
···
 		continue;
 	} else if ((addr <= chunk->addr) &&
 		   (addr + size >= chunk->addr + chunk->size)) {
-		memset(chunk, 0 , sizeof(*chunk));
+		memmove(chunk, chunk + 1, (MEMORY_CHUNKS-i-1) * sizeof(*chunk));
+		memset(&mem_chunk[MEMORY_CHUNKS-1], 0, sizeof(*chunk));
 	} else if (addr + size < chunk->addr + chunk->size) {
 		chunk->size = chunk->addr + chunk->size - addr - size;
 		chunk->addr = addr + size;
+2 -2
arch/sh/include/asm/mutex-llsc.h
···
 }
 
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	int __done, __res;
 
···
 		: "t");
 
 	if (unlikely(!__done || __res != 0))
-		__res = fail_fn(count);
+		__res = -1;
 
 	return __res;
 }
+4 -7
arch/x86/include/asm/mutex_32.h
···
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if it
- * wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
-static inline int __mutex_fastpath_lock_retval(atomic_t *count,
-					       int (*fail_fn)(atomic_t *))
+static inline int __mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	else
 		return 0;
 }
+4 -7
arch/x86/include/asm/mutex_64.h
···
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
-static inline int __mutex_fastpath_lock_retval(atomic_t *count,
-					       int (*fail_fn)(atomic_t *))
+static inline int __mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	else
 		return 0;
 }
+1 -1
drivers/input/joystick/xpad.c
···
 	{ 0x0738, 0x4540, "Mad Catz Beat Pad", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
 	{ 0x0738, 0x4556, "Mad Catz Lynx Wireless Controller", 0, XTYPE_XBOX },
 	{ 0x0738, 0x4716, "Mad Catz Wired Xbox 360 Controller", 0, XTYPE_XBOX360 },
-	{ 0x0738, 0x4728, "Mad Catz Street Fighter IV FightPad", XTYPE_XBOX360 },
+	{ 0x0738, 0x4728, "Mad Catz Street Fighter IV FightPad", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
 	{ 0x0738, 0x4738, "Mad Catz Wired Xbox 360 Controller (SFIV)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
 	{ 0x0738, 0x6040, "Mad Catz Beat Pad Pro", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
 	{ 0x0738, 0xbeef, "Mad Catz JOYTECH NEO SE Advanced GamePad", XTYPE_XBOX360 },
+1
drivers/input/keyboard/Kconfig
···
 
 config KEYBOARD_OPENCORES
 	tristate "OpenCores Keyboard Controller"
+	depends on HAS_IOMEM
 	help
 	  Say Y here if you want to use the OpenCores Keyboard Controller
 	  http://www.opencores.org/project,keyboardcontroller
+1
drivers/input/serio/Kconfig
···
 
 config SERIO_ALTERA_PS2
 	tristate "Altera UP PS/2 controller"
+	depends on HAS_IOMEM
 	help
 	  Say Y here if you have Altera University Program PS/2 ports.
 
+2
drivers/input/tablet/wacom_wac.c
···
 	case 0x140802: /* Intuos4/5 13HD/24HD Classic Pen */
 	case 0x160802: /* Cintiq 13HD Pro Pen */
 	case 0x180802: /* DTH2242 Pen */
+	case 0x100802: /* Intuos4/5 13HD/24HD General Pen */
 		wacom->tool[idx] = BTN_TOOL_PEN;
 		break;
···
 	case 0x10080c: /* Intuos4/5 13HD/24HD Art Pen Eraser */
 	case 0x16080a: /* Cintiq 13HD Pro Pen Eraser */
 	case 0x18080a: /* DTH2242 Eraser */
+	case 0x10080a: /* Intuos4/5 13HD/24HD General Pen Eraser */
 		wacom->tool[idx] = BTN_TOOL_RUBBER;
 		break;
 
+21 -7
drivers/input/touchscreen/cyttsp_core.c
···
 	return ttsp_write_block_data(ts, CY_REG_BASE, sizeof(cmd), &cmd);
 }
 
+static int cyttsp_handshake(struct cyttsp *ts)
+{
+	if (ts->pdata->use_hndshk)
+		return ttsp_send_command(ts,
+				ts->xy_data.hst_mode ^ CY_HNDSHK_BIT);
+
+	return 0;
+}
+
 static int cyttsp_load_bl_regs(struct cyttsp *ts)
 {
 	memset(&ts->bl_data, 0, sizeof(ts->bl_data));
···
 	memcpy(bl_cmd, bl_command, sizeof(bl_command));
 	if (ts->pdata->bl_keys)
 		memcpy(&bl_cmd[sizeof(bl_command) - CY_NUM_BL_KEYS],
-			ts->pdata->bl_keys, sizeof(bl_command));
+			ts->pdata->bl_keys, CY_NUM_BL_KEYS);
 
 	error = ttsp_write_block_data(ts, CY_REG_BASE,
 				      sizeof(bl_cmd), bl_cmd);
···
 	if (error)
 		return error;
 
+	error = cyttsp_handshake(ts);
+	if (error)
+		return error;
+
 	return ts->xy_data.act_dist == CY_ACT_DIST_DFLT ? -EIO : 0;
 }
 
···
 	msleep(CY_DELAY_DFLT);
 	error = ttsp_read_block_data(ts, CY_REG_BASE, sizeof(ts->sysinfo_data),
 				     &ts->sysinfo_data);
+	if (error)
+		return error;
+
+	error = cyttsp_handshake(ts);
 	if (error)
 		return error;
 
···
 		goto out;
 
 	/* provide flow control handshake */
-	if (ts->pdata->use_hndshk) {
-		error = ttsp_send_command(ts,
-				ts->xy_data.hst_mode ^ CY_HNDSHK_BIT);
-		if (error)
-			goto out;
-	}
+	error = cyttsp_handshake(ts);
+	if (error)
+		goto out;
 
 	if (unlikely(ts->state == CY_IDLE_STATE))
 		goto out;
+1 -1
drivers/input/touchscreen/cyttsp_core.h
···
 /* TTSP System Information interface definition */
 struct cyttsp_sysinfo_data {
 	u8 hst_mode;
-	u8 mfg_cmd;
 	u8 mfg_stat;
+	u8 mfg_cmd;
 	u8 cid[3];
 	u8 tt_undef1;
 	u8 uid[8];
+1 -1
drivers/spi/spi-pxa2xx-dma.c
···
 	int ret;
 
 	sg_free_table(sgt);
-	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+	ret = sg_alloc_table(sgt, nents, GFP_ATOMIC);
 	if (ret)
 		return ret;
 }
+1 -1
drivers/spi/spi-pxa2xx.c
···
 	    acpi_bus_get_device(ACPI_HANDLE(&pdev->dev), &adev))
 		return NULL;
 
-	pdata = devm_kzalloc(&pdev->dev, sizeof(*ssp), GFP_KERNEL);
+	pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
 	if (!pdata) {
 		dev_err(&pdev->dev,
 			"failed to allocate memory for platform data\n");
+1 -1
drivers/spi/spi-s3c64xx.c
···
 	}
 
 	ret = pm_runtime_get_sync(&sdd->pdev->dev);
-	if (ret != 0) {
+	if (ret < 0) {
 		dev_err(dev, "Failed to enable device: %d\n", ret);
 		goto out_tx;
 	}
+8 -4
fs/fuse/file.c
···
 		.mode = mode
 	};
 	int err;
+	bool lock_inode = !(mode & FALLOC_FL_KEEP_SIZE) ||
+			  (mode & FALLOC_FL_PUNCH_HOLE);
 
 	if (fc->no_fallocate)
 		return -EOPNOTSUPP;
 
-	if (mode & FALLOC_FL_PUNCH_HOLE) {
+	if (lock_inode) {
 		mutex_lock(&inode->i_mutex);
-		fuse_set_nowrite(inode);
+		if (mode & FALLOC_FL_PUNCH_HOLE)
+			fuse_set_nowrite(inode);
 	}
 
 	req = fuse_get_req_nopages(fc);
···
 	fuse_invalidate_attr(inode);
 
 out:
-	if (mode & FALLOC_FL_PUNCH_HOLE) {
-		fuse_release_nowrite(inode);
+	if (lock_inode) {
+		if (mode & FALLOC_FL_PUNCH_HOLE)
+			fuse_release_nowrite(inode);
 		mutex_unlock(&inode->i_mutex);
 	}
 
+1
fs/splice.c
···
  * @in: file to splice from
  * @ppos: input file offset
  * @out: file to splice to
+ * @opos: output file offset
  * @len: number of bytes to splice
  * @flags: splice modifier flags
  *
+4 -6
include/asm-generic/mutex-dec.h
···
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if
- * it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns.
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_dec_return(count) < 0))
-		return fail_fn(count);
+		return -1;
 	return 0;
 }
 
+1 -1
include/asm-generic/mutex-null.h
···
 #define _ASM_GENERIC_MUTEX_NULL_H
 
 #define __mutex_fastpath_lock(count, fail_fn)		fail_fn(count)
-#define __mutex_fastpath_lock_retval(count, fail_fn)	fail_fn(count)
+#define __mutex_fastpath_lock_retval(count)		(-1)
 #define __mutex_fastpath_unlock(count, fail_fn)	fail_fn(count)
 #define __mutex_fastpath_trylock(count, fail_fn)	fail_fn(count)
 #define __mutex_slowpath_needs_to_unlock()		1
+4 -6
include/asm-generic/mutex-xchg.h
···
  * __mutex_fastpath_lock_retval - try to take the lock by moving the count
  * from 1 to a 0 value
  * @count: pointer of type atomic_t
- * @fail_fn: function to call if the original value was not 1
  *
- * Change the count from 1 to a value lower than 1, and call <fail_fn> if it
- * wasn't 1 originally. This function returns 0 if the fastpath succeeds,
- * or anything the slow path function returns
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
  */
 static inline int
-__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
+__mutex_fastpath_lock_retval(atomic_t *count)
 {
 	if (unlikely(atomic_xchg(count, 0) != 1))
 		if (likely(atomic_xchg(count, -1) != 1))
-			return fail_fn(count);
+			return -1;
 	return 0;
 }
 
+1
include/linux/mutex-debug.h
··· 3 3 4 4 #include <linux/linkage.h> 5 5 #include <linux/lockdep.h> 6 + #include <linux/debug_locks.h> 6 7 7 8 /* 8 9 * Mutexes - debugging helpers:
+362 -1
include/linux/mutex.h
··· 10 10 #ifndef __LINUX_MUTEX_H 11 11 #define __LINUX_MUTEX_H 12 12 13 + #include <asm/current.h> 13 14 #include <linux/list.h> 14 15 #include <linux/spinlock_types.h> 15 16 #include <linux/linkage.h> ··· 78 77 #endif 79 78 }; 80 79 80 + struct ww_class { 81 + atomic_long_t stamp; 82 + struct lock_class_key acquire_key; 83 + struct lock_class_key mutex_key; 84 + const char *acquire_name; 85 + const char *mutex_name; 86 + }; 87 + 88 + struct ww_acquire_ctx { 89 + struct task_struct *task; 90 + unsigned long stamp; 91 + unsigned acquired; 92 + #ifdef CONFIG_DEBUG_MUTEXES 93 + unsigned done_acquire; 94 + struct ww_class *ww_class; 95 + struct ww_mutex *contending_lock; 96 + #endif 97 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 98 + struct lockdep_map dep_map; 99 + #endif 100 + #ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH 101 + unsigned deadlock_inject_interval; 102 + unsigned deadlock_inject_countdown; 103 + #endif 104 + }; 105 + 106 + struct ww_mutex { 107 + struct mutex base; 108 + struct ww_acquire_ctx *ctx; 109 + #ifdef CONFIG_DEBUG_MUTEXES 110 + struct ww_class *ww_class; 111 + #endif 112 + }; 113 + 81 114 #ifdef CONFIG_DEBUG_MUTEXES 82 115 # include <linux/mutex-debug.h> 83 116 #else ··· 136 101 #ifdef CONFIG_DEBUG_LOCK_ALLOC 137 102 # define __DEP_MAP_MUTEX_INITIALIZER(lockname) \ 138 103 , .dep_map = { .name = #lockname } 104 + # define __WW_CLASS_MUTEX_INITIALIZER(lockname, ww_class) \ 105 + , .ww_class = &ww_class 139 106 #else 140 107 # define __DEP_MAP_MUTEX_INITIALIZER(lockname) 108 + # define __WW_CLASS_MUTEX_INITIALIZER(lockname, ww_class) 141 109 #endif 142 110 143 111 #define __MUTEX_INITIALIZER(lockname) \ ··· 150 112 __DEBUG_MUTEX_INITIALIZER(lockname) \ 151 113 __DEP_MAP_MUTEX_INITIALIZER(lockname) } 152 114 115 + #define __WW_CLASS_INITIALIZER(ww_class) \ 116 + { .stamp = ATOMIC_LONG_INIT(0) \ 117 + , .acquire_name = #ww_class "_acquire" \ 118 + , .mutex_name = #ww_class "_mutex" } 119 + 120 + #define __WW_MUTEX_INITIALIZER(lockname, class) \ 121 + { .base = { 
\__MUTEX_INITIALIZER(lockname) } \ 122 + __WW_CLASS_MUTEX_INITIALIZER(lockname, class) } 123 + 153 124 #define DEFINE_MUTEX(mutexname) \ 154 125 struct mutex mutexname = __MUTEX_INITIALIZER(mutexname) 155 126 127 + #define DEFINE_WW_CLASS(classname) \ 128 + struct ww_class classname = __WW_CLASS_INITIALIZER(classname) 129 + 130 + #define DEFINE_WW_MUTEX(mutexname, ww_class) \ 131 + struct ww_mutex mutexname = __WW_MUTEX_INITIALIZER(mutexname, ww_class) 132 + 133 + 156 134 extern void __mutex_init(struct mutex *lock, const char *name, 157 135 struct lock_class_key *key); 136 + 137 + /** 138 + * ww_mutex_init - initialize the w/w mutex 139 + * @lock: the mutex to be initialized 140 + * @ww_class: the w/w class the mutex should belong to 141 + * 142 + * Initialize the w/w mutex to unlocked state and associate it with the given 143 + * class. 144 + * 145 + * It is not allowed to initialize an already locked mutex. 146 + */ 147 + static inline void ww_mutex_init(struct ww_mutex *lock, 148 + struct ww_class *ww_class) 149 + { 150 + __mutex_init(&lock->base, ww_class->mutex_name, &ww_class->mutex_key); 151 + lock->ctx = NULL; 152 + #ifdef CONFIG_DEBUG_MUTEXES 153 + lock->ww_class = ww_class; 154 + #endif 155 + } 158 156 159 157 /** 160 158 * mutex_is_locked - is the mutex locked ··· 210 136 #ifdef CONFIG_DEBUG_LOCK_ALLOC 211 137 extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass); 212 138 extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock); 139 + 213 140 extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock, 214 141 unsigned int subclass); 215 142 extern int __must_check mutex_lock_killable_nested(struct mutex *lock, ··· 222 147 223 148 #define mutex_lock_nest_lock(lock, nest_lock) \ 224 149 do { \ 225 - typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ 150 + typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \ 226 151 _mutex_lock_nest_lock(lock, &(nest_lock)->dep_map); \ 227 152 } 
while (0) 228 153 ··· 245 170 */ 246 171 extern int mutex_trylock(struct mutex *lock); 247 172 extern void mutex_unlock(struct mutex *lock); 173 + 174 + /** 175 + * ww_acquire_init - initialize a w/w acquire context 176 + * @ctx: w/w acquire context to initialize 177 + * @ww_class: w/w class of the context 178 + * 179 + * Initializes an context to acquire multiple mutexes of the given w/w class. 180 + * 181 + * Context-based w/w mutex acquiring can be done in any order whatsoever within 182 + * a given lock class. Deadlocks will be detected and handled with the 183 + * wait/wound logic. 184 + * 185 + * Mixing of context-based w/w mutex acquiring and single w/w mutex locking can 186 + * result in undetected deadlocks and is so forbidden. Mixing different contexts 187 + * for the same w/w class when acquiring mutexes can also result in undetected 188 + * deadlocks, and is hence also forbidden. Both types of abuse will be caught by 189 + * enabling CONFIG_PROVE_LOCKING. 190 + * 191 + * Nesting of acquire contexts for _different_ w/w classes is possible, subject 192 + * to the usual locking rules between different lock classes. 193 + * 194 + * An acquire context must be released with ww_acquire_fini by the same task 195 + * before the memory is freed. It is recommended to allocate the context itself 196 + * on the stack. 
197 + */ 198 + static inline void ww_acquire_init(struct ww_acquire_ctx *ctx, 199 + struct ww_class *ww_class) 200 + { 201 + ctx->task = current; 202 + ctx->stamp = atomic_long_inc_return(&ww_class->stamp); 203 + ctx->acquired = 0; 204 + #ifdef CONFIG_DEBUG_MUTEXES 205 + ctx->ww_class = ww_class; 206 + ctx->done_acquire = 0; 207 + ctx->contending_lock = NULL; 208 + #endif 209 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 210 + debug_check_no_locks_freed((void *)ctx, sizeof(*ctx)); 211 + lockdep_init_map(&ctx->dep_map, ww_class->acquire_name, 212 + &ww_class->acquire_key, 0); 213 + mutex_acquire(&ctx->dep_map, 0, 0, _RET_IP_); 214 + #endif 215 + #ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH 216 + ctx->deadlock_inject_interval = 1; 217 + ctx->deadlock_inject_countdown = ctx->stamp & 0xf; 218 + #endif 219 + } 220 + 221 + /** 222 + * ww_acquire_done - marks the end of the acquire phase 223 + * @ctx: the acquire context 224 + * 225 + * Marks the end of the acquire phase, any further w/w mutex lock calls using 226 + * this context are forbidden. 227 + * 228 + * Calling this function is optional, it is just useful to document w/w mutex 229 + * code and clearly designated the acquire phase from actually using the locked 230 + * data structures. 231 + */ 232 + static inline void ww_acquire_done(struct ww_acquire_ctx *ctx) 233 + { 234 + #ifdef CONFIG_DEBUG_MUTEXES 235 + lockdep_assert_held(ctx); 236 + 237 + DEBUG_LOCKS_WARN_ON(ctx->done_acquire); 238 + ctx->done_acquire = 1; 239 + #endif 240 + } 241 + 242 + /** 243 + * ww_acquire_fini - releases a w/w acquire context 244 + * @ctx: the acquire context to free 245 + * 246 + * Releases a w/w acquire context. This must be called _after_ all acquired w/w 247 + * mutexes have been released with ww_mutex_unlock. 
248 + */ 249 + static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx) 250 + { 251 + #ifdef CONFIG_DEBUG_MUTEXES 252 + mutex_release(&ctx->dep_map, 0, _THIS_IP_); 253 + 254 + DEBUG_LOCKS_WARN_ON(ctx->acquired); 255 + if (!config_enabled(CONFIG_PROVE_LOCKING)) 256 + /* 257 + * lockdep will normally handle this, 258 + * but fail without anyway 259 + */ 260 + ctx->done_acquire = 1; 261 + 262 + if (!config_enabled(CONFIG_DEBUG_LOCK_ALLOC)) 263 + /* ensure ww_acquire_fini will still fail if called twice */ 264 + ctx->acquired = ~0U; 265 + #endif 266 + } 267 + 268 + extern int __must_check __ww_mutex_lock(struct ww_mutex *lock, 269 + struct ww_acquire_ctx *ctx); 270 + extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock, 271 + struct ww_acquire_ctx *ctx); 272 + 273 + /** 274 + * ww_mutex_lock - acquire the w/w mutex 275 + * @lock: the mutex to be acquired 276 + * @ctx: w/w acquire context, or NULL to acquire only a single lock. 277 + * 278 + * Lock the w/w mutex exclusively for this task. 279 + * 280 + * Deadlocks within a given w/w class of locks are detected and handled with the 281 + * wait/wound algorithm. If the lock isn't immediately avaiable this function 282 + * will either sleep until it is (wait case). Or it selects the current context 283 + * for backing off by returning -EDEADLK (wound case). Trying to acquire the 284 + * same lock with the same context twice is also detected and signalled by 285 + * returning -EALREADY. Returns 0 if the mutex was successfully acquired. 286 + * 287 + * In the wound case the caller must release all currently held w/w mutexes for 288 + * the given context and then wait for this contending lock to be available by 289 + * calling ww_mutex_lock_slow. Alternatively callers can opt to not acquire this 290 + * lock and proceed with trying to acquire further w/w mutexes (e.g. when 291 + * scanning through lru lists trying to free resources). 
292 + * 293 + * The mutex must later on be released by the same task that 294 + * acquired it. The task may not exit without first unlocking the mutex. Also, 295 + * kernel memory where the mutex resides must not be freed with the mutex still 296 + * locked. The mutex must first be initialized (or statically defined) before it 297 + * can be locked. memset()-ing the mutex to 0 is not allowed. The mutex must be 298 + * of the same w/w lock class as was used to initialize the acquire context. 299 + * 300 + * A mutex acquired with this function must be released with ww_mutex_unlock. 301 + */ 302 + static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) 303 + { 304 + if (ctx) 305 + return __ww_mutex_lock(lock, ctx); 306 + else { 307 + mutex_lock(&lock->base); 308 + return 0; 309 + } 310 + } 311 + 312 + /** 313 + * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible 314 + * @lock: the mutex to be acquired 315 + * @ctx: w/w acquire context 316 + * 317 + * Lock the w/w mutex exclusively for this task. 318 + * 319 + * Deadlocks within a given w/w class of locks are detected and handled with the 320 + * wait/wound algorithm. If the lock isn't immediately avaiable this function 321 + * will either sleep until it is (wait case). Or it selects the current context 322 + * for backing off by returning -EDEADLK (wound case). Trying to acquire the 323 + * same lock with the same context twice is also detected and signalled by 324 + * returning -EALREADY. Returns 0 if the mutex was successfully acquired. If a 325 + * signal arrives while waiting for the lock then this function returns -EINTR. 326 + * 327 + * In the wound case the caller must release all currently held w/w mutexes for 328 + * the given context and then wait for this contending lock to be available by 329 + * calling ww_mutex_lock_slow_interruptible. 
Alternatively callers can opt to 330 + * not acquire this lock and proceed with trying to acquire further w/w mutexes 331 + * (e.g. when scanning through lru lists trying to free resources). 332 + * 333 + * The mutex must later on be released by the same task that 334 + * acquired it. The task may not exit without first unlocking the mutex. Also, 335 + * kernel memory where the mutex resides must not be freed with the mutex still 336 + * locked. The mutex must first be initialized (or statically defined) before it 337 + * can be locked. memset()-ing the mutex to 0 is not allowed. The mutex must be 338 + * of the same w/w lock class as was used to initialize the acquire context. 339 + * 340 + * A mutex acquired with this function must be released with ww_mutex_unlock. 341 + */ 342 + static inline int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock, 343 + struct ww_acquire_ctx *ctx) 344 + { 345 + if (ctx) 346 + return __ww_mutex_lock_interruptible(lock, ctx); 347 + else 348 + return mutex_lock_interruptible(&lock->base); 349 + } 350 + 351 + /** 352 + * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex 353 + * @lock: the mutex to be acquired 354 + * @ctx: w/w acquire context 355 + * 356 + * Acquires a w/w mutex with the given context after a wound case. This function 357 + * will sleep until the lock becomes available. 358 + * 359 + * The caller must have released all w/w mutexes already acquired with the 360 + * context and then call this function on the contended lock. 361 + * 362 + * Afterwards the caller may continue to (re)acquire the other w/w mutexes it 363 + * needs with ww_mutex_lock. Note that the -EALREADY return code from 364 + * ww_mutex_lock can be used to avoid locking this contended mutex twice. 365 + * 366 + * It is forbidden to call this function with any other w/w mutexes associated 367 + * with the context held. It is forbidden to call this on anything else than the 368 + * contending mutex. 
369 + * 370 + * Note that the slowpath lock acquiring can also be done by calling 371 + * ww_mutex_lock directly. This function here is simply to help w/w mutex 372 + * locking code readability by clearly denoting the slowpath. 373 + */ 374 + static inline void 375 + ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) 376 + { 377 + int ret; 378 + #ifdef CONFIG_DEBUG_MUTEXES 379 + DEBUG_LOCKS_WARN_ON(!ctx->contending_lock); 380 + #endif 381 + ret = ww_mutex_lock(lock, ctx); 382 + (void)ret; 383 + } 384 + 385 + /** 386 + * ww_mutex_lock_slow_interruptible - slowpath acquiring of the w/w mutex, 387 + * interruptible 388 + * @lock: the mutex to be acquired 389 + * @ctx: w/w acquire context 390 + * 391 + * Acquires a w/w mutex with the given context after a wound case. This function 392 + * will sleep until the lock becomes available and returns 0 when the lock has 393 + * been acquired. If a signal arrives while waiting for the lock then this 394 + * function returns -EINTR. 395 + * 396 + * The caller must have released all w/w mutexes already acquired with the 397 + * context and then call this function on the contended lock. 398 + * 399 + * Afterwards the caller may continue to (re)acquire the other w/w mutexes it 400 + * needs with ww_mutex_lock. Note that the -EALREADY return code from 401 + * ww_mutex_lock can be used to avoid locking this contended mutex twice. 402 + * 403 + * It is forbidden to call this function with any other w/w mutexes associated 404 + * with the given context held. It is forbidden to call this on anything else 405 + * than the contending mutex. 406 + * 407 + * Note that the slowpath lock acquiring can also be done by calling 408 + * ww_mutex_lock_interruptible directly. This function here is simply to help 409 + * w/w mutex locking code readability by clearly denoting the slowpath. 
410 + */ 411 + static inline int __must_check 412 + ww_mutex_lock_slow_interruptible(struct ww_mutex *lock, 413 + struct ww_acquire_ctx *ctx) 414 + { 415 + #ifdef CONFIG_DEBUG_MUTEXES 416 + DEBUG_LOCKS_WARN_ON(!ctx->contending_lock); 417 + #endif 418 + return ww_mutex_lock_interruptible(lock, ctx); 419 + } 420 + 421 + extern void ww_mutex_unlock(struct ww_mutex *lock); 422 + 423 + /** 424 + * ww_mutex_trylock - tries to acquire the w/w mutex without acquire context 425 + * @lock: mutex to lock 426 + * 427 + * Trylocks a mutex without acquire context, so no deadlock detection is 428 + * possible. Returns 1 if the mutex has been acquired successfully, 0 otherwise. 429 + */ 430 + static inline int __must_check ww_mutex_trylock(struct ww_mutex *lock) 431 + { 432 + return mutex_trylock(&lock->base); 433 + } 434 + 435 + /*** 436 + * ww_mutex_destroy - mark a w/w mutex unusable 437 + * @lock: the mutex to be destroyed 438 + * 439 + * This function marks the mutex uninitialized, and any subsequent 440 + * use of the mutex is forbidden. The mutex must not be locked when 441 + * this function is called. 442 + */ 443 + static inline void ww_mutex_destroy(struct ww_mutex *lock) 444 + { 445 + mutex_destroy(&lock->base); 446 + } 447 + 448 + /** 449 + * ww_mutex_is_locked - is the w/w mutex locked 450 + * @lock: the mutex to be queried 451 + * 452 + * Returns 1 if the mutex is locked, 0 if unlocked. 453 + */ 454 + static inline bool ww_mutex_is_locked(struct ww_mutex *lock) 455 + { 456 + return mutex_is_locked(&lock->base); 457 + } 458 + 248 459 extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); 249 460 250 461 #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
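For the intended GPU-driver use, the API above is meant to be driven by an acquire/backoff loop. The following is a hedged sketch using only the calls added in this hunk — `demo_ww_class`, `lock_pair`, and the two-mutex shape are illustrative, not part of the patch. Note that per the implementation below, a lock can only return -EDEADLK while `ctx->acquired > 0`, so the first lock taken in each pass simply waits:

```c
static DEFINE_WW_CLASS(demo_ww_class);	/* hypothetical class name */

/* Take two w/w mutexes in arbitrary order, backing off and retrying
 * on -EDEADLK as the wait/wound algorithm requires. */
static int lock_pair(struct ww_mutex *a, struct ww_mutex *b)
{
	struct ww_mutex *first = a, *second = b;
	struct ww_acquire_ctx ctx;
	int ret;

	ww_acquire_init(&ctx, &demo_ww_class);
retry:
	ret = ww_mutex_lock(first, &ctx);
	if (ret == -EALREADY)	/* still held from the slowpath below */
		ret = 0;
	if (ret)
		goto err;
	ret = ww_mutex_lock(second, &ctx);
	if (ret) {
		ww_mutex_unlock(first);
		goto err;
	}
	ww_acquire_done(&ctx);	/* optional: marks end of acquire phase */
	return 0;		/* caller unlocks both, then ww_acquire_fini */
err:
	if (ret == -EDEADLK) {
		/* wounded: all locks are dropped, so sleep on the
		 * contended mutex and retry with it already held */
		ww_mutex_lock_slow(second, &ctx);
		swap(first, second);
		goto retry;
	}
	ww_acquire_fini(&ctx);
	return ret;
}
```

The -EALREADY check is the pattern the kerneldoc above recommends for avoiding a relock of the mutex obtained via `ww_mutex_lock_slow`; with many buffers, a driver would instead track the contended lock explicitly while iterating its list.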
+355 -35
kernel/mutex.c
··· 254 254 255 255 EXPORT_SYMBOL(mutex_unlock); 256 256 257 + /** 258 + * ww_mutex_unlock - release the w/w mutex 259 + * @lock: the mutex to be released 260 + * 261 + * Unlock a mutex that has been locked by this task previously with any of the 262 + * ww_mutex_lock* functions (with or without an acquire context). It is 263 + * forbidden to release the locks after releasing the acquire context. 264 + * 265 + * This function must not be used in interrupt context. Unlocking 266 + * of a unlocked mutex is not allowed. 267 + */ 268 + void __sched ww_mutex_unlock(struct ww_mutex *lock) 269 + { 270 + /* 271 + * The unlocking fastpath is the 0->1 transition from 'locked' 272 + * into 'unlocked' state: 273 + */ 274 + if (lock->ctx) { 275 + #ifdef CONFIG_DEBUG_MUTEXES 276 + DEBUG_LOCKS_WARN_ON(!lock->ctx->acquired); 277 + #endif 278 + if (lock->ctx->acquired > 0) 279 + lock->ctx->acquired--; 280 + lock->ctx = NULL; 281 + } 282 + 283 + #ifndef CONFIG_DEBUG_MUTEXES 284 + /* 285 + * When debugging is enabled we must not clear the owner before time, 286 + * the slow path will always be taken, and that clears the owner field 287 + * after verifying that it was indeed current. 
288 + */ 289 + mutex_clear_owner(&lock->base); 290 + #endif 291 + __mutex_fastpath_unlock(&lock->base.count, __mutex_unlock_slowpath); 292 + } 293 + EXPORT_SYMBOL(ww_mutex_unlock); 294 + 295 + static inline int __sched 296 + __mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx) 297 + { 298 + struct ww_mutex *ww = container_of(lock, struct ww_mutex, base); 299 + struct ww_acquire_ctx *hold_ctx = ACCESS_ONCE(ww->ctx); 300 + 301 + if (!hold_ctx) 302 + return 0; 303 + 304 + if (unlikely(ctx == hold_ctx)) 305 + return -EALREADY; 306 + 307 + if (ctx->stamp - hold_ctx->stamp <= LONG_MAX && 308 + (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) { 309 + #ifdef CONFIG_DEBUG_MUTEXES 310 + DEBUG_LOCKS_WARN_ON(ctx->contending_lock); 311 + ctx->contending_lock = ww; 312 + #endif 313 + return -EDEADLK; 314 + } 315 + 316 + return 0; 317 + } 318 + 319 + static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww, 320 + struct ww_acquire_ctx *ww_ctx) 321 + { 322 + #ifdef CONFIG_DEBUG_MUTEXES 323 + /* 324 + * If this WARN_ON triggers, you used ww_mutex_lock to acquire, 325 + * but released with a normal mutex_unlock in this call. 326 + * 327 + * This should never happen, always use ww_mutex_unlock. 328 + */ 329 + DEBUG_LOCKS_WARN_ON(ww->ctx); 330 + 331 + /* 332 + * Not quite done after calling ww_acquire_done() ? 333 + */ 334 + DEBUG_LOCKS_WARN_ON(ww_ctx->done_acquire); 335 + 336 + if (ww_ctx->contending_lock) { 337 + /* 338 + * After -EDEADLK you tried to 339 + * acquire a different ww_mutex? Bad! 340 + */ 341 + DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock != ww); 342 + 343 + /* 344 + * You called ww_mutex_lock after receiving -EDEADLK, 345 + * but 'forgot' to unlock everything else first? 346 + */ 347 + DEBUG_LOCKS_WARN_ON(ww_ctx->acquired > 0); 348 + ww_ctx->contending_lock = NULL; 349 + } 350 + 351 + /* 352 + * Naughty, using a different class will lead to undefined behavior! 
353 + */ 354 + DEBUG_LOCKS_WARN_ON(ww_ctx->ww_class != ww->ww_class); 355 + #endif 356 + ww_ctx->acquired++; 357 + } 358 + 359 + /* 360 + * after acquiring lock with fastpath or when we lost out in contested 361 + * slowpath, set ctx and wake up any waiters so they can recheck. 362 + * 363 + * This function is never called when CONFIG_DEBUG_LOCK_ALLOC is set, 364 + * as the fastpath and opportunistic spinning are disabled in that case. 365 + */ 366 + static __always_inline void 367 + ww_mutex_set_context_fastpath(struct ww_mutex *lock, 368 + struct ww_acquire_ctx *ctx) 369 + { 370 + unsigned long flags; 371 + struct mutex_waiter *cur; 372 + 373 + ww_mutex_lock_acquired(lock, ctx); 374 + 375 + lock->ctx = ctx; 376 + 377 + /* 378 + * The lock->ctx update should be visible on all cores before 379 + * the atomic read is done, otherwise contended waiters might be 380 + * missed. The contended waiters will either see ww_ctx == NULL 381 + * and keep spinning, or it will acquire wait_lock, add itself 382 + * to waiter list and sleep. 383 + */ 384 + smp_mb(); /* ^^^ */ 385 + 386 + /* 387 + * Check if lock is contended, if not there is nobody to wake up 388 + */ 389 + if (likely(atomic_read(&lock->base.count) == 0)) 390 + return; 391 + 392 + /* 393 + * Uh oh, we raced in fastpath, wake up everyone in this case, 394 + * so they can see the new lock->ctx. 
395 + */ 396 + spin_lock_mutex(&lock->base.wait_lock, flags); 397 + list_for_each_entry(cur, &lock->base.wait_list, list) { 398 + debug_mutex_wake_waiter(&lock->base, cur); 399 + wake_up_process(cur->task); 400 + } 401 + spin_unlock_mutex(&lock->base.wait_lock, flags); 402 + } 403 + 257 404 /* 258 405 * Lock a mutex (possibly interruptible), slowpath: 259 406 */ 260 - static inline int __sched 407 + static __always_inline int __sched 261 408 __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, 262 - struct lockdep_map *nest_lock, unsigned long ip) 409 + struct lockdep_map *nest_lock, unsigned long ip, 410 + struct ww_acquire_ctx *ww_ctx) 263 411 { 264 412 struct task_struct *task = current; 265 413 struct mutex_waiter waiter; 266 414 unsigned long flags; 415 + int ret; 267 416 268 417 preempt_disable(); 269 418 mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip); ··· 447 298 struct task_struct *owner; 448 299 struct mspin_node node; 449 300 301 + if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) { 302 + struct ww_mutex *ww; 303 + 304 + ww = container_of(lock, struct ww_mutex, base); 305 + /* 306 + * If ww->ctx is set the contents are undefined, only 307 + * by acquiring wait_lock there is a guarantee that 308 + * they are not invalid when reading. 309 + * 310 + * As such, when deadlock detection needs to be 311 + * performed the optimistic spinning cannot be done. 312 + */ 313 + if (ACCESS_ONCE(ww->ctx)) 314 + break; 315 + } 316 + 450 317 /* 451 318 * If there's an owner, wait for it to either 452 319 * release the lock or go to sleep. 
··· 477 312 if ((atomic_read(&lock->count) == 1) && 478 313 (atomic_cmpxchg(&lock->count, 1, 0) == 1)) { 479 314 lock_acquired(&lock->dep_map, ip); 315 + if (!__builtin_constant_p(ww_ctx == NULL)) { 316 + struct ww_mutex *ww; 317 + ww = container_of(lock, struct ww_mutex, base); 318 + 319 + ww_mutex_set_context_fastpath(ww, ww_ctx); 320 + } 321 + 480 322 mutex_set_owner(lock); 481 323 mspin_unlock(MLOCK(lock), &node); 482 324 preempt_enable(); ··· 543 371 * TASK_UNINTERRUPTIBLE case.) 544 372 */ 545 373 if (unlikely(signal_pending_state(state, task))) { 546 - mutex_remove_waiter(lock, &waiter, 547 - task_thread_info(task)); 548 - mutex_release(&lock->dep_map, 1, ip); 549 - spin_unlock_mutex(&lock->wait_lock, flags); 550 - 551 - debug_mutex_free_waiter(&waiter); 552 - preempt_enable(); 553 - return -EINTR; 374 + ret = -EINTR; 375 + goto err; 554 376 } 377 + 378 + if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) { 379 + ret = __mutex_lock_check_stamp(lock, ww_ctx); 380 + if (ret) 381 + goto err; 382 + } 383 + 555 384 __set_task_state(task, state); 556 385 557 386 /* didn't get the lock, go to sleep: */ ··· 567 394 mutex_remove_waiter(lock, &waiter, current_thread_info()); 568 395 mutex_set_owner(lock); 569 396 397 + if (!__builtin_constant_p(ww_ctx == NULL)) { 398 + struct ww_mutex *ww = container_of(lock, 399 + struct ww_mutex, 400 + base); 401 + struct mutex_waiter *cur; 402 + 403 + /* 404 + * This branch gets optimized out for the common case, 405 + * and is only important for ww_mutex_lock. 406 + */ 407 + 408 + ww_mutex_lock_acquired(ww, ww_ctx); 409 + ww->ctx = ww_ctx; 410 + 411 + /* 412 + * Give any possible sleeping processes the chance to wake up, 413 + * so they can recheck if they have to back off. 
414 + */ 415 + list_for_each_entry(cur, &lock->wait_list, list) { 416 + debug_mutex_wake_waiter(lock, cur); 417 + wake_up_process(cur->task); 418 + } 419 + } 420 + 570 421 /* set it to 0 if there are no waiters left: */ 571 422 if (likely(list_empty(&lock->wait_list))) 572 423 atomic_set(&lock->count, 0); ··· 601 404 preempt_enable(); 602 405 603 406 return 0; 407 + 408 + err: 409 + mutex_remove_waiter(lock, &waiter, task_thread_info(task)); 410 + spin_unlock_mutex(&lock->wait_lock, flags); 411 + debug_mutex_free_waiter(&waiter); 412 + mutex_release(&lock->dep_map, 1, ip); 413 + preempt_enable(); 414 + return ret; 604 415 } 605 416 606 417 #ifdef CONFIG_DEBUG_LOCK_ALLOC ··· 616 411 mutex_lock_nested(struct mutex *lock, unsigned int subclass) 617 412 { 618 413 might_sleep(); 619 - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RET_IP_); 414 + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 415 + subclass, NULL, _RET_IP_, NULL); 620 416 } 621 417 622 418 EXPORT_SYMBOL_GPL(mutex_lock_nested); ··· 626 420 _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest) 627 421 { 628 422 might_sleep(); 629 - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_); 423 + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 424 + 0, nest, _RET_IP_, NULL); 630 425 } 631 426 632 427 EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock); ··· 636 429 mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass) 637 430 { 638 431 might_sleep(); 639 - return __mutex_lock_common(lock, TASK_KILLABLE, subclass, NULL, _RET_IP_); 432 + return __mutex_lock_common(lock, TASK_KILLABLE, 433 + subclass, NULL, _RET_IP_, NULL); 640 434 } 641 435 EXPORT_SYMBOL_GPL(mutex_lock_killable_nested); 642 436 ··· 646 438 { 647 439 might_sleep(); 648 440 return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 649 - subclass, NULL, _RET_IP_); 441 + subclass, NULL, _RET_IP_, NULL); 650 442 } 651 443 652 444 EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested); 445 + 446 + static inline 
int 447 + ww_mutex_deadlock_injection(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) 448 + { 449 + #ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH 450 + unsigned tmp; 451 + 452 + if (ctx->deadlock_inject_countdown-- == 0) { 453 + tmp = ctx->deadlock_inject_interval; 454 + if (tmp > UINT_MAX/4) 455 + tmp = UINT_MAX; 456 + else 457 + tmp = tmp*2 + tmp + tmp/2; 458 + 459 + ctx->deadlock_inject_interval = tmp; 460 + ctx->deadlock_inject_countdown = tmp; 461 + ctx->contending_lock = lock; 462 + 463 + ww_mutex_unlock(lock); 464 + 465 + return -EDEADLK; 466 + } 467 + #endif 468 + 469 + return 0; 470 + } 471 + 472 + int __sched 473 + __ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) 474 + { 475 + int ret; 476 + 477 + might_sleep(); 478 + ret = __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 479 + 0, &ctx->dep_map, _RET_IP_, ctx); 480 + if (!ret && ctx->acquired > 0) 481 + return ww_mutex_deadlock_injection(lock, ctx); 482 + 483 + return ret; 484 + } 485 + EXPORT_SYMBOL_GPL(__ww_mutex_lock); 486 + 487 + int __sched 488 + __ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) 489 + { 490 + int ret; 491 + 492 + might_sleep(); 493 + ret = __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 494 + 0, &ctx->dep_map, _RET_IP_, ctx); 495 + 496 + if (!ret && ctx->acquired > 0) 497 + return ww_mutex_deadlock_injection(lock, ctx); 498 + 499 + return ret; 500 + } 501 + EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible); 502 + 653 503 #endif 654 504 655 505 /* ··· 760 494 * mutex_lock_interruptible() and mutex_trylock(). 
761 495 */ 762 496 static noinline int __sched 763 - __mutex_lock_killable_slowpath(atomic_t *lock_count); 497 + __mutex_lock_killable_slowpath(struct mutex *lock); 764 498 765 499 static noinline int __sched 766 - __mutex_lock_interruptible_slowpath(atomic_t *lock_count); 500 + __mutex_lock_interruptible_slowpath(struct mutex *lock); 767 501 768 502 /** 769 503 * mutex_lock_interruptible - acquire the mutex, interruptible ··· 781 515 int ret; 782 516 783 517 might_sleep(); 784 - ret = __mutex_fastpath_lock_retval 785 - (&lock->count, __mutex_lock_interruptible_slowpath); 786 - if (!ret) 518 + ret = __mutex_fastpath_lock_retval(&lock->count); 519 + if (likely(!ret)) { 787 520 mutex_set_owner(lock); 788 - 789 - return ret; 521 + return 0; 522 + } else 523 + return __mutex_lock_interruptible_slowpath(lock); 790 524 } 791 525 792 526 EXPORT_SYMBOL(mutex_lock_interruptible); ··· 796 530 int ret; 797 531 798 532 might_sleep(); 799 - ret = __mutex_fastpath_lock_retval 800 - (&lock->count, __mutex_lock_killable_slowpath); 801 - if (!ret) 533 + ret = __mutex_fastpath_lock_retval(&lock->count); 534 + if (likely(!ret)) { 802 535 mutex_set_owner(lock); 803 - 804 - return ret; 536 + return 0; 537 + } else 538 + return __mutex_lock_killable_slowpath(lock); 805 539 } 806 540 EXPORT_SYMBOL(mutex_lock_killable); 807 541 ··· 810 544 { 811 545 struct mutex *lock = container_of(lock_count, struct mutex, count); 812 546 813 - __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_); 547 + __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0, 548 + NULL, _RET_IP_, NULL); 814 549 } 815 550 816 551 static noinline int __sched 817 - __mutex_lock_killable_slowpath(atomic_t *lock_count) 552 + __mutex_lock_killable_slowpath(struct mutex *lock) 818 553 { 819 - struct mutex *lock = container_of(lock_count, struct mutex, count); 820 - 821 - return __mutex_lock_common(lock, TASK_KILLABLE, 0, NULL, _RET_IP_); 554 + return __mutex_lock_common(lock, TASK_KILLABLE, 0, 555 + NULL, _RET_IP_, 
					    NULL);
}

static noinline int __sched
-__mutex_lock_interruptible_slowpath(atomic_t *lock_count)
+__mutex_lock_interruptible_slowpath(struct mutex *lock)
{
-	struct mutex *lock = container_of(lock_count, struct mutex, count);
-
-	return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_);
+	return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0,
+				   NULL, _RET_IP_, NULL);
}
+
+static noinline int __sched
+__ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0,
+				   NULL, _RET_IP_, ctx);
+}
+
+static noinline int __sched
+__ww_mutex_lock_interruptible_slowpath(struct ww_mutex *lock,
+				       struct ww_acquire_ctx *ctx)
+{
+	return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0,
+				   NULL, _RET_IP_, ctx);
+}
+
#endif

/*
···
	return ret;
}
EXPORT_SYMBOL(mutex_trylock);
+
+#ifndef CONFIG_DEBUG_LOCK_ALLOC
+int __sched
+__ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+
+	might_sleep();
+
+	ret = __mutex_fastpath_lock_retval(&lock->base.count);
+
+	if (likely(!ret)) {
+		ww_mutex_set_context_fastpath(lock, ctx);
+		mutex_set_owner(&lock->base);
+	} else
+		ret = __ww_mutex_lock_slowpath(lock, ctx);
+	return ret;
+}
+EXPORT_SYMBOL(__ww_mutex_lock);
+
+int __sched
+__ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+
+	might_sleep();
+
+	ret = __mutex_fastpath_lock_retval(&lock->base.count);
+
+	if (likely(!ret)) {
+		ww_mutex_set_context_fastpath(lock, ctx);
+		mutex_set_owner(&lock->base);
+	} else
+		ret = __ww_mutex_lock_interruptible_slowpath(lock, ctx);
+	return ret;
+}
+EXPORT_SYMBOL(__ww_mutex_lock_interruptible);
+
+#endif

/**
 * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0
+13
lib/Kconfig.debug
···
	  This feature allows mutex semantics violations to be detected and
	  reported.

+config DEBUG_WW_MUTEX_SLOWPATH
+	bool "Wait/wound mutex debugging: Slowpath testing"
+	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	select DEBUG_LOCK_ALLOC
+	select DEBUG_SPINLOCK
+	select DEBUG_MUTEXES
+	help
+	  This feature enables slowpath testing for w/w mutex users by
+	  injecting additional -EDEADLK wound/backoff cases. Together with
+	  the full mutex checks enabled with (CONFIG_PROVE_LOCKING) this
+	  will test all possible w/w mutex interface abuse with the
+	  exception of simply not acquiring all the required locks.
+
config DEBUG_LOCK_ALLOC
	bool "Lock debugging: detect incorrect freeing of live locks"
	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+2
lib/debug_locks.c
···
 * a locking bug is detected.
 */
int debug_locks_silent;
+EXPORT_SYMBOL_GPL(debug_locks_silent);

/*
 * Generic 'turn off all lock debugging' function:
···
	}
	return 0;
}
+EXPORT_SYMBOL_GPL(debug_locks_off);
+701 -17
lib/locking-selftest.c
···
 */
static unsigned int debug_locks_verbose;

+static DEFINE_WW_CLASS(ww_lockdep);
+
static int __init setup_debug_locks_verbose(char *str)
{
	get_option(&str, &debug_locks_verbose);
···
#define LOCKTYPE_RWLOCK	0x2
#define LOCKTYPE_MUTEX	0x4
#define LOCKTYPE_RWSEM	0x8
+#define LOCKTYPE_WW	0x10
+
+static struct ww_acquire_ctx t, t2;
+static struct ww_mutex o, o2, o3;

/*
 * Normal standalone locks, for the circular and irq-context
···
#define RSL(x)			down_read(&rwsem_##x)
#define RSU(x)			up_read(&rwsem_##x)
#define RWSI(x)			init_rwsem(&rwsem_##x)
+
+#ifndef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
+#define WWAI(x)			ww_acquire_init(x, &ww_lockdep)
+#else
+#define WWAI(x)			do { ww_acquire_init(x, &ww_lockdep); (x)->deadlock_inject_countdown = ~0U; } while (0)
+#endif
+#define WWAD(x)			ww_acquire_done(x)
+#define WWAF(x)			ww_acquire_fini(x)
+
+#define WWL(x, c)		ww_mutex_lock(x, c)
+#define WWT(x)			ww_mutex_trylock(x)
+#define WWL1(x)			ww_mutex_lock(x, NULL)
+#define WWU(x)			ww_mutex_unlock(x)
+

#define LOCK_UNLOCK_2(x,y)	LOCK(x); LOCK(y); UNLOCK(y); UNLOCK(x)
···
# define I_RWLOCK(x)	lockdep_reset_lock(&rwlock_##x.dep_map)
# define I_MUTEX(x)	lockdep_reset_lock(&mutex_##x.dep_map)
# define I_RWSEM(x)	lockdep_reset_lock(&rwsem_##x.dep_map)
+# define I_WW(x)	lockdep_reset_lock(&x.dep_map)
#else
# define I_SPINLOCK(x)
# define I_RWLOCK(x)
# define I_MUTEX(x)
# define I_RWSEM(x)
+# define I_WW(x)
#endif

#define I1(x) \
···
static void reset_locks(void)
{
	local_irq_disable();
+	lockdep_free_key_range(&ww_lockdep.acquire_key, 1);
+	lockdep_free_key_range(&ww_lockdep.mutex_key, 1);
+
	I1(A); I1(B); I1(C); I1(D);
	I1(X1); I1(X2); I1(Y1); I1(Y2);
	I1(Z1); I1(Z2);
+	I_WW(t); I_WW(t2); I_WW(o.base); I_WW(o2.base); I_WW(o3.base);
	lockdep_reset();
	I2(A); I2(B); I2(C); I2(D);
	init_shared_classes();
+
+	ww_mutex_init(&o, &ww_lockdep); ww_mutex_init(&o2, &ww_lockdep); ww_mutex_init(&o3, &ww_lockdep);
+	memset(&t, 0, sizeof(t)); memset(&t2, 0, sizeof(t2));
+	memset(&ww_lockdep.acquire_key, 0, sizeof(ww_lockdep.acquire_key));
+	memset(&ww_lockdep.mutex_key, 0, sizeof(ww_lockdep.mutex_key));
	local_irq_enable();
}
···
static void dotest(void (*testcase_fn)(void), int expected, int lockclass_mask)
{
	unsigned long saved_preempt_count = preempt_count();
-	int expected_failure = 0;

	WARN_ON(irqs_disabled());
···
	 * Filter out expected failures:
	 */
#ifndef CONFIG_PROVE_LOCKING
-	if ((lockclass_mask & LOCKTYPE_SPIN) && debug_locks != expected)
-		expected_failure = 1;
-	if ((lockclass_mask & LOCKTYPE_RWLOCK) && debug_locks != expected)
-		expected_failure = 1;
-	if ((lockclass_mask & LOCKTYPE_MUTEX) && debug_locks != expected)
-		expected_failure = 1;
-	if ((lockclass_mask & LOCKTYPE_RWSEM) && debug_locks != expected)
-		expected_failure = 1;
+	if (expected == FAILURE && debug_locks) {
+		expected_testcase_failures++;
+		printk("failed|");
+	}
+	else
#endif
	if (debug_locks != expected) {
-		if (expected_failure) {
-			expected_testcase_failures++;
-			printk("failed|");
-		} else {
-			unexpected_testcase_failures++;
+		unexpected_testcase_failures++;
+		printk("FAILED|");

-			printk("FAILED|");
-			dump_stack();
-		}
+		dump_stack();
	} else {
		testcase_successes++;
		printk(" ok  |");
···
	DO_TESTCASE_6IRW(desc, name, 312);		\
	DO_TESTCASE_6IRW(desc, name, 321);

+static void ww_test_fail_acquire(void)
+{
+	int ret;
+
+	WWAI(&t);
+	t.stamp++;
+
+	ret = WWL(&o, &t);
+
+	if (WARN_ON(!o.ctx) ||
+	    WARN_ON(ret))
+		return;
+
+	/* No lockdep test, pure API */
+	ret = WWL(&o, &t);
+	WARN_ON(ret != -EALREADY);
+
+	ret = WWT(&o);
+	WARN_ON(ret);
+
+	t2 = t;
+	t2.stamp++;
+	ret = WWL(&o, &t2);
+	WARN_ON(ret != -EDEADLK);
+	WWU(&o);
+
+	if (WWT(&o))
+		WWU(&o);
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	else
+		DEBUG_LOCKS_WARN_ON(1);
+#endif
+}
+
+static void ww_test_normal(void)
+{
+	int ret;
+
+	WWAI(&t);
+
+	/*
+	 * None of the ww_mutex codepaths should be taken in the 'normal'
+	 * mutex calls. The easiest way to verify this is by using the
+	 * normal mutex calls, and making sure o.ctx is unmodified.
+	 */
+
+	/* mutex_lock (and indirectly, mutex_lock_nested) */
+	o.ctx = (void *)~0UL;
+	mutex_lock(&o.base);
+	mutex_unlock(&o.base);
+	WARN_ON(o.ctx != (void *)~0UL);
+
+	/* mutex_lock_interruptible (and *_nested) */
+	o.ctx = (void *)~0UL;
+	ret = mutex_lock_interruptible(&o.base);
+	if (!ret)
+		mutex_unlock(&o.base);
+	else
+		WARN_ON(1);
+	WARN_ON(o.ctx != (void *)~0UL);
+
+	/* mutex_lock_killable (and *_nested) */
+	o.ctx = (void *)~0UL;
+	ret = mutex_lock_killable(&o.base);
+	if (!ret)
+		mutex_unlock(&o.base);
+	else
+		WARN_ON(1);
+	WARN_ON(o.ctx != (void *)~0UL);
+
+	/* trylock, succeeding */
+	o.ctx = (void *)~0UL;
+	ret = mutex_trylock(&o.base);
+	WARN_ON(!ret);
+	if (ret)
+		mutex_unlock(&o.base);
+	else
+		WARN_ON(1);
+	WARN_ON(o.ctx != (void *)~0UL);
+
+	/* trylock, failing */
+	o.ctx = (void *)~0UL;
+	mutex_lock(&o.base);
+	ret = mutex_trylock(&o.base);
+	WARN_ON(ret);
+	mutex_unlock(&o.base);
+	WARN_ON(o.ctx != (void *)~0UL);
+
+	/* nest_lock */
+	o.ctx = (void *)~0UL;
+	mutex_lock_nest_lock(&o.base, &t);
+	mutex_unlock(&o.base);
+	WARN_ON(o.ctx != (void *)~0UL);
+}
+
+static void ww_test_two_contexts(void)
+{
+	WWAI(&t);
+	WWAI(&t2);
+}
+
+static void ww_test_diff_class(void)
+{
+	WWAI(&t);
+#ifdef CONFIG_DEBUG_MUTEXES
+	t.ww_class = NULL;
+#endif
+	WWL(&o, &t);
+}
+
+static void ww_test_context_done_twice(void)
+{
+	WWAI(&t);
+	WWAD(&t);
+	WWAD(&t);
+	WWAF(&t);
+}
+
+static void ww_test_context_unlock_twice(void)
+{
+	WWAI(&t);
+	WWAD(&t);
+	WWAF(&t);
+	WWAF(&t);
+}
+
+static void ww_test_context_fini_early(void)
+{
+	WWAI(&t);
+	WWL(&o, &t);
+	WWAD(&t);
+	WWAF(&t);
+}
+
+static void ww_test_context_lock_after_done(void)
+{
+	WWAI(&t);
+	WWAD(&t);
+	WWL(&o, &t);
+}
+
+static void ww_test_object_unlock_twice(void)
+{
+	WWL1(&o);
+	WWU(&o);
+	WWU(&o);
+}
+
+static void ww_test_object_lock_unbalanced(void)
+{
+	WWAI(&t);
+	WWL(&o, &t);
+	t.acquired = 0;
+	WWU(&o);
+	WWAF(&t);
+}
+
+static void ww_test_object_lock_stale_context(void)
+{
+	WWAI(&t);
+	o.ctx = &t2;
+	WWL(&o, &t);
+}
+
+static void ww_test_edeadlk_normal(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	o2.ctx = &t2;
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+
+	o2.ctx = NULL;
+	mutex_acquire(&o2.base.dep_map, 0, 1, _THIS_IP_);
+	mutex_unlock(&o2.base);
+	WWU(&o);
+
+	WWL(&o2, &t);
+}
+
+static void ww_test_edeadlk_normal_slow(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+	o2.ctx = &t2;
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+
+	o2.ctx = NULL;
+	mutex_acquire(&o2.base.dep_map, 0, 1, _THIS_IP_);
+	mutex_unlock(&o2.base);
+	WWU(&o);
+
+	ww_mutex_lock_slow(&o2, &t);
+}
+
+static void ww_test_edeadlk_no_unlock(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	o2.ctx = &t2;
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+
+	o2.ctx = NULL;
+	mutex_acquire(&o2.base.dep_map, 0, 1, _THIS_IP_);
+	mutex_unlock(&o2.base);
+
+	WWL(&o2, &t);
+}
+
+static void ww_test_edeadlk_no_unlock_slow(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+	o2.ctx = &t2;
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+
+	o2.ctx = NULL;
+	mutex_acquire(&o2.base.dep_map, 0, 1, _THIS_IP_);
+	mutex_unlock(&o2.base);
+
+	ww_mutex_lock_slow(&o2, &t);
+}
+
+static void ww_test_edeadlk_acquire_more(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+	o2.ctx = &t2;
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+
+	ret = WWL(&o3, &t);
+}
+
+static void ww_test_edeadlk_acquire_more_slow(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+	o2.ctx = &t2;
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+
+	ww_mutex_lock_slow(&o3, &t);
+}
+
+static void ww_test_edeadlk_acquire_more_edeadlk(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+	o2.ctx = &t2;
+
+	mutex_lock(&o3.base);
+	mutex_release(&o3.base.dep_map, 1, _THIS_IP_);
+	o3.ctx = &t2;
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+
+	ret = WWL(&o3, &t);
+	WARN_ON(ret != -EDEADLK);
+}
+
+static void ww_test_edeadlk_acquire_more_edeadlk_slow(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+	o2.ctx = &t2;
+
+	mutex_lock(&o3.base);
+	mutex_release(&o3.base.dep_map, 1, _THIS_IP_);
+	o3.ctx = &t2;
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+
+	ww_mutex_lock_slow(&o3, &t);
+}
+
+static void ww_test_edeadlk_acquire_wrong(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+	o2.ctx = &t2;
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+	if (!ret)
+		WWU(&o2);
+
+	WWU(&o);
+
+	ret = WWL(&o3, &t);
+}
+
+static void ww_test_edeadlk_acquire_wrong_slow(void)
+{
+	int ret;
+
+	mutex_lock(&o2.base);
+	mutex_release(&o2.base.dep_map, 1, _THIS_IP_);
+	o2.ctx = &t2;
+
+	WWAI(&t);
+	t2 = t;
+	t2.stamp--;
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret != -EDEADLK);
+	if (!ret)
+		WWU(&o2);
+
+	WWU(&o);
+
+	ww_mutex_lock_slow(&o3, &t);
+}
+
+static void ww_test_spin_nest_unlocked(void)
+{
+	raw_spin_lock_nest_lock(&lock_A, &o.base);
+	U(A);
+}
+
+static void ww_test_unneeded_slow(void)
+{
+	WWAI(&t);
+
+	ww_mutex_lock_slow(&o, &t);
+}
+
+static void ww_test_context_block(void)
+{
+	int ret;
+
+	WWAI(&t);
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+	WWL1(&o2);
+}
+
+static void ww_test_context_try(void)
+{
+	int ret;
+
+	WWAI(&t);
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWT(&o2);
+	WARN_ON(!ret);
+	WWU(&o2);
+	WWU(&o);
+}
+
+static void ww_test_context_context(void)
+{
+	int ret;
+
+	WWAI(&t);
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret);
+
+	WWU(&o2);
+	WWU(&o);
+}
+
+static void ww_test_try_block(void)
+{
+	bool ret;
+
+	ret = WWT(&o);
+	WARN_ON(!ret);
+
+	WWL1(&o2);
+	WWU(&o2);
+	WWU(&o);
+}
+
+static void ww_test_try_try(void)
+{
+	bool ret;
+
+	ret = WWT(&o);
+	WARN_ON(!ret);
+	ret = WWT(&o2);
+	WARN_ON(!ret);
+	WWU(&o2);
+	WWU(&o);
+}
+
+static void ww_test_try_context(void)
+{
+	int ret;
+
+	ret = WWT(&o);
+	WARN_ON(!ret);
+
+	WWAI(&t);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret);
+}
+
+static void ww_test_block_block(void)
+{
+	WWL1(&o);
+	WWL1(&o2);
+}
+
+static void ww_test_block_try(void)
+{
+	bool ret;
+
+	WWL1(&o);
+	ret = WWT(&o2);
+	WARN_ON(!ret);
+}
+
+static void ww_test_block_context(void)
+{
+	int ret;
+
+	WWL1(&o);
+	WWAI(&t);
+
+	ret = WWL(&o2, &t);
+	WARN_ON(ret);
+}
+
+static void ww_test_spin_block(void)
+{
+	L(A);
+	U(A);
+
+	WWL1(&o);
+	L(A);
+	U(A);
+	WWU(&o);
+
+	L(A);
+	WWL1(&o);
+	WWU(&o);
+	U(A);
+}
+
+static void ww_test_spin_try(void)
+{
+	bool ret;
+
+	L(A);
+	U(A);
+
+	ret = WWT(&o);
+	WARN_ON(!ret);
+	L(A);
+	U(A);
+	WWU(&o);
+
+	L(A);
+	ret = WWT(&o);
+	WARN_ON(!ret);
+	WWU(&o);
+	U(A);
+}
+
+static void ww_test_spin_context(void)
+{
+	int ret;
+
+	L(A);
+	U(A);
+
+	WWAI(&t);
+
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+	L(A);
+	U(A);
+	WWU(&o);
+
+	L(A);
+	ret = WWL(&o, &t);
+	WARN_ON(ret);
+	WWU(&o);
+	U(A);
+}
+
+static void ww_tests(void)
+{
+	printk("  --------------------------------------------------------------------------\n");
+	printk("  | Wound/wait tests |\n");
+	printk("  ---------------------\n");
+
+	print_testname("ww api failures");
+	dotest(ww_test_fail_acquire, SUCCESS, LOCKTYPE_WW);
+	dotest(ww_test_normal, SUCCESS, LOCKTYPE_WW);
+	dotest(ww_test_unneeded_slow, FAILURE, LOCKTYPE_WW);
+	printk("\n");
+
+	print_testname("ww contexts mixing");
+	dotest(ww_test_two_contexts, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_diff_class, FAILURE, LOCKTYPE_WW);
+	printk("\n");
+
+	print_testname("finishing ww context");
+	dotest(ww_test_context_done_twice, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_context_unlock_twice, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_context_fini_early, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_context_lock_after_done, FAILURE, LOCKTYPE_WW);
+	printk("\n");
+
+	print_testname("locking mismatches");
+	dotest(ww_test_object_unlock_twice, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_object_lock_unbalanced, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_object_lock_stale_context, FAILURE, LOCKTYPE_WW);
+	printk("\n");
+
+	print_testname("EDEADLK handling");
+	dotest(ww_test_edeadlk_normal, SUCCESS, LOCKTYPE_WW);
+	dotest(ww_test_edeadlk_normal_slow, SUCCESS, LOCKTYPE_WW);
+	dotest(ww_test_edeadlk_no_unlock, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_edeadlk_no_unlock_slow, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_edeadlk_acquire_more, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_edeadlk_acquire_more_slow, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_edeadlk_acquire_more_edeadlk, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_edeadlk_acquire_more_edeadlk_slow, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_edeadlk_acquire_wrong, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_edeadlk_acquire_wrong_slow, FAILURE, LOCKTYPE_WW);
+	printk("\n");
+
+	print_testname("spinlock nest unlocked");
+	dotest(ww_test_spin_nest_unlocked, FAILURE, LOCKTYPE_WW);
+	printk("\n");
+
+	printk("  -----------------------------------------------------\n");
+	printk("                                 |block | try  |context|\n");
+	printk("  -----------------------------------------------------\n");
+
+	print_testname("context");
+	dotest(ww_test_context_block, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_context_try, SUCCESS, LOCKTYPE_WW);
+	dotest(ww_test_context_context, SUCCESS, LOCKTYPE_WW);
+	printk("\n");
+
+	print_testname("try");
+	dotest(ww_test_try_block, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_try_try, SUCCESS, LOCKTYPE_WW);
+	dotest(ww_test_try_context, FAILURE, LOCKTYPE_WW);
+	printk("\n");
+
+	print_testname("block");
+	dotest(ww_test_block_block, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_block_try, SUCCESS, LOCKTYPE_WW);
+	dotest(ww_test_block_context, FAILURE, LOCKTYPE_WW);
+	printk("\n");
+
+	print_testname("spinlock");
+	dotest(ww_test_spin_block, FAILURE, LOCKTYPE_WW);
+	dotest(ww_test_spin_try, SUCCESS, LOCKTYPE_WW);
+	dotest(ww_test_spin_context, FAILURE, LOCKTYPE_WW);
+	printk("\n");
+}

void locking_selftest(void)
{
···
	DO_TESTCASE_6x2("irq read-recursion", irq_read_recursion);
//	DO_TESTCASE_6x2B("irq read-recursion #2", irq_read_recursion2);
+
+	ww_tests();

	if (unexpected_testcase_failures) {
		printk("-----------------------------------------------------------------\n");