
hwspinlock/core: use a mutex to protect the radix tree

Since we're using non-atomic radix tree allocations, we
should be protecting the tree using a mutex and not a
spinlock.

Non-atomic allocations and process-context locking are good enough,
as the tree is manipulated only when locks are registered,
unregistered, requested or freed.

The locks themselves are still protected by spinlocks of course,
and mutexes are not involved in the locking/unlocking paths.

Cc: <stable@kernel.org>
Signed-off-by: Juan Gutierrez <jgutierrez@ti.com>
[ohad@wizery.com: rewrite the commit log, #include mutex.h, add minor
commentary]
[ohad@wizery.com: update register/unregister parts in hwspinlock.txt]
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>

Authored by Juan Gutierrez, committed by Ohad Ben-Cohen
93b465c2 c3c1250e

2 files changed: +27 -36

Documentation/hwspinlock.txt: +7 -11
@@ -39,23 +39,20 @@
    in case an unused hwspinlock isn't available. Users of this
    API will usually want to communicate the lock's id to the remote core
    before it can be used to achieve synchronization.
-   Can be called from an atomic context (this function will not sleep) but
-   not from within interrupt context.
+   Should be called from a process context (might sleep).
 
   struct hwspinlock *hwspin_lock_request_specific(unsigned int id);
    - assign a specific hwspinlock id and return its address, or NULL
      if that hwspinlock is already in use. Usually board code will
      be calling this function in order to reserve specific hwspinlock
      ids for predefined purposes.
-   Can be called from an atomic context (this function will not sleep) but
-   not from within interrupt context.
+   Should be called from a process context (might sleep).
 
   int hwspin_lock_free(struct hwspinlock *hwlock);
    - free a previously-assigned hwspinlock; returns 0 on success, or an
      appropriate error code on failure (e.g. -EINVAL if the hwspinlock
      is already free).
-   Can be called from an atomic context (this function will not sleep) but
-   not from within interrupt context.
+   Should be called from a process context (might sleep).
 
   int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);
    - lock a previously-assigned hwspinlock with a timeout limit (specified in
@@ -229,15 +232,14 @@
 
   int hwspin_lock_register(struct hwspinlock *hwlock);
    - to be called from the underlying platform-specific implementation, in
-     order to register a new hwspinlock instance. Can be called from an atomic
-     context (this function will not sleep) but not from within interrupt
-     context. Returns 0 on success, or appropriate error code on failure.
+     order to register a new hwspinlock instance. Should be called from
+     a process context (this function might sleep).
+     Returns 0 on success, or appropriate error code on failure.
 
   struct hwspinlock *hwspin_lock_unregister(unsigned int id);
    - to be called from the underlying vendor-specific implementation, in order
      to unregister an existing (and unused) hwspinlock instance.
-     Can be called from an atomic context (will not sleep) but not from
-     within interrupt context.
+     Should be called from a process context (this function might sleep).
      Returns the address of hwspinlock on success, or NULL on error (e.g.
      if the hwspinlock is sill in use).
 
drivers/hwspinlock/hwspinlock_core.c: +20 -25
@@ -26,6 +26,7 @@
 #include <linux/radix-tree.h>
 #include <linux/hwspinlock.h>
 #include <linux/pm_runtime.h>
+#include <linux/mutex.h>
 
 #include "hwspinlock_internal.h"
 
@@ -53,10 +52,12 @@
 static RADIX_TREE(hwspinlock_tree, GFP_KERNEL);
 
 /*
- * Synchronization of access to the tree is achieved using this spinlock,
+ * Synchronization of access to the tree is achieved using this mutex,
  * as the radix-tree API requires that users provide all synchronisation.
+ * A mutex is needed because we're using non-atomic radix tree allocations.
  */
-static DEFINE_SPINLOCK(hwspinlock_tree_lock);
+static DEFINE_MUTEX(hwspinlock_tree_lock);
+
 
 /**
  * __hwspin_trylock() - attempt to lock a specific hwspinlock
@@ -264,8 +261,7 @@
  * This function should be called from the underlying platform-specific
  * implementation, to register a new hwspinlock instance.
  *
- * Can be called from an atomic context (will not sleep) but not from
- * within interrupt context.
+ * Should be called from a process context (might sleep)
  *
  * Returns 0 on success, or an appropriate error code on failure
  */
@@ -281,7 +279,7 @@
 
 	spin_lock_init(&hwlock->lock);
 
-	spin_lock(&hwspinlock_tree_lock);
+	mutex_lock(&hwspinlock_tree_lock);
 
 	ret = radix_tree_insert(&hwspinlock_tree, hwlock->id, hwlock);
 	if (ret == -EEXIST)
@@ -297,7 +295,7 @@
 	WARN_ON(tmp != hwlock);
 
 out:
-	spin_unlock(&hwspinlock_tree_lock);
+	mutex_unlock(&hwspinlock_tree_lock);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(hwspin_lock_register);
@@ -309,8 +307,7 @@
  * This function should be called from the underlying platform-specific
  * implementation, to unregister an existing (and unused) hwspinlock.
  *
- * Can be called from an atomic context (will not sleep) but not from
- * within interrupt context.
+ * Should be called from a process context (might sleep)
  *
  * Returns the address of hwspinlock @id on success, or NULL on failure
  */
@@ -318,7 +317,7 @@
 	struct hwspinlock *hwlock = NULL;
 	int ret;
 
-	spin_lock(&hwspinlock_tree_lock);
+	mutex_lock(&hwspinlock_tree_lock);
 
 	/* make sure the hwspinlock is not in use (tag is set) */
 	ret = radix_tree_tag_get(&hwspinlock_tree, id, HWSPINLOCK_UNUSED);
@@ -334,7 +333,7 @@
 	}
 
 out:
-	spin_unlock(&hwspinlock_tree_lock);
+	mutex_unlock(&hwspinlock_tree_lock);
 	return hwlock;
 }
 EXPORT_SYMBOL_GPL(hwspin_lock_unregister);
@@ -403,9 +402,7 @@
  * to the remote core before it can be used for synchronization (to get the
  * id of a given hwlock, use hwspin_lock_get_id()).
  *
- * Can be called from an atomic context (will not sleep) but not from
- * within interrupt context (simply because there is no use case for
- * that yet).
+ * Should be called from a process context (might sleep)
  *
  * Returns the address of the assigned hwspinlock, or NULL on error
  */
@@ -412,7 +413,7 @@
 	struct hwspinlock *hwlock;
 	int ret;
 
-	spin_lock(&hwspinlock_tree_lock);
+	mutex_lock(&hwspinlock_tree_lock);
 
 	/* look for an unused lock */
 	ret = radix_tree_gang_lookup_tag(&hwspinlock_tree, (void **)&hwlock,
@@ -432,7 +433,7 @@
 	hwlock = NULL;
 
 out:
-	spin_unlock(&hwspinlock_tree_lock);
+	mutex_unlock(&hwspinlock_tree_lock);
 	return hwlock;
 }
 EXPORT_SYMBOL_GPL(hwspin_lock_request);
@@ -446,9 +447,7 @@
  * Usually early board code will be calling this function in order to
  * reserve specific hwspinlock ids for predefined purposes.
  *
- * Can be called from an atomic context (will not sleep) but not from
- * within interrupt context (simply because there is no use case for
- * that yet).
+ * Should be called from a process context (might sleep)
  *
  * Returns the address of the assigned hwspinlock, or NULL on error
 */
@@ -455,7 +458,7 @@
 	struct hwspinlock *hwlock;
 	int ret;
 
-	spin_lock(&hwspinlock_tree_lock);
+	mutex_lock(&hwspinlock_tree_lock);
 
 	/* make sure this hwspinlock exists */
 	hwlock = radix_tree_lookup(&hwspinlock_tree, id);
@@ -481,7 +484,7 @@
 	hwlock = NULL;
 
 out:
-	spin_unlock(&hwspinlock_tree_lock);
+	mutex_unlock(&hwspinlock_tree_lock);
 	return hwlock;
 }
 EXPORT_SYMBOL_GPL(hwspin_lock_request_specific);
@@ -494,9 +497,7 @@
  * Should only be called with an @hwlock that was retrieved from
  * an earlier call to omap_hwspin_lock_request{_specific}.
  *
- * Can be called from an atomic context (will not sleep) but not from
- * within interrupt context (simply because there is no use case for
- * that yet).
+ * Should be called from a process context (might sleep)
  *
  * Returns 0 on success, or an appropriate error code on failure
 */
@@ -508,7 +513,7 @@
 		return -EINVAL;
 	}
 
-	spin_lock(&hwspinlock_tree_lock);
+	mutex_lock(&hwspinlock_tree_lock);
 
 	/* make sure the hwspinlock is used */
 	ret = radix_tree_tag_get(&hwspinlock_tree, hwlock->id,
@@ -535,7 +540,7 @@
 	module_put(hwlock->dev->driver->owner);
 
 out:
-	spin_unlock(&hwspinlock_tree_lock);
+	mutex_unlock(&hwspinlock_tree_lock);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(hwspin_lock_free);