Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ohad/hwspinlock

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ohad/hwspinlock:
hwspinlock: add MAINTAINERS entries
hwspinlock/omap: omap_hwspinlock_remove should be __devexit
hwspinlock/u8500: add hwspinlock driver
hwspinlock/core: register a bank of hwspinlocks in a single API call
hwspinlock/core: remove stubs for register/unregister
hwspinlock/core: use a mutex to protect the radix tree
hwspinlock/core/omap: fix id issues on multiple hwspinlock devices
hwspinlock/omap: simplify allocation scheme
hwspinlock/core: simplify 'owner' handling
hwspinlock/core: simplify Kconfig

Fix up trivial conflicts (addition of omap_hwspinlock_pdata, removal of
omap_spinlock_latency) in arch/arm/mach-omap2/hwspinlock.c

Also, do an "evil merge" to fix a compile error in omap_hsmmc.c which
for some reason was reported in the same email thread as the "please
pull hwspinlock changes".

+517 -227
+45 -31
Documentation/hwspinlock.txt
··· 39 39 in case an unused hwspinlock isn't available. Users of this 40 40 API will usually want to communicate the lock's id to the remote core 41 41 before it can be used to achieve synchronization. 42 - Can be called from an atomic context (this function will not sleep) but 43 - not from within interrupt context. 42 + Should be called from a process context (might sleep). 44 43 45 44 struct hwspinlock *hwspin_lock_request_specific(unsigned int id); 46 45 - assign a specific hwspinlock id and return its address, or NULL 47 46 if that hwspinlock is already in use. Usually board code will 48 47 be calling this function in order to reserve specific hwspinlock 49 48 ids for predefined purposes. 50 - Can be called from an atomic context (this function will not sleep) but 51 - not from within interrupt context. 49 + Should be called from a process context (might sleep). 52 50 53 51 int hwspin_lock_free(struct hwspinlock *hwlock); 54 52 - free a previously-assigned hwspinlock; returns 0 on success, or an 55 53 appropriate error code on failure (e.g. -EINVAL if the hwspinlock 56 54 is already free). 57 - Can be called from an atomic context (this function will not sleep) but 58 - not from within interrupt context. 55 + Should be called from a process context (might sleep). 59 56 60 57 int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout); 61 58 - lock a previously-assigned hwspinlock with a timeout limit (specified in ··· 227 230 228 231 4. API for implementors 229 232 230 - int hwspin_lock_register(struct hwspinlock *hwlock); 233 + int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev, 234 + const struct hwspinlock_ops *ops, int base_id, int num_locks); 231 235 - to be called from the underlying platform-specific implementation, in 232 - order to register a new hwspinlock instance. Can be called from an atomic 233 - context (this function will not sleep) but not from within interrupt 234 - context. 
Returns 0 on success, or appropriate error code on failure. 236 + order to register a new hwspinlock device (which is usually a bank of 237 + numerous locks). Should be called from a process context (this function 238 + might sleep). 239 + Returns 0 on success, or appropriate error code on failure. 235 240 236 - struct hwspinlock *hwspin_lock_unregister(unsigned int id); 241 + int hwspin_lock_unregister(struct hwspinlock_device *bank); 237 242 - to be called from the underlying vendor-specific implementation, in order 238 - to unregister an existing (and unused) hwspinlock instance. 239 - Can be called from an atomic context (will not sleep) but not from 240 - within interrupt context. 243 + to unregister an hwspinlock device (which is usually a bank of numerous 244 + locks). 245 + Should be called from a process context (this function might sleep). 241 246 Returns 0 on success, or an appropriate error code on failure (e.g. 242 247 -EBUSY if the hwspinlock is still in use). 243 248 244 - 5. struct hwspinlock 249 + 5. Important structs 245 250 246 - This struct represents an hwspinlock instance. It is registered by the 247 - underlying hwspinlock implementation using the hwspin_lock_register() API. 251 + struct hwspinlock_device is a device which usually contains a bank 252 + of hardware locks. It is registered by the underlying hwspinlock 253 + implementation using the hwspin_lock_register() API. 248 254 249 255 /** 250 - * struct hwspinlock - vendor-specific hwspinlock implementation 251 - * 252 - * @dev: underlying device, will be used with runtime PM api 253 - * @ops: vendor-specific hwspinlock handlers 254 - * @id: a global, unique, system-wide, index of the lock. 
255 - * @lock: initialized and used by hwspinlock core 256 - * @owner: underlying implementation module, used to maintain module ref count 256 + * struct hwspinlock_device - a device which usually spans numerous hwspinlocks 257 + * @dev: underlying device, will be used to invoke runtime PM api 258 + * @ops: platform-specific hwspinlock handlers 259 + * @base_id: id index of the first lock in this device 260 + * @num_locks: number of locks in this device 261 + * @lock: dynamically allocated array of 'struct hwspinlock' 257 262 */ 258 - struct hwspinlock { 263 + struct hwspinlock_device { 259 264 struct device *dev; 260 265 const struct hwspinlock_ops *ops; 261 - int id; 262 - spinlock_t lock; 263 - struct module *owner; 266 + int base_id; 267 + int num_locks; 268 + struct hwspinlock lock[0]; 264 269 }; 265 270 266 - The underlying implementation is responsible to assign the dev, ops, id and 267 - owner members. The lock member, OTOH, is initialized and used by the hwspinlock 268 - core. 271 + struct hwspinlock_device contains an array of hwspinlock structs, each 272 + of which represents a single hardware lock: 273 + 274 + /** 275 + * struct hwspinlock - this struct represents a single hwspinlock instance 276 + * @bank: the hwspinlock_device structure which owns this lock 277 + * @lock: initialized and used by hwspinlock core 278 + * @priv: private data, owned by the underlying platform-specific hwspinlock drv 279 + */ 280 + struct hwspinlock { 281 + struct hwspinlock_device *bank; 282 + spinlock_t lock; 283 + void *priv; 284 + }; 285 + 286 + When registering a bank of locks, the hwspinlock driver only needs to 287 + set the priv members of the locks. The rest of the members are set and 288 + initialized by the hwspinlock core itself. 269 289 270 290 6. Implementation callbacks 271 291
+14
MAINTAINERS
··· 3018 3018 F: drivers/char/hw_random/ 3019 3019 F: include/linux/hw_random.h 3020 3020 3021 + HARDWARE SPINLOCK CORE 3022 + M: Ohad Ben-Cohen <ohad@wizery.com> 3023 + S: Maintained 3024 + F: Documentation/hwspinlock.txt 3025 + F: drivers/hwspinlock/hwspinlock_* 3026 + F: include/linux/hwspinlock.h 3027 + 3021 3028 HARMONY SOUND DRIVER 3022 3029 M: Kyle McMartin <kyle@mcmartin.ca> 3023 3030 L: linux-parisc@vger.kernel.org ··· 4720 4713 S: Maintained 4721 4714 F: drivers/video/omap2/ 4722 4715 F: Documentation/arm/OMAP/DSS 4716 + 4717 + OMAP HARDWARE SPINLOCK SUPPORT 4718 + M: Ohad Ben-Cohen <ohad@wizery.com> 4719 + L: linux-omap@vger.kernel.org 4720 + S: Maintained 4721 + F: drivers/hwspinlock/omap_hwspinlock.c 4722 + F: arch/arm/mach-omap2/hwspinlock.c 4723 4723 4724 4724 OMAP MMC SUPPORT 4725 4725 M: Jarkko Lavinen <jarkko.lavinen@nokia.com>
+8 -1
arch/arm/mach-omap2/hwspinlock.c
··· 19 19 #include <linux/kernel.h> 20 20 #include <linux/init.h> 21 21 #include <linux/err.h> 22 + #include <linux/hwspinlock.h> 22 23 23 24 #include <plat/omap_hwmod.h> 24 25 #include <plat/omap_device.h> 26 + 27 + static struct hwspinlock_pdata omap_hwspinlock_pdata __initdata = { 28 + .base_id = 0, 29 + }; 25 30 26 31 int __init hwspinlocks_init(void) 27 32 { ··· 45 40 if (oh == NULL) 46 41 return -EINVAL; 47 42 48 - pdev = omap_device_build(dev_name, 0, oh, NULL, 0, NULL, 0, false); 43 + pdev = omap_device_build(dev_name, 0, oh, &omap_hwspinlock_pdata, 44 + sizeof(struct hwspinlock_pdata), 45 + NULL, 0, false); 49 46 if (IS_ERR(pdev)) { 50 47 pr_err("Can't build omap_device for %s:%s\n", dev_name, 51 48 oh_name);
+18 -9
drivers/hwspinlock/Kconfig
··· 2 2 # Generic HWSPINLOCK framework 3 3 # 4 4 5 + # HWSPINLOCK always gets selected by whoever wants it. 5 6 config HWSPINLOCK 6 - tristate "Generic Hardware Spinlock framework" 7 - depends on ARCH_OMAP4 8 - help 9 - Say y here to support the generic hardware spinlock framework. 10 - You only need to enable this if you have hardware spinlock module 11 - on your system (usually only relevant if your system has remote slave 12 - coprocessors). 7 + tristate 13 8 14 - If unsure, say N. 9 + menu "Hardware Spinlock drivers" 15 10 16 11 config HWSPINLOCK_OMAP 17 12 tristate "OMAP Hardware Spinlock device" 18 - depends on HWSPINLOCK && ARCH_OMAP4 13 + depends on ARCH_OMAP4 14 + select HWSPINLOCK 19 15 help 20 16 Say y here to support the OMAP Hardware Spinlock device (firstly 21 17 introduced in OMAP4). 22 18 23 19 If unsure, say N. 20 + 21 + config HSEM_U8500 22 + tristate "STE Hardware Semaphore functionality" 23 + depends on ARCH_U8500 24 + select HWSPINLOCK 25 + help 26 + Say y here to support the STE Hardware Semaphore functionality, which 27 + provides a synchronisation mechanism for the various processors on the 28 + SoC. 29 + 30 + If unsure, say N. 31 + 32 + endmenu
+1
drivers/hwspinlock/Makefile
··· 4 4 5 5 obj-$(CONFIG_HWSPINLOCK) += hwspinlock_core.o 6 6 obj-$(CONFIG_HWSPINLOCK_OMAP) += omap_hwspinlock.o 7 + obj-$(CONFIG_HSEM_U8500) += u8500_hsem.o
+128 -78
drivers/hwspinlock/hwspinlock_core.c
··· 26 26 #include <linux/radix-tree.h> 27 27 #include <linux/hwspinlock.h> 28 28 #include <linux/pm_runtime.h> 29 + #include <linux/mutex.h> 29 30 30 31 #include "hwspinlock_internal.h" 31 32 ··· 53 52 static RADIX_TREE(hwspinlock_tree, GFP_KERNEL); 54 53 55 54 /* 56 - * Synchronization of access to the tree is achieved using this spinlock, 55 + * Synchronization of access to the tree is achieved using this mutex, 57 56 * as the radix-tree API requires that users provide all synchronisation. 57 + * A mutex is needed because we're using non-atomic radix tree allocations. 58 58 */ 59 - static DEFINE_SPINLOCK(hwspinlock_tree_lock); 59 + static DEFINE_MUTEX(hwspinlock_tree_lock); 60 + 60 61 61 62 /** 62 63 * __hwspin_trylock() - attempt to lock a specific hwspinlock ··· 117 114 return -EBUSY; 118 115 119 116 /* try to take the hwspinlock device */ 120 - ret = hwlock->ops->trylock(hwlock); 117 + ret = hwlock->bank->ops->trylock(hwlock); 121 118 122 119 /* if hwlock is already taken, undo spin_trylock_* and exit */ 123 120 if (!ret) { ··· 199 196 * Allow platform-specific relax handlers to prevent 200 197 * hogging the interconnect (no sleeping, though) 201 198 */ 202 - if (hwlock->ops->relax) 203 - hwlock->ops->relax(hwlock); 199 + if (hwlock->bank->ops->relax) 200 + hwlock->bank->ops->relax(hwlock); 204 201 } 205 202 206 203 return ret; ··· 245 242 */ 246 243 mb(); 247 244 248 - hwlock->ops->unlock(hwlock); 245 + hwlock->bank->ops->unlock(hwlock); 249 246 250 247 /* Undo the spin_trylock{_irq, _irqsave} called while locking */ 251 248 if (mode == HWLOCK_IRQSTATE) ··· 257 254 } 258 255 EXPORT_SYMBOL_GPL(__hwspin_unlock); 259 256 260 - /** 261 - * hwspin_lock_register() - register a new hw spinlock 262 - * @hwlock: hwspinlock to register. 263 - * 264 - * This function should be called from the underlying platform-specific 265 - * implementation, to register a new hwspinlock instance. 
266 - * 267 - * Can be called from an atomic context (will not sleep) but not from 268 - * within interrupt context. 269 - * 270 - * Returns 0 on success, or an appropriate error code on failure 271 - */ 272 - int hwspin_lock_register(struct hwspinlock *hwlock) 257 + static int hwspin_lock_register_single(struct hwspinlock *hwlock, int id) 273 258 { 274 259 struct hwspinlock *tmp; 275 260 int ret; 276 261 277 - if (!hwlock || !hwlock->ops || 278 - !hwlock->ops->trylock || !hwlock->ops->unlock) { 279 - pr_err("invalid parameters\n"); 280 - return -EINVAL; 262 + mutex_lock(&hwspinlock_tree_lock); 263 + 264 + ret = radix_tree_insert(&hwspinlock_tree, id, hwlock); 265 + if (ret) { 266 + if (ret == -EEXIST) 267 + pr_err("hwspinlock id %d already exists!\n", id); 268 + goto out; 281 269 } 282 270 283 - spin_lock_init(&hwlock->lock); 284 - 285 - spin_lock(&hwspinlock_tree_lock); 286 - 287 - ret = radix_tree_insert(&hwspinlock_tree, hwlock->id, hwlock); 288 - if (ret) 289 - goto out; 290 - 291 271 /* mark this hwspinlock as available */ 292 - tmp = radix_tree_tag_set(&hwspinlock_tree, hwlock->id, 293 - HWSPINLOCK_UNUSED); 272 + tmp = radix_tree_tag_set(&hwspinlock_tree, id, HWSPINLOCK_UNUSED); 294 273 295 274 /* self-sanity check which should never fail */ 296 275 WARN_ON(tmp != hwlock); 297 276 298 277 out: 299 - spin_unlock(&hwspinlock_tree_lock); 300 - return ret; 278 + mutex_unlock(&hwspinlock_tree_lock); 279 + return 0; 301 280 } 302 - EXPORT_SYMBOL_GPL(hwspin_lock_register); 303 281 304 - /** 305 - * hwspin_lock_unregister() - unregister an hw spinlock 306 - * @id: index of the specific hwspinlock to unregister 307 - * 308 - * This function should be called from the underlying platform-specific 309 - * implementation, to unregister an existing (and unused) hwspinlock. 310 - * 311 - * Can be called from an atomic context (will not sleep) but not from 312 - * within interrupt context. 
313 - * 314 - * Returns the address of hwspinlock @id on success, or NULL on failure 315 - */ 316 - struct hwspinlock *hwspin_lock_unregister(unsigned int id) 282 + static struct hwspinlock *hwspin_lock_unregister_single(unsigned int id) 317 283 { 318 284 struct hwspinlock *hwlock = NULL; 319 285 int ret; 320 286 321 - spin_lock(&hwspinlock_tree_lock); 287 + mutex_lock(&hwspinlock_tree_lock); 322 288 323 289 /* make sure the hwspinlock is not in use (tag is set) */ 324 290 ret = radix_tree_tag_get(&hwspinlock_tree, id, HWSPINLOCK_UNUSED); ··· 303 331 } 304 332 305 333 out: 306 - spin_unlock(&hwspinlock_tree_lock); 334 + mutex_unlock(&hwspinlock_tree_lock); 307 335 return hwlock; 336 + } 337 + 338 + /** 339 + * hwspin_lock_register() - register a new hw spinlock device 340 + * @bank: the hwspinlock device, which usually provides numerous hw locks 341 + * @dev: the backing device 342 + * @ops: hwspinlock handlers for this device 343 + * @base_id: id of the first hardware spinlock in this bank 344 + * @num_locks: number of hwspinlocks provided by this device 345 + * 346 + * This function should be called from the underlying platform-specific 347 + * implementation, to register a new hwspinlock device instance. 
348 + * 349 + * Should be called from a process context (might sleep) 350 + * 351 + * Returns 0 on success, or an appropriate error code on failure 352 + */ 353 + int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev, 354 + const struct hwspinlock_ops *ops, int base_id, int num_locks) 355 + { 356 + struct hwspinlock *hwlock; 357 + int ret = 0, i; 358 + 359 + if (!bank || !ops || !dev || !num_locks || !ops->trylock || 360 + !ops->unlock) { 361 + pr_err("invalid parameters\n"); 362 + return -EINVAL; 363 + } 364 + 365 + bank->dev = dev; 366 + bank->ops = ops; 367 + bank->base_id = base_id; 368 + bank->num_locks = num_locks; 369 + 370 + for (i = 0; i < num_locks; i++) { 371 + hwlock = &bank->lock[i]; 372 + 373 + spin_lock_init(&hwlock->lock); 374 + hwlock->bank = bank; 375 + 376 + ret = hwspin_lock_register_single(hwlock, i); 377 + if (ret) 378 + goto reg_failed; 379 + } 380 + 381 + return 0; 382 + 383 + reg_failed: 384 + while (--i >= 0) 385 + hwspin_lock_unregister_single(i); 386 + return ret; 387 + } 388 + EXPORT_SYMBOL_GPL(hwspin_lock_register); 389 + 390 + /** 391 + * hwspin_lock_unregister() - unregister an hw spinlock device 392 + * @bank: the hwspinlock device, which usually provides numerous hw locks 393 + * 394 + * This function should be called from the underlying platform-specific 395 + * implementation, to unregister an existing (and unused) hwspinlock. 
396 + * 397 + * Should be called from a process context (might sleep) 398 + * 399 + * Returns 0 on success, or an appropriate error code on failure 400 + */ 401 + int hwspin_lock_unregister(struct hwspinlock_device *bank) 402 + { 403 + struct hwspinlock *hwlock, *tmp; 404 + int i; 405 + 406 + for (i = 0; i < bank->num_locks; i++) { 407 + hwlock = &bank->lock[i]; 408 + 409 + tmp = hwspin_lock_unregister_single(bank->base_id + i); 410 + if (!tmp) 411 + return -EBUSY; 412 + 413 + /* self-sanity check that should never fail */ 414 + WARN_ON(tmp != hwlock); 415 + } 416 + 417 + return 0; 308 418 } 309 419 EXPORT_SYMBOL_GPL(hwspin_lock_unregister); 310 420 ··· 402 348 */ 403 349 static int __hwspin_lock_request(struct hwspinlock *hwlock) 404 350 { 351 + struct device *dev = hwlock->bank->dev; 405 352 struct hwspinlock *tmp; 406 353 int ret; 407 354 408 355 /* prevent underlying implementation from being removed */ 409 - if (!try_module_get(hwlock->owner)) { 410 - dev_err(hwlock->dev, "%s: can't get owner\n", __func__); 356 + if (!try_module_get(dev->driver->owner)) { 357 + dev_err(dev, "%s: can't get owner\n", __func__); 411 358 return -EINVAL; 412 359 } 413 360 414 361 /* notify PM core that power is now needed */ 415 - ret = pm_runtime_get_sync(hwlock->dev); 362 + ret = pm_runtime_get_sync(dev); 416 363 if (ret < 0) { 417 - dev_err(hwlock->dev, "%s: can't power on device\n", __func__); 364 + dev_err(dev, "%s: can't power on device\n", __func__); 418 365 return ret; 419 366 } 420 367 421 368 /* mark hwspinlock as used, should not fail */ 422 - tmp = radix_tree_tag_clear(&hwspinlock_tree, hwlock->id, 369 + tmp = radix_tree_tag_clear(&hwspinlock_tree, hwlock_to_id(hwlock), 423 370 HWSPINLOCK_UNUSED); 424 371 425 372 /* self-sanity check that should never fail */ ··· 442 387 return -EINVAL; 443 388 } 444 389 445 - return hwlock->id; 390 + return hwlock_to_id(hwlock); 446 391 } 447 392 EXPORT_SYMBOL_GPL(hwspin_lock_get_id); 448 393 ··· 455 400 * to the remote core before it 
can be used for synchronization (to get the 456 401 * id of a given hwlock, use hwspin_lock_get_id()). 457 402 * 458 - * Can be called from an atomic context (will not sleep) but not from 459 - * within interrupt context (simply because there is no use case for 460 - * that yet). 403 + * Should be called from a process context (might sleep) 461 404 * 462 405 * Returns the address of the assigned hwspinlock, or NULL on error 463 406 */ ··· 464 411 struct hwspinlock *hwlock; 465 412 int ret; 466 413 467 - spin_lock(&hwspinlock_tree_lock); 414 + mutex_lock(&hwspinlock_tree_lock); 468 415 469 416 /* look for an unused lock */ 470 417 ret = radix_tree_gang_lookup_tag(&hwspinlock_tree, (void **)&hwlock, ··· 484 431 hwlock = NULL; 485 432 486 433 out: 487 - spin_unlock(&hwspinlock_tree_lock); 434 + mutex_unlock(&hwspinlock_tree_lock); 488 435 return hwlock; 489 436 } 490 437 EXPORT_SYMBOL_GPL(hwspin_lock_request); ··· 498 445 * Usually early board code will be calling this function in order to 499 446 * reserve specific hwspinlock ids for predefined purposes. 500 447 * 501 - * Can be called from an atomic context (will not sleep) but not from 502 - * within interrupt context (simply because there is no use case for 503 - * that yet). 
448 + Should be called from a process context (might sleep) 504 449 * 505 450 * Returns the address of the assigned hwspinlock, or NULL on error 506 451 */ 507 456 struct hwspinlock *hwlock; 508 457 int ret; 509 458 510 - spin_lock(&hwspinlock_tree_lock); 459 + mutex_lock(&hwspinlock_tree_lock); 511 460 512 461 /* make sure this hwspinlock exists */ 513 462 hwlock = radix_tree_lookup(&hwspinlock_tree, id); ··· 517 466 } 518 467 519 468 /* sanity check (this shouldn't happen) */ 520 - WARN_ON(hwlock->id != id); 469 + WARN_ON(hwlock_to_id(hwlock) != id); 521 470 522 471 /* make sure this hwspinlock is unused */ 523 472 ret = radix_tree_tag_get(&hwspinlock_tree, id, HWSPINLOCK_UNUSED); ··· 533 482 hwlock = NULL; 534 483 535 484 out: 536 - spin_unlock(&hwspinlock_tree_lock); 485 + mutex_unlock(&hwspinlock_tree_lock); 537 486 return hwlock; 538 487 } 539 488 EXPORT_SYMBOL_GPL(hwspin_lock_request_specific); ··· 546 495 * Should only be called with an @hwlock that was retrieved from 547 496 * an earlier call to hwspin_lock_request{_specific}. 548 497 * 549 - * Can be called from an atomic context (will not sleep) but not from 550 - * within interrupt context (simply because there is no use case for 551 - * that yet). 
498 + * Should be called from a process context (might sleep) 552 499 * 553 500 * Returns 0 on success, or an appropriate error code on failure 554 501 */ 555 502 int hwspin_lock_free(struct hwspinlock *hwlock) 556 503 { 504 + struct device *dev = hwlock->bank->dev; 557 505 struct hwspinlock *tmp; 558 506 int ret; 559 507 ··· 561 511 return -EINVAL; 562 512 } 563 513 564 - spin_lock(&hwspinlock_tree_lock); 514 + mutex_lock(&hwspinlock_tree_lock); 565 515 566 516 /* make sure the hwspinlock is used */ 567 - ret = radix_tree_tag_get(&hwspinlock_tree, hwlock->id, 517 + ret = radix_tree_tag_get(&hwspinlock_tree, hwlock_to_id(hwlock), 568 518 HWSPINLOCK_UNUSED); 569 519 if (ret == 1) { 570 - dev_err(hwlock->dev, "%s: hwlock is already free\n", __func__); 520 + dev_err(dev, "%s: hwlock is already free\n", __func__); 571 521 dump_stack(); 572 522 ret = -EINVAL; 573 523 goto out; 574 524 } 575 525 576 526 /* notify the underlying device that power is not needed */ 577 - ret = pm_runtime_put(hwlock->dev); 527 + ret = pm_runtime_put(dev); 578 528 if (ret < 0) 579 529 goto out; 580 530 581 531 /* mark this hwspinlock as available */ 582 - tmp = radix_tree_tag_set(&hwspinlock_tree, hwlock->id, 532 + tmp = radix_tree_tag_set(&hwspinlock_tree, hwlock_to_id(hwlock), 583 533 HWSPINLOCK_UNUSED); 584 534 585 535 /* sanity check (this shouldn't happen) */ 586 536 WARN_ON(tmp != hwlock); 587 537 588 - module_put(hwlock->owner); 538 + module_put(dev->driver->owner); 589 539 590 540 out: 591 - spin_unlock(&hwspinlock_tree_lock); 541 + mutex_unlock(&hwspinlock_tree_lock); 592 542 return ret; 593 543 } 594 544 EXPORT_SYMBOL_GPL(hwspin_lock_free);
+28 -12
drivers/hwspinlock/hwspinlock_internal.h
··· 21 21 #include <linux/spinlock.h> 22 22 #include <linux/device.h> 23 23 24 + struct hwspinlock_device; 25 + 24 26 /** 25 27 * struct hwspinlock_ops - platform-specific hwspinlock handlers 26 28 * ··· 41 39 42 40 /** 43 41 * struct hwspinlock - this struct represents a single hwspinlock instance 44 - * 45 - * @dev: underlying device, will be used to invoke runtime PM api 46 - * @ops: platform-specific hwspinlock handlers 47 - * @id: a global, unique, system-wide, index of the lock. 42 + * @bank: the hwspinlock_device structure which owns this lock 48 43 * @lock: initialized and used by hwspinlock core 49 - * @owner: underlying implementation module, used to maintain module ref count 50 - * 51 - * Note: currently simplicity was opted for, but later we can squeeze some 52 - * memory bytes by grouping the dev, ops and owner members in a single 53 - * per-platform struct, and have all hwspinlocks point at it. 44 + * @priv: private data, owned by the underlying platform-specific hwspinlock drv 54 45 */ 55 46 struct hwspinlock { 47 + struct hwspinlock_device *bank; 48 + spinlock_t lock; 49 + void *priv; 50 + }; 51 + 52 + /** 53 + * struct hwspinlock_device - a device which usually spans numerous hwspinlocks 54 + * @dev: underlying device, will be used to invoke runtime PM api 55 + * @ops: platform-specific hwspinlock handlers 56 + * @base_id: id index of the first lock in this device 57 + * @num_locks: number of locks in this device 58 + * @lock: dynamically allocated array of 'struct hwspinlock' 59 + */ 60 + struct hwspinlock_device { 56 61 struct device *dev; 57 62 const struct hwspinlock_ops *ops; 58 - int id; 59 - spinlock_t lock; 60 - struct module *owner; 63 + int base_id; 64 + int num_locks; 65 + struct hwspinlock lock[0]; 61 66 }; 67 + 68 + static inline int hwlock_to_id(struct hwspinlock *hwlock) 69 + { 70 + int local_id = hwlock - &hwlock->bank->lock[0]; 71 + 72 + return hwlock->bank->base_id + local_id; 73 + } 62 74 63 75 #endif /* __HWSPINLOCK_HWSPINLOCK_H 
*/
+42 -83
drivers/hwspinlock/omap_hwspinlock.c
··· 41 41 #define SPINLOCK_NOTTAKEN (0) /* free */ 42 42 #define SPINLOCK_TAKEN (1) /* locked */ 43 43 44 - #define to_omap_hwspinlock(lock) \ 45 - container_of(lock, struct omap_hwspinlock, lock) 46 - 47 - struct omap_hwspinlock { 48 - struct hwspinlock lock; 49 - void __iomem *addr; 50 - }; 51 - 52 - struct omap_hwspinlock_state { 53 - int num_locks; /* Total number of locks in system */ 54 - void __iomem *io_base; /* Mapped base address */ 55 - }; 56 - 57 44 static int omap_hwspinlock_trylock(struct hwspinlock *lock) 58 45 { 59 - struct omap_hwspinlock *omap_lock = to_omap_hwspinlock(lock); 46 + void __iomem *lock_addr = lock->priv; 60 47 61 48 /* attempt to acquire the lock by reading its value */ 62 - return (SPINLOCK_NOTTAKEN == readl(omap_lock->addr)); 49 + return (SPINLOCK_NOTTAKEN == readl(lock_addr)); 63 50 } 64 51 65 52 static void omap_hwspinlock_unlock(struct hwspinlock *lock) 66 53 { 67 - struct omap_hwspinlock *omap_lock = to_omap_hwspinlock(lock); 54 + void __iomem *lock_addr = lock->priv; 68 55 69 56 /* release the lock by writing 0 to it */ 70 - writel(SPINLOCK_NOTTAKEN, omap_lock->addr); 57 + writel(SPINLOCK_NOTTAKEN, lock_addr); 71 58 } 72 59 73 60 /* ··· 80 93 81 94 static int __devinit omap_hwspinlock_probe(struct platform_device *pdev) 82 95 { 83 - struct omap_hwspinlock *omap_lock; 84 - struct omap_hwspinlock_state *state; 85 - struct hwspinlock *lock; 96 + struct hwspinlock_pdata *pdata = pdev->dev.platform_data; 97 + struct hwspinlock_device *bank; 98 + struct hwspinlock *hwlock; 86 99 struct resource *res; 87 100 void __iomem *io_base; 88 - int i, ret; 101 + int num_locks, i, ret; 102 + 103 + if (!pdata) 104 + return -ENODEV; 89 105 90 106 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 91 107 if (!res) 92 108 return -ENODEV; 93 109 94 - state = kzalloc(sizeof(*state), GFP_KERNEL); 95 - if (!state) 96 - return -ENOMEM; 97 - 98 110 io_base = ioremap(res->start, resource_size(res)); 99 - if (!io_base) { 100 - ret = -ENOMEM; 101 - goto 
free_state; 102 - } 111 + if (!io_base) 112 + return -ENOMEM; 103 113 104 114 /* Determine number of locks */ 105 115 i = readl(io_base + SYSSTATUS_OFFSET); ··· 108 124 goto iounmap_base; 109 125 } 110 126 111 - state->num_locks = i * 32; 112 - state->io_base = io_base; 127 + num_locks = i * 32; /* actual number of locks in this device */ 113 128 114 - platform_set_drvdata(pdev, state); 129 + bank = kzalloc(sizeof(*bank) + num_locks * sizeof(*hwlock), GFP_KERNEL); 130 + if (!bank) { 131 + ret = -ENOMEM; 132 + goto iounmap_base; 133 + } 134 + 135 + platform_set_drvdata(pdev, bank); 136 + 137 + for (i = 0, hwlock = &bank->lock[0]; i < num_locks; i++, hwlock++) 138 + hwlock->priv = io_base + LOCK_BASE_OFFSET + sizeof(u32) * i; 115 139 116 140 /* 117 141 * runtime PM will make sure the clock of this module is ··· 127 135 */ 128 136 pm_runtime_enable(&pdev->dev); 129 137 130 - for (i = 0; i < state->num_locks; i++) { 131 - omap_lock = kzalloc(sizeof(*omap_lock), GFP_KERNEL); 132 - if (!omap_lock) { 133 - ret = -ENOMEM; 134 - goto free_locks; 135 - } 136 - 137 - omap_lock->lock.dev = &pdev->dev; 138 - omap_lock->lock.owner = THIS_MODULE; 139 - omap_lock->lock.id = i; 140 - omap_lock->lock.ops = &omap_hwspinlock_ops; 141 - omap_lock->addr = io_base + LOCK_BASE_OFFSET + sizeof(u32) * i; 142 - 143 - ret = hwspin_lock_register(&omap_lock->lock); 144 - if (ret) { 145 - kfree(omap_lock); 146 - goto free_locks; 147 - } 148 - } 138 + ret = hwspin_lock_register(bank, &pdev->dev, &omap_hwspinlock_ops, 139 + pdata->base_id, num_locks); 140 + if (ret) 141 + goto reg_fail; 149 142 150 143 return 0; 151 144 152 - free_locks: 153 - while (--i >= 0) { 154 - lock = hwspin_lock_unregister(i); 155 - /* this should't happen, but let's give our best effort */ 156 - if (!lock) { 157 - dev_err(&pdev->dev, "%s: cleanups failed\n", __func__); 158 - continue; 159 - } 160 - omap_lock = to_omap_hwspinlock(lock); 161 - kfree(omap_lock); 162 - } 145 + reg_fail: 163 146 pm_runtime_disable(&pdev->dev); 
147 + kfree(bank); 164 148 iounmap_base: 165 149 iounmap(io_base); 166 - free_state: 167 - kfree(state); 168 150 return ret; 169 151 } 170 152 171 - static int omap_hwspinlock_remove(struct platform_device *pdev) 153 + static int __devexit omap_hwspinlock_remove(struct platform_device *pdev) 172 154 { 173 - struct omap_hwspinlock_state *state = platform_get_drvdata(pdev); 174 - struct hwspinlock *lock; 175 - struct omap_hwspinlock *omap_lock; 176 - int i; 155 + struct hwspinlock_device *bank = platform_get_drvdata(pdev); 156 + void __iomem *io_base = bank->lock[0].priv - LOCK_BASE_OFFSET; 157 + int ret; 177 158 178 - for (i = 0; i < state->num_locks; i++) { 179 - lock = hwspin_lock_unregister(i); 180 - /* this shouldn't happen at this point. if it does, at least 181 - * don't continue with the remove */ 182 - if (!lock) { 183 - dev_err(&pdev->dev, "%s: failed on %d\n", __func__, i); 184 - return -EBUSY; 185 - } 186 - 187 - omap_lock = to_omap_hwspinlock(lock); 188 - kfree(omap_lock); 159 + ret = hwspin_lock_unregister(bank); 160 + if (ret) { 161 + dev_err(&pdev->dev, "%s failed: %d\n", __func__, ret); 162 + return ret; 189 163 } 190 164 191 165 pm_runtime_disable(&pdev->dev); 192 - iounmap(state->io_base); 193 - kfree(state); 166 + iounmap(io_base); 167 + kfree(bank); 194 168 195 169 return 0; 196 170 } 197 171 198 172 static struct platform_driver omap_hwspinlock_driver = { 199 173 .probe = omap_hwspinlock_probe, 200 - .remove = omap_hwspinlock_remove, 174 + .remove = __devexit_p(omap_hwspinlock_remove), 201 175 .driver = { 202 176 .name = "omap_hwspinlock", 177 + .owner = THIS_MODULE, 203 178 }, 204 179 }; 205 180
+198
drivers/hwspinlock/u8500_hsem.c
··· 1 + /* 2 + * u8500 HWSEM driver 3 + * 4 + * Copyright (C) 2010-2011 ST-Ericsson 5 + * 6 + * Implements u8500 semaphore handling for protocol 1, no interrupts. 7 + * 8 + * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 9 + * Heavily borrowed from the work of: 10 + * Simon Que <sque@ti.com> 11 + * Hari Kanigeri <h-kanigeri2@ti.com> 12 + * Ohad Ben-Cohen <ohad@wizery.com> 13 + * 14 + * This program is free software; you can redistribute it and/or 15 + * modify it under the terms of the GNU General Public License 16 + * version 2 as published by the Free Software Foundation. 17 + * 18 + * This program is distributed in the hope that it will be useful, but 19 + * WITHOUT ANY WARRANTY; without even the implied warranty of 20 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 21 + * General Public License for more details. 22 + */ 23 + 24 + #include <linux/delay.h> 25 + #include <linux/io.h> 26 + #include <linux/pm_runtime.h> 27 + #include <linux/slab.h> 28 + #include <linux/spinlock.h> 29 + #include <linux/hwspinlock.h> 30 + #include <linux/platform_device.h> 31 + 32 + #include "hwspinlock_internal.h" 33 + 34 + /* 35 + * Implementation of STE's HSem protocol 1 without interrupts. 36 + * The only masterID we allow is '0x01' to force people to use 37 + * HSems for synchronisation between processors rather than processes 38 + * on the ARM core. 39 + */ 40 + 41 + #define U8500_MAX_SEMAPHORE 32 /* a total of 32 semaphores */ 42 + #define RESET_SEMAPHORE (0) /* free */ 43 + 44 + /* 45 + * CPU ID for master running u8500 kernel. 46 + * Hwspinlocks should only be used to synchronise operations 47 + * between the Cortex A9 core and the other CPUs. Hence 48 + * forcing the masterID to a preset value. 
49 + */ 50 + #define HSEM_MASTER_ID 0x01 51 + 52 + #define HSEM_REGISTER_OFFSET 0x08 53 + 54 + #define HSEM_CTRL_REG 0x00 55 + #define HSEM_ICRALL 0x90 56 + #define HSEM_PROTOCOL_1 0x01 57 + 58 + static int u8500_hsem_trylock(struct hwspinlock *lock) 59 + { 60 + void __iomem *lock_addr = lock->priv; 61 + 62 + writel(HSEM_MASTER_ID, lock_addr); 63 + 64 + /* get only first 4 bits and compare to masterID. 65 + * if equal, we have the semaphore, otherwise 66 + * someone else has it. 67 + */ 68 + return (HSEM_MASTER_ID == (0x0F & readl(lock_addr))); 69 + } 70 + 71 + static void u8500_hsem_unlock(struct hwspinlock *lock) 72 + { 73 + void __iomem *lock_addr = lock->priv; 74 + 75 + /* release the lock by writing 0 to it */ 76 + writel(RESET_SEMAPHORE, lock_addr); 77 + } 78 + 79 + /* 80 + * u8500: what value is recommended here? 81 + */ 82 + static void u8500_hsem_relax(struct hwspinlock *lock) 83 + { 84 + ndelay(50); 85 + } 86 + 87 + static const struct hwspinlock_ops u8500_hwspinlock_ops = { 88 + .trylock = u8500_hsem_trylock, 89 + .unlock = u8500_hsem_unlock, 90 + .relax = u8500_hsem_relax, 91 + }; 92 + 93 + static int __devinit u8500_hsem_probe(struct platform_device *pdev) 94 + { 95 + struct hwspinlock_pdata *pdata = pdev->dev.platform_data; 96 + struct hwspinlock_device *bank; 97 + struct hwspinlock *hwlock; 98 + struct resource *res; 99 + void __iomem *io_base; 100 + int i, ret, num_locks = U8500_MAX_SEMAPHORE; 101 + ulong val; 102 + 103 + if (!pdata) 104 + return -ENODEV; 105 + 106 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 107 + if (!res) 108 + return -ENODEV; 109 + 110 + io_base = ioremap(res->start, resource_size(res)); 111 + if (!io_base) { 112 + ret = -ENOMEM; 113 + return ret; 114 + } 115 + 116 + /* make sure protocol 1 is selected */ 117 + val = readl(io_base + HSEM_CTRL_REG); 118 + writel((val & ~HSEM_PROTOCOL_1), io_base + HSEM_CTRL_REG); 119 + 120 + /* clear all interrupts */ 121 + writel(0xFFFF, io_base + HSEM_ICRALL); 122 + 123 + bank = 
kzalloc(sizeof(*bank) + num_locks * sizeof(*hwlock), GFP_KERNEL); 124 + if (!bank) { 125 + ret = -ENOMEM; 126 + goto iounmap_base; 127 + } 128 + 129 + platform_set_drvdata(pdev, bank); 130 + 131 + for (i = 0, hwlock = &bank->lock[0]; i < num_locks; i++, hwlock++) 132 + hwlock->priv = io_base + HSEM_REGISTER_OFFSET + sizeof(u32) * i; 133 + 134 + /* no pm needed for HSem but required to comply with hwspilock core */ 135 + pm_runtime_enable(&pdev->dev); 136 + 137 + ret = hwspin_lock_register(bank, &pdev->dev, &u8500_hwspinlock_ops, 138 + pdata->base_id, num_locks); 139 + if (ret) 140 + goto reg_fail; 141 + 142 + return 0; 143 + 144 + reg_fail: 145 + pm_runtime_disable(&pdev->dev); 146 + kfree(bank); 147 + iounmap_base: 148 + iounmap(io_base); 149 + return ret; 150 + } 151 + 152 + static int __devexit u8500_hsem_remove(struct platform_device *pdev) 153 + { 154 + struct hwspinlock_device *bank = platform_get_drvdata(pdev); 155 + void __iomem *io_base = bank->lock[0].priv - HSEM_REGISTER_OFFSET; 156 + int ret; 157 + 158 + /* clear all interrupts */ 159 + writel(0xFFFF, io_base + HSEM_ICRALL); 160 + 161 + ret = hwspin_lock_unregister(bank); 162 + if (ret) { 163 + dev_err(&pdev->dev, "%s failed: %d\n", __func__, ret); 164 + return ret; 165 + } 166 + 167 + pm_runtime_disable(&pdev->dev); 168 + iounmap(io_base); 169 + kfree(bank); 170 + 171 + return 0; 172 + } 173 + 174 + static struct platform_driver u8500_hsem_driver = { 175 + .probe = u8500_hsem_probe, 176 + .remove = __devexit_p(u8500_hsem_remove), 177 + .driver = { 178 + .name = "u8500_hsem", 179 + .owner = THIS_MODULE, 180 + }, 181 + }; 182 + 183 + static int __init u8500_hsem_init(void) 184 + { 185 + return platform_driver_register(&u8500_hsem_driver); 186 + } 187 + /* board init code might need to reserve hwspinlocks for predefined purposes */ 188 + postcore_initcall(u8500_hsem_init); 189 + 190 + static void __exit u8500_hsem_exit(void) 191 + { 192 + platform_driver_unregister(&u8500_hsem_driver); 193 + } 194 + 
module_exit(u8500_hsem_exit); 195 + 196 + MODULE_LICENSE("GPL v2"); 197 + MODULE_DESCRIPTION("Hardware Spinlock driver for u8500"); 198 + MODULE_AUTHOR("Mathieu Poirier <mathieu.poirier@linaro.org>");
+1 -1
drivers/mmc/host/omap_hsmmc.c
··· 
1270 1270 	} 
1271 1271 	} else { 
1272 1272 	if (!host->protect_card) { 
1273 	- 	pr_info"%s: cover is open, " 
1273 	+ 	pr_info("%s: cover is open, " 
1274 1274 	"card is now inaccessible\n", 
1275 1275 	mmc_hostname(host->mmc)); 
1276 1276 	host->protect_card = 1;
+34 -12
include/linux/hwspinlock.h
··· 
20 20 
21 21 #include <linux/err.h> 
22 22 #include <linux/sched.h> 
23 	+ #include <linux/device.h> 
23 24 
24 25 /* hwspinlock mode argument */ 
25 26 #define HWLOCK_IRQSTATE	0x01	/* Disable interrupts, save state */ 
26 27 #define HWLOCK_IRQ	0x02	/* Disable interrupts, don't save state */ 
27 28 
28 29 struct hwspinlock; 
30 	+ struct hwspinlock_device; 
31 	+ struct hwspinlock_ops; 
32 	+ 
33 	+ /** 
34 	+  * struct hwspinlock_pdata - platform data for hwspinlock drivers 
35 	+  * @base_id: base id for this hwspinlock device 
36 	+  * 
37 	+  * hwspinlock devices provide system-wide hardware locks that are used 
38 	+  * by remote processors that have no other way to achieve synchronization. 
39 	+  * 
40 	+  * To achieve that, each physical lock must have a system-wide id number 
41 	+  * that is agreed upon, otherwise remote processors can't possibly assume 
42 	+  * they're using the same hardware lock. 
43 	+  * 
44 	+  * Usually boards have a single hwspinlock device, which provides several 
45 	+  * hwspinlocks, and in this case, they can be trivially numbered 0 to 
46 	+  * (num-of-locks - 1). 
47 	+  * 
48 	+  * In case boards have several hwspinlocks devices, a different base id 
49 	+  * should be used for each hwspinlock device (they can't all use 0 as 
50 	+  * a starting id!). 
51 	+  * 
52 	+  * This platform data structure should be used to provide the base id 
53 	+  * for each device (which is trivially 0 when only a single hwspinlock 
54 	+  * device exists). It can be shared between different platforms, hence 
55 	+  * its location. 
56 	+  */ 
57 	+ struct hwspinlock_pdata { 
58 	+ 	int base_id; 
59 	+ }; 
29 60 
30 61 #if defined(CONFIG_HWSPINLOCK) || defined(CONFIG_HWSPINLOCK_MODULE) 
32 	- int hwspin_lock_register(struct hwspinlock *lock); 
33 	- struct hwspinlock *hwspin_lock_unregister(unsigned int id); 
63 	+ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev, 
64 	+ 		const struct hwspinlock_ops *ops, int base_id, int num_locks); 
65 	+ int hwspin_lock_unregister(struct hwspinlock_device *bank); 
34 66 struct hwspinlock *hwspin_lock_request(void); 
35 67 struct hwspinlock *hwspin_lock_request_specific(unsigned int id); 
36 68 int hwspin_lock_free(struct hwspinlock *hwlock); 
··· 
124 92 static inline int hwspin_lock_get_id(struct hwspinlock *hwlock) 
125 93 { 
126 94 	return 0; 
127 	- } 
128 	- 
129 	- static inline int hwspin_lock_register(struct hwspinlock *hwlock) 
130 	- { 
131 	- 	return -ENODEV; 
132 	- } 
133 	- 
134 	- static inline struct hwspinlock *hwspin_lock_unregister(unsigned int id) 
135 	- { 
136 	- 	return NULL; 
137 95 } 
138 96 
139 97 #endif /* !CONFIG_HWSPINLOCK */