Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge v3.2-rc4 into usb-next

This lets us handle the PS3 merge more easily, and also syncs us up with
the other USB fixes already in the -rc4 tree.

Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

+3259 -1866
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
···
 ramtron	Ramtron International
 samsung	Samsung Semiconductor
 schindler	Schindler
+sil	Silicon Image
 simtek
 sirf	SiRF Technology, Inc.
 stericsson	ST-Ericsson
+2 -2
Documentation/filesystems/btrfs.txt
···
 Userspace tools for creating and manipulating Btrfs file systems are
 available from the git repository at the following location:

-http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs-unstable.git
-git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs-unstable.git
+http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git
+git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git

 These include the following tools:
+66 -39
Documentation/power/devices.txt
···
 Subsystem-Level Methods
 -----------------------
 The core methods to suspend and resume devices reside in struct dev_pm_ops
-pointed to by the pm member of struct bus_type, struct device_type and
-struct class.  They are mostly of interest to the people writing infrastructure
-for buses, like PCI or USB, or device type and device class drivers.
+pointed to by the ops member of struct dev_pm_domain, or by the pm member of
+struct bus_type, struct device_type and struct class.  They are mostly of
+interest to the people writing infrastructure for platforms and buses, like PCI
+or USB, or device type and device class drivers.

 Bus drivers implement these methods as appropriate for the hardware and the
 drivers using it; PCI works differently from USB, and so on.  Not many people
···

 /sys/devices/.../power/wakeup files
 -----------------------------------
-All devices in the driver model have two flags to control handling of wakeup
-events (hardware signals that can force the device and/or system out of a low
-power state).  These flags are initialized by bus or device driver code using
+All device objects in the driver model contain fields that control the handling
+of system wakeup events (hardware signals that can force the system out of a
+sleep state).  These fields are initialized by bus or device driver code using
 device_set_wakeup_capable() and device_set_wakeup_enable(), defined in
 include/linux/pm_wakeup.h.

-The "can_wakeup" flag just records whether the device (and its driver) can
+The "power.can_wakeup" flag just records whether the device (and its driver) can
 physically support wakeup events.  The device_set_wakeup_capable() routine
-affects this flag.  The "should_wakeup" flag controls whether the device should
-try to use its wakeup mechanism.  device_set_wakeup_enable() affects this flag;
-for the most part drivers should not change its value.  The initial value of
-should_wakeup is supposed to be false for the majority of devices; the major
-exceptions are power buttons, keyboards, and Ethernet adapters whose WoL
-(wake-on-LAN) feature has been set up with ethtool.  It should also default
-to true for devices that don't generate wakeup requests on their own but merely
-forward wakeup requests from one bus to another (like PCI bridges).
+affects this flag.  The "power.wakeup" field is a pointer to an object of type
+struct wakeup_source used for controlling whether or not the device should use
+its system wakeup mechanism and for notifying the PM core of system wakeup
+events signaled by the device.  This object is only present for wakeup-capable
+devices (i.e. devices whose "can_wakeup" flags are set) and is created (or
+removed) by device_set_wakeup_capable().

 Whether or not a device is capable of issuing wakeup events is a hardware
 matter, and the kernel is responsible for keeping track of it.  By contrast,
 whether or not a wakeup-capable device should issue wakeup events is a policy
 decision, and it is managed by user space through a sysfs attribute: the
-power/wakeup file.  User space can write the strings "enabled" or "disabled" to
-set or clear the "should_wakeup" flag, respectively.  This file is only present
-for wakeup-capable devices (i.e. devices whose "can_wakeup" flags are set)
-and is created (or removed) by device_set_wakeup_capable().  Reads from the
-file will return the corresponding string.
+"power/wakeup" file.  User space can write the strings "enabled" or "disabled"
+to it to indicate whether or not, respectively, the device is supposed to signal
+system wakeup.  This file is only present if the "power.wakeup" object exists
+for the given device and is created (or removed) along with that object, by
+device_set_wakeup_capable().  Reads from the file will return the corresponding
+string.

-The device_may_wakeup() routine returns true only if both flags are set.
+The "power/wakeup" file is supposed to contain the "disabled" string initially
+for the majority of devices; the major exceptions are power buttons, keyboards,
+and Ethernet adapters whose WoL (wake-on-LAN) feature has been set up with
+ethtool.  It should also default to "enabled" for devices that don't generate
+wakeup requests on their own but merely forward wakeup requests from one bus to
+another (like PCI Express ports).
+
+The device_may_wakeup() routine returns true only if the "power.wakeup" object
+exists and the corresponding "power/wakeup" file contains the string "enabled".
 This information is used by subsystems, like the PCI bus type code, to see
 whether or not to enable the devices' wakeup mechanisms.  If device wakeup
 mechanisms are enabled or disabled directly by drivers, they also should use
 device_may_wakeup() to decide what to do during a system sleep transition.
-However for runtime power management, wakeup events should be enabled whenever
-the device and driver both support them, regardless of the should_wakeup flag.
+Device drivers, however, are not supposed to call device_set_wakeup_enable()
+directly in any case.

+It ought to be noted that system wakeup is conceptually different from "remote
+wakeup" used by runtime power management, although it may be supported by the
+same physical mechanism.  Remote wakeup is a feature allowing devices in
+low-power states to trigger specific interrupts to signal conditions in which
+they should be put into the full-power state.  Those interrupts may or may not
+be used to signal system wakeup events, depending on the hardware design.  On
+some systems it is impossible to trigger them from system sleep states.  In any
+case, remote wakeup should always be enabled for runtime power management for
+all devices and drivers that support it.

 /sys/devices/.../power/control files
 ------------------------------------
···
 support all these callbacks and not all drivers use all the callbacks.  The
 various phases always run after tasks have been frozen and before they are
 unfrozen.  Furthermore, the *_noirq phases run at a time when IRQ handlers have
-been disabled (except for those marked with the IRQ_WAKEUP flag).
+been disabled (except for those marked with the IRQF_NO_SUSPEND flag).

-All phases use bus, type, or class callbacks (that is, methods defined in
-dev->bus->pm, dev->type->pm, or dev->class->pm).  These callbacks are mutually
-exclusive, so if the device type provides a struct dev_pm_ops object pointed to
-by its pm field (i.e. both dev->type and dev->type->pm are defined), the
-callbacks included in that object (i.e. dev->type->pm) will be used.  Otherwise,
-if the class provides a struct dev_pm_ops object pointed to by its pm field
-(i.e. both dev->class and dev->class->pm are defined), the PM core will use the
-callbacks from that object (i.e. dev->class->pm).  Finally, if the pm fields of
-both the device type and class objects are NULL (or those objects do not exist),
-the callbacks provided by the bus (that is, the callbacks from dev->bus->pm)
-will be used (this allows device types to override callbacks provided by bus
-types or classes if necessary).
+All phases use PM domain, bus, type, or class callbacks (that is, methods
+defined in dev->pm_domain->ops, dev->bus->pm, dev->type->pm, or dev->class->pm).
+These callbacks are regarded by the PM core as mutually exclusive.  Moreover,
+PM domain callbacks always take precedence over bus, type and class callbacks,
+while type callbacks take precedence over bus and class callbacks, and class
+callbacks take precedence over bus callbacks.  To be precise, the following
+rules are used to determine which callback to execute in the given phase:
+
+    1.  If dev->pm_domain is present, the PM core will attempt to execute the
+        callback included in dev->pm_domain->ops.  If that callback is not
+        present, no action will be carried out for the given device.
+
+    2.  Otherwise, if both dev->type and dev->type->pm are present, the callback
+        included in dev->type->pm will be executed.
+
+    3.  Otherwise, if both dev->class and dev->class->pm are present, the
+        callback included in dev->class->pm will be executed.
+
+    4.  Otherwise, if both dev->bus and dev->bus->pm are present, the callback
+        included in dev->bus->pm will be executed.
+
+This allows PM domains and device types to override callbacks provided by bus
+types or device classes if necessary.

 These callbacks may in turn invoke device- or driver-specific methods stored in
 dev->driver->pm, but they don't have to.
···

     After the prepare callback method returns, no new children may be
     registered below the device.  The method may also prepare the device or
-    driver in some way for the upcoming system power transition (for
-    example, by allocating additional memory required for this purpose), but
-    it should not put the device into a low-power state.
+    driver in some way for the upcoming system power transition, but it
+    should not put the device into a low-power state.

 2.  The suspend methods should quiesce the device to stop it from performing
     I/O.  They also may save the device registers and put it into the
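The wakeup semantics described in the devices.txt hunk above can be modeled in a few lines. This is a minimal sketch, not the real kernel code: the struct names and helpers (`dev_power`, `set_wakeup_capable`, `may_wakeup`) are simplified stand-ins for `struct dev_pm_info`, `device_set_wakeup_capable()` and `device_may_wakeup()`.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical, simplified model of the fields described above. */
struct wakeup_source { bool enabled; };

struct dev_power {
    bool can_wakeup;              /* hardware can signal wakeup */
    struct wakeup_source *wakeup; /* present only for wakeup-capable devices */
};

/* Models device_set_wakeup_capable(): creates/removes the wakeup object. */
static void set_wakeup_capable(struct dev_power *p, bool capable)
{
    p->can_wakeup = capable;
    if (capable && !p->wakeup) {
        p->wakeup = malloc(sizeof(*p->wakeup));
        if (p->wakeup)
            p->wakeup->enabled = false;   /* "disabled" initially */
    } else if (!capable) {
        free(p->wakeup);
        p->wakeup = NULL;
    }
}

/* Models device_may_wakeup(): true only if the object exists and the
 * "power/wakeup" file would read "enabled". */
static bool may_wakeup(const struct dev_power *p)
{
    return p->wakeup && p->wakeup->enabled;
}
```

The key point the documentation change makes is that enablement lives in an object that only exists for capable devices, rather than in a second independent flag.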
+24 -16
Documentation/power/runtime_pm.txt
···
 };

 The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks
-are executed by the PM core for either the power domain, or the device type
-(if the device power domain's struct dev_pm_ops does not exist), or the class
-(if the device power domain's and type's struct dev_pm_ops object does not
-exist), or the bus type (if the device power domain's, type's and class'
-struct dev_pm_ops objects do not exist) of the given device, so the priority
-order of callbacks from high to low is that power domain callbacks, device
-type callbacks, class callbacks and bus type callbacks, and the high priority
-one will take precedence over low priority one.  The bus type, device type and
-class callbacks are referred to as subsystem-level callbacks in what follows,
-and generally speaking, the power domain callbacks are used for representing
-power domains within a SoC.
+are executed by the PM core for the device's subsystem that may be either of
+the following:
+
+  1. PM domain of the device, if the device's PM domain object, dev->pm_domain,
+     is present.
+
+  2. Device type of the device, if both dev->type and dev->type->pm are present.
+
+  3. Device class of the device, if both dev->class and dev->class->pm are
+     present.
+
+  4. Bus type of the device, if both dev->bus and dev->bus->pm are present.
+
+The PM core always checks which callback to use in the order given above, so the
+priority order of callbacks from high to low is: PM domain, device type, class
+and bus type.  Moreover, the high-priority one will always take precedence over
+a low-priority one.  The PM domain, bus type, device type and class callbacks
+are referred to as subsystem-level callbacks in what follows.

 By default, the callbacks are always invoked in process context with interrupts
 enabled.  However, subsystems can use the pm_runtime_irq_safe() helper function
-to tell the PM core that a device's ->runtime_suspend() and ->runtime_resume()
-callbacks should be invoked in atomic context with interrupts disabled.
-This implies that these callback routines must not block or sleep, but it also
-means that the synchronous helper functions listed at the end of Section 4 can
-be used within an interrupt handler or in an atomic context.
+to tell the PM core that their ->runtime_suspend(), ->runtime_resume() and
+->runtime_idle() callbacks may be invoked in atomic context with interrupts
+disabled for a given device.  This implies that the callback routines in
+question must not block or sleep, but it also means that the synchronous helper
+functions listed at the end of Section 4 may be used for that device within an
+interrupt handler or generally in an atomic context.

 The subsystem-level suspend callback is _entirely_ _responsible_ for handling
 the suspend of the device as appropriate, which may, but need not include
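The PM-domain-first precedence that the runtime_pm.txt hunk above spells out can be sketched as a lookup function. This is an illustrative model only: the struct layouts are trimmed down stand-ins for the real `struct device` and friends, and `subsystem_ops` is a hypothetical name.

```c
#include <stddef.h>

/* Trimmed-down models of the real kernel structs, for illustration. */
struct dev_pm_ops { int (*runtime_suspend)(void); };
struct dev_pm_domain { struct dev_pm_ops ops; };
struct device_type { const struct dev_pm_ops *pm; };
struct class { const struct dev_pm_ops *pm; };
struct bus_type { const struct dev_pm_ops *pm; };

struct device {
    struct dev_pm_domain *pm_domain;
    struct device_type *type;
    struct class *class;
    struct bus_type *bus;
};

/* Pick the dev_pm_ops per the rules above: PM domain, then device type,
 * then class, then bus type. */
static const struct dev_pm_ops *subsystem_ops(const struct device *dev)
{
    if (dev->pm_domain)
        return &dev->pm_domain->ops;   /* rule 1: PM domain always wins */
    if (dev->type && dev->type->pm)
        return dev->type->pm;          /* rule 2: device type */
    if (dev->class && dev->class->pm)
        return dev->class->pm;         /* rule 3: device class */
    if (dev->bus && dev->bus->pm)
        return dev->bus->pm;           /* rule 4: bus type */
    return NULL;
}
```

Note that a present PM domain wins even if its individual callback pointer is NULL, which matches the "no action will be carried out" wording in the devices.txt hunk.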
+8 -1
MAINTAINERS
···
 S:	Maintained
 T:	git git://git.pengutronix.de/git/imx/linux-2.6.git
 F:	arch/arm/mach-mx*/
+F:	arch/arm/mach-imx/
 F:	arch/arm/plat-mxc/

 ARM/FREESCALE IMX51
···
 S:	Maintained
 T:	git git://git.linaro.org/people/shawnguo/linux-2.6.git
 F:	arch/arm/mach-imx/*imx6*
+
+ARM/FREESCALE MXS ARM ARCHITECTURE
+M:	Shawn Guo <shawn.guo@linaro.org>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+T:	git git://git.linaro.org/people/shawnguo/linux-2.6.git
+F:	arch/arm/mach-mxs/

 ARM/GLOMATION GESBC9312SX MACHINE SUPPORT
 M:	Lennert Buytenhek <kernel@wantstofly.org>
···
 F:	include/media/*7146*

 SAMSUNG AUDIO (ASoC) DRIVERS
-M:	Jassi Brar <jassisinghbrar@gmail.com>
 M:	Sangbeom Kim <sbkim73@samsung.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
+1 -1
Makefile
···
 VERSION = 3
 PATCHLEVEL = 2
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
 NAME = Saber-toothed Squirrel

 # *DOCUMENTATION*
+16 -4
arch/arm/Kconfig
···
	  capabilities of the processor.

 config PL310_ERRATA_588369
-	bool "Clean & Invalidate maintenance operations do not invalidate clean lines"
+	bool "PL310 errata: Clean & Invalidate maintenance operations do not invalidate clean lines"
	depends on CACHE_L2X0
	help
	  The PL310 L2 cache controller implements three types of Clean &
···
	  entries regardless of the ASID.

 config PL310_ERRATA_727915
-	bool "Background Clean & Invalidate by Way operation can cause data corruption"
+	bool "PL310 errata: Background Clean & Invalidate by Way operation can cause data corruption"
	depends on CACHE_L2X0
	help
	  PL310 implements the Clean & Invalidate by Way L2 cache maintenance
···
	  operation is received by a CPU before the ICIALLUIS has completed,
	  potentially leading to corrupted entries in the cache or TLB.

-config ARM_ERRATA_753970
-	bool "ARM errata: cache sync operation may be faulty"
+config PL310_ERRATA_753970
+	bool "PL310 errata: cache sync operation may be faulty"
	depends on CACHE_PL310
	help
	  This option enables the workaround for the 753970 PL310 (r3p0) erratum.
···
	  system. This workaround adds a DSB instruction before the
	  relevant cache maintenance functions and sets a specific bit
	  in the diagnostic control register of the SCU.
+
+config PL310_ERRATA_769419
+	bool "PL310 errata: no automatic Store Buffer drain"
+	depends on CACHE_L2X0
+	help
+	  On revisions of the PL310 prior to r3p2, the Store Buffer does
+	  not automatically drain. This can cause normal, non-cacheable
+	  writes to be retained when the memory system is idle, leading
+	  to suboptimal I/O performance for drivers using coherent DMA.
+	  This option adds a write barrier to the cpu_idle loop so that,
+	  on systems with an outer cache, the store buffer is drained
+	  explicitly.

 endmenu
+10 -6
arch/arm/common/gic.c
···
			sizeof(u32));
	BUG_ON(!gic->saved_ppi_conf);

-	cpu_pm_register_notifier(&gic_notifier_block);
+	if (gic == &gic_data[0])
+		cpu_pm_register_notifier(&gic_notifier_block);
 }
 #else
 static void __init gic_pm_init(struct gic_chip_data *gic)
···
	 * For primary GICs, skip over SGIs.
	 * For secondary GICs, skip over PPIs, too.
	 */
+	domain->hwirq_base = 32;
	if (gic_nr == 0) {
		gic_cpu_base_addr = cpu_base;
-		domain->hwirq_base = 16;
-		if (irq_start > 0)
-			irq_start = (irq_start & ~31) + 16;
-	} else
-		domain->hwirq_base = 32;
+
+		if ((irq_start & 31) > 0) {
+			domain->hwirq_base = 16;
+			if (irq_start != -1)
+				irq_start = (irq_start & ~31) + 16;
+		}
+	}

	/*
	 * Find out how many interrupts are supported.
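The irq-base bookkeeping in the gic.c hunk above is compact enough to test in isolation. The sketch below extracts just that arithmetic into a hypothetical standalone helper (the function name and pointer-parameter shape are mine, not the driver's); it mirrors the fixed logic: secondary GICs skip SGIs and PPIs (base 32), while a primary GIC with a non-32-aligned irq_start skips only the 16 SGIs and rounds irq_start accordingly.

```c
/* Standalone model of the hwirq_base/irq_start computation above.
 * gic_nr 0 is the primary GIC; irq_start == -1 means "allocate". */
static void gic_irq_base_sketch(int gic_nr, int *irq_start, int *hwirq_base)
{
    *hwirq_base = 32;                     /* default: skip SGIs and PPIs */
    if (gic_nr == 0) {
        if ((*irq_start & 31) > 0) {
            *hwirq_base = 16;             /* primary GIC: skip only SGIs */
            if (*irq_start != -1)
                *irq_start = (*irq_start & ~31) + 16;
        }
    }
}
```

Compared with the removed code, the aligned case (irq_start a multiple of 32) now keeps base 32 even on the primary GIC, and the dynamic-allocation sentinel -1 is no longer rounded.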
+9 -3
arch/arm/common/pl330.c
···
	ccr |= (rqc->brst_size << CC_SRCBRSTSIZE_SHFT);
	ccr |= (rqc->brst_size << CC_DSTBRSTSIZE_SHFT);

-	ccr |= (rqc->dcctl << CC_SRCCCTRL_SHFT);
-	ccr |= (rqc->scctl << CC_DSTCCTRL_SHFT);
+	ccr |= (rqc->scctl << CC_SRCCCTRL_SHFT);
+	ccr |= (rqc->dcctl << CC_DSTCCTRL_SHFT);

	ccr |= (rqc->swap << CC_SWAP_SHFT);
···
	return -1;
 }

+static bool _chan_ns(const struct pl330_info *pi, int i)
+{
+	return pi->pcfg.irq_ns & (1 << i);
+}
+
 /* Upon success, returns IdentityToken for the
  * allocated channel, NULL otherwise.
  */
···
	for (i = 0; i < chans; i++) {
		thrd = &pl330->channels[i];
-		if (thrd->free) {
+		if ((thrd->free) && (!_manager_ns(thrd) ||
+					_chan_ns(pi, i))) {
			thrd->ev = _alloc_event(thrd);
			if (thrd->ev >= 0) {
				thrd->free = false;
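The first pl330.c change above fixes swapped fields: the source cache-control value was being shifted into the destination field and vice versa. A minimal sketch of the corrected packing, assuming the PL330 CCR layout where source cache control occupies bits 13:11 and destination cache control bits 27:25 (the shift values here are an assumption for illustration, not copied from the driver):

```c
#include <stdint.h>

/* Assumed PL330 CCR field positions for this sketch. */
#define CC_SRCCCTRL_SHFT  11  /* src cache control, CCR[13:11] */
#define CC_DSTCCTRL_SHFT  25  /* dst cache control, CCR[27:25] */

/* Pack the channel control register the way the fix above does:
 * the src value goes into the src field, the dst value into the dst field. */
static uint32_t pack_cctl(uint32_t scctl, uint32_t dcctl)
{
    uint32_t ccr = 0;
    ccr |= scctl << CC_SRCCCTRL_SHFT;
    ccr |= dcctl << CC_DSTCCTRL_SHFT;
    return ccr;
}
```

With the pre-fix swap, a non-symmetric cache-control setting would have programmed the wrong side of the transfer, which is invisible whenever scctl == dcctl and easy to miss in testing.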
+15 -51
arch/arm/configs/at91cap9adk_defconfig → arch/arm/configs/at91sam9rl_defconfig
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91CAP9=y
-CONFIG_MACH_AT91CAP9ADK=y
-CONFIG_MTD_AT91_DATAFLASH_CARD=y
+CONFIG_ARCH_AT91SAM9RL=y
+CONFIG_MACH_AT91SAM9RLEK=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
-CONFIG_AEABI=y
-CONFIG_LEDS=y
-CONFIG_LEDS_CPU=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
-CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw"
+CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
 CONFIG_NET=y
-CONFIG_PACKET=y
 CONFIG_UNIX=y
-CONFIG_INET=y
-CONFIG_IP_PNP=y
-CONFIG_IP_PNP_BOOTP=y
-CONFIG_IP_PNP_RARP=y
-# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
-# CONFIG_INET_XFRM_MODE_TUNNEL is not set
-# CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
-# CONFIG_INET_DIAG is not set
-# CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
-CONFIG_MTD_CFI=y
-CONFIG_MTD_JEDECPROBE=y
-CONFIG_MTD_CFI_AMDSTD=y
-CONFIG_MTD_PHYSMAP=y
 CONFIG_MTD_DATAFLASH=y
 CONFIG_MTD_NAND=y
 CONFIG_MTD_NAND_ATMEL=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_SSC=y
+CONFIG_BLK_DEV_RAM_COUNT=4
+CONFIG_BLK_DEV_RAM_SIZE=24576
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
-CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
-CONFIG_MII=y
-CONFIG_MACB=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
 CONFIG_INPUT_EVDEV=y
 # CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
 CONFIG_INPUT_TOUCHSCREEN=y
-CONFIG_TOUCHSCREEN_ADS7846=y
+CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y
 # CONFIG_SERIO is not set
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-CONFIG_HW_RANDOM=y
+# CONFIG_HW_RANDOM is not set
 CONFIG_I2C=y
 CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_GPIO=y
 CONFIG_SPI=y
 CONFIG_SPI_ATMEL=y
 # CONFIG_HWMON is not set
 CONFIG_WATCHDOG=y
 CONFIG_WATCHDOG_NOWAYOUT=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 CONFIG_FB=y
 CONFIG_FB_ATMEL=y
-# CONFIG_VGA_CONSOLE is not set
-CONFIG_LOGO=y
-# CONFIG_LOGO_LINUX_MONO is not set
-# CONFIG_LOGO_LINUX_CLUT224 is not set
-# CONFIG_USB_HID is not set
-CONFIG_USB=y
-CONFIG_USB_DEVICEFS=y
-CONFIG_USB_MON=y
-CONFIG_USB_OHCI_HCD=y
-CONFIG_USB_STORAGE=y
-CONFIG_USB_GADGET=y
-CONFIG_USB_ETH=m
-CONFIG_USB_FILE_STORAGE=m
 CONFIG_MMC=y
 CONFIG_MMC_AT91=m
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
+CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
-CONFIG_JFFS2_FS=y
 CONFIG_CRAMFS=y
-CONFIG_NFS_FS=y
-CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_DEBUG_FS=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_UTF8=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_USER=y
+CONFIG_DEBUG_LL=y
+14 -33
arch/arm/configs/at91rm9200_defconfig
···
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=14
-CONFIG_SYSFS_DEPRECATED_V2=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_MODULES=y
 CONFIG_MODULE_FORCE_LOAD=y
···
 CONFIG_IP_PNP_DHCP=y
 CONFIG_IP_PNP_BOOTP=y
 CONFIG_NET_IPIP=m
-CONFIG_NET_IPGRE=m
 CONFIG_INET_AH=m
 CONFIG_INET_ESP=m
 CONFIG_INET_IPCOMP=m
···
 CONFIG_BRIDGE=m
 CONFIG_VLAN_8021Q=m
 CONFIG_BT=m
-CONFIG_BT_L2CAP=m
-CONFIG_BT_SCO=m
-CONFIG_BT_RFCOMM=m
-CONFIG_BT_RFCOMM_TTY=y
-CONFIG_BT_BNEP=m
-CONFIG_BT_BNEP_MC_FILTER=y
-CONFIG_BT_BNEP_PROTO_FILTER=y
-CONFIG_BT_HIDP=m
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_AFS_PARTS=y
 CONFIG_MTD_CHAR=y
···
 CONFIG_BLK_DEV_NBD=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_TCLIB=y
-CONFIG_EEPROM_LEGACY=m
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_BLK_DEV_SR=m
···
 # CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
 CONFIG_TUN=m
+CONFIG_ARM_AT91_ETHER=y
 CONFIG_PHYLIB=y
 CONFIG_DAVICOM_PHY=y
 CONFIG_SMSC_PHY=y
 CONFIG_MICREL_PHY=y
-CONFIG_NET_ETHERNET=y
-CONFIG_ARM_AT91_ETHER=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=y
+CONFIG_PPP_DEFLATE=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=m
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPPOE=m
+CONFIG_PPP_ASYNC=y
+CONFIG_SLIP=m
+CONFIG_SLIP_COMPRESSED=y
+CONFIG_SLIP_SMART=y
+CONFIG_SLIP_MODE_SLIP6=y
 CONFIG_USB_CATC=m
 CONFIG_USB_KAWETH=m
 CONFIG_USB_PEGASUS=m
···
 CONFIG_USB_ALI_M5632=y
 CONFIG_USB_AN2720=y
 CONFIG_USB_EPSON2888=y
-CONFIG_PPP=y
-CONFIG_PPP_MULTILINK=y
-CONFIG_PPP_FILTER=y
-CONFIG_PPP_ASYNC=y
-CONFIG_PPP_DEFLATE=y
-CONFIG_PPP_BSDCOMP=y
-CONFIG_PPP_MPPE=m
-CONFIG_PPPOE=m
-CONFIG_SLIP=m
-CONFIG_SLIP_COMPRESSED=y
-CONFIG_SLIP_SMART=y
-CONFIG_SLIP_MODE_SLIP6=y
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
 CONFIG_INPUT_MOUSEDEV_SCREEN_X=640
 CONFIG_INPUT_MOUSEDEV_SCREEN_Y=480
···
 CONFIG_KEYBOARD_GPIO=y
 # CONFIG_INPUT_MOUSE is not set
 CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_LEGACY_PTY_COUNT=32
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-CONFIG_LEGACY_PTY_COUNT=32
 CONFIG_HW_RANDOM=y
 CONFIG_I2C=y
 CONFIG_I2C_CHARDEV=y
···
 CONFIG_NFS_V4=y
 CONFIG_ROOT_NFS=y
 CONFIG_NFSD=y
-CONFIG_SMB_FS=m
 CONFIG_CIFS=m
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_MAC_PARTITION=y
···
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_FS=y
 CONFIG_DEBUG_KERNEL=y
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
 # CONFIG_FTRACE is not set
 CONFIG_CRYPTO_PCBC=y
 CONFIG_CRYPTO_SHA1=y
+61 -20
arch/arm/configs/at91sam9260ek_defconfig → arch/arm/configs/at91sam9g20_defconfig
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91SAM9260=y
-CONFIG_MACH_AT91SAM9260EK=y
+CONFIG_ARCH_AT91SAM9G20=y
+CONFIG_MACH_AT91SAM9G20EK=y
+CONFIG_MACH_AT91SAM9G20EK_2MMC=y
+CONFIG_MACH_CPU9G20=y
+CONFIG_MACH_ACMENETUSFOXG20=y
+CONFIG_MACH_PORTUXG20=y
+CONFIG_MACH_STAMP9G20=y
+CONFIG_MACH_PCONTROL_G20=y
+CONFIG_MACH_GSIA18S=y
+CONFIG_MACH_USB_A9G20=y
+CONFIG_MACH_SNAPPER_9260=y
+CONFIG_MACH_AT91SAM_DT=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
+CONFIG_AEABI=y
+CONFIG_LEDS=y
+CONFIG_LEDS_CPU=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_ARM_APPENDED_DTB=y
+CONFIG_ARM_ATAG_DTB_COMPAT=y
 CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
 CONFIG_NET=y
···
 # CONFIG_INET_LRO is not set
 # CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_MTD=y
+CONFIG_MTD_CMDLINE_PARTS=y
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+CONFIG_MTD_DATAFLASH=y
+CONFIG_MTD_NAND=y
+CONFIG_MTD_NAND_ATMEL=y
+CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_SSC=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
+# CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
 CONFIG_MII=y
 CONFIG_MACB=y
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
-# CONFIG_INPUT_KEYBOARD is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
+CONFIG_INPUT_EVDEV=y
+# CONFIG_KEYBOARD_ATKBD is not set
+CONFIG_KEYBOARD_GPIO=y
 # CONFIG_INPUT_MOUSE is not set
-# CONFIG_SERIO is not set
+CONFIG_LEGACY_PTY_COUNT=16
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-# CONFIG_HW_RANDOM is not set
-CONFIG_I2C=y
-CONFIG_I2C_CHARDEV=y
-CONFIG_I2C_GPIO=y
+CONFIG_HW_RANDOM=y
+CONFIG_SPI=y
+CONFIG_SPI_ATMEL=y
+CONFIG_SPI_SPIDEV=y
 # CONFIG_HWMON is not set
-CONFIG_WATCHDOG=y
-CONFIG_WATCHDOG_NOWAYOUT=y
-CONFIG_AT91SAM9X_WATCHDOG=y
-# CONFIG_VGA_CONSOLE is not set
-# CONFIG_USB_HID is not set
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_SEQUENCER=y
+CONFIG_SND_MIXER_OSS=y
+CONFIG_SND_PCM_OSS=y
+CONFIG_SND_SEQUENCER_OSS=y
+# CONFIG_SND_VERBOSE_PROCFS is not set
 CONFIG_USB=y
 CONFIG_USB_DEVICEFS=y
+# CONFIG_USB_DEVICE_CLASS is not set
 CONFIG_USB_MON=y
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_STORAGE=y
-CONFIG_USB_STORAGE_DEBUG=y
 CONFIG_USB_GADGET=y
 CONFIG_USB_ZERO=m
 CONFIG_USB_GADGETFS=m
 CONFIG_USB_FILE_STORAGE=m
 CONFIG_USB_G_SERIAL=m
+CONFIG_MMC=y
+CONFIG_MMC_AT91=m
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_GPIO=y
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_TIMER=y
+CONFIG_LEDS_TRIGGER_HEARTBEAT=y
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
+CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_SUMMARY=y
 CONFIG_CRAMFS=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_DEBUG_KERNEL=y
-CONFIG_DEBUG_USER=y
-CONFIG_DEBUG_LL=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_UTF8=y
+# CONFIG_ENABLE_WARN_DEPRECATED is not set
+29 -44
arch/arm/configs/at91sam9g20ek_defconfig → arch/arm/configs/at91cap9_defconfig
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_AT91=y
-CONFIG_ARCH_AT91SAM9G20=y
-CONFIG_MACH_AT91SAM9G20EK=y
-CONFIG_MACH_AT91SAM9G20EK_2MMC=y
+CONFIG_ARCH_AT91CAP9=y
+CONFIG_MACH_AT91CAP9ADK=y
+CONFIG_MTD_AT91_DATAFLASH_CARD=y
 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
 # CONFIG_ARM_THUMB is not set
 CONFIG_AEABI=y
···
 CONFIG_LEDS_CPU=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
-CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw"
+CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw"
 CONFIG_FPE_NWFPE=y
-CONFIG_PM=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
 CONFIG_INET=y
 CONFIG_IP_PNP=y
 CONFIG_IP_PNP_BOOTP=y
+CONFIG_IP_PNP_RARP=y
 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
 # CONFIG_INET_LRO is not set
+# CONFIG_INET_DIAG is not set
 # CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_CONCAT=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
+CONFIG_MTD_CFI=y
+CONFIG_MTD_JEDECPROBE=y
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_PHYSMAP=y
 CONFIG_MTD_DATAFLASH=y
 CONFIG_MTD_NAND=y
 CONFIG_MTD_NAND_ATMEL=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-CONFIG_ATMEL_SSC=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_MULTI_LUN=y
-# CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
 CONFIG_MII=y
 CONFIG_MACB=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
-CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
-CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
 CONFIG_INPUT_EVDEV=y
-# CONFIG_KEYBOARD_ATKBD is not set
-CONFIG_KEYBOARD_GPIO=y
+# CONFIG_INPUT_KEYBOARD is not set
 # CONFIG_INPUT_MOUSE is not set
+CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_TOUCHSCREEN_ADS7846=y
+# CONFIG_SERIO is not set
 CONFIG_SERIAL_ATMEL=y
 CONFIG_SERIAL_ATMEL_CONSOLE=y
-CONFIG_LEGACY_PTY_COUNT=16
 CONFIG_HW_RANDOM=y
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
 CONFIG_SPI=y
 CONFIG_SPI_ATMEL=y
-CONFIG_SPI_SPIDEV=y
 # CONFIG_HWMON is not set
-# CONFIG_VGA_CONSOLE is not set
-CONFIG_SOUND=y
-CONFIG_SND=y
-CONFIG_SND_SEQUENCER=y
-CONFIG_SND_MIXER_OSS=y
-CONFIG_SND_PCM_OSS=y
-CONFIG_SND_SEQUENCER_OSS=y
-# CONFIG_SND_VERBOSE_PROCFS is not set
-CONFIG_SND_AT73C213=y
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_NOWAYOUT=y
+CONFIG_FB=y
+CONFIG_FB_ATMEL=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_CLUT224 is not set
+# CONFIG_USB_HID is not set
 CONFIG_USB=y
 CONFIG_USB_DEVICEFS=y
-# CONFIG_USB_DEVICE_CLASS is not set
 CONFIG_USB_MON=y
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_STORAGE=y
 CONFIG_USB_GADGET=y
-CONFIG_USB_ZERO=m
-CONFIG_USB_GADGETFS=m
+CONFIG_USB_ETH=m
 CONFIG_USB_FILE_STORAGE=m
-CONFIG_USB_G_SERIAL=m
 CONFIG_MMC=y
 CONFIG_MMC_AT91=m
-CONFIG_NEW_LEDS=y
-CONFIG_LEDS_CLASS=y
-CONFIG_LEDS_GPIO=y
-CONFIG_LEDS_TRIGGERS=y
-CONFIG_LEDS_TRIGGER_TIMER=y
-CONFIG_LEDS_TRIGGER_HEARTBEAT=y
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_AT91SAM9=y
 CONFIG_EXT2_FS=y
-CONFIG_INOTIFY=y
-CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
 CONFIG_JFFS2_FS=y
-CONFIG_JFFS2_SUMMARY=y
 CONFIG_CRAMFS=y
 CONFIG_NFS_FS=y
-CONFIG_NFS_V3=y
 CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_850=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_NLS_ISO8859_15=y
-CONFIG_NLS_UTF8=y
-# CONFIG_ENABLE_WARN_DEPRECATED is not set
+CONFIG_DEBUG_FS=y
+CONFIG_DEBUG_KERNEL=y
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_USER=y
+2 -5
arch/arm/configs/at91sam9g45_defconfig
··· 18 18 CONFIG_ARCH_AT91=y 19 19 CONFIG_ARCH_AT91SAM9G45=y 20 20 CONFIG_MACH_AT91SAM9M10G45EK=y 21 + CONFIG_MACH_AT91SAM_DT=y 21 22 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 22 23 CONFIG_AT91_SLOW_CLOCK=y 23 24 CONFIG_AEABI=y ··· 74 73 # CONFIG_SCSI_LOWLEVEL is not set 75 74 CONFIG_NETDEVICES=y 76 75 CONFIG_MII=y 77 - CONFIG_DAVICOM_PHY=y 78 - CONFIG_NET_ETHERNET=y 79 76 CONFIG_MACB=y 80 - # CONFIG_NETDEV_1000 is not set 81 - # CONFIG_NETDEV_10000 is not set 77 + CONFIG_DAVICOM_PHY=y 82 78 CONFIG_LIBERTAS_THINFIRM=m 83 79 CONFIG_LIBERTAS_THINFIRM_USB=m 84 80 CONFIG_AT76C50X_USB=m ··· 129 131 CONFIG_SPI=y 130 132 CONFIG_SPI_ATMEL=y 131 133 # CONFIG_HWMON is not set 132 - # CONFIG_MFD_SUPPORT is not set 133 134 CONFIG_FB=y 134 135 CONFIG_FB_ATMEL=y 135 136 CONFIG_FB_UDL=m
+40 -33
arch/arm/configs/at91sam9rlek_defconfig arch/arm/configs/at91sam9260_defconfig
··· 11 11 # CONFIG_IOSCHED_DEADLINE is not set
12 12 # CONFIG_IOSCHED_CFQ is not set
13 13 CONFIG_ARCH_AT91=y
14 - CONFIG_ARCH_AT91SAM9RL=y
15 - CONFIG_MACH_AT91SAM9RLEK=y
14 + CONFIG_ARCH_AT91SAM9260=y
15 + CONFIG_ARCH_AT91SAM9260_SAM9XE=y
16 + CONFIG_MACH_AT91SAM9260EK=y
17 + CONFIG_MACH_CAM60=y
18 + CONFIG_MACH_SAM9_L9260=y
19 + CONFIG_MACH_AFEB9260=y
20 + CONFIG_MACH_USB_A9260=y
21 + CONFIG_MACH_QIL_A9260=y
22 + CONFIG_MACH_CPU9260=y
23 + CONFIG_MACH_FLEXIBITY=y
24 + CONFIG_MACH_SNAPPER_9260=y
25 + CONFIG_MACH_AT91SAM_DT=y
16 26 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
17 27 # CONFIG_ARM_THUMB is not set
18 28 CONFIG_ZBOOT_ROM_TEXT=0x0
19 29 CONFIG_ZBOOT_ROM_BSS=0x0
20 - CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw"
30 + CONFIG_ARM_APPENDED_DTB=y
31 + CONFIG_ARM_ATAG_DTB_COMPAT=y
32 + CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw"
21 33 CONFIG_FPE_NWFPE=y
22 34 CONFIG_NET=y
35 + CONFIG_PACKET=y
23 36 CONFIG_UNIX=y
37 + CONFIG_INET=y
38 + CONFIG_IP_PNP=y
39 + CONFIG_IP_PNP_BOOTP=y
40 + # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
41 + # CONFIG_INET_XFRM_MODE_TUNNEL is not set
42 + # CONFIG_INET_XFRM_MODE_BEET is not set
43 + # CONFIG_INET_LRO is not set
44 + # CONFIG_IPV6 is not set
24 45 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
25 - CONFIG_MTD=y
26 - CONFIG_MTD_CONCAT=y
27 - CONFIG_MTD_PARTITIONS=y
28 - CONFIG_MTD_CMDLINE_PARTS=y
29 - CONFIG_MTD_CHAR=y
30 - CONFIG_MTD_BLOCK=y
31 - CONFIG_MTD_DATAFLASH=y
32 - CONFIG_MTD_NAND=y
33 - CONFIG_MTD_NAND_ATMEL=y
34 - CONFIG_BLK_DEV_LOOP=y
35 46 CONFIG_BLK_DEV_RAM=y
36 - CONFIG_BLK_DEV_RAM_COUNT=4
37 - CONFIG_BLK_DEV_RAM_SIZE=24576
38 - CONFIG_ATMEL_SSC=y
47 + CONFIG_BLK_DEV_RAM_SIZE=8192
39 48 CONFIG_SCSI=y
40 49 CONFIG_BLK_DEV_SD=y
41 50 CONFIG_SCSI_MULTI_LUN=y
51 + CONFIG_NETDEVICES=y
52 + CONFIG_MII=y
53 + CONFIG_MACB=y
42 54 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set
43 - CONFIG_INPUT_MOUSEDEV_SCREEN_X=320
44 - CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
45 - CONFIG_INPUT_EVDEV=y
46 55 # CONFIG_INPUT_KEYBOARD is not set
47 56 # CONFIG_INPUT_MOUSE is not set
48 - CONFIG_INPUT_TOUCHSCREEN=y
49 - CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y
50 57 # CONFIG_SERIO is not set
51 58 CONFIG_SERIAL_ATMEL=y
52 59 CONFIG_SERIAL_ATMEL_CONSOLE=y
··· 61 54 CONFIG_I2C=y
62 55 CONFIG_I2C_CHARDEV=y
63 56 CONFIG_I2C_GPIO=y
64 - CONFIG_SPI=y
65 - CONFIG_SPI_ATMEL=y
66 57 # CONFIG_HWMON is not set
67 58 CONFIG_WATCHDOG=y
68 59 CONFIG_WATCHDOG_NOWAYOUT=y
69 60 CONFIG_AT91SAM9X_WATCHDOG=y
70 - CONFIG_FB=y
71 - CONFIG_FB_ATMEL=y
72 - # CONFIG_VGA_CONSOLE is not set
73 - CONFIG_MMC=y
74 - CONFIG_MMC_AT91=m
61 + # CONFIG_USB_HID is not set
62 + CONFIG_USB=y
63 + CONFIG_USB_DEVICEFS=y
64 + CONFIG_USB_MON=y
65 + CONFIG_USB_OHCI_HCD=y
66 + CONFIG_USB_STORAGE=y
67 + CONFIG_USB_STORAGE_DEBUG=y
68 + CONFIG_USB_GADGET=y
69 + CONFIG_USB_ZERO=m
70 + CONFIG_USB_GADGETFS=m
71 + CONFIG_USB_FILE_STORAGE=m
72 + CONFIG_USB_G_SERIAL=m
75 73 CONFIG_RTC_CLASS=y
76 74 CONFIG_RTC_DRV_AT91SAM9=y
77 75 CONFIG_EXT2_FS=y
78 - CONFIG_INOTIFY=y
79 - CONFIG_MSDOS_FS=y
80 76 CONFIG_VFAT_FS=y
81 77 CONFIG_TMPFS=y
82 78 CONFIG_CRAMFS=y
83 79 CONFIG_NLS_CODEPAGE_437=y
84 80 CONFIG_NLS_CODEPAGE_850=y
85 81 CONFIG_NLS_ISO8859_1=y
86 - CONFIG_NLS_ISO8859_15=y
87 - CONFIG_NLS_UTF8=y
88 82 CONFIG_DEBUG_KERNEL=y
89 - CONFIG_DEBUG_INFO=y
90 83 CONFIG_DEBUG_USER=y
91 84 CONFIG_DEBUG_LL=y
+1 -1
arch/arm/configs/ezx_defconfig
··· 287 287 # CONFIG_USB_DEVICE_CLASS is not set 288 288 CONFIG_USB_OHCI_HCD=y 289 289 CONFIG_USB_GADGET=y 290 - CONFIG_USB_GADGET_PXA27X=y 290 + CONFIG_USB_PXA27X=y 291 291 CONFIG_USB_ETH=m 292 292 # CONFIG_USB_ETH_RNDIS is not set 293 293 CONFIG_MMC=y
+1 -1
arch/arm/configs/imote2_defconfig
··· 263 263 # CONFIG_USB_DEVICE_CLASS is not set 264 264 CONFIG_USB_OHCI_HCD=y 265 265 CONFIG_USB_GADGET=y 266 - CONFIG_USB_GADGET_PXA27X=y 266 + CONFIG_USB_PXA27X=y 267 267 CONFIG_USB_ETH=m 268 268 # CONFIG_USB_ETH_RNDIS is not set 269 269 CONFIG_MMC=y
+1 -1
arch/arm/configs/magician_defconfig
··· 132 132 CONFIG_USB_OHCI_HCD=y 133 133 CONFIG_USB_GADGET=y 134 134 CONFIG_USB_GADGET_VBUS_DRAW=500 135 - CONFIG_USB_GADGET_PXA27X=y 135 + CONFIG_USB_PXA27X=y 136 136 CONFIG_USB_ETH=m 137 137 # CONFIG_USB_ETH_RNDIS is not set 138 138 CONFIG_USB_GADGETFS=m
-1
arch/arm/configs/omap1_defconfig
··· 48 48 CONFIG_MACH_NOKIA770=y 49 49 CONFIG_MACH_AMS_DELTA=y 50 50 CONFIG_MACH_OMAP_GENERIC=y 51 - CONFIG_OMAP_CLOCKS_SET_BY_BOOTLOADER=y 52 51 CONFIG_OMAP_ARM_216MHZ=y 53 52 CONFIG_OMAP_ARM_195MHZ=y 54 53 CONFIG_OMAP_ARM_192MHZ=y
+6 -7
arch/arm/configs/u300_defconfig
··· 14 14 CONFIG_ARCH_U300=y 15 15 CONFIG_MACH_U300=y 16 16 CONFIG_MACH_U300_BS335=y 17 - CONFIG_MACH_U300_DUAL_RAM=y 18 - CONFIG_U300_DEBUG=y 19 17 CONFIG_MACH_U300_SPIDUMMY=y 20 18 CONFIG_NO_HZ=y 21 19 CONFIG_HIGH_RES_TIMERS=y ··· 24 26 CONFIG_CMDLINE="root=/dev/ram0 rw rootfstype=rootfs console=ttyAMA0,115200n8 lpj=515072" 25 27 CONFIG_CPU_IDLE=y 26 28 CONFIG_FPE_NWFPE=y 27 - CONFIG_PM=y 28 29 # CONFIG_SUSPEND is not set 29 30 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 30 31 # CONFIG_PREVENT_FIRMWARE_BUILD is not set 31 - # CONFIG_MISC_DEVICES is not set 32 + CONFIG_MTD=y 33 + CONFIG_MTD_CMDLINE_PARTS=y 34 + CONFIG_MTD_NAND=y 35 + CONFIG_MTD_NAND_FSMC=y 32 36 # CONFIG_INPUT_MOUSEDEV is not set 33 37 CONFIG_INPUT_EVDEV=y 34 38 # CONFIG_KEYBOARD_ATKBD is not set 35 39 # CONFIG_INPUT_MOUSE is not set 36 40 # CONFIG_SERIO is not set 41 + CONFIG_LEGACY_PTY_COUNT=16 37 42 CONFIG_SERIAL_AMBA_PL011=y 38 43 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y 39 - CONFIG_LEGACY_PTY_COUNT=16 40 44 # CONFIG_HW_RANDOM is not set 41 45 CONFIG_I2C=y 42 46 # CONFIG_HWMON is not set ··· 51 51 # CONFIG_HID_SUPPORT is not set 52 52 # CONFIG_USB_SUPPORT is not set 53 53 CONFIG_MMC=y 54 + CONFIG_MMC_CLKGATE=y 54 55 CONFIG_MMC_ARMMMCI=y 55 56 CONFIG_RTC_CLASS=y 56 57 # CONFIG_RTC_HCTOSYS is not set ··· 66 65 CONFIG_NLS_ISO8859_1=y 67 66 CONFIG_PRINTK_TIME=y 68 67 CONFIG_DEBUG_FS=y 69 - CONFIG_DEBUG_KERNEL=y 70 68 # CONFIG_SCHED_DEBUG is not set 71 69 CONFIG_TIMER_STATS=y 72 70 # CONFIG_DEBUG_PREEMPT is not set 73 71 CONFIG_DEBUG_INFO=y 74 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 75 72 # CONFIG_CRC32 is not set
+5 -9
arch/arm/configs/u8500_defconfig
··· 10 10 CONFIG_ARCH_U8500=y 11 11 CONFIG_UX500_SOC_DB5500=y 12 12 CONFIG_UX500_SOC_DB8500=y 13 - CONFIG_MACH_U8500=y 13 + CONFIG_MACH_HREFV60=y 14 14 CONFIG_MACH_SNOWBALL=y 15 15 CONFIG_MACH_U5500=y 16 16 CONFIG_NO_HZ=y ··· 24 24 CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y 25 25 CONFIG_VFP=y 26 26 CONFIG_NEON=y 27 + CONFIG_PM_RUNTIME=y 27 28 CONFIG_NET=y 28 29 CONFIG_PACKET=y 29 30 CONFIG_UNIX=y ··· 42 41 CONFIG_AB8500_PWM=y 43 42 CONFIG_SENSORS_BH1780=y 44 43 CONFIG_NETDEVICES=y 45 - CONFIG_SMSC_PHY=y 46 - CONFIG_NET_ETHERNET=y 47 44 CONFIG_SMSC911X=y 48 - # CONFIG_NETDEV_1000 is not set 49 - # CONFIG_NETDEV_10000 is not set 45 + CONFIG_SMSC_PHY=y 50 46 # CONFIG_WLAN is not set 51 47 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 52 48 CONFIG_INPUT_EVDEV=y ··· 70 72 CONFIG_SPI_PL022=y 71 73 CONFIG_GPIO_STMPE=y 72 74 CONFIG_GPIO_TC3589X=y 73 - # CONFIG_HWMON is not set 74 75 CONFIG_MFD_STMPE=y 75 76 CONFIG_MFD_TC3589X=y 77 + CONFIG_AB5500_CORE=y 76 78 CONFIG_AB8500_CORE=y 77 79 CONFIG_REGULATOR_AB8500=y 78 80 # CONFIG_HID_SUPPORT is not set 79 - CONFIG_USB_MUSB_HDRC=y 80 - CONFIG_USB_GADGET_MUSB_HDRC=y 81 - CONFIG_MUSB_PIO_ONLY=y 82 81 CONFIG_USB_GADGET=y 83 82 CONFIG_AB8500_USB=y 84 83 CONFIG_MMC=y ··· 92 97 CONFIG_STE_DMA40=y 93 98 CONFIG_STAGING=y 94 99 CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y 100 + CONFIG_HSEM_U8500=y 95 101 CONFIG_EXT2_FS=y 96 102 CONFIG_EXT2_FS_XATTR=y 97 103 CONFIG_EXT2_FS_POSIX_ACL=y
+1 -1
arch/arm/configs/zeus_defconfig
··· 140 140 CONFIG_USB_SERIAL_GENERIC=y 141 141 CONFIG_USB_SERIAL_MCT_U232=m 142 142 CONFIG_USB_GADGET=m 143 - CONFIG_USB_GADGET_PXA27X=y 143 + CONFIG_USB_PXA27X=y 144 144 CONFIG_USB_ETH=m 145 145 CONFIG_USB_GADGETFS=m 146 146 CONFIG_USB_FILE_STORAGE=m
-10
arch/arm/include/asm/pmu.h
··· 55 55 extern void 56 56 release_pmu(enum arm_pmu_type type); 57 57 58 - /** 59 - * init_pmu() - Initialise the PMU. 60 - * 61 - * Initialise the system ready for PMU enabling. This should typically set the 62 - * IRQ affinity and nothing else. The users (oprofile/perf events etc) will do 63 - * the actual hardware initialisation. 64 - */ 65 - extern int 66 - init_pmu(enum arm_pmu_type type); 67 - 68 58 #else /* CONFIG_CPU_HAS_PMU */ 69 59 70 60 #include <linux/err.h>
+1 -1
arch/arm/include/asm/topology.h
··· 25 25 26 26 void init_cpu_topology(void); 27 27 void store_cpu_topology(unsigned int cpuid); 28 - const struct cpumask *cpu_coregroup_mask(unsigned int cpu); 28 + const struct cpumask *cpu_coregroup_mask(int cpu); 29 29 30 30 #else 31 31
+1 -1
arch/arm/kernel/entry-armv.S
··· 497 497 .popsection 498 498 .pushsection __ex_table,"a" 499 499 .long 1b, 4b 500 - #if __LINUX_ARM_ARCH__ >= 7 500 + #if CONFIG_ARM_THUMB && __LINUX_ARM_ARCH__ >= 6 && CONFIG_CPU_V7 501 501 .long 2b, 4b 502 502 .long 3b, 4b 503 503 #endif
+3 -1
arch/arm/kernel/kprobes-arm.c
··· 519 519 static const union decode_item arm_cccc_0001_____1001_table[] = { 520 520 /* Synchronization primitives */ 521 521 522 + #if __LINUX_ARM_ARCH__ < 6 523 + /* Deprecated on ARMv6 and may be UNDEFINED on v7 */ 522 524 /* SMP/SWPB cccc 0001 0x00 xxxx xxxx xxxx 1001 xxxx */ 523 525 DECODE_EMULATEX (0x0fb000f0, 0x01000090, emulate_rd12rn16rm0_rwflags_nopc, 524 526 REGS(NOPC, NOPC, 0, 0, NOPC)), 525 - 527 + #endif 526 528 /* LDREX/STREX{,D,B,H} cccc 0001 1xxx xxxx xxxx xxxx 1001 xxxx */ 527 529 /* And unallocated instructions... */ 528 530 DECODE_END
+17 -10
arch/arm/kernel/kprobes-test-arm.c
··· 427 427
428 428 TEST_GROUP("Synchronization primitives")
429 429
430 - /*
431 - * Use hard coded constants for SWP instructions to avoid warnings
432 - * about deprecated instructions.
433 - */
434 - TEST_RP( ".word 0xe108e097 @ swp lr, r",7,VAL2,", [r",8,0,"]")
435 - TEST_R( ".word 0x610d0091 @ swpvs r0, r",1,VAL1,", [sp]")
436 - TEST_RP( ".word 0xe10cd09e @ swp sp, r",14,VAL2,", [r",12,13*4,"]")
430 + #if __LINUX_ARM_ARCH__ < 6
431 + TEST_RP("swp lr, r",7,VAL2,", [r",8,0,"]")
432 + TEST_R( "swpvs r0, r",1,VAL1,", [sp]")
433 + TEST_RP("swp sp, r",14,VAL2,", [r",12,13*4,"]")
434 + #else
435 + TEST_UNSUPPORTED(".word 0xe108e097 @ swp lr, r7, [r8]")
436 + TEST_UNSUPPORTED(".word 0x610d0091 @ swpvs r0, r1, [sp]")
437 + TEST_UNSUPPORTED(".word 0xe10cd09e @ swp sp, r14 [r12]")
438 + #endif
437 439 TEST_UNSUPPORTED(".word 0xe102f091 @ swp pc, r1, [r2]")
438 440 TEST_UNSUPPORTED(".word 0xe102009f @ swp r0, pc, [r2]")
439 441 TEST_UNSUPPORTED(".word 0xe10f0091 @ swp r0, r1, [pc]")
440 - TEST_RP( ".word 0xe148e097 @ swpb lr, r",7,VAL2,", [r",8,0,"]")
441 - TEST_R( ".word 0x614d0091 @ swpvsb r0, r",1,VAL1,", [sp]")
442 + #if __LINUX_ARM_ARCH__ < 6
443 + TEST_RP("swpb lr, r",7,VAL2,", [r",8,0,"]")
444 + TEST_R( "swpvsb r0, r",1,VAL1,", [sp]")
445 + #else
446 + TEST_UNSUPPORTED(".word 0xe148e097 @ swpb lr, r7, [r8]")
447 + TEST_UNSUPPORTED(".word 0x614d0091 @ swpvsb r0, r1, [sp]")
448 + #endif
442 449 TEST_UNSUPPORTED(".word 0xe142f091 @ swpb pc, r1, [r2]")
443 450
444 451 TEST_UNSUPPORTED(".word 0xe1100090") /* Unallocated space */
··· 557 550 TEST_RPR( "strccd r",8, VAL2,", [r",13,0, ", r",12,48,"]")
558 551 TEST_RPR( "strd r",4, VAL1,", [r",2, 24,", r",3, 48,"]!")
559 552 TEST_RPR( "strcsd r",12,VAL2,", [r",11,48,", -r",10,24,"]!")
560 - TEST_RPR( "strd r",2, VAL1,", [r",3, 24,"], r",4,48,"")
553 + TEST_RPR( "strd r",2, VAL1,", [r",5, 24,"], r",4,48,"")
561 554 TEST_RPR( "strd r",10,VAL2,", [r",9, 48,"], -r",7,24,"")
562 555 TEST_UNSUPPORTED(".word 0xe1afc0fa @ strd r12, [pc, r10]!")
563 556
+8 -8
arch/arm/kernel/kprobes-test-thumb.c
··· 222 222 DONT_TEST_IN_ITBLOCK( 223 223 TEST_BF_R( "cbnz r",0,0, ", 2f") 224 224 TEST_BF_R( "cbz r",2,-1,", 2f") 225 - TEST_BF_RX( "cbnz r",4,1, ", 2f",0x20) 226 - TEST_BF_RX( "cbz r",7,0, ", 2f",0x40) 225 + TEST_BF_RX( "cbnz r",4,1, ", 2f", SPACE_0x20) 226 + TEST_BF_RX( "cbz r",7,0, ", 2f", SPACE_0x40) 227 227 ) 228 228 TEST_R("sxth r0, r",7, HH1,"") 229 229 TEST_R("sxth r7, r",0, HH2,"") ··· 246 246 TESTCASE_START(code) \ 247 247 TEST_ARG_PTR(13, offset) \ 248 248 TEST_ARG_END("") \ 249 - TEST_BRANCH_F(code,0) \ 249 + TEST_BRANCH_F(code) \ 250 250 TESTCASE_END 251 251 252 252 TEST("push {r0}") ··· 319 319 320 320 TEST_BF( "b 2f") 321 321 TEST_BB( "b 2b") 322 - TEST_BF_X("b 2f", 0x400) 323 - TEST_BB_X("b 2b", 0x400) 322 + TEST_BF_X("b 2f", SPACE_0x400) 323 + TEST_BB_X("b 2b", SPACE_0x400) 324 324 325 325 TEST_GROUP("Testing instructions in IT blocks") 326 326 ··· 746 746 TEST_BB("bne.w 2b") 747 747 TEST_BF("bgt.w 2f") 748 748 TEST_BB("blt.w 2b") 749 - TEST_BF_X("bpl.w 2f",0x1000) 749 + TEST_BF_X("bpl.w 2f", SPACE_0x1000) 750 750 ) 751 751 752 752 TEST_UNSUPPORTED("msr cpsr, r0") ··· 786 786 787 787 TEST_BF( "b.w 2f") 788 788 TEST_BB( "b.w 2b") 789 - TEST_BF_X("b.w 2f", 0x1000) 789 + TEST_BF_X("b.w 2f", SPACE_0x1000) 790 790 791 791 TEST_BF( "bl.w 2f") 792 792 TEST_BB( "bl.w 2b") 793 - TEST_BB_X("bl.w 2b", 0x1000) 793 + TEST_BB_X("bl.w 2b", SPACE_0x1000) 794 794 795 795 TEST_X( "blx __dummy_arm_subroutine", 796 796 ".arm \n\t"
+71 -31
arch/arm/kernel/kprobes-test.h
··· 149 149 "1: "instruction" \n\t" \ 150 150 " nop \n\t" 151 151 152 - #define TEST_BRANCH_F(instruction, xtra_dist) \ 152 + #define TEST_BRANCH_F(instruction) \ 153 153 TEST_INSTRUCTION(instruction) \ 154 - ".if "#xtra_dist" \n\t" \ 155 - " b 99f \n\t" \ 156 - ".space "#xtra_dist" \n\t" \ 157 - ".endif \n\t" \ 158 154 " b 99f \n\t" \ 159 155 "2: nop \n\t" 160 156 161 - #define TEST_BRANCH_B(instruction, xtra_dist) \ 157 + #define TEST_BRANCH_B(instruction) \ 162 158 " b 50f \n\t" \ 163 159 " b 99f \n\t" \ 164 160 "2: nop \n\t" \ 165 161 " b 99f \n\t" \ 166 - ".if "#xtra_dist" \n\t" \ 167 - ".space "#xtra_dist" \n\t" \ 168 - ".endif \n\t" \ 162 + TEST_INSTRUCTION(instruction) 163 + 164 + #define TEST_BRANCH_FX(instruction, codex) \ 165 + TEST_INSTRUCTION(instruction) \ 166 + " b 99f \n\t" \ 167 + codex" \n\t" \ 168 + " b 99f \n\t" \ 169 + "2: nop \n\t" 170 + 171 + #define TEST_BRANCH_BX(instruction, codex) \ 172 + " b 50f \n\t" \ 173 + " b 99f \n\t" \ 174 + "2: nop \n\t" \ 175 + " b 99f \n\t" \ 176 + codex" \n\t" \ 169 177 TEST_INSTRUCTION(instruction) 170 178 171 179 #define TESTCASE_END \ ··· 309 301 TESTCASE_START(code1 #reg1 code2) \ 310 302 TEST_ARG_PTR(reg1, val1) \ 311 303 TEST_ARG_END("") \ 312 - TEST_BRANCH_F(code1 #reg1 code2, 0) \ 304 + TEST_BRANCH_F(code1 #reg1 code2) \ 313 305 TESTCASE_END 314 306 315 - #define TEST_BF_X(code, xtra_dist) \ 307 + #define TEST_BF(code) \ 316 308 TESTCASE_START(code) \ 317 309 TEST_ARG_END("") \ 318 - TEST_BRANCH_F(code, xtra_dist) \ 310 + TEST_BRANCH_F(code) \ 319 311 TESTCASE_END 320 312 321 - #define TEST_BB_X(code, xtra_dist) \ 313 + #define TEST_BB(code) \ 322 314 TESTCASE_START(code) \ 323 315 TEST_ARG_END("") \ 324 - TEST_BRANCH_B(code, xtra_dist) \ 316 + TEST_BRANCH_B(code) \ 325 317 TESTCASE_END 326 318 327 - #define TEST_BF_RX(code1, reg, val, code2, xtra_dist) \ 328 - TESTCASE_START(code1 #reg code2) \ 329 - TEST_ARG_REG(reg, val) \ 330 - TEST_ARG_END("") \ 331 - TEST_BRANCH_F(code1 #reg code2, xtra_dist) \ 
319 + #define TEST_BF_R(code1, reg, val, code2) \
320 + TESTCASE_START(code1 #reg code2) \
321 + TEST_ARG_REG(reg, val) \
322 + TEST_ARG_END("") \
323 + TEST_BRANCH_F(code1 #reg code2) \
332 324 TESTCASE_END
333 325
334 - #define TEST_BB_RX(code1, reg, val, code2, xtra_dist) \
335 - TESTCASE_START(code1 #reg code2) \
336 - TEST_ARG_REG(reg, val) \
337 - TEST_ARG_END("") \
338 - TEST_BRANCH_B(code1 #reg code2, xtra_dist) \
326 + #define TEST_BB_R(code1, reg, val, code2) \
327 + TESTCASE_START(code1 #reg code2) \
328 + TEST_ARG_REG(reg, val) \
329 + TEST_ARG_END("") \
330 + TEST_BRANCH_B(code1 #reg code2) \
339 331 TESTCASE_END
340 -
341 - #define TEST_BF(code) TEST_BF_X(code, 0)
342 - #define TEST_BB(code) TEST_BB_X(code, 0)
343 -
344 - #define TEST_BF_R(code1, reg, val, code2) TEST_BF_RX(code1, reg, val, code2, 0)
345 - #define TEST_BB_R(code1, reg, val, code2) TEST_BB_RX(code1, reg, val, code2, 0)
346 332
347 333 #define TEST_BF_RR(code1, reg1, val1, code2, reg2, val2, code3) \
348 334 TESTCASE_START(code1 #reg1 code2 #reg2 code3) \
349 335 TEST_ARG_REG(reg1, val1) \
350 336 TEST_ARG_REG(reg2, val2) \
351 337 TEST_ARG_END("") \
352 - TEST_BRANCH_F(code1 #reg1 code2 #reg2 code3, 0) \
338 + TEST_BRANCH_F(code1 #reg1 code2 #reg2 code3) \
339 + TESTCASE_END
340 +
341 + #define TEST_BF_X(code, codex) \
342 + TESTCASE_START(code) \
343 + TEST_ARG_END("") \
344 + TEST_BRANCH_FX(code, codex) \
345 + TESTCASE_END
346 +
347 + #define TEST_BB_X(code, codex) \
348 + TESTCASE_START(code) \
349 + TEST_ARG_END("") \
350 + TEST_BRANCH_BX(code, codex) \
351 + TESTCASE_END
352 +
353 + #define TEST_BF_RX(code1, reg, val, code2, codex) \
354 + TESTCASE_START(code1 #reg code2) \
355 + TEST_ARG_REG(reg, val) \
356 + TEST_ARG_END("") \
357 + TEST_BRANCH_FX(code1 #reg code2, codex) \
353 358 TESTCASE_END
354 359
355 360 #define TEST_X(code, codex) \
··· 391 370 " b 99f \n\t" \
392 371 " "codex" \n\t" \
393 372 TESTCASE_END
373 +
374 +
375 + /*
376 + * Macros for defining space directives spread over multiple lines.
377 + * These are required so the compiler guesses better the length of inline asm
378 + * code and will spill the literal pool early enough to avoid generating PC
379 + * relative loads with out of range offsets.
380 + */
381 + #define TWICE(x) x x
382 + #define SPACE_0x8 TWICE(".space 4\n\t")
383 + #define SPACE_0x10 TWICE(SPACE_0x8)
384 + #define SPACE_0x20 TWICE(SPACE_0x10)
385 + #define SPACE_0x40 TWICE(SPACE_0x20)
386 + #define SPACE_0x80 TWICE(SPACE_0x40)
387 + #define SPACE_0x100 TWICE(SPACE_0x80)
388 + #define SPACE_0x200 TWICE(SPACE_0x100)
389 + #define SPACE_0x400 TWICE(SPACE_0x200)
390 + #define SPACE_0x800 TWICE(SPACE_0x400)
391 + #define SPACE_0x1000 TWICE(SPACE_0x800)
394 392
395 393
396 394 /* Various values used in test cases... */
+10 -1
arch/arm/kernel/perf_event.c
··· 343 343 { 344 344 struct perf_event *sibling, *leader = event->group_leader; 345 345 struct pmu_hw_events fake_pmu; 346 + DECLARE_BITMAP(fake_used_mask, ARMPMU_MAX_HWEVENTS); 346 347 347 - memset(&fake_pmu, 0, sizeof(fake_pmu)); 348 + /* 349 + * Initialise the fake PMU. We only need to populate the 350 + * used_mask for the purposes of validation. 351 + */ 352 + memset(fake_used_mask, 0, sizeof(fake_used_mask)); 353 + fake_pmu.used_mask = fake_used_mask; 348 354 349 355 if (!validate_event(&fake_pmu, leader)) 350 356 return -ENOSPC; ··· 401 395 irq_handler_t handle_irq; 402 396 int i, err, irq, irqs; 403 397 struct platform_device *pmu_device = armpmu->plat_device; 398 + 399 + if (!pmu_device) 400 + return -ENODEV; 404 401 405 402 err = reserve_pmu(armpmu->type); 406 403 if (err) {
+1
arch/arm/kernel/pmu.c
··· 33 33 { 34 34 clear_bit_unlock(type, pmu_lock); 35 35 } 36 + EXPORT_SYMBOL_GPL(release_pmu);
+3
arch/arm/kernel/process.c
··· 192 192 #endif 193 193 194 194 local_irq_disable(); 195 + #ifdef CONFIG_PL310_ERRATA_769419 196 + wmb(); 197 + #endif 195 198 if (hlt_counter) { 196 199 local_irq_enable(); 197 200 cpu_relax();
+1 -1
arch/arm/kernel/topology.c
··· 43 43 44 44 struct cputopo_arm cpu_topology[NR_CPUS]; 45 45 46 - const struct cpumask *cpu_coregroup_mask(unsigned int cpu) 46 + const struct cpumask *cpu_coregroup_mask(int cpu) 47 47 { 48 48 return &cpu_topology[cpu].core_sibling; 49 49 }
+22 -4
arch/arm/lib/bitops.h
··· 1 + #include <asm/unwind.h> 2 + 1 3 #if __LINUX_ARM_ARCH__ >= 6 2 - .macro bitop, instr 4 + .macro bitop, name, instr 5 + ENTRY( \name ) 6 + UNWIND( .fnstart ) 3 7 ands ip, r1, #3 4 8 strneb r1, [ip] @ assert word-aligned 5 9 mov r2, #1 ··· 17 13 cmp r0, #0 18 14 bne 1b 19 15 bx lr 16 + UNWIND( .fnend ) 17 + ENDPROC(\name ) 20 18 .endm 21 19 22 - .macro testop, instr, store 20 + .macro testop, name, instr, store 21 + ENTRY( \name ) 22 + UNWIND( .fnstart ) 23 23 ands ip, r1, #3 24 24 strneb r1, [ip] @ assert word-aligned 25 25 mov r2, #1 ··· 42 34 cmp r0, #0 43 35 movne r0, #1 44 36 2: bx lr 37 + UNWIND( .fnend ) 38 + ENDPROC(\name ) 45 39 .endm 46 40 #else 47 - .macro bitop, instr 41 + .macro bitop, name, instr 42 + ENTRY( \name ) 43 + UNWIND( .fnstart ) 48 44 ands ip, r1, #3 49 45 strneb r1, [ip] @ assert word-aligned 50 46 and r2, r0, #31 ··· 61 49 str r2, [r1, r0, lsl #2] 62 50 restore_irqs ip 63 51 mov pc, lr 52 + UNWIND( .fnend ) 53 + ENDPROC(\name ) 64 54 .endm 65 55 66 56 /** ··· 73 59 * Note: we can trivially conditionalise the store instruction 74 60 * to avoid dirtying the data cache. 75 61 */ 76 - .macro testop, instr, store 62 + .macro testop, name, instr, store 63 + ENTRY( \name ) 64 + UNWIND( .fnstart ) 77 65 ands ip, r1, #3 78 66 strneb r1, [ip] @ assert word-aligned 79 67 and r3, r0, #31 ··· 89 73 moveq r0, #0 90 74 restore_irqs ip 91 75 mov pc, lr 76 + UNWIND( .fnend ) 77 + ENDPROC(\name ) 92 78 .endm 93 79 #endif
+1 -3
arch/arm/lib/changebit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_change_bit) 16 - bitop eor 17 - ENDPROC(_change_bit) 15 + bitop _change_bit, eor
+1 -3
arch/arm/lib/clearbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_clear_bit) 16 - bitop bic 17 - ENDPROC(_clear_bit) 15 + bitop _clear_bit, bic
+1 -3
arch/arm/lib/setbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_set_bit) 16 - bitop orr 17 - ENDPROC(_set_bit) 15 + bitop _set_bit, orr
+1 -3
arch/arm/lib/testchangebit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_test_and_change_bit) 16 - testop eor, str 17 - ENDPROC(_test_and_change_bit) 15 + testop _test_and_change_bit, eor, str
+1 -3
arch/arm/lib/testclearbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_test_and_clear_bit) 16 - testop bicne, strne 17 - ENDPROC(_test_and_clear_bit) 15 + testop _test_and_clear_bit, bicne, strne
+1 -3
arch/arm/lib/testsetbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_test_and_set_bit) 16 - testop orreq, streq 17 - ENDPROC(_test_and_set_bit) 15 + testop _test_and_set_bit, orreq, streq
+2
arch/arm/mach-exynos/cpuidle.c
··· 12 12 #include <linux/init.h> 13 13 #include <linux/cpuidle.h> 14 14 #include <linux/io.h> 15 + #include <linux/export.h> 16 + #include <linux/time.h> 15 17 16 18 #include <asm/proc-fns.h> 17 19
+4
arch/arm/mach-highbank/highbank.c
··· 22 22 #include <linux/of_irq.h> 23 23 #include <linux/of_platform.h> 24 24 #include <linux/of_address.h> 25 + #include <linux/smp.h> 25 26 26 27 #include <asm/cacheflush.h> 27 28 #include <asm/unified.h> ··· 73 72 74 73 void highbank_set_cpu_jump(int cpu, void *jump_addr) 75 74 { 75 + #ifdef CONFIG_SMP 76 + cpu = cpu_logical_map(cpu); 77 + #endif 76 78 writel(BSYM(virt_to_phys(jump_addr)), HB_JUMP_TABLE_VIRT(cpu)); 77 79 __cpuc_flush_dcache_area(HB_JUMP_TABLE_VIRT(cpu), 16); 78 80 outer_clean_range(HB_JUMP_TABLE_PHYS(cpu),
-13
arch/arm/mach-imx/Kconfig
··· 10 10 config HAVE_IMX_SRC 11 11 bool 12 12 13 - # 14 - # ARCH_MX31 and ARCH_MX35 are left for compatibility 15 - # Some usages assume that having one of them implies not having (e.g.) ARCH_MX2. 16 - # To easily distinguish good and reviewed from unreviewed usages new (and IMHO 17 - # more sensible) names are used: SOC_IMX31 and SOC_IMX35 18 13 config ARCH_MX1 19 14 bool 20 15 ··· 20 25 bool 21 26 22 27 config MACH_MX27 23 - bool 24 - 25 - config ARCH_MX31 26 - bool 27 - 28 - config ARCH_MX35 29 28 bool 30 29 31 30 config SOC_IMX1 ··· 61 72 select CPU_V6 62 73 select IMX_HAVE_PLATFORM_MXC_RNGA 63 74 select ARCH_MXC_AUDMUX_V2 64 - select ARCH_MX31 65 75 select MXC_AVIC 66 76 select SMP_ON_UP if SMP 67 77 ··· 70 82 select ARCH_MXC_IOMUX_V3 71 83 select ARCH_MXC_AUDMUX_V2 72 84 select HAVE_EPIT 73 - select ARCH_MX35 74 85 select MXC_AVIC 75 86 select SMP_ON_UP if SMP 76 87
+5 -2
arch/arm/mach-imx/clock-imx6q.c
··· 1953 1953 imx_map_entry(MX6Q, ANATOP, MT_DEVICE), 1954 1954 }; 1955 1955 1956 + void __init imx6q_clock_map_io(void) 1957 + { 1958 + iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc)); 1959 + } 1960 + 1956 1961 int __init mx6q_clocks_init(void) 1957 1962 { 1958 1963 struct device_node *np; 1959 1964 void __iomem *base; 1960 1965 int i, irq; 1961 - 1962 - iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc)); 1963 1966 1964 1967 /* retrieve the freqency of fixed clocks from device tree */ 1965 1968 for_each_compatible_node(np, NULL, "fixed-clock") {
+1
arch/arm/mach-imx/mach-imx6q.c
··· 34 34 { 35 35 imx_lluart_map_io(); 36 36 imx_scu_map_io(); 37 + imx6q_clock_map_io(); 37 38 } 38 39 39 40 static void __init imx6q_gpio_add_irq_domain(struct device_node *np,
+58 -51
arch/arm/mach-imx/mm-imx3.c
··· 33 33 static void imx3_idle(void)
34 34 {
35 35 unsigned long reg = 0;
36 - __asm__ __volatile__(
37 - /* disable I and D cache */
38 - "mrc p15, 0, %0, c1, c0, 0\n"
39 - "bic %0, %0, #0x00001000\n"
40 - "bic %0, %0, #0x00000004\n"
41 - "mcr p15, 0, %0, c1, c0, 0\n"
42 - /* invalidate I cache */
43 - "mov %0, #0\n"
44 - "mcr p15, 0, %0, c7, c5, 0\n"
45 - /* clear and invalidate D cache */
46 - "mov %0, #0\n"
47 - "mcr p15, 0, %0, c7, c14, 0\n"
48 - /* WFI */
49 - "mov %0, #0\n"
50 - "mcr p15, 0, %0, c7, c0, 4\n"
51 - "nop\n" "nop\n" "nop\n" "nop\n"
52 - "nop\n" "nop\n" "nop\n"
53 - /* enable I and D cache */
54 - "mrc p15, 0, %0, c1, c0, 0\n"
55 - "orr %0, %0, #0x00001000\n"
56 - "orr %0, %0, #0x00000004\n"
57 - "mcr p15, 0, %0, c1, c0, 0\n"
58 - : "=r" (reg));
36 +
37 + if (!need_resched())
38 + __asm__ __volatile__(
39 + /* disable I and D cache */
40 + "mrc p15, 0, %0, c1, c0, 0\n"
41 + "bic %0, %0, #0x00001000\n"
42 + "bic %0, %0, #0x00000004\n"
43 + "mcr p15, 0, %0, c1, c0, 0\n"
44 + /* invalidate I cache */
45 + "mov %0, #0\n"
46 + "mcr p15, 0, %0, c7, c5, 0\n"
47 + /* clear and invalidate D cache */
48 + "mov %0, #0\n"
49 + "mcr p15, 0, %0, c7, c14, 0\n"
50 + /* WFI */
51 + "mov %0, #0\n"
52 + "mcr p15, 0, %0, c7, c0, 4\n"
53 + "nop\n" "nop\n" "nop\n" "nop\n"
54 + "nop\n" "nop\n" "nop\n"
55 + /* enable I and D cache */
56 + "mrc p15, 0, %0, c1, c0, 0\n"
57 + "orr %0, %0, #0x00001000\n"
58 + "orr %0, %0, #0x00000004\n"
59 + "mcr p15, 0, %0, c1, c0, 0\n"
60 + : "=r" (reg));
61 + local_irq_enable();
59 62 }
60 63
61 64 static void __iomem *imx3_ioremap(unsigned long phys_addr, size_t size,
··· 111 108 l2x0_init(l2x0_base, 0x00030024, 0x00000000);
112 109 }
113 110
111 + #ifdef CONFIG_SOC_IMX31
114 112 static struct map_desc mx31_io_desc[] __initdata = {
115 113 imx_map_entry(MX31, X_MEMC, MT_DEVICE),
116 114 imx_map_entry(MX31, AVIC, MT_DEVICE_NONSHARED),
··· 130 126 iotable_init(mx31_io_desc, ARRAY_SIZE(mx31_io_desc));
131 127 }
132 128
133 - static struct map_desc mx35_io_desc[] __initdata = {
134 - imx_map_entry(MX35, X_MEMC, MT_DEVICE),
135 - imx_map_entry(MX35, AVIC, MT_DEVICE_NONSHARED),
136 - imx_map_entry(MX35, AIPS1, MT_DEVICE_NONSHARED),
137 - imx_map_entry(MX35, AIPS2, MT_DEVICE_NONSHARED),
138 - imx_map_entry(MX35, SPBA0, MT_DEVICE_NONSHARED),
139 - };
140 -
141 - void __init mx35_map_io(void)
142 - {
143 - iotable_init(mx35_io_desc, ARRAY_SIZE(mx35_io_desc));
144 - }
145 -
146 129 void __init imx31_init_early(void)
147 130 {
148 131 mxc_set_cpu_type(MXC_CPU_MX31);
149 132 mxc_arch_reset_init(MX31_IO_ADDRESS(MX31_WDOG_BASE_ADDR));
150 - imx_idle = imx3_idle;
151 - imx_ioremap = imx3_ioremap;
152 - }
153 -
154 - void __init imx35_init_early(void)
155 - {
156 - mxc_set_cpu_type(MXC_CPU_MX35);
157 - mxc_iomux_v3_init(MX35_IO_ADDRESS(MX35_IOMUXC_BASE_ADDR));
158 - mxc_arch_reset_init(MX35_IO_ADDRESS(MX35_WDOG_BASE_ADDR));
159 - imx_idle = imx3_idle;
133 + pm_idle = imx3_idle;
160 134 imx_ioremap = imx3_ioremap;
161 135 }
162 136
163 137 void __init mx31_init_irq(void)
164 138 {
165 139 mxc_init_irq(MX31_IO_ADDRESS(MX31_AVIC_BASE_ADDR));
166 - }
167 -
168 - void __init mx35_init_irq(void)
169 - {
170 - mxc_init_irq(MX35_IO_ADDRESS(MX35_AVIC_BASE_ADDR));
171 140 }
172 141
173 142 static struct sdma_script_start_addrs imx31_to1_sdma_script __initdata = {
··· 175 198 }
176 199
177 200 imx_add_imx_sdma("imx31-sdma", MX31_SDMA_BASE_ADDR, MX31_INT_SDMA, &imx31_sdma_pdata);
201 + }
202 + #endif /* ifdef CONFIG_SOC_IMX31 */
203 +
204 + #ifdef CONFIG_SOC_IMX35
205 + static struct map_desc mx35_io_desc[] __initdata = {
206 + imx_map_entry(MX35, X_MEMC, MT_DEVICE),
207 + imx_map_entry(MX35, AVIC, MT_DEVICE_NONSHARED),
208 + imx_map_entry(MX35, AIPS1, MT_DEVICE_NONSHARED),
209 + imx_map_entry(MX35, AIPS2, MT_DEVICE_NONSHARED),
210 + imx_map_entry(MX35, SPBA0, MT_DEVICE_NONSHARED),
211 + };
212 +
213 + void __init mx35_map_io(void)
214 + {
215 + iotable_init(mx35_io_desc, ARRAY_SIZE(mx35_io_desc));
216 + }
217 +
218 + void __init imx35_init_early(void)
219 + {
220 + mxc_set_cpu_type(MXC_CPU_MX35);
221 + mxc_iomux_v3_init(MX35_IO_ADDRESS(MX35_IOMUXC_BASE_ADDR));
222 + mxc_arch_reset_init(MX35_IO_ADDRESS(MX35_WDOG_BASE_ADDR));
223 + pm_idle = imx3_idle;
224 + imx_ioremap = imx3_ioremap;
225 + }
226 +
227 + void __init mx35_init_irq(void)
228 + {
229 + mxc_init_irq(MX35_IO_ADDRESS(MX35_AVIC_BASE_ADDR));
178 230 }
179 231
180 232 static struct sdma_script_start_addrs imx35_to1_sdma_script __initdata = {
··· 260 254
261 255 imx_add_imx_sdma("imx35-sdma", MX35_SDMA_BASE_ADDR, MX35_INT_SDMA, &imx35_sdma_pdata);
262 256 }
257 + #endif /* ifdef CONFIG_SOC_IMX35 */
+7
arch/arm/mach-imx/src.c
··· 14 14 #include <linux/io.h> 15 15 #include <linux/of.h> 16 16 #include <linux/of_address.h> 17 + #include <linux/smp.h> 17 18 #include <asm/unified.h> 18 19 19 20 #define SRC_SCR 0x000 ··· 24 23 25 24 static void __iomem *src_base; 26 25 26 + #ifndef CONFIG_SMP 27 + #define cpu_logical_map(cpu) 0 28 + #endif 29 + 27 30 void imx_enable_cpu(int cpu, bool enable) 28 31 { 29 32 u32 mask, val; 30 33 34 + cpu = cpu_logical_map(cpu); 31 35 mask = 1 << (BP_SRC_SCR_CORE1_ENABLE + cpu - 1); 32 36 val = readl_relaxed(src_base + SRC_SCR); 33 37 val = enable ? val | mask : val & ~mask; ··· 41 35 42 36 void imx_set_cpu_jump(int cpu, void *jump_addr) 43 37 { 38 + cpu = cpu_logical_map(cpu); 44 39 writel_relaxed(BSYM(virt_to_phys(jump_addr)), 45 40 src_base + SRC_GPR1 + cpu * 8); 46 41 }
+1 -1
arch/arm/mach-mmp/gplugd.c
··· 182 182 183 183 /* on-chip devices */ 184 184 pxa168_add_uart(3); 185 - pxa168_add_ssp(0); 185 + pxa168_add_ssp(1); 186 186 pxa168_add_twsi(0, NULL, ARRAY_AND_SIZE(gplugd_i2c_board_info)); 187 187 188 188 pxa168_add_eth(&gplugd_eth_platform_data);
+1 -1
arch/arm/mach-mmp/include/mach/gpio-pxa.h
··· 7 7 #define GPIO_REGS_VIRT (APB_VIRT_BASE + 0x19000) 8 8 9 9 #define BANK_OFF(n) (((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2)) 10 - #define GPIO_REG(x) (GPIO_REGS_VIRT + (x)) 10 + #define GPIO_REG(x) (*(volatile u32 *)(GPIO_REGS_VIRT + (x))) 11 11 12 12 #define NR_BUILTIN_GPIO IRQ_GPIO_NUM 13 13
+3 -2
arch/arm/mach-mx5/cpu.c
··· 16 16 #include <linux/init.h> 17 17 #include <linux/module.h> 18 18 #include <mach/hardware.h> 19 - #include <asm/io.h> 19 + #include <linux/io.h> 20 20 21 21 static int mx5_cpu_rev = -1; 22 22 ··· 67 67 if (!cpu_is_mx51()) 68 68 return 0; 69 69 70 - if (mx51_revision() < IMX_CHIP_REVISION_3_0 && (elf_hwcap & HWCAP_NEON)) { 70 + if (mx51_revision() < IMX_CHIP_REVISION_3_0 && 71 + (elf_hwcap & HWCAP_NEON)) { 71 72 elf_hwcap &= ~HWCAP_NEON; 72 73 pr_info("Turning off NEON support, detected broken NEON implementation\n"); 73 74 }
+4 -2
arch/arm/mach-mx5/mm.c
··· 23 23 24 24 static void imx5_idle(void) 25 25 { 26 - mx5_cpu_lp_set(WAIT_UNCLOCKED_POWER_OFF); 26 + if (!need_resched()) 27 + mx5_cpu_lp_set(WAIT_UNCLOCKED_POWER_OFF); 28 + local_irq_enable(); 27 29 } 28 30 29 31 /* ··· 91 89 mxc_set_cpu_type(MXC_CPU_MX51); 92 90 mxc_iomux_v3_init(MX51_IO_ADDRESS(MX51_IOMUXC_BASE_ADDR)); 93 91 mxc_arch_reset_init(MX51_IO_ADDRESS(MX51_WDOG1_BASE_ADDR)); 94 - imx_idle = imx5_idle; 92 + pm_idle = imx5_idle; 95 93 } 96 94 97 95 void __init imx53_init_early(void)
+1 -1
arch/arm/mach-mxs/clock-mx28.c
··· 404 404 reg = __raw_readl(CLKCTRL_BASE_ADDR + HW_CLKCTRL_##dr); \ 405 405 reg &= ~BM_CLKCTRL_##dr##_DIV; \ 406 406 reg |= div << BP_CLKCTRL_##dr##_DIV; \ 407 - if (reg | (1 << clk->enable_shift)) { \ 407 + if (reg & (1 << clk->enable_shift)) { \ 408 408 pr_err("%s: clock is gated\n", __func__); \ 409 409 return -EINVAL; \ 410 410 } \
-8
arch/arm/mach-omap1/Kconfig
··· 171 171 comment "OMAP CPU Speed" 172 172 depends on ARCH_OMAP1 173 173 174 - config OMAP_CLOCKS_SET_BY_BOOTLOADER 175 - bool "OMAP clocks set by bootloader" 176 - depends on ARCH_OMAP1 177 - help 178 - Enable this option to prevent the kernel from overriding the clock 179 - frequencies programmed by bootloader for MPU, DSP, MMUs, TC, 180 - internal LCD controller and MPU peripherals. 181 - 182 174 config OMAP_ARM_216MHZ 183 175 bool "OMAP ARM 216 MHz CPU (1710 only)" 184 176 depends on ARCH_OMAP1 && ARCH_OMAP16XX
+7 -3
arch/arm/mach-omap1/board-ams-delta.c
··· 302 302 omap_cfg_reg(J19_1610_CAM_D6); 303 303 omap_cfg_reg(J18_1610_CAM_D7); 304 304 305 - iotable_init(ams_delta_io_desc, ARRAY_SIZE(ams_delta_io_desc)); 306 - 307 305 omap_board_config = ams_delta_config; 308 306 omap_board_config_size = ARRAY_SIZE(ams_delta_config); 309 307 omap_serial_init(); ··· 371 373 } 372 374 arch_initcall(ams_delta_modem_init); 373 375 376 + static void __init ams_delta_map_io(void) 377 + { 378 + omap15xx_map_io(); 379 + iotable_init(ams_delta_io_desc, ARRAY_SIZE(ams_delta_io_desc)); 380 + } 381 + 374 382 MACHINE_START(AMS_DELTA, "Amstrad E3 (Delta)") 375 383 /* Maintainer: Jonathan McDowell <noodles@earth.li> */ 376 384 .atag_offset = 0x100, 377 - .map_io = omap15xx_map_io, 385 + .map_io = ams_delta_map_io, 378 386 .init_early = omap1_init_early, 379 387 .reserve = omap_reserve, 380 388 .init_irq = omap1_init_irq,
+2 -1
arch/arm/mach-omap1/clock.h
··· 17 17 18 18 #include <plat/clock.h> 19 19 20 - extern int __init omap1_clk_init(void); 20 + int omap1_clk_init(void); 21 + void omap1_clk_late_init(void); 21 22 extern int omap1_clk_enable(struct clk *clk); 22 23 extern void omap1_clk_disable(struct clk *clk); 23 24 extern long omap1_clk_round_rate(struct clk *clk, unsigned long rate);
+34 -19
arch/arm/mach-omap1/clock_data.c
··· 767 767 .clk_disable_unused = omap1_clk_disable_unused, 768 768 }; 769 769 770 + static void __init omap1_show_rates(void) 771 + { 772 + pr_notice("Clocking rate (xtal/DPLL1/MPU): " 773 + "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n", 774 + ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10, 775 + ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10, 776 + arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10); 777 + } 778 + 770 779 int __init omap1_clk_init(void) 771 780 { 772 781 struct omap_clk *c; ··· 844 835 /* We want to be in syncronous scalable mode */ 845 836 omap_writew(0x1000, ARM_SYSST); 846 837 847 - #ifdef CONFIG_OMAP_CLOCKS_SET_BY_BOOTLOADER 848 - /* Use values set by bootloader. Determine PLL rate and recalculate 849 - * dependent clocks as if kernel had changed PLL or divisors. 838 + 839 + /* 840 + * Initially use the values set by bootloader. Determine PLL rate and 841 + * recalculate dependent clocks as if kernel had changed PLL or 842 + * divisors. See also omap1_clk_late_init() that can reprogram dpll1 843 + * after the SRAM is initialized. 850 844 */ 851 845 { 852 846 unsigned pll_ctl_val = omap_readw(DPLL_CTL); ··· 874 862 } 875 863 } 876 864 } 877 - #else 878 - /* Find the highest supported frequency and enable it */ 879 - if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) { 880 - printk(KERN_ERR "System frequencies not set. Check your config.\n"); 881 - /* Guess sane values (60MHz) */ 882 - omap_writew(0x2290, DPLL_CTL); 883 - omap_writew(cpu_is_omap7xx() ? 
0x3005 : 0x1005, ARM_CKCTL); 884 - ck_dpll1.rate = 60000000; 885 - } 886 - #endif 887 865 propagate_rate(&ck_dpll1); 888 866 /* Cache rates for clocks connected to ck_ref (not dpll1) */ 889 867 propagate_rate(&ck_ref); 890 - printk(KERN_INFO "Clocking rate (xtal/DPLL1/MPU): " 891 - "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n", 892 - ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10, 893 - ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10, 894 - arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10); 895 - 868 + omap1_show_rates(); 896 869 if (machine_is_omap_perseus2() || machine_is_omap_fsample()) { 897 870 /* Select slicer output as OMAP input clock */ 898 871 omap_writew(omap_readw(OMAP7XX_PCC_UPLD_CTRL) & ~0x1, ··· 921 924 clk_enable(&arm_gpio_ck); 922 925 923 926 return 0; 927 + } 928 + 929 + #define OMAP1_DPLL1_SANE_VALUE 60000000 930 + 931 + void __init omap1_clk_late_init(void) 932 + { 933 + if (ck_dpll1.rate >= OMAP1_DPLL1_SANE_VALUE) 934 + return; 935 + 936 + /* Find the highest supported frequency and enable it */ 937 + if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) { 938 + pr_err("System frequencies not set, using default. Check your config.\n"); 939 + omap_writew(0x2290, DPLL_CTL); 940 + omap_writew(cpu_is_omap7xx() ? 0x3005 : 0x1005, ARM_CKCTL); 941 + ck_dpll1.rate = OMAP1_DPLL1_SANE_VALUE; 942 + } 943 + propagate_rate(&ck_dpll1); 944 + omap1_show_rates(); 924 945 }
+3
arch/arm/mach-omap1/devices.c
··· 30 30 #include <plat/omap7xx.h> 31 31 #include <plat/mcbsp.h> 32 32 33 + #include "clock.h" 34 + 33 35 /*-------------------------------------------------------------------------*/ 34 36 35 37 #if defined(CONFIG_RTC_DRV_OMAP) || defined(CONFIG_RTC_DRV_OMAP_MODULE) ··· 295 293 return -ENODEV; 296 294 297 295 omap_sram_init(); 296 + omap1_clk_late_init(); 298 297 299 298 /* please keep these calls, and their implementations above, 300 299 * in alphabetical order so they're easier to sort through.
+1
arch/arm/mach-omap2/Kconfig
··· 334 334 config OMAP3_EMU 335 335 bool "OMAP3 debugging peripherals" 336 336 depends on ARCH_OMAP3 337 + select ARM_AMBA 337 338 select OC_ETM 338 339 help 339 340 Say Y here to enable debugging hardware of omap3
+1 -4
arch/arm/mach-omap2/Makefile
··· 4 4 5 5 # Common support 6 6 obj-y := id.o io.o control.o mux.o devices.o serial.o gpmc.o timer.o pm.o \ 7 - common.o gpio.o dma.o wd_timer.o 7 + common.o gpio.o dma.o wd_timer.o display.o 8 8 9 9 omap-2-3-common = irq.o sdrc.o 10 10 hwmod-common = omap_hwmod.o \ ··· 263 263 smsc911x-$(CONFIG_SMSC911X) := gpmc-smsc911x.o 264 264 obj-y += $(smsc911x-m) $(smsc911x-y) 265 265 obj-$(CONFIG_ARCH_OMAP4) += hwspinlock.o 266 - 267 - disp-$(CONFIG_OMAP2_DSS) := display.o 268 - obj-y += $(disp-m) $(disp-y) 269 266 270 267 obj-y += common-board-devices.o twl-common.o
+1
arch/arm/mach-omap2/cpuidle34xx.c
··· 24 24 25 25 #include <linux/sched.h> 26 26 #include <linux/cpuidle.h> 27 + #include <linux/export.h> 27 28 28 29 #include <plat/prcm.h> 29 30 #include <plat/irqs.h>
+159
arch/arm/mach-omap2/display.c
··· 27 27 #include <plat/omap_hwmod.h> 28 28 #include <plat/omap_device.h> 29 29 #include <plat/omap-pm.h> 30 + #include <plat/common.h> 30 31 31 32 #include "control.h" 33 + #include "display.h" 34 + 35 + #define DISPC_CONTROL 0x0040 36 + #define DISPC_CONTROL2 0x0238 37 + #define DISPC_IRQSTATUS 0x0018 38 + 39 + #define DSS_SYSCONFIG 0x10 40 + #define DSS_SYSSTATUS 0x14 41 + #define DSS_CONTROL 0x40 42 + #define DSS_SDI_CONTROL 0x44 43 + #define DSS_PLL_CONTROL 0x48 44 + 45 + #define LCD_EN_MASK (0x1 << 0) 46 + #define DIGIT_EN_MASK (0x1 << 1) 47 + 48 + #define FRAMEDONE_IRQ_SHIFT 0 49 + #define EVSYNC_EVEN_IRQ_SHIFT 2 50 + #define EVSYNC_ODD_IRQ_SHIFT 3 51 + #define FRAMEDONE2_IRQ_SHIFT 22 52 + #define FRAMEDONETV_IRQ_SHIFT 24 53 + 54 + /* 55 + * FRAMEDONE_IRQ_TIMEOUT: how long (in milliseconds) to wait during DISPC 56 + * reset before deciding that something has gone wrong 57 + */ 58 + #define FRAMEDONE_IRQ_TIMEOUT 100 32 59 33 60 static struct platform_device omap_display_device = { 34 61 .name = "omapdss", ··· 196 169 r = platform_device_register(&omap_display_device); 197 170 if (r < 0) 198 171 printk(KERN_ERR "Unable to register OMAP-Display device\n"); 172 + 173 + return r; 174 + } 175 + 176 + static void dispc_disable_outputs(void) 177 + { 178 + u32 v, irq_mask = 0; 179 + bool lcd_en, digit_en, lcd2_en = false; 180 + int i; 181 + struct omap_dss_dispc_dev_attr *da; 182 + struct omap_hwmod *oh; 183 + 184 + oh = omap_hwmod_lookup("dss_dispc"); 185 + if (!oh) { 186 + WARN(1, "display: could not disable outputs during reset - could not find dss_dispc hwmod\n"); 187 + return; 188 + } 189 + 190 + if (!oh->dev_attr) { 191 + pr_err("display: could not disable outputs during reset due to missing dev_attr\n"); 192 + return; 193 + } 194 + 195 + da = (struct omap_dss_dispc_dev_attr *)oh->dev_attr; 196 + 197 + /* store value of LCDENABLE and DIGITENABLE bits */ 198 + v = omap_hwmod_read(oh, DISPC_CONTROL); 199 + lcd_en = v & LCD_EN_MASK; 200 + digit_en = v & 
DIGIT_EN_MASK; 201 + 202 + /* store value of LCDENABLE for LCD2 */ 203 + if (da->manager_count > 2) { 204 + v = omap_hwmod_read(oh, DISPC_CONTROL2); 205 + lcd2_en = v & LCD_EN_MASK; 206 + } 207 + 208 + if (!(lcd_en | digit_en | lcd2_en)) 209 + return; /* no managers currently enabled */ 210 + 211 + /* 212 + * If any manager was enabled, we need to disable it before 213 + * DSS clocks are disabled or DISPC module is reset 214 + */ 215 + if (lcd_en) 216 + irq_mask |= 1 << FRAMEDONE_IRQ_SHIFT; 217 + 218 + if (digit_en) { 219 + if (da->has_framedonetv_irq) { 220 + irq_mask |= 1 << FRAMEDONETV_IRQ_SHIFT; 221 + } else { 222 + irq_mask |= 1 << EVSYNC_EVEN_IRQ_SHIFT | 223 + 1 << EVSYNC_ODD_IRQ_SHIFT; 224 + } 225 + } 226 + 227 + if (lcd2_en) 228 + irq_mask |= 1 << FRAMEDONE2_IRQ_SHIFT; 229 + 230 + /* 231 + * clear any previous FRAMEDONE, FRAMEDONETV, 232 + * EVSYNC_EVEN/ODD or FRAMEDONE2 interrupts 233 + */ 234 + omap_hwmod_write(irq_mask, oh, DISPC_IRQSTATUS); 235 + 236 + /* disable LCD and TV managers */ 237 + v = omap_hwmod_read(oh, DISPC_CONTROL); 238 + v &= ~(LCD_EN_MASK | DIGIT_EN_MASK); 239 + omap_hwmod_write(v, oh, DISPC_CONTROL); 240 + 241 + /* disable LCD2 manager */ 242 + if (da->manager_count > 2) { 243 + v = omap_hwmod_read(oh, DISPC_CONTROL2); 244 + v &= ~LCD_EN_MASK; 245 + omap_hwmod_write(v, oh, DISPC_CONTROL2); 246 + } 247 + 248 + i = 0; 249 + while ((omap_hwmod_read(oh, DISPC_IRQSTATUS) & irq_mask) != 250 + irq_mask) { 251 + i++; 252 + if (i > FRAMEDONE_IRQ_TIMEOUT) { 253 + pr_err("didn't get FRAMEDONE1/2 or TV interrupt\n"); 254 + break; 255 + } 256 + mdelay(1); 257 + } 258 + } 259 + 260 + #define MAX_MODULE_SOFTRESET_WAIT 10000 261 + int omap_dss_reset(struct omap_hwmod *oh) 262 + { 263 + struct omap_hwmod_opt_clk *oc; 264 + int c = 0; 265 + int i, r; 266 + 267 + if (!(oh->class->sysc->sysc_flags & SYSS_HAS_RESET_STATUS)) { 268 + pr_err("dss_core: hwmod data doesn't contain reset data\n"); 269 + return -EINVAL; 270 + } 271 + 272 + for (i = 
oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++) 273 + if (oc->_clk) 274 + clk_enable(oc->_clk); 275 + 276 + dispc_disable_outputs(); 277 + 278 + /* clear SDI registers */ 279 + if (cpu_is_omap3430()) { 280 + omap_hwmod_write(0x0, oh, DSS_SDI_CONTROL); 281 + omap_hwmod_write(0x0, oh, DSS_PLL_CONTROL); 282 + } 283 + 284 + /* 285 + * clear DSS_CONTROL register to switch DSS clock sources to 286 + * PRCM clock, if any 287 + */ 288 + omap_hwmod_write(0x0, oh, DSS_CONTROL); 289 + 290 + omap_test_timeout((omap_hwmod_read(oh, oh->class->sysc->syss_offs) 291 + & SYSS_RESETDONE_MASK), 292 + MAX_MODULE_SOFTRESET_WAIT, c); 293 + 294 + if (c == MAX_MODULE_SOFTRESET_WAIT) 295 + pr_warning("dss_core: waiting for reset to finish failed\n"); 296 + else 297 + pr_debug("dss_core: softreset done\n"); 298 + 299 + for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++) 300 + if (oc->_clk) 301 + clk_disable(oc->_clk); 302 + 303 + r = (c == MAX_MODULE_SOFTRESET_WAIT) ? -ETIMEDOUT : 0; 199 304 200 305 return r; 201 306 }
+29
arch/arm/mach-omap2/display.h
··· 1 + /* 2 + * display.h - OMAP2+ integration-specific DSS header 3 + * 4 + * Copyright (C) 2011 Texas Instruments, Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms of the GNU General Public License version 2 as published by 8 + * the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, but WITHOUT 11 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + * more details. 14 + * 15 + * You should have received a copy of the GNU General Public License along with 16 + * this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + 19 + #ifndef __ARCH_ARM_MACH_OMAP2_DISPLAY_H 20 + #define __ARCH_ARM_MACH_OMAP2_DISPLAY_H 21 + 22 + #include <linux/kernel.h> 23 + 24 + struct omap_dss_dispc_dev_attr { 25 + u8 manager_count; 26 + bool has_framedonetv_irq; 27 + }; 28 + 29 + #endif
arch/arm/mach-omap2/io.h
+3 -3
arch/arm/mach-omap2/omap_hwmod.c
··· 749 749 ohii = &oh->mpu_irqs[i++]; 750 750 } while (ohii->irq != -1); 751 751 752 - return i; 752 + return i-1; 753 753 } 754 754 755 755 /** ··· 772 772 ohdi = &oh->sdma_reqs[i++]; 773 773 } while (ohdi->dma_req != -1); 774 774 775 - return i; 775 + return i-1; 776 776 } 777 777 778 778 /** ··· 795 795 mem = &os->addr[i++]; 796 796 } while (mem->pa_start != mem->pa_end); 797 797 798 - return i; 798 + return i-1; 799 799 } 800 800 801 801 /**
+14 -3
arch/arm/mach-omap2/omap_hwmod_2420_data.c
··· 875 875 }; 876 876 877 877 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 878 + /* 879 + * The DSS HW needs all DSS clocks enabled during reset. The dss_core 880 + * driver does not use these clocks. 881 + */ 878 882 { .role = "tv_clk", .clk = "dss_54m_fck" }, 879 883 { .role = "sys_clk", .clk = "dss2_fck" }, 880 884 }; ··· 903 899 .slaves_cnt = ARRAY_SIZE(omap2420_dss_slaves), 904 900 .masters = omap2420_dss_masters, 905 901 .masters_cnt = ARRAY_SIZE(omap2420_dss_masters), 906 - .flags = HWMOD_NO_IDLEST, 902 + .flags = HWMOD_NO_IDLEST | HWMOD_CONTROL_OPT_CLKS_IN_RESET, 907 903 }; 908 904 909 905 /* l4_core -> dss_dispc */ ··· 943 939 .slaves = omap2420_dss_dispc_slaves, 944 940 .slaves_cnt = ARRAY_SIZE(omap2420_dss_dispc_slaves), 945 941 .flags = HWMOD_NO_IDLEST, 942 + .dev_attr = &omap2_3_dss_dispc_dev_attr 946 943 }; 947 944 948 945 /* l4_core -> dss_rfbi */ ··· 966 961 &omap2420_l4_core__dss_rfbi, 967 962 }; 968 963 964 + static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = { 965 + { .role = "ick", .clk = "dss_ick" }, 966 + }; 967 + 969 968 static struct omap_hwmod omap2420_dss_rfbi_hwmod = { 970 969 .name = "dss_rfbi", 971 970 .class = &omap2_rfbi_hwmod_class, ··· 981 972 .module_offs = CORE_MOD, 982 973 }, 983 974 }, 975 + .opt_clks = dss_rfbi_opt_clks, 976 + .opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks), 984 977 .slaves = omap2420_dss_rfbi_slaves, 985 978 .slaves_cnt = ARRAY_SIZE(omap2420_dss_rfbi_slaves), 986 979 .flags = HWMOD_NO_IDLEST, ··· 992 981 static struct omap_hwmod_ocp_if omap2420_l4_core__dss_venc = { 993 982 .master = &omap2420_l4_core_hwmod, 994 983 .slave = &omap2420_dss_venc_hwmod, 995 - .clk = "dss_54m_fck", 984 + .clk = "dss_ick", 996 985 .addr = omap2_dss_venc_addrs, 997 986 .fw = { 998 987 .omap2 = { ··· 1012 1001 static struct omap_hwmod omap2420_dss_venc_hwmod = { 1013 1002 .name = "dss_venc", 1014 1003 .class = &omap2_venc_hwmod_class, 1015 - .main_clk = "dss1_fck", 1004 + .main_clk = "dss_54m_fck", 1016 1005 .prcm = { 1017 
1006 .omap2 = { 1018 1007 .prcm_reg_id = 1,
+14 -3
arch/arm/mach-omap2/omap_hwmod_2430_data.c
··· 942 942 }; 943 943 944 944 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 945 + /* 946 + * The DSS HW needs all DSS clocks enabled during reset. The dss_core 947 + * driver does not use these clocks. 948 + */ 945 949 { .role = "tv_clk", .clk = "dss_54m_fck" }, 946 950 { .role = "sys_clk", .clk = "dss2_fck" }, 947 951 }; ··· 970 966 .slaves_cnt = ARRAY_SIZE(omap2430_dss_slaves), 971 967 .masters = omap2430_dss_masters, 972 968 .masters_cnt = ARRAY_SIZE(omap2430_dss_masters), 973 - .flags = HWMOD_NO_IDLEST, 969 + .flags = HWMOD_NO_IDLEST | HWMOD_CONTROL_OPT_CLKS_IN_RESET, 974 970 }; 975 971 976 972 /* l4_core -> dss_dispc */ ··· 1004 1000 .slaves = omap2430_dss_dispc_slaves, 1005 1001 .slaves_cnt = ARRAY_SIZE(omap2430_dss_dispc_slaves), 1006 1002 .flags = HWMOD_NO_IDLEST, 1003 + .dev_attr = &omap2_3_dss_dispc_dev_attr 1007 1004 }; 1008 1005 1009 1006 /* l4_core -> dss_rfbi */ ··· 1021 1016 &omap2430_l4_core__dss_rfbi, 1022 1017 }; 1023 1018 1019 + static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = { 1020 + { .role = "ick", .clk = "dss_ick" }, 1021 + }; 1022 + 1024 1023 static struct omap_hwmod omap2430_dss_rfbi_hwmod = { 1025 1024 .name = "dss_rfbi", 1026 1025 .class = &omap2_rfbi_hwmod_class, ··· 1036 1027 .module_offs = CORE_MOD, 1037 1028 }, 1038 1029 }, 1030 + .opt_clks = dss_rfbi_opt_clks, 1031 + .opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks), 1039 1032 .slaves = omap2430_dss_rfbi_slaves, 1040 1033 .slaves_cnt = ARRAY_SIZE(omap2430_dss_rfbi_slaves), 1041 1034 .flags = HWMOD_NO_IDLEST, ··· 1047 1036 static struct omap_hwmod_ocp_if omap2430_l4_core__dss_venc = { 1048 1037 .master = &omap2430_l4_core_hwmod, 1049 1038 .slave = &omap2430_dss_venc_hwmod, 1050 - .clk = "dss_54m_fck", 1039 + .clk = "dss_ick", 1051 1040 .addr = omap2_dss_venc_addrs, 1052 1041 .flags = OCPIF_SWSUP_IDLE, 1053 1042 .user = OCP_USER_MPU | OCP_USER_SDMA, ··· 1061 1050 static struct omap_hwmod omap2430_dss_venc_hwmod = { 1062 1051 .name = "dss_venc", 1063 1052 .class = 
&omap2_venc_hwmod_class, 1064 - .main_clk = "dss1_fck", 1053 + .main_clk = "dss_54m_fck", 1065 1054 .prcm = { 1066 1055 .omap2 = { 1067 1056 .prcm_reg_id = 1,
+4 -1
arch/arm/mach-omap2/omap_hwmod_2xxx_3xxx_ipblock_data.c
··· 11 11 #include <plat/omap_hwmod.h> 12 12 #include <plat/serial.h> 13 13 #include <plat/dma.h> 14 + #include <plat/common.h> 14 15 15 16 #include <mach/irqs.h> 16 17 ··· 44 43 .rev_offs = 0x0000, 45 44 .sysc_offs = 0x0010, 46 45 .syss_offs = 0x0014, 47 - .sysc_flags = (SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE), 46 + .sysc_flags = (SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE | 47 + SYSS_HAS_RESET_STATUS), 48 48 .sysc_fields = &omap_hwmod_sysc_type1, 49 49 }; 50 50 51 51 struct omap_hwmod_class omap2_dss_hwmod_class = { 52 52 .name = "dss", 53 53 .sysc = &omap2_dss_sysc, 54 + .reset = omap_dss_reset, 54 55 }; 55 56 56 57 /*
+32 -5
arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
··· 1369 1369 }; 1370 1370 1371 1371 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 1372 - { .role = "tv_clk", .clk = "dss_tv_fck" }, 1373 - { .role = "video_clk", .clk = "dss_96m_fck" }, 1372 + /* 1373 + * The DSS HW needs all DSS clocks enabled during reset. The dss_core 1374 + * driver does not use these clocks. 1375 + */ 1374 1376 { .role = "sys_clk", .clk = "dss2_alwon_fck" }, 1377 + { .role = "tv_clk", .clk = "dss_tv_fck" }, 1378 + /* required only on OMAP3430 */ 1379 + { .role = "tv_dac_clk", .clk = "dss_96m_fck" }, 1375 1380 }; 1376 1381 1377 1382 static struct omap_hwmod omap3430es1_dss_core_hwmod = { ··· 1399 1394 .slaves_cnt = ARRAY_SIZE(omap3430es1_dss_slaves), 1400 1395 .masters = omap3xxx_dss_masters, 1401 1396 .masters_cnt = ARRAY_SIZE(omap3xxx_dss_masters), 1402 - .flags = HWMOD_NO_IDLEST, 1397 + .flags = HWMOD_NO_IDLEST | HWMOD_CONTROL_OPT_CLKS_IN_RESET, 1403 1398 }; 1404 1399 1405 1400 static struct omap_hwmod omap3xxx_dss_core_hwmod = { 1406 1401 .name = "dss_core", 1402 + .flags = HWMOD_CONTROL_OPT_CLKS_IN_RESET, 1407 1403 .class = &omap2_dss_hwmod_class, 1408 1404 .main_clk = "dss1_alwon_fck", /* instead of dss_fck */ 1409 1405 .sdma_reqs = omap3xxx_dss_sdma_chs, ··· 1462 1456 .slaves = omap3xxx_dss_dispc_slaves, 1463 1457 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_dispc_slaves), 1464 1458 .flags = HWMOD_NO_IDLEST, 1459 + .dev_attr = &omap2_3_dss_dispc_dev_attr 1465 1460 }; 1466 1461 1467 1462 /* ··· 1493 1486 static struct omap_hwmod_ocp_if omap3xxx_l4_core__dss_dsi1 = { 1494 1487 .master = &omap3xxx_l4_core_hwmod, 1495 1488 .slave = &omap3xxx_dss_dsi1_hwmod, 1489 + .clk = "dss_ick", 1496 1490 .addr = omap3xxx_dss_dsi1_addrs, 1497 1491 .fw = { 1498 1492 .omap2 = { ··· 1510 1502 &omap3xxx_l4_core__dss_dsi1, 1511 1503 }; 1512 1504 1505 + static struct omap_hwmod_opt_clk dss_dsi1_opt_clks[] = { 1506 + { .role = "sys_clk", .clk = "dss2_alwon_fck" }, 1507 + }; 1508 + 1513 1509 static struct omap_hwmod omap3xxx_dss_dsi1_hwmod = { 1514 1510 .name = 
"dss_dsi1", 1515 1511 .class = &omap3xxx_dsi_hwmod_class, ··· 1526 1514 .module_offs = OMAP3430_DSS_MOD, 1527 1515 }, 1528 1516 }, 1517 + .opt_clks = dss_dsi1_opt_clks, 1518 + .opt_clks_cnt = ARRAY_SIZE(dss_dsi1_opt_clks), 1529 1519 .slaves = omap3xxx_dss_dsi1_slaves, 1530 1520 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_dsi1_slaves), 1531 1521 .flags = HWMOD_NO_IDLEST, ··· 1554 1540 &omap3xxx_l4_core__dss_rfbi, 1555 1541 }; 1556 1542 1543 + static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = { 1544 + { .role = "ick", .clk = "dss_ick" }, 1545 + }; 1546 + 1557 1547 static struct omap_hwmod omap3xxx_dss_rfbi_hwmod = { 1558 1548 .name = "dss_rfbi", 1559 1549 .class = &omap2_rfbi_hwmod_class, ··· 1569 1551 .module_offs = OMAP3430_DSS_MOD, 1570 1552 }, 1571 1553 }, 1554 + .opt_clks = dss_rfbi_opt_clks, 1555 + .opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks), 1572 1556 .slaves = omap3xxx_dss_rfbi_slaves, 1573 1557 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_rfbi_slaves), 1574 1558 .flags = HWMOD_NO_IDLEST, ··· 1580 1560 static struct omap_hwmod_ocp_if omap3xxx_l4_core__dss_venc = { 1581 1561 .master = &omap3xxx_l4_core_hwmod, 1582 1562 .slave = &omap3xxx_dss_venc_hwmod, 1583 - .clk = "dss_tv_fck", 1563 + .clk = "dss_ick", 1584 1564 .addr = omap2_dss_venc_addrs, 1585 1565 .fw = { 1586 1566 .omap2 = { ··· 1598 1578 &omap3xxx_l4_core__dss_venc, 1599 1579 }; 1600 1580 1581 + static struct omap_hwmod_opt_clk dss_venc_opt_clks[] = { 1582 + /* required only on OMAP3430 */ 1583 + { .role = "tv_dac_clk", .clk = "dss_96m_fck" }, 1584 + }; 1585 + 1601 1586 static struct omap_hwmod omap3xxx_dss_venc_hwmod = { 1602 1587 .name = "dss_venc", 1603 1588 .class = &omap2_venc_hwmod_class, 1604 - .main_clk = "dss1_alwon_fck", 1589 + .main_clk = "dss_tv_fck", 1605 1590 .prcm = { 1606 1591 .omap2 = { 1607 1592 .prcm_reg_id = 1, ··· 1614 1589 .module_offs = OMAP3430_DSS_MOD, 1615 1590 }, 1616 1591 }, 1592 + .opt_clks = dss_venc_opt_clks, 1593 + .opt_clks_cnt = ARRAY_SIZE(dss_venc_opt_clks), 1617 1594 
.slaves = omap3xxx_dss_venc_slaves, 1618 1595 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_venc_slaves), 1619 1596 .flags = HWMOD_NO_IDLEST,
+12 -12
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
··· 30 30 #include <plat/mmc.h> 31 31 #include <plat/i2c.h> 32 32 #include <plat/dmtimer.h> 33 + #include <plat/common.h> 33 34 34 35 #include "omap_hwmod_common_data.h" 35 36 ··· 1188 1187 static struct omap_hwmod_class omap44xx_dss_hwmod_class = { 1189 1188 .name = "dss", 1190 1189 .sysc = &omap44xx_dss_sysc, 1190 + .reset = omap_dss_reset, 1191 1191 }; 1192 1192 1193 1193 /* dss */ ··· 1242 1240 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 1243 1241 { .role = "sys_clk", .clk = "dss_sys_clk" }, 1244 1242 { .role = "tv_clk", .clk = "dss_tv_clk" }, 1245 - { .role = "dss_clk", .clk = "dss_dss_clk" }, 1246 - { .role = "video_clk", .clk = "dss_48mhz_clk" }, 1243 + { .role = "hdmi_clk", .clk = "dss_48mhz_clk" }, 1247 1244 }; 1248 1245 1249 1246 static struct omap_hwmod omap44xx_dss_hwmod = { 1250 1247 .name = "dss_core", 1248 + .flags = HWMOD_CONTROL_OPT_CLKS_IN_RESET, 1251 1249 .class = &omap44xx_dss_hwmod_class, 1252 1250 .clkdm_name = "l3_dss_clkdm", 1253 1251 .main_clk = "dss_dss_clk", ··· 1327 1325 { } 1328 1326 }; 1329 1327 1328 + static struct omap_dss_dispc_dev_attr omap44xx_dss_dispc_dev_attr = { 1329 + .manager_count = 3, 1330 + .has_framedonetv_irq = 1 1331 + }; 1332 + 1330 1333 /* l4_per -> dss_dispc */ 1331 1334 static struct omap_hwmod_ocp_if omap44xx_l4_per__dss_dispc = { 1332 1335 .master = &omap44xx_l4_per_hwmod, ··· 1347 1340 &omap44xx_l4_per__dss_dispc, 1348 1341 }; 1349 1342 1350 - static struct omap_hwmod_opt_clk dss_dispc_opt_clks[] = { 1351 - { .role = "sys_clk", .clk = "dss_sys_clk" }, 1352 - { .role = "tv_clk", .clk = "dss_tv_clk" }, 1353 - { .role = "hdmi_clk", .clk = "dss_48mhz_clk" }, 1354 - }; 1355 - 1356 1343 static struct omap_hwmod omap44xx_dss_dispc_hwmod = { 1357 1344 .name = "dss_dispc", 1358 1345 .class = &omap44xx_dispc_hwmod_class, ··· 1360 1359 .context_offs = OMAP4_RM_DSS_DSS_CONTEXT_OFFSET, 1361 1360 }, 1362 1361 }, 1363 - .opt_clks = dss_dispc_opt_clks, 1364 - .opt_clks_cnt = ARRAY_SIZE(dss_dispc_opt_clks), 1365 1362 
.slaves = omap44xx_dss_dispc_slaves, 1366 1363 .slaves_cnt = ARRAY_SIZE(omap44xx_dss_dispc_slaves), 1364 + .dev_attr = &omap44xx_dss_dispc_dev_attr 1367 1365 }; 1368 1366 1369 1367 /* ··· 1624 1624 .clkdm_name = "l3_dss_clkdm", 1625 1625 .mpu_irqs = omap44xx_dss_hdmi_irqs, 1626 1626 .sdma_reqs = omap44xx_dss_hdmi_sdma_reqs, 1627 - .main_clk = "dss_dss_clk", 1627 + .main_clk = "dss_48mhz_clk", 1628 1628 .prcm = { 1629 1629 .omap4 = { 1630 1630 .clkctrl_offs = OMAP4_CM_DSS_DSS_CLKCTRL_OFFSET, ··· 1785 1785 .name = "dss_venc", 1786 1786 .class = &omap44xx_venc_hwmod_class, 1787 1787 .clkdm_name = "l3_dss_clkdm", 1788 - .main_clk = "dss_dss_clk", 1788 + .main_clk = "dss_tv_clk", 1789 1789 .prcm = { 1790 1790 .omap4 = { 1791 1791 .clkctrl_offs = OMAP4_CM_DSS_DSS_CLKCTRL_OFFSET,
+4
arch/arm/mach-omap2/omap_hwmod_common_data.c
··· 49 49 .srst_shift = SYSC_TYPE2_SOFTRESET_SHIFT, 50 50 }; 51 51 52 + struct omap_dss_dispc_dev_attr omap2_3_dss_dispc_dev_attr = { 53 + .manager_count = 2, 54 + .has_framedonetv_irq = 0 55 + };
+4
arch/arm/mach-omap2/omap_hwmod_common_data.h
··· 16 16 17 17 #include <plat/omap_hwmod.h> 18 18 19 + #include "display.h" 20 + 19 21 /* Common address space across OMAP2xxx */ 20 22 extern struct omap_hwmod_addr_space omap2xxx_uart1_addr_space[]; 21 23 extern struct omap_hwmod_addr_space omap2xxx_uart2_addr_space[]; ··· 112 110 extern struct omap_hwmod_class omap2xxx_dma_hwmod_class; 113 111 extern struct omap_hwmod_class omap2xxx_mailbox_hwmod_class; 114 112 extern struct omap_hwmod_class omap2xxx_mcspi_class; 113 + 114 + extern struct omap_dss_dispc_dev_attr omap2_3_dss_dispc_dev_attr; 115 115 116 116 #endif
+1 -1
arch/arm/mach-omap2/omap_l3_noc.c
··· 237 237 static const struct of_device_id l3_noc_match[] = { 238 238 {.compatible = "ti,omap4-l3-noc", }, 239 239 {}, 240 - } 240 + }; 241 241 MODULE_DEVICE_TABLE(of, l3_noc_match); 242 242 #else 243 243 #define l3_noc_match NULL
+2 -4
arch/arm/mach-omap2/pm.c
··· 24 24 #include "powerdomain.h" 25 25 #include "clockdomain.h" 26 26 #include "pm.h" 27 + #include "twl-common.h" 27 28 28 29 static struct omap_device_pm_latency *pm_lats; 29 30 ··· 227 226 228 227 static int __init omap2_common_pm_late_init(void) 229 228 { 230 - /* Init the OMAP TWL parameters */ 231 - omap3_twl_init(); 232 - omap4_twl_init(); 233 - 234 229 /* Init the voltage layer */ 230 + omap_pmic_late_init(); 235 231 omap_voltage_late_init(); 236 232 237 233 /* Initialize the voltages */
+1 -1
arch/arm/mach-omap2/smartreflex.c
··· 139 139 sr_write_reg(sr_info, ERRCONFIG_V1, status); 140 140 } else if (sr_info->ip_type == SR_TYPE_V2) { 141 141 /* Read the status bits */ 142 - sr_read_reg(sr_info, IRQSTATUS); 142 + status = sr_read_reg(sr_info, IRQSTATUS); 143 143 144 144 /* Clear them by writing back */ 145 145 sr_write_reg(sr_info, IRQSTATUS, status);
+11
arch/arm/mach-omap2/twl-common.c
··· 30 30 #include <plat/usb.h> 31 31 32 32 #include "twl-common.h" 33 + #include "pm.h" 33 34 34 35 static struct i2c_board_info __initdata pmic_i2c_board_info = { 35 36 .addr = 0x48, ··· 47 46 pmic_i2c_board_info.platform_data = pmic_data; 48 47 49 48 omap_register_i2c_bus(bus, clkrate, &pmic_i2c_board_info, 1); 49 + } 50 + 51 + void __init omap_pmic_late_init(void) 52 + { 53 + /* Init the OMAP TWL parameters (if PMIC has been registerd) */ 54 + if (!pmic_i2c_board_info.irq) 55 + return; 56 + 57 + omap3_twl_init(); 58 + omap4_twl_init(); 50 59 } 51 60 52 61 #if defined(CONFIG_ARCH_OMAP3)
+3
arch/arm/mach-omap2/twl-common.h
··· 1 1 #ifndef __OMAP_PMIC_COMMON__ 2 2 #define __OMAP_PMIC_COMMON__ 3 3 4 + #include <plat/irqs.h> 5 + 4 6 #define TWL_COMMON_PDATA_USB (1 << 0) 5 7 #define TWL_COMMON_PDATA_BCI (1 << 1) 6 8 #define TWL_COMMON_PDATA_MADC (1 << 2) ··· 32 30 33 31 void omap_pmic_init(int bus, u32 clkrate, const char *pmic_type, int pmic_irq, 34 32 struct twl4030_platform_data *pmic_data); 33 + void omap_pmic_late_init(void); 35 34 36 35 static inline void omap2_pmic_init(const char *pmic_type, 37 36 struct twl4030_platform_data *pmic_data)
+1 -1
arch/arm/mach-pxa/balloon3.c
··· 307 307 /****************************************************************************** 308 308 * USB Gadget 309 309 ******************************************************************************/ 310 - #if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE) 310 + #if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE) 311 311 static void balloon3_udc_command(int cmd) 312 312 { 313 313 if (cmd == PXA2XX_UDC_CMD_CONNECT)
+1 -1
arch/arm/mach-pxa/colibri-pxa320.c
··· 146 146 static inline void __init colibri_pxa320_init_eth(void) {} 147 147 #endif /* CONFIG_AX88796 */ 148 148 149 - #if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE) 149 + #if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE) 150 150 static struct gpio_vbus_mach_info colibri_pxa320_gpio_vbus_info = { 151 151 .gpio_vbus = mfp_to_gpio(MFP_PIN_GPIO96), 152 152 .gpio_pullup = -1,
+1 -1
arch/arm/mach-pxa/gumstix.c
··· 106 106 } 107 107 #endif 108 108 109 - #ifdef CONFIG_USB_GADGET_PXA25X 109 + #ifdef CONFIG_USB_PXA25X 110 110 static struct gpio_vbus_mach_info gumstix_udc_info = { 111 111 .gpio_vbus = GPIO_GUMSTIX_USB_GPIOn, 112 112 .gpio_pullup = GPIO_GUMSTIX_USB_GPIOx,
+2 -2
arch/arm/mach-pxa/include/mach/palm27x.h
··· 37 37 #define palm27x_lcd_init(power, mode) do {} while (0) 38 38 #endif 39 39 40 - #if defined(CONFIG_USB_GADGET_PXA27X) || \ 41 - defined(CONFIG_USB_GADGET_PXA27X_MODULE) 40 + #if defined(CONFIG_USB_PXA27X) || \ 41 + defined(CONFIG_USB_PXA27X_MODULE) 42 42 extern void __init palm27x_udc_init(int vbus, int pullup, 43 43 int vbus_inverted); 44 44 #else
+2 -2
arch/arm/mach-pxa/palm27x.c
··· 164 164 /****************************************************************************** 165 165 * USB Gadget 166 166 ******************************************************************************/ 167 - #if defined(CONFIG_USB_GADGET_PXA27X) || \ 168 - defined(CONFIG_USB_GADGET_PXA27X_MODULE) 167 + #if defined(CONFIG_USB_PXA27X) || \ 168 + defined(CONFIG_USB_PXA27X_MODULE) 169 169 static struct gpio_vbus_mach_info palm27x_udc_info = { 170 170 .gpio_vbus_inverted = 1, 171 171 };
+1 -1
arch/arm/mach-pxa/palmtc.c
··· 338 338 /****************************************************************************** 339 339 * UDC 340 340 ******************************************************************************/ 341 - #if defined(CONFIG_USB_GADGET_PXA25X)||defined(CONFIG_USB_GADGET_PXA25X_MODULE) 341 + #if defined(CONFIG_USB_PXA25X)||defined(CONFIG_USB_PXA25X_MODULE) 342 342 static struct gpio_vbus_mach_info palmtc_udc_info = { 343 343 .gpio_vbus = GPIO_NR_PALMTC_USB_DETECT_N, 344 344 .gpio_vbus_inverted = 1,
+1 -1
arch/arm/mach-pxa/vpac270.c
··· 343 343 /****************************************************************************** 344 344 * USB Gadget 345 345 ******************************************************************************/ 346 - #if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE) 346 + #if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE) 347 347 static struct gpio_vbus_mach_info vpac270_gpio_vbus_info = { 348 348 .gpio_vbus = GPIO41_VPAC270_UDC_DETECT, 349 349 .gpio_pullup = -1,
+1 -1
arch/arm/mach-s3c64xx/mach-crag6410-module.c
··· 8 8 * published by the Free Software Foundation. 9 9 */ 10 10 11 - #include <linux/module.h> 11 + #include <linux/export.h> 12 12 #include <linux/interrupt.h> 13 13 #include <linux/i2c.h> 14 14
+1 -1
arch/arm/mm/cache-l2x0.c
··· 61 61 { 62 62 void __iomem *base = l2x0_base; 63 63 64 - #ifdef CONFIG_ARM_ERRATA_753970 64 + #ifdef CONFIG_PL310_ERRATA_753970 65 65 /* write to an unmmapped register */ 66 66 writel_relaxed(0, base + L2X0_DUMMY_REG); 67 67 #else
+10 -1
arch/arm/mm/dma-mapping.c
··· 168 168 pte_t *pte; 169 169 int i = 0; 170 170 unsigned long base = consistent_base; 171 - unsigned long num_ptes = (CONSISTENT_END - base) >> PGDIR_SHIFT; 171 + unsigned long num_ptes = (CONSISTENT_END - base) >> PMD_SHIFT; 172 172 173 173 consistent_pte = kmalloc(num_ptes * sizeof(pte_t), GFP_KERNEL); 174 174 if (!consistent_pte) { ··· 331 331 { 332 332 struct page *page; 333 333 void *addr; 334 + 335 + /* 336 + * Following is a work-around (a.k.a. hack) to prevent pages 337 + * with __GFP_COMP being passed to split_page() which cannot 338 + * handle them. The real problem is that this flag probably 339 + * should be 0 on ARM as it is not supported on this 340 + * platform; see CONFIG_HUGETLBFS. 341 + */ 342 + gfp &= ~(__GFP_COMP); 334 343 335 344 *handle = ~0; 336 345 size = PAGE_ALIGN(size);
+6 -17
arch/arm/mm/mmap.c
··· 9 9 #include <linux/io.h> 10 10 #include <linux/personality.h> 11 11 #include <linux/random.h> 12 - #include <asm/cputype.h> 13 - #include <asm/system.h> 12 + #include <asm/cachetype.h> 14 13 15 14 #define COLOUR_ALIGN(addr,pgoff) \ 16 15 ((((addr)+SHMLBA-1)&~(SHMLBA-1)) + \ ··· 31 32 struct mm_struct *mm = current->mm; 32 33 struct vm_area_struct *vma; 33 34 unsigned long start_addr; 34 - #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K) 35 - unsigned int cache_type; 36 - int do_align = 0, aliasing = 0; 35 + int do_align = 0; 36 + int aliasing = cache_is_vipt_aliasing(); 37 37 38 38 /* 39 39 * We only need to do colour alignment if either the I or D 40 - * caches alias. This is indicated by bits 9 and 21 of the 41 - * cache type register. 40 + * caches alias. 42 41 */ 43 - cache_type = read_cpuid_cachetype(); 44 - if (cache_type != read_cpuid_id()) { 45 - aliasing = (cache_type | cache_type >> 12) & (1 << 11); 46 - if (aliasing) 47 - do_align = filp || flags & MAP_SHARED; 48 - } 49 - #else 50 - #define do_align 0 51 - #define aliasing 0 52 - #endif 42 + if (aliasing) 43 + do_align = filp || (flags & MAP_SHARED); 53 44 54 45 /* 55 46 * We enforce the MAP_FIXED case.
+1 -1
arch/arm/plat-mxc/include/mach/common.h
··· 85 85 }; 86 86 87 87 extern void mx5_cpu_lp_set(enum mxc_cpu_pwr_mode mode); 88 - extern void (*imx_idle)(void); 89 88 extern void imx_print_silicon_rev(const char *cpu, int srev); 90 89 91 90 void avic_handle_irq(struct pt_regs *); ··· 132 133 extern void imx53_smd_common_init(void); 133 134 extern int imx6q_set_lpm(enum mxc_cpu_pwr_mode mode); 134 135 extern void imx6q_pm_init(void); 136 + extern void imx6q_clock_map_io(void); 135 137 #endif
-14
arch/arm/plat-mxc/include/mach/mxc.h
··· 50 50 #define IMX_CHIP_REVISION_3_3 0x33 51 51 #define IMX_CHIP_REVISION_UNKNOWN 0xff 52 52 53 - #define IMX_CHIP_REVISION_1_0_STRING "1.0" 54 - #define IMX_CHIP_REVISION_1_1_STRING "1.1" 55 - #define IMX_CHIP_REVISION_1_2_STRING "1.2" 56 - #define IMX_CHIP_REVISION_1_3_STRING "1.3" 57 - #define IMX_CHIP_REVISION_2_0_STRING "2.0" 58 - #define IMX_CHIP_REVISION_2_1_STRING "2.1" 59 - #define IMX_CHIP_REVISION_2_2_STRING "2.2" 60 - #define IMX_CHIP_REVISION_2_3_STRING "2.3" 61 - #define IMX_CHIP_REVISION_3_0_STRING "3.0" 62 - #define IMX_CHIP_REVISION_3_1_STRING "3.1" 63 - #define IMX_CHIP_REVISION_3_2_STRING "3.2" 64 - #define IMX_CHIP_REVISION_3_3_STRING "3.3" 65 - #define IMX_CHIP_REVISION_UNKNOWN_STRING "unknown" 66 - 67 53 #ifndef __ASSEMBLY__ 68 54 extern unsigned int __mxc_cpu_type; 69 55 #endif
+1 -6
arch/arm/plat-mxc/include/mach/system.h
··· 17 17 #ifndef __ASM_ARCH_MXC_SYSTEM_H__ 18 18 #define __ASM_ARCH_MXC_SYSTEM_H__ 19 19 20 - extern void (*imx_idle)(void); 21 - 22 20 static inline void arch_idle(void) 23 21 { 24 - if (imx_idle != NULL) 25 - (imx_idle)(); 26 - else 27 - cpu_do_idle(); 22 + cpu_do_idle(); 28 23 } 29 24 30 25 void arch_reset(char mode, const char *cmd);
+2 -1
arch/arm/plat-mxc/system.c
··· 21 21 #include <linux/io.h> 22 22 #include <linux/err.h> 23 23 #include <linux/delay.h> 24 + #include <linux/module.h> 24 25 25 26 #include <mach/hardware.h> 26 27 #include <mach/common.h> ··· 29 28 #include <asm/system.h> 30 29 #include <asm/mach-types.h> 31 30 32 - void (*imx_idle)(void) = NULL; 33 31 void __iomem *(*imx_ioremap)(unsigned long, size_t, unsigned int) = NULL; 32 + EXPORT_SYMBOL_GPL(imx_ioremap); 34 33 35 34 static void __iomem *wdog_base; 36 35
+1 -1
arch/arm/plat-omap/include/plat/clock.h
··· 165 165 u8 auto_recal_bit; 166 166 u8 recal_en_bit; 167 167 u8 recal_st_bit; 168 - u8 flags; 169 168 # endif 169 + u8 flags; 170 170 }; 171 171 172 172 #endif
+3
arch/arm/plat-omap/include/plat/common.h
··· 30 30 #include <linux/delay.h> 31 31 32 32 #include <plat/i2c.h> 33 + #include <plat/omap_hwmod.h> 33 34 34 35 struct sys_timer; 35 36 ··· 55 54 void am35xx_init_early(void); 56 55 void ti816x_init_early(void); 57 56 void omap4430_init_early(void); 57 + 58 + extern int omap_dss_reset(struct omap_hwmod *); 58 59 59 60 void omap_sram_init(void); 60 61
+1 -1
arch/arm/plat-s3c24xx/cpu-freq-debugfs.c
··· 12 12 */ 13 13 14 14 #include <linux/init.h> 15 - #include <linux/module.h> 15 + #include <linux/export.h> 16 16 #include <linux/interrupt.h> 17 17 #include <linux/ioport.h> 18 18 #include <linux/cpufreq.h>
+1
arch/arm/plat-s5p/sysmmu.c
··· 11 11 #include <linux/io.h> 12 12 #include <linux/interrupt.h> 13 13 #include <linux/platform_device.h> 14 + #include <linux/export.h> 14 15 15 16 #include <asm/pgtable.h> 16 17
+2
arch/arm/plat-samsung/include/plat/gpio-cfg.h
··· 24 24 #ifndef __PLAT_GPIO_CFG_H 25 25 #define __PLAT_GPIO_CFG_H __FILE__ 26 26 27 + #include<linux/types.h> 28 + 27 29 typedef unsigned int __bitwise__ samsung_gpio_pull_t; 28 30 typedef unsigned int __bitwise__ s5p_gpio_drvstr_t; 29 31
+1 -1
arch/arm/plat-samsung/pd.c
··· 11 11 */ 12 12 13 13 #include <linux/init.h> 14 - #include <linux/module.h> 14 + #include <linux/export.h> 15 15 #include <linux/platform_device.h> 16 16 #include <linux/err.h> 17 17 #include <linux/pm_runtime.h>
+1 -1
arch/arm/plat-samsung/pwm.c
··· 11 11 * the Free Software Foundation; either version 2 of the License. 12 12 */ 13 13 14 - #include <linux/module.h> 14 + #include <linux/export.h> 15 15 #include <linux/kernel.h> 16 16 #include <linux/platform_device.h> 17 17 #include <linux/slab.h>
+1
arch/arm/tools/mach-types
··· 1123 1123 thales_adc MACH_THALES_ADC THALES_ADC 3492 1124 1124 ubisys_p9d_evp MACH_UBISYS_P9D_EVP UBISYS_P9D_EVP 3493 1125 1125 atdgp318 MACH_ATDGP318 ATDGP318 3494 1126 + m28evk MACH_M28EVK M28EVK 3613 1126 1127 smdk4212 MACH_SMDK4212 SMDK4212 3638 1127 1128 smdk4412 MACH_SMDK4412 SMDK4412 3765
+13 -4
arch/powerpc/boot/dts/p1023rds.dts
··· 449 449 interrupt-parent = <&mpic>; 450 450 interrupts = <16 2>; 451 451 interrupt-map-mask = <0xf800 0 0 7>; 452 + /* IRQ[0:3] are pulled up on board, set to active-low */ 452 453 interrupt-map = < 453 454 /* IDSEL 0x0 */ 454 455 0000 0 0 1 &mpic 0 1 ··· 489 488 interrupt-parent = <&mpic>; 490 489 interrupts = <16 2>; 491 490 interrupt-map-mask = <0xf800 0 0 7>; 491 + /* 492 + * IRQ[4:6] only for PCIe, set to active-high, 493 + * IRQ[7] is pulled up on board, set to active-low 494 + */ 492 495 interrupt-map = < 493 496 /* IDSEL 0x0 */ 494 - 0000 0 0 1 &mpic 4 1 495 - 0000 0 0 2 &mpic 5 1 496 - 0000 0 0 3 &mpic 6 1 497 + 0000 0 0 1 &mpic 4 2 498 + 0000 0 0 2 &mpic 5 2 499 + 0000 0 0 3 &mpic 6 2 497 500 0000 0 0 4 &mpic 7 1 498 501 >; 499 502 ranges = <0x2000000 0x0 0xa0000000 ··· 532 527 interrupt-parent = <&mpic>; 533 528 interrupts = <16 2>; 534 529 interrupt-map-mask = <0xf800 0 0 7>; 530 + /* 531 + * IRQ[8:10] are pulled up on board, set to active-low 532 + * IRQ[11] only for PCIe, set to active-high, 533 + */ 535 534 interrupt-map = < 536 535 /* IDSEL 0x0 */ 537 536 0000 0 0 1 &mpic 8 1 538 537 0000 0 0 2 &mpic 9 1 539 538 0000 0 0 3 &mpic 10 1 540 - 0000 0 0 4 &mpic 11 1 539 + 0000 0 0 4 &mpic 11 2 541 540 >; 542 541 ranges = <0x2000000 0x0 0x80000000 543 542 0x2000000 0x0 0x80000000
+2
arch/powerpc/configs/ppc44x_defconfig
··· 52 52 CONFIG_MTD_JEDECPROBE=y 53 53 CONFIG_MTD_CFI_AMDSTD=y 54 54 CONFIG_MTD_PHYSMAP_OF=y 55 + CONFIG_MTD_NAND=m 56 + CONFIG_MTD_NAND_NDFC=m 55 57 CONFIG_MTD_UBI=m 56 58 CONFIG_MTD_UBI_GLUEBI=m 57 59 CONFIG_PROC_DEVICETREE=y
+1
arch/powerpc/mm/hugetlbpage.c
··· 15 15 #include <linux/of_fdt.h> 16 16 #include <linux/memblock.h> 17 17 #include <linux/bootmem.h> 18 + #include <linux/moduleparam.h> 18 19 #include <asm/pgtable.h> 19 20 #include <asm/pgalloc.h> 20 21 #include <asm/tlb.h>
+1 -1
arch/powerpc/platforms/85xx/Kconfig
··· 203 203 select PPC_E500MC 204 204 select PHYS_64BIT 205 205 select SWIOTLB 206 - select MPC8xxx_GPIO 206 + select GPIO_MPC8XXX 207 207 select HAS_RAPIDIO 208 208 select PPC_EPAPR_HV_PIC 209 209 help
+1 -1
arch/powerpc/platforms/85xx/p3060_qds.c
··· 70 70 .power_save = e500_idle, 71 71 }; 72 72 73 - machine_device_initcall(p3060_qds, declare_of_platform_devices); 73 + machine_device_initcall(p3060_qds, corenet_ds_publish_devices); 74 74 75 75 #ifdef CONFIG_SWIOTLB 76 76 machine_arch_initcall(p3060_qds, swiotlb_setup_bus_notifier);
+1
arch/powerpc/sysdev/ehv_pic.c
··· 280 280 281 281 if (!ehv_pic->irqhost) { 282 282 of_node_put(np); 283 + kfree(ehv_pic); 283 284 return; 284 285 } 285 286
+1
arch/powerpc/sysdev/fsl_lbc.c
··· 328 328 err: 329 329 iounmap(fsl_lbc_ctrl_dev->regs); 330 330 kfree(fsl_lbc_ctrl_dev); 331 + fsl_lbc_ctrl_dev = NULL; 331 332 return ret; 332 333 } 333 334
+1 -1
arch/powerpc/sysdev/qe_lib/qe.c
··· 216 216 /* Errata QE_General4, which affects some MPC832x and MPC836x SOCs, says 217 217 that the BRG divisor must be even if you're not using divide-by-16 218 218 mode. */ 219 - if (!div16 && (divisor & 1)) 219 + if (!div16 && (divisor & 1) && (divisor > 3)) 220 220 divisor++; 221 221 222 222 tempval = ((divisor - 1) << QE_BRGC_DIVISOR_SHIFT) |
+22 -9
drivers/acpi/apei/erst.c
··· 932 932 static int erst_open_pstore(struct pstore_info *psi); 933 933 static int erst_close_pstore(struct pstore_info *psi); 934 934 static ssize_t erst_reader(u64 *id, enum pstore_type_id *type, 935 - struct timespec *time, struct pstore_info *psi); 935 + struct timespec *time, char **buf, 936 + struct pstore_info *psi); 936 937 static int erst_writer(enum pstore_type_id type, u64 *id, unsigned int part, 937 938 size_t size, struct pstore_info *psi); 938 939 static int erst_clearer(enum pstore_type_id type, u64 id, ··· 987 986 } 988 987 989 988 static ssize_t erst_reader(u64 *id, enum pstore_type_id *type, 990 - struct timespec *time, struct pstore_info *psi) 989 + struct timespec *time, char **buf, 990 + struct pstore_info *psi) 991 991 { 992 992 int rc; 993 993 ssize_t len = 0; 994 994 u64 record_id; 995 - struct cper_pstore_record *rcd = (struct cper_pstore_record *) 996 - (erst_info.buf - sizeof(*rcd)); 995 + struct cper_pstore_record *rcd; 996 + size_t rcd_len = sizeof(*rcd) + erst_info.bufsize; 997 997 998 998 if (erst_disable) 999 999 return -ENODEV; 1000 1000 1001 + rcd = kmalloc(rcd_len, GFP_KERNEL); 1002 + if (!rcd) { 1003 + rc = -ENOMEM; 1004 + goto out; 1005 + } 1001 1006 skip: 1002 1007 rc = erst_get_record_id_next(&reader_pos, &record_id); 1003 1008 if (rc) ··· 1011 1004 1012 1005 /* no more record */ 1013 1006 if (record_id == APEI_ERST_INVALID_RECORD_ID) { 1014 - rc = -1; 1007 + rc = -EINVAL; 1015 1008 goto out; 1016 1009 } 1017 1010 1018 - len = erst_read(record_id, &rcd->hdr, sizeof(*rcd) + 1019 - erst_info.bufsize); 1011 + len = erst_read(record_id, &rcd->hdr, rcd_len); 1020 1012 /* The record may be cleared by others, try read next record */ 1021 1013 if (len == -ENOENT) 1022 1014 goto skip; 1023 - else if (len < 0) { 1024 - rc = -1; 1015 + else if (len < sizeof(*rcd)) { 1016 + rc = -EIO; 1025 1017 goto out; 1026 1018 } 1027 1019 if (uuid_le_cmp(rcd->hdr.creator_id, CPER_CREATOR_PSTORE) != 0) 1028 1020 goto skip; 1029 1021 1022 + *buf = kmalloc(len, GFP_KERNEL); 1023 + if (*buf == NULL) { 1024 + rc = -ENOMEM; 1025 + goto out; 1026 + } 1027 + memcpy(*buf, rcd->data, len - sizeof(*rcd)); 1030 1028 *id = record_id; 1031 1029 if (uuid_le_cmp(rcd->sec_hdr.section_type, 1032 1030 CPER_SECTION_TYPE_DMESG) == 0) ··· 1049 1037 time->tv_nsec = 0; 1050 1038 1051 1039 out: 1040 + kfree(rcd); 1052 1041 return (rc < 0) ? rc : (len - sizeof(*rcd)); 1053 1042
+7 -5
drivers/crypto/mv_cesa.c
··· 343 343 else 344 344 op.config |= CFG_MID_FRAG; 345 345 346 - writel(req_ctx->state[0], cpg->reg + DIGEST_INITIAL_VAL_A); 347 - writel(req_ctx->state[1], cpg->reg + DIGEST_INITIAL_VAL_B); 348 - writel(req_ctx->state[2], cpg->reg + DIGEST_INITIAL_VAL_C); 349 - writel(req_ctx->state[3], cpg->reg + DIGEST_INITIAL_VAL_D); 350 - writel(req_ctx->state[4], cpg->reg + DIGEST_INITIAL_VAL_E); 346 + if (first_block) { 347 + writel(req_ctx->state[0], cpg->reg + DIGEST_INITIAL_VAL_A); 348 + writel(req_ctx->state[1], cpg->reg + DIGEST_INITIAL_VAL_B); 349 + writel(req_ctx->state[2], cpg->reg + DIGEST_INITIAL_VAL_C); 350 + writel(req_ctx->state[3], cpg->reg + DIGEST_INITIAL_VAL_D); 351 + writel(req_ctx->state[4], cpg->reg + DIGEST_INITIAL_VAL_E); 352 + } 351 353 } 352 354 353 355 memcpy(cpg->sram + SRAM_CONFIG, &op, sizeof(struct sec_accel_config));
+1 -1
drivers/edac/mpc85xx_edac.c
··· 1128 1128 { .compatible = "fsl,p1020-memory-controller", }, 1129 1129 { .compatible = "fsl,p1021-memory-controller", }, 1130 1130 { .compatible = "fsl,p2020-memory-controller", }, 1131 - { .compatible = "fsl,p4080-memory-controller", }, 1131 + { .compatible = "fsl,qoriq-memory-controller", }, 1132 1132 {}, 1133 1133 }; 1134 1134 MODULE_DEVICE_TABLE(of, mpc85xx_mc_err_of_match);
+9 -3
drivers/firmware/efivars.c
··· 457 457 } 458 458 459 459 static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type, 460 - struct timespec *timespec, struct pstore_info *psi) 460 + struct timespec *timespec, 461 + char **buf, struct pstore_info *psi) 461 462 { 462 463 efi_guid_t vendor = LINUX_EFI_CRASH_GUID; 463 464 struct efivars *efivars = psi->data; ··· 479 478 timespec->tv_nsec = 0; 480 479 get_var_data_locked(efivars, &efivars->walk_entry->var); 481 480 size = efivars->walk_entry->var.DataSize; 482 - memcpy(psi->buf, efivars->walk_entry->var.Data, size); 481 + *buf = kmalloc(size, GFP_KERNEL); 482 + if (*buf == NULL) 483 + return -ENOMEM; 484 + memcpy(*buf, efivars->walk_entry->var.Data, 485 + size); 483 486 efivars->walk_entry = list_entry(efivars->walk_entry->list.next, 484 487 struct efivar_entry, list); 485 488 return size; ··· 581 576 } 582 577 583 578 static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type, 584 - struct timespec *time, struct pstore_info *psi) 579 + struct timespec *timespec, 580 + char **buf, struct pstore_info *psi) 585 581 { 586 582 return -1; 587 583 }
+2 -2
drivers/gpio/gpio-pca953x.c
··· 546 546 * Translate OpenFirmware node properties into platform_data 547 547 * WARNING: This is DEPRECATED and will be removed eventually! 548 548 */ 549 - void 549 + static void 550 550 pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert) 551 551 { 552 552 struct device_node *node; ··· 574 574 *invert = *val; 575 575 } 576 576 #else 577 - void 577 + static void 578 578 pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert) 579 579 { 580 580 *gpio_base = -1;
+32 -30
drivers/gpu/drm/exynos/exynos_drm_buf.c
··· 27 27 #include "drm.h" 28 28 29 29 #include "exynos_drm_drv.h" 30 + #include "exynos_drm_gem.h" 30 31 #include "exynos_drm_buf.h" 31 32 32 - static DEFINE_MUTEX(exynos_drm_buf_lock); 33 - 34 33 static int lowlevel_buffer_allocate(struct drm_device *dev, 35 - struct exynos_drm_buf_entry *entry) 34 + struct exynos_drm_gem_buf *buffer) 36 35 { 37 36 DRM_DEBUG_KMS("%s\n", __FILE__); 38 37 39 - entry->vaddr = dma_alloc_writecombine(dev->dev, entry->size, 40 - (dma_addr_t *)&entry->paddr, GFP_KERNEL); 41 - if (!entry->paddr) { 38 + buffer->kvaddr = dma_alloc_writecombine(dev->dev, buffer->size, 39 + &buffer->dma_addr, GFP_KERNEL); 40 + if (!buffer->kvaddr) { 42 41 DRM_ERROR("failed to allocate buffer.\n"); 43 42 return -ENOMEM; 44 43 } 45 44 46 - DRM_DEBUG_KMS("allocated : vaddr(0x%x), paddr(0x%x), size(0x%x)\n", 47 - (unsigned int)entry->vaddr, entry->paddr, entry->size); 45 + DRM_DEBUG_KMS("vaddr(0x%lx), dma_addr(0x%lx), size(0x%lx)\n", 46 + (unsigned long)buffer->kvaddr, 47 + (unsigned long)buffer->dma_addr, 48 + buffer->size); 48 49 49 50 return 0; 50 51 } 51 52 52 53 static void lowlevel_buffer_deallocate(struct drm_device *dev, 53 - struct exynos_drm_buf_entry *entry) 54 + struct exynos_drm_gem_buf *buffer) 54 55 { 55 56 DRM_DEBUG_KMS("%s.\n", __FILE__); 56 57 57 - if (entry->paddr && entry->vaddr && entry->size) 58 - dma_free_writecombine(dev->dev, entry->size, entry->vaddr, 59 - entry->paddr); 58 + if (buffer->dma_addr && buffer->size) 59 + dma_free_writecombine(dev->dev, buffer->size, buffer->kvaddr, 60 + (dma_addr_t)buffer->dma_addr); 60 61 else 61 - DRM_DEBUG_KMS("entry data is null.\n"); 62 + DRM_DEBUG_KMS("buffer data are invalid.\n"); 62 63 } 63 64 64 - struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev, 65 + struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev, 65 66 unsigned int size) 66 67 { 67 - struct exynos_drm_buf_entry *entry; 68 + struct exynos_drm_gem_buf *buffer; 68 69 69 70 DRM_DEBUG_KMS("%s.\n", __FILE__); 71 + DRM_DEBUG_KMS("desired size = 0x%x\n", size); 70 72 71 - entry = kzalloc(sizeof(*entry), GFP_KERNEL); 72 - if (!entry) { 73 - DRM_ERROR("failed to allocate exynos_drm_buf_entry.\n"); 73 + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); 74 + if (!buffer) { 75 + DRM_ERROR("failed to allocate exynos_drm_gem_buf.\n"); 74 76 return ERR_PTR(-ENOMEM); 75 77 } 76 78 77 - entry->size = size; 79 + buffer->size = size; 78 80 79 81 /* 80 82 * allocate memory region with size and set the memory information 81 - * to vaddr and paddr of a entry object. 83 + * to vaddr and dma_addr of a buffer object. 82 84 */ 83 - if (lowlevel_buffer_allocate(dev, entry) < 0) { 84 - kfree(entry); 85 - entry = NULL; 85 + if (lowlevel_buffer_allocate(dev, buffer) < 0) { 86 + kfree(buffer); 87 + buffer = NULL; 86 88 return ERR_PTR(-ENOMEM); 87 89 } 88 90 89 - return entry; 91 + return buffer; 90 92 } 91 93 92 94 void exynos_drm_buf_destroy(struct drm_device *dev, 93 - struct exynos_drm_buf_entry *entry) 95 + struct exynos_drm_gem_buf *buffer) 94 96 { 95 97 DRM_DEBUG_KMS("%s.\n", __FILE__); 96 98 97 - if (!entry) { 98 - DRM_DEBUG_KMS("entry is null.\n"); 99 + if (!buffer) { 100 + DRM_DEBUG_KMS("buffer is null.\n"); 99 101 return; 100 102 } 101 103 102 - lowlevel_buffer_deallocate(dev, entry); 104 + lowlevel_buffer_deallocate(dev, buffer); 103 105 104 - kfree(entry); 105 - entry = NULL; 106 + kfree(buffer); 107 + buffer = NULL; 106 108 } 107 109 108 110 MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
+4 -17
drivers/gpu/drm/exynos/exynos_drm_buf.h
··· 26 26 #ifndef _EXYNOS_DRM_BUF_H_ 27 27 #define _EXYNOS_DRM_BUF_H_ 28 28 29 - /* 30 - * exynos drm buffer entry structure. 31 - * 32 - * @paddr: physical address of allocated memory. 33 - * @vaddr: kernel virtual address of allocated memory. 34 - * @size: size of allocated memory. 35 - */ 36 - struct exynos_drm_buf_entry { 37 - dma_addr_t paddr; 38 - void __iomem *vaddr; 39 - unsigned int size; 40 - }; 41 - 42 29 /* allocate physical memory. */ 43 - struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev, 30 + struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev, 44 31 unsigned int size); 45 32 46 - /* get physical memory information of a drm framebuffer. */ 47 - struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb); 33 + /* get memory information of a drm framebuffer. */ 34 + struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb); 48 35 49 36 /* remove allocated physical memory. */ 50 37 void exynos_drm_buf_destroy(struct drm_device *dev, 51 - struct exynos_drm_buf_entry *entry); 38 + struct exynos_drm_gem_buf *buffer); 52 39 53 40 #endif
+56 -22
drivers/gpu/drm/exynos/exynos_drm_connector.c
··· 37 37 38 38 struct exynos_drm_connector { 39 39 struct drm_connector drm_connector; 40 + uint32_t encoder_id; 41 + struct exynos_drm_manager *manager; 40 42 }; 41 43 42 44 /* convert exynos_video_timings to drm_display_mode */ ··· 49 47 DRM_DEBUG_KMS("%s\n", __FILE__); 50 48 51 49 mode->clock = timing->pixclock / 1000; 50 + mode->vrefresh = timing->refresh; 52 51 53 52 mode->hdisplay = timing->xres; 54 53 mode->hsync_start = mode->hdisplay + timing->left_margin; ··· 60 57 mode->vsync_start = mode->vdisplay + timing->upper_margin; 61 58 mode->vsync_end = mode->vsync_start + timing->vsync_len; 62 59 mode->vtotal = mode->vsync_end + timing->lower_margin; 60 + 61 + if (timing->vmode & FB_VMODE_INTERLACED) 62 + mode->flags |= DRM_MODE_FLAG_INTERLACE; 63 + 64 + if (timing->vmode & FB_VMODE_DOUBLE) 65 + mode->flags |= DRM_MODE_FLAG_DBLSCAN; 63 66 } 64 67 65 68 /* convert drm_display_mode to exynos_video_timings */ ··· 78 69 memset(timing, 0, sizeof(*timing)); 79 70 80 71 timing->pixclock = mode->clock * 1000; 81 - timing->refresh = mode->vrefresh; 72 + timing->refresh = drm_mode_vrefresh(mode); 82 73 83 74 timing->xres = mode->hdisplay; 84 75 timing->left_margin = mode->hsync_start - mode->hdisplay; ··· 101 92 102 93 static int exynos_drm_connector_get_modes(struct drm_connector *connector) 103 94 { 104 - struct exynos_drm_manager *manager = 105 - exynos_drm_get_manager(connector->encoder); 106 - struct exynos_drm_display *display = manager->display; 95 + struct exynos_drm_connector *exynos_connector = 96 + to_exynos_connector(connector); 97 + struct exynos_drm_manager *manager = exynos_connector->manager; 98 + struct exynos_drm_display_ops *display_ops = manager->display_ops; 107 99 unsigned int count; 108 100 109 101 DRM_DEBUG_KMS("%s\n", __FILE__); 110 102 111 - if (!display) { 112 - DRM_DEBUG_KMS("display is null.\n"); 103 + if (!display_ops) { 104 + DRM_DEBUG_KMS("display_ops is null.\n"); 113 105 return 0; 114 106 } 115 107 ··· 122 112 * P.S. in case of lcd panel, count is always 1 if success 123 113 * because lcd panel has only one mode. 124 114 */ 125 - if (display->get_edid) { 115 + if (display_ops->get_edid) { 126 116 int ret; 127 117 void *edid; 128 118 ··· 132 122 return 0; 133 123 } 134 124 135 - ret = display->get_edid(manager->dev, connector, 125 + ret = display_ops->get_edid(manager->dev, connector, 136 126 edid, MAX_EDID); 137 127 if (ret < 0) { 138 128 DRM_ERROR("failed to get edid data.\n"); ··· 150 140 struct drm_display_mode *mode = drm_mode_create(connector->dev); 151 141 struct fb_videomode *timing; 152 142 153 - if (display->get_timing) 154 - timing = display->get_timing(manager->dev); 143 + if (display_ops->get_timing) 144 + timing = display_ops->get_timing(manager->dev); 155 145 else { 156 146 drm_mode_destroy(connector->dev, mode); 157 147 return 0; ··· 172 162 static int exynos_drm_connector_mode_valid(struct drm_connector *connector, 173 163 struct drm_display_mode *mode) 174 164 { 175 - struct exynos_drm_manager *manager = 176 - exynos_drm_get_manager(connector->encoder); 177 - struct exynos_drm_display *display = manager->display; 165 + struct exynos_drm_connector *exynos_connector = 166 + to_exynos_connector(connector); 167 + struct exynos_drm_manager *manager = exynos_connector->manager; 168 + struct exynos_drm_display_ops *display_ops = manager->display_ops; 178 169 struct fb_videomode timing; 179 170 int ret = MODE_BAD; 180 171 ··· 183 172 184 173 convert_to_video_timing(&timing, mode); 185 174 186 - if (display && display->check_timing) 187 - if (!display->check_timing(manager->dev, (void *)&timing)) 175 + if (display_ops && display_ops->check_timing) 176 + if (!display_ops->check_timing(manager->dev, (void *)&timing)) 188 177 ret = MODE_OK; 189 178 190 179 return ret; ··· 192 181 193 182 struct drm_encoder *exynos_drm_best_encoder(struct drm_connector *connector) 194 183 { 184 + struct drm_device *dev = connector->dev; 185 + struct exynos_drm_connector *exynos_connector = 
186 + to_exynos_connector(connector); 187 + struct drm_mode_object *obj; 188 + struct drm_encoder *encoder; 189 + 195 190 DRM_DEBUG_KMS("%s\n", __FILE__); 196 191 197 - return connector->encoder; 192 + obj = drm_mode_object_find(dev, exynos_connector->encoder_id, 193 + DRM_MODE_OBJECT_ENCODER); 194 + if (!obj) { 195 + DRM_DEBUG_KMS("Unknown ENCODER ID %d\n", 196 + exynos_connector->encoder_id); 197 + return NULL; 198 + } 199 + 200 + encoder = obj_to_encoder(obj); 201 + 202 + return encoder; 198 203 } 199 204 200 205 static struct drm_connector_helper_funcs exynos_connector_helper_funcs = { ··· 223 196 static enum drm_connector_status 224 197 exynos_drm_connector_detect(struct drm_connector *connector, bool force) 225 198 { 226 - struct exynos_drm_manager *manager = 227 - exynos_drm_get_manager(connector->encoder); 228 - struct exynos_drm_display *display = manager->display; 199 + struct exynos_drm_connector *exynos_connector = 200 + to_exynos_connector(connector); 201 + struct exynos_drm_manager *manager = exynos_connector->manager; 202 + struct exynos_drm_display_ops *display_ops = 203 + manager->display_ops; 229 204 enum drm_connector_status status = connector_status_disconnected; 230 205 231 206 DRM_DEBUG_KMS("%s\n", __FILE__); 232 207 233 - if (display && display->is_connected) { 234 - if (display->is_connected(manager->dev)) 208 + if (display_ops && display_ops->is_connected) { 209 + if (display_ops->is_connected(manager->dev)) 235 210 status = connector_status_connected; 236 211 else 237 212 status = connector_status_disconnected; ··· 280 251 281 252 connector = &exynos_connector->drm_connector; 282 253 283 - switch (manager->display->type) { 254 + switch (manager->display_ops->type) { 284 255 case EXYNOS_DISPLAY_TYPE_HDMI: 285 256 type = DRM_MODE_CONNECTOR_HDMIA; 257 + connector->interlace_allowed = true; 258 + connector->polled = DRM_CONNECTOR_POLL_HPD; 286 259 break; 287 260 default: 288 261 type = DRM_MODE_CONNECTOR_Unknown; ··· 298 267 if (err) 299 268 
goto err_connector; 300 269 270 + exynos_connector->encoder_id = encoder->base.id; 271 + exynos_connector->manager = manager; 301 272 connector->encoder = encoder; 273 + 302 274 err = drm_mode_connector_attach_encoder(connector, encoder); 303 275 if (err) { 304 276 DRM_ERROR("failed to attach a connector to a encoder\n");
+39 -37
drivers/gpu/drm/exynos/exynos_drm_crtc.c
··· 29 29 #include "drmP.h" 30 30 #include "drm_crtc_helper.h" 31 31 32 + #include "exynos_drm_crtc.h" 32 33 #include "exynos_drm_drv.h" 33 34 #include "exynos_drm_fb.h" 34 35 #include "exynos_drm_encoder.h" 36 + #include "exynos_drm_gem.h" 35 37 #include "exynos_drm_buf.h" 36 38 37 39 #define to_exynos_crtc(x) container_of(x, struct exynos_drm_crtc,\ 38 40 drm_crtc) 39 - 40 - /* 41 - * Exynos specific crtc postion structure. 42 - * 43 - * @fb_x: offset x on a framebuffer to be displyed 44 - * - the unit is screen coordinates. 45 - * @fb_y: offset y on a framebuffer to be displayed 46 - * - the unit is screen coordinates. 47 - * @crtc_x: offset x on hardware screen. 48 - * @crtc_y: offset y on hardware screen. 49 - * @crtc_w: width of hardware screen. 50 - * @crtc_h: height of hardware screen. 51 - */ 52 - struct exynos_drm_crtc_pos { 53 - unsigned int fb_x; 54 - unsigned int fb_y; 55 - unsigned int crtc_x; 56 - unsigned int crtc_y; 57 - unsigned int crtc_w; 58 - unsigned int crtc_h; 59 - }; 60 41 61 42 /* 62 43 * Exynos specific crtc structure. 
··· 66 85 67 86 exynos_drm_fn_encoder(crtc, overlay, 68 87 exynos_drm_encoder_crtc_mode_set); 69 - exynos_drm_fn_encoder(crtc, NULL, exynos_drm_encoder_crtc_commit); 88 + exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe, 89 + exynos_drm_encoder_crtc_commit); 70 90 } 71 91 72 - static int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay, 73 - struct drm_framebuffer *fb, 74 - struct drm_display_mode *mode, 75 - struct exynos_drm_crtc_pos *pos) 92 + int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay, 93 + struct drm_framebuffer *fb, 94 + struct drm_display_mode *mode, 95 + struct exynos_drm_crtc_pos *pos) 76 96 { 77 - struct exynos_drm_buf_entry *entry; 97 + struct exynos_drm_gem_buf *buffer; 78 98 unsigned int actual_w; 79 99 unsigned int actual_h; 80 100 81 - entry = exynos_drm_fb_get_buf(fb); 82 - if (!entry) { 83 - DRM_LOG_KMS("entry is null.\n"); 101 + buffer = exynos_drm_fb_get_buf(fb); 102 + if (!buffer) { 103 + DRM_LOG_KMS("buffer is null.\n"); 84 104 return -EFAULT; 85 105 } 86 106 87 - overlay->paddr = entry->paddr; 88 - overlay->vaddr = entry->vaddr; 107 + overlay->dma_addr = buffer->dma_addr; 108 + overlay->vaddr = buffer->kvaddr; 89 109 90 - DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n", 110 + DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n", 91 111 (unsigned long)overlay->vaddr, 92 - (unsigned long)overlay->paddr); 112 + (unsigned long)overlay->dma_addr); 93 113 94 114 actual_w = min((mode->hdisplay - pos->crtc_x), pos->crtc_w); 95 115 actual_h = min((mode->vdisplay - pos->crtc_y), pos->crtc_h); ··· 153 171 154 172 static void exynos_drm_crtc_dpms(struct drm_crtc *crtc, int mode) 155 173 { 156 - DRM_DEBUG_KMS("%s\n", __FILE__); 174 + struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc); 157 175 158 - /* TODO */ 176 + DRM_DEBUG_KMS("crtc[%d] mode[%d]\n", crtc->base.id, mode); 177 + 178 + switch (mode) { 179 + case DRM_MODE_DPMS_ON: 180 + exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe, 181 + exynos_drm_encoder_crtc_commit); 
182 + break; 183 + case DRM_MODE_DPMS_STANDBY: 184 + case DRM_MODE_DPMS_SUSPEND: 185 + case DRM_MODE_DPMS_OFF: 186 + /* TODO */ 187 + exynos_drm_fn_encoder(crtc, NULL, 188 + exynos_drm_encoder_crtc_disable); 189 + break; 190 + default: 191 + DRM_DEBUG_KMS("unspecified mode %d\n", mode); 192 + break; 193 + } 159 194 } 160 195 161 196 static void exynos_drm_crtc_prepare(struct drm_crtc *crtc) ··· 184 185 185 186 static void exynos_drm_crtc_commit(struct drm_crtc *crtc) 186 187 { 188 + struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc); 189 + 187 190 DRM_DEBUG_KMS("%s\n", __FILE__); 188 191 189 - /* drm framework doesn't check NULL. */ 192 + exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe, 193 + exynos_drm_encoder_crtc_commit); 190 194 } 191 195 192 196 static bool
+25
drivers/gpu/drm/exynos/exynos_drm_crtc.h
··· 35 35 int exynos_drm_crtc_enable_vblank(struct drm_device *dev, int crtc); 36 36 void exynos_drm_crtc_disable_vblank(struct drm_device *dev, int crtc); 37 37 38 + /* 39 + * Exynos specific crtc postion structure. 40 + * 41 + * @fb_x: offset x on a framebuffer to be displyed 42 + * - the unit is screen coordinates. 43 + * @fb_y: offset y on a framebuffer to be displayed 44 + * - the unit is screen coordinates. 45 + * @crtc_x: offset x on hardware screen. 46 + * @crtc_y: offset y on hardware screen. 47 + * @crtc_w: width of hardware screen. 48 + * @crtc_h: height of hardware screen. 49 + */ 50 + struct exynos_drm_crtc_pos { 51 + unsigned int fb_x; 52 + unsigned int fb_y; 53 + unsigned int crtc_x; 54 + unsigned int crtc_y; 55 + unsigned int crtc_w; 56 + unsigned int crtc_h; 57 + }; 58 + 59 + int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay, 60 + struct drm_framebuffer *fb, 61 + struct drm_display_mode *mode, 62 + struct exynos_drm_crtc_pos *pos); 38 63 #endif
+5
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 27 27 28 28 #include "drmP.h" 29 29 #include "drm.h" 30 + #include "drm_crtc_helper.h" 30 31 31 32 #include <drm/exynos_drm.h> 32 33 ··· 61 60 dev->dev_private = (void *)private; 62 61 63 62 drm_mode_config_init(dev); 63 + 64 + /* init kms poll for handling hpd */ 65 + drm_kms_helper_poll_init(dev); 64 66 65 67 exynos_drm_mode_config_init(dev); 66 68 ··· 120 116 exynos_drm_fbdev_fini(dev); 121 117 exynos_drm_device_unregister(dev); 122 118 drm_vblank_cleanup(dev); 119 + drm_kms_helper_poll_fini(dev); 123 120 drm_mode_config_cleanup(dev); 124 121 kfree(dev->dev_private); 125 122
+8 -5
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 29 29 #ifndef _EXYNOS_DRM_DRV_H_ 30 30 #define _EXYNOS_DRM_DRV_H_ 31 31 32 + #include <linux/module.h> 32 33 #include "drm.h" 33 34 34 35 #define MAX_CRTC 2 ··· 80 79 * @scan_flag: interlace or progressive way. 81 80 * (it could be DRM_MODE_FLAG_*) 82 81 * @bpp: pixel size.(in bit) 83 - * @paddr: bus(accessed by dma) physical memory address to this overlay 84 - * and this is physically continuous. 82 + * @dma_addr: bus(accessed by dma) address to the memory region allocated 83 + * for an overlay. 85 84 * @vaddr: virtual memory address to this overlay. 86 85 * @default_win: a window to be enabled. 87 86 * @color_key: color key on or off. ··· 109 108 unsigned int scan_flag; 110 109 unsigned int bpp; 111 110 unsigned int pitch; 112 - dma_addr_t paddr; 111 + dma_addr_t dma_addr; 113 112 void __iomem *vaddr; 114 113 115 114 bool default_win; ··· 131 130 * @check_timing: check if timing is valid or not. 132 131 * @power_on: display device on or off. 133 132 */ 134 - struct exynos_drm_display { 133 + struct exynos_drm_display_ops { 135 134 enum exynos_drm_output_type type; 136 135 bool (*is_connected)(struct device *dev); 137 136 int (*get_edid)(struct device *dev, struct drm_connector *connector, ··· 147 146 * @mode_set: convert drm_display_mode to hw specific display mode and 148 147 * would be called by encoder->mode_set(). 149 148 * @commit: set current hw specific display mode to hw. 149 + * @disable: disable hardware specific display mode. 150 150 * @enable_vblank: specific driver callback for enabling vblank interrupt. 151 151 * @disable_vblank: specific driver callback for disabling vblank interrupt.
152 152 */ 153 153 struct exynos_drm_manager_ops { 154 154 void (*mode_set)(struct device *subdrv_dev, void *mode); 155 155 void (*commit)(struct device *subdrv_dev); 156 + void (*disable)(struct device *subdrv_dev); 156 157 int (*enable_vblank)(struct device *subdrv_dev); 157 158 void (*disable_vblank)(struct device *subdrv_dev); 158 159 }; ··· 181 178 int pipe; 182 179 struct exynos_drm_manager_ops *ops; 183 180 struct exynos_drm_overlay_ops *overlay_ops; 184 - struct exynos_drm_display *display; 181 + struct exynos_drm_display_ops *display_ops; 185 182 }; 186 183 187 184 /*
+72 -11
drivers/gpu/drm/exynos/exynos_drm_encoder.c
··· 53 53 struct drm_device *dev = encoder->dev; 54 54 struct drm_connector *connector; 55 55 struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder); 56 + struct exynos_drm_manager_ops *manager_ops = manager->ops; 56 57 57 58 DRM_DEBUG_KMS("%s, encoder dpms: %d\n", __FILE__, mode); 58 59 60 + switch (mode) { 61 + case DRM_MODE_DPMS_ON: 62 + if (manager_ops && manager_ops->commit) 63 + manager_ops->commit(manager->dev); 64 + break; 65 + case DRM_MODE_DPMS_STANDBY: 66 + case DRM_MODE_DPMS_SUSPEND: 67 + case DRM_MODE_DPMS_OFF: 68 + /* TODO */ 69 + if (manager_ops && manager_ops->disable) 70 + manager_ops->disable(manager->dev); 71 + break; 72 + default: 73 + DRM_ERROR("unspecified mode %d\n", mode); 74 + break; 75 + } 76 + 59 77 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 60 78 if (connector->encoder == encoder) { 61 - struct exynos_drm_display *display = manager->display; 79 + struct exynos_drm_display_ops *display_ops = 80 + manager->display_ops; 62 81 63 - if (display && display->power_on) 64 - display->power_on(manager->dev, mode); 82 + DRM_DEBUG_KMS("connector[%d] dpms[%d]\n", 83 + connector->base.id, mode); 84 + if (display_ops && display_ops->power_on) 85 + display_ops->power_on(manager->dev, mode); 65 86 } 66 87 } 67 88 } ··· 137 116 { 138 117 struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder); 139 118 struct exynos_drm_manager_ops *manager_ops = manager->ops; 140 - struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops; 141 119 142 120 DRM_DEBUG_KMS("%s\n", __FILE__); 143 121 144 122 if (manager_ops && manager_ops->commit) 145 123 manager_ops->commit(manager->dev); 146 - 147 - if (overlay_ops && overlay_ops->commit) 148 - overlay_ops->commit(manager->dev); 149 124 } 150 125 151 126 static struct drm_crtc * ··· 225 208 { 226 209 struct drm_device *dev = crtc->dev; 227 210 struct drm_encoder *encoder; 211 + struct exynos_drm_private *private = dev->dev_private; 212 + struct 
exynos_drm_manager *manager; 228 213 229 214 list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { 230 - if (encoder->crtc != crtc) 231 - continue; 215 + /* 216 + * if crtc is detached from encoder, check pipe, 217 + * otherwise check crtc attached to encoder 218 + */ 219 + if (!encoder->crtc) { 220 + manager = to_exynos_encoder(encoder)->manager; 221 + if (manager->pipe < 0 || 222 + private->crtc[manager->pipe] != crtc) 223 + continue; 224 + } else { 225 + if (encoder->crtc != crtc) 226 + continue; 227 + } 232 228 233 229 fn(encoder, data); 234 230 } ··· 280 250 struct exynos_drm_manager *manager = 281 251 to_exynos_encoder(encoder)->manager; 282 252 struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops; 253 + int crtc = *(int *)data; 283 254 284 - overlay_ops->commit(manager->dev); 255 + DRM_DEBUG_KMS("%s\n", __FILE__); 256 + 257 + /* 258 + * when crtc is detached from encoder, this pipe is used 259 + * to select manager operation 260 + */ 261 + manager->pipe = crtc; 262 + 263 + if (overlay_ops && overlay_ops->commit) 264 + overlay_ops->commit(manager->dev); 285 265 } 286 266 287 267 void exynos_drm_encoder_crtc_mode_set(struct drm_encoder *encoder, void *data) ··· 301 261 struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops; 302 262 struct exynos_drm_overlay *overlay = data; 303 263 304 - overlay_ops->mode_set(manager->dev, overlay); 264 + if (overlay_ops && overlay_ops->mode_set) 265 + overlay_ops->mode_set(manager->dev, overlay); 266 + } 267 + 268 + void exynos_drm_encoder_crtc_disable(struct drm_encoder *encoder, void *data) 269 + { 270 + struct exynos_drm_manager *manager = 271 + to_exynos_encoder(encoder)->manager; 272 + struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops; 273 + 274 + DRM_DEBUG_KMS("\n"); 275 + 276 + if (overlay_ops && overlay_ops->disable) 277 + overlay_ops->disable(manager->dev); 278 + 279 + /* 280 + * crtc is already detached from encoder and last 281 + * function for detaching is 
properly done, so 282 + * clear pipe from manager to prevent repeated call 283 + */ 284 + if (!encoder->crtc) 285 + manager->pipe = -1; 305 286 } 306 287 307 288 MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
+1
drivers/gpu/drm/exynos/exynos_drm_encoder.h
··· 41 41 void exynos_drm_disable_vblank(struct drm_encoder *encoder, void *data); 42 42 void exynos_drm_encoder_crtc_commit(struct drm_encoder *encoder, void *data); 43 43 void exynos_drm_encoder_crtc_mode_set(struct drm_encoder *encoder, void *data); 44 + void exynos_drm_encoder_crtc_disable(struct drm_encoder *encoder, void *data); 44 45 45 46 #endif
+39 -27
drivers/gpu/drm/exynos/exynos_drm_fb.c
··· 29 29 #include "drmP.h" 30 30 #include "drm_crtc.h" 31 31 #include "drm_crtc_helper.h" 32 + #include "drm_fb_helper.h" 32 33 34 + #include "exynos_drm_drv.h" 33 35 #include "exynos_drm_fb.h" 34 36 #include "exynos_drm_buf.h" 35 37 #include "exynos_drm_gem.h" ··· 43 41 * 44 42 * @fb: drm framebuffer object. 45 43 * @exynos_gem_obj: exynos specific gem object containing a gem object. 46 - * @entry: pointer to exynos drm buffer entry object. 47 - * - containing only the information to physically continuous memory 48 - * region allocated at default framebuffer creation. 44 + * @buffer: pointer to exynos_drm_gem_buffer object. 45 + * - contains the memory information for the memory region allocated 46 + * at default framebuffer creation. 49 47 */ 50 48 struct exynos_drm_fb { 51 49 struct drm_framebuffer fb; 52 50 struct exynos_drm_gem_obj *exynos_gem_obj; 53 - struct exynos_drm_buf_entry *entry; 51 + struct exynos_drm_gem_buf *buffer; 54 52 }; 55 53 56 54 static void exynos_drm_fb_destroy(struct drm_framebuffer *fb) ··· 65 63 * default framebuffer has no gem object so 66 64 * a buffer of the default framebuffer should be released at here. 67 65 */ 68 - if (!exynos_fb->exynos_gem_obj && exynos_fb->entry) 69 - exynos_drm_buf_destroy(fb->dev, exynos_fb->entry); 66 + if (!exynos_fb->exynos_gem_obj && exynos_fb->buffer) 67 + exynos_drm_buf_destroy(fb->dev, exynos_fb->buffer); 70 68 71 69 kfree(exynos_fb); 72 70 exynos_fb = NULL; ··· 145 143 */ 146 144 if (!mode_cmd->handle) { 147 145 if (!file_priv) { 148 - struct exynos_drm_buf_entry *entry; 146 + struct exynos_drm_gem_buf *buffer; 149 147 150 148 /* 151 149 * in case that file_priv is NULL, it allocates 152 150 * only buffer and this buffer would be used 153 151 * for default framebuffer.
154 152 */ 155 - entry = exynos_drm_buf_create(dev, size); 156 - if (IS_ERR(entry)) { 157 - ret = PTR_ERR(entry); 153 + buffer = exynos_drm_buf_create(dev, size); 154 + if (IS_ERR(buffer)) { 155 + ret = PTR_ERR(buffer); 158 156 goto err_buffer; 159 157 } 160 158 161 - exynos_fb->entry = entry; 159 + exynos_fb->buffer = buffer; 162 160 163 - DRM_LOG_KMS("default fb: paddr = 0x%lx, size = 0x%x\n", 164 - (unsigned long)entry->paddr, size); 161 + DRM_LOG_KMS("default: dma_addr = 0x%lx, size = 0x%x\n", 162 + (unsigned long)buffer->dma_addr, size); 165 163 166 164 goto out; 167 165 } else { 168 - exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, 169 - size, 170 - &mode_cmd->handle); 166 + exynos_gem_obj = exynos_drm_gem_create(dev, file_priv, 167 + &mode_cmd->handle, 168 + size); 171 169 if (IS_ERR(exynos_gem_obj)) { 172 170 ret = PTR_ERR(exynos_gem_obj); 173 171 goto err_buffer; ··· 191 189 * so that default framebuffer has no its own gem object, 192 190 * only its own buffer object. 193 191 */ 194 - exynos_fb->entry = exynos_gem_obj->entry; 192 + exynos_fb->buffer = exynos_gem_obj->buffer; 195 193 196 - DRM_LOG_KMS("paddr = 0x%lx, size = 0x%x, gem object = 0x%x\n", 197 - (unsigned long)exynos_fb->entry->paddr, size, 194 + DRM_LOG_KMS("dma_addr = 0x%lx, size = 0x%x, gem object = 0x%x\n", 195 + (unsigned long)exynos_fb->buffer->dma_addr, size, 198 196 (unsigned int)&exynos_gem_obj->base); 199 197 200 198 out: ··· 222 220 return exynos_drm_fb_init(file_priv, dev, mode_cmd); 223 221 } 224 222 225 - struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb) 223 + struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb) 226 224 { 227 225 struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb); 228 - struct exynos_drm_buf_entry *entry; 226 + struct exynos_drm_gem_buf *buffer; 229 227 230 228 DRM_DEBUG_KMS("%s\n", __FILE__); 231 229 232 - entry = exynos_fb->entry; 233 - if (!entry) 230 + buffer = exynos_fb->buffer; 231 + if (!buffer) 234 
232 return NULL; 235 233 236 - DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n", 237 - (unsigned long)entry->vaddr, 238 - (unsigned long)entry->paddr); 234 + DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n", 235 + (unsigned long)buffer->kvaddr, 236 + (unsigned long)buffer->dma_addr); 239 237 240 - return entry; 238 + return buffer; 239 + } 240 + 241 + static void exynos_drm_output_poll_changed(struct drm_device *dev) 242 + { 243 + struct exynos_drm_private *private = dev->dev_private; 244 + struct drm_fb_helper *fb_helper = private->fb_helper; 245 + 246 + if (fb_helper) 247 + drm_fb_helper_hotplug_event(fb_helper); 241 248 } 242 249 243 250 static struct drm_mode_config_funcs exynos_drm_mode_config_funcs = { 244 251 .fb_create = exynos_drm_fb_create, 252 + .output_poll_changed = exynos_drm_output_poll_changed, 245 253 }; 246 254 247 255 void exynos_drm_mode_config_init(struct drm_device *dev)
+28 -16
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
··· 33 33 34 34 #include "exynos_drm_drv.h" 35 35 #include "exynos_drm_fb.h" 36 + #include "exynos_drm_gem.h" 36 37 #include "exynos_drm_buf.h" 37 38 38 39 #define MAX_CONNECTOR 4 ··· 86 85 }; 87 86 88 87 static int exynos_drm_fbdev_update(struct drm_fb_helper *helper, 89 - struct drm_framebuffer *fb, 90 - unsigned int fb_width, 91 - unsigned int fb_height) 88 + struct drm_framebuffer *fb) 92 89 { 93 90 struct fb_info *fbi = helper->fbdev; 94 91 struct drm_device *dev = helper->dev; 95 92 struct exynos_drm_fbdev *exynos_fb = to_exynos_fbdev(helper); 96 - struct exynos_drm_buf_entry *entry; 97 - unsigned int size = fb_width * fb_height * (fb->bits_per_pixel >> 3); 93 + struct exynos_drm_gem_buf *buffer; 94 + unsigned int size = fb->width * fb->height * (fb->bits_per_pixel >> 3); 98 95 unsigned long offset; 99 96 100 97 DRM_DEBUG_KMS("%s\n", __FILE__); ··· 100 101 exynos_fb->fb = fb; 101 102 102 103 drm_fb_helper_fill_fix(fbi, fb->pitch, fb->depth); 103 - drm_fb_helper_fill_var(fbi, helper, fb_width, fb_height); 104 + drm_fb_helper_fill_var(fbi, helper, fb->width, fb->height); 104 105 105 - entry = exynos_drm_fb_get_buf(fb); 106 - if (!entry) { 107 - DRM_LOG_KMS("entry is null.\n"); 106 + buffer = exynos_drm_fb_get_buf(fb); 107 + if (!buffer) { 108 + DRM_LOG_KMS("buffer is null.\n"); 108 109 return -EFAULT; 109 110 } 110 111 111 112 offset = fbi->var.xoffset * (fb->bits_per_pixel >> 3); 112 113 offset += fbi->var.yoffset * fb->pitch; 113 114 114 - dev->mode_config.fb_base = entry->paddr; 115 - fbi->screen_base = entry->vaddr + offset; 116 - fbi->fix.smem_start = entry->paddr + offset; 115 + dev->mode_config.fb_base = (resource_size_t)buffer->dma_addr; 116 + fbi->screen_base = buffer->kvaddr + offset; 117 + fbi->fix.smem_start = (unsigned long)(buffer->dma_addr + offset); 117 118 fbi->screen_size = size; 118 119 fbi->fix.smem_len = size; 119 120 ··· 170 171 goto out; 171 172 } 172 173 173 - ret = exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width, 174 - 
sizes->fb_height); 174 + ret = exynos_drm_fbdev_update(helper, helper->fb); 175 175 if (ret < 0) 176 176 fb_dealloc_cmap(&fbi->cmap); 177 177 ··· 233 235 } 234 236 235 237 helper->fb = exynos_fbdev->fb; 236 - return exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width, 237 - sizes->fb_height); 238 + return exynos_drm_fbdev_update(helper, helper->fb); 238 239 } 239 240 240 241 static int exynos_drm_fbdev_probe(struct drm_fb_helper *helper, ··· 402 405 fb_helper = private->fb_helper; 403 406 404 407 if (fb_helper) { 408 + struct list_head temp_list; 409 + 410 + INIT_LIST_HEAD(&temp_list); 411 + 412 + /* 413 + * fb_helper is reinitialized but the kernel fb is reused, 414 + * so kernel_fb_list needs to be backed up and restored 415 + */ 416 + if (!list_empty(&fb_helper->kernel_fb_list)) 417 + list_replace_init(&fb_helper->kernel_fb_list, 418 + &temp_list); 419 + 405 420 drm_fb_helper_fini(fb_helper); 406 421 407 422 ret = drm_fb_helper_init(dev, fb_helper, ··· 422 413 DRM_ERROR("failed to initialize drm fb helper\n"); 423 414 return ret; 424 415 } 416 + 417 + if (!list_empty(&temp_list)) 418 + list_replace(&temp_list, &fb_helper->kernel_fb_list); 425 419 426 420 ret = drm_fb_helper_single_add_all_connectors(fb_helper); 427 421 if (ret < 0) {
+53 -18
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 64 64 unsigned int fb_width; 65 65 unsigned int fb_height; 66 66 unsigned int bpp; 67 - dma_addr_t paddr; 67 + dma_addr_t dma_addr; 68 68 void __iomem *vaddr; 69 69 unsigned int buf_offsize; 70 70 unsigned int line_size; /* bytes */ ··· 124 124 return 0; 125 125 } 126 126 127 - static struct exynos_drm_display fimd_display = { 127 + static struct exynos_drm_display_ops fimd_display_ops = { 128 128 .type = EXYNOS_DISPLAY_TYPE_LCD, 129 129 .is_connected = fimd_display_is_connected, 130 130 .get_timing = fimd_get_timing, ··· 177 177 writel(val, ctx->regs + VIDCON0); 178 178 } 179 179 180 + static void fimd_disable(struct device *dev) 181 + { 182 + struct fimd_context *ctx = get_fimd_context(dev); 183 + struct exynos_drm_subdrv *subdrv = &ctx->subdrv; 184 + struct drm_device *drm_dev = subdrv->drm_dev; 185 + struct exynos_drm_manager *manager = &subdrv->manager; 186 + u32 val; 187 + 188 + DRM_DEBUG_KMS("%s\n", __FILE__); 189 + 190 + /* fimd dma off */ 191 + val = readl(ctx->regs + VIDCON0); 192 + val &= ~(VIDCON0_ENVID | VIDCON0_ENVID_F); 193 + writel(val, ctx->regs + VIDCON0); 194 + 195 + /* 196 + * if vblank is enabled with dma off then 197 + * disable the vsync interrupt. 198 + */ 199 + if (drm_dev->vblank_enabled[manager->pipe] && 200 + atomic_read(&drm_dev->vblank_refcount[manager->pipe])) { 201 + drm_vblank_put(drm_dev, manager->pipe); 202 + 203 + /* 204 + * if vblank_disable_allowed is 0 then disable 205 + * vsync interrupt right now else the vsync interrupt 206 + * would be disabled by drm timer once a current process 207 + * gives up ownership of vblank event.
208 + */ 209 + if (!drm_dev->vblank_disable_allowed) 210 + drm_vblank_off(drm_dev, manager->pipe); 211 + } 212 + } 213 + 180 214 static int fimd_enable_vblank(struct device *dev) 181 215 { 182 216 struct fimd_context *ctx = get_fimd_context(dev); ··· 254 220 255 221 static struct exynos_drm_manager_ops fimd_manager_ops = { 256 222 .commit = fimd_commit, 223 + .disable = fimd_disable, 257 224 .enable_vblank = fimd_enable_vblank, 258 225 .disable_vblank = fimd_disable_vblank, 259 226 }; ··· 286 251 win_data->ovl_height = overlay->crtc_height; 287 252 win_data->fb_width = overlay->fb_width; 288 253 win_data->fb_height = overlay->fb_height; 289 - win_data->paddr = overlay->paddr + offset; 254 + win_data->dma_addr = overlay->dma_addr + offset; 290 255 win_data->vaddr = overlay->vaddr + offset; 291 256 win_data->bpp = overlay->bpp; 292 257 win_data->buf_offsize = (overlay->fb_width - overlay->crtc_width) * ··· 298 263 DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n", 299 264 win_data->ovl_width, win_data->ovl_height); 300 265 DRM_DEBUG_KMS("paddr = 0x%lx, vaddr = 0x%lx\n", 301 - (unsigned long)win_data->paddr, 266 + (unsigned long)win_data->dma_addr, 302 267 (unsigned long)win_data->vaddr); 303 268 DRM_DEBUG_KMS("fb_width = %d, crtc_width = %d\n", 304 269 overlay->fb_width, overlay->crtc_width); ··· 411 376 writel(val, ctx->regs + SHADOWCON); 412 377 413 378 /* buffer start address */ 414 - val = win_data->paddr; 379 + val = (unsigned long)win_data->dma_addr; 415 380 writel(val, ctx->regs + VIDWx_BUF_START(win, 0)); 416 381 417 382 /* buffer end address */ 418 383 size = win_data->fb_width * win_data->ovl_height * (win_data->bpp >> 3); 419 - val = win_data->paddr + size; 384 + val = (unsigned long)(win_data->dma_addr + size); 420 385 writel(val, ctx->regs + VIDWx_BUF_END(win, 0)); 421 386 422 387 DRM_DEBUG_KMS("start addr = 0x%lx, end addr = 0x%lx, size = 0x%lx\n", 423 - (unsigned long)win_data->paddr, val, size); 388 + (unsigned long)win_data->dma_addr, val, size); 424 
389 DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n", 425 390 win_data->ovl_width, win_data->ovl_height); 426 391 ··· 482 447 static void fimd_win_disable(struct device *dev) 483 448 { 484 449 struct fimd_context *ctx = get_fimd_context(dev); 485 - struct fimd_win_data *win_data; 486 450 int win = ctx->default_win; 487 451 u32 val; 488 452 ··· 489 455 490 456 if (win < 0 || win > WINDOWS_NR) 491 457 return; 492 - 493 - win_data = &ctx->win_data[win]; 494 458 495 459 /* protect windows */ 496 460 val = readl(ctx->regs + SHADOWCON); ··· 560 528 /* VSYNC interrupt */ 561 529 writel(VIDINTCON1_INT_FRAME, ctx->regs + VIDINTCON1); 562 530 531 + /* 532 + * in case that vblank_disable_allowed is 1, it could induce 533 + * the problem that manager->pipe could be -1 because with 534 + * disable callback, vsync interrupt isn't disabled and at this moment, 535 + * vsync interrupt could occur. the vsync interrupt would be disabled 536 + * by timer handler later. 537 + */ 538 + if (manager->pipe == -1) 539 + return IRQ_HANDLED; 540 + 563 541 drm_handle_vblank(drm_dev, manager->pipe); 564 542 fimd_finish_pageflip(drm_dev, manager->pipe); 565 543 ··· 589 547 * drm framework supports only one irq handler. 590 548 */ 591 549 drm_dev->irq_enabled = 1; 592 - 593 - /* 594 - * with vblank_disable_allowed = 1, vblank interrupt will be disabled 595 - * by drm timer once a current process gives up ownership of 596 - * vblank event.(drm_vblank_put function was called) 597 - */ 598 - drm_dev->vblank_disable_allowed = 1; 599 550 600 551 return 0; 601 552 } ··· 766 731 subdrv->manager.pipe = -1; 767 732 subdrv->manager.ops = &fimd_manager_ops; 768 733 subdrv->manager.overlay_ops = &fimd_overlay_ops; 769 - subdrv->manager.display = &fimd_display; 734 + subdrv->manager.display_ops = &fimd_display_ops; 770 735 subdrv->manager.dev = dev; 771 736 772 737 platform_set_drvdata(pdev, ctx);
+52 -37
drivers/gpu/drm/exynos/exynos_drm_gem.c
··· 62 62 return (unsigned int)obj->map_list.hash.key << PAGE_SHIFT; 63 63 } 64 64 65 - struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv, 66 - struct drm_device *dev, unsigned int size, 67 - unsigned int *handle) 65 + static struct exynos_drm_gem_obj 66 + *exynos_drm_gem_init(struct drm_device *drm_dev, 67 + struct drm_file *file_priv, unsigned int *handle, 68 + unsigned int size) 68 69 { 69 70 struct exynos_drm_gem_obj *exynos_gem_obj; 70 - struct exynos_drm_buf_entry *entry; 71 71 struct drm_gem_object *obj; 72 72 int ret; 73 - 74 - DRM_DEBUG_KMS("%s\n", __FILE__); 75 - 76 - size = roundup(size, PAGE_SIZE); 77 73 78 74 exynos_gem_obj = kzalloc(sizeof(*exynos_gem_obj), GFP_KERNEL); 79 75 if (!exynos_gem_obj) { ··· 77 81 return ERR_PTR(-ENOMEM); 78 82 } 79 83 80 - /* allocate the new buffer object and memory region. */ 81 - entry = exynos_drm_buf_create(dev, size); 82 - if (!entry) { 83 - kfree(exynos_gem_obj); 84 - return ERR_PTR(-ENOMEM); 85 - } 86 - 87 - exynos_gem_obj->entry = entry; 88 - 89 84 obj = &exynos_gem_obj->base; 90 85 91 - ret = drm_gem_object_init(dev, obj, size); 86 + ret = drm_gem_object_init(drm_dev, obj, size); 92 87 if (ret < 0) { 93 - DRM_ERROR("failed to initailize gem object.\n"); 94 - goto err_obj_init; 88 + DRM_ERROR("failed to initialize gem object.\n"); 89 + ret = -EINVAL; 90 + goto err_object_init; 95 91 } 96 92 97 93 DRM_DEBUG_KMS("created file object = 0x%x\n", (unsigned int)obj->filp); ··· 115 127 err_create_mmap_offset: 116 128 drm_gem_object_release(obj); 117 129 118 - err_obj_init: 119 - exynos_drm_buf_destroy(dev, exynos_gem_obj->entry); 120 - 130 + err_object_init: 121 131 kfree(exynos_gem_obj); 122 132 123 133 return ERR_PTR(ret); 124 134 } 125 135 136 + struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev, 137 + struct drm_file *file_priv, 138 + unsigned int *handle, unsigned long size) 139 + { 140 + 141 + struct exynos_drm_gem_obj *exynos_gem_obj = NULL; 142 + struct 
exynos_drm_gem_buf *buffer; 143 + 144 + size = roundup(size, PAGE_SIZE); 145 + 146 + DRM_DEBUG_KMS("%s: size = 0x%lx\n", __FILE__, size); 147 + 148 + buffer = exynos_drm_buf_create(dev, size); 149 + if (IS_ERR(buffer)) { 150 + return ERR_CAST(buffer); 151 + } 152 + 153 + exynos_gem_obj = exynos_drm_gem_init(dev, file_priv, handle, size); 154 + if (IS_ERR(exynos_gem_obj)) { 155 + exynos_drm_buf_destroy(dev, buffer); 156 + return exynos_gem_obj; 157 + } 158 + 159 + exynos_gem_obj->buffer = buffer; 160 + 161 + return exynos_gem_obj; 162 + } 163 + 126 164 int exynos_drm_gem_create_ioctl(struct drm_device *dev, void *data, 127 - struct drm_file *file_priv) 165 + struct drm_file *file_priv) 128 166 { 129 167 struct drm_exynos_gem_create *args = data; 130 - struct exynos_drm_gem_obj *exynos_gem_obj; 168 + struct exynos_drm_gem_obj *exynos_gem_obj = NULL; 131 169 132 - DRM_DEBUG_KMS("%s : size = 0x%x\n", __FILE__, args->size); 170 + DRM_DEBUG_KMS("%s\n", __FILE__); 133 171 134 - exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size, 135 - &args->handle); 172 + exynos_gem_obj = exynos_drm_gem_create(dev, file_priv, 173 + &args->handle, args->size); 136 174 if (IS_ERR(exynos_gem_obj)) 137 175 return PTR_ERR(exynos_gem_obj); 138 176 ··· 189 175 { 190 176 struct drm_gem_object *obj = filp->private_data; 191 177 struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj); 192 - struct exynos_drm_buf_entry *entry; 178 + struct exynos_drm_gem_buf *buffer; 193 179 unsigned long pfn, vm_size; 194 180 195 181 DRM_DEBUG_KMS("%s\n", __FILE__); ··· 201 187 202 188 vm_size = vma->vm_end - vma->vm_start; 203 189 /* 204 - * a entry contains information to physically continuous memory 190 + * a buffer contains information to physically continuous memory 205 191 * allocated by user request or at framebuffer creation. 206 192 */ 207 - entry = exynos_gem_obj->entry; 193 + buffer = exynos_gem_obj->buffer; 208 194 209 195 /* check if user-requested size is valid. 
*/ 210 - if (vm_size > entry->size) 196 + if (vm_size > buffer->size) 211 197 return -EINVAL; 212 198 213 199 /* 214 200 * get page frame number to physical memory to be mapped 215 201 * to user space. 216 202 */ 217 - pfn = exynos_gem_obj->entry->paddr >> PAGE_SHIFT; 203 + pfn = ((unsigned long)exynos_gem_obj->buffer->dma_addr) >> PAGE_SHIFT; 218 204 219 205 DRM_DEBUG_KMS("pfn = 0x%lx\n", pfn); 220 206 ··· 295 281 296 282 exynos_gem_obj = to_exynos_gem_obj(gem_obj); 297 283 298 - exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->entry); 284 + exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->buffer); 299 285 300 286 kfree(exynos_gem_obj); 301 287 } ··· 316 302 args->pitch = args->width * args->bpp >> 3; 317 303 args->size = args->pitch * args->height; 318 304 319 - exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size, 320 - &args->handle); 305 + exynos_gem_obj = exynos_drm_gem_create(dev, file_priv, &args->handle, 306 + args->size); 321 307 if (IS_ERR(exynos_gem_obj)) 322 308 return PTR_ERR(exynos_gem_obj); 323 309 ··· 374 360 375 361 mutex_lock(&dev->struct_mutex); 376 362 377 - pfn = (exynos_gem_obj->entry->paddr >> PAGE_SHIFT) + page_offset; 363 + pfn = (((unsigned long)exynos_gem_obj->buffer->dma_addr) >> 364 + PAGE_SHIFT) + page_offset; 378 365 379 366 ret = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn); 380 367
+22 -6
drivers/gpu/drm/exynos/exynos_drm_gem.h
··· 30 30 struct exynos_drm_gem_obj, base) 31 31 32 32 /* 33 + * exynos drm gem buffer structure. 34 + * 35 + * @kvaddr: kernel virtual address to allocated memory region. 36 + * @dma_addr: bus address(accessed by dma) to allocated memory region. 37 + * - this address could be physical address without IOMMU and 38 + * device address with IOMMU. 39 + * @size: size of allocated memory region. 40 + */ 41 + struct exynos_drm_gem_buf { 42 + void __iomem *kvaddr; 43 + dma_addr_t dma_addr; 44 + unsigned long size; 45 + }; 46 + 47 + /* 33 48 * exynos drm buffer structure. 34 49 * 35 50 * @base: a gem object. 36 51 * - a new handle to this gem object would be created 37 52 * by drm_gem_handle_create(). 38 - * @entry: pointer to exynos drm buffer entry object. 39 - * - containing the information to physically 53 + * @buffer: a pointer to exynos_drm_gem_buffer object. 54 + * - contains the information for the memory region allocated 55 + * by user request or at framebuffer creation. 40 56 * continuous memory region allocated by user request 41 57 * or at framebuffer creation. 42 58 * ··· 61 45 */ 62 46 struct exynos_drm_gem_obj { 63 47 struct drm_gem_object base; 64 - struct exynos_drm_buf_entry *entry; 48 + struct exynos_drm_gem_buf *buffer; 65 49 }; 66 50 67 51 /* create a new buffer and get a new gem handle. */ 68 - struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv, 69 - struct drm_device *dev, unsigned int size, 70 - unsigned int *handle); 52 + struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev, 53 + struct drm_file *file_priv, 54 + unsigned int *handle, unsigned long size); 71 55 72 56 /*
 73 57 * request gem object creation and buffer allocation as the size
-1
drivers/hwmon/ad7314.c
··· 160 160 static struct spi_driver ad7314_driver = { 161 161 .driver = { 162 162 .name = "ad7314", 163 - .bus = &spi_bus_type, 164 163 .owner = THIS_MODULE, 165 164 }, 166 165 .probe = ad7314_probe,
-1
drivers/hwmon/ads7871.c
··· 227 227 static struct spi_driver ads7871_driver = { 228 228 .driver = { 229 229 .name = DEVICE_NAME, 230 - .bus = &spi_bus_type, 231 230 .owner = THIS_MODULE, 232 231 }, 233 232
+1 -11
drivers/hwmon/exynos4_tmu.c
··· 506 506 .resume = exynos4_tmu_resume, 507 507 }; 508 508 509 - static int __init exynos4_tmu_driver_init(void) 510 - { 511 - return platform_driver_register(&exynos4_tmu_driver); 512 - } 513 - module_init(exynos4_tmu_driver_init); 514 - 515 - static void __exit exynos4_tmu_driver_exit(void) 516 - { 517 - platform_driver_unregister(&exynos4_tmu_driver); 518 - } 519 - module_exit(exynos4_tmu_driver_exit); 509 + module_platform_driver(exynos4_tmu_driver); 520 510 521 511 MODULE_DESCRIPTION("EXYNOS4 TMU Driver"); 522 512 MODULE_AUTHOR("Donggeun Kim <dg77.kim@samsung.com>");
+1 -12
drivers/hwmon/gpio-fan.c
··· 539 539 }, 540 540 }; 541 541 542 - static int __init gpio_fan_init(void) 543 - { 544 - return platform_driver_register(&gpio_fan_driver); 545 - } 546 - 547 - static void __exit gpio_fan_exit(void) 548 - { 549 - platform_driver_unregister(&gpio_fan_driver); 550 - } 551 - 552 - module_init(gpio_fan_init); 553 - module_exit(gpio_fan_exit); 542 + module_platform_driver(gpio_fan_driver); 554 543 555 544 MODULE_AUTHOR("Simon Guinot <sguinot@lacie.com>"); 556 545 MODULE_DESCRIPTION("GPIO FAN driver");
+1 -11
drivers/hwmon/jz4740-hwmon.c
··· 212 212 }, 213 213 }; 214 214 215 - static int __init jz4740_hwmon_init(void) 216 - { 217 - return platform_driver_register(&jz4740_hwmon_driver); 218 - } 219 - module_init(jz4740_hwmon_init); 220 - 221 - static void __exit jz4740_hwmon_exit(void) 222 - { 223 - platform_driver_unregister(&jz4740_hwmon_driver); 224 - } 225 - module_exit(jz4740_hwmon_exit); 215 + module_platform_driver(jz4740_hwmon_driver); 226 216 227 217 MODULE_DESCRIPTION("JZ4740 SoC HWMON driver"); 228 218 MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
+1 -13
drivers/hwmon/ntc_thermistor.c
··· 432 432 .id_table = ntc_thermistor_id, 433 433 }; 434 434 435 - static int __init ntc_thermistor_init(void) 436 - { 437 - return platform_driver_register(&ntc_thermistor_driver); 438 - } 439 - 440 - module_init(ntc_thermistor_init); 441 - 442 - static void __exit ntc_thermistor_cleanup(void) 443 - { 444 - platform_driver_unregister(&ntc_thermistor_driver); 445 - } 446 - 447 - module_exit(ntc_thermistor_cleanup); 435 + module_platform_driver(ntc_thermistor_driver); 448 436 449 437 MODULE_DESCRIPTION("NTC Thermistor Driver"); 450 438 MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
+1 -12
drivers/hwmon/s3c-hwmon.c
··· 393 393 .remove = __devexit_p(s3c_hwmon_remove), 394 394 }; 395 395 396 - static int __init s3c_hwmon_init(void) 397 - { 398 - return platform_driver_register(&s3c_hwmon_driver); 399 - } 400 - 401 - static void __exit s3c_hwmon_exit(void) 402 - { 403 - platform_driver_unregister(&s3c_hwmon_driver); 404 - } 405 - 406 - module_init(s3c_hwmon_init); 407 - module_exit(s3c_hwmon_exit); 396 + module_platform_driver(s3c_hwmon_driver); 408 397 409 398 MODULE_AUTHOR("Ben Dooks <ben@simtec.co.uk>"); 410 399 MODULE_DESCRIPTION("S3C ADC HWMon driver");
+1 -12
drivers/hwmon/sch5627.c
··· 590 590 .remove = sch5627_remove, 591 591 }; 592 592 593 - static int __init sch5627_init(void) 594 - { 595 - return platform_driver_register(&sch5627_driver); 596 - } 597 - 598 - static void __exit sch5627_exit(void) 599 - { 600 - platform_driver_unregister(&sch5627_driver); 601 - } 593 + module_platform_driver(sch5627_driver); 602 594 603 595 MODULE_DESCRIPTION("SMSC SCH5627 Hardware Monitoring Driver"); 604 596 MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>"); 605 597 MODULE_LICENSE("GPL"); 606 - 607 - module_init(sch5627_init); 608 - module_exit(sch5627_exit);
+1 -12
drivers/hwmon/sch5636.c
··· 521 521 .remove = sch5636_remove, 522 522 }; 523 523 524 - static int __init sch5636_init(void) 525 - { 526 - return platform_driver_register(&sch5636_driver); 527 - } 528 - 529 - static void __exit sch5636_exit(void) 530 - { 531 - platform_driver_unregister(&sch5636_driver); 532 - } 524 + module_platform_driver(sch5636_driver); 533 525 534 526 MODULE_DESCRIPTION("SMSC SCH5636 Hardware Monitoring Driver"); 535 527 MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>"); 536 528 MODULE_LICENSE("GPL"); 537 - 538 - module_init(sch5636_init); 539 - module_exit(sch5636_exit);
+1 -13
drivers/hwmon/twl4030-madc-hwmon.c
··· 136 136 }, 137 137 }; 138 138 139 - static int __init twl4030_madc_hwmon_init(void) 140 - { 141 - return platform_driver_register(&twl4030_madc_hwmon_driver); 142 - } 143 - 144 - module_init(twl4030_madc_hwmon_init); 145 - 146 - static void __exit twl4030_madc_hwmon_exit(void) 147 - { 148 - platform_driver_unregister(&twl4030_madc_hwmon_driver); 149 - } 150 - 151 - module_exit(twl4030_madc_hwmon_exit); 139 + module_platform_driver(twl4030_madc_hwmon_driver); 152 140 153 141 MODULE_DESCRIPTION("TWL4030 ADC Hwmon driver"); 154 142 MODULE_LICENSE("GPL");
+1 -12
drivers/hwmon/ultra45_env.c
··· 309 309 .remove = __devexit_p(env_remove), 310 310 }; 311 311 312 - static int __init env_init(void) 313 - { 314 - return platform_driver_register(&env_driver); 315 - } 316 - 317 - static void __exit env_exit(void) 318 - { 319 - platform_driver_unregister(&env_driver); 320 - } 321 - 322 - module_init(env_init); 323 - module_exit(env_exit); 312 + module_platform_driver(env_driver);
+1 -11
drivers/hwmon/wm831x-hwmon.c
··· 209 209 }, 210 210 }; 211 211 212 - static int __init wm831x_hwmon_init(void) 213 - { 214 - return platform_driver_register(&wm831x_hwmon_driver); 215 - } 216 - module_init(wm831x_hwmon_init); 217 - 218 - static void __exit wm831x_hwmon_exit(void) 219 - { 220 - platform_driver_unregister(&wm831x_hwmon_driver); 221 - } 222 - module_exit(wm831x_hwmon_exit); 212 + module_platform_driver(wm831x_hwmon_driver); 223 213 224 214 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>"); 225 215 MODULE_DESCRIPTION("WM831x Hardware Monitoring");
+1 -11
drivers/hwmon/wm8350-hwmon.c
··· 133 133 }, 134 134 }; 135 135 136 - static int __init wm8350_hwmon_init(void) 137 - { 138 - return platform_driver_register(&wm8350_hwmon_driver); 139 - } 140 - module_init(wm8350_hwmon_init); 141 - 142 - static void __exit wm8350_hwmon_exit(void) 143 - { 144 - platform_driver_unregister(&wm8350_hwmon_driver); 145 - } 146 - module_exit(wm8350_hwmon_exit); 136 + module_platform_driver(wm8350_hwmon_driver); 147 137 148 138 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>"); 149 139 MODULE_DESCRIPTION("WM8350 Hardware Monitoring");
+1 -1
drivers/i2c/busses/i2c-nuc900.c
··· 593 593 i2c->adap.algo_data = i2c; 594 594 i2c->adap.dev.parent = &pdev->dev; 595 595 596 - mfp_set_groupg(&pdev->dev); 596 + mfp_set_groupg(&pdev->dev, NULL); 597 597 598 598 clk_get_rate(i2c->clk); 599 599
+6 -3
drivers/infiniband/core/addr.c
··· 216 216 217 217 neigh = neigh_lookup(&arp_tbl, &rt->rt_gateway, rt->dst.dev); 218 218 if (!neigh || !(neigh->nud_state & NUD_VALID)) { 219 + rcu_read_lock(); 219 220 neigh_event_send(dst_get_neighbour(&rt->dst), NULL); 221 + rcu_read_unlock(); 220 222 ret = -ENODATA; 221 223 if (neigh) 222 224 goto release; ··· 276 274 goto put; 277 275 } 278 276 277 + rcu_read_lock(); 279 278 neigh = dst_get_neighbour(dst); 280 279 if (!neigh || !(neigh->nud_state & NUD_VALID)) { 281 280 if (neigh) 282 281 neigh_event_send(neigh, NULL); 283 282 ret = -ENODATA; 284 - goto put; 283 + } else { 284 + ret = rdma_copy_addr(addr, dst->dev, neigh->ha); 285 285 } 286 - 287 - ret = rdma_copy_addr(addr, dst->dev, neigh->ha); 286 + rcu_read_unlock(); 288 287 put: 289 288 dst_release(dst); 290 289 return ret;
+4
drivers/infiniband/hw/cxgb3/iwch_cm.c
··· 1375 1375 goto reject; 1376 1376 } 1377 1377 dst = &rt->dst; 1378 + rcu_read_lock(); 1378 1379 neigh = dst_get_neighbour(dst); 1379 1380 l2t = t3_l2t_get(tdev, neigh, neigh->dev); 1381 + rcu_read_unlock(); 1380 1382 if (!l2t) { 1381 1383 printk(KERN_ERR MOD "%s - failed to allocate l2t entry!\n", 1382 1384 __func__); ··· 1948 1946 } 1949 1947 ep->dst = &rt->dst; 1950 1948 1949 + rcu_read_lock(); 1951 1950 neigh = dst_get_neighbour(ep->dst); 1952 1951 1953 1952 /* get a l2t entry */ 1954 1953 ep->l2t = t3_l2t_get(ep->com.tdev, neigh, neigh->dev); 1954 + rcu_read_unlock(); 1955 1955 if (!ep->l2t) { 1956 1956 printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); 1957 1957 err = -ENOMEM;
+9 -1
drivers/infiniband/hw/cxgb4/cm.c
··· 542 542 (mpa_rev_to_use == 2 ? MPA_ENHANCED_RDMA_CONN : 0); 543 543 mpa->private_data_size = htons(ep->plen); 544 544 mpa->revision = mpa_rev_to_use; 545 - if (mpa_rev_to_use == 1) 545 + if (mpa_rev_to_use == 1) { 546 546 ep->tried_with_mpa_v1 = 1; 547 + ep->retry_with_mpa_v1 = 0; 548 + } 547 549 548 550 if (mpa_rev_to_use == 2) { 549 551 mpa->private_data_size += ··· 1596 1594 goto reject; 1597 1595 } 1598 1596 dst = &rt->dst; 1597 + rcu_read_lock(); 1599 1598 neigh = dst_get_neighbour(dst); 1600 1599 if (neigh->dev->flags & IFF_LOOPBACK) { 1601 1600 pdev = ip_dev_find(&init_net, peer_ip); ··· 1623 1620 rss_qid = dev->rdev.lldi.rxq_ids[ 1624 1621 cxgb4_port_idx(neigh->dev) * step]; 1625 1622 } 1623 + rcu_read_unlock(); 1626 1624 if (!l2t) { 1627 1625 printk(KERN_ERR MOD "%s - failed to allocate l2t entry!\n", 1628 1626 __func__); ··· 1824 1820 } 1825 1821 ep->dst = &rt->dst; 1826 1822 1823 + rcu_read_lock(); 1827 1824 neigh = dst_get_neighbour(ep->dst); 1828 1825 1829 1826 /* get a l2t entry */ ··· 1861 1856 ep->rss_qid = ep->com.dev->rdev.lldi.rxq_ids[ 1862 1857 cxgb4_port_idx(neigh->dev) * step]; 1863 1858 } 1859 + rcu_read_unlock(); 1864 1860 if (!ep->l2t) { 1865 1861 printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); 1866 1862 err = -ENOMEM; ··· 2307 2301 } 2308 2302 ep->dst = &rt->dst; 2309 2303 2304 + rcu_read_lock(); 2310 2305 neigh = dst_get_neighbour(ep->dst); 2311 2306 2312 2307 /* get a l2t entry */ ··· 2346 2339 ep->retry_with_mpa_v1 = 0; 2347 2340 ep->tried_with_mpa_v1 = 0; 2348 2341 } 2342 + rcu_read_unlock(); 2349 2343 if (!ep->l2t) { 2350 2344 printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); 2351 2345 err = -ENOMEM;
+1 -1
drivers/infiniband/hw/cxgb4/cq.c
··· 311 311 while (ptr != cq->sw_pidx) { 312 312 cqe = &cq->sw_queue[ptr]; 313 313 if (RQ_TYPE(cqe) && (CQE_OPCODE(cqe) != FW_RI_READ_RESP) && 314 - (CQE_QPID(cqe) == wq->rq.qid) && cqe_completes_wr(cqe, wq)) 314 + (CQE_QPID(cqe) == wq->sq.qid) && cqe_completes_wr(cqe, wq)) 315 315 (*count)++; 316 316 if (++ptr == cq->size) 317 317 ptr = 0;
+4 -2
drivers/infiniband/hw/nes/nes_cm.c
··· 1377 1377 neigh_release(neigh); 1378 1378 } 1379 1379 1380 - if ((neigh == NULL) || (!(neigh->nud_state & NUD_VALID))) 1380 + if ((neigh == NULL) || (!(neigh->nud_state & NUD_VALID))) { 1381 + rcu_read_lock(); 1381 1382 neigh_event_send(dst_get_neighbour(&rt->dst), NULL); 1382 - 1383 + rcu_read_unlock(); 1384 + } 1383 1385 ip_rt_put(rt); 1384 1386 return rc; 1385 1387 }
+9 -9
drivers/infiniband/hw/qib/qib_iba7322.c
··· 2307 2307 SYM_LSB(IBCCtrlA_0, MaxPktLen); 2308 2308 ppd->cpspec->ibcctrl_a = ibc; /* without linkcmd or linkinitcmd! */ 2309 2309 2310 - /* initially come up waiting for TS1, without sending anything. */ 2311 - val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE << 2312 - QLOGIC_IB_IBCC_LINKINITCMD_SHIFT); 2313 - 2314 - ppd->cpspec->ibcctrl_a = val; 2315 2310 /* 2316 2311 * Reset the PCS interface to the serdes (and also ibc, which is still 2317 2312 * in reset from above). Writes new value of ibcctrl_a as last step. 2318 2313 */ 2319 2314 qib_7322_mini_pcs_reset(ppd); 2320 - qib_write_kreg(dd, kr_scratch, 0ULL); 2321 - /* clear the linkinit cmds */ 2322 - ppd->cpspec->ibcctrl_a &= ~SYM_MASK(IBCCtrlA_0, LinkInitCmd); 2323 2315 2324 2316 if (!ppd->cpspec->ibcctrl_b) { 2325 2317 unsigned lse = ppd->link_speed_enabled; ··· 2376 2384 /* Enable port */ 2377 2385 ppd->cpspec->ibcctrl_a |= SYM_MASK(IBCCtrlA_0, IBLinkEn); 2378 2386 set_vls(ppd); 2387 + 2388 + /* initially come up DISABLED, without sending anything. */ 2389 + val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE << 2390 + QLOGIC_IB_IBCC_LINKINITCMD_SHIFT); 2391 + qib_write_kreg_port(ppd, krp_ibcctrl_a, val); 2392 + qib_write_kreg(dd, kr_scratch, 0ULL); 2393 + /* clear the linkinit cmds */ 2394 + ppd->cpspec->ibcctrl_a = val & ~SYM_MASK(IBCCtrlA_0, LinkInitCmd); 2379 2395 2380 2396 /* be paranoid against later code motion, etc. */ 2381 2397 spin_lock_irqsave(&dd->cspec->rcvmod_lock, flags); ··· 5241 5241 off */ 5242 5242 if (ppd->dd->flags & QIB_HAS_QSFP) { 5243 5243 qd->t_insert = get_jiffies_64(); 5244 - schedule_work(&qd->work); 5244 + queue_work(ib_wq, &qd->work); 5245 5245 } 5246 5246 spin_lock_irqsave(&ppd->sdma_lock, flags); 5247 5247 if (__qib_sdma_running(ppd))
-12
drivers/infiniband/hw/qib/qib_qsfp.c
··· 480 480 udelay(20); /* Generous RST dwell */ 481 481 482 482 dd->f_gpio_mod(dd, mask, mask, mask); 483 - /* Spec says module can take up to two seconds! */ 484 - mask = QSFP_GPIO_MOD_PRS_N; 485 - if (qd->ppd->hw_pidx) 486 - mask <<= QSFP_GPIO_PORT2_SHIFT; 487 - 488 - /* Do not try to wait here. Better to let event handle it */ 489 - if (!qib_qsfp_mod_present(qd->ppd)) 490 - goto bail; 491 - /* We see a module, but it may be unwise to look yet. Just schedule */ 492 - qd->t_insert = get_jiffies_64(); 493 - queue_work(ib_wq, &qd->work); 494 - bail: 495 483 return; 496 484 } 497 485
+8 -5
drivers/infiniband/ulp/ipoib/ipoib_ib.c
··· 57 57 struct ib_pd *pd, struct ib_ah_attr *attr) 58 58 { 59 59 struct ipoib_ah *ah; 60 + struct ib_ah *vah; 60 61 61 62 ah = kmalloc(sizeof *ah, GFP_KERNEL); 62 63 if (!ah) 63 - return NULL; 64 + return ERR_PTR(-ENOMEM); 64 65 65 66 ah->dev = dev; 66 67 ah->last_send = 0; 67 68 kref_init(&ah->ref); 68 69 69 - ah->ah = ib_create_ah(pd, attr); 70 - if (IS_ERR(ah->ah)) { 70 + vah = ib_create_ah(pd, attr); 71 + if (IS_ERR(vah)) { 71 72 kfree(ah); 72 - ah = NULL; 73 - } else 73 + ah = (struct ipoib_ah *)vah; 74 + } else { 75 + ah->ah = vah; 74 76 ipoib_dbg(netdev_priv(dev), "Created ah %p\n", ah->ah); 77 + } 75 78 76 79 return ah; 77 80 }
+12 -8
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 432 432 433 433 spin_lock_irqsave(&priv->lock, flags); 434 434 435 - if (ah) { 435 + if (!IS_ERR_OR_NULL(ah)) { 436 436 path->pathrec = *pathrec; 437 437 438 438 old_ah = path->ah; ··· 555 555 return 0; 556 556 } 557 557 558 + /* called with rcu_read_lock */ 558 559 static void neigh_add_path(struct sk_buff *skb, struct net_device *dev) 559 560 { 560 561 struct ipoib_dev_priv *priv = netdev_priv(dev); ··· 637 636 spin_unlock_irqrestore(&priv->lock, flags); 638 637 } 639 638 639 + /* called with rcu_read_lock */ 640 640 static void ipoib_path_lookup(struct sk_buff *skb, struct net_device *dev) 641 641 { 642 642 struct ipoib_dev_priv *priv = netdev_priv(skb->dev); ··· 722 720 struct neighbour *n = NULL; 723 721 unsigned long flags; 724 722 723 + rcu_read_lock(); 725 724 if (likely(skb_dst(skb))) 726 725 n = dst_get_neighbour(skb_dst(skb)); 727 726 728 727 if (likely(n)) { 729 728 if (unlikely(!*to_ipoib_neigh(n))) { 730 729 ipoib_path_lookup(skb, dev); 731 - return NETDEV_TX_OK; 730 + goto unlock; 732 731 } 733 732 734 733 neigh = *to_ipoib_neigh(n); ··· 752 749 ipoib_neigh_free(dev, neigh); 753 750 spin_unlock_irqrestore(&priv->lock, flags); 754 751 ipoib_path_lookup(skb, dev); 755 - return NETDEV_TX_OK; 752 + goto unlock; 756 753 } 757 754 758 755 if (ipoib_cm_get(neigh)) { 759 756 if (ipoib_cm_up(neigh)) { 760 757 ipoib_cm_send(dev, skb, ipoib_cm_get(neigh)); 761 - return NETDEV_TX_OK; 758 + goto unlock; 762 759 } 763 760 } else if (neigh->ah) { 764 761 ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(n->ha)); 765 - return NETDEV_TX_OK; 762 + goto unlock; 766 763 } 767 764 768 765 if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) { ··· 796 793 phdr->hwaddr + 4); 797 794 dev_kfree_skb_any(skb); 798 795 ++dev->stats.tx_dropped; 799 - return NETDEV_TX_OK; 796 + goto unlock; 800 797 } 801 798 802 799 unicast_arp_send(skb, dev, phdr); 803 800 } 804 801 } 805 - 802 + unlock: 803 + rcu_read_unlock(); 806 804 return NETDEV_TX_OK; 807 805 } 808 806 ··· 841 837 dst 
= skb_dst(skb); 842 838 n = NULL; 843 839 if (dst) 844 - n = dst_get_neighbour(dst); 840 + n = dst_get_neighbour_raw(dst); 845 841 if ((!dst || !n) && daddr) { 846 842 struct ipoib_pseudoheader *phdr = 847 843 (struct ipoib_pseudoheader *) skb_push(skb, sizeof *phdr);
+9 -4
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
··· 240 240 av.grh.dgid = mcast->mcmember.mgid; 241 241 242 242 ah = ipoib_create_ah(dev, priv->pd, &av); 243 - if (!ah) { 244 - ipoib_warn(priv, "ib_address_create failed\n"); 243 + if (IS_ERR(ah)) { 244 + ipoib_warn(priv, "ib_address_create failed %ld\n", 245 + -PTR_ERR(ah)); 246 + /* use original error */ 247 + return PTR_ERR(ah); 245 248 } else { 246 249 spin_lock_irq(&priv->lock); 247 250 mcast->ah = ah; ··· 269 266 270 267 skb->dev = dev; 271 268 if (dst) 272 - n = dst_get_neighbour(dst); 269 + n = dst_get_neighbour_raw(dst); 273 270 if (!dst || !n) { 274 271 /* put pseudoheader back on for next time */ 275 272 skb_push(skb, sizeof (struct ipoib_pseudoheader)); ··· 725 722 if (mcast && mcast->ah) { 726 723 struct dst_entry *dst = skb_dst(skb); 727 724 struct neighbour *n = NULL; 725 + 726 + rcu_read_lock(); 728 727 if (dst) 729 728 n = dst_get_neighbour(dst); 730 729 if (n && !*to_ipoib_neigh(n)) { ··· 739 734 list_add_tail(&neigh->list, &mcast->neigh_list); 740 735 } 741 736 } 742 - 737 + rcu_read_unlock(); 743 738 spin_unlock_irqrestore(&priv->lock, flags); 744 739 ipoib_send(dev, skb, mcast->ah, IB_MULTICAST_QPN); 745 740 return;
+2 -1
drivers/net/wireless/ath/ath9k/hw.c
··· 1827 1827 } 1828 1828 1829 1829 /* Clear Bit 14 of AR_WA after putting chip into Full Sleep mode. */ 1830 - REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE); 1830 + if (AR_SREV_9300_20_OR_LATER(ah)) 1831 + REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE); 1831 1832 } 1832 1833 1833 1834 /*
+2
drivers/of/irq.c
··· 424 424 425 425 desc->dev = np; 426 426 desc->interrupt_parent = of_irq_find_parent(np); 427 + if (desc->interrupt_parent == np) 428 + desc->interrupt_parent = NULL; 427 429 list_add_tail(&desc->list, &intc_desc_list); 428 430 } 429 431
+1 -1
drivers/regulator/aat2870-regulator.c
··· 160 160 break; 161 161 } 162 162 163 - if (!ri) 163 + if (i == ARRAY_SIZE(aat2870_regulators)) 164 164 return NULL; 165 165 166 166 ri->enable_addr = AAT2870_LDO_EN;
+1 -1
drivers/regulator/core.c
··· 2799 2799 list_del(&rdev->list); 2800 2800 if (rdev->supply) 2801 2801 regulator_put(rdev->supply); 2802 - device_unregister(&rdev->dev); 2803 2802 kfree(rdev->constraints); 2803 + device_unregister(&rdev->dev); 2804 2804 mutex_unlock(&regulator_list_mutex); 2805 2805 } 2806 2806 EXPORT_SYMBOL_GPL(regulator_unregister);
+44 -2
drivers/regulator/twl-regulator.c
··· 71 71 #define VREG_TYPE 1 72 72 #define VREG_REMAP 2 73 73 #define VREG_DEDICATED 3 /* LDO control */ 74 + #define VREG_VOLTAGE_SMPS_4030 9 74 75 /* TWL6030 register offsets */ 75 76 #define VREG_TRANS 1 76 77 #define VREG_STATE 2 ··· 515 514 .get_status = twl4030reg_get_status, 516 515 }; 517 516 517 + static int 518 + twl4030smps_set_voltage(struct regulator_dev *rdev, int min_uV, int max_uV, 519 + unsigned *selector) 520 + { 521 + struct twlreg_info *info = rdev_get_drvdata(rdev); 522 + int vsel = DIV_ROUND_UP(min_uV - 600000, 12500); 523 + 524 + twlreg_write(info, TWL_MODULE_PM_RECEIVER, VREG_VOLTAGE_SMPS_4030, 525 + vsel); 526 + return 0; 527 + } 528 + 529 + static int twl4030smps_get_voltage(struct regulator_dev *rdev) 530 + { 531 + struct twlreg_info *info = rdev_get_drvdata(rdev); 532 + int vsel = twlreg_read(info, TWL_MODULE_PM_RECEIVER, 533 + VREG_VOLTAGE_SMPS_4030); 534 + 535 + return vsel * 12500 + 600000; 536 + } 537 + 538 + static struct regulator_ops twl4030smps_ops = { 539 + .set_voltage = twl4030smps_set_voltage, 540 + .get_voltage = twl4030smps_get_voltage, 541 + }; 542 + 518 543 static int twl6030ldo_list_voltage(struct regulator_dev *rdev, unsigned index) 519 544 { 520 545 struct twlreg_info *info = rdev_get_drvdata(rdev); ··· 883 856 }, \ 884 857 } 885 858 859 + #define TWL4030_ADJUSTABLE_SMPS(label, offset, num, turnon_delay, remap_conf) \ 860 + { \ 861 + .base = offset, \ 862 + .id = num, \ 863 + .delay = turnon_delay, \ 864 + .remap = remap_conf, \ 865 + .desc = { \ 866 + .name = #label, \ 867 + .id = TWL4030_REG_##label, \ 868 + .ops = &twl4030smps_ops, \ 869 + .type = REGULATOR_VOLTAGE, \ 870 + .owner = THIS_MODULE, \ 871 + }, \ 872 + } 873 + 886 874 #define TWL6030_ADJUSTABLE_LDO(label, offset, min_mVolts, max_mVolts) { \ 887 875 .base = offset, \ 888 876 .min_mV = min_mVolts, \ ··· 989 947 TWL4030_ADJUSTABLE_LDO(VINTANA2, 0x43, 12, 100, 0x08), 990 948 TWL4030_FIXED_LDO(VINTDIG, 0x47, 1500, 13, 100, 0x08), 991 949 
TWL4030_ADJUSTABLE_LDO(VIO, 0x4b, 14, 1000, 0x08), 992 - TWL4030_ADJUSTABLE_LDO(VDD1, 0x55, 15, 1000, 0x08), 993 - TWL4030_ADJUSTABLE_LDO(VDD2, 0x63, 16, 1000, 0x08), 950 + TWL4030_ADJUSTABLE_SMPS(VDD1, 0x55, 15, 1000, 0x08), 951 + TWL4030_ADJUSTABLE_SMPS(VDD2, 0x63, 16, 1000, 0x08), 994 952 TWL4030_FIXED_LDO(VUSB1V5, 0x71, 1500, 17, 100, 0x08), 995 953 TWL4030_FIXED_LDO(VUSB1V8, 0x74, 1800, 18, 100, 0x08), 996 954 TWL4030_FIXED_LDO(VUSB3V1, 0x77, 3100, 19, 150, 0x08),
+1 -1
drivers/spi/spi-nuc900.c
··· 426 426 goto err_clk; 427 427 } 428 428 429 - mfp_set_groupg(&pdev->dev); 429 + mfp_set_groupg(&pdev->dev, NULL); 430 430 nuc900_init_spi(hw); 431 431 432 432 err = spi_bitbang_start(&hw->bitbang);
+9 -10
drivers/staging/iio/industrialio-core.c
··· 242 242 243 243 static int iio_event_getfd(struct iio_dev *indio_dev) 244 244 { 245 + struct iio_event_interface *ev_int = indio_dev->event_interface; 245 246 int fd; 246 247 247 - if (indio_dev->event_interface == NULL) 248 + if (ev_int == NULL) 248 249 return -ENODEV; 249 250 250 - mutex_lock(&indio_dev->event_interface->event_list_lock); 251 - if (test_and_set_bit(IIO_BUSY_BIT_POS, 252 - &indio_dev->event_interface->flags)) { 253 - mutex_unlock(&indio_dev->event_interface->event_list_lock); 251 + mutex_lock(&ev_int->event_list_lock); 252 + if (test_and_set_bit(IIO_BUSY_BIT_POS, &ev_int->flags)) { 253 + mutex_unlock(&ev_int->event_list_lock); 254 254 return -EBUSY; 255 255 } 256 - mutex_unlock(&indio_dev->event_interface->event_list_lock); 256 + mutex_unlock(&ev_int->event_list_lock); 257 257 fd = anon_inode_getfd("iio:event", 258 - &iio_event_chrdev_fileops, 259 - indio_dev->event_interface, O_RDONLY); 258 + &iio_event_chrdev_fileops, ev_int, O_RDONLY); 260 259 if (fd < 0) { 261 - mutex_lock(&indio_dev->event_interface->event_list_lock); 260 + mutex_lock(&ev_int->event_list_lock); 262 261 clear_bit(IIO_BUSY_BIT_POS, &ev_int->flags); 263 - mutex_unlock(&indio_dev->event_interface->event_list_lock); 262 + mutex_unlock(&ev_int->event_list_lock); 264 263 } 265 264 return fd; 266 265 }
+14 -1
drivers/video/da8xx-fb.c
··· 116 116 /* Clock registers available only on Version 2 */ 117 117 #define LCD_CLK_ENABLE_REG 0x6c 118 118 #define LCD_CLK_RESET_REG 0x70 119 + #define LCD_CLK_MAIN_RESET BIT(3) 119 120 120 121 #define LCD_NUM_BUFFERS 2 121 122 ··· 245 244 { 246 245 u32 reg; 247 246 247 + /* Bring LCDC out of reset */ 248 + if (lcd_revision == LCD_VERSION_2) 249 + lcdc_write(0, LCD_CLK_RESET_REG); 250 + 248 251 reg = lcdc_read(LCD_RASTER_CTRL_REG); 249 252 if (!(reg & LCD_RASTER_ENABLE)) 250 253 lcdc_write(reg | LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG); ··· 262 257 reg = lcdc_read(LCD_RASTER_CTRL_REG); 263 258 if (reg & LCD_RASTER_ENABLE) 264 259 lcdc_write(reg & ~LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG); 260 + 261 + if (lcd_revision == LCD_VERSION_2) 262 + /* Write 1 to reset LCDC */ 263 + lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG); 265 264 } 266 265 267 266 static void lcd_blit(int load_mode, struct da8xx_fb_par *par) ··· 593 584 lcdc_write(0, LCD_DMA_CTRL_REG); 594 585 lcdc_write(0, LCD_RASTER_CTRL_REG); 595 586 596 - if (lcd_revision == LCD_VERSION_2) 587 + if (lcd_revision == LCD_VERSION_2) { 597 588 lcdc_write(0, LCD_INT_ENABLE_SET_REG); 589 + /* Write 1 to reset */ 590 + lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG); 591 + lcdc_write(0, LCD_CLK_RESET_REG); 592 + } 598 593 } 599 594 600 595 static void lcd_calc_clk_divider(struct da8xx_fb_par *par)
+1
drivers/video/omap/dispc.c
··· 19 19 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 20 20 */ 21 21 #include <linux/kernel.h> 22 + #include <linux/module.h> 22 23 #include <linux/dma-mapping.h> 23 24 #include <linux/mm.h> 24 25 #include <linux/vmalloc.h>
+5 -6
drivers/video/omap2/dss/dispc.c
··· 1720 1720 const int maxdownscale = dss_feat_get_param_max(FEAT_PARAM_DOWNSCALE); 1721 1721 unsigned long fclk = 0; 1722 1722 1723 - if ((ovl->caps & OMAP_DSS_OVL_CAP_SCALE) == 0) { 1724 - if (width != out_width || height != out_height) 1725 - return -EINVAL; 1726 - else 1727 - return 0; 1728 - } 1723 + if (width == out_width && height == out_height) 1724 + return 0; 1725 + 1726 + if ((ovl->caps & OMAP_DSS_OVL_CAP_SCALE) == 0) 1727 + return -EINVAL; 1729 1728 1730 1729 if (out_width < width / maxdownscale || 1731 1730 out_width > width * 8)
+1 -1
drivers/video/omap2/dss/hdmi.c
··· 269 269 unsigned long hdmi_get_pixel_clock(void) 270 270 { 271 271 /* HDMI Pixel Clock in Mhz */ 272 - return hdmi.ip_data.cfg.timings.timings.pixel_clock * 10000; 272 + return hdmi.ip_data.cfg.timings.timings.pixel_clock * 1000; 273 273 } 274 274 275 275 static void hdmi_compute_pll(struct omap_dss_device *dssdev, int phy,
+2 -2
drivers/video/via/share.h
··· 559 559 #define M1200X720_R60_VSP POSITIVE 560 560 561 561 /* 1200x900@60 Sync Polarity (DCON) */ 562 - #define M1200X900_R60_HSP NEGATIVE 563 - #define M1200X900_R60_VSP NEGATIVE 562 + #define M1200X900_R60_HSP POSITIVE 563 + #define M1200X900_R60_VSP POSITIVE 564 564 565 565 /* 1280x600@60 Sync Polarity (GTF Mode) */ 566 566 #define M1280x600_R60_HSP NEGATIVE
+3
fs/btrfs/ctree.h
··· 2369 2369 int btrfs_block_rsv_refill(struct btrfs_root *root, 2370 2370 struct btrfs_block_rsv *block_rsv, 2371 2371 u64 min_reserved); 2372 + int btrfs_block_rsv_refill_noflush(struct btrfs_root *root, 2373 + struct btrfs_block_rsv *block_rsv, 2374 + u64 min_reserved); 2372 2375 int btrfs_block_rsv_migrate(struct btrfs_block_rsv *src_rsv, 2373 2376 struct btrfs_block_rsv *dst_rsv, 2374 2377 u64 num_bytes);
+23 -11
fs/btrfs/extent-tree.c
··· 3888 3888 return ret; 3889 3889 } 3890 3890 3891 - int btrfs_block_rsv_refill(struct btrfs_root *root, 3892 - struct btrfs_block_rsv *block_rsv, 3893 - u64 min_reserved) 3891 + static inline int __btrfs_block_rsv_refill(struct btrfs_root *root, 3892 + struct btrfs_block_rsv *block_rsv, 3893 + u64 min_reserved, int flush) 3894 3894 { 3895 3895 u64 num_bytes = 0; 3896 3896 int ret = -ENOSPC; ··· 3909 3909 if (!ret) 3910 3910 return 0; 3911 3911 3912 - ret = reserve_metadata_bytes(root, block_rsv, num_bytes, 1); 3912 + ret = reserve_metadata_bytes(root, block_rsv, num_bytes, flush); 3913 3913 if (!ret) { 3914 3914 block_rsv_add_bytes(block_rsv, num_bytes, 0); 3915 3915 return 0; 3916 3916 } 3917 3917 3918 3918 return ret; 3919 + } 3920 + 3921 + int btrfs_block_rsv_refill(struct btrfs_root *root, 3922 + struct btrfs_block_rsv *block_rsv, 3923 + u64 min_reserved) 3924 + { 3925 + return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 1); 3926 + } 3927 + 3928 + int btrfs_block_rsv_refill_noflush(struct btrfs_root *root, 3929 + struct btrfs_block_rsv *block_rsv, 3930 + u64 min_reserved) 3931 + { 3932 + return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 0); 3919 3933 } 3920 3934 3921 3935 int btrfs_block_rsv_migrate(struct btrfs_block_rsv *src_rsv, ··· 5279 5265 spin_lock(&block_group->free_space_ctl->tree_lock); 5280 5266 if (cached && 5281 5267 block_group->free_space_ctl->free_space < 5282 - num_bytes + empty_size) { 5268 + num_bytes + empty_cluster + empty_size) { 5283 5269 spin_unlock(&block_group->free_space_ctl->tree_lock); 5284 5270 goto loop; 5285 5271 } ··· 5300 5286 * people trying to start a new cluster 5301 5287 */ 5302 5288 spin_lock(&last_ptr->refill_lock); 5303 - if (last_ptr->block_group && 5304 - (last_ptr->block_group->ro || 5305 - !block_group_bits(last_ptr->block_group, data))) { 5306 - offset = 0; 5289 + if (!last_ptr->block_group || 5290 + last_ptr->block_group->ro || 5291 + !block_group_bits(last_ptr->block_group, data)) 5307 
5292 goto refill_cluster; 5308 - } 5309 5293 5310 5294 offset = btrfs_alloc_from_cluster(block_group, last_ptr, 5311 5295 num_bytes, search_start); ··· 5354 5342 /* allocate a cluster in this block group */ 5355 5343 ret = btrfs_find_space_cluster(trans, root, 5356 5344 block_group, last_ptr, 5357 - offset, num_bytes, 5345 + search_start, num_bytes, 5358 5346 empty_cluster + empty_size); 5359 5347 if (ret == 0) { 5360 5348 /*
+20 -7
fs/btrfs/extent_io.c
··· 2287 2287 if (!uptodate) { 2288 2288 int failed_mirror; 2289 2289 failed_mirror = (int)(unsigned long)bio->bi_bdev; 2290 - if (tree->ops && tree->ops->readpage_io_failed_hook) 2291 - ret = tree->ops->readpage_io_failed_hook( 2292 - bio, page, start, end, 2293 - failed_mirror, state); 2294 - else 2295 - ret = bio_readpage_error(bio, page, start, end, 2296 - failed_mirror, NULL); 2290 + /* 2291 + * The generic bio_readpage_error handles errors the 2292 + * following way: If possible, new read requests are 2293 + * created and submitted and will end up in 2294 + * end_bio_extent_readpage as well (if we're lucky, not 2295 + * in the !uptodate case). In that case it returns 0 and 2296 + * we just go on with the next page in our bio. If it 2297 + * can't handle the error it will return -EIO and we 2298 + * remain responsible for that page. 2299 + */ 2300 + ret = bio_readpage_error(bio, page, start, end, 2301 + failed_mirror, NULL); 2297 2302 if (ret == 0) { 2303 + error_handled: 2298 2304 uptodate = 2299 2305 test_bit(BIO_UPTODATE, &bio->bi_flags); 2300 2306 if (err) 2301 2307 uptodate = 0; 2302 2308 uncache_state(&cached); 2303 2309 continue; 2310 + } 2311 + if (tree->ops && tree->ops->readpage_io_failed_hook) { 2312 + ret = tree->ops->readpage_io_failed_hook( 2313 + bio, page, start, end, 2314 + failed_mirror, state); 2315 + if (ret == 0) 2316 + goto error_handled; 2304 2317 } 2305 2318 } 2306 2319
+2
fs/btrfs/free-space-cache.c
··· 1470 1470 { 1471 1471 info->offset = offset_to_bitmap(ctl, offset); 1472 1472 info->bytes = 0; 1473 + INIT_LIST_HEAD(&info->list); 1473 1474 link_free_space(ctl, info); 1474 1475 ctl->total_bitmaps++; 1475 1476 ··· 2320 2319 2321 2320 if (!found) { 2322 2321 start = i; 2322 + cluster->max_size = 0; 2323 2323 found = true; 2324 2324 } 2325 2325
+1 -1
fs/btrfs/inode.c
··· 3490 3490 * doing the truncate. 3491 3491 */ 3492 3492 while (1) { 3493 - ret = btrfs_block_rsv_refill(root, rsv, min_size); 3493 + ret = btrfs_block_rsv_refill_noflush(root, rsv, min_size); 3494 3494 3495 3495 /* 3496 3496 * Try and steal from the global reserve since we will
+1 -1
fs/btrfs/ioctl.c
··· 1278 1278 } 1279 1279 ret = btrfs_grow_device(trans, device, new_size); 1280 1280 btrfs_commit_transaction(trans, root); 1281 - } else { 1281 + } else if (new_size < old_size) { 1282 1282 ret = btrfs_shrink_device(device, new_size); 1283 1283 } 1284 1284
+5
fs/btrfs/scrub.c
··· 256 256 btrfs_release_path(swarn->path); 257 257 258 258 ipath = init_ipath(4096, local_root, swarn->path); 259 + if (IS_ERR(ipath)) { 260 + ret = PTR_ERR(ipath); 261 + ipath = NULL; 262 + goto err; 263 + } 259 264 ret = paths_from_inode(inum, ipath); 260 265 261 266 if (ret < 0)
+3 -3
fs/btrfs/super.c
··· 1057 1057 int i = 0, nr_devices; 1058 1058 int ret; 1059 1059 1060 - nr_devices = fs_info->fs_devices->rw_devices; 1060 + nr_devices = fs_info->fs_devices->open_devices; 1061 1061 BUG_ON(!nr_devices); 1062 1062 1063 1063 devices_info = kmalloc(sizeof(*devices_info) * nr_devices, ··· 1079 1079 else 1080 1080 min_stripe_size = BTRFS_STRIPE_LEN; 1081 1081 1082 - list_for_each_entry(device, &fs_devices->alloc_list, dev_alloc_list) { 1083 - if (!device->in_fs_metadata) 1082 + list_for_each_entry(device, &fs_devices->devices, dev_list) { 1083 + if (!device->in_fs_metadata || !device->bdev) 1084 1084 continue; 1085 1085 1086 1086 avail_space = device->total_bytes - device->bytes_used;
+1 -1
fs/ext4/inode.c
··· 2807 2807 spin_unlock_irqrestore(&ei->i_completed_io_lock, flags); 2808 2808 2809 2809 /* queue the work to convert unwritten extents to written */ 2810 - queue_work(wq, &io_end->work); 2811 2810 iocb->private = NULL; 2811 + queue_work(wq, &io_end->work); 2812 2812 2813 2813 /* XXX: probably should move into the real I/O completion handler */ 2814 2814 inode_dio_done(inode);
+1 -1
fs/ocfs2/alloc.c
··· 5699 5699 OCFS2_JOURNAL_ACCESS_WRITE); 5700 5700 if (ret) { 5701 5701 mlog_errno(ret); 5702 - goto out; 5702 + goto out_commit; 5703 5703 } 5704 5704 5705 5705 dquot_free_space_nodirty(inode,
+61 -8
fs/ocfs2/aops.c
··· 290 290 } 291 291 292 292 if (down_read_trylock(&oi->ip_alloc_sem) == 0) { 293 + /* 294 + * Unlock the page and cycle ip_alloc_sem so that we don't 295 + * busyloop waiting for ip_alloc_sem to unlock 296 + */ 293 297 ret = AOP_TRUNCATED_PAGE; 298 + unlock_page(page); 299 + unlock = 0; 300 + down_read(&oi->ip_alloc_sem); 301 + up_read(&oi->ip_alloc_sem); 294 302 goto out_inode_unlock; 295 303 } 296 304 ··· 571 563 { 572 564 struct inode *inode = iocb->ki_filp->f_path.dentry->d_inode; 573 565 int level; 566 + wait_queue_head_t *wq = ocfs2_ioend_wq(inode); 574 567 575 568 /* this io's submitter should not have unlocked this before we could */ 576 569 BUG_ON(!ocfs2_iocb_is_rw_locked(iocb)); 577 570 578 571 if (ocfs2_iocb_is_sem_locked(iocb)) 579 572 ocfs2_iocb_clear_sem_locked(iocb); 573 + 574 + if (ocfs2_iocb_is_unaligned_aio(iocb)) { 575 + ocfs2_iocb_clear_unaligned_aio(iocb); 576 + 577 + if (atomic_dec_and_test(&OCFS2_I(inode)->ip_unaligned_aio) && 578 + waitqueue_active(wq)) { 579 + wake_up_all(wq); 580 + } 581 + } 580 582 581 583 ocfs2_iocb_clear_rw_locked(iocb); 582 584 ··· 881 863 struct page *w_target_page; 882 864 883 865 /* 866 + * w_target_locked is used for page_mkwrite path indicating no unlocking 867 + * against w_target_page in ocfs2_write_end_nolock. 868 + */ 869 + unsigned int w_target_locked:1; 870 + 871 + /* 884 872 * ocfs2_write_end() uses this to know what the real range to 885 873 * write in the target should be. 886 874 */ ··· 919 895 920 896 static void ocfs2_free_write_ctxt(struct ocfs2_write_ctxt *wc) 921 897 { 898 + int i; 899 + 900 + /* 901 + * w_target_locked is only set to true in the page_mkwrite() case. 902 + * The intent is to allow us to lock the target page from write_begin() 903 + * to write_end(). The caller must hold a ref on w_target_page. 
904 + */ 905 + if (wc->w_target_locked) { 906 + BUG_ON(!wc->w_target_page); 907 + for (i = 0; i < wc->w_num_pages; i++) { 908 + if (wc->w_target_page == wc->w_pages[i]) { 909 + wc->w_pages[i] = NULL; 910 + break; 911 + } 912 + } 913 + mark_page_accessed(wc->w_target_page); 914 + page_cache_release(wc->w_target_page); 915 + } 922 916 ocfs2_unlock_and_free_pages(wc->w_pages, wc->w_num_pages); 923 917 924 918 brelse(wc->w_di_bh); ··· 1174 1132 */ 1175 1133 lock_page(mmap_page); 1176 1134 1135 + /* Exit and let the caller retry */ 1177 1136 if (mmap_page->mapping != mapping) { 1137 + WARN_ON(mmap_page->mapping); 1178 1138 unlock_page(mmap_page); 1179 - /* 1180 - * Sanity check - the locking in 1181 - * ocfs2_pagemkwrite() should ensure 1182 - * that this code doesn't trigger. 1183 - */ 1184 - ret = -EINVAL; 1185 - mlog_errno(ret); 1139 + ret = -EAGAIN; 1186 1140 goto out; 1187 1141 } 1188 1142 1189 1143 page_cache_get(mmap_page); 1190 1144 wc->w_pages[i] = mmap_page; 1145 + wc->w_target_locked = true; 1191 1146 } else { 1192 1147 wc->w_pages[i] = find_or_create_page(mapping, index, 1193 1148 GFP_NOFS); ··· 1199 1160 wc->w_target_page = wc->w_pages[i]; 1200 1161 } 1201 1162 out: 1163 + if (ret) 1164 + wc->w_target_locked = false; 1202 1165 return ret; 1203 1166 } 1204 1167 ··· 1858 1817 */ 1859 1818 ret = ocfs2_grab_pages_for_write(mapping, wc, wc->w_cpos, pos, len, 1860 1819 cluster_of_pages, mmap_page); 1861 - if (ret) { 1820 + if (ret && ret != -EAGAIN) { 1862 1821 mlog_errno(ret); 1822 + goto out_quota; 1823 + } 1824 + 1825 + /* 1826 + * ocfs2_grab_pages_for_write() returns -EAGAIN if it could not lock 1827 + * the target page. In this case, we exit with no error and no target 1828 + * page. This will trigger the caller, page_mkwrite(), to re-try 1829 + * the operation. 1830 + */ 1831 + if (ret == -EAGAIN) { 1832 + BUG_ON(wc->w_target_page); 1833 + ret = 0; 1863 1834 goto out_quota; 1864 1835 } 1865 1836
+14
fs/ocfs2/aops.h
··· 78 78 OCFS2_IOCB_RW_LOCK = 0, 79 79 OCFS2_IOCB_RW_LOCK_LEVEL, 80 80 OCFS2_IOCB_SEM, 81 + OCFS2_IOCB_UNALIGNED_IO, 81 82 OCFS2_IOCB_NUM_LOCKS 82 83 }; 83 84 ··· 92 91 clear_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private) 93 92 #define ocfs2_iocb_is_sem_locked(iocb) \ 94 93 test_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private) 94 + 95 + #define ocfs2_iocb_set_unaligned_aio(iocb) \ 96 + set_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private) 97 + #define ocfs2_iocb_clear_unaligned_aio(iocb) \ 98 + clear_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private) 99 + #define ocfs2_iocb_is_unaligned_aio(iocb) \ 100 + test_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private) 101 + 102 + #define OCFS2_IOEND_WQ_HASH_SZ 37 103 + #define ocfs2_ioend_wq(v) (&ocfs2__ioend_wq[((unsigned long)(v)) %\ 104 + OCFS2_IOEND_WQ_HASH_SZ]) 105 + extern wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ]; 106 + 95 107 #endif /* OCFS2_FILE_H */
+123 -73
fs/ocfs2/cluster/heartbeat.c
··· 216 216 217 217 struct list_head hr_all_item; 218 218 unsigned hr_unclean_stop:1, 219 + hr_aborted_start:1, 219 220 hr_item_pinned:1, 220 221 hr_item_dropped:1; 221 222 ··· 254 253 * has reached a 'steady' state. This will be fixed when we have 255 254 * a more complete api that doesn't lead to this sort of fragility. */ 256 255 atomic_t hr_steady_iterations; 256 + 257 + /* terminate o2hb thread if it does not reach steady state 258 + * (hr_steady_iterations == 0) within hr_unsteady_iterations */ 259 + atomic_t hr_unsteady_iterations; 257 260 258 261 char hr_dev_name[BDEVNAME_SIZE]; 259 262 ··· 329 324 330 325 static void o2hb_arm_write_timeout(struct o2hb_region *reg) 331 326 { 327 + /* Arm writeout only after thread reaches steady state */ 328 + if (atomic_read(&reg->hr_steady_iterations) != 0) 329 + return; 330 + 332 331 mlog(ML_HEARTBEAT, "Queue write timeout for %u ms\n", 333 332 O2HB_MAX_WRITE_TIMEOUT_MS); 334 333 ··· 546 537 return read == computed; 547 538 } 548 539 549 - /* We want to make sure that nobody is heartbeating on top of us -- 550 - * this will help detect an invalid configuration. */ 551 - static void o2hb_check_last_timestamp(struct o2hb_region *reg) 540 + /* 541 + * Compare the slot data with what we wrote in the last iteration. 542 + * If the match fails, print an appropriate error message. This is to 543 + * detect errors like... another node heartbeating on the same slot, 544 + * flaky device that is losing writes, etc. 545 + * Returns 1 if check succeeds, 0 otherwise.
546 + */ 547 + static int o2hb_check_own_slot(struct o2hb_region *reg) 552 548 { 553 549 struct o2hb_disk_slot *slot; 554 550 struct o2hb_disk_heartbeat_block *hb_block; ··· 562 548 slot = &reg->hr_slots[o2nm_this_node()]; 563 549 /* Don't check on our 1st timestamp */ 564 550 if (!slot->ds_last_time) 565 - return; 551 + return 0; 566 552 567 553 hb_block = slot->ds_raw_block; 568 554 if (le64_to_cpu(hb_block->hb_seq) == slot->ds_last_time && 569 555 le64_to_cpu(hb_block->hb_generation) == slot->ds_last_generation && 570 556 hb_block->hb_node == slot->ds_node_num) 571 - return; 557 + return 1; 572 558 573 559 #define ERRSTR1 "Another node is heartbeating on device" 574 560 #define ERRSTR2 "Heartbeat generation mismatch on device" ··· 588 574 (unsigned long long)slot->ds_last_time, hb_block->hb_node, 589 575 (unsigned long long)le64_to_cpu(hb_block->hb_generation), 590 576 (unsigned long long)le64_to_cpu(hb_block->hb_seq)); 577 + 578 + return 0; 591 579 } 592 580 593 581 static inline void o2hb_prepare_block(struct o2hb_region *reg, ··· 735 719 o2nm_node_put(node); 736 720 } 737 721 738 - static void o2hb_set_quorum_device(struct o2hb_region *reg, 739 - struct o2hb_disk_slot *slot) 722 + static void o2hb_set_quorum_device(struct o2hb_region *reg) 740 723 { 741 - assert_spin_locked(&o2hb_live_lock); 742 - 743 724 if (!o2hb_global_heartbeat_active()) 744 725 return; 745 726 746 - if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 727 + /* Prevent race with o2hb_heartbeat_group_drop_item() */ 728 + if (kthread_should_stop()) 747 729 return; 730 + 731 + /* Tag region as quorum only after thread reaches steady state */ 732 + if (atomic_read(&reg->hr_steady_iterations) != 0) 733 + return; 734 + 735 + spin_lock(&o2hb_live_lock); 736 + 737 + if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 738 + goto unlock; 748 739 749 740 /* 750 741 * A region can be added to the quorum only when it sees all ··· 760 737 */ 761 738 if (memcmp(reg->hr_live_node_bitmap, 
o2hb_live_node_bitmap, 762 739 sizeof(o2hb_live_node_bitmap))) 763 - return; 740 + goto unlock; 764 741 765 - if (slot->ds_changed_samples < O2HB_LIVE_THRESHOLD) 766 - return; 767 - 768 - printk(KERN_NOTICE "o2hb: Region %s is now a quorum device\n", 769 - config_item_name(&reg->hr_item)); 742 + printk(KERN_NOTICE "o2hb: Region %s (%s) is now a quorum device\n", 743 + config_item_name(&reg->hr_item), reg->hr_dev_name); 770 744 771 745 set_bit(reg->hr_region_num, o2hb_quorum_region_bitmap); 772 746 ··· 774 754 if (o2hb_pop_count(&o2hb_quorum_region_bitmap, 775 755 O2NM_MAX_REGIONS) > O2HB_PIN_CUT_OFF) 776 756 o2hb_region_unpin(NULL); 757 + unlock: 758 + spin_unlock(&o2hb_live_lock); 777 759 } 778 760 779 761 static int o2hb_check_slot(struct o2hb_region *reg, ··· 947 925 slot->ds_equal_samples = 0; 948 926 } 949 927 out: 950 - o2hb_set_quorum_device(reg, slot); 951 - 952 928 spin_unlock(&o2hb_live_lock); 953 929 954 930 o2hb_run_event_list(&event); ··· 977 957 978 958 static int o2hb_do_disk_heartbeat(struct o2hb_region *reg) 979 959 { 980 - int i, ret, highest_node, change = 0; 960 + int i, ret, highest_node; 961 + int membership_change = 0, own_slot_ok = 0; 981 962 unsigned long configured_nodes[BITS_TO_LONGS(O2NM_MAX_NODES)]; 982 963 unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; 983 964 struct o2hb_bio_wait_ctxt write_wc; ··· 987 966 sizeof(configured_nodes)); 988 967 if (ret) { 989 968 mlog_errno(ret); 990 - return ret; 969 + goto bail; 991 970 } 992 971 993 972 /* ··· 1003 982 1004 983 highest_node = o2hb_highest_node(configured_nodes, O2NM_MAX_NODES); 1005 984 if (highest_node >= O2NM_MAX_NODES) { 1006 - mlog(ML_NOTICE, "ocfs2_heartbeat: no configured nodes found!\n"); 1007 - return -EINVAL; 985 + mlog(ML_NOTICE, "o2hb: No configured nodes found!\n"); 986 + ret = -EINVAL; 987 + goto bail; 1008 988 } 1009 989 1010 990 /* No sense in reading the slots of nodes that don't exist ··· 1015 993 ret = o2hb_read_slots(reg, highest_node + 1); 1016 994 
if (ret < 0) { 1017 995 mlog_errno(ret); 1018 - return ret; 996 + goto bail; 1019 997 } 1020 998 1021 999 /* With an up to date view of the slots, we can check that no 1022 1000 * other node has been improperly configured to heartbeat in 1023 1001 * our slot. */ 1024 - o2hb_check_last_timestamp(reg); 1002 + own_slot_ok = o2hb_check_own_slot(reg); 1025 1003 1026 1004 /* fill in the proper info for our next heartbeat */ 1027 1005 o2hb_prepare_block(reg, reg->hr_generation); 1028 1006 1029 - /* And fire off the write. Note that we don't wait on this I/O 1030 - * until later. */ 1031 1007 ret = o2hb_issue_node_write(reg, &write_wc); 1032 1008 if (ret < 0) { 1033 1009 mlog_errno(ret); 1034 - return ret; 1010 + goto bail; 1035 1011 } 1036 1012 1037 1013 i = -1; 1038 1014 while((i = find_next_bit(configured_nodes, 1039 1015 O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) { 1040 - change |= o2hb_check_slot(reg, &reg->hr_slots[i]); 1016 + membership_change |= o2hb_check_slot(reg, &reg->hr_slots[i]); 1041 1017 } 1042 1018 1043 1019 /* ··· 1050 1030 * disk */ 1051 1031 mlog(ML_ERROR, "Write error %d on device \"%s\"\n", 1052 1032 write_wc.wc_error, reg->hr_dev_name); 1053 - return write_wc.wc_error; 1033 + ret = write_wc.wc_error; 1034 + goto bail; 1054 1035 } 1055 1036 1056 - o2hb_arm_write_timeout(reg); 1037 + /* Skip disarming the timeout if own slot has stale/bad data */ 1038 + if (own_slot_ok) { 1039 + o2hb_set_quorum_device(reg); 1040 + o2hb_arm_write_timeout(reg); 1041 + } 1057 1042 1043 + bail: 1058 1044 /* let the person who launched us know when things are steady */ 1059 - if (!change && (atomic_read(&reg->hr_steady_iterations) != 0)) { 1060 - if (atomic_dec_and_test(&reg->hr_steady_iterations)) 1061 - wake_up(&o2hb_steady_queue); 1045 + if (atomic_read(&reg->hr_steady_iterations) != 0) { 1046 + if (!ret && own_slot_ok && !membership_change) { 1047 + if (atomic_dec_and_test(&reg->hr_steady_iterations)) 1048 + wake_up(&o2hb_steady_queue); 1049 + } 1062 1050 } 1063 1051 
1064 - return 0; 1052 + if (atomic_read(&reg->hr_steady_iterations) != 0) { 1053 + if (atomic_dec_and_test(&reg->hr_unsteady_iterations)) { 1054 + printk(KERN_NOTICE "o2hb: Unable to stabilize " 1055 + "heartbeat on region %s (%s)\n", 1056 + config_item_name(&reg->hr_item), 1057 + reg->hr_dev_name); 1058 + atomic_set(&reg->hr_steady_iterations, 0); 1059 + reg->hr_aborted_start = 1; 1060 + wake_up(&o2hb_steady_queue); 1061 + ret = -EIO; 1062 + } 1063 + } 1064 + 1065 + return ret; 1065 1066 } 1066 1067 1067 1068 /* Subtract b from a, storing the result in a. a *must* have a larger ··· 1136 1095 /* Pin node */ 1137 1096 o2nm_depend_this_node(); 1138 1097 1139 - while (!kthread_should_stop() && !reg->hr_unclean_stop) { 1098 + while (!kthread_should_stop() && 1099 + !reg->hr_unclean_stop && !reg->hr_aborted_start) { 1140 1100 /* We track the time spent inside 1141 1101 * o2hb_do_disk_heartbeat so that we avoid more than 1142 1102 * hr_timeout_ms between disk writes. On busy systems ··· 1145 1103 * likely to time itself out. */ 1146 1104 do_gettimeofday(&before_hb); 1147 1105 1148 - i = 0; 1149 - do { 1150 - ret = o2hb_do_disk_heartbeat(reg); 1151 - } while (ret && ++i < 2); 1106 + ret = o2hb_do_disk_heartbeat(reg); 1152 1107 1153 1108 do_gettimeofday(&after_hb); 1154 1109 elapsed_msec = o2hb_elapsed_msecs(&before_hb, &after_hb); ··· 1156 1117 after_hb.tv_sec, (unsigned long) after_hb.tv_usec, 1157 1118 elapsed_msec); 1158 1119 1159 - if (elapsed_msec < reg->hr_timeout_ms) { 1120 + if (!kthread_should_stop() && 1121 + elapsed_msec < reg->hr_timeout_ms) { 1160 1122 /* the kthread api has blocked signals for us so no 1161 1123 * need to record the return value. */ 1162 1124 msleep_interruptible(reg->hr_timeout_ms - elapsed_msec); ··· 1174 1134 * to timeout on this region when we could just as easily 1175 1135 * write a clear generation - thus indicating to them that 1176 1136 * this node has left this region. 1177 - * 1178 - * XXX: Should we skip this on unclean_stop? 
*/ 1179 - o2hb_prepare_block(reg, 0); 1180 - ret = o2hb_issue_node_write(reg, &write_wc); 1181 - if (ret == 0) { 1182 - o2hb_wait_on_io(reg, &write_wc); 1183 - } else { 1184 - mlog_errno(ret); 1137 + */ 1138 + if (!reg->hr_unclean_stop && !reg->hr_aborted_start) { 1139 + o2hb_prepare_block(reg, 0); 1140 + ret = o2hb_issue_node_write(reg, &write_wc); 1141 + if (ret == 0) 1142 + o2hb_wait_on_io(reg, &write_wc); 1143 + else 1144 + mlog_errno(ret); 1185 1145 } 1186 1146 1187 1147 /* Unpin node */ 1188 1148 o2nm_undepend_this_node(); 1189 1149 1190 - mlog(ML_HEARTBEAT|ML_KTHREAD, "hb thread exiting\n"); 1150 + mlog(ML_HEARTBEAT|ML_KTHREAD, "o2hb thread exiting\n"); 1191 1151 1192 1152 return 0; 1193 1153 } ··· 1198 1158 struct o2hb_debug_buf *db = inode->i_private; 1199 1159 struct o2hb_region *reg; 1200 1160 unsigned long map[BITS_TO_LONGS(O2NM_MAX_NODES)]; 1161 + unsigned long lts; 1201 1162 char *buf = NULL; 1202 1163 int i = -1; 1203 1164 int out = 0; ··· 1235 1194 1236 1195 case O2HB_DB_TYPE_REGION_ELAPSED_TIME: 1237 1196 reg = (struct o2hb_region *)db->db_data; 1238 - out += snprintf(buf + out, PAGE_SIZE - out, "%u\n", 1239 - jiffies_to_msecs(jiffies - 1240 - reg->hr_last_timeout_start)); 1197 + lts = reg->hr_last_timeout_start; 1198 + /* If 0, it has never been set before */ 1199 + if (lts) 1200 + lts = jiffies_to_msecs(jiffies - lts); 1201 + out += snprintf(buf + out, PAGE_SIZE - out, "%lu\n", lts); 1241 1202 goto done; 1242 1203 1243 1204 case O2HB_DB_TYPE_REGION_PINNED: ··· 1468 1425 int i; 1469 1426 struct page *page; 1470 1427 struct o2hb_region *reg = to_o2hb_region(item); 1428 + 1429 + mlog(ML_HEARTBEAT, "hb region release (%s)\n", reg->hr_dev_name); 1471 1430 1472 1431 if (reg->hr_tmp_block) 1473 1432 kfree(reg->hr_tmp_block); ··· 1837 1792 live_threshold <<= 1; 1838 1793 spin_unlock(&o2hb_live_lock); 1839 1794 } 1840 - atomic_set(&reg->hr_steady_iterations, live_threshold + 1); 1795 + ++live_threshold; 1796 + atomic_set(&reg->hr_steady_iterations, 
live_threshold); 1797 + /* unsteady_iterations is double the steady_iterations */ 1798 + atomic_set(&reg->hr_unsteady_iterations, (live_threshold << 1)); 1841 1799 1842 1800 hb_task = kthread_run(o2hb_thread, reg, "o2hb-%s", 1843 1801 reg->hr_item.ci_name); ··· 1857 1809 ret = wait_event_interruptible(o2hb_steady_queue, 1858 1810 atomic_read(&reg->hr_steady_iterations) == 0); 1859 1811 if (ret) { 1860 - /* We got interrupted (hello ptrace!). Clean up */ 1861 - spin_lock(&o2hb_live_lock); 1862 - hb_task = reg->hr_task; 1863 - reg->hr_task = NULL; 1864 - spin_unlock(&o2hb_live_lock); 1812 + atomic_set(&reg->hr_steady_iterations, 0); 1813 + reg->hr_aborted_start = 1; 1814 + } 1865 1815 1866 - if (hb_task) 1867 - kthread_stop(hb_task); 1816 + if (reg->hr_aborted_start) { 1817 + ret = -EIO; 1868 1818 goto out; 1869 1819 } 1870 1820 ··· 1879 1833 ret = -EIO; 1880 1834 1881 1835 if (hb_task && o2hb_global_heartbeat_active()) 1882 - printk(KERN_NOTICE "o2hb: Heartbeat started on region %s\n", 1883 - config_item_name(&reg->hr_item)); 1836 + printk(KERN_NOTICE "o2hb: Heartbeat started on region %s (%s)\n", 1837 + config_item_name(&reg->hr_item), reg->hr_dev_name); 1884 1838 1885 1839 out: 1886 1840 if (filp) ··· 2138 2092 2139 2093 /* stop the thread when the user removes the region dir */ 2140 2094 spin_lock(&o2hb_live_lock); 2141 - if (o2hb_global_heartbeat_active()) { 2142 - clear_bit(reg->hr_region_num, o2hb_region_bitmap); 2143 - clear_bit(reg->hr_region_num, o2hb_live_region_bitmap); 2144 - if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 2145 - quorum_region = 1; 2146 - clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap); 2147 - } 2148 2095 hb_task = reg->hr_task; 2149 2096 reg->hr_task = NULL; 2150 2097 reg->hr_item_dropped = 1; ··· 2146 2107 if (hb_task) 2147 2108 kthread_stop(hb_task); 2148 2109 2110 + if (o2hb_global_heartbeat_active()) { 2111 + spin_lock(&o2hb_live_lock); 2112 + clear_bit(reg->hr_region_num, o2hb_region_bitmap); 2113 + 
clear_bit(reg->hr_region_num, o2hb_live_region_bitmap); 2114 + if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 2115 + quorum_region = 1; 2116 + clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap); 2117 + spin_unlock(&o2hb_live_lock); 2118 + printk(KERN_NOTICE "o2hb: Heartbeat %s on region %s (%s)\n", 2119 + ((atomic_read(&reg->hr_steady_iterations) == 0) ? 2120 + "stopped" : "start aborted"), config_item_name(item), 2121 + reg->hr_dev_name); 2122 + } 2123 + 2149 2124 /* 2150 2125 * If we're racing a dev_write(), we need to wake them. They will 2151 2126 * check reg->hr_task 2152 2127 */ 2153 2128 if (atomic_read(&reg->hr_steady_iterations) != 0) { 2129 + reg->hr_aborted_start = 1; 2154 2130 atomic_set(&reg->hr_steady_iterations, 0); 2155 2131 wake_up(&o2hb_steady_queue); 2156 2132 } 2157 - 2158 - if (o2hb_global_heartbeat_active()) 2159 - printk(KERN_NOTICE "o2hb: Heartbeat stopped on region %s\n", 2160 - config_item_name(&reg->hr_item)); 2161 2133 2162 2134 config_item_put(item); 2163 2135
+69 -33
fs/ocfs2/cluster/netdebug.c
··· 47 47 #define SC_DEBUG_NAME "sock_containers" 48 48 #define NST_DEBUG_NAME "send_tracking" 49 49 #define STATS_DEBUG_NAME "stats" 50 + #define NODES_DEBUG_NAME "connected_nodes" 50 51 51 52 #define SHOW_SOCK_CONTAINERS 0 52 53 #define SHOW_SOCK_STATS 1 ··· 56 55 static struct dentry *sc_dentry; 57 56 static struct dentry *nst_dentry; 58 57 static struct dentry *stats_dentry; 58 + static struct dentry *nodes_dentry; 59 59 60 60 static DEFINE_SPINLOCK(o2net_debug_lock); 61 61 ··· 493 491 .release = sc_fop_release, 494 492 }; 495 493 496 - int o2net_debugfs_init(void) 494 + static int o2net_fill_bitmap(char *buf, int len) 497 495 { 498 - o2net_dentry = debugfs_create_dir(O2NET_DEBUG_DIR, NULL); 499 - if (!o2net_dentry) { 500 - mlog_errno(-ENOMEM); 501 - goto bail; 502 - } 496 + unsigned long map[BITS_TO_LONGS(O2NM_MAX_NODES)]; 497 + int i = -1, out = 0; 503 498 504 - nst_dentry = debugfs_create_file(NST_DEBUG_NAME, S_IFREG|S_IRUSR, 505 - o2net_dentry, NULL, 506 - &nst_seq_fops); 507 - if (!nst_dentry) { 508 - mlog_errno(-ENOMEM); 509 - goto bail; 510 - } 499 + o2net_fill_node_map(map, sizeof(map)); 511 500 512 - sc_dentry = debugfs_create_file(SC_DEBUG_NAME, S_IFREG|S_IRUSR, 513 - o2net_dentry, NULL, 514 - &sc_seq_fops); 515 - if (!sc_dentry) { 516 - mlog_errno(-ENOMEM); 517 - goto bail; 518 - } 501 + while ((i = find_next_bit(map, O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) 502 + out += snprintf(buf + out, PAGE_SIZE - out, "%d ", i); 503 + out += snprintf(buf + out, PAGE_SIZE - out, "\n"); 519 504 520 - stats_dentry = debugfs_create_file(STATS_DEBUG_NAME, S_IFREG|S_IRUSR, 521 - o2net_dentry, NULL, 522 - &stats_seq_fops); 523 - if (!stats_dentry) { 524 - mlog_errno(-ENOMEM); 525 - goto bail; 526 - } 505 + return out; 506 + } 507 + 508 + static int nodes_fop_open(struct inode *inode, struct file *file) 509 + { 510 + char *buf; 511 + 512 + buf = kmalloc(PAGE_SIZE, GFP_KERNEL); 513 + if (!buf) 514 + return -ENOMEM; 515 + 516 + i_size_write(inode, o2net_fill_bitmap(buf, 
PAGE_SIZE)); 517 + 518 + file->private_data = buf; 527 519 528 520 return 0; 529 - bail: 530 - debugfs_remove(stats_dentry); 531 - debugfs_remove(sc_dentry); 532 - debugfs_remove(nst_dentry); 533 - debugfs_remove(o2net_dentry); 534 - return -ENOMEM; 535 521 } 522 + 523 + static int o2net_debug_release(struct inode *inode, struct file *file) 524 + { 525 + kfree(file->private_data); 526 + return 0; 527 + } 528 + 529 + static ssize_t o2net_debug_read(struct file *file, char __user *buf, 530 + size_t nbytes, loff_t *ppos) 531 + { 532 + return simple_read_from_buffer(buf, nbytes, ppos, file->private_data, 533 + i_size_read(file->f_mapping->host)); 534 + } 535 + 536 + static const struct file_operations nodes_fops = { 537 + .open = nodes_fop_open, 538 + .release = o2net_debug_release, 539 + .read = o2net_debug_read, 540 + .llseek = generic_file_llseek, 541 + }; 536 542 537 543 void o2net_debugfs_exit(void) 538 544 { 545 + debugfs_remove(nodes_dentry); 539 546 debugfs_remove(stats_dentry); 540 547 debugfs_remove(sc_dentry); 541 548 debugfs_remove(nst_dentry); 542 549 debugfs_remove(o2net_dentry); 550 + } 551 + 552 + int o2net_debugfs_init(void) 553 + { 554 + mode_t mode = S_IFREG|S_IRUSR; 555 + 556 + o2net_dentry = debugfs_create_dir(O2NET_DEBUG_DIR, NULL); 557 + if (o2net_dentry) 558 + nst_dentry = debugfs_create_file(NST_DEBUG_NAME, mode, 559 + o2net_dentry, NULL, &nst_seq_fops); 560 + if (nst_dentry) 561 + sc_dentry = debugfs_create_file(SC_DEBUG_NAME, mode, 562 + o2net_dentry, NULL, &sc_seq_fops); 563 + if (sc_dentry) 564 + stats_dentry = debugfs_create_file(STATS_DEBUG_NAME, mode, 565 + o2net_dentry, NULL, &stats_seq_fops); 566 + if (stats_dentry) 567 + nodes_dentry = debugfs_create_file(NODES_DEBUG_NAME, mode, 568 + o2net_dentry, NULL, &nodes_fops); 569 + if (nodes_dentry) 570 + return 0; 571 + 572 + o2net_debugfs_exit(); 573 + mlog_errno(-ENOMEM); 574 + return -ENOMEM; 543 575 } 544 576 545 577 #endif /* CONFIG_DEBUG_FS */
+72 -66
fs/ocfs2/cluster/tcp.c
··· 546 546 } 547 547 548 548 if (was_valid && !valid) { 549 - printk(KERN_NOTICE "o2net: no longer connected to " 549 + printk(KERN_NOTICE "o2net: No longer connected to " 550 550 SC_NODEF_FMT "\n", SC_NODEF_ARGS(old_sc)); 551 551 o2net_complete_nodes_nsw(nn); 552 552 } ··· 556 556 cancel_delayed_work(&nn->nn_connect_expired); 557 557 printk(KERN_NOTICE "o2net: %s " SC_NODEF_FMT "\n", 558 558 o2nm_this_node() > sc->sc_node->nd_num ? 559 - "connected to" : "accepted connection from", 559 + "Connected to" : "Accepted connection from", 560 560 SC_NODEF_ARGS(sc)); 561 561 } 562 562 ··· 644 644 o2net_sc_queue_work(sc, &sc->sc_connect_work); 645 645 break; 646 646 default: 647 - printk(KERN_INFO "o2net: connection to " SC_NODEF_FMT 647 + printk(KERN_INFO "o2net: Connection to " SC_NODEF_FMT 648 648 " shutdown, state %d\n", 649 649 SC_NODEF_ARGS(sc), sk->sk_state); 650 650 o2net_sc_queue_work(sc, &sc->sc_shutdown_work); ··· 1035 1035 return ret; 1036 1036 } 1037 1037 1038 + /* Get a map of all nodes to which this node is currently connected to */ 1039 + void o2net_fill_node_map(unsigned long *map, unsigned bytes) 1040 + { 1041 + struct o2net_sock_container *sc; 1042 + int node, ret; 1043 + 1044 + BUG_ON(bytes < (BITS_TO_LONGS(O2NM_MAX_NODES) * sizeof(unsigned long))); 1045 + 1046 + memset(map, 0, bytes); 1047 + for (node = 0; node < O2NM_MAX_NODES; ++node) { 1048 + o2net_tx_can_proceed(o2net_nn_from_num(node), &sc, &ret); 1049 + if (!ret) { 1050 + set_bit(node, map); 1051 + sc_put(sc); 1052 + } 1053 + } 1054 + } 1055 + EXPORT_SYMBOL_GPL(o2net_fill_node_map); 1056 + 1038 1057 int o2net_send_message_vec(u32 msg_type, u32 key, struct kvec *caller_vec, 1039 1058 size_t caller_veclen, u8 target_node, int *status) 1040 1059 { ··· 1304 1285 struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num); 1305 1286 1306 1287 if (hand->protocol_version != cpu_to_be64(O2NET_PROTOCOL_VERSION)) { 1307 - mlog(ML_NOTICE, SC_NODEF_FMT " advertised net protocol " 1308 - "version %llu but 
%llu is required, disconnecting\n", 1309 - SC_NODEF_ARGS(sc), 1310 - (unsigned long long)be64_to_cpu(hand->protocol_version), 1311 - O2NET_PROTOCOL_VERSION); 1288 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " Advertised net " 1289 + "protocol version %llu but %llu is required. " 1290 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1291 + (unsigned long long)be64_to_cpu(hand->protocol_version), 1292 + O2NET_PROTOCOL_VERSION); 1312 1293 1313 1294 /* don't bother reconnecting if its the wrong version. */ 1314 1295 o2net_ensure_shutdown(nn, sc, -ENOTCONN); ··· 1322 1303 */ 1323 1304 if (be32_to_cpu(hand->o2net_idle_timeout_ms) != 1324 1305 o2net_idle_timeout()) { 1325 - mlog(ML_NOTICE, SC_NODEF_FMT " uses a network idle timeout of " 1326 - "%u ms, but we use %u ms locally. disconnecting\n", 1327 - SC_NODEF_ARGS(sc), 1328 - be32_to_cpu(hand->o2net_idle_timeout_ms), 1329 - o2net_idle_timeout()); 1306 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a network " 1307 + "idle timeout of %u ms, but we use %u ms locally. " 1308 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1309 + be32_to_cpu(hand->o2net_idle_timeout_ms), 1310 + o2net_idle_timeout()); 1330 1311 o2net_ensure_shutdown(nn, sc, -ENOTCONN); 1331 1312 return -1; 1332 1313 } 1333 1314 1334 1315 if (be32_to_cpu(hand->o2net_keepalive_delay_ms) != 1335 1316 o2net_keepalive_delay()) { 1336 - mlog(ML_NOTICE, SC_NODEF_FMT " uses a keepalive delay of " 1337 - "%u ms, but we use %u ms locally. disconnecting\n", 1338 - SC_NODEF_ARGS(sc), 1339 - be32_to_cpu(hand->o2net_keepalive_delay_ms), 1340 - o2net_keepalive_delay()); 1317 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a keepalive " 1318 + "delay of %u ms, but we use %u ms locally. 
" 1319 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1320 + be32_to_cpu(hand->o2net_keepalive_delay_ms), 1321 + o2net_keepalive_delay()); 1341 1322 o2net_ensure_shutdown(nn, sc, -ENOTCONN); 1342 1323 return -1; 1343 1324 } 1344 1325 1345 1326 if (be32_to_cpu(hand->o2hb_heartbeat_timeout_ms) != 1346 1327 O2HB_MAX_WRITE_TIMEOUT_MS) { 1347 - mlog(ML_NOTICE, SC_NODEF_FMT " uses a heartbeat timeout of " 1348 - "%u ms, but we use %u ms locally. disconnecting\n", 1349 - SC_NODEF_ARGS(sc), 1350 - be32_to_cpu(hand->o2hb_heartbeat_timeout_ms), 1351 - O2HB_MAX_WRITE_TIMEOUT_MS); 1328 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a heartbeat " 1329 + "timeout of %u ms, but we use %u ms locally. " 1330 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1331 + be32_to_cpu(hand->o2hb_heartbeat_timeout_ms), 1332 + O2HB_MAX_WRITE_TIMEOUT_MS); 1352 1333 o2net_ensure_shutdown(nn, sc, -ENOTCONN); 1353 1334 return -1; 1354 1335 } ··· 1559 1540 { 1560 1541 struct o2net_sock_container *sc = (struct o2net_sock_container *)data; 1561 1542 struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num); 1562 - 1563 1543 #ifdef CONFIG_DEBUG_FS 1564 - ktime_t now = ktime_get(); 1544 + unsigned long msecs = ktime_to_ms(ktime_get()) - 1545 + ktime_to_ms(sc->sc_tv_timer); 1546 + #else 1547 + unsigned long msecs = o2net_idle_timeout(); 1565 1548 #endif 1566 1549 1567 - printk(KERN_NOTICE "o2net: connection to " SC_NODEF_FMT " has been idle for %u.%u " 1568 - "seconds, shutting it down.\n", SC_NODEF_ARGS(sc), 1569 - o2net_idle_timeout() / 1000, 1570 - o2net_idle_timeout() % 1000); 1571 - 1572 - #ifdef CONFIG_DEBUG_FS 1573 - mlog(ML_NOTICE, "Here are some times that might help debug the " 1574 - "situation: (Timer: %lld, Now %lld, DataReady %lld, Advance %lld-%lld, " 1575 - "Key 0x%08x, Func %u, FuncTime %lld-%lld)\n", 1576 - (long long)ktime_to_us(sc->sc_tv_timer), (long long)ktime_to_us(now), 1577 - (long long)ktime_to_us(sc->sc_tv_data_ready), 1578 - (long long)ktime_to_us(sc->sc_tv_advance_start), 1579 - 
(long long)ktime_to_us(sc->sc_tv_advance_stop), 1580 - sc->sc_msg_key, sc->sc_msg_type, 1581 - (long long)ktime_to_us(sc->sc_tv_func_start), 1582 - (long long)ktime_to_us(sc->sc_tv_func_stop)); 1583 - #endif 1550 + printk(KERN_NOTICE "o2net: Connection to " SC_NODEF_FMT " has been " 1551 + "idle for %lu.%lu secs, shutting it down.\n", SC_NODEF_ARGS(sc), 1552 + msecs / 1000, msecs % 1000); 1584 1553 1585 1554 /* 1586 1555 * Initialize the nn_timeout so that the next connection attempt ··· 1701 1694 1702 1695 out: 1703 1696 if (ret) { 1704 - mlog(ML_NOTICE, "connect attempt to " SC_NODEF_FMT " failed " 1705 - "with errno %d\n", SC_NODEF_ARGS(sc), ret); 1697 + printk(KERN_NOTICE "o2net: Connect attempt to " SC_NODEF_FMT 1698 + " failed with errno %d\n", SC_NODEF_ARGS(sc), ret); 1706 1699 /* 0 err so that another will be queued and attempted 1707 1700 * from set_nn_state */ 1708 1701 if (sc) ··· 1725 1718 1726 1719 spin_lock(&nn->nn_lock); 1727 1720 if (!nn->nn_sc_valid) { 1728 - mlog(ML_ERROR, "no connection established with node %u after " 1729 - "%u.%u seconds, giving up and returning errors.\n", 1721 + printk(KERN_NOTICE "o2net: No connection established with " 1722 + "node %u after %u.%u seconds, giving up.\n", 1730 1723 o2net_num_from_nn(nn), 1731 1724 o2net_idle_timeout() / 1000, 1732 1725 o2net_idle_timeout() % 1000); ··· 1869 1862 1870 1863 node = o2nm_get_node_by_ip(sin.sin_addr.s_addr); 1871 1864 if (node == NULL) { 1872 - mlog(ML_NOTICE, "attempt to connect from unknown node at %pI4:%d\n", 1873 - &sin.sin_addr.s_addr, ntohs(sin.sin_port)); 1865 + printk(KERN_NOTICE "o2net: Attempt to connect from unknown " 1866 + "node at %pI4:%d\n", &sin.sin_addr.s_addr, 1867 + ntohs(sin.sin_port)); 1874 1868 ret = -EINVAL; 1875 1869 goto out; 1876 1870 } 1877 1871 1878 1872 if (o2nm_this_node() >= node->nd_num) { 1879 1873 local_node = o2nm_get_node_by_num(o2nm_this_node()); 1880 - mlog(ML_NOTICE, "unexpected connect attempt seen at node '%s' (" 1881 - "%u, %pI4:%d) from 
node '%s' (%u, %pI4:%d)\n", 1882 - local_node->nd_name, local_node->nd_num, 1883 - &(local_node->nd_ipv4_address), 1884 - ntohs(local_node->nd_ipv4_port), 1885 - node->nd_name, node->nd_num, &sin.sin_addr.s_addr, 1886 - ntohs(sin.sin_port)); 1874 + printk(KERN_NOTICE "o2net: Unexpected connect attempt seen " 1875 + "at node '%s' (%u, %pI4:%d) from node '%s' (%u, " 1876 + "%pI4:%d)\n", local_node->nd_name, local_node->nd_num, 1877 + &(local_node->nd_ipv4_address), 1878 + ntohs(local_node->nd_ipv4_port), node->nd_name, 1879 + node->nd_num, &sin.sin_addr.s_addr, ntohs(sin.sin_port)); 1887 1880 ret = -EINVAL; 1888 1881 goto out; 1889 1882 } ··· 1908 1901 ret = 0; 1909 1902 spin_unlock(&nn->nn_lock); 1910 1903 if (ret) { 1911 - mlog(ML_NOTICE, "attempt to connect from node '%s' at " 1912 - "%pI4:%d but it already has an open connection\n", 1913 - node->nd_name, &sin.sin_addr.s_addr, 1914 - ntohs(sin.sin_port)); 1904 + printk(KERN_NOTICE "o2net: Attempt to connect from node '%s' " 1905 + "at %pI4:%d but it already has an open connection\n", 1906 + node->nd_name, &sin.sin_addr.s_addr, 1907 + ntohs(sin.sin_port)); 1915 1908 goto out; 1916 1909 } 1917 1910 ··· 1991 1984 1992 1985 ret = sock_create(PF_INET, SOCK_STREAM, IPPROTO_TCP, &sock); 1993 1986 if (ret < 0) { 1994 - mlog(ML_ERROR, "unable to create socket, ret=%d\n", ret); 1987 + printk(KERN_ERR "o2net: Error %d while creating socket\n", ret); 1995 1988 goto out; 1996 1989 } 1997 1990 ··· 2008 2001 sock->sk->sk_reuse = 1; 2009 2002 ret = sock->ops->bind(sock, (struct sockaddr *)&sin, sizeof(sin)); 2010 2003 if (ret < 0) { 2011 - mlog(ML_ERROR, "unable to bind socket at %pI4:%u, " 2012 - "ret=%d\n", &addr, ntohs(port), ret); 2004 + printk(KERN_ERR "o2net: Error %d while binding socket at " 2005 + "%pI4:%u\n", ret, &addr, ntohs(port)); 2013 2006 goto out; 2014 2007 } 2015 2008 2016 2009 ret = sock->ops->listen(sock, 64); 2017 - if (ret < 0) { 2018 - mlog(ML_ERROR, "unable to listen on %pI4:%u, ret=%d\n", 2019 - &addr, 
ntohs(port), ret); 2020 - } 2010 + if (ret < 0) 2011 + printk(KERN_ERR "o2net: Error %d while listening on %pI4:%u\n", 2012 + ret, &addr, ntohs(port)); 2021 2013 2022 2014 out: 2023 2015 if (ret) {
+2
fs/ocfs2/cluster/tcp.h
··· 106 106 struct list_head *unreg_list); 107 107 void o2net_unregister_handler_list(struct list_head *list); 108 108 109 + void o2net_fill_node_map(unsigned long *map, unsigned bytes); 110 + 109 111 struct o2nm_node; 110 112 int o2net_register_hb_callbacks(void); 111 113 void o2net_unregister_hb_callbacks(void);
+1 -2
fs/ocfs2/dir.c
··· 1184 1184 if (pde) 1185 1185 le16_add_cpu(&pde->rec_len, 1186 1186 le16_to_cpu(de->rec_len)); 1187 - else 1188 - de->inode = 0; 1187 + de->inode = 0; 1189 1188 dir->i_version++; 1190 1189 ocfs2_journal_dirty(handle, bh); 1191 1190 goto bail;
+12 -44
fs/ocfs2/dlm/dlmcommon.h
··· 859 859 void dlm_wait_for_recovery(struct dlm_ctxt *dlm); 860 860 void dlm_kick_recovery_thread(struct dlm_ctxt *dlm); 861 861 int dlm_is_node_dead(struct dlm_ctxt *dlm, u8 node); 862 - int dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout); 863 - int dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout); 862 + void dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout); 863 + void dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout); 864 864 865 865 void dlm_put(struct dlm_ctxt *dlm); 866 866 struct dlm_ctxt *dlm_grab(struct dlm_ctxt *dlm); ··· 877 877 kref_get(&res->refs); 878 878 } 879 879 void dlm_lockres_put(struct dlm_lock_resource *res); 880 - void __dlm_unhash_lockres(struct dlm_lock_resource *res); 881 - void __dlm_insert_lockres(struct dlm_ctxt *dlm, 882 - struct dlm_lock_resource *res); 880 + void __dlm_unhash_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res); 881 + void __dlm_insert_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res); 883 882 struct dlm_lock_resource * __dlm_lookup_lockres_full(struct dlm_ctxt *dlm, 884 883 const char *name, 885 884 unsigned int len, ··· 901 902 const char *name, 902 903 unsigned int namelen); 903 904 904 - #define dlm_lockres_set_refmap_bit(bit,res) \ 905 - __dlm_lockres_set_refmap_bit(bit,res,__FILE__,__LINE__) 906 - #define dlm_lockres_clear_refmap_bit(bit,res) \ 907 - __dlm_lockres_clear_refmap_bit(bit,res,__FILE__,__LINE__) 905 + void dlm_lockres_set_refmap_bit(struct dlm_ctxt *dlm, 906 + struct dlm_lock_resource *res, int bit); 907 + void dlm_lockres_clear_refmap_bit(struct dlm_ctxt *dlm, 908 + struct dlm_lock_resource *res, int bit); 908 909 909 - static inline void __dlm_lockres_set_refmap_bit(int bit, 910 - struct dlm_lock_resource *res, 911 - const char *file, 912 - int line) 913 - { 914 - //printk("%s:%d:%.*s: setting bit %d\n", file, line, 915 - // res->lockname.len, res->lockname.name, bit); 916 - set_bit(bit, 
res->refmap); 917 - } 918 - 919 - static inline void __dlm_lockres_clear_refmap_bit(int bit, 920 - struct dlm_lock_resource *res, 921 - const char *file, 922 - int line) 923 - { 924 - //printk("%s:%d:%.*s: clearing bit %d\n", file, line, 925 - // res->lockname.len, res->lockname.name, bit); 926 - clear_bit(bit, res->refmap); 927 - } 928 - 929 - void __dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, 930 - struct dlm_lock_resource *res, 931 - const char *file, 932 - int line); 933 - void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 934 - struct dlm_lock_resource *res, 935 - int new_lockres, 936 - const char *file, 937 - int line); 938 - #define dlm_lockres_drop_inflight_ref(d,r) \ 939 - __dlm_lockres_drop_inflight_ref(d,r,__FILE__,__LINE__) 940 - #define dlm_lockres_grab_inflight_ref(d,r) \ 941 - __dlm_lockres_grab_inflight_ref(d,r,0,__FILE__,__LINE__) 942 - #define dlm_lockres_grab_inflight_ref_new(d,r) \ 943 - __dlm_lockres_grab_inflight_ref(d,r,1,__FILE__,__LINE__) 910 + void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, 911 + struct dlm_lock_resource *res); 912 + void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 913 + struct dlm_lock_resource *res); 944 914 945 915 void dlm_queue_ast(struct dlm_ctxt *dlm, struct dlm_lock *lock); 946 916 void dlm_queue_bast(struct dlm_ctxt *dlm, struct dlm_lock *lock);
+22 -22
fs/ocfs2/dlm/dlmdomain.c
··· 157 157 158 158 static void dlm_unregister_domain_handlers(struct dlm_ctxt *dlm); 159 159 160 - void __dlm_unhash_lockres(struct dlm_lock_resource *lockres) 160 + void __dlm_unhash_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res) 161 161 { 162 - if (!hlist_unhashed(&lockres->hash_node)) { 163 - hlist_del_init(&lockres->hash_node); 164 - dlm_lockres_put(lockres); 165 - } 162 + if (hlist_unhashed(&res->hash_node)) 163 + return; 164 + 165 + mlog(0, "%s: Unhash res %.*s\n", dlm->name, res->lockname.len, 166 + res->lockname.name); 167 + hlist_del_init(&res->hash_node); 168 + dlm_lockres_put(res); 166 169 } 167 170 168 - void __dlm_insert_lockres(struct dlm_ctxt *dlm, 169 - struct dlm_lock_resource *res) 171 + void __dlm_insert_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res) 170 172 { 171 173 struct hlist_head *bucket; 172 174 struct qstr *q; ··· 182 180 dlm_lockres_get(res); 183 181 184 182 hlist_add_head(&res->hash_node, bucket); 183 + 184 + mlog(0, "%s: Hash res %.*s\n", dlm->name, res->lockname.len, 185 + res->lockname.name); 185 186 } 186 187 187 188 struct dlm_lock_resource * __dlm_lookup_lockres_full(struct dlm_ctxt *dlm, ··· 544 539 545 540 static void __dlm_print_nodes(struct dlm_ctxt *dlm) 546 541 { 547 - int node = -1; 542 + int node = -1, num = 0; 548 543 549 544 assert_spin_locked(&dlm->spinlock); 550 545 551 - printk(KERN_NOTICE "o2dlm: Nodes in domain %s: ", dlm->name); 552 - 546 + printk("( "); 553 547 while ((node = find_next_bit(dlm->domain_map, O2NM_MAX_NODES, 554 548 node + 1)) < O2NM_MAX_NODES) { 555 549 printk("%d ", node); 550 + ++num; 556 551 } 557 - printk("\n"); 552 + printk(") %u nodes\n", num); 558 553 } 559 554 560 555 static int dlm_exit_domain_handler(struct o2net_msg *msg, u32 len, void *data, ··· 571 566 572 567 node = exit_msg->node_idx; 573 568 574 - printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s\n", node, dlm->name); 575 - 576 569 spin_lock(&dlm->spinlock); 577 570 clear_bit(node, dlm->domain_map); 578 
571 clear_bit(node, dlm->exit_domain_map); 572 + printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s ", node, dlm->name); 579 573 __dlm_print_nodes(dlm); 580 574 581 575 /* notify anything attached to the heartbeat events */ ··· 759 755 760 756 dlm_mark_domain_leaving(dlm); 761 757 dlm_leave_domain(dlm); 758 + printk(KERN_NOTICE "o2dlm: Leaving domain %s\n", dlm->name); 762 759 dlm_force_free_mles(dlm); 763 760 dlm_complete_dlm_shutdown(dlm); 764 761 } ··· 975 970 clear_bit(assert->node_idx, dlm->exit_domain_map); 976 971 __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN); 977 972 978 - printk(KERN_NOTICE "o2dlm: Node %u joins domain %s\n", 973 + printk(KERN_NOTICE "o2dlm: Node %u joins domain %s ", 979 974 assert->node_idx, dlm->name); 980 975 __dlm_print_nodes(dlm); 981 976 ··· 1706 1701 bail: 1707 1702 spin_lock(&dlm->spinlock); 1708 1703 __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN); 1709 - if (!status) 1704 + if (!status) { 1705 + printk(KERN_NOTICE "o2dlm: Joining domain %s ", dlm->name); 1710 1706 __dlm_print_nodes(dlm); 1707 + } 1711 1708 spin_unlock(&dlm->spinlock); 1712 1709 1713 1710 if (ctxt) { ··· 2135 2128 if (strlen(domain) >= O2NM_MAX_NAME_LEN) { 2136 2129 ret = -ENAMETOOLONG; 2137 2130 mlog(ML_ERROR, "domain name length too long\n"); 2138 - goto leave; 2139 - } 2140 - 2141 - if (!o2hb_check_local_node_heartbeating()) { 2142 - mlog(ML_ERROR, "the local node has not been configured, or is " 2143 - "not heartbeating\n"); 2144 - ret = -EPROTO; 2145 2131 goto leave; 2146 2132 } 2147 2133
+26 -28
fs/ocfs2/dlm/dlmlock.c
··· 183 183 kick_thread = 1; 184 184 } 185 185 } 186 - /* reduce the inflight count, this may result in the lockres 187 - * being purged below during calc_usage */ 188 - if (lock->ml.node == dlm->node_num) 189 - dlm_lockres_drop_inflight_ref(dlm, res); 190 186 191 187 spin_unlock(&res->spinlock); 192 188 wake_up(&res->wq); ··· 227 231 lock->ml.type, res->lockname.len, 228 232 res->lockname.name, flags); 229 233 234 + /* 235 + * Wait if resource is getting recovered, remastered, etc. 236 + * If the resource was remastered and new owner is self, then exit. 237 + */ 230 238 spin_lock(&res->spinlock); 231 - 232 - /* will exit this call with spinlock held */ 233 239 __dlm_wait_on_lockres(res); 240 + if (res->owner == dlm->node_num) { 241 + spin_unlock(&res->spinlock); 242 + return DLM_RECOVERING; 243 + } 234 244 res->state |= DLM_LOCK_RES_IN_PROGRESS; 235 245 236 246 /* add lock to local (secondary) queue */ ··· 321 319 tmpret = o2net_send_message(DLM_CREATE_LOCK_MSG, dlm->key, &create, 322 320 sizeof(create), res->owner, &status); 323 321 if (tmpret >= 0) { 324 - // successfully sent and received 325 - ret = status; // this is already a dlm_status 322 + ret = status; 326 323 if (ret == DLM_REJECTED) { 327 - mlog(ML_ERROR, "%s:%.*s: BUG. this is a stale lockres " 328 - "no longer owned by %u. that node is coming back " 329 - "up currently.\n", dlm->name, create.namelen, 324 + mlog(ML_ERROR, "%s: res %.*s, Stale lockres no longer " 325 + "owned by node %u. 
That node is coming back up " 326 + "currently.\n", dlm->name, create.namelen, 330 327 create.name, res->owner); 331 328 dlm_print_one_lock_resource(res); 332 329 BUG(); 333 330 } 334 331 } else { 335 - mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to " 336 - "node %u\n", tmpret, DLM_CREATE_LOCK_MSG, dlm->key, 337 - res->owner); 338 - if (dlm_is_host_down(tmpret)) { 332 + mlog(ML_ERROR, "%s: res %.*s, Error %d send CREATE LOCK to " 333 + "node %u\n", dlm->name, create.namelen, create.name, 334 + tmpret, res->owner); 335 + if (dlm_is_host_down(tmpret)) 339 336 ret = DLM_RECOVERING; 340 - mlog(0, "node %u died so returning DLM_RECOVERING " 341 - "from lock message!\n", res->owner); 342 - } else { 337 + else 343 338 ret = dlm_err_to_dlm_status(tmpret); 344 - } 345 339 } 346 340 347 341 return ret; ··· 438 440 /* zero memory only if kernel-allocated */ 439 441 lksb = kzalloc(sizeof(*lksb), GFP_NOFS); 440 442 if (!lksb) { 441 - kfree(lock); 443 + kmem_cache_free(dlm_lock_cache, lock); 442 444 return NULL; 443 445 } 444 446 kernel_allocated = 1; ··· 716 718 717 719 if (status == DLM_RECOVERING || status == DLM_MIGRATING || 718 720 status == DLM_FORWARD) { 719 - mlog(0, "retrying lock with migration/" 720 - "recovery/in progress\n"); 721 721 msleep(100); 722 - /* no waiting for dlm_reco_thread */ 723 722 if (recovery) { 724 723 if (status != DLM_RECOVERING) 725 724 goto retry_lock; 726 - 727 - mlog(0, "%s: got RECOVERING " 728 - "for $RECOVERY lock, master " 729 - "was %u\n", dlm->name, 730 - res->owner); 731 725 /* wait to see the node go down, then 732 726 * drop down and allow the lockres to 733 727 * get cleaned up. need to remaster. 
*/ ··· 730 740 goto retry_lock; 731 741 } 732 742 } 743 + 744 + /* Inflight taken in dlm_get_lock_resource() is dropped here */ 745 + spin_lock(&res->spinlock); 746 + dlm_lockres_drop_inflight_ref(dlm, res); 747 + spin_unlock(&res->spinlock); 748 + 749 + dlm_lockres_calc_usage(dlm, res); 750 + dlm_kick_thread(dlm, res); 733 751 734 752 if (status != DLM_NORMAL) { 735 753 lock->lksb->flags &= ~DLM_LKSB_GET_LVB;
+91 -90
fs/ocfs2/dlm/dlmmaster.c
··· 631 631 return NULL; 632 632 } 633 633 634 - void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 635 - struct dlm_lock_resource *res, 636 - int new_lockres, 637 - const char *file, 638 - int line) 634 + void dlm_lockres_set_refmap_bit(struct dlm_ctxt *dlm, 635 + struct dlm_lock_resource *res, int bit) 639 636 { 640 - if (!new_lockres) 641 - assert_spin_locked(&res->spinlock); 637 + assert_spin_locked(&res->spinlock); 642 638 643 - if (!test_bit(dlm->node_num, res->refmap)) { 644 - BUG_ON(res->inflight_locks != 0); 645 - dlm_lockres_set_refmap_bit(dlm->node_num, res); 646 - } 647 - res->inflight_locks++; 648 - mlog(0, "%s:%.*s: inflight++: now %u\n", 649 - dlm->name, res->lockname.len, res->lockname.name, 650 - res->inflight_locks); 639 + mlog(0, "res %.*s, set node %u, %ps()\n", res->lockname.len, 640 + res->lockname.name, bit, __builtin_return_address(0)); 641 + 642 + set_bit(bit, res->refmap); 651 643 } 652 644 653 - void __dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, 654 - struct dlm_lock_resource *res, 655 - const char *file, 656 - int line) 645 + void dlm_lockres_clear_refmap_bit(struct dlm_ctxt *dlm, 646 + struct dlm_lock_resource *res, int bit) 647 + { 648 + assert_spin_locked(&res->spinlock); 649 + 650 + mlog(0, "res %.*s, clr node %u, %ps()\n", res->lockname.len, 651 + res->lockname.name, bit, __builtin_return_address(0)); 652 + 653 + clear_bit(bit, res->refmap); 654 + } 655 + 656 + 657 + void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 658 + struct dlm_lock_resource *res) 659 + { 660 + assert_spin_locked(&res->spinlock); 661 + 662 + res->inflight_locks++; 663 + 664 + mlog(0, "%s: res %.*s, inflight++: now %u, %ps()\n", dlm->name, 665 + res->lockname.len, res->lockname.name, res->inflight_locks, 666 + __builtin_return_address(0)); 667 + } 668 + 669 + void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, 670 + struct dlm_lock_resource *res) 657 671 { 658 672 assert_spin_locked(&res->spinlock); 659 673 660 674 
BUG_ON(res->inflight_locks == 0); 675 + 661 676 res->inflight_locks--; 662 - mlog(0, "%s:%.*s: inflight--: now %u\n", 663 - dlm->name, res->lockname.len, res->lockname.name, 664 - res->inflight_locks); 665 - if (res->inflight_locks == 0) 666 - dlm_lockres_clear_refmap_bit(dlm->node_num, res); 677 + 678 + mlog(0, "%s: res %.*s, inflight--: now %u, %ps()\n", dlm->name, 679 + res->lockname.len, res->lockname.name, res->inflight_locks, 680 + __builtin_return_address(0)); 681 + 667 682 wake_up(&res->wq); 668 683 } 669 684 ··· 712 697 unsigned int hash; 713 698 int tries = 0; 714 699 int bit, wait_on_recovery = 0; 715 - int drop_inflight_if_nonlocal = 0; 716 700 717 701 BUG_ON(!lockid); 718 702 ··· 723 709 spin_lock(&dlm->spinlock); 724 710 tmpres = __dlm_lookup_lockres_full(dlm, lockid, namelen, hash); 725 711 if (tmpres) { 726 - int dropping_ref = 0; 727 - 728 712 spin_unlock(&dlm->spinlock); 729 - 730 713 spin_lock(&tmpres->spinlock); 731 - /* We wait for the other thread that is mastering the resource */ 714 + /* Wait on the thread that is mastering the resource */ 732 715 if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) { 733 716 __dlm_wait_on_lockres(tmpres); 734 717 BUG_ON(tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN); 735 - } 736 - 737 - if (tmpres->owner == dlm->node_num) { 738 - BUG_ON(tmpres->state & DLM_LOCK_RES_DROPPING_REF); 739 - dlm_lockres_grab_inflight_ref(dlm, tmpres); 740 - } else if (tmpres->state & DLM_LOCK_RES_DROPPING_REF) 741 - dropping_ref = 1; 742 - spin_unlock(&tmpres->spinlock); 743 - 744 - /* wait until done messaging the master, drop our ref to allow 745 - * the lockres to be purged, start over. 
*/ 746 - if (dropping_ref) { 747 - spin_lock(&tmpres->spinlock); 748 - __dlm_wait_on_lockres_flags(tmpres, DLM_LOCK_RES_DROPPING_REF); 749 718 spin_unlock(&tmpres->spinlock); 750 719 dlm_lockres_put(tmpres); 751 720 tmpres = NULL; 752 721 goto lookup; 753 722 } 754 723 755 - mlog(0, "found in hash!\n"); 724 + /* Wait on the resource purge to complete before continuing */ 725 + if (tmpres->state & DLM_LOCK_RES_DROPPING_REF) { 726 + BUG_ON(tmpres->owner == dlm->node_num); 727 + __dlm_wait_on_lockres_flags(tmpres, 728 + DLM_LOCK_RES_DROPPING_REF); 729 + spin_unlock(&tmpres->spinlock); 730 + dlm_lockres_put(tmpres); 731 + tmpres = NULL; 732 + goto lookup; 733 + } 734 + 735 + /* Grab inflight ref to pin the resource */ 736 + dlm_lockres_grab_inflight_ref(dlm, tmpres); 737 + 738 + spin_unlock(&tmpres->spinlock); 756 739 if (res) 757 740 dlm_lockres_put(res); 758 741 res = tmpres; ··· 840 829 * but they might own this lockres. wait on them. */ 841 830 bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0); 842 831 if (bit < O2NM_MAX_NODES) { 843 - mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to " 844 - "recover before lock mastery can begin\n", 832 + mlog(0, "%s: res %.*s, At least one node (%d) " 833 + "to recover before lock mastery can begin\n", 845 834 dlm->name, namelen, (char *)lockid, bit); 846 835 wait_on_recovery = 1; 847 836 } ··· 854 843 855 844 /* finally add the lockres to its hash bucket */ 856 845 __dlm_insert_lockres(dlm, res); 857 - /* since this lockres is new it doesn't not require the spinlock */ 858 - dlm_lockres_grab_inflight_ref_new(dlm, res); 859 846 860 - /* if this node does not become the master make sure to drop 861 - * this inflight reference below */ 862 - drop_inflight_if_nonlocal = 1; 847 + /* Grab inflight ref to pin the resource */ 848 + spin_lock(&res->spinlock); 849 + dlm_lockres_grab_inflight_ref(dlm, res); 850 + spin_unlock(&res->spinlock); 863 851 864 852 /* get an extra ref on the mle in case this is a BLOCK 865 853 * if so, 
the creator of the BLOCK may try to put the last ··· 874 864 * dlm spinlock would be detectable be a change on the mle, 875 865 * so we only need to clear out the recovery map once. */ 876 866 if (dlm_is_recovery_lock(lockid, namelen)) { 877 - mlog(ML_NOTICE, "%s: recovery map is not empty, but " 878 - "must master $RECOVERY lock now\n", dlm->name); 867 + mlog(0, "%s: Recovery map is not empty, but must " 868 + "master $RECOVERY lock now\n", dlm->name); 879 869 if (!dlm_pre_master_reco_lockres(dlm, res)) 880 870 wait_on_recovery = 0; 881 871 else { ··· 893 883 spin_lock(&dlm->spinlock); 894 884 bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0); 895 885 if (bit < O2NM_MAX_NODES) { 896 - mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to " 897 - "recover before lock mastery can begin\n", 886 + mlog(0, "%s: res %.*s, At least one node (%d) " 887 + "to recover before lock mastery can begin\n", 898 888 dlm->name, namelen, (char *)lockid, bit); 899 889 wait_on_recovery = 1; 900 890 } else ··· 923 913 * yet, keep going until it does. this is how the 924 914 * master will know that asserts are needed back to 925 915 * the lower nodes. 
*/ 926 - mlog(0, "%s:%.*s: requests only up to %u but master " 927 - "is %u, keep going\n", dlm->name, namelen, 916 + mlog(0, "%s: res %.*s, Requests only up to %u but " 917 + "master is %u, keep going\n", dlm->name, namelen, 928 918 lockid, nodenum, mle->master); 929 919 } 930 920 } ··· 934 924 ret = dlm_wait_for_lock_mastery(dlm, res, mle, &blocked); 935 925 if (ret < 0) { 936 926 wait_on_recovery = 1; 937 - mlog(0, "%s:%.*s: node map changed, redo the " 938 - "master request now, blocked=%d\n", 939 - dlm->name, res->lockname.len, 927 + mlog(0, "%s: res %.*s, Node map changed, redo the master " 928 + "request now, blocked=%d\n", dlm->name, res->lockname.len, 940 929 res->lockname.name, blocked); 941 930 if (++tries > 20) { 942 - mlog(ML_ERROR, "%s:%.*s: spinning on " 943 - "dlm_wait_for_lock_mastery, blocked=%d\n", 931 + mlog(ML_ERROR, "%s: res %.*s, Spinning on " 932 + "dlm_wait_for_lock_mastery, blocked = %d\n", 944 933 dlm->name, res->lockname.len, 945 934 res->lockname.name, blocked); 946 935 dlm_print_one_lock_resource(res); ··· 949 940 goto redo_request; 950 941 } 951 942 952 - mlog(0, "lockres mastered by %u\n", res->owner); 943 + mlog(0, "%s: res %.*s, Mastered by %u\n", dlm->name, res->lockname.len, 944 + res->lockname.name, res->owner); 953 945 /* make sure we never continue without this */ 954 946 BUG_ON(res->owner == O2NM_MAX_NODES); 955 947 ··· 962 952 963 953 wake_waiters: 964 954 spin_lock(&res->spinlock); 965 - if (res->owner != dlm->node_num && drop_inflight_if_nonlocal) 966 - dlm_lockres_drop_inflight_ref(dlm, res); 967 955 res->state &= ~DLM_LOCK_RES_IN_PROGRESS; 968 956 spin_unlock(&res->spinlock); 969 957 wake_up(&res->wq); ··· 1434 1426 } 1435 1427 1436 1428 if (res->owner == dlm->node_num) { 1437 - mlog(0, "%s:%.*s: setting bit %u in refmap\n", 1438 - dlm->name, namelen, name, request->node_idx); 1439 - dlm_lockres_set_refmap_bit(request->node_idx, res); 1429 + dlm_lockres_set_refmap_bit(dlm, res, request->node_idx); 1440 1430 
spin_unlock(&res->spinlock); 1441 1431 response = DLM_MASTER_RESP_YES; 1442 1432 if (mle) ··· 1499 1493 * go back and clean the mles on any 1500 1494 * other nodes */ 1501 1495 dispatch_assert = 1; 1502 - dlm_lockres_set_refmap_bit(request->node_idx, res); 1503 - mlog(0, "%s:%.*s: setting bit %u in refmap\n", 1504 - dlm->name, namelen, name, 1505 - request->node_idx); 1496 + dlm_lockres_set_refmap_bit(dlm, res, 1497 + request->node_idx); 1506 1498 } else 1507 1499 response = DLM_MASTER_RESP_NO; 1508 1500 } else { ··· 1706 1702 "lockres, set the bit in the refmap\n", 1707 1703 namelen, lockname, to); 1708 1704 spin_lock(&res->spinlock); 1709 - dlm_lockres_set_refmap_bit(to, res); 1705 + dlm_lockres_set_refmap_bit(dlm, res, to); 1710 1706 spin_unlock(&res->spinlock); 1711 1707 } 1712 1708 } ··· 2191 2187 namelen = res->lockname.len; 2192 2188 BUG_ON(namelen > O2NM_MAX_NAME_LEN); 2193 2189 2194 - mlog(0, "%s:%.*s: sending deref to %d\n", 2195 - dlm->name, namelen, lockname, res->owner); 2196 2190 memset(&deref, 0, sizeof(deref)); 2197 2191 deref.node_idx = dlm->node_num; 2198 2192 deref.namelen = namelen; ··· 2199 2197 ret = o2net_send_message(DLM_DEREF_LOCKRES_MSG, dlm->key, 2200 2198 &deref, sizeof(deref), res->owner, &r); 2201 2199 if (ret < 0) 2202 - mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to " 2203 - "node %u\n", ret, DLM_DEREF_LOCKRES_MSG, dlm->key, 2204 - res->owner); 2200 + mlog(ML_ERROR, "%s: res %.*s, error %d send DEREF to node %u\n", 2201 + dlm->name, namelen, lockname, ret, res->owner); 2205 2202 else if (r < 0) { 2206 2203 /* BAD. other node says I did not have a ref. 
*/ 2207 - mlog(ML_ERROR,"while dropping ref on %s:%.*s " 2208 - "(master=%u) got %d.\n", dlm->name, namelen, 2209 - lockname, res->owner, r); 2204 + mlog(ML_ERROR, "%s: res %.*s, DEREF to node %u got %d\n", 2205 + dlm->name, namelen, lockname, res->owner, r); 2210 2206 dlm_print_one_lock_resource(res); 2211 2207 BUG(); 2212 2208 } ··· 2260 2260 else { 2261 2261 BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF); 2262 2262 if (test_bit(node, res->refmap)) { 2263 - dlm_lockres_clear_refmap_bit(node, res); 2263 + dlm_lockres_clear_refmap_bit(dlm, res, node); 2264 2264 cleared = 1; 2265 2265 } 2266 2266 } ··· 2320 2320 BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF); 2321 2321 if (test_bit(node, res->refmap)) { 2322 2322 __dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_SETREF_INPROG); 2323 - dlm_lockres_clear_refmap_bit(node, res); 2323 + dlm_lockres_clear_refmap_bit(dlm, res, node); 2324 2324 cleared = 1; 2325 2325 } 2326 2326 spin_unlock(&res->spinlock); ··· 2802 2802 BUG_ON(!list_empty(&lock->bast_list)); 2803 2803 BUG_ON(lock->ast_pending); 2804 2804 BUG_ON(lock->bast_pending); 2805 - dlm_lockres_clear_refmap_bit(lock->ml.node, res); 2805 + dlm_lockres_clear_refmap_bit(dlm, res, 2806 + lock->ml.node); 2806 2807 list_del_init(&lock->list); 2807 2808 dlm_lock_put(lock); 2808 2809 /* In a normal unlock, we would have added a ··· 2824 2823 mlog(0, "%s:%.*s: node %u had a ref to this " 2825 2824 "migrating lockres, clearing\n", dlm->name, 2826 2825 res->lockname.len, res->lockname.name, bit); 2827 - dlm_lockres_clear_refmap_bit(bit, res); 2826 + dlm_lockres_clear_refmap_bit(dlm, res, bit); 2828 2827 } 2829 2828 bit++; 2830 2829 } ··· 2917 2916 &migrate, sizeof(migrate), nodenum, 2918 2917 &status); 2919 2918 if (ret < 0) { 2920 - mlog(ML_ERROR, "Error %d when sending message %u (key " 2921 - "0x%x) to node %u\n", ret, DLM_MIGRATE_REQUEST_MSG, 2922 - dlm->key, nodenum); 2919 + mlog(ML_ERROR, "%s: res %.*s, Error %d send " 2920 + "MIGRATE_REQUEST to node %u\n", dlm->name, 2921 + 
migrate.namelen, migrate.name, ret, nodenum); 2923 2922 if (!dlm_is_host_down(ret)) { 2924 2923 mlog(ML_ERROR, "unhandled error=%d!\n", ret); 2925 2924 BUG(); ··· 2938 2937 dlm->name, res->lockname.len, res->lockname.name, 2939 2938 nodenum); 2940 2939 spin_lock(&res->spinlock); 2941 - dlm_lockres_set_refmap_bit(nodenum, res); 2940 + dlm_lockres_set_refmap_bit(dlm, res, nodenum); 2942 2941 spin_unlock(&res->spinlock); 2943 2942 } 2944 2943 } ··· 3272 3271 * mastery reference here since old_master will briefly have 3273 3272 * a reference after the migration completes */ 3274 3273 spin_lock(&res->spinlock); 3275 - dlm_lockres_set_refmap_bit(old_master, res); 3274 + dlm_lockres_set_refmap_bit(dlm, res, old_master); 3276 3275 spin_unlock(&res->spinlock); 3277 3276 3278 3277 mlog(0, "now time to do a migrate request to other nodes\n");
+82 -82
fs/ocfs2/dlm/dlmrecovery.c
··· 362 362 } 363 363 364 364 365 - int dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout) 365 + void dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout) 366 366 { 367 - if (timeout) { 368 - mlog(ML_NOTICE, "%s: waiting %dms for notification of " 369 - "death of node %u\n", dlm->name, timeout, node); 367 + if (dlm_is_node_dead(dlm, node)) 368 + return; 369 + 370 + printk(KERN_NOTICE "o2dlm: Waiting on the death of node %u in " 371 + "domain %s\n", node, dlm->name); 372 + 373 + if (timeout) 370 374 wait_event_timeout(dlm->dlm_reco_thread_wq, 371 - dlm_is_node_dead(dlm, node), 372 - msecs_to_jiffies(timeout)); 373 - } else { 374 - mlog(ML_NOTICE, "%s: waiting indefinitely for notification " 375 - "of death of node %u\n", dlm->name, node); 375 + dlm_is_node_dead(dlm, node), 376 + msecs_to_jiffies(timeout)); 377 + else 376 378 wait_event(dlm->dlm_reco_thread_wq, 377 379 dlm_is_node_dead(dlm, node)); 378 - } 379 - /* for now, return 0 */ 380 - return 0; 381 380 } 382 381 383 - int dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout) 382 + void dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout) 384 383 { 385 - if (timeout) { 386 - mlog(0, "%s: waiting %dms for notification of " 387 - "recovery of node %u\n", dlm->name, timeout, node); 384 + if (dlm_is_node_recovered(dlm, node)) 385 + return; 386 + 387 + printk(KERN_NOTICE "o2dlm: Waiting on the recovery of node %u in " 388 + "domain %s\n", node, dlm->name); 389 + 390 + if (timeout) 388 391 wait_event_timeout(dlm->dlm_reco_thread_wq, 389 - dlm_is_node_recovered(dlm, node), 390 - msecs_to_jiffies(timeout)); 391 - } else { 392 - mlog(0, "%s: waiting indefinitely for notification " 393 - "of recovery of node %u\n", dlm->name, node); 392 + dlm_is_node_recovered(dlm, node), 393 + msecs_to_jiffies(timeout)); 394 + else 394 395 wait_event(dlm->dlm_reco_thread_wq, 395 396 dlm_is_node_recovered(dlm, node)); 396 - } 397 - /* for now, return 0 */ 398 - return 0; 
399 397 } 400 398 401 399 /* callers of the top-level api calls (dlmlock/dlmunlock) should ··· 428 430 { 429 431 spin_lock(&dlm->spinlock); 430 432 BUG_ON(dlm->reco.state & DLM_RECO_STATE_ACTIVE); 433 + printk(KERN_NOTICE "o2dlm: Begin recovery on domain %s for node %u\n", 434 + dlm->name, dlm->reco.dead_node); 431 435 dlm->reco.state |= DLM_RECO_STATE_ACTIVE; 432 436 spin_unlock(&dlm->spinlock); 433 437 } ··· 440 440 BUG_ON(!(dlm->reco.state & DLM_RECO_STATE_ACTIVE)); 441 441 dlm->reco.state &= ~DLM_RECO_STATE_ACTIVE; 442 442 spin_unlock(&dlm->spinlock); 443 + printk(KERN_NOTICE "o2dlm: End recovery on domain %s\n", dlm->name); 443 444 wake_up(&dlm->reco.event); 445 + } 446 + 447 + static void dlm_print_recovery_master(struct dlm_ctxt *dlm) 448 + { 449 + printk(KERN_NOTICE "o2dlm: Node %u (%s) is the Recovery Master for the " 450 + "dead node %u in domain %s\n", dlm->reco.new_master, 451 + (dlm->node_num == dlm->reco.new_master ? "me" : "he"), 452 + dlm->reco.dead_node, dlm->name); 444 453 } 445 454 446 455 static int dlm_do_recovery(struct dlm_ctxt *dlm) ··· 514 505 } 515 506 mlog(0, "another node will master this recovery session.\n"); 516 507 } 517 - mlog(0, "dlm=%s (%d), new_master=%u, this node=%u, dead_node=%u\n", 518 - dlm->name, task_pid_nr(dlm->dlm_reco_thread_task), dlm->reco.new_master, 519 - dlm->node_num, dlm->reco.dead_node); 508 + 509 + dlm_print_recovery_master(dlm); 520 510 521 511 /* it is safe to start everything back up here 522 512 * because all of the dead node's lock resources ··· 526 518 return 0; 527 519 528 520 master_here: 529 - mlog(ML_NOTICE, "(%d) Node %u is the Recovery Master for the Dead Node " 530 - "%u for Domain %s\n", task_pid_nr(dlm->dlm_reco_thread_task), 531 - dlm->node_num, dlm->reco.dead_node, dlm->name); 521 + dlm_print_recovery_master(dlm); 532 522 533 523 status = dlm_remaster_locks(dlm, dlm->reco.dead_node); 534 524 if (status < 0) { 535 525 /* we should never hit this anymore */ 536 - mlog(ML_ERROR, "error %d 
remastering locks for node %u, " 537 - "retrying.\n", status, dlm->reco.dead_node); 526 + mlog(ML_ERROR, "%s: Error %d remastering locks for node %u, " 527 + "retrying.\n", dlm->name, status, dlm->reco.dead_node); 538 528 /* yield a bit to allow any final network messages 539 529 * to get handled on remaining nodes */ 540 530 msleep(100); ··· 573 567 BUG_ON(ndata->state != DLM_RECO_NODE_DATA_INIT); 574 568 ndata->state = DLM_RECO_NODE_DATA_REQUESTING; 575 569 576 - mlog(0, "requesting lock info from node %u\n", 570 + mlog(0, "%s: Requesting lock info from node %u\n", dlm->name, 577 571 ndata->node_num); 578 572 579 573 if (ndata->node_num == dlm->node_num) { ··· 646 640 spin_unlock(&dlm_reco_state_lock); 647 641 } 648 642 649 - mlog(0, "done requesting all lock info\n"); 643 + mlog(0, "%s: Done requesting all lock info\n", dlm->name); 650 644 651 645 /* nodes should be sending reco data now 652 646 * just need to wait */ ··· 808 802 809 803 /* negative status is handled by caller */ 810 804 if (ret < 0) 811 - mlog(ML_ERROR, "Error %d when sending message %u (key " 812 - "0x%x) to node %u\n", ret, DLM_LOCK_REQUEST_MSG, 813 - dlm->key, request_from); 814 - 805 + mlog(ML_ERROR, "%s: Error %d send LOCK_REQUEST to node %u " 806 + "to recover dead node %u\n", dlm->name, ret, 807 + request_from, dead_node); 815 808 // return from here, then 816 809 // sleep until all received or error 817 810 return ret; ··· 961 956 ret = o2net_send_message(DLM_RECO_DATA_DONE_MSG, dlm->key, &done_msg, 962 957 sizeof(done_msg), send_to, &tmpret); 963 958 if (ret < 0) { 964 - mlog(ML_ERROR, "Error %d when sending message %u (key " 965 - "0x%x) to node %u\n", ret, DLM_RECO_DATA_DONE_MSG, 966 - dlm->key, send_to); 959 + mlog(ML_ERROR, "%s: Error %d send RECO_DATA_DONE to node %u " 960 + "to recover dead node %u\n", dlm->name, ret, send_to, 961 + dead_node); 967 962 if (!dlm_is_host_down(ret)) { 968 963 BUG(); 969 964 } ··· 1132 1127 if (ret < 0) { 1133 1128 /* XXX: negative status is not 
handled. 1134 1129 * this will end up killing this node. */ 1135 - mlog(ML_ERROR, "Error %d when sending message %u (key " 1136 - "0x%x) to node %u\n", ret, DLM_MIG_LOCKRES_MSG, 1137 - dlm->key, send_to); 1130 + mlog(ML_ERROR, "%s: res %.*s, Error %d send MIG_LOCKRES to " 1131 + "node %u (%s)\n", dlm->name, mres->lockname_len, 1132 + mres->lockname, ret, send_to, 1133 + (orig_flags & DLM_MRES_MIGRATION ? 1134 + "migration" : "recovery")); 1138 1135 } else { 1139 1136 /* might get an -ENOMEM back here */ 1140 1137 ret = status; ··· 1774 1767 dlm->name, mres->lockname_len, mres->lockname, 1775 1768 from); 1776 1769 spin_lock(&res->spinlock); 1777 - dlm_lockres_set_refmap_bit(from, res); 1770 + dlm_lockres_set_refmap_bit(dlm, res, from); 1778 1771 spin_unlock(&res->spinlock); 1779 1772 added++; 1780 1773 break; ··· 1972 1965 mlog(0, "%s:%.*s: added lock for node %u, " 1973 1966 "setting refmap bit\n", dlm->name, 1974 1967 res->lockname.len, res->lockname.name, ml->node); 1975 - dlm_lockres_set_refmap_bit(ml->node, res); 1968 + dlm_lockres_set_refmap_bit(dlm, res, ml->node); 1976 1969 added++; 1977 1970 } 1978 1971 spin_unlock(&res->spinlock); ··· 2091 2084 2092 2085 list_for_each_entry_safe(res, next, &dlm->reco.resources, recovering) { 2093 2086 if (res->owner == dead_node) { 2087 + mlog(0, "%s: res %.*s, Changing owner from %u to %u\n", 2088 + dlm->name, res->lockname.len, res->lockname.name, 2089 + res->owner, new_master); 2094 2090 list_del_init(&res->recovering); 2095 2091 spin_lock(&res->spinlock); 2096 2092 /* new_master has our reference from ··· 2115 2105 for (i = 0; i < DLM_HASH_BUCKETS; i++) { 2116 2106 bucket = dlm_lockres_hash(dlm, i); 2117 2107 hlist_for_each_entry(res, hash_iter, bucket, hash_node) { 2118 - if (res->state & DLM_LOCK_RES_RECOVERING) { 2119 - if (res->owner == dead_node) { 2120 - mlog(0, "(this=%u) res %.*s owner=%u " 2121 - "was not on recovering list, but " 2122 - "clearing state anyway\n", 2123 - dlm->node_num, res->lockname.len, 2124 
- res->lockname.name, new_master); 2125 - } else if (res->owner == dlm->node_num) { 2126 - mlog(0, "(this=%u) res %.*s owner=%u " 2127 - "was not on recovering list, " 2128 - "owner is THIS node, clearing\n", 2129 - dlm->node_num, res->lockname.len, 2130 - res->lockname.name, new_master); 2131 - } else 2132 - continue; 2108 + if (!(res->state & DLM_LOCK_RES_RECOVERING)) 2109 + continue; 2133 2110 2134 - if (!list_empty(&res->recovering)) { 2135 - mlog(0, "%s:%.*s: lockres was " 2136 - "marked RECOVERING, owner=%u\n", 2137 - dlm->name, res->lockname.len, 2138 - res->lockname.name, res->owner); 2139 - list_del_init(&res->recovering); 2140 - dlm_lockres_put(res); 2141 - } 2142 - spin_lock(&res->spinlock); 2143 - /* new_master has our reference from 2144 - * the lock state sent during recovery */ 2145 - dlm_change_lockres_owner(dlm, res, new_master); 2146 - res->state &= ~DLM_LOCK_RES_RECOVERING; 2147 - if (__dlm_lockres_has_locks(res)) 2148 - __dlm_dirty_lockres(dlm, res); 2149 - spin_unlock(&res->spinlock); 2150 - wake_up(&res->wq); 2111 + if (res->owner != dead_node && 2112 + res->owner != dlm->node_num) 2113 + continue; 2114 + 2115 + if (!list_empty(&res->recovering)) { 2116 + list_del_init(&res->recovering); 2117 + dlm_lockres_put(res); 2151 2118 } 2119 + 2120 + /* new_master has our reference from 2121 + * the lock state sent during recovery */ 2122 + mlog(0, "%s: res %.*s, Changing owner from %u to %u\n", 2123 + dlm->name, res->lockname.len, res->lockname.name, 2124 + res->owner, new_master); 2125 + spin_lock(&res->spinlock); 2126 + dlm_change_lockres_owner(dlm, res, new_master); 2127 + res->state &= ~DLM_LOCK_RES_RECOVERING; 2128 + if (__dlm_lockres_has_locks(res)) 2129 + __dlm_dirty_lockres(dlm, res); 2130 + spin_unlock(&res->spinlock); 2131 + wake_up(&res->wq); 2152 2132 } 2153 2133 } 2154 2134 } ··· 2252 2252 res->lockname.len, res->lockname.name, freed, dead_node); 2253 2253 __dlm_print_one_lock_resource(res); 2254 2254 } 2255 - 
dlm_lockres_clear_refmap_bit(dead_node, res); 2255 + dlm_lockres_clear_refmap_bit(dlm, res, dead_node); 2256 2256 } else if (test_bit(dead_node, res->refmap)) { 2257 2257 mlog(0, "%s:%.*s: dead node %u had a ref, but had " 2258 2258 "no locks and had not purged before dying\n", dlm->name, 2259 2259 res->lockname.len, res->lockname.name, dead_node); 2260 - dlm_lockres_clear_refmap_bit(dead_node, res); 2260 + dlm_lockres_clear_refmap_bit(dlm, res, dead_node); 2261 2261 } 2262 2262 2263 2263 /* do not kick thread yet */ ··· 2324 2324 dlm_revalidate_lvb(dlm, res, dead_node); 2325 2325 if (res->owner == dead_node) { 2326 2326 if (res->state & DLM_LOCK_RES_DROPPING_REF) { 2327 - mlog(ML_NOTICE, "Ignore %.*s for " 2327 + mlog(ML_NOTICE, "%s: res %.*s, Skip " 2328 2328 "recovery as it is being freed\n", 2329 - res->lockname.len, 2329 + dlm->name, res->lockname.len, 2330 2330 res->lockname.name); 2331 2331 } else 2332 2332 dlm_move_lockres_to_recovery_list(dlm,
+8 -8
fs/ocfs2/dlm/dlmthread.c
··· 94 94 { 95 95 int bit; 96 96 97 + assert_spin_locked(&res->spinlock); 98 + 97 99 if (__dlm_lockres_has_locks(res)) 100 + return 0; 101 + 102 + /* Locks are in the process of being created */ 103 + if (res->inflight_locks) 98 104 return 0; 99 105 100 106 if (!list_empty(&res->dirty) || res->state & DLM_LOCK_RES_DIRTY) ··· 109 103 if (res->state & DLM_LOCK_RES_RECOVERING) 110 104 return 0; 111 105 106 + /* Another node has this resource with this node as the master */ 112 107 bit = find_next_bit(res->refmap, O2NM_MAX_NODES, 0); 113 108 if (bit < O2NM_MAX_NODES) 114 109 return 0; 115 110 116 - /* 117 - * since the bit for dlm->node_num is not set, inflight_locks better 118 - * be zero 119 - */ 120 - BUG_ON(res->inflight_locks != 0); 121 111 return 1; 122 112 } 123 113 ··· 187 185 /* clear our bit from the master's refmap, ignore errors */ 188 186 ret = dlm_drop_lockres_ref(dlm, res); 189 187 if (ret < 0) { 190 - mlog(ML_ERROR, "%s: deref %.*s failed %d\n", dlm->name, 191 - res->lockname.len, res->lockname.name, ret); 192 188 if (!dlm_is_host_down(ret)) 193 189 BUG(); 194 190 } ··· 209 209 BUG(); 210 210 } 211 211 212 - __dlm_unhash_lockres(res); 212 + __dlm_unhash_lockres(dlm, res); 213 213 214 214 /* lockres is not in the hash now. drop the flag and wake up 215 215 * any processes waiting in dlm_get_lock_resource. */
+15 -6
fs/ocfs2/dlmglue.c
··· 1692 1692 mlog(0, "inode %llu take PRMODE open lock\n", 1693 1693 (unsigned long long)OCFS2_I(inode)->ip_blkno); 1694 1694 1695 - if (ocfs2_mount_local(osb)) 1695 + if (ocfs2_is_hard_readonly(osb) || ocfs2_mount_local(osb)) 1696 1696 goto out; 1697 1697 1698 1698 lockres = &OCFS2_I(inode)->ip_open_lockres; ··· 1717 1717 mlog(0, "inode %llu try to take %s open lock\n", 1718 1718 (unsigned long long)OCFS2_I(inode)->ip_blkno, 1719 1719 write ? "EXMODE" : "PRMODE"); 1720 + 1721 + if (ocfs2_is_hard_readonly(osb)) { 1722 + if (write) 1723 + status = -EROFS; 1724 + goto out; 1725 + } 1720 1726 1721 1727 if (ocfs2_mount_local(osb)) 1722 1728 goto out; ··· 2304 2298 if (ocfs2_is_hard_readonly(osb)) { 2305 2299 if (ex) 2306 2300 status = -EROFS; 2307 - goto bail; 2301 + goto getbh; 2308 2302 } 2309 2303 2310 2304 if (ocfs2_mount_local(osb)) ··· 2362 2356 mlog_errno(status); 2363 2357 goto bail; 2364 2358 } 2365 - 2359 + getbh: 2366 2360 if (ret_bh) { 2367 2361 status = ocfs2_assign_bh(inode, ret_bh, local_bh); 2368 2362 if (status < 0) { ··· 2634 2628 2635 2629 BUG_ON(!dl); 2636 2630 2637 - if (ocfs2_is_hard_readonly(osb)) 2638 - return -EROFS; 2631 + if (ocfs2_is_hard_readonly(osb)) { 2632 + if (ex) 2633 + return -EROFS; 2634 + return 0; 2635 + } 2639 2636 2640 2637 if (ocfs2_mount_local(osb)) 2641 2638 return 0; ··· 2656 2647 struct ocfs2_dentry_lock *dl = dentry->d_fsdata; 2657 2648 struct ocfs2_super *osb = OCFS2_SB(dentry->d_sb); 2658 2649 2659 - if (!ocfs2_mount_local(osb)) 2650 + if (!ocfs2_is_hard_readonly(osb) && !ocfs2_mount_local(osb)) 2660 2651 ocfs2_cluster_unlock(osb, &dl->dl_lockres, level); 2661 2652 } 2662 2653
+96
fs/ocfs2/extent_map.c
··· 832 832 return ret; 833 833 } 834 834 835 + int ocfs2_seek_data_hole_offset(struct file *file, loff_t *offset, int origin) 836 + { 837 + struct inode *inode = file->f_mapping->host; 838 + int ret; 839 + unsigned int is_last = 0, is_data = 0; 840 + u16 cs_bits = OCFS2_SB(inode->i_sb)->s_clustersize_bits; 841 + u32 cpos, cend, clen, hole_size; 842 + u64 extoff, extlen; 843 + struct buffer_head *di_bh = NULL; 844 + struct ocfs2_extent_rec rec; 845 + 846 + BUG_ON(origin != SEEK_DATA && origin != SEEK_HOLE); 847 + 848 + ret = ocfs2_inode_lock(inode, &di_bh, 0); 849 + if (ret) { 850 + mlog_errno(ret); 851 + goto out; 852 + } 853 + 854 + down_read(&OCFS2_I(inode)->ip_alloc_sem); 855 + 856 + if (*offset >= inode->i_size) { 857 + ret = -ENXIO; 858 + goto out_unlock; 859 + } 860 + 861 + if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL) { 862 + if (origin == SEEK_HOLE) 863 + *offset = inode->i_size; 864 + goto out_unlock; 865 + } 866 + 867 + clen = 0; 868 + cpos = *offset >> cs_bits; 869 + cend = ocfs2_clusters_for_bytes(inode->i_sb, inode->i_size); 870 + 871 + while (cpos < cend && !is_last) { 872 + ret = ocfs2_get_clusters_nocache(inode, di_bh, cpos, &hole_size, 873 + &rec, &is_last); 874 + if (ret) { 875 + mlog_errno(ret); 876 + goto out_unlock; 877 + } 878 + 879 + extoff = cpos; 880 + extoff <<= cs_bits; 881 + 882 + if (rec.e_blkno == 0ULL) { 883 + clen = hole_size; 884 + is_data = 0; 885 + } else { 886 + clen = le16_to_cpu(rec.e_leaf_clusters) - 887 + (cpos - le32_to_cpu(rec.e_cpos)); 888 + is_data = (rec.e_flags & OCFS2_EXT_UNWRITTEN) ? 
0 : 1; 889 + } 890 + 891 + if ((!is_data && origin == SEEK_HOLE) || 892 + (is_data && origin == SEEK_DATA)) { 893 + if (extoff > *offset) 894 + *offset = extoff; 895 + goto out_unlock; 896 + } 897 + 898 + if (!is_last) 899 + cpos += clen; 900 + } 901 + 902 + if (origin == SEEK_HOLE) { 903 + extoff = cpos; 904 + extoff <<= cs_bits; 905 + extlen = clen; 906 + extlen <<= cs_bits; 907 + 908 + if ((extoff + extlen) > inode->i_size) 909 + extlen = inode->i_size - extoff; 910 + extoff += extlen; 911 + if (extoff > *offset) 912 + *offset = extoff; 913 + goto out_unlock; 914 + } 915 + 916 + ret = -ENXIO; 917 + 918 + out_unlock: 919 + 920 + brelse(di_bh); 921 + 922 + up_read(&OCFS2_I(inode)->ip_alloc_sem); 923 + 924 + ocfs2_inode_unlock(inode, 0); 925 + out: 926 + if (ret && ret != -ENXIO) 927 + ret = -ENXIO; 928 + return ret; 929 + } 930 + 835 931 int ocfs2_read_virt_blocks(struct inode *inode, u64 v_block, int nr, 836 932 struct buffer_head *bhs[], int flags, 837 933 int (*validate)(struct super_block *sb,
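The new ocfs2_seek_data_hole_offset() above wires ocfs2 into the lseek(2) SEEK_DATA/SEEK_HOLE interface. A minimal userspace sketch of the contract it implements (the helper name and temp path are mine, not from the patch): on a fully-allocated file with no explicit hole, SEEK_HOLE reports the implicit hole at end-of-file, i.e. the file size.

```c
#define _GNU_SOURCE          /* for SEEK_HOLE / SEEK_DATA */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write `size` bytes of real data to a fresh file and return what
 * lseek(fd, 0, SEEK_HOLE) reports; with no explicit hole this is the
 * implicit hole at EOF, i.e. `size` itself. Returns -1 on I/O error. */
long first_hole_offset(const char *path, long size)
{
    int fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0600);
    if (fd < 0)
        return -1;

    char *buf = malloc(size);
    memset(buf, 'x', size);              /* real data, not a hole */
    long off = -1;
    if (write(fd, buf, size) == size)
        off = lseek(fd, 0, SEEK_HOLE);

    free(buf);
    close(fd);
    unlink(path);
    return off;
}
```

This mirrors the tail of the kernel loop above: when no hole is found before EOF, the reported offset is clamped to inode->i_size.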
+2
fs/ocfs2/extent_map.h
··· 53 53 int ocfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 54 54 u64 map_start, u64 map_len); 55 55 56 + int ocfs2_seek_data_hole_offset(struct file *file, loff_t *offset, int origin); 57 + 56 58 int ocfs2_xattr_get_clusters(struct inode *inode, u32 v_cluster, 57 59 u32 *p_cluster, u32 *num_clusters, 58 60 struct ocfs2_extent_list *el,
+94 -2
fs/ocfs2/file.c
··· 1950 1950 if (ret < 0) 1951 1951 mlog_errno(ret); 1952 1952 1953 + if (file->f_flags & O_SYNC) 1954 + handle->h_sync = 1; 1955 + 1953 1956 ocfs2_commit_trans(osb, handle); 1954 1957 1955 1958 out_inode_unlock: ··· 2053 2050 } 2054 2051 out: 2055 2052 return ret; 2053 + } 2054 + 2055 + static void ocfs2_aiodio_wait(struct inode *inode) 2056 + { 2057 + wait_queue_head_t *wq = ocfs2_ioend_wq(inode); 2058 + 2059 + wait_event(*wq, (atomic_read(&OCFS2_I(inode)->ip_unaligned_aio) == 0)); 2060 + } 2061 + 2062 + static int ocfs2_is_io_unaligned(struct inode *inode, size_t count, loff_t pos) 2063 + { 2064 + int blockmask = inode->i_sb->s_blocksize - 1; 2065 + loff_t final_size = pos + count; 2066 + 2067 + if ((pos & blockmask) || (final_size & blockmask)) 2068 + return 1; 2069 + return 0; 2056 2070 } 2057 2071 2058 2072 static int ocfs2_prepare_inode_for_refcount(struct inode *inode, ··· 2250 2230 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb); 2251 2231 int full_coherency = !(osb->s_mount_opt & 2252 2232 OCFS2_MOUNT_COHERENCY_BUFFERED); 2233 + int unaligned_dio = 0; 2253 2234 2254 2235 trace_ocfs2_file_aio_write(inode, file, file->f_path.dentry, 2255 2236 (unsigned long long)OCFS2_I(inode)->ip_blkno, ··· 2318 2297 goto out; 2319 2298 } 2320 2299 2300 + if (direct_io && !is_sync_kiocb(iocb)) 2301 + unaligned_dio = ocfs2_is_io_unaligned(inode, iocb->ki_left, 2302 + *ppos); 2303 + 2321 2304 /* 2322 2305 * We can't complete the direct I/O as requested, fall back to 2323 2306 * buffered I/O. ··· 2334 2309 2335 2310 direct_io = 0; 2336 2311 goto relock; 2312 + } 2313 + 2314 + if (unaligned_dio) { 2315 + /* 2316 + * Wait on previous unaligned aio to complete before 2317 + * proceeding. 
2318 + */ 2319 + ocfs2_aiodio_wait(inode); 2320 + 2321 + /* Mark the iocb as needing a decrement in ocfs2_dio_end_io */ 2322 + atomic_inc(&OCFS2_I(inode)->ip_unaligned_aio); 2323 + ocfs2_iocb_set_unaligned_aio(iocb); 2337 2324 } 2338 2325 2339 2326 /* ··· 2419 2382 if ((ret == -EIOCBQUEUED) || (!ocfs2_iocb_is_rw_locked(iocb))) { 2420 2383 rw_level = -1; 2421 2384 have_alloc_sem = 0; 2385 + unaligned_dio = 0; 2422 2386 } 2387 + 2388 + if (unaligned_dio) 2389 + atomic_dec(&OCFS2_I(inode)->ip_unaligned_aio); 2423 2390 2424 2391 out: 2425 2392 if (rw_level != -1) ··· 2632 2591 return ret; 2633 2592 } 2634 2593 2594 + /* Refer generic_file_llseek_unlocked() */ 2595 + static loff_t ocfs2_file_llseek(struct file *file, loff_t offset, int origin) 2596 + { 2597 + struct inode *inode = file->f_mapping->host; 2598 + int ret = 0; 2599 + 2600 + mutex_lock(&inode->i_mutex); 2601 + 2602 + switch (origin) { 2603 + case SEEK_SET: 2604 + break; 2605 + case SEEK_END: 2606 + offset += inode->i_size; 2607 + break; 2608 + case SEEK_CUR: 2609 + if (offset == 0) { 2610 + offset = file->f_pos; 2611 + goto out; 2612 + } 2613 + offset += file->f_pos; 2614 + break; 2615 + case SEEK_DATA: 2616 + case SEEK_HOLE: 2617 + ret = ocfs2_seek_data_hole_offset(file, &offset, origin); 2618 + if (ret) 2619 + goto out; 2620 + break; 2621 + default: 2622 + ret = -EINVAL; 2623 + goto out; 2624 + } 2625 + 2626 + if (offset < 0 && !(file->f_mode & FMODE_UNSIGNED_OFFSET)) 2627 + ret = -EINVAL; 2628 + if (!ret && offset > inode->i_sb->s_maxbytes) 2629 + ret = -EINVAL; 2630 + if (ret) 2631 + goto out; 2632 + 2633 + if (offset != file->f_pos) { 2634 + file->f_pos = offset; 2635 + file->f_version = 0; 2636 + } 2637 + 2638 + out: 2639 + mutex_unlock(&inode->i_mutex); 2640 + if (ret) 2641 + return ret; 2642 + return offset; 2643 + } 2644 + 2635 2645 const struct inode_operations ocfs2_file_iops = { 2636 2646 .setattr = ocfs2_setattr, 2637 2647 .getattr = ocfs2_getattr, ··· 2707 2615 * ocfs2_fops_no_plocks and 
ocfs2_dops_no_plocks! 2708 2616 */ 2709 2617 const struct file_operations ocfs2_fops = { 2710 - .llseek = generic_file_llseek, 2618 + .llseek = ocfs2_file_llseek, 2711 2619 .read = do_sync_read, 2712 2620 .write = do_sync_write, 2713 2621 .mmap = ocfs2_mmap, ··· 2755 2663 * the cluster. 2756 2664 */ 2757 2665 const struct file_operations ocfs2_fops_no_plocks = { 2758 - .llseek = generic_file_llseek, 2666 + .llseek = ocfs2_file_llseek, 2759 2667 .read = do_sync_read, 2760 2668 .write = do_sync_write, 2761 2669 .mmap = ocfs2_mmap,
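The unaligned-AIO serialization added above keys off a simple mask test, ocfs2_is_io_unaligned(). A standalone sketch of that check, assuming a power-of-two block size (the function name is mine):

```c
#include <stdint.h>

/* An I/O is "unaligned" when its start or its end falls inside a
 * filesystem block; blocksize must be a power of two so that
 * (blocksize - 1) is a valid mask of the in-block offset bits. */
int io_is_unaligned(uint64_t pos, uint64_t count, uint64_t blocksize)
{
    uint64_t blockmask = blocksize - 1;
    uint64_t final_size = pos + count;

    return (pos & blockmask) || (final_size & blockmask);
}
```

Only I/Os for which this returns nonzero take the ip_unaligned_aio wait/increment path in ocfs2_file_aio_write().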
+1 -1
fs/ocfs2/inode.c
··· 951 951 trace_ocfs2_cleanup_delete_inode( 952 952 (unsigned long long)OCFS2_I(inode)->ip_blkno, sync_data); 953 953 if (sync_data) 954 - write_inode_now(inode, 1); 954 + filemap_write_and_wait(inode->i_mapping); 955 955 truncate_inode_pages(&inode->i_data, 0); 956 956 } 957 957
+3
fs/ocfs2/inode.h
··· 43 43 /* protects extended attribute changes on this inode */ 44 44 struct rw_semaphore ip_xattr_sem; 45 45 46 + /* Number of outstanding AIO's which are not page aligned */ 47 + atomic_t ip_unaligned_aio; 48 + 46 49 /* These fields are protected by ip_lock */ 47 50 spinlock_t ip_lock; 48 51 u32 ip_open_count;
+6 -5
fs/ocfs2/ioctl.c
··· 122 122 if ((oldflags & OCFS2_IMMUTABLE_FL) || ((flags ^ oldflags) & 123 123 (OCFS2_APPEND_FL | OCFS2_IMMUTABLE_FL))) { 124 124 if (!capable(CAP_LINUX_IMMUTABLE)) 125 - goto bail_unlock; 125 + goto bail_commit; 126 126 } 127 127 128 128 ocfs2_inode->ip_attr = flags; ··· 132 132 if (status < 0) 133 133 mlog_errno(status); 134 134 135 + bail_commit: 135 136 ocfs2_commit_trans(osb, handle); 136 137 bail_unlock: 137 138 ocfs2_inode_unlock(inode, 1); ··· 382 381 if (!oifi) { 383 382 status = -ENOMEM; 384 383 mlog_errno(status); 385 - goto bail; 384 + goto out_err; 386 385 } 387 386 388 387 if (o2info_from_user(*oifi, req)) ··· 432 431 o2info_set_request_error(&oifi->ifi_req, req); 433 432 434 433 kfree(oifi); 435 - 434 + out_err: 436 435 return status; 437 436 } 438 437 ··· 667 666 if (!oiff) { 668 667 status = -ENOMEM; 669 668 mlog_errno(status); 670 - goto bail; 669 + goto out_err; 671 670 } 672 671 673 672 if (o2info_from_user(*oiff, req)) ··· 717 716 o2info_set_request_error(&oiff->iff_req, req); 718 717 719 718 kfree(oiff); 720 - 719 + out_err: 721 720 return status; 722 721 } 723 722
+20 -3
fs/ocfs2/journal.c
··· 1544 1544 /* we need to run complete recovery for offline orphan slots */ 1545 1545 ocfs2_replay_map_set_state(osb, REPLAY_NEEDED); 1546 1546 1547 - mlog(ML_NOTICE, "Recovering node %d from slot %d on device (%u,%u)\n", 1548 - node_num, slot_num, 1549 - MAJOR(osb->sb->s_dev), MINOR(osb->sb->s_dev)); 1547 + printk(KERN_NOTICE "ocfs2: Begin replay journal (node %d, slot %d) on "\ 1548 + "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev), 1549 + MINOR(osb->sb->s_dev)); 1550 1550 1551 1551 OCFS2_I(inode)->ip_clusters = le32_to_cpu(fe->i_clusters); 1552 1552 ··· 1601 1601 1602 1602 jbd2_journal_destroy(journal); 1603 1603 1604 + printk(KERN_NOTICE "ocfs2: End replay journal (node %d, slot %d) on "\ 1605 + "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev), 1606 + MINOR(osb->sb->s_dev)); 1604 1607 done: 1605 1608 /* drop the lock on this nodes journal */ 1606 1609 if (got_lock) ··· 1810 1807 * ocfs2_queue_orphan_scan calls ocfs2_queue_recovery_completion for 1811 1808 * every slot, queuing a recovery of the slot on the ocfs2_wq thread. This 1812 1809 * is done to catch any orphans that are left over in orphan directories. 1810 + * 1811 + * It scans all slots, even ones that are in use. It does so to handle the 1812 + * case described below: 1813 + * 1814 + * Node 1 has an inode it was using. The dentry went away due to memory 1815 + * pressure. Node 1 closes the inode, but it's on the free list. The node 1816 + * has the open lock. 1817 + * Node 2 unlinks the inode. It grabs the dentry lock to notify others, 1818 + * but node 1 has no dentry and doesn't get the message. It trylocks the 1819 + * open lock, sees that another node has a PR, and does nothing. 1820 + * Later node 2 runs its orphan dir. It igets the inode, trylocks the 1821 + * open lock, sees the PR still, and does nothing. 1822 + * Basically, we have to trigger an orphan iput on node 1. The only way 1823 + * for this to happen is if node 1 runs node 2's orphan dir. 
1813 1824 * 1814 1825 * ocfs2_queue_orphan_scan gets called every ORPHAN_SCAN_SCHEDULE_TIMEOUT 1815 1826 * seconds. It gets an EX lock on os_lockres and checks sequence number
+3 -2
fs/ocfs2/journal.h
··· 441 441 #define OCFS2_SIMPLE_DIR_EXTEND_CREDITS (2) 442 442 443 443 /* file update (nlink, etc) + directory mtime/ctime + dir entry block + quota 444 - * update on dir + index leaf + dx root update for free list */ 444 + * update on dir + index leaf + dx root update for free list + 445 + * previous dirblock update in the free list */ 445 446 static inline int ocfs2_link_credits(struct super_block *sb) 446 447 { 447 - return 2*OCFS2_INODE_UPDATE_CREDITS + 3 + 448 + return 2*OCFS2_INODE_UPDATE_CREDITS + 4 + 448 449 ocfs2_quota_trans_credits(sb); 449 450 } 450 451
+24 -29
fs/ocfs2/mmap.c
··· 61 61 static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh, 62 62 struct page *page) 63 63 { 64 - int ret; 64 + int ret = VM_FAULT_NOPAGE; 65 65 struct inode *inode = file->f_path.dentry->d_inode; 66 66 struct address_space *mapping = inode->i_mapping; 67 67 loff_t pos = page_offset(page); ··· 71 71 void *fsdata; 72 72 loff_t size = i_size_read(inode); 73 73 74 - /* 75 - * Another node might have truncated while we were waiting on 76 - * cluster locks. 77 - * We don't check size == 0 before the shift. This is borrowed 78 - * from do_generic_file_read. 79 - */ 80 74 last_index = (size - 1) >> PAGE_CACHE_SHIFT; 81 - if (unlikely(!size || page->index > last_index)) { 82 - ret = -EINVAL; 83 - goto out; 84 - } 85 75 86 76 /* 87 - * The i_size check above doesn't catch the case where nodes 88 - * truncated and then re-extended the file. We'll re-check the 89 - * page mapping after taking the page lock inside of 90 - * ocfs2_write_begin_nolock(). 77 + * There are cases that lead to the page no longer belonging to the 78 + * mapping. 79 + * 1) pagecache truncates locally due to memory pressure. 80 + * 2) pagecache truncates when another is taking EX lock against 81 + * inode lock. see ocfs2_data_convert_worker. 82 + * 83 + * The i_size check doesn't catch the case where nodes truncated and 84 + * then re-extended the file. We'll re-check the page mapping after 85 + * taking the page lock inside of ocfs2_write_begin_nolock(). 86 + * 87 + * Let VM retry with these cases. 91 88 */ 92 - if (!PageUptodate(page) || page->mapping != inode->i_mapping) { 93 - /* 94 - * the page has been umapped in ocfs2_data_downconvert_worker. 95 - * So return 0 here and let VFS retry.
96 - */ 97 - ret = 0; 89 + if ((page->mapping != inode->i_mapping) || 90 + (!PageUptodate(page)) || 91 + (page_offset(page) >= size)) 98 92 goto out; 99 - } 100 93 101 94 /* 102 95 * Call ocfs2_write_begin() and ocfs2_write_end() to take ··· 109 116 if (ret) { 110 117 if (ret != -ENOSPC) 111 118 mlog_errno(ret); 119 + if (ret == -ENOMEM) 120 + ret = VM_FAULT_OOM; 121 + else 122 + ret = VM_FAULT_SIGBUS; 112 123 goto out; 113 124 } 114 125 115 - ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page, 116 - fsdata); 117 - if (ret < 0) { 118 - mlog_errno(ret); 126 + if (!locked_page) { 127 + ret = VM_FAULT_NOPAGE; 119 128 goto out; 120 129 } 130 + ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page, 131 + fsdata); 121 132 BUG_ON(ret != len); 122 - ret = 0; 133 + ret = VM_FAULT_LOCKED; 123 134 out: 124 135 return ret; 125 136 } ··· 165 168 166 169 out: 167 170 ocfs2_unblock_signals(&oldset); 168 - if (ret) 169 - ret = VM_FAULT_SIGBUS; 170 171 return ret; 171 172 } 172 173
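The mmap.c hunk above converts __ocfs2_page_mkwrite() from returning negative errnos to the VM_FAULT_* codes the fault path expects. A userspace sketch of that mapping (the numeric values mirror the kernel's constants of this era, but are hard-coded here as an assumption):

```c
#include <errno.h>

/* Stand-ins for the kernel's VM_FAULT_* codes (include/linux/mm.h, ~v3.2). */
enum {
    VM_FAULT_OOM    = 0x0001,
    VM_FAULT_SIGBUS = 0x0002,
    VM_FAULT_NOPAGE = 0x0100,   /* fault handler installed the pte itself */
    VM_FAULT_LOCKED = 0x0200    /* page returned locked */
};

/* Map a write_begin/write_end result onto a ->page_mkwrite() return:
 * success leaves the page locked, -ENOMEM becomes OOM, anything else
 * is reported to the faulting task as SIGBUS. */
int errno_to_vm_fault(int err)
{
    if (err == 0)
        return VM_FAULT_LOCKED;
    if (err == -ENOMEM)
        return VM_FAULT_OOM;
    return VM_FAULT_SIGBUS;
}
```

The patch also removes the blanket `ret = VM_FAULT_SIGBUS` in the caller, so transient cases (page stolen from the mapping) now return VM_FAULT_NOPAGE and let the VM retry instead of killing the task.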
+1 -1
fs/ocfs2/move_extents.c
··· 745 745 */ 746 746 ocfs2_probe_alloc_group(inode, gd_bh, &goal_bit, len, move_max_hop, 747 747 new_phys_cpos); 748 - if (!new_phys_cpos) { 748 + if (!*new_phys_cpos) { 749 749 ret = -ENOSPC; 750 750 goto out_commit; 751 751 }
+49 -2
fs/ocfs2/ocfs2.h
··· 836 836 837 837 static inline void _ocfs2_set_bit(unsigned int bit, unsigned long *bitmap) 838 838 { 839 - __test_and_set_bit_le(bit, bitmap); 839 + __set_bit_le(bit, bitmap); 840 840 } 841 841 #define ocfs2_set_bit(bit, addr) _ocfs2_set_bit((bit), (unsigned long *)(addr)) 842 842 843 843 static inline void _ocfs2_clear_bit(unsigned int bit, unsigned long *bitmap) 844 844 { 845 - __test_and_clear_bit_le(bit, bitmap); 845 + __clear_bit_le(bit, bitmap); 846 846 } 847 847 #define ocfs2_clear_bit(bit, addr) _ocfs2_clear_bit((bit), (unsigned long *)(addr)) 848 848 849 849 #define ocfs2_test_bit test_bit_le 850 850 #define ocfs2_find_next_zero_bit find_next_zero_bit_le 851 851 #define ocfs2_find_next_bit find_next_bit_le 852 + 853 + static inline void *correct_addr_and_bit_unaligned(int *bit, void *addr) 854 + { 855 + #if BITS_PER_LONG == 64 856 + *bit += ((unsigned long) addr & 7UL) << 3; 857 + addr = (void *) ((unsigned long) addr & ~7UL); 858 + #elif BITS_PER_LONG == 32 859 + *bit += ((unsigned long) addr & 3UL) << 3; 860 + addr = (void *) ((unsigned long) addr & ~3UL); 861 + #else 862 + #error "how many bits you are?!" 
863 + #endif 864 + return addr; 865 + } 866 + 867 + static inline void ocfs2_set_bit_unaligned(int bit, void *bitmap) 868 + { 869 + bitmap = correct_addr_and_bit_unaligned(&bit, bitmap); 870 + ocfs2_set_bit(bit, bitmap); 871 + } 872 + 873 + static inline void ocfs2_clear_bit_unaligned(int bit, void *bitmap) 874 + { 875 + bitmap = correct_addr_and_bit_unaligned(&bit, bitmap); 876 + ocfs2_clear_bit(bit, bitmap); 877 + } 878 + 879 + static inline int ocfs2_test_bit_unaligned(int bit, void *bitmap) 880 + { 881 + bitmap = correct_addr_and_bit_unaligned(&bit, bitmap); 882 + return ocfs2_test_bit(bit, bitmap); 883 + } 884 + 885 + static inline int ocfs2_find_next_zero_bit_unaligned(void *bitmap, int max, 886 + int start) 887 + { 888 + int fix = 0, ret, tmpmax; 889 + bitmap = correct_addr_and_bit_unaligned(&fix, bitmap); 890 + tmpmax = max + fix; 891 + start += fix; 892 + 893 + ret = ocfs2_find_next_zero_bit(bitmap, tmpmax, start) - fix; 894 + if (ret > max) 895 + return max; 896 + return ret; 897 + } 898 + 852 899 #endif /* OCFS2_H */ 853 900
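The correct_addr_and_bit_unaligned() helper above makes word-based bitmap ops safe on dqc_bitmap fields that are not long-aligned: it rounds the base pointer down and folds the dropped bytes into the bit index. A userspace mirror of the 64-bit case (byte-level stores stand in for __set_bit_le(); helper names are mine):

```c
#include <stdint.h>

/* Round `addr` down to an 8-byte boundary and compensate by growing the
 * bit index: each byte of misalignment contributes 8 extra bits. */
static void *align_bitmap(int *bit, void *addr)
{
    *bit += ((uintptr_t)addr & 7UL) << 3;
    return (void *)((uintptr_t)addr & ~7UL);
}

static void set_bit_unaligned(int bit, void *bitmap)
{
    bitmap = align_bitmap(&bit, bitmap);
    /* little-endian bit order, expressed with byte stores */
    ((unsigned char *)bitmap)[bit >> 3] |= 1u << (bit & 7);
}

/* Demo: set `bit` through a pointer misaligned by `off` bytes and report
 * which byte of the aligned buffer actually received it. */
int byte_holding_bit(int bit, int off)
{
    unsigned long words[4] = {0};
    unsigned char *buf = (unsigned char *)words;

    set_bit_unaligned(bit, buf + off);
    for (int i = 0; i < (int)sizeof(words); i++)
        if (buf[i])
            return i;
    return -1;
}
```

Setting bit 0 through a pointer misaligned by 3 bytes lands in byte 3 of the aligned buffer, exactly as if bit 24 had been set through the aligned base.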
+14 -9
fs/ocfs2/quota_local.c
··· 404 404 int status = 0; 405 405 struct ocfs2_quota_recovery *rec; 406 406 407 - mlog(ML_NOTICE, "Beginning quota recovery in slot %u\n", slot_num); 407 + printk(KERN_NOTICE "ocfs2: Beginning quota recovery on device (%s) for " 408 + "slot %u\n", osb->dev_str, slot_num); 409 + 408 410 rec = ocfs2_alloc_quota_recovery(); 409 411 if (!rec) 410 412 return ERR_PTR(-ENOMEM); ··· 551 549 goto out_commit; 552 550 } 553 551 lock_buffer(qbh); 554 - WARN_ON(!ocfs2_test_bit(bit, dchunk->dqc_bitmap)); 555 - ocfs2_clear_bit(bit, dchunk->dqc_bitmap); 552 + WARN_ON(!ocfs2_test_bit_unaligned(bit, dchunk->dqc_bitmap)); 553 + ocfs2_clear_bit_unaligned(bit, dchunk->dqc_bitmap); 556 554 le32_add_cpu(&dchunk->dqc_free, 1); 557 555 unlock_buffer(qbh); 558 556 ocfs2_journal_dirty(handle, qbh); ··· 598 596 struct inode *lqinode; 599 597 unsigned int flags; 600 598 601 - mlog(ML_NOTICE, "Finishing quota recovery in slot %u\n", slot_num); 599 + printk(KERN_NOTICE "ocfs2: Finishing quota recovery on device (%s) for " 600 + "slot %u\n", osb->dev_str, slot_num); 601 + 602 602 mutex_lock(&sb_dqopt(sb)->dqonoff_mutex); 603 603 for (type = 0; type < MAXQUOTAS; type++) { 604 604 if (list_empty(&(rec->r_list[type]))) ··· 616 612 /* Someone else is holding the lock? Then he must be 617 613 * doing the recovery. Just skip the file... */ 618 614 if (status == -EAGAIN) { 619 - mlog(ML_NOTICE, "skipping quota recovery for slot %d " 620 - "because quota file is locked.\n", slot_num); 615 + printk(KERN_NOTICE "ocfs2: Skipping quota recovery on " 616 + "device (%s) for slot %d because quota file is " 617 + "locked.\n", osb->dev_str, slot_num); 621 618 status = 0; 622 619 goto out_put; 623 620 } else if (status < 0) { ··· 949 944 * ol_quota_entries_per_block(sb); 950 945 } 951 946 952 - found = ocfs2_find_next_zero_bit(dchunk->dqc_bitmap, len, 0); 947 + found = ocfs2_find_next_zero_bit_unaligned(dchunk->dqc_bitmap, len, 0); 953 948 /* We failed? 
*/ 954 949 if (found == len) { 955 950 mlog(ML_ERROR, "Did not find empty entry in chunk %d with %u" ··· 1213 1208 struct ocfs2_local_disk_chunk *dchunk; 1214 1209 1215 1210 dchunk = (struct ocfs2_local_disk_chunk *)bh->b_data; 1216 - ocfs2_set_bit(*offset, dchunk->dqc_bitmap); 1211 + ocfs2_set_bit_unaligned(*offset, dchunk->dqc_bitmap); 1217 1212 le32_add_cpu(&dchunk->dqc_free, -1); 1218 1213 } 1219 1214 ··· 1294 1289 (od->dq_chunk->qc_headerbh->b_data); 1295 1290 /* Mark structure as freed */ 1296 1291 lock_buffer(od->dq_chunk->qc_headerbh); 1297 - ocfs2_clear_bit(offset, dchunk->dqc_bitmap); 1292 + ocfs2_clear_bit_unaligned(offset, dchunk->dqc_bitmap); 1298 1293 le32_add_cpu(&dchunk->dqc_free, 1); 1299 1294 unlock_buffer(od->dq_chunk->qc_headerbh); 1300 1295 ocfs2_journal_dirty(handle, od->dq_chunk->qc_headerbh);
+2 -2
fs/ocfs2/slot_map.c
··· 493 493 goto bail; 494 494 } 495 495 } else 496 - mlog(ML_NOTICE, "slot %d is already allocated to this node!\n", 497 - slot); 496 + printk(KERN_INFO "ocfs2: Slot %d on device (%s) was already " 497 + "allocated to this node!\n", slot, osb->dev_str); 498 498 499 499 ocfs2_set_slot(si, slot, osb->node_num); 500 500 osb->slot_num = slot;
+63 -8
fs/ocfs2/stack_o2cb.c
··· 28 28 #include "cluster/masklog.h" 29 29 #include "cluster/nodemanager.h" 30 30 #include "cluster/heartbeat.h" 31 + #include "cluster/tcp.h" 31 32 32 33 #include "stackglue.h" 33 34 ··· 257 256 } 258 257 259 258 /* 259 + * Check if this node is heartbeating and is connected to all other 260 + * heartbeating nodes. 261 + */ 262 + static int o2cb_cluster_check(void) 263 + { 264 + u8 node_num; 265 + int i; 266 + unsigned long hbmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; 267 + unsigned long netmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; 268 + 269 + node_num = o2nm_this_node(); 270 + if (node_num == O2NM_MAX_NODES) { 271 + printk(KERN_ERR "o2cb: This node has not been configured.\n"); 272 + return -EINVAL; 273 + } 274 + 275 + /* 276 + * o2dlm expects o2net sockets to be created. If not, then 277 + * dlm_join_domain() fails with a stack of errors which are both cryptic 278 + * and incomplete. The idea here is to detect upfront whether we have 279 + * managed to connect to all nodes or not. If not, then list the nodes 280 + * to allow the user to check the configuration (incorrect IP, firewall, 281 + * etc.) Yes, this is racy. But its not the end of the world. 282 + */ 283 + #define O2CB_MAP_STABILIZE_COUNT 60 284 + for (i = 0; i < O2CB_MAP_STABILIZE_COUNT; ++i) { 285 + o2hb_fill_node_map(hbmap, sizeof(hbmap)); 286 + if (!test_bit(node_num, hbmap)) { 287 + printk(KERN_ERR "o2cb: %s heartbeat has not been " 288 + "started.\n", (o2hb_global_heartbeat_active() ? 
289 + "Global" : "Local")); 290 + return -EINVAL; 291 + } 292 + o2net_fill_node_map(netmap, sizeof(netmap)); 293 + /* Force set the current node to allow easy compare */ 294 + set_bit(node_num, netmap); 295 + if (!memcmp(hbmap, netmap, sizeof(hbmap))) 296 + return 0; 297 + if (i < O2CB_MAP_STABILIZE_COUNT) 298 + msleep(1000); 299 + } 300 + 301 + printk(KERN_ERR "o2cb: This node could not connect to nodes:"); 302 + i = -1; 303 + while ((i = find_next_bit(hbmap, O2NM_MAX_NODES, 304 + i + 1)) < O2NM_MAX_NODES) { 305 + if (!test_bit(i, netmap)) 306 + printk(" %u", i); 307 + } 308 + printk(".\n"); 309 + 310 + return -ENOTCONN; 311 + } 312 + 313 + /* 260 314 * Called from the dlm when it's about to evict a node. This is how the 261 315 * classic stack signals node death. 262 316 */ ··· 319 263 { 320 264 struct ocfs2_cluster_connection *conn = data; 321 265 322 - mlog(ML_NOTICE, "o2dlm has evicted node %d from group %.*s\n", 323 - node_num, conn->cc_namelen, conn->cc_name); 266 + printk(KERN_NOTICE "o2cb: o2dlm has evicted node %d from domain %.*s\n", 267 + node_num, conn->cc_namelen, conn->cc_name); 324 268 325 269 conn->cc_recovery_handler(node_num, conn->cc_recovery_data); 326 270 } ··· 336 280 BUG_ON(conn == NULL); 337 281 BUG_ON(conn->cc_proto == NULL); 338 282 339 - /* for now we only have one cluster/node, make sure we see it 340 - * in the heartbeat universe */ 341 - if (!o2hb_check_local_node_heartbeating()) { 342 - if (o2hb_global_heartbeat_active()) 343 - mlog(ML_ERROR, "Global heartbeat not started\n"); 344 - rc = -EINVAL; 283 + /* Ensure cluster stack is up and all nodes are connected */ 284 + rc = o2cb_cluster_check(); 285 + if (rc) { 286 + printk(KERN_ERR "o2cb: Cluster check failed. Fix errors " 287 + "before retrying.\n"); 345 288 goto out; 346 289 } 347 290
+16 -9
fs/ocfs2/super.c
··· 54 54 #include "ocfs1_fs_compat.h" 55 55 56 56 #include "alloc.h" 57 + #include "aops.h" 57 58 #include "blockcheck.h" 58 59 #include "dlmglue.h" 59 60 #include "export.h" ··· 1108 1107 1109 1108 ocfs2_set_ro_flag(osb, 1); 1110 1109 1111 - printk(KERN_NOTICE "Readonly device detected. No cluster " 1112 - "services will be utilized for this mount. Recovery " 1113 - "will be skipped.\n"); 1110 + printk(KERN_NOTICE "ocfs2: Readonly device (%s) detected. " 1111 + "Cluster services will not be used for this mount. " 1112 + "Recovery will be skipped.\n", osb->dev_str); 1114 1113 } 1115 1114 1116 1115 if (!ocfs2_is_hard_readonly(osb)) { ··· 1617 1616 return 0; 1618 1617 } 1619 1618 1619 + wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ]; 1620 + 1620 1621 static int __init ocfs2_init(void) 1621 1622 { 1622 - int status; 1623 + int status, i; 1623 1624 1624 1625 ocfs2_print_version(); 1626 + 1627 + for (i = 0; i < OCFS2_IOEND_WQ_HASH_SZ; i++) 1628 + init_waitqueue_head(&ocfs2__ioend_wq[i]); 1625 1629 1626 1630 status = init_ocfs2_uptodate_cache(); 1627 1631 if (status < 0) { ··· 1766 1760 ocfs2_extent_map_init(&oi->vfs_inode); 1767 1761 INIT_LIST_HEAD(&oi->ip_io_markers); 1768 1762 oi->ip_dir_start_lookup = 0; 1769 - 1763 + atomic_set(&oi->ip_unaligned_aio, 0); 1770 1764 init_rwsem(&oi->ip_alloc_sem); 1771 1765 init_rwsem(&oi->ip_xattr_sem); 1772 1766 mutex_init(&oi->ip_io_mutex); ··· 1980 1974 * If we failed before we got a uuid_str yet, we can't stop 1981 1975 * heartbeat. Otherwise, do it. 
1982 1976 */ 1983 - if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str) 1977 + if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str && 1978 + !ocfs2_is_hard_readonly(osb)) 1984 1979 hangup_needed = 1; 1985 1980 1986 1981 if (osb->cconn) ··· 2360 2353 mlog_errno(status); 2361 2354 goto bail; 2362 2355 } 2363 - cleancache_init_shared_fs((char *)&uuid_net_key, sb); 2356 + cleancache_init_shared_fs((char *)&di->id2.i_super.s_uuid, sb); 2364 2357 2365 2358 bail: 2366 2359 return status; ··· 2469 2462 goto finally; 2470 2463 } 2471 2464 } else { 2472 - mlog(ML_NOTICE, "File system was not unmounted cleanly, " 2473 - "recovering volume.\n"); 2465 + printk(KERN_NOTICE "ocfs2: File system on device (%s) was not " 2466 + "unmounted cleanly, recovering it.\n", osb->dev_str); 2474 2467 } 2475 2468 2476 2469 local = ocfs2_mount_local(osb);
+6 -4
fs/ocfs2/xattr.c
··· 2376 2376 } 2377 2377 2378 2378 ret = ocfs2_xattr_value_truncate(inode, vb, 0, &ctxt); 2379 - if (ret < 0) { 2380 - mlog_errno(ret); 2381 - break; 2382 - } 2383 2379 2384 2380 ocfs2_commit_trans(osb, ctxt.handle); 2385 2381 if (ctxt.meta_ac) { 2386 2382 ocfs2_free_alloc_context(ctxt.meta_ac); 2387 2383 ctxt.meta_ac = NULL; 2388 2384 } 2385 + 2386 + if (ret < 0) { 2387 + mlog_errno(ret); 2388 + break; 2389 + } 2390 + 2389 2391 } 2390 2392 2391 2393 if (ctxt.meta_ac)
+8 -5
fs/pstore/platform.c
··· 167 167 } 168 168 169 169 psinfo = psi; 170 + mutex_init(&psinfo->read_mutex); 170 171 spin_unlock(&pstore_lock); 171 172 172 173 if (owner && !try_module_get(owner)) { ··· 196 195 void pstore_get_records(int quiet) 197 196 { 198 197 struct pstore_info *psi = psinfo; 198 + char *buf = NULL; 199 199 ssize_t size; 200 200 u64 id; 201 201 enum pstore_type_id type; 202 202 struct timespec time; 203 203 int failed = 0, rc; 204 - unsigned long flags; 205 204 206 205 if (!psi) 207 206 return; 208 207 209 - spin_lock_irqsave(&psinfo->buf_lock, flags); 208 + mutex_lock(&psi->read_mutex); 210 209 rc = psi->open(psi); 211 210 if (rc) 212 211 goto out; 213 212 214 - while ((size = psi->read(&id, &type, &time, psi)) > 0) { 215 - rc = pstore_mkfile(type, psi->name, id, psi->buf, (size_t)size, 213 + while ((size = psi->read(&id, &type, &time, &buf, psi)) > 0) { 214 + rc = pstore_mkfile(type, psi->name, id, buf, (size_t)size, 216 215 time, psi); 216 + kfree(buf); 217 + buf = NULL; 217 218 if (rc && (rc != -EEXIST || !quiet)) 218 219 failed++; 219 220 } 220 221 psi->close(psi); 221 222 out: 222 - spin_unlock_irqrestore(&psinfo->buf_lock, flags); 223 + mutex_unlock(&psi->read_mutex); 223 224 224 225 if (failed) 225 226 printk(KERN_WARNING "pstore: failed to load %d record(s) from '%s'\n",
+4 -5
include/drm/exynos_drm.h
··· 32 32 /** 33 33 * User-desired buffer creation information structure. 34 34 * 35 - * @size: requested size for the object. 35 + * @size: user-desired memory allocation size. 36 36 * - this size value would be page-aligned internally. 37 37 * @flags: user request for setting memory type or cache attributes. 38 - * @handle: returned handle for the object. 39 - * @pad: just padding to be 64-bit aligned. 38 + * @handle: returned a handle to created gem object. 39 + * - this handle will be set by gem module of kernel side. 40 40 */ 41 41 struct drm_exynos_gem_create { 42 - unsigned int size; 42 + uint64_t size; 43 43 unsigned int flags; 44 44 unsigned int handle; 45 - unsigned int pad; 46 45 }; 47 46 48 47 /**
+2 -1
include/linux/clocksource.h
··· 156 156 * @mult: cycle to nanosecond multiplier 157 157 * @shift: cycle to nanosecond divisor (power of two) 158 158 * @max_idle_ns: max idle time permitted by the clocksource (nsecs) 159 + * @maxadj: maximum adjustment value to mult (~11%) 159 160 * @flags: flags describing special properties 160 161 * @archdata: arch-specific data 161 162 * @suspend: suspend function for the clocksource, if necessary ··· 173 172 u32 mult; 174 173 u32 shift; 175 174 u64 max_idle_ns; 176 - 175 + u32 maxadj; 177 176 #ifdef CONFIG_ARCH_CLOCKSOURCE_DATA 178 177 struct arch_clocksource_data archdata; 179 178 #endif
+127 -88
include/linux/pm.h
··· 54 54 /** 55 55 * struct dev_pm_ops - device PM callbacks 56 56 * 57 - * Several driver power state transitions are externally visible, affecting 57 + * Several device power state transitions are externally visible, affecting 58 58 * the state of pending I/O queues and (for drivers that touch hardware) 59 59 * interrupts, wakeups, DMA, and other hardware state. There may also be 60 - * internal transitions to various low power modes, which are transparent 60 + * internal transitions to various low-power modes which are transparent 61 61 * to the rest of the driver stack (such as a driver that's ON gating off 62 62 * clocks which are not in active use). 63 63 * 64 - * The externally visible transitions are handled with the help of the following 65 - * callbacks included in this structure: 64 + * The externally visible transitions are handled with the help of callbacks 65 + * included in this structure in such a way that two levels of callbacks are 66 + * involved. First, the PM core executes callbacks provided by PM domains, 67 + * device types, classes and bus types. They are the subsystem-level callbacks 68 + * supposed to execute callbacks provided by device drivers, although they may 69 + * choose not to do that. If the driver callbacks are executed, they have to 70 + * collaborate with the subsystem-level callbacks to achieve the goals 71 + * appropriate for the given system transition, given transition phase and the 72 + * subsystem the device belongs to. 66 73 * 67 - * @prepare: Prepare the device for the upcoming transition, but do NOT change 68 - * its hardware state. Prevent new children of the device from being 69 - * registered after @prepare() returns (the driver's subsystem and 70 - * generally the rest of the kernel is supposed to prevent new calls to the 71 - * probe method from being made too once @prepare() has succeeded). If 72 - * @prepare() detects a situation it cannot handle (e.g. 
registration of a 73 - * child already in progress), it may return -EAGAIN, so that the PM core 74 - * can execute it once again (e.g. after the new child has been registered) 75 - * to recover from the race condition. This method is executed for all 76 - * kinds of suspend transitions and is followed by one of the suspend 77 - * callbacks: @suspend(), @freeze(), or @poweroff(). 78 - * The PM core executes @prepare() for all devices before starting to 79 - * execute suspend callbacks for any of them, so drivers may assume all of 80 - * the other devices to be present and functional while @prepare() is being 81 - * executed. In particular, it is safe to make GFP_KERNEL memory 82 - * allocations from within @prepare(). However, drivers may NOT assume 83 - * anything about the availability of the user space at that time and it 84 - * is not correct to request firmware from within @prepare() (it's too 85 - * late to do that). [To work around this limitation, drivers may 86 - * register suspend and hibernation notifiers that are executed before the 87 - * freezing of tasks.] 74 + * @prepare: The principal role of this callback is to prevent new children of 75 + * the device from being registered after it has returned (the driver's 76 + * subsystem and generally the rest of the kernel is supposed to prevent 77 + * new calls to the probe method from being made too once @prepare() has 78 + * succeeded). If @prepare() detects a situation it cannot handle (e.g. 79 + * registration of a child already in progress), it may return -EAGAIN, so 80 + * that the PM core can execute it once again (e.g. after a new child has 81 + * been registered) to recover from the race condition. 82 + * This method is executed for all kinds of suspend transitions and is 83 + * followed by one of the suspend callbacks: @suspend(), @freeze(), or 84 + * @poweroff(). 
The PM core executes subsystem-level @prepare() for all 85 + * devices before starting to invoke suspend callbacks for any of them, so 86 + * generally devices may be assumed to be functional or to respond to 87 + * runtime resume requests while @prepare() is being executed. However, 88 + * device drivers may NOT assume anything about the availability of user 89 + * space at that time and it is NOT valid to request firmware from within 90 + * @prepare() (it's too late to do that). It also is NOT valid to allocate 91 + * substantial amounts of memory from @prepare() in the GFP_KERNEL mode. 92 + * [To work around these limitations, drivers may register suspend and 93 + * hibernation notifiers to be executed before the freezing of tasks.] 88 94 * 89 95 * @complete: Undo the changes made by @prepare(). This method is executed for 90 96 * all kinds of resume transitions, following one of the resume callbacks: 91 97 * @resume(), @thaw(), @restore(). Also called if the state transition 92 - * fails before the driver's suspend callback (@suspend(), @freeze(), 93 - * @poweroff()) can be executed (e.g. if the suspend callback fails for one 98 + * fails before the driver's suspend callback: @suspend(), @freeze() or 99 + * @poweroff(), can be executed (e.g. if the suspend callback fails for one 94 100 * of the other devices that the PM core has unsuccessfully attempted to 95 101 * suspend earlier). 96 - * The PM core executes @complete() after it has executed the appropriate 97 - * resume callback for all devices. 102 + * The PM core executes subsystem-level @complete() after it has executed 103 + * the appropriate resume callbacks for all devices. 98 104 * 99 105 * @suspend: Executed before putting the system into a sleep state in which the 100 - * contents of main memory are preserved. Quiesce the device, put it into 101 - * a low power state appropriate for the upcoming system state (such as 102 - * PCI_D3hot), and enable wakeup events as appropriate. 
106 + * contents of main memory are preserved. The exact action to perform 107 + * depends on the device's subsystem (PM domain, device type, class or bus 108 + * type), but generally the device must be quiescent after subsystem-level 109 + * @suspend() has returned, so that it doesn't do any I/O or DMA. 110 + * Subsystem-level @suspend() is executed for all devices after invoking 111 + * subsystem-level @prepare() for all of them. 103 112 * 104 113 * @resume: Executed after waking the system up from a sleep state in which the 105 - * contents of main memory were preserved. Put the device into the 106 - * appropriate state, according to the information saved in memory by the 107 - * preceding @suspend(). The driver starts working again, responding to 108 - * hardware events and software requests. The hardware may have gone 109 - * through a power-off reset, or it may have maintained state from the 110 - * previous suspend() which the driver may rely on while resuming. On most 111 - * platforms, there are no restrictions on availability of resources like 112 - * clocks during @resume(). 114 + * contents of main memory were preserved. The exact action to perform 115 + * depends on the device's subsystem, but generally the driver is expected 116 + * to start working again, responding to hardware events and software 117 + * requests (the device itself may be left in a low-power state, waiting 118 + * for a runtime resume to occur). The state of the device at the time its 119 + * driver's @resume() callback is run depends on the platform and subsystem 120 + * the device belongs to. On most platforms, there are no restrictions on 121 + * availability of resources like clocks during @resume(). 122 + * Subsystem-level @resume() is executed for all devices after invoking 123 + * subsystem-level @resume_noirq() for all of them. 113 124 * 114 125 * @freeze: Hibernation-specific, executed before creating a hibernation image. 
115 - * Quiesce operations so that a consistent image can be created, but do NOT 116 - * otherwise put the device into a low power device state and do NOT emit 117 - * system wakeup events. Save in main memory the device settings to be 118 - * used by @restore() during the subsequent resume from hibernation or by 119 - * the subsequent @thaw(), if the creation of the image or the restoration 120 - * of main memory contents from it fails. 126 + * Analogous to @suspend(), but it should not enable the device to signal 127 + * wakeup events or change its power state. The majority of subsystems 128 + * (with the notable exception of the PCI bus type) expect the driver-level 129 + * @freeze() to save the device settings in memory to be used by @restore() 130 + * during the subsequent resume from hibernation. 131 + * Subsystem-level @freeze() is executed for all devices after invoking 132 + * subsystem-level @prepare() for all of them. 121 133 * 122 134 * @thaw: Hibernation-specific, executed after creating a hibernation image OR 123 - * if the creation of the image fails. Also executed after a failing 135 + * if the creation of an image has failed. Also executed after a failing 124 136 * attempt to restore the contents of main memory from such an image. 125 137 * Undo the changes made by the preceding @freeze(), so the device can be 126 138 * operated in the same way as immediately before the call to @freeze(). 139 + * Subsystem-level @thaw() is executed for all devices after invoking 140 + * subsystem-level @thaw_noirq() for all of them. It also may be executed 141 + * directly after @freeze() in case of a transition error. 127 142 * 128 143 * @poweroff: Hibernation-specific, executed after saving a hibernation image. 129 - * Quiesce the device, put it into a low power state appropriate for the 130 - * upcoming system state (such as PCI_D3hot), and enable wakeup events as 131 - * appropriate. 
144 + * Analogous to @suspend(), but it need not save the device's settings in 145 + * memory. 146 + * Subsystem-level @poweroff() is executed for all devices after invoking 147 + * subsystem-level @prepare() for all of them. 132 148 * 133 149 * @restore: Hibernation-specific, executed after restoring the contents of main 134 - * memory from a hibernation image. Driver starts working again, 135 - * responding to hardware events and software requests. Drivers may NOT 136 - * make ANY assumptions about the hardware state right prior to @restore(). 137 - * On most platforms, there are no restrictions on availability of 138 - * resources like clocks during @restore(). 150 + * memory from a hibernation image, analogous to @resume(). 139 151 * 140 - * @suspend_noirq: Complete the operations of ->suspend() by carrying out any 141 - * actions required for suspending the device that need interrupts to be 142 - * disabled 152 + * @suspend_noirq: Complete the actions started by @suspend(). Carry out any 153 + * additional operations required for suspending the device that might be 154 + * racing with its driver's interrupt handler, which is guaranteed not to 155 + * run while @suspend_noirq() is being executed. 156 + * It generally is expected that the device will be in a low-power state 157 + * (appropriate for the target system sleep state) after subsystem-level 158 + * @suspend_noirq() has returned successfully. If the device can generate 159 + * system wakeup signals and is enabled to wake up the system, it should be 160 + * configured to do so at that time. However, depending on the platform 161 + * and device's subsystem, @suspend() may be allowed to put the device into 162 + * the low-power state and configure it to generate wakeup signals, in 163 + * which case it generally is not necessary to define @suspend_noirq(). 
143 164 * 144 - * @resume_noirq: Prepare for the execution of ->resume() by carrying out any 145 - * actions required for resuming the device that need interrupts to be 146 - * disabled 165 + * @resume_noirq: Prepare for the execution of @resume() by carrying out any 166 + * operations required for resuming the device that might be racing with 167 + * its driver's interrupt handler, which is guaranteed not to run while 168 + * @resume_noirq() is being executed. 147 169 * 148 - * @freeze_noirq: Complete the operations of ->freeze() by carrying out any 149 - * actions required for freezing the device that need interrupts to be 150 - * disabled 170 + * @freeze_noirq: Complete the actions started by @freeze(). Carry out any 171 + * additional operations required for freezing the device that might be 172 + * racing with its driver's interrupt handler, which is guaranteed not to 173 + * run while @freeze_noirq() is being executed. 174 + * The power state of the device should not be changed by either @freeze() 175 + * or @freeze_noirq() and it should not be configured to signal system 176 + * wakeup by any of these callbacks. 151 177 * 152 - * @thaw_noirq: Prepare for the execution of ->thaw() by carrying out any 153 - * actions required for thawing the device that need interrupts to be 154 - * disabled 178 + * @thaw_noirq: Prepare for the execution of @thaw() by carrying out any 179 + * operations required for thawing the device that might be racing with its 180 + * driver's interrupt handler, which is guaranteed not to run while 181 + * @thaw_noirq() is being executed. 155 182 * 156 - * @poweroff_noirq: Complete the operations of ->poweroff() by carrying out any 157 - * actions required for handling the device that need interrupts to be 158 - * disabled 183 + * @poweroff_noirq: Complete the actions started by @poweroff(). Analogous to 184 + * @suspend_noirq(), but it need not save the device's settings in memory. 
159 185 * 160 - * @restore_noirq: Prepare for the execution of ->restore() by carrying out any 161 - * actions required for restoring the operations of the device that need 162 - * interrupts to be disabled 186 + * @restore_noirq: Prepare for the execution of @restore() by carrying out any 187 + * operations required for restoring the device that might be racing with its 188 + * driver's interrupt handler, which is guaranteed not to run while 189 + * @restore_noirq() is being executed. Analogous to @resume_noirq(). 163 190 * 164 191 * All of the above callbacks, except for @complete(), return error codes. 165 192 * However, the error codes returned by the resume operations, @resume(), 166 - * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq() do 193 + * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq(), do 167 194 * not cause the PM core to abort the resume transition during which they are 168 - * returned. The error codes returned in that cases are only printed by the PM 195 + * returned. The error codes returned in those cases are only printed by the PM 169 196 * core to the system logs for debugging purposes. Still, it is recommended 170 197 * that drivers only return error codes from their resume methods in case of an 171 198 * unrecoverable failure (i.e. when the device being handled refuses to resume
206 181 * 207 - * There also are the following callbacks related to run-time power management 208 - * of devices: 182 + * Refer to Documentation/power/devices.txt for more information about the role 183 + * of the above callbacks in the system suspend process. 184 + * 185 + * There also are callbacks related to runtime power management of devices. 186 + * Again, these callbacks are executed by the PM core only for subsystems 187 + * (PM domains, device types, classes and bus types) and the subsystem-level 188 + * callbacks are supposed to invoke the driver callbacks. Moreover, the exact 189 + * actions to be performed by a device driver's callbacks generally depend on 190 + * the platform and subsystem the device belongs to. 209 191 * 210 192 * @runtime_suspend: Prepare the device for a condition in which it won't be 211 193 * able to communicate with the CPU(s) and RAM due to power management. 212 - * This need not mean that the device should be put into a low power state. 194 + * This need not mean that the device should be put into a low-power state. 213 195 * For example, if the device is behind a link which is about to be turned 214 196 * off, the device may remain at full power. If the device does go to low 215 - * power and is capable of generating run-time wake-up events, remote 216 - * wake-up (i.e., a hardware mechanism allowing the device to request a 217 - * change of its power state via a wake-up event, such as PCI PME) should 218 - * be enabled for it. 197 + * power and is capable of generating runtime wakeup events, remote wakeup 198 + * (i.e., a hardware mechanism allowing the device to request a change of 199 + * its power state via an interrupt) should be enabled for it. 219 200 * 220 201 * @runtime_resume: Put the device into the fully active state in response to a 221 - * wake-up event generated by hardware or at the request of software. 
If 222 - * necessary, put the device into the full power state and restore its 202 + * wakeup event generated by hardware or at the request of software. If 203 + * necessary, put the device into the full-power state and restore its 223 204 * registers, so that it is fully operational. 224 205 * 225 - * @runtime_idle: Device appears to be inactive and it might be put into a low 226 - * power state if all of the necessary conditions are satisfied. Check 206 + * @runtime_idle: Device appears to be inactive and it might be put into a 207 + * low-power state if all of the necessary conditions are satisfied. Check 227 208 * these conditions and handle the device as appropriate, possibly queueing 228 209 * a suspend request for it. The return value is ignored by the PM core. 210 + * 211 + * Refer to Documentation/power/runtime_pm.txt for more information about the 212 + * role of the above callbacks in device runtime power management. 213 + * 229 214 */ 230 215 231 216 struct dev_pm_ops {
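The suspend ordering described above (subsystem-level @prepare() runs for all devices before any suspend callback is invoked) can be sketched as a userspace simulation. This is a hedged illustration only: sim_dev, run_phase and sim_suspend are hypothetical names, not kernel API.

```c
#include <stddef.h>

/* Phases a simulated device can be in; stands in for the PM core's
 * bookkeeping, not for any real kernel structure. */
enum pm_phase { PM_NONE, PM_PREPARE, PM_SUSPEND };

struct sim_dev {
	enum pm_phase phase;
};

/* Run one phase across every device, the way dpm_prepare() and
 * dpm_suspend() each sweep the whole device list before the next
 * phase starts. Refuses to suspend a device that was never prepared. */
static int run_phase(struct sim_dev *devs, size_t n, enum pm_phase phase)
{
	for (size_t i = 0; i < n; i++) {
		if (phase == PM_SUSPEND && devs[i].phase != PM_PREPARE)
			return -1;
		devs[i].phase = phase;
	}
	return 0;
}

/* Whole-system suspend: prepare all devices, then suspend all of them. */
static int sim_suspend(struct sim_dev *devs, size_t n)
{
	if (run_phase(devs, n, PM_PREPARE))
		return -1;
	return run_phase(devs, n, PM_SUSPEND);
}
```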
+3 -1
include/linux/pstore.h
··· 35 35 spinlock_t buf_lock; /* serialize access to 'buf' */ 36 36 char *buf; 37 37 size_t bufsize; 38 + struct mutex read_mutex; /* serialize open/read/close */ 38 39 int (*open)(struct pstore_info *psi); 39 40 int (*close)(struct pstore_info *psi); 40 41 ssize_t (*read)(u64 *id, enum pstore_type_id *type, 41 - struct timespec *time, struct pstore_info *psi); 42 + struct timespec *time, char **buf, 43 + struct pstore_info *psi); 42 44 int (*write)(enum pstore_type_id type, u64 *id, 43 45 unsigned int part, size_t size, struct pstore_info *psi); 44 46 int (*erase)(enum pstore_type_id type, u64 id,
-7
include/video/omapdss.h
··· 307 307 void (*dsi_disable_pads)(int dsi_id, unsigned lane_mask); 308 308 }; 309 309 310 - #if defined(CONFIG_OMAP2_DSS_MODULE) || defined(CONFIG_OMAP2_DSS) 311 310 /* Init with the board info */ 312 311 extern int omap_display_init(struct omap_dss_board_info *board_data); 313 - #else 314 - static inline int omap_display_init(struct omap_dss_board_info *board_data) 315 - { 316 - return 0; 317 - } 318 - #endif 319 312 320 313 struct omap_display_platform_data { 321 314 struct omap_dss_board_info *board_data;
+9 -2
kernel/cgroup_freezer.c
··· 153 153 kfree(cgroup_freezer(cgroup)); 154 154 } 155 155 156 + /* task is frozen or will freeze immediately when next it gets woken */ 157 + static bool is_task_frozen_enough(struct task_struct *task) 158 + { 159 + return frozen(task) || 160 + (task_is_stopped_or_traced(task) && freezing(task)); 161 + } 162 + 156 163 /* 157 164 * The call to cgroup_lock() in the freezer.state write method prevents 158 165 * a write to that file racing against an attach, and hence the ··· 238 231 cgroup_iter_start(cgroup, &it); 239 232 while ((task = cgroup_iter_next(cgroup, &it))) { 240 233 ntotal++; 241 - if (frozen(task)) 234 + if (is_task_frozen_enough(task)) 242 235 nfrozen++; 243 236 } 244 237 ··· 291 284 while ((task = cgroup_iter_next(cgroup, &it))) { 292 285 if (!freeze_task(task, true)) 293 286 continue; 294 - if (frozen(task)) 287 + if (is_task_frozen_enough(task)) 295 288 continue; 296 289 if (!freezing(task) && !freezer_should_skip(task)) 297 290 num_cant_freeze_now++;
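The new helper treats a task as frozen enough if it is already frozen, or if it is stopped/traced with a freeze request pending (so it will freeze as soon as it is next woken). A truth-table sketch of that predicate, with the task state flattened into three booleans (task_state is illustrative, not the kernel's task_struct):

```c
#include <stdbool.h>

/* Flattened view of the three task-state queries used by the patch. */
struct task_state {
	bool frozen;		/* frozen(task) */
	bool stopped_or_traced;	/* task_is_stopped_or_traced(task) */
	bool freezing;		/* freezing(task) */
};

/* Frozen now, or guaranteed to freeze the next time it is woken. */
static bool is_task_frozen_enough(const struct task_state *t)
{
	return t->frozen || (t->stopped_or_traced && t->freezing);
}
```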
+4 -2
kernel/hrtimer.c
··· 885 885 struct hrtimer_clock_base *base, 886 886 unsigned long newstate, int reprogram) 887 887 { 888 + struct timerqueue_node *next_timer; 888 889 if (!(timer->state & HRTIMER_STATE_ENQUEUED)) 889 890 goto out; 890 891 891 - if (&timer->node == timerqueue_getnext(&base->active)) { 892 + next_timer = timerqueue_getnext(&base->active); 893 + timerqueue_del(&base->active, &timer->node); 894 + if (&timer->node == next_timer) { 892 895 #ifdef CONFIG_HIGH_RES_TIMERS 893 896 /* Reprogram the clock event device. if enabled */ 894 897 if (reprogram && hrtimer_hres_active()) { ··· 904 901 } 905 902 #endif 906 903 } 907 - timerqueue_del(&base->active, &timer->node); 908 904 if (!timerqueue_getnext(&base->active)) 909 905 base->cpu_base->active_bases &= ~(1 << base->index); 910 906 out:
+1 -1
kernel/irq/manage.c
··· 1596 1596 return -ENOMEM; 1597 1597 1598 1598 action->handler = handler; 1599 - action->flags = IRQF_PERCPU; 1599 + action->flags = IRQF_PERCPU | IRQF_NO_SUSPEND; 1600 1600 action->name = devname; 1601 1601 action->percpu_dev_id = dev_id; 1602 1602
+3 -1
kernel/irq/spurious.c
··· 84 84 */ 85 85 action = desc->action; 86 86 if (!action || !(action->flags & IRQF_SHARED) || 87 - (action->flags & __IRQF_TIMER) || !action->next) 87 + (action->flags & __IRQF_TIMER) || 88 + (action->handler(irq, action->dev_id) == IRQ_HANDLED) || 89 + !action->next) 88 90 goto out; 89 91 90 92 /* Already running on another processor */
+10 -6
kernel/power/hibernate.c
··· 347 347 348 348 error = freeze_kernel_threads(); 349 349 if (error) 350 - goto Close; 350 + goto Cleanup; 351 351 352 352 if (hibernation_test(TEST_FREEZER) || 353 353 hibernation_testmode(HIBERNATION_TESTPROC)) { ··· 357 357 * successful freezer test. 358 358 */ 359 359 freezer_test_done = true; 360 - goto Close; 360 + goto Cleanup; 361 361 } 362 362 363 363 error = dpm_prepare(PMSG_FREEZE); 364 - if (error) 365 - goto Complete_devices; 364 + if (error) { 365 + dpm_complete(msg); 366 + goto Cleanup; 367 + } 366 368 367 369 suspend_console(); 368 370 pm_restrict_gfp_mask(); ··· 393 391 pm_restore_gfp_mask(); 394 392 395 393 resume_console(); 396 - 397 - Complete_devices: 398 394 dpm_complete(msg); 399 395 400 396 Close: ··· 402 402 Recover_platform: 403 403 platform_recover(platform_mode); 404 404 goto Resume_devices; 405 + 406 + Cleanup: 407 + swsusp_free(); 408 + goto Close; 405 409 } 406 410 407 411 /**
+48 -10
kernel/time/clocksource.c
··· 492 492 } 493 493 494 494 /** 495 + * clocksource_max_adjustment - Returns max adjustment amount 496 + * @cs: Pointer to clocksource 497 + * 498 + */ 499 + static u32 clocksource_max_adjustment(struct clocksource *cs) 500 + { 501 + u64 ret; 502 + /* 503 + * We won't try to correct for more than 11% adjustments (110,000 ppm). 504 + */ 505 + ret = (u64)cs->mult * 11; 506 + do_div(ret,100); 507 + return (u32)ret; 508 + } 509 + 510 + /** 495 511 * clocksource_max_deferment - Returns max time the clocksource can be deferred 496 512 * @cs: Pointer to clocksource 497 513 * ··· 519 503 /* 520 504 * Calculate the maximum number of cycles that we can pass to the 521 505 * cyc2ns function without overflowing a 64-bit signed result. The 522 - * maximum number of cycles is equal to ULLONG_MAX/cs->mult which 523 - * is equivalent to the below. 524 - * max_cycles < (2^63)/cs->mult 525 - * max_cycles < 2^(log2((2^63)/cs->mult)) 526 - * max_cycles < 2^(log2(2^63) - log2(cs->mult)) 527 - * max_cycles < 2^(63 - log2(cs->mult)) 528 - * max_cycles < 1 << (63 - log2(cs->mult)) 506 + * maximum number of cycles is equal to ULLONG_MAX/(cs->mult+cs->maxadj) 507 + * which is equivalent to the below. 508 + * max_cycles < (2^63)/(cs->mult + cs->maxadj) 509 + * max_cycles < 2^(log2((2^63)/(cs->mult + cs->maxadj))) 510 + * max_cycles < 2^(log2(2^63) - log2(cs->mult + cs->maxadj)) 511 + * max_cycles < 2^(63 - log2(cs->mult + cs->maxadj)) 512 + * max_cycles < 1 << (63 - log2(cs->mult + cs->maxadj)) 529 513 * Please note that we add 1 to the result of the log2 to account for 530 514 * any rounding errors, ensure the above inequality is satisfied and 531 515 * no overflow will occur. 532 516 */ 533 - max_cycles = 1ULL << (63 - (ilog2(cs->mult) + 1)); 517 + max_cycles = 1ULL << (63 - (ilog2(cs->mult + cs->maxadj) + 1)); 534 518 535 519 /* 536 520 * The actual maximum number of cycles we can defer the clocksource is 537 521 * determined by the minimum of max_cycles and cs->mask. 
522 + * Note: Here we subtract the maxadj to make sure we don't sleep for 523 + * too long if there's a large negative adjustment. 538 524 */ 539 525 max_cycles = min_t(u64, max_cycles, (u64) cs->mask); 540 - max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult, cs->shift); 526 + max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult - cs->maxadj, 527 + cs->shift); 541 528 542 529 /* 543 530 * To ensure that the clocksource does not wrap whilst we are idle, ··· 659 640 void __clocksource_updatefreq_scale(struct clocksource *cs, u32 scale, u32 freq) 660 641 { 661 642 u64 sec; 662 - 663 643 /* 664 644 * Calc the maximum number of seconds which we can run before 665 645 * wrapping around. For clocksources which have a mask > 32bit ··· 679 661 680 662 clocks_calc_mult_shift(&cs->mult, &cs->shift, freq, 681 663 NSEC_PER_SEC / scale, sec * scale); 664 + 665 + /* 666 + * For clocksources that have large mults, reduce mult to avoid overflow. 667 + * Since mult may be adjusted by ntp, add a safety extra margin 668 + * 669 + */ 670 + cs->maxadj = clocksource_max_adjustment(cs); 671 + while ((cs->mult + cs->maxadj < cs->mult) 672 + || (cs->mult - cs->maxadj > cs->mult)) { 673 + cs->mult >>= 1; 674 + cs->shift--; 675 + cs->maxadj = clocksource_max_adjustment(cs); 676 + } 677 + 682 678 cs->max_idle_ns = clocksource_max_deferment(cs); 683 679 } 684 680 EXPORT_SYMBOL_GPL(__clocksource_updatefreq_scale); ··· 733 701 */ 734 702 int clocksource_register(struct clocksource *cs) 735 703 { 704 + /* calculate max adjustment for given mult/shift */ 705 + cs->maxadj = clocksource_max_adjustment(cs); 706 + WARN_ONCE(cs->mult + cs->maxadj < cs->mult, 707 + "Clocksource %s might overflow on 11%% adjustment\n", 708 + cs->name); 709 + 736 710 /* calculate max idle time permitted for this clocksource */ 737 711 cs->max_idle_ns = clocksource_max_deferment(cs); 738 712
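The arithmetic above caps NTP adjustments at roughly 11% of mult and then halves mult (decrementing shift) until mult plus/minus maxadj can no longer wrap a 32-bit value. A standalone sketch of the same calculation, using uint32_t/uint64_t in place of the kernel's u32/u64 and a plain division instead of do_div():

```c
#include <stdint.h>

/* ~11% of mult, mirroring clocksource_max_adjustment(). */
static uint32_t max_adjustment(uint32_t mult)
{
	return (uint32_t)(((uint64_t)mult * 11) / 100);
}

/* Mirrors the while loop in __clocksource_updatefreq_scale(): shrink
 * mult until mult + maxadj cannot overflow (and mult - maxadj cannot
 * underflow) in 32-bit arithmetic. */
static void clamp_mult(uint32_t *mult, uint32_t *shift)
{
	uint32_t maxadj = max_adjustment(*mult);

	while (*mult + maxadj < *mult || *mult - maxadj > *mult) {
		*mult >>= 1;
		(*shift)--;
		maxadj = max_adjustment(*mult);
	}
}
```

For example, a mult of 0xF0000000 overflows once 11% is added, so a single halving step brings it back into range.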
+91 -1
kernel/time/timekeeping.c
··· 249 249 secs = xtime.tv_sec + wall_to_monotonic.tv_sec; 250 250 nsecs = xtime.tv_nsec + wall_to_monotonic.tv_nsec; 251 251 nsecs += timekeeping_get_ns(); 252 + /* If arch requires, add in gettimeoffset() */ 253 + nsecs += arch_gettimeoffset(); 252 254 253 255 } while (read_seqretry(&xtime_lock, seq)); 254 256 /* ··· 282 280 *ts = xtime; 283 281 tomono = wall_to_monotonic; 284 282 nsecs = timekeeping_get_ns(); 283 + /* If arch requires, add in gettimeoffset() */ 284 + nsecs += arch_gettimeoffset(); 285 285 286 286 } while (read_seqretry(&xtime_lock, seq)); 287 287 ··· 806 802 s64 error, interval = timekeeper.cycle_interval; 807 803 int adj; 808 804 805 + /* 806 + * The point of this is to check if the error is greater than half 807 + * an interval. 808 + * 809 + * First we shift it down from NTP_SHIFT to clocksource->shifted nsecs. 810 + * 811 + * Note we subtract one in the shift, so that error is really error*2. 812 + * This "saves" dividing (shifting) interval twice, but keeps the 813 + * (error > interval) comparison as still measuring if error is 814 + * larger than half an interval. 815 + * 816 + * Note: It does not "save" on aggravation when reading the code. 817 + */ 809 818 error = timekeeper.ntp_error >> (timekeeper.ntp_error_shift - 1); 810 819 if (error > interval) { 820 + /* 821 + * We now divide error by 4 (via shift), which checks if 822 + * the error is greater than twice the interval. 823 + * If it is greater, we need a bigadjust, if it is smaller, 824 + * we can adjust by 1. 825 + */ 811 826 error >>= 2; 827 + /* 828 + * XXX - In update_wall_time, we round up to the next 829 + * nanosecond, and store the amount rounded up into 830 + * the error. This causes the likely below to be unlikely. 831 + * 832 + * The proper fix is to avoid rounding up by using 833 + * the high precision timekeeper.xtime_nsec instead of 834 + * xtime.tv_nsec everywhere. Fixing this will take some 835 + * time. 
836 + */ 812 837 if (likely(error <= interval)) 813 838 adj = 1; 814 839 else 815 840 adj = timekeeping_bigadjust(error, &interval, &offset); 816 841 } else if (error < -interval) { 842 + /* See comment above, this is just switched for the negative */ 817 843 error >>= 2; 818 844 if (likely(error >= -interval)) { 819 845 adj = -1; ··· 851 817 offset = -offset; 852 818 } else 853 819 adj = timekeeping_bigadjust(error, &interval, &offset); 854 - } else 820 + } else /* No adjustment needed */ 855 821 return; 856 822 823 + WARN_ONCE(timekeeper.clock->maxadj && 824 + (timekeeper.mult + adj > timekeeper.clock->mult + 825 + timekeeper.clock->maxadj), 826 + "Adjusting %s more than 11%% (%ld vs %ld)\n", 827 + timekeeper.clock->name, (long)timekeeper.mult + adj, 828 + (long)timekeeper.clock->mult + 829 + timekeeper.clock->maxadj); 830 + /* 831 + * So the following can be confusing. 832 + * 833 + * To keep things simple, let's assume adj == 1 for now. 834 + * 835 + * When adj != 1, remember that the interval and offset values 836 + * have been appropriately scaled so the math is the same. 837 + * 838 + * The basic idea here is that we're increasing the multiplier 839 + * by one; this causes the xtime_interval to be incremented by 840 + * one cycle_interval. This is because: 841 + * xtime_interval = cycle_interval * mult 842 + * So if mult is being incremented by one: 843 + * xtime_interval = cycle_interval * (mult + 1) 844 + * It's the same as: 845 + * xtime_interval = (cycle_interval * mult) + cycle_interval 846 + * Which can be shortened to: 847 + * xtime_interval += cycle_interval 848 + * 849 + * So offset stores the non-accumulated cycles. Thus the current 850 + * time (in shifted nanoseconds) is: 851 + * now = (offset * adj) + xtime_nsec 852 + * Now, even though we're adjusting the clock frequency, we have 853 + * to keep time consistent. In other words, we can't jump back 854 + * in time, and we also want to avoid jumping forward in time. 
855 + * 856 + * So given the same offset value, we need the time to be the same 857 + * both before and after the freq adjustment. 858 + * now = (offset * adj_1) + xtime_nsec_1 859 + * now = (offset * adj_2) + xtime_nsec_2 860 + * So: 861 + * (offset * adj_1) + xtime_nsec_1 = 862 + * (offset * adj_2) + xtime_nsec_2 863 + * And we know: 864 + * adj_2 = adj_1 + 1 865 + * So: 866 + * (offset * adj_1) + xtime_nsec_1 = 867 + * (offset * (adj_1+1)) + xtime_nsec_2 868 + * (offset * adj_1) + xtime_nsec_1 = 869 + * (offset * adj_1) + offset + xtime_nsec_2 870 + * Canceling the sides: 871 + * xtime_nsec_1 = offset + xtime_nsec_2 872 + * Which gives us: 873 + * xtime_nsec_2 = xtime_nsec_1 - offset 874 + * Which simplifies to: 875 + * xtime_nsec -= offset 876 + * 877 + * XXX - TODO: Doc ntp_error calculation. 878 + */ 857 879 timekeeper.mult += adj; 858 880 timekeeper.xtime_interval += interval; 859 881 timekeeper.xtime_nsec -= offset;
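The derivation in the comment above ends with xtime_nsec -= offset: when the multiplier grows by one, subtracting the non-accumulated cycles from xtime_nsec keeps the current time unchanged. A standalone check of that invariant, with plain integers standing in for the timekeeper's shifted-nanosecond fields:

```c
#include <stdint.h>

/* now = (offset * mult) + xtime_nsec, as in the comment above
 * (with adj folded into mult for this sketch). */
static uint64_t now_ns(uint64_t offset, uint32_t mult, uint64_t xtime_nsec)
{
	return offset * mult + xtime_nsec;
}

/* Apply a +1 frequency adjustment without letting "now" jump:
 * mult += 1 is compensated by xtime_nsec -= offset. */
static void adjust_mult(uint64_t offset, uint32_t *mult, uint64_t *xtime_nsec)
{
	*mult += 1;
	*xtime_nsec -= offset;
}
```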
+8 -9
mm/percpu-vm.c
··· 50 50 51 51 if (!pages || !bitmap) { 52 52 if (may_alloc && !pages) 53 - pages = pcpu_mem_alloc(pages_size); 53 + pages = pcpu_mem_zalloc(pages_size); 54 54 if (may_alloc && !bitmap) 55 - bitmap = pcpu_mem_alloc(bitmap_size); 55 + bitmap = pcpu_mem_zalloc(bitmap_size); 56 56 if (!pages || !bitmap) 57 57 return NULL; 58 58 } 59 59 60 - memset(pages, 0, pages_size); 61 60 bitmap_copy(bitmap, chunk->populated, pcpu_unit_pages); 62 61 63 62 *bitmapp = bitmap; ··· 142 143 int page_start, int page_end) 143 144 { 144 145 flush_cache_vunmap( 145 - pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start), 146 - pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end)); 146 + pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start), 147 + pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); 147 148 } 148 149 149 150 static void __pcpu_unmap_pages(unsigned long addr, int nr_pages) ··· 205 206 int page_start, int page_end) 206 207 { 207 208 flush_tlb_kernel_range( 208 - pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start), 209 - pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end)); 209 + pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start), 210 + pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); 210 211 } 211 212 212 213 static int __pcpu_map_pages(unsigned long addr, struct page **pages, ··· 283 284 int page_start, int page_end) 284 285 { 285 286 flush_cache_vmap( 286 - pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start), 287 - pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end)); 287 + pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start), 288 + pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); 288 289 } 289 290 290 291 /**
+40 -22
mm/percpu.c
··· 116 116 static int pcpu_nr_slots __read_mostly; 117 117 static size_t pcpu_chunk_struct_size __read_mostly; 118 118 119 - /* cpus with the lowest and highest unit numbers */ 120 - static unsigned int pcpu_first_unit_cpu __read_mostly; 121 - static unsigned int pcpu_last_unit_cpu __read_mostly; 119 + /* cpus with the lowest and highest unit addresses */ 120 + static unsigned int pcpu_low_unit_cpu __read_mostly; 121 + static unsigned int pcpu_high_unit_cpu __read_mostly; 122 122 123 123 /* the address of the first chunk which starts with the kernel static area */ 124 124 void *pcpu_base_addr __read_mostly; ··· 273 273 (rs) = (re) + 1, pcpu_next_pop((chunk), &(rs), &(re), (end))) 274 274 275 275 /** 276 - * pcpu_mem_alloc - allocate memory 276 + * pcpu_mem_zalloc - allocate memory 277 277 * @size: bytes to allocate 278 278 * 279 279 * Allocate @size bytes. If @size is smaller than PAGE_SIZE, 280 - * kzalloc() is used; otherwise, vmalloc() is used. The returned 280 + * kzalloc() is used; otherwise, vzalloc() is used. The returned 281 281 * memory is always zeroed. 282 282 * 283 283 * CONTEXT: ··· 286 286 * RETURNS: 287 287 * Pointer to the allocated area on success, NULL on failure. 288 288 */ 289 - static void *pcpu_mem_alloc(size_t size) 289 + static void *pcpu_mem_zalloc(size_t size) 290 290 { 291 291 if (WARN_ON_ONCE(!slab_is_available())) 292 292 return NULL; ··· 302 302 * @ptr: memory to free 303 303 * @size: size of the area 304 304 * 305 - * Free @ptr. @ptr should have been allocated using pcpu_mem_alloc(). 305 + * Free @ptr. @ptr should have been allocated using pcpu_mem_zalloc(). 
306 306 */ 307 307 static void pcpu_mem_free(void *ptr, size_t size) 308 308 { ··· 384 384 size_t old_size = 0, new_size = new_alloc * sizeof(new[0]); 385 385 unsigned long flags; 386 386 387 - new = pcpu_mem_alloc(new_size); 387 + new = pcpu_mem_zalloc(new_size); 388 388 if (!new) 389 389 return -ENOMEM; 390 390 ··· 604 604 { 605 605 struct pcpu_chunk *chunk; 606 606 607 - chunk = pcpu_mem_alloc(pcpu_chunk_struct_size); 607 + chunk = pcpu_mem_zalloc(pcpu_chunk_struct_size); 608 608 if (!chunk) 609 609 return NULL; 610 610 611 - chunk->map = pcpu_mem_alloc(PCPU_DFL_MAP_ALLOC * sizeof(chunk->map[0])); 611 + chunk->map = pcpu_mem_zalloc(PCPU_DFL_MAP_ALLOC * 612 + sizeof(chunk->map[0])); 612 613 if (!chunk->map) { 613 614 kfree(chunk); 614 615 return NULL; ··· 978 977 * address. The caller is responsible for ensuring @addr stays valid 979 978 * until this function finishes. 980 979 * 980 + * percpu allocator has special setup for the first chunk, which currently 981 + * supports either embedding in linear address space or vmalloc mapping, 982 + * and, from the second one, the backing allocator (currently either vm or 983 + * km) provides translation. 984 + * 985 + * The addr can be translated simply without checking if it falls into the 986 + * first chunk. But the current code reflects better how percpu allocator 987 + * actually works, and the verification can discover both bugs in percpu 988 + * allocator itself and per_cpu_ptr_to_phys() callers. So we keep current 989 + * code. 990 + * 981 991 * RETURNS: 982 992 * The physical address for @addr.
983 993 */ ··· 996 984 { 997 985 void __percpu *base = __addr_to_pcpu_ptr(pcpu_base_addr); 998 986 bool in_first_chunk = false; 999 - unsigned long first_start, first_end; 987 + unsigned long first_low, first_high; 1000 988 unsigned int cpu; 1001 989 1002 990 /* 1003 - * The following test on first_start/end isn't strictly 991 + * The following test on unit_low/high isn't strictly 1004 992 * necessary but will speed up lookups of addresses which 1005 993 * aren't in the first chunk. 1006 994 */ 1007 - first_start = pcpu_chunk_addr(pcpu_first_chunk, pcpu_first_unit_cpu, 0); 1008 - first_end = pcpu_chunk_addr(pcpu_first_chunk, pcpu_last_unit_cpu, 1009 - pcpu_unit_pages); 1010 - if ((unsigned long)addr >= first_start && 1011 - (unsigned long)addr < first_end) { 995 + first_low = pcpu_chunk_addr(pcpu_first_chunk, pcpu_low_unit_cpu, 0); 996 + first_high = pcpu_chunk_addr(pcpu_first_chunk, pcpu_high_unit_cpu, 997 + pcpu_unit_pages); 998 + if ((unsigned long)addr >= first_low && 999 + (unsigned long)addr < first_high) { 1012 1000 for_each_possible_cpu(cpu) { 1013 1001 void *start = per_cpu_ptr(base, cpu); 1014 1002 ··· 1245 1233 1246 1234 for (cpu = 0; cpu < nr_cpu_ids; cpu++) 1247 1235 unit_map[cpu] = UINT_MAX; 1248 - pcpu_first_unit_cpu = NR_CPUS; 1236 + 1237 + pcpu_low_unit_cpu = NR_CPUS; 1238 + pcpu_high_unit_cpu = NR_CPUS; 1249 1239 1250 1240 for (group = 0, unit = 0; group < ai->nr_groups; group++, unit += i) { 1251 1241 const struct pcpu_group_info *gi = &ai->groups[group]; ··· 1267 1253 unit_map[cpu] = unit + i; 1268 1254 unit_off[cpu] = gi->base_offset + i * ai->unit_size; 1269 1255 1270 - if (pcpu_first_unit_cpu == NR_CPUS) 1271 - pcpu_first_unit_cpu = cpu; 1272 - pcpu_last_unit_cpu = cpu; 1256 + /* determine low/high unit_cpu */ 1257 + if (pcpu_low_unit_cpu == NR_CPUS || 1258 + unit_off[cpu] < unit_off[pcpu_low_unit_cpu]) 1259 + pcpu_low_unit_cpu = cpu; 1260 + if (pcpu_high_unit_cpu == NR_CPUS || 1261 + unit_off[cpu] > unit_off[pcpu_high_unit_cpu]) 1262 + 
pcpu_high_unit_cpu = cpu; 1273 1263 } 1274 1264 } 1275 1265 pcpu_nr_units = unit; ··· 1907 1889 1908 1890 BUILD_BUG_ON(size > PAGE_SIZE); 1909 1891 1910 - map = pcpu_mem_alloc(size); 1892 + map = pcpu_mem_zalloc(size); 1911 1893 BUG_ON(!map); 1912 1894 1913 1895 spin_lock_irqsave(&pcpu_lock, flags);
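The rename from pcpu_mem_alloc() to pcpu_mem_zalloc() in the hunks above lets the percpu-vm caller drop its explicit memset: the helper guarantees zeroed memory whichever underlying allocator it picks. A user-space analogue of that contract, with calloc/malloc standing in for kzalloc/vzalloc and a hypothetical threshold in place of PAGE_SIZE:

```c
#include <stdlib.h>
#include <string.h>
#include <stddef.h>
#include <assert.h>

#define SMALL_ALLOC_THRESHOLD 4096  /* stand-in for PAGE_SIZE */

/* Return zeroed memory regardless of which path is taken, so callers
 * never need a follow-up memset (the point of the _zalloc rename). */
static void *mem_zalloc(size_t size)
{
    if (size <= SMALL_ALLOC_THRESHOLD)
        return calloc(1, size);          /* kzalloc analogue */
    void *p = malloc(size);              /* vmalloc analogue */
    if (p)
        memset(p, 0, size);              /* vzalloc zeroes as well */
    return p;
}

/* Verify every byte comes back zero on both the small and large path. */
static int demo_zeroed(size_t size)
{
    unsigned char *p = mem_zalloc(size);
    if (!p)
        return 0;
    int ok = 1;
    for (size_t i = 0; i < size; i++)
        if (p[i])
            ok = 0;
    free(p);
    return ok;
}
```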
+26 -16
mm/slub.c
··· 1862 1862 { 1863 1863 struct kmem_cache_node *n = NULL; 1864 1864 struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab); 1865 - struct page *page; 1865 + struct page *page, *discard_page = NULL; 1866 1866 1867 1867 while ((page = c->partial)) { 1868 1868 enum slab_modes { M_PARTIAL, M_FREE }; ··· 1904 1904 if (l == M_PARTIAL) 1905 1905 remove_partial(n, page); 1906 1906 else 1907 - add_partial(n, page, 1); 1907 + add_partial(n, page, 1908 + DEACTIVATE_TO_TAIL); 1908 1909 1909 1910 l = m; 1910 1911 } ··· 1916 1915 "unfreezing slab")); 1917 1916 1918 1917 if (m == M_FREE) { 1919 - stat(s, DEACTIVATE_EMPTY); 1920 - discard_slab(s, page); 1921 - stat(s, FREE_SLAB); 1918 + page->next = discard_page; 1919 + discard_page = page; 1922 1920 } 1923 1921 } 1924 1922 1925 1923 if (n) 1926 1924 spin_unlock(&n->list_lock); 1925 + 1926 + while (discard_page) { 1927 + page = discard_page; 1928 + discard_page = discard_page->next; 1929 + 1930 + stat(s, DEACTIVATE_EMPTY); 1931 + discard_slab(s, page); 1932 + stat(s, FREE_SLAB); 1933 + } 1927 1934 } 1928 1935 1929 1936 /* ··· 1978 1969 page->pobjects = pobjects; 1979 1970 page->next = oldpage; 1980 1971 1981 - } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage); 1972 + } while (irqsafe_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage); 1982 1973 stat(s, CPU_PARTIAL_FREE); 1983 1974 return pobjects; 1984 1975 } ··· 4444 4435 4445 4436 for_each_possible_cpu(cpu) { 4446 4437 struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); 4438 + int node = ACCESS_ONCE(c->node); 4447 4439 struct page *page; 4448 4440 4449 - if (!c || c->node < 0) 4441 + if (node < 0) 4450 4442 continue; 4451 - 4452 - if (c->page) { 4453 - if (flags & SO_TOTAL) 4454 - x = c->page->objects; 4443 + page = ACCESS_ONCE(c->page); 4444 + if (page) { 4445 + if (flags & SO_TOTAL) 4446 + x = page->objects; 4455 4447 else if (flags & SO_OBJECTS) 4456 - x = c->page->inuse; 4448 + x = page->inuse; 4457 4449 else 4458 4450 x = 1; 4459 
4451 4460 4452 total += x; 4461 - nodes[c->node] += x; 4453 + nodes[node] += x; 4462 4454 } 4463 4455 page = c->partial; 4464 4456 4465 4457 if (page) { 4466 4458 x = page->pobjects; 4467 - total += x; 4468 - nodes[c->node] += x; 4459 + total += x; 4460 + nodes[node] += x; 4469 4461 } 4470 - per_cpu[c->node]++; 4462 + per_cpu[node]++; 4471 4463 } 4472 4464 } 4473 4465
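The slub change above stops calling discard_slab() while n->list_lock is held; empty pages are instead chained on a local discard_page list and freed after the lock is dropped. The same defer-under-lock pattern, sketched with a generic singly linked list (node layout and names hypothetical):

```c
#include <assert.h>
#include <stddef.h>

struct node {
    int empty;
    struct node *next;
};

/* Walk a list while (conceptually) holding the lock, unlinking empty
 * nodes onto a local discard list instead of freeing them in place;
 * the expensive per-node work runs only after the lock is dropped,
 * mirroring the discard_page list added to unfreeze_partials(). */
static int collect_then_discard(struct node *head)
{
    struct node *discard = NULL;
    int discarded = 0;

    /* --- lock held: only relink, never free --- */
    while (head) {
        struct node *n = head;
        head = head->next;
        if (n->empty) {
            n->next = discard;
            discard = n;
        }
    }
    /* --- lock dropped: now do the costly part --- */
    for (struct node *n = discard; n; n = n->next)
        discarded++;
    return discarded;
}

/* Hypothetical three-node list, two of them empty. */
static int demo_discard(void)
{
    struct node c = { 1, NULL };
    struct node b = { 0, &c };
    struct node a = { 1, &b };
    return collect_then_discard(&a);
}
```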
+1 -2
net/sunrpc/xprtsock.c
··· 496 496 struct rpc_rqst *req = task->tk_rqstp; 497 497 struct rpc_xprt *xprt = req->rq_xprt; 498 498 struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt); 499 - int ret = 0; 499 + int ret = -EAGAIN; 500 500 501 501 dprintk("RPC: %5u xmit incomplete (%u left of %u)\n", 502 502 task->tk_pid, req->rq_slen - req->rq_bytes_sent, ··· 508 508 /* Don't race with disconnect */ 509 509 if (xprt_connected(xprt)) { 510 510 if (test_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags)) { 511 - ret = -EAGAIN; 512 511 /* 513 512 * Notify TCP that we're limited by the application 514 513 * window size
+1 -1
sound/pci/cs5535audio/cs5535audio_pcm.c
··· 148 148 struct cs5535audio_dma_desc *desc = 149 149 &((struct cs5535audio_dma_desc *) dma->desc_buf.area)[i]; 150 150 desc->addr = cpu_to_le32(addr); 151 - desc->size = cpu_to_le32(period_bytes); 151 + desc->size = cpu_to_le16(period_bytes); 152 152 desc->ctlreserved = cpu_to_le16(PRD_EOP); 153 153 desc_addr += sizeof(struct cs5535audio_dma_desc); 154 154 addr += period_bytes;
+3 -3
sound/pci/hda/hda_codec.c
··· 4046 4046 4047 4047 /* Search for codec ID */ 4048 4048 for (q = tbl; q->subvendor; q++) { 4049 - unsigned long vendorid = (q->subdevice) | (q->subvendor << 16); 4050 - 4051 - if (vendorid == codec->subsystem_id) 4049 + unsigned int mask = 0xffff0000 | q->subdevice_mask; 4050 + unsigned int id = (q->subdevice | (q->subvendor << 16)) & mask; 4051 + if ((codec->subsystem_id & mask) == id) 4052 4052 break; 4053 4053 } 4054 4054
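The hda_codec fix above compares subsystem IDs under the quirk entry's subdevice_mask, so one table entry can match a whole range of subdevice IDs rather than a single exact value. The matching rule, extracted into a standalone sketch (table values hypothetical):

```c
#include <assert.h>

struct quirk {
    unsigned int subvendor;
    unsigned int subdevice;
    unsigned int subdevice_mask;   /* 0xffff would mean exact match */
};

/* As in the fixed loop: vendor bits are always compared exactly,
 * device bits only under the entry's mask. */
static int quirk_matches(const struct quirk *q, unsigned int subsystem_id)
{
    unsigned int mask = 0xffff0000u | q->subdevice_mask;
    unsigned int id = (q->subdevice | (q->subvendor << 16)) & mask;
    return (subsystem_id & mask) == id;
}

/* Hypothetical entry matching subvendor 0x106b, subdevice 0x16xx. */
static int demo_match(unsigned int subsystem_id)
{
    static const struct quirk q = { 0x106b, 0x1600, 0xff00 };
    return quirk_matches(&q, subsystem_id);
}
```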
+19 -9
sound/pci/hda/hda_eld.c
··· 347 347 348 348 for (i = 0; i < size; i++) { 349 349 unsigned int val = hdmi_get_eld_data(codec, nid, i); 350 + /* 351 + * Graphics driver might be writing to ELD buffer right now. 352 + * Just abort. The caller will repoll after a while. 353 + */ 350 354 if (!(val & AC_ELDD_ELD_VALID)) { 351 - if (!i) { 352 - snd_printd(KERN_INFO 353 - "HDMI: invalid ELD data\n"); 354 - ret = -EINVAL; 355 - goto error; 356 - } 357 355 snd_printd(KERN_INFO 358 356 "HDMI: invalid ELD data byte %d\n", i); 359 - val = 0; 360 - } else 361 - val &= AC_ELDD_ELD_DATA; 357 + ret = -EINVAL; 358 + goto error; 359 + } 360 + val &= AC_ELDD_ELD_DATA; 361 + /* 362 + * The first byte cannot be zero. This can happen on some DVI 363 + * connections. Some Intel chips may also need some 250ms delay 364 + * to return non-zero ELD data, even when the graphics driver 365 + * correctly writes ELD content before setting ELD_valid bit. 366 + */ 367 + if (!val && !i) { 368 + snd_printdd(KERN_INFO "HDMI: 0 ELD data\n"); 369 + ret = -EINVAL; 370 + goto error; 371 + } 362 372 buf[i] = val; 363 373 } 364 374
+23 -9
sound/pci/hda/patch_cirrus.c
··· 58 58 unsigned int gpio_mask; 59 59 unsigned int gpio_dir; 60 60 unsigned int gpio_data; 61 + unsigned int gpio_eapd_hp; /* EAPD GPIO bit for headphones */ 62 + unsigned int gpio_eapd_speaker; /* EAPD GPIO bit for speakers */ 61 63 62 64 struct hda_pcm pcm_rec[2]; /* PCM information */ 63 65 ··· 78 76 CS420X_MBP53, 79 77 CS420X_MBP55, 80 78 CS420X_IMAC27, 79 + CS420X_APPLE, 81 80 CS420X_AUTO, 82 81 CS420X_MODELS 83 82 }; ··· 931 928 spdif_present ? 0 : PIN_OUT); 932 929 } 933 930 } 934 - if (spec->board_config == CS420X_MBP53 || 935 - spec->board_config == CS420X_MBP55 || 936 - spec->board_config == CS420X_IMAC27) { 937 - unsigned int gpio = hp_present ? 0x02 : 0x08; 931 + if (spec->gpio_eapd_hp) { 932 + unsigned int gpio = hp_present ? 933 + spec->gpio_eapd_hp : spec->gpio_eapd_speaker; 938 934 snd_hda_codec_write(codec, 0x01, 0, 939 935 AC_VERB_SET_GPIO_DATA, gpio); 940 936 } ··· 1278 1276 [CS420X_MBP53] = "mbp53", 1279 1277 [CS420X_MBP55] = "mbp55", 1280 1278 [CS420X_IMAC27] = "imac27", 1279 + [CS420X_APPLE] = "apple", 1281 1280 [CS420X_AUTO] = "auto", 1282 1281 }; 1283 1282 ··· 1288 1285 SND_PCI_QUIRK(0x10de, 0x0d94, "MacBookAir 3,1(2)", CS420X_MBP55), 1289 1286 SND_PCI_QUIRK(0x10de, 0xcb79, "MacBookPro 5,5", CS420X_MBP55), 1290 1287 SND_PCI_QUIRK(0x10de, 0xcb89, "MacBookPro 7,1", CS420X_MBP55), 1291 - SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27), 1288 + /* this conflicts with too many other models */ 1289 + /*SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27),*/ 1290 + {} /* terminator */ 1291 + }; 1292 + 1293 + static const struct snd_pci_quirk cs420x_codec_cfg_tbl[] = { 1294 + SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE), 1292 1295 {} /* terminator */ 1293 1296 }; 1294 1297 ··· 1376 1367 spec->board_config = 1377 1368 snd_hda_check_board_config(codec, CS420X_MODELS, 1378 1369 cs420x_models, cs420x_cfg_tbl); 1370 + if (spec->board_config < 0) 1371 + spec->board_config = 1372 + snd_hda_check_board_codec_sid_config(codec, 
1373 + CS420X_MODELS, NULL, cs420x_codec_cfg_tbl); 1379 1374 if (spec->board_config >= 0) 1380 1375 fix_pincfg(codec, spec->board_config, cs_pincfgs); 1381 1376 ··· 1387 1374 case CS420X_IMAC27: 1388 1375 case CS420X_MBP53: 1389 1376 case CS420X_MBP55: 1390 - /* GPIO1 = headphones */ 1391 - /* GPIO3 = speakers */ 1392 - spec->gpio_mask = 0x0a; 1393 - spec->gpio_dir = 0x0a; 1377 + case CS420X_APPLE: 1378 + spec->gpio_eapd_hp = 2; /* GPIO1 = headphones */ 1379 + spec->gpio_eapd_speaker = 8; /* GPIO3 = speakers */ 1380 + spec->gpio_mask = spec->gpio_dir = 1381 + spec->gpio_eapd_hp | spec->gpio_eapd_speaker; 1394 1382 break; 1395 1383 } 1396 1384
+10 -6
sound/pci/hda/patch_hdmi.c
··· 69 69 struct hda_codec *codec; 70 70 struct hdmi_eld sink_eld; 71 71 struct delayed_work work; 72 + int repoll_count; 72 73 }; 73 74 74 75 struct hdmi_spec { ··· 749 748 * Unsolicited events 750 749 */ 751 750 752 - static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, bool retry); 751 + static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll); 753 752 754 753 static void hdmi_intrinsic_event(struct hda_codec *codec, unsigned int res) 755 754 { ··· 767 766 if (pin_idx < 0) 768 767 return; 769 768 770 - hdmi_present_sense(&spec->pins[pin_idx], true); 769 + hdmi_present_sense(&spec->pins[pin_idx], 1); 771 770 } 772 771 773 772 static void hdmi_non_intrinsic_event(struct hda_codec *codec, unsigned int res) ··· 961 960 return 0; 962 961 } 963 962 964 - static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, bool retry) 963 + static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll) 965 964 { 966 965 struct hda_codec *codec = per_pin->codec; 967 966 struct hdmi_eld *eld = &per_pin->sink_eld; ··· 990 989 if (eld_valid) { 991 990 if (!snd_hdmi_get_eld(eld, codec, pin_nid)) 992 991 snd_hdmi_show_eld(eld); 993 - else if (retry) { 992 + else if (repoll) { 994 993 queue_delayed_work(codec->bus->workq, 995 994 &per_pin->work, 996 995 msecs_to_jiffies(300)); ··· 1005 1004 struct hdmi_spec_per_pin *per_pin = 1006 1005 container_of(to_delayed_work(work), struct hdmi_spec_per_pin, work); 1007 1006 1008 - hdmi_present_sense(per_pin, false); 1007 + if (per_pin->repoll_count++ > 6) 1008 + per_pin->repoll_count = 0; 1009 + 1010 + hdmi_present_sense(per_pin, per_pin->repoll_count); 1009 1011 } 1010 1012 1011 1013 static int hdmi_add_pin(struct hda_codec *codec, hda_nid_t pin_nid) ··· 1239 1235 if (err < 0) 1240 1236 return err; 1241 1237 1242 - hdmi_present_sense(per_pin, false); 1238 + hdmi_present_sense(per_pin, 0); 1243 1239 return 0; 1244 1240 } 1245 1241
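The patch_hdmi change above turns the boolean retry flag into a repoll_count, so an invalid ELD read is retried a bounded number of times (with a 300ms delayed-work gap) instead of at most once. A simplified sketch of the bounded-repoll control flow (the probe function is hypothetical, and the real driver resets the counter rather than returning):

```c
#include <assert.h>

#define MAX_REPOLLS 6

/* Keep "rescheduling" while the probe fails and the repoll budget is
 * not exhausted; returns the number of attempts actually made. */
static int present_sense_loop(int succeed_on_attempt)
{
    int repoll_count = 0;
    int attempts = 0;
    for (;;) {
        attempts++;
        if (attempts == succeed_on_attempt)
            return attempts;            /* valid ELD read */
        if (repoll_count++ >= MAX_REPOLLS)
            return attempts;            /* budget spent: give up */
        /* would queue_delayed_work(..., msecs_to_jiffies(300)) here */
    }
}
```

With a budget of 6 repolls, a probe that never succeeds makes one initial attempt plus six retries.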
+23 -11
sound/pci/hda/patch_realtek.c
··· 277 277 return false; 278 278 } 279 279 280 + static inline hda_nid_t get_capsrc(struct alc_spec *spec, int idx) 281 + { 282 + return spec->capsrc_nids ? 283 + spec->capsrc_nids[idx] : spec->adc_nids[idx]; 284 + } 285 + 280 286 /* select the given imux item; either unmute exclusively or select the route */ 281 287 static int alc_mux_select(struct hda_codec *codec, unsigned int adc_idx, 282 288 unsigned int idx, bool force) ··· 309 303 adc_idx = spec->dyn_adc_idx[idx]; 310 304 } 311 305 312 - nid = spec->capsrc_nids ? 313 - spec->capsrc_nids[adc_idx] : spec->adc_nids[adc_idx]; 306 + nid = get_capsrc(spec, adc_idx); 314 307 315 308 /* no selection? */ 316 309 num_conns = snd_hda_get_conn_list(codec, nid, NULL); ··· 1059 1054 spec->imux_pins[2] = spec->dock_mic_pin; 1060 1055 for (i = 0; i < 3; i++) { 1061 1056 strcpy(imux->items[i].label, texts[i]); 1062 - if (spec->imux_pins[i]) 1057 + if (spec->imux_pins[i]) { 1058 + hda_nid_t pin = spec->imux_pins[i]; 1059 + int c; 1060 + for (c = 0; c < spec->num_adc_nids; c++) { 1061 + hda_nid_t cap = get_capsrc(spec, c); 1062 + int idx = get_connection_index(codec, cap, pin); 1063 + if (idx >= 0) { 1064 + imux->items[i].index = idx; 1065 + break; 1066 + } 1067 + } 1063 1068 imux->num_items = i + 1; 1069 + } 1064 1070 } 1065 1071 spec->num_mux_defs = 1; 1066 1072 spec->input_mux = imux; ··· 1973 1957 if (!kctl) 1974 1958 kctl = snd_hda_find_mixer_ctl(codec, "Input Source"); 1975 1959 for (i = 0; kctl && i < kctl->count; i++) { 1976 - const hda_nid_t *nids = spec->capsrc_nids; 1977 - if (!nids) 1978 - nids = spec->adc_nids; 1979 - err = snd_hda_add_nid(codec, kctl, i, nids[i]); 1960 + err = snd_hda_add_nid(codec, kctl, i, 1961 + get_capsrc(spec, i)); 1980 1962 if (err < 0) 1981 1963 return err; 1982 1964 } ··· 2761 2747 } 2762 2748 2763 2749 for (c = 0; c < num_adcs; c++) { 2764 - hda_nid_t cap = spec->capsrc_nids ? 
2765 - spec->capsrc_nids[c] : spec->adc_nids[c]; 2750 + hda_nid_t cap = get_capsrc(spec, c); 2766 2751 idx = get_connection_index(codec, cap, pin); 2767 2752 if (idx >= 0) { 2768 2753 spec->imux_pins[imux->num_items] = pin; ··· 3707 3694 if (!pin) 3708 3695 return 0; 3709 3696 for (i = 0; i < spec->num_adc_nids; i++) { 3710 - hda_nid_t cap = spec->capsrc_nids ? 3711 - spec->capsrc_nids[i] : spec->adc_nids[i]; 3697 + hda_nid_t cap = get_capsrc(spec, i); 3712 3698 int idx; 3713 3699 3714 3700 idx = get_connection_index(codec, cap, pin);
+2
sound/pci/hda/patch_sigmatel.c
··· 1641 1641 "Alienware M17x", STAC_ALIENWARE_M17X), 1642 1642 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x043a, 1643 1643 "Alienware M17x", STAC_ALIENWARE_M17X), 1644 + SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0490, 1645 + "Alienware M17x", STAC_ALIENWARE_M17X), 1644 1646 {} /* terminator */ 1645 1647 }; 1646 1648
+45 -35
sound/pci/hda/patch_via.c
··· 208 208 /* work to check hp jack state */ 209 209 struct hda_codec *codec; 210 210 struct delayed_work vt1708_hp_work; 211 + int hp_work_active; 211 212 int vt1708_jack_detect; 212 213 int vt1708_hp_present; 213 214 ··· 306 305 static void analog_low_current_mode(struct hda_codec *codec); 307 306 static bool is_aa_path_mute(struct hda_codec *codec); 308 307 309 - static void vt1708_start_hp_work(struct via_spec *spec) 310 - { 311 - if (spec->codec_type != VT1708 || spec->autocfg.hp_pins[0] == 0) 312 - return; 313 - snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 314 - !spec->vt1708_jack_detect); 315 - if (!delayed_work_pending(&spec->vt1708_hp_work)) 316 - schedule_delayed_work(&spec->vt1708_hp_work, 317 - msecs_to_jiffies(100)); 318 - } 308 + #define hp_detect_with_aa(codec) \ 309 + (snd_hda_get_bool_hint(codec, "analog_loopback_hp_detect") == 1 && \ 310 + !is_aa_path_mute(codec)) 319 311 320 312 static void vt1708_stop_hp_work(struct via_spec *spec) 321 313 { 322 314 if (spec->codec_type != VT1708 || spec->autocfg.hp_pins[0] == 0) 323 315 return; 324 - if (snd_hda_get_bool_hint(spec->codec, "analog_loopback_hp_detect") == 1 325 - && !is_aa_path_mute(spec->codec)) 316 + if (spec->hp_work_active) { 317 + snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 1); 318 + cancel_delayed_work_sync(&spec->vt1708_hp_work); 319 + spec->hp_work_active = 0; 320 + } 321 + } 322 + 323 + static void vt1708_update_hp_work(struct via_spec *spec) 324 + { 325 + if (spec->codec_type != VT1708 || spec->autocfg.hp_pins[0] == 0) 326 326 return; 327 - snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 328 - !spec->vt1708_jack_detect); 329 - cancel_delayed_work_sync(&spec->vt1708_hp_work); 327 + if (spec->vt1708_jack_detect && 328 + (spec->active_streams || hp_detect_with_aa(spec->codec))) { 329 + if (!spec->hp_work_active) { 330 + snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 0); 331 + schedule_delayed_work(&spec->vt1708_hp_work, 332 + msecs_to_jiffies(100)); 333 + spec->hp_work_active = 1; 
334 + } 335 + } else if (!hp_detect_with_aa(spec->codec)) 336 + vt1708_stop_hp_work(spec); 330 337 } 331 338 332 339 static void set_widgets_power_state(struct hda_codec *codec) ··· 352 343 353 344 set_widgets_power_state(codec); 354 345 analog_low_current_mode(snd_kcontrol_chip(kcontrol)); 355 - if (snd_hda_get_bool_hint(codec, "analog_loopback_hp_detect") == 1) { 356 - if (is_aa_path_mute(codec)) 357 - vt1708_start_hp_work(codec->spec); 358 - else 359 - vt1708_stop_hp_work(codec->spec); 360 - } 346 + vt1708_update_hp_work(codec->spec); 361 347 return change; 362 348 } 363 349 ··· 1158 1154 spec->cur_dac_stream_tag = stream_tag; 1159 1155 spec->cur_dac_format = format; 1160 1156 mutex_unlock(&spec->config_mutex); 1161 - vt1708_start_hp_work(spec); 1157 + vt1708_update_hp_work(spec); 1162 1158 return 0; 1163 1159 } 1164 1160 ··· 1178 1174 spec->cur_hp_stream_tag = stream_tag; 1179 1175 spec->cur_hp_format = format; 1180 1176 mutex_unlock(&spec->config_mutex); 1181 - vt1708_start_hp_work(spec); 1177 + vt1708_update_hp_work(spec); 1182 1178 return 0; 1183 1179 } 1184 1180 ··· 1192 1188 snd_hda_multi_out_analog_cleanup(codec, &spec->multiout); 1193 1189 spec->active_streams &= ~STREAM_MULTI_OUT; 1194 1190 mutex_unlock(&spec->config_mutex); 1195 - vt1708_stop_hp_work(spec); 1191 + vt1708_update_hp_work(spec); 1196 1192 return 0; 1197 1193 } 1198 1194 ··· 1207 1203 snd_hda_codec_setup_stream(codec, spec->hp_dac_nid, 0, 0, 0); 1208 1204 spec->active_streams &= ~STREAM_INDEP_HP; 1209 1205 mutex_unlock(&spec->config_mutex); 1210 - vt1708_stop_hp_work(spec); 1206 + vt1708_update_hp_work(spec); 1211 1207 return 0; 1212 1208 } 1213 1209 ··· 1649 1645 int nums; 1650 1646 struct via_spec *spec = codec->spec; 1651 1647 1652 - if (!spec->hp_independent_mode && spec->autocfg.hp_pins[0]) 1648 + if (!spec->hp_independent_mode && spec->autocfg.hp_pins[0] && 1649 + (spec->codec_type != VT1708 || spec->vt1708_jack_detect)) 1653 1650 present = snd_hda_jack_detect(codec, 
spec->autocfg.hp_pins[0]); 1654 1651 1655 1652 if (spec->smart51_enabled) ··· 2617 2612 2618 2613 if (spec->codec_type != VT1708) 2619 2614 return 0; 2620 - spec->vt1708_jack_detect = 2621 - !((snd_hda_codec_read(codec, 0x1, 0, 0xf84, 0) >> 8) & 0x1); 2622 2615 ucontrol->value.integer.value[0] = spec->vt1708_jack_detect; 2623 2616 return 0; 2624 2617 } ··· 2626 2623 { 2627 2624 struct hda_codec *codec = snd_kcontrol_chip(kcontrol); 2628 2625 struct via_spec *spec = codec->spec; 2629 - int change; 2626 + int val; 2630 2627 2631 2628 if (spec->codec_type != VT1708) 2632 2629 return 0; 2633 - spec->vt1708_jack_detect = ucontrol->value.integer.value[0]; 2634 - change = (0x1 & (snd_hda_codec_read(codec, 0x1, 0, 0xf84, 0) >> 8)) 2635 - == !spec->vt1708_jack_detect; 2636 - if (spec->vt1708_jack_detect) { 2630 + val = !!ucontrol->value.integer.value[0]; 2631 + if (spec->vt1708_jack_detect == val) 2632 + return 0; 2633 + spec->vt1708_jack_detect = val; 2634 + if (spec->vt1708_jack_detect && 2635 + snd_hda_get_bool_hint(codec, "analog_loopback_hp_detect") != 1) { 2637 2636 mute_aa_path(codec, 1); 2638 2637 notify_aa_path_ctls(codec); 2639 2638 } 2640 - return change; 2639 + via_hp_automute(codec); 2640 + vt1708_update_hp_work(spec); 2641 + return 1; 2641 2642 } 2642 2643 2643 2644 static const struct snd_kcontrol_new vt1708_jack_detect_ctl = { ··· 2778 2771 via_auto_init_unsol_event(codec); 2779 2772 2780 2773 via_hp_automute(codec); 2774 + vt1708_update_hp_work(spec); 2781 2775 2782 2776 return 0; 2783 2777 } ··· 2795 2787 spec->vt1708_hp_present ^= 1; 2796 2788 via_hp_automute(spec->codec); 2797 2789 } 2798 - vt1708_start_hp_work(spec); 2790 + if (spec->vt1708_jack_detect) 2791 + schedule_delayed_work(&spec->vt1708_hp_work, 2792 + msecs_to_jiffies(100)); 2799 2793 } 2800 2794 2801 2795 static int get_mux_nids(struct hda_codec *codec)
+16 -7
sound/pci/lx6464es/lx_core.c
··· 78 78 return ioread32(address); 79 79 } 80 80 81 - void lx_dsp_reg_readbuf(struct lx6464es *chip, int port, u32 *data, u32 len) 81 + static void lx_dsp_reg_readbuf(struct lx6464es *chip, int port, u32 *data, 82 + u32 len) 82 83 { 83 - void __iomem *address = lx_dsp_register(chip, port); 84 - memcpy_fromio(data, address, len*sizeof(u32)); 84 + u32 __iomem *address = lx_dsp_register(chip, port); 85 + int i; 86 + 87 + /* we cannot use memcpy_fromio */ 88 + for (i = 0; i != len; ++i) 89 + data[i] = ioread32(address + i); 85 90 } 86 91 87 92 ··· 96 91 iowrite32(data, address); 97 92 } 98 93 99 - void lx_dsp_reg_writebuf(struct lx6464es *chip, int port, const u32 *data, 100 - u32 len) 94 + static void lx_dsp_reg_writebuf(struct lx6464es *chip, int port, 95 + const u32 *data, u32 len) 101 96 { 102 - void __iomem *address = lx_dsp_register(chip, port); 103 - memcpy_toio(address, data, len*sizeof(u32)); 97 + u32 __iomem *address = lx_dsp_register(chip, port); 98 + int i; 99 + 100 + /* we cannot use memcpy_toio */ 101 + for (i = 0; i != len; ++i) 102 + iowrite32(data[i], address + i); 104 103 } 105 104 106 105
-3
sound/pci/lx6464es/lx_core.h
··· 72 72 }; 73 73 74 74 unsigned long lx_dsp_reg_read(struct lx6464es *chip, int port); 75 - void lx_dsp_reg_readbuf(struct lx6464es *chip, int port, u32 *data, u32 len); 76 75 void lx_dsp_reg_write(struct lx6464es *chip, int port, unsigned data); 77 - void lx_dsp_reg_writebuf(struct lx6464es *chip, int port, const u32 *data, 78 - u32 len); 79 76 80 77 /* plx register access */ 81 78 enum {
+1 -1
sound/pci/rme9652/hdspm.c
··· 6518 6518 hdspm->io_type = AES32; 6519 6519 hdspm->card_name = "RME AES32"; 6520 6520 hdspm->midiPorts = 2; 6521 - } else if ((hdspm->firmware_rev == 0xd5) || 6521 + } else if ((hdspm->firmware_rev == 0xd2) || 6522 6522 ((hdspm->firmware_rev >= 0xc8) && 6523 6523 (hdspm->firmware_rev <= 0xcf))) { 6524 6524 hdspm->io_type = MADI;
+1 -1
sound/soc/codecs/adau1373.c
··· 245 245 }; 246 246 247 247 static const unsigned int adau1373_bass_tlv[] = { 248 - TLV_DB_RANGE_HEAD(4), 248 + TLV_DB_RANGE_HEAD(3), 249 249 0, 2, TLV_DB_SCALE_ITEM(-600, 600, 1), 250 250 3, 4, TLV_DB_SCALE_ITEM(950, 250, 0), 251 251 5, 7, TLV_DB_SCALE_ITEM(1400, 150, 0),
+5 -3
sound/soc/codecs/cs4271.c
··· 434 434 { 435 435 int ret; 436 436 /* Set power-down bit */ 437 - ret = snd_soc_update_bits(codec, CS4271_MODE2, 0, CS4271_MODE2_PDN); 437 + ret = snd_soc_update_bits(codec, CS4271_MODE2, CS4271_MODE2_PDN, 438 + CS4271_MODE2_PDN); 438 439 if (ret < 0) 439 440 return ret; 440 441 return 0; ··· 502 501 return ret; 503 502 } 504 503 505 - ret = snd_soc_update_bits(codec, CS4271_MODE2, 0, 506 - CS4271_MODE2_PDN | CS4271_MODE2_CPEN); 504 + ret = snd_soc_update_bits(codec, CS4271_MODE2, 505 + CS4271_MODE2_PDN | CS4271_MODE2_CPEN, 506 + CS4271_MODE2_PDN | CS4271_MODE2_CPEN); 507 507 if (ret < 0) 508 508 return ret; 509 509 ret = snd_soc_update_bits(codec, CS4271_MODE2, CS4271_MODE2_PDN, 0);
+1 -1
sound/soc/codecs/rt5631.c
··· 177 177 static const DECLARE_TLV_DB_SCALE(in_vol_tlv, -3450, 150, 0); 178 178 /* {0, +20, +24, +30, +35, +40, +44, +50, +52}dB */ 179 179 static unsigned int mic_bst_tlv[] = { 180 - TLV_DB_RANGE_HEAD(6), 180 + TLV_DB_RANGE_HEAD(7), 181 181 0, 0, TLV_DB_SCALE_ITEM(0, 0, 0), 182 182 1, 1, TLV_DB_SCALE_ITEM(2000, 0, 0), 183 183 2, 2, TLV_DB_SCALE_ITEM(2400, 0, 0),
+1 -1
sound/soc/codecs/sgtl5000.c
··· 365 365 366 366 /* tlv for mic gain, 0db 20db 30db 40db */ 367 367 static const unsigned int mic_gain_tlv[] = { 368 - TLV_DB_RANGE_HEAD(4), 368 + TLV_DB_RANGE_HEAD(2), 369 369 0, 0, TLV_DB_SCALE_ITEM(0, 0, 0), 370 370 1, 3, TLV_DB_SCALE_ITEM(2000, 1000, 0), 371 371 };
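The TLV fixes above (adau1373, rt5631, sgtl5000, and the wm89xx/wm9090 hunks further down) all correct the TLV_DB_RANGE_HEAD() argument so it equals the number of range entries that follow; a mismatched count makes user space mis-parse the dB ranges. A simplified consistency check over a model of such a table (the real ALSA TLV encoding packs more words per entry; this only illustrates the count-vs-entries invariant):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified dB-range table: a declared range count followed by
 * {low_index, high_index} entries. */
struct db_range { unsigned int lo, hi; };

static int range_head_consistent(unsigned int declared,
                                 const struct db_range *r, size_t n)
{
    if (declared != n)
        return 0;
    /* entries should also tile the control's index space contiguously */
    for (size_t i = 1; i < n; i++)
        if (r[i].lo != r[i - 1].hi + 1)
            return 0;
    return 1;
}

/* sgtl5000 mic gain has two ranges (0..0 and 1..3), so the head must
 * declare 2, not the 4 it declared before the fix. */
static int demo_mic_gain(void)
{
    static const struct db_range mic[] = { {0, 0}, {1, 3} };
    return range_head_consistent(2, mic, 2) &&
           !range_head_consistent(4, mic, 2);
}
```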
+62 -1
sound/soc/codecs/sta32x.c
··· 76 76 77 77 unsigned int mclk; 78 78 unsigned int format; 79 + 80 + u32 coef_shadow[STA32X_COEF_COUNT]; 79 81 }; 80 82 81 83 static const DECLARE_TLV_DB_SCALE(mvol_tlv, -12700, 50, 1); ··· 229 227 struct snd_ctl_elem_value *ucontrol) 230 228 { 231 229 struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); 230 + struct sta32x_priv *sta32x = snd_soc_codec_get_drvdata(codec); 232 231 int numcoef = kcontrol->private_value >> 16; 233 232 int index = kcontrol->private_value & 0xffff; 234 233 unsigned int cfud; ··· 242 239 snd_soc_write(codec, STA32X_CFUD, cfud); 243 240 244 241 snd_soc_write(codec, STA32X_CFADDR2, index); 242 + for (i = 0; i < numcoef && (index + i < STA32X_COEF_COUNT); i++) 243 + sta32x->coef_shadow[index + i] = 244 + (ucontrol->value.bytes.data[3 * i] << 16) 245 + | (ucontrol->value.bytes.data[3 * i + 1] << 8) 246 + | (ucontrol->value.bytes.data[3 * i + 2]); 245 247 for (i = 0; i < 3 * numcoef; i++) 246 248 snd_soc_write(codec, STA32X_B1CF1 + i, 247 249 ucontrol->value.bytes.data[i]); ··· 258 250 return -EINVAL; 259 251 260 252 return 0; 253 + } 254 + 255 + int sta32x_sync_coef_shadow(struct snd_soc_codec *codec) 256 + { 257 + struct sta32x_priv *sta32x = snd_soc_codec_get_drvdata(codec); 258 + unsigned int cfud; 259 + int i; 260 + 261 + /* preserve reserved bits in STA32X_CFUD */ 262 + cfud = snd_soc_read(codec, STA32X_CFUD) & 0xf0; 263 + 264 + for (i = 0; i < STA32X_COEF_COUNT; i++) { 265 + snd_soc_write(codec, STA32X_CFADDR2, i); 266 + snd_soc_write(codec, STA32X_B1CF1, 267 + (sta32x->coef_shadow[i] >> 16) & 0xff); 268 + snd_soc_write(codec, STA32X_B1CF2, 269 + (sta32x->coef_shadow[i] >> 8) & 0xff); 270 + snd_soc_write(codec, STA32X_B1CF3, 271 + (sta32x->coef_shadow[i]) & 0xff); 272 + /* chip documentation does not say if the bits are 273 + * self-clearing, so do it explicitly */ 274 + snd_soc_write(codec, STA32X_CFUD, cfud); 275 + snd_soc_write(codec, STA32X_CFUD, cfud | 0x01); 276 + } 277 + return 0; 278 + } 279 + 280 + int 
sta32x_cache_sync(struct snd_soc_codec *codec) 281 + { 282 + unsigned int mute; 283 + int rc; 284 + 285 + if (!codec->cache_sync) 286 + return 0; 287 + 288 + /* mute during register sync */ 289 + mute = snd_soc_read(codec, STA32X_MMUTE); 290 + snd_soc_write(codec, STA32X_MMUTE, mute | STA32X_MMUTE_MMUTE); 291 + sta32x_sync_coef_shadow(codec); 292 + rc = snd_soc_cache_sync(codec); 293 + snd_soc_write(codec, STA32X_MMUTE, mute); 294 + return rc; 261 295 } 262 296 263 297 #define SINGLE_COEF(xname, index) \ ··· 711 661 return ret; 712 662 } 713 663 714 - snd_soc_cache_sync(codec); 664 + sta32x_cache_sync(codec); 715 665 } 716 666 717 667 /* Power up to mute */ ··· 839 789 snd_soc_update_bits(codec, STA32X_C3CFG, 840 790 STA32X_CxCFG_OM_MASK, 841 791 2 << STA32X_CxCFG_OM_SHIFT); 792 + 793 + /* initialize coefficient shadow RAM with reset values */ 794 + for (i = 4; i <= 49; i += 5) 795 + sta32x->coef_shadow[i] = 0x400000; 796 + for (i = 50; i <= 54; i++) 797 + sta32x->coef_shadow[i] = 0x7fffff; 798 + sta32x->coef_shadow[55] = 0x5a9df7; 799 + sta32x->coef_shadow[56] = 0x7fffff; 800 + sta32x->coef_shadow[59] = 0x7fffff; 801 + sta32x->coef_shadow[60] = 0x400000; 802 + sta32x->coef_shadow[61] = 0x400000; 842 803 843 804 sta32x_set_bias_level(codec, SND_SOC_BIAS_STANDBY); 844 805 /* Bias level configuration will have done an extra enable */
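The sta32x change above keeps a coef_shadow[] copy of the DSP coefficient RAM, because the regular register cache cannot restore indirectly addressed coefficient memory after a power cycle. The shadow-write/replay pattern in a generic form (device struct and function names hypothetical):

```c
#include <assert.h>
#include <string.h>

#define COEF_COUNT 8

struct dev {
    unsigned int hw[COEF_COUNT];      /* volatile coefficient RAM */
    unsigned int shadow[COEF_COUNT];  /* driver-side copy */
};

/* Every write lands in the shadow first, mirroring how the coefficient
 * put handler now records values in coef_shadow[]. */
static void coef_write(struct dev *d, int idx, unsigned int val)
{
    d->shadow[idx] = val;
    d->hw[idx] = val;
}

/* After power loss wipes hw, replay the shadow, as
 * sta32x_sync_coef_shadow() does before the register cache sync. */
static void coef_restore(struct dev *d)
{
    memcpy(d->hw, d->shadow, sizeof(d->hw));
}

static int demo_restore(void)
{
    struct dev d = {0};
    coef_write(&d, 3, 0x400000);
    memset(d.hw, 0, sizeof(d.hw));    /* simulate power cycle */
    coef_restore(&d);
    return d.hw[3] == 0x400000;
}
```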
+1
sound/soc/codecs/sta32x.h
··· 19 19 /* STA326 register addresses */ 20 20 21 21 #define STA32X_REGISTER_COUNT 0x2d 22 + #define STA32X_COEF_COUNT 62 22 23 23 24 #define STA32X_CONFA 0x00 24 25 #define STA32X_CONFB 0x01
+1
sound/soc/codecs/wm8731.c
··· 453 453 snd_soc_write(codec, WM8731_PWR, 0xffff); 454 454 regulator_bulk_disable(ARRAY_SIZE(wm8731->supplies), 455 455 wm8731->supplies); 456 + codec->cache_sync = 1; 456 457 break; 457 458 } 458 459 codec->dapm.bias_level = level;
+3
sound/soc/codecs/wm8753.c
··· 190 190 struct wm8753_priv *wm8753 = snd_soc_codec_get_drvdata(codec); 191 191 u16 ioctl; 192 192 193 + if (wm8753->dai_func == ucontrol->value.integer.value[0]) 194 + return 0; 195 + 193 196 if (codec->active) 194 197 return -EBUSY; 195 198
+2 -2
sound/soc/codecs/wm8962.c
··· 1973 1973 static const DECLARE_TLV_DB_SCALE(inpga_tlv, -2325, 75, 0); 1974 1974 static const DECLARE_TLV_DB_SCALE(mixin_tlv, -1500, 300, 0); 1975 1975 static const unsigned int mixinpga_tlv[] = { 1976 - TLV_DB_RANGE_HEAD(7), 1976 + TLV_DB_RANGE_HEAD(5), 1977 1977 0, 1, TLV_DB_SCALE_ITEM(0, 600, 0), 1978 1978 2, 2, TLV_DB_SCALE_ITEM(1300, 1300, 0), 1979 1979 3, 4, TLV_DB_SCALE_ITEM(1800, 200, 0), ··· 1988 1988 static const DECLARE_TLV_DB_SCALE(out_tlv, -12100, 100, 1); 1989 1989 static const DECLARE_TLV_DB_SCALE(hp_tlv, -700, 100, 0); 1990 1990 static const unsigned int classd_tlv[] = { 1991 - TLV_DB_RANGE_HEAD(7), 1991 + TLV_DB_RANGE_HEAD(2), 1992 1992 0, 6, TLV_DB_SCALE_ITEM(0, 150, 0), 1993 1993 7, 7, TLV_DB_SCALE_ITEM(1200, 0, 0), 1994 1994 };
+1 -1
sound/soc/codecs/wm8993.c
··· 512 512 static const DECLARE_TLV_DB_SCALE(drc_comp_amp, -2250, 75, 0); 513 513 static const DECLARE_TLV_DB_SCALE(drc_min_tlv, -1800, 600, 0); 514 514 static const unsigned int drc_max_tlv[] = { 515 - TLV_DB_RANGE_HEAD(4), 515 + TLV_DB_RANGE_HEAD(2), 516 516 0, 2, TLV_DB_SCALE_ITEM(1200, 600, 0), 517 517 3, 3, TLV_DB_SCALE_ITEM(3600, 0, 0), 518 518 };
+5 -5
sound/soc/codecs/wm9081.c
··· 807 807 mdelay(100); 808 808 809 809 /* Normal bias enable & soft start off */ 810 - reg |= WM9081_BIAS_ENA; 811 810 reg &= ~WM9081_VMID_RAMP; 812 811 snd_soc_write(codec, WM9081_VMID_CONTROL, reg); 813 812 ··· 817 818 } 818 819 819 820 /* VMID 2*240k */ 820 - reg = snd_soc_read(codec, WM9081_BIAS_CONTROL_1); 821 + reg = snd_soc_read(codec, WM9081_VMID_CONTROL); 821 822 reg &= ~WM9081_VMID_SEL_MASK; 822 823 reg |= 0x04; 823 824 snd_soc_write(codec, WM9081_VMID_CONTROL, reg); ··· 829 830 break; 830 831 831 832 case SND_SOC_BIAS_OFF: 832 - /* Startup bias source */ 833 + /* Startup bias source and disable bias */ 833 834 reg = snd_soc_read(codec, WM9081_BIAS_CONTROL_1); 834 835 reg |= WM9081_BIAS_SRC; 836 + reg &= ~WM9081_BIAS_ENA; 835 837 snd_soc_write(codec, WM9081_BIAS_CONTROL_1, reg); 836 838 837 - /* Disable VMID and biases with soft ramping */ 839 + /* Disable VMID with soft ramping */ 838 840 reg = snd_soc_read(codec, WM9081_VMID_CONTROL); 839 - reg &= ~(WM9081_VMID_SEL_MASK | WM9081_BIAS_ENA); 841 + reg &= ~WM9081_VMID_SEL_MASK; 840 842 reg |= WM9081_VMID_RAMP; 841 843 snd_soc_write(codec, WM9081_VMID_CONTROL, reg); 842 844
+3 -3
sound/soc/codecs/wm9090.c
··· 177 177 } 178 178 179 179 static const unsigned int in_tlv[] = { 180 - TLV_DB_RANGE_HEAD(6), 180 + TLV_DB_RANGE_HEAD(3), 181 181 0, 0, TLV_DB_SCALE_ITEM(-600, 0, 0), 182 182 1, 3, TLV_DB_SCALE_ITEM(-350, 350, 0), 183 183 4, 6, TLV_DB_SCALE_ITEM(600, 600, 0), 184 184 }; 185 185 static const unsigned int mix_tlv[] = { 186 - TLV_DB_RANGE_HEAD(4), 186 + TLV_DB_RANGE_HEAD(2), 187 187 0, 2, TLV_DB_SCALE_ITEM(-1200, 300, 0), 188 188 3, 3, TLV_DB_SCALE_ITEM(0, 0, 0), 189 189 }; 190 190 static const DECLARE_TLV_DB_SCALE(out_tlv, -5700, 100, 0); 191 191 static const unsigned int spkboost_tlv[] = { 192 - TLV_DB_RANGE_HEAD(7), 192 + TLV_DB_RANGE_HEAD(2), 193 193 0, 6, TLV_DB_SCALE_ITEM(0, 150, 0), 194 194 7, 7, TLV_DB_SCALE_ITEM(1200, 0, 0), 195 195 };
+1 -1
sound/soc/codecs/wm_hubs.c
··· 40 40 static const DECLARE_TLV_DB_SCALE(spkmixout_tlv, -1800, 600, 1); 41 41 static const DECLARE_TLV_DB_SCALE(outpga_tlv, -5700, 100, 0); 42 42 static const unsigned int spkboost_tlv[] = { 43 - TLV_DB_RANGE_HEAD(7), 43 + TLV_DB_RANGE_HEAD(2), 44 44 0, 6, TLV_DB_SCALE_ITEM(0, 150, 0), 45 45 7, 7, TLV_DB_SCALE_ITEM(1200, 0, 0), 46 46 };
+1
sound/soc/fsl/fsl_ssi.c
··· 694 694 
 695 695 /* Initialize the device_attribute structure */ 696 696 dev_attr = &ssi_private->dev_attr; 697 + sysfs_attr_init(&dev_attr->attr); 697 698 dev_attr->attr.name = "statistics"; 698 699 dev_attr->attr.mode = S_IRUGO; 699 700 dev_attr->show = fsl_sysfs_ssi_show;
+2 -1
sound/soc/nuc900/nuc900-ac97.c
··· 365 365 if (ret) 366 366 goto out3; 367 367 
 368 - mfp_set_groupg(nuc900_audio->dev); /* enbale ac97 multifunction pin*/ 368 + /* enable ac97 multifunction pin */ 369 + mfp_set_groupg(nuc900_audio->dev, "nuc900-audio"); 369 370 
 370 371 return 0; 371 372