Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

+5518 -3611
+6 -1
Documentation/DocBook/uio-howto.tmpl
··· 521 521 522 522 <itemizedlist> 523 523 <listitem><para> 524 + <varname>const char *name</varname>: Optional. Set this to help identify 525 + the memory region, it will show up in the corresponding sysfs node. 526 + </para></listitem> 527 + 528 + <listitem><para> 524 529 <varname>int memtype</varname>: Required if the mapping is used. Set this to 525 530 <varname>UIO_MEM_PHYS</varname> if you you have physical memory on your 526 531 card to be mapped. Use <varname>UIO_MEM_LOGICAL</varname> for logical ··· 558 553 </itemizedlist> 559 554 560 555 <para> 561 - Please do not touch the <varname>kobj</varname> element of 556 + Please do not touch the <varname>map</varname> element of 562 557 <varname>struct uio_mem</varname>! It is used by the UIO framework 563 558 to set up sysfs files for this mapping. Simply leave it alone. 564 559 </para>
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 33 33 ramtron Ramtron International 34 34 samsung Samsung Semiconductor 35 35 schindler Schindler 36 + sil Silicon Image 36 37 simtek 37 38 sirf SiRF Technology, Inc. 38 39 stericsson ST-Ericsson
+2 -2
Documentation/filesystems/btrfs.txt
··· 63 63 Userspace tools for creating and manipulating Btrfs file systems are 64 64 available from the git repository at the following location: 65 65 66 - http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs-unstable.git 67 - git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs-unstable.git 66 + http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git 67 + git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git 68 68 69 69 These include the following tools: 70 70
+19 -17
Documentation/i2c/ten-bit-addresses
··· 1 1 The I2C protocol knows about two kinds of device addresses: normal 7 bit 2 2 addresses, and an extended set of 10 bit addresses. The sets of addresses 3 3 do not intersect: the 7 bit address 0x10 is not the same as the 10 bit 4 - address 0x10 (though a single device could respond to both of them). You 5 - select a 10 bit address by adding an extra byte after the address 6 - byte: 7 - S Addr7 Rd/Wr .... 8 - becomes 9 - S 11110 Addr10 Rd/Wr 10 - S is the start bit, Rd/Wr the read/write bit, and if you count the number 11 - of bits, you will see the there are 8 after the S bit for 7 bit addresses, 12 - and 16 after the S bit for 10 bit addresses. 4 + address 0x10 (though a single device could respond to both of them). 13 5 14 - WARNING! The current 10 bit address support is EXPERIMENTAL. There are 15 - several places in the code that will cause SEVERE PROBLEMS with 10 bit 16 - addresses, even though there is some basic handling and hooks. Also, 17 - almost no supported adapter handles the 10 bit addresses correctly. 6 + I2C messages to and from 10-bit address devices have a different format. 7 + See the I2C specification for the details. 18 8 19 - As soon as a real 10 bit address device is spotted 'in the wild', we 20 - can and will add proper support. Right now, 10 bit address devices 21 - are defined by the I2C protocol, but we have never seen a single device 22 - which supports them. 9 + The current 10 bit address support is minimal. It should work, however 10 + you can expect some problems along the way: 11 + * Not all bus drivers support 10-bit addresses. Some don't because the 12 + hardware doesn't support them (SMBus doesn't require 10-bit address 13 + support for example), some don't because nobody bothered adding the 14 + code (or it's there but not working properly.) Software implementation 15 + (i2c-algo-bit) is known to work. 16 + * Some optional features do not support 10-bit addresses. This is the 17 + case of automatic detection and instantiation of devices by their 18 + drivers, for example. 19 + * Many user-space packages (for example i2c-tools) lack support for 20 + 10-bit addresses. 21 + 22 + Note that 10-bit address devices are still pretty rare, so the limitations 23 + listed above could stay for a long time, maybe even forever if nobody 24 + needs them to be fixed.
+66 -39
Documentation/power/devices.txt
··· 123 123 Subsystem-Level Methods 124 124 ----------------------- 125 125 The core methods to suspend and resume devices reside in struct dev_pm_ops 126 - pointed to by the pm member of struct bus_type, struct device_type and 127 - struct class. They are mostly of interest to the people writing infrastructure 128 - for buses, like PCI or USB, or device type and device class drivers. 126 + pointed to by the ops member of struct dev_pm_domain, or by the pm member of 127 + struct bus_type, struct device_type and struct class. They are mostly of 128 + interest to the people writing infrastructure for platforms and buses, like PCI 129 + or USB, or device type and device class drivers. 129 130 130 131 Bus drivers implement these methods as appropriate for the hardware and the 131 132 drivers using it; PCI works differently from USB, and so on. Not many people ··· 140 139 141 140 /sys/devices/.../power/wakeup files 142 141 ----------------------------------- 143 - All devices in the driver model have two flags to control handling of wakeup 144 - events (hardware signals that can force the device and/or system out of a low 145 - power state). These flags are initialized by bus or device driver code using 142 + All device objects in the driver model contain fields that control the handling 143 + of system wakeup events (hardware signals that can force the system out of a 144 + sleep state). These fields are initialized by bus or device driver code using 146 145 device_set_wakeup_capable() and device_set_wakeup_enable(), defined in 147 146 include/linux/pm_wakeup.h. 148 147 149 - The "can_wakeup" flag just records whether the device (and its driver) can 148 + The "power.can_wakeup" flag just records whether the device (and its driver) can 150 149 physically support wakeup events. The device_set_wakeup_capable() routine 151 - affects this flag. The "should_wakeup" flag controls whether the device should 152 - try to use its wakeup mechanism. device_set_wakeup_enable() affects this flag; 153 - for the most part drivers should not change its value. The initial value of 154 - should_wakeup is supposed to be false for the majority of devices; the major 155 - exceptions are power buttons, keyboards, and Ethernet adapters whose WoL 156 - (wake-on-LAN) feature has been set up with ethtool. It should also default 157 - to true for devices that don't generate wakeup requests on their own but merely 158 - forward wakeup requests from one bus to another (like PCI bridges). 150 + affects this flag. The "power.wakeup" field is a pointer to an object of type 151 + struct wakeup_source used for controlling whether or not the device should use 152 + its system wakeup mechanism and for notifying the PM core of system wakeup 153 + events signaled by the device. This object is only present for wakeup-capable 154 + devices (i.e. devices whose "can_wakeup" flags are set) and is created (or 155 + removed) by device_set_wakeup_capable(). 159 156 160 157 Whether or not a device is capable of issuing wakeup events is a hardware 161 158 matter, and the kernel is responsible for keeping track of it. By contrast, 162 159 whether or not a wakeup-capable device should issue wakeup events is a policy 163 160 decision, and it is managed by user space through a sysfs attribute: the 164 - power/wakeup file. User space can write the strings "enabled" or "disabled" to 165 - set or clear the "should_wakeup" flag, respectively. This file is only present 166 - for wakeup-capable devices (i.e. devices whose "can_wakeup" flags are set) 167 - and is created (or removed) by device_set_wakeup_capable(). Reads from the 168 - file will return the corresponding string. 161 + "power/wakeup" file. User space can write the strings "enabled" or "disabled" 162 + to it to indicate whether or not, respectively, the device is supposed to signal 163 + system wakeup. This file is only present if the "power.wakeup" object exists 164 + for the given device and is created (or removed) along with that object, by 165 + device_set_wakeup_capable(). Reads from the file will return the corresponding 166 + string. 169 167 170 - The device_may_wakeup() routine returns true only if both flags are set. 168 + The "power/wakeup" file is supposed to contain the "disabled" string initially 169 + for the majority of devices; the major exceptions are power buttons, keyboards, 170 + and Ethernet adapters whose WoL (wake-on-LAN) feature has been set up with 171 + ethtool. It should also default to "enabled" for devices that don't generate 172 + wakeup requests on their own but merely forward wakeup requests from one bus to 173 + another (like PCI Express ports). 174 + 175 + The device_may_wakeup() routine returns true only if the "power.wakeup" object 176 + exists and the corresponding "power/wakeup" file contains the string "enabled". 171 177 This information is used by subsystems, like the PCI bus type code, to see 172 178 whether or not to enable the devices' wakeup mechanisms. If device wakeup 173 179 mechanisms are enabled or disabled directly by drivers, they also should use 174 180 device_may_wakeup() to decide what to do during a system sleep transition. 175 - However for runtime power management, wakeup events should be enabled whenever 176 - the device and driver both support them, regardless of the should_wakeup flag. 181 + Device drivers, however, are not supposed to call device_set_wakeup_enable() 182 + directly in any case. 177 183 184 + It ought to be noted that system wakeup is conceptually different from "remote 185 + wakeup" used by runtime power management, although it may be supported by the 186 + same physical mechanism. Remote wakeup is a feature allowing devices in 187 + low-power states to trigger specific interrupts to signal conditions in which 188 + they should be put into the full-power state. Those interrupts may or may not 189 + be used to signal system wakeup events, depending on the hardware design. On 190 + some systems it is impossible to trigger them from system sleep states. In any 191 + case, remote wakeup should always be enabled for runtime power management for 192 + all devices and drivers that support it. 178 193 179 194 /sys/devices/.../power/control files 180 195 ------------------------------------ ··· 266 249 support all these callbacks and not all drivers use all the callbacks. The 267 250 various phases always run after tasks have been frozen and before they are 268 251 unfrozen. Furthermore, the *_noirq phases run at a time when IRQ handlers have 269 - been disabled (except for those marked with the IRQ_WAKEUP flag). 252 + been disabled (except for those marked with the IRQF_NO_SUSPEND flag). 270 253 271 - All phases use bus, type, or class callbacks (that is, methods defined in 272 - dev->bus->pm, dev->type->pm, or dev->class->pm). These callbacks are mutually 273 - exclusive, so if the device type provides a struct dev_pm_ops object pointed to 274 - by its pm field (i.e. both dev->type and dev->type->pm are defined), the 275 - callbacks included in that object (i.e. dev->type->pm) will be used. Otherwise, 276 - if the class provides a struct dev_pm_ops object pointed to by its pm field 277 - (i.e. both dev->class and dev->class->pm are defined), the PM core will use the 278 - callbacks from that object (i.e. dev->class->pm). Finally, if the pm fields of 279 - both the device type and class objects are NULL (or those objects do not exist), 280 - the callbacks provided by the bus (that is, the callbacks from dev->bus->pm) 281 - will be used (this allows device types to override callbacks provided by bus 282 - types or classes if necessary). 254 + All phases use PM domain, bus, type, or class callbacks (that is, methods 255 + defined in dev->pm_domain->ops, dev->bus->pm, dev->type->pm, or dev->class->pm). 256 + These callbacks are regarded by the PM core as mutually exclusive. Moreover, 257 + PM domain callbacks always take precedence over bus, type and class callbacks, 258 + while type callbacks take precedence over bus and class callbacks, and class 259 + callbacks take precedence over bus callbacks. To be precise, the following 260 + rules are used to determine which callback to execute in the given phase: 261 + 262 + 1. If dev->pm_domain is present, the PM core will attempt to execute the 263 + callback included in dev->pm_domain->ops. If that callback is not 264 + present, no action will be carried out for the given device. 265 + 266 + 2. Otherwise, if both dev->type and dev->type->pm are present, the callback 267 + included in dev->type->pm will be executed. 268 + 269 + 3. Otherwise, if both dev->class and dev->class->pm are present, the 270 + callback included in dev->class->pm will be executed. 271 + 272 + 4. Otherwise, if both dev->bus and dev->bus->pm are present, the callback 273 + included in dev->bus->pm will be executed. 274 + 275 + This allows PM domains and device types to override callbacks provided by bus 276 + types or device classes if necessary. 283 277 284 278 These callbacks may in turn invoke device- or driver-specific methods stored in 285 279 dev->driver->pm, but they don't have to. ··· 311 283 312 284 After the prepare callback method returns, no new children may be 313 285 registered below the device. The method may also prepare the device or 314 - driver in some way for the upcoming system power transition (for 315 - example, by allocating additional memory required for this purpose), but 316 - it should not put the device into a low-power state. 286 + driver in some way for the upcoming system power transition, but it 287 + should not put the device into a low-power state. 317 288 318 289 2. The suspend methods should quiesce the device to stop it from performing 319 290 I/O. They also may save the device registers and put it into the
+24 -16
Documentation/power/runtime_pm.txt
··· 44 44 }; 45 45 46 46 The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks 47 - are executed by the PM core for either the power domain, or the device type 48 - (if the device power domain's struct dev_pm_ops does not exist), or the class 49 - (if the device power domain's and type's struct dev_pm_ops object does not 50 - exist), or the bus type (if the device power domain's, type's and class' 51 - struct dev_pm_ops objects do not exist) of the given device, so the priority 52 - order of callbacks from high to low is that power domain callbacks, device 53 - type callbacks, class callbacks and bus type callbacks, and the high priority 54 - one will take precedence over low priority one. The bus type, device type and 55 - class callbacks are referred to as subsystem-level callbacks in what follows, 56 - and generally speaking, the power domain callbacks are used for representing 57 - power domains within a SoC. 47 + are executed by the PM core for the device's subsystem that may be either of 48 + the following: 49 + 50 + 1. PM domain of the device, if the device's PM domain object, dev->pm_domain, 51 + is present. 52 + 53 + 2. Device type of the device, if both dev->type and dev->type->pm are present. 54 + 55 + 3. Device class of the device, if both dev->class and dev->class->pm are 56 + present. 57 + 58 + 4. Bus type of the device, if both dev->bus and dev->bus->pm are present. 59 + 60 + The PM core always checks which callback to use in the order given above, so the 61 + priority order of callbacks from high to low is: PM domain, device type, class 62 + and bus type. Moreover, the high-priority one will always take precedence over 63 + a low-priority one. The PM domain, bus type, device type and class callbacks 64 + are referred to as subsystem-level callbacks in what follows. 58 65 59 66 By default, the callbacks are always invoked in process context with interrupts 60 67 enabled. However, subsystems can use the pm_runtime_irq_safe() helper function 61 - to tell the PM core that a device's ->runtime_suspend() and ->runtime_resume() 62 - callbacks should be invoked in atomic context with interrupts disabled. 63 - This implies that these callback routines must not block or sleep, but it also 64 - means that the synchronous helper functions listed at the end of Section 4 can 65 - be used within an interrupt handler or in an atomic context. 68 + to tell the PM core that their ->runtime_suspend(), ->runtime_resume() and 69 + ->runtime_idle() callbacks may be invoked in atomic context with interrupts 70 + disabled for a given device. This implies that the callback routines in 71 + question must not block or sleep, but it also means that the synchronous helper 72 + functions listed at the end of Section 4 may be used for that device within an 73 + interrupt handler or generally in an atomic context. 66 74 67 75 The subsystem-level suspend callback is _entirely_ _responsible_ for handling 68 76 the suspend of the device as appropriate, which may, but need not include
+11 -3
Documentation/serial/serial-rs485.txt
··· 97 97 98 98 struct serial_rs485 rs485conf; 99 99 100 - /* Set RS485 mode: */ 100 + /* Enable RS485 mode: */ 101 101 rs485conf.flags |= SER_RS485_ENABLED; 102 102 103 + /* Set logical level for RTS pin equal to 1 when sending: */ 104 + rs485conf.flags |= SER_RS485_RTS_ON_SEND; 105 + /* or, set logical level for RTS pin equal to 0 when sending: */ 106 + rs485conf.flags &= ~(SER_RS485_RTS_ON_SEND); 107 + 108 + /* Set logical level for RTS pin equal to 1 after sending: */ 109 + rs485conf.flags |= SER_RS485_RTS_AFTER_SEND; 110 + /* or, set logical level for RTS pin equal to 0 after sending: */ 111 + rs485conf.flags &= ~(SER_RS485_RTS_AFTER_SEND); 112 + 103 113 /* Set rts delay before send, if needed: */ 104 - rs485conf.flags |= SER_RS485_RTS_BEFORE_SEND; 105 114 rs485conf.delay_rts_before_send = ...; 106 115 107 116 /* Set rts delay after send, if needed: */ 108 - rs485conf.flags |= SER_RS485_RTS_AFTER_SEND; 109 117 rs485conf.delay_rts_after_send = ...; 110 118 111 119 /* Set this flag if you want to receive data even whilst sending data */
+17 -2
MAINTAINERS
··· 789 789 S: Maintained 790 790 T: git git://git.pengutronix.de/git/imx/linux-2.6.git 791 791 F: arch/arm/mach-mx*/ 792 + F: arch/arm/mach-imx/ 792 793 F: arch/arm/plat-mxc/ 793 794 794 795 ARM/FREESCALE IMX51 ··· 804 803 S: Maintained 805 804 T: git git://git.linaro.org/people/shawnguo/linux-2.6.git 806 805 F: arch/arm/mach-imx/*imx6* 806 + 807 + ARM/FREESCALE MXS ARM ARCHITECTURE 808 + M: Shawn Guo <shawn.guo@linaro.org> 809 + L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 810 + S: Maintained 811 + T: git git://git.linaro.org/people/shawnguo/linux-2.6.git 812 + F: arch/arm/mach-mxs/ 807 813 808 814 ARM/GLOMATION GESBC9312SX MACHINE SUPPORT 809 815 M: Lennert Buytenhek <kernel@wantstofly.org> ··· 1796 1788 F: include/net/cfg80211.h 1797 1789 F: net/wireless/* 1798 1790 X: net/wireless/wext* 1791 + 1792 + CHAR and MISC DRIVERS 1793 + M: Arnd Bergmann <arnd@arndb.de> 1794 + M: Greg Kroah-Hartman <greg@kroah.com> 1795 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git 1796 + S: Maintained 1797 + F: drivers/char/* 1798 + F: drivers/misc/* 1799 1799 1800 1800 CHECKPATCH 1801 1801 M: Andy Whitcroft <apw@canonical.com> ··· 3736 3720 F: include/linux/jbd2.h 3737 3721 3738 3722 JSM Neo PCI based serial card 3739 - M: Breno Leitao <leitao@linux.vnet.ibm.com> 3723 + M: Lucas Tavares <lucaskt@linux.vnet.ibm.com> 3740 3724 L: linux-serial@vger.kernel.org 3741 3725 S: Maintained 3742 3726 F: drivers/tty/serial/jsm/ ··· 5675 5659 F: include/media/*7146* 5676 5660 5677 5661 SAMSUNG AUDIO (ASoC) DRIVERS 5678 - M: Jassi Brar <jassisinghbrar@gmail.com> 5679 5662 M: Sangbeom Kim <sbkim73@samsung.com> 5680 5663 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 5681 5664 S: Supported
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 2 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc2 4 + EXTRAVERSION = -rc4 5 5 NAME = Saber-toothed Squirrel 6 6 7 7 # *DOCUMENTATION*
+16 -4
arch/arm/Kconfig
··· 1231 1231 capabilities of the processor. 1232 1232 1233 1233 config PL310_ERRATA_588369 1234 - bool "Clean & Invalidate maintenance operations do not invalidate clean lines" 1234 + bool "PL310 errata: Clean & Invalidate maintenance operations do not invalidate clean lines" 1235 1235 depends on CACHE_L2X0 1236 1236 help 1237 1237 The PL310 L2 cache controller implements three types of Clean & ··· 1256 1256 entries regardless of the ASID. 1257 1257 1258 1258 config PL310_ERRATA_727915 1259 - bool "Background Clean & Invalidate by Way operation can cause data corruption" 1259 + bool "PL310 errata: Background Clean & Invalidate by Way operation can cause data corruption" 1260 1260 depends on CACHE_L2X0 1261 1261 help 1262 1262 PL310 implements the Clean & Invalidate by Way L2 cache maintenance ··· 1289 1289 operation is received by a CPU before the ICIALLUIS has completed, 1290 1290 potentially leading to corrupted entries in the cache or TLB. 1291 1291 1292 - config ARM_ERRATA_753970 1293 - bool "ARM errata: cache sync operation may be faulty" 1292 + config PL310_ERRATA_753970 1293 + bool "PL310 errata: cache sync operation may be faulty" 1294 1294 depends on CACHE_PL310 1295 1295 help 1296 1296 This option enables the workaround for the 753970 PL310 (r3p0) erratum. ··· 1351 1351 system. This workaround adds a DSB instruction before the 1352 1352 relevant cache maintenance functions and sets a specific bit 1353 1353 in the diagnostic control register of the SCU. 1354 + 1355 + config PL310_ERRATA_769419 1356 + bool "PL310 errata: no automatic Store Buffer drain" 1357 + depends on CACHE_L2X0 1358 + help 1359 + On revisions of the PL310 prior to r3p2, the Store Buffer does 1360 + not automatically drain. This can cause normal, non-cacheable 1361 + writes to be retained when the memory system is idle, leading 1362 + to suboptimal I/O performance for drivers using coherent DMA. 1363 + This option adds a write barrier to the cpu_idle loop so that, 1364 + on systems with an outer cache, the store buffer is drained 1365 + explicitly. 1354 1366 1355 1367 endmenu 1356 1368
+10 -6
arch/arm/common/gic.c
··· 526 526 sizeof(u32)); 527 527 BUG_ON(!gic->saved_ppi_conf); 528 528 529 - cpu_pm_register_notifier(&gic_notifier_block); 529 + if (gic == &gic_data[0]) 530 + cpu_pm_register_notifier(&gic_notifier_block); 530 531 } 531 532 #else 532 533 static void __init gic_pm_init(struct gic_chip_data *gic) ··· 582 581 * For primary GICs, skip over SGIs. 583 582 * For secondary GICs, skip over PPIs, too. 584 583 */ 584 + domain->hwirq_base = 32; 585 585 if (gic_nr == 0) { 586 586 gic_cpu_base_addr = cpu_base; 587 - domain->hwirq_base = 16; 588 - if (irq_start > 0) 589 - irq_start = (irq_start & ~31) + 16; 590 - } else 591 - domain->hwirq_base = 32; 587 + 588 + if ((irq_start & 31) > 0) { 589 + domain->hwirq_base = 16; 590 + if (irq_start != -1) 591 + irq_start = (irq_start & ~31) + 16; 592 + } 593 + } 592 594 593 595 /* 594 596 * Find out how many interrupts are supported.
+9 -3
arch/arm/common/pl330.c
··· 1211 1211 ccr |= (rqc->brst_size << CC_SRCBRSTSIZE_SHFT); 1212 1212 ccr |= (rqc->brst_size << CC_DSTBRSTSIZE_SHFT); 1213 1213 1214 - ccr |= (rqc->dcctl << CC_SRCCCTRL_SHFT); 1215 - ccr |= (rqc->scctl << CC_DSTCCTRL_SHFT); 1214 + ccr |= (rqc->scctl << CC_SRCCCTRL_SHFT); 1215 + ccr |= (rqc->dcctl << CC_DSTCCTRL_SHFT); 1216 1216 1217 1217 ccr |= (rqc->swap << CC_SWAP_SHFT); 1218 1218 ··· 1623 1623 return -1; 1624 1624 } 1625 1625 1626 + static bool _chan_ns(const struct pl330_info *pi, int i) 1627 + { 1628 + return pi->pcfg.irq_ns & (1 << i); 1629 + } 1630 + 1626 1631 /* Upon success, returns IdentityToken for the 1627 1632 * allocated channel, NULL otherwise. 1628 1633 */ ··· 1652 1647 1653 1648 for (i = 0; i < chans; i++) { 1654 1649 thrd = &pl330->channels[i]; 1655 - if (thrd->free) { 1650 + if ((thrd->free) && (!_manager_ns(thrd) || 1651 + _chan_ns(pi, i))) { 1656 1652 thrd->ev = _alloc_event(thrd); 1657 1653 if (thrd->ev >= 0) { 1658 1654 thrd->free = false;
+15 -51
arch/arm/configs/at91cap9adk_defconfig arch/arm/configs/at91sam9rl_defconfig
··· 11 11 # CONFIG_IOSCHED_DEADLINE is not set 12 12 # CONFIG_IOSCHED_CFQ is not set 13 13 CONFIG_ARCH_AT91=y 14 - CONFIG_ARCH_AT91CAP9=y 15 - CONFIG_MACH_AT91CAP9ADK=y 16 - CONFIG_MTD_AT91_DATAFLASH_CARD=y 14 + CONFIG_ARCH_AT91SAM9RL=y 15 + CONFIG_MACH_AT91SAM9RLEK=y 17 16 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 18 17 # CONFIG_ARM_THUMB is not set 19 - CONFIG_AEABI=y 20 - CONFIG_LEDS=y 21 - CONFIG_LEDS_CPU=y 22 18 CONFIG_ZBOOT_ROM_TEXT=0x0 23 19 CONFIG_ZBOOT_ROM_BSS=0x0 24 - CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw" 20 + CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw" 25 21 CONFIG_FPE_NWFPE=y 26 22 CONFIG_NET=y 27 - CONFIG_PACKET=y 28 23 CONFIG_UNIX=y 29 - CONFIG_INET=y 30 - CONFIG_IP_PNP=y 31 - CONFIG_IP_PNP_BOOTP=y 32 - CONFIG_IP_PNP_RARP=y 33 - # CONFIG_INET_XFRM_MODE_TRANSPORT is not set 34 - # CONFIG_INET_XFRM_MODE_TUNNEL is not set 35 - # CONFIG_INET_XFRM_MODE_BEET is not set 36 - # CONFIG_INET_LRO is not set 37 - # CONFIG_INET_DIAG is not set 38 - # CONFIG_IPV6 is not set 39 24 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 40 25 CONFIG_MTD=y 41 - CONFIG_MTD_PARTITIONS=y 42 26 CONFIG_MTD_CMDLINE_PARTS=y 43 27 CONFIG_MTD_CHAR=y 44 28 CONFIG_MTD_BLOCK=y 45 - CONFIG_MTD_CFI=y 46 - CONFIG_MTD_JEDECPROBE=y 47 - CONFIG_MTD_CFI_AMDSTD=y 48 - CONFIG_MTD_PHYSMAP=y 49 29 CONFIG_MTD_DATAFLASH=y 50 30 CONFIG_MTD_NAND=y 51 31 CONFIG_MTD_NAND_ATMEL=y 52 32 CONFIG_BLK_DEV_LOOP=y 53 33 CONFIG_BLK_DEV_RAM=y 54 - CONFIG_BLK_DEV_RAM_SIZE=8192 55 - CONFIG_ATMEL_SSC=y 34 + CONFIG_BLK_DEV_RAM_COUNT=4 35 + CONFIG_BLK_DEV_RAM_SIZE=24576 56 36 CONFIG_SCSI=y 57 37 CONFIG_BLK_DEV_SD=y 58 38 CONFIG_SCSI_MULTI_LUN=y 59 - CONFIG_NETDEVICES=y 60 - CONFIG_NET_ETHERNET=y 61 - CONFIG_MII=y 62 - CONFIG_MACB=y 63 - # CONFIG_NETDEV_1000 is not set 64 - # CONFIG_NETDEV_10000 is not set 65 39 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 40 + CONFIG_INPUT_MOUSEDEV_SCREEN_X=320 41 + CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240 66 42 CONFIG_INPUT_EVDEV=y 67 43 # CONFIG_INPUT_KEYBOARD is not set 68 44 # CONFIG_INPUT_MOUSE is not set 69 45 CONFIG_INPUT_TOUCHSCREEN=y 70 - CONFIG_TOUCHSCREEN_ADS7846=y 46 + CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y 71 47 # CONFIG_SERIO is not set 72 48 CONFIG_SERIAL_ATMEL=y 73 49 CONFIG_SERIAL_ATMEL_CONSOLE=y 74 - CONFIG_HW_RANDOM=y 50 + # CONFIG_HW_RANDOM is not set 75 51 CONFIG_I2C=y 76 52 CONFIG_I2C_CHARDEV=y 53 + CONFIG_I2C_GPIO=y 77 54 CONFIG_SPI=y 78 55 CONFIG_SPI_ATMEL=y 79 56 # CONFIG_HWMON is not set 80 57 CONFIG_WATCHDOG=y 81 58 CONFIG_WATCHDOG_NOWAYOUT=y 59 + CONFIG_AT91SAM9X_WATCHDOG=y 82 60 CONFIG_FB=y 83 61 CONFIG_FB_ATMEL=y 84 - # CONFIG_VGA_CONSOLE is not set 85 - CONFIG_LOGO=y 86 - # CONFIG_LOGO_LINUX_MONO is not set 87 - # CONFIG_LOGO_LINUX_CLUT224 is not set 88 - # CONFIG_USB_HID is not set 89 - CONFIG_USB=y 90 - CONFIG_USB_DEVICEFS=y 91 - CONFIG_USB_MON=y 92 - CONFIG_USB_OHCI_HCD=y 93 - CONFIG_USB_STORAGE=y 94 - CONFIG_USB_GADGET=y 95 - CONFIG_USB_ETH=m 96 - CONFIG_USB_FILE_STORAGE=m 97 62 CONFIG_MMC=y 98 63 CONFIG_MMC_AT91=m 99 64 CONFIG_RTC_CLASS=y 100 65 CONFIG_RTC_DRV_AT91SAM9=y 101 66 CONFIG_EXT2_FS=y 102 - CONFIG_INOTIFY=y 67 + CONFIG_MSDOS_FS=y 103 68 CONFIG_VFAT_FS=y 104 69 CONFIG_TMPFS=y 105 - CONFIG_JFFS2_FS=y 106 70 CONFIG_CRAMFS=y 107 - CONFIG_NFS_FS=y 108 - CONFIG_ROOT_NFS=y 109 71 CONFIG_NLS_CODEPAGE_437=y 110 72 CONFIG_NLS_CODEPAGE_850=y 111 73 CONFIG_NLS_ISO8859_1=y 112 - CONFIG_DEBUG_FS=y 74 + CONFIG_NLS_ISO8859_15=y 75 + CONFIG_NLS_UTF8=y 113 76 CONFIG_DEBUG_KERNEL=y 114 77 CONFIG_DEBUG_INFO=y 115 78 CONFIG_DEBUG_USER=y 79 + CONFIG_DEBUG_LL=y
+14 -33
arch/arm/configs/at91rm9200_defconfig
··· 5 5 CONFIG_IKCONFIG=y 6 6 CONFIG_IKCONFIG_PROC=y 7 7 CONFIG_LOG_BUF_SHIFT=14 8 - CONFIG_SYSFS_DEPRECATED_V2=y 9 8 CONFIG_BLK_DEV_INITRD=y 10 9 CONFIG_MODULES=y 11 10 CONFIG_MODULE_FORCE_LOAD=y ··· 55 56 CONFIG_IP_PNP_DHCP=y 56 57 CONFIG_IP_PNP_BOOTP=y 57 58 CONFIG_NET_IPIP=m 58 - CONFIG_NET_IPGRE=m 59 59 CONFIG_INET_AH=m 60 60 CONFIG_INET_ESP=m 61 61 CONFIG_INET_IPCOMP=m ··· 73 75 CONFIG_BRIDGE=m 74 76 CONFIG_VLAN_8021Q=m 75 77 CONFIG_BT=m 76 - CONFIG_BT_L2CAP=m 77 - CONFIG_BT_SCO=m 78 - CONFIG_BT_RFCOMM=m 79 - CONFIG_BT_RFCOMM_TTY=y 80 - CONFIG_BT_BNEP=m 81 - CONFIG_BT_BNEP_MC_FILTER=y 82 - CONFIG_BT_BNEP_PROTO_FILTER=y 83 - CONFIG_BT_HIDP=m 84 78 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 85 79 CONFIG_MTD=y 86 - CONFIG_MTD_CONCAT=y 87 - CONFIG_MTD_PARTITIONS=y 88 80 CONFIG_MTD_CMDLINE_PARTS=y 89 81 CONFIG_MTD_AFS_PARTS=y 90 82 CONFIG_MTD_CHAR=y ··· 96 108 CONFIG_BLK_DEV_NBD=y 97 109 CONFIG_BLK_DEV_RAM=y 98 110 CONFIG_BLK_DEV_RAM_SIZE=8192 99 - CONFIG_ATMEL_TCLIB=y 100 - CONFIG_EEPROM_LEGACY=m 101 111 CONFIG_SCSI=y 102 112 CONFIG_BLK_DEV_SD=y 103 113 CONFIG_BLK_DEV_SR=m ··· 105 119 # CONFIG_SCSI_LOWLEVEL is not set 106 120 CONFIG_NETDEVICES=y 107 121 CONFIG_TUN=m 122 + CONFIG_ARM_AT91_ETHER=y 108 123 CONFIG_PHYLIB=y 109 124 CONFIG_DAVICOM_PHY=y 110 125 CONFIG_SMSC_PHY=y 111 126 CONFIG_MICREL_PHY=y 112 - CONFIG_NET_ETHERNET=y 113 - CONFIG_ARM_AT91_ETHER=y 114 - # CONFIG_NETDEV_1000 is not set 115 - # CONFIG_NETDEV_10000 is not set 127 + CONFIG_PPP=y 128 + CONFIG_PPP_BSDCOMP=y 129 + CONFIG_PPP_DEFLATE=y 130 + CONFIG_PPP_FILTER=y 131 + CONFIG_PPP_MPPE=m 132 + CONFIG_PPP_MULTILINK=y 133 + CONFIG_PPPOE=m 134 + CONFIG_PPP_ASYNC=y 135 + CONFIG_SLIP=m 136 + CONFIG_SLIP_COMPRESSED=y 137 + CONFIG_SLIP_SMART=y 138 + CONFIG_SLIP_MODE_SLIP6=y 116 139 CONFIG_USB_CATC=m 117 140 CONFIG_USB_KAWETH=m 118 141 CONFIG_USB_PEGASUS=m ··· 134 139 CONFIG_USB_ALI_M5632=y 135 140 CONFIG_USB_AN2720=y 136 141 CONFIG_USB_EPSON2888=y 137 - CONFIG_PPP=y 138 - CONFIG_PPP_MULTILINK=y 139 - CONFIG_PPP_FILTER=y 140 - CONFIG_PPP_ASYNC=y 141 - CONFIG_PPP_DEFLATE=y 142 - CONFIG_PPP_BSDCOMP=y 143 - CONFIG_PPP_MPPE=m 144 - CONFIG_PPPOE=m 145 - CONFIG_SLIP=m 146 - CONFIG_SLIP_COMPRESSED=y 147 - CONFIG_SLIP_SMART=y 148 - CONFIG_SLIP_MODE_SLIP6=y 149 142 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 150 143 CONFIG_INPUT_MOUSEDEV_SCREEN_X=640 151 144 CONFIG_INPUT_MOUSEDEV_SCREEN_Y=480 ··· 141 158 CONFIG_KEYBOARD_GPIO=y 142 159 # CONFIG_INPUT_MOUSE is not set 143 160 CONFIG_INPUT_TOUCHSCREEN=y 161 + CONFIG_LEGACY_PTY_COUNT=32 144 162 CONFIG_SERIAL_ATMEL=y 145 163 CONFIG_SERIAL_ATMEL_CONSOLE=y 146 - CONFIG_LEGACY_PTY_COUNT=32 147 164 CONFIG_HW_RANDOM=y 148 165 CONFIG_I2C=y 149 166 CONFIG_I2C_CHARDEV=y ··· 273 290 CONFIG_NFS_V4=y 274 291 CONFIG_ROOT_NFS=y 275 292 CONFIG_NFSD=y 276 - CONFIG_SMB_FS=m 277 293 CONFIG_CIFS=m 278 294 CONFIG_PARTITION_ADVANCED=y 279 295 CONFIG_MAC_PARTITION=y ··· 317 335 CONFIG_MAGIC_SYSRQ=y 318 336 CONFIG_DEBUG_FS=y 319 337 CONFIG_DEBUG_KERNEL=y 320 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 321 338 # CONFIG_FTRACE is not set 322 339 CONFIG_CRYPTO_PCBC=y 323 340 CONFIG_CRYPTO_SHA1=y
+61 -20
arch/arm/configs/at91sam9260ek_defconfig arch/arm/configs/at91sam9g20_defconfig
··· 11 11 # CONFIG_IOSCHED_DEADLINE is not set 12 12 # CONFIG_IOSCHED_CFQ is not set 13 13 CONFIG_ARCH_AT91=y 14 - CONFIG_ARCH_AT91SAM9260=y 15 - CONFIG_MACH_AT91SAM9260EK=y 14 + CONFIG_ARCH_AT91SAM9G20=y 15 + CONFIG_MACH_AT91SAM9G20EK=y 16 + CONFIG_MACH_AT91SAM9G20EK_2MMC=y 17 + CONFIG_MACH_CPU9G20=y 18 + CONFIG_MACH_ACMENETUSFOXG20=y 19 + CONFIG_MACH_PORTUXG20=y 20 + CONFIG_MACH_STAMP9G20=y 21 + CONFIG_MACH_PCONTROL_G20=y 22 + CONFIG_MACH_GSIA18S=y 23 + CONFIG_MACH_USB_A9G20=y 24 + CONFIG_MACH_SNAPPER_9260=y 25 + CONFIG_MACH_AT91SAM_DT=y 16 26 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 17 27 # CONFIG_ARM_THUMB is not set 28 + CONFIG_AEABI=y 29 + CONFIG_LEDS=y 30 + CONFIG_LEDS_CPU=y 18 31 CONFIG_ZBOOT_ROM_TEXT=0x0 19 32 CONFIG_ZBOOT_ROM_BSS=0x0 33 + CONFIG_ARM_APPENDED_DTB=y 34 + CONFIG_ARM_ATAG_DTB_COMPAT=y 20 35 CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw" 21 36 CONFIG_FPE_NWFPE=y 22 37 CONFIG_NET=y ··· 46 31 # CONFIG_INET_LRO is not set 47 32 # CONFIG_IPV6 is not set 48 33 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 34 + CONFIG_MTD=y 35 + CONFIG_MTD_CMDLINE_PARTS=y 36 + CONFIG_MTD_CHAR=y 37 + CONFIG_MTD_BLOCK=y 38 + CONFIG_MTD_DATAFLASH=y 39 + CONFIG_MTD_NAND=y 40 + CONFIG_MTD_NAND_ATMEL=y 41 + CONFIG_BLK_DEV_LOOP=y 49 42 CONFIG_BLK_DEV_RAM=y 50 43 CONFIG_BLK_DEV_RAM_SIZE=8192 51 - CONFIG_ATMEL_SSC=y 52 44 CONFIG_SCSI=y 53 45 CONFIG_BLK_DEV_SD=y 54 46 CONFIG_SCSI_MULTI_LUN=y 47 + # CONFIG_SCSI_LOWLEVEL is not set 55 48 CONFIG_NETDEVICES=y 56 - CONFIG_NET_ETHERNET=y 57 49 CONFIG_MII=y 58 50 CONFIG_MACB=y 59 51 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 60 - # CONFIG_INPUT_KEYBOARD is not set 52 + CONFIG_INPUT_MOUSEDEV_SCREEN_X=320 53 + CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240 54 + CONFIG_INPUT_EVDEV=y 55 + # CONFIG_KEYBOARD_ATKBD is not set 56 + CONFIG_KEYBOARD_GPIO=y 61 57 # CONFIG_INPUT_MOUSE is not set 62 - # CONFIG_SERIO is not set 58 + CONFIG_LEGACY_PTY_COUNT=16 63 59 CONFIG_SERIAL_ATMEL=y 64 60 CONFIG_SERIAL_ATMEL_CONSOLE=y 65 - # CONFIG_HW_RANDOM is not set 66 - CONFIG_I2C=y 67 - CONFIG_I2C_CHARDEV=y 68 - CONFIG_I2C_GPIO=y 61 + CONFIG_HW_RANDOM=y 62 + CONFIG_SPI=y 63 + CONFIG_SPI_ATMEL=y 64 + CONFIG_SPI_SPIDEV=y 69 65 # CONFIG_HWMON is not set 70 - CONFIG_WATCHDOG=y 71 - CONFIG_WATCHDOG_NOWAYOUT=y 72 - CONFIG_AT91SAM9X_WATCHDOG=y 73 - # CONFIG_VGA_CONSOLE is not set 74 - # CONFIG_USB_HID is not set 66 + CONFIG_SOUND=y 67 + CONFIG_SND=y 68 + CONFIG_SND_SEQUENCER=y 69 + CONFIG_SND_MIXER_OSS=y 70 + CONFIG_SND_PCM_OSS=y 71 + CONFIG_SND_SEQUENCER_OSS=y 72 + # CONFIG_SND_VERBOSE_PROCFS is not set 75 73 CONFIG_USB=y 76 74 CONFIG_USB_DEVICEFS=y 75 + # CONFIG_USB_DEVICE_CLASS is not set 77 76 CONFIG_USB_MON=y 78 77 CONFIG_USB_OHCI_HCD=y 79 78 CONFIG_USB_STORAGE=y 80 - CONFIG_USB_STORAGE_DEBUG=y 81 79 CONFIG_USB_GADGET=y 82 80 CONFIG_USB_ZERO=m 83 81 CONFIG_USB_GADGETFS=m 84 82 CONFIG_USB_FILE_STORAGE=m 85 83 CONFIG_USB_G_SERIAL=m 84 + CONFIG_MMC=y 85 + CONFIG_MMC_AT91=m 86 + CONFIG_NEW_LEDS=y 87 + CONFIG_LEDS_CLASS=y 88 + CONFIG_LEDS_GPIO=y 89 + CONFIG_LEDS_TRIGGERS=y 90 + CONFIG_LEDS_TRIGGER_TIMER=y 91 + CONFIG_LEDS_TRIGGER_HEARTBEAT=y 86 92 CONFIG_RTC_CLASS=y 87 93 CONFIG_RTC_DRV_AT91SAM9=y 88 94 CONFIG_EXT2_FS=y 89 - CONFIG_INOTIFY=y 95 + CONFIG_MSDOS_FS=y 90 96 CONFIG_VFAT_FS=y 91 97 CONFIG_TMPFS=y 98 + CONFIG_JFFS2_FS=y 99 + CONFIG_JFFS2_SUMMARY=y 92 100 CONFIG_CRAMFS=y 101 + CONFIG_NFS_FS=y 102 + CONFIG_NFS_V3=y 103 + CONFIG_ROOT_NFS=y 93 104 CONFIG_NLS_CODEPAGE_437=y 94 105 CONFIG_NLS_CODEPAGE_850=y 95 106 CONFIG_NLS_ISO8859_1=y 96 - CONFIG_DEBUG_KERNEL=y 97 - CONFIG_DEBUG_USER=y 98 - CONFIG_DEBUG_LL=y 107 + CONFIG_NLS_ISO8859_15=y 108 + CONFIG_NLS_UTF8=y 109 + # CONFIG_ENABLE_WARN_DEPRECATED is not set
+29 -44
arch/arm/configs/at91sam9g20ek_defconfig arch/arm/configs/at91cap9_defconfig
··· 11 11 # CONFIG_IOSCHED_DEADLINE is not set 12 12 # CONFIG_IOSCHED_CFQ is not set 13 13 CONFIG_ARCH_AT91=y 14 - CONFIG_ARCH_AT91SAM9G20=y 15 - CONFIG_MACH_AT91SAM9G20EK=y 16 - CONFIG_MACH_AT91SAM9G20EK_2MMC=y 14 + CONFIG_ARCH_AT91CAP9=y 15 + CONFIG_MACH_AT91CAP9ADK=y 16 + CONFIG_MTD_AT91_DATAFLASH_CARD=y 17 17 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 18 18 # CONFIG_ARM_THUMB is not set 19 19 CONFIG_AEABI=y ··· 21 21 CONFIG_LEDS_CPU=y 22 22 CONFIG_ZBOOT_ROM_TEXT=0x0 23 23 CONFIG_ZBOOT_ROM_BSS=0x0 24 - CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw" 24 + CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/ram0 rw" 25 25 CONFIG_FPE_NWFPE=y 26 - CONFIG_PM=y 27 26 CONFIG_NET=y 28 27 CONFIG_PACKET=y 29 28 CONFIG_UNIX=y 30 29 CONFIG_INET=y 31 30 CONFIG_IP_PNP=y 32 31 CONFIG_IP_PNP_BOOTP=y 32 + CONFIG_IP_PNP_RARP=y 33 33 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set 34 34 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 35 35 # CONFIG_INET_XFRM_MODE_BEET is not set 36 36 # CONFIG_INET_LRO is not set 37 + # CONFIG_INET_DIAG is not set 37 38 # CONFIG_IPV6 is not set 38 39 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 39 40 CONFIG_MTD=y 40 - CONFIG_MTD_CONCAT=y 41 - CONFIG_MTD_PARTITIONS=y 42 41 CONFIG_MTD_CMDLINE_PARTS=y 43 42 CONFIG_MTD_CHAR=y 44 43 CONFIG_MTD_BLOCK=y 44 + CONFIG_MTD_CFI=y 45 + CONFIG_MTD_JEDECPROBE=y 46 + CONFIG_MTD_CFI_AMDSTD=y 47 + CONFIG_MTD_PHYSMAP=y 45 48 CONFIG_MTD_DATAFLASH=y 46 49 CONFIG_MTD_NAND=y 47 50 CONFIG_MTD_NAND_ATMEL=y 48 51 CONFIG_BLK_DEV_LOOP=y 49 52 CONFIG_BLK_DEV_RAM=y 50 53 CONFIG_BLK_DEV_RAM_SIZE=8192 51 - CONFIG_ATMEL_SSC=y 52 54 CONFIG_SCSI=y 53 55 CONFIG_BLK_DEV_SD=y 54 56 CONFIG_SCSI_MULTI_LUN=y 55 - # CONFIG_SCSI_LOWLEVEL is not set 56 57 CONFIG_NETDEVICES=y 57 - CONFIG_NET_ETHERNET=y 58 58 CONFIG_MII=y 59 59 CONFIG_MACB=y 60 - # CONFIG_NETDEV_1000 is not set 61 - # CONFIG_NETDEV_10000 is not set 62 60 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 63 - CONFIG_INPUT_MOUSEDEV_SCREEN_X=320 64 - CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
65 61 CONFIG_INPUT_EVDEV=y 66 - # CONFIG_KEYBOARD_ATKBD is not set 67 - CONFIG_KEYBOARD_GPIO=y 62 + # CONFIG_INPUT_KEYBOARD is not set 68 63 # CONFIG_INPUT_MOUSE is not set 64 + CONFIG_INPUT_TOUCHSCREEN=y 65 + CONFIG_TOUCHSCREEN_ADS7846=y 66 + # CONFIG_SERIO is not set 69 67 CONFIG_SERIAL_ATMEL=y 70 68 CONFIG_SERIAL_ATMEL_CONSOLE=y 71 - CONFIG_LEGACY_PTY_COUNT=16 72 69 CONFIG_HW_RANDOM=y 70 + CONFIG_I2C=y 71 + CONFIG_I2C_CHARDEV=y 73 72 CONFIG_SPI=y 74 73 CONFIG_SPI_ATMEL=y 75 - CONFIG_SPI_SPIDEV=y 76 74 # CONFIG_HWMON is not set 77 - # CONFIG_VGA_CONSOLE is not set 78 - CONFIG_SOUND=y 79 - CONFIG_SND=y 80 - CONFIG_SND_SEQUENCER=y 81 - CONFIG_SND_MIXER_OSS=y 82 - CONFIG_SND_PCM_OSS=y 83 - CONFIG_SND_SEQUENCER_OSS=y 84 - # CONFIG_SND_VERBOSE_PROCFS is not set 85 - CONFIG_SND_AT73C213=y 75 + CONFIG_WATCHDOG=y 76 + CONFIG_WATCHDOG_NOWAYOUT=y 77 + CONFIG_FB=y 78 + CONFIG_FB_ATMEL=y 79 + CONFIG_LOGO=y 80 + # CONFIG_LOGO_LINUX_MONO is not set 81 + # CONFIG_LOGO_LINUX_CLUT224 is not set 82 + # CONFIG_USB_HID is not set 86 83 CONFIG_USB=y 87 84 CONFIG_USB_DEVICEFS=y 88 - # CONFIG_USB_DEVICE_CLASS is not set 89 85 CONFIG_USB_MON=y 90 86 CONFIG_USB_OHCI_HCD=y 91 87 CONFIG_USB_STORAGE=y 92 88 CONFIG_USB_GADGET=y 93 - CONFIG_USB_ZERO=m 94 - CONFIG_USB_GADGETFS=m 89 + CONFIG_USB_ETH=m 95 90 CONFIG_USB_FILE_STORAGE=m 96 - CONFIG_USB_G_SERIAL=m 97 91 CONFIG_MMC=y 98 92 CONFIG_MMC_AT91=m 99 - CONFIG_NEW_LEDS=y 100 - CONFIG_LEDS_CLASS=y 101 - CONFIG_LEDS_GPIO=y 102 - CONFIG_LEDS_TRIGGERS=y 103 - CONFIG_LEDS_TRIGGER_TIMER=y 104 - CONFIG_LEDS_TRIGGER_HEARTBEAT=y 105 93 CONFIG_RTC_CLASS=y 106 94 CONFIG_RTC_DRV_AT91SAM9=y 107 95 CONFIG_EXT2_FS=y 108 - CONFIG_INOTIFY=y 109 - CONFIG_MSDOS_FS=y 110 96 CONFIG_VFAT_FS=y 111 97 CONFIG_TMPFS=y 112 98 CONFIG_JFFS2_FS=y 113 - CONFIG_JFFS2_SUMMARY=y 114 99 CONFIG_CRAMFS=y 115 100 CONFIG_NFS_FS=y 116 - CONFIG_NFS_V3=y 117 101 CONFIG_ROOT_NFS=y 118 102 CONFIG_NLS_CODEPAGE_437=y 119 103 CONFIG_NLS_CODEPAGE_850=y
120 104 CONFIG_NLS_ISO8859_1=y 121 - CONFIG_NLS_ISO8859_15=y 122 - CONFIG_NLS_UTF8=y 123 - # CONFIG_ENABLE_WARN_DEPRECATED is not set 105 + CONFIG_DEBUG_FS=y 106 + CONFIG_DEBUG_KERNEL=y 107 + CONFIG_DEBUG_INFO=y 108 + CONFIG_DEBUG_USER=y
+2 -5
arch/arm/configs/at91sam9g45_defconfig
··· 18 18 CONFIG_ARCH_AT91=y 19 19 CONFIG_ARCH_AT91SAM9G45=y 20 20 CONFIG_MACH_AT91SAM9M10G45EK=y 21 + CONFIG_MACH_AT91SAM_DT=y 21 22 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 22 23 CONFIG_AT91_SLOW_CLOCK=y 23 24 CONFIG_AEABI=y ··· 74 73 # CONFIG_SCSI_LOWLEVEL is not set 75 74 CONFIG_NETDEVICES=y 76 75 CONFIG_MII=y 77 - CONFIG_DAVICOM_PHY=y 78 - CONFIG_NET_ETHERNET=y 79 76 CONFIG_MACB=y 80 - # CONFIG_NETDEV_1000 is not set 81 - # CONFIG_NETDEV_10000 is not set 77 + CONFIG_DAVICOM_PHY=y 82 78 CONFIG_LIBERTAS_THINFIRM=m 83 79 CONFIG_LIBERTAS_THINFIRM_USB=m 84 80 CONFIG_AT76C50X_USB=m ··· 129 131 CONFIG_SPI=y 130 132 CONFIG_SPI_ATMEL=y 131 133 # CONFIG_HWMON is not set 132 - # CONFIG_MFD_SUPPORT is not set 133 134 CONFIG_FB=y 134 135 CONFIG_FB_ATMEL=y 135 136 CONFIG_FB_UDL=m
+40 -33
arch/arm/configs/at91sam9rlek_defconfig arch/arm/configs/at91sam9260_defconfig
··· 11 11 # CONFIG_IOSCHED_DEADLINE is not set 12 12 # CONFIG_IOSCHED_CFQ is not set 13 13 CONFIG_ARCH_AT91=y 14 - CONFIG_ARCH_AT91SAM9RL=y 15 - CONFIG_MACH_AT91SAM9RLEK=y 14 + CONFIG_ARCH_AT91SAM9260=y 15 + CONFIG_ARCH_AT91SAM9260_SAM9XE=y 16 + CONFIG_MACH_AT91SAM9260EK=y 17 + CONFIG_MACH_CAM60=y 18 + CONFIG_MACH_SAM9_L9260=y 19 + CONFIG_MACH_AFEB9260=y 20 + CONFIG_MACH_USB_A9260=y 21 + CONFIG_MACH_QIL_A9260=y 22 + CONFIG_MACH_CPU9260=y 23 + CONFIG_MACH_FLEXIBITY=y 24 + CONFIG_MACH_SNAPPER_9260=y 25 + CONFIG_MACH_AT91SAM_DT=y 16 26 CONFIG_AT91_PROGRAMMABLE_CLOCKS=y 17 27 # CONFIG_ARM_THUMB is not set 18 28 CONFIG_ZBOOT_ROM_TEXT=0x0 19 29 CONFIG_ZBOOT_ROM_BSS=0x0 20 - CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,17105363 root=/dev/ram0 rw" 30 + CONFIG_ARM_APPENDED_DTB=y 31 + CONFIG_ARM_ATAG_DTB_COMPAT=y 32 + CONFIG_CMDLINE="mem=64M console=ttyS0,115200 initrd=0x21100000,3145728 root=/dev/ram0 rw" 21 33 CONFIG_FPE_NWFPE=y 22 34 CONFIG_NET=y 35 + CONFIG_PACKET=y 23 36 CONFIG_UNIX=y 37 + CONFIG_INET=y 38 + CONFIG_IP_PNP=y 39 + CONFIG_IP_PNP_BOOTP=y 40 + # CONFIG_INET_XFRM_MODE_TRANSPORT is not set 41 + # CONFIG_INET_XFRM_MODE_TUNNEL is not set 42 + # CONFIG_INET_XFRM_MODE_BEET is not set 43 + # CONFIG_INET_LRO is not set 44 + # CONFIG_IPV6 is not set 24 45 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 25 - CONFIG_MTD=y 26 - CONFIG_MTD_CONCAT=y 27 - CONFIG_MTD_PARTITIONS=y 28 - CONFIG_MTD_CMDLINE_PARTS=y 29 - CONFIG_MTD_CHAR=y 30 - CONFIG_MTD_BLOCK=y 31 - CONFIG_MTD_DATAFLASH=y 32 - CONFIG_MTD_NAND=y 33 - CONFIG_MTD_NAND_ATMEL=y 34 - CONFIG_BLK_DEV_LOOP=y 35 46 CONFIG_BLK_DEV_RAM=y 36 - CONFIG_BLK_DEV_RAM_COUNT=4 37 - CONFIG_BLK_DEV_RAM_SIZE=24576 38 - CONFIG_ATMEL_SSC=y 47 + CONFIG_BLK_DEV_RAM_SIZE=8192 39 48 CONFIG_SCSI=y 40 49 CONFIG_BLK_DEV_SD=y 41 50 CONFIG_SCSI_MULTI_LUN=y 51 + CONFIG_NETDEVICES=y 52 + CONFIG_MII=y 53 + CONFIG_MACB=y 42 54 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 43 - CONFIG_INPUT_MOUSEDEV_SCREEN_X=320 44 - CONFIG_INPUT_MOUSEDEV_SCREEN_Y=240
45 - CONFIG_INPUT_EVDEV=y 46 55 # CONFIG_INPUT_KEYBOARD is not set 47 56 # CONFIG_INPUT_MOUSE is not set 48 - CONFIG_INPUT_TOUCHSCREEN=y 49 - CONFIG_TOUCHSCREEN_ATMEL_TSADCC=y 50 57 # CONFIG_SERIO is not set 51 58 CONFIG_SERIAL_ATMEL=y 52 59 CONFIG_SERIAL_ATMEL_CONSOLE=y ··· 61 54 CONFIG_I2C=y 62 55 CONFIG_I2C_CHARDEV=y 63 56 CONFIG_I2C_GPIO=y 64 - CONFIG_SPI=y 65 - CONFIG_SPI_ATMEL=y 66 57 # CONFIG_HWMON is not set 67 58 CONFIG_WATCHDOG=y 68 59 CONFIG_WATCHDOG_NOWAYOUT=y 69 60 CONFIG_AT91SAM9X_WATCHDOG=y 70 - CONFIG_FB=y 71 - CONFIG_FB_ATMEL=y 72 - # CONFIG_VGA_CONSOLE is not set 73 - CONFIG_MMC=y 74 - CONFIG_MMC_AT91=m 61 + # CONFIG_USB_HID is not set 62 + CONFIG_USB=y 63 + CONFIG_USB_DEVICEFS=y 64 + CONFIG_USB_MON=y 65 + CONFIG_USB_OHCI_HCD=y 66 + CONFIG_USB_STORAGE=y 67 + CONFIG_USB_STORAGE_DEBUG=y 68 + CONFIG_USB_GADGET=y 69 + CONFIG_USB_ZERO=m 70 + CONFIG_USB_GADGETFS=m 71 + CONFIG_USB_FILE_STORAGE=m 72 + CONFIG_USB_G_SERIAL=m 75 73 CONFIG_RTC_CLASS=y 76 74 CONFIG_RTC_DRV_AT91SAM9=y 77 75 CONFIG_EXT2_FS=y 78 - CONFIG_INOTIFY=y 79 - CONFIG_MSDOS_FS=y 80 76 CONFIG_VFAT_FS=y 81 77 CONFIG_TMPFS=y 82 78 CONFIG_CRAMFS=y 83 79 CONFIG_NLS_CODEPAGE_437=y 84 80 CONFIG_NLS_CODEPAGE_850=y 85 81 CONFIG_NLS_ISO8859_1=y 86 - CONFIG_NLS_ISO8859_15=y 87 - CONFIG_NLS_UTF8=y 88 82 CONFIG_DEBUG_KERNEL=y 89 - CONFIG_DEBUG_INFO=y 90 83 CONFIG_DEBUG_USER=y 91 84 CONFIG_DEBUG_LL=y
+1 -1
arch/arm/configs/ezx_defconfig
··· 287 287 # CONFIG_USB_DEVICE_CLASS is not set 288 288 CONFIG_USB_OHCI_HCD=y 289 289 CONFIG_USB_GADGET=y 290 - CONFIG_USB_GADGET_PXA27X=y 290 + CONFIG_USB_PXA27X=y 291 291 CONFIG_USB_ETH=m 292 292 # CONFIG_USB_ETH_RNDIS is not set 293 293 CONFIG_MMC=y
+1 -1
arch/arm/configs/imote2_defconfig
··· 263 263 # CONFIG_USB_DEVICE_CLASS is not set 264 264 CONFIG_USB_OHCI_HCD=y 265 265 CONFIG_USB_GADGET=y 266 - CONFIG_USB_GADGET_PXA27X=y 266 + CONFIG_USB_PXA27X=y 267 267 CONFIG_USB_ETH=m 268 268 # CONFIG_USB_ETH_RNDIS is not set 269 269 CONFIG_MMC=y
+1 -1
arch/arm/configs/magician_defconfig
··· 132 132 CONFIG_USB_OHCI_HCD=y 133 133 CONFIG_USB_GADGET=y 134 134 CONFIG_USB_GADGET_VBUS_DRAW=500 135 - CONFIG_USB_GADGET_PXA27X=y 135 + CONFIG_USB_PXA27X=y 136 136 CONFIG_USB_ETH=m 137 137 # CONFIG_USB_ETH_RNDIS is not set 138 138 CONFIG_USB_GADGETFS=m
-1
arch/arm/configs/omap1_defconfig
··· 48 48 CONFIG_MACH_NOKIA770=y 49 49 CONFIG_MACH_AMS_DELTA=y 50 50 CONFIG_MACH_OMAP_GENERIC=y 51 - CONFIG_OMAP_CLOCKS_SET_BY_BOOTLOADER=y 52 51 CONFIG_OMAP_ARM_216MHZ=y 53 52 CONFIG_OMAP_ARM_195MHZ=y 54 53 CONFIG_OMAP_ARM_192MHZ=y
+6 -7
arch/arm/configs/u300_defconfig
··· 14 14 CONFIG_ARCH_U300=y 15 15 CONFIG_MACH_U300=y 16 16 CONFIG_MACH_U300_BS335=y 17 - CONFIG_MACH_U300_DUAL_RAM=y 18 - CONFIG_U300_DEBUG=y 19 17 CONFIG_MACH_U300_SPIDUMMY=y 20 18 CONFIG_NO_HZ=y 21 19 CONFIG_HIGH_RES_TIMERS=y ··· 24 26 CONFIG_CMDLINE="root=/dev/ram0 rw rootfstype=rootfs console=ttyAMA0,115200n8 lpj=515072" 25 27 CONFIG_CPU_IDLE=y 26 28 CONFIG_FPE_NWFPE=y 27 - CONFIG_PM=y 28 29 # CONFIG_SUSPEND is not set 29 30 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 30 31 # CONFIG_PREVENT_FIRMWARE_BUILD is not set 31 - # CONFIG_MISC_DEVICES is not set 32 + CONFIG_MTD=y 33 + CONFIG_MTD_CMDLINE_PARTS=y 34 + CONFIG_MTD_NAND=y 35 + CONFIG_MTD_NAND_FSMC=y 32 36 # CONFIG_INPUT_MOUSEDEV is not set 33 37 CONFIG_INPUT_EVDEV=y 34 38 # CONFIG_KEYBOARD_ATKBD is not set 35 39 # CONFIG_INPUT_MOUSE is not set 36 40 # CONFIG_SERIO is not set 41 + CONFIG_LEGACY_PTY_COUNT=16 37 42 CONFIG_SERIAL_AMBA_PL011=y 38 43 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y 39 - CONFIG_LEGACY_PTY_COUNT=16 40 44 # CONFIG_HW_RANDOM is not set 41 45 CONFIG_I2C=y 42 46 # CONFIG_HWMON is not set ··· 51 51 # CONFIG_HID_SUPPORT is not set 52 52 # CONFIG_USB_SUPPORT is not set 53 53 CONFIG_MMC=y 54 + CONFIG_MMC_CLKGATE=y 54 55 CONFIG_MMC_ARMMMCI=y 55 56 CONFIG_RTC_CLASS=y 56 57 # CONFIG_RTC_HCTOSYS is not set ··· 66 65 CONFIG_NLS_ISO8859_1=y 67 66 CONFIG_PRINTK_TIME=y 68 67 CONFIG_DEBUG_FS=y 69 - CONFIG_DEBUG_KERNEL=y 70 68 # CONFIG_SCHED_DEBUG is not set 71 69 CONFIG_TIMER_STATS=y 72 70 # CONFIG_DEBUG_PREEMPT is not set 73 71 CONFIG_DEBUG_INFO=y 74 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 75 72 # CONFIG_CRC32 is not set
+5 -9
arch/arm/configs/u8500_defconfig
··· 10 10 CONFIG_ARCH_U8500=y 11 11 CONFIG_UX500_SOC_DB5500=y 12 12 CONFIG_UX500_SOC_DB8500=y 13 - CONFIG_MACH_U8500=y 13 + CONFIG_MACH_HREFV60=y 14 14 CONFIG_MACH_SNOWBALL=y 15 15 CONFIG_MACH_U5500=y 16 16 CONFIG_NO_HZ=y ··· 24 24 CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y 25 25 CONFIG_VFP=y 26 26 CONFIG_NEON=y 27 + CONFIG_PM_RUNTIME=y 27 28 CONFIG_NET=y 28 29 CONFIG_PACKET=y 29 30 CONFIG_UNIX=y ··· 42 41 CONFIG_AB8500_PWM=y 43 42 CONFIG_SENSORS_BH1780=y 44 43 CONFIG_NETDEVICES=y 45 - CONFIG_SMSC_PHY=y 46 - CONFIG_NET_ETHERNET=y 47 44 CONFIG_SMSC911X=y 48 - # CONFIG_NETDEV_1000 is not set 49 - # CONFIG_NETDEV_10000 is not set 45 + CONFIG_SMSC_PHY=y 50 46 # CONFIG_WLAN is not set 51 47 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 52 48 CONFIG_INPUT_EVDEV=y ··· 70 72 CONFIG_SPI_PL022=y 71 73 CONFIG_GPIO_STMPE=y 72 74 CONFIG_GPIO_TC3589X=y 73 - # CONFIG_HWMON is not set 74 75 CONFIG_MFD_STMPE=y 75 76 CONFIG_MFD_TC3589X=y 77 + CONFIG_AB5500_CORE=y 76 78 CONFIG_AB8500_CORE=y 77 79 CONFIG_REGULATOR_AB8500=y 78 80 # CONFIG_HID_SUPPORT is not set 79 - CONFIG_USB_MUSB_HDRC=y 80 - CONFIG_USB_GADGET_MUSB_HDRC=y 81 - CONFIG_MUSB_PIO_ONLY=y 82 81 CONFIG_USB_GADGET=y 83 82 CONFIG_AB8500_USB=y 84 83 CONFIG_MMC=y ··· 92 97 CONFIG_STE_DMA40=y 93 98 CONFIG_STAGING=y 94 99 CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y 100 + CONFIG_HSEM_U8500=y 95 101 CONFIG_EXT2_FS=y 96 102 CONFIG_EXT2_FS_XATTR=y 97 103 CONFIG_EXT2_FS_POSIX_ACL=y
+1 -1
arch/arm/configs/zeus_defconfig
··· 140 140 CONFIG_USB_SERIAL_GENERIC=y 141 141 CONFIG_USB_SERIAL_MCT_U232=m 142 142 CONFIG_USB_GADGET=m 143 - CONFIG_USB_GADGET_PXA27X=y 143 + CONFIG_USB_PXA27X=y 144 144 CONFIG_USB_ETH=m 145 145 CONFIG_USB_GADGETFS=m 146 146 CONFIG_USB_FILE_STORAGE=m
-10
arch/arm/include/asm/pmu.h
··· 55 55 extern void 56 56 release_pmu(enum arm_pmu_type type); 57 57 58 - /** 59 - * init_pmu() - Initialise the PMU. 60 - * 61 - * Initialise the system ready for PMU enabling. This should typically set the 62 - * IRQ affinity and nothing else. The users (oprofile/perf events etc) will do 63 - * the actual hardware initialisation. 64 - */ 65 - extern int 66 - init_pmu(enum arm_pmu_type type); 67 - 68 58 #else /* CONFIG_CPU_HAS_PMU */ 69 59 70 60 #include <linux/err.h>
+1 -1
arch/arm/include/asm/topology.h
··· 25 25 26 26 void init_cpu_topology(void); 27 27 void store_cpu_topology(unsigned int cpuid); 28 - const struct cpumask *cpu_coregroup_mask(unsigned int cpu); 28 + const struct cpumask *cpu_coregroup_mask(int cpu); 29 29 30 30 #else 31 31
+1 -1
arch/arm/kernel/entry-armv.S
··· 497 497 .popsection 498 498 .pushsection __ex_table,"a" 499 499 .long 1b, 4b 500 - #if __LINUX_ARM_ARCH__ >= 7 500 + #if CONFIG_ARM_THUMB && __LINUX_ARM_ARCH__ >= 6 && CONFIG_CPU_V7 501 501 .long 2b, 4b 502 502 .long 3b, 4b 503 503 #endif
+3 -1
arch/arm/kernel/kprobes-arm.c
··· 519 519 static const union decode_item arm_cccc_0001_____1001_table[] = { 520 520 /* Synchronization primitives */ 521 521 522 + #if __LINUX_ARM_ARCH__ < 6 523 + /* Deprecated on ARMv6 and may be UNDEFINED on v7 */ 522 524 /* SMP/SWPB cccc 0001 0x00 xxxx xxxx xxxx 1001 xxxx */ 523 525 DECODE_EMULATEX (0x0fb000f0, 0x01000090, emulate_rd12rn16rm0_rwflags_nopc, 524 526 REGS(NOPC, NOPC, 0, 0, NOPC)), 525 - 527 + #endif 526 528 /* LDREX/STREX{,D,B,H} cccc 0001 1xxx xxxx xxxx xxxx 1001 xxxx */ 527 529 /* And unallocated instructions... */ 528 530 DECODE_END
+17 -10
arch/arm/kernel/kprobes-test-arm.c
··· 427 427 428 428 TEST_GROUP("Synchronization primitives") 429 429 430 - /* 431 - * Use hard coded constants for SWP instructions to avoid warnings 432 - * about deprecated instructions. 433 - */ 434 - TEST_RP( ".word 0xe108e097 @ swp lr, r",7,VAL2,", [r",8,0,"]") 435 - TEST_R( ".word 0x610d0091 @ swpvs r0, r",1,VAL1,", [sp]") 436 - TEST_RP( ".word 0xe10cd09e @ swp sp, r",14,VAL2,", [r",12,13*4,"]") 430 + #if __LINUX_ARM_ARCH__ < 6 431 + TEST_RP("swp lr, r",7,VAL2,", [r",8,0,"]") 432 + TEST_R( "swpvs r0, r",1,VAL1,", [sp]") 433 + TEST_RP("swp sp, r",14,VAL2,", [r",12,13*4,"]") 434 + #else 435 + TEST_UNSUPPORTED(".word 0xe108e097 @ swp lr, r7, [r8]") 436 + TEST_UNSUPPORTED(".word 0x610d0091 @ swpvs r0, r1, [sp]") 437 + TEST_UNSUPPORTED(".word 0xe10cd09e @ swp sp, r14 [r12]") 438 + #endif 437 439 TEST_UNSUPPORTED(".word 0xe102f091 @ swp pc, r1, [r2]") 438 440 TEST_UNSUPPORTED(".word 0xe102009f @ swp r0, pc, [r2]") 439 441 TEST_UNSUPPORTED(".word 0xe10f0091 @ swp r0, r1, [pc]") 440 - TEST_RP( ".word 0xe148e097 @ swpb lr, r",7,VAL2,", [r",8,0,"]") 441 - TEST_R( ".word 0x614d0091 @ swpvsb r0, r",1,VAL1,", [sp]") 442 + #if __LINUX_ARM_ARCH__ < 6 443 + TEST_RP("swpb lr, r",7,VAL2,", [r",8,0,"]") 444 + TEST_R( "swpvsb r0, r",1,VAL1,", [sp]") 445 + #else 446 + TEST_UNSUPPORTED(".word 0xe148e097 @ swpb lr, r7, [r8]") 447 + TEST_UNSUPPORTED(".word 0x614d0091 @ swpvsb r0, r1, [sp]") 448 + #endif 442 449 TEST_UNSUPPORTED(".word 0xe142f091 @ swpb pc, r1, [r2]") 443 450 444 451 TEST_UNSUPPORTED(".word 0xe1100090") /* Unallocated space */ ··· 557 550 TEST_RPR( "strccd r",8, VAL2,", [r",13,0, ", r",12,48,"]") 558 551 TEST_RPR( "strd r",4, VAL1,", [r",2, 24,", r",3, 48,"]!") 559 552 TEST_RPR( "strcsd r",12,VAL2,", [r",11,48,", -r",10,24,"]!") 560 - TEST_RPR( "strd r",2, VAL1,", [r",3, 24,"], r",4,48,"") 553 + TEST_RPR( "strd r",2, VAL1,", [r",5, 24,"], r",4,48,"") 561 554 TEST_RPR( "strd r",10,VAL2,", [r",9, 48,"], -r",7,24,"") 562 555 TEST_UNSUPPORTED(".word 0xe1afc0fa @ strd r12, [pc, r10]!")
563 556
+8 -8
arch/arm/kernel/kprobes-test-thumb.c
··· 222 222 DONT_TEST_IN_ITBLOCK( 223 223 TEST_BF_R( "cbnz r",0,0, ", 2f") 224 224 TEST_BF_R( "cbz r",2,-1,", 2f") 225 - TEST_BF_RX( "cbnz r",4,1, ", 2f",0x20) 226 - TEST_BF_RX( "cbz r",7,0, ", 2f",0x40) 225 + TEST_BF_RX( "cbnz r",4,1, ", 2f", SPACE_0x20) 226 + TEST_BF_RX( "cbz r",7,0, ", 2f", SPACE_0x40) 227 227 ) 228 228 TEST_R("sxth r0, r",7, HH1,"") 229 229 TEST_R("sxth r7, r",0, HH2,"") ··· 246 246 TESTCASE_START(code) \ 247 247 TEST_ARG_PTR(13, offset) \ 248 248 TEST_ARG_END("") \ 249 - TEST_BRANCH_F(code,0) \ 249 + TEST_BRANCH_F(code) \ 250 250 TESTCASE_END 251 251 252 252 TEST("push {r0}") ··· 319 319 320 320 TEST_BF( "b 2f") 321 321 TEST_BB( "b 2b") 322 - TEST_BF_X("b 2f", 0x400) 323 - TEST_BB_X("b 2b", 0x400) 322 + TEST_BF_X("b 2f", SPACE_0x400) 323 + TEST_BB_X("b 2b", SPACE_0x400) 324 324 325 325 TEST_GROUP("Testing instructions in IT blocks") 326 326 ··· 746 746 TEST_BB("bne.w 2b") 747 747 TEST_BF("bgt.w 2f") 748 748 TEST_BB("blt.w 2b") 749 - TEST_BF_X("bpl.w 2f",0x1000) 749 + TEST_BF_X("bpl.w 2f", SPACE_0x1000) 750 750 ) 751 751 752 752 TEST_UNSUPPORTED("msr cpsr, r0") ··· 786 786 787 787 TEST_BF( "b.w 2f") 788 788 TEST_BB( "b.w 2b") 789 - TEST_BF_X("b.w 2f", 0x1000) 789 + TEST_BF_X("b.w 2f", SPACE_0x1000) 790 790 791 791 TEST_BF( "bl.w 2f") 792 792 TEST_BB( "bl.w 2b") 793 - TEST_BB_X("bl.w 2b", 0x1000) 793 + TEST_BB_X("bl.w 2b", SPACE_0x1000) 794 794 795 795 TEST_X( "blx __dummy_arm_subroutine", 796 796 ".arm \n\t"
+71 -31
arch/arm/kernel/kprobes-test.h
··· 149 149 "1: "instruction" \n\t" \ 150 150 " nop \n\t" 151 151 152 - #define TEST_BRANCH_F(instruction, xtra_dist) \ 152 + #define TEST_BRANCH_F(instruction) \ 153 153 TEST_INSTRUCTION(instruction) \ 154 - ".if "#xtra_dist" \n\t" \ 155 - " b 99f \n\t" \ 156 - ".space "#xtra_dist" \n\t" \ 157 - ".endif \n\t" \ 158 154 " b 99f \n\t" \ 159 155 "2: nop \n\t" 160 156 161 - #define TEST_BRANCH_B(instruction, xtra_dist) \ 157 + #define TEST_BRANCH_B(instruction) \ 162 158 " b 50f \n\t" \ 163 159 " b 99f \n\t" \ 164 160 "2: nop \n\t" \ 165 161 " b 99f \n\t" \ 166 - ".if "#xtra_dist" \n\t" \ 167 - ".space "#xtra_dist" \n\t" \ 168 - ".endif \n\t" \ 162 + TEST_INSTRUCTION(instruction) 163 + 164 + #define TEST_BRANCH_FX(instruction, codex) \ 165 + TEST_INSTRUCTION(instruction) \ 166 + " b 99f \n\t" \ 167 + codex" \n\t" \ 168 + " b 99f \n\t" \ 169 + "2: nop \n\t" 170 + 171 + #define TEST_BRANCH_BX(instruction, codex) \ 172 + " b 50f \n\t" \ 173 + " b 99f \n\t" \ 174 + "2: nop \n\t" \ 175 + " b 99f \n\t" \ 176 + codex" \n\t" \ 169 177 TEST_INSTRUCTION(instruction) 170 178 171 179 #define TESTCASE_END \ ··· 309 301 TESTCASE_START(code1 #reg1 code2) \ 310 302 TEST_ARG_PTR(reg1, val1) \ 311 303 TEST_ARG_END("") \ 312 - TEST_BRANCH_F(code1 #reg1 code2, 0) \ 304 + TEST_BRANCH_F(code1 #reg1 code2) \ 313 305 TESTCASE_END 314 306 315 - #define TEST_BF_X(code, xtra_dist) \ 307 + #define TEST_BF(code) \ 316 308 TESTCASE_START(code) \ 317 309 TEST_ARG_END("") \ 318 - TEST_BRANCH_F(code, xtra_dist) \ 310 + TEST_BRANCH_F(code) \ 319 311 TESTCASE_END 320 312 321 - #define TEST_BB_X(code, xtra_dist) \ 313 + #define TEST_BB(code) \ 322 314 TESTCASE_START(code) \ 323 315 TEST_ARG_END("") \ 324 - TEST_BRANCH_B(code, xtra_dist) \ 316 + TEST_BRANCH_B(code) \ 325 317 TESTCASE_END 326 318 327 - #define TEST_BF_RX(code1, reg, val, code2, xtra_dist) \ 328 - TESTCASE_START(code1 #reg code2) \ 329 - TEST_ARG_REG(reg, val) \ 330 - TEST_ARG_END("") \ 331 - TEST_BRANCH_F(code1 #reg code2, xtra_dist) \ 
319 + #define TEST_BF_R(code1, reg, val, code2) \ 320 + TESTCASE_START(code1 #reg code2) \ 321 + TEST_ARG_REG(reg, val) \ 322 + TEST_ARG_END("") \ 323 + TEST_BRANCH_F(code1 #reg code2) \ 332 324 TESTCASE_END 333 325 334 - #define TEST_BB_RX(code1, reg, val, code2, xtra_dist) \ 335 - TESTCASE_START(code1 #reg code2) \ 336 - TEST_ARG_REG(reg, val) \ 337 - TEST_ARG_END("") \ 338 - TEST_BRANCH_B(code1 #reg code2, xtra_dist) \ 326 + #define TEST_BB_R(code1, reg, val, code2) \ 327 + TESTCASE_START(code1 #reg code2) \ 328 + TEST_ARG_REG(reg, val) \ 329 + TEST_ARG_END("") \ 330 + TEST_BRANCH_B(code1 #reg code2) \ 339 331 TESTCASE_END 340 - 341 - #define TEST_BF(code) TEST_BF_X(code, 0) 342 - #define TEST_BB(code) TEST_BB_X(code, 0) 343 - 344 - #define TEST_BF_R(code1, reg, val, code2) TEST_BF_RX(code1, reg, val, code2, 0) 345 - #define TEST_BB_R(code1, reg, val, code2) TEST_BB_RX(code1, reg, val, code2, 0) 346 332 347 333 #define TEST_BF_RR(code1, reg1, val1, code2, reg2, val2, code3) \ 348 334 TESTCASE_START(code1 #reg1 code2 #reg2 code3) \ 349 335 TEST_ARG_REG(reg1, val1) \ 350 336 TEST_ARG_REG(reg2, val2) \ 351 337 TEST_ARG_END("") \ 352 - TEST_BRANCH_F(code1 #reg1 code2 #reg2 code3, 0) \ 338 + TEST_BRANCH_F(code1 #reg1 code2 #reg2 code3) \ 339 + TESTCASE_END 340 + 341 + #define TEST_BF_X(code, codex) \ 342 + TESTCASE_START(code) \ 343 + TEST_ARG_END("") \ 344 + TEST_BRANCH_FX(code, codex) \ 345 + TESTCASE_END 346 + 347 + #define TEST_BB_X(code, codex) \ 348 + TESTCASE_START(code) \ 349 + TEST_ARG_END("") \ 350 + TEST_BRANCH_BX(code, codex) \ 351 + TESTCASE_END 352 + 353 + #define TEST_BF_RX(code1, reg, val, code2, codex) \ 354 + TESTCASE_START(code1 #reg code2) \ 355 + TEST_ARG_REG(reg, val) \ 356 + TEST_ARG_END("") \ 357 + TEST_BRANCH_FX(code1 #reg code2, codex) \ 353 358 TESTCASE_END 354 359 355 360 #define TEST_X(code, codex) \ ··· 391 370 " b 99f \n\t" \ 392 371 " "codex" \n\t" \ 393 372 TESTCASE_END 373 + 374 + 375 + /* 376 + * Macros for defining space directives spread over multiple lines.
377 + * These are required so the compiler guesses better the length of inline asm 378 + * code and will spill the literal pool early enough to avoid generating PC 379 + * relative loads with out of range offsets. 380 + */ 381 + #define TWICE(x) x x 382 + #define SPACE_0x8 TWICE(".space 4\n\t") 383 + #define SPACE_0x10 TWICE(SPACE_0x8) 384 + #define SPACE_0x20 TWICE(SPACE_0x10) 385 + #define SPACE_0x40 TWICE(SPACE_0x20) 386 + #define SPACE_0x80 TWICE(SPACE_0x40) 387 + #define SPACE_0x100 TWICE(SPACE_0x80) 388 + #define SPACE_0x200 TWICE(SPACE_0x100) 389 + #define SPACE_0x400 TWICE(SPACE_0x200) 390 + #define SPACE_0x800 TWICE(SPACE_0x400) 391 + #define SPACE_0x1000 TWICE(SPACE_0x800) 394 392 395 393 396 394 /* Various values used in test cases... */
+10 -1
arch/arm/kernel/perf_event.c
··· 343 343 { 344 344 struct perf_event *sibling, *leader = event->group_leader; 345 345 struct pmu_hw_events fake_pmu; 346 + DECLARE_BITMAP(fake_used_mask, ARMPMU_MAX_HWEVENTS); 346 347 347 - memset(&fake_pmu, 0, sizeof(fake_pmu)); 348 + /* 349 + * Initialise the fake PMU. We only need to populate the 350 + * used_mask for the purposes of validation. 351 + */ 352 + memset(fake_used_mask, 0, sizeof(fake_used_mask)); 353 + fake_pmu.used_mask = fake_used_mask; 348 354 349 355 if (!validate_event(&fake_pmu, leader)) 350 356 return -ENOSPC; ··· 401 395 irq_handler_t handle_irq; 402 396 int i, err, irq, irqs; 403 397 struct platform_device *pmu_device = armpmu->plat_device; 398 + 399 + if (!pmu_device) 400 + return -ENODEV; 404 401 405 402 err = reserve_pmu(armpmu->type); 406 403 if (err) {
+1
arch/arm/kernel/pmu.c
··· 33 33 { 34 34 clear_bit_unlock(type, pmu_lock); 35 35 } 36 + EXPORT_SYMBOL_GPL(release_pmu);
+3
arch/arm/kernel/process.c
··· 192 192 #endif 193 193 194 194 local_irq_disable(); 195 + #ifdef CONFIG_PL310_ERRATA_769419 196 + wmb(); 197 + #endif 195 198 if (hlt_counter) { 196 199 local_irq_enable(); 197 200 cpu_relax();
+1 -1
arch/arm/kernel/topology.c
··· 43 43 44 44 struct cputopo_arm cpu_topology[NR_CPUS]; 45 45 46 - const struct cpumask *cpu_coregroup_mask(unsigned int cpu) 46 + const struct cpumask *cpu_coregroup_mask(int cpu) 47 47 { 48 48 return &cpu_topology[cpu].core_sibling; 49 49 }
+22 -4
arch/arm/lib/bitops.h
··· 1 + #include <asm/unwind.h> 2 + 1 3 #if __LINUX_ARM_ARCH__ >= 6 2 - .macro bitop, instr 4 + .macro bitop, name, instr 5 + ENTRY( \name ) 6 + UNWIND( .fnstart ) 3 7 ands ip, r1, #3 4 8 strneb r1, [ip] @ assert word-aligned 5 9 mov r2, #1 ··· 17 13 cmp r0, #0 18 14 bne 1b 19 15 bx lr 16 + UNWIND( .fnend ) 17 + ENDPROC(\name ) 20 18 .endm 21 19 22 - .macro testop, instr, store 20 + .macro testop, name, instr, store 21 + ENTRY( \name ) 22 + UNWIND( .fnstart ) 23 23 ands ip, r1, #3 24 24 strneb r1, [ip] @ assert word-aligned 25 25 mov r2, #1 ··· 42 34 cmp r0, #0 43 35 movne r0, #1 44 36 2: bx lr 37 + UNWIND( .fnend ) 38 + ENDPROC(\name ) 45 39 .endm 46 40 #else 47 - .macro bitop, instr 41 + .macro bitop, name, instr 42 + ENTRY( \name ) 43 + UNWIND( .fnstart ) 48 44 ands ip, r1, #3 49 45 strneb r1, [ip] @ assert word-aligned 50 46 and r2, r0, #31 ··· 61 49 str r2, [r1, r0, lsl #2] 62 50 restore_irqs ip 63 51 mov pc, lr 52 + UNWIND( .fnend ) 53 + ENDPROC(\name ) 64 54 .endm 65 55 66 56 /** ··· 73 59 * Note: we can trivially conditionalise the store instruction 74 60 * to avoid dirtying the data cache. 75 61 */ 76 - .macro testop, instr, store 62 + .macro testop, name, instr, store 63 + ENTRY( \name ) 64 + UNWIND( .fnstart ) 77 65 ands ip, r1, #3 78 66 strneb r1, [ip] @ assert word-aligned 79 67 and r3, r0, #31 ··· 89 73 moveq r0, #0 90 74 restore_irqs ip 91 75 mov pc, lr 76 + UNWIND( .fnend ) 77 + ENDPROC(\name ) 92 78 .endm 93 79 #endif
+1 -3
arch/arm/lib/changebit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_change_bit) 16 - bitop eor 17 - ENDPROC(_change_bit) 15 + bitop _change_bit, eor
+1 -3
arch/arm/lib/clearbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_clear_bit) 16 - bitop bic 17 - ENDPROC(_clear_bit) 15 + bitop _clear_bit, bic
+1 -3
arch/arm/lib/setbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_set_bit) 16 - bitop orr 17 - ENDPROC(_set_bit) 15 + bitop _set_bit, orr
+1 -3
arch/arm/lib/testchangebit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_test_and_change_bit) 16 - testop eor, str 17 - ENDPROC(_test_and_change_bit) 15 + testop _test_and_change_bit, eor, str
+1 -3
arch/arm/lib/testclearbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_test_and_clear_bit) 16 - testop bicne, strne 17 - ENDPROC(_test_and_clear_bit) 15 + testop _test_and_clear_bit, bicne, strne
+1 -3
arch/arm/lib/testsetbit.S
··· 12 12 #include "bitops.h" 13 13 .text 14 14 15 - ENTRY(_test_and_set_bit) 16 - testop orreq, streq 17 - ENDPROC(_test_and_set_bit) 15 + testop _test_and_set_bit, orreq, streq
+2
arch/arm/mach-exynos/cpuidle.c
··· 12 12 #include <linux/init.h> 13 13 #include <linux/cpuidle.h> 14 14 #include <linux/io.h> 15 + #include <linux/export.h> 16 + #include <linux/time.h> 15 17 16 18 #include <asm/proc-fns.h> 17 19
+4
arch/arm/mach-highbank/highbank.c
··· 22 22 #include <linux/of_irq.h> 23 23 #include <linux/of_platform.h> 24 24 #include <linux/of_address.h> 25 + #include <linux/smp.h> 25 26 26 27 #include <asm/cacheflush.h> 27 28 #include <asm/unified.h> ··· 73 72 74 73 void highbank_set_cpu_jump(int cpu, void *jump_addr) 75 74 { 75 + #ifdef CONFIG_SMP 76 + cpu = cpu_logical_map(cpu); 77 + #endif 76 78 writel(BSYM(virt_to_phys(jump_addr)), HB_JUMP_TABLE_VIRT(cpu)); 77 79 __cpuc_flush_dcache_area(HB_JUMP_TABLE_VIRT(cpu), 16); 78 80 outer_clean_range(HB_JUMP_TABLE_PHYS(cpu),
-13
arch/arm/mach-imx/Kconfig
··· 10 10 config HAVE_IMX_SRC 11 11 bool 12 12 13 - # 14 - # ARCH_MX31 and ARCH_MX35 are left for compatibility 15 - # Some usages assume that having one of them implies not having (e.g.) ARCH_MX2. 16 - # To easily distinguish good and reviewed from unreviewed usages new (and IMHO 17 - # more sensible) names are used: SOC_IMX31 and SOC_IMX35 18 13 config ARCH_MX1 19 14 bool 20 15 ··· 20 25 bool 21 26 22 27 config MACH_MX27 23 - bool 24 - 25 - config ARCH_MX31 26 - bool 27 - 28 - config ARCH_MX35 29 28 bool 30 29 31 30 config SOC_IMX1 ··· 61 72 select CPU_V6 62 73 select IMX_HAVE_PLATFORM_MXC_RNGA 63 74 select ARCH_MXC_AUDMUX_V2 64 - select ARCH_MX31 65 75 select MXC_AVIC 66 76 select SMP_ON_UP if SMP 67 77 ··· 70 82 select ARCH_MXC_IOMUX_V3 71 83 select ARCH_MXC_AUDMUX_V2 72 84 select HAVE_EPIT 73 - select ARCH_MX35 74 85 select MXC_AVIC 75 86 select SMP_ON_UP if SMP 76 87
+5 -2
arch/arm/mach-imx/clock-imx6q.c
··· 1953 1953 imx_map_entry(MX6Q, ANATOP, MT_DEVICE), 1954 1954 }; 1955 1955 1956 + void __init imx6q_clock_map_io(void) 1957 + { 1958 + iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc)); 1959 + } 1960 + 1956 1961 int __init mx6q_clocks_init(void) 1957 1962 { 1958 1963 struct device_node *np; 1959 1964 void __iomem *base; 1960 1965 int i, irq; 1961 - 1962 - iotable_init(imx6q_clock_desc, ARRAY_SIZE(imx6q_clock_desc)); 1963 1966 1964 1967 /* retrieve the freqency of fixed clocks from device tree */ 1965 1968 for_each_compatible_node(np, NULL, "fixed-clock") {
+1
arch/arm/mach-imx/mach-imx6q.c
··· 34 34 { 35 35 imx_lluart_map_io(); 36 36 imx_scu_map_io(); 37 + imx6q_clock_map_io(); 37 38 } 38 39 39 40 static void __init imx6q_gpio_add_irq_domain(struct device_node *np,
+58 -51
arch/arm/mach-imx/mm-imx3.c
··· 33 33 static void imx3_idle(void) 34 34 { 35 35 unsigned long reg = 0; 36 - __asm__ __volatile__( 37 - /* disable I and D cache */ 38 - "mrc p15, 0, %0, c1, c0, 0\n" 39 - "bic %0, %0, #0x00001000\n" 40 - "bic %0, %0, #0x00000004\n" 41 - "mcr p15, 0, %0, c1, c0, 0\n" 42 - /* invalidate I cache */ 43 - "mov %0, #0\n" 44 - "mcr p15, 0, %0, c7, c5, 0\n" 45 - /* clear and invalidate D cache */ 46 - "mov %0, #0\n" 47 - "mcr p15, 0, %0, c7, c14, 0\n" 48 - /* WFI */ 49 - "mov %0, #0\n" 50 - "mcr p15, 0, %0, c7, c0, 4\n" 51 - "nop\n" "nop\n" "nop\n" "nop\n" 52 - "nop\n" "nop\n" "nop\n" 53 - /* enable I and D cache */ 54 - "mrc p15, 0, %0, c1, c0, 0\n" 55 - "orr %0, %0, #0x00001000\n" 56 - "orr %0, %0, #0x00000004\n" 57 - "mcr p15, 0, %0, c1, c0, 0\n" 58 - : "=r" (reg)); 36 + 37 + if (!need_resched()) 38 + __asm__ __volatile__( 39 + /* disable I and D cache */ 40 + "mrc p15, 0, %0, c1, c0, 0\n" 41 + "bic %0, %0, #0x00001000\n" 42 + "bic %0, %0, #0x00000004\n" 43 + "mcr p15, 0, %0, c1, c0, 0\n" 44 + /* invalidate I cache */ 45 + "mov %0, #0\n" 46 + "mcr p15, 0, %0, c7, c5, 0\n" 47 + /* clear and invalidate D cache */ 48 + "mov %0, #0\n" 49 + "mcr p15, 0, %0, c7, c14, 0\n" 50 + /* WFI */ 51 + "mov %0, #0\n" 52 + "mcr p15, 0, %0, c7, c0, 4\n" 53 + "nop\n" "nop\n" "nop\n" "nop\n" 54 + "nop\n" "nop\n" "nop\n" 55 + /* enable I and D cache */ 56 + "mrc p15, 0, %0, c1, c0, 0\n" 57 + "orr %0, %0, #0x00001000\n" 58 + "orr %0, %0, #0x00000004\n" 59 + "mcr p15, 0, %0, c1, c0, 0\n" 60 + : "=r" (reg)); 61 + local_irq_enable(); 59 62 } 60 63 61 64 static void __iomem *imx3_ioremap(unsigned long phys_addr, size_t size, ··· 111 108 l2x0_init(l2x0_base, 0x00030024, 0x00000000); 112 109 } 113 110 111 + #ifdef CONFIG_SOC_IMX31 114 112 static struct map_desc mx31_io_desc[] __initdata = { 115 113 imx_map_entry(MX31, X_MEMC, MT_DEVICE), 116 114 imx_map_entry(MX31, AVIC, MT_DEVICE_NONSHARED), ··· 130 126 iotable_init(mx31_io_desc, ARRAY_SIZE(mx31_io_desc)); 131 127 } 132 128 133 - static struct map_desc mx35_io_desc[] __initdata = {
134 - imx_map_entry(MX35, X_MEMC, MT_DEVICE), 135 - imx_map_entry(MX35, AVIC, MT_DEVICE_NONSHARED), 136 - imx_map_entry(MX35, AIPS1, MT_DEVICE_NONSHARED), 137 - imx_map_entry(MX35, AIPS2, MT_DEVICE_NONSHARED), 138 - imx_map_entry(MX35, SPBA0, MT_DEVICE_NONSHARED), 139 - }; 140 - 141 - void __init mx35_map_io(void) 142 - { 143 - iotable_init(mx35_io_desc, ARRAY_SIZE(mx35_io_desc)); 144 - } 145 - 146 129 void __init imx31_init_early(void) 147 130 { 148 131 mxc_set_cpu_type(MXC_CPU_MX31); 149 132 mxc_arch_reset_init(MX31_IO_ADDRESS(MX31_WDOG_BASE_ADDR)); 150 - imx_idle = imx3_idle; 151 - imx_ioremap = imx3_ioremap; 152 - } 153 - 154 - void __init imx35_init_early(void) 155 - { 156 - mxc_set_cpu_type(MXC_CPU_MX35); 157 - mxc_iomux_v3_init(MX35_IO_ADDRESS(MX35_IOMUXC_BASE_ADDR)); 158 - mxc_arch_reset_init(MX35_IO_ADDRESS(MX35_WDOG_BASE_ADDR)); 159 - imx_idle = imx3_idle; 133 + pm_idle = imx3_idle; 160 134 imx_ioremap = imx3_ioremap; 161 135 } 162 136 163 137 void __init mx31_init_irq(void) 164 138 { 165 139 mxc_init_irq(MX31_IO_ADDRESS(MX31_AVIC_BASE_ADDR)); 166 - } 167 - 168 - void __init mx35_init_irq(void) 169 - { 170 - mxc_init_irq(MX35_IO_ADDRESS(MX35_AVIC_BASE_ADDR)); 171 140 } 172 141 173 142 static struct sdma_script_start_addrs imx31_to1_sdma_script __initdata = { ··· 175 198 } 176 199 177 200 imx_add_imx_sdma("imx31-sdma", MX31_SDMA_BASE_ADDR, MX31_INT_SDMA, &imx31_sdma_pdata); 201 + } 202 + #endif /* ifdef CONFIG_SOC_IMX31 */ 203 + 204 + #ifdef CONFIG_SOC_IMX35 205 + static struct map_desc mx35_io_desc[] __initdata = { 206 + imx_map_entry(MX35, X_MEMC, MT_DEVICE), 207 + imx_map_entry(MX35, AVIC, MT_DEVICE_NONSHARED), 208 + imx_map_entry(MX35, AIPS1, MT_DEVICE_NONSHARED), 209 + imx_map_entry(MX35, AIPS2, MT_DEVICE_NONSHARED), 210 + imx_map_entry(MX35, SPBA0, MT_DEVICE_NONSHARED), 211 + }; 212 + 213 + void __init mx35_map_io(void) 214 + { 215 + iotable_init(mx35_io_desc, ARRAY_SIZE(mx35_io_desc)); 216 + } 217 + 218 + void 
__init imx35_init_early(void) 219 + { 220 + mxc_set_cpu_type(MXC_CPU_MX35); 221 + mxc_iomux_v3_init(MX35_IO_ADDRESS(MX35_IOMUXC_BASE_ADDR)); 222 + mxc_arch_reset_init(MX35_IO_ADDRESS(MX35_WDOG_BASE_ADDR)); 223 + pm_idle = imx3_idle; 224 + imx_ioremap = imx3_ioremap; 225 + } 226 + 227 + void __init mx35_init_irq(void) 228 + { 229 + mxc_init_irq(MX35_IO_ADDRESS(MX35_AVIC_BASE_ADDR)); 178 230 } 179 231 180 232 static struct sdma_script_start_addrs imx35_to1_sdma_script __initdata = { ··· 260 254 261 255 imx_add_imx_sdma("imx35-sdma", MX35_SDMA_BASE_ADDR, MX35_INT_SDMA, &imx35_sdma_pdata); 262 256 } 257 + #endif /* ifdef CONFIG_SOC_IMX35 */
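The mm-imx3.c hunk above converts `imx3_idle` into a `pm_idle` hook: it must skip the low-power WFI sequence when a reschedule is already pending and must always re-enable interrupts before returning. A toy model of that contract (all names here are hypothetical, standing in for `need_resched()`, `local_irq_enable()` and the WFI asm block):

```c
#include <assert.h>
#include <stdbool.h>

/* Simulated scheduler/IRQ state, in place of the real kernel primitives. */
static bool irqs_enabled;
static bool resched_pending;
static int low_power_entries;

static bool model_need_resched(void)
{
	return resched_pending;
}

/*
 * Mirrors the patched imx3_idle(): only enter the low-power wait when no
 * reschedule is pending, and unconditionally re-enable interrupts on exit.
 */
static void model_idle(void)
{
	if (!model_need_resched())
		low_power_entries++;	/* stands in for the WFI sequence */
	irqs_enabled = true;		/* mirrors local_irq_enable() */
}
```

The unconditional enable at the end is the important part: a `pm_idle` hook that returns with interrupts still masked would hang the idle loop.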
+7
arch/arm/mach-imx/src.c
··· 14 14 #include <linux/io.h> 15 15 #include <linux/of.h> 16 16 #include <linux/of_address.h> 17 + #include <linux/smp.h> 17 18 #include <asm/unified.h> 18 19 19 20 #define SRC_SCR 0x000 ··· 24 23 25 24 static void __iomem *src_base; 26 25 26 + #ifndef CONFIG_SMP 27 + #define cpu_logical_map(cpu) 0 28 + #endif 29 + 27 30 void imx_enable_cpu(int cpu, bool enable) 28 31 { 29 32 u32 mask, val; 30 33 34 + cpu = cpu_logical_map(cpu); 31 35 mask = 1 << (BP_SRC_SCR_CORE1_ENABLE + cpu - 1); 32 36 val = readl_relaxed(src_base + SRC_SCR); 33 37 val = enable ? val | mask : val & ~mask; ··· 41 35 42 36 void imx_set_cpu_jump(int cpu, void *jump_addr) 43 37 { 38 + cpu = cpu_logical_map(cpu); 44 39 writel_relaxed(BSYM(virt_to_phys(jump_addr)), 45 40 src_base + SRC_GPR1 + cpu * 8); 46 41 }
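The src.c hunk above translates a logical CPU number to its physical ID before computing the SRC register bit (falling back to physical CPU 0 on non-SMP builds). A sketch of that indirection, assuming an illustrative bit position and a hypothetical logical-to-physical table:

```c
#include <assert.h>
#include <stdint.h>

/* Bit position assumed for illustration only. */
#define BP_SRC_SCR_CORE1_ENABLE 22

/* Hypothetical logical-to-physical mapping, as cpu_logical_map() provides. */
static const int demo_logical_map[4] = { 0, 3, 1, 2 };

/*
 * Mirrors the patched imx_enable_cpu() mask computation: map the logical
 * CPU first, then derive the per-core enable bit (core 1 is the lowest
 * bit; core 0 is never disabled through this path).
 */
static uint32_t core_enable_mask(int logical_cpu)
{
	int cpu = demo_logical_map[logical_cpu];	/* cpu_logical_map() step */

	return (uint32_t)1 << (BP_SRC_SCR_CORE1_ENABLE + cpu - 1);
}
```

Without the mapping step, a logical CPU number that differs from the physical core ID would toggle the wrong core's enable bit.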
+1 -1
arch/arm/mach-mmp/gplugd.c
··· 182 182 183 183 /* on-chip devices */ 184 184 pxa168_add_uart(3); 185 - pxa168_add_ssp(0); 185 + pxa168_add_ssp(1); 186 186 pxa168_add_twsi(0, NULL, ARRAY_AND_SIZE(gplugd_i2c_board_info)); 187 187 188 188 pxa168_add_eth(&gplugd_eth_platform_data);
+1 -1
arch/arm/mach-mmp/include/mach/gpio-pxa.h
··· 7 7 #define GPIO_REGS_VIRT (APB_VIRT_BASE + 0x19000) 8 8 9 9 #define BANK_OFF(n) (((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2)) 10 - #define GPIO_REG(x) (GPIO_REGS_VIRT + (x)) 10 + #define GPIO_REG(x) (*(volatile u32 *)(GPIO_REGS_VIRT + (x))) 11 11 12 12 #define NR_BUILTIN_GPIO IRQ_GPIO_NUM 13 13
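The gpio-pxa.h hunk above turns `GPIO_REG(x)` from a plain address into a dereferencing lvalue, so callers read and write the register directly instead of passing an address around. The same pattern can be modeled against an ordinary array instead of a real MMIO window (the `FAKE_*` names are invented for this sketch; `BANK_OFF` is copied from the header):

```c
#include <assert.h>
#include <stdint.h>

/* A plain buffer standing in for the memory-mapped GPIO register block. */
static uint32_t fake_regs[128];

#define FAKE_REGS_VIRT	((uintptr_t)fake_regs)

/* The lvalue-macro pattern from the patch: dereference a volatile pointer. */
#define FAKE_REG(x)	(*(volatile uint32_t *)(FAKE_REGS_VIRT + (x)))

/* BANK_OFF from the header: banks 0-2 packed low, later banks from 0x100. */
#define BANK_OFF(n)	(((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2))
```

With this form, `FAKE_REG(BANK_OFF(1)) = val;` is a direct register store, matching how the pxa GPIO code uses `GPIO_REG()`.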
+3 -2
arch/arm/mach-mx5/cpu.c
··· 16 16 #include <linux/init.h> 17 17 #include <linux/module.h> 18 18 #include <mach/hardware.h> 19 - #include <asm/io.h> 19 + #include <linux/io.h> 20 20 21 21 static int mx5_cpu_rev = -1; 22 22 ··· 67 67 if (!cpu_is_mx51()) 68 68 return 0; 69 69 70 - if (mx51_revision() < IMX_CHIP_REVISION_3_0 && (elf_hwcap & HWCAP_NEON)) { 70 + if (mx51_revision() < IMX_CHIP_REVISION_3_0 && 71 + (elf_hwcap & HWCAP_NEON)) { 71 72 elf_hwcap &= ~HWCAP_NEON; 72 73 pr_info("Turning off NEON support, detected broken NEON implementation\n"); 73 74 }
+4 -2
arch/arm/mach-mx5/mm.c
··· 23 23 24 24 static void imx5_idle(void) 25 25 { 26 - mx5_cpu_lp_set(WAIT_UNCLOCKED_POWER_OFF); 26 + if (!need_resched()) 27 + mx5_cpu_lp_set(WAIT_UNCLOCKED_POWER_OFF); 28 + local_irq_enable(); 27 29 } 28 30 29 31 /* ··· 91 89 mxc_set_cpu_type(MXC_CPU_MX51); 92 90 mxc_iomux_v3_init(MX51_IO_ADDRESS(MX51_IOMUXC_BASE_ADDR)); 93 91 mxc_arch_reset_init(MX51_IO_ADDRESS(MX51_WDOG1_BASE_ADDR)); 94 - imx_idle = imx5_idle; 92 + pm_idle = imx5_idle; 95 93 } 96 94 97 95 void __init imx53_init_early(void)
+1 -1
arch/arm/mach-mxs/clock-mx28.c
··· 404 404 reg = __raw_readl(CLKCTRL_BASE_ADDR + HW_CLKCTRL_##dr); \ 405 405 reg &= ~BM_CLKCTRL_##dr##_DIV; \ 406 406 reg |= div << BP_CLKCTRL_##dr##_DIV; \ 407 - if (reg | (1 << clk->enable_shift)) { \ 407 + if (reg & (1 << clk->enable_shift)) { \ 408 408 pr_err("%s: clock is gated\n", __func__); \ 409 409 return -EINVAL; \ 410 410 } \
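The clock-mx28.c hunk above is a one-character operator fix: `reg | (1 << shift)` is non-zero for almost any register value, so the "clock is gated" error path fired spuriously, while `reg & (1 << shift)` tests only the gate bit. A small demonstration of the difference (helper names invented for this sketch):

```c
#include <assert.h>
#include <stdint.h>

/* The buggy form: OR-ing the mask in makes the test almost always true. */
static int clock_is_gated_buggy(uint32_t reg, int shift)
{
	return (reg | ((uint32_t)1 << shift)) != 0;
}

/* The fixed form: AND isolates the gate bit, so only that bit matters. */
static int clock_is_gated_fixed(uint32_t reg, int shift)
{
	return (reg & ((uint32_t)1 << shift)) != 0;
}
```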
-8
arch/arm/mach-omap1/Kconfig
··· 171 171 comment "OMAP CPU Speed" 172 172 depends on ARCH_OMAP1 173 173 174 - config OMAP_CLOCKS_SET_BY_BOOTLOADER 175 - bool "OMAP clocks set by bootloader" 176 - depends on ARCH_OMAP1 177 - help 178 - Enable this option to prevent the kernel from overriding the clock 179 - frequencies programmed by bootloader for MPU, DSP, MMUs, TC, 180 - internal LCD controller and MPU peripherals. 181 - 182 174 config OMAP_ARM_216MHZ 183 175 bool "OMAP ARM 216 MHz CPU (1710 only)" 184 176 depends on ARCH_OMAP1 && ARCH_OMAP16XX
+7 -3
arch/arm/mach-omap1/board-ams-delta.c
··· 302 302 omap_cfg_reg(J19_1610_CAM_D6); 303 303 omap_cfg_reg(J18_1610_CAM_D7); 304 304 305 - iotable_init(ams_delta_io_desc, ARRAY_SIZE(ams_delta_io_desc)); 306 - 307 305 omap_board_config = ams_delta_config; 308 306 omap_board_config_size = ARRAY_SIZE(ams_delta_config); 309 307 omap_serial_init(); ··· 371 373 } 372 374 arch_initcall(ams_delta_modem_init); 373 375 376 + static void __init ams_delta_map_io(void) 377 + { 378 + omap15xx_map_io(); 379 + iotable_init(ams_delta_io_desc, ARRAY_SIZE(ams_delta_io_desc)); 380 + } 381 + 374 382 MACHINE_START(AMS_DELTA, "Amstrad E3 (Delta)") 375 383 /* Maintainer: Jonathan McDowell <noodles@earth.li> */ 376 384 .atag_offset = 0x100, 377 - .map_io = omap15xx_map_io, 385 + .map_io = ams_delta_map_io, 378 386 .init_early = omap1_init_early, 379 387 .reserve = omap_reserve, 380 388 .init_irq = omap1_init_irq,
+2 -1
arch/arm/mach-omap1/clock.h
··· 17 17 18 18 #include <plat/clock.h> 19 19 20 - extern int __init omap1_clk_init(void); 20 + int omap1_clk_init(void); 21 + void omap1_clk_late_init(void); 21 22 extern int omap1_clk_enable(struct clk *clk); 22 23 extern void omap1_clk_disable(struct clk *clk); 23 24 extern long omap1_clk_round_rate(struct clk *clk, unsigned long rate);
+34 -19
arch/arm/mach-omap1/clock_data.c
··· 767 767 .clk_disable_unused = omap1_clk_disable_unused, 768 768 }; 769 769 770 + static void __init omap1_show_rates(void) 771 + { 772 + pr_notice("Clocking rate (xtal/DPLL1/MPU): " 773 + "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n", 774 + ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10, 775 + ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10, 776 + arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10); 777 + } 778 + 770 779 int __init omap1_clk_init(void) 771 780 { 772 781 struct omap_clk *c; ··· 844 835 /* We want to be in syncronous scalable mode */ 845 836 omap_writew(0x1000, ARM_SYSST); 846 837 847 - #ifdef CONFIG_OMAP_CLOCKS_SET_BY_BOOTLOADER 848 - /* Use values set by bootloader. Determine PLL rate and recalculate 849 - * dependent clocks as if kernel had changed PLL or divisors. 838 + 839 + /* 840 + * Initially use the values set by bootloader. Determine PLL rate and 841 + * recalculate dependent clocks as if kernel had changed PLL or 842 + * divisors. See also omap1_clk_late_init() that can reprogram dpll1 843 + * after the SRAM is initialized. 850 844 */ 851 845 { 852 846 unsigned pll_ctl_val = omap_readw(DPLL_CTL); ··· 874 862 } 875 863 } 876 864 } 877 - #else 878 - /* Find the highest supported frequency and enable it */ 879 - if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) { 880 - printk(KERN_ERR "System frequencies not set. Check your config.\n"); 881 - /* Guess sane values (60MHz) */ 882 - omap_writew(0x2290, DPLL_CTL); 883 - omap_writew(cpu_is_omap7xx() ? 
0x3005 : 0x1005, ARM_CKCTL); 884 - ck_dpll1.rate = 60000000; 885 - } 886 - #endif 887 865 propagate_rate(&ck_dpll1); 888 866 /* Cache rates for clocks connected to ck_ref (not dpll1) */ 889 867 propagate_rate(&ck_ref); 890 - printk(KERN_INFO "Clocking rate (xtal/DPLL1/MPU): " 891 - "%ld.%01ld/%ld.%01ld/%ld.%01ld MHz\n", 892 - ck_ref.rate / 1000000, (ck_ref.rate / 100000) % 10, 893 - ck_dpll1.rate / 1000000, (ck_dpll1.rate / 100000) % 10, 894 - arm_ck.rate / 1000000, (arm_ck.rate / 100000) % 10); 895 - 868 + omap1_show_rates(); 896 869 if (machine_is_omap_perseus2() || machine_is_omap_fsample()) { 897 870 /* Select slicer output as OMAP input clock */ 898 871 omap_writew(omap_readw(OMAP7XX_PCC_UPLD_CTRL) & ~0x1, ··· 921 924 clk_enable(&arm_gpio_ck); 922 925 923 926 return 0; 927 + } 928 + 929 + #define OMAP1_DPLL1_SANE_VALUE 60000000 930 + 931 + void __init omap1_clk_late_init(void) 932 + { 933 + if (ck_dpll1.rate >= OMAP1_DPLL1_SANE_VALUE) 934 + return; 935 + 936 + /* Find the highest supported frequency and enable it */ 937 + if (omap1_select_table_rate(&virtual_ck_mpu, ~0)) { 938 + pr_err("System frequencies not set, using default. Check your config.\n"); 939 + omap_writew(0x2290, DPLL_CTL); 940 + omap_writew(cpu_is_omap7xx() ? 0x3005 : 0x1005, ARM_CKCTL); 941 + ck_dpll1.rate = OMAP1_DPLL1_SANE_VALUE; 942 + } 943 + propagate_rate(&ck_dpll1); 944 + omap1_show_rates(); 924 945 }
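The `omap1_show_rates()` helper factored out in the clock_data.c hunk above prints a Hz rate as MHz with one decimal place using integer math only: whole MHz from `rate / 1000000`, tenths from `(rate / 100000) % 10`. A standalone helper using the same arithmetic (the function name is invented for this sketch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a rate in Hz as "MHz.tenths", e.g. 60000000 -> "60.0". */
static void format_mhz(char *buf, size_t len, long rate)
{
	snprintf(buf, len, "%ld.%01ld",
		 rate / 1000000, (rate / 100000) % 10);
}
```

This avoids floating point entirely, which matters in early kernel code where FP use is not allowed.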
+3
arch/arm/mach-omap1/devices.c
··· 30 30 #include <plat/omap7xx.h> 31 31 #include <plat/mcbsp.h> 32 32 33 + #include "clock.h" 34 + 33 35 /*-------------------------------------------------------------------------*/ 34 36 35 37 #if defined(CONFIG_RTC_DRV_OMAP) || defined(CONFIG_RTC_DRV_OMAP_MODULE) ··· 295 293 return -ENODEV; 296 294 297 295 omap_sram_init(); 296 + omap1_clk_late_init(); 298 297 299 298 /* please keep these calls, and their implementations above, 300 299 * in alphabetical order so they're easier to sort through.
+1
arch/arm/mach-omap2/Kconfig
··· 334 334 config OMAP3_EMU 335 335 bool "OMAP3 debugging peripherals" 336 336 depends on ARCH_OMAP3 337 + select ARM_AMBA 337 338 select OC_ETM 338 339 help 339 340 Say Y here to enable debugging hardware of omap3
+1 -4
arch/arm/mach-omap2/Makefile
··· 4 4 5 5 # Common support 6 6 obj-y := id.o io.o control.o mux.o devices.o serial.o gpmc.o timer.o pm.o \ 7 - common.o gpio.o dma.o wd_timer.o 7 + common.o gpio.o dma.o wd_timer.o display.o 8 8 9 9 omap-2-3-common = irq.o sdrc.o 10 10 hwmod-common = omap_hwmod.o \ ··· 263 263 smsc911x-$(CONFIG_SMSC911X) := gpmc-smsc911x.o 264 264 obj-y += $(smsc911x-m) $(smsc911x-y) 265 265 obj-$(CONFIG_ARCH_OMAP4) += hwspinlock.o 266 - 267 - disp-$(CONFIG_OMAP2_DSS) := display.o 268 - obj-y += $(disp-m) $(disp-y) 269 266 270 267 obj-y += common-board-devices.o twl-common.o
+1
arch/arm/mach-omap2/cpuidle34xx.c
··· 24 24 25 25 #include <linux/sched.h> 26 26 #include <linux/cpuidle.h> 27 + #include <linux/export.h> 27 28 28 29 #include <plat/prcm.h> 29 30 #include <plat/irqs.h>
+159
arch/arm/mach-omap2/display.c
··· 27 27 #include <plat/omap_hwmod.h> 28 28 #include <plat/omap_device.h> 29 29 #include <plat/omap-pm.h> 30 + #include <plat/common.h> 30 31 31 32 #include "control.h" 33 + #include "display.h" 34 + 35 + #define DISPC_CONTROL 0x0040 36 + #define DISPC_CONTROL2 0x0238 37 + #define DISPC_IRQSTATUS 0x0018 38 + 39 + #define DSS_SYSCONFIG 0x10 40 + #define DSS_SYSSTATUS 0x14 41 + #define DSS_CONTROL 0x40 42 + #define DSS_SDI_CONTROL 0x44 43 + #define DSS_PLL_CONTROL 0x48 44 + 45 + #define LCD_EN_MASK (0x1 << 0) 46 + #define DIGIT_EN_MASK (0x1 << 1) 47 + 48 + #define FRAMEDONE_IRQ_SHIFT 0 49 + #define EVSYNC_EVEN_IRQ_SHIFT 2 50 + #define EVSYNC_ODD_IRQ_SHIFT 3 51 + #define FRAMEDONE2_IRQ_SHIFT 22 52 + #define FRAMEDONETV_IRQ_SHIFT 24 53 + 54 + /* 55 + * FRAMEDONE_IRQ_TIMEOUT: how long (in milliseconds) to wait during DISPC 56 + * reset before deciding that something has gone wrong 57 + */ 58 + #define FRAMEDONE_IRQ_TIMEOUT 100 32 59 33 60 static struct platform_device omap_display_device = { 34 61 .name = "omapdss", ··· 196 169 r = platform_device_register(&omap_display_device); 197 170 if (r < 0) 198 171 printk(KERN_ERR "Unable to register OMAP-Display device\n"); 172 + 173 + return r; 174 + } 175 + 176 + static void dispc_disable_outputs(void) 177 + { 178 + u32 v, irq_mask = 0; 179 + bool lcd_en, digit_en, lcd2_en = false; 180 + int i; 181 + struct omap_dss_dispc_dev_attr *da; 182 + struct omap_hwmod *oh; 183 + 184 + oh = omap_hwmod_lookup("dss_dispc"); 185 + if (!oh) { 186 + WARN(1, "display: could not disable outputs during reset - could not find dss_dispc hwmod\n"); 187 + return; 188 + } 189 + 190 + if (!oh->dev_attr) { 191 + pr_err("display: could not disable outputs during reset due to missing dev_attr\n"); 192 + return; 193 + } 194 + 195 + da = (struct omap_dss_dispc_dev_attr *)oh->dev_attr; 196 + 197 + /* store value of LCDENABLE and DIGITENABLE bits */ 198 + v = omap_hwmod_read(oh, DISPC_CONTROL); 199 + lcd_en = v & LCD_EN_MASK; 200 + digit_en = v & 
DIGIT_EN_MASK; 201 + 202 + /* store value of LCDENABLE for LCD2 */ 203 + if (da->manager_count > 2) { 204 + v = omap_hwmod_read(oh, DISPC_CONTROL2); 205 + lcd2_en = v & LCD_EN_MASK; 206 + } 207 + 208 + if (!(lcd_en | digit_en | lcd2_en)) 209 + return; /* no managers currently enabled */ 210 + 211 + /* 212 + * If any manager was enabled, we need to disable it before 213 + * DSS clocks are disabled or DISPC module is reset 214 + */ 215 + if (lcd_en) 216 + irq_mask |= 1 << FRAMEDONE_IRQ_SHIFT; 217 + 218 + if (digit_en) { 219 + if (da->has_framedonetv_irq) { 220 + irq_mask |= 1 << FRAMEDONETV_IRQ_SHIFT; 221 + } else { 222 + irq_mask |= 1 << EVSYNC_EVEN_IRQ_SHIFT | 223 + 1 << EVSYNC_ODD_IRQ_SHIFT; 224 + } 225 + } 226 + 227 + if (lcd2_en) 228 + irq_mask |= 1 << FRAMEDONE2_IRQ_SHIFT; 229 + 230 + /* 231 + * clear any previous FRAMEDONE, FRAMEDONETV, 232 + * EVSYNC_EVEN/ODD or FRAMEDONE2 interrupts 233 + */ 234 + omap_hwmod_write(irq_mask, oh, DISPC_IRQSTATUS); 235 + 236 + /* disable LCD and TV managers */ 237 + v = omap_hwmod_read(oh, DISPC_CONTROL); 238 + v &= ~(LCD_EN_MASK | DIGIT_EN_MASK); 239 + omap_hwmod_write(v, oh, DISPC_CONTROL); 240 + 241 + /* disable LCD2 manager */ 242 + if (da->manager_count > 2) { 243 + v = omap_hwmod_read(oh, DISPC_CONTROL2); 244 + v &= ~LCD_EN_MASK; 245 + omap_hwmod_write(v, oh, DISPC_CONTROL2); 246 + } 247 + 248 + i = 0; 249 + while ((omap_hwmod_read(oh, DISPC_IRQSTATUS) & irq_mask) != 250 + irq_mask) { 251 + i++; 252 + if (i > FRAMEDONE_IRQ_TIMEOUT) { 253 + pr_err("didn't get FRAMEDONE1/2 or TV interrupt\n"); 254 + break; 255 + } 256 + mdelay(1); 257 + } 258 + } 259 + 260 + #define MAX_MODULE_SOFTRESET_WAIT 10000 261 + int omap_dss_reset(struct omap_hwmod *oh) 262 + { 263 + struct omap_hwmod_opt_clk *oc; 264 + int c = 0; 265 + int i, r; 266 + 267 + if (!(oh->class->sysc->sysc_flags & SYSS_HAS_RESET_STATUS)) { 268 + pr_err("dss_core: hwmod data doesn't contain reset data\n"); 269 + return -EINVAL; 270 + } 271 + 272 + for (i = 
oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++) 273 + if (oc->_clk) 274 + clk_enable(oc->_clk); 275 + 276 + dispc_disable_outputs(); 277 + 278 + /* clear SDI registers */ 279 + if (cpu_is_omap3430()) { 280 + omap_hwmod_write(0x0, oh, DSS_SDI_CONTROL); 281 + omap_hwmod_write(0x0, oh, DSS_PLL_CONTROL); 282 + } 283 + 284 + /* 285 + * clear DSS_CONTROL register to switch DSS clock sources to 286 + * PRCM clock, if any 287 + */ 288 + omap_hwmod_write(0x0, oh, DSS_CONTROL); 289 + 290 + omap_test_timeout((omap_hwmod_read(oh, oh->class->sysc->syss_offs) 291 + & SYSS_RESETDONE_MASK), 292 + MAX_MODULE_SOFTRESET_WAIT, c); 293 + 294 + if (c == MAX_MODULE_SOFTRESET_WAIT) 295 + pr_warning("dss_core: waiting for reset to finish failed\n"); 296 + else 297 + pr_debug("dss_core: softreset done\n"); 298 + 299 + for (i = oh->opt_clks_cnt, oc = oh->opt_clks; i > 0; i--, oc++) 300 + if (oc->_clk) 301 + clk_disable(oc->_clk); 302 + 303 + r = (c == MAX_MODULE_SOFTRESET_WAIT) ? -ETIMEDOUT : 0; 199 304 200 305 return r; 201 306 }
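The `dispc_disable_outputs()` function added in display.c above waits for the expected FRAMEDONE/EVSYNC status bits with a bounded millisecond poll before giving up. A generic model of that loop shape (the callback and names are hypothetical; the kernel code reads `DISPC_IRQSTATUS` and calls `mdelay(1)` between polls):

```c
#include <assert.h>
#include <stdint.h>

/* Poll until all bits in @mask are set, giving up after @timeout_ms tries. */
static int poll_bits(uint32_t (*read_status)(void), uint32_t mask,
		     int timeout_ms)
{
	int i = 0;

	while ((read_status() & mask) != mask) {
		if (++i > timeout_ms)
			return -1;	/* gave up, mirrors the pr_err() path */
		/* the real code does mdelay(1) here */
	}
	return 0;
}

/* Simulated status register: bits become ready on the third read. */
static int calls;
static uint32_t fake_status(void)
{
	return ++calls >= 3 ? 0x5 : 0x1;
}

/* Simulated status register that never becomes ready. */
static uint32_t never_ready(void)
{
	return 0;
}
```

Bounding the loop is the point: without the counter, a display engine that never raises FRAMEDONE would hang the reset path forever.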
+29
arch/arm/mach-omap2/display.h
··· 1 + /* 2 + * display.h - OMAP2+ integration-specific DSS header 3 + * 4 + * Copyright (C) 2011 Texas Instruments, Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms of the GNU General Public License version 2 as published by 8 + * the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, but WITHOUT 11 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + * more details. 14 + * 15 + * You should have received a copy of the GNU General Public License along with 16 + * this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + 19 + #ifndef __ARCH_ARM_MACH_OMAP2_DISPLAY_H 20 + #define __ARCH_ARM_MACH_OMAP2_DISPLAY_H 21 + 22 + #include <linux/kernel.h> 23 + 24 + struct omap_dss_dispc_dev_attr { 25 + u8 manager_count; 26 + bool has_framedonetv_irq; 27 + }; 28 + 29 + #endif
arch/arm/mach-omap2/io.h
+3 -3
arch/arm/mach-omap2/omap_hwmod.c
··· 749 749 ohii = &oh->mpu_irqs[i++]; 750 750 } while (ohii->irq != -1); 751 751 752 - return i; 752 + return i-1; 753 753 } 754 754 755 755 /** ··· 772 772 ohdi = &oh->sdma_reqs[i++]; 773 773 } while (ohdi->dma_req != -1); 774 774 775 - return i; 775 + return i-1; 776 776 } 777 777 778 778 /** ··· 795 795 mem = &os->addr[i++]; 796 796 } while (mem->pa_start != mem->pa_end); 797 797 798 - return i; 798 + return i-1; 799 799 } 800 800 801 801 /**
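The three omap_hwmod.c hunks above fix the same off-by-one: each do/while loop only stops after it has read the `-1` (or end-marker) terminator, so `i` ends up one past the number of real entries and the count functions must return `i - 1`. The pattern modeled generically:

```c
#include <assert.h>

/*
 * Count entries in a -1-terminated array, using the same do/while shape
 * as _count_mpu_irqs()/_count_sdma_reqs(): the loop body increments i
 * once per read, including the read that sees the terminator.
 */
static int count_until_sentinel(const int *arr)
{
	const int *p;
	int i = 0;

	do {
		p = &arr[i++];
	} while (*p != -1);

	return i - 1;	/* i includes the terminator read */
}
```

Returning `i` here, as the pre-fix code did, would count the sentinel as a real entry.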
+14 -3
arch/arm/mach-omap2/omap_hwmod_2420_data.c
··· 875 875 }; 876 876 877 877 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 878 + /* 879 + * The DSS HW needs all DSS clocks enabled during reset. The dss_core 880 + * driver does not use these clocks. 881 + */ 878 882 { .role = "tv_clk", .clk = "dss_54m_fck" }, 879 883 { .role = "sys_clk", .clk = "dss2_fck" }, 880 884 }; ··· 903 899 .slaves_cnt = ARRAY_SIZE(omap2420_dss_slaves), 904 900 .masters = omap2420_dss_masters, 905 901 .masters_cnt = ARRAY_SIZE(omap2420_dss_masters), 906 - .flags = HWMOD_NO_IDLEST, 902 + .flags = HWMOD_NO_IDLEST | HWMOD_CONTROL_OPT_CLKS_IN_RESET, 907 903 }; 908 904 909 905 /* l4_core -> dss_dispc */ ··· 943 939 .slaves = omap2420_dss_dispc_slaves, 944 940 .slaves_cnt = ARRAY_SIZE(omap2420_dss_dispc_slaves), 945 941 .flags = HWMOD_NO_IDLEST, 942 + .dev_attr = &omap2_3_dss_dispc_dev_attr 946 943 }; 947 944 948 945 /* l4_core -> dss_rfbi */ ··· 966 961 &omap2420_l4_core__dss_rfbi, 967 962 }; 968 963 964 + static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = { 965 + { .role = "ick", .clk = "dss_ick" }, 966 + }; 967 + 969 968 static struct omap_hwmod omap2420_dss_rfbi_hwmod = { 970 969 .name = "dss_rfbi", 971 970 .class = &omap2_rfbi_hwmod_class, ··· 981 972 .module_offs = CORE_MOD, 982 973 }, 983 974 }, 975 + .opt_clks = dss_rfbi_opt_clks, 976 + .opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks), 984 977 .slaves = omap2420_dss_rfbi_slaves, 985 978 .slaves_cnt = ARRAY_SIZE(omap2420_dss_rfbi_slaves), 986 979 .flags = HWMOD_NO_IDLEST, ··· 992 981 static struct omap_hwmod_ocp_if omap2420_l4_core__dss_venc = { 993 982 .master = &omap2420_l4_core_hwmod, 994 983 .slave = &omap2420_dss_venc_hwmod, 995 - .clk = "dss_54m_fck", 984 + .clk = "dss_ick", 996 985 .addr = omap2_dss_venc_addrs, 997 986 .fw = { 998 987 .omap2 = { ··· 1012 1001 static struct omap_hwmod omap2420_dss_venc_hwmod = { 1013 1002 .name = "dss_venc", 1014 1003 .class = &omap2_venc_hwmod_class, 1015 - .main_clk = "dss1_fck", 1004 + .main_clk = "dss_54m_fck", 1016 1005 .prcm = { 1017 1006 .omap2 = { 1018 1007 .prcm_reg_id = 1,
+14 -3
arch/arm/mach-omap2/omap_hwmod_2430_data.c
··· 942 942 }; 943 943 944 944 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 945 + /* 946 + * The DSS HW needs all DSS clocks enabled during reset. The dss_core 947 + * driver does not use these clocks. 948 + */ 945 949 { .role = "tv_clk", .clk = "dss_54m_fck" }, 946 950 { .role = "sys_clk", .clk = "dss2_fck" }, 947 951 }; ··· 970 966 .slaves_cnt = ARRAY_SIZE(omap2430_dss_slaves), 971 967 .masters = omap2430_dss_masters, 972 968 .masters_cnt = ARRAY_SIZE(omap2430_dss_masters), 973 - .flags = HWMOD_NO_IDLEST, 969 + .flags = HWMOD_NO_IDLEST | HWMOD_CONTROL_OPT_CLKS_IN_RESET, 974 970 }; 975 971 976 972 /* l4_core -> dss_dispc */ ··· 1004 1000 .slaves = omap2430_dss_dispc_slaves, 1005 1001 .slaves_cnt = ARRAY_SIZE(omap2430_dss_dispc_slaves), 1006 1002 .flags = HWMOD_NO_IDLEST, 1003 + .dev_attr = &omap2_3_dss_dispc_dev_attr 1007 1004 }; 1008 1005 1009 1006 /* l4_core -> dss_rfbi */ ··· 1021 1016 &omap2430_l4_core__dss_rfbi, 1022 1017 }; 1023 1018 1019 + static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = { 1020 + { .role = "ick", .clk = "dss_ick" }, 1021 + }; 1022 + 1024 1023 static struct omap_hwmod omap2430_dss_rfbi_hwmod = { 1025 1024 .name = "dss_rfbi", 1026 1025 .class = &omap2_rfbi_hwmod_class, ··· 1036 1027 .module_offs = CORE_MOD, 1037 1028 }, 1038 1029 }, 1030 + .opt_clks = dss_rfbi_opt_clks, 1031 + .opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks), 1039 1032 .slaves = omap2430_dss_rfbi_slaves, 1040 1033 .slaves_cnt = ARRAY_SIZE(omap2430_dss_rfbi_slaves), 1041 1034 .flags = HWMOD_NO_IDLEST, ··· 1047 1036 static struct omap_hwmod_ocp_if omap2430_l4_core__dss_venc = { 1048 1037 .master = &omap2430_l4_core_hwmod, 1049 1038 .slave = &omap2430_dss_venc_hwmod, 1050 - .clk = "dss_54m_fck", 1039 + .clk = "dss_ick", 1051 1040 .addr = omap2_dss_venc_addrs, 1052 1041 .flags = OCPIF_SWSUP_IDLE, 1053 1042 .user = OCP_USER_MPU | OCP_USER_SDMA, ··· 1061 1050 static struct omap_hwmod omap2430_dss_venc_hwmod = { 1062 1051 .name = "dss_venc", 1063 1052 .class = 
&omap2_venc_hwmod_class, 1064 - .main_clk = "dss1_fck", 1053 + .main_clk = "dss_54m_fck", 1065 1054 .prcm = { 1066 1055 .omap2 = { 1067 1056 .prcm_reg_id = 1,
+4 -1
arch/arm/mach-omap2/omap_hwmod_2xxx_3xxx_ipblock_data.c
··· 11 11 #include <plat/omap_hwmod.h> 12 12 #include <plat/serial.h> 13 13 #include <plat/dma.h> 14 + #include <plat/common.h> 14 15 15 16 #include <mach/irqs.h> 16 17 ··· 44 43 .rev_offs = 0x0000, 45 44 .sysc_offs = 0x0010, 46 45 .syss_offs = 0x0014, 47 - .sysc_flags = (SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE), 46 + .sysc_flags = (SYSC_HAS_SOFTRESET | SYSC_HAS_AUTOIDLE | 47 + SYSS_HAS_RESET_STATUS), 48 48 .sysc_fields = &omap_hwmod_sysc_type1, 49 49 }; 50 50 51 51 struct omap_hwmod_class omap2_dss_hwmod_class = { 52 52 .name = "dss", 53 53 .sysc = &omap2_dss_sysc, 54 + .reset = omap_dss_reset, 54 55 }; 55 56 56 57 /*
+32 -5
arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
··· 1369 1369 }; 1370 1370 1371 1371 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 1372 - { .role = "tv_clk", .clk = "dss_tv_fck" }, 1373 - { .role = "video_clk", .clk = "dss_96m_fck" }, 1372 + /* 1373 + * The DSS HW needs all DSS clocks enabled during reset. The dss_core 1374 + * driver does not use these clocks. 1375 + */ 1374 1376 { .role = "sys_clk", .clk = "dss2_alwon_fck" }, 1377 + { .role = "tv_clk", .clk = "dss_tv_fck" }, 1378 + /* required only on OMAP3430 */ 1379 + { .role = "tv_dac_clk", .clk = "dss_96m_fck" }, 1375 1380 }; 1376 1381 1377 1382 static struct omap_hwmod omap3430es1_dss_core_hwmod = { ··· 1399 1394 .slaves_cnt = ARRAY_SIZE(omap3430es1_dss_slaves), 1400 1395 .masters = omap3xxx_dss_masters, 1401 1396 .masters_cnt = ARRAY_SIZE(omap3xxx_dss_masters), 1402 - .flags = HWMOD_NO_IDLEST, 1397 + .flags = HWMOD_NO_IDLEST | HWMOD_CONTROL_OPT_CLKS_IN_RESET, 1403 1398 }; 1404 1399 1405 1400 static struct omap_hwmod omap3xxx_dss_core_hwmod = { 1406 1401 .name = "dss_core", 1402 + .flags = HWMOD_CONTROL_OPT_CLKS_IN_RESET, 1407 1403 .class = &omap2_dss_hwmod_class, 1408 1404 .main_clk = "dss1_alwon_fck", /* instead of dss_fck */ 1409 1405 .sdma_reqs = omap3xxx_dss_sdma_chs, ··· 1462 1456 .slaves = omap3xxx_dss_dispc_slaves, 1463 1457 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_dispc_slaves), 1464 1458 .flags = HWMOD_NO_IDLEST, 1459 + .dev_attr = &omap2_3_dss_dispc_dev_attr 1465 1460 }; 1466 1461 1467 1462 /* ··· 1493 1486 static struct omap_hwmod_ocp_if omap3xxx_l4_core__dss_dsi1 = { 1494 1487 .master = &omap3xxx_l4_core_hwmod, 1495 1488 .slave = &omap3xxx_dss_dsi1_hwmod, 1489 + .clk = "dss_ick", 1496 1490 .addr = omap3xxx_dss_dsi1_addrs, 1497 1491 .fw = { 1498 1492 .omap2 = { ··· 1510 1502 &omap3xxx_l4_core__dss_dsi1, 1511 1503 }; 1512 1504 1505 + static struct omap_hwmod_opt_clk dss_dsi1_opt_clks[] = { 1506 + { .role = "sys_clk", .clk = "dss2_alwon_fck" }, 1507 + }; 1508 + 1513 1509 static struct omap_hwmod omap3xxx_dss_dsi1_hwmod = { 1514 1510 .name = 
"dss_dsi1", 1515 1511 .class = &omap3xxx_dsi_hwmod_class, ··· 1526 1514 .module_offs = OMAP3430_DSS_MOD, 1527 1515 }, 1528 1516 }, 1517 + .opt_clks = dss_dsi1_opt_clks, 1518 + .opt_clks_cnt = ARRAY_SIZE(dss_dsi1_opt_clks), 1529 1519 .slaves = omap3xxx_dss_dsi1_slaves, 1530 1520 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_dsi1_slaves), 1531 1521 .flags = HWMOD_NO_IDLEST, ··· 1554 1540 &omap3xxx_l4_core__dss_rfbi, 1555 1541 }; 1556 1542 1543 + static struct omap_hwmod_opt_clk dss_rfbi_opt_clks[] = { 1544 + { .role = "ick", .clk = "dss_ick" }, 1545 + }; 1546 + 1557 1547 static struct omap_hwmod omap3xxx_dss_rfbi_hwmod = { 1558 1548 .name = "dss_rfbi", 1559 1549 .class = &omap2_rfbi_hwmod_class, ··· 1569 1551 .module_offs = OMAP3430_DSS_MOD, 1570 1552 }, 1571 1553 }, 1554 + .opt_clks = dss_rfbi_opt_clks, 1555 + .opt_clks_cnt = ARRAY_SIZE(dss_rfbi_opt_clks), 1572 1556 .slaves = omap3xxx_dss_rfbi_slaves, 1573 1557 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_rfbi_slaves), 1574 1558 .flags = HWMOD_NO_IDLEST, ··· 1580 1560 static struct omap_hwmod_ocp_if omap3xxx_l4_core__dss_venc = { 1581 1561 .master = &omap3xxx_l4_core_hwmod, 1582 1562 .slave = &omap3xxx_dss_venc_hwmod, 1583 - .clk = "dss_tv_fck", 1563 + .clk = "dss_ick", 1584 1564 .addr = omap2_dss_venc_addrs, 1585 1565 .fw = { 1586 1566 .omap2 = { ··· 1598 1578 &omap3xxx_l4_core__dss_venc, 1599 1579 }; 1600 1580 1581 + static struct omap_hwmod_opt_clk dss_venc_opt_clks[] = { 1582 + /* required only on OMAP3430 */ 1583 + { .role = "tv_dac_clk", .clk = "dss_96m_fck" }, 1584 + }; 1585 + 1601 1586 static struct omap_hwmod omap3xxx_dss_venc_hwmod = { 1602 1587 .name = "dss_venc", 1603 1588 .class = &omap2_venc_hwmod_class, 1604 - .main_clk = "dss1_alwon_fck", 1589 + .main_clk = "dss_tv_fck", 1605 1590 .prcm = { 1606 1591 .omap2 = { 1607 1592 .prcm_reg_id = 1, ··· 1614 1589 .module_offs = OMAP3430_DSS_MOD, 1615 1590 }, 1616 1591 }, 1592 + .opt_clks = dss_venc_opt_clks, 1593 + .opt_clks_cnt = ARRAY_SIZE(dss_venc_opt_clks), 1617 1594 
.slaves = omap3xxx_dss_venc_slaves, 1618 1595 .slaves_cnt = ARRAY_SIZE(omap3xxx_dss_venc_slaves), 1619 1596 .flags = HWMOD_NO_IDLEST,
+12 -12
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
··· 30 30 #include <plat/mmc.h> 31 31 #include <plat/i2c.h> 32 32 #include <plat/dmtimer.h> 33 + #include <plat/common.h> 33 34 34 35 #include "omap_hwmod_common_data.h" 35 36 ··· 1188 1187 static struct omap_hwmod_class omap44xx_dss_hwmod_class = { 1189 1188 .name = "dss", 1190 1189 .sysc = &omap44xx_dss_sysc, 1190 + .reset = omap_dss_reset, 1191 1191 }; 1192 1192 1193 1193 /* dss */ ··· 1242 1240 static struct omap_hwmod_opt_clk dss_opt_clks[] = { 1243 1241 { .role = "sys_clk", .clk = "dss_sys_clk" }, 1244 1242 { .role = "tv_clk", .clk = "dss_tv_clk" }, 1245 - { .role = "dss_clk", .clk = "dss_dss_clk" }, 1246 - { .role = "video_clk", .clk = "dss_48mhz_clk" }, 1243 + { .role = "hdmi_clk", .clk = "dss_48mhz_clk" }, 1247 1244 }; 1248 1245 1249 1246 static struct omap_hwmod omap44xx_dss_hwmod = { 1250 1247 .name = "dss_core", 1248 + .flags = HWMOD_CONTROL_OPT_CLKS_IN_RESET, 1251 1249 .class = &omap44xx_dss_hwmod_class, 1252 1250 .clkdm_name = "l3_dss_clkdm", 1253 1251 .main_clk = "dss_dss_clk", ··· 1327 1325 { } 1328 1326 }; 1329 1327 1328 + static struct omap_dss_dispc_dev_attr omap44xx_dss_dispc_dev_attr = { 1329 + .manager_count = 3, 1330 + .has_framedonetv_irq = 1 1331 + }; 1332 + 1330 1333 /* l4_per -> dss_dispc */ 1331 1334 static struct omap_hwmod_ocp_if omap44xx_l4_per__dss_dispc = { 1332 1335 .master = &omap44xx_l4_per_hwmod, ··· 1347 1340 &omap44xx_l4_per__dss_dispc, 1348 1341 }; 1349 1342 1350 - static struct omap_hwmod_opt_clk dss_dispc_opt_clks[] = { 1351 - { .role = "sys_clk", .clk = "dss_sys_clk" }, 1352 - { .role = "tv_clk", .clk = "dss_tv_clk" }, 1353 - { .role = "hdmi_clk", .clk = "dss_48mhz_clk" }, 1354 - }; 1355 - 1356 1343 static struct omap_hwmod omap44xx_dss_dispc_hwmod = { 1357 1344 .name = "dss_dispc", 1358 1345 .class = &omap44xx_dispc_hwmod_class, ··· 1360 1359 .context_offs = OMAP4_RM_DSS_DSS_CONTEXT_OFFSET, 1361 1360 }, 1362 1361 }, 1363 - .opt_clks = dss_dispc_opt_clks, 1364 - .opt_clks_cnt = ARRAY_SIZE(dss_dispc_opt_clks), 1365 1362 
.slaves = omap44xx_dss_dispc_slaves, 1366 1363 .slaves_cnt = ARRAY_SIZE(omap44xx_dss_dispc_slaves), 1364 + .dev_attr = &omap44xx_dss_dispc_dev_attr 1367 1365 }; 1368 1366 1369 1367 /* ··· 1624 1624 .clkdm_name = "l3_dss_clkdm", 1625 1625 .mpu_irqs = omap44xx_dss_hdmi_irqs, 1626 1626 .sdma_reqs = omap44xx_dss_hdmi_sdma_reqs, 1627 - .main_clk = "dss_dss_clk", 1627 + .main_clk = "dss_48mhz_clk", 1628 1628 .prcm = { 1629 1629 .omap4 = { 1630 1630 .clkctrl_offs = OMAP4_CM_DSS_DSS_CLKCTRL_OFFSET, ··· 1785 1785 .name = "dss_venc", 1786 1786 .class = &omap44xx_venc_hwmod_class, 1787 1787 .clkdm_name = "l3_dss_clkdm", 1788 - .main_clk = "dss_dss_clk", 1788 + .main_clk = "dss_tv_clk", 1789 1789 .prcm = { 1790 1790 .omap4 = { 1791 1791 .clkctrl_offs = OMAP4_CM_DSS_DSS_CLKCTRL_OFFSET,
+4
arch/arm/mach-omap2/omap_hwmod_common_data.c
··· 49 49 .srst_shift = SYSC_TYPE2_SOFTRESET_SHIFT, 50 50 }; 51 51 52 + struct omap_dss_dispc_dev_attr omap2_3_dss_dispc_dev_attr = { 53 + .manager_count = 2, 54 + .has_framedonetv_irq = 0 55 + };
+4
arch/arm/mach-omap2/omap_hwmod_common_data.h
··· 16 16 17 17 #include <plat/omap_hwmod.h> 18 18 19 + #include "display.h" 20 + 19 21 /* Common address space across OMAP2xxx */ 20 22 extern struct omap_hwmod_addr_space omap2xxx_uart1_addr_space[]; 21 23 extern struct omap_hwmod_addr_space omap2xxx_uart2_addr_space[]; ··· 112 110 extern struct omap_hwmod_class omap2xxx_dma_hwmod_class; 113 111 extern struct omap_hwmod_class omap2xxx_mailbox_hwmod_class; 114 112 extern struct omap_hwmod_class omap2xxx_mcspi_class; 113 + 114 + extern struct omap_dss_dispc_dev_attr omap2_3_dss_dispc_dev_attr; 115 115 116 116 #endif
+1 -1
arch/arm/mach-omap2/omap_l3_noc.c
··· 237 237 static const struct of_device_id l3_noc_match[] = { 238 238 {.compatible = "ti,omap4-l3-noc", }, 239 239 {}, 240 - } 240 + }; 241 241 MODULE_DEVICE_TABLE(of, l3_noc_match); 242 242 #else 243 243 #define l3_noc_match NULL
+2 -4
arch/arm/mach-omap2/pm.c
··· 24 24 #include "powerdomain.h" 25 25 #include "clockdomain.h" 26 26 #include "pm.h" 27 + #include "twl-common.h" 27 28 28 29 static struct omap_device_pm_latency *pm_lats; 29 30 ··· 227 226 228 227 static int __init omap2_common_pm_late_init(void) 229 228 { 230 - /* Init the OMAP TWL parameters */ 231 - omap3_twl_init(); 232 - omap4_twl_init(); 233 - 234 229 /* Init the voltage layer */ 230 + omap_pmic_late_init(); 235 231 omap_voltage_late_init(); 236 232 237 233 /* Initialize the voltages */
+1 -1
arch/arm/mach-omap2/smartreflex.c
··· 139 139 sr_write_reg(sr_info, ERRCONFIG_V1, status); 140 140 } else if (sr_info->ip_type == SR_TYPE_V2) { 141 141 /* Read the status bits */ 142 - sr_read_reg(sr_info, IRQSTATUS); 142 + status = sr_read_reg(sr_info, IRQSTATUS); 143 143 144 144 /* Clear them by writing back */ 145 145 sr_write_reg(sr_info, IRQSTATUS, status);
+11
arch/arm/mach-omap2/twl-common.c
··· 30 30 #include <plat/usb.h> 31 31 32 32 #include "twl-common.h" 33 + #include "pm.h" 33 34 34 35 static struct i2c_board_info __initdata pmic_i2c_board_info = { 35 36 .addr = 0x48, ··· 47 46 pmic_i2c_board_info.platform_data = pmic_data; 48 47 49 48 omap_register_i2c_bus(bus, clkrate, &pmic_i2c_board_info, 1); 49 + } 50 + 51 + void __init omap_pmic_late_init(void) 52 + { 53 + /* Init the OMAP TWL parameters (if PMIC has been registerd) */ 54 + if (!pmic_i2c_board_info.irq) 55 + return; 56 + 57 + omap3_twl_init(); 58 + omap4_twl_init(); 50 59 } 51 60 52 61 #if defined(CONFIG_ARCH_OMAP3)
+3
arch/arm/mach-omap2/twl-common.h
··· 1 1 #ifndef __OMAP_PMIC_COMMON__ 2 2 #define __OMAP_PMIC_COMMON__ 3 3 4 + #include <plat/irqs.h> 5 + 4 6 #define TWL_COMMON_PDATA_USB (1 << 0) 5 7 #define TWL_COMMON_PDATA_BCI (1 << 1) 6 8 #define TWL_COMMON_PDATA_MADC (1 << 2) ··· 32 30 33 31 void omap_pmic_init(int bus, u32 clkrate, const char *pmic_type, int pmic_irq, 34 32 struct twl4030_platform_data *pmic_data); 33 + void omap_pmic_late_init(void); 35 34 36 35 static inline void omap2_pmic_init(const char *pmic_type, 37 36 struct twl4030_platform_data *pmic_data)
+1 -1
arch/arm/mach-pxa/balloon3.c
··· 307 307 /****************************************************************************** 308 308 * USB Gadget 309 309 ******************************************************************************/ 310 - #if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE) 310 + #if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE) 311 311 static void balloon3_udc_command(int cmd) 312 312 { 313 313 if (cmd == PXA2XX_UDC_CMD_CONNECT)
+1 -1
arch/arm/mach-pxa/colibri-pxa320.c
··· 146 146 static inline void __init colibri_pxa320_init_eth(void) {} 147 147 #endif /* CONFIG_AX88796 */ 148 148 149 - #if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE) 149 + #if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE) 150 150 static struct gpio_vbus_mach_info colibri_pxa320_gpio_vbus_info = { 151 151 .gpio_vbus = mfp_to_gpio(MFP_PIN_GPIO96), 152 152 .gpio_pullup = -1,
+1 -1
arch/arm/mach-pxa/gumstix.c
··· 106 106 } 107 107 #endif 108 108 109 - #ifdef CONFIG_USB_GADGET_PXA25X 109 + #ifdef CONFIG_USB_PXA25X 110 110 static struct gpio_vbus_mach_info gumstix_udc_info = { 111 111 .gpio_vbus = GPIO_GUMSTIX_USB_GPIOn, 112 112 .gpio_pullup = GPIO_GUMSTIX_USB_GPIOx,
+2 -2
arch/arm/mach-pxa/include/mach/palm27x.h
··· 37 37 #define palm27x_lcd_init(power, mode) do {} while (0) 38 38 #endif 39 39 40 - #if defined(CONFIG_USB_GADGET_PXA27X) || \ 41 - defined(CONFIG_USB_GADGET_PXA27X_MODULE) 40 + #if defined(CONFIG_USB_PXA27X) || \ 41 + defined(CONFIG_USB_PXA27X_MODULE) 42 42 extern void __init palm27x_udc_init(int vbus, int pullup, 43 43 int vbus_inverted); 44 44 #else
+2 -2
arch/arm/mach-pxa/palm27x.c
··· 164 164 /****************************************************************************** 165 165 * USB Gadget 166 166 ******************************************************************************/ 167 - #if defined(CONFIG_USB_GADGET_PXA27X) || \ 168 - defined(CONFIG_USB_GADGET_PXA27X_MODULE) 167 + #if defined(CONFIG_USB_PXA27X) || \ 168 + defined(CONFIG_USB_PXA27X_MODULE) 169 169 static struct gpio_vbus_mach_info palm27x_udc_info = { 170 170 .gpio_vbus_inverted = 1, 171 171 };
+1 -1
arch/arm/mach-pxa/palmtc.c
··· 338 338 /****************************************************************************** 339 339 * UDC 340 340 ******************************************************************************/ 341 - #if defined(CONFIG_USB_GADGET_PXA25X)||defined(CONFIG_USB_GADGET_PXA25X_MODULE) 341 + #if defined(CONFIG_USB_PXA25X)||defined(CONFIG_USB_PXA25X_MODULE) 342 342 static struct gpio_vbus_mach_info palmtc_udc_info = { 343 343 .gpio_vbus = GPIO_NR_PALMTC_USB_DETECT_N, 344 344 .gpio_vbus_inverted = 1,
+1 -1
arch/arm/mach-pxa/vpac270.c
··· 343 343 /****************************************************************************** 344 344 * USB Gadget 345 345 ******************************************************************************/ 346 - #if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE) 346 + #if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE) 347 347 static struct gpio_vbus_mach_info vpac270_gpio_vbus_info = { 348 348 .gpio_vbus = GPIO41_VPAC270_UDC_DETECT, 349 349 .gpio_pullup = -1,
+1 -1
arch/arm/mach-s3c64xx/mach-crag6410-module.c
··· 8 8 * published by the Free Software Foundation. 9 9 */ 10 10 11 - #include <linux/module.h> 11 + #include <linux/export.h> 12 12 #include <linux/interrupt.h> 13 13 #include <linux/i2c.h> 14 14
+1 -1
arch/arm/mm/cache-l2x0.c
··· 61 61 { 62 62 void __iomem *base = l2x0_base; 63 63 64 - #ifdef CONFIG_ARM_ERRATA_753970 64 + #ifdef CONFIG_PL310_ERRATA_753970 65 65 /* write to an unmmapped register */ 66 66 writel_relaxed(0, base + L2X0_DUMMY_REG); 67 67 #else
+10 -1
arch/arm/mm/dma-mapping.c
··· 168 168 pte_t *pte; 169 169 int i = 0; 170 170 unsigned long base = consistent_base; 171 - unsigned long num_ptes = (CONSISTENT_END - base) >> PGDIR_SHIFT; 171 + unsigned long num_ptes = (CONSISTENT_END - base) >> PMD_SHIFT; 172 172 173 173 consistent_pte = kmalloc(num_ptes * sizeof(pte_t), GFP_KERNEL); 174 174 if (!consistent_pte) { ··· 331 331 { 332 332 struct page *page; 333 333 void *addr; 334 + 335 + /* 336 + * Following is a work-around (a.k.a. hack) to prevent pages 337 + * with __GFP_COMP being passed to split_page() which cannot 338 + * handle them. The real problem is that this flag probably 339 + * should be 0 on ARM as it is not supported on this 340 + * platform; see CONFIG_HUGETLBFS. 341 + */ 342 + gfp &= ~(__GFP_COMP); 334 343 335 344 *handle = ~0; 336 345 size = PAGE_ALIGN(size);
+6 -17
arch/arm/mm/mmap.c
··· 9 9 #include <linux/io.h> 10 10 #include <linux/personality.h> 11 11 #include <linux/random.h> 12 - #include <asm/cputype.h> 13 - #include <asm/system.h> 12 + #include <asm/cachetype.h> 14 13 15 14 #define COLOUR_ALIGN(addr,pgoff) \ 16 15 ((((addr)+SHMLBA-1)&~(SHMLBA-1)) + \ ··· 31 32 struct mm_struct *mm = current->mm; 32 33 struct vm_area_struct *vma; 33 34 unsigned long start_addr; 34 - #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K) 35 - unsigned int cache_type; 36 - int do_align = 0, aliasing = 0; 35 + int do_align = 0; 36 + int aliasing = cache_is_vipt_aliasing(); 37 37 38 38 /* 39 39 * We only need to do colour alignment if either the I or D 40 - * caches alias. This is indicated by bits 9 and 21 of the 41 - * cache type register. 40 + * caches alias. 42 41 */ 43 - cache_type = read_cpuid_cachetype(); 44 - if (cache_type != read_cpuid_id()) { 45 - aliasing = (cache_type | cache_type >> 12) & (1 << 11); 46 - if (aliasing) 47 - do_align = filp || flags & MAP_SHARED; 48 - } 49 - #else 50 - #define do_align 0 51 - #define aliasing 0 52 - #endif 42 + if (aliasing) 43 + do_align = filp || (flags & MAP_SHARED); 53 44 54 45 /* 55 46 * We enforce the MAP_FIXED case.
+1 -1
arch/arm/plat-mxc/include/mach/common.h
··· 85 85 }; 86 86 87 87 extern void mx5_cpu_lp_set(enum mxc_cpu_pwr_mode mode); 88 - extern void (*imx_idle)(void); 89 88 extern void imx_print_silicon_rev(const char *cpu, int srev); 90 89 91 90 void avic_handle_irq(struct pt_regs *); ··· 132 133 extern void imx53_smd_common_init(void); 133 134 extern int imx6q_set_lpm(enum mxc_cpu_pwr_mode mode); 134 135 extern void imx6q_pm_init(void); 136 + extern void imx6q_clock_map_io(void); 135 137 #endif
-14
arch/arm/plat-mxc/include/mach/mxc.h
··· 50 50 #define IMX_CHIP_REVISION_3_3 0x33 51 51 #define IMX_CHIP_REVISION_UNKNOWN 0xff 52 52 53 - #define IMX_CHIP_REVISION_1_0_STRING "1.0" 54 - #define IMX_CHIP_REVISION_1_1_STRING "1.1" 55 - #define IMX_CHIP_REVISION_1_2_STRING "1.2" 56 - #define IMX_CHIP_REVISION_1_3_STRING "1.3" 57 - #define IMX_CHIP_REVISION_2_0_STRING "2.0" 58 - #define IMX_CHIP_REVISION_2_1_STRING "2.1" 59 - #define IMX_CHIP_REVISION_2_2_STRING "2.2" 60 - #define IMX_CHIP_REVISION_2_3_STRING "2.3" 61 - #define IMX_CHIP_REVISION_3_0_STRING "3.0" 62 - #define IMX_CHIP_REVISION_3_1_STRING "3.1" 63 - #define IMX_CHIP_REVISION_3_2_STRING "3.2" 64 - #define IMX_CHIP_REVISION_3_3_STRING "3.3" 65 - #define IMX_CHIP_REVISION_UNKNOWN_STRING "unknown" 66 - 67 53 #ifndef __ASSEMBLY__ 68 54 extern unsigned int __mxc_cpu_type; 69 55 #endif
+1 -6
arch/arm/plat-mxc/include/mach/system.h
··· 17 17 #ifndef __ASM_ARCH_MXC_SYSTEM_H__ 18 18 #define __ASM_ARCH_MXC_SYSTEM_H__ 19 19 20 - extern void (*imx_idle)(void); 21 - 22 20 static inline void arch_idle(void) 23 21 { 24 - if (imx_idle != NULL) 25 - (imx_idle)(); 26 - else 27 - cpu_do_idle(); 22 + cpu_do_idle(); 28 23 } 29 24 30 25 void arch_reset(char mode, const char *cmd);
+2 -1
arch/arm/plat-mxc/system.c
··· 21 21 #include <linux/io.h> 22 22 #include <linux/err.h> 23 23 #include <linux/delay.h> 24 + #include <linux/module.h> 24 25 25 26 #include <mach/hardware.h> 26 27 #include <mach/common.h> ··· 29 28 #include <asm/system.h> 30 29 #include <asm/mach-types.h> 31 30 32 - void (*imx_idle)(void) = NULL; 33 31 void __iomem *(*imx_ioremap)(unsigned long, size_t, unsigned int) = NULL; 32 + EXPORT_SYMBOL_GPL(imx_ioremap); 34 33 35 34 static void __iomem *wdog_base; 36 35
+1 -1
arch/arm/plat-omap/include/plat/clock.h
··· 165 165 u8 auto_recal_bit; 166 166 u8 recal_en_bit; 167 167 u8 recal_st_bit; 168 - u8 flags; 169 168 # endif 169 + u8 flags; 170 170 }; 171 171 172 172 #endif
+3
arch/arm/plat-omap/include/plat/common.h
··· 30 30 #include <linux/delay.h> 31 31 32 32 #include <plat/i2c.h> 33 + #include <plat/omap_hwmod.h> 33 34 34 35 struct sys_timer; 35 36 ··· 55 54 void am35xx_init_early(void); 56 55 void ti816x_init_early(void); 57 56 void omap4430_init_early(void); 57 + 58 + extern int omap_dss_reset(struct omap_hwmod *); 58 59 59 60 void omap_sram_init(void); 60 61
+1 -1
arch/arm/plat-s3c24xx/cpu-freq-debugfs.c
··· 12 12 */ 13 13 14 14 #include <linux/init.h> 15 - #include <linux/module.h> 15 + #include <linux/export.h> 16 16 #include <linux/interrupt.h> 17 17 #include <linux/ioport.h> 18 18 #include <linux/cpufreq.h>
+1
arch/arm/plat-s5p/sysmmu.c
··· 11 11 #include <linux/io.h> 12 12 #include <linux/interrupt.h> 13 13 #include <linux/platform_device.h> 14 + #include <linux/export.h> 14 15 15 16 #include <asm/pgtable.h> 16 17
+2
arch/arm/plat-samsung/include/plat/gpio-cfg.h
··· 24 24 #ifndef __PLAT_GPIO_CFG_H 25 25 #define __PLAT_GPIO_CFG_H __FILE__ 26 26 27 + #include<linux/types.h> 28 + 27 29 typedef unsigned int __bitwise__ samsung_gpio_pull_t; 28 30 typedef unsigned int __bitwise__ s5p_gpio_drvstr_t; 29 31
+1 -1
arch/arm/plat-samsung/pd.c
··· 11 11 */ 12 12 13 13 #include <linux/init.h> 14 - #include <linux/module.h> 14 + #include <linux/export.h> 15 15 #include <linux/platform_device.h> 16 16 #include <linux/err.h> 17 17 #include <linux/pm_runtime.h>
+1 -1
arch/arm/plat-samsung/pwm.c
··· 11 11 * the Free Software Foundation; either version 2 of the License. 12 12 */ 13 13 14 - #include <linux/module.h> 14 + #include <linux/export.h> 15 15 #include <linux/kernel.h> 16 16 #include <linux/platform_device.h> 17 17 #include <linux/slab.h>
+1
arch/arm/tools/mach-types
··· 1123 1123 thales_adc MACH_THALES_ADC THALES_ADC 3492 1124 1124 ubisys_p9d_evp MACH_UBISYS_P9D_EVP UBISYS_P9D_EVP 3493 1125 1125 atdgp318 MACH_ATDGP318 ATDGP318 3494 1126 + m28evk MACH_M28EVK M28EVK 3613 1126 1127 smdk4212 MACH_SMDK4212 SMDK4212 3638 1127 1128 smdk4412 MACH_SMDK4412 SMDK4412 3765
-22
arch/microblaze/include/asm/namei.h
··· 1 - /* 2 - * Copyright (C) 2006 Atmark Techno, Inc. 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 - */ 8 - 9 - #ifndef _ASM_MICROBLAZE_NAMEI_H 10 - #define _ASM_MICROBLAZE_NAMEI_H 11 - 12 - #ifdef __KERNEL__ 13 - 14 - /* This dummy routine maybe changed to something useful 15 - * for /usr/gnemul/ emulation stuff. 16 - * Look at asm-sparc/namei.h for details. 17 - */ 18 - #define __emul_prefix() NULL 19 - 20 - #endif /* __KERNEL__ */ 21 - 22 - #endif /* _ASM_MICROBLAZE_NAMEI_H */
+13 -4
arch/powerpc/boot/dts/p1023rds.dts
··· 449 449 interrupt-parent = <&mpic>; 450 450 interrupts = <16 2>; 451 451 interrupt-map-mask = <0xf800 0 0 7>; 452 + /* IRQ[0:3] are pulled up on board, set to active-low */ 452 453 interrupt-map = < 453 454 /* IDSEL 0x0 */ 454 455 0000 0 0 1 &mpic 0 1 ··· 489 488 interrupt-parent = <&mpic>; 490 489 interrupts = <16 2>; 491 490 interrupt-map-mask = <0xf800 0 0 7>; 491 + /* 492 + * IRQ[4:6] only for PCIe, set to active-high, 493 + * IRQ[7] is pulled up on board, set to active-low 494 + */ 492 495 interrupt-map = < 493 496 /* IDSEL 0x0 */ 494 - 0000 0 0 1 &mpic 4 1 495 - 0000 0 0 2 &mpic 5 1 496 - 0000 0 0 3 &mpic 6 1 497 + 0000 0 0 1 &mpic 4 2 498 + 0000 0 0 2 &mpic 5 2 499 + 0000 0 0 3 &mpic 6 2 497 500 0000 0 0 4 &mpic 7 1 498 501 >; 499 502 ranges = <0x2000000 0x0 0xa0000000 ··· 532 527 interrupt-parent = <&mpic>; 533 528 interrupts = <16 2>; 534 529 interrupt-map-mask = <0xf800 0 0 7>; 530 + /* 531 + * IRQ[8:10] are pulled up on board, set to active-low 532 + * IRQ[11] only for PCIe, set to active-high, 533 + */ 535 534 interrupt-map = < 536 535 /* IDSEL 0x0 */ 537 536 0000 0 0 1 &mpic 8 1 538 537 0000 0 0 2 &mpic 9 1 539 538 0000 0 0 3 &mpic 10 1 540 - 0000 0 0 4 &mpic 11 1 539 + 0000 0 0 4 &mpic 11 2 541 540 >; 542 541 ranges = <0x2000000 0x0 0x80000000 543 542 0x2000000 0x0 0x80000000
+2
arch/powerpc/configs/ppc44x_defconfig
··· 52 52 CONFIG_MTD_JEDECPROBE=y 53 53 CONFIG_MTD_CFI_AMDSTD=y 54 54 CONFIG_MTD_PHYSMAP_OF=y 55 + CONFIG_MTD_NAND=m 56 + CONFIG_MTD_NAND_NDFC=m 55 57 CONFIG_MTD_UBI=m 56 58 CONFIG_MTD_UBI_GLUEBI=m 57 59 CONFIG_PROC_DEVICETREE=y
+1
arch/powerpc/mm/hugetlbpage.c
··· 15 15 #include <linux/of_fdt.h> 16 16 #include <linux/memblock.h> 17 17 #include <linux/bootmem.h> 18 + #include <linux/moduleparam.h> 18 19 #include <asm/pgtable.h> 19 20 #include <asm/pgalloc.h> 20 21 #include <asm/tlb.h>
+1 -1
arch/powerpc/platforms/85xx/Kconfig
··· 203 203 select PPC_E500MC 204 204 select PHYS_64BIT 205 205 select SWIOTLB 206 - select MPC8xxx_GPIO 206 + select GPIO_MPC8XXX 207 207 select HAS_RAPIDIO 208 208 select PPC_EPAPR_HV_PIC 209 209 help
+1 -1
arch/powerpc/platforms/85xx/p3060_qds.c
··· 70 70 .power_save = e500_idle, 71 71 }; 72 72 73 - machine_device_initcall(p3060_qds, declare_of_platform_devices); 73 + machine_device_initcall(p3060_qds, corenet_ds_publish_devices); 74 74 75 75 #ifdef CONFIG_SWIOTLB 76 76 machine_arch_initcall(p3060_qds, swiotlb_setup_bus_notifier);
+1
arch/powerpc/sysdev/ehv_pic.c
··· 280 280 281 281 if (!ehv_pic->irqhost) { 282 282 of_node_put(np); 283 + kfree(ehv_pic); 283 284 return; 284 285 } 285 286
+1
arch/powerpc/sysdev/fsl_lbc.c
··· 328 328 err: 329 329 iounmap(fsl_lbc_ctrl_dev->regs); 330 330 kfree(fsl_lbc_ctrl_dev); 331 + fsl_lbc_ctrl_dev = NULL; 331 332 return ret; 332 333 } 333 334
+1 -1
arch/powerpc/sysdev/qe_lib/qe.c
··· 216 216 /* Errata QE_General4, which affects some MPC832x and MPC836x SOCs, says 217 217 that the BRG divisor must be even if you're not using divide-by-16 218 218 mode. */ 219 - if (!div16 && (divisor & 1)) 219 + if (!div16 && (divisor & 1) && (divisor > 3)) 220 220 divisor++; 221 221 222 222 tempval = ((divisor - 1) << QE_BRGC_DIVISOR_SHIFT) |
+1 -1
arch/x86/um/asm/processor.h
··· 11 11 #endif 12 12 13 13 #define KSTK_EIP(tsk) KSTK_REG(tsk, HOST_IP) 14 - #define KSTK_ESP(tsk) KSTK_REG(tsk, HOST_IP) 14 + #define KSTK_ESP(tsk) KSTK_REG(tsk, HOST_SP) 15 15 #define KSTK_EBP(tsk) KSTK_REG(tsk, HOST_BP) 16 16 17 17 #define ARCH_IS_STACKGROW(address) \
+22 -9
drivers/acpi/apei/erst.c
··· 932 932 static int erst_open_pstore(struct pstore_info *psi); 933 933 static int erst_close_pstore(struct pstore_info *psi); 934 934 static ssize_t erst_reader(u64 *id, enum pstore_type_id *type, 935 - struct timespec *time, struct pstore_info *psi); 935 + struct timespec *time, char **buf, 936 + struct pstore_info *psi); 936 937 static int erst_writer(enum pstore_type_id type, u64 *id, unsigned int part, 937 938 size_t size, struct pstore_info *psi); 938 939 static int erst_clearer(enum pstore_type_id type, u64 id, ··· 987 986 } 988 987 989 988 static ssize_t erst_reader(u64 *id, enum pstore_type_id *type, 990 - struct timespec *time, struct pstore_info *psi) 989 + struct timespec *time, char **buf, 990 + struct pstore_info *psi) 991 991 { 992 992 int rc; 993 993 ssize_t len = 0; 994 994 u64 record_id; 995 - struct cper_pstore_record *rcd = (struct cper_pstore_record *) 996 - (erst_info.buf - sizeof(*rcd)); 995 + struct cper_pstore_record *rcd; 996 + size_t rcd_len = sizeof(*rcd) + erst_info.bufsize; 997 997 998 998 if (erst_disable) 999 999 return -ENODEV; 1000 1000 1001 + rcd = kmalloc(rcd_len, GFP_KERNEL); 1002 + if (!rcd) { 1003 + rc = -ENOMEM; 1004 + goto out; 1005 + } 1001 1006 skip: 1002 1007 rc = erst_get_record_id_next(&reader_pos, &record_id); 1003 1008 if (rc) ··· 1011 1004 1012 1005 /* no more record */ 1013 1006 if (record_id == APEI_ERST_INVALID_RECORD_ID) { 1014 - rc = -1; 1007 + rc = -EINVAL; 1015 1008 goto out; 1016 1009 } 1017 1010 1018 - len = erst_read(record_id, &rcd->hdr, sizeof(*rcd) + 1019 - erst_info.bufsize); 1011 + len = erst_read(record_id, &rcd->hdr, rcd_len); 1020 1012 /* The record may be cleared by others, try read next record */ 1021 1013 if (len == -ENOENT) 1022 1014 goto skip; 1023 - else if (len < 0) { 1024 - rc = -1; 1015 + else if (len < sizeof(*rcd)) { 1016 + rc = -EIO; 1025 1017 goto out; 1026 1018 } 1027 1019 if (uuid_le_cmp(rcd->hdr.creator_id, CPER_CREATOR_PSTORE) != 0) 1028 1020 goto skip; 1029 1021 1022 + *buf = kmalloc(len, GFP_KERNEL); 1023 + if (*buf == NULL) { 1024 + rc = -ENOMEM; 1025 + goto out; 1026 + } 1027 + memcpy(*buf, rcd->data, len - sizeof(*rcd)); 1030 1028 *id = record_id; 1031 1029 if (uuid_le_cmp(rcd->sec_hdr.section_type, 1032 1030 CPER_SECTION_TYPE_DMESG) == 0) ··· 1049 1037 time->tv_nsec = 0; 1050 1038 1051 1039 out: 1040 + kfree(rcd); 1052 1041 return (rc < 0) ? rc : (len - sizeof(*rcd)); 1053 1042 }
+1 -1
drivers/ata/ahci_platform.c
··· 67 67 struct device *dev = &pdev->dev; 68 68 struct ahci_platform_data *pdata = dev_get_platdata(dev); 69 69 const struct platform_device_id *id = platform_get_device_id(pdev); 70 - struct ata_port_info pi = ahci_port_info[id->driver_data]; 70 + struct ata_port_info pi = ahci_port_info[id ? id->driver_data : 0]; 71 71 const struct ata_port_info *ppi[] = { &pi, NULL }; 72 72 struct ahci_host_priv *hpriv; 73 73 struct ata_host *host;
+4
drivers/ata/libata-sff.c
··· 2533 2533 if (rc) 2534 2534 goto out; 2535 2535 2536 + #ifdef CONFIG_ATA_BMDMA 2536 2537 if (bmdma) 2537 2538 /* prepare and activate BMDMA host */ 2538 2539 rc = ata_pci_bmdma_prepare_host(pdev, ppi, &host); 2539 2540 else 2541 + #endif 2540 2542 /* prepare and activate SFF host */ 2541 2543 rc = ata_pci_sff_prepare_host(pdev, ppi, &host); 2542 2544 if (rc) ··· 2546 2544 host->private_data = host_priv; 2547 2545 host->flags |= hflags; 2548 2546 2547 + #ifdef CONFIG_ATA_BMDMA 2549 2548 if (bmdma) { 2550 2549 pci_set_master(pdev); 2551 2550 rc = ata_pci_sff_activate_host(host, ata_bmdma_interrupt, sht); 2552 2551 } else 2552 + #endif 2553 2553 rc = ata_pci_sff_activate_host(host, ata_sff_interrupt, sht); 2554 2554 out: 2555 2555 if (rc == 0)
+8 -6
drivers/base/node.c
··· 127 127 nid, K(node_page_state(nid, NR_WRITEBACK)), 128 128 nid, K(node_page_state(nid, NR_FILE_PAGES)), 129 129 nid, K(node_page_state(nid, NR_FILE_MAPPED)), 130 - nid, K(node_page_state(nid, NR_ANON_PAGES) 131 130 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 131 + nid, K(node_page_state(nid, NR_ANON_PAGES) 132 132 + node_page_state(nid, NR_ANON_TRANSPARENT_HUGEPAGES) * 133 - HPAGE_PMD_NR 133 + HPAGE_PMD_NR), 134 + #else 135 + nid, K(node_page_state(nid, NR_ANON_PAGES)), 134 136 #endif 135 - ), 136 137 nid, K(node_page_state(nid, NR_SHMEM)), 137 138 nid, node_page_state(nid, NR_KERNEL_STACK) * 138 139 THREAD_SIZE / 1024, ··· 144 143 nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE) + 145 144 node_page_state(nid, NR_SLAB_UNRECLAIMABLE)), 146 145 nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE)), 147 - nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE)) 148 146 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 147 + nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE)) 149 148 , nid, 150 149 K(node_page_state(nid, NR_ANON_TRANSPARENT_HUGEPAGES) * 151 - HPAGE_PMD_NR) 150 + HPAGE_PMD_NR)); 151 + #else 152 + nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE))); 152 153 #endif 153 - ); 154 154 n += hugetlb_report_node_meminfo(nid, buf + n); 155 155 return n; 156 156 }
+7 -5
drivers/crypto/mv_cesa.c
··· 343 343 else 344 344 op.config |= CFG_MID_FRAG; 345 345 346 - writel(req_ctx->state[0], cpg->reg + DIGEST_INITIAL_VAL_A); 347 - writel(req_ctx->state[1], cpg->reg + DIGEST_INITIAL_VAL_B); 348 - writel(req_ctx->state[2], cpg->reg + DIGEST_INITIAL_VAL_C); 349 - writel(req_ctx->state[3], cpg->reg + DIGEST_INITIAL_VAL_D); 350 - writel(req_ctx->state[4], cpg->reg + DIGEST_INITIAL_VAL_E); 346 + if (first_block) { 347 + writel(req_ctx->state[0], cpg->reg + DIGEST_INITIAL_VAL_A); 348 + writel(req_ctx->state[1], cpg->reg + DIGEST_INITIAL_VAL_B); 349 + writel(req_ctx->state[2], cpg->reg + DIGEST_INITIAL_VAL_C); 350 + writel(req_ctx->state[3], cpg->reg + DIGEST_INITIAL_VAL_D); 351 + writel(req_ctx->state[4], cpg->reg + DIGEST_INITIAL_VAL_E); 352 + } 351 353 } 352 354 353 355 memcpy(cpg->sram + SRAM_CONFIG, &op, sizeof(struct sec_accel_config));
+1 -1
drivers/edac/mpc85xx_edac.c
··· 1128 1128 { .compatible = "fsl,p1020-memory-controller", }, 1129 1129 { .compatible = "fsl,p1021-memory-controller", }, 1130 1130 { .compatible = "fsl,p2020-memory-controller", }, 1131 - { .compatible = "fsl,p4080-memory-controller", }, 1131 + { .compatible = "fsl,qoriq-memory-controller", }, 1132 1132 {}, 1133 1133 }; 1134 1134 MODULE_DEVICE_TABLE(of, mpc85xx_mc_err_of_match);
+9 -3
drivers/firmware/efivars.c
··· 457 457 } 458 458 459 459 static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type, 460 - struct timespec *timespec, struct pstore_info *psi) 460 + struct timespec *timespec, 461 + char **buf, struct pstore_info *psi) 461 462 { 462 463 efi_guid_t vendor = LINUX_EFI_CRASH_GUID; 463 464 struct efivars *efivars = psi->data; ··· 479 478 timespec->tv_nsec = 0; 480 479 get_var_data_locked(efivars, &efivars->walk_entry->var); 481 480 size = efivars->walk_entry->var.DataSize; 482 - memcpy(psi->buf, efivars->walk_entry->var.Data, size); 481 + *buf = kmalloc(size, GFP_KERNEL); 482 + if (*buf == NULL) 483 + return -ENOMEM; 484 + memcpy(*buf, efivars->walk_entry->var.Data, 485 + size); 483 486 efivars->walk_entry = list_entry(efivars->walk_entry->list.next, 484 487 struct efivar_entry, list); 485 488 return size; ··· 581 576 } 582 577 583 578 static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type, 584 - struct timespec *time, struct pstore_info *psi) 579 + struct timespec *timespec, 580 + char **buf, struct pstore_info *psi) 585 581 { 586 582 return -1; 587 583 }
+2 -2
drivers/gpio/gpio-pca953x.c
··· 546 546 * Translate OpenFirmware node properties into platform_data 547 547 * WARNING: This is DEPRECATED and will be removed eventually! 548 548 */ 549 - void 549 + static void 550 550 pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert) 551 551 { 552 552 struct device_node *node; ··· 574 574 *invert = *val; 575 575 } 576 576 #else 577 - void 577 + static void 578 578 pca953x_get_alt_pdata(struct i2c_client *client, int *gpio_base, int *invert) 579 579 { 580 580 *gpio_base = -1;
+4
drivers/gpu/drm/drm_crtc.c
··· 1873 1873 } 1874 1874 1875 1875 if (num_clips && clips_ptr) { 1876 + if (num_clips < 0 || num_clips > DRM_MODE_FB_DIRTY_MAX_CLIPS) { 1877 + ret = -EINVAL; 1878 + goto out_err1; 1879 + } 1876 1880 clips = kzalloc(num_clips * sizeof(*clips), GFP_KERNEL); 1877 1881 if (!clips) { 1878 1882 ret = -ENOMEM;
+32 -30
drivers/gpu/drm/exynos/exynos_drm_buf.c
··· 27 27 #include "drm.h" 28 28 29 29 #include "exynos_drm_drv.h" 30 + #include "exynos_drm_gem.h" 30 31 #include "exynos_drm_buf.h" 31 32 32 - static DEFINE_MUTEX(exynos_drm_buf_lock); 33 - 34 33 static int lowlevel_buffer_allocate(struct drm_device *dev, 35 - struct exynos_drm_buf_entry *entry) 34 + struct exynos_drm_gem_buf *buffer) 36 35 { 37 36 DRM_DEBUG_KMS("%s\n", __FILE__); 38 37 39 - entry->vaddr = dma_alloc_writecombine(dev->dev, entry->size, 40 - (dma_addr_t *)&entry->paddr, GFP_KERNEL); 41 - if (!entry->paddr) { 38 + buffer->kvaddr = dma_alloc_writecombine(dev->dev, buffer->size, 39 + &buffer->dma_addr, GFP_KERNEL); 40 + if (!buffer->kvaddr) { 42 41 DRM_ERROR("failed to allocate buffer.\n"); 43 42 return -ENOMEM; 44 43 } 45 44 46 - DRM_DEBUG_KMS("allocated : vaddr(0x%x), paddr(0x%x), size(0x%x)\n", 47 - (unsigned int)entry->vaddr, entry->paddr, entry->size); 45 + DRM_DEBUG_KMS("vaddr(0x%lx), dma_addr(0x%lx), size(0x%lx)\n", 46 + (unsigned long)buffer->kvaddr, 47 + (unsigned long)buffer->dma_addr, 48 + buffer->size); 48 49 49 50 return 0; 50 51 } 51 52 52 53 static void lowlevel_buffer_deallocate(struct drm_device *dev, 53 - struct exynos_drm_buf_entry *entry) 54 + struct exynos_drm_gem_buf *buffer) 54 55 { 55 56 DRM_DEBUG_KMS("%s.\n", __FILE__); 56 57 57 - if (entry->paddr && entry->vaddr && entry->size) 58 - dma_free_writecombine(dev->dev, entry->size, entry->vaddr, 59 - entry->paddr); 58 + if (buffer->dma_addr && buffer->size) 59 + dma_free_writecombine(dev->dev, buffer->size, buffer->kvaddr, 60 + (dma_addr_t)buffer->dma_addr); 60 61 else 61 - DRM_DEBUG_KMS("entry data is null.\n"); 62 + DRM_DEBUG_KMS("buffer data are invalid.\n"); 62 63 } 63 64 64 - struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev, 65 + struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev, 65 66 unsigned int size) 66 67 { 67 - struct exynos_drm_buf_entry *entry; 68 + struct exynos_drm_gem_buf *buffer; 68 69 69 70 DRM_DEBUG_KMS("%s.\n", __FILE__); 71 + DRM_DEBUG_KMS("desired size = 0x%x\n", size); 70 72 71 - entry = kzalloc(sizeof(*entry), GFP_KERNEL); 72 - if (!entry) { 73 - DRM_ERROR("failed to allocate exynos_drm_buf_entry.\n"); 73 + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); 74 + if (!buffer) { 75 + DRM_ERROR("failed to allocate exynos_drm_gem_buf.\n"); 74 76 return ERR_PTR(-ENOMEM); 75 77 } 76 78 77 - entry->size = size; 79 + buffer->size = size; 78 80 79 81 /* 80 82 * allocate memory region with size and set the memory information 81 - * to vaddr and paddr of a entry object. 83 + * to vaddr and dma_addr of a buffer object. 82 84 */ 83 - if (lowlevel_buffer_allocate(dev, entry) < 0) { 84 - kfree(entry); 85 - entry = NULL; 85 + if (lowlevel_buffer_allocate(dev, buffer) < 0) { 86 + kfree(buffer); 87 + buffer = NULL; 86 88 return ERR_PTR(-ENOMEM); 87 89 } 88 90 89 - return entry; 91 + return buffer; 90 92 } 91 93 92 94 void exynos_drm_buf_destroy(struct drm_device *dev, 93 - struct exynos_drm_buf_entry *entry) 95 + struct exynos_drm_gem_buf *buffer) 94 96 { 95 97 DRM_DEBUG_KMS("%s.\n", __FILE__); 96 98 97 - if (!entry) { 98 - DRM_DEBUG_KMS("entry is null.\n"); 99 + if (!buffer) { 100 + DRM_DEBUG_KMS("buffer is null.\n"); 99 101 return; 100 102 } 101 103 102 - lowlevel_buffer_deallocate(dev, entry); 104 + lowlevel_buffer_deallocate(dev, buffer); 103 105 104 - kfree(entry); 105 - entry = NULL; 106 + kfree(buffer); 107 + buffer = NULL; 106 108 } 107 110 108 110 MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
+4 -17
drivers/gpu/drm/exynos/exynos_drm_buf.h
··· 26 26 #ifndef _EXYNOS_DRM_BUF_H_ 27 27 #define _EXYNOS_DRM_BUF_H_ 28 28 29 - /* 30 - * exynos drm buffer entry structure. 31 - * 32 - * @paddr: physical address of allocated memory. 33 - * @vaddr: kernel virtual address of allocated memory. 34 - * @size: size of allocated memory. 35 - */ 36 - struct exynos_drm_buf_entry { 37 - dma_addr_t paddr; 38 - void __iomem *vaddr; 39 - unsigned int size; 40 - }; 41 - 42 29 /* allocate physical memory. */ 43 - struct exynos_drm_buf_entry *exynos_drm_buf_create(struct drm_device *dev, 30 + struct exynos_drm_gem_buf *exynos_drm_buf_create(struct drm_device *dev, 44 31 unsigned int size); 45 32 46 - /* get physical memory information of a drm framebuffer. */ 47 - struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb); 33 + /* get memory information of a drm framebuffer. */ 34 + struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb); 48 35 49 36 /* remove allocated physical memory. */ 50 37 void exynos_drm_buf_destroy(struct drm_device *dev, 51 - struct exynos_drm_buf_entry *entry); 38 + struct exynos_drm_gem_buf *buffer); 52 39 53 40 #endif
+56 -22
drivers/gpu/drm/exynos/exynos_drm_connector.c
··· 37 37 38 38 struct exynos_drm_connector { 39 39 struct drm_connector drm_connector; 40 + uint32_t encoder_id; 41 + struct exynos_drm_manager *manager; 40 42 }; 41 43 42 44 /* convert exynos_video_timings to drm_display_mode */ ··· 49 47 DRM_DEBUG_KMS("%s\n", __FILE__); 50 48 51 49 mode->clock = timing->pixclock / 1000; 50 + mode->vrefresh = timing->refresh; 52 51 53 52 mode->hdisplay = timing->xres; 54 53 mode->hsync_start = mode->hdisplay + timing->left_margin; ··· 60 57 mode->vsync_start = mode->vdisplay + timing->upper_margin; 61 58 mode->vsync_end = mode->vsync_start + timing->vsync_len; 62 59 mode->vtotal = mode->vsync_end + timing->lower_margin; 60 + 61 + if (timing->vmode & FB_VMODE_INTERLACED) 62 + mode->flags |= DRM_MODE_FLAG_INTERLACE; 63 + 64 + if (timing->vmode & FB_VMODE_DOUBLE) 65 + mode->flags |= DRM_MODE_FLAG_DBLSCAN; 63 66 } 64 67 65 68 /* convert drm_display_mode to exynos_video_timings */ ··· 78 69 memset(timing, 0, sizeof(*timing)); 79 70 80 71 timing->pixclock = mode->clock * 1000; 81 - timing->refresh = mode->vrefresh; 72 + timing->refresh = drm_mode_vrefresh(mode); 82 73 83 74 timing->xres = mode->hdisplay; 84 75 timing->left_margin = mode->hsync_start - mode->hdisplay; ··· 101 92 102 93 static int exynos_drm_connector_get_modes(struct drm_connector *connector) 103 94 { 104 - struct exynos_drm_manager *manager = 105 - exynos_drm_get_manager(connector->encoder); 106 - struct exynos_drm_display *display = manager->display; 95 + struct exynos_drm_connector *exynos_connector = 96 + to_exynos_connector(connector); 97 + struct exynos_drm_manager *manager = exynos_connector->manager; 98 + struct exynos_drm_display_ops *display_ops = manager->display_ops; 107 99 unsigned int count; 108 100 109 101 DRM_DEBUG_KMS("%s\n", __FILE__); 110 102 111 - if (!display) { 112 - DRM_DEBUG_KMS("display is null.\n"); 103 + if (!display_ops) { 104 + DRM_DEBUG_KMS("display_ops is null.\n"); 113 105 return 0; 114 106 } 115 107 ··· 122 112 * P.S. 
in case of lcd panel, count is always 1 if success
123 113 	 * because lcd panel has only one mode.
124 114 	 */
125 -	if (display->get_edid) {
115 +	if (display_ops->get_edid) {
126 116 		int ret;
127 117 		void *edid;
128 118 
··· 132 122 		return 0;
133 123 	}
134 124 
135 -	ret = display->get_edid(manager->dev, connector,
125 +	ret = display_ops->get_edid(manager->dev, connector,
136 126 			edid, MAX_EDID);
137 127 	if (ret < 0) {
138 128 		DRM_ERROR("failed to get edid data.\n");
··· 150 140 	struct drm_display_mode *mode = drm_mode_create(connector->dev);
151 141 	struct fb_videomode *timing;
152 142 
153 -	if (display->get_timing)
154 -		timing = display->get_timing(manager->dev);
143 +	if (display_ops->get_timing)
144 +		timing = display_ops->get_timing(manager->dev);
155 145 	else {
156 146 		drm_mode_destroy(connector->dev, mode);
157 147 		return 0;
··· 172 162 static int exynos_drm_connector_mode_valid(struct drm_connector *connector,
173 163 				struct drm_display_mode *mode)
174 164 {
175 -	struct exynos_drm_manager *manager =
176 -		exynos_drm_get_manager(connector->encoder);
177 -	struct exynos_drm_display *display = manager->display;
165 +	struct exynos_drm_connector *exynos_connector =
166 +					to_exynos_connector(connector);
167 +	struct exynos_drm_manager *manager = exynos_connector->manager;
168 +	struct exynos_drm_display_ops *display_ops = manager->display_ops;
178 169 	struct fb_videomode timing;
179 170 	int ret = MODE_BAD;
180 171 
··· 183 172 
184 173 	convert_to_video_timing(&timing, mode);
185 174 
186 -	if (display && display->check_timing)
187 -		if (!display->check_timing(manager->dev, (void *)&timing))
175 +	if (display_ops && display_ops->check_timing)
176 +		if (!display_ops->check_timing(manager->dev, (void *)&timing))
188 177 			ret = MODE_OK;
189 178 
190 179 	return ret;
··· 192 181 
193 182 struct drm_encoder *exynos_drm_best_encoder(struct drm_connector *connector)
194 183 {
184 +	struct drm_device *dev = connector->dev;
185 +	struct exynos_drm_connector *exynos_connector =
186 +					to_exynos_connector(connector);
187 +	struct drm_mode_object *obj;
188 +	struct drm_encoder *encoder;
189 + 
195 190 	DRM_DEBUG_KMS("%s\n", __FILE__);
196 191 
197 -	return connector->encoder;
192 +	obj = drm_mode_object_find(dev, exynos_connector->encoder_id,
193 +				DRM_MODE_OBJECT_ENCODER);
194 +	if (!obj) {
195 +		DRM_DEBUG_KMS("Unknown ENCODER ID %d\n",
196 +				exynos_connector->encoder_id);
197 +		return NULL;
198 +	}
199 + 
200 +	encoder = obj_to_encoder(obj);
201 + 
202 +	return encoder;
198 203 }
199 204 
200 205 static struct drm_connector_helper_funcs exynos_connector_helper_funcs = {
··· 223 196 static enum drm_connector_status
224 197 exynos_drm_connector_detect(struct drm_connector *connector, bool force)
225 198 {
226 -	struct exynos_drm_manager *manager =
227 -		exynos_drm_get_manager(connector->encoder);
228 -	struct exynos_drm_display *display = manager->display;
199 +	struct exynos_drm_connector *exynos_connector =
200 +					to_exynos_connector(connector);
201 +	struct exynos_drm_manager *manager = exynos_connector->manager;
202 +	struct exynos_drm_display_ops *display_ops =
203 +					manager->display_ops;
229 204 	enum drm_connector_status status = connector_status_disconnected;
230 205 
231 206 	DRM_DEBUG_KMS("%s\n", __FILE__);
232 207 
233 -	if (display && display->is_connected) {
234 -		if (display->is_connected(manager->dev))
208 +	if (display_ops && display_ops->is_connected) {
209 +		if (display_ops->is_connected(manager->dev))
235 210 			status = connector_status_connected;
236 211 		else
237 212 			status = connector_status_disconnected;
··· 280 251 
281 252 	connector = &exynos_connector->drm_connector;
282 253 
283 -	switch (manager->display->type) {
254 +	switch (manager->display_ops->type) {
284 255 	case EXYNOS_DISPLAY_TYPE_HDMI:
285 256 		type = DRM_MODE_CONNECTOR_HDMIA;
257 +		connector->interlace_allowed = true;
258 +		connector->polled = DRM_CONNECTOR_POLL_HPD;
286 259 		break;
287 260 	default:
288 261 		type = DRM_MODE_CONNECTOR_Unknown;
··· 298 267 	if (err)
299 268 		goto err_connector;
300 269 
270 +	exynos_connector->encoder_id = encoder->base.id;
271 +	exynos_connector->manager = manager;
301 272 	connector->encoder = encoder;
273 + 
302 274 	err = drm_mode_connector_attach_encoder(connector, encoder);
303 275 	if (err) {
304 276 		DRM_ERROR("failed to attach a connector to a encoder\n");
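The connector changes above lean on the `to_exynos_connector()` container_of pattern: the DRM core passes the embedded `drm_connector`, and the driver recovers the enclosing object to reach its private `encoder_id` and `manager` fields. A minimal userspace sketch of that pattern, using illustrative stub types rather than the real kernel structs:

```c
#include <assert.h>
#include <stddef.h>

/* Stub standing in for the embedded DRM core object. */
struct drm_connector_stub {
	int base_id;
};

/* Stub standing in for the driver's wrapping object; the embedded member
 * is first here, but offsetof() makes the macro correct at any offset. */
struct exynos_connector_stub {
	struct drm_connector_stub drm_connector;
	unsigned int encoder_id;
};

/* Same shape as the kernel's container_of()-based to_exynos_connector(). */
#define to_exynos_connector(x) \
	((struct exynos_connector_stub *)((char *)(x) - \
		offsetof(struct exynos_connector_stub, drm_connector)))

/* Given only the embedded member, reach a private field of the outer
 * object, as exynos_drm_best_encoder() does with encoder_id. */
static inline unsigned int stub_encoder_id(struct drm_connector_stub *c)
{
	return to_exynos_connector(c)->encoder_id;
}
```

This is why the patch can drop the reliance on `connector->encoder`: the private lookup key travels with the connector object itself.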
+39 -37
drivers/gpu/drm/exynos/exynos_drm_crtc.c
··· 29 29 #include "drmP.h"
30 30 #include "drm_crtc_helper.h"
31 31 
32 + #include "exynos_drm_crtc.h"
32 33 #include "exynos_drm_drv.h"
33 34 #include "exynos_drm_fb.h"
34 35 #include "exynos_drm_encoder.h"
36 + #include "exynos_drm_gem.h"
35 37 #include "exynos_drm_buf.h"
36 38 
37 39 #define to_exynos_crtc(x)	container_of(x, struct exynos_drm_crtc,\
38 40 				drm_crtc)
39 - 
40 - /*
41 -  * Exynos specific crtc postion structure.
42 -  *
43 -  * @fb_x: offset x on a framebuffer to be displyed
44 -  *	- the unit is screen coordinates.
45 -  * @fb_y: offset y on a framebuffer to be displayed
46 -  *	- the unit is screen coordinates.
47 -  * @crtc_x: offset x on hardware screen.
48 -  * @crtc_y: offset y on hardware screen.
49 -  * @crtc_w: width of hardware screen.
50 -  * @crtc_h: height of hardware screen.
51 -  */
52 - struct exynos_drm_crtc_pos {
53 -	unsigned int fb_x;
54 -	unsigned int fb_y;
55 -	unsigned int crtc_x;
56 -	unsigned int crtc_y;
57 -	unsigned int crtc_w;
58 -	unsigned int crtc_h;
59 - };
60 41 
61 42 /*
62 43  * Exynos specific crtc structure.
··· 66 85 
67 86 	exynos_drm_fn_encoder(crtc, overlay,
68 87 			exynos_drm_encoder_crtc_mode_set);
69 -	exynos_drm_fn_encoder(crtc, NULL, exynos_drm_encoder_crtc_commit);
88 +	exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe,
89 +			exynos_drm_encoder_crtc_commit);
70 90 }
71 91 
72 - static int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay,
73 -				struct drm_framebuffer *fb,
74 -				struct drm_display_mode *mode,
75 -				struct exynos_drm_crtc_pos *pos)
92 + int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay,
93 +				struct drm_framebuffer *fb,
94 +				struct drm_display_mode *mode,
95 +				struct exynos_drm_crtc_pos *pos)
76 96 {
77 -	struct exynos_drm_buf_entry *entry;
97 +	struct exynos_drm_gem_buf *buffer;
78 98 	unsigned int actual_w;
79 99 	unsigned int actual_h;
80 100 
81 -	entry = exynos_drm_fb_get_buf(fb);
82 -	if (!entry) {
83 -		DRM_LOG_KMS("entry is null.\n");
101 +	buffer = exynos_drm_fb_get_buf(fb);
102 +	if (!buffer) {
103 +		DRM_LOG_KMS("buffer is null.\n");
84 104 		return -EFAULT;
85 105 	}
86 106 
87 -	overlay->paddr = entry->paddr;
88 -	overlay->vaddr = entry->vaddr;
107 +	overlay->dma_addr = buffer->dma_addr;
108 +	overlay->vaddr = buffer->kvaddr;
89 109 
90 -	DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n",
110 +	DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n",
91 111 			(unsigned long)overlay->vaddr,
92 -			(unsigned long)overlay->paddr);
112 +			(unsigned long)overlay->dma_addr);
93 113 
94 114 	actual_w = min((mode->hdisplay - pos->crtc_x), pos->crtc_w);
95 115 	actual_h = min((mode->vdisplay - pos->crtc_y), pos->crtc_h);
··· 153 171 
154 172 static void exynos_drm_crtc_dpms(struct drm_crtc *crtc, int mode)
155 173 {
156 -	DRM_DEBUG_KMS("%s\n", __FILE__);
174 +	struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
157 175 
158 -	/* TODO */
176 +	DRM_DEBUG_KMS("crtc[%d] mode[%d]\n", crtc->base.id, mode);
177 + 
178 +	switch (mode) {
179 +	case DRM_MODE_DPMS_ON:
180 +		exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe,
181 +				exynos_drm_encoder_crtc_commit);
182 +		break;
183 +	case DRM_MODE_DPMS_STANDBY:
184 +	case DRM_MODE_DPMS_SUSPEND:
185 +	case DRM_MODE_DPMS_OFF:
186 +		/* TODO */
187 +		exynos_drm_fn_encoder(crtc, NULL,
188 +				exynos_drm_encoder_crtc_disable);
189 +		break;
190 +	default:
191 +		DRM_DEBUG_KMS("unspecified mode %d\n", mode);
192 +		break;
193 +	}
159 194 }
160 195 
161 196 static void exynos_drm_crtc_prepare(struct drm_crtc *crtc)
··· 184 185 
185 186 static void exynos_drm_crtc_commit(struct drm_crtc *crtc)
186 187 {
188 +	struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
189 + 
187 190 	DRM_DEBUG_KMS("%s\n", __FILE__);
188 191 
189 -	/* drm framework doesn't check NULL. */
192 +	exynos_drm_fn_encoder(crtc, &exynos_crtc->pipe,
193 +			exynos_drm_encoder_crtc_commit);
190 194 }
191 195 
192 196 static bool
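The clipping arithmetic kept in `exynos_drm_overlay_update()` above (the visible overlay size is the requested `crtc_w`/`crtc_h` clamped to what still fits on the display from `(crtc_x, crtc_y)`) can be sketched standalone; the function names and values below are illustrative, not from the driver:

```c
#include <assert.h>

/* Unsigned min, mirroring the kernel's min() use in the overlay path. */
static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

/* actual_w = min(mode->hdisplay - pos->crtc_x, pos->crtc_w), and the
 * same formula for the height; assumes offset < display, as the driver
 * relies on for a visible overlay. */
static unsigned int actual_size(unsigned int display, unsigned int offset,
				unsigned int requested)
{
	return min_u(display - offset, requested);
}
```

So an overlay requested at full width stays full width, while one pushed toward the edge is clipped to the remaining screen space.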
+25
drivers/gpu/drm/exynos/exynos_drm_crtc.h
··· 35 35 int exynos_drm_crtc_enable_vblank(struct drm_device *dev, int crtc);
36 36 void exynos_drm_crtc_disable_vblank(struct drm_device *dev, int crtc);
37 37 
38 + /*
39 +  * Exynos specific crtc postion structure.
40 +  *
41 +  * @fb_x: offset x on a framebuffer to be displyed
42 +  *	- the unit is screen coordinates.
43 +  * @fb_y: offset y on a framebuffer to be displayed
44 +  *	- the unit is screen coordinates.
45 +  * @crtc_x: offset x on hardware screen.
46 +  * @crtc_y: offset y on hardware screen.
47 +  * @crtc_w: width of hardware screen.
48 +  * @crtc_h: height of hardware screen.
49 +  */
50 + struct exynos_drm_crtc_pos {
51 +	unsigned int fb_x;
52 +	unsigned int fb_y;
53 +	unsigned int crtc_x;
54 +	unsigned int crtc_y;
55 +	unsigned int crtc_w;
56 +	unsigned int crtc_h;
57 + };
58 + 
59 + int exynos_drm_overlay_update(struct exynos_drm_overlay *overlay,
60 +				struct drm_framebuffer *fb,
61 +				struct drm_display_mode *mode,
62 +				struct exynos_drm_crtc_pos *pos);
38 63 #endif
+5
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 27 27 
28 28 #include "drmP.h"
29 29 #include "drm.h"
30 + #include "drm_crtc_helper.h"
30 31 
31 32 #include <drm/exynos_drm.h>
32 33 
··· 61 60 	dev->dev_private = (void *)private;
62 61 
63 62 	drm_mode_config_init(dev);
63 + 
64 +	/* init kms poll for handling hpd */
65 +	drm_kms_helper_poll_init(dev);
64 66 
65 67 	exynos_drm_mode_config_init(dev);
66 68 
··· 120 116 	exynos_drm_fbdev_fini(dev);
121 117 	exynos_drm_device_unregister(dev);
122 118 	drm_vblank_cleanup(dev);
119 +	drm_kms_helper_poll_fini(dev);
123 120 	drm_mode_config_cleanup(dev);
124 121 	kfree(dev->dev_private);
125 122 
+8 -5
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 29 29 #ifndef _EXYNOS_DRM_DRV_H_
30 30 #define _EXYNOS_DRM_DRV_H_
31 31 
32 + #include <linux/module.h>
32 33 #include "drm.h"
33 34 
34 35 #define MAX_CRTC	2
··· 80 79  * @scan_flag: interlace or progressive way.
81 80  *	(it could be DRM_MODE_FLAG_*)
82 81  * @bpp: pixel size.(in bit)
83 -  * @paddr: bus(accessed by dma) physical memory address to this overlay
84 -  *	and this is physically continuous.
82 +  * @dma_addr: bus(accessed by dma) address to the memory region allocated
83 +  *	for a overlay.
85 84  * @vaddr: virtual memory addresss to this overlay.
86 85  * @default_win: a window to be enabled.
87 86  * @color_key: color key on or off.
··· 109 108 	unsigned int scan_flag;
110 109 	unsigned int bpp;
111 110 	unsigned int pitch;
112 -	dma_addr_t paddr;
111 +	dma_addr_t dma_addr;
113 112 	void __iomem *vaddr;
114 113 
115 114 	bool default_win;
··· 131 130  * @check_timing: check if timing is valid or not.
132 131  * @power_on: display device on or off.
133 132  */
134 - struct exynos_drm_display {
133 + struct exynos_drm_display_ops {
135 134 	enum exynos_drm_output_type type;
136 135 	bool (*is_connected)(struct device *dev);
137 136 	int (*get_edid)(struct device *dev, struct drm_connector *connector,
··· 147 146  * @mode_set: convert drm_display_mode to hw specific display mode and
148 147  *	would be called by encoder->mode_set().
149 148  * @commit: set current hw specific display mode to hw.
149 +  * @disable: disable hardware specific display mode.
150 150  * @enable_vblank: specific driver callback for enabling vblank interrupt.
151 151  * @disable_vblank: specific driver callback for disabling vblank interrupt.
152 152  */
153 153 struct exynos_drm_manager_ops {
154 154 	void (*mode_set)(struct device *subdrv_dev, void *mode);
155 155 	void (*commit)(struct device *subdrv_dev);
156 +	void (*disable)(struct device *subdrv_dev);
156 157 	int (*enable_vblank)(struct device *subdrv_dev);
157 158 	void (*disable_vblank)(struct device *subdrv_dev);
158 159 };
··· 181 178 	int pipe;
182 179 	struct exynos_drm_manager_ops *ops;
183 180 	struct exynos_drm_overlay_ops *overlay_ops;
184 -	struct exynos_drm_display *display;
181 +	struct exynos_drm_display_ops *display_ops;
185 182 };
186 183 
187 184 /*
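The ops tables this header defines are only ever invoked through NULL checks on both the table and the callback (`if (manager_ops && manager_ops->commit)`), so a sub-driver may leave hooks unimplemented. A hedged userspace sketch of that convention, with stub types and a counter standing in for hardware work:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for exynos_drm_manager_ops: an optional
 * function-pointer table. */
struct manager_ops_stub {
	void (*commit)(int *dev);
	void (*disable)(int *dev);	/* intentionally left unimplemented */
};

static void commit_stub(int *dev)
{
	(*dev)++;	/* pretend hardware commit */
}

/* Callers mirror the driver: check the table, then the hook. */
static void call_commit(struct manager_ops_stub *m, int *dev)
{
	if (m && m->commit)
		m->commit(dev);
}

static void call_disable(struct manager_ops_stub *m, int *dev)
{
	if (m && m->disable)	/* safely skipped when no disable hook */
		m->disable(dev);
}
```

The double check keeps a partially filled table (or a detached manager) from oopsing, which is exactly why the diff adds `overlay_ops && overlay_ops->commit` guards in the encoder path.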
+72 -11
drivers/gpu/drm/exynos/exynos_drm_encoder.c
··· 53 53 	struct drm_device *dev = encoder->dev;
54 54 	struct drm_connector *connector;
55 55 	struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder);
56 +	struct exynos_drm_manager_ops *manager_ops = manager->ops;
56 57 
57 58 	DRM_DEBUG_KMS("%s, encoder dpms: %d\n", __FILE__, mode);
58 59 
60 +	switch (mode) {
61 +	case DRM_MODE_DPMS_ON:
62 +		if (manager_ops && manager_ops->commit)
63 +			manager_ops->commit(manager->dev);
64 +		break;
65 +	case DRM_MODE_DPMS_STANDBY:
66 +	case DRM_MODE_DPMS_SUSPEND:
67 +	case DRM_MODE_DPMS_OFF:
68 +		/* TODO */
69 +		if (manager_ops && manager_ops->disable)
70 +			manager_ops->disable(manager->dev);
71 +		break;
72 +	default:
73 +		DRM_ERROR("unspecified mode %d\n", mode);
74 +		break;
75 +	}
76 + 
59 77 	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
60 78 		if (connector->encoder == encoder) {
61 -			struct exynos_drm_display *display = manager->display;
79 +			struct exynos_drm_display_ops *display_ops =
80 +							manager->display_ops;
62 81 
63 -			if (display && display->power_on)
64 -				display->power_on(manager->dev, mode);
82 +			DRM_DEBUG_KMS("connector[%d] dpms[%d]\n",
83 +					connector->base.id, mode);
84 +			if (display_ops && display_ops->power_on)
85 +				display_ops->power_on(manager->dev, mode);
65 86 		}
66 87 	}
67 88 }
··· 137 116 {
138 117 	struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder);
139 118 	struct exynos_drm_manager_ops *manager_ops = manager->ops;
140 -	struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
141 119 
142 120 	DRM_DEBUG_KMS("%s\n", __FILE__);
143 121 
144 122 	if (manager_ops && manager_ops->commit)
145 123 		manager_ops->commit(manager->dev);
146 - 
147 -	if (overlay_ops && overlay_ops->commit)
148 -		overlay_ops->commit(manager->dev);
149 124 }
150 125 
151 126 static struct drm_crtc *
··· 225 208 {
226 209 	struct drm_device *dev = crtc->dev;
227 210 	struct drm_encoder *encoder;
211 +	struct exynos_drm_private *private = dev->dev_private;
212 +	struct exynos_drm_manager *manager;
228 213 
229 214 	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
230 -		if (encoder->crtc != crtc)
231 -			continue;
215 +		/*
216 +		 * if crtc is detached from encoder, check pipe,
217 +		 * otherwise check crtc attached to encoder
218 +		 */
219 +		if (!encoder->crtc) {
220 +			manager = to_exynos_encoder(encoder)->manager;
221 +			if (manager->pipe < 0 ||
222 +			    private->crtc[manager->pipe] != crtc)
223 +				continue;
224 +		} else {
225 +			if (encoder->crtc != crtc)
226 +				continue;
227 +		}
232 228 
233 229 		fn(encoder, data);
234 230 	}
··· 280 250 	struct exynos_drm_manager *manager =
281 251 		to_exynos_encoder(encoder)->manager;
282 252 	struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
253 +	int crtc = *(int *)data;
283 254 
284 -	overlay_ops->commit(manager->dev);
255 +	DRM_DEBUG_KMS("%s\n", __FILE__);
256 + 
257 +	/*
258 +	 * when crtc is detached from encoder, this pipe is used
259 +	 * to select manager operation
260 +	 */
261 +	manager->pipe = crtc;
262 + 
263 +	if (overlay_ops && overlay_ops->commit)
264 +		overlay_ops->commit(manager->dev);
285 265 }
286 266 
287 267 void exynos_drm_encoder_crtc_mode_set(struct drm_encoder *encoder, void *data)
··· 301 261 	struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
302 262 	struct exynos_drm_overlay *overlay = data;
303 263 
304 -	overlay_ops->mode_set(manager->dev, overlay);
264 +	if (overlay_ops && overlay_ops->mode_set)
265 +		overlay_ops->mode_set(manager->dev, overlay);
266 + }
267 + 
268 + void exynos_drm_encoder_crtc_disable(struct drm_encoder *encoder, void *data)
269 + {
270 +	struct exynos_drm_manager *manager =
271 +		to_exynos_encoder(encoder)->manager;
272 +	struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
273 + 
274 +	DRM_DEBUG_KMS("\n");
275 + 
276 +	if (overlay_ops && overlay_ops->disable)
277 +		overlay_ops->disable(manager->dev);
278 + 
279 +	/*
280 +	 * crtc is already detached from encoder and last
281 +	 * function for detaching is properly done, so
282 +	 * clear pipe from manager to prevent repeated call
283 +	 */
284 +	if (!encoder->crtc)
285 +		manager->pipe = -1;
305 286 }
306 287 
307 288 MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
+1
drivers/gpu/drm/exynos/exynos_drm_encoder.h
··· 41 41 void exynos_drm_disable_vblank(struct drm_encoder *encoder, void *data);
42 42 void exynos_drm_encoder_crtc_commit(struct drm_encoder *encoder, void *data);
43 43 void exynos_drm_encoder_crtc_mode_set(struct drm_encoder *encoder, void *data);
44 + void exynos_drm_encoder_crtc_disable(struct drm_encoder *encoder, void *data);
44 45 
45 46 #endif
+39 -27
drivers/gpu/drm/exynos/exynos_drm_fb.c
··· 29 29 #include "drmP.h"
30 30 #include "drm_crtc.h"
31 31 #include "drm_crtc_helper.h"
32 + #include "drm_fb_helper.h"
32 33 
34 + #include "exynos_drm_drv.h"
33 35 #include "exynos_drm_fb.h"
34 36 #include "exynos_drm_buf.h"
35 37 #include "exynos_drm_gem.h"
··· 43 41  *
44 42  * @fb: drm framebuffer obejct.
45 43  * @exynos_gem_obj: exynos specific gem object containing a gem object.
46 -  * @entry: pointer to exynos drm buffer entry object.
47 -  *	- containing only the information to physically continuous memory
48 -  *	region allocated at default framebuffer creation.
44 +  * @buffer: pointer to exynos_drm_gem_buffer object.
45 +  *	- contain the memory information to memory region allocated
46 +  *	at default framebuffer creation.
49 47  */
50 48 struct exynos_drm_fb {
51 49 	struct drm_framebuffer		fb;
52 50 	struct exynos_drm_gem_obj	*exynos_gem_obj;
53 -	struct exynos_drm_buf_entry	*entry;
51 +	struct exynos_drm_gem_buf	*buffer;
54 52 };
55 53 
56 54 static void exynos_drm_fb_destroy(struct drm_framebuffer *fb)
··· 65 63 	 * default framebuffer has no gem object so
66 64 	 * a buffer of the default framebuffer should be released at here.
67 65 	 */
68 -	if (!exynos_fb->exynos_gem_obj && exynos_fb->entry)
69 -		exynos_drm_buf_destroy(fb->dev, exynos_fb->entry);
66 +	if (!exynos_fb->exynos_gem_obj && exynos_fb->buffer)
67 +		exynos_drm_buf_destroy(fb->dev, exynos_fb->buffer);
70 68 
71 69 	kfree(exynos_fb);
72 70 	exynos_fb = NULL;
··· 145 143 	 */
146 144 	if (!mode_cmd->handle) {
147 145 		if (!file_priv) {
148 -			struct exynos_drm_buf_entry *entry;
146 +			struct exynos_drm_gem_buf *buffer;
149 147 
150 148 			/*
151 149 			 * in case that file_priv is NULL, it allocates
152 150 			 * only buffer and this buffer would be used
153 151 			 * for default framebuffer.
154 152 			 */
155 -			entry = exynos_drm_buf_create(dev, size);
156 -			if (IS_ERR(entry)) {
157 -				ret = PTR_ERR(entry);
153 +			buffer = exynos_drm_buf_create(dev, size);
154 +			if (IS_ERR(buffer)) {
155 +				ret = PTR_ERR(buffer);
158 156 				goto err_buffer;
159 157 			}
160 158 
161 -			exynos_fb->entry = entry;
159 +			exynos_fb->buffer = buffer;
162 160 
163 -			DRM_LOG_KMS("default fb: paddr = 0x%lx, size = 0x%x\n",
164 -					(unsigned long)entry->paddr, size);
161 +			DRM_LOG_KMS("default: dma_addr = 0x%lx, size = 0x%x\n",
162 +					(unsigned long)buffer->dma_addr, size);
165 163 
166 164 			goto out;
167 165 		} else {
168 -			exynos_gem_obj = exynos_drm_gem_create(file_priv, dev,
169 -							size,
170 -							&mode_cmd->handle);
166 +			exynos_gem_obj = exynos_drm_gem_create(dev, file_priv,
167 +							&mode_cmd->handle,
168 +							size);
171 169 			if (IS_ERR(exynos_gem_obj)) {
172 170 				ret = PTR_ERR(exynos_gem_obj);
173 171 				goto err_buffer;
··· 191 189 	 * so that default framebuffer has no its own gem object,
192 190 	 * only its own buffer object.
193 191 	 */
194 -	exynos_fb->entry = exynos_gem_obj->entry;
192 +	exynos_fb->buffer = exynos_gem_obj->buffer;
195 193 
196 -	DRM_LOG_KMS("paddr = 0x%lx, size = 0x%x, gem object = 0x%x\n",
197 -			(unsigned long)exynos_fb->entry->paddr, size,
194 +	DRM_LOG_KMS("dma_addr = 0x%lx, size = 0x%x, gem object = 0x%x\n",
195 +			(unsigned long)exynos_fb->buffer->dma_addr, size,
198 196 			(unsigned int)&exynos_gem_obj->base);
199 197 
200 198 out:
··· 222 220 	return exynos_drm_fb_init(file_priv, dev, mode_cmd);
223 221 }
224 222 
225 - struct exynos_drm_buf_entry *exynos_drm_fb_get_buf(struct drm_framebuffer *fb)
223 + struct exynos_drm_gem_buf *exynos_drm_fb_get_buf(struct drm_framebuffer *fb)
226 224 {
227 225 	struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
228 -	struct exynos_drm_buf_entry *entry;
226 +	struct exynos_drm_gem_buf *buffer;
229 227 
230 228 	DRM_DEBUG_KMS("%s\n", __FILE__);
231 229 
232 -	entry = exynos_fb->entry;
233 -	if (!entry)
230 +	buffer = exynos_fb->buffer;
231 +	if (!buffer)
234 232 		return NULL;
235 233 
236 -	DRM_DEBUG_KMS("vaddr = 0x%lx, paddr = 0x%lx\n",
237 -			(unsigned long)entry->vaddr,
238 -			(unsigned long)entry->paddr);
234 +	DRM_DEBUG_KMS("vaddr = 0x%lx, dma_addr = 0x%lx\n",
235 +			(unsigned long)buffer->kvaddr,
236 +			(unsigned long)buffer->dma_addr);
239 237 
240 -	return entry;
238 +	return buffer;
239 + }
240 + 
241 + static void exynos_drm_output_poll_changed(struct drm_device *dev)
242 + {
243 +	struct exynos_drm_private *private = dev->dev_private;
244 +	struct drm_fb_helper *fb_helper = private->fb_helper;
245 + 
246 +	if (fb_helper)
247 +		drm_fb_helper_hotplug_event(fb_helper);
241 248 }
242 249 
243 250 static struct drm_mode_config_funcs exynos_drm_mode_config_funcs = {
244 251 	.fb_create = exynos_drm_fb_create,
252 +	.output_poll_changed = exynos_drm_output_poll_changed,
245 253 };
246 254 
247 255 void exynos_drm_mode_config_init(struct drm_device *dev)
+28 -16
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
··· 33 33 
34 34 #include "exynos_drm_drv.h"
35 35 #include "exynos_drm_fb.h"
36 + #include "exynos_drm_gem.h"
36 37 #include "exynos_drm_buf.h"
37 38 
38 39 #define MAX_CONNECTOR		4
··· 86 85 };
87 86 
88 87 static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
89 -				     struct drm_framebuffer *fb,
90 -				     unsigned int fb_width,
91 -				     unsigned int fb_height)
88 +				     struct drm_framebuffer *fb)
92 89 {
93 90 	struct fb_info *fbi = helper->fbdev;
94 91 	struct drm_device *dev = helper->dev;
95 92 	struct exynos_drm_fbdev *exynos_fb = to_exynos_fbdev(helper);
96 -	struct exynos_drm_buf_entry *entry;
97 -	unsigned int size = fb_width * fb_height * (fb->bits_per_pixel >> 3);
93 +	struct exynos_drm_gem_buf *buffer;
94 +	unsigned int size = fb->width * fb->height * (fb->bits_per_pixel >> 3);
98 95 	unsigned long offset;
99 96 
100 97 	DRM_DEBUG_KMS("%s\n", __FILE__);
··· 100 101 	exynos_fb->fb = fb;
101 102 
102 103 	drm_fb_helper_fill_fix(fbi, fb->pitch, fb->depth);
103 -	drm_fb_helper_fill_var(fbi, helper, fb_width, fb_height);
104 +	drm_fb_helper_fill_var(fbi, helper, fb->width, fb->height);
104 105 
105 -	entry = exynos_drm_fb_get_buf(fb);
106 -	if (!entry) {
107 -		DRM_LOG_KMS("entry is null.\n");
106 +	buffer = exynos_drm_fb_get_buf(fb);
107 +	if (!buffer) {
108 +		DRM_LOG_KMS("buffer is null.\n");
108 109 		return -EFAULT;
109 110 	}
110 111 
111 112 	offset = fbi->var.xoffset * (fb->bits_per_pixel >> 3);
112 113 	offset += fbi->var.yoffset * fb->pitch;
113 114 
114 -	dev->mode_config.fb_base = entry->paddr;
115 -	fbi->screen_base = entry->vaddr + offset;
116 -	fbi->fix.smem_start = entry->paddr + offset;
115 +	dev->mode_config.fb_base = (resource_size_t)buffer->dma_addr;
116 +	fbi->screen_base = buffer->kvaddr + offset;
117 +	fbi->fix.smem_start = (unsigned long)(buffer->dma_addr + offset);
117 118 	fbi->screen_size = size;
118 119 	fbi->fix.smem_len = size;
119 120 
··· 170 171 		goto out;
171 172 	}
172 173 
173 -	ret = exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width,
174 -			sizes->fb_height);
174 +	ret = exynos_drm_fbdev_update(helper, helper->fb);
175 175 	if (ret < 0)
176 176 		fb_dealloc_cmap(&fbi->cmap);
177 177 
··· 233 235 	}
234 236 
235 237 	helper->fb = exynos_fbdev->fb;
236 -	return exynos_drm_fbdev_update(helper, helper->fb, sizes->fb_width,
237 -			sizes->fb_height);
238 +	return exynos_drm_fbdev_update(helper, helper->fb);
238 239 }
239 240 
240 241 static int exynos_drm_fbdev_probe(struct drm_fb_helper *helper,
··· 402 405 	fb_helper = private->fb_helper;
403 406 
404 407 	if (fb_helper) {
408 +		struct list_head temp_list;
409 + 
410 +		INIT_LIST_HEAD(&temp_list);
411 + 
412 +		/*
413 +		 * fb_helper is reintialized but kernel fb is reused
414 +		 * so kernel_fb_list need to be backuped and restored
415 +		 */
416 +		if (!list_empty(&fb_helper->kernel_fb_list))
417 +			list_replace_init(&fb_helper->kernel_fb_list,
418 +					&temp_list);
419 + 
405 420 		drm_fb_helper_fini(fb_helper);
406 421 
407 422 		ret = drm_fb_helper_init(dev, fb_helper,
··· 422 413 			DRM_ERROR("failed to initialize drm fb helper\n");
423 414 			return ret;
424 415 		}
416 + 
417 +		if (!list_empty(&temp_list))
418 +			list_replace(&temp_list, &fb_helper->kernel_fb_list);
425 419 
426 420 		ret = drm_fb_helper_single_add_all_connectors(fb_helper);
427 421 		if (ret < 0) {
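Two pieces of arithmetic in `exynos_drm_fbdev_update()` above are worth isolating: the framebuffer byte size (`width * height * bytes-per-pixel`) and the panning offset derived from `xoffset`/`yoffset` using `fb->pitch` (bytes per scanline). A small standalone sketch with illustrative values:

```c
#include <assert.h>

/* size = fb->width * fb->height * (fb->bits_per_pixel >> 3) */
static unsigned long fb_size(unsigned int width, unsigned int height,
			     unsigned int bits_per_pixel)
{
	return (unsigned long)width * height * (bits_per_pixel >> 3);
}

/* offset = xoffset * (bits_per_pixel >> 3) + yoffset * pitch, i.e. a
 * byte offset into the buffer for the visible origin. */
static unsigned long pan_offset(unsigned int xoffset, unsigned int yoffset,
				unsigned int pitch,
				unsigned int bits_per_pixel)
{
	return (unsigned long)xoffset * (bits_per_pixel >> 3) +
	       (unsigned long)yoffset * pitch;
}
```

The driver then adds this offset to both the kernel virtual address (`screen_base`) and the DMA address (`smem_start`), so both views point at the same panned origin.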
+53 -18
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 64 64 	unsigned int		fb_width;
65 65 	unsigned int		fb_height;
66 66 	unsigned int		bpp;
67 -	dma_addr_t		paddr;
67 +	dma_addr_t		dma_addr;
68 68 	void __iomem		*vaddr;
69 69 	unsigned int		buf_offsize;
70 70 	unsigned int		line_size;	/* bytes */
··· 124 124 	return 0;
125 125 }
126 126 
127 - static struct exynos_drm_display fimd_display = {
127 + static struct exynos_drm_display_ops fimd_display_ops = {
128 128 	.type = EXYNOS_DISPLAY_TYPE_LCD,
129 129 	.is_connected = fimd_display_is_connected,
130 130 	.get_timing = fimd_get_timing,
··· 177 177 	writel(val, ctx->regs + VIDCON0);
178 178 }
179 179 
180 + static void fimd_disable(struct device *dev)
181 + {
182 +	struct fimd_context *ctx = get_fimd_context(dev);
183 +	struct exynos_drm_subdrv *subdrv = &ctx->subdrv;
184 +	struct drm_device *drm_dev = subdrv->drm_dev;
185 +	struct exynos_drm_manager *manager = &subdrv->manager;
186 +	u32 val;
187 + 
188 +	DRM_DEBUG_KMS("%s\n", __FILE__);
189 + 
190 +	/* fimd dma off */
191 +	val = readl(ctx->regs + VIDCON0);
192 +	val &= ~(VIDCON0_ENVID | VIDCON0_ENVID_F);
193 +	writel(val, ctx->regs + VIDCON0);
194 + 
195 +	/*
196 +	 * if vblank is enabled status with dma off then
197 +	 * it disables vsync interrupt.
198 +	 */
199 +	if (drm_dev->vblank_enabled[manager->pipe] &&
200 +	    atomic_read(&drm_dev->vblank_refcount[manager->pipe])) {
201 +		drm_vblank_put(drm_dev, manager->pipe);
202 + 
203 +		/*
204 +		 * if vblank_disable_allowed is 0 then disable
205 +		 * vsync interrupt right now else the vsync interrupt
206 +		 * would be disabled by drm timer once a current process
207 +		 * gives up ownershop of vblank event.
208 +		 */
209 +		if (!drm_dev->vblank_disable_allowed)
210 +			drm_vblank_off(drm_dev, manager->pipe);
211 +	}
212 + }
213 + 
180 214 static int fimd_enable_vblank(struct device *dev)
181 215 {
182 216 	struct fimd_context *ctx = get_fimd_context(dev);
··· 254 220 
255 221 static struct exynos_drm_manager_ops fimd_manager_ops = {
256 222 	.commit = fimd_commit,
223 +	.disable = fimd_disable,
257 224 	.enable_vblank = fimd_enable_vblank,
258 225 	.disable_vblank = fimd_disable_vblank,
259 226 };
··· 286 251 	win_data->ovl_height = overlay->crtc_height;
287 252 	win_data->fb_width = overlay->fb_width;
288 253 	win_data->fb_height = overlay->fb_height;
289 -	win_data->paddr = overlay->paddr + offset;
254 +	win_data->dma_addr = overlay->dma_addr + offset;
290 255 	win_data->vaddr = overlay->vaddr + offset;
291 256 	win_data->bpp = overlay->bpp;
292 257 	win_data->buf_offsize = (overlay->fb_width - overlay->crtc_width) *
··· 298 263 	DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
299 264 			win_data->ovl_width, win_data->ovl_height);
300 265 	DRM_DEBUG_KMS("paddr = 0x%lx, vaddr = 0x%lx\n",
301 266 			(unsigned long)win_data->dma_addr,
302 267 			(unsigned long)win_data->vaddr);
303 268 	DRM_DEBUG_KMS("fb_width = %d, crtc_width = %d\n",
304 269 			overlay->fb_width, overlay->crtc_width);
··· 411 376 	writel(val, ctx->regs + SHADOWCON);
412 377 
413 378 	/* buffer start address */
414 -	val = win_data->paddr;
379 +	val = (unsigned long)win_data->dma_addr;
415 380 	writel(val, ctx->regs + VIDWx_BUF_START(win, 0));
416 381 
417 382 	/* buffer end address */
418 383 	size = win_data->fb_width * win_data->ovl_height * (win_data->bpp >> 3);
419 -	val = win_data->paddr + size;
384 +	val = (unsigned long)(win_data->dma_addr + size);
420 385 	writel(val, ctx->regs + VIDWx_BUF_END(win, 0));
421 386 
422 387 	DRM_DEBUG_KMS("start addr = 0x%lx, end addr = 0x%lx, size = 0x%lx\n",
423 388 			(unsigned long)win_data->dma_addr, val, size);
424 389 	DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
425 390 			win_data->ovl_width, win_data->ovl_height);
426 391 
··· 482 447 static void fimd_win_disable(struct device *dev)
483 448 {
484 449 	struct fimd_context *ctx = get_fimd_context(dev);
485 -	struct fimd_win_data *win_data;
486 450 	int win = ctx->default_win;
487 451 	u32 val;
488 452 
··· 489 455 
490 456 	if (win < 0 || win > WINDOWS_NR)
491 457 		return;
492 - 
493 -	win_data = &ctx->win_data[win];
494 458 
495 459 	/* protect windows */
496 460 	val = readl(ctx->regs + SHADOWCON);
··· 560 528 	/* VSYNC interrupt */
561 529 	writel(VIDINTCON1_INT_FRAME, ctx->regs + VIDINTCON1);
562 530 
531 +	/*
532 +	 * in case that vblank_disable_allowed is 1, it could induce
533 +	 * the problem that manager->pipe could be -1 because with
534 +	 * disable callback, vsync interrupt isn't disabled and at this moment,
535 +	 * vsync interrupt could occur. the vsync interrupt would be disabled
536 +	 * by timer handler later.
537 +	 */
538 +	if (manager->pipe == -1)
539 +		return IRQ_HANDLED;
540 + 
563 541 	drm_handle_vblank(drm_dev, manager->pipe);
564 542 	fimd_finish_pageflip(drm_dev, manager->pipe);
565 543 
··· 589 547 	 * drm framework supports only one irq handler.
590 548 	 */
591 549 	drm_dev->irq_enabled = 1;
592 - 
593 -	/*
594 -	 * with vblank_disable_allowed = 1, vblank interrupt will be disabled
595 -	 * by drm timer once a current process gives up ownership of
596 -	 * vblank event.(drm_vblank_put function was called)
597 -	 */
598 -	drm_dev->vblank_disable_allowed = 1;
599 550 
600 551 	return 0;
601 552 }
··· 766 731 	subdrv->manager.pipe = -1;
767 732 	subdrv->manager.ops = &fimd_manager_ops;
768 733 	subdrv->manager.overlay_ops = &fimd_overlay_ops;
769 -	subdrv->manager.display = &fimd_display;
734 +	subdrv->manager.display_ops = &fimd_display_ops;
770 735 	subdrv->manager.dev = dev;
771 736 
772 737 	platform_set_drvdata(pdev, ctx);
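The FIMD window programming above writes a buffer-end register as the start DMA address plus `fb_width * ovl_height * bytes-per-pixel`. That address math can be sketched on its own (the register writes are replaced by a plain return value; all inputs are illustrative):

```c
#include <assert.h>

/* Mirrors the VIDWx_BUF_END computation: end = dma_addr + size, where
 * size = fb_width * ovl_height * (bpp >> 3). In the driver, fb_width is
 * the full scanline width so the window DMA covers whole lines. */
static unsigned long buf_end(unsigned long dma_addr, unsigned int fb_width,
			     unsigned int ovl_height, unsigned int bpp)
{
	unsigned long size =
		(unsigned long)fb_width * ovl_height * (bpp >> 3);

	return dma_addr + size;
}
```

Note the start/end pair is why `win_data->dma_addr` already includes the overlay's byte offset into the framebuffer: the hardware scans linearly between the two addresses.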
+52 -37
drivers/gpu/drm/exynos/exynos_drm_gem.c
··· 62 62 	return (unsigned int)obj->map_list.hash.key << PAGE_SHIFT;
63 63 }
64 64 
65 - struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv,
66 -		struct drm_device *dev, unsigned int size,
67 -		unsigned int *handle)
65 + static struct exynos_drm_gem_obj
66 + 		*exynos_drm_gem_init(struct drm_device *drm_dev,
67 +			struct drm_file *file_priv, unsigned int *handle,
68 +			unsigned int size)
68 69 {
69 70 	struct exynos_drm_gem_obj *exynos_gem_obj;
70 -	struct exynos_drm_buf_entry *entry;
71 71 	struct drm_gem_object *obj;
72 72 	int ret;
73 - 
74 -	DRM_DEBUG_KMS("%s\n", __FILE__);
75 - 
76 -	size = roundup(size, PAGE_SIZE);
77 73 
78 74 	exynos_gem_obj = kzalloc(sizeof(*exynos_gem_obj), GFP_KERNEL);
79 75 	if (!exynos_gem_obj) {
··· 77 81 		return ERR_PTR(-ENOMEM);
78 82 	}
79 83 
80 -	/* allocate the new buffer object and memory region. */
81 -	entry = exynos_drm_buf_create(dev, size);
82 -	if (!entry) {
83 -		kfree(exynos_gem_obj);
84 -		return ERR_PTR(-ENOMEM);
85 -	}
86 - 
87 -	exynos_gem_obj->entry = entry;
88 - 
89 84 	obj = &exynos_gem_obj->base;
90 85 
91 -	ret = drm_gem_object_init(dev, obj, size);
86 +	ret = drm_gem_object_init(drm_dev, obj, size);
92 87 	if (ret < 0) {
93 -		DRM_ERROR("failed to initailize gem object.\n");
94 -		goto err_obj_init;
88 +		DRM_ERROR("failed to initialize gem object.\n");
89 +		ret = -EINVAL;
90 +		goto err_object_init;
95 91 	}
96 92 
97 93 	DRM_DEBUG_KMS("created file object = 0x%x\n", (unsigned int)obj->filp);
··· 115 127 err_create_mmap_offset:
116 128 	drm_gem_object_release(obj);
117 129 
118 - err_obj_init:
119 -	exynos_drm_buf_destroy(dev, exynos_gem_obj->entry);
120 - 
130 + err_object_init:
121 131 	kfree(exynos_gem_obj);
122 132 
123 133 	return ERR_PTR(ret);
124 134 }
125 135 
136 + struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev,
137 +				struct drm_file *file_priv,
138 +				unsigned int *handle, unsigned long size)
139 + {
140 + 
141 +	struct exynos_drm_gem_obj *exynos_gem_obj = NULL;
142 +	struct exynos_drm_gem_buf *buffer;
143 + 
144 +	size = roundup(size, PAGE_SIZE);
145 + 
146 +	DRM_DEBUG_KMS("%s: size = 0x%lx\n", __FILE__, size);
147 + 
148 +	buffer = exynos_drm_buf_create(dev, size);
149 +	if (IS_ERR(buffer)) {
150 +		return ERR_CAST(buffer);
151 +	}
152 + 
153 +	exynos_gem_obj = exynos_drm_gem_init(dev, file_priv, handle, size);
154 +	if (IS_ERR(exynos_gem_obj)) {
155 +		exynos_drm_buf_destroy(dev, buffer);
156 +		return exynos_gem_obj;
157 +	}
158 + 
159 +	exynos_gem_obj->buffer = buffer;
160 + 
161 +	return exynos_gem_obj;
162 + }
163 + 
126 164 int exynos_drm_gem_create_ioctl(struct drm_device *dev, void *data,
127 -				struct drm_file *file_priv)
165 +					struct drm_file *file_priv)
128 166 {
129 167 	struct drm_exynos_gem_create *args = data;
130 -	struct exynos_drm_gem_obj *exynos_gem_obj;
168 +	struct exynos_drm_gem_obj *exynos_gem_obj = NULL;
131 169 
132 -	DRM_DEBUG_KMS("%s : size = 0x%x\n", __FILE__, args->size);
170 +	DRM_DEBUG_KMS("%s\n", __FILE__);
133 171 
134 -	exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size,
135 -			&args->handle);
172 +	exynos_gem_obj = exynos_drm_gem_create(dev, file_priv,
173 +			&args->handle, args->size);
136 174 	if (IS_ERR(exynos_gem_obj))
137 175 		return PTR_ERR(exynos_gem_obj);
138 176 
··· 189 175 {
190 176 	struct drm_gem_object *obj = filp->private_data;
191 177 	struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
192 -	struct exynos_drm_buf_entry *entry;
178 +	struct exynos_drm_gem_buf *buffer;
193 179 	unsigned long pfn, vm_size;
194 180 
195 181 	DRM_DEBUG_KMS("%s\n", __FILE__);
··· 201 187 
202 188 	vm_size = vma->vm_end - vma->vm_start;
203 189 	/*
204 -	 * a entry contains information to physically continuous memory
190 +	 * a buffer contains information to physically continuous memory
205 191 	 * allocated by user request or at framebuffer creation.
206 192 	 */
207 -	entry = exynos_gem_obj->entry;
193 +	buffer = exynos_gem_obj->buffer;
208 194 
209 195 	/* check if user-requested size is valid. */
210 -	if (vm_size > entry->size)
196 +	if (vm_size > buffer->size)
211 197 		return -EINVAL;
212 198 
213 199 	/*
214 200 	 * get page frame number to physical memory to be mapped
215 201 	 * to user space.
216 202 	 */
217 -	pfn = exynos_gem_obj->entry->paddr >> PAGE_SHIFT;
203 +	pfn = ((unsigned long)exynos_gem_obj->buffer->dma_addr) >> PAGE_SHIFT;
218 204 
219 205 	DRM_DEBUG_KMS("pfn = 0x%lx\n", pfn);
··· 295 281 
296 282 	exynos_gem_obj = to_exynos_gem_obj(gem_obj);
297 283 
298 -	exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->entry);
284 +	exynos_drm_buf_destroy(gem_obj->dev, exynos_gem_obj->buffer);
299 285 
300 286 	kfree(exynos_gem_obj);
301 287 }
··· 316 302 	args->pitch = args->width * args->bpp >> 3;
317 303 	args->size = args->pitch * args->height;
318 304 
319 -	exynos_gem_obj = exynos_drm_gem_create(file_priv, dev, args->size,
320 -			&args->handle);
305 +	exynos_gem_obj = exynos_drm_gem_create(dev, file_priv, &args->handle,
306 +			args->size);
321 307 	if (IS_ERR(exynos_gem_obj))
322 308 		return PTR_ERR(exynos_gem_obj);
323 309 
··· 374 360 
375 361 	mutex_lock(&dev->struct_mutex);
376 362 
377 -	pfn = (exynos_gem_obj->entry->paddr >> PAGE_SHIFT) + page_offset;
363 +	pfn = (((unsigned long)exynos_gem_obj->buffer->dma_addr) >>
364 +			PAGE_SHIFT) + page_offset;
378 365 
379 366 	ret = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
380 367 
+22 -6
drivers/gpu/drm/exynos/exynos_drm_gem.h
··· 30 30 struct exynos_drm_gem_obj, base) 31 31 32 32 /* 33 + * exynos drm gem buffer structure. 34 + * 35 + * @kvaddr: kernel virtual address to allocated memory region. 36 + * @dma_addr: bus address(accessed by dma) to allocated memory region. 37 + * - this address could be physical address without IOMMU and 38 + * device address with IOMMU. 39 + * @size: size of allocated memory region. 40 + */ 41 + struct exynos_drm_gem_buf { 42 + void __iomem *kvaddr; 43 + dma_addr_t dma_addr; 44 + unsigned long size; 45 + }; 46 + 47 + /* 33 48 * exynos drm buffer structure. 34 49 * 35 50 * @base: a gem object. 36 51 * - a new handle to this gem object would be created 37 52 * by drm_gem_handle_create(). 38 - * @entry: pointer to exynos drm buffer entry object. 39 - * - containing the information to physically 53 + * @buffer: a pointer to exynos_drm_gem_buffer object. 54 + * - contain the information to memory region allocated 55 + * by user request or at framebuffer creation. 40 56 * continuous memory region allocated by user request 41 57 * or at framebuffer creation. 42 58 * ··· 61 45 */ 62 46 struct exynos_drm_gem_obj { 63 47 struct drm_gem_object base; 64 - struct exynos_drm_buf_entry *entry; 48 + struct exynos_drm_gem_buf *buffer; 65 49 }; 66 50 67 51 /* create a new buffer and get a new gem handle. */ 68 - struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_file *file_priv, 69 - struct drm_device *dev, unsigned int size, 70 - unsigned int *handle); 52 + struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev, 53 + struct drm_file *file_priv, 54 + unsigned int *handle, unsigned long size); 71 55 72 56 /* 73 57 * request gem object creation and buffer allocation as the size
+51 -6
drivers/gpu/drm/i915/i915_debugfs.c
··· 636 636 struct drm_device *dev = node->minor->dev; 637 637 drm_i915_private_t *dev_priv = dev->dev_private; 638 638 struct intel_ring_buffer *ring; 639 + int ret; 639 640 640 641 ring = &dev_priv->ring[(uintptr_t)node->info_ent->data]; 641 642 if (ring->size == 0) 642 643 return 0; 644 + 645 + ret = mutex_lock_interruptible(&dev->struct_mutex); 646 + if (ret) 647 + return ret; 643 648 644 649 seq_printf(m, "Ring %s:\n", ring->name); 645 650 seq_printf(m, " Head : %08x\n", I915_READ_HEAD(ring) & HEAD_ADDR); ··· 658 653 } 659 654 seq_printf(m, " Control : %08x\n", I915_READ_CTL(ring)); 660 655 seq_printf(m, " Start : %08x\n", I915_READ_START(ring)); 656 + 657 + mutex_unlock(&dev->struct_mutex); 661 658 662 659 return 0; 663 660 } ··· 849 842 struct drm_info_node *node = (struct drm_info_node *) m->private; 850 843 struct drm_device *dev = node->minor->dev; 851 844 drm_i915_private_t *dev_priv = dev->dev_private; 852 - u16 crstanddelay = I915_READ16(CRSTANDVID); 845 + u16 crstanddelay; 846 + int ret; 847 + 848 + ret = mutex_lock_interruptible(&dev->struct_mutex); 849 + if (ret) 850 + return ret; 851 + 852 + crstanddelay = I915_READ16(CRSTANDVID); 853 + 854 + mutex_unlock(&dev->struct_mutex); 853 855 854 856 seq_printf(m, "w/ctx: %d, w/o ctx: %d\n", (crstanddelay >> 8) & 0x3f, (crstanddelay & 0x3f)); 855 857 ··· 956 940 struct drm_device *dev = node->minor->dev; 957 941 drm_i915_private_t *dev_priv = dev->dev_private; 958 942 u32 delayfreq; 959 - int i; 943 + int ret, i; 944 + 945 + ret = mutex_lock_interruptible(&dev->struct_mutex); 946 + if (ret) 947 + return ret; 960 948 961 949 for (i = 0; i < 16; i++) { 962 950 delayfreq = I915_READ(PXVFREQ_BASE + i * 4); 963 951 seq_printf(m, "P%02dVIDFREQ: 0x%08x (VID: %d)\n", i, delayfreq, 964 952 (delayfreq & PXVFREQ_PX_MASK) >> PXVFREQ_PX_SHIFT); 965 953 } 954 + 955 + mutex_unlock(&dev->struct_mutex); 966 956 967 957 return 0; 968 958 } ··· 984 962 struct drm_device *dev = node->minor->dev; 985 963 drm_i915_private_t 
*dev_priv = dev->dev_private; 986 964 u32 inttoext; 987 - int i; 965 + int ret, i; 966 + 967 + ret = mutex_lock_interruptible(&dev->struct_mutex); 968 + if (ret) 969 + return ret; 988 970 989 971 for (i = 1; i <= 32; i++) { 990 972 inttoext = I915_READ(INTTOEXT_BASE_ILK + i * 4); 991 973 seq_printf(m, "INTTOEXT%02d: 0x%08x\n", i, inttoext); 992 974 } 975 + 976 + mutex_unlock(&dev->struct_mutex); 993 977 994 978 return 0; 995 979 } ··· 1005 977 struct drm_info_node *node = (struct drm_info_node *) m->private; 1006 978 struct drm_device *dev = node->minor->dev; 1007 979 drm_i915_private_t *dev_priv = dev->dev_private; 1008 - u32 rgvmodectl = I915_READ(MEMMODECTL); 1009 - u32 rstdbyctl = I915_READ(RSTDBYCTL); 1010 - u16 crstandvid = I915_READ16(CRSTANDVID); 980 + u32 rgvmodectl, rstdbyctl; 981 + u16 crstandvid; 982 + int ret; 983 + 984 + ret = mutex_lock_interruptible(&dev->struct_mutex); 985 + if (ret) 986 + return ret; 987 + 988 + rgvmodectl = I915_READ(MEMMODECTL); 989 + rstdbyctl = I915_READ(RSTDBYCTL); 990 + crstandvid = I915_READ16(CRSTANDVID); 991 + 992 + mutex_unlock(&dev->struct_mutex); 1011 993 1012 994 seq_printf(m, "HD boost: %s\n", (rgvmodectl & MEMMODE_BOOST_EN) ? 1013 995 "yes" : "no"); ··· 1205 1167 struct drm_info_node *node = (struct drm_info_node *) m->private; 1206 1168 struct drm_device *dev = node->minor->dev; 1207 1169 drm_i915_private_t *dev_priv = dev->dev_private; 1170 + int ret; 1171 + 1172 + ret = mutex_lock_interruptible(&dev->struct_mutex); 1173 + if (ret) 1174 + return ret; 1208 1175 1209 1176 seq_printf(m, "GFXEC: %ld\n", (unsigned long)I915_READ(0x112f4)); 1177 + 1178 + mutex_unlock(&dev->struct_mutex); 1210 1179 1211 1180 return 0; 1212 1181 }
+3 -3
drivers/gpu/drm/i915/i915_drv.c
··· 68 68 MODULE_PARM_DESC(i915_enable_rc6, 69 69 "Enable power-saving render C-state 6 (default: true)"); 70 70 71 - unsigned int i915_enable_fbc __read_mostly = -1; 71 + int i915_enable_fbc __read_mostly = -1; 72 72 module_param_named(i915_enable_fbc, i915_enable_fbc, int, 0600); 73 73 MODULE_PARM_DESC(i915_enable_fbc, 74 74 "Enable frame buffer compression for power savings " ··· 80 80 "Use panel (LVDS/eDP) downclocking for power savings " 81 81 "(default: false)"); 82 82 83 - unsigned int i915_panel_use_ssc __read_mostly = -1; 83 + int i915_panel_use_ssc __read_mostly = -1; 84 84 module_param_named(lvds_use_ssc, i915_panel_use_ssc, int, 0600); 85 85 MODULE_PARM_DESC(lvds_use_ssc, 86 86 "Use Spread Spectrum Clock with panels [LVDS/eDP] " ··· 107 107 extern int intel_agp_enabled; 108 108 109 109 #define INTEL_VGA_DEVICE(id, info) { \ 110 - .class = PCI_CLASS_DISPLAY_VGA << 8, \ 110 + .class = PCI_BASE_CLASS_DISPLAY << 16, \ 111 111 .class_mask = 0xff0000, \ 112 112 .vendor = 0x8086, \ 113 113 .device = id, \
+10 -9
drivers/gpu/drm/i915/i915_drv.h
··· 126 126 struct _drm_i915_sarea *sarea_priv; 127 127 }; 128 128 #define I915_FENCE_REG_NONE -1 129 + #define I915_MAX_NUM_FENCES 16 130 + /* 16 fences + sign bit for FENCE_REG_NONE */ 131 + #define I915_MAX_NUM_FENCE_BITS 5 129 132 130 133 struct drm_i915_fence_reg { 131 134 struct list_head lru_list; ··· 171 168 u32 instdone1; 172 169 u32 seqno; 173 170 u64 bbaddr; 174 - u64 fence[16]; 171 + u64 fence[I915_MAX_NUM_FENCES]; 175 172 struct timeval time; 176 173 struct drm_i915_error_object { 177 174 int page_count; ··· 185 182 u32 gtt_offset; 186 183 u32 read_domains; 187 184 u32 write_domain; 188 - s32 fence_reg:5; 185 + s32 fence_reg:I915_MAX_NUM_FENCE_BITS; 189 186 s32 pinned:2; 190 187 u32 tiling:2; 191 188 u32 dirty:1; ··· 378 375 struct notifier_block lid_notifier; 379 376 380 377 int crt_ddc_pin; 381 - struct drm_i915_fence_reg fence_regs[16]; /* assume 965 */ 378 + struct drm_i915_fence_reg fence_regs[I915_MAX_NUM_FENCES]; /* assume 965 */ 382 379 int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */ 383 380 int num_fence_regs; /* 8 on pre-965, 16 otherwise */ 384 381 ··· 509 506 u8 saveAR[21]; 510 507 u8 saveDACMASK; 511 508 u8 saveCR[37]; 512 - uint64_t saveFENCE[16]; 509 + uint64_t saveFENCE[I915_MAX_NUM_FENCES]; 513 510 u32 saveCURACNTR; 514 511 u32 saveCURAPOS; 515 512 u32 saveCURABASE; ··· 780 777 * Fence register bits (if any) for this object. Will be set 781 778 * as needed when mapped into the GTT. 782 779 * Protected by dev->struct_mutex. 783 - * 784 - * Size: 4 bits for 16 fences + sign (for FENCE_REG_NONE) 785 780 */ 786 - signed int fence_reg:5; 781 + signed int fence_reg:I915_MAX_NUM_FENCE_BITS; 787 782 788 783 /** 789 784 * Advice: are the backing pages purgeable? 
··· 1000 999 extern unsigned int i915_powersave __read_mostly; 1001 1000 extern unsigned int i915_semaphores __read_mostly; 1002 1001 extern unsigned int i915_lvds_downclock __read_mostly; 1003 - extern unsigned int i915_panel_use_ssc __read_mostly; 1002 + extern int i915_panel_use_ssc __read_mostly; 1004 1003 extern int i915_vbt_sdvo_panel_type __read_mostly; 1005 1004 extern unsigned int i915_enable_rc6 __read_mostly; 1006 - extern unsigned int i915_enable_fbc __read_mostly; 1005 + extern int i915_enable_fbc __read_mostly; 1007 1006 extern bool i915_enable_hangcheck __read_mostly; 1008 1007 1009 1008 extern int i915_suspend(struct drm_device *dev, pm_message_t state);
+7 -5
drivers/gpu/drm/i915/i915_gem.c
··· 1745 1745 struct drm_i915_private *dev_priv = dev->dev_private; 1746 1746 int i; 1747 1747 1748 - for (i = 0; i < 16; i++) { 1748 + for (i = 0; i < dev_priv->num_fence_regs; i++) { 1749 1749 struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i]; 1750 1750 struct drm_i915_gem_object *obj = reg->obj; 1751 1751 ··· 3512 3512 * so emit a request to do so. 3513 3513 */ 3514 3514 request = kzalloc(sizeof(*request), GFP_KERNEL); 3515 - if (request) 3515 + if (request) { 3516 3516 ret = i915_add_request(obj->ring, NULL, request); 3517 - else 3517 + if (ret) 3518 + kfree(request); 3519 + } else 3518 3520 ret = -ENOMEM; 3519 3521 } 3520 3522 ··· 3615 3613 obj->base.write_domain = I915_GEM_DOMAIN_CPU; 3616 3614 obj->base.read_domains = I915_GEM_DOMAIN_CPU; 3617 3615 3618 - if (IS_GEN6(dev)) { 3616 + if (IS_GEN6(dev) || IS_GEN7(dev)) { 3619 3617 /* On Gen6, we can have the GPU use the LLC (the CPU 3620 3618 * cache) for about a 10% performance improvement 3621 3619 * compared to uncached. Graphics requests other than ··· 3879 3877 INIT_LIST_HEAD(&dev_priv->mm.gtt_list); 3880 3878 for (i = 0; i < I915_NUM_RINGS; i++) 3881 3879 init_ring_lists(&dev_priv->ring[i]); 3882 - for (i = 0; i < 16; i++) 3880 + for (i = 0; i < I915_MAX_NUM_FENCES; i++) 3883 3881 INIT_LIST_HEAD(&dev_priv->fence_regs[i].lru_list); 3884 3882 INIT_DELAYED_WORK(&dev_priv->mm.retire_work, 3885 3883 i915_gem_retire_work_handler);
+1
drivers/gpu/drm/i915/i915_irq.c
··· 824 824 825 825 /* Fences */ 826 826 switch (INTEL_INFO(dev)->gen) { 827 + case 7: 827 828 case 6: 828 829 for (i = 0; i < 16; i++) 829 830 error->fence[i] = I915_READ64(FENCE_REG_SANDYBRIDGE_0 + (i * 8));
+17 -4
drivers/gpu/drm/i915/i915_reg.h
··· 1553 1553 */ 1554 1554 #define PP_READY (1 << 30) 1555 1555 #define PP_SEQUENCE_NONE (0 << 28) 1556 - #define PP_SEQUENCE_ON (1 << 28) 1557 - #define PP_SEQUENCE_OFF (2 << 28) 1558 - #define PP_SEQUENCE_MASK 0x30000000 1556 + #define PP_SEQUENCE_POWER_UP (1 << 28) 1557 + #define PP_SEQUENCE_POWER_DOWN (2 << 28) 1558 + #define PP_SEQUENCE_MASK (3 << 28) 1559 + #define PP_SEQUENCE_SHIFT 28 1559 1560 #define PP_CYCLE_DELAY_ACTIVE (1 << 27) 1560 - #define PP_SEQUENCE_STATE_ON_IDLE (1 << 3) 1561 1561 #define PP_SEQUENCE_STATE_MASK 0x0000000f 1562 + #define PP_SEQUENCE_STATE_OFF_IDLE (0x0 << 0) 1563 + #define PP_SEQUENCE_STATE_OFF_S0_1 (0x1 << 0) 1564 + #define PP_SEQUENCE_STATE_OFF_S0_2 (0x2 << 0) 1565 + #define PP_SEQUENCE_STATE_OFF_S0_3 (0x3 << 0) 1566 + #define PP_SEQUENCE_STATE_ON_IDLE (0x8 << 0) 1567 + #define PP_SEQUENCE_STATE_ON_S1_0 (0x9 << 0) 1568 + #define PP_SEQUENCE_STATE_ON_S1_2 (0xa << 0) 1569 + #define PP_SEQUENCE_STATE_ON_S1_3 (0xb << 0) 1570 + #define PP_SEQUENCE_STATE_RESET (0xf << 0) 1562 1571 #define PP_CONTROL 0x61204 1563 1572 #define POWER_TARGET_ON (1 << 0) 1564 1573 #define PP_ON_DELAYS 0x61208 ··· 3452 3443 3453 3444 #define GT_FIFO_FREE_ENTRIES 0x120008 3454 3445 #define GT_FIFO_NUM_RESERVED_ENTRIES 20 3446 + 3447 + #define GEN6_UCGCTL2 0x9404 3448 + # define GEN6_RCPBUNIT_CLOCK_GATE_DISABLE (1 << 12) 3449 + # define GEN6_RCCUNIT_CLOCK_GATE_DISABLE (1 << 11) 3455 3450 3456 3451 #define GEN6_RPNSWREQ 0xA008 3457 3452 #define GEN6_TURBO_DISABLE (1<<31)
+2
drivers/gpu/drm/i915/i915_suspend.c
··· 370 370 371 371 /* Fences */ 372 372 switch (INTEL_INFO(dev)->gen) { 373 + case 7: 373 374 case 6: 374 375 for (i = 0; i < 16; i++) 375 376 dev_priv->saveFENCE[i] = I915_READ64(FENCE_REG_SANDYBRIDGE_0 + (i * 8)); ··· 405 404 406 405 /* Fences */ 407 406 switch (INTEL_INFO(dev)->gen) { 407 + case 7: 408 408 case 6: 409 409 for (i = 0; i < 16; i++) 410 410 I915_WRITE64(FENCE_REG_SANDYBRIDGE_0 + (i * 8), dev_priv->saveFENCE[i]);
+24 -9
drivers/gpu/drm/i915/intel_display.c
··· 2933 2933 2934 2934 /* For PCH DP, enable TRANS_DP_CTL */ 2935 2935 if (HAS_PCH_CPT(dev) && 2936 - intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) { 2936 + (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT) || 2937 + intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))) { 2937 2938 u32 bpc = (I915_READ(PIPECONF(pipe)) & PIPE_BPC_MASK) >> 5; 2938 2939 reg = TRANS_DP_CTL(pipe); 2939 2940 temp = I915_READ(reg); ··· 4712 4711 lvds_bpc = 6; 4713 4712 4714 4713 if (lvds_bpc < display_bpc) { 4715 - DRM_DEBUG_DRIVER("clamping display bpc (was %d) to LVDS (%d)\n", display_bpc, lvds_bpc); 4714 + DRM_DEBUG_KMS("clamping display bpc (was %d) to LVDS (%d)\n", display_bpc, lvds_bpc); 4716 4715 display_bpc = lvds_bpc; 4717 4716 } 4718 4717 continue; ··· 4723 4722 unsigned int edp_bpc = dev_priv->edp.bpp / 3; 4724 4723 4725 4724 if (edp_bpc < display_bpc) { 4726 - DRM_DEBUG_DRIVER("clamping display bpc (was %d) to eDP (%d)\n", display_bpc, edp_bpc); 4725 + DRM_DEBUG_KMS("clamping display bpc (was %d) to eDP (%d)\n", display_bpc, edp_bpc); 4727 4726 display_bpc = edp_bpc; 4728 4727 } 4729 4728 continue; ··· 4738 4737 /* Don't use an invalid EDID bpc value */ 4739 4738 if (connector->display_info.bpc && 4740 4739 connector->display_info.bpc < display_bpc) { 4741 - DRM_DEBUG_DRIVER("clamping display bpc (was %d) to EDID reported max of %d\n", display_bpc, connector->display_info.bpc); 4740 + DRM_DEBUG_KMS("clamping display bpc (was %d) to EDID reported max of %d\n", display_bpc, connector->display_info.bpc); 4742 4741 display_bpc = connector->display_info.bpc; 4743 4742 } 4744 4743 } ··· 4749 4748 */ 4750 4749 if (intel_encoder->type == INTEL_OUTPUT_HDMI) { 4751 4750 if (display_bpc > 8 && display_bpc < 12) { 4752 - DRM_DEBUG_DRIVER("forcing bpc to 12 for HDMI\n"); 4751 + DRM_DEBUG_KMS("forcing bpc to 12 for HDMI\n"); 4753 4752 display_bpc = 12; 4754 4753 } else { 4755 - DRM_DEBUG_DRIVER("forcing bpc to 8 for HDMI\n"); 4754 + DRM_DEBUG_KMS("forcing bpc to 8 for HDMI\n"); 4756 4755 
display_bpc = 8; 4757 4756 } 4758 4757 } ··· 4790 4789 4791 4790 display_bpc = min(display_bpc, bpc); 4792 4791 4793 - DRM_DEBUG_DRIVER("setting pipe bpc to %d (max display bpc %d)\n", 4794 - bpc, display_bpc); 4792 + DRM_DEBUG_KMS("setting pipe bpc to %d (max display bpc %d)\n", 4793 + bpc, display_bpc); 4795 4794 4796 4795 *pipe_bpp = display_bpc * 3; 4797 4796 ··· 5672 5671 pipeconf &= ~PIPECONF_DITHER_TYPE_MASK; 5673 5672 if ((is_lvds && dev_priv->lvds_dither) || dither) { 5674 5673 pipeconf |= PIPECONF_DITHER_EN; 5675 - pipeconf |= PIPECONF_DITHER_TYPE_ST1; 5674 + pipeconf |= PIPECONF_DITHER_TYPE_SP; 5676 5675 } 5677 5676 if (is_dp || intel_encoder_is_pch_edp(&has_edp_encoder->base)) { 5678 5677 intel_dp_set_m_n(crtc, mode, adjusted_mode); ··· 8148 8147 I915_WRITE(WM3_LP_ILK, 0); 8149 8148 I915_WRITE(WM2_LP_ILK, 0); 8150 8149 I915_WRITE(WM1_LP_ILK, 0); 8150 + 8151 + /* According to the BSpec vol1g, bit 12 (RCPBUNIT) clock 8152 + * gating disable must be set. Failure to set it results in 8153 + * flickering pixels due to Z write ordering failures after 8154 + * some amount of runtime in the Mesa "fire" demo, and Unigine 8155 + * Sanctuary and Tropics, and apparently anything else with 8156 + * alpha test or pixel discard. 8157 + * 8158 + * According to the spec, bit 11 (RCCUNIT) must also be set, 8159 + * but we didn't debug actual testcases to find it out. 8160 + */ 8161 + I915_WRITE(GEN6_UCGCTL2, 8162 + GEN6_RCPBUNIT_CLOCK_GATE_DISABLE | 8163 + GEN6_RCCUNIT_CLOCK_GATE_DISABLE); 8151 8164 8152 8165 /* 8153 8166 * According to the spec the following bits should be
+237 -174
drivers/gpu/drm/i915/intel_dp.c
··· 59 59 struct i2c_algo_dp_aux_data algo; 60 60 bool is_pch_edp; 61 61 uint8_t train_set[4]; 62 - uint8_t link_status[DP_LINK_STATUS_SIZE]; 63 62 int panel_power_up_delay; 64 63 int panel_power_down_delay; 65 64 int panel_power_cycle_delay; ··· 67 68 struct drm_display_mode *panel_fixed_mode; /* for eDP */ 68 69 struct delayed_work panel_vdd_work; 69 70 bool want_panel_vdd; 70 - unsigned long panel_off_jiffies; 71 71 }; 72 72 73 73 /** ··· 155 157 static int 156 158 intel_dp_max_lane_count(struct intel_dp *intel_dp) 157 159 { 158 - int max_lane_count = 4; 159 - 160 - if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11) { 161 - max_lane_count = intel_dp->dpcd[DP_MAX_LANE_COUNT] & 0x1f; 162 - switch (max_lane_count) { 163 - case 1: case 2: case 4: 164 - break; 165 - default: 166 - max_lane_count = 4; 167 - } 160 + int max_lane_count = intel_dp->dpcd[DP_MAX_LANE_COUNT] & 0x1f; 161 + switch (max_lane_count) { 162 + case 1: case 2: case 4: 163 + break; 164 + default: 165 + max_lane_count = 4; 168 166 } 169 167 return max_lane_count; 170 168 } ··· 762 768 continue; 763 769 764 770 intel_dp = enc_to_intel_dp(encoder); 765 - if (intel_dp->base.type == INTEL_OUTPUT_DISPLAYPORT) { 771 + if (intel_dp->base.type == INTEL_OUTPUT_DISPLAYPORT || 772 + intel_dp->base.type == INTEL_OUTPUT_EDP) 773 + { 766 774 lane_count = intel_dp->lane_count; 767 - break; 768 - } else if (is_edp(intel_dp)) { 769 - lane_count = dev_priv->edp.lanes; 770 775 break; 771 776 } 772 777 } ··· 803 810 struct drm_display_mode *adjusted_mode) 804 811 { 805 812 struct drm_device *dev = encoder->dev; 813 + struct drm_i915_private *dev_priv = dev->dev_private; 806 814 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 807 815 struct drm_crtc *crtc = intel_dp->base.base.crtc; 808 816 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); ··· 816 822 ironlake_edp_pll_off(encoder); 817 823 } 818 824 819 - intel_dp->DP = DP_VOLTAGE_0_4 | DP_PRE_EMPHASIS_0; 820 - intel_dp->DP |= intel_dp->color_range; 825 + /* 826 + * There 
are three kinds of DP registers: 827 + * 828 + * IBX PCH 829 + * CPU 830 + * CPT PCH 831 + * 832 + * IBX PCH and CPU are the same for almost everything, 833 + * except that the CPU DP PLL is configured in this 834 + * register 835 + * 836 + * CPT PCH is quite different, having many bits moved 837 + * to the TRANS_DP_CTL register instead. That 838 + * configuration happens (oddly) in ironlake_pch_enable 839 + */ 821 840 822 - if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC) 823 - intel_dp->DP |= DP_SYNC_HS_HIGH; 824 - if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC) 825 - intel_dp->DP |= DP_SYNC_VS_HIGH; 841 + /* Preserve the BIOS-computed detected bit. This is 842 + * supposed to be read-only. 843 + */ 844 + intel_dp->DP = I915_READ(intel_dp->output_reg) & DP_DETECTED; 845 + intel_dp->DP |= DP_VOLTAGE_0_4 | DP_PRE_EMPHASIS_0; 826 846 827 - if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 828 - intel_dp->DP |= DP_LINK_TRAIN_OFF_CPT; 829 - else 830 - intel_dp->DP |= DP_LINK_TRAIN_OFF; 847 + /* Handle DP bits in common between all three register formats */ 848 + 849 + intel_dp->DP |= DP_VOLTAGE_0_4 | DP_PRE_EMPHASIS_0; 831 850 832 851 switch (intel_dp->lane_count) { 833 852 case 1: ··· 859 852 intel_dp->DP |= DP_AUDIO_OUTPUT_ENABLE; 860 853 intel_write_eld(encoder, adjusted_mode); 861 854 } 862 - 863 855 memset(intel_dp->link_configuration, 0, DP_LINK_CONFIGURATION_SIZE); 864 856 intel_dp->link_configuration[0] = intel_dp->link_bw; 865 857 intel_dp->link_configuration[1] = intel_dp->lane_count; 866 858 intel_dp->link_configuration[8] = DP_SET_ANSI_8B10B; 867 - 868 859 /* 869 860 * Check for DPCD version > 1.1 and enhanced framing support 870 861 */ 871 862 if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11 && 872 863 (intel_dp->dpcd[DP_MAX_LANE_COUNT] & DP_ENHANCED_FRAME_CAP)) { 873 864 intel_dp->link_configuration[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN; 874 - intel_dp->DP |= DP_ENHANCED_FRAMING; 875 865 } 876 866 877 - /* CPT DP's pipe select is decided in TRANS_DP_CTL */ 878 - if 
(intel_crtc->pipe == 1 && !HAS_PCH_CPT(dev)) 879 - intel_dp->DP |= DP_PIPEB_SELECT; 867 + /* Split out the IBX/CPU vs CPT settings */ 880 868 881 - if (is_cpu_edp(intel_dp)) { 882 - /* don't miss out required setting for eDP */ 883 - intel_dp->DP |= DP_PLL_ENABLE; 884 - if (adjusted_mode->clock < 200000) 885 - intel_dp->DP |= DP_PLL_FREQ_160MHZ; 886 - else 887 - intel_dp->DP |= DP_PLL_FREQ_270MHZ; 869 + if (!HAS_PCH_CPT(dev) || is_cpu_edp(intel_dp)) { 870 + intel_dp->DP |= intel_dp->color_range; 871 + 872 + if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC) 873 + intel_dp->DP |= DP_SYNC_HS_HIGH; 874 + if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC) 875 + intel_dp->DP |= DP_SYNC_VS_HIGH; 876 + intel_dp->DP |= DP_LINK_TRAIN_OFF; 877 + 878 + if (intel_dp->link_configuration[1] & DP_LANE_COUNT_ENHANCED_FRAME_EN) 879 + intel_dp->DP |= DP_ENHANCED_FRAMING; 880 + 881 + if (intel_crtc->pipe == 1) 882 + intel_dp->DP |= DP_PIPEB_SELECT; 883 + 884 + if (is_cpu_edp(intel_dp)) { 885 + /* don't miss out required setting for eDP */ 886 + intel_dp->DP |= DP_PLL_ENABLE; 887 + if (adjusted_mode->clock < 200000) 888 + intel_dp->DP |= DP_PLL_FREQ_160MHZ; 889 + else 890 + intel_dp->DP |= DP_PLL_FREQ_270MHZ; 891 + } 892 + } else { 893 + intel_dp->DP |= DP_LINK_TRAIN_OFF_CPT; 888 894 } 895 + } 896 + 897 + #define IDLE_ON_MASK (PP_ON | 0 | PP_SEQUENCE_MASK | 0 | PP_SEQUENCE_STATE_MASK) 898 + #define IDLE_ON_VALUE (PP_ON | 0 | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_ON_IDLE) 899 + 900 + #define IDLE_OFF_MASK (PP_ON | 0 | PP_SEQUENCE_MASK | 0 | PP_SEQUENCE_STATE_MASK) 901 + #define IDLE_OFF_VALUE (0 | 0 | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_OFF_IDLE) 902 + 903 + #define IDLE_CYCLE_MASK (PP_ON | 0 | PP_SEQUENCE_MASK | PP_CYCLE_DELAY_ACTIVE | PP_SEQUENCE_STATE_MASK) 904 + #define IDLE_CYCLE_VALUE (0 | 0 | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_OFF_IDLE) 905 + 906 + static void ironlake_wait_panel_status(struct intel_dp *intel_dp, 907 + u32 mask, 908 + u32 value) 909 + { 910 + struct 
drm_device *dev = intel_dp->base.base.dev; 911 + struct drm_i915_private *dev_priv = dev->dev_private; 912 + 913 + DRM_DEBUG_KMS("mask %08x value %08x status %08x control %08x\n", 914 + mask, value, 915 + I915_READ(PCH_PP_STATUS), 916 + I915_READ(PCH_PP_CONTROL)); 917 + 918 + if (_wait_for((I915_READ(PCH_PP_STATUS) & mask) == value, 5000, 10)) { 919 + DRM_ERROR("Panel status timeout: status %08x control %08x\n", 920 + I915_READ(PCH_PP_STATUS), 921 + I915_READ(PCH_PP_CONTROL)); 922 + } 923 + } 924 + 925 + static void ironlake_wait_panel_on(struct intel_dp *intel_dp) 926 + { 927 + DRM_DEBUG_KMS("Wait for panel power on\n"); 928 + ironlake_wait_panel_status(intel_dp, IDLE_ON_MASK, IDLE_ON_VALUE); 889 929 } 890 930 891 931 static void ironlake_wait_panel_off(struct intel_dp *intel_dp) 892 932 { 893 - unsigned long off_time; 894 - unsigned long delay; 895 - 896 933 DRM_DEBUG_KMS("Wait for panel power off time\n"); 934 + ironlake_wait_panel_status(intel_dp, IDLE_OFF_MASK, IDLE_OFF_VALUE); 935 + } 897 936 898 - if (ironlake_edp_have_panel_power(intel_dp) || 899 - ironlake_edp_have_panel_vdd(intel_dp)) 900 - { 901 - DRM_DEBUG_KMS("Panel still on, no delay needed\n"); 902 - return; 903 - } 937 + static void ironlake_wait_panel_power_cycle(struct intel_dp *intel_dp) 938 + { 939 + DRM_DEBUG_KMS("Wait for panel power cycle\n"); 940 + ironlake_wait_panel_status(intel_dp, IDLE_CYCLE_MASK, IDLE_CYCLE_VALUE); 941 + } 904 942 905 - off_time = intel_dp->panel_off_jiffies + msecs_to_jiffies(intel_dp->panel_power_down_delay); 906 - if (time_after(jiffies, off_time)) { 907 - DRM_DEBUG_KMS("Time already passed"); 908 - return; 909 - } 910 - delay = jiffies_to_msecs(off_time - jiffies); 911 - if (delay > intel_dp->panel_power_down_delay) 912 - delay = intel_dp->panel_power_down_delay; 913 - DRM_DEBUG_KMS("Waiting an additional %ld ms\n", delay); 914 - msleep(delay); 943 + 944 + /* Read the current pp_control value, unlocking the register if it 945 + * is locked 946 + */ 947 + 948 + 
static u32 ironlake_get_pp_control(struct drm_i915_private *dev_priv) 949 + { 950 + u32 control = I915_READ(PCH_PP_CONTROL); 951 + 952 + control &= ~PANEL_UNLOCK_MASK; 953 + control |= PANEL_UNLOCK_REGS; 954 + return control; 915 955 } 916 956 917 957 static void ironlake_edp_panel_vdd_on(struct intel_dp *intel_dp) ··· 975 921 "eDP VDD already requested on\n"); 976 922 977 923 intel_dp->want_panel_vdd = true; 924 + 978 925 if (ironlake_edp_have_panel_vdd(intel_dp)) { 979 926 DRM_DEBUG_KMS("eDP VDD already on\n"); 980 927 return; 981 928 } 982 929 983 - ironlake_wait_panel_off(intel_dp); 984 - pp = I915_READ(PCH_PP_CONTROL); 985 - pp &= ~PANEL_UNLOCK_MASK; 986 - pp |= PANEL_UNLOCK_REGS; 930 + if (!ironlake_edp_have_panel_power(intel_dp)) 931 + ironlake_wait_panel_power_cycle(intel_dp); 932 + 933 + pp = ironlake_get_pp_control(dev_priv); 987 934 pp |= EDP_FORCE_VDD; 988 935 I915_WRITE(PCH_PP_CONTROL, pp); 989 936 POSTING_READ(PCH_PP_CONTROL); ··· 1007 952 u32 pp; 1008 953 1009 954 if (!intel_dp->want_panel_vdd && ironlake_edp_have_panel_vdd(intel_dp)) { 1010 - pp = I915_READ(PCH_PP_CONTROL); 1011 - pp &= ~PANEL_UNLOCK_MASK; 1012 - pp |= PANEL_UNLOCK_REGS; 955 + pp = ironlake_get_pp_control(dev_priv); 1013 956 pp &= ~EDP_FORCE_VDD; 1014 957 I915_WRITE(PCH_PP_CONTROL, pp); 1015 958 POSTING_READ(PCH_PP_CONTROL); ··· 1015 962 /* Make sure sequencer is idle before allowing subsequent activity */ 1016 963 DRM_DEBUG_KMS("PCH_PP_STATUS: 0x%08x PCH_PP_CONTROL: 0x%08x\n", 1017 964 I915_READ(PCH_PP_STATUS), I915_READ(PCH_PP_CONTROL)); 1018 - intel_dp->panel_off_jiffies = jiffies; 965 + 966 + msleep(intel_dp->panel_power_down_delay); 1019 967 } 1020 968 } 1021 969 ··· 1026 972 struct intel_dp, panel_vdd_work); 1027 973 struct drm_device *dev = intel_dp->base.base.dev; 1028 974 1029 - mutex_lock(&dev->struct_mutex); 975 + mutex_lock(&dev->mode_config.mutex); 1030 976 ironlake_panel_vdd_off_sync(intel_dp); 1031 - mutex_unlock(&dev->struct_mutex); 977 + 
mutex_unlock(&dev->mode_config.mutex); 1032 978 } 1033 979 1034 980 static void ironlake_edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync) ··· 1038 984 1039 985 DRM_DEBUG_KMS("Turn eDP VDD off %d\n", intel_dp->want_panel_vdd); 1040 986 WARN(!intel_dp->want_panel_vdd, "eDP VDD not forced on"); 1041 - 987 + 1042 988 intel_dp->want_panel_vdd = false; 1043 989 1044 990 if (sync) { ··· 1054 1000 } 1055 1001 } 1056 1002 1057 - /* Returns true if the panel was already on when called */ 1058 1003 static void ironlake_edp_panel_on(struct intel_dp *intel_dp) 1059 1004 { 1060 1005 struct drm_device *dev = intel_dp->base.base.dev; 1061 1006 struct drm_i915_private *dev_priv = dev->dev_private; 1062 - u32 pp, idle_on_mask = PP_ON | PP_SEQUENCE_STATE_ON_IDLE; 1007 + u32 pp; 1063 1008 1064 1009 if (!is_edp(intel_dp)) 1065 1010 return; 1066 - if (ironlake_edp_have_panel_power(intel_dp)) 1011 + 1012 + DRM_DEBUG_KMS("Turn eDP power on\n"); 1013 + 1014 + if (ironlake_edp_have_panel_power(intel_dp)) { 1015 + DRM_DEBUG_KMS("eDP power already on\n"); 1067 1016 return; 1017 + } 1068 1018 1069 - ironlake_wait_panel_off(intel_dp); 1070 - pp = I915_READ(PCH_PP_CONTROL); 1071 - pp &= ~PANEL_UNLOCK_MASK; 1072 - pp |= PANEL_UNLOCK_REGS; 1019 + ironlake_wait_panel_power_cycle(intel_dp); 1073 1020 1021 + pp = ironlake_get_pp_control(dev_priv); 1074 1022 if (IS_GEN5(dev)) { 1075 1023 /* ILK workaround: disable reset around power sequence */ 1076 1024 pp &= ~PANEL_POWER_RESET; ··· 1081 1025 } 1082 1026 1083 1027 pp |= POWER_TARGET_ON; 1028 + if (!IS_GEN5(dev)) 1029 + pp |= PANEL_POWER_RESET; 1030 + 1084 1031 I915_WRITE(PCH_PP_CONTROL, pp); 1085 1032 POSTING_READ(PCH_PP_CONTROL); 1086 1033 1087 - if (wait_for((I915_READ(PCH_PP_STATUS) & idle_on_mask) == idle_on_mask, 1088 - 5000)) 1089 - DRM_ERROR("panel on wait timed out: 0x%08x\n", 1090 - I915_READ(PCH_PP_STATUS)); 1034 + ironlake_wait_panel_on(intel_dp); 1091 1035 1092 1036 if (IS_GEN5(dev)) { 1093 1037 pp |= PANEL_POWER_RESET; /* restore 
panel reset bit */ ··· 1096 1040 } 1097 1041 } 1098 1042 1099 - static void ironlake_edp_panel_off(struct drm_encoder *encoder) 1043 + static void ironlake_edp_panel_off(struct intel_dp *intel_dp) 1100 1044 { 1101 - struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 1102 - struct drm_device *dev = encoder->dev; 1045 + struct drm_device *dev = intel_dp->base.base.dev; 1103 1046 struct drm_i915_private *dev_priv = dev->dev_private; 1104 - u32 pp, idle_off_mask = PP_ON | PP_SEQUENCE_MASK | 1105 - PP_CYCLE_DELAY_ACTIVE | PP_SEQUENCE_STATE_MASK; 1047 + u32 pp; 1106 1048 1107 1049 if (!is_edp(intel_dp)) 1108 1050 return; 1109 - pp = I915_READ(PCH_PP_CONTROL); 1110 - pp &= ~PANEL_UNLOCK_MASK; 1111 - pp |= PANEL_UNLOCK_REGS; 1112 1051 1113 - if (IS_GEN5(dev)) { 1114 - /* ILK workaround: disable reset around power sequence */ 1115 - pp &= ~PANEL_POWER_RESET; 1116 - I915_WRITE(PCH_PP_CONTROL, pp); 1117 - POSTING_READ(PCH_PP_CONTROL); 1118 - } 1052 + DRM_DEBUG_KMS("Turn eDP power off\n"); 1119 1053 1120 - intel_dp->panel_off_jiffies = jiffies; 1054 + WARN(intel_dp->want_panel_vdd, "Cannot turn power off while VDD is on\n"); 1121 1055 1122 - if (IS_GEN5(dev)) { 1123 - pp &= ~POWER_TARGET_ON; 1124 - I915_WRITE(PCH_PP_CONTROL, pp); 1125 - POSTING_READ(PCH_PP_CONTROL); 1126 - pp &= ~POWER_TARGET_ON; 1127 - I915_WRITE(PCH_PP_CONTROL, pp); 1128 - POSTING_READ(PCH_PP_CONTROL); 1129 - msleep(intel_dp->panel_power_cycle_delay); 1056 + pp = ironlake_get_pp_control(dev_priv); 1057 + pp &= ~(POWER_TARGET_ON | EDP_FORCE_VDD | PANEL_POWER_RESET | EDP_BLC_ENABLE); 1058 + I915_WRITE(PCH_PP_CONTROL, pp); 1059 + POSTING_READ(PCH_PP_CONTROL); 1130 1060 1131 - if (wait_for((I915_READ(PCH_PP_STATUS) & idle_off_mask) == 0, 5000)) 1132 - DRM_ERROR("panel off wait timed out: 0x%08x\n", 1133 - I915_READ(PCH_PP_STATUS)); 1134 - 1135 - pp |= PANEL_POWER_RESET; /* restore panel reset bit */ 1136 - I915_WRITE(PCH_PP_CONTROL, pp); 1137 - POSTING_READ(PCH_PP_CONTROL); 1138 - } 1061 + 
ironlake_wait_panel_off(intel_dp); 1139 1062 } 1140 1063 1141 1064 static void ironlake_edp_backlight_on(struct intel_dp *intel_dp) ··· 1134 1099 * allowing it to appear. 1135 1100 */ 1136 1101 msleep(intel_dp->backlight_on_delay); 1137 - pp = I915_READ(PCH_PP_CONTROL); 1138 - pp &= ~PANEL_UNLOCK_MASK; 1139 - pp |= PANEL_UNLOCK_REGS; 1102 + pp = ironlake_get_pp_control(dev_priv); 1140 1103 pp |= EDP_BLC_ENABLE; 1141 1104 I915_WRITE(PCH_PP_CONTROL, pp); 1142 1105 POSTING_READ(PCH_PP_CONTROL); ··· 1150 1117 return; 1151 1118 1152 1119 DRM_DEBUG_KMS("\n"); 1153 - pp = I915_READ(PCH_PP_CONTROL); 1154 - pp &= ~PANEL_UNLOCK_MASK; 1155 - pp |= PANEL_UNLOCK_REGS; 1120 + pp = ironlake_get_pp_control(dev_priv); 1156 1121 pp &= ~EDP_BLC_ENABLE; 1157 1122 I915_WRITE(PCH_PP_CONTROL, pp); 1158 1123 POSTING_READ(PCH_PP_CONTROL); ··· 1218 1187 { 1219 1188 struct intel_dp *intel_dp = enc_to_intel_dp(encoder); 1220 1189 1190 + ironlake_edp_backlight_off(intel_dp); 1191 + ironlake_edp_panel_off(intel_dp); 1192 + 1221 1193 /* Wake up the sink first */ 1222 1194 ironlake_edp_panel_vdd_on(intel_dp); 1223 1195 intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON); 1196 + intel_dp_link_down(intel_dp); 1224 1197 ironlake_edp_panel_vdd_off(intel_dp, false); 1225 1198 1226 1199 /* Make sure the panel is off before trying to 1227 1200 * change the mode 1228 1201 */ 1229 - ironlake_edp_backlight_off(intel_dp); 1230 - intel_dp_link_down(intel_dp); 1231 - ironlake_edp_panel_off(encoder); 1232 1202 } 1233 1203 1234 1204 static void intel_dp_commit(struct drm_encoder *encoder) ··· 1243 1211 intel_dp_start_link_train(intel_dp); 1244 1212 ironlake_edp_panel_on(intel_dp); 1245 1213 ironlake_edp_panel_vdd_off(intel_dp, true); 1246 - 1247 1214 intel_dp_complete_link_train(intel_dp); 1248 1215 ironlake_edp_backlight_on(intel_dp); 1249 1216 ··· 1261 1230 uint32_t dp_reg = I915_READ(intel_dp->output_reg); 1262 1231 1263 1232 if (mode != DRM_MODE_DPMS_ON) { 1233 + ironlake_edp_backlight_off(intel_dp); 1234 + 
ironlake_edp_panel_off(intel_dp); 1235 + 1264 1236 ironlake_edp_panel_vdd_on(intel_dp); 1265 - if (is_edp(intel_dp)) 1266 - ironlake_edp_backlight_off(intel_dp); 1267 1237 intel_dp_sink_dpms(intel_dp, mode); 1268 1238 intel_dp_link_down(intel_dp); 1269 - ironlake_edp_panel_off(encoder); 1270 - if (is_edp(intel_dp) && !is_pch_edp(intel_dp)) 1271 - ironlake_edp_pll_off(encoder); 1272 1239 ironlake_edp_panel_vdd_off(intel_dp, false); 1240 + 1241 + if (is_cpu_edp(intel_dp)) 1242 + ironlake_edp_pll_off(encoder); 1273 1243 } else { 1244 + if (is_cpu_edp(intel_dp)) 1245 + ironlake_edp_pll_on(encoder); 1246 + 1274 1247 ironlake_edp_panel_vdd_on(intel_dp); 1275 1248 intel_dp_sink_dpms(intel_dp, mode); 1276 1249 if (!(dp_reg & DP_PORT_EN)) { ··· 1282 1247 ironlake_edp_panel_on(intel_dp); 1283 1248 ironlake_edp_panel_vdd_off(intel_dp, true); 1284 1249 intel_dp_complete_link_train(intel_dp); 1285 - ironlake_edp_backlight_on(intel_dp); 1286 1250 } else 1287 1251 ironlake_edp_panel_vdd_off(intel_dp, false); 1288 1252 ironlake_edp_backlight_on(intel_dp); ··· 1319 1285 * link status information 1320 1286 */ 1321 1287 static bool 1322 - intel_dp_get_link_status(struct intel_dp *intel_dp) 1288 + intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]) 1323 1289 { 1324 1290 return intel_dp_aux_native_read_retry(intel_dp, 1325 1291 DP_LANE0_1_STATUS, 1326 - intel_dp->link_status, 1292 + link_status, 1327 1293 DP_LINK_STATUS_SIZE); 1328 1294 } 1329 1295 ··· 1335 1301 } 1336 1302 1337 1303 static uint8_t 1338 - intel_get_adjust_request_voltage(uint8_t link_status[DP_LINK_STATUS_SIZE], 1304 + intel_get_adjust_request_voltage(uint8_t adjust_request[2], 1339 1305 int lane) 1340 1306 { 1341 - int i = DP_ADJUST_REQUEST_LANE0_1 + (lane >> 1); 1342 1307 int s = ((lane & 1) ? 
1343 1308 DP_ADJUST_VOLTAGE_SWING_LANE1_SHIFT : 1344 1309 DP_ADJUST_VOLTAGE_SWING_LANE0_SHIFT); 1345 - uint8_t l = intel_dp_link_status(link_status, i); 1310 + uint8_t l = adjust_request[lane>>1]; 1346 1311 1347 1312 return ((l >> s) & 3) << DP_TRAIN_VOLTAGE_SWING_SHIFT; 1348 1313 } 1349 1314 1350 1315 static uint8_t 1351 - intel_get_adjust_request_pre_emphasis(uint8_t link_status[DP_LINK_STATUS_SIZE], 1316 + intel_get_adjust_request_pre_emphasis(uint8_t adjust_request[2], 1352 1317 int lane) 1353 1318 { 1354 - int i = DP_ADJUST_REQUEST_LANE0_1 + (lane >> 1); 1355 1319 int s = ((lane & 1) ? 1356 1320 DP_ADJUST_PRE_EMPHASIS_LANE1_SHIFT : 1357 1321 DP_ADJUST_PRE_EMPHASIS_LANE0_SHIFT); 1358 - uint8_t l = intel_dp_link_status(link_status, i); 1322 + uint8_t l = adjust_request[lane>>1]; 1359 1323 1360 1324 return ((l >> s) & 3) << DP_TRAIN_PRE_EMPHASIS_SHIFT; 1361 1325 } ··· 1376 1344 * a maximum voltage of 800mV and a maximum pre-emphasis of 6dB 1377 1345 */ 1378 1346 #define I830_DP_VOLTAGE_MAX DP_TRAIN_VOLTAGE_SWING_800 1347 + #define I830_DP_VOLTAGE_MAX_CPT DP_TRAIN_VOLTAGE_SWING_1200 1379 1348 1380 1349 static uint8_t 1381 1350 intel_dp_pre_emphasis_max(uint8_t voltage_swing) ··· 1395 1362 } 1396 1363 1397 1364 static void 1398 - intel_get_adjust_train(struct intel_dp *intel_dp) 1365 + intel_get_adjust_train(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]) 1399 1366 { 1367 + struct drm_device *dev = intel_dp->base.base.dev; 1400 1368 uint8_t v = 0; 1401 1369 uint8_t p = 0; 1402 1370 int lane; 1371 + uint8_t *adjust_request = link_status + (DP_ADJUST_REQUEST_LANE0_1 - DP_LANE0_1_STATUS); 1372 + int voltage_max; 1403 1373 1404 1374 for (lane = 0; lane < intel_dp->lane_count; lane++) { 1405 - uint8_t this_v = intel_get_adjust_request_voltage(intel_dp->link_status, lane); 1406 - uint8_t this_p = intel_get_adjust_request_pre_emphasis(intel_dp->link_status, lane); 1375 + uint8_t this_v = intel_get_adjust_request_voltage(adjust_request, lane); 1376 + 
uint8_t this_p = intel_get_adjust_request_pre_emphasis(adjust_request, lane); 1407 1377 1408 1378 if (this_v > v) 1409 1379 v = this_v; ··· 1414 1378 p = this_p; 1415 1379 } 1416 1380 1417 - if (v >= I830_DP_VOLTAGE_MAX) 1418 - v = I830_DP_VOLTAGE_MAX | DP_TRAIN_MAX_SWING_REACHED; 1381 + if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1382 + voltage_max = I830_DP_VOLTAGE_MAX_CPT; 1383 + else 1384 + voltage_max = I830_DP_VOLTAGE_MAX; 1385 + if (v >= voltage_max) 1386 + v = voltage_max | DP_TRAIN_MAX_SWING_REACHED; 1419 1387 1420 1388 if (p >= intel_dp_pre_emphasis_max(v)) 1421 1389 p = intel_dp_pre_emphasis_max(v) | DP_TRAIN_MAX_PRE_EMPHASIS_REACHED; ··· 1429 1389 } 1430 1390 1431 1391 static uint32_t 1432 - intel_dp_signal_levels(uint8_t train_set, int lane_count) 1392 + intel_dp_signal_levels(uint8_t train_set) 1433 1393 { 1434 1394 uint32_t signal_levels = 0; 1435 1395 ··· 1498 1458 intel_get_lane_status(uint8_t link_status[DP_LINK_STATUS_SIZE], 1499 1459 int lane) 1500 1460 { 1501 - int i = DP_LANE0_1_STATUS + (lane >> 1); 1502 1461 int s = (lane & 1) * 4; 1503 - uint8_t l = intel_dp_link_status(link_status, i); 1462 + uint8_t l = link_status[lane>>1]; 1504 1463 1505 1464 return (l >> s) & 0xf; 1506 1465 } ··· 1524 1485 DP_LANE_CHANNEL_EQ_DONE|\ 1525 1486 DP_LANE_SYMBOL_LOCKED) 1526 1487 static bool 1527 - intel_channel_eq_ok(struct intel_dp *intel_dp) 1488 + intel_channel_eq_ok(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]) 1528 1489 { 1529 1490 uint8_t lane_align; 1530 1491 uint8_t lane_status; 1531 1492 int lane; 1532 1493 1533 - lane_align = intel_dp_link_status(intel_dp->link_status, 1494 + lane_align = intel_dp_link_status(link_status, 1534 1495 DP_LANE_ALIGN_STATUS_UPDATED); 1535 1496 if ((lane_align & DP_INTERLANE_ALIGN_DONE) == 0) 1536 1497 return false; 1537 1498 for (lane = 0; lane < intel_dp->lane_count; lane++) { 1538 - lane_status = intel_get_lane_status(intel_dp->link_status, lane); 1499 + lane_status = 
intel_get_lane_status(link_status, lane); 1539 1500 if ((lane_status & CHANNEL_EQ_BITS) != CHANNEL_EQ_BITS) 1540 1501 return false; 1541 1502 } ··· 1560 1521 1561 1522 ret = intel_dp_aux_native_write(intel_dp, 1562 1523 DP_TRAINING_LANE0_SET, 1563 - intel_dp->train_set, 4); 1564 - if (ret != 4) 1524 + intel_dp->train_set, 1525 + intel_dp->lane_count); 1526 + if (ret != intel_dp->lane_count) 1565 1527 return false; 1566 1528 1567 1529 return true; ··· 1578 1538 int i; 1579 1539 uint8_t voltage; 1580 1540 bool clock_recovery = false; 1581 - int tries; 1541 + int voltage_tries, loop_tries; 1582 1542 u32 reg; 1583 1543 uint32_t DP = intel_dp->DP; 1584 1544 ··· 1605 1565 DP &= ~DP_LINK_TRAIN_MASK; 1606 1566 memset(intel_dp->train_set, 0, 4); 1607 1567 voltage = 0xff; 1608 - tries = 0; 1568 + voltage_tries = 0; 1569 + loop_tries = 0; 1609 1570 clock_recovery = false; 1610 1571 for (;;) { 1611 1572 /* Use intel_dp->train_set[0] to set the voltage and pre emphasis values */ 1573 + uint8_t link_status[DP_LINK_STATUS_SIZE]; 1612 1574 uint32_t signal_levels; 1613 - if (IS_GEN6(dev) && is_edp(intel_dp)) { 1575 + 1576 + if (IS_GEN6(dev) && is_cpu_edp(intel_dp)) { 1614 1577 signal_levels = intel_gen6_edp_signal_levels(intel_dp->train_set[0]); 1615 1578 DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_SNB) | signal_levels; 1616 1579 } else { 1617 - signal_levels = intel_dp_signal_levels(intel_dp->train_set[0], intel_dp->lane_count); 1580 + signal_levels = intel_dp_signal_levels(intel_dp->train_set[0]); 1581 + DRM_DEBUG_KMS("training pattern 1 signal levels %08x\n", signal_levels); 1618 1582 DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels; 1619 1583 } 1620 1584 ··· 1634 1590 /* Set training pattern 1 */ 1635 1591 1636 1592 udelay(100); 1637 - if (!intel_dp_get_link_status(intel_dp)) 1593 + if (!intel_dp_get_link_status(intel_dp, link_status)) { 1594 + DRM_ERROR("failed to get link status\n"); 1638 1595 break; 1596 + } 1639 1597 1640 - if 
(intel_clock_recovery_ok(intel_dp->link_status, intel_dp->lane_count)) { 1598 + if (intel_clock_recovery_ok(link_status, intel_dp->lane_count)) { 1599 + DRM_DEBUG_KMS("clock recovery OK\n"); 1641 1600 clock_recovery = true; 1642 1601 break; 1643 1602 } ··· 1649 1602 for (i = 0; i < intel_dp->lane_count; i++) 1650 1603 if ((intel_dp->train_set[i] & DP_TRAIN_MAX_SWING_REACHED) == 0) 1651 1604 break; 1652 - if (i == intel_dp->lane_count) 1653 - break; 1605 + if (i == intel_dp->lane_count) { 1606 + ++loop_tries; 1607 + if (loop_tries == 5) { 1608 + DRM_DEBUG_KMS("too many full retries, give up\n"); 1609 + break; 1610 + } 1611 + memset(intel_dp->train_set, 0, 4); 1612 + voltage_tries = 0; 1613 + continue; 1614 + } 1654 1615 1655 1616 /* Check to see if we've tried the same voltage 5 times */ 1656 1617 if ((intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK) == voltage) { 1657 - ++tries; 1658 - if (tries == 5) 1618 + ++voltage_tries; 1619 + if (voltage_tries == 5) { 1620 + DRM_DEBUG_KMS("too many voltage retries, give up\n"); 1659 1621 break; 1622 + } 1660 1623 } else 1661 - tries = 0; 1624 + voltage_tries = 0; 1662 1625 voltage = intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK; 1663 1626 1664 1627 /* Compute new intel_dp->train_set as requested by target */ 1665 - intel_get_adjust_train(intel_dp); 1628 + intel_get_adjust_train(intel_dp, link_status); 1666 1629 } 1667 1630 1668 1631 intel_dp->DP = DP; ··· 1695 1638 for (;;) { 1696 1639 /* Use intel_dp->train_set[0] to set the voltage and pre emphasis values */ 1697 1640 uint32_t signal_levels; 1641 + uint8_t link_status[DP_LINK_STATUS_SIZE]; 1698 1642 1699 1643 if (cr_tries > 5) { 1700 1644 DRM_ERROR("failed to train DP, aborting\n"); ··· 1703 1645 break; 1704 1646 } 1705 1647 1706 - if (IS_GEN6(dev) && is_edp(intel_dp)) { 1648 + if (IS_GEN6(dev) && is_cpu_edp(intel_dp)) { 1707 1649 signal_levels = intel_gen6_edp_signal_levels(intel_dp->train_set[0]); 1708 1650 DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_SNB) | 
signal_levels; 1709 1651 } else { 1710 - signal_levels = intel_dp_signal_levels(intel_dp->train_set[0], intel_dp->lane_count); 1652 + signal_levels = intel_dp_signal_levels(intel_dp->train_set[0]); 1711 1653 DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels; 1712 1654 } 1713 1655 ··· 1723 1665 break; 1724 1666 1725 1667 udelay(400); 1726 - if (!intel_dp_get_link_status(intel_dp)) 1668 + if (!intel_dp_get_link_status(intel_dp, link_status)) 1727 1669 break; 1728 1670 1729 1671 /* Make sure clock is still ok */ 1730 - if (!intel_clock_recovery_ok(intel_dp->link_status, intel_dp->lane_count)) { 1672 + if (!intel_clock_recovery_ok(link_status, intel_dp->lane_count)) { 1731 1673 intel_dp_start_link_train(intel_dp); 1732 1674 cr_tries++; 1733 1675 continue; 1734 1676 } 1735 1677 1736 - if (intel_channel_eq_ok(intel_dp)) { 1678 + if (intel_channel_eq_ok(intel_dp, link_status)) { 1737 1679 channel_eq = true; 1738 1680 break; 1739 1681 } ··· 1748 1690 } 1749 1691 1750 1692 /* Compute new intel_dp->train_set as requested by target */ 1751 - intel_get_adjust_train(intel_dp); 1693 + intel_get_adjust_train(intel_dp, link_status); 1752 1694 ++tries; 1753 1695 } 1754 1696 ··· 1793 1735 1794 1736 msleep(17); 1795 1737 1796 - if (is_edp(intel_dp)) 1797 - DP |= DP_LINK_TRAIN_OFF; 1738 + if (is_edp(intel_dp)) { 1739 + if (HAS_PCH_CPT(dev) && !is_cpu_edp(intel_dp)) 1740 + DP |= DP_LINK_TRAIN_OFF_CPT; 1741 + else 1742 + DP |= DP_LINK_TRAIN_OFF; 1743 + } 1798 1744 1799 1745 if (!HAS_PCH_CPT(dev) && 1800 1746 I915_READ(intel_dp->output_reg) & DP_PIPEB_SELECT) { ··· 1884 1822 intel_dp_check_link_status(struct intel_dp *intel_dp) 1885 1823 { 1886 1824 u8 sink_irq_vector; 1825 + u8 link_status[DP_LINK_STATUS_SIZE]; 1887 1826 1888 1827 if (intel_dp->dpms_mode != DRM_MODE_DPMS_ON) 1889 1828 return; ··· 1893 1830 return; 1894 1831 1895 1832 /* Try to read receiver status if the link appears to be up */ 1896 - if (!intel_dp_get_link_status(intel_dp)) { 1833 + if 
(!intel_dp_get_link_status(intel_dp, link_status)) { 1897 1834 intel_dp_link_down(intel_dp); 1898 1835 return; 1899 1836 } ··· 1918 1855 DRM_DEBUG_DRIVER("CP or sink specific irq unhandled\n"); 1919 1856 } 1920 1857 1921 - if (!intel_channel_eq_ok(intel_dp)) { 1858 + if (!intel_channel_eq_ok(intel_dp, link_status)) { 1922 1859 DRM_DEBUG_KMS("%s: channel EQ not ok, retraining\n", 1923 1860 drm_get_encoder_name(&intel_dp->base.base)); 1924 1861 intel_dp_start_link_train(intel_dp); ··· 2242 2179 continue; 2243 2180 2244 2181 intel_dp = enc_to_intel_dp(encoder); 2245 - if (intel_dp->base.type == INTEL_OUTPUT_DISPLAYPORT) 2182 + if (intel_dp->base.type == INTEL_OUTPUT_DISPLAYPORT || 2183 + intel_dp->base.type == INTEL_OUTPUT_EDP) 2246 2184 return intel_dp->output_reg; 2247 2185 } 2248 2186 ··· 2385 2321 2386 2322 cur.t8 = (pp_on & PANEL_LIGHT_ON_DELAY_MASK) >> 2387 2323 PANEL_LIGHT_ON_DELAY_SHIFT; 2388 - 2324 + 2389 2325 cur.t9 = (pp_off & PANEL_LIGHT_OFF_DELAY_MASK) >> 2390 2326 PANEL_LIGHT_OFF_DELAY_SHIFT; 2391 2327 ··· 2418 2354 DRM_DEBUG_KMS("backlight on delay %d, off delay %d\n", 2419 2355 intel_dp->backlight_on_delay, intel_dp->backlight_off_delay); 2420 2356 2421 - intel_dp->panel_off_jiffies = jiffies - intel_dp->panel_power_down_delay; 2422 - 2423 2357 ironlake_edp_panel_vdd_on(intel_dp); 2424 2358 ret = intel_dp_get_dpcd(intel_dp); 2425 2359 ironlake_edp_panel_vdd_off(intel_dp, false); 2360 + 2426 2361 if (ret) { 2427 2362 if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11) 2428 2363 dev_priv->no_aux_handshake =
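The intel_dp.c changes above thread an explicit `link_status`/`adjust_request` buffer through the link-training helpers instead of reading `intel_dp->link_status`, and index the adjust-request bytes directly with `lane >> 1`. The underlying DPCD layout packs two lanes per ADJUST_REQUEST byte: bits 1:0 and 3:2 for the even lane's voltage swing and pre-emphasis, bits 5:4 and 7:6 for the odd lane's. A standalone sketch of that decoding (shift names mirror drm_dp_helper.h; this is an illustration, not the driver code itself):

```c
#include <assert.h>
#include <stdint.h>

/* DPCD ADJUST_REQUEST / TRAINING_LANE_SET field shifts, as in drm_dp_helper.h */
#define DP_ADJUST_VOLTAGE_SWING_LANE0_SHIFT 0
#define DP_ADJUST_PRE_EMPHASIS_LANE0_SHIFT  2
#define DP_ADJUST_VOLTAGE_SWING_LANE1_SHIFT 4
#define DP_ADJUST_PRE_EMPHASIS_LANE1_SHIFT  6
#define DP_TRAIN_VOLTAGE_SWING_SHIFT        0
#define DP_TRAIN_PRE_EMPHASIS_SHIFT         3

/* Each ADJUST_REQUEST byte covers two lanes: lane >> 1 selects the byte,
 * lane & 1 selects the upper or lower pair of 2-bit fields. The result is
 * already positioned for writing back into TRAINING_LANEx_SET. */
static uint8_t adjust_request_voltage(const uint8_t adjust_request[2], int lane)
{
    int s = (lane & 1) ? DP_ADJUST_VOLTAGE_SWING_LANE1_SHIFT
                       : DP_ADJUST_VOLTAGE_SWING_LANE0_SHIFT;
    return (uint8_t)(((adjust_request[lane >> 1] >> s) & 3)
                     << DP_TRAIN_VOLTAGE_SWING_SHIFT);
}

static uint8_t adjust_request_pre_emphasis(const uint8_t adjust_request[2], int lane)
{
    int s = (lane & 1) ? DP_ADJUST_PRE_EMPHASIS_LANE1_SHIFT
                       : DP_ADJUST_PRE_EMPHASIS_LANE0_SHIFT;
    return (uint8_t)(((adjust_request[lane >> 1] >> s) & 3)
                     << DP_TRAIN_PRE_EMPHASIS_SHIFT);
}
```

This is why the patch can narrow the helper parameters from the full `link_status[DP_LINK_STATUS_SIZE]` array to a two-byte `adjust_request[2]` slice.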
+2 -1
drivers/gpu/drm/i915/intel_panel.c
··· 326 326 static int intel_panel_get_brightness(struct backlight_device *bd) 327 327 { 328 328 struct drm_device *dev = bl_get_data(bd); 329 - return intel_panel_get_backlight(dev); 329 + struct drm_i915_private *dev_priv = dev->dev_private; 330 + return dev_priv->backlight_level; 330 331 } 331 332 332 333 static const struct backlight_ops intel_panel_bl_ops = {
+50 -42
drivers/gpu/drm/radeon/evergreen_cs.c
··· 480 480 } 481 481 break; 482 482 case DB_Z_INFO: 483 - r = evergreen_cs_packet_next_reloc(p, &reloc); 484 - if (r) { 485 - dev_warn(p->dev, "bad SET_CONTEXT_REG " 486 - "0x%04X\n", reg); 487 - return -EINVAL; 488 - } 489 483 track->db_z_info = radeon_get_ib_value(p, idx); 490 - ib[idx] &= ~Z_ARRAY_MODE(0xf); 491 - track->db_z_info &= ~Z_ARRAY_MODE(0xf); 492 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 493 - ib[idx] |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 494 - track->db_z_info |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 495 - } else { 496 - ib[idx] |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 497 - track->db_z_info |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 484 + if (!p->keep_tiling_flags) { 485 + r = evergreen_cs_packet_next_reloc(p, &reloc); 486 + if (r) { 487 + dev_warn(p->dev, "bad SET_CONTEXT_REG " 488 + "0x%04X\n", reg); 489 + return -EINVAL; 490 + } 491 + ib[idx] &= ~Z_ARRAY_MODE(0xf); 492 + track->db_z_info &= ~Z_ARRAY_MODE(0xf); 493 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 494 + ib[idx] |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 495 + track->db_z_info |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 496 + } else { 497 + ib[idx] |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 498 + track->db_z_info |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 499 + } 498 500 } 499 501 break; 500 502 case DB_STENCIL_INFO: ··· 609 607 case CB_COLOR5_INFO: 610 608 case CB_COLOR6_INFO: 611 609 case CB_COLOR7_INFO: 612 - r = evergreen_cs_packet_next_reloc(p, &reloc); 613 - if (r) { 614 - dev_warn(p->dev, "bad SET_CONTEXT_REG " 615 - "0x%04X\n", reg); 616 - return -EINVAL; 617 - } 618 610 tmp = (reg - CB_COLOR0_INFO) / 0x3c; 619 611 track->cb_color_info[tmp] = radeon_get_ib_value(p, idx); 620 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 621 - ib[idx] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 622 - track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 623 - } else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) { 624 - ib[idx] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 625 
- track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 612 + if (!p->keep_tiling_flags) { 613 + r = evergreen_cs_packet_next_reloc(p, &reloc); 614 + if (r) { 615 + dev_warn(p->dev, "bad SET_CONTEXT_REG " 616 + "0x%04X\n", reg); 617 + return -EINVAL; 618 + } 619 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 620 + ib[idx] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 621 + track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 622 + } else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) { 623 + ib[idx] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 624 + track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 625 + } 626 626 } 627 627 break; 628 628 case CB_COLOR8_INFO: 629 629 case CB_COLOR9_INFO: 630 630 case CB_COLOR10_INFO: 631 631 case CB_COLOR11_INFO: 632 - r = evergreen_cs_packet_next_reloc(p, &reloc); 633 - if (r) { 634 - dev_warn(p->dev, "bad SET_CONTEXT_REG " 635 - "0x%04X\n", reg); 636 - return -EINVAL; 637 - } 638 632 tmp = ((reg - CB_COLOR8_INFO) / 0x1c) + 8; 639 633 track->cb_color_info[tmp] = radeon_get_ib_value(p, idx); 640 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 641 - ib[idx] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 642 - track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 643 - } else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) { 644 - ib[idx] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 645 - track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 634 + if (!p->keep_tiling_flags) { 635 + r = evergreen_cs_packet_next_reloc(p, &reloc); 636 + if (r) { 637 + dev_warn(p->dev, "bad SET_CONTEXT_REG " 638 + "0x%04X\n", reg); 639 + return -EINVAL; 640 + } 641 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) { 642 + ib[idx] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 643 + track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 644 + } else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) { 645 + ib[idx] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 646 + 
track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 647 + } 646 648 } 647 649 break; 648 650 case CB_COLOR0_PITCH: ··· 1317 1311 return -EINVAL; 1318 1312 } 1319 1313 ib[idx+1+(i*8)+2] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff); 1320 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 1321 - ib[idx+1+(i*8)+1] |= TEX_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 1322 - else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 1323 - ib[idx+1+(i*8)+1] |= TEX_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 1314 + if (!p->keep_tiling_flags) { 1315 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 1316 + ib[idx+1+(i*8)+1] |= TEX_ARRAY_MODE(ARRAY_2D_TILED_THIN1); 1317 + else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 1318 + ib[idx+1+(i*8)+1] |= TEX_ARRAY_MODE(ARRAY_1D_TILED_THIN1); 1319 + } 1324 1320 texture = reloc->robj; 1325 1321 /* tex mip base */ 1326 1322 r = evergreen_cs_packet_next_reloc(p, &reloc);
+52 -44
drivers/gpu/drm/radeon/r300.c
··· 701 701 return r; 702 702 } 703 703 704 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 705 - tile_flags |= R300_TXO_MACRO_TILE; 706 - if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 707 - tile_flags |= R300_TXO_MICRO_TILE; 708 - else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO_SQUARE) 709 - tile_flags |= R300_TXO_MICRO_TILE_SQUARE; 704 + if (p->keep_tiling_flags) { 705 + ib[idx] = (idx_value & 31) | /* keep the 1st 5 bits */ 706 + ((idx_value & ~31) + (u32)reloc->lobj.gpu_offset); 707 + } else { 708 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 709 + tile_flags |= R300_TXO_MACRO_TILE; 710 + if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 711 + tile_flags |= R300_TXO_MICRO_TILE; 712 + else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO_SQUARE) 713 + tile_flags |= R300_TXO_MICRO_TILE_SQUARE; 710 714 711 - tmp = idx_value + ((u32)reloc->lobj.gpu_offset); 712 - tmp |= tile_flags; 713 - ib[idx] = tmp; 715 + tmp = idx_value + ((u32)reloc->lobj.gpu_offset); 716 + tmp |= tile_flags; 717 + ib[idx] = tmp; 718 + } 714 719 track->textures[i].robj = reloc->robj; 715 720 track->tex_dirty = true; 716 721 break; ··· 765 760 /* RB3D_COLORPITCH1 */ 766 761 /* RB3D_COLORPITCH2 */ 767 762 /* RB3D_COLORPITCH3 */ 768 - r = r100_cs_packet_next_reloc(p, &reloc); 769 - if (r) { 770 - DRM_ERROR("No reloc for ib[%d]=0x%04X\n", 771 - idx, reg); 772 - r100_cs_dump_packet(p, pkt); 773 - return r; 763 + if (!p->keep_tiling_flags) { 764 + r = r100_cs_packet_next_reloc(p, &reloc); 765 + if (r) { 766 + DRM_ERROR("No reloc for ib[%d]=0x%04X\n", 767 + idx, reg); 768 + r100_cs_dump_packet(p, pkt); 769 + return r; 770 + } 771 + 772 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 773 + tile_flags |= R300_COLOR_TILE_ENABLE; 774 + if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 775 + tile_flags |= R300_COLOR_MICROTILE_ENABLE; 776 + else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO_SQUARE) 777 + tile_flags |= R300_COLOR_MICROTILE_SQUARE_ENABLE; 778 + 
779 + tmp = idx_value & ~(0x7 << 16); 780 + tmp |= tile_flags; 781 + ib[idx] = tmp; 774 782 } 775 - 776 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 777 - tile_flags |= R300_COLOR_TILE_ENABLE; 778 - if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 779 - tile_flags |= R300_COLOR_MICROTILE_ENABLE; 780 - else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO_SQUARE) 781 - tile_flags |= R300_COLOR_MICROTILE_SQUARE_ENABLE; 782 - 783 - tmp = idx_value & ~(0x7 << 16); 784 - tmp |= tile_flags; 785 - ib[idx] = tmp; 786 783 i = (reg - 0x4E38) >> 2; 787 784 track->cb[i].pitch = idx_value & 0x3FFE; 788 785 switch (((idx_value >> 21) & 0xF)) { ··· 850 843 break; 851 844 case 0x4F24: 852 845 /* ZB_DEPTHPITCH */ 853 - r = r100_cs_packet_next_reloc(p, &reloc); 854 - if (r) { 855 - DRM_ERROR("No reloc for ib[%d]=0x%04X\n", 856 - idx, reg); 857 - r100_cs_dump_packet(p, pkt); 858 - return r; 846 + if (!p->keep_tiling_flags) { 847 + r = r100_cs_packet_next_reloc(p, &reloc); 848 + if (r) { 849 + DRM_ERROR("No reloc for ib[%d]=0x%04X\n", 850 + idx, reg); 851 + r100_cs_dump_packet(p, pkt); 852 + return r; 853 + } 854 + 855 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 856 + tile_flags |= R300_DEPTHMACROTILE_ENABLE; 857 + if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 858 + tile_flags |= R300_DEPTHMICROTILE_TILED; 859 + else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO_SQUARE) 860 + tile_flags |= R300_DEPTHMICROTILE_TILED_SQUARE; 861 + 862 + tmp = idx_value & ~(0x7 << 16); 863 + tmp |= tile_flags; 864 + ib[idx] = tmp; 859 865 } 860 - 861 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 862 - tile_flags |= R300_DEPTHMACROTILE_ENABLE; 863 - if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 864 - tile_flags |= R300_DEPTHMICROTILE_TILED; 865 - else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO_SQUARE) 866 - tile_flags |= R300_DEPTHMICROTILE_TILED_SQUARE; 867 - 868 - tmp = idx_value & ~(0x7 << 16); 869 - tmp |= tile_flags; 870 - ib[idx] = tmp; 871 - 
872 866 track->zb.pitch = idx_value & 0x3FFC; 873 867 track->zb_dirty = true; 874 868 break;
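Throughout the r300.c hunks the tile-flag patch-up is now skipped entirely when `p->keep_tiling_flags` is set; when it is not, the flags are derived from the buffer's relocation as before. The derivation itself follows one pattern: macro tiling is independent, while micro and micro-square tiling are mutually exclusive (`else if`). A minimal sketch of that pattern (register bit values here are placeholders for illustration, and the `RADEON_TILING_MICRO_SQUARE` value is assumed):

```c
#include <assert.h>
#include <stdint.h>

/* Tiling bits as in the radeon UAPI; MICRO_SQUARE value assumed here */
#define RADEON_TILING_MACRO        0x1
#define RADEON_TILING_MICRO        0x2
#define RADEON_TILING_MICRO_SQUARE 0x20

/* Placeholder register bits standing in for R300_TXO_* */
#define TXO_MACRO_TILE        (1u << 0)
#define TXO_MICRO_TILE        (1u << 1)
#define TXO_MICRO_TILE_SQUARE (1u << 2)

/* Mirrors the patch's pattern: macro is checked independently, micro
 * takes precedence over micro-square via the else-if chain. */
static uint32_t txo_tile_flags(uint32_t tiling_flags)
{
    uint32_t tile_flags = 0;

    if (tiling_flags & RADEON_TILING_MACRO)
        tile_flags |= TXO_MACRO_TILE;
    if (tiling_flags & RADEON_TILING_MICRO)
        tile_flags |= TXO_MICRO_TILE;
    else if (tiling_flags & RADEON_TILING_MICRO_SQUARE)
        tile_flags |= TXO_MICRO_TILE_SQUARE;

    return tile_flags;
}
```

With `keep_tiling_flags` set, the command stream's own values pass through untouched, which is the point of the new interface: userspace that already writes correct tiling bits no longer has them rewritten by the kernel.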
+16 -10
drivers/gpu/drm/radeon/r600_cs.c
··· 941 941 track->db_depth_control = radeon_get_ib_value(p, idx); 942 942 break; 943 943 case R_028010_DB_DEPTH_INFO: 944 - if (r600_cs_packet_next_is_pkt3_nop(p)) { 944 + if (!p->keep_tiling_flags && 945 + r600_cs_packet_next_is_pkt3_nop(p)) { 945 946 r = r600_cs_packet_next_reloc(p, &reloc); 946 947 if (r) { 947 948 dev_warn(p->dev, "bad SET_CONTEXT_REG " ··· 993 992 case R_0280B4_CB_COLOR5_INFO: 994 993 case R_0280B8_CB_COLOR6_INFO: 995 994 case R_0280BC_CB_COLOR7_INFO: 996 - if (r600_cs_packet_next_is_pkt3_nop(p)) { 995 + if (!p->keep_tiling_flags && 996 + r600_cs_packet_next_is_pkt3_nop(p)) { 997 997 r = r600_cs_packet_next_reloc(p, &reloc); 998 998 if (r) { 999 999 dev_err(p->dev, "bad SET_CONTEXT_REG 0x%04X\n", reg); ··· 1293 1291 mip_offset <<= 8; 1294 1292 1295 1293 word0 = radeon_get_ib_value(p, idx + 0); 1296 - if (tiling_flags & RADEON_TILING_MACRO) 1297 - word0 |= S_038000_TILE_MODE(V_038000_ARRAY_2D_TILED_THIN1); 1298 - else if (tiling_flags & RADEON_TILING_MICRO) 1299 - word0 |= S_038000_TILE_MODE(V_038000_ARRAY_1D_TILED_THIN1); 1294 + if (!p->keep_tiling_flags) { 1295 + if (tiling_flags & RADEON_TILING_MACRO) 1296 + word0 |= S_038000_TILE_MODE(V_038000_ARRAY_2D_TILED_THIN1); 1297 + else if (tiling_flags & RADEON_TILING_MICRO) 1298 + word0 |= S_038000_TILE_MODE(V_038000_ARRAY_1D_TILED_THIN1); 1299 + } 1300 1300 word1 = radeon_get_ib_value(p, idx + 1); 1301 1301 w0 = G_038000_TEX_WIDTH(word0) + 1; 1302 1302 h0 = G_038004_TEX_HEIGHT(word1) + 1; ··· 1625 1621 return -EINVAL; 1626 1622 } 1627 1623 base_offset = (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff); 1628 - if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 1629 - ib[idx+1+(i*7)+0] |= S_038000_TILE_MODE(V_038000_ARRAY_2D_TILED_THIN1); 1630 - else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 1631 - ib[idx+1+(i*7)+0] |= S_038000_TILE_MODE(V_038000_ARRAY_1D_TILED_THIN1); 1624 + if (!p->keep_tiling_flags) { 1625 + if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) 1626 + 
ib[idx+1+(i*7)+0] |= S_038000_TILE_MODE(V_038000_ARRAY_2D_TILED_THIN1); 1627 + else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) 1628 + ib[idx+1+(i*7)+0] |= S_038000_TILE_MODE(V_038000_ARRAY_1D_TILED_THIN1); 1629 + } 1632 1630 texture = reloc->robj; 1633 1631 /* tex mip base */ 1634 1632 r = r600_cs_packet_next_reloc(p, &reloc);
+2 -1
drivers/gpu/drm/radeon/radeon.h
··· 611 611 struct radeon_ib *ib; 612 612 void *track; 613 613 unsigned family; 614 - int parser_error; 614 + int parser_error; 615 + bool keep_tiling_flags; 615 616 }; 616 617 617 618 extern int radeon_cs_update_pages(struct radeon_cs_parser *p, int pg_idx);
+86 -116
drivers/gpu/drm/radeon/radeon_atombios.c
··· 62 62 struct _ATOM_SUPPORTED_DEVICES_INFO_2d1 info_2d1; 63 63 }; 64 64 65 + static void radeon_lookup_i2c_gpio_quirks(struct radeon_device *rdev, 66 + ATOM_GPIO_I2C_ASSIGMENT *gpio, 67 + u8 index) 68 + { 69 + /* r4xx mask is technically not used by the hw, so patch in the legacy mask bits */ 70 + if ((rdev->family == CHIP_R420) || 71 + (rdev->family == CHIP_R423) || 72 + (rdev->family == CHIP_RV410)) { 73 + if ((le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x0018) || 74 + (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x0019) || 75 + (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x001a)) { 76 + gpio->ucClkMaskShift = 0x19; 77 + gpio->ucDataMaskShift = 0x18; 78 + } 79 + } 80 + 81 + /* some evergreen boards have bad data for this entry */ 82 + if (ASIC_IS_DCE4(rdev)) { 83 + if ((index == 7) && 84 + (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1936) && 85 + (gpio->sucI2cId.ucAccess == 0)) { 86 + gpio->sucI2cId.ucAccess = 0x97; 87 + gpio->ucDataMaskShift = 8; 88 + gpio->ucDataEnShift = 8; 89 + gpio->ucDataY_Shift = 8; 90 + gpio->ucDataA_Shift = 8; 91 + } 92 + } 93 + 94 + /* some DCE3 boards have bad data for this entry */ 95 + if (ASIC_IS_DCE3(rdev)) { 96 + if ((index == 4) && 97 + (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1fda) && 98 + (gpio->sucI2cId.ucAccess == 0x94)) 99 + gpio->sucI2cId.ucAccess = 0x14; 100 + } 101 + } 102 + 103 + static struct radeon_i2c_bus_rec radeon_get_bus_rec_for_i2c_gpio(ATOM_GPIO_I2C_ASSIGMENT *gpio) 104 + { 105 + struct radeon_i2c_bus_rec i2c; 106 + 107 + memset(&i2c, 0, sizeof(struct radeon_i2c_bus_rec)); 108 + 109 + i2c.mask_clk_reg = le16_to_cpu(gpio->usClkMaskRegisterIndex) * 4; 110 + i2c.mask_data_reg = le16_to_cpu(gpio->usDataMaskRegisterIndex) * 4; 111 + i2c.en_clk_reg = le16_to_cpu(gpio->usClkEnRegisterIndex) * 4; 112 + i2c.en_data_reg = le16_to_cpu(gpio->usDataEnRegisterIndex) * 4; 113 + i2c.y_clk_reg = le16_to_cpu(gpio->usClkY_RegisterIndex) * 4; 114 + i2c.y_data_reg = le16_to_cpu(gpio->usDataY_RegisterIndex) * 4; 115 
+ i2c.a_clk_reg = le16_to_cpu(gpio->usClkA_RegisterIndex) * 4; 116 + i2c.a_data_reg = le16_to_cpu(gpio->usDataA_RegisterIndex) * 4; 117 + i2c.mask_clk_mask = (1 << gpio->ucClkMaskShift); 118 + i2c.mask_data_mask = (1 << gpio->ucDataMaskShift); 119 + i2c.en_clk_mask = (1 << gpio->ucClkEnShift); 120 + i2c.en_data_mask = (1 << gpio->ucDataEnShift); 121 + i2c.y_clk_mask = (1 << gpio->ucClkY_Shift); 122 + i2c.y_data_mask = (1 << gpio->ucDataY_Shift); 123 + i2c.a_clk_mask = (1 << gpio->ucClkA_Shift); 124 + i2c.a_data_mask = (1 << gpio->ucDataA_Shift); 125 + 126 + if (gpio->sucI2cId.sbfAccess.bfHW_Capable) 127 + i2c.hw_capable = true; 128 + else 129 + i2c.hw_capable = false; 130 + 131 + if (gpio->sucI2cId.ucAccess == 0xa0) 132 + i2c.mm_i2c = true; 133 + else 134 + i2c.mm_i2c = false; 135 + 136 + i2c.i2c_id = gpio->sucI2cId.ucAccess; 137 + 138 + if (i2c.mask_clk_reg) 139 + i2c.valid = true; 140 + else 141 + i2c.valid = false; 142 + 143 + return i2c; 144 + } 145 + 65 146 static struct radeon_i2c_bus_rec radeon_lookup_i2c_gpio(struct radeon_device *rdev, 66 147 uint8_t id) 67 148 { ··· 166 85 for (i = 0; i < num_indices; i++) { 167 86 gpio = &i2c_info->asGPIO_Info[i]; 168 87 169 - /* r4xx mask is technically not used by the hw, so patch in the legacy mask bits */ 170 - if ((rdev->family == CHIP_R420) || 171 - (rdev->family == CHIP_R423) || 172 - (rdev->family == CHIP_RV410)) { 173 - if ((le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x0018) || 174 - (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x0019) || 175 - (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x001a)) { 176 - gpio->ucClkMaskShift = 0x19; 177 - gpio->ucDataMaskShift = 0x18; 178 - } 179 - } 180 - 181 - /* some evergreen boards have bad data for this entry */ 182 - if (ASIC_IS_DCE4(rdev)) { 183 - if ((i == 7) && 184 - (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1936) && 185 - (gpio->sucI2cId.ucAccess == 0)) { 186 - gpio->sucI2cId.ucAccess = 0x97; 187 - gpio->ucDataMaskShift = 8; 188 - gpio->ucDataEnShift = 8; 
189 - gpio->ucDataY_Shift = 8; 190 - gpio->ucDataA_Shift = 8; 191 - } 192 - } 193 - 194 - /* some DCE3 boards have bad data for this entry */ 195 - if (ASIC_IS_DCE3(rdev)) { 196 - if ((i == 4) && 197 - (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1fda) && 198 - (gpio->sucI2cId.ucAccess == 0x94)) 199 - gpio->sucI2cId.ucAccess = 0x14; 200 - } 88 + radeon_lookup_i2c_gpio_quirks(rdev, gpio, i); 201 89 202 90 if (gpio->sucI2cId.ucAccess == id) { 203 - i2c.mask_clk_reg = le16_to_cpu(gpio->usClkMaskRegisterIndex) * 4; 204 - i2c.mask_data_reg = le16_to_cpu(gpio->usDataMaskRegisterIndex) * 4; 205 - i2c.en_clk_reg = le16_to_cpu(gpio->usClkEnRegisterIndex) * 4; 206 - i2c.en_data_reg = le16_to_cpu(gpio->usDataEnRegisterIndex) * 4; 207 - i2c.y_clk_reg = le16_to_cpu(gpio->usClkY_RegisterIndex) * 4; 208 - i2c.y_data_reg = le16_to_cpu(gpio->usDataY_RegisterIndex) * 4; 209 - i2c.a_clk_reg = le16_to_cpu(gpio->usClkA_RegisterIndex) * 4; 210 - i2c.a_data_reg = le16_to_cpu(gpio->usDataA_RegisterIndex) * 4; 211 - i2c.mask_clk_mask = (1 << gpio->ucClkMaskShift); 212 - i2c.mask_data_mask = (1 << gpio->ucDataMaskShift); 213 - i2c.en_clk_mask = (1 << gpio->ucClkEnShift); 214 - i2c.en_data_mask = (1 << gpio->ucDataEnShift); 215 - i2c.y_clk_mask = (1 << gpio->ucClkY_Shift); 216 - i2c.y_data_mask = (1 << gpio->ucDataY_Shift); 217 - i2c.a_clk_mask = (1 << gpio->ucClkA_Shift); 218 - i2c.a_data_mask = (1 << gpio->ucDataA_Shift); 219 - 220 - if (gpio->sucI2cId.sbfAccess.bfHW_Capable) 221 - i2c.hw_capable = true; 222 - else 223 - i2c.hw_capable = false; 224 - 225 - if (gpio->sucI2cId.ucAccess == 0xa0) 226 - i2c.mm_i2c = true; 227 - else 228 - i2c.mm_i2c = false; 229 - 230 - i2c.i2c_id = gpio->sucI2cId.ucAccess; 231 - 232 - if (i2c.mask_clk_reg) 233 - i2c.valid = true; 91 + i2c = radeon_get_bus_rec_for_i2c_gpio(gpio); 234 92 break; 235 93 } 236 94 } ··· 189 169 int i, num_indices; 190 170 char stmp[32]; 191 171 192 - memset(&i2c, 0, sizeof(struct radeon_i2c_bus_rec)); 193 - 194 172 if 
(atom_parse_data_header(ctx, index, &size, NULL, NULL, &data_offset)) { 195 173 i2c_info = (struct _ATOM_GPIO_I2C_INFO *)(ctx->bios + data_offset); 196 174 ··· 197 179 198 180 for (i = 0; i < num_indices; i++) { 199 181 gpio = &i2c_info->asGPIO_Info[i]; 200 - i2c.valid = false; 201 182 202 - /* some evergreen boards have bad data for this entry */ 203 - if (ASIC_IS_DCE4(rdev)) { 204 - if ((i == 7) && 205 - (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1936) && 206 - (gpio->sucI2cId.ucAccess == 0)) { 207 - gpio->sucI2cId.ucAccess = 0x97; 208 - gpio->ucDataMaskShift = 8; 209 - gpio->ucDataEnShift = 8; 210 - gpio->ucDataY_Shift = 8; 211 - gpio->ucDataA_Shift = 8; 212 - } 213 - } 183 + radeon_lookup_i2c_gpio_quirks(rdev, gpio, i); 214 184 215 - /* some DCE3 boards have bad data for this entry */ 216 - if (ASIC_IS_DCE3(rdev)) { 217 - if ((i == 4) && 218 - (le16_to_cpu(gpio->usClkMaskRegisterIndex) == 0x1fda) && 219 - (gpio->sucI2cId.ucAccess == 0x94)) 220 - gpio->sucI2cId.ucAccess = 0x14; 221 - } 185 + i2c = radeon_get_bus_rec_for_i2c_gpio(gpio); 222 186 223 - i2c.mask_clk_reg = le16_to_cpu(gpio->usClkMaskRegisterIndex) * 4; 224 - i2c.mask_data_reg = le16_to_cpu(gpio->usDataMaskRegisterIndex) * 4; 225 - i2c.en_clk_reg = le16_to_cpu(gpio->usClkEnRegisterIndex) * 4; 226 - i2c.en_data_reg = le16_to_cpu(gpio->usDataEnRegisterIndex) * 4; 227 - i2c.y_clk_reg = le16_to_cpu(gpio->usClkY_RegisterIndex) * 4; 228 - i2c.y_data_reg = le16_to_cpu(gpio->usDataY_RegisterIndex) * 4; 229 - i2c.a_clk_reg = le16_to_cpu(gpio->usClkA_RegisterIndex) * 4; 230 - i2c.a_data_reg = le16_to_cpu(gpio->usDataA_RegisterIndex) * 4; 231 - i2c.mask_clk_mask = (1 << gpio->ucClkMaskShift); 232 - i2c.mask_data_mask = (1 << gpio->ucDataMaskShift); 233 - i2c.en_clk_mask = (1 << gpio->ucClkEnShift); 234 - i2c.en_data_mask = (1 << gpio->ucDataEnShift); 235 - i2c.y_clk_mask = (1 << gpio->ucClkY_Shift); 236 - i2c.y_data_mask = (1 << gpio->ucDataY_Shift); 237 - i2c.a_clk_mask = (1 << gpio->ucClkA_Shift); 238 - 
i2c.a_data_mask = (1 << gpio->ucDataA_Shift); 239 - 240 - if (gpio->sucI2cId.sbfAccess.bfHW_Capable) 241 - i2c.hw_capable = true; 242 - else 243 - i2c.hw_capable = false; 244 - 245 - if (gpio->sucI2cId.ucAccess == 0xa0) 246 - i2c.mm_i2c = true; 247 - else 248 - i2c.mm_i2c = false; 249 - 250 - i2c.i2c_id = gpio->sucI2cId.ucAccess; 251 - 252 - if (i2c.mask_clk_reg) { 253 - i2c.valid = true; 187 + if (i2c.valid) { 254 188 sprintf(stmp, "0x%x", i2c.i2c_id); 255 189 rdev->i2c_bus[i] = radeon_i2c_create(rdev->ddev, &i2c, stmp); 256 190 }
+10 -1
drivers/gpu/drm/radeon/radeon_cs.c
··· 93 93 { 94 94 struct drm_radeon_cs *cs = data; 95 95 uint64_t *chunk_array_ptr; 96 - unsigned size, i; 96 + unsigned size, i, flags = 0; 97 97 98 98 if (!cs->num_chunks) { 99 99 return 0; ··· 140 140 if (p->chunks[i].length_dw == 0) 141 141 return -EINVAL; 142 142 } 143 + if (p->chunks[i].chunk_id == RADEON_CHUNK_ID_FLAGS && 144 + !p->chunks[i].length_dw) { 145 + return -EINVAL; 146 + } 143 147 144 148 p->chunks[i].length_dw = user_chunk.length_dw; 145 149 p->chunks[i].user_ptr = (void __user *)(unsigned long)user_chunk.chunk_data; ··· 158 154 if (DRM_COPY_FROM_USER(p->chunks[i].kdata, 159 155 p->chunks[i].user_ptr, size)) { 160 156 return -EFAULT; 157 + } 158 + if (p->chunks[i].chunk_id == RADEON_CHUNK_ID_FLAGS) { 159 + flags = p->chunks[i].kdata[0]; 161 160 } 162 161 } else { 163 162 p->chunks[i].kpage[0] = kmalloc(PAGE_SIZE, GFP_KERNEL); ··· 181 174 p->chunks[p->chunk_ib_idx].length_dw); 182 175 return -EINVAL; 183 176 } 177 + 178 + p->keep_tiling_flags = (flags & RADEON_CS_KEEP_TILING_FLAGS) != 0; 184 179 return 0; 185 180 } 186 181
+2 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 53 53 * 2.9.0 - r600 tiling (s3tc,rgtc) working, SET_PREDICATION packet 3 on r600 + eg, backend query 54 54 * 2.10.0 - fusion 2D tiling 55 55 * 2.11.0 - backend map, initial compute support for the CS checker 56 + * 2.12.0 - RADEON_CS_KEEP_TILING_FLAGS 56 57 */ 57 58 #define KMS_DRIVER_MAJOR 2 58 - #define KMS_DRIVER_MINOR 11 59 + #define KMS_DRIVER_MINOR 12 59 60 #define KMS_DRIVER_PATCHLEVEL 0 60 61 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags); 61 62 int radeon_driver_unload_kms(struct drm_device *dev);
+7 -1
drivers/gpu/drm/ttm/ttm_bo.c
··· 574 574 return ret; 575 575 576 576 spin_lock(&glob->lru_lock); 577 + 578 + if (unlikely(list_empty(&bo->ddestroy))) { 579 + spin_unlock(&glob->lru_lock); 580 + return 0; 581 + } 582 + 577 583 ret = ttm_bo_reserve_locked(bo, interruptible, 578 584 no_wait_reserve, false, 0); 579 585 580 - if (unlikely(ret != 0) || list_empty(&bo->ddestroy)) { 586 + if (unlikely(ret != 0)) { 581 587 spin_unlock(&glob->lru_lock); 582 588 return ret; 583 589 }
+12 -6
drivers/gpu/vga/vgaarb.c
··· 991 991 uc = &priv->cards[i]; 992 992 } 993 993 994 - if (!uc) 995 - return -EINVAL; 994 + if (!uc) { 995 + ret_val = -EINVAL; 996 + goto done; 997 + } 996 998 997 - if (io_state & VGA_RSRC_LEGACY_IO && uc->io_cnt == 0) 998 - return -EINVAL; 999 + if (io_state & VGA_RSRC_LEGACY_IO && uc->io_cnt == 0) { 1000 + ret_val = -EINVAL; 1001 + goto done; 1002 + } 999 1003 1000 - if (io_state & VGA_RSRC_LEGACY_MEM && uc->mem_cnt == 0) 1001 - return -EINVAL; 1004 + if (io_state & VGA_RSRC_LEGACY_MEM && uc->mem_cnt == 0) { 1005 + ret_val = -EINVAL; 1006 + goto done; 1007 + } 1002 1008 1003 1009 vga_put(pdev, io_state); 1004 1010
-1
drivers/hwmon/ad7314.c
··· 160 160 static struct spi_driver ad7314_driver = { 161 161 .driver = { 162 162 .name = "ad7314", 163 - .bus = &spi_bus_type, 164 163 .owner = THIS_MODULE, 165 164 }, 166 165 .probe = ad7314_probe,
-1
drivers/hwmon/ads7871.c
··· 227 227 static struct spi_driver ads7871_driver = { 228 228 .driver = { 229 229 .name = DEVICE_NAME, 230 - .bus = &spi_bus_type, 231 230 .owner = THIS_MODULE, 232 231 }, 233 232
+1 -11
drivers/hwmon/exynos4_tmu.c
··· 506 506 .resume = exynos4_tmu_resume, 507 507 }; 508 508 509 - static int __init exynos4_tmu_driver_init(void) 510 - { 511 - return platform_driver_register(&exynos4_tmu_driver); 512 - } 513 - module_init(exynos4_tmu_driver_init); 514 - 515 - static void __exit exynos4_tmu_driver_exit(void) 516 - { 517 - platform_driver_unregister(&exynos4_tmu_driver); 518 - } 519 - module_exit(exynos4_tmu_driver_exit); 509 + module_platform_driver(exynos4_tmu_driver); 520 510 521 511 MODULE_DESCRIPTION("EXYNOS4 TMU Driver"); 522 512 MODULE_AUTHOR("Donggeun Kim <dg77.kim@samsung.com>");
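This and the other hwmon conversions below all remove the same registration boilerplate in favor of `module_platform_driver()`. For reference, the macro generates essentially the code each patch deletes (simplified from `include/linux/platform_device.h`; kernel-only, not buildable outside a module):

```c
/* module_platform_driver(foo_driver) expands to roughly: */
static int __init foo_driver_init(void)
{
	return platform_driver_register(&foo_driver);
}
module_init(foo_driver_init);

static void __exit foo_driver_exit(void)
{
	platform_driver_unregister(&foo_driver);
}
module_exit(foo_driver_exit);
```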
+1 -12
drivers/hwmon/gpio-fan.c
··· 539 539 }, 540 540 }; 541 541 542 - static int __init gpio_fan_init(void) 543 - { 544 - return platform_driver_register(&gpio_fan_driver); 545 - } 546 - 547 - static void __exit gpio_fan_exit(void) 548 - { 549 - platform_driver_unregister(&gpio_fan_driver); 550 - } 551 - 552 - module_init(gpio_fan_init); 553 - module_exit(gpio_fan_exit); 542 + module_platform_driver(gpio_fan_driver); 554 543 555 544 MODULE_AUTHOR("Simon Guinot <sguinot@lacie.com>"); 556 545 MODULE_DESCRIPTION("GPIO FAN driver");
+1 -11
drivers/hwmon/jz4740-hwmon.c
··· 212 212 }, 213 213 }; 214 214 215 - static int __init jz4740_hwmon_init(void) 216 - { 217 - return platform_driver_register(&jz4740_hwmon_driver); 218 - } 219 - module_init(jz4740_hwmon_init); 220 - 221 - static void __exit jz4740_hwmon_exit(void) 222 - { 223 - platform_driver_unregister(&jz4740_hwmon_driver); 224 - } 225 - module_exit(jz4740_hwmon_exit); 215 + module_platform_driver(jz4740_hwmon_driver); 226 216 227 217 MODULE_DESCRIPTION("JZ4740 SoC HWMON driver"); 228 218 MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
+1 -13
drivers/hwmon/ntc_thermistor.c
··· 432 432 .id_table = ntc_thermistor_id, 433 433 }; 434 434 435 - static int __init ntc_thermistor_init(void) 436 - { 437 - return platform_driver_register(&ntc_thermistor_driver); 438 - } 439 - 440 - module_init(ntc_thermistor_init); 441 - 442 - static void __exit ntc_thermistor_cleanup(void) 443 - { 444 - platform_driver_unregister(&ntc_thermistor_driver); 445 - } 446 - 447 - module_exit(ntc_thermistor_cleanup); 435 + module_platform_driver(ntc_thermistor_driver); 448 436 449 437 MODULE_DESCRIPTION("NTC Thermistor Driver"); 450 438 MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
+1 -12
drivers/hwmon/s3c-hwmon.c
··· 393 393 .remove = __devexit_p(s3c_hwmon_remove), 394 394 }; 395 395 396 - static int __init s3c_hwmon_init(void) 397 - { 398 - return platform_driver_register(&s3c_hwmon_driver); 399 - } 400 - 401 - static void __exit s3c_hwmon_exit(void) 402 - { 403 - platform_driver_unregister(&s3c_hwmon_driver); 404 - } 405 - 406 - module_init(s3c_hwmon_init); 407 - module_exit(s3c_hwmon_exit); 396 + module_platform_driver(s3c_hwmon_driver); 408 397 409 398 MODULE_AUTHOR("Ben Dooks <ben@simtec.co.uk>"); 410 399 MODULE_DESCRIPTION("S3C ADC HWMon driver");
+1 -12
drivers/hwmon/sch5627.c
··· 590 590 .remove = sch5627_remove, 591 591 }; 592 592 593 - static int __init sch5627_init(void) 594 - { 595 - return platform_driver_register(&sch5627_driver); 596 - } 597 - 598 - static void __exit sch5627_exit(void) 599 - { 600 - platform_driver_unregister(&sch5627_driver); 601 - } 593 + module_platform_driver(sch5627_driver); 602 594 603 595 MODULE_DESCRIPTION("SMSC SCH5627 Hardware Monitoring Driver"); 604 596 MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>"); 605 597 MODULE_LICENSE("GPL"); 606 - 607 - module_init(sch5627_init); 608 - module_exit(sch5627_exit);
+1 -12
drivers/hwmon/sch5636.c
··· 521 521 .remove = sch5636_remove, 522 522 }; 523 523 524 - static int __init sch5636_init(void) 525 - { 526 - return platform_driver_register(&sch5636_driver); 527 - } 528 - 529 - static void __exit sch5636_exit(void) 530 - { 531 - platform_driver_unregister(&sch5636_driver); 532 - } 524 + module_platform_driver(sch5636_driver); 533 525 534 526 MODULE_DESCRIPTION("SMSC SCH5636 Hardware Monitoring Driver"); 535 527 MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>"); 536 528 MODULE_LICENSE("GPL"); 537 - 538 - module_init(sch5636_init); 539 - module_exit(sch5636_exit);
+1 -13
drivers/hwmon/twl4030-madc-hwmon.c
··· 136 136 }, 137 137 }; 138 138 139 - static int __init twl4030_madc_hwmon_init(void) 140 - { 141 - return platform_driver_register(&twl4030_madc_hwmon_driver); 142 - } 143 - 144 - module_init(twl4030_madc_hwmon_init); 145 - 146 - static void __exit twl4030_madc_hwmon_exit(void) 147 - { 148 - platform_driver_unregister(&twl4030_madc_hwmon_driver); 149 - } 150 - 151 - module_exit(twl4030_madc_hwmon_exit); 139 + module_platform_driver(twl4030_madc_hwmon_driver); 152 140 153 141 MODULE_DESCRIPTION("TWL4030 ADC Hwmon driver"); 154 142 MODULE_LICENSE("GPL");
+1 -12
drivers/hwmon/ultra45_env.c
··· 309 309 .remove = __devexit_p(env_remove), 310 310 }; 311 311 312 - static int __init env_init(void) 313 - { 314 - return platform_driver_register(&env_driver); 315 - } 316 - 317 - static void __exit env_exit(void) 318 - { 319 - platform_driver_unregister(&env_driver); 320 - } 321 - 322 - module_init(env_init); 323 - module_exit(env_exit); 312 + module_platform_driver(env_driver);
+1 -11
drivers/hwmon/wm831x-hwmon.c
··· 209 209 }, 210 210 }; 211 211 212 - static int __init wm831x_hwmon_init(void) 213 - { 214 - return platform_driver_register(&wm831x_hwmon_driver); 215 - } 216 - module_init(wm831x_hwmon_init); 217 - 218 - static void __exit wm831x_hwmon_exit(void) 219 - { 220 - platform_driver_unregister(&wm831x_hwmon_driver); 221 - } 222 - module_exit(wm831x_hwmon_exit); 212 + module_platform_driver(wm831x_hwmon_driver); 223 213 224 214 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>"); 225 215 MODULE_DESCRIPTION("WM831x Hardware Monitoring");
+1 -11
drivers/hwmon/wm8350-hwmon.c
··· 133 133 }, 134 134 }; 135 135 136 - static int __init wm8350_hwmon_init(void) 137 - { 138 - return platform_driver_register(&wm8350_hwmon_driver); 139 - } 140 - module_init(wm8350_hwmon_init); 141 - 142 - static void __exit wm8350_hwmon_exit(void) 143 - { 144 - platform_driver_unregister(&wm8350_hwmon_driver); 145 - } 146 - module_exit(wm8350_hwmon_exit); 136 + module_platform_driver(wm8350_hwmon_driver); 147 137 148 138 MODULE_AUTHOR("Mark Brown <broonie@opensource.wolfsonmicro.com>"); 149 139 MODULE_DESCRIPTION("WM8350 Hardware Monitoring");
+2 -2
drivers/i2c/algos/i2c-algo-bit.c
··· 488 488 489 489 if (flags & I2C_M_TEN) { 490 490 /* a ten bit address */ 491 - addr = 0xf0 | ((msg->addr >> 7) & 0x03); 491 + addr = 0xf0 | ((msg->addr >> 7) & 0x06); 492 492 bit_dbg(2, &i2c_adap->dev, "addr0: %d\n", addr); 493 493 /* try extended address code...*/ 494 494 ret = try_address(i2c_adap, addr, retries); ··· 498 498 return -ENXIO; 499 499 } 500 500 /* the remaining 8 bit address */ 501 - ret = i2c_outb(i2c_adap, msg->addr & 0x7f); 501 + ret = i2c_outb(i2c_adap, msg->addr & 0xff); 502 502 if ((ret != 1) && !nak_ok) { 503 503 /* the chip did not ack / xmission error occurred */ 504 504 dev_err(&i2c_adap->dev, "died at 2nd address code\n");
+1 -1
drivers/i2c/busses/i2c-nuc900.c
··· 593 593 i2c->adap.algo_data = i2c; 594 594 i2c->adap.dev.parent = &pdev->dev; 595 595 596 - mfp_set_groupg(&pdev->dev); 596 + mfp_set_groupg(&pdev->dev, NULL); 597 597 598 598 clk_get_rate(i2c->clk); 599 599
+3 -1
drivers/i2c/i2c-core.c
··· 539 539 client->dev.type = &i2c_client_type; 540 540 client->dev.of_node = info->of_node; 541 541 542 + /* For 10-bit clients, add an arbitrary offset to avoid collisions */ 542 543 dev_set_name(&client->dev, "%d-%04x", i2c_adapter_id(adap), 543 - client->addr); 544 + client->addr | ((client->flags & I2C_CLIENT_TEN) 545 + ? 0xa000 : 0)); 544 546 status = device_register(&client->dev); 545 547 if (status) 546 548 goto out_err;
+1 -1
drivers/i2c/i2c-dev.c
··· 579 579 return 0; 580 580 } 581 581 582 - int i2cdev_notifier_call(struct notifier_block *nb, unsigned long action, 582 + static int i2cdev_notifier_call(struct notifier_block *nb, unsigned long action, 583 583 void *data) 584 584 { 585 585 struct device *dev = data;
+6 -3
drivers/infiniband/core/addr.c
··· 216 216 217 217 neigh = neigh_lookup(&arp_tbl, &rt->rt_gateway, rt->dst.dev); 218 218 if (!neigh || !(neigh->nud_state & NUD_VALID)) { 219 + rcu_read_lock(); 219 220 neigh_event_send(dst_get_neighbour(&rt->dst), NULL); 221 + rcu_read_unlock(); 220 222 ret = -ENODATA; 221 223 if (neigh) 222 224 goto release; ··· 276 274 goto put; 277 275 } 278 276 277 + rcu_read_lock(); 279 278 neigh = dst_get_neighbour(dst); 280 279 if (!neigh || !(neigh->nud_state & NUD_VALID)) { 281 280 if (neigh) 282 281 neigh_event_send(neigh, NULL); 283 282 ret = -ENODATA; 284 - goto put; 283 + } else { 284 + ret = rdma_copy_addr(addr, dst->dev, neigh->ha); 285 285 } 286 - 287 - ret = rdma_copy_addr(addr, dst->dev, neigh->ha); 286 + rcu_read_unlock(); 288 287 put: 289 288 dst_release(dst); 290 289 return ret;
+4
drivers/infiniband/hw/cxgb3/iwch_cm.c
··· 1375 1375 goto reject; 1376 1376 } 1377 1377 dst = &rt->dst; 1378 + rcu_read_lock(); 1378 1379 neigh = dst_get_neighbour(dst); 1379 1380 l2t = t3_l2t_get(tdev, neigh, neigh->dev); 1381 + rcu_read_unlock(); 1380 1382 if (!l2t) { 1381 1383 printk(KERN_ERR MOD "%s - failed to allocate l2t entry!\n", 1382 1384 __func__); ··· 1948 1946 } 1949 1947 ep->dst = &rt->dst; 1950 1948 1949 + rcu_read_lock(); 1951 1950 neigh = dst_get_neighbour(ep->dst); 1952 1951 1953 1952 /* get a l2t entry */ 1954 1953 ep->l2t = t3_l2t_get(ep->com.tdev, neigh, neigh->dev); 1954 + rcu_read_unlock(); 1955 1955 if (!ep->l2t) { 1956 1956 printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); 1957 1957 err = -ENOMEM;
+9 -1
drivers/infiniband/hw/cxgb4/cm.c
··· 542 542 (mpa_rev_to_use == 2 ? MPA_ENHANCED_RDMA_CONN : 0); 543 543 mpa->private_data_size = htons(ep->plen); 544 544 mpa->revision = mpa_rev_to_use; 545 - if (mpa_rev_to_use == 1) 545 + if (mpa_rev_to_use == 1) { 546 546 ep->tried_with_mpa_v1 = 1; 547 + ep->retry_with_mpa_v1 = 0; 548 + } 547 549 548 550 if (mpa_rev_to_use == 2) { 549 551 mpa->private_data_size += ··· 1596 1594 goto reject; 1597 1595 } 1598 1596 dst = &rt->dst; 1597 + rcu_read_lock(); 1599 1598 neigh = dst_get_neighbour(dst); 1600 1599 if (neigh->dev->flags & IFF_LOOPBACK) { 1601 1600 pdev = ip_dev_find(&init_net, peer_ip); ··· 1623 1620 rss_qid = dev->rdev.lldi.rxq_ids[ 1624 1621 cxgb4_port_idx(neigh->dev) * step]; 1625 1622 } 1623 + rcu_read_unlock(); 1626 1624 if (!l2t) { 1627 1625 printk(KERN_ERR MOD "%s - failed to allocate l2t entry!\n", 1628 1626 __func__); ··· 1824 1820 } 1825 1821 ep->dst = &rt->dst; 1826 1822 1823 + rcu_read_lock(); 1827 1824 neigh = dst_get_neighbour(ep->dst); 1828 1825 1829 1826 /* get a l2t entry */ ··· 1861 1856 ep->rss_qid = ep->com.dev->rdev.lldi.rxq_ids[ 1862 1857 cxgb4_port_idx(neigh->dev) * step]; 1863 1858 } 1859 + rcu_read_unlock(); 1864 1860 if (!ep->l2t) { 1865 1861 printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); 1866 1862 err = -ENOMEM; ··· 2307 2301 } 2308 2302 ep->dst = &rt->dst; 2309 2303 2304 + rcu_read_lock(); 2310 2305 neigh = dst_get_neighbour(ep->dst); 2311 2306 2312 2307 /* get a l2t entry */ ··· 2346 2339 ep->retry_with_mpa_v1 = 0; 2347 2340 ep->tried_with_mpa_v1 = 0; 2348 2341 } 2342 + rcu_read_unlock(); 2349 2343 if (!ep->l2t) { 2350 2344 printk(KERN_ERR MOD "%s - cannot alloc l2e.\n", __func__); 2351 2345 err = -ENOMEM;
+1 -1
drivers/infiniband/hw/cxgb4/cq.c
··· 311 311 while (ptr != cq->sw_pidx) { 312 312 cqe = &cq->sw_queue[ptr]; 313 313 if (RQ_TYPE(cqe) && (CQE_OPCODE(cqe) != FW_RI_READ_RESP) && 314 - (CQE_QPID(cqe) == wq->rq.qid) && cqe_completes_wr(cqe, wq)) 314 + (CQE_QPID(cqe) == wq->sq.qid) && cqe_completes_wr(cqe, wq)) 315 315 (*count)++; 316 316 if (++ptr == cq->size) 317 317 ptr = 0;
+4 -2
drivers/infiniband/hw/nes/nes_cm.c
··· 1377 1377 neigh_release(neigh); 1378 1378 } 1379 1379 1380 - if ((neigh == NULL) || (!(neigh->nud_state & NUD_VALID))) 1380 + if ((neigh == NULL) || (!(neigh->nud_state & NUD_VALID))) { 1381 + rcu_read_lock(); 1381 1382 neigh_event_send(dst_get_neighbour(&rt->dst), NULL); 1382 - 1383 + rcu_read_unlock(); 1384 + } 1383 1385 ip_rt_put(rt); 1384 1386 return rc; 1385 1387 }
+9 -9
drivers/infiniband/hw/qib/qib_iba7322.c
··· 2307 2307 SYM_LSB(IBCCtrlA_0, MaxPktLen); 2308 2308 ppd->cpspec->ibcctrl_a = ibc; /* without linkcmd or linkinitcmd! */ 2309 2309 2310 - /* initially come up waiting for TS1, without sending anything. */ 2311 - val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE << 2312 - QLOGIC_IB_IBCC_LINKINITCMD_SHIFT); 2313 - 2314 - ppd->cpspec->ibcctrl_a = val; 2315 2310 /* 2316 2311 * Reset the PCS interface to the serdes (and also ibc, which is still 2317 2312 * in reset from above). Writes new value of ibcctrl_a as last step. 2318 2313 */ 2319 2314 qib_7322_mini_pcs_reset(ppd); 2320 - qib_write_kreg(dd, kr_scratch, 0ULL); 2321 - /* clear the linkinit cmds */ 2322 - ppd->cpspec->ibcctrl_a &= ~SYM_MASK(IBCCtrlA_0, LinkInitCmd); 2323 2315 2324 2316 if (!ppd->cpspec->ibcctrl_b) { 2325 2317 unsigned lse = ppd->link_speed_enabled; ··· 2376 2384 /* Enable port */ 2377 2385 ppd->cpspec->ibcctrl_a |= SYM_MASK(IBCCtrlA_0, IBLinkEn); 2378 2386 set_vls(ppd); 2387 + 2388 + /* initially come up DISABLED, without sending anything. */ 2389 + val = ppd->cpspec->ibcctrl_a | (QLOGIC_IB_IBCC_LINKINITCMD_DISABLE << 2390 + QLOGIC_IB_IBCC_LINKINITCMD_SHIFT); 2391 + qib_write_kreg_port(ppd, krp_ibcctrl_a, val); 2392 + qib_write_kreg(dd, kr_scratch, 0ULL); 2393 + /* clear the linkinit cmds */ 2394 + ppd->cpspec->ibcctrl_a = val & ~SYM_MASK(IBCCtrlA_0, LinkInitCmd); 2379 2395 2380 2396 /* be paranoid against later code motion, etc. */ 2381 2397 spin_lock_irqsave(&dd->cspec->rcvmod_lock, flags); ··· 5241 5241 off */ 5242 5242 if (ppd->dd->flags & QIB_HAS_QSFP) { 5243 5243 qd->t_insert = get_jiffies_64(); 5244 - schedule_work(&qd->work); 5244 + queue_work(ib_wq, &qd->work); 5245 5245 } 5246 5246 spin_lock_irqsave(&ppd->sdma_lock, flags); 5247 5247 if (__qib_sdma_running(ppd))
-12
drivers/infiniband/hw/qib/qib_qsfp.c
··· 480 480 udelay(20); /* Generous RST dwell */ 481 481 482 482 dd->f_gpio_mod(dd, mask, mask, mask); 483 - /* Spec says module can take up to two seconds! */ 484 - mask = QSFP_GPIO_MOD_PRS_N; 485 - if (qd->ppd->hw_pidx) 486 - mask <<= QSFP_GPIO_PORT2_SHIFT; 487 - 488 - /* Do not try to wait here. Better to let event handle it */ 489 - if (!qib_qsfp_mod_present(qd->ppd)) 490 - goto bail; 491 - /* We see a module, but it may be unwise to look yet. Just schedule */ 492 - qd->t_insert = get_jiffies_64(); 493 - queue_work(ib_wq, &qd->work); 494 - bail: 495 483 return; 496 484 } 497 485
+8 -5
drivers/infiniband/ulp/ipoib/ipoib_ib.c
··· 57 57 struct ib_pd *pd, struct ib_ah_attr *attr) 58 58 { 59 59 struct ipoib_ah *ah; 60 + struct ib_ah *vah; 60 61 61 62 ah = kmalloc(sizeof *ah, GFP_KERNEL); 62 63 if (!ah) 63 - return NULL; 64 + return ERR_PTR(-ENOMEM); 64 65 65 66 ah->dev = dev; 66 67 ah->last_send = 0; 67 68 kref_init(&ah->ref); 68 69 69 - ah->ah = ib_create_ah(pd, attr); 70 - if (IS_ERR(ah->ah)) { 70 + vah = ib_create_ah(pd, attr); 71 + if (IS_ERR(vah)) { 71 72 kfree(ah); 72 - ah = NULL; 73 - } else 73 + ah = (struct ipoib_ah *)vah; 74 + } else { 75 + ah->ah = vah; 74 76 ipoib_dbg(netdev_priv(dev), "Created ah %p\n", ah->ah); 77 + } 75 78 76 79 return ah; 77 80 }
+12 -8
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 432 432 433 433 spin_lock_irqsave(&priv->lock, flags); 434 434 435 - if (ah) { 435 + if (!IS_ERR_OR_NULL(ah)) { 436 436 path->pathrec = *pathrec; 437 437 438 438 old_ah = path->ah; ··· 555 555 return 0; 556 556 } 557 557 558 + /* called with rcu_read_lock */ 558 559 static void neigh_add_path(struct sk_buff *skb, struct net_device *dev) 559 560 { 560 561 struct ipoib_dev_priv *priv = netdev_priv(dev); ··· 637 636 spin_unlock_irqrestore(&priv->lock, flags); 638 637 } 639 638 639 + /* called with rcu_read_lock */ 640 640 static void ipoib_path_lookup(struct sk_buff *skb, struct net_device *dev) 641 641 { 642 642 struct ipoib_dev_priv *priv = netdev_priv(skb->dev); ··· 722 720 struct neighbour *n = NULL; 723 721 unsigned long flags; 724 722 723 + rcu_read_lock(); 725 724 if (likely(skb_dst(skb))) 726 725 n = dst_get_neighbour(skb_dst(skb)); 727 726 728 727 if (likely(n)) { 729 728 if (unlikely(!*to_ipoib_neigh(n))) { 730 729 ipoib_path_lookup(skb, dev); 731 - return NETDEV_TX_OK; 730 + goto unlock; 732 731 } 733 732 734 733 neigh = *to_ipoib_neigh(n); ··· 752 749 ipoib_neigh_free(dev, neigh); 753 750 spin_unlock_irqrestore(&priv->lock, flags); 754 751 ipoib_path_lookup(skb, dev); 755 - return NETDEV_TX_OK; 752 + goto unlock; 756 753 } 757 754 758 755 if (ipoib_cm_get(neigh)) { 759 756 if (ipoib_cm_up(neigh)) { 760 757 ipoib_cm_send(dev, skb, ipoib_cm_get(neigh)); 761 - return NETDEV_TX_OK; 758 + goto unlock; 762 759 } 763 760 } else if (neigh->ah) { 764 761 ipoib_send(dev, skb, neigh->ah, IPOIB_QPN(n->ha)); 765 - return NETDEV_TX_OK; 762 + goto unlock; 766 763 } 767 764 768 765 if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) { ··· 796 793 phdr->hwaddr + 4); 797 794 dev_kfree_skb_any(skb); 798 795 ++dev->stats.tx_dropped; 799 - return NETDEV_TX_OK; 796 + goto unlock; 800 797 } 801 798 802 799 unicast_arp_send(skb, dev, phdr); 803 800 } 804 801 } 805 - 802 + unlock: 803 + rcu_read_unlock(); 806 804 return NETDEV_TX_OK; 807 805 } 808 806 ··· 841 837 dst 
= skb_dst(skb); 842 838 n = NULL; 843 839 if (dst) 844 - n = dst_get_neighbour(dst); 840 + n = dst_get_neighbour_raw(dst); 845 841 if ((!dst || !n) && daddr) { 846 842 struct ipoib_pseudoheader *phdr = 847 843 (struct ipoib_pseudoheader *) skb_push(skb, sizeof *phdr);
+9 -4
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
··· 240 240 av.grh.dgid = mcast->mcmember.mgid; 241 241 242 242 ah = ipoib_create_ah(dev, priv->pd, &av); 243 - if (!ah) { 244 - ipoib_warn(priv, "ib_address_create failed\n"); 243 + if (IS_ERR(ah)) { 244 + ipoib_warn(priv, "ib_address_create failed %ld\n", 245 + -PTR_ERR(ah)); 246 + /* use original error */ 247 + return PTR_ERR(ah); 245 248 } else { 246 249 spin_lock_irq(&priv->lock); 247 250 mcast->ah = ah; ··· 269 266 270 267 skb->dev = dev; 271 268 if (dst) 272 - n = dst_get_neighbour(dst); 269 + n = dst_get_neighbour_raw(dst); 273 270 if (!dst || !n) { 274 271 /* put pseudoheader back on for next time */ 275 272 skb_push(skb, sizeof (struct ipoib_pseudoheader)); ··· 725 722 if (mcast && mcast->ah) { 726 723 struct dst_entry *dst = skb_dst(skb); 727 724 struct neighbour *n = NULL; 725 + 726 + rcu_read_lock(); 728 727 if (dst) 729 728 n = dst_get_neighbour(dst); 730 729 if (n && !*to_ipoib_neigh(n)) { ··· 739 734 list_add_tail(&neigh->list, &mcast->neigh_list); 740 735 } 741 736 } 742 - 737 + rcu_read_unlock(); 743 738 spin_unlock_irqrestore(&priv->lock, flags); 744 739 ipoib_send(dev, skb, mcast->ah, IB_MULTICAST_QPN); 745 740 return;
+18 -8
drivers/input/mouse/elantech.c
··· 1210 1210 */ 1211 1211 static int elantech_set_properties(struct elantech_data *etd) 1212 1212 { 1213 + /* This represents the version of IC body. */ 1213 1214 int ver = (etd->fw_version & 0x0f0000) >> 16; 1214 1215 1216 + /* Early version of Elan touchpads doesn't obey the rule. */ 1215 1217 if (etd->fw_version < 0x020030 || etd->fw_version == 0x020600) 1216 1218 etd->hw_version = 1; 1217 - else if (etd->fw_version < 0x150600) 1218 - etd->hw_version = 2; 1219 - else if (ver == 5) 1220 - etd->hw_version = 3; 1221 - else if (ver == 6) 1222 - etd->hw_version = 4; 1223 - else 1224 - return -1; 1219 + else { 1220 + switch (ver) { 1221 + case 2: 1222 + case 4: 1223 + etd->hw_version = 2; 1224 + break; 1225 + case 5: 1226 + etd->hw_version = 3; 1227 + break; 1228 + case 6: 1229 + etd->hw_version = 4; 1230 + break; 1231 + default: 1232 + return -1; 1233 + } 1234 + } 1225 1235 1226 1236 /* 1227 1237 * Turn on packet checking by default.
+1
drivers/input/serio/ams_delta_serio.c
··· 24 24 #include <linux/irq.h> 25 25 #include <linux/serio.h> 26 26 #include <linux/slab.h> 27 + #include <linux/module.h> 27 28 28 29 #include <asm/mach-types.h> 29 30 #include <plat/board-ams-delta.h>
+14
drivers/input/serio/i8042-x86ia64io.h
··· 431 431 DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V13"), 432 432 }, 433 433 }, 434 + { 435 + /* Newer HP Pavilion dv4 models */ 436 + .matches = { 437 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 438 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"), 439 + }, 440 + }, 434 441 { } 435 442 }; 436 443 ··· 565 558 .matches = { 566 559 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 567 560 DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V13"), 561 + }, 562 + }, 563 + { 564 + /* Newer HP Pavilion dv4 models */ 565 + .matches = { 566 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 567 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv4 Notebook PC"), 568 568 }, 569 569 }, 570 570 { }
+6
drivers/isdn/divert/divert_procfs.c
··· 242 242 case IIOCDOCFINT: 243 243 if (!divert_if.drv_to_name(dioctl.cf_ctrl.drvid)) 244 244 return (-EINVAL); /* invalid driver */ 245 + if (strnlen(dioctl.cf_ctrl.msn, sizeof(dioctl.cf_ctrl.msn)) == 246 + sizeof(dioctl.cf_ctrl.msn)) 247 + return -EINVAL; 248 + if (strnlen(dioctl.cf_ctrl.fwd_nr, sizeof(dioctl.cf_ctrl.fwd_nr)) == 249 + sizeof(dioctl.cf_ctrl.fwd_nr)) 250 + return -EINVAL; 245 251 if ((i = cf_command(dioctl.cf_ctrl.drvid, 246 252 (cmd == IIOCDOCFACT) ? 1 : (cmd == IIOCDOCFDIS) ? 0 : 2, 247 253 dioctl.cf_ctrl.cfproc,
+3
drivers/isdn/i4l/isdn_net.c
··· 2756 2756 char *c, 2757 2757 *e; 2758 2758 2759 + if (strnlen(cfg->drvid, sizeof(cfg->drvid)) == 2760 + sizeof(cfg->drvid)) 2761 + return -EINVAL; 2759 2762 drvidx = -1; 2760 2763 chidx = -1; 2761 2764 strcpy(drvid, cfg->drvid);
+8 -7
drivers/misc/Kconfig
··· 472 472 module will be called bmp085. 473 473 474 474 config PCH_PHUB 475 - tristate "Intel EG20T PCH / OKI SEMICONDUCTOR IOH(ML7213/ML7223) PHUB" 475 + tristate "Intel EG20T PCH/LAPIS Semicon IOH(ML7213/ML7223/ML7831) PHUB" 476 476 depends on PCI 477 477 help 478 478 This driver is for PCH(Platform controller Hub) PHUB(Packet Hub) of ··· 480 480 processor. The Topcliff has MAC address and Option ROM data in SROM. 481 481 This driver can access MAC address and Option ROM data in SROM. 482 482 483 - This driver also can be used for OKI SEMICONDUCTOR IOH(Input/ 484 - Output Hub), ML7213 and ML7223. 485 - ML7213 IOH is for IVI(In-Vehicle Infotainment) use and ML7223 IOH is 486 - for MP(Media Phone) use. 487 - ML7213/ML7223 is companion chip for Intel Atom E6xx series. 488 - ML7213/ML7223 is completely compatible for Intel EG20T PCH. 483 + This driver also can be used for LAPIS Semiconductor's IOH, 484 + ML7213/ML7223/ML7831. 485 + ML7213 which is for IVI(In-Vehicle Infotainment) use. 486 + ML7223 IOH is for MP(Media Phone) use. 487 + ML7831 IOH is for general purpose use. 488 + ML7213/ML7223/ML7831 is companion chip for Intel Atom E6xx series. 489 + ML7213/ML7223/ML7831 is completely compatible for Intel EG20T PCH. 489 490 490 491 To compile this driver as a module, choose M here: the module will 491 492 be called pch_phub.
+1 -1
drivers/misc/ad525x_dpot.h
··· 100 100 AD5293_ID = DPOT_CONF(F_RDACS_RW | F_SPI_16BIT, BRDAC0, 10, 27), 101 101 AD7376_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT, 102 102 BRDAC0, 7, 28), 103 - AD8400_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_8BIT, 103 + AD8400_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT, 104 104 BRDAC0, 8, 29), 105 105 AD8402_ID = DPOT_CONF(F_RDACS_WONLY | F_AD_APPDATA | F_SPI_16BIT, 106 106 BRDAC0 | BRDAC1, 8, 30),
+59 -22
drivers/misc/pch_phub.c
··· 1 1 /* 2 - * Copyright (C) 2010 OKI SEMICONDUCTOR CO., LTD. 2 + * Copyright (C) 2011 LAPIS Semiconductor Co., Ltd. 3 3 * 4 4 * This program is free software; you can redistribute it and/or modify 5 5 * it under the terms of the GNU General Public License as published by ··· 41 41 #define PCH_PHUB_ROM_START_ADDR_EG20T 0x80 /* ROM data area start address offset 42 42 (Intel EG20T PCH)*/ 43 43 #define PCH_PHUB_ROM_START_ADDR_ML7213 0x400 /* ROM data area start address 44 - offset(OKI SEMICONDUCTOR ML7213) 44 + offset(LAPIS Semicon ML7213) 45 45 */ 46 46 #define PCH_PHUB_ROM_START_ADDR_ML7223 0x400 /* ROM data area start address 47 - offset(OKI SEMICONDUCTOR ML7223) 47 + offset(LAPIS Semicon ML7223) 48 48 */ 49 49 50 50 /* MAX number of INT_REDUCE_CONTROL registers */ ··· 72 72 /* Macros for ML7223 */ 73 73 #define PCI_DEVICE_ID_ROHM_ML7223_mPHUB 0x8012 /* for Bus-m */ 74 74 #define PCI_DEVICE_ID_ROHM_ML7223_nPHUB 0x8002 /* for Bus-n */ 75 + 76 + /* Macros for ML7831 */ 77 + #define PCI_DEVICE_ID_ROHM_ML7831_PHUB 0x8801 75 78 76 79 /* SROM ACCESS Macro */ 77 80 #define PCH_WORD_ADDR_MASK (~((1 << 2) - 1)) ··· 118 115 * @pch_mac_start_address: MAC address area start address 119 116 * @pch_opt_rom_start_address: Option ROM start address 120 117 * @ioh_type: Save IOH type 118 + * @pdev: pointer to pci device struct 121 119 */ 122 120 struct pch_phub_reg { 123 121 u32 phub_id_reg; ··· 140 136 u32 pch_mac_start_address; 141 137 u32 pch_opt_rom_start_address; 142 138 int ioh_type; 139 + struct pci_dev *pdev; 143 140 }; 144 141 145 142 /* SROM SPEC for MAC address assignment offset */ ··· 476 471 int retval; 477 472 int i; 478 473 479 - if (chip->ioh_type == 1) /* EG20T */ 474 + if ((chip->ioh_type == 1) || (chip->ioh_type == 5)) /* EG20T or ML7831*/ 480 475 retval = pch_phub_gbe_serial_rom_conf(chip); 481 476 else /* ML7223 */ 482 477 retval = pch_phub_gbe_serial_rom_conf_mp(chip); ··· 503 498 unsigned int orom_size; 504 499 int ret; 505 500 int err; 501 + ssize_t 
rom_size; 506 502 507 503 struct pch_phub_reg *chip = 508 504 dev_get_drvdata(container_of(kobj, struct device, kobj)); ··· 515 509 } 516 510 517 511 /* Get Rom signature */ 512 + chip->pch_phub_extrom_base_address = pci_map_rom(chip->pdev, &rom_size); 513 + if (!chip->pch_phub_extrom_base_address) 514 + goto exrom_map_err; 515 + 518 516 pch_phub_read_serial_rom(chip, chip->pch_opt_rom_start_address, 519 517 (unsigned char *)&rom_signature); 520 518 rom_signature &= 0xff; ··· 549 539 goto return_err; 550 540 } 551 541 return_ok: 542 + pci_unmap_rom(chip->pdev, chip->pch_phub_extrom_base_address); 552 543 mutex_unlock(&pch_phub_mutex); 553 544 return addr_offset; 554 545 555 546 return_err: 547 + pci_unmap_rom(chip->pdev, chip->pch_phub_extrom_base_address); 548 + exrom_map_err: 556 549 mutex_unlock(&pch_phub_mutex); 557 550 return_err_nomutex: 558 551 return err; ··· 568 555 int err; 569 556 unsigned int addr_offset; 570 557 int ret; 558 + ssize_t rom_size; 571 559 struct pch_phub_reg *chip = 572 560 dev_get_drvdata(container_of(kobj, struct device, kobj)); 573 561 ··· 585 571 goto return_ok; 586 572 } 587 573 574 + chip->pch_phub_extrom_base_address = pci_map_rom(chip->pdev, &rom_size); 575 + if (!chip->pch_phub_extrom_base_address) { 576 + err = -ENOMEM; 577 + goto exrom_map_err; 578 + } 579 + 588 580 for (addr_offset = 0; addr_offset < count; addr_offset++) { 589 581 if (PCH_PHUB_OROM_SIZE < off + addr_offset) 590 582 goto return_ok; ··· 605 585 } 606 586 607 587 return_ok: 588 + pci_unmap_rom(chip->pdev, chip->pch_phub_extrom_base_address); 608 589 mutex_unlock(&pch_phub_mutex); 609 590 return addr_offset; 610 591 611 592 return_err: 593 + pci_unmap_rom(chip->pdev, chip->pch_phub_extrom_base_address); 594 + 595 + exrom_map_err: 612 596 mutex_unlock(&pch_phub_mutex); 613 597 return err; 614 598 } ··· 622 598 { 623 599 u8 mac[8]; 624 600 struct pch_phub_reg *chip = dev_get_drvdata(dev); 601 + ssize_t rom_size; 602 + 603 + chip->pch_phub_extrom_base_address = 
pci_map_rom(chip->pdev, &rom_size); 604 + if (!chip->pch_phub_extrom_base_address) 605 + return -ENOMEM; 625 606 626 607 pch_phub_read_gbe_mac_addr(chip, mac); 608 + pci_unmap_rom(chip->pdev, chip->pch_phub_extrom_base_address); 627 609 628 610 return sprintf(buf, "%pM\n", mac); 629 611 } ··· 638 608 const char *buf, size_t count) 639 609 { 640 610 u8 mac[6]; 611 + ssize_t rom_size; 641 612 struct pch_phub_reg *chip = dev_get_drvdata(dev); 642 613 643 614 if (count != 18) ··· 648 617 (u32 *)&mac[0], (u32 *)&mac[1], (u32 *)&mac[2], (u32 *)&mac[3], 649 618 (u32 *)&mac[4], (u32 *)&mac[5]); 650 619 620 + chip->pch_phub_extrom_base_address = pci_map_rom(chip->pdev, &rom_size); 621 + if (!chip->pch_phub_extrom_base_address) 622 + return -ENOMEM; 623 + 651 624 pch_phub_write_gbe_mac_addr(chip, mac); 625 + pci_unmap_rom(chip->pdev, chip->pch_phub_extrom_base_address); 652 626 653 627 return count; 654 628 } ··· 676 640 int retval; 677 641 678 642 int ret; 679 - ssize_t rom_size; 680 643 struct pch_phub_reg *chip; 681 644 682 645 chip = kzalloc(sizeof(struct pch_phub_reg), GFP_KERNEL); ··· 712 677 "in pch_phub_base_address variable is %p\n", __func__, 713 678 chip->pch_phub_base_address); 714 679 715 - if (id->driver_data != 3) { 716 - chip->pch_phub_extrom_base_address =\ 717 - pci_map_rom(pdev, &rom_size); 718 - if (chip->pch_phub_extrom_base_address == 0) { 719 - dev_err(&pdev->dev, "%s: pci_map_rom FAILED", __func__); 720 - ret = -ENOMEM; 721 - goto err_pci_map; 722 - } 723 - dev_dbg(&pdev->dev, "%s : " 724 - "pci_map_rom SUCCESS and value in " 725 - "pch_phub_extrom_base_address variable is %p\n", 726 - __func__, chip->pch_phub_extrom_base_address); 727 - } 680 + chip->pdev = pdev; /* Save pci device struct */ 728 681 729 682 if (id->driver_data == 1) { /* EG20T PCH */ 730 683 const char *board_name; ··· 786 763 chip->pch_opt_rom_start_address =\ 787 764 PCH_PHUB_ROM_START_ADDR_ML7223; 788 765 chip->pch_mac_start_address = PCH_PHUB_MAC_START_ADDR_ML7223; 766 + } else 
if (id->driver_data == 5) { /* ML7831 */ 767 + retval = sysfs_create_file(&pdev->dev.kobj, 768 + &dev_attr_pch_mac.attr); 769 + if (retval) 770 + goto err_sysfs_create; 771 + 772 + retval = sysfs_create_bin_file(&pdev->dev.kobj, &pch_bin_attr); 773 + if (retval) 774 + goto exit_bin_attr; 775 + 776 + /* set the prefetch value */ 777 + iowrite32(0x000affaa, chip->pch_phub_base_address + 0x14); 778 + /* set the interrupt delay value */ 779 + iowrite32(0x25, chip->pch_phub_base_address + 0x44); 780 + chip->pch_opt_rom_start_address = PCH_PHUB_ROM_START_ADDR_EG20T; 781 + chip->pch_mac_start_address = PCH_PHUB_MAC_START_ADDR_EG20T; 789 782 } 790 783 791 784 chip->ioh_type = id->driver_data; ··· 812 773 sysfs_remove_file(&pdev->dev.kobj, &dev_attr_pch_mac.attr); 813 774 814 775 err_sysfs_create: 815 - pci_unmap_rom(pdev, chip->pch_phub_extrom_base_address); 816 - err_pci_map: 817 776 pci_iounmap(pdev, chip->pch_phub_base_address); 818 777 err_pci_iomap: 819 778 pci_release_regions(pdev); ··· 829 792 830 793 sysfs_remove_file(&pdev->dev.kobj, &dev_attr_pch_mac.attr); 831 794 sysfs_remove_bin_file(&pdev->dev.kobj, &pch_bin_attr); 832 - pci_unmap_rom(pdev, chip->pch_phub_extrom_base_address); 833 795 pci_iounmap(pdev, chip->pch_phub_base_address); 834 796 pci_release_regions(pdev); 835 797 pci_disable_device(pdev); ··· 883 847 { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ROHM_ML7213_PHUB), 2, }, 884 848 { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ROHM_ML7223_mPHUB), 3, }, 885 849 { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ROHM_ML7223_nPHUB), 4, }, 850 + { PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ROHM_ML7831_PHUB), 5, }, 886 851 { } 887 852 }; 888 853 MODULE_DEVICE_TABLE(pci, pch_phub_pcidev_id); ··· 910 873 module_init(pch_phub_pci_init); 911 874 module_exit(pch_phub_pci_exit); 912 875 913 - MODULE_DESCRIPTION("Intel EG20T PCH/OKI SEMICONDUCTOR IOH(ML7213/ML7223) PHUB"); 876 + MODULE_DESCRIPTION("Intel EG20T PCH/LAPIS Semiconductor IOH(ML7213/ML7223) PHUB"); 914 877 MODULE_LICENSE("GPL");
+1 -1
drivers/misc/spear13xx_pcie_gadget.c
··· 903 903 } 904 904 module_exit(spear_pcie_gadget_exit); 905 905 906 - MODULE_ALIAS("pcie-gadget-spear"); 906 + MODULE_ALIAS("platform:pcie-gadget-spear"); 907 907 MODULE_AUTHOR("Pratyush Anand"); 908 908 MODULE_LICENSE("GPL");
+6 -27
drivers/net/bonding/bond_main.c
··· 2554 2554 } 2555 2555 } 2556 2556 2557 - static __be32 bond_glean_dev_ip(struct net_device *dev) 2558 - { 2559 - struct in_device *idev; 2560 - struct in_ifaddr *ifa; 2561 - __be32 addr = 0; 2562 - 2563 - if (!dev) 2564 - return 0; 2565 - 2566 - rcu_read_lock(); 2567 - idev = __in_dev_get_rcu(dev); 2568 - if (!idev) 2569 - goto out; 2570 - 2571 - ifa = idev->ifa_list; 2572 - if (!ifa) 2573 - goto out; 2574 - 2575 - addr = ifa->ifa_local; 2576 - out: 2577 - rcu_read_unlock(); 2578 - return addr; 2579 - } 2580 - 2581 2557 static int bond_has_this_ip(struct bonding *bond, __be32 ip) 2582 2558 { 2583 2559 struct vlan_entry *vlan; ··· 3299 3323 struct bonding *bond; 3300 3324 struct vlan_entry *vlan; 3301 3325 3326 + /* we only care about primary address */ 3327 + if(ifa->ifa_flags & IFA_F_SECONDARY) 3328 + return NOTIFY_DONE; 3329 + 3302 3330 list_for_each_entry(bond, &bn->dev_list, bond_list) { 3303 3331 if (bond->dev == event_dev) { 3304 3332 switch (event) { ··· 3310 3330 bond->master_ip = ifa->ifa_local; 3311 3331 return NOTIFY_OK; 3312 3332 case NETDEV_DOWN: 3313 - bond->master_ip = bond_glean_dev_ip(bond->dev); 3333 + bond->master_ip = 0; 3314 3334 return NOTIFY_OK; 3315 3335 default: 3316 3336 return NOTIFY_DONE; ··· 3326 3346 vlan->vlan_ip = ifa->ifa_local; 3327 3347 return NOTIFY_OK; 3328 3348 case NETDEV_DOWN: 3329 - vlan->vlan_ip = 3330 - bond_glean_dev_ip(vlan_dev); 3349 + vlan->vlan_ip = 0; 3331 3350 return NOTIFY_OK; 3332 3351 default: 3333 3352 return NOTIFY_DONE;
+1 -1
drivers/net/ethernet/davicom/dm9000.c
··· 614 614 615 615 if (!dm->wake_state) 616 616 irq_set_irq_wake(dm->irq_wake, 1); 617 - else if (dm->wake_state & !opts) 617 + else if (dm->wake_state && !opts) 618 618 irq_set_irq_wake(dm->irq_wake, 0); 619 619 } 620 620
+1
drivers/net/ethernet/freescale/Kconfig
··· 24 24 bool "FEC ethernet controller (of ColdFire and some i.MX CPUs)" 25 25 depends on (M523x || M527x || M5272 || M528x || M520x || M532x || \ 26 26 ARCH_MXC || ARCH_MXS) 27 + default ARCH_MXC || ARCH_MXS if ARM 27 28 select PHYLIB 28 29 ---help--- 29 30 Say Y here if you want to use the built-in 10/100 Fast ethernet
+110 -3
drivers/net/ethernet/jme.c
··· 1745 1745 } 1746 1746 1747 1747 static int 1748 + jme_phy_specreg_read(struct jme_adapter *jme, u32 specreg) 1749 + { 1750 + u32 phy_addr; 1751 + 1752 + phy_addr = JM_PHY_SPEC_REG_READ | specreg; 1753 + jme_mdio_write(jme->dev, jme->mii_if.phy_id, JM_PHY_SPEC_ADDR_REG, 1754 + phy_addr); 1755 + return jme_mdio_read(jme->dev, jme->mii_if.phy_id, 1756 + JM_PHY_SPEC_DATA_REG); 1757 + } 1758 + 1759 + static void 1760 + jme_phy_specreg_write(struct jme_adapter *jme, u32 ext_reg, u32 phy_data) 1761 + { 1762 + u32 phy_addr; 1763 + 1764 + phy_addr = JM_PHY_SPEC_REG_WRITE | ext_reg; 1765 + jme_mdio_write(jme->dev, jme->mii_if.phy_id, JM_PHY_SPEC_DATA_REG, 1766 + phy_data); 1767 + jme_mdio_write(jme->dev, jme->mii_if.phy_id, JM_PHY_SPEC_ADDR_REG, 1768 + phy_addr); 1769 + } 1770 + 1771 + static int 1772 + jme_phy_calibration(struct jme_adapter *jme) 1773 + { 1774 + u32 ctrl1000, phy_data; 1775 + 1776 + jme_phy_off(jme); 1777 + jme_phy_on(jme); 1778 + /* Enabel PHY test mode 1 */ 1779 + ctrl1000 = jme_mdio_read(jme->dev, jme->mii_if.phy_id, MII_CTRL1000); 1780 + ctrl1000 &= ~PHY_GAD_TEST_MODE_MSK; 1781 + ctrl1000 |= PHY_GAD_TEST_MODE_1; 1782 + jme_mdio_write(jme->dev, jme->mii_if.phy_id, MII_CTRL1000, ctrl1000); 1783 + 1784 + phy_data = jme_phy_specreg_read(jme, JM_PHY_EXT_COMM_2_REG); 1785 + phy_data &= ~JM_PHY_EXT_COMM_2_CALI_MODE_0; 1786 + phy_data |= JM_PHY_EXT_COMM_2_CALI_LATCH | 1787 + JM_PHY_EXT_COMM_2_CALI_ENABLE; 1788 + jme_phy_specreg_write(jme, JM_PHY_EXT_COMM_2_REG, phy_data); 1789 + msleep(20); 1790 + phy_data = jme_phy_specreg_read(jme, JM_PHY_EXT_COMM_2_REG); 1791 + phy_data &= ~(JM_PHY_EXT_COMM_2_CALI_ENABLE | 1792 + JM_PHY_EXT_COMM_2_CALI_MODE_0 | 1793 + JM_PHY_EXT_COMM_2_CALI_LATCH); 1794 + jme_phy_specreg_write(jme, JM_PHY_EXT_COMM_2_REG, phy_data); 1795 + 1796 + /* Disable PHY test mode */ 1797 + ctrl1000 = jme_mdio_read(jme->dev, jme->mii_if.phy_id, MII_CTRL1000); 1798 + ctrl1000 &= ~PHY_GAD_TEST_MODE_MSK; 1799 + jme_mdio_write(jme->dev, 
jme->mii_if.phy_id, MII_CTRL1000, ctrl1000); 1800 + return 0; 1801 + } 1802 + 1803 + static int 1804 + jme_phy_setEA(struct jme_adapter *jme) 1805 + { 1806 + u32 phy_comm0 = 0, phy_comm1 = 0; 1807 + u8 nic_ctrl; 1808 + 1809 + pci_read_config_byte(jme->pdev, PCI_PRIV_SHARE_NICCTRL, &nic_ctrl); 1810 + if ((nic_ctrl & 0x3) == JME_FLAG_PHYEA_ENABLE) 1811 + return 0; 1812 + 1813 + switch (jme->pdev->device) { 1814 + case PCI_DEVICE_ID_JMICRON_JMC250: 1815 + if (((jme->chip_main_rev == 5) && 1816 + ((jme->chip_sub_rev == 0) || (jme->chip_sub_rev == 1) || 1817 + (jme->chip_sub_rev == 3))) || 1818 + (jme->chip_main_rev >= 6)) { 1819 + phy_comm0 = 0x008A; 1820 + phy_comm1 = 0x4109; 1821 + } 1822 + if ((jme->chip_main_rev == 3) && 1823 + ((jme->chip_sub_rev == 1) || (jme->chip_sub_rev == 2))) 1824 + phy_comm0 = 0xE088; 1825 + break; 1826 + case PCI_DEVICE_ID_JMICRON_JMC260: 1827 + if (((jme->chip_main_rev == 5) && 1828 + ((jme->chip_sub_rev == 0) || (jme->chip_sub_rev == 1) || 1829 + (jme->chip_sub_rev == 3))) || 1830 + (jme->chip_main_rev >= 6)) { 1831 + phy_comm0 = 0x008A; 1832 + phy_comm1 = 0x4109; 1833 + } 1834 + if ((jme->chip_main_rev == 3) && 1835 + ((jme->chip_sub_rev == 1) || (jme->chip_sub_rev == 2))) 1836 + phy_comm0 = 0xE088; 1837 + if ((jme->chip_main_rev == 2) && (jme->chip_sub_rev == 0)) 1838 + phy_comm0 = 0x608A; 1839 + if ((jme->chip_main_rev == 2) && (jme->chip_sub_rev == 2)) 1840 + phy_comm0 = 0x408A; 1841 + break; 1842 + default: 1843 + return -ENODEV; 1844 + } 1845 + if (phy_comm0) 1846 + jme_phy_specreg_write(jme, JM_PHY_EXT_COMM_0_REG, phy_comm0); 1847 + if (phy_comm1) 1848 + jme_phy_specreg_write(jme, JM_PHY_EXT_COMM_1_REG, phy_comm1); 1849 + 1850 + return 0; 1851 + } 1852 + 1853 + static int 1748 1854 jme_open(struct net_device *netdev) 1749 1855 { 1750 1856 struct jme_adapter *jme = netdev_priv(netdev); ··· 1875 1769 jme_set_settings(netdev, &jme->old_ecmd); 1876 1770 else 1877 1771 jme_reset_phy_processor(jme); 1878 - 1772 + 
jme_phy_calibration(jme); 1773 + jme_phy_setEA(jme); 1879 1774 jme_reset_link(jme); 1880 1775 1881 1776 return 0; ··· 3291 3184 jme_set_settings(netdev, &jme->old_ecmd); 3292 3185 else 3293 3186 jme_reset_phy_processor(jme); 3294 - 3187 + jme_phy_calibration(jme); 3188 + jme_phy_setEA(jme); 3295 3189 jme_start_irq(jme); 3296 3190 netif_device_attach(netdev); 3297 3191 ··· 3347 3239 MODULE_LICENSE("GPL"); 3348 3240 MODULE_VERSION(DRV_VERSION); 3349 3241 MODULE_DEVICE_TABLE(pci, jme_pci_tbl); 3350 -
+19
drivers/net/ethernet/jme.h
··· 760 760 RXMCS_CHECKSUM, 761 761 }; 762 762 763 + /* Extern PHY common register 2 */ 764 + 765 + #define PHY_GAD_TEST_MODE_1 0x00002000 766 + #define PHY_GAD_TEST_MODE_MSK 0x0000E000 767 + #define JM_PHY_SPEC_REG_READ 0x00004000 768 + #define JM_PHY_SPEC_REG_WRITE 0x00008000 769 + #define PHY_CALIBRATION_DELAY 20 770 + #define JM_PHY_SPEC_ADDR_REG 0x1E 771 + #define JM_PHY_SPEC_DATA_REG 0x1F 772 + 773 + #define JM_PHY_EXT_COMM_0_REG 0x30 774 + #define JM_PHY_EXT_COMM_1_REG 0x31 775 + #define JM_PHY_EXT_COMM_2_REG 0x32 776 + #define JM_PHY_EXT_COMM_2_CALI_ENABLE 0x01 777 + #define JM_PHY_EXT_COMM_2_CALI_MODE_0 0x02 778 + #define JM_PHY_EXT_COMM_2_CALI_LATCH 0x10 779 + #define PCI_PRIV_SHARE_NICCTRL 0xF5 780 + #define JME_FLAG_PHYEA_ENABLE 0x2 781 + 763 782 /* 764 783 * Wakeup Frame setup interface registers 765 784 */
+2 -1
drivers/net/wireless/ath/ath9k/hw.c
··· 1827 1827 } 1828 1828 1829 1829 /* Clear Bit 14 of AR_WA after putting chip into Full Sleep mode. */ 1830 - REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE); 1830 + if (AR_SREV_9300_20_OR_LATER(ah)) 1831 + REG_WRITE(ah, AR_WA, ah->WARegVal & ~AR_WA_D3_L1_DISABLE); 1831 1832 } 1832 1833 1833 1834 /*
+9 -8
drivers/net/wireless/rtlwifi/ps.c
··· 395 395 if (mac->link_state != MAC80211_LINKED) 396 396 return; 397 397 398 - spin_lock(&rtlpriv->locks.lps_lock); 398 + spin_lock_irq(&rtlpriv->locks.lps_lock); 399 399 400 400 /* Idle for a while if we connect to AP a while ago. */ 401 401 if (mac->cnt_after_linked >= 2) { ··· 407 407 } 408 408 } 409 409 410 - spin_unlock(&rtlpriv->locks.lps_lock); 410 + spin_unlock_irq(&rtlpriv->locks.lps_lock); 411 411 } 412 412 413 413 /*Leave the leisure power save mode.*/ ··· 416 416 struct rtl_priv *rtlpriv = rtl_priv(hw); 417 417 struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw)); 418 418 struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); 419 + unsigned long flags; 419 420 420 - spin_lock(&rtlpriv->locks.lps_lock); 421 + spin_lock_irqsave(&rtlpriv->locks.lps_lock, flags); 421 422 422 423 if (ppsc->fwctrl_lps) { 423 424 if (ppsc->dot11_psmode != EACTIVE) { ··· 439 438 rtl_lps_set_psmode(hw, EACTIVE); 440 439 } 441 440 } 442 - spin_unlock(&rtlpriv->locks.lps_lock); 441 + spin_unlock_irqrestore(&rtlpriv->locks.lps_lock, flags); 443 442 } 444 443 445 444 /* For sw LPS*/ ··· 540 539 RT_CLEAR_PS_LEVEL(ppsc, RT_PS_LEVEL_ASPM); 541 540 } 542 541 543 - spin_lock(&rtlpriv->locks.lps_lock); 542 + spin_lock_irq(&rtlpriv->locks.lps_lock); 544 543 rtl_ps_set_rf_state(hw, ERFON, RF_CHANGE_BY_PS); 545 - spin_unlock(&rtlpriv->locks.lps_lock); 544 + spin_unlock_irq(&rtlpriv->locks.lps_lock); 546 545 } 547 546 548 547 void rtl_swlps_rfon_wq_callback(void *data) ··· 575 574 if (rtlpriv->link_info.busytraffic) 576 575 return; 577 576 578 - spin_lock(&rtlpriv->locks.lps_lock); 577 + spin_lock_irq(&rtlpriv->locks.lps_lock); 579 578 rtl_ps_set_rf_state(hw, ERFSLEEP, RF_CHANGE_BY_PS); 580 - spin_unlock(&rtlpriv->locks.lps_lock); 579 + spin_unlock_irq(&rtlpriv->locks.lps_lock); 581 580 582 581 if (ppsc->reg_rfps_level & RT_RF_OFF_LEVL_ASPM && 583 582 !RT_IN_PS_LEVEL(ppsc, RT_PS_LEVEL_ASPM)) {
+9 -7
drivers/of/irq.c
··· 60 60 */ 61 61 struct device_node *of_irq_find_parent(struct device_node *child) 62 62 { 63 - struct device_node *p, *c = child; 63 + struct device_node *p; 64 64 const __be32 *parp; 65 65 66 - if (!of_node_get(c)) 66 + if (!of_node_get(child)) 67 67 return NULL; 68 68 69 69 do { 70 - parp = of_get_property(c, "interrupt-parent", NULL); 70 + parp = of_get_property(child, "interrupt-parent", NULL); 71 71 if (parp == NULL) 72 - p = of_get_parent(c); 72 + p = of_get_parent(child); 73 73 else { 74 74 if (of_irq_workarounds & OF_IMAP_NO_PHANDLE) 75 75 p = of_node_get(of_irq_dflt_pic); 76 76 else 77 77 p = of_find_node_by_phandle(be32_to_cpup(parp)); 78 78 } 79 - of_node_put(c); 80 - c = p; 79 + of_node_put(child); 80 + child = p; 81 81 } while (p && of_get_property(p, "#interrupt-cells", NULL) == NULL); 82 82 83 - return (p == child) ? NULL : p; 83 + return p; 84 84 } 85 85 86 86 /** ··· 424 424 425 425 desc->dev = np; 426 426 desc->interrupt_parent = of_irq_find_parent(np); 427 + if (desc->interrupt_parent == np) 428 + desc->interrupt_parent = NULL; 427 429 list_add_tail(&desc->list, &intc_desc_list); 428 430 } 429 431
+1
drivers/pci/Kconfig
··· 76 76 77 77 config PCI_PRI 78 78 bool "PCI PRI support" 79 + depends on PCI 79 80 select PCI_ATS 80 81 help 81 82 PRI is the PCI Page Request Interface. It allows PCI devices that are
+24 -5
drivers/pci/hotplug/acpiphp_glue.c
··· 459 459 { 460 460 acpi_status status; 461 461 unsigned long long tmp; 462 + struct acpi_pci_root *root; 462 463 acpi_handle dummy_handle; 464 + 465 + /* 466 + * We shouldn't use this bridge if PCIe native hotplug control has been 467 + * granted by the BIOS for it. 468 + */ 469 + root = acpi_pci_find_root(handle); 470 + if (root && (root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL)) 471 + return -ENODEV; 463 472 464 473 /* if the bridge doesn't have _STA, we assume it is always there */ 465 474 status = acpi_get_handle(handle, "_STA", &dummy_handle); ··· 1385 1376 static acpi_status 1386 1377 find_root_bridges(acpi_handle handle, u32 lvl, void *context, void **rv) 1387 1378 { 1379 + struct acpi_pci_root *root; 1388 1380 int *count = (int *)context; 1389 1381 1390 - if (acpi_is_root_bridge(handle)) { 1391 - acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY, 1392 - handle_hotplug_event_bridge, NULL); 1393 - (*count)++; 1394 - } 1382 + if (!acpi_is_root_bridge(handle)) 1383 + return AE_OK; 1384 + 1385 + root = acpi_pci_find_root(handle); 1386 + if (!root) 1387 + return AE_OK; 1388 + 1389 + if (root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL) 1390 + return AE_OK; 1391 + 1392 + (*count)++; 1393 + acpi_install_notify_handler(handle, ACPI_SYSTEM_NOTIFY, 1394 + handle_hotplug_event_bridge, NULL); 1395 + 1395 1396 return AE_OK ; 1396 1397 } 1397 1398
-3
drivers/pci/hotplug/pciehp_ctrl.c
··· 213 213 goto err_exit; 214 214 } 215 215 216 - /* Wait for 1 second after checking link training status */ 217 - msleep(1000); 218 - 219 216 /* Check for a power fault */ 220 217 if (ctrl->power_fault_detected || pciehp_query_power_fault(p_slot)) { 221 218 ctrl_err(ctrl, "Power fault on slot %s\n", slot_name(p_slot));
+18 -9
drivers/pci/hotplug/pciehp_hpc.c
··· 280 280 else 281 281 msleep(1000); 282 282 283 + /* 284 + * Need to wait for 1000 ms after Data Link Layer Link Active 285 + * (DLLLA) bit reads 1b before sending configuration request. 286 + * We need it before checking Link Training (LT) bit becuase 287 + * LT is still set even after DLLLA bit is set on some platform. 288 + */ 289 + msleep(1000); 290 + 283 291 retval = pciehp_readw(ctrl, PCI_EXP_LNKSTA, &lnk_status); 284 292 if (retval) { 285 293 ctrl_err(ctrl, "Cannot read LNKSTATUS register\n"); ··· 301 293 retval = -1; 302 294 return retval; 303 295 } 296 + 297 + /* 298 + * If the port supports Link speeds greater than 5.0 GT/s, we 299 + * must wait for 100 ms after Link training completes before 300 + * sending configuration request. 301 + */ 302 + if (ctrl->pcie->port->subordinate->max_bus_speed > PCIE_SPEED_5_0GT) 303 + msleep(100); 304 + 305 + pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status); 304 306 305 307 return retval; 306 308 } ··· 502 484 u16 slot_cmd; 503 485 u16 cmd_mask; 504 486 u16 slot_status; 505 - u16 lnk_status; 506 487 int retval = 0; 507 488 508 489 /* Clear sticky power-fault bit from previous power failures */ ··· 532 515 } 533 516 ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__, 534 517 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd); 535 - 536 - retval = pciehp_readw(ctrl, PCI_EXP_LNKSTA, &lnk_status); 537 - if (retval) { 538 - ctrl_err(ctrl, "%s: Cannot read LNKSTA register\n", 539 - __func__); 540 - return retval; 541 - } 542 - pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status); 543 518 544 519 return retval; 545 520 }
+2 -2
drivers/pci/hotplug/shpchp_core.c
··· 278 278 279 279 static int is_shpc_capable(struct pci_dev *dev) 280 280 { 281 - if ((dev->vendor == PCI_VENDOR_ID_AMD) || (dev->device == 282 - PCI_DEVICE_ID_AMD_GOLAM_7450)) 281 + if (dev->vendor == PCI_VENDOR_ID_AMD && 282 + dev->device == PCI_DEVICE_ID_AMD_GOLAM_7450) 283 283 return 1; 284 284 if (!pci_find_capability(dev, PCI_CAP_ID_SHPC)) 285 285 return 0;
+2 -2
drivers/pci/hotplug/shpchp_hpc.c
··· 944 944 ctrl->pci_dev = pdev; /* pci_dev of the P2P bridge */ 945 945 ctrl_dbg(ctrl, "Hotplug Controller:\n"); 946 946 947 - if ((pdev->vendor == PCI_VENDOR_ID_AMD) || (pdev->device == 948 - PCI_DEVICE_ID_AMD_GOLAM_7450)) { 947 + if (pdev->vendor == PCI_VENDOR_ID_AMD && 948 + pdev->device == PCI_DEVICE_ID_AMD_GOLAM_7450) { 949 949 /* amd shpc driver doesn't use Base Offset; assume 0 */ 950 950 ctrl->mmio_base = pci_resource_start(pdev, 0); 951 951 ctrl->mmio_size = pci_resource_len(pdev, 0);
+1 -1
drivers/regulator/aat2870-regulator.c
··· 160 160 break; 161 161 } 162 162 163 - if (!ri) 163 + if (i == ARRAY_SIZE(aat2870_regulators)) 164 164 return NULL; 165 165 166 166 ri->enable_addr = AAT2870_LDO_EN;
+1 -1
drivers/regulator/core.c
··· 2799 2799 list_del(&rdev->list); 2800 2800 if (rdev->supply) 2801 2801 regulator_put(rdev->supply); 2802 - device_unregister(&rdev->dev); 2803 2802 kfree(rdev->constraints); 2803 + device_unregister(&rdev->dev); 2804 2804 mutex_unlock(&regulator_list_mutex); 2805 2805 } 2806 2806 EXPORT_SYMBOL_GPL(regulator_unregister);
+8 -6
drivers/regulator/tps65910-regulator.c
··· 664 664 665 665 switch (id) { 666 666 case TPS65910_REG_VDD1: 667 - dcdc_mult = (selector / VDD1_2_NUM_VOLTS) + 1; 667 + dcdc_mult = (selector / VDD1_2_NUM_VOLT_FINE) + 1; 668 668 if (dcdc_mult == 1) 669 669 dcdc_mult--; 670 - vsel = (selector % VDD1_2_NUM_VOLTS) + 3; 670 + vsel = (selector % VDD1_2_NUM_VOLT_FINE) + 3; 671 671 672 672 tps65910_modify_bits(pmic, TPS65910_VDD1, 673 673 (dcdc_mult << VDD1_VGAIN_SEL_SHIFT), ··· 675 675 tps65910_reg_write(pmic, TPS65910_VDD1_OP, vsel); 676 676 break; 677 677 case TPS65910_REG_VDD2: 678 - dcdc_mult = (selector / VDD1_2_NUM_VOLTS) + 1; 678 + dcdc_mult = (selector / VDD1_2_NUM_VOLT_FINE) + 1; 679 679 if (dcdc_mult == 1) 680 680 dcdc_mult--; 681 - vsel = (selector % VDD1_2_NUM_VOLTS) + 3; 681 + vsel = (selector % VDD1_2_NUM_VOLT_FINE) + 3; 682 682 683 683 tps65910_modify_bits(pmic, TPS65910_VDD2, 684 684 (dcdc_mult << VDD2_VGAIN_SEL_SHIFT), ··· 756 756 switch (id) { 757 757 case TPS65910_REG_VDD1: 758 758 case TPS65910_REG_VDD2: 759 - mult = (selector / VDD1_2_NUM_VOLTS) + 1; 759 + mult = (selector / VDD1_2_NUM_VOLT_FINE) + 1; 760 760 volt = VDD1_2_MIN_VOLT + 761 - (selector % VDD1_2_NUM_VOLTS) * VDD1_2_OFFSET; 761 + (selector % VDD1_2_NUM_VOLT_FINE) * VDD1_2_OFFSET; 762 762 break; 763 763 case TPS65911_REG_VDDCTRL: 764 764 volt = VDDCTRL_MIN_VOLT + (selector * VDDCTRL_OFFSET); ··· 947 947 948 948 if (i == TPS65910_REG_VDD1 || i == TPS65910_REG_VDD2) { 949 949 pmic->desc[i].ops = &tps65910_ops_dcdc; 950 + pmic->desc[i].n_voltages = VDD1_2_NUM_VOLT_FINE * 951 + VDD1_2_NUM_VOLT_COARSE; 950 952 } else if (i == TPS65910_REG_VDD3) { 951 953 if (tps65910_chip_id(tps65910) == TPS65910) 952 954 pmic->desc[i].ops = &tps65910_ops_vdd3;
+44 -2
drivers/regulator/twl-regulator.c
··· 71 71 #define VREG_TYPE 1 72 72 #define VREG_REMAP 2 73 73 #define VREG_DEDICATED 3 /* LDO control */ 74 + #define VREG_VOLTAGE_SMPS_4030 9 74 75 /* TWL6030 register offsets */ 75 76 #define VREG_TRANS 1 76 77 #define VREG_STATE 2 ··· 515 514 .get_status = twl4030reg_get_status, 516 515 }; 517 516 517 + static int 518 + twl4030smps_set_voltage(struct regulator_dev *rdev, int min_uV, int max_uV, 519 + unsigned *selector) 520 + { 521 + struct twlreg_info *info = rdev_get_drvdata(rdev); 522 + int vsel = DIV_ROUND_UP(min_uV - 600000, 12500); 523 + 524 + twlreg_write(info, TWL_MODULE_PM_RECEIVER, VREG_VOLTAGE_SMPS_4030, 525 + vsel); 526 + return 0; 527 + } 528 + 529 + static int twl4030smps_get_voltage(struct regulator_dev *rdev) 530 + { 531 + struct twlreg_info *info = rdev_get_drvdata(rdev); 532 + int vsel = twlreg_read(info, TWL_MODULE_PM_RECEIVER, 533 + VREG_VOLTAGE_SMPS_4030); 534 + 535 + return vsel * 12500 + 600000; 536 + } 537 + 538 + static struct regulator_ops twl4030smps_ops = { 539 + .set_voltage = twl4030smps_set_voltage, 540 + .get_voltage = twl4030smps_get_voltage, 541 + }; 542 + 518 543 static int twl6030ldo_list_voltage(struct regulator_dev *rdev, unsigned index) 519 544 { 520 545 struct twlreg_info *info = rdev_get_drvdata(rdev); ··· 883 856 }, \ 884 857 } 885 858 859 + #define TWL4030_ADJUSTABLE_SMPS(label, offset, num, turnon_delay, remap_conf) \ 860 + { \ 861 + .base = offset, \ 862 + .id = num, \ 863 + .delay = turnon_delay, \ 864 + .remap = remap_conf, \ 865 + .desc = { \ 866 + .name = #label, \ 867 + .id = TWL4030_REG_##label, \ 868 + .ops = &twl4030smps_ops, \ 869 + .type = REGULATOR_VOLTAGE, \ 870 + .owner = THIS_MODULE, \ 871 + }, \ 872 + } 873 + 886 874 #define TWL6030_ADJUSTABLE_LDO(label, offset, min_mVolts, max_mVolts) { \ 887 875 .base = offset, \ 888 876 .min_mV = min_mVolts, \ ··· 989 947 TWL4030_ADJUSTABLE_LDO(VINTANA2, 0x43, 12, 100, 0x08), 990 948 TWL4030_FIXED_LDO(VINTDIG, 0x47, 1500, 13, 100, 0x08), 991 949 
TWL4030_ADJUSTABLE_LDO(VIO, 0x4b, 14, 1000, 0x08), 992 - TWL4030_ADJUSTABLE_LDO(VDD1, 0x55, 15, 1000, 0x08), 993 - TWL4030_ADJUSTABLE_LDO(VDD2, 0x63, 16, 1000, 0x08), 950 + TWL4030_ADJUSTABLE_SMPS(VDD1, 0x55, 15, 1000, 0x08), 951 + TWL4030_ADJUSTABLE_SMPS(VDD2, 0x63, 16, 1000, 0x08), 994 952 TWL4030_FIXED_LDO(VUSB1V5, 0x71, 1500, 17, 100, 0x08), 995 953 TWL4030_FIXED_LDO(VUSB1V8, 0x74, 1800, 18, 100, 0x08), 996 954 TWL4030_FIXED_LDO(VUSB3V1, 0x77, 3100, 19, 150, 0x08),
+1 -1
drivers/spi/spi-nuc900.c
··· 426 426 goto err_clk; 427 427 } 428 428 429 - mfp_set_groupg(&pdev->dev); 429 + mfp_set_groupg(&pdev->dev, NULL); 430 430 nuc900_init_spi(hw); 431 431 432 432 err = spi_bitbang_start(&hw->bitbang);
+2 -1
drivers/staging/et131x/Kconfig
··· 1 1 config ET131X 2 2 tristate "Agere ET-1310 Gigabit Ethernet support" 3 - depends on PCI 3 + depends on PCI && NET && NETDEVICES 4 + select PHYLIB 4 5 default n 5 6 ---help--- 6 7 This driver supports Agere ET-1310 ethernet adapters.
+6 -6
drivers/staging/et131x/et131x.c
··· 4469 4469 return 0; 4470 4470 } 4471 4471 4472 + static SIMPLE_DEV_PM_OPS(et131x_pm_ops, et131x_suspend, et131x_resume); 4473 + #define ET131X_PM_OPS (&et131x_pm_ops) 4474 + #else 4475 + #define ET131X_PM_OPS NULL 4476 + #endif 4477 + 4472 4478 /* ISR functions */ 4473 4479 4474 4480 /** ··· 5475 5469 err_out: 5476 5470 return result; 5477 5471 } 5478 - 5479 - static SIMPLE_DEV_PM_OPS(et131x_pm_ops, et131x_suspend, et131x_resume); 5480 - #define ET131X_PM_OPS (&et131x_pm_ops) 5481 - #else 5482 - #define ET131X_PM_OPS NULL 5483 - #endif 5484 5472 5485 5473 static DEFINE_PCI_DEVICE_TABLE(et131x_pci_table) = { 5486 5474 { PCI_VDEVICE(ATT, ET131X_PCI_DEVICE_ID_GIG), 0UL},
+16 -9
drivers/staging/iio/industrialio-core.c
··· 242 242 243 243 static int iio_event_getfd(struct iio_dev *indio_dev) 244 244 { 245 - if (indio_dev->event_interface == NULL) 245 + struct iio_event_interface *ev_int = indio_dev->event_interface; 246 + int fd; 247 + 248 + if (ev_int == NULL) 246 249 return -ENODEV; 247 250 248 - mutex_lock(&indio_dev->event_interface->event_list_lock); 249 - if (test_and_set_bit(IIO_BUSY_BIT_POS, 250 - &indio_dev->event_interface->flags)) { 251 - mutex_unlock(&indio_dev->event_interface->event_list_lock); 251 + mutex_lock(&ev_int->event_list_lock); 252 + if (test_and_set_bit(IIO_BUSY_BIT_POS, &ev_int->flags)) { 253 + mutex_unlock(&ev_int->event_list_lock); 252 254 return -EBUSY; 253 255 } 254 - mutex_unlock(&indio_dev->event_interface->event_list_lock); 255 - return anon_inode_getfd("iio:event", 256 - &iio_event_chrdev_fileops, 257 - indio_dev->event_interface, O_RDONLY); 256 + mutex_unlock(&ev_int->event_list_lock); 257 + fd = anon_inode_getfd("iio:event", 258 + &iio_event_chrdev_fileops, ev_int, O_RDONLY); 259 + if (fd < 0) { 260 + mutex_lock(&ev_int->event_list_lock); 261 + clear_bit(IIO_BUSY_BIT_POS, &ev_int->flags); 262 + mutex_unlock(&ev_int->event_list_lock); 263 + } 264 + return fd; 258 265 } 259 266 260 267 static int __init iio_init(void)
+1 -1
drivers/staging/slicoss/Kconfig
··· 1 1 config SLICOSS 2 2 tristate "Alacritech Gigabit IS-NIC support" 3 - depends on PCI && X86 3 + depends on PCI && X86 && NET 4 4 default n 5 5 help 6 6 This driver supports Alacritech's IS-NIC gigabit ethernet cards.
+2
drivers/tty/hvc/hvc_dcc.c
··· 46 46 47 47 asm volatile("mrc p14, 0, %0, c0, c5, 0 @ read comms data reg" 48 48 : "=r" (__c)); 49 + isb(); 49 50 50 51 return __c; 51 52 } ··· 56 55 asm volatile("mcr p14, 0, %0, c0, c5, 0 @ write a char" 57 56 : /* no output register */ 58 57 : "r" (c)); 58 + isb(); 59 59 } 60 60 61 61 static int hvc_dcc_put_chars(uint32_t vt, const char *buf, int count)
+7 -7
drivers/tty/serial/Kconfig
··· 1560 1560 Support for the IFX6x60 modem devices on Intel MID platforms. 1561 1561 1562 1562 config SERIAL_PCH_UART 1563 - tristate "Intel EG20T PCH / OKI SEMICONDUCTOR IOH(ML7213/ML7223) UART" 1563 + tristate "Intel EG20T PCH/LAPIS Semicon IOH(ML7213/ML7223/ML7831) UART" 1564 1564 depends on PCI 1565 1565 select SERIAL_CORE 1566 1566 help ··· 1568 1568 which is an IOH(Input/Output Hub) for x86 embedded processor. 1569 1569 Enabling PCH_DMA, this PCH UART works as DMA mode. 1570 1570 1571 - This driver also can be used for OKI SEMICONDUCTOR IOH(Input/ 1572 - Output Hub), ML7213 and ML7223. 1573 - ML7213 IOH is for IVI(In-Vehicle Infotainment) use and ML7223 IOH is 1574 - for MP(Media Phone) use. 1575 - ML7213/ML7223 is companion chip for Intel Atom E6xx series. 1576 - ML7213/ML7223 is completely compatible for Intel EG20T PCH. 1571 + This driver also can be used for LAPIS Semiconductor IOH(Input/ 1572 + Output Hub), ML7213, ML7223 and ML7831. 1573 + ML7213 IOH is for IVI(In-Vehicle Infotainment) use, ML7223 IOH is 1574 + for MP(Media Phone) use and ML7831 IOH is for general purpose use. 1575 + ML7213/ML7223/ML7831 is companion chip for Intel Atom E6xx series. 1576 + ML7213/ML7223/ML7831 is completely compatible for Intel EG20T PCH. 1577 1577 1578 1578 config SERIAL_MSM_SMD 1579 1579 bool "Enable tty device interface for some SMD ports"
+3 -13
drivers/tty/serial/atmel_serial.c
··· 228 228 if (rs485conf->flags & SER_RS485_ENABLED) { 229 229 dev_dbg(port->dev, "Setting UART to RS485\n"); 230 230 atmel_port->tx_done_mask = ATMEL_US_TXEMPTY; 231 - if (rs485conf->flags & SER_RS485_RTS_AFTER_SEND) 231 + if ((rs485conf->delay_rts_after_send) > 0) 232 232 UART_PUT_TTGR(port, rs485conf->delay_rts_after_send); 233 233 mode |= ATMEL_US_USMODE_RS485; 234 234 } else { ··· 304 304 305 305 if (atmel_port->rs485.flags & SER_RS485_ENABLED) { 306 306 dev_dbg(port->dev, "Setting UART to RS485\n"); 307 - if (atmel_port->rs485.flags & SER_RS485_RTS_AFTER_SEND) 307 + if ((atmel_port->rs485.delay_rts_after_send) > 0) 308 308 UART_PUT_TTGR(port, 309 309 atmel_port->rs485.delay_rts_after_send); 310 310 mode |= ATMEL_US_USMODE_RS485; ··· 1228 1228 1229 1229 if (atmel_port->rs485.flags & SER_RS485_ENABLED) { 1230 1230 dev_dbg(port->dev, "Setting UART to RS485\n"); 1231 - if (atmel_port->rs485.flags & SER_RS485_RTS_AFTER_SEND) 1231 + if ((atmel_port->rs485.delay_rts_after_send) > 0) 1232 1232 UART_PUT_TTGR(port, 1233 1233 atmel_port->rs485.delay_rts_after_send); 1234 1234 mode |= ATMEL_US_USMODE_RS485; ··· 1446 1446 rs485conf->delay_rts_before_send = rs485_delay[0]; 1447 1447 rs485conf->delay_rts_after_send = rs485_delay[1]; 1448 1448 rs485conf->flags = 0; 1449 - 1450 - if (rs485conf->delay_rts_before_send == 0 && 1451 - rs485conf->delay_rts_after_send == 0) { 1452 - rs485conf->flags |= SER_RS485_RTS_ON_SEND; 1453 - } else { 1454 - if (rs485conf->delay_rts_before_send) 1455 - rs485conf->flags |= SER_RS485_RTS_BEFORE_SEND; 1456 - if (rs485conf->delay_rts_after_send) 1457 - rs485conf->flags |= SER_RS485_RTS_AFTER_SEND; 1458 - } 1459 1449 1460 1450 if (of_get_property(np, "rs485-rx-during-tx", NULL)) 1461 1451 rs485conf->flags |= SER_RS485_RX_DURING_TX;
+2 -8
drivers/tty/serial/crisv10.c
··· 3234 3234 e100_disable_rx(info); 3235 3235 e100_enable_rx_irq(info); 3236 3236 #endif 3237 - if ((info->rs485.flags & SER_RS485_RTS_BEFORE_SEND) && 3238 - (info->rs485.delay_rts_before_send > 0)) 3239 - msleep(info->rs485.delay_rts_before_send); 3237 + if (info->rs485.delay_rts_before_send > 0) 3238 + msleep(info->rs485.delay_rts_before_send); 3240 3239 } 3241 3240 #endif /* CONFIG_ETRAX_RS485 */ 3242 3241 ··· 3692 3693 3693 3694 rs485data.delay_rts_before_send = rs485ctrl.delay_rts_before_send; 3694 3695 rs485data.flags = 0; 3695 - if (rs485data.delay_rts_before_send != 0) 3696 - rs485data.flags |= SER_RS485_RTS_BEFORE_SEND; 3697 - else 3698 - rs485data.flags &= ~(SER_RS485_RTS_BEFORE_SEND); 3699 3696 3700 3697 if (rs485ctrl.enabled) 3701 3698 rs485data.flags |= SER_RS485_ENABLED; ··· 4526 4531 /* Set sane defaults */ 4527 4532 info->rs485.flags &= ~(SER_RS485_RTS_ON_SEND); 4528 4533 info->rs485.flags |= SER_RS485_RTS_AFTER_SEND; 4529 - info->rs485.flags &= ~(SER_RS485_RTS_BEFORE_SEND); 4530 4534 info->rs485.delay_rts_before_send = 0; 4531 4535 info->rs485.flags &= ~(SER_RS485_ENABLED); 4532 4536 #endif
+1 -3
drivers/tty/serial/mfd.c
··· 884 884 { 885 885 struct uart_hsu_port *up = 886 886 container_of(port, struct uart_hsu_port, port); 887 - struct tty_struct *tty = port->state->port.tty; 888 887 unsigned char cval, fcr = 0; 889 888 unsigned long flags; 890 889 unsigned int baud, quot; ··· 906 907 } 907 908 908 909 /* CMSPAR isn't supported by this driver */ 909 - if (tty) 910 - tty->termios->c_cflag &= ~CMSPAR; 910 + termios->c_cflag &= ~CMSPAR; 911 911 912 912 if (termios->c_cflag & CSTOPB) 913 913 cval |= UART_LCR_STOP;
+14 -5
drivers/tty/serial/pch_uart.c
··· 1 1 /* 2 - *Copyright (C) 2010 OKI SEMICONDUCTOR CO., LTD. 2 + *Copyright (C) 2011 LAPIS Semiconductor Co., Ltd. 3 3 * 4 4 *This program is free software; you can redistribute it and/or modify 5 5 *it under the terms of the GNU General Public License as published by ··· 46 46 47 47 /* Set the max number of UART port 48 48 * Intel EG20T PCH: 4 port 49 - * OKI SEMICONDUCTOR ML7213 IOH: 3 port 50 - * OKI SEMICONDUCTOR ML7223 IOH: 2 port 49 + * LAPIS Semiconductor ML7213 IOH: 3 port 50 + * LAPIS Semiconductor ML7223 IOH: 2 port 51 51 */ 52 52 #define PCH_UART_NR 4 53 53 ··· 258 258 pch_ml7213_uart2, 259 259 pch_ml7223_uart0, 260 260 pch_ml7223_uart1, 261 + pch_ml7831_uart0, 262 + pch_ml7831_uart1, 261 263 }; 262 264 263 265 static struct pch_uart_driver_data drv_dat[] = { ··· 272 270 [pch_ml7213_uart2] = {PCH_UART_2LINE, 2}, 273 271 [pch_ml7223_uart0] = {PCH_UART_8LINE, 0}, 274 272 [pch_ml7223_uart1] = {PCH_UART_2LINE, 1}, 273 + [pch_ml7831_uart0] = {PCH_UART_8LINE, 0}, 274 + [pch_ml7831_uart1] = {PCH_UART_2LINE, 1}, 275 275 }; 276 276 277 277 static unsigned int default_baud = 9600; ··· 632 628 dev_err(priv->port.dev, "%s:dma_request_channel FAILS(Rx)\n", 633 629 __func__); 634 630 dma_release_channel(priv->chan_tx); 631 + priv->chan_tx = NULL; 635 632 return; 636 633 } 637 634 ··· 1220 1215 dev_err(priv->port.dev, 1221 1216 "pch_uart_hal_set_fifo Failed(ret=%d)\n", ret); 1222 1217 1223 - if (priv->use_dma_flag) 1224 - pch_free_dma(port); 1218 + pch_free_dma(port); 1225 1219 1226 1220 free_irq(priv->port.irq, priv); 1227 1221 } ··· 1284 1280 if (rtn) 1285 1281 goto out; 1286 1282 1283 + pch_uart_set_mctrl(&priv->port, priv->port.mctrl); 1287 1284 /* Don't rewrite B0 */ 1288 1285 if (tty_termios_baud_rate(termios)) 1289 1286 tty_termios_encode_baud_rate(termios, baud, baud); ··· 1557 1552 .driver_data = pch_ml7223_uart0}, 1558 1553 {PCI_DEVICE(PCI_VENDOR_ID_ROHM, 0x800D), 1559 1554 .driver_data = pch_ml7223_uart1}, 1555 + {PCI_DEVICE(PCI_VENDOR_ID_ROHM, 0x8811), 1556 + .driver_data = pch_ml7831_uart0}, 1557 + {PCI_DEVICE(PCI_VENDOR_ID_ROHM, 0x8812), 1558 + .driver_data = pch_ml7831_uart1}, 1560 1559 {0,}, 1561 1560 }; 1562 1561
+23 -7
drivers/tty/tty_ldisc.c
··· 36 36 37 37 #include <linux/kmod.h> 38 38 #include <linux/nsproxy.h> 39 + #include <linux/ratelimit.h> 39 40 40 41 /* 41 42 * This guards the refcounted line discipline lists. The lock ··· 548 547 /** 549 548 * tty_ldisc_wait_idle - wait for the ldisc to become idle 550 549 * @tty: tty to wait for 550 + * @timeout: for how long to wait at most 551 551 * 552 552 * Wait for the line discipline to become idle. The discipline must 553 553 * have been halted for this to guarantee it remains idle. 554 554 */ 555 - static int tty_ldisc_wait_idle(struct tty_struct *tty) 555 + static int tty_ldisc_wait_idle(struct tty_struct *tty, long timeout) 556 556 { 557 - int ret; 557 + long ret; 558 558 ret = wait_event_timeout(tty_ldisc_idle, 559 - atomic_read(&tty->ldisc->users) == 1, 5 * HZ); 559 + atomic_read(&tty->ldisc->users) == 1, timeout); 560 560 if (ret < 0) 561 561 return ret; 562 562 return ret > 0 ? 0 : -EBUSY; ··· 667 665 668 666 tty_ldisc_flush_works(tty); 669 667 670 - retval = tty_ldisc_wait_idle(tty); 668 + retval = tty_ldisc_wait_idle(tty, 5 * HZ); 671 669 672 670 tty_lock(); 673 671 mutex_lock(&tty->ldisc_mutex); ··· 764 762 if (IS_ERR(ld)) 765 763 return -1; 766 764 767 - WARN_ON_ONCE(tty_ldisc_wait_idle(tty)); 768 - 769 765 tty_ldisc_close(tty, tty->ldisc); 770 766 tty_ldisc_put(tty->ldisc); 771 767 tty->ldisc = NULL; ··· 838 838 tty_unlock(); 839 839 cancel_work_sync(&tty->buf.work); 840 840 mutex_unlock(&tty->ldisc_mutex); 841 - 841 + retry: 842 842 tty_lock(); 843 843 mutex_lock(&tty->ldisc_mutex); 844 844 ··· 847 847 it means auditing a lot of other paths so this is 848 848 a FIXME */ 849 849 if (tty->ldisc) { /* Not yet closed */ 850 + if (atomic_read(&tty->ldisc->users) != 1) { 851 + char cur_n[TASK_COMM_LEN], tty_n[64]; 852 + long timeout = 3 * HZ; 853 + tty_unlock(); 854 + 855 + while (tty_ldisc_wait_idle(tty, timeout) == -EBUSY) { 856 + timeout = MAX_SCHEDULE_TIMEOUT; 857 + printk_ratelimited(KERN_WARNING 858 + "%s: waiting (%s) for %s took too long, but we keep waiting...\n", 859 + __func__, get_task_comm(cur_n, current), 860 + tty_name(tty, tty_n)); 861 + } 862 + mutex_unlock(&tty->ldisc_mutex); 863 + goto retry; 864 + } 865 + 850 866 if (reset == 0) { 851 867 852 868 if (!tty_ldisc_reinit(tty, tty->termios->c_line))
+5 -3
drivers/usb/class/cdc-acm.c
··· 539 539 { 540 540 int i; 541 541 542 - mutex_lock(&open_mutex); 543 542 if (acm->dev) { 544 543 usb_autopm_get_interface(acm->control); 545 544 acm_set_control(acm, acm->ctrlout = 0); ··· 550 551 acm->control->needs_remote_wakeup = 0; 551 552 usb_autopm_put_interface(acm->control); 552 553 } 553 - mutex_unlock(&open_mutex); 554 554 } 555 555 556 556 static void acm_tty_hangup(struct tty_struct *tty) 557 557 { 558 558 struct acm *acm = tty->driver_data; 559 559 tty_port_hangup(&acm->port); 560 + mutex_lock(&open_mutex); 560 561 acm_port_down(acm); 562 + mutex_unlock(&open_mutex); 561 563 } 562 564 563 565 static void acm_tty_close(struct tty_struct *tty, struct file *filp) ··· 569 569 shutdown */ 570 570 if (!acm) 571 571 return; 572 + 573 + mutex_lock(&open_mutex); 572 574 if (tty_port_close_start(&acm->port, tty, filp) == 0) { 573 - mutex_lock(&open_mutex); 574 575 if (!acm->dev) { 575 576 tty_port_tty_set(&acm->port, NULL); 576 577 acm_tty_unregister(acm); ··· 583 582 acm_port_down(acm); 584 583 tty_port_close_end(&acm->port, tty); 585 584 tty_port_tty_set(&acm->port, NULL); 585 + mutex_unlock(&open_mutex); 586 586 } 587 587 588 588 static int acm_tty_write(struct tty_struct *tty,
+6
drivers/usb/core/hub.c
··· 813 813 USB_PORT_FEAT_C_PORT_LINK_STATE); 814 814 } 815 815 816 + if ((portchange & USB_PORT_STAT_C_BH_RESET) && 817 + hub_is_superspeed(hub->hdev)) { 818 + need_debounce_delay = true; 819 + clear_port_feature(hub->hdev, port1, 820 + USB_PORT_FEAT_C_BH_PORT_RESET); 821 + } 816 822 /* We can forget about a "removed" device when there's a 817 823 * physical disconnect or the connect status changes. 818 824 */
+27
drivers/usb/core/quirks.c
··· 50 50 /* Logitech Webcam B/C500 */ 51 51 { USB_DEVICE(0x046d, 0x0807), .driver_info = USB_QUIRK_RESET_RESUME }, 52 52 53 + /* Logitech Webcam C600 */ 54 + { USB_DEVICE(0x046d, 0x0808), .driver_info = USB_QUIRK_RESET_RESUME }, 55 + 53 56 /* Logitech Webcam Pro 9000 */ 54 57 { USB_DEVICE(0x046d, 0x0809), .driver_info = USB_QUIRK_RESET_RESUME }, 58 + 59 + /* Logitech Webcam C905 */ 60 + { USB_DEVICE(0x046d, 0x080a), .driver_info = USB_QUIRK_RESET_RESUME }, 61 + 62 + /* Logitech Webcam C210 */ 63 + { USB_DEVICE(0x046d, 0x0819), .driver_info = USB_QUIRK_RESET_RESUME }, 64 + 65 + /* Logitech Webcam C260 */ 66 + { USB_DEVICE(0x046d, 0x081a), .driver_info = USB_QUIRK_RESET_RESUME }, 55 67 56 68 /* Logitech Webcam C310 */ 57 69 { USB_DEVICE(0x046d, 0x081b), .driver_info = USB_QUIRK_RESET_RESUME }, 58 70 71 + /* Logitech Webcam C910 */ 72 + { USB_DEVICE(0x046d, 0x0821), .driver_info = USB_QUIRK_RESET_RESUME }, 73 + 74 + /* Logitech Webcam C160 */ 75 + { USB_DEVICE(0x046d, 0x0824), .driver_info = USB_QUIRK_RESET_RESUME }, 76 + 59 77 /* Logitech Webcam C270 */ 60 78 { USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME }, 79 + 80 + /* Logitech Quickcam Pro 9000 */ 81 + { USB_DEVICE(0x046d, 0x0990), .driver_info = USB_QUIRK_RESET_RESUME }, 82 + 83 + /* Logitech Quickcam E3500 */ 84 + { USB_DEVICE(0x046d, 0x09a4), .driver_info = USB_QUIRK_RESET_RESUME }, 85 + 86 + /* Logitech Quickcam Vision Pro */ 87 + { USB_DEVICE(0x046d, 0x09a6), .driver_info = USB_QUIRK_RESET_RESUME }, 61 88 62 89 /* Logitech Harmony 700-series */ 63 90 { USB_DEVICE(0x046d, 0xc122), .driver_info = USB_QUIRK_DELAY_INIT },
+1
drivers/usb/dwc3/gadget.c
··· 1284 1284 int ret; 1285 1285 1286 1286 dep->endpoint.maxpacket = 1024; 1287 + dep->endpoint.max_streams = 15; 1287 1288 dep->endpoint.ops = &dwc3_gadget_ep_ops; 1288 1289 list_add_tail(&dep->endpoint.ep_list, 1289 1290 &dwc->gadget.ep_list);
+5 -4
drivers/usb/gadget/Kconfig
··· 469 469 gadget drivers to also be dynamically linked. 470 470 471 471 config USB_EG20T 472 - tristate "Intel EG20T PCH/OKI SEMICONDUCTOR ML7213 IOH UDC" 472 + tristate "Intel EG20T PCH/LAPIS Semiconductor IOH(ML7213/ML7831) UDC" 473 473 depends on PCI 474 474 select USB_GADGET_DUALSPEED 475 475 help ··· 485 485 This driver dose not support interrupt transfer or isochronous 486 486 transfer modes. 487 487 488 - This driver also can be used for OKI SEMICONDUCTOR's ML7213 which is 488 + This driver also can be used for LAPIS Semiconductor's ML7213 which is 489 489 for IVI(In-Vehicle Infotainment) use. 490 - ML7213 is companion chip for Intel Atom E6xx series. 491 - ML7213 is completely compatible for Intel EG20T PCH. 490 + ML7831 is for general purpose use. 491 + ML7213/ML7831 is companion chip for Intel Atom E6xx series. 492 + ML7213/ML7831 is completely compatible for Intel EG20T PCH. 492 493 493 494 config USB_CI13XXX_MSM 494 495 tristate "MIPS USB CI13xxx for MSM"
+2
drivers/usb/gadget/ci13xxx_msm.c
··· 122 122 return platform_driver_register(&ci13xxx_msm_driver); 123 123 } 124 124 module_init(ci13xxx_msm_init); 125 + 126 + MODULE_LICENSE("GPL v2");
+13 -8
drivers/usb/gadget/ci13xxx_udc.c
··· 71 71 /****************************************************************************** 72 72 * DEFINE 73 73 *****************************************************************************/ 74 + 75 + #define DMA_ADDR_INVALID (~(dma_addr_t)0) 76 + 74 77 /* ctrl register bank access */ 75 78 static DEFINE_SPINLOCK(udc_lock); 76 79 ··· 1437 1434 return -EALREADY; 1438 1435 1439 1436 mReq->req.status = -EALREADY; 1440 - if (length && !mReq->req.dma) { 1437 + if (length && mReq->req.dma == DMA_ADDR_INVALID) { 1441 1438 mReq->req.dma = \ 1442 1439 dma_map_single(mEp->device, mReq->req.buf, 1443 1440 length, mEp->dir ? DMA_TO_DEVICE : ··· 1456 1453 dma_unmap_single(mEp->device, mReq->req.dma, 1457 1454 length, mEp->dir ? DMA_TO_DEVICE : 1458 1455 DMA_FROM_DEVICE); 1459 - mReq->req.dma = 0; 1456 + mReq->req.dma = DMA_ADDR_INVALID; 1460 1457 mReq->map = 0; 1461 1458 } 1462 1459 return -ENOMEM; ··· 1552 1549 if (mReq->map) { 1553 1550 dma_unmap_single(mEp->device, mReq->req.dma, mReq->req.length, 1554 1551 mEp->dir ? DMA_TO_DEVICE : DMA_FROM_DEVICE); 1555 - mReq->req.dma = 0; 1552 + mReq->req.dma = DMA_ADDR_INVALID; 1556 1553 mReq->map = 0; 1557 1554 } 1558 1555 ··· 1613 1610 * @gadget: gadget 1614 1611 * 1615 1612 * This function returns an error code 1616 - * Caller must hold lock 1617 1613 */ 1618 1614 static int _gadget_stop_activity(struct usb_gadget *gadget) 1619 1615 { ··· 2191 2189 mReq = kzalloc(sizeof(struct ci13xxx_req), gfp_flags); 2192 2190 if (mReq != NULL) { 2193 2191 INIT_LIST_HEAD(&mReq->queue); 2192 + mReq->req.dma = DMA_ADDR_INVALID; 2194 2193 2195 2194 mReq->ptr = dma_pool_alloc(mEp->td_pool, gfp_flags, 2196 2195 &mReq->dma); ··· 2331 2328 if (mReq->map) { 2332 2329 dma_unmap_single(mEp->device, mReq->req.dma, mReq->req.length, 2333 2330 mEp->dir ? DMA_TO_DEVICE : DMA_FROM_DEVICE); 2334 - mReq->req.dma = 0; 2331 + mReq->req.dma = DMA_ADDR_INVALID; 2335 2332 mReq->map = 0; 2336 2333 } 2337 2334 req->status = -ECONNRESET; ··· 2503 2500 spin_lock_irqsave(udc->lock, flags); 2504 2501 if (!udc->remote_wakeup) { 2505 2502 ret = -EOPNOTSUPP; 2506 - dbg_trace("remote wakeup feature is not enabled\n"); 2503 + trace("remote wakeup feature is not enabled\n"); 2507 2504 goto out; 2508 2505 } 2509 2506 if (!hw_cread(CAP_PORTSC, PORTSC_SUSP)) { 2510 2507 ret = -EINVAL; 2511 - dbg_trace("port is not suspended\n"); 2508 + trace("port is not suspended\n"); 2512 2509 goto out; 2513 2510 } 2514 2511 hw_cwrite(CAP_PORTSC, PORTSC_FPR, PORTSC_FPR); ··· 2706 2703 if (udc->udc_driver->notify_event) 2707 2704 udc->udc_driver->notify_event(udc, 2708 2705 CI13XXX_CONTROLLER_STOPPED_EVENT); 2706 + spin_unlock_irqrestore(udc->lock, flags); 2709 2707 _gadget_stop_activity(&udc->gadget); 2708 + spin_lock_irqsave(udc->lock, flags); 2710 2709 pm_runtime_put(&udc->gadget.dev); 2711 2710 } 2712 2711 ··· 2855 2850 struct ci13xxx *udc; 2856 2851 int retval = 0; 2857 2852 2858 - trace("%p, %p, %p", dev, regs, name); 2853 + trace("%p, %p, %p", dev, regs, driver->name); 2859 2854 2860 2855 if (dev == NULL || regs == NULL || driver == NULL || 2861 2856 driver->name == NULL)
+4 -2
drivers/usb/gadget/f_mass_storage.c
··· 624 624 if (ctrl->bRequestType != 625 625 (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE)) 626 626 break; 627 - if (w_index != fsg->interface_number || w_value != 0) 627 + if (w_index != fsg->interface_number || w_value != 0 || 628 + w_length != 0) 628 629 return -EDOM; 629 630 630 631 /* ··· 640 639 if (ctrl->bRequestType != 641 640 (USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE)) 642 641 break; 643 - if (w_index != fsg->interface_number || w_value != 0) 642 + if (w_index != fsg->interface_number || w_value != 0 || 643 + w_length != 1) 644 644 return -EDOM; 645 645 VDBG(fsg, "get max LUN\n"); 646 646 *(u8 *)req->buf = fsg->common->nluns - 1;
+61 -69
drivers/usb/gadget/f_midi.c
··· 95 95 96 96 DECLARE_UAC_AC_HEADER_DESCRIPTOR(1); 97 97 DECLARE_USB_MIDI_OUT_JACK_DESCRIPTOR(1); 98 - DECLARE_USB_MIDI_OUT_JACK_DESCRIPTOR(16); 99 98 DECLARE_USB_MS_ENDPOINT_DESCRIPTOR(16); 100 99 101 100 /* B.3.1 Standard AC Interface Descriptor */ ··· 137 138 .bDescriptorSubtype = USB_MS_HEADER, 138 139 .bcdMSC = cpu_to_le16(0x0100), 139 140 /* .wTotalLength = DYNAMIC */ 140 - }; 141 - 142 - /* B.4.3 Embedded MIDI IN Jack Descriptor */ 143 - static struct usb_midi_in_jack_descriptor jack_in_emb_desc = { 144 - .bLength = USB_DT_MIDI_IN_SIZE, 145 - .bDescriptorType = USB_DT_CS_INTERFACE, 146 - .bDescriptorSubtype = USB_MS_MIDI_IN_JACK, 147 - .bJackType = USB_MS_EMBEDDED, 148 - /* .bJackID = DYNAMIC */ 149 - }; 150 - 151 - /* B.4.4 Embedded MIDI OUT Jack Descriptor */ 152 - static struct usb_midi_out_jack_descriptor_16 jack_out_emb_desc = { 153 - /* .bLength = DYNAMIC */ 154 - .bDescriptorType = USB_DT_CS_INTERFACE, 155 - .bDescriptorSubtype = USB_MS_MIDI_OUT_JACK, 156 - .bJackType = USB_MS_EMBEDDED, 157 - /* .bJackID = DYNAMIC */ 158 - /* .bNrInputPins = DYNAMIC */ 159 - /* .pins = DYNAMIC */ 160 - }; 161 142 162 143 /* B.5.1 Standard Bulk OUT Endpoint Descriptor */ ··· 737 758 static int __init 738 759 f_midi_bind(struct usb_configuration *c, struct usb_function *f) 739 760 { 740 - struct usb_descriptor_header *midi_function[(MAX_PORTS * 2) + 12]; 761 + struct usb_descriptor_header **midi_function; 741 762 struct usb_midi_in_jack_descriptor jack_in_ext_desc[MAX_PORTS]; 763 + struct usb_midi_in_jack_descriptor jack_in_emb_desc[MAX_PORTS]; 742 764 struct usb_midi_out_jack_descriptor_1 jack_out_ext_desc[MAX_PORTS]; 765 + struct usb_midi_out_jack_descriptor_1 jack_out_emb_desc[MAX_PORTS]; 743 766 struct usb_composite_dev *cdev = c->cdev; 744 767 struct f_midi *midi = func_to_midi(f); 745 768 int status, n, jack = 1, i = 0; ··· 779 798 goto fail; 780 799 midi->out_ep->driver_data = cdev; /* claim */ 781 800 801 + /* allocate temporary function list */ 802 + midi_function = kcalloc((MAX_PORTS * 4) + 9, sizeof(midi_function), 803 + GFP_KERNEL); 804 + if (!midi_function) { 805 + status = -ENOMEM; 806 + goto fail; 807 + } 808 + 782 809 /* 783 810 * construct the function's descriptor set. As the number of 784 811 * input and output MIDI ports is configurable, we have to do ··· 800 811 801 812 /* calculate the header's wTotalLength */ 802 813 n = USB_DT_MS_HEADER_SIZE 803 - + (1 + midi->in_ports) * USB_DT_MIDI_IN_SIZE 804 - + (1 + midi->out_ports) * USB_DT_MIDI_OUT_SIZE(1); 814 + + (midi->in_ports + midi->out_ports) * 815 + (USB_DT_MIDI_IN_SIZE + USB_DT_MIDI_OUT_SIZE(1)); 805 816 ms_header_desc.wTotalLength = cpu_to_le16(n); 806 817 807 818 midi_function[i++] = (struct usb_descriptor_header *) &ms_header_desc; 808 819 809 - /* we have one embedded IN jack */ 810 - jack_in_emb_desc.bJackID = jack++; 811 - midi_function[i++] = (struct usb_descriptor_header *) &jack_in_emb_desc; 812 - 813 - /* and a dynamic amount of external IN jacks */ 820 + /* configure the external IN jacks, each linked to an embedded OUT jack */ 814 821 for (n = 0; n < midi->in_ports; n++) { 815 - struct usb_midi_in_jack_descriptor *ext = &jack_in_ext_desc[n]; 822 + struct usb_midi_in_jack_descriptor *in_ext = &jack_in_ext_desc[n]; 823 + struct usb_midi_out_jack_descriptor_1 *out_emb = &jack_out_emb_desc[n]; 816 824 817 - ext->bLength = USB_DT_MIDI_IN_SIZE; 818 - ext->bDescriptorType = USB_DT_CS_INTERFACE; 819 - ext->bDescriptorSubtype = USB_MS_MIDI_IN_JACK; 820 - ext->bJackType = USB_MS_EXTERNAL; 821 - ext->bJackID = jack++; 822 - ext->iJack = 0; 825 + in_ext->bLength = USB_DT_MIDI_IN_SIZE; 826 + in_ext->bDescriptorType = USB_DT_CS_INTERFACE; 827 + in_ext->bDescriptorSubtype = USB_MS_MIDI_IN_JACK; 828 + in_ext->bJackType = USB_MS_EXTERNAL; 829 + in_ext->bJackID = jack++; 830 + in_ext->iJack = 0; 831 + midi_function[i++] = (struct usb_descriptor_header *) in_ext; 823 832 824 - midi_function[i++] = (struct usb_descriptor_header *) ext; 833 + out_emb->bLength = USB_DT_MIDI_OUT_SIZE(1); 834 + out_emb->bDescriptorType = USB_DT_CS_INTERFACE; 835 + out_emb->bDescriptorSubtype = USB_MS_MIDI_OUT_JACK; 836 + out_emb->bJackType = USB_MS_EMBEDDED; 837 + out_emb->bJackID = jack++; 838 + out_emb->bNrInputPins = 1; 839 + out_emb->pins[0].baSourcePin = 1; 840 + out_emb->pins[0].baSourceID = in_ext->bJackID; 841 + out_emb->iJack = 0; 842 + midi_function[i++] = (struct usb_descriptor_header *) out_emb; 843 + 844 + /* link it to the endpoint */ 845 + ms_in_desc.baAssocJackID[n] = out_emb->bJackID; 825 846 } 826 847 827 - /* one embedded OUT jack ... */ 828 - jack_out_emb_desc.bLength = USB_DT_MIDI_OUT_SIZE(midi->in_ports); 829 - jack_out_emb_desc.bJackID = jack++; 830 - jack_out_emb_desc.bNrInputPins = midi->in_ports; 831 - /* ... which referencess all external IN jacks */ 832 - for (n = 0; n < midi->in_ports; n++) { 833 - jack_out_emb_desc.pins[n].baSourceID = jack_in_ext_desc[n].bJackID; 834 - jack_out_emb_desc.pins[n].baSourcePin = 1; 835 - } 836 - 837 - midi_function[i++] = (struct usb_descriptor_header *) &jack_out_emb_desc; 838 - 839 - /* and multiple external OUT jacks ... */ 848 + /* configure the external OUT jacks, each linked to an embedded IN jack */ 840 849 for (n = 0; n < midi->out_ports; n++) { 841 - struct usb_midi_out_jack_descriptor_1 *ext = &jack_out_ext_desc[n]; 842 - int m; 850 + struct usb_midi_in_jack_descriptor *in_emb = &jack_in_emb_desc[n]; 851 + struct usb_midi_out_jack_descriptor_1 *out_ext = &jack_out_ext_desc[n]; 843 852 844 - ext->bLength = USB_DT_MIDI_OUT_SIZE(1); 845 - ext->bDescriptorType = USB_DT_CS_INTERFACE; 846 - ext->bDescriptorSubtype = USB_MS_MIDI_OUT_JACK; 847 - ext->bJackType = USB_MS_EXTERNAL; 848 - ext->bJackID = jack++; 849 - ext->bNrInputPins = 1; 850 - ext->iJack = 0; 851 - /* ... which all reference the same embedded IN jack */ 852 - for (m = 0; m < midi->out_ports; m++) { 853 - ext->pins[m].baSourceID = jack_in_emb_desc.bJackID; 854 - ext->pins[m].baSourcePin = 1; 855 - } 853 + in_emb->bLength = USB_DT_MIDI_IN_SIZE; 854 + in_emb->bDescriptorType = USB_DT_CS_INTERFACE; 855 + in_emb->bDescriptorSubtype = USB_MS_MIDI_IN_JACK; 856 + in_emb->bJackType = USB_MS_EMBEDDED; 857 + in_emb->bJackID = jack++; 858 + in_emb->iJack = 0; 859 + midi_function[i++] = (struct usb_descriptor_header *) in_emb; 856 860 857 - midi_function[i++] = (struct usb_descriptor_header *) ext; 861 + out_ext->bLength = USB_DT_MIDI_OUT_SIZE(1); 862 + out_ext->bDescriptorType = USB_DT_CS_INTERFACE; 863 + out_ext->bDescriptorSubtype = USB_MS_MIDI_OUT_JACK; 864 + out_ext->bJackType = USB_MS_EXTERNAL; 865 + out_ext->bJackID = jack++; 866 + out_ext->bNrInputPins = 1; 867 + out_ext->iJack = 0; 868 + out_ext->pins[0].baSourceID = in_emb->bJackID; 869 + out_ext->pins[0].baSourcePin = 1; 870 + midi_function[i++] = (struct usb_descriptor_header *) out_ext; 871 + 872 + /* link it to the endpoint */ 873 + ms_out_desc.baAssocJackID[n] = in_emb->bJackID; 858 874 } 859 875 860 876 /* configure the endpoint descriptors ... */ 861 877 ms_out_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->in_ports); 862 878 ms_out_desc.bNumEmbMIDIJack = midi->in_ports; 863 - for (n = 0; n < midi->in_ports; n++) 864 - ms_out_desc.baAssocJackID[n] = jack_in_emb_desc.bJackID; 865 879 866 880 ms_in_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->out_ports); 867 881 ms_in_desc.bNumEmbMIDIJack = midi->out_ports; 868 - for (n = 0; n < midi->out_ports; n++) 869 - ms_in_desc.baAssocJackID[n] = jack_out_emb_desc.bJackID; 870 882 871 883 /* ... and add them to the list */ 872 884 midi_function[i++] = (struct usb_descriptor_header *) &bulk_out_desc; ··· 890 900 } else { 891 901 f->descriptors = usb_copy_descriptors(midi_function); 892 902 } 903 + 904 + kfree(midi_function); 893 905 894 906 return 0; 895 907
+2 -2
drivers/usb/gadget/file_storage.c
··· 859 859 if (ctrl->bRequestType != (USB_DIR_OUT | 860 860 USB_TYPE_CLASS | USB_RECIP_INTERFACE)) 861 861 break; 862 - if (w_index != 0 || w_value != 0) { 862 + if (w_index != 0 || w_value != 0 || w_length != 0) { 863 863 value = -EDOM; 864 864 break; 865 865 } ··· 875 875 if (ctrl->bRequestType != (USB_DIR_IN | 876 876 USB_TYPE_CLASS | USB_RECIP_INTERFACE)) 877 877 break; 878 - if (w_index != 0 || w_value != 0) { 878 + if (w_index != 0 || w_value != 0 || w_length != 1) { 879 879 value = -EDOM; 880 880 break; 881 881 }
+1 -2
drivers/usb/gadget/fsl_udc_core.c
··· 2480 2480 2481 2481 #ifndef CONFIG_ARCH_MXC 2482 2482 if (pdata->have_sysif_regs) 2483 - usb_sys_regs = (struct usb_sys_interface *) 2484 - ((u32)dr_regs + USB_DR_SYS_OFFSET); 2483 + usb_sys_regs = (void *)dr_regs + USB_DR_SYS_OFFSET; 2485 2484 #endif 2486 2485 2487 2486 /* Initialize USB clocks */
+3 -2
drivers/usb/gadget/inode.c
··· 1730 1730 gadgetfs_disconnect (struct usb_gadget *gadget) 1731 1731 { 1732 1732 struct dev_data *dev = get_gadget_data (gadget); 1733 + unsigned long flags; 1733 1734 1734 - spin_lock (&dev->lock); 1735 + spin_lock_irqsave (&dev->lock, flags); 1735 1736 if (dev->state == STATE_DEV_UNCONNECTED) 1736 1737 goto exit; 1737 1738 dev->state = STATE_DEV_UNCONNECTED; ··· 1741 1740 next_event (dev, GADGETFS_DISCONNECT); 1742 1741 ep0_readable (dev); 1743 1742 exit: 1744 - spin_unlock (&dev->lock); 1743 + spin_unlock_irqrestore (&dev->lock, flags); 1745 1744 } 1746 1745 1747 1746 static void
+8 -2
drivers/usb/gadget/pch_udc.c
··· 1 1 /* 2 - * Copyright (C) 2010 OKI SEMICONDUCTOR CO., LTD. 2 + * Copyright (C) 2011 LAPIS Semiconductor Co., Ltd. 3 3 * 4 4 * This program is free software; you can redistribute it and/or modify 5 5 * it under the terms of the GNU General Public License as published by ··· 354 354 #define PCI_DEVICE_ID_INTEL_EG20T_UDC 0x8808 355 355 #define PCI_VENDOR_ID_ROHM 0x10DB 356 356 #define PCI_DEVICE_ID_ML7213_IOH_UDC 0x801D 357 + #define PCI_DEVICE_ID_ML7831_IOH_UDC 0x8808 357 358 358 359 static const char ep0_string[] = "ep0in"; 359 360 static DEFINE_SPINLOCK(udc_stall_spinlock); /* stall spin lock */ ··· 2971 2970 .class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe, 2972 2971 .class_mask = 0xffffffff, 2973 2972 }, 2973 + { 2974 + PCI_DEVICE(PCI_VENDOR_ID_ROHM, PCI_DEVICE_ID_ML7831_IOH_UDC), 2975 + .class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe, 2976 + .class_mask = 0xffffffff, 2977 + }, 2974 2978 { 0 }, 2975 2979 }; 2976 2980 ··· 3005 2999 module_exit(pch_udc_pci_exit); 3006 3000 3007 3001 MODULE_DESCRIPTION("Intel EG20T USB Device Controller"); 3008 - MODULE_AUTHOR("OKI SEMICONDUCTOR, <toshiharu-linux@dsn.okisemi.com>"); 3002 + MODULE_AUTHOR("LAPIS Semiconductor, <tomoya-linux@dsn.lapis-semi.com>"); 3009 3003 MODULE_LICENSE("GPL");
+11 -19
drivers/usb/gadget/r8a66597-udc.c
··· 1718 1718 if (list_empty(&ep->queue) && !ep->busy) { 1719 1719 pipe_stop(ep->r8a66597, ep->pipenum); 1720 1720 r8a66597_bclr(ep->r8a66597, BCLR, ep->fifoctr); 1721 + r8a66597_write(ep->r8a66597, ACLRM, ep->pipectr); 1722 + r8a66597_write(ep->r8a66597, 0, ep->pipectr); 1721 1723 } 1722 1724 spin_unlock_irqrestore(&ep->r8a66597->lock, flags); 1723 1725 } ··· 1744 1742 struct usb_gadget_driver *driver) 1745 1743 { 1746 1744 struct r8a66597 *r8a66597 = gadget_to_r8a66597(gadget); 1747 - int retval; 1748 1745 1749 1746 if (!driver 1750 1747 || driver->speed != USB_SPEED_HIGH ··· 1753 1752 return -ENODEV; 1754 1753 1755 1754 /* hook up the driver */ 1756 - driver->driver.bus = NULL; 1757 1755 r8a66597->driver = driver; 1758 - r8a66597->gadget.dev.driver = &driver->driver; 1759 - 1760 - retval = device_add(&r8a66597->gadget.dev); 1761 - if (retval) { 1762 - dev_err(r8a66597_to_dev(r8a66597), "device_add error (%d)\n", 1763 - retval); 1764 - goto error; 1765 - } 1766 1756 1767 1757 init_controller(r8a66597); 1768 1758 r8a66597_bset(r8a66597, VBSE, INTENB0); ··· 1767 1775 } 1768 1776 1769 1777 return 0; 1770 - 1771 - error: 1772 - r8a66597->driver = NULL; 1773 - r8a66597->gadget.dev.driver = NULL; 1774 - 1775 - return retval; 1776 1778 } 1777 1779 1778 1780 static int r8a66597_stop(struct usb_gadget *gadget, ··· 1780 1794 disable_controller(r8a66597); 1781 1795 spin_unlock_irqrestore(&r8a66597->lock, flags); 1782 1796 1783 - device_del(&r8a66597->gadget.dev); 1784 1797 r8a66597->driver = NULL; 1785 1798 return 0; 1786 1799 } ··· 1830 1845 clk_put(r8a66597->clk); 1831 1846 } 1832 1847 #endif 1848 + device_unregister(&r8a66597->gadget.dev); 1833 1849 kfree(r8a66597); 1834 1850 return 0; 1835 1851 } ··· 1910 1924 r8a66597->irq_sense_low = irq_trigger == IRQF_TRIGGER_LOW; 1911 1925 1912 1926 r8a66597->gadget.ops = &r8a66597_gadget_ops; 1913 - device_initialize(&r8a66597->gadget.dev); 1914 1927 dev_set_name(&r8a66597->gadget.dev, "gadget"); 1915 1928 r8a66597->gadget.is_dualspeed = 1; 1916 1929 r8a66597->gadget.dev.parent = &pdev->dev; 1917 1930 r8a66597->gadget.dev.dma_mask = pdev->dev.dma_mask; 1918 1931 r8a66597->gadget.dev.release = pdev->dev.release; 1919 1932 r8a66597->gadget.name = udc_name; 1933 + ret = device_register(&r8a66597->gadget.dev); 1934 + if (ret < 0) { 1935 + dev_err(&pdev->dev, "device_register failed\n"); 1936 + goto clean_up; 1937 + } 1920 1938 1921 1939 init_timer(&r8a66597->timer); 1922 1940 r8a66597->timer.function = r8a66597_timer; ··· 1935 1945 dev_err(&pdev->dev, "cannot get clock \"%s\"\n", 1936 1946 clk_name); 1937 1947 ret = PTR_ERR(r8a66597->clk); 1938 - goto clean_up; 1948 + goto clean_up_dev; 1939 1949 } 1940 1950 clk_enable(r8a66597->clk); 1941 1951 } ··· 2004 2014 clk_disable(r8a66597->clk); 2005 2015 clk_put(r8a66597->clk); 2006 2016 } 2017 + clean_up_dev: 2007 2018 #endif 2019 + device_unregister(&r8a66597->gadget.dev); 2008 2020 clean_up: 2009 2021 if (r8a66597) { 2010 2022 if (r8a66597->sudmac_reg)
+5 -5
drivers/usb/gadget/udc-core.c
··· 210 210 kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE); 211 211 212 212 if (udc_is_newstyle(udc)) { 213 - usb_gadget_disconnect(udc->gadget); 213 + udc->driver->disconnect(udc->gadget); 214 214 udc->driver->unbind(udc->gadget); 215 215 usb_gadget_udc_stop(udc->gadget, udc->driver); 216 - 216 + usb_gadget_disconnect(udc->gadget); 217 217 } else { 218 218 usb_gadget_stop(udc->gadget, udc->driver); 219 219 } ··· 344 344 static ssize_t usb_udc_srp_store(struct device *dev, 345 345 struct device_attribute *attr, const char *buf, size_t n) 346 346 { 347 - struct usb_udc *udc = dev_get_drvdata(dev); 347 + struct usb_udc *udc = container_of(dev, struct usb_udc, dev); 348 348 349 349 if (sysfs_streq(buf, "1")) 350 350 usb_gadget_wakeup(udc->gadget); ··· 378 378 return snprintf(buf, PAGE_SIZE, "%s\n", 379 379 usb_speed_string(udc->gadget->speed)); 380 380 } 381 - static DEVICE_ATTR(speed, S_IRUSR, usb_udc_speed_show, NULL); 381 + static DEVICE_ATTR(speed, S_IRUGO, usb_udc_speed_show, NULL); 382 382 383 383 #define USB_UDC_ATTR(name) \ 384 384 ssize_t usb_udc_##name##_show(struct device *dev, \ ··· 389 389 \ 390 390 return snprintf(buf, PAGE_SIZE, "%d\n", gadget->name); \ 391 391 } \ 392 - static DEVICE_ATTR(name, S_IRUSR, usb_udc_##name##_show, NULL) 392 + static DEVICE_ATTR(name, S_IRUGO, usb_udc_##name##_show, NULL) 393 393 394 394 static USB_UDC_ATTR(is_dualspeed); 395 395 static USB_UDC_ATTR(is_otg);
+10 -5
drivers/usb/host/ehci-sched.c
··· 1479 1479 1480 1480 /* NOTE: assumes URB_ISO_ASAP, to limit complexity/bugs */ 1481 1481 1482 - /* find a uframe slot with enough bandwidth */ 1483 - next = start + period; 1484 - for (; start < next; start++) { 1485 - 1482 + /* find a uframe slot with enough bandwidth. 1483 + * Early uframes are more precious because full-speed 1484 + * iso IN transfers can't use late uframes, 1485 + * and therefore they should be allocated last. 1486 + */ 1487 + next = start; 1488 + start += period; 1489 + do { 1490 + start--; 1486 1491 /* check schedule: enough space? */ 1487 1492 if (stream->highspeed) { 1488 1493 if (itd_slot_ok(ehci, mod, start, ··· 1500 1495 start, sched, period)) 1501 1496 break; 1502 1497 } 1503 - } 1498 + } while (start > next); 1504 1499 1505 1500 /* no room in the schedule */ 1506 1501 if (start == next) {
+1 -1
drivers/usb/host/ehci-xls.c
··· 19 19 20 20 ehci->caps = hcd->regs; 21 21 ehci->regs = hcd->regs + 22 - HC_LENGTH(ehci_readl(ehci, &ehci->caps->hc_capbase)); 22 + HC_LENGTH(ehci, ehci_readl(ehci, &ehci->caps->hc_capbase)); 23 23 dbg_hcs_params(ehci, "reset"); 24 24 dbg_hcc_params(ehci, "reset"); 25 25
+6
drivers/usb/host/ohci-at91.c
··· 223 223 if (port < 0 || port >= 2) 224 224 return; 225 225 226 + if (pdata->vbus_pin[port] <= 0) 227 + return; 228 + 226 229 gpio_set_value(pdata->vbus_pin[port], !pdata->vbus_pin_inverted ^ enable); 227 230 } 228 231 229 232 static int ohci_at91_usb_get_power(struct at91_usbh_data *pdata, int port) 230 233 { 231 234 if (port < 0 || port >= 2) 235 + return -EINVAL; 236 + 237 + if (pdata->vbus_pin[port] <= 0) 232 238 return -EINVAL; 233 239 234 240 return gpio_get_value(pdata->vbus_pin[port]) ^ !pdata->vbus_pin_inverted;
+6 -9
drivers/usb/host/ohci-hcd.c
··· 389 389 struct ohci_hcd *ohci; 390 390 391 391 ohci = hcd_to_ohci (hcd); 392 - ohci_writel (ohci, OHCI_INTR_MIE, &ohci->regs->intrdisable); 393 - ohci->hc_control = ohci_readl(ohci, &ohci->regs->control); 392 + ohci_writel(ohci, (u32) ~0, &ohci->regs->intrdisable); 394 393 395 - /* If the SHUTDOWN quirk is set, don't put the controller in RESET */ 396 - ohci->hc_control &= (ohci->flags & OHCI_QUIRK_SHUTDOWN ? 397 - OHCI_CTRL_RWC | OHCI_CTRL_HCFS : 398 - OHCI_CTRL_RWC); 399 - ohci_writel(ohci, ohci->hc_control, &ohci->regs->control); 394 + /* Software reset, after which the controller goes into SUSPEND */ 395 + ohci_writel(ohci, OHCI_HCR, &ohci->regs->cmdstatus); 396 + ohci_readl(ohci, &ohci->regs->cmdstatus); /* flush the writes */ 397 + udelay(10); 400 398 401 - /* flush the writes */ 402 - (void) ohci_readl (ohci, &ohci->regs->control); 399 + ohci_writel(ohci, ohci->fminterval, &ohci->regs->fminterval); 403 400 } 404 401 405 402 static int check_ed(struct ohci_hcd *ohci, struct ed *ed)
-26
drivers/usb/host/ohci-pci.c
··· 175 175 return 0; 176 176 } 177 177 178 - /* nVidia controllers continue to drive Reset signalling on the bus 179 - * even after system shutdown, wasting power. This flag tells the 180 - * shutdown routine to leave the controller OPERATIONAL instead of RESET. 181 - */ 182 - static int ohci_quirk_nvidia_shutdown(struct usb_hcd *hcd) 183 - { 184 - struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 185 - struct ohci_hcd *ohci = hcd_to_ohci(hcd); 186 - 187 - /* Evidently nVidia fixed their later hardware; this is a guess at 188 - * the changeover point. 189 - */ 190 - #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_USB 0x026d 191 - 192 - if (pdev->device < PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_USB) { 193 - ohci->flags |= OHCI_QUIRK_SHUTDOWN; 194 - ohci_dbg(ohci, "enabled nVidia shutdown quirk\n"); 195 - } 196 - 197 - return 0; 198 - } 199 - 200 178 static void sb800_prefetch(struct ohci_hcd *ohci, int on) 201 179 { 202 180 struct pci_dev *pdev; ··· 237 259 { 238 260 PCI_DEVICE(PCI_VENDOR_ID_ATI, 0x4399), 239 261 .driver_data = (unsigned long)ohci_quirk_amd700, 240 - }, 241 - { 242 - PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID), 243 - .driver_data = (unsigned long) ohci_quirk_nvidia_shutdown, 244 262 }, 245 263 246 264 /* FIXME for some of the early AMD 760 southbridges, OHCI
-1
drivers/usb/host/ohci.h
··· 403 403 #define OHCI_QUIRK_HUB_POWER 0x100 /* distrust firmware power/oc setup */ 404 404 #define OHCI_QUIRK_AMD_PLL 0x200 /* AMD PLL quirk*/ 405 405 #define OHCI_QUIRK_AMD_PREFETCH 0x400 /* pre-fetch for ISO transfer */ 406 - #define OHCI_QUIRK_SHUTDOWN 0x800 /* nVidia power bug */ 407 406 // there are also chip quirks/bugs in init logic 408 407 409 408 struct work_struct nec_work; /* Worker for NEC quirk */
+26 -33
drivers/usb/host/pci-quirks.c
··· 37 37 #define OHCI_INTRENABLE 0x10 38 38 #define OHCI_INTRDISABLE 0x14 39 39 #define OHCI_FMINTERVAL 0x34 40 + #define OHCI_HCFS (3 << 6) /* hc functional state */ 40 41 #define OHCI_HCR (1 << 0) /* host controller reset */ 41 42 #define OHCI_OCR (1 << 3) /* ownership change request */ 42 43 #define OHCI_CTRL_RWC (1 << 9) /* remote wakeup connected */ ··· 467 466 { 468 467 void __iomem *base; 469 468 u32 control; 469 + u32 fminterval; 470 + int cnt; 470 471 471 472 if (!mmio_resource_enabled(pdev, 0)) 472 473 return; ··· 501 498 } 502 499 #endif 503 500 504 - /* reset controller, preserving RWC (and possibly IR) */ 505 - writel(control & OHCI_CTRL_MASK, base + OHCI_CONTROL); 506 - readl(base + OHCI_CONTROL); 501 + /* disable interrupts */ 502 + writel((u32) ~0, base + OHCI_INTRDISABLE); 507 503 508 - /* Some NVIDIA controllers stop working if kept in RESET for too long */ 509 - if (pdev->vendor == PCI_VENDOR_ID_NVIDIA) { 510 - u32 fminterval; 511 - int cnt; 504 + /* Reset the USB bus, if the controller isn't already in RESET */ 505 + if (control & OHCI_HCFS) { 506 + /* Go into RESET, preserving RWC (and possibly IR) */ 507 + writel(control & OHCI_CTRL_MASK, base + OHCI_CONTROL); 508 + readl(base + OHCI_CONTROL); 512 509 513 - /* drive reset for at least 50 ms (7.1.7.5) */ 510 + /* drive bus reset for at least 50 ms (7.1.7.5) */ 514 511 msleep(50); 515 - 516 - /* software reset of the controller, preserving HcFmInterval */ 517 - fminterval = readl(base + OHCI_FMINTERVAL); 518 - writel(OHCI_HCR, base + OHCI_CMDSTATUS); 519 - 520 - /* reset requires max 10 us delay */ 521 - for (cnt = 30; cnt > 0; --cnt) { /* ... allow extra time */ 522 - if ((readl(base + OHCI_CMDSTATUS) & OHCI_HCR) == 0) 523 - break; 524 - udelay(1); 525 - } 526 - writel(fminterval, base + OHCI_FMINTERVAL); 527 - 528 - /* Now we're in the SUSPEND state with all devices reset 529 - * and wakeups and interrupts disabled 530 - */ 531 512 } 532 513 533 - /* 534 - * disable interrupts 535 - */ 536 - writel(~(u32)0, base + OHCI_INTRDISABLE); 537 - writel(~(u32)0, base + OHCI_INTRSTATUS); 514 + /* software reset of the controller, preserving HcFmInterval */ 515 + fminterval = readl(base + OHCI_FMINTERVAL); 516 + writel(OHCI_HCR, base + OHCI_CMDSTATUS); 538 517 518 + /* reset requires max 10 us delay */ 519 + for (cnt = 30; cnt > 0; --cnt) { /* ... allow extra time */ 520 + if ((readl(base + OHCI_CMDSTATUS) & OHCI_HCR) == 0) 521 + break; 522 + udelay(1); 523 + } 524 + writel(fminterval, base + OHCI_FMINTERVAL); 525 + 526 + /* Now the controller is safely in SUSPEND and nothing can wake it up */ 539 527 iounmap(base); 540 528 } ··· 621 627 void __iomem *base, *op_reg_base; 622 628 u32 hcc_params, cap, val; 623 629 u8 offset, cap_length; 624 - int wait_time, delta, count = 256/4; 630 + int wait_time, count = 256/4; 625 631 626 632 if (!mmio_resource_enabled(pdev, 0)) 627 633 return; ··· 667 673 writel(val, op_reg_base + EHCI_USBCMD); 668 674 669 675 wait_time = 2000; 670 - delta = 100; 671 676 do { 672 677 writel(0x3f, op_reg_base + EHCI_USBSTS); 673 - udelay(delta); 674 - wait_time -= delta; 678 + udelay(100); 679 + wait_time -= 100; 675 680 val = readl(op_reg_base + EHCI_USBSTS); 676 681 if ((val == ~(u32)0) || (val & EHCI_USBSTS_HALTED)) { 677 682 break;
-5
drivers/usb/host/xhci-mem.c
··· 982 982 struct xhci_virt_device *dev; 983 983 struct xhci_ep_ctx *ep0_ctx; 984 984 struct xhci_slot_ctx *slot_ctx; 985 - struct xhci_input_control_ctx *ctrl_ctx; 986 985 u32 port_num; 987 986 struct usb_device *top_dev; 988 987 ··· 993 994 return -EINVAL; 994 995 } 995 996 ep0_ctx = xhci_get_ep_ctx(xhci, dev->in_ctx, 0); 996 - ctrl_ctx = xhci_get_input_control_ctx(xhci, dev->in_ctx); 997 997 slot_ctx = xhci_get_slot_ctx(xhci, dev->in_ctx); 998 - 999 - /* 2) New slot context and endpoint 0 context are valid*/ 1000 - ctrl_ctx->add_flags = cpu_to_le32(SLOT_FLAG | EP0_FLAG); 1001 998 1002 999 /* 3) Only the control endpoint is valid - one endpoint context */ 1003 1000 slot_ctx->dev_info |= cpu_to_le32(LAST_CTX(1) | udev->route);
+7 -6
drivers/usb/host/xhci-ring.c
··· 816 816 struct xhci_ring *ring; 817 817 struct xhci_td *cur_td; 818 818 int ret, i, j; 819 + unsigned long flags; 819 820 820 821 ep = (struct xhci_virt_ep *) arg; 821 822 xhci = ep->xhci; 822 823 823 - spin_lock(&xhci->lock); 824 + spin_lock_irqsave(&xhci->lock, flags); 824 825 825 826 ep->stop_cmds_pending--; 826 827 if (xhci->xhc_state & XHCI_STATE_DYING) { 827 828 xhci_dbg(xhci, "Stop EP timer ran, but another timer marked " 828 829 "xHCI as DYING, exiting.\n"); 829 - spin_unlock(&xhci->lock); 830 + spin_unlock_irqrestore(&xhci->lock, flags); 830 831 return; 831 832 } 832 833 if (!(ep->stop_cmds_pending == 0 && (ep->ep_state & EP_HALT_PENDING))) { 833 834 xhci_dbg(xhci, "Stop EP timer ran, but no command pending, " 834 835 "exiting.\n"); 835 - spin_unlock(&xhci->lock); 836 + spin_unlock_irqrestore(&xhci->lock, flags); 836 837 return; 837 838 } 838 839 ··· 845 844 xhci->xhc_state |= XHCI_STATE_DYING; 846 845 /* Disable interrupts from the host controller and start halting it */ 847 846 xhci_quiesce(xhci); 848 - spin_unlock(&xhci->lock); 847 + spin_unlock_irqrestore(&xhci->lock, flags); 849 848 850 849 ret = xhci_halt(xhci); 851 850 852 - spin_lock(&xhci->lock); 851 + spin_lock_irqsave(&xhci->lock, flags); 853 852 if (ret < 0) { 854 853 /* This is bad; the host is not responding to commands and it's 855 854 * not allowing itself to be halted. At least interrupts are ··· 897 896 } 898 897 } 899 898 } 900 - spin_unlock(&xhci->lock); 899 + spin_unlock_irqrestore(&xhci->lock, flags); 901 900 xhci_dbg(xhci, "Calling usb_hc_died()\n"); 902 901 usb_hc_died(xhci_to_hcd(xhci)->primary_hcd); 903 902 xhci_dbg(xhci, "xHCI host controller is dead.\n");
+18 -16
drivers/usb/host/xhci.c
··· 799 799 u32 command, temp = 0; 800 800 struct usb_hcd *hcd = xhci_to_hcd(xhci); 801 801 struct usb_hcd *secondary_hcd; 802 - int retval; 802 + int retval = 0; 803 803 804 804 /* Wait a bit if either of the roothubs need to settle from the 805 805 * transition into bus suspend. ··· 808 808 time_before(jiffies, 809 809 xhci->bus_state[1].next_statechange)) 810 810 msleep(100); 811 + 812 + set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); 813 + set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags); 811 814 812 815 spin_lock_irq(&xhci->lock); 813 816 if (xhci->quirks & XHCI_RESET_ON_RESUME) ··· 881 878 return retval; 882 879 xhci_dbg(xhci, "Start the primary HCD\n"); 883 880 retval = xhci_run(hcd->primary_hcd); 884 - if (retval) 885 - goto failed_restart; 886 - 887 - xhci_dbg(xhci, "Start the secondary HCD\n"); 888 - retval = xhci_run(secondary_hcd); 889 881 if (!retval) { 890 - set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); 891 - set_bit(HCD_FLAG_HW_ACCESSIBLE, 892 - &xhci->shared_hcd->flags); 882 + xhci_dbg(xhci, "Start the secondary HCD\n"); 883 + retval = xhci_run(secondary_hcd); 893 884 } 894 - failed_restart: 895 885 hcd->state = HC_STATE_SUSPENDED; 896 886 xhci->shared_hcd->state = HC_STATE_SUSPENDED; 897 - return retval; 887 + goto done; 898 888 } 899 889 900 890 /* step 4: set Run/Stop bit */ ··· 906 910 * Running endpoints by ringing their doorbells 907 911 */ 908 912 909 - set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); 910 - set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags); 911 913 912 914 spin_unlock_irq(&xhci->lock); 913 - return 0; 914 + 915 + done: 916 + if (retval == 0) { 917 + usb_hcd_resume_root_hub(hcd); 918 + usb_hcd_resume_root_hub(xhci->shared_hcd); 919 + } 920 + return retval; 914 921 } 915 922 #endif /* CONFIG_PM */ 916 923 ··· 3503 3504 /* Otherwise, update the control endpoint ring enqueue pointer. */ 3504 3505 else 3505 3506 xhci_copy_ep0_dequeue_into_input_ctx(xhci, udev); 3507 + ctrl_ctx = xhci_get_input_control_ctx(xhci, virt_dev->in_ctx); 3508 + ctrl_ctx->add_flags = cpu_to_le32(SLOT_FLAG | EP0_FLAG); 3509 + ctrl_ctx->drop_flags = 0; 3510 + 3506 3511 xhci_dbg(xhci, "Slot ID %d Input Context:\n", udev->slot_id); 3507 3512 xhci_dbg_ctx(xhci, virt_dev->in_ctx, 2); 3508 3513 ··· 3588 3585 virt_dev->address = (le32_to_cpu(slot_ctx->dev_state) & DEV_ADDR_MASK) 3589 3586 + 1; 3590 3587 /* Zero the input context control for later use */ 3591 - ctrl_ctx = xhci_get_input_control_ctx(xhci, virt_dev->in_ctx); 3592 3588 ctrl_ctx->add_flags = 0; 3593 3589 ctrl_ctx->drop_flags = 0; 3594 3590
+2 -1
drivers/usb/musb/Kconfig
··· 11 11 select TWL4030_USB if MACH_OMAP_3430SDP 12 12 select TWL6030_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA 13 13 select USB_OTG_UTILS 14 + select USB_GADGET_DUALSPEED 14 15 tristate 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)' 15 16 help 16 17 Say Y here if your system has a dual role high speed USB ··· 61 60 62 61 config USB_MUSB_UX500 63 62 tristate "U8500 and U5500" 64 - depends on (ARCH_U8500 && AB8500_USB) || (ARCH_U5500) 63 + depends on (ARCH_U8500 && AB8500_USB) 65 64 66 65 endchoice 67 66
+1
drivers/usb/musb/am35x.c
··· 27 27 */ 28 28 29 29 #include <linux/init.h> 30 + #include <linux/module.h> 30 31 #include <linux/clk.h> 31 32 #include <linux/io.h> 32 33 #include <linux/platform_device.h>
+1
drivers/usb/musb/da8xx.c
··· 27 27 */ 28 28 29 29 #include <linux/init.h> 30 + #include <linux/module.h> 30 31 #include <linux/clk.h> 31 32 #include <linux/io.h> 32 33 #include <linux/platform_device.h>
+1 -2
drivers/usb/musb/musb_core.c
··· 1477 1477 /*-------------------------------------------------------------------------*/ 1478 1478 1479 1479 #if defined(CONFIG_SOC_OMAP2430) || defined(CONFIG_SOC_OMAP3430) || \ 1480 - defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_ARCH_U8500) || \ 1481 - defined(CONFIG_ARCH_U5500) 1480 + defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_ARCH_U8500) 1482 1481 1483 1482 static irqreturn_t generic_interrupt(int irq, void *__hci) 1484 1483 {
-4
drivers/usb/musb/musb_gadget.c
··· 1999 1999 nuke(&hw_ep->ep_out, -ESHUTDOWN); 2000 2000 } 2001 2001 } 2002 - 2003 - spin_unlock(&musb->lock); 2004 - driver->disconnect(&musb->g); 2005 - spin_lock(&musb->lock); 2006 2002 } 2007 2003 } 2008 2004
+1 -1
drivers/usb/renesas_usbhs/common.c
··· 405 405 /* 406 406 * platform functions 407 407 */ 408 - static int __devinit usbhs_probe(struct platform_device *pdev) 408 + static int usbhs_probe(struct platform_device *pdev) 409 409 { 410 410 struct renesas_usbhs_platform_info *info = pdev->dev.platform_data; 411 411 struct renesas_usbhs_driver_callback *dfunc;
+2 -2
drivers/usb/renesas_usbhs/fifo.c
··· 820 820 if (len % 4) /* 32bit alignment */ 821 821 goto usbhsf_pio_prepare_push; 822 822 823 - if ((*(u32 *) pkt->buf + pkt->actual) & 0x7) /* 8byte alignment */ 823 + if ((uintptr_t)(pkt->buf + pkt->actual) & 0x7) /* 8byte alignment */ 824 824 goto usbhsf_pio_prepare_push; 825 825 826 826 /* get enable DMA fifo */ ··· 897 897 if (!fifo) 898 898 goto usbhsf_pio_prepare_pop; 899 899 900 - if ((*(u32 *) pkt->buf + pkt->actual) & 0x7) /* 8byte alignment */ 900 + if ((uintptr_t)(pkt->buf + pkt->actual) & 0x7) /* 8byte alignment */ 901 901 goto usbhsf_pio_prepare_pop; 902 902 903 903 ret = usbhsf_fifo_select(pipe, fifo, 0);
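The fifo.c hunk above fixes a subtle bug in the DMA-eligibility test: the old code dereferenced the buffer (`*(u32 *) pkt->buf + pkt->actual`) and checked the alignment of the data value plus the offset, while the intent was to check the alignment of the address itself. A minimal userspace sketch of the corrected check, assuming an illustrative helper name (`needs_pio_fallback` is not part of the driver):

```c
#include <stdint.h>
#include <stddef.h>

/* Return nonzero when buf + actual is NOT 8-byte aligned, i.e. when a
 * driver following this pattern must fall back to PIO instead of DMA.
 * Casting the pointer value to uintptr_t tests the address; the buggy
 * version dereferenced the buffer and tested its contents instead. */
static int needs_pio_fallback(const void *buf, size_t actual)
{
	return (int)(((uintptr_t)((const char *)buf + actual)) & 0x7);
}
```

The fix matters because an alignment test on the buffer's *contents* passes or fails essentially at random, silently switching between DMA and PIO per packet.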
+4 -4
drivers/usb/renesas_usbhs/mod.h
··· 143 143 */ 144 144 #if defined(CONFIG_USB_RENESAS_USBHS_HCD) || \ 145 145 defined(CONFIG_USB_RENESAS_USBHS_HCD_MODULE) 146 - extern int __devinit usbhs_mod_host_probe(struct usbhs_priv *priv); 147 - extern int __devexit usbhs_mod_host_remove(struct usbhs_priv *priv); 146 + extern int usbhs_mod_host_probe(struct usbhs_priv *priv); 147 + extern int usbhs_mod_host_remove(struct usbhs_priv *priv); 148 148 #else 149 149 static inline int usbhs_mod_host_probe(struct usbhs_priv *priv) 150 150 { ··· 157 157 158 158 #if defined(CONFIG_USB_RENESAS_USBHS_UDC) || \ 159 159 defined(CONFIG_USB_RENESAS_USBHS_UDC_MODULE) 160 - extern int __devinit usbhs_mod_gadget_probe(struct usbhs_priv *priv); 161 - extern void __devexit usbhs_mod_gadget_remove(struct usbhs_priv *priv); 160 + extern int usbhs_mod_gadget_probe(struct usbhs_priv *priv); 161 + extern void usbhs_mod_gadget_remove(struct usbhs_priv *priv); 162 162 #else 163 163 static inline int usbhs_mod_gadget_probe(struct usbhs_priv *priv) 164 164 {
+2 -2
drivers/usb/renesas_usbhs/mod_gadget.c
··· 830 830 return usbhsg_try_stop(priv, USBHSG_STATUS_STARTED); 831 831 } 832 832 833 - int __devinit usbhs_mod_gadget_probe(struct usbhs_priv *priv) 833 + int usbhs_mod_gadget_probe(struct usbhs_priv *priv) 834 834 { 835 835 struct usbhsg_gpriv *gpriv; 836 836 struct usbhsg_uep *uep; ··· 927 927 return ret; 928 928 } 929 929 930 - void __devexit usbhs_mod_gadget_remove(struct usbhs_priv *priv) 930 + void usbhs_mod_gadget_remove(struct usbhs_priv *priv) 931 931 { 932 932 struct usbhsg_gpriv *gpriv = usbhsg_priv_to_gpriv(priv); 933 933
+39 -24
drivers/usb/renesas_usbhs/mod_host.c
··· 103 103 104 104 u32 port_stat; /* USB_PORT_STAT_xxx */ 105 105 106 - struct completion *done; 106 + struct completion setup_ack_done; 107 107 108 108 /* see usbhsh_req_alloc/free */ 109 109 struct list_head ureq_link_active; ··· 355 355 struct usbhsh_ep *usbhsh_endpoint_alloc(struct usbhsh_hpriv *hpriv, 356 356 struct usbhsh_device *udev, 357 357 struct usb_host_endpoint *ep, 358 + int dir_in_req, 358 359 gfp_t mem_flags) 359 360 { 360 361 struct usbhs_priv *priv = usbhsh_hpriv_to_priv(hpriv); ··· 365 364 struct usbhs_pipe *pipe, *best_pipe; 366 365 struct device *dev = usbhsh_hcd_to_dev(hcd); 367 366 struct usb_endpoint_descriptor *desc = &ep->desc; 368 - int type, i; 367 + int type, i, dir_in; 369 368 unsigned int min_usr; 369 + 370 + dir_in_req = !!dir_in_req; 370 371 371 372 uep = kzalloc(sizeof(struct usbhsh_ep), mem_flags); 372 373 if (!uep) { 373 374 dev_err(dev, "usbhsh_ep alloc fail\n"); 374 375 return NULL; 375 376 } 376 - type = usb_endpoint_type(desc); 377 + 378 + if (usb_endpoint_xfer_control(desc)) { 379 + best_pipe = usbhsh_hpriv_to_dcp(hpriv); 380 + goto usbhsh_endpoint_alloc_find_pipe; 381 + } 377 382 378 383 /* 379 384 * find best pipe for endpoint 380 385 * see 381 386 * HARDWARE LIMITATION 382 387 */ 388 + type = usb_endpoint_type(desc); 383 389 min_usr = ~0; 384 390 best_pipe = NULL; 385 - usbhs_for_each_pipe_with_dcp(pipe, priv, i) { 391 + usbhs_for_each_pipe(pipe, priv, i) { 386 392 if (!usbhs_pipe_type_is(pipe, type)) 393 + continue; 394 + 395 + dir_in = !!usbhs_pipe_is_dir_in(pipe); 396 + if (0 != (dir_in - dir_in_req)) 387 397 continue; 388 398 389 399 info = usbhsh_pipe_info(pipe); ··· 410 398 kfree(uep); 411 399 return NULL; 412 400 } 413 - 401 + usbhsh_endpoint_alloc_find_pipe: 414 402 /* 415 403 * init uep 416 404 */ ··· 435 423 * see 436 424 * DCPMAXP/PIPEMAXP 437 425 */ 426 + usbhs_pipe_sequence_data0(uep->pipe); 438 427 usbhs_pipe_config_update(uep->pipe, 439 428 usbhsh_device_number(hpriv, udev), 440 429 usb_endpoint_num(desc), 
··· 443 430 444 431 dev_dbg(dev, "%s [%d-%s](%p)\n", __func__, 445 432 usbhsh_device_number(hpriv, udev), 446 - usbhs_pipe_name(pipe), uep); 433 + usbhs_pipe_name(uep->pipe), uep); 447 434 448 435 return uep; 449 436 } ··· 562 549 * usbhsh_irq_setup_ack() 563 550 * usbhsh_irq_setup_err() 564 551 */ 565 - DECLARE_COMPLETION(done); 566 - hpriv->done = &done; 552 + init_completion(&hpriv->setup_ack_done); 567 553 568 554 /* copy original request */ 569 555 memcpy(&req, urb->setup_packet, sizeof(struct usb_ctrlrequest)); ··· 584 572 /* 585 573 * wait setup packet ACK 586 574 */ 587 - wait_for_completion(&done); 588 - hpriv->done = NULL; 575 + wait_for_completion(&hpriv->setup_ack_done); 589 576 590 577 dev_dbg(dev, "%s done\n", __func__); 591 578 } ··· 735 724 struct usbhsh_device *udev, *new_udev = NULL; 736 725 struct usbhs_pipe *pipe; 737 726 struct usbhsh_ep *uep; 727 + int is_dir_in = usb_pipein(urb->pipe); 738 728 739 729 int ret; 740 730 741 - dev_dbg(dev, "%s (%s)\n", 742 - __func__, usb_pipein(urb->pipe) ? "in" : "out"); 731 + dev_dbg(dev, "%s (%s)\n", __func__, is_dir_in ? "in" : "out"); 743 732 744 733 ret = usb_hcd_link_urb_to_ep(hcd, urb); 745 734 if (ret) ··· 762 751 */ 763 752 uep = usbhsh_ep_to_uep(ep); 764 753 if (!uep) { 765 - uep = usbhsh_endpoint_alloc(hpriv, udev, ep, mem_flags); 754 + uep = usbhsh_endpoint_alloc(hpriv, udev, ep, 755 + is_dir_in, mem_flags); 766 756 if (!uep) 767 757 goto usbhsh_urb_enqueue_error_free_device; 768 758 } ··· 1107 1095 1108 1096 dev_dbg(dev, "setup packet OK\n"); 1109 1097 1110 - if (unlikely(!hpriv->done)) 1111 - dev_err(dev, "setup ack happen without necessary data\n"); 1112 - else 1113 - complete(hpriv->done); /* see usbhsh_urb_enqueue() */ 1098 + complete(&hpriv->setup_ack_done); /* see usbhsh_urb_enqueue() */ 1114 1099 1115 1100 return 0; 1116 1101 } ··· 1120 1111 1121 1112 1122 1113 dev_dbg(dev, "setup packet Err\n"); 1123 - if (unlikely(!hpriv->done)) 1124 - dev_err(dev, "setup err happen without necessary data\n"); 1125 - else 1126 - complete(hpriv->done); /* see usbhsh_urb_enqueue() */ 1114 + complete(&hpriv->setup_ack_done); /* see usbhsh_urb_enqueue() */ 1127 1115 1128 1116 return 0; 1129 1117 } ··· 1227 1221 { 1228 1222 struct usbhsh_hpriv *hpriv = usbhsh_priv_to_hpriv(priv); 1229 1223 struct usb_hcd *hcd = usbhsh_hpriv_to_hcd(hpriv); 1224 + struct usbhs_mod *mod = usbhs_mod_get_current(priv); 1230 1225 struct device *dev = usbhs_priv_to_dev(priv); 1226 + 1227 + /* 1228 + * disable irq callback 1229 + */ 1230 + mod->irq_attch = NULL; 1231 + mod->irq_dtch = NULL; 1232 + mod->irq_sack = NULL; 1233 + mod->irq_sign = NULL; 1234 + usbhs_irq_callback_update(priv, mod); 1231 1235 1232 1236 usb_remove_hcd(hcd); 1233 1237 ··· 1251 1235 return 0; 1252 1236 } 1253 1237 1254 - int __devinit usbhs_mod_host_probe(struct usbhs_priv *priv) 1238 + int usbhs_mod_host_probe(struct usbhs_priv *priv) 1255 1239 { 1256 1240 struct usbhsh_hpriv *hpriv; 1257 1241 struct usb_hcd *hcd; ··· 1295 1279 hpriv->mod.stop = usbhsh_stop; 1296 1280 hpriv->pipe_info = pipe_info; 1297 1281 hpriv->pipe_size = pipe_size; 1298 - hpriv->done = NULL; 1299 1282 usbhsh_req_list_init(hpriv); 1300 1283 usbhsh_port_stat_init(hpriv); 1301 1284 ··· 1314 1299 return -ENOMEM; 1315 1300 } 1316 1301 1317 - int __devexit usbhs_mod_host_remove(struct usbhs_priv *priv) 1302 + int usbhs_mod_host_remove(struct usbhs_priv *priv) 1318 1303 { 1319 1304 struct usbhsh_hpriv *hpriv = usbhsh_priv_to_hpriv(priv); 1320 1305 struct usb_hcd *hcd = usbhsh_hpriv_to_hcd(hpriv);
+5 -5
drivers/usb/serial/ark3116.c
··· 42 42 * Version information 43 43 */ 44 44 45 - #define DRIVER_VERSION "v0.6" 45 + #define DRIVER_VERSION "v0.7" 46 46 #define DRIVER_AUTHOR "Bart Hartgers <bart.hartgers+ark3116@gmail.com>" 47 47 #define DRIVER_DESC "USB ARK3116 serial/IrDA driver" 48 48 #define DRIVER_DEV_DESC "ARK3116 RS232/IrDA" ··· 380 380 goto err_out; 381 381 } 382 382 383 - /* setup termios */ 384 - if (tty) 385 - ark3116_set_termios(tty, port, NULL); 386 - 387 383 /* remove any data still left: also clears error state */ 388 384 ark3116_read_reg(serial, UART_RX, buf); 389 385 ··· 401 405 402 406 /* enable DMA */ 403 407 ark3116_write_reg(port->serial, UART_FCR, UART_FCR_DMA_SELECT); 408 + 409 + /* setup termios */ 410 + if (tty) 411 + ark3116_set_termios(tty, port, NULL); 404 412 405 413 err_out: 406 414 kfree(buf);
+11 -3
drivers/usb/serial/ftdi_sio.c
··· 2104 2104 2105 2105 cflag = termios->c_cflag; 2106 2106 2107 - /* FIXME -For this cut I don't care if the line is really changing or 2108 - not - so just do the change regardless - should be able to 2109 - compare old_termios and tty->termios */ 2107 + if (old_termios->c_cflag == termios->c_cflag 2108 + && old_termios->c_ispeed == termios->c_ispeed 2109 + && old_termios->c_ospeed == termios->c_ospeed) 2110 + goto no_c_cflag_changes; 2111 + 2110 2112 /* NOTE These routines can get interrupted by 2111 2113 ftdi_sio_read_bulk_callback - need to examine what this means - 2112 2114 don't see any problems yet */ 2115 + 2116 + if ((old_termios->c_cflag & (CSIZE|PARODD|PARENB|CMSPAR|CSTOPB)) == 2117 + (termios->c_cflag & (CSIZE|PARODD|PARENB|CMSPAR|CSTOPB))) 2118 + goto no_data_parity_stop_changes; 2113 2119 2114 2120 /* Set number of data bits, parity, stop bits */ 2115 2121 ··· 2157 2151 } 2158 2152 2159 2153 /* Now do the baudrate */ 2154 + no_data_parity_stop_changes: 2160 2155 if ((cflag & CBAUD) == B0) { 2161 2156 /* Disable flow control */ 2162 2157 if (usb_control_msg(dev, usb_sndctrlpipe(dev, 0), ··· 2185 2178 2186 2179 /* Set flow control */ 2187 2180 /* Note device also supports DTR/CD (ugh) and Xon/Xoff in hardware */ 2181 + no_c_cflag_changes: 2188 2182 if (cflag & CRTSCTS) { 2189 2183 dbg("%s Setting to CRTSCTS flow control", __func__); 2190 2184 if (usb_control_msg(dev,
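The ftdi_sio.c hunk above replaces "always reprogram" with early-outs when the relevant `c_cflag` bits are unchanged. A hedged standalone sketch of the mask comparison it introduces (the helper name is invented; the mask mirrors the one in the patch):

```c
#include <termios.h>

/* CMSPAR (mark/space parity) is Linux-specific; treat it as absent on
 * libcs that do not define it. */
#ifndef CMSPAR
#define CMSPAR 0
#endif

/* Bits of c_cflag that select data size, parity and stop bits -- the
 * same set the patch compares before rewriting the chip's registers. */
#define FRAMING_MASK (CSIZE | PARODD | PARENB | CMSPAR | CSTOPB)

/* Return nonzero when the framing-related bits differ, i.e. when the
 * hardware actually needs to be reprogrammed. */
static int framing_changed(tcflag_t old_cflag, tcflag_t new_cflag)
{
	return (old_cflag & FRAMING_MASK) != (new_cflag & FRAMING_MASK);
}
```

Masking before comparing is what lets unrelated flag changes (e.g. hang-up-on-close) skip the expensive USB control transfers.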
+28
drivers/usb/serial/option.c
··· 156 156 #define HUAWEI_PRODUCT_K4511 0x14CC 157 157 #define HUAWEI_PRODUCT_ETS1220 0x1803 158 158 #define HUAWEI_PRODUCT_E353 0x1506 159 + #define HUAWEI_PRODUCT_E173S 0x1C05 159 160 160 161 #define QUANTA_VENDOR_ID 0x0408 161 162 #define QUANTA_PRODUCT_Q101 0xEA02 ··· 317 316 #define ZTE_PRODUCT_AC8710 0xfff1 318 317 #define ZTE_PRODUCT_AC2726 0xfff5 319 318 #define ZTE_PRODUCT_AC8710T 0xffff 319 + #define ZTE_PRODUCT_MC2718 0xffe8 320 + #define ZTE_PRODUCT_AD3812 0xffeb 321 + #define ZTE_PRODUCT_MC2716 0xffed 320 322 321 323 #define BENQ_VENDOR_ID 0x04a5 322 324 #define BENQ_PRODUCT_H10 0x4068 ··· 472 468 #define YUGA_PRODUCT_CLU528 0x260D 473 469 #define YUGA_PRODUCT_CLU526 0x260F 474 470 471 + /* Viettel products */ 472 + #define VIETTEL_VENDOR_ID 0x2262 473 + #define VIETTEL_PRODUCT_VT1000 0x0002 474 + 475 475 /* some devices interfaces need special handling due to a number of reasons */ 476 476 enum option_blacklist_reason { 477 477 OPTION_BLACKLIST_NONE = 0, ··· 506 498 static const struct option_blacklist_info zte_k3765_z_blacklist = { 507 499 .sendsetup = BIT(0) | BIT(1) | BIT(2), 508 500 .reserved = BIT(4), 501 + }; 502 + 503 + static const struct option_blacklist_info zte_ad3812_z_blacklist = { 504 + .sendsetup = BIT(0) | BIT(1) | BIT(2), 505 + }; 506 + 507 + static const struct option_blacklist_info zte_mc2718_z_blacklist = { 508 + .sendsetup = BIT(1) | BIT(2) | BIT(3) | BIT(4), 509 + }; 510 + 511 + static const struct option_blacklist_info zte_mc2716_z_blacklist = { 512 + .sendsetup = BIT(1) | BIT(2) | BIT(3), 509 513 }; 510 514 511 515 static const struct option_blacklist_info huawei_cdc12_blacklist = { ··· 642 622 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E143D, 0xff, 0xff, 0xff) }, 643 623 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E143E, 0xff, 0xff, 0xff) }, 644 624 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E143F, 0xff, 0xff, 0xff) }, 625 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173S, 0xff, 0xff, 0xff) }, 645 626 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4505, 0xff, 0xff, 0xff), 646 627 .driver_info = (kernel_ulong_t) &huawei_cdc12_blacklist }, 647 628 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K3765, 0xff, 0xff, 0xff), ··· 1064 1043 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC8710, 0xff, 0xff, 0xff) }, 1065 1044 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC2726, 0xff, 0xff, 0xff) }, 1066 1045 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AC8710T, 0xff, 0xff, 0xff) }, 1046 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2718, 0xff, 0xff, 0xff), 1047 + .driver_info = (kernel_ulong_t)&zte_mc2718_z_blacklist }, 1048 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_AD3812, 0xff, 0xff, 0xff), 1049 + .driver_info = (kernel_ulong_t)&zte_ad3812_z_blacklist }, 1050 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2716, 0xff, 0xff, 0xff), 1051 + .driver_info = (kernel_ulong_t)&zte_mc2716_z_blacklist }, 1067 1052 { USB_DEVICE(BENQ_VENDOR_ID, BENQ_PRODUCT_H10) }, 1068 1053 { USB_DEVICE(DLINK_VENDOR_ID, DLINK_PRODUCT_DWM_652) }, 1069 1054 { USB_DEVICE(ALINK_VENDOR_ID, DLINK_PRODUCT_DWM_652_U5) }, /* Yes, ALINK_VENDOR_ID */ ··· 1168 1141 { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU516) }, 1169 1142 { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU528) }, 1170 1143 { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CLU526) }, 1144 + { USB_DEVICE_AND_INTERFACE_INFO(VIETTEL_VENDOR_ID, VIETTEL_PRODUCT_VT1000, 0xff, 0xff, 0xff) }, 1171 1145 { } /* Terminating entry */ 1172 1146 }; 1173 1147 MODULE_DEVICE_TABLE(usb, option_ids);
-1
drivers/usb/serial/pl2303.c
··· 91 91 { USB_DEVICE(SONY_VENDOR_ID, SONY_QN3USB_PRODUCT_ID) }, 92 92 { USB_DEVICE(SANWA_VENDOR_ID, SANWA_PRODUCT_ID) }, 93 93 { USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530_PRODUCT_ID) }, 94 - { USB_DEVICE(WINCHIPHEAD_VENDOR_ID, WINCHIPHEAD_USBSER_PRODUCT_ID) }, 95 94 { USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) }, 96 95 { } /* Terminating entry */ 97 96 };
-4
drivers/usb/serial/pl2303.h
··· 145 145 #define ADLINK_VENDOR_ID 0x0b63 146 146 #define ADLINK_ND6530_PRODUCT_ID 0x6530 147 147 148 - /* WinChipHead USB->RS 232 adapter */ 149 - #define WINCHIPHEAD_VENDOR_ID 0x4348 150 - #define WINCHIPHEAD_USBSER_PRODUCT_ID 0x5523 151 - 152 148 /* SMART USB Serial Adapter */ 153 149 #define SMART_VENDOR_ID 0x0b8c 154 150 #define SMART_PRODUCT_ID 0x2303
+1 -2
drivers/usb/storage/ene_ub6250.c
··· 1762 1762 result = ene_send_scsi_cmd(us, FDIR_WRITE, scsi_sglist(srb), 1); 1763 1763 } else { 1764 1764 void *buf; 1765 - int offset; 1765 + int offset = 0; 1766 1766 u16 PhyBlockAddr; 1767 1767 u8 PageNum; 1768 - u32 result; 1769 1768 u16 len, oldphy, newphy; 1770 1769 1771 1770 buf = kmalloc(blenByte, GFP_KERNEL);
+3 -4
drivers/usb/storage/protocol.c
··· 59 59 60 60 void usb_stor_pad12_command(struct scsi_cmnd *srb, struct us_data *us) 61 61 { 62 - /* Pad the SCSI command with zeros out to 12 bytes 62 + /* 63 + * Pad the SCSI command with zeros out to 12 bytes. If the 64 + * command already is 12 bytes or longer, leave it alone. 63 65 * 64 66 * NOTE: This only works because a scsi_cmnd struct field contains 65 67 * a unsigned char cmnd[16], so we know we have storage available 66 68 */ 67 69 for (; srb->cmd_len<12; srb->cmd_len++) 68 70 srb->cmnd[srb->cmd_len] = 0; 69 - 70 - /* set command length to 12 bytes */ 71 - srb->cmd_len = 12; 72 71 73 72 /* send the command to the transport layer */ 74 73 usb_stor_invoke_transport(srb, us);
+14 -1
drivers/video/da8xx-fb.c
··· 116 116 /* Clock registers available only on Version 2 */ 117 117 #define LCD_CLK_ENABLE_REG 0x6c 118 118 #define LCD_CLK_RESET_REG 0x70 119 + #define LCD_CLK_MAIN_RESET BIT(3) 119 120 120 121 #define LCD_NUM_BUFFERS 2 121 122 ··· 245 244 { 246 245 u32 reg; 247 246 247 + /* Bring LCDC out of reset */ 248 + if (lcd_revision == LCD_VERSION_2) 249 + lcdc_write(0, LCD_CLK_RESET_REG); 250 + 248 251 reg = lcdc_read(LCD_RASTER_CTRL_REG); 249 252 if (!(reg & LCD_RASTER_ENABLE)) 250 253 lcdc_write(reg | LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG); ··· 262 257 reg = lcdc_read(LCD_RASTER_CTRL_REG); 263 258 if (reg & LCD_RASTER_ENABLE) 264 259 lcdc_write(reg & ~LCD_RASTER_ENABLE, LCD_RASTER_CTRL_REG); 260 + 261 + if (lcd_revision == LCD_VERSION_2) 262 + /* Write 1 to reset LCDC */ 263 + lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG); 265 264 } 266 265 267 266 static void lcd_blit(int load_mode, struct da8xx_fb_par *par) ··· 593 584 lcdc_write(0, LCD_DMA_CTRL_REG); 594 585 lcdc_write(0, LCD_RASTER_CTRL_REG); 595 586 596 - if (lcd_revision == LCD_VERSION_2) 587 + if (lcd_revision == LCD_VERSION_2) { 597 588 lcdc_write(0, LCD_INT_ENABLE_SET_REG); 589 + /* Write 1 to reset */ 590 + lcdc_write(LCD_CLK_MAIN_RESET, LCD_CLK_RESET_REG); 591 + lcdc_write(0, LCD_CLK_RESET_REG); 592 + } 598 593 } 599 594 600 595 static void lcd_calc_clk_divider(struct da8xx_fb_par *par)
+1
drivers/video/omap/dispc.c
··· 19 19 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 20 20 */ 21 21 #include <linux/kernel.h> 22 + #include <linux/module.h> 22 23 #include <linux/dma-mapping.h> 23 24 #include <linux/mm.h> 24 25 #include <linux/vmalloc.h>
+5 -6
drivers/video/omap2/dss/dispc.c
··· 1720 1720 const int maxdownscale = dss_feat_get_param_max(FEAT_PARAM_DOWNSCALE); 1721 1721 unsigned long fclk = 0; 1722 1722 1723 - if ((ovl->caps & OMAP_DSS_OVL_CAP_SCALE) == 0) { 1724 - if (width != out_width || height != out_height) 1725 - return -EINVAL; 1726 - else 1727 - return 0; 1728 - } 1723 + if (width == out_width && height == out_height) 1724 + return 0; 1725 + 1726 + if ((ovl->caps & OMAP_DSS_OVL_CAP_SCALE) == 0) 1727 + return -EINVAL; 1729 1728 1730 1729 if (out_width < width / maxdownscale || 1731 1730 out_width > width * 8)
+1 -1
drivers/video/omap2/dss/hdmi.c
··· 269 269 unsigned long hdmi_get_pixel_clock(void) 270 270 { 271 271 /* HDMI Pixel Clock in Mhz */ 272 - return hdmi.ip_data.cfg.timings.timings.pixel_clock * 10000; 272 + return hdmi.ip_data.cfg.timings.timings.pixel_clock * 1000; 273 273 } 274 274 275 275 static void hdmi_compute_pll(struct omap_dss_device *dssdev, int phy,
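The one-character hdmi.c fix above is a unit-conversion bug: `timings.pixel_clock` is kept in kHz, so converting to Hz multiplies by 1000, not 10000. A trivial standalone sketch of the corrected conversion (the helper name is illustrative, not the driver's):

```c
/* pixel_clock is stored in kHz (e.g. 74250 for 720p60); Hz is
 * therefore khz * 1000. The buggy code used * 10000, reporting a
 * pixel clock ten times too high. */
static unsigned long pixel_clock_khz_to_hz(unsigned long khz)
{
	return khz * 1000UL;
}
```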
+2 -2
drivers/video/via/share.h
··· 559 559 #define M1200X720_R60_VSP POSITIVE 560 560 561 561 /* 1200x900@60 Sync Polarity (DCON) */ 562 - #define M1200X900_R60_HSP NEGATIVE 563 - #define M1200X900_R60_VSP NEGATIVE 562 + #define M1200X900_R60_HSP POSITIVE 563 + #define M1200X900_R60_VSP POSITIVE 564 564 565 565 /* 1280x600@60 Sync Polarity (GTF Mode) */ 566 566 #define M1280x600_R60_HSP NEGATIVE
+1 -1
drivers/virtio/Kconfig
··· 37 37 38 38 config VIRTIO_MMIO 39 39 tristate "Platform bus driver for memory mapped virtio devices (EXPERIMENTAL)" 40 - depends on EXPERIMENTAL 40 + depends on HAS_IOMEM && EXPERIMENTAL 41 41 select VIRTIO 42 42 select VIRTIO_RING 43 43 ---help---
+1 -1
drivers/virtio/virtio_mmio.c
··· 118 118 vring_transport_features(vdev); 119 119 120 120 for (i = 0; i < ARRAY_SIZE(vdev->features); i++) { 121 - writel(i, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SET); 121 + writel(i, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SEL); 122 122 writel(vdev->features[i], 123 123 vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES); 124 124 }
+18
drivers/virtio/virtio_pci.c
··· 169 169 iowrite8(status, vp_dev->ioaddr + VIRTIO_PCI_STATUS); 170 170 } 171 171 172 + /* wait for pending irq handlers */ 173 + static void vp_synchronize_vectors(struct virtio_device *vdev) 174 + { 175 + struct virtio_pci_device *vp_dev = to_vp_device(vdev); 176 + int i; 177 + 178 + if (vp_dev->intx_enabled) 179 + synchronize_irq(vp_dev->pci_dev->irq); 180 + 181 + for (i = 0; i < vp_dev->msix_vectors; ++i) 182 + synchronize_irq(vp_dev->msix_entries[i].vector); 183 + } 184 + 172 185 static void vp_reset(struct virtio_device *vdev) 173 186 { 174 187 struct virtio_pci_device *vp_dev = to_vp_device(vdev); 175 188 /* 0 status means a reset. */ 176 189 iowrite8(0, vp_dev->ioaddr + VIRTIO_PCI_STATUS); 190 + /* Flush out the status write, and flush in device writes, 191 + * including MSi-X interrupts, if any. */ 192 + ioread8(vp_dev->ioaddr + VIRTIO_PCI_STATUS); 193 + /* Flush pending VQ/configuration callbacks. */ 194 + vp_synchronize_vectors(vdev); 177 195 } 178 196 179 197 /* the notify function used when creating a virt queue */
-7
drivers/watchdog/Kconfig
··· 314 314 To compile this driver as a module, choose M here: the 315 315 module will be called nuc900_wdt. 316 316 317 - config ADX_WATCHDOG 318 - tristate "Avionic Design Xanthos watchdog" 319 - depends on ARCH_PXA_ADX 320 - help 321 - Say Y here if you want support for the watchdog timer on Avionic 322 - Design Xanthos boards. 323 - 324 317 config TS72XX_WATCHDOG 325 318 tristate "TS-72XX SBC Watchdog" 326 319 depends on MACH_TS72XX
-1
drivers/watchdog/Makefile
··· 51 51 obj-$(CONFIG_COH901327_WATCHDOG) += coh901327_wdt.o 52 52 obj-$(CONFIG_STMP3XXX_WATCHDOG) += stmp3xxx_wdt.o 53 53 obj-$(CONFIG_NUC900_WATCHDOG) += nuc900_wdt.o 54 - obj-$(CONFIG_ADX_WATCHDOG) += adx_wdt.o 55 54 obj-$(CONFIG_TS72XX_WATCHDOG) += ts72xx_wdt.o 56 55 obj-$(CONFIG_IMX2_WDT) += imx2_wdt.o 57 56
-355
drivers/watchdog/adx_wdt.c
··· 1 - /* 2 - * Copyright (C) 2008-2009 Avionic Design GmbH 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 as 6 - * published by the Free Software Foundation. 7 - */ 8 - 9 - #include <linux/fs.h> 10 - #include <linux/gfp.h> 11 - #include <linux/io.h> 12 - #include <linux/miscdevice.h> 13 - #include <linux/module.h> 14 - #include <linux/platform_device.h> 15 - #include <linux/types.h> 16 - #include <linux/uaccess.h> 17 - #include <linux/watchdog.h> 18 - 19 - #define WATCHDOG_NAME "adx-wdt" 20 - 21 - /* register offsets */ 22 - #define ADX_WDT_CONTROL 0x00 23 - #define ADX_WDT_CONTROL_ENABLE (1 << 0) 24 - #define ADX_WDT_CONTROL_nRESET (1 << 1) 25 - #define ADX_WDT_TIMEOUT 0x08 26 - 27 - static struct platform_device *adx_wdt_dev; 28 - static unsigned long driver_open; 29 - 30 - #define WDT_STATE_STOP 0 31 - #define WDT_STATE_START 1 32 - 33 - struct adx_wdt { 34 - void __iomem *base; 35 - unsigned long timeout; 36 - unsigned int state; 37 - unsigned int wake; 38 - spinlock_t lock; 39 - }; 40 - 41 - static const struct watchdog_info adx_wdt_info = { 42 - .identity = "Avionic Design Xanthos Watchdog", 43 - .options = WDIOF_SETTIMEOUT | WDIOF_KEEPALIVEPING, 44 - }; 45 - 46 - static void adx_wdt_start_locked(struct adx_wdt *wdt) 47 - { 48 - u32 ctrl; 49 - 50 - ctrl = readl(wdt->base + ADX_WDT_CONTROL); 51 - ctrl |= ADX_WDT_CONTROL_ENABLE; 52 - writel(ctrl, wdt->base + ADX_WDT_CONTROL); 53 - wdt->state = WDT_STATE_START; 54 - } 55 - 56 - static void adx_wdt_start(struct adx_wdt *wdt) 57 - { 58 - unsigned long flags; 59 - 60 - spin_lock_irqsave(&wdt->lock, flags); 61 - adx_wdt_start_locked(wdt); 62 - spin_unlock_irqrestore(&wdt->lock, flags); 63 - } 64 - 65 - static void adx_wdt_stop_locked(struct adx_wdt *wdt) 66 - { 67 - u32 ctrl; 68 - 69 - ctrl = readl(wdt->base + ADX_WDT_CONTROL); 70 - ctrl &= ~ADX_WDT_CONTROL_ENABLE; 71 - writel(ctrl, wdt->base + ADX_WDT_CONTROL); 
72 - wdt->state = WDT_STATE_STOP; 73 - } 74 - 75 - static void adx_wdt_stop(struct adx_wdt *wdt) 76 - { 77 - unsigned long flags; 78 - 79 - spin_lock_irqsave(&wdt->lock, flags); 80 - adx_wdt_stop_locked(wdt); 81 - spin_unlock_irqrestore(&wdt->lock, flags); 82 - } 83 - 84 - static void adx_wdt_set_timeout(struct adx_wdt *wdt, unsigned long seconds) 85 - { 86 - unsigned long timeout = seconds * 1000; 87 - unsigned long flags; 88 - unsigned int state; 89 - 90 - spin_lock_irqsave(&wdt->lock, flags); 91 - state = wdt->state; 92 - adx_wdt_stop_locked(wdt); 93 - writel(timeout, wdt->base + ADX_WDT_TIMEOUT); 94 - 95 - if (state == WDT_STATE_START) 96 - adx_wdt_start_locked(wdt); 97 - 98 - wdt->timeout = timeout; 99 - spin_unlock_irqrestore(&wdt->lock, flags); 100 - } 101 - 102 - static void adx_wdt_get_timeout(struct adx_wdt *wdt, unsigned long *seconds) 103 - { 104 - *seconds = wdt->timeout / 1000; 105 - } 106 - 107 - static void adx_wdt_keepalive(struct adx_wdt *wdt) 108 - { 109 - unsigned long flags; 110 - 111 - spin_lock_irqsave(&wdt->lock, flags); 112 - writel(wdt->timeout, wdt->base + ADX_WDT_TIMEOUT); 113 - spin_unlock_irqrestore(&wdt->lock, flags); 114 - } 115 - 116 - static int adx_wdt_open(struct inode *inode, struct file *file) 117 - { 118 - struct adx_wdt *wdt = platform_get_drvdata(adx_wdt_dev); 119 - 120 - if (test_and_set_bit(0, &driver_open)) 121 - return -EBUSY; 122 - 123 - file->private_data = wdt; 124 - adx_wdt_set_timeout(wdt, 30); 125 - adx_wdt_start(wdt); 126 - 127 - return nonseekable_open(inode, file); 128 - } 129 - 130 - static int adx_wdt_release(struct inode *inode, struct file *file) 131 - { 132 - struct adx_wdt *wdt = file->private_data; 133 - 134 - adx_wdt_stop(wdt); 135 - clear_bit(0, &driver_open); 136 - 137 - return 0; 138 - } 139 - 140 - static long adx_wdt_ioctl(struct file *file, unsigned int cmd, unsigned long arg) 141 - { 142 - struct adx_wdt *wdt = file->private_data; 143 - void __user *argp = (void __user *)arg; 144 - unsigned long 
__user *p = argp; 145 - unsigned long seconds = 0; 146 - unsigned int options; 147 - long ret = -EINVAL; 148 - 149 - switch (cmd) { 150 - case WDIOC_GETSUPPORT: 151 - if (copy_to_user(argp, &adx_wdt_info, sizeof(adx_wdt_info))) 152 - return -EFAULT; 153 - else 154 - return 0; 155 - 156 - case WDIOC_GETSTATUS: 157 - case WDIOC_GETBOOTSTATUS: 158 - return put_user(0, p); 159 - 160 - case WDIOC_KEEPALIVE: 161 - adx_wdt_keepalive(wdt); 162 - return 0; 163 - 164 - case WDIOC_SETTIMEOUT: 165 - if (get_user(seconds, p)) 166 - return -EFAULT; 167 - 168 - adx_wdt_set_timeout(wdt, seconds); 169 - 170 - /* fallthrough */ 171 - case WDIOC_GETTIMEOUT: 172 - adx_wdt_get_timeout(wdt, &seconds); 173 - return put_user(seconds, p); 174 - 175 - case WDIOC_SETOPTIONS: 176 - if (copy_from_user(&options, argp, sizeof(options))) 177 - return -EFAULT; 178 - 179 - if (options & WDIOS_DISABLECARD) { 180 - adx_wdt_stop(wdt); 181 - ret = 0; 182 - } 183 - 184 - if (options & WDIOS_ENABLECARD) { 185 - adx_wdt_start(wdt); 186 - ret = 0; 187 - } 188 - 189 - return ret; 190 - 191 - default: 192 - break; 193 - } 194 - 195 - return -ENOTTY; 196 - } 197 - 198 - static ssize_t adx_wdt_write(struct file *file, const char __user *data, 199 - size_t len, loff_t *ppos) 200 - { 201 - struct adx_wdt *wdt = file->private_data; 202 - 203 - if (len) 204 - adx_wdt_keepalive(wdt); 205 - 206 - return len; 207 - } 208 - 209 - static const struct file_operations adx_wdt_fops = { 210 - .owner = THIS_MODULE, 211 - .llseek = no_llseek, 212 - .open = adx_wdt_open, 213 - .release = adx_wdt_release, 214 - .unlocked_ioctl = adx_wdt_ioctl, 215 - .write = adx_wdt_write, 216 - }; 217 - 218 - static struct miscdevice adx_wdt_miscdev = { 219 - .minor = WATCHDOG_MINOR, 220 - .name = "watchdog", 221 - .fops = &adx_wdt_fops, 222 - }; 223 - 224 - static int __devinit adx_wdt_probe(struct platform_device *pdev) 225 - { 226 - struct resource *res; 227 - struct adx_wdt *wdt; 228 - int ret = 0; 229 - u32 ctrl; 230 - 231 - wdt = 
devm_kzalloc(&pdev->dev, sizeof(*wdt), GFP_KERNEL); 232 - if (!wdt) { 233 - dev_err(&pdev->dev, "cannot allocate WDT structure\n"); 234 - return -ENOMEM; 235 - } 236 - 237 - spin_lock_init(&wdt->lock); 238 - 239 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 240 - if (!res) { 241 - dev_err(&pdev->dev, "cannot obtain I/O memory region\n"); 242 - return -ENXIO; 243 - } 244 - 245 - res = devm_request_mem_region(&pdev->dev, res->start, 246 - resource_size(res), res->name); 247 - if (!res) { 248 - dev_err(&pdev->dev, "cannot request I/O memory region\n"); 249 - return -ENXIO; 250 - } 251 - 252 - wdt->base = devm_ioremap_nocache(&pdev->dev, res->start, 253 - resource_size(res)); 254 - if (!wdt->base) { 255 - dev_err(&pdev->dev, "cannot remap I/O memory region\n"); 256 - return -ENXIO; 257 - } 258 - 259 - /* disable watchdog and reboot on timeout */ 260 - ctrl = readl(wdt->base + ADX_WDT_CONTROL); 261 - ctrl &= ~ADX_WDT_CONTROL_ENABLE; 262 - ctrl &= ~ADX_WDT_CONTROL_nRESET; 263 - writel(ctrl, wdt->base + ADX_WDT_CONTROL); 264 - 265 - platform_set_drvdata(pdev, wdt); 266 - adx_wdt_dev = pdev; 267 - 268 - ret = misc_register(&adx_wdt_miscdev); 269 - if (ret) { 270 - dev_err(&pdev->dev, "cannot register miscdev on minor %d " 271 - "(err=%d)\n", WATCHDOG_MINOR, ret); 272 - return ret; 273 - } 274 - 275 - return 0; 276 - } 277 - 278 - static int __devexit adx_wdt_remove(struct platform_device *pdev) 279 - { 280 - struct adx_wdt *wdt = platform_get_drvdata(pdev); 281 - 282 - misc_deregister(&adx_wdt_miscdev); 283 - adx_wdt_stop(wdt); 284 - platform_set_drvdata(pdev, NULL); 285 - 286 - return 0; 287 - } 288 - 289 - static void adx_wdt_shutdown(struct platform_device *pdev) 290 - { 291 - struct adx_wdt *wdt = platform_get_drvdata(pdev); 292 - adx_wdt_stop(wdt); 293 - } 294 - 295 - #ifdef CONFIG_PM 296 - static int adx_wdt_suspend(struct device *dev) 297 - { 298 - struct platform_device *pdev = to_platform_device(dev); 299 - struct adx_wdt *wdt = 
platform_get_drvdata(pdev); 300 - 301 - wdt->wake = (wdt->state == WDT_STATE_START) ? 1 : 0; 302 - adx_wdt_stop(wdt); 303 - 304 - return 0; 305 - } 306 - 307 - static int adx_wdt_resume(struct device *dev) 308 - { 309 - struct platform_device *pdev = to_platform_device(dev); 310 - struct adx_wdt *wdt = platform_get_drvdata(pdev); 311 - 312 - if (wdt->wake) 313 - adx_wdt_start(wdt); 314 - 315 - return 0; 316 - } 317 - 318 - static const struct dev_pm_ops adx_wdt_pm_ops = { 319 - .suspend = adx_wdt_suspend, 320 - .resume = adx_wdt_resume, 321 - }; 322 - 323 - # define ADX_WDT_PM_OPS (&adx_wdt_pm_ops) 324 - #else 325 - # define ADX_WDT_PM_OPS NULL 326 - #endif 327 - 328 - static struct platform_driver adx_wdt_driver = { 329 - .probe = adx_wdt_probe, 330 - .remove = __devexit_p(adx_wdt_remove), 331 - .shutdown = adx_wdt_shutdown, 332 - .driver = { 333 - .name = WATCHDOG_NAME, 334 - .owner = THIS_MODULE, 335 - .pm = ADX_WDT_PM_OPS, 336 - }, 337 - }; 338 - 339 - static int __init adx_wdt_init(void) 340 - { 341 - return platform_driver_register(&adx_wdt_driver); 342 - } 343 - 344 - static void __exit adx_wdt_exit(void) 345 - { 346 - platform_driver_unregister(&adx_wdt_driver); 347 - } 348 - 349 - module_init(adx_wdt_init); 350 - module_exit(adx_wdt_exit); 351 - 352 - MODULE_DESCRIPTION("Avionic Design Xanthos Watchdog Driver"); 353 - MODULE_LICENSE("GPL v2"); 354 - MODULE_AUTHOR("Thierry Reding <thierry.reding@avionic-design.de>"); 355 - MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
+2 -2
drivers/watchdog/s3c2410_wdt.c
··· 401 401 402 402 dev_info(dev, "watchdog %sactive, reset %sabled, irq %sabled\n", 403 403 (wtcon & S3C2410_WTCON_ENABLE) ? "" : "in", 404 - (wtcon & S3C2410_WTCON_RSTEN) ? "" : "dis", 405 - (wtcon & S3C2410_WTCON_INTEN) ? "" : "en"); 404 + (wtcon & S3C2410_WTCON_RSTEN) ? "en" : "dis", 405 + (wtcon & S3C2410_WTCON_INTEN) ? "en" : "dis"); 406 406 407 407 return 0; 408 408
+1 -1
drivers/watchdog/wm831x_wdt.c
··· 150 150 if (wm831x_wdt_cfgs[i].time == timeout) 151 151 break; 152 152 if (i == ARRAY_SIZE(wm831x_wdt_cfgs)) 153 - ret = -EINVAL; 153 + return -EINVAL; 154 154 155 155 ret = wm831x_reg_unlock(wm831x); 156 156 if (ret == 0) {
+1 -1
fs/btrfs/backref.c
··· 683 683 return PTR_ERR(fspath); 684 684 685 685 if (fspath > fspath_min) { 686 - ipath->fspath->val[i] = (u64)fspath; 686 + ipath->fspath->val[i] = (u64)(unsigned long)fspath; 687 687 ++ipath->fspath->elem_cnt; 688 688 ipath->fspath->bytes_left = fspath - fspath_min; 689 689 } else {
+16 -1
fs/btrfs/ctree.c
··· 514 514 struct btrfs_root *root, 515 515 struct extent_buffer *buf) 516 516 { 517 + /* ensure we can see the force_cow */ 518 + smp_rmb(); 519 + 520 + /* 521 + * We do not need to cow a block if 522 + * 1) this block is not created or changed in this transaction; 523 + * 2) this block does not belong to TREE_RELOC tree; 524 + * 3) the root is not forced COW. 525 + * 526 + * What is forced COW: 527 + * when we create a snapshot while committing the transaction, 528 + * after we've finished copying the src root, we must COW the shared 529 + * block to ensure the metadata consistency. 530 + */ 517 531 if (btrfs_header_generation(buf) == trans->transid && 518 532 !btrfs_header_flag(buf, BTRFS_HEADER_FLAG_WRITTEN) && 519 533 !(root->root_key.objectid != BTRFS_TREE_RELOC_OBJECTID && 520 534 btrfs_header_flag(buf, BTRFS_HEADER_FLAG_RELOC)) && 535 + !root->force_cow) 521 536 return 0; 522 537 return 1; 523 538 }
+7 -1
fs/btrfs/ctree.h
··· 848 848 enum btrfs_caching_type { 849 849 BTRFS_CACHE_NO = 0, 850 850 BTRFS_CACHE_STARTED = 1, 851 - BTRFS_CACHE_FINISHED = 2, 851 + BTRFS_CACHE_FAST = 2, 852 + BTRFS_CACHE_FINISHED = 3, 852 853 }; 853 854 854 855 enum btrfs_disk_cache_state { ··· 1272 1271 * for stat. It may be used for more later 1273 1272 */ 1274 1273 dev_t anon_dev; 1274 + 1275 + int force_cow; 1275 1276 }; 1276 1277 1277 1278 struct btrfs_ioctl_defrag_range_args { ··· 2369 2366 int btrfs_block_rsv_refill(struct btrfs_root *root, 2370 2367 struct btrfs_block_rsv *block_rsv, 2371 2368 u64 min_reserved); 2369 + int btrfs_block_rsv_refill_noflush(struct btrfs_root *root, 2370 + struct btrfs_block_rsv *block_rsv, 2371 + u64 min_reserved); 2372 2372 int btrfs_block_rsv_migrate(struct btrfs_block_rsv *src_rsv, 2373 2373 struct btrfs_block_rsv *dst_rsv, 2374 2374 u64 num_bytes);
+129 -18
fs/btrfs/disk-io.c
··· 620 620 621 621 static int btree_io_failed_hook(struct bio *failed_bio, 622 622 struct page *page, u64 start, u64 end, 623 - u64 mirror_num, struct extent_state *state) 623 + int mirror_num, struct extent_state *state) 624 624 { 625 625 struct extent_io_tree *tree; 626 626 unsigned long len; ··· 2573 2573 int errors = 0; 2574 2574 u32 crc; 2575 2575 u64 bytenr; 2576 - int last_barrier = 0; 2577 2576 2578 2577 if (max_mirrors == 0) 2579 2578 max_mirrors = BTRFS_SUPER_MIRROR_MAX; 2580 - 2581 - /* make sure only the last submit_bh does a barrier */ 2582 - if (do_barriers) { 2583 - for (i = 0; i < max_mirrors; i++) { 2584 - bytenr = btrfs_sb_offset(i); 2585 - if (bytenr + BTRFS_SUPER_INFO_SIZE >= 2586 - device->total_bytes) 2587 - break; 2588 - last_barrier = i; 2589 - } 2590 - } 2591 2579 2592 2580 for (i = 0; i < max_mirrors; i++) { 2593 2581 bytenr = btrfs_sb_offset(i); ··· 2622 2634 bh->b_end_io = btrfs_end_buffer_write_sync; 2623 2635 } 2624 2636 2625 - if (i == last_barrier && do_barriers) 2626 - ret = submit_bh(WRITE_FLUSH_FUA, bh); 2627 - else 2628 - ret = submit_bh(WRITE_SYNC, bh); 2629 - 2637 + /* 2638 + * we fua the first super. The others we allow 2639 + * to go down lazy. 2640 + */ 2641 + ret = submit_bh(WRITE_FUA, bh); 2630 2642 if (ret) 2631 2643 errors++; 2632 2644 } 2633 2645 return errors < i ? 0 : -1; 2646 + } 2647 + 2648 + /* 2649 + * endio for the write_dev_flush, this will wake anyone waiting 2650 + * for the barrier when it is done 2651 + */ 2652 + static void btrfs_end_empty_barrier(struct bio *bio, int err) 2653 + { 2654 + if (err) { 2655 + if (err == -EOPNOTSUPP) 2656 + set_bit(BIO_EOPNOTSUPP, &bio->bi_flags); 2657 + clear_bit(BIO_UPTODATE, &bio->bi_flags); 2658 + } 2659 + if (bio->bi_private) 2660 + complete(bio->bi_private); 2661 + bio_put(bio); 2662 + } 2663 + 2664 + /* 2665 + * trigger flushes for one of the devices. If you pass wait == 0, the flushes are 2666 + sent down. With wait == 1, it waits for the previous flush. 
2667 + * 2668 + * any device where the flush fails with eopnotsupp is flagged as not-barrier 2669 + capable 2670 + */ 2671 + static int write_dev_flush(struct btrfs_device *device, int wait) 2672 + { 2673 + struct bio *bio; 2674 + int ret = 0; 2675 + 2676 + if (device->nobarriers) 2677 + return 0; 2678 + 2679 + if (wait) { 2680 + bio = device->flush_bio; 2681 + if (!bio) 2682 + return 0; 2683 + 2684 + wait_for_completion(&device->flush_wait); 2685 + 2686 + if (bio_flagged(bio, BIO_EOPNOTSUPP)) { 2687 + printk("btrfs: disabling barriers on dev %s\n", 2688 + device->name); 2689 + device->nobarriers = 1; 2690 + } 2691 + if (!bio_flagged(bio, BIO_UPTODATE)) { 2692 + ret = -EIO; 2693 + } 2694 + 2695 + /* drop the reference from the wait == 0 run */ 2696 + bio_put(bio); 2697 + device->flush_bio = NULL; 2698 + 2699 + return ret; 2700 + } 2701 + 2702 + /* 2703 + * one reference for us, and we leave it for the 2704 + * caller 2705 + */ 2706 + device->flush_bio = NULL; 2707 + bio = bio_alloc(GFP_NOFS, 0); 2708 + if (!bio) 2709 + return -ENOMEM; 2710 + 2711 + bio->bi_end_io = btrfs_end_empty_barrier; 2712 + bio->bi_bdev = device->bdev; 2713 + init_completion(&device->flush_wait); 2714 + bio->bi_private = &device->flush_wait; 2715 + device->flush_bio = bio; 2716 + 2717 + bio_get(bio); 2718 + submit_bio(WRITE_FLUSH, bio); 2719 + 2720 + return 0; 2721 + } 2722 + 2723 + /* 2724 + * send an empty flush down to each device in parallel, 2725 + * then wait for them 2726 + */ 2727 + static int barrier_all_devices(struct btrfs_fs_info *info) 2728 + { 2729 + struct list_head *head; 2730 + struct btrfs_device *dev; 2731 + int errors = 0; 2732 + int ret; 2733 + 2734 + /* send down all the barriers */ 2735 + head = &info->fs_devices->devices; 2736 + list_for_each_entry_rcu(dev, head, dev_list) { 2737 + if (!dev->bdev) { 2738 + errors++; 2739 + continue; 2740 + } 2741 + if (!dev->in_fs_metadata || !dev->writeable) 2742 + continue; 2743 + 2744 + ret = write_dev_flush(dev, 0); 2745 + if
(ret) 2746 + errors++; 2747 + } 2748 + 2749 + /* wait for all the barriers */ 2750 + list_for_each_entry_rcu(dev, head, dev_list) { 2751 + if (!dev->bdev) { 2752 + errors++; 2753 + continue; 2754 + } 2755 + if (!dev->in_fs_metadata || !dev->writeable) 2756 + continue; 2757 + 2758 + ret = write_dev_flush(dev, 1); 2759 + if (ret) 2760 + errors++; 2761 + } 2762 + if (errors) 2763 + return -EIO; 2764 + return 0; 2634 2765 } 2635 2766 2636 2767 int write_all_supers(struct btrfs_root *root, int max_mirrors) ··· 2773 2666 2774 2667 mutex_lock(&root->fs_info->fs_devices->device_list_mutex); 2775 2668 head = &root->fs_info->fs_devices->devices; 2669 + 2670 + if (do_barriers) 2671 + barrier_all_devices(root->fs_info); 2672 + 2776 2673 list_for_each_entry_rcu(dev, head, dev_list) { 2777 2674 if (!dev->bdev) { 2778 2675 total_errors++;
+103 -52
fs/btrfs/extent-tree.c
··· 467 467 struct btrfs_root *root, 468 468 int load_cache_only) 469 469 { 470 + DEFINE_WAIT(wait); 470 471 struct btrfs_fs_info *fs_info = cache->fs_info; 471 472 struct btrfs_caching_control *caching_ctl; 472 473 int ret = 0; 473 474 474 - smp_mb(); 475 - if (cache->cached != BTRFS_CACHE_NO) 475 + caching_ctl = kzalloc(sizeof(*caching_ctl), GFP_NOFS); 476 + BUG_ON(!caching_ctl); 477 + 478 + INIT_LIST_HEAD(&caching_ctl->list); 479 + mutex_init(&caching_ctl->mutex); 480 + init_waitqueue_head(&caching_ctl->wait); 481 + caching_ctl->block_group = cache; 482 + caching_ctl->progress = cache->key.objectid; 483 + atomic_set(&caching_ctl->count, 1); 484 + caching_ctl->work.func = caching_thread; 485 + 486 + spin_lock(&cache->lock); 487 + /* 488 + * This should be a rare occasion, but it could happen in the 489 + * case where one thread starts to load the space cache info, and then 490 + * some other thread starts a transaction commit which tries to do an 491 + * allocation while the other thread is still loading the space cache 492 + * info. The previous loop should have kept us from choosing this block 493 + * group, but if we've moved to the state where we will wait on caching 494 + * block groups we need to first check if we're doing a fast load here, 495 + * so we can wait for it to finish, otherwise we could end up allocating 496 + * from a block group whose cache gets evicted for one reason or 497 + * another. 
498 + */ 499 + while (cache->cached == BTRFS_CACHE_FAST) { 500 + struct btrfs_caching_control *ctl; 501 + 502 + ctl = cache->caching_ctl; 503 + atomic_inc(&ctl->count); 504 + prepare_to_wait(&ctl->wait, &wait, TASK_UNINTERRUPTIBLE); 505 + spin_unlock(&cache->lock); 506 + 507 + schedule(); 508 + 509 + finish_wait(&ctl->wait, &wait); 510 + put_caching_control(ctl); 511 + spin_lock(&cache->lock); 512 + } 513 + 514 + if (cache->cached != BTRFS_CACHE_NO) { 515 + spin_unlock(&cache->lock); 516 + kfree(caching_ctl); 476 517 return 0; 518 + } 519 + WARN_ON(cache->caching_ctl); 520 + cache->caching_ctl = caching_ctl; 521 + cache->cached = BTRFS_CACHE_FAST; 522 + spin_unlock(&cache->lock); 477 523 478 524 /* 479 525 * We can't do the read from on-disk cache during a commit since we need ··· 530 484 if (trans && (!trans->transaction->in_commit) && 531 485 (root && root != root->fs_info->tree_root) && 532 486 btrfs_test_opt(root, SPACE_CACHE)) { 533 - spin_lock(&cache->lock); 534 - if (cache->cached != BTRFS_CACHE_NO) { 535 - spin_unlock(&cache->lock); 536 - return 0; 537 - } 538 - cache->cached = BTRFS_CACHE_STARTED; 539 - spin_unlock(&cache->lock); 540 - 541 487 ret = load_free_space_cache(fs_info, cache); 542 488 543 489 spin_lock(&cache->lock); 544 490 if (ret == 1) { 491 + cache->caching_ctl = NULL; 545 492 cache->cached = BTRFS_CACHE_FINISHED; 546 493 cache->last_byte_to_unpin = (u64)-1; 547 494 } else { 548 - cache->cached = BTRFS_CACHE_NO; 495 + if (load_cache_only) { 496 + cache->caching_ctl = NULL; 497 + cache->cached = BTRFS_CACHE_NO; 498 + } else { 499 + cache->cached = BTRFS_CACHE_STARTED; 500 + } 549 501 } 550 502 spin_unlock(&cache->lock); 503 + wake_up(&caching_ctl->wait); 551 504 if (ret == 1) { 505 + put_caching_control(caching_ctl); 552 506 free_excluded_extents(fs_info->extent_root, cache); 553 507 return 0; 554 508 } 555 - } 556 - 557 - if (load_cache_only) 558 - return 0; 559 - 560 - caching_ctl = kzalloc(sizeof(*caching_ctl), GFP_NOFS); 561 - 
BUG_ON(!caching_ctl); 562 - 563 - INIT_LIST_HEAD(&caching_ctl->list); 564 - mutex_init(&caching_ctl->mutex); 565 - init_waitqueue_head(&caching_ctl->wait); 566 - caching_ctl->block_group = cache; 567 - caching_ctl->progress = cache->key.objectid; 568 - /* one for caching kthread, one for caching block group list */ 569 - atomic_set(&caching_ctl->count, 2); 570 - caching_ctl->work.func = caching_thread; 571 - 572 - spin_lock(&cache->lock); 573 - if (cache->cached != BTRFS_CACHE_NO) { 509 + } else { 510 + /* 511 + * We are not going to do the fast caching, set cached to the 512 + * appropriate value and wakeup any waiters. 513 + */ 514 + spin_lock(&cache->lock); 515 + if (load_cache_only) { 516 + cache->caching_ctl = NULL; 517 + cache->cached = BTRFS_CACHE_NO; 518 + } else { 519 + cache->cached = BTRFS_CACHE_STARTED; 520 + } 574 521 spin_unlock(&cache->lock); 575 - kfree(caching_ctl); 522 + wake_up(&caching_ctl->wait); 523 + } 524 + 525 + if (load_cache_only) { 526 + put_caching_control(caching_ctl); 576 527 return 0; 577 528 } 578 - cache->caching_ctl = caching_ctl; 579 - cache->cached = BTRFS_CACHE_STARTED; 580 - spin_unlock(&cache->lock); 581 529 582 530 down_write(&fs_info->extent_commit_sem); 531 + atomic_inc(&caching_ctl->count); 583 532 list_add_tail(&caching_ctl->list, &fs_info->caching_block_groups); 584 533 up_write(&fs_info->extent_commit_sem); 585 534 ··· 3888 3847 return ret; 3889 3848 } 3890 3849 3891 - int btrfs_block_rsv_refill(struct btrfs_root *root, 3892 - struct btrfs_block_rsv *block_rsv, 3893 - u64 min_reserved) 3850 + static inline int __btrfs_block_rsv_refill(struct btrfs_root *root, 3851 + struct btrfs_block_rsv *block_rsv, 3852 + u64 min_reserved, int flush) 3894 3853 { 3895 3854 u64 num_bytes = 0; 3896 3855 int ret = -ENOSPC; ··· 3909 3868 if (!ret) 3910 3869 return 0; 3911 3870 3912 - ret = reserve_metadata_bytes(root, block_rsv, num_bytes, 1); 3871 + ret = reserve_metadata_bytes(root, block_rsv, num_bytes, flush); 3913 3872 if (!ret) { 
3914 3873 block_rsv_add_bytes(block_rsv, num_bytes, 0); 3915 3874 return 0; 3916 3875 } 3917 3876 3918 3877 return ret; 3878 + } 3879 + 3880 + int btrfs_block_rsv_refill(struct btrfs_root *root, 3881 + struct btrfs_block_rsv *block_rsv, 3882 + u64 min_reserved) 3883 + { 3884 + return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 1); 3885 + } 3886 + 3887 + int btrfs_block_rsv_refill_noflush(struct btrfs_root *root, 3888 + struct btrfs_block_rsv *block_rsv, 3889 + u64 min_reserved) 3890 + { 3891 + return __btrfs_block_rsv_refill(root, block_rsv, min_reserved, 0); 3919 3892 } 3920 3893 3921 3894 int btrfs_block_rsv_migrate(struct btrfs_block_rsv *src_rsv, ··· 5233 5178 } 5234 5179 5235 5180 have_block_group: 5236 - if (unlikely(block_group->cached == BTRFS_CACHE_NO)) { 5181 + cached = block_group_cache_done(block_group); 5182 + if (unlikely(!cached)) { 5237 5183 u64 free_percent; 5238 5184 5185 + found_uncached_bg = true; 5239 5186 ret = cache_block_group(block_group, trans, 5240 5187 orig_root, 1); 5241 5188 if (block_group->cached == BTRFS_CACHE_FINISHED) 5242 - goto have_block_group; 5189 + goto alloc; 5243 5190 5244 5191 free_percent = btrfs_block_group_used(&block_group->item); 5245 5192 free_percent *= 100; ··· 5263 5206 orig_root, 0); 5264 5207 BUG_ON(ret); 5265 5208 } 5266 - found_uncached_bg = true; 5267 5209 5268 5210 /* 5269 5211 * If loop is set for cached only, try the next block ··· 5272 5216 goto loop; 5273 5217 } 5274 5218 5275 - cached = block_group_cache_done(block_group); 5276 - if (unlikely(!cached)) 5277 - found_uncached_bg = true; 5278 - 5219 + alloc: 5279 5220 if (unlikely(block_group->ro)) 5280 5221 goto loop; 5281 5222 5282 5223 spin_lock(&block_group->free_space_ctl->tree_lock); 5283 5224 if (cached && 5284 5225 block_group->free_space_ctl->free_space < 5285 - num_bytes + empty_size) { 5226 + num_bytes + empty_cluster + empty_size) { 5286 5227 spin_unlock(&block_group->free_space_ctl->tree_lock); 5287 5228 goto loop; 5288 5229 } ··· 
5300 5247 * people trying to start a new cluster 5301 5248 */ 5302 5249 spin_lock(&last_ptr->refill_lock); 5303 - if (last_ptr->block_group && 5304 - (last_ptr->block_group->ro || 5305 - !block_group_bits(last_ptr->block_group, data))) { 5306 - offset = 0; 5250 + if (!last_ptr->block_group || 5251 + last_ptr->block_group->ro || 5252 + !block_group_bits(last_ptr->block_group, data)) 5307 5253 goto refill_cluster; 5308 - } 5309 5254 5310 5255 offset = btrfs_alloc_from_cluster(block_group, last_ptr, 5311 5256 num_bytes, search_start); ··· 5354 5303 /* allocate a cluster in this block group */ 5355 5304 ret = btrfs_find_space_cluster(trans, root, 5356 5305 block_group, last_ptr, 5357 - offset, num_bytes, 5306 + search_start, num_bytes, 5358 5307 empty_cluster + empty_size); 5359 5308 if (ret == 0) { 5360 5309 /*
+26 -10
fs/btrfs/extent_io.c
··· 2285 2285 clean_io_failure(start, page); 2286 2286 } 2287 2287 if (!uptodate) { 2288 - u64 failed_mirror; 2289 - failed_mirror = (u64)bio->bi_bdev; 2290 - if (tree->ops && tree->ops->readpage_io_failed_hook) 2291 - ret = tree->ops->readpage_io_failed_hook( 2292 - bio, page, start, end, 2293 - failed_mirror, state); 2294 - else 2295 - ret = bio_readpage_error(bio, page, start, end, 2296 - failed_mirror, NULL); 2288 + int failed_mirror; 2289 + failed_mirror = (int)(unsigned long)bio->bi_bdev; 2290 + /* 2291 + * The generic bio_readpage_error handles errors the 2292 + * following way: If possible, new read requests are 2293 + * created and submitted and will end up in 2294 + * end_bio_extent_readpage as well (if we're lucky, not 2295 + * in the !uptodate case). In that case it returns 0 and 2296 + * we just go on with the next page in our bio. If it 2297 + * can't handle the error it will return -EIO and we 2298 + * remain responsible for that page. 2299 + */ 2300 + ret = bio_readpage_error(bio, page, start, end, 2301 + failed_mirror, NULL); 2297 2302 if (ret == 0) { 2303 + error_handled: 2298 2304 uptodate = 2299 2305 test_bit(BIO_UPTODATE, &bio->bi_flags); 2300 2306 if (err) 2301 2307 uptodate = 0; 2302 2308 uncache_state(&cached); 2303 2309 continue; 2310 + } 2311 + if (tree->ops && tree->ops->readpage_io_failed_hook) { 2312 + ret = tree->ops->readpage_io_failed_hook( 2313 + bio, page, start, end, 2314 + failed_mirror, state); 2315 + if (ret == 0) 2316 + goto error_handled; 2304 2317 } 2305 2318 } 2306 2319 ··· 3379 3366 return -ENOMEM; 3380 3367 path->leave_spinning = 1; 3381 3368 3369 + start = ALIGN(start, BTRFS_I(inode)->root->sectorsize); 3370 + len = ALIGN(len, BTRFS_I(inode)->root->sectorsize); 3371 + 3382 3372 /* 3383 3373 * lookup the last file extent. 
We're not using i_size here 3384 3374 * because there might be preallocation past i_size ··· 3429 3413 lock_extent_bits(&BTRFS_I(inode)->io_tree, start, start + len, 0, 3430 3414 &cached_state, GFP_NOFS); 3431 3415 3432 - em = get_extent_skip_holes(inode, off, last_for_get_extent, 3416 + em = get_extent_skip_holes(inode, start, last_for_get_extent, 3433 3417 get_extent); 3434 3418 if (!em) 3435 3419 goto out;
+1 -1
fs/btrfs/extent_io.h
··· 70 70 unsigned long bio_flags); 71 71 int (*readpage_io_hook)(struct page *page, u64 start, u64 end); 72 72 int (*readpage_io_failed_hook)(struct bio *bio, struct page *page, 73 - u64 start, u64 end, u64 failed_mirror, 73 + u64 start, u64 end, int failed_mirror, 74 74 struct extent_state *state); 75 75 int (*writepage_io_failed_hook)(struct bio *bio, struct page *page, 76 76 u64 start, u64 end,
+28 -37
fs/btrfs/free-space-cache.c
··· 351 351 } 352 352 } 353 353 354 + for (i = 0; i < io_ctl->num_pages; i++) { 355 + clear_page_dirty_for_io(io_ctl->pages[i]); 356 + set_page_extent_mapped(io_ctl->pages[i]); 357 + } 358 + 354 359 return 0; 355 360 } 356 361 ··· 1470 1465 { 1471 1466 info->offset = offset_to_bitmap(ctl, offset); 1472 1467 info->bytes = 0; 1468 + INIT_LIST_HEAD(&info->list); 1473 1469 link_free_space(ctl, info); 1474 1470 ctl->total_bitmaps++; 1475 1471 ··· 1850 1844 info = tree_search_offset(ctl, offset_to_bitmap(ctl, offset), 1851 1845 1, 0); 1852 1846 if (!info) { 1853 - WARN_ON(1); 1847 + /* the tree logging code might be calling us before we 1848 + * have fully loaded the free space rbtree for this 1849 + * block group. So it is possible the entry won't 1850 + * be in the rbtree yet at all. The caching code 1851 + * will make sure not to put it in the rbtree if 1852 + * the logging code has pinned it. 1853 + */ 1854 1854 goto out_lock; 1855 1855 } 1856 1856 } ··· 2320 2308 2321 2309 if (!found) { 2322 2310 start = i; 2311 + cluster->max_size = 0; 2323 2312 found = true; 2324 2313 } 2325 2314 ··· 2464 2451 { 2465 2452 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl; 2466 2453 struct btrfs_free_space *entry; 2467 - struct rb_node *node; 2468 2454 int ret = -ENOSPC; 2455 + u64 bitmap_offset = offset_to_bitmap(ctl, offset); 2469 2456 2470 2457 if (ctl->total_bitmaps == 0) 2471 2458 return -ENOSPC; 2472 2459 2473 2460 /* 2474 - * First check our cached list of bitmaps and see if there is an entry 2475 - * here that will work. 2461 + * The bitmap that covers offset won't be in the list unless offset 2462 + * is just its start offset. 
2476 2463 */ 2464 + entry = list_first_entry(bitmaps, struct btrfs_free_space, list); 2465 + if (entry->offset != bitmap_offset) { 2466 + entry = tree_search_offset(ctl, bitmap_offset, 1, 0); 2467 + if (entry && list_empty(&entry->list)) 2468 + list_add(&entry->list, bitmaps); 2469 + } 2470 + 2477 2471 list_for_each_entry(entry, bitmaps, list) { 2478 2472 if (entry->bytes < min_bytes) 2479 2473 continue; ··· 2491 2471 } 2492 2472 2493 2473 /* 2494 - * If we do have entries on our list and we are here then we didn't find 2495 - * anything, so go ahead and get the next entry after the last entry in 2496 - * this list and start the search from there. 2474 + * The bitmaps list has all the bitmaps that record free space 2475 + * starting after offset, so no more search is required. 2497 2476 */ 2498 - if (!list_empty(bitmaps)) { 2499 - entry = list_entry(bitmaps->prev, struct btrfs_free_space, 2500 - list); 2501 - node = rb_next(&entry->offset_index); 2502 - if (!node) 2503 - return -ENOSPC; 2504 - entry = rb_entry(node, struct btrfs_free_space, offset_index); 2505 - goto search; 2506 - } 2507 - 2508 - entry = tree_search_offset(ctl, offset_to_bitmap(ctl, offset), 0, 1); 2509 - if (!entry) 2510 - return -ENOSPC; 2511 - 2512 - search: 2513 - node = &entry->offset_index; 2514 - do { 2515 - entry = rb_entry(node, struct btrfs_free_space, offset_index); 2516 - node = rb_next(&entry->offset_index); 2517 - if (!entry->bitmap) 2518 - continue; 2519 - if (entry->bytes < min_bytes) 2520 - continue; 2521 - ret = btrfs_bitmap_cluster(block_group, entry, cluster, offset, 2522 - bytes, min_bytes); 2523 - } while (ret && node); 2524 - 2525 - return ret; 2477 + return -ENOSPC; 2526 2478 } 2527 2479 2528 2480 /* ··· 2512 2520 u64 offset, u64 bytes, u64 empty_size) 2513 2521 { 2514 2522 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl; 2515 - struct list_head bitmaps; 2516 2523 struct btrfs_free_space *entry, *tmp; 2524 + LIST_HEAD(bitmaps); 2517 2525 u64 min_bytes; 2518 
2526 int ret; 2519 2527 ··· 2552 2560 goto out; 2553 2561 } 2554 2562 2555 - INIT_LIST_HEAD(&bitmaps); 2556 2563 ret = setup_cluster_no_bitmap(block_group, cluster, &bitmaps, offset, 2557 2564 bytes, min_bytes); 2558 2565 if (ret)
+5 -3
fs/btrfs/inode.c
··· 3490 3490 * doing the truncate. 3491 3491 */ 3492 3492 while (1) { 3493 - ret = btrfs_block_rsv_refill(root, rsv, min_size); 3493 + ret = btrfs_block_rsv_refill_noflush(root, rsv, min_size); 3494 3494 3495 3495 /* 3496 3496 * Try and steal from the global reserve since we will ··· 6794 6794 struct dentry *dentry, struct kstat *stat) 6795 6795 { 6796 6796 struct inode *inode = dentry->d_inode; 6797 + u32 blocksize = inode->i_sb->s_blocksize; 6798 + 6797 6799 generic_fillattr(inode, stat); 6798 6800 stat->dev = BTRFS_I(inode)->root->anon_dev; 6799 6801 stat->blksize = PAGE_CACHE_SIZE; 6800 - stat->blocks = (inode_get_bytes(inode) + 6801 - BTRFS_I(inode)->delalloc_bytes) >> 9; 6802 + stat->blocks = (ALIGN(inode_get_bytes(inode), blocksize) + 6803 + ALIGN(BTRFS_I(inode)->delalloc_bytes, blocksize)) >> 9; 6802 6804 return 0; 6803 6805 } 6804 6806
+10 -7
fs/btrfs/ioctl.c
··· 1216 1216 *devstr = '\0'; 1217 1217 devstr = vol_args->name; 1218 1218 devid = simple_strtoull(devstr, &end, 10); 1219 - printk(KERN_INFO "resizing devid %llu\n", 1219 + printk(KERN_INFO "btrfs: resizing devid %llu\n", 1220 1220 (unsigned long long)devid); 1221 1221 } 1222 1222 device = btrfs_find_device(root, devid, NULL, NULL); 1223 1223 if (!device) { 1224 - printk(KERN_INFO "resizer unable to find device %llu\n", 1224 + printk(KERN_INFO "btrfs: resizer unable to find device %llu\n", 1225 1225 (unsigned long long)devid); 1226 1226 ret = -EINVAL; 1227 1227 goto out_unlock; ··· 1267 1267 do_div(new_size, root->sectorsize); 1268 1268 new_size *= root->sectorsize; 1269 1269 1270 - printk(KERN_INFO "new size for %s is %llu\n", 1270 + printk(KERN_INFO "btrfs: new size for %s is %llu\n", 1271 1271 device->name, (unsigned long long)new_size); 1272 1272 1273 1273 if (new_size > old_size) { ··· 1278 1278 } 1279 1279 ret = btrfs_grow_device(trans, device, new_size); 1280 1280 btrfs_commit_transaction(trans, root); 1281 - } else { 1281 + } else if (new_size < old_size) { 1282 1282 ret = btrfs_shrink_device(device, new_size); 1283 1283 } 1284 1284 ··· 2930 2930 goto out; 2931 2931 2932 2932 for (i = 0; i < ipath->fspath->elem_cnt; ++i) { 2933 - rel_ptr = ipath->fspath->val[i] - (u64)ipath->fspath->val; 2933 + rel_ptr = ipath->fspath->val[i] - 2934 + (u64)(unsigned long)ipath->fspath->val; 2934 2935 ipath->fspath->val[i] = rel_ptr; 2935 2936 } 2936 2937 2937 - ret = copy_to_user((void *)ipa->fspath, (void *)ipath->fspath, size); 2938 + ret = copy_to_user((void *)(unsigned long)ipa->fspath, 2939 + (void *)(unsigned long)ipath->fspath, size); 2938 2940 if (ret) { 2939 2941 ret = -EFAULT; 2940 2942 goto out; ··· 3019 3017 if (ret < 0) 3020 3018 goto out; 3021 3019 3022 - ret = copy_to_user((void *)loi->inodes, (void *)inodes, size); 3020 + ret = copy_to_user((void *)(unsigned long)loi->inodes, 3021 + (void *)(unsigned long)inodes, size); 3023 3022 if (ret) 3024 3023 ret = 
-EFAULT; 3025 3024
+6 -1
fs/btrfs/scrub.c
··· 256 256 btrfs_release_path(swarn->path); 257 257 258 258 ipath = init_ipath(4096, local_root, swarn->path); 259 + if (IS_ERR(ipath)) { 260 + ret = PTR_ERR(ipath); 261 + ipath = NULL; 262 + goto err; 263 + } 259 264 ret = paths_from_inode(inum, ipath); 260 265 261 266 if (ret < 0) ··· 277 272 swarn->logical, swarn->dev->name, 278 273 (unsigned long long)swarn->sector, root, inum, offset, 279 274 min(isize - offset, (u64)PAGE_SIZE), nlink, 280 - (char *)ipath->fspath->val[i]); 275 + (char *)(unsigned long)ipath->fspath->val[i]); 281 276 282 277 free_ipath(ipath); 283 278 return 0;
+3 -3
fs/btrfs/super.c
··· 1057 1057 int i = 0, nr_devices; 1058 1058 int ret; 1059 1059 1060 - nr_devices = fs_info->fs_devices->rw_devices; 1060 + nr_devices = fs_info->fs_devices->open_devices; 1061 1061 BUG_ON(!nr_devices); 1062 1062 1063 1063 devices_info = kmalloc(sizeof(*devices_info) * nr_devices, ··· 1079 1079 else 1080 1080 min_stripe_size = BTRFS_STRIPE_LEN; 1081 1081 1082 - list_for_each_entry(device, &fs_devices->alloc_list, dev_alloc_list) { 1083 - if (!device->in_fs_metadata) 1082 + list_for_each_entry(device, &fs_devices->devices, dev_list) { 1083 + if (!device->in_fs_metadata || !device->bdev) 1084 1084 continue; 1085 1085 1086 1086 avail_space = device->total_bytes - device->bytes_used;
+8
fs/btrfs/transaction.c
··· 785 785 786 786 btrfs_save_ino_cache(root, trans); 787 787 788 + /* see comments in should_cow_block() */ 789 + root->force_cow = 0; 790 + smp_wmb(); 791 + 788 792 if (root->commit_root != root->node) { 789 793 mutex_lock(&root->fs_commit_mutex); 790 794 switch_commit_root(root); ··· 950 946 btrfs_copy_root(trans, root, old, &tmp, objectid); 951 947 btrfs_tree_unlock(old); 952 948 free_extent_buffer(old); 949 + 950 + /* see comments in should_cow_block() */ 951 + root->force_cow = 1; 952 + smp_wmb(); 953 953 954 954 btrfs_set_root_node(new_root_item, tmp); 955 955 /* record when the snapshot was created in key.offset */
+6
fs/btrfs/volumes.h
··· 100 100 struct reada_zone *reada_curr_zone; 101 101 struct radix_tree_root reada_zones; 102 102 struct radix_tree_root reada_extents; 103 + 104 + /* for sending down flush barriers */ 105 + struct bio *flush_bio; 106 + struct completion flush_wait; 107 + int nobarriers; 108 + 103 109 }; 104 110 105 111 struct btrfs_fs_devices {
+1 -1
fs/ceph/dir.c
··· 1143 1143 { 1144 1144 struct ceph_dentry_info *di; 1145 1145 1146 - dout("d_release %p\n", dentry); 1146 + dout("ceph_d_prune %p\n", dentry); 1147 1147 1148 1148 /* do we have a valid parent? */ 1149 1149 if (!dentry->d_parent || IS_ROOT(dentry))
+6 -3
fs/ceph/inode.c
··· 1328 1328 */ 1329 1329 void ceph_queue_writeback(struct inode *inode) 1330 1330 { 1331 + ihold(inode); 1331 1332 if (queue_work(ceph_inode_to_client(inode)->wb_wq, 1332 1333 &ceph_inode(inode)->i_wb_work)) { 1333 1334 dout("ceph_queue_writeback %p\n", inode); 1334 - ihold(inode); 1335 1335 } else { 1336 1336 dout("ceph_queue_writeback %p failed\n", inode); 1337 + iput(inode); 1337 1338 } 1338 1339 } 1339 1340 ··· 1354 1353 */ 1355 1354 void ceph_queue_invalidate(struct inode *inode) 1356 1355 { 1356 + ihold(inode); 1357 1357 if (queue_work(ceph_inode_to_client(inode)->pg_inv_wq, 1358 1358 &ceph_inode(inode)->i_pg_inv_work)) { 1359 1359 dout("ceph_queue_invalidate %p\n", inode); 1360 - ihold(inode); 1361 1360 } else { 1362 1361 dout("ceph_queue_invalidate %p failed\n", inode); 1362 + iput(inode); 1363 1363 } 1364 1364 } 1365 1365 ··· 1436 1434 { 1437 1435 struct ceph_inode_info *ci = ceph_inode(inode); 1438 1436 1437 + ihold(inode); 1439 1438 if (queue_work(ceph_sb_to_client(inode->i_sb)->trunc_wq, 1440 1439 &ci->i_vmtruncate_work)) { 1441 1440 dout("ceph_queue_vmtruncate %p\n", inode); 1442 - ihold(inode); 1443 1441 } else { 1444 1442 dout("ceph_queue_vmtruncate %p failed, pending=%d\n", 1445 1443 inode, ci->i_truncate_pending); 1444 + iput(inode); 1446 1445 } 1447 1446 } 1448 1447
+4 -2
fs/ceph/super.c
··· 638 638 if (err == 0) { 639 639 dout("open_root_inode success\n"); 640 640 if (ceph_ino(req->r_target_inode) == CEPH_INO_ROOT && 641 - fsc->sb->s_root == NULL) 641 + fsc->sb->s_root == NULL) { 642 642 root = d_alloc_root(req->r_target_inode); 643 - else 643 + ceph_init_dentry(root); 644 + } else { 644 645 root = d_obtain_alias(req->r_target_inode); 646 + } 645 647 req->r_target_inode = NULL; 646 648 dout("open_root_inode success, root dentry is %p\n", root); 647 649 } else {
+10 -1
fs/dcache.c
··· 36 36 #include <linux/bit_spinlock.h> 37 37 #include <linux/rculist_bl.h> 38 38 #include <linux/prefetch.h> 39 + #include <linux/ratelimit.h> 39 40 #include "internal.h" 40 41 41 42 /* ··· 2384 2383 actual = __d_unalias(inode, dentry, alias); 2385 2384 } 2386 2385 write_sequnlock(&rename_lock); 2387 - if (IS_ERR(actual)) 2386 + if (IS_ERR(actual)) { 2387 + if (PTR_ERR(actual) == -ELOOP) 2388 + pr_warn_ratelimited( 2389 + "VFS: Lookup of '%s' in %s %s" 2390 + " would have caused loop\n", 2391 + dentry->d_name.name, 2392 + inode->i_sb->s_type->name, 2393 + inode->i_sb->s_id); 2388 2394 dput(alias); 2395 + } 2389 2396 goto out_nolock; 2390 2397 } 2391 2398 }
+14 -12
fs/ecryptfs/crypto.c
··· 967 967 968 968 /** 969 969 * ecryptfs_new_file_context 970 - * @ecryptfs_dentry: The eCryptfs dentry 970 + * @ecryptfs_inode: The eCryptfs inode 971 971 * 972 972 * If the crypto context for the file has not yet been established, 973 973 * this is where we do that. Establishing a new crypto context ··· 984 984 * 985 985 * Returns zero on success; non-zero otherwise 986 986 */ 987 - int ecryptfs_new_file_context(struct dentry *ecryptfs_dentry) 987 + int ecryptfs_new_file_context(struct inode *ecryptfs_inode) 988 988 { 989 989 struct ecryptfs_crypt_stat *crypt_stat = 990 - &ecryptfs_inode_to_private(ecryptfs_dentry->d_inode)->crypt_stat; 990 + &ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat; 991 991 struct ecryptfs_mount_crypt_stat *mount_crypt_stat = 992 992 &ecryptfs_superblock_to_private( 993 - ecryptfs_dentry->d_sb)->mount_crypt_stat; 993 + ecryptfs_inode->i_sb)->mount_crypt_stat; 994 994 int cipher_name_len; 995 995 int rc = 0; 996 996 ··· 1299 1299 } 1300 1300 1301 1301 static int 1302 - ecryptfs_write_metadata_to_contents(struct dentry *ecryptfs_dentry, 1302 + ecryptfs_write_metadata_to_contents(struct inode *ecryptfs_inode, 1303 1303 char *virt, size_t virt_len) 1304 1304 { 1305 1305 int rc; 1306 1306 1307 - rc = ecryptfs_write_lower(ecryptfs_dentry->d_inode, virt, 1307 + rc = ecryptfs_write_lower(ecryptfs_inode, virt, 1308 1308 0, virt_len); 1309 1309 if (rc < 0) 1310 1310 printk(KERN_ERR "%s: Error attempting to write header " ··· 1338 1338 1339 1339 /** 1340 1340 * ecryptfs_write_metadata 1341 - * @ecryptfs_dentry: The eCryptfs dentry 1341 + * @ecryptfs_dentry: The eCryptfs dentry, which should be negative 1342 + * @ecryptfs_inode: The newly created eCryptfs inode 1342 1343 * 1343 1344 * Write the file headers out. 
This will likely involve a userspace 1344 1345 * callout, in which the session key is encrypted with one or more ··· 1349 1348 * 1350 1349 * Returns zero on success; non-zero on error 1351 1350 */ 1352 - int ecryptfs_write_metadata(struct dentry *ecryptfs_dentry) 1351 + int ecryptfs_write_metadata(struct dentry *ecryptfs_dentry, 1352 + struct inode *ecryptfs_inode) 1353 1353 { 1354 1354 struct ecryptfs_crypt_stat *crypt_stat = 1355 - &ecryptfs_inode_to_private(ecryptfs_dentry->d_inode)->crypt_stat; 1355 + &ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat; 1356 1356 unsigned int order; 1357 1357 char *virt; 1358 1358 size_t virt_len; ··· 1393 1391 rc = ecryptfs_write_metadata_to_xattr(ecryptfs_dentry, virt, 1394 1392 size); 1395 1393 else 1396 - rc = ecryptfs_write_metadata_to_contents(ecryptfs_dentry, virt, 1394 + rc = ecryptfs_write_metadata_to_contents(ecryptfs_inode, virt, 1397 1395 virt_len); 1398 1396 if (rc) { 1399 1397 printk(KERN_ERR "%s: Error writing metadata out to lower file; " ··· 1945 1943 1946 1944 /* We could either offset on every reverse map or just pad some 0x00's 1947 1945 * at the front here */ 1948 - static const unsigned char filename_rev_map[] = { 1946 + static const unsigned char filename_rev_map[256] = { 1949 1947 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 7 */ 1950 1948 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 15 */ 1951 1949 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 23 */ ··· 1961 1959 0x00, 0x26, 0x27, 0x28, 0x29, 0x2A, 0x2B, 0x2C, /* 103 */ 1962 1960 0x2D, 0x2E, 0x2F, 0x30, 0x31, 0x32, 0x33, 0x34, /* 111 */ 1963 1961 0x35, 0x36, 0x37, 0x38, 0x39, 0x3A, 0x3B, 0x3C, /* 119 */ 1964 - 0x3D, 0x3E, 0x3F 1962 + 0x3D, 0x3E, 0x3F /* 123 - 255 initialized to 0x00 */ 1965 1963 }; 1966 1964 1967 1965 /**
+3 -2
fs/ecryptfs/ecryptfs_kernel.h
··· 584 584 int ecryptfs_write_inode_size_to_metadata(struct inode *ecryptfs_inode); 585 585 int ecryptfs_encrypt_page(struct page *page); 586 586 int ecryptfs_decrypt_page(struct page *page); 587 - int ecryptfs_write_metadata(struct dentry *ecryptfs_dentry); 587 + int ecryptfs_write_metadata(struct dentry *ecryptfs_dentry, 588 + struct inode *ecryptfs_inode); 588 589 int ecryptfs_read_metadata(struct dentry *ecryptfs_dentry); 589 - int ecryptfs_new_file_context(struct dentry *ecryptfs_dentry); 590 + int ecryptfs_new_file_context(struct inode *ecryptfs_inode); 590 591 void ecryptfs_write_crypt_stat_flags(char *page_virt, 591 592 struct ecryptfs_crypt_stat *crypt_stat, 592 593 size_t *written);
+22 -1
fs/ecryptfs/file.c
··· 139 139 return rc; 140 140 } 141 141 142 + static void ecryptfs_vma_close(struct vm_area_struct *vma) 143 + { 144 + filemap_write_and_wait(vma->vm_file->f_mapping); 145 + } 146 + 147 + static const struct vm_operations_struct ecryptfs_file_vm_ops = { 148 + .close = ecryptfs_vma_close, 149 + .fault = filemap_fault, 150 + }; 151 + 152 + static int ecryptfs_file_mmap(struct file *file, struct vm_area_struct *vma) 153 + { 154 + int rc; 155 + 156 + rc = generic_file_mmap(file, vma); 157 + if (!rc) 158 + vma->vm_ops = &ecryptfs_file_vm_ops; 159 + 160 + return rc; 161 + } 162 + 142 163 struct kmem_cache *ecryptfs_file_info_cache; 143 164 144 165 /** ··· 370 349 #ifdef CONFIG_COMPAT 371 350 .compat_ioctl = ecryptfs_compat_ioctl, 372 351 #endif 373 - .mmap = generic_file_mmap, 352 + .mmap = ecryptfs_file_mmap, 374 353 .open = ecryptfs_open, 375 354 .flush = ecryptfs_flush, 376 355 .release = ecryptfs_release,
+31 -21
fs/ecryptfs/inode.c
··· 172 172 * it. It will also update the eCryptfs directory inode to mimic the 173 173 * stat of the lower directory inode. 174 174 * 175 - * Returns zero on success; non-zero on error condition 175 + * Returns the new eCryptfs inode on success; an ERR_PTR on error condition 176 176 */ 177 - static int 177 + static struct inode * 178 178 ecryptfs_do_create(struct inode *directory_inode, 179 179 struct dentry *ecryptfs_dentry, int mode) 180 180 { 181 181 int rc; 182 182 struct dentry *lower_dentry; 183 183 struct dentry *lower_dir_dentry; 184 + struct inode *inode; 184 185 185 186 lower_dentry = ecryptfs_dentry_to_lower(ecryptfs_dentry); 186 187 lower_dir_dentry = lock_parent(lower_dentry); 187 188 if (IS_ERR(lower_dir_dentry)) { 188 189 ecryptfs_printk(KERN_ERR, "Error locking directory of " 189 190 "dentry\n"); 190 - rc = PTR_ERR(lower_dir_dentry); 191 + inode = ERR_CAST(lower_dir_dentry); 191 192 goto out; 192 193 } 193 194 rc = ecryptfs_create_underlying_file(lower_dir_dentry->d_inode, ··· 196 195 if (rc) { 197 196 printk(KERN_ERR "%s: Failure to create dentry in lower fs; " 198 197 "rc = [%d]\n", __func__, rc); 198 + inode = ERR_PTR(rc); 199 199 goto out_lock; 200 200 } 201 - rc = ecryptfs_interpose(lower_dentry, ecryptfs_dentry, 202 - directory_inode->i_sb); 203 - if (rc) { 204 - ecryptfs_printk(KERN_ERR, "Failure in ecryptfs_interpose\n"); 201 + inode = __ecryptfs_get_inode(lower_dentry->d_inode, 202 + directory_inode->i_sb); 203 + if (IS_ERR(inode)) 205 204 goto out_lock; 206 - } 207 205 fsstack_copy_attr_times(directory_inode, lower_dir_dentry->d_inode); 208 206 fsstack_copy_inode_size(directory_inode, lower_dir_dentry->d_inode); 209 207 out_lock: 210 208 unlock_dir(lower_dir_dentry); 211 209 out: 212 - return rc; 210 + return inode; 213 211 } 214 212 215 213 /** ··· 219 219 * 220 220 * Returns zero on success 221 221 */ 222 - static int ecryptfs_initialize_file(struct dentry *ecryptfs_dentry) 222 + static int ecryptfs_initialize_file(struct dentry 
*ecryptfs_dentry, 223 + struct inode *ecryptfs_inode) 223 224 { 224 225 struct ecryptfs_crypt_stat *crypt_stat = 225 - &ecryptfs_inode_to_private(ecryptfs_dentry->d_inode)->crypt_stat; 226 + &ecryptfs_inode_to_private(ecryptfs_inode)->crypt_stat; 226 227 int rc = 0; 227 228 228 - if (S_ISDIR(ecryptfs_dentry->d_inode->i_mode)) { 229 + if (S_ISDIR(ecryptfs_inode->i_mode)) { 229 230 ecryptfs_printk(KERN_DEBUG, "This is a directory\n"); 230 231 crypt_stat->flags &= ~(ECRYPTFS_ENCRYPTED); 231 232 goto out; 232 233 } 233 234 ecryptfs_printk(KERN_DEBUG, "Initializing crypto context\n"); 234 - rc = ecryptfs_new_file_context(ecryptfs_dentry); 235 + rc = ecryptfs_new_file_context(ecryptfs_inode); 235 236 if (rc) { 236 237 ecryptfs_printk(KERN_ERR, "Error creating new file " 237 238 "context; rc = [%d]\n", rc); 238 239 goto out; 239 240 } 240 - rc = ecryptfs_get_lower_file(ecryptfs_dentry, 241 - ecryptfs_dentry->d_inode); 241 + rc = ecryptfs_get_lower_file(ecryptfs_dentry, ecryptfs_inode); 242 242 if (rc) { 243 243 printk(KERN_ERR "%s: Error attempting to initialize " 244 244 "the lower file for the dentry with name " ··· 246 246 ecryptfs_dentry->d_name.name, rc); 247 247 goto out; 248 248 } 249 - rc = ecryptfs_write_metadata(ecryptfs_dentry); 249 + rc = ecryptfs_write_metadata(ecryptfs_dentry, ecryptfs_inode); 250 250 if (rc) 251 251 printk(KERN_ERR "Error writing headers; rc = [%d]\n", rc); 252 - ecryptfs_put_lower_file(ecryptfs_dentry->d_inode); 252 + ecryptfs_put_lower_file(ecryptfs_inode); 253 253 out: 254 254 return rc; 255 255 } ··· 269 269 ecryptfs_create(struct inode *directory_inode, struct dentry *ecryptfs_dentry, 270 270 int mode, struct nameidata *nd) 271 271 { 272 + struct inode *ecryptfs_inode; 272 273 int rc; 273 274 274 - /* ecryptfs_do_create() calls ecryptfs_interpose() */ 275 - rc = ecryptfs_do_create(directory_inode, ecryptfs_dentry, mode); 276 - if (unlikely(rc)) { 275 + ecryptfs_inode = ecryptfs_do_create(directory_inode, ecryptfs_dentry, 276 + mode); 
277 + if (unlikely(IS_ERR(ecryptfs_inode))) { 277 278 ecryptfs_printk(KERN_WARNING, "Failed to create file in" 278 279 "lower filesystem\n"); 280 + rc = PTR_ERR(ecryptfs_inode); 279 281 goto out; 280 282 } 281 283 /* At this point, a file exists on "disk"; we need to make sure 282 284 * that this on disk file is prepared to be an ecryptfs file */ 283 - rc = ecryptfs_initialize_file(ecryptfs_dentry); 285 + rc = ecryptfs_initialize_file(ecryptfs_dentry, ecryptfs_inode); 286 + if (rc) { 287 + drop_nlink(ecryptfs_inode); 288 + unlock_new_inode(ecryptfs_inode); 289 + iput(ecryptfs_inode); 290 + goto out; 291 + } 292 + d_instantiate(ecryptfs_dentry, ecryptfs_inode); 293 + unlock_new_inode(ecryptfs_inode); 284 294 out: 285 295 return rc; 286 296 }
+1 -1
fs/ext4/balloc.c
··· 565 565 brelse(bitmap_bh); 566 566 printk(KERN_DEBUG "ext4_count_free_clusters: stored = %llu" 567 567 ", computed = %llu, %llu\n", 568 - EXT4_B2C(sbi, ext4_free_blocks_count(es)), 568 + EXT4_B2C(EXT4_SB(sb), ext4_free_blocks_count(es)), 569 569 desc_count, bitmap_count); 570 570 return bitmap_count; 571 571 #else
+2 -1
fs/ext4/inode.c
··· 2270 2270 ext4_msg(inode->i_sb, KERN_CRIT, "%s: jbd2_start: " 2271 2271 "%ld pages, ino %lu; err %d", __func__, 2272 2272 wbc->nr_to_write, inode->i_ino, ret); 2273 + blk_finish_plug(&plug); 2273 2274 goto out_writepages; 2274 2275 } 2275 2276 ··· 2807 2806 spin_unlock_irqrestore(&ei->i_completed_io_lock, flags); 2808 2807 2809 2808 /* queue the work to convert unwritten extents to written */ 2810 - queue_work(wq, &io_end->work); 2811 2809 iocb->private = NULL; 2810 + queue_work(wq, &io_end->work); 2812 2811 2813 2812 /* XXX: probably should move into the real I/O completion handler */ 2814 2813 inode_dio_done(inode);
+3 -3
fs/ext4/super.c
··· 1683 1683 data_opt = EXT4_MOUNT_WRITEBACK_DATA; 1684 1684 datacheck: 1685 1685 if (is_remount) { 1686 - if (test_opt(sb, DATA_FLAGS) != data_opt) { 1686 + if (!sbi->s_journal) 1687 + ext4_msg(sb, KERN_WARNING, "Remounting file system with no journal so ignoring journalled data option"); 1688 + else if (test_opt(sb, DATA_FLAGS) != data_opt) { 1687 1689 ext4_msg(sb, KERN_ERR, 1688 1690 "Cannot change data mode on remount"); 1689 1691 return 0; ··· 3101 3099 } 3102 3100 3103 3101 static int ext4_fill_super(struct super_block *sb, void *data, int silent) 3104 - __releases(kernel_lock) 3105 - __acquires(kernel_lock) 3106 3102 { 3107 3103 char *orig_data = kstrdup(data, GFP_KERNEL); 3108 3104 struct buffer_head *bh;
+25 -32
fs/minix/bitmap.c
··· 16 16 #include <linux/bitops.h> 17 17 #include <linux/sched.h> 18 18 19 - static const int nibblemap[] = { 4,3,3,2,3,2,2,1,3,2,2,1,2,1,1,0 }; 20 - 21 19 static DEFINE_SPINLOCK(bitmap_lock); 22 20 23 - static unsigned long count_free(struct buffer_head *map[], unsigned numblocks, __u32 numbits) 21 + /* 22 + * bitmap consists of blocks filled with 16bit words 23 + * bit set == busy, bit clear == free 24 + * endianness is a mess, but for counting zero bits it really doesn't matter... 25 + */ 26 + static __u32 count_free(struct buffer_head *map[], unsigned blocksize, __u32 numbits) 24 27 { 25 - unsigned i, j, sum = 0; 26 - struct buffer_head *bh; 27 - 28 - for (i=0; i<numblocks-1; i++) { 29 - if (!(bh=map[i])) 30 - return(0); 31 - for (j=0; j<bh->b_size; j++) 32 - sum += nibblemap[bh->b_data[j] & 0xf] 33 - + nibblemap[(bh->b_data[j]>>4) & 0xf]; 28 + __u32 sum = 0; 29 + unsigned blocks = DIV_ROUND_UP(numbits, blocksize * 8); 30 + 31 + while (blocks--) { 32 + unsigned words = blocksize / 2; 33 + __u16 *p = (__u16 *)(*map++)->b_data; 34 + while (words--) 35 + sum += 16 - hweight16(*p++); 34 36 } 35 37 36 - if (numblocks==0 || !(bh=map[numblocks-1])) 37 - return(0); 38 - i = ((numbits - (numblocks-1) * bh->b_size * 8) / 16) * 2; 39 - for (j=0; j<i; j++) { 40 - sum += nibblemap[bh->b_data[j] & 0xf] 41 - + nibblemap[(bh->b_data[j]>>4) & 0xf]; 42 - } 43 - 44 - i = numbits%16; 45 - if (i!=0) { 46 - i = *(__u16 *)(&bh->b_data[j]) | ~((1<<i) - 1); 47 - sum += nibblemap[i & 0xf] + nibblemap[(i>>4) & 0xf]; 48 - sum += nibblemap[(i>>8) & 0xf] + nibblemap[(i>>12) & 0xf]; 49 - } 50 - return(sum); 38 + return sum; 51 39 } 52 40 53 41 void minix_free_block(struct inode *inode, unsigned long block) ··· 93 105 return 0; 94 106 } 95 107 96 - unsigned long minix_count_free_blocks(struct minix_sb_info *sbi) 108 + unsigned long minix_count_free_blocks(struct super_block *sb) 97 109 { 98 - return (count_free(sbi->s_zmap, sbi->s_zmap_blocks, 99 - sbi->s_nzones - sbi->s_firstdatazone + 1) 
110 + struct minix_sb_info *sbi = minix_sb(sb); 111 + u32 bits = sbi->s_nzones - (sbi->s_firstdatazone + 1); 112 + 113 + return (count_free(sbi->s_zmap, sb->s_blocksize, bits) 100 114 << sbi->s_log_zone_size); 101 115 } 102 116 ··· 263 273 return inode; 264 274 } 265 275 266 - unsigned long minix_count_free_inodes(struct minix_sb_info *sbi) 276 + unsigned long minix_count_free_inodes(struct super_block *sb) 267 277 { 268 - return count_free(sbi->s_imap, sbi->s_imap_blocks, sbi->s_ninodes + 1); 278 + struct minix_sb_info *sbi = minix_sb(sb); 279 + u32 bits = sbi->s_ninodes + 1; 280 + 281 + return count_free(sbi->s_imap, sb->s_blocksize, bits); 269 282 }
+23 -2
fs/minix/inode.c
··· 279 279 else if (sbi->s_mount_state & MINIX_ERROR_FS) 280 280 printk("MINIX-fs: mounting file system with errors, " 281 281 "running fsck is recommended\n"); 282 + 283 + /* Apparently minix can create filesystems that allocate more blocks for 284 + * the bitmaps than needed. We simply ignore that, but verify it didn't 285 + * create one with not enough blocks and bail out if so. 286 + */ 287 + block = minix_blocks_needed(sbi->s_ninodes, s->s_blocksize); 288 + if (sbi->s_imap_blocks < block) { 289 + printk("MINIX-fs: file system does not have enough " 290 + "imap blocks allocated. Refusing to mount\n"); 291 + goto out_iput; 292 + } 293 + 294 + block = minix_blocks_needed( 295 + (sbi->s_nzones - (sbi->s_firstdatazone + 1)), 296 + s->s_blocksize); 297 + if (sbi->s_zmap_blocks < block) { 298 + printk("MINIX-fs: file system does not have enough " 299 + "zmap blocks allocated. Refusing to mount.\n"); 300 + goto out_iput; 301 + } 302 + 282 303 return 0; 283 304 284 305 out_iput: ··· 360 339 buf->f_type = sb->s_magic; 361 340 buf->f_bsize = sb->s_blocksize; 362 341 buf->f_blocks = (sbi->s_nzones - sbi->s_firstdatazone) << sbi->s_log_zone_size; 363 - buf->f_bfree = minix_count_free_blocks(sbi); 342 + buf->f_bfree = minix_count_free_blocks(sb); 364 343 buf->f_bavail = buf->f_bfree; 365 344 buf->f_files = sbi->s_ninodes; 366 - buf->f_ffree = minix_count_free_inodes(sbi); 345 + buf->f_ffree = minix_count_free_inodes(sb); 367 346 buf->f_namelen = sbi->s_namelen; 368 347 buf->f_fsid.val[0] = (u32)id; 369 348 buf->f_fsid.val[1] = (u32)(id >> 32);
+8 -3
fs/minix/minix.h
··· 48 48 extern struct minix2_inode * minix_V2_raw_inode(struct super_block *, ino_t, struct buffer_head **); 49 49 extern struct inode * minix_new_inode(const struct inode *, int, int *); 50 50 extern void minix_free_inode(struct inode * inode); 51 - extern unsigned long minix_count_free_inodes(struct minix_sb_info *sbi); 51 + extern unsigned long minix_count_free_inodes(struct super_block *sb); 52 52 extern int minix_new_block(struct inode * inode); 53 53 extern void minix_free_block(struct inode *inode, unsigned long block); 54 - extern unsigned long minix_count_free_blocks(struct minix_sb_info *sbi); 54 + extern unsigned long minix_count_free_blocks(struct super_block *sb); 55 55 extern int minix_getattr(struct vfsmount *, struct dentry *, struct kstat *); 56 56 extern int minix_prepare_chunk(struct page *page, loff_t pos, unsigned len); 57 57 ··· 86 86 static inline struct minix_inode_info *minix_i(struct inode *inode) 87 87 { 88 88 return list_entry(inode, struct minix_inode_info, vfs_inode); 89 + } 90 + 91 + static inline unsigned minix_blocks_needed(unsigned bits, unsigned blocksize) 92 + { 93 + return DIV_ROUND_UP(bits, blocksize * 8); 89 94 } 90 95 91 96 #if defined(CONFIG_MINIX_FS_NATIVE_ENDIAN) && \ ··· 130 125 if (!size) 131 126 return 0; 132 127 133 - size = (size >> 4) + ((size & 15) > 0); 128 + size >>= 4; 134 129 while (*p++ == 0xffff) { 135 130 if (--size == 0) 136 131 return (p - addr) << 4;
+4 -2
fs/namespace.c
··· 2493 2493 struct dentry *mount_subtree(struct vfsmount *mnt, const char *name) 2494 2494 { 2495 2495 struct mnt_namespace *ns; 2496 + struct super_block *s; 2496 2497 struct path path; 2497 2498 int err; 2498 2499 ··· 2510 2509 return ERR_PTR(err); 2511 2510 2512 2511 /* trade a vfsmount reference for active sb one */ 2513 - atomic_inc(&path.mnt->mnt_sb->s_active); 2512 + s = path.mnt->mnt_sb; 2513 + atomic_inc(&s->s_active); 2514 2514 mntput(path.mnt); 2515 2515 /* lock the sucker */ 2516 - down_write(&path.mnt->mnt_sb->s_umount); 2516 + down_write(&s->s_umount); 2517 2517 /* ... and return the root of (sub)tree on it */ 2518 2518 return path.dentry; 2519 2519 }
+1 -1
fs/nfs/dir.c
··· 1468 1468 res = NULL; 1469 1469 goto out; 1470 1470 /* This turned out not to be a regular file */ 1471 + case -EISDIR: 1471 1472 case -ENOTDIR: 1472 1473 goto no_open; 1473 1474 case -ELOOP: 1474 1475 if (!(nd->intent.open.flags & O_NOFOLLOW)) 1475 1476 goto no_open; 1476 - /* case -EISDIR: */ 1477 1477 /* case -EINVAL: */ 1478 1478 default: 1479 1479 res = ERR_CAST(inode);
+51 -40
fs/nfs/file.c
··· 40 40 41 41 #define NFSDBG_FACILITY NFSDBG_FILE 42 42 43 - static int nfs_file_open(struct inode *, struct file *); 44 - static int nfs_file_release(struct inode *, struct file *); 45 - static loff_t nfs_file_llseek(struct file *file, loff_t offset, int origin); 46 - static int nfs_file_mmap(struct file *, struct vm_area_struct *); 47 - static ssize_t nfs_file_splice_read(struct file *filp, loff_t *ppos, 48 - struct pipe_inode_info *pipe, 49 - size_t count, unsigned int flags); 50 - static ssize_t nfs_file_read(struct kiocb *, const struct iovec *iov, 51 - unsigned long nr_segs, loff_t pos); 52 - static ssize_t nfs_file_splice_write(struct pipe_inode_info *pipe, 53 - struct file *filp, loff_t *ppos, 54 - size_t count, unsigned int flags); 55 - static ssize_t nfs_file_write(struct kiocb *, const struct iovec *iov, 56 - unsigned long nr_segs, loff_t pos); 57 - static int nfs_file_flush(struct file *, fl_owner_t id); 58 - static int nfs_file_fsync(struct file *, loff_t, loff_t, int datasync); 59 - static int nfs_check_flags(int flags); 60 - static int nfs_lock(struct file *filp, int cmd, struct file_lock *fl); 61 - static int nfs_flock(struct file *filp, int cmd, struct file_lock *fl); 62 - static int nfs_setlease(struct file *file, long arg, struct file_lock **fl); 63 - 64 43 static const struct vm_operations_struct nfs_file_vm_ops; 65 - 66 - const struct file_operations nfs_file_operations = { 67 - .llseek = nfs_file_llseek, 68 - .read = do_sync_read, 69 - .write = do_sync_write, 70 - .aio_read = nfs_file_read, 71 - .aio_write = nfs_file_write, 72 - .mmap = nfs_file_mmap, 73 - .open = nfs_file_open, 74 - .flush = nfs_file_flush, 75 - .release = nfs_file_release, 76 - .fsync = nfs_file_fsync, 77 - .lock = nfs_lock, 78 - .flock = nfs_flock, 79 - .splice_read = nfs_file_splice_read, 80 - .splice_write = nfs_file_splice_write, 81 - .check_flags = nfs_check_flags, 82 - .setlease = nfs_setlease, 83 - }; 84 44 85 45 const struct inode_operations 
nfs_file_inode_operations = { 86 46 .permission = nfs_permission, ··· 846 886 file->f_path.dentry->d_name.name, arg); 847 887 return -EINVAL; 848 888 } 889 + 890 + const struct file_operations nfs_file_operations = { 891 + .llseek = nfs_file_llseek, 892 + .read = do_sync_read, 893 + .write = do_sync_write, 894 + .aio_read = nfs_file_read, 895 + .aio_write = nfs_file_write, 896 + .mmap = nfs_file_mmap, 897 + .open = nfs_file_open, 898 + .flush = nfs_file_flush, 899 + .release = nfs_file_release, 900 + .fsync = nfs_file_fsync, 901 + .lock = nfs_lock, 902 + .flock = nfs_flock, 903 + .splice_read = nfs_file_splice_read, 904 + .splice_write = nfs_file_splice_write, 905 + .check_flags = nfs_check_flags, 906 + .setlease = nfs_setlease, 907 + }; 908 + 909 + #ifdef CONFIG_NFS_V4 910 + static int 911 + nfs4_file_open(struct inode *inode, struct file *filp) 912 + { 913 + /* 914 + * NFSv4 opens are handled in d_lookup and d_revalidate. If we get to 915 + * this point, then something is very wrong 916 + */ 917 + dprintk("NFS: %s called! inode=%p filp=%p\n", __func__, inode, filp); 918 + return -ENOTDIR; 919 + } 920 + 921 + const struct file_operations nfs4_file_operations = { 922 + .llseek = nfs_file_llseek, 923 + .read = do_sync_read, 924 + .write = do_sync_write, 925 + .aio_read = nfs_file_read, 926 + .aio_write = nfs_file_write, 927 + .mmap = nfs_file_mmap, 928 + .open = nfs4_file_open, 929 + .flush = nfs_file_flush, 930 + .release = nfs_file_release, 931 + .fsync = nfs_file_fsync, 932 + .lock = nfs_lock, 933 + .flock = nfs_flock, 934 + .splice_read = nfs_file_splice_read, 935 + .splice_write = nfs_file_splice_write, 936 + .check_flags = nfs_check_flags, 937 + .setlease = nfs_setlease, 938 + }; 939 + #endif /* CONFIG_NFS_V4 */
+1 -1
fs/nfs/inode.c
··· 291 291 */ 292 292 inode->i_op = NFS_SB(sb)->nfs_client->rpc_ops->file_inode_ops; 293 293 if (S_ISREG(inode->i_mode)) { 294 - inode->i_fop = &nfs_file_operations; 294 + inode->i_fop = NFS_SB(sb)->nfs_client->rpc_ops->file_ops; 295 295 inode->i_data.a_ops = &nfs_file_aops; 296 296 inode->i_data.backing_dev_info = &NFS_SB(sb)->backing_dev_info; 297 297 } else if (S_ISDIR(inode->i_mode)) {
+2
fs/nfs/internal.h
··· 299 299 extern int nfs_generic_pagein(struct nfs_pageio_descriptor *desc, 300 300 struct list_head *head); 301 301 302 + extern void nfs_pageio_init_read_mds(struct nfs_pageio_descriptor *pgio, 303 + struct inode *inode); 302 304 extern void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio); 303 305 extern void nfs_readdata_release(struct nfs_read_data *rdata); 304 306
+1
fs/nfs/nfs3proc.c
··· 853 853 .dentry_ops = &nfs_dentry_operations, 854 854 .dir_inode_ops = &nfs3_dir_inode_operations, 855 855 .file_inode_ops = &nfs3_file_inode_operations, 856 + .file_ops = &nfs_file_operations, 856 857 .getroot = nfs3_proc_get_root, 857 858 .getattr = nfs3_proc_getattr, 858 859 .setattr = nfs3_proc_setattr,
+2 -2
fs/nfs/nfs4proc.c
··· 2464 2464 case -NFS4ERR_BADNAME: 2465 2465 return -ENOENT; 2466 2466 case -NFS4ERR_MOVED: 2467 - err = nfs4_get_referral(dir, name, fattr, fhandle); 2468 - break; 2467 + return nfs4_get_referral(dir, name, fattr, fhandle); 2469 2468 case -NFS4ERR_WRONGSEC: 2470 2469 nfs_fixup_secinfo_attributes(fattr, fhandle); 2471 2470 } ··· 6252 6253 .dentry_ops = &nfs4_dentry_operations, 6253 6254 .dir_inode_ops = &nfs4_dir_inode_operations, 6254 6255 .file_inode_ops = &nfs4_file_inode_operations, 6256 + .file_ops = &nfs4_file_operations, 6255 6257 .getroot = nfs4_proc_get_root, 6256 6258 .getattr = nfs4_proc_getattr, 6257 6259 .setattr = nfs4_proc_setattr,
+21 -5
fs/nfs/pnfs.c
··· 1260 1260 } 1261 1261 EXPORT_SYMBOL_GPL(pnfs_generic_pg_writepages); 1262 1262 1263 + static void pnfs_ld_handle_read_error(struct nfs_read_data *data) 1264 + { 1265 + struct nfs_pageio_descriptor pgio; 1266 + 1267 + put_lseg(data->lseg); 1268 + data->lseg = NULL; 1269 + dprintk("pnfs write error = %d\n", data->pnfs_error); 1270 + 1271 + nfs_pageio_init_read_mds(&pgio, data->inode); 1272 + 1273 + while (!list_empty(&data->pages)) { 1274 + struct nfs_page *req = nfs_list_entry(data->pages.next); 1275 + 1276 + nfs_list_remove_request(req); 1277 + nfs_pageio_add_request(&pgio, req); 1278 + } 1279 + nfs_pageio_complete(&pgio); 1280 + } 1281 + 1263 1282 /* 1264 1283 * Called by non rpc-based layout drivers 1265 1284 */ ··· 1287 1268 if (likely(!data->pnfs_error)) { 1288 1269 __nfs4_read_done_cb(data); 1289 1270 data->mds_ops->rpc_call_done(&data->task, data); 1290 - } else { 1291 - put_lseg(data->lseg); 1292 - data->lseg = NULL; 1293 - dprintk("pnfs write error = %d\n", data->pnfs_error); 1294 - } 1271 + } else 1272 + pnfs_ld_handle_read_error(data); 1295 1273 data->mds_ops->rpc_release(data); 1296 1274 } 1297 1275 EXPORT_SYMBOL_GPL(pnfs_ld_read_done);
+1
fs/nfs/proc.c
··· 710 710 .dentry_ops = &nfs_dentry_operations, 711 711 .dir_inode_ops = &nfs_dir_inode_operations, 712 712 .file_inode_ops = &nfs_file_inode_operations, 713 + .file_ops = &nfs_file_operations, 713 714 .getroot = nfs_proc_get_root, 714 715 .getattr = nfs_proc_getattr, 715 716 .setattr = nfs_proc_setattr,
+2 -12
fs/nfs/read.c
··· 109 109 } 110 110 } 111 111 112 - static void nfs_pageio_init_read_mds(struct nfs_pageio_descriptor *pgio, 112 + void nfs_pageio_init_read_mds(struct nfs_pageio_descriptor *pgio, 113 113 struct inode *inode) 114 114 { 115 115 nfs_pageio_init(pgio, inode, &nfs_pageio_read_ops, ··· 534 534 static void nfs_readpage_release_full(void *calldata) 535 535 { 536 536 struct nfs_read_data *data = calldata; 537 - struct nfs_pageio_descriptor pgio; 538 537 539 - if (data->pnfs_error) { 540 - nfs_pageio_init_read_mds(&pgio, data->inode); 541 - pgio.pg_recoalesce = 1; 542 - } 543 538 while (!list_empty(&data->pages)) { 544 539 struct nfs_page *req = nfs_list_entry(data->pages.next); 545 540 546 541 nfs_list_remove_request(req); 547 - if (!data->pnfs_error) 548 - nfs_readpage_release(req); 549 - else 550 - nfs_pageio_add_request(&pgio, req); 542 + nfs_readpage_release(req); 551 543 } 552 - if (data->pnfs_error) 553 - nfs_pageio_complete(&pgio); 554 544 nfs_readdata_release(calldata); 555 545 } 556 546
+1 -1
fs/ocfs2/alloc.c
··· 5699 5699 OCFS2_JOURNAL_ACCESS_WRITE); 5700 5700 if (ret) { 5701 5701 mlog_errno(ret); 5702 - goto out; 5702 + goto out_commit; 5703 5703 } 5704 5704 5705 5705 dquot_free_space_nodirty(inode,
+61 -8
fs/ocfs2/aops.c
··· 290 290 } 291 291 292 292 if (down_read_trylock(&oi->ip_alloc_sem) == 0) { 293 + /* 294 + * Unlock the page and cycle ip_alloc_sem so that we don't 295 + * busyloop waiting for ip_alloc_sem to unlock 296 + */ 293 297 ret = AOP_TRUNCATED_PAGE; 298 + unlock_page(page); 299 + unlock = 0; 300 + down_read(&oi->ip_alloc_sem); 301 + up_read(&oi->ip_alloc_sem); 294 302 goto out_inode_unlock; 295 303 } 296 304 ··· 571 563 { 572 564 struct inode *inode = iocb->ki_filp->f_path.dentry->d_inode; 573 565 int level; 566 + wait_queue_head_t *wq = ocfs2_ioend_wq(inode); 574 567 575 568 /* this io's submitter should not have unlocked this before we could */ 576 569 BUG_ON(!ocfs2_iocb_is_rw_locked(iocb)); 577 570 578 571 if (ocfs2_iocb_is_sem_locked(iocb)) 579 572 ocfs2_iocb_clear_sem_locked(iocb); 573 + 574 + if (ocfs2_iocb_is_unaligned_aio(iocb)) { 575 + ocfs2_iocb_clear_unaligned_aio(iocb); 576 + 577 + if (atomic_dec_and_test(&OCFS2_I(inode)->ip_unaligned_aio) && 578 + waitqueue_active(wq)) { 579 + wake_up_all(wq); 580 + } 581 + } 580 582 581 583 ocfs2_iocb_clear_rw_locked(iocb); 582 584 ··· 881 863 struct page *w_target_page; 882 864 883 865 /* 866 + * w_target_locked is used for page_mkwrite path indicating no unlocking 867 + * against w_target_page in ocfs2_write_end_nolock. 868 + */ 869 + unsigned int w_target_locked:1; 870 + 871 + /* 884 872 * ocfs2_write_end() uses this to know what the real range to 885 873 * write in the target should be. 886 874 */ ··· 919 895 920 896 static void ocfs2_free_write_ctxt(struct ocfs2_write_ctxt *wc) 921 897 { 898 + int i; 899 + 900 + /* 901 + * w_target_locked is only set to true in the page_mkwrite() case. 902 + * The intent is to allow us to lock the target page from write_begin() 903 + * to write_end(). The caller must hold a ref on w_target_page. 
904 + */ 905 + if (wc->w_target_locked) { 906 + BUG_ON(!wc->w_target_page); 907 + for (i = 0; i < wc->w_num_pages; i++) { 908 + if (wc->w_target_page == wc->w_pages[i]) { 909 + wc->w_pages[i] = NULL; 910 + break; 911 + } 912 + } 913 + mark_page_accessed(wc->w_target_page); 914 + page_cache_release(wc->w_target_page); 915 + } 922 916 ocfs2_unlock_and_free_pages(wc->w_pages, wc->w_num_pages); 923 917 924 918 brelse(wc->w_di_bh); ··· 1174 1132 */ 1175 1133 lock_page(mmap_page); 1176 1134 1135 + /* Exit and let the caller retry */ 1177 1136 if (mmap_page->mapping != mapping) { 1137 + WARN_ON(mmap_page->mapping); 1178 1138 unlock_page(mmap_page); 1179 - /* 1180 - * Sanity check - the locking in 1181 - * ocfs2_pagemkwrite() should ensure 1182 - * that this code doesn't trigger. 1183 - */ 1184 - ret = -EINVAL; 1185 - mlog_errno(ret); 1139 + ret = -EAGAIN; 1186 1140 goto out; 1187 1141 } 1188 1142 1189 1143 page_cache_get(mmap_page); 1190 1144 wc->w_pages[i] = mmap_page; 1145 + wc->w_target_locked = true; 1191 1146 } else { 1192 1147 wc->w_pages[i] = find_or_create_page(mapping, index, 1193 1148 GFP_NOFS); ··· 1199 1160 wc->w_target_page = wc->w_pages[i]; 1200 1161 } 1201 1162 out: 1163 + if (ret) 1164 + wc->w_target_locked = false; 1202 1165 return ret; 1203 1166 } 1204 1167 ··· 1858 1817 */ 1859 1818 ret = ocfs2_grab_pages_for_write(mapping, wc, wc->w_cpos, pos, len, 1860 1819 cluster_of_pages, mmap_page); 1861 - if (ret) { 1820 + if (ret && ret != -EAGAIN) { 1862 1821 mlog_errno(ret); 1822 + goto out_quota; 1823 + } 1824 + 1825 + /* 1826 + * ocfs2_grab_pages_for_write() returns -EAGAIN if it could not lock 1827 + * the target page. In this case, we exit with no error and no target 1828 + * page. This will trigger the caller, page_mkwrite(), to re-try 1829 + * the operation. 1830 + */ 1831 + if (ret == -EAGAIN) { 1832 + BUG_ON(wc->w_target_page); 1833 + ret = 0; 1863 1834 goto out_quota; 1864 1835 } 1865 1836
+14
fs/ocfs2/aops.h
··· 78 78 OCFS2_IOCB_RW_LOCK = 0, 79 79 OCFS2_IOCB_RW_LOCK_LEVEL, 80 80 OCFS2_IOCB_SEM, 81 + OCFS2_IOCB_UNALIGNED_IO, 81 82 OCFS2_IOCB_NUM_LOCKS 82 83 }; 83 84 ··· 92 91 clear_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private) 93 92 #define ocfs2_iocb_is_sem_locked(iocb) \ 94 93 test_bit(OCFS2_IOCB_SEM, (unsigned long *)&iocb->private) 94 + 95 + #define ocfs2_iocb_set_unaligned_aio(iocb) \ 96 + set_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private) 97 + #define ocfs2_iocb_clear_unaligned_aio(iocb) \ 98 + clear_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private) 99 + #define ocfs2_iocb_is_unaligned_aio(iocb) \ 100 + test_bit(OCFS2_IOCB_UNALIGNED_IO, (unsigned long *)&iocb->private) 101 + 102 + #define OCFS2_IOEND_WQ_HASH_SZ 37 103 + #define ocfs2_ioend_wq(v) (&ocfs2__ioend_wq[((unsigned long)(v)) %\ 104 + OCFS2_IOEND_WQ_HASH_SZ]) 105 + extern wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ]; 106 + 95 107 #endif /* OCFS2_FILE_H */
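The `ocfs2_ioend_wq(v)` macro added above avoids one wait queue per inode by hashing the inode pointer into a small table of shared wait-queue heads: the pointer value reduced modulo the (prime) table size of 37 picks the bucket. A plain C sketch of that bucketing, outside the kernel:

```c
#include <stdint.h>

#define IOEND_WQ_HASH_SZ 37   /* mirrors OCFS2_IOEND_WQ_HASH_SZ */

/* Map an inode pointer to one of IOEND_WQ_HASH_SZ shared buckets,
 * as ocfs2_ioend_wq() does with its wait_queue_head_t array. */
static unsigned long ioend_wq_idx(const void *inode)
{
    return (uintptr_t)inode % IOEND_WQ_HASH_SZ;
}
```

Different inodes may share a bucket, which is why the waker checks `waitqueue_active()` and the waiter must re-test its own condition after waking; a prime table size just spreads pointer values a little more evenly.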
+123 -73
fs/ocfs2/cluster/heartbeat.c
··· 216 216 217 217 struct list_head hr_all_item; 218 218 unsigned hr_unclean_stop:1, 219 + hr_aborted_start:1, 219 220 hr_item_pinned:1, 220 221 hr_item_dropped:1; 221 222 ··· 254 253 * has reached a 'steady' state. This will be fixed when we have 255 254 * a more complete api that doesn't lead to this sort of fragility. */ 256 255 atomic_t hr_steady_iterations; 256 + 257 + /* terminate o2hb thread if it does not reach steady state 258 + * (hr_steady_iterations == 0) within hr_unsteady_iterations */ 259 + atomic_t hr_unsteady_iterations; 257 260 258 261 char hr_dev_name[BDEVNAME_SIZE]; 259 262 ··· 329 324 330 325 static void o2hb_arm_write_timeout(struct o2hb_region *reg) 331 326 { 327 + /* Arm writeout only after thread reaches steady state */ 328 + if (atomic_read(&reg->hr_steady_iterations) != 0) 329 + return; 330 + 332 331 mlog(ML_HEARTBEAT, "Queue write timeout for %u ms\n", 333 332 O2HB_MAX_WRITE_TIMEOUT_MS); 334 333 ··· 546 537 return read == computed; 547 538 } 548 539 549 - /* We want to make sure that nobody is heartbeating on top of us -- 550 - * this will help detect an invalid configuration. */ 551 - static void o2hb_check_last_timestamp(struct o2hb_region *reg) 540 + /* 541 + * Compare the slot data with what we wrote in the last iteration. 542 + * If the match fails, print an appropriate error message. This is to 543 + * detect errors like... another node hearting on the same slot, 544 + * flaky device that is losing writes, etc. 545 + * Returns 1 if check succeeds, 0 otherwise. 
546 + */ 547 + static int o2hb_check_own_slot(struct o2hb_region *reg) 552 548 { 553 549 struct o2hb_disk_slot *slot; 554 550 struct o2hb_disk_heartbeat_block *hb_block; ··· 562 548 slot = &reg->hr_slots[o2nm_this_node()]; 563 549 /* Don't check on our 1st timestamp */ 564 550 if (!slot->ds_last_time) 565 - return; 551 + return 0; 566 552 567 553 hb_block = slot->ds_raw_block; 568 554 if (le64_to_cpu(hb_block->hb_seq) == slot->ds_last_time && 569 555 le64_to_cpu(hb_block->hb_generation) == slot->ds_last_generation && 570 556 hb_block->hb_node == slot->ds_node_num) 571 - return; 557 + return 1; 572 558 573 559 #define ERRSTR1 "Another node is heartbeating on device" 574 560 #define ERRSTR2 "Heartbeat generation mismatch on device" ··· 588 574 (unsigned long long)slot->ds_last_time, hb_block->hb_node, 589 575 (unsigned long long)le64_to_cpu(hb_block->hb_generation), 590 576 (unsigned long long)le64_to_cpu(hb_block->hb_seq)); 577 + 578 + return 0; 591 579 } 592 580 593 581 static inline void o2hb_prepare_block(struct o2hb_region *reg, ··· 735 719 o2nm_node_put(node); 736 720 } 737 721 738 - static void o2hb_set_quorum_device(struct o2hb_region *reg, 739 - struct o2hb_disk_slot *slot) 722 + static void o2hb_set_quorum_device(struct o2hb_region *reg) 740 723 { 741 - assert_spin_locked(&o2hb_live_lock); 742 - 743 724 if (!o2hb_global_heartbeat_active()) 744 725 return; 745 726 746 - if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 727 + /* Prevent race with o2hb_heartbeat_group_drop_item() */ 728 + if (kthread_should_stop()) 747 729 return; 730 + 731 + /* Tag region as quorum only after thread reaches steady state */ 732 + if (atomic_read(&reg->hr_steady_iterations) != 0) 733 + return; 734 + 735 + spin_lock(&o2hb_live_lock); 736 + 737 + if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 738 + goto unlock; 748 739 749 740 /* 750 741 * A region can be added to the quorum only when it sees all ··· 760 737 */ 761 738 if (memcmp(reg->hr_live_node_bitmap, 
o2hb_live_node_bitmap, 762 739 sizeof(o2hb_live_node_bitmap))) 763 - return; 740 + goto unlock; 764 741 765 - if (slot->ds_changed_samples < O2HB_LIVE_THRESHOLD) 766 - return; 767 - 768 - printk(KERN_NOTICE "o2hb: Region %s is now a quorum device\n", 769 - config_item_name(&reg->hr_item)); 742 + printk(KERN_NOTICE "o2hb: Region %s (%s) is now a quorum device\n", 743 + config_item_name(&reg->hr_item), reg->hr_dev_name); 770 744 771 745 set_bit(reg->hr_region_num, o2hb_quorum_region_bitmap); 772 746 ··· 774 754 if (o2hb_pop_count(&o2hb_quorum_region_bitmap, 775 755 O2NM_MAX_REGIONS) > O2HB_PIN_CUT_OFF) 776 756 o2hb_region_unpin(NULL); 757 + unlock: 758 + spin_unlock(&o2hb_live_lock); 777 759 } 778 760 779 761 static int o2hb_check_slot(struct o2hb_region *reg, ··· 947 925 slot->ds_equal_samples = 0; 948 926 } 949 927 out: 950 - o2hb_set_quorum_device(reg, slot); 951 - 952 928 spin_unlock(&o2hb_live_lock); 953 929 954 930 o2hb_run_event_list(&event); ··· 977 957 978 958 static int o2hb_do_disk_heartbeat(struct o2hb_region *reg) 979 959 { 980 - int i, ret, highest_node, change = 0; 960 + int i, ret, highest_node; 961 + int membership_change = 0, own_slot_ok = 0; 981 962 unsigned long configured_nodes[BITS_TO_LONGS(O2NM_MAX_NODES)]; 982 963 unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; 983 964 struct o2hb_bio_wait_ctxt write_wc; ··· 987 966 sizeof(configured_nodes)); 988 967 if (ret) { 989 968 mlog_errno(ret); 990 - return ret; 969 + goto bail; 991 970 } 992 971 993 972 /* ··· 1003 982 1004 983 highest_node = o2hb_highest_node(configured_nodes, O2NM_MAX_NODES); 1005 984 if (highest_node >= O2NM_MAX_NODES) { 1006 - mlog(ML_NOTICE, "ocfs2_heartbeat: no configured nodes found!\n"); 1007 - return -EINVAL; 985 + mlog(ML_NOTICE, "o2hb: No configured nodes found!\n"); 986 + ret = -EINVAL; 987 + goto bail; 1008 988 } 1009 989 1010 990 /* No sense in reading the slots of nodes that don't exist ··· 1015 993 ret = o2hb_read_slots(reg, highest_node + 1); 1016 994 
if (ret < 0) { 1017 995 mlog_errno(ret); 1018 - return ret; 996 + goto bail; 1019 997 } 1020 998 1021 999 /* With an up to date view of the slots, we can check that no 1022 1000 * other node has been improperly configured to heartbeat in 1023 1001 * our slot. */ 1024 - o2hb_check_last_timestamp(reg); 1002 + own_slot_ok = o2hb_check_own_slot(reg); 1025 1003 1026 1004 /* fill in the proper info for our next heartbeat */ 1027 1005 o2hb_prepare_block(reg, reg->hr_generation); 1028 1006 1029 - /* And fire off the write. Note that we don't wait on this I/O 1030 - * until later. */ 1031 1007 ret = o2hb_issue_node_write(reg, &write_wc); 1032 1008 if (ret < 0) { 1033 1009 mlog_errno(ret); 1034 - return ret; 1010 + goto bail; 1035 1011 } 1036 1012 1037 1013 i = -1; 1038 1014 while((i = find_next_bit(configured_nodes, 1039 1015 O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) { 1040 - change |= o2hb_check_slot(reg, &reg->hr_slots[i]); 1016 + membership_change |= o2hb_check_slot(reg, &reg->hr_slots[i]); 1041 1017 } 1042 1018 1043 1019 /* ··· 1050 1030 * disk */ 1051 1031 mlog(ML_ERROR, "Write error %d on device \"%s\"\n", 1052 1032 write_wc.wc_error, reg->hr_dev_name); 1053 - return write_wc.wc_error; 1033 + ret = write_wc.wc_error; 1034 + goto bail; 1054 1035 } 1055 1036 1056 - o2hb_arm_write_timeout(reg); 1037 + /* Skip disarming the timeout if own slot has stale/bad data */ 1038 + if (own_slot_ok) { 1039 + o2hb_set_quorum_device(reg); 1040 + o2hb_arm_write_timeout(reg); 1041 + } 1057 1042 1043 + bail: 1058 1044 /* let the person who launched us know when things are steady */ 1059 - if (!change && (atomic_read(&reg->hr_steady_iterations) != 0)) { 1060 - if (atomic_dec_and_test(&reg->hr_steady_iterations)) 1061 - wake_up(&o2hb_steady_queue); 1045 + if (atomic_read(&reg->hr_steady_iterations) != 0) { 1046 + if (!ret && own_slot_ok && !membership_change) { 1047 + if (atomic_dec_and_test(&reg->hr_steady_iterations)) 1048 + wake_up(&o2hb_steady_queue); 1049 + } 1062 1050 } 1063 1051 
1064 - return 0; 1052 + if (atomic_read(&reg->hr_steady_iterations) != 0) { 1053 + if (atomic_dec_and_test(&reg->hr_unsteady_iterations)) { 1054 + printk(KERN_NOTICE "o2hb: Unable to stabilize " 1055 + "heartbeart on region %s (%s)\n", 1056 + config_item_name(&reg->hr_item), 1057 + reg->hr_dev_name); 1058 + atomic_set(&reg->hr_steady_iterations, 0); 1059 + reg->hr_aborted_start = 1; 1060 + wake_up(&o2hb_steady_queue); 1061 + ret = -EIO; 1062 + } 1063 + } 1064 + 1065 + return ret; 1065 1066 } 1066 1067 1067 1068 /* Subtract b from a, storing the result in a. a *must* have a larger ··· 1136 1095 /* Pin node */ 1137 1096 o2nm_depend_this_node(); 1138 1097 1139 - while (!kthread_should_stop() && !reg->hr_unclean_stop) { 1098 + while (!kthread_should_stop() && 1099 + !reg->hr_unclean_stop && !reg->hr_aborted_start) { 1140 1100 /* We track the time spent inside 1141 1101 * o2hb_do_disk_heartbeat so that we avoid more than 1142 1102 * hr_timeout_ms between disk writes. On busy systems ··· 1145 1103 * likely to time itself out. */ 1146 1104 do_gettimeofday(&before_hb); 1147 1105 1148 - i = 0; 1149 - do { 1150 - ret = o2hb_do_disk_heartbeat(reg); 1151 - } while (ret && ++i < 2); 1106 + ret = o2hb_do_disk_heartbeat(reg); 1152 1107 1153 1108 do_gettimeofday(&after_hb); 1154 1109 elapsed_msec = o2hb_elapsed_msecs(&before_hb, &after_hb); ··· 1156 1117 after_hb.tv_sec, (unsigned long) after_hb.tv_usec, 1157 1118 elapsed_msec); 1158 1119 1159 - if (elapsed_msec < reg->hr_timeout_ms) { 1120 + if (!kthread_should_stop() && 1121 + elapsed_msec < reg->hr_timeout_ms) { 1160 1122 /* the kthread api has blocked signals for us so no 1161 1123 * need to record the return value. */ 1162 1124 msleep_interruptible(reg->hr_timeout_ms - elapsed_msec); ··· 1174 1134 * to timeout on this region when we could just as easily 1175 1135 * write a clear generation - thus indicating to them that 1176 1136 * this node has left this region. 1177 - * 1178 - * XXX: Should we skip this on unclean_stop? 
*/ 1179 - o2hb_prepare_block(reg, 0); 1180 - ret = o2hb_issue_node_write(reg, &write_wc); 1181 - if (ret == 0) { 1182 - o2hb_wait_on_io(reg, &write_wc); 1183 - } else { 1184 - mlog_errno(ret); 1137 + */ 1138 + if (!reg->hr_unclean_stop && !reg->hr_aborted_start) { 1139 + o2hb_prepare_block(reg, 0); 1140 + ret = o2hb_issue_node_write(reg, &write_wc); 1141 + if (ret == 0) 1142 + o2hb_wait_on_io(reg, &write_wc); 1143 + else 1144 + mlog_errno(ret); 1185 1145 } 1186 1146 1187 1147 /* Unpin node */ 1188 1148 o2nm_undepend_this_node(); 1189 1149 1190 - mlog(ML_HEARTBEAT|ML_KTHREAD, "hb thread exiting\n"); 1150 + mlog(ML_HEARTBEAT|ML_KTHREAD, "o2hb thread exiting\n"); 1191 1151 1192 1152 return 0; 1193 1153 } ··· 1198 1158 struct o2hb_debug_buf *db = inode->i_private; 1199 1159 struct o2hb_region *reg; 1200 1160 unsigned long map[BITS_TO_LONGS(O2NM_MAX_NODES)]; 1161 + unsigned long lts; 1201 1162 char *buf = NULL; 1202 1163 int i = -1; 1203 1164 int out = 0; ··· 1235 1194 1236 1195 case O2HB_DB_TYPE_REGION_ELAPSED_TIME: 1237 1196 reg = (struct o2hb_region *)db->db_data; 1238 - out += snprintf(buf + out, PAGE_SIZE - out, "%u\n", 1239 - jiffies_to_msecs(jiffies - 1240 - reg->hr_last_timeout_start)); 1197 + lts = reg->hr_last_timeout_start; 1198 + /* If 0, it has never been set before */ 1199 + if (lts) 1200 + lts = jiffies_to_msecs(jiffies - lts); 1201 + out += snprintf(buf + out, PAGE_SIZE - out, "%lu\n", lts); 1241 1202 goto done; 1242 1203 1243 1204 case O2HB_DB_TYPE_REGION_PINNED: ··· 1468 1425 int i; 1469 1426 struct page *page; 1470 1427 struct o2hb_region *reg = to_o2hb_region(item); 1428 + 1429 + mlog(ML_HEARTBEAT, "hb region release (%s)\n", reg->hr_dev_name); 1471 1430 1472 1431 if (reg->hr_tmp_block) 1473 1432 kfree(reg->hr_tmp_block); ··· 1837 1792 live_threshold <<= 1; 1838 1793 spin_unlock(&o2hb_live_lock); 1839 1794 } 1840 - atomic_set(&reg->hr_steady_iterations, live_threshold + 1); 1795 + ++live_threshold; 1796 + atomic_set(&reg->hr_steady_iterations, 
live_threshold); 1797 + /* unsteady_iterations is double the steady_iterations */ 1798 + atomic_set(&reg->hr_unsteady_iterations, (live_threshold << 1)); 1841 1799 1842 1800 hb_task = kthread_run(o2hb_thread, reg, "o2hb-%s", 1843 1801 reg->hr_item.ci_name); ··· 1857 1809 ret = wait_event_interruptible(o2hb_steady_queue, 1858 1810 atomic_read(&reg->hr_steady_iterations) == 0); 1859 1811 if (ret) { 1860 - /* We got interrupted (hello ptrace!). Clean up */ 1861 - spin_lock(&o2hb_live_lock); 1862 - hb_task = reg->hr_task; 1863 - reg->hr_task = NULL; 1864 - spin_unlock(&o2hb_live_lock); 1812 + atomic_set(&reg->hr_steady_iterations, 0); 1813 + reg->hr_aborted_start = 1; 1814 + } 1865 1815 1866 - if (hb_task) 1867 - kthread_stop(hb_task); 1816 + if (reg->hr_aborted_start) { 1817 + ret = -EIO; 1868 1818 goto out; 1869 1819 } 1870 1820 ··· 1879 1833 ret = -EIO; 1880 1834 1881 1835 if (hb_task && o2hb_global_heartbeat_active()) 1882 - printk(KERN_NOTICE "o2hb: Heartbeat started on region %s\n", 1883 - config_item_name(&reg->hr_item)); 1836 + printk(KERN_NOTICE "o2hb: Heartbeat started on region %s (%s)\n", 1837 + config_item_name(&reg->hr_item), reg->hr_dev_name); 1884 1838 1885 1839 out: 1886 1840 if (filp) ··· 2138 2092 2139 2093 /* stop the thread when the user removes the region dir */ 2140 2094 spin_lock(&o2hb_live_lock); 2141 - if (o2hb_global_heartbeat_active()) { 2142 - clear_bit(reg->hr_region_num, o2hb_region_bitmap); 2143 - clear_bit(reg->hr_region_num, o2hb_live_region_bitmap); 2144 - if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 2145 - quorum_region = 1; 2146 - clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap); 2147 - } 2148 2095 hb_task = reg->hr_task; 2149 2096 reg->hr_task = NULL; 2150 2097 reg->hr_item_dropped = 1; ··· 2146 2107 if (hb_task) 2147 2108 kthread_stop(hb_task); 2148 2109 2110 + if (o2hb_global_heartbeat_active()) { 2111 + spin_lock(&o2hb_live_lock); 2112 + clear_bit(reg->hr_region_num, o2hb_region_bitmap); 2113 + 
clear_bit(reg->hr_region_num, o2hb_live_region_bitmap); 2114 + if (test_bit(reg->hr_region_num, o2hb_quorum_region_bitmap)) 2115 + quorum_region = 1; 2116 + clear_bit(reg->hr_region_num, o2hb_quorum_region_bitmap); 2117 + spin_unlock(&o2hb_live_lock); 2118 + printk(KERN_NOTICE "o2hb: Heartbeat %s on region %s (%s)\n", 2119 + ((atomic_read(&reg->hr_steady_iterations) == 0) ? 2120 + "stopped" : "start aborted"), config_item_name(item), 2121 + reg->hr_dev_name); 2122 + } 2123 + 2149 2124 /* 2150 2125 * If we're racing a dev_write(), we need to wake them. They will 2151 2126 * check reg->hr_task 2152 2127 */ 2153 2128 if (atomic_read(&reg->hr_steady_iterations) != 0) { 2129 + reg->hr_aborted_start = 1; 2154 2130 atomic_set(&reg->hr_steady_iterations, 0); 2155 2131 wake_up(&o2hb_steady_queue); 2156 2132 } 2157 - 2158 - if (o2hb_global_heartbeat_active()) 2159 - printk(KERN_NOTICE "o2hb: Heartbeat stopped on region %s\n", 2160 - config_item_name(&reg->hr_item)); 2161 2133 2162 2134 config_item_put(item); 2163 2135
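The heartbeat.c change above pairs the existing `hr_steady_iterations` countdown with a new `hr_unsteady_iterations` budget (initialized to double the steady target): each clean pass decrements the steady counter, every pass spent still-unsteady burns the budget, and the start is aborted when the budget runs out first. A toy model of that accounting (field names shortened, not the kernel structs):

```c
struct toy_region {
    int steady;    /* clean iterations still required            */
    int unsteady;  /* total iterations allowed to get there      */
    int aborted;
};

/* One heartbeat pass; ok means own-slot data checked out and there
 * was no membership change, as in o2hb_do_disk_heartbeat(). */
static void toy_hb_iteration(struct toy_region *reg, int ok)
{
    if (reg->steady != 0 && ok)
        reg->steady--;
    if (reg->steady != 0 && --reg->unsteady == 0) {
        reg->steady = 0;   /* give up: wake the waiter with an error */
        reg->aborted = 1;
    }
}
```

This is what lets a region that never stabilizes (flaky device, another node on the same slot) fail the mount with -EIO instead of spinning forever.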
+69 -33
fs/ocfs2/cluster/netdebug.c
··· 47 47 #define SC_DEBUG_NAME "sock_containers" 48 48 #define NST_DEBUG_NAME "send_tracking" 49 49 #define STATS_DEBUG_NAME "stats" 50 + #define NODES_DEBUG_NAME "connected_nodes" 50 51 51 52 #define SHOW_SOCK_CONTAINERS 0 52 53 #define SHOW_SOCK_STATS 1 ··· 56 55 static struct dentry *sc_dentry; 57 56 static struct dentry *nst_dentry; 58 57 static struct dentry *stats_dentry; 58 + static struct dentry *nodes_dentry; 59 59 60 60 static DEFINE_SPINLOCK(o2net_debug_lock); 61 61 ··· 493 491 .release = sc_fop_release, 494 492 }; 495 493 496 - int o2net_debugfs_init(void) 494 + static int o2net_fill_bitmap(char *buf, int len) 497 495 { 498 - o2net_dentry = debugfs_create_dir(O2NET_DEBUG_DIR, NULL); 499 - if (!o2net_dentry) { 500 - mlog_errno(-ENOMEM); 501 - goto bail; 502 - } 496 + unsigned long map[BITS_TO_LONGS(O2NM_MAX_NODES)]; 497 + int i = -1, out = 0; 503 498 504 - nst_dentry = debugfs_create_file(NST_DEBUG_NAME, S_IFREG|S_IRUSR, 505 - o2net_dentry, NULL, 506 - &nst_seq_fops); 507 - if (!nst_dentry) { 508 - mlog_errno(-ENOMEM); 509 - goto bail; 510 - } 499 + o2net_fill_node_map(map, sizeof(map)); 511 500 512 - sc_dentry = debugfs_create_file(SC_DEBUG_NAME, S_IFREG|S_IRUSR, 513 - o2net_dentry, NULL, 514 - &sc_seq_fops); 515 - if (!sc_dentry) { 516 - mlog_errno(-ENOMEM); 517 - goto bail; 518 - } 501 + while ((i = find_next_bit(map, O2NM_MAX_NODES, i + 1)) < O2NM_MAX_NODES) 502 + out += snprintf(buf + out, PAGE_SIZE - out, "%d ", i); 503 + out += snprintf(buf + out, PAGE_SIZE - out, "\n"); 519 504 520 - stats_dentry = debugfs_create_file(STATS_DEBUG_NAME, S_IFREG|S_IRUSR, 521 - o2net_dentry, NULL, 522 - &stats_seq_fops); 523 - if (!stats_dentry) { 524 - mlog_errno(-ENOMEM); 525 - goto bail; 526 - } 505 + return out; 506 + } 507 + 508 + static int nodes_fop_open(struct inode *inode, struct file *file) 509 + { 510 + char *buf; 511 + 512 + buf = kmalloc(PAGE_SIZE, GFP_KERNEL); 513 + if (!buf) 514 + return -ENOMEM; 515 + 516 + i_size_write(inode, o2net_fill_bitmap(buf, 
PAGE_SIZE)); 517 + 518 + file->private_data = buf; 527 519 528 520 return 0; 529 - bail: 530 - debugfs_remove(stats_dentry); 531 - debugfs_remove(sc_dentry); 532 - debugfs_remove(nst_dentry); 533 - debugfs_remove(o2net_dentry); 534 - return -ENOMEM; 535 521 } 522 + 523 + static int o2net_debug_release(struct inode *inode, struct file *file) 524 + { 525 + kfree(file->private_data); 526 + return 0; 527 + } 528 + 529 + static ssize_t o2net_debug_read(struct file *file, char __user *buf, 530 + size_t nbytes, loff_t *ppos) 531 + { 532 + return simple_read_from_buffer(buf, nbytes, ppos, file->private_data, 533 + i_size_read(file->f_mapping->host)); 534 + } 535 + 536 + static const struct file_operations nodes_fops = { 537 + .open = nodes_fop_open, 538 + .release = o2net_debug_release, 539 + .read = o2net_debug_read, 540 + .llseek = generic_file_llseek, 541 + }; 536 542 537 543 void o2net_debugfs_exit(void) 538 544 { 545 + debugfs_remove(nodes_dentry); 539 546 debugfs_remove(stats_dentry); 540 547 debugfs_remove(sc_dentry); 541 548 debugfs_remove(nst_dentry); 542 549 debugfs_remove(o2net_dentry); 550 + } 551 + 552 + int o2net_debugfs_init(void) 553 + { 554 + mode_t mode = S_IFREG|S_IRUSR; 555 + 556 + o2net_dentry = debugfs_create_dir(O2NET_DEBUG_DIR, NULL); 557 + if (o2net_dentry) 558 + nst_dentry = debugfs_create_file(NST_DEBUG_NAME, mode, 559 + o2net_dentry, NULL, &nst_seq_fops); 560 + if (nst_dentry) 561 + sc_dentry = debugfs_create_file(SC_DEBUG_NAME, mode, 562 + o2net_dentry, NULL, &sc_seq_fops); 563 + if (sc_dentry) 564 + stats_dentry = debugfs_create_file(STATS_DEBUG_NAME, mode, 565 + o2net_dentry, NULL, &stats_seq_fops); 566 + if (stats_dentry) 567 + nodes_dentry = debugfs_create_file(NODES_DEBUG_NAME, mode, 568 + o2net_dentry, NULL, &nodes_fops); 569 + if (nodes_dentry) 570 + return 0; 571 + 572 + o2net_debugfs_exit(); 573 + mlog_errno(-ENOMEM); 574 + return -ENOMEM; 543 575 } 544 576 545 577 #endif /* CONFIG_DEBUG_FS */
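The new `connected_nodes` debugfs file above uses a common pattern: render the whole text once at `open()` into a private buffer, let `read()` copy out of it at the requested offset via `simple_read_from_buffer()`, and free it at `release()`. A minimal stand-in for the copy-out step (hypothetical `toy_read`, not the kernel helper itself):

```c
#include <string.h>

/* stand-in for simple_read_from_buffer(): copy up to nbytes from
 * buf starting at *ppos, advance *ppos, return bytes copied (0 at EOF) */
static size_t toy_read(char *dst, size_t nbytes, size_t *ppos,
                       const char *buf, size_t size)
{
    if (*ppos >= size)
        return 0;
    if (nbytes > size - *ppos)
        nbytes = size - *ppos;
    memcpy(dst, buf + *ppos, nbytes);
    *ppos += nbytes;
    return nbytes;
}
```

Rendering once at open keeps the file contents stable across partial reads, at the cost of one page-sized allocation per open file.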
+72 -66
fs/ocfs2/cluster/tcp.c
··· 546 546 } 547 547 548 548 if (was_valid && !valid) { 549 - printk(KERN_NOTICE "o2net: no longer connected to " 549 + printk(KERN_NOTICE "o2net: No longer connected to " 550 550 SC_NODEF_FMT "\n", SC_NODEF_ARGS(old_sc)); 551 551 o2net_complete_nodes_nsw(nn); 552 552 } ··· 556 556 cancel_delayed_work(&nn->nn_connect_expired); 557 557 printk(KERN_NOTICE "o2net: %s " SC_NODEF_FMT "\n", 558 558 o2nm_this_node() > sc->sc_node->nd_num ? 559 - "connected to" : "accepted connection from", 559 + "Connected to" : "Accepted connection from", 560 560 SC_NODEF_ARGS(sc)); 561 561 } 562 562 ··· 644 644 o2net_sc_queue_work(sc, &sc->sc_connect_work); 645 645 break; 646 646 default: 647 - printk(KERN_INFO "o2net: connection to " SC_NODEF_FMT 647 + printk(KERN_INFO "o2net: Connection to " SC_NODEF_FMT 648 648 " shutdown, state %d\n", 649 649 SC_NODEF_ARGS(sc), sk->sk_state); 650 650 o2net_sc_queue_work(sc, &sc->sc_shutdown_work); ··· 1035 1035 return ret; 1036 1036 } 1037 1037 1038 + /* Get a map of all nodes to which this node is currently connected to */ 1039 + void o2net_fill_node_map(unsigned long *map, unsigned bytes) 1040 + { 1041 + struct o2net_sock_container *sc; 1042 + int node, ret; 1043 + 1044 + BUG_ON(bytes < (BITS_TO_LONGS(O2NM_MAX_NODES) * sizeof(unsigned long))); 1045 + 1046 + memset(map, 0, bytes); 1047 + for (node = 0; node < O2NM_MAX_NODES; ++node) { 1048 + o2net_tx_can_proceed(o2net_nn_from_num(node), &sc, &ret); 1049 + if (!ret) { 1050 + set_bit(node, map); 1051 + sc_put(sc); 1052 + } 1053 + } 1054 + } 1055 + EXPORT_SYMBOL_GPL(o2net_fill_node_map); 1056 + 1038 1057 int o2net_send_message_vec(u32 msg_type, u32 key, struct kvec *caller_vec, 1039 1058 size_t caller_veclen, u8 target_node, int *status) 1040 1059 { ··· 1304 1285 struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num); 1305 1286 1306 1287 if (hand->protocol_version != cpu_to_be64(O2NET_PROTOCOL_VERSION)) { 1307 - mlog(ML_NOTICE, SC_NODEF_FMT " advertised net protocol " 1308 - "version %llu but 
%llu is required, disconnecting\n", 1309 - SC_NODEF_ARGS(sc), 1310 - (unsigned long long)be64_to_cpu(hand->protocol_version), 1311 - O2NET_PROTOCOL_VERSION); 1288 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " Advertised net " 1289 + "protocol version %llu but %llu is required. " 1290 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1291 + (unsigned long long)be64_to_cpu(hand->protocol_version), 1292 + O2NET_PROTOCOL_VERSION); 1312 1293 1313 1294 /* don't bother reconnecting if its the wrong version. */ 1314 1295 o2net_ensure_shutdown(nn, sc, -ENOTCONN); ··· 1322 1303 */ 1323 1304 if (be32_to_cpu(hand->o2net_idle_timeout_ms) != 1324 1305 o2net_idle_timeout()) { 1325 - mlog(ML_NOTICE, SC_NODEF_FMT " uses a network idle timeout of " 1326 - "%u ms, but we use %u ms locally. disconnecting\n", 1327 - SC_NODEF_ARGS(sc), 1328 - be32_to_cpu(hand->o2net_idle_timeout_ms), 1329 - o2net_idle_timeout()); 1306 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a network " 1307 + "idle timeout of %u ms, but we use %u ms locally. " 1308 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1309 + be32_to_cpu(hand->o2net_idle_timeout_ms), 1310 + o2net_idle_timeout()); 1330 1311 o2net_ensure_shutdown(nn, sc, -ENOTCONN); 1331 1312 return -1; 1332 1313 } 1333 1314 1334 1315 if (be32_to_cpu(hand->o2net_keepalive_delay_ms) != 1335 1316 o2net_keepalive_delay()) { 1336 - mlog(ML_NOTICE, SC_NODEF_FMT " uses a keepalive delay of " 1337 - "%u ms, but we use %u ms locally. disconnecting\n", 1338 - SC_NODEF_ARGS(sc), 1339 - be32_to_cpu(hand->o2net_keepalive_delay_ms), 1340 - o2net_keepalive_delay()); 1317 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a keepalive " 1318 + "delay of %u ms, but we use %u ms locally. 
" 1319 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1320 + be32_to_cpu(hand->o2net_keepalive_delay_ms), 1321 + o2net_keepalive_delay()); 1341 1322 o2net_ensure_shutdown(nn, sc, -ENOTCONN); 1342 1323 return -1; 1343 1324 } 1344 1325 1345 1326 if (be32_to_cpu(hand->o2hb_heartbeat_timeout_ms) != 1346 1327 O2HB_MAX_WRITE_TIMEOUT_MS) { 1347 - mlog(ML_NOTICE, SC_NODEF_FMT " uses a heartbeat timeout of " 1348 - "%u ms, but we use %u ms locally. disconnecting\n", 1349 - SC_NODEF_ARGS(sc), 1350 - be32_to_cpu(hand->o2hb_heartbeat_timeout_ms), 1351 - O2HB_MAX_WRITE_TIMEOUT_MS); 1328 + printk(KERN_NOTICE "o2net: " SC_NODEF_FMT " uses a heartbeat " 1329 + "timeout of %u ms, but we use %u ms locally. " 1330 + "Disconnecting.\n", SC_NODEF_ARGS(sc), 1331 + be32_to_cpu(hand->o2hb_heartbeat_timeout_ms), 1332 + O2HB_MAX_WRITE_TIMEOUT_MS); 1352 1333 o2net_ensure_shutdown(nn, sc, -ENOTCONN); 1353 1334 return -1; 1354 1335 } ··· 1559 1540 { 1560 1541 struct o2net_sock_container *sc = (struct o2net_sock_container *)data; 1561 1542 struct o2net_node *nn = o2net_nn_from_num(sc->sc_node->nd_num); 1562 - 1563 1543 #ifdef CONFIG_DEBUG_FS 1564 - ktime_t now = ktime_get(); 1544 + unsigned long msecs = ktime_to_ms(ktime_get()) - 1545 + ktime_to_ms(sc->sc_tv_timer); 1546 + #else 1547 + unsigned long msecs = o2net_idle_timeout(); 1565 1548 #endif 1566 1549 1567 - printk(KERN_NOTICE "o2net: connection to " SC_NODEF_FMT " has been idle for %u.%u " 1568 - "seconds, shutting it down.\n", SC_NODEF_ARGS(sc), 1569 - o2net_idle_timeout() / 1000, 1570 - o2net_idle_timeout() % 1000); 1571 - 1572 - #ifdef CONFIG_DEBUG_FS 1573 - mlog(ML_NOTICE, "Here are some times that might help debug the " 1574 - "situation: (Timer: %lld, Now %lld, DataReady %lld, Advance %lld-%lld, " 1575 - "Key 0x%08x, Func %u, FuncTime %lld-%lld)\n", 1576 - (long long)ktime_to_us(sc->sc_tv_timer), (long long)ktime_to_us(now), 1577 - (long long)ktime_to_us(sc->sc_tv_data_ready), 1578 - (long long)ktime_to_us(sc->sc_tv_advance_start), 1579 - 
(long long)ktime_to_us(sc->sc_tv_advance_stop),
1580 - sc->sc_msg_key, sc->sc_msg_type,
1581 - (long long)ktime_to_us(sc->sc_tv_func_start),
1582 - (long long)ktime_to_us(sc->sc_tv_func_stop));
1583 - #endif
1550 + printk(KERN_NOTICE "o2net: Connection to " SC_NODEF_FMT " has been "
1551 + "idle for %lu.%lu secs, shutting it down.\n", SC_NODEF_ARGS(sc),
1552 + msecs / 1000, msecs % 1000);
1584 1553 
1585 1554 /*
1586 1555 * Initialize the nn_timeout so that the next connection attempt
··· 1701 1694 
1702 1695 out:
1703 1696 if (ret) {
1704 - mlog(ML_NOTICE, "connect attempt to " SC_NODEF_FMT " failed "
1705 - "with errno %d\n", SC_NODEF_ARGS(sc), ret);
1697 + printk(KERN_NOTICE "o2net: Connect attempt to " SC_NODEF_FMT
1698 + " failed with errno %d\n", SC_NODEF_ARGS(sc), ret);
1706 1699 /* 0 err so that another will be queued and attempted
1707 1700 * from set_nn_state */
1708 1701 if (sc)
··· 1725 1718 
1726 1719 spin_lock(&nn->nn_lock);
1727 1720 if (!nn->nn_sc_valid) {
1728 - mlog(ML_ERROR, "no connection established with node %u after "
1729 - "%u.%u seconds, giving up and returning errors.\n",
1721 + printk(KERN_NOTICE "o2net: No connection established with "
1722 + "node %u after %u.%u seconds, giving up.\n",
1730 1723 o2net_num_from_nn(nn),
1731 1724 o2net_idle_timeout() / 1000,
1732 1725 o2net_idle_timeout() % 1000);
··· 1869 1862 
1870 1863 node = o2nm_get_node_by_ip(sin.sin_addr.s_addr);
1871 1864 if (node == NULL) {
1872 - mlog(ML_NOTICE, "attempt to connect from unknown node at %pI4:%d\n",
1873 - &sin.sin_addr.s_addr, ntohs(sin.sin_port));
1865 + printk(KERN_NOTICE "o2net: Attempt to connect from unknown "
1866 + "node at %pI4:%d\n", &sin.sin_addr.s_addr,
1867 + ntohs(sin.sin_port));
1874 1868 ret = -EINVAL;
1875 1869 goto out;
1876 1870 }
1877 1871 
1878 1872 if (o2nm_this_node() >= node->nd_num) {
1879 1873 local_node = o2nm_get_node_by_num(o2nm_this_node());
1880 - mlog(ML_NOTICE, "unexpected connect attempt seen at node '%s' ("
1881 - "%u, %pI4:%d) from node '%s' (%u, %pI4:%d)\n",
1882 - local_node->nd_name, local_node->nd_num,
1883 - &(local_node->nd_ipv4_address),
1884 - ntohs(local_node->nd_ipv4_port),
1885 - node->nd_name, node->nd_num, &sin.sin_addr.s_addr,
1886 - ntohs(sin.sin_port));
1874 + printk(KERN_NOTICE "o2net: Unexpected connect attempt seen "
1875 + "at node '%s' (%u, %pI4:%d) from node '%s' (%u, "
1876 + "%pI4:%d)\n", local_node->nd_name, local_node->nd_num,
1877 + &(local_node->nd_ipv4_address),
1878 + ntohs(local_node->nd_ipv4_port), node->nd_name,
1879 + node->nd_num, &sin.sin_addr.s_addr, ntohs(sin.sin_port));
1887 1880 ret = -EINVAL;
1888 1881 goto out;
1889 1882 }
··· 1908 1901 ret = 0;
1909 1902 spin_unlock(&nn->nn_lock);
1910 1903 if (ret) {
1911 - mlog(ML_NOTICE, "attempt to connect from node '%s' at "
1912 - "%pI4:%d but it already has an open connection\n",
1913 - node->nd_name, &sin.sin_addr.s_addr,
1914 - ntohs(sin.sin_port));
1904 + printk(KERN_NOTICE "o2net: Attempt to connect from node '%s' "
1905 + "at %pI4:%d but it already has an open connection\n",
1906 + node->nd_name, &sin.sin_addr.s_addr,
1907 + ntohs(sin.sin_port));
1915 1908 goto out;
1916 1909 }
1917 1910 
··· 1991 1984 
1992 1985 ret = sock_create(PF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
1993 1986 if (ret < 0) {
1994 - mlog(ML_ERROR, "unable to create socket, ret=%d\n", ret);
1987 + printk(KERN_ERR "o2net: Error %d while creating socket\n", ret);
1995 1988 goto out;
1996 1989 }
1997 1990 
··· 2008 2001 sock->sk->sk_reuse = 1;
2009 2002 ret = sock->ops->bind(sock, (struct sockaddr *)&sin, sizeof(sin));
2010 2003 if (ret < 0) {
2011 - mlog(ML_ERROR, "unable to bind socket at %pI4:%u, "
2012 - "ret=%d\n", &addr, ntohs(port), ret);
2004 + printk(KERN_ERR "o2net: Error %d while binding socket at "
2005 + "%pI4:%u\n", ret, &addr, ntohs(port));
2013 2006 goto out;
2014 2007 }
2015 2008 
2016 2009 ret = sock->ops->listen(sock, 64);
2017 - if (ret < 0) {
2018 - mlog(ML_ERROR, "unable to listen on %pI4:%u, ret=%d\n",
2019 - &addr, ntohs(port), ret);
2020 - }
2010 + if (ret < 0)
2011 + printk(KERN_ERR "o2net: Error %d while listening on %pI4:%u\n",
2012 + ret, &addr, ntohs(port));
2021 2013 
2022 2014 out:
2023 2015 if (ret) {
+2
fs/ocfs2/cluster/tcp.h
··· 106 106 struct list_head *unreg_list);
107 107 void o2net_unregister_handler_list(struct list_head *list);
108 108 
109 + void o2net_fill_node_map(unsigned long *map, unsigned bytes);
110 + 
109 111 struct o2nm_node;
110 112 int o2net_register_hb_callbacks(void);
111 113 void o2net_unregister_hb_callbacks(void);
+1 -2
fs/ocfs2/dir.c
··· 1184 1184 if (pde)
1185 1185 le16_add_cpu(&pde->rec_len,
1186 1186 le16_to_cpu(de->rec_len));
1187 - else
1188 - de->inode = 0;
1187 + de->inode = 0;
1189 1188 dir->i_version++;
1190 1189 ocfs2_journal_dirty(handle, bh);
1191 1190 goto bail;
+12 -44
fs/ocfs2/dlm/dlmcommon.h
··· 859 859 void dlm_wait_for_recovery(struct dlm_ctxt *dlm);
860 860 void dlm_kick_recovery_thread(struct dlm_ctxt *dlm);
861 861 int dlm_is_node_dead(struct dlm_ctxt *dlm, u8 node);
862 - int dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout);
863 - int dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout);
862 + void dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout);
863 + void dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout);
864 864 
865 865 void dlm_put(struct dlm_ctxt *dlm);
866 866 struct dlm_ctxt *dlm_grab(struct dlm_ctxt *dlm);
··· 877 877 kref_get(&res->refs);
878 878 }
879 879 void dlm_lockres_put(struct dlm_lock_resource *res);
880 - void __dlm_unhash_lockres(struct dlm_lock_resource *res);
881 - void __dlm_insert_lockres(struct dlm_ctxt *dlm,
882 - struct dlm_lock_resource *res);
880 + void __dlm_unhash_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res);
881 + void __dlm_insert_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res);
883 882 struct dlm_lock_resource * __dlm_lookup_lockres_full(struct dlm_ctxt *dlm,
884 883 const char *name,
885 884 unsigned int len,
··· 901 902 const char *name,
902 903 unsigned int namelen);
903 904 
904 - #define dlm_lockres_set_refmap_bit(bit,res) \
905 - __dlm_lockres_set_refmap_bit(bit,res,__FILE__,__LINE__)
906 - #define dlm_lockres_clear_refmap_bit(bit,res) \
907 - __dlm_lockres_clear_refmap_bit(bit,res,__FILE__,__LINE__)
905 + void dlm_lockres_set_refmap_bit(struct dlm_ctxt *dlm,
906 + struct dlm_lock_resource *res, int bit);
907 + void dlm_lockres_clear_refmap_bit(struct dlm_ctxt *dlm,
908 + struct dlm_lock_resource *res, int bit);
908 909 
909 - static inline void __dlm_lockres_set_refmap_bit(int bit,
910 - struct dlm_lock_resource *res,
911 - const char *file,
912 - int line)
913 - {
914 - //printk("%s:%d:%.*s: setting bit %d\n", file, line,
915 - // res->lockname.len, res->lockname.name, bit);
916 - set_bit(bit, res->refmap);
917 - }
918 - 
919 - static inline void __dlm_lockres_clear_refmap_bit(int bit,
920 - struct dlm_lock_resource *res,
921 - const char *file,
922 - int line)
923 - {
924 - //printk("%s:%d:%.*s: clearing bit %d\n", file, line,
925 - // res->lockname.len, res->lockname.name, bit);
926 - clear_bit(bit, res->refmap);
927 - }
928 - 
929 - void __dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm,
930 - struct dlm_lock_resource *res,
931 - const char *file,
932 - int line);
933 - void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,
934 - struct dlm_lock_resource *res,
935 - int new_lockres,
936 - const char *file,
937 - int line);
938 - #define dlm_lockres_drop_inflight_ref(d,r) \
939 - __dlm_lockres_drop_inflight_ref(d,r,__FILE__,__LINE__)
940 - #define dlm_lockres_grab_inflight_ref(d,r) \
941 - __dlm_lockres_grab_inflight_ref(d,r,0,__FILE__,__LINE__)
942 - #define dlm_lockres_grab_inflight_ref_new(d,r) \
943 - __dlm_lockres_grab_inflight_ref(d,r,1,__FILE__,__LINE__)
910 + void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm,
911 + struct dlm_lock_resource *res);
912 + void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,
913 + struct dlm_lock_resource *res);
944 914 
945 915 void dlm_queue_ast(struct dlm_ctxt *dlm, struct dlm_lock *lock);
946 916 void dlm_queue_bast(struct dlm_ctxt *dlm, struct dlm_lock *lock);
+22 -22
fs/ocfs2/dlm/dlmdomain.c
··· 157 157 
158 158 static void dlm_unregister_domain_handlers(struct dlm_ctxt *dlm);
159 159 
160 - void __dlm_unhash_lockres(struct dlm_lock_resource *lockres)
160 + void __dlm_unhash_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res)
161 161 {
162 - if (!hlist_unhashed(&lockres->hash_node)) {
163 - hlist_del_init(&lockres->hash_node);
164 - dlm_lockres_put(lockres);
165 - }
162 + if (hlist_unhashed(&res->hash_node))
163 + return;
164 + 
165 + mlog(0, "%s: Unhash res %.*s\n", dlm->name, res->lockname.len,
166 + res->lockname.name);
167 + hlist_del_init(&res->hash_node);
168 + dlm_lockres_put(res);
166 169 }
167 170 
168 - void __dlm_insert_lockres(struct dlm_ctxt *dlm,
169 - struct dlm_lock_resource *res)
171 + void __dlm_insert_lockres(struct dlm_ctxt *dlm, struct dlm_lock_resource *res)
170 172 {
171 173 struct hlist_head *bucket;
172 174 struct qstr *q;
··· 182 180 dlm_lockres_get(res);
183 181 
184 182 hlist_add_head(&res->hash_node, bucket);
183 + 
184 + mlog(0, "%s: Hash res %.*s\n", dlm->name, res->lockname.len,
185 + res->lockname.name);
185 186 }
186 187 
187 188 struct dlm_lock_resource * __dlm_lookup_lockres_full(struct dlm_ctxt *dlm,
··· 544 539 
545 540 static void __dlm_print_nodes(struct dlm_ctxt *dlm)
546 541 {
547 - int node = -1;
542 + int node = -1, num = 0;
548 543 
549 544 assert_spin_locked(&dlm->spinlock);
550 545 
551 - printk(KERN_NOTICE "o2dlm: Nodes in domain %s: ", dlm->name);
552 - 
546 + printk("( ");
553 547 while ((node = find_next_bit(dlm->domain_map, O2NM_MAX_NODES,
554 548 node + 1)) < O2NM_MAX_NODES) {
555 549 printk("%d ", node);
550 + ++num;
556 551 }
557 - printk("\n");
552 + printk(") %u nodes\n", num);
558 553 }
559 554 
560 555 static int dlm_exit_domain_handler(struct o2net_msg *msg, u32 len, void *data,
··· 571 566 
572 567 node = exit_msg->node_idx;
573 568 
574 - printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s\n", node, dlm->name);
575 - 
576 569 spin_lock(&dlm->spinlock);
577 570 clear_bit(node, dlm->domain_map);
578 571 clear_bit(node, dlm->exit_domain_map);
572 + printk(KERN_NOTICE "o2dlm: Node %u leaves domain %s ", node, dlm->name);
579 573 __dlm_print_nodes(dlm);
580 574 
581 575 /* notify anything attached to the heartbeat events */
··· 759 755 
760 756 dlm_mark_domain_leaving(dlm);
761 757 dlm_leave_domain(dlm);
758 + printk(KERN_NOTICE "o2dlm: Leaving domain %s\n", dlm->name);
762 759 dlm_force_free_mles(dlm);
763 760 dlm_complete_dlm_shutdown(dlm);
764 761 }
··· 975 970 clear_bit(assert->node_idx, dlm->exit_domain_map);
976 971 __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
977 972 
978 - printk(KERN_NOTICE "o2dlm: Node %u joins domain %s\n",
973 + printk(KERN_NOTICE "o2dlm: Node %u joins domain %s ",
979 974 assert->node_idx, dlm->name);
980 975 __dlm_print_nodes(dlm);
981 976 
··· 1706 1701 bail:
1707 1702 spin_lock(&dlm->spinlock);
1708 1703 __dlm_set_joining_node(dlm, DLM_LOCK_RES_OWNER_UNKNOWN);
1709 - if (!status)
1704 + if (!status) {
1705 + printk(KERN_NOTICE "o2dlm: Joining domain %s ", dlm->name);
1710 1706 __dlm_print_nodes(dlm);
1707 + }
1711 1708 spin_unlock(&dlm->spinlock);
1712 1709 
1713 1710 if (ctxt) {
··· 2135 2128 if (strlen(domain) >= O2NM_MAX_NAME_LEN) {
2136 2129 ret = -ENAMETOOLONG;
2137 2130 mlog(ML_ERROR, "domain name length too long\n");
2138 - goto leave;
2139 - }
2140 - 
2141 - if (!o2hb_check_local_node_heartbeating()) {
2142 - mlog(ML_ERROR, "the local node has not been configured, or is "
2143 - "not heartbeating\n");
2144 - ret = -EPROTO;
2145 2131 goto leave;
2146 2132 }
2147 2133 
+26 -28
fs/ocfs2/dlm/dlmlock.c
··· 183 183 kick_thread = 1;
184 184 }
185 185 }
186 - /* reduce the inflight count, this may result in the lockres
187 - * being purged below during calc_usage */
188 - if (lock->ml.node == dlm->node_num)
189 - dlm_lockres_drop_inflight_ref(dlm, res);
190 186 
191 187 spin_unlock(&res->spinlock);
192 188 wake_up(&res->wq);
··· 227 231 lock->ml.type, res->lockname.len,
228 232 res->lockname.name, flags);
229 233 
234 + /*
235 + * Wait if resource is getting recovered, remastered, etc.
236 + * If the resource was remastered and new owner is self, then exit.
237 + */
230 238 spin_lock(&res->spinlock);
231 - 
232 - /* will exit this call with spinlock held */
233 239 __dlm_wait_on_lockres(res);
240 + if (res->owner == dlm->node_num) {
241 + spin_unlock(&res->spinlock);
242 + return DLM_RECOVERING;
243 + }
234 244 res->state |= DLM_LOCK_RES_IN_PROGRESS;
235 245 
236 246 /* add lock to local (secondary) queue */
··· 321 319 tmpret = o2net_send_message(DLM_CREATE_LOCK_MSG, dlm->key, &create,
322 320 sizeof(create), res->owner, &status);
323 321 if (tmpret >= 0) {
324 - // successfully sent and received
325 - ret = status; // this is already a dlm_status
322 + ret = status;
326 323 if (ret == DLM_REJECTED) {
327 - mlog(ML_ERROR, "%s:%.*s: BUG. this is a stale lockres "
328 - "no longer owned by %u. that node is coming back "
329 - "up currently.\n", dlm->name, create.namelen,
324 + mlog(ML_ERROR, "%s: res %.*s, Stale lockres no longer "
325 + "owned by node %u. That node is coming back up "
326 + "currently.\n", dlm->name, create.namelen,
330 327 create.name, res->owner);
331 328 dlm_print_one_lock_resource(res);
332 329 BUG();
333 330 }
334 331 } else {
335 - mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to "
336 - "node %u\n", tmpret, DLM_CREATE_LOCK_MSG, dlm->key,
337 - res->owner);
338 - if (dlm_is_host_down(tmpret)) {
332 + mlog(ML_ERROR, "%s: res %.*s, Error %d send CREATE LOCK to "
333 + "node %u\n", dlm->name, create.namelen, create.name,
334 + tmpret, res->owner);
335 + if (dlm_is_host_down(tmpret))
339 336 ret = DLM_RECOVERING;
340 - mlog(0, "node %u died so returning DLM_RECOVERING "
341 - "from lock message!\n", res->owner);
342 - } else {
337 + else
343 338 ret = dlm_err_to_dlm_status(tmpret);
344 - }
345 339 }
346 340 
347 341 return ret;
··· 438 440 /* zero memory only if kernel-allocated */
439 441 lksb = kzalloc(sizeof(*lksb), GFP_NOFS);
440 442 if (!lksb) {
441 - kfree(lock);
443 + kmem_cache_free(dlm_lock_cache, lock);
442 444 return NULL;
443 445 }
444 446 kernel_allocated = 1;
··· 716 718 
717 719 if (status == DLM_RECOVERING || status == DLM_MIGRATING ||
718 720 status == DLM_FORWARD) {
719 - mlog(0, "retrying lock with migration/"
720 - "recovery/in progress\n");
721 721 msleep(100);
722 - /* no waiting for dlm_reco_thread */
723 722 if (recovery) {
724 723 if (status != DLM_RECOVERING)
725 724 goto retry_lock;
726 - 
727 - mlog(0, "%s: got RECOVERING "
728 - "for $RECOVERY lock, master "
729 - "was %u\n", dlm->name,
730 - res->owner);
731 725 /* wait to see the node go down, then
732 726 * drop down and allow the lockres to
733 727 * get cleaned up. need to remaster. */
··· 730 740 goto retry_lock;
731 741 }
732 742 }
743 + 
744 + /* Inflight taken in dlm_get_lock_resource() is dropped here */
745 + spin_lock(&res->spinlock);
746 + dlm_lockres_drop_inflight_ref(dlm, res);
747 + spin_unlock(&res->spinlock);
748 + 
749 + dlm_lockres_calc_usage(dlm, res);
750 + dlm_kick_thread(dlm, res);
733 751 
734 752 if (status != DLM_NORMAL) {
735 753 lock->lksb->flags &= ~DLM_LKSB_GET_LVB;
+91 -90
fs/ocfs2/dlm/dlmmaster.c
··· 631 631 return NULL;
632 632 }
633 633 
634 - void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,
635 - struct dlm_lock_resource *res,
636 - int new_lockres,
637 - const char *file,
638 - int line)
634 + void dlm_lockres_set_refmap_bit(struct dlm_ctxt *dlm,
635 + struct dlm_lock_resource *res, int bit)
639 636 {
640 - if (!new_lockres)
641 - assert_spin_locked(&res->spinlock);
637 + assert_spin_locked(&res->spinlock);
642 638 
643 - if (!test_bit(dlm->node_num, res->refmap)) {
644 - BUG_ON(res->inflight_locks != 0);
645 - dlm_lockres_set_refmap_bit(dlm->node_num, res);
646 - }
647 - res->inflight_locks++;
648 - mlog(0, "%s:%.*s: inflight++: now %u\n",
649 - dlm->name, res->lockname.len, res->lockname.name,
650 - res->inflight_locks);
639 + mlog(0, "res %.*s, set node %u, %ps()\n", res->lockname.len,
640 + res->lockname.name, bit, __builtin_return_address(0));
641 + 
642 + set_bit(bit, res->refmap);
651 643 }
652 644 
653 - void __dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm,
654 - struct dlm_lock_resource *res,
655 - const char *file,
656 - int line)
645 + void dlm_lockres_clear_refmap_bit(struct dlm_ctxt *dlm,
646 + struct dlm_lock_resource *res, int bit)
647 + {
648 + assert_spin_locked(&res->spinlock);
649 + 
650 + mlog(0, "res %.*s, clr node %u, %ps()\n", res->lockname.len,
651 + res->lockname.name, bit, __builtin_return_address(0));
652 + 
653 + clear_bit(bit, res->refmap);
654 + }
655 + 
656 + 
657 + void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,
658 + struct dlm_lock_resource *res)
659 + {
660 + assert_spin_locked(&res->spinlock);
661 + 
662 + res->inflight_locks++;
663 + 
664 + mlog(0, "%s: res %.*s, inflight++: now %u, %ps()\n", dlm->name,
665 + res->lockname.len, res->lockname.name, res->inflight_locks,
666 + __builtin_return_address(0));
667 + }
668 + 
669 + void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm,
670 + struct dlm_lock_resource *res)
657 671 {
658 672 assert_spin_locked(&res->spinlock);
659 673 
660 674 BUG_ON(res->inflight_locks == 0);
675 + 
661 676 res->inflight_locks--;
662 - mlog(0, "%s:%.*s: inflight--: now %u\n",
663 - dlm->name, res->lockname.len, res->lockname.name,
664 - res->inflight_locks);
665 - if (res->inflight_locks == 0)
666 - dlm_lockres_clear_refmap_bit(dlm->node_num, res);
677 + 
678 + mlog(0, "%s: res %.*s, inflight--: now %u, %ps()\n", dlm->name,
679 + res->lockname.len, res->lockname.name, res->inflight_locks,
680 + __builtin_return_address(0));
681 + 
667 682 wake_up(&res->wq);
668 683 }
669 684 
··· 712 697 unsigned int hash;
713 698 int tries = 0;
714 699 int bit, wait_on_recovery = 0;
715 - int drop_inflight_if_nonlocal = 0;
716 700 
717 701 BUG_ON(!lockid);
718 702 
··· 723 709 spin_lock(&dlm->spinlock);
724 710 tmpres = __dlm_lookup_lockres_full(dlm, lockid, namelen, hash);
725 711 if (tmpres) {
726 - int dropping_ref = 0;
727 - 
728 712 spin_unlock(&dlm->spinlock);
729 - 
730 713 spin_lock(&tmpres->spinlock);
731 - /* We wait for the other thread that is mastering the resource */
714 + /* Wait on the thread that is mastering the resource */
732 715 if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) {
733 716 __dlm_wait_on_lockres(tmpres);
734 717 BUG_ON(tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN);
735 - }
736 - 
737 - if (tmpres->owner == dlm->node_num) {
738 - BUG_ON(tmpres->state & DLM_LOCK_RES_DROPPING_REF);
739 - dlm_lockres_grab_inflight_ref(dlm, tmpres);
740 - } else if (tmpres->state & DLM_LOCK_RES_DROPPING_REF)
741 - dropping_ref = 1;
742 - spin_unlock(&tmpres->spinlock);
743 - 
744 - /* wait until done messaging the master, drop our ref to allow
745 - * the lockres to be purged, start over. */
746 - if (dropping_ref) {
747 - spin_lock(&tmpres->spinlock);
748 - __dlm_wait_on_lockres_flags(tmpres, DLM_LOCK_RES_DROPPING_REF);
749 718 spin_unlock(&tmpres->spinlock);
750 719 dlm_lockres_put(tmpres);
751 720 tmpres = NULL;
752 721 goto lookup;
753 722 }
754 723 
755 - mlog(0, "found in hash!\n");
724 + /* Wait on the resource purge to complete before continuing */
725 + if (tmpres->state & DLM_LOCK_RES_DROPPING_REF) {
726 + BUG_ON(tmpres->owner == dlm->node_num);
727 + __dlm_wait_on_lockres_flags(tmpres,
728 + DLM_LOCK_RES_DROPPING_REF);
729 + spin_unlock(&tmpres->spinlock);
730 + dlm_lockres_put(tmpres);
731 + tmpres = NULL;
732 + goto lookup;
733 + }
734 + 
735 + /* Grab inflight ref to pin the resource */
736 + dlm_lockres_grab_inflight_ref(dlm, tmpres);
737 + 
738 + spin_unlock(&tmpres->spinlock);
756 739 if (res)
757 740 dlm_lockres_put(res);
758 741 res = tmpres;
··· 840 829 * but they might own this lockres. wait on them. */
841 830 bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0);
842 831 if (bit < O2NM_MAX_NODES) {
843 - mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to "
844 - "recover before lock mastery can begin\n",
832 + mlog(0, "%s: res %.*s, At least one node (%d) "
833 + "to recover before lock mastery can begin\n",
845 834 dlm->name, namelen, (char *)lockid, bit);
846 835 wait_on_recovery = 1;
847 836 }
··· 854 843 
855 844 /* finally add the lockres to its hash bucket */
856 845 __dlm_insert_lockres(dlm, res);
857 - /* since this lockres is new it doesn't not require the spinlock */
858 - dlm_lockres_grab_inflight_ref_new(dlm, res);
859 846 
860 - /* if this node does not become the master make sure to drop
861 - * this inflight reference below */
862 - drop_inflight_if_nonlocal = 1;
847 + /* Grab inflight ref to pin the resource */
848 + spin_lock(&res->spinlock);
849 + dlm_lockres_grab_inflight_ref(dlm, res);
850 + spin_unlock(&res->spinlock);
863 851 
864 852 /* get an extra ref on the mle in case this is a BLOCK
865 853 * if so, the creator of the BLOCK may try to put the last
··· 874 864 * dlm spinlock would be detectable be a change on the mle,
875 865 * so we only need to clear out the recovery map once. */
876 866 if (dlm_is_recovery_lock(lockid, namelen)) {
877 - mlog(ML_NOTICE, "%s: recovery map is not empty, but "
878 - "must master $RECOVERY lock now\n", dlm->name);
867 + mlog(0, "%s: Recovery map is not empty, but must "
868 + "master $RECOVERY lock now\n", dlm->name);
879 869 if (!dlm_pre_master_reco_lockres(dlm, res))
880 870 wait_on_recovery = 0;
881 871 else {
··· 893 883 spin_lock(&dlm->spinlock);
894 884 bit = find_next_bit(dlm->recovery_map, O2NM_MAX_NODES, 0);
895 885 if (bit < O2NM_MAX_NODES) {
896 - mlog(ML_NOTICE, "%s:%.*s: at least one node (%d) to "
897 - "recover before lock mastery can begin\n",
886 + mlog(0, "%s: res %.*s, At least one node (%d) "
887 + "to recover before lock mastery can begin\n",
898 888 dlm->name, namelen, (char *)lockid, bit);
899 889 wait_on_recovery = 1;
900 890 } else
··· 923 913 * yet, keep going until it does. this is how the
924 914 * master will know that asserts are needed back to
925 915 * the lower nodes. */
926 - mlog(0, "%s:%.*s: requests only up to %u but master "
927 - "is %u, keep going\n", dlm->name, namelen,
916 + mlog(0, "%s: res %.*s, Requests only up to %u but "
917 + "master is %u, keep going\n", dlm->name, namelen,
928 918 lockid, nodenum, mle->master);
929 919 }
930 920 }
··· 934 924 ret = dlm_wait_for_lock_mastery(dlm, res, mle, &blocked);
935 925 if (ret < 0) {
936 926 wait_on_recovery = 1;
937 - mlog(0, "%s:%.*s: node map changed, redo the "
938 - "master request now, blocked=%d\n",
939 - dlm->name, res->lockname.len,
927 + mlog(0, "%s: res %.*s, Node map changed, redo the master "
928 + "request now, blocked=%d\n", dlm->name, res->lockname.len,
940 929 res->lockname.name, blocked);
941 930 if (++tries > 20) {
942 - mlog(ML_ERROR, "%s:%.*s: spinning on "
943 - "dlm_wait_for_lock_mastery, blocked=%d\n",
931 + mlog(ML_ERROR, "%s: res %.*s, Spinning on "
932 + "dlm_wait_for_lock_mastery, blocked = %d\n",
944 933 dlm->name, res->lockname.len,
945 934 res->lockname.name, blocked);
946 935 dlm_print_one_lock_resource(res);
··· 949 940 goto redo_request;
950 941 }
951 942 
952 - mlog(0, "lockres mastered by %u\n", res->owner);
943 + mlog(0, "%s: res %.*s, Mastered by %u\n", dlm->name, res->lockname.len,
944 + res->lockname.name, res->owner);
953 945 /* make sure we never continue without this */
954 946 BUG_ON(res->owner == O2NM_MAX_NODES);
955 947 
··· 962 952 
963 953 wake_waiters:
964 954 spin_lock(&res->spinlock);
965 - if (res->owner != dlm->node_num && drop_inflight_if_nonlocal)
966 - dlm_lockres_drop_inflight_ref(dlm, res);
967 955 res->state &= ~DLM_LOCK_RES_IN_PROGRESS;
968 956 spin_unlock(&res->spinlock);
969 957 wake_up(&res->wq);
··· 1434 1426 }
1435 1427 
1436 1428 if (res->owner == dlm->node_num) {
1437 - mlog(0, "%s:%.*s: setting bit %u in refmap\n",
1438 - dlm->name, namelen, name, request->node_idx);
1439 - dlm_lockres_set_refmap_bit(request->node_idx, res);
1429 + dlm_lockres_set_refmap_bit(dlm, res, request->node_idx);
1440 1430 spin_unlock(&res->spinlock);
1441 1431 response = DLM_MASTER_RESP_YES;
1442 1432 if (mle)
··· 1499 1493 * go back and clean the mles on any
1500 1494 * other nodes */
1501 1495 dispatch_assert = 1;
1502 - dlm_lockres_set_refmap_bit(request->node_idx, res);
1503 - mlog(0, "%s:%.*s: setting bit %u in refmap\n",
1504 - dlm->name, namelen, name,
1505 - request->node_idx);
1496 + dlm_lockres_set_refmap_bit(dlm, res,
1497 + request->node_idx);
1506 1498 } else
1507 1499 response = DLM_MASTER_RESP_NO;
1508 1500 } else {
··· 1706 1702 "lockres, set the bit in the refmap\n",
1707 1703 namelen, lockname, to);
1708 1704 spin_lock(&res->spinlock);
1709 - dlm_lockres_set_refmap_bit(to, res);
1705 + dlm_lockres_set_refmap_bit(dlm, res, to);
1710 1706 spin_unlock(&res->spinlock);
1711 1707 }
1712 1708 }
··· 2191 2187 namelen = res->lockname.len;
2192 2188 BUG_ON(namelen > O2NM_MAX_NAME_LEN);
2193 2189 
2194 - mlog(0, "%s:%.*s: sending deref to %d\n",
2195 - dlm->name, namelen, lockname, res->owner);
2196 2190 memset(&deref, 0, sizeof(deref));
2197 2191 deref.node_idx = dlm->node_num;
2198 2192 deref.namelen = namelen;
··· 2199 2197 ret = o2net_send_message(DLM_DEREF_LOCKRES_MSG, dlm->key,
2200 2198 &deref, sizeof(deref), res->owner, &r);
2201 2199 if (ret < 0)
2202 - mlog(ML_ERROR, "Error %d when sending message %u (key 0x%x) to "
2203 - "node %u\n", ret, DLM_DEREF_LOCKRES_MSG, dlm->key,
2204 - res->owner);
2200 + mlog(ML_ERROR, "%s: res %.*s, error %d send DEREF to node %u\n",
2201 + dlm->name, namelen, lockname, ret, res->owner);
2205 2202 else if (r < 0) {
2206 2203 /* BAD. other node says I did not have a ref. */
2207 - mlog(ML_ERROR,"while dropping ref on %s:%.*s "
2208 - "(master=%u) got %d.\n", dlm->name, namelen,
2209 - lockname, res->owner, r);
2204 + mlog(ML_ERROR, "%s: res %.*s, DEREF to node %u got %d\n",
2205 + dlm->name, namelen, lockname, res->owner, r);
2210 2206 dlm_print_one_lock_resource(res);
2211 2207 BUG();
2212 2208 }
··· 2260 2260 else {
2261 2261 BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF);
2262 2262 if (test_bit(node, res->refmap)) {
2263 - dlm_lockres_clear_refmap_bit(node, res);
2263 + dlm_lockres_clear_refmap_bit(dlm, res, node);
2264 2264 cleared = 1;
2265 2265 }
2266 2266 }
··· 2320 2320 BUG_ON(res->state & DLM_LOCK_RES_DROPPING_REF);
2321 2321 if (test_bit(node, res->refmap)) {
2322 2322 __dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_SETREF_INPROG);
2323 - dlm_lockres_clear_refmap_bit(node, res);
2323 + dlm_lockres_clear_refmap_bit(dlm, res, node);
2324 2324 cleared = 1;
2325 2325 }
2326 2326 spin_unlock(&res->spinlock);
··· 2802 2802 BUG_ON(!list_empty(&lock->bast_list));
2803 2803 BUG_ON(lock->ast_pending);
2804 2804 BUG_ON(lock->bast_pending);
2805 - dlm_lockres_clear_refmap_bit(lock->ml.node, res);
2805 + dlm_lockres_clear_refmap_bit(dlm, res,
2806 + lock->ml.node);
2806 2807 list_del_init(&lock->list);
2807 2808 dlm_lock_put(lock);
2808 2809 /* In a normal unlock, we would have added a
··· 2824 2823 mlog(0, "%s:%.*s: node %u had a ref to this "
2825 2824 "migrating lockres, clearing\n", dlm->name,
2826 2825 res->lockname.len, res->lockname.name, bit);
2827 - dlm_lockres_clear_refmap_bit(bit, res);
2826 + dlm_lockres_clear_refmap_bit(dlm, res, bit);
2828 2827 }
2829 2828 bit++;
2830 2829 }
··· 2917 2916 &migrate, sizeof(migrate), nodenum,
2918 2917 &status);
2919 2918 if (ret < 0) {
2920 - mlog(ML_ERROR, "Error %d when sending message %u (key "
2921 - "0x%x) to node %u\n", ret, DLM_MIGRATE_REQUEST_MSG,
2922 - dlm->key, nodenum);
2919 + mlog(ML_ERROR, "%s: res %.*s, Error %d send "
2920 + "MIGRATE_REQUEST to node %u\n", dlm->name,
2921 + migrate.namelen, migrate.name, ret, nodenum);
2923 2922 if (!dlm_is_host_down(ret)) {
2924 2923 mlog(ML_ERROR, "unhandled error=%d!\n", ret);
2925 2924 BUG();
··· 2938 2937 dlm->name, res->lockname.len, res->lockname.name,
2939 2938 nodenum);
2940 2939 spin_lock(&res->spinlock);
2941 - dlm_lockres_set_refmap_bit(nodenum, res);
2940 + dlm_lockres_set_refmap_bit(dlm, res, nodenum);
2942 2941 spin_unlock(&res->spinlock);
2943 2942 }
2944 2943 }
··· 3272 3271 * mastery reference here since old_master will briefly have
3273 3272 * a reference after the migration completes */
3274 3273 spin_lock(&res->spinlock);
3275 - dlm_lockres_set_refmap_bit(old_master, res);
3274 + dlm_lockres_set_refmap_bit(dlm, res, old_master);
3276 3275 spin_unlock(&res->spinlock);
3277 3276 
3278 3277 mlog(0, "now time to do a migrate request to other nodes\n");
+82 -82
fs/ocfs2/dlm/dlmrecovery.c
··· 362 362 }
363 363 
364 364 
365 - int dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout)
365 + void dlm_wait_for_node_death(struct dlm_ctxt *dlm, u8 node, int timeout)
366 366 {
367 - if (timeout) {
368 - mlog(ML_NOTICE, "%s: waiting %dms for notification of "
369 - "death of node %u\n", dlm->name, timeout, node);
367 + if (dlm_is_node_dead(dlm, node))
368 + return;
369 + 
370 + printk(KERN_NOTICE "o2dlm: Waiting on the death of node %u in "
371 + "domain %s\n", node, dlm->name);
372 + 
373 + if (timeout)
370 374 wait_event_timeout(dlm->dlm_reco_thread_wq,
371 - dlm_is_node_dead(dlm, node),
372 - msecs_to_jiffies(timeout));
373 - } else {
374 - mlog(ML_NOTICE, "%s: waiting indefinitely for notification "
375 - "of death of node %u\n", dlm->name, node);
375 + dlm_is_node_dead(dlm, node),
376 + msecs_to_jiffies(timeout));
377 + else
376 378 wait_event(dlm->dlm_reco_thread_wq,
377 379 dlm_is_node_dead(dlm, node));
378 - }
379 - /* for now, return 0 */
380 - return 0;
381 380 }
382 381 
383 - int dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout)
382 + void dlm_wait_for_node_recovery(struct dlm_ctxt *dlm, u8 node, int timeout)
384 383 {
385 - if (timeout) {
386 - mlog(0, "%s: waiting %dms for notification of "
387 - "recovery of node %u\n", dlm->name, timeout, node);
384 + if (dlm_is_node_recovered(dlm, node))
385 + return;
386 + 
387 + printk(KERN_NOTICE "o2dlm: Waiting on the recovery of node %u in "
388 + "domain %s\n", node, dlm->name);
389 + 
390 + if (timeout)
388 391 wait_event_timeout(dlm->dlm_reco_thread_wq,
389 - dlm_is_node_recovered(dlm, node),
390 - msecs_to_jiffies(timeout));
391 - } else {
392 - mlog(0, "%s: waiting indefinitely for notification "
393 - "of recovery of node %u\n", dlm->name, node);
392 + dlm_is_node_recovered(dlm, node),
393 + msecs_to_jiffies(timeout));
394 + else
394 395 wait_event(dlm->dlm_reco_thread_wq,
395 396 dlm_is_node_recovered(dlm, node));
396 - }
397 - /* for now, return 0 */
398 - return 0;
399 397 }
400 398 
401 399 /* callers of the top-level api calls (dlmlock/dlmunlock) should
··· 428 430 {
429 431 spin_lock(&dlm->spinlock);
430 432 BUG_ON(dlm->reco.state & DLM_RECO_STATE_ACTIVE);
433 + printk(KERN_NOTICE "o2dlm: Begin recovery on domain %s for node %u\n",
434 + dlm->name, dlm->reco.dead_node);
431 435 dlm->reco.state |= DLM_RECO_STATE_ACTIVE;
432 436 spin_unlock(&dlm->spinlock);
433 437 }
··· 440 440 BUG_ON(!(dlm->reco.state & DLM_RECO_STATE_ACTIVE));
441 441 dlm->reco.state &= ~DLM_RECO_STATE_ACTIVE;
442 442 spin_unlock(&dlm->spinlock);
443 + printk(KERN_NOTICE "o2dlm: End recovery on domain %s\n", dlm->name);
443 444 wake_up(&dlm->reco.event);
445 + }
446 + 
447 + static void dlm_print_recovery_master(struct dlm_ctxt *dlm)
448 + {
449 + printk(KERN_NOTICE "o2dlm: Node %u (%s) is the Recovery Master for the "
450 + "dead node %u in domain %s\n", dlm->reco.new_master,
451 + (dlm->node_num == dlm->reco.new_master ? "me" : "he"),
452 + dlm->reco.dead_node, dlm->name);
444 453 }
445 454 
446 455 static int dlm_do_recovery(struct dlm_ctxt *dlm)
··· 514 505 }
515 506 mlog(0, "another node will master this recovery session.\n");
516 507 }
517 - mlog(0, "dlm=%s (%d), new_master=%u, this node=%u, dead_node=%u\n",
518 - dlm->name, task_pid_nr(dlm->dlm_reco_thread_task), dlm->reco.new_master,
519 - dlm->node_num, dlm->reco.dead_node);
508 + 
509 + dlm_print_recovery_master(dlm);
520 510 
521 511 /* it is safe to start everything back up here
522 512 * because all of the dead node's lock resources
··· 526 518 return 0;
527 519 
528 520 master_here:
529 - mlog(ML_NOTICE, "(%d) Node %u is the Recovery Master for the Dead Node "
530 - "%u for Domain %s\n", task_pid_nr(dlm->dlm_reco_thread_task),
531 - dlm->node_num, dlm->reco.dead_node, dlm->name);
521 + dlm_print_recovery_master(dlm);
532 522 
533 523 status = dlm_remaster_locks(dlm, dlm->reco.dead_node);
534 524 if (status < 0) {
535 525 /* we should never hit this anymore */
536 - mlog(ML_ERROR, "error %d remastering locks for node %u, "
537 - "retrying.\n", status, dlm->reco.dead_node);
526 + mlog(ML_ERROR, "%s: Error %d remastering locks for node %u, "
527 + "retrying.\n", dlm->name, status, dlm->reco.dead_node);
538 528 /* yield a bit to allow any final network messages
539 529 * to get handled on remaining nodes */
540 530 msleep(100);
··· 573 567 BUG_ON(ndata->state != DLM_RECO_NODE_DATA_INIT);
574 568 ndata->state = DLM_RECO_NODE_DATA_REQUESTING;
575 569 
576 - mlog(0, "requesting lock info from node %u\n",
570 + mlog(0, "%s: Requesting lock info from node %u\n", dlm->name,
577 571 ndata->node_num);
578 572 
579 573 if (ndata->node_num == dlm->node_num) {
··· 646 640 spin_unlock(&dlm_reco_state_lock);
647 641 }
648 642 
649 - mlog(0, "done requesting all lock info\n");
643 + mlog(0, "%s: Done requesting all lock info\n", dlm->name);
650 644 
651 645 /* nodes should be sending reco data now
652 646 * just need to wait */
··· 808 802 
809 803 /* negative status is handled by caller */
810 804 if (ret < 0)
811 - mlog(ML_ERROR, "Error %d when sending message %u (key "
812 - "0x%x) to node %u\n", ret, DLM_LOCK_REQUEST_MSG,
813 - dlm->key, request_from);
814 - 
805 + mlog(ML_ERROR, "%s: Error %d send LOCK_REQUEST to node %u "
806 + "to recover dead node %u\n", dlm->name, ret,
807 + request_from, dead_node);
815 808 // return from here, then
816 809 // sleep until all received or error
817 810 return ret;
··· 961 956 ret = o2net_send_message(DLM_RECO_DATA_DONE_MSG, dlm->key, &done_msg,
962 957 sizeof(done_msg), send_to, &tmpret);
963 958 if (ret < 0) {
964 - mlog(ML_ERROR, "Error %d when sending message %u (key "
965 - "0x%x) to node %u\n", ret, DLM_RECO_DATA_DONE_MSG,
966 - dlm->key, send_to);
959 + mlog(ML_ERROR, "%s: Error %d send RECO_DATA_DONE to node %u "
960 + "to recover dead node %u\n", dlm->name, ret, send_to,
961 + dead_node);
967 962 if (!dlm_is_host_down(ret)) {
968 963 BUG();
969 964 }
··· 1132 1127 if (ret < 0) {
1133 1128 /* XXX: negative status is not handled.
1134 1129 * this will end up killing this node. */
1135 - mlog(ML_ERROR, "Error %d when sending message %u (key "
1136 - "0x%x) to node %u\n", ret, DLM_MIG_LOCKRES_MSG,
1137 - dlm->key, send_to);
1130 + mlog(ML_ERROR, "%s: res %.*s, Error %d send MIG_LOCKRES to "
1131 + "node %u (%s)\n", dlm->name, mres->lockname_len,
1132 + mres->lockname, ret, send_to,
1133 + (orig_flags & DLM_MRES_MIGRATION ?
1134 + "migration" : "recovery"));
1138 1135 } else {
1139 1136 /* might get an -ENOMEM back here */
1140 1137 ret = status;
··· 1774 1767 dlm->name, mres->lockname_len, mres->lockname,
1775 1768 from);
1776 1769 spin_lock(&res->spinlock);
1777 - dlm_lockres_set_refmap_bit(from, res);
1770 + dlm_lockres_set_refmap_bit(dlm, res, from);
1778 1771 spin_unlock(&res->spinlock);
1779 1772 added++;
1780 1773 break;
··· 1972 1965 mlog(0, "%s:%.*s: added lock for node %u, "
1973 1966 "setting refmap bit\n", dlm->name,
1974 1967 res->lockname.len, res->lockname.name, ml->node);
1975 - dlm_lockres_set_refmap_bit(ml->node, res);
1968 + dlm_lockres_set_refmap_bit(dlm, res, ml->node);
1976 1969 added++;
1977 1970 }
1978 1971 spin_unlock(&res->spinlock);
··· 2091 2084 
2092 2085 list_for_each_entry_safe(res, next, &dlm->reco.resources, recovering) {
2093 2086 if (res->owner == dead_node) {
2087 + mlog(0, "%s: res %.*s, Changing owner from %u to %u\n",
2088 + dlm->name, res->lockname.len, res->lockname.name,
2089 + res->owner, new_master);
2094 2090 list_del_init(&res->recovering);
2095 2091 spin_lock(&res->spinlock);
2096 2092 /* new_master has our reference from
··· 2115 2105 for (i = 0; i < DLM_HASH_BUCKETS; i++) {
2116 2106 bucket = dlm_lockres_hash(dlm, i);
2117 2107 hlist_for_each_entry(res, hash_iter, bucket, hash_node) {
2118 - if (res->state & DLM_LOCK_RES_RECOVERING) {
2119 - if (res->owner == dead_node) {
2120 - mlog(0, "(this=%u) res %.*s owner=%u "
2121 - "was not on recovering list, but "
2122 - "clearing state anyway\n",
2123 - dlm->node_num, res->lockname.len,
2124 - res->lockname.name, new_master);
2125 - } else if (res->owner == dlm->node_num) {
2126 - mlog(0, "(this=%u) res %.*s owner=%u "
2127 - "was not on recovering list, "
2128 - "owner is THIS node, clearing\n",
2129 - dlm->node_num, res->lockname.len,
2130 - res->lockname.name, new_master);
2131 - } else
2132 - continue;
2108 + if (!(res->state & DLM_LOCK_RES_RECOVERING))
2109 + continue;
2133 2110 
2134 - if (!list_empty(&res->recovering)) {
2135 - mlog(0, "%s:%.*s: lockres was "
2136 - "marked RECOVERING, owner=%u\n",
2137 - dlm->name, res->lockname.len,
2138 - res->lockname.name, res->owner);
2139 - list_del_init(&res->recovering);
2140 - dlm_lockres_put(res);
2141 - }
2142 - spin_lock(&res->spinlock);
2143 - /* new_master has our reference from
2144 - * the lock state sent during recovery */
2145 - dlm_change_lockres_owner(dlm, res, new_master);
2146 - res->state &= ~DLM_LOCK_RES_RECOVERING;
2147 - if (__dlm_lockres_has_locks(res))
2148 - __dlm_dirty_lockres(dlm, res);
2149 - spin_unlock(&res->spinlock);
2150 - wake_up(&res->wq);
2111 + if (res->owner != dead_node &&
2112 + res->owner != dlm->node_num)
2113 + continue;
2114 + 
2115 + if (!list_empty(&res->recovering)) {
2116 + list_del_init(&res->recovering);
2117 + dlm_lockres_put(res);
2118 + }
2119 + 
2120 + /* new_master has our reference from
2121 + * the lock state sent during recovery */
2122 + mlog(0, "%s: res %.*s, Changing owner from %u to %u\n",
2123 + dlm->name, res->lockname.len, res->lockname.name,
2124 + res->owner, new_master);
2125 + spin_lock(&res->spinlock);
2126 + dlm_change_lockres_owner(dlm, res, new_master);
2127 + res->state &= ~DLM_LOCK_RES_RECOVERING;
2128 + if (__dlm_lockres_has_locks(res))
2129 + __dlm_dirty_lockres(dlm, res);
2130 + spin_unlock(&res->spinlock);
2131 + wake_up(&res->wq);
2152 2132 }
2153 2133 }
2154 2134 }
··· 2252 2252 res->lockname.len, res->lockname.name, freed, dead_node);
2253 2253 __dlm_print_one_lock_resource(res);
2254 2254 }
2255 - dlm_lockres_clear_refmap_bit(dead_node, res);
2255 + dlm_lockres_clear_refmap_bit(dlm, res, dead_node);
2256 2256 } else if (test_bit(dead_node, res->refmap)) {
2257 2257 mlog(0, "%s:%.*s: dead node %u had a ref, but had "
2258 2258 "no locks and had not purged before dying\n", dlm->name,
2259 2259 res->lockname.len, res->lockname.name, dead_node);
2260 - dlm_lockres_clear_refmap_bit(dead_node, res);
2260 + dlm_lockres_clear_refmap_bit(dlm, res, dead_node);
2261 2261 }
2262 2262 
2263 2263 /* do not kick thread yet */
··· 2324 2324 dlm_revalidate_lvb(dlm, res, dead_node);
2325 2325 if (res->owner == dead_node) {
2326 2326 if (res->state & DLM_LOCK_RES_DROPPING_REF) {
2327 - mlog(ML_NOTICE, "Ignore %.*s for "
2327 + mlog(ML_NOTICE, "%s: res %.*s, Skip "
2328 2328 "recovery as it is being freed\n",
2329 - res->lockname.len,
2329 + dlm->name, res->lockname.len,
2330 2330 res->lockname.name);
2331 2331 } else
2332 2332 dlm_move_lockres_to_recovery_list(dlm,
+8 -8
fs/ocfs2/dlm/dlmthread.c
···
 {
 	int bit;
 
+	assert_spin_locked(&res->spinlock);
+
 	if (__dlm_lockres_has_locks(res))
+		return 0;
+
+	/* Locks are in the process of being created */
+	if (res->inflight_locks)
 		return 0;
 
 	if (!list_empty(&res->dirty) || res->state & DLM_LOCK_RES_DIRTY)
···
 	if (res->state & DLM_LOCK_RES_RECOVERING)
 		return 0;
 
+	/* Another node has this resource with this node as the master */
 	bit = find_next_bit(res->refmap, O2NM_MAX_NODES, 0);
 	if (bit < O2NM_MAX_NODES)
 		return 0;
 
-	/*
-	 * since the bit for dlm->node_num is not set, inflight_locks better
-	 * be zero
-	 */
-	BUG_ON(res->inflight_locks != 0);
 	return 1;
 }
···
 	/* clear our bit from the master's refmap, ignore errors */
 	ret = dlm_drop_lockres_ref(dlm, res);
 	if (ret < 0) {
-		mlog(ML_ERROR, "%s: deref %.*s failed %d\n", dlm->name,
-		     res->lockname.len, res->lockname.name, ret);
 		if (!dlm_is_host_down(ret))
 			BUG();
 	}
···
 		BUG();
 	}
 
-	__dlm_unhash_lockres(res);
+	__dlm_unhash_lockres(dlm, res);
 
 	/* lockres is not in the hash now. drop the flag and wake up
 	 * any processes waiting in dlm_get_lock_resource. */
+15 -6
fs/ocfs2/dlmglue.c
···
 mlog(0, "inode %llu take PRMODE open lock\n",
      (unsigned long long)OCFS2_I(inode)->ip_blkno);
 
-if (ocfs2_mount_local(osb))
+if (ocfs2_is_hard_readonly(osb) || ocfs2_mount_local(osb))
 	goto out;
 
 lockres = &OCFS2_I(inode)->ip_open_lockres;
···
 mlog(0, "inode %llu try to take %s open lock\n",
      (unsigned long long)OCFS2_I(inode)->ip_blkno,
      write ? "EXMODE" : "PRMODE");
+
+if (ocfs2_is_hard_readonly(osb)) {
+	if (write)
+		status = -EROFS;
+	goto out;
+}
 
 if (ocfs2_mount_local(osb))
 	goto out;
···
 if (ocfs2_is_hard_readonly(osb)) {
 	if (ex)
 		status = -EROFS;
-	goto bail;
+	goto getbh;
 }
 
 if (ocfs2_mount_local(osb))
···
 	mlog_errno(status);
 	goto bail;
 }
-
+getbh:
 if (ret_bh) {
 	status = ocfs2_assign_bh(inode, ret_bh, local_bh);
 	if (status < 0) {
···
 
 BUG_ON(!dl);
 
-if (ocfs2_is_hard_readonly(osb))
-	return -EROFS;
+if (ocfs2_is_hard_readonly(osb)) {
+	if (ex)
+		return -EROFS;
+	return 0;
+}
 
 if (ocfs2_mount_local(osb))
 	return 0;
···
 struct ocfs2_dentry_lock *dl = dentry->d_fsdata;
 struct ocfs2_super *osb = OCFS2_SB(dentry->d_sb);
 
-if (!ocfs2_mount_local(osb))
+if (!ocfs2_is_hard_readonly(osb) && !ocfs2_mount_local(osb))
 	ocfs2_cluster_unlock(osb, &dl->dl_lockres, level);
 }
+96
fs/ocfs2/extent_map.c
···
 	return ret;
 }
 
+int ocfs2_seek_data_hole_offset(struct file *file, loff_t *offset, int origin)
+{
+	struct inode *inode = file->f_mapping->host;
+	int ret;
+	unsigned int is_last = 0, is_data = 0;
+	u16 cs_bits = OCFS2_SB(inode->i_sb)->s_clustersize_bits;
+	u32 cpos, cend, clen, hole_size;
+	u64 extoff, extlen;
+	struct buffer_head *di_bh = NULL;
+	struct ocfs2_extent_rec rec;
+
+	BUG_ON(origin != SEEK_DATA && origin != SEEK_HOLE);
+
+	ret = ocfs2_inode_lock(inode, &di_bh, 0);
+	if (ret) {
+		mlog_errno(ret);
+		goto out;
+	}
+
+	down_read(&OCFS2_I(inode)->ip_alloc_sem);
+
+	if (*offset >= inode->i_size) {
+		ret = -ENXIO;
+		goto out_unlock;
+	}
+
+	if (OCFS2_I(inode)->ip_dyn_features & OCFS2_INLINE_DATA_FL) {
+		if (origin == SEEK_HOLE)
+			*offset = inode->i_size;
+		goto out_unlock;
+	}
+
+	clen = 0;
+	cpos = *offset >> cs_bits;
+	cend = ocfs2_clusters_for_bytes(inode->i_sb, inode->i_size);
+
+	while (cpos < cend && !is_last) {
+		ret = ocfs2_get_clusters_nocache(inode, di_bh, cpos, &hole_size,
+						 &rec, &is_last);
+		if (ret) {
+			mlog_errno(ret);
+			goto out_unlock;
+		}
+
+		extoff = cpos;
+		extoff <<= cs_bits;
+
+		if (rec.e_blkno == 0ULL) {
+			clen = hole_size;
+			is_data = 0;
+		} else {
+			clen = le16_to_cpu(rec.e_leaf_clusters) -
+				(cpos - le32_to_cpu(rec.e_cpos));
+			is_data = (rec.e_flags & OCFS2_EXT_UNWRITTEN) ? 0 : 1;
+		}
+
+		if ((!is_data && origin == SEEK_HOLE) ||
+		    (is_data && origin == SEEK_DATA)) {
+			if (extoff > *offset)
+				*offset = extoff;
+			goto out_unlock;
+		}
+
+		if (!is_last)
+			cpos += clen;
+	}
+
+	if (origin == SEEK_HOLE) {
+		extoff = cpos;
+		extoff <<= cs_bits;
+		extlen = clen;
+		extlen <<= cs_bits;
+
+		if ((extoff + extlen) > inode->i_size)
+			extlen = inode->i_size - extoff;
+		extoff += extlen;
+		if (extoff > *offset)
+			*offset = extoff;
+		goto out_unlock;
+	}
+
+	ret = -ENXIO;
+
+out_unlock:
+
+	brelse(di_bh);
+
+	up_read(&OCFS2_I(inode)->ip_alloc_sem);
+
+	ocfs2_inode_unlock(inode, 0);
+out:
+	if (ret && ret != -ENXIO)
+		ret = -ENXIO;
+	return ret;
+}
+
 int ocfs2_read_virt_blocks(struct inode *inode, u64 v_block, int nr,
 			   struct buffer_head *bhs[], int flags,
 			   int (*validate)(struct super_block *sb,
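The new `ocfs2_seek_data_hole_offset()` helper serves the `SEEK_DATA`/`SEEK_HOLE` whence values of `lseek(2)`. As a rough userspace sketch (not part of this patch) of the interface it implements; note that since the VFS gained a generic fallback, filesystems without hole tracking report the whole file as data with a single virtual hole at EOF:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Offset of the first data byte at or after 'from', or (off_t)-1 with
 * errno == ENXIO when 'from' is at or beyond end of file. */
static off_t first_data(int fd, off_t from)
{
	return lseek(fd, from, SEEK_DATA);
}

/* Offset of the first hole at or after 'from'; the file's last byte is
 * implicitly followed by a hole, so this is at most the file size. */
static off_t first_hole(int fd, off_t from)
{
	return lseek(fd, from, SEEK_HOLE);
}
```

On a non-sparse file the two calls reduce to "start of file" and "end of file"; on a sparse file they let tools such as `cp` skip unwritten extents without reading them.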
+2
fs/ocfs2/extent_map.h
···
 int ocfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
 		 u64 map_start, u64 map_len);
 
+int ocfs2_seek_data_hole_offset(struct file *file, loff_t *offset, int origin);
+
 int ocfs2_xattr_get_clusters(struct inode *inode, u32 v_cluster,
 			     u32 *p_cluster, u32 *num_clusters,
 			     struct ocfs2_extent_list *el,
+94 -2
fs/ocfs2/file.c
···
 if (ret < 0)
 	mlog_errno(ret);
 
+if (file->f_flags & O_SYNC)
+	handle->h_sync = 1;
+
 ocfs2_commit_trans(osb, handle);
 
 out_inode_unlock:
···
 }
 out:
 	return ret;
+}
+
+static void ocfs2_aiodio_wait(struct inode *inode)
+{
+	wait_queue_head_t *wq = ocfs2_ioend_wq(inode);
+
+	wait_event(*wq, (atomic_read(&OCFS2_I(inode)->ip_unaligned_aio) == 0));
+}
+
+static int ocfs2_is_io_unaligned(struct inode *inode, size_t count, loff_t pos)
+{
+	int blockmask = inode->i_sb->s_blocksize - 1;
+	loff_t final_size = pos + count;
+
+	if ((pos & blockmask) || (final_size & blockmask))
+		return 1;
+	return 0;
 }
 
 static int ocfs2_prepare_inode_for_refcount(struct inode *inode,
···
 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
 int full_coherency = !(osb->s_mount_opt &
 		       OCFS2_MOUNT_COHERENCY_BUFFERED);
+int unaligned_dio = 0;
 
 trace_ocfs2_file_aio_write(inode, file, file->f_path.dentry,
 			   (unsigned long long)OCFS2_I(inode)->ip_blkno,
···
 	goto out;
 }
 
+if (direct_io && !is_sync_kiocb(iocb))
+	unaligned_dio = ocfs2_is_io_unaligned(inode, iocb->ki_left,
+					      *ppos);
+
 /*
  * We can't complete the direct I/O as requested, fall back to
  * buffered I/O.
···
 
 	direct_io = 0;
 	goto relock;
+}
+
+if (unaligned_dio) {
+	/*
+	 * Wait on previous unaligned aio to complete before
+	 * proceeding.
+	 */
+	ocfs2_aiodio_wait(inode);
+
+	/* Mark the iocb as needing a decrement in ocfs2_dio_end_io */
+	atomic_inc(&OCFS2_I(inode)->ip_unaligned_aio);
+	ocfs2_iocb_set_unaligned_aio(iocb);
 }
 
 /*
···
 if ((ret == -EIOCBQUEUED) || (!ocfs2_iocb_is_rw_locked(iocb))) {
 	rw_level = -1;
 	have_alloc_sem = 0;
+	unaligned_dio = 0;
 }
+
+if (unaligned_dio)
+	atomic_dec(&OCFS2_I(inode)->ip_unaligned_aio);
 
 out:
 	if (rw_level != -1)
···
 	return ret;
 }
 
+/* Refer generic_file_llseek_unlocked() */
+static loff_t ocfs2_file_llseek(struct file *file, loff_t offset, int origin)
+{
+	struct inode *inode = file->f_mapping->host;
+	int ret = 0;
+
+	mutex_lock(&inode->i_mutex);
+
+	switch (origin) {
+	case SEEK_SET:
+		break;
+	case SEEK_END:
+		offset += inode->i_size;
+		break;
+	case SEEK_CUR:
+		if (offset == 0) {
+			offset = file->f_pos;
+			goto out;
+		}
+		offset += file->f_pos;
+		break;
+	case SEEK_DATA:
+	case SEEK_HOLE:
+		ret = ocfs2_seek_data_hole_offset(file, &offset, origin);
+		if (ret)
+			goto out;
+		break;
+	default:
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (offset < 0 && !(file->f_mode & FMODE_UNSIGNED_OFFSET))
+		ret = -EINVAL;
+	if (!ret && offset > inode->i_sb->s_maxbytes)
+		ret = -EINVAL;
+	if (ret)
+		goto out;
+
+	if (offset != file->f_pos) {
+		file->f_pos = offset;
+		file->f_version = 0;
+	}
+
+out:
+	mutex_unlock(&inode->i_mutex);
+	if (ret)
+		return ret;
+	return offset;
+}
+
 const struct inode_operations ocfs2_file_iops = {
 	.setattr	= ocfs2_setattr,
 	.getattr	= ocfs2_getattr,
···
  * ocfs2_fops_no_plocks and ocfs2_dops_no_plocks!
  */
 const struct file_operations ocfs2_fops = {
-	.llseek		= generic_file_llseek,
+	.llseek		= ocfs2_file_llseek,
 	.read		= do_sync_read,
 	.write		= do_sync_write,
 	.mmap		= ocfs2_mmap,
···
  * the cluster.
  */
 const struct file_operations ocfs2_fops_no_plocks = {
-	.llseek		= generic_file_llseek,
+	.llseek		= ocfs2_file_llseek,
 	.read		= do_sync_read,
 	.write		= do_sync_write,
 	.mmap		= ocfs2_mmap,
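The `ocfs2_is_io_unaligned()` check added above reduces to simple mask arithmetic: an I/O is "unaligned" when either its start offset or its end offset is not a multiple of the (power-of-two) block size. A standalone sketch of the same test, outside the kernel:

```c
/* 1 if [pos, pos + count) does not both start and end on a block
 * boundary; blocksize must be a power of two, as filesystem block
 * sizes are, so (blocksize - 1) is a mask of the low bits. */
static int is_io_unaligned(long long pos, long long count, long long blocksize)
{
	long long blockmask = blocksize - 1;
	long long final_size = pos + count;

	return ((pos & blockmask) || (final_size & blockmask)) ? 1 : 0;
}
```

Unaligned async direct writes are serialized against each other (via `ip_unaligned_aio` above) because two in-flight writes touching the same partial block could otherwise corrupt it.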
+1 -1
fs/ocfs2/inode.c
···
 trace_ocfs2_cleanup_delete_inode(
 	(unsigned long long)OCFS2_I(inode)->ip_blkno, sync_data);
 if (sync_data)
-	write_inode_now(inode, 1);
+	filemap_write_and_wait(inode->i_mapping);
 truncate_inode_pages(&inode->i_data, 0);
 }
+3
fs/ocfs2/inode.h
···
 /* protects extended attribute changes on this inode */
 struct rw_semaphore		ip_xattr_sem;
 
+/* Number of outstanding AIO's which are not page aligned */
+atomic_t			ip_unaligned_aio;
+
 /* These fields are protected by ip_lock */
 spinlock_t			ip_lock;
 u32				ip_open_count;
+6 -5
fs/ocfs2/ioctl.c
···
 if ((oldflags & OCFS2_IMMUTABLE_FL) || ((flags ^ oldflags) &
 	(OCFS2_APPEND_FL | OCFS2_IMMUTABLE_FL))) {
 	if (!capable(CAP_LINUX_IMMUTABLE))
-		goto bail_unlock;
+		goto bail_commit;
 }
 
 ocfs2_inode->ip_attr = flags;
···
 if (status < 0)
 	mlog_errno(status);
 
+bail_commit:
 ocfs2_commit_trans(osb, handle);
 bail_unlock:
 ocfs2_inode_unlock(inode, 1);
···
 if (!oifi) {
 	status = -ENOMEM;
 	mlog_errno(status);
-	goto bail;
+	goto out_err;
 }
 
 if (o2info_from_user(*oifi, req))
···
 	o2info_set_request_error(&oifi->ifi_req, req);
 
 kfree(oifi);
-
+out_err:
 return status;
 }
···
 if (!oiff) {
 	status = -ENOMEM;
 	mlog_errno(status);
-	goto bail;
+	goto out_err;
 }
 
 if (o2info_from_user(*oiff, req))
···
 	o2info_set_request_error(&oiff->iff_req, req);
 
 kfree(oiff);
-
+out_err:
 return status;
 }
+20 -3
fs/ocfs2/journal.c
···
 /* we need to run complete recovery for offline orphan slots */
 ocfs2_replay_map_set_state(osb, REPLAY_NEEDED);
 
-mlog(ML_NOTICE, "Recovering node %d from slot %d on device (%u,%u)\n",
-     node_num, slot_num,
-     MAJOR(osb->sb->s_dev), MINOR(osb->sb->s_dev));
+printk(KERN_NOTICE "ocfs2: Begin replay journal (node %d, slot %d) on "\
+       "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev),
+       MINOR(osb->sb->s_dev));
 
 OCFS2_I(inode)->ip_clusters = le32_to_cpu(fe->i_clusters);
···
 
 jbd2_journal_destroy(journal);
 
+printk(KERN_NOTICE "ocfs2: End replay journal (node %d, slot %d) on "\
+       "device (%u,%u)\n", node_num, slot_num, MAJOR(osb->sb->s_dev),
+       MINOR(osb->sb->s_dev));
 done:
 /* drop the lock on this nodes journal */
 if (got_lock)
···
 * ocfs2_queue_orphan_scan calls ocfs2_queue_recovery_completion for
 * every slot, queuing a recovery of the slot on the ocfs2_wq thread. This
 * is done to catch any orphans that are left over in orphan directories.
+*
+* It scans all slots, even ones that are in use. It does so to handle the
+* case described below:
+*
+* Node 1 has an inode it was using. The dentry went away due to memory
+* pressure. Node 1 closes the inode, but it's on the free list. The node
+* has the open lock.
+* Node 2 unlinks the inode. It grabs the dentry lock to notify others,
+* but node 1 has no dentry and doesn't get the message. It trylocks the
+* open lock, sees that another node has a PR, and does nothing.
+* Later node 2 runs its orphan dir. It igets the inode, trylocks the
+* open lock, sees the PR still, and does nothing.
+* Basically, we have to trigger an orphan iput on node 1. The only way
+* for this to happen is if node 1 runs node 2's orphan dir.
 *
 * ocfs2_queue_orphan_scan gets called every ORPHAN_SCAN_SCHEDULE_TIMEOUT
 * seconds. It gets an EX lock on os_lockres and checks sequence number
+3 -2
fs/ocfs2/journal.h
···
 #define OCFS2_SIMPLE_DIR_EXTEND_CREDITS (2)
 
 /* file update (nlink, etc) + directory mtime/ctime + dir entry block + quota
- * update on dir + index leaf + dx root update for free list */
+ * update on dir + index leaf + dx root update for free list +
+ * previous dirblock update in the free list */
 static inline int ocfs2_link_credits(struct super_block *sb)
 {
-	return 2*OCFS2_INODE_UPDATE_CREDITS + 3 +
+	return 2*OCFS2_INODE_UPDATE_CREDITS + 4 +
 	       ocfs2_quota_trans_credits(sb);
 }
+24 -29
fs/ocfs2/mmap.c
···
 static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,
 				struct page *page)
 {
-	int ret;
+	int ret = VM_FAULT_NOPAGE;
 	struct inode *inode = file->f_path.dentry->d_inode;
 	struct address_space *mapping = inode->i_mapping;
 	loff_t pos = page_offset(page);
···
 	void *fsdata;
 	loff_t size = i_size_read(inode);
 
-	/*
-	 * Another node might have truncated while we were waiting on
-	 * cluster locks.
-	 * We don't check size == 0 before the shift. This is borrowed
-	 * from do_generic_file_read.
-	 */
 	last_index = (size - 1) >> PAGE_CACHE_SHIFT;
-	if (unlikely(!size || page->index > last_index)) {
-		ret = -EINVAL;
-		goto out;
-	}
 
 	/*
-	 * The i_size check above doesn't catch the case where nodes
-	 * truncated and then re-extended the file. We'll re-check the
-	 * page mapping after taking the page lock inside of
-	 * ocfs2_write_begin_nolock().
+	 * There are cases that lead to the page no longer belonging to
+	 * the mapping.
+	 * 1) pagecache truncates locally due to memory pressure.
+	 * 2) pagecache truncates when another node takes an EX lock
+	 * against the inode lock. see ocfs2_data_convert_worker.
+	 *
+	 * The i_size check doesn't catch the case where nodes truncated and
+	 * then re-extended the file. We'll re-check the page mapping after
+	 * taking the page lock inside of ocfs2_write_begin_nolock().
+	 *
+	 * Let VM retry with these cases.
 	 */
-	if (!PageUptodate(page) || page->mapping != inode->i_mapping) {
-		/*
-		 * the page has been unmapped in ocfs2_data_downconvert_worker.
-		 * So return 0 here and let VFS retry.
-		 */
-		ret = 0;
+	if ((page->mapping != inode->i_mapping) ||
+	    (!PageUptodate(page)) ||
+	    (page_offset(page) >= size))
 		goto out;
-	}
 
 	/*
 	 * Call ocfs2_write_begin() and ocfs2_write_end() to take
···
 	if (ret) {
 		if (ret != -ENOSPC)
 			mlog_errno(ret);
+		if (ret == -ENOMEM)
+			ret = VM_FAULT_OOM;
+		else
+			ret = VM_FAULT_SIGBUS;
 		goto out;
 	}
 
-	ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page,
-				     fsdata);
-	if (ret < 0) {
-		mlog_errno(ret);
+	if (!locked_page) {
+		ret = VM_FAULT_NOPAGE;
 		goto out;
 	}
+	ret = ocfs2_write_end_nolock(mapping, pos, len, len, locked_page,
+				     fsdata);
 	BUG_ON(ret != len);
-	ret = 0;
+	ret = VM_FAULT_LOCKED;
 out:
 	return ret;
 }
···
 
 out:
 	ocfs2_unblock_signals(&oldset);
-	if (ret)
-		ret = VM_FAULT_SIGBUS;
 	return ret;
 }
+1 -1
fs/ocfs2/move_extents.c
···
 	 */
 	ocfs2_probe_alloc_group(inode, gd_bh, &goal_bit, len, move_max_hop,
 				new_phys_cpos);
-	if (!new_phys_cpos) {
+	if (!*new_phys_cpos) {
 		ret = -ENOSPC;
 		goto out_commit;
 	}
+49 -2
fs/ocfs2/ocfs2.h
···
 static inline void _ocfs2_set_bit(unsigned int bit, unsigned long *bitmap)
 {
-	__test_and_set_bit_le(bit, bitmap);
+	__set_bit_le(bit, bitmap);
 }
 #define ocfs2_set_bit(bit, addr) _ocfs2_set_bit((bit), (unsigned long *)(addr))
 
 static inline void _ocfs2_clear_bit(unsigned int bit, unsigned long *bitmap)
 {
-	__test_and_clear_bit_le(bit, bitmap);
+	__clear_bit_le(bit, bitmap);
 }
 #define ocfs2_clear_bit(bit, addr) _ocfs2_clear_bit((bit), (unsigned long *)(addr))
 
 #define ocfs2_test_bit test_bit_le
 #define ocfs2_find_next_zero_bit find_next_zero_bit_le
 #define ocfs2_find_next_bit find_next_bit_le
+
+static inline void *correct_addr_and_bit_unaligned(int *bit, void *addr)
+{
+#if BITS_PER_LONG == 64
+	*bit += ((unsigned long) addr & 7UL) << 3;
+	addr = (void *) ((unsigned long) addr & ~7UL);
+#elif BITS_PER_LONG == 32
+	*bit += ((unsigned long) addr & 3UL) << 3;
+	addr = (void *) ((unsigned long) addr & ~3UL);
+#else
+#error "how many bits you are?!"
+#endif
+	return addr;
+}
+
+static inline void ocfs2_set_bit_unaligned(int bit, void *bitmap)
+{
+	bitmap = correct_addr_and_bit_unaligned(&bit, bitmap);
+	ocfs2_set_bit(bit, bitmap);
+}
+
+static inline void ocfs2_clear_bit_unaligned(int bit, void *bitmap)
+{
+	bitmap = correct_addr_and_bit_unaligned(&bit, bitmap);
+	ocfs2_clear_bit(bit, bitmap);
+}
+
+static inline int ocfs2_test_bit_unaligned(int bit, void *bitmap)
+{
+	bitmap = correct_addr_and_bit_unaligned(&bit, bitmap);
+	return ocfs2_test_bit(bit, bitmap);
+}
+
+static inline int ocfs2_find_next_zero_bit_unaligned(void *bitmap, int max,
+						     int start)
+{
+	int fix = 0, ret, tmpmax;
+	bitmap = correct_addr_and_bit_unaligned(&fix, bitmap);
+	tmpmax = max + fix;
+	start += fix;
+
+	ret = ocfs2_find_next_zero_bit(bitmap, tmpmax, start) - fix;
+	if (ret > max)
+		return max;
+	return ret;
+}
+
 #endif /* OCFS2_H */
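The `*_unaligned` helpers work by rounding an arbitrary bitmap pointer down to the previous `unsigned long` boundary and advancing the bit index by eight bits per skipped byte, so the word-sized bit operation always touches aligned memory. A userspace sketch of the same trick (an assumption here: a little-endian host, where native `unsigned long` bit order matches the `_le` bit order used above):

```c
#include <limits.h>
#include <stdint.h>

#define ULONG_BITS (sizeof(unsigned long) * CHAR_BIT)

/* Round 'addr' down to an unsigned-long boundary and bump '*bit' by 8
 * for every byte skipped, mirroring correct_addr_and_bit_unaligned(). */
static void *correct_addr_and_bit(int *bit, void *addr)
{
	uintptr_t a = (uintptr_t)addr;
	uintptr_t mask = sizeof(unsigned long) - 1;

	*bit += (int)((a & mask) << 3);
	return (void *)(a & ~mask);
}

static void set_bit_unaligned(int bit, void *bitmap)
{
	unsigned long *p = correct_addr_and_bit(&bit, bitmap);

	p[bit / ULONG_BITS] |= 1UL << (bit % ULONG_BITS);
}

static int test_bit_unaligned(int bit, const void *bitmap)
{
	const unsigned long *p = correct_addr_and_bit(&bit, (void *)bitmap);

	return (int)((p[bit / ULONG_BITS] >> (bit % ULONG_BITS)) & 1);
}
```

This matters below in quota_local.c because `dqc_bitmap` lives at an odd offset inside an on-disk structure, so casting it straight to `unsigned long *` would fault on strict-alignment architectures.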
+14 -9
fs/ocfs2/quota_local.c
···
 int status = 0;
 struct ocfs2_quota_recovery *rec;
 
-mlog(ML_NOTICE, "Beginning quota recovery in slot %u\n", slot_num);
+printk(KERN_NOTICE "ocfs2: Beginning quota recovery on device (%s) for "
+       "slot %u\n", osb->dev_str, slot_num);
+
 rec = ocfs2_alloc_quota_recovery();
 if (!rec)
 	return ERR_PTR(-ENOMEM);
···
 	goto out_commit;
 }
 lock_buffer(qbh);
-WARN_ON(!ocfs2_test_bit(bit, dchunk->dqc_bitmap));
-ocfs2_clear_bit(bit, dchunk->dqc_bitmap);
+WARN_ON(!ocfs2_test_bit_unaligned(bit, dchunk->dqc_bitmap));
+ocfs2_clear_bit_unaligned(bit, dchunk->dqc_bitmap);
 le32_add_cpu(&dchunk->dqc_free, 1);
 unlock_buffer(qbh);
 ocfs2_journal_dirty(handle, qbh);
···
 struct inode *lqinode;
 unsigned int flags;
 
-mlog(ML_NOTICE, "Finishing quota recovery in slot %u\n", slot_num);
+printk(KERN_NOTICE "ocfs2: Finishing quota recovery on device (%s) for "
+       "slot %u\n", osb->dev_str, slot_num);
+
 mutex_lock(&sb_dqopt(sb)->dqonoff_mutex);
 for (type = 0; type < MAXQUOTAS; type++) {
 	if (list_empty(&(rec->r_list[type])))
···
 /* Someone else is holding the lock? Then he must be
  * doing the recovery. Just skip the file... */
 if (status == -EAGAIN) {
-	mlog(ML_NOTICE, "skipping quota recovery for slot %d "
-	     "because quota file is locked.\n", slot_num);
+	printk(KERN_NOTICE "ocfs2: Skipping quota recovery on "
+	       "device (%s) for slot %d because quota file is "
+	       "locked.\n", osb->dev_str, slot_num);
 	status = 0;
 	goto out_put;
 } else if (status < 0) {
···
 	* ol_quota_entries_per_block(sb);
 }
 
-found = ocfs2_find_next_zero_bit(dchunk->dqc_bitmap, len, 0);
+found = ocfs2_find_next_zero_bit_unaligned(dchunk->dqc_bitmap, len, 0);
 /* We failed? */
 if (found == len) {
 	mlog(ML_ERROR, "Did not find empty entry in chunk %d with %u"
···
 struct ocfs2_local_disk_chunk *dchunk;
 
 dchunk = (struct ocfs2_local_disk_chunk *)bh->b_data;
-ocfs2_set_bit(*offset, dchunk->dqc_bitmap);
+ocfs2_set_bit_unaligned(*offset, dchunk->dqc_bitmap);
 le32_add_cpu(&dchunk->dqc_free, -1);
 }
···
 	(od->dq_chunk->qc_headerbh->b_data);
 /* Mark structure as freed */
 lock_buffer(od->dq_chunk->qc_headerbh);
-ocfs2_clear_bit(offset, dchunk->dqc_bitmap);
+ocfs2_clear_bit_unaligned(offset, dchunk->dqc_bitmap);
 le32_add_cpu(&dchunk->dqc_free, 1);
 unlock_buffer(od->dq_chunk->qc_headerbh);
 ocfs2_journal_dirty(handle, od->dq_chunk->qc_headerbh);
+2 -2
fs/ocfs2/slot_map.c
···
 		goto bail;
 	}
 } else
-	mlog(ML_NOTICE, "slot %d is already allocated to this node!\n",
-	     slot);
+	printk(KERN_INFO "ocfs2: Slot %d on device (%s) was already "
+	       "allocated to this node!\n", slot, osb->dev_str);
 
 ocfs2_set_slot(si, slot, osb->node_num);
 osb->slot_num = slot;
+63 -8
fs/ocfs2/stack_o2cb.c
···
 #include "cluster/masklog.h"
 #include "cluster/nodemanager.h"
 #include "cluster/heartbeat.h"
+#include "cluster/tcp.h"
 
 #include "stackglue.h"
···
 }
 
 /*
+ * Check if this node is heartbeating and is connected to all other
+ * heartbeating nodes.
+ */
+static int o2cb_cluster_check(void)
+{
+	u8 node_num;
+	int i;
+	unsigned long hbmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
+	unsigned long netmap[BITS_TO_LONGS(O2NM_MAX_NODES)];
+
+	node_num = o2nm_this_node();
+	if (node_num == O2NM_MAX_NODES) {
+		printk(KERN_ERR "o2cb: This node has not been configured.\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * o2dlm expects o2net sockets to be created. If not, then
+	 * dlm_join_domain() fails with a stack of errors which are both cryptic
+	 * and incomplete. The idea here is to detect upfront whether we have
+	 * managed to connect to all nodes or not. If not, then list the nodes
+	 * to allow the user to check the configuration (incorrect IP, firewall,
+	 * etc.) Yes, this is racy. But it's not the end of the world.
+	 */
+#define O2CB_MAP_STABILIZE_COUNT 60
+	for (i = 0; i < O2CB_MAP_STABILIZE_COUNT; ++i) {
+		o2hb_fill_node_map(hbmap, sizeof(hbmap));
+		if (!test_bit(node_num, hbmap)) {
+			printk(KERN_ERR "o2cb: %s heartbeat has not been "
+			       "started.\n", (o2hb_global_heartbeat_active() ?
+					      "Global" : "Local"));
+			return -EINVAL;
+		}
+		o2net_fill_node_map(netmap, sizeof(netmap));
+		/* Force set the current node to allow easy compare */
+		set_bit(node_num, netmap);
+		if (!memcmp(hbmap, netmap, sizeof(hbmap)))
+			return 0;
+		if (i < O2CB_MAP_STABILIZE_COUNT)
+			msleep(1000);
+	}
+
+	printk(KERN_ERR "o2cb: This node could not connect to nodes:");
+	i = -1;
+	while ((i = find_next_bit(hbmap, O2NM_MAX_NODES,
+				  i + 1)) < O2NM_MAX_NODES) {
+		if (!test_bit(i, netmap))
+			printk(" %u", i);
+	}
+	printk(".\n");
+
+	return -ENOTCONN;
+}
+
+/*
  * Called from the dlm when it's about to evict a node. This is how the
  * classic stack signals node death.
  */
···
 {
 	struct ocfs2_cluster_connection *conn = data;
 
-	mlog(ML_NOTICE, "o2dlm has evicted node %d from group %.*s\n",
-	     node_num, conn->cc_namelen, conn->cc_name);
+	printk(KERN_NOTICE "o2cb: o2dlm has evicted node %d from domain %.*s\n",
+	       node_num, conn->cc_namelen, conn->cc_name);
 
 	conn->cc_recovery_handler(node_num, conn->cc_recovery_data);
 }
···
 BUG_ON(conn == NULL);
 BUG_ON(conn->cc_proto == NULL);
 
-/* for now we only have one cluster/node, make sure we see it
- * in the heartbeat universe */
-if (!o2hb_check_local_node_heartbeating()) {
-	if (o2hb_global_heartbeat_active())
-		mlog(ML_ERROR, "Global heartbeat not started\n");
-	rc = -EINVAL;
+/* Ensure cluster stack is up and all nodes are connected */
+rc = o2cb_cluster_check();
+if (rc) {
+	printk(KERN_ERR "o2cb: Cluster check failed. Fix errors "
+	       "before retrying.\n");
 	goto out;
 }
+16 -9
fs/ocfs2/super.c
···
 #include "ocfs1_fs_compat.h"
 
 #include "alloc.h"
+#include "aops.h"
 #include "blockcheck.h"
 #include "dlmglue.h"
 #include "export.h"
···
 
 ocfs2_set_ro_flag(osb, 1);
 
-printk(KERN_NOTICE "Readonly device detected. No cluster "
-       "services will be utilized for this mount. Recovery "
-       "will be skipped.\n");
+printk(KERN_NOTICE "ocfs2: Readonly device (%s) detected. "
+       "Cluster services will not be used for this mount. "
+       "Recovery will be skipped.\n", osb->dev_str);
 }
 
 if (!ocfs2_is_hard_readonly(osb)) {
···
 	return 0;
 }
 
+wait_queue_head_t ocfs2__ioend_wq[OCFS2_IOEND_WQ_HASH_SZ];
+
 static int __init ocfs2_init(void)
 {
-	int status;
+	int status, i;
 
 	ocfs2_print_version();
+
+	for (i = 0; i < OCFS2_IOEND_WQ_HASH_SZ; i++)
+		init_waitqueue_head(&ocfs2__ioend_wq[i]);
 
 	status = init_ocfs2_uptodate_cache();
 	if (status < 0) {
···
 ocfs2_extent_map_init(&oi->vfs_inode);
 INIT_LIST_HEAD(&oi->ip_io_markers);
 oi->ip_dir_start_lookup = 0;
-
+atomic_set(&oi->ip_unaligned_aio, 0);
 init_rwsem(&oi->ip_alloc_sem);
 init_rwsem(&oi->ip_xattr_sem);
 mutex_init(&oi->ip_io_mutex);
···
 /*
  * If we failed before we got a uuid_str yet, we can't stop
  * heartbeat. Otherwise, do it.
  */
-if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str)
+if (!mnt_err && !ocfs2_mount_local(osb) && osb->uuid_str &&
+    !ocfs2_is_hard_readonly(osb))
 	hangup_needed = 1;
 
 if (osb->cconn)
···
 	mlog_errno(status);
 	goto bail;
 }
-cleancache_init_shared_fs((char *)&uuid_net_key, sb);
+cleancache_init_shared_fs((char *)&di->id2.i_super.s_uuid, sb);
 
 bail:
 	return status;
···
 		goto finally;
 	}
 } else {
-	mlog(ML_NOTICE, "File system was not unmounted cleanly, "
-	     "recovering volume.\n");
+	printk(KERN_NOTICE "ocfs2: File system on device (%s) was not "
+	       "unmounted cleanly, recovering it.\n", osb->dev_str);
 }
 
 local = ocfs2_mount_local(osb);
+6 -4
fs/ocfs2/xattr.c
···
 }
 
 ret = ocfs2_xattr_value_truncate(inode, vb, 0, &ctxt);
-if (ret < 0) {
-	mlog_errno(ret);
-	break;
-}
 
 ocfs2_commit_trans(osb, ctxt.handle);
 if (ctxt.meta_ac) {
 	ocfs2_free_alloc_context(ctxt.meta_ac);
 	ctxt.meta_ac = NULL;
 }
+
+if (ret < 0) {
+	mlog_errno(ret);
+	break;
+}
+
 }
 
 if (ctxt.meta_ac)
+8 -5
fs/pstore/platform.c
··· 167 167 } 168 168 169 169 psinfo = psi; 170 + mutex_init(&psinfo->read_mutex); 170 171 spin_unlock(&pstore_lock); 171 172 172 173 if (owner && !try_module_get(owner)) { ··· 196 195 void pstore_get_records(int quiet) 197 196 { 198 197 struct pstore_info *psi = psinfo; 198 + char *buf = NULL; 199 199 ssize_t size; 200 200 u64 id; 201 201 enum pstore_type_id type; 202 202 struct timespec time; 203 203 int failed = 0, rc; 204 - unsigned long flags; 205 204 206 205 if (!psi) 207 206 return; 208 207 209 - spin_lock_irqsave(&psinfo->buf_lock, flags); 208 + mutex_lock(&psi->read_mutex); 210 209 rc = psi->open(psi); 211 210 if (rc) 212 211 goto out; 213 212 214 - while ((size = psi->read(&id, &type, &time, psi)) > 0) { 215 - rc = pstore_mkfile(type, psi->name, id, psi->buf, (size_t)size, 213 + while ((size = psi->read(&id, &type, &time, &buf, psi)) > 0) { 214 + rc = pstore_mkfile(type, psi->name, id, buf, (size_t)size, 216 215 time, psi); 216 + kfree(buf); 217 + buf = NULL; 217 218 if (rc && (rc != -EEXIST || !quiet)) 218 219 failed++; 219 220 } 220 221 psi->close(psi); 221 222 out: 222 - spin_unlock_irqrestore(&psinfo->buf_lock, flags); 223 + mutex_unlock(&psi->read_mutex); 223 224 224 225 if (failed) 225 226 printk(KERN_WARNING "pstore: failed to load %d record(s) from '%s'\n",
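The pstore hunk above changes the record-enumeration contract: instead of reading into the shared `psi->buf` under `buf_lock`, the loop is serialized by a mutex and the backend's `read()` allocates a fresh buffer per record, which the caller frees after `pstore_mkfile()`. A userspace sketch of that ownership-passing calling convention (the struct and backend here are simplified stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct fake_pstore;

/* Backend read(): allocates *buf for one record and returns its size,
 * or 0 when there are no more records (mirrors the new char **buf
 * parameter in the hunk above). */
typedef long (*read_fn)(struct fake_pstore *psi, char **buf);

struct fake_pstore {
	int cursor;       /* backend iteration state */
	int nrecords;
	read_fn read;
};

static long fake_read(struct fake_pstore *psi, char **buf)
{
	static const char msg[] = "oops record";

	if (psi->cursor >= psi->nrecords)
		return 0;
	psi->cursor++;
	*buf = malloc(sizeof(msg));   /* backend allocates... */
	strcpy(*buf, msg);
	return (long)(sizeof(msg) - 1);
}

/* Caller loop shaped like pstore_get_records(): free each buffer after
 * use and reset it to NULL, exactly as the hunk does with kfree(). */
static int get_records(struct fake_pstore *psi)
{
	char *buf = NULL;
	long size;
	int n = 0;

	while ((size = psi->read(psi, &buf)) > 0) {
		n++;          /* pstore_mkfile() would consume buf here */
		free(buf);    /* ...caller frees */
		buf = NULL;
	}
	return n;
}
```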
+2
include/drm/drm_mode.h
··· 235 235 #define DRM_MODE_FB_DIRTY_ANNOTATE_FILL 0x02 236 236 #define DRM_MODE_FB_DIRTY_FLAGS 0x03 237 237 238 + #define DRM_MODE_FB_DIRTY_MAX_CLIPS 256 239 + 238 240 /* 239 241 * Mark a region of a framebuffer as dirty. 240 242 *
+4 -5
include/drm/exynos_drm.h
··· 32 32 /** 33 33 * User-desired buffer creation information structure. 34 34 * 35 - * @size: requested size for the object. 35 + * @size: user-desired memory allocation size. 36 36 * - this size value would be page-aligned internally. 37 37 * @flags: user request for setting memory type or cache attributes. 38 - * @handle: returned handle for the object. 39 - * @pad: just padding to be 64-bit aligned. 38 + * @handle: returned handle of the created gem object. 39 + * - this handle will be set by the gem module on the kernel side. 40 40 */ 41 41 struct drm_exynos_gem_create { 42 - unsigned int size; 42 + uint64_t size; 43 43 unsigned int flags; 44 44 unsigned int handle; 45 - unsigned int pad; 46 45 }; 47 46 48 47 /**
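The hunk above widens the requested size to 64 bits and drops the explicit pad member, so the struct layout itself changes. A quick layout check (the struct below is a local copy of the one in the hunk, not the installed uapi header; offsets assume a common LP64 ABI with 8-byte `uint64_t` alignment):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Local copy of the struct from the hunk above: size is now a
 * 64-bit value and the explicit pad member is gone. */
struct drm_exynos_gem_create_sketch {
	uint64_t size;        /* user-desired size, page-aligned by the kernel */
	unsigned int flags;   /* memory type / cache attribute request */
	unsigned int handle;  /* filled in by the kernel on success */
};
```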
+4
include/drm/radeon_drm.h
··· 874 874 875 875 #define RADEON_CHUNK_ID_RELOCS 0x01 876 876 #define RADEON_CHUNK_ID_IB 0x02 877 + #define RADEON_CHUNK_ID_FLAGS 0x03 878 + 879 + /* The first dword of RADEON_CHUNK_ID_FLAGS is a uint32 of these flags: */ 880 + #define RADEON_CS_KEEP_TILING_FLAGS 0x01 877 881 878 882 struct drm_radeon_cs_chunk { 879 883 uint32_t chunk_id;
+7 -1
include/linux/ceph/osd_client.h
··· 10 10 #include "osdmap.h" 11 11 #include "messenger.h" 12 12 13 + /* 14 + * Maximum object name size 15 + * (must be at least as big as RBD_MAX_MD_NAME_LEN -- currently 100) 16 + */ 17 + #define MAX_OBJ_NAME_SIZE 100 18 + 13 19 struct ceph_msg; 14 20 struct ceph_snap_context; 15 21 struct ceph_osd_request; ··· 81 75 struct inode *r_inode; /* for use by callbacks */ 82 76 void *r_priv; /* ditto */ 83 77 84 - char r_oid[40]; /* object name */ 78 + char r_oid[MAX_OBJ_NAME_SIZE]; /* object name */ 85 79 int r_oid_len; 86 80 unsigned long r_stamp; /* send OR check time */ 87 81
+2 -1
include/linux/clocksource.h
··· 156 156 * @mult: cycle to nanosecond multiplier 157 157 * @shift: cycle to nanosecond divisor (power of two) 158 158 * @max_idle_ns: max idle time permitted by the clocksource (nsecs) 159 + * @maxadj: maximum adjustment value to mult (~11%) 159 160 * @flags: flags describing special properties 160 161 * @archdata: arch-specific data 161 162 * @suspend: suspend function for the clocksource, if necessary ··· 173 172 u32 mult; 174 173 u32 shift; 175 174 u64 max_idle_ns; 176 - 175 + u32 maxadj; 177 176 #ifdef CONFIG_ARCH_CLOCKSOURCE_DATA 178 177 struct arch_clocksource_data archdata; 179 178 #endif
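The new `maxadj` field caps how far timekeeping is allowed to steer `mult`; the kernel-doc line above pegs it at roughly 11% of `mult`. A hedged sketch of that bound (the helper name is illustrative; in the kernel the value is derived in `clocksource_max_adjustment()`):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative: ~11% bound on mult adjustments, per the @maxadj
 * kernel-doc above. Computed in 64 bits to avoid overflow for
 * large mult values. */
static uint32_t max_adjustment(uint32_t mult)
{
	return (uint32_t)((uint64_t)mult * 11 / 100);
}
```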
+1 -1
include/linux/device.h
··· 69 69 * @resume: Called to bring a device on this bus out of sleep mode. 70 70 * @pm: Power management operations of this bus, callback the specific 71 71 * device driver's pm-ops. 72 - * @iommu_ops IOMMU specific operations for this bus, used to attach IOMMU 72 + * @iommu_ops: IOMMU specific operations for this bus, used to attach IOMMU 73 73 * driver implementations to a bus and allow the driver to do 74 74 * bus-specific setup 75 75 * @p: The private data of the driver core, only the driver core can
-3
include/linux/i2c.h
··· 432 432 /* Internal numbers to terminate lists */ 433 433 #define I2C_CLIENT_END 0xfffeU 434 434 435 - /* The numbers to use to set I2C bus address */ 436 - #define ANY_I2C_BUS 0xffff 437 - 438 435 /* Construct an I2C_CLIENT_END-terminated array of i2c addresses */ 439 436 #define I2C_ADDRS(addr, addrs...) \ 440 437 ((const unsigned short []){ addr, ## addrs, I2C_CLIENT_END })
-1
include/linux/init_task.h
··· 184 184 [PIDTYPE_SID] = INIT_PID_LINK(PIDTYPE_SID), \ 185 185 }, \ 186 186 .thread_group = LIST_HEAD_INIT(tsk.thread_group), \ 187 - .dirties = INIT_PROP_LOCAL_SINGLE(dirties), \ 188 187 INIT_IDS \ 189 188 INIT_PERF_EVENTS(tsk) \ 190 189 INIT_TRACE_IRQFLAGS \
+2 -1
include/linux/mfd/tps65910.h
··· 243 243 244 244 245 245 /*Registers VDD1, VDD2 voltage values definitions */ 246 - #define VDD1_2_NUM_VOLTS 73 246 + #define VDD1_2_NUM_VOLT_FINE 73 247 + #define VDD1_2_NUM_VOLT_COARSE 3 247 248 #define VDD1_2_MIN_VOLT 6000 248 249 #define VDD1_2_OFFSET 125 249 250
+2
include/linux/netdevice.h
··· 2552 2552 extern void *dev_seq_start(struct seq_file *seq, loff_t *pos); 2553 2553 extern void *dev_seq_next(struct seq_file *seq, void *v, loff_t *pos); 2554 2554 extern void dev_seq_stop(struct seq_file *seq, void *v); 2555 + extern int dev_seq_open_ops(struct inode *inode, struct file *file, 2556 + const struct seq_operations *ops); 2555 2557 #endif 2556 2558 2557 2559 extern int netdev_class_create_file(struct class_attribute *class_attr);
+3
include/linux/nfs_fs.h
··· 410 410 extern const struct inode_operations nfs3_file_inode_operations; 411 411 #endif /* CONFIG_NFS_V3 */ 412 412 extern const struct file_operations nfs_file_operations; 413 + #ifdef CONFIG_NFS_V4 414 + extern const struct file_operations nfs4_file_operations; 415 + #endif /* CONFIG_NFS_V4 */ 413 416 extern const struct address_space_operations nfs_file_aops; 414 417 extern const struct address_space_operations nfs_dir_aops; 415 418
+1
include/linux/nfs_xdr.h
··· 1192 1192 const struct dentry_operations *dentry_ops; 1193 1193 const struct inode_operations *dir_inode_ops; 1194 1194 const struct inode_operations *file_inode_ops; 1195 + const struct file_operations *file_ops; 1195 1196 1196 1197 int (*getroot) (struct nfs_server *, struct nfs_fh *, 1197 1198 struct nfs_fsinfo *);
+3 -3
include/linux/pci-ats.h
··· 12 12 unsigned int is_enabled:1; /* Enable bit is set */ 13 13 }; 14 14 15 - #ifdef CONFIG_PCI_IOV 15 + #ifdef CONFIG_PCI_ATS 16 16 17 17 extern int pci_enable_ats(struct pci_dev *dev, int ps); 18 18 extern void pci_disable_ats(struct pci_dev *dev); ··· 29 29 return dev->ats && dev->ats->is_enabled; 30 30 } 31 31 32 - #else /* CONFIG_PCI_IOV */ 32 + #else /* CONFIG_PCI_ATS */ 33 33 34 34 static inline int pci_enable_ats(struct pci_dev *dev, int ps) 35 35 { ··· 50 50 return 0; 51 51 } 52 52 53 - #endif /* CONFIG_PCI_IOV */ 53 + #endif /* CONFIG_PCI_ATS */ 54 54 55 55 #ifdef CONFIG_PCI_PRI 56 56
+1 -1
include/linux/pci.h
··· 338 338 struct list_head msi_list; 339 339 #endif 340 340 struct pci_vpd *vpd; 341 - #ifdef CONFIG_PCI_IOV 341 + #ifdef CONFIG_PCI_ATS 342 342 union { 343 343 struct pci_sriov *sriov; /* SR-IOV capability related */ 344 344 struct pci_dev *physfn; /* the PF this VF is associated with */
+127 -88
include/linux/pm.h
··· 54 54 /** 55 55 * struct dev_pm_ops - device PM callbacks 56 56 * 57 - * Several driver power state transitions are externally visible, affecting 57 + * Several device power state transitions are externally visible, affecting 58 58 * the state of pending I/O queues and (for drivers that touch hardware) 59 59 * interrupts, wakeups, DMA, and other hardware state. There may also be 60 - * internal transitions to various low power modes, which are transparent 60 + * internal transitions to various low-power modes which are transparent 61 61 * to the rest of the driver stack (such as a driver that's ON gating off 62 62 * clocks which are not in active use). 63 63 * 64 - * The externally visible transitions are handled with the help of the following 65 - * callbacks included in this structure: 64 + * The externally visible transitions are handled with the help of callbacks 65 + * included in this structure in such a way that two levels of callbacks are 66 + * involved. First, the PM core executes callbacks provided by PM domains, 67 + * device types, classes and bus types. They are the subsystem-level callbacks 68 + * supposed to execute callbacks provided by device drivers, although they may 69 + * choose not to do that. If the driver callbacks are executed, they have to 70 + * collaborate with the subsystem-level callbacks to achieve the goals 71 + * appropriate for the given system transition, given transition phase and the 72 + * subsystem the device belongs to. 66 73 * 67 - * @prepare: Prepare the device for the upcoming transition, but do NOT change 68 - * its hardware state. Prevent new children of the device from being 69 - * registered after @prepare() returns (the driver's subsystem and 70 - * generally the rest of the kernel is supposed to prevent new calls to the 71 - * probe method from being made too once @prepare() has succeeded). If 72 - * @prepare() detects a situation it cannot handle (e.g. 
registration of a 73 - * child already in progress), it may return -EAGAIN, so that the PM core 74 - * can execute it once again (e.g. after the new child has been registered) 75 - * to recover from the race condition. This method is executed for all 76 - * kinds of suspend transitions and is followed by one of the suspend 77 - * callbacks: @suspend(), @freeze(), or @poweroff(). 78 - * The PM core executes @prepare() for all devices before starting to 79 - * execute suspend callbacks for any of them, so drivers may assume all of 80 - * the other devices to be present and functional while @prepare() is being 81 - * executed. In particular, it is safe to make GFP_KERNEL memory 82 - * allocations from within @prepare(). However, drivers may NOT assume 83 - * anything about the availability of the user space at that time and it 84 - * is not correct to request firmware from within @prepare() (it's too 85 - * late to do that). [To work around this limitation, drivers may 86 - * register suspend and hibernation notifiers that are executed before the 87 - * freezing of tasks.] 74 + * @prepare: The principal role of this callback is to prevent new children of 75 + * the device from being registered after it has returned (the driver's 76 + * subsystem and generally the rest of the kernel is supposed to prevent 77 + * new calls to the probe method from being made too once @prepare() has 78 + * succeeded). If @prepare() detects a situation it cannot handle (e.g. 79 + * registration of a child already in progress), it may return -EAGAIN, so 80 + * that the PM core can execute it once again (e.g. after a new child has 81 + * been registered) to recover from the race condition. 82 + * This method is executed for all kinds of suspend transitions and is 83 + * followed by one of the suspend callbacks: @suspend(), @freeze(), or 84 + * @poweroff(). 
The PM core executes subsystem-level @prepare() for all 85 + * devices before starting to invoke suspend callbacks for any of them, so 86 + * generally devices may be assumed to be functional or to respond to 87 + * runtime resume requests while @prepare() is being executed. However, 88 + * device drivers may NOT assume anything about the availability of user 89 + * space at that time and it is NOT valid to request firmware from within 90 + * @prepare() (it's too late to do that). It also is NOT valid to allocate 91 + * substantial amounts of memory from @prepare() in the GFP_KERNEL mode. 92 + * [To work around these limitations, drivers may register suspend and 93 + * hibernation notifiers to be executed before the freezing of tasks.] 88 94 * 89 95 * @complete: Undo the changes made by @prepare(). This method is executed for 90 96 * all kinds of resume transitions, following one of the resume callbacks: 91 97 * @resume(), @thaw(), @restore(). Also called if the state transition 92 - * fails before the driver's suspend callback (@suspend(), @freeze(), 93 - * @poweroff()) can be executed (e.g. if the suspend callback fails for one 98 + * fails before the driver's suspend callback: @suspend(), @freeze() or 99 + * @poweroff(), can be executed (e.g. if the suspend callback fails for one 94 100 * of the other devices that the PM core has unsuccessfully attempted to 95 101 * suspend earlier). 96 - * The PM core executes @complete() after it has executed the appropriate 97 - * resume callback for all devices. 102 + * The PM core executes subsystem-level @complete() after it has executed 103 + * the appropriate resume callbacks for all devices. 98 104 * 99 105 * @suspend: Executed before putting the system into a sleep state in which the 100 - * contents of main memory are preserved. Quiesce the device, put it into 101 - * a low power state appropriate for the upcoming system state (such as 102 - * PCI_D3hot), and enable wakeup events as appropriate. 
106 + * contents of main memory are preserved. The exact action to perform 107 + * depends on the device's subsystem (PM domain, device type, class or bus 108 + * type), but generally the device must be quiescent after subsystem-level 109 + * @suspend() has returned, so that it doesn't do any I/O or DMA. 110 + * Subsystem-level @suspend() is executed for all devices after invoking 111 + * subsystem-level @prepare() for all of them. 103 112 * 104 113 * @resume: Executed after waking the system up from a sleep state in which the 105 - * contents of main memory were preserved. Put the device into the 106 - * appropriate state, according to the information saved in memory by the 107 - * preceding @suspend(). The driver starts working again, responding to 108 - * hardware events and software requests. The hardware may have gone 109 - * through a power-off reset, or it may have maintained state from the 110 - * previous suspend() which the driver may rely on while resuming. On most 111 - * platforms, there are no restrictions on availability of resources like 112 - * clocks during @resume(). 114 + * contents of main memory were preserved. The exact action to perform 115 + * depends on the device's subsystem, but generally the driver is expected 116 + * to start working again, responding to hardware events and software 117 + * requests (the device itself may be left in a low-power state, waiting 118 + * for a runtime resume to occur). The state of the device at the time its 119 + * driver's @resume() callback is run depends on the platform and subsystem 120 + * the device belongs to. On most platforms, there are no restrictions on 121 + * availability of resources like clocks during @resume(). 122 + * Subsystem-level @resume() is executed for all devices after invoking 123 + * subsystem-level @resume_noirq() for all of them. 113 124 * 114 125 * @freeze: Hibernation-specific, executed before creating a hibernation image. 
115 - * Quiesce operations so that a consistent image can be created, but do NOT 116 - * otherwise put the device into a low power device state and do NOT emit 117 - * system wakeup events. Save in main memory the device settings to be 118 - * used by @restore() during the subsequent resume from hibernation or by 119 - * the subsequent @thaw(), if the creation of the image or the restoration 120 - * of main memory contents from it fails. 126 + * Analogous to @suspend(), but it should not enable the device to signal 127 + * wakeup events or change its power state. The majority of subsystems 128 + * (with the notable exception of the PCI bus type) expect the driver-level 129 + * @freeze() to save the device settings in memory to be used by @restore() 130 + * during the subsequent resume from hibernation. 131 + * Subsystem-level @freeze() is executed for all devices after invoking 132 + * subsystem-level @prepare() for all of them. 121 133 * 122 134 * @thaw: Hibernation-specific, executed after creating a hibernation image OR 123 - * if the creation of the image fails. Also executed after a failing 135 + * if the creation of an image has failed. Also executed after a failing 124 136 * attempt to restore the contents of main memory from such an image. 125 137 * Undo the changes made by the preceding @freeze(), so the device can be 126 138 * operated in the same way as immediately before the call to @freeze(). 139 + * Subsystem-level @thaw() is executed for all devices after invoking 140 + * subsystem-level @thaw_noirq() for all of them. It also may be executed 141 + * directly after @freeze() in case of a transition error. 127 142 * 128 143 * @poweroff: Hibernation-specific, executed after saving a hibernation image. 129 - * Quiesce the device, put it into a low power state appropriate for the 130 - * upcoming system state (such as PCI_D3hot), and enable wakeup events as 131 - * appropriate. 
144 + * Analogous to @suspend(), but it need not save the device's settings in 145 + * memory. 146 + * Subsystem-level @poweroff() is executed for all devices after invoking 147 + * subsystem-level @prepare() for all of them. 132 148 * 133 149 * @restore: Hibernation-specific, executed after restoring the contents of main 134 - * memory from a hibernation image. Driver starts working again, 135 - * responding to hardware events and software requests. Drivers may NOT 136 - * make ANY assumptions about the hardware state right prior to @restore(). 137 - * On most platforms, there are no restrictions on availability of 138 - * resources like clocks during @restore(). 150 + * memory from a hibernation image, analogous to @resume(). 139 151 * 140 - * @suspend_noirq: Complete the operations of ->suspend() by carrying out any 141 - * actions required for suspending the device that need interrupts to be 142 - * disabled 152 + * @suspend_noirq: Complete the actions started by @suspend(). Carry out any 153 + * additional operations required for suspending the device that might be 154 + * racing with its driver's interrupt handler, which is guaranteed not to 155 + * run while @suspend_noirq() is being executed. 156 + * It generally is expected that the device will be in a low-power state 157 + * (appropriate for the target system sleep state) after subsystem-level 158 + * @suspend_noirq() has returned successfully. If the device can generate 159 + * system wakeup signals and is enabled to wake up the system, it should be 160 + * configured to do so at that time. However, depending on the platform 161 + * and device's subsystem, @suspend() may be allowed to put the device into 162 + * the low-power state and configure it to generate wakeup signals, in 163 + * which case it generally is not necessary to define @suspend_noirq(). 
143 164 * 144 - * @resume_noirq: Prepare for the execution of ->resume() by carrying out any 145 - * actions required for resuming the device that need interrupts to be 146 - * disabled 165 + * @resume_noirq: Prepare for the execution of @resume() by carrying out any 166 + * operations required for resuming the device that might be racing with 167 + * its driver's interrupt handler, which is guaranteed not to run while 168 + * @resume_noirq() is being executed. 147 169 * 148 - * @freeze_noirq: Complete the operations of ->freeze() by carrying out any 149 - * actions required for freezing the device that need interrupts to be 150 - * disabled 170 + * @freeze_noirq: Complete the actions started by @freeze(). Carry out any 171 + * additional operations required for freezing the device that might be 172 + * racing with its driver's interrupt handler, which is guaranteed not to 173 + * run while @freeze_noirq() is being executed. 174 + * The power state of the device should not be changed by either @freeze() 175 + * or @freeze_noirq() and it should not be configured to signal system 176 + * wakeup by any of these callbacks. 151 177 * 152 - * @thaw_noirq: Prepare for the execution of ->thaw() by carrying out any 153 - * actions required for thawing the device that need interrupts to be 154 - * disabled 178 + * @thaw_noirq: Prepare for the execution of @thaw() by carrying out any 179 + * operations required for thawing the device that might be racing with its 180 + * driver's interrupt handler, which is guaranteed not to run while 181 + * @thaw_noirq() is being executed. 155 182 * 156 - * @poweroff_noirq: Complete the operations of ->poweroff() by carrying out any 157 - * actions required for handling the device that need interrupts to be 158 - * disabled 183 + * @poweroff_noirq: Complete the actions started by @poweroff(). Analogous to 184 + * @suspend_noirq(), but it need not save the device's settings in memory. 
159 185 * 160 - * @restore_noirq: Prepare for the execution of ->restore() by carrying out any 161 - * actions required for restoring the operations of the device that need 162 - * interrupts to be disabled 186 + * @restore_noirq: Prepare for the execution of @restore() by carrying out any 187 + * operations required for thawing the device that might be racing with its 188 + * driver's interrupt handler, which is guaranteed not to run while 189 + * @restore_noirq() is being executed. Analogous to @resume_noirq(). 163 190 * 164 191 * All of the above callbacks, except for @complete(), return error codes. 165 192 * However, the error codes returned by the resume operations, @resume(), 166 - * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq() do 193 + * @thaw(), @restore(), @resume_noirq(), @thaw_noirq(), and @restore_noirq(), do 167 194 * not cause the PM core to abort the resume transition during which they are 168 - * returned. The error codes returned in that cases are only printed by the PM 195 + * returned. The error codes returned in those cases are only printed by the PM 169 196 * core to the system logs for debugging purposes. Still, it is recommended 170 197 * that drivers only return error codes from their resume methods in case of an 171 198 * unrecoverable failure (i.e. when the device being handled refuses to resume ··· 201 174 * their children. 202 175 * 203 176 * It is allowed to unregister devices while the above callbacks are being 204 - * executed. However, it is not allowed to unregister a device from within any 205 - * of its own callbacks. 177 + * executed. However, a callback routine must NOT try to unregister the device 178 + * it was called for, although it may unregister children of that device (for 179 + * example, if it detects that a child was unplugged while the system was 180 + * asleep). 
206 181 * 207 - * There also are the following callbacks related to run-time power management 208 - * of devices: 182 + * Refer to Documentation/power/devices.txt for more information about the role 183 + * of the above callbacks in the system suspend process. 184 + * 185 + * There also are callbacks related to runtime power management of devices. 186 + * Again, these callbacks are executed by the PM core only for subsystems 187 + * (PM domains, device types, classes and bus types) and the subsystem-level 188 + * callbacks are supposed to invoke the driver callbacks. Moreover, the exact 189 + * actions to be performed by a device driver's callbacks generally depend on 190 + * the platform and subsystem the device belongs to. 209 191 * 210 192 * @runtime_suspend: Prepare the device for a condition in which it won't be 211 193 * able to communicate with the CPU(s) and RAM due to power management. 212 - * This need not mean that the device should be put into a low power state. 194 + * This need not mean that the device should be put into a low-power state. 213 195 * For example, if the device is behind a link which is about to be turned 214 196 * off, the device may remain at full power. If the device does go to low 215 - * power and is capable of generating run-time wake-up events, remote 216 - * wake-up (i.e., a hardware mechanism allowing the device to request a 217 - * change of its power state via a wake-up event, such as PCI PME) should 218 - * be enabled for it. 197 + * power and is capable of generating runtime wakeup events, remote wakeup 198 + * (i.e., a hardware mechanism allowing the device to request a change of 199 + * its power state via an interrupt) should be enabled for it. 219 200 * 220 201 * @runtime_resume: Put the device into the fully active state in response to a 221 - * wake-up event generated by hardware or at the request of software. 
If 222 - * necessary, put the device into the full power state and restore its 202 + * wakeup event generated by hardware or at the request of software. If 203 + * necessary, put the device into the full-power state and restore its 223 204 * registers, so that it is fully operational. 224 205 * 225 - * @runtime_idle: Device appears to be inactive and it might be put into a low 226 - * power state if all of the necessary conditions are satisfied. Check 206 + * @runtime_idle: Device appears to be inactive and it might be put into a 207 + * low-power state if all of the necessary conditions are satisfied. Check 227 208 * these conditions and handle the device as appropriate, possibly queueing 228 209 * a suspend request for it. The return value is ignored by the PM core. 210 + * 211 + * Refer to Documentation/power/runtime_pm.txt for more information about the 212 + * role of the above callbacks in device runtime power management. 213 + * 229 214 */ 230 215 231 216 struct dev_pm_ops {
+3 -1
include/linux/pstore.h
··· 35 35 spinlock_t buf_lock; /* serialize access to 'buf' */ 36 36 char *buf; 37 37 size_t bufsize; 38 + struct mutex read_mutex; /* serialize open/read/close */ 38 39 int (*open)(struct pstore_info *psi); 39 40 int (*close)(struct pstore_info *psi); 40 41 ssize_t (*read)(u64 *id, enum pstore_type_id *type, 41 - struct timespec *time, struct pstore_info *psi); 42 + struct timespec *time, char **buf, 43 + struct pstore_info *psi); 42 44 int (*write)(enum pstore_type_id type, u64 *id, 43 45 unsigned int part, size_t size, struct pstore_info *psi); 44 46 int (*erase)(enum pstore_type_id type, u64 id,
-1
include/linux/sched.h
··· 1521 1521 #ifdef CONFIG_FAULT_INJECTION 1522 1522 int make_it_fail; 1523 1523 #endif 1524 - struct prop_local_single dirties; 1525 1524 /* 1526 1525 * when (nr_dirtied >= nr_dirtied_pause), it's time to call 1527 1526 * balance_dirty_pages() for some dirty throttling pause
+8 -6
include/linux/serial.h
··· 207 207 208 208 struct serial_rs485 { 209 209 __u32 flags; /* RS485 feature flags */ 210 - #define SER_RS485_ENABLED (1 << 0) 211 - #define SER_RS485_RTS_ON_SEND (1 << 1) 212 - #define SER_RS485_RTS_AFTER_SEND (1 << 2) 213 - #define SER_RS485_RTS_BEFORE_SEND (1 << 3) 210 + #define SER_RS485_ENABLED (1 << 0) /* If enabled */ 211 + #define SER_RS485_RTS_ON_SEND (1 << 1) /* Logical level for 212 RTS pin when 213 sending */ 214 + #define SER_RS485_RTS_AFTER_SEND (1 << 2) /* Logical level for 215 RTS pin after send */ 214 216 #define SER_RS485_RX_DURING_TX (1 << 4) 215 - __u32 delay_rts_before_send; /* Milliseconds */ 216 - __u32 delay_rts_after_send; /* Milliseconds */ 217 + __u32 delay_rts_before_send; /* Delay before send (milliseconds) */ 218 + __u32 delay_rts_after_send; /* Delay after send (milliseconds) */ 217 219 __u32 padding[5]; /* Memory is cheap, new structs 218 220 are a royal PITA .. */ 219 221 };
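Userspace drives this struct through the `TIOCSRS485` ioctl on a tty fd. A sketch of a typical half-duplex configuration (the struct and flag values are copied from the hunk so the example is self-contained; a real program would take them from `<linux/serial.h>`):

```c
#include <assert.h>
#include <stdint.h>

/* Copied from the hunk above so this sketch is self-contained;
 * normally these definitions come from <linux/serial.h>. */
#define SER_RS485_ENABLED        (1 << 0)
#define SER_RS485_RTS_ON_SEND    (1 << 1)
#define SER_RS485_RTS_AFTER_SEND (1 << 2)
#define SER_RS485_RX_DURING_TX   (1 << 4)

struct serial_rs485_sketch {
	uint32_t flags;
	uint32_t delay_rts_before_send;  /* milliseconds */
	uint32_t delay_rts_after_send;   /* milliseconds */
	uint32_t padding[5];
};

/* Typical half-duplex setup: RS485 enabled, RTS driven to its active
 * level while sending and released afterwards, with a 1 ms turnaround
 * delay. A real program would follow with
 * ioctl(fd, TIOCSRS485, &conf). */
static struct serial_rs485_sketch rs485_half_duplex(void)
{
	struct serial_rs485_sketch conf = {0};

	conf.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;
	conf.delay_rts_after_send = 1;
	return conf;
}
```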
+2
include/linux/virtio_config.h
··· 85 85 * @reset: reset the device 86 86 * vdev: the virtio device 87 87 * After this, status and feature negotiation must be done again 88 + * Device must not be reset from its vq/config callbacks, or in 89 + * parallel with being added/removed. 88 90 * @find_vqs: find virtqueues and instantiate them. 89 91 * vdev: the virtio_device 90 92 * nvqs: the number of virtqueues to find
+1 -1
include/linux/virtio_mmio.h
··· 63 63 #define VIRTIO_MMIO_GUEST_FEATURES 0x020 64 64 65 65 /* Activated features set selector - Write Only */ 66 - #define VIRTIO_MMIO_GUEST_FEATURES_SET 0x024 66 + #define VIRTIO_MMIO_GUEST_FEATURES_SEL 0x024 67 67 68 68 /* Guest's memory page size in bytes - Write Only */ 69 69 #define VIRTIO_MMIO_GUEST_PAGE_SIZE 0x028
+1
include/net/inetpeer.h
··· 35 35 36 36 u32 metrics[RTAX_MAX]; 37 37 u32 rate_tokens; /* rate limiting for ICMP */ 38 + int redirect_genid; 38 39 unsigned long rate_last; 39 40 unsigned long pmtu_expires; 40 41 u32 pmtu_orig;
+10 -9
include/net/netfilter/nf_conntrack_ecache.h
··· 67 67 int (*fcn)(unsigned int events, struct nf_ct_event *item); 68 68 }; 69 69 70 - extern struct nf_ct_event_notifier __rcu *nf_conntrack_event_cb; 71 - extern int nf_conntrack_register_notifier(struct nf_ct_event_notifier *nb); 72 - extern void nf_conntrack_unregister_notifier(struct nf_ct_event_notifier *nb); 70 + extern int nf_conntrack_register_notifier(struct net *net, struct nf_ct_event_notifier *nb); 71 + extern void nf_conntrack_unregister_notifier(struct net *net, struct nf_ct_event_notifier *nb); 73 72 74 73 extern void nf_ct_deliver_cached_events(struct nf_conn *ct); 75 74 76 75 static inline void 77 76 nf_conntrack_event_cache(enum ip_conntrack_events event, struct nf_conn *ct) 78 77 { 78 + struct net *net = nf_ct_net(ct); 79 79 struct nf_conntrack_ecache *e; 80 80 81 - if (nf_conntrack_event_cb == NULL) 81 + if (net->ct.nf_conntrack_event_cb == NULL) 82 82 return; 83 83 84 84 e = nf_ct_ecache_find(ct); ··· 95 95 int report) 96 96 { 97 97 int ret = 0; 98 + struct net *net = nf_ct_net(ct); 98 99 struct nf_ct_event_notifier *notify; 99 100 struct nf_conntrack_ecache *e; 100 101 101 102 rcu_read_lock(); 102 - notify = rcu_dereference(nf_conntrack_event_cb); 103 + notify = rcu_dereference(net->ct.nf_conntrack_event_cb); 103 104 if (notify == NULL) 104 105 goto out_unlock; 105 106 ··· 165 164 int (*fcn)(unsigned int events, struct nf_exp_event *item); 166 165 }; 167 166 168 - extern struct nf_exp_event_notifier __rcu *nf_expect_event_cb; 169 - extern int nf_ct_expect_register_notifier(struct nf_exp_event_notifier *nb); 170 - extern void nf_ct_expect_unregister_notifier(struct nf_exp_event_notifier *nb); 167 + extern int nf_ct_expect_register_notifier(struct net *net, struct nf_exp_event_notifier *nb); 168 + extern void nf_ct_expect_unregister_notifier(struct net *net, struct nf_exp_event_notifier *nb); 171 169 172 170 static inline void 173 171 nf_ct_expect_event_report(enum ip_conntrack_expect_events event, ··· 174 174 u32 pid, 175 175 int report) 176 
176 { 177 + struct net *net = nf_ct_exp_net(exp); 177 178 struct nf_exp_event_notifier *notify; 178 179 struct nf_conntrack_ecache *e; 179 180 180 181 rcu_read_lock(); 181 - notify = rcu_dereference(nf_expect_event_cb); 182 + notify = rcu_dereference(net->ct.nf_expect_event_cb); 182 183 if (notify == NULL) 183 184 goto out_unlock; 184 185
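The hunk above moves the conntrack event callback pointer from a single global into `struct net`, so registration, lookup, and delivery all happen per network namespace. A sketch of that pattern with simplified stand-in types (one callback slot per namespace, busy-checked on registration):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins mirroring the move from a global
 * nf_conntrack_event_cb to net->ct.nf_conntrack_event_cb above. */
struct notifier { int (*fcn)(unsigned int events); };

struct net_sketch { struct notifier *event_cb; };

static int register_notifier(struct net_sketch *net, struct notifier *nb)
{
	if (net->event_cb)
		return -1;        /* -EBUSY in the kernel */
	net->event_cb = nb;
	return 0;
}

static void unregister_notifier(struct net_sketch *net)
{
	net->event_cb = NULL;
}

/* Delivery looks the callback up in the event's own namespace,
 * like nf_conntrack_event_report() deriving net from the conntrack. */
static int deliver_event(struct net_sketch *net, unsigned int events)
{
	if (!net->event_cb)
		return 0;
	return net->event_cb->fcn(events);
}

static int hits;
static int count_events(unsigned int events) { (void)events; hits++; return 0; }
```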
+2
include/net/netns/conntrack.h
··· 18 18 struct hlist_nulls_head unconfirmed; 19 19 struct hlist_nulls_head dying; 20 20 struct ip_conntrack_stat __percpu *stat; 21 + struct nf_ct_event_notifier __rcu *nf_conntrack_event_cb; 22 + struct nf_exp_event_notifier __rcu *nf_expect_event_cb; 21 23 int sysctl_events; 22 24 unsigned int sysctl_events_retry_timeout; 23 25 int sysctl_acct;
+6 -9
include/net/red.h
··· 116 116 u32 qR; /* Cached random number */ 117 117 118 118 unsigned long qavg; /* Average queue length: A scaled */ 119 - psched_time_t qidlestart; /* Start of current idle period */ 119 + ktime_t qidlestart; /* Start of current idle period */ 120 120 }; 121 121 122 122 static inline u32 red_rmask(u8 Plog) ··· 148 148 149 149 static inline int red_is_idling(struct red_parms *p) 150 150 { 151 - return p->qidlestart != PSCHED_PASTPERFECT; 151 + return p->qidlestart.tv64 != 0; 152 152 } 153 153 154 154 static inline void red_start_of_idle_period(struct red_parms *p) 155 155 { 156 - p->qidlestart = psched_get_time(); 156 + p->qidlestart = ktime_get(); 157 157 } 158 158 159 159 static inline void red_end_of_idle_period(struct red_parms *p) 160 160 { 161 - p->qidlestart = PSCHED_PASTPERFECT; 161 + p->qidlestart.tv64 = 0; 162 162 } 163 163 164 164 static inline void red_restart(struct red_parms *p) ··· 170 170 171 171 static inline unsigned long red_calc_qavg_from_idle_time(struct red_parms *p) 172 172 { 173 - psched_time_t now; 174 - long us_idle; 173 + s64 delta = ktime_us_delta(ktime_get(), p->qidlestart); 174 + long us_idle = min_t(s64, delta, p->Scell_max); 175 175 int shift; 176 - 177 - now = psched_get_time(); 178 - us_idle = psched_tdiff_bounded(now, p->qidlestart, p->Scell_max); 179 176 180 177 /* 181 178 * The problem: ideally, average queue length recalculation should
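With `qidlestart` now a `ktime_t`, the hunk uses `tv64 == 0` as the "not idling" sentinel and computes the idle time as a 64-bit microsecond delta clamped to `Scell_max`. The bookkeeping in isolation, with a fake clock standing in for `ktime_get()` (field and function names are simplified from the hunk):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the ktime-based idle-period tracking above:
 * 0 is the "not idling" sentinel (like qidlestart.tv64 == 0). */
struct red_sketch {
	int64_t qidlestart_us;
	long scell_max_us;
};

static int64_t now_us;   /* fake clock for the sketch */

static int red_is_idling(const struct red_sketch *p)
{
	return p->qidlestart_us != 0;
}

static void red_start_of_idle_period(struct red_sketch *p)
{
	p->qidlestart_us = now_us;
}

static void red_end_of_idle_period(struct red_sketch *p)
{
	p->qidlestart_us = 0;
}

/* Elapsed idle time, bounded by Scell_max, as in the rewritten
 * red_calc_qavg_from_idle_time() above. */
static long red_idle_us(const struct red_sketch *p)
{
	int64_t delta = now_us - p->qidlestart_us;

	return delta < p->scell_max_us ? (long)delta : p->scell_max_us;
}
```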
-7
include/video/omapdss.h
··· 307 307 void (*dsi_disable_pads)(int dsi_id, unsigned lane_mask); 308 308 }; 309 309 310 - #if defined(CONFIG_OMAP2_DSS_MODULE) || defined(CONFIG_OMAP2_DSS) 311 310 /* Init with the board info */ 312 311 extern int omap_display_init(struct omap_dss_board_info *board_data); 313 - #else 314 - static inline int omap_display_init(struct omap_dss_board_info *board_data) 315 - { 316 - return 0; 317 - } 318 - #endif 319 312 320 313 struct omap_display_platform_data { 321 314 struct omap_dss_board_info *board_data;
+9 -2
kernel/cgroup_freezer.c
··· 153 153 kfree(cgroup_freezer(cgroup)); 154 154 } 155 155 156 + /* task is frozen or will freeze immediately when next it gets woken */ 157 + static bool is_task_frozen_enough(struct task_struct *task) 158 + { 159 + return frozen(task) || 160 + (task_is_stopped_or_traced(task) && freezing(task)); 161 + } 162 + 156 163 /* 157 164 * The call to cgroup_lock() in the freezer.state write method prevents 158 165 * a write to that file racing against an attach, and hence the ··· 238 231 cgroup_iter_start(cgroup, &it); 239 232 while ((task = cgroup_iter_next(cgroup, &it))) { 240 233 ntotal++; 241 - if (frozen(task)) 234 + if (is_task_frozen_enough(task)) 242 235 nfrozen++; 243 236 } 244 237 ··· 291 284 while ((task = cgroup_iter_next(cgroup, &it))) { 292 285 if (!freeze_task(task, true)) 293 286 continue; 294 - if (frozen(task)) 287 + if (is_task_frozen_enough(task)) 295 288 continue; 296 289 if (!freezing(task) && !freezer_should_skip(task)) 297 290 num_cant_freeze_now++;
-5
kernel/fork.c
··· 162 162 163 163 void free_task(struct task_struct *tsk) 164 164 { 165 - prop_local_destroy_single(&tsk->dirties); 166 165 account_kernel_stack(tsk->stack, -1); 167 166 free_thread_info(tsk->stack); 168 167 rt_mutex_debug_task_free(tsk); ··· 272 273 goto out; 273 274 274 275 tsk->stack = ti; 275 - 276 - err = prop_local_init_single(&tsk->dirties); 277 - if (err) 278 - goto out; 279 276 280 277 setup_thread_stack(tsk, orig); 281 278 clear_user_return_notifier(tsk);
+4 -2
kernel/hrtimer.c
··· 885 885 struct hrtimer_clock_base *base, 886 886 unsigned long newstate, int reprogram) 887 887 { 888 + struct timerqueue_node *next_timer; 888 889 if (!(timer->state & HRTIMER_STATE_ENQUEUED)) 889 890 goto out; 890 891 891 - if (&timer->node == timerqueue_getnext(&base->active)) { 892 + next_timer = timerqueue_getnext(&base->active); 893 + timerqueue_del(&base->active, &timer->node); 894 + if (&timer->node == next_timer) { 892 895 #ifdef CONFIG_HIGH_RES_TIMERS 893 896 /* Reprogram the clock event device. if enabled */ 894 897 if (reprogram && hrtimer_hres_active()) { ··· 904 901 } 905 902 #endif 906 903 } 907 - timerqueue_del(&base->active, &timer->node); 908 904 if (!timerqueue_getnext(&base->active)) 909 905 base->cpu_base->active_bases &= ~(1 << base->index); 910 906 out:
+1 -1
kernel/irq/manage.c
··· 1596 1596 return -ENOMEM; 1597 1597 1598 1598 action->handler = handler; 1599 - action->flags = IRQF_PERCPU; 1599 + action->flags = IRQF_PERCPU | IRQF_NO_SUSPEND; 1600 1600 action->name = devname; 1601 1601 action->percpu_dev_id = dev_id; 1602 1602
+3 -1
kernel/irq/spurious.c
··· 84 84 */ 85 85 action = desc->action; 86 86 if (!action || !(action->flags & IRQF_SHARED) || 87 - (action->flags & __IRQF_TIMER) || !action->next) 87 + (action->flags & __IRQF_TIMER) || 88 + (action->handler(irq, action->dev_id) == IRQ_HANDLED) || 89 + !action->next) 88 90 goto out; 89 91 90 92 /* Already running on another processor */
+10 -6
kernel/power/hibernate.c
··· 347 347 348 348 error = freeze_kernel_threads(); 349 349 if (error) 350 - goto Close; 350 + goto Cleanup; 351 351 352 352 if (hibernation_test(TEST_FREEZER) || 353 353 hibernation_testmode(HIBERNATION_TESTPROC)) { ··· 357 357 * successful freezer test. 358 358 */ 359 359 freezer_test_done = true; 360 - goto Close; 360 + goto Cleanup; 361 361 } 362 362 363 363 error = dpm_prepare(PMSG_FREEZE); 364 - if (error) 365 - goto Complete_devices; 364 + if (error) { 365 + dpm_complete(msg); 366 + goto Cleanup; 367 + } 366 368 367 369 suspend_console(); 368 370 pm_restrict_gfp_mask(); ··· 393 391 pm_restore_gfp_mask(); 394 392 395 393 resume_console(); 396 - 397 - Complete_devices: 398 394 dpm_complete(msg); 399 395 400 396 Close: ··· 402 402 Recover_platform: 403 403 platform_recover(platform_mode); 404 404 goto Resume_devices; 405 + 406 + Cleanup: 407 + swsusp_free(); 408 + goto Close; 405 409 } 406 410 407 411 /**
+48 -10
kernel/time/clocksource.c
··· 492 492 } 493 493 494 494 /** 495 + * clocksource_max_adjustment- Returns max adjustment amount 496 + * @cs: Pointer to clocksource 497 + * 498 + */ 499 + static u32 clocksource_max_adjustment(struct clocksource *cs) 500 + { 501 + u64 ret; 502 + /* 503 + * We won't try to correct for more then 11% adjustments (110,000 ppm), 504 + */ 505 + ret = (u64)cs->mult * 11; 506 + do_div(ret,100); 507 + return (u32)ret; 508 + } 509 + 510 + /** 495 511 * clocksource_max_deferment - Returns max time the clocksource can be deferred 496 512 * @cs: Pointer to clocksource 497 513 * ··· 519 503 /* 520 504 * Calculate the maximum number of cycles that we can pass to the 521 505 * cyc2ns function without overflowing a 64-bit signed result. The 522 - * maximum number of cycles is equal to ULLONG_MAX/cs->mult which 523 - * is equivalent to the below. 524 - * max_cycles < (2^63)/cs->mult 525 - * max_cycles < 2^(log2((2^63)/cs->mult)) 526 - * max_cycles < 2^(log2(2^63) - log2(cs->mult)) 527 - * max_cycles < 2^(63 - log2(cs->mult)) 528 - * max_cycles < 1 << (63 - log2(cs->mult)) 506 + * maximum number of cycles is equal to ULLONG_MAX/(cs->mult+cs->maxadj) 507 + * which is equivalent to the below. 508 + * max_cycles < (2^63)/(cs->mult + cs->maxadj) 509 + * max_cycles < 2^(log2((2^63)/(cs->mult + cs->maxadj))) 510 + * max_cycles < 2^(log2(2^63) - log2(cs->mult + cs->maxadj)) 511 + * max_cycles < 2^(63 - log2(cs->mult + cs->maxadj)) 512 + * max_cycles < 1 << (63 - log2(cs->mult + cs->maxadj)) 529 513 * Please note that we add 1 to the result of the log2 to account for 530 514 * any rounding errors, ensure the above inequality is satisfied and 531 515 * no overflow will occur. 532 516 */ 533 - max_cycles = 1ULL << (63 - (ilog2(cs->mult) + 1)); 517 + max_cycles = 1ULL << (63 - (ilog2(cs->mult + cs->maxadj) + 1)); 534 518 535 519 /* 536 520 * The actual maximum number of cycles we can defer the clocksource is 537 521 * determined by the minimum of max_cycles and cs->mask. 
522 + * Note: Here we subtract the maxadj to make sure we don't sleep for 523 + * too long if there's a large negative adjustment. 538 524 */ 539 525 max_cycles = min_t(u64, max_cycles, (u64) cs->mask); 540 - max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult, cs->shift); 526 + max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult - cs->maxadj, 527 + cs->shift); 541 528 542 529 /* 543 530 * To ensure that the clocksource does not wrap whilst we are idle, ··· 659 640 void __clocksource_updatefreq_scale(struct clocksource *cs, u32 scale, u32 freq) 660 641 { 661 642 u64 sec; 662 - 663 643 /* 664 644 * Calc the maximum number of seconds which we can run before 665 645 * wrapping around. For clocksources which have a mask > 32bit ··· 679 661 680 662 clocks_calc_mult_shift(&cs->mult, &cs->shift, freq, 681 663 NSEC_PER_SEC / scale, sec * scale); 664 + 665 + /* 666 + * for clocksources that have large mults, to avoid overflow. 667 + * Since mult may be adjusted by ntp, add an safety extra margin 668 + * 669 + */ 670 + cs->maxadj = clocksource_max_adjustment(cs); 671 + while ((cs->mult + cs->maxadj < cs->mult) 672 + || (cs->mult - cs->maxadj > cs->mult)) { 673 + cs->mult >>= 1; 674 + cs->shift--; 675 + cs->maxadj = clocksource_max_adjustment(cs); 676 + } 677 + 682 678 cs->max_idle_ns = clocksource_max_deferment(cs); 683 679 } 684 680 EXPORT_SYMBOL_GPL(__clocksource_updatefreq_scale); ··· 733 701 */ 734 702 int clocksource_register(struct clocksource *cs) 735 703 { 704 + /* calculate max adjustment for given mult/shift */ 705 + cs->maxadj = clocksource_max_adjustment(cs); 706 + WARN_ONCE(cs->mult + cs->maxadj < cs->mult, 707 + "Clocksource %s might overflow on 11%% adjustment\n", 708 + cs->name); 709 + 736 710 /* calculate max idle time permitted for this clocksource */ 737 711 cs->max_idle_ns = clocksource_max_deferment(cs); 738 712
+91 -1
kernel/time/timekeeping.c
··· 249 249 secs = xtime.tv_sec + wall_to_monotonic.tv_sec; 250 250 nsecs = xtime.tv_nsec + wall_to_monotonic.tv_nsec; 251 251 nsecs += timekeeping_get_ns(); 252 + /* If arch requires, add in gettimeoffset() */ 253 + nsecs += arch_gettimeoffset(); 252 254 253 255 } while (read_seqretry(&xtime_lock, seq)); 254 256 /* ··· 282 280 *ts = xtime; 283 281 tomono = wall_to_monotonic; 284 282 nsecs = timekeeping_get_ns(); 283 + /* If arch requires, add in gettimeoffset() */ 284 + nsecs += arch_gettimeoffset(); 285 285 286 286 } while (read_seqretry(&xtime_lock, seq)); 287 287 ··· 806 802 s64 error, interval = timekeeper.cycle_interval; 807 803 int adj; 808 804 805 + /* 806 + * The point of this is to check if the error is greater then half 807 + * an interval. 808 + * 809 + * First we shift it down from NTP_SHIFT to clocksource->shifted nsecs. 810 + * 811 + * Note we subtract one in the shift, so that error is really error*2. 812 + * This "saves" dividing(shifting) intererval twice, but keeps the 813 + * (error > interval) comparision as still measuring if error is 814 + * larger then half an interval. 815 + * 816 + * Note: It does not "save" on aggrivation when reading the code. 817 + */ 809 818 error = timekeeper.ntp_error >> (timekeeper.ntp_error_shift - 1); 810 819 if (error > interval) { 820 + /* 821 + * We now divide error by 4(via shift), which checks if 822 + * the error is greater then twice the interval. 823 + * If it is greater, we need a bigadjust, if its smaller, 824 + * we can adjust by 1. 825 + */ 811 826 error >>= 2; 827 + /* 828 + * XXX - In update_wall_time, we round up to the next 829 + * nanosecond, and store the amount rounded up into 830 + * the error. This causes the likely below to be unlikely. 831 + * 832 + * The properfix is to avoid rounding up by using 833 + * the high precision timekeeper.xtime_nsec instead of 834 + * xtime.tv_nsec everywhere. Fixing this will take some 835 + * time. 
836 + */ 812 837 if (likely(error <= interval)) 813 838 adj = 1; 814 839 else 815 840 adj = timekeeping_bigadjust(error, &interval, &offset); 816 841 } else if (error < -interval) { 842 + /* See comment above, this is just switched for the negative */ 817 843 error >>= 2; 818 844 if (likely(error >= -interval)) { 819 845 adj = -1; ··· 851 817 offset = -offset; 852 818 } else 853 819 adj = timekeeping_bigadjust(error, &interval, &offset); 854 - } else 820 + } else /* No adjustment needed */ 855 821 return; 856 822 823 + WARN_ONCE(timekeeper.clock->maxadj && 824 + (timekeeper.mult + adj > timekeeper.clock->mult + 825 + timekeeper.clock->maxadj), 826 + "Adjusting %s more then 11%% (%ld vs %ld)\n", 827 + timekeeper.clock->name, (long)timekeeper.mult + adj, 828 + (long)timekeeper.clock->mult + 829 + timekeeper.clock->maxadj); 830 + /* 831 + * So the following can be confusing. 832 + * 833 + * To keep things simple, lets assume adj == 1 for now. 834 + * 835 + * When adj != 1, remember that the interval and offset values 836 + * have been appropriately scaled so the math is the same. 837 + * 838 + * The basic idea here is that we're increasing the multiplier 839 + * by one, this causes the xtime_interval to be incremented by 840 + * one cycle_interval. This is because: 841 + * xtime_interval = cycle_interval * mult 842 + * So if mult is being incremented by one: 843 + * xtime_interval = cycle_interval * (mult + 1) 844 + * Its the same as: 845 + * xtime_interval = (cycle_interval * mult) + cycle_interval 846 + * Which can be shortened to: 847 + * xtime_interval += cycle_interval 848 + * 849 + * So offset stores the non-accumulated cycles. Thus the current 850 + * time (in shifted nanoseconds) is: 851 + * now = (offset * adj) + xtime_nsec 852 + * Now, even though we're adjusting the clock frequency, we have 853 + * to keep time consistent. In other words, we can't jump back 854 + * in time, and we also want to avoid jumping forward in time. 
855 + * 856 + * So given the same offset value, we need the time to be the same 857 + * both before and after the freq adjustment. 858 + * now = (offset * adj_1) + xtime_nsec_1 859 + * now = (offset * adj_2) + xtime_nsec_2 860 + * So: 861 + * (offset * adj_1) + xtime_nsec_1 = 862 + * (offset * adj_2) + xtime_nsec_2 863 + * And we know: 864 + * adj_2 = adj_1 + 1 865 + * So: 866 + * (offset * adj_1) + xtime_nsec_1 = 867 + * (offset * (adj_1+1)) + xtime_nsec_2 868 + * (offset * adj_1) + xtime_nsec_1 = 869 + * (offset * adj_1) + offset + xtime_nsec_2 870 + * Canceling the sides: 871 + * xtime_nsec_1 = offset + xtime_nsec_2 872 + * Which gives us: 873 + * xtime_nsec_2 = xtime_nsec_1 - offset 874 + * Which simplfies to: 875 + * xtime_nsec -= offset 876 + * 877 + * XXX - TODO: Doc ntp_error calculation. 878 + */ 857 879 timekeeper.mult += adj; 858 880 timekeeper.xtime_interval += interval; 859 881 timekeeper.xtime_nsec -= offset;
+7 -16
mm/page-writeback.c
··· 128 128 * 129 129 */ 130 130 static struct prop_descriptor vm_completions; 131 - static struct prop_descriptor vm_dirties; 132 131 133 132 /* 134 133 * couple the period to the dirty_ratio: ··· 153 154 { 154 155 int shift = calc_period_shift(); 155 156 prop_change_shift(&vm_completions, shift); 156 - prop_change_shift(&vm_dirties, shift); 157 157 158 158 writeback_set_ratelimit(); 159 159 } ··· 232 234 local_irq_restore(flags); 233 235 } 234 236 EXPORT_SYMBOL_GPL(bdi_writeout_inc); 235 - 236 - void task_dirty_inc(struct task_struct *tsk) 237 - { 238 - prop_inc_single(&vm_dirties, &tsk->dirties); 239 - } 240 237 241 238 /* 242 239 * Obtain an accurate fraction of the BDI's portion. ··· 1126 1133 pages_dirtied, 1127 1134 pause, 1128 1135 start_time); 1129 - __set_current_state(TASK_UNINTERRUPTIBLE); 1136 + __set_current_state(TASK_KILLABLE); 1130 1137 io_schedule_timeout(pause); 1131 1138 1132 - dirty_thresh = hard_dirty_limit(dirty_thresh); 1133 1139 /* 1134 - * max-pause area. If dirty exceeded but still within this 1135 - * area, no need to sleep for more than 200ms: (a) 8 pages per 1136 - * 200ms is typically more than enough to curb heavy dirtiers; 1137 - * (b) the pause time limit makes the dirtiers more responsive. 1140 + * This is typically equal to (nr_dirty < dirty_thresh) and can 1141 + * also keep "1000+ dd on a slow USB stick" under control. 
1138 1142 */ 1139 - if (nr_dirty < dirty_thresh) 1143 + if (task_ratelimit) 1144 + break; 1145 + 1146 + if (fatal_signal_pending(current)) 1140 1147 break; 1141 1148 } 1142 1149 ··· 1388 1395 1389 1396 shift = calc_period_shift(); 1390 1397 prop_descriptor_init(&vm_completions, shift); 1391 - prop_descriptor_init(&vm_dirties, shift); 1392 1398 } 1393 1399 1394 1400 /** ··· 1716 1724 __inc_zone_page_state(page, NR_DIRTIED); 1717 1725 __inc_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE); 1718 1726 __inc_bdi_stat(mapping->backing_dev_info, BDI_DIRTIED); 1719 - task_dirty_inc(current); 1720 1727 task_io_account_write(PAGE_CACHE_SIZE); 1721 1728 } 1722 1729 }
+8 -9
mm/percpu-vm.c
··· 50 50 51 51 if (!pages || !bitmap) { 52 52 if (may_alloc && !pages) 53 - pages = pcpu_mem_alloc(pages_size); 53 + pages = pcpu_mem_zalloc(pages_size); 54 54 if (may_alloc && !bitmap) 55 - bitmap = pcpu_mem_alloc(bitmap_size); 55 + bitmap = pcpu_mem_zalloc(bitmap_size); 56 56 if (!pages || !bitmap) 57 57 return NULL; 58 58 } 59 59 60 - memset(pages, 0, pages_size); 61 60 bitmap_copy(bitmap, chunk->populated, pcpu_unit_pages); 62 61 63 62 *bitmapp = bitmap; ··· 142 143 int page_start, int page_end) 143 144 { 144 145 flush_cache_vunmap( 145 - pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start), 146 - pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end)); 146 + pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start), 147 + pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); 147 148 } 148 149 149 150 static void __pcpu_unmap_pages(unsigned long addr, int nr_pages) ··· 205 206 int page_start, int page_end) 206 207 { 207 208 flush_tlb_kernel_range( 208 - pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start), 209 - pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end)); 209 + pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start), 210 + pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); 210 211 } 211 212 212 213 static int __pcpu_map_pages(unsigned long addr, struct page **pages, ··· 283 284 int page_start, int page_end) 284 285 { 285 286 flush_cache_vmap( 286 - pcpu_chunk_addr(chunk, pcpu_first_unit_cpu, page_start), 287 - pcpu_chunk_addr(chunk, pcpu_last_unit_cpu, page_end)); 287 + pcpu_chunk_addr(chunk, pcpu_low_unit_cpu, page_start), 288 + pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end)); 288 289 } 289 290 290 291 /**
+40 -22
mm/percpu.c
··· 116 116 static int pcpu_nr_slots __read_mostly; 117 117 static size_t pcpu_chunk_struct_size __read_mostly; 118 118 119 - /* cpus with the lowest and highest unit numbers */ 120 - static unsigned int pcpu_first_unit_cpu __read_mostly; 121 - static unsigned int pcpu_last_unit_cpu __read_mostly; 119 + /* cpus with the lowest and highest unit addresses */ 120 + static unsigned int pcpu_low_unit_cpu __read_mostly; 121 + static unsigned int pcpu_high_unit_cpu __read_mostly; 122 122 123 123 /* the address of the first chunk which starts with the kernel static area */ 124 124 void *pcpu_base_addr __read_mostly; ··· 273 273 (rs) = (re) + 1, pcpu_next_pop((chunk), &(rs), &(re), (end))) 274 274 275 275 /** 276 - * pcpu_mem_alloc - allocate memory 276 + * pcpu_mem_zalloc - allocate memory 277 277 * @size: bytes to allocate 278 278 * 279 279 * Allocate @size bytes. If @size is smaller than PAGE_SIZE, 280 - * kzalloc() is used; otherwise, vmalloc() is used. The returned 280 + * kzalloc() is used; otherwise, vzalloc() is used. The returned 281 281 * memory is always zeroed. 282 282 * 283 283 * CONTEXT: ··· 286 286 * RETURNS: 287 287 * Pointer to the allocated area on success, NULL on failure. 288 288 */ 289 - static void *pcpu_mem_alloc(size_t size) 289 + static void *pcpu_mem_zalloc(size_t size) 290 290 { 291 291 if (WARN_ON_ONCE(!slab_is_available())) 292 292 return NULL; ··· 302 302 * @ptr: memory to free 303 303 * @size: size of the area 304 304 * 305 - * Free @ptr. @ptr should have been allocated using pcpu_mem_alloc(). 305 + * Free @ptr. @ptr should have been allocated using pcpu_mem_zalloc(). 
306 306 */ 307 307 static void pcpu_mem_free(void *ptr, size_t size) 308 308 { ··· 384 384 size_t old_size = 0, new_size = new_alloc * sizeof(new[0]); 385 385 unsigned long flags; 386 386 387 - new = pcpu_mem_alloc(new_size); 387 + new = pcpu_mem_zalloc(new_size); 388 388 if (!new) 389 389 return -ENOMEM; 390 390 ··· 604 604 { 605 605 struct pcpu_chunk *chunk; 606 606 607 - chunk = pcpu_mem_alloc(pcpu_chunk_struct_size); 607 + chunk = pcpu_mem_zalloc(pcpu_chunk_struct_size); 608 608 if (!chunk) 609 609 return NULL; 610 610 611 - chunk->map = pcpu_mem_alloc(PCPU_DFL_MAP_ALLOC * sizeof(chunk->map[0])); 611 + chunk->map = pcpu_mem_zalloc(PCPU_DFL_MAP_ALLOC * 612 + sizeof(chunk->map[0])); 612 613 if (!chunk->map) { 613 614 kfree(chunk); 614 615 return NULL; ··· 978 977 * address. The caller is responsible for ensuring @addr stays valid 979 978 * until this function finishes. 980 979 * 980 + * percpu allocator has special setup for the first chunk, which currently 981 + * supports either embedding in linear address space or vmalloc mapping, 982 + * and, from the second one, the backing allocator (currently either vm or 983 + * km) provides translation. 984 + * 985 + * The addr can be tranlated simply without checking if it falls into the 986 + * first chunk. But the current code reflects better how percpu allocator 987 + * actually works, and the verification can discover both bugs in percpu 988 + * allocator itself and per_cpu_ptr_to_phys() callers. So we keep current 989 + * code. 990 + * 981 991 * RETURNS: 982 992 * The physical address for @addr. 
983 993 */ ··· 996 984 { 997 985 void __percpu *base = __addr_to_pcpu_ptr(pcpu_base_addr); 998 986 bool in_first_chunk = false; 999 - unsigned long first_start, first_end; 987 + unsigned long first_low, first_high; 1000 988 unsigned int cpu; 1001 989 1002 990 /* 1003 - * The following test on first_start/end isn't strictly 991 + * The following test on unit_low/high isn't strictly 1004 992 * necessary but will speed up lookups of addresses which 1005 993 * aren't in the first chunk. 1006 994 */ 1007 - first_start = pcpu_chunk_addr(pcpu_first_chunk, pcpu_first_unit_cpu, 0); 1008 - first_end = pcpu_chunk_addr(pcpu_first_chunk, pcpu_last_unit_cpu, 1009 - pcpu_unit_pages); 1010 - if ((unsigned long)addr >= first_start && 1011 - (unsigned long)addr < first_end) { 995 + first_low = pcpu_chunk_addr(pcpu_first_chunk, pcpu_low_unit_cpu, 0); 996 + first_high = pcpu_chunk_addr(pcpu_first_chunk, pcpu_high_unit_cpu, 997 + pcpu_unit_pages); 998 + if ((unsigned long)addr >= first_low && 999 + (unsigned long)addr < first_high) { 1012 1000 for_each_possible_cpu(cpu) { 1013 1001 void *start = per_cpu_ptr(base, cpu); 1014 1002 ··· 1245 1233 1246 1234 for (cpu = 0; cpu < nr_cpu_ids; cpu++) 1247 1235 unit_map[cpu] = UINT_MAX; 1248 - pcpu_first_unit_cpu = NR_CPUS; 1236 + 1237 + pcpu_low_unit_cpu = NR_CPUS; 1238 + pcpu_high_unit_cpu = NR_CPUS; 1249 1239 1250 1240 for (group = 0, unit = 0; group < ai->nr_groups; group++, unit += i) { 1251 1241 const struct pcpu_group_info *gi = &ai->groups[group]; ··· 1267 1253 unit_map[cpu] = unit + i; 1268 1254 unit_off[cpu] = gi->base_offset + i * ai->unit_size; 1269 1255 1270 - if (pcpu_first_unit_cpu == NR_CPUS) 1271 - pcpu_first_unit_cpu = cpu; 1272 - pcpu_last_unit_cpu = cpu; 1256 + /* determine low/high unit_cpu */ 1257 + if (pcpu_low_unit_cpu == NR_CPUS || 1258 + unit_off[cpu] < unit_off[pcpu_low_unit_cpu]) 1259 + pcpu_low_unit_cpu = cpu; 1260 + if (pcpu_high_unit_cpu == NR_CPUS || 1261 + unit_off[cpu] > unit_off[pcpu_high_unit_cpu]) 1262 + 
pcpu_high_unit_cpu = cpu; 1273 1263 } 1274 1264 } 1275 1265 pcpu_nr_units = unit; ··· 1907 1889 1908 1890 BUILD_BUG_ON(size > PAGE_SIZE); 1909 1891 1910 - map = pcpu_mem_alloc(size); 1892 + map = pcpu_mem_zalloc(size); 1911 1893 BUG_ON(!map); 1912 1894 1913 1895 spin_lock_irqsave(&pcpu_lock, flags);
+26 -16
mm/slub.c
··· 1862 1862 { 1863 1863 struct kmem_cache_node *n = NULL; 1864 1864 struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab); 1865 - struct page *page; 1865 + struct page *page, *discard_page = NULL; 1866 1866 1867 1867 while ((page = c->partial)) { 1868 1868 enum slab_modes { M_PARTIAL, M_FREE }; ··· 1904 1904 if (l == M_PARTIAL) 1905 1905 remove_partial(n, page); 1906 1906 else 1907 - add_partial(n, page, 1); 1907 + add_partial(n, page, 1908 + DEACTIVATE_TO_TAIL); 1908 1909 1909 1910 l = m; 1910 1911 } ··· 1916 1915 "unfreezing slab")); 1917 1916 1918 1917 if (m == M_FREE) { 1919 - stat(s, DEACTIVATE_EMPTY); 1920 - discard_slab(s, page); 1921 - stat(s, FREE_SLAB); 1918 + page->next = discard_page; 1919 + discard_page = page; 1922 1920 } 1923 1921 } 1924 1922 1925 1923 if (n) 1926 1924 spin_unlock(&n->list_lock); 1925 + 1926 + while (discard_page) { 1927 + page = discard_page; 1928 + discard_page = discard_page->next; 1929 + 1930 + stat(s, DEACTIVATE_EMPTY); 1931 + discard_slab(s, page); 1932 + stat(s, FREE_SLAB); 1933 + } 1927 1934 } 1928 1935 1929 1936 /* ··· 1978 1969 page->pobjects = pobjects; 1979 1970 page->next = oldpage; 1980 1971 1981 - } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage); 1972 + } while (irqsafe_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage); 1982 1973 stat(s, CPU_PARTIAL_FREE); 1983 1974 return pobjects; 1984 1975 } ··· 4444 4435 4445 4436 for_each_possible_cpu(cpu) { 4446 4437 struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); 4438 + int node = ACCESS_ONCE(c->node); 4447 4439 struct page *page; 4448 4440 4449 - if (!c || c->node < 0) 4441 + if (node < 0) 4450 4442 continue; 4451 - 4452 - if (c->page) { 4453 - if (flags & SO_TOTAL) 4454 - x = c->page->objects; 4443 + page = ACCESS_ONCE(c->page); 4444 + if (page) { 4445 + if (flags & SO_TOTAL) 4446 + x = page->objects; 4455 4447 else if (flags & SO_OBJECTS) 4456 - x = c->page->inuse; 4448 + x = page->inuse; 4457 4449 else 4458 4450 x = 1; 4459 
4451 4460 4452 total += x; 4461 - nodes[c->node] += x; 4453 + nodes[node] += x; 4462 4454 } 4463 4455 page = c->partial; 4464 4456 4465 4457 if (page) { 4466 4458 x = page->pobjects; 4467 - total += x; 4468 - nodes[c->node] += x; 4459 + total += x; 4460 + nodes[node] += x; 4469 4461 } 4470 - per_cpu[c->node]++; 4462 + per_cpu[node]++; 4471 4463 } 4472 4464 } 4473 4465
+6
net/bridge/br_netlink.c
··· 18 18 #include <net/sock.h> 19 19 20 20 #include "br_private.h" 21 + #include "br_private_stp.h" 21 22 22 23 static inline size_t br_nlmsg_size(void) 23 24 { ··· 189 188 190 189 p->state = new_state; 191 190 br_log_state(p); 191 + 192 + spin_lock_bh(&p->br->lock); 193 + br_port_state_selection(p->br); 194 + spin_unlock_bh(&p->br->lock); 195 + 192 196 br_ifinfo_notify(RTM_NEWLINK, p); 193 197 194 198 return 0;
+14 -15
net/bridge/br_stp.c
··· 399 399 struct net_bridge_port *p; 400 400 unsigned int liveports = 0; 401 401 402 - /* Don't change port states if userspace is handling STP */ 403 - if (br->stp_enabled == BR_USER_STP) 404 - return; 405 - 406 402 list_for_each_entry(p, &br->port_list, list) { 407 403 if (p->state == BR_STATE_DISABLED) 408 404 continue; 409 405 410 - if (p->port_no == br->root_port) { 411 - p->config_pending = 0; 412 - p->topology_change_ack = 0; 413 - br_make_forwarding(p); 414 - } else if (br_is_designated_port(p)) { 415 - del_timer(&p->message_age_timer); 416 - br_make_forwarding(p); 417 - } else { 418 - p->config_pending = 0; 419 - p->topology_change_ack = 0; 420 - br_make_blocking(p); 406 + /* Don't change port states if userspace is handling STP */ 407 + if (br->stp_enabled != BR_USER_STP) { 408 + if (p->port_no == br->root_port) { 409 + p->config_pending = 0; 410 + p->topology_change_ack = 0; 411 + br_make_forwarding(p); 412 + } else if (br_is_designated_port(p)) { 413 + del_timer(&p->message_age_timer); 414 + br_make_forwarding(p); 415 + } else { 416 + p->config_pending = 0; 417 + p->topology_change_ack = 0; 418 + br_make_blocking(p); 419 + } 421 420 } 422 421 423 422 if (p->state == BR_STATE_FORWARDING)
+1 -1
net/ceph/osd_client.c
··· 244 244 ceph_pagelist_init(req->r_trail); 245 245 } 246 246 /* create request message; allow space for oid */ 247 - msg_size += 40; 247 + msg_size += MAX_OBJ_NAME_SIZE; 248 248 if (snapc) 249 249 msg_size += sizeof(u64) * snapc->num_snaps; 250 250 if (use_mempool)
+8 -1
net/core/dev.c
··· 1387 1387 for_each_net(net) { 1388 1388 for_each_netdev(net, dev) { 1389 1389 if (dev == last) 1390 - break; 1390 + goto outroll; 1391 1391 1392 1392 if (dev->flags & IFF_UP) { 1393 1393 nb->notifier_call(nb, NETDEV_GOING_DOWN, dev); ··· 1398 1398 } 1399 1399 } 1400 1400 1401 + outroll: 1401 1402 raw_notifier_chain_unregister(&netdev_chain, nb); 1402 1403 goto unlock; 1403 1404 } ··· 4208 4207 { 4209 4208 return seq_open_net(inode, file, &dev_seq_ops, 4210 4209 sizeof(struct dev_iter_state)); 4210 + } 4211 + 4212 + int dev_seq_open_ops(struct inode *inode, struct file *file, 4213 + const struct seq_operations *ops) 4214 + { 4215 + return seq_open_net(inode, file, ops, sizeof(struct dev_iter_state)); 4211 4216 } 4212 4217 4213 4218 static const struct file_operations dev_seq_fops = {
+1 -2
net/core/dev_addr_lists.c
··· 696 696 697 697 static int dev_mc_seq_open(struct inode *inode, struct file *file) 698 698 { 699 - return seq_open_net(inode, file, &dev_mc_seq_ops, 700 - sizeof(struct seq_net_private)); 699 + return dev_seq_open_ops(inode, file, &dev_mc_seq_ops); 701 700 } 702 701 703 702 static const struct file_operations dev_mc_seq_fops = {
+5 -12
net/decnet/dn_timer.c
··· 36 36 37 37 void dn_start_slow_timer(struct sock *sk) 38 38 { 39 - sk->sk_timer.expires = jiffies + SLOW_INTERVAL; 40 - sk->sk_timer.function = dn_slow_timer; 41 - sk->sk_timer.data = (unsigned long)sk; 42 - 43 - add_timer(&sk->sk_timer); 39 + setup_timer(&sk->sk_timer, dn_slow_timer, (unsigned long)sk); 40 + sk_reset_timer(sk, &sk->sk_timer, jiffies + SLOW_INTERVAL); 44 41 } 45 42 46 43 void dn_stop_slow_timer(struct sock *sk) 47 44 { 48 - del_timer(&sk->sk_timer); 45 + sk_stop_timer(sk, &sk->sk_timer); 49 46 } 50 47 51 48 static void dn_slow_timer(unsigned long arg) ··· 50 53 struct sock *sk = (struct sock *)arg; 51 54 struct dn_scp *scp = DN_SK(sk); 52 55 53 - sock_hold(sk); 54 56 bh_lock_sock(sk); 55 57 56 58 if (sock_owned_by_user(sk)) { 57 - sk->sk_timer.expires = jiffies + HZ / 10; 58 - add_timer(&sk->sk_timer); 59 + sk_reset_timer(sk, &sk->sk_timer, jiffies + HZ / 10); 59 60 goto out; 60 61 } 61 62 ··· 95 100 scp->keepalive_fxn(sk); 96 101 } 97 102 98 - sk->sk_timer.expires = jiffies + SLOW_INTERVAL; 99 - 100 - add_timer(&sk->sk_timer); 103 + sk_reset_timer(sk, &sk->sk_timer, jiffies + SLOW_INTERVAL); 101 104 out: 102 105 bh_unlock_sock(sk); 103 106 sock_put(sk);
+5
net/ipv4/devinet.c
··· 1490 1490 void __user *buffer, 1491 1491 size_t *lenp, loff_t *ppos) 1492 1492 { 1493 + int old_value = *(int *)ctl->data; 1493 1494 int ret = proc_dointvec(ctl, write, buffer, lenp, ppos); 1495 + int new_value = *(int *)ctl->data; 1494 1496 1495 1497 if (write) { 1496 1498 struct ipv4_devconf *cnf = ctl->extra1; ··· 1503 1501 1504 1502 if (cnf == net->ipv4.devconf_dflt) 1505 1503 devinet_copy_dflt_conf(net, i); 1504 + if (i == IPV4_DEVCONF_ACCEPT_LOCAL - 1) 1505 + if ((new_value == 0) && (old_value != 0)) 1506 + rt_cache_flush(net, 0); 1506 1507 } 1507 1508 1508 1509 return ret;
+2 -1
net/ipv4/netfilter.c
··· 64 64 /* Change in oif may mean change in hh_len. */ 65 65 hh_len = skb_dst(skb)->dev->hard_header_len; 66 66 if (skb_headroom(skb) < hh_len && 67 - pskb_expand_head(skb, hh_len - skb_headroom(skb), 0, GFP_ATOMIC)) 67 + pskb_expand_head(skb, HH_DATA_ALIGN(hh_len - skb_headroom(skb)), 68 + 0, GFP_ATOMIC)) 68 69 return -1; 69 70 70 71 return 0;
+34 -10
net/ipv4/route.c
··· 130 130 static int ip_rt_min_pmtu __read_mostly = 512 + 20 + 20; 131 131 static int ip_rt_min_advmss __read_mostly = 256; 132 132 static int rt_chain_length_max __read_mostly = 20; 133 + static int redirect_genid; 133 134 134 135 /* 135 136 * Interface to generic destination cache. ··· 416 415 else { 417 416 struct rtable *r = v; 418 417 struct neighbour *n; 419 - int len; 418 + int len, HHUptod; 420 419 420 + rcu_read_lock(); 421 421 n = dst_get_neighbour(&r->dst); 422 + HHUptod = (n && (n->nud_state & NUD_CONNECTED)) ? 1 : 0; 423 + rcu_read_unlock(); 424 + 422 425 seq_printf(seq, "%s\t%08X\t%08X\t%8X\t%d\t%u\t%d\t" 423 426 "%08X\t%d\t%u\t%u\t%02X\t%d\t%1d\t%08X%n", 424 427 r->dst.dev ? r->dst.dev->name : "*", ··· 436 431 dst_metric(&r->dst, RTAX_RTTVAR)), 437 432 r->rt_key_tos, 438 433 -1, 439 - (n && (n->nud_state & NUD_CONNECTED)) ? 1 : 0, 434 + HHUptod, 440 435 r->rt_spec_dst, &len); 441 436 442 437 seq_printf(seq, "%*s\n", 127 - len, ""); ··· 841 836 842 837 get_random_bytes(&shuffle, sizeof(shuffle)); 843 838 atomic_add(shuffle + 1U, &net->ipv4.rt_genid); 839 + redirect_genid++; 844 840 } 845 841 846 842 /* ··· 1391 1385 1392 1386 peer = rt->peer; 1393 1387 if (peer) { 1394 - if (peer->redirect_learned.a4 != new_gw) { 1388 + if (peer->redirect_learned.a4 != new_gw || 1389 + peer->redirect_genid != redirect_genid) { 1395 1390 peer->redirect_learned.a4 = new_gw; 1391 + peer->redirect_genid = redirect_genid; 1396 1392 atomic_inc(&__rt_peer_genid); 1397 1393 } 1398 1394 check_peer_redir(&rt->dst, peer); ··· 1687 1679 } 1688 1680 1689 1681 1690 - static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie) 1682 + static struct rtable *ipv4_validate_peer(struct rtable *rt) 1691 1683 { 1692 - struct rtable *rt = (struct rtable *) dst; 1693 - 1694 - if (rt_is_expired(rt)) 1695 - return NULL; 1696 1684 if (rt->rt_peer_genid != rt_peer_genid()) { 1697 1685 struct inet_peer *peer; 1698 1686 ··· 1697 1693 1698 1694 peer = rt->peer; 1699 1695 if (peer) 
{ 1700 - check_peer_pmtu(dst, peer); 1696 + check_peer_pmtu(&rt->dst, peer); 1701 1697 1698 + if (peer->redirect_genid != redirect_genid) 1699 + peer->redirect_learned.a4 = 0; 1702 1700 if (peer->redirect_learned.a4 && 1703 1701 peer->redirect_learned.a4 != rt->rt_gateway) { 1704 - if (check_peer_redir(dst, peer)) 1702 + if (check_peer_redir(&rt->dst, peer)) 1705 1703 return NULL; 1706 1704 } 1707 1705 } 1708 1706 1709 1707 rt->rt_peer_genid = rt_peer_genid(); 1710 1708 } 1709 + return rt; 1710 + } 1711 + 1712 + static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie) 1713 + { 1714 + struct rtable *rt = (struct rtable *) dst; 1715 + 1716 + if (rt_is_expired(rt)) 1717 + return NULL; 1718 + dst = (struct dst_entry *) ipv4_validate_peer(rt); 1711 1719 return dst; 1712 1720 } 1713 1721 ··· 1867 1851 dst_init_metrics(&rt->dst, peer->metrics, false); 1868 1852 1869 1853 check_peer_pmtu(&rt->dst, peer); 1854 + if (peer->redirect_genid != redirect_genid) 1855 + peer->redirect_learned.a4 = 0; 1870 1856 if (peer->redirect_learned.a4 && 1871 1857 peer->redirect_learned.a4 != rt->rt_gateway) { 1872 1858 rt->rt_gateway = peer->redirect_learned.a4; ··· 2374 2356 rth->rt_mark == skb->mark && 2375 2357 net_eq(dev_net(rth->dst.dev), net) && 2376 2358 !rt_is_expired(rth)) { 2359 + rth = ipv4_validate_peer(rth); 2360 + if (!rth) 2361 + continue; 2377 2362 if (noref) { 2378 2363 dst_use_noref(&rth->dst, jiffies); 2379 2364 skb_dst_set_noref(skb, &rth->dst); ··· 2752 2731 (IPTOS_RT_MASK | RTO_ONLINK)) && 2753 2732 net_eq(dev_net(rth->dst.dev), net) && 2754 2733 !rt_is_expired(rth)) { 2734 + rth = ipv4_validate_peer(rth); 2735 + if (!rth) 2736 + continue; 2755 2737 dst_use(&rth->dst, jiffies); 2756 2738 RT_CACHE_STAT_INC(out_hit); 2757 2739 rcu_read_unlock_bh();
+8 -7
net/ipv4/udp.c
··· 1164 1164 struct inet_sock *inet = inet_sk(sk); 1165 1165 struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name; 1166 1166 struct sk_buff *skb; 1167 - unsigned int ulen; 1167 + unsigned int ulen, copied; 1168 1168 int peeked; 1169 1169 int err; 1170 1170 int is_udplite = IS_UDPLITE(sk); ··· 1186 1186 goto out; 1187 1187 1188 1188 ulen = skb->len - sizeof(struct udphdr); 1189 - if (len > ulen) 1190 - len = ulen; 1191 - else if (len < ulen) 1189 + copied = len; 1190 + if (copied > ulen) 1191 + copied = ulen; 1192 + else if (copied < ulen) 1192 1193 msg->msg_flags |= MSG_TRUNC; 1193 1194 1194 1195 /* ··· 1198 1197 * coverage checksum (UDP-Lite), do it before the copy. 1199 1198 */ 1200 1199 1201 - if (len < ulen || UDP_SKB_CB(skb)->partial_cov) { 1200 + if (copied < ulen || UDP_SKB_CB(skb)->partial_cov) { 1202 1201 if (udp_lib_checksum_complete(skb)) 1203 1202 goto csum_copy_err; 1204 1203 } 1205 1204 1206 1205 if (skb_csum_unnecessary(skb)) 1207 1206 err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), 1208 - msg->msg_iov, len); 1207 + msg->msg_iov, copied); 1209 1208 else { 1210 1209 err = skb_copy_and_csum_datagram_iovec(skb, 1211 1210 sizeof(struct udphdr), ··· 1234 1233 if (inet->cmsg_flags) 1235 1234 ip_cmsg_recv(msg, skb); 1236 1235 1237 - err = len; 1236 + err = copied; 1238 1237 if (flags & MSG_TRUNC) 1239 1238 err = ulen; 1240 1239
+1 -1
net/ipv6/ipv6_sockglue.c
··· 503 503 goto e_inval; 504 504 if (val > 255 || val < -1) 505 505 goto e_inval; 506 - np->mcast_hops = val; 506 + np->mcast_hops = (val == -1 ? IPV6_DEFAULT_MCASTHOPS : val); 507 507 retv = 0; 508 508 break; 509 509
+8 -7
net/ipv6/udp.c
··· 340 340 struct ipv6_pinfo *np = inet6_sk(sk); 341 341 struct inet_sock *inet = inet_sk(sk); 342 342 struct sk_buff *skb; 343 - unsigned int ulen; 343 + unsigned int ulen, copied; 344 344 int peeked; 345 345 int err; 346 346 int is_udplite = IS_UDPLITE(sk); ··· 363 363 goto out; 364 364 365 365 ulen = skb->len - sizeof(struct udphdr); 366 - if (len > ulen) 367 - len = ulen; 368 - else if (len < ulen) 366 + copied = len; 367 + if (copied > ulen) 368 + copied = ulen; 369 + else if (copied < ulen) 369 370 msg->msg_flags |= MSG_TRUNC; 370 371 371 372 is_udp4 = (skb->protocol == htons(ETH_P_IP)); ··· 377 376 * coverage checksum (UDP-Lite), do it before the copy. 378 377 */ 379 378 380 - if (len < ulen || UDP_SKB_CB(skb)->partial_cov) { 379 + if (copied < ulen || UDP_SKB_CB(skb)->partial_cov) { 381 380 if (udp_lib_checksum_complete(skb)) 382 381 goto csum_copy_err; 383 382 } 384 383 385 384 if (skb_csum_unnecessary(skb)) 386 385 err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), 387 - msg->msg_iov,len); 386 + msg->msg_iov, copied ); 388 387 else { 389 388 err = skb_copy_and_csum_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov); 390 389 if (err == -EINVAL) ··· 432 431 datagram_recv_ctl(sk, msg, skb); 433 432 } 434 433 435 - err = len; 434 + err = copied; 436 435 if (flags & MSG_TRUNC) 437 436 err = ulen; 438 437
+1 -1
net/l2tp/l2tp_core.c
··· 1072 1072 1073 1073 /* Get routing info from the tunnel socket */ 1074 1074 skb_dst_drop(skb); 1075 - skb_dst_set(skb, dst_clone(__sk_dst_get(sk))); 1075 + skb_dst_set(skb, dst_clone(__sk_dst_check(sk, 0))); 1076 1076 1077 1077 inet = inet_sk(sk); 1078 1078 fl = &inet->cork.fl;
+39 -3
net/mac80211/agg-tx.c
··· 162 162 return -ENOENT; 163 163 } 164 164 165 + /* if we're already stopping ignore any new requests to stop */ 166 + if (test_bit(HT_AGG_STATE_STOPPING, &tid_tx->state)) { 167 + spin_unlock_bh(&sta->lock); 168 + return -EALREADY; 169 + } 170 + 165 171 if (test_bit(HT_AGG_STATE_WANT_START, &tid_tx->state)) { 166 172 /* not even started yet! */ 167 173 ieee80211_assign_tid_tx(sta, tid, NULL); ··· 176 170 return 0; 177 171 } 178 172 173 + set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state); 174 + 179 175 spin_unlock_bh(&sta->lock); 180 176 181 177 #ifdef CONFIG_MAC80211_HT_DEBUG 182 178 printk(KERN_DEBUG "Tx BA session stop requested for %pM tid %u\n", 183 179 sta->sta.addr, tid); 184 180 #endif /* CONFIG_MAC80211_HT_DEBUG */ 185 - 186 - set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state); 187 181 188 182 del_timer_sync(&tid_tx->addba_resp_timer); 189 183 ··· 193 187 * with locking to ensure proper access. 194 188 */ 195 189 clear_bit(HT_AGG_STATE_OPERATIONAL, &tid_tx->state); 190 + 191 + /* 192 + * There might be a few packets being processed right now (on 193 + * another CPU) that have already gotten past the aggregation 194 + * check when it was still OPERATIONAL and consequently have 195 + * IEEE80211_TX_CTL_AMPDU set. In that case, this code might 196 + * call into the driver at the same time or even before the 197 + * TX paths calls into it, which could confuse the driver. 198 + * 199 + * Wait for all currently running TX paths to finish before 200 + * telling the driver. New packets will not go through since 201 + * the aggregation session is no longer OPERATIONAL. 
202 + */ 203 + synchronize_net(); 196 204 197 205 tid_tx->stop_initiator = initiator; 198 206 tid_tx->tx_stop = tx; ··· 773 753 goto out; 774 754 } 775 755 776 - del_timer(&tid_tx->addba_resp_timer); 756 + del_timer_sync(&tid_tx->addba_resp_timer); 777 757 778 758 #ifdef CONFIG_MAC80211_HT_DEBUG 779 759 printk(KERN_DEBUG "switched off addBA timer for tid %d\n", tid); 780 760 #endif 761 + 762 + /* 763 + * addba_resp_timer may have fired before we got here, and 764 + * caused WANT_STOP to be set. If the stop then was already 765 + * processed further, STOPPING might be set. 766 + */ 767 + if (test_bit(HT_AGG_STATE_WANT_STOP, &tid_tx->state) || 768 + test_bit(HT_AGG_STATE_STOPPING, &tid_tx->state)) { 769 + #ifdef CONFIG_MAC80211_HT_DEBUG 770 + printk(KERN_DEBUG 771 + "got addBA resp for tid %d but we already gave up\n", 772 + tid); 773 + #endif 774 + goto out; 775 + } 776 + 781 777 /* 782 778 * IEEE 802.11-2007 7.3.1.14: 783 779 * In an ADDBA Response frame, when the Status Code field
-1
net/netfilter/Kconfig
··· 201 201 202 202 config NF_CONNTRACK_NETBIOS_NS 203 203 tristate "NetBIOS name service protocol support" 204 - depends on NETFILTER_ADVANCED 205 204 select NF_CONNTRACK_BROADCAST 206 205 help 207 206 NetBIOS name service requests are sent as broadcast messages from an
+1 -1
net/netfilter/ipset/ip_set_hash_ipport.c
··· 158 158 const struct ip_set_hash *h = set->data; 159 159 ipset_adtfn adtfn = set->variant->adt[adt]; 160 160 struct hash_ipport4_elem data = { }; 161 - u32 ip, ip_to, p = 0, port, port_to; 161 + u32 ip, ip_to = 0, p = 0, port, port_to; 162 162 u32 timeout = h->timeout; 163 163 bool with_ports = false; 164 164 int ret;
+1 -1
net/netfilter/ipset/ip_set_hash_ipportip.c
··· 162 162 const struct ip_set_hash *h = set->data; 163 163 ipset_adtfn adtfn = set->variant->adt[adt]; 164 164 struct hash_ipportip4_elem data = { }; 165 - u32 ip, ip_to, p = 0, port, port_to; 165 + u32 ip, ip_to = 0, p = 0, port, port_to; 166 166 u32 timeout = h->timeout; 167 167 bool with_ports = false; 168 168 int ret;
+1 -1
net/netfilter/ipset/ip_set_hash_ipportnet.c
··· 184 184 const struct ip_set_hash *h = set->data; 185 185 ipset_adtfn adtfn = set->variant->adt[adt]; 186 186 struct hash_ipportnet4_elem data = { .cidr = HOST_MASK }; 187 - u32 ip, ip_to, p = 0, port, port_to; 187 + u32 ip, ip_to = 0, p = 0, port, port_to; 188 188 u32 ip2_from = 0, ip2_to, ip2_last, ip2; 189 189 u32 timeout = h->timeout; 190 190 bool with_ports = false;
+18 -19
net/netfilter/nf_conntrack_ecache.c
··· 27 27 28 28 static DEFINE_MUTEX(nf_ct_ecache_mutex); 29 29 30 - struct nf_ct_event_notifier __rcu *nf_conntrack_event_cb __read_mostly; 31 - EXPORT_SYMBOL_GPL(nf_conntrack_event_cb); 32 - 33 - struct nf_exp_event_notifier __rcu *nf_expect_event_cb __read_mostly; 34 - EXPORT_SYMBOL_GPL(nf_expect_event_cb); 35 - 36 30 /* deliver cached events and clear cache entry - must be called with locally 37 31 * disabled softirqs */ 38 32 void nf_ct_deliver_cached_events(struct nf_conn *ct) 39 33 { 34 + struct net *net = nf_ct_net(ct); 40 35 unsigned long events; 41 36 struct nf_ct_event_notifier *notify; 42 37 struct nf_conntrack_ecache *e; 43 38 44 39 rcu_read_lock(); 45 - notify = rcu_dereference(nf_conntrack_event_cb); 40 + notify = rcu_dereference(net->ct.nf_conntrack_event_cb); 46 41 if (notify == NULL) 47 42 goto out_unlock; 48 43 ··· 78 83 } 79 84 EXPORT_SYMBOL_GPL(nf_ct_deliver_cached_events); 80 85 81 - int nf_conntrack_register_notifier(struct nf_ct_event_notifier *new) 86 + int nf_conntrack_register_notifier(struct net *net, 87 + struct nf_ct_event_notifier *new) 82 88 { 83 89 int ret = 0; 84 90 struct nf_ct_event_notifier *notify; 85 91 86 92 mutex_lock(&nf_ct_ecache_mutex); 87 - notify = rcu_dereference_protected(nf_conntrack_event_cb, 93 + notify = rcu_dereference_protected(net->ct.nf_conntrack_event_cb, 88 94 lockdep_is_held(&nf_ct_ecache_mutex)); 89 95 if (notify != NULL) { 90 96 ret = -EBUSY; 91 97 goto out_unlock; 92 98 } 93 - RCU_INIT_POINTER(nf_conntrack_event_cb, new); 99 + RCU_INIT_POINTER(net->ct.nf_conntrack_event_cb, new); 94 100 mutex_unlock(&nf_ct_ecache_mutex); 95 101 return ret; 96 102 ··· 101 105 } 102 106 EXPORT_SYMBOL_GPL(nf_conntrack_register_notifier); 103 107 104 - void nf_conntrack_unregister_notifier(struct nf_ct_event_notifier *new) 108 + void nf_conntrack_unregister_notifier(struct net *net, 109 + struct nf_ct_event_notifier *new) 105 110 { 106 111 struct nf_ct_event_notifier *notify; 107 112 108 113 mutex_lock(&nf_ct_ecache_mutex); 
109 - notify = rcu_dereference_protected(nf_conntrack_event_cb, 114 + notify = rcu_dereference_protected(net->ct.nf_conntrack_event_cb, 110 115 lockdep_is_held(&nf_ct_ecache_mutex)); 111 116 BUG_ON(notify != new); 112 - RCU_INIT_POINTER(nf_conntrack_event_cb, NULL); 117 + RCU_INIT_POINTER(net->ct.nf_conntrack_event_cb, NULL); 113 118 mutex_unlock(&nf_ct_ecache_mutex); 114 119 } 115 120 EXPORT_SYMBOL_GPL(nf_conntrack_unregister_notifier); 116 121 117 - int nf_ct_expect_register_notifier(struct nf_exp_event_notifier *new) 122 + int nf_ct_expect_register_notifier(struct net *net, 123 + struct nf_exp_event_notifier *new) 118 124 { 119 125 int ret = 0; 120 126 struct nf_exp_event_notifier *notify; 121 127 122 128 mutex_lock(&nf_ct_ecache_mutex); 123 - notify = rcu_dereference_protected(nf_expect_event_cb, 129 + notify = rcu_dereference_protected(net->ct.nf_expect_event_cb, 124 130 lockdep_is_held(&nf_ct_ecache_mutex)); 125 131 if (notify != NULL) { 126 132 ret = -EBUSY; 127 133 goto out_unlock; 128 134 } 129 - RCU_INIT_POINTER(nf_expect_event_cb, new); 135 + RCU_INIT_POINTER(net->ct.nf_expect_event_cb, new); 130 136 mutex_unlock(&nf_ct_ecache_mutex); 131 137 return ret; 132 138 ··· 138 140 } 139 141 EXPORT_SYMBOL_GPL(nf_ct_expect_register_notifier); 140 142 141 - void nf_ct_expect_unregister_notifier(struct nf_exp_event_notifier *new) 143 + void nf_ct_expect_unregister_notifier(struct net *net, 144 + struct nf_exp_event_notifier *new) 142 145 { 143 146 struct nf_exp_event_notifier *notify; 144 147 145 148 mutex_lock(&nf_ct_ecache_mutex); 146 - notify = rcu_dereference_protected(nf_expect_event_cb, 149 + notify = rcu_dereference_protected(net->ct.nf_expect_event_cb, 147 150 lockdep_is_held(&nf_ct_ecache_mutex)); 148 151 BUG_ON(notify != new); 149 - RCU_INIT_POINTER(nf_expect_event_cb, NULL); 152 + RCU_INIT_POINTER(net->ct.nf_expect_event_cb, NULL); 150 153 mutex_unlock(&nf_ct_ecache_mutex); 151 154 } 152 155 EXPORT_SYMBOL_GPL(nf_ct_expect_unregister_notifier);
+52 -21
net/netfilter/nf_conntrack_netlink.c
··· 4 4 * (C) 2001 by Jay Schulist <jschlst@samba.org> 5 5 * (C) 2002-2006 by Harald Welte <laforge@gnumonks.org> 6 6 * (C) 2003 by Patrick Mchardy <kaber@trash.net> 7 - * (C) 2005-2008 by Pablo Neira Ayuso <pablo@netfilter.org> 7 + * (C) 2005-2011 by Pablo Neira Ayuso <pablo@netfilter.org> 8 8 * 9 9 * Initial connection tracking via netlink development funded and 10 10 * generally made possible by Network Robots, Inc. (www.networkrobots.com) ··· 2163 2163 MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_CTNETLINK); 2164 2164 MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_CTNETLINK_EXP); 2165 2165 2166 + static int __net_init ctnetlink_net_init(struct net *net) 2167 + { 2168 + #ifdef CONFIG_NF_CONNTRACK_EVENTS 2169 + int ret; 2170 + 2171 + ret = nf_conntrack_register_notifier(net, &ctnl_notifier); 2172 + if (ret < 0) { 2173 + pr_err("ctnetlink_init: cannot register notifier.\n"); 2174 + goto err_out; 2175 + } 2176 + 2177 + ret = nf_ct_expect_register_notifier(net, &ctnl_notifier_exp); 2178 + if (ret < 0) { 2179 + pr_err("ctnetlink_init: cannot expect register notifier.\n"); 2180 + goto err_unreg_notifier; 2181 + } 2182 + #endif 2183 + return 0; 2184 + 2185 + #ifdef CONFIG_NF_CONNTRACK_EVENTS 2186 + err_unreg_notifier: 2187 + nf_conntrack_unregister_notifier(net, &ctnl_notifier); 2188 + err_out: 2189 + return ret; 2190 + #endif 2191 + } 2192 + 2193 + static void ctnetlink_net_exit(struct net *net) 2194 + { 2195 + #ifdef CONFIG_NF_CONNTRACK_EVENTS 2196 + nf_ct_expect_unregister_notifier(net, &ctnl_notifier_exp); 2197 + nf_conntrack_unregister_notifier(net, &ctnl_notifier); 2198 + #endif 2199 + } 2200 + 2201 + static void __net_exit ctnetlink_net_exit_batch(struct list_head *net_exit_list) 2202 + { 2203 + struct net *net; 2204 + 2205 + list_for_each_entry(net, net_exit_list, exit_list) 2206 + ctnetlink_net_exit(net); 2207 + } 2208 + 2209 + static struct pernet_operations ctnetlink_net_ops = { 2210 + .init = ctnetlink_net_init, 2211 + .exit_batch = ctnetlink_net_exit_batch, 2212 + }; 2213 + 
2166 2214 static int __init ctnetlink_init(void) 2167 2215 { 2168 2216 int ret; ··· 2228 2180 goto err_unreg_subsys; 2229 2181 } 2230 2182 2231 - #ifdef CONFIG_NF_CONNTRACK_EVENTS 2232 - ret = nf_conntrack_register_notifier(&ctnl_notifier); 2233 - if (ret < 0) { 2234 - pr_err("ctnetlink_init: cannot register notifier.\n"); 2183 + if (register_pernet_subsys(&ctnetlink_net_ops)) { 2184 + pr_err("ctnetlink_init: cannot register pernet operations\n"); 2235 2185 goto err_unreg_exp_subsys; 2236 2186 } 2237 2187 2238 - ret = nf_ct_expect_register_notifier(&ctnl_notifier_exp); 2239 - if (ret < 0) { 2240 - pr_err("ctnetlink_init: cannot expect register notifier.\n"); 2241 - goto err_unreg_notifier; 2242 - } 2243 - #endif 2244 - 2245 2188 return 0; 2246 2189 2247 - #ifdef CONFIG_NF_CONNTRACK_EVENTS 2248 - err_unreg_notifier: 2249 - nf_conntrack_unregister_notifier(&ctnl_notifier); 2250 2190 err_unreg_exp_subsys: 2251 2191 nfnetlink_subsys_unregister(&ctnl_exp_subsys); 2252 - #endif 2253 2192 err_unreg_subsys: 2254 2193 nfnetlink_subsys_unregister(&ctnl_subsys); 2255 2194 err_out: ··· 2248 2213 pr_info("ctnetlink: unregistering from nfnetlink.\n"); 2249 2214 2250 2215 nf_ct_remove_userspace_expectations(); 2251 - #ifdef CONFIG_NF_CONNTRACK_EVENTS 2252 - nf_ct_expect_unregister_notifier(&ctnl_notifier_exp); 2253 - nf_conntrack_unregister_notifier(&ctnl_notifier); 2254 - #endif 2255 - 2216 + unregister_pernet_subsys(&ctnetlink_net_ops); 2256 2217 nfnetlink_subsys_unregister(&ctnl_exp_subsys); 2257 2218 nfnetlink_subsys_unregister(&ctnl_subsys); 2258 2219 }
+14 -8
net/netlabel/netlabel_kapi.c
··· 111 111 struct netlbl_domaddr_map *addrmap = NULL; 112 112 struct netlbl_domaddr4_map *map4 = NULL; 113 113 struct netlbl_domaddr6_map *map6 = NULL; 114 - const struct in_addr *addr4, *mask4; 115 - const struct in6_addr *addr6, *mask6; 116 114 117 115 entry = kzalloc(sizeof(*entry), GFP_ATOMIC); 118 116 if (entry == NULL) ··· 131 133 INIT_LIST_HEAD(&addrmap->list6); 132 134 133 135 switch (family) { 134 - case AF_INET: 135 - addr4 = addr; 136 - mask4 = mask; 136 + case AF_INET: { 137 + const struct in_addr *addr4 = addr; 138 + const struct in_addr *mask4 = mask; 137 139 map4 = kzalloc(sizeof(*map4), GFP_ATOMIC); 138 140 if (map4 == NULL) 139 141 goto cfg_unlbl_map_add_failure; ··· 146 148 if (ret_val != 0) 147 149 goto cfg_unlbl_map_add_failure; 148 150 break; 149 - case AF_INET6: 150 - addr6 = addr; 151 - mask6 = mask; 151 + } 152 + #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) 153 + case AF_INET6: { 154 + const struct in6_addr *addr6 = addr; 155 + const struct in6_addr *mask6 = mask; 152 156 map6 = kzalloc(sizeof(*map6), GFP_ATOMIC); 153 157 if (map6 == NULL) 154 158 goto cfg_unlbl_map_add_failure; ··· 167 167 if (ret_val != 0) 168 168 goto cfg_unlbl_map_add_failure; 169 169 break; 170 + } 171 + #endif /* IPv6 */ 170 172 default: 171 173 goto cfg_unlbl_map_add_failure; 172 174 break; ··· 227 225 case AF_INET: 228 226 addr_len = sizeof(struct in_addr); 229 227 break; 228 + #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) 230 229 case AF_INET6: 231 230 addr_len = sizeof(struct in6_addr); 232 231 break; 232 + #endif /* IPv6 */ 233 233 default: 234 234 return -EPFNOSUPPORT; 235 235 } ··· 270 266 case AF_INET: 271 267 addr_len = sizeof(struct in_addr); 272 268 break; 269 + #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) 273 270 case AF_INET6: 274 271 addr_len = sizeof(struct in6_addr); 275 272 break; 273 + #endif /* IPv6 */ 276 274 default: 277 275 return -EPFNOSUPPORT; 278 276 }
+2 -2
net/sched/sch_red.c
··· 209 209 ctl->Plog, ctl->Scell_log, 210 210 nla_data(tb[TCA_RED_STAB])); 211 211 212 - if (skb_queue_empty(&sch->q)) 213 - red_end_of_idle_period(&q->parms); 212 + if (!q->qdisc->q.qlen) 213 + red_start_of_idle_period(&q->parms); 214 214 215 215 sch_tree_unlock(sch); 216 216 return 0;
+20 -11
net/sched/sch_teql.c
··· 225 225 226 226 227 227 static int 228 - __teql_resolve(struct sk_buff *skb, struct sk_buff *skb_res, struct net_device *dev) 228 + __teql_resolve(struct sk_buff *skb, struct sk_buff *skb_res, 229 + struct net_device *dev, struct netdev_queue *txq, 230 + struct neighbour *mn) 229 231 { 230 - struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, 0); 231 - struct teql_sched_data *q = qdisc_priv(dev_queue->qdisc); 232 - struct neighbour *mn = dst_get_neighbour(skb_dst(skb)); 232 + struct teql_sched_data *q = qdisc_priv(txq->qdisc); 233 233 struct neighbour *n = q->ncache; 234 234 235 235 if (mn->tbl == NULL) ··· 262 262 } 263 263 264 264 static inline int teql_resolve(struct sk_buff *skb, 265 - struct sk_buff *skb_res, struct net_device *dev) 265 + struct sk_buff *skb_res, 266 + struct net_device *dev, 267 + struct netdev_queue *txq) 266 268 { 267 - struct netdev_queue *txq = netdev_get_tx_queue(dev, 0); 269 + struct dst_entry *dst = skb_dst(skb); 270 + struct neighbour *mn; 271 + int res; 272 + 268 273 if (txq->qdisc == &noop_qdisc) 269 274 return -ENODEV; 270 275 271 - if (dev->header_ops == NULL || 272 - skb_dst(skb) == NULL || 273 - dst_get_neighbour(skb_dst(skb)) == NULL) 276 + if (!dev->header_ops || !dst) 274 277 return 0; 275 - return __teql_resolve(skb, skb_res, dev); 278 + 279 + rcu_read_lock(); 280 + mn = dst_get_neighbour(dst); 281 + res = mn ? __teql_resolve(skb, skb_res, dev, txq, mn) : 0; 282 + rcu_read_unlock(); 283 + 284 + return res; 276 285 } 277 286 278 287 static netdev_tx_t teql_master_xmit(struct sk_buff *skb, struct net_device *dev) ··· 316 307 continue; 317 308 } 318 309 319 - switch (teql_resolve(skb, skb_res, slave)) { 310 + switch (teql_resolve(skb, skb_res, slave, slave_txq)) { 320 311 case 0: 321 312 if (__netif_tx_trylock(slave_txq)) { 322 313 unsigned int length = qdisc_pkt_len(skb);
+1 -1
net/sctp/auth.c
··· 82 82 struct sctp_auth_bytes *key; 83 83 84 84 /* Verify that we are not going to overflow INT_MAX */ 85 - if ((INT_MAX - key_len) < sizeof(struct sctp_auth_bytes)) 85 + if (key_len > (INT_MAX - sizeof(struct sctp_auth_bytes))) 86 86 return NULL; 87 87 88 88 /* Allocate the shared key */
+4 -3
net/sunrpc/xprtsock.c
··· 496 496 struct rpc_rqst *req = task->tk_rqstp; 497 497 struct rpc_xprt *xprt = req->rq_xprt; 498 498 struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt); 499 - int ret = 0; 499 + int ret = -EAGAIN; 500 500 501 501 dprintk("RPC: %5u xmit incomplete (%u left of %u)\n", 502 502 task->tk_pid, req->rq_slen - req->rq_bytes_sent, ··· 508 508 /* Don't race with disconnect */ 509 509 if (xprt_connected(xprt)) { 510 510 if (test_bit(SOCK_ASYNC_NOSPACE, &transport->sock->flags)) { 511 - ret = -EAGAIN; 512 511 /* 513 512 * Notify TCP that we're limited by the application 514 513 * window size ··· 2529 2530 int err; 2530 2531 err = xs_init_anyaddr(args->dstaddr->sa_family, 2531 2532 (struct sockaddr *)&new->srcaddr); 2532 - if (err != 0) 2533 + if (err != 0) { 2534 + xprt_free(xprt); 2533 2535 return ERR_PTR(err); 2536 + } 2534 2537 } 2535 2538 2536 2539 return xprt;
+4
net/unix/af_unix.c
··· 1957 1957 if ((UNIXCB(skb).pid != siocb->scm->pid) || 1958 1958 (UNIXCB(skb).cred != siocb->scm->cred)) { 1959 1959 skb_queue_head(&sk->sk_receive_queue, skb); 1960 + sk->sk_data_ready(sk, skb->len); 1960 1961 break; 1961 1962 } 1962 1963 } else { ··· 1975 1974 chunk = min_t(unsigned int, skb->len, size); 1976 1975 if (memcpy_toiovec(msg->msg_iov, skb->data, chunk)) { 1977 1976 skb_queue_head(&sk->sk_receive_queue, skb); 1977 + sk->sk_data_ready(sk, skb->len); 1978 1978 if (copied == 0) 1979 1979 copied = -EFAULT; 1980 1980 break; ··· 1993 1991 /* put the skb back if we didn't use it up.. */ 1994 1992 if (skb->len) { 1995 1993 skb_queue_head(&sk->sk_receive_queue, skb); 1994 + sk->sk_data_ready(sk, skb->len); 1996 1995 break; 1997 1996 } 1998 1997 ··· 2009 2006 2010 2007 /* put message back and return */ 2011 2008 skb_queue_head(&sk->sk_receive_queue, skb); 2009 + sk->sk_data_ready(sk, skb->len); 2012 2010 break; 2013 2011 } 2014 2012 } while (size);
+2 -2
net/wireless/nl80211.c
··· 89 89 [NL80211_ATTR_IFINDEX] = { .type = NLA_U32 }, 90 90 [NL80211_ATTR_IFNAME] = { .type = NLA_NUL_STRING, .len = IFNAMSIZ-1 }, 91 91 92 - [NL80211_ATTR_MAC] = { .type = NLA_BINARY, .len = ETH_ALEN }, 93 - [NL80211_ATTR_PREV_BSSID] = { .type = NLA_BINARY, .len = ETH_ALEN }, 92 + [NL80211_ATTR_MAC] = { .len = ETH_ALEN }, 93 + [NL80211_ATTR_PREV_BSSID] = { .len = ETH_ALEN }, 94 94 95 95 [NL80211_ATTR_KEY] = { .type = NLA_NESTED, }, 96 96 [NL80211_ATTR_KEY_DATA] = { .type = NLA_BINARY,
+1 -1
sound/pci/cs5535audio/cs5535audio_pcm.c
··· 148 148 struct cs5535audio_dma_desc *desc = 149 149 &((struct cs5535audio_dma_desc *) dma->desc_buf.area)[i]; 150 150 desc->addr = cpu_to_le32(addr); 151 - desc->size = cpu_to_le32(period_bytes); 151 + desc->size = cpu_to_le16(period_bytes); 152 152 desc->ctlreserved = cpu_to_le16(PRD_EOP); 153 153 desc_addr += sizeof(struct cs5535audio_dma_desc); 154 154 addr += period_bytes;
+3 -3
sound/pci/hda/hda_codec.c
··· 4046 4046 4047 4047 /* Search for codec ID */ 4048 4048 for (q = tbl; q->subvendor; q++) { 4049 - unsigned long vendorid = (q->subdevice) | (q->subvendor << 16); 4050 - 4051 - if (vendorid == codec->subsystem_id) 4049 + unsigned int mask = 0xffff0000 | q->subdevice_mask; 4050 + unsigned int id = (q->subdevice | (q->subvendor << 16)) & mask; 4051 + if ((codec->subsystem_id & mask) == id) 4052 4052 break; 4053 4053 } 4054 4054
+19 -9
sound/pci/hda/hda_eld.c
··· 347 347 348 348 for (i = 0; i < size; i++) { 349 349 unsigned int val = hdmi_get_eld_data(codec, nid, i); 350 + /* 351 + * Graphics driver might be writing to ELD buffer right now. 352 + * Just abort. The caller will repoll after a while. 353 + */ 350 354 if (!(val & AC_ELDD_ELD_VALID)) { 351 - if (!i) { 352 - snd_printd(KERN_INFO 353 - "HDMI: invalid ELD data\n"); 354 - ret = -EINVAL; 355 - goto error; 356 - } 357 355 snd_printd(KERN_INFO 358 356 "HDMI: invalid ELD data byte %d\n", i); 359 - val = 0; 360 - } else 361 - val &= AC_ELDD_ELD_DATA; 357 + ret = -EINVAL; 358 + goto error; 359 + } 360 + val &= AC_ELDD_ELD_DATA; 361 + /* 362 + * The first byte cannot be zero. This can happen on some DVI 363 + * connections. Some Intel chips may also need some 250ms delay 364 + * to return non-zero ELD data, even when the graphics driver 365 + * correctly writes ELD content before setting ELD_valid bit. 366 + */ 367 + if (!val && !i) { 368 + snd_printdd(KERN_INFO "HDMI: 0 ELD data\n"); 369 + ret = -EINVAL; 370 + goto error; 371 + } 362 372 buf[i] = val; 363 373 } 364 374
+23 -9
sound/pci/hda/patch_cirrus.c
··· 58 58 unsigned int gpio_mask; 59 59 unsigned int gpio_dir; 60 60 unsigned int gpio_data; 61 + unsigned int gpio_eapd_hp; /* EAPD GPIO bit for headphones */ 62 + unsigned int gpio_eapd_speaker; /* EAPD GPIO bit for speakers */ 61 63 62 64 struct hda_pcm pcm_rec[2]; /* PCM information */ 63 65 ··· 78 76 CS420X_MBP53, 79 77 CS420X_MBP55, 80 78 CS420X_IMAC27, 79 + CS420X_APPLE, 81 80 CS420X_AUTO, 82 81 CS420X_MODELS 83 82 }; ··· 931 928 spdif_present ? 0 : PIN_OUT); 932 929 } 933 930 } 934 - if (spec->board_config == CS420X_MBP53 || 935 - spec->board_config == CS420X_MBP55 || 936 - spec->board_config == CS420X_IMAC27) { 937 - unsigned int gpio = hp_present ? 0x02 : 0x08; 931 + if (spec->gpio_eapd_hp) { 932 + unsigned int gpio = hp_present ? 933 + spec->gpio_eapd_hp : spec->gpio_eapd_speaker; 938 934 snd_hda_codec_write(codec, 0x01, 0, 939 935 AC_VERB_SET_GPIO_DATA, gpio); 940 936 } ··· 1278 1276 [CS420X_MBP53] = "mbp53", 1279 1277 [CS420X_MBP55] = "mbp55", 1280 1278 [CS420X_IMAC27] = "imac27", 1279 + [CS420X_APPLE] = "apple", 1281 1280 [CS420X_AUTO] = "auto", 1282 1281 }; 1283 1282 ··· 1288 1285 SND_PCI_QUIRK(0x10de, 0x0d94, "MacBookAir 3,1(2)", CS420X_MBP55), 1289 1286 SND_PCI_QUIRK(0x10de, 0xcb79, "MacBookPro 5,5", CS420X_MBP55), 1290 1287 SND_PCI_QUIRK(0x10de, 0xcb89, "MacBookPro 7,1", CS420X_MBP55), 1291 - SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27), 1288 + /* this conflicts with too many other models */ 1289 + /*SND_PCI_QUIRK(0x8086, 0x7270, "IMac 27 Inch", CS420X_IMAC27),*/ 1290 + {} /* terminator */ 1291 + }; 1292 + 1293 + static const struct snd_pci_quirk cs420x_codec_cfg_tbl[] = { 1294 + SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE), 1292 1295 {} /* terminator */ 1293 1296 }; 1294 1297 ··· 1376 1367 spec->board_config = 1377 1368 snd_hda_check_board_config(codec, CS420X_MODELS, 1378 1369 cs420x_models, cs420x_cfg_tbl); 1370 + if (spec->board_config < 0) 1371 + spec->board_config = 1372 + snd_hda_check_board_codec_sid_config(codec, 
1373 + CS420X_MODELS, NULL, cs420x_codec_cfg_tbl); 1379 1374 if (spec->board_config >= 0) 1380 1375 fix_pincfg(codec, spec->board_config, cs_pincfgs); 1381 1376 ··· 1387 1374 case CS420X_IMAC27: 1388 1375 case CS420X_MBP53: 1389 1376 case CS420X_MBP55: 1390 - /* GPIO1 = headphones */ 1391 - /* GPIO3 = speakers */ 1392 - spec->gpio_mask = 0x0a; 1393 - spec->gpio_dir = 0x0a; 1377 + case CS420X_APPLE: 1378 + spec->gpio_eapd_hp = 2; /* GPIO1 = headphones */ 1379 + spec->gpio_eapd_speaker = 8; /* GPIO3 = speakers */ 1380 + spec->gpio_mask = spec->gpio_dir = 1381 + spec->gpio_eapd_hp | spec->gpio_eapd_speaker; 1394 1382 break; 1395 1383 } 1396 1384
+10 -6
sound/pci/hda/patch_hdmi.c
··· 69 69 struct hda_codec *codec; 70 70 struct hdmi_eld sink_eld; 71 71 struct delayed_work work; 72 + int repoll_count; 72 73 }; 73 74 74 75 struct hdmi_spec { ··· 749 748 * Unsolicited events 750 749 */ 751 750 752 - static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, bool retry); 751 + static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll); 753 752 754 753 static void hdmi_intrinsic_event(struct hda_codec *codec, unsigned int res) 755 754 { ··· 767 766 if (pin_idx < 0) 768 767 return; 769 768 770 - hdmi_present_sense(&spec->pins[pin_idx], true); 769 + hdmi_present_sense(&spec->pins[pin_idx], 1); 771 770 } 772 771 773 772 static void hdmi_non_intrinsic_event(struct hda_codec *codec, unsigned int res) ··· 961 960 return 0; 962 961 } 963 962 964 - static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, bool retry) 963 + static void hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll) 965 964 { 966 965 struct hda_codec *codec = per_pin->codec; 967 966 struct hdmi_eld *eld = &per_pin->sink_eld; ··· 990 989 if (eld_valid) { 991 990 if (!snd_hdmi_get_eld(eld, codec, pin_nid)) 992 991 snd_hdmi_show_eld(eld); 993 - else if (retry) { 992 + else if (repoll) { 994 993 queue_delayed_work(codec->bus->workq, 995 994 &per_pin->work, 996 995 msecs_to_jiffies(300)); ··· 1005 1004 struct hdmi_spec_per_pin *per_pin = 1006 1005 container_of(to_delayed_work(work), struct hdmi_spec_per_pin, work); 1007 1006 1008 - hdmi_present_sense(per_pin, false); 1007 + if (per_pin->repoll_count++ > 6) 1008 + per_pin->repoll_count = 0; 1009 + 1010 + hdmi_present_sense(per_pin, per_pin->repoll_count); 1009 1011 } 1010 1012 1011 1013 static int hdmi_add_pin(struct hda_codec *codec, hda_nid_t pin_nid) ··· 1239 1235 if (err < 0) 1240 1236 return err; 1241 1237 1242 - hdmi_present_sense(per_pin, false); 1238 + hdmi_present_sense(per_pin, 0); 1243 1239 return 0; 1244 1240 } 1245 1241
+23 -11
sound/pci/hda/patch_realtek.c
··· 277 277 return false; 278 278 } 279 279 280 + static inline hda_nid_t get_capsrc(struct alc_spec *spec, int idx) 281 + { 282 + return spec->capsrc_nids ? 283 + spec->capsrc_nids[idx] : spec->adc_nids[idx]; 284 + } 285 + 280 286 /* select the given imux item; either unmute exclusively or select the route */ 281 287 static int alc_mux_select(struct hda_codec *codec, unsigned int adc_idx, 282 288 unsigned int idx, bool force) ··· 309 303 adc_idx = spec->dyn_adc_idx[idx]; 310 304 } 311 305 312 - nid = spec->capsrc_nids ? 313 - spec->capsrc_nids[adc_idx] : spec->adc_nids[adc_idx]; 306 + nid = get_capsrc(spec, adc_idx); 314 307 315 308 /* no selection? */ 316 309 num_conns = snd_hda_get_conn_list(codec, nid, NULL); ··· 1059 1054 spec->imux_pins[2] = spec->dock_mic_pin; 1060 1055 for (i = 0; i < 3; i++) { 1061 1056 strcpy(imux->items[i].label, texts[i]); 1062 - if (spec->imux_pins[i]) 1057 + if (spec->imux_pins[i]) { 1058 + hda_nid_t pin = spec->imux_pins[i]; 1059 + int c; 1060 + for (c = 0; c < spec->num_adc_nids; c++) { 1061 + hda_nid_t cap = get_capsrc(spec, c); 1062 + int idx = get_connection_index(codec, cap, pin); 1063 + if (idx >= 0) { 1064 + imux->items[i].index = idx; 1065 + break; 1066 + } 1067 + } 1063 1068 imux->num_items = i + 1; 1069 + } 1064 1070 } 1065 1071 spec->num_mux_defs = 1; 1066 1072 spec->input_mux = imux; ··· 1973 1957 if (!kctl) 1974 1958 kctl = snd_hda_find_mixer_ctl(codec, "Input Source"); 1975 1959 for (i = 0; kctl && i < kctl->count; i++) { 1976 - const hda_nid_t *nids = spec->capsrc_nids; 1977 - if (!nids) 1978 - nids = spec->adc_nids; 1979 - err = snd_hda_add_nid(codec, kctl, i, nids[i]); 1960 + err = snd_hda_add_nid(codec, kctl, i, 1961 + get_capsrc(spec, i)); 1980 1962 if (err < 0) 1981 1963 return err; 1982 1964 } ··· 2761 2747 } 2762 2748 2763 2749 for (c = 0; c < num_adcs; c++) { 2764 - hda_nid_t cap = spec->capsrc_nids ? 
2765 - spec->capsrc_nids[c] : spec->adc_nids[c]; 2750 + hda_nid_t cap = get_capsrc(spec, c); 2766 2751 idx = get_connection_index(codec, cap, pin); 2767 2752 if (idx >= 0) { 2768 2753 spec->imux_pins[imux->num_items] = pin; ··· 3707 3694 if (!pin) 3708 3695 return 0; 3709 3696 for (i = 0; i < spec->num_adc_nids; i++) { 3710 - hda_nid_t cap = spec->capsrc_nids ? 3711 - spec->capsrc_nids[i] : spec->adc_nids[i]; 3697 + hda_nid_t cap = get_capsrc(spec, i); 3712 3698 int idx; 3713 3699 3714 3700 idx = get_connection_index(codec, cap, pin);
+2
sound/pci/hda/patch_sigmatel.c
··· 1641 1641 "Alienware M17x", STAC_ALIENWARE_M17X), 1642 1642 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x043a, 1643 1643 "Alienware M17x", STAC_ALIENWARE_M17X), 1644 + SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0490, 1645 + "Alienware M17x", STAC_ALIENWARE_M17X), 1644 1646 {} /* terminator */ 1645 1647 }; 1646 1648
+45 -35
sound/pci/hda/patch_via.c
··· 208 208 /* work to check hp jack state */ 209 209 struct hda_codec *codec; 210 210 struct delayed_work vt1708_hp_work; 211 + int hp_work_active; 211 212 int vt1708_jack_detect; 212 213 int vt1708_hp_present; 213 214 ··· 306 305 static void analog_low_current_mode(struct hda_codec *codec); 307 306 static bool is_aa_path_mute(struct hda_codec *codec); 308 307 309 - static void vt1708_start_hp_work(struct via_spec *spec) 310 - { 311 - if (spec->codec_type != VT1708 || spec->autocfg.hp_pins[0] == 0) 312 - return; 313 - snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 314 - !spec->vt1708_jack_detect); 315 - if (!delayed_work_pending(&spec->vt1708_hp_work)) 316 - schedule_delayed_work(&spec->vt1708_hp_work, 317 - msecs_to_jiffies(100)); 318 - } 308 + #define hp_detect_with_aa(codec) \ 309 + (snd_hda_get_bool_hint(codec, "analog_loopback_hp_detect") == 1 && \ 310 + !is_aa_path_mute(codec)) 319 311 320 312 static void vt1708_stop_hp_work(struct via_spec *spec) 321 313 { 322 314 if (spec->codec_type != VT1708 || spec->autocfg.hp_pins[0] == 0) 323 315 return; 324 - if (snd_hda_get_bool_hint(spec->codec, "analog_loopback_hp_detect") == 1 325 - && !is_aa_path_mute(spec->codec)) 316 + if (spec->hp_work_active) { 317 + snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 1); 318 + cancel_delayed_work_sync(&spec->vt1708_hp_work); 319 + spec->hp_work_active = 0; 320 + } 321 + } 322 + 323 + static void vt1708_update_hp_work(struct via_spec *spec) 324 + { 325 + if (spec->codec_type != VT1708 || spec->autocfg.hp_pins[0] == 0) 326 326 return; 327 - snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 328 - !spec->vt1708_jack_detect); 329 - cancel_delayed_work_sync(&spec->vt1708_hp_work); 327 + if (spec->vt1708_jack_detect && 328 + (spec->active_streams || hp_detect_with_aa(spec->codec))) { 329 + if (!spec->hp_work_active) { 330 + snd_hda_codec_write(spec->codec, 0x1, 0, 0xf81, 0); 331 + schedule_delayed_work(&spec->vt1708_hp_work, 332 + msecs_to_jiffies(100)); 333 + spec->hp_work_active = 1; 
334 + } 335 + } else if (!hp_detect_with_aa(spec->codec)) 336 + vt1708_stop_hp_work(spec); 330 337 } 331 338 332 339 static void set_widgets_power_state(struct hda_codec *codec) ··· 352 343 353 344 set_widgets_power_state(codec); 354 345 analog_low_current_mode(snd_kcontrol_chip(kcontrol)); 355 - if (snd_hda_get_bool_hint(codec, "analog_loopback_hp_detect") == 1) { 356 - if (is_aa_path_mute(codec)) 357 - vt1708_start_hp_work(codec->spec); 358 - else 359 - vt1708_stop_hp_work(codec->spec); 360 - } 346 + vt1708_update_hp_work(codec->spec); 361 347 return change; 362 348 } 363 349 ··· 1158 1154 spec->cur_dac_stream_tag = stream_tag; 1159 1155 spec->cur_dac_format = format; 1160 1156 mutex_unlock(&spec->config_mutex); 1161 - vt1708_start_hp_work(spec); 1157 + vt1708_update_hp_work(spec); 1162 1158 return 0; 1163 1159 } 1164 1160 ··· 1178 1174 spec->cur_hp_stream_tag = stream_tag; 1179 1175 spec->cur_hp_format = format; 1180 1176 mutex_unlock(&spec->config_mutex); 1181 - vt1708_start_hp_work(spec); 1177 + vt1708_update_hp_work(spec); 1182 1178 return 0; 1183 1179 } 1184 1180 ··· 1192 1188 snd_hda_multi_out_analog_cleanup(codec, &spec->multiout); 1193 1189 spec->active_streams &= ~STREAM_MULTI_OUT; 1194 1190 mutex_unlock(&spec->config_mutex); 1195 - vt1708_stop_hp_work(spec); 1191 + vt1708_update_hp_work(spec); 1196 1192 return 0; 1197 1193 } 1198 1194 ··· 1207 1203 snd_hda_codec_setup_stream(codec, spec->hp_dac_nid, 0, 0, 0); 1208 1204 spec->active_streams &= ~STREAM_INDEP_HP; 1209 1205 mutex_unlock(&spec->config_mutex); 1210 - vt1708_stop_hp_work(spec); 1206 + vt1708_update_hp_work(spec); 1211 1207 return 0; 1212 1208 } 1213 1209 ··· 1649 1645 int nums; 1650 1646 struct via_spec *spec = codec->spec; 1651 1647 1652 - if (!spec->hp_independent_mode && spec->autocfg.hp_pins[0]) 1648 + if (!spec->hp_independent_mode && spec->autocfg.hp_pins[0] && 1649 + (spec->codec_type != VT1708 || spec->vt1708_jack_detect)) 1653 1650 present = snd_hda_jack_detect(codec, 
spec->autocfg.hp_pins[0]); 1654 1651 1655 1652 if (spec->smart51_enabled) ··· 2617 2612 2618 2613 if (spec->codec_type != VT1708) 2619 2614 return 0; 2620 - spec->vt1708_jack_detect = 2621 - !((snd_hda_codec_read(codec, 0x1, 0, 0xf84, 0) >> 8) & 0x1); 2622 2615 ucontrol->value.integer.value[0] = spec->vt1708_jack_detect; 2623 2616 return 0; 2624 2617 } ··· 2626 2623 { 2627 2624 struct hda_codec *codec = snd_kcontrol_chip(kcontrol); 2628 2625 struct via_spec *spec = codec->spec; 2629 - int change; 2626 + int val; 2630 2627 2631 2628 if (spec->codec_type != VT1708) 2632 2629 return 0; 2633 - spec->vt1708_jack_detect = ucontrol->value.integer.value[0]; 2634 - change = (0x1 & (snd_hda_codec_read(codec, 0x1, 0, 0xf84, 0) >> 8)) 2635 - == !spec->vt1708_jack_detect; 2636 - if (spec->vt1708_jack_detect) { 2630 + val = !!ucontrol->value.integer.value[0]; 2631 + if (spec->vt1708_jack_detect == val) 2632 + return 0; 2633 + spec->vt1708_jack_detect = val; 2634 + if (spec->vt1708_jack_detect && 2635 + snd_hda_get_bool_hint(codec, "analog_loopback_hp_detect") != 1) { 2637 2636 mute_aa_path(codec, 1); 2638 2637 notify_aa_path_ctls(codec); 2639 2638 } 2640 - return change; 2639 + via_hp_automute(codec); 2640 + vt1708_update_hp_work(spec); 2641 + return 1; 2641 2642 } 2642 2643 2643 2644 static const struct snd_kcontrol_new vt1708_jack_detect_ctl = { ··· 2778 2771 via_auto_init_unsol_event(codec); 2779 2772 2780 2773 via_hp_automute(codec); 2774 + vt1708_update_hp_work(spec); 2781 2775 2782 2776 return 0; 2783 2777 } ··· 2795 2787 spec->vt1708_hp_present ^= 1; 2796 2788 via_hp_automute(spec->codec); 2797 2789 } 2798 - vt1708_start_hp_work(spec); 2790 + if (spec->vt1708_jack_detect) 2791 + schedule_delayed_work(&spec->vt1708_hp_work, 2792 + msecs_to_jiffies(100)); 2799 2793 } 2800 2794 2801 2795 static int get_mux_nids(struct hda_codec *codec)
+16 -7
sound/pci/lx6464es/lx_core.c
··· 78 78 return ioread32(address); 79 79 } 80 80 81 - void lx_dsp_reg_readbuf(struct lx6464es *chip, int port, u32 *data, u32 len) 81 + static void lx_dsp_reg_readbuf(struct lx6464es *chip, int port, u32 *data, 82 + u32 len) 82 83 { 83 - void __iomem *address = lx_dsp_register(chip, port); 84 - memcpy_fromio(data, address, len*sizeof(u32)); 84 + u32 __iomem *address = lx_dsp_register(chip, port); 85 + int i; 86 + 87 + /* we cannot use memcpy_fromio */ 88 + for (i = 0; i != len; ++i) 89 + data[i] = ioread32(address + i); 85 90 } 86 91 87 92 ··· 96 91 iowrite32(data, address); 97 92 } 98 93 99 - void lx_dsp_reg_writebuf(struct lx6464es *chip, int port, const u32 *data, 100 - u32 len) 94 + static void lx_dsp_reg_writebuf(struct lx6464es *chip, int port, 95 + const u32 *data, u32 len) 101 96 { 102 - void __iomem *address = lx_dsp_register(chip, port); 103 - memcpy_toio(address, data, len*sizeof(u32)); 97 + u32 __iomem *address = lx_dsp_register(chip, port); 98 + int i; 99 + 100 + /* we cannot use memcpy_to */ 101 + for (i = 0; i != len; ++i) 102 + iowrite32(data[i], address + i); 104 103 } 105 104 106 105
-3
sound/pci/lx6464es/lx_core.h
··· 72 72 }; 73 73 74 74 unsigned long lx_dsp_reg_read(struct lx6464es *chip, int port); 75 - void lx_dsp_reg_readbuf(struct lx6464es *chip, int port, u32 *data, u32 len); 76 75 void lx_dsp_reg_write(struct lx6464es *chip, int port, unsigned data); 77 - void lx_dsp_reg_writebuf(struct lx6464es *chip, int port, const u32 *data, 78 - u32 len); 79 76 80 77 /* plx register access */ 81 78 enum {
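The lx_core.c hunk above replaces memcpy_fromio()/memcpy_toio() with explicit ioread32()/iowrite32() loops ("we cannot use memcpy_fromio"): the DSP register window must be accessed one aligned 32-bit word at a time. A userspace sketch of the same word-at-a-time copy, with plain volatile pointers standing in for the kernel's __iomem accessors (an assumption for illustration only):

```c
#include <stdint.h>
#include <stddef.h>

/* Copy len 32-bit words from a (simulated) device window, one aligned
 * 32-bit access per word -- never byte-by-byte. In the kernel this
 * would be ioread32(address + i) on a u32 __iomem pointer. */
void reg_readbuf(const volatile uint32_t *address, uint32_t *data, size_t len)
{
    for (size_t i = 0; i != len; ++i)
        data[i] = address[i];
}

/* Same pattern for writes, standing in for iowrite32(). */
void reg_writebuf(volatile uint32_t *address, const uint32_t *data, size_t len)
{
    for (size_t i = 0; i != len; ++i)
        address[i] = data[i];
}
```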
+1 -1
sound/pci/rme9652/hdspm.c
··· 6518 6518 hdspm->io_type = AES32; 6519 6519 hdspm->card_name = "RME AES32"; 6520 6520 hdspm->midiPorts = 2; 6521 - } else if ((hdspm->firmware_rev == 0xd5) || 6521 + } else if ((hdspm->firmware_rev == 0xd2) || 6522 6522 ((hdspm->firmware_rev >= 0xc8) && 6523 6523 (hdspm->firmware_rev <= 0xcf))) { 6524 6524 hdspm->io_type = MADI;
+1 -1
sound/soc/codecs/adau1373.c
··· 245 245 }; 246 246 247 247 static const unsigned int adau1373_bass_tlv[] = { 248 - TLV_DB_RANGE_HEAD(4), 248 + TLV_DB_RANGE_HEAD(3), 249 249 0, 2, TLV_DB_SCALE_ITEM(-600, 600, 1), 250 250 3, 4, TLV_DB_SCALE_ITEM(950, 250, 0), 251 251 5, 7, TLV_DB_SCALE_ITEM(1400, 150, 0),
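The adau1373 hunk above, like the rt5631, sgtl5000, wm8962, wm8993, wm9090 and wm_hubs hunks further down, fixes the same class of bug: the count passed to TLV_DB_RANGE_HEAD() must equal the number of range entries that follow, or user space parses past the end of the table. A simplified sketch of that invariant using the three adau1373_bass_tlv ranges (the struct layout here is a stand-in for illustration, not the kernel's flat TLV encoding):

```c
#include <stddef.h>

/* Stand-in for one dB-range entry: {min_index, max_index} plus the
 * TLV_DB_SCALE_ITEM parameters. Values are the three ranges from the
 * fixed adau1373_bass_tlv table. */
struct db_range_item {
    unsigned int min, max;
    int db_min, db_step, mute;
};

const struct db_range_item adau1373_bass[] = {
    { 0, 2,  -600, 600, 1 },
    { 3, 4,   950, 250, 0 },
    { 5, 7,  1400, 150, 0 },
};

/* What TLV_DB_RANGE_HEAD() must be given: the actual entry count,
 * derived from the table itself so it cannot drift out of sync. */
const size_t adau1373_bass_head =
    sizeof(adau1373_bass) / sizeof(adau1373_bass[0]);
```

Deriving the head count with sizeof, as above, is one way to avoid the hand-counted mismatch these seven patches correct.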
+5 -3
sound/soc/codecs/cs4271.c
··· 434 434 { 435 435 int ret; 436 436 /* Set power-down bit */ 437 - ret = snd_soc_update_bits(codec, CS4271_MODE2, 0, CS4271_MODE2_PDN); 437 + ret = snd_soc_update_bits(codec, CS4271_MODE2, CS4271_MODE2_PDN, 438 + CS4271_MODE2_PDN); 438 439 if (ret < 0) 439 440 return ret; 440 441 return 0; ··· 502 501 return ret; 503 502 } 504 503 505 - ret = snd_soc_update_bits(codec, CS4271_MODE2, 0, 506 - CS4271_MODE2_PDN | CS4271_MODE2_CPEN); 504 + ret = snd_soc_update_bits(codec, CS4271_MODE2, 505 + CS4271_MODE2_PDN | CS4271_MODE2_CPEN, 506 + CS4271_MODE2_PDN | CS4271_MODE2_CPEN); 507 507 if (ret < 0) 508 508 return ret; 509 509 ret = snd_soc_update_bits(codec, CS4271_MODE2, CS4271_MODE2_PDN, 0);
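The cs4271 hunk above fixes swapped snd_soc_update_bits() arguments: the signature is (codec, reg, mask, value), and only bits set in mask are touched, so the old calls with a zero mask were no-ops and CS4271_MODE2_PDN was never actually set. The read-modify-write semantics can be sketched on a plain variable (a simplification of the real register I/O):

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch of snd_soc_update_bits()'s semantics on a plain variable:
 * bits outside mask are preserved, bits inside mask take on value.
 * Returns true if the stored value changed. */
bool update_bits(uint16_t *reg, uint16_t mask, uint16_t value)
{
    uint16_t old = *reg;
    uint16_t updated = (old & ~mask) | (value & mask);

    *reg = updated;
    return updated != old;
}
```

With mask == 0 the expression reduces to (old & ~0) | 0 == old, which is exactly why the pre-fix calls changed nothing.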
+1 -1
sound/soc/codecs/rt5631.c
··· 177 177 static const DECLARE_TLV_DB_SCALE(in_vol_tlv, -3450, 150, 0); 178 178 /* {0, +20, +24, +30, +35, +40, +44, +50, +52}dB */ 179 179 static unsigned int mic_bst_tlv[] = { 180 - TLV_DB_RANGE_HEAD(6), 180 + TLV_DB_RANGE_HEAD(7), 181 181 0, 0, TLV_DB_SCALE_ITEM(0, 0, 0), 182 182 1, 1, TLV_DB_SCALE_ITEM(2000, 0, 0), 183 183 2, 2, TLV_DB_SCALE_ITEM(2400, 0, 0),
+1 -1
sound/soc/codecs/sgtl5000.c
··· 365 365 366 366 /* tlv for mic gain, 0db 20db 30db 40db */ 367 367 static const unsigned int mic_gain_tlv[] = { 368 - TLV_DB_RANGE_HEAD(4), 368 + TLV_DB_RANGE_HEAD(2), 369 369 0, 0, TLV_DB_SCALE_ITEM(0, 0, 0), 370 370 1, 3, TLV_DB_SCALE_ITEM(2000, 1000, 0), 371 371 };
+62 -1
sound/soc/codecs/sta32x.c
··· 76 76 77 77 unsigned int mclk; 78 78 unsigned int format; 79 + 80 + u32 coef_shadow[STA32X_COEF_COUNT]; 79 81 }; 80 82 81 83 static const DECLARE_TLV_DB_SCALE(mvol_tlv, -12700, 50, 1); ··· 229 227 struct snd_ctl_elem_value *ucontrol) 230 228 { 231 229 struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); 230 + struct sta32x_priv *sta32x = snd_soc_codec_get_drvdata(codec); 232 231 int numcoef = kcontrol->private_value >> 16; 233 232 int index = kcontrol->private_value & 0xffff; 234 233 unsigned int cfud; ··· 242 239 snd_soc_write(codec, STA32X_CFUD, cfud); 243 240 244 241 snd_soc_write(codec, STA32X_CFADDR2, index); 242 + for (i = 0; i < numcoef && (index + i < STA32X_COEF_COUNT); i++) 243 + sta32x->coef_shadow[index + i] = 244 + (ucontrol->value.bytes.data[3 * i] << 16) 245 + | (ucontrol->value.bytes.data[3 * i + 1] << 8) 246 + | (ucontrol->value.bytes.data[3 * i + 2]); 245 247 for (i = 0; i < 3 * numcoef; i++) 246 248 snd_soc_write(codec, STA32X_B1CF1 + i, 247 249 ucontrol->value.bytes.data[i]); ··· 258 250 return -EINVAL; 259 251 260 252 return 0; 253 + } 254 + 255 + int sta32x_sync_coef_shadow(struct snd_soc_codec *codec) 256 + { 257 + struct sta32x_priv *sta32x = snd_soc_codec_get_drvdata(codec); 258 + unsigned int cfud; 259 + int i; 260 + 261 + /* preserve reserved bits in STA32X_CFUD */ 262 + cfud = snd_soc_read(codec, STA32X_CFUD) & 0xf0; 263 + 264 + for (i = 0; i < STA32X_COEF_COUNT; i++) { 265 + snd_soc_write(codec, STA32X_CFADDR2, i); 266 + snd_soc_write(codec, STA32X_B1CF1, 267 + (sta32x->coef_shadow[i] >> 16) & 0xff); 268 + snd_soc_write(codec, STA32X_B1CF2, 269 + (sta32x->coef_shadow[i] >> 8) & 0xff); 270 + snd_soc_write(codec, STA32X_B1CF3, 271 + (sta32x->coef_shadow[i]) & 0xff); 272 + /* chip documentation does not say if the bits are 273 + * self-clearing, so do it explicitly */ 274 + snd_soc_write(codec, STA32X_CFUD, cfud); 275 + snd_soc_write(codec, STA32X_CFUD, cfud | 0x01); 276 + } 277 + return 0; 278 + } 279 + 280 + int 
sta32x_cache_sync(struct snd_soc_codec *codec) 281 + { 282 + unsigned int mute; 283 + int rc; 284 + 285 + if (!codec->cache_sync) 286 + return 0; 287 + 288 + /* mute during register sync */ 289 + mute = snd_soc_read(codec, STA32X_MMUTE); 290 + snd_soc_write(codec, STA32X_MMUTE, mute | STA32X_MMUTE_MMUTE); 291 + sta32x_sync_coef_shadow(codec); 292 + rc = snd_soc_cache_sync(codec); 293 + snd_soc_write(codec, STA32X_MMUTE, mute); 294 + return rc; 261 295 } 262 296 263 297 #define SINGLE_COEF(xname, index) \ ··· 711 661 return ret; 712 662 } 713 663 714 - snd_soc_cache_sync(codec); 664 + sta32x_cache_sync(codec); 715 665 } 716 666 717 667 /* Power up to mute */ ··· 839 789 snd_soc_update_bits(codec, STA32X_C3CFG, 840 790 STA32X_CxCFG_OM_MASK, 841 791 2 << STA32X_CxCFG_OM_SHIFT); 792 + 793 + /* initialize coefficient shadow RAM with reset values */ 794 + for (i = 4; i <= 49; i += 5) 795 + sta32x->coef_shadow[i] = 0x400000; 796 + for (i = 50; i <= 54; i++) 797 + sta32x->coef_shadow[i] = 0x7fffff; 798 + sta32x->coef_shadow[55] = 0x5a9df7; 799 + sta32x->coef_shadow[56] = 0x7fffff; 800 + sta32x->coef_shadow[59] = 0x7fffff; 801 + sta32x->coef_shadow[60] = 0x400000; 802 + sta32x->coef_shadow[61] = 0x400000; 842 803 843 804 sta32x_set_bias_level(codec, SND_SOC_BIAS_STANDBY); 844 805 /* Bias level configuration will have done an extra enable */
+1
sound/soc/codecs/sta32x.h
··· 19 19 /* STA326 register addresses */ 20 20 21 21 #define STA32X_REGISTER_COUNT 0x2d 22 + #define STA32X_COEF_COUNT 62 22 23 23 24 #define STA32X_CONFA 0x00 24 25 #define STA32X_CONFB 0x01
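The sta32x hunks above keep a coef_shadow[] mirror of the chip's 62-entry coefficient RAM: every user-space coefficient write is recorded in the shadow array, and sta32x_cache_sync() replays the shadow into the hardware after the codec loses power. A minimal mirror-and-replay sketch, with hypothetical names and a toy 8-entry RAM in place of STA32X_COEF_COUNT:

```c
#include <stdint.h>
#include <stddef.h>

#define COEF_COUNT 8   /* the real STA32X_COEF_COUNT is 62 */

struct codec_state {
    uint32_t shadow[COEF_COUNT];  /* driver-side mirror */
    uint32_t hw[COEF_COUNT];      /* stands in for device RAM */
};

/* Every write goes to the hardware AND the shadow copy. */
void coef_write(struct codec_state *c, size_t idx, uint32_t val)
{
    c->shadow[idx] = val;
    c->hw[idx] = val;
}

/* Power-down wipes the device RAM; simulate by clearing hw. */
void power_cycle(struct codec_state *c)
{
    for (size_t i = 0; i < COEF_COUNT; i++)
        c->hw[i] = 0;
}

/* On resume, replay the shadow into the device. */
void coef_sync(struct codec_state *c)
{
    for (size_t i = 0; i < COEF_COUNT; i++)
        c->hw[i] = c->shadow[i];
}
```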
+1
sound/soc/codecs/wm8731.c
··· 453 453 snd_soc_write(codec, WM8731_PWR, 0xffff); 454 454 regulator_bulk_disable(ARRAY_SIZE(wm8731->supplies), 455 455 wm8731->supplies); 456 + codec->cache_sync = 1; 456 457 break; 457 458 } 458 459 codec->dapm.bias_level = level;
+3
sound/soc/codecs/wm8753.c
··· 190 190 struct wm8753_priv *wm8753 = snd_soc_codec_get_drvdata(codec); 191 191 u16 ioctl; 192 192 193 + if (wm8753->dai_func == ucontrol->value.integer.value[0]) 194 + return 0; 195 + 193 196 if (codec->active) 194 197 return -EBUSY; 195 198
+2 -2
sound/soc/codecs/wm8962.c
··· 1973 1973 static const DECLARE_TLV_DB_SCALE(inpga_tlv, -2325, 75, 0); 1974 1974 static const DECLARE_TLV_DB_SCALE(mixin_tlv, -1500, 300, 0); 1975 1975 static const unsigned int mixinpga_tlv[] = { 1976 - TLV_DB_RANGE_HEAD(7), 1976 + TLV_DB_RANGE_HEAD(5), 1977 1977 0, 1, TLV_DB_SCALE_ITEM(0, 600, 0), 1978 1978 2, 2, TLV_DB_SCALE_ITEM(1300, 1300, 0), 1979 1979 3, 4, TLV_DB_SCALE_ITEM(1800, 200, 0), ··· 1988 1988 static const DECLARE_TLV_DB_SCALE(out_tlv, -12100, 100, 1); 1989 1989 static const DECLARE_TLV_DB_SCALE(hp_tlv, -700, 100, 0); 1990 1990 static const unsigned int classd_tlv[] = { 1991 - TLV_DB_RANGE_HEAD(7), 1991 + TLV_DB_RANGE_HEAD(2), 1992 1992 0, 6, TLV_DB_SCALE_ITEM(0, 150, 0), 1993 1993 7, 7, TLV_DB_SCALE_ITEM(1200, 0, 0), 1994 1994 };
+1 -1
sound/soc/codecs/wm8993.c
··· 512 512 static const DECLARE_TLV_DB_SCALE(drc_comp_amp, -2250, 75, 0); 513 513 static const DECLARE_TLV_DB_SCALE(drc_min_tlv, -1800, 600, 0); 514 514 static const unsigned int drc_max_tlv[] = { 515 - TLV_DB_RANGE_HEAD(4), 515 + TLV_DB_RANGE_HEAD(2), 516 516 0, 2, TLV_DB_SCALE_ITEM(1200, 600, 0), 517 517 3, 3, TLV_DB_SCALE_ITEM(3600, 0, 0), 518 518 };
+5 -5
sound/soc/codecs/wm9081.c
··· 807 807 mdelay(100); 808 808 809 809 /* Normal bias enable & soft start off */ 810 - reg |= WM9081_BIAS_ENA; 811 810 reg &= ~WM9081_VMID_RAMP; 812 811 snd_soc_write(codec, WM9081_VMID_CONTROL, reg); 813 812 ··· 817 818 } 818 819 819 820 /* VMID 2*240k */ 820 - reg = snd_soc_read(codec, WM9081_BIAS_CONTROL_1); 821 + reg = snd_soc_read(codec, WM9081_VMID_CONTROL); 821 822 reg &= ~WM9081_VMID_SEL_MASK; 822 823 reg |= 0x04; 823 824 snd_soc_write(codec, WM9081_VMID_CONTROL, reg); ··· 829 830 break; 830 831 831 832 case SND_SOC_BIAS_OFF: 832 - /* Startup bias source */ 833 + /* Startup bias source and disable bias */ 833 834 reg = snd_soc_read(codec, WM9081_BIAS_CONTROL_1); 834 835 reg |= WM9081_BIAS_SRC; 836 + reg &= ~WM9081_BIAS_ENA; 835 837 snd_soc_write(codec, WM9081_BIAS_CONTROL_1, reg); 836 838 837 - /* Disable VMID and biases with soft ramping */ 839 + /* Disable VMID with soft ramping */ 838 840 reg = snd_soc_read(codec, WM9081_VMID_CONTROL); 839 - reg &= ~(WM9081_VMID_SEL_MASK | WM9081_BIAS_ENA); 841 + reg &= ~WM9081_VMID_SEL_MASK; 840 842 reg |= WM9081_VMID_RAMP; 841 843 snd_soc_write(codec, WM9081_VMID_CONTROL, reg); 842 844
+3 -3
sound/soc/codecs/wm9090.c
··· 177 177 } 178 178 179 179 static const unsigned int in_tlv[] = { 180 - TLV_DB_RANGE_HEAD(6), 180 + TLV_DB_RANGE_HEAD(3), 181 181 0, 0, TLV_DB_SCALE_ITEM(-600, 0, 0), 182 182 1, 3, TLV_DB_SCALE_ITEM(-350, 350, 0), 183 183 4, 6, TLV_DB_SCALE_ITEM(600, 600, 0), 184 184 }; 185 185 static const unsigned int mix_tlv[] = { 186 - TLV_DB_RANGE_HEAD(4), 186 + TLV_DB_RANGE_HEAD(2), 187 187 0, 2, TLV_DB_SCALE_ITEM(-1200, 300, 0), 188 188 3, 3, TLV_DB_SCALE_ITEM(0, 0, 0), 189 189 }; 190 190 static const DECLARE_TLV_DB_SCALE(out_tlv, -5700, 100, 0); 191 191 static const unsigned int spkboost_tlv[] = { 192 - TLV_DB_RANGE_HEAD(7), 192 + TLV_DB_RANGE_HEAD(2), 193 193 0, 6, TLV_DB_SCALE_ITEM(0, 150, 0), 194 194 7, 7, TLV_DB_SCALE_ITEM(1200, 0, 0), 195 195 };
+1 -1
sound/soc/codecs/wm_hubs.c
··· 40 40 static const DECLARE_TLV_DB_SCALE(spkmixout_tlv, -1800, 600, 1); 41 41 static const DECLARE_TLV_DB_SCALE(outpga_tlv, -5700, 100, 0); 42 42 static const unsigned int spkboost_tlv[] = { 43 - TLV_DB_RANGE_HEAD(7), 43 + TLV_DB_RANGE_HEAD(2), 44 44 0, 6, TLV_DB_SCALE_ITEM(0, 150, 0), 45 45 7, 7, TLV_DB_SCALE_ITEM(1200, 0, 0), 46 46 };
+1
sound/soc/fsl/fsl_ssi.c
··· 694 694 695 695 /* Initialize the the device_attribute structure */ 696 696 dev_attr = &ssi_private->dev_attr; 697 + sysfs_attr_init(&dev_attr->attr); 697 698 dev_attr->attr.name = "statistics"; 698 699 dev_attr->attr.mode = S_IRUGO; 699 700 dev_attr->show = fsl_sysfs_ssi_show;
+2 -1
sound/soc/nuc900/nuc900-ac97.c
··· 365 365 if (ret) 366 366 goto out3; 367 367 368 - mfp_set_groupg(nuc900_audio->dev); /* enbale ac97 multifunction pin*/ 368 + /* enbale ac97 multifunction pin */ 369 + mfp_set_groupg(nuc900_audio->dev, "nuc900-audio"); 369 370 370 371 return 0; 371 372
+16
tools/testing/ktest/ktest.pl
··· 747 747 # Add space to evaluate the character before $ 748 748 $option = " $option"; 749 749 my $retval = ""; 750 + my $repeated = 0; 751 + my $parent = 0; 752 + 753 + foreach my $test (keys %repeat_tests) { 754 + if ($i >= $test && 755 + $i < $test + $repeat_tests{$test}) { 756 + 757 + $repeated = 1; 758 + $parent = $test; 759 + last; 760 + } 761 + } 750 762 751 763 while ($option =~ /(.*?[^\\])\$\{(.*?)\}(.*)/) { 752 764 my $start = $1; ··· 772 760 # otherwise see if the default OPT (without [$i]) exists. 773 761 774 762 my $o = "$var\[$i\]"; 763 + my $parento = "$var\[$parent\]"; 775 764 776 765 if (defined($opt{$o})) { 777 766 $o = $opt{$o}; 767 + $retval = "$retval$o"; 768 + } elsif ($repeated && defined($opt{$parento})) { 769 + $o = $opt{$parento}; 778 770 $retval = "$retval$o"; 779 771 } elsif (defined($opt{$var})) { 780 772 $o = $opt{$var};